
What Is the MEAN Stack? JavaScript Web Applications

Most anyone who has developed web applications knows the acronym LAMP, which is used to describe web stacks made with Linux, Apache (web server), MySQL (database server), and PHP, Perl, or Python (programming language).

Another web-stack acronym has come to prominence in the last few years: MEAN—signifying a stack that uses MongoDB (database server), Express (server-side JavaScript framework), Angular (client-side JavaScript framework), and Node.js (JavaScript runtime).

MEAN is one manifestation of the rise of JavaScript as a “full-stack development” language. Node.js provides a JavaScript runtime on the server; Angular and Express are JavaScript frameworks used to build web clients and Node.js applications, respectively; and MongoDB’s data structures are stored in a binary JSON (JavaScript Object Notation) format, while its queries are expressed in JSON.

In short, the MEAN stack is JavaScript from top to bottom, or back to front. A big part of MEAN’s appeal is this consistency. Life is simpler for developers because every component of the application—from the objects in the database to the client-side code—is written in the same language. 
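As a small illustration of that "JSON from top to bottom" idea, consider how a MongoDB-style query filter is itself just a JSON object. The sketch below is illustrative only (MEAN apps are traditionally plain JavaScript, and a real application would use the official MongoDB driver rather than this toy matcher):

```typescript
// A minimal sketch of the idea that MongoDB queries are themselves JSON:
// the same notation describes both stored objects and query filters.
type Doc = Record<string, string>;

// Every key/value pair in the query must match the document.
function matches(doc: Doc, query: Doc): boolean {
  return Object.entries(query).every(([key, value]) => doc[key] === value);
}

const users: Doc[] = [
  { name: "Ada", role: "admin" },
  { name: "Linus", role: "user" },
];

// The filter is a plain JSON object, just like a MongoDB find() filter.
const admins = users.filter((u) => matches(u, { role: "admin" }));
console.log(admins.map((u) => u.name)); // [ 'Ada' ]
```

The same shape of object could travel from an Angular client, through an Express route, to a MongoDB query without translation between languages.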

Read more at InfoWorld

The Great “DevOps Engineer” Title Debate

At the DevOps Enterprise Summit in Las Vegas last month, DevOps author and researcher Gene Kim unveiled his latest definition of DevOps:

The architecture, technical practices, and cultural norms that enable us to increase our ability to deliver applications and services quickly and safely, which enables rapid experimentation and innovation, and the fastest delivery of value to our customers, while ensuring world-class security, reliability, and stability so that we can win in the marketplace.

By this definition, it’s somewhat difficult to surmise what role a DevOps engineer would fill.

According to the latest LinkedIn report chronicling the most “in demand” jobs of 2018, DevOps engineer was, in fact, the most heavily recruited job specific to the engineering field, followed by front-end engineers and cloud architects.

But, what is a DevOps engineer, exactly? 

Read more at The Enterprisers Project

Ruby in Containers

There was a time when deploying software was an event, even a ceremony, because of how hard it was to keep environments consistent. Teams spent a lot of time making the destination environment run the software exactly as the source environment did, and thereafter prayed that it would run as reliably in production as it had in development.

With containers, deployments become more frequent because we package an application together with its libraries as a single portable unit, which helps us maintain consistency and reliability when moving software between environments. For developers, this means improved productivity, portability, and ease of scaling.

Because of this portability, containers have become the universal language of the cloud allowing us to move software from one cloud to another without much trouble.

In this article, I will discuss two major concepts to note while working with containers in Ruby. I will discuss how to create small container images and how to test them.
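On the first of those concepts, a common way to keep Ruby images small is a multi-stage build. The Dockerfile below is a hedged sketch of that pattern, not taken from the article; the base image tag and the app entry point (`app.rb`) are placeholders:

```dockerfile
# Build stage: install gems with the full toolchain available.
FROM ruby:3.2-alpine AS build
WORKDIR /app
COPY Gemfile Gemfile.lock ./
# Native extensions need a compiler only at build time.
RUN apk add --no-cache build-base && \
    bundle config set --local without 'development test' && \
    bundle install

# Runtime stage: copy only the installed gems and the app itself,
# leaving the build toolchain behind.
FROM ruby:3.2-alpine
WORKDIR /app
COPY --from=build /usr/local/bundle /usr/local/bundle
COPY . .
CMD ["ruby", "app.rb"]
```

Because the compiler and build dependencies never reach the final stage, the runtime image stays close to the size of the base Ruby image plus the installed gems.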

Read more at The New Stack

AI in the Real World

We are living in the future – it is just unevenly distributed with “an outstanding amount of hype and this anthropomorphization of what [AI] technology can actually provide for us,” observed Hilary Mason, general manager for machine learning at Cloudera, who led a keynote on “AI in the Real World: Today and Tomorrow,” at the recent Open FinTech Forum.

AI has existed as an academic field of research since the mid-1950s, and if the forum had been held 10 years ago, we would have been talking about big data, she said. But today, we have machine learning and feedback loops that allow systems to keep improving as more data is introduced.

Machine learning provides a set of techniques that fall under the broad umbrella of data science. AI has returned, from a terminology perspective, Mason said, because of the rise of deep learning, a subset of machine learning techniques based around neural networks that has provided not just more efficient capabilities but the ability to do things we couldn’t do at all five years ago.

Imagine the future

All of this “creates a technical foundation on which we can start to imagine the future,’’ she said. 

Watch the complete video at The Linux Foundation

New IoT Security Regulations

It falls upon lawmakers to create laws that protect consumers. While the US government is largely absent in this area of consumer protection, the state of California has recently stepped in and started regulating Internet of Things, or "IoT," devices sold in the state — and the effects will soon be felt worldwide.

California’s new SB 327 law, which will take effect in January 2020, requires all “connected devices” to have a “reasonable security feature.” The good news is that the term “connected devices” is broadly defined to include just about everything connected to the Internet. The not-so-good news is that “reasonable security” remains defined such that companies trying to avoid compliance can argue that the law is unenforceable.

The legislation requires that security features must be able to protect the device and the information on it from a variety of threats and be appropriate to both the nature of the device and the information it collects. 

Read more at Schneier on Security

CNCF Survey: Cloud Usage in Asia Has Grown 135% Since March 2018

The bi-annual CNCF survey takes a pulse of the community to better understand the adoption of cloud native technologies. This is the second time CNCF has conducted its cloud native survey in Mandarin to better gauge how Asian companies are adopting open source and cloud native technologies. The previous Mandarin survey was conducted in March 2018. This post also makes comparisons to the most recent North American / European version of this survey from August 2018.

Key Takeaways

  • Usage of public and private clouds in Asia has grown 135% since March 2018, while on-premise has dropped 48%.
  • Usage of nearly all container management tools in Asia has grown, with commercial off-the-shelf solutions up 58% overall, and home-grown solutions up 690%. Kubernetes has grown 11%.
  • The number of Kubernetes clusters in production is increasing. Organizations in Asia running 1-5 production clusters decreased 37%, while respondents running 11-50 clusters increased 154%.
  • Use of serverless technology in Asia has spiked 100% with 29% of respondents using installable software and 21% using a hosted platform.

Growth of Containers

Container usage is becoming prevalent in all phases of the development cycle. There has been a significant jump in the use of containers for testing, up to 42% from 24% in March 2018, with an additional 27% of respondents citing future plans. There has also been an increase in the use of containers for proof of concept (14%, up from 8%).

Read more at CNCF

An Introduction to Udev: The Linux Subsystem for Managing Device Events

Udev is the Linux subsystem that supplies your computer with device events. In plain English, that means it's the code that detects when you have things plugged into your computer, like a network card, external hard drives (including USB thumb drives), mice, keyboards, joysticks and gamepads, DVD-ROM drives, and so on. That makes it a potentially useful utility, and it's well-enough exposed that a standard user can manually script it to do things like performing certain tasks when a certain hard drive is plugged in.

This article teaches you how to create a udev script triggered by some udev event, such as plugging in a specific thumb drive. Once you understand the process for working with udev, you can use it to do all manner of things, like loading a specific driver when a gamepad is attached, or performing an automatic backup when you attach your backup drive.
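For a sense of what such a trigger looks like, a udev rule file takes roughly the following shape. This is a hypothetical example: the vendor/product IDs and the script path are placeholders, and real values for your device would come from `udevadm info`:

```
# /etc/udev/rules.d/80-local-backup.rules  (hypothetical example)
# Match a specific USB block device by its vendor/product attributes
# and run a script when it is plugged in.
SUBSYSTEM=="block", ACTION=="add", ATTRS{idVendor}=="03f0", ATTRS{idProduct}=="4023", RUN+="/usr/local/bin/backup.sh"
```

The `==` operators are match conditions, while `RUN+=` adds an action to perform when every condition matches.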

A basic script

The best way to work with udev is in small chunks. Don’t write the entire script upfront…

Read more at OpenSource.com

Machine Learning for Operations

Managing infrastructure is a complex problem with a massive amount of signals and many actions that can be taken in response; that’s the classic definition of a situation where machine learning can help. 

IT and operations is a natural home for machine learning and data science. According to Vivek Bhalla, until recently a Gartner research director covering AIOps and now director of product management at Moogsoft, if there isn't a data science team in your organization, the IT team will often become the "center of excellence."

By 2022, Gartner predicts, 40 percent of all large enterprises will use machine learning to support or even partly replace monitoring, service desk, and automation processes. Today, only a small share of organizations have started down that path.

In a recent Gartner survey, the most popular uses of AI in IT and operations are analyzing big data (18 percent) and chatbots for IT service management — 15 percent are already using chatbots, and a further 30 percent plan to do so by the end of 2019.

Read more at The New Stack

Uber Joins the Linux Foundation as a Gold Member

“Uber has been influential in the open source community for years, and we’re very excited to welcome them as a Gold member at the Linux Foundation,” said Jim Zemlin, Executive Director of the Linux Foundation. “Uber truly understands the power of open source and community collaboration, and I am honored to witness that first hand as a part of Uber Open Summit 2018.”

Through this membership, Uber will support the Linux Foundation’s mission to build ecosystems that accelerate open source technology development. Uber will continue collaborating with the community, working with other leaders in the space to solve complex technical problems and further promote open source adoption globally.

Read more at Uber

Virtualizing the Clock

Dmitry Safonov wanted to implement a namespace for time information. The twisted and bizarre thing about virtual machines is that they get more virtual all the time. There’s always some new element of the host system that can be given its own namespace and enter the realm of the virtual machine. But as that process rolls forward, virtual systems have to share aspects of themselves with other virtual systems and the host system itself—for example, the date and time.

Dmitry’s idea is that users should be able to set the day and time on their virtual systems, without worrying about other systems being given the same day and time. This is actually useful, beyond the desire to live in the past or future. Being able to set the time in a container is apparently one of the crucial elements of being able to migrate containers from one physical host to another, as Dmitry pointed out in his post.

Read more at Linux Journal