

Node.js v6 Transitions to LTS

The Node.js project has three major updates this month:

  • Node.js v7 will become a current release line.
  • Node.js v6, code named “Boron,” transitions to LTS.
  • Node.js v0.10 will reach “End of Life” at the end of the month. There will be no further releases of this line, including security or stability patches.

Node.js v6 transitioned to the LTS line today, so let’s talk about what this means, where other versions stand, and what to expect from Node.js v7.

Node.js Project’s LTS Strategy

In a nutshell, the Long Term Support (LTS) strategy is focused on providing stability and security for organizations with complex environments that find it cumbersome to continually upgrade Node.js. These release lines are even-numbered and are supported for 30 months; more information on the LTS strategy can be found here.

 
[Image: Node.js release line schedule]

*This image is under copyright of NodeSource.

Another good source for the history and strategy of the Node.js release lines is Rod Vagg’s blog post, “Farewell to Node.js v5, Preparing for Node.js v7.” Rod is the Node.js Project’s Technical Steering Committee director and a Node.js Foundation board member.

Node.js follows semantic versioning (semver). Essentially, semver is how we signal how changes will affect the software and whether upgrading will “break” it, helping developers decide whether and when they should download a new version. A simple set of rules and requirements dictates how version numbers are assigned and incremented, and whether a release falls into one of the following categories:

  • Patch Release: A bug fix or a small performance improvement. It doesn’t add new features or change the way the software works. Patches are an easy upgrade.
  • Minor Release: Any change that introduces new features but does not change the way the software works. Given that a new feature is being released, it is generally best to wait to upgrade to a minor release until it has been tested and patched.
  • Major Release: A big, breaking change that alters how the software works and functions. With Node.js, it can range from something as simple as changing an error message to upgrading V8.
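As a rough sketch of these rules, here is a tiny, hypothetical helper (not part of Node.js or npm) that classifies an upgrade between two plain “major.minor.patch” version strings, ignoring pre-release tags:

```javascript
// A minimal sketch of semver classification, assuming plain
// "major.minor.patch" strings with no pre-release tags.
// releaseType() is a hypothetical helper for illustration only.
function releaseType(from, to) {
  const [fromMajor, fromMinor] = from.split('.').map(Number);
  const [toMajor, toMinor] = to.split('.').map(Number);
  if (toMajor !== fromMajor) return 'major'; // breaking change
  if (toMinor !== fromMinor) return 'minor'; // new, backwards-compatible features
  return 'patch';                            // bug fixes only
}

console.log(releaseType('6.0.0', '7.0.0')); // 'major'
console.log(releaseType('6.1.0', '6.2.0')); // 'minor'
console.log(releaseType('6.1.0', '6.1.1')); // 'patch'
```

In practice you would use the npm `semver` package rather than hand-rolling this, but the classification logic is the same idea.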

If you want more information on how releases work, watch Myles Borins’ presentation at JSConf Uruguay: https://www.youtube.com/watch?v=5un1I2qkojg. Myles is a member of the Node.js Project and Node.js Core Technical Committee.

Node.js v6 Moves from “Current” to “LTS”

Node.js v6 will be the LTS release line until April 2018, meaning new features (semver-minor) may only land with the consent of the Node.js project’s Core Technical Committee and the LTS Working Group. These features will land on an infrequent basis.

Changes in an LTS-covered major version are limited to:

  1. Bug fixes;
  2. Security updates;
  3. Non-semver-major npm updates;
  4. Relevant documentation updates;
  5. Certain performance improvements where the risk of breaking existing applications is minimal;
  6. Changes that introduce a large amount of code churn where the risk of breaking existing applications is low and where the change in question may significantly ease the ability to backport future changes due to the reduction in diff noise.

After April 2018, Node.js v6 will transition into “maintenance” mode for 12 additional months. Maintenance mode means that only critical bugs, critical security fixes, and documentation updates will be permitted.

Node.js v6 is important to enterprises and users that need stability. If you have a large production environment and need to keep Node.js humming, then you want to be on an LTS release line. If you fall within this category, we suggest that you update to Node.js v6, especially if you are on v0.10 or v0.12. More information on this, as well as what to do if you are on Node.js v4, is below.

Features, Focus and More Features

Node.js v6 became a current release line in April 2016. Its main focus is on performance improvements, increased reliability and better security. A few notable features and updates include:

Security Enhancements

  • A new Buffer creation API for increased safety and security.
  • Experimental support for “v8_inspector,” a new debugging protocol. If you have an environment that cannot handle updates or testing, do not try this feature, as it is not fully supported and could have bugs.
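As a minimal sketch of the safer Buffer creation API: `Buffer.alloc()` returns zero-filled memory and `Buffer.from()` copies existing data, avoiding the uninitialized memory that the old `new Buffer(number)` constructor could expose.

```javascript
// Safer Buffer creation, available since Node.js v6.
const zeroed = Buffer.alloc(8);                // 8 zero-filled bytes
const greeting = Buffer.from('hello', 'utf8'); // a copy of the string's bytes

console.log(zeroed.every((byte) => byte === 0)); // true
console.log(greeting.toString());                // 'hello'
```

New code should prefer these methods; the old numeric `Buffer` constructor was later deprecated for exactly this safety reason.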

Increased Reliability

  • A warning is now printed to standard error when a native Promise rejection occurs but there is no handler to receive it. This is particularly important for distributed teams building applications. Before this capability, they would have to chase down the problem, which was like finding a needle in a haystack. Now, they can easily pinpoint where the problem is and solve it.
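To illustrate: by default Node.js prints the warning for you, and attaching a process-level `'unhandledRejection'` listener (a standard Node.js process event) lets a team capture the offending rejection directly. A minimal sketch:

```javascript
// When a Promise is rejected with no .catch() handler attached, Node.js v6
// warns on stderr. An 'unhandledRejection' listener lets you capture the
// rejection reason yourself and pinpoint exactly which promise failed.
let lastReason = null;

process.on('unhandledRejection', (reason) => {
  lastReason = reason;
  console.error('Unhandled rejection:', reason.message);
});

// A rejection with no handler attached -- the listener above will fire.
Promise.reject(new Error('database connection failed'));
```

Note that once you install a listener, it replaces the default warning, so you should log or report the reason yourself.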

Performance Improvements

Node.js v6 Equipped with npm v3

  • npm v3 resolves dependencies differently than npm v2: it tries to mitigate the deep trees and redundancy that nesting causes in npm v2 (more on this can be found in npm’s blog post on the subject). The flattened dependency tree will be particularly important to Windows users, who face file path length limitations.
  • In addition, npm’s shrinkwrap functionality has changed. The updates provide a more consistent way to stay in sync with package.json when you use the save flag or adjust dependencies. Users who deploy projects using shrinkwrap consistently (most enterprises do) should watch for changes in behaviour.

Updating Node.js v4 to Node.js v6

If you are on Node.js v4, you have 18 months to transition to Node.js v6. We suggest starting now; Node.js v4 will stop being maintained in April 2018.

At the current rate of downloads, Node.js v6 will overtake the v4 LTS line in downloads by the end of the year. This is a good thing, as v6 will be in LTS and then maintenance mode for the next 30 months.

 
[Chart: Node.js downloads by version]

*Data pulled from Node.js metrics section: https://nodejs.org/metrics/

Time To Transition Off v0.12 & v0.10

On v0.12, v0.10, or v5? Please upgrade! We understand you may have time constraints, but Node.js v0.10 will not be maintained after this month (October). This means no further official releases, including fixes for critical security bugs. End of life for Node.js v0.12 will be December 2016.

You might be wondering what our main reasons are for doing this. After December 31, we won’t be able to get OpenSSL updates for those versions, which means we won’t be able to provide any security updates.

Additionally, the Node.js Core team has been single-handedly maintaining the version of V8 included in Node.js v0.10 since the Chromium team retired it four years ago. This represents a risk for users, as the team will no longer maintain it.

If you have a robust test environment set up, then we suggest upgrading to Node.js v6. If you don’t feel comfortable making that big of a version leap, then Node.js v4 is also a good upgrade; however, it won’t be supported as long as Node.js v6.

Node.js v4 and Node.js v6 are more stable than Node.js v0.10 and v0.12 and have more modern versions of V8, OpenSSL, and other critical dependencies. Bottom line: it’s time to update.

What’s holding you back from upgrading? Let us know in the comments section below. If you have questions along the way, please ask them in this forum: https://github.com/nodejs/help

Okay, So What’s the Deal with Node.js v7?

Node.js v7 was released into beta at the end of September and is due to be released the week of October 25. Node.js v7 is a checkpoint release for the Node.js project and will focus on stability, incremental improvement over Node.js v6, and updating to the latest versions of V8, libuv, and ICU.

Node.js v7 will ship with JavaScript engine V8 5.4, which focuses on memory-related performance improvements. It includes new JavaScript language features such as the exponentiation operator, new Object property iterators, and experimental support for async functions. Note that async function support will remain experimental until V8 5.5 ships. These features are still in experimental mode, so you can play around with them, but they likely contain bugs and should not be used in production.
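For a taste of two of those language features, here is a short sketch (assuming V8 5.4 semantics; async functions are omitted because in v7 they still sit behind the `--harmony-async-await` flag and are not production-ready):

```javascript
// The exponentiation operator (ES2016), shipping with V8 5.4.
const kibibyte = 2 ** 10;
console.log(kibibyte); // 1024

// New Object property iterators: Object.values() and Object.entries().
const versions = { lts: '6.9.0', current: '7.0.0' };
console.log(Object.keys(versions));    // ['lts', 'current']
console.log(Object.values(versions));  // ['6.9.0', '7.0.0']
console.log(Object.entries(versions)); // [['lts', '6.9.0'], ['current', '7.0.0']]
```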

Given that it is an odd-numbered release, it will only be available for eight months, with its end of life slated for June 2017. It has some really awesome features, but it might not be right for you to download. If you can easily upgrade your deployment and can tolerate a bit of instability, then this is a good upgrade for you.

Want more technical information about breaking changes in Node.js v7? See the full list here: https://github.com/nodejs/node/pull/9099

Beyond v7, we’ll be focusing our efforts on language compatibility, adopting modern web standards, internal growth toward VM neutrality and API development, and support for growing Node.js use cases. To learn more, check out James Snell’s recent keynote from Node.js Interactive Amsterdam, “Node.js Core State of the Union,” on where Node.js core has been over the past year and where we’re going. James is a member of the Node.js Technical Steering Committee. Additional technical details around Node.js v6 and additional release lines can be found here.

This article originally appeared on the Node.js Foundation blog.

Watch Videos from LinuxCon + ContainerCon Europe

Thank you for your interest in the recorded sessions from LinuxCon + ContainerCon Europe 2016! View more than 25 sessions from the event below.

Keynotes

 

Developer

 

Wildcard

 

Understanding and Securing Linux Namespaces

Richard Guy Briggs, a kernel security engineer and Senior Software Engineer at Red Hat, talked about the current state of Kernel Audit and Linux Namespaces at the Linux Security Summit. He also shared problems plaguing containers and what might be done to address them soon.

His insights are born of deep experience. Briggs was an early adopter of Linux back in 1992 and has written UNIX and Linux device drivers for telecom, video, and network applications and embedded devices.

Audit, which he describes as “Syslog on steroids,” exists as a means to securely document exactly what occurred, where, when, and by whom, in case the need arises to pinpoint a problem in a court of law. It works well with SELinux and other security modules in the kernel, and because it only reports behavior, it doesn’t interfere with anything running on the system. You can, however, configure it to panic the kernel and shut down the machine if a situation arises that Audit is unable to document or report. Otherwise, configurable kernel filters surface select activities of interest, or more detail on something questionable, while ignoring other behavior and without interfering with other activities.

To understand the current state of Kernel Audit, it’s important to understand its relationship with namespaces, which are kernel-enforced user space views. Currently, there are seven:

  • Four peer namespaces: Mount; UTS (hostname and domain information); IPC (which, he joked, no one really knows what it does); and NET. The system comes up in the initial NET namespace, and all of the physical devices appear in that namespace.
  • Three hierarchical namespaces, PID, User, and Cgroups, in which permissions are inherited from one level to the next.

The User namespaces are the most contentious, because a number of traps inherent in their use make them vexing to secure. “As a result, there are a number of distributions that have not enabled user namespaces by default yet, because there’s still some work to be done to iron out where these are going,” Briggs said.

There Can Be Only One

As to containers, there is no hard definition of what they are, and the kernel certainly has no concept of containers. As far as the kernel is concerned, a container is a concept that exists entirely in user space at the moment.

There is interest in the community to move beyond the general consensus in defining containers as a combination of kernel namespaces, secure computing, seccomp, and cgroups, to a clearer definition of what a container is allowed to do in order to create a better auditing trail.

The biggest problem with containers is that a set of namespaces alone doesn’t work for auditing. “At this point, there can only be one audit daemon, and it has to live in the initial user and PID namespace, and that’s locked down by kernel rules that basically say it detects what namespaces you’re in,” he explained.

There have been security weaknesses in the Mount namespace since its inception, but they were not overly concerning because they were not actually abusable. That changed once other namespaces, and more ways to use them, were added.

Work is ongoing in addressing the security issues in User namespaces and newly exposed issues in Mount.

“In terms of audit, the audit daemon, it seems to make the most sense to tie the audit daemon in to the user namespace,” Briggs said.

“In terms of the network namespaces, the initial network namespace was the one that was originally listening and there were a number of proposals on how to try and deal with it. In the end, the least complex solution won out for the moment for the short term.”

Watch the complete presentation below:


Sign up to access more than 40 recorded sessions from LinuxCon + ContainerCon, including keynotes from Joe Beda, Jim Whitehurst, Cory Doctorow, and more.

Solving Enterprise Monitoring Issues with Prometheus

Chicago-based ShuttleCloud helps developers import user contacts and email data into their applications through standard API requests. As the venture-backed startup began to acquire more customers, they needed a way to scale system monitoring to meet the terms of their service-level agreements (SLAs). They turned to Prometheus, the open source systems monitoring and alerting toolkit originally built at SoundCloud, which is now a project at the Cloud Native Computing Foundation.

In advance of Prometheus Day, to be held Nov. 8-9 in Seattle, we talked to Ignacio Carretero, a ShuttleCloud software engineer, about why they chose Prometheus as their monitoring tool and what advice they would give to other small businesses seeking a similar solution.

Ignacio P. Carretero, ShuttleCloud

Linux.com: Why did your enterprise start using a monitoring solution like Prometheus?

Ignacio Carretero: It started when our number of projects increased and new clients’ SLAs became more demanding. We had some systems in place to monitor operational metrics (the status of our instances), business metrics (how we were performing), and whether our external front ends were up and running. However, we did not have a centralized monitoring system or a standard alerting system, and some business metrics had to be reviewed manually. All of this has been solved with a monitoring solution like Prometheus.

Some of the reasons we chose Prometheus over other monitoring systems are its flexibility with the metric system (it doesn’t have to be fixed from the beginning), its independence from external services (such as message buses or databases), and the simplicity of its installation and execution, as it ships as a single Go binary.

Linux.com: What are the most important things for small businesses to know when bringing on an in-house monitoring stack?

Ignacio: The most important thing we would mention is that a Prometheus-based in-house monitoring solution does not have to be expensive. It is possible to start monitoring a complete infrastructure with only one instance and not a lot of development or setup time. Apart from that, it is good to know that monitoring is not a goal but a journey, and we must confess that this has been a pleasant one. Throughout this journey you’ll fine-tune alerts and progress through the stages of getting your infrastructure monitored. In the specific case of Prometheus, we are also very satisfied with the available exporters. Most can be integrated without investing a lot of time, which is always important for small businesses.

Linux.com: What is the journey like to equip your infrastructure with monitoring technology? Is the process different for small businesses?

Ignacio: The main difference is that we do not have a specialized team that can take care of that process, so the whole team has to be involved. Every developer on our engineering team helps decide what is monitored by Prometheus and what remains monitored by the legacy systems. We all handle the alerts that are triggered, so we can all participate in tuning thresholds, adding missing alerts, and removing unnecessary ones.

Linux.com: What lessons did you learn while deploying a monitoring technology?

Ignacio: At the beginning, we were very constrained in the time we could dedicate to the implementation, so we decided to start small. Therefore, we recycled some of the systems we already had in place. To do so, we made some decisions that went against Prometheus’s design patterns. It might not be the ideal design, but at least we had a starting point. From there, we iterated and improved our system as we came to understand what we were doing wrong and what could be improved. If we had waited until we had designed a perfect system, more than likely we would still have our old service in place.

Linux.com: What are the major benefits your environment has seen from using Prometheus?

Ignacio: The most important thing for us, the developers, is that we now totally trust our monitoring system. Before, we were constantly checking whether everything was all right. We now know that if any threshold is reached, someone will be paged or an email will be sent, depending on the urgency of the issue.

Another major benefit is that the system is fairly easy to maintain. We still do improvements and fine tuning but the overall maintenance overhead has been kept to a minimum, even if we continue growing.

Finally, we would like to point out that PromQL (the Prometheus query language) is really useful and logical. The learning curve is definitely worth the effort. PromQL is also used for chart creation in Grafana, which is very easy to integrate with Prometheus.

For Linux.com readers only: get 20% off your CloudNativeCon + KubeCon & PrometheusDay passes with code CNKC16LNXCM. Register now.

WTF Is a Container?

You can’t go to a developer conference today and not hear about software containers: Docker, Kubernetes, Mesos and a bunch of other names with a nautical ring to them. Microsoft, Google, Amazon and everybody else seems to have jumped on this bandwagon in the last year or so, but why is everybody so excited about this stuff?

To understand why containers are such a big deal, let’s think about physical containers for a moment. The modern shipping industry only works as well as it does because we have standardized on a small set of shipping container sizes. Before the advent of this standard, shipping anything in bulk was a complicated, laborious process. Imagine what a hassle it would be to move some open pallet with smartphones off a ship and onto a truck, for example. Instead of ships that specialize in bringing smartphones from Asia, we can just put them all into containers and know that those will fit on every container ship.

Read more at TechCrunch

Industrial Internet of Things Set to Rocket Towards 100bn Devices

The explosive growth of the Internet of Things (IoT) is most often discussed in terms of consumer devices and products. But if you take a second to consider the scale of the industrial products sector and its potential for device connectivity throughout the supply chain, then you can start to see why exactly it is set to dwarf the size of consumer IoT by several magnitudes.

While a few billion consumer devices (think wearables, home automation devices and cars), will become IoT connected during the next five years, the equivalent global growth curve for the industrial IoT (IIoT) is set to rocket towards 100 billion devices as the technology becomes pervasive in industrial sectors worldwide.

Read more at The Australian

OpenStack & Private Cloud, at Scale, Are Cheaper Than Public Cloud

Beyond a certain scale, commercial private clouds and OpenStack distributions are cheaper than public clouds, according to the latest Cloud Price Index from 451 Research. However, private cloud still might be more difficult to plan for.

Commercial private cloud offerings from vendors such as VMware and Microsoft offer a lower total cost of ownership (TCO) when labor efficiency is lower than 400 virtual machines managed per engineer, according to the report, which was published today.

Read more at SDx Central.

InfraKit Hello World

Docker just shipped InfraKit a few days ago at LinuxCon and, while at the Docker Distributed Systems Summit, I wanted to see if I could get a hello world example up and running. The documentation is lacking at the moment, especially around how to tie the different components, like instances and flavors, together.

The following example isn’t going to do anything particularly useful, but it’s hopefully simple enough to help anyone else trying to get started. I’m assuming you’ve checked out and built the binaries as described in the README.

Read more at More Than Seven

Software-Defined Networking Puts Network Managers in the Driver’s Seat

SDNs can help organizations keep up with evolving network demands in an app-centric IT environment and give network managers much more flexibility.

The strategy behind the network architecture inside many of today’s data centers was developed during the x86 era of server-desktop computing. Much has changed since then, and organizations must now give serious thought to a fresh approach to networking — software defined networking (SDN) — that more accurately reflects today’s computing reality.

Read more at BizTech