
Streamlining Kubernetes Development with Draft

Application containers have skyrocketed in popularity over the last few years. In recent months, Kubernetes has emerged as a popular solution for orchestrating these containers. While many turn to Kubernetes for its extensible architecture and vibrant open-source community, some still view Kubernetes as too difficult to use.

Today, my team is proud to announce Draft, a tool that streamlines application development and deployment into any Kubernetes cluster. Using two simple commands, developers can now begin hacking on container-based applications without requiring Docker or even installing Kubernetes themselves.

Draft in action

Draft targets the “inner loop” of a developer’s workflow – while developers write code and iterate, but before they commit changes to version control. Let’s see it in action.
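For reference, those two commands look roughly like this (a sketch based on the announcement; run them from your application's source directory):

$ draft create    # detect the app's language and generate a Dockerfile and Helm chart
$ draft up        # build the container image and deploy the app into the Kubernetes cluster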

Read more at Microsoft blog

7 Cool KDE Tweaks That Will Change Your Life

The great thing about KDE’s Plasma desktop is that it’s universally familiar enough for anybody to use, but it’s also got all the knobs and switches needed to become a power user. There’s no way to cover all the great options available in the customizable desktop environment here, but these seven tweaks can change your Plasma experience for the better.

These are based on KDE 5. Most of them also apply to KDE 4, although in some cases extra packages are needed, or the configuration options are in slightly different locations.


Read more at OpenSource.com

The SCION Internet Architecture

The Internet has been successful beyond even the most optimistic expectations. It permeates almost every aspect of our society and economy worldwide. This success has created universal dependence on communication, as many of the processes underpinning modern society would grind to a halt if it were unavailable. However, the state of the safety and availability of the Internet is far from commensurate with its importance.

This article describes SCION, or Scalability, Control, and Isolation On Next-generation networks, an inter-domain network architecture designed to address these issues, covering SCION’s goals, design, and functionality, as well as the results of six years of research we have conducted since our initial publication.

Read more at Communications of the ACM

CoreOS Brings Kubernetes-as-a-Service to Enterprise

CoreOS today said it added features to its enterprise container-orchestration platform that include Kubernetes-as-a-service.

The upcoming Tectonic 1.6.4 will allow enterprises to deploy and manage the latest version of upstream Kubernetes across bare metal, public-, private-, and hybrid-cloud environments. The container company says this gives enterprises the flexibility of running their applications on the cloud, without cloud vendor lock-in.

“With Kubernetes-as-a-service, you can update your cluster with no downtime, smart ordering, and either in one click or fully automated,” wrote CoreOS CEO Alex Polvi in a blog post.

Read more at SDxCentral

Patches Available for Linux sudo Vulnerability

Red Hat, Debian and other Linux distributions yesterday pushed out patches for a high-severity vulnerability in sudo that could be abused by a local attacker to gain root privileges.

Sudo is a program for Linux and UNIX systems that allows standard users to run specific commands as a superuser, such as adding users or performing system updates.
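For example (illustrative commands; any administrative task delegated through sudoers works the same way):

$ sudo useradd -m newdev    # add a user, running as root
$ sudo apt-get upgrade      # apply system updates, running as root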

In this case, researchers at Qualys found a vulnerability in sudo’s get_process_ttyname function that allows a local attacker with sudo privileges to run commands as root or elevate privileges to root.

Read more at ThreatPost

How the Moby Project Pivots Docker’s Open Source Business Model

When Docker creator Solomon Hykes announced the Moby Project at DockerCon, he said it was very close to his heart and called it the second most important project since Docker itself.

What makes the Moby Project so special for Hykes is that it fundamentally changes the Docker world. It changes the way Docker is being developed and consumed. It changes the relationship between Docker the company and Docker the ecosystem, which is the open source community around the project. It changes the entire business model.

Moby, and the other components that Docker has open sourced, are the result of continuous pressure from the community to componentize Docker. There was growing criticism that, instead of offering building blocks, Docker was becoming a monolithic solution in which the ecosystem had no say in the stability and development of individual components. There were even rumors of forking Docker. The company responded by releasing core components of Docker as independent projects, which addressed those concerns.

“It [Moby] has a more fruitful, more central role in how Docker does open source in general, and it’s our answer to a few different concerns that were articulated by the community over time and also an answer to a problem we had,” said Hykes.

Many factors, over a period of time, led to the creation of the project. It’s no secret that Docker containers are experiencing explosive growth. According to Hykes, Docker Hub downloads have gone from 100 million to 6 billion in the past two years.

With this growth, there has been increasing demand for Docker on more platforms. Docker ended up creating Docker for Windows, Docker for Mac, Docker for Cloud, Docker for Azure, and versions for bare metal, virtual machines, and so on.

Over time, Docker has released many core components of Docker, the product, into open source, making them independent projects. As a result, Docker became modular, which actually made it easier to build all these separate editions.

As Docker worked on these specialized editions, using the open source components, the production model started to break down. They saw wasted efforts as different teams working on different editions were duplicating efforts.

“There’s a lot of work that’s platform specific, and we’ve created tooling and processes for that,” said Hykes. “For creating different specialized systems out of the same common building blocks, because at the end of the day you still need to run containers for Docker. You need to orchestrate, you need to do networking, etc.,” he continued.

Hykes compares it with the car industry, where manufacturers share the same components across totally different cars. Docker is not a huge company; it doesn’t have all the engineering resources in the world to throw at the problem. Its teams were using the same components for different editions, so all they needed was a very efficient internal assembly line to build these systems. Then, they decided to open source the project that puts all these different open source components together. That assembly line is the Moby Project.

Moby has become upstream for Docker. Every bit of code that goes into Docker goes through Moby.

If the relationship between Moby and Docker reminds you of Fedora and RHEL, you are right. That’s exactly what it is: the Moby Project is essentially Fedora, and the Docker editions equate to RHEL. Moby is a Docker-sponsored project that’s upstream for the Docker editions.

“There’s a free version, there’s a commercial version, and the consensus is that at some point beyond a certain scale, if you want to continue to grow both the project and the product, at some point you owe it to both of them to separate them, give them each a clean, well-defined space where the product can thrive as a project and the project can thrive as a product. So, in a nutshell, that’s what Moby is,” Hykes said.

Now the ecosystem and the community can not only contribute to and improve the independent components that make up Docker, but also participate in the assembly process. It also means they can build their own Docker to suit their own needs.

“Docker is a citizen of this container ecosystem, and the only way for Docker to succeed is if the ecosystem succeeds,” Hykes said.

The Moby project comprises three core components:

  • A library of containerized back-end components (e.g., a low-level builder, logging facility, volume management, networking, image management, containerd, SwarmKit, etc.)

  • A framework for assembling the components into a standalone container platform, and tooling to build, test and deploy artifacts for these assemblies.

  • A reference assembly called Moby Origin, which is the open base for the Docker container platform, as well as examples of container systems using various components from the Moby library or from other projects.

And because all of the Moby components are containers, according to the project page, creating new components is as easy as building a new OCI-compatible container.
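As a rough sketch of what that means in practice (all names here are hypothetical, and this assumes a my-component binary sitting in the build context):

$ cat > Dockerfile <<'EOF'
FROM alpine:3.6
COPY my-component /usr/bin/my-component
ENTRYPOINT ["/usr/bin/my-component"]
EOF
$ docker build -t my-moby-component .    # the result is an OCI-compatible container image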

Growth of Docker, the company

Moby solved another problem for Docker: an identity crisis. Hykes admitted that there was some confusion between Docker the company, Docker the product, and Docker the project. That confusion has been lifted by the creation of the Moby Project.

“Anything with the Docker mark is product land, and anything with the Moby Project name or related to it involves open source specific projects created by the open source community, not Docker itself,” said Chris Aniszczyk, executive director of the Open Container Initiative, an open governance project formed in 2015 by Docker, CoreOS, and others to create open industry standards around container formats and runtimes.

But the creation of Moby has repercussions beyond Docker. “Moby is allowing other people to take parts of the Moby stack and make their own respective product, which people do,” said Aniszczyk.

“Communities can use these components in their own project. The Mesos community is looking at containerd as the core runtime, Kubernetes is looking at adding support for containerd through its CRI effort. It allows tools to be reused across ecosystems, at the same time allowing Docker to innovate on the product side,” Aniszczyk continued.

Along with the freedom to innovate on the product side, Docker has brought in enterprise veteran Steve Singh as CEO.

“I think with Singh coming in, Docker will have a focus on enterprise sales. With Moby and Docker Enterprise Edition, we will see Docker trying to mimic what Red Hat has done with Fedora and RHEL. We will see them push enterprises toward the paid Docker Enterprise Edition,” said Aniszczyk.

Evolution of the platform

Right now, containers are the focus of the discussion, but that will change as the platform matures. Aniszczyk compares it with the web. Initially, the discussion was around CSS, HTML, and file formats, and the W3C standardized what it meant to be CSS and HTML. Now the actual innovation and competition happens between web browsers and developer tooling, with different browser engines competing on performance. Once you create a web page, it works everywhere.

The same standardization is happening in the container world, where organizations like OCI are standardizing the container format and runtime, the core components. Just as there are different web frameworks like React.js and Angular.js, there will be different components and orchestration platforms in the container world, such as the Docker editions, OpenShift, Kubernetes, Mesos, and Cloud Foundry. As long as there are standards, these projects are going to compete on features.

“I think in five years, people will talk less about containers, just the way we don’t talk about CSS and HTML,” said Aniszczyk. “Containers will just be the plumbing making everything work; people will be competing at a high level, at a product level that will package technologies like Kubernetes and OpenShift and so on.”

Want to learn more about containers? Containers Fundamentals (LFS253) is an online, self-paced course designed for those new to container technologies. The course, which is presented as a series of short videos, gives a high-level overview of what containers are and how they help streamline and manage the application lifecycle. Access all the free sample chapter videos now!

Avoid Using Lazy, Privileged Docker Containers

It’s probably a little unfair to call everyone who uses privileged containers “lazy” but, from what I’ve seen, some (even security) vendors deserve to be labeled as such.

Running your container using privileged mode opens up a world of pain if your container is abused. Not only are your host’s resources directly accessed with impunity by code within your container (a little like enabling the omnipotent CAP_SYS_ADMIN capability) but you’re also relinquishing the cgroups resource limitations which were added to the kernel as a level of protection, too.

Enabling this dangerous mode is like leaving a window open in your house and going away on holiday. It’s simply an unnecessary risk that you shouldn’t be taking.

Don’t get me wrong: certain system “control” containers need full host access. But you’re much better off spending some time figuring out every single capability that your powerful container requires and then opening up each one individually. You should always work strictly with a “default deny” approach.

In other words, lock everything down and then only open up precisely what you need. This is the only way that security can truly work.

Opening up a specific system component for access should only happen when a requirement has been identified. Then the access must be carefully considered, analyzed, and tested. Otherwise, it remains closed without exception.
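With Docker, that default-deny posture can be sketched like this (NET_BIND_SERVICE is just an example of a single, explicitly identified requirement):

$ docker run --cap-drop=ALL --cap-add=NET_BIND_SERVICE -it debian bash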

You’ll be glad to know that such tasks needn’t be too onerous. Think of an iptables rule. You might have an ephemeral, temporary endpoint that will be destroyed programmatically in seven days’ time. You could create a new rule, make sure it works, and set a scheduled job, e.g., using a cron job or an at job, to remove that rule in seven days. The process is logical: test that the access works and then delete the rule. Hopefully, it’s relatively easy.
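A sketch of that rule-plus-expiry pattern (the address and port are hypothetical):

$ iptables -A INPUT -s 203.0.113.10 -p tcp --dport 443 -j ACCEPT    # open up the endpoint
$ echo 'iptables -D INPUT -s 203.0.113.10 -p tcp --dport 443 -j ACCEPT' | at now + 7 days    # schedule its removal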

Back to being lax with your container security. Let’s now look at a quick example which, admittedly, is designed to scare you against using privileged mode on your servers unless you really have to.

Directly on our host we’ll check the setting of a kernel parameter, as so:

$ sysctl -a | grep hostname

kernel.hostname = chrisbinnie

Our host is definitely called “chrisbinnie” in this case.

Next, we’ll create a container and prove that the container’s hostname isn’t the same as the host’s name, as seen in Figure 1.

Figure 1: How I created a simple Debian container.

We can see above that we’ve fired up a vanilla Debian container, entered the container and been offered its hashed hostname as we’d expect (6b898d49131e in this case).
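The commands behind Figure 1 would have looked something like this (your container ID will differ):

$ docker run -it debian bash
root@6b898d49131e:/# hostname
6b898d49131e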

Now, from within our container, we can try to alter the container’s hostname. Note how directly an arbitrary container is connected to our host’s kernel by default.

Thankfully, however, our host is rejecting the kernel parameter change as shown below.

root@6b898d49131e /# sysctl kernel.hostname=Jurgen
sysctl: setting key "kernel.hostname": Read-only file system

Next (please ignore the container name-change), I’ll simply fire up another container in exactly the same way.

This time I’m going to use this “ps” command below inside our container as shown in Figure 2.

$ ps -eo uid,gid,args

Figure 2: Inside the container, I’m definitely the superuser, the “root” user with UID 0, but I’m not affecting the container’s hostname or thankfully the host’s.
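Illustrative output from that command inside an unprivileged container (UID and GID 0 confirm we are root within the container’s namespace):

  UID   GID COMMAND
    0     0 bash
    0     0 ps -eo uid,gid,args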

Still with me? Now we’re going to try the same approach but the lazy way. Yes, correct, by opening up the aforementioned and daunting “--privileged” mode.

$ docker run --privileged -it debian bash

In Figure 3, we can see that the container and host didn’t complain; instead, frighteningly, we had access to the host’s kernel directly and made a parameter change.

Figure 3: We can affect the host’s kernel from within a container.
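Reproducing Figure 3 looks roughly like this (the container ID is illustrative; the hostname value is the one from the earlier attempt):

$ docker run --privileged -it debian bash
root@9a3b2c1d0e4f:/# sysctl kernel.hostname=Jurgen
kernel.hostname = Jurgen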

As you can imagine, altering hostnames is just the beginning. There are all kinds of permutations possible when a container has access to both the kernel and the pseudo filesystems on the host.

I’d encourage you to experiment using this simple example and other kernel parameters.

In the meantime, make sure that you avoid elevating privileges within your containers at all costs. It’s simply not worth the risk.

Chris Binnie is a Technical Consultant with 20 years of Linux experience and a writer for Linux Magazine and Admin Magazine. His new book Linux Server Security: Hack and Defend teaches you how to launch sophisticated attacks, make your servers invisible and crack complex passwords.

Advance your career in Linux System Administration! Check out the Essentials of System Administration course from The Linux Foundation.

This article originally appeared on DevSecOps.

Contributing to Open Source Projects Is Key to Igalia’s Business

Igalia is an open source development company offering consultancy services for desktop, mobile, and web technologies. The company’s developers contribute code for several open source projects, including GNOME, WebKit, and the Linux kernel.

The company was founded in September 2001 in A Coruña, Spain, by a group of 10 software professionals, who were inspired by Free Software and shared the goal of creating a company based on cooperation and innovation.

“Open source and Free Software are part of Igalia’s DNA,” says Xavier Castaño García, one of the company’s founding members.  

Besides focusing its participation on desktop, mobile, embedded, and kernel development initiatives, Igalia also sponsors many events, including the recent Open Networking Summit, the Embedded Linux Conferences, and the upcoming Automotive Linux Summit. Here, Castaño explains more about the company’s current projects.

Linux.com: What does Igalia do?

Xavier Castaño García: Igalia is an open source consultancy specializing in the development of innovative projects and solutions. Our engineers have expertise in a wide range of technological areas, including browsers and client-side web technologies, graphics pipeline, compilers and virtual machines.

Leading the development of essential projects in the areas of web rendering and browsers, we have the deepest WPE, WebKit, Chromium/Blink, and Firefox expertise found in the consulting business, including many reviewers and committers with a very strong presence in those communities. Igalia designs, develops, customizes, and optimizes GNU/Linux-based solutions for companies across the globe. Our work and contributions are present in almost anything running on top of a Linux kernel.

Linux.com: How and why do you use Linux and open source?

Castaño: Open Source and Free Software are part of Igalia’s DNA. At Igalia, we all share the free software philosophy and believe that open source collaboration is fundamental for sprouting innovation. Since the very beginning, Igalia decided to invest in open source, in particular, in projects and communities that have been important for the company.

Igalia contributes actively to many open source projects including WebKit, Chromium, Servo, Mesa 3D, and GStreamer. Most of these projects are state-of-the-art open source technologies, and most of the big players of the industry are involved in them. Because we have committed many years of intensive contributions to these projects, we have a wide range of experience. Companies that are interested in getting involved in those projects find Igalia a great partner. We can help them use, improve, customize, optimize, and contribute back any changes to any of these projects.

Linux.com: How has participating in the Linux and open source communities changed or benefited the company?

Castaño: The GNOME and WebKit open source communities have been key for Igalia. All the contributions made in the GNOME ecosystem were the main reason why some of our developers contributed to integrating Epiphany with WebKit. This is one of the most important milestones in our history. Thanks to these contributions, Igalia became the independent consultancy with the most contributions to Chrome and WebKit.

Linux.com: Why did you join The Linux Foundation?

Castaño: The Linux Foundation is a reference organization in open source and business, and it is nowadays a platform for boosting open source ecosystems. In parallel, Igalia is very active in industry associations. Hence, becoming a member of The Linux Foundation was a natural step for the company.

Furthermore, Igalia is currently sponsoring many events hosted by The Linux Foundation. For example, we sponsor Open Source Summit in North America, Japan, and Europe, the Embedded Linux Conferences, the Automotive Linux Summit, and Open Networking Summit.

Linux.com: What interesting or innovative trends in your industry are you seeing and what role do Linux and open source play?

Castaño: Linux has become the key and the core foundation in the embedded world. Most of the embedded devices deploy a Linux-based distro and open source components. In addition to this, there is also a trend in many industries of introducing HTML5 user interfaces in those devices, which means that they need to deploy an open source web engine either based on WebKit or on Chromium.

Linux.com: Is there anything else important or upcoming that you’d like to share?

Castaño: We have recently released WPE as an official port of WebKit. WPE is a new WebKit port optimized for Embedded platforms that can support a variety of display protocols like Wayland or X11. WPE serves as a base for systems and environments that mainly or completely rely on web platform technologies to build their interfaces.

WPE is now part of the Reference Design Kit (RDK) and has been accepted upstream at webkit.org as a new official port of WebKit. We expect WPE to be deployed in millions of set-top boxes by the end of Q3. As an open source project, we welcome new contributors and adopters to the project.

Open Networking Summit, the industry’s premier open networking event, brings Enterprises, Carriers and Cloud Service providers together with the ecosystem to share learnings, highlight innovation and discuss the future of Open Source Networking. Watch the ONS keynote presentations now.

Kubeless UI Now in Alpha

Kubeless is the Kubernetes-native serverless framework that we started developing at Skippbox and that we keep on improving at Bitnami. FWIW, Kubeless is not a proprietary play for us; we aim to build a community around it, as we only care about application architecture and shifts in application platforms.

To make it easy for folks to use Kubeless, we just released the first version of our serverless plugin, so that you can deploy functions on Kubeless using the Go serverless framework. Basically, this now works with kubeless:
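The workflow presumably resembles the standard serverless one (the template and function names below are illustrative, not taken from the announcement):

$ serverless create --template kubeless-python --path hello    # scaffold a function project
$ cd hello && serverless deploy                                # deploy the function to Kubeless
$ serverless invoke -f hello --log                             # call it and stream the logs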

Read more at Bitnami

Node.js 8: Big Improvements for the Debugging and Native Module Ecosystem

We are excited to announce Node.js 8.0.0 today. The new improvements and features of this release create the best workflow for Node.js developers to date. Highlighted updates and features include adding Node.js API for native module developers, async_hooks, JS bindings for the inspector, zero-filling Buffers, util.promisify, and more.

[Image: Throwing confetti now that we have Node.js 8!]

The Node.js 8 release replaces version 7 in our current release line. The Node.js 8 release line will become a Node.js Long Term Support (LTS) release in October 2017 (more details on the LTS strategy here). The LTS release line is focused on stability and security and is best for those who want guaranteed stability when they upgrade and/or are using Node.js in the enterprise.

Those who need stability and have complex production environments (i.e., medium and large enterprises) should wait until Node.js 8 goes into LTS before upgrading to it for production.

Now that we’ve provided this PSA, let’s dive into the interesting updates in this release.

Native Modular Ecosystem Gets a Boost

The much-awaited Node.js API (N-API) will be added as an experimental feature to this release; it will be behind a flag. This is an incredibly important technology, as it will eliminate the breakage that happens between major release lines with native modules.

Although native modules (modules written in C or C++ and bound directly to the V8 JavaScript engine) are a small portion of the massive module ecosystem, 30 percent of all modules rely indirectly on native modules. Every time Node.js has a major release update, package maintainers have to update these dependencies.

These efforts would not be possible without significant contributions from Google, IBM, Intel, Microsoft, nearForm, NodeSource, and individual contributors. Read the full details around these efforts and this technology here.

Anyone who builds or uses native modules should test out the N-API feature.
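Since N-API is experimental and behind a flag in this release, trying it out should look roughly like this (consult the release notes for the exact flag name):

$ node --napi-modules my-app.js    # opt in to loading N-API native modules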

Welcome, V8 5.8

Node.js 8 ships with V8 5.8, a significant update to the JavaScript runtime that includes major improvements in performance and developer-facing APIs. V8 5.8 is guaranteed to have forward ABI compatibility with V8 5.9 and the upcoming V8 6.0, which will help ensure stability of the Node.js native addon ecosystem. During Node.js 8’s lifetime, the Node.js Project plans to move to 5.9 and possibly 6.0.

The V8 5.8 engine also helps set up a pending transition to the new TurboFan and Ignition compiler pipeline, which leads to lower memory consumption and faster startup across Node.js applications. Although this pipeline has existed in previous versions of V8, TurboFan and Ignition will be enabled by default for the first time in V8 5.9. The new compiler pipeline represents such a significant change that the Node.js Core Technical Committee (CTC) chose to postpone the Node.js 8 release in order to better accommodate it.

Buffer Improvements

Buffer(num) and new Buffer(num) are now zero-filled by default. Zero-filling helps with security and privacy by preventing information leaks. The downside is that zero-filling carries a performance hit, which can be avoided by migrating to Buffer.allocUnsafe(). It is suggested that Node.js users only use that function if they are aware of the risks and know how to avoid the problems.
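A quick illustration of the trade-off, runnable with Node.js 8:

$ node -e "console.log(Buffer.alloc(8))"          # zero-filled: <Buffer 00 00 00 00 00 00 00 00>
$ node -e "console.log(Buffer.allocUnsafe(8))"    # uninitialized memory: faster, but may leak data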

WHATWG URL Parser is Now Stable

The WHATWG URL parser goes from experimental status to fully supported in this version, allowing people to use a URL parser that is compliant with the spec and more compatible with the browser. This new URL implementation matches the URL implementation and API available in modern web browsers like Chrome, Firefox, Edge, and Safari, allowing code using URLs to be shared across environments.
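For example, in Node.js 8 the WHATWG URL class is exported by the url core module:

$ node -e "const { URL } = require('url'); const u = new URL('https://nodejs.org/en/?page=1'); console.log(u.hostname, u.searchParams.get('page'))"
nodejs.org 1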

Performance, Security and Interface Boost in npm@5

Npm, Inc. recently announced the release of version 5.0.0 of the npm client and we are happy to include this new version within Node.js 8.

Common package management tasks such as package installation and version updates are now up to five times faster; lockfiles ensure consistent installations across development environments; and a self-healing cache with automatic error recovery protects against corrupted downloads. npm@5 also introduces SHA-512 code verification.
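For instance, with npm@5 a plain install now records the exact dependency tree (the package name is arbitrary):

$ npm install express                             # installs and writes package-lock.json automatically
$ grep integrity package-lock.json | head -n 3    # entries carry SHA-512 integrity hashes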

“Since npm first shipped with Node.js in 2011, our mission has been to reduce friction for Node.js developers and help people build amazing things. Using Node.js 8 with npm@5 will make modular software development dramatically faster and easier — it’s the largest performance improvement ever,” said Isaac Z. Schlueter, CEO of npm, Inc. “We’re proud of our commitment to the Node.js community, and collaboration to bring innovative products to market. I’m excited to see what comes next.”

Insights to the Tooling Ecosystem and Debugging

This release line will provide deep insight via the new tracing and async tracking features. The experimental ‘async_hooks’ module (formerly ‘async_wrap’) received a major update in Node.js 8. This diagnostics API allows developers to monitor the operation of the Node.js event loop, tracking asynchronous requests and handles through their complete lifecycle and enabling better diagnostic tools and other utilities.
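A minimal sketch of the API as of Node.js 8 (the module is experimental, so details may change; fs.writeSync is used because console.log itself creates async resources):

$ node -e "const ah = require('async_hooks'), fs = require('fs'); ah.createHook({ init(id, type) { fs.writeSync(1, type + ' ' + id + '\n'); } }).enable(); setTimeout(() => {}, 10)"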

These additions, along with the removal of the legacy debugger (which is replaced by the newer CLI debugger that landed in v7) make it easier to debug and track changes within Node.js, allowing commercial and open source tooling vendors to pinpoint performance degradation in Node.js applications.

Another experimental feature added to this release includes JS bindings for the inspector. The new inspector core module enables developers to leverage the debug protocol used by the Chrome inspector in order to inspect currently running JavaScript code.
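A minimal sketch of the new core module (experimental in Node.js 8; 9229 is the default Chrome inspector port):

$ node -e "const inspector = require('inspector'); inspector.open(9229); console.log(inspector.url())"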

Improved Support for Promises

Node.js includes a new util.promisify() API that allows developers to wrap callback APIs to return Promises with little overhead, using a standard API.
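For example (runnable with Node.js 8; the file path is arbitrary):

$ node -e "const fs = require('fs'), { promisify } = require('util'); promisify(fs.readFile)('/etc/hostname', 'utf8').then(console.log)"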

For all of our major updates, please go to our technical blog and read more here.

This article originally appeared on Node.js blog.