
How the Moby Project Pivots Docker’s Open Source Business Model

When Docker creator Solomon Hykes announced the Moby Project at DockerCon, he said it was very close to his heart, calling it the second most important project since Docker itself.

What makes the Moby Project so special for Hykes is that it fundamentally changes the Docker world. It changes the way Docker is being developed and consumed. It changes the relationship between Docker the company and Docker the ecosystem, which is the open source community around the project. It changes the entire business model.

Moby, and the other components that Docker has open sourced, are the result of continuous pressure from the community for Docker to componentize its platform. There was growing criticism that, instead of offering building blocks, Docker was becoming a monolithic solution in which the ecosystem had no say in the stability and development of those components. There were even rumors of forking Docker. The company responded by releasing core components of Docker as independent projects, which addressed the community's concerns.

“It [Moby] has a more fruitful, more central role in how Docker does open source in general, and it’s our answer to a few different concerns that were articulated by the community over time and also an answer to a problem we had,” said Hykes.

There are many factors that, over a period of time, led to the creation of the project. It’s no secret that Docker containers are experiencing explosive growth. According to Hykes, Docker Hub downloads have gone from 100 million to 6 billion in the past two years.

With this growth, there has been increasing demand for Docker on more platforms. Docker ended up creating Docker for Windows, Docker for Mac, Docker for Cloud, Docker for Azure… editions for bare metal, virtual machines, and so on.

Over time, Docker has released many core components of Docker, the product, into open source, making them independent projects. As a result, Docker became modular, which made it easier to build all these separate editions.

As Docker worked on these specialized editions using the open source components, the production model started to break down: different teams working on different editions were duplicating work.

“There’s a lot of work that’s platform specific, and we’ve created tooling and processes for that,” said Hykes. “For creating different specialized systems out of the same common building blocks, because at the end of the day you still need to run containers for Docker. You need to orchestrate, you need to do networking, etc.,” he continued.

Hykes compares it with the car industry, where manufacturers share the same components across totally different cars. Docker is not a huge company; it doesn’t have all the engineering resources in the world to throw at the problem. Since the same components went into every edition, all the company needed was a very efficient internal assembly line for building these systems. Docker then decided to open source this assembly line, the project that puts all these different open source components together: the Moby Project.

Moby has become upstream for Docker. Every bit of code that goes into Docker goes through Moby.

If the relationship between Moby and Docker reminds you of Fedora and RHEL, then you are right. That’s exactly what it is. The Moby Project is essentially Fedora, and the Docker editions equate to RHEL. Moby is a Docker-sponsored project that’s upstream for the Docker editions.

“There’s a free version, there’s a commercial version, and the consensus is that at some point beyond a certain scale, if you want to continue to grow both the project and the product, at some point you owe it to both of them to separate them, give them each a clean, well-defined space where the product can thrive as a project and the project can thrive as a product. So, in a nutshell, that’s what Moby is,” Hykes said.

Now the ecosystem and the community can not only contribute to and improve the independent components that make up Docker, they can participate in the assembly process. It also means they can build their own Docker to suit their own needs.

“Docker is a citizen of this container ecosystem, and the only way for Docker to succeed is if the ecosystem succeeds,” Hykes said.

The Moby project comprises three core components:

  • A library of containerized back-end components (e.g., a low-level builder, logging facility, volume management, networking, image management, containerd, SwarmKit, etc.)

  • A framework for assembling the components into a standalone container platform, and tooling to build, test and deploy artifacts for these assemblies.

  • A reference assembly called Moby Origin, which is the open base for the Docker container platform, as well as examples of container systems using various components from the Moby library or from other projects.

And because all of the Moby components are containers, according to the project page, creating new components is as easy as building a new OCI-compatible container.
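For instance, a new component could be packaged with a plain Dockerfile and built like any other image; here is a minimal sketch (all names are hypothetical, for illustration only):

$ cat > Dockerfile <<'EOF'
FROM alpine:3.6
COPY my-component /usr/bin/my-component
ENTRYPOINT ["/usr/bin/my-component"]
EOF
$ docker build -t my-moby-component .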

Growth of Docker, the company

Moby solved another problem for Docker: an identity crisis. Hykes admitted that there was some confusion between Docker the company, Docker the product, and Docker the project. That confusion has been lifted by the creation of the Moby Project.

“Anything with the Docker mark is product land, and anything with the Moby Project name or related to it involves open source specific projects created by the open source community, not Docker itself,” said Chris Aniszczyk, executive director of the Open Container Initiative, an open governance project — formed in 2015 by Docker, CoreOS, and others — for the purpose of creating open industry standards around container formats and runtime.

But the creation of Moby has repercussions beyond Docker. “Moby is allowing other people to take parts of the Moby stack and make their own respective product, which people do,” said Aniszczyk.

“Communities can use these components in their own project. The Mesos community is looking at containerd as the core runtime, Kubernetes is looking at adding support for containerd through its CRI effort. It allows tools to be reused across ecosystems, at the same time allowing Docker to innovate on the product side,” Aniszczyk continued.

Along with the freedom to innovate on the product side, Docker has brought in enterprise veteran Steve Singh as CEO.

“I think with Singh coming in, Docker will have a focus on enterprise sales. With Moby and Docker Enterprise Edition, we will see Docker trying to mimic what Red Hat has done with Fedora and RHEL. We will see them push enterprises toward paid Docker Enterprise Edition,” said Aniszczyk.

Evolution of the platform

Right now, containers are the focus of the discussion, but that will change as the platform matures. Aniszczyk compares it with the web. Initially, the discussion was around CSS, HTML, and file formats, and the W3C standardized what it meant to be CSS and HTML. Now the actual innovation and competition is happening between web browsers, between developer tooling, between different browser engines, on performance. Once you create a web page, it works everywhere.

The same standardization is happening in the container world, where organizations like the OCI are standardizing the container format and runtime, the core components. Just as there are different web frameworks like React and Angular.js, there will be different components and orchestration platforms in the container world, such as Docker Editions, OpenShift, Kubernetes, Mesos, and Cloud Foundry. As long as there are standards, these projects are going to compete on features.

“I think in five years, people will talk less about containers, just the way we don’t talk about CSS and HTML,” said Aniszczyk. “Containers will just be the plumbing making everything work; people will be competing at a high level, at a product level that will package technologies like Kubernetes and OpenShift and so on.”

Want to learn more about containers? Containers Fundamentals (LFS253) is an online, self-paced course designed for those new to container technologies. The course, which is presented as a series of short videos, gives a high-level overview of what containers are and how they help streamline and manage the application lifecycle. Access all the free sample chapter videos now!

Avoid Using Lazy, Privileged Docker Containers

It’s probably a little unfair to call everyone who uses privileged containers “lazy” but, from what I’ve seen, some (even security) vendors deserve to be labeled as such.

Running your container using privileged mode opens up a world of pain if your container is abused. Not only are your host’s resources directly accessed with impunity by code within your container (a little like enabling the omnipotent CAP_SYS_ADMIN capability) but you’re also relinquishing the cgroups resource limitations which were added to the kernel as a level of protection, too.

Enabling this dangerous mode is like leaving a window open in your house and going away on holiday. It’s simply an unnecessary risk that you shouldn’t be taking.

Don’t get me wrong, certain system “control” containers need full host access, but you’re much better off spending some time figuring out every single capability that your powerful container requires and then opening each one up. You should always strictly work with a “default deny” approach.

In other words, lock everything down and then only open up precisely what you need. This is the only way that security can truly work.

Opening up a specific system component for access should only happen when a requirement has been identified. Then the access must be carefully considered, analyzed, and tested. Otherwise, it remains closed without exception.
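With Docker, a default-deny sketch looks like this: drop every capability, then add back only what the workload demonstrably needs (busybox ping and CAP_NET_RAW are used purely as an illustration):

$ docker run --rm --cap-drop=ALL alpine ping -c1 8.8.8.8          # fails: ping needs CAP_NET_RAW
$ docker run --rm --cap-drop=ALL --cap-add=NET_RAW alpine ping -c1 8.8.8.8   # succeeds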

You’ll be glad to know that such tasks needn’t be too onerous. Think of an iptables rule. You might have an ephemeral, temporary endpoint that will be destroyed programmatically in seven days’ time. You could create a new rule, make sure it works, and set a scheduled job — e.g., using a cron job or an at job — to remove that rule in seven days. The process is logical: test that the access works, then delete the rule. Hopefully, it’s relatively easy.
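As a concrete sketch (the port, source address, and seven-day window are all hypothetical):

$ iptables -I INPUT -p tcp --dport 8080 -s 203.0.113.7 -j ACCEPT
$ echo 'iptables -D INPUT -p tcp --dport 8080 -s 203.0.113.7 -j ACCEPT' | at now + 7 days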

Back to being lax with your container security. Let’s now look at a quick example which, admittedly, is designed to scare you against using privileged mode on your servers unless you really have to.

Directly on our host, we’ll check the setting of a kernel parameter, like so:

$ sysctl -a | grep hostname

kernel.hostname = chrisbinnie

Our host is definitely called “chrisbinnie” in this case.

Next, we’ll create a container and prove that the container’s hostname isn’t the same as the host’s name, as seen in Figure 1.

Figure 1: How I created a simple Debian container.

We can see above that we’ve fired up a vanilla Debian container, entered the container and been offered its hashed hostname as we’d expect (6b898d49131e in this case).
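In other words, the session presumably went something like this:

$ docker run -it debian bash
root@6b898d49131e:/#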

Now, from within our container, we can try to alter the container’s hostname. Note how directly an arbitrary container is connected to our host’s kernel by default.

Thankfully, however, our host is rejecting the kernel parameter change as shown below.

root@6b898d49131e:/# sysctl kernel.hostname=Jurgen
sysctl: setting key "kernel.hostname": Read-only file system

Next (please ignore the container name-change), I’ll simply fire up another container in exactly the same way.

This time I’m going to use this “ps” command below inside our container as shown in Figure 2.

$ ps -eo uid,gid,args

Figure 2: Inside the container, I’m definitely the superuser, the “root” user with UID 0, but I’m not affecting the container’s hostname or thankfully the host’s.

Still with me? Now we’re going to try the same approach but the lazy way. Yes, correct, by opening up the aforementioned and daunting “--privileged” mode.

$ docker run --privileged -it debian bash

In Figure 3, we can see below that the container and host didn’t complain but instead, frighteningly, we had access to the host’s kernel directly and made a parameter change.

Figure 3: We can affect the host’s kernel from within a container.
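In other words, something like this (container ID hypothetical), with the same sysctl write now succeeding:

root@9a1b2c3d4e5f:/# sysctl kernel.hostname=Jurgen
kernel.hostname = Jurgen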

As you can imagine, altering hostnames is just the beginning. All kinds of permutations become possible once a container has access to both the kernel and the host’s pseudo filesystems.

I’d encourage you to experiment using this simple example and other kernel parameters.

In the meantime, make sure that you avoid elevating privileges within your containers at all costs. It’s simply not worth the risk.

Chris Binnie is a Technical Consultant with 20 years of Linux experience and a writer for Linux Magazine and Admin Magazine. His new book Linux Server Security: Hack and Defend teaches you how to launch sophisticated attacks, make your servers invisible and crack complex passwords.

Advance your career in Linux System Administration! Check out the Essentials of System Administration course from The Linux Foundation.

This article originally appeared on DevSecOps.

Contributing to Open Source Projects Is Key to Igalia’s Business

Igalia is an open source development company offering consultancy services for desktop, mobile, and web technologies. The company’s developers contribute code for several open source projects, including GNOME, WebKit, and the Linux kernel.

The company was founded in September 2001 in A Coruña, Spain, by a group of 10 software professionals, who were inspired by Free Software and shared the goal of creating a company based on cooperation and innovation.

“Open source and Free Software are part of Igalia’s DNA,” says Xavier Castaño García, one of the company’s founding members.  

Besides focusing its participation on desktop, mobile, embedded, and kernel development initiatives, Igalia also sponsors many events, including the recent Open Networking Summit, the Embedded Linux Conferences, and the upcoming Automotive Linux Summit. Here, Castaño explains more about the company’s current projects.

Linux.com: What does Igalia do?

Xavier Castaño García: Igalia is an open source consultancy specializing in the development of innovative projects and solutions. Our engineers have expertise in a wide range of technological areas, including browsers and client-side web technologies, the graphics pipeline, compilers, and virtual machines.

Leading the development of essential projects in the areas of web rendering and browsers, we have the most WPE, WebKit, Chromium/Blink, and Firefox expertise found in the consulting business, including many reviewers and committers with a very strong presence in the communities. Igalia designs, develops, customizes, and optimizes GNU/Linux-based solutions for companies across the globe. Our work and contributions are present in almost anything running on top of a Linux kernel.

Linux.com: How and why do you use Linux and open source?

Castaño: Open Source and Free Software are part of Igalia’s DNA. At Igalia, we all share the free software philosophy and believe that open source collaboration is fundamental for sprouting innovation. Since the very beginning, Igalia decided to invest in open source, in particular, in projects and communities that have been important for the company.

Igalia contributes actively to many open source projects including WebKit, Chromium, Servo, Mesa 3D, and GStreamer. Most of these projects are state-of-the-art open source technologies, and most of the big players of the industry are involved in them. Because we have committed many years of intensive contributions to these projects, we have a wide range of experience. Companies that are interested in getting involved in those projects find Igalia a great partner. We can help them use, improve, customize, optimize, and contribute back any changes to any of these projects.

Linux.com: How has participating in the Linux and open source communities changed or benefited the company?

Castaño: The GNOME and WebKit open source communities have been key for Igalia. All the contributions made to the GNOME ecosystem were the main reason why some of our developers worked on integrating Epiphany with WebKit. This is one of the most important milestones in our history. Thanks to these contributions, Igalia became the independent consultancy with the most contributions to Chrome and WebKit.

Linux.com: Why did you join The Linux Foundation?

Castaño: The Linux Foundation is a leading organization in open source and business. Additionally, The Linux Foundation is nowadays a platform for boosting open source ecosystems. In parallel, Igalia is very active in industry associations, so becoming a member of The Linux Foundation was a natural step for the company.

Furthermore, Igalia is currently sponsoring many events hosted by The Linux Foundation. For example, we sponsor Open Source Summit in North America, Japan, and Europe, the Embedded Linux Conferences, the Automotive Linux Summit, and Open Networking Summit.

Linux.com: What interesting or innovative trends in your industry are you seeing and what role do Linux and open source play?

Castaño: Linux has become the key and the core foundation in the embedded world. Most of the embedded devices deploy a Linux-based distro and open source components. In addition to this, there is also a trend in many industries of introducing HTML5 user interfaces in those devices, which means that they need to deploy an open source web engine either based on WebKit or on Chromium.

Linux.com: Is there anything else important or upcoming that you’d like to share?

Castaño: We have recently released WPE as an official port of WebKit. WPE is a new WebKit port optimized for embedded platforms that can support a variety of display protocols like Wayland or X11. WPE serves as a base for systems and environments that mainly or completely rely on web platform technologies to build their interfaces.

WPE is now part of the Reference Design Kit (RDK) and has been accepted upstream at webkit.org as a new official port of WebKit. We expect WPE to be deployed in millions of set-top boxes by the end of Q3. As an open source project, we welcome new contributors and adopters to the project.

Open Networking Summit, the industry’s premier open networking event, brings Enterprises, Carriers and Cloud Service providers together with the ecosystem to share learnings, highlight innovation and discuss the future of Open Source Networking. Watch the ONS keynote presentations now.

Kubeless UI Now in Alpha

Kubeless is the Kubernetes-native serverless framework that we started developing at Skippbox and that we keep improving at Bitnami. FWIW, Kubeless is not a proprietary play for us; we aim to build a community around it, as we only care about application architecture and shifts in application platforms.

To make it easy for folks to use Kubeless, we just released the first version of our serverless plugin, so that you can deploy functions on Kubeless using the serverless framework. Basically, the serverless CLI now works with Kubeless.
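As a rough sketch of that workflow (the kubeless-nodejs template name comes from the plugin and should be treated as an assumption, not gospel):

$ npm install -g serverless
$ serverless create --template kubeless-nodejs --path hello   # template name is an assumption
$ cd hello && npm install
$ serverless deploy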

Read more at Bitnami

Node.js 8: Big Improvements for the Debugging and Native Module Ecosystem

We are excited to announce Node.js 8.0.0 today. The new improvements and features of this release create the best workflow for Node.js developers to date. Highlighted updates and features include the Node.js API for native module developers, async_hooks, JS bindings for the inspector, zero-filling Buffers, util.promisify, and more.


Throwing confetti now that we have Node.js 8!

The Node.js 8 release replaces version 7 in our current release line. This release line will become a Node.js Long Term Support (LTS) release in October 2017 (more details on LTS strategy here). The LTS release line is focused on stability and security and is best for those who want guaranteed stability when they upgrade and/or are using Node.js in the enterprise.

Those who need stability and have complex production environments (i.e. medium and large enterprises) should wait until Node.js 8 goes into LTS before upgrading it for production.

Now that we’ve provided this PSA, let’s dive into the interesting updates in this release.

Native Modular Ecosystem Gets a Boost

The much-awaited Node.js API (N-API) will be added as an experimental feature to this release — it will be behind a flag. This is an incredibly important technology, as it will eliminate the breakage that happens between major release lines with native modules.

Although native modules (modules written in C or C++ and bound directly to Chrome’s V8) are a small portion of the massive module ecosystem, 30 percent of all modules rely indirectly on native modules. Every time Node.js has a major release update, package maintainers have to update these dependencies.

These efforts would not be possible without significant contributions from Google, IBM, Intel, Microsoft, nearForm, NodeSource, and individual contributors. Read the full details around these efforts and this technology here.

Anyone who builds or uses native modules should test out the N-API feature.
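Since the feature ships behind a flag, testing it means opting in explicitly when starting Node. At the 8.0.0 release that flag was --napi-modules (worth confirming against node --help on your build), with app.js below standing in for whatever entry point loads your native module:

$ node --napi-modules app.js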

Welcome, V8 5.8

Node.js 8 ships with V8 5.8, a significant update to the JavaScript runtime that includes major improvements in performance and developer-facing APIs. V8 5.8 is guaranteed to have forward ABI compatibility with V8 5.9 and the upcoming V8 6.0, which will help ensure stability of the Node.js native addon ecosystem. During Node.js 8’s lifetime, the Node.js Project plans to move to V8 5.9 and possibly 6.0.

The V8 5.8 engine also helps set up a pending transition to the new TurboFan and Ignition compiler pipeline, which leads to lower memory consumption and faster startup across Node.js applications. Although this has existed in previous versions of V8, TurboFan and Ignition will be enabled by default for the first time in V8 5.9. The new compiler pipeline represents such a significant change that the Node.js Core Technical Committee (CTC) chose to postpone the Node.js 8 release in order to better accommodate it.

Buffer Improvements

Zero-filling is now the default for Buffer(num) and new Buffer(num). The benefit of zero-filling is that it helps prevent information leaks, improving security and privacy. The downside is that code using these constructors takes a performance hit, which can be avoided by migrating to Buffer.allocUnsafe(). That function, however, should only be used by those who are aware of the risks and know how to avoid the problems.
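A quick illustration of the trade-off, runnable from any shell:

$ node -e 'console.log(Buffer.alloc(4))'          # zero-filled: prints <Buffer 00 00 00 00>
$ node -e 'console.log(Buffer.allocUnsafe(4))'    # faster, but contents are uninitialized memory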

WHATWG URL Parser is Now Stable

The WHATWG URL parser goes from experimental status to fully supported in this version, allowing people to use a URL parser that is compliant with the spec and more compatible with the browser. This new URL implementation matches the URL implementation and API available in modern web browsers like Chrome, Firefox, Edge, and Safari, allowing code using URLs to be shared across environments.
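As a small example: in Node.js 8 the class is exposed by the url core module (it only became a global in a later release):

$ node -e 'const { URL } = require("url");
const u = new URL("https://nodejs.org/en/?page=2");
console.log(u.hostname, u.pathname, u.searchParams.get("page"));'
nodejs.org /en/ 2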

Performance, Security and Interface Boost in npm@5

npm, Inc. recently announced the release of version 5.0.0 of the npm client, and we are happy to include this new version within Node.js 8.

Common package management tasks such as package installation and version updates are now up to five times faster; lockfiles ensure consistent installations across development environments; and a self-healing cache with automatic error recovery protects against corrupted downloads. npm@5 also introduces SHA-512 code verification.

“Since npm first shipped with Node.js in 2011, our mission has been to reduce friction for Node.js developers and help people build amazing things. Using Node.js 8 with npm@5 will make modular software development dramatically faster and easier — it’s the largest performance improvement ever,” said Isaac Z. Schlueter, CEO of npm, Inc. “We’re proud of our commitment to the Node.js community, and collaboration to bring innovative products to market. I’m excited to see what comes next.”

Insights to the Tooling Ecosystem and Debugging

This release line will provide deep insight via the new tracing and async tracking features. The experimental ‘async_hooks’ module (formerly ‘async_wrap’) received a major update in Node.js 8. This diagnostics API allows developers to monitor the operation of the Node.js event loop, tracking asynchronous requests and handles through their complete lifecycle and enabling better diagnostic tools and other utilities.
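A minimal sketch of the API (the hook writes with fs.writeSync because console.log would itself schedule async work and recurse):

$ node -e '
const async_hooks = require("async_hooks");
const fs = require("fs");
async_hooks.createHook({
  init(asyncId, type) { fs.writeSync(1, "init " + type + " " + asyncId + "\n"); }
}).enable();
setTimeout(() => {}, 10);'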

These additions, along with the removal of the legacy debugger (which is replaced by the newer CLI debugger that landed in v7) make it easier to debug and track changes within Node.js, allowing commercial and open source tooling vendors to pinpoint performance degradation in Node.js applications.

Another experimental feature added to this release includes JS bindings for the inspector. The new inspector core module enables developers to leverage the debug protocol used by the Chrome inspector in order to inspect currently running JavaScript code.
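A sketch of what the bindings make possible, activating the debug protocol from inside a running process (port and host are illustrative):

$ node -e 'const inspector = require("inspector");
inspector.open(9229, "127.0.0.1");
console.log("Inspector listening at", inspector.url());'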

Improved Support for Promises

Node.js includes a new util.promisify() API that allows developers to wrap callback APIs to return Promises with little overhead, using a standard API.
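For example, promisifying the callback-based fs.readFile (the file path is arbitrary):

$ node -e 'const fs = require("fs");
const { promisify } = require("util");
const readFile = promisify(fs.readFile);
readFile("/etc/hostname", "utf8").then(console.log).catch(console.error);'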

For all of our major updates, please go to our technical blog and read more here.

This article originally appeared on Node.js blog.

Inclusion Done Right: Hiring

This is Part Three of a three-part series on Inclusion Done Right. Part One talked about the experience of employees, from engineers to CEOs, of working at a company where inclusion is part of the culture. Part Two discussed the specific actions the companies take to create a feeling of inclusion. This part will discuss the hiring process, and how to hire not just for diversity, but for inclusion.

Aubrey Blanche, global head of diversity and inclusion at Atlassian, defines the distinction between the two: “Diversity is getting an invite to the party; inclusion is being glad to be there.”

Read more at The New Stack

ARM’s New Processors Are Designed to Power the Machine-Learning Machines

On the eve of Computex, Taiwan’s big showpiece event where PC makers roll out the latest and best implementations of Intel CPUs, mobile rival ARM is announcing its own big news with the unveiling of a new generation of ARM CPUs and GPUs. Official today, the ARM Cortex-A75 is the new flagship-tier mobile processor design, with a claimed 22 percent improvement in performance over the incumbent A73. It’s joined by the new Cortex-A55, which has the highest power efficiency of any mid-range CPU ARM’s ever designed, and the Mali-G72 graphics processor, which also comes with a 25 percent improvement in efficiency relative to its predecessor G71.

The efficiency improvements are evolutionary and predictable, but the revolutionary aspects of this new lineup relate to artificial intelligence: this is the first set of processing components designed specifically to tackle the challenges of onboard AI and machine learning. Plus, last year’s updates to improve performance in the power-hungry tasks of augmented and virtual reality are being extended and elaborated.

Read more at The Verge

What Are the Differences Between Web Hosting and Linux Web Hosting?

Web hosting is very popular these days, and there are some differences between web hosting in general and Linux web hosting in particular, since hosting services run on one of two main kinds of operating systems. Web hosting allows a company or an organization to post its website on the internet, and a reliable web host offers the essential services needed to serve that website to visitors.

Linux web hosting is a well-known form of the service. It is easy to use, very cost-effective for clients, and has its own specifications. The two forms have both differences and similarities in how they operate.

Knowing how web hosting works can help boost your business’s profits, and web hosting is easy for a user to pick up. Learning about search engine optimization (SEO) is fundamental for marketing purposes, as it will help increase traffic to your website. The basic questions are: what are search engines looking for, and what should your strategy be for building your website? An expert web host always uses techniques that please customers, visitors, and search engines such as Bing and Google.

Are you searching for a premier web hosting services provider? Linux web hosting is the right choice for expert web hosting services, where the fundamental objective is to provide excellence. An expert web hosting organization is admired for its organized systems, and users can enjoy SEO-related services by taking advantage of a variety of deals and offerings, including domain analysis, website audits, and anchor text variation covering exact-match keywords, naked URLs, and branded keywords.

Hosting a group of sites on the same server with more than one IP address is another attractive option. With the help of a reliable Linux web host, users can easily get C-class IP addresses for their websites, which is a common way of building SEO-friendly sites. By buying a web hosting plan, users can often get a free domain as well.

Features of Web Hosting
•    Safe Harbor certified.
•    SEO-friendly website design.
•    Link building.
•    On-site optimization.
•    On-page optimization.
•    Money-back guarantee.
•    SEO-friendly content writing services.
•    100% uptime guarantee.
•    Email accounts, FTP accounts, and unlimited subdomains.
•    A control panel that is easy to use and flexible.

A proficient Linux web hosting service carefully chooses the keywords that are relevant for on-page optimization. It offers excellent on-page and off-page optimization to make a site noticeable to search engine algorithms; off-page optimization is predominantly a matter of backlinks.

Plans and Pricing
Providers introduce a variety of plans and packages at affordable rates, so a huge circle of business users can take advantage of these facilities very easily. Online marketing is a collaborative effort, and it can be done successfully with the help of a capable team. The right plan can make your website appealing and earn it a high ranking.
 
How does it help in increasing website ranking?
With Linux web hosting services, users get websites that perform vigorously. These packages help boost traffic toward the website, and special introductory packages are available for new clients.
•    Offers a free security suite.
•    Delivers a free online store.
•    Includes a free drag-and-drop builder.
•    Integrated with domain registration.
•    Offers easy and quick access.
•    Requires no special experience.
•    30-day money-back guarantee.
•    Offers email addresses and unlimited disk space.
•    Free marketing and search engine credits.

This Linux hosting contains all the tools needed to design a fully functional site, and it delivers the opportunity to run websites in innovative ways. Scalable, innovative hosting packages are enough to grow your business. It is well suited to users because it offers plenty of mobile-friendly templates and simple drag-and-drop tools, along with cloud hosting, WordPress hosting, dedicated hosting, and VPS hosting.

Users can upgrade their plans at any time to meet their needs for more IP addresses, bandwidth, and disk space. With a Linux hosting service, users can get a wide range of unique C-class IPs, and it ensures strong uptime, security, and performance. With technical offerings including green web hosting, application hosting, award-winning support, email, programming, databases, and the cPanel control panel, it can be the right option for you.

Building Blocks of Containers

This article series previews the new Containers Fundamentals training course from The Linux Foundation, which is designed for those who are new to container technologies. In previous excerpts, we talked about what containers are and what they’re not and explained a little of their history. In this last post of the series, we will look at the building blocks for containers, specifically, namespaces, control groups, and UnionFS.

Namespaces are a feature of the Linux kernel that isolates and virtualizes system resources for a process, so that each process gets its own resources, like its own IP address, hostname, etc. System resources that can be virtualized are: mount [mnt], process ID [PID], network [net], Interprocess Communication [IPC], hostname [UTS], and users [User IDs].

Using the namespace feature of the Linux kernel, we can isolate one process from another. The container is nothing but a process for the kernel, so we isolate each container using different namespaces.
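You can observe UTS namespace isolation directly with the unshare tool from util-linux; a quick sketch (the hostnames here are examples):

$ sudo unshare --uts bash -c 'hostname demo-container; hostname'
demo-container
$ hostname
myhost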

Another important feature that enables containerization is control groups. With control groups, we can limit, account for, and isolate the use of resources like CPU, memory, disk, and network. And, with UnionFS, we can transparently overlay two or more directories and implement a layered approach for containers.
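Container runtimes expose control groups as resource flags; for example, Docker can cap a container’s memory and CPU share like this (the limits are illustrative):

$ docker run --rm -it --memory=256m --cpus=0.5 alpine sh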

You can get more details in the sample course video below, presented by Neependra Khare (@neependra), Founder and Principal Consultant at CloudYuga, Docker Captain, and author of the Docker Cookbook.

Want to learn more? Access all the free sample chapter videos now!

The Companies That Support Linux and Open Source: Atomic Rules

Atomic Rules has been providing Field Programmable Gate Array (FPGA) design services and interconnection network solutions since 2008. In April, they joined The Linux Foundation to further their commitment to open source and to support and participate in the DPDK project, which provides a programming framework that enables faster development of high-speed data packet networking applications.

Additionally, Atomic Rules recently released Arkville, a DPDK-aware tool that provides a high-throughput connection, or conduit, between FPGA hardware and GPP (general purpose processor) software. According to the company, Arkville was designed with the specific goal of accelerating and empowering DPDK.

In this interview, Shep Siegel, founder of Atomic Rules, provides more information about the company’s products and services.

Linux.com: What does Atomic Rules do?

Shep Siegel: Atomic Rules have been providing FPGA design services since 2008. We’re experts in reconfigurable computing with FPGAs and provide our clients with effective solutions to problems involving interconnection networks and reconfigurable computing. Our practice employs scalable, rule-based methods to tackle complex concurrency among heterogeneous processors.

In 2014, we began augmenting services with IP Core products. Our first product was a UDP Offload Engine operating at 10, 25, 40, 50, 100 or 400 GbE. Today, we are launching a new product named Arkville, which is a DPDK-aware FPGA/GPP data mover that helps offload server cycles to FPGA gates. It is this new product that made it important for us to be part of the Linux Foundation.

Linux.com: How and why do you use Linux and open source?

Siegel: Linux and open source democratize the development process through API, ABI, and Interface standardization. We love the idea of common, open interfaces where everyone can then compete on quality of implementation.

Linux.com: Why did you join The Linux Foundation and the DPDK project?

Siegel: The Linux Foundation amplifies the openness and legitimacy of over a decade of DPDK development. By becoming a Linux Foundation member, our contributions will help the community flourish.

Linux.com: What interesting or innovative trends in your industry are you witnessing and what role do Linux and open source play in them?

Siegel: Linux and open source have catalyzed architectural innovation in the form of heterogeneous compute and communication. The frontiers once dividing the main families of processing devices available to systems architects (FPGA, DSP, GPP, etc.) are getting easier to cross, thanks to an expanding ecosystem of communication bridges allowing data movement from one type of processor to another more easily, at faster line rate, and with less latency.

Linux.com: How is your company participating in that innovation?

Siegel: By introducing Arkville, Atomic Rules is enabling Linux DPDK applications that seek acceleration in a software-first fashion to offload server cycles to FPGA gates. This allows project managers to bring their product to market faster and focus on differentiating their product by not having to re-invent a GPP/FPGA packet mover.

Linux.com: How has participating in the Linux and open source communities changed your company?

Siegel: It has reinforced one of our guiding aphorisms, “Interface before Implementation”!

Linux.com: Is there anything else important or upcoming that you’d like to share?

Siegel: Our Arkville launch brings five man-years of DPDK-first, software-first passion to market. If Linux kernel bypass matters to you, and if you are looking for a solution to offload server cycles to FPGA gates, please give it a look and tell us what you think!

Learn more about Linux Foundation corporate membership and see a full list of members at https://www.linuxfoundation.org/members/join.

Read More:

The Companies That Support Linux and Open Source: Pinterest

The Companies That Support Linux and Open Source: VMware

The Companies That Support Linux and Open Source: Hart