
Docker VP Marianna Tessel: What’s Next For Containers

Linux container technology has evolved rapidly over the past year as adoption expands beyond large web companies to become the de facto way organizations are building distributed applications today. The technology has become more sophisticated to support multi-container, multi-host applications, and has even expanded beyond Linux to the Windows architecture, says Marianna Tessel, Senior Vice President of Engineering at Docker.

Docker, too, has evolved to meet its customers’ needs through both its commercial and open source projects.

“Docker containers initially started out as a developer tool and have evolved to incorporate the features and capabilities users need to deploy container technology in production,” Tessel said.

Docker is also now participating in the Open Container Project, a Linux Foundation Collaborative Project to create open industry standards around container formats and runtimes.

What’s next for container technology? Tessel will present her view in a keynote session at LinuxCon, CloudOpen and ContainerCon North America, Aug. 17-19, 2015. Here she discusses container technology as it exists today, how it has changed, and the role that the Open Container Project will play in advancing container technology in the coming months and years.

Linux.com: What is the state of container technology today? Where is it succeeding and what are its challenges?

Marianna Tessel: In a short period of time, container technology has rapidly evolved to affect the way users and companies build, ship and run distributed applications. Containers have transformed the capabilities of developers and the companies that they work for – increasing productivity while reducing cost.  

To give a couple of examples: companies like ING are able to move faster through their development pipelines using Docker. In ING’s case, the company went from a monolithic application whose code changes took months to reach production, to 300 changes a day that go from code commit to production in 15 minutes. Other organizations are using container technology to streamline their legacy application architectures into a more agile microservices environment. Booz Allen is working with a large federal agency to create a secure DevOps framework for application development teams as they evolve legacy applications into distributed applications running in the cloud. These applications are used in managing the government-wide systems for those who award, administer, or receive federal financial assistance contracts and intergovernmental transactions. To create a unified developer experience and provide a uniform set of tooling and shared content, this large government agency is using container technology to break up these applications into microservices.

The biggest challenge with container technology is probably the rapid rate of adoption. Uptake is faster than anyone could have imagined, so Docker’s ecosystem has had to evolve rapidly. Users and organizations want a way to maintain a seamless experience through the development lifecycle. As applications become more sophisticated and containers more widely adopted, the ecosystem is evolving as well, offering more tooling and options such as networking, storage, and monitoring.

Linux.com: Is security still an issue for containers? Why or why not?

Tessel: It is not about securing the container; it is about securing the application. Container technology actually provides another layer of protection for applications by isolating the application from the host and from other applications, without consuming additional resources from the underlying infrastructure, and by reducing the attack surface area of the host itself. Docker, for example, does this by leveraging and providing a usable interface to numerous security features in the Linux kernel. The security attributes of containers are well recognized, and even banking institutions such as Capital One are containerizing some of their critical applications.

Security will continue to be a topic of innovation. As applications are continually changing, the best methods for securing them will need to evolve as well. Docker is continuing to hone its security capabilities and techniques, evolving from developer tooling to more sophisticated solutions that operations teams use in production. Docker Notary is designed to serve as a filter for the distribution of containers and Docker-related content in a project, including and especially in the production phase. This way, only digitally signed content that has been entered into Notary’s registration system gets passed into production. Organizations using containers also need to ensure that they are developing in accordance with industry best-practice recommendations. The Docker Bench for Security tool is a helpful utility that automates validating a host’s configuration against the CIS Benchmark recommendations.
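As a rough sketch of what running that audit looks like in practice (assuming a Linux host that already has Docker and git installed; the repository location is the project’s public GitHub home), Docker Bench for Security is typically cloned and run directly on the host:

```shell
# Fetch the audit script, which checks the host's Docker setup
# against the CIS Docker Benchmark recommendations
git clone https://github.com/docker/docker-bench-security.git
cd docker-bench-security

# Run as root so the script can inspect the Docker daemon configuration;
# results are printed as INFO/WARN/PASS lines per benchmark check
sudo sh docker-bench-security.sh
```

The script only reports findings; acting on the warnings (daemon flags, file permissions, and so on) is left to the operator.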

Linux.com: How has container technology changed over the past year?

Tessel: Container technology has evolved in both breadth and depth over the last year, becoming the de facto standard for organizations to build, ship and run distributed applications. Docker containers initially started out as a developer tool and have evolved to incorporate the features and capabilities users need to deploy container technology in production. Containers have become more sophisticated and widely deployed, expanding from a technology capable of managing single-container applications to one that handles multi-container, multi-host distributed applications. As a result, the type of organizations using container technology has expanded beyond bleeding-edge web companies. We continue to see new use cases, such as “Container as a Service” and big data analysis applications. Finally, one of the most significant changes in container technology is the multi-architecture expansion of containers beyond Linux and Solaris to also include Windows.
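To make the multi-container idea concrete, here is a minimal sketch of a two-service application in the Docker Compose v1 file format of the period (the service names and the web image are illustrative, not taken from the interview):

```yaml
# docker-compose.yml — hypothetical app: a web front end linked to a Redis backend
web:
  image: example/webapp:latest   # illustrative image name, not a real published image
  ports:
    - "8000:8000"                # expose the app on the host
  links:
    - redis                      # v1-style link so "redis" resolves inside the web container
redis:
  image: redis:2.8               # stock Redis image from Docker Hub
```

A single `docker-compose up` would then start both containers together, which is the kind of multi-container workflow the answer above describes.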

Linux.com: What role do you see the new Open Container Project playing in advancing container technology?

Tessel: Users can fully commit to container technologies today without worrying that their current choice of any particular infrastructure, cloud provider, DevOps tool, etc. will lock them into any technology vendor for the long run. With one common standard, users can focus on choosing the best tools to build the best applications they can. Equally important, they will benefit from having the industry focus on innovating and competing at the levels that truly make a difference. Ultimately, the OCP will ensure that the original promise of containerization (portability, interoperability, and agility) isn’t lost as we move to a world of applications built from multiple containers, run using a diverse set of tools across a diverse set of infrastructures.

Linux.com: How will Docker contribute to the new collaborative project?

Tessel: Docker is donating to the OCP both a draft specification for the base format and runtime, and the code associated with a reference implementation of that specification. Docker has taken the entire contents of the libcontainer project (github.com/docker/libcontainer), including nsinit, and all modifications needed to make it run independently of Docker, and donated it to this effort. This codebase, called runC, can be found at github.com/opencontainers/runc. libcontainer will cease to operate as a separate project. Docker will also contribute maintainers to the effort alongside CoreOS, Red Hat, and Google, as well as two independent developers.

Marianna Tessel has over 20 years of experience in engineering and leadership, having worked for both large organizations and startups. She now runs the engineering organization at Docker, which actively contributes to the open source project and is also responsible for Docker’s commercial offerings. Before joining Docker, she was VP of engineering at VMware, having led a team of hundreds of engineers and was responsible for developing various VMware vSphere subsystems. She is known for catalyzing tremendous technology ecosystem growth and was included on the 2013 Business Insider Top 25 Most Powerful Women Engineers in Tech list.

Register now for LinuxCon North America, to be held Aug. 17-19, 2015 at the Sheraton Seattle.

Feral Games Is Now Teasing A New Linux Game Port

Feral Interactive, the company that has ported games such as XCOM: Enemy Unknown and Empire: Total War to Linux and is working on the Batman: Arkham Knight port, is teasing another upcoming Linux / OS X game release…

Read more at Phoronix

The New Solus: Putting the Pieces Together Again

The Solus Project is a rebranded and rereleased Linux distro trying to regain its former popularity. In a field of Linux distributions cluttered with look-alike offerings, Solus brings something simple and something new. Solus has impressive potential for being uncomplicated and different. Based in the UK, the Solus Project is the latest iteration of SolusOS, which morphed into Evolve OS. The new Solus is not a complete port of the old Evolve OS. Other than the built-from-scratch Budgie desktop, much of it appears to be gutted.

Read more at LinuxInsider

Researcher Lashes Out at Hacking Team Over Open-Source Code Discovery

When the researcher released his code as open-source, Android spyware development for governments was not its intended purpose.

Read more at ZDNet News

Why I created Open Source Protocol

I recently launched the Open Source Protocol (OS Protocol), a standard that can be used to link to where the code for a website is hosted. The protocol is fairly simple—all it involves is metatags, and most websites will only need two or three lines of code to be compliant.
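To give a sense of scale, a compliant page would carry something like the following in its `<head>` (the tag names below are illustrative placeholders, since the exact attribute names are defined by the OS Protocol spec itself, not by this summary):

```html
<head>
  <!-- Hypothetical OS Protocol-style meta tags: point readers at the code behind this site.
       The "os:" names here are illustrative, not confirmed against the spec. -->
  <meta name="os:repo" content="https://github.com/example/my-website">
  <meta name="os:rcs_type" content="git">
</head>
```

This matches the claim above: two or three lines of static metadata, with no server-side changes required.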


Read more at OpenSource.com

Robolinux 8.1 Cinnamon Runs Windows 10 Inside the OS

Robolinux is a Linux distribution based on Debian that features various flavors and that allows its users to run Windows apps via a virtual machine solution. Now the developer has released the first version in the 8.x branch, and it’s powered by a Cinnamon desktop.

This is the first edition of Robolinux that is based on the new Debian 8, and it looks like the developers also chose to make Cinnamon the default desktop. The previous branch was using GNOME, but it’s not sure i… (read more)

Git 2.5 Is On Approach With Many Changes

Junio Hamano announced the release of Git 2.5.0-rc3 on Tuesday; the 2.5 series is made up of more than 500 commits since Git 2.4.0…

Read more at Phoronix

Valve and HTC’s Vive Stand at the Precipice of VR’s Future, But They May Have a Long Wait

The small piece of blue tape is almost lost in the pattern of the carpet in the darkened hotel room.

There is no furniture, no lights, just a bit of illumination coming from the adjacent, connected room.

I shift my weight from left to right foot and inch a bit closer to the blue tape.

“You don’t have to be right on it.”


Read more at The Verge

Linux Kernel 4.1.3 LTS Is Now the Most Advanced Version Available

The latest version of the stable Linux kernel, 4.1.3, has been made available by Greg Kroah-Hartman, which means that this is now the most advanced version released.

It’s been a while since the 4.x branch was released, and now it’s coming into its own. This particular branch has already received quite a few updates, and it’s been integrated with numerous distros already. The fact that it’s the most advanced out there is also a good incentive to use it if you want the best s… (read more)

Enterprises Can Now Run SUSE on 64-bit ARM Servers

As power consumption has become the biggest challenge for data centers, many companies have looked to the sun (read: solar power) to cut costs and reduce carbon pollution. A much simpler solution lies in the chips powering these server farms, however.

Now that 64-bit ARM architecture is a reality, low-power ARM chips are finally becoming a viable contender in the server space and there are signs the tech industry has started to adopt them. In April PayPal, for example, deployed servers running on 64-bit ARM processors in what will perhaps become the model for other big companies to follow.

Another sign of the times is that Linux companies, which dominate the server space, are also responding to the market trend. One member of the trinity of the Linux world, SUSE, has launched a partner program to bring SUSE Linux Enterprise 12 to 64-bit ARM processors.

“The expansion of our program to include the 64-bit ARM code allows our partners to develop, validate and move towards shipping solutions that utilize 64-bit ARM technology in combination with a stable and supportable enterprise Linux operating system,” according to Senior Technical Strategist David Byte and Senior Director of Product Management and Operations Gerald Pfeifer of SUSE. “The ARM platform provides a common base on which various chip vendors can build and differentiate. We are offering these vendors the opportunity to jointly work on these solution designs, provide a solid base, and optimize per individual objectives.”

SUSE is ARMed

Under SUSE’s ARM program the company will offer partners a version of SUSE Linux Enterprise 12 so they can develop, test and deliver products to the market using 64-bit ARM chips from vendors including AMD, AppliedMicro and Cavium along with server manufacturers Dell, HP, Huawei and SoftIron. The goal is to provide customers with more choice, flexibility, and opportunities to save on their technology infrastructure.

When I asked if 64-bit ARM is fully supported by SUSE, Byte and Pfeifer said, “We have had SUSE Linux Enterprise successfully running on 64-bit ARM in our labs for a while, and openSUSE (the “upstream” of SUSE Linux Enterprise) is running on this platform, as well.”

“SUSE is a historical innovator, and this program brings the same benefits and interaction to the ARM AArch64 ecosystem that our partners providing x86-64, Power and System z solutions already experience,” said Ralf Flaxa, SUSE vice president of engineering, in a press statement.

Is ARM on servers as efficient as x86?

ARM’s efficiency is still an area of debate, as there have not been many large-scale ARM deployments before PayPal’s. We really don’t know how well ARM chips perform against traditional x86 chips.

When I asked SUSE about the primary advantages of the 64-bit ARM architecture over x86-64, the company refrained from getting into specifics. Byte and Pfeifer said, “This is a question best answered by ARM and chip vendors building on their designs. At SUSE, our approach has always been to support as well as possible whatever hardware platform a customer wants to use, more so than suggesting which platform to actually use. The most interesting difference between 64-bit ARM and x86-64 is that licensees of the ARM architecture can add their own IP to the silicon. This could be networking, encryption engines or other technology that benefits a particular solution.”

They did make clear that ARM users will not miss any features as compared to the x86-64 platform. Byte and Pfeifer said, “Our approach always has been to treat and enable platforms on equal footing as much as possible (e.g., we do not feature sound support on mainframes).”

Same performance for half the price?

Contrary to popular belief, PayPal was not taking a performance hit by moving from x86 to Applied Micro’s X-Gene. Mentioning the deployment, Dr. Paramesh Gopi, CEO of ARM chip maker Applied Micro, said during an investor conference, “The X-Gene equipped units cost approximately one-half the price of traditional data center infrastructure hardware and incurred only one-seventh of the annual running cost. Even with these dramatically favorable capital and operating expense reductions, the X-Gene equipped systems delivered performance equivalent to the incumbent infrastructure.”

In a nutshell, ARM is cheaper than x86 without compromising performance. Putting its trust in ARM, SUSE said in the press statement, “ARM server processors provide a scalable technology platform that can be configured to meet diverse business and application needs in the data center, such as efficient web-scale workloads and rapid cloud build out.”

Enterprise customers are interested in ARM

Byte and Pfeifer told me, “We are seeing interest across several key segments of the industry. Of particular note would be the strength of interest from hardware and system vendors around diverse scenarios ranging from high-performance computing and ‘regular’ data center computing to software-defined storage, as well as telecom. We are also seeing interest from the cloud and appliance builders.”

SUSE sees immense possibilities in different sectors. They believe partners can take advantage of SUSE Linux Enterprise support for ARM processors in various market areas, including purpose-built appliances, such as security, medical and network devices; hyperscale computing; distributed storage; and software-defined and classic networking.

Engaging the community for 64-bit

The openSUSE Build Service (OBS) is one of the lesser-known but excellent services offered by the company; it is used even by competitors and by many minor and major open source projects. SUSE is using OBS to simplify partner access and has implemented support for ARM and AArch64 in OBS. This support will enable the open source community to build packages against real 64-bit ARM hardware and the SUSE Linux Enterprise 12 binaries, improving time to market and compatibility for AArch64 solutions.

This will directly benefit end users, as it will take partners less time to build, test, and release products.

However, the program is not for everyone. Byte and Pfeifer said, “The target audience is slightly biased toward partners who wish to build solutions based on 64-bit ARM technology. In a more traditional sense, these would mostly be considered appliances, but we are not restricting it to just that demographic.”

SUSE is offering this program for free to new and existing partners who focus on ARM-based platforms. If you are interested in participating, you need to be a member of SUSE’s PartnerNet program.