
Building Serverless Apps with Docker

Every now and then, there are waves of technology that threaten to make the previous generation of technology obsolete. There has been a lot of talk about a technique called “serverless” for writing apps. The idea is to deploy your application as a series of functions, which are called on-demand when they need to be run. You don’t need to worry about managing servers, and these functions scale as much as you need, because they are called on-demand and run on a cluster.

But serverless doesn’t mean there is no Docker – in fact, Docker is serverless. You can use Docker to containerize these functions, then run them on-demand on a Swarm. Serverless is a technique for building distributed apps and Docker is the perfect platform for building them on.
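As a minimal sketch of the idea (the image name and handler script below are hypothetical, not from the article), each function becomes a container that does its work and exits, and the scheduler runs it only when invoked:

    # Hypothetical Dockerfile: one "function" packaged as a container
    FROM python:3
    COPY handler.py /handler.py
    # Running the container invokes the function once; the container
    # exits as soon as the function returns
    ENTRYPOINT ["python", "/handler.py"]

    # Invoke the function on demand against a Swarm manager; --rm
    # discards the container after it exits
    $ docker build -t process-order .
    $ docker run --rm process-order < order.json

Because nothing runs between invocations, capacity is consumed only while a function is actually executing.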

Read more at Docker Blog

What a Virtual Network Looks Like: Planning

Virtual networks make things easier for the user at the planning level… at least in theory.

Network services don’t spring up unbidden from the earth but rather they’re coerced out of infrastructure in response to business and consumer opportunities. Every operations and management paradigm ever proposed for networking includes an explicit planning dimension to get the service-to-infrastructure and service-to-user relationships right. On the surface, virtualization would seem to help planning by reducing inertia, but don’t you then have to plan for virtualization? How the planning difficulties and improvements balance out has a lot to do with how rapidly we can expect virtualization to evolve.

What virtual networks do is disconnect “service” from “network” in at least some sense. They can do this by laying a new protocol layer on top of existing layers (the Nicira/VMware or software-defined WAN model), or by disconnecting traffic forwarding and network connectivity from legacy adaptive protocols (OpenFlow SDN and white-box switches).
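To make the second model concrete, here is a minimal sketch (the bridge name and port numbers are hypothetical) using Open vSwitch’s ovs-ofctl tool, where forwarding is programmed explicitly rather than learned through adaptive protocols:

    # Hypothetical example: install an explicit OpenFlow rule on bridge br0
    # that forwards traffic arriving on port 1 out of port 2
    $ ovs-ofctl add-flow br0 "in_port=1,actions=output:2"

    # Inspect the flow table to confirm the rule is installed
    $ ovs-ofctl dump-flows br0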

Read more at No Jitter

Xen 4.7 Open Source Linux Hypervisor Arrives with Non-Disruptive, Live Patching

Xen 4.7 arrives eight months after the release of the previous version, Xen 4.6, and it appears to be yet another major release. We expected no less from the leading open-source virtualization system, which currently powers many of the world’s best-known cloud hosting services, including AWS (Amazon Web Services), Rackspace Public Cloud, and Verizon Cloud, serving more than 10 million users.

Release highlights of Xen 4.7 include a new XL command-line interface designed to allow the use of PVUSB devices for PV guests, as well as to enable hot-plugging of USB devices for HVM guests and of QEMU disk backends…
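As a rough sketch of the new interface (the domain name and device addresses below are hypothetical, and exact option names may vary by build), hot-plugging a host USB device into a PV guest looks something like this:

    # Attach a PVUSB controller to the guest "guest1", then hot-plug the
    # host USB device at bus 1, address 3 (hypothetical addresses)
    $ xl usbctrl-attach guest1 version=2 ports=8
    $ xl usbdev-attach guest1 hostbus=1 hostaddr=3

    # List the USB devices currently assigned to the guest
    $ xl usb-list guest1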

Read more at Softpedia

OPNFV Project Scales Up Network Functions Virtualisation Ecosystem

The OPNFV Project, the Linux Foundation’s open source network functions virtualisation (NFV) platform development organisation, has announced an expansion of its R&D capabilities and an internship programme to help further develop the worldwide NFV ecosystem.

The organisation, which is currently hosting its annual OPNFV Summit in Berlin to bring together developers, end-users and upstream communities, said it was seeing clear momentum for NFV around the world.

Read more at ComputerWeekly

CI and CD at Scale: Scaling Jenkins with Docker and Apache Mesos

https://www.youtube.com/watch?v=XVE3uCRtHVs&list=PLGeM09tlguZQVL7ZsfNMffX9h1rGNVqnC

In this presentation, Carlos Sanchez shares his experience running Jenkins at scale, using Docker and Apache Mesos to create one of the biggest (if not the biggest) Jenkins clusters to date.

Finagle, linkerd, and Apache Mesos: Magical Operability Sprinkles for Microservices

https://www.youtube.com/watch?v=VGAFFkn5PiE&list=PLGeM09tlguZQVL7ZsfNMffX9h1rGNVqnC

Finagle and Mesos are two core technologies used by Twitter and many other companies to scale application infrastructure to high traffic workloads. This talk describes how these two technologies work together to form applications that are both highly scalable and resilient to failure.

Successful DevOps Deployment Involves Shift in Culture and Processes

To create sustained high performance, organizations must invest as much in their people and processes as they do in their technology, according to Puppet’s 2016 State of DevOps Report.

The 50+ page report, written by Alanna Brown, Dr. Nicole Forsgren, Jez Humble, Nigel Kersten, and Gene Kim, aimed to better understand how the technical practices and cultural norms associated with DevOps affect IT and organizational performance as well as ROI.

According to the report, which surveyed more than 4,600 technical professionals from around the world, the number of people working in DevOps teams has increased from 16 percent in 2014 to 22 percent in 2016.

Six key findings highlighted in the report showed that:

  • High-performing organizations decisively outperform low-performing organizations in terms of throughput.

  • They have better employee loyalty.

  • High-performing organizations spend 50 percent less time on unplanned work and rework.    

  • They spend 50 percent less time remediating security issues.

  • An experimental approach to product development can improve IT performance.

  • Undertaking a technology transformation initiative can produce sizeable returns for any organization.    

Specifically, in terms of throughput, high IT performers reported routinely doing multiple deployments per day and saw:

  • 200 times more frequent code deployments        

  • 2,555 times faster lead times        

  • 24 times faster mean time to recover        

  • 60 times lower change failure rate

Shift Left                        

Lean and agile product management approaches, which are common in DevOps environments, emphasize product testing and building in quality from the beginning of the process. In this approach, also known as “shifting left,” developers deliver work in small batches throughout the product lifecycle.

“Think of the software delivery process as a manufacturing assembly line. The far left is the developer’s laptop where the code originates, and the far right is the production environment where this code eventually ends up. When you shift left, instead of testing quality at the end, there are multiple feedback loops along the way to ensure that high-quality software gets delivered to users more quickly,” the report states.
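As an illustrative sketch (the pipeline definition below is hypothetical, not from the report), shifting left means each stage of the delivery pipeline acts as a feedback loop, rather than quality being checked once at the end:

    # A hypothetical pipeline definition: quality gates run as feedback
    # loops along the way instead of as a single test phase at the end
    stages:
      - name: unit-tests          # fastest loop, runs on every commit
        run: make test
      - name: integration-tests   # second loop, against a staging environment
        run: make integration-test
      - name: security-scan       # security checked continuously, not at the end
        run: make audit
      - name: deploy              # reached only when every earlier gate passes
        run: make deploy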

This idea also applies to security, which becomes an integral part of continuous delivery. “Continuous delivery improves security outcomes,” according to the report. “We found that high performers were spending 50 percent less time remediating security issues than low-performing organizations.”

For companies just getting started with DevOps, the move involves other changes as well.

“Adopting DevOps requires a lot of changes across the organization, so we recommend starting small, proving value, and using the trust you’ve gained to tackle bigger initiatives,” Alanna Brown, senior product marketing manager at Puppet and co-author of the report, said in an interview.

“We also think it’s important to get alignment across the organization by shifting the incentive structure so that everyone in the value chain has a single incentive: to produce the highest quality product or service for the customer,” Brown said. Employee engagement is key, as “companies with highly engaged workers grew revenues two and a half times as much as those with low engagement levels.”

In this year’s survey, according to Brown, most respondents reported beginning their DevOps journey with deployment automation, infrastructure automation, and version control — or all three.

“We see these practices as the foundation of a solid DevOps practice because automation gives engineers cycles back to work on more strategic initiatives, while the use of version control gives you assurance that you can roll back quickly should a failure occur,” she said. “Without these two practices in place, you can’t implement continuous delivery, provide self-service provisioning, or adopt many of new technologies and methodologies such as containers and microservices.”
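A minimal sketch of that foundation in practice (the playbook below is hypothetical, written for Ansible, which also appears later on this page): infrastructure is described as code and kept in version control, so a failed change can be rolled back by re-applying an earlier revision:

    # site.yml -- a hypothetical playbook kept under version control;
    # applying it converges the servers to the described state
    - hosts: webservers
      become: true
      tasks:
        - name: Install nginx
          apt:
            name: nginx
            state: present
        - name: Ensure nginx is running
          service:
            name: nginx
            state: started

    # To roll back after a failure, check out the last known-good
    # revision and re-apply:
    #   git checkout <good-revision> && ansible-playbook site.yml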

Build a Foundation

Ultimately, however, to be successful, DevOps must overcome “political and cultural inertia,” Brown said. “It can’t be a top-down dictate, nor can it be a purely grassroots effort.”                

The 2016 report offers some steps that can make a difference in your organization’s performance. Once you have your foundation in place, Brown said, “you’ll see all the opportunities that exist to automate manual processes… And, of course, there will be the bigger initiatives like moving workloads to a public cloud, building out a self-service private cloud, and spreading DevOps practices to other parts of the organization.”  


Docker Launches a New Marketplace for Containerized Software

At its developer conference in Seattle, Docker today announced the private beta of the Docker Store, a new marketplace for trusted and validated dockerized software.

The idea behind the store is to create a self-service portal where Docker’s ecosystem partners can publish and distribute their software as Docker images, and where users can more easily deploy these applications.

While Docker already offers its own container registry, the Docker Store is specifically geared toward the needs of enterprises. The store will provide enterprises “with compliant, commercially supported software from trusted and verified publishers, that is packaged as Docker images,” the company says, and will feature both free and paid software.

Read more at TechCrunch

ClusterHQ’s Mohit Bhatnagar Talks Flocker, Docker, and the Rise of Open Source

Container technology remains very big news, and if you bring up the topic almost everyone immediately thinks of Docker. But there are other tools that can compete with Docker, and tools that can extend it and make it more flexible. CoreOS’s Rkt, for example, is a command-line tool for running app containers. And ClusterHQ has an open source project called Flocker that allows developers to run their databases inside Docker containers, leveraging persistent storage and making data highly portable.

Each of the emerging tools in the container space has unique appeal. ClusterHQ’s Flocker is especially interesting because it marries scalable, enterprise-grade container functionality with persistent storage. Many organizations working with containers are discovering that they need dependable, scalable storage solutions to work in tandem with their applications.

“At ClusterHQ we are building the data layer for containers, enabling developers and operations teams to run not just their stateless applications in containers, but their databases, queues and key-value stores as well,” the company’s CEO Mark Davis has said.

We caught up with ClusterHQ’s Vice President of Products, Mohit Bhatnagar, for an interview. He notes that containers on their own don’t handle data very well, and that companies need to be able to run their critical data services inside containers so they can realize the full speed and quality benefits of a fully containerized architecture. He also weighed in on the prominence that open source software is gaining relative to proprietary tools.

A ‘Git-for-Data’

“We are working on expanding the capabilities of Flocker to support our growing user base for sure, but we’re also expanding beyond just production operations of stateful containers,” Bhatnagar said. “We’ve heard from our users that they want to be able to manage their Docker volumes as easily on their laptop as they can in production with Flocker.  To serve these needs, we’re working on creating ‘git-for-data,’ where a user can version control their data and push and pull it to a centralized Volume Hub. As they say, watch this space.”

Modern applications are being built from both stateless and stateful microservices, and Flocker makes it practical for entire applications, including their state, to be containerized in order to leverage the portability and massive per-server density benefits inherent in containers.
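As a rough sketch of what that looks like through Flocker’s Docker volume plugin (the volume name below is hypothetical), a stateful container is started against a Flocker-managed volume, and the data follows the container if it is rescheduled to another host:

    # Create a Flocker-managed volume, then run a database container
    # against it; "orders-db" is a hypothetical volume name
    $ docker volume create --driver flocker --name orders-db
    $ docker run -d --volume-driver flocker \
        -v orders-db:/var/lib/postgresql/data postgres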

“Flocker is the leading volume manager for Docker because it is the most advanced technically and has the broadest integrations,” Bhatnagar added. “Flocker is used at scale by enterprises such as Swisscom and by innovative startups disrupting their spaces. Our customers love Flocker because it works with all the major container managers like Docker Swarm, Kubernetes and Mesos, and integrates with all the major storage systems including Amazon, Google, EMC, NetApp, Dell, HPE, VMware, Ceph, Hedvig, Pure and more.”

Bhatnagar also discussed increasing competition in the container space. “As always, competition is great for consumers,” he said. “It leads to more choice and better products. We are excited to see standardization projects like OCI bring together Docker and CoreOS, and CNCF bring together Docker and Google Kubernetes to make sure that this competition doesn’t lead to a situation where differing standards hinder adoption.”

The Rise of Open Source

One topic that Bhatnagar is passionate about is the steady rise of open source, and its increasing popularity relative to proprietary technology.

“The open source stack and all that it engenders is driving the closed source, or proprietary, stack to be less relevant and less economically feasible,” he notes. “Take, for example, the great success of Docker with containers and its resulting ecosystem. Its popularity isn’t simply due to the fact that it’s a cool company. After all, in Silicon Valley there are lots of cool companies. It is, rather, largely a result of its open source model that reflects the ascendance of software engineers in the creation and deployment of software. And open source Docker is giving closed source VMware a headache as a result.”

“For the first time in our information technology age, we can now build an entire infrastructure stack composed of x86 architecture, commodity components and an open source stack,” he added. “The fastest growth, as we know, is happening among open source companies. Developers today play a far more influential role in application development as the monolithic architectures break down. Demand for microservices, developer-centric workflows, containers, open source, big data are all part of the larger current driving information technology today.”

Red Hat Composes Ansible to Help Build Containers

Red Hat expands its DevOps platform to enable developers to more easily build their own containers.

Red Hat is expanding its open-source Ansible platform with a new module called Ansible Container that enables organizations to build and deploy containers. Ansible is a DevOps automation platform technology that Red Hat acquired in October 2015.

A popular option of many Docker container developers today is to use the Docker Compose tool to build containers. The new Ansible Container effort isn’t necessarily competitive with Docker Compose; in fact, it can be used in a complementary way, according to Greg DeKoenigsberg, director of Ansible Community with Red Hat. Developers don’t have to stop using Docker Compose; rather they can literally copy and paste or reference Docker Compose right from an Ansible playbook with Ansible Container, he said.
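As a rough sketch of the workflow (the file contents below are illustrative, not taken from the article), Ansible Container describes services in a Compose-style container.yml, while an ordinary playbook provisions the images:

    # container.yml -- a hypothetical Compose-style service definition
    version: "1"
    services:
      web:
        image: ubuntu:14.04
        ports:
          - "8080:80"

    # main.yml -- a hypothetical playbook that provisions the "web" image
    - hosts: web
      tasks:
        - name: Install Apache
          apt:
            name: apache2
            state: present

With those files in place, running ansible-container build and ansible-container run would build the image and start the service.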

Read more at eWeek