At its developer conference in Seattle, Docker today announced the private beta of the Docker Store, a new marketplace for trusted and validated dockerized software.
The idea behind the store is to create a self-service portal where Docker’s ecosystem partners can publish and distribute their software through Docker images, and where users can more easily deploy these applications.
While Docker already offers its own container registry, the Docker Store is geared specifically toward the needs of enterprises. The store, the company says, will provide enterprises “with compliant, commercially supported software from trusted and verified publishers, that is packaged as Docker images,” and will feature both free and paid software.
Container technology remains very big news, and if you bring up the topic, almost everyone immediately thinks of Docker. But there are other tools that can compete with Docker, and tools that can extend it and make it more flexible. CoreOS’s Rkt, for example, is a command-line tool for running app containers. And ClusterHQ has an open source project called Flocker that allows developers to run their databases inside Docker containers, leveraging persistent storage and making data highly portable.
Each of the emerging tools in the container space has unique appeal. ClusterHQ’s Flocker is especially interesting because it marries scalable, enterprise-grade container functionality with persistent storage. Many organizations working with containers are discovering that they need dependable, scalable storage solutions to work in tandem with their applications.
“At ClusterHQ we are building the data layer for containers, enabling developers and operations teams to run not just their stateless applications in containers, but their databases, queues and key-value stores as well,” the company’s CEO Mark Davis has said.
Mohit Bhatnagar, ClusterHQ’s Vice President of Products
We caught up with ClusterHQ’s Vice President of Products Mohit Bhatnagar for an interview, and he notes that containers don’t handle data very well, and that companies need to run their critical data services inside containers so they can realize the full speed and quality benefits of a fully containerized architecture. He also weighed in on the prominence that open source software is gaining relative to proprietary tools.
A ‘Git-for-Data’
“We are working on expanding the capabilities of Flocker to support our growing user base for sure, but we’re also expanding beyond just production operations of stateful containers,” Bhatnagar said. “We’ve heard from our users that they want to be able to manage their Docker volumes as easily on their laptop as they can in production with Flocker. To serve these needs, we’re working on creating ‘git-for-data,’ where a user can version control their data and push and pull it to a centralized Volume Hub. As they say, watch this space.”
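In practice, the workflow Bhatnagar describes might look something like the following. These commands are purely hypothetical; they illustrate the “git-for-data” idea rather than any shipping Flocker or Volume Hub interface:

```shell
# Hypothetical commands -- git-like semantics applied to Docker volumes,
# not an actual Flocker CLI.

# Capture a point-in-time version of a database volume on a laptop
flocker volume snapshot pg-data --tag v1

# Publish the snapshot to a central Volume Hub, analogous to `git push`
flocker volume push volumehub.example.com/team/pg-data:v1

# A teammate (or a production host) pulls the same versioned data down
flocker volume pull volumehub.example.com/team/pg-data:v1
```

The appeal is the same as with source control: the volume, not the machine it lives on, becomes the unit you reason about.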
Modern applications are being built from both stateless and stateful microservices, and Flocker makes it practical for entire applications, including their state, to be containerized in order to leverage the portability and massive per-server density benefits inherent in containers.
“Flocker is the leading volume manager for Docker because it is the most advanced technically and has the broadest integrations,” Bhatnagar added. “Flocker is used at scale from enterprises such as Swisscom to innovative startups disrupting their spaces. Our customers love Flocker because it works with all the major container managers like Docker Swarm, Kubernetes and Mesos, and integrates with all the major storage systems including Amazon, Google, EMC, NetApp, Dell, HPE, VMware, Ceph, Hedvig, Pure and more.”
Bhatnagar also discussed increasing competition in the container space. “As always, competition is great for consumers,” he said. “It leads to more choice and better products. We are excited to see standardization projects like OCI bring together Docker and CoreOS, and CNCF bring together Docker and Google Kubernetes to make sure that this competition doesn’t lead to a situation where differing standards hinder adoption.”
The Rise of Open Source
One topic that Bhatnagar is passionate about is the steady rise of open source, and its increasing popularity relative to proprietary technology.
“The open source stack and all that it engenders is driving the closed source, or proprietary, stack to be less relevant and less economically feasible,” he notes. “Take, for example, the great success of Docker with containers and its resulting ecosystem. Its popularity isn’t simply due to the fact that it’s a cool company. After all, in Silicon Valley there are lots of cool companies. It is, rather, largely a result of its open source model that reflects the ascendance of software engineers in the creation and deployment of software. And open source Docker is giving closed source VMware a headache as a result.”
“For the first time in our information technology age, we can now build an entire infrastructure stack composed of x86 architecture, commodity components and an open source stack,” he added. “The fastest growth, as we know, is happening among open source companies. Developers today play a far more influential role in application development as the monolithic architectures break down. Demand for microservices, developer-centric workflows, containers, open source, and big data is part of the larger current driving information technology today.”
Red Hat expands its DevOps platform to enable developers to more easily build their own containers.
Red Hat is expanding its open-source Ansible platform with a new module called Ansible Container that enables organizations to build and deploy containers. Ansible is a DevOps automation platform technology that Red Hat acquired in October 2015.
A popular option of many Docker container developers today is to use the Docker Compose tool to build containers. The new Ansible Container effort isn’t necessarily competitive with Docker Compose; in fact, it can be used in a complementary way, according to Greg DeKoenigsberg, director of Ansible Community with Red Hat. Developers don’t have to stop using Docker Compose; rather they can literally copy and paste or reference Docker Compose right from an Ansible playbook with Ansible Container, he said.
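At the time, the ansible-container tooling exposed a small set of subcommands, so a typical session looked roughly like this (a sketch based on the project’s early tech preview, so exact flags may have differed):

```shell
# Install the tech-preview tool (assumes pip is available)
pip install ansible-container

# Scaffold a project: creates ansible/container.yml (Compose-format service
# definitions) and ansible/main.yml (the playbook that provisions the images)
ansible-container init

# Run the playbook inside a build container to produce Docker images,
# then start the services described in container.yml
ansible-container build
ansible-container run

# Push the finished images to a registry for deployment
ansible-container push
```

The complementary relationship DeKoenigsberg describes follows from the file layout: container.yml speaks the same dialect as docker-compose.yml, so an existing Compose file can be carried over largely intact.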
A recurring theme in our MesosCon North America 2016 series is solving difficult resource provisioning problems. The days of spending weeks spec’ing, acquiring, and setting up hardware and software to meet increased workloads are long gone. Now we see vast provisioning adjustments taking place in seconds.
Twitter invented something they call “magical operability sprinkles” to handle Twitter’s wildly varying workload demands, which spike from little activity to millions of tweets per minute. These magic sprinkles are built on Finagle, linkerd, and Apache Mesos, and magically provide both massive scalability and reliability.
CloudBees took on the challenge of building a giant Jenkins cluster, possibly the largest one in existence, using Docker and Mesos. They run Jenkins masters in Docker containers and spin up Jenkins slaves on demand. It is a clever structure that solves the difficult problems of scaling Jenkins and of providing isolation for multiple discrete users.
What happens when a container exceeds its memory quota?
Finagle, linkerd, and Apache Mesos: Magical Operability Sprinkles for Microservices
Oliver Gould, CTO of Buoyant
Back in the olden days of Twitter (2010), getting more hardware resources involved bribes of whiskey to the keeper of the hardware, because resources were so scarce. It was an acute problem; Twitter was growing rapidly and was not keeping up with the growth. Consequently, they suffered frequent outages, to the point that during the 2010 World Cup Twitter staff were chanting “Please no goals. Please no goals.” Giant spikes could happen at any time, so their most pressing problem was “How do you provision for these peaks in a way that doesn’t cost you way too much money, and still keep the site up when this happens?”
Oliver Gould explains Twitter’s approach to building both scalability and reliability. “This is a quote from a colleague of mine at Twitter, Marius Eriksen. ‘Resilience is an imperative. Our software runs on the truly dismal computers we call data centers. Besides being heinously complex … they are unreliable and prone to operator error.’ Think about this a second. If we could run a mainframe, one big computer, and it never would fail, why wouldn’t we use that? … We can’t do that. That’s way too expensive to do. Instead, we’ve built these massive data centers, with all commodity hardware and we expect it to fail continuously. That’s the best we can do in computing. We can build big data center computers out of crappy hardware, and we’re going to make that work.”
Another problem is slowness. “’It’s slow’ is absolutely the hardest problem you’ll ever debug. How do we think about slowness in a distributed system? Here, we have 5 services talking to 4 services, or however many. When one of these becomes slow, this isn’t proportional to the slowness downstream. This spreads like wildfire…Load balancing is probably the sharpest tool we have for this,” Gould says.
Microservices and Finagle are key to solving these problems. A microservice is not necessarily small in size, and it may require a lot of CPU or memory. Rather, it is small in scope, doing only one thing and doing it well. So, instead of writing giant complex applications, Twitter engineers can quickly write, test, and deploy microservices. Finagle is a high-concurrency framework that manages scheduling, service discovery, load balancing, and all the other tasks that are necessary to orchestrate all these microservices.
CI and CD at Scale: Scaling Jenkins with Docker and Apache Mesos
Carlos Sanchez, CloudBees
The Jenkins Continuous Integration and Continuous Delivery automation server is a standard tool in shops everywhere. Jenkins is very adaptable for all kinds of workloads. For example, a software company could integrate Jenkins with Git, GitHub, and their download servers to automate building and publishing their software, its documentation, and their web site.
Scaling Jenkins, Sanchez notes, involves tradeoffs. You can use a single master with multiple build agents, or multiple masters. With a single master, “the problem is that the master is still a single point of failure. There’s a limit on how many build agents can be attached. Even if you have more and more slaves, there’s going to be a point where the master can’t keep up, or you’re going to have a humongous master… The other option is having more masters. The good thing is that you can have multiple organizations, departments with their own Jenkins master. They can be totally independent. The problem, obviously, is you now need single sign-on. You need central configuration and operation. You need a view over how to operate all these Jenkins masters that you run.”
“What we built was something like the best of both worlds… We have the CloudBees Jenkins Operation Center with multiple masters, plus dynamic slave creation for each master.”
The CloudBees team built their Jenkins Operation Center with Mesosphere Marathon, and installed the Mesos cluster with Terraform. Other components are Amazon Web Services, Packer for building the machine images, OpenStack, Marathon for container management, and several more tools. They had to solve permissions management, storage management, memory management, and several other complexities. The result is a genuine Jenkins cluster for multiple independent users that scales on demand.
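Under Marathon, a Jenkins master is, at bottom, just an app definition that Marathon schedules onto the Mesos cluster. A minimal sketch of such a definition might look like this (the app ID, image name, and resource sizes are illustrative, not CloudBees’ actual configuration):

```json
{
  "id": "/jenkins/master-team-a",
  "instances": 1,
  "cpus": 1.0,
  "mem": 2048,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "jenkins:lts",
      "network": "BRIDGE",
      "portMappings": [
        { "containerPort": 8080, "hostPort": 0 }
      ]
    }
  }
}
```

Marathon hands this to Mesos, which finds an agent with the requested CPU and memory and launches the container; a host port of 0 lets Mesos assign a free port, which is what makes packing many independent masters onto shared hardware practical.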
And, watch this spot for more blogs on ingenious and creative ways to hack Mesos for large-scale tasks.
MesosCon Europe 2016 offers you the chance to learn from and collaborate with the leaders, developers and users of Apache Mesos. Don’t miss your chance to attend! Register by July 15 to save $100.
Apache, Apache Mesos, and Mesos are either registered trademarks or trademarks of the Apache Software Foundation (ASF) in the United States and/or other countries. MesosCon is run in partnership with the ASF.
The modern tech business is all about networking infrastructure. For a leading company, the power to communicate effectively with its IT assets is vital. However, that same networking can be a wall to the development process: how does a team develop for an environment that is always shifting and changing? Removing the networking concern is a top priority for any business that wants to be efficient and agile.
“The network has to get out of the way to create developer efficiency; how do you make that happen?” Sunil Khandekar asked, opening the conversation. He explained how companies have dynamic environments, but there was a way for networking to tie those environments seamlessly together. A network-based automation platform allows any workload to come together for quick application deployment.
Last year Red Hat, which is mostly known for selling Linux to the enterprise, became the first $2 billion open source company. Now it wants to be the first to $5 billion, but it might not be just Linux that gets it there.
A couple of years ago, Red Hat CEO Jim Whitehurst recognized, even in the face of rising revenue, that the company couldn’t keep growing forever on Red Hat Enterprise Linux (RHEL) alone. As successful as RHEL had been, the world was changing, and his company, like so many enterprise-focused companies, had to change too or risk being left behind.
To quote that old Microsoft commercial: “To the cloud!”
Fedora 24’s general release was announced yesterday, and many users are now hoping to upgrade to the latest edition of the popular Linux distribution. In this how-to guide, we shall look at the…
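For users coming from Fedora 23, the officially recommended path was the dnf system-upgrade plugin; the steps below are a sketch of that procedure (back up your system first):

```shell
# Bring the current release fully up to date
sudo dnf upgrade --refresh

# Install the system-upgrade plugin
sudo dnf install dnf-plugin-system-upgrade

# Download all Fedora 24 packages, then reboot into the offline upgrade
sudo dnf system-upgrade download --releasever=24
sudo dnf system-upgrade reboot
```

The machine reboots into a minimal environment, applies the downloaded packages, and then boots into Fedora 24.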
Linux developers are going to have more than one choice for building secure, cross-distribution applications.
Ubuntu’s “snap” applications recently went cross-platform, having been ported to other Linux distros including Debian, Arch, Fedora, and Gentoo. The goal is to simplify packaging of applications. Instead of building a deb package for Ubuntu and an RPM for Fedora, a developer could package the application as a snap and have it installed on just about any Linux distribution.
But Linux is always about choice, and snap isn’t the only contender to replace traditional packaging systems. Today, the developers of Flatpak (previously called xdg-app) announced general availability for several major Linux distributions, with a pointer to instructions for installing on Arch, Debian, Fedora, Mageia, and Ubuntu.
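From a packager’s point of view the two formats feel similar: describe the app once, build a bundle, and install it on any distro that carries the runtime. Roughly (the remote name, URL, and app IDs below are examples drawn from the projects’ documentation of the period, so treat them as illustrative):

```shell
# Snap: build from a snapcraft.yaml in the current directory,
# then sideload the result on any distro with snapd installed
snapcraft
sudo snap install ./myapp_0.1_amd64.snap --dangerous

# Flatpak: add a remote repository, then install an app from it
flatpak remote-add --from gnome https://sdk.gnome.org/gnome.flatpakrepo
flatpak install gnome org.gnome.gedit
```

In both cases the application ships with (or declares) its dependencies, which is what frees the developer from building a separate deb and RPM per distribution.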
Over the past month, the Hyperledger project has coalesced further and is on the prowl for more contributors.
A joint proposal between IBM and Digital Asset has now become “Fabric,” an incubator-level project (under active development but not yet production-ready) that the two hope will form the foundation code base of Hyperledger.
Today, June 21, 2016, the Fedora Project announced the general availability of the final release of the Fedora 24 Linux operating system for desktops, servers, cloud, and embedded devices.
Delayed four times during its development cycle, the Fedora 24 distribution is finally available to download today. It looks like it ships with the usual Fedora Workstation, Fedora Server, and Fedora Cloud variants, as well as the official Fedora Spins with the Xfce, LXDE, KDE, MATE/Compiz, Cinnamon, and Sugar desktops.
Of course, users will also be able to get their hands on the Fedora 24 Labs Spins, which include Design Suite, Games, Robotic Suite, Scientific, and Security Lab. Under the hood, all the aforementioned editions and spins are shipping with the same core components, namely Linux kernel 4.5.7 and GNU C Library version 2.23.
Users should be aware that Linux kernel 4.5.7 is the last release in the Linux 4.5 series…