
Testing LXD, Canonical’s Container Hypervisor for Linux

Canonical is betting that LXD, which it calls the “pure-container hypervisor,” can beat VMware, KVM and other traditional hypervisors. To see for myself, I recently gave it a whirl. Here’s what I found.

By “pure-container hypervisor,” Canonical means a hypervisor that works by creating containers running on top of the host system, just like Docker. There is no hardware emulation involved. Because LXD containers have much less overhead than traditional virtual machines, they can theoretically support many more guest operating systems than traditional hypervisors, while also delivering better performance.
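For readers who want to try it themselves, here is a minimal sketch of a first LXD session, assuming Ubuntu 16.04 (which ships with LXD); the container name web1 is just a placeholder:

    # one-time setup: configure storage and networking for the LXD daemon
    sudo lxd init

    # launch an Ubuntu 16.04 container from the public image remote
    lxc launch ubuntu:16.04 web1

    # list containers and open a shell inside the new one
    lxc list
    lxc exec web1 -- bash

    # stop and remove the container when finished
    lxc stop web1
    lxc delete web1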

Read more at Container Journal

OPNFV Demonstrates How to Cut the Cord from Proprietary Hardware Designs

There are few things more frustrating than a dropped call. At the keynote during the OpenStack Barcelona 2016 conference, the folks behind the Open Platform for Network Functions Virtualization (OPNFV) project demonstrated, in dramatic fashion, the resilience of Network Functions Virtualization (NFV) technology and how it could minimize such call drops.

During the keynote, technical engineer Ildiko Vancsa made a cell phone call to OPNFV Director Heather Kirksey, using a set of 5G equipment on stage running OPNFV (an open source implementation of NFV) on top of OpenStack. The call remained intact even though OpenStack Chief Operating Officer Mark Collier, also on stage, started randomly cutting cables to the 5G gear, “Chaos Monkey” style.

Read more at The New Stack

Radio Free Linux

Do a web search for “Linux radio station”, and the pickings are slim indeed, with most sites promoting instead ham radio software or streaming audio players, and a handful devoted to setting up a streaming web radio station—including one such optimistic article in Linux Journal some 15 years ago (see “Running a Net Radio Station with Open-Source Software”, January 2001).

Unfortunately, much of this documented interest took place a decade or more in the past via domains like opensourceradio.com that are no longer with us. A few projects persevere, but a good number of postings are similarly dated. The fact is, there are more Linux-based ways to stream and listen to radio stations than there are to broadcast and control them.

Read more at Linux Journal

Trends in the Open Source Cloud: A Shift to Microservices and the Public Cloud

Cloud computing is the cornerstone of the digital economy. Companies across industries now use the cloud — private, public or somewhere in between — to deliver their products and services.

A recent survey of industry analyses and research that we conducted for our 2016 Guide to the Open Cloud report produced overwhelming evidence of this.

Forty-one percent of all enterprise workloads are currently running in some type of public or private cloud, according to 451 Research. That number is expected to rise to 60 percent by mid-2018. And Rightscale reports that some 95 percent of companies are at least experimenting in the cloud. Enterprises are continuing to shift workloads to the cloud as their expertise and experience with the technology increases.

As we mentioned last week, companies in diverse industries — from banking and finance to automotive and healthcare — are facing the reality that they’re now in the technology business. In this new reality, cloud strategies can make or break an organization’s market success. And successful cloud strategies are built on Linux and open source software.

But what does that cloud strategy look like today and what will it look like in the future?

Short Term: Hybrid Cloud Architectures

While deployment and management remain a challenge, microservices architecture is now becoming mainstream. In a recent Nginx survey of 1,800 IT professionals, 44 percent said they’re using microservices in development or in production. Adoption was highest among small and medium-sized businesses. Not coincidentally, the use of public cloud is also predominant among SMBs, which are more nimble and faster to respond to market changes than large enterprises with legacy applications and significant on-premise infrastructure investments.   

Many reports tout hybrid cloud as a fast-growing segment of the cloud. Demand is growing at a compound rate of 27 percent, “far outstripping growth of the overall IT market,” according to researcher MarketsandMarkets. And IDC predicts that more than 80 percent of enterprise IT organizations will commit to hybrid cloud architectures by 2017.

However, hybrid cloud growth is happening predominantly among large enterprises with legacy applications and the budget and staffing to build private clouds. They turn to cloud for storage and scale-out capabilities, but keep most critical workloads on premise.  

In the mid-market, hybrid cloud adoption stands at less than 10 percent, according to 451 Research. Hybrid cloud is, then, a good transition point for legacy workloads and experimenting with cloud implementation. But it suffers from several challenges with more advanced cloud implementations, including management complexity and cost.

“Most organizations are already using a combination of cloud services from different cloud providers. While public cloud usage will continue to increase, the use of private cloud and hosted private cloud services is also expected to increase at least through 2017. The increased use of multiple public cloud providers, plus growth in various types of private cloud services, will create a multi-cloud environment in most enterprises and a need to coordinate cloud usage using hybrid scenarios.

“Although hybrid cloud scenarios will dominate, there are many challenges that inhibit working hybrid cloud implementations. Organizations that are not planning to use hybrid cloud indicated a number of concerns, including: integration challenges, application incompatibilities, a lack of management tools, a lack of common APIs and a lack of vendor support,” according to Gartner’s 2016 Public Cloud Services worldwide forecast.

Long term: Microservices on the Public Cloud

Over the long term, workloads are shifting away from hybrid cloud to a public cloud market dominated by providers like AWS, Azure, and Google Compute Engine. “The share of enterprise workloads moved to the public cloud is expected to triple over the next five years,” from 16 percent to 41.3 percent of workloads running in the public cloud, according to a recent JP Morgan survey of enterprise CIOs. Among this group, 13 percent said they view AWS as “intrinsic to future growth.”

By the end of 2016, the public cloud services market will reach $208.6 billion in revenue, growing 17.2 percent from $178 billion in 2015, according to Gartner. Cloud application services (software-as-a-service, or SaaS) is one of the largest segments and is expected to grow by 21.7 percent in 2016 to reach $38.9 billion, while Infrastructure-as-a-Service (IaaS) is projected to see the most growth, at 42.8 percent in 2016.

The public cloud itself is largely built on open source software. Offerings including Amazon EC2, Google Compute Engine and OpenStack are all built on open source technologies. They provide APIs that are well documented. They also provide a framework that is consistent enough to allow users to duplicate their infrastructure from one cloud to another without a significant amount of customization.

This allows for application portability, or the ability to move from one system to another without significant effort. The less complex the application the more likely that it can remain portable across cloud providers. And so the development practice that seems to be most suited for this is to abstract things into their simplest parts — a microservices architecture.

A whole new class of open source cloud computing projects has now begun to leverage the elasticity of the public cloud and enable applications designed and built to run on it. Organizations should become familiar with these open source projects, with which IT managers and practitioners can build, manage, and monitor their current and future mission-critical cloud resources.

Learn more about trends in open source cloud computing and see a list of the top open source cloud computing projects. Download The Linux Foundation’s Guide to the Open Cloud report today!

Read the other articles in the series:

4 Notable Trends in Open Source Cloud Computing

3 Emerging Cloud Technologies You Should Know

Why the Open Source Cloud Is Important

 

Speak at The Linux Foundation’s Invite-Only Open Source Leadership Summit

The Linux Foundation Open Source Leadership Summit (formerly known as Collaboration Summit) is where the world’s thought leaders in open source software and collaborative development convene to share best practices and learn how to create and advance the open source infrastructure that runs our lives.

The Linux Foundation is now seeking executives, business and technical leaders, open source program office leaders, and open source foundation and project leaders to share their knowledge, best practices, and strategies with fellow leaders at OSLS, to be held Feb. 14-16, 2017, in Lake Tahoe, CA.

Submit your speaking proposal now! The deadline for submissions is Dec. 14.

The invitation-only event offers sessions covering topics from project sustainability to licensing and compliance, open source business strategy, consuming and contributing to open source, and much more. It is a forum for education and collaboration that includes the brightest minds in open source who are shaping strategy and implementation.

Some suggested topics for 2017 speakers include:

  • Consuming and Contributing to Open Source

  • Cultivating Open Source Leadership

  • Driving Participation and Inclusiveness in Open Source Projects

  • How to Run a Business that Relies on Open Source

  • How to Vet the Viability of OS Projects

  • Legal + Compliance

  • Managing Competing Corporate Interests While Driving Coherent Communities

  • Monetizing Open Source & Innovators Dilemma

  • New Frontiers for Open Source in FinTech and Health Care

  • Open Source vs Open Governance

  • Open Source Project Case Studies & Success Stories

  • Successfully Working Upstream & Downstream

  • Sustainability of Open Source Projects

For more ideas, check out the full list of topics or watch video recordings from last year’s event.

Not looking to speak at OSLS, but want to attend? Request an invitation.

The Urgency of Protecting Your Online Data With Let’s Encrypt

We understand that online security is a necessity, so why is only 48.5% of online traffic encrypted? Josh Aas, co-founder of Let’s Encrypt, gives us a simple answer: it’s too difficult. So what do we do about it? Aas has answers for that as well in his LinuxCon North America presentation.

Aas explains how the Achilles heel of managing Web encryption is not encryption itself, but authentication, which requires trusted third parties, and secure mechanisms for managing the trust chain. He says, “The encryption part is relatively easy. It’s a software stack…it comes on most operating systems by default. It just needs to be configured. Most Web servers tie into it directly and take care of things for you. Your biggest challenge is protecting your private key. The authentication part is a bit of a nightmare, and it has been for a while, so if you want to authenticate, the way this works on the web is you need to get a certificate from a certificate authority, and it’s complicated, even for really smart people like my friend Colin here at Cisco.”

Another roadblock is the expense and overhead of selecting and purchasing certificates from trusted vendors. “You need to figure out what kind of certificate you need, and of course the certificate authorities have come up with a million different marketing buzzwords for the different types of certificates. The super-secure and security plus and blah blah blah. Good luck figuring that out. You’ve got to figure out how to request a certificate…You’ve got to figure out how to install your cert, and of course that’s server specific, so you’ve got to have particular knowledge about a server and how this works, and you’ve got to remember to renew it on time. I’m sure everyone’s run into a site with an expired cert just because people forgot.”

There are other technical considerations that make the current system overly difficult, but new standards will take 10 years or more to be established, so Aas came to the conclusion that he had to create his own brand-new certificate authority to encourage encryption and to improve the process of obtaining and managing certificates. Perhaps only a madman would come to this conclusion, but Aas and Eric Rescorla — a friend and colleague from Mozilla — made it happen, and now we have Let’s Encrypt.
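To give a sense of how much that process shrinks, here is a rough sketch using Certbot, the EFF-maintained ACME client commonly paired with Let’s Encrypt; example.com is a placeholder domain, and the nginx plugin is assumed to be installed:

    # request a certificate and let Certbot configure nginx to use it
    sudo certbot --nginx -d example.com -d www.example.com

    # test automated renewal; in practice this runs from a cron job or systemd timer
    sudo certbot renew --dry-run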

Let’s Encrypt is built on four cornerstones:

  • Automated.
  • Free.
  • Transparent and open.
  • Global.

A certificate authority must be trusted by other CAs, be bundled into all Web browsers, and meet all manner of compliance rules. Sponsorships from key industry players, including Akamai, Mozilla, the Electronic Frontier Foundation, and Cisco, got the project off the ground. Then the Linux Foundation came on board to ease the pain of organizational issues. Let’s Encrypt has been in operation for about a year and manages more than 16 million active certificates.

Watch the complete talk (below) to learn about the technical details, the challenges of meeting demand, dealing with censorship, and future plans. Currently, the Let’s Encrypt project is running a fundraising campaign, and your generosity can help make the web more secure. Learn more.

LinuxCon videos

Pitfalls to Avoid When Implementing Node.js and Containers

The use of containers and Node.js is on the rise, as the two technologies are a good match for effectively developing and deploying microservice architectures. In a recent survey from the Node.js Foundation, the project found that 45 percent of developers who responded were using Node.js with containers.

As more enterprises and startups alike look to implement these two technologies together, there are key questions that they need to ask before they begin their process and common pitfalls they want to avoid.

In advance of Node.js Interactive, to be held Nov. 29 through Dec. 2 in Austin, we talked with Ross Kukulinski, Product Manager at NodeSource, about the common pitfalls when implementing Node.js with containers, how to avoid them, and what the future holds for both of these technologies.

Linux.com: Why is Node.js a good technology to use within next-generation architectures?

Ross Kukulinski: Node.js is an excellent technology to use within next-generation applications because it enables technical innovation through rapid development, microservice architectures, and flexible horizontal scaling.

Cloud computing and cloud-native applications have accelerated through the use of open source software, and Node.js is particularly well suited to thrive in this environment thanks to the extensive open source npm package ecosystem, which lets developers quickly assemble complex applications.

From a containerization standpoint, Node.js and containers both excel in three key areas: performance, packaging, and scalability.

Node.js has low overhead and is a highly performant web application development platform that can handle large-scale traffic with ease. Developers build with joy and can own the entire application deployment lifecycle when they are enabled with DevOps methodologies, such as continuous integration and continuous deployment.

For packaging, Node.js has a dependency manifest definition (package.json) that leverages the extensive module ecosystem to snap-together functional building blocks. Similarly, containers have a build-once-run-anywhere nature that is defined by an explicit definition file (Dockerfile). Pairing these two together helps to eliminate the “it runs on my machine, so it’s not my fault that it doesn’t work in production” problem.
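As a rough sketch of the build-once-run-anywhere pairing Kukulinski describes, assuming a project with a package.json and a Dockerfile in its root (the image name and port below are placeholders):

    # install the dependencies declared in package.json
    npm install

    # build the image once from the project's Dockerfile...
    docker build -t my-node-app .

    # ...then run the same image anywhere a container runtime is available
    docker run -d -p 3000:3000 my-node-app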

Finally, and perhaps most importantly, Node.js and containers can handle an impressive request load through the use of horizontal scaling. Both scale at the process level and are fast-to-boot, which means that operations teams can automatically scale up/down applications independently to handle today’s dynamic workloads.

Linux.com: What are some common pitfalls that users experience when getting started with Node.js and Docker and Kubernetes?

Ross Kukulinski: By far, the most common pitfall I see is people abusing containers by treating them like virtual machines. I routinely see teams with Node.js Dockerfiles with the kitchen sink installed: ubuntu, nginx, pm2, nodejs, monit, supervisord, redis, etc., which causes numerous problems. This results in HUGE container image sizes — often over a gigabyte when they should be ~50-200MB. Large image sizes translate to slow deploys and frustrated developers.

In addition, these kitchen sink containers facilitate anti-patterns, which can cause problems later down the road. A prime example would be the use of a process manager (e.g., supervisord, pm2, etc.) inside of a container.

In the event that your Node.js application crashes, you want it to restart automatically. On traditional Linux systems, this is done using a process manager. Running a process manager within a container will correctly restart your Node.js application if it crashes. The problem is that the container runtime (e.g., Docker) has no visibility into the internal process manager, so it does not know that your application is crashing or having problems.

When your team inspects the system to see what’s running, by using docker ps or kubectl get pods, for example, the container runtime will report that everything is up and running when in fact your application is crashing.

Finally, shoving everything into one container defeats one of the important features of containers: scaling at the process or application level. In other words, teams should be able to scale any one process type independently of the others. In our example above, we should be able to scale the nginx proxy/cache separately from the Node.js process depending on where our current performance bottleneck is. One of the underlying premises of cloud-native architectures is to enable flexible horizontal scaling.
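As a rough illustration of the pattern Kukulinski describes (one Node.js process per container, with restarts and scaling handled by the runtime rather than by an in-container process manager), something like the following applies; the image, container, and deployment names are placeholders:

    # run a single Node.js process per container and let Docker restart it on crashes
    docker run -d --restart=on-failure --name api my-node-app

    # the runtime now reflects real application health
    docker ps
    docker logs api

    # on Kubernetes, scale the Node.js tier independently of other tiers
    kubectl scale deployment api --replicas=5
    kubectl get pods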

Linux.com: How best can you avoid these pitfalls?

Ross Kukulinski: Before starting down the containerization process, be sure to understand what your business, technology, and process goals are. You should also be thinking about what comes after you containerize your applications.  

A Docker image is just the first step — how do you run, manage, secure, and scale your containers? Is your release process automated with continuous integration and/or continuous deployment? These are all questions that you need to be thinking about while you’re working through the containerization process.

From an organizational point of view, I would encourage management and decision makers to look beyond just “containerizing-all-the-things” and take a holistic approach to their software development, QA, release, and culture.

For developers and operations teams, remember that containers are NOT virtual machines. If you’re looking for best practices when containerizing Node.js, I highly recommend reviewing these resources:

Linux.com: What do you think is in store for the future of containers and Node.js? Any new interesting tech on the horizon that you think will further help these technologies?

Ross Kukulinski: I think we’ll continue to see healthy competition and increased feature parity between the major container providers. While they certainly are competing for market share, each of the major container technology ecosystems (Docker, Kubernetes, Nomad, and Mesos) has a core focus. For example, Docker has focused heavily on the developer story, while Kubernetes has nailed the production-grade deployment and scaling aspects. To that end, I think it’s important for businesses looking to adopt these technologies to find the right tool for them.

In terms of Node.js, I think we’ll continue to see increased adoption of containerized Node.js, especially as more and more companies embrace release patterns that allow them to deliver software more quickly and efficiently at scale. Node.js as a development platform enables rapid, iterative development and great scalability while also leveraging the most popular programming language in the world: JavaScript. I do think we will see an increasing number of polyglot architectures, so expect to see Node.js paired with languages like Go to deliver a comprehensive tool set.

While I’m always experimenting with new technologies and tracking industry trends, I think the one I’m most intrigued by is the so-called “Serverless” paradigm. I’ve certainly heard plenty of horror-stories, especially relating to poor developer workflows, debugging tools, and monitoring systems. As this tooling ecosystem improves, however, I expect we’ll see Node.js used increasingly often in Serverless deployments for certain technological needs.

Where companies will get into trouble, however, is if they go all-in on Serverless. As with most things, Serverless will not be a silver bullet that solves all of our problems.

View the full schedule to learn more about this marquee event for Node.js developers, companies that rely on Node.js, and vendors. Or register now for Node.js Interactive.

4 Ways to Open Up your Project’s Infrastructure

Open source isn’t just about opening up your code—it’s also about building a supporting infrastructure that invites people to contribute. In order to create a vibrant, growing, and exciting project, the community needs to be able to participate in the governance, the documentation, the code, and the actual structures that keep the project alive. If the overall “hive” is doing well, it attracts more individuals with diverse skills to the project.

Although many projects strive for “open everything,” infrastructure is often closed to contribution. Usually, only a few people run the infrastructure and keep the lights on. They’re sometimes unable to recruit help because, well, you can’t really give the keys to the kingdom to everyone. 

Read more at OpenSource.com

The End of the General Purpose Operating System

An interesting chat on Twitter today reminded me that not everyone is aware that we’re seeing a concerted attempt to dislodge the general purpose operating system from our servers.

I gave a talk about some of this nearly two years ago, and I thought a blog post looking at what I got right, what I got wrong, and what’s actually happening would be of interest to folks. The talk was written only a few months after I joined Puppet. With a bunch more time working for a software vendor, there are some bits I missed in my original discussion…

Read more at More Than Seven

Best Open Source Management Tools

Open source software provides an attractive alternative to more costly commercial products, but can open source products deliver enterprise-grade results? To answer this question, we tested four open source products: OpenNMS, Pandora FMS, NetXMS, and Zabbix. All four products were surprisingly good. We liked Pandora FMS for its ease of installation and modern user interface. In general, we found configuration to be easier and more intuitive with Pandora than with the other contenders.

Read more at CIO