
Buoyant’s New Open Source Service Mesh Is Designed with Kubernetes in Mind

This article is part of the KubeCon + CloudNativeCon North America 2017 series.

The Linkerd service mesh for microservices was the first in its category and is the most widely used service mesh in production today. It has seen over a trillion requests and has enterprise customers that include Salesforce, FOX, Target, PayPal, Expedia, AOL, Monzo, and IBM.

Today, Buoyant announced Conduit, a new, next-generation open source service mesh designed to be extremely fast, lightweight, and secure, with real-world Kubernetes and gRPC use cases in mind.

Ahead of CloudNativeCon + KubeCon 2017, being held this week in Austin, we spoke with George Miranda, Community Director at Buoyant, the maker of Linkerd. Be sure to catch Buoyant CEO William Morgan’s keynote on Conduit at CloudNativeCon. The company will also kick off the conference with The New Stack’s Pancake Breakfast and has several other talks on the schedule.

Linux.com: What makes managing services more challenging in a cloud-native environment?

George Miranda: When you’re running monolithic applications on three-tier legacy infrastructure, you make relatively few service requests. It’s pretty obvious where they’re coming from and going to. If things go wrong, you can quickly understand where problems might be happening.

For example, you may be monitoring network performance for packet loss, transmission failures, and bandwidth utilization. You probably use a latency monitoring tool, like smokeping, to get closer to measuring service health, and an in-band tool, like tcpdump, to monitor service communication at the packet level. Triage those metrics along with your event logs, and you can infer where things are likely going wrong.

If you’ve managed production applications before, you know this game well, and for the most part these tools did the trick. But they require you to know how the entire system operates in order to make that process work. As a platform operator with monolithic apps, you’ll typically have deep intrinsic knowledge of the services in use, how they interact, and how they operate at that layer for the entire system.

When you start building cloud-native applications, that holistic grasp of the entire system can quickly scale beyond any one platform operator’s reach. You could be managing hundreds or thousands of microservices in your infrastructure. Managing things like load balancing, automated deployments, encryption, cascading system failures, or troubleshooting outages can become incredibly complex without visibility into the service communication layer. That’s where the service mesh can help.

Linux.com: What are the advantages of a service mesh?

George Miranda: A service mesh adds visibility into requests that were once invisible. It turns service communication into a first-class citizen. Essentially, it provides the logic to monitor, manage, and control service requests by default, everywhere, and helps you make your microservices safe, fast, and reliable.

The service mesh is typically implemented as a set of network proxies that are deployed alongside your application code. Those proxies are transparent to your applications, so there are no code changes required to use them. That allows developers to decouple service communication logic from application code. So you can push that into a lower part of the stack where it can be more easily managed globally across your entire infrastructure. You can use that mesh to weave applications deployed between different infrastructure platforms, data centers, and cloud providers into a single fabric. We’ve had customers use the service mesh as a way of reducing lock-in risk and enabling hybrid multi-cloud deployments.
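To make the sidecar idea concrete, here’s a minimal sketch of a transparent TCP proxy in Go. It’s illustrative only: the port and upstream address are hypothetical, and a real mesh proxy would add service discovery, telemetry, retries, and TLS on top. The point is that the application changes nothing except where it connects.

    // Minimal sidecar-style TCP proxy (hypothetical port and upstream).
    package main

    import (
        "io"
        "log"
        "net"
    )

    func main() {
        // The application is configured to talk to localhost; the proxy
        // is otherwise invisible to the application code.
        ln, err := net.Listen("tcp", "127.0.0.1:4140")
        if err != nil {
            log.Fatal(err)
        }
        for {
            conn, err := ln.Accept()
            if err != nil {
                log.Print(err)
                continue
            }
            go func(c net.Conn) {
                defer c.Close()
                // A real data plane would resolve the destination via
                // service discovery and record metrics around the copy.
                upstream, err := net.Dial("tcp", "orders.internal:8080")
                if err != nil {
                    log.Print(err)
                    return
                }
                defer upstream.Close()
                go io.Copy(upstream, c) // request bytes
                io.Copy(c, upstream)    // response bytes
            }(conn)
        }
    }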

Linux.com: How does a service mesh work?

George Miranda: A service mesh consists of two main parts: a control plane and a data plane. The data plane is the proxy layer, where service communication happens. When you, as a user, interact with the service mesh, you interact with the control plane.

The control plane exposes new primitives you can use to control how your services communicate. Those primitives enable tasks you couldn’t do before, like exercising fine-grained control over specific service requests, setting rate limits, managing authentication, configuring circuit-breaking logic, enabling distributed tracing, and so forth. You use those primitives to compose service policies, globally or per service, inside the control plane. The data plane then reads policies from the control plane and alters its behavior accordingly.
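To illustrate the kind of primitive involved, here’s a sketch of circuit-breaking logic in Go. This is not Linkerd’s actual API; the type, thresholds, and names are hypothetical, but it shows the behavior a data plane applies once a policy tells it to.

    // Hypothetical circuit breaker: after `threshold` consecutive
    // failures, calls fail fast for `cooldown` before traffic resumes.
    package main

    import (
        "errors"
        "fmt"
        "sync"
        "time"
    )

    var ErrOpen = errors.New("circuit open: failing fast")

    type CircuitBreaker struct {
        mu        sync.Mutex
        failures  int
        threshold int
        cooldown  time.Duration
        openUntil time.Time
    }

    func (cb *CircuitBreaker) Call(fn func() error) error {
        cb.mu.Lock()
        if time.Now().Before(cb.openUntil) {
            cb.mu.Unlock()
            return ErrOpen
        }
        cb.mu.Unlock()

        err := fn()

        cb.mu.Lock()
        defer cb.mu.Unlock()
        if err != nil {
            cb.failures++
            if cb.failures >= cb.threshold {
                cb.openUntil = time.Now().Add(cb.cooldown)
                cb.failures = 0
            }
            return err
        }
        cb.failures = 0
        return nil
    }

    func main() {
        cb := &CircuitBreaker{threshold: 3, cooldown: 5 * time.Second}
        for i := 0; i < 5; i++ {
            // The first three calls fail normally; then the circuit
            // opens and the remaining calls are rejected immediately.
            err := cb.Call(func() error { return errors.New("upstream timeout") })
            fmt.Println(i, err)
        }
    }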

Linux.com: What has the response to Linkerd been?

George Miranda: It’s been phenomenal. Customers in production across a wide range of industries have served trillions of requests through Linkerd. We have an active community of contributors, open source users, and enterprise customers. We’ve seen the service mesh used in ways we couldn’t have imagined when we first created it.

For example, one of our customers used Linkerd to enable a move to the cloud. They make an ERP platform that obviously contains sensitive customer data. They started modernizing their application stack and made a move to microservices. As with most companies, that meant that their dev teams started to own different parts of what used to be one giant monolith. Some dev teams were great about managing sensitive data, while others did that inconsistently or not at all. When faced with the prospect of moving that data to the cloud, their Information Security team quickly put a stop to those ambitions.

Then they implemented Linkerd. They used the service mesh to decouple the need to manage secure service communication from their development teams. Instead, their dev teams could all configure their apps to make plain HTTP calls to remote services. At the wire level, Linkerd would then do a protocol upgrade to ensure all communication happened over TLS by default. Suddenly, the platform team could easily ensure consistent encryption of data in transit no matter which application was in use. They were able to work with their Information Security team to find a public cloud vendor up to their standards, and that’s where they’re running today. That never would have happened for them without Linkerd.
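The mechanics of that protocol upgrade can be sketched in a few lines of Go: the app speaks plain HTTP to a local listener, and the proxy re-issues each request over TLS. This is not Linkerd’s implementation, and the addresses are hypothetical.

    // Plaintext-in, TLS-out reverse proxy (hypothetical addresses).
    package main

    import (
        "log"
        "net/http"
        "net/http/httputil"
        "net/url"
    )

    func main() {
        // Dev teams point their apps at http://localhost:4140 and need
        // no TLS configuration of their own.
        remote, err := url.Parse("https://payments.internal:8443")
        if err != nil {
            log.Fatal(err)
        }
        proxy := httputil.NewSingleHostReverseProxy(remote)
        // The outbound side speaks TLS, so data is encrypted in transit
        // regardless of how any individual service was written.
        log.Fatal(http.ListenAndServe("127.0.0.1:4140", proxy))
    }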

Linux.com: Are there things that catch new users off guard?

George Miranda: There are different ways to deploy the service mesh. Because it’s a series of interconnected proxies, you have options for how that’s set up. Some users prefer to deploy one proxy per physical host or VM that their containers run on. All containerized processes then route traffic through localhost, and the service mesh takes it from there. But attaching the proxy to one physical or virtual host can make management more difficult if you’re not always sure where your containerized processes are running.

A common approach these days is to run the service mesh as a container sidecar and not worry about which proxy lives on each container host. The downside is that resource utilization can become a big concern in that pattern. If you have hundreds of containers on any one host, the footprint required for the service mesh suddenly begins to matter.
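In Kubernetes terms, the sidecar pattern means a second container in the same pod. Here’s a sketch using the client-go API types; the names and images are hypothetical.

    // Two-container pod: the app plus a mesh proxy sidecar.
    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "orders"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{
                    // The application container knows nothing about the mesh.
                    {Name: "app", Image: "example/orders:1.0"},
                    // The proxy shares the pod's network namespace, so the
                    // app reaches it over localhost wherever the pod lands.
                    {Name: "mesh-proxy", Image: "example/mesh-proxy:0.1"},
                },
            },
        }
        fmt.Println(pod.Name, "runs", len(pod.Spec.Containers), "containers")
    }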

The service mesh needs to be remarkably small, lightweight, and incredibly fast. You shouldn’t have to choose between resilient services and performance. You should barely notice that the service mesh is even there. That’s one of the main reasons we just released Conduit.

Linux.com: With all of the success behind Linkerd, why are you introducing Conduit now?

George Miranda: At Buoyant, we asked ourselves what it would take to build the ideal service mesh from the ground up, but with all the lessons we’d learned from the past 18 months of running a service mesh in production. The answer was Conduit.

Conduit’s Rust-based data plane is crazy fast. With sub-millisecond latency and a tiny memory footprint, it’s designed to give you the most frequently used benefits of the service mesh without getting in your way. Rust’s memory-safety guarantees also help keep the proxy from introducing attack vectors that expose your services to additional risk. Conduit is incredibly fast, ultralight, and fundamentally secure. It’s easy to get started with and a great way to manage Kubernetes-based microservices.

Linux.com: Are there any talks in particular to watch out for at CloudNativeCon + KubeCon North America?

George Miranda: The service mesh is all over CloudNativeCon’s agenda. To me, that validates the need for the service mesh as a fundamental building block in the cloud-native stack. KubeCon + CloudNativeCon is a great place to learn more about how the service mesh can help you manage your stack.

We’ll be talking about both Linkerd and Conduit, starting with the Pancake Breakfast on Wednesday morning. For production-grade, multi-platform use cases requiring a feature-rich approach with deep integrations for modern tooling, check out Linkerd and the many customer talks about how it’s used in their stacks. For a next-gen, ultralight service mesh specific to Kubernetes, check out Conduit. You’ll hear about Conduit in the CNCF keynotes, and we’ll dive deep into it in both our SIG and the Linkerd Salon. Check out our schedule and make sure to swing by our booth for demos.

One Month Left to Submit Your Talk to ELC + OpenIoT Summit NA 2018

Embedded Linux Conference (ELC), happening March 12-14 in Portland, OR, gathers kernel and systems developers along with the technologists building applications on embedded Linux platforms. Attendees learn about the newest embedded technologies, gain access to leading experts, have fascinating discussions, collaborate with peers, and gain a competitive advantage with innovative embedded Linux solutions.

View Suggested Topics and Submit a Proposal to Speak

Co-located with ELC, the OpenIoT Summit serves the unique needs of system architects, firmware developers, and software developers in the booming IoT ecosystem. Join experts from the world’s leading companies and open source projects to get the information needed to lead successful IoT developments and advance IoT solutions.

View Suggested Topics and Submit a Proposal to Speak

Linux Foundation events are an excellent way to get to know the community and share your ideas and the work that you are doing. If you haven’t presented at ELC + OpenIoT Summit NA or other conferences before, we’d especially like to hear from you! If you aren’t sure about your abstract, reach out to us and we will be more than happy to work with you on your proposal.

Sign up for ELC/OpenIoT Summit updates to get the latest information.

Predictive Analytics in the Multicloud

Cloud computing has plenty of complexities. And while many IT leaders would prefer a unified infrastructure, wherein the business standardizes on one or two cloud vendors, that is not going to happen in the real world.

The reason is simple: Applications the business depends on reside on a variety of clouds. Forcing users to stop using some applications and services in the interest of simplifying the company’s cloud mix is unreasonable. That means a multicloud strategy—managing multiple clouds simultaneously—is the only logical recourse.

Even so, managing the multicloud is a difficult task and fraught with often-unexpected obstacles. For example, abstracting the platform—simplifying the user interface by pushing complex details, such as computer code, to a lower level on the platform—is helpful for developers and users, but it can be more complicated for the IT operations staff. This sort of complexity increases management issues.

Read more at HPE 

What’s the Difference Between a Fork and Clone?

The concept of forking a project has existed for decades in free and open source software. To “fork” means to take a copy of the project, rename it, and start a new project and community around the copy. Those who fork a project rarely, if ever, contribute to the parent project again. It’s the software equivalent of the Robert Frost poem: Two roads diverged in a codebase and I, I took the one less traveled by…and that has made all the difference.

There can be many reasons for a project fork. Perhaps the project has lain fallow for a while and someone wants to revive it. Perhaps the company that has underwritten the project has been acquired and the community is afraid that the new parent company may close the project. Or perhaps there’s a schism within the community itself, where a portion of the community has decided to go a different direction with the project. Often a project fork is accompanied by a great deal of discussion and possibly also community strife. Whatever the reason, a project fork is the copying of a project with the purpose of creating a new and separate community around it. 

Read more at OpenSource.com

What Are Microservices? Lightweight Software Development Explained

Microservices architecture breaks large monolithic applications, with their massive and complex internal architectures, down into smaller, independently scalable applications. Each microservice is small and less complex to develop, update, and deploy.

When you think about it, why should those functionalities need to be built into a single application in the first place? In theory, at least, you can imagine them living in separate application and data silos without major problems. For example, if the average auction received two bids but only a quarter of all sales received feedback, the bidding service would be at least eight times as active as the feedback application at any time of day (two bids per auction versus one feedback per four sales: 2 ÷ 0.25 = 8). If these were combined into a single application, you’d end up running—and updating—more code than you need, more often. The bottom line: Separating different functionality groups into separate applications makes intuitive sense.

Read more at InfoWorld

How to Containerize GPU Applications

By providing self-contained execution environments without the overhead of a full virtual machine, containers have become an appealing proposition for deploying applications at scale. The credit goes to Docker for making containers easy to use and hence popular. From enabling multiple engineering teams to experiment with their own configurations for development, to benchmarking or deploying a scalable microservices architecture, containers are finding uses everywhere.

GPU-based applications, especially in the deep learning field, are rapidly becoming part of the standard workflow; deploying, testing, and benchmarking these applications in a containerized environment has quickly become the accepted convention. But the native implementation of Docker containers does not yet support NVIDIA GPUs, which is why we developed the nvidia-docker plugin. Here I’ll walk you through how to use it.

Read more at SuperUser

LiFT Scholarship Winners: Teens and Academic Aces Learn Open Source Skills

Four people have been named recipients of the seventh annual Linux Foundation Training (LiFT) Scholarships for 2017 in the “Academic Aces” and “Teens in Training” categories.

Teens in Training

Vinícius Almeida

Vinícius Almeida, 15, of Brazil, is the foundation’s youngest award recipient this year. Although he is a high school freshman, Almeida is already taking computer science courses at the Federal University of Bahia. He has written several articles on robotics and open source technologies, and he is active in his local hackerspace, the Raul Hacker Club.

Almeida also volunteers to write browser extensions for the GNU Project. Almeida says he hopes the knowledge he gains from this scholarship will help him convince more individuals in Brazil to adopt open source.

“I can’t imagine my life without FOSS technologies!’’ he wrote in his application. “I love using Linux every day, and learning more about open source has already changed my opinion in lots of discussions.” Almeida added that he is further developing his programming skills every day, thanks to the open source community. “My future is FOSS technologies; today I’m using most of them, but soon I want to develop them [for] the community.”

Sydney Dykstra

Sydney Dykstra, 18, of the United States, is the second scholarship recipient in the Teens in Training category. A recent high school graduate, Dykstra has been contributing to several open source projects, including the games The Secret Chronicles of Dr. M. and SuperTux. His goal is to become a Linux systems administrator, and he hopes the scholarship will jump-start that.

“I believe that open source is the future for everything computer related, online and offline, and necessary… if we are to have a ‘free’ world where we are not worried about someone else watching us or taking advantage of our info,’’ he wrote in his scholarship application.

Dykstra wants to become a Linux systems administrator not only because he enjoys working with Linux systems but also because of the freedom and flexibility they provide. “I’m only a beginner,” he wrote, “but have been using Linux for nearly five years now and have been learning more as I go.”

Academic Aces

Asirifi Charles

Asirifi Charles, 22, of Ghana, is a recipient in the Academic Aces category. He is in his final year studying computer science at the University of Ghana. Charles taught himself about web development through free online resources, and recently became interested in open source, completing the free Intro to Linux course on edX. He hopes this scholarship will help him expand his open source expertise, so he can share it with others in Ghana, where it is difficult to access an IT education. 

“Open source lets you share your contribution while learning to better your skills,’’ he wrote in his application.

Camilo Andres Cortes Hernandez

Camilo Andres Cortes Hernandez, 31, of Colombia, is the other scholarship winner in the Academic Aces category. Hernandez studies technology at EAN University in Colombia, where he also runs a nonprofit that teaches individuals about cloud computing. His focus is currently on Azure, and he hopes the scholarship will help him to obtain the MCSA: Linux on Azure certification from The Linux Foundation and Microsoft.

Not only will the scholarship improve his career, he wrote, but it will also help others to embrace open source solutions because of his work in the community. Recently, Hernandez says, he was discussing open source solutions on Azure during a free cloud event, and received good feedback.

“I want to keep teaching others about cloud and top trending technologies, especially open source solutions that can run on environments like Azure. I have a goal within my community (CloudFirst Campus) to teach people about the interoperability of solutions no matter if they are private or open — you can run anything on the cloud.” 

The Linux Foundation Training Scholarships cover the expenses for one class to be chosen by each recipient from the Scholarship Track choices, representing thousands of dollars in value (travel expenses for in-person classes are not included). 

Winners in all categories may also elect to take a Linux Foundation Certified System Administrator, Linux Foundation Certified Engineer, Certified OpenStack Administrator, Cloud Foundry Certified Developer or Certified Kubernetes Administrator exam at no cost following the completion of their training course.

Scholarships are supported by The Linux Foundation members seeking to help train the developers and IT professionals of the future.

Learn more about the LiFT Scholarship program from The Linux Foundation.

How Kubernetes Resource Classes Promise to Change the Landscape for New Workloads

The Colin Powell rule states that you should make a decision when you have 40 percent to 70 percent of the information necessary to make the decision. With Linux container technology like Kubernetes evolving so quickly, it’s difficult for companies to feel like they have 40 percent of the information they need, let alone 70 percent.

Customers often approach me and others at Red Hat to help them get beyond the 40 percent mark to make a decision about Red Hat OpenShift, which is based on Kubernetes.

For many of these customers, the public cloud has become commonplace for workloads. However, translating their on-premises architecture into a proper design and architecture for each cloud is challenging (to say the least) in terms of both time and cost. An architecture that works the same everywhere is the promise of Kubernetes and OpenShift, but it’s also one of the heaviest burdens for engineers.

This contributed article is part of a series in advance of KubeCon + CloudNativeCon, taking place in Austin, Dec. 6-8.

Read more at The New Stack

The OpenChain Project: From A to Community

Communities form in open source all the time to address challenges. The majority of these communities are based around code, but others cover topics as diverse as design or governance. The OpenChain Project is a great example of the latter. What began three years ago as a conversation about reducing overlap, confusion, and wasted resources with respect to open source compliance is now poised to become an industry standard.

The idea to develop an overarching standard to describe what organizations could and should do to address open source compliance efficiently gained momentum until the formal project was born. The basic idea was simple: identify key recommended processes for effective open source management. The goal was equally clear: reduce bottlenecks and risk when using third-party code to make open source license compliance simple and consistent across the supply chain. The key was to pull things together in a manner that balanced comprehensiveness, broad applicability, and real-world usability.

Read more at The Linux Foundation


Blockchains Are Poised to End the Password Era

Blockchain technology can eliminate the need for companies and other organizations to maintain centralized repositories of identifying information, and users can gain permanent control over who can access their data (hence “self-sovereign”), says Drummond Reed, chief trust officer at Evernym, a startup that’s developing a blockchain network specifically for managing digital identities.

Self-sovereign identity systems rely on public-key cryptography, the same kind that blockchain networks use to validate transactions. Although it’s been around for decades, the technology has thus far proved difficult to implement for consumer applications. But the popularity of cryptocurrencies has inspired fresh commercial interest in making it more user-friendly.
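As a rough illustration of that pattern, here’s a sketch in Go using the standard library’s ed25519 package: the user signs a claim with a private key only they hold, and any party can verify it with the public key, with no centralized repository involved. The claim text is, of course, made up.

    // Public-key sign/verify, the primitive behind self-sovereign identity.
    package main

    import (
        "crypto/ed25519"
        "crypto/rand"
        "fmt"
    )

    func main() {
        // The user generates and keeps the private key themselves.
        pub, priv, err := ed25519.GenerateKey(rand.Reader)
        if err != nil {
            panic(err)
        }
        claim := []byte("user-controlled identity attribute")
        sig := ed25519.Sign(priv, claim)
        // A verifier needs only the public key and the signature.
        fmt.Println("signature valid:", ed25519.Verify(pub, claim, sig))
    }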

Read more at Technology Review