“Open innovation has the potential to widen the space for value creation: It allows for many more ways to create value, be it through new partners with complementary skills or by unlocking hidden potential in long-lasting relationships. In a crisis, open innovation can help organizations find new ways to solve pressing problems and at the same time build a positive reputation. Most importantly it can serve as a foundation for future collaboration — in line with sociological research demonstrating that trust develops when partners voluntarily go the extra mile, providing unexpected favors to each other.”
Beginning this month, Lenovo will certify its ThinkStation PCs and ThinkPad P Series laptops for both Ubuntu LTS and Red Hat Enterprise Linux. Every single model, every single configuration across the entire workstation portfolio.
And it doesn’t end there.
“Going beyond the box, this also includes full web support, dedicated Linux forums, configuration guidance and more,” says Rob Herman, General Manager, Executive Director Workstation & Client AI Group at Lenovo.
By Matt Butcher, special to Linux.com
Installing a new app on your phone is simple. So is installing one on your Mac, Linux box, or PC. It should be just as simple to install a distributed application into your cloud — this is the goal of the Cloud Native Application Bundles (CNAB) project. We believe we can achieve this goal without requiring another cloud service or tying the user to only one cloud provider.
Over the last few months, we have witnessed first-hand how much the cloud has to offer. As everything from our daily meetings to our kids’ classrooms has gone online, we are reminded daily of what a potent boon cloud technologies have become.
For those responsible for building and maintaining our cloud presence, we know that some formidable issues are not yet resolved. One of those is how we install, upgrade, and delete applications in the cloud. Using containers, a bit of JSON, and some best-of-breed security infrastructure, we have created a package management standard for the cloud.
A Package Format for the Cloud
While the core cloud technologies like virtual machines and object storage have been around for over a decade, and a rich tapestry of cloud infrastructure exists, managing cloud applications remains a challenge. Two years ago, my team sat down and asked: Why is installing, upgrading, and deleting applications from the cloud such a challenge? True, there are specific services (like PaaS) that make this manageable for a small segment of the ecosystem. But when it comes to a high-level solution, we are still left orchestrating things either by hand or with bespoke tools.
This led us to one straightforward question:
What if we could find a way to make package management work for the cloud the same way that it works for a local operating system?
This domain was not entirely new ground for us. After all, we’d built the enormously successful Helm package manager for Kubernetes. But we were well aware that Helm is inextricably bound to Kubernetes. While we believe Kubernetes has many attractive features, we do not think it will replace the rest of the cloud landscape.
Enumerating the big features, we started to list things we would want to be able to do:
- Install virtual machines
- Set up object storage and cloud databases
- Load containerized workloads onto clusters like Kubernetes, but perhaps not only Kubernetes
- Manage virtual networks and resources like load balancers
- Interoperate with policy and identity control tools
- Make it possible and even easy for developers to introduce support for new services and tools
The list went on in a similar vein for a while. And then came the two killer features:
- Make it extremely easy to use, just like a regular package manager.
- Make it completely cloud-agnostic. It should run just as smoothly on Azure, AKS, on-prem OpenStack, and everything else.
The feature list was looking daunting until a rather elegant solution presented itself: today's packages are already moved around as self-contained bundles of code and supporting resources, which the host environment then executes. What if we just used a Docker container as the primary package technology? In that case, we could reuse a considerable amount of existing cloud infrastructure and easily move packages around, even across air-gapped boundaries.
This was the critical insight that became Cloud Native Application Bundles (CNAB). With Docker, Datadog, and Pivotal (before their acquisition by VMware), we wrote a specification that described how to build cloud-centric packages that are captured in Docker containers.
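At its core, a CNAB bundle is described by a JSON file that points at one or more invocation images, the Docker containers that carry the installation logic. As a rough illustration only (the bundle name, image name, and description here are invented, and the exact fields are defined by the CNAB Core specification), a minimal descriptor looks something like this:

```json
{
  "schemaVersion": "1.0.0",
  "name": "helloworld",
  "version": "0.1.0",
  "description": "A hypothetical example bundle",
  "invocationImages": [
    {
      "imageType": "docker",
      "image": "example/helloworld-installer:0.1.0"
    }
  ]
}
```

Because the invocation image carries the installer code and its supporting resources, the same artifact can be pushed, pulled, and archived with ordinary container tooling.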
Initially announced at DockerCon EU in December of 2018, our combined team has continued to work on the specifications, build tools, and explore better ways of delivering an easy-to-use cloud packaging experience.
Since our initial announcement of CNAB, Docker Apps has rolled CNAB into its production release. Microsoft has built Porter, an open source CNAB builder, and Datadog has led the charge on a CNAB security specification that provides not just a quick verification scheme, but deep software supply chain security.
Docker initially announced their CNAB support for Docker Apps with a great architectural introduction. At the end of last year, they explained how CNAB worked with application templates in Docker Desktop. For Docker, CNAB provides a convenient way to encapsulate applications built using core Docker technology, without requiring the user to learn yet another technology stack. And right now, the newly released Docker Compose specification is supported in Porter, providing a new avenue for integrating Docker’s excellent developer tooling with other cloud technologies.
Microsoft created the Porter project. We had already written a CNAB reference implementation (Duffle) designed to exercise the specification. But it was not necessarily designed to provide a great user experience. Porter, on the other hand, is a user-first design. Through mixins, Porter can support a vast range of cloud technologies, from Terraform to Helm to Docker Compose, making it easy to tailor a CNAB bundle to your preferred target cloud or technology stack.
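To give a feel for the user-first design, a Porter bundle is defined by a short YAML manifest that names the mixins it needs and the steps for each action. The sketch below is purely illustrative: it uses only the exec mixin, the bundle name and script are hypothetical, and the exact manifest fields may vary between Porter releases.

```yaml
# porter.yaml - a hypothetical, minimal Porter manifest
name: hello-cnab
version: 0.1.0
description: "An illustrative bundle that runs a shell script"

mixins:
  - exec           # run arbitrary commands; Terraform and Helm mixins plug in the same way

install:
  - exec:
      description: "Say hello"
      command: ./helpers.sh
      arguments:
        - install
```

Swapping the exec mixin for a Terraform or Helm mixin is what lets the same bundle format target very different cloud stacks.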
Finally, thanks to the diligent work of Datadog, the CNAB group is preparing to publish a second specification: the CNAB Security 1.0 Specification. The initial security model for CNAB was designed alongside the core specification, but we wanted to make sure we did our due diligence. We have spent an extra year diving deeper into scenarios, vetting popular security products, and collaborating with their maintainers so that the specification can be implemented with existing solutions.
Along with covering distribution security, this specification also provides a software supply chain security model. This means that from development through testing, and finally on into release, each step can be verified according to a robust security process. We believe CNAB represents a new generation of security tooling that reduces risk and increases the fidelity of cloud technologies.
CNAB is designed to operate well in enterprise environments. And the CNAB group has two more standards in flight. We are eagerly pushing these toward completion.
One of CNAB’s target environments is the “disconnected cloud.” From physically remote environments, such as research stations and oil rigs, to secure compartmentalized facilities, cloud technologies provide a robust platform even when disconnected from the internet. CNAB is intended to work in these environments, too, and this means it must have a robust “air gap” story.
From day one, this has been a goal. Over the last two years, we have refined our model, goals, and features to meet this scenario best. The core specification is written with air-gapped environments in mind, as is the security specification. But our third specification, the CNAB Registry 1.0 Specification, is the last puzzle piece.
This specification describes how CNAB bundles (packages) are stored, discovered, downloaded, and moved. Utilizing the OCI Registry standard, this specification describes how users and tools will share packages. But it also provides details on how bundles can be moved across network boundaries in a high-fidelity manner. With this specification, CNAB becomes a compelling method for transporting sophisticated cloud-native applications from network to network–without sacrificing security or requiring copious amounts of manual labor.
Finally, we have one more specification in the works. The CNAB Claims 1.0 Specification describes how CNAB tools can share a common description of their deployed applications. For example, one tool can “claim” ownership over an application deployment, while another tool can access the shared information about that application and how it was deployed. This brings together distributed management, audit trails, and long-term tool interoperability.
Porter and Duffle already support claims, but we are excited to get a formal standard that enables information sharing across all of the tools in the CNAB ecosystem.
How to Get Involved
The CNAB specification is developed under an open source model. You can dive right in at cnab.io. There you will find the specifications, the common source libraries (like cnab-go), and our full command-line reference implementation, Duffle.
Porter is also open source and is a great starting point if you wish to work with a user-friendly CNAB tool immediately.
Our goal with CNAB is to provide a package management story for the cloud. Just as it is easy to run an installer on our laptops or put a new app on our phone, it should be easy to install a new cloud application. That is the vision that CNAB relentlessly pursues.
We’d love to have you join up, take it for a test drive, and explore the possibilities.
Steven J. Vaughan-Nichols writes at ZDNet about the Linux Foundation’s new Cloud Engineer Bootcamp:
While there are plenty of cloud classes out there, the Linux Foundation claims it’s the “first-ever bootcamp program, designed to take individuals from newbie to certified cloud engineer in six months.”
The Bootcamp bundles self-paced eLearning courses with certification exams and dedicated instructor support for a comprehensive and well-rounded educational program. As you would imagine for a Bootcamp from the Linux Foundation, it starts with Linux at the operating system layer. Since even Azure is now predominantly Linux, this makes good sense. From Linux, it moves up the stack, covering DevOps, cloud, containers, and Kubernetes.
Specifically, it comprises the following classes and exams:
- Essentials of Linux System Administration (LFS201)
- Linux Networking and Administration (LFS211)
- Containers Fundamentals (LFS253)
- DevOps and SRE Fundamentals: Implementing Continuous Delivery (LFS261)
- Kubernetes Fundamentals (LFS258)
- Linux Foundation Certified System Administrator Exam (LFCS)
- Certified Kubernetes Administrator Exam (CKA)
Besides the classes, students will also have access to an online forum with other students and instructors, as well as live virtual office hours with course instructors five days per week. If you enroll, you can expect to spend 15 to 20 hours per week on the materials to complete the Bootcamp in about six months. Upon completion, participants will receive LFCS and CKA certification badges, plus a badge for completing the entire Bootcamp. Badges can be independently verified by potential employers at any time.
In 2016, Ahmed Alkabary had just graduated from the University of Regina, where he earned degrees in computer science and mathematics. He began using Linux in the second year of his studies and quickly developed such a passion for it that he began extra studies outside of university to advance his skills. Ahmed’s enthusiasm for Linux even led him to develop a free course on Udemy to teach it to others; nearly 50,000 students have enrolled to date. Following the completion of his studies, Ahmed hoped to secure a job as a Linux system administrator.
Ahmed applied for and was selected as the recipient of a LiFT scholarship in the category of Academic Aces, which enabled him to enroll in the Linux Kernel Internals and Development (LFD420) training course and the Linux Foundation Certified SysAdmin exam.
When IBM acquired Red Hat for $34 billion in 2019, it was considered the industry’s largest software acquisition. The synergy between the two companies led them to become one of the leading hybrid multi-cloud providers globally.
In most acquisitions, the acquired entity loses momentum and sheds some of its original luster. This does not seem to be the case with Red Hat.
“I would define it as a separate company and that’s how we run it,” affirms Paul Cormier, President & CEO of Red Hat, who is credited with conceptualizing the company’s open hybrid cloud platform.
“We set our own strategy, we set our own road maps. It’s completely up to us. We have stayed as a self-contained company. Red Hat still has all the pieces to be a separate company: its own Engineering, product lines, back office, HR, Legal, and Finance. It’s very much like VMware is to Dell, or LinkedIn is to Microsoft,” he explains.
Cormier believes it’s important to have separate identities for partner ecosystems to thrive.
“We are talking about integrating Arc with OpenShift. IBM didn’t even know this was happening as we had kept it confidential,” he says.
Microsoft’s Azure Arc is a management tool for hybrid cloud application infrastructures, while OpenShift is a family of containerization software developed by Red Hat.
“We’re big on Intel platforms. We’re also big on IBM Z, IBM i, and IBM Power. Since we support Intel in Red Hat Enterprise Linux (RHEL), we know their road maps long before they’re implemented. However, we have to show Intel that we would not give away their secrets to IBM. This is the most important reason that we must remain separate so that those partner ecosystems remain,” he says.
Linux: The Innovation Engine
Cormier points out that what makes Red Hat unique is its completely open-source software development model.
“That’s our development model. Open source is not a thing — it’s an action,” he says.
Underlining the importance of Linux, he explains that “Linux went by Unix a long time ago in terms of features, function, and performance. However, Linux was so available that eventually, it became the innovation engine. All the technologies today — on the infrastructure and the development side, on the tools side — they are all built-in and around Linux. OpenShift is still a Linux platform. Its containers are Linux. All the innovation is now around that.”
Cormier is also confident of meeting the demand of customers adopting hybrid cloud.
“For us, it doesn’t matter whether it’s 20% in the cloud and 80% on-premise, or 60/40 or 50/50 — it’s still a hybrid world. I can’t predict if the COVID thing is going to push people to the cloud more quickly or more slowly, but we don’t care. It doesn’t matter. For us, it’s the same value proposition,” he avers.
Virtualization meets Kubernetes
Red Hat is now working on bringing VMs into the Kubernetes architecture.
“As opposed to some of our competitors that are trying to bring containers back to their world, we’re moving in the other direction. We are working on advanced cluster management on Kubernetes. As customers increasingly go hybrid, having OpenShift with containers running in different places will help them easily manage across clusters,” Cormier says.
“We’re also focusing on telco 5G use cases on the OpenShift platform. We’re doing a lot of work with Verizon and the other telcos,” he adds.
FOSS Responders has come together to crowdsource support for FOSS contributors and organisations affected by the global pandemic, especially in the face of event cancellations. FOSS Responders has raised a $115,000 support fund made up of individual donations and generous contributions from its partners.
Read More at TFiR
openSUSE Leap 15.2 has progressed to its release candidate phase ahead of the official release planned for the first week of July. With release candidate builds underway, openSUSE Leap 15.2 is under a package freeze. This next version of openSUSE Leap has GNOME 3.34, KDE Plasma 5.18 LTS, and Xfce 4.14 as its primary desktop offerings.
Read More at Phoronix
Symbolic links play a very useful role on Linux systems, but if the file a symlink references is removed, the symlink itself remains and gives no indication of a problem until you try to use it. Here’s how to find and remove symlinks that point to files that have been moved or removed.
Read More at Network World
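The checks behind such a cleanup are simple to script yourself. As a small sketch (not the tool from the article), the following Python walks a directory tree and reports symlinks whose targets no longer exist:

```python
import os

def find_broken_symlinks(root):
    """Return the paths of symlinks under root whose targets no longer exist."""
    broken = []
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            # islink() inspects the link itself, while exists() follows it;
            # a dangling link is one where islink() holds but exists() fails.
            if os.path.islink(path) and not os.path.exists(path):
                broken.append(path)
    return broken
```

Removing the dangling links is then just a matter of calling os.remove() on each returned path.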
Two days ago, the government of India announced that it would publicly release the source code for its coronavirus contact tracing app, Aarogya Setu. However, the folks at MIT aren’t terribly impressed with Aarogya Setu’s safety quotient or its collection of all manner of data beyond what contact tracing demands.
Read More at ZDNet