
Best Linux Foundation classes: Introduction to Linux, Cloud Engineer Bootcamp, and more (ZDNet)

Steven J. Vaughan-Nichols writes:

“The Linux Foundation is an IT certification pioneer, offering its first certification exams back in 2014 in a remote format. Before this, it was virtually unheard of to take an IT certification exam outside of a testing center. The Linux Foundation established verifiable, secure remote proctoring processes, which remain in place. This makes it much easier, especially in the days of the coronavirus pandemic, for qualified individuals to obtain certifications without traveling.

Here are some of the best of the best of their class programs. I’ve focused on the ones leading to certifications because having a certification can always help. Many techies don’t respect certifications, but to get a job in IT, you must first get by the human resources gatekeepers. And, if they don’t see the certifications they’re looking for, you’ll never get a chance to show your prospective boss your technical chops.”

Read more on ZDNet

Linux Training Helps Network Analyst Transition to Open Source Solutions

Rachael Nelson has a love for sharing, freedom, and technology. While her goal was to major in computer science or electrical engineering, Rachael decided to study Management Information Systems at Texas Tech because it was a less demanding major that enabled her to stay home and take care of her sick mother. Over the course of her career, Rachael worked her way up from QA to network analyst. In 2018, she applied for and was awarded a Linux Foundation Training (LiFT) scholarship in the category of Developer Do-Gooders.

Learn more at Linux Foundation Training

humanID Project: Restoring Civil Discussion Through Better Online Identity

Every day, billions of people use social sign-ons, such as “Login with Facebook”, to access applications over the Internet. A major drawback of this system is the inability to distinguish a real human user from a bot.

Nonprofit organization humanID, a recipient of Harvard University’s Social Impact Fund, came up with an innovative idea: develop a one-click anonymous sign-on that serves as an alternative to social sign-on.

“With humanID, everyone can use services without giving up privacy or having their data sold. Bot networks are automatically excluded, while applications can easily block abusive users and trolls, creating more civil digital communities,” says Bastian Purrer, Co-Founder of humanID.

humanID was born during Purrer’s stint in Indonesia. He was helping out a political party’s campaign and was aghast to discover how much of the political conversation during the election was controlled by bots and trolls.

When he realized that political parties routinely deploy bots to promote propaganda and false facts, it became clear that the key to restoring civil discussion, and the vision of an internet for everyone, was better online identity.

The mission
Besides Purrer, humanID’s other co-founders are Sidiq Permana and Shuyao Kong. Together, they lead a 20-person organization, with the tech team based in Indonesia while the business team is in Boston.

“Fixing the Internet is the core mission that unites all three co-founders. Having witnessed how public opinions and sentiments are swayed by fake accounts, we believe that restoring online identity is the first step to restoring authenticity and accountability on the Internet,” says Kong. “We target consumer use cases that are currently serviced by email-and-password or social sign-ons. This includes the majority of apps on our phones.”

Purrer says the goal of the project is to have one humanID per person. “We want people to have control over their own identity from a privacy perspective. We want humanID to be so intuitive and prevalent that it becomes the default identity layer for applications.”

An identity is a permanent representation within a certain context. On the Internet, just like in real life, our identity differs from community to community. humanID enables this by giving users a different, unique identity in every community.

“It is, if the user chooses so, also a different identity than their offline identity. This is where anonymity comes in. Anonymity means that your offline identity, your physical self, cannot be revealed based on your digital identity,” says Kong, who has worked previously in the blockchain and privacy space.

Permana, who’s leading humanID’s technical development, says, “We achieve this by hashing users’ phone numbers, with a unique, different hash for each user and each application — making cross-referencing between communities impossible. The irreversibility of the hashes ensures secure anonymity. The fact that we do not permanently save any unhashed information makes it impossible, not just for our partner applications but even for ourselves, to reveal a user’s offline identity in the form of his phone number.”
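The per-user, per-application scheme Permana describes can be sketched with a keyed hash. The Go snippet below is a minimal illustration of the idea, not humanID's actual implementation: `hashIdentity` and the per-application secrets are hypothetical names, and real key management is out of scope. A keyed construction such as HMAC is assumed here because a plain hash of a phone number could be reversed by simply enumerating the small space of possible numbers.

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// hashIdentity derives a per-application identifier from a phone
// number: the same user gets a different, unrelated hash in every
// application, so communities cannot be cross-referenced. appSecret
// stands in for a per-application key; humanID's actual key
// management is not described here.
func hashIdentity(phoneNumber string, appSecret []byte) string {
	mac := hmac.New(sha256.New, appSecret)
	mac.Write([]byte(phoneNumber))
	return hex.EncodeToString(mac.Sum(nil))
}

func main() {
	phone := "+15551234567" // never stored unhashed

	// The same phone number yields unrelated IDs in two applications.
	fmt.Println(hashIdentity(phone, []byte("secret-for-app-a")))
	fmt.Println(hashIdentity(phone, []byte("secret-for-app-b")))
}
```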

The humanID team believes a persistent, safe identity will be better than existing online identities, which are neither safe from surveillance nor able to hold users accountable for their online behavior.

The underlying tech
humanID reached out to the Linux Foundation because it saw “tremendous value in being part of the force that’s driving the industry standard.”

“The Internet is built on layers of open-source, free-to-use protocols. humanID is created in this tradition. The solution hashes users’ phone numbers and email addresses, securing them safely away from hackers and media giants. Each user will have a unique hash for each application he or she signs on to, so there’s no cross-referencing,” explains Purrer.

“Our database stores users’ country codes, but relinquishes access to the rest of the information we hash. We are using OAuth at the moment, but actively exploring tech that enhances the security of humanID. Developers can implement the social login within a few hours of work,” he says.
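Since humanID is built on OAuth, integrating it should look much like any standard OAuth 2.0 authorization-code flow. The sketch below uses Go's golang.org/x/oauth2 package; the endpoints, client credentials, and redirect URL are placeholder assumptions for illustration, not humanID's published API.

```go
package main

import (
	"context"
	"fmt"

	"golang.org/x/oauth2"
)

func main() {
	// Hypothetical endpoints and credentials for illustration only.
	conf := &oauth2.Config{
		ClientID:     "your-app-id",
		ClientSecret: "your-app-secret",
		RedirectURL:  "https://example.com/callback",
		Endpoint: oauth2.Endpoint{
			AuthURL:  "https://auth.example.org/oauth/authorize",
			TokenURL: "https://auth.example.org/oauth/token",
		},
	}

	// Step 1: send the user to the provider's consent page.
	// In production, generate a fresh random state per request and
	// verify it on the callback to prevent CSRF.
	fmt.Println("Visit:", conf.AuthCodeURL("random-state-string"))

	// Step 2: after the redirect, exchange the returned code for a
	// token identifying the user's anonymous, per-app identity.
	var code string
	fmt.Scan(&code)
	tok, err := conf.Exchange(context.Background(), code)
	if err != nil {
		panic(err)
	}
	fmt.Println("Access token:", tok.AccessToken)
}
```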

The use cases
One use case they are deploying for their first client, GreenZone, is tracking COVID without sacrificing users’ privacy. Permana explains, “GreenZone is a tracking application that doesn’t track users’ location. Instead, it shows ‘green zones’ of low-risk areas where no symptoms are reported, therefore alleviating anxiety by showing users whether they are in a safe zone or not. All data is entirely peer-to-peer, and there are no governments, police, or regulators involved.”

According to him, humanID’s first set of customers will be those that are privacy-conscious because their customers demand native privacy when using their product. These businesses include COVID-tracking, health and self-tracking apps, self-help forums, and VPNs.

“We also target social networks, petition sites, and any site with a forum or comment section. All of these businesses suffer heavily from spam abuse and automated accounts. With humanID, everyone can use services without giving away privacy or having their data sold. Bot networks are automatically excluded, while applications can easily block abusive users and trolls,” he says.

Purrer clarifies that humanID does not intend to replace government-issued IDs or business-internal identity management.

“We don’t intend to compete with these existing businesses or standards, but to add a new and fresh idea in the struggle to bring back privacy, safety and accountability on the web,” he says.

The project has been driven by open source and volunteer work for 1.5 years. “We’re actively seeking support and grants to accelerate our work to bring humanID to market and sign up clients. Beyond this, we aim to cover our cost from our client base and not be dependent on charitable donations beyond 2022,” Purrer adds.

Check out the demo below. If you have any questions, feel free to contact the team on GitHub.

New, Free Training Course Teaches Use of Jenkins for CI/CD Workflows

The Linux Foundation and Continuous Delivery Foundation have announced the immediate availability of a new free training course on the edX platform, LFS167x – Introduction to Jenkins. Jenkins is the leading open source automation server, providing hundreds of plugins to support building, deploying and automating any project.

Learn more at Linux Foundation Training

Why now is the time for “Open Innovation” (Harvard Business Review)

“Open innovation has the potential to widen the space for value creation: It allows for many more ways to create value, be it through new partners with complementary skills or by unlocking hidden potential in long-lasting relationships. In a crisis, open innovation can help organizations find new ways to solve pressing problems and at the same time build a positive reputation. Most importantly it can serve as a foundation for future collaboration — in line with sociological research demonstrating that trust develops when partners voluntarily go the extra mile, providing unexpected favors to each other.”

Read More at Harvard Business Review

Lenovo’s Massive Ubuntu And Red Hat Announcement Levels Up Linux In 2020 (Forbes)

Beginning this month, Lenovo will certify its ThinkStation PCs and ThinkPad P Series laptops for both Ubuntu LTS and Red Hat Enterprise Linux. Every single model, every single configuration across the entire workstation portfolio.

And it doesn’t end there.

“Going beyond the box, this also includes full web support, dedicated Linux forums, configuration guidance and more,” says Rob Herman, General Manager, Executive Director Workstation & Client AI Group at Lenovo.

Read more at Forbes.

CNAB: A package format for the cloud

By Matt Butcher, special to Linux.com

Introduction

Installing a new app on your phone is simple. So is installing one on your Mac, Linux box, or PC. It should be just as simple to install a distributed application into your cloud — this is the goal of the Cloud Native Application Bundles (CNAB) project. We believe we can achieve this goal without requiring another cloud service or tying the user to only one cloud provider.

Over the last few months, we have witnessed first-hand how much the cloud has to offer. As everything from our daily meetings to our kids’ classrooms has gone online, we are reminded daily of what a potent boon cloud technologies have become.

Those of us responsible for building and maintaining a cloud presence know that some formidable issues are not yet resolved. One of those is how we install, upgrade, and delete applications in the cloud. Using containers, a bit of JSON, and some best-of-breed security infrastructure, we have created a package management standard for the cloud.

A Package Format for the Cloud

While the core cloud technologies like virtual machines and object storage have been around for over a decade, and a rich tapestry of cloud infrastructure exists, managing cloud applications remains a challenge. Two years ago, my team sat down and asked a straightforward question: Why is installing, upgrading, and deleting applications in the cloud such a challenge? True, there are specific services (like PaaS) that make this manageable for a small segment of the ecosystem. But when it comes to a high-level solution, we are still left orchestrating things either by hand or with bespoke tools.

This led us to one straightforward question:

What if we could find a way to make package management work for the cloud the same way that it works for a local operating system?

This domain was not entirely new ground for us. After all, we’d built the enormously successful Helm package manager for Kubernetes. But we were well aware that Helm is inextricably bound to Kubernetes. While we believe Kubernetes has many attractive features, we do not think it will replace the rest of the cloud landscape.

Enumerating the big features, we started to list things we would want to be able to do:

    • Install virtual machines
    • Set up object storage and cloud databases
    • Load containerized workloads onto clusters like Kubernetes, but perhaps not only Kubernetes
    • Manage virtual networks and resources like load balancers
    • Interoperate with policy and identity control tools
    • Make it possible and even easy for developers to introduce support for new services and tools

The list went on in a similar vein for a while. And then came the two killer features:

    • Make it extremely easy to use, just like a regular package manager.
    • Make it completely cloud-agnostic. It should run just as smoothly on Azure, AKS, on-prem OpenStack, and everything else.

The feature list was looking daunting until a rather elegant solution presented itself: Today’s packages are moved around in self-contained bundles of code and supporting resources. And then the host environment executes that bundle. What if we just used a Docker container as the primary package technology? In that case, we can reuse a considerable amount of cloud infrastructure, easily moving packages around–even across air-gapped boundaries.

This was the critical insight that became Cloud Native Application Bundles (CNAB). With Docker, Datadog, and Pivotal (before their acquisition by VMware), we wrote a specification that described how to build cloud-centric packages that are captured in Docker containers.
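To make this concrete, here is a rough sketch of what a minimal CNAB bundle descriptor (bundle.json) looks like, rendered with plain Go structs rather than the official cnab-go types. The bundle name and image references are hypothetical, and the real specification defines many more fields (parameters, credentials, custom extensions) than this stripped-down core.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Minimal mirror of a CNAB bundle.json descriptor. The invocation
// image is the Docker container that carries the bundle's
// install/upgrade/uninstall logic.
type InvocationImage struct {
	ImageType string `json:"imageType"` // e.g. "docker"
	Image     string `json:"image"`     // the installer container
}

type Bundle struct {
	SchemaVersion    string            `json:"schemaVersion"`
	Name             string            `json:"name"`
	Version          string            `json:"version"`
	InvocationImages []InvocationImage `json:"invocationImages"`
}

func main() {
	// A hypothetical bundle for illustration only.
	b := Bundle{
		SchemaVersion: "v1.0.0",
		Name:          "example/helloworld",
		Version:       "0.1.0",
		InvocationImages: []InvocationImage{
			{ImageType: "docker", Image: "example/helloworld-installer:0.1.0"},
		},
	}
	out, _ := json.MarshalIndent(b, "", "  ")
	fmt.Println(string(out))
}
```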

Initially announced at DockerCon EU in December of 2018, our combined team has continued to work on the specifications, build tools, and explore better ways of delivering an easy-to-use cloud packaging experience.

Today’s Tools

Since our initial announcement of CNAB, Docker Apps has rolled CNAB into its production release. Microsoft has built Porter–an open source CNAB builder–and Datadog has led the charge on a CNAB security specification that provides not just a quick verification scheme, but deep software supply chain security.

Docker initially announced their CNAB support for Docker Apps with a great architectural introduction. At the end of last year, they explained how CNAB worked with application templates in Docker Desktop. For Docker, CNAB provides a convenient way to encapsulate applications built using core Docker technology, without requiring the user to learn yet another technology stack. And right now, the newly released Docker Compose specification is supported in Porter, providing a new avenue for integrating Docker’s excellent developer tooling with other cloud technologies.

Microsoft created the Porter project. We had already written a CNAB reference implementation (Duffle) designed to exercise the specification. But it was not necessarily designed to provide a great user experience. Porter, on the other hand, is a user-first design. Through mixins, Porter can support a vast range of cloud technologies, from Terraform to Helm to Docker Compose, making it easy to tailor a CNAB bundle to your preferred target cloud or technology stack.

Finally, thanks to the diligent work of Datadog, the CNAB group is preparing to publish a second specification: the CNAB Security 1.0 Specification. The initial security model for CNAB was designed alongside the core specification, but we wanted to make sure we did our due diligence. We have spent an extra year diving deeper into scenarios and vetting and collaborating with popular security products so that the model could be implemented with existing solutions.

Along with covering distribution security, this specification also provides a software supply chain security model. This means that from development through testing, and finally on into release, each step can be verified according to a robust security process. We believe CNAB represents a new generation of security tooling that reduces risk and increases the fidelity of cloud technologies.

Tomorrow’s Goals

CNAB is designed to operate well in enterprise environments. And the CNAB group has two more standards in flight. We are eagerly pushing these toward completion.

One of CNAB’s target environments is the “disconnected cloud.” From physically remote environments, such as research stations and oil rigs, to secure compartmentalized facilities, cloud technologies provide a robust platform even when disconnected from the internet. CNAB is intended to work in these environments, too, and that means CNAB must have a robust “air gap” story.

From day one, this has been a goal. Over the last two years, we have refined our model, goals, and features to meet this scenario best. The core specification is written with air-gapped environments in mind, as is the security specification. But our third specification, the CNAB Registry 1.0 Specification, is the last puzzle piece.

This specification describes how CNAB bundles (packages) are stored, discovered, downloaded, and moved. Utilizing the OCI Registry standard, this specification describes how users and tools will share packages. But it also provides details on how bundles can be moved across network boundaries in a high-fidelity manner. With this specification, CNAB becomes a compelling method for transporting sophisticated cloud-native applications from network to network–without sacrificing security or requiring copious amounts of manual labor.

Finally, we have one more specification in the works. The CNAB Claims 1.0 Specification describes how CNAB tools can share a common description of their deployed applications. For example, one tool can “claim” ownership over an application deployment, while another tool can access the shared information about that application and how it was deployed. This brings together distributed management, audit trails, and long-term tool interoperability.

Porter and Duffle already support claims, but we are excited to get a formal standard that enables information sharing across all of the tools in the CNAB ecosystem.

How to Get Involved

The CNAB specification is developed under an open source model. You can dive right in at cnab.io. There you will find the specifications, the common source libraries (like cnab-go), and our full command-line reference implementation, duffle.

Porter is also open source and is a great starting point if you wish to work with a user-friendly CNAB tool immediately.

We have even experimented with a graphical CNAB installer, and have some VS Code extensions to improve the development process.

Conclusion

Our goal with CNAB is to provide a package management story for the cloud. Just as it is easy to run an installer on our laptops or put a new app on our phones, it should be easy to install a new cloud application. That is the vision that CNAB relentlessly pursues.

We’d love to have you join up, take it for a test drive, and explore the possibilities.

The Linux Foundation introduces Cloud Engineer Bootcamp for cloud job seekers (ZDNet)

Steven J. Vaughan-Nichols writes at ZDNet about the Linux Foundation’s new Cloud Engineer Bootcamp:

While there are plenty of cloud classes out there, the Linux Foundation claims it’s the “first-ever bootcamp program, designed to take individuals from newbie to certified cloud engineer in six months.”

The Bootcamp bundles self-paced eLearning courses with certification exams and dedicated instructor support for a comprehensive and well-rounded educational program. As you would imagine for a Bootcamp from the Linux Foundation, it starts with Linux at the operating system layer. Since even Azure is now predominantly Linux, this actually makes good sense. From Linux, it moves up the stack, covering DevOps, cloud, containers, and Kubernetes.

Specifically, it comprises the following classes and exams:

Besides the classes, students will also have access to an online forum with other students and instructors. There will also be live virtual office hours with course instructors five days per week. If you enroll, you can expect to spend 15 to 20 hours per week on the materials to complete the Bootcamp in about six months. Upon completion, participants will receive LFCS and CKA certification badges, as well as a badge for completing the entire Bootcamp. Badges can be independently verified by potential employers at any time.

Read more at ZDNet

From Kernel Development Student to SysAdmin to Linux Author

In 2016, Ahmed Alkabary had just graduated from the University of Regina, where he earned degrees in computer science and mathematics. He began using Linux in the second year of his studies and quickly developed such a passion for it that he began extra studies outside of university to advance his skills. Ahmed’s enthusiasm for Linux even led him to develop a free course on Udemy to teach it to others; nearly 50,000 students have enrolled to date. Following the completion of his studies, Ahmed hoped to secure a job as a Linux system administrator.

Ahmed applied for and was selected as the recipient of a LiFT scholarship in the category of Academic Aces, which enabled him to enroll in the Linux Kernel Internals and Development (LFD420) training course and sit for the Linux Foundation Certified SysAdmin exam.

Red Hat: Holding Its Own and Fueling Open Source Innovation

When IBM acquired Red Hat for $34 billion in 2019, it was considered the industry’s largest software acquisition. The synergy between the two companies led them to become one of the leading hybrid multi-cloud providers globally.

In many acquisitions, the acquired entity loses momentum and sheds some of its original luster. This does not seem to be the case with Red Hat.

Distinct Identity
“I would define it as a separate company and that’s how we run it,” affirms Paul Cormier, President & CEO of Red Hat, who is credited with conceptualizing the company’s open hybrid cloud platform.

“We set our own strategy, we set our own road maps. It’s completely up to us. We have stayed as a self-contained company. Red Hat still has all the pieces to be a separate company: its own Engineering, product lines, back office, HR, Legal, and Finance. It’s very much like VMware is to Dell, or LinkedIn is to Microsoft,” he explains.

Cormier believes it’s important to have separate identities for partner ecosystems to thrive.

“We are talking about integrating Arc with OpenShift. IBM didn’t even know this was happening as we had kept it confidential,” he says.

Microsoft’s Azure Arc is a management tool for hybrid cloud application infrastructures, while OpenShift is a family of containerization software developed by Red Hat.

“We’re big on Intel platforms. We’re also big on IBM Z, IBM i, and IBM P. Since we support Intel in Red Hat Enterprise Linux (RHEL), we know their road maps long before they’re implemented. However, we have to show Intel that we would not give away their secrets to IBM. This is the most important reason that we must remain separate, so that those partner ecosystems remain,” he says.

Linux: The Innovation Engine
Cormier points out that what makes Red Hat unique is its completely open-source software development model.

“That’s our development model. Open source is not a thing — it’s an action,” he says.

Underlining the importance of Linux, he explains that “Linux went by Unix a long time ago in terms of features, function, and performance. However, Linux was so available that eventually, it became the innovation engine. All the technologies today — on the infrastructure and the development side, on the tools side — they are all built-in and around Linux. OpenShift is still a Linux platform. Its containers are Linux. All the innovation is now around that.”

Cormier is also confident of meeting the demand of customers adopting hybrid cloud.

“For us, it doesn’t matter whether it’s 20% on-premise, 20% in the cloud and 80% on-premise, or 60/40 or 50/50 — it’s still a hybrid world. I can’t predict if the COVID thing is going to push people to the cloud more quickly or more slowly, but we don’t care. It doesn’t matter. For us, it’s the same value proposition,” he avers.

Virtualization meets Kubernetes 
Red Hat is now working on bringing VMs into the Kubernetes architecture.

“As opposed to some of our competitors that are trying to bring containers back to their world, we’re moving in the other direction. We are working on advanced cluster management on Kubernetes. As customers increasingly go hybrid, having OpenShift with containers running in different places will help them easily manage across clusters,” Cormier says.

“We’re also focusing on telco 5G use cases on the OpenShift platform. We’re doing a lot of work with Verizon and the other telcos,” he adds.