
Instructor-Led Kubernetes Security Fundamentals Course Now Available

Kubernetes Security Fundamentals (LFS460) is the newest instructor-led training course from Linux Foundation Training & Certification and the Cloud Native Computing Foundation. Those taking the course will gain skills and knowledge covering a broad range of best practices for securing their clouds, container-based applications, and Kubernetes platforms during build, deployment, and runtime, and upon completion will be ready to take the Certified Kubernetes Security Specialist (CKS) certification exam. CKS exam registration is included with the instructor-led course, though only those who already hold the Certified Kubernetes Administrator (CKA) certification are permitted to sit for the exam.

This four-day course is taught by a live, expert instructor from The Linux Foundation. Anyone may enroll in a public course – the first of which is being offered March 29-April 2, 2021 – or organizations that wish to train a team may arrange a private course by contacting our Corporate Solutions team. Public courses are conducted online, with a live industry expert providing content and taking you through hands-on labs to give you the experience you need to secure container-based applications. The course covers more than just container security, exploring topics that span the entire lifecycle: from before a cluster has been configured, through deployment, to ongoing and agile use, including where to find ongoing security and vulnerability information.

The course covers similar content to the Kubernetes Security Essentials (LFS260) eLearning course, but with the added benefit of a live instructor. Before enrolling, participants are strongly encouraged to have taken the CKA exam or to possess the knowledge it covers. Familiarity with the skills and knowledge covered in that exam and the related Kubernetes Administration (LFS458) training is necessary to be successful in the new Kubernetes Security Fundamentals course.

Enroll today and get your team ready to address any potential cloud security issues.

The post Instructor-Led Kubernetes Security Fundamentals Course Now Available appeared first on Linux Foundation – Training.

Preventing Supply Chain Attacks like SolarWinds

In late 2020, it was revealed that the SolarWinds Orion software, which is in use by numerous US Government agencies and many private organizations, was severely compromised. This was an incredibly dangerous set of supply chain compromises that the information technology community (including the Open Source community) needs to learn from and take action on.

The US Cybersecurity and Infrastructure Security Agency (CISA) released an alert noting that the SolarWinds Orion software had included malicious functionality since March 2020, but it was not detected until December 2020. CISA’s Emergency Directive 21-01 stated that it was being exploited, had a high potential of compromise, and had a grave impact on entire organizations when compromised. Indeed, because Orion deployments typically control the networks of whole organizations, this is a grave problem. The more people look, the worse it gets. As I write this, it appears that second and third pieces of malware have been identified in Orion.

Why the SolarWinds Attack Is Particularly Noteworthy

What’s especially noteworthy is how the malicious code was inserted into Orion: the attackers subverted something called the build environment. When software is being developed, it is converted (compiled) from source code (the text that software developers write and update) into an executable package using a “build process.” For example, the source code of many open source software projects is built, compiled, and redistributed by other organizations, so that it is ready to install and run on various computing platforms. In the case of SolarWinds’ Orion, CrowdStrike found a piece of malware called Sunspot that watched the build server for build commands and silently replaced source code files inside the Orion app with files that loaded the Sunburst malware. The SolarWinds Orion compromise by Sunspot isn’t the first example of these kinds of attacks, but it has demonstrated just how dangerous they can be when they compromise widely-used software.
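To make the Sunspot technique concrete: one way a build pipeline can detect silent source-file replacement is to record cryptographic digests of the source tree before the build starts and compare them against the tree the compiler actually consumed. The sketch below is a hypothetical illustration in Python; the function names are my own, and this is not how SolarWinds’ or any particular vendor’s build system works.

```python
import hashlib
import os

def hash_file(path):
    """Return the SHA-256 digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def source_manifest(root):
    """Map each file under root (relative path) to its SHA-256 digest."""
    manifest = {}
    for dirpath, _, filenames in os.walk(root):
        for name in sorted(filenames):
            path = os.path.join(dirpath, name)
            manifest[os.path.relpath(path, root)] = hash_file(path)
    return manifest

def tampered_files(before, after):
    """Return files whose digests changed between two manifests --
    e.g. a source file silently swapped out mid-build."""
    return [f for f in before if after.get(f) != before[f]]
```

A defense like this only helps if the manifest itself is stored and checked outside the machine an attacker controls; Sunspot ran on the build server itself, which is exactly why the build environment must be treated as a critical system.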

Unfortunately, a lot of conventional security advice cannot counter this kind of attack:

SolarWinds’ Orion is not open source software. Only the company’s developers can legally review, modify, or redistribute its source code or its build system and configurations. If we needed further evidence that obscurity of software source code doesn’t automatically provide security, this is it.

Recommendations from The Linux Foundation 

Organizations need to harden their build environments against attackers. SolarWinds followed some poor practices, such as using the insecure ftp protocol and publicly revealing passwords, which may have made these attacks especially easy. The build system is a critical production system, and it should be treated like one, with the same or higher security requirements as its production environments. This is an important short-term step that organizations should already be doing. However, it’s not clear that these particular weaknesses were exploited or that such hardening would have made any difference. Assuming a system can “never be broken into” is a failing strategy.

In the longer term, I know of only one strong countermeasure for this kind of attack: verified reproducible builds. A “reproducible build” is a build that always produces the same outputs given the same inputs, so that the build results can be verified. A verified reproducible build is a process where independent organizations produce a build from source code and verify that the built results come from the claimed source code. Almost all software today is not reproducible, but there’s work underway to change this. The Linux Foundation and the Civil Infrastructure Platform have been funding work, including the Reproducible Builds project, to make verified reproducible builds possible.
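The verification step itself is conceptually simple: if a build is reproducible, independent rebuilders working from the same source must produce byte-identical artifacts, so their digests must match. A minimal Python sketch of that comparison (the function names and the choice of SHA-256 are illustrative assumptions, not part of any specific tool):

```python
import hashlib

def artifact_digest(data: bytes) -> str:
    """SHA-256 digest of a built artifact's bytes."""
    return hashlib.sha256(data).hexdigest()

def independently_verified(official: bytes, rebuilds: list) -> bool:
    """A release is verified when every independent rebuild from the
    claimed source produces a byte-identical artifact (same digest)."""
    want = artifact_digest(official)
    return all(artifact_digest(r) == want for r in rebuilds)
```

The hard part is not this comparison but everything upstream of it: pinning toolchains, timestamps, and build paths so that two builds actually are byte-identical.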

The software industry needs to begin shifting towards implementing and requiring verified reproducible builds. This will not be easy. Most software today is not designed to be built reproducibly, so it may take years to make software reproducible. Many changes must be made to make software reproducible, so resources (time and money) are often needed. And there’s a lot of software that needs to be reproducible, including operating system packages and library-level packages. There are package distribution systems that would need to be reviewed and likely modified. I would expect some of the most critical software to become reproducible first, with less critical software following over time as pressure increases to make more software verified reproducible. It would be wise to develop widely-applicable standards and best practices for creating reproducible builds. Once software is reproducible, others will need to verify the build results for given source code to counter these kinds of attacks. Reproducible builds are much easier for open source software (OSS) because there’s no legal impediment to having many verifiers. Closed source software developers will have added challenges; their business models often depend on hiding source code. It’s still possible to have “trusted rebuilders” worldwide verify closed source software, even though it’s more challenging and the number of rebuilders would necessarily be smaller.

The information technology industry is generally moving away from “black boxes” that cannot be inspected and verified and towards components that can be reviewed. So this is part of a general industry trend; it’s a trend that needs to be accelerated.

This is not unprecedented. Auditors have access to the financial data and review the financial systems of most enterprises. An audit is an independent entity’s verification of data and systems for the benefit of the ecosystem. There is a similar opportunity for organizations to become independent verifiers for both open source and closed source software and build systems.

Attackers will always take the easiest path, so we can’t ignore other attacks. Today most attacks exploit unintentional vulnerabilities in code, so we need to continue to work to prevent these unintentional vulnerabilities. These mitigations include changing tools & interfaces so those problems won’t happen, educating developers on developing secure software (such as the free courses from OpenSSF on edX), and detecting residual vulnerabilities before deployment through various detection tools. The Open Source Security Foundation (OpenSSF) is working on improving the security of open source software (OSS), including all these points.

Applications are mostly reused software (with a small amount of custom code), so the software supply chain of this reused software is critical. Reused components are often extremely out-of-date and thus have many publicly-known unintentional vulnerabilities; in fact, reused components with known vulnerabilities are among the most common problems in web applications. The LF’s LFX security tools, GitHub’s Dependabot, GitLab’s dependency analyzers, and many other tools & services can help detect reused components with known vulnerabilities.
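At their core, such dependency-checking tools match the versions an application pins against a database of known-vulnerable releases. The sketch below illustrates only that core idea; the `ADVISORIES` data and package names are made up for illustration, and real tools like Dependabot consume curated vulnerability databases rather than a hard-coded table.

```python
# Hypothetical advisory data: package name -> versions known vulnerable.
ADVISORIES = {
    "examplelib": {"1.0.0", "1.0.1"},
}

def parse_requirements(text):
    """Parse simple 'name==version' lines from a requirements-style file,
    skipping blanks and comments."""
    deps = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "==" in line:
            name, version = line.split("==", 1)
            deps[name.strip().lower()] = version.strip()
    return deps

def vulnerable_deps(deps, advisories=ADVISORIES):
    """Return (name, version) pairs pinned to a known-vulnerable release."""
    return [(n, v) for n, v in deps.items() if v in advisories.get(n, set())]
```

The hard problems in practice are the ones this sketch ignores: resolving transitive dependencies, handling version ranges, and deciding whether a flagged vulnerability is actually exploitable in context.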

Vulnerabilities in widely-reused OSS can cause widespread problems, so the LF is already working to identify such OSS so that it can be reviewed and hardened further (see Vulnerabilities in the Core Preliminary Report and Census II of Open Source Software).

The supply chain matters for malicious code, too; most malicious code gets into applications through library “typosquatting” (that is, by creating a malicious library with a name that looks like a legitimate library).
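One simple heuristic a defense against typosquatting can use is flagging package names that sit within a small edit distance of a popular package’s name. The Python sketch below is illustrative only; real package registries apply more sophisticated checks, and the names here are hypothetical.

```python
def edit_distance(a, b):
    """Levenshtein distance between two strings (dynamic programming)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def likely_typosquats(name, popular, max_distance=1):
    """Flag popular package names within a small edit distance of `name`
    (but not identical to it) as possible typosquat targets."""
    return [p for p in popular
            if p != name and edit_distance(name, p) <= max_distance]
```

A name like "requsts" lands one edit away from a popular library’s name, which is exactly the pattern typosquatters rely on.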

That means users need to start asking for a software bill of materials (SBOM) so they will know what they are using. The US National Telecommunications and Information Administration (NTIA) has been encouraging the adoption of SBOMs throughout organizations and the software supply chain process. The Linux Foundation’s Software Package Data Exchange (SPDX) format is an SBOM format used by many. Once you get SBOM information, examine the versions that are included. If the software has malicious components, or components with known vulnerabilities, start asking why. Some vulnerabilities may not be exploitable, but too many application developers simply don’t update dependencies even when the vulnerabilities are exploitable. To be fair, there’s a chicken-and-egg problem here: specifications are in the process of being updated, tools are in development, and many software producers aren’t ready to provide SBOMs. So users should not expect that most software producers will have SBOMs ready today. However, they do need to create a demand for SBOMs.

Similarly, software producers should work towards providing SBOM information. For many OSS projects this can typically be done, at least in part, by providing package management information that identifies their direct and indirect dependencies (e.g., in package.json, requirements.txt, Gemfile, Gemfile.lock, and similar files). Many tools can combine this information to create more complete SBOM information for larger systems.
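As a rough sketch of that first step, the code below pulls direct dependencies out of a package.json-style document and flattens them into a minimal component list. This is a toy subset of what a real SBOM format such as SPDX records; the field names are my own, not SPDX’s, and real SBOM tooling would also resolve indirect dependencies.

```python
import json

def direct_deps_from_package_json(text):
    """Extract direct dependencies (name -> version spec) from a
    package.json-style JSON document."""
    data = json.loads(text)
    deps = {}
    deps.update(data.get("dependencies", {}))
    deps.update(data.get("devDependencies", {}))
    return deps

def minimal_component_list(deps):
    """Flatten dependency info into a minimal, sorted component list --
    a tiny illustrative subset of what a real SBOM records."""
    return sorted(({"name": n, "versionSpec": v} for n, v in deps.items()),
                  key=lambda c: c["name"])
```

Combining lists like this across every package in a system, and pinning version specs down to exact resolved versions, is what the "many tools" mentioned above automate.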

Organizations should invest in OpenChain conformance and require their suppliers to implement a process designed to improve trust in a supply chain. OpenChain’s conformance process reveals specifics about the components you depend on, which is a critical first step toward countering many supply chain attacks.


The attack on SolarWinds’ Orion will have devastating effects for years to come. But we can and should learn from it.

We can:

  1. Harden software build environments
  2. Move towards verified reproducible builds
  3. Change tools & interfaces so unintentional vulnerabilities are less likely
  4. Educate developers (such as the free courses from OpenSSF on edX)
  5. Use vulnerability detection tools when developing software
  6. Use tools to detect known-vulnerable components when developing software
  7. Improve widely-used OSS (the OpenSSF is working on this)
  8. Ask for a software bill of materials (SBOM), e.g., in SPDX format. Many software producers aren’t ready to provide one yet, but creating the demand will speed progress
  9. Determine if subcomponents we use have known vulnerabilities
  10. Work towards providing SBOM information if we produce software for others
  11. Implement OpenChain

Let’s make it much harder to exploit the future systems we all depend on. Those who do not learn from history are often doomed to repeat it.

David A. Wheeler, Director of Open Source Supply Chain Security at the Linux Foundation

The post Preventing Supply Chain Attacks like SolarWinds appeared first on Linux Foundation.

eBook: Common Open Source Practices in Developing Cloud Native Applications

The TARS Foundation has recently released a new eBook, Common Open Source Practices in Developing Cloud-Native Applications. The following is an overview of the book; click here to download it.

With the advent of digital transformation, enterprises are facing more difficult business realities.  As cloud computing has continued to grow, Cloud-Native applications have become a critical driving force for business innovation. Migrating to the cloud-native model can help businesses boost their productivity and increase competitiveness in the market. 

Cloud-Native technologies take advantage of different environments such as public cloud, private cloud, and hybrid cloud to build and run scalable applications that are easy to manage and monitor. Through Cloud-Native technologies, enterprises can enable faster software delivery cycles and drastically improve applications’ agility, elasticity, and availability. 

In this eBook, you will find information about popular open source technologies used in different areas of Cloud-Native applications, such as containers, container orchestration, and microservices. We will highlight the most notable and relevant open source projects, including Docker, Kubernetes, Istio, and the Kubernetes-native solution for TARS services, to help you gain a quick understanding of the Cloud-Native tools available today.

About The TARS Foundation

The TARS Foundation is a nonprofit, open source microservices foundation under the Linux Foundation umbrella, created to support the rapid growth of contributions and membership for a community focused on building an open microservices platform. It focuses on open source technology that helps businesses embrace microservices architecture as they innovate into new areas and scale their applications. It continues to work on addressing the problems that may occur in using microservices and aims to accommodate a variety of bottom-up content to build a better microservices ecosystem.

Deploying a virtual TripleO standalone OpenStack system

A walk-through of how to deploy a virtualized TripleO standalone system, including creating the components needed to launch and connect to a VM. Also included: how to clean up the deployment.
Read More at Enable Sysadmin

System administration is dead, long live system administration!

Comparing the skillsets of sysadmins from the “Gilded Age” of administration to those of the “Industrial Age.”
Scott McBrien
Wed, 12/30/2020 at 9:43pm



A few weeks ago, I talked with the venerable Ken Hess on the “Red Hat Enterprise Linux Presents …” live stream. The topic of discussion was general systems administration practices, and it became clear that Ken and I have very different opinions of what that is.

Read More at Enable Sysadmin

An introduction to hashing and checksums in Linux

Always wondered how to make use of checksums? This introduction shows you what they mean and how to use the proper tools to verify the integrity of a file.
Read More at Enable Sysadmin

Formatting tricks for the Linux date command

The Linux date command is simple, yet powerful. This article shows you how to unleash the power of the date command.
Read More at Enable Sysadmin

5 advanced rsync tips for Linux sysadmins

Use rsync compression and checksums to better manage file synchronization.
Read More at Enable Sysadmin

13 questions for a quantum architect

With quantum computing on the horizon, take a look at which type of architect would be needed and what companies need to consider to build such complex systems.
Joachim Haller
Mon, 12/21/2020 at 6:22am



With quantum computers already available for commercial use, albeit not in great quantity, I believe it is time for companies to start considering how to incorporate them into their arsenal of IT services. Time is running out for security teams to get their defenses in order ahead of the first quantum computer attack.

Right now, it’s not really possible to just buy the latest quantum computer model and take it for a spin. It takes some real architectural brain power to actually make such a computational beast fit into an existing structure.

Read More at Enable Sysadmin

Centaurus Infrastructure Project Joins Linux Foundation to Advance Cloud Infrastructure for 5G, AI and Edge

Centaurus today is becoming a Linux Foundation Project. The Centaurus Infrastructure Project is a cloud infrastructure platform for building distributed clouds and a platform for modern cloud native computing. It supports applications and workloads for 5G, Edge, and AI, and unifies the orchestration, network provisioning, and management of cloud compute and network resources at a regional scale.

Founding members include Click2cloud, Distributed Systems, Futurewei, GridGain Systems, Reinvent Labs, SODA Foundation and TU Wien Informatics. Centaurus is an umbrella project for modern distributed computing and hosts both Arktos and Mizar. Arktos is a compute cluster management system designed for large-scale clouds, while Mizar is a high-performance cloud network powered by eXpress Data Path (XDP) and the Geneve protocol for large-scale clouds. More members and projects are expected to be accepted in the coming months.

“The market is changing and customers require a new kind of cloud infrastructure that will cater to modern applications and workloads for 5G, AI and Edge,” said Mike Dolan, senior vice president and general manager for Linux Foundation Projects. “Centaurus is a technical project with strategic vision, and we’re looking forward to a deep collaboration that advances cloud native computing for generations to come.” 

Current cloud infrastructure technology needs are evolving, requiring companies to manage a larger scale of compute and network resources across data centers and more quickly provision those resources. Centaurus unifies management across bare metal, VMs, containers and serverless, while reducing operational costs and delivering on the low latency and data privacy requirements of edge networks. Centaurus offers a consistent API experience to provision and manage virtual machines, containers, serverless and other types of cloud resources by combining traditional Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) layers into one common infrastructure platform that can simplify cloud management.

“The Linux Foundation’s support in expanding the Centaurus community will accelerate cloud native infrastructure for the most pressing compute and networking demands,” said Dr. Xiong Ying, the current acting TSC chair, Centaurus Infrastructure Project. “Its large network of open source developers and projects already supporting this future will enable mass collaboration and important integrations for 5G, AI and Edge workloads.”

To contribute to Centaurus, please visit: https://www.centauruscloud.io/

Supporting Member Quotes

“Click2cloud has been part of the development of Centaurus, which is world class software that will lead organizations to have a clear transition from IaaS to Cloud Native Infrastructure. Click2cloud has already started a development program to enable the journey from IaaS (Openstack) to Cloud Native migration, 5G cloud based on Centaurus reference architecture to support the partner ecosystem. We are very excited for Centaurus to be a part of Linux Foundation,” said Prashant Mishra, CEO, Click2cloud. 

“Distributed cloud architecture is a natural evolution for cloud computing infrastructure. Centaurus is a cloud native infrastructure platform aiming to unify management and orchestration of virtual machines, containers, and other forms of cloud resources natively at scale and at the edge. We have seen many enterprise users and partners wanting a unified solution to build their distributed cloud to manage virtual machines, containers or bare metal-based applications running at cloud as well as at edge sites. We are very pleased to see, today, the Centaurus Infrastructure project becomes a Linux Foundation open-source project, providing an option for community and enterprise users to build their cloud infrastructure to run and manage next generation applications such as AI, 5G and IoT. We look forward to working with the open-source community to realize the vision of Centaurus,” said Dr. Xiong Ying, Sr. Technical VP, Head of Cloud Lab, Futurewei. 

GridGain Systems
“Creating and managing a unified and scalable distributed cloud infrastructure that extends from cloud to edge is increasingly a challenge for organizations worldwide. GridGain Systems has been a proud sponsor and active participant in the development of in-memory computing solutions to support the Centaurus project. We look forward to helping organizations realize the benefits of Centaurus and continuing to help extend its scalability and adoption,” said Nikita Ivanov, Co-Founder and CTO, GridGain Systems. 

Reinvent Labs
“We are a young company, which specializes in cloud computing and delivering cloud-native solutions to our customers across various industries. As such, we are ever stronger witnessing the need to manage cloud services and applications that span across complex and heterogeneous infrastructures, which combine containers, VMs and serverless functions. What is more, such infrastructures are also starting to grow beyond traditional cloud platforms towards the edge on the network. Being part of the Centaurus project will not only allow us to innovate in this space and deliver a platform for unified management of infrastructure resources across both large Cloud platforms and the Edge, but it will also enable us to connect and collaborate with like-minded members for thought leadership and industry best practices,” said Dr. Stefan Nastic, founder and CEO of Reinvent Labs GmbH. 

The SODA Foundation
“The SODA Open Data Framework is an open source data and storage management framework that goes from the edge to the core to the cloud. Centaurus offers the opportunity for SODA to be deployed in the next generation cloud infrastructure for 5G, AI and Edge, and allows both communities to innovate together,” said Steven Tan, SODA Foundation Chairman and VP & CTO Cloud Solution, Storage at Futurewei. 

TU Wien
“We are very excited to be part of the Centaurus ecosystem and honored to be part of this open source movement and contributing in the fields of IoT, Edge intelligence, and Edge and Cloud Computing, including networking and communication aspects, as well as orchestration, resource allocation, and task scheduling,” said Prof. Schahram Dustdar, IEEE Fellow, Member Academia Europaea Professor of Distributed Systems, TU Wien, Austria.

The post Centaurus Infrastructure Project Joins Linux Foundation to Advance Cloud Infrastructure for 5G, AI and Edge appeared first on Linux Foundation.