
Secure software supply chains: good practices, at scale

Here at The Linux Foundation’s blog, we share content from our projects, such as this article from the Cloud Native Computing Foundation’s blog. The guest post was originally published on Contino Engineering’s blog by Dan Chernoff. 

Supply chain attacks rose by 42% in the first quarter of 2021 [1] and are becoming even more prevalent [2]. In response to software supply chain breaches like SolarWinds [3], Kaseya [4], and other less publicized compromises [5], the Biden administration issued an executive order [6] that includes guidance designed to improve the federal government’s defense against cyber threats. With all of this comes the inevitable slew of blog posts that detail a software supply chain and how you would protect it. The Cloud Native Computing Foundation recently released a white paper on software supply chain security [7], an excellent summary of the current best practices for securing your software supply chain.

The genesis for the content in this article is work done to implement secure supply chain patterns and practices for a Contino customer. The core goals for the effort were to implement a pipeline-agnostic solution that ensures the security of the pipelines and enables secure delivery for the enterprise. We’ll talk a little about why we chose the tools we did in each section and how they supported the end goal.

To set the context for the rest of the blog post, we’ll first touch on what a secure software supply chain is and why you should have one. Beyond that, let’s assume you have already decided that your software supply chains need to be secure and that you want to implement the capability for your enterprise. So let’s get into it!

Anteing Up

Before you embark upon the quest of establishing provenance for your software at scale, there are some table-stakes elements that teams should already have in place. We won’t delve deeply into any of them here other than to list and briefly describe them.

Centralized Source Control. Git is by far the most popular choice. This ensures a single source of truth for development teams. Beyond just having source control, teams should also sign their Git commits.

Static Code Analysis. This identifies possible vulnerabilities within ‘static’ (non-running) source code by using techniques such as Taint Analysis and Data Flow Analysis. Analysis and results need to be incorporated into the cadence of development.

Vulnerability Scanning. Implement automated tools that scan the applications and containers that are built to identify potential vulnerabilities in the compiled and sometimes running applications.

Linting. A linter analyzes source code to flag programming errors, bugs, and stylistic issues. Linting is important to reduce errors and improve overall code quality, which in turn accelerates development.

CI/CD Pipelines. New code changes are automatically built, tested, versioned, and delivered to an artifact repository. A pipeline then automatically deploys the updated applications into your environments (e.g. test, staging, production, etc.).

Artifact Repositories. Provide management of the artifacts built by your CI/CD systems. An artifact repository can help with the versioning and access control of your artifacts.

Infrastructure as Code (IaC) is the process of managing and provisioning infrastructure (e.g. virtual machines, databases, load balancers, etc.) through code. As with applications, IaC provides a single source of truth for what the infrastructure should look like. It also provides the ability to test before deploying to production.

Automated…well, everything. Human-in-the-loop systems are not deterministic. They are prone to error, which can and will cause outages and security gaps. Manual systems also inhibit the ability of platforms to scale quickly.

What Is a Secure Software Supply Chain?

A software supply chain consists of anything that goes into the creation of your end software product and the mechanisms you use to deliver the product to customers. This includes things like your source code, your build systems, third-party libraries, deployment infrastructure, and delivery repositories.

Attributes:

Establishes Provenance — One part of establishing provenance is ensuring that any artifact created and accessed by the customer can trace its lineage all the way back to the developer(s) that merged the latest commit. The other part is the ability to demonstrate (or attest) that for each step in the process, the software, components, and other materials that go into creating the final product are tamper-free.
Trust — Downstream systems and users need a mechanism to verify that the software that is being installed or deployed came from your systems and that the version being used is the correct version. This ensures that malicious artifacts have not been substituted or that older, vulnerable versions have not been relabeled as the current version.
Transparency — It should be easy to see the results and details of all steps that go into the creation of the final artifact. This can include things like test results, output from vulnerability scans, etc.

Key Elements of a Secure Software Supply Chain

Let’s take a closer look at the things that need to be layered into your pipelines to establish provenance, enable transparency, and ensure tamper resistance.

Here is what a typical pipeline that creates a containerized application might look like. We’ll use this simple pipeline and add elements as we discuss them.

Establishing Provenance Using in-toto

The first step in our journey is to establish that artifacts built via a pipeline have not been tampered with and to do so in a reliable and repeatable way. As we mentioned earlier, part of this is creating evidence to use as part of the verification. in-toto is an open-source tool that creates a snapshot of the workspace where the pipeline step is running.

These snapshots (“link files” in in-toto terminology) are used to verify the integrity of the pipeline. The core idea behind in-toto is the concept of materials and products and how they flow, just like in a factory. Each step in the process usually takes in some material and creates a product. In the build step, for example, the code is the material, and the built artifact (jar, war, etc.) is the product. A later step in the pipeline will use that built artifact as its material and produce another product. In this way, in-toto allows you to chain materials and products together and detect whether a material was tampered with during or between pipeline steps — for example, if the artifact constructed during the build step changed before testing.
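To make this concrete, here is a hedged sketch of wrapping a build step with the in-toto-run CLI, driven from Python in the spirit of pipeline-agnostic tooling. The step name, key path, materials, products, and build command are all placeholders, not values from the actual engagement:

```python
import subprocess

# Wrap the build step with in-toto so the materials (source) and products
# (the built artifact) are recorded and signed into a .link file.
# Step name, key path, paths, and the Maven command are placeholders.
subprocess.run(
    [
        "in-toto-run",
        "--step-name", "build",
        "--key", "keys/build",           # functionary's private signing key
        "--materials", "src/",           # what the step consumes
        "--products", "target/app.war",  # what the step produces
        "--",                            # everything after -- is the wrapped command
        "mvn", "package",
    ],
    check=True,  # fail the pipeline if the build (or recording) fails
)
```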

At the end of the pipeline, in-toto evaluates the link data (the attestation created at each step) against an in-toto layout (think Jenkinsfile for attestation) and verifies that all the steps were done correctly and by the approved people or systems. This verification can run anytime the product of the pipeline (container, war, etc.) needs to be verified.
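Verification itself can then be a single call wherever the artifact needs to be checked. This sketch assumes the layout, the layout owner’s public key, and the collected link files are present in the working directory; file names are placeholders:

```python
import subprocess

# Verify the whole chain: every step's link file is checked against the
# signed layout. A non-zero exit code means the supply chain does not verify.
subprocess.run(
    [
        "in-toto-verify",
        "--layout", "root.layout",
        "--layout-keys", "keys/owner.pub",
    ],
    check=True,
)
```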

Critical takeaways for establishing provenance

in-toto runs at every step of the process. During verification, the attestations are compared against an overarching layout. This process enables consumers (users and/or deployment systems) to have confidence that the artifacts built were not altered from start to finish.

Establishing Trust Using TUF

You can use in-toto verification to know that an artifact was delivered or downloaded without modification. To do that, you need to download the artifact(s), the in-toto link files used during the build, the in-toto layout, and the public keys to verify it all. That is a lot of work. An easier way is to sign the produced artifacts with a system that enables centralized trust. The most mature framework for doing so is TUF (The Update Framework).

TUF is a framework that gives consumers of artifacts guarantees that the artifact downloaded or automatically installed came from your systems and is the correct version. The guts of how to accomplish this are outside the scope of this blog post. The functionality we are interested in is verifying that an artifact came from the producer we expected and that the version is the expected version.
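To illustrate the consumer side of that guarantee, here is a minimal client sketch using the python-tuf ngclient API. The URLs, directories, and artifact name are assumptions, and a trusted root.json must already be in place in the metadata directory (distributed out of band):

```python
from tuf.ngclient import Updater

# Minimal TUF client sketch (python-tuf ngclient). Assumes a trusted
# root.json is already in metadata_dir; all names here are placeholders.
updater = Updater(
    metadata_dir="/var/lib/tuf/metadata",
    metadata_base_url="https://artifacts.example.com/metadata/",
    target_base_url="https://artifacts.example.com/targets/",
    target_dir="/var/lib/tuf/targets",
)
updater.refresh()  # securely update root/timestamp/snapshot/targets metadata

info = updater.get_targetinfo("app-1.4.2.war")  # hypothetical artifact name
if info is None:
    raise SystemExit("artifact is not in the trusted metadata")
path = updater.download_target(info)  # downloads and verifies hashes/length
print(f"verified artifact at {path}")
```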

Implementing TUF on your own is a fair bit of work. Fortunately, an out-of-the-box implementation of TUF is available: Docker Content Trust (a.k.a. Notary). Notary enables the signing of regular files as well as containers. In our example pipeline, we sign the container image at build time. This signing allows any downstream system or user to verify the authenticity of the container.
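In a pipeline, enabling signing can be as simple as setting the Docker Content Trust environment variable when pushing. The registry and image name below are placeholders:

```python
import os
import subprocess

image = "registry.example.com/team/app:1.4.2"  # placeholder image name

# Build as usual...
subprocess.run(["docker", "build", "-t", image, "."], check=True)

# ...then push with Docker Content Trust enabled, which signs the image
# via Notary as part of the push (signing keys are created on first use).
env = dict(os.environ, DOCKER_CONTENT_TRUST="1")
subprocess.run(["docker", "push", image], check=True, env=env)

# Consumers who also set DOCKER_CONTENT_TRUST=1 will have pulls fail for
# any image that is unsigned or whose signature does not verify.
```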

Transparency: Centralized Data Storage

One gap in in-toto as a solution is that it has no mechanism to persist the link data it creates. It is up to the team implementing in-toto to capture and store the link data somewhere. All the valuable metadata for each step can be captured and stored outside of the build system. The goal is twofold: the first is to store the link data outside the pipeline so that teams can retrieve it and use it anytime verification needs to run on the artifacts the pipeline produced. The second is to store the metadata around the build process outside the pipeline, which enables teams to implement visualizations, monitoring, metrics, and rules on the data produced from the pipeline without necessarily needing to keep it in the pipeline.

The Contino team created metadata capture tooling that is independent and agnostic of the pipeline. We chose to write a simple Python tool that captures the metadata and in-toto link data and stores it in a database. If the CI/CD platform is reasonably standard, you can likely use built-in mechanisms to achieve the same results. For example, the Jenkins Logstash plugin can capture the output of a build step and persist the data to an Elasticsearch datastore.
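Here is a stripped-down sketch of that capture idea. The schema, table, and function names are illustrative rather than the actual Contino tooling, and SQLite stands in for whatever datastore you choose:

```python
import json
import sqlite3
from pathlib import Path

def store_link_data(db_path: str, pipeline: str, build_id: str, workspace: str) -> None:
    """Persist every in-toto .link file from a build workspace to a database,
    keyed by pipeline and build, so verification can run outside the CI system.
    Schema and names are illustrative placeholders."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS link_data (
               pipeline TEXT, build_id TEXT, step TEXT, payload TEXT)"""
    )
    for link_file in Path(workspace).glob("*.link"):
        payload = json.loads(link_file.read_text())
        step = payload["signed"]["name"]  # step name recorded by in-toto-run
        conn.execute(
            "INSERT INTO link_data VALUES (?, ?, ?, ?)",
            (pipeline, build_id, step, json.dumps(payload)),
        )
    conn.commit()
    conn.close()
```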

PGP and Signing Keys

Keys used to sign and verify link data and artifacts/containers are a core component of both in-toto and Notary. in-toto uses PGP private keys to internally sign the link data produced at each step. That signing ties the link data to the person or system that performed the action. It also ensures that any alteration or tampering with the link data can be easily detected.

Notary uses public and private keys generated with the Docker or Notary CLI. The public keys are stored in the Notary database, and the private keys sign the containers or other artifacts.

Scaling Up

For a small set of pipelines, manually implementing and managing secure software supply chain practices is straightforward. An enterprise with hundreds, if not thousands, of pipelines requires additional automation.

Automate in-toto layout creation. As mentioned earlier, in-toto has a file akin to a Jenkinsfile that dictates which people or systems can complete a pipeline step, the material and product flow, and how to inspect/verify the final artifact(s). Embedded in this layout are the IDs of the PGP keys of the people or systems who can perform steps. Additionally, the layout is internally signed so that any tampering can be detected once the layout is created. To manage this at scale, the layouts need to be automatically created or re-created on demand. We approach this as a pipeline that automatically runs on changes to the code that creates layouts; the output of that pipeline is the layouts, which are treated as artifacts themselves.
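As a rough sketch of what such a generator can look like, here is a hedged example based on in-toto’s Python API (method names follow in-toto’s layout-creation documentation at the time of writing). The key paths, step definition, and expected command are placeholders that a real generator would derive from pipeline configuration:

```python
from securesystemslib.interface import (
    import_rsa_privatekey_from_file,
    import_rsa_publickey_from_file,
)
from in_toto.models.layout import Layout, Step
from in_toto.models.metadata import Metablock

# Generate and sign a layout. Paths, the step, and the expected command
# are placeholders; a real generator derives them from configuration.
owner_key = import_rsa_privatekey_from_file("keys/owner")
build_pubkey = import_rsa_publickey_from_file("keys/build.pub")

layout = Layout()
layout.set_relative_expiration(months=1)  # forces periodic regeneration
layout.add_functionary_key(build_pubkey)  # who may perform steps

build = Step(name="build")
build.pubkeys = [build_pubkey["keyid"]]
build.set_expected_command_from_string("mvn package")
layout.steps = [build]

metablock = Metablock(signed=layout)
metablock.sign(owner_key)        # internal signature makes tampering detectable
metablock.dump("root.layout")    # version and store like any other artifact
```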

Treat in-toto layouts like artifacts. in-toto layouts are artifacts, just like containers, jars, etc. Layouts should be versioned, and the layout version linked to the version of the artifact. This versioning enables artifacts to be re-verified later with the layout, link files, and keys that were relevant at artifact creation time.

Automate the creation of the signing keys. Signing keys used by autonomous systems should be rotated frequently and through automation. Doing this limits the likelihood of compromise of the signing keys used by in-toto and Notary. For in-toto, this frequent rotation will require the automatic re-creation of the in-toto layouts. For Notary, cycling the signing keys will require revoking the old key when the new key is implemented.

Store and use signing keys from a secret store. When generating signing keys for use by automated systems, storing the keys in a secret management system like HashiCorp’s Vault is an important practice. The automated system (e.g., Jenkins, GitLab CI) can then retrieve the signing keys when needed. Centrally storing the signing keys combats “secrets sprawl” in an enterprise and enables easier management.
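For example, with Vault’s KV v2 secrets engine and the hvac Python client, key retrieval from a pipeline job might look like the following. The auth method, secret path, and field name are assumptions about how your Vault is organized, not a fixed convention:

```python
import hvac

# Fetch a signing key from Vault at job start. Path and field names are
# placeholders for however your secrets are actually organized.
client = hvac.Client(url="https://vault.example.com")
client.auth.approle.login(role_id="...", secret_id="...")  # or another auth method

secret = client.secrets.kv.v2.read_secret_version(path="ci/in-toto/build-key")
signing_key_pem = secret["data"]["data"]["private_key"]
```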

Pipelines should be roughly similar. A single in-toto layout can be used by many pipelines, as long as they operate in the same way. For example, pipelines that build a Java application and produce a WAR as the artifact probably operate in roughly the same way. These pipelines can all use the same layout if they are similar enough.

Wrapping it All Up

Using the technologies, patterns, and practices described here, the Contino team was able to deliver an MVP-grade solution for the enterprise. The design can scale up to thousands of application pipelines and help ensure software supply chain security for the enterprise.

At its core, a secure software supply chain encompasses anything that goes into building and delivering an application to the end customer. It is built on the foundations of secure software development practices (e.g., following the OWASP Top 10, SAST, etc.). Any implementation of secure supply chain best practices needs to establish provenance for all aspects of the build process, provide transparency into all steps, and create mechanisms that ensure trustworthy delivery.

Sources:

[1] https://www.propertycasualty360.com/2021/04/13/supply-chain-attacks-rose-42-in-q1/?slreturn=20210726153708

[2] https://portswigger.net/daily-swig/four-fold-increase-in-software-supply-chain-attacks-predicted-in-2021-report

[3] https://www.npr.org/2021/04/16/985439655/a-worst-nightmare-cyberattack-the-untold-story-of-the-solarwinds-hack

[4] https://www.zdnet.com/article/updated-kaseya-ransomware-attack-faq-what-we-know-now/

[5] https://portswigger.net/daily-swig/researcher-hacks-apple-microsoft-and-other-major-tech-companies-in-novel-supply-chain-attack

[6] https://www.whitehouse.gov/briefing-room/presidential-actions/2021/05/12/executive-order-on-improving-the-nations-cybersecurity/

[7] https://github.com/cncf/tag-security/raw/main/supply-chain-security/supply-chain-security-paper/CNCF_SSCP_v1.pdf


How to write an Ansible plugin to create inventory files

Take advantage of Ansible’s ecosystem to write inventory files for your playbooks.

Read More at Enable Sysadmin


How to write a Python script to create dynamic Ansible inventories

Write a script in Python that fetches hosts using Nmap to generate dynamic inventories.

Read More at Enable Sysadmin

MLH Fellowship Opens Applications for this Summer’s Production Engineering Track

For the second summer, Major League Hacking (MLH) is running the Production Engineering Track of the MLH Fellowship, powered by Meta. This 12-week educational program is 100% remote and uses industry-leading curriculum from Linux Foundation Training & Certification. The program is hands-on, project-based, and teaches students how to become Production Engineers. The goal of the program is for all participants to land a job or internship in the Site Reliability Engineering space, and it will be open to 100 active college students who meet our admissions criteria.

This Summer’s program will start on May 31, 2022 and will end on August 19, 2022.

Applications are now open and will close on May 23, 2022!

Apply Now!

What is Production Engineering?

Production Engineering, also known as Site Reliability Engineering and DevOps, is one of the most in-demand skill sets that leading technology companies are hiring for. However, it is not widely available as a class offering in university settings.

At Meta, Production Engineers (PEs) are a hybrid between software and systems engineers and are core to the engineering efforts that keep Meta platforms running and scaling. PEs work within Meta’s product and infrastructure teams to make sure products and services are reliable and scalable; this means writing code and debugging hard problems in production systems across Meta services – like Instagram, WhatsApp, and Oculus – and backend services like Storage, Cache, and Network.

What is the Production Engineering Track of the MLH Fellowship?

Launched in the summer of 2020, the MLH Fellowship first focused on Open Source Software projects, pairing early career software engineers with projects and engineers from widely-used open source codebases (like AWS, GitHub, and Solana Labs). During the program, Fellows learned important concepts and software practices while contributing production-level code to their projects and showcasing those contributions in their portfolio. Through the Fellowship, 700 global alumni have learned Open Source skills and tools and increased their professional networks in the process.

The Production Engineering Track takes this proven fellowship model and expands on it. As part of the Production Engineering Track, fellows are put in groups of 10 (“Pods”), matched to dedicated mentors from Meta Engineering while they work through projects and curriculum, and receive guidance from Meta’s Talent Acquisition team, too. Successful program graduates will be invited to apply to full-time Meta internships.

What will admitted fellows learn in the Production Engineering Track?

Program participants will gain practical skills from educational content – adapted by the MLH Curriculum Team – licensed from the Linux Foundation’s “Essentials of System Administration” course. The program covers how to administer, configure, and upgrade Linux systems, along with the tools and concepts necessary to build and manage a production Linux infrastructure. The complete list of topics covered in the program includes:

Linux Fundamentals
Scripting
Databases
Services
Testing
Containers
CI/CD
Monitoring
Networking
Troubleshooting
Interview skills

By pairing this industry-leading curriculum with hands-on, project-based learning – and engineering mentors from Meta – fellows in the Production Engineering Track greatly build on their programming knowledge. Fellows will learn a broader array of technology skills, opening the door to new career options in SRE.

What are the important dates I should know about?

The program will be available to roughly 100 aspiring software engineers and will start on May 31, 2022 and end on August 19, 2022.

Applications are now open and will close on May 23, 2022!

Will I get paid as part of the program?

Each successful participant will earn an educational stipend adjusted for Purchasing Power Parity for the country they’re located in.

Who is eligible?

Eligible students are:

Rising sophomores or juniors enrolled in a 4-year degree-granting program
United States, Mexico, or Canada-based
Able to code in at least one language (preferably Python)
Can dedicate at least 30 hours/week for the 12 weeks of the program

MLH invites and encourages people to apply who identify as women or non-binary. MLH also invites and encourages people to apply who identify as Black/African American or LatinX. In partnership with Meta, MLH is committed to building a more diverse and inclusive tech industry and providing learning opportunities to under-represented technologists.

Apply Now!

This article was originally posted at Major League Hacking.


Traceroute vs. tracepath: What’s the difference?

Learn the differences between these two important network troubleshooting commands and when you should use traceroute or tracepath.

Read More at Enable Sysadmin


How to create dynamic inventory files in Ansible

Learn how to use the host_list and Nmap plugins to build inventory files for your Ansible playbooks.

Read More at Enable Sysadmin

How to get started with MySQL and MariaDB

Learn how to install, view, and query data in MySQL and its open source implementation, MariaDB.

Read More at Enable Sysadmin

The ELISA Project Strengthens its Focus on Automotive Use Cases with Expertise from New Members Automotive Intelligence and Control of China, LOTUS Cars and ZTE

Register for the ELISA Spring Workshop on April 5-7 to Learn More

SAN FRANCISCO – March 23, 2022 –  Today, the ELISA (Enabling Linux in Safety Applications) Project, an open source initiative that aims to create a shared set of tools and processes to help companies build and certify Linux-based safety-critical applications and systems, announced a stronger ecosystem focused on automotive use cases with the addition of the Automotive Intelligence and Control of China (AICC), LOTUS Cars and ZTE.

“The ELISA ecosystem continues to grow globally with strong support from automakers across Asia and Europe,” said Kate Stewart, Vice President of Dependable Embedded Systems at The Linux Foundation. “By leveraging the expertise of current and new ELISA Project members, we are defining the best practices for use of Linux in the automobiles of the future.”

Linux is used in all major industries because it can enable faster time to market for new features and take advantage of the quality of the code development processes. Launched in February 2019 by the Linux Foundation, ELISA works with Linux kernel and safety communities to agree on what should be considered when Linux is to be used in safety-critical systems. The project has several dedicated working groups that focus on providing resources that system integrators can apply and use to analyze their systems qualitatively and quantitatively.

The Automotive Working Group discusses the conditions and prerequisites the automotive sector needs to integrate Linux into a safety-critical system. The group, which includes collaboration from ADIT, Arm, Codethink, Evidence (a part of Huawei), Red Hat, and Toyota, focuses on actual use cases from the automotive domain to derive the technical requirements for the kernel, both as a basis for investigation within the Architecture Working Group and to serve as a blueprint for actual projects in the future. There is also close collaboration with Automotive Grade Linux, which has resulted in a meta-ELISA layer enhancing the instrument cluster demo for safety-relevant parts. As leaders in the automotive industry, AICC, LOTUS Cars, and ZTE will most likely join the Automotive Working Group.

New Global Automotive Expertise

As the industry’s leading ICV computing infrastructure company, AICC is committed to providing OEMs with intelligent vehicle computing platforms and digital bases that empower differentiated application development. In November 2021, AICC released its iVBB2.0 series products, which take ICVOS as the core product, complemented by ICVHW, ICVSEC, ICVEC, and other product units. Currently, iVBB2.0 has been delivered to many OEMs and has achieved collaboration on cross-platform development, co-built SDV, multi-chip distributed deployment, data security policy deployment, and car-cloud collaborative computing.

“Becoming a member of the ELISA Project is in line with the high real-time, high-security, and high-reliability commitment that AICC has always made,” said Dr. Jin Shang, CEO & CTO of AICC. “This will provide a guarantee for the mass production development of AICC’s ICV computing infrastructure platform from security and quality perspectives. Based on the elements, tools, and processes shared by ELISA, AICC will build safety-critical applications and systems relating to Linux requirements, leading to widely used and internationally influential products.”

LOTUS Cars, which was honored as “Manufacturer of the Year” at the News UK Motor Awards in 2021, is focused on the safety of intelligent driving. It is a world-famous manufacturer of sports cars and racing cars noted for their light weight and fine handling characteristics.

“Functional safety is critical to intelligent driving,” said Jie Deng, LOTUS Cars In-Vehicle Operating System Lead. “LOTUS focuses on ‘track-level intelligent drive’ and is committed to ensuring that drivers stay away from risks through active redundancy of software and hardware. We are very excited to join the ELISA Project and work with industry experts to productize Linux-based safety-critical systems for more drivers to experience intelligent driving in a highly safe and fun way.”

ZTE Corporation is a global leader in telecommunications and information technology.  Founded in 1985, the company has been committed to providing innovative technologies and integrated solutions for operators, government and consumers from over 160 countries. ZTE has established 11 state-of-the-art global R&D centers and 5 intelligent manufacturing bases.

Relying on key technologies and core capabilities in the communications field, ZTE Automotive Electronics is committed to becoming a digital vehicle infrastructure capability provider and an independent high-performance partner in China, facilitating intelligent and networked development in the automobile field. ZTE has been dedicated to GoldenOS R&D for more than 20 years. On this basis, ZTE proposes an integrated automotive operating system solution of high-performance embedded Linux and a high-security microkernel OS/hypervisor, covering all scenarios of intelligent vehicle control, intelligent driving, intelligent cockpit, and intelligent network connectivity.

These new members join ADIT, AISIN AW CO., Arm, Automotive Grade Linux, Banma, BMW Car IT GmbH, Codethink, Elektrobit, Horizon Robotics, Huawei Technologies, Intel, Toyota, Kuka, Linuxtronix, Mentor, NVIDIA, SUSE, Suzuki, Wind River, and OTH Regensburg.

The Spring Workshop

ELISA Project members will come together for the quarterly Spring Workshop on April 5-7 to learn about the latest developments and working group updates, share best practices, and collaborate to drive rapid innovation across the industry. Hosted online, this workshop is free and open to the public. Details and registration information can be found here.

Workshop highlights include:

A keynote by Robert Martin, Senior Principal Engineer at MITRE Corporation, about “Software Supply Chain Integrity Transparency & Trustworthiness and Related Community Efforts.” The presentation will discuss the capabilities emerging across industry and government to assess and address the challenges of providing trustworthy software supplies with assurance of integrity and transparency as to their composition, source, and veracity – the building blocks of software supply chains we can gain justifiable confidence in at scale and speed.

A session by Christopher Temple, Lead Safety & Reliability Systems Architect at Arm Germany GmbH, and Paul Albertella, Consultant at Codethink, about “Mixed-Criticality Processing on Linux.” This talk will help create a common understanding of mixed-criticality processing on Linux and the related problems, and collect and discuss alternatives for addressing those problems.

A discussion led by Philipp Ahmann, Business Development Manager at Robert Bosch GmbH, about a new Industrial IoT (IIoT) Working Group within ELISA. The open forum will allow the community to discuss framing lightweight SOUP safety standards, focusing on those touch points that are not fully covered by other use-case-driven working groups.

Speakers include thought leaders from ADIT GmbH, Arm, Bosch GmbH, Bytedance, Codethink, Huawei, Mobileye, The Linux Foundation, MITRE Corporation and Red Hat. Check out the schedule and register to attend the workshop today.

For more information about ELISA, visit https://elisa.tech/.

About The Linux Foundation

The Linux Foundation is the organization of choice for the world’s top developers and companies to build ecosystems that accelerate open technology development and commercial adoption. Together with the worldwide open source community, it is solving the hardest technology problems by creating the largest shared technology investment in history. Founded in 2000, The Linux Foundation today provides tools, training and events to scale any open source project, which together deliver an economic impact not achievable by any one company. More information can be found at www.linuxfoundation.org.

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see its trademark usage page: www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

###
