
My first real experience with Open Source

Arthur Silva Sens wrote the following on Medium:

I just graduated from my internship at Linux Foundation’s Community Bridge program, and I’d like to share my experience and explain why you should also consider applying if you are new to open source or the cloud-native world.

I already had some experience with cloud-native projects; I've been using cloud-native tools at my workplace for a couple of years. It is thanks to them that I got my first full-time job as an Infrastructure Analyst and later on as a Cloud Architect. And if you are reading this blog post, I assume that you have at least some knowledge of what the CNCF does and know some of its projects, such as Kubernetes and Prometheus.

Click to read more about Arthur's experience with the Linux Foundation's Community Bridge program and community tools.

Why Linux’s biggest ever kernel release is really no big deal

When the Linux 5.8 Release Candidate opened for testing recently, the big news wasn’t so much what was in it, but its size. As Linus Torvalds himself noted, “despite not really having any single thing that stands out … 5.8 looks to be one of our biggest releases of all time.”

True enough, the 5.8 RC features over 14,000 non-merge commits, some 800,000 new lines of code, and around a hundred new contributors. It might have gotten that large simply because few people have been traveling thanks to COVID-19, and we've all been able to get more work done in a release window than usual. But from the perspective of this seasoned Linux kernel contributor and maintainer, what is particularly striking about the 5.8 RC release is that its unprecedented size just was not an issue for those of us who maintain it. That, I'd argue, is because Linux has the best workflow process of any software project in the world.

What does it mean to have the best workflow process? For me, it comes down to a set of basic rules that Linux kernel developers have established over time to allow them to produce relentlessly steady and reliable progress on a massive scale.

One key factor is git

It’s worth starting with a little Linux history. In the project’s early days (1991–2002), people simply sent patches directly to Linus. Then he began pulling in patches from sub-maintainers, and those people would be taking patches from others. It quickly became apparent that this couldn’t scale. Everything was too hard to keep track of, and the project was at constant risk of merging incompatible code.

That led Linus to explore various change management systems including BitKeeper, which took an unusually decentralized approach. Whereas other change management systems used a check-out/modify/check-in protocol, BitKeeper gave everyone a copy of the whole repo and allowed developers to send their changes up to be merged. Linux briefly adopted BitKeeper in 2002, but its status as a proprietary solution proved incompatible with the community’s belief in open source development, and the relationship ended in 2005. In response, Linus disappeared for a while and came back with git, which took decentralized change management in a powerful new direction and was the first significant instantiation of the management process that makes Linux development work so well today.

Here are seven best practices — or fundamental tenets — that are key to the Linux kernel workflow:

Each commit must do only one thing

A central tenet of Linux is that all changes must be broken up into small steps. Every commit that you submit should do only one thing. That doesn't mean every commit has to be small in size: a simple change to the API of a function that is used in a thousand files can make the change massive, but it is still acceptable because it is all part of performing one task. By always obeying this single injunction, you make it much easier to identify and isolate any change that turns out to be problematic. It also makes review easier, since the reviewer only needs to worry about the single task the patch accomplishes.

Commits cannot break the build

Not only should all changes be broken into the smallest possible increments, but they also can’t break the kernel. Every step needs to be fully functional and not cause regressions. This is why a change to a function’s prototype must also update every file that calls it, to prevent the build from breaking. So every step has to work as a standalone change, which brings us to the next point:

All code is bisectable

If a bug is discovered at some point, you need to know which change caused the problem. Essentially, a bisect is an operation that allows you to find the exact point in time where everything went wrong.

You do that by checking out the commit halfway between the last known working commit and the first commit known to be broken, and testing the code at that point. If it works, you move forward to the midpoint of the remaining range; if it doesn't, you move back to the midpoint in the other direction. In that way, you can find the commit that breaks the code among tens of thousands of possible commits in just a dozen or so compiles and tests. Git even helps automate this process with its git bisect functionality.
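
A minimal command-line sketch of that workflow (the good reference v5.7 and the testing step are placeholders for whatever is known-good in your tree and however you reproduce the bug):

    git bisect start
    git bisect bad                # the commit currently checked out is broken
    git bisect good v5.7          # the last release known to work
    # git now checks out the midpoint; build and test it, then report the result:
    git bisect good               # or: git bisect bad
    # repeat until git names the first bad commit, then restore your branch:
    git bisect reset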

Importantly, this only works well if you abide by the previous rules: that each commit does just one thing. Otherwise, you would not know which of the many changes in the problem commit caused the issue. If a commit breaks the build or does not boot, and the bisect lands on that commit, you will not know which direction of the bisect to take. This means that you should never write a commit that depends on a future commit, like calling a function that doesn’t exist yet, or changing the parameters of a global function without changing all its callers in that same commit.

Never rebase a public repository

The Linux workflow process won't allow you to rebase any public branch used by others. Once you rebase, the rebased commits no longer match the same commits in the repositories based on that tree. A public tree that is not a leaf in the hierarchy of trees must not be rebased; otherwise, it will break the trees lower in the hierarchy. When a git repository is based on another tree, it builds on top of a commit in that tree. A rebase replaces commits, possibly removing a commit that other trees are based on.

Git gets merging right

Getting merging right is far from a given. In other change management systems, merging code from different branches is a nightmare: it often ends in conflicts that are hard to untangle and takes a huge amount of manual work to resolve. Git was structured to do the job effortlessly, and Linux benefits directly as a result. It's a huge part of why the size of the 5.8 release wasn't really a big deal. The 5.8-rc1 release averaged 200 commits a day, with 880 total merges from 5.7. Some maintainers noticed a bit more of a workload, but nothing was too stressful or likely to cause burnout.

Keep well-defined commit logs

Unfortunately, this may be one of the most essential best practices, yet one skipped over by many other projects. Every commit needs to stand alone, and that includes its commit log. Everything required to understand the change being made must be explained in the change's commit log. I found that some of my lengthiest and most descriptive changelogs were for single-line commits. That's because a single-line change may be for a very subtle bug fix, and that bug fix should be thoroughly described in the changelog.

A couple of years after a change is submitted, it is highly unlikely that anyone will remember why it was made. A git blame can show which commits changed the code in a file, and some of those commits may be very old. Perhaps you need to remove a lock, or change some code and do not know precisely why it exists. A well-written changelog for that code can help determine whether it can be removed or how it can be modified. There have been several times I was glad I had written a detailed changelog, because when I later had to remove code, the description confirmed that my changes were fine to make.

Run continuous testing and integration

Finally, an essential practice is running continuous testing and continuous integration. I test every one of my pull requests before I send them upstream. We also have a repo called linux-next that pulls in all the changes that maintainers have on a specific branch of their repositories and tests them to ensure that they integrate correctly. Effectively, linux-next runs a testable branch of the entire kernel that is destined for the next release. This is a public repo, so anyone can test it, which happens pretty often; people now even send bug reports on code that's in linux-next. But the upshot is that code that's been in linux-next for a couple of weeks is very likely to be good to go into mainline.

Best practices exemplified

All of these practices enable the Linux community to release incredibly reliable code on a regular 9-week schedule at such a massive scale (average of 10,000 commits per release, and over 14,000 for the last release).  

I’d point to one more factor that’s been key to our success: culture. There’s a culture of continuous improvement within the kernel community that led us to adopt these practices in the first place. But we also have a culture of trust. We have a clear pathway via which people can make contributions and demonstrate over time that they are both willing and able to move the project forward. That builds a web of trusted relationships that have been key to the project’s long term success.

At the kernel layer, we have no choice but to follow these practices. All other applications run on top of the kernel. Any performance problem or bug in the kernel becomes a performance problem or bug for the applications on top. All error paths must exit peacefully; otherwise, the entire system will be compromised. We care about every error because the stakes are so high, but this mindset will serve any software project well.

Applications can have the luxury of merely crashing due to a bug. That will annoy users, but the stakes are not as high. Even so, quality software should not take bugs lightly. This is why the Linux development workflow is considered the gold standard to follow.

About the author: Steven Rostedt (@srostedt) is a Linux kernel contributor and an Open Source Engineer at VMware. You can learn more about Steven's work at blogs.vmware.com/opensource or @VMWopensource on Twitter.

Facebook’s Long History of Open Source Investments Deepens with Platinum-level Linux Foundation Membership

From its efforts to reshape computing through open source to its aggressive push to increase internet connectivity around the world, Facebook is a leader in open innovation. Perhaps more important today than ever, Facebook’s focus on democratizing access to technology enhances opportunity and scale for individuals and businesses alike. That’s why we’re so excited to announce the company is joining the Linux Foundation at the highest level.

Facebook's sponsorship of open innovation through the Linux Foundation will help support the largest shared technology investment in history, with an estimated $16B in development costs across the world's 100+ leading open source projects, and will support those project communities through governance, events, and education. The company is already the lead contributor to many Linux Foundation-hosted projects, such as Presto, GraphQL, Osquery, and ONNX. It has also been an active participant in Linux kernel development, employing key developers and maintainers across major kernel subsystems.

In addition to these efforts, Facebook has a long history of leveraging open source to unlock the potential of open innovation:

  • Through Facebook Connectivity and the open source Telecom Infra Project (TIP) Foundation, Facebook hopes to bring fast, reliable internet to those without it. Facebook’s Magma open source project allows telecom operators to easily deploy mobile networks in hard-to-reach areas — reducing the costs of building and maintaining telecom networks. Together, Facebook Connectivity and TIP have created hundreds of billions of dollars of value through open source collaboration.
  • Facebook created a unique dataset of over 100,000 videos and launched the Deepfake Detection Challenge in order to accelerate development of new ways to detect deepfake videos. This open, collaborative effort will help the industry and society at large meet the challenge presented by deepfake technology and help everyone better assess the legitimacy of content they see online.
  • Facebook’s Data for Good program enables geographic data to be shared with the aim of addressing some of the world’s greatest humanitarian issues, including COVID-19.
  • Facebook also leads the industry in open hardware, having founded the Open Compute Project (OCP), which uses open source to enable the creation of efficient, flexible, and scalable hardware designs for data centers.
  • By creating and sustaining an open source ecosystem around PyTorch, Facebook also accelerates the pace at which data scientists and developers can leverage the power of artificial intelligence and machine learning in computer vision, natural language processing, and other disciplines.
  • Facebook’s React.js library powers some of the world’s most popular websites and has become the standard for frontend web development due to its simplicity and flexibility.
  • In working with GitHub to sponsor the first-ever remote open source fellowship run by Major League Hacking, Facebook also hopes to set a trend of empowering a new generation of diverse open source contributors.

Facebook’s commitment to the open source community can be seen in both its multi-million dollar investments and its genuine passion for technology development. It is this combination that makes the company an incredible supporter of the open source developer community.

As a Platinum member of the Linux Foundation, Facebook’s Kathy Kam joins the LF board. Kathy is head of Open Source at Facebook where she manages the Open Source Engineering, Developer Advocacy, and Open Source Program Management teams. Kathy is a 20-year engineering, product management, and developer relations leader previously with Google and Microsoft.


Open Source Collaboration is a Global Endeavor

Linux Foundation Supports Open Source

The Linux Foundation would like to reiterate its statements and analysis of the application of US Export Control regulations to public, open collaboration projects (e.g. open source software, open standards, open hardware, and open data) and the importance of open collaboration in the successful, global development of the world’s most important technologies. At this time, we have no information to believe recent Executive Orders regarding WeChat and TikTok will impact our analysis for open source collaboration. Our members and other participants in our project communities, which span many countries, are clear that they desire to continue collaborating with their peers around the world.

As a reminder, we would like to point anyone with questions to our prior blog post on US export regulations, which also links to our more detailed analysis of the topic. Both are available in English and Simplified Chinese for the convenience of our audiences.


Participate in the 2020 Open Source Jobs Report!

The Linux Foundation has partnered with edX to update the Open Source Jobs Report, which was last produced in 2018. The report examines the latest trends in open source careers, which skills are in demand, what motivates open source job seekers, and how employers can attract and retain top talent. In the age of COVID-19, this data will be especially insightful both for companies looking to hire more open source talent, as well as individuals looking to advance or change careers.

The report is anchored by two surveys, one of which explores what hiring managers are looking for in employees, and one focused on what motivates open source professionals. Ten respondents to each survey will be randomly selected to receive a US$100 gift card to a leading online retailer as a thank you for participating!

All those working with open source technology, or hiring folks who do, are encouraged to share their thoughts and experiences. The surveys take around 10 minutes to complete, and all data is collected anonymously. Links to the surveys are below.

Take the open source professionals survey

Take the hiring managers survey

New Hyperledger Fabric Training Course Prepares Developers to Create Enterprise Blockchain Applications

The Linux Foundation, the nonprofit organization enabling mass innovation through open source, today announced the availability of a new training course, LFD272 – Hyperledger Fabric for Developers.

LFD272, developed in conjunction with Hyperledger, is designed for developers who want to master Hyperledger Fabric chaincode – Fabric’s smart contracts – and application development.  

TARS: Contributing to an open source microservices ecosystem

Linux Foundation Executive Director Jim Zemlin recently spoke at Cloud Native + Open Source Virtual Summit China 2020. We’d now like to republish his opening comments and a guide on how to get involved with the TARS project, the open source microservices framework.

The pandemic has thrown our global society into a health and economic crisis. It seems like there are conflicts every day from all over the world. Today, I want to remind you that open source is one of the great movements where collaboration, working together, and getting along is the essence of what we do. 

Open source is not a zero-sum game; it has had an incredible impact on us in a net positive way. I like to remind everyone that open source is a public good that will be freely available to everyone worldwide, no matter what winds of political or economic change bring us. The LF is dedicated to all of that.

Today, we are working hard to help folks during hard times, expanding our mentorship programs with over a quarter of a million dollars of new donations to allow people to come in and train themselves in new skills during this tough time. We have had a wonderful set of virtual events with thousands of people from hundreds of companies in countries worldwide working together.

We want to bring the power of open source to help during these times and have several new initiatives that we are working on. Most notable is our recently launched LFPH initiative, which has started with seven members: Cisco, doc.ai, Geometer, IBM, NearForm, Tencent, and VMware. It hosts exposure notification projects such as COVID Shield and COVID Green, which are currently being deployed in Canada, Ireland, and several U.S. states to help find and reduce the spread of COVID-19.

We are also working on a considerable number of new initiatives, which I will talk about. Still, I'd like to come back to what we are here to talk about: cloud computing and how much it has impacted all of us. Microservices are an essential part of that, and in China we are seeing the TARS project, a microservices framework, really take off.

Two years ago, TARS joined the Linux Foundation, and ever since, its community has been growing and new projects and contributors have been coming in. The TARS project provides a mature, high-performance microservices framework that supports multiple programming languages. We will talk more about the TARS Foundation in a little bit, but its microservices ecosystem has been growing and quickly turning ideas into applications at scale.

In addition to TARS, we have been seeing amazing work going on in the open source community. It begins with things such as the Software Package Data Exchange specification (SPDX), which was recently contributed to ISO/IEC JTC 1 for approval as an international specification. This will help us track the usage of open source software across a complex global supply chain and reaffirms our commitment to the global movement.

We also see growth in projects with recent releases, such as our networking project, the Open Network Automation Platform (ONAP), whose Frankfurt release is being used to automate networks and edge computing services for telecommunication providers, cloud providers, and enterprises.

We've seen new projects join our organization. One good example is MLflow, which was contributed to our organization by Databricks. The project has an impressive community with over 200 contributors and has been downloaded more than 2 million times. MLflow is now part of the LF AI initiative, which will provide a neutral home and open governance model to broaden the adoption of and contribution to projects like MLflow. We have also seen new efforts come to our organization, such as the FinOps Foundation, a consortium of financial companies. We are working together to grow the use of open source throughout our global financial system.

It's impressive to see all the different projects that have been coming in. And today, I'd like to formally introduce the TARS Foundation. TARS has been an amazing project, and in just the last few years, I've noticed that developers here in China are, for the first time, incubating new open-source projects and sharing them with the rest of the world.

And the rest of the world is watching the progress of open source projects and seeing fantastic work. We are so proud of the work that is coming out of TARS. 

You know, just like the Linux Foundation is about more than Linux, the TARS Foundation is about more than just TARS. It's a microservices ecosystem.

Unfortunately, because of the COVID-19 pandemic, we had to cancel the Linux Foundation Member Summit this Spring, and we were unable to announce the TARS Foundation at that time. 

But today, the Linux Foundation is proud to announce again that the TARS project has become the TARS Foundation, an open source microservices foundation within the overall framework of the Linux Foundation, and the outcome has been rapid growth for both the TARS project and the projects associated with it. TARS has really taken off, and it's just amazing to see the amount of development.

We hope the TARS Foundation will create a neutral home for additional projects that solve significant problems surrounding microservices, including but not limited to agile development, DevOps best practices, and comprehensive service governance, enabling multi-language, high-performance, scalable solutions.

It is my pleasure to present what the TARS Foundation has achieved in the open source community. 

There are many companies whose contributions are instrumental in establishing TARS’ microservices ecosystem. The TARS Foundation is proof of that. Currently, the TARS Foundation has Arm and Tencent as premier members and five general members: AfterShip, Ampere, API7, Kong, and Zenlayer. 

In terms of TARS applications, the framework serves more than 100 companies from different industries, including edge computing, esports, fintech, streaming, e-commerce, entertainment, telecommunications, education, and more.

Furthermore, the TARS Foundation is striving to expand its microservices ecosystem, and it’s incorporating more functions such as Testing, Gateway, and Edge, to name a few. So far, the TARS Foundation has more than 30 projects.

Developers around the world are starting to recognize how capable the TARS project is and are contributing accordingly. There are 12,000 developers actively using TARS, and 150 developers contribute code to TARS projects, from companies like Arm, Tencent, Google, Microsoft, VMware, WeBank, TAL, China Literature, iFlytek, Longtu Game, and many more.

An overview of the TARS framework and how you can contribute to the open source microservices community

What is TARS? 

TARS is a new-generation distributed microservices application framework created in 2008. It provides developers and enterprises with a complete set of solutions to build, release, deploy, and maintain stable and reliable applications that run at scale.

In June 2018, TARS joined the Linux Foundation umbrella and became one of its projects. On March 10th, 2020, it was announced that the TARS project would transition into the TARS Foundation, “a neutral home for open source microservices projects that empower any industry to quickly turn ideas into applications at scale.”

The TARS Foundation's goal is to address the most common problems related to microservices applications, including solving multi-programming-language interoperability issues, mitigating data transfer issues, maintaining data storage consistency, and ensuring high performance while supporting a growing number of requests.

Many companies from diverse industries, such as fintech, esports, edge computing, online streaming, e-commerce, and education, have successfully used the TARS framework.

Here is a complete timeline of the TARS Foundation’s development:

https://medium.com/dev-genius/community-updates-from-the-tars-foundation-8ea2e8608d3

The TARS Foundation’s contributor ecosystem

Initially developed by Tencent, the world’s largest online gaming company, the TARS project has created an open source microservices platform for modern enterprises to realize innovative ideas quickly with the user-friendly technology in the TARS framework. 

In March 2020, the TARS project transitioned into the TARS Foundation under the Linux Foundation umbrella, aiming to support microservices development through DevOps best practices, comprehensive service governance, high-performance data transfer, storage scalability with massive data requests, and built-in cross-language interoperability. TARS has a mission to support the rapid growth of contributions and membership for a community focused on building a robust microservices platform.

The TARS Foundation provides a great platform for developers who are interested in contributing to an open source project. The organization offers developers different ways to contribute to open source projects, as well as opportunities to take on leadership roles and make major contributions in the broader open source community.

There are Contributor, Committer, Maintainer, and Ambassador roles in their open source ecosystem, each having different requirements and responsibilities. 

How to become a Contributor

To get involved with TARS open source projects, you can first become a Contributor by participating in software construction and having at least one pull request merged into the source code. 

There are several ways for software developers to engage with the TARS community and become contributors:

    • Help other users and answer questions.
    • Submit meaningful issues.
    • Use TARS projects in production to increase testing scenarios.
    • Improve technical documentation.
    • Publish articles on applications and case studies related to TARS projects.
    • Report or repair the bugs found in TARS software.
    • Write source code analyses or annotations.
    • Submit your first pull request.

Here are the steps to submit your pull request (see the command-line sketch after this list):

    • Fork the project from the TARS repository to your GitHub account.
    • Git clone the repository to your local machine.
    • Create a sub-branch.
    • Make changes to the code and test it on your local machine.
    • Commit those changes.
    • Push the committed code to GitHub.
    • Open a new pull request to submit your changes for review.
    • Your changes will be merged into the master branch if accepted.
    • Now you've done it! You've become a TARS Contributor, and you will receive a Contributor t-shirt!
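
A minimal command-line sketch of those steps (the repository name, branch name, and GitHub username below are placeholders; substitute the TARS repository you are actually working on):

    # Fork the repository on GitHub first, then:
    git clone https://github.com/<your-username>/<tars-repo>.git
    cd <tars-repo>
    git checkout -b my-fix                 # create a sub-branch for your change
    # ... edit the code and test it on your local machine ...
    git add .
    git commit -m "Describe the one thing this commit does"
    git push origin my-fix
    # Finally, open a pull request on GitHub from my-fix against the upstream master branch.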

How to become a Committer

A Committer is a Contributor who has made distinct contributions to the TARS repositories and has accomplished at least one essential construction project or has repaired critical bugs. Committers can also take on some leadership responsibilities.

The Committer is expected to:

    • Display excellent ability to make technical decisions.
    • Have successfully submitted and merged five pull requests.
    • Have contributed to the improvement of project code quality and performance.
    • Have implemented significant features or fixed major bugs.

After meeting the above requirements, you can submit a Committer request:

    • STEP 1: Provide proof that you meet the above criteria in a repository issue.
    • STEP 2: Submit your pull request after you receive a response with instructions.
    • STEP 3: Once your application is accepted, you will become a TARS Committer!

As a Committer, you are able to:

    • Control the code quality as a whole.
    • Respond to the pull requests submitted by the community.
    • Mentor contributors to promote collaborations in the open source community.
    • Attend regular meetings for committers. 
    • Know about project updates and trends in advance.

How to become a Maintainer

Maintainers are responsible for devising the subprojects in the TARS community. They will take the lead to make decisions associated with project development while holding power to merge branches. They should demonstrate excellent judgment and a sense of responsibility for subprojects’ well-being, as they need to define or approve design strategies suitable for developing subprojects. 

The Maintainer is expected to:

    • Have a firm grasp of TARS technology.
    • Be proactive in organizing technical seminars and put forward construction projects.
    • Be able to handle more complicated problems in coding.
    • Get unanimously approved by the technical support committee (TSC).

As a Maintainer, you have the right to:

    • Devise and decide the top-level technical design of subprojects.
    • Define the technical direction and priority of sub-projects.
    • Participate in version releases and ensure code quality.
    • Guide Contributors and Committers to promote collaborations in the open source community.

How to become an Ambassador

Passionate about open source technology and community, Ambassadors promote and support the extensive use of TARS technology to a wider audience of software developers. Ambassadors' expertise and involvement in TARS projects will also gain greater recognition in the community.

The Ambassador can:

    • Become a general member of the TARS Foundation.
    • Participate in TARS Foundation’s projects as a contributor, lecturer, or blogger.
    • Engage with developers by presenting at community events or sharing technology articles on online media platforms.

Looking forward

Ultimately, the TARS Foundation encourages contributors to become members of the governing board and the Technical Support Committee (TSC). At this level, you will focus on the organization's strategic direction and decision-making as a whole.

If you are interested in learning more, you can check out their websites: GitHub.com/TarsCloud or TarsCloud.org

Conclusion

Contributing to open source projects has many benefits. It strengthens your development skills, and your code is reviewed by other developers who can offer a new perspective. You also make new connections, and even lifelong friendships, with like-minded developers in the process of contributing. This is the open source model that has built many of the tech innovations all of us enjoy today. Its sustainability depends on a free exchange of ideas and technology in our global community. Open source values and innovation live on in developers like you, who take on development challenges and share insights with the broader community.

Healthcare industry proof of concept successfully uses SPDX as a software bill of materials format for medical devices

Overview

Software Package Data Exchange (SPDX) is an open standard for communicating software bill of materials (SBOM) information that supports accurate identification of software components, explicit mapping of relationships between components, and the association of security and licensing information with each component. The SPDX format has recently been submitted by the Linux Foundation and the Joint Development Foundation to the JTC1 committee of the ISO for international standards approval.

A group of eight healthcare industry organizations, composed of five medical device manufacturers and three healthcare delivery organizations (hospital systems), recently participated in the first-ever proof of concept (POC) of the SPDX standard for healthcare use.

 This blog post is a summary of the results of this initial trial.

Why do we care about SBOMs and the medical device industry?

A Software Bill of Materials (SBOM) is a nested inventory or a list of ingredients that make up the software components used in creating a device or system. This is especially critical in the medical device industry and within healthcare delivery organizations to adequately understand the operational and cyber risks of those software components from their originating supply chain.

Some cyber risks come from using components with known vulnerabilities. Known vulnerabilities are a widespread problem in the software industry, such as known vulnerabilities in the Top 10 Web Application Security Risks from the Open Web Application Security Project (OWASP). Known vulnerabilities are especially concerning in medical devices since the exploitation of those vulnerabilities could lead to loss of life or maiming. One-time reviews don’t help, since these vulnerabilities are typically found after the component has been developed and incorporated. Instead, what is needed is visibility into the components of a medical device, similar to how food ingredients are made visible.

A measured path towards using SBOMs in the medical device industry

In June 2018, the National Telecommunications and Information Administration (NTIA) engaged stakeholders across multiple industries to discuss software transparency and to participate in a limited proof of concept (POC) to determine if SBOMs can be successfully produced by medical device manufacturers and consumed by healthcare delivery organizations. That initial POC was successfully concluded in the early fall of 2019. 

Despite the limited scope, the NTIA POC results demonstrated that industry-agnostic standard formats can be leveraged by the healthcare vertical and that industry-specific formats are unnecessary. 

Next, the participants in the NTIA POC explored whether a standardized SBOM format could be used for sharing information between medical device manufacturers and healthcare delivery organizations. For this next phase, the NTIA stakeholders engaged the Linux Foundation’s SPDX community to work with the NTIA Healthcare working group. The goal was to demonstrate through a proof of concept whether the open source SPDX SBOM format would be suitable for healthcare and medical device industry uses. The first phase of that trial was conducted in early 2020.

Objectives of the 2020 POC

The stated goal of this 2020 proof of concept (POC) was to prove, for the medical device and healthcare industry, the viability of the framing document created by the NTIA SBOM working group (to which the Linux Foundation was a contributor) during their earlier POC.

This NTIA framing document defines specific baseline data elements or fields that should be used to identify software components in any SBOM format, which can be mapped into corresponding field elements in SPDX:

NTIA Baseline Element        SPDX Field
Supplier Name (3.5)          PackageSupplier:
Component Name (3.1)         PackageName:
Unique Identifier (3.2)      SPDXID:
Version String (3.3)         PackageVersion:
Component Hash (3.10)        PackageChecksum:
Relationship (7.1)           Relationship: CONTAINS
Author Name (2.8)            Creator:
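
For illustration, a package entry in an SPDX 2.x tag-value document carrying those baseline fields might look like the following sketch (the supplier, package name, version, and checksum values are invented placeholders, not data from the POC):

    Creator: Organization: Example Medical Devices Inc.
    PackageName: infusion-pump-firmware
    SPDXID: SPDXRef-Package-infusion-pump-firmware
    PackageVersion: 4.2.1
    PackageSupplier: Organization: Example Medical Devices Inc.
    PackageChecksum: SHA1: da39a3ee5e6b4b0d3255bfef95601890afd80709
    Relationship: SPDXRef-DOCUMENT CONTAINS SPDXRef-Package-infusion-pump-firmware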

The 2020 POC conducted by the NTIA working group had a stated objective to determine whether SBOMs generated by medical device manufacturers (MDMs) using SPDX could be ingested into Security Information and Event Management (SIEM) solutions operated by the participating healthcare delivery organizations (HDOs).

The MDMs included in this POC included Abbott, Medtronic, Philips, Siemens, and Thermo Fisher. The HDOs included Cedars-Sinai, Christiana Care, Mayo Clinic, Cleveland Clinic, Johns Hopkins, New York-Presbyterian, Partners/Mass General, and Sutter Health.

Execution and implementation of the SPDX SBOMs

  • The participating HDOs provided an inventory of the deployed medical devices in use within their organizations.
  • A best-effort approach was used to determine software identity as the names that software packages are known by are “ambiguous” and could be misinterpreted.
  • An example SPDX was created along with a guidance document for the MDMs to follow for use with the medical devices identified by the HDO inventory exercise.
  • The MDMs produced 17 distinct SPDX-based SBOMs manually and with generator tooling.
  • The SBOMs were delivered via secure transfer using enterprise Box accounts, simulating delivery via secure customer portals offered by each MDM.

Consumption of the SBOMs in the SPDX POC

As a result of the 2020 POC, all participating HDOs successfully ingested the SPDX SBOM into their respective SIEM solutions, immediately making the data searchable to identify security vulnerabilities across a fleet of products. This information can also be converted into a human-readable, tabular format for other data analysis systems.

Multiple HDOs are already collaborating with vendor partners to explore direct ingestion into medical device asset/risk management solutions as part of their device procurement. One of the HDOs is working with one of their vendor partners to explore direct ingestion into a healthcare Vendor Risk Management (VRM) solution, and another has developed a “How-To Guide” focusing on how to correctly parse out the Packages fields using regular expressions (regex).
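
As a rough illustration of that kind of parsing (this is not the HDO's actual guide; the file name and field list are examples), the package-level fields of a tag-value SPDX document can be pulled out with a single regular expression at the command line:

    # extract the package identification lines from a tag-value SPDX SBOM
    grep -E '^(PackageName|PackageVersion|PackageSupplier|PackageChecksum):' device-sbom.spdx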

As a positive indicator of SPDX's suitability for use with asset management systems, two HDOs have begun configuring their respective internal tracking systems to track software dependencies and subcomponents. Additionally, multiple HDOs are collaborating with vendor partners to manage devices in medical device asset/risk management solutions throughout the device's life, allowing for periodic updates and an audit trail.

Ongoing considerations for SPDX-based SBOMs for medical devices in healthcare organizations

Risk management, vulnerability management, and legal considerations are ongoing at the participating HDOs related to the use of SPDX-based SBOMs.

Risk management

All of the responding HDOs are exploring vulnerability identification upon procurement (i.e., SIEM ingestion of the initial SBOM) and on an ongoing basis (i.e., SIEM, CMDB/CMMS, VRM). The participating HDOs intend to explore mitigation plan and compensating control exercises that will be performed to identify vulnerable components, measure exploitability, implement risk reduction techniques, and document this data alongside the SBOM.

The SPDX community intends to learn from these exercises and improve future versions of the SPDX specification to include information determined to be needed to manage risk effectively.

Vulnerability management at HDOs

An HDO is already working with its Biomed team to manually perform vulnerability management processes on information extracted from SBOM data. 

Another is working with their Vulnerability Management team to evaluate SBOM data correlated against credentialed and non-credentialed scans of the same device, which may prove useful in an information audit use case. A second HDO is currently working with their Vulnerability Management team on leveraging the SBOM data to supplement regular scanning results.

Legal

Participating HDOs have been developing SBOM product security language to add cybersecurity safeguards to the contract documentation.

Conclusion

The original POC was able to validate the conclusions of the NTIA Working Group that proprietary SBOM formats specific to healthcare industry verticals are not needed. This 2020 POC showed that the SPDX standard could be used as an open format for SBOMs for use by healthcare industry providers. Additionally, the ability to import the SPDX format into SIEM solutions will help HDOs adequately understand the operational and cyber risks of medical device software components from their originating supply chain. 

There is work ahead to improve automation of SPDX-based SBOMs, including the automated identification of software components and determining which component vulnerabilities are exploitable in a given system. Participating HDOs intend to perform compensating control exercises to identify and implement risk reduction techniques building on this information. HDOs are also evaluating how SPDX can support other improvements to vulnerability management. In summary, this POC showed that SPDX could be an essential part of addressing today’s operational and cyber risks.


Uniting for better open-source security: The Open Source Security Foundation (ZDNet)

Steven Vaughan-Nichols writes at ZDNet:

Eric S. Raymond, one of open-source’s founders, famously said, “Given enough eyeballs, all bugs are shallow,” which he called “Linus’s Law.” That’s true. It’s one of the reasons why open-source has become the way almost everyone develops software today. That said, it doesn’t go far enough. You need expert eyes hunting and fixing bugs and you need coordination to make sure you’re not duplicating work. 
 
So, it is more than past time that The Linux Foundation started the Open Source Security Foundation (OpenSSF). This cross-industry group brings together open-source leaders by building a broader security community. It combines efforts from the Core Infrastructure Initiative (CII), GitHub's Open Source Security Coalition, and other open-source security-savvy companies such as GitHub, GitLab, Google, IBM, Microsoft, NCC Group, OWASP Foundation, Red Hat, and VMware.

Read more at ZDNet

Role Of SPDX In Open Source Software Supply Chain

Kate Stewart is a Senior Director of Strategic Programs, responsible for the Open Compliance program at the Linux Foundation, encompassing the SPDX, OpenChain, and Automating Compliance Tooling (ACT) projects. In this interview, we talk about the latest SPDX release and the role it's playing in the open source software supply chain.

Here is a transcript of our interview. 

Swapnil Bhartiya: Hi, this is Swapnil Bhartiya, and today we have with us, once again, Kate Stewart, Senior Director of Strategic Programs at Linux Foundation. So let’s start with SPDX. Tell us, what’s new going on in there in this specification?

Kate Stewart: Well, the SPDX specification released version 2.2 just a month ago, and what we've been doing with that is adding in a lot more features that people have been wanting for their use cases, more relationships, and then we've been working with the Japanese automotive makers who've been wanting to have a lite version. So there's lots of really new technology sitting in the SPDX 2.2 spec. And I think we're at a stage right now where it's good enough that there are enough people using it, we want to probably take it to ISO. So we've been re-formatting the document and we'll be starting to submit it into ISO so it can become an international specification. And that's happening.

Swapnil Bhartiya: Can you talk a bit about if there is anything additional that was added to the 2.2 specification. Also, I would like to talk about some of the use cases since you mentioned the automaker. But before that, I just want to talk about anything new in the specification itself.

Kate Stewart: So in the 2.2 specification, we've got a lot more relationships. People wanted to be able to handle some of the use cases that have come up from containers now, and so they wanted to be able to start to express and specify that. We've also been working with the NTIA. Basically, they have a software bill of materials (SBoM) working group, and SPDX is one of the formats that's been adopted. And their framing group has wanted to see certain features so that we can specify known unknowns. So that's been added into the specification as well.

And then there's how you can actually capture notices, since that's something that people want to use. The licenses call for it and we didn't have a clean way of doing it, and so some of our tool vendors basically asked for this. Not just the vendors, I guess; there are partners, there are open source projects, that wanted to be able to capture this stuff. And so we needed to give them a way to help.

We’re very much focused right now on making sure that SPDX can be useful in tools and that we can get the automation happening in the whole ecosystem. You know, be it when you build a binary to ship to someone or to test, you want to have your SBoM. When you’ve downloaded something from the internet, you want to have your SBoM. When you ship it out to your customer, you want to be able to be very explicit and clear about what’s there because you need to have that level of detail so that you can track any vulnerabilities.

Because right now, I guess… I think there was a stat from earlier in the year from one of the surveys. And I can dig it up for you if you'd like, but I think 99% of all the code that was scanned by Synopsys last year had open source in it, and 70% of that whole bill of materials was open source. Open source is everywhere. And what we need to do is be able to work with it and be able to adhere to the licenses, and transparency on the licenses is important, as is being able to actually know what you have, so you can remediate any vulnerabilities.

Swapnil Bhartiya: You mentioned a couple of things there. One was tooling. So I'm kind of curious what sort of tooling is already there, whether open source or commercial, that works with SPDX documents.

Kate Stewart: Actually, I've got a document that basically lists all of these tools that we've been able to find, and more are popping up as the days go by. We've got common tools; some of the Linux Foundation projects are certainly working with it. FOSSology, for instance, is able to both consume and generate SPDX. So if you've got an SPDX document and you want to pull it in and cross-check it against your sources to make sure it's matching and no one's tampered with it, the FOSSology tool can let you do that pretty easily, and it can generate SPDX documents for you as well.

Free Software Foundation Europe has a lint tool in their REUSE project that will basically generate an SPDX document if you're using the IDs. I guess there's actually a whole bunch more. So like I say, I've got a document with a list of about 30 to 40, and obviously the SPDX tools are there. We've got a free online validator, so if someone gives you an SPDX document, you can paste it into this validator and it'll tell you whether it's a valid SPDX document or not. And we're looking to improve it.

I'm also finding some tools that are emerging, one of which is decodering, which we'll be bringing into the ACT (Automating Compliance Tooling) umbrella soon, and which looks at transforming between SPDX and SWID tags, another format that's commonly in use. So we have tooling emerging, and we're making sure that what we've got with SPDX is usable for tool developers; we've got libraries for SPDX right now to help them in Java, Python, and Go. So hopefully we'll see more tools come in, and they'll be generating SPDX documents, and people will be able to share this stuff and make it automatic, which is what we need.

Another good tool, I can't forget this one, is Tern. What Tern does is decompose a container and let you know the bill of materials inside that container. And another one that's emerging, which we'll hopefully see more of soon, is something called OSS Review Toolkit, which goes into your build flow. So it goes in when you work with it in your system, and then as you're doing builds, you're generating your SBoMs and you're having accurate information recorded as you go.

As I said, all of this sort of thing should be in the background, it should not be a manual time-intensive effort. When we started this project 10 years ago, it was, and we wanted to get it automated. And I think we’re finally getting to the stage where it’s going to be… There’s enough tooling out there and there’s enough of an ecosystem building that we’ll get this automation to happen.

This is why getting the specification to ISO matters: it'll make it easier for people in procurement to specify that they want to see an SPDX document to complement the product they're being given, so that they can ingest it, manage it, and so forth. By being able to say it's an ISO standard, it makes things a lot easier in the procurement departments.

OpenChain recognized that we needed to do this, and so they went through and… OpenChain is actually the first specification we're taking through to ISO. But we're taking SPDX through as well, because once you say you need to follow the process, you also need a format. And so it's very logical to make it easy for people to work with this information.

Swapnil Bhartiya: And as you've worked with different players in different parts of the ecosystem, what are some of the pressing needs? Improved automation is one of those; what are some of the other pressing needs that you think the community has to work on?

Kate Stewart: So some of the other pressing needs we need to be working on are more playbooks, more instructions, showing people how they can do things. You know, we've figured it out: okay, here's how we can model it, here's how you can represent all these cases. This is all sort of known in certain people's heads, but we have not done a good job of expressing it to people so that it's approachable for them and they can do it.

One of the things that's kind of exciting right now is that the NTIA is running this working group on software bills of materials. It's coming from the security side, but there are various proofs of concept going on with it, one of which is a healthcare proof of concept. And so there's a group of about five to six medical device manufacturers that are generating SBoMs in SPDX and then handing them to hospitals to make sure they can ingest them.

And bringing people up to the level where they feel like they can do these things has been really eye-opening to me; you know, how much we need to improve our handholding and improve the infrastructure to make it approachable. And this obviously motivates more people to get involved, from the vendor and commercial side as well as open source. But it wouldn't have happened, I think, to a large extent for SPDX without open source and without the projects that have adopted it already.

Swapnil Bhartiya: Now, just from the educational and awareness point of view, if there's an open source project, how can they easily create SBoM documents that use the SPDX specification with their releases and keep them synced?

Kate Stewart: That's exactly what we'd love to see. We'd love to see the upstream projects basically generate SPDX documents as they're going forward. So the first step is to use the SPDX license identifiers to make sure you understand what the licensing should be in each file, and ideally you can document it with the tags. But then there are three or four tools out there that will actually scan the files and generate an SPDX document for you.

If you're working at the command line, the REUSE lint tool that I was mentioning from Free Software Foundation Europe will work very fast and quickly with what you've got. And it'll also help you make sure you've got all your files tagged properly.

If you haven't done all the tagging exercise and you wonder [inaudible 00:09:40] what you've got, ScanCode works at the command line, and it'll give you that information as well. And then if you want to start working in a larger system where you store results, look at things over time, and have some state behind it all, so there'll be different versions of things over time, FOSSology will remember from one version to another and will help you create these [inaudible 00:10:01] bills of materials.
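
For readers who want to try the per-file tagging Kate describes, it is just a short comment at the top of each source file, and the REUSE tool can then check the tree and emit an SPDX document. A minimal sketch, assuming the REUSE tool is installed from PyPI and using Apache-2.0 purely as an example license:

    # At the top of each source file, add a line such as:
    #   SPDX-License-Identifier: Apache-2.0

    # Then, from the repository root:
    pip install reuse
    reuse lint                        # checks that every file carries license info
    reuse spdx > project-sbom.spdx    # generates an SPDX document for the tree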

Swapnil Bhartiya: Can you talk about some of the new use cases that you’re seeing now, which maybe you did not expect earlier and which also shows how the whole community is actually growing?

Kate Stewart: Oh yeah. Well, when we started the project 10 years ago, we didn't understand containers; they weren't even on people's radar. And there's a lot of information sitting in containers. We've had some really good talks over the last couple of years that illustrate the problems. There was a report put out from the Linux Foundation by Armijn Hemel that goes into the details of what's going on in containers and some of the concerns.

So being able to get on top of automating what's going on inside a container, knowing what you're shipping and that you're not shipping more than you need to, and figuring out how we can improve these sorts of things, is certainly an area that was not initially thought about.

We've also seen tremendous interest in what's going on in the IoT space, where you need to really understand what's going on in your devices when they're deployed in the field, and to know whether a vulnerability is effectively going to break them or whether you can recover. Things like that. Over the last 10 years we've seen a tremendous spectrum of things we just didn't anticipate. And the nice thing about SPDX is, if you've got a use case that we're not able to represent and we can't tell you how to do it, just open an issue, and we'll start trying to figure it out and see whether we need to add fields for you, or things like that.

Swapnil Bhartiya: Kate, thank you so much for taking time out and talking to me today about this project.