
What is an SBOM?

The National Telecommunications and Information Administration (NTIA) recently asked for wide-ranging feedback to define a minimum Software Bill of Materials (SBOM). The request was framed around a single, simple question (“What is an SBOM?”) and constituted an important step toward software security and a significant moment for open standards.

From NTIA’s SBOM FAQ: “A Software Bill of Materials (SBOM) is a complete, formally structured list of components, libraries, and modules that are required to build (i.e. compile and link) a given piece of software and the supply chain relationships between them. These components can be open source or proprietary, free or paid, and widely available or restricted access.” SBOMs that can be shared without friction between teams and companies will be a core part of software management for critical industries and digital infrastructure in the coming decades.

The ISO International Standard for open source license compliance (ISO/IEC 5230:2020 – Information technology — OpenChain Specification) requires a process for managing a bill of materials for supplied software. This aligns with the NTIA goals for increased software transparency and illustrates how the global industry is addressing challenges in this space. For example, it has become a best practice to include an SBOM for all components in supplied software, rather than isolating these materials to open source.

The open source community identified the need for, and began to address, the challenge of an SBOM “list of ingredients” over a decade ago. The de facto industry standard, and the most widely used approach today, is the Software Package Data Exchange (SPDX). All of the elements in the NTIA’s proposed minimum SBOM definition can be addressed by SPDX today, as well as broader use-cases.
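For illustration, a minimal SBOM covering one application and one of its dependencies might look like the following in SPDX tag-value format. The package names, versions, download locations, and document namespace are invented for this example:

```text
SPDXVersion: SPDX-2.2
DataLicense: CC0-1.0
SPDXID: SPDXRef-DOCUMENT
DocumentName: example-app-sbom
DocumentNamespace: https://example.com/spdxdocs/example-app-1.0
Creator: Tool: example-sbom-generator
Created: 2021-06-01T00:00:00Z

PackageName: example-app
SPDXID: SPDXRef-Package-example-app
PackageVersion: 1.0.0
PackageSupplier: Organization: Example Corp
PackageDownloadLocation: https://example.com/example-app-1.0.0.tar.gz
PackageLicenseConcluded: Apache-2.0

PackageName: zlib
SPDXID: SPDXRef-Package-zlib
PackageVersion: 1.2.11
PackageDownloadLocation: https://zlib.net/zlib-1.2.11.tar.gz
PackageLicenseConcluded: Zlib

Relationship: SPDXRef-Package-example-app DEPENDS_ON SPDXRef-Package-zlib
```

Even this small document captures the elements the NTIA definition calls for: component identity, version, supplier, and the supply chain relationship between the components.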

SPDX evolved organically over the last decade to suit the software industry, covering issues like license compliance and security. The community consists of hundreds of people from hundreds of companies, and the standard itself is the most robust, mature, and widely adopted SBOM format in the market today.

The full SPDX specification is only one part of the picture. Optional components such as SPDX Lite, developed by Pioneer, Sony, Hitachi, Renesas, and Fujitsu, among others, provide a focused SBOM subset suited to smaller suppliers. The community-driven approach behind SPDX allows practical use-cases to be addressed as they arise.

In 2020, SPDX was submitted to ISO via the PAS Transposition process of Joint Technical Committee 1 (JTC1) in collaboration with the Joint Development Foundation. It is currently in the approval phase of the transposition process and can be reviewed on the ISO website as ISO/IEC PRF 5962.

The Linux Foundation has prepared a submission for NTIA highlighting knowledge and experience gained from practical deployment and usage of SBOMs in the SPDX and OpenChain communities. These include observations on the utility of specific practices, such as tracking timestamps and including data licenses in metadata. With the backing of many parties across the worldwide technology industry, the SPDX and OpenChain specifications are constantly evolving to support all stakeholders.

Industry Comments

The Sony team uses various approaches to managing open source compliance and governance… An example is using an OSS management template sheet based on SPDX Lite, a compact subset of the SPDX standard. Teams need to be able to review the type, version, and requirements of software quickly, and using a clear standard is a key part of this process.

Hisashi Tamai, SVP, Sony Group Corporation, Representative of the Software Strategy Committee

“Intel has been an early participant in the development of the SPDX specification and utilizes SPDX, as well as other approaches, both internally and externally for a number of open source software use-cases.”

Melissa Evers, Vice President – Intel Architecture, Graphics, Software / General Manager – Software Business Strategy

Scania corporate standard 4589 (STD 4589) was just made available to our suppliers and defines the expectations we have when Open Source is part of a delivery to Scania. So what is it we ask for in a relationship with our suppliers when it comes to Open Source? 

1) That suppliers conform to ISO/IEC 5230:2020 (OpenChain). If a supplier conforms to this specification, we feel confident that they have a professional management program for Open Source.  

2) If in the process of developing a solution for Scania, a supplier makes modifications to Open Source components, we would like to see those modifications contributed to the Open Source project. 

3) Supply a bill of materials in ISO/IEC DIS 5962 (SPDX) format, plus the source code where there’s an obligation to offer the source code directly, so we don’t need to ask for it.

Jonas Öberg, Open Source Officer – Scania (Volkswagen Group)

The SPDX format greatly facilitates the sharing of software component data across the supply chain. Wind River has provided a Software Bill of Materials (SBOM) to its customers using the SPDX format for the past eight years. Often customers will request SBOM data in a custom format. Standardizing on SPDX has enabled us to deliver a higher quality SBOM at a lower cost.

Mark Gisi, Wind River Open Source Program Office Director and OpenChain Specification Chair

The Black Duck team from Synopsys has been involved with SPDX since its inception, and I had the pleasure of coordinating the activities of the project’s leadership for more than a decade. In addition, representatives from scores of companies have contributed to the important work of developing a standard way of describing and communicating the content of a software package.

Phil Odence, General Manager, Black Duck Audits, Synopsys

With the rapidly increasing interest in the types of supply chain risk that a Software Bill of Materials helps address, SPDX is gaining broader attention and urgency. FossID (now part of Snyk) has been using SPDX from the start as part of both software component analysis and for open source license audits. Snyk is stepping up its involvement too, already contributing to efforts to expand the use cases for SPDX by building tools to test out the draft work on vulnerability profiles in SPDX v3.0.

Gareth Rushgrove, Vice President of Products, Snyk

For more information on OpenChain: https://www.openchainproject.org/

For more information on SPDX: https://spdx.dev/


The post What is an SBOM? appeared first on Linux Foundation.

When will my instance be ready? — understanding cloud launch time performance metrics


Adoption of a “COVID-19 Vaccine Required” Approach for our Fall 2021 Event Line-up

After careful consideration, we have decided that the safest course of action for returning to in-person events this fall is to take a “COVID-19 vaccine required” approach to participating in-person. Events that will be taking this approach include:

Open Source Summit + Embedded Linux Conference (and co-located events), Sept 27-30, Seattle, WA
OSPOCon, Sept 27-29, Seattle, WA
Linux Security Summit, Sept 27-29, Seattle, WA
Open Source Strategy Forum, Oct 4-5, London, UK
OSPOCon Europe, Oct 6, London, UK
Open Networking & Edge Summit + Kubernetes on Edge Day, Oct 11-12, Los Angeles, CA
KubeCon + CloudNativeCon (and co-located events), Oct 11-15, Los Angeles, CA
The Linux Foundation Member Summit, Nov 2-4, Napa, CA
Open Source Strategy Forum, Nov 9-10, New York, NY

We are still evaluating whether to keep this requirement in place for events in December and beyond. We will share more information once we have an update.

Proof of full COVID-19 vaccination will be required to attend any of the events listed above. A person is considered fully vaccinated two weeks after the second dose of a two-dose series, or two weeks after a single dose of a one-dose vaccine.

Vaccination proof will be collected via a digitally secure vaccine verification application that will protect attendee data in accordance with EU GDPR, California CCPA, and US HIPAA regulations. Further details on the app we will be using, health and safety protocols that will be in place onsite at the events, and a full list of accepted vaccines will be added to individual event websites in the coming months. 

While this has been a difficult decision to make, the health and safety of our community and our attendees are of the utmost importance to us. Mandating vaccines will help infuse confidence and alleviate concerns that some may still have about attending an event in person. Additionally, it helps us keep our community members safe who have not yet been able to get vaccinated or who are unable to get vaccinated. 

This decision also allows us to be more flexible in pivoting with potential changes in guidelines that venues and municipalities may make as organizations and attendees return to in person events. Finally, it will allow for a more comprehensive event experience onsite by offering more flexibility in the structure of the event.

For those that are unable to attend in-person, all of our Fall 2021 events will have a digital component that anyone can participate in virtually. Please visit individual event websites for more information on the virtual aspect of each event.

We hope everyone continues to stay safe, and we look forward to seeing you, either in person or virtually, this fall. 

The Linux Foundation


Q: If I’ve already tested positive for COVID-19, do I still need to show proof of COVID-19 vaccination to attend in person?

A: Yes, you will still need to show proof of COVID-19 vaccination to attend in-person.

Q: Are there any special circumstances in which you will accept a negative COVID-19 test instead of proof of a COVID-19 vaccination? 

A: Unfortunately, no. For your own safety, as well as the safety of all our onsite attendees, everyone who is not vaccinated against COVID-19 will need to participate in these events virtually this year, and will not be able to attend in-person.

Q: I cannot get vaccinated for medical, religious, or other reasons. Does this mean I cannot attend?

A: For your own safety, as well as the safety of all our onsite attendees, everyone who is not vaccinated against COVID-19 – even due to medical, religious or other reasons – will need to participate in these events virtually this year, and will not be able to attend in-person.

Q: Will I need to wear a mask and socially distance at these events if everyone is vaccinated? 

A: Mask and social distancing requirements for each event will be determined closer to event dates, taking into consideration venue and municipality guidelines.

Q: Can I bring family members to any portion of an event (such as an evening reception) if they have not provided COVID-19 vaccination verification in the app? 

A: No. Anyone that attends any portion of an event in-person will need to register for the event, and upload COVID vaccine verification into our application.

Q: Will you provide childcare onsite at events again this year?

A: Due to COVID-19 restrictions, we unfortunately cannot offer child care services onsite at events at this time. We can, however, provide a list of local childcare providers. We apologize for this disruption to our normal event plans. We will be making this service available as soon as we can for future events.

Q: Will international attendees (from outside the US) be able to attend? Will you accept international vaccinations?

A: Absolutely. As mentioned above, a full list of accepted vaccines will be added to individual event websites in the coming months. 


How Linux Has Impacted Your Lives – Celebrating 30 Years of Open Source

In April, The Linux Foundation asked the open source community: How has Linux impacted your life? Needless to say, responses poured in from across the globe sharing memories, sentiments and important moments that changed your lives forever. We are grateful you took the time to tell us your stories.

We’re thrilled to share 30 of the responses we received, randomly selected from all submissions. As a thank you to these 30 folks for sharing their stories, and in celebration of the 30th Anniversary of Linux, 30 penguins were adopted* from the Southern African Foundation for the Conservation of Coastal Birds in their honor, and each of our submitters got to name their adopted penguin. 

Check out the slides below to read these stories, get a glimpse of their newly adopted penguins and their new names!

Thank you to all who contributed for inspiring us and the community for the next 30 years of innovation and beyond. 

*Each of the adopted wild African penguins have been rescued and are being rehabilitated with the goal of being released back into the wild by the wonderful and dedicated staff at SANCCOB.


Interview with Stephen Hendrick, Vice President of Research, Linux Foundation Research

Jason Perlow, Director of Project Insights and Editorial Content, spoke with Stephen Hendrick about Linux Foundation Research and how it will promote a greater understanding of the work being done by open source projects, their communities, and the Linux Foundation.

JP: It’s great to have you here today, and also, welcome to the Linux Foundation. First, can you tell me a bit about yourself, where you are from, and your interests outside work?

SH: I’m from the northeastern US. I started as a kid in upstate NY and then came to the greater Boston area when I was 8. I grew up in the Boston area, went to college back in upstate NY, and got a graduate degree in Boston. I’ve worked in the greater Boston area since I was out of school and have really had two careers. My first career was as a programmer, which evolved into project and product management doing global cash management for JPMC. When I was in banking, IT was approached very conservatively, with a tagline like “yesterday’s technology, tomorrow.” The best thing about JPMC was that it was where I met my wife. Yes, I know, you’re never supposed to date anybody from work. But it was the best decision I ever made. After JPMC, my second career began as an industry analyst working for IDC, specializing in application development and deployment tools and technologies. This was a long-lived 25+ year career followed by time with a couple of boutique analyst firms and cut short by my transition to the Linux Foundation.

Until recently, interests outside of work mainly included vertical pursuits — rock climbing during the warm months and ice climbing in the winter.  The day I got engaged, my wife (to be) and I had been climbing in the morning, and she jokes that if she didn’t make it up that last 5.10, I wouldn’t have offered her the ring.  However, having just moved to a house overlooking Mt. Hope bay in Rhode Island, our outdoor pursuits will become more nautically focused.

JP: And from what organization are you joining us?

SH: I was lead analyst at Enterprise Management Associates, a boutique industry analyst firm.  I initially focused my practice area on DevOps, but in reality, since I was the only person with application development and deployment experience, I also covered adjacent markets that included primary research into NoSQL, Software Quality, PaaS, and decisioning.  

JP: Tell me a bit more about your academic and quantitative analysis background; I see you went to Boston University, which was my mom’s alma mater as well. 

SH:  I went to BU for an MBA.  In the process, I concentrated in quantitative methods, including decisioning, Bayesian methods, and mathematical optimization.  This built on my undergraduate math and economics focus and was a kind of predecessor to today’s data science focus.  The regression work that I did served me well as an analyst and was the foundation for much of the forecasting work I did and industry models that I built.  My qualitative and quantitative empirical experience was primarily gained through experience in the more than 100 surveys and in-depth interviews I have fielded.  

JP: What disciplines do you feel most influence your analytic methodology? 

SH: We now live in a data-driven world, and math enables us to gain insight into the data.  So math and statistics are the foundation that analysis is built on.  So, math is most important, but so is the ability to ask the right questions.  Asking the right questions provides you with the data (raw materials) shaped into insights using math.  So analysis ends up being a combination of both art and science.

JP: What are some of the most enlightening research projects you’ve worked on in your career? 

SH: One of the most exciting projects I cooked up was to figure out how many professional developers there were in the world, by country, with five years of history and a five-year forecast. I developed a parameterized logistic curve tuned to each country using the CIA, WHO, UN, and selected country-level data. It was a landmark project at the time and used by the world’s leading software and hardware manufacturers. I was flattered to find out six years later that another analyst firm had copied it (since I provided the generalized equation in the report).

I was also interested in finding that an up-and-coming SaaS company had used some of my published matrix data on language use, which showed huge growth in Ruby.  This company used my findings and other evidence to help drive its acquisition of a successful Ruby cloud application platform.

JP: I see that you have a lot of experience working at enterprise research firms, such as IDC, covering enterprise software development. What lessons do you think we can learn from the enterprise and how to approach FOSS in organizations adopting open source technologies?

SH: The analyst community has struggled at times to understand the impact of OSS. Part of this stems from the economic foundation of the supply side research that gets done.  However, this has changed radically over the past eight years due to the success of Linux and the availability of a wide variety of curated open source products that have helped transform and accelerate the IT industry.  Enterprises today are less concerned about whether a product/service is open or closed source.  Primarily they want tools that are best able to address their needs. I think of this as a huge win for OSS because it validates the open innovation model that is characteristic of OSS. 

JP: So you are joining the Linux Foundation at a time when we have just gotten our research division off the ground. What are the kind of methodologies and practices that you would like to take from your years at firms like IDC and EMA and see applied to our new LF Research?

SH: LF is in the enviable position of having close relationships with IT luminaries, academics, hundreds of OSS projects, and a significant portion of the IT community.  The LF has an excellent opportunity to develop world-class research that helps the IT community, industry, and governments better understand OSS’s pivotal role in shaping IT going forward.

I anticipate that we will use a combination of quantitative and qualitative research to tell this story.  Quantitative research can deliver statistically significant findings, but qualitative interview-based research can provide examples, sound bites, and perspectives that help communicate a far more nuanced understanding of OSS’s relationship with IT.

JP: How might these approaches contrast with other forms of primary research, specifically human interviews? What are the strengths and weaknesses of the interview process?

SH: Interviews help fill in the gaps around discrete survey questions in ways that can be insightful, personal, entertaining, and unexpected.  Interviews can also provide context for understanding the detailed findings from surveys and provide confirmation or adjustments to models based on underlying data.

JP: What are you most looking forward to learning through the research process into open source ecosystems?

SH: The transformative impact that OSS is having on the digital economy and helping enterprises better understand when to collaborate and when to compete.

JP: What insights do you feel we can uncover with the quantitative analysis we will perform in our upcoming surveys? Are there things that we can learn about the use of FOSS in organizations?

SH: A key capability of empirical research is that it can be structured to highlight how enterprises are leveraging people, policy, processes, and products to address market needs.  Since enterprises are widely distributed in their approach and best/worst practices to a particular market, data can help us build maturity models that provide advice on how enterprises can shape strategy and decision based on the experience and best practices of others.

JP: Trust in technology (and other facets of society) is arguably at an all-time low right now. Do you see a role for LF Research to help improve levels of trust in not only software but in open source as an approach to building secure technologies? What are the opportunities for this department?

SH: I’m reminded by the old saying that there are “lies, damned lies, and then there are statistics.” If trust in technology is at an all-time low, it’s because there are people in this world with a certain moral flexibility, and the IT industry has not yet found effective ways to prevent the few from exploiting the many.  LF Research is in the unique position to help educate and persuade through factual data and analysis on accelerating improvements in IT security.

JP: Thanks, Steve. It’s been great talking to you today!


TODO Group Announces 2021 State of OSPO Survey

The TODO Group, together with Linux Foundation Research and The New Stack, is conducting a survey as part of a research project on the prevalence and outcomes of open source programs among different organizations across the globe. 

Open source program offices (OSPOs) help set open source strategies and improve an organization’s software development practices. Since 2018, the TODO Group has conducted surveys to assess the state of open source programs across the industry. Today, we are pleased to announce the launch of the 2021 edition featuring additional questions to add value to the community.

The survey will generate insights into areas including:

The extent of adoption of open source programs and initiatives
Concerns around the hiring of open source developers
Perceived benefits and challenges of open source programs
The impact of open source on organizational strategy

We hope to expand the pool of respondents by translating the survey into Chinese and Japanese. Please participate now; we intend to close the survey in early July. Privacy and confidentiality are important to us: neither participant names nor company names will be published in the final results.

To take the 2021 OSPO Survey, click the button below:


As a thank you for completing this survey, you will receive a 75% discount code on enrollment in The Linux Foundation’s Open Source Management & Strategy training program, a $375 savings. This seven-course online training series is designed to help executives, managers, and software developers understand and articulate the basic concepts for building effective open source practices within their organization.


Your name and company name will not be published. Reviews are attributed to your role, company size, and industry. Responses will be subject to the Linux Foundation’s Privacy Policy, available at https://linuxfoundation.org/privacy. Please note that survey partners who are not Linux Foundation employees will be involved in reviewing the survey results. If you do not want them to have access to your name or email address, please do not provide this information.


We will summarize the survey data and share the findings during OSPOCon 2021. The summary report will be published on the TODO Group and Linux Foundation websites. 


If you have questions regarding this survey, please email us at info@todogroup.org.


FINOS Announces 2021 State of Open Source in Financial Services Survey

FINOS, the fintech open source foundation, and its research partners, Linux Foundation Research, Scott Logic, WIPRO, and GitHub, are conducting a survey as part of a research project on the state of open source adoption, contribution, and readiness in the financial services industry. 

The increased prevalence, importance, and value of open source is well understood and widely reported by many industry surveys and studies. However, the rate at which different industries are acknowledging this shift and adapting their own working practices to capitalize on the new world of open source-first differs considerably.

The financial services industry has long been a consumer of open source software; however, many firms struggle to contribute to and publish open source software and standards, and to adopt open source methodologies. A lack of understanding of how to build and deploy efficient tooling and governance models is often seen as a limiting factor.

This survey and report seek to explore open source within the context of financial services organizations, including banks, asset managers, and hedge funds, but will be designed as a resource for all financial services organizations, with the goal of making this an annual survey with year-on-year tracking of metrics.

Please participate now; we intend to close the survey in early July. Privacy and confidentiality are important to us: neither participant names nor company names will be published in the final results.

To take the 2021 FINOS Survey, click the button below:


As a thank-you for completing this survey, you will receive a 75% discount code on enrollment in the Linux Foundation’s Open Source Management & Strategy training program, a $375 savings. This seven-course online training series is designed to help executives, managers, and software developers understand and articulate the basic concepts for building effective open source practices within their organization.


Your name and company name will not be published. Reviews are attributed to your role, company size, and industry. Responses will be subject to the Linux Foundation’s Privacy Policy, available at https://linuxfoundation.org/privacy. Please note that survey partners who are not Linux Foundation employees will be involved in reviewing the survey results. If you do not want them to have access to your name or email address, please do not provide this information.


We will summarize the survey data and share the findings during Open Source Strategy Forum, 2021. The summary report will be published on the FINOS and Linux Foundation websites. 


If you have questions regarding this survey, please email us at info@finos.org.


New Open Source Project Uses Machine Learning to Inform Quality Assurance for Construction in Emerging Nations

Linux Foundation with support from IBM and Call for Code hosts ‘Intelligent Supervision Assistant for Construction’ project from Build Change to help builders identify structural issues in masonry walls or concrete columns, especially in areas affected by disasters

SAN FRANCISCO, June 10, 2021 – The Linux Foundation, the nonprofit organization enabling mass innovation through open source, today announced it will host the Intelligent Supervision Assistant for Construction (ISAC-SIMO) project, which was created by Build Change with a grant from IBM as part of the Call for Code initiative. The Autodesk Foundation, a Build Change funder, also contributed pro-bono expertise to advise the project’s development.

Build Change helps save lives in earthquakes and windstorms. Its mission is to prevent housing loss caused by disasters by transforming the systems that regulate, finance, build and improve houses around the world.

ISAC-SIMO packages important construction quality assurance checks into a convenient mobile app. The tool harnesses the power of machine learning and image processing to provide feedback on specific construction elements such as masonry walls and reinforced concrete columns. Users can choose a building element check and upload a photo from the site to receive a quick assessment.

“ISAC-SIMO has amazing potential to radically improve construction quality and ensure that homes are built or strengthened to a resilient standard, especially in areas affected by earthquakes, windstorms, and climate change,” said Dr. Elizabeth Hausler, Founder & CEO of Build Change. “We’ve created a foundation from which the open source community can develop and contribute different models to enable this tool to reach its full potential. The Linux Foundation, building on the support of IBM over these past three years, will help us build this community.”

ISAC-SIMO was imagined as a solution to gaps in technical knowledge that were apparent in the field. The app ensures that workmanship issues can be more easily identified by anyone with a phone, instead of solely relying on technical staff. It does this by comparing user-uploaded images against trained models to assess whether the work done is broadly acceptable (go) or not (no go) along with a specific score. The project is itself built on open source software, including Python through Django, Jupyter Notebooks, and React Native.
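The go / no-go check described above can be sketched as a simple threshold over a model score. The function name and the 0.5 threshold below are illustrative assumptions for this sketch, not ISAC-SIMO’s actual API:

```python
def assess_element(score, threshold=0.5):
    """Map a model confidence score in [0, 1] to a go / no-go verdict.

    In practice the score would come from an image model trained on
    photos of a given construction element (e.g. a masonry wall).
    Returns the verdict together with the specific score, as the
    app reports both to the user.
    """
    verdict = "go" if score >= threshold else "no go"
    return verdict, score

# Example: the model scored a masonry-wall photo at 0.82
print(assess_element(0.82))  # ('go', 0.82)
print(assess_element(0.31))  # ('no go', 0.31)
```

The same shape of check generalizes to any element type the community contributes a model for, which is what makes the framework pluggable.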

“Due to the pandemic, the project deliverables and target audience have evolved. Rather than sharing information and workflows between separate users within the app, the app has pivoted to provide tools for each user to perform their own checks based on their role and location. This has led to a general framework that is well-suited for plugging in models from the open source community, beyond Build Change’s original use case,” said Daniel Krook, IBM Chief Technology Officer for the Call for Code Global Initiative.

IBM and The Linux Foundation have a rich history of deploying projects that fundamentally make change and progress in society through innovation – and remain committed during COVID-19. The winner of the 2018 Call for Code Global Challenge, Project OWL, contributed its IoT device firmware in March 2020 as the ClusterDuck Protocol, and since then, twelve more Call for Code deployment projects like ISAC-SIMO that address disasters, climate change, and racial justice, have been open sourced for communities that need them most.

The project encourages new users to contribute and to deploy the software in new environments around the world. Priorities for short-term updates include improvements to the user interface, contributions to the image dataset for different construction elements, and support for automatically detecting whether the perspective of an image is flawed. For more information, please visit https://www.isac-simo.net/docs/contribute/

About The Linux Foundation

Founded in 2000, The Linux Foundation is supported by more than 1,000 members and is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. The Linux Foundation’s projects are critical to the world’s infrastructure including Linux, Kubernetes, Node.js, and more. The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.


The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our trademark usage page: https://www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

Media Contact

Jennifer Cloer
for the Linux Foundation


Build and Deploy Hyperledger Fabric on Azure Cloud Platform - Part 3

By Matt Zand and Abhik Banerjee


In the first article in this series, we covered the following topics:

Azure cloud for Blockchain Applications
Fabric Marketplace Template versus Manual Configurations
Deploy Orderer and Peer Organizations

In our second article in this series, we discussed the following:

Setting Up the Development Environment
Setting Up Configurations for Orderer and Peer
Setting Up Pods and Volume Mounts for Development

In this final article in the series, we will cover the following:

Create A Channel
Adding the Peer to Network, Channel and Nodes
Deploying and Operating Chaincode

It should be noted that while this article is intended to be beginner-friendly, it would be helpful to have prior familiarity with Azure, Kubernetes APIs, and Azure Kubernetes Service (AKS). Also, a good knowledge of Hyperledger Fabric and its components is required to follow the steps discussed in this article. The topics and steps covered in this article are very useful for those interested in doing blockchain consulting and development.

7- Create A Channel

Once the development pods are up and running, we can create a channel and then add the clusters in the subsequent steps.

To create a channel, you need to bring it up with an ordering service; here, that is our OrdererOrg cluster. So we start from the Orderer cluster's development pod. Assuming you are inside that pod, you can create a channel by running:

configtxgen -profile SampleChannel -outputCreateChannelTx ./channel.tx -channelID ch01 -configPath ./

The configtxgen tool ships with the Hyperledger Fabric binaries, and since the pod is based on an image that already has HF installed and the required paths set, this command generates a channel creation transaction for a channel named "ch01", using the SampleChannel profile for the channel-specific details.

The channel details will be saved inside the channel.tx transaction file. This, as you may know, is a very important file needed to create the genesis block for this channel. The following step will use this file to bring up the channel with the OrdererOrg as the provider of the ordering service:

peer channel create -o <URL of your Orderer Cluster> -c <Channel Name/ID> -f ./channel.tx --tls --cafile /var/hyperledger/admin/msp/tlscacerts/ca.crt --clientauth --certfile /var/hyperledger/admin/tls/cert.pem --keyfile /var/hyperledger/admin/tls/key.pem

Provide the URL of your Orderer's AKS cluster for the -o flag and the channel ID (here ch01) for the -c flag. The -f flag takes the path to the channel.tx transaction file. These details are mostly the same no matter which cloud platform you use, since these are the APIs provided by HF.
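
For illustration, the filled-in call can be wrapped in a small helper script so the long command is easy to rerun. The Orderer URL below is made up; substitute your own cluster's address:

```shell
# Write the channel-creation command to a reusable script. The heredoc is
# quoted ('EOF') so nothing is expanded at write time.
cat > create_channel.sh <<'EOF'
#!/bin/sh
# Hypothetical value -- replace with your Orderer AKS cluster's public URL.
ORDERER_URL="orderer01.example.eastus.cloudapp.azure.com:443"
peer channel create -o "$ORDERER_URL" -c ch01 -f ./channel.tx \
  --tls --cafile /var/hyperledger/admin/msp/tlscacerts/ca.crt \
  --clientauth --certfile /var/hyperledger/admin/tls/cert.pem \
  --keyfile /var/hyperledger/admin/tls/key.pem
EOF
chmod +x create_channel.sh
```

Running ./create_channel.sh inside the dev pod (where the Fabric binaries and admin MSP material are present) then submits the channel creation transaction.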

8- Adding the Peer to Network, Channel and Nodes

Channels can be understood as analogous to a Virtual Private Cloud (VPC) on a cloud platform. Generally, on any cloud platform you have a default virtual private cloud (the offering name may differ from platform to platform). Inside a VPC there are subnets that provide a layer of segregation between compute resources like VMs. VMs can have multiple vNICs and can be part of more than one subnet.

When it comes to Hyperledger Fabric, think of Peer nodes and Orderers as VMs, and of a channel, which facilitates a common chaincode, as a subnet. Nodes that are part of more than one channel (like VMs in different subnets) facilitate cross-channel communication and are known as "Anchor Nodes". And just like the VPC that contains all the subnets, there is a channel for managing the whole network; this is "testchainid" by default (check the HF config files for more). To add a new Peer to the network, one simply proposes a transaction that captures the changes to the system channel configuration on the Orderer node. The transaction is recorded and the new Peer's addition is added to the ledger. The Membership Service Provider (MSP) comes into play here, as the MSP of the new Peer is what gets recorded.

Peer to the Network

First, we log into the development environment we created on the Peer cluster. This can be done by pointing kubectl at that cluster and then accessing the dev pod (refer to the "Setting Up Pods and Volume Mounts" section). Next, we need to generate a configtx.yaml here. This configtx.yaml is similar to the one provided through the hosted code, but we only need the portion up to line 44, which details Org01's information. Once the new configtx.yaml has been derived, we generate the Peer organization's configuration as a JSON file using the configtxgen tool, copy it to the Azure Shell, and from there copy it to the dev pod on the Orderer cluster. The following commands demonstrate how to do that:

configtxgen -printOrg Org01 -configPath ./ > Org01.json && exit
kubectl cp -n hlf-admin devenv:/var/hyperledger/admin/Org01.json Org01.json
az aks get-credentials --resource-group <Orderer Resource Group> --name <Orderer AKS Cluster Name> --subscription <Subscription ID>
kubectl cp -n hlf-admin Org01.json devenv:/var/hyperledger/admin/Org01.json
kubectl exec -n hlf-admin -it devenv -- /bin/bash

We generate a JSON of the Peer configuration and then exit to the Azure Shell, copying the Peer's config (here Org01.json) there. In step 3 we point kubectl at the Orderer cluster instead of the Peer cluster, and in the subsequent steps we copy the config to /var/hyperledger/admin in the Orderer Dev Pod. Next we fetch the config of the system channel (channel name testchainid) into a JSON file named sys_config_block.json, then modify it and save the changes to a new file (modified_sys_config.json).

peer channel fetch config sys_config_block.pb -o <Replace with Orderer URL> -c testchainid --tls --cafile /var/hyperledger/admin/msp/tlscacerts/ca.crt --clientauth --certfile /var/hyperledger/admin/tls/cert.pem --keyfile /var/hyperledger/admin/tls/key.pem
configtxlator proto_decode --input sys_config_block.pb --type common.Block > sys_config_block.json
jq .data.data[0].payload.data.config sys_config_block.json > original_sys_config.json
jq -s '.[0] * {"channel_group":{"groups":{"Consortiums":{"groups":{"SampleConsortium":{"groups":{"Org01":.[1]}}}}}}}' original_sys_config.json Org01.json > modified_sys_config.json
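
To see what the jq slurp-merge in the last command does, here is a toy, self-contained illustration (the file contents below are made up and much smaller than a real Fabric config):

```shell
# '-s' reads both files into one array: .[0] is the original config,
# .[1] is the new org's JSON. The '*' operator merges objects recursively,
# so Org01 is nested under the Consortiums group without disturbing the rest.
echo '{"channel_group":{"groups":{"Consortiums":{"groups":{"SampleConsortium":{"groups":{}}}}}}}' > original.json
echo '{"policies":{},"values":{}}' > neworg.json
jq -s '.[0] * {"channel_group":{"groups":{"Consortiums":{"groups":{"SampleConsortium":{"groups":{"Org01":.[1]}}}}}}}' \
  original.json neworg.json > merged.json
cat merged.json   # Org01 now appears under SampleConsortium's groups
```

The real command is identical in shape; only the inputs are the full decoded system channel config and the Org01.json generated earlier.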

What we are doing in the steps above is essentially creating a new file containing a modification which states that there is a new organization in the network. The "Consortiums" property modification reflects that in modified_sys_config.json. The difference between the original and the new config will be recorded in the network's ledger, providing irrefutable proof of inclusion for the Peer organization Org01. This is what we accomplish with the following:

configtxlator proto_encode --input original_sys_config.json --type common.Config > original_sys_config.pb
configtxlator proto_encode --input modified_sys_config.json --type common.Config > modified_sys_config.pb
configtxlator compute_update --channel_id testchainid --original original_sys_config.pb --updated modified_sys_config.pb > differed_sys_config.pb
configtxlator proto_decode --input differed_sys_config.pb --type common.ConfigUpdate > differed_sys_config.json
echo '{"payload":{"header":{"channel_header":{"channel_id":"testchainid", "type":2}},"data":{"config_update":'$(cat differed_sys_config.json)'}}}' > modreqst_sys_config.json
configtxlator proto_encode --input modreqst_sys_config.json --type common.Envelope > modreqst_sys_config.pb

We calculate the difference between the two configurations using the configtxlator tool, which is also used to convert the JSON files to protobufs, the preferred format in HF. The difference is finally reflected in modreqst_sys_config.pb, which is used in the next command to submit an update request to the Orderer via the peer CLI.

peer channel update -f modreqst_sys_config.pb -o <Orderer URL Here> -c testchainid --tls --cafile /var/hyperledger/admin/msp/tlscacerts/ca.crt --clientauth --certfile /var/hyperledger/admin/tls/cert.pem --keyfile /var/hyperledger/admin/tls/key.pem

With this, the new Organization (Org01) is now a part of the network, and almost 70% of the work is done. In the next sections we will add Org01 to our channel ch01 and then add the nodes which Org01 owns (here the Peer cluster on AKS) as a trusted component in the network.

Peer to the Channel

If we are going with the VPC analogy, then our new VM (Org01 Peer) has been turned on and included in the VPC but it is yet to be assigned a subnet. Until that point, it is virtually useless. Only after it gets a subnet would it be able to facilitate ingress and egress depending on its permissions.

Just like that, we will also need to include the newly inducted Org01 in the channel ch01 that we created in the previous section. This too needs to be done from the Orderer Dev Pod. Instead of editing the config of the system channel (testchainid), we will edit the config of our channel (ch01), and the difference between the original and the modified config will be recorded as a transaction in the ledger, denoting Org01's inclusion in the channel. The following steps fetch the original ch01 config and store it in a JSON file for readability.

peer channel fetch config ch01_config_block.pb -o <Orderer URL here> -c ch01 --tls --cafile /var/hyperledger/admin/msp/tlscacerts/ca.crt --clientauth --certfile /var/hyperledger/admin/tls/cert.pem --keyfile /var/hyperledger/admin/tls/key.pem
configtxlator proto_decode --input ch01_config_block.pb --type common.Block > ch01_config_block.json
jq .data.data[0].payload.data.config ch01_config_block.json > original_ch01_config.json
jq -s '.[0] * {"channel_group":{"groups":{"Application":{"groups":{"Org01":.[1]}}}}}' original_ch01_config.json Org01.json > modified_ch01_config.json

Once we get the original config, we create a modified config in steps 3 and 4 that reflects Org01's inclusion in the channel ch01. Just as before, we now need to account for the modified configuration by calculating the difference between the new and the old config, again in protobuf format.

configtxlator proto_encode --input original_ch01_config.json --type common.Config > original_ch01_config.pb
configtxlator proto_encode --input modified_ch01_config.json --type common.Config > modified_ch01_config.pb
configtxlator compute_update --channel_id ch01 --original original_ch01_config.pb --updated modified_ch01_config.pb > differed_ch01_config.pb
configtxlator proto_decode --input differed_ch01_config.pb --type common.ConfigUpdate > differed_ch01_config.json
echo '{"payload":{"header":{"channel_header":{"channel_id":"ch01", "type":2}},"data":{"config_update":'$(cat differed_ch01_config.json)'}}}' > modreqst_ch01_config.json
configtxlator proto_encode --input modreqst_ch01_config.json --type common.Envelope > modreqst_ch01_config.pb

The above commands using the configtxlator tool should look familiar; we are doing the same thing here with ch01 instead of the testchainid channel. The next command, using the peer API, completes the process of including the organization in the channel.

peer channel update -f modreqst_ch01_config.pb -o <Orderer URL here> -c ch01 --tls --cafile /var/hyperledger/admin/msp/tlscacerts/ca.crt --clientauth --certfile /var/hyperledger/admin/tls/cert.pem --keyfile /var/hyperledger/admin/tls/key.pem

The above command creates an update transaction which gets recorded. This update reflects the inclusion of Org01 into the channel ch01. Now all we need to do is include the nodes owned by this Peer organization into the network.

Peer to the Nodes

Previously we copied a file from the Peer Dev Pod to the Azure Shell and then to the Orderer Dev Pod. In this case, we need to do the reverse: we need the Orderer's TLS certificate, located at devenv:/var/hyperledger/admin/msp/tlscacerts/..data/ca.crt on the Orderer Dev Pod, copied to devenv:/var/hyperledger/admin/ca_Orderer.crt on the Peer's Dev Pod. You may wonder why we need this. The reason is that while the MSP of Org01, which we recently enrolled into the channel ch01, lets us connect to the channel, the node itself is not yet validated.

The TLS certificate from the Orderer helps with that. Assuming the certificate has been copied to the Peer node, we need to retrieve the genesis block configuration from the Orderer, providing the certificate while connecting to the Orderer node from the Peer with the following commands:

peer channel fetch 0 ch01_config.block -o <Orderer URL Here> -c ch01 --tls --cafile /var/hyperledger/admin/ca_Orderer.crt --clientauth --certfile /var/hyperledger/admin/tls/cert.pem --keyfile /var/hyperledger/admin/tls/key.pem
peer channel join -b ch01_config.block

And just like that the Node is now a part of the channel. The first line retrieves the channel’s genesis block configuration and stores it in a ch01_config.block file while the second uses this to let the node join the channel. Since the MSP of Org01 is already registered, this Peer node can now transact on the channel.

This concludes the construction of our Demo Network. At this point, the only thing remaining is deploying your required chaincode and invoking it when needed. In the next section we show chaincode management in this demo network. 

9- Deploying and Operating Chaincode

Once the network has been built and the nodes have been included in the channel, the process from here on is the same regardless of the cloud platform. Before moving on, it might be useful to save the Orderer's public URL as an environment variable in this pod. Since this value can vary for every deployment, it is recommended to set it after pod creation.
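
A minimal sketch of doing that, with a made-up URL standing in for your deployment's actual endpoint:

```shell
# Export the Orderer URL for this shell session (value is hypothetical --
# substitute your Orderer AKS cluster's public address).
export ORDERER_URL="orderer01.example.eastus.cloudapp.azure.com:443"
# Also persist it to a file so a later shell in the pod can re-source it:
#   . ./orderer.env
echo "ORDERER_URL=$ORDERER_URL" > orderer.env
echo "Orderer endpoint: $ORDERER_URL"
```

Subsequent peer commands can then use "$ORDERER_URL" for their -o flag instead of repeating the long URL.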

The task of installing a chaincode must first be done from a Peer that is supposed to operate it (here, the Peer node of Org01). This means you will need to first point kubectl at the AKS cluster of the Peer and then access the bash shell of the Peer's dev pod. After that, the process of operating a chaincode is fairly simple. Provided you have set the paths in the env variables, you can use a command like:

peer chaincode install -p <Path to the Chaincode> -n <Name of the Chaincode> -v <Version of the Chaincode (generally starting at 1.0)> -l <Language the chaincode is written in>

to install the chaincode on the Peer and then proceed with the lifecycle. An interesting point to note here is that most peer API commands take a global -o flag that denotes the address of the Orderer. This is the Orderer that the channel is using; in our case, the value of this flag will be the URL of the OrdererOrg.
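
As a hedged sketch of the remaining lifecycle steps (1.x-style lifecycle, matching the template used in this series), the install is typically followed by an instantiate and then invokes. The chaincode name (mycc), its arguments, and the Orderer URL below are all made up for illustration; we save the commands to a helper script rather than running them here:

```shell
# Sketch of the post-install lifecycle; names and URL are illustrative.
cat > operate_chaincode.sh <<'EOF'
#!/bin/sh
# Hypothetical value -- replace with your Orderer's public URL.
ORDERER_URL="orderer01.example.eastus.cloudapp.azure.com:443"
# Instantiate the installed chaincode 'mycc' on channel ch01 ...
peer chaincode instantiate -o "$ORDERER_URL" -C ch01 -n mycc -v 1.0 \
  -c '{"Args":["init"]}' --tls --cafile /var/hyperledger/admin/msp/tlscacerts/ca.crt
# ... then invoke one of its functions (a hypothetical 'query' here).
peer chaincode invoke -o "$ORDERER_URL" -C ch01 -n mycc \
  -c '{"Args":["query","a"]}' --tls --cafile /var/hyperledger/admin/msp/tlscacerts/ca.crt
EOF
chmod +x operate_chaincode.sh
```

Running ./operate_chaincode.sh from the Peer's dev pod (with the Fabric binaries and MSP material in place) would execute both steps against the channel.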


We have now finished part 3, which concludes our article series. At a high level, we learned how to deploy Fabric on the Azure cloud platform. We started out with a discussion of the Blockchain-as-a-Service offerings of Azure and then moved on to demonstrating how easily the Azure Marketplace AKS offering for Fabric can be used to deploy clusters of Peers and Orderers. Further, we took a brief look at using chaincode once you have your network and nodes up and running, and showed how to expand the network and introduce more members into the channel after the initial setup is done.


Free Training Courses from The Linux Foundation & Hyperledger

Blockchain: Understanding Its Uses and Implications (LFS170)
Introduction to Hyperledger Blockchain Technologies (LFS171)
Introduction to Hyperledger Sovereign Identity Blockchain Solutions: Indy, Aries & Ursa (LFS172)
Becoming a Hyperledger Aries Developer (LFS173)
Hyperledger Sawtooth for Application Developers (LFS174)

eLearning Courses from The Linux Foundation & Hyperledger

Hyperledger Fabric Administration (LFS272)
Hyperledger Fabric for Developers (LFD272)

Certification Exams from The Linux Foundation & Hyperledger

Certified Hyperledger Fabric Administrator (CHFA)
Certified Hyperledger Fabric Developer (CHFD)

Review of Five popular Hyperledger DLTs- Fabric, Besu, Sawtooth, Iroha and Indy
Review of three Hyperledger Tools- Caliper, Cello and Avalon
Review of Four Hyperledger Libraries- Aries, Quilt, Ursa, and Transact

Hands-On Smart Contract Development with Hyperledger Fabric V2 Book by Matt Zand and others.
Essential Hyperledger Sawtooth Features for Enterprise Blockchain Developers
Blockchain Developer Guide- How to Install Hyperledger Fabric on AWS
Blockchain Developer Guide- How to Install and work with Hyperledger Sawtooth
Intro to Blockchain Cybersecurity (Coding Bootcamps)
Intro to Hyperledger Sawtooth for System Admins (Coding Bootcamps)
Blockchain Developer Guide- How to Install Hyperledger Iroha on AWS
Blockchain Developer Guide- How to Install Hyperledger Indy and Indy CLI on AWS
Blockchain Developer Guide- How to Configure Hyperledger Sawtooth Validator and REST API on AWS
Intro blockchain development with Hyperledger Fabric (Coding Bootcamps)
How to build DApps with Hyperledger Fabric
Blockchain Developer Guide- How to Build Transaction Processor as a Service and Python Egg for Hyperledger Sawtooth
Blockchain Developer Guide- How to Create Cryptocurrency Using Hyperledger Iroha CLI
Blockchain Developer Guide- How to Explore Hyperledger Indy Command Line Interface
Blockchain Developer Guide- Comprehensive Blockchain Hyperledger Developer Guide from Beginner to Advance Level
Blockchain Management in Hyperledger for System Admins
Hyperledger Fabric for Developers (Coding Bootcamps)
Free White Papers from Hyperledger
Free Webinars from Hyperledger
Hyperledger Wiki

About the Authors

Matt Zand is a serial entrepreneur and the founder of four tech startups: DC Web Makers, Hash Flow, Coding Bootcamps and High School Technology Services. He is a lead author of the book Hands-On Smart Contract Development with Hyperledger Fabric (O'Reilly Media). He has written more than 100 technical articles and tutorials on blockchain development for the Hyperledger, Ethereum, and Corda R3 platforms at sites such as IBM, SAP, Alibaba Cloud, Hyperledger, The Linux Foundation, and more. At Hash Flow, he leads a team of blockchain experts for consulting on and deploying enterprise decentralized applications. As chief architect, he has designed and developed blockchain courses and training programs for Coding Bootcamps. He has a master's degree in business management from the University of Maryland. Prior to blockchain development and consulting, he worked as a senior web and mobile app developer and consultant, as well as an investor and business advisor, for several startup companies. You can connect with him on LinkedIn: https://www.linkedin.com/in/matt-zand-64047871

Abhik Banerjee is a researcher, an avid reader, and an anime fan. In his free time you can find him reading whitepapers and building hobby projects ranging from DLT to cloud infrastructure. He has multiple publications in international conferences and book titles, along with a couple of patents in blockchain. His interests include blockchain, quantum information processing, and bioinformatics. You can connect with him on LinkedIn: https://in.linkedin.com/in/abhik-banerjee-591081164
