
Scaling Microservices on Kubernetes

By Ashley Davis

This article was originally published at TheNewStack.

Applications built on microservices can be scaled in multiple ways. We can scale them to support larger development teams, and we can also scale them up for better performance, giving our application a higher capacity to handle a larger workload.

Using microservices gives us granular control over the performance of our application. We can easily measure the performance of our microservices to find the ones that are performing poorly, are overworked, or are overloaded at times of peak demand. Figure 1 shows how we might use the Kubernetes dashboard to understand CPU and memory usage for our microservices.

Figure 1: Viewing CPU and memory usage for microservices in the Kubernetes dashboard

If we were using a monolith, however, we would have limited control over performance. We could vertically scale the monolith, but that’s basically it.

Horizontally scaling a monolith is much more difficult, and we simply can’t independently scale any of the “parts” of a monolith. This isn’t ideal, because it might be only a small part of the monolith that causes the performance problem; yet we would have to vertically scale the entire monolith to fix it. Vertically scaling a large monolith can be an expensive proposition.

Instead, with microservices, we have numerous options for scaling. For instance, we can independently fine-tune the performance of small parts of our system to eliminate bottlenecks and achieve the right mix of performance outcomes.

There are also many advanced ways we could tackle performance issues, but in this post, we’ll overview a handful of relatively simple techniques for scaling our microservices using Kubernetes:

  1. Vertically scaling the entire cluster
  2. Horizontally scaling the entire cluster
  3. Horizontally scaling individual microservices
  4. Elastically scaling the entire cluster
  5. Elastically scaling individual microservices

Scaling often requires risky configuration changes to our cluster. For this reason, you shouldn’t try to make any of these changes directly to a production cluster that your customers or staff are depending on.

Instead, I would suggest that you create a new cluster and use blue-green deployment, or a similar deployment strategy, to buffer your users from risky changes to your infrastructure.

Vertically Scaling the Cluster

As we grow our application, we might come to a point where our cluster generally doesn’t have enough compute, memory or storage to run our application. As we add new microservices (or replicate existing microservices for redundancy), we will eventually max out the nodes in our cluster. (We can monitor this through our cloud vendor or the Kubernetes dashboard.)

At this point, we must increase the total amount of resources available to our cluster. When scaling microservices on a Kubernetes cluster, we can just as easily make use of either vertical or horizontal scaling. Figure 2 shows what vertical scaling looks like for Kubernetes.

Figure 2: Vertically scaling your cluster by increasing the size of the virtual machines (VMs)

We scale up our cluster by increasing the size of the virtual machines (VMs) in the node pool. In this example, we increased the size of three small-sized VMs so that we now have three large-sized VMs. We haven’t changed the number of VMs; we’ve just increased their size — scaling our VMs vertically.

Listing 1 is an extract from Terraform code that provisions a cluster on Azure; we change the vm_size field from Standard_B2ms to Standard_B4ms. This upgrades the size of each VM in our Kubernetes node pool: instead of two CPUs per VM, we now have four. As part of this change, the memory and hard drive capacity of each VM also increase. If you are deploying to AWS or GCP, you can vertically scale with the same technique, but those cloud platforms offer different options for VM sizes.

We still only have a single VM in our cluster, but we have increased our VM’s size. In this example, scaling our cluster is as simple as a code change. This is the power of infrastructure-as-code, the technique where we store our infrastructure configuration as code and make changes to our infrastructure by committing code changes that trigger our continuous delivery (CD) pipeline.

Listing 1: Vertically scaling the cluster with Terraform (an extract)
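The listing itself doesn’t survive in this extract. As a rough sketch of what such Terraform might look like, assuming the azurerm provider (the cluster name, resource group reference, and other values here are hypothetical, not the book’s):

```hcl
# Hypothetical extract: an AKS cluster whose node pool VMs have been
# vertically scaled from Standard_B2ms to Standard_B4ms.
resource "azurerm_kubernetes_cluster" "cluster" {
  name                = "my-cluster"                       # hypothetical name
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  dns_prefix          = "my-cluster"

  default_node_pool {
    name       = "default"
    node_count = 1                  # still a single VM in the node pool
    vm_size    = "Standard_B4ms"    # was Standard_B2ms: 2 vCPUs -> 4 vCPUs
  }

  identity {
    type = "SystemAssigned"
  }
}
```

Applying this change (e.g., via `terraform apply` in a CD pipeline) replaces the node pool’s VMs with the larger size.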

Horizontally Scaling the Cluster

In addition to vertically scaling our cluster, we can also scale it horizontally. Our VMs can remain the same size, but we simply add more VMs.

By adding more VMs to our cluster, we spread the load of our application across more computers. Figure 3 illustrates how we can take our cluster from three VMs up to six. The size of each VM remains the same, but we gain more computing power by having more VMs.

Figure 3: Horizontally scaling your cluster by increasing the number of VMs

Listing 2 shows an extract of Terraform code to add more VMs to our node pool. Back in listing 1, we had node_count set to 1, but here we have changed it to 6. Note that we reverted the vm_size field to the smaller size of Standard_B2ms. In this example, we increase the number of VMs, but not their size; although there is nothing stopping us from increasing both the number and the size of our VMs.

Generally, though, we might prefer horizontal scaling because it is less expensive than vertical scaling. That’s because using many smaller VMs is cheaper than using fewer but bigger and higher-priced VMs.

Listing 2: Horizontally scaling the cluster with Terraform (an extract)
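Again, the original listing isn’t reproduced here; a minimal sketch of the relevant node pool block, with the same hypothetical names as before, might look like this:

```hcl
# Hypothetical extract: six small VMs instead of one.
default_node_pool {
  name       = "default"
  node_count = 6                  # was 1 in listing 1
  vm_size    = "Standard_B2ms"    # reverted to the smaller VM size
}
```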

Horizontally Scaling an Individual Microservice

Assuming our cluster is scaled to an adequate size to host all the microservices with good performance, what do we do when individual microservices become overloaded? (This can be monitored in the Kubernetes dashboard.)

Whenever a microservice becomes a performance bottleneck, we can horizontally scale it to distribute its load over multiple instances. This is shown in figure 4.

Figure 4: Horizontally scaling a microservice by replicating it

We are effectively giving more compute, memory and storage to this particular microservice so that it can handle a bigger workload.

Again, we can use code to make this change. We can do this by setting the replicas field in the specification for our Kubernetes deployment or pod as shown in listing 3.

Listing 3: Horizontally scaling a microservice with Terraform (an extract)
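The listing isn’t included in this extract; here is a sketch of what setting the replicas field might look like using the Terraform Kubernetes provider. The microservice name and container image are hypothetical:

```hcl
# Hypothetical extract: three replicas of a single microservice.
resource "kubernetes_deployment" "gateway" {
  metadata {
    name = "gateway"        # hypothetical microservice name
  }

  spec {
    replicas = 3            # horizontally scale to three instances

    selector {
      match_labels = {
        app = "gateway"
      }
    }

    template {
      metadata {
        labels = {
          app = "gateway"
        }
      }

      spec {
        container {
          name  = "gateway"
          image = "myregistry.azurecr.io/gateway:1"   # hypothetical image
        }
      }
    }
  }
}
```

Kubernetes schedules the three pod replicas across the nodes of the cluster and load-balances requests between them.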

Not only can we scale individual microservices for performance, we can also horizontally scale our microservices for redundancy, creating a more fault-tolerant application. By having multiple instances, there are others available to pick up the load whenever any single instance fails. This allows the failed instance of a microservice to restart and begin working again.

Elastic Scaling for the Cluster

Moving into more advanced territory, we can now think about elastic scaling. This is a technique where we automatically and dynamically scale our cluster to meet varying levels of demand.

When demand is low, Kubernetes can automatically deallocate resources that aren’t needed. During high-demand periods, new resources are allocated to meet the increased workload. This can generate substantial cost savings because, at any given moment, we only pay for the resources necessary to handle our application’s workload at that time.

We can use elastic scaling at the cluster level to automatically grow a cluster that is nearing its resource limits. Yet again, when using Terraform, this is just a code change. Listing 4 shows how we can enable the Kubernetes autoscaler and set the minimum and maximum size of our node pool.

Elastic scaling for the cluster works by default, but there are also many ways we can customize it. Search for “auto_scaler_profile” in the Terraform documentation to learn more.

Listing 4: Enabling elastic scaling for the cluster with Terraform (an extract)
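A sketch of enabling the cluster autoscaler on the node pool is below. The field names are those of the azurerm provider at the time of writing and may differ between provider versions; the min and max values are illustrative:

```hcl
# Hypothetical extract: the node pool elastically scales between 3 and 20 VMs.
default_node_pool {
  name                = "default"
  vm_size             = "Standard_B2ms"
  enable_auto_scaling = true   # let Kubernetes add and remove VMs on demand
  min_count           = 3      # never shrink below three VMs
  max_count           = 20     # never grow beyond twenty VMs
}
```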

Elastic Scaling for an Individual Microservice

We can also enable elastic scaling at the level of an individual microservice.

Listing 5 is a sample of Terraform code that gives microservices a “burstable” capability. The number of replicas for the microservice is expanded and contracted dynamically to meet the varying workload for the microservice (bursts of activity).

The scaling works by default, but can be customized to use other metrics. See the Terraform documentation to learn more. To learn more about pod auto-scaling in Kubernetes, see the Kubernetes docs.

Listing 5: Enabling elastic scaling for a microservice with Terraform
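The listing isn’t reproduced in this extract; a sketch of what a horizontal pod autoscaler might look like via the Terraform Kubernetes provider, reusing the hypothetical microservice from the listing 3 sketch:

```hcl
# Hypothetical extract: burst between 1 and 10 replicas based on CPU load.
resource "kubernetes_horizontal_pod_autoscaler" "gateway" {
  metadata {
    name = "gateway"
  }

  spec {
    min_replicas = 1
    max_replicas = 10

    scale_target_ref {
      api_version = "apps/v1"
      kind        = "Deployment"
      name        = "gateway"   # the deployment to scale
    }

    # Add replicas when average CPU utilization exceeds 50%.
    target_cpu_utilization_percentage = 50
  }
}
```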

About the Book: Bootstrapping Microservices

You can learn more about building applications with microservices in Bootstrapping Microservices.

Bootstrapping Microservices is a practical and project-based guide to building applications with microservices. It takes you all the way from building a single microservice to running a microservices application in production on Kubernetes, ending with an automated continuous delivery pipeline and using infrastructure-as-code to push updates into production.

Other Kubernetes Resources

This post is an extract from Bootstrapping Microservices and has been a short overview of the ways we can scale microservices when running them on Kubernetes.

We specify the configuration for our infrastructure using Terraform. Creating and updating our infrastructure through code in this way is known as infrastructure-as-code, a technique that turns working with infrastructure into a coding task and paved the way for the DevOps revolution.

To learn more about Kubernetes, please see the Kubernetes documentation and the free Introduction to Kubernetes training course.

To learn more about working with Kubernetes using Terraform, please see the Terraform documentation.

About the Author, Ashley Davis
Ashley is a software craftsman, entrepreneur, and author with over 20 years of experience in software development, from coding to managing teams, then to founding companies. He is the CTO of Sortal, a product that automatically sorts digital assets through the magic of machine learning.

The post Scaling Microservices on Kubernetes appeared first on Linux Foundation – Training.

Top Enable Sysadmin content of March 2021

Check out our top articles from a record-breaking month.
Read More at Enable Sysadmin

Linux Foundation Training Scholarships Are Back! Apply by April 30

Linux Foundation Training (LiFT) Scholarships are back! Since 2011, The Linux Foundation has awarded over 600 scholarships for more than a million dollars in training and certification to deserving individuals around the world who would otherwise be unable to afford it. This is part of our mission to grow the open source community by lowering the barrier to entry and making quality training options accessible to those who want them.

Applications are being accepted through April 30 in 10 different categories:

  • Open Source Newbies
  • Teens-in-Training
  • Women in Open Source
  • Software Developer Do-Gooder
  • SysAdmin Super Star
  • Blockchain Blockbuster
  • Cloud Captain
  • Linux Kernel Guru
  • Networking Notable
  • Web Development Wiz

Whether you are just starting in your open source career, or you are a veteran developer or sysadmin who is looking to gain new skills, if you feel you can benefit from training and/or certification but cannot afford it, you should apply. 

Recipients will receive a Linux Foundation training course and certification exam. All our certification exams, and most training courses, are offered remotely, meaning they can be completed from anywhere. 

Winners will be announced early summer.

Apply today!

The post Linux Foundation Training Scholarships Are Back! Apply by April 30 appeared first on Linux Foundation – Training.

Announcing the Unbreakable Enterprise Kernel Release 6 Update 2 for Oracle Linux

The Unbreakable Enterprise Kernel (UEK) for Oracle Linux provides the latest open source innovations, key optimizations, and security to cloud and on-premises workloads. It is the Linux kernel that powers Oracle Cloud and Oracle Engineered Systems such as Oracle Exadata Database Machine and Oracle Linux on Intel/AMD as well as Arm platforms. What’s New? The Unbreakable Enterprise Kernel Release 6 Update 2 (UEK R6U2) for Oracle Linux is based on…
Click to Read More at Oracle Linux Kernel Development

The Linux Foundation Hosts Project to Decentralize and Accelerate Drug Development for Rare Genetic Diseases

OpenTreatments and RareCamp creator Sanath Kumar Ramesh built the project to address his son’s rare disease, now that work will be available to all in an effort to accelerate treatments

SAN FRANCISCO, Calif., March 31, 2021 – The Linux Foundation, the nonprofit organization enabling mass innovation through open source, and the OpenTreatments Foundation, which enables treatments for rare genetic diseases regardless of rarity and geography, today announced the RareCamp software project will be hosted at the Linux Foundation. The Project will provide the source code and open governance for the OpenTreatments software platform to enable patients to create gene therapies for rare genetic diseases.

The project is supported by individual contributors, as well as collaborations from companies that include Baylor College of Medicine, Castle IRB, Charles River, Columbus Children’s Foundation, GlobalGenes, Odylia Therapeutics, RARE-X and Turing.com.

“OpenTreatments and RareCamp decentralize drug development and empower patients, families and other motivated individuals to create treatments for diseases they care about. We will enable the hand off of these therapies to commercial, governmental and philanthropic entities to ensure patients around the world get access to the therapies for the years to come,” said Sanath Kumar Ramesh, founder of the OpenTreatments Foundation and creator of RareCamp.

There are 400 million patients worldwide affected by more than 7,000 rare diseases, yet treatments for rare genetic diseases are an underserved area. More than 95 percent of rare diseases do not have an approved treatment, and new treatments are estimated to cost more than $1 billion.

“If it’s not yet commercially viable to create treatments for rare diseases, we will take this work into our own hands; open source software and community collaboration is the way we can do it,” said Ramesh.

The RareCamp open source project provides open governance for the software and scientific community to collaborate and create the software tools to aid in the creation of treatments for rare diseases. The community includes software engineers, UX designers, content writers and scientists who are collaborating now to build the software that will power the OpenTreatments platform. The project uses the open source Javascript framework NextJS for frontend and the Amazon Web Services (AWS) Serverless stack – including AWS Lambda, Amazon API Gateway, and Amazon DynamoDB – to power the backend. The project uses the open source toolchain Serverless Framework to develop and deploy the software. The project is licensed under Apache 2.0 and available for anyone to use.

“OpenTreatments and RareCamp really demonstrate how technology and collaboration can have an impact on human life,” said Brett Andrews, RareCamp contributor and software engineer at Vendia. “Sanath’s vision is fueled with love for his son, technical savvy and the desire to share what he’s learning with others who can benefit. Contributing to this project was an easy decision.”

“OpenTreatments Foundation and RareCamp really represent exactly why open source and collaboration are so powerful – because they allow all of us to do more together than any one of us,” said Mike Dolan, executive vice president and GM of Projects at the Linux Foundation. “We’re honored to be able to support this community and are both confident and inspired about its impact on human lives.”

For more information and to contribute, please visit: OpenTreatments.org

About OpenTreatments Foundation

OpenTreatments Foundation’s mission is to enable treatments for all genetic diseases regardless of rarity and geography. Through the OpenTreatments software platform, patient-led organizations get access to a robust roadmap, people, and infrastructure necessary to build a gene therapy program. The software platform offers project management capabilities to manage the program while reducing time and money necessary for the development. For more information, please visit: OpenTreatments.org

About the Linux Foundation

Founded in 2000, the Linux Foundation is supported by more than 1,000 members and is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. Linux Foundation’s projects are critical to the world’s infrastructure including Linux, Kubernetes, Node.js, and more.  The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

###

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our trademark usage page:  https://www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

Media Contact

Jennifer Cloer
for the OpenTreatments Foundation
and Linux Foundation
503-867-2304
jennifer@storychangesculture.com

The post The Linux Foundation Hosts Project to Decentralize and Accelerate Drug Development for Rare Genetic Diseases appeared first on Linux Foundation.

LF Networking Announces New Member Walmart, Bolsters a New Era of Enterprise Open Source Networking

  • Participation by the Fortune 1 enterprise brings technical leadership and unprecedented scale to LFN projects across Network Management & Automation
  • Koby Avital, EVP of Technology Platforms, Walmart Global Tech, joins the Governing Board as LFN Platinum member
  • Community Growth signals ecosystem commitment to leverage open source for collaborative network transformation across Cloud, Enterprise and Service Provider Ecosystems.

SAN FRANCISCO – March 31, 2021 – LF Networking (LFN), the de facto collaboration ecosystem for Open Source Networking projects, today announced that Walmart has joined as a Platinum member. Walmart is the first retail member of LFN and joins 21 other global organizations as Platinum members all working to accelerate open source networking.

“We are thrilled to welcome Walmart to the LF Networking community,” said Arpit Joshipura, general manager, Networking, Edge and IoT, at the Linux Foundation. “As the world’s largest retailer, Walmart brings expertise across a broad swath of areas, including retail point of sale networking, enterprise IT, and hybrid cloud deployments.  We look forward to collaborative efforts that accelerate the open source networking community.”

“I’m excited to join the Linux Foundation Governing Board on behalf of Walmart,” said Koby Avital, Executive Vice President, Walmart Global Tech. “By joining LFN, Walmart has the opportunity to contribute, influence the cloud growth and better support the enterprise and service provider communities by open-sourcing innovative technologies across its retail infrastructure.”

Join the LF Networking community October 11-12 for Open Networking and Edge Summit (ONES), the industry’s premier open networking event, expanded to comprehensively cover Edge Computing, Edge Cloud & IoT. ONES North America enables collaborative development and innovation across enterprises, service providers/telcos and cloud providers to shape the future of networking and edge computing. Details here: https://events.linuxfoundation.org/open-networking-edge-summit-north-america/.

About the Linux Foundation

The Linux Foundation is the organization of choice for the world’s top developers and companies to build ecosystems that accelerate open technology development and industry adoption. Together with the worldwide open source community, it is solving the hardest technology problems by creating the largest shared technology investment in history. Founded in 2000, The Linux Foundation today provides tools, training and events to scale any open source project, which together deliver an economic impact not achievable by any one company. More information can be found at www.linuxfoundation.org.

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our trademark usage page: https://www.linuxfoundation.org/trademark-usage.

Linux is a registered trademark of Linus Torvalds.

# # #

The post LF Networking Announces New Member Walmart, Bolsters a New Era of Enterprise Open Source Networking appeared first on Linux Foundation.

Exploring the new Podman secret command

Use the new podman secret command to secure sensitive data when working with containers.
Read More at Enable Sysadmin

LF Energy Spring Summit 2021: Lighting Up the Future

Click on the image above for TFiR coverage of the LF Energy Spring Summit

To avert the worst of the climate crisis, the decarbonization of our power systems leads the way. We have ten years. With a greener grid, we can wholly embark on the electrification of automobiles, decarbonization of our built environments, and the electrification of trucking. Together, these four sectors will represent approximately 75% of the journey to remain below 1.5°C.  

Hosted within The Linux Foundation, LF Energy leads the way by bringing together stakeholders to solve the complex, interconnected problems associated with the decarbonization of energy and sector coupling through neutral governance, an open, collaborative community, and using resilient, secure, and flexible open source software. 

Join us at our April 14th event and be part of the movement!  

The LF Energy Spring Summit 2021 is a half-day virtual event that will take place in two time segments on April 14th (April 15th in Asia/Australia) to accommodate a global audience. Visit us and register at https://events.linuxfoundation.org/lf-energy-spring-summit/register/. Standard registration is $50USD. 

The End of Black Boxes, Open Source, and Getting to a Greener Future 

Power systems have historically considered infrastructure to be big, expensive, heavy hardware and software, composed into black boxes that last for 50 years. Decoupling hardware and software is a giant step in digitalization. At the Linux Foundation, we have done this before!

If energy is anything like other industrials, software-defined infrastructure that supports virtualization and automation is where we are heading. Rather than forklift applications, we need fast, iterative development and releases that enable the grid and our requirements about current conditions to adapt as the grid evolves rapidly.

Digitalization means that system operators can “network and shape electrons” by orchestrating the metadata about an electron, thus enabling a choreography of supply and demand in ways never before possible. Parallels have been made to the telecommunications industry, and while instructive of digitalization and innovation pathways, electrons are physical and not abstracted like a packet. Electrons need physical surfaces, like power lines, to meet demand or deliver a resource back to the greater network.

Digitalization will facilitate a radically energy-efficient future. When every electron counts, renewable and distributed energy provides humanity with the tools to address climate change by decarbonizing the grid, powering the transition to e-mobility, and supporting the urbanization of world populations.

Our community is ground zero for innovation! Join us at the Summit!

The Program Flow 

The LF Energy Spring Summit 2021 takes place on April 14th and is a virtual event with two segments to accommodate different time zones. This event will be recorded and available to attendees post-event as well.

Segment 1: 6:00 – 11:30 am PDT (3:00 – 8:30 pm CEST)

Segment 2: 3:00 – 8:30 pm PDT (7:00 – 12:30 am JST, April 15) 

The discussion will consist of the latest updates and future trends in the power and energy system. Hear our speakers as they share their knowledge and experiences with you. 

This year’s tracks include (review the full schedule on the event site):

    • Best Practices in Open Source Development/ Lessons Learned 
    • Certifying Open Source Projects & compliance 
    • DevOps + Cloud Native + Microservices 
    • Growing & Sustaining Open Source Projects 
    • Microgrids 
    • Open Source Program Office (OSPO)/TODO Group 
    • Organizational Transformation 
    • Power System Network Operations for the Future 
    • Price-based Grid Coordination 
    • Project Highlights 
    • Social + Technical + Economic Directions 

For more information and to register, please visit our website.

Towards a Diversified LF Energy Community 

Great minds and ideas have no race, gender, or image. Every person, when given the opportunity, can shape the world and be the future in their endeavors. LF Energy promotes a diversified community where every idea is welcome, every talent can be nurtured and honed, and every individual is respected. 

At the LF Energy Spring Summit 2021, we will gather as a global community. We believe that education and collaboration are vital in unifying us all.

LF Energy offers a diversity scholarship program to support those from communities underrepresented or marginalized in technology, including but not limited to LGBTQI people, women, people of color, and people with disabilities. It allows those who face financial constraints, but have the passion and desire to participate, to join.

Scholarships will be awarded based on need and impact. Each application will be reviewed, assessed, and kept confidential. Several scholarships will be awarded, and every recipient will receive complimentary registration to the virtual event.

Be part of our community. With the LF Energy Spring Summit right around the corner, be sure to register for our diversity scholarship program that welcomes all. Visit and apply at https://events.linuxfoundation.org/lf-energy-spring-summit/attend/scholarships/ to show us your passion and let us grow together.   

OpenTreatments Foundation: Democratizing and Decentralizing Drug Development

Sanath Kumar Ramesh is the Founder & CEO of the OpenTreatments Foundation, which was announced today at the Linux Foundation. Ramesh created the foundation as he looked at medical solutions to help his son suffering from an ultra-rare genetic disease called SSMD (curegpx4.org). He is building the world’s first software platform to decentralize drug development and empower anyone in the world to create treatments for genetic diseases.

Linux Foundation Will Host AsyncAPI to Support Growth and Collaboration for Industry’s Fastest-Growing API Spec

The open specification for defining asynchronous APIs gains momentum, seeks neutral home for open governance, community growth and industry adoption

SAN FRANCISCO, Calif., March 30, 2021 –  The Linux Foundation, the nonprofit organization enabling mass innovation through open source, today announced it will host the AsyncAPI Initiative. AsyncAPI is a specification and a suite of open source tools that work with asynchronous APIs and event-driven architectures. It is the fastest-growing API specification according to a recent developer survey, tripling in production usage from 2019 to 2020.

Founding sponsors of the AsyncAPI Initiative include Ably Realtime, Apideck, Bump, IQVIA Technologies, Slack, Solace, and TIBCO, and AsyncAPI recently announced a partnership with Postman. Today, AsyncAPI is in production at Adidas, PayPal, Salesforce, SAP, and Slack, among other enterprise environments. 

“As the growth of AsyncAPI skyrocketed, it became clear to us that we needed to find a neutral, trusted home for its ongoing development. The Linux Foundation is without question the leader in bringing together interested communities to advance technology and accelerate adoption in an open way,” said Fran Méndez, who created AsyncAPI in 2016. “This natural next step for the project really represents the maturity and strength of AsyncAPI. We expect the open governance model architected and standardized by the Linux Foundation will ensure the initiative continues to thrive.” 

AsyncAPI helps unify documentation automation and code generation, as well as managing, testing, and monitoring asynchronous APIs. It provides language for describing the interface of event-driven systems regardless of the underlying technology and supports the full development cycle of event-driven architecture.  AsyncAPI is considered a sister project of the OpenAPI Initiative, which is focused on synchronous REST communication and is also hosted by the Linux Foundation.

“The Linux Foundation is pleased to provide a forum where individuals and organizations can come together to advance AsyncAPI and nurture collaboration in a neutral forum that can support the kind of growth this community is experiencing,” said Chris Aniszczyk, CTO and Vice President, Developer Relations at the Linux Foundation.

For more information, please visit: https://www.asyncapi.org

Supporting Quotes

Łukasz Górnicki, AsyncAPI

“AsyncAPI at Linux Foundation is another brick needed to build a solid and sustainable community for the project. We are securing a perimeter for AsyncAPI and can focus on expanding the vision of making all the specs work together for the user’s good.”

Bill Doerrfeld, NordicAPIs

“Open standards are only as strong as their community effort. The details of the AsyncAPI charter represent their ongoing community mission and goal to retain vendor neutrality around the format. AsyncAPI is taking an active role in enacting this by limiting company representation per TSC, privileging work over money, and other strategies.”

Kin Lane, Postman

“AsyncAPI joining the Linux Foundation is the final cornerstone in the foundation of the open source event-driven API specification. This creates solid groundwork for defining the next generation of API infrastructure, beginning with HTTP request and response APIs, but also event-driven approaches spanning multiple protocols and patterns including Kafka, GraphQL, MQTT, AMQP, and much more. And all of that, in turn, will provide what is needed to power documentation, mocking, testing, and other critical stops along a modern enterprise API lifecycle.”

Matt McLarty, Salesforce

“Seeing how AsyncAPI has blossomed has been incredible. Its progress has been guided by two key principles in my opinion: a focus on solving real world problems, and a focus on community. As the world of synchronous APIs and event-based communication converges, AsyncAPI plays a vital role in levelling the API playing field.”

About the Linux Foundation

Founded in 2000, the Linux Foundation is supported by more than 1,000 members and is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. Linux Foundation’s projects are critical to the world’s infrastructure including Linux, Kubernetes, Node.js, and more.  The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

###

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our trademark usage page:  https://www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

Media Contact

Jennifer Cloer
for Linux Foundation 
503-867-2304
jennifer@storychangesculture.com

The post Linux Foundation Will Host AsyncAPI to Support Growth and Collaboration for Industry’s Fastest-Growing API Spec appeared first on Linux Foundation.