By the numbers: Getting your team on board with IT automation
See how you can automate rote tasks and shift your focus to more attractive projects.
Read More at Enable Sysadmin
Linux Foundation Research Announces Software Bill of Materials (SBOM) Readiness Survey
A Software Bill of Materials (SBOM) is a complete, formally structured list of the components, libraries, and modules required to build (i.e., compile and link) a given piece of software, along with the supply chain relationships between them. These components can be open source or proprietary, free or paid, and widely available or restricted access. SBOMs that can be shared without friction between teams and companies will be a core part of software management for critical industries and digital infrastructure in the coming decades.
SBOMs are especially critical for national digital infrastructure used within government agencies and in critical industries that present national security risks if penetrated. SBOMs would improve understanding of the operational and cyber risks that those software components inherit from their originating supply chains.
This SBOM readiness survey is the Linux Foundation’s first project addressing how to secure the software supply chain. The foundation of this project is a worldwide survey of IT professionals who understand their organization’s approach to software development, procurement, compliance, or security. Organizations surveyed will include both software producers and consumers. An important driver for this survey is the recent Executive Order on Cybersecurity, which focuses on producing and consuming SBOMs.
The survey seeks to answer the following questions:
How concerned are organizations about software security?
How familiar are organizations with SBOMs?
How ready are organizations to consume and produce SBOMs?
How committed are organizations to a timeline for addressing SBOMs?
What benefits do organizations expect to derive from SBOMs?
What concerns do organizations have about SBOMs?
What capabilities are needed in SBOMs?
What do organizations need to improve their SBOM operability?
How important are SBOMs relative to other ways to secure the software supply chain?
Data from this survey will enable the development of a maturity model focused on how the value provided by SBOMs increases as organizations build out their SBOM capabilities.
The survey is available in seven languages:
English
Chinese
Japanese
Korean
German
French
Russian
To take the 2021 State of SBOM Readiness Survey, select your desired language/region.
BONUS
As a thank-you for your participation, upon completion of the survey you will receive a 20% registration discount for the Open Source Summit/Embedded Linux Conference. Please note this discount is not transferable and may not be combined with other offers.
PRIVACY
Personally identifiable information will not be published. Responses are attributed only to role, company size, and industry, and are subject to the Linux Foundation’s Privacy Policy, available at https://linuxfoundation.org/privacy.
VISIBILITY
We will summarize the survey data and share the findings at the Open Source Summit/Embedded Linux Conference in September.
QUESTIONS
If you have questions regarding this survey, please email us at research@linuxfoundation.org.
The post Linux Foundation Research Announces Software Bill of Materials (SBOM) Readiness Survey appeared first on Linux Foundation.
5 ways for teams to create an automation-first mentality
DevSecOps can provide a competitive edge for your organization. Use these five strategies to get started.
By aeastwoo, Fri, 6/25/2021 at 3:50am
Image by Roy Harryman from Pixabay
An automation-first mentality is likely a significant transformation for any organization, typically starting with task automation, moving to complex workflow orchestration, and ultimately innovating intelligent operations and “push-button” end-user services. It represents a solid commitment for DevSecOps—acknowledging the competitive edge this type of cultural change can provide. But getting there, and finding and building the necessary support for it, are real challenges—even when there’s been some initial success running automations in individual departments.
Topics: Automation, Career, DevOps
Read More at Enable Sysadmin
How WebAssembly Modules Safely Exchange Data
By Marco Fioretti
The WebAssembly binary format (Wasm) has been developed to allow software written in any language to “compile once, run everywhere”: inside web browsers or stand-alone virtual machines (runtimes) available for any platform, almost as fast as code compiled directly for those platforms. Wasm modules can interact with any host environment in which they run in a genuinely portable way, thanks to the WebAssembly System Interface (WASI).
That is not enough, though. To be actually usable without surprises in as many scenarios as possible, Wasm executable files need at least two more things. One is the capability to interact directly not just with the operating system, but with any other program of the same kind. The way to do this with Wasm is called “module linking”, and it will be the topic of the next article in this series. The other feature, which is a prerequisite for module linking to be useful, is the capability to exchange data structures of any kind, without misunderstandings or data loss.
What happens when Wasm modules exchange data?
Since it is only a compilation target, the WebAssembly format provides only low-level data types that aim to be as close to the underlying machine as possible. It is this choice that yields highly portable, high-performing modules, while leaving programmers free to write software in whatever language they want. The burden of mapping complex data structures in that language to native Wasm data types is left to software libraries, and to the compilers that use them.
The problem here is that, in order to be efficient, the first generation of Wasm syntax and WASI does not natively support strings and other equally basic data types. Therefore, there is no intrinsic guarantee that, for example, a Wasm module compiled from Python sources and another compiled from Rust sources will have exactly the same concept of “string” in every circumstance where strings may be used.
The consequence is that, if Wasm modules compiled from different languages want to exchange more complex data structures, something important may be, so to speak, “lost in translation” every time data passes from one module to another. Concretely, this prevents both the direct embedding of Wasm modules into generic applications and direct calls from Wasm modules to external software.
In order to understand the nature of the problem, it is useful to look at how such data are passed around in first-generation Wasm and WASI modules.
The original way for WebAssembly to communicate with JavaScript and C programs is to simulate things like strings by manually managing chunks of memory.
For example, in the function path_open, a string is passed as a pair of integer numbers (i32) that represent the offset and length, respectively, of that string in the linear memory reserved to a Wasm module. This would already be bad enough when, to mention just the simplest and most frequent cases, different character encodings or garbage collection (GC) are used. To make things worse, WASI modules that exchange strings would be forced to access each other’s memory, making this way of working far from optimal for both performance and security reasons.
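To make the mechanism concrete, here is a minimal sketch in Rust of this first-generation style. The host import host_log is a hypothetical function invented for illustration; real WASI calls such as path_open follow the same (offset, length) convention, just with more parameters.

```rust
// Hypothetical host import: the host sees no "string" type, only a
// pointer and a length describing bytes in this module's linear memory.
extern "C" {
    fn host_log(ptr: *const u8, len: usize);
}

// Exported Wasm function. The module hands the host a raw
// (offset, length) pair, and the host must reach into the module's
// memory to read the bytes; the UTF-8 encoding is a convention between
// the two sides, not something the type system guarantees.
#[no_mangle]
pub extern "C" fn greet() {
    let msg = "hello from wasm";
    unsafe {
        host_log(msg.as_ptr(), msg.len());
    }
}
```

Compiled for a Wasm target (for example, wasm32-unknown-unknown), the string never crosses the boundary as a value: only two integers do, plus a shared view of the module’s memory.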
Theoretically, Wasm modules that want to exchange data may also use traditional, JavaScript-compatible data passing mechanisms like WebIDL. This is the Interface Description Language used to describe all the components, including of course data types for any Web application programming interface (API).
In practice, however, this would not solve anything. First, because Web IDL functions can accept (that is, pass back to the Wasm module that called them) higher-level constructs than WebAssembly would understand. Second, because using Web IDL from WebAssembly means exchanging data not directly but through ECMAScript Bindings, which have their own complexities and performance penalties. In summary, certain tricks work today, but not in all cases, and they are by no means future-proof.
The solution: Interface and Reference Types
The real solution to all the problems mentioned above is to extend both the Wasm binary format and WASI in ways that:
directly support more complex data structures like strings or lists
allow Wasm modules to statically type-check the corresponding variables, and exchange them directly, but without having to share their internal linear memory.
There are two specifications being developed just for this purpose. The main one is simply called Interface Types; its companion is Reference Types.
Both specifications rely on lower-level features already added to the original Wasm core, namely “multi-value” and multi-memory support. The first extension allows Wasm functions to return an arbitrary number of values, instead of just one as before, and Wasm instruction sequences to handle an arbitrary number of stack values. The other lets a whole Wasm module, or single functions, use multiple memories at the same time, which is useful for many reasons besides exchanging variables.
Building on these features, Interface Types define strings and other “high-level” data structures, in ways that any unmodified Wasm runtime can use. Reference Types complete the picture, specifying how Wasm applications must actually exchange those data structures with external applications.
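As a rough preview of what this means for developers, the sketch below uses the wasm-bindgen crate, a present-day Rust tool whose generated glue approximates what Interface Types standardize at the specification level: strings cross the boundary as first-class values, and the (offset, length) lowering is produced automatically instead of being written by hand.

```rust
// A sketch using the wasm-bindgen crate: the attribute macro generates
// the low-level (offset, length) lowering and the JavaScript glue code.
use wasm_bindgen::prelude::*;

#[wasm_bindgen]
pub fn shout(input: &str) -> String {
    // From the caller's perspective, a string goes in and a string
    // comes out; no linear-memory offsets appear in the source code.
    input.to_uppercase()
}
```

The point is not the specific tool but the shape of the interface: once strings and lists are first-class at the boundary, modules compiled from different languages can agree on them without sharing memory.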
The specifications are not yet complete. Interface Types can exchange values, but not handles to resources and buffers, which would be required, for example, to “read a file and write directly into a buffer”.
Working together, however, all the features described here already enable Wasm modules and WASI interfaces to handle and exchange most complex data structures efficiently, without corrupting them, regardless of the language they were written in before being compiled to Wasm.
The post How WebAssembly Modules Safely Exchange Data appeared first on Linux Foundation – Training.
The ever-evolving IT job role: system administrator
 If you explore multiple iterations of sysadmins in the wild, you may be interested to find that not all ‘sysadmin’ roles look alike.
Read More at Enable Sysadmin 
Why I was scared of IT automation
 Learn the perspectives of three IT roles—and their common anxieties about IT automation.
Read More at Enable Sysadmin 
The rise of the automation architect
 Use these tips to advance your IT career and establish yourself as an automation architect.
Read More at Enable Sysadmin 
Sigstore: A New Tool Wants to Save Open Source From Supply Chain Attacks (WIRED)
“The founders of Sigstore hope that their platform will spur adoption of code signing, an important protection for software supply chains but one that popular and widely used open source software often overlooks. Open source developers don’t always have the resources, time, expertise, or wherewithal to fully implement code signing on top of all the other nonnegotiable components they need to build for their code to function.”
Linux Foundation Launches GitOps Training
The two new courses were created in partnership with the Cloud Native Computing Foundation and Continuous Delivery Foundation
SAN FRANCISCO – GITOPS SUMMIT – June 22, 2021 – Today at GitOps Summit, The Linux Foundation, the nonprofit organization enabling mass innovation through open source, the Cloud Native Computing Foundation® (CNCF®), which builds sustainable ecosystems for cloud native software, and the Continuous Delivery Foundation (CDF), the open source software foundation that seeks to improve the world’s capacity to deliver software with security and speed, announced the immediate availability of two new online training courses focused on GitOps, or operation by pull request: a powerful developer workflow that enables organizations to unlock the promise of cloud native continuous delivery.
Cloud native technologies enable organizations to scale rapidly and deliver software faster than ever before. GitOps is the set of practices that enable developers to carry out tasks that traditionally fell to operations personnel. As development practices evolve, GitOps is becoming an essential skill for many job roles. These two new online, self-paced training courses are designed to teach the skills necessary to begin implementing GitOps practices:
Introduction to GitOps (LFS169)
LFS169 is a free introductory course providing foundational knowledge about key GitOps principles, tools and practices, to help build an operational framework for cloud native applications primarily running on Kubernetes. The course explains how to set up and automate a continuous delivery pipeline to Kubernetes, leading to increased productivity and efficiency for tech roles.
This course walks through a series of demonstrations with a fully functional GitOps environment, which explains the true power of GitOps and how it can help build infrastructures, deploy applications, and even do progressive releases, all via pull requests and git-based workflows. By the end of this course, participants will be familiar with the need for GitOps, and understand the different reconciliation patterns and implementation options available, helping them make the right technological choices for their particular needs.
GitOps: Continuous Delivery on Kubernetes with Flux (LFS269)
LFS269 will benefit software developers interested in learning how to deploy their cloud native applications using familiar GitHub-based workflows and GitOps practices; quality assurance engineers interested in setting up continuous delivery pipelines, and implementing canary analysis, A/B testing, etc. on Kubernetes; site reliability engineers interested in automating deployment workflows and setting up multi-tenant, multi-cluster GitOps-based Continuous Delivery workflows and incorporating them with existing Continuous Integration and monitoring setups; and anyone looking to understand the landscape of GitOps and learn how to choose and implement the right tools.
This course provides a deep dive into GitOps principles and practices, and how to implement them using Flux CD, a CNCF project. Flux CD uses a reconciliation approach to keep Kubernetes clusters in sync, using Git repositories as the source of truth. This course helps build essential Git and Kubernetes knowledge for a GitOps practitioner by setting up Flux v2 on an existing Kubernetes cluster, automating the deployment of Kubernetes manifests with Flux, and incorporating Kustomize and Helm to create customizable deployments. It explains how to set up notifications and monitoring with Prometheus, Grafana, and Slack; integrate Flux with Tekton-based workflows to set up CI/CD pipelines; build release strategies including canary, A/B testing, and blue/green; deploy to multi-cluster and multi-tenant environments; integrate GitOps with service meshes such as Linkerd and Istio; secure GitOps workflows with Flux; and much more.
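To illustrate the reconciliation pattern in the abstract, here is a minimal, hypothetical sketch of the loop at the heart of GitOps tooling; every type and function in it is an illustrative placeholder, not a Flux API.

```rust
use std::{thread, time::Duration};

// Illustrative placeholder: real controllers compare structured
// Kubernetes objects and Git commits, not plain strings.
type State = String;

fn desired_state_from_git() -> State {
    // Placeholder: a real controller pulls the Git repository that
    // serves as the source of truth and parses its manifests.
    "replicas=3".to_string()
}

fn actual_state_from_cluster() -> State {
    // Placeholder: a real controller queries the Kubernetes API server.
    "replicas=2".to_string()
}

fn apply(desired: &State) {
    // Placeholder: a real controller patches cluster objects to match Git.
    println!("converging cluster toward: {desired}");
}

fn main() {
    // The core GitOps loop: continually converge the cluster toward
    // whatever is declared in Git, rather than pushing one-off changes.
    loop {
        let desired = desired_state_from_git();
        if actual_state_from_cluster() != desired {
            apply(&desired);
        }
        thread::sleep(Duration::from_secs(60));
    }
}
```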
“GitOps is an essential methodology for shifting left and using cloud native effectively. We are already seeing the demand for it with the adoption of CNCF projects like Argo and Flux,” said Priyanka Sharma, General Manager of the Cloud Native Computing Foundation. “I am thrilled that we now offer two GitOps courses so developers of all levels can build a foundation and learn how to integrate GitOps with Kubernetes. I encourage every practitioner to check it out!”
“Our partnership with Cloud Native Computing Foundation (CNCF) has resulted in the creation of this high quality course for software developers who want a better understanding of the GitOps landscape. It includes information on integrating Flux CD with Tekton-based workflows, a great example of CNCF and CDF projects closely working together. By taking the course, you will be able to evaluate and implement GitOps to meet your development needs,” said Tracy Miranda, Executive Director of the Continuous Delivery Foundation. “The launch of these courses is a result of the strong increase in demand for cloud-native applications. This program will directly benefit those interested in expanding their Git and Kubernetes knowledge and following best practices for GitOps techniques.”
Introduction to GitOps consists of 3-4 hours of course material including video lessons. It is available at no cost for up to a year.
GitOps: Continuous Delivery on Kubernetes with Flux consists of 30-40 hours of course material, including video lessons, hands-on labs, and more. The $299 course fee includes a full year of access to all materials.
About Cloud Native Computing Foundation
Cloud native computing empowers organizations to build and run scalable applications with an open source software stack in public, private, and hybrid clouds. The Cloud Native Computing Foundation (CNCF) hosts critical components of the global technology infrastructure, including Kubernetes, Prometheus, and Envoy. CNCF brings together the industry’s top developers, end users, and vendors, and runs the largest open source developer conferences in the world. Supported by more than 500 members, including the world’s largest cloud computing and software companies, as well as over 200 innovative startups, CNCF is part of the nonprofit Linux Foundation. For more information, please visit www.cncf.io.
About the Continuous Delivery Foundation
The Continuous Delivery Foundation (CDF) seeks to improve the world’s capacity to deliver software with security and speed. The CDF is a vendor-neutral organization that is establishing best practices of software delivery automation, propelling education and adoption of CD tools, and facilitating cross-pollination across emerging technologies. The CDF is home to many of the fastest-growing projects for CD, including Jenkins, Jenkins X, Tekton, and Spinnaker. The CDF is part of the Linux Foundation, a nonprofit organization. For more information about the CDF, please visit https://cd.foundation.
About the Linux Foundation
Founded in 2000, the Linux Foundation is supported by more than 1,000 members and is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. Linux Foundation’s projects are critical to the world’s infrastructure including Linux, Kubernetes, Node.js, and more. The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.
The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see its trademark usage page: www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.
# # #
The post Linux Foundation Launches GitOps Training appeared first on Linux Foundation – Training.