
The Linux Foundation Hosts Open19 to Accelerate Data Center and Edge Hardware Innovation

Open19 framework enables data center hardware design that powers edge, 5G and custom cloud deployments worldwide; the move brings both hardware and software under the Linux Foundation, with project founder Yuval Bachar joining as a Fellow

SAN FRANCISCO, April 21, 2021 – The Linux Foundation, the nonprofit organization enabling mass innovation through open source, today announced it will host the Open19 Foundation, an open hardware platform for data center and edge hardware innovation. It is also announcing that one of the original founders of the Open19 project, Yuval Bachar, is joining the Linux Foundation to lead this effort. Project leadership includes premiere members Equinix and Cisco.

Open19 focuses on hardware standards that enable compute, storage and network manufacturers and end users to develop differentiated hardware solutions while protecting their competitive intellectual property. With the addition of Open19, The Linux Foundation is hosting data center hardware and software under one virtual roof.

“As the open hardware project of The Linux Foundation, the Open19 Project is dedicated to creating solutions that help digital businesses take advantage of specialized infrastructure,” said Zachary Smith, Open19 Foundation chairperson and Managing Director of Equinix Metal. “We are excited to join The Linux Foundation to solve the challenges facing modern data centers with collaborative, open, community-led innovation.”

Open19 provides a framework for accessing and deploying hardware innovation at any scale, from edge environments to large-scale custom clouds. With its unique intellectual property model and market-leading specifications with proven adoption, Open19 enables technology providers, supply chain partners, cloud service providers, telecoms and tech forward enterprises to leverage shared investments to address the exploding needs of modern compute and network deployments while minimizing risk. This reduces time to market for new solutions while substantially lowering the cost of operations.

“Open19 is revolutionizing the way we approach hardware,” said Yuval Bachar, Open19 Foundation Fellow. “The time to invest in open hardware has never been more pressing. With the transformation happening as a result of AI, 5G and edge networking in particular, the opportunity for innovation is ripe, and Open19 will accelerate it.”

Yuval Bachar founded the Open19 project and is returning to support the project and its community under the Linux Foundation. His career includes technical leadership roles at Microsoft, LinkedIn, Facebook and Cisco. Bachar has been at the forefront of some of the industry’s most important technology developments, from data center networking to data center self-healing with machine learning, AI and predictive maintenance. Most recently, he was Principal Hardware Architect of the Azure Platform at Microsoft. Previously, he was Principal Engineer in the global infrastructure and strategy team at LinkedIn, the leader and architect for Facebook’s data center networking hardware and Senior Director of Engineering in the CTO office at Cisco.

The Linux Foundation provides an open governance model and a vendor neutral home to a variety of projects working to advance open hardware and data center innovation. This framework nurtures cross-project collaboration among Open19, DPDK, OpenBMC, and RISC-V projects; the LF Edge, OpenPower and Cloud Native Computing Foundations; and incubating projects such as bare metal provisioning engine Tinkerbell, among others. Formal collaborations are expected to be announced in the coming months.

“The Open19 Community has been doing crucial work to accelerate open source hardware design to meet the needs of modern data centers and the edge,” said Arpit Joshipura, General Manager, Networking, Edge & IoT at The Linux Foundation. “We are excited to welcome Open19 as our growing community defines the next generation of digital infrastructure.”

Open19 was originally founded in 2016 by a community of cloud infrastructure innovators looking to solve the cost, efficiency and operational challenges of modern data center deployments, and solutions based on Open19 technology are now deployed at leading global providers. Open19 provides specifications for servers, storage and networking components designed to fit in any 19-inch data center rack environment. The project features common elements to enable platform innovation: flexible server “bricks” (server nodes with standard power supply and network delivery, plus cooling); a mechanical cage to house bricks; a standardized power shelf; and blind mate power and data connectors.

Driven by strong industry adoption, members are now working on the next generation of the Open19 specification, which is expected to be available mid-year 2021, and invite others to get involved. For more information, please visit: www.open19.org

About The Open19 Project

The Open19 project, as part of The Linux Foundation, designs and promotes a form factor specification that includes a brick cage, server brick form factor, power shelf and unique blind mate power and data connectors. These components allow service providers and enterprises to leverage the first data center form factor design for a cloud and edge-native world.

About The Linux Foundation

Founded in 2000, The Linux Foundation is supported by more than 1,000 members and is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. The Linux Foundation’s projects are critical to the world’s infrastructure including Linux, Kubernetes, Node.js, and more. The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

###

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our trademark usage page:  https://www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

Media Contacts

Jennifer Cloer
for the Open19 Foundation and Linux Foundation
503-867-2304
jennifer@storychangesculture.com

Jennifer Lankford
for Equinix
503-308-2553
jennifer@lankfordpr.co


5 tips for deciding which Linux tasks and workloads to automate

It’s tough to know how to get started with automation, but here are five ideas to get you rolling.
Read More at Enable Sysadmin

Ping command basics for testing and troubleshooting

Have you ever stopped to look at how much more ping can do for you beyond just a quick network connectivity test?
Read More at Enable Sysadmin

In the trenches with Thomas Gleixner, real-time Linux kernel patch set

Jason Perlow, Editorial Director at the Linux Foundation interviews Thomas Gleixner, Linux Foundation Fellow, CTO of Linutronix GmbH, and project leader of the PREEMPT_RT real-time kernel patch set.

JP: Greetings, Thomas! It’s great to have you here this morning — although for you, it’s getting late in the afternoon in Germany. So PREEMPT_RT, the real-time patch set for the kernel, is a fascinating project because it has some very important use cases that most people who use Linux-based systems may not be aware of. First of all, can you tell me what “Real-Time” truly means?

TG: Real-Time in the context of operating systems means that the operating system provides mechanisms to guarantee that the associated real-time task processes an event within a specified period of time. Real-Time is often confused with “really fast.” The late Prof. Doug Niehaus explained it this way: “Real-Time is not as fast as possible; it is as fast as specified.”

The specified time constraint is application-dependent. A control loop for a water treatment plant can have comparatively large time constraints measured in seconds or even minutes, while a robotics control loop has time constraints in the range of microseconds. But for both scenarios, missing the deadline at which the computation has to be finished can result in malfunction. For some application scenarios, missing the deadline can have fatal consequences.
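
To make the deadline idea concrete, a minimal sketch of a periodic real-time task on Linux might look like the following; the 1 ms period and the priority of 80 are illustrative assumptions, not values from the interview, and the absolute-time sleep keeps latency from accumulating across iterations.

    /* Minimal periodic real-time loop sketch.
     * Build: gcc -O2 -o rt_loop rt_loop.c
     * Setting SCHED_FIFO requires root or CAP_SYS_NICE. */
    #include <sched.h>
    #include <stdio.h>
    #include <time.h>

    #define PERIOD_NS 1000000L  /* 1 ms period: example value only */

    int main(void)
    {
        struct sched_param sp = { .sched_priority = 80 }; /* example priority */

        if (sched_setscheduler(0, SCHED_FIFO, &sp) == -1) {
            perror("sched_setscheduler");
            return 1;
        }

        struct timespec next;
        clock_gettime(CLOCK_MONOTONIC, &next);

        for (int i = 0; i < 1000; i++) {
            /* The control-loop work goes here; it must finish within
             * PERIOD_NS, or the deadline discussed above is missed. */

            next.tv_nsec += PERIOD_NS;
            if (next.tv_nsec >= 1000000000L) {
                next.tv_nsec -= 1000000000L;
                next.tv_sec++;
            }
            /* Sleeping until an absolute time avoids drift. */
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        }
        return 0;
    }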

In the strict sense of Real-Time, the guarantee which is provided by the operating system must be verifiable, e.g., by mathematical proof of the worst-case execution time. In some application areas, especially those related to functional safety (aerospace, medical, automation, automotive, just to name a few), this is a mandatory requirement. But for other scenarios, or scenarios where there is a separate mechanism for providing the safety requirements, the proof of correctness can be more relaxed. Even in the more relaxed case, the malfunction of a real-time system can cause substantial damage, which obviously must be avoided.

JP: What is the history behind the project? How did it get started?

TG: Real-Time Linux has a history that goes way beyond the actual PREEMPT_RT project.

Linux became a research vehicle very early on. Real-time researchers set out to transform Linux into a Real-Time Operating System and followed different approaches with more or less success. Still, none of them seriously attempted a fully integrated and perhaps upstreamable variant. In 2004, various parties started an uncoordinated effort to get some key technologies into the Linux kernel on which they wanted to build proper real-time support. None of them was complete, and there was a lack of an overall concept.

Ingo Molnar, working for Red Hat, started to pick up pieces, reshape them, and collect them in a patch series to form the basis of the real-time preemption patch set, PREEMPT_RT. At that time, I worked with the late Dr. Doug Niehaus to port a solution we had working on the 2.4 Linux kernel forward to the 2.6 kernel. Our work was both conflicting and complementary, so I teamed up with Ingo quickly to get this into a usable shape. Others, like Steven Rostedt, brought in ideas and experience from other Linux real-time research efforts. With a quickly forming loose team of interested developers, we were able to develop a halfway usable real-time solution that was fully integrated into the Linux kernel in a short period of time. That was far from a maintainable and production-ready solution. Still, we had laid the groundwork and proven that the concept of making the Linux kernel real-time capable was feasible. The idea and intent of fully integrating this into the mainline Linux kernel over time were there from the very beginning.

JP: Why is it still a separate project from the Mainline kernel today?

TG: To integrate the real-time patches into the Linux kernel, a lot of preparatory work, restructuring, and consolidation of the mainline codebase had to be done first. While many pieces that emerged from the real-time work found their way into the mainline kernel rather quickly due to their isolation, the more intrusive changes that change the Linux kernel’s fundamental behavior needed (and still need) a lot of polishing and careful integration work. 

Naturally, this has to be coordinated with all the other ongoing efforts to adapt the Linux kernel to different use cases ranging from tiny embedded systems to supercomputers.

This also requires carefully designing the integration so it does not get in the way of other interests or impose roadblocks for further development of the Linux kernel, which is something the community, and especially Linus Torvalds, cares about deeply.

As long as these remaining patches are out of the mainline kernel, this is not a problem because they do not put any burden or restriction on the mainline kernel. The responsibility is on the real-time project, but on the other side, in this context it is also free to take shortcuts that would never be acceptable in the upstream kernel.

The real-time patches are fundamentally different from something like a device driver that sits in some corner of the source tree. A device driver does not cause any larger damage when it goes unmaintained and can easily be removed when it reaches its final bit-rotted state. Conversely, the PREEMPT_RT core technology is at the heart of the Linux kernel. Long-term maintainability is key, as any problem in that area will affect the Linux user universe as a whole. In contrast, a bit-rotted driver only affects the few people who have a device depending on it.

JP: Traditionally, when I think about RTOS, I think of legacy solutions based on closed systems. Why is it essential we have an open-source alternative to them? 

TG: The RTOS landscape is broad and, in many cases, very specialized. As I mentioned when discussing what real-time means, certain application scenarios require a fully validated RTOS, usually according to an application-space-specific standard and often regulatory law. Aside from that, many RTOSes are limited to a specific class of CPU devices that fit into the targeted application space. Many of them come with specialized application programming interfaces which require special tooling and expertise.

The Real-Time Linux project never aimed at these narrow and specialized application spaces. It always was meant to be the solution for 99% of the use cases and to be able to fully leverage the flexibility and scalability of the Linux kernel and the broader FOSS ecosystem so that integrated solutions with mixed-criticality workloads can be handled consistently. 

Developing real-time applications on a real-time enabled Linux kernel is not much different from developing non-real-time applications on Linux, except for the careful selection of system interfaces that can be utilized and the programming patterns that should be avoided; but that is true of real-time application programming in general, independent of the RTOS.

The important difference is that the tools and concepts are all the same, and integration into and utilizing the larger FOSS ecosystem comes for free.
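
As one illustration of those programming patterns (a sketch, not code from the project), a typical real-time process on Linux locks its memory and pre-faults its stack at startup, so that page faults cannot add unbounded latency later in the time-critical path; the pre-fault size below is an arbitrary example.

    /* Sketch of common real-time application setup on Linux:
     * lock memory, then pre-fault the stack before critical work. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    #define PREFAULT_STACK (64 * 1024) /* example size, application-dependent */

    static void prefault_stack(void)
    {
        unsigned char dummy[PREFAULT_STACK];
        memset(dummy, 0, sizeof(dummy)); /* touch every page so it is resident */
    }

    int main(void)
    {
        /* Lock current and future pages into RAM; a page fault inside
         * the time-critical path would otherwise add unbounded latency. */
        if (mlockall(MCL_CURRENT | MCL_FUTURE) == -1) {
            perror("mlockall");
            return 1;
        }
        prefault_stack();

        /* ... time-critical work starts only after this setup ... */
        return 0;
    }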

The downside of PREEMPT_RT is that it can’t be fully validated, which excludes it from specific application spaces, but there are efforts underway, e.g., the LF ELISA project, to fill that gap. The reason behind these efforts is that large multiprocessor systems have become a commodity, and the need for more complex real-time systems in various application spaces, e.g., assisted/autonomous driving or robotics, requires a more flexible and scalable RTOS approach than most of the specialized and validated RTOSes can provide.

That’s a long way down the road. Still, there are solutions out there today which utilize external mechanisms to achieve the safety requirements in some of the application spaces while leveraging the full potential of a real-time enabled Linux kernel along with the broad offerings of the wider FOSS ecosystem.

JP: What are examples of products and systems that use the real-time patch set that people depend on regularly?

TG: It’s all over the place now. Industrial automation, control systems, robotics, medical devices, professional audio, automotive, rockets, and telecommunication, just to name a few prominent areas.

JP: Who are the major participants currently developing systems and toolsets with the real-time Linux kernel patch set?  

TG: Listing them all would be equivalent to reciting the “who’s who” of the industry. On the distribution side, there are offerings from, e.g., Red Hat, SUSE, Mentor, and Wind River, which deliver RT to a broad range of customers in different application areas. On the products side, there are firms like Concurrent, National Instruments, Boston Dynamics, SpaceX, and Tesla, just to name a few.

Red Hat and National Instruments are also members of the LF collaborative Real-Time project.

JP: What are the challenges in developing a real-time subsystem or specialized kernel for Linux? Is it any different than how other projects are run for the kernel?

TG: Not really different; the same rules apply. Patches have to be posted; they are reviewed and discussed, and the feedback is incorporated. The loop starts over until everyone agrees on the solution, and the patches get merged into the relevant subsystem tree and finally end up in the mainline kernel.

But as I explained before, it needs a lot of care and effort and, often enough, a large amount of extra work to restructure existing code first to get a particular piece of the patches integrated. The result provides the desired functionality while staying out of the way of other interests or, ideally, providing a benefit for everyone.

The technology’s complexity, which reaches into a broad range of the core kernel code, is obviously challenging, especially combined with the mainline kernel’s rapid rate of change. In areas like drivers or file systems, even larger changes at the related core infrastructure level do not impact ongoing development and integration work too much. But any change to the core infrastructure can break a carefully thought-out integration of the real-time parts into that infrastructure and send us back to the drawing board for a while.

JP:  Which companies have been supporting the effort to get the PREEMPT_RT Linux kernel patches upstream? 

TG: For the past five years, it has been supported by the members of the LF real-time Linux project, currently ARM, BMW, CIP, ELISA, Intel, National Instruments, OSADL, Red Hat, and Texas Instruments. CIP, ELISA, and OSADL are projects or organizations in their own right which have member companies from all over the industry. Former supporters include Google, IBM, and NXP.

I personally, my team, and the broader Linux real-time community are extremely grateful for the support provided by these members.

However, as with other key open source projects heavily used in critical infrastructure, funding always was and still is a difficult challenge. Even though the amount of money required to sustain such low-level but essential plumbing is comparatively small, these projects struggle to find enough sponsors and often lack long-term commitment.

The approach to funding these kinds of projects reminds me of the Mikado Game, which is popular in Europe, where the first player who picks up the stick and disturbs the pile often is the one who loses.

That’s puzzling to me, especially as many companies build key products depending on these technologies and seem to take the availability and sustainability for granted up to the point where such a project fails, or people stop working on it due to lack of funding. Such companies should seriously consider supporting the funding of the Real-Time project.

It’s a lot like the Jenga game, where everyone pulls out as many pieces as they can up until the point where it collapses. We cannot keep taking; we have to give back to these communities putting in the hard work for technologies that companies heavily rely on.

I gave up long ago trying to make sense of that, especially when looking at the insane amounts of money thrown at the over-hyped technology of the day. Even if critical for a large part of the industry, low-level infrastructure lacks the buzzword charm that attracts attention and makes headlines — but it still needs support.

JP:  One of the historical concerns was that Real-Time didn’t have a community associated with it; what has changed in the last five years?  

TG: There is a lively user community, and quite a bit of the activity comes from the LF project members. On the development side itself, we are slowly gaining more people who understand the intricacies of PREEMPT_RT and also people who look at it from other angles, e.g., analysis and instrumentation. Some areas, like documentation, could be better, but there is always something that can be improved.

JP:  What will the Real-Time Stable team be doing once the patches are accepted upstream?

TG: The stable team is currently overseeing the RT variants of the supported mainline stable versions. Once everything is integrated, this will dry out to some extent once the older versions reach EOL. But their expertise will still be required to keep real-time in shape in mainline and in the supported mainline stable kernels.

JP: So once the upstreaming activity is complete, what happens afterward?

TG: Once upstreaming is done, efforts have to be made to enable RT support for specific Linux features currently disabled on real-time enabled kernels. Also, for quite some time, there will be fallout when other things change in the kernel, and there has to be support for kernel developers who run into the constraints of RT, which they did not have to think about before. 

The latter is a crucial point for this effort, because there needs to be a clear longer-term commitment that the people who are deeply familiar with the matter and the concepts are not going to vanish once the mainlining is done. We can’t leave everybody else with the task of wrapping their brains around it in desperation; there cannot be institutional knowledge loss with a system as critical as this.

The lack of such a commitment would be a showstopper on the final step, because we are now at the point where the notable changes are focused on the real-time-only aspects rather than on cleanups, improvements, and features of general value. This, in turn, circles back to the earlier question of funding and industry support, because this final step requires several years of commitment by companies using the real-time kernel.

There’s not going to be a shortage of things to work on. It’s not going to be as much as the current upstreaming effort, but as the kernel never stops changing, this will be interesting for a long time.

JP: Thank you, Thomas, for your time this morning. It’s been an illuminating discussion.

To get involved with the real-time kernel patch for Linux, please visit the PREEMPT_RT wiki at The Linux Foundation or email real-time-membership@linuxfoundation.org

ELISA Project Welcomes Codethink, Horizon Robotics, Huawei Technologies, NVIDIA and Red Hat to its Global Ecosystem

SAN FRANCISCO – April 19, 2021 – Today, the ELISA (Enabling Linux in Safety Applications) Project, an open source initiative that aims to create a shared set of tools and processes to help companies build and certify Linux-based safety-critical applications and systems, announced that Codethink, Horizon Robotics, Huawei Technologies, NVIDIA and Red Hat have joined its global ecosystem.

Linux is used in safety-critical applications across all major industries because it enables faster time to market for new features and benefits from the quality of its code development processes, which decreases the risk of issues that could result in loss of human life, significant property damage, or environmental damage. Launched in February 2019 by the Linux Foundation, ELISA will work with certification authorities and standardization bodies across industries to document how Linux can be used in safety-critical systems.

“Open source software has become a significant part of the technology strategy to accelerate innovation for companies worldwide,” said Kate Stewart, Vice President of Dependable Embedded Systems at The Linux Foundation. “We want to reduce the barriers to using Linux in safety-critical applications and welcome the collaboration of new members to help build specific use cases for automotive, medical and industrial sectors.”

Milestones

After a little more than two years, ELISA has continued to see momentum in project and technical milestones. Examples include:

  • Successful Workshops: In February, ELISA hosted its 6th workshop with more than 120 registered participants. During the workshop, members and external speakers discussed cybersecurity expectations in the automotive world, code coverage of glibc and Intel’s Linux test robot. Learn more in this blog. The next workshop is scheduled for May 18-20 and is free to attend. Register here.
  • New Ambassador Program: In October 2020, ELISA launched a program of thought leaders with expertise in functional safety and Linux kernel development. These ambassadors are willing to speak at events, write articles and work directly with the community on mentorships or onboarding new contributors. Meet the ambassadors here.
  • Mentorship Opportunities: The Linux Foundation offers a Mentorship Program with projects designed to help developers gain the skills needed to contribute effectively to open source communities. Most recently, ELISA participated in the Fall 2020 session with a project on code coverage metrics for glibc and a Linux kernel mentorship focused on CodeChecker. This work supports ELISA’s goal of gaining experience with the various static analysis methods and tools available for the Linux kernel. Learn more here.
  • Working Groups: Since launch, the project has created several working groups that collaborate on providing resources that system integrators can apply to analyze their systems qualitatively and quantitatively. Current groups include an Automotive Working Group, Medical Devices Working Group, Safety Architecture Working Group, Kernel Development Process Working Group and Tool Investigation and Code Improvement Sub-Working Group, each focused on specific activities and goals. Learn more or join a working group here.

“The primary challenge is selecting Linux components and features that can be evaluated for safety and identifying gaps where more work is needed to evaluate safety sufficiently,” said Shuah Khan, Chair of the ELISA Project Technical Steering Committee and Linux Fellow at the Linux Foundation. “We’ve taken on this challenge to make it easier for companies to build and certify Linux-based safety-critical applications by exploring potential methods to enable engineers to answer that question for their specific system.”

Learn more about the goals and technical strategy in this white paper.

Growing Ecosystem

After a little more than two years, the ELISA Project has grown by 300%. With new members Codethink, Horizon Robotics, Huawei Technologies, NVIDIA and Red Hat, the project currently has 20 members that collaborate to define and maintain a standardized set of processes and tools that can be integrated into Linux-based, safety-critical systems seeking safety certification. These new members join BMW Car IT GmbH, Intel, Toyota, ADIT, AISIN AW CO., arm, Elektrobit, Kuka, Linutronix, Mentor, Suzuki, Wind River, Automotive Grade Linux and OTH Regensburg.

“Codethink has been working with ELISA for a few years and we are excited to continue our engagement as a member,” said Shaun Mooney, Division Manager at Codethink. “Open Source Software, particularly Linux, is being used more and more in safety applications and Codethink has been looking at how we can make software trustable for a long time. We’ve been working to understand how we can use complex software and guarantee it will function as we want it to. This problem needs to be tackled collectively and ELISA is a great place to collaborate with experts in both safety and software. We’ve been working with most of the working groups since the start of ELISA and will continue to be active participants, using our expert knowledge of Linux and Open Source to help advance the state of the art for safety.”

“Safety is the most important feature of a self-driving car,” said Huang Chang, co-founder and CTO of Horizon Robotics. “Horizon’s investment into functional safety is one of the most important ones we’ve ever made, and it provides a critical ingredient for automakers to bring self-driving cars to market. The creative safety construction the ELISA project is undertaking complements Horizon’s functional safety endeavor and continued commitment to certifying Linux-based safety-critical systems.”

“Huawei is one of the most important Linux kernel contributors and recently joined the automotive industry as a strategic partner in Asia and Europe,” said Alessandro Biasci, Technical Expert at Huawei. “We are pleased to further advance our mission and participate in ELISA, which will allow us to combine our experience in Linux kernel development and our knowledge in safety and security to bring Linux to safety-critical applications.”

“Edge computing extends enterprise software from the datacenter and cloud to a myriad of operational and embedded technology footprints that interact with the physical world, such as connected vehicles and manufacturing equipment,” said Chris Wright, Chief Technical Officer at Red Hat. “A common open source software platform across these locations simplifies and accelerates solution development, while supporting functional safety’s end goal of reducing the risk of physical injury. Red Hat recognizes the importance of establishing functional safety evidence and certifications for Linux, backed by a rich platform and vibrant ecosystem for safety-related applications. We are excited to bring our twenty-seven years of Linux expertise to the ELISA community’s work.”

For more information about ELISA, visit https://elisa.tech/.

About The Linux Foundation

The Linux Foundation is the organization of choice for the world’s top developers and companies to build ecosystems that accelerate open technology development and commercial adoption. Together with the worldwide open source community, it is solving the hardest technology problems by creating the largest shared technology investment in history. Founded in 2000, The Linux Foundation today provides tools, training and events to scale any open source project, which together deliver an economic impact not achievable by any one company. More information can be found at www.linuxfoundation.org.

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see its trademark usage page: www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

###


File transfer protocols: FTP vs SFTP  

You have both secure and non-secure choices for file transfer, and each can have different advantages in different situations.
Read More at Enable Sysadmin

Building containers by hand: The PID namespace

The PID namespace is an important one when it comes to building isolated environments. Find out why and how to use it.
Read More at Enable Sysadmin

WASI, Bringing WebAssembly Way Beyond Browsers

By Marco Fioretti

WebAssembly (Wasm) is a binary software format that all browsers can run directly, safely and at near-native speeds, on any operating system (OS). Its biggest promise, however, is to eventually work in the same way everywhere, from IoT devices and edge servers, to mobile devices and traditional desktops. This post introduces the main interface that should make this happen. The next post in this series will describe some of the already available, real-world implementations and applications of the same interface.

What is portability, again?

To be safe and portable, software code needs, at a minimum:

  1. guarantees that users and programs can do only what they actually have the right to do, and can do it without creating problems for other programs or users
  2. standard, platform-independent methods to declare and apply those guarantees

Traditionally, these services are provided by libraries of “system calls” for each language, that is, functions with which a software program can ask its host OS to perform some low-level or sensitive task. When those libraries follow standards like POSIX, any compiler can automatically combine them with the source code to produce a binary file that can run on some combination of OSes and processors.

The next level: BINARY compatibility

System calls only make source code portable across platforms. As useful as they are, they still force developers to generate platform-specific executable files, all too often from more or less different combinations of source code.

WebAssembly instead aims to get to the next level: use any language you want, then compile it once to produce one binary file that will just run, securely, in any environment that recognizes WebAssembly.
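
To make this concrete, here is a minimal example of the idea; the wasi-sdk build of clang and the wasmtime runtime are named only as illustrative choices, and nothing in the source itself is platform-specific.

    /* hello_wasi.c: ordinary C code, compiled once to a single .wasm
     * file that any WASI-aware runtime can execute unmodified.
     *
     * Example build and run (toolchain names are illustrative):
     *   clang --target=wasm32-wasi -o hello.wasm hello_wasi.c
     *   wasmtime hello.wasm
     */
    #include <stdio.h>

    int main(void)
    {
        /* The wasm-targeted libc routes printf through WASI's fd_write
         * call, which any compliant runtime knows how to service. */
        printf("Hello from a portable Wasm module!\n");
        return 0;
    }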

What Wasm does not need to work outside browsers

Since WebAssembly already “compiles once” for all major browsers, the easiest way to expand its reach may seem to be to create, for every target environment, a full virtual machine (runtime) that provides everything a Wasm module expects from Firefox or Chrome.

Such work, however, would be really complex and, above all, simply unnecessary, if not impossible, in many cases (e.g. on IoT devices). Besides, there are better ways to secure Wasm modules than dumping them in one-size-fits-all sandboxes as browsers do today.

The solution? A virtual operating system and runtime

Fully portable Wasm modules cannot happen as long as, to give one practical example, access to webcams or websites can be coded only with system calls that generate platform-dependent machine code.

Consequently, the most practical way to have such modules, from any programming language, seems to be that of the WebAssembly System Interface (WASI) project: write and compile code for only one, obviously virtual, but complete operating system.

On one hand, WASI gives all the developers of Wasm runtimes one single OS to emulate. On the other, it gives all programming languages one set of system calls to talk to that same OS.

In this way, even if you loaded it on ten different platforms, a binary Wasm module calling a certain WASI function would still get – from the runtime that launched it – a different binary object every time. But since all those objects would interact with that single Wasm module in exactly the same way, it would not matter!

This approach would also work in the first use case of WebAssembly, that is, with the JavaScript virtual machines inside web browsers. To run Wasm modules that use WASI calls, those machines would only need to load the JavaScript versions of the corresponding libraries.

This OS-level emulation is also more secure than simple sandboxing. With WASI, any runtime can implement different versions of each system call – with different security privileges – as long as they all follow the specification. Then that runtime could place every instance of every Wasm module it launches into a separate sandbox, containing only the smallest and least privileged combination of functions that that specific instance really needs.

This “principle of least privilege”, or “capability-based security model”, is everywhere in WASI. A WASI runtime can pass into a sandbox an instance of the “open” system call that is only capable of opening the specific files, or folders, that were pre-selected by the runtime itself. This is a more robust, much more granular control over what programs can do than would be possible with traditional file permissions, or even with chroot systems.
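
A small sketch of how this looks from inside a module follows; the file paths and the wasmtime pre-open flag are illustrative assumptions. The same fopen() succeeds or fails depending purely on which directories the runtime pre-opened, not on the host’s own file permissions.

    /* readfile.c, compiled to readfile.wasm; example invocation:
     *   wasmtime --dir=/tmp/sandbox readfile.wasm
     */
    #include <stdio.h>

    int main(void)
    {
        /* Inside the pre-opened directory: the capability was granted,
         * so the open can proceed (assuming the file exists). */
        FILE *ok = fopen("/tmp/sandbox/data.txt", "r");
        printf("inside preopen:  %s\n", ok ? "opened" : "denied");
        if (ok) fclose(ok);

        /* Outside every preopen: denied by the runtime, regardless of
         * how permissive the host's file permissions are. */
        FILE *out = fopen("/etc/hosts", "r");
        printf("outside preopen: %s\n", out ? "opened" : "denied");
        if (out) fclose(out);
        return 0;
    }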

Coding-wise, functions for things like basic management of files, folders, network connections or time are needed by almost any program. Therefore, the corresponding WASI interfaces are designed to be as similar as possible to their POSIX equivalents, and are all packaged into one “wasi-core” module that every WASI-compliant runtime must contain.

A version of the libc standard C library, rewritten using wasi-core functions, is already available and, according to its developers, already “sufficiently stable and usable for many purposes”.

All the other virtual interfaces that WASI includes, or will include over time, are standardized and packaged as separate modules, without forcing any runtime to support all of them. In the next article, we will see how some of these WASI components are already used today.


What we learned from our survey about returning to in-person events

Recently, the Linux Foundation Events team sent out a survey to past attendees of all events from 2018 through 2021 to get their feedback on how they feel about virtual events and gauge their thoughts on returning to in-person events. We sent the survey to 69,000 people and received 972 responses. 

The enclosed PDF document summarizes the results of that survey.

Ultimately, the good news here is that a healthy number of people feel comfortable traveling this year for events, especially domestically in the US. The results also show that about a quarter of respondents like virtual events, and that the vast majority of people who responded had attended in-person events before — another reason to keep a hybrid format moving forward.


How to resize a logical volume with 5 simple LVM commands

It’s easy to add capacity to logical volumes with a few simple commands.
Read More at Enable Sysadmin