
Open Source Compliance Projects Unite Under New ACT Group

As open source software releases and customer adoption continue to increase, many companies underestimate what’s involved with going open source. It’s not only a matter of making the encouraged, but optional, upstream contributions to FOSS projects; it also means complying with the legal requirements of open source licenses. Software increasingly includes a diverse assortment of open source code under a variety of licenses, as well as a mix of proprietary code. Sorting it all out can be a major hassle, but the alternative is potential legal action and damaged relations with the open source community.

The Linux Foundation has just launched an Automated Compliance Tooling (ACT) project to help companies comply with open source licensing requirements. The new group consolidates its existing FOSSology and Software Package Data Exchange (SPDX) projects and adds two new projects: Endocode’s QMSTR, which integrates an open source compliance toolchain into build systems, and VMware’s Tern, an inspection tool for identifying open source components within containers.

Announced at this week’s Open Compliance Summit in Yokohama, Japan, the ACT umbrella organization aims to “consolidate investment in, and increase interoperability and usability of, open source compliance tooling,” says the project.

“There are numerous open source compliance tooling projects but the majority are unfunded and have limited scope to build out robust usability or advanced features,” stated Kate Stewart, Senior Director of Strategic Programs at The Linux Foundation. “We have also heard from many organizations that the tools that do exist do not meet their current needs. Forming a neutral body under The Linux Foundation to work on these issues will allow us to increase funding and support for the compliance tooling development community.” 

The four ACT projects, with links to their websites, include:

  • FOSSology — This early project for improving open source compliance was adopted by the Linux Foundation in 2015. The FOSSology project maintains and updates the FOSSology open source license compliance software system and toolkit. The software lets users quickly run license and copyright scans from the command line and generate an SPDX file — a format used to share data about software licenses and copyrights. FOSSology includes a database and web UI for easing compliance workflows, as well as license, copyright, and export scanning tools. Users include Arm, HP, HP Enterprise, Siemens, Toshiba, Wind River, and others.

  • SPDX — The Software Package Data Exchange project maintains the SPDX file format for communicating software Bill of Material (BoM) information including components, licenses, copyrights, and security references. The SPDX project was spun off from FOSSology as a Linux Foundation project in 2011 and is now reunited under ACT. In 2015, SPDX 2.0 added improved tracking of complex open source license dependencies. In 2016, SPDX 2.1 standardized the inclusion of additional data in generated files and added a syntax for accurate tagging of source files with license list identifiers. The latest 2.1.15 release offers support for deprecated license exceptions. The SPDX spec will “remain separate from, yet complementary to, ACT, while the SPDX tools that meet the spec and help users and producers of SPDX documents will become part of ACT,” says the project.

  • QMSTR — Also known as Quartermaster, QMSTR was developed by Endocode and is now hosted by ACT. QMSTR creates an open source toolchain that integrates into build systems to implement best practices for license compliance management. QMSTR identifies software products, sources, and dependencies, and can be used to verify outcomes, review problems and produce compliance reports. “By integrating into DevOps CI/CD cycles, license compliance can become a quality metric for software development,” says ACT.

  • Tern — This VMware hosted project for ensuring compliance in container technology is now part of the ACT family. Tern is an inspection tool for discovering the metadata of packages installed in container images. Tern “provides a deeper understanding of a container’s bill of materials so better decisions can be made about container based infrastructure, integration and deployment strategies,” says ACT.

The ACT project aligns with two related Linux Foundation projects: OpenChain, which just welcomed Google, Facebook, and Uber as platinum members, and the Open Compliance Program. In 2016, the OpenChain project released OpenChain 1.0 with a focus on tracking open source compliance along supply chains. The project also offers other services including OpenChain Curriculum for teaching best practices.

The Open Source Compliance group hosts the Open Compliance Summit. It also offers best practices information, legal guidance, and training courses for developers. The group helps companies understand their license requirements and “how to build efficient, frictionless and often automated processes to support compliance,” says the project.

ACT has yet to launch a separate website but has listed an act@linuxfoundation.org email address for more information.

Cloud Foundry, Cloud Native, and Entering a Multi-Platform World with Abby Kearns

2018 has been an amazing year for Cloud Foundry, with Alibaba joining as a Gold member, and Pivotal going public with its IPO, among some of the highlights. I recently talked with Abby Kearns, Executive Director of Cloud Foundry Foundation, to reflect on these milestones and more.

Kearns has been part of the Cloud Foundry ecosystem for the past five years and, under her leadership, Cloud Foundry has grown and evolved and found its way into half of the Fortune 500 companies, with those numbers increasing daily.

All of the major public cloud vendors want to be part of the ecosystem. “This year, we saw Alibaba join as a Gold member, and Cloud Foundry is now natively available on Alibaba Cloud,” said Kearns.

In 2017, Cloud Foundry embraced Kubernetes, the hottest open source project, and created CFCR (Cloud Foundry Container Runtime). “Kubernetes is a great technology that brings tons of capabilities to containers, which are the fundamental building blocks for a lot of portability for cloud native apps,” Kearns said.

Watch the video interview at The Linux Foundation

Convincing Your Manager That Upstreaming Is In Their Best Interest

By Martyn Welch, Senior Software Engineer at Collabora.

In an ideal world, everyone would implicitly understand that it just makes good business sense to upstream some of the modifications made when creating your Linux powered devices. Unfortunately, this is a long way from being common knowledge, and many managers still need convincing that this is, in fact, in their best interests.

Just so that we are clear, I’m not suggesting here that your next Linux powered device should be an entirely open design. We live in the real world, and unless your explicit aim is to produce a completely open platform, doing so is unlikely to be good for your company’s profitability. What does make sense, however, is to protect the parts of your product that drive your value proposition, while looking for ways to reduce costs in places which don’t drive the value add or unique selling point. This is where upstreaming and open source can offer you a massive advantage, if done right.

Say you have a new product in development, with a number of cool features to implement that you hope will drive customers to your door. You also have a new hardware design, thanks to the hardware guys who have discovered some funky new devices that optimise and improve this new design. You’ve also picked up the SoC vendor’s slightly outdated kernel tree and discovered that a number of these devices already have some support in the kernel, awesome. For others there is no support, either in the vendor’s tree or in the mainline tree, so backporting isn’t an option, and you’re looking to write some drivers. You’ve heard something about upstreaming and would like to give it a go, but you’re wondering if this is a good idea. Is this going to help my company? Well, the answer is generally “Yes”.

Upstreaming is the process of submitting the changes that you have made, typically to existing open source projects, so that they become part of the main (or upstream) codebase. This may be changes to support specific hardware (usually kernel-level changes), changes to fix bugs that you’ve exposed via your specific use case, or additional features that extend existing libraries that you use in your project.

Upstreaming provides you with a number of tangible advantages which can be used as rationale to help convince your management:

  • You gain at least one third-party review, by a domain expert, giving you confidence in the quality of your changes.
  • You decrease your delta with the upstream codebase, reducing the maintenance burden of your product (you do security updates, right?), of providing product updates, and potentially of creating the next version of your product.
  • You get community-suggested improvements, providing you with ways to reduce your code size whilst simultaneously increasing the available features.

Let’s use the Linux kernel as an example (one which many product developers will likely need to modify) of how these benefits manifest, as this is the project that I am familiar with.

Changes submitted to open source projects are not blindly accepted. Projects need to take some care that the changes are not going to negatively impact other users of the project that may have other use cases, and must also ensure that the changes are both sensible and done in a way that safeguards how the project can be maintained in the future. As a result, changes may need to be altered before being accepted, but such changes are likely to have a positive impact on your modifications.

The reviewer (who is very likely to be an expert in the area in which you are making changes) may be able to point out existing infrastructure that can be used to reduce code length and increase code reuse, or recommend changes that may remove potential race conditions or fix bugs that may not have been triggered during your testing. As the kernel (like most projects) expects a specific code style, there may be requests to change code to meet these requirements, as a consistent code style makes maintenance of the code easier. Once merged, the maintainer will be taking on the burden of maintaining this code, so they will want to ensure this can be done efficiently.
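
As a rough sketch of what that review loop looks like in practice for the kernel (the patch file name is illustrative; checkpatch.pl and get_maintainer.pl ship in the kernel source tree, and the addresses to use are whatever get_maintainer.pl reports for your change):

git format-patch -1 HEAD                       # turn your local commit into a mail-ready patch
./scripts/checkpatch.pl --strict 0001-*.patch  # check it against the kernel coding style
./scripts/get_maintainer.pl 0001-*.patch       # list the maintainers and mailing lists to contact
git send-email --to=<maintainer> --cc=<mailing-list> 0001-*.patch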

The upstream Linux kernel code base is being modified at a very fast pace, with a change being merged at a rate of one every 7 minutes. Different parts of the kernel develop at different rates, however, with some seeing a higher rate of change while others undergo little to no change at all. Should you have local patches, there is an increasing likelihood over time that these will become incompatible with the ever-evolving kernel.

This means your developers will need to spend time making modifications to the local patches when updating the software stack on an existing product, or when attempting to re-use these patches on a new product. Conversely, when local patches are applied upstream, existing code will be changed when APIs change, generally resulting in the modifications continuing to function as required in your use case without any engineering effort on your behalf.

Once a driver is added to the kernel, for example, others may add features to the driver that weren’t of immediate use to you. As your requirements change and grow for updates and subsequent revisions, however, such changes may prove very useful to you and would be available with minimal work. A well-documented example of this is Microsoft’s submission of Hyper-V support. This large body of work was initially added to the “staging” area, an area where drivers that aren’t ready for full inclusion in the kernel can be put to enable them to be modified and improved with the help of the community. Whilst in the staging area, the drivers were greatly improved: they were modified to comply with the Linux driver model, reducing the code line count by 60% whilst simultaneously significantly improving performance and stability.

Of course, there are also less tangible reasons for contributing upstream. As a company, if you are planning to utilise Linux and other free software in your products, it is likely that you will want to hire talented, experienced developers to help you create your products. Contributions made to open source projects relevant to you are likely to be noticed by the very same developers that you hope to attract to your company, and will also reflect well on your company should they be looking for new opportunities or should these developers be asked if they have any recommendations for good places to work.

Submitting upstream, and contributing to an open source project, can also be a very rewarding experience for existing employees. By actively participating in a project on which your products are based, they not only gain direct access to the community behind the project, but also get a better understanding of the project’s inner workings, enabling them to build future products more efficiently and confidently.

2019 Predictions About Artificial Intelligence That Will Make Your Head Spin

Whether praised as a panacea for greater business efficiency or feared as the demise of humanity, Artificial Intelligence is upon us and will impact business and society at large in ways that we can only begin to imagine. Fasten your seatbelts. Here’s what a few influencers in the arena say is on tap for 2019.

First, Ibrahim Haddad, Director of Research at The Linux Foundation, says that there are two key areas to watch.

“2019 is going to be the year of open source AI,” predicts Haddad. “We’re already seeing companies begin to open source their internal AI projects and stacks, and I expect to see this accelerate in the coming year.” He says the reason for such a move is that it increases innovation, enables faster time to market, and lowers costs. “The cost of building a platform is high, and organizations are realizing the real value is in the models, training data and applications. We’re going to see harmonization around a set of critical projects creating a comprehensive open source stack for AI, machine learning and deep learning.”

Read more at Forbes

Linux Desktop Environments: Pantheon, Trinity, LXDE

Our article Best Linux Desktop Environments: Strong and Stable surveyed 9 strong and stable Linux desktop environments (DEs). Due to popular demand, this article extends that survey with 3 other desktops: Pantheon, Trinity Desktop Environment (TDE), and LXDE. We examine their features, user experience, resource footprint, extensibility, and documentation, and compare them to the 9 desktops covered in the original article.

Let’s start by considering the features of Pantheon, LXDE, and TDE.

The three DEs provide the core functionality we’d expect from this type of software. They are stable environments that have been in development for years. 

Pantheon applications are designed and developed by elementary – some of them are forks of Gnome-based apps, others are designed from the ground up. There’s a good variety of ready-to-use apps included. We’re particularly fond of Wingpanel, a top panel similar to GNOME Shell, and Files, a well-designed file manager. And its email client, Mail (how do they come up with these names?), is fairly capable but it’s no Thunderbird. The apps are written in the Vala programming language.

TDE includes a wide selection of applications that allow you to surf the internet, send and receive email messages, chat with friends, family and colleagues, view images, compose text and create/edit documents, as well as other useful desktop utilities. Some of the choices are rather dated. And we haven’t recommended using Konqueror to surf the net or manage files for many, many years.

Read more at LinuxLinks

Bash Variables: Environmental and Otherwise

Bash variables, including those pesky environment variables, have popped up several times in previous articles, and it’s high time you get to know them better and how they can help you.

So, open your terminal window and let’s get started.

Environment Variables

Consider HOME. Apart from the cozy place where you lay down your hat, in Linux it is a variable that contains the path to the current user’s home directory. Try this:

echo $HOME

This will show the path to your home directory, usually /home/<your username>.

As the name indicates, variables can change according to the context. Indeed, each user on a Linux system will have a HOME variable containing a different value. You can also change the value of a variable by hand:

HOME=/home/<your username>/Documents

will make HOME point to your Documents/ folder.

There are three things to notice here:

  1. There are no spaces between the name of the variable and the = or between the = and the value you are putting into the variable. Spaces have their own meaning in the shell and cannot be used any old way you want.
  2. If you want to put a value into a variable or manipulate it in any way, you just have to write the name of the variable. If you want to see or use the contents of a variable, you put a $ in front of it.
  3. Changing HOME is risky! A lot of programs rely on HOME to do stuff and changing it can have unforeseeable consequences. For example, just for laughs, change HOME as shown above and try typing cd and then [Enter]. As we have seen elsewhere in this series, you use cd to change to another directory. Without any parameters, cd takes you to your home directory. If you change the HOME variable, cd will take you to the new directory HOME points to, as the short sketch below shows.
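
Here is a quick sketch of that third point, assuming you have a Documents/ folder in your home directory (try it in a throwaway terminal):

echo $HOME              # /home/<your username>
HOME=$HOME/Documents    # repoint HOME at your Documents/ folder
cd                      # with no arguments, cd goes wherever HOME points
pwd                     # /home/<your username>/Documents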

Changes to environment variables like the one described in point 3 above are not permanent. If you close your terminal and open it back up, or even open a new tab in your terminal window and move there, echo $HOME will show its original value.

Before we go on to how you make changes permanent, let’s look at another environment variable that it does make sense to change.

PATH

The PATH variable lists directories that contain executable programs. If you ever wondered where your applications go when they are installed and how come the shell seems to magically know which programs it can run without you having to tell it where to look for them, PATH is the reason.

Have a look inside PATH and you will see something like this:

$ echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/bin:/sbin

Each directory is separated by a colon (:) and if you want to run an application installed in any directory other than the ones listed in PATH, you will have to tell the shell where to find it:

/home/<user name>/bin/my_program.sh

This will run a program called my_program.sh you have copied into a bin/ directory in your home directory.

This is a common problem: you don’t want to clutter up your system’s bin/ directories, or you don’t want other users running your own personal scripts, but you don’t want to have to type out the complete path every time you need to run a script you use often. The solution is to create your own bin/ directory in your home directory:

mkdir $HOME/bin

And then tell PATH all about it:

PATH=$PATH:$HOME/bin

After that, your /home/<your username>/bin will show up in your PATH variable. But… Wait! We said that the changes you make in a given shell will not last and will lose effect when that shell is closed.

To make changes permanent for your user, instead of running them directly in the shell, put them into a file that gets run every time a shell is started. That file already exists and lives in your home directory. It is called .bashrc and the dot in front of the name makes it a hidden file — a regular ls won’t show it, but ls -a will.

You can open it with a text editor like kate, gedit, nano, or vim (NOT LibreOffice Writer — that’s a word processor. Different beast entirely). You will see that .bashrc is full of shell commands, the purpose of which is to set up the environment for your user.

Scroll to the bottom and add the following on a new, empty line:

export PATH=$PATH:$HOME/bin

Save and close the file. You’ll be seeing what export does presently. In the meantime, to make sure the changes take effect immediately, you need to source .bashrc:

source .bashrc

What source does is execute .bashrc for the current open shell, and all the ones that come after it. The alternative would be to log out and log back in again for the changes to take effect, and who has the time for that?

From now on, your shell will find every program you dump in /home/<your username>/bin without you having to specify the whole path to the file.
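
A quick way to confirm that, reusing the hypothetical my_program.sh from earlier:

cp my_program.sh $HOME/bin/
chmod +x $HOME/bin/my_program.sh   # make sure the script is executable
command -v my_program.sh           # prints /home/<your username>/bin/my_program.sh
my_program.sh                      # runs without typing the full path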

DIY Variables

You can, of course, make your own variables. All the ones we have seen have been written with ALL CAPS, but you can call a variable more or less whatever you want.

Creating a new variable is straightforward: just set a value within it:

new_variable="Hello"

And you already know how to recover a value contained within a variable:

echo $new_variable

You will often have a program that requires you to set up a variable for things to work properly. The variable may set an option to “on”, or help the program find a library it needs, and so on. When you run a program in Bash, the shell spawns a daughter process. This means it is not exactly the same shell that executes your program, but a related mini-shell that inherits some of the mother’s characteristics. Unfortunately, variables, by default, are not one of them. This is because, by default again, variables are local. This means that, for security reasons, a variable set in one shell cannot be read in another, even if it is a daughter shell.

To see what I mean, set a variable:

robots="R2D2 & C3PO"

… and run:

bash

You just ran a Bash shell program within a Bash shell program.

Now see if you can read the contents of your variable with:

echo $robots

You should draw a blank.

Still inside your bash-within-bash shell, set robots to something different:

robots="These aren't the ones you are looking for"

Check robots’ value:

$ echo $robots
These aren't the ones you are looking for

Exit the bash-within-bash shell:

exit

And re-check the value of robots:

$ echo $robots
R2D2 & C3PO

This is very useful for avoiding all sorts of messed up configurations, but it also presents a problem: if a program requires you to set up a variable, but the program can’t access it because Bash will execute it in a daughter process, what can you do? That is exactly what export is for.

Try doing the prior experiment, but, instead of just starting off by setting robots="R2D2 & C3PO", export it at the same time:

export robots="R2D2 & C3PO"

You’ll notice that, when you enter the bash-within-bash shell, robots still retains the same value it had at the outset.

Interesting fact: While the daughter process will “inherit” the value of an exported variable, if the variable is changed within the daughter process, changes will not flow upwards to the mother process. In other words, changing the value of an exported variable in a daughter process does not change the value of the original variable in the mother process.
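
Here is that whole round trip in one compact run, reusing the robots variable from above:

export robots="R2D2 & C3PO"
bash                        # start a daughter shell
echo $robots                # R2D2 & C3PO -- the exported value was inherited
robots="something else"     # change it inside the daughter shell
exit                        # back to the mother shell...
echo $robots                # ...R2D2 & C3PO -- the original value is untouched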

You can see all exported variables by running

export -p

The variables you create should be at the end of the list. You will also notice some other interesting variables in the list: USER, for example, contains the current user’s user name; PWD points to the current directory; and OLDPWD contains the path to the last directory you visited and since left. That comes in handy because, if you run:

cd -

You will go back to the last directory you visited and cd gets the information from OLDPWD.
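
A short illustration:

cd /tmp
cd $HOME
echo $OLDPWD   # /tmp -- the directory you just left
cd -           # prints /tmp and takes you back there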

You can also see all the environment variables using the env command.

To un-export a variable, use the -n option:

export -n robots
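
A quick way to check the effect, assuming the robots variable from the earlier experiments is still set:

export -p | grep robots      # prints nothing now -- robots is no longer exported
echo $robots                 # the variable itself is still set in this shell
bash -c 'echo "[$robots]"'   # prints [] -- daughter processes no longer see it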

Next Time

You have now reached a level in which you are dangerous to yourself and others. It is time you learned how to protect yourself from yourself by making your environment safer and friendlier through the use of aliases, and that is exactly what we’ll be tackling in the next episode. See you then.

How to Bring Good Fortune to Your Linux Terminal

It’s December, and if you haven’t found a tech advent calendar that sparks your fancy yet, well, maybe this one will do the trick. Every day, from now to the 24th, we’re bringing you a different Linux command-line toy. What’s a command-line toy, you ask? It could be a game or any simple diversion to bring a little happiness to your terminal.

Today’s toy, fortune, is an old one. Versions of it date back to the 1980s when it was included with Unix. The version I installed in Fedora was available under a BSD license, and I grabbed it with the following.

sudo dnf install fortune-mod -y

Your distribution may be different. 
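
Once installed, give it a try; the output is random, so yours will differ:

fortune      # prints a random quotation or quip
fortune -s   # limits the output to short fortunes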

Read more at OpenSource.com

Living Open Source in Zambia

In a previous article, I announced my sponsorship project, in which I offered to help a motivated young Linux professional get certified. I found an ideal candidate, who has since taken the RHCSA exam, and now we’re ready to take the next step.

Santos Chibenga is so engaged in the local Linux community in Zambia that we decided to host an event together: https://www.vieo.tv/event/linux-event-lusaka-zambia. At this event we will have local speakers, and I will educate nearly 200 participants to become LFCS certified.

We do have some very cool sponsors, including The Linux Foundation, which has contributed 100 exam vouchers for the LFCS exam. I’m very excited about that, because it means that at the end of the event we’re able to leave something real in Zambia.

Learn more at LinkedIn

Critical Vulnerability Allows Kubernetes Node Hacking

Kubernetes has received fixes for one of the most serious vulnerabilities found in the project to date. If left unpatched, the flaw could allow attackers to take over entire compute nodes.

“With a specially crafted request, users that are allowed to establish a connection through the Kubernetes API server to a backend server can then send arbitrary requests over the same connection directly to that backend, authenticated with the Kubernetes API server’s TLS credentials used to establish the backend connection,” the Kubernetes developers said in an advisory.

Furthermore, in default configurations, both authenticated and unauthenticated users are allowed to perform API discovery calls and could exploit this vulnerability to escalate their privileges. For example, attackers could list all pods on a node, could run arbitrary commands on those pods and could obtain the output of those commands.
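
As a minimal, hedged sketch of how you might gauge your exposure (assuming cluster-admin access; compare versions and mitigations against the official advisory):

kubectl version                                      # compare the server version with the patched releases listed in the advisory
kubectl auth can-i get /apis --as=system:anonymous   # "yes" means unauthenticated users can reach API discovery
# One mitigation discussed for unauthenticated exposure is running the API server with
# --anonymous-auth=false, weighed against anything that relies on anonymous health checks.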

Read more at The New Stack

Libcamera Aims to Make Embedded Cameras Easier

The V4L2 (Video for Linux 2) API has long offered an open source alternative to proprietary camera/computer interfaces, but it’s beginning to show its age. At the Embedded Linux Conference Europe in October, the V4L2 project unveiled a successor called libcamera. V4L2 co-creator and prolific Linux kernel contributor Laurent Pinchart outlined the early-stage libcamera project in a presentation called “Why Embedded Cameras are Difficult, and How to Make Them Easy.”

V4L and V4L2 were developed when camera-enabled embedded systems were far simpler. “Maybe you had a camera sensor connected to a SoC, with maybe a scaler, and everything was exposed via the API,” said Pinchart, who runs an embedded Linux firm called Ideas on Board and is currently working for Renesas. “But when hardware became more complex, we disposed of the traditional model. Instead of exposing a camera as a single device with a single API, we let userspace dive into the device and expose the technology to offer more fine-grained control.”

These improvements were extensively documented, enabling experienced developers to implement more use cases than before. Yet, the spec placed much of the burden of controlling the complex API on developers, with few resources available to ease the learning curve. In other words, “V4L2 became more complex for userspace,” explained Pinchart.

The project planned to add a layer called libv4l to address this. The libv4l userspace library was designed to mimic the V4L2 kernel API and expose it to apps “so it could be completely transparent in tracking the code to libc,” said Pinchart. “The plan was to have device specific plugins provided by the vendor and it would all be part of the libv4l file, but it never happened. Even if it had, it would not have been enough.”

Libcamera, which Pinchart describes as “not only a camera library but a full camera stack in user space,” aims to ease embedded camera application development, improving both on V4L2 and libv4l. The core piece is a libcamera framework, written in C++, that exposes kernel driver APIs to userspace. On top of the framework are optional language bindings for languages such as C.

The next layer up is a libcamera application layer that translates to existing camera APIs, including V4L2, Gstreamer, and the Android Camera Framework, which Pinchart said would not contain the usual vendor specific Android HAL code. As for V4L2, “we will attempt to maintain compatibility as a best effort, but we won’t implement every feature,” said Pinchart. There will also be a native libcamera app format, as well as plans to support Chrome OS.

Libcamera keeps the kernel level hidden from the upper layers. The framework is built around the concept of a camera device, “which is what you would expect from a camera as an end user,” said Pinchart. “We will want to implement each camera’s capabilities, and we’ll also have a concept of profiles, which is a higher view of features. For example, you could choose a video or point-and-shoot profile.”

Libcamera will support multiple video streams from a single camera. “In videoconferencing, for example, you might want a different resolution and stream than what you encode over the network,” said Pinchart. “You may want to display the live stream on the screen and, at the same time, capture stills or record video, perhaps at different resolutions.”

Per-frame controls and a 3A API

One major new feature is per-frame controls. “Cameras provide controls for things like video stabilization, flash, or exposure time which may change under different lighting conditions,” said Pinchart. “V4L2 supports most of these controls but with one big limitation. Because you’re capturing a video stream with one frame after another, if you want to increase exposure time you never know precisely at what frame that will take effect. If you want to take a still image capture with flash, you don’t want to activate a flash and receive an image that is either before or after the flash.”

With libcamera’s per-frame controls, you can be more precise. “If you want to ensure you always have the right brightness and exposure time, you need to control those features in a way that is tied to the video stream,” explained Pinchart. “With per-frame controls you can modify all the frames that are being captured in a way that is synchronized with the stream.”

Libcamera also offers a novel approach to a given camera’s 3A controls, such as auto exposure, autofocus, and auto white balance. To provide a 3A control loop, “you can have a simple implementation with 100 lines of code that will give you barely usable results or an implementation based on two or three years of development by device vendors where they really try to optimize the image quality,” said Pinchart. Because most SoC vendors refuse to release the 3A algorithms that run in their ISPs with an open source license, “we want to create a framework and ecosystem in which open source re-implementations of proprietary 3A algorithms will be possible,” said Pinchart.

Libcamera will provide a 3A API that will translate between standard camera code and a vendor specific component. “The camera needs to communicate with kernel drivers, which is a security risk if the image processing code is closed source,” said Pinchart. “You’re running untrusted 3A vendor code, and even if they’re not doing something behind your back, it can be hacked. So we want to be able to isolate the closed source component and make it operate within a sandbox. The API can be marshaled and unmarshaled over IPC. We can limit the system calls that are available and prevent the sandboxed component from directly accessing the kernel driver. Sandboxing will ensure that all the controls will have to go through our API.”

The 3A API, combined with libcamera’s sandboxing approach, may encourage more SoC vendors to further expose their ISPs, just as some have begun to open up their GPUs. “We want the vendors to publish open source camera drivers that expose and document every control on the device,” he said. “When you are interacting with a camera, a large part of that code is device agnostic. Vendors implement a completely closed source camera HAL and supply their own buffer management and memory location and other tasks that don’t add any value. It’s a waste of resources. We want as much code as possible that can be reused and shared with vendors.”

Pinchart went on to describe libcamera’s cam device manager, which will support hot plugging and unplugging of cameras. He also explained libcamera’s pipeline handler, which controls memory buffering and communications between MIPI-CSI or other camera receiver interfaces and the camera’s ISP.

“Our pipeline handler takes care of the details so the application doesn’t have to,” said Pinchart. “It handles scheduling, configuration, signal routing, the number of streams, and locating and passing buffers.” The pipeline handler is flexible enough to support an ISP with an integrated CSI receiver (and without a buffer pool) or other complicated ISPs that can have a direct pipeline to memory.

Watch Pinchart’s entire ELC talk below: