
Deep Hardware Discovery With lshw and lsusb on Linux

In today’s stupendous roundup, we will dig into the beloved lshw (list hardware) and lsusb (list USB) commands. This is a wonderful rabbit hole to fall down and get lost in as you learn everything about your hardware down to minute details, without ever opening the case.

lshw

The glorious lshw (list hardware) command reveals, in excruciating detail, everything about your motherboard and everything connected to it. It’s a tiny little command, weighing in at a mere 639k, and yet it reveals much. If you run lshw with no options you get a giant data dump, so try storing the results in a text file for leisurely analysis, and run it with root permissions for complete results:

$ sudo lshw | tee lshw-output.txt

The -short option prints a summary:

$ sudo lshw -short
H/W path   Device     Class     Description
===========================================
                      system    To Be Filled By O.E.M.
/0                    bus       H97M Pro4
/0/0                  memory    64KiB BIOS
/0/b                  memory    16GiB System Memory
/0/b/0                memory    DIMM [empty]
/0/b/1                memory    8GiB DIMM DDR3 Synchronous 1333 MHz (0.8 ns)

I assembled this system, so there is no OEM description. On my Dell PC it says “Precision Tower 5810” (0617).

This abbreviated example displays the hardware paths, which are the bus addresses. The output is in bus order. /0 is system/bus, your computer/motherboard. /0/n is system/bus/device. You can see these in the filesystem with ls -l /sys/bus/*/*, or look in /proc/bus. The lshw output tells you exact locations, like which memory slots are occupied, and which ports your SATA drives are connected to.
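If you want to poke around these bus addresses yourself, sysfs is the place to look. The exact entries vary from machine to machine, but something like the following lists the buses the kernel knows about, and then the devices hanging off one of them (USB in this case):

$ ls /sys/bus/
$ ls -l /sys/bus/usb/devices/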

The Device column displays devices such as USB host controllers, hard drives, network interfaces, and connected USB devices.

The Class column contains the categories of your devices, and you can query by class. This example displays all storage devices, including a USB stick:

$ sudo lshw -short -class storage -class disk
H/W path               Device      Class      Description
=========================================================
/0/100/14/1/3/4        scsi6       storage    Mass Storage
/0/100/14/1/3/4/0.0.0  /dev/sdc    disk       4027MB SCSI Disk
/0/100/1f.2                        storage    9 Series Chipset Family
                                              SATA Controller [AHCI Mode
/0/1                   scsi0       storage        
/0/1/0.0.0             /dev/sda    disk       2TB ST2000DM001-1CH1
/0/2                   scsi2       storage        
/0/2/0.0.0             /dev/sdb    disk       2TB SAMSUNG HD204UI
/0/3                   scsi4       storage        
/0/3/0.0.0             /dev/cdrom  disk       iHAS424   B

Use -volume to show all of your partitions.
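For example, keeping the -short option makes the partition list easy to read; what you see will of course depend on your own disks:

$ sudo lshw -short -class volume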

In the first example I see my motherboard model, H97M Pro4, but I don’t remember anything else about it. No worries, because I can call up excruciatingly detailed information by omitting the -short option:

$ sudo lshw -class bus
  *-core                  
       description: Motherboard
       product: H97M Pro4
       vendor: ASRock
       physical id: 0
       serial: M80-55060501382

Check it out, the serial number, vendor, and everything. Consult the fine man page, man lshw, and see Hardware Lister (lshw) for detailed information on what all the fields mean.

lsusb

The usbutils suite of commands probes your USB bus and tells you everything about it. This includes usb-devices, lsusb, and usbhid-dump. openSUSE and CentOS also package lsusb.py, but don’t include any documentation for it. My guess is it’s obsolete as it was last updated in 2009, so let us move on to the freakishly useful lsusb:

$ lsusb
Bus 002 Device 002: ID 8087:8001 Intel Corp. 
Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 001 Device 002: ID 8087:8009 Intel Corp. 
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 003 Device 003: ID 148f:5372 Ralink Technology, Corp. RT5372 Wireless Adapter
Bus 003 Device 004: ID 046d:c018 Logitech, Inc. Optical Wheel Mouse
Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

This may be all you ever need to verify what USB devices are connected to your system, and whether it is seeing all of them.

It also tells us a lot of interesting details, starting with bus assignments. The above output is on an older system that includes both 3.0 and 2.0 controllers, which may seem odd because USB standards are always backwards-compatible. But some 2.0 devices had problems with 3.0 controllers, so it made sense to have both.

There are only two external USB devices in the above output, a Ralink wi-fi dongle and a USB mouse. What are all those other things?

The root hub is a virtual device that represents the USB bus. Its device number is always 001, and the vendor ID is always 1d6b, the Linux Foundation. The product ID tells us the USB version, so 1d6b:0002 is a USB 2.0 root hub, and 1d6b:0003 is USB 3.0.

In the above output there are two physical host controllers: 8087:8001 Intel Corp. (USB 2.0) and 8087:8009 Intel Corp. (USB 3.0). On this system this is the Intel 9 Series Chipset Family Platform Controller Hub (PCH). This particular controller manages all I/O between the CPU and the rest of the system. There are no North and South bridges as there were in the olden Intel days; everything is managed in a single chip. The architecture is rather interesting, and you can read all the endless details in the 815-page datasheet. The pertinent bits for this article are as follows.

There are two physical EHCI host controllers (USB 2.0), and one xHCI host controller (USB 3.0). You can see this more clearly with the tree view:

$ lsusb -t
/:  Bus 04.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/6p, 5000M
    |__ Port 4: Dev 2, If 0, Class=Mass Storage, Driver=usb-storage, 5000M
/:  Bus 03.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/14p, 480M
    |__ Port 5: Dev 13, If 0, Class=Vendor Specific Class, Driver=rt2800usb, 480M
    |__ Port 12: Dev 4, If 0, Class=Human Interface Device, Driver=usbhid, 1.5M
/:  Bus 02.Port 1: Dev 1, Class=root_hub, Driver=ehci-pci/2p, 480M
    |__ Port 1: Dev 2, If 0, Class=Hub, Driver=hub/8p, 480M
/:  Bus 01.Port 1: Dev 1, Class=root_hub, Driver=ehci-pci/2p, 480M
    |__ Port 1: Dev 2, If 0, Class=Hub, Driver=hub/6p, 480M

This reveals all manner of fascinating information. It displays the kernel drivers, the USB versions of the connected devices (1.5M = USB 1.1, 480M = USB 2.0, and 5000M = USB 3.0), classes, busses, ports, and device numbers. There are four buses because the xHCI controller manages both USB 2.0 and 3.0 devices. lspci more clearly shows three physical host controllers:

$ sudo lspci|grep -i usb
00:14.0 USB controller: Intel Corporation 9 Series Chipset Family
USB xHCI Controller
00:1a.0 USB controller: Intel Corporation 9 Series Chipset Family
USB EHCI Controller #2
00:1d.0 USB controller: Intel Corporation 9 Series Chipset Family
USB EHCI Controller #1

The physical USB ports that you plug your devices into are supposed to be color-coded. 3.0 is blue, and 2.0 ports are black. However, not all vendors use colored ports. No worries, just use a 3.0 device and lsusb to map your ports.
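One rough-and-ready way to do that mapping, assuming you have a USB 3.0 stick or drive handy, is to watch the tree view while you move the device from port to port; whichever bus and port the new 5000M entry appears under is the physical socket you just plugged into:

$ watch -n 1 'lsusb -t'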

You may query specific buses, devices, or both. This example queries bus 004 and displays detailed information on the bus and connected devices:

$ sudo lsusb -vs 004:

You may query by vendor and product code:

$ sudo lsusb -vd 148f:5372

Update the ID database:

$ sudo update-usbids

You can also update the lspci database:

$ sudo update-pciids

See man lsusb for complete options, and thank you for joining me on this trip down the Linux hardware discovery rabbit hole.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

Fixing the Linux Graphics Kernel for True DisplayPort Compliance, Or: How to Upstream a Patch

If you’ve ever hooked up a Linux computer to a DisplayPort monitor and encountered only a flickering or blank screen, we’ve got good news for you. A graphics kernel developer at Intel’s Open Source Technology Center has solved the problem with a patch that will go into Linux 4.12. Manasi Navare’s patch modifies Atomic Kernel Mode Setting (KMS) technology to gracefully drop down to a lower resolution to display the image.

“Someone had to fix this problem, so I said okay, I have the knowledge and I have the community to help me,” said Navare at Embedded Linux Conference.

To hear Navare tell it, the hard part was not so much developing the fix as fully understanding the inner workings of DisplayPort (DP) compliance, the Linux graphics stack, and Atomic KMS. The task of upstreaming the patch was perhaps even more challenging, calling upon the right mix of persuasion, persistence, charm, and flexibility. At the end of the talk, Navare offered some tips for anyone pushing a patch upstream toward an eventual kernel merge.

Negotiating with DisplayPort

Navare started by explaining how a computer (the DP source) negotiates with a display (DP sink) to enable the desired resolution and other properties. When you connect the cable, the sink sends a signal that informs the source about the maximum link-outs and link rates supported by the sink. The source then initiates a DPCD (DisplayPort Configuration Data) read on the sink’s AUX channel, performs a calibration routine, and then launches a handshaking process called DP link training. This configures the main link out of the four possible DP links, each of which has different channel capacities.

“The first phase is clock recovery, sending out the known training packet sequence onto the main link,” said Navare. “The receiver extracts the block information to find if it can work at that linkage. Next comes channel equalization where the receiver tries to understand the link mapping. If these are successful, the link is ready, and the sink is set to receive the data at a specific link-out and link rate.”

Despite all these steps, the link training can still result in a blank or flickering display. “The link training could fail because you haven’t tested the physical capability of the cable until the very end of the process,” said Navare. “There is no way to send this information back to userspace because the commit phase was never expected to fail. It’s a dead end.”

To find a solution, Navare needed to test DP compliance. She used a Unigraf DPR 120 device, which has been certified by VESA. The device sits between the source and sink and requests specific data or video packets to be sent to the DP monitor. “It maps those values onto the AUX channel and monitors all the transactions on the display cables,” said Navare. “It compares that data to the reference values, and if it matches, the device is compliant.”

Navare also needed to improve her understanding of the complex Linux graphics stack. The base level consists of an Intel Integrated Graphics Device layer — a hardware layer for rendering the display and doing graphics acceleration. “On top of this sits the Linux kernel with the i915 Intel graphics driver, which knows how to configure the hardware according to userspace commands,” explained Navare.

At a higher layer within the same Linux kernel subsystem is the DRM (Direct Rendering Manager), which implements the part of the kernel that is common to different hardware specific drivers. “The DRM exposes the APIs to userspace, which sends information down to the hardware to request a specific display for rendering,” said Navare.

She also further explored KMS, which, among other things, scans the RGB pixel data in the plane buffers using the cathode ray tube controller (CRTC), which decides whether to generate DVI, HDMI, or DP signals.

“The CRTC generates the bitstream according to the video timings and sends the data to the encoder, which modifies the bitstream and generates the analog signals based on the connector type,” said Navare. “Then it goes to the connector and lights up the display.”

Once into the project, Navare realized her solution would need to support the new Atomic KMS version of KMS, which implements what Navare described as a two-step process. “When you connect the source with the sink, userspace creates a list of parameters that it wants to change on the hardware, and sends this out to the kernel using a DRM_IOCTL_MODE_ATOMIC call. The first step is the atomic check phase, where it forms the state of the device and its structure for the different DRM mode objects: the plane, CRTC, or connector. It validates the mode requested by userspace, such as 4K, to see if the display is capable.”

If successful, the process advances to the next stage — atomic commit — which sends the data to the hardware. “The expectation is that it will succeed because it has already been validated,” said Navare.

Yet even with Atomic KMS, you can still end up with a blank screen. Navare determined that the problem happened within Atomic KMS between the check and commit stages, where link training occurred.

Navare’s solution was to introduce a new property for the connector called link status. “If a commit fails, the kernel now tags the connector’s link status property as BAD,” she explained. “It sends an HPD event back to userspace, which requests another modeset, but at a lower resolution. The kernel repeats the check and commit, and retrains the link at a lower rate.”

If the test passes, the link status switches to GOOD, and the display works, although at a lower resolution. “Atomic check is never supposed to fail, but link training is the exception because it depends on the physical cable,” said Navare. “The link might fail after a successful modeset because something can go wrong with the cable between initial hookup and test. This patch provides a way for the kernel to send that notification back to userspace. You have to go back to userspace because you have to repeat the process of setting the clock and rate, which you can’t do at the point of failure.”

A few tips on upstreaming

Navare added the new link status connector property to the DRM layer as part of an Upstream I915 driver patch, and submitted it to her manager at Intel. “I said, ‘It’s working now. What can I work on next?’ He replied: ‘Have you sent it upstream?’”

Navare submitted the patch to the public mailing list for the graphics driver, thereby beginning a journey that took almost a year. “It took a long time to convince the community that this would fix the problem,” said Navare. “You get constant feedback and review comments. I think I submitted 15 or 20 revisions before it was accepted. But you keep on submitting patch revisions until you get the ‘reviewed by’ and that’s the day you go party, right?”

Not exactly. The patch then gets merged into an internal DRM tree, where much more testing transpires. It finally gets merged into the main DRM tree where it’s sorted into DRM fixes or DRM next.

“Linus [Torvalds] pulls the patches from this DRM tree on a weekly basis and announces his release candidates,” said Navare. “It goes through the cycle of release candidates for a long time until it’s stable, and it finally becomes part of the next Linux release.”

Torvalds finally approved the patch for merger, and the champagne cork popped.

Linus’s Rules

Navare also offered some general tips for the upstreaming process, which she calls Linus’s Rules. The first rule is “No regressions,” that is, no GPU hangs or blank screens. “If you submit a patch, it should not break something else in the driver, or else the review cycle can get really aggressive,” said Navare. “I had to leverage the community’s knowledge about other parts of the graphics driver.”

The second rule is “Never blame userspace, it’s always kernel’s fault.” In other words, “If the hardware doesn’t work as expected then the kernel developer is the one to blame,” she added.

The problem here is that kernel patches require changes in userspace drivers, which leads to “a chicken and egg situation,” said Navare. “It’s hard to upstream kernel changes without testing userspace… You can’t merge the kernel patches until you’ve tested the userspace, but you can’t merge userspace because the kernel changes have not yet landed. It’s very complicated.”

To prove her solution would not break userspace, Navare spent a lot of time interacting with the userspace community and involving them in testing and submitting patches.

Another rule is that “Feedback is always constructive.” In other words, “don’t take it as criticism, and don’t take it personally,” she said. “I got reviews that said: ‘This sucks. It’s going to break link training, which is very fragile — don’t touch that part of the driver.’ It was frustrating, but it really helped. You have to ask them why they think it’s going to break the code, and how they would fix it.”

The final rule is persistence. “You just have to keep pinging the maintainers and bugging them on IRC,” said Navare. “You will see the finish line, so don’t give up.”

Navare’s Upstream i915 patch can be found here, and the documentation is here. You can watch the complete presentation below.

Connect with the Linux community at Open Source Summit North America on September 11-13. Linux.com readers can register now with the discount code, LINUXRD5, for 5% off the all-access attendee registration price. Register now to save over $300!

Enterprise Container DevOps Steps Up its Game with Kubernetes 1.6

Managing containers isn’t easy. That’s where such programs as Docker swarm mode, Kubernetes, and Mesosphere can make or break your container initiatives. Perhaps the most popular of these, Kubernetes, has a new release, Kubernetes 1.6, that expands its reach by 50 percent to 5,000-node clusters. Conservatively, that means Kubernetes can manage 25,000 Docker containers at once.

In Kubernetes, a node is a virtual machine (VM) or physical server. Some people run as many as 500 containers per node, which means you could manage 2.5 million containers with Kubernetes. 

Read more at ZDNet

How to Learn Unix/Linux

Every month or two, someone asks me how they should go about learning Unix. The short answer is always “use it” or maybe as much as “use it — a lot.”

But the detailed answer includes a lot of steps and a good amount of commitment to spending time working on the command line. I may have learned some of the most important parts of making my way on the Unix command line the first week that I used it back in the early ’80s, but I had to spend a lot of time with it before I was really good. And I’m still learning new ways of getting work done 30+ years later. So here is my detailed answer.

Read more at ComputerWorld

Chain of Command Example

One objective of the chain of command design pattern is to be able to write a bunch of functions that link together and form a chain of alternative implementations. The idea is to have alternatives that vary in their ability to compute a correct answer. If Algorithm 1 doesn’t work, try Algorithm 2. If that doesn’t work, fall back to Algorithm 3, etc. We could, of course, write one giant master function that calls the other functions directly, but chaining keeps each alternative separate.

Perhaps Algorithm 1 has a number of constraints, i.e., it’s fast but only works for a limited kind of input; a cache lookup, for example, is often the first link in the chain precisely because it’s so fast. Algorithm 2 may have a different set of constraints. And Algorithm 3 involves the “British Museum” algorithm.
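As a rough sketch of the same idea in shell terms (every name here is invented for the example), the pattern boils down to trying the cheapest alternative first and falling through to progressively more expensive ones until something succeeds:

#!/bin/bash
# A made-up chain of command: each link either prints an answer and
# succeeds, or returns non-zero so the next link gets a try.

lookup_cache()     { grep -m1 "^$1=" /tmp/answers.cache 2>/dev/null; }  # fastest, but only knows cached inputs
compute_quick()    { [ ${#1} -le 3 ] && echo "$1=quick-result"; }       # cheap, but only handles short inputs
compute_thorough() { sleep 1; echo "$1=thorough-result"; }              # the slow "British Museum" fallback

answer() {
    lookup_cache "$1" || compute_quick "$1" || compute_thorough "$1"
}

answer "abc"     # satisfied by the cache or the quick path
answer "longer"  # falls all the way through to the thorough path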

Read more at DZone

How to List Files Installed From a RPM or DEB Package in Linux

Have you ever wondered where the various files contained inside a package are installed (located) in the Linux file system? In this article, we’ll show how to list all files installed from or present in a certain package or group of packages in Linux.

This can help you to easily locate important package files like configuration files, documentation and more. Let’s look at the different methods of listing files in or installed from a package:
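As a quick taste of the most common methods (nginx is just an example package name here), both dpkg and rpm can list the files belonging to an installed package, and both can do the reverse lookup of which package owns a given file:

$ dpkg -L nginx                   # Debian/Ubuntu: list files installed by a package
$ dpkg -S /etc/nginx/nginx.conf   # which package owns this file?
$ rpm -ql nginx                   # RHEL/CentOS/Fedora: list files in an installed package
$ rpm -qf /etc/nginx/nginx.conf   # which package owns this file?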

Read more at Tecmint

 

Keynote: State of the Union – Jim Zemlin, Executive Director, The Linux Foundation

https://www.youtube.com/watch?v=DNG0zfi8Xpg?list=PLbzoR-pLrL6rm2vBxfJAsySspk2FLj4fM

As the open source community continues to grow, Jim Zemlin, Executive Director of The Linux Foundation, says the Foundation’s goal remains the same: to create a sustainable ecosystem for open source technology through good governance and innovation.

Trivial Transfers with TFTP, Part 1: Why Use It?

Sometimes we find ourselves using technologies that — although we may not realize it — stem from way back in the history of the Internet. The other day, I was using Trivial File Transfer Protocol (TFTP) and looked up its Request For Comments (RFC) only to discover that it’s been around a while longer than I suspected: since June 1981, to be exact. That may not be the 1970s, but FTP and TFTP can certainly be considered founding protocols.

In an unusual twist, TFTP doesn’t use the now almost mandatory Transmission Control Protocol (TCP) for moving its data around. TCP offers resilience through error recovery, but TFTP instead uses the User Datagram Protocol (UDP), presumably because of the “trivial” nature of its file transfers.

The feature set included with TFTP is admittedly quite limited but, make no mistake, it can still be very useful on a local area network (LAN). Unlike the well-known FTP service, which is commonly used for moving files back and forth across the Internet (and which includes successors with encryption, such as sFTP and FTPS, among its family members), TFTP doesn’t even allow you to list directories to see which files are available. If you want to use TFTP, you need to know the filenames, which are sometimes made complex and lengthy to provide a small dose of security through obscurity, before connecting to a server.

Other somewhat surprising limitations, relative to its cousin FTP, include a lack of authentication and no ability to delete or rename files. Admittedly, there may have been improvements since its original design, but the RFC also states that, in essence, the only errors it can report are an incorrectly specified user, a requested file that doesn’t exist, and other access violations.

Now that you’re firmly sold on using this somewhat-deprecated protocol, let’s have a think about what it might be used for.

If you’re creating new machines from images, then TFTP is perfect for bootstrapping a new server with predefined config and a sprinkling of packages. It might also be used during boot time to pull the latest version of a file from a local server so that all clients are guaranteed to be up-to-date with a certain software package.
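To give a feel for how little is involved, pulling a file down from a Linux client is a short interactive session. The server address and filename below are made up, and the exact output varies between tftp client implementations:

$ tftp 10.10.10.10
tftp> get pxe-config.cfg
Received 1482 bytes in 0.3 seconds
tftp> quit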

You may also want to use TFTP — as several vendors do — for firmware updates. Why choose TFTP over FTP or even HTTP, you may ask? Simply because even if you don’t have a TFTP server already up and running, it’s relatively easy to quickly get started. Also the number of parameters required to retrieve a file (and therefore the number of things that can go wrong) is very limited. It tends to work or it doesn’t; there’s little middle ground. This functional simplicity is definitely a bonus.

If you’ve ever maintained switches or routers, then you’ve likely used TFTP either to back up or restore config files or possibly to perform a firmware upgrade. Many of the major vendors still prefer this method, possibly because there’s a feeling of comfort (in relation to security) when moving files around inside a LAN relative to doing so across the Internet.

On a network device, for example, you might encounter a transaction similar to that seen in Listing 1:

Router# copy running-config tftp:
Address or name of remote host []? 10.10.10.10
Destination filename [router_config_backup_v1]? router_config_backup_v1
!!!!
3151 bytes copied in 1.21 secs (2,604 bytes/sec)


Router#

Listing 1: The type of transaction that you may see when backing up a network device’s config via TFTP.

As you can see in this listing, the exclamation marks provide a progress bar of sorts; each one indicates the successful transfer of ten packets.

Installation

Let’s look at how to get a TFTP server up and running.

In the olden days, inetd ruled the roost and was responsible for letting many local services out onto the network so that they could communicate with other users and machines. On the majority of systems that I used, thanks to security concerns, inetd ultimately became xinetd, which closed down more unneeded services by default. Thankfully, however, we can avoid installing xinetd, which was the norm until a few years ago, and instead focus solely on the tftpd package.

On Debian derivatives, installing tftpd is as simple as running:

# apt-get install tftpd

As you can see in Figure 1, inetd is indeed pulled in as a supplementary package, but this is of little consequence: the filesystem footprint remains minuscule.

Figure 1: Installing the “tftpd” package on Debian systems.
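Once the package is installed, a quick sanity check is to confirm that something is listening on UDP port 69, the standard TFTP port. Because inetd launches the TFTP daemon on demand, the listener may well show up as inetd itself, and the exact output will differ from system to system:

$ sudo ss -unlp | grep ':69'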

Trilbys and Fedoras

On Red Hat derivatives, there are a few other considerations. You can get a server running with similarly little effort, but here you opt in to the more advanced xinetd by running a command such as:

# yum install tftp-server xinetd

This pulls down tftp-server along with xinetd, the more sophisticated replacement for inetd. Incidentally, on Debian systems, trying to install tftp-server pulls in tftpd-hpa instead, and you would edit the file /etc/default/tftpd-hpa to configure your service. Look in the Debian-specific README for how to allow file uploads, too.
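For reference, a typical /etc/default/tftpd-hpa looks something like the following; the served directory is only an example, and the defaults vary slightly between Debian releases:

TFTP_USERNAME="tftp"
TFTP_DIRECTORY="/srv/tftp"
TFTP_ADDRESS="0.0.0.0:69"
TFTP_OPTIONS="--secure"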

Back to Red Hat. The description for the tftp-server package is as follows, echoing what we’ve said until now:

“The Trivial File Transfer Protocol (TFTP) is normally used only for booting diskless workstations. The tftp-server package provides the server for TFTP, which allows users to transfer files to and from a remote machine. TFTP provides very little security, and should not be enabled unless it is expressly needed. The TFTP server is run from /etc/xinetd.d/tftp, and is disabled by default.”

If you haven’t used xinetd before, note that it uses individual config files per service. For example, inside the file /etc/xinetd.d/tftp you need to make a couple of small changes to get started. Have a look at Listing 2.

service tftp
{
        socket_type     = dgram
        protocol        = udp
        wait            = yes
        user            = root
        server          = /usr/sbin/in.tftpd
        server_args     = -s /tftp_server_directory
        disable         = yes
        per_source      = 11
        cps             = 100 2
        flags           = IPv4
}

Listing 2: A sample “xinetd” config for a TFTP service.

As you can see in this listing, we will need to change the “disable” setting to “no” if we want this service to start. Additionally, we might need to alter the “server_args” option away from “-s /tftp_server_directory” if we want to serve files from another directory. If you want to allow file uploads then simply add a “-c” option before the aforementioned “-s” on that line.
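Put together, the relevant lines in /etc/xinetd.d/tftp would end up looking something like this, using the upload-enabled variant and the same example directory as before:

server_args     = -c -s /tftp_server_directory
disable         = no

Then restart xinetd so that it rereads its config (systemctl restart xinetd on systemd-based systems, or service xinetd restart on older ones) and the service should be ready for a test transfer.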

In the next article, we’ll look more closely at the main config file and talk about how to enable and disable tftpd services.

Chris Binnie is a Technical Consultant with 20 years of Linux experience and a writer for Linux Magazine and Admin Magazine. His new book Linux Server Security: Hack and Defend teaches you how to launch sophisticated attacks, make your servers invisible and crack complex passwords.

Advance your career in Linux System Administration! Check out the Essentials of System Administration course from The Linux Foundation.

Cloud Foundry Launches Its Developer Certification Program

Cloud Foundry, a massive open source project that allows enterprises to host their own platform-as-a-service for running cloud applications in their own data center or in a public cloud, today announced the launch of its “Cloud Foundry Certified Developer” program.

The Cloud Foundry Foundation calls this “the world’s largest cloud-native developer certification initiative,” and while we obviously still have to wait and see how successful this initiative will be, it already has the backing of the likes of Dell EMC, IBM, SAP and Pivotal (the commercial venture that incubated the Cloud Foundry project). The foundation is partnering with the Linux Foundation to deliver the program through its eLearning infrastructure.

Read more at TechCrunch

Open Source State of the Union

The Linux Foundation is on track to break the 1,000 participating organizations mark some time in 2017 and has set its sights on bringing more new and diverse voices into open source technology through training and outreach efforts. Even as the open source community continues to grow, Executive Director Jim Zemlin said at the Open Source Leadership Summit in February that the Foundation’s goal remains the same: to create a sustainable ecosystem for open source technology through good governance and innovation.

“We think that the job of the Foundation,” Zemlin said, “is to create that sustainable ecosystem. It’s to work with projects that solve a meaningful problem in society, in the market, to create really good communities.”

According to Zemlin, The Linux Foundation has trained more than 800,000 students, many of them at no cost. Training is crucial, he said, so the barrier both to contribute to open source and to use open source projects in more settings is lowered a little every day.

“We are trying to make sure that the projects that we work with have a set of practitioners and developers that can further increase the adoption of that particular code,” he said.

Zemlin is also thrilled that companies not traditionally known for their open source contributions are becoming excited about the opportunities The Linux Foundation and the open source code can provide.

“The thing I’m most proud about that is the fact that companies are coming in now from wholesale new sectors that hadn’t done a lot of open source work in the past,” Zemlin said. “Telecom, automotive, etc., are really learning how to do shared software development, understanding the intellectual property regimes that open source represents, and just greasing the skids for broader flow of code, which is incredibly important if your mission is to create a greater shared technology resource in the world.”

Zemlin was particularly excited about Automotive Grade Linux (AGL), a middleware project that was represented at the Consumer Electronics Show this year. “This is such a sleeper project at The Linux Foundation that’s going to have a huge impact just as more and more production vehicles roll out with the AGL code in it,” Zemlin said. “It’s at CES this year. Daimler announced that they’re joining our Automotive Grade Linux initiative so now we have Toyota, Daimler, and a dozen of the world’s biggest automotive OEMs all working together to create the future automotive middleware and informatics systems that will really define what an automotive cockpit experience looks like.”

The goal for that project, and all the various projects that the different open source foundations are shepherding in 2017, is to create value for both the contributors and the organizations investing their time and money.

“The best projects, the projects that are meaningful and that you can count on for decades to come, are those who have a good developer community solving a really big problem where that code is used to create real value,” Zemlin said. “Value in the form of profit for companies.”

For that value to be created, foundations such as The Linux Foundation must continue their hard work by supporting the developers and other professionals leading their passion projects.

“Ecosystems take real work,” Zemlin said. “This is what foundations do… We create a governance structure where you can pull intellectual property for long-term safe harbor.”

You can watch the complete presentation below:

https://www.youtube.com/watch?v=DNG0zfi8Xpg?list=PLbzoR-pLrL6rm2vBxfJAsySspk2FLj4fM

Learn how successful companies gain a business advantage with open source software in our online, self-paced Fundamentals of Professional Open Source Management course. Download a free sample chapter now!