
Optimizing Graphics Memory Bandwidth with Compression and Tiling: Notes on DRM Format Modifiers

Written by Varad Gautam, Consultant Associate Software Engineer at Collabora.

Over the past few weeks, I have been working for Collabora on plumbing DRM format modifier support across a number of components in the graphics stack. This post documents the work and its implications.

The need for modifiers

Until recently, the FourCC format code of a buffer provided a nearly comprehensive description of its contents. But as hardware design has advanced, placing buffers in non-traditional (nonlinear) fashions has become increasingly important for performance. A given buffer layout can be better suited to particular use cases than others, such as tiled layouts that improve locality of access and enable fast local buffering.

Alternatively, a buffer may hold compressed data that requires external metadata before it can be decompressed and used. The FourCC format code alone then falls short of conveying complete information about how a buffer is placed in memory, especially when buffers are shared across processes or even IP blocks. Moreover, newer hardware generations may add further usage capabilities for a layout.


Figure 1. Intel Buffer Tiling Layouts. Per-tile memory organization for two of the buffer tiling layouts supported by Intel hardware. An XMAJOR 4KB tile is stored as an 8×32 (WxH) array of 16-byte data (row order), while a YMAJOR 4KB tile is laid out as a 32×8 (WxH) array of 16-byte data (column order). CC-BY-ND from Intel Graphics PRM Vol 5: Memory Views.
 

As an example, modern Intel hardware can use multiple layout types for performance depending on the memory access patterns. The Linear layout places data row-adjacent, making the buffer suited for scanline-order traversal. In contrast, the Y-Tiled layout splits and packs pixels in memory so that geometrically close-by pixels fall into the same cache line, reducing misses for x- and y-adjacent pixel data when sampled; however, it could not be used for scan-out buffers before the Skylake generation due to hardware constraints. Skylake also allows single-sampled buffer data to be compressed in hardware before being passed around, cutting bus bandwidth, with an extra auxiliary buffer holding the compression metadata. The Intel Memory Views Reference Manual describes these and more layout orders in depth.

Besides access-pattern related optimizations, a similar need for layout communication arises when propagating buffers through the multimedia stack. The msm vidc video decoder (present on some Samsung and Qualcomm SoCs) arranges decoded frames in a tiled variant of the NV12 fourcc format, NV12_64Z32. With a modifier associated with the buffer, the GPU driver can use this information to program its texture samplers accordingly, as is the case with the a3xx, using the hardware path to avoid explicitly detiling the frame.
 


Figure 2. Buffer Layout for NV12 Format with Tiled Modifier. The data is laid out in 64×32 (WxH) tiles similar to NV12, but the tiles appear in a zigzag order instead of a linear one.
 

To ease this situation, an extra 64-bit ‘modifier’ field was added to DRM’s kernel modesetting API to carry this vendor-specific buffer layout information. These modifier codes are defined in the drm_fourcc.h header file.

Current state of affairs across components

With the AddFB2 ioctl, DRM lets userspace attach a modifier to the buffers it imports. For userspace, libdrm support is also planned to allow probing kernel support and the scanout-ability of a given buffer layout, now representable as a fourcc+modifier combination. A modifier-aware GetPlane ioctl analog, ‘GetPlane2’, is up for consideration. With recent patches from Intel, GBM also becomes capable of allocating modifier-abiding surfaces, for Wayland compositors and the X11 server to render to.

Collabora and others recently published an EGL extension, EGL_EXT_image_dma_buf_import_modifiers, which makes it possible to create EGLImages from dmabuf buffers carrying a format modifier via eglCreateImageKHR; these can then be bound as external textures for the hardware to render into and sample from. The extension also introduces format and modifier query entrypoints, which make buffer constraint negotiation easier: a compositor can learn the GL capabilities up front instead of resorting to trial-and-error guesswork and bailout paths.

Mesa provides an implementation for the extension, along with driver support on Skylake to import and sample compressed color buffers. The patchset discussions can be found at mesa-dev: ver1 ver2.

The Wayland protocol has been extended to allow the compositor to advertise platform supported format modifiers to its client applications, with Weston supporting this.

With the full end-to-end client-to-display pipeline now supporting tiled and compressed modes, users can transparently benefit from the reduced memory bandwidth requirements.

Further reads

Some GPUs find the 64-bit modifier too restrictive and require more storage to convey layout and related metadata. AMDGPU, for example, associates 256 bytes of information with each texture buffer to describe its layout.

To standardize buffer allocation and cross-platform sharing, the Unix Device Memory Allocation project is being discussed.

Thanks to Google for sponsoring a large part of this work as part of ChromeOS development.

Linux Foundation Releases Business Open Source Basics Ebook

Want to know how your business can get the most from open source? This free ebook can help.

Developers know that open source is great. Even Microsoft is now on the open-source bandwagon. But, outside of the IT department, many companies don’t understand why and how open source can help their businesses. The Linux Foundation has the answers you need in its new free Open Source Software Basics ebook.

Read more at ZDNet

How to Install pandom: A True Random Number Generator for Linux

This tutorial explains how to install pandom: a timing jitter true random number generator maintained by ncomputers.org. The built-in Linux kernel true random number generator provides low throughput under modern circumstances, for example on personal computers with solid state drives (SSDs) and on virtual private servers (VPS). This problem is becoming more common in Linux deployments because of the continuously increasing need for true random numbers, mainly for diverse cryptographic purposes.

This tutorial applies to amd64/x86_64 Linux kernels, version 2.6.9 and newer.

Read more at HowToForge

Understanding the Difference Between sudo and su

In one of our earlier articles, we discussed the ‘sudo’ command in detail. Toward the end of that tutorial, there was a brief mention of another, similar command: ‘su’.

In this article, we will discuss in detail the ‘su’ command as well as how it differs from the ‘sudo’ command. The main work of the su command is to let you switch to some other user during a login session. In other words, the tool lets you assume the identity of some other user without having to logout and then login (as that user).

Read more at HowtoForge

How to Integrate Video Streaming Into Your C or C++ Application Using Nex Gen Media

The Nex Gen Media Server is a small-footprint shared library that allows users to easily build video media and telephony applications. It supports several popular streaming protocols such as RTMP, RTSP and Apple’s HTTP Live, and can capture live video streams and adapt them so they can be received by another type of device. For instance, using NGMS you could capture an HD video feed and convert it so that it can be received by an iPhone over a 3G connection. This makes it a particularly useful tool for developers, so let’s take a closer look at how you can integrate the NGMS API to control streaming features directly from a C application:

 

1. Download and read the NGMS user guide

As always, the first step of any process lies in understanding its backbone. To that end, you’ll need to download and read the NGMS user guide from http://ngmsvid.com/ngms.php and its respective API reference guide from http://ngmsvid.com/develop.php before you begin coding. These cover the basics of the library and its main utilities. Then, proceed to download the NGMS package for Linux. Once you’ve done that, unzip its contents into the directory of your choice.

2. Set up the application

In order for NGMS to be directly integrated into an application, you’ll need to include ngms/include/ngmslib.h in your code. You’ll also have to link against the bundled libraries ngms/lib/libngms.so and ngms/lib/libxcode.so. Be aware that libngms.so depends on libxcode.so, so be sure to specify both in the linker options.

3. Create a simple makefile

Here is an example of what things should look like:
#Example Makefile
CC=gcc
CFLAGS=-ggdb
INCLUDES+= -I ngms/include
LDFLAGS+= -L ngms/lib -lngms -lxcode -lcrypto

all: myapp

%.o: %.c
	$(CC) $(CFLAGS) $(INCLUDES) -o $@ -c $<

myapp: myapp.o
	$(CC) -fpic -o myapp myapp.o $(LDFLAGS)

And here is the source to myapp.c. 

/**
 * Example myapp application
 */
typedef unsigned int uint32_t;
typedef unsigned long long uint64_t;

#include <stdio.h>
#include "ngmslib.h"

int main(int argc, char *argv[])
{
    NGMSLIB_STREAM_PARAMS_T ngmsConfig;
    NGMS_RC_T returnCode;

    returnCode = ngmslib_open(&ngmsConfig);
    if (NGMS_RC_OK != returnCode) {
        fprintf(stderr, "ngmslib_open failed\n");
        return -1;
    }

    ngmsConfig.inputs[0] = "mediaTestFile.mp4";
    ngmsConfig.output = "rtp://127.0.0.1:5004";

    returnCode = ngmslib_stream(&ngmsConfig);
    if (NGMS_RC_OK != returnCode) {
        fprintf(stderr, "ngmslib_stream failed\n");
    }

    ngmslib_close(&ngmsConfig);
    return 0;
}

It’s worth mentioning that the code uses the NGMSLIB_STREAM_PARAMS_T struct type to control the NGMS library. You first call ngmslib_open to “preset” the struct with defaults; after that you can fill out whatever options you’d like, and then call ngmslib_stream to create the output video.

4. Open the stream in VLC player and test it out

This one’s easy. Just do:

VLC Player -> Open Network rtp://@:5004

Now you can stream a media file directly from your application. Since ngmslib_stream is what’s called a blocking operation, you can interrupt the stream by calling ngmslib_close from another thread, and the ngmslib_stream call will exit.

5. Add in the final touches

You can also add support for an embedded Flash player by adding the following lines of code:

ngmsConfig.rtmplive = "1935";
ngmsConfig.live = "8080";

Or, instead of playing a file, you might want to change the input so that it’s a live video stream. You can create two separate instances of the application, one of which will output the video to port 5006, while the other will capture video on port 5006 and output it to port 5004. It looks something like this:

//ngmsConfig.inputs[0] = "mediaTestFile.mp4";
ngmsConfig.inputs[0] = "rtp://127.0.0.1:5006";
ngmsConfig.strfilters[0] = "type=m2t";

In conclusion, it is fairly easy to add video streaming support to your own application. The code above was written in C, but C++ developers can adapt it by wrapping all the calls to ngmslib in extern "C". Java developers can also use the library, but that requires building a JNI interface and wrapping each call down to NGMS. Still, the NGMS library is quite useful, with potential applications that include building your own video streaming client.

5 Tips on Enterprise Open Source Success From Capital One, Google, and Walmart

Some of the world’s largest and most successful companies gathered this week at Open Source Leadership Summit in Lake Tahoe to share best practices around open source use and participation. Companies from diverse industries — from healthcare and finance, to telecom and high tech — discussed the strategies and processes they have adopted to create business success with open source software.

Below are five lessons learned, taken from a sampling of talks by engineers and community managers at Capital One, Google, and Walmart, all of which have adopted a strategic approach to open source.

1. Give developers freedom to contribute

Walmart has worked hard to develop a culture that embraces open source. Key to this cultural transformation has been convincing managers that it’s beneficial to devote developer resources to open source contributions — and to give developers the freedom to contribute however they wish.

“We’ve found that the team members that have a choice of what (open source projects) to work on are the most passionate about really diving in,” said Megan Rossetti, senior engineer, cloud technology, at Walmart.

2. Always be evaluating open source options

Walmart has also created an open source management structure and process to help institutionalize and enable open source participation. The company has an internal open source team to find and shepherd new open source projects and contributions.

“As we onboard new projects, we are always evaluating where does it make sense to bring in open source and to contribute back to open source,” said Andrew Mitry, a senior distinguished engineer at Walmart.

3. Use the right license

Capital One has also made significant strides to become a good open source partner in a way that doesn’t compromise customers or violate financial industry regulations. The company sees a great benefit in releasing open source projects that encourage broad use and participation from other companies. They’ve learned that this means projects must be structured in a way that encourages openness.

“If you want to make sure your code can be used, you really should pick a license written by someone who knows what they’re doing, preferably one of the ones approved by the FSF (Free Software Foundation) or OSI (Open Source Initiative),” said Jonathan Bodner, lead software engineer, technology fellows at Capital One.

“Also, if you want to encourage companies to join the community for your software you probably should pick one of the permissive licenses.”

4. Lead from behind

Kubernetes, an open source project hosted by the Cloud Native Computing Foundation, is one of the fastest growing open source communities on GitHub. Despite massive participation, the project always needs good leaders – those willing to “chop wood and carry water,” said Sarah Novotny, head of the Kubernetes Community Program at Google.

“Being a leader in the open source community is not always about control and it is not always about making sure you have the most commits or the only viewpoint or the only direction,” Novotny said. “We need people willing to do work that is not as glamorous, that’s not as much in the fore. This is very much leadership from behind… It’s making sure that you have influence in the community that is longstanding and promotes the health of the project long term.”

5. Let go of IP

By releasing its Kubernetes container orchestration technology as open source and donating it to The Linux Foundation (under CNCF), Google opened up the project to outside contribution and increased enterprise participation. That, in turn, helped the technology become ubiquitous and profitable for Google which built cloud services on top of the project. Letting go of the project’s intellectual property was ultimately what created that success, said Craig McLuckie, CEO and founder of Heptio, and founder of Kubernetes at Google.

“Nothing poisons an ecosystem faster than playing heavy with trademark,” McLuckie said. “One of the first things we did with Kubernetes was donate it to the Linux Foundation to make it very clear that we were not going to play those games. And in many ways that actually opened up the community…

“It would have really held us back if we had held the IP. If we’d held that trademark and copyright on the project it would have hurt us.”

 

Want to learn more about open source in the enterprise? Recorded keynote talks from Open Source Leadership Summit 2017 are available now on YouTube. Watch now! 

 

AT&T, Intel, Google, Microsoft, Visa, and More to Speak at Open Networking Summit 2017

The Linux Foundation has announced keynote speakers and session highlights for Open Networking Summit, to be held April 3-6, 2017 in Santa Clara, CA.

ONS promises to be the largest, most comprehensive and most innovative networking and orchestration event of the year. The event brings enterprises, carriers, and cloud service providers together with the networking ecosystem to share learnings, highlight innovation and discuss the future of open source networking.

Speakers and attendees at Open Networking Summit represent the best and brightest in next-generation open source networking and orchestration technologies.

ONS keynote speakers

Martin Casado, a general partner at the venture capital firm Andreessen Horowitz and co-founder of Nicira (acquired by VMware in 2012) will give a keynote on the future of networking. (See our Q&A with Casado for a sneak preview.)

Other keynote speakers include:

  • John Donovan, Chief Strategy Officer and Group President – AT&T Technology and Operations with Andre Fuetsch, President AT&T Labs and Chief Technology Officer at AT&T

  • Justin Dustzadeh, VP, Head of Global Infrastructure Network Services, Visa

  • Dr. Hossein Eslambolchi, Technical Advisor to Facebook, Chairman & CEO, 2020 Venture Partners

  • Albert Greenberg, Corporate Vice President Azure Networking, Microsoft

  • Rashesh Jethi, SVP Engineering at Amadeus IT Group SA, the world’s leading online travel platform

  • Sandra Rivera, Vice President Datacenter Group, General Manager, Network Platforms Group, Intel Corporation

  • Amin Vahdat, Google Fellow and Technical Lead for Networking, Google

ONS session speakers

Summit sessions will cover the full scope of open networking across enterprise, cloud and service providers. Topics that will be explored at the event include container networking, software-defined data centers, cloud-native application development, security, network automation, microservices architecture, orchestration, SDN, NFV and so much more. Look forward to over 75 tutorials, workshops, and sessions led by networking innovators.

Session highlights include:

  • Accelerated SDN in Azure, Daniel Firestone, Microsoft

  • Troubleshooting for Intent-based Networking, Joon-Myung Kang, Hewlett Packard Labs

  • Beyond Micro-Services Architecture, Larry Peterson, Open Networking Lab

  • Combining AI and IoT. New Industrial Revolution in our houses and in the Universe, Karina Popova, LINK Mobility

  • Rethinking NFV: Where have we gone wrong, and how can we get it right?, Scott Shenker, UC Berkeley

View the full schedule with many more sessions across six tracks.

Linux.com readers can register now with the discount code, LINUXRD5, for 5% off the registration price. Register to attend by February 19 and save more than $800 over late registration pricing.

3 Good Command-Line Audio and Graphics Apps for Linux

It is often faster to use command-line apps to play audio files and preview images than to futz around launching and using a graphical application, and you can use them in scripts. Come along to learn about MOC and SoX for playing audio files, and feh for viewing image files from the Linux command line.

MOC, Music on Console

MOC plays audio files from an X terminal, and from a console with no X windows, such as a headless server with no graphical environment. MOC supports most audio file formats including OGG, WAV, FLAC, MIDI, MP4, and MP3. Note that the correct command is mocp and not moc. moc is a Qt command, the Meta Object Compiler, so if you run it you’ll get a “No relevant classes found” error. The simplest use is to start it and name a directory that contains audio files:

$ mocp sounds/ambient/

You’ll see something like Figure 1, a nice Midnight Commander-style two-pane file manager where you can navigate all over your filesystem and find audio files to play. Figure 1 has a playlist in the right pane; add files from the left pane to your playlist by highlighting them and pressing the a key. Press V to save your playlist in the current directory.

Figure 1: Moc

By default MOC plays all the files in the current directory. Use the Tab key to toggle between the file list and your playlist. Navigate up and down with the arrow keys, and press Enter to select a file to play. These are the basic commands, and note that they are case-sensitive:

  • < and > control the volume level
  • p or spacebar toggle pause/play
  • n plays the next file, b plays the previous file
  • S toggles shuffle
  • Right arrow key seeks forward and Left arrow key seeks backward
  • q detaches from the MOC interface and returns to your prompt, and your audio keeps playing
  • mocp returns to the MOC interface from your command line
  • Q from the MOC interface quits MOC
  • mocp -x from any command prompt closes MOC

MOC commands are different in the MOC interface than on your command line. man mocp details the commands that you run on the command line, and pressing h shows a list of commands in the MOC interface.

Your personal MOC directory is ~/.moc. The example configuration file is in /usr/share/doc/moc/examples/config.example.gz. You can extract and copy this example file to ~/.moc/config, or just copy the bits you want to use. I use the MusicDir option to set my default playlist, and you may set a default directory instead. List your audio directories in the Fastdir options for fast switching:

MusicDir = /home/carla/.moc/list2.m3u

Fastdir1 = /home/carla/tutorials
Fastdir2 = /home/carla/sessions
Fastdir3 = /home/carla/music-for-dull-meetings

Start MOC in your MusicDir with mocp -m, or press m in the MOC interface.

Press Shift+1, Shift+2 and so on to change to your various Fastdirs.

MOC has customizable theming and keymaps; see man mocp and the help in the MOC interface to see many more options and controls.

Play One Audio File with SoX

Good old SoX (Sound eXchange) has been around forever and contains a multitude of capabilities. If MOC has an easy way to play just one file I have not found it, so I use SoX for this. This example shows how to play a single file, and shows how to play a file with whitespace in the filename by enclosing it in quotation marks:

$ play "quake2/music/Sonic Mayhem - Quake II/11.ogg"

Just as I do with image files (see the next section), I use locate and grep to find audio files. Then it’s a quick select > middle-click paste to play the file with SoX.

feh X Terminal Image Viewer

I use feh to quickly preview images. You need to be in a graphical session, that is, using an X terminal like GNOME Terminal or Konsole. I have over a thousand images on my main PC, and I rely on locate and grep to find what I want. It’s a lot faster to view the image with feh than to open a graphical app and wander through it until I find my image. Like the photo of my little cat Molly in Figure 2:

$ locate -i molly|grep rock
/home/carla/Pictures/molly-on-rocks-small.jpeg
$ feh /home/carla/Pictures/molly-on-rocks-small.jpeg

Figure 2: Molly.

You can also open your images in editors like Inkscape and Gimp this way, for example inkscape /home/carla/Pictures/molly-on-rocks-small.jpeg. In feh, right-click on your image to open a menu full of useful options: rotate, set image as background, delete, image size and type, and several others.

Give feh a directory name to launch a slideshow of all images in the directory, and then click on each image to advance to the next image. feh displays them at their native resolutions, so right-click on any image and check Options > Fullscreen to shrink large images to fit your screen. Or pass in options in your command. This example stops the slideshow after displaying all images once, pauses for four seconds on each image, automatically scales large images to fit your screen, and prints the filename on each image:

$ feh --cycle-once -D 4 --scale-down -d  Pictures/2016-april/

Create a montage of all images in a directory with 100-pixel thumbnails:

$ feh --thumbnails --thumb-height 100 --thumb-width 100 --index-info "%n\n%wx%h" Pictures/2016-april/

Use the right-click menu to save your montage in your current directory (not your images directory).

Open all images in the directory in their own windows (don’t do this with a large number of images!):

$ feh -w image/directory

You can enter a list of filenames in a text file, one per line, and then pass this list to feh:

$ feh -f mylist

man feh is quite good; it’s well-organized and clear, and details all of feh’s operations which include keyboard shortcuts, mouse shortcuts, and randomizing background images.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

Verified Boot: From ROM to Userspace

Amid growing attacks on Linux devices, the 2016 Embedded Linux Conference demonstrated a renewed focus on security. One well-attended presentation at ELC Europe covered the topic of verified boot schemes. In this talk, Marc Kleine-Budde of Pengutronix revealed the architecture and strategies of a recently developed verified boot scheme for a single-core, Cortex-A9 NXP i.MX6 running on the RIoTboard SBC.

The stack works on most i.MX SoC models, and its structure and most of its components are transferable to other ARM processors. “It’s easy to do this yourself using a number of open source stacks,” said Kleine-Budde, who is also the Linux kernel project’s CAN driver maintainer.

Verified boot is “a complex Linux stack in userspace” designed to detect if any stage of the boot process has been tampered with, he explained. Considering that embedded devices typically lack the advanced security software and physical safeguards found in the server world, it’s one of the most effective — and cost-effective — security strategies available.

“If you can change the bootloader of an embedded system you can have complete control over it,” said Kleine-Budde. “If an attacker wants to root your system, they first try to put in their own bootloader, usually from an unprotected source like SD, USB, or eMMC. For our customers, we wanted to protect the bootloader, kernel, file system, and even read-write data.”

On these ARM systems, ROM code verifies the bootloader before it launches it. On i.MX SoCs this is done by signing the bootloader with a proprietary tool and a public key. The corresponding certificate is burned into the i.MX itself to verify the bootloader. The ROM code decides where to boot from and then passes off to a bootloader, such as U-boot or in this case barebox, which then loads the kernel device trees and root file system.

For the initial ROM stage, the i.MX SoCs use a proprietary ROM code extension called high assurance boot, or HAB for short, which in turn taps standard SHA and RSA cryptographic algorithms. Barebox has another key that verifies the image of the kernel and the InitRAMFS (initial RAM file system).

“In the boot process, the ROM code runs first,” said Kleine-Budde. “In production, you burn the fuses into your SoC, which verify the public key that comes with the bootloader. ROM code verifies that the pubkey is correct, and then the pubkey verifies the signature, which goes over the bootloader itself. A second pubkey is used to verify the kernel stage.”

FIT-Image, ext4, and UBIFS

For user space verification, Pengutronix used a FIT-Image, which consists of kernel, device-tree(s), and InitRAMFS. “This is all included in one image, and can be used in several configurations, so you can use one FIT-Image on a variety of boards,” said Kleine-Budde. “If your bootloader knows which board you have, it can pick the right configuration from the FIT-Image, which can be stored on untrusted media.”

The bootloader checks against the bootloader’s public key to see if the FIT-Image configuration is valid. To do this, it checks the signature, and then analyzes three hashes for kernel, device-tree, and InitRAMFS.
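For readers unfamiliar with FIT, a minimal image source in the U-Boot .its syntax (which mkimage consumes; a FIT-aware bootloader such as barebox can then verify and boot the result) looks roughly like the sketch below. The file names, load addresses, and key name are illustrative, not taken from the talk:

```
/dts-v1/;

/ {
    description = "Signed kernel + device tree + InitRAMFS";

    images {
        kernel-1 {
            description = "Linux kernel";
            data = /incbin/("zImage");
            type = "kernel";
            arch = "arm";
            os = "linux";
            compression = "none";
            load = <0x10008000>;
            entry = <0x10008000>;
            hash-1 { algo = "sha256"; };
        };
        fdt-1 {
            description = "Board device tree";
            data = /incbin/("imx6dl-riotboard.dtb");
            type = "flat_dt";
            arch = "arm";
            compression = "none";
            hash-1 { algo = "sha256"; };
        };
        ramdisk-1 {
            description = "InitRAMFS";
            data = /incbin/("initramfs.cpio.gz");
            type = "ramdisk";
            arch = "arm";
            os = "linux";
            compression = "gzip";
            hash-1 { algo = "sha256"; };
        };
    };

    configurations {
        default = "conf-1";
        conf-1 {
            kernel = "kernel-1";
            fdt = "fdt-1";
            ramdisk = "ramdisk-1";
            /* one signature over the hashes of all three images */
            signature-1 {
                algo = "sha256,rsa2048";
                key-name-hint = "dev";
                sign-images = "kernel", "fdt", "ramdisk";
            };
        };
    };
};
```

The signature node in the configuration is what the bootloader verifies with its public key; the per-image sha256 hashes it covers are the three hashes described above.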

Once verified, the kernel then secures each file in the root file system, which must be able to support extended attributes. “We used the ext4 file system with extended attributes,” said Kleine-Budde. “You can use a flash chip or naked NAND chip with UBI and UBIFS, or you can use block media storage such as eMMC.”

To verify the file system, Pengutronix employed the mainline kernel’s IMA (Integrity Measurement Architecture), which uses a hash for every file, thereby indicating if the file has been modified. The content is hashed, and then stored as an extended attribute, in this case security-ima.

“If attackers gain access to this system, they can modify a file, recalculate the hash and write it to the media,” said Kleine-Budde. “To avoid this, we make use of the kernel’s EVM (Extended Verification Module), and create a signature over the extended attributes. This is done on your development PC during image creation. You take a private key, create the root file system, and sign every file and extended attribute. It contains the hash, so you can be sure the file and the checksum have not been modified. The EVM-signature is then verified by the kernel’s public key.”

Protecting read/write storage with a SHA-HMAC Secret

Pengutronix’s customers also wanted to be able to verify read/write media. “For this, we used EVM with SHA-HMAC, a clever way of hashing things that lets you guarantee integrity and authentication,” said Kleine-Budde. HMACs can be verified faster than RSA, thereby reducing boot time, he added.

“With SHA-HMAC, you need a different ’Shared Secret’ for each system because if an attacker opens one system and modifies it, and HMAC gets used, he could transfer a modified file from one system to another,” said Kleine-Budde. “Once the kernel touches every file, it will recalculate the HMAC-based verification and write it down. The attacker cannot recalculate the HMAC unless he has the EVM’s Secret.”

The Secret is generated on the i.MX SoC. “If you have created a properly signed bootloader, you have access to a unique key, which is unique to every individual SoC,” said Kleine-Budde. “The SoC’s fuses contain certification hashes that correspond to the secret key used to sign the bootloader. You can only burn the fuses once, and there’s even a separate fuse that can disallow the burning of other fuses. You sign the bootloader when you build your BSP.”

The unique key is used to encrypt a shared Secret for the EVM, stored on media. “You can decrypt only on the system you used to encrypt it, and only if you have a properly signed bootloader,” said Kleine-Budde. “You then use InitRAMFS to decrypt the blob and obtain access to its EVM-Secret, which is required if you want to do read/write. This checks to see if EVM, IMA, and contents are all correct.”

About 22 minutes into the video, Kleine-Budde answered about 10 minutes of questions about whether the blob was properly secured. Kleine-Budde stuck to his original answer: “The blob is encrypted with a unique key. You cannot decrypt the blob unless you have a unique key.”

He then explained how he used eCryptfs for file system level encryption. “eCryptfs works on both NAND and UBIFS,” he said. “Every file in the encrypted file system corresponds to a file in the unencrypted system. File names and content are encrypted, but the directory layout and permissions are clear text. eCryptfs requires a different shared Secret for each system. You do not need IMA/EVM because integrity is provided by GCM and AES within the i.MX crypto engine.”

Finally, Kleine-Budde demonstrated the verified boot process running Linux 4.0.9 with patches on the i.MX6-based RIoTboard. He also passed on some lessons learned for others attempting to create a similar ARM-based trusted boot stack.

“Keep your packages in two configurations: one secure package with production keys and another that people can play with,” said Kleine-Budde. Similarly, one production bootloader and kernel/InitRAMFS configuration should reboot upon discovery of an attack, while another simply displays a notification. He also noted that the combination of UBIFS with IMA/EVM is sensitive to sudden power loss; this issue is fixed in the upcoming Linux v4.10 release.

Kleine-Budde acknowledged that verified boot extends your boot time, in this case by about 10 percent. The overhead is worth it, however, when you consider the alternative. “There are a lot of Linux targets on the Internet that are attractive for hacking,” said Kleine-Budde.

You can watch the complete presentation below:

https://www.youtube.com/watch?v=lkFKtCh2SaU&list=PLbzoR-pLrL6pRFP6SOywVJWdEHlmQE51q

Embedded Linux Conference + OpenIoT Summit North America will be held on February 21 – 23, 2017 in Portland, Oregon. Check out over 130 sessions on the Linux kernel, embedded development & systems, and the latest on the open Internet of Things.

Linux.com readers can register now with the discount code, LINUXRD5, for 5% off the attendee registration price. Register now>>

What’s the Difference Between NFV Automation and NFV Orchestration?

NFV automation is the ability to transfer manual network configuration to technology; NFV orchestration creates the deployment and automation blueprint.

NFV automation and NFV orchestration have overlapping and interrelated capabilities, which are essential to the deployment of virtual network services. Both automation and orchestration are part of the critical management, automation and orchestration, or MANO, layer. The lack of MANO standards has hindered network functions virtualization deployments by many leading service providers.

Read more at TechTarget