In its simplest terms, an IoT solution is a collection of sensors combined with a centralized management application that lets the user modify the environment in some way. Examples include monitoring the temperature of your home and adjusting it based on occupancy, or monitoring the progress of an assembly line and validating manufacturing tolerances.
If you’ve recognized that the communications between these devices benefit from standardization, and could be prone to attack, then you’re asking the right questions. Today, there are a variety of IoT communication protocols and standards designed to simplify IoT designs and help vendors innovate quickly. The following list is far from exhaustive, but it gives an overview of some of the popular choices as well as an indication of their security state.
RightScale’s ‘State of the Cloud’ survey also shows hybrid cloud beating public-only and private-only clouds, and Microsoft Azure making major inroads.
If there’s one devops tool that’s out in front with cloud-conscious companies, it’s Docker. Thirty-five percent of respondents were already using it, and 32 percent had plans to do so. These numbers outstripped those of Chef, Puppet, Ansible, Salt, Mesosphere, and Rancher.
Written by Varad Gautam, Consultant Associate Software Engineer at Collabora.
Over the past few weeks, I have been working for Collabora on plumbing DRM format modifier support across a number of components in the graphics stack. This post documents that work and its implications.
The need for modifiers
Until recently, the FourCC format code of a buffer provided a nearly comprehensive description of its contents. But as hardware design has advanced, laying buffers out in non-traditional (non-linear) ways has become increasingly important for performance. A given buffer layout can be better suited to particular use cases than others, such as tiled formats that improve locality of access and enable fast local buffering.
Alternatively, a buffer may hold compressed data that requires external metadata to decompress before the buffer is used. The FourCC format code alone then falls short of conveying the complete information about how a buffer is placed in memory, especially when buffers are shared across processes or even IP blocks. Moreover, newer hardware generations may add further usage capabilities for a layout.
Figure 1. Intel Buffer Tiling Layouts. Memory organization per tile for two of the Intel-hardware-supported buffer tiling layouts. An XMAJOR 4KB tile is stored as an 8×32 (W×H) array of 16-byte data in row order, while a YMAJOR 4KB tile is laid out as a 32×8 (W×H) array of 16-byte data in column order. (CC-BY-ND, from Intel Graphics PRM Vol 5: Memory Views.)
As an example, modern Intel hardware can use multiple layout types for performance depending on the memory access patterns. The Linear layout places data in row-adjacent format, making the buffer suited for scanline-order traversal. In contrast, the Y-Tiled layout splits and packs pixels in memory such that the geometrically close-by pixels fit into the same cache line, reducing misses for x- and y- adjacent pixel data when sampled – but couldn’t be used for scan-out buffers before the Skylake generation due to hardware constraints. Skylake also allows the single-sampled buffer data to be compressed in-hardware before being passed around to cut down bus bandwidth, with an extra auxiliary buffer to contain the compression metadata. The Intel Memory Views Reference Manual describes these and more layout orders in depth.
Besides access-pattern related optimizations, a similar need for layout communication arises when propagating buffers through the multimedia stack. The msm vidc video decoder (present on some Samsung and Qualcomm SoCs) arranges decoded frames in a tiled variant of the NV12 FourCC format, NV12_64Z32. With a modifier associated with the buffer, the GPU driver can use that information to program the texture samplers accordingly, as is the case with a3xx hardware, taking the hardware path and avoiding an explicit detiling of the frame.
Figure 2. Buffer Layout for NV12 Format with Tiled Modifier. The data is laid out as 64×32 (W×H) tiles similar to NV12, but the tiles appear in zigzag order instead of linearly.
To ease this situation, an extra 64-bit ‘modifier’ field was added to DRM’s kernel modesetting interface to carry this vendor-specific buffer layout information. The modifier codes are defined in the drm_fourcc.h header file.
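As a concrete illustration (not from the original post), here is a minimal C sketch that prints a few of the modifier codes defined in drm_fourcc.h; the build command in the comment assumes the libdrm development package is installed. Each 64-bit token packs a vendor identifier in its top byte and a vendor-specific layout code in the remaining bits:

/* modifier_codes.c - print a few DRM format modifier values.
 * Assumed build: gcc modifier_codes.c -o modifier_codes $(pkg-config --cflags libdrm) */
#include <stdio.h>
#include <inttypes.h>
#include <drm_fourcc.h>

int main(void)
{
    /* The top 8 bits of each modifier name the vendor; the remaining 56 bits
     * identify the vendor-specific layout (see the fourcc_mod_code() macro). */
    printf("I915_FORMAT_MOD_X_TILED           = 0x%016" PRIx64 "\n",
           (uint64_t)I915_FORMAT_MOD_X_TILED);
    printf("I915_FORMAT_MOD_Y_TILED           = 0x%016" PRIx64 "\n",
           (uint64_t)I915_FORMAT_MOD_Y_TILED);
    printf("DRM_FORMAT_MOD_SAMSUNG_64_32_TILE = 0x%016" PRIx64 "\n",
           (uint64_t)DRM_FORMAT_MOD_SAMSUNG_64_32_TILE);
    return 0;
}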
Current state of affairs across components
With the AddFB2 ioctl, DRM lets userspace attach a modifier to the buffers it imports. For userspace, libdrm support is also planned to allow probing the kernel’s support for, and the scanout-ability of, a given buffer layout, now representable as a FourCC+modifier combination. A modifier-aware analog of the GetPlane ioctl, ‘GetPlane2’, is up for consideration. And with recent patches from Intel, GBM becomes capable of allocating modifier-abiding surfaces too, for Wayland compositors and the X11 server to render to.
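To sketch what that kernel-side import looks like in practice (an illustrative example, not code from the post): given a DRM device fd and a GEM handle for a buffer that really is Y-tiled, libdrm’s drmModeAddFB2WithModifiers() helper wraps the AddFB2 ioctl and attaches the modifier to the new framebuffer. The handle, pitch, and dimensions below are placeholders:

/* create_tiled_fb.c - create a KMS framebuffer with an explicit format modifier. */
#include <stdint.h>
#include <stdio.h>
#include <xf86drm.h>
#include <xf86drmMode.h>
#include <drm_fourcc.h>

int create_tiled_fb(int drm_fd, uint32_t gem_handle, uint32_t width,
                    uint32_t height, uint32_t pitch, uint32_t *fb_id)
{
    uint32_t handles[4]   = { gem_handle };
    uint32_t pitches[4]   = { pitch };
    uint32_t offsets[4]   = { 0 };
    uint64_t modifiers[4] = { I915_FORMAT_MOD_Y_TILED };

    /* DRM_MODE_FB_MODIFIERS tells the kernel that the modifiers[] array is valid. */
    int ret = drmModeAddFB2WithModifiers(drm_fd, width, height,
                                         DRM_FORMAT_XRGB8888,
                                         handles, pitches, offsets,
                                         modifiers, fb_id,
                                         DRM_MODE_FB_MODIFIERS);
    if (ret)
        fprintf(stderr, "AddFB2 with modifiers failed: %d\n", ret);
    return ret;
}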
Collabora and others recently published an EGL extension, EGL_EXT_image_dma_buf_import_modifiers, which makes it possible to create EGLImages from dmabuf buffers carrying a format modifier via eglCreateImageKHR; these images can then be bound as external textures for the hardware to render into and sample from. The extension also introduces format and modifier query entrypoints, which make buffer constraint negotiation easier: knowing the GL capabilities beforehand avoids the trial-and-error guesswork and bail-out scenarios that compositors otherwise face.
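For the client side, here is a minimal sketch (again illustrative; the dmabuf fd, stride, and dimensions are placeholders, and it assumes EGL headers recent enough to declare the extension’s tokens) of importing a single-plane dmabuf together with its modifier into an EGLImage:

/* import_dmabuf.c - wrap a single-plane dmabuf plus modifier in an EGLImage. */
#include <stdint.h>
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <drm_fourcc.h>   /* for DRM_FORMAT_XRGB8888 */

EGLImageKHR import_dmabuf(EGLDisplay dpy, int dmabuf_fd, uint32_t width,
                          uint32_t height, uint32_t stride, uint64_t modifier)
{
    PFNEGLCREATEIMAGEKHRPROC create_image =
        (PFNEGLCREATEIMAGEKHRPROC)eglGetProcAddress("eglCreateImageKHR");
    if (!create_image)
        return EGL_NO_IMAGE_KHR;

    const EGLint attribs[] = {
        EGL_WIDTH,                     (EGLint)width,
        EGL_HEIGHT,                    (EGLint)height,
        EGL_LINUX_DRM_FOURCC_EXT,      DRM_FORMAT_XRGB8888,
        EGL_DMA_BUF_PLANE0_FD_EXT,     dmabuf_fd,
        EGL_DMA_BUF_PLANE0_OFFSET_EXT, 0,
        EGL_DMA_BUF_PLANE0_PITCH_EXT,  (EGLint)stride,
        /* The 64-bit modifier is split across two 32-bit attributes. */
        EGL_DMA_BUF_PLANE0_MODIFIER_LO_EXT, (EGLint)(modifier & 0xffffffff),
        EGL_DMA_BUF_PLANE0_MODIFIER_HI_EXT, (EGLint)(modifier >> 32),
        EGL_NONE
    };

    /* No client buffer and no context: the dmabuf itself is the source. */
    return create_image(dpy, EGL_NO_CONTEXT, EGL_LINUX_DMA_BUF_EXT,
                        (EGLClientBuffer)NULL, attribs);
}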
Mesa provides an implementation of the extension, along with driver support on Skylake to import and sample compressed color buffers. The patchset discussions can be found on mesa-dev (ver1, ver2).
The Wayland protocol has been extended to allow the compositor to advertise platform supported format modifiers to its client applications, with Weston supporting this.
With the full end-to-end client-to-display pipeline now supporting tiled and compressed modes, users can transparently benefit from the reduced memory bandwidth requirements.
Further reading
Some GPUs find the 64-bit modifier too restrictive and require more storage to convey layout and related metadata. AMDGPU, for example, associates 256 bytes of information with each texture buffer to describe its layout.
This tutorial explains how to install pandom, a timing-jitter true random number generator maintained by ncomputers.org. The built-in Linux kernel true random number generator provides low throughput under modern circumstances, for example on personal computers with solid-state drives (SSDs) and on virtual private servers (VPS). The problem is becoming increasingly common in Linux deployments because of the continuously growing need for true random numbers, mainly for cryptographic purposes.
The tutorial applies to amd64 / x86_64 Linux kernel versions 2.6.9 and newer.
In one of our earlier articles, we discussed the ‘sudo’ command in detail. Toward the end of that tutorial, there was a brief note about a similar command, ‘su’.
In this article, we will discuss the ‘su’ command in detail, as well as how it differs from ‘sudo’. The main job of the su command is to let you switch to another user during a login session. In other words, the tool lets you assume the identity of another user without having to log out and then log back in (as that user).
The Nex Gen Media Server is a small-footprint shared library that allows users to easily build video media and telephony applications. It supports several popular streaming protocols such as RTMP, RTSP, and Apple’s HTTP Live Streaming, and it can capture live video streams and adapt them so they can be received by another type of device. For instance, using NGMS you could capture an HD video feed and convert it so that it can be received by an iPhone over a 3G connection. This makes it a particularly useful tool for developers, so let’s take a closer look at how you can integrate the NGMS API to control streaming features directly from a C application:
1. Download and read the NGMS user guide
As always, the first step of any process lies in understanding its backbone. To that end, you’ll need to download and read the NGMS user guide from http://ngmsvid.com/ngms.php and its respective API reference guide from http://ngmsvid.com/develop.php before you begin coding. These cover the basics of the library and its main utilities. Then, proceed to download the NGMS package for Linux. Once you’ve done that, unzip its contents into the directory of your choice.
2. Set up the application
In order for NGMS to be directly integrated into an application, you’ll need to include ngms/include/ngmslib.h in your code. You’ll also have to link against the shared libraries ngms/lib/libngms.so and ngms/lib/libxcode.so. Be aware that libngms.so depends on libxcode.so, so be sure to specify that in the linker options.
3. Create a simple makefile
Here is an example of what things should look like:
#Example Makefile
CC=gcc
CFLAGS=-ggdb
INCLUDES+= -I ngms/include
LDFLAGS+= -L ngms/lib -lngms -lxcode -lcrypto

all: myapp

%.o: %.c
	$(CC) $(CFLAGS) $(INCLUDES) -o $@ -c $<

myapp: myapp.o
	$(CC) -fpic -o myapp myapp.o $(LDFLAGS)
4. Stream a media file
The code uses the NGMSLIB_STREAM_PARAMS_T struct type to control the NGMS library. To that end, you’ll need to call ngmslib_open to “preset” the struct. After that you can fill out whatever options you’d like in the struct, and then call ngmslib_stream to create the output video.
Now you can stream a media file directly from your application. Since ngmslib_stream is a blocking call, you can interrupt the stream by calling ngmslib_close from another thread, and the ngmslib_stream call will exit.
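To make that flow concrete, here is a minimal, hedged sketch. The function and struct names come from the steps above, but the exact signatures, field types, and return values are assumptions on my part, so treat ngmslib.h and the API reference guide as authoritative:

/* stream_file.c - sketch of the open/configure/stream/close sequence described above.
 * Assumption: ngmslib_open/ngmslib_stream/ngmslib_close take a pointer to the
 * NGMSLIB_STREAM_PARAMS_T struct and return 0 on success; check ngmslib.h. */
#include <stdio.h>
#include "ngmslib.h"   /* from ngms/include */

int stream_file(const char *path)
{
    NGMSLIB_STREAM_PARAMS_T ngmsConfig;

    /* "Preset" the struct before filling in any options. */
    if (ngmslib_open(&ngmsConfig) != 0) {
        fprintf(stderr, "ngmslib_open failed\n");
        return -1;
    }

    ngmsConfig.inputs[0] = (char *)path;   /* a media file or an rtp:// URL */

    /* Blocking call: streams until the input ends or another thread calls
     * ngmslib_close on the same configuration. */
    if (ngmslib_stream(&ngmsConfig) != 0)
        fprintf(stderr, "ngmslib_stream failed\n");

    ngmslib_close(&ngmsConfig);
    return 0;
}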
5. Add in the final touches
You can also add support for an embedded Flash player by adding the following lines of code:
ngmsConfig.rtmplive = "1935";
ngmsConfig.live = "8080";
Or, instead of playing a file, you might want to change the input so that it’s a live video stream. You can create two separate instances of the application: one outputs the video to port 5006, while the other captures video on port 5006 and outputs it to port 5004. It looks something like this:
//ngmsConfig.inputs[0] = "mediaTestFile.mp4";
ngmsConfig.inputs[0] = "rtp://127.0.0.1:5006";
ngmsConfig.strfilters[0] = "type=m2t";
In conclusion, it is fairly easy to add video streaming support to your own application. The code above is written in C, but C++ developers can adapt it by wrapping the calls to ngmslib in an extern "C" declaration. Java developers can use the library too, though that requires building a JNI interface and wrapping each of the calls down to NGMS. Still, the NGMS library is quite useful, with potential applications that include building your own video streaming client.
Some of the world’s largest and most successful companies gathered this week at Open Source Leadership Summit in Lake Tahoe to share best practices around open source use and participation. Companies from diverse industries — from healthcare and finance, to telecom and high tech — discussed the strategies and processes they have adopted to create business success with open source software.
Below are five lessons learned, taken from a sampling of talks by engineers and community managers at Capital One, Google, and Walmart, which have all adopted a strategic approach to open source.
1. Give developers freedom to contribute
Walmart has worked hard to develop a culture that embraces open source. Key to this cultural transformation has been convincing managers that it’s beneficial to devote developer resources to open source contributions — and to give developers the freedom to contribute however they wish.
“We’ve found that the team members that have a choice of what (open source projects) to work on are the most passionate about really diving in,” said Megan Rossetti, senior engineer, cloud technology, at Walmart.
2. Always be evaluating open source options
Walmart has also created an open source management structure and process to help institutionalize and enable open source participation. The company has an internal open source team to find and shepherd new open source projects and contributions.
“As we onboard new projects, we are always evaluating where does it make sense to bring in open source and to contribute back to open source,” said Andrew Mitry, a senior distinguished engineer at Walmart.
3. Use the right license
Capital One has also made significant strides to become a good open source partner in a way that doesn’t compromise customers or violate financial industry regulations. The company sees a great benefit in releasing open source projects that encourage broad use and participation from other companies. They’ve learned that this means projects must be structured in a way that encourages openness.
“If you want to make sure your code can be used, you really should pick a license written by someone who knows what they’re doing, preferably one of the ones approved by the FSF (Free Software Foundation) or OSI (Open Source Initiative),” said Jonathan Bodner, lead software engineer, technology fellows at Capital One.
“Also, if you want to encourage companies to join the community for your software you probably should pick one of the permissive licenses.”
4. Lead from behind
Kubernetes, an open source project hosted by the Cloud Native Computing Foundation, is one of the fastest growing open source communities on GitHub. Despite massive participation, the project always needs good leaders – those willing to “chop wood and carry water,” said Sarah Novotny, head of the Kubernetes Community Program at Google.
“Being a leader in the open source community is not always about control and it is not always about making sure you have the most commits or the only viewpoint or the only direction,” Novotny said. “We need people willing to do work that is not as glamorous, that’s not as much in the fore. This is very much leadership from behind… It’s making sure that you have influence in the community that is longstanding and promotes the health of the project long term.”
5. Let go of IP
By releasing its Kubernetes container orchestration technology as open source and donating it to The Linux Foundation (under CNCF), Google opened up the project to outside contribution and increased enterprise participation. That, in turn, helped the technology become ubiquitous and profitable for Google which built cloud services on top of the project. Letting go of the project’s intellectual property was ultimately what created that success, said Craig McLuckie, CEO and founder of Heptio, and founder of Kubernetes at Google.
“Nothing poisons an ecosystem faster than playing heavy with trademark,” McLuckie said. “One of the first things we did with Kubernetes was donate it to the Linux Foundation to make it very clear that we were not going to play those games. And in many ways that actually opened up the community…
“It would have really held us back if we had held the IP. If we’d held that trademark and copyright on the project it would have hurt us.”
Want to learn more about open source in the enterprise? Recorded keynote talks from Open Source Leadership Summit 2017 are available now on YouTube. Watch now!
The Linux Foundation has announced keynote speakers and session highlights for Open Networking Summit, to be held April 3-6, 2017 in Santa Clara, CA.
ONS promises to be the largest, most comprehensive and most innovative networking and orchestration event of the year. The event brings enterprises, carriers, and cloud service providers together with the networking ecosystem to share learnings, highlight innovation and discuss the future of open source networking.
Speakers and attendees at Open Networking Summit represent the best and brightest in next-generation open source networking and orchestration technologies.
ONS keynote speakers
Martin Casado, a general partner at the venture capital firm Andreessen Horowitz and co-founder of Nicira (acquired by VMware in 2012) will give a keynote on the future of networking. (See our Q&A with Casado for a sneak preview.)
Other keynote speakers include:
John Donovan, Chief Strategy Officer and Group President – AT&T Technology and Operations with Andre Fuetsch, President AT&T Labs and Chief Technology Officer at AT&T
Justin Dustzadeh, VP, Head of Global Infrastructure Network Services, Visa
Dr. Hossein Eslambolchi, Technical Advisor to Facebook, Chairman & CEO, 2020 Venture Partners
Albert Greenberg, Corporate Vice President Azure Networking, Microsoft
Rashesh Jethi, SVP Engineering at Amadeus IT Group SA, the world’s leading online travel platform
Sandra Rivera, Vice President Datacenter Group, General Manager, Network Platforms Group, Intel Corporation
Amin Vahdat, Google Fellow and Technical Lead for Networking, Google
ONS session speakers
Summit sessions will cover the full scope of open networking across enterprise, cloud and service providers. Topics that will be explored at the event include container networking, software-defined data centers, cloud-native application development, security, network automation, microservices architecture, orchestration, SDN, NFV and so much more. Look forward to over 75 tutorials, workshops, and sessions led by networking innovators.
Session highlights include:
Accelerated SDN in Azure, Daniel Firestone, Microsoft
Troubleshooting for Intent-based Networking, Joon-Myung Kang, Hewlett Packard Labs
Beyond Micro-Services Architecture, Larry Peterson, Open Networking Lab
Combining AI and IoT. New Industrial Revolution in our houses and in the Universe, Karina Popova, LINK Mobility
Rethinking NFV: Where have we gone wrong, and how can we get it right?, Scott Shenker, UC Berkeley
Linux.com readers can register now with the discount code, LINUXRD5, for 5% off the registration price. Register to attend by February 19 and save more than $800 over late registration pricing.
It is often faster to use command-line apps to play audio files and preview images than to futz around launching and using a graphical application, and you can use them in scripts. Come along to learn about MOC and SoX for playing audio files, and feh for viewing image files from the Linux command line.
MOC, Music on Console
MOC plays audio files from an X terminal, and from a console with no X windows, such as a headless server with no graphical environment. MOC supports most audio file formats including OGG, WAV, FLAC, MIDI, MP4, and MP3. Note that the correct command is mocp and not moc. moc is a Qt command, the Meta Object Compiler, so if you run it you’ll get a “No relevant classes found” error. The simplest use is to start it and name a directory that contains audio files:
$ mocp sounds/ambient/
You’ll see something like Figure 1, a nice Midnight Commander-style two-pane file manager where you can navigate all over your filesystem and find audio files to play. Figure 1 has a playlist in the right pane; add files from the left pane to your playlist by highlighting them and pressing the a key. Press V to save your playlist in the current directory.
Figure 1: MOC.
By default MOC plays all the files in the current directory. Use the Tab key to toggle between the file list and your playlist. Navigate up and down with the arrow keys, and press Enter to select a file to play. These are the basic commands, and note that they are case-sensitive:
< and > control the volume level
p or spacebar toggle pause/play
n plays the next file, b plays the previous file
S toggles shuffle
Right arrow key seeks forward and Left arrow key seeks backward
q detaches from the MOC interface and returns to your prompt, and your audio keeps playing
mocp returns to the MOC interface from your command line
Q from the MOC interface quits MOC
mocp -x from any command prompt closes MOC
MOC commands are different in the MOC interface than on your command line. man mocp details the commands that you run on the command line, and pressing h inside the MOC interface shows a list of its commands.
Your personal MOC directory is ~/.moc. The example configuration file is in /usr/share/doc/moc/examples/config.example.gz. You can extract and copy this example file to ~/.moc/config, or just copy the bits you want to use. I use the MusicDir option to set my default playlist, and you may set a default directory instead. List your audio directories in the Fastdir options for fast switching:
Start MOC in your MusicDir with mocp -m, or press m in the MOC interface.
Press Shift+1, Shift+2 and so on to change to your various Fastdirs.
MOC has customizable theming and keymaps; see man mocp and the help in the MOC interface to see many more options and controls.
Play One Audio File with SoX
Good old SoX (Sound eXchange) has been around forever and contains a multitude of capabilities. If MOC has an easy way to play just one file, I have not found it, so I use SoX for this. This example plays a single file, enclosing the filename in quotation marks because it contains whitespace:
$ play "quake2/music/Sonic Mayhem - Quake II/11.ogg"
Just as I do with image files (see the next section), I use locate and grep to find audio files. Then it’s a quick select > middle-click paste to play the file with SoX.
feh X Terminal Image Viewer
I use feh to quickly preview images. You need to be in a graphical session, that is, using an X terminal like GNOME Terminal or Konsole. I have over a thousand images on my main PC, and I rely on locate and grep to find what I want. It’s a lot faster to view the image with feh than to open a graphical app and wander through it until I find my image. Like the photo of my little cat Molly in Figure 2:
$ locate -i molly|grep rock
/home/carla/Pictures/molly-on-rocks-small.jpeg
$ feh /home/carla/Pictures/molly-on-rocks-small.jpeg
Figure 2: Molly.
You can also open your images in editors like Inkscape and Gimp this way, for example inkscape /home/carla/Pictures/molly-on-rocks-small.jpeg. In feh, right-click on your image to open a menu full of useful options: rotate, set image as background, delete, image size and type, and several others.
Give feh a directory name to launch a slideshow of all images in the directory, then click each image to advance to the next one. feh displays images at their native resolutions, so right-click on any image and check Options > Fullscreen to shrink large images to fit your screen. Or pass options in your command: for example, feh’s --cycle-once, --slideshow-delay, --scale-down, and --draw-filename options stop the slideshow after displaying all images once, pause for a chosen number of seconds on each image, automatically scale large images to fit your screen, and print the filename on each image.
feh can also build a montage of thumbnails from a directory with feh -m image/directory; use the right-click menu to save your montage in your current directory (not your images directory).
Open all images in the directory in their own windows (don’t do this with a large number of images!):
$ feh -w image/directory
You can enter a list of filenames in a text file, one per line, and then pass this list to feh:
$ feh -f mylist
man feh is quite good; it’s well-organized and clear, and it details all of feh’s operations, including keyboard shortcuts, mouse shortcuts, and randomizing background images.
Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.