
Linux and Open Source on the Move in Embedded, Says Survey

AspenCore has released the results of an embedded technology survey of its EETimes and Embedded readers. The survey indicates that open source operating systems like Linux and FreeRTOS continue to dominate, while Microsoft Embedded and other proprietary platforms are declining.

Dozens of market studies are happy to tell you how many IoT gizmos are expected to ship by 2020, but few research firms regularly dig into embedded development trends. That’s where reader surveys come in handy. Our own joint survey with LinuxGizmos readers on hacker board trends offers insights into users of Linux and Android community-backed SBCs. The AspenCore survey has a smaller sample (1,234 vs. 1,705), but is broader and more in-depth, asking many more questions and spanning developers who use a range of OSes on both MCUs and application processors.

The survey, which was taken in March and April of this year, does not perfectly represent global trends. The respondents are predominantly located in the U.S. and Canada (56 percent), followed by Europe/EMEA (25 percent) and Asia (11 percent). They also tend to be older, with an average of 24 years out of college, and work at larger, established companies with an average size of 3,452 employees and on teams averaging 15 engineers.

As shown by the chart above, Linux was dominant when readers were asked to list all the embedded OSes they used. Some 22 percent chose “Embedded Linux” compared to 20 percent selecting the open source FreeRTOS. The Linux numbers may actually be much higher since the 22 percent figure may only partially overlap with the 13 percent rankings for Debian and Android, the 11 percent ranking for Ubuntu, and 3 percent for Angstrom.

When looking at next year’s plans, FreeRTOS and Embedded Linux jump to 28 percent and 27 percent, respectively. Android also saw a sizable boost to 17 percent while Debian dropped to 12 percent, and Ubuntu and Angstrom stayed constant. The chief losers here are Microsoft Windows Embedded and Windows Compact, which rank 8 percent and 5 percent, respectively, dropping to 6 percent and 4 percent in future plans. Windows 10 IoT did not make the list at all.

As seen in the above graph, open source operating systems offered without commercial support beat commercial OSes by 41 percent to 30 percent. The trend toward open source has been consistent for the last five years of the survey, and plans for future projects suggest it will continue, with 43 percent planning to use open source vs. 28 percent for commercial.

In-house OSes appear to be in a gradual decline while commercial distros based on open source, such as the Yocto-based Wind River Linux and Mentor Embedded Linux, are growing slightly. The advantages of commercial offerings are said to include better real-time capabilities (45 percent), followed by hardware compatibility, code size/memory usage, tech support, and maintenance, all in the mid-30-percent range. Drawbacks included expense and vendor lock-in.

When asked why respondents chose an OS in general, availability of full source code led at 39 percent, followed by “no royalties” (30 percent), and tech support and HW/SW compatibility, both at 27 percent. Next up was “freedom to customize or modify” and open source availability, both at 25 percent.

Increased interest in Linux and Android was also reflected in a question asking which industry conferences readers attended last year and expected to attend this year. The Linux Foundation’s Embedded Linux Conferences saw one of the larger proportional increases from 5.2 to 8.0 percent while the Android Builders Summit jumped from 2.7 percent to 4.5 percent.

Only 19 percent of respondents purchase off-the-shelf development boards vs. building or subcontracting their own boards. Among that 19 percent, boards from ST Microelectronics and TI are tied for most popular at 10.7 percent, followed by similarly unnamed boards from Xilinx, NXP, and Microchip. The 6-8 ranked entries are more familiar: Arduino (5.6 percent), Raspberry Pi (4.2 percent), and BeagleBone Black (3.4 percent).

When the question was asked slightly differently (what form factor do you work with?), these same three boards were included with categories like 3.5-inch and ATX. Here, the Arduino (17 percent), RPi (16 percent), and BB Black (10 percent) followed custom design (26 percent) and proprietary (23 percent). When asked which form factor readers planned to use this year, the Raspberry Pi jumped to 23 percent. The only proportionally larger increase was for ARM’s Mbed development platform, which moved from 3 percent to 6 percent.

OS Findings Jibe with VDC Report

The key findings in the 2017 Embedded Market Survey on OS and open source are reflected in large part by the most recent VDC Research study on embedded tech, published last November. (We covered the 2015 report here.) VDC’s Global Market for IoT & Embedded Operating Systems 2016 projected only 2 percent (CAGR) growth for the IoT/embedded OS market through 2020 in large part due to the open source phenomenon.

“Free and/or publicly available, open source operating systems such as Debian-based Linux, FreeRTOS, and Yocto-based Linux continue to lead new stack wins, with nearly half of surveyed embedded engineers expecting to use some type of free, open source OS on their next project,” said VDC Research.

As the VDC chart above indicates, bare metal, in-house, and commercial OSes are on the decline, while open source – and especially free open source – platforms are on the rise. VDC cited the decline of Microsoft’s embedded platforms, and noted market uncertainty due to major chip vendor acquisitions, as well as the future of Wind River’s platforms as the company is fully integrated within Intel.

Other survey findings in chips, wireless, and more

More than a third of the AspenCore 2017 Embedded Market Survey respondents work on industrial automation, followed by a quarter each for consumer electronics and IoT. Half the respondents said that IoT will be important to their companies in 2017.

Some 13 percent of respondents said they use 64-bit chips, up from 8 percent in 2015. The report shows a reader-ranked list of processor vendors, but not the processors themselves. Most appear to be MCU vendors, but many also make higher-end SoCs. The picture is further muddied by rampant acquisition.

The processor leaders are Texas Instruments (31 percent) followed by Freescale (NXP/Qualcomm) and Atmel (Microchip) at 26 percent and Microchip on its own at 25 percent. Then comes STMicro (23 percent), NXP (Qualcomm) at 17 percent, and Intel at 16 percent. In future plans, TI and Freescale extend their lead, while STMicro jumps past Microchip to number three. Xilinx edges past Intel at 21 percent and 18 percent respectively, with Intel’s Altera unit at 17 percent.

When asked which 32-bit chips readers plan to use, the top contenders that run Linux include the Xilinx Zynq and NXP i.MX6, both ranked third at 17 percent behind STM32 and Microchip’s PIC-32. The Atmel SAMxx and TI Sitara families are tied for fifth at 14 percent, and the 32-bit models among Intel’s Atom and Core chips come next at 13 percent. Intel’s Linux-ready Altera FPGA SoCs follow at 12 percent, tied with Arduino. Despite the popularity of the Xilinx and Altera hybrid FPGA/ARM SoCs, use of FPGA chips overall has declined slightly to 30 percent.

C and C++ are by far the most popular programming languages at 56 percent and 22 percent, respectively. C has lost 10 percentage points since 2015, however, while C++ has gained three points. Python saw the largest boost when asked about future plans, jumping from 3 percent to 5 percent expected usage. Git is the top version control software at 38 percent, up from 31 percent two years ago.

The most widely implemented wireless technologies were WiFi (65 percent), Bluetooth (49 percent), cellular (25 percent), and 802.15.4 (ZigBee etc.), which ranked at 14 percent. Interestingly, use of virtualization and hypervisors has dropped to 15 percent, with only 7 percent saying they plan to use the technologies in 2017.

Debugging and “meeting schedules” were the two greatest development challenges cited by readers, both at 23 percent. The leading future challenge was “managing increases in code size and complexity,” at 19 percent.

Half the readers said they were working on embedded vision technology, though slightly fewer planned to do so this year. Other advanced technologies, including machine learning (25 percent), speech (22 percent), VR (14 percent), and AR (11 percent), all saw big jumps in expected use in 2017; machine learning in particular almost doubled to 47 percent.

Connect with the embedded community at Embedded Linux Conference Europe in Prague, October 23-25. Register now! 

Hardening Docker Hosts with User Namespaces

Securing your Docker containers and the hosts upon which they run is key to sustaining reliable and available services. From my professional DevSecOps perspective, securing the containers and the orchestrators (e.g., OpenShift, Docker Swarm, and Kubernetes) is usually far from easy. This is primarily because the goal posts change frequently thanks to the technology evolving at such a rapid pace.

A number of relatively new-world challenges need to be addressed but one thing you can do to make a significant difference is remap your server’s user (UIDs) and group (GIDs) ranges to different user and group ranges within your containers.

With some unchallenging configuration changes, it’s possible to segregate your host’s root user from the root user inside your containers with a not-so-new feature called User Namespaces. This feature has been around since Docker 1.10, which was released around February 2016. I say that it’s a not-so-new feature because anybody who has been following the containerization and orchestration space will know that a feature more than six months old is considered all but an antique!

The lowdown

To get us started, I’ll run through the hands-on methodology of running host-level, or more accurately kernel-level, User Namespaces.

First, here’s a quick reminder of the definitions of two commonly related pieces of terminology when it comes to securing your Docker containers, or many other vendors’ containers for that matter. You might have come across cgroups. These allow a process to be locked down from within the kernel. When I say locked down, I mean we can limit its capacity to take up system resources. That applies to CPU, RAM, and I/O, among other aspects of a system.

These are not to be confused with namespaces, which control the visibility of a process. You might not want a process to see all of your network stack, or other processes running inside the process table, for example.

I’ll continue to use Docker as our container runtime example as it’s become so undeniably popular. What we will look at in this article is the remapping of users and groups inside a container relative to the host’s own users and groups. For clarity, the “host” is the server that the Docker daemon is running on. And, by extension, we will affect the visibility of the container’s processes in order to protect our host.

That remapping of users and groups is known as manipulating User Namespaces to affect a user’s visibility of other processes on the system.

If you’re interested in some further reading then you could do worse than look at the manual on User Namespaces.  The man page explains: “User namespaces isolate security-related identifiers and attributes, in particular, user IDs and group IDs…”.

It goes on to say that: “process’s user and group IDs can be different inside and outside a user namespace. In particular, a process can have a normal unprivileged user ID outside a user namespace while at the same time having a user ID of 0 inside the namespace; in other words, the process has full privileges for operations inside the user namespace, but is unprivileged for operations outside the namespace.”
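You can see these mappings for yourself on any modern Linux box: the kernel exposes each process’s UID mapping in /proc/&lt;pid&gt;/uid_map. A minimal sketch; the 123000 base mentioned in the comments is purely illustrative, matching the example range used later in this article:

```shell
# Each line of /proc/<pid>/uid_map reads: <inside-uid> <outside-uid> <count>.
# In the initial namespace this is the identity mapping covering all UIDs,
# typically: 0 0 4294967295
cat /proc/self/uid_map

# Inside a container remapped with userns-remap, the same file would instead
# show something like "0 123000 65536", meaning UID 0 in the container
# corresponds to UID 123000 on the host.
awk '{print "inside=" $1, "outside=" $2, "count=" $3}' /proc/self/uid_map
```

Reading this file from the host for a containerized process (rather than for /proc/self) is a handy way to confirm a remap actually took effect.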

Figure 1 offers some further insight into the segregation that we’re trying to achieve with User Namespaces. You could do worse than look at this page for some more detailed information.


Figure 1: An illustrative view of User Namespaces (Image source: https://endocode.com/blog/2016/01/22/linux-containers-and-user-namespaces)

Seconds out

Let’s clarify what we’re trying to achieve. Our aim is actually very simple; we want to segregate our host’s superuser (the root user which is always UID 0) away from a container’s root user.

The magical result of making these changes is that even if the container’s application runs as the root user and uses UID 0, in reality that superuser UID only matters inside the container and no longer correlates to the superuser on your host.

Why is this a good thing you may ask? Well, if your container’s application is compromised, then you are safe in the knowledge that an attacker will still have to elevate their privileges if they escape from the container to take control of other services (running as containers) on your host and then ultimately your host itself.

Add a daemon option

It’s important to first enable your Docker daemon option, namely --userns-remap.

It’s worth pointing out at this stage that, the last time I checked, you will need to use Kubernetes v1.5+ to avoid breaking network namespaces with User Namespaces. From what I saw, Kubernetes simply won’t fire up, complaining about Network Namespace issues.

Also, let me reiterate that the way you add options to your Docker daemon might have changed recently due to a version change. If you’re using a version more than a month old, then please accept my sympathies. There is a price for a continued rate of evolution: breaks in backward compatibility, or a new way of doing things, sometimes cause some eye strain. To my mind, there’s little to complain about, however; the technology is fantastic.

It’s because of this version confusion that I’ll show you the current way that I add this option to my Docker daemon; your petrol consumption may of course vary, with different versions and different flavors needing additional tweaks. Don’t be concerned if you know of a more efficient way of doing something. In other words, feel free to skip the parts that you know.

And so it begins

The first step is asking our Docker daemon to use a JSON config file from now on (instead of a Unix-style key=value text config), and to do so we’ll add a DOCKER_OPTS entry to the file /etc/default/docker. This should make adding many options a bit easier in the medium term and stops you from editing systemd unit files with clumsy options.

Inside the file mentioned, we simply add the following line which, erm, points to another config file from now on:

DOCKER_OPTS="--config-file=/etc/docker/daemon.json"

I’m sure you’ve guessed that our newly created /etc/docker/daemon.json file needs to contain formatted JSON. And, in this case, I’ve stripped out other config for simplicity and just added a userns-remap option as follows.

{
  "userns-remap": "default"
}

For older versions (and different Linux distributions), or personal preference, you can probably add this config change directly into /etc/default/docker as DOCKER_OPTS="--userns-remap=default" and not use the JSON config file.

Equally, we can probably fire our Docker daemon up even without a service manager like systemd as shown below.

$ dockerd --userns-remap=default

I hope one of these ways of switching on this option works for you. Google is, as ever, your friend otherwise.

Eleventy-one

At this stage, note that so far I have taken the lazy option in the examples above and simply said “default” for our remapped user. We’ll come back to that in a second — fear not.

You can now jump to the only other mandatory config required to enable User Namespaces, courtesy of our friendly, neighborhood kernel.

Even if you stick to using “default” as I did above, you should add these entries to the following files. On Red Hat derivatives, do this before restarting your Docker daemon with the added option shown above. On some distros these files don’t exist yet, so create them if they don’t already exist (using the echo commands below will do it).

echo "dockremap:123000:65536" >> /etc/subuid

echo "dockremap:123000:65536" >> /etc/subgid
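The format of those entries matters: each line reads name:start:count. A quick sanity check along these lines, sketched here as a hypothetical helper rather than anything Docker itself provides, can catch typos before you restart the daemon:

```shell
# Validate a subordinate ID entry of the form name:start:count.
# Succeeds only when the line has three colon-separated fields and the
# start and count fields are numeric.
check_subid() {
  echo "$1" | awk -F: 'NF == 3 && $2 ~ /^[0-9]+$/ && $3 ~ /^[0-9]+$/ {ok = 1} END {exit !ok}'
}

check_subid "dockremap:123000:65536" && echo "entry looks valid"
check_subid "dockremap:123000" || echo "entry is malformed"
```

Running each line of /etc/subuid and /etc/subgid through a check like this is cheap insurance; the daemon’s own error messages for bad entries are not always obvious.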

Restarting your daemon on modern Linux versions looks like this (a reminder that RHEL might be using the docker-latest service and Ubuntu might have required apt install docker.io to install the daemon in the first place amongst other gotchas).

$ systemctl restart docker

Crux of the matter

By adding the “subordinate” dockremap user and group entries to the files above, we are saying that we want to remap container user IDs and group IDs to a host range starting at 123,000. In theory, we can use 65,536 IDs above that starting point, but in practice this differs. In “current” versions, Docker actually only maps the first, single UID. Docker has said this will hopefully change in the future.
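The arithmetic of the remapping is straightforward to sketch: a container UID maps to the subordinate range’s base plus that UID, provided it falls within the count. The helper below is purely illustrative (it is not part of Docker); the 123000 and 65536 figures mirror the entries above:

```shell
# Map a container UID to its host UID, given a subordinate range.
# Usage: host_uid <base> <count> <container_uid>
host_uid() {
  base=$1; count=$2; cuid=$3
  if [ "$cuid" -ge "$count" ]; then
    # A container UID outside the range has no host mapping.
    echo "unmapped"
  else
    echo $((base + cuid))
  fi
}

host_uid 123000 65536 0      # container root maps to host UID 123000
host_uid 123000 65536 1000   # container UID 1000 maps to host UID 124000
```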

I mentioned that I’d explain the “default” user setting we used. That value tells the Docker internals to use the username and groupname dockremap as we’ve seen. You can use arbitrary names, but make sure your /etc/subuid and /etc/subgid files reflect the new name before then restarting your daemon.
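If you do pick an arbitrary name, three things need to stay in sync: the value in daemon.json and the entries in the two subordinate ID files. A sketch of generating them together, using a hypothetical remap user called myremap and writing to a scratch directory rather than touching /etc directly:

```shell
# Generate a matching daemon.json and subordinate ID entries for a
# hypothetical remap user. Written to a scratch directory for safety;
# copy into place and restart the daemon yourself once happy.
remap_user="myremap"
base=200000
count=65536
scratch=$(mktemp -d)

printf '{\n  "userns-remap": "%s"\n}\n' "$remap_user" > "$scratch/daemon.json"
echo "${remap_user}:${base}:${count}" > "$scratch/subuid"
echo "${remap_user}:${base}:${count}" > "$scratch/subgid"

cat "$scratch/daemon.json"
cat "$scratch/subuid"
```

Generating all three from the same variables means a rename or a range change can never leave the files disagreeing with each other.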

Other changes

Note that you’ll have to re-pull your container images as they will now live in a new local subdirectory on your host.

If you look under the directory /var/lib/docker you will note our image storage directory is named after our now familiar UID.GID number-formatted range as follows:

$ ls /var/lib/docker

drwx------.  9 dockremap   dockremap   4096 Nov 11 11:11 123000.123000/

From here, if you enter your container with a command like the one shown below, you should see that your application is still running as the root user and using UID 0.

$ docker exec -it f73f181d3e bash

On the host, you can run the ps command and see that, although the container thought it was using UID 0 (the root user), it is actually running as our UID of 123,000.

$ ps -ef | grep redis

If it helps, the command that I use on the host and directly inside containers to get the corresponding numbered UID for comparison with the username — which is displayed by most ps commands — is as follows:

$ ps -eo uid,gid,args
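If you want to script that comparison rather than eyeball it, you can pull a single process’s numeric UID straight out of ps. A small sketch using a hypothetical helper, run here against the current shell rather than a Redis container:

```shell
# Print the numeric UID that ps reports for a given PID.
uid_of() {
  ps -o uid= -p "$1" | tr -d ' '
}

# For the current shell this agrees with id -u. On a host inspecting a
# remapped container's process, it would print the subordinate UID
# (e.g. 123000) rather than 0.
uid_of $$
id -u
```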

Limitations

As with all added security, there are tradeoffs; however, these aren’t too onerous in the case of User Namespaces.

To start, note that you won’t be able to open up your containers using --net=host or share PIDs with --pid=host if you use the above config.

Also, be warned that you can’t use a --read-only container (effectively a stateless container) with User Namespaces.

Additionally, the super-lazy and highly dangerous privileged mode won’t work with this setup either. Also, you will need to make sure that any filesystems you mount, such as NFS drives, allow access to the UIDs and GIDs that you use.

One final gotcha is that Red Hat derivatives, such as CentOS, need the kernel settings opened up via the boot loader to enable User Namespaces. You can achieve this using grubby:

$ grubby --args="user_namespace.enable=1" \
  --update-kernel="$(grubby --default-kernel)"

Having done so, reboot your server for the change to take effect. To disable that setting, you can use this command line:

$ grubby --remove-args="user_namespace.enable=1" \
  --update-kernel="$(grubby --default-kernel)"

The End

I suggest that these simple changes are well worth the effort in relation to bolstering your Docker host’s security.

The last thing that anybody wants is an attacker sitting idle with superuser access on a host for months, learning about the weak links in your setup. Such an intrusion, also known as an Advanced Persistent Threat, would be very unwelcome and, of course, might happen entirely without your knowledge. All it takes is for the person who built an image that you pulled off Docker Hub to forget to upgrade a vulnerable library. On that ever so cheery note: Stay vigilant!

Learn more about essential sysadmin skills: Download the Future Proof Your SysAdmin Career ebook now.

Chris Binnie’s latest book, Linux Server Security: Hack and Defend, shows how hackers launch sophisticated attacks to compromise servers, steal data, and crack complex passwords, so you can learn how to defend against these attacks. In the book, he also talks you through making your servers invisible, performing penetration testing, and mitigating unwelcome attacks. You can find out more about DevSecOps and Linux security via his website (http://www.devsecops.cc).

LinkedIn Announces Open Source Tool to Keep Kafka Clusters Running

Today at The Kafka Summit in San Francisco, LinkedIn announced a new load balancing tool called Cruise Control, which has been developed to help keep Kafka clusters up and running.

The company developed Kafka, an open source message streaming tool, to make it easier to move massive amounts of data around a network from application to application. It has become so essential that LinkedIn now dedicates 1,800 servers to moving over 2 trillion transactions per day through Kafka, Jiangjie Qin, lead software engineer on the Cruise Control project, told TechCrunch.

With that kind of volume, keeping the Kafka clusters running has become mission-critical, so earlier this year the team decided to create a tool that would recognize when a cluster was going to break. 

Read more at TechCrunch

Distributed Systems Are Hard

In this post, we’ll look at some of the ways distributed systems can trip you up and some of the ways that folks are handling those obstacles.

Forget Conway’s Law, distributed systems at scale follow Murphy’s Law: “Anything that can go wrong, will go wrong.”

At scale, statistics are not your friend. The more instances of anything you have, the higher the likelihood one or more of them will break. Probably at the same time.

Services will fall over before they’ve received your message, while they’re processing your message or after they’ve processed it but before they’ve told you they have. The network will lose packets, disks will fail, virtual machines will unexpectedly terminate.

There are things a monolithic architecture guarantees that are no longer true when we’ve distributed our system.

Read more at The New Stack

How to Use the motd File to Get Linux Users to Pay Attention

It seems only decades ago that I was commonly sending out notices to my users by editing the /etc/motd file on the servers I managed. I would tell them about planned outages, system upgrades, new tools, and who would be covering for me during my very rare vacations. Somewhere along the stretch of time since, message of the day files seem to have faded from common usage, maybe overwhelmed by the excess of system messages, emailed alerts, texts, and other notices that have taken over. Or maybe not.

The truth is that the /etc/motd file on quite a number of Linux systems has simply become part of a larger configuration of messages that are fed to users when they log in. And, even if your /etc/motd file is empty or doesn’t exist at all, login messages are being delivered when someone logs into a server via a terminal window – and you have more control over what those messages are telling your users than you might realize.

Read more at Network World

Jump-Start Your Career with Open Source Skills

Although attending college is not required for success in software development, college programs can provide a great deal of useful information in a relatively short period of time. More importantly, they are designed to cover all necessary concepts without the knowledge holes some self-taught practitioners suffer. College programs also often include theory and history, which can form the foundation for professional exploration and decision-making.

Yet college graduates entering the workforce often find their coursework has emphasized theory over the practice, technologies, and trends required for success on the job. The reason? Curricula take time to develop, so institutions of higher education often teach technologies and practices that are at the tail end of current usage.

Fortunately, there are ways to learn and develop the knowledge and skills you need to land a job and succeed in today’s workplace.

Read more at OpenSource.com

My Use-Case for Go

The lack of generics is often mentioned in discussions regarding Go.

I would have liked Go to have algebraic data types and immutability by default. I would gladly give nil away to get these features.

On the positive side, Go has good libraries, good tooling, a common style and a syntax that is extremely easy to pick up. It’s fast enough and it has good support for concurrency via goroutines. It also produces executables that are very easy to deploy anywhere.

Given this description, it seems to me that Go is an evolution of C and Python and I decided to give it a try rewriting a project originally written in Python I am working on.

Read more at Dev.to

What Do the Most Successful Open Source Projects Have In Common?

Thriving open source projects have many users, and the most active have thousands of authors contributing. There are now more than 60 million open source repositories, but the vast majority are just a public workspace for a single individual. What differentiates the most successful open source projects? One commonality is that most of them are backed by either one company or a group of companies collaborating.

So, tracking the projects with the highest developer velocity can help illuminate promising areas in which to get involved, and what are likely to be the successful platforms over the next several years.

Rather than debate whether to measure high-velocity projects via commits, authors, or comments and pull requests, we use a bubble chart to show all 3 axes of data, and plot on a log-log chart to show the data across large scales.

Read more at The Linux Foundation

Future Proof Your SysAdmin Career: Embracing DevOps

Sysadmins are increasingly looking to expand their skillsets and carve out new opportunities. With that in mind, many sysadmins are looking to the world of DevOps. At lots of organizations, DevOps has emerged as the most effective method for application delivery, including in the cloud.

future proof ebook

One of the drivers of the DevOps movement is that organizations simply have limits on the number of IT staffers, sysadmins, and developers that they can employ. Cross-pollination of traditional skillsets makes good business sense. And, as Jeff Cogswell has noted, “The line between hardware and software is more blurry than it used to be.”

Cogswell also laid out a good recipe for what specific skills to master in order to meet DevOps goals:

  • Learn what virtualization is and how, through software alone, you can provision a virtual computer and install an operating system and a software stack.

  • Study emerging open source platforms and frameworks, such as OpenStack.

  • Learn network virtualization.

  • Learn to use configuration management tools, such as Puppet and Chef.

All of these pursuits can help sysadmins appeal to organizations looking to create more collaborative and efficient working environments. Additionally, as mentioned earlier, fluency and facility with emerging cloud, virtualization, and configuration management tools can make a substantial compensation difference for sysadmins.

Training options

Sysadmins interested in becoming more fluent with DevOps skills and practices can start by exploring Dice’s Skills Center. A look there makes it clear that skillsets surrounding configuration management tools, containers, and open platforms are much in demand. Savvy sysadmins can combine existing competencies with these skillsets and move the needle.

Flexible training options are available for these tools. For example, if you just want to take Puppet for a test drive within a virtual machine, you can do so here, or there are instructor-led and online training options detailed on the same page. For example, you can chart a learning roadmap for Puppet, find in-person or online training options for Chef, or sample some of the available online tutorials.

A great way to learn more about cloud skills is to open an account on Amazon Web Services and work with EC2 technology. OpenStack training options also abound. The Linux Foundation, for example, offers an OpenStack Administration Fundamentals course, which serves as preparation for certification. The course is available bundled with the COA exam, enabling students to learn the skills they need to work as an OpenStack-skilled administrator and get the certification to prove it.

The Guide to the Open Cloud 2016 from The Linux Foundation also includes a comprehensive look at other cloud platforms and tools that many sysadmins would be wise to pick up. Mirantis and other vendors, such as Red Hat, also offer certified OpenStack administrator curriculum.

Finally, scripting and development skills can also expand a sysadmin’s horizons and fit in with organizational DevOps goals. Scripting skills, from Python to Perl, are a valuable part of sysadmin’s toolkit. The Linux Foundation offers coursework in this area, too, including Developing Applications for Linux and Linux Performance Tuning. Additionally, The Foundation offers an Introduction to DevOps course that is worth exploring.

In the next article, we will explore specific professional certifications and relevant training to help you move to the next level.

Learn more about essential sysadmin skills: Download the Future Proof Your SysAdmin Career ebook now.

Read more:

Future Proof Your SysAdmin Career: An Introduction to Essential Skills 

Future Proof Your SysAdmin Career: New Networking Essentials

Future Proof Your SysAdmin Career: Locking Down Security

Future Proof Your SysAdmin Career: Looking to the Cloud

Future Proof Your SysAdmin Career: Configuration and Automation

This Week in Numbers: Comparing Corporate Open Source Contributions on GitHub Organizations

Another way to evaluate GitHub organizations is based on their activity. Open Hub data indicates that 61 percent of the most active organizations on GitHub are commercial enterprises. Most of these companies are working on projects where almost all of the contributors are also employees. Non-profit organizations like those supporting Linux and Kubernetes on average have the highest number of commits. Education organizations have the fewest because many of the projects they maintain are just ways to manage syllabi and homework assignments.

Read more at The New Stack