
Survey Shows Linux the Top Operating System for Internet of Things Devices

The results are in from the latest IoT Developer Survey and again this year, Linux is by far the most used operating system for Internet of Things devices. Surprised? You shouldn’t be. Linux rules the roost in all areas of computing, so why should IoT be any different?

The online survey is sponsored each year by the Eclipse IoT Working Group, AGILE IoT, IEEE, and the Open Mobile Alliance for the purpose of understanding how developers are building IoT solutions. The survey was open from January 24 until March 5, with 502 participants.

This year 71.8 percent of respondents ticked “Linux” in answer to the choose-all-that-apply question, “What operating system(s) do you use for your IoT devices?” Windows came in second with 22.9 percent, followed by FreeRTOS with 20.4 percent. The answer “No OS/Bare-metal” took fourth place with a 19.9 percent tally. No other operating system received more than 10 percent of the vote.

Read more at ITPro

Android Things 1.0 Offers Free OTA Updates — With Restrictions

A year and a half after Google announced that its stripped-down, IoT-oriented Brillo version of Android was being recast as Android Things, the platform has emerged from Developer Preview as Android Things 1.0. The good news is that Google is offering customers free automated updates for three years, which should save money while improving security and reliability. The bad news is that Android Things is more proprietary than the mostly open source Android.

Google will continue to support the Raspberry Pi 3 and Technexion’s i.MX7-based Pico i.MX7D module as official Android Things development platforms. However, you can’t use them for production, as they “do not meet Google’s security requirements for key and ID attestation and verified boot, and may not receive stability and security updates,” says Google.

Significantly, Google dropped support for NXP’s low-power i.MX6 UL SoC, and has chosen higher-end, quad-core Cortex-A7, -A35, and -A64 SoCs, and an octa-core -A53 SoC for its newly announced production platforms. Customers are required to choose from NXP’s i.MX8M, MediaTek’s MT8516, and Qualcomm’s Snapdragon 212 and Snapdragon 624. (In January, Google had mentioned the Rockchip RK3229 as a platform, but it’s not included here.) Four tiny new compute modules based on these chips are “coming soon” from InnoComm (i.MX8M), Intrinsyc (Snapdragon), and MediaTek.

Android Things consumer devices should start arriving this summer with a focus on home automation and consumer devices rather than industrial IoT. Most are smart speakers that mix Android Things with Google Cast and the Google Assistant voice agent, with potential links to other Google cloud services. Unlike Android phones, Android Things devices are unlikely to drive nearly as much revenue from advertising or apps, but Google may be able to profit by driving customers to its cloud services — and perhaps by selling user behavior data.

At this January’s CES show, Google previewed several of the Android Things devices that it now says will ship this summer. These include the LG ThinQ WK7 and iHome iGV1 smart speakers, as well as the Lenovo Smart Display with Google Assistant. Two more unnamed Android Things/Assistant-driven smart displays are also still on the way from JBL and LG.

Google also announced two new Android Things products due this summer. Byteflies is a docking station that securely transmits wearable health data to the cloud, and Mirego is developing a “network of large photo displays driven by public photo booths in downtown Montreal,” says Google. There was no mention of the previously announced InstaView ThinQ smart fridge.

Version 1.0 Improves Android Things Console

Android Things continues to be a stripped-down version of Android based on a Linux kernel that can run on as little as 32MB RAM. Wireless savvy and cloud connected, the platform is streamlined for single application use. Displays are optional, full-screen, and developed with standard Android UI tools. Audio is increasingly emphasized, with a focus on Google Assistant.

New Android Things features added since the latest preview include an updated Android Things Console that lets you build factory images and enable OTA updates, including OEM app updates. Analytics are available, but so far there’s no IoT aggregation platform as there is with Amazon’s more industrially focused AWS IoT and related AWS Greengrass platforms.

Other new Android Things features include the ability to automatically launch a selected application on boot. Google has added new Bluetooth device state management features and has improved support for LoWPAN networks such as Thread. Peripheral I/O APIs have been developed for GPIO, PWM, I2C, SPI, and UART, and there are new user-space drivers for location, input, sensors, and LoWPAN.

Three Free Years of OTA Updates, With Restrictions

Weirdly enough, Google’s Android Things could end up being less open source than Microsoft’s upcoming, Linux-based Azure Sphere IoT ecosystem. Android Things is open source to the extent that it’s posted on GitHub, and as of today, is freely downloadable for anyone. Version 1.0 has evolved with the help of feedback from 10,000 developers who have used the Developer Previews, which have been downloaded more than 100,000 times.

Yet, Google clearly states this is a “managed OS.” You need to sign a license agreement to use the Android Things SDK if you plan to deploy more than 100 devices commercially. If you have 100+ devices and want the long-term support version with the updates enabled by the cloud-connected Android Things Console software, you must sign a distribution agreement.

There will also be “additional options for extended support” after three years, and OEMs can “push OEM apps/APK updates at any time, even after updates for Android Things ends,” says Google. In addition, the Alphabet subsidiary is launching a “special limited program to partner with the Android Things team for technical guidance and support.”

With so many other IoT development platforms to choose from, it’s hard to imagine vendors investing in Android Things without licensing the long-term version. In an age of increasing IoT malware attacks, the free three-year update deal is very compelling. But you’re also giving up control and flexibility.

Like Microsoft with Azure Sphere, Google is limiting the authorized hardware platforms, but it also limits the types of devices that can run it. Google’s Android Things Program Policies page states that Android CDD Device Types “such as handhelds, watches, televisions, automotives, and any other device categories defined in the future” are prohibited. “If interested in these categories, please see Android, Wear OS, Android TV, and Android Auto.”

Ars Technica interpreted the licensing info this way: “Android Things is closed source and has a centralized update system. Google controls the operating system, and device makers can only make apps.”

A May 7 Solutions Review blog post by Nathaniel Lewis calls Android Things a “proprietary platform” with “undocumented distribution terms,” and recommends “backing away slowly from the whole area.” Of particular concern to Lewis is a clause in the SDK agreement that states: “Except to the extent required by applicable third party licenses, you may not… combine any part of the Android Things SDK with other software.”

Lewis argues that it would be very difficult for a developer to determine whether a compileOnly dependency results in a violation of terms. A confidentiality requirement in the Android Things Console agreement is similarly problematic.

A Wear OS for IoT

Despite Google’s pivot away from open source, the response from the mainstream tech media has been enthusiastic. Indeed, with its free updates, Google has taken a welcome step toward assuming responsibility for IoT’s security vulnerabilities. The increased vigilance could also reduce fragmentation and improve software compatibility. This is less critical in the IoT world, but is growing more important in the consumer realm targeted by Android Things.

A positive report on Android Things from The Next Web argues that IoT development is increasingly driven by smaller software-driven firms that lack the skill or the money to hassle with the details of embedded development and security. Android Things solves those problems while providing a familiar app development environment closely based on the Android SDK.

Android Things does not appear to be much more restrictive than Wear OS, the new name for Google’s Android Wear smartwatch distribution. Yet Wear OS is targeted at a smaller number of mostly large vendors working with a fairly standard form factor. Android Things is designed for a far more diverse set of devices that are likely to be developed by many smaller vendors.

The restrictions imposed on Wear OS vendors may be one reason the platform is lagging behind Apple Watch and Samsung’s second-place, Tizen-based Gear watches. (Blocks recently launched Project OpenWatch as a more open source alternative to Wear OS.)

Google may have been better served by either building a single Google Watch or creating a more open platform that could evolve spontaneously like Android. In the IoT world, Google experimented with the Google Watch strategy by buying Nest, but that did not work out as planned.

There are less restrictive ways Google could encourage secure updates. In the Android world, Google’s Project Treble, which requires vendors to use modern Linux kernels, will likely help improve security and reduce fragmentation while still retaining open source flexibility.

Canonical’s Ubuntu Core, with its snap mechanism for securely updated IoT applications, offers a more open source alternative to Android Things. Ubuntu Core enables secure, transactional updates while also offering access to a large application library. Whereas Google ensures secure updates by performing them itself, Canonical provides a platform with the update paradigm built in, accomplishing essentially the same goal.

Join us at Open Source Summit + Embedded Linux Conference Europe in Edinburgh, UK on October 22-24, 2018, for 100+ sessions on Linux, Cloud, Containers, AI, Community, and more.

Linux Comes to Chromebooks

Chrome OS is based on Linux, but you can’t easily run Linux applications on it. That’s about to change, with Google’s Project Crostini rolling out.

Chrome OS started as a spin-off of Ubuntu Linux. It then migrated to Gentoo Linux and evolved into Google’s own take on the vanilla Linux kernel. But its interface has remained the Chrome web browser UI to this day.

True, you could run Debian, Ubuntu, and Kali Linux on Chrome OS using the open-source Crouton program in a chroot container. Or, you could run Gallium OS, a third-party, Xubuntu-based, Chromebook-specific Linux variant. But neither approach was for the faint of heart or the weak in technical skills.

According to Google, you will soon be able to run Linux inside a virtual machine (VM) that was designed from scratch for Chromebooks. That means it will start in seconds, and it integrates completely with Chromebook features.

Read more at ZDNet

CNCF’s CloudEvents Spec Could Facilitate Interoperability across Serverless Platforms

The Cloud Native Computing Foundation (CNCF) wants to foster greater interoperability between serverless platforms through its release of the CloudEvents specification. The project is at version 0.1, and its backers hope it will be approved as a CNCF sandbox project in June.

The CloudEvents specification (formerly called OpenEvents) provides a path that would allow any two components to transfer an event, regardless of whether they are functions, apps, containers, or services, said Doug Davis, a senior technical staff member at IBM and a member of the CNCF serverless working group.

“Much in the same way HTTP — in its most basic form — helped interoperability between any two components by standardizing how to represent well-defined metadata about the message being transferred, CloudEvents is doing the same thing,” said Davis. “Defining the common metadata will aid in the transferring of an event from any producer to any consumer.”

Read more at The New Stack

Build a Real VPN with OpenVPN

Learn how to set up your own VPN in this tutorial from our archives.

A real, genuine, honest-to-gosh virtual private network (VPN) is an encrypted network-to-network virtual tunnel that connects trusted endpoints. It is not an HTTPS web portal that trusts all clients. Let us build a proper, strong VPN with OpenVPN.

The definition of VPN has been stretched beyond recognition with the proliferation of HTTPS VPNs, which trust all clients. These work for shopping sites, which permit only limited client access. Many are sold to businesses as “Easy client-less configuration!” to provide remote employee access. But I do not trust them as extensions of my networks. A VPN connects two networks, such as branch offices, or a remote worker to an office server. A real VPN requires that both the server and clients authenticate to each other.

Setting up a VPN where both servers and clients authenticate to each other is a bit of work, and that is why “Easy client-less configuration!” sells. But it’s really not that hard to set up a proper, strong OpenVPN server. You need two hosts on different networks to build a nice OpenVPN test lab: a couple of virtual machines will do, or two physical machines on different networks, such as one wireless and one wired. All hosts need OpenVPN and Easy-RSA installed.

Set up PKI

First, we’ll create a proper public key infrastructure (PKI) on the server. Your OpenVPN server is the machine that external users will connect to. As with all Linux servers, “server” refers to function; a computer can be both a server and a client. A PKI offers several advantages: you have a Certificate Authority (CA), which simplifies key distribution and management, and you can revoke client certificates at the server. When you don’t use a CA, the server needs a copy of every client certificate. A CA doesn’t need all those client certificates; it only needs to know whether the client certificates have been signed by the CA. (OpenVPN also supports static keys, which are fine for one or two users; see How to Set Up Secure Remote Networking with OpenVPN on Linux, Part 1.)
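
Revoking a certificate, once the PKI we’re about to build exists, takes only a couple of commands. A sketch, using a hypothetical client name:

$ sudo ./easyrsa revoke clientname
$ sudo ./easyrsa gen-crl

You would then point OpenVPN at the generated CRL with its crl-verify directive.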

Remember, private keys must always be protected and never shared, while public keys are meant to be shared. In OpenVPN, the public key is called a certificate and has a .crt extension, and the private key is called a key, with a .key extension.

In the olden days, OpenVPN came with nice helper scripts to set this up: the Easy-RSA scripts. These are now maintained as a separate project, so if your Linux distribution doesn’t package them you can get them fresh from GitHub. Browse the Releases page to get ready-to-use tarballs. You might want to download them from GitHub anyway, to get the current 3.0.1 release. This release dates back to October 2015, but a lot of Linux distributions are stuck on the old 2.x releases. Let’s go ahead and use the new release.

Download and unpack the Easy-RSA tarball into your /etc/openvpn directory. That might look something like this (a sketch; the exact tarball URL and version are assumptions, so grab whatever the Releases page currently offers):
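
$ cd /etc/openvpn
$ sudo wget https://github.com/OpenVPN/easy-rsa/releases/download/3.0.1/EasyRSA-3.0.1.tgz
$ sudo tar xzf EasyRSA-3.0.1.tgz

Change to your Easy-RSA directory, then run this command to initialize your new PKI: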

$ sudo ./easyrsa init-pki

init-pki complete; you may now create a CA or requests.
Your newly created PKI dir is: /etc/openvpn/easyrsa/pki

Now go ahead and create your new CA:

$ sudo ./easyrsa build-ca
Generating a 2048 bit RSA private key
........................................................+++
................+++
writing new private key to '/etc/openvpn/easyrsa/pki/private/ca.key.tJXulR8Ery'
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Common Name (eg: your user, host, or server name) [Easy-RSA CA]:server.net

CA creation complete and you may now import and sign cert requests.
Your new CA certificate file for publishing is at:
/etc/openvpn/easyrsa/pki/ca.crt

You will copy your new ca.crt into /etc/openvpn on all client machines. The next steps take place on your client machine: creating a PKI environment, the client’s private key, and a signing request. Replace “AliceRemote” with whatever name you want to identify the client:

$ sudo ./easyrsa init-pki
$ sudo ./easyrsa gen-req AliceRemote
[...]
Keypair and certificate request completed. Your files are:
req: /etc/openvpn/easyrsa/pki/reqs/AliceRemote.req
key: /etc/openvpn/easyrsa/pki/private/AliceRemote.key

Copy the .req file to your server, import it, and then sign it:

$ sudo ./easyrsa import-req /media/carla/4gbstik/AliceRemote.req AliceRemote
$ sudo ./easyrsa sign-req client AliceRemote
[..]
Certificate created at: /etc/openvpn/easyrsa/pki/issued/AliceRemote.crt

Copy the signed certificate to the client machine. Now both server and client have all the necessary certificates and key pairs.

If you plan to use TLS, you need to generate Diffie-Hellman parameters on the server. You probably will, so go ahead and do it:

$ sudo ./easyrsa gen-dh
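
The server also needs its own certificate and key for the configuration below. Generating and signing them works just like the client steps above, except you sign with the server type; a sketch, assuming you name the server identity “server” (both the name and the nopass option are choices, not requirements):

$ sudo ./easyrsa gen-req server nopass
$ sudo ./easyrsa sign-req server server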

Server Configuration

Look in your openvpn/examples/ directory for configuration file examples. This is a complete example server configuration, and it goes in /etc/openvpn/server.conf. Edit the commented options for your own setup:

port 1194
proto udp
dev tun
keepalive 10 120
status openvpn-status.log
verb 3
persist-tun
persist-key
ifconfig-pool-persist /etc/openvpn/ipp.txt

# Your server keys
ca /etc/openvpn/easyrsa/pki/ca.crt
cert /etc/openvpn/easyrsa/pki/issued/server.crt
key /etc/openvpn/easyrsa/pki/private/server.key
dh /etc/openvpn/easyrsa/pki/dh.pem

# Set server mode, and define a virtual pool of IP
# addresses for clients to use. Use any subnet
# that does not collide with your existing subnets.
server 192.168.10.0 255.255.255.0

# Set up route(s) to subnet(s) behind
# OpenVPN server
push "route 192.168.11.0 255.255.255.0"
push "route 192.168.12.0 255.255.255.0"

Client Configuration

Use this on your client. This example is /etc/openvpn/client.conf:

client
dev tun
proto udp
resolv-retry infinite
nobind
persist-key
persist-tun

# The hostname/IP address and port of the server
remote servername 1194

# Your certificates and keys
cert /etc/openvpn/easyrsa/pki/AliceRemote.crt
ca /etc/openvpn/easyrsa/pki/ca.crt
key /etc/openvpn/easyrsa/pki/private/AliceRemote.key

Connecting to the Server

Start OpenVPN on the server from the command line by referencing the configuration file, for example openvpn /etc/openvpn/server.conf. Start it on the client in the same way, for example openvpn /etc/openvpn/client.conf. You may name your configuration files anything you want, and you may create multiple files for multiple server and client configurations. Once your OpenVPN tunnel is established it’s just like having a shielded Ethernet cable to carry your session safely over untrusted networks, and you can log into your usual programs just as though you were sitting next to the server.
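
If your distribution ships systemd units for OpenVPN, you can also run it as a service; with many packages, something like this works (an assumption about your packaging; the instance name must match a server.conf in /etc/openvpn):

$ sudo systemctl start openvpn@server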

This should get you up and running. There are many configuration and command line options for OpenVPN; see the OpenVPN Documentation. Easy-RSA has a lot of good howtos on GitHub, and bundled in the tarball.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

How the Four Components of a Distributed Tracing System Work Together

Ten years ago, essentially the only people thinking hard about distributed tracing were academics and a handful of large internet companies. Today, it’s turned into table stakes for any organization adopting microservices. The rationale is well-established: microservices fail in surprising and often spectacular ways, and distributed tracing is the best way to describe and diagnose those failures.

That said, if you set out to integrate distributed tracing into your own application, you’ll quickly realize that the term “Distributed Tracing” means different things to different people. Furthermore, the tracing ecosystem is crowded with partially-overlapping projects with similar charters. This article describes the four (potentially) independent components in distributed tracing, and how they fit together.

Distributed tracing: A mental model

Most mental models for tracing descend from Google’s Dapper paper. OpenTracing uses similar nouns and verbs, so we will borrow the terms from that project…

Read more at OpenSource.com

Introduction to Security and TLS

IoT (Internet of Things) is all about connecting to the internet, and even more about security. Without security and encrypted communication, anyone can potentially see what I send or receive. That is especially bad if passwords or user names are sent unencrypted. So, encryption and secure communication are key. The solution is to use a connection that uses the TLS (Transport Layer Security) protocol, which I want to use for my MQTT communication (see MQTT with lwip and NXP FRDM-K64F Board).

This article walks through the basic principles for secure communication using TLS with MQTT in mind. TLS is the successor of SSL (Secure Sockets Layer), and the two are often used together (TLS/SSL). TLS (as the name indicates) is an encryption on the transport layer: that means that the application layer does not have to implement the encryption itself. Instead, it configures the transport layer to use the encryption protocol.

Read more at DZone

How and Why to Secure Your Linux System with VPN and Firejail

We have previously discussed VPNs and Firejail here on Linux.com, but here’s a quick refresher to help you remember why you would want to use these tools:

  • VPNs help protect your Internet traffic from prying eyes — such as those of your ISP, the wi-fi provider you happen to be using, or any malicious attackers who may be in control of various pieces of routing equipment between you and the resource you are trying to access. VPNs may also enable you to gain access to online content that is for some reason unavailable via your current online provider.

  • Firejail is a tool that helps set up additional sandboxing around your desktop applications to help further reduce the impact of accessing potentially malicious content online. It is most commonly used in conjunction with Firefox.

I am very fond of combining both VPN and Firejail on my travel laptop (where I cannot use QubesOS), but I have recently discovered that I was leaving myself exposed to online tracking via so-called “WebRTC leaks.” My VPN provider offers a convenient testing page to see how well protected my connection is, and they very helpfully alerted me to this problem:

WebRTC is an open-source protocol for establishing peer-to-peer Real-Time Communication (RTC) between two browsers. It is normally used for native audio and video conferencing that does not require any additional plug-ins or extensions, and works across different browsers and different platforms. If you’ve ever used Google Hangouts, you’ve relied on WebRTC.

What makes WebRTC leak your real IP address? As part of establishing the communication channel, both parties exchange their networking information in order to find the network route that offers the least amount of latency. So, WebRTC will tell the remote party all of your local IP addresses, in hopes that it will help establish a better communication channel. Obviously, if exposing your local IP address is specifically something you do not want, then this is a problem.

One way to plug this leak is to turn off WebRTC entirely (in Firefox, for example, by setting media.peerconnection.enabled to false in about:config). However, if you do this, then you will no longer be able to use online conferencing — and depending on your needs, this may not be what you want. Thankfully, if you’re already using Firejail, then you can benefit from its support for network namespaces in order to hide your local networking information from WebRTC.

Setting up network namespaces manually is a bit of a chore, since in addition to the virtual interface, you will need to set up things like IP forwarding and DNS resolving (this script may help get you going, if you are interested). However, if you are using Fedora, then you should already have something you can use for this purpose: a virbr0 virtual bridge that is automatically available after the default workstation install.
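
For the curious, the manual route boils down to something like this (a rough sketch with placeholder names and addresses; Firejail does the equivalent for you):

$ sudo ip netns add vpnjail                         # create a namespace
$ sudo ip link add veth0 type veth peer name veth1  # a virtual ethernet pair
$ sudo ip link set veth1 netns vpnjail              # move one end inside
$ sudo ip addr add 10.10.10.1/24 dev veth0
$ sudo ip link set veth0 up
$ sudo ip netns exec vpnjail ip addr add 10.10.10.2/24 dev veth1
$ sudo ip netns exec vpnjail ip link set veth1 up
$ sudo ip netns exec vpnjail ip route add default via 10.10.10.1
$ sudo sysctl -w net.ipv4.ip_forward=1              # then NAT and DNS, the tedious part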

Here’s what happens when you start Firefox inside a firejail and tell it to use virbr0 for its networking:

$ firejail --net=virbr0 firefox -no-remote

Interface   MAC           IP               Mask             Status
lo                        127.0.0.1        255.0.0.0        UP
eth0        x:x:x:x:x:x   192.168.124.38   255.255.255.0    UP

Default gateway 192.168.124.1

Firejail automatically obtains a private IP address inside the virtual networking range and sets up all the necessary routing information to be able to get online. And indeed, if I now look on my VPN provider’s verification page, they give me a clean bill of health:

I ended up writing a small wrapper that helps me bring up Firefox in various profiles — one I use for work, one I use for personal browsing, and one I bring up when I want a temporary junk profile for testing (a kind of “incognito mode” on steroids).
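
A minimal version of such a wrapper might look like this (a sketch; the profile names and the use of Firefox’s -P profile switch are my assumptions, not the original script):

#!/bin/sh
# Usage: ffjail [work|personal|junk]
# Start Firefox in a firejail on the virbr0 bridge, with a separate profile.
PROFILE="${1:-junk}"
exec firejail --net=virbr0 firefox -no-remote -P "$PROFILE"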

You can test if you are vulnerable to WebRTC leaks yourself on the browserleaks site. If there is anything showing up in the “Local IP Address” field, then you are potentially leaking your IP information online to people who can use it against you. Hopefully, you are now well-protected against this leak — but also against others that may use similar mechanisms of passing your local networking information to a remote adversary.

Kubernetes and Microservices: A Developers’ Movement to Make the Web Faster, Stable, and More Open

As web development has evolved, there has been a tendency to develop “monolithic” applications — that is, software that contains most or all parts of the code for a given company or service. Over time, those code bases have grown to massive sizes and become hugely complex, which has led to a wide array of problems.

Developing and maintaining such applications can take an enormous number of developers. Even for companies that have made the necessary investments and hired those developers, making any changes or updates can be cumbersome and take weeks. For others, the resources needed to build the technology can seem like an insurmountable challenge.

“Software has gotten a lot more complex,” said Ben Sigelman, cofounder and CEO of LightStep, a San Francisco-based startup that makes performance management tools for microservices. “It’s gotten a lot more powerful, but it crossed a threshold where the complexity of the code to deliver those features requires hundreds and hundreds of developers….”

Read more at VentureBeat

Tutorial: Git for Absolutely Everyone

Imagine you have a brand new project. Naturally, you plan to store all related files in a single new directory. As work progresses, these files will change. A lot. Things will get disorganized, even messy, and at some point even completely fubar. At that point, you would want to go back in time to the most recent not-messy, still-working version of your project — if only that were possible!

Well, thanks to git, it is. Version control happens when you install git on your computer. Git is built to create that new project directory, and to keep track of all the changes you make to any and all files you put in that directory. As things progress and you make additions and changes, git takes a “snapshot” of the current version. And that, friends, is version control: make a small change, take a snapshot, make another small change, take a snapshot… and save all of these snapshots in chronological order. You can then use git to step back and forth as necessary through each version of your project directory.
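
In command-line terms, that snapshot loop might look something like this (a minimal sketch):

$ git init                        # start tracking this directory
$ git add .                       # stage the current state of all files
$ git commit -m "small change"    # take a snapshot
$ git log --oneline               # list your snapshots, newest first
$ git checkout <commit-id>        # step back in time to an earlier snapshot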

So when you screw up, git is like having a magic ability to go back in time to the last good version before you gaffed. Thus, version control. Git is not the only version control system out there, but it is probably the most widely used.

Read more at The New Stack