
Arm Launches Mbed Linux and Extends Pelion IoT Service

Politics and international relations may be fraught with acrimony these days, but the tech world seems a bit friendlier of late. Last week Microsoft joined the Open Invention Network and agreed to grant a royalty-free, unrestricted license of its 60,000-patent portfolio to other OIN members, thereby enabling Android and Linux device manufacturers to avoid exorbitant patent payments. This week, Arm and Intel kept up the happy talk by agreeing to a partnership involving IoT device provisioning.

Arm’s recently announced Pelion IoT Platform will align with Intel’s Secure Device Onboard (SDO) provisioning technology to make it easier for IoT vendors and customers to onboard both x86 and Arm-based devices using a common Pelion platform. Arm also announced Pelion-related partnerships with myDevices and Arduino (see further below).

In another nod to Intel, Arm unveiled a new, IoT-focused Mbed Linux OS distribution that combines the Linux kernel with tools and recipes from the Intel-backed Yocto Project. The distro also integrates security and IoT connectivity code from its open source Mbed RTOS.

When Pelion was announced, Arm mentioned cross-platform support, but there were few details. Now with the Intel SDO deal and the launch of Mbed Linux OS, Arm has formally expanded Pelion from an MCU-only IoT data aggregation platform to one that supports more advanced x86 and Cortex-A based systems.

Mbed Linux OS

The early stage Mbed Linux OS will be released by the end of the year as an invitation-only developer preview. Both the OS source code and related test suites will eventually be open sourced.

In the Mbed Linux OS announcement, Arm’s Mark Wright pitches the distro as a secure, IoT-focused “sibling” to the Cortex-M-focused Mbed, designed instead for Cortex-A processors. Arm will support Mbed Linux with its MCU-oriented Mbed community of 350,000 developers and will offer support for popular Linux development boards and modules. The Softbank-owned company will also supply optional commercial support.

Like Mbed, Mbed Linux will be “deeply integrated” with the Pelion IoT Platform in order “to simplify lifecycle management.” The Pelion support provides device provisioning, connectivity, and updates, thereby enabling development teams to update the OS and the applications independently, says Wright. Working with the Pelion Device Management Application, Mbed Linux OS can “simplify in-field provisioning and eradicate the need for legacy serial connections for initial device configuration,” says Arm.

Mbed Linux will support Arm’s Platform Security Architecture and hardware based TrustZone security to enable secure, signed boot and signed updates. It will also enable deployment of applications in secure, OCI-compliant containers.

Arm did not specify which components of the Yocto Project code it would integrate with Mbed. In late August, Arm and Facebook joined Intel and TI as Platinum members of the Yocto Project. The Linux Foundation hosted project was launched by Intel but is now widely used on Arm as well as x86 based IoT devices.

Despite common references to “Yocto Linux,” Yocto Project is not a distribution, but rather a collection of open source templates, tools, and methods for creating custom embedded Linux-based systems. A Yocto foundation underlies most major commercial Linux distributions such as Wind River Linux and Mentor Embedded Linux and is often spun into custom builds by DIY developers, especially for resource constrained IoT devices.

We saw no mention of a contribution from the Arm-backed Linaro initiative to either Mbed Linux or Pelion. Linaro, which oversees the 96Boards project, develops open source embedded Linux and Android software components. The Yocto and Linaro projects were initially seen as rivals, but they have grown increasingly complementary. Linaro’s Arm toolchain can be used within the Yocto Project, as well as with the related OpenEmbedded build environment and BitBake build engine.

Developers can sign up for one of the limited number of invites to participate in the upcoming Mbed Linux OS developer preview.

Arm’s Pelion partnerships

Arm’s Pelion IoT Platform will soon run on devices with Intel’s recently launched Secure Device Onboard (SDO) service, enabling customers to deploy both Arm and x86 based systems controlled by the common Pelion platform. “We believe this collaboration is a big step forward for greater customer choice, fewer device SKUs, higher volume and velocity through IoT supply chains and lower deployment cost,” says Arm.

The SDO “zero-touch onboarding service” depends on Intel Enhanced Privacy ID (EPID) data embedded in chips to validate and provision IoT devices automatically. SDO automatically discovers and provisions compliant devices during installation. This “late binding” approach reduces provisioning times from a range of 20 minutes to an hour down to a few minutes, says Intel.

Unlike PKI based authentication methods, “SDO does not insert Intel into the authentication path.” Instead, it brokers a rendezvous URL to the Intel SDO service where Intel EPID opens a private authentication channel between the device and the customer’s IoT platform.

The Pelion IoT Platform offers its own scheme for provisioning and configuration of devices using cryptographic identities built into Cortex-M MCUs running Mbed. With the new Mbed Linux, Pelion will also be able to accept devices that run on Cortex-A chips with TrustZone security.

Pelion combines Arm’s Mbed Cloud connected Mbed IoT Device Management Platform with technologies it acquired via two 2018 acquisitions. The new Treasure Data unit supplies data management services to Pelion. Meanwhile, Stream Technologies provides Pelion managed gateway services for wireless technologies including cellular, LoRa, and satellite communications.

The partnership with myDevices extends Pelion support to devices that run myDevices’ new IoT in a Box turnkey IoT software for LoRa gateways and nodes. myDevices, which is known for its Linux- and Arduino-friendly Cayenne drag-and-drop IoT development and management platform, launched IoT in a Box to enable easy setup of a LoRa gateway and LoRa sensor nodes. Different IoT in a Box versions target specific applications ranging from home and building management to storage lockers to refrigeration systems. Developers can try out Pelion services together with IoT in a Box via a new, $199 IoT Starter Kit.

The Arduino partnership is a bit less clear.  It appears to extend Arm’s Pelion Connectivity Management stack, based on the Stream Technologies acquisition, to Arduino devices. The partnership gives users the option of selecting “competitive global data plans” for cellular service, says Arm.

More details on this and the other Pelion announcements should emerge at Arm TechCon in San Jose, California and IoT Solution World Congress in Barcelona, both of which run Oct 16-18. Intel also offers a video overview of the Pelion/SDO mashup.

Join us at Open Source Summit + Embedded Linux Conference Europe in Edinburgh, UK on October 22-24, 2018, for 100+ sessions on Linux, Cloud, Containers, AI, Community, and more.

Prototyping IoT Applications using Beaglebone and Debian

The recent emergence of the Internet of Things (IoT) and associated applications into the mainstream means more developers will be required to develop these systems in the coming years. Developers without traditional embedded systems backgrounds are being asked to build applications to meet consumer demand and many of these developers are looking for quick ways to get started.

This post will detail some quick methods for prototyping IoT applications without needing to understand traditional embedded systems details such as bootloaders, device drivers and OS kernels. Using readily available OS images will allow developers to bring up new platforms and quickly focus on IoT features while deferring hardware specifics to later in the development cycle.

Note that the focus of this discussion is on applications suitable for a 32- or 64-bit system running Linux; we will not discuss smaller systems more suitable for an RTOS.

Application Architecture

The application we will develop here is similar in scope to many existing IoT designs, albeit extremely simplified. The goal is to familiarize readers with the general workflow and to create something to be used as a jumping off point for future development.

We will be developing a weather station application, which is an extremely common project in the maker community. We will customize our application code to produce two classes of systems common in IoT designs: sensors and actuators. Our sensor systems will act as remote devices that return weather data to a central location. Our actuator systems will read the weather data and effect changes in their environment, such as darkening windows, based on the weather readings. The devices in our fleet will communicate with each other to implement the desired IoT application. Note that the actual weather sensing devices and actuators will be stub implementations so that readers can replicate this without needing custom hardware, but it should be reasonably straightforward to link in physical devices if they are available.

Initial Application Development

Many of the libraries, networking protocols, and languages needed for developing IoT applications are available on your PC Linux installation provided by your distribution. For features that are not dependent on the target hardware, it makes sense to do as much development as possible on your PC system. As a developer, you are likely very comfortable in that environment and have the tools you need installed and configured to your liking. Additionally, this system will be significantly more powerful than your target board and your development cycles will be reduced.

Note that the examples shown here were run on an Ubuntu 18.04 system but similar steps should be available on most host OS distributions.

The first step is to download the code examples and install the necessary Python libraries on your PC host. Clone the following git repository to access the sample files:

https://github.com/drewmoseley/iot-mqtt-bbb.git

The two scripts, also shown in listings 1 and 2, represent code running separately on an actuator device and a sensor device.

#!/usr/bin/python
import paho.mqtt.client as mqtt

def onConnect(client, obj, flags, rc):
    print("Connected. rc = %s " % rc)
    client.subscribe("iot-bbb-example/weather/temperature")
    client.subscribe("iot-bbb-example/weather/precipitation")

# Dummy function to act on temperature data
def temperatureActuator(temperature):
    print("Temperature = %sn" % temperature)

# Dummy function to act on precipitation data
def precipitationActuator(precipitation):
    action = {
        "rain" : "Grab an umbrella",
        "sun" : "Don't forget the sunscreen",
        "hurricane" : "Buy bread, water and peanut butter."
    }
    print("Precipitation = %s" % precipitation)
    print("t%sn" % action[precipitation])
    
def onMessage(mqttc, obj, msg):
    callbacks = {
        "iot-bbb-example/weather/temperature" : temperatureActuator,
        "iot-bbb-example/weather/precipitation" : precipitationActuator
    }
    callbacks[msg.topic](msg.payload)

client = mqtt.Client()
client.on_connect = onConnect
client.on_message = onMessage
client.connect("test.mosquitto.org", 1883, 60)
client.loop_forever()

Listing 1: Actuator Sample

#!/usr/bin/python

import paho.mqtt.publish as mqtt
import time
import random

# Dummy function to read from a temperature sensor.
def readTemp():
    return random.randint(80,100)

# Dummy function to read from a rain sensor.
def readPrecipitation():
    r = random.randint(0,10)
    if r < 4:
        return 'rain'
    elif r < 8:
        return 'sun'
    else:
        return 'hurricane'
    
while True:
    mqtt.single("iot-bbb-example/weather/temperature", readTemp(), hostname="test.mosquitto.org")
    mqtt.single("iot-bbb-example/weather/precipitation", readPrecipitation(), hostname="test.mosquitto.org")
    time.sleep(10)

Listing 2: Sensor Sample

Now, let’s install python and the required libraries. Note that your system may need additional libraries.

$ sudo apt install python python-paho-mqtt

Finally, we can invoke the scripts. In this case we are simply using two terminal windows on our PC system but these could just as well have been executed on any two internet connected machines. Every 10 seconds, the sensor script will generate dummy weather data which is then passed to the actuator script over MQTT. For now the only action taken is to print a pithy message to standard output. Press ctrl-c to exit from these scripts.

$ python ./iot-mqtt-bbb-sensor.py

Listing 3: Server/sensor invocation

$ python ./iot-mqtt-bbb-actuator.py 
Connected. rc = 0 
Temperature = 86

Precipitation = hurricane
    Buy bread, water and peanut butter.

Temperature = 96

Precipitation = sun
    Don't forget the sunscreen

Temperature = 84

Precipitation = rain
    Grab an umbrella

Listing 4: Client/actuator invocation
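The dummy data published here is harmless, so the public test.mosquitto.org broker is fine for this exercise. If you would rather keep the traffic on your own machine, one option (a sketch, assuming the mosquitto broker package from the Ubuntu repositories) is to run a local broker and point both scripts at it by changing hostname="test.mosquitto.org" to hostname="localhost":

$ sudo apt install mosquitto

The rest of the workflow is unchanged.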

On-Target Development

Now that we have some working code on our PC system, we can turn our attention to the desired target board. In this case, we are using the Beaglebone Black; useful reference documentation for this platform is available from beagleboard.org.

The first step is to configure an SD Card with the Debian image. Download the latest IoT Debian image from https://beagleboard.org/latest-images. This image can be written to your SD Card with the Etcher utility.

Once you have created the SD Card, insert it into your Beaglebone Black system and connect an ethernet cable. Now, press and hold the Boot Button (S2) near the SD card slot, and connect the power adapter. Holding switch S2, in this case, forces the system to boot off of the SD Card rather than the onboard eMMC. For more details, Adafruit has good instructions.

There are two main interfaces for development on the Beaglebone.

  1. HDMI + USB Keyboard: this method uses hardware you likely already have, but it can be constraining given the limited horsepower of the Beaglebone compared to your PC system. Additionally, for this exercise, we are using the IoT image provided by beagleboard.org, which does not have a graphical environment. Note that the Beaglebone Black has a micro-HDMI port and thus will likely require a custom cable or adapter.

  2. Serial console: this method uses the venerable RS-232 serial ports and protocols to allow you to have a text-mode connection to the Beaglebone. This requires an appropriate cable and a serial terminal emulator on your PC. For Windows systems, Putty is a good choice. For MacOS systems, Serial is a good choice. Command line programs such as Picocom and Minicom are available for most Linux distributions as well as Windows and MacOS.

Once you have set up your chosen interface to the Beaglebone, you will see a login prompt. The default username is “debian” with a password of “temppwd”. After logging in, set a custom password. This simple step will make your device more secure than many commercially available IoT devices. Devices with well-known default credentials are the root cause of issues such as the Mirai botnet.

debian@beaglebone:~$ passwd
Changing password for debian.
(current) UNIX password: 
Enter new UNIX password: 
Retype new UNIX password: 
passwd: password updated successfully

Listing 5: Change default password

Now, let’s make sure the system is running the latest Debian updates. This is another IoT security best practice.

debian@beaglebone:~$ sudo apt-get update
debian@beaglebone:~$ sudo apt-get upgrade
debian@beaglebone:~$ sudo reboot

Listing 6: Upgrade all packages

The dependencies needed on the Debian/BBB system are a bit different. There is not a standard Debian package available containing the paho-mqtt library, so we use the standard Python pip package manager instead.

debian@beaglebone:~$ sudo apt-get install python python-pip
debian@beaglebone:~$ pip install paho-mqtt

Listing 7: Installing dependencies on target system

Now simply transfer the python scripts to the target and invoke them as done above. You can use scp, wget, curl or many other mechanisms to deploy the scripts onto your device. Similar to our testing on the PC above, we run two text mode logins to invoke the scripts in separate windows. The output from these is identical to listings 3 and 4 above.
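For example, a single scp command from your PC copies both scripts over (substitute your board’s actual IP address or hostname for the placeholder):

$ scp iot-mqtt-bbb-sensor.py iot-mqtt-bbb-actuator.py debian@<beaglebone-ip>:~/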

Field Deployment Considerations

Now that we have a working application in the tightly controlled environment that is our local lab, we need to consider what will change as our devices are deployed. The issues discussed here are general in nature and your application use cases will certainly dictate other considerations.

Network Connectivity

The first consideration is to understand what the network connectivity will be in your deployed location. In the exercise above, we used a wired ethernet connection, since the Beaglebone Black does not have a WiFi adapter. It is possible to use a USB WiFi adapter, and certain versions of the Beaglebone hardware have WiFi built in. If you plan to use WiFi, you will need to develop a mechanism for the end user of the device to specify the WiFi credentials. Both WiFi and wired ethernet connections are well supported in Linux on most IoT hardware and provide a reasonable data rate. Obviously, the speed will ultimately be dictated by the networking provider. However, this type of connection is typically not metered (at least in the United States), so concerns over runaway billing cycles are somewhat mitigated.

IoT applications have other choices for connectivity as well. Some applications use cellular phone data protocols and include a SIM card for connecting to a provider’s network. Depending on the protocol supported by your carrier, and the location of your deployment, you may need to consider throughput and bulk data costs in your system design. Cellular connections are popular in IoT applications due to their ubiquity and relative ease of remote deployment.

Other connectivity options can be considered based on your application requirements:

  • Bluetooth/Bluetooth Low Energy: this is useful if your end device does not need a full internet connection and has a gateway device available through which the IoT data of interest can be proxied. This is also commonly used for mesh networks.

  • LoRa®/LoRaWAN™: this is a city-scale wireless protocol that is being developed and governed by an industry alliance. Availability is limited but if your rollout plans are in a geography supported by a LoRa network then this is a good choice.

  • Sigfox: this is another city-scale wireless protocol but is controlled by a single commercial entity. Again, availability is limited but this may work depending on your needs.

Environment Hostility

Typically, IoT devices are managed and configured over a web interface or a custom application on your desktop or mobile device. Extreme care must be taken and security professionals must be consulted on the design of this portion of your system. Decisions will need to be made on what devices are able to access the configuration interface and whether those devices need to be on the local network or if they will be able to access the device over the internet.

Additionally, IoT devices are regularly installed in uncontrolled environments, such as coffee shops and restaurants, and as such are vulnerable to a wide range of attacks including hostile actors on the local network as well as physical attacks.

Default Credentials

Recent attempts at legislation in the United States have tried to regulate IoT devices to help reduce the likelihood of attacks causing massive disruptions of the internet. The main concrete proposal is that devices must not share initial login and password values. Either a unique set of credentials should be generated for each device during manufacturing and provided to the purchaser, or the design should force the end user to set up a username and/or password; one of these approaches should be considered a mandatory requirement in your design.

Physical Access

As mentioned above, the fact that many of these devices are deployed in environments outside of the manufacturer’s control results in a wide range of potential attack vectors. Some attacks require physical access to vulnerable devices. These can involve offline access to device storage, rebooting devices into manufacturing modes, and simple denial of service by disconnecting or unplugging devices. Storage media encryption should be used to ensure that offline access to the data is not possible. Secure hardware mechanisms such as Trusted Platform Modules and Arm TrustZone should be reviewed to determine if they are applicable for your application.

Device Updatability

Finally, the simple fact is that all software has bugs. And more software has more bugs. The amount of code that is running on today’s IoT devices is staggering. Not all bugs will be vulnerabilities leading to exploits, but it should be taken as a given that you will need to provide updates to your deployed devices.

In addition to providing fixes for bugs (and likely security vulnerabilities), providing a strong over-the-air (OTA) update mechanism allows you to deploy new features to your end users. Many minimum viable products are also released in order to get to market quickly with the intent of updating them later with more comprehensive features.

A few characteristics that should be considered when reviewing update solutions are as follows:

  1. Security: does the solution in question follow industry best practices for certificate and encryption management? Is there active development of the solution to resolve security issues within the update technology?

  2. Robustness: what is the risk of bricking devices due to a failed or interrupted update?

  3. Fleet management: what is the interface for managing a large fleet of device updates, and how well does it integrate with your other device management needs?

  4. Getting started: how easy is it to add update capability to your design? Does it require your team to become experts on the update technology, or can you easily integrate it and remain focused on your system's value-add?

There are many open source options available, including the project I’m involved with, Mender.io. Mender Hub, a recently released community-supported repository, enables OTA updates on any board and operating system.

Next Steps

Let’s wrap up by discussing a few bigger picture items to consider early in your development process. These items are certainly not unique to IoT devices but considering them early can save a lot of pain and rework later.

Manufacturing Considerations

Make sure to involve a member of the team who will be responsible for manufacturing your devices early on in the discussions. Decisions that seem reasonable to us as system developers may have unexpected impacts on manufacturing time. As an example, consider that typically all devices will need a unique ID of some kind; this can be used to identify the device to your device management infrastructure and as a seed for cryptographic validation of the devices to ensure that only authorized devices can connect. One simple data point that can be used as the device ID is the MAC address of any onboard ethernet device (if you have one). In the lab, it is simple enough to boot the device into the target operating system to determine the MAC address. However, in manufacturing, requiring the system software to boot in order to complete device assembly can add undue complexity, cost, and time to the assembly line. Where possible, the device ID data should be defined and deterministic without requiring the OS to boot.
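As an illustration only, here is a minimal sketch of reading the MAC address on a booted Linux system and turning it into a device ID. It assumes the primary interface is named eth0, as it is on the stock Beaglebone Black Debian image, and, as noted above, it requires the OS to boot, so it is better suited to the lab than to the assembly line.

#!/usr/bin/python
# Minimal sketch (illustration only): derive a device ID from the MAC
# address of the primary ethernet interface. Assumes a Linux system where
# the interface is named eth0; adjust the name for your hardware.

def get_device_id(interface="eth0"):
    with open("/sys/class/net/%s/address" % interface) as f:
        mac = f.read().strip()
    # Strip the colons so the ID is a plain 12-character hex string.
    return mac.replace(":", "")

if __name__ == "__main__":
    print("Device ID: %s" % get_device_id())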

Build Image Reproducibility

This exercise has focused on using the Debian OS as a prototyping platform for the IoT application. This is convenient because the system developers are likely already familiar with the environment and there is a large number of software tools to assist in the software development process. The workflow going from this to a production device is a bit awkward, as it requires managing a golden-master installation into which your application is then installed along with required libraries, drivers, etc. This becomes troublesome as your development team grows and access to the golden master becomes a bottleneck.

Using the debootstrap tool from the Debian project is one step removed from the golden-master. Your build workflow changes such that you are installing a base system and your customizations into a subdirectory on your PC Debian install. With this tooling you can also install software for different CPU architectures.
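As a rough sketch of that workflow (the suite name and target directory here are assumptions; match them to the Debian release you are targeting), a base system can be created with:

$ sudo apt install debootstrap
$ sudo debootstrap stretch ./bbb-rootfs http://deb.debian.org/debian

Building for a different CPU architecture than your PC (for example, armhf for the Beaglebone) additionally requires the --arch option together with qemu-user-static and binfmt support.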

One step beyond the debootstrap tool is to use a build system such as Yocto, Buildroot, or ISAR. These build systems consist of tooling to enable cross-building all the packages needed for your target. The workflow with these systems requires that you develop recipes describing how to build the system and all required packages. These recipes, coupled with configuration data, are used to cross-build the system from scratch. This removes the bottleneck on the golden master and allows independent developers to recreate the build from scratch when needed.

Security

The IoT market has a deserved reputation for producing products with glaring security flaws. Simple mistakes, such as reusing default credentials, leaving unnecessary services installed, and not providing an easy and automatic over-the-air update mechanism, have produced a large variety of IoT devices that are ripe for attack. Make sure to involve security engineers early in your design cycle to help guide your team. Also, keep in mind that the only truly secure software is that which is not installed. Make sure you minimize the software packages installed in your design to those critical to your defined use cases; avoid feature-creep and keep your scope well defined.

Conclusions

The Internet of Things is an exciting and growing industry, adding new functionality and use cases never before possible. The availability of low-cost hardware and software makes it extremely easy to get started building a design. Using the Beaglebone Black and the Debian operating system, it is very easy to get started with your system. With a bit of care and planning, you should be well on your way to developing the next great thing.

Author Bio

Drew is currently part of the Mender.io open source project to deploy OTA software updates to embedded Linux devices. He has worked on embedded projects such as RAID storage controllers, Direct and Network attached storage devices and graphical pagers.

He has spent the last 7 years working in Operating System Professional Services helping customers develop production embedded Linux systems. He has spent his career in embedded software and developer tools and has focused on Embedded Linux and Yocto for about 10 years. He is currently a Technical Solutions Engineer at Northern.Tech (the company behind the OSS project Mender.io), helping customers develop safer, more secure connected devices.  

Drew has presented at many conferences, including OSCON, Embedded Linux Conference, Southern California Linux Expo (SCALE), Embedded Systems Conference, All Systems Go, and other technology conferences.

Carnegie Mellon is Saving Old Software from Oblivion

A prototype archiving system called Olive lets vintage code run on today’s computers.

Researchers’ growing dependence on computers and the difficulty they encounter when attempting to run old software are hampering their ability to check published results. The problem of obsolescent software is thus eroding the very premise of reproducibility—which is, after all, the bedrock of science. …

We created a system called Olive—an acronym for Open Library of Images for Virtualized Execution. Olive delivers over the Internet an experience that in every way matches what you would have obtained by running an application, operating system, and computer from the past. So once you install Olive, you can interact with some very old software as if it were brand new. Think of it as a Wayback Machine for executable content.

To understand how Olive can bring old computing environments back to life, you have to dig through quite a few layers of software abstraction. At the very bottom is the common base of much of today’s computer technology: a standard desktop or laptop endowed with one or more x86 microprocessors. On that computer, we run the Linux operating system, which forms the second layer in Olive’s stack of technology.

Read more at IEEE Spectrum

Run and Scale a Distributed Crossword Puzzle App with CI/CD on Kubernetes (Part 3)

In Part 2 of our series, we deployed a Jenkins pod into our Kubernetes cluster, and used Jenkins to set up a CI/CD pipeline that automated building and deploying our containerized Hello-Kenzan application in Kubernetes.

In Part 3, we are going to set aside the Hello-Kenzan application and get to the main event: running our Kr8sswordz Puzzle application. We will showcase the built-in UI functionality to scale backend service pods up and down using the Kubernetes API, and also simulate a load test. We will also touch on showing caching in etcd and persistence in MongoDB.

Before we start the install, it’s helpful to take a look at the pods we’ll run as part of the Kr8sswordz Puzzle app:

  • kr8sswordz – A React container with our Node.js frontend UI.

  • puzzle – The primary backend service that handles submitting and getting answers to the crossword puzzle via persistence in MongoDB and caching in etcd.

  • mongo – A MongoDB container for persisting crossword answers.

  • etcd – An etcd cluster for caching crossword answers (this is separate from the etcd cluster used by the K8s Control Plane).

  • monitor-scale – A backend service that handles functionality for scaling the puzzle service up and down. This service also interacts with the UI by broadcasting websockets messages.

We will go into the main service endpoints and architecture in more detail after running the application. For now, let’s get going!

Read all the articles in the series:
 


This tutorial only runs locally in Minikube and will not work on the cloud. You’ll need a computer running an up-to-date version of Linux or macOS. Optimally, it should have 16 GB of RAM. Minimally, it should have 8 GB of RAM. For best performance, reboot your computer and keep the number of running apps to a minimum.

Running the Kr8sswordz Puzzle App

First make sure you’ve run through the steps in Part 1 and Part 2, in which we set up our image repository and Jenkins pods—you will need these to proceed with Part 3 (to do so quickly, you can run the part1 and part2 automated scripts detailed below). If you previously stopped Minikube, you’ll need to start it up again. Enter the following terminal command, and wait for the cluster to start:

minikube start

You can check the cluster status and view all the pods that are running.

kubectl cluster-info

kubectl get pods --all-namespaces
Make sure the registry and jenkins pods are up and running. 

So far we have been creating deployments directly using K8s manifests, and have not yet used Helm. Helm is a package manager that deploys a Chart (or package) onto a K8s cluster with all the resources and dependencies needed for the application. Underneath, the chart generates Kubernetes deployment manifests for the application using templates that replace environment configuration values. Charts are stored in a repository and versioned with releases so that cluster state can be maintained.

Helm is very powerful because it allows you to templatize, version, reuse, and share the deployments you create for Kubernetes. See https://hub.kubeapps.com/ for a look at some of the open source charts available. We will be using Helm to install an etcd operator directly onto our cluster using a pre-built chart.

1. Initialize Helm. This will install Tiller (Helm’s server) into our Kubernetes cluster.

helm init --wait --debug; kubectl rollout status deploy/tiller-deploy -n kube-system

2. We will deploy an etcd operator onto the cluster using a Helm Chart.  

helm install stable/etcd-operator --version 0.8.0 --name etcd-operator --debug --wait

An operator is a custom controller for managing complex or stateful applications. As a separate watcher, it monitors the state of the application, and acts to align the application with a given specification as events occur. In the case of etcd, as nodes terminate, the operator will bring up replacement nodes using snapshot data.

3. Deploy the etcd cluster and K8s Services for accessing the cluster.

kubectl create -f manifests/etcd-cluster.yaml

kubectl create -f manifests/etcd-service.yaml

You can see these new pods by entering kubectl get pods in a separate terminal window. The cluster runs as three pod instances for redundancy.

4. The crossword application is a multi-tier application whose services depend on each other. We will create three K8s Services so that the applications can communicate with one another.

kubectl apply -f manifests/all-services.yaml

5. Now we’re going to walk through an initial build of the monitor-scale application.

docker build -t 127.0.0.1:30400/monitor-scale:`git rev-parse --short HEAD` -f applications/monitor-scale/Dockerfile applications/monitor-scale

To simulate a real-life scenario, we are leveraging the git commit ID to tag all our service images, as shown in this command (`git rev-parse --short HEAD`).

6. Once again we’ll need to set up the Socat Registry proxy container to push the monitor-scale image to our registry, so let’s build it. Feel free to skip this step in case the socat-registry image already exists from Part 2 (to check, run docker images).

docker build -t socat-registry -f applications/socat/Dockerfile applications/socat

7. Run the proxy container from the newly created image.

docker stop socat-registry; docker rm socat-registry; docker run -d -e "REG_IP=`minikube ip`" -e "REG_PORT=30400" --name socat-registry -p 30400:5000 socat-registry

This step will fail if local port 30400 is currently in use by another process. You can check if there’s any process currently using this port by running the command

lsof -i :30400

8. Push the monitor-scale image to the registry.

docker push 127.0.0.1:30400/monitor-scale:`git rev-parse --short HEAD`

9. The proxy’s work is done, so go ahead and stop it.

docker stop socat-registry

10. Open the registry UI and verify that the monitor-scale image is in our local registry.

minikube service registry-ui

11. Monitor-scale has the functionality to let us scale our puzzle app up and down through the Kr8sswordz UI, so we'll need to do some RBAC work in order to provide monitor-scale with the proper rights.

kubectl apply -f manifests/monitor-scale-serviceaccount.yaml

In manifests/monitor-scale-serviceaccount.yaml, you'll find the specs for the following K8s objects.

Role: The custom “puzzle-scaler” role allows “Update” and “Get” actions to be taken over the Deployments and Deployments/scale kinds of resources, specifically to the resource named “puzzle”. This is not a ClusterRole kind of object, which means it will only work on a specific namespace (in our case “default”) as opposed to being cluster-wide.

ServiceAccount: A “monitor-scale” ServiceAccount is assigned to the monitor-scale deployment.

RoleBinding: A “monitor-scale-puzzle-scaler” RoleBinding binds together the aforementioned objects.
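If you want to confirm that these objects were created by the apply in step 11, the standard kubectl introspection commands can be used (object names as described above):

kubectl get role puzzle-scaler
kubectl get serviceaccount monitor-scale
kubectl get rolebinding monitor-scale-puzzle-scaler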

12. Create the monitor-scale deployment and the Ingress defining the hostname by which this service will be accessible to the other services.

sed 's#127.0.0.1:30400/monitor-scale:$BUILD_TAG#127.0.0.1:30400/monitor-scale:'`git rev-parse --short HEAD`'#' applications/monitor-scale/k8s/deployment.yaml | kubectl apply -f -

The sed command is replacing the $BUILD_TAG substring from the manifest file with the actual build tag value used in the previous docker build command. We'll see later how a Jenkins plugin can do this automatically.

13. Wait for the monitor-scale deployment to finish.

kubectl rollout status deployment/monitor-scale

14. View pods to see the monitor-scale pod running.

kubectl get pods

15. View services to see the monitor-scale service.

kubectl get services

16. View ingress rules to see the monitor-scale ingress rule.

kubectl get ingress

17. View deployments to see the monitor-scale deployment.

kubectl get deployments

18. We will run a script to bootstrap the puzzle and mongo services, creating Docker images and storing them in the local registry. The puzzle.sh script runs through the same build, proxy, push, and deploy steps we just ran through manually for both services.

scripts/puzzle.sh

19. Check to see if the puzzle and mongo services have been deployed.

kubectl rollout status deployment/puzzle
kubectl rollout status deployment/mongo

20. Bootstrap the kr8sswordz frontend web application. This script follows the same build, proxy, push, and deploy steps that the other services followed.

scripts/kr8sswordz-pages.sh

21. Check to see if the frontend has been deployed.

kubectl rollout status deployment/kr8sswordz

22. Check to see that all the pods are running.

kubectl get pods

23. Start the web application in your default browser.

minikube service kr8sswordz

Giving the Kr8sswordz Puzzle a Spin

Now that it’s up and running, let’s give the Kr8sswordz puzzle a try. We’ll also spin up several backend service instances and hammer it with a load test to see how Kubernetes automatically balances the load.   

1. Try filling out some of the answers to the puzzle. You’ll see that any wrong answers are automatically shown in red as letters are filled in.

2. Click Submit. When you click Submit, your current answers for the puzzle are stored in MongoDB.


3. Try filling out the puzzle a bit more, then click Reload once. This will perform a GET, which retrieves the last submitted puzzle answers from MongoDB.

Did you notice the green arrow on the right as you clicked Reload? The arrow indicates that the application is fetching the data from MongoDB. The GET also caches those same answers in etcd with a 30 sec TTL (time to live). If you immediately press Reload again, it will retrieve answers from etcd until the TTL expires, at which point answers are again retrieved from MongoDB and re-cached. Give it a try, and watch the arrows.

4. Scale the number of instances of the Kr8sswordz puzzle service up to 16 by dragging the upper slider all the way to the right, then click Scale. Notice the number of puzzle services increase.


If you did not allocate 8 GB of memory to Minikube, we suggest not exceeding 6 scaled instances using the slider.


In a terminal, run kubectl get pods to see the new replicas.

5. Now run a load test. Drag the lower slider to the right to 250 requests, and click Load Test. Notice how it very quickly hits several of the puzzle services (the ones that flash white) to manage the numerous requests. Kubernetes is automatically balancing the load across all available pod instances. Thanks, Kubernetes!


​6. Drag the middle slider back down to 1 and click Scale. In a terminal, run kubectl get pods to see the puzzle services terminating.


7. Now let's try deleting the puzzle pod to see Kubernetes restart a pod using its ability to automatically heal downed pods.

a. In a terminal enter kubectl get pods to see all pods. Copy the puzzle pod name (similar to the one shown in the picture above).

 b. Enter the following command to delete the remaining puzzle pod. 
kubectl delete pod [puzzle podname]

c. Enter kubectl get pods to see the old pod terminating and the new pod starting. You should see the new puzzle pod appear in the Kr8sswordz Puzzle app.

What’s Happening on the Backend

We’ve seen a bit of Kubernetes magic, showing how pods can be scaled for load, how Kubernetes automatically handles load balancing of requests, as well as how Pods are self-healed when they go down. Let’s take a closer look at what’s happening on the backend of the Kr8sswordz Puzzle app to make this functionality apparent.  

(Diagram: Kr8sswordz Puzzle app architecture)

1. A pod instance of the puzzle service. The puzzle service uses a LoopBack data source to store answers in MongoDB. When the Reload button is pressed, answers are retrieved with a GET request from MongoDB, and the etcd client is used to cache answers with a 30-second TTL.

2. The monitor-scale pod handles scaling and load test functionality for the app. When the Scale button is pressed, the monitor-scale pod uses the Kubernetes API to scale the number of puzzle pods up and down in Kubernetes.

3. When the Load Test button is pressed, the monitor-scale pod handles the load test by sending several GET requests to the service pods based on the count sent from the front end. The puzzle service sends hits to monitor-scale whenever it receives a request. Monitor-scale then uses websockets to broadcast to the UI to have pod instances light up green.

4. When a puzzle pod instance goes up or down, the puzzle pod sends this information to the monitor-scale pod. The up and down states are configured as lifecycle hooks in the puzzle pod k8s deployment, which curls the same endpoint on monitor-scale (see kubernetes-ci-cd/applications/crossword/k8s/deployment.yml to view the hooks). Monitor-scale persists the list of available puzzle pods in etcd with set, delete, and get pod requests.


We do not recommend stopping Minikube (minikube stop) before moving on to do the tutorial in Part 4. Upon restart, it may create some issues with the etcd cluster.

Automated Scripts

If you need to walk through the steps we did again (or do so quickly), we’ve provided npm scripts that will automate running the same commands in a terminal.  

1. To use the automated scripts, you’ll need to install NodeJS and npm.

On Linux, follow the NodeJS installation steps for your distribution. To quickly install NodeJS and npm on Ubuntu 16.04 or higher, use the following terminal commands.

 a. curl -sL https://deb.nodesource.com/setup_7.x | sudo -E bash -
 b. sudo apt-get install -y nodejs

On macOS, download the NodeJS installer, and then double-click the .pkg file to install NodeJS and npm.

2. Change directories to the cloned repository and install the interactive tutorial script:

 a. cd ~/kubernetes-ci-cd
 b. npm install

3. Start the script

npm run part1 (or part2, part3, part4 of the blog series)

4. Press Enter to proceed running each command.

Up Next

Now that we’ve run our Kr8sswordz Puzzle app, the next step is to set up CI/CD for our app. Similar to what we did for the Hello-Kenzan app, Part 4 will cover creating a Jenkins pipeline for the Kr8sswordz Puzzle app so that it builds at the touch of a button. We will also modify a bit of code to enhance the application and enable our Submit button to show white hits on the puzzle service instances in the UI.  

Curious to learn more about Kubernetes? Enroll in Introduction to Kubernetes, a FREE training course from The Linux Foundation, hosted on edX.org.

This article was revised and updated by David Zuluaga, a front end developer at Kenzan. He was born and raised in Colombia, where he studied for his BE in Systems Engineering. After moving to the United States, he received his master's degree in computer science at Maharishi University of Management. David has been working at Kenzan for four years, dynamically moving throughout a wide range of areas of technology, from front-end and back-end development to platform and cloud computing. David has also helped design and deliver training sessions on Microservices for multiple client teams.

4-Phase Approach for Taking Over Large, Messy IT Systems

Everyone loves building shiny, new systems using the latest technologies and especially the most modern DevOps tools. But that’s not the reality for lots of operations teams, especially those running larger systems with millions of users and old, complex infrastructure.

It’s even worse for teams taking over existing systems as part of company mergers, department consolidation, or changing managed service providers (MSPs). The new team has to come in and hit the ground running while keeping the lights on using a messy system they know nothing about.

We’ve spent a decade doing this as a large-scale MSP in China, taking over and managing systems with 10 million to 100 million users, usually with little information. This can be a daunting challenge, but our four-phase approach and related tools make it possible. If you find yourself in a similar position, you might benefit from our experience.

Read more at OpenSource.com

Anaxi App Shows the State of Your Software Project

If you work within the world of software development, you'll find yourself bouncing back and forth between a few tools. You'll most likely use GitHub to host your code, but find yourself needing some task/priority software. This could be GitHub itself or another tool like Jira. Of course, you may also find yourself collaborating across several tools, like Slack, and across several projects. Considering that it's already hard to keep track of the progress on one project, working across several of them becomes a struggle. This problem gets worse as you move up the ranks of management, where it becomes increasingly difficult to assimilate and rationalize all of this information. To combat this, Anaxi was created to give you all the information on the state and progress of your projects in one single interface.

Why measure dev progress?

According to LinkedIn data, there are currently over 3,000 software engineers employed on average at Fortune 4,000 companies. So, how do those companies measure the progress of their software projects and the performance of their teams? After all, you can't manage what you don't measure, so the best of them will manually compute portions of this data on a weekly basis. This turns into a tedious and time-consuming task. In fact, it directly impacts your bottom line. Anaxi cuts out this task and may significantly improve software development efficiency within organizations. Teams will know the impact of any process change, which task they should focus on, and whether or not to anticipate any bottlenecks. This also helps reduce the loss in revenue due to shipping critical issues. According to Tricentis, there was a total of $1.7T in revenue lost in 2017 alone due to software failures and poor bug prioritization.

What is Anaxi?

Anaxi currently offers a free iPhone app that provides the full picture of your GitHub projects to help you understand and manage them better. Anaxi has a lot of features based on what they call reports. Reports are lists of issues or pull requests that you can filter as you see fit using labels, state, milestone, authors, assignees, and more. This allows you to monitor those critical bugs or see the progress of your team’s work. For each project, you can select the people on your team so you can easily see what each person is doing and help where help is needed most. It can also be used to keep track of your own work and priorities, and because it’s an iPhone app, it grants quick access to issues and pull requests that have been assigned. There’s also a customizable color indicator for report deadlines that will help you prioritize what to work on.

How to set up the app

First, you’ll need an iPhone and access to the app store. Go into the App Store and download it. Once you open the app, the landing page will appear.


To get started, press the Connect GitHub button at the bottom of the screen and enter your GitHub credentials. Next, you'll be asked to select projects that you want to monitor. Anaxi will automatically select some projects. A button at the bottom lets you edit this list to add or remove projects. If you forget a project, or realize that you don't want to monitor a project anymore, you can change it once the initial setup is over.


When you have your projects selected, hit the Next button. It's time to select your team. Anaxi will start by automatically selecting the people you interact with the most for the projects you selected. Just like the previous step, you can edit this list by pressing the button at the bottom, and you can add or remove team members later.


Next, you will be prompted to help set up the reports for your projects. Anaxi will also start by automatically choosing labels that are most used, but you can customize which labels you want to monitor by clicking the button at the bottom of each project. Later on, you can create more tailored reports by adding issue or pull request reports when inside of a project folder.


Now, Anaxi is set up and a view of reports appears. Mine are all green because I don’t have any activity on my selected projects. From this menu, you can see which projects have pull requests at the top. Clicking on these will pull up open tickets on these projects. If you scroll down, you can see all the pull requests and issues that are assigned to you and your team. Then you can see individual views near the bottom for all of your projects. The order of these can be changed at any time by hitting the edit button in the top right and dragging the folders around.


Let’s choose an open-source project and see what it looks like when more people are working together and there are more issues and pull requests. For this example, let’s use kubernetes/kubernetes. As you can see below, Anaxi created a report for the new project, and added it to the current full report that already existed. Now that there is a more active GitHub project present in my reports, we can see the full extent of Anaxi in action.


To edit any part of the reports, simply click on that section, and then click on the edit button in the top right. Once there, you can change filters and if you scroll to the bottom, you can change the values for when an aspect of a report displays green, yellow or red.

My experience

After using Anaxi for a little while, scrolling through my GitHub projects doesn't feel like a chore anymore. It's easy to choose one project and see everything that I want to see. One thing that was slightly bothersome is that every time you click on a project, it has to query the GitHub API instead of caching the data. This results in some wait time when you are trying to switch back and forth between multiple projects in quick succession, but that's the only downside I've seen so far. Changing the colors or filters on aspects of reports is surprisingly easy and intuitive. Another thing I like is that you can create a due date for a certain issue or pull request. This is great when you want to build dates into your projects. I feel like this would really help me when I want to prioritize certain things; instead of creating Google Calendar notifications, I can do this on the project directly.

So far, I haven’t worked on any project that’s been bigger than 4 people, so it hasn’t helped me that much… yet. As I move forward in my career and work on projects with more and more people and deadlines, I feel like Anaxi will become a go-to product for me. The ability to see everything so easily and the customizability really draws me in and makes me love the product and see myself using it in the future.

What’s coming next

Anaxi currently offers an iPhone app, but don't fret if you are a web user. The plan for Anaxi is to work on integration with Jira next, to help bridge the gap between managing projects and managing code. After that is completed, they are planning on creating a web app, followed by Android, and ending with native desktop apps.

This article was produced in partnership with Holberton School.

LLVM 7 Improves Performance Analysis, Linking

The compiler framework that powers Rust, Swift, and Clang offers new and revised tools for optimization, linking, and debugging.

The developers behind LLVM, the open-source framework for building cross-platform compilers, have unveiled LLVM 7. The new release arrives right on schedule as part of the project’s cadence of major releases every six months.

LLVM underpins several modern language compilers including Apple’s Swift, the Rust language, and the Clang C/C++ compiler. LLVM 7 introduces revisions to both its native features and to companion tools that make it easier to build, debug, and analyze LLVM-generated software.

Read more at InfoWorld

Convert Screenshots of Equations into LaTeX Instantly With This Nifty Tool

Mathpix is a nifty little tool that allows you to take screenshots of complex mathematical equations and instantly converts them into editable LaTeX text.

LaTeX editors are excellent when it comes to writing academic and scientific documentation.

There is a steep learning curve involved, of course. And this learning curve becomes steeper if you have to write complex mathematical equations.

Mathpix is a nifty little tool that helps you in this regard.

Read more at ItsFOSS

Spinnaker: The Kubernetes of Continuous Delivery

Comparing Spinnaker and Kubernetes in this way is somewhat unfair to both projects. The scale, scope, and magnitude of these technologies are different, but parallels can still be drawn.

Just like Kubernetes, Spinnaker is a technology that is battle tested, with Netflix using Spinnaker internally for continuous delivery. Like Kubernetes, Spinnaker is backed by some of the biggest names in the industry, which helps breed confidence among users. Most importantly, though, both projects are open source, designed to build a diverse and inclusive ecosystem around them.

Frankenstein’s Monster

Continuous Delivery (CD) is a solved problem, but it has been a bit of a Frankenstein’s monster, with companies trying to build their own creations by stitching parts together, along with Jenkins. “We tried to build a lot of custom continuous delivery tooling, but they all fell short of our expectation,” said Brandon Leach, Sr. Manager of Platform Engineering at Lookout.

“We were using Jenkins along with tools like Rundeck, but both had their own set of problems. While Rundeck didn’t have a first-class deployment tool, Jenkins was becoming a nightmare and we ended up moving to Gitlabs,” said Gard Voigt Rimestad of Schibsted, a major Norwegian media group.

Netflix created a more elegant way for continuous delivery called Asgard, open sourced in 2012, which was designed to run Netflix’s own workload on AWS. Many companies were using Asgard, including Schibsted, and it was gaining momentum. But it was tied closely to the kind of workload Netflix was running with AWS. Bigger companies who liked Asgard forked it to run their own workloads. IBM forked it twice to make it work with Docker containers.

IBM’s forking of Asgard was an eye-opening experience for Netflix. At that point, Netflix had started looking into containerized workloads, and IBM showed how it could be done with Asgard.

Google was also planning to fork Asgard to make it work on Google Compute Engine. By that time, Netflix had started working on the successor to Asgard, called Spinnaker. “Before Google could fork the project, we managed to convince Google to collaborate on Spinnaker instead of forking Asgard. Pivotal also joined in,” said Andy Glover, shepherd of Spinnaker and Director of Delivery Engineering at Netflix. The rest is history.

Continuous popularity

There are many factors at play that contribute to the popularity and adoption of Spinnaker. First and foremost, it’s a proven technology that’s been used at Netflix. It instills confidence in users. “Spinnaker is the way Netflix deploys its services. They do things at the scale we don’t do in AWS. That was compelling,” said Leach.

The second factor is the powerful community around Spinnaker that includes heavyweights like Microsoft, Google, and Netflix. “These companies have engineers on their staff that are dedicated to working on Spinnaker,” added Leach.

Governance

In October 2018, the Spinnaker community organized its first official Spinnaker Summit in Seattle. During the Summit, the community announced the governance structure for the project.

“Initially, there will be a steering committee and a technical oversight committee. At the moment Google and Netflix are steering the governance body, but we would like to see more diversity,” said Steven Kim, Google’s Software Engineering Manager who leads the Google team that works on Spinnaker.  The broader community is organized around a set of special interest groups (SIGs) that enable users to focus on particular areas of interest.

“There are users who have deployed Spinnaker in their environment, but they are often intimidated by two big players like Google and Netflix. The governance structure will enable everyone to be able to have a voice in the community,” said Kim.

At the moment, the project is being run by Google and Netflix, but eventually, it may be donated to an organization that has a better infrastructure for managing such projects. “It could be the OpenStack Foundation, CNCF, or the Apache Foundation,” said Boris Renski, Co-founder and CMO of Mirantis.

I met with more than a dozen users at the Summit, and they were extremely bullish about Spinnaker. Companies are already using it in a way even Netflix didn’t envision. Since continuous delivery is at the heart of multi-cloud strategy, Spinnaker is slowly but steadily starting to beat at the heart of many companies.

Spinnaker might not become as big as Kubernetes, due to its scope, but it’s certainly becoming as important. Spinnaker has made some bold promises, and I am sure it will continue to deliver on them.

Kali Linux for Vagrant: Hands-On

What Vagrant actually does is provide a way of automating the building of virtualized development environments using a variety of the most popular providers, such as VirtualBox, VMware, AWS and others. It not only handles the initial setup of the virtual machine, it can also provision the virtual machine based on your specifications, so it provides a consistent environment which can be shared and distributed to others.

The first step, obviously, is to get Vagrant itself installed and working — and as it turns out, doing that requires getting at least one of the virtual machine providers installed and working. In the case of the Kali distribution for Vagrant, this means getting VirtualBox installed.

Fortunately, both VirtualBox and Vagrant are available in the repositories of most of the popular Linux distributions. I typically work on openSUSE Tumbleweed, and I was able to install both of them from the YAST Software Management tool. I have also checked that both are available on Manjaro, Debian Testing and Linux Mint. I didn’t find Vagrant on Fedora, but there are several articles in the Fedora Developer Portal which describe installing and using it.
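Once Vagrant and a provider such as VirtualBox are installed, bringing up the Kali image typically takes only a couple of commands (a sketch, assuming the officially published kalilinux/rolling box name on Vagrant Cloud):

$ vagrant init kalilinux/rolling
$ vagrant up
$ vagrant ssh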

Read more at ZDNet