
Systemd Services: Reacting to Change

I have one of these Compute Sticks (Figure 1) and use it as an all-purpose server. It is inconspicuous and silent and, as it is built around an x86 architecture, I don’t have problems getting it to work with drivers for my printer, and that’s what it does most days: it interfaces with the shared printer and scanner in my living room.

An Intel ComputeStick. Euro coin for size.

Most of the time it is idle, especially when we are out, so I thought it would be a good idea to use it as a surveillance system. The device doesn’t come with its own camera, and it wouldn’t need to be spying all the time. I also didn’t want to have to start the image capturing by hand, because that would mean logging into the Stick over SSH and firing up the process in the shell before rushing out the door.

So I thought that the thing to do would be to grab a USB webcam and have the surveillance system fire up automatically just by plugging it in. Bonus points if the surveillance system also fired up after the Stick rebooted and found that the camera was connected.

In prior installments, we saw that systemd services can be started or stopped by hand or when certain conditions are met. Those conditions are not limited to when the OS reaches a certain state in the boot up or powerdown sequence but can also be when you plug in new hardware or when things change in the filesystem. You do that by combining a Udev rule with a systemd service.

Hotplugging with Udev

Udev rules live in the /etc/udev/rules.d directory and are usually a single line containing conditions and assignments that lead to an action.

That was a bit cryptic. Let’s try again:

Typically, in a Udev rule, you tell Udev what to look for when a device is connected. For example, you may want to check whether the make and model of a device you just plugged in correspond to the make and model of the device you are telling Udev to wait for. Those are the conditions mentioned earlier.

Then you may want to change some stuff so you can use the device easily later. An example of that would be to change the read and write permissions to a device: if you plug in a USB printer, you’re going to want users to be able to read information from the printer (the user’s printing app would want to know the model, make, and whether it is ready to receive print jobs or not) and write to it, that is, send stuff to print. Changing the read and write permissions for a device is done using one of the assignments you read about earlier.

Finally, you will probably want the system to do something when the conditions mentioned above are met, like start a backup application to copy important files when a certain external hard disk drive is plugged in. That is an example of an action mentioned above.

With that in mind, ponder this:

ACTION=="add", SUBSYSTEM=="video4linux", ATTRS{idVendor}=="03f0", ATTRS{idProduct}=="e207", 
SYMLINK+="mywebcam", TAG+="systemd", MODE="0666", ENV{SYSTEMD_WANTS}="webcam.service"

The first part of the rule,

ACTION=="add", SUBSYSTEM=="video4linux",  ATTRS{idVendor}=="03f0", 
ATTRS{idProduct}=="e207" [etc... ]

shows the conditions that the device has to meet before the system does any of the other stuff you want it to. The device has to be added (ACTION=="add") to the machine, and it has to be integrated into the video4linux subsystem. To make sure the rule is applied only when the correct device is plugged in, you have to make sure Udev correctly identifies the manufacturer (ATTRS{idVendor}=="03f0") and the model (ATTRS{idProduct}=="e207") of the device.

In this case, we’re talking about this device (Figure 2):

The HP webcam used in this experiment.

Notice how you use == to indicate that these are logical comparisons. You would read the above snippet of the rule like this:

if the device is added and the device is controlled by the video4linux subsystem 
and the manufacturer of the device is 03f0 and the model is e207, then...

But where do you get all this information? Where do you find the action that triggers the event, the manufacturer, the model, and so on? You will probably have to use several sources. The idVendor and idProduct values you can get by plugging the webcam into your machine and running lsusb:

lsusb
Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub 
Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub 
Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub 
Bus 003 Device 003: ID 03f0:e207 Hewlett-Packard
Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub 
Bus 001 Device 003: ID 04f2:b1bb Chicony Electronics Co., Ltd
Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub 
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

The webcam I’m using is made by HP, and you can only see one HP device in the list above. The ID gives you the manufacturer and the model numbers separated by a colon (:). If you have more than one device by the same manufacturer and not sure which is which, unplug the webcam, run lsusb again and check what’s missing.
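A quick sketch of that extraction (my own, not from the article): pull the vendor and product IDs out of an lsusb line. The sample line is copied verbatim from the output above.

```shell
# Extract idVendor and idProduct from a saved lsusb line
line='Bus 003 Device 003: ID 03f0:e207 Hewlett-Packard'

id=$(echo "$line" | awk '{print $6}')   # the vendor:product pair, e.g. 03f0:e207
vendor=${id%%:*}     # part before the colon -> idVendor
product=${id##*:}    # part after the colon  -> idProduct

echo "idVendor=$vendor idProduct=$product"   # prints: idVendor=03f0 idProduct=e207
```

These are exactly the values the ATTRS{idVendor} and ATTRS{idProduct} conditions in the rule expect.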

OR…

Unplug the webcam, wait a few seconds, run the command udevadm monitor --environment and then plug the webcam back in again. When you do that with the HP webcam, you get:

udevadm monitor --environment
UDEV  [35776.495221] add      /devices/pci0000:00/0000:00:1c.3/0000:04:00.0
    /usb3/3-1/3-1:1.0/input/input21/event11 (input) 
.MM_USBIFNUM=00 
ACTION=add 
BACKSPACE=guess 
DEVLINKS=/dev/input/by-path/pci-0000:04:00.0-usb-0:1:1.0-event 
     /dev/input/by-id/usb-Hewlett_Packard_HP_Webcam_HD_2300-event-if00 
DEVNAME=/dev/input/event11 
DEVPATH=/devices/pci0000:00/0000:00:1c.3/0000:04:00.0/
     usb3/3-1/3-1:1.0/input/input21/event11 
ID_BUS=usb 
ID_INPUT=1 
ID_INPUT_KEY=1 
ID_MODEL=HP_Webcam_HD_2300 
ID_MODEL_ENC=HPx20Webcamx20HDx202300 
ID_MODEL_ID=e207 
ID_PATH=pci-0000:04:00.0-usb-0:1:1.0 
ID_PATH_TAG=pci-0000_04_00_0-usb-0_1_1_0 
ID_REVISION=1020 
ID_SERIAL=Hewlett_Packard_HP_Webcam_HD_2300 
ID_TYPE=video 
ID_USB_DRIVER=uvcvideo 
ID_USB_INTERFACES=:0e0100:0e0200:010100:010200:030000: 
ID_USB_INTERFACE_NUM=00 
ID_VENDOR=Hewlett_Packard 
ID_VENDOR_ENC=Hewlettx20Packard 
ID_VENDOR_ID=03f0 
LIBINPUT_DEVICE_GROUP=3/3f0/e207:usb-0000:04:00.0-1/button 
MAJOR=13 
MINOR=75 
SEQNUM=3162 
SUBSYSTEM=input 
USEC_INITIALIZED=35776495065 
XKBLAYOUT=es 
XKBMODEL=pc105 
XKBOPTIONS= 
XKBVARIANT=

That may look like a lot to process but, check this out: the ACTION field early in the list tells you what event just happened, i.e., that a device got added to the system. You can also see the name of the device spelled out on several of the lines, so you can be pretty sure that it is the device you are looking for. The output also shows the manufacturer’s ID number (ID_VENDOR_ID=03f0) and the model number (ID_MODEL_ID=e207).

This gives you three of the four values the condition part of the rule needs. You may be tempted to think that it gives you the fourth, too, because there is also a line that says:

SUBSYSTEM=input

Be careful! Although it is true that a USB webcam is a device that provides input (as do keyboards and mice), it also belongs to the usb subsystem, and several others. This means that your webcam gets added to several subsystems and looks like several devices. If you pick the wrong subsystem, your rule may not work as you want it to or, indeed, at all.

So, the third thing you have to check is which subsystems the webcam has been added to, so you can pick the correct one. To do that, unplug your webcam again and run:

ls /dev/video*

This will show you all the video devices connected to the machine. If you are using a laptop, most come with a built-in webcam and it will probably show up as /dev/video0. Plug your webcam back in and run ls /dev/video* again.

Now you should see one more video device (probably /dev/video1).
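The before/after check can be scripted, too. This is a hedged sketch (not from the article), with the two device lists hard-coded for illustration; on a real system you would capture each one with ls /dev/video*.

```shell
# Which /dev/video* node appeared after plugging the webcam in?
before="/dev/video0"
after="/dev/video0
/dev/video1"

# Print the lines of "after" that are not in "before"
# (-x: match whole lines, -F: literal strings, -v: invert the match)
newdev=$(echo "$after" | grep -vxF "$before")
echo "New video device: $newdev"   # prints: New video device: /dev/video1
```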

Now you can find out all the subsystems it belongs to by running udevadm info -a /dev/video1:

udevadm info -a /dev/video1

Udevadm info starts with the device specified by the devpath and then 
walks up the chain of parent devices. It prints for every device 
found, all possible attributes in the udev rules key format. 
A rule to match, can be composed by the attributes of the device 
and the attributes from one single parent device. 

 looking at device '/devices/pci0000:00/0000:00:1c.3/0000:04:00.0
    /usb3/3-1/3-1:1.0/video4linux/video1': 
   KERNEL=="video1" 
   SUBSYSTEM=="video4linux" 
   DRIVER=="" 
   ATTR{dev_debug}=="0" 
   ATTR{index}=="0" 
   ATTR{name}=="HP Webcam HD 2300: HP Webcam HD"

[etc...]

The output goes on for quite a while, but what you’re interested in is right at the beginning: SUBSYSTEM=="video4linux". This is a line you can literally copy and paste right into your rule. The rest of the output (not shown for brevity) gives you a couple more nuggets, like the manufacturer and model IDs, again in a format you can copy and paste into your rule.

Now that you have a way of unambiguously identifying the device and the event that should trigger the action, it is time to tinker with the device.

The next section in the rule, SYMLINK+="mywebcam", TAG+="systemd", MODE="0666", tells Udev to do three things. First, you want to create a symbolic link from the device (e.g., /dev/video1) to /dev/mywebcam. This is because you cannot predict what the system is going to call the device by default. When you have a built-in webcam and you hotplug a new one, the built-in webcam will usually be /dev/video0, while the external one will become /dev/video1. However, if you boot your computer with the external USB webcam plugged in, that could be reversed: the internal webcam may become /dev/video1 and the external one /dev/video0. What this means is that, although your image-capturing script (which you will see later on) always needs to point to the external webcam device, you can’t rely on it being /dev/video0 or /dev/video1. To solve this problem, you tell Udev to create a symbolic link, which will never change, the moment the device is added to the video4linux subsystem, and you make your script point to that.

The second thing you do is add "systemd" to the list of Udev tags associated with this rule. This tells Udev that the action that the rule will trigger will be managed by systemd, that is, it will be some sort of systemd service.

Notice how in both cases you use the += operator. This appends the value to a list, which means you can add more than one value to SYMLINK and TAG.

MODE, on the other hand, can only contain one value (hence you use the simple = assignment operator). What MODE does is tell Udev who can read from or write to the device. If you are familiar with chmod (and, if you are reading this, you should be), you will also be familiar with how you can express permissions using numbers. That is what this is: 0666 means “give read and write privileges on the device to everybody”.
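If the octal notation is hazy, here is a tiny refresher using chmod on a throwaway file (assumption: GNU coreutils stat, as on most Linux systems). MODE="0666" in the rule has the same effect on the device node that chmod 0666 has here:

```shell
# 0666 = read + write for owner, group, and everyone else
f=$(mktemp)          # create a scratch file
chmod 0666 "$f"
stat -c '%a %A' "$f" # prints: 666 -rw-rw-rw-
rm -f "$f"
```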

Finally, ENV{SYSTEMD_WANTS}="webcam.service" tells Udev which systemd service to run.

Save this rule into a file called 90-webcam.rules (or something like that) in /etc/udev/rules.d, and you can load it either by rebooting your machine or by running:

sudo udevadm control --reload-rules && sudo udevadm trigger

Service at Last

The service the Udev rule triggers is ridiculously simple:

# webcam.service

[Service] 
Type=simple 
ExecStart=/home/[user name]/bin/checkimage.sh

Basically, it just runs the checkimage.sh script stored in your personal bin/ and pushes it to the background. This is something you saw how to do in prior installments. It may seem like a small thing but, just because it is called by a Udev rule, you have just created a special kind of systemd unit called a device unit. Congratulations.

As for the checkimage.sh script webcam.service calls, there are several ways of grabbing an image from a webcam and comparing it to a prior one to check for changes (which is what checkimage.sh does), but this is how I did it:

#!/bin/bash 
# This is the checkimage.sh script

# Grab an initial frame through the /dev/mywebcam symlink set up by the Udev rule
mplayer -vo png -frames 1 tv:// -tv driver=v4l2:width=640:height=480:device=/dev/mywebcam &>/dev/null 
mv 00000001.png /home/[user name]/monitor/monitor.png 

while true 
do 
   # Grab a new frame on every iteration
   mplayer -vo png -frames 1 tv:// -tv driver=v4l2:width=640:height=480:device=/dev/mywebcam &>/dev/null 
   mv 00000001.png /home/[user name]/monitor/temp.png 

   # compare prints its metric to stderr; capture it and keep only the first field
   imagediff=`compare -metric mae /home/[user name]/monitor/monitor.png /home/[user name]/monitor/temp.png /home/[user name]/monitor/diff.png 2>&1 > /dev/null | cut -f 1 -d " "` 

   # If the difference is above the threshold, keep the new frame as the reference
   if [ `echo "$imagediff > 700.0" | bc` -eq 1 ] 
   then 
       mv /home/[user name]/monitor/temp.png /home/[user name]/monitor/monitor.png 
   fi 

   sleep 0.5 
done

The script starts by using MPlayer to grab a frame (00000001.png) from the webcam. Notice how it points mplayer at the mywebcam symbolic link created in the Udev rule, instead of at video0 or video1. It then moves the image to the monitor/ directory in your home directory and runs an infinite loop that does the same thing again and again, but also uses ImageMagick’s compare tool to see if there are any differences between the last image captured and the one already in the monitor/ directory.

If the images are different, it means something has moved within the webcam’s frame. The script overwrites the original image with the new image and continues comparing, waiting for more movement.

Plugged

With all the bits and pieces in place, when you plug your webcam in, your Udev rule will be triggered and will start webcam.service. The webcam.service unit will execute checkimage.sh in the background, and checkimage.sh will start taking pictures every half a second. You will know because your webcam’s LED will start flashing to indicate every time it takes a snap.

As always, if something goes wrong, run

systemctl status webcam.service

to check what your service and script are up to.

Coming up

You may be wondering: Why overwrite the original image? Surely you would want to see what’s going on if the system detects any movement, right? You would be right, but as you will see in the next installment, leaving things as they are and processing the images using yet another type of systemd unit makes things nice, clean and easy.

Just wait and see.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

LF Deep Learning Foundation Announces Project Contribution Process

The LF Deep Learning Foundation, a community umbrella project of The Linux Foundation with the mission of supporting artificial intelligence, machine learning and deep learning open source projects, is working to build a self-sustaining ecosystem of projects.  Having a clear roadmap for how to contribute projects is a first step. Contributed projects operate under their own technical governance with collaboration resources allocated and provided by the LF Deep Learning Foundation’s Governing Board. Membership in the LF Deep Learning Foundation is not required to propose a project contribution.

The project lifecycle and contribution process documents can be found here: https://lists.deeplearningfoundation.org/g/tac-general/wiki/Lifecycle-Document-and-Project-Proposal-Process

Read more at The Linux Foundation

Speak at ELC + OpenIoT Summit EU – Proposals due by Sunday, July 1

Share your expertise! Submit your proposal to speak at ELC + OpenIoT Summit Europe by July 1.

For the past 13 years, Embedded Linux Conference (ELC) has been the premier vendor-neutral technical conference for companies and developers using Linux in embedded products. ELC has become the preeminent space for product vendors as well as kernel and systems developers to collaborate with user-space developers – the people building applications on embedded Linux.

View Full List of Suggested Topics and Submit Now >>

Read more at The Linux Foundation

8 Roles on a Cross-Functional DevOps Team

If you’re just getting started with a squad model, you may not be sure what roles you’ll need for your team to function smoothly. Our squad model in the IBM Digital Business Group is based on the Spotify Squad framework. At a high level, a squad is a small, cross-functional team that has autonomy to deliver on their squad mission. The squad missions and cross-squad priorities are set at an organizational level. Then within each squad, they decide “what to build, how to build it, and how to work together while building it.”

The key takeaway here is not that our way of assigning responsibilities to roles is the best or only way to do it. In fact, we have some variation between the squads in our own organization. The most important thing is that it’s crystal clear to everyone involved who is responsible for what. Perhaps you’ll see something on this list of responsibilities that you were missing before. So, let’s dive in with the first role.

Read more at OpenSource.com

R Consortium Looks for Feedback on R Package Best Practices

The R Consortium Infrastructure Steering Committee (ISC) is exploring the benefits of recommending that R package authors, contributors, and maintainers adopt the Linux Foundation (LF) Core Infrastructure Initiative (CII) Best Practices Badge. This badge provides a means for Free/Libre and Open Source Software (FLOSS) projects to highlight to what extent package authors follow best software practices, while enabling individuals and enterprises to assess quickly a package’s strengths and weaknesses across a range of dimensions. The CII Best Practices Badge Program is a voluntary, self-certification, at no cost. An easy to use web application guides users in the process.

More information on the CII Best Practices Badging Program, including the criteria, project statistics, and criteria statistics, is available on GitHub.

Read more at R Consortium

Why You Need Centralized Logging and Event Log Management

More companies are using their security logs to detect malicious incidents. Many of them are collecting too much log data—often billions of events. They brag about the size of their log storage drive arrays. Are they measured in terabytes or petabytes?

A mountain of data doesn’t give them as much useful information as they would like. Sometimes less is more. If you get so many alerts that you can’t adequately respond to them all, then something needs to change. A centralized log management system can help. This quick intro to logging and expert recommendations help new system admins understand what they should be doing to get the most out of security logs.

Security event logging basics

One of the best guides to security logging is the National Institute of Standards and Technology (NIST) Special Publication 800-92, Guide to Computer Security Log Management. Although it’s a bit dated, written in 2006, it still covers the basics of security log management well.

It places security log generators into three categories: operating system, application, or security-specific software (e.g., firewalls or intrusion detection systems [IDS]). Most computers have dozens of logs. …

Unix-style systems usually have syslog enabled by default, and you can configure the detail level. Many other application and security log files are disabled by default, but you can enable each individually with a single command line.

Each network device and security application and device will generate its own logs. Altogether, a system administrator will have hundreds of log files to choose from. A typical end-user system can generate thousands to tens of thousands of events per day. A server can easily generate hundreds of thousands to millions of events a day.  

Read more at CSO

Turn Your Raspberry Pi into a Tor Relay Node

If you’re anything like me, you probably got yourself a first- or second-generation Raspberry Pi board when they first came out, played with it for a while, but then shelved it and mostly forgot about it. After all, unless you’re a robotics enthusiast, you probably don’t have that much use for a computer with a pretty slow processor and 256 megabytes of RAM. This is not to say that there aren’t cool things you can do with one of these, but between work and other commitments, I just never seem to find the right time for some good old nerding out.

However, if you would like to put it to good use without sacrificing too much of your time or resources, you can turn your old Raspberry Pi into a perfectly functioning Tor relay node.

What is a Tor Relay node

You have probably heard about the Tor project before, but just in case you haven’t, here’s a very quick summary. The name “Tor” stands for “The Onion Router” and it is a technology created to combat online tracking and other privacy violations.

Everything you do on the Internet leaves a set of digital footprints in every piece of equipment that your IP packets traverse: all of the switches, routers, load balancers and destination websites log the IP address from which your session originated and the IP address of the internet resource you are accessing (and often its hostname, even when using HTTPS). If you’re browsing from home, then your IP can be directly mapped to your household. If you’re using a VPN service (as you should be), then your IP can be mapped to your VPN provider, and then they are the ones who can map it to your household. In any case, odds are that someone somewhere is assembling an online profile on you based on the sites you visit and how much time you spend on each of them. Such profiles are then sold, aggregated with matching profiles collected from other services, and then monetized by ad networks. At least, that’s the optimist’s view of how that data is used — I’m sure you can think of many examples of how your online usage profiles can be used against you in much more nefarious ways.

The Tor project attempts to provide a solution to this problem by making it impossible (or, at least, unreasonably difficult) to trace the endpoints of your IP session. Tor achieves this by bouncing your connection through a chain of anonymizing relays, consisting of an entry node, relay node, and exit node:

  1. The entry node only knows your IP address and the IP address of the relay node, but not the final destination of the request;

  2. The relay node only knows the IP address of the entry node and the IP address of the exit node, and neither the origin nor the final destination;

  3. The exit node only knows the IP address of the relay node and the final destination of the request; it is also the only node that can decrypt the traffic before sending it over to its final destination.

Relay nodes play a crucial role in this exchange because they create a cryptographic barrier between the source of the request and the destination. Even if exit nodes are controlled by adversaries intent on stealing your data, they will not be able to know the source of the request without controlling the entire Tor relay chain.

As long as there are plenty of relay nodes, your privacy when using the Tor network remains protected — which is why I heartily recommend that you set up and run a relay node if you have some home bandwidth to spare.

Things to keep in mind regarding Tor relays

A Tor relay node only receives encrypted traffic and sends encrypted traffic — it never accesses any other sites or resources online, so you do not need to worry that someone will browse any worrisome sites directly from your home IP address. Having said that, if you reside in a jurisdiction where offering anonymity-enhancing services is against the law, then, obviously, do not operate your own Tor relay. You may also want to check if operating a Tor relay is against the terms and conditions of your internet access provider.

What you will need

  • A Raspberry Pi (any model/generation) with some kind of enclosure

  • An SD card with Raspbian Stretch Lite

  • An ethernet cable

  • A micro-USB cable for power

  • A keyboard and an HDMI-capable monitor (to use during the setup)

This guide will assume that you are setting this up on your home connection behind a generic cable or ADSL modem router that performs NAT (and it almost certainly does). Most of them have a USB port you can use to power up your Raspberry Pi, and if you’re only using the wifi functionality of the router, then it should have a free ethernet port for you to plug into. However, before we get to the point where we can set-and-forget your Raspberry Pi, we’ll need to set it up as a Tor relay node, for which you’ll need a keyboard and a monitor.

The bootstrap script

I’ve adapted a popular Tor relay node bootstrap script for use with Raspbian Stretch — you can find it in my GitHub repository here: https://github.com/mricon/tor-relay-bootstrap-rpi. Once you have booted up your Raspberry Pi and logged in with the default “pi” user, do the following:

sudo apt-get install -y git
git clone https://github.com/mricon/tor-relay-bootstrap-rpi
cd tor-relay-bootstrap-rpi
sudo ./bootstrap.sh

Here is what the script will do:

  1. Install the latest OS updates to make sure your Pi is fully patched

  2. Configure your system for automated unattended updates, so you automatically receive security patches when they become available

  3. Install Tor software

  4. Tell your NAT router to forward the necessary ports to reach your relay (the ports we’ll use are 443 and 8080, since they are least likely to be filtered by your internet provider)

Once the script is done, you’ll need to configure the torrc file, but first decide how much bandwidth you’ll want to donate to Tor traffic. To figure that out, type “Speed Test” into Google and click the “Run Speed Test” button. You can disregard the “Download speed” result, as your Tor relay can only operate as fast as your maximum upload bandwidth.

Therefore, take the “Mbps upload” number, divide by 8 and multiply by 1024 to find out the bandwidth speed in Kilobytes per second. E.g. if you got 21.5 Mbps for your upload speed, then that number is:

21.5 Mbps / 8 * 1024 = 2752 KBytes per second

You’ll want to limit your relay bandwidth to about half that amount, and allow bursting to about three-quarters of it. Once decided, open /etc/tor/torrc using your favourite editor and tweak the bandwidth settings.

RelayBandwidthRate 1300 KBytes
RelayBandwidthBurst 2400 KBytes

Of course, if you’re feeling more generous, then feel free to put in higher numbers, though you don’t want to max out your outgoing bandwidth — it will noticeably impact your day-to-day usage if these numbers are set too high.

While you have that file open, you should set two more things. First, the Nickname — just for your own recordkeeping, and second the ContactInfo line, which should list a single email address. Since your relay will be running unattended, you should use an email address that you regularly check — you will receive an alert from the “Tor Weather” service if your relay goes offline for longer than 48 hours.

Nickname myrpirelay
ContactInfo you@example.com

Save the file and reboot the system to start the Tor relay.

Testing to make sure Tor traffic is flowing

If you would like to make sure that the relay is functioning, you can run the “arm” tool:

sudo -u debian-tor arm

It will take a while to start, especially on older-generation boards, but eventually it will show you a bar chart of incoming and outgoing traffic (or error messages that will help you troubleshoot your setup).

Once you are convinced that everything is functioning, you can unplug the keyboard and the monitor and relocate the Raspberry Pi into the basement where it will quietly sit and shuffle encrypted bits around. Congratulations, you’ve helped improve privacy and combat malicious tracking online!


Crack Open ACRN – A Device Hypervisor Designed for IoT

As the Internet of Things has grown in scale, IoT developers are increasingly expected to support a range of hardware resources, operating systems, and software tools/applications. This is a challenge given many connected devices are size-constrained. Virtualization can help meet these broad needs, but existing options don’t offer the right mix of size, flexibility, and functionality for IoT development.

ACRN is different by design. Launched at Embedded Linux Conference 2018, ACRN is a flexible, lightweight reference hypervisor, built with real-time and safety-criticality in mind and optimized to streamline embedded development through an open source platform.

One of ACRN’s biggest advantages is its small size — roughly only 25K lines of code at launch.

Read more at The Linux Foundation

This Week in Numbers: Managing JavaScript Packages with NPM and Yarn

This week we analyze more data from the Node.js Foundation‘s user survey. Almost three-quarters (73 percent) of survey respondents said they use a package manager. NPM was used by 60 percent and Yarn cited by 13 percent. Since Yarn sits on top of NPM, in reality these respondents are referring to an interface or tool they actually use day-to-day. Yarn’s use rose 44 percent compared to last year’s study.

Is Yarn a better alternative to NPM? That is up to you to decide.

We looked through the raw data provided by the Node.js Foundation and found two questions about JavaScript tooling that were not included as charts in its reports. The graphic below shows the charts we created about transpiler and modular bundler usage. The New Stack will use this data to support future reporting.

Read more at The New Stack

Helm vs Kapitan vs Kustomize

TLDR;

  • Kapitan (and presumably Ksonnet) is the most flexible and customizable (JSON and Jsonnet)
  • Kustomize is the most straightforward; it was just released, so we’ll need a bit more documentation on built-in functions (YAML only)
  • Helm combines a package approach with release management, which is powerful, with the caveat of Tiller for the release-management part (an additional source of truth)

During the implementation of https://www.weyv.com environments in Kubernetes, we went through various stages: from plain YAML files, to Helm chart releases, and finally Helm charts rendered with helm template output. Now, with the announcement of Kustomize, I take the opportunity to re-evaluate our choice of tool against our requirements with three contenders: Helm, Kapitan, and Kustomize. I left out Ksonnet (https://github.com/ksonnet/ksonnet) as it seems very close to Kapitan.

Helm: https://github.com/kubernetes/helm

Kapitan: https://github.com/deepmind/kapitan

Kustomize: https://github.com/kubernetes-sigs/kustomize

Read more at Medium