
Systemd Timers: Three Use Cases

In this systemd tutorial series, we have already talked about systemd timer units to some degree, but before moving on to sockets, let’s look at three examples that illustrate how you can best leverage these units.

Simple cron-like behavior

This is something I have to do: collect popcon data from Debian every week, preferably at the same time each week, so I can see how the downloads of certain applications evolve. This is the typical kind of thing you would hand to a cron job, but a systemd timer can do it too:

# cron-like popcon.timer

[Unit] 
Description= Says when to download and process popcons 

[Timer] 
OnCalendar= Thu *-*-* 05:32:07 
Unit= popcon.service 

[Install] 
WantedBy= basic.target

The actual popcon.service runs a regular wget job, so nothing special. What is new here is the OnCalendar= directive. This is what lets you set a service to run on a certain date at a certain time. In this case, Thu means “run on Thursdays” and the *-*-* means “on any year, month, and day of the month” (the fields are year, month, and day, in that order), which translates to “run every Thursday, regardless of the date”.

Then you have the time you want to run the service. I chose about 5:30 am CEST, which is when the server is not very busy.
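
If you want to double-check that an OnCalendar= expression means what you think it means, reasonably recent versions of systemd can parse it for you and show when it will next elapse:

systemd-analyze calendar "Thu *-*-* 05:32:07"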

If the server is down and misses the weekly deadline, you can also work anacron-like functionality into the same timer:

# popcon.timer with anacron-like functionality

[Unit] 
Description=Says when to download and process popcons 

[Timer] 
Unit=popcon.service 
OnCalendar=Thu *-*-* 05:32:07
Persistent=true

[Install] 
WantedBy=basic.target

When you set the Persistent= directive to true, it tells systemd to run the service immediately after booting if the server was down when the timer was supposed to fire. So, if the machine was down for maintenance, say, in the early hours of Thursday, popcon.service will run as soon as the machine is booted again, and then the timer goes back to its routine of running the service every Thursday at 5:32 am.
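
To put the timer to work, you enable and start the timer unit itself (not popcon.service), and you can then ask systemd when it is next due to fire, for example:

systemctl enable popcon.timer
systemctl start popcon.timer
systemctl list-timers popcon.timer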

So far, so straightforward.

Delayed execution

But let’s kick things up a notch and “improve” the systemd-based surveillance system. Remember that the system started taking pictures the moment you plugged in a camera. Suppose you don’t want pictures of your face while you install the camera. You will want to delay the start-up of the picture-taking service by a minute or two, so you can plug in the camera and move out of frame.

To do this, first change the Udev rule so that it points to a timer:

ACTION=="add", SUBSYSTEM=="video4linux", ATTRS{idVendor}=="03f0", 
ATTRS{idProduct}=="e207", TAG+="systemd", ENV{SYSTEMD_WANTS}="picchanged.timer", 
SYMLINK+="mywebcam", MODE="0666"

The timer looks like this:

# picchanged.timer

[Unit] 
Description= Runs picchanged 1 minute after the camera is plugged in 

[Timer] 
OnActiveSec= 1 m
Unit= picchanged.path

[Install]
WantedBy= basic.target

The Udev rule gets triggered when you plug the camera in, and it calls the timer. The timer waits for one minute after it is activated (OnActiveSec= 1 m) and then runs picchanged.path, which watches for changes to the master image. The picchanged.path unit is also in charge of pulling in webcam.service, the service that actually takes the picture.
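
If you want to check that the whole chain fires in the right order, you can inspect the units right after plugging the camera in; these commands simply report whether the timer is counting down and whether the path and service units have been activated:

systemctl list-timers picchanged.timer
systemctl status picchanged.path webcam.service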

Start and stop Minetest server at a certain time every day

In the final example, let’s say you have decided to delegate parenting to systemd. I mean, systemd seems to be already taking over most of your life anyway. Why not embrace the inevitable?

So you have your Minetest service set up for your kids. You also want to give some semblance of caring about their education and upbringing and have them do homework and chores. What you want to do is make sure Minetest is only available for a limited time (say from 5 pm to 7 pm) every evening.

This is different from “starting a service at a certain time” in that writing a timer to start the service at 5 pm is the easy part…:

# minetest.timer 

[Unit]
Description= Runs the minetest.service at 5 pm every day

[Timer]
OnCalendar= *-*-* 17:00:00 
Unit= minetest.service

[Install]
WantedBy= basic.target

… But writing a counterpart timer that shuts down a service at a certain time needs a bigger dose of lateral thinking.

Let’s start with the obvious — the timer:

# stopminetest.timer

[Unit]
Description= Stops the minetest.service at 7 pm every day

[Timer]
OnCalendar= *-*-* 19:05:00 
Unit= stopminetest.service

[Install]
WantedBy= basic.target

The tricky part is how to tell stopminetest.service to actually, you know, stop Minetest. There is no way to pass the PID of the Minetest server from minetest.service, and there are no obvious commands in systemd’s unit vocabulary to stop or disable a running service.

The trick is to use systemd’s Conflicts= directive. The Conflicts= directive is the mirror image of systemd’s Wants= directive, in that it does exactly the opposite. If you have Wants=a.service in a unit called b.service, when b.service starts, it will also start a.service if it is not running already. Likewise, if you have a line that reads Conflicts= a.service in your b.service unit, as soon as b.service starts, systemd will stop a.service.

Conflicts= was created for situations in which two services could clash by trying to take control of the same resource simultaneously, say, two services that need to access your printer at the same time. By putting a Conflicts= line in your preferred service, you could make sure it overrides the less important one.

You are going to use Conflicts= a bit differently, however. You will use Conflicts= to cleanly close down minetest.service:

# stopminetest.service

[Unit]
Description= Closes down the Minetest service
Conflicts= minetest.service

[Service]
Type= oneshot
ExecStart= /bin/echo "Closing down minetest.service"

The stopminetest.service doesn’t do much at all. Indeed, it could do nothing at all; but because it contains that Conflicts= line, when it is started, systemd will close down minetest.service.

There is one last wrinkle in your perfect Minetest setup: what happens if you get home late from work, it is past the time when the server should come up, but playtime is not over yet? The Persistent= directive (see above), which runs a service if it missed its start time, is no good here, because if you switch the server on at, say, 11 am, it would start Minetest, and that is not what you want. What you really want is a way to make sure that systemd will only start Minetest between the hours of 5 and 7 in the evening:

# minetest.timer 

[Unit]
Description= Runs the minetest.service every minute between the hours of 5pm and 7pm

[Timer]
OnCalendar= *-*-* 17..19:*:00
Unit= minetest.service

[Install]
WantedBy= basic.target

The line OnCalendar= *-*-* 17..19:*:00 is interesting for two reasons: (1) 17..19 is not a point in time, but a range of hours, in this case from 17 to 19; and (2) the * in the minute field indicates that the service must be run every minute within that range. Hence, you would read this as “run minetest.service every minute between 5 pm and 7 pm”.
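
Again, systemd-analyze can confirm that reading. Asking it for the next few elapses of the expression (the --iterations option is available on newer versions of systemd-analyze) shows one trigger per minute, all falling inside the evening window:

systemd-analyze calendar --iterations=3 "*-*-* 17..19:*:00"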

There is still one catch, though: once minetest.service is up and running, you want minetest.timer to stop trying to start it again and again. You can do that by including a Conflicts= directive in minetest.service:

# minetest.service 

[Unit]
Description= Runs Minetest server
Conflicts= minetest.timer

[Service]
Type= simple
User= <your user name>

ExecStart= /usr/bin/minetest --server 
ExecStop= /bin/kill -2 $MAINPID

[Install]
WantedBy= multi-user.target

The Conflicts= directive shown above makes sure minetest.timer is stopped as soon as the minetest.service is successfully started.

Now enable and start minetest.timer:

systemctl enable minetest.timer
systemctl start minetest.timer

And, if you boot the server at, say, 6 pm, minetest.timer will start up and, as the time falls between 5 and 7, minetest.timer will try to start minetest.service every minute. But, as soon as minetest.service is running, systemd will stop minetest.timer because it “conflicts” with minetest.service, thus preventing the timer from trying to start the service over and over when it is already running.

It is a bit counterintuitive that you use the service to kill the timer that started it up in the first place, but it works.

Conclusion

You probably think that there are better ways of doing all of the above. I have heard the term “overengineered” in regard to these articles, especially when using systemd timers instead of cron.

But the purpose of this series of articles is not to provide the best solution to any particular problem. The aim is to show solutions that use systemd units as much as possible, even to ridiculous lengths, and to showcase plenty of examples of how the different types of units and the directives they contain can be leveraged. It is up to you, the reader, to find the real practical applications for all of this.

Be that as it may, there is still one more thing to go: next time, we’ll be looking at sockets and targets, and then we’ll be done with systemd units.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

pdsh Parallel Shell

For HPC, one of the fundamentals is being able to run a command across a number of nodes in a cluster. A parallel shell is a simple but powerful tool that allows you to do so on designated (or all) nodes in the cluster, so you do not have to log in to each node and run the same command. This single tool has an infinite number of ways to be useful, but I like to use it when performing administrative tasks, such as:

  • discovering the status of the nodes in the cluster quickly,
  • checking the versions of particular software packages on each node,
  • checking the OS version on all nodes,
  • checking the kernel version on all nodes,
  • searching the system logs on each node (if you do not store them centrally),
  • examining the CPU usage on each node,
  • examining local I/O (if the nodes do any local I/O),
  • checking whether any nodes are swapping,
  • spot-monitoring the compute nodes, and
  • debugging.

This list is just the short version; the real list is extensive. Anything you want to do on a single node can be done on a large number of nodes using a parallel shell tool. However, for those who might be asking whether they can use parallel shells on their 50,000-node clusters, the answer is that you can, but the time skew in the results will be large enough that the results might not be useful (which is a completely different subject). Parallel shells are more practical when used on a smaller number of nodes, on specific nodes (e.g., those associated with a specific job in a resource manager), or for gathering information that varies somewhat slowly. However, some techniques will allow you to run parallel commands on a large number of nodes.
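
To give a flavor of what this looks like in practice (the node names below are made up, and pdsh is assumed to be set up to reach the nodes, typically over ssh), checking the kernel version across a small cluster and folding identical answers together could look like this:

pdsh -w node[01-08] uname -r | dshbak -c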

Read more at ADMIN

How to Balance Development Goals with Security and Privacy

As a software security evaluator and a one-time engineer, I can confirm what the daily security breaches are telling us: software engineers and architects regularly fail at building in sufficient security and privacy. As someone who has been on both sides of this table, I’d like to share some of my own security-related engineering sins and provide some practical advice for both engineers and security officers on how best to balance development goals with privacy concerns.

I started programming many years ago, working in a role where I created artificial intelligence software for data analysis. My team and I built innovative software solutions for predicting behavior, such as programs that could aid in preventing crimes. As enthusiastic engineers, we were so focused on building something cool and of great value that we tended to overlook the security risks in our programs, profoundly annoyed when privacy officers said “no” to what we wanted to do. This behavior typically resulted in unofficial implementations with unfortunate privacy anti-patterns (nicknamed in capitals): COLLECTTOOMUCH, KEEPTOOLONG, BADSECURITY, and SCATTER, the last of which refers to storing data elsewhere without keeping it up to date. I also saw LEAKEXTERNAL,…

Read more at O’Reilly

‘Sed’ Command In Linux: Useful Applications Explained

Have you ever needed to replace some text in a file really quickly? Then you have to open up your text editor, find the line, and type out your replacement. What if you have to do that many times? What if the text isn’t exactly the same each time and you have to run multiple searches, replacing each occurrence? It gets tedious very quickly, but there’s a better way of doing it with a tool called sed.

We’ve written about POSIX and gone over some of the interfaces and utilities a system must provide in order to be POSIX compliant. The command-line tool sed is one of those utilities; it provides a feature-rich way to filter, find, substitute, and rearrange data in text files. It is an extremely powerful tool that is very easy to get started with, but very difficult to learn through and through due to its seemingly endless number of features.
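
As a quick taste (the file name and the strings here are just placeholders), printing a file with every occurrence of one word swapped for another, and then making the same change in place, looks like this:

sed 's/colour/color/g' notes.txt
sed -i 's/colour/color/g' notes.txt

Note that -i, which edits the file in place, is a GNU sed extension, so it may behave differently on non-GNU systems.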

Read more at FOSSbytes

Cloud Management: The Good, The Bad, and The Ugly – Part 1

The journey to becoming a truly software-driven, digital-native organization requires enterprises to develop cultural practices and technology capabilities that support three main goals:

  1. Corporate IT needs to become aligned with and responsive to the lines of business and to the software delivery functions that are in charge of software and digital innovation.
  2. IT needs to lead the charge on fostering and enabling a culture of constant business innovation.
  3. The inevitable transformation in IT needs to be accompanied by a reduction in IT spend. Cloud modernization must not become a massive business transformation project that is bound to fail and that places undue pressure on the company, both from a cost perspective and in terms of processes and tooling.

Cloud Management is a key aspect that organizations are looking at in order to simplify operations, increase IT efficiency and reduce data center costs.

Given the strains that digital disruption puts on IT Ops, we often see that large and complex enterprises that have invested in Cloud Management Platform (CMP) capabilities struggle to identify the highest priority areas to target across lines of business or in shared services, and can’t really realize the promise of CMPs to optimize their IT processes across various company initiatives. The CMP implementation often becomes another ‘Moby Dick’ endless chase, sucking time and resources and causing frustration throughout the organization, with often not a lot to show for it.

In this article, I want to share our point of view and some insights into the fundamentals of Cloud Management capabilities that large enterprises need to put in place in order to support digital transformation in their organization, for both legacy infrastructure as well as new, modern applications and technologies.

Read more at Platform9

5 Essential Tools for Linux Development

Linux has become a mainstay for many sectors of work, play, and personal life. We depend upon it. With Linux, technology is expanding and evolving faster than anyone could have imagined. That means Linux development is also happening at an exponential rate. Because of this, more and more developers will be hopping on board the open source and Linux dev train in the immediate, near, and far-off future. For that, people will need tools. Fortunately, there are a ton of dev tools available for Linux; so many, in fact, that it can be a bit intimidating to figure out precisely what you need (especially if you’re coming from another platform).

To make that easier, I thought I’d help narrow down the selection a bit for you. But instead of saying you should use Tool X and Tool Y, I’m going to narrow it down to five categories and then offer up an example for each. Just remember, for most categories, there are several available options. And, with that said, let’s get started.

Containers

Let’s face it, in this day and age you need to be working with containers. Not only are they incredibly easy to deploy, they also make for great development environments. If you regularly develop for a specific platform, why not do so by creating a container image that includes all of the tools you need to make the process quick and easy? With that image available, you can then develop and roll out numerous instances of whatever software or service you need.

Using containers for development couldn’t be easier than it is with Docker. The advantages of using containers (and Docker) are:

  • Consistent development environment.

  • You can trust it will “just work” upon deployment.

  • Makes it easy to build across platforms.

  • Docker images available for all types of development environments and languages.

  • Deploying single containers or container clusters is simple.

Thanks to Docker Hub, you’ll find images for nearly any platform, development environment, server, service… just about anything you need. Using images from Docker Hub means you can skip over the creation of the development environment and go straight to work on developing your app, server, API, or service.

Docker is easily installable on almost every Linux platform. For example, to install Docker on Ubuntu, you only have to open a terminal window and issue the command:

sudo apt-get install docker.io

With Docker installed, you’re ready to start pulling down specific images, developing, and deploying (Figure 1).

Figure 1: Docker images ready to deploy.
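
For instance (the image and directory below are only an illustration), you could pull a stock Ubuntu image and drop into a shell inside it with your current project directory mounted, giving you a disposable, consistent build environment:

sudo docker pull ubuntu
sudo docker run -it --rm -v "$PWD":/src -w /src ubuntu bash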

Version control system

If you’re working on a large project or with a team on a project, you’re going to need a version control system. Why? Because you need to keep track of your code, where your code is, and have an easy means of making commits and merging code from others. Without such a tool, your projects would be nearly impossible to manage. For Linux users, you cannot beat the ease of use and widespread deployment of Git and GitHub. If you’re new to their worlds, Git is the version control system that you install on your local machine and GitHub is the remote repository you use to upload (and then manage) your projects. Git can be installed on most Linux distributions. For example, on a Debian-based system, the install is as simple as:

sudo apt-get install git

Once installed, you are ready to start your journey with version control (Figure 2).

Figure 2: Git is installed and available for many important tasks.

GitHub requires you to create an account. You can use it for free for non-commercial projects, or you can pay for commercial project hosting (for more information, check out the pricing page on the GitHub site).
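
To sketch the basic first-push workflow (the account and project names below are placeholders, and the default branch may be called main rather than master depending on your setup), turning a directory into a repository and pushing it to GitHub looks roughly like this:

git init
git add .
git commit -m "Initial commit"
git remote add origin https://github.com/<username>/<project>.git
git push -u origin master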

Text editor

Let’s face it, developing on Linux would be a bit of a challenge without a text editor. Of course, what counts as a text editor varies depending upon who you ask. One person might say vim, emacs, or nano, whereas another might go full-on GUI with their editor. But since we’re talking development, we need a tool that can meet the needs of the modern-day developer. And before I mention a couple of text editors, I will say this: yes, I know that vim is a serious workhorse for serious developers and, if you know it well, vim will meet and exceed all of your needs. However, getting up to speed enough that it won’t be in your way can be a bit of a hurdle for some developers (especially those new to Linux). Considering my goal is to always help win over new users (and not just preach to an already devout choir), I’m taking the GUI route here.

As far as text editors are concerned, you cannot go wrong with the likes of Bluefish. Bluefish can be found in most standard repositories and features project support, multi-threaded support for remote files, search and replace, recursive file opening, a snippets sidebar, integration with make, lint, weblint, and xmllint, unlimited undo/redo, an in-line spell checker, auto-recovery, full-screen editing, syntax highlighting (Figure 3), support for numerous languages, and much more.

Figure 3: Bluefish running on Ubuntu Linux 18.04.
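
Since Bluefish lives in the standard repositories, on a Debian-based system the install is likely as simple as:

sudo apt-get install bluefish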

IDE

An Integrated Development Environment (IDE) is a piece of software that includes a comprehensive set of tools to provide a one-stop shop for development. IDEs not only enable you to code your software, but to document and build it as well. There are a number of IDEs for Linux, but one in particular is not only included in the standard repositories, it is also very user-friendly and powerful. The tool in question is Geany. Geany features syntax highlighting, code folding, symbol name auto-completion, construct completion/snippets, auto-closing of XML and HTML tags, call tips, many supported filetypes, symbol lists, code navigation, a build system to compile and execute your code, simple project management, and a built-in plugin system.

Geany can be easily installed on your system. For example, on a Debian-based distribution, issue the command:

sudo apt-get install geany

Once installed, you’re ready to start using this very powerful tool that includes a user-friendly interface (Figure 4) that has next to no learning curve.

Figure 4: Geany is ready to serve as your IDE.

diff tool

There will be times when you have to compare two files to find where they differ. This could be two copies of what was once the same file (only one compiles and the other doesn’t). When that happens, you don’t want to do the comparison manually. Instead, you want to employ the power of a tool like Meld. Meld is a visual diff and merge tool targeted at developers. With Meld you can make short work of discovering the differences between two files. Although you can use a command-line diff tool, when efficiency is the name of the game, you can’t beat Meld.

Meld allows you to open a comparison between two files and will highlight the differences between them. Meld also allows you to merge changes from either the right or the left, as the files are opened side by side (Figure 5).

Figure 5: Comparing two files with a simple difference.

Meld can be installed from most standard repositories. On a Debian-based system, the installation command is:

sudo apt-get install meld
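
Once installed, opening a comparison from the command line is as simple as passing Meld the two files (the file names here are placeholders):

meld old_version.c new_version.c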

Working with efficiency

These five tools not only enable you to get your work done, they help to make it quite a bit more efficient. Although there are a ton of developer tools available for Linux, you’re going to want to make sure you have one for each of the above categories (maybe even starting with the suggestions I’ve made).

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

Bringing Intelligence to the Edge with Cloud IoT

There are also many benefits to be gained from intelligent, real-time decision-making at the point where these devices connect to the network—what’s known as the “edge.” Manufacturing companies can detect anomalies in high-velocity assembly lines in real time. Retailers can receive alerts as soon as a shelved item is out of stock. Automotive companies can increase safety through intelligent technologies like collision avoidance, traffic routing, and eyes-off-the-road detection systems.

But real-time decision-making in IoT systems is still challenging due to cost, form factor limitations, latency, power consumption, and other considerations. We want to change that.

Bringing machine learning to the edge
Today, we’re announcing two new products aimed at helping customers develop and deploy intelligent connected devices at scale: Edge TPU, a new hardware chip, and Cloud IoT Edge, a software stack that extends Google Cloud’s powerful AI capability to gateways and connected devices. This lets you build and train ML models in the cloud, then run those models on the Cloud IoT Edge device through the power of the Edge TPU hardware accelerator.

Read more at Google Cloud

ctop – Top-like Interface for Monitoring Docker Containers

ctop is a free, open source, simple, cross-platform, top-like command-line tool for monitoring container metrics in real time. It gives you an overview of CPU, memory, network, and I/O metrics for multiple containers and also supports inspecting a specific container.

At the time of writing this article, it ships with built-in support for Docker (default container connector) and runC; connectors for other container and cluster platforms will be added in future releases.

How to Install ctop in Linux Systems

Installing the latest release of ctop is as easy as running a couple of commands to download the binary for your Linux distribution, install it under /usr/local/bin/ctop, and make it executable.
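
The exact commands depend on the release, so treat the following only as the general shape; the <version> placeholders and the URL pattern should be checked against the project’s GitHub releases page before copying anything:

sudo wget -O /usr/local/bin/ctop https://github.com/bcicen/ctop/releases/download/<version>/ctop-<version>-linux-amd64
sudo chmod +x /usr/local/bin/ctop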

Read more at Tecmint

How to Lead a Disaster Recovery Exercise For Your On-Call Team

A disaster recovery exercise is a fire drill for your on-call team. The exercise is the most useful when it is as realistic as possible. A well-designed exercise will involve engineers searching through your production codebase trying to find the tools to operate on a production-like environment.

Our disaster recovery exercises follow four basic principles:

  • All on-call engineers are gathered in one room
  • Sterilized environment (like prod, but not prod)
  • Clear objective
  • Timeboxed recovery

At SigOpt, we run on AWS, so our first exercise was to spin up an API from scratch in our backup region. Our sterilized environment was us-east-1, with no access to AMIs, instances, or databases in our production region. Our objective was to hit dr-api.sigopt.com and service an API request. Our timebox was 4 hours, which we chose based on an engineering OKR.

Disaster Recovery Exercise as an Infrastructure Diagnostic

We ran our original disaster recovery exercise to diagnose holes in our ability to recover our infrastructure. True to our goal, the exercise produced a few months of projects to work on.

Read more at Medium

 

Opera is Available in a Snap on Linux

Opera is far from the most popular web browser, but it has its loyal fans. Now, if those fans also happen to be Linux desktop users, Canonical, Ubuntu Linux’s parent company, and Opera SA have made it easier than ever to install it on almost any Linux distribution.

They’ve done this by packaging Opera as a snap in the Snap Store. The Opera snap is supported on Debian, Elementary, Fedora, Linux Mint, Manjaro, openSUSE, Ubuntu, and other Linux distributions.
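
On any distribution with snapd set up, installing it should come down to a single command:

sudo snap install opera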

Read more at ZDNet