
And, Ampersand, and & in Linux

Take a look at the tools covered in the three previous articles, and you will see that understanding the glue that joins them together is as important as recognizing the tools themselves. Indeed, tools tend to be simple, and understanding what mkdir, touch, and find do (make a new directory, update a file’s timestamps, and find a file in the directory tree, respectively) in isolation is easy.

But understanding what

mkdir test_dir 2>/dev/null || touch images.txt && find . -iname "*jpg" > backup/dir/images.txt &

does, and why we would write a command line like that is a whole different story.
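
As a preview, here is that same command line pulled apart with comments. This is still valid Bash, because the shell lets a list continue on the next line after || and &&; each of these symbols gets a closer look in this article and the next:

# Try to create test_dir; any error message is discarded (2>/dev/null):
mkdir test_dir 2>/dev/null ||
# ...if that failed, create an empty images.txt instead (that is what || does):
touch images.txt &&
# ...and if the step before succeeded, write the names of all files ending
# in "jpg" into backup/dir/images.txt (> redirects the output):
find . -iname "*jpg" > backup/dir/images.txt &
# The trailing & pushes the whole thing into the background.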

It pays to look more closely at the signs and symbols that live between the commands. It will not only help you better understand how things work, but will also make you more proficient in chaining commands together to create compound instructions that will help you work more efficiently.

In this article and the next, we’ll be looking at the ampersand (&) and its close friend, the pipe (|), and see how they can mean different things in different contexts.

Behind the Scenes

Let’s start simple and see how you can use & as a way of pushing a command to the background. The instruction:

cp -R original/dir/ backup/dir/

copies all the files and subdirectories in original/dir/ into backup/dir/. So far so simple. But if that turns out to be a lot of data, it could tie up your terminal for hours.

However, using:

cp -R original/dir/ backup/dir/ &

pushes the process to the background courtesy of the final &. This frees you to continue working on the same terminal, or even to close the terminal and still let the process finish up (although, depending on your shell’s settings, you may need nohup or disown to make sure it survives the closing). Do note, however, that if the process is asked to print stuff out to the standard output (like in the case of echo or ls), it will continue to do so, even though it is being executed in the background.
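
If that output would just get in your way, you can redirect it to a file when you send the job to the background. For instance (the file name here is only an example):

# Save the listing to a file instead of printing it to the terminal;
# 2>&1 sends any error messages to the same file:
ls -lR original/dir/ > listing.txt 2>&1 &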

When you push a process into the background, Bash will print out a job number in square brackets, followed by another number: the PID, or process ID. Every process running on your Linux system has a unique process ID, and you can use this ID to pause, resume, and terminate the process it refers to. This will become useful later.
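
For example (the job number in brackets and the PID will be different on your system):

$ cp -R original/dir/ backup/dir/ &
[1] 14444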

In the meantime, there are a few tools you can use to manage your processes as long as you remain in the terminal from which you launched them:

  • jobs shows you the processes running in your current terminal, whether in the background or the foreground. It also shows you a number associated with each job (different from the PID) that you can use to refer to each process:

    $ jobs 
    [1]-  Running                 cp -i -R original/dir/* backup/dir/ & 
    [2]+  Running                 find . -iname "*jpg" > backup/dir/images.txt &
    
  • fg brings a job from the background to the foreground so you can interact with it. You tell fg which process you want to bring to the foreground with a percentage symbol (%) followed by the number associated with the job that jobs gave you:

    $ fg %1 # brings the cp job to the foreground
    cp -i -R original/dir/* backup/dir/
    

    If the job was stopped (see below), fg will start it again.

  • You can stop a job in the foreground by holding down [Ctrl] and pressing [Z]. This doesn’t abort the action; it pauses it. When you start it again (with fg or bg), it will continue from where it left off…

    …Except for sleep: the time a sleep job spends paused still counts once it is resumed. This is because sleep goes by the wall-clock time at which it was started, not by how long it has actually been running. This means that if you run sleep 30 and pause it for more than 30 seconds, once you resume, sleep will exit immediately.
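
    You can try this yourself (the exact wording of the Stopped line depends on your shell):

    $ sleep 30
    ^Z
    [1]+  Stopped                 sleep 30
    $ # wait more than 30 seconds, then bring it back...
    $ fg %1
    sleep 30
    $ # ...and it exits immediately: its 30 seconds of clock time are up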

  • The bg command pushes a job to the background and resumes it again if it was paused:
    $ bg %1
    [1]+ cp -i -R original/dir/* backup/dir/ &
    

As mentioned above, you won’t be able to use any of these commands if you close the terminal from which you launched the process or if you change to another terminal, even though the process will still continue working.

To manage background processes from another terminal, you need another set of tools. For example, you can tell a process to stop from a different terminal with the kill command:

kill -s STOP <PID>

And you know the PID because that is the number Bash gave you when you started the process with &, remember? Oh! You didn’t write it down? No problem. You can get the PID of any running process with the ps (short for processes) command. So, using

ps | grep cp

will show you all the processes containing the string “cp“, including the copying job we are using for our example. It will also show you the PID:

$ ps | grep cp
14444 pts/3    00:00:13 cp

In this case, the PID is 14444, and it means you can stop the background copying with:

kill -s STOP 14444

Note that STOP here does the same thing as [Ctrl] + [Z] above, that is, it pauses the execution of the process.

To start the paused process again, you can use the CONT signal:

kill -s CONT 14444

There is a good list of the main signals you can send a process in the signal(7) man page (run man 7 signal to see it). According to that, if you wanted to terminate the process, not just pause it, you could do this:

kill -s TERM 14444

If the process refuses to exit, you can force it with:

kill -s KILL 14444

This is a bit dangerous, but very useful if a process has gone crazy and is eating up all your resources.
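
By the way, if you want to see the whole list of signals your system supports, kill can print it for you:

$ kill -l
 1) SIGHUP       2) SIGINT       3) SIGQUIT      4) SIGILL       5) SIGTRAP
...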

In any case, if you are not sure you have the correct PID, add the x option to ps. This shows all your processes, even the ones not attached to the current terminal, along with their full command lines:

$ ps x | grep cp
14444 pts/3    D      0:14 cp -i -R original/dir/Hols_2014.mp4  
  original/dir/Hols_2015.mp4 original/dir/Hols_2016.mp4 
  original/dir/Hols_2017.mp4 original/dir/Hols_2018.mp4 backup/dir/ 

And you should be able to see what process you need.

Finally, there is a nifty tool that combines ps and grep all into one:

$ pgrep cp
8 
18 
19 
26 
33 
40 
47 
54 
61 
72 
88 
96 
136 
339 
6680 
13735 
14444

This lists all the PIDs of processes whose names contain the string “cp“.

In this case, it isn’t very helpful, but this…

$ pgrep -lx cp
14444 cp

… is much better.

In this case, -l tells pgrep to show you the name of the process and -x tells pgrep you want an exact match for the name of the command. If you want even more details, try pgrep -ax, which also shows the full command line of each matching process.
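
Run against our example, that looks something like this (output shortened):

$ pgrep -ax cp
14444 cp -i -R original/dir/Hols_2014.mp4 original/dir/Hols_2015.mp4 ... backup/dir/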

Next time

Putting an & at the end of commands has helped us explain the rather useful concept of processes working in the background and foreground and how to manage them.

One last thing before we leave: processes that run in the background, detached from any terminal, are what are known as daemons in UNIX/Linux parlance. So, if you had heard the term before and wondered what they were, there you go.

As usual, there are more ways to use the ampersand within a command line, many of which have nothing to do with pushing processes into the background. To see what those uses are, we’ll be back next week with more on the matter.

Read more:

Linux Tools: The Meaning of Dot

Understanding Angle Brackets in Bash

More About Angle Brackets in Bash

Learn to Use curl Command with Examples

The curl command is used to transfer files to and from a server. It supports a number of protocols, including HTTP, HTTPS, FTP, FTPS, IMAP, IMAPS, DICT, FILE, GOPHER, LDAP, LDAPS, POP3, POP3S, RTMP, RTSP, SCP, SFTP, SMB, SMBS, SMTP, SMTPS, TELNET, and TFTP.

curl also supports many features, such as proxy support, user authentication, FTP upload, HTTP POST, SSL connections, cookies, and file transfer pause and resume. There are around 120 different options that can be used with curl, and in this tutorial, we are going to discuss some important curl commands with examples.

Download or visit a single URL

To download a file with curl over HTTP, FTP, or any other supported protocol, use the following command structure:

$ curl https://linuxtechlab.com

If curl can’t identify the protocol being used, it will default to HTTP. We can also store the output of the command to a file with the -o option, or redirect it using >.
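
For example, both of these download the page and save it to a file (the file name is our choice):

$ curl -o linuxtechlab.html https://linuxtechlab.com
$ curl https://linuxtechlab.com > linuxtechlab.html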

Read more at Linux Tech Lab

Outlaw Shellbot Infects Linux Servers to Mine for Monero

The Outlaw group is conducting an active campaign which is targeting Linux systems in cryptocurrency mining attacks.

On Tuesday, the JASK Special Ops research team disclosed additional details (.PDF) of the attack wave which appears to focus on seizing infrastructure resources to support illicit Monero mining activities.

The campaign uses a refined version of Shellbot, a Trojan which carves a tunnel between an infected system and a command-and-control (C2) server operated by threat actors.

The backdoor is able to collect system and personal data, terminate or run tasks and processes, download additional payloads, open remote command line shells, send stolen information to a C2, and also receive additional malware payloads from controllers. …

The threat actors target organizations through denial-of-service (DoS) and SSH brute-force techniques. If servers are compromised, their strength is added to the Outlaw botnet to carry on the campaign.

Read more at ZDNet

Exploiting systemd-journald: Part 1

This is part one in a multipart series on exploiting two vulnerabilities in systemd-journald, which were published by Qualys on January 9th. Specifically, the vulnerabilities were:

  • a user-influenced size passed to alloca(), allowing manipulation of the stack pointer (CVE-2018-16865)
  • a heap-based memory out-of-bounds read, yielding memory disclosure (CVE-2018-16866)

The affected program, systemd-journald, is a system service that collects and stores logging data. The vulnerabilities discovered in this service allow user-generated log data to manipulate memory such that an attacker can take over systemd-journald, which runs as root. Exploitation of these vulnerabilities thus allows for privilege escalation to root on the target system.

As Qualys did not provide exploit code, we developed a proof-of-concept exploit for our own testing and verification. There are some interesting aspects that were not covered by Qualys’ initial publication, such as how to communicate with the affected service to reach the vulnerable component, and how to control the computed hash value that is actually used to corrupt memory. We thought it was worth sharing the technical details for the community.

As the first in our series on this topic, the objective of this post is to provide the reader with the ability to write a proof-of-concept capable of exploiting the service with Address Space Layout Randomization (ASLR) disabled. In the interest of not posting an unreadably long blog, and also not handing sharp objects to script-kiddies before the community has had a chance to patch, we are saving some elements for discussion in future posts in this series, including details on how to control the key computed hash value.

Read more at Capsule8

Red Hat Launches CodeReady Workspaces Kubernetes IDE

Red Hat announced the general availability of its CodeReady Workspaces integrated developer environment (IDE) on Feb. 5, providing users with a Kubernetes-native tool for building and collaborating on application development.

In contrast with other IDEs, Red Hat CodeReady Workspaces runs inside of a Kubernetes cluster, providing developers with integrated capabilities for cloud-native deployments. Kubernetes is an open-source container orchestration platform that enables organizations to deploy and manage application workloads. Red Hat CodeReady Workspaces is tightly integrated with the company’s OpenShift Kubernetes container platform, providing development teams with an environment to develop and deploy container applications.

Red Hat CodeReady Workspaces is based on the open-source Eclipse Che IDE project, as well as technologies that Red Hat gained via the acquisition of Codenvy in May 2017.

Read more at eWeek

TensorFlow.js: Machine Learning for the Web and Beyond

TensorFlow.js: machine learning for the web and beyond, Smilkov et al., SysML’19

If machine learning and ML models are to pervade all of our applications and systems, then they’d better go to where the applications are rather than the other way round. Increasingly, that means JavaScript – both in the browser and on the server.

TensorFlow.js brings TensorFlow and Keras to the JavaScript ecosystem, supporting both Node.js and browser-based applications. As well as programmer accessibility and ease of integration, running on-device means that in many cases user data never has to leave the device.

On-device computation has a number of benefits, including data privacy, accessibility, and low-latency interactive applications.

TensorFlow.js isn’t just for model serving; you can run training with it as well. Since its launch in March 2018, people have done lots of creative things with it. And since it runs in the browser, these are all accessible to you with just one click!

Read more at the morning paper

Getting Started with Git: Terminology 101

Version control is an important tool for anyone looking to track their changes these days. It’s especially helpful for programmers, sysadmins, and site reliability engineers (SREs). The promise of recovering from mistakes to a known good state is a huge win and a touch friendlier than the previous strategy of adding .old to a copied file.

But learning Git is often oversimplified by well-meaning peers telling everyone to “get into open source.” Before you know it, someone asks for a pull request or merge request where you rebase from upstream before they can merge from your remote—and be sure to remove merge commits. Whatever perfectly good contribution you want to give back to an open source project feels much further from being added when you look at all these words you don’t know. …

Knowing where you are in a Git project starts with thinking of a tree. All Git projects have a root, similar to the idea of a filesystem’s root directory. All commits branch off from that root. In this way, a branch is only a pointer to a commit. By convention, master is the default name for the default branch in your repository.

Since Git is a distributed version control system, where the same codebase is distributed to multiple locations, people often use the term “repository” as a way of talking about all copies of the same project.
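
If you want to see these terms in the flesh, a few standard Git commands will show them in any clone (the output depends on your project):

$ git branch                     # list branches: each one is just a pointer to a commit
$ git log --oneline --decorate   # show commits and which branch names point at them
$ git remote -v                  # list the other copies (remotes) this repository knows about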

Read more at OpenSource.com

Blockchain Skills Are in Demand: Take Advantage of Hyperledger Training Options

If you ask some people, they’ll tell you that blockchain technology — an entirely reimagined approach to records, ledgers, and authentication that is helping to protect trust in transactions — is as dramatic as the creation of the Internet. Countless organizations across industries are aligning around it and there is huge demand for training and certification focused on the tools and building blocks that drive the blockchain ecosystem.

The Linux Foundation’s Hyperledger Project has created many of these key tools and is providing leadership around these complex technologies. At top business schools ranging from Berkeley to Wharton, students are flocking to classes on blockchain and cryptocurrency. Additionally, job postings related to blockchain and Hyperledger are taking off, with knowledge in these areas translating into opportunity. 

“In the span of only a year or two, blockchain has gone from something seen only as related to cryptocurrencies to a necessity for businesses across a wide variety of industries,” said The Linux Foundation’s Clyde Seepersad, General Manager, Training & Certification, in introducing the course Blockchain: Understanding its Uses and Implications. “Providing a free introductory course designed not only for technical staff but business professionals will help improve understanding of this important technology, while offering a certificate program through edX will enable professionals from all over the world to clearly demonstrate their expertise.”

Hyperledger offers numerous other training and certification options, and its certifications are respected across industries. The project has just introduced a new certification for Hyperledger Fabric and also offers certification for Hyperledger Sawtooth—with both platforms playing central roles in the blockchain ecosystem.

The following courses are key to building your blockchain and Hyperledger skills:

Blockchain: Understanding Its Uses and Implications (LFS170) Understand exactly what a blockchain is, its impact and potential for change around the world, and analyze use cases in technology, business, and enterprise products and institutions.

Blockchain for Business – An Introduction to Hyperledger Technologies (LFS171) This course offers a primer to blockchain and distributed ledger technologies. Learn how to start building blockchain applications with Hyperledger frameworks.

Hyperledger Fabric Fundamentals (LFD271) Teaches the fundamental concepts of blockchain and distributed ledger technologies.

Hyperledger Fabric Administration (LFS272) This course will provide a deeper understanding of the Hyperledger Fabric network and how to administer and interact with chaincode, manage peers, operate basic CA-level functions, and much more. It will help prepare you to pass the Certified Hyperledger Fabric Administrator (CHFA) exam.

Hyperledger Sawtooth Administration (LFS273) This course offers insight into the installation, configuration, component lifecycle, and permissioning-related information of a Hyperledger Sawtooth network. It will help prepare you to pass the Hyperledger Sawtooth Administration exam.

Now is a great time to learn about Hyperledger and blockchain technology. Get started on your blockchain training journey now.

Introductory Go Programming Tutorial

You’ve probably heard of Go. Like any new programming language, it took a while to mature and stabilize to the point where it became useful for production applications. Nowadays, Go is a well-established language used in web development, DevOps tooling, network programming, and databases. It was used to write Docker, Kubernetes, Terraform, and Ethereum. Go is accelerating in popularity, with adoption increasing by 76% in 2017, and there are now Go user groups and Go conferences. Whether you want to add to your professional skills or are just interested in learning a new programming language, you should check it out.

Go History

A team of three programmers at Google created Go: Robert Griesemer, Rob Pike and Ken Thompson. The team decided to create Go because they were frustrated with C++ and Java, which through the years have become cumbersome and clumsy to work with. They wanted to bring enjoyment and productivity back to programming. …

Go is designed to be a simple compiled language that is easy to use, while allowing concisely written programs that run efficiently. Go lacks extraneous features, so it’s easy to program fluently, without needing to refer to language documentation while programming. Programming in Go is fast, fun and productive.

Read more at Linux Journal

Linux Fu: Easier File Watching

In an earlier installment of Linux Fu, I mentioned how you can use inotifywait to efficiently watch for file system changes. The comments had a lot of alternative ways to do the same job, which is great. But there was one very easy-to-use tool that didn’t show up, so I wanted to talk about it. That tool is entr. It isn’t as versatile, but it is easy to use and covers a lot of common use cases where you want some action to occur when a file changes.

The program is dead simple. It reads a list of file names on its standard input. It will then run a command and repeat it any time the input files change. There are a handful of options we’ll talk about in a bit, but it is really that simple. For example, try this after you install entr with your package manager.

  1. Open two shell windows
  2. In one window, open your favorite editor to create an empty file named /tmp/foo and save it
  3. In the second window issue the command: echo "/tmp/foo" | entr wc /tmp/foo
  4. Back in the first window (or your GUI editor) make some changes to the file and save it while observing the second window

If you can’t find entr, you can download it from the website.

Frequently, you’ll feed the output from find or a similar command to entr.
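
For example, something like this (the directory, file pattern, and build command are just placeholders) re-runs make every time a C source file changes:

$ find src -name '*.c' | entr make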

Read more at Hackaday