
Protect Your Websites with Let’s Encrypt

Learn how to use Let’s Encrypt in this tutorial from our archives.

Back in the bad old days, basic HTTPS with a certificate authority cost as much as several hundred dollars per year, and the setup process was difficult and error-prone. Now we have Let’s Encrypt for free, and the whole thing takes just a few minutes.

Why Encrypt?

Why encrypt your sites? Because unencrypted HTTP sessions are wide open to multiple abuses, from eavesdropping on your visitors to injecting code into your pages.

Internet service providers lead the code-injecting offenders. How to foil their nefarious desires? Your best defense is HTTPS. Let’s review how HTTPS works.

Chain of Trust

You could set up asymmetric encryption between your site and everyone who is allowed to access it. This is very strong protection: GPG (GNU Privacy Guard, see How to Encrypt Email in Linux), and OpenSSH are common tools for asymmetric encryption. These rely on public-private key pairs. You can freely share public keys, while your private keys must be protected and never shared. The public key encrypts, and the private key decrypts.
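
To make that concrete, here is a minimal GPG sketch; the recipient address and file names are placeholders, not from the article:

$ gpg --gen-key                                           # generate a public-private key pair
$ gpg --encrypt --recipient alice@example.com notes.txt   # encrypt with the recipient's public key
$ gpg --decrypt notes.txt.gpg                             # only the matching private key can decrypt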

This is a multi-step process that does not scale for random web-surfing, however, because it requires exchanging public keys before establishing a session, and you have to generate and manage key pairs. An HTTPS session automates public key distribution, and sensitive sites, such as shopping and banking, are verified by a third-party certificate authority (CA) such as Comodo, Verisign, or Thawte.

When you visit an HTTPS site, it provides a digital certificate to your web browser. This certificate verifies that your session is strongly encrypted and supplies information about the site, such as the organization’s name and the certificate authority that issued the certificate. You can see all of this information, and the digital certificate itself, by clicking on the little padlock in your web browser’s address bar (Figure 1).

Figure 1: Click on the padlock in your web browser’s address bar for information.
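
You can also inspect a site’s certificate from the command line with OpenSSL; this is a sketch, with example.com standing in for the site you want to check:

$ openssl s_client -connect example.com:443 -servername example.com </dev/null 2>/dev/null \
    | openssl x509 -noout -subject -issuer -dates   # print the owner, issuer, and validity dates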

The major web browsers, including Opera, Firefox, Chromium, and Chrome, all rely on the certificate authority to verify the authenticity of the site’s digital certificate. The little padlock gives the status at a glance; green = strong SSL encryption and verified identity. Web browsers also warn you about malicious sites and sites with incorrectly configured SSL certificates, and they treat self-signed certificates as untrusted.

So how do web browsers know who to trust? Browsers include a root store, a batch of root certificates, which are stored in /usr/share/ca-certificates/mozilla/. Site certificates are verified against your root store. Your root store is maintained by your package manager, just like any other software on your Linux system. On Ubuntu, they are supplied by the ca-certificates package. The root store itself is maintained by Mozilla for Linux.
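
You can browse the root store yourself; on Debian or Ubuntu, something like this shows what is installed (paths on other distributions may differ):

$ ls /usr/share/ca-certificates/mozilla/ | head -n5   # individual root certificates
$ ls /etc/ssl/certs | wc -l                           # the consolidated certificate store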

As you can see, it takes a complex infrastructure to make all of this work. If you perform any sensitive online transactions, such as shopping or banking, you are trusting a whole lot of unknown people to protect you.

Encryption Everywhere

Let’s Encrypt is a global certificate authority, similar to the commercial CAs. Let’s Encrypt was founded by the non-profit Internet Security Research Group (ISRG) to make it easier to secure websites. I don’t consider it sufficient for shopping and banking sites, for reasons which I will get to shortly, but it’s great for securing blogs, news, and informational sites that don’t have financial transactions.

There are at least three ways to use Let’s Encrypt. The best way is with the Certbot client, which is maintained by the Electronic Frontier Foundation (EFF). This requires shell access to your site.

If you are on shared hosting then you probably don’t have shell access. The easiest method in this case is using a host that supports Let’s Encrypt.

If your host does not support Let’s Encrypt, but supports custom certificates, then you can create and upload your certificate manually with Certbot. It’s a complex process, so you’ll want to study the documentation thoroughly.

When you have installed your certificate, use SSL Server Test to test your site.

Let’s Encrypt digital certificates are good for 90 days. When you install Certbot, it should also install a cron job for automatic renewal, and it includes a command to test that automatic renewal works. You may use your existing private key or certificate signing request (CSR), and Let’s Encrypt supports wildcard certificates.
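
As a rough sketch, on an Ubuntu server running Nginx the whole dance looks something like this (the package and plugin names are assumptions; adjust for your distribution and web server):

$ sudo apt install certbot python3-certbot-nginx   # install Certbot and the Nginx plugin
$ sudo certbot --nginx -d example.com              # obtain and install a certificate
$ sudo certbot renew --dry-run                     # verify that automatic renewal will work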

Limitations

Let’s Encrypt has some limitations: it performs only domain validation, that is, it issues a certificate to whoever controls the domain. This is basic SSL. It does not support Organization Validation (OV) or Extended Validation (EV), because it is not possible to automate identity validation. I would not trust a banking or shopping site that uses Let’s Encrypt; let ’em spend the bucks for a complete package that includes identity validation.

As a free-of-cost service run by a non-profit organization, Let’s Encrypt offers no commercial support, only documentation and community support, both of which are quite good.

The Internet is full of malice. Everything should be encrypted. Start with Let’s Encrypt to protect your site visitors.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

New Deepin Linux Gets Even Better With Touchscreen Gesture Support

I still stand behind my claim that Deepin is one of the slickest and most beautiful desktop Linux distributions on the planet. Beyond that, it honestly blows away Windows 10 and macOS in terms of visual appeal. It has a number of innovative features that feel way ahead of the curve compared to more “mainstream” distros like Ubuntu and Linux Mint. Now the developers are killing some bugs and bringing a few more notable features to the OS with the new Deepin 15.9 update….

Read more at Forbes

What Metrics Matter: A Guide for Open Source Projects

“Without data, you’re just a person with an opinion.”

Those are the words of W. Edwards Deming, the champion of statistical process control, who was credited as one of the inspirations for what became known as the Japanese post-war economic miracle of 1950 to 1960. Ironically, Japanese manufacturers like Toyota were far more receptive to Deming’s ideas than General Motors and Ford were.

Community management is certainly an art. It’s about mentoring. It’s about having difficult conversations with people who are hurting the community. It’s about negotiation and compromise. It’s about interacting with other communities. It’s about making connections. In the words of Red Hat’s Diane Mueller, it’s about “nurturing conversations.”

However, it’s also about metrics and data.

Some metrics have much in common with software development projects more broadly. Others are more specific to the management of the community itself. I think of deciding what to measure, and how, as adhering to five principles.

Read more at OpenSource.com

Back to Basics: Sort and Uniq

Learn the fundamentals of sorting and de-duplicating text on the command line.

If you’ve been using the command line for a long time, it’s easy to take the commands you use every day for granted. But, if you’re new to the Linux command line, there are several commands that make your life easier that you may not stumble upon automatically. In this article, I cover the basics of two commands that are essential in anyone’s arsenal: sort and uniq.

The sort command does exactly what it says: it takes text data as input and outputs sorted data. There are many scenarios on the command line when you may need to sort output, such as the output from a command that doesn’t offer sorting options of its own (or the sort arguments are obscure enough that you just use the sort command instead). In other cases, you may have a text file full of data (perhaps generated with some other script), and you need a quick way to view it in a sorted form.

Let’s start with a file named “test” that contains three lines:


Foo
Bar
Baz
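
As a taste of what the article covers, here is a quick sketch of the two commands on that file:

$ sort test          # sorts the lines alphabetically
Bar
Baz
Foo
$ sort test | uniq   # uniq drops adjacent duplicate lines, which is why you sort first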

Read more at Linux Journal

Bash Shell Utility Reaches 5.0 Milestone

As we look forward to the release of Linux Kernel 5.0 in the coming weeks, we can enjoy another venerable open source technology reaching the 5.0 milestone: the Bash shell utility. The GNU Project has launched the public version 5.0 of GNU/Linux’s default command language interpreter. Bash 5.0 adds new shell variables and other features and also repairs several major bugs.

New shell variables in Bash 5.0 include BASH_ARGV0, which “expands to $0 and sets $0 on assignment,” says the project. The EPOCHSECONDS variable expands to the time in seconds since the Unix epoch, and EPOCHREALTIME does the same, but with microsecond granularity.
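
A sketch of the new variables at the prompt (the timestamps shown are illustrative, not real output):

$ echo $EPOCHSECONDS    # seconds since the Unix epoch
1548979200
$ echo $EPOCHREALTIME   # the same, with microsecond granularity
1548979200.123456
$ echo $BASH_ARGV0      # expands to $0
bash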

New features include a “history -d” builtin option that can remove ranges of history entries and understands negative arguments as offsets from the end of the history list. There is also a new option called “localvar_inherit” that allows local variables to inherit the value of a variable with the same name at the nearest preceding scope.
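
A quick sketch of both (the entry numbers are placeholders):

$ history -d -1               # delete the most recent history entry
$ history -d 100-110          # delete entries 100 through 110 in one shot
$ shopt -s localvar_inherit   # turn on local-variable inheritance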

A new shell option called “assoc_expand_once” causes the shell to attempt to expand associative array subscripts only once, which may be required when they are used in arithmetic expressions. Among many other new features, a new option is available that can disable sending history to syslog at runtime. In addition, the “globasciiranges” shell option is now enabled by default.
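
Both are shell options toggled with shopt; a sketch:

$ shopt -s assoc_expand_once   # expand associative array subscripts only once
$ shopt globasciiranges        # confirm the new default
globasciiranges 	on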

Bash 5.0 also fixes several major bugs. It overhauls how nameref variables resolve and fixes “a number of potential out-of-bounds memory errors discovered via fuzzing,” says the GNU Project’s readme. Changes have been made to the “expansion of $@ and $* in various contexts where word splitting is not performed to conform to a Posix standard interpretation.” Other fixes resolve corner cases for Posix conformance.

Finally, Bash 5.0 introduces a few incompatibilities compared to the most recent Bash 4.4.x. For example, changes to how nameref variables are resolved can cause different behaviors for some uses of namerefs.

Bash to basics

Bash (Bourne-Again Shell) may be 5.0 in development years, but it’s a lot older in Earth orbits. The utility will soon celebrate its 30th anniversary: Brian Fox released the Bash 1.0 beta in June 1989.

Over the years, Bash has expanded upon the POSIX shell spec with interactive command line editing, history substitution, brace expansion, and on some architectures, job control features. It has also borrowed features from the Korn shell (ksh) and the C shell (csh). Most sh scripts can be run by Bash without modification, says the GNU Project.

Bash and other Bourne-based shell utilities have largely survived the introduction of GUI alternatives to the command line such as Git GUI. Experienced Linux developers — and especially sysadmins — tend to prefer the greater speed and flexibility of working directly with the command line. There are also situations where the GUI will spit you back to the command line anyway.

It’s really a matter of whether you will be spending enough time doing Linux development or administration to make it worthwhile to learn the commands. Besides, in a movie, isn’t it more exciting to watch the hacker frantically clacking away at the command line to disable the nuclear weapon rather than clicking options off a menu? Clacking rules!

Bash 5.0 is available for download from the GNU Project’s Bash 5.0 readme page.

Faucet: An Open Source SDN Controller for High-Speed Production Networks

Open standards such as OpenFlow and P4 promised to improve the landscape by opening access to network devices via a programmable API, but they still require someone to write a controller to re-implement normal switch functionality, such as forwarding and routing, in a multi-vendor, standards-compliant way. This led our group to write the Faucet software-defined network (SDN) controller, which allows anyone to fully realize the dream of programmable networks.

Faucet is a compact, open source OpenFlow controller that enables users to run their networks the same way they run server clusters. Faucet makes networking approachable to all by bringing the DevOps workflow to networking. It does this by making network functions (like routing protocols, neighbor discovery, and switching algorithms) easy to manage, test, and extend by moving them to regular software that runs on a server, versus the traditional approach of embedding these functions in the firmware of a switch or router. Faucet works by ingesting a YAML configuration file that represents the network topology and required network functionality, and it does the work to program every device on the network with OpenFlow.

Read more at OpenSource.com

Ansible vs. Puppet: Declarative DevOps Tools Square Off

DevOps aims to drive collaboration between development and operations teams, but software quality drives DevOps adoption more than any other factor. As this comparison of Ansible vs. Puppet shows, software quality dramatically influences DevOps tools.

Software quality tends to be an organizational goal or a staff function, not the dominion of a dedicated group with broad responsibility to implement its decisions. Effective software quality efforts involve everyone from development to production users to ensure real value.

Puppet and Ansible are declarative configuration management and automation tools used in DevOps shops. They both help organizations ensure software quality. Evaluate Ansible vs. Puppet to determine how each product fits the software quality-driven requirements for DevOps.

Read more at TechTarget

An Introduction to the Machine Learning Platform as a Service

Machine-Learning-Platform-as-a-Service (ML PaaS) is one of the fastest-growing services in the public cloud. It delivers efficient lifecycle management of machine learning models.

At a high level, there are three phases involved in training and deploying a machine learning model. These phases remain the same from classic ML models to advanced models built using sophisticated neural network architecture.

Provision and Configure Environment

Before the actual training takes place, developers and data scientists need a fully configured environment with the right hardware and software configuration.

Read more at The New Stack

Linux Tools: The Meaning of Dot

Let’s face it: writing one-liners and scripts using shell commands can be confusing. Many of the names of the tools at your disposal are far from obvious in terms of what they do (grep, tee and awk, anyone?) and, when you combine two or more, the resulting “sentence” looks like some kind of alien gobbledygook.

None of the above is helped by the fact that many of the symbols you use to build a chain of instructions can mean different things depending on their context.

Location, location, location

Take the humble dot (.), for example. Used with instructions that are expecting the name of a directory, it means “this directory,” so this:

find . -name "*.jpg"

translates to “find in this directory (and all its subdirectories) files that have names that end in .jpg”.

Both ls . and cd . act as expected, so they list and “change” to the current directory, respectively, although including the dot in these two cases is not necessary.

Two dots, one after the other, in the same context (i.e., when your instruction is expecting a directory path) means “the directory immediately above the current one”. If you are in /home/your_directory and run

cd ..

you will be taken to /home. So, you may think this still kind of fits into the “dots represent nearby directories” narrative and is not complicated at all, right?

How about this, then? If you use a dot at the beginning of a directory or file name, the directory or file will be hidden:

$ touch somedir/file01.txt somedir/file02.txt somedir/.secretfile.txt
$ ls -l somedir/
total 0 
-rw-r--r-- 1 paul paul 0 Jan 13 19:57 file01.txt 
-rw-r--r-- 1 paul paul 0 Jan 13 19:57 file02.txt 
$ # Note how there is no .secretfile.txt in the listing above
$ ls -la somedir/
total 8 
drwxr-xr-x  2 paul paul 4096 Jan 13 19:57 . 
drwx------ 48 paul paul 4096 Jan 13 19:57 .. 
-rw-r--r--  1 paul paul    0 Jan 13 19:57 file01.txt 
-rw-r--r--  1 paul paul    0 Jan 13 19:57 file02.txt 
-rw-r--r--  1 paul paul    0 Jan 13 19:57 .secretfile.txt
$ # The -a option tells ls to show "all" files, including the hidden ones

And then there’s when you use . as a command. Yep! You heard me: . is a full-fledged command. It is a synonym of source, and you use it to execute a file in the current shell, as opposed to running a script some other way (which usually means Bash will spawn a new shell in which to run it).

Confused? Don’t worry — try this: Create a script called myscript that contains the line

myvar="Hello"

and execute it the regular way, that is, with sh myscript (or by making the script executable with chmod a+x myscript and then running ./myscript). Now try to see the contents of myvar with echo $myvar (spoiler: you will get nothing). This is because, when your script plunks “Hello” into myvar, it does so in a separate Bash shell instance. When the script ends, the spawned instance disappears and control returns to the original shell, where myvar never even existed.

However, if you run myscript like this:

. myscript

echo $myvar will print Hello to the command line.
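
Side by side, the difference looks like this (a sketch, assuming myscript contains only the myvar assignment from above):

$ sh myscript      # runs in a child shell; myvar vanishes when it exits
$ echo $myvar

$ . myscript       # runs in your current shell
$ echo $myvar
Hello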

You will often use the . (or source) command after making changes to your .bashrc file, like when you need to expand your PATH variable. You use . to make the changes available immediately in your current shell instance.
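
For example, after adding a directory to PATH in your .bashrc:

$ . ~/.bashrc   # re-read the config; the new PATH takes effect in this shell immediately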

Double Trouble

Just like the seemingly insignificant single dot has more than one meaning, so does the double dot. Apart from pointing to the parent of the current directory, the double dot (..) is also used to build sequences.

Try this:

echo {1..10}

It will print out the list of numbers from 1 to 10. In this context, .. means “starting with the value on my left, count up to the value on my right”.

Now try this:

echo {1..10..2}

You’ll get 1 3 5 7 9. The ..2 part of the command tells Bash to print the sequence not one by one, but two by two. In other words, you’ll get all the odd numbers from 1 to 10.

It works backwards, too:

echo {10..1..2}
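
This time you’ll get 10 8 6 4 2: the count runs down from 10, two by two.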

You can also pad your numbers with 0s. Doing:

echo {000..121..2}

will print out every even number between 0 and 121, like this:

000 002 004 006 ... 050 052 054 ... 116 118 120 

But how is this sequence-generating construct useful? Well, suppose one of your New Year’s resolutions is to be more careful with your accounts. As part of that, you want to create directories in which to classify your digital invoices of the last 10 years:

mkdir {2009..2019}_Invoices

Job done.

Or maybe you have hundreds of numbered files, say, frames extracted from a video clip, and, for whatever reason, you want to remove only every third frame between frames 43 and 61:

rm frame_{043..61..3}

It is likely that, if you have more than 100 frames, they will be named with padded 0s and look like this:

frame_000 frame_001 frame_002 ...

That’s why you will use 043 in your command instead of just 43.

Curly~Wurly

Truth be told, the magic of sequences lies not so much in the double dot as in the sorcery of the curly braces ({}). Look how it works for letters, too. Doing:

touch file_{a..z}.txt

creates the files file_a.txt through file_z.txt.

You must be careful, however. Using a sequence like {Z..a} will run through a bunch of non-alphanumeric characters (glyphs that are neither numbers nor letters) that live between the uppercase alphabet and the lowercase one. Some of these glyphs are unprintable or have a special meaning of their own. Using them to generate names of files could lead to a whole bevy of unexpected and potentially unpleasant effects.
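
You can preview those in-between glyphs harmlessly with echo (printing them is safe; using them in file names is not):

$ echo {Z..a}
Z [ \ ] ^ _ ` a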

One final thing worth pointing out about sequences encased between {...} is that they can also contain lists of strings:

touch {blahg,splurg,mmmf}_file.txt

creates blahg_file.txt, splurg_file.txt, and mmmf_file.txt.

Of course, in other contexts, the curly braces have different meanings (surprise!). But that is the stuff of another article.

Conclusion

Bash and the utilities you can run within it have been shaped over decades by system administrators looking for ways to solve very particular problems. To say that sysadmins and their ways are their own breed of special would be an understatement. Consequently, as opposed to other languages, Bash was not designed to be user-friendly, easy or even logical.

That doesn’t mean it is not powerful — quite the contrary. Bash’s grammar and shell tools may be inconsistent and sprawling, but they also provide a dizzying range of ways to do everything you can possibly imagine. It is like having a toolbox where you can find everything from a power drill to a spoon, as well as a rubber duck, a roll of duct tape, and some nail clippers.

Apart from being fascinating, it is also fun to discover all you can achieve directly from within the shell, so next time we will delve ever deeper into how you can build bigger and better Bash command lines.

Until then, have fun!

How to Use Netcat to Quickly Transfer Files Between Linux Computers

There’s no shortage of software solutions that can help you transfer files between computers. However, if you do this very rarely, the typical solutions such as NFS and SFTP (through OpenSSH) might be overkill. Furthermore, these services are permanently open to receiving and handling incoming connections; configured incorrectly, they might make your device vulnerable to certain attacks.

netcat, the so-called “TCP/IP swiss army knife,” can be used as an ad-hoc solution for transferring files through local networks or the Internet. It’s also useful for transferring data to/from your virtual machines or containers when they don’t include the feature out of the box. You can even use it as a copy-paste mechanism between two devices.
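
As a preview, here is a minimal sketch of a netcat file transfer; the port, file name, and address are placeholders, and flag syntax differs between the traditional and OpenBSD variants (OpenBSD’s nc drops the -p when listening):

$ # On the receiving machine: listen on a port and write incoming data to a file
$ nc -l -p 9899 > backup.tar.gz
$ # On the sending machine (192.168.1.10 stands in for the receiver's address)
$ nc 192.168.1.10 9899 < backup.tar.gz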

Most Linux-based operating systems come with netcat pre-installed. Open a terminal and give it a try.

Read more at MakeTechEasier