
6 Key Data Strategy Considerations for Your Cloud-Native Transformation

Many organizations are making the move to cloud-native platforms as their strategy for digital transformation. Cloud-native allows companies to deliver fast-responding, user-friendly applications with greater agility. However, the data architecture supporting a cloud-native transformation is often ignored in the hope that it will take care of itself. With data becoming the information currency of every organization, how do you avoid the data mistakes commonly made during this cloud transformation journey? What data questions should you ask when building cloud-native applications? How can you gain valuable insight from your data?

The presentation covers six key considerations companies must address when making the transition to cloud-native. …

While there are many legacy applications that are still SOA-based, the architectural mindset has changed and microservices have gained much popularity. Rather than architecting monolithic applications, developers can achieve many benefits by creating many independent services that work together in concert. A microservice architecture delivers greater agility in application development and simpler codebases; services can be updated and scaled in isolation, and they can be written in different languages and connected to different data tiers and platforms of choice.

Read more at InfoWorld

Addressing the Complexity of Big Data with Open Source

Just like a zoo with hundreds of different species and exhibits, the big data stack is created from more than 20 different projects developed by committers and contributors of the Apache Software Foundation. Each project has its own complex dependency structure, which, in turn, builds on others very much like Russian stacking dolls (matryoshka). Further, all of these projects have their own release trains, where different forks might include different features or use different versions of the same library. When combined, there are a lot of incompatibilities, and many of the components rely on each other to work properly, as is the case in any software stack. For example, Apache HBase and Apache Hive depend on Apache Hadoop’s HDFS. In this environment, is it even possible to consistently produce software that would work when deployed to a hundred computers in a data center?…

All of these moving parts effectively serve one purpose: to create packages from known building blocks and transfer them to a different environment (dev, QA, staging, and production) so that no matter where they are deployed, they will work the same way. The deployment mechanism needs to control the state of the target system. Relying on a state machine like Puppet or Chef has many benefits. You can forget about messy shell or Python scripts to copy files, create symlinks, and set permissions. Instead, you define “the state” that you want the target system to be in, and the state machine will execute the recipe and guarantee that the end state is as you specified. The state machine controls the environment instead of assuming one. These properties are great for operations at scale, DevOps, developers, testers, and users, as they know what to expect.
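The idea of declaring an end state rather than scripting steps can be sketched in plain shell (a toy illustration of the principle, not real Puppet or Chef syntax; the paths are made up): each "resource" checks the current state and acts only when it differs from the desired one, so running the script twice is harmless.

```shell
#!/bin/sh
# Toy "state machine": each function declares a desired state and acts
# only if the system is not already in it, so re-runs change nothing.

ensure_dir() {                      # state: directory $1 exists
    [ -d "$1" ] || mkdir -p "$1"
}

ensure_mode() {                     # state: $1 has permissions $2
    [ "$(stat -c %a "$1")" = "$2" ] || chmod "$2" "$1"
}

ensure_link() {                     # state: $2 is a symlink to $1
    [ "$(readlink "$2" 2>/dev/null)" = "$1" ] || ln -sfn "$1" "$2"
}

# Declare the end state; running this script twice is harmless.
ensure_dir  /tmp/demo-app/conf
ensure_mode /tmp/demo-app/conf 750
ensure_link /tmp/demo-app/conf /tmp/demo-app/current-conf
```

Real state machines add dependency ordering, reporting, and many more resource types, but the check-then-act pattern above is the core of what "defining the state" means.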

Read more at DZone

6 Industrial Touch-Panel Computers Based on the Raspberry Pi

In the smart home, voice agents are increasingly replacing the smartphone touchscreen interface as the primary human-machine interface (HMI). Yet, in noisier industrial and retail IoT environments, touchscreens are usually the only choice. The industrial touch-panel computer market has been in full swing for over a decade. Touch-panel systems based on Linux, and to a lesser extent, Android, are gaining share from those that use the still widely used Windows Embedded, and over the past year, several Raspberry Pi based systems have reached market. Here we look at six RPi-based contenders.

The first three models here use the stripped-down Raspberry Pi Compute Module 3 (CM3) while the last three use the full Raspberry Pi 3 Model B SBC. The CM3 gives you the same quad-core, Cortex-A53 Broadcom BCM2387 SoC as the Raspberry Pi 3, but without the real-world ports and built-in WiFi and Bluetooth. (It’s unlikely that we’ll see an RPi Compute Module based on the new Raspberry Pi 3 Model B+, which boosts the clock rate to 1.4GHz and offers faster WiFi and Ethernet, as well as Power-over-Ethernet.)

In addition to the all-in-one devices listed here, many more touchscreens are available for the Raspberry Pi 3 that could be turned toward HMI purposes. These range from the official 7-inch Raspberry Pi Touchscreen, which competes with a variety of third-party 7-inchers, to 10.1-inch models like the Waveshare Raspberry Pi 10.1 inch. There are also numerous smaller screen options that are generally more suitable for home automation than industrial or retail applications.

Any RPi touchscreen add-on can be combined with a Raspberry Pi and applied to HMI use. (Here’s an Instructables how-to on flush mounting the official RPi touchscreen on a wall.)

Purpose-built industrial touch-panel systems add features such as wall-mounting kits and, in some cases, VESA or DIN-rail mounting. Some offer extended temperature support, and one of the systems covered here includes IP65 ingress protection. Most of these systems provide industrial-friendly wide-range power supplies, and some offer opto-isolated interfaces, surge and EMC protection, and UPS.

Interfaces

Most Raspberry Pi touch-panel systems feature capacitive touch, which is generally preferred as being more precise than resistive technology. Several of the screens offer backlighting, extra-wide viewing angles, and higher contrast ratios. Many supply higher brightness (luminance) measured in candela per square meter (cd/m²), a unit which is often referred to as a nit.

The more industrially oriented systems often extend the RPi’s GPIO with various interfaces including serial, CAN, digital input and output (DIO). Other features include a watchdog timer, an IR interface, and a Real Time Clock (RTC). The new Acme CM3-Panel compensates for the CM3’s lack of onboard wireless by offering WiFi and RF radio options.

One alternative, between all-in-one touch-panel computers and a DIY system based on touchscreen add-ons, is an industrial touchscreen sold without an onboard computer. For example, Industrial Shields offers a 10.1-inch resistive Industrial Aluminum EMC Panel PC that supports a bring-your-own Raspberry Pi, as well as Banana Pi and Hummingboard SBCs.

The touch panel computers

Here are some recent Raspberry Pi based touch-panel computers, with information links embedded in the titles. Most of the vendors are European (typically German), but many also have North American distributors:

  • Acme CM3-Panel — This RPi CM3-based touch-panel touched down earlier this month in four wireless and I/O configurations ranging from 95 Euros ($113) to 119 Euros ($142). Standard features include a 7-inch, 800×480 touchscreen with a 90-degree viewing angle, as well as a MIPI-CSI camera connector, 24x GPIO, and a wide-range 12-24V DC input. The $113 model has a USB 2.0 port while the $118 version instead provides 2.4GHz WiFi. The two higher end models offer either USB or WiFi combined with wireless modules that support Acme’s open source 868MHz Yarm RF radio module spec. The Yarm module supports Acme’s ISM 868MHz Energy Harvesting radio nodes, and there are special Yarm GPIOs in addition to the 24x GPIO array. The CM3-Panel, which is only 22mm thick, supports -20 to 70°C temperatures and ships with schematics.

  • Comfile ComfilePi — Comfile’s 7-inch ComfilePi CPi-A070WR and 10.2-inch ComfilePi CPi-A102WR combine an RPi CM3 with 800 x 480, resistive touchscreens. They offer IP65 protection against ingress and support 0 to 70°C temperatures. The ComfilePi is further equipped with a 10/100 Ethernet port, 3x USB 2.0 ports, a microSD slot, and an audio jack. Serial and I2C interfaces are expressed via terminal pin connectors, and there is a 12-24V input and 5V output. Saelig sells Korea-based Comfile’s systems in North America for $226 (7-inch) and $340 (10.2-inch).

  • Distec POS-Line IoT — Aimed at Point-of-Sale (PoS), HMI, and signage, Distec’s POS-Line IoT stands out with an LVDS-driven, 10.1-inch capacitive multitouch screen with 1920 x 1200 resolution. The backlit screen offers 170-degree viewing angles and 500-nit luminance. The system is a pre-assembled version of a starter kit offered for Distec’s Artista-IoT board, which incorporates a Raspberry Pi CM3 module. The Artista-IoT provides a scaler chip that enables display functions such as DICOM pre-set, gamma correction, and color calibration. The board and touch-panel both furnish RPi 3-like ports except that there are only three USB ports. Internal features include 10x GPIO, 3x UART, 2x I2C, and an I2C and USB touch sensor interface. You also get IR and OSD keypad interfaces, plus an RTC and an 8-36V or 12V power supply. U.S. customers can buy the system from Apollo Displays.

  • Janz Tec emVIEW-7/RPI3 — This 7-inch, 800 x 480 capacitive multitouch touch-panel is based on Janz Tec’s emPC-A/RPI3 industrial controller, which is built around a Raspberry Pi 3 SBC. Targeted at industrial HMI applications, the emVIEW-7/RPI3 has a backlit, 350-nit screen. In addition to the exposed ports of the RPi 3, you get 8-bit DIO, a serial debug port, and an interface that supports serial and CAN. Sold in North America by Saelig for $665, the DIN-rail mountable system offers a 9-32V input and a 0 to 45°C range.

  • MASS RPI-07 — Like the emVIEW-7/RPI3, the RPI-07  is a 7-inch, 800 x 480 system built around a Raspberry Pi 3 SBC. The screen offers 10-finger multitouch, 250 nits, and 500:1 contrast. Most of the RPi 3’s ports are exposed, and the HDMI port is available internally and can be accessed via knockouts. There’s a GPIO connector that supports an RTC or options including DIO cards with optocouplers or analog inputs and outputs. The RPI-07 provides 12V and 24V inputs and supports flush-mounted panel PC configurations or VESA 75 arm or foot mounting. No pricing was listed.

  • Sfera Labs Strato Pi Touch Display — Available directly from Italy-based Sfera Labs, the Strato Pi Touch Display comes pre-assembled with a Raspberry Pi 3 with exposed ports plus the official 7-inch Raspberry Pi Touchscreen with 800 x 480 resolution and 10-finger touch. You can pair your Pi with one of three Strato boards. The 425-Euro ($523) option gives you a Strato Pi Mini, which adds a surge-protected 9-28V terminal block input with an RTC, battery, and buzzer. The 459-Euro ($543) Base model adds to the Mini features with opto-isolated RS-232 and RS-485 interfaces, LEDs, and a watchdog. The 494-Euro ($586) UPS model adds a UPS unit based on an external lead-acid 12V battery, plus special GPIO pins and an LED dedicated to UPS. The device is protected per EN61000-6-2 (EMC) and EN60664-1 (electrical safety).

A Guide to Git Branching

In this third article on getting started with Git, learn how to add and delete Git branches.

In my two previous articles in this series, we started using Git and learned how to clone, modify, add, and delete Git files. In this third installment, we’ll explore Git branching and why and how it is used.

Picture this tree as a Git repository. It has a lot of branches, long and short, stemming from the trunk and stemming from other branches. Let’s say the tree’s trunk represents a master branch of our repo. I will use master in this article as an alias for “master branch”—i.e., the central or first branch of a repo. To simplify things, let’s assume that the master is a tree trunk and the other branches start from it.
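The basics the article walks through — creating, listing, and deleting branches — look like this on the command line (a minimal sketch using a throwaway repository under /tmp):

```shell
# Throwaway repository to practice branching in
rm -rf /tmp/git-branch-demo
mkdir -p /tmp/git-branch-demo && cd /tmp/git-branch-demo
git init -q .
git config user.email "demo@example.com"   # local identity, demo only
git config user.name  "Demo"
echo "trunk" > README
git add README && git commit -q -m "first commit"

git branch my-feature     # create a branch off the current commit
git branch                # list branches; the current one is marked with *
git checkout my-feature   # switch to the new branch (newer Git: git switch)
git checkout -            # jump back to the previous branch
git branch -d my-feature  # delete the branch (safe: it is fully merged)
```

The lowercase -d refuses to delete a branch that still has unmerged work; -D forces the deletion.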

Read more at OpenSource.com

How to Speak Linux

I didn’t even stop to imagine that people pronounced Linux commands differently until many years ago when I heard a co-worker use the word “vie” (as in “The teams will vie for the title”) for what I’d always pronounced “vee I.” It was a moment I’ll never forget.

… Unix commands evolved with a number of different pronunciation rules. The names of some commands (like “cat”) were derived from words (like “concatenate”) and were pronounced as if they were words, too (some actually are). Others derived from phrases like “cpio,” which pull together the idea of copying (cp) and I/O. Others are simply abbreviations, such as “cd” for “change directory.” 

Some commands are basically pronounced as if we are spelling them out loud — like “el es” for ls and “pee double-u dee” for pwd — while others, like “chown” (rhyming with “clown”), are read as if they are words. And since many Linux users might first be exposed to the Linux command line on some old PC that they decided to put to better use, they may never hear other people saying Linux commands out loud. So, in today’s post, I’m going to explain how I pronounce Linux commands and how I’ve heard some others going in different directions.

Read more at Network World

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

Containers and Microservices and Serverless, Oh My!

A new round of buzzword-heavy technologies are becoming relevant to—or at least discussed among—developers, operations professionals, and the tech staff who lead them. Need to come up to speed on the changing cloud and container trends and technologies? If you feel out of the loop, this tech-transfer explainer should provide enlightenment.

Once upon a time, virtual machines changed how we thought about servers. Then, the cloud changed how we thought about IT. Now, containers have started a new transformation. The latest entry is “serverless”—though I should point out immediately that the term serverless is a misnomer. Future cloud-native applications will consist of both microservices and functions, often wrapped as Linux containers.

VMs and the cloud enabled DevOps, the practice of developers and IT operations staff collaborating to optimize technology processes. Cloud technologies’ dynamic compute and storage resources made it easier to provision resources. The idea behind DevOps is that developers no longer need to worry about infrastructure because that’s taken care of in the background by programs such as Ansible, Chef, and Puppet.

Read more at HPE 

Purism Partners with Nitrokey to Reinforce the Security of Their Linux Laptops

Purism, the maker of Linux-powered laptops, announced today a partnership with Nitrokey, a maker of Free Software and Open Hardware USB GPG SmartCards and Hardware Security Modules (HSMs), to create a GPG-based SmartCard called Purekey.

Purism has always tried to offer its customers some of the most secure and privacy-aware laptops with the Librem 13 and 15 lineups, and it is now working to deliver the privacy-focused Librem 5 smartphone powered by PureOS.

Read more at Softpedia

See also: Nitrokey Digital Tokens for Linux Kernel Developers

 

How to Survive Embedded Linux – Part 1 The Embedded Linux Development Process


The Linux kernel can run on many different computer architectures, most of which are quite popular in the embedded world. All of the base packages allowing the OS to perform its basic tasks are suitable for cross-compilation; therefore, Linux can be as pervasive as microcontrollers and Systems on Chip (SoCs).

A Linux distribution is an operating system made from a software collection, which is based upon the Linux kernel and – often – a package management system. The distribution may come as pre-compiled binaries and packages put together by the distribution maintainers, or as sources paired with instructions on how to (cross-)compile them.

In the embedded domain, since the hardware platform is usually bespoke, the OS designer generally prefers to generate the distribution from scratch, starting from sources. This gives the designer absolute control over what ends up in the product. Furthermore, the Board Support Package (BSP) engineer modifies the low-level code in order to make the core functionality of the OS work on the specific hardware product.

Getting all the necessary software components together to generate the Linux distribution for a particular embedded product used to be a nightmare, but thankfully this is no longer the case.

Many have shared with the open source community the sources of build systems capable of fetching all of the software components off the Internet, compiling them, and linking them together, up to generating installation images of fully fledged operating systems. A few companies develop and maintain their own build systems; others compile just a few of the core components and then take pre-built binaries to finalize the OS.

In 2010, a workgroup from the Linux Foundation started to address the tools and processes needed to create Linux distributions for embedded software (a.k.a. Embedded Linux). This workgroup, known as the Yocto Project, aligned itself with Open Embedded, a framework with similar goals.

The Yocto Project is an open source project whose focus is on improving the software development process for embedded Linux distributions. The Yocto Project provides interoperable tools, metadata, and processes that enable the rapid, repeatable development of Linux-based embedded systems.

The Yocto Project currently powers the most popular Linux distributions for embedded systems, to the point where the terms “Embedded Linux” and “Yocto Project” are sometimes confused as synonyms. Yocto is not an embedded Linux distribution; it creates a custom one for you.

Yocto’s meta-layers layout

The modern version of Yocto’s architecture is based on meta-layers, which are directories containing configuration files and rules on how to compile and assemble Linux distributions for embedded systems.

Usually, but not always, a meta-layer lives in its own Git repository, and provides:

  • its own packages (defined by recipes, .bb files),
  • modifications to packages provided by other meta-layers (.bbappend files),
  • machines (.conf files),
  • configuration files (.conf files),
  • common code  (.bbclass files),
  • licenses,
  • and other minor bits and pieces.
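To make the layout concrete, here is a sketch of a minimal, hypothetical meta-layer (the layer name and file names are invented; the layer.conf variables follow the usual BitBake pattern):

```shell
# Sketch the directory layout of a hypothetical meta-layer, meta-acme
mkdir -p /tmp/layers && cd /tmp/layers
mkdir -p meta-acme/conf/machine \
         meta-acme/recipes-core/helloworld \
         meta-acme/classes

# conf/layer.conf tells BitBake where to find the layer's recipes
cat > meta-acme/conf/layer.conf <<'EOF'
BBPATH .= ":${LAYERDIR}"
BBFILES += "${LAYERDIR}/recipes-*/*/*.bb ${LAYERDIR}/recipes-*/*/*.bbappend"
BBFILE_COLLECTIONS += "acme"
BBFILE_PATTERN_acme = "^${LAYERDIR}/"
EOF

touch meta-acme/conf/machine/acme-board.conf              # a machine (.conf)
touch meta-acme/recipes-core/helloworld/helloworld_1.0.bb # a recipe (.bb)
touch meta-acme/classes/acme-image.bbclass                # shared code (.bbclass)

find meta-acme -type f   # show the resulting layout
```

A real layer would of course fill those files with content, but the skeleton shows how recipes, machines, and classes map onto the directory tree.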

A single meta-layer normally addresses a specific purpose. Therefore, to achieve a fully working system, multiple meta-layers need to be combined.

Picking and matching versions

When putting different software components together, one needs to be mindful of the version of each component, as the wrong version may not work well with the other components or may even break the system.

The Yocto Project provides releases of components known to work well together, but that is just the starting point for your product.

The Linux kernel is a big chunk of code that needs to expose the right interfaces to user space, and it has to contain the right drivers for the system to work properly. The role of the silicon vendor has therefore become more and more important, as vendors usually have their own development repositories for the Linux kernel and the bootloader; hence, they are best placed to put together a working base system based on their technology.

Google’s repo

Originally developed to cope with the multitude of Git repositories in an Android project, repo has become quite popular among Yocto developers too.

Repo is a tool built on top of Git; it uses a “manifest file” to clone and pull a set of Git repositories at the same time.
A repo manifest is an XML document containing references to Git repositories (along with their versions). Repo can use the manifest to populate a directory with all of the sources coming from the several Git repositories required to build a project.

Also, the same manifest may be used by repo to keep the project sources in sync when upstream makes changes.

A few silicon vendors provide manifests for their development and release branches these days, so that designers can easily check out the starting point for their own products.

Yocto-based product development

The BSP engineer usually starts from the silicon vendor's repo manifest to check out the version of the software for the reference design (that is, a design provided by the silicon vendor itself or one of its partners, containing the same or a similar SoC to the one the new product is based on). The engineer makes changes to the bootloader and the Linux kernel to make sure the hardware selected by the electronics engineer has proper low-level software support (e.g., device drivers, device tree, kernel configuration, etc.).

The purpose of the product is to run one or more applications, so the BSP/OS engineer makes sure that all of the dependencies of the application(s) are built for the system. The engineers developing the applications need a Software Development Kit (SDK) to cross-compile and link them, so the BSP/OS engineer provides such a kit; thanks to Yocto, this has become quite straightforward.

Embedded Linux good practice

The repo manifests used for development usually contain references to development branches, which means repo will fetch the latest commit on those branches.

If you use the same manifest to fetch the project at a later date, you may fetch a different version of the code! This is perfectly fine during development, because you want to stay on the latest version of your project. Eventually, though, one of your development versions will become a release, and you need to “take a picture” of the precise sources used to generate that release. Failing to do so can expose you to legal trouble, as you won't be able to regenerate the same build from sources. It also means you cannot make a change on top of a specific release: you would be forced to fix the bug or add the new feature on top of the latest version of the software, forcing the customer to re-test the entire system.

Also, if you don’t take those snapshots, there is no way you can run a bisect on the project sources to find out which commit has broken the functionality you so desperately need.  When designing your development process, find a way to automatically generate repo manifests with precise commits in them, so that you can save them alongside releases to checkout the same sources again at a later date, and do whatever you are paid to do. 

Copy sources in house

Keep in mind, also, that 99.9% of the sources that go into your product come from the open source community, which means you have no guarantee the same sources will be available for download again. As a designer, you need to protect yourself against changes and mistakes made upstream. Keep a copy of all the relevant sources in house, and find a way to plug them back into your build system. You may also want to mirror the repositories you use the most, as upstream Git servers can suddenly become unavailable. If you don't have an internal copy, you'll be stuck until the servers come back online.
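Git itself makes mirroring straightforward; the sketch below simulates the upstream server with a local bare repository so it is self-contained (all paths are illustrative):

```shell
# Simulate an upstream server with a local bare repository,
# then keep an in-house mirror of it.
rm -rf /tmp/upstream /tmp/mirrors /tmp/work
git init -q --bare /tmp/upstream/linux.git    # stand-in for the vendor server

# One-off: take a full mirror (all branches, tags, and refs)
git clone -q --mirror /tmp/upstream/linux.git /tmp/mirrors/linux.git

# Periodically (e.g. from cron): refresh the mirror from upstream
git -C /tmp/mirrors/linux.git remote update --prune

# Builds then clone from the mirror, not from the flaky upstream
git clone -q /tmp/mirrors/linux.git /tmp/work/linux
```

In a real setup the first clone would point at the upstream URL, and the refresh step would run on a schedule so the mirror never lags far behind.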

At ByteSnap, we have a fully automated way of releasing Yocto-based projects, such that we can recover the sources that go into a release and re-build the same release at a later date. We keep copies of open source packages automatically, so we experience no downtime caused by faulty servers around the world. Furthermore, we back everything up every single day, so we can guarantee no work will be lost even in the case of a disaster on site.

Fabrizio Castro
Fab is a senior software engineer. He gained his bachelor and master degrees at Politecnico di Milano, in Milan, Italy. He has 20 years’ experience of all-round software development (services, data bases, applications, scientific software, firmware, RTOS, device drivers, Linux kernel, etc.), spent working in academia and industry. Fab has co-authored scientific papers and books, and worked on patents. As well as research and development, he specialises in Embedded Linux development – delivering state-of-art designs powering successful scientific, industrial, commercial, and military products. Fab has also been a lecturer and has taught undergraduates at some of the most prestigious universities in Europe. For more information about ByteSnap Design visit http://www.bytesnap.co.uk.

 

A Beginner’s Guide to Linux

Many people have heard of Linux, but most don’t really know what it is. Linux is an operating system that can perform the same functions as Windows 10 and Mac OS. The key difference is that Linux is open source. In the simplest terms, that means no single person or corporation controls the code. Instead, the operating system is maintained by a dedicated group of developers from around the world. Anyone who is interested can contribute to the code and help check for errors. Linux is more than an operating system; it is a community.

Linux distributions are always changing, so here are a few of the most popular ones. If you are an avid Windows user, then Ubuntu is a great place to start. The visual layout will be familiar for a Windows user, while the more complex aspects of Linux are smoothed away.

Read more at Softonic

Manipulating Directories in Linux

If you are new to this series (and to Linux), take a look at our first installment. In that article, we worked our way through the tree-like structure of the Linux filesystem, or more precisely, the File Hierarchy Standard. I recommend reading through it to make sure you understand what you can and cannot safely touch. Because this time around, I’ll show how to get all touchy-feely with your directories.

Making Directories

Let’s get creative before getting destructive, though. To begin, open a terminal window and use mkdir to create a new directory like this:

mkdir <directoryname>

If you just put the directory name, the directory will appear hanging off the directory you are currently in. If you just opened a terminal, that will be your home directory. In a case like this, we say the directory will be created relative to your current position:

$ pwd #This tells you where you are now -- see our first tutorial
/home/<username>
$ mkdir newdirectory #Creates /home/<username>/newdirectory

(Note that you do not have to type the text following the #. Text following the pound symbol # is considered a comment and is used to explain what is going on. It is ignored by the shell).

You can create a directory within an existing directory hanging off your current location by specifying it in the command line:

mkdir Documents/Letters

This will create the Letters directory within the Documents directory.

You can also create a directory above where you are by using .. in the path. Say you move into the Documents/Letters/ directory you just created and you want to create a Documents/Memos/ directory. You can do:

cd Documents/Letters # Move into your recently created Letters/ directory
mkdir ../Memos

Again, all of the above is done relative to your current position. This is called using a relative path.

You can also use an absolute path to directories: This means telling mkdir where to put your directory in relation to the root (/) directory:

mkdir /home/<username>/Documents/Letters

Change <username> to your user name in the command above and it will be equivalent to executing mkdir Documents/Letters from your home directory, except that it will work from wherever you are located in the directory tree.

As a side note, regardless of whether you use a relative or an absolute path, if the command is successful, mkdir will create the directory silently, without any apparent feedback whatsoever. Only if there is some sort of trouble will mkdir print some feedback after you hit [Enter].

As with most other command-line tools, mkdir comes with several interesting options. The -p option is particularly useful, as it lets you create directories within directories within directories, even if none exist. To create, for example, a directory for letters to your Mom within Documents/, you could do:

mkdir -p Documents/Letters/Family/Mom

And mkdir will create the whole branch of directories above Mom/ and also the directory Mom/ for you, regardless of whether any of the parent directories existed before you issued the command.

You can also create several folders all at once by putting them one after another, separated by spaces:

mkdir Letters Memos Reports

will create the directories Letters/, Memos/ and Reports under the current directory.

In space nobody can hear you scream

… Which brings us to the tricky question of spaces in directory names. Can you use spaces in directory names? Yes, you can. Is it advised you use spaces? No, absolutely not. Spaces make everything more complicated and, potentially, dangerous.

Say you want to create a directory called letters mom/. If you didn’t know any better, you could type:

mkdir letters mom

But this is WRONG! WRONG! WRONG! As we saw above, this will create two directories, letters/ and mom/, but not letters mom/.

Agreed that this is a minor annoyance: all you have to do is delete the two directories and start over. No big deal.

But, wait! Deleting directories is where things get dangerous. Imagine you did create letters mom/ using a graphical tool, like, say Dolphin or Nautilus. If you suddenly decide to delete letters mom/ from a terminal, and you have another directory just called letters/ under the same directory, and said directory is full of important documents, and you tried this:

rmdir letters mom

You would risk removing letters/. I say “risk” because fortunately rmdir, the instruction used to remove directories, has a built-in safeguard and will warn you if you try to delete a non-empty directory.
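You can watch the safeguard at work in a scratch directory:

```shell
# Scratch area: letters/ holds a file, mom/ is empty
rm -rf /tmp/rmdir-demo
mkdir -p /tmp/rmdir-demo/letters /tmp/rmdir-demo/mom
echo "important" > /tmp/rmdir-demo/letters/to_grandma.txt

cd /tmp/rmdir-demo
rmdir letters mom || true   # removes mom/ but refuses the non-empty letters/
ls                          # letters/ survives, contents intact
```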

However, this:

rm -Rf letters mom

(and this is a pretty standard way of getting rid of directories and their contents) will completely obliterate letters/ and will never even tell you what just happened.

The rm command is used to delete files and directories. When you use it with the options -R (delete recursively) and -f (force deletion), it will burrow down into a directory and its subdirectories, deleting all the files they contain, then deleting the subdirectories themselves, then it will delete all the files in the top directory and then the directory itself.

rm -Rf is an instruction you must handle with extreme care.
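By contrast, here is what rm -Rf does to the same kind of layout — try this only inside a scratch directory:

```shell
# Same layout as before, rebuilt in a scratch directory
rm -rf /tmp/rm-demo
mkdir -p /tmp/rm-demo/letters /tmp/rm-demo/mom
echo "important" > /tmp/rm-demo/letters/to_grandma.txt

cd /tmp/rm-demo
rm -Rf letters mom   # both directories and every file inside vanish, silently
ls                   # prints nothing: it is all gone
```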

My advice is, instead of spaces, use underscores (_), but if you still insist on spaces, there are two ways of getting them to work. You can use single or double quotes (' or ") like so:

mkdir 'letters mom'
mkdir "letters dad"

Or, you can escape the spaces. Some characters have a special meaning for the shell. Spaces, as you have seen, are used to separate options and arguments on the command line. “Separating options and arguments” falls under the category of “special meaning”. When you want the shell to ignore the special meaning of a character, you need to escape it, and to escape a character, you put a backslash (\) in front of it:

mkdir letters\ mom
mkdir letters\ dad

There are other special characters that would need escaping, like the apostrophe or single quote ('), double quotes ("), and the ampersand (&):

mkdir mom\ \&\ dad\'s\ letters

I know what you’re thinking: If the backslash has a special meaning (to wit, telling the shell it has to escape the next character), that makes it a special character, too. Then, how would you escape the escape character, which is \?

Turns out, the exact same way you escape any other special character:

mkdir special\\characters

will produce a directory called special\characters.
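If you do insist on spaces, a quick sandbox run shows that quoting and escaping all produce the same kind of result:

```shell
rm -rf /tmp/space-demo
mkdir -p /tmp/space-demo && cd /tmp/space-demo

mkdir 'letters mom'   # single quotes
mkdir "letters dad"   # double quotes
mkdir letters\ sis    # escaped space

ls -1                 # three directories, each with a space in its name
```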

Confusing? Of course. That’s why you should avoid using special characters, including spaces, in directory names.

For the record, here is a list of special characters you can refer to just in case.

Things to Remember

  • Use mkdir <directory name> to create a new directory.
  • Use rmdir <directory name> to delete a directory (only works if it is empty).
  • Use rm -Rf <directory name> to annihilate a directory — use with extreme caution.
  • Use a relative path to create directories relative to your current directory: mkdir newdir
  • Use an absolute path to create directories relative to the root directory (/): mkdir /home/<username>/newdir
  • Use .. to create a directory in the directory above the current directory: mkdir ../newdir
  • You can create several directories all in one go by separating them with spaces on the command line: mkdir onedir twodir threedir
  • You can mix and mash relative and absolute paths when creating several directories simultaneously: mkdir onedir twodir /home/<username>/threedir
  • Using spaces and special characters in directory names guarantees plenty of headaches and heartburn. Don’t do it.

For more information, you can look up the manuals of mkdir, rmdir and rm:

man mkdir
man rmdir
man rm

To exit the man pages, press [q].

Next Time

In the next installment, you’ll learn about creating, modifying, and erasing files, as well as everything you need to know about permissions and privileges. See you then!

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.