
A Beginner’s Guide to Linux

Many people have heard of Linux, but most don’t really know what it is. Linux is an operating system that can perform the same functions as Windows 10 and macOS. The key difference is that Linux is open source. In the simplest terms, that means no single person or corporation controls the code. Instead, the operating system is maintained by a dedicated group of developers from around the world. Anyone who is interested can contribute to the code and help check for errors. Linux is more than an operating system; it is a community.

Linux distributions are always changing, so here are a few of the most popular ones. If you are an avid Windows user, then Ubuntu is a great place to start. Its visual layout will feel familiar to a Windows user, while the more complex aspects of Linux are smoothed away.

Read more at Softonic

Manipulating Directories in Linux

If you are new to this series (and to Linux), take a look at our first installment. In that article, we worked our way through the tree-like structure of the Linux filesystem, or more precisely, the File Hierarchy Standard. I recommend reading through it to make sure you understand what you can and cannot safely touch. Because this time around, I’ll show how to get all touchy-feely with your directories.

Making Directories

Let’s get creative before getting destructive, though. To begin, open a terminal window and use mkdir to create a new directory like this:

mkdir <directoryname>

If you just put the directory name, the directory will appear hanging off the directory you are currently in. If you just opened a terminal, that will be your home directory. In a case like this, we say the directory will be created relative to your current position:

$ pwd #This tells you where you are now -- see our first tutorial
/home/<username>
$ mkdir newdirectory #Creates /home/<username>/newdirectory

(Note that you do not have to type the text following the #. Text following the pound symbol # is considered a comment and is used to explain what is going on. It is ignored by the shell).

You can create a directory within an existing directory hanging off your current location by specifying it in the command line:

mkdir Documents/Letters

will create the Letters directory within the Documents directory.

You can also create a directory in the directory above where you are by using .. in the path. Say you move into the Documents/Letters/ directory you just created and you want to create a Documents/Memos/ directory. You can do:

cd Documents/Letters # Move into your recently created Letters/ directory
mkdir ../Memos

Again, all of the above is done relative to your current position. This is called using a relative path.

You can also use an absolute path: this means telling mkdir where to put your directory in relation to the root (/) directory:

mkdir /home/<username>/Documents/Letters

Change <username> to your user name in the command above and it will be equivalent to executing mkdir Documents/Letters from your home directory, except that it will work from wherever you are located in the directory tree.

As a side note, regardless of whether you use a relative or an absolute path, if the command is successful, mkdir will create the directory silently, without any apparent feedback whatsoever. Only if there is some sort of trouble will mkdir print some feedback after you hit [Enter].
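
For example, if you try to create a directory inside a parent that does not exist yet, mkdir will complain (the exact wording of the error may vary slightly from system to system):

$ mkdir doesnotexist/newdirectory #Fails because doesnotexist/ is not there
mkdir: cannot create directory 'doesnotexist/newdirectory': No such file or directory
$ mkdir newdirectory #Succeeds silently: no output at all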

As with most other command-line tools, mkdir comes with several interesting options. The -p option is particularly useful, as it lets you create directories within directories within directories, even if none exist. To create, for example, a directory for letters to your Mom within Documents/, you could do:

mkdir -p Documents/Letters/Family/Mom

And mkdir will create the whole branch of directories above Mom/ and also the directory Mom/ for you, regardless of whether any of the parent directories existed before you issued the command.
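
Another thing -p does for you: it stays quiet if some (or all) of the directories in the path already exist, whereas plain mkdir refuses to create a directory that is already there. A quick illustration (again, the exact error text may differ on your system):

$ mkdir Documents #Fails: Documents/ already exists
mkdir: cannot create directory 'Documents': File exists
$ mkdir -p Documents/Letters/Family/Mom #Succeeds silently, creating only what is missing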

You can also create several folders all at once by putting them one after another, separated by spaces:

mkdir Letters Memos Reports

will create the directories Letters/, Memos/, and Reports/ under the current directory.

In space nobody can hear you scream

… Which brings us to the tricky question of spaces in directory names. Can you use spaces in directory names? Yes, you can. Should you use spaces? No, absolutely not. Spaces make everything more complicated and, potentially, dangerous.

Say you want to create a directory called letters mom/. If you didn’t know any better, you could type:

mkdir letters mom

But this is WRONG! WRONG! WRONG! As we saw above, this will create two directories, letters/ and mom/, but not letters mom/.

Agreed, this is a minor annoyance: all you have to do is delete the two directories and start over. No big deal.

But, wait! Deleting directories is where things get dangerous. Imagine you did create letters mom/ using a graphical tool, like, say, Dolphin or Nautilus. If you suddenly decide to delete letters mom/ from a terminal, and you have another directory just called letters/ under the same parent directory, and said directory is full of important documents, and you tried this:

rmdir letters mom

You would risk removing letters/. I say “risk” because fortunately rmdir, the instruction used to remove directories, has a built-in safeguard and will warn you if you try to delete a non-empty directory.
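
If you are curious, the warning looks something like this (the exact wording depends on your system):

$ rmdir letters
rmdir: failed to remove 'letters': Directory not empty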

However, this:

rm -Rf letters mom

(and this is a pretty standard way of getting rid of directories and their contents) will completely obliterate letters/ and will never even tell you what just happened.

The rm command is used to delete files and directories. When you use it with the options -R (delete recursively) and -f (force deletion), it burrows down into the directory and its subdirectories, deleting all the files they contain, then the subdirectories themselves, and finally the files in the top directory and the top directory itself.

rm -Rf is an instruction you must handle with extreme care.
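
My own habit, and only a suggestion, is to look before deleting and to let rm ask for permission. The -i option makes rm prompt you before every single removal:

$ ls -R Documents/Letters #List everything inside before deleting anything
$ rm -Ri Documents/Letters #rm asks you to confirm each file and directory it removes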

My advice is, instead of spaces, use underscores (_), but if you still insist on spaces, there are two ways of getting them to work. You can use single or double quotes (' or ") like so:

mkdir 'letters mom'
mkdir "letters dad"

Or, you can escape the spaces. Some characters have a special meaning for the shell. Spaces, as you have seen, are used to separate options and arguments on the command line. “Separating options and arguments” falls under the category of “special meaning”. When you want the shell to ignore the special meaning of a character, you need to escape it, and to escape a character, you put a backslash (\) in front of it:

mkdir letters\ mom
mkdir letters\ dad

There are other special characters that would need escaping, like the apostrophe or single quote ('), double quotes ("), and the ampersand (&):

mkdir mom\ \&\ dad\'s\ letters

I know what you’re thinking: If the backslash has a special meaning (to wit, telling the shell it has to escape the next character), that makes it a special character, too. Then, how would you escape the escape character, the backslash (\) itself?

Turns out, the exact same way you escape any other special character:

mkdir special\\characters

will produce a directory called special\characters.

Confusing? Of course. That’s why you should avoid using special characters, including spaces, in directory names.

For the record, there are quite a few other characters the shell treats specially; your shell’s manual (for example, the QUOTING section of man bash) lists them all, just in case.

Things to Remember

  • Use mkdir <directory name> to create a new directory.
  • Use rmdir <directory name> to delete a directory (only works if it is empty).
  • Use rm -Rf <directory name> to annihilate a directory — use with extreme caution.
  • Use a relative path to create directories relative to your current directory: mkdir newdir
  • Use an absolute path to create directories relative to the root directory (/): mkdir /home/<username>/newdir
  • Use .. to create a directory in the directory above the current directory: mkdir ../newdir
  • You can create several directories all in one go by separating them with spaces on the command line: mkdir onedir twodir threedir
  • You can mix and match relative and absolute paths when creating several directories simultaneously: mkdir onedir twodir /home/<username>/threedir
  • Using spaces and special characters in directory names guarantees plenty of headaches and heartburn. Don’t do it.

For more information, you can look up the manuals of mkdir, rmdir and rm:

man mkdir
man rmdir
man rm

To exit the man pages, press [q].

Next Time

In the next installment, you’ll learn about creating, modifying, and erasing files, as well as everything you need to know about permissions and privileges. See you then!

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

Red Hat’s Serverless Blockchain Future Powered by Open Source Innovation

On the final day of the Red Hat Summit last week, Red Hat CTO Chris Wright presided over the closing keynotes where he outlined how his company innovates and hinted at multiple future product developments.

“Innovation in the enterprise is about adapting to change without compromising the business,” Wright said.

Wright added that the Linux operating system is at the core of modern innovation. Whereas a decade or more ago Linux was sometimes seen as a follower in terms of innovation, at this point in 2018 it’s clear that Linux is now the foundation on which new innovations — be they cloud, blockchain, serverless, or Artificial Intelligence (AI) — are built.

Looking deeper than just Linux is the open-source community that enables it, as well as a vast landscape of project code. Wright said Red Hat’s role when it comes to developers is to help provide them with the tools and techniques to deliver business value as code….

Among the emerging areas of technology innovation that Red Hat is now working on is serverless, which is also sometimes referred to as Functions as a Service (FaaS). Red Hat is now working on a project called OpenShift Cloud Functions.

Read more at ServerWatch

Free Webinar on Community-Driven Governance for Open Source Projects

Topics such as licensing and governance are complex but nonetheless critical considerations for open source projects. And understanding and implementing the requirements in a strategic way are key to a project’s long-term health and success. In an upcoming webinar — “Governance Models of Community-Driven Open Source Projects” — The Linux Foundation’s Scott Nicholas will examine various approaches for structuring open source projects with these requirements in mind.

This free, hour-long webinar (at 10:00 am Pacific, May 30, 2018) will address some of the differences that exist in community-driven governance and will explore various case studies, including:

  • “Single-project” projects

  • Unfunded and funded projects

  • Technology-focused umbrella projects

  • Industry-focused umbrella projects

Scott Nicholas, who is Sr. Director of Strategic Programs of The Linux Foundation, will also discuss some common issues faced by new and growing open source projects, including project lifecycle and maturation considerations. He’ll also review differences in licensing models and outline approaches to the licensing of specifications.


As Sr. Director of Strategic Programs, Scott assists in the launch and support of open source projects and contributes to The Linux Foundation’s legal programs. Scott has assisted in setting up numerous projects across the technology stack including R Consortium, Node.js Foundation, Open Mainframe Project, Civil Infrastructure Platform, OpenHPC, the ONAP Project, and the LF Networking Fund. Scott’s professional experience spans both the legal and financial aspects of technology, having worked as a corporate attorney and as an investment analyst covering the technology sector.

Join us Wednesday, May 30, 2018 at 10:00 am Pacific for this free webinar. Register Now.

This article originally appeared at The Linux Foundation.

How to Maximize the Scalability of Your Containerized Environment

One main reason to use containers is that they help make apps, services and environments highly scalable. But that doesn’t happen magically. In order to take full advantage of the benefits of containers, you have to build your stack and configure your environment in ways that maximize scalability.

Below, I take a look at some strategies that can ensure that your containers and the software they host are as scalable as they can be.

Defining Scalability

First, though, let’s spend a moment discussing what scalability means, exactly, and what it looks like in practice.

Scalability can take multiple forms:

  • Being able to increase or decrease the capacity of an application without adding or subtracting deployments. For example, perhaps your web app has 10,000 users per day today and you want it to be able to handle 20,000 without creating a new instance of the app. You could scale in this way by assigning more resources to the app.
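
To make that first approach concrete with a rough sketch: with Docker, for example, you can raise the CPU and memory limits of a running container without redeploying it (the container name mywebapp here is only a placeholder):

docker update --cpus 2 --memory 2g --memory-swap 2g mywebapp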

Read more at Wercker

GitOps: ‘Git Push’ All the Things

While the idea of cloud-native computing promises to change how modern IT operations work, the idea remains vague for many who work in the profession. “GitOps,” an idea that generated some buzz at the KubeCon + CloudNativeCon EU conference in Copenhagen last week, could bring some much-needed focus to this area.

GitOps is all about “pushing code, not containers,” said Alexis Richardson, Weaveworks CEO and chair of the Cloud Native Computing Foundation‘s Technical Oversight Committee, in his KubeCon keynote. The idea is to “make git the center of control” of cloud-native operations, for both the developer and the system administrator, Richardson said.

“The key point of the developer experience is ‘git-push,’” he said, alluding to the git command used to submit code as finished. He added that this approach is “entirely code-centric.”
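
As a minimal sketch of that experience (the file, branch, and commit message here are purely illustrative), an operational change becomes nothing more than an ordinary commit pushed to the repository that drives the cluster:

git add deploy/frontend.yaml
git commit -m "Scale frontend to 5 replicas"
git push origin master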

Read more at The New Stack

PacVim – A CLI Game To Learn Vim Commands

Howdy, Vim users! Today, I stumbled upon a cool utility to sharpen your Vim usage skills. Vim is a great editor for writing and editing code. However, some of you (including me) are still struggling with the steep learning curve. Not anymore! Meet PacVim, a CLI game that helps you learn Vim commands. PacVim is inspired by the classic game PacMan, and it gives you plenty of practice with Vim commands in a fun and interesting way. Simply put, PacVim is a fun, free way to learn the Vim commands in depth. Please do not confuse PacMan with pacman (the Arch Linux package manager). PacMan is a classic, popular arcade game released in the 1980s.

In this brief guide, we will see how to install and use PacVim in Linux.

Install PacVim

First, install Ncurses library.

On Arch-based systems, install ncurses using command:

$ sudo pacman -S ncurses
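
On Debian or Ubuntu based systems, the equivalent step should be something along these lines (package names can vary between releases):

$ sudo apt-get install libncurses5-dev libncursesw5-dev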

Read more at OS Technix

Learn to Create Bootable Linux Flash Drive (using Ubuntu)

To install Ubuntu, any other Linux OS, or even another OS such as Windows, we need either a bootable flash drive or a DVD of the OS. In this tutorial, we will discuss how to create a bootable Linux flash drive using an Ubuntu system.

There are many tools available for almost all operating systems, most of them 3rd-party tools, but Ubuntu has an inbuilt tool for creating a bootable flash drive, called ‘Startup Disk Creator’. We will be using that tool to create a bootable Linux flash drive.

But before we start, we will need a flash drive with a capacity of more than 2 GB and a system with Ubuntu installed on it. Once you have the flash drive and the system, you can proceed with the tutorial.
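
As a side note, if you prefer to skip the GUI entirely, the same job can also be done from the terminal with dd. This is only a sketch: replace ubuntu.iso with the path to your downloaded image and /dev/sdX with your flash drive’s device, and double-check the device with lsblk first, because dd will overwrite whatever you point it at:

$ lsblk #Identify your flash drive, e.g. /dev/sdb
$ sudo dd if=ubuntu.iso of=/dev/sdX bs=4M status=progress && sync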

Read more at LinuxTech Lab

New Technologies Lead to New Linux and Cloud Training Options

At KubeCon + CloudNativeCon in Copenhagen, Denmark recently, I met many users, technologists, and business leaders — all aware of the dramatic pace of innovation and eager to learn about the new technologies coming out of the open source space. With these rapid changes, many companies are now also worried about finding skilled developers who are well versed in the latest technologies.

As the adoption of these cloud native technologies increases, there will be an even greater demand for skilled developers, and I wondered where all those new developers would come from. Who will train and certify them?

New Courses

The answer lies in various training programs and courses, including those from The Linux Foundation, which, in partnership with edX, has been steadily working to close skills gaps by offering online courses — many of which are free — on open source platforms, tools, and practices. The Foundation recently passed the one million mark for people enrolled in courses on edX  and just announced a new Certified Kubernetes Application Developer (CKAD) Exam and corresponding Kubernetes for Developers course through the Cloud Native Computing Foundation.

Many other organizations can also help developers and sysadmins achieve the skills they need.  At KubeCon, I talked with Amy Marrich, a course author at the Linux Academy, which offers courses for all major technologies — including Kubernetes, Docker, DevOps, BigData, and OpenStack — to help developers hone the skills needed to master exams and become certified developers.

In addition to being a course author, Marrich is a core reviewer of the OpenStack Ansible project and also one of the leaders of the Women of OpenStack. She is extremely active in the OpenStack community and also worked on the latest user survey and user committee elections.

Community Experts

Marrich said this connection with the community brings credibility and validity. “Since we are experts in OpenStack and part of the community, if a student has an issue, we know people in the community who could help that student,” said Marrich. “Even if it’s not my project, I am able to find the right person and get help for our students.”

Often students come up with questions that are not part of the course, because some of these technologies are so large and their use cases go beyond what any course can cover. “Being an active member of the community provides us with the ability to answer the questions they have or at least help them track them down – if it’s a bug, that adds to their learning experience,” said Marrich.

Instructors also come from different technological backgrounds, so there is a lot of cross-pollination. There are experts who work on AWS, specific OpenStack distributions, Kubernetes, Docker, and Linux. As a result, they are able to solve a huge gamut of problems that students may come across. They also keep an eye on trends and hot topics, such as serverless and cloud native, so they can add courses for these new technologies.

However, teaching is as much about how you teach as it is about the technology. One of the main goals is to make it easy for students to access course material. The videos are accompanied by study guides, which follow the videos closely.

“While the wording may not be the same, the exercise is the same. We want to make sure that anything in the study guide is the same in the video,” she said. “If I type in a command, you are going to see the exact same output in the study guide as in the videos.” Additionally, the iOS and Android apps of Linux Academy allow users to download those videos for offline usage, so they can study even when they have no connectivity.

If you are looking to hone your open source skills, check out the courses from Linux Academy as well as those from The Linux Foundation to help you reach your goals.

Learn more about the Kubernetes for Developers training course and certification.

Linux-Friendly Arduino Simplifies IoT Development

Arduino’s support for Linux IoT devices and single-board computers (SBCs), announced at the Embedded Linux Conference + Open IoT Summit NA in March, cemented Arduino’s focus on cloud-connected IoT development, extending its reach into edge computing. This move was likely driven by multiple factors — the increased complexity of IoT solutions and, secondarily, growing interest in Arduino boards running Linux.

In a “blending” of development communities for the masses — Arduino, Raspberry Pi, and BeagleBone — Arduino’s support for Linux-based boards lowers the barrier of development for IoT devices by combining Arduino’s sensor and actuator nodes with higher processor-powered boards like Raspberry Pi and BeagleBone. Top this with a user-friendly web wizard to connect the Linux boards via the cloud and it simplifies the entire process.

Read more at EETimes