
A Guide to Git Branching

In this third article on getting started with Git, learn how to add and delete Git branches.

In my two previous articles in this series, we started using Git and learned how to clone, modify, add, and delete Git files. In this third installment, we’ll explore Git branching and why and how it is used.

Picture a tree as a Git repository. It has many branches, long and short, stemming from the trunk and from other branches. Let’s say the tree’s trunk represents the master branch of our repo. I will use master in this article as an alias for “master branch”, i.e., the central or first branch of a repo. To simplify things, let’s assume that the master is the tree trunk and that the other branches start from it.
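To make the analogy concrete, here is roughly how you would grow and prune such a branch from the command line (the branch name is just an example):

git branch new-feature      # sprout a new branch off the current commit
git checkout new-feature    # switch to it and start working
git checkout master         # climb back onto the trunk
git branch -d new-feature   # remove the branch once its work has been merged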

Read more at OpenSource.com

How to Speak Linux

I didn’t even stop to imagine that people pronounced Linux commands differently until many years ago when I heard a co-worker use the word “vie” (as in “The teams will vie for the title”) for what I’d always pronounced “vee I.” It was a moment I’ll never forget.

… Unix commands evolved with a number of different pronunciation rules. The names of some commands (like “cat”) were derived from words (like “concatenate”) and were pronounced as if they were words, too (some actually are words). Others were derived from phrases, like “cpio,” which pulls together the ideas of copying (cp) and I/O. Still others are simply abbreviations, such as “cd” for “change directory.”

Some commands are basically pronounced as if we were spelling them out loud, like “el es” for ls and “pee double-u dee” for pwd, while others are read as if they were words, like “chown” (rhyming with “clown”). And since many Linux users might first be exposed to the Linux command line on some old PC that they decided to put to better use, they may never hear other people say Linux commands out loud. So, in today’s post, I’m going to explain how I pronounce Linux commands and how I’ve heard others pronounce them differently.

Read more at Network World

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

Containers and Microservices and Serverless, Oh My!

A new round of buzzword-heavy technologies is becoming relevant to (or at least discussed among) developers, operations professionals, and the tech staff who lead them. Need to come up to speed on the changing cloud and container trends and technologies? If you feel out of the loop, this tech-transfer explainer should provide enlightenment.

Once upon a time, virtual machines changed how we thought about servers. Then, the cloud changed how we thought about IT. Now, containers have started a new transformation. The latest entry is “serverless”—though I should point out immediately that the term serverless is a misnomer. Future cloud-native applications will consist of both microservices and functions, often wrapped as Linux containers.

VMs and the cloud enabled DevOps, the practice of developers and IT operations staff collaborating to optimize technology processes. The dynamic compute and storage offered by cloud technologies made provisioning resources easier. The idea behind DevOps is that developers no longer need to worry about infrastructure, because that is taken care of in the background by programs such as Ansible, Chef, and Puppet.

Read more at HPE 

Purism Partners with Nitrokey to Reinforce the Security of Their Linux Laptops

Purism, the maker of Linux-powered laptops, announced today that it has partnered with Nitrokey, a maker of Free Software and Open Hardware USB GPG SmartCards and Hardware Security Modules (HSMs), to create a GPG-based SmartCard called Purekey.

Purism has always tried to offer its customers some of the most secure and privacy-aware laptops with the Librem 13 and 15 lineups, and it is now working to deliver the privacy-focused Librem 5 smartphone powered by PureOS.

Read more at Softpedia

See also: Nitrokey Digital Tokens for Linux Kernel Developers

How to Survive Embedded Linux – Part 1: The Embedded Linux Development Process

The Embedded Linux Development Process

The Linux kernel can run on many different computer architectures, most of which are quite popular in the embedded world. All of the base packages that allow the OS to perform basic tasks are suitable for cross-compilation, so Linux can be as pervasive as microcontrollers and Systems on Chip (SoCs).

A Linux distribution is an operating system made from a software collection, which is based upon the Linux kernel and, often, a package management system. The distribution may come as pre-compiled binaries and packages put together by the distribution maintainers, or as sources paired with instructions on how to (cross-)compile them.

In the embedded domain, since the hardware platform is usually bespoke, the OS designer generally prefers to generate the distribution from scratch, starting from sources. This gives the designer absolute control of what ends up in the product. Furthermore, the Board Support Package (BSP) engineer modifies the low-level code in order to make the core functionality of the OS work on the specific hardware product.

Getting all the necessary software components together to generate the Linux distribution for a particular embedded product used to be a nightmare, but thankfully this is no longer the case.

Many have shared with the open source community the sources of build systems capable of fetching all of the software components off the Internet, compiling them, and linking them together, all the way to generating installation images of fully fledged operating systems. A few companies develop and maintain their own build system, while others compile just a few of the core components and then take pre-built binaries to finalize the OS.

In 2010, a workgroup from the Linux Foundation started to address the tools and processes needed to create Linux distributions for embedded software (a.k.a. embedded Linux). This workgroup, known as the Yocto Project, aligned itself with OpenEmbedded, a framework with similar goals.

The Yocto Project is an open source project whose focus is on improving the software development process for embedded Linux distributions. The Yocto Project provides interoperable tools, metadata, and processes that enable the rapid, repeatable development of Linux-based embedded systems.

The Yocto Project currently powers the most popular Linux distributions for embedded systems, to the point where the terms “embedded Linux” and “Yocto Project” are sometimes confused as synonyms. Yocto is not an embedded Linux distribution, though; it creates a custom one for you.

Yocto’s meta-layers layout

The modern version of Yocto’s architecture is based on meta-layers, which are directories containing configuration files and rules on how to compile and assemble Linux distributions for embedded systems.

Usually, but not always, a meta-layer lives in its own Git repository, and provides:

  • its own packages (defined by recipes, .bb files),
  • modifications to packages provided by other meta-layers (.bbappend files),
  • machines (.conf files),
  • configuration files (.conf files),
  • common code (.bbclass files),
  • licenses,
  • and other minor bits and pieces.

A single meta-layer normally addresses a specific purpose. Therefore, to achieve a fully working system, several meta-layers need to be combined, as in the sketch below.
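For illustration, the set of layers that makes up a build is listed in the build’s conf/bblayers.conf file; for a hypothetical product it might look like this (the paths and the vendor/product layer names are made up):

BBLAYERS ?= " \
  /home/dev/poky/meta \
  /home/dev/poky/meta-poky \
  /home/dev/meta-openembedded/meta-oe \
  /home/dev/meta-vendor-bsp \
  /home/dev/meta-my-product \
"

Each line adds one meta-layer to the build, from the generic core layers down to the product-specific one.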

Picking and matching versions

When putting different software components together, one needs to be mindful of each component’s version, as the wrong version may not work well with the other components or may even break the system.

The Yocto Project provides releases of components known to work well together, but that is just the starting point for your product.

The Linux kernel is a big chunk of code that needs to expose the right interfaces to user space, and has to contain the right drivers, for the system to work properly. The role of the silicon vendor has therefore become more and more important, as vendors usually maintain their own development repositories for the Linux kernel and the bootloader; hence, they are the best placed to put together a working base system built on their technology.

Google’s repo

Originally developed to cope with the multitude of Git repositories in an Android project, repo has become quite popular among Yocto developers, too.

Repo is a tool built on top of Git; it uses a “manifest file” to clone and pull a set of Git repositories all at the same time. A repo manifest is an XML document containing references to Git repositories (along with their versions). Repo can use the manifest to populate a directory with all of the sources, coming from the several Git repositories, required to build a project.
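As a minimal illustration (the server URL, repository names, paths, and branch names are all hypothetical), a manifest might look like this:

<?xml version="1.0" encoding="UTF-8"?>
<manifest>
  <!-- where the repositories are hosted -->
  <remote name="vendor" fetch="https://git.example.com/"/>
  <!-- defaults applied to every project unless overridden -->
  <default remote="vendor" revision="master"/>
  <!-- one <project> element per Git repository to clone -->
  <project name="linux" path="sources/linux" revision="vendor-4.14.y"/>
  <project name="u-boot" path="sources/u-boot"/>
  <project name="meta-vendor-bsp" path="sources/meta-vendor-bsp"/>
</manifest>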

Also, the same manifest may be used by repo to keep the project sources in sync when upstream makes changes.

These days, a few silicon vendors provide manifests for their development and release branches, so that designers can easily check out the starting point for their own products.
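Fetching such a starting point typically boils down to two commands (the manifest URL, branch, and file name below are placeholders):

repo init -u https://git.example.com/vendor/manifests -b release-branch -m release.xml
repo sync   # clones or updates every repository listed in the manifest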

Yocto-based product development

The BSP engineer usually starts from the silicon vendor’s repo manifest to check out the version of the software for the reference design (that is, a design provided by the silicon vendor itself or one of its partners, containing the same SoC as the new product, or a similar one). The engineer makes changes to the bootloader and the Linux kernel to make sure the hardware selected by the electronics engineer has proper low-level software support (e.g., device drivers, device tree, kernel configuration, etc.).

The purpose of the product is to run one or more applications, therefore the BSP/OS engineer makes sure that all of the dependencies of the application(s) are built for the system. The engineers developing the application need a Software Development Kit (SDK) to cross-compile and link the application, so the BSP/OS engineer will provide them with such a kit; thanks to Yocto, this has become quite straightforward.
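For instance, with a Yocto build the SDK matching a given image can usually be generated with a single command (the image name below is illustrative):

bitbake my-product-image -c populate_sdk   # produces a self-extracting SDK installer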

Embedded Linux good practice

The repo manifests used for development usually contain references to development branches, which means repo will fetch the latest commit on those branches.

If you use the same manifest to fetch the project at a later date, you may fetch a different version of the code! This is perfectly fine during development, because you want to stick with the latest version of your project. Eventually, though, one of your development versions will become a release, and at that point you need to “take a picture” of the precise sources used to generate the software release that goes into the product. Failing to do so can expose you to legal trouble, as you won’t be able to regenerate the same build starting from sources. It also means you won’t be able to make a change on top of a specific release: you’ll be forced to fix the bug or add the new feature on top of the latest version of the software, forcing the customer to re-test the entire system.

Also, if you don’t take those snapshots, there is no way you can bisect the project sources to find out which commit broke the functionality you so desperately need. When designing your development process, find a way to automatically generate repo manifests with precise commits in them, so that you can save them alongside releases, check out the same sources again at a later date, and do whatever you are paid to do. One way of taking such a snapshot is shown below.
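With repo, a pinned manifest can be generated from the current state of the sources (the output file name is illustrative):

repo manifest -r -o release-1.0.xml   # -r records the exact commit of every project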

Copy sources in house

Also keep in mind that 99.9% of the sources that go inside your product come from the open source community, which means you have no guarantee the same sources will be available for download again. As a designer, you need to protect yourself against changes and mistakes made upstream. Keep a copy of all the relevant sources in house, and find a way to plug them back into your build system. You may also want to mirror the repositories you use the most, as upstream Git servers can suddenly become unavailable. If you don’t have an internal copy, you’ll be stuck until the servers come back online.
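As a sketch of how this can be wired into a Yocto build (the mirror path is illustrative), a few variables in conf/local.conf tell BitBake to save everything it fetches and to look in an in-house mirror first:

BB_GENERATE_MIRROR_TARBALLS = "1"                       # archive every fetched source as a tarball
SOURCE_MIRROR_URL = "file:///srv/mirror/yocto-sources"  # check the local mirror before the Internet
INHERIT += "own-mirrors"
# BB_FETCH_PREMIRRORONLY = "1"                          # optionally forbid fetching from anywhere else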

At ByteSnap, we have a fully automated way of releasing Yocto-based projects, such that we can recover the sources that went into a release and re-build the same release at a later date. We keep copies of open source packages automatically, so that we experience no downtime caused by faulty servers around the world. Furthermore, we back everything up every single day, so that we can guarantee no work will be lost even in case of a disaster on site.

Fabrizio Castro
Fab is a senior software engineer. He gained his bachelor’s and master’s degrees at Politecnico di Milano, in Milan, Italy. He has 20 years’ experience of all-round software development (services, databases, applications, scientific software, firmware, RTOS, device drivers, Linux kernel, etc.), spent working in academia and industry. Fab has co-authored scientific papers and books, and worked on patents. As well as research and development, he specialises in embedded Linux development, delivering state-of-the-art designs powering successful scientific, industrial, commercial, and military products. Fab has also been a lecturer and has taught undergraduates at some of the most prestigious universities in Europe. For more information about ByteSnap Design visit http://www.bytesnap.co.uk.

A Beginner’s Guide to Linux

Many people have heard of Linux, but most don’t really know what it is. Linux is an operating system that can perform the same functions as Windows 10 and macOS. The key difference is that Linux is open source. In the simplest terms, that means no single person or corporation controls the code. Instead, the operating system is maintained by a dedicated group of developers from around the world. Anyone who is interested can contribute to the code and help check for errors. Linux is more than an operating system; it is a community.

Linux distributions are always changing, so here are a few of the most popular ones. If you are an avid Windows user, then Ubuntu is a great place to start. The visual layout will be familiar for a Windows user, while the more complex aspects of Linux are smoothed away.

Read more at Softonic

Manipulating Directories in Linux

If you are new to this series (and to Linux), take a look at our first installment. In that article, we worked our way through the tree-like structure of the Linux filesystem, or more precisely, the File Hierarchy Standard. I recommend reading through it to make sure you understand what you can and cannot safely touch. Because this time around, I’ll show how to get all touchy-feely with your directories.

Making Directories

Let’s get creative before getting destructive, though. To begin, open a terminal window and use mkdir to create a new directory like this:

mkdir <directoryname>

If you just put the directory name, the directory will appear hanging off the directory you are currently in. If you just opened a terminal, that will be your home directory. In a case like this, we say the directory will be created relative to your current position:

$ pwd #This tells you where you are now -- see our first tutorial
/home/<username>
$ mkdir newdirectory #Creates /home/<username>/newdirectory

(Note that you do not have to type the text following the #. Text following the pound symbol # is considered a comment and is used to explain what is going on. It is ignored by the shell).

You can create a directory within an existing directory hanging off your current location by specifying it in the command line:

mkdir Documents/Letters

will create the Letters directory within the Documents directory.

You can also create a directory above where you are by using .. in the path. Say you move into the Documents/Letters/ directory you just created and you want to create a Documents/Memos/ directory. You can do:

cd Documents/Letters # Move into your recently created Letters/ directory
mkdir ../Memos

Again, all of the above is done relative to your current position. This is called using a relative path.

You can also use an absolute path to directories: This means telling mkdir where to put your directory in relation to the root (/) directory:

mkdir /home/<username>/Documents/Letters

Change <username> to your user name in the command above and it will be equivalent to executing mkdir Documents/Letters from your home directory, except that it will work from wherever you are located in the directory tree.

As a side note, regardless of whether you use a relative or an absolute path, if the command is successful, mkdir will create the directory silently, without any apparent feedback whatsoever. Only if there is some sort of trouble will mkdir print some feedback after you hit [Enter].

As with most other command-line tools, mkdir comes with several interesting options. The -p option is particularly useful, as it lets you create directories within directories within directories, even if none exist. To create, for example, a directory for letters to your Mom within Documents/, you could do:

mkdir -p Documents/Letters/Family/Mom

And mkdir will create the whole branch of directories above Mom/ and also the directory Mom/ for you, regardless of whether any of the parent directories existed before you issued the command.

You can also create several folders all at once by putting them one after another, separated by spaces:

mkdir Letters Memos Reports

will create the directories Letters/, Memos/, and Reports/ under the current directory.

In space nobody can hear you scream

… Which brings us to the tricky question of spaces in directory names. Can you use spaces in directory names? Yes, you can. Is it advised you use spaces? No, absolutely not. Spaces make everything more complicated and, potentially, dangerous.

Say you want to create a directory called letters mom/. If you didn’t know any better, you could type:

mkdir letters mom

But this is WRONG! WRONG! WRONG! As we saw above, this will create two directories, letters/ and mom/, but not letters mom/.

Agreed, this is a minor annoyance: all you have to do is delete the two directories and start over. No big deal.

But, wait! Deleting directories is where things get dangerous. Imagine you did create letters mom/ using a graphical tool, like, say Dolphin or Nautilus. If you suddenly decide to delete letters mom/ from a terminal, and you have another directory just called letters/ under the same directory, and said directory is full of important documents, and you tried this:

rmdir letters mom

You would risk removing letters/. I say “risk” because fortunately rmdir, the instruction used to remove directories, has a built-in safeguard and will warn you if you try to delete a non-empty directory.

However, this:

rm -Rf letters mom

(and this is a pretty standard way of getting rid of directories and their contents) will completely obliterate letters/ and will never even tell you what just happened.

The rm command is used to delete files and directories. When you use it with the options -R (delete recursively) and -f (force deletion), it will burrow down into a directory and its subdirectories, deleting all the files they contain, then deleting the subdirectories themselves, then it will delete all the files in the top directory and then the directory itself.

rm -Rf is an instruction you must handle with extreme care.
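If you do need to remove a directory whose name contains spaces, quoting the name and using rm’s interactive option gives you a safety net (this uses the directory from the example above):

rm -Ri "letters mom"   # -i asks for confirmation before each deletion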

My advice is, instead of spaces, use underscores (_), but if you still insist on spaces, there are two ways of getting them to work. You can use single or double quotes (' or ") like so:

mkdir 'letters mom'
mkdir "letters dad"

Or, you can escape the spaces. Some characters have a special meaning for the shell. Spaces, as you have seen, are used to separate options and arguments on the command line. “Separating options and arguments” falls under the category of “special meaning”. When you want the shell to ignore the special meaning of a character, you need to escape it, and to escape a character, you put a backslash (\) in front of it:

mkdir letters\ mom
mkdir letters\ dad

There are other special characters that would need escaping, like the apostrophe or single quote ('), double quotes ("), and the ampersand (&):

mkdir mom\ \&\ dad\'s\ letters

I know what you’re thinking: If the backslash has a special meaning (to wit, telling the shell it has to escape the next character), that makes it a special character, too. Then, how would you escape the escape character, which is \?

Turns out, the exact way you escape any other special character:

mkdir special\\characters

will produce a directory called special\characters.

Confusing? Of course. That’s why you should avoid using special characters, including spaces, in directory names.

For the record, here is a list of special characters you can refer to just in case.

Things to Remember

  • Use mkdir <directory name> to create a new directory.
  • Use rmdir <directory name> to delete a directory (only works if it is empty).
  • Use rm -Rf <directory name> to annihilate a directory — use with extreme caution.
  • Use a relative path to create directories relative to your current directory: mkdir newdir.
  • Use an absolute path to create directories relative to the root directory (/): mkdir /home/<username>/newdir
  • Use .. to create a directory in the directory above the current directory: mkdir ../newdir
  • You can create several directories all in one go by separating them with spaces on the command line: mkdir onedir twodir threedir
  • You can mix and match relative and absolute paths when creating several directories simultaneously: mkdir onedir twodir /home/<username>/threedir
  • Using spaces and special characters in directory names guarantees plenty of headaches and heartburn. Don’t do it.

For more information, you can look up the manuals of mkdir, rmdir and rm:

man mkdir
man rmdir
man rm

To exit the man pages, press [q].

Next Time

In the next installment, you’ll learn about creating, modifying, and erasing files, as well as everything you need to know about permissions and privileges. See you then!

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

Red Hat’s Serverless Blockchain Future Powered by Open Source Innovation

On the final day of the Red Hat Summit last week, Red Hat CTO Chris Wright presided over the closing keynotes where he outlined how his company innovates and hinted at multiple future product developments.

“Innovation in the enterprise is about adapting to change without compromising the business,” Wright said.

Wright added that the Linux operating system is at the core of modern innovation. Whereas a decade or more ago Linux was sometimes seen as a follower in terms of innovation, at this point in 2018 it’s clear that Linux is now the foundation upon which new innovations, be it cloud, blockchain, serverless, or Artificial Intelligence (AI), are built.

Looking deeper than just Linux is the open-source community that enables it, as well as a vast landscape of project code. Wright said Red Hat’s role when it comes to developers is to help provide them with tools and techniques to provide business value as code….

Among the emerging areas of technology innovation that Red Hat is now working on is serverless, which is also sometimes referred to as Functions as a Service (FaaS). Red Hat is now working on a project called OpenShift Cloud Functions.

Read more at ServerWatch

Free Webinar on Community-Driven Governance for Open Source Projects

Topics such as licensing and governance are complex but nonetheless critical considerations for open source projects. Understanding and implementing the requirements in a strategic way is key to a project’s long-term health and success. In an upcoming webinar, “Governance Models of Community-Driven Open Source Projects,” The Linux Foundation’s Scott Nicholas will examine various approaches for structuring open source projects with these requirements in mind.

This free, hour-long webinar (at 10:00 am Pacific, May 30, 2018) will address some of the differences that exist in community-driven governance and will explore various case studies, including:

  • “Single-project” projects
  • Unfunded and funded projects
  • Technology-focused umbrella projects
  • Industry-focused umbrella projects

Scott Nicholas, who is Sr. Director of Strategic Programs of The Linux Foundation, will also discuss some common issues faced by new and growing open source projects, including project lifecycle and maturation considerations. He’ll also review differences in licensing models and outline approaches to the licensing of specifications.

Scott Nicholas


As Sr. Director of Strategic Programs, Scott assists in the launch and support of open source projects and contributes to The Linux Foundation’s legal programs. Scott has assisted in setting up numerous projects across the technology stack including R Consortium, Node.js Foundation, Open Mainframe Project, Civil Infrastructure Platform, OpenHPC, the ONAP Project, and the LF Networking Fund. Scott’s professional experience spans both the legal and financial aspects of technology, having worked as a corporate attorney and as an investment analyst covering the technology sector.

Join us Wednesday, May 30, 2018 at 10:00 am Pacific for this free webinar. Register Now.

This article originally appeared at The Linux Foundation.

How to Maximize the Scalability of Your Containerized Environment

One main reason to use containers is that they help make apps, services and environments highly scalable. But that doesn’t happen magically. In order to take full advantage of the benefits of containers, you have to build your stack and configure your environment in ways that maximize scalability.

Below, I take a look at some strategies that can ensure that your containers and the software they host are as scalable as they can be.

Defining Scalability

First, though, let’s spend a moment discussing what scalability means, exactly, and what it looks like in practice.

Scalability can take multiple forms:

  • Being able to increase or decrease the capacity of an application without adding or subtracting deployments. For example, perhaps your web app has 10,000 users per day today and you want it to be able to handle 20,000 without creating a new instance of the app. You could scale in this way by assigning more resources to the app.
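As a purely hypothetical illustration (the container name and resource values are invented), with plain Docker this kind of vertical scaling can be as simple as raising the limits on the running container:

docker update --cpus 4 --memory 8g --memory-swap 8g my-web-app   # give the existing container more CPU and memory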

Read more at Wercker