
An Introduction to Vim for Sysadmins

Why, you ask, should anyone care about Vim? It’s complex, it’s so old it’s a fossil, and you like Kate/Leafpad/Geany/Gedit/Nano/Jed/Lime/Emacs/what-have-you, and the Linux world is cram-full of great text editors, so why bother with Vim?

The main reason is that it is included in nearly all Linux distributions. Sure, worrying about being stranded on an unfamiliar system with only Vim to rescue yourself sounds like a desert island scenario, but it does happen. You will especially appreciate it when you have to work over a slow, high-latency SSH session, because Vim is chock-a-block with powerful single-key commands.

The other reason is that it is a customizable powerhouse that rewards a bit of study and tweaking with a nice productivity boost. For example, if you have to tag XML or HTML documents, Vim can enter both the opening and closing tags with a single keystroke, or insert blocks of text, such as copyright notices or URLs, with a single keystroke. Vim is so flexible you can make it do just about anything with a minimum number of keystrokes.
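
For instance, a couple of lines in ~/.vimrc can set this up. This is only a sketch; the F5 key choice and the ccnote abbreviation are hypothetical examples, not Vim defaults:

" insert an empty <p></p> pair and leave the cursor between the tags
inoremap <F5> <p></p><Esc>3hi
" expand the abbreviation ccnote into a boilerplate copyright line while typing
iabbrev ccnote Copyright (C) 2018 Example Company. All rights reserved.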

Distro Quirks

The ancestral vi is long gone, replaced eons ago by Vim (vi IMproved). Vim includes extensive documentation, unless your distro installs only vim-tiny, which strips out the documentation and other fripperies; that is another reason to know the basics without having to look them up.

Most distros symlink vi to Vim, so you should be able to start it with either vi or vim. However, on some distros, notably Ubuntu, vi starts Vim in vi-compatible mode, and it will behave like the ancestral vi. It will even tell you on the home screen:

Running in Vi compatible mode                                        
type :set nocp for Vim defaults

The biggest hassle with vi-compatible mode is that you can’t use the arrow, Home, End, Page Up, or Page Down keys to navigate your document without entering command mode. We’ll get to Vim’s modes in a moment; the short story is that Vim is more comfortable to use than vi. There are three ways to return to normal Vim behavior. The first is to do what the home screen tells you: press the Escape key, type :set nocp, and press Enter.

The second is to make the change permanent: create ~/.vimrc and enter this line:

set nocp

The third way is to start Vim with vim instead of vi. On Ubuntu, that means installing the full vim package, since the default vim-tiny build starts in vi-compatible mode.

Remembering Vim Commands

Vim’s commands are mnemonic, so it’s not that farfetched that you will remember them: d = delete (cut), y = yank (copy), p = put (paste), w = write, q = quit. Your hands never leave the keyboard, so you are fast and efficient.

Starting Vim

The biggest hurdle for new Vim users is its dual-mode system. It has a command mode for entering commands, and an input mode for typing your text. Vim starts up in command mode. Let’s start with basic usage.

  • vi opens a new empty document
  • vi [newfilename] opens and names a new document
  • vi [filename] opens an existing document
  • Press the i or Insert key to enter input mode
  • Press the Escape key to leave input mode and return to command mode

When you are in input mode you can type and edit your text just like in any other text editor. Navigate through your document with the arrow keys. Home goes to the beginning of the line, and End goes to the end of it. Ctrl+Home goes to the beginning of your document and Ctrl+End goes to the end.

Compact Keyboards

If you find yourself stuck with a keyboard that does not have Home, End, or arrow keys, enter command mode:

  • j moves the cursor down one line
  • k moves the cursor up one line
  • G goes to the end of the document
  • gg goes to the beginning of the document
  • h moves the cursor to the left, one character at a time
  • l moves the cursor to the right, one character at a time

Saving Changes and Exiting

Now we come to the fun part: saving your changes and closing Vim.

Save your changes as you go by pressing the Escape key to enter command mode, and then typing :w Enter. You will see a confirmation that says something like “newfilename” 7 lines, 40 characters written.

:sav [filename] names a new document, or saves the file under a new name

To quit and save your changes, enter command mode and type :x Enter or :wq Enter.

:q Enter quits if you have already saved your changes.

If you try to quit with unsaved changes, Vim won’t let you. Make it obey with :q! Enter. (Or save your changes.)

Common Editing Functions

Some common command mode editing functions:

  • u undoes the last change (press Ctrl+r to redo)
  • r replaces the character under the cursor
  • x deletes a single character
  • dw deletes a single word, starting from the cursor
  • D deletes to the end of the line, starting from the cursor
  • dd cuts one line
  • Ndd cuts N lines; e.g. 3dd deletes three lines starting with the current line
  • yy copies one line
  • p pastes whatever has been cut or copied
  • :set number displays line numbers
  • :set nonumber turns off line numbering

Advanced Vim

This should be adequate for simple tasks such as editing configuration files. To learn advanced Vim functions, type :help Enter to see all the built-in documentation. I recommend starting with the tutor, which takes 30-60 minutes to complete. You can launch the tutor outside of Vim by entering the vimtutor command in your shell.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

Speak at Automotive Linux Summit & OS Summit Japan — 4 Days Left to Submit a Proposal

Automotive Linux Summit (ALS) connects the developers, vendors, and users driving innovation in Automotive Linux. Co-located with Open Source Summit Japan, ALS will gather over 1,000 attendees from global companies leading and accelerating the development and adoption of a fully open software stack for the connected vehicle.

Share your knowledge and expertise with industry-leading developers, architects and executives at Automotive Linux Summit and Open Source Summit Japan. Submit your proposal by March 18.

Read more at The Linux Foundation

Protecting Code Integrity with PGP — Part 5: Moving Subkeys to a Hardware Device

In this tutorial series, we’re providing practical guidelines for using PGP. If you missed the previous articles, you can catch up with the links below. In this article, we’ll continue our discussion about securing your keys and look at some tips for moving your subkeys to a specialized hardware device.

Part 1: Basic Concepts and Tools

Part 2: Generating Your Master Key

Part 3: Generating PGP Subkeys

Part 4: Moving Your Master Key to Offline Storage

Checklist

  • Get a GnuPG-compatible hardware device (NICE)

  • Configure the device to work with GnuPG (NICE)

  • Set the user and admin PINs (NICE)

  • Move your subkeys to the device (NICE)

Considerations

Even though the master key is now safe from being leaked or stolen, the subkeys are still in your home directory. Anyone who manages to get their hands on those will be able to decrypt your communication or fake your signatures (if they know the passphrase). Furthermore, each time a GnuPG operation is performed, the keys are loaded into system memory and can be stolen from there by sufficiently advanced malware (think Meltdown and Spectre).

The best way to completely protect your keys is to move them to a specialized hardware device that is capable of smartcard operations.

The benefits of smartcards

A smartcard contains a cryptographic chip that is capable of storing private keys and performing crypto operations directly on the card itself. Because the key contents never leave the smartcard, the operating system of the computer into which you plug in the hardware device is not able to retrieve the private keys themselves. This is very different from the encrypted USB storage device we used earlier for backup purposes — while that USB device is plugged in and decrypted, the operating system is still able to access the private key contents. Using external encrypted USB media is not a substitute for a smartcard-capable device.

Some other benefits of smartcards:

  • They are relatively cheap and easy to obtain

  • They are small and easy to carry with you

  • They can be used with multiple devices

  • Many of them are tamper-resistant (depends on manufacturer)

Available smartcard devices

Smartcards started out embedded into actual wallet-sized cards, which earned them their name. You can still buy and use GnuPG-capable smartcards, and they remain one of the cheapest available devices you can get. However, actual smartcards have one important downside: they require a smartcard reader, and very few laptops come with one.

For this reason, manufacturers have started providing small USB devices, the size of a USB thumb drive or smaller, that either have the microsim-sized smartcard pre-inserted, or that simply implement the smartcard protocol features on the internal chip. Here are a few recommendations:

  • Nitrokey Start: Open hardware and Free Software: one of the cheapest options for GnuPG use, but with fewest extra security features

  • Nitrokey Pro: Similar to the Nitrokey Start, but is tamper-resistant and offers more security features (but not U2F, see the Fido U2F section of the guide)

  • Yubikey 4: Proprietary hardware and software, but cheaper than Nitrokey Pro and comes available in the USB-C form that is more useful with newer laptops; also offers additional security features such as U2F

Our recommendation is to pick a device that is capable of both smartcard functionality and U2F, which, at the time of writing, means a Yubikey 4.

Configuring your smartcard device

Your smartcard device should Just Work (TM) the moment you plug it into any modern Linux or Mac workstation. You can verify it by running:

$ gpg --card-status

If you didn’t get an error, but a full listing of the card details, then you are good to go. Unfortunately, troubleshooting all possible reasons why things may not be working for you is way beyond the scope of this guide. If you are having trouble getting the card to work with GnuPG, please seek support via your operating system’s usual support channels.

PINs don’t have to be numbers

Note that, despite the name “PIN” (which implies it should be a number), neither the user PIN nor the admin PIN on the card needs to be a number.

Your device will probably have default user and admin PINs set up when it arrives. For Yubikeys, these are 123456 and 12345678, respectively. If those don’t work for you, please check any accompanying documentation that came with your device.

Quick setup

To configure your smartcard, you will need to use the GnuPG menu system, as there are no convenient command-line switches:

$ gpg --card-edit
[...omitted...]
gpg/card> admin
Admin commands are allowed
gpg/card> passwd

You should set the user PIN (1), Admin PIN (3), and the Reset Code (4). Please make sure to record and store these in a safe place — especially the Admin PIN and the Reset Code (which allows you to completely wipe the smartcard). You will need the Admin PIN so rarely that you will inevitably forget what it is if you do not record it.

Getting back to the main card menu, you can also set other values (such as name, sex, login data, etc.), but doing so is not necessary and will additionally leak information about you should you lose your smartcard.

Moving the subkeys to your smartcard

Exit the card menu (using “q”) and save all changes. Next, let’s move your subkeys onto the smartcard. You will need both your PGP key passphrase and the admin PIN of the card for most operations. Remember that [fpr] stands for the full 40-character fingerprint of your key.

$ gpg --edit-key [fpr]

Secret subkeys are available.

pub  rsa4096/AAAABBBBCCCCDDDD
    created: 2017-12-07  expires: 2019-12-07 usage: C
    trust: ultimate      validity: ultimate
ssb  rsa2048/1111222233334444
    created: 2017-12-07  expires: never usage: E
ssb  rsa2048/5555666677778888
    created: 2017-12-07  expires: never usage: S
[ultimate] (1). Alice Engineer <alice@example.org>
[ultimate] (2)  Alice Engineer <allie@example.net>

gpg>

Using --edit-key puts us into the menu mode again, and you will notice that the key listing is a little different. From here on, all commands are done from inside this menu mode, as indicated by gpg>.

First, let’s select the key we’ll be putting onto the card — you do this by typing key 1 (it’s the first one in the listing, our [E] subkey):

gpg> key 1

The output should be subtly different:

pub  rsa4096/AAAABBBBCCCCDDDD
    created: 2017-12-07  expires: 2019-12-07 usage: C
    trust: ultimate      validity: ultimate
ssb* rsa2048/1111222233334444
    created: 2017-12-07  expires: never usage: E
ssb  rsa2048/5555666677778888
    created: 2017-12-07  expires: never usage: S
[ultimate] (1). Alice Engineer <alice@example.org>
[ultimate] (2)  Alice Engineer <allie@example.net>

Notice the * that is next to the ssb line corresponding to the key — it indicates that the key is currently “selected.” It works as a toggle, meaning that if you type key 1 again, the * will disappear and the key will not be selected any more.

Now, let’s move that key onto the smartcard:

gpg> keytocard
Please select where to store the key:
  (2) Encryption key
Your selection? 2

Since it’s our [E] key, it makes sense to put it into the Encryption slot. When you submit your selection, you will be prompted first for your PGP key passphrase, and then for the admin PIN. If the command returns without an error, your key has been moved.

Important: Now type key 1 again to unselect the first key, and key 2 to select the [S] key:

gpg> key 1
gpg> key 2
gpg> keytocard
Please select where to store the key:
  (1) Signature key
  (3) Authentication key
Your selection? 1

You can use the [S] key both for Signature and Authentication, but we want to make sure it’s in the Signature slot, so choose (1). Once again, if your command returns without an error, then the operation was successful.

Finally, if you created an [A] key, you can move it to the card as well, making sure first to unselect key 2. Once you’re done, choose “q”:

gpg> q
Save changes? (y/N) y

Saving the changes will delete the keys you moved to the card from your home directory (but it’s okay, because we have them in our backups should we need to do this again for a replacement smartcard).

Verifying that the keys were moved

If you perform --list-secret-keys now, you will see a subtle difference in the output:

$ gpg --list-secret-keys
sec#  rsa4096 2017-12-06 [C] [expires: 2019-12-06]
     111122223333444455556666AAAABBBBCCCCDDDD
uid           [ultimate] Alice Engineer <alice@example.org>
uid           [ultimate] Alice Engineer <allie@example.net>
ssb>  rsa2048 2017-12-06 [E]
ssb>  rsa2048 2017-12-06 [S]

The > in the ssb> output indicates that the subkey is only available on the smartcard. If you go back into your secret keys directory and look at the contents there, you will notice that the .key files there have been replaced with stubs:

$ cd ~/.gnupg/private-keys-v1.d
$ strings *.key

The output should contain shadowed-private-key to indicate that these files are only stubs and the actual content is on the smartcard.

Verifying that the smartcard is functioning

To verify that the smartcard is working as intended, you can create a signature:

$ echo "Hello world" | gpg --clearsign > /tmp/test.asc
$ gpg --verify /tmp/test.asc

This should ask for your smartcard PIN on your first command, and then show “Good signature” after you run gpg --verify.

Congratulations, you have successfully made it extremely difficult to steal your digital developer identity!

Other common GnuPG operations

Here is a quick reference for some common operations you’ll need to do with your PGP key.

In all of the below commands, the [fpr] is your key fingerprint.

Mounting your master key offline storage

You will need your master key for any of the operations below, so you will first need to mount your backup offline storage and tell GnuPG to use it. First, find out where the media got mounted, for example, by looking at the output of the mount command. Then, locate the directory with the backup of your GnuPG directory and tell GnuPG to use that as its home:

$ export GNUPGHOME=/media/disk/name/gnupg-backup
$ gpg --list-secret-keys

You want to make sure that you see sec and not sec# in the output (the # means the key is not available and you’re still using your regular home directory location).

Updating your regular GnuPG working directory

After you make any changes to your key using the offline storage, you will want to import these changes back into your regular working directory:

$ gpg --export | gpg --homedir ~/.gnupg --import
$ unset GNUPGHOME

Extending key expiration date

The master key we created has the default expiration date of 2 years from the date of creation. This is done both for security reasons and to make obsolete keys eventually disappear from keyservers.

To extend the expiration on your key by a year from current date, just run:

$ gpg --quick-set-expire [fpr] 1y

You can also use a specific date if that is easier to remember (e.g. your birthday, January 1st, or Canada Day):

$ gpg --quick-set-expire [fpr] 2020-07-01

Remember to send the updated key back to keyservers:

$ gpg --send-key [fpr]

Revoking identities

If you need to revoke an identity (e.g., you changed employers and your old email address is no longer valid), you can use a one-liner:

$ gpg --quick-revoke-uid [fpr] 'Alice Engineer <aengineer@example.net>'

You can also do the same with the menu mode using gpg --edit-key [fpr].

Once you are done, remember to send the updated key back to keyservers:

$ gpg --send-key [fpr]

Next time, we’ll look at how Git supports multiple levels of integration with PGP.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

Test Drive of AppSwitch, the “Network Stack from the Future”

One of the best perks of my job at Docker has been the incredible connections that I was able to make in the industry. That’s how I met Dinesh Subhraveti, one of the original authors of Linux Containers. Dinesh gave me a sneak peek at his new project, AppSwitch.

AppSwitch abstracts the networking stack of an application, just like containers (and Docker in particular) abstract the compute dimension of the application. At first, I found this statement mysterious (what does it mean exactly?), bold (huge if true!), and exciting (because container networking is hard).

The state of container networking

There are (from my perspective) two major options today for container networking: CNM and CNI.

CNM, the Container Network Model, was introduced by Docker. It lets you create networks that are secure by default, in the sense that they are isolated from each other. A given container can belong to zero, one, or many networks. This is conceptually similar to VLANs, a technology that has been used for decades to partition and segregate Ethernet networks. CNM doesn’t require you to use overlay networks, but in practice, most CNM implementations will create multiple overlay networks.

Read more at Jérôme Petazzoni

SDN Trends: The Business Benefits and Emerging SD-WAN Technology

The 2018 Open Networking Summit is rapidly approaching. In anticipation of this event, we spoke to Shunmin Zhu, Head of Alibaba Cloud Network Services to get more insights on two of the hot topics that will be discussed at the event: the future of Software Defined Networking (SDN) and the emerging SD-WAN technology.

“SDN is a network design approach beyond just a technology protocol. The core idea is decoupling the forwarding plane from the control plane and management plane. In this way, network switches and routers only focus on packet forwarding,” said Zhu.

“The forwarding policies and rules are centrally managed by a controller. From a cloud service provider’s perspective, SDN enables customers to manage their private networks in a more intelligent manner through API.”

Shunmin Zhu, Head of Alibaba Cloud Network Services

This newfound approach to networks, which were previously thought to be nearly unfathomable black boxes, brings welcome transparency and flexibility. That naturally leads to more innovation, such as SD-WAN and Hybrid-WAN.

Zhu shared more information on both of those cutting-edge developments later in this interview. Here is what he had to say about how all these things come together to shape the future of networking.

Linux.com:  Please tell us a little more about SDN for the benefit of readers who may not be familiar with it.

Shunmin Zhu: Today, cloud services make it very convenient for a user to buy a virtual machine, set up the VM, change the configurations at any time, and choose the most suitable billing method. SDN offers the flexibility of using network products the same way as using a VM. Such a degree of flexibility was not seen in networks before the advent of SDN.

Before, it was unlikely for a user to divide his cloud network into several private subnets. In the SDN era, however, with VPC (Virtual Private Cloud) users are able to customize their cloud networks by choosing the private subnets and dividing them further. In short, SDN puts the power of cloud network self-management into the hands of users.

Linux.com: What were the drivers behind the development of SDN? What are the drivers spurring its adoption now?

Zhu: Traditional networks prior to SDN find it hard to support the rapid development of business applications. The past few decades witnessed fast growth in the computing industry but not so much innovation was seen in the networking sector. With emerging trends, such as cloud computing and virtualization, organizations need their networks to become as flexible as the cloud computing and storage resources in order to respond to IT and business requirements. Meanwhile the hardware, operating system, and network application of the traditional network are tightly coupled and not accessible to an outsider. The three components are usually controlled by the same OEM. Any innovation or update is thus heavily dependent on the device OEMs.

The shortcomings of the traditional network are apparent from a user’s perspective. First and foremost is the speed of delivery. Network capacity extension usually takes several months, and even a simple network configuration could take several days, which is hard for customers to accept today.

From the perspective of an Internet Service Provider (ISP), the traditional network could hardly satisfy the needs of their customers. Additionally, heterogeneous network devices from multiple vendors complicate network management. There’s little that ISPs can do to improve the situation, as the network functions are controlled by the device OEMs. Users’ and carriers’ urgent need for SDN has made this technology popular. To a large extent, SDN overcomes the heterogeneity of the physical network devices and opens up network functions via APIs. Business applications can call APIs to turn on network services on demand, which is revolutionary in the network industry.

Linux.com: What are the business benefits overall?

Zhu: The benefits of SDN are twofold. On the one hand, it helps to reduce cost, increase productivity, and reuse the network resources. SDN makes the use of networking products and services very easy and flexible. It gives users the option to pay by usage or by duration. The cost reduction and productivity boost empowers the users to invest more time and money into core business and application innovations. SDN also increases the reuse of the overall network resources in an organization.

On the other hand, SDN brings new innovations and business opportunities to the networking industry. SDN technology is fundamentally reshaping networking toward a more open and prosperous ecosystem. Traditionally, only a few network device manufacturers and ISPs were the major players in the networking industry. With the arrival of SDN, more participants are encouraged to create new networking applications and services, generating tons of new business opportunities.

Linux.com: Why is SDN gaining in popularity now?

Zhu: SDN is gaining momentum because it brings revolutionary changes and tremendous business value to the networking industry. The rise of cloud computing is another factor that accelerates the adoption of SDN. The cloud computing network offers the perfect usage scenario for SDN to quickly land as a real-world application. The vast scale, large scope, and various needs of the cloud network pose a big challenge to the traditional network. SDN technology works very well with cloud computing in terms of elasticity. SDN virtualizes the underlay physical network to provide richer and more customized services to the vast number of cloud computing users.

Linux.com: What are future trends in SDN and the emerging SD-WAN technology?

Zhu: First of all, I think SDN will be adopted in more networking usage scenarios. Most of the future networks will be designed by the rule of SDN. In addition to cloud computing data centers, WAN, carrier networks, campus networks, and even wireless networks will increasingly embrace the adoption of SDN.

Secondly, network infrastructure based on SDN will further combine the power of hardware and software. By definition, SDN is software defined networking, so the technology may seem to lean toward the software side. On the flip side, SDN cannot do without the physical network devices upon which it builds the virtual network. The difficulty of improving performance is another disadvantage of a pure software-based solution. In my vision, SDN technology will evolve towards a tighter combination with hardware.

The more powerful next generation network will be built upon the mutually reinforcing software and hardware. Some cloud service providers have already started to use SmartNIC as a core component in their SDN solution for performance boost.

The next trend is the rapid development of SDN-based network applications. SDN helps build an open industry environment. It’s a good time for technology companies to start businesses around innovative network applications such as network monitoring, network analytics, cyber security and NFV (Network Function Virtualization).

SD-WAN is the application of SDN technology in the wide area network (WAN) space. Generally speaking, WAN refers to a communications network that connects multiple remote local area networks (LANs) with a distance of tens to thousands of miles to each other. For example, a corporate WAN may connect the networks of its headquarters, branch offices, and cloud service providers. Traditional WAN solutions, such as MPLS, could be expensive and require a long period before service provisioning. Wireless networks, on the other hand, fall short in bandwidth capacity and stability. The invention of SD-WAN fixes these problems to a large extent.

For instance, a company can build its corporate WAN by connecting branch offices to the headquarters via a virtual dedicated line and the internet, also known as a Hybrid-WAN solution. The Internet link brings convenience to network connections between the branches and the headquarters, while the virtual dedicated line guarantees the quality of the network service. The Hybrid-WAN solution balances cost, efficiency, and quality in creating a corporate WAN. Other benefits of SD-WAN include SLA, QoS, and application-aware routing rules – key applications are tagged and prioritized in network communication for better performance. With these benefits, SD-WAN is getting increasing attention and popularity.

Linux.com: What kind of user experience do you think is expected regarding SDN products and services?

Zhu: There are three things that are most important to the SDN user experience. First is the simplicity. Networking technologies and products sometimes impress users as overcomplicated and hard to manage. SDN network products should be radically simplified. Even a user with limited knowledge of networking should be able to use and configure the product.

Second is the intelligence. SDN network products should be smart enough to identify incidents and fix the issues by itself. This will minimize the impact to the customer’s business and reduce the management costs.

The third most important thing is the transparency. The network is the underlying infrastructure to all applications. The lack of transparency sometimes makes users feel that their network is a black box. A successful SDN product should give more transparency to the network administrators and other network users.

This article was sponsored by Alibaba and written by Linux.com.

Sign up to get the latest updates on ONS NA 2018!

A Primer on Nvidia-Docker — Where Containers Meet GPUs

Traditional programs cannot access GPUs directly; they need a special parallel programming interface to move computations to the GPU. Nvidia, the most popular graphics card manufacturer, has created the Compute Unified Device Architecture (CUDA), a parallel computing platform and programming model for general computing on GPUs. With CUDA, developers can dramatically speed up computing applications by harnessing the power of GPUs.

In GPU-enabled applications, the sequential part of the workload continues to run on the CPU — which is optimized for single-threaded performance — while the parallelized compute intensive part of the application is offloaded to run on thousands of GPU cores in parallel. To integrate CUDA, developers program in popular languages such as C, C++, Fortran, Python and MATLAB by expressing parallelism through extensions in the form of a few basic keywords.

Read more at The New Stack

What Is Open Source Programming?

At the simplest level, open source programming is merely writing code that other people can freely use and modify. But you’ve heard the old chestnut about playing Go, right? “So simple it only takes a minute to learn the rules, but so complex it requires a lifetime to master.” Writing open source code is a pretty similar experience. It’s easy to chuck a few lines of code up on GitHub, Bitbucket, SourceForge, or your own blog or site. But doing it right requires some personal investment, effort, and forethought.

Let’s be clear up front about something: Just being on GitHub in a public repo does not make your code open source. Copyright in nearly all countries attaches automatically when a work is fixed in a medium, without need for any action by the author. For any code that has not been licensed by the author, it is only the author who can exercise the rights associated with copyright ownership. Unlicensed code—no matter how publicly accessible—is a ticking time bomb for anyone who is unwise enough to use it.

Read more at OpenSource.com

Multiversion Testing With Tox

In the Python world, tox (documentation) is a powerful testing tool that allows a project to test against many combinations of versioned environments. The django-coverage-plugin package (Github) uses tox to test against a matrix of Python versions (2.7, 3.4, 3.5, and 3.6) and Django versions (1.8, 1.9, 1.10, 1.11, 1.11tip, 2.0, 2.0tip), resulting in 25 valid combinations to test.

Preparing Your System Environments

tox needs to be installed in, and run from, a virtual environment. As of February 2018, I would recommend a Python 2.7 environment so that you can use the detox package (see below) to parallelize your build’s workload. tox is usually installed into your base development environment and is usually included in your project’s requirements.txt file:

tox >= 1.8
detox
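
For a sense of how such a version matrix is expressed, here is a minimal, hypothetical tox.ini sketch (not the plugin's actual configuration, and using pytest as the test runner purely as an example) covering a few of the combinations:

[tox]
envlist = py{27,34,35,36}-django{18,111,20}

[testenv]
deps =
    pytest
    django18: Django>=1.8,<1.9
    django111: Django>=1.11,<2.0
    django20: Django>=2.0,<2.1
commands = python -m pytest

Running tox then creates one virtual environment per Python/Django combination in the envlist and runs the test command in each of them.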

Read more at CloudCity

Migrating to Linux: Using Sudo

This article is the fifth in our series about migrating to Linux. If you missed earlier ones, you can catch up here:

Part 1 – An Introduction

Part 2 – Disks, Files, and Filesystems

Part 3 – Graphical Environments

Part 4 – The Command Line

You may have been wondering about Linux for a while. Perhaps it’s used in your workplace and you’d be more efficient at your job if you used it on a daily basis. Or, perhaps you’d like to install Linux on some computer equipment you have at home. Whatever the reason, this series of articles is here to make the transition easier.

Linux, like many other operating systems, supports multiple users. It even supports multiple users being logged in simultaneously.

User accounts are typically assigned a home directory where files can be stored. Usually this home directory is in:

/home/<login name>

This way, each user has their own separate location for their documents and other files.

Admin Tasks

In a traditional Linux installation, regular user accounts don’t have permissions to perform administrative tasks on the system. And instead of assigning rights to each user to perform various tasks, a typical Linux installation will require a user to log in as the admin to do certain tasks.

The administrator account on Linux is called root.

Sudo Explained

Historically, to perform admin tasks, one would have to log in as root, perform the task, and then log back out. This process was a bit tedious, so many folks logged in as root and worked all day long as the admin. This practice could lead to disastrous results, for example, accidentally deleting all the files in the system. The root user, of course, can do anything, so there are no protections to prevent someone from accidentally performing far-reaching actions.

The sudo facility was created to make it easier to stay logged in as your regular user account and occasionally perform admin tasks as root without having to log in as root, do the task, and log back out. Specifically, sudo allows you to run a command as a different user. If you don’t specify a specific user, it assumes you mean root.

Sudo can have complex settings to allow users certain permissions to use sudo for some commands but not for others. Typically, a desktop installation will make it so the first account created has full permissions in sudo, so you as the primary user can fully administer your Linux installation.
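
For illustration only, here is what a couple of hypothetical /etc/sudoers entries might look like (this file should always be edited with the visudo command, never directly):

# give user1 full sudo rights
user1   ALL=(ALL:ALL) ALL
# allow user2 to run only systemctl, and only as root
user2   ALL=(root) /bin/systemctl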

Using Sudo

Some Linux installations set up sudo so that you still need to know the password for the root account to perform admin tasks. Others set up sudo so that you type in your own password. There are different philosophies here.

When you try to perform an admin task in the graphical environment, it will usually open a dialog box asking for a password. Enter either your own password (e.g., on Ubuntu) or the root account’s password (e.g., on Red Hat).

When you try to perform an admin task in the command line, it will usually just give you a “permission denied” error. Then you would re-run the command with sudo in front. For example:

systemctl start vsftpd
Failed to start vsftpd.service: Access denied

sudo systemctl start vsftpd
[sudo] password for user1:

When to Use Sudo

Running commands as root (under sudo or otherwise) is not always the best solution for getting around permission errors. While running as root will remove the “permission denied” errors, it’s sometimes best to look for the root cause rather than just addressing the symptom. Sometimes files have the wrong owner and permissions.

Use sudo when you are trying to perform a task or run a program and the program requires root privileges to perform the operation. Don’t use sudo if the file just happens to be owned by another user (including root). In this second case, it’s better to set the permission on the file correctly.
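
For example, if a file in your own home directory somehow ended up owned by root (say, a hypothetical ~/notes.txt), a one-time ownership fix is better than running your editor under sudo from then on:

$ sudo chown user1:user1 ~/notes.txt
$ chmod 644 ~/notes.txt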

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

Open Source LimeSDR Mini Takes Off in Satellites

The topic of 5G mobile networks dominated the recent Mobile World Congress in Barcelona, despite the expectation that widespread usage may be years away. While 5G’s mind-boggling bandwidth captivates our attention, another interesting angle is found in the potential integration with software defined radio (SDR), as seen in OpenAirInterface’s proposed Cloud-RAN (C-RAN) software-defined radio access network.

As the leading purveyor of open source SDR solutions, UK-based Lime Microsystems is well positioned to play a key role in the development of 5G SDR. SDR enables the generation and augmentation of just about any wireless protocol without swapping hardware, thereby affordably enabling complex networks across a range of standards and frequencies.

In late February, Lime announced a collaboration with the European Space Agency (ESA) to make 200 of its Ubuntu Core-driven LimeSDR Mini boards available for developing applications running on ESA’s communications satellites, as part of ESA’s Advanced Research in Telecommunications Systems (ARTES) program. The Ubuntu Core-based, Snap-packaged satcom apps will include prototypes of SDR-enabled 5G satellite networks.

Other applications will include IoT networks controlled by fleets of small, low-cost CubeSat satellites. CubeSats, as well as smaller NanoSats, have been frequently used for open source experimentation. The applications will be shared in an upcoming SDR App Store for Satcom to be developed by Lime and Canonical.

LimeSDR Mini Starts Shipping

Lime Microsystems recently passed a major milestone when its ongoing Crowd Supply campaign for the LimeSDR Mini passed the $500,000 mark. On Mar. 4, the company reported it had shipped the first 300 boards to backers, with plans to soon ship 900 more.

At MWC, Lime demonstrated the LimeSDR Mini and related technologies working with Quortus’ cellular core and Amarisoft’s LTE stack. There was also a demonstration with Vodafone regarding the carrier’s plans to use Lime’s related LimeNET computers to help develop Vodafone’s Open RAN initiative.

Back in May 2016, Lime expanded beyond its business of building field programmable RF (FPRF) transceivers for wireless broadband systems when it successfully launched the $299, open spec LimeSDR board. The $139 LimeSDR Mini that was unveiled last September has a lower-end Intel/Altera FPGA — a MAX 10 instead of a Cyclone IV — but uses the same Lime LMS7002M RF transceiver chip. At 69×31.4mm, it’s only a third the size of the LimeSDR.

The LimeSDR boards can send and receive using UMTS, LTE, GSM, WiFi, Bluetooth, Zigbee, LoRa, RFID, Digital Broadcasting, Sigfox, NB-IoT, LTE-M, Weightless, and any other wireless technology that can be programmed with SDR. The boards drive low-cost, multi-lingual cellular base stations and wireless IoT gateways, and are used for various academic, industrial, hobbyist, and scientific SDR applications, such as radio astronomy.

Raspberry Pi integration

Unlike the vast majority of open source Linux hacker boards, the LimeSDR boards don’t run Linux locally. Instead, their FPGAs manage DSP and interfacing tasks, while a USB 3.0-connected host system running Ubuntu Core provides the UI and high-level supervisory functions. Yet, the LimeSDR Mini can be driven by a Raspberry Pi or other low-cost hacker board that supports Ubuntu Core instead of requiring an x86-based desktop.

In late January, the LimeSDR Mini campaign added a Raspberry Pi compatible Grove Starter Kit option with a GrovePi+ board, 15 Grove sensor and actuator modules, and dual antennas for 433/868/915MHz bands. Lime is supporting the kit with its LimeSDR optimized ScratchRadio extension.

Around the same time, Lime announced an open source prototype hack that combines a LimeSDR Mini board, a Raspberry Pi Zero, and a PiCam. Lime calls the DVB (digital video broadcasting) based prototype “one of the world’s smallest DVB transmitters.”

Compared to the LimeSDR, the LimeSDR Mini has a reduced frequency range, RF bandwidth, and sample rate. The board operates at 10MHz to 3.5 GHz compared to 100 kHz to 3.8 GHz for the original. Both models, however, can achieve up to 10 GHz frequencies with the help of an LMS8001 Companion board that was added as a LimeSDR Mini stretch goal project in October.

With Ubuntu Core’s Snap application packages and support for app marketplaces, LimeSDR apps can easily be downloaded, installed, developed, and shared. The drivers that run on the Ubuntu host system are developed with an open source Lime Suite library.

Lime was one of the earliest supporters of the lightweight, transactional Ubuntu Core, in part because it’s designed to ease OTA updates — a chief benefit of SDR. Ubuntu Core continues to steadily expand on hacker boards such as the Orange Pi, as well as on smart home hubs and IoT gateways like Rigado’s recently updated Vesta IoT gateways. The use of Ubuntu Core has helped to quickly expand the open LimeSDR development community.

LimeNET expands on the high end

In May 2017, Lime Microsystems launched three open source embedded LimeNET computers that don’t require a separate tethered computer. The LimeNET Mini, LimeNET Enterprise, and LimeNET Base Station, which range in price from $2,600 to over $17,000, run Ubuntu Core on various 14nm fabricated Intel Core processors. They offer a variety of ports, antennas, WiFi, Bluetooth, and other features that turn the underlying LimeSDR boards into wireless base stations.

The top-of-the-line LimeNET Base Station features dual RF transceiver chips, as well as a LimeNET QPCIe variant of the LimeSDR board with a faster PCIe interface instead of USB. It also adds an amplifier with dual MIMO units that greatly expands the range beyond the 15-meter limit of the other LimeNET systems. If you don’t want the separately available LimeNET Amplifier Chassis, you can buy the LimeNET QPCIe board as part of a cheaper LimeNET Core system.

Lime’s boards and systems aren’t the only low-cost SDR solutions running on Linux. Last year, for example, Avnet launched a Linux- and Xilinx Zynq-7020 based PicoZed SDR computer-on-module. Earlier products include the Epiq Solutions Matchstiq Z1, a handheld system that runs Linux on an iVeia Atlas-I-Z7e module equipped with a Zynq Z-7020.

Sign up for ELC/OpenIoT Summit updates to get the latest information: