
Postage-Stamp Linux

We’ve come a long way from the early days of big iron, and few things demonstrate that better than Microchip’s new SAMA5D27. What’s a SAMA5D27, you ask? It’s a postage stamp that runs Linux. Well, not literally a postage stamp, but a fully realized microcontroller that measures about 1½ inches (40mm) on a side. It’s not much more expensive than a first-class stamp, either, at about $39 in small quantities.

For that, you get an Arm Cortex-A5 processor running at 500 MHz, a floating-point unit, 128 MB of DRAM, Ethernet with PHY, flash memory, camera and LCD interfaces, USB, CAN, a pile of everyday peripherals – and Linux. Yup, we’ve reduced the hulking mainframes of our parents’ age to the size of a postage stamp. If it were delivered by jetpack, we’d be in the future.

Read more at EE Journal

The RedMonk Programming Language Rankings: January 2018

Given that we’re into March, it seems like a reasonable time to publish our Q1 Programming Language Rankings.

The data source used for these queries is the GitHub Archive. We query languages by pull request in a manner similar to the one GitHub used to assemble the 2016 State of the Octoverse. Our query is designed to be as comparable as possible to the previous process.

  • Language is based on the base repository language. While this continues to have the caveats outlined below, it does have the benefit of cohesion with our previous methodology.
  • We exclude forked repos.
  • We use the aggregated history to determine ranking (though, given the table structure changes, this can no longer be accomplished via a single query).

The primary change is that the GitHub portion of the language ranking is now based on pull requests rather than repos. 

Read more at RedMonk

Dynamic Linux Routing with Quagga

So far in this series, we have learned the intricacies of IPv4 addressing in Linux LAN Routing for Beginners: Part 1 and how to create static routes manually in Linux LAN Routing for Beginners: Part 2.

Now we’re going to use Quagga to manage dynamic routing for us: just set it and forget it. Quagga is a routing software suite providing OSPFv2, OSPFv3, RIP v1 and v2, RIPng, and BGP-4, all managed by the zebra daemon.

OSPF means Open Shortest Path First. OSPF is an interior gateway protocol (IGP); it is for LANs and LANs connected over the Internet. Every OSPF router in your network contains the topology for the whole network, and calculates the best paths through the network. OSPF automatically multicasts any network changes that it detects. You can divide up your network into areas to keep routing tables manageable; the routers in each area only need to know the next hop out of their areas rather than the entire routing table for your network.

RIP, Routing Information Protocol, is an older protocol. RIP routers periodically multicast their entire routing tables to the network, rather than just the changes as OSPF does. RIP measures routes by hops, and sees any destination over 15 hops as unreachable. RIP is simple to set up, but OSPF is a better choice for speed, efficiency, and scalability.

BGP-4 is the Border Gateway Protocol version 4. This is an exterior gateway protocol (EGP) for routing Internet traffic. You won’t use BGP unless you are an Internet service provider.

Preparing for OSPF

In our little KVM test lab, there are two virtual machines representing two different networks, and one VM acting as the router. Create two networks: net1 is 192.168.110.0/24 and net2 is 192.168.120.0/24. You may leave DHCP enabled, because you will log into each of the three virtual machines and give them static addresses anyway. Host 1 is on net1, Host 2 is on net2, and Router is on both networks. Give Host 1 a gateway of 192.168.110.126, and Host 2 gets 192.168.120.136.

  • Host 1: 192.168.110.125
  • Host 2: 192.168.120.135
  • Router: 192.168.110.126 and 192.168.120.136

Install Quagga on your router, which on most Linuxes is the quagga package. On Debian there is a separate documentation package, quagga-doc. Uncomment this line in /etc/sysctl.conf to enable packet forwarding:

net.ipv4.ip_forward=1

Then run the sysctl -p command to load the change.
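To confirm the change took effect, you can read the value back afterwards (a quick sanity check; these paths are standard on Linux):

```shell
# Reload settings from /etc/sysctl.conf (requires root; ignore failure if unprivileged)
sysctl -p /etc/sysctl.conf || true

# Verify forwarding is enabled; this should print 1 once the line is uncommented
cat /proc/sys/net/ipv4/ip_forward
```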

Configuring Quagga

Look in your Quagga package for example configuration files, such as /usr/share/doc/quagga/examples/ospfd.conf.sample. Configuration files should be in /etc/quagga, unless your particular Linux flavor does something creative with them. Most Linuxes ship with just two files in this directory, vtysh.conf and zebra.conf. These provide minimal defaults to enable the daemons to run. zebra always has to run first, and again, unless your distro has done something strange, it should start automatically when you start ospfd. Debian/Ubuntu is a special case, which we will get to in a moment.

Each router daemon gets its own configuration file, so we must create /etc/quagga/ospfd.conf, and populate it with these lines:

!/etc/quagga/ospfd.conf
hostname router1
log file /var/log/quagga/ospfd.log
router ospf
 ospf router-id 192.168.110.15
 network 192.168.110.0/24 area 0.0.0.0
 network 192.168.120.0/24 area 0.0.0.0
access-list localhost permit 127.0.0.1/32
access-list localhost deny any
line vty
  access-class localhost

You may use either the exclamation point or hash marks to comment out lines. Let’s take a quick walk through these options.

  • hostname is whatever you want. This isn’t a normal Linux hostname, but the name you see when you log in with vtysh or telnet.
  • log file is whatever file you want to use for the logs.
  • router specifies the routing protocol.
  • ospf router-id is any 32-bit number. An IP address of the router is good enough.
  • network defines the networks your router advertises.
  • The access-list entries restrict vtysh, the Quagga command shell, to the local machine, and deny remote administration.

Debian/Ubuntu

Debian, Ubuntu, and possibly other Debian derivatives require one more step before you can launch the daemon. Edit /etc/quagga/daemons so that all lines say no except zebra=yes and ospfd=yes.

Then, to launch ospfd on Debian, start the quagga service:

# systemctl start quagga

On most other Linuxes, including Fedora and openSUSE, start ospfd:

# systemctl start ospfd

Now Host 1 and Host 2 should ping each other, and the router.
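For example, from Host 1 (using the lab addresses above; this assumes all three VMs are up):

```shell
# Run these from Host 1 (they require the lab VMs to be running):
#   ping -c 3 192.168.110.126    # the router's net1 interface
#   ping -c 3 192.168.120.135    # Host 2, across the router on net2
# Replies from 192.168.120.135 prove the router is forwarding between the networks.
```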

That was a lot of words to describe a fairly simple setup. In real life the router will connect to two switches and provide a gateway for all the computers attached to those switches. You could add more network interfaces to your router to provide routing for more networks, or connect directly to another router, or to a LAN backbone that connects to other routers.

You probably don’t want to hassle with configuring network interfaces manually. The easy way is to advertise your router with your DHCP server. If you use Dnsmasq then you get DHCP and DNS all in one.

There are many more configuration options, such as encrypted password protection. See the official documentation at Quagga Routing Suite.
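As one example, vtysh access can be password-protected with a couple of lines in the daemon configuration (a sketch; "zebrapass" is a placeholder):

```shell
# Add to /etc/quagga/ospfd.conf (sketch):
#   password zebrapass
#   service password-encryption
# "service password-encryption" stores the password in encrypted rather than plain form.
```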

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

Raising More than Capital: Successful Women in Technology

One of my employees chooses a word at the beginning of each year to guide her personal and professional development efforts. Last year the word she selected was “Rise.” She told me it inspired her to elevate not only her skills, but the quality of her relationships, her attitude toward life and her self-confidence. As a female entrepreneur and the CEO of a growing global software company, our conversation led me to reflect on how successful women in technology rise above our challenges.

Raising Awareness

Research highlights the plethora of internal and external hurdles female technology entrepreneurs face, including limited access to funding, lack of advisors and mentors, sexism and harassment, social expectations, balancing personal and professional responsibility, downplaying our worth and of course, fear of failure. With such a gender gap to overcome, it’s no surprise that in 2017 only 17% of startups had a female founder, a number which has failed to increase in the last five years.

Read more at The Linux Foundation

Optimizing Data Queries for Time Series Applications

Now that we understand what time series data is and why we want to store it in a time series database, we’ve reached a new challenge. As with any application, we want to ensure our database queries are smart and performant, so let’s talk about how we can avoid some common pitfalls.

Indexing

Indexing, the oft-recommended and rarely understood solution to all attempts at optimization, is applicable to most databases. Whether the time series database you’re using is built on Cassandra or MySQL or its own unique architecture, indexing affects your queries. Essentially, an index is a data structure that stores the values from a specific column, meaning that when we search by an indexed field, we have a handy shortcut to the values. When we search by unindexed fields, we have to discover the full path to the value, no shortcuts or magic tricks. Searching unindexed fields is like having to watch Frodo walk through Middle Earth unedited — it takes a long time.

Read more at The New Stack

Infrastructure 2.0: Whatever We’re Calling it Now, It’s Here

The cloud has taught us about economies of scale, and now containers are threatening to redefine them once again. It’s the collection of devices known as the network — or the data path — that supports the scale of applications and services. In that data path lies a number of network and application services that provide for the scale, security, and speed of the applications they deliver. Each one needs to be provisioned, configured, and managed. Every. Single. One.

That’s where Infrastructure 2.0 — DevNetOps, NetOps 2.0, Super-NetOps — comes in. Because its purpose is to embrace DevOps principles and apply its methodologies to the network.

This notion comprises three core concepts: programmable (API-enabled) infrastructure, infrastructure as code, and the inclusion of integration.

Read more at SDxCentral

Improving Teamwork by Engineering Trust

Even in highly mature open organizations, where we’re doing our best to be collaborative, inclusive, and transparent, we can fail to reach alignment or common understanding. Disagreements and miscommunication between leaders and their teams, between members of the same team, between different teams in a department, or between colleagues in different departments remain common even in the most high-performing organizations. Responses to their intensity and impact run the gamut, from “Why did someone take our whiteboard?” to “Why are we doing this big project?”

Vagueness and confusion are often at the heart of these moments. And intentional relationship design is one tool to help us address them.

Read more at OpenSource.com

Windows for Linux Nerds

I am super excited about Windows Subsystem for Linux. It is one of the coolest pieces of tech I’ve seen since I started using Docker.

First, a little background on how WSL works…

You can learn a lot more about this from the Windows Subsystem for Linux Overview. I will go over some of the parts I found to be the most interesting.

The Windows NT kernel was designed from the beginning to support running POSIX, OS/2, and other subsystems. In the early days, these were just user-mode programs that would interact with ntdll to perform system calls. Since the Windows NT kernel supported POSIX, there was already a fork system call implemented in the kernel. However, the Windows NT call for fork, NtCreateProcess, is not directly compatible with the Linux syscall, so it has some special handling, which you can read more about under System Calls.

There are both user and kernel mode parts to WSL. Below is a diagram showing the basic Windows kernel and user modes alongside the WSL user and kernel modes.

Read more at Jessie Frazelle’s blog

Protecting Code Integrity with PGP — Part 4: Moving Your Master Key to Offline Storage

In this tutorial series, we’re providing practical guidelines for using PGP. You can catch up on previous articles here:

Part 1: Basic Concepts and Tools

Part 2: Generating Your Master Key

Part 3: Generating PGP Subkeys

Here in part 4, we continue the series with a look at how and why to move your master key from your home directory to offline storage. Let’s get started.

Checklist

  • Prepare encrypted detachable storage (ESSENTIAL)

  • Back up your GnuPG directory (ESSENTIAL)

  • Remove the master key from your home directory (NICE)

  • Remove the revocation certificate from your home directory (NICE)

Considerations

Why would you want to remove your master [C] key from your home directory? This is generally done to prevent your master key from being stolen or accidentally leaked. Private keys are tasty targets for malicious actors — we know this from several successful malware attacks that scanned users’ home directories and uploaded any private key content found there.

It would be very damaging for any developer to have their PGP keys stolen — in the Free Software world, this is often tantamount to identity theft. Removing private keys from your home directory helps protect you from such events.

Back up your GnuPG directory

!!!Do not skip this step!!!

It is important to have a readily available backup of your PGP keys should you need to recover them (this is different from the disaster-level preparedness we did with paperkey).

Prepare detachable encrypted storage

Start by getting a small USB “thumb” drive (preferably two!) that you will use for backup purposes. You will first need to encrypt them:

For the encryption passphrase, you can use the same one as on your master key.
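The encryption commands themselves aren't shown in this excerpt; on Linux, one common approach is LUKS via cryptsetup (a sketch under the assumption that your drive appears as /dev/sdX; double-check with lsblk first, since luksFormat erases the device):

```shell
# Identify the USB drive first -- /dev/sdX below is an assumption; check with lsblk!
#   lsblk
#   sudo cryptsetup luksFormat /dev/sdX          # prompts for a passphrase; erases the drive
#   sudo cryptsetup luksOpen /dev/sdX gnupg-usb  # open it as /dev/mapper/gnupg-usb
#   sudo mkfs.ext4 -L gnupg-backup /dev/mapper/gnupg-usb
#   sudo cryptsetup luksClose gnupg-usb
```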

Back up your GnuPG directory

Once the encryption process is over, re-insert the USB drive and make sure it gets properly mounted. Find out the full mount point of the device, for example by running the mount command (under Linux, external media usually gets mounted under /media/disk, under Mac it’s /Volumes).

Once you know the full mount path, copy your entire GnuPG directory there:

$ cp -rp ~/.gnupg [/media/disk/name]/gnupg-backup

(Note: If you get any Operation not supported on socket errors, those are benign and you can ignore them.)

You should now test to make sure everything still works:

$ gpg --homedir=[/media/disk/name]/gnupg-backup --list-key [fpr]

If you don’t get any errors, then you should be good to go. Unmount the USB drive and distinctly label it, so you don’t blow it away next time you need to use a random USB drive. Then, put it in a safe place — but not too far away, because you’ll need to use it every now and again for things like editing identities, adding or revoking subkeys, or signing other people’s keys.

Remove the master key

The files in our home directory are not as well protected as we like to think. They can be leaked or stolen via many different means:

  • By accident when making quick homedir copies to set up a new workstation

  • By systems administrator negligence or malice

  • Via poorly secured backups

  • Via malware in desktop apps (browsers, pdf viewers, etc)

  • Via coercion when crossing international borders

Protecting your key with a good passphrase greatly helps reduce the risk of any of the above, but passphrases can be discovered via keyloggers, shoulder-surfing, or any number of other means. For this reason, the recommended setup is to remove your master key from your home directory and store it on offline storage.

Removing your master key

Please see the previous section and make sure you have backed up your GnuPG directory in its entirety. What we are about to do will render your key useless if you do not have a usable backup!

First, identify the keygrip of your master key:

$ gpg --with-keygrip --list-key [fpr]

The output will be something like this:

pub   rsa4096 2017-12-06 [C] [expires: 2019-12-06]
     111122223333444455556666AAAABBBBCCCCDDDD
     Keygrip = AAAA999988887777666655554444333322221111
uid           [ultimate] Alice Engineer <alice@example.org>
uid           [ultimate] Alice Engineer <allie@example.net>
sub   rsa2048 2017-12-06 [E]
     Keygrip = BBBB999988887777666655554444333322221111
sub   rsa2048 2017-12-06 [S]
     Keygrip = CCCC999988887777666655554444333322221111

Find the keygrip entry that is beneath the pub line (right under the master key fingerprint). This will correspond directly to a file in your home .gnupg directory:

$ cd ~/.gnupg/private-keys-v1.d
$ ls
AAAA999988887777666655554444333322221111.key
BBBB999988887777666655554444333322221111.key
CCCC999988887777666655554444333322221111.key

All you have to do is remove the .key file that corresponds to the master keygrip:

$ cd ~/.gnupg/private-keys-v1.d
$ rm AAAA999988887777666655554444333322221111.key

Now, if you issue the --list-secret-keys command, it will show that the master key is missing (the # indicates it is not available):

$ gpg --list-secret-keys
sec#  rsa4096 2017-12-06 [C] [expires: 2019-12-06]
     111122223333444455556666AAAABBBBCCCCDDDD
uid           [ultimate] Alice Engineer <alice@example.org>
uid           [ultimate] Alice Engineer <allie@example.net>
ssb   rsa2048 2017-12-06 [E]
ssb   rsa2048 2017-12-06 [S]

Remove the revocation certificate

Another file you should remove (but keep in backups) is the revocation certificate that was automatically created with your master key. A revocation certificate allows someone to permanently mark your key as revoked, meaning it can no longer be used or trusted for any purpose. You would normally use it to revoke a key that, for some reason, you can no longer control — for example, if you had lost the key passphrase.

Just as with the master key, if a revocation certificate leaks into malicious hands, it can be used to destroy your developer digital identity, so it’s better to remove it from your home directory.

$ cd ~/.gnupg/openpgp-revocs.d
$ rm [fpr].rev

Next time, you’ll learn how to secure your subkeys as well. Stay tuned.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

One Week Until Embedded Linux Conference + OpenIoT Summit in Portland: Will You Join Us?

In just one week, you could be in good company, joining 900+ developers, architects, practitioners, and Embedded Linux and Industrial IoT technologists.

Sign up for ELC/OpenIoT Summit updates to get the latest information:

Taking place March 12-14 at the Hilton Portland, ELC + OpenIoT Summit will deliver:

  • 100+ technical sessions: Conference sessions covering a range of technical topics from open industrial IoT solutions to embedded Linux development, led by experts from ARM, Dell, Intel, Microsoft, and many more.

  • Birds of a Feather Sessions (BoFs): Unconference sessions, organized by Yocto Project, OpenEmbedded, Open Source Foundries, Eclipse Foundation, Bootlin, and more, give you the opportunity to collaborate with other leading professionals.

  • Onsite Attendee Reception: Enjoy drinks and light bites as you connect and engage with other attendees while checking out the latest technologies in the ELC Technical Showcase.

  • *New* Embedded Apprentice Linux Engineer Track: This brand new track, featuring nine seminars over three days and designed for embedded engineers transitioning to Linux, includes both guided training and hands-on lab time to practice skill building using a PocketBeagle board. Take as many or as few seminars as you need to hone your skills. Additional registration fee of $75.

  • Yocto Project Developer Day North America 2018: A one-day, hands-on training event that connects you directly to leading Yocto Project developers, who will guide you through the creation of a custom-built Linux distribution for embedded devices. Additional registration fee of $209.

  • The Closing Game: A perennial favorite – the closing game is part trivia, part pop culture, all fun, helping to close out this great event.

REGISTER NOW >>