
Linux LAN Routing for Beginners: Part 2

Last week we reviewed IPv4 addressing and the network admin’s indispensable ipcalc tool. Now we’re going to make some nice LAN routers.

VirtualBox and KVM are wonderful for testing routing, and the examples in this article are all performed in KVM. If you prefer to use physical hardware, then you need three computers: one to act as the router, and the other two to represent two different networks. You also need two Ethernet switches and cabling.

The examples assume a wired Ethernet LAN, and we shall pretend there are some bridged wireless access points for a realistic scenario, although we’re not going to do anything with them. (I have not yet tried all-WiFi routing and have had mixed success with connecting a mobile broadband device to an Ethernet LAN, so look for those in a future installment.)

Network Segments

The simplest network segment is two computers in the same address space connected to the same switch. These two computers do not need a router to communicate with each other. A useful term is broadcast domain, which describes a group of hosts that are all in the same network. They may all be connected to a single Ethernet switch, or to multiple switches. A broadcast domain may include two different networks connected by an Ethernet bridge, which makes the two networks behave as a single network. Wireless access points are typically bridged to a wired Ethernet network.
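As an aside, you can build such a bridge with the same ip command we use throughout this article. Here is a minimal sketch, assuming two hypothetical wired interfaces named eth0 and eth1 (these are not part of our example setup):

# ip link add name br0 type bridge
# ip link set eth0 master br0
# ip link set eth1 master br0
# ip link set br0 up

The bridge then forwards Ethernet frames between the two interfaces, so hosts on both sides see each other as one network.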

A broadcast domain can talk to a different broadcast domain only when they are connected by a network router.

Simple Network

The following example commands are not persistent, and your changes will vanish with a restart.

A broadcast domain needs a router to talk to other broadcast domains. Let’s illustrate this with two computers and the ip command. Our two computers are 192.168.110.125 and 192.168.110.126, and they are plugged into the same Ethernet switch. In VirtualBox or KVM, you automatically create a virtual switch when you configure a new network, so when you assign a network to a virtual machine it’s like plugging it into a switch. Use ip addr show to see your addresses and network interface names. The two hosts can ping each other.
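For example, from the first host (using the example addresses; your addresses and interface names may differ):

$ ping -c 3 192.168.110.126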

Now add an address in a different network to one of the hosts:

# ip addr add 192.168.120.125/24 dev ens3

You have to specify the network interface name, which in the example is ens3. If you omit the network prefix, in this case /24, the ip command assumes a /32 host address, so it pays to be explicit. Check your work with ip. The example output is trimmed for clarity:

$ ip addr show
ens3: 
    inet 192.168.110.125/24 brd 192.168.110.255 scope global dynamic ens3
       valid_lft 875sec preferred_lft 875sec
    inet 192.168.120.125/24 scope global ens3
       valid_lft forever preferred_lft forever

The host at 192.168.120.125 can ping itself (ping 192.168.120.125), and that is a good basic test to verify that your configuration is working correctly, but the second computer can’t ping that address.

Now we need to do a bit of network juggling. Start by adding a third host to act as the router. This needs two virtual network interfaces and a second virtual network. In real life you want your router to have static IP addresses, but for now we’ll let the KVM DHCP server do the work of assigning addresses, so you only need these two virtual networks:

  • First network: 192.168.110.0/24
  • Second network: 192.168.120.0/24

Then your router must be configured to forward packets. Packet forwarding is disabled by default on most distributions, which you can check with sysctl:

$ sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 0

The zero means it is disabled. Enable it with this command:

# echo 1 > /proc/sys/net/ipv4/ip_forward
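That change is also not persistent. To make packet forwarding survive a reboot, set it in your sysctl configuration; for example (the filename here is just a convention, any name in /etc/sysctl.d/ works):

# echo "net.ipv4.ip_forward = 1" > /etc/sysctl.d/99-ipforward.conf
# sysctl -p /etc/sysctl.d/99-ipforward.conf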

Then configure one of your other hosts to play the part of the second network by assigning the 192.168.120.0/24 virtual network to it in place of the 192.168.110.0/24 network, and then reboot the two “network” hosts, but not the router. (Or restart networking; I’m old and lazy and don’t care what weird commands are required to restart services when I can just reboot.) The addressing should look something like this:

  • Host 1: 192.168.110.125
  • Host 2: 192.168.120.135
  • Router: 192.168.110.126 and 192.168.120.136

Now go on a ping frenzy, and ping everyone from everyone. There are some quirks with virtual machines and the various Linux distributions that produce inconsistent results, so some pings will succeed and some will not. Not succeeding is good, because it means you get to practice creating a static route. First, view the existing routing tables. The first example is from Host 1, and the second is from the router:

$ ip route show
default via 192.168.110.1 dev ens3  proto static  metric 100 
192.168.110.0/24 dev ens3  proto kernel  scope link  src 192.168.110.125  metric 100
$ ip route show
default via 192.168.110.1 dev ens3 proto static metric 100
default via 192.168.120.1 dev ens3 proto static metric 101
169.254.0.0/16 dev ens3 scope link metric 1000
192.168.110.0/24 dev ens3 proto kernel scope link src 192.168.110.126 metric 100
192.168.120.0/24 dev ens9 proto kernel scope link src 192.168.120.136 metric 100

This shows us that the default routes are the ones assigned by KVM. The 169.* address is the automatic link-local address, and we can ignore it. Then we see two more routes, the two that belong to our router. You can have multiple routes, and this example shows how to add a non-default route to Host 1:

# ip route add 192.168.120.0/24 via 192.168.110.126 dev ens3

This means Host 1 can access the 192.168.120.0/24 network via the router interface 192.168.110.126. See how it works? Host 1 and the router need to be in the same address space to connect, and then the router forwards packets to the other network.
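Keep in mind that ping replies need a route back. If Host 2 doesn’t answer, give it the mirror-image route through the router’s other interface (using our example addresses; your interface name may differ):

# ip route add 192.168.110.0/24 via 192.168.120.136 dev ens3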

This command deletes a route:

# ip route del 192.168.120.0/24

In real life, you’re not going to be setting up routes manually like this; you’ll use a router daemon and advertise your router via DHCP. But understanding the fundamentals is key. Come back next week to learn how to set up a nice easy router daemon that does the work for you.


Security in the Modern Data Center

The National Institute of Standards and Technology, a division of the U.S. Department of Commerce, is well known in the security community for its standards and recommendations that guide many organizations toward a secure culture, policies, and technological infrastructure. Its recently published guidance, the Application Container Security Guide, analyzes the unique risks posed by containerized applications and advises organizations how to secure them. The first recommendation, “Tailor the organization’s operational culture and technical processes to support the new way of developing, running, and supporting applications made possible by containers,” sets the tone for the analysis, implying that modern data centers require a major shift in enterprise strategy, and in the means of securing them, to keep pace with new methodologies for developing and running applications.

The document goes on to emphasize that securing the data center requires tools that were designed from the ground up for this purpose. The authors explain that existing security tools are simply not up to the task of securing virtualization-based infrastructure, as they were designed before such an environment was envisioned.

Read more at The New Stack

Xen Project Member Spotlight: DornerWorks

We see virtualization becoming more and more of a necessity in the embedded world. As the complexity of processors increases, so does the difficulty of utilizing them. Processors like the Zynq UltraScale+ MPSoC, which combines a Quad-Core ARM Cortex-A53, a Dual-Core ARM Cortex-R5, and an FPGA in a single chip, can be difficult to manage. Virtualization provides a means to isolate the various pieces in a more manageable and effective way. Not only does the Xen Project Hypervisor help manage complexity, but it can also reduce size, weight, and power (SWaP), provide redundancy, address obsolescence of legacy systems, and more.

However, while the temptation is to use virtualization to create a single integrated platform for all computation, this approach could create a single point of failure unless it is mitigated by system-wide redundancies.

Read more at Xen Project

EdgeX Foundry Continues Momentum with ‘California Code’ Preview

Only 3 months away from our first release, the EdgeX community has made available what we are calling the “California Preview.” This preview is a collection of five key microservices written in Go that are drop-in replacements for our Java microservices. The work on the California release (not due until June) still continues, but we wanted to show the world that EdgeX was indeed going to be fast, small, and still a flexible platform for building IIoT solutions – thus we named this a “preview” of what we hope to show in full by the California release.

In the preview, the core services (Core Data, Metadata, and Command) have been recreated in Go, as well as the bulk of the Export Client and Export Distribution microservices. Just how fast and how small has EdgeX gotten? Let’s take a look at some of the new EdgeX measurements.

Read more at The Linux Foundation

DevOps Jobs Salaries: 9 Statistics to See

If you’re on the hunt for a DevOps job, don’t expect your search to last long. With the right set of skills, you have employers competing for your services these days.

Even IT pros just beginning a transition into a DevOps-oriented job from a more traditional role are set up for success in this market.

“The DevOps market is very strong,” says Ryan Sutton, district president at staffing and recruiting firm Robert Half Technology, adding that the demand is a logical outcome of increasing cloud adoption among companies. “[DevOps-related hiring] has been very active as companies try to keep up with the technical trend and improve efficiency and collaboration across teams.”

Read more at Enterprisers Project

SystemRescueCd

If you accidentally delete data or format a disk, good advice can be expensive. Or maybe not: You can undo many data losses with SystemRescueCd.

The price for mass storage devices of all types has been falling steadily in recent years, with a simultaneous increase in capacity. As a result, users are storing more and more data on local storage media – often without worrying about backing it up. Once the milk has been spilled, the anxious search begins for important photos, videos, correspondence, and spreadsheets. SystemRescueCd can help in these cases by providing a comprehensive toolbox for every computer, with the possibility of restoring lost items.

The Gentoo derivative is a hybrid image for 32- and 64-bit computers that comes in just under 470MB [1]. The entire distro fits on a CD, so it is also suitable for use on older systems. Because it is a hybrid image, you can also boot the operating system from a USB stick by writing the image directly to the stick.
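A minimal sketch of doing that with dd, assuming a hypothetical ISO filename and that your stick shows up as /dev/sdX (double-check the device name, because dd overwrites the target):

# dd if=systemrescuecd.iso of=/dev/sdX bs=4M status=progress
# sync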

Read more at Linux Pro 

Protecting Code Integrity with PGP — Part 3: Generating PGP Subkeys

In this tutorial series, we’re providing practical guidelines for using PGP. Previously, we provided an introduction to basic tools and concepts, and we showed how to generate and protect your master PGP key. In this third article, we’ll explain how to generate PGP subkeys, which are used in daily work. 

Checklist

  1. Generate a 2048-bit Encryption subkey (ESSENTIAL)

  2. Generate a 2048-bit Signing subkey (ESSENTIAL)

  3. Generate a 2048-bit Authentication subkey (NICE)

  4. Upload your public keys to a PGP keyserver (ESSENTIAL)

  5. Set up a refresh cronjob (ESSENTIAL)

Considerations

Now that we’ve created the master key, let’s create the keys you’ll actually be using for day-to-day work. We create 2048-bit keys because a lot of specialized hardware (we’ll discuss this more later) does not handle larger keys, but also for pragmatic reasons. If we ever find ourselves in a world where 2048-bit RSA keys are not considered good enough, it will be because of fundamental breakthroughs in computing or mathematics and therefore longer 4096-bit keys will not make much difference.

Create the subkeys

To create the subkeys, run:

$ gpg --quick-add-key [fpr] rsa2048 encr
$ gpg --quick-add-key [fpr] rsa2048 sign

You can also create the Authentication key, which will allow you to use your PGP key for ssh purposes:

$ gpg --quick-add-key [fpr] rsa2048 auth

You can review your key information using gpg --list-key [fpr]:

pub   rsa4096 2017-12-06 [C] [expires: 2019-12-06]
     111122223333444455556666AAAABBBBCCCCDDDD
uid           [ultimate] Alice Engineer <alice@example.org>
uid           [ultimate] Alice Engineer <allie@example.net>
sub   rsa2048 2017-12-06 [E]
sub   rsa2048 2017-12-06 [S]

Upload your public keys to the keyserver

Your key creation is complete, so now you need to make it easier for others to find it by uploading it to one of the public keyservers. (Skip this step if you’re not planning to actually use the key you’ve created, as it just litters keyservers with useless data.)

$ gpg --send-key [fpr]

If this command does not succeed, you can try specifying the keyserver on a port that is most likely to work:

$ gpg --keyserver hkp://pgp.mit.edu:80 --send-key [fpr]

Most keyservers communicate with each other, so your key information will eventually synchronize to all the others.

Note on privacy: Keyservers are completely public and therefore, by design, leak potentially sensitive information about you, such as your full name, nicknames, and personal or work email addresses. If you sign other people’s keys or someone signs yours, keyservers will additionally become leakers of your social connections. Once such personal information makes it to the keyservers, it becomes impossible to edit or delete. Even if you revoke a signature or identity, that does not delete them from your key record, just marks them as revoked — making them stand out even more.

That said, if you participate in software development on a public project, all of the above information is already public record, so making it additionally available via keyservers does not result in a net loss in privacy.

Upload your public key to GitHub

If you use GitHub in your development (and who doesn’t?), you should upload your key following the instructions they have provided.

To generate the public key output suitable to paste in, just run:

$ gpg --export --armor [fpr]
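If you want to send it straight to the clipboard for pasting, one option on Linux (assuming the xclip utility is installed) is:

$ gpg --export --armor [fpr] | xclip -selection clipboard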

Set up a refresh cronjob

You will need to regularly refresh your keyring to get the latest changes on other people’s public keys. You can set up a cronjob to do that:

$ crontab -e

Add the following on a new line:

@daily /usr/bin/gpg2 --refresh-keys >/dev/null 2>&1

Note: Check the full path to your gpg or gpg2 command, and use gpg2 if your regular gpg is the legacy GnuPG v.1.


Purism Adds Open-Source Security Firmware to its Linux Laptop Line

In its latest news, Purism announced that it has successfully integrated Trammel Hudson’s Heads security firmware into its Trusted Platform Module (TPM)-equipped Librem laptops. Heads is an open-source computer firmware and configuration tool that aims to provide better physical security and data protection.

Heads combines physical hardening of hardware platforms and flash security features with custom coreboot firmware and a Linux boot loader in ROM. While still not a complete replacement for proprietary AMD or Intel firmware blobs, Heads, by controlling a system from the first instruction the CPU executes to full boot-up, enables you to track every step of the boot firmware and configuration.

Read more at ZDNet

Monitoring with Prometheus

Prometheus is open-source and one of the popular CNCF projects, written in Golang. Some of its components are written in Ruby, but most of them are written in Go. This means you get single binary executables: you just download and run Prometheus and its components, and it’s that simple. Prometheus is fully Docker compatible; a number of Prometheus components, along with Prometheus itself, are available on Docker Hub.

You will see how to spin up a minimal Prometheus server with Node Exporter and Grafana components in Docker containers to monitor a standalone Ubuntu 16.04 server. First, let’s look at the main mandatory components of Prometheus from the ground up.
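As a rough sketch of what that spin-up can look like, these are the stock images on Docker Hub with their typical default ports (not necessarily the article’s exact commands):

$ docker run -d --name prometheus -p 9090:9090 prom/prometheus
$ docker run -d --name node-exporter -p 9100:9100 prom/node-exporter
$ docker run -d --name grafana -p 3000:3000 grafana/grafana

Note that for meaningful host-level metrics you would normally give node-exporter access to the host (for example with --network host and mounts of /proc and /sys), so treat this purely as a starting point.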

Read more at Janshair Khan

Amazon’s Alexa Takes Open-Source Route to Beat Google Into Cars

Amazon engineers are working with Nuance Communications Inc. and Voicebox Technologies Corp. to write code that makes in-vehicle apps compatible with several speech-recognition technologies, eliminating the need for developers to make multiple versions. 

The catch is that the cars must use Automotive Grade Linux, an open-source platform being developed by Toyota Motor Corp. and other auto manufacturers and suppliers to underpin all software running in the vehicle. The only cars currently on the system are Toyota’s new Camry and Sienna and the Japanese version of the plug-in Prius, though the carmaker plans to expand that list. AGL has been growing too, reaching 114 members currently, up from around 90 a year earlier. Amazon signed on last month.

Read more at Bloomberg