
Practical Networking for Linux Admins: IPv6 Routing

Our story so far: We have learned the important bits about TCP/IP, IPv6, and IPv4 and IPv6 LAN Addressing, which is all very excellent. But, if you want your computers to talk to each other, then you must know about routing.

Simple Test Lab

Now we have a good use for the ip command. ip assigns multiple addresses to network interfaces, which is totally groovy because you can practice setting up and testing routing without needing a herd of computers. All you need to get started is two computers connected to the same Ethernet switch. In the following examples, I’m using a desktop PC and a laptop connected to an old 8-port gigabit switch. Yes, I know, there are newer switches that are so fast they reach the future before we do. Any Ethernet switch you want to use is fine.

If you are using Network Manager, it will go looking for a DHCP server as soon as you plug in your Ethernet cables, so don’t run a DHCP server on your test lab.

Assigning and Removing IP Addresses

First check your network interface names. The output is snipped for clarity:

$ ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP>
[...]
2: enp0s25: <BROADCAST,MULTICAST,UP,LOWER_UP>
[...]
3: wlx9cefd5fe8f20: <BROADCAST,MULTICAST,UP,LOWER_UP> 
[...]

My Ubuntu system likes to give network interfaces strange names. enp0s25 is my wired Ethernet interface. Let’s give it an IPv6 address from the range reserved for examples and documentation (see Practical Networking for Linux Admins: Real IPv6):

$ sudo ip -6 addr add 2001:0db8::1/64 dev enp0s25

Let us admire our new address (again with trimmed output), and note how the link-local address is assigned automatically:

$ ip addr show
2: enp0s25: <BROADCAST,MULTICAST,UP,LOWER_UP>
    link/ether d0:50:99:82:e7:2b brd ff:ff:ff:ff:ff:ff
    inet6 2001:db8::1/64 scope global 
    inet6 fe80::d250:99ff:fe82:e72b/64 scope link 

Assign an address to the second host:

$ sudo ip -6 addr add 2001:0db8::2/64 dev eth0

Now the two hosts can ping each other. Remember that ping6 needs the -I option to name the network interface when you ping link-local addresses; it is harmless for global addresses too, so I use it throughout:

$ ping6 -I enp0s25 2001:db8::2
PING 2001:db8::2(2001:db8::2) from 2001:db8::1 enp0s25: 56 data bytes
64 bytes from 2001:db8::2: icmp_seq=1 ttl=64 time=1.01 ms

You can also ping the link-local addresses:

$ ping6 -I enp0s25 fe80::ea9a:8fff:fe67:190d
PING fe80::ea9a:8fff:fe67:190d(fe80::ea9a:8fff:fe67:190d)
from fe80::d250:99ff:fe82:e72b enp0s25: 56 data bytes
64 bytes from fe80::ea9a:8fff:fe67:190d: icmp_seq=1 ttl=64 time=0.531 ms

link/ether is the MAC address. Note the two scope values, global and link: a global address is routable, while a link-local address operates only within a single network segment. In IPv4 networks this segment is called a broadcast domain, which contains all hosts within a single logical network segment. Unlike IPv4, IPv6 has no broadcast address: where IPv4’s three address types are unicast, multicast, and broadcast, IPv6 uses unicast, multicast, and anycast. As the excellent TCP/IP Guide says:

“Broadcast addressing as a distinct addressing method is gone in IPv6. Broadcast functionality is implemented using multicast addressing to groups of devices.”
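If you want to check these scopes without squinting at prefixes, Python's standard ipaddress module (an aside of my own, not part of the ip suite) classifies them for you:

```python
import ipaddress

# Classify the addresses we have seen so far (stdlib only; the
# addresses are the ones from the examples above).
link_local = ipaddress.ip_address("fe80::d250:99ff:fe82:e72b")
documentation = ipaddress.ip_address("2001:db8::1")
all_nodes = ipaddress.ip_address("ff02::1")  # the all-nodes multicast group

print(link_local.is_link_local)     # True
print(documentation.is_link_local)  # False
print(all_nodes.is_multicast)       # True: "broadcast" duties fall to multicast
```

Pinging ff02::1 is, in effect, what a broadcast ping was in IPv4: every node on the link belongs to that multicast group.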

Delete an address this way:

$ sudo ip -6 addr del 2001:0db8::1/64 dev enp0s25

Create Route

Now we’ll add a second address to one of our test machines that’s in a different subnet. In the 2001:db8::/64 network, the first four 16-bit groups define the network, and the last four are the host address. The “2” in the host address on my second test machine helps me remember which machine is which, so I’ll recycle it for the new subnet:
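To double-check the network/host split, Python's stdlib ipaddress module (my own illustration, not part of the tutorial's toolchain) slices the address the same way:

```python
import ipaddress

# A /64 splits a 128-bit IPv6 address exactly in half.
iface = ipaddress.ip_interface("2001:db8:0:1::2/64")

print(iface.network)            # 2001:db8:0:1::/64 -- first four 16-bit groups
print(iface.ip)                 # 2001:db8:0:1::2   -- the "2" is the host part
print(iface.network.prefixlen)  # 64, out of 128 bits, belong to the network
```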

$ sudo ip -6 addr add 2001:db8:0:1::2/64 dev eth0

I ping the new address from the first test machine, to no avail:

$ ping6 -I enp0s25 2001:db8:0:1::2
connect: Network is unreachable
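The failure is easy to explain: the destination does not fall inside any prefix the first machine has a route for. A quick check with Python's stdlib ipaddress module (my own illustration) makes that concrete:

```python
import ipaddress

dest = ipaddress.ip_address("2001:db8:0:1::2")
on_link = ipaddress.ip_network("2001:db8::/64")        # prefix of our first address
new_route = ipaddress.ip_network("2001:db8:0:1::/64")  # the route we will add next

print(dest in on_link)    # False: nothing in the routing table covers it
print(dest in new_route)  # True: once this route exists, the ping succeeds
```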

So, I’ll create a route to the new subnet. Run ip -6 route show first to see your existing routing table, and save a copy for reference. Then create the new route:

$ sudo ip -6 route add 2001:db8:0:1::0/64 dev enp0s25

Now look what ping does:

$ ping6 -I enp0s25 2001:db8:0:1::2
PING 2001:db8:0:1::2(2001:db8:0:1::2) from 2001:db8::1 enp0s25: 56 data bytes
64 bytes from 2001:db8:0:1::2: icmp_seq=1 ttl=64 time=0.583 ms

Success! We are networking nerds deluxe! Just to make sure, delete the route and try ping again:

$ sudo ip -6 route del 2001:db8:0:1::0/64 dev enp0s25
$ ping6 -I enp0s25 2001:db8:0:1::2
connect: Network is unreachable

None of these configurations survive a reboot. This is good news when you want to wipe everything and start over, but not so good news when you want to keep them. Every Linux distribution has its own special way of configuring IP addresses and static routes. If you’re running Network Manager you can configure everything with it. You can also push all of this to clients with a DHCP server, such as the excellent Dnsmasq, which provides name services, router advertisement, and network booting. All of which are large topics for another day. Until then, be well and enjoy being an IPv6 guru.
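On an Ubuntu system like my test machine, Netplan is one way to make the address and route stick. This is only a sketch under my own assumptions: the file name is made up, the interface name is the one from the examples above, and your distribution may use a different mechanism entirely:

```yaml
# /etc/netplan/99-ipv6-lab.yaml  (hypothetical file name)
network:
  version: 2
  ethernets:
    enp0s25:
      addresses:
        - "2001:db8::1/64"
      routes:
        # on-link route to the second subnet, no gateway needed
        - to: "2001:db8:0:1::/64"
          scope: link
```

After saving the file, sudo netplan apply would load it; check the result with ip addr show and ip -6 route show.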

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

The Birth and Rise of Ethernet: A History

Today, no company would consider using anything except Ethernet for its wired local-area network. But it wasn’t always that way. Steven Vaughan-Nichols tracks the history of Ethernet, and its once-upon-a-time networking protocol competitors.

Nowadays, we take Ethernet for granted. We plug a cable jack into the wall or a switch and we get the network. What’s to think about?

It didn’t start that way. In the 1960s and 1970s, networks were ad hoc hodgepodges of technologies with little rhyme and less reason. But then Robert “Bob” Metcalfe was asked to create a local-area network (LAN) for Xerox’s Palo Alto Research Center (PARC). His creation, Ethernet, changed everything.

Read more at HPE

What Is Intent-Based Networking?

Cisco this week jumped head first into the intent-based networking market, saying the technology that uses machine learning and advanced automation to control networks could be a major shift in how networks are managed.

But what exactly is intent-based networking?

Gartner Research Vice President Andrew Lerner says intent-based networking systems (IBNS) are not new, and in fact the ideas behind IBNS have been around for years. What’s new is that machine learning algorithms have advanced to a point where IBNS could become a reality soon. Fundamentally, an IBNS is the idea of a network administrator defining a desired state of the network, and having automated network orchestration software implement those policies.

Read more at Network World

Oracle Debuts Three New Open-Source Container Tools

Oracle is expanding its container efforts with the official public debut of three new open-source utilities designed to help improve application container security and performance. The tools include the Smith secure container builder, Crashcart container debugging tool and the Railcar container runtime.

The new Oracle container tools were publicly revealed by Oracle cloud development architect Vish (Ishaya) Abrams, who is a well-known figure in the OpenStack cloud community. Prior to joining Oracle in April 2015, Abrams had served as the project technical leader of the OpenStack Nova compute project, which supports multiple virtualization technologies.

Read more at eWeek

Hijacking Bitcoin: Routing Attacks on Cryptocurrencies

Hijacking Bitcoin: routing attacks on cryptocurrencies, Apostolaki et al., IEEE Security and Privacy 2017

The Bitcoin network has more than 6,000 nodes, responsible for up to 300,000 daily transactions and 16 million bitcoins valued at roughly $17B.

Given the amount of money at stake, Bitcoin is an obvious target for attackers.

This paper introduces a new class of routing attacks on the network. Such attacks aren’t supposed to be feasible, since Bitcoin is a vast peer-to-peer network that uses random flooding. However, look a little closer and you’ll find:

  1. The Internet infrastructure itself is vulnerable to routing manipulation (BGP hijacks), and
  2. Bitcoin is really quite centralised when viewed from a routing perspective.

Read more at The Morning Paper

Dynamic Tracing in Linux User and Kernel Space

Have you ever realized that you forgot to insert debug prints at a few points in your code, so you won’t know whether the CPU hits a particular line of code until you recompile with debug statements? Don’t worry; there’s an easier solution: insert dynamic probe points at different locations in your code, down to individual assembly instructions.

For advanced users, the kernel’s Documentation/trace directory and man perf provide a lot of information about the different kernel and user-space tracing mechanisms; however, average users just want a few simple steps and an example to get started quickly. That’s where this article will help.

Read more at OpenSource.com

Linux Kernel 4.12: “One of The Bigger Releases”

Linus Torvalds released Linux kernel 4.12 on Sunday, July 2, remarking that it was “one of the bigger releases historically.” Indeed, at just shy of 12,000 commits, only 4.9 was significantly larger, and that was because Greg Kroah-Hartman declared it an LTS release.

Despite Torvalds’ unassuming comment about how there’s “nothing particularly odd going on” in this release, there are definitely many things going on. Apart from the numerous commits, this kernel has also received an abnormally large number of patches. About 50 percent of these patches come from the work being carried out on supporting AMD’s high-end Vega series of cards, which go on sale later this year.

Getting support for hardware that isn’t even available in shops yet is exciting, but even more so is the work being carried out on supporting USB-C natively. In case you are not aware of these nifty interfaces, USB-C ports are an ultrabook designer’s dream. The protocol itself allows users to plug in a cable however they choose — no more fumbling to get it right side up! But, more importantly, USB-C allows for a wider range of functionalities than prior versions of USB. You can, for example, deliver power over USB-C to charge a mobile device from your laptop, yes, but you can also have your laptop receive power. This means you could charge your laptop over the same port and never have to bother with a non-standard charging port again.

Not only that, but a USB-C port can also carry video and drive an external monitor, much as an HDMI port does. And, of course, USB-C still supports mass storage devices, mice, keyboards, cameras, microphones, printers, and so forth. With a couple of these ports on your machine, you are covered for almost everything.

The support of USB-C in the kernel is not easy. Apart from knowing what format of data must be sent over the wire, you have the added complication of determining which way charging is happening: Is the power flowing out through the USB-C to a device? Or is it flowing the other way round, charging your laptop? All these things must be “negotiated” by the devices at either end of the cable and, on the kernel end, we now have a USB Type-C Port Manager driver, or TCPM for short. As Phoronix explains, “[t]his driver serves as a state machine while other USB Type-C drivers are responsible for the rest of the functionality.”

Other things to look forward to in 4.12

  • A new BFQ I/O scheduler. Budget Fair Queueing, or BFQ, is a new I/O scheduler that makes desktop applications more responsive. By reducing I/O latency, it also helps cut jitter and stutter when streaming audio or video, and speeds up the loading of web pages. Overall, BFQ should make life more pleasant for end users. The new Kyber I/O scheduler, on the other hand, is aimed at fast block devices, such as SSDs.

  • Support for ARM 64-bit devices keeps growing. Both the Hwacom AmazeTV set-top box and the Orange Pi PC 2 board are now supported, among others, and so is the old Motorola DROID 4 smartphone.

  • And, speaking of alternative architectures, the POWER9 chips have received a boost and can now address 512TB of virtual address space. Should be enough for gaming, methinks.

For more information regarding Linux’s Kernel 4.12, check out the reports at Kernel Newbies and Phoronix.

Learn the Basics of Docker Compose

In this preview of the Containers for Developers and Quality Assurance (LFS254) training course from The Linux Foundation, we’ve covered Docker installation, introduced Docker Machine, performed basic Docker container and image operations, and looked at Dockerfiles and Docker Volumes.

This final article in the series looks at Docker Compose, which is a tool you can use to create multi-container applications with just one command. If you are using Docker for Mac or Windows, or you install the Docker Toolbox, then Docker Compose will be available by default. If not, you can download it manually.

To try out WordPress, for example, let’s create a folder called wordpress and, in that folder, a file called docker-compose.yaml. We will expose the wordpress container on port 8000 of the host system.
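The course builds this file up on video; as a rough sketch of my own (the image tags, service names, and passwords are placeholders, not the course's exact file), a minimal docker-compose.yaml mapping host port 8000 might look like this:

```yaml
version: '2'
services:
  wordpress:
    image: wordpress
    ports:
      - "8000:80"   # host port 8000 -> container port 80
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_PASSWORD: example
  db:
    image: mariadb
    environment:
      MYSQL_ROOT_PASSWORD: example
```

With a file along these lines in place, the application is reachable on port 8000 of the Docker host.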

When we start an application with Docker Compose, it creates a user-defined network to which it attaches the application’s containers. The containers communicate over that network. Because we have configured Docker Machine to connect to our dockerhost, Docker Compose will use it as well.

Now, with the docker-compose up command, we can deploy the application. With the docker-compose ps command, we can list the containers created by Docker Compose, and with docker-compose down, we can stop and remove them. This also removes the network associated with the application. To delete the associated volume as well, pass the -v option to docker-compose down.

Want to learn more? Access all the free sample chapter videos now!

This online course is presented almost entirely on video, and the material is prepared and presented by Neependra Khare (@neependra), Founder and Principal Consultant at CloudYuga, Docker Captain, and author of the Docker Cookbook.

Balancing Competing Interests in Software Projects

The typical software shop is both overcommitted and poorly coordinated. These conditions form a vicious cycle: a lack of effective communication leads to inefficient work, which in turn leads to a permanent state of being too busy to communicate with one another.

The traditional remedy to this problem is something along the lines of “do less stuff, better.” When it can be pulled off, it is super effective! But in most places, the idea of waving a magic “do less” wand tends to be rejected out of hand, or at least kicked down the road to be considered in quieter times that never come.

If you find yourself in a situation where your team can’t immediately solve its overcommitment problems, that’s a sign that it’s time to focus on improving coordination. Recognizing that busy people generally don’t have the time or patience for revolutionary transformations, your goal is to look for small adjustments here and there that when taken in aggregate lead to a massive reduction in friction.

Read more at O’Reilly

HTTPS Certificate Revocation Is Broken, and It’s Time for Some New Tools

We have a little problem on the web right now and I can only see it becoming a larger concern as time goes by: more and more sites are obtaining certificates, vitally important documents needed to deploy HTTPS, but we have no way of protecting ourselves when things go wrong.

Certificates

We’re currently seeing a bit of a gold rush for certificates on the Web as more and more sites deploy HTTPS. Beyond the obvious security and privacy benefits of HTTPS, there are quite a few reasons you might want to consider moving to a secure connection that I outline in my article Still think you don’t need HTTPS?. Commonly referred to as “SSL certificates” or “HTTPS certificates”, the wider Internet is obtaining them at a rate we’ve never seen before in the history of the web. Every day I crawl the top one million sites on the Web and analyze various aspects of their security, and every six months I publish a report. You can see the reports here, but the main result to focus on right now is the adoption of HTTPS.

Read more at Ars Technica