
Effective Microservices Architecture with Event-Driven Design

There’s no doubt, in the IT world, microservices are sexy. But just because you find something cool and attractive doesn’t mean it’s good for you. And it doesn’t mean you know how to use it properly.

In fact, microservices, in particular, aren’t easy. Yes, microservices allow different team members to work on different parts of the code at different speeds. But this wonderful autonomy often leaves folks even more siloed, not knowing what the others are doing before throwing it over the wall to the testing and documentation teams, according to Red Hat’s Principal Middleware Architect Christian Posta.

In order to gain true microservices independence, Posta says that we need to shed our dependencies, such as the assumption that microservices mean each service controls its own database.

Read more at The New Stack

Best Linux Distro: Linux Experts Rate Distros

Selecting the best Linux distro is a matter of personal choice, based on your daily work flow. Two Linux experts list their personal picks for best distro and discuss each distro’s merits and challenges.

Bruce Byfield’s Picks

The best Linux distro is always subjective. My own list of the best Linux distros depends on my current interests. One or two are always on my list, but the others are usually ones that boast something different. Since I am regularly watching for new distros and developing new interests, this year’s list has only some overlap with last year’s list.

To be honest, I follow Linux desktops more closely than Linux distributions. To me, desktop environments are where the innovation occurs. In fact, I would argue that when a distribution calls attention to itself, something is probably wrong.

All the same, I have my favorite Linux distros. They are not necessarily the most popular – that would be bland – but they are distributions that, one way or the other, are influential or fill a niche extremely well.

Read more at Datamation

A Tour of the Kubernetes Source Code Part One: From kubectl to API Server

Kubernetes continues to experience explosive growth, and software developers who are able to understand and contribute to the Kubernetes code base are in high demand. Learning the Kubernetes code base is not easy. Kubernetes is written in Go, a fairly new programming language, and it has a large amount of source code. In this multi-part series of articles, I will dig in and explain key portions of the Kubernetes code base and also explain the techniques I have used to help me understand the code. My goal is to provide a set of articles that will enable software developers new to Kubernetes to more quickly learn the Kubernetes source code. In this first article, I will cover the flow through the code from running a simple kubectl command to sending a REST call to the API Server. Before using this article to dig into the Kubernetes code, I recommend you read an outstanding high-level overview of the Kubernetes architecture by Julia Evans.

Read more at IBM Open Tech

Let’s Encrypt ACME Certificate Protocol Set for Standardization

The open-source Let’s Encrypt project has been an innovating force on the security landscape over the last several years, providing millions of free SSL/TLS certificates to help secure web traffic. Aside from the disruptive model of providing certificates for free, Let’s Encrypt has also helped to pioneer new technology to help manage and deliver certificates as well, including the Automated Certificate Management Environment (ACME).

At this point in 2017, ACME is no longer just a Let’s Encrypt effort; it is being standardized by the Internet Engineering Task Force (IETF). The ACME protocol can be used by a Certificate Authority (CA) to automate the process of verification and certificate issuance.

Read more at eWeek

Practical Networking for Linux Admins: Real IPv6

When last we met, we reviewed essential TCP/IP basics for Linux admins in Practical Networking for Linux Admins: TCP/IP. Here, we will review network and host addressing and find out whatever happened to IPv6.

IPv4 Ran Out Already

Once upon a time, alarms were sounding everywhere: We are running out of IPv4 addresses! Run in circles, scream and shout! So, what happened? We ran out. IPv4 Address Status at ARIN says “ARIN’s free pool of IPv4 address space was depleted on 24 September 2015. As a result, we no longer can fulfill requests for IPv4 addresses unless you meet certain policy requirements…” Most of us get our IPv4 addresses from our Internet service providers (ISPs), so our ISPs are duking it out for new address blocks.

What do we do about it? Start with bitter laughter, because service providers and device manufacturers are still not well-prepared, and IPv6 support is incomplete despite having more than a decade to implement it. This is not surprising, given how many businesses think computing is like office furniture: buy it once and use it forever (except, of course, for the executive team, who get all the shiny new doodads while us worker bees get stuck with leftovers). Google, who sees all and mines all, has some interesting graphs on IPv6 adoption. Overall adoption is about 18 percent, with the United States at 34 percent and Belgium leading at 48 percent.

What can we Linux nerds do about this? Linux, of course, has had IPv6 support for ages. The first stop is your ISP; visit Test IPv6 to learn their level of IPv6 support. If they are IPv6-ready, they will assign you a block of addresses, and then you can spend many fun hours roaming the Internet in search of sites that can be reached over IPv6.

IPv6 Addressing

IPv6 addresses are 128-bit, which means we have a pool of 2^128 addresses to use. That is 340,282,366,920,938,463,463,374,607,431,768,211,456, or 340 undecillion, 282 decillion, 366 nonillion, 920 octillion, 938 septillion, 463 sextillion, 463 quintillion, 374 quadrillion, 607 trillion, 431 billion, 768 million, 211 thousand and 456 addresses. Which should be just about enough for the Internet of Insecure Intrusive Gratuitously Connected Things.

In contrast, 32-bit IPv4 supplies 2^32 addresses, or just under 4.3 billion. Network address translation (NAT) is the only thing that has kept IPv4 alive this long. NAT is why most home and small businesses get by with one public IPv4 address serving large private LANs. NAT forwards and rewrites your LAN addresses so that lonely public address can serve multitudes of hosts in private address spaces. It’s a clever hack, but it adds complexity to firewall rules and services, and in my not-quite-humble opinion that ingenuity would have been better invested in moving forward instead of clinging to inadequate legacies. Of course, that’s a social problem rather than a technical problem, and social problems are the most challenging.
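The size gap between the two pools is easy to check for yourself. This short Python snippet is a quick sanity check of the arithmetic above, not something from the original article:

```python
# Sizes of the IPv4 and IPv6 address pools.
ipv4_pool = 2 ** 32
ipv6_pool = 2 ** 128

print(f"IPv4: {ipv4_pool:,}")  # just under 4.3 billion
print(f"IPv6: {ipv6_pool:,}")  # the 39-digit undecillion monster quoted above

# Every single IPv4 address could be handed 2^96 IPv6 addresses
# without denting the pool.
ratio = ipv6_pool // ipv4_pool
print(f"Ratio: 2^{ratio.bit_length() - 1}")
```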

IPv6 addresses are long: eight groups of four hexadecimal digits. This is the IPv6 loopback address, the counterpart of IPv4’s 127.0.0.1:

0000:0000:0000:0000:0000:0000:0000:0001

Fortunately, there are shortcuts. Any quad of zeroes can be condensed into a single zero, like this:

0:0:0:0:0:0:0:1

You can shorten this even further, as one unbroken run of zero groups can be replaced with a double colon (this shortcut may be used only once per address), so the loopback address becomes:

::1

Which you can see on your faithful Linux system with ifconfig:

$ ifconfig lo
lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host

I know, we’re supposed to use the ip command because ifconfig is deprecated. When ip formats its output as readably as ifconfig then I will consider it.
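If you want to check the compression rules without squinting, Python’s standard ipaddress module applies them for you. A quick illustration, not part of the original article:

```python
import ipaddress

# The loopback address, in any of its spellings, parses to the same object.
loop = ipaddress.ip_address("0000:0000:0000:0000:0000:0000:0000:0001")
print(loop)           # ::1  (fully compressed form)
print(loop.exploded)  # 0000:0000:0000:0000:0000:0000:0000:0001

# The :: shortcut may appear only once per address, so only the
# longest run of zero groups gets collapsed:
addr = ipaddress.ip_address("2001:0db8:0000:0000:0000:0001:0000:0001")
print(addr)           # 2001:db8::1:0:1
```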

Be Quiet and Drink Your CIDR

Classless Inter-Domain Routing (CIDR) defines how many addresses are in a network block. For the loopback address, ::1/128, that is a single address because it uses all 128 bits. CIDR notation is described as a prefix, which is confusing because it looks like a suffix. But it really is a prefix, because it tells you the bit length of a common prefix of bits, which defines a single block of addresses. Then you have a subnet, and finally the host portion of the address. 2001:0db8::/64 expands to this:

2001:0db8:0000:0000:0000:0000:0000:0000
|____________| |__| |_________________|
  network ID  subnet  interface address

When your ISP gives you a block of addresses, they control the network ID and you control the rest. With a 48-bit network ID, the 16-bit subnet field gives you 65,536 subnets, and each /64 subnet holds 18,446,744,073,709,551,616 individual addresses. Mediawiki has a great page with charts that explains all of this, and how allocations are managed, at Range blocks/IPv6.
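You can verify those numbers with Python’s ipaddress module; this little sketch (mine, not from the article) does the CIDR math for you:

```python
import ipaddress

# A /64 as in the example above: 64 prefix bits leave 64 host bits.
net = ipaddress.ip_network("2001:db8::/64")
print(net.num_addresses)  # 18446744073709551616, i.e. 2**64

# A /48 allocation leaves a 16-bit subnet field, so it splits
# into 2**16 = 65,536 possible /64 subnets.
alloc = ipaddress.ip_network("2001:db8::/48")
print(sum(1 for _ in alloc.subnets(new_prefix=64)))  # 65536
```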

2000::/3 is the global unicast range, or public routable addresses. Do not use these for experimentation without blocking them from leaving your LAN. Better yet, don’t use them and move on to the next paragraph.

The 2001:0DB8::/32 block is reserved for documentation and examples, so use these for testing. This example assigns the first available address to interface enp0s25, which is what Ubuntu calls my eth0 interface:

# ip -6 addr add 2001:0db8::1/64 dev enp0s25
$ ifconfig enp0s25
enp0s25   Link encap:Ethernet  HWaddr d0:50:99:82:e7:2b  
          inet6 addr: 2001:db8::1/64 Scope:Global

Increment up from :1 in hexadecimal: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, a, b, c, d, e, f, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 1a, 1b, and so on.
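If counting in hexadecimal by hand gets old, note that Python’s ipaddress objects support plain integer arithmetic, so the sequence above is just addition. A small illustration of my own:

```python
import ipaddress

# Counting up in hex from 2001:db8::1 is just integer addition.
base = ipaddress.ip_address("2001:db8::1")
for i in range(0x1f):
    print(base + i)
# ...prints 2001:db8::1 through 2001:db8::1f, stepping 9 -> a
# and f -> 10 exactly as the hexadecimal sequence above does.
```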

You can add as many addresses as you like to a single interface. You can ping them from the host they’re on, but not from other hosts on your LAN because you need a router. Next week, we’ll set up routing.

IPcalc

All of these fine hexadecimal addresses are converted from binary. Where does the binary come from? The breath of angels. Or maybe the tears of unicorns, I forget. At any rate, you’re welcome to work these out the hard way, or install ipcalc on your Linux machine, or use any of the nice web-based IP calculators. Don’t be too proud to use these because they’re lifesavers, especially for routing, as we’ll see next week.
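If you would rather skip both the angels and the web calculators, the same ipaddress module can serve as a tiny stand-in for ipcalc; this sketch (my addition, not the article’s) exposes the 128 raw bits behind the hex:

```python
import ipaddress

# Show the 128-bit binary expansion behind a hexadecimal address.
addr = ipaddress.ip_address("2001:db8::1")
bits = f"{int(addr):0128b}"
print(bits)
# The first group, 0x2001, is the first 16 bits:
print(bits[:16])  # 0010000000000001
```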

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

How to Install Linux on a Chromebook (And Why You Should)

Chromebooks are one of the most secure devices you can give a non-technical end user, and at a price point few can argue with, but that security comes with a privacy trade off: you have to trust Google, which is part of the NSA’s Prism programme, with your data in the cloud.

Even those who put their faith in the company’s rusty “don’t be evil” mantra may find Chromebook functionality limiting—if you want more than Google services, Netflix, some other Web apps, and maybe the Android app store, then you’re out of luck.

Geeky users willing to engage in some entry-level hackery, however, can install Linux on their Chromebook and unleash the Power of Torvalds™.

Read more at Ars Technica

DNS Infrastructure at GitHub

At GitHub we recently revamped how we do DNS from the ground up. This included both how we interact with external DNS providers and how we serve records internally to our hosts. To do this, we had to design and build a new DNS infrastructure that could scale with GitHub’s growth and across many data centers.

Previously, GitHub’s DNS infrastructure was fairly simple and straightforward. It included a local, forwarding-only DNS cache on every server and a pair of hosts that acted as both caches and authorities used by all these hosts. These hosts were available both on the internal network and on the public internet. We configured zone stubs in the caching daemon to direct queries locally rather than recurse on the internet. We also had NS records set up at our DNS providers that pointed specific internal zones to the public IPs of this pair of hosts for queries external to our network.

Read more at GitHub

Encryption Technology in Your Code Impacts Export Requirements

US export laws require companies to declare what encryption technology is used in any software to be exported. The use of open source makes complying with these regulations a tricky process.

US Export Requirements

The regulations on US software exports come from the US Commerce Department’s Bureau of Industry and Security (BIS). The specific regulations are called Export Administration Regulations (EARs). The restriction of encryption is based in national defense concerns: we don’t want bad guys to be able to hack into our secret communications, nor prevent us from cracking into theirs. 

The specifics of these regulations are complex and belong in the realm of experts. The basics are that you need to tell the BIS what encryption is in any software you export, though it restricts only strong cryptography, with particular sensitivity to a small number of bad actor nation states. The agency is serious about the requirements and has been known to enforce them, notably fining Wind River $750,000 in 2014 (despite Wind River’s voluntarily disclosing the issue they had discovered themselves).  

Read more at Black Duck

Why Infrakit & LinuxKit Are Better Together for Building Immutable Infrastructure?

Let us accept the fact: “Managing Docker on different infrastructure is still difficult and not portable.” While working on Docker for Mac, AWS, GCP & Azure, the Docker team realized the need for a standard way to create and manage infrastructure state that was portable across any type of infrastructure, from different cloud providers to on-prem. One serious challenge is that each vendor has differentiated IP invested in how they handle certain aspects of their cloud infrastructure. It is not enough to just provision n-number of servers; what IT ops teams need is a simple and consistent way to declare the number of servers, what size they should be, and what sort of base software configuration is required.

Also, in the case of server failures (especially unplanned ones), that sudden change needs to be reconciled against the desired state to ensure that any required servers are re-provisioned with the necessary configuration. The Docker team introduced and open sourced “InfraKit” last year to solve these problems and to provide the ability to create a self-healing infrastructure for distributed systems.

Read more at Collabnix

Viewing Linux Output in Columns

The Linux column command makes it easy to display data in a columnar format — often making it easier to view, digest, or incorporate into a report. While column is a command that’s simple to use, it has some very useful options that are worth considering. In the examples in this post, you will get a feel for how the command works and how you can get it to format data in the most useful ways.

By default, the column command will ignore blank lines in the input data. When displaying data in multiple columns, it will organize the content by filling the left column first and then moving to the right. For example, a file containing numbers 1 to 12 might be displayed in this order:
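That column-first fill order can be sketched in a few lines of Python. This is an illustration of the ordering only, not the column command itself:

```python
def columnate(items, ncols):
    """Arrange items column-first, the way column fills its output."""
    nrows = -(-len(items) // ncols)  # ceiling division
    cols = [items[i * nrows:(i + 1) * nrows] for i in range(ncols)]
    # Read each row across the columns.
    rows = ["\t".join(col[r] for col in cols if r < len(col))
            for r in range(nrows)]
    return "\n".join(rows)

print(columnate([str(n) for n in range(1, 13)], 3))
# Rows read across: 1 5 9 / 2 6 10 / 3 7 11 / 4 8 12 --
# each column was filled top to bottom before moving right.
```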

Read more at Network World