
DNS Infrastructure at GitHub

At GitHub we recently revamped how we do DNS from the ground up. This included both how we interact with external DNS providers and how we serve records internally to our hosts. To do this, we had to design and build a new DNS infrastructure that could scale with GitHub’s growth and across many data centers.

Previously, GitHub’s DNS infrastructure was fairly simple and straightforward. It included a local, forwarding-only DNS cache on every server and a pair of hosts that acted as both caches and authorities for all of those servers. These hosts were available on both the internal network and the public internet. We configured zone stubs in the caching daemon to direct queries locally rather than recurse on the internet. We also had NS records set up at our DNS providers that pointed specific internal zones to the public IPs of this pair of hosts, for queries originating outside our network.
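
The article doesn’t name the caching daemon, but as a rough illustration of the idea, a stub zone in an Unbound-style cache looks something like this (zone name and addresses are hypothetical):

    # unbound.conf sketch: answer queries for an internal zone from the
    # local authoritative pair instead of recursing out to the internet
    stub-zone:
        name: "internal.example.com."
        stub-addr: 10.0.0.2
        stub-addr: 10.0.0.3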

Read more at GitHub

Encryption Technology in Your Code Impacts Export Requirements

US export laws require companies to declare what encryption technology is used in any software to be exported. The use of open source makes complying with these regulations a tricky process.

US Export Requirements

The regulations on US software exports come from the US Commerce Department’s Bureau of Industry and Security (BIS). The specific regulations are called the Export Administration Regulations (EAR). The restriction of encryption is based on national defense concerns: we don’t want bad actors to be able to hack into our secret communications, nor to prevent us from cracking into theirs.

The specifics of these regulations are complex and belong in the realm of experts. The basics are that you need to tell the BIS what encryption is in any software you export, though it restricts only strong cryptography, with particular sensitivity to a small number of bad-actor nation states. The agency is serious about the requirements and has been known to enforce them, notably fining Wind River $750,000 in 2014 (despite Wind River having voluntarily disclosed an issue it had discovered itself).

Read more at Black Duck

Why Are InfraKit & LinuxKit Better Together for Building Immutable Infrastructure?

Let’s accept the fact: managing Docker on different infrastructures is still difficult and not portable. While working on Docker for Mac, AWS, GCP, and Azure, the Docker team realized the need for a standard way to create and manage infrastructure state that was portable across any type of infrastructure, from different cloud providers to on-prem. One serious challenge is that each vendor has differentiated IP invested in how it handles certain aspects of its cloud infrastructure. It is not enough to just provision n servers; what IT ops teams need is a simple and consistent way to declare the number of servers, what size they should be, and what sort of base software configuration is required.

Also, in the case of server failures (especially unplanned ones), that sudden change needs to be reconciled against the desired state to ensure that any required servers are re-provisioned with the necessary configuration. The Docker team introduced and open sourced “InfraKit” last year to solve these problems and to provide the ability to create a self-healing infrastructure for distributed systems.
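
As a sketch of the declarative style involved, an InfraKit group config pairs an instance plugin with a flavor plugin and a target size; the engine then continuously reconciles reality against this description. (The keys below are recalled from the project’s early examples and may not match the current schema.)

    {
      "ID": "workers",
      "Properties": {
        "Allocation": { "Size": 5 },
        "Instance": { "Plugin": "instance-aws", "Properties": {} },
        "Flavor": { "Plugin": "flavor-vanilla", "Properties": {} }
      }
    }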

Read more at Collabnix

Viewing Linux Output in Columns

The Linux column command makes it easy to display data in a columnar format — often making it easier to view, digest, or incorporate into a report. While column is a command that’s simple to use, it has some very useful options that are worth considering. In the examples in this post, you will get a feel for how the command works and how you can get it to format data in the most useful ways.

By default, the column command will ignore blank lines in the input data. When displaying data in multiple columns, it will organize the content by filling the left column first and then moving to the right. For example, a file containing the numbers 1 to 12 might be displayed in this order:
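
A quick way to see this for yourself is to pipe a sequence of numbers through column with a constrained output width; the -c option sets the output width in characters, and the exact spacing depends on your terminal:

    $ seq 1 12 | column -c 32
    1       4       7       10
    2       5       8       11
    3       6       9       12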

Read more at Network World

Cloud Native Apps and Security: The Case for CoreOS Rkt and Xen

CoreOS’s rkt started at the beginning of 2014 as a security-focused alternative to Docker. The project aimed to verify the signatures of cloud-native apps by default; the intention was to guarantee the integrity of the apps. It also stepped away from the central-daemon design of Docker, which requires root privileges for all operations. By contrast, the rkt process is short-lived, limiting the chances of it being exploited, and some rkt commands can be executed as an unprivileged user.

The project has come a long way since it was conceived. It is stable, fully featured, and it supports a variety of ways to fetch and start cloud-native apps with security being top of mind. For example, it can download apps from the Docker registry and use virtualization to run them.
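
For example, fetching an image from the Docker registry looks like this; Docker images carry no ACI signature, so verification has to be explicitly waived:

    # pull an image from the Docker registry into rkt’s local store
    $ rkt fetch --insecure-options=image docker://redis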

Rkt champions open standards and supports the Open Container Initiative image and runtime specifications. A couple of months ago, it was accepted into the Cloud Native Computing Foundation, becoming a member of the same family as Kubernetes, the popular container orchestration service. Rkt already has excellent support for Kubernetes, and it will strengthen further now that they live under the same roof.

Why Rkt Works So Well: It’s the Architecture

The primary benefit of rkt is that it is versatile. Its versatility stems from its architecture, which is based on multiple stages of execution: stage0, stage1, and stage2:

  • Stage0 is responsible for readying the images and is implemented by the rkt executable.

  • Stage1 is in charge of creating isolated environments in which to run cloud-native apps. Stage1s are distributed in the Application Container Image format (also known as ACI), which is a tarball containing a rootfs and a JSON manifest; it is the same format used for cloud-native apps (see the sketch after this list).

  • Stage2 is the environment in which the applications actually run. 
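
Since a stage1 is distributed as an ordinary ACI, it can be inspected with standard tools; a sketch, with a hypothetical file name, shows roughly what is inside:

    # an ACI is just a tarball: a JSON manifest plus a root filesystem
    $ tar tf stage1-coreos.aci | head -3
    manifest
    rootfs/
    rootfs/init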

Taking a lesson from Kubernetes, the basic execution unit of rkt is a pod: a small set of cloud-native apps to be run in a shared context. Typically, a pod is just a couple of apps, for example, a server app and a log parsing app. The log parsing application needs access to the logs of the other app; hence, the two apps share filesystem access.

When the user executes rkt run to start a pod, rkt unpacks the stage1 tarball, sets up the stage2s’ rootfs at a known location under the stage1 filesystem hierarchy, and runs a stage1 application with the right arguments. The stage1 application to run is specified in the stage1 manifest. The stage1 binary takes charge of setting up a fresh new environment, then runs the stage2 applications.

The beauty of this architecture is that stage1s are entirely independent and self-contained. Developers can implement new stage1s easily, and they can be maintained, built, and shipped separately from rkt. Today, rkt supports five in-tree stage1s, plus two out-of-tree ones, including a stage1 based on Linux namespaces, named coreos, and a stage1 based on KVM. End users are given a choice of multiple stage1s with different trade-offs; they can pick the best one for their use cases at runtime, with a simple command line option.
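
For instance, the same app can be launched under the default namespace-based stage1 or isolated in a KVM virtual machine instead, with no other changes (the image name is an example; the flags are standard rkt options):

    # default stage1 (Linux namespaces, the "coreos" flavor)
    $ rkt run --insecure-options=image docker://nginx

    # same pod, but inside a KVM virtual machine
    $ rkt run --stage1-name=coreos.com/rkt/stage1-kvm \
          --insecure-options=image docker://nginx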

The industry has come a long way since the early days of Docker, when many people confused cloud-native apps with Linux namespaces, because they were both called containers. Linux namespaces are only one of many technologies for running applications. Similarly, cloud-native apps are packaged according to the ACI format, which is only one of many ways to package application binaries. The two technologies are orthogonal, and the distinction between them is extremely stark in rkt.

Xen Joins the Party

A couple of weeks ago, the rkt community gained stage1-xen, a new stage1 based on the Xen Project hypervisor. It is still in its very early days, but it is a good proof of concept. Xen Project offers a few unique properties, not just in terms of technology, but also in terms of community and processes.

Xen Project is known as the enabler of many strong isolation and privilege separation architectures. Projects like Qubes OS and OpenXT, aimed at highly secure environments, take the security by compartmentalization approach, using the Xen Project hypervisor to create multiple isolated compartments. Each workload runs on a separate virtual machine. Infrastructure components, such as the network stack and the network drivers, can also be moved into their own separate VMs, named driver domains. Even if an attacker manages to penetrate and assume control of a driver domain, the intruder still does not gain full system access.

The Xen stage1 enables users to take advantage of rkt’s easy-to-use and powerful app management features, together with the Xen Project’s security and isolation properties. It creates a separate, secure-by-default Xen virtual machine for each pod.
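
Selecting it works the same way as with any other stage1; a hypothetical invocation with a locally built stage1-xen image might look like this:

    # point rkt at a locally built Xen stage1 (path is hypothetical)
    $ rkt run --stage1-path=./stage1-xen.aci \
          --insecure-options=image docker://nginx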

Configuring Linux namespaces for isolation is hard; it is a daunting task at any scale. SELinux is the leading technology for the job, but it has a steep learning curve, and end users often disable it. It is hard to believe that the first completion suggestion for “how to disable” on Google Search is actually “selinux.” As companies redesign their software stacks around microservices, they will benefit from a Xen Project solution that is secure and doesn’t need additional settings to increase isolation.

Xen is most often associated with the largest public clouds in production, but the target of this project is not limited to servers. In fact, cloud-native apps are becoming the new way of packaging and distributing applications across all market segments. Stage1-xen will be of great help to developers in embedded environments, such as the automotive industry, where higher security standards are to be upheld. It will allow them to download and deploy new apps to vehicles, keeping them strongly isolated from the critical functions of the car.

Xen and Its Proclivity for Cloud Computing

There are many reasons why Xen is a great hypervisor for cloud-native applications; one of them is that Xen can run anywhere, from the latest and greatest physical servers to the smallest Amazon AWS instances. Let’s start by looking at virtualization technologies to understand how this is possible.

Xen offers two virtual machine types on Intel and AMD processors: PV and HVM guests. The Xen stage1 uses PV guests because they are lightweight and don’t require any hardware emulation or additional processes on the host. They also have short boot times, as they don’t run any guest firmware (i.e., there is no UEFI or SeaBIOS to run inside the virtual machine). They are a good match for cloud-native apps.

A fundamental characteristic of PV guests is that they don’t require hardware virtualization extensions. Intel calls them VT-x, while AMD calls them AMD-V. They were introduced around 2006; all modern x86 machines support them, but most cloud instances do not expose them.

Although both Xen and KVM can create virtual machines with a virtual version of VT-x and AMD-V, cloud providers do not enable this feature. As a consequence, Amazon and Google Cloud instances look like pre-2006 hardware: they have neither VT-x nor AMD-V. Thus, it is not possible to create a nested KVM virtual machine on top of an Amazon AWS instance, but it is possible to start a nested Xen PV guest in the same environment, because PV doesn’t require virtualization extensions. With stage1-xen, rkt users gain the ability to execute cloud-native apps as virtual machines on top of AWS and Google Cloud, the same way they do today with the default coreos stage1.
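
You can check what an instance exposes directly; on a typical 2017-era EC2 or Google Cloud instance the command below prints nothing, yet a Xen PV guest will still boot there:

    # look for the Intel VT-x (vmx) or AMD-V (svm) CPU flags
    $ grep -E -o 'vmx|svm' /proc/cpuinfo | sort -u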

Beyond the Technicalities: The Security Process

Besides the technical features, Xen Project has a strong security track record and a fully transparent security policy that supports responsible disclosure.

Security fixes are easy to track, apply, and deploy. Stable trees are maintained for two years. It is possible to patch production systems before the public disclosure date when a fix doesn’t expose technicalities that could introduce the risk of rediscovery of the vulnerability. Security management is one of the top reasons for choosing the Xen Project hypervisor, which makes it a great fit for a security-focused project like rkt.

Stage1-xen is still in its infancy, and we need your help in making it fully supported and ready for primetime.

If you are interested in cloud-native apps and security, join the community and take the opportunity to shape its future. If you are located in or near Budapest, Hungary, I’ll be talking more about this topic during the Xen Project Developer and Design Summit happening July 11 – 13, 2017.

Hardware Is the New Software

“Hardware is the new software,” Baumann, HotOS ’17

This is a very readable short paper that sheds an interesting light on what’s been happening with the Intel x86 instruction set architecture (ISA) of late. We’re seeing a sharp rise in the number and complexity of extensions, with some interesting implications for systems researchers (and for Intel!). We’re also seeing an increasing use of microcode blurring the role of the ISA as the boundary between hardware and software.

We argue that these extensions are now approaching software-like levels of complexity, yet carry all the attendant drawbacks of a hardware implementation and the slow deployment cycle that implies. We suspect that the current path may be unsustainable, and posit an alternative future with the ultimate goal of decoupling new ISA features from the underlying hardware.

Read more at The Morning Paper

Hallmarks of a Good Technical Leader

I recently sat down with Camille Fournier, the head of Platform Engineering at Two Sigma, to talk about what constitutes great technical leadership and how organizations can foster it. Here are some highlights from our chat.

How do you define technical leadership (as opposed to leadership in general)?

Technical leaders don’t just generically inspire people to do things; they are capable of communicating with technical stakeholders and engineers in language those people understand. Technical leadership is about understanding the technical context in which decisions are being made, and asking questions to help ensure that the right decisions are made given the technical concerns.

Read more at O’Reilly

Mikeal Rogers: Node.js Will Overtake Java Within a Year

Mikeal Rogers has been with the Node.js Foundation since day one. His job as community manager for the foundation involved hands-on oversight of operations, from communications and marketing to conference planning to running board meetings. Rogers’ main contribution, though, is organization and coordination within the Node.js open source community — particularly in scaling governance and processes as the project has accelerated from a dozen early contributors to many hundreds.

Rogers spoke with The New Stack about his experience getting started in the open source world, working at the Node.js Foundation, and becoming a guru of open source governance principles.

First things first: you’re not going to be with the Node.js Foundation for much longer?

That’s right — in a few weeks, I’ll be packing up my desk. I’ve been here since the beginning, and things have really taken good shape. I’m ready to move on to something new, though I haven’t decided yet exactly what or where that will be.

Read more at The New Stack

China Is Driving To 5G And IoT Through Global Collaboration

Telecoms and cloud service providers are gearing up for two of the largest functional changes in decades: the Internet of Things (IoT), which is happening now, and 5G, which is on the horizon. Both will require substantial investments in capital and operations for today’s networks to be competitive and thrive in this connected future. No single vendor can deliver the full stack, and proprietary technologies will not keep pace with these future needs. This transformation will be delivered with virtualized (not physical) technologies, open source, and multiple vendors, relying on significant integration work across the industry to be successful. Chinese players like China Mobile, Huawei, and ZTE are emerging as leaders in this space through something not traditionally expected from the region: global collaboration.

OPNFV is an initiative from the Linux Foundation that is working on the interoperability and integration of these virtual components, referred to as virtual network functions (VNFs), into a network functions virtualization (NFV) platform.

Read more at Forbes

Innovating With Open Source: Microsoft’s Story

This article was sponsored by Microsoft and written by Linux.com.

After much anticipation, LinuxCon + ContainerCon + CloudOpen China is finally here. Some of the world’s top technologists and open source leaders are gathering at the China National Convention Center in Beijing to discover and discuss Linux, containers, cloud technologies, networking, microservices, and more. Attendees will also exchange insights and tips on how to navigate and lead in the open source community, and what better way to do that than to meet in person at the conference?

To preview how some leading companies are using open source and participating in the open source community, Linux.com interviewed several companies attending LinuxCon China. Here, Microsoft discusses how and why it adopted open source, how that strategy helps its customers and the open source community, and also how it helps Microsoft innovate and change how it does business.

We spoke with Gebi Liang, Partner Director of Microsoft’s Cloud and Enterprise China Cloud Incubation Center, to learn more.

Linux.com: What is Microsoft’s open source strategy today?

Gebi Liang: Our company mission is to enable companies to do more. An important step is enabling organizations to work with the tools and platforms they know, love, and have already invested in. Thus, our strategy centers on providing an open and flexible platform that works the way you want and need it to. The platform integrates with leading ecosystems to deliver consistent offerings. But Microsoft has gone even further, releasing technology to support a strong ecosystem through Microsoft’s portfolio of investments, and contributing technology to the open source community as well.

Shaping and deploying this strategy has been a multi-year journey, but each step along the way was significant, including investing in open source contributions across the company and joining key foundations to deepen our partnerships with the community. We also made Linux and OSS run smoothly on Azure, and now one in three VMs on Azure is Linux. Microsoft teams forged key open source partnerships to bring more choice in solutions to Azure, with partners such as Canonical, Red Hat, Pivotal, Docker, Chef, and many more. Plus, we are also bringing many of our own technologies into the open, or making them available on Linux.

Linux.com: What are some of Microsoft’s contributions in open source and as a platform?

Gebi Liang: We are making great progress in enabling and integrating open source, but also in contributing and releasing aspects.

First, while integrating open source solutions into our platforms, we collaborate with the community and contribute code back. Projects we have contributed to include, but are not limited to: Linux and FreeBSD on Hyper-V, Hadoop, Windows containers, Mesos and Kubernetes, Cloud Foundry and OpenShift, and various cloud deployment and management tools such as Chef, Puppet, and the HashiCorp tools. Of course, there are many other projects too.

While developing VS Code, our strong and lightweight IDE, we also made a lot of contributions to the Electron codebase. As Microsoft has become a member of many prominent open source foundations, such as the Linux Foundation, we will be even more involved in these communities and will contribute continuously.

Microsoft has also been releasing more and more of our platforms, services, and products to the open source community. The best-known ones include .NET, PowerShell, TypeScript, Xamarin, CNTK for machine learning, all the Azure SDKs and CLIs, and VS Code.

After the acquisition of Deis, we continue to invest in the set of popular K8s tools they developed, and we recently released Draft, a tool for creating apps for K8s, on GitHub. Even for products that are not fully open sourced, many components, especially newly developed ones, have become open source, such as many of the IoT tools and adapters, and the OMS agent for Linux. You can find the full list at https://opensource.microsoft.com/.

Even in the hardware space, we’re contributing our data center design to Open Compute Project.

Linux.com: How exactly does Microsoft empower companies that are using or looking to use open source?

Gebi Liang: We fully recognize that customers want more choices, including the use of open source, so we have been enabling the popular open source stacks on our platform with unprecedented speed. I am very proud to share a list of such projects covering just about every aspect of what customers need. On OS images, we have enabled all the major Linux distros, plus FreeBSD and OpenBSD as the latest additions. On dev tools, developers who are used to the Mac environment can now use Visual Studio for Mac, VS Code on Linux/Mac, or Eclipse and IntelliJ. And on database/big data, a Linux developer can use SQL Server on Linux as well as the fully managed MySQL/PostgreSQL services on Azure.

In terms of management and monitoring, one can use not only OMS and PowerShell, but also Chef, Puppet, Ansible, Terraform, Zabbix, etc. And for the popular microservices space, we provide fully diversified microservice platform support on Azure, such as Docker Swarm, Mesos DC/OS, and Kubernetes (K8s), in addition to Microsoft’s own microservice platform, Service Fabric, which supports both Windows and Linux. As a result, today more than 30% of IaaS VMs on Azure run Linux, and in China that number has reached 60%!

Linux.com: How is open source important to innovation at Microsoft?

Gebi Liang: Open source allows us to build on what the community has contributed, which gives us much more speed in going to market. Also, when we contribute and release software back to the community, we can leverage the community for better feedback and build better applications inspired by new and creative ideas. This helps us innovate faster and develop best practices beyond what any single company could achieve. That’s the power of the crowd’s wisdom.

Linux.com: It’s interesting to hear how Microsoft’s embrace of open source helps its customers, but also how open source helps Microsoft innovate internally. What else is Microsoft doing to build or empower an open source culture?

Gebi Liang: We are committed to building a sustainable open source culture at Microsoft. A cultural shift requires deep internal alignment with rewards and compensation, so Microsoft has refined its performance review system to better accommodate a culture of sharing and contributing. All employees are asked at every performance review to describe how they are empowering others and how they are building on the work of others. Open source is an officially recognized and documented core aspect of the developer skill set. And we can see that the internal culture change is paying off, with over 16,000 employees on GitHub, some of them making critical contributions to projects like Docker and Hadoop.

I hope to see everyone at LinuxCon China. I’m happy to share more information about Microsoft and open source and perhaps collaborate on new projects too. See you there!