
How Bad Is Dirty COW?

“Dirty COW” is a serious Linux kernel vulnerability that was recently discovered to have been lurking in the code for more than nine years. It is pretty much guaranteed that if you’re using any version of Linux or Android released in the past decade, you’re vulnerable. But what is this vulnerability, exactly, and how does it work? The easiest way to explain it is with the help of a popular tourist scam.

The con

Have you ever played the shell game? It’s traditionally played with a pea and three walnut shells — hence the name — and it is found on touristy street corners all over the world. Besides the shells themselves, it also involves the gullible “mark” (that’s you), the con artist (that’s the person moving the shells), and, invariably, one or several of the scammer’s assistants in the crowd pretending to be fellow tourists. At first, the accomplices “bait” the crowd by winning many bets in a row, so you assume the game is pretty easy to win — after all, you can clearly see the pea move from shell to shell, and it’s always revealed right where you thought it would be.

So, you step forward, win a few rounds, and then decide to go for a big bet, usually goaded by the scammers. At just the right time you’re momentarily distracted by the scammer’s assistants, causing you to look away for a mere fraction of a second — but that’s enough for the scammer to palm the pea or quickly move it to another shell. When you call your bet, the empty shell is triumphantly revealed and you walk away relieved of your cash.

The race

In computing terms, you just experienced a “race condition.” You saw the pea go under a specific shell (checked for the required condition), and therefore that’s the one you pointed at (performed the action). However, unknown to you, between the check and the action the situation changed, so the initial condition was no longer true. In real life, you were probably only out a couple of hundred bucks, but in the computing world race conditions can lead to truly bad outcomes.

Race conditions are usually solved by requiring that the check and the action be performed as part of an atomic transaction, locking the state of the system so that the initial condition cannot be modified until the action is completed. Think of it as putting your foot on the shell right after you see the pea go under it — to prevent the scammer from palming or moving it while you are distracted (though I don’t suggest you try this unless you’re ready to get into a fistfight with the scammer and their accomplices).
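
Here is what the same pattern looks like in code. This is a minimal Python sketch (the account balance and amounts are made up for illustration): the racy version checks a condition and then acts on it in two separate steps, while the locked version holds a lock across both, the programmatic equivalent of keeping your foot on the shell.

```
import threading

balance = 100                      # shared state (the "pea")
lock = threading.Lock()

def withdraw_racy(amount):
    """Check-then-act in two separate steps: another thread can
    change 'balance' in the window between the check and the action."""
    global balance
    if balance >= amount:          # check the condition...
        # <-- race window: a context switch here invalidates the check
        balance -= amount          # ...then perform the action

def withdraw_locked(amount):
    """Holding the lock makes the check and the action atomic."""
    global balance
    with lock:                     # foot on the shell
        if balance >= amount:
            balance -= amount

withdraw_locked(60)
print(balance)                     # 40
```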

The COW

Unfortunately, one such race condition was recently discovered in the part of the Linux kernel that is responsible for memory mapping. Linux uses the “Copy on Write” (COW) approach to reduce unnecessary duplication of memory objects. If you are a programmer, imagine you have the following code:

a = 'COW'
b = a

Even though there are two variables here, they both point at the same memory object — since there is no need to take up twice the amount of RAM for two identical values. Next, the OS will wait until the value of the duplicate object is actually modified:

b += ' Dirty'

At this point, Linux will do the following (I’m simplifying for clarity):

  1. allocate memory for the new, modified version of the object

  2. read the original contents of the object being duplicated (‘COW’)

  3. perform any required changes to it (append ‘ Dirty’)

  4. write modified contents into the newly allocated area of memory

Unfortunately, there is a race condition between step 2 and step 4 that can trick the memory mapper into writing the modified contents into the original memory range instead of the newly allocated area, so that instead of modifying memory belonging to “b” we end up modifying “a”.
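
You can watch the correct copy-on-write behavior from user space by writing into a private file mapping. Here is a minimal Python sketch (it assumes a readable, non-empty file at /etc/hostname; any read-only file will do): the write triggers steps 1 through 4 above, so only our private copy changes and the file on disk stays untouched. Dirty COW is what happens when that guarantee breaks.

```
import mmap
import os

# Open a file we can read but should not be able to modify.
fd = os.open("/etc/hostname", os.O_RDONLY)
size = os.fstat(fd).st_size

# ACCESS_COPY requests a private, copy-on-write view of the file.
m = mmap.mmap(fd, size, access=mmap.ACCESS_COPY)

m[0:1] = b"X"   # the kernel copies the page; only our copy is modified

print("private copy:", m[:size])               # starts with b'X'
print("file on disk:", os.pread(fd, size, 0))  # unchanged

m.close()
os.close(fd)
```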

The paydirt

Just like any other POSIX system, Linux implements “Discretionary Access Controls” (DAC), which rely on a framework of users and groups to grant or deny access to various parts of the OS. A grant can be read-only or read-write. For example, as a non-privileged user you should be able to read “/bin/bash” in order to start a shell session when you log in, but not write to it. Only a privileged account (e.g., “root”) should be able to modify this file — otherwise any malicious user could replace the bash binary with a modified version that, for example, logs all passwords or starts up a backdoor.
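
You can see these permissions with a simple directory listing (the size and date below are illustrative and will differ on your machine): everyone may read and execute the file, but only root may write to it.

```
$ ls -l /bin/bash
-rwxr-xr-x 1 root root 1029624 Jul 12  2016 /bin/bash
```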

The race condition described above allows an attacker to bypass this permissions framework by tricking the COW mechanism into modifying the original read-only objects instead of their copies. In other words, a carefully crafted attack run by an unprivileged user can indeed replace “/bin/bash” with a malicious version. This vulnerability has been assigned both a boring name (“CVE-2016-5195”) and the now-customary branded name of “Dirty COW.”

The really bad news is that this race condition has been present in the kernel for over nine years, which is a very long time when it comes to computing: long enough that virtually every Linux and Android release of the past decade shipped with it.

The fix

Triggering this exploit is not as trivial as running a simple “cp” operation and putting any kind of modified binary in place. That said, given enough time and perseverance, we should assume that attackers will come up with cookie-cutter exploits that will allow them to elevate privileges (i.e., “become root”) on any unpatched system where they are able to freely execute arbitrary code. It is imperative that all Linux systems be patched as soon as possible — and a full reboot will be required, unless you have some kind of live patching solution available to you (if you don’t already know whether you can live-patch, then you probably cannot, as it’s not a widely used technology yet).

There is a fix available in the upstream kernel, and, at the time of writing this article, the distributions are starting to release updated packages. You should be closely monitoring your distribution’s release alerts and apply any outstanding kernel errata as soon as it becomes available. The same applies to any Android devices you may have.
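
The exact commands depend on your distribution, so treat the following as a rough sketch and check your vendor’s advisory for the actual package names:

```
$ uname -r                     # note the currently running kernel version
$ sudo apt-get update && sudo apt-get dist-upgrade    # Debian/Ubuntu
$ sudo yum update kernel       # RHEL/CentOS
$ sudo reboot                  # the new kernel only takes effect after a reboot
$ uname -r                     # confirm you are now on the patched kernel
```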

If you cannot update and reboot your system right away, there are some mitigation mechanisms available to you while you wait (see this Red Hat Bugzilla entry for more details). It is important to note that the STAP method will only mitigate known proof-of-concept exploits and is not generic enough to be considered a good long-term fix. Unfortunately, “Dirty COW” is not the kind of bug that can be prevented (much) by SELinux, AppArmor or any other RBAC solution, nor is it mitigated by PaX/GrSecurity hardening patches.

The takeaway

As I said earlier, in order to exploit the “Dirty COW” bug, the attacker must first be able to execute arbitrary code on the system. This, in itself, is bad enough — even if an attacker is not able to gain immediate root-level privilege, being able to execute arbitrary code gives them a massive foothold on your infrastructure and a pivot point for reaching your internal networks.

In fact, you should always assume that there are bad bugs lurking in the kernel that we do not yet know about (but the attackers do). Kees Cook, in his blog post about security bug lifetimes, points out that vulnerabilities are usually fixed long after they are first introduced, with many of them lurking in the code for years. Really bad bugs of the caliber of “Dirty COW” are worth hundreds of thousands of dollars on the black market, and you should always assume that an attacker who is able to execute arbitrary code on your systems will eventually be able to escalate their privileges and gain root access. Efforts like the “Kernel Self Protection Project” can help reduce the impact of some of these lurking bugs, but not all of them — for example, race conditions are particularly tricky to guard against and can be devastating in their scope of impact.

Therefore, any mitigation for the “Dirty COW” and other privilege escalation bugs should really be considered a part of a comprehensive defense-in-depth strategy that would work to keep attackers as far away as possible from being able to execute arbitrary code on your systems. Before they even get close to the kernel stack, the attackers should have to first defeat your network firewalls, your intrusion prevention systems, your web filters, and the RBAC protections around your daemons.

Taken together, these technologies will provide your systems with a great deal of herd immunity to ensure that no single exploit like “Dirty COW” can bring your whole infrastructure to its tipping point.

Learn more about how to secure Linux systems through The Linux Foundation’s online, self-paced course Linux Security Fundamentals.

Keynote: Brian Behlendorf, Executive Director, Hyperledger Project

Brian Behlendorf, Executive Director of the Hyperledger Project, explains everything you need to know about blockchain, Hyperledger, and other amazing new technologies in his LinuxCon North America keynote.

Top 5 Reasons to Love Kubernetes

At LinuxCon Europe in Berlin, I gave a talk about Kubernetes titled “Why I love Kubernetes? Top 10 reasons.” The response was great, and several folks asked me to write a blog about it. So here it is, with the first five reasons in this article and the others to follow. As a quick introduction, Kubernetes is “an open-source system for automating deployment, scaling and management of containerized applications,” often referred to as a container orchestrator.

Created in June 2014 by Google, it currently has over 1,000 contributors, more than 37,000 commits, and over 17,000 stars on GitHub, and is now under the governance of the Cloud Native Computing Foundation at The Linux Foundation. A recent private survey by Gartner listed Kubernetes as the leading system for managing containers at scale.

Choosing a distributed system to perform tasks in a datacenter is quite difficult, because comparing various solutions is much more complex than looking at a spreadsheet of features or performance. Measuring the performance of systems like Kubernetes fairly is quite challenging due to the many variables we face. I believe that choosing a system also depends highly on past experience, one’s own perspective, and the skills available in a team. Yes, this does not sound rational, but that’s what I believe. 🙂

So here, in no particular order, are the top five reasons to like Kubernetes.

#1 The Borg Heritage

Kubernetes (k8s) inherits directly from Google’s long-time secret application manager: Borg. I often characterize k8s as an open rewrite of Borg.

Borg was a secret for a long time but was finally described in the Borg paper. It is the system used by the famed Google Site Reliability Engineers (SREs) to manage Google applications like Gmail and even Google’s own cloud, GCE.


Historically, Borg managed containerized applications because, when it was created, hardware virtualization was not yet available, and because containers offered a fine-grained compute unit with which to pack Google’s data centers and increase efficiency.

As a long-time cloud guy, what I found fascinating is that GCE runs on Borg. This means that the virtual machines we get from GCE are actually running in containers. Let that sink in. GCE itself is a distributed application managed by Borg.

Hence, for me, the killer reason to embrace Kubernetes was that Google has been rewriting in the open the solution that manages its cloud. I often characterize this as “Imagine AWS open sourcing EC2” — it would have saved us all a bunch of headaches.

So, read the Borg paper; even if you just skim through it, you will gain valuable insights into the thinking that went into Kubernetes.

#2 Easy to Deploy

This one is definitely going to be contentious, but when I jumped into Kubernetes in early 2015, I found that it was quite straightforward to set up.

First, you can run k8s on a single node (we will get back to that), but for a non-HA setup you just need a central manager and a set of workers. The manager runs three processes (the API server, the Scheduler, and a resource Controller) plus a key-value store using etcd, and the workers run two processes (the Kubelet, which watches over the containers, and the Proxy, which exposes services).

This architecture, at a high level, is similar to Mesos, CloudStack, or OpenStack, for instance, as well as to most non-peer-to-peer systems. Replace etcd with ZooKeeper, replace the manager processes with the Mesos master, and replace the kubelet/proxy with the Mesos workers, and you have Mesos.

When I started, I was able to quickly write an Ansible playbook that used CoreOS virtual machines and set up all the k8s components. CoreOS had the advantage of also shipping a network overlay (i.e., flannel) and Docker. The end result was that, in literally less than 5 minutes, I could spin up a k8s cluster. I have been updating that playbook ever since, and many others exist. So for me, spinning up k8s is one command:

```
$ ansible-playbook k8s.yml
```

Note that if you want to use Google Cloud, there is a service for Kubernetes cluster provisioning, Google Container Engine (GKE), and getting a cluster is also one command that works great:

```
$ gcloud container clusters create foobar
```

While from my perspective this is “easy”, I totally understand that this may not be the case for everyone. Everything is relative, and reusing someone’s playbook can be a pain.

Meanwhile, Docker has done a terrific job rewriting Swarm and embedding it into the Docker engine. They made creating a Swarm cluster as simple as running two bash commands.
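
For reference, that Swarm flow (assuming Docker 1.12 or later) looks roughly like this; the token and address placeholders come from the output of the first command:

```
$ docker swarm init                                    # on the first node
$ docker swarm join --token <token> <manager-ip>:2377  # on each worker
```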

If you like that type of setup, Kubernetes is now also shipping with a command called kubeadm, which lets you create a cluster from the CLI. Start a master node and have the workers join, and that is it.

```
$ kubeadm init
$ kubeadm join
```

I have also made a quick-and-dirty playbook for it; check it out.

#3 Development Solution with minikube

Quite often, when you want to experiment with a system and take it for a quick ride, you do not want a full-blown distributed setup in your data center or in the cloud. You just want to test it on your local machine.

Well, you’ve got minikube for that.

Download, install, and you are one bash command away from having a single-node, standalone Kubernetes instance.

```
$ minikube start
Starting local Kubernetes cluster...
Kubectl is now configured to use the cluster.
```

Within a short moment, minikube will have booted everything and you will have access to your single-node k8s instance:

```
$ kubectl get nodes
NAME       STATUS    AGE
minikube   Ready     25s
```

By default, it will use VirtualBox on your machine and start a VM, which will run a single binary (i.e., `localkube`) that will give you Kubernetes quickly. That VM will also have Docker, and you could use it as a Docker host.
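
For example, you can point your local Docker client at that VM with the minikube docker-env helper (a quick sketch):

```
$ eval $(minikube docker-env)   # export DOCKER_HOST and friends for this shell
$ docker ps                     # now lists the containers running inside the VM
```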

Minikube also allows you to test different versions of Kubernetes, as well as to configure and test different features. It also comes with the Kubernetes dashboard, which you can open quickly with:

```
$ minikube dashboard
```

#4 A Clean API That Is Easy to Learn

There was a world before REST, and it was painful. It was painful to learn, to program, to use, and to debug. It was also full of evolving and competing standards. But let’s not go there. That’s why I love clean REST APIs that I can look at and test with curl. To me, the Kubernetes API has been a joy: just a set of resources (or objects) with HTTP actions, with requests and responses that I can manipulate in JSON or YAML.

As Kubernetes is moving quite fast, I enjoy that the various resources are grouped into API groups and are well versioned. I know what is alpha, beta, or stable, and I know where to check the specifications.

If you read reason #3, you already have minikube, right? Then the fastest way to check out the API is to dive straight into it:

```
$ minikube ssh
$ curl localhost:8080
{
  "paths": [
    "/api",
    "/api/v1",
    "/apis",
    "/apis/apps",
    "/apis/apps/v1alpha1",
...
```

You will see all the API groups and be able to explore the resources they contain; just try:

```
$ curl localhost:8080/api/v1
$ curl localhost:8080/api/v1/nodes
```

All resources have a kind, apiVersion, and metadata.
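
For instance, a minimal Pod manifest carries exactly those fields (the name and image below are just placeholders):

```
apiVersion: v1
kind: Pod
metadata:
  name: ghost-example     # metadata: at minimum, a name
spec:
  containers:
  - name: ghost
    image: ghost          # the container image to run
```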

To learn about the schema of each resource, there is a Swagger API browser that is quite useful. I also often refer to the documentation when I am looking for a specific field in the schema. The next step in learning the API is actually to use the command-line interface to Kubernetes, kubectl, which is reason #5.

#5 Great CLI

Kubernetes does not leave you out in the cold, having to learn the API from scratch and then writing your own client. The command-line client is there; it is called kubectl, and it is sentence-based and extremely powerful.

You can manage your entire Kubernetes cluster and all resources in it via kubectl.

Perhaps the toughest part of kubectl is how to install it or where to find it. There is room for improvement there.

Let’s get going with our minikube setup again and explore a few kubectl verbs like get, describe, and run.

```
$ kubectl get nodes
$ kubectl get nodes minikube -o json
$ kubectl describe nodes minikube
$ kubectl run ghost --image=ghost
```

That last command will start the blogging platform Ghost. You will shortly see a pod appear. A pod is the lowest compute unit in Kubernetes and the most basic resource. With the run command, Kubernetes created another resource, called a deployment. Deployments provide a declarative definition of a containerized service (think of it as a single microservice). Scaling this microservice is one command:

```
$ kubectl scale deployments/ghost --replicas=4
```

For every kubectl command you try, you can use two little tricks I love: --watch and --v=99. The --watch flag will wait for events to happen, which feels a lot like the standard Linux watch command. The verbose flag with a value of 99 will show you the curl commands that mimic what kubectl does. It is a great way to keep learning the API and to see which resources and requests are involved.
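
For instance (both flags work with most kubectl commands):

```
$ kubectl get pods --watch   # stream changes as they happen
$ kubectl get pods --v=99    # print the underlying API requests, curl-style
```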

Finally, to get your mind blown, you can just edit this deployment in place; doing so will trigger a rolling update.

```
$ kubectl edit deployment/ghost
```

Stay tuned for five more reasons to love Kubernetes.

So you’ve heard of Kubernetes but have no idea what it is or how it works? The Linux Foundation’s Kubernetes Fundamentals course will take you from zero to knowing how to deploy a containerized application and manipulate resources via the API. Sign up now!

Hyperledger — The Source of Truth

Well, here we go again with yet another new technology with a name that doesn’t tell us much about what it is: Hyperledger. Hyperledger is related to Bitcoin, Ethereum, and blockchains… but, what are these things? They sound like science fiction: I have plenty of Bitcoins, so let’s go splurge on a nice evening at the Ethereum. Fortunately, Brian Behlendorf, Executive Director of the Hyperledger Project, explains these amazing new technologies in his LinuxCon North America keynote.

Obviously, because this is a Linux and open source website, these things are all related to open source projects. Bitcoin is a distributed peer-to-peer digital currency and a shared world ledger that records balances. Ethereum takes concepts from Bitcoin and expands them to build a decentralized computing platform that runs smart contracts.

Both use blockchains, and Behlendorf explains what these are. He says, “People have started to focus on underlying technology within Bitcoin, within Ethereum, that sort of thing. That’s something called a blockchain. The blockchain is actually not a dramatically new idea. It’s a decentralized database that has multi-masters. Anybody can write to it. It’s a ledger, and it’s resilient to hostile actors. Somebody can try to corrupt the consensus-forming process of what’s the next entry to write into the ledger, and the rest of the network would be able to recognize that and stop it.”

“This decentralized ledger is something that used to be core to thinking about how we might scale up database systems, and then we figured out to make the central single master kind of model work and scale up and then we forgot about it, until Satoshi. Until Bitcoin reintroduced this idea by helping us realize these different masters could represent different actors, different organizations, different individuals, perhaps even anonymously. Satoshi took this whole series of different ideas, including blockchain and this kind of decentralized database idea, and created a currency out of it, but in doing that he highlighted the potential to come back to this idea of distributed ledgers as this really interesting technology,” he says.

System of Record

What might that be used for? According to Behlendorf, “the distributed ledger that keeps track of that is essentially the system of record. The source of truth in a community of participants. You could use that as a way to build an immediate settlement network for a bank where 20 banks might be writing into this ledger. Such and such traded with such and such, whatever, and then be able to go back and prove that actually all happened in sequence, and not be able to refute that certain things happened as well.”

Where does Hyperledger come in? Hyperledger is a Linux Foundation Collaborative Project, founded to address issues of code provenance, patent rights, standards, and policy. Behlendorf says, “A set of organizations started to place calls to Jim [Zemlin, the Executive Director of the Linux Foundation]. He started to host some conference calls, some face-to-face meetings, and those companies included very familiar names like IBM and Intel. They included some brand new types of companies such as Digital Asset Holdings and R3, and included some companies that had never previously engaged with the Linux Foundation before such as JP Morgan Bank.”

Hyperledger was launched in December 2015 under the Linux Foundation’s collaborative projects framework. “The first code release happened in February. That code was called Hyperledger Fabric. Hyperledger Fabric is an implementation of this kind of private blockchain model where if you have a set of known named entities, 20 banks, or a government, and a regulatory agency, and NGOs and others who all want simply a shared distributed ledger, and want to be able to layer on top of that a smart contract platform,” says Behlendorf.

Watch Brian Behlendorf’s keynote (below) to learn about this bleeding-edge technology that is bringing together an unlikely cast of participants, including JP Morgan Bank, Airbus, IBM, Intel, and various companies and open source developers all over the globe.

Watch 25+ keynotes and technical sessions from open source leaders at LinuxCon + ContainerCon Europe. Sign up now to access all videos!

Debunking Unikernel Criticisms

The security and tooling worries around unikernels are vastly exaggerated, asserted Idit Levine, creator of UniK, a unikernel compilation tool, and a cloud chief technology officer at Dell EMC.

A relatively new concept, unikernels could be thought of as stripped-down containers with only the functionality needed to run the specific workload at hand. They can offer gains in saved storage and faster performance, but they are anything but a proven technology.

A few months back, a then-EMC colleague of Levine’s charged that unikernels are fundamentally unsecurable, as they provide the deepest, “Ring 0” access to an operating system. And a few months prior to that, the chief technology officer of Joyent was quick to point out another problem with unikernels: a lack of tooling.

In this latest edition of The New Stack Makers podcast, Levine concisely answers both of these criticisms and discusses the first possible use case for unikernels, namely to power edge devices on the Internet of Things. The interview was conducted by TNS founder Alex Williams and managing editor Joab Jackson at Cloud Foundry Summit Frankfurt.

The post Debunking Unikernel Criticisms appeared first on The New Stack.

4 Useful Cinnamon Desktop Applets

The Cinnamon desktop environment is incredibly popular, and for good reason. Out of the box it offers a clean, fast, and well-configured desktop experience.

But that doesn’t mean that you can’t make it a little better with a few nifty extras.

And that’s where Cinnamon Applets come in. Like Unity’s Indicator Applets and GNOME Extensions, Cinnamon Applets let you add additional functionality to your desktop quickly and easily.

This post, 4 Useful Cinnamon Desktop Applets, was written by Joey-Elijah Sneddon and first appeared on OMG! Ubuntu!.

Where to Find the World’s Best Programmers

Which countries produce the best coders is an interesting question to ask. Perhaps more importantly, why do some countries lead the way?

One source of data about programmers’ skills is HackerRank, a company that poses programming challenges to a community of more than a million coders and also offers recruitment services to businesses. Using information about how successful coders from different countries are at solving problems across a wide range of domains (such as “algorithms” or “data structures”, or specific languages such as C++ or Java), HackerRank’s data suggests that, overall, the best developers come from China, followed closely by Russia. Alarmingly, and perhaps unexpectedly, the United States comes in at 28th place.

Read more at NetworkWorld

Puppet Unveils New Docker Build and Phased Deployments

Puppet released a number of announcements today, including the availability of Puppet Docker Image Build and a new version of Puppet Enterprise, which features phased deployments and situational awareness.

In April, Puppet began helping people deploy and manage things like Docker, Kubernetes, Mesosphere, and CoreOS. Now the focus is shifting to helping people manage the services that are running on top of those environments. Puppet Docker Image Build automates the container build process to help organizations deploy containers into production environments. This gives users a consistent way to install Docker environments using the same code they rely on to automate the delivery of software in the data center or the cloud.

Read more at SDxCentral

Continuous Testing in DevOps…

I’ve recently attended a number of conferences: some testing ones, some agile ones, and some dev ones too. Although some of the talks were painful due to misunderstandings of one thing or another (be it testing, automation, agile, BDD, TDD, etc.), overall, I thought the conferences were pretty good; I met some new people and got to share some stories with them.

One thing I heard a fair amount about was DevOps. Many people were talking about it. It’s a big topic! But too many people seemed hugely confused about where testing fits in this wonderful world of DevOps. Some suggested that you only need automation in DevOps, but when asked to explain, their arguments quickly fell by the wayside. Some people flatly refused to guess at how they’d try to implement testing in DevOps.

Read more at Dan Ashby

Mitigating dirtyc0w with systemd

Basic mitigation

Known exploits for the CVE-2016-5195 vulnerability involve the madvise syscall, so it’s possible to mitigate the bug by excluding the necessary call via a systemd service or container configuration. This is easy to do in a systemd unit:

[Service]
SystemCallFilter=~madvise

The tilde after the equal sign indicates that this is a blacklist of syscalls.
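
To attach this filter to an existing service without touching its packaged unit file, you can use a drop-in override; a quick sketch, assuming a hypothetical myapp.service:

```
$ sudo systemctl edit myapp.service     # opens an editor; paste in the [Service] snippet above
$ sudo systemctl restart myapp.service  # restart so the filter takes effect
$ systemctl cat myapp.service           # confirm the drop-in was applied
```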

As with any configuration change, you’ll want to test this out before deploying it. …

Read more at David Timothy Strauss Blog