
Untangling Macros in C

Morse Code made with smoke

 

As programmers, in our daily office or school life, we are expected to write code following best practices and to comment it wisely, so that when someone needs to re-read it, they actually can. To take a break from all those constraints, we can head to the IOCCC, the International Obfuscated C Code Contest.

In this post, we are going to focus on the IOCCC 1986 winner in the "Worst abuse of the C preprocessor" category. The code was written by James Hague.

Starting from the given source and observing its output, we will explain how it works.

The Code

Here it is in all its obfuscated glory:

#define	DIT	(
#define	DAH	)
#define	__DAH	++
#define DITDAH	*
#define	DAHDIT	for
#define	DIT_DAH	malloc
#define DAH_DIT	gets
#define	_DAHDIT	char
_DAHDIT _DAH_[]="ETIANMSURWDKGOHVFaLaPJBXCYZQb54a3d2f16g7c8a90l?e'b.s;i,d:"
;main			DIT			DAH{_DAHDIT
DITDAH			_DIT,DITDAH		DAH_,DITDAH DIT_,
DITDAH			_DIT_,DITDAH		DIT_DAH DIT
DAH,DITDAH		DAH_DIT DIT		DAH;DAHDIT
DIT _DIT=DIT_DAH	DIT 81			DAH,DIT_=_DIT
__DAH;_DIT==DAH_DIT	DIT _DIT		DAH;__DIT
DIT'\n'DAH DAH		DAHDIT DIT		DAH_=_DIT;DITDAH
DAH_;__DIT		DIT			DITDAH
_DIT_?_DAH DIT		DITDAH			DIT_ DAH:'?'DAH,__DIT
DIT' 'DAH,DAH_ __DAH	DAH DAHDIT		DIT
DITDAH			DIT_=2,_DIT_=_DAH_;	DITDAH _DIT_&&DIT
DITDAH _DIT_!=DIT	DITDAH DAH_>='a'?	DITDAH
DAH_&223:DITDAH		DAH_ DAH DAH;		DIT
DITDAH			DIT_ DAH __DAH,_DIT_	__DAH DAH
DITDAH DIT_+=		DIT DITDAH _DIT_>='a'?	DITDAH _DIT_-'a':0
DAH;}_DAH DIT DIT_	DAH{			__DIT DIT
DIT_>3?_DAH		DIT			 DIT_>>1 DAH:'\0'DAH;return
DIT_&1?'-':'.';}__DIT DIT			DIT_ DAH _DAHDIT
DIT_;{DIT void DAH write DIT			1,&DIT_,1 DAH;}

Apart from the peculiar formatting, what jumps out is the number of "unnecessary" macros and the repetitive use of DIT and DAH variations.

The output

If we compile the code at this point, we see many warnings. Among them, two for the implicit declarations of __DIT and _DAH. After that step, we can run the code, and as we provide sequences of ASCII characters, it spits out sequences of . and -.

$ ./a.out
hello, world
.... . .-.. .-.. --- --..-- .-- --- .-. .-.. -..

It looks like Morse code. And indeed, using an online Morse decoder, it is: it decodes back to HELLO, WORLD.

De-Obfuscating

Let's first perform the preprocessor's job ourselves and replace the macros with their values. After a bit of reformatting, this is what we get:

char _DAH_[]="ETIANMSURWDKGOHVFaLaPJBXCYZQb54a3d2f16g7c8a90l?e'b.s;i,d:";
main()
{
char *_DIT, *DAH_, *DIT_, *_DIT_, *malloc(), *gets();
for (_DIT = malloc(81), DIT_ = _DIT++; _DIT == gets(_DIT); __DIT('\n'))
   for (DAH_ = _DIT; *DAH_; __DIT(*_DIT_ ? _DAH(*DIT_) : '?'), __DIT(' '), DAH_++)
     for (*DIT_ = 2, _DIT_ = _DAH_; *_DIT_ && (*_DIT_ != (*DAH_ >= 'a' ? *DAH_ & 223 : *DAH_)); (*DIT_)++, _DIT_++)
         *DIT_ += (*_DIT_ >= 'a' ? *_DIT_ - 'a' : 0);
}
_DAH(DIT_)
{
__DIT(DIT_ > 3 ? _DAH(DIT_ >> 1) : '\0');
return DIT_ & 1 ? '-' : '.';
}
__DIT(DIT_) char DIT_;
{
(void) write(1, &DIT_, 1);
}

Slightly better.

We see the three functions we expected: main, _DAH, and __DIT. We also see an external variable, _DAH_, a long string. __DIT looks like the putchar function from the standard library, printing one char at a time. And what about _DAH?

Dive into _DAH

It is recursive. As long as the argument is a number that takes more than two bits to write, it calls itself on the argument stripped of its last bit, and prints whatever that call returns. The output is therefore the binary digits of the argument, with - standing for 1 and . for 0, except for the most significant bit, which is never printed; the final digit is returned rather than printed. As an example, if we call _DAH(5), 5 being 101 in binary, it calls _DAH(2). That is the base case: it prints nothing and returns 10 & 1 == 0, so '.'. The outer call then prints that . and returns 101 & 1 == 1, so '-'. If we want to see the whole thing, we have to call __DIT(_DAH(5)), which outputs .- . In short, _DAH(n) is a rather obfuscated function that prints/returns not n in binary, but n with its leading 1 bit stripped.

The main function

Again, with more explicit variable names.

char code[]="ETIANMSURWDKGOHVFaLaPJBXCYZQb54a3d2f16g7c8a90l?e'b.s;i,d:";
main()
{
char *line, *letter, *value, *code_copy, *malloc(), *gets();
    for (line = malloc(81), value = line++; line == gets(line); putchar('\n'))
     {
     for (letter = line; *letter; putchar(*code_copy ? _DAH(*value) : '?'), putchar(' '), letter++)
         {
         for (*value = 2, code_copy = code; *code_copy && (*code_copy != (*letter >= 'a' ? *letter & 223 : *letter)); (*value)++, code_copy++)
             {
              *value += (*code_copy >= 'a' ? *code_copy - 'a' : 0);
             }
         }
     }
}

The outer loop: it allocates a buffer and, each time the user enters a new line, reads it from the standard input into that buffer. gets does not check for buffer overflow, so the 81 means nothing in the code itself, and I have not found what it means for Morse users. gets either returns the buffer it was given or NULL; in the latter case, the loop condition fails and the program returns. This loop also assigns an address to value, which the inner loop will use, and it prints a newline after the two inner loops complete.

The middle loop: it iterates over each letter in the string obtained above. As it moves from one letter to the next, it either prints the letter's Morse code using _DAH, seen above, or prints a ?, then adds a space. When we looked at _DAH above, we used integers as arguments; this works just as well with letters, since ASCII characters, like *value, are simply small integers in C.

The inner loop: it sets *value to 2 and looks at the current letter. ((*letter >= 'a') ? *letter & 223 : *letter) means: if the letter is lowercase, use its uppercase version. 223 is 11011111 in binary and serves as a mask that clears bit 5 (value 32), the one bit that differs between a lowercase ASCII letter and its uppercase counterpart. Of course, 223 is not the most obvious way to write this; subtracting 32, or masking with 0x5F, would more easily come to mind. Knowing this, we can see that the inner loop iterates as long as *letter does not match the current character in the code string. On each iteration, it increases *value by 1 and moves on to the next character in code. Interestingly, whenever the character in code is lowercase, the loop increases *value by an extra amount, *code_copy - 'a'.

Going from a letter to its Morse code: overall, the inner loop starts with *value = 2 and increases it by 1 each time it moves to the next character in code, until that character matches the letter from our line. It then prints *value using _DAH. Let's see some examples:

  •  *letter = 'E': *value stays at 2, since E is the first letter in code; we go back to the middle loop, which calls putchar(*code_copy ? _DAH(*value) : '?'). Here *code_copy == 'E', so the expression becomes putchar(_DAH(2)). As seen above, _DAH(2) prints nothing and returns '.', so the output is . , which is indeed the Morse code for E.
  •  *letter = 'I': *value starts at 2, and since I is the third letter in code, the inner loop exits with *value == 4. putchar(_DAH(4)) prints out .. , the Morse code for I.

We can observe a pattern: the order chosen for the letters in code is such that each letter's Morse code equals the binary value of its index in the code string plus 2 (with the leading bit dropped). Of course, this is not perfect. Remember the inner loop's special cases? If *code_copy == 'a', the loop skips that *value; said otherwise, that *value, or index, does not map to any Morse code. If *code_copy == 'b', the loop skips that *value and shifts by 'b' - 'a' == 1, which means that from then on, the Morse codes of the letters in code map to the value of their index in the string plus 3. And so on; next comes a d, which shifts that new mapping by a further 'd' - 'a' == 3... This is genius and so hard to figure out.

Conclusion

The insane contest that is the IOCCC produces crazily creative code. Behind the formatting, the dubious but nevertheless relevant variable names, and the over-complications lies a genius idea that maps letters to their Morse code inside a single string. I do not know how challenging it was to come up with in the first place, but it did require some time, doubts, and a few 'aha' moments to unravel the underlying process. I am so glad I did it, and I feel I acquired new skills in reading others' code in the process.

Post Scriptum

As an annex, here is a de-obfuscated version; it compiles with one warning, due to the use of gets. I tried to stay close to the original while making it more readable.

#include <stdio.h>
#include <stdlib.h>

int _DAH(int);

char code[]="ETIANMSURWDKGOHVFaLaPJBXCYZQb54a3d2f16g7c8a90l?e'b.s;i,d:";

int main(void)
{
        char *line, *letter, *code_copy;
        char *gets(char *);
        char value, upper_case;

        for (line = malloc(81); gets(line) != NULL; putchar('\n'))
        {
                for (letter = line; *letter; letter++)
                {
                        code_copy = code;
                        upper_case = (*letter >= 'a' ? *letter - 'a' + 'A' : *letter);
                        value = 2;
                        while (*code_copy && (*code_copy != upper_case))
                        {
                                value += (*code_copy >='a' ? *code_copy - 'a': 0);
                                value++;
                                code_copy++;
                        }
                        putchar(*code_copy ? _DAH(value) : '?');
                        putchar(' ');
                }
        }
}

int _DAH(int letter)
{
        putchar(letter > 3 ? _DAH(letter >> 1) : '\0');
        return (letter & 1 ? '-' : '.');
}

This article was contributed by a student at Holberton School and should be used for educational purposes only.

 

The Year in NV Trends for 2016

Is it ever too early for a Year in Review column? Didn’t think so. For this attempt, let’s take a look at the exciting trends in network virtualization (NV), as a mixture of open and proprietary technologies battle to be the cloud networking foundation of the future.

On the competitive front, we took a detailed look at the market in our “Future of Network Virtualization and SDN Controllers Report,” released in September. The market continues to grow, with a dynamic mixture of NV incumbents and startups gaining market traction.

Read more at SDx Central

Syscall Auditing at Scale

If you are an engineer whose organization uses Linux in production, I have two quick questions for you:

1) How many unique outbound TCP connections have your servers made in the past hour?

2) Which processes and users initiated each of those connections?

If you can answer both of these questions, fantastic! You can skip the rest of this blog post. If you can’t, boy-oh-boy do we have a treat for you! We call it go-audit.

Syscalls are how all software communicates with the Linux kernel. Syscalls are used for things like connecting network sockets, reading files, loading kernel modules, and spawning new processes (and much much much more). 

 

Read more at Slack Engineering Blog

Mozilla and Tor Release Urgent Update for Firefox 0-day Under Active Attack

Developers with both Mozilla and Tor have published browser updates that patch a critical Firefox vulnerability being actively exploited to deanonymize people using the privacy service.

“The security flaw responsible for this urgent release is already actively exploited on Windows systems,” a Tor official wrote in an advisory published Wednesday afternoon. “Even though there is currently, to the best of our knowledge, no similar exploit for OS X or Linux users available, the underlying bug affects those platforms as well. Thus we strongly recommend that all users apply the update to their Tor Browser immediately.”

Read more at Ars Technica

Canonical Offers Direct Docker Support to Ubuntu Users

Enterprise Ubuntu users running Docker in production now have a new source for Docker support: from Canonical.

Earlier today, Canonical and Docker announced joint support for the commercial edition of Docker Engine on Ubuntu. The pair also will provide updates for Docker on Ubuntu through an application delivery system Canonical originally devised.

Read more at InfoWorld

Build a Hadoop Cluster in AWS in Minutes

Check out this process that will let you get a Hadoop cluster up and running on AWS in two easy steps.

I use Apache Hadoop to process huge data loads. Setting up Hadoop in a cloud provider, such as AWS, involves spinning up a bunch of EC2 instances, configuring nodes to talk to each other, installing software, configuring the master and data nodes’ config files, and starting services.

This was a good use case to automate, considering I wanted to solve these problems.

  • How do I build the cluster in minutes (as opposed to hours and maybe even days for a large number of data nodes)?

Read more at DZone

Federating Your Kubernetes Clusters — The New Road to Hybrid Clouds

Over the past six months, federation of Kubernetes clusters has moved from proof of concept to a release that is worth checking out. Federation was first introduced under a code name of sorts, Ubernetes. And then, in Kubernetes v1.3.0, cluster federation appeared. Now, there is extensive documentation on the topic.

Why is it such a big deal? If you have followed the development of Kubernetes, you probably know that it is an open source rewrite of Borg, the system that Google uses internally to manage their containerized workloads across data centers. If you read the paper, you will notice that a single Kubernetes cluster is the equivalent of a Borg cell. As such, Kubernetes itself is not the complete equivalent of Borg. However, by adding cluster federation, Kubernetes can now distribute workloads across multiple clusters. This opens the door for more real Borg features, like failover across Zones, geographic load-balancing, workload migration, and so on.

Indeed, cluster federation in Kubernetes is a hybrid cloud solution.

How does it work?

The picture below, taken from the Tectonic blog on Federation, shows a high-level architectural view.

federation-api-4x.png

Image courtesy of CoreOS.

You see three Kubernetes clusters (i.e., San Francisco, New York, and Berlin). Each of those runs an API server, a controller, its own scheduler, and an etcd-based key-value store. This is the standard Kubernetes cluster setup. You can use any of these clusters from your local machine, assuming you have an account set up on them and the associated credentials. With the k8s client, kubectl, you can create multiple contexts and switch between them. For example, to list the nodes in each cluster, you would do something like:

```
$ kubectl config use-context sanfrancisco
$ kubectl get nodes
$ kubectl config use-context newyork
$ kubectl get nodes
$ kubectl config use-context berlin
$ kubectl get nodes
```

With federation, Kubernetes adds a separate API server (i.e., the Federation API server), its own etcd-based key value store and a control plane. In effect, this is the same setup for a regular cluster but at a higher level of abstraction. Instead of registering individual nodes with the Federated API server, we will register full clusters.

A cluster is defined as a federated API server resource, in a format consistent with the rest of the Kubernetes API specification. For example:

```
apiVersion: federation/v1beta1
kind: Cluster
metadata:
  name: new-york
spec:
  serverAddressByClientCIDRs:
    - clientCIDR: "0.0.0.0/0"
      serverAddress: "${NEWYORK_SERVER_ADDRESS}"
  secretRef:
    name: new-york
```

Adding a cluster to the federation is a simple creation step on the federated API server:

```
$ kubectl --context=federated-cluster create -f newyork.yaml
```

In the sample above, notice that I use a federated-cluster context and that I used the k8s client; indeed, the Federation API server extends the Kubernetes API and can be talked to using kubectl.

Also note that the federation components can (and actually should) run within a Kubernetes cluster.

Creating Your Own Federation

I will not show you all the steps here, as it would honestly make for a long blog. The official documentation is fine, but the best way to understand the entire setup is the walkthrough from Kelsey Hightower. This walkthrough uses Google GKE and Google Cloud DNS, but it can be adapted relatively easily for your own setup using your own on-premise clusters.

In short, the steps to create a Federation are:

1. Pick the cluster where you will run the federation components, and create a namespace where you will run them.

2. Create a Federation API server service that you can reach (i.e., LoadBalancer, NodePort, Ingress), create a secret containing the credentials for the account you will use on the federation and launch the API server as a deployment.

3. Create a local context for the Federation API server so that you can use kubectl to target it. Generate a kubeconfig file for it, store it as a secret, and launch the control plane. The control plane will be able to authenticate with the API server using the kubeconfig secret just created.

4. Once the control plane is running, you are ready to add the Clusters. You will need to create secrets for each cluster’s kubeconfig. Then, with Cluster resource manifest on hand (see above), you can use kubectl to create them on the federation context.

The end result is that you should have a working federation, with your clusters registered and ready. The following command will show them.

```
$ kubectl --context=federated-cluster get clusters
```

Migrating a Workload From One Cluster to Another

As federation matures, we can expect to see most Kubernetes resources available on the Federation API. Currently, only Events, Ingress, Namespaces, Secrets, Services and ReplicaSets are supported. Deployments should not be far off, because ReplicaSets are already in there. Deployments will be great because this will bring us rolling updates and rollbacks across clusters.

Creating a workload in the Federation is exactly the same as doing it on a single cluster. Write a resource file for a replica set and create it with kubectl, targeting the federation context.

```
$ cat nginx.yaml
apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:
  name: nginx
spec:
  replicas: 4
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.10

$ kubectl --context=federated-cluster create -f nginx.yaml
```

The really great concept, though, even at this early stage in federation support, is that you can already give some preference to a cluster in the federation. That way, when the Pods start, they may be scheduled more on one cluster and less on another. This is done via an annotation.

Add the following in the metadata of the replica set above:

```
annotations:
  federation.kubernetes.io/replica-set-preferences: |
      {
          "rebalance": true,
          "clusters": {
              "new-york": {
                  "minReplicas": 0,
                  "maxReplicas": 10,
                  "weight": 1
              },
              "berlin": {
                  "minReplicas": 0,
                  "maxReplicas": 10,
                  "weight": 1
              }
          }
      }
```

If you scale to 10 replicas, you will see five pods appear on each cluster. Indeed, each one has the same weight in the annotation.

Now try this: edit the annotation and change the weight. For example, put 20 on one of the clusters and 0 in the other.

```
$ kubectl --context=federated-cluster apply -f nginx.yaml
```

You will see all your Pods "move" over to one cluster and disappear from the other. Edit the replica set again, switch the weights the other way, and do another apply. You will see the Pods "move" back the other way.

This is not migration in the sense of copying memory between two hypervisors, as we can do with VMs, but it is migration in a microservices sense, where we can move services from region to region.

And this is just the beginning!

Read the previous articles in this series:

Getting Started With Kubernetes Is Easy With Minikube

Rolling Updates and Rollbacks using Kubernetes Deployments

Helm: The Kubernetes Package Manager

Enjoy Kubernetes with Python

Want to learn more about Kubernetes? Check out the new, online, self-paced Kubernetes Fundamentals course from The Linux Foundation. Sign Up Now!

Sebastien Goasguen (@sebgoa) is a long time open source contributor. Member of the Apache Software Foundation, member of the Kubernetes organization, he is also the author of the O’Reilly Docker cookbook. He recently founded skippbox, which offers solutions, services and training for Kubernetes.

Free Linux Foundation Webinar on Hyperledger: Blockchain Technologies for Business

You may have heard about the world-changing potential of blockchains — the technology behind cryptocurrencies such as Bitcoin and Ethereum. But what are they exactly? And why are companies clamoring to use and develop blockchain technologies?

“It’s not too outlandish to think that in five years time, every Fortune 500 company and perhaps even the top 1,000 will have deployed a blockchain somewhere,” said Hyperledger Executive Director Brian Behlendorf, in a recent article on Linux.com.

In a free webinar to be held Dec. 1 at 10 a.m. Pacific, guest speaker Dan O’Prey, CMO of Digital Asset Holdings, will provide an overview of blockchain technology and the Hyperledger Project at The Linux Foundation.

Hyperledger is an umbrella project for software developer communities building open source blockchain and related technologies. It is a neutral, foundational community for participating companies such as IBM, Intel, Cisco, JPMorgan, Wells Fargo, the London Stock Exchange, Red Hat, and Swift to work together to develop the technology and address issues of code provenance, patent rights, standards, and policy.

In this webinar, Dan will cover:

  • The foundations of distributed ledger technologies, smart contracts, and other components that comprise the modern blockchain technology stack.

  • Why a new blockchain project was needed for business and what the main use cases and requirements are for commercial applications, as well as an overview of the history and projects under the Hyperledger umbrella and how you can get involved.

Register now to attend the webinar, Hyperledger: Blockchain Technologies for Business! Can't attend? Register anyway to make sure you get a link to the replay, delivered straight to your inbox.

Remote Logging With Syslog, Part 1: The Basics

A problematic scenario, sometimes caused by an application's logging misconfiguration, is when the /var/log/messages file fills up, meaning that the /var partition becomes full. Thanks to the very nature of how systems work, there are always times when your system's logging causes unexpected issues. Thus, it's key that you understand, and have the ability to control, how your logging works and where your logs are saved.

Over the years when using Unix-like systems, I’ve been exposed to three or four different versions of the operating system’s default logging tool, known as Syslog. Here I will look at the logging daemon called rsyslog, a superfast Syslog product.

Before looking at the package itself, we'll start by exploring how to configure your system's logging locally. Then we will use that knowledge to go one step further and configure a remote Syslog server. We will also explore certain aspects that might cause you problems, and with some additional background information, we should also improve your troubleshooting.

As mentioned, in this series, we will be looking at rsyslog. We will focus on a long-established variety of Syslog, dating back to 2004, which has become a favorite among Linux distributions. As a result, rsyslog is now bundled as the default Syslog daemon on a number of the popular Unix-like flavors. It’s a substantial piece of software which is super fast, extensible and reportedly available on around 10 Unix-like distributions.

To get started, I’ll provide an introduction into how your Linux system thinks of its logging at a basic level.

Logging Detail

For years, sysadmins have debated at what level of detail to log their system data. There are a number of settings to affect the amount of logging detail which your server generates. We will look at how to configure the varying levels shortly. It’s a tradeoff between using up disk space too quickly versus not having enough information in your logs.

Let’s look at some of the detail settings which we can choose between. In Listing 1, we can see how the kernel ranks its errors, from zero to seven.

#define KERN_EMERG    "<0>"  /* system is unusable               */
#define KERN_ALERT    "<1>"  /* action must be taken immediately */
#define KERN_CRIT     "<2>"  /* critical conditions              */
#define KERN_ERR      "<3>"  /* error conditions                 */
#define KERN_WARNING  "<4>"  /* warning conditions               */
#define KERN_NOTICE   "<5>"  /* normal but significant condition */
#define KERN_INFO     "<6>"  /* informational                    */
#define KERN_DEBUG    "<7>"  /* debug-level messages             */

Listing 1: How the kernel header file "kernel.h" defines logging levels.

The rsyslog basics

When first looking at a new package, the syntax of its configuration files is usually a good indication of how tricky that package might be to pick up. The following syntax is the type that "sysklogd" used in the past and that the modern rsyslog uses too:

mail.info    /var/log/mail.log
mail.err     @server.chrisbinnie.tld

Thankfully, it is very simple once you understand the basics. The function shown in this logging example is also known as forwarding. The info logs (in other words, informational messages) are simply dropped locally into the file /var/log/mail.log, whereas the err, or error-condition, logs are sent off somewhere else, to another server; in this case, that server is called server.chrisbinnie.tld. Incidentally, severity names such as "INFO" and "info" are not case-sensitive, and their case is frequently interchanged for clarity within Syslog configs.

The writers of the powerful Syslog software rsyslog (Rainer Gerhards is apparently the main author) refer to it as "the rocket-fast system for log processing."

This high-performance package can seemingly manage a million messages per second (when asked to drop local logging events to disk if “limited processing” is applied)! That’s impressive by any measure.

The home page of rsyslog lists a few of its features:

  • Multi-threading

  • TCP, SSL, TLS, RELP

  • MySQL, PostgreSQL, Oracle, and more

  • Filter any part of syslog message

  • Fully configurable output format

  • Suitable for enterprise-class relay chains

That is exactly the functionality that you might look for in a best-of-breed Syslog daemon. In the next article, I’ll take a detailed look at the main config file. Then, I’ll cover logfile rules, log rotation, and some important networking considerations.

Chris Binnie is a Technical Consultant with 20 years of Linux experience and a writer for Linux Magazine and Admin Magazine. His new book Linux Server Security: Hack and Defend teaches you how to launch sophisticated attacks, make your servers invisible and crack complex passwords.

Advance your career in system administration! Check out the Essentials of System Administration course from The Linux Foundation.

SUSE Buys HPE’s OpenStack and Cloud Foundry Assets

Today, SUSE announced that it is acquiring OpenStack and Cloud Foundry (the Platform-as-a-Service to OpenStack’s Infrastructure-as-a-Service) assets and talent from the troubled HPE. This follows HPE’s decision to sell off (or “spin-merge” in HPE’s own language) its software business (including Autonomy, which HP bought for $11 billion, followed by a $9 billion write-off) to Micro Focus. And to bring this full circle: Micro Focus also owns SUSE, and SUSE is now picking up HPE’s OpenStack and Cloud Foundry assets.

Read more at TechCrunch