
Mozilla and Tor Release Urgent Update for Firefox 0-day Under Active Attack

Developers with both Mozilla and Tor have published browser updates that patch a critical Firefox vulnerability being actively exploited to deanonymize people using the privacy service.

“The security flaw responsible for this urgent release is already actively exploited on Windows systems,” a Tor official wrote in an advisory published Wednesday afternoon. “Even though there is currently, to the best of our knowledge, no similar exploit for OS X or Linux users available, the underlying bug affects those platforms as well. Thus we strongly recommend that all users apply the update to their Tor Browser immediately.”

Read more at Ars Technica

Canonical Offers Direct Docker Support to Ubuntu Users

Enterprise Ubuntu users running Docker in production now have a new source for Docker support: from Canonical.

Earlier today, Canonical and Docker announced joint support for the commercial edition of Docker Engine on Ubuntu. The pair also will provide updates for Docker on Ubuntu through an application delivery system Canonical originally devised.

Read more at InfoWorld

Build a Hadoop Cluster in AWS in Minutes

Check out this process that will let you get a Hadoop cluster up and running on AWS in two easy steps.

I use Apache Hadoop to process huge data loads. Setting up Hadoop in a cloud provider, such as AWS, involves spinning up a bunch of EC2 instances, configuring nodes to talk to each other, installing software, configuring the master and data nodes’ config files, and starting services.

This was a good use case for automation, considering the problems I wanted to solve:

  • How do I build the cluster in minutes (as opposed to hours and maybe even days for a large number of data nodes)?

Read more at DZone

Federating Your Kubernetes Clusters — The New Road to Hybrid Clouds

Over the past six months, federation of Kubernetes clusters has moved from proof of concept to a release that is worth checking out. Federation was first introduced under the code name Ubernetes, and then, in Kubernetes v1.3.0, cluster federation appeared. Now, there is extensive documentation on the topic.

Why is it such a big deal? If you have followed the development of Kubernetes, you probably know that it is an open source rewrite of Borg, the system that Google uses internally to manage its containerized workloads across data centers. If you read the paper, you will notice that a single Kubernetes cluster is the equivalent of a Borg cell. As such, Kubernetes itself is not the complete equivalent of Borg. However, by adding cluster federation, Kubernetes can now distribute workloads across multiple clusters. This opens the door for more Borg-like features, such as failover across zones, geographic load-balancing, workload migration, and so on.

Indeed, cluster federation in Kubernetes is a hybrid cloud solution.

How does it work?

The picture below, taken from the Tectonic blog on Federation, shows a high-level architectural view.

[Figure: high-level architecture of Kubernetes cluster federation. Image courtesy of CoreOS.]

You see three Kubernetes clusters (i.e., San Francisco, New York, and Berlin). Each of those runs an API server, controller, its own scheduler and etcd-based key value store. This is the standard Kubernetes cluster setup. You can use any of these clusters from your local machine, assuming you have an account set up on them and associated credentials. With the k8s client — kubectl — you can create multiple contexts and switch between them. For example, to list the nodes in each cluster, you would do something like:

```
$ kubectl config use-context sanfrancisco
$ kubectl get nodes
$ kubectl config use-context newyork
$ kubectl get nodes
$ kubectl config use-context berlin
$ kubectl get nodes
```
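If you have not yet defined those contexts, kubectl config can create them. Here is a minimal sketch; the server address, token, and user name are placeholders for your own clusters, not values from the walkthrough:

```
# Register the cluster endpoint, some credentials, and a context tying them together
$ kubectl config set-cluster sanfrancisco --server=https://sf.example.com:6443
$ kubectl config set-credentials sf-admin --token=REPLACE_WITH_TOKEN
$ kubectl config set-context sanfrancisco --cluster=sanfrancisco --user=sf-admin
```

Repeat the same three commands for newyork and berlin with their own endpoints and credentials.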

With federation, Kubernetes adds a separate API server (i.e., the Federation API server), its own etcd-based key-value store, and a control plane. In effect, this is the same setup as a regular cluster, but at a higher level of abstraction. Instead of registering individual nodes with the Federation API server, we will register full clusters.

A cluster is defined as a federated API server resource, in a format consistent with the rest of the Kubernetes API specification. For example:

```
apiVersion: federation/v1beta1
kind: Cluster
metadata:
  name: new-york
spec:
  serverAddressByClientCIDRs:
    - clientCIDR: "0.0.0.0/0"
      serverAddress: "${NEWYORK_SERVER_ADDRESS}"
  secretRef:
    name: new-york
```

Adding a cluster to the federation is a simple creation step on the federated API server:

```
$ kubectl --context=federated-cluster create -f newyork.yaml
```

In the sample above, notice that I use a context called federated-cluster and that I use the standard k8s client. Indeed, the Federation API server extends the Kubernetes API and can be talked to using kubectl.

Also note that the federation components can (and actually should) run within a Kubernetes cluster.

Creating Your Own Federation

I will not show you all the steps here, as it would honestly make for a long blog post. The official documentation is fine, but the best way to understand the entire setup is the walkthrough from Kelsey Hightower. This walkthrough uses Google GKE and Google Cloud DNS, but it can be adapted relatively easily to your own setup using your own on-premises clusters.

In short, the steps to create a Federation are:

1. Pick the cluster where you will run the federation components, and create a namespace where you will run them.

2. Create a Service for the Federation API server that you can reach (e.g., LoadBalancer, NodePort, Ingress), create a secret containing the credentials for the account you will use on the federation, and launch the API server as a Deployment.

3. Create a local context for the Federation API server so that you can use kubectl to target it. Generate a kubeconfig file for it, store it as a secret, and launch the control plane. The control plane will be able to authenticate with the API server using that kubeconfig secret.

4. Once the control plane is running, you are ready to add the clusters. You will need to create a secret for each cluster's kubeconfig. Then, with a Cluster resource manifest in hand (see above), you can use kubectl to create them in the federation context, as shown in the sketch below.
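As a rough sketch only (the namespace, secret names, and kubeconfig paths below are illustrative, not the exact ones from the walkthrough), the kubectl side of those steps looks something like this:

```
# 1. Namespace for the federation components, created in the host cluster
$ kubectl --context=sanfrancisco create namespace federation

# 3. A local context pointing at the Federation API server
$ kubectl config set-context federated-cluster --cluster=federation --user=federation-admin

# 4. One secret per member cluster, holding its kubeconfig, then register the cluster
$ kubectl --context=sanfrancisco create secret generic new-york \
    --from-file=kubeconfig=newyork-kubeconfig --namespace=federation
$ kubectl --context=federated-cluster create -f newyork.yaml
```

The Federation API server and control-plane Deployments themselves (steps 2 and 3) need longer manifests, which is exactly what the walkthrough covers.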

The end result is that you should have a working federation: your clusters should be registered and ready. The following command will show them:

```
$ kubectl --context=federated-cluster get clusters
```

Migrating a Workload From One Cluster to Another

As federation matures, we can expect to see most Kubernetes resources available on the Federation API. Currently, only Events, Ingress, Namespaces, Secrets, Services, and ReplicaSets are supported. Deployments should not be far off, because ReplicaSets are already in there, and Deployments will be great because they will bring us rolling updates and rollbacks across clusters.

Creating a workload in the Federation is exactly the same thing as doing it on a single cluster. Create a resource file for a replica set and create it with kubectl targeting the federated cluster.

```
$ cat nginx.yaml
apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:
  name: nginx
spec:
  replicas: 4
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.10

$ kubectl --context=federated-cluster create -f nginx.yaml
```

The really great concept, though, even at this early stage in federation support, is that you can already give some preference to a cluster in the federation. That way, when the Pods start, they may be scheduled more on one cluster and less on another. This is done via an annotation.

Add the following to the metadata of the ReplicaSet above:

```
  annotations:
    federation.kubernetes.io/replica-set-preferences: |
        {
            "rebalance": true,
            "clusters": {
                "new-york": {
                    "minReplicas": 0,
                    "maxReplicas": 10,
                    "weight": 1
                },
                "berlin": {
                    "minReplicas": 0,
                    "maxReplicas": 10,
                    "weight": 1
                }
            }
        }
```

If you scale to 10 replicas, you will see five Pods appear on each cluster, since each cluster has the same weight in the annotation.

Now try this: edit the annotation and change the weights. For example, put 20 on one of the clusters and 0 on the other.
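The clusters section of the annotation would then look something like this (using the weights from that example):

```
            "clusters": {
                "new-york": {
                    "minReplicas": 0,
                    "maxReplicas": 10,
                    "weight": 20
                },
                "berlin": {
                    "minReplicas": 0,
                    "maxReplicas": 10,
                    "weight": 0
                }
            }
```

Then re-apply the manifest: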

```
$ kubectl --context=federated-cluster apply -f nginx.yaml
```

You will see all your Pods “move” over to one cluster and disappear from the other. Edit the ReplicaSet again, switch the weights the other way, and do another apply. You will see the Pods “move” back the other way.

This is not migration in the sense of copying memory between two hypervisors, as we can do with VMs, but it is migration in a microservices sense, where we can move services from region to region.

And this is just the beginning!

Read the previous articles in this series:

Getting Started With Kubernetes Is Easy With Minikube

Rolling Updates and Rollbacks using Kubernetes Deployments

Helm: The Kubernetes Package Manager

Enjoy Kubernetes with Python

Want to learn more about Kubernetes? Check out the new, online, self-paced Kubernetes Fundamentals course from The Linux Foundation. Sign Up Now!

Sebastien Goasguen (@sebgoa) is a long-time open source contributor. A member of the Apache Software Foundation and of the Kubernetes organization, he is also the author of the O’Reilly Docker Cookbook. He recently founded skippbox, which offers solutions, services, and training for Kubernetes.

Free Linux Foundation Webinar on Hyperledger: Blockchain Technologies for Business

You may have heard about the world-changing potential of blockchains — the technology behind cryptocurrencies such as Bitcoin and Ethereum. But what are they exactly? And why are companies clamoring to use and develop blockchain technologies?

“It’s not too outlandish to think that in five years time, every Fortune 500 company and perhaps even the top 1,000 will have deployed a blockchain somewhere,” said Hyperledger Executive Director Brian Behlendorf, in a recent article on Linux.com.

In a free webinar to be held Dec. 1 at 10 a.m. Pacific, guest speaker Dan O’Prey, CMO of Digital Asset Holdings, will provide an overview of blockchain technology and the Hyperledger Project at The Linux Foundation.

Hyperledger is an umbrella project for software developer communities building open source blockchain and related technologies. It is a neutral, foundational community for participating companies such as IBM, Intel, Cisco, JPMorgan, Wells Fargo, the London Stock Exchange, Red Hat, and Swift to work together to develop the technology and address issues of code provenance, patent rights, standards, and policy.

In this webinar, Dan will cover:

  • The foundations of distributed ledger technologies, smart contracts, and other components that comprise the modern blockchain technology stack.

  • Why a new blockchain project was needed for business and what the main use cases and requirements are for commercial applications, plus an overview of the history and projects under the Hyperledger umbrella and how you can get involved.

Register now to attend the webinar, Hyperledger: Blockchain Technologies for Business! Can’t attend? Register anyway to make sure you get a link to the replay, delivered straight to your inbox.

Remote Logging With Syslog, Part 1: The Basics

A problematic scenario, sometimes caused by an application’s logging misconfiguration, is when the /var/log/messages file fills up, so that the /var partition becomes full. Thanks to the very nature of how systems work, there are always times when your system’s logging causes unexpected issues. Thus, it’s key that you understand, and have the ability to control, how your logging works and where your logs are saved.

Over the years when using Unix-like systems, I’ve been exposed to three or four different versions of the operating system’s default logging tool, known as Syslog. Here I will look at the logging daemon called rsyslog, a superfast Syslog product.

Before looking at the package itself, we’ll start by exploring the configuration of your system logging locally. Then we will use that knowledge to go one step further and configure a remote Syslog server. We will also explore certain aspects that might cause you problems, and the background information along the way should improve your troubleshooting.

As mentioned, in this series we will be looking at rsyslog, a long-established variety of Syslog dating back to 2004 that has become a favorite among Linux distributions and is now bundled as the default Syslog daemon on a number of popular Unix-like flavors. It’s a substantial piece of software: super fast, extensible, and reportedly available on around 10 Unix-like platforms.

To get started, I’ll provide an introduction to how your Linux system thinks about its logging at a basic level.

Logging Detail

For years, sysadmins have debated what level of detail to log their system data at. There are a number of settings that affect the amount of logging detail your server generates; we will look at how to configure the varying levels shortly. It’s a tradeoff between using up disk space too quickly and not having enough information in your logs.

Let’s look at some of the detail settings which we can choose between. In Listing 1, we can see how the kernel ranks its errors, from zero to seven.

```
#define KERN_EMERG    "<0>"  /* system is unusable               */
#define KERN_ALERT    "<1>"  /* action must be taken immediately */
#define KERN_CRIT     "<2>"  /* critical conditions              */
#define KERN_ERR      "<3>"  /* error conditions                 */
#define KERN_WARNING  "<4>"  /* warning conditions               */
#define KERN_NOTICE   "<5>"  /* normal but significant condition */
#define KERN_INFO     "<6>"  /* informational                    */
#define KERN_DEBUG    "<7>"  /* debug-level messages             */
```

Listing 1: How the kernel header file “kernel.h” defines logging levels.

The rsyslog Basics

When first looking at a new package, the syntax of its configuration is usually a good indication of how tricky that package might be to pick up. The following syntax is the type that “sysklogd” used in the past, and modern rsyslog uses it too:

```
mail.info     /var/log/mail.log
mail.err      @server.chrisbinnie.tld
```

Thankfully, it is very simple once you understand the basics. This logging example also demonstrates what is known as forwarding. The info logging mentioned (in other words, informational messages) is simply dropped locally into the file /var/log/mail.log, whereas the err, or error-condition, logs are sent off to another server; in this case, that server is called server.chrisbinnie.tld. Incidentally, settings such as “INFO” and “info” are not case-sensitive, and their case is frequently interchanged for clarity within Syslog config.
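To make the selector syntax a little more concrete, here is an illustrative /etc/rsyslog.conf excerpt built around the author’s example hostname (the TCP and emergency rules are illustrative additions; a single @ forwards over UDP, a double @@ over TCP):

```
# Write informational mail messages to a local file
mail.info       /var/log/mail.log
# Forward error-level mail messages to a remote server over UDP (single @)
mail.err        @server.chrisbinnie.tld
# Or forward them over TCP to port 514 (double @@)
mail.err        @@server.chrisbinnie.tld:514
# Emergencies go to every logged-in user
*.emerg         :omusrmsg:*
```

You can generate a test message at a given facility and priority with the logger utility, for example logger -p mail.err "test message", and then check where it lands.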

The writers of the powerful Syslog software rsyslog (Rainer Gerhards is apparently the main author) refer to it as “The Rocket-Fast System For Log Processing.”

This high-performance package can seemingly manage a million messages per second when dropping local logging events to disk, provided “limited processing” is applied. That’s impressive by any measure!

The rsyslog home page lists a few of its features:

  • Multi-threading

  • TCP, SSL, TLS, RELP

  • MySQL, PostgreSQL, Oracle, and more

  • Filter any part of syslog message

  • Fully configurable output format

  • Suitable for enterprise-class relay chains

That is exactly the functionality that you might look for in a best-of-breed Syslog daemon. In the next article, I’ll take a detailed look at the main config file. Then, I’ll cover logfile rules, log rotation, and some important networking considerations.

Chris Binnie is a Technical Consultant with 20 years of Linux experience and a writer for Linux Magazine and Admin Magazine. His new book Linux Server Security: Hack and Defend teaches you how to launch sophisticated attacks, make your servers invisible and crack complex passwords.

Advance your career in system administration! Check out the Essentials of System Administration course from The Linux Foundation.

SUSE Buys HPE’s OpenStack and Cloud Foundry Assets

Today, SUSE announced that it is acquiring OpenStack and Cloud Foundry (the Platform-as-a-Service to OpenStack’s Infrastructure-as-a-Service) assets and talent from the troubled HPE. This follows HPE’s decision to sell off (or “spin-merge” in HPE’s own language) its software business (including Autonomy, which HP bought for $11 billion, followed by a $9 billion write-off) to Micro Focus. And to bring this full circle: Micro Focus also owns SUSE, and SUSE is now picking up HPE’s OpenStack and Cloud Foundry assets.

Read more at TechCrunch

OpenHPC Pedal Put to the Compute Metal

The ultimate success of any platform depends on the seamless integration of diverse components into a synergistic whole – well, as much as is possible in the real world – while at the same time being flexible enough to allow for components to be swapped out and replaced by others to suit personal preferences.

Is OpenHPC, the open source software stack aimed at simulation and modeling workloads that was spearheaded by Intel a year ago, going to be the dominant and unifying platform for high performance computing? Will OpenHPC be analogous to the Linux distributions that grew up around the open source Linux operating system kernel, in that it creates a platform and helps drive adoption in the datacenters of the world because it makes HPC easier and uniform?

Read more at TheNextPlatform

Logging: Change Your Mind

Most people consider logging something nice to have: a supplement to the code that matters, or something you add in order to debug a problem. Having worked for the past year in a distributed microservices architecture, I finally discovered what logging truly is: it’s Google Analytics for your code.

Think about this: the business asks you to track interesting behaviour so it has the information it needs to make informed decisions. They want to know if a feature is driving more customers to them, if a campaign is getting traction, if one solution is preferable to another.

Shouldn’t you, as a professional, have the same understanding about the code you ship?

Read more at JUXT

Open Source Dependency Management Is a Balancing Act


During my career, I have spent a lot of time packaging other people’s code, writing my own, and working on large software frameworks. I have seen projects that still haven’t released a stable version, never quite hitting 1.0, while others made 1.0 releases within months of beginning development and then quickly moved on to 2.0, 3.0, etc. There is quite a variance in these release cycles, and this, coupled with maintaining large projects, can make things difficult.

I will go through some of the decisions we have faced in projects I have worked on and the pressures on the project. On the one extreme, users would like to have a stable API that never changes, with dependencies that don’t specify a minimum version so that they can choose whatever version works best. The other extreme pushes us to use the latest features of the language, of hardware accelerated APIs, compilers, and the libraries we depend upon.

Read more at OpenSource.com