
Your Instant Kubernetes Cluster

This is a condensed and updated version of my previous tutorial Kubernetes in 10 minutes. I’ve removed just about everything I can so this guide still makes sense. Use it when you want to create a cluster on the cloud or on-premises as fast as possible.

1.0 Pick a host

We will be using Ubuntu 16.04 for this guide so that you can copy/paste all the instructions. Here are several environments where I’ve tested this guide. Just pick where you want to run your hosts.

Read more at Alex Ellis blog

What is the IoT? Everything You Need to Know About the Internet of Things Right Now

What is the Internet of Things?

The Internet of Things, or IoT, refers to billions of physical devices around the world that are now connected to the internet, collecting and sharing data. Thanks to cheap processors and wireless networks, it’s possible to turn anything, from a pill to an aeroplane, into part of the IoT. This adds a level of digital intelligence to devices that would be otherwise dumb, enabling them to communicate without a human being involved, and merging the digital and physical worlds.

Pretty much any physical object can be transformed into an IoT device if it can be connected to the internet and controlled that way. A lightbulb that can be switched on using a smartphone app is an IoT device, as is a motion sensor or a smart thermostat in your office or a connected streetlight. 

Read more at ZDNet

Containers, the GPL, and Copyleft: No Reason for Concern

Though open source is thoroughly mainstream, new software technologies and old technologies that get newly popularized sometimes inspire hand-wringing about open source licenses. Most often the concern is about the GNU General Public License (GPL), and specifically the scope of its copyleft requirement, which is often described (somewhat misleadingly) as the GPL’s derivative work issue.

One imperfect way of framing the question is whether GPL-licensed code, when combined in some sense with proprietary code, forms a single modified work such that the proprietary code could be interpreted as being subject to the terms of the GPL. While we haven’t yet seen much of that concern directed to Linux containers, we expect more questions to be raised as adoption of containers continues to grow. But it’s fairly straightforward to show that containers do not raise new or concerning GPL scope issues.

Read more at OpenSource.com

How to Fix the Docker and UFW Security Flaw

If you use Docker on Linux, chances are your firewall duties have been relegated to Uncomplicated Firewall (UFW). If that's the case, you may not know that the combination of Docker and UFW poses a bit of a security issue. Why? Because Docker bypasses UFW and alters iptables directly, so that a container can bind to a port. This means all those UFW rules you have set won't apply to Docker containers.

Let me demonstrate this.

I’m going to set up UFW (running on Ubuntu Server 16.04), so that the only thing it will allow through is SSH traffic. To do this, I open a terminal and issue the following commands:

sudo ufw allow ssh
sudo ufw default deny incoming
sudo ufw enable
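A common mitigation, sketched here as an assumption on my part rather than something taken from the article, is to tell the Docker daemon to leave iptables alone via its /etc/docker/daemon.json configuration file:

```json
{
  "iptables": false
}
```

After restarting Docker (sudo systemctl restart docker), containers no longer punch holes through the firewall behind UFW's back, though published ports then need explicit UFW rules, and outbound NAT for containers may need extra configuration.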

Read more at TechRepublic

Linux Kernel 4.15: ‘An Unusual Release Cycle’

Linus Torvalds released version 4.15 of the Linux kernel on Sunday, once again a week later than scheduled, making it the second release in a row to slip. The culprits for the late release were the Meltdown and Spectre bugs, as these two vulnerabilities forced developers to submit major patches well into what should have been the last cycle. Torvalds was not comfortable rushing the release, so he gave it another week.

Unsurprisingly, the first big bunch of patches worth mentioning were those designed to sidestep Meltdown and Spectre. To avoid Meltdown, a problem that affects Intel chips, developers have implemented Page Table Isolation (PTI) for the x86 architecture. If for any reason you want to turn this off, you can use the pti=off kernel boot option.

Spectre v2 affects both Intel and AMD chips and, to avoid it, the kernel now comes with the retpoline mechanism. Retpoline requires a version of GCC that supports the -mindirect-branch=thunk-extern functionality. As with PTI, the Spectre-inhibiting mechanism can be turned off. To do so, use the spectre_v2=off option at boot time. Although developers are working to address Spectre v1, at the time of writing there is still no solution, so there is no patch for this bug in 4.15.
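Both options go on the kernel command line. As a sketch (assuming a GRUB-based Ubuntu system, and again, disabling these mitigations is not recommended), the line in /etc/default/grub would look something like:

```shell
# Illustrative /etc/default/grub entry: disables both KPTI and the
# retpoline-based Spectre v2 mitigation at the next boot.
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash pti=off spectre_v2=off"
```

followed by sudo update-grub and a reboot to make it take effect.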

The solution for Meltdown on ARM has also been pushed to the next development cycle, but there is a remedy for the bug on PowerPC with the RFI flush of L1-D cache feature included in this release.

An interesting side effect of all of the above is that new kernels now come with a /sys/devices/system/cpu/vulnerabilities/ virtual directory. This directory shows the vulnerabilities affecting your CPU and the mitigations currently being applied.
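On a running 4.15+ kernel you can inspect that directory directly; a minimal sketch:

```shell
# Print each vulnerability file and its current mitigation status.
# The directory only exists on kernels 4.15 and newer, so skip it quietly
# if it is absent.
for f in /sys/devices/system/cpu/vulnerabilities/*; do
    [ -e "$f" ] || continue              # directory absent on older kernels
    printf '%s: %s\n' "${f##*/}" "$(cat "$f")"
done
```

On a patched Intel system the meltdown entry typically reports something like "Mitigation: PTI".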

The issues with buggy chips (and the manufacturers that keep things like this secret) have revived the call for the development of viable open source alternatives. This brings us to the partial support for RISC-V chips that has now been merged into the mainline kernel. RISC-V is an open instruction set architecture that allows manufacturers to create their own implementations of RISC-V chips, and it has resulted in several open source chips. While RISC-V chips are currently used mainly in embedded devices, powering things like smart hard disks or Arduino-like development boards, RISC-V proponents argue that the architecture is also well-suited for use on personal computers and even in multi-node supercomputers.

The support for RISC-V, as mentioned above, is still incomplete, and includes the architecture code but no device drivers. This means that, although a Linux kernel will run on RISC-V, there is no significant way to actually interact with the underlying hardware. That said, RISC-V is not vulnerable to any of the bugs that have dogged other closed architectures, and development for its support is progressing at a brisk pace, as the RISC-V Foundation has the support of some of the industry's biggest heavyweights.

Other stuff that’s new in kernel 4.15

Torvalds has often declared he likes things boring. Fortunately for him, he says, apart from the Spectre and Meltdown messes, most of the other things that happened in 4.15 were very much run of the mill, such as incremental improvements for drivers, support for new devices, and so on. However, there were a few more things worth pointing out:

  • AMD got support for Secure Encrypted Virtualization. This allows the kernel to fence off the memory a virtual machine is using by encrypting it. The encrypted memory can only be decrypted by the virtual machine that is using it. Not even the hypervisor can see inside it. This means that data being worked on by VMs in the cloud, for example, is safe from being spied on by any other process outside the VM.
  • AMD GPUs get a substantial boost thanks to the inclusion of display code. This gives mainline support to Radeon RX Vega and Raven Ridge cards and also implements HDMI/DP audio for AMD cards.
  • Raspberry Pi aficionados will be glad to know that the 7” touchscreen is now natively supported, which is guaranteed to lead to hundreds of fun projects.

To find out more, you can check out the write-ups at Kernel Newbies and Phoronix.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

Index: A Focus on the Future of Code and Community

One of the most significant challenges developers face is keeping up with the increasingly rapid pace of change in our industry. With each new innovation comes a new crop of vendors and best practices, and staying on top of your game can become a second profession in itself.

Cloud, containers, data, analytics, IoT, AI, machine learning, serverless architecture, blockchain: Behind all of these rapidly evolving technologies are the programming languages and developers who are leading the charge into the next era of innovation.

An ideal way for developers to understand all this is through conversations with other developers. We believe conversation about development—like innovation itself—is best when it happens in the open. This idea was the catalyst for Index, a first-of-its-kind, developer-focused event that will take place in San Francisco Feb. 20-22 at Moscone West.

Read more at IBM developerWorks

A Look Inside Facebook’s Open Source Program

Open source becomes more ubiquitous every year, appearing everywhere from government municipalities to universities. Companies of all sizes are also increasingly turning to open source software. In fact, some companies are taking open source a step further by supporting projects financially or working with developers.

Facebook’s open source program, for example, encourages others to release their code as open source, while working and engaging with the community to support open source projects.

Read more at OpenSource.com

Q&A on Machine Learning and Kubernetes with David Aronchick of Google from Kubecon 2017

At the recently concluded Kubecon in Austin, TX, attended by over 4,000 engineers, Kubernetes was front, left and center. Given the heavy compute requirements typical of training workloads, machine learning topics and their synergy with Kubernetes were discussed in many sessions.

Kubeflow is a platform for making Machine Learning on Kubernetes easy, portable and scalable by providing manifests for creating:

  • A JupyterHub to create and manage Jupyter notebooks
  • A TensorFlow training controller that adapts to both CPUs and GPUs, and
  • A TensorFlow serving container

Read more at InfoQ

Why You Should Care About Diversity and Inclusion

Aubrey Blanche, Global Head of Diversity and Inclusion at Atlassian, joins us in this latest edition of The New Stack Makers podcast to talk about the difference between diversity and inclusion and why anyone should care.

“Diversity is being invited to the party,” she said. “Inclusion is being glad you’re there.”

When you create an inclusive culture, Blanche explained, business thrives. Employees who feel comfortable bringing their authentic selves to work perform better and are happier at work, which leads to less turnover, which leads to greater profits.

Read more at The New Stack

How to Use DockerHub

In the previous articles, we learned the basics of Docker terminology, how to install Docker on desktop Linux, macOS, and Windows, and how to create container images and run them on your system. In this last article in the series, we will talk about using images from DockerHub and publishing your own images to DockerHub.

First things first: what is DockerHub and why is it important? DockerHub is a cloud-based repository run and managed by Docker Inc. It’s an online repository where Docker images  can be published and used by other users. There are both public and private repositories. If you are a company, you can have a private repository for use within your own organization, whereas public images can be used by anyone.

You can also use official Docker images that are published publicly. I use many such images, including for my test WordPress installations, KDE plasma apps, and more. Although we learned last time how to create your own Docker images, you don't have to. There are thousands of images published on DockerHub for you to use. DockerHub is hardcoded into Docker as the default registry, so when you run the docker pull command against any image, it will be downloaded from DockerHub.
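Behind the scenes, a short image name is expanded to a fully qualified reference before the pull: official images live under the library namespace on docker.io, and user images keep their own namespace. A small illustrative helper (plain shell string handling, not a Docker command) sketches the convention:

```shell
# Illustrative only: how a short image name maps to a fully qualified
# reference on the default registry (docker.io).
expand() {
    name="$1"
    case "$name" in
        */*) echo "docker.io/$name" ;;          # user image, e.g. bitnami/nginx
        *)   echo "docker.io/library/$name" ;;  # official image, e.g. ubuntu
    esac
}
expand ubuntu          # docker.io/library/ubuntu
expand bitnami/nginx   # docker.io/bitnami/nginx
```

When no tag is given, Docker additionally assumes the :latest tag.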

Download images from Docker Hub and run locally

Please check out the previous articles in the series to get started. Then, once you have Docker running on your system, you can open the terminal and run:

$ docker images

This command will show all the docker images currently on your system. Let’s say you want to deploy Ubuntu on your local machine; you would do:

$ docker pull ubuntu

If you already have an Ubuntu image on your system, the command will automatically update it to the latest version. So, if you want to update your existing images, just run the docker pull command. Easy peasy: it's like apt-get upgrade without any muss and fuss.

You already know how to run an image:

$ docker run -it <image name>

For example, to run the Ubuntu image:

$ docker run -it ubuntu

The command prompt should change to something like this:

root@1b3ec4621737:/#

Now you can run any command and utility that you use on Ubuntu. It's all safe and contained. You can run all the experiments and tests you want on that Ubuntu. Once you are done testing, you can nuke the image and download a new one, without the system overhead you would get with a virtual machine.

You can exit that container by running the exit command:

$ exit

Now let’s say you want to install Nginx on your system. Run docker search to find the desired image:

$ docker search nginx


As you can see, there are many images of Nginx on DockerHub. Why? Because anyone can publish an image. Various images are optimized for different projects, so you can pick the one that fits your use case.

Let’s say you want to pull Bitnami’s Nginx container:

$ docker pull bitnami/nginx

Now run it with:

$ docker run -it bitnami/nginx

How to publish images to DockerHub

Previously, we learned how to create a Docker image, and we can easily publish that image to DockerHub. First, you need to log into DockerHub. If you don’t already have an account, please create one. Then, you can open a terminal and log in:

$ docker login --username=<USERNAME>

Replace <USERNAME> with the name of your username for Docker Hub. In my case it’s arnieswap:

$ docker login --username=arnieswap

Enter the password, and you are logged in. Now run the docker images command to get the ID of the image that you created last time.

$ docker images


Now, suppose you want to push that image to DockerHub. First, you need to tag the image:

$ docker tag e7083fd898c7 arnieswap/my_repo:testing
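The tag above follows the <user>/<repository>:<tag> convention. A tiny illustration, using plain shell string operations rather than any Docker command, of how the reference breaks apart:

```shell
# Split an image reference into its parts (illustration only).
ref="arnieswap/my_repo:testing"
repo="${ref%:*}"       # arnieswap/my_repo
tag="${ref##*:}"       # testing
user="${repo%%/*}"     # arnieswap
echo "user=$user repo=$repo tag=$tag"
# → user=arnieswap repo=arnieswap/my_repo tag=testing
```

The user part must match your DockerHub username, or the push will be rejected.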

Now push that image:

$ docker push arnieswap/my_repo

The push refers to repository [docker.io/arnieswap/my_repo]
12628b20827e: Pushed
8600ee70176b: Mounted from library/ubuntu
2bbb3cec611d: Mounted from library/ubuntu
d2bb1fc88136: Mounted from library/ubuntu
a6a01ad8b53f: Mounted from library/ubuntu
833649a3e04c: Mounted from library/ubuntu
testing: digest: sha256:286cb866f34a2aa85c9fd810ac2cedd87699c02731db1b8ca1cfad16ef17c146 size: 1569

Eureka! Your image is being uploaded. Once finished, open DockerHub, log into your account, and you can see your very first Docker image. Now anyone can deploy your image. It’s the easiest and fastest way to develop and distribute software. Whenever you update the image, users can simply run:

$ docker run arnieswap/my_repo

Now you know why people love Docker containers. They solve many problems that traditional workloads face and allow you to develop, test, and deploy applications in no time. And, by following the steps in this series, you can try them out for yourself.
