
Crossing the AI Chasm

Every day brings another exciting story of how artificial intelligence is improving our lives and businesses. AI is already analyzing x-rays, powering the Internet of Things and recommending best next actions for sales and marketing teams. The possibilities seem endless.

But for every AI success story, countless projects never make it out of the lab. That’s because putting machine learning research into production and using it to offer real value to customers is often harder than developing a scientifically sound algorithm. Many companies I’ve encountered over the last several years have faced this challenge, which I refer to as “crossing the AI chasm.”

I recently presented those learnings at ApacheCon, and in this article I’ll share my top four lessons for overcoming both the technical and product chasms that stand in your path.

Read more at TechCrunch

Multi-Arch Docker Images

Although the promise of Docker is the elimination of differences when moving software between environments, you’ll still face the problem that you can’t cross platform boundaries, i.e., you can’t run a Docker image built for x86_64 on an ARM board such as the Raspberry Pi. This means that if you want to support multiple architectures, you typically end up tagging images with their arch (e.g. myimage-arm and myimage-x86_64). However, it turns out that the Docker image format already supports multi-platform images (or more accurately, “manifests”),…

Read more at Container Solutions

Hands On With the First Open Source Microcontroller

2016 was a great year for Open Hardware. The Open Source Hardware Association released their certification program, and late in the year, a few silicon wizards met in Mountain View to show off the latest happenings in the RISC-V instruction set architecture.

The RISC-V ISA is completely unlike any other computer architecture. Nearly every other chip you’ll find out there, from the 8051s in embedded controllers and the 6502s found in millions of toys to AVR, PIC, and whatever Intel is working on, is a closed-source design. You cannot study these chips, you cannot manufacture these chips, and if you want to use one of these chips, your list of suppliers depends on who has a licensing agreement with whom.

Read more at Hackaday

How Fast Are Unix Domain Sockets?

It has probably happened more than once: you ask your team how a reverse proxy should talk to the application backend server, and they answer, “Unix sockets. They are faster.” But how much faster is this communication? And why is a Unix domain socket faster than an IP socket when multiple processes are talking to each other on the same machine? Before answering those questions, we should figure out what Unix sockets really are.

Unix sockets are a form of inter-process communication (IPC) that allows data exchange between processes on the same machine. They are special files, in the sense that they exist in a file system like a regular file (and hence have an inode and metadata such as ownership and permissions associated with them), but they are read and written using the recv() and send() syscalls instead of read() and write(). When binding and connecting to a Unix socket, we use file paths instead of IP addresses and ports.
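To make that concrete, here is a minimal Python sketch (the socket path and message are made up for illustration): the server binds to a filesystem path rather than an address and port, and the client connects to that same path.

import os
import socket
import threading

SOCK_PATH = "/tmp/demo.sock"  # hypothetical path for this example

# Server side: bind to a file path instead of an (address, port) pair.
srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
if os.path.exists(SOCK_PATH):
    os.unlink(SOCK_PATH)
srv.bind(SOCK_PATH)
srv.listen(1)

def echo_once():
    conn, _ = srv.accept()
    conn.send(conn.recv(1024).upper())  # echo the message back, uppercased
    conn.close()

t = threading.Thread(target=echo_once)
t.start()

# Client side: connect to the same path; no IP address or port involved.
cli = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
cli.connect(SOCK_PATH)
cli.send(b"hello over a unix socket")
print(cli.recv(1024))
cli.close()
t.join()
srv.close()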

Read more at Myhro Blog

What’s the Future of Data Storage?

Storage planning today means investing in an ecosystem that supports multiple technologies. The winning vendors will create integrated delivery models that obviate the differences between particular technologies.

 What’s the future of storage? Is it internal server-based/software-defined? Hyperconverged? All-flash arrays? Cloud? Hybrid cloud?

Over the next few weeks we’re going to spend some time going over all of these different technologies and examining why each is viable (or not). But for now, I’m going to go ahead and give you the short answer: All of the above. 

Read more at HPE

Enjoy Kubernetes with Python

Over the past few years it seems that every cool and trending project is using Golang, but I am a Python guy and I feel a bit left out!

Kubernetes is no stranger to this: it is written in Go, and most clients that you will find are based on the Go client. Building a Kubernetes client has become easier. The Go client is now in its own repository, so if you want to write in Go, you can just import the Go client and not the entirety of the Kubernetes source code. Also, the Kubernetes API specification follows the OpenAPI standardization effort, so if you want to use another language, you can take the OpenAPI specification and auto-generate a client.

A couple of weeks ago, the Python in me was awakened by a new incubator project for Kubernetes: a Python client almost single-handedly developed by Google engineer @mbohlool. The client is now available on PyPI and, like most Python packages, is easily installable from source. To be fair, there already existed a Python client built on the Swagger specification, but it received little attention.

So, let’s have a look at this new Python client for Kubernetes and take it for a spin.

Getting It

As always, the easiest way is to get it from PyPI:


pip install kubernetes


Or get it from source:


pip install git+https://github.com/kubernetes-incubator/client-python.git


Or clone it and build locally:


git clone https://github.com/kubernetes-incubator/client-python.git
cd client-python
python ./setup.py install


Whatever you prefer.

Once installed, you should be able to start Python and import the kubernetes module. Check that your installation went fine.


$ python
Python 2.7.12 (default, Oct 11 2016, 14:42:23)
[GCC 4.2.1 Compatible Apple LLVM 7.0.2 (clang-700.1.81)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import kubernetes


Note that you can use either Python 2.7 or Python 3.5.

To get started using it, you will need a working Kubernetes endpoint. If you do not have one handy, use minikube.

Structure

Before we dive straight into examples, we need to look at the structure of the client. Most of the code is auto-generated. Each Kubernetes API group endpoint is usable and needs to be instantiated separately.

For example:

  • The basic resources (e.g., pods, services) will need the v1 stable API endpoint: kubernetes.client.CoreV1Api

  • The jobs resources will need the Batch endpoint: kubernetes.client.BatchV1Api

  • The deployments will need the Extensions endpoint: kubernetes.client.ExtensionsV1beta1Api

  • The horizontal pod autoscalers will need the Autoscaling endpoint: kubernetes.client.AutoscalingV1Api

In each of these endpoints, the REST methods for all resources will be available as separate Python functions. For example:

  • list_namespaces()

  • delete_namespace()

  • create_namespace()

  • patch_namespace()

The responses from these method calls are Python objects that you can easily explore interactively.
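As a quick sketch of how these pieces fit together: each API group gets its own client object, and all of them pick up the same configuration. Note that list_namespaced_job is not shown elsewhere in this article; it is assumed here to follow the generated verb_resource naming convention, and the sketch assumes kubectl proxy is running locally, as in the example further below.

# Assumes "kubectl proxy" is running locally on port 8080.
from kubernetes import client

client.Configuration().host = "http://localhost:8080"

core = client.CoreV1Api()                 # pods, services, namespaces, ...
batch = client.BatchV1Api()               # jobs
ext = client.ExtensionsV1beta1Api()       # deployments
autoscaling = client.AutoscalingV1Api()   # horizontal pod autoscalers

# Each client exposes the REST verbs as verb_resource functions:
print(core.list_namespace())
print(batch.list_namespaced_job(namespace="default"))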

The part that will take the most time is that this is a very low-level client. It can do almost everything you can do with the Kubernetes API, but it does not have any high-level wrappers to make your life easy.

For instance, creating your first Pod will involve going through the auto-generated documentation and finding out all the classes that you need to instantiate to define your Pod specification properly. I will save you some time and show you how, but the process will need to be repeated for all resources.

Example

The client can read your kubeconfig file, but the easiest configuration possible might be to run a proxy with kubectl proxy, then open Python, create the V1 API endpoint, and list your nodes:



>>> from kubernetes import client, config
>>> client.Configuration().host = "http://localhost:8080"
>>> v1 = client.CoreV1Api()
>>> v1.list_node()
...
>>> v1.list_node().items[0].metadata.name
minikube


Now the fun with Python starts. Try to list your namespaces:



>>> for ns in v1.list_namespace().items:
...     print ns.metadata.name
...
default
kube-system


To create a resource, you will need the endpoint the resource is in and some type of body. Because the API version and kind will be implicitly known by the endpoint and the function name, you will only need to create some metadata and probably some specification.

For example, to create a namespace, we need an instance of the namespace class, and we need to set the name of the namespace in the metadata. The metadata is yet another instance of a class.



>>> body = client.V1Namespace()
>>> body.metadata = client.V1ObjectMeta(name="linuxcon")
>>> v1.create_namespace(body)


Deleting a namespace is a little bit simpler, but you need to specify some deletion options:



v1.delete_namespace(name="linuxcon", body=client.V1DeleteOptions())


Now I cannot leave you without starting a Pod. A Pod is made of metadata and a specification. The specification contains a list of containers and volumes. In its simplest form, a Pod will have a single container and no volumes. Let’s start a busybox Pod: it will use the busybox image and just sleep. In the example below, you can see that we need a few classes:

  • V1Pod for the overall Pod

  • V1ObjectMeta for the metadata

  • V1PodSpec for the Pod specification

  • V1Container for the container that runs in the Pod

Let’s instantiate a pod and set its metadata, which include its name:



>>> pod = client.V1Pod()
>>> pod.metadata = client.V1ObjectMeta(name="busybox")


Now let’s define the container that will run in the Pod:



>>> container = client.V1Container()
>>> container.image = "busybox"
>>> container.args = ["sleep", "3600"]
>>> container.name = "busybox"


Now let's define the Pod’s specification, which in our case is a single container:



>>> spec = client.V1PodSpec()
>>> spec.containers = [container]
>>> pod.spec = spec


And, finally, we are ready to create our Pod in Python:



>>> v1.create_namespaced_pod(namespace="default",body=pod)


We’ll see if the community (i.e., us) decides to add some convenience functions to the Kubernetes Python client. Things like kubectl run ghost --image=ghost are quite powerful, and although that can be easily coded with this Python module, it might be worthwhile to make it a first-class function.
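As a rough illustration only: the helper below is a sketch, the name run_pod is made up, and the real kubectl run creates a Deployment rather than a bare Pod. It shows how such a convenience function could be assembled from the same classes used earlier.

from kubernetes import client  # already imported if you are following along

def run_pod(v1, name, image, namespace="default"):
    # Very rough stand-in for 'kubectl run': start a single-container Pod.
    container = client.V1Container()
    container.name = name
    container.image = image

    spec = client.V1PodSpec()
    spec.containers = [container]

    pod = client.V1Pod()
    pod.metadata = client.V1ObjectMeta(name=name)
    pod.spec = spec

    return v1.create_namespaced_pod(namespace=namespace, body=pod)

# Usage, with v1 = client.CoreV1Api() configured as above:
# run_pod(v1, "ghost", "ghost")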

Read the previous articles in this series:

Getting Started With Kubernetes Is Easy With Minikube

Rolling Updates and Rollbacks using Kubernetes Deployments

Helm: The Kubernetes Package Manager

Federating Your Kubernetes Clusters — The New Road to Hybrid Clouds

Want to learn more about Kubernetes? Check out the new, online, self-paced Kubernetes Fundamentals course from The Linux Foundation. Sign Up Now!

Sebastien Goasguen (@sebgoa) is a long time open source contributor. A member of the Apache Software Foundation and the Kubernetes organization, he is also the author of the O’Reilly Docker cookbook. He recently founded skippbox, which offers solutions, services and training for Kubernetes.

OpenSSL For Apache and Dovecot

At long last, my wonderful readers, here is your promised OpenSSL how-to for Apache, and next week you get SSL for Dovecot. In this two-part series, we’ll learn how to create our own OpenSSL certificates and how to configure Apache and Dovecot to use them.

The examples here build on these tutorials:

Creating Your Own Certificate

Debian/Ubuntu/Mint store private keys and symlinks to certificates in /etc/ssl. The certificates bundled with your system are kept in /usr/share/ca-certificates. Certificates that you install or create go in /usr/local/share/ca-certificates/.

This example for Debian/etc. creates a private key and public certificate, converts the certificate to the correct format, and symlinks it to the correct directory:


$ sudo openssl req -x509 -days 365 -nodes -newkey rsa:2048 \
   -keyout /etc/ssl/private/test-com.key -out \
   /usr/local/share/ca-certificates/test-com.crt
Generating a 2048 bit RSA private key
.......+++
......................................+++
writing new private key to '/etc/ssl/private/test-com.key'
-----
You are about to be asked to enter information that will 
be incorporated into your certificate request.
What you are about to enter is what is called a Distinguished 
Name or a DN. There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:US
State or Province Name (full name) [Some-State]:WA
Locality Name (eg, city) []:Seattle
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Alrac Writing Sweatshop
Organizational Unit Name (eg, section) []:home dungeon
Common Name (e.g. server FQDN or YOUR name) []:www.test.com
Email Address []:admin@test.com

$ sudo update-ca-certificates
Updating certificates in /etc/ssl/certs...
1 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d...

Adding debian:test-com.pem
done.
done.

CentOS/Fedora use a different file structure and don’t use update-ca-certificates, so use this command:


$ sudo openssl req -x509 -days 365 -nodes -newkey rsa:2048 \
   -keyout /etc/httpd/ssl/test-com.key -out \
   /etc/httpd/ssl/test-com.crt

The most important item is the Common Name, which must exactly match your fully qualified domain name. Everything else is arbitrary. -nodes creates a password-less certificate, which is necessary for Apache. -days defines an expiration date. It’s a hassle to renew expired certificates, but short lifetimes supposedly provide some extra security. See Pros and cons of 90-day certificate lifetimes for a good discussion.

Configure Apache

Now configure Apache to use your new certificate. If you followed Apache on Ubuntu Linux For Beginners: Part 2, all you do is modify the SSLCertificateFile and SSLCertificateKeyFile lines in your virtual host configuration to point to your new private key and public certificate. The test.com example from the tutorial now looks like this:


SSLCertificateFile /etc/ssl/certs/test-com.pem
SSLCertificateKeyFile /etc/ssl/private/test-com.key

CentOS users, see Setting up an SSL secured Webserver with CentOS in the CentOS wiki. The process is similar, and the wiki tells how to deal with SELinux.

Testing Apache SSL

The easy way is to point your web browser to https://yoursite.com and see if it works. The first time you do this, you will get a scary warning from your over-protective web browser that the site is unsafe because it uses a self-signed certificate. Ignore your hysterical browser and click through the nag screens to create a permanent exception. If you followed the example virtual host configuration in Apache on Ubuntu Linux For Beginners: Part 2, all traffic to your site will be forced over HTTPS, even if your site visitors try plain HTTP.

The cool nerdy way to test is by using OpenSSL. Yes, it has a nifty command for testing these things. Try this:


$ openssl s_client -connect www.test.com:443
CONNECTED(00000003)
depth=0 C = US, ST = WA, L = Seattle, O = Alrac Writing Sweatshop, 
OU = home dungeon, CN = www.test.com, emailAddress = admin@test.com
verify return:1
---
Certificate chain
 0 s:/C=US/ST=WA/L=Seattle/O=Alrac Writing Sweatshop/OU=home 
     dungeon/CN=www.test.com/emailAddress=admin@test.com
   i:/C=US/ST=WA/L=Seattle/O=Alrac Writing Sweatshop/OU=home 
     dungeon/CN=www.test.com/emailAddress=admin@test.com
---
Server certificate
-----BEGIN CERTIFICATE-----
[...]

This spits out a giant torrent of information. There is a lot of nerdy fun to be had with openssl s_client; for now, it is enough to know whether our web server is using the correct SSL certificate.
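If you would rather script this check, the same basic information is available from Python's standard ssl module. This is only a sketch (www.test.com is the example hostname from above, and verification is disabled because the certificate is self-signed); it assumes Python 3:

import socket
import ssl

host = "www.test.com"  # example hostname from above; use your own FQDN

ctx = ssl.create_default_context()
ctx.check_hostname = False       # self-signed certificate: skip hostname check
ctx.verify_mode = ssl.CERT_NONE  # and skip chain verification

with socket.create_connection((host, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        print(tls.version())     # negotiated TLS protocol version
        der = tls.getpeercert(binary_form=True)
        print(ssl.DER_cert_to_PEM_cert(der))  # same certificate s_client shows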

Creating a Certificate Signing Request

Should you decide to use a third-party certificate authority (CA), you will have to create a certificate signing request (CSR). You will send this to your new CA, and they will sign it and send it back to you. They may have their own requirements for creating your CSR; this is a typical example of how to create a new private key and CSR:


$ openssl req -newkey rsa:2048 -nodes \
   -keyout yourdomain.key -out yourdomain.csr

You can also create a CSR from an existing key:


$ openssl req -key yourdomain.key \
   -new -out domain.csr

That is all for today. Come back next week to learn how to properly set up Dovecot to use OpenSSL.

Additional Tutorials

Quieting Scary Web Browser SSL Alerts
How to Set Up Secure Remote Networking with OpenVPN on Linux, Part 1
How to Set Up Secure Remote Networking with OpenVPN on Linux, Part 2

Advance your career in system administration! Check out the Essentials of System Administration course from The Linux Foundation.

Understanding Open vSwitch, An OpenStack SDN Component

Open vSwitch is an open-source project that allows hypervisors to virtualize the networking layer. This caters for the large number of virtual machines running on one or more physical nodes. The virtual machines connect to virtual ports on virtual bridges (inside the virtualized network layer).

This is very similar to a physical server connecting to physical ports on a Layer 2 networking switch. These virtual bridges then allow the virtual machines to communicate with each other on the same physical node. These bridges also connect these virtual machines to the physical network for communication outside the hypervisor node.

In OpenStack, both the Neutron node and the compute node (Nova) are running Open vSwitch to provide virtualized network services.

Read more at OpenStack SuperUser

8 Docker Security Rules to Live By

Odds are, software (or virtual) containers are in use right now somewhere within your organization, probably by isolated developers or development teams to rapidly create new applications. They might even be running in production. Unfortunately, many security teams don’t yet understand the security implications of containers or know if they are running in their companies.

In a nutshell, Linux container technologies such as Docker and CoreOS Rkt virtualize applications instead of entire servers. Containers are superlightweight compared with virtual machines, with no need for replicating the guest operating system. 

Read more at InfoWorld

Zigbee Writes a Universal Language for IoT

The nonprofit Zigbee Alliance today unveiled dotdot, a universal language for the Internet of Things (IoT).

The group says dotdot takes the IoT language at Zigbee’s application layer and enables it to work across different networking technologies.

This is important because currently, most IoT devices don’t speak the same language, even if they use the same wireless technology. The result is an Internet of Things that is often a patchwork of translations done in the cloud. And platform and app developers must maintain a growing set of unique interfaces for each vendor’s products.

Read more at SDx Central