
The Best Linux Distros for 2017

The new year is upon us, and it’s time to look toward what the next 365 days have in store. As we are wont to do, Linux.com looks at what might well be the best Linux distributions to be found from the ever-expanding crop of possibilities.

Of course, we cannot just create a list of operating systems and say “these are the best,” not when so often Linux can be very task-oriented. To that end, I’m going to list which distros will rise to the top of their respective heaps…according to task.

With that said, let’s get to the list!

Best distro for sysadmins: Parrot Linux

Parrot Linux is based on Debian and offers nearly every penetration testing tool you could possibly want.

Administrators are tasked with so much on a daily basis. Without a solid toolkit, that job becomes incredibly challenging. For that, there are a host of Linux distributions ready to serve. I believe the one distribution that will see a significant rise in popularity in the coming year is Parrot Linux. This particular distribution is based on Debian and offers nearly every penetration testing tool you could possibly want. You will also find tools for cryptography, cloud, anonymity, digital forensics, programming, and even productivity. All of these tools (and there are many) are coupled with an already rock-solid foundation to create a Linux distribution perfect for the security and network administrator.

Parrot currently stands at #57 on Distrowatch, and I expect to see a significant leap on that list by the end of the year.

Read more about Parrot Linux in Parrot Security Could Be Your Next Security Tool.


Best lightweight distribution: LXLE

LXLE combines a perfect blend of small footprint with large productivity.

Without a doubt, I believe LXLE will become the lightweight distribution of choice in 2017. Why? Simple. LXLE manages to combine a perfect blend of small footprint with large productivity. In other words, this is a small-sized distribution that won’t stop you from getting your work done. You’ll find all the tools you need in a desktop Linux release that will feel right at home on older hardware (as well as newer machines). LXLE is based on Ubuntu 16.04 (so it will enjoy long-term support) and makes use of the LXDE desktop environment, which brings with it an instant familiarity.

LXLE ships with many of the standard tools (such as LibreOffice and GIMP). The only caveat is the need to install a more modern, up-to-date browser.

Currently, LXLE stands at #16 on Distrowatch. I look for it to break the top 10 by mid-2017. You can read more about LXLE in this article.

Best desktop distribution: Elementary OS

Elementary OS Loki is not only beautiful, it is also rock solid and offers unmatched user-friendliness and consistency.

I may be biased, but I’m certain that Elementary OS Loki will do the impossible and unseat Linux Mint as the coveted “best desktop distribution” for 2017. That will be a fairly impressive feat, considering that Linux Mint consistently clobbers the competition on Distrowatch. Currently, Elementary OS stands at #6 (while Linux Mint continues its reign at the number one spot). How is it possible that Elementary OS could dethrone Mint? Loki has not only proved itself to be one of the more beautiful Linux distributions, it is also rock solid and offers unmatched user-friendliness and consistency across the desktop.

Some might find the Elementary OS desktop too “Mac-like.” However, that design metaphor has proved incredibly effective with end users and, of course, the Elementary take on it isn’t nearly as limiting as the OS X desktop…so feel free to tweak it to your liking.

I’ve covered Elementary OS Loki previously, so you can read more in this article.

Best distribution for those with something to prove: Gentoo

Gentoo requires a higher level of Linux understanding, but you will be rewarded with exactly the distribution you want and nothing more.

This category is for those who want to show off their prowess with the Linux operating system: those who know Linux better than most and want a distribution built specifically to their needs. When this flavor of Linux is desired, there is only one release that comes to mind…Gentoo.

Gentoo is a source-based Linux distribution that starts out as a live instance and requires you to build everything you need from source. This not only requires a higher level of Linux understanding but also demands more time and patience. In the end, however, you will be rewarded with exactly the distribution you want and nothing more. Gentoo is not new; it’s been around for quite some time. But if you want to prove your Linux skills, it helps to start with Gentoo.


Best Linux for IoT: Snappy Ubuntu Core

Ubuntu Snaps make it incredibly easy to install packages without worrying about dependencies and breakage due to upgrades; this system makes Snappy Core perfect for IoT.

Now we’re talking really, really small form factor. The Internet of Things category is where embedded Linux truly shines, and there are a number of distributions ready to take on the task. I believe 2017 will be the year of Snappy Ubuntu Core. Ubuntu Snaps have already made it incredibly easy to install packages without worrying about dependencies and breakage due to upgrades. By leveraging this system, Snappy Core makes for a perfect platform for IoT. Ubuntu Snappy Core can already be found in the likes of various hacker boards (such as the Raspberry Pi) as well as Erle-Copter drones, Dell Edge Gateways, Nextcloud Box, and LimeSDR.

Best non-enterprise server distribution: CentOS

CentOS is as reliable a server platform as you can find.

It should come as no surprise that CentOS remains the Linux darling of the server room for small- and medium-sized businesses. There’s a very good reason CentOS continues to stand at the top of this hill—it’s derived from the Red Hat Enterprise Linux (RHEL) sources. Because of this, you know you are getting as reliable a server platform as you can find. The major difference between Red Hat Enterprise Linux and CentOS (besides the branding) is support. With RHEL, you benefit from official Red Hat support. CentOS, on the other hand, has enjoyed a massive community-driven support system since 2004. So, if your small- or medium-sized business is looking to migrate a data center to an open source platform, your first stop is CentOS.

Best enterprise server distribution: RHEL

Red Hat is perfectly in tune with enterprise business needs.

Once again, there is no surprise here. SUSE is doing a remarkable job of climbing the enterprise ladder, and one of these days it may well knock the reigning king of enterprise Linux from the throne. Unfortunately, 2017 will not be that year. Red Hat Enterprise Linux (RHEL) will continue to top the most-wanted list for enterprise businesses. According to Gartner, Red Hat has a 67 percent market share within the realm of Linux (with RHEL subscriptions driving about 75 percent of Red Hat’s revenue). The reasons for this are many. Not only is Red Hat perfectly in tune with what enterprise businesses need, it is also a major contributor to nearly every software stack within the open source community.

Red Hat knows Linux, and it knows the enterprise. Red Hat is trusted by numerous Fortune 500 companies (such as ING, Sprint, Bayer Business Services, Atos, Amadeus, and Etrade), and RHEL has pushed the envelope in areas of security, integration, cloud, and management. I also look for Red Hat to focus a good amount of energy on IoT in the coming year. Even so, don’t be surprised if, by the end of 2017, SUSE further chips away at Red Hat’s current market share.

The choice is yours

One of the greatest aspects of the Linux platform is that, in the end, the choice is yours. There are hundreds of distributions to choose from, many of which will perfectly meet your needs. However, if you want to try what I believe will be the best of 2017, take one of the above distributions for a spin; I’m certain you won’t be disappointed. Next time, I’ll look at which distros are best designed for new users.


Node.js: The State of the Union

By all metrics, it has been a good year for Node.js. During his keynote at Node.js Interactive in November, Rod Vagg, Technical Steering Committee Director at the Node.js Foundation, talked about the progress that the project made during 2016.

The Node.js Foundation is now sponsored by nearly 30 companies, including heavyweights such as IBM, PayPal, and Red Hat. The community of developers is also looking healthy. Within the technical group, 90 core collaborators currently have commit access to the Node.js repository; 48 of these core collaborators were active in the last year. Since 2009, when Node.js was born, the total number of contributors — that is, the number of people who have made changes in the Node Git repository — has grown over time. In fact, 2016 saw twice as many people per month contributing to the code base as in 2015.

State of the Core Code

In 2016, the number of commits increased 125 percent relative to 2015, said Vagg. Despite this, the core stayed more or less stable. 37 percent of JavaScript and C++ code received minor changes in the src/ and lib/ directories. 58 percent of the test code was also tweaked. However, the majority of commits went into documentation. More than 90 percent of the lines in the API documents were changed. Vagg thinks documentation is probably the easiest way to get into contributing to Node and is therefore acting as a gateway for first-time contributors.

Vagg said developers can now count on more tools to help them with their tasks if they decide to tackle programming issues. Node has traditionally been hard to debug, but new utilities, such as the V8_inspector extension for Chrome, allow developers to attach Chrome’s DevTools to their applications. This extension will probably supersede the old debugger in the near future. Other tools, such as AsyncHooks (previously AsyncWrap), V8 Trace Events, llnode, and nodereport, also contribute to making Node.js applications easier to debug.

This hard work is paying off. Version 6, an LTS version, now implements 96 percent of ECMAScript 6/2015, for example. And it works both ways: Node now has contributors representing the community on the Ecma technical committee that evolves and regulates the development of JavaScript (TC39), so Node.js will have a say in the future of the base language, too.

State of Releases & LTS

Turning to how Node releases and versions work, Vagg explained that in 2016 there were 63 releases covering four different versions: 0.12, 4, 5, and 6. From version 4 onwards, even-numbered versions are LTS releases supported for three years, while odd-numbered versions are supported for only three months. Hence, Vagg recommended that shops with large deployments always deploy even-numbered versions.

Version 0.10, which Vagg described as the “Windows XP of Node.js,” is still being used “because it was the first ‘ready’ version.” However, 0.10 received no support in 2016, having reached its end of life in 2015. Version 0.12 reached its end of life in 2016. Hence, Vagg urged people using either of these versions to update to something more current, such as version 4 (code named “Argon”).

Argon is an LTS version and will be maintained until April 2018. Version 5, however, as a non-LTS version, reached its end of life in June 2016. The most current non-LTS version at the time of writing is version 7, which will be maintained until April 2017. Apart from version 4, there is one other current LTS version: version 6 (code named “Boron”), which started life in April 2016 and will be maintained until 2019. A new LTS version, version 8, will come out in April 2017 and will be maintained until 2020.

Ever since version 4, Vagg said, upgrading has been pretty painless. Currently, a whole crew of release managers guarantees a smooth transition between versions. If you are using Node in a large environment, Vagg recommends implementing a migration strategy to avoid “getting stuck” on an unsupported version.

State of the Build

Vagg used the State of the Build segment of his presentation to mention the companies and individual users that make development possible within Node.js. Digital Ocean and Rackspace, for example, have donated resources and funds from the very beginning. The foundation also counts on an ARM cluster made up largely of Raspberry Pis, many of which have been donated by individuals.

These resources are configured to test Node.js core, libuv, V8, full release builds, and more. The cluster itself contains 141 build, test, and release nodes connected full-time. Each build is compiled for 25 different operating systems and eight different architectures. Every build is painstakingly tested before a new version is released.

State of Security

In discussing the state of security, Vagg said that security reports should be sent to security@nodejs.org; it is the task of the CTC and domain experts to discuss and solve issues. When an issue is confirmed, it is announced to the nodejs.org and nodejs-sec Google Groups, following Node.js’ “full disclosure” policy. LTS release lines receive as few changes as possible to ensure the platform remains stable. Overall, there were seven security releases during 2016, none of which were severe.

The Node.js Foundation is also working on a new Node security project. The project implements a public working group, made up of professionals from ^lift and other interested parties. The idea is to facilitate the creation of a healthy ecosystem of security service and product providers that work together to bring more rigor and formality to security handling in the core and the wider open source ecosystem.

Membership is also open to individuals, communities, and other companies. Vagg encouraged anyone who would like to join to visit the workgroup’s site on GitHub.


If you are interested in speaking at or attending Node.js Interactive North America 2017, happening in Vancouver, Canada next fall, please subscribe to the Node.js community newsletter to keep abreast of dates and times.

 

This Week in Open Source News: Mark Shuttleworth Talks Business Models, OSS Trustworthiness Requires Work, & More

This week in Linux and open source headlines, Canonical’s Mark Shuttleworth opens up about spawning new opportunities with the interoperability of various areas of OSS, Steven J. Vaughan-Nichols urges the Linux community to roll up their sleeves in 2017, and more! Read on to stay at the forefront of open source news:

1) “When sensors, data, machine learning and the cloud collide, new kinds of opportunity can emerge.”

Open Source Pioneer Mark Shuttleworth Says Smart ‘Edge’ Devices Spawn Business Models – The Wall Street Journal

2) Linux turned 25 last year, but that doesn’t mean OSS is done proving itself.

Linux 2017: With Great Power Comes Great Responsibility – ZDNet

3) “Endless is launching its first products designed specifically for the United States.”

Endless Introduces Linux Mini Desktop PCs for American Market – Liliputing

4) The Linux Foundation’s Hyperledger Project has formed a new working group to reach out to Chinese members, who make up over a quarter of its base.

Hyperledger Blockchain Project Announces ‘Technical Working Group China’ Following Strong Interest – Cryptocoins News

5) “AT&T is an open-source software company now — I just have to pinch myself,” said Jim Zemlin at CES.

The Linux Foundation is Still Adjusting to AT&T’s Embrace of Open Source – GeekWire

Top 50 Developer Tools of 2016

Want to know exactly which tools should be on your radar in 2017? Our 3rd annual StackShare Awards do just that! We’ve analyzed thousands of data points to bring you rankings for the hottest tools.

Read more at StackShare

Crossing the AI Chasm

Every day brings another exciting story of how artificial intelligence is improving our lives and businesses. AI is already analyzing x-rays, powering the Internet of Things and recommending best next actions for sales and marketing teams. The possibilities seem endless.

But for every AI success story, countless projects never make it out of the lab. That’s because putting machine learning research into production and using it to offer real value to customers is often harder than developing a scientifically sound algorithm. Many companies I’ve encountered over the last several years have faced this challenge, which I refer to as “crossing the AI chasm.”

I recently presented these findings at ApacheCon, and in this article I’ll share my top four lessons for overcoming both the technical and product chasms that stand in your path.

Read more at TechCrunch

Multi-Arch Docker Images

Although the promise of Docker is the elimination of differences when moving software between environments, you’ll still face the problem that you can’t cross platform boundaries, i.e., you can’t run a Docker image built for x86_64 on an ARM board such as the Raspberry Pi. This means that if you want to support multiple architectures, you typically end up tagging images with their arch (e.g., myimage-arm and myimage-x86_64). However, it turns out that the Docker image format already supports multi-platform images (or more accurately, “manifests”),…
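As a hedged illustration of how such a manifest list looks to a client, here is a minimal Python sketch that fetches one from a registry and picks the digest matching the local architecture. The registry URL and image name are hypothetical, and a real registry such as Docker Hub would also require an auth token, which is omitted here:

import platform
import requests

REGISTRY = "https://registry.example.com"  # hypothetical registry, no auth
IMAGE, TAG = "myimage", "latest"           # hypothetical image name and tag

# Asking for the manifest-list media type makes the registry return the
# multi-platform index instead of a single-image manifest.
resp = requests.get(
    "{}/v2/{}/manifests/{}".format(REGISTRY, IMAGE, TAG),
    headers={"Accept": "application/vnd.docker.distribution.manifest.list.v2+json"},
)
manifest_list = resp.json()

# Map the local machine name to Docker's architecture labels.
arch = {"x86_64": "amd64", "armv7l": "arm", "aarch64": "arm64"}.get(
    platform.machine(), platform.machine())

for entry in manifest_list.get("manifests", []):
    if entry["platform"]["architecture"] == arch:
        print("use digest {} for {}".format(entry["digest"], arch))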

Read more at Container Solutions

Hands On With the First Open Source Microcontroller

2016 was a great year for Open Hardware. The Open Source Hardware Association released their certification program, and late in the year, a few silicon wizards met in Mountain View to show off the latest happenings in the RISC-V instruction set architecture.

The RISC-V ISA is completely unlike any other computer architecture. Nearly every other chip you’ll find out there, from the 8051s in embedded controllers and the 6502s found in millions of toys to AVR, PIC, and whatever Intel is working on, is a closed-source design. You cannot study these chips, you cannot manufacture these chips, and if you want to use one of these chips, your list of suppliers is dependent on who has a licensing agreement with whom.

Read more at Hackaday

How Fast Are Unix Domain Sockets?

It has probably happened more than once: you ask your team how a reverse proxy should talk to the application backend server. “Unix sockets. They are faster,” they’ll say. But how much faster is this communication? And why is a Unix domain socket faster than an IP socket when multiple processes are talking to each other on the same machine? Before answering those questions, we should figure out what Unix sockets really are.

Unix sockets are a form of inter-process communication (IPC) that allows data exchange between processes on the same machine. They are special files, in the sense that they exist in a file system like a regular file (and hence have an inode and metadata like ownership and permissions associated with them), but they are read and written using the recv() and send() syscalls instead of read() and write(). When binding and connecting to a Unix socket, we use file paths instead of IP addresses and ports.
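To make that last point concrete, here is a minimal, self-contained Python sketch that binds a Unix domain socket to a file path and exchanges a message over it; the /tmp/demo.sock path is just an illustrative choice:

import os
import socket

PATH = "/tmp/demo.sock"  # illustrative path, not from the article
if os.path.exists(PATH):
    os.unlink(PATH)  # remove a stale socket file from a previous run

# The "server" side binds to a file path instead of an IP address and port.
server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(PATH)
server.listen(1)

# The "client" side connects to that same path.
client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
client.connect(PATH)

conn, _ = server.accept()
client.send(b"ping")
print(conn.recv(4))  # prints b'ping' on Python 3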

Read more at Myhro Blog

What’s the Future of Data Storage?

Storage planning today means investing in an ecosystem that supports multiple technologies. The winning vendors will create integrated delivery models that abstract away the differences between particular technologies.

 What’s the future of storage? Is it internal server-based/software-defined? Hyperconverged? All-flash arrays? Cloud? Hybrid cloud?

Over the next few weeks we’re going to spend some time going over all of these different technologies and examining why each is viable (or not). But for now, I’m going to go ahead and give you the short answer: All of the above. 

Read more at HPE

Enjoy Kubernetes with Python

Over the past few years it seems that every cool and trending project is using Golang, but I am a Python guy and I feel a bit left out!

Kubernetes is no stranger to this: it is written in Go, and most clients that you will find are based on the Go client. Building a Kubernetes client has become easier. The Go client is now in its own repository; therefore, if you want to write in Go, you can just import the Go client and not the entirety of the Kubernetes source code. Also, the Kubernetes API specification follows the OpenAPI standardization effort. If you want to use another language, you can take the OpenAPI specification and auto-generate a client.

A couple of weeks ago, the Python in me was awakened by a new incubator project for Kubernetes: a Python client almost single-handedly developed by Google engineer @mbohlool. The client is now available on PyPi and — like most Python packages — easily installable from source. To be fair, there already existed a Python client built on the Swagger specification, but it received little attention.

So, let’s have a look at this new Python client for Kubernetes and take it for a spin.

Getting It

As always the easiest way is to get it from PyPi:


pip install kubernetes


Or get it from source:


pip install git+https://github.com/kubernetes-incubator/client-python.git


Or clone it and build locally:


git clone https://github.com/kubernetes-incubator/client-python.git
cd client-python
python ./setup.py install


Whatever you prefer.

Once installed, you should be able to start Python and import the kubernetes module to check that your installation went fine:


$ python
Python 2.7.12 (default, Oct 11 2016, 14:42:23)
[GCC 4.2.1 Compatible Apple LLVM 7.0.2 (clang-700.1.81)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import kubernetes


Note that you can use either Python 2.7 or Python 3.5.

To get started using it, you will need a working Kubernetes endpoint. If you do not have one handy, use minikube.

Structure

Before we dive straight into examples, we need to look at the structure of the client. Most of the code is auto-generated. Each Kubernetes API group is exposed through its own endpoint class, which needs to be instantiated separately.

For example:

  • The basic resources (e.g., pods, services) will need the v1 stable API endpoint: kubernetes.client.CoreV1Api

  • The jobs resources will need the Batch endpoint: kubernetes.client.BatchV1Api

  • The deployments will need the Extensions endpoint: kubernetes.client.ExtensionsV1beta1Api

  • The horizontal pod autoscalers will need the Autoscaling endpoint: kubernetes.client.AutoscalingV1Api

In each of these endpoints, the REST methods for all resources will be available as separate Python functions. For example:

  • list_namespaces()

  • delete_namespace()

  • create_namespace()

  • patch_namespace()

The response from these method calls will be Python objects whose attributes you can easily explore.
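As a quick sketch of how these endpoint classes fit together (assuming the client’s config helper can find a working kubeconfig, such as the one minikube writes):

from kubernetes import client, config

config.load_kube_config()   # reads your kubeconfig, e.g. ~/.kube/config

core = client.CoreV1Api()       # v1 resources: pods, services, namespaces, ...
batch = client.BatchV1Api()     # batch resources: jobs

for ns in core.list_namespace().items:
    print(ns.metadata.name)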

The part that will take the most time is the fact that this is a very low-level client. It can do almost everything you can do with the Kubernetes API, but it does not have any high-level wrappers to make your life easy.

For instance, creating your first Pod will involve going through the auto-generated documentation and finding out all the classes that you need to instantiate to define your Pod specification properly. I will save you some time and show you how, but the process will need to be repeated for all resources.

Example

The client can read your kubeconfig file, but the easiest configuration possible might be to run a proxy (kubectl proxy), then open Python, create the V1 API endpoint, and list your nodes:



>>> from kubernetes import client, config
>>> client.Configuration().host = "http://localhost:8080"
>>> v1 = client.CoreV1Api()
>>> v1.list_node()
...
>>> v1.list_node().items[0].metadata.name
'minikube'


Now the fun with Python starts. Try to list your namespaces:



>>> for ns in v1.list_namespace().items:
...     print(ns.metadata.name)
...
default
kube-system


To create a resource, you will need the endpoint the resource belongs to and a request body. Because the API version and kind are implicitly known from the endpoint and the function name, you only need to create some metadata and probably some specification.

For example, to create a namespace, we need an instance of the namespace class, and we need to set the name of the namespace in the metadata. The metadata is yet another instance of a class.



>>> body = client.V1Namespace()
>>> body.metadata = client.V1ObjectMeta(name="linuxcon")
>>> v1.create_namespace(body)


Deleting a namespace is a little bit simpler, but you need to specify some deletion options:



v1.delete_namespace(name="linuxcon", body=client.V1DeleteOptions())


Now I cannot leave you without starting a Pod. A Pod is made of metadata and a specification. The specification contains a list of containers and volumes. In its simplest form, a Pod will have a single container and no volumes. Let’s start a busybox Pod: it will use the busybox image and just sleep. In the example below, you can see that we use a few classes:

  • V1Pod for the overall Pod

  • V1ObjectMeta for the metadata

  • V1PodSpec for the Pod specification

  • V1Container for the container that runs in the Pod

Let’s instantiate a Pod and set its metadata, which includes its name:



>>> pod = client.V1Pod()
>>> pod.metadata = client.V1ObjectMeta(name="busybox")


Now let’s define the container that will run in the Pod:



>>> container = client.V1Container()
>>> container.image = "busybox"
>>> container.args = ["sleep", "3600"]
>>> container.name = "busybox"


Now let's define the Pod’s specification, in our case, a single container:



>>> spec = client.V1PodSpec()
>>> spec.containers = [container]
>>> pod.spec = spec


And, finally, we are ready to create our Pod in Python:



>>> v1.create_namespaced_pod(namespace="default", body=pod)


We’ll see if the community (i.e., us) decides to add some convenience functions to the Kubernetes Python client. Things like kubectl run ghost --image=ghost are quite powerful and, although they can be easily coded with this Python module, it might be worthwhile to make them first-class functions.
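As a rough sketch of what such a convenience function might look like (the run() helper below and its defaults are my own invention, not part of the client):

from kubernetes import client

def run(v1, name, image, namespace="default", args=None):
    # Hypothetical kubectl-run-like helper: start a single-container Pod.
    container = client.V1Container(name=name, image=image, args=args)
    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name=name),
        spec=client.V1PodSpec(containers=[container]),
    )
    return v1.create_namespaced_pod(namespace=namespace, body=pod)

# run(v1, "ghost", "ghost")   # roughly: kubectl run ghost --image=ghost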

Read the previous articles in this series:

Getting Started With Kubernetes Is Easy With Minikube

Rolling Updates and Rollbacks using Kubernetes Deployments

Helm: The Kubernetes Package Manager

Federating Your Kubernetes Clusters — The New Road to Hybrid Clouds

Want to learn more about Kubernetes? Check out the new, online, self-paced Kubernetes Fundamentals course from The Linux Foundation. Sign Up Now!

Sebastien Goasguen (@sebgoa) is a longtime open source contributor. A member of the Apache Software Foundation and the Kubernetes organization, he is also the author of the O’Reilly Docker Cookbook. He recently founded skippbox, which offers solutions, services, and training for Kubernetes.