
Getting Started With Kubernetes Is Easy With Minikube

Getting started with any distributed system that has several components expected to run on different servers can be challenging. Often developers and system administrators want to be able to take a new system for a spin before diving straight into a full-fledged deployment on a cluster.

All-in-one installations are a good remedy to this challenge. They provide a quick way to test a new technology by getting a working setup on a local machine. Developers can use them in their local workflow, and system administrators can use them to learn the basics of deploying and configuring the various components.

Typically, all-in-one installs are provided as a virtual machine. Vagrant has been a great tool for this. For example, OpenShift Origin provides an all-in-one VM, and OpenStack has DevStack.

To get started with Kubernetes easily, we now have an all-in-one solution: minikube.

Minikube will start a virtual machine locally and run the necessary Kubernetes components. The VM will get configured with Docker and Kubernetes via a single binary called localkube. The end result will be a local Kubernetes endpoint that you can use with the Kubernetes client kubectl.

This is very similar to Docker’s docker-machine, with one main difference: minikube only starts local virtual machines; it does not interact with public cloud providers.

Setup

For clarity, I will skip over the very few requirements (e.g., setting up VirtualBox or another hypervisor) and go straight into “minikube” installation.

While you can build minikube from source on GitHub, the easiest route is to grab the released binary for your platform (packaging should come shortly). For example, on Linux:


```
$ curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.8.0/minikube-linux-amd64
$ chmod +x minikube
$ sudo mv minikube /usr/local/bin/
```

With minikube now on your machine, you will be able to create a local Kubernetes endpoint. But if you are totally new to Kubernetes, you should also download the Kubernetes client kubectl. This client will interact with the Kubernetes API to manage your containers.

Get kubectl:


```
$ curl -Lo kubectl http://storage.googleapis.com/kubernetes-release/release/v1.3.0/bin/linux/amd64/kubectl
$ chmod +x kubectl
$ sudo mv kubectl /usr/local/bin/
```

You are now ready to use minikube. Just typing minikube at your shell prompt will return the usage. The first command to use, however, is the start command. This will boot the virtual machine that will run Kubernetes. Let’s do it:

```
$ minikube start
Starting local Kubernetes cluster...
Kubectl is now configured to use the cluster.
```

You can check the status of your minikube VM with the status command.

```
$ minikube status
Running
```

To see that kubectl is configured to talk to the minikube VM, try listing the nodes in your Kubernetes cluster. It should return a single node named minikubevm.

```
$ kubectl get nodes
NAME         STATUS    AGE
minikubevm   Ready     1h
```

And, to finish checking that everything is running, open your VirtualBox UI: you should see the minikube VM running.
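If you prefer to stay at the command line, VirtualBox’s own CLI can confirm the same thing (assuming VirtualBox is the hypervisor in use):

```
$ VBoxManage list runningvms
```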

Application Example

You are now all set with this all-in-one local Kubernetes install. You can use it to discover the API, explore the dashboard and start writing your first containerized applications that will easily migrate to a full Kubernetes cluster.

Explaining how an application is containerized and what the various Kubernetes resources are is beyond the scope of this blog. But, to showcase minikube, we are going to create the canonical guestbook application. This application is made of a Redis cluster and a PHP front end. Containers are created via a Kubernetes resource called a Deployment and exposed to each other via another Kubernetes primitive called a Service.

Let’s create the guestbook application on minikube. To do this, we will use the kubectl client and create all required resources by pointing at the YAML file that describes them in the examples folder of the Kubernetes source code.


```
$ kubectl create -f https://raw.githubusercontent.com/kubernetes/kubernetes/master/examples/guestbook/all-in-one/guestbook-all-in-one.yaml
service "redis-master" created
deployment "redis-master" created
service "redis-slave" created
deployment "redis-slave" created
service "frontend" created
deployment "frontend" created
```

The CLI tells you that it created three services and three deployments.
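If you want to double-check, you can also list the new resources directly with kubectl (a quick sanity check; the names come from the YAML file above):

```
$ kubectl get deployments
$ kubectl get services
```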

You can now open the Kubernetes dashboard with a single command, and you will see all sorts of resources that have been created, which we will discuss in a later post.

```
$ minikube dashboard
```

[Screenshot: the Kubernetes dashboard]

To access the application front end, the quickest way in a development setup like this one is to run a local proxy. This is because a default Kubernetes service is not reachable from outside the cluster.

```
$ kubectl proxy
```

The frontend service will now be accessible locally. Open it in your browser and enjoy the guestbook application.

```
$ open http://localhost:8001/api/v1/proxy/namespaces/default/services/frontend
```

[Screenshot: the guestbook application]

Additional Minikube Commands

If you want to learn what is actually happening inside the minikube VM, you can SSH into it with the minikube ssh command.
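For example (a quick look inside the VM; the exact container list depends on what you have deployed):

```
$ minikube ssh
$ docker ps    # run inside the VM: lists the containers Kubernetes started
$ exit
```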

Because Kubernetes uses the Docker engine to run the containers, you can also use minikube as a Docker host with the minikube docker-env command.
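In practice, that means pointing your local Docker client at the Docker daemon inside the VM for the current shell, along these lines:

```
$ eval $(minikube docker-env)    # exports DOCKER_HOST and friends for this shell
$ docker ps                      # now lists the containers running inside the minikube VM
```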

As a developer, you might also be interested in testing different versions of Kubernetes. Minikube allows you to do this: check which versions are available with minikube get-k8s-versions and use the --kubernetes-version= flag of minikube start to pick a specific one.
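For example (the version string below is only illustrative; pick one from the get-k8s-versions output):

```
$ minikube get-k8s-versions
$ minikube start --kubernetes-version=v1.3.5
```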

Finally, you can stop and delete the minikube VM with intuitive commands like minikube stop and minikube delete.
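When you are done experimenting:

```
$ minikube stop      # shuts down the VM but keeps its state
$ minikube delete    # removes the VM entirely
```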

In conclusion, the minikube binary is by far the easiest and quickest way to take Kubernetes for a spin. It will allow you to learn the Kubernetes API and resource objects, as well as how to interact with them using the kubectl client.

Read the next articles in this series: 

Rolling Updates and Rollbacks using Kubernetes Deployments

Helm: The Kubernetes Package Manager

Federating Your Kubernetes Clusters — The New Road to Hybrid Clouds

Enjoy Kubernetes with Python

Want to learn more about Kubernetes? Check out the new, online, self-paced Kubernetes Fundamentals course from The Linux Foundation. Sign Up Now!

Sebastien Goasguen (@sebgoa) is a long-time open source contributor. A member of the Apache Software Foundation and of the Kubernetes organization, he is also the author of the O’Reilly Docker Cookbook. He recently founded skippbox, which offers solutions, services and training for Kubernetes.

Shining a Light on the Enterprise Appeal of Multi-Cloud Deployments

Rather than settle for the services of a single cloud provider, enterprises are, in time, expected to want to source off-premise capacity from a variety of suppliers.

According to market watcher Gartner, the trend is being driven by the fact that cloud customers are increasingly aware of the merits and drawbacks of individual providers, which enables them to make informed decisions about where best to run specific workloads.

Mark D’Cunha, product manager at Pivotal, told Computer Weekly at the Cloud Foundry Summit in Frankfurt that parts of the industry have been surprised by the speed at which enterprises are looking to adopt a multi-cloud approach to IT consumption.  

Read more at ComputerWeekly

Coaches, Managers, Collaboration, and Agile: Part III

I started this series writing about the need for coaches in Coaches, Managers, Collaboration, and Agile, Part 1. I continued in Coaches, Managers, Collaboration, and Agile, Part 2, talking about the changed role of managers in Agile. In this part, let me address the role of senior managers in Agile and how coaches might help.

For years, we have organized our people into silos. That meant we had middle managers who (with any luck) understood the function (testing or development) and/or the problem domain (think about the major chunks of your product such as Search, Admin, Diagnostics, the feature sets, etc.). I often saw technical organizations organized into product areas with directors at the top, and some functional directors such as those involved in test and quality and/or performance.

Read more at DZone

How Continuous Integration Can Help You Keep Pace With the Linux Kernel

Written by Tomeu Vizoso, Principal Software Engineer at Collabora.

Almost all of Collabora’s customers use the Linux kernel on their products. Often they will use the exact code as delivered by the SBC vendors and we’ll work with them in other parts of their software stack. But it’s becoming increasingly common for our customers to adapt the kernel sources to the specific needs of their particular products.

A very big problem most of them have is that the kernel version they based their work on isn’t getting security updates any more because it’s already several years old. And the reason companies are shipping such old kernels is that their trees have been so heavily modified compared to the upstream versions that rebasing them on top of newer mainline releases is expensive, and very hard to budget and plan for.
 
To avoid that, we always recommend that our customers stay close to their upstreams, which implies rebasing often on top of new releases (typically LTS releases, with long-term support). For the budgeting of that work to become possible, the size of the delta between mainline and downstream sources needs to be manageable, which is why we recommend contributing back any changes that aren’t strictly specific to their products.
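In git terms, that periodic rebase amounts to replaying the downstream patches onto the new LTS tag, roughly like this (a sketch; the tag and branch names are hypothetical):

```
$ git fetch origin --tags
$ # replay the downstream patches (currently based on v4.4) onto the newer LTS tag
$ git rebase --onto v4.9 v4.4 product/downstream-branch
```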
 
But even for those few companies that already have processes in place for upstreaming their changes and are rebasing regularly on top of new LTS releases, keeping up with mainline can be a substantial disruption of their production schedules. This is partly because the new mainline release will contain new bugs, and partly because the downstream changes can introduce new bugs of their own as they are applied to the new version.
 
Those companies that are already keeping close to their upstreams typically have advanced QA infrastructure that will detect those bugs long before production, but a long stabilization phase after every rebase can significantly slow product development.
 
To improve this situation and encourage more companies to keep their efforts close to upstream, we at Collabora have been working for a few years on continuous integration of FOSS components across a diverse array of hardware. The initial work was sponsored by Bosch for one of their automotive projects, and since the start of 2016 Google has been sponsoring work on continuous integration of the mainline kernel.
 
One of the major efforts to continuously integrate the mainline Linux kernel codebase is kernelci.org, which builds several configurations of different trees and submits boot jobs to several labs around the world, collating the results. This is already of great help in detecting, at a very early stage, any changes that either break the builds or prevent a specific piece of hardware from completing the boot stage.
 
Though kernelci.org can easily detect when an update to a source code repository has introduced a bug, such updates can contain several dozen new commits, and without knowing which specific commit introduced the bug, we cannot identify the culprit to notify. This means that either someone needs to monitor the dashboard for problems, or email notifications are sent to the owners of the repositories, who then have to manually look for suspicious commits before getting in contact with their authors.
 
To address this limitation, Google has asked us to look into improving the existing code for automatic bisection so it can be used right away when a regression is detected, and the possible culprits can be notified immediately without any manual intervention.
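Conceptually, this is what git bisect run automates on a local tree (a simplified sketch; kernelci.org drives real boot jobs on lab hardware rather than a local script, and the test script here is hypothetical):

```
$ git bisect start
$ git bisect bad HEAD                        # the updated tree that shows the regression
$ git bisect good v4.8                       # the last release known to build and boot
$ git bisect run ./build-and-boot-test.sh    # script exits non-zero on failure
```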
 
Another area in which kernelci.org is currently lacking is test coverage. Build and boot regressions are very annoying for developers because they negatively impact everybody who works with the affected configurations and hardware, but regressions in peripheral support or in other subsystems that aren’t critically involved during boot can still make rebases much costlier.
 
At Collabora we have had a strong interest in having the DRM subsystem under continuous integration, and some time ago we started an R&D project to make the IGT test suite generically useful for all DRM drivers. IGT started out i915-specific, but as most of the tests exercise the generic DRM ABI, they can be used to test other drivers with a moderate amount of effort. Early in 2016 Google started sponsoring this work, and as of today submitters of new drivers are using it to validate their code.
 
Another related effort has been the addition to DRM of a generic ABI for retrieving CRCs of frames from different components in the graphics pipeline, so two frames can be compared when we know that they should match. And another one is adding support to IGT for the Chamelium board, which can simulate several display connections and hotplug events.
 
A side-effect of having continuous integration of changes in mainline is that when downstreams are sending back changes to reduce their delta, the risk of introducing regressions is much smaller and their contributions can be accepted faster and with less effort.
 
We believe that improved QA of FOSS components will expand the base of companies that can benefit from involvement in development upstream and are very excited by the changes that this will bring to the industry.
 
If you are an engineer who cares about QA and FOSS, and would like to work with us on projects such as kernelci.org, LAVA, IGT and Chamelium, get in touch!

Provision Bare Metal Servers for OpenStack with Ironic

The day-to-day life of a developer can change drastically from one moment to the next, particularly if one is working on open source projects. Intel Cloud Software Senior Developer Ruby Loo spends her days as a core member of OpenStack Ironic, the bare metal provisioning software that allows OpenStack to provision bare metal servers.

While Loo is employed by Intel, the bulk of her daily interactions are based in the upstream open source community. As patches come in, Loo reviews them alongside other core members, ensuring that OpenStack Ironic’s feature priorities are met. In today’s episode of The New Stack Makers, recorded at OpenStack Summit Barcelona, Loo sat down with TNS Founder Alex Williams to explore more about her background, the daily tasks of an OpenStack core project member and active open source community participant, and what’s next for OpenStack Ironic.

Read more at The New Stack

China Adopts Cybersecurity Law in Face of Overseas Opposition

China adopted a controversial cyber security law on Monday to counter what Beijing says are growing threats such as hacking and terrorism, but the law triggered concerns among foreign business and rights groups.

The legislation, passed by China’s largely rubber-stamp parliament and set to take effect in June 2017, is an “objective need” of China as a major internet power, a parliament official said.

Overseas critics of the law say it threatens to shut foreign technology companies out of various sectors deemed “critical”, and includes contentious requirements for security reviews and for data to be stored on servers in China.

Read more at Reuters

Top 3 Questions Job Seekers Ask in Open Source

As a recruiter working in the open source world, I love that I interact every day with some of the smartest people around. I get to hear about the cool projects they’re working on and what they think about the industry, and when they are ready for a new challenge. I get to connect them to companies that are quietly changing the world.

But one thing I enjoy most about working with them is their curiosity: they ask questions, and in my conversations, I hear a lot of inquiries about the job search and application process. 

Read more at OpenSource.com

How to Recover a Deleted File in Linux

Did this ever happen to you? You realized that you had mistakenly deleted a file – either through the Del key, or using rm in the command line.

In the first case, you can always go to the Trash, search for the file, and restore it to its original location. But what about the second case? As I am sure you probably know, the Linux command line does not send removed files anywhere – it REMOVES them. Bum. They’re gone.

Read complete article at Tecmint

Get Trained and Certified on Kubernetes with The Linux Foundation and CNCF

Companies in diverse industries are increasingly building applications designed to run in the cloud at a massive, distributed scale. That means they are also seeking talent with experience deploying and managing such cloud native applications using containers in microservices architectures.

Kubernetes has quickly become the most popular container orchestration tool according to The New Stack, and thus is a hot new area for career development as the demand for IT practitioners skilled in Kubernetes has also surged. Apprenda, which runs its PaaS on top of Kubernetes, reported a spike in Kubernetes job postings this summer, and the need is only growing.

To meet this demand, The Linux Foundation and the Cloud Native Computing Foundation today announced they have partnered to provide training and certification for Kubernetes.  

The Linux Foundation will offer training through a free, massive open online course (MOOC) on edX as well as a self-paced, online course. The MOOC will cover the introductory concepts and skills involved, while the online course will teach the more advanced skills needed to create and configure a real-world working Kubernetes cluster.

The training course will be available soon, and the MOOC and certification program are expected to be available in 2017. Pre-registration is open now at the discounted price of $99 (regularly $199) for a limited time. Sign up here to pre-register for the course.

The course curriculum will also be open source and available on GitHub, Dan Kohn, CNCF Executive Director, said in his keynote today at CloudNativeCon in Seattle.

Certification will be offered by Kubernetes Managed Service Providers (KMSP) trained and certified by the CNCF. Nine companies with experience helping enterprises successfully adopt Kubernetes are committing engineers to participate in a CNCF working group that will develop the certification requirements. These early supporters include Apprenda, Canonical, Cisco, Container Solutions, CoreOS, Deis, Huawei, LiveWyer, and Samsung SDS. The companies are also interested in becoming certified KMSPs once the program is available next year.

Kubernetes is a software platform that makes it easier for developers to run containerized applications across diverse cloud infrastructures — from public cloud providers, to on-premise clouds and bare metal. Core functions include scheduling, service discovery, remote storage, autoscaling, and load balancing.

Google originally engineered the software to manage containers on its Borg infrastructure, but open sourced the project in 2014 and donated it earlier this year to the Cloud Native Computing Foundation at The Linux Foundation. It is now one of the most active open source projects on GitHub and has been one of the fastest growing projects of all time with a diverse community of contributors.

“Kubernetes has the opportunity to become the new cloud platform,” said Sam Ghods, a co-founder and Services Architect at Box, in his keynote at CloudNativeCon. “We have the opportunity to do what AWS did for infrastructure but this time in an open, universal, community-driven way.”

With more than 170 user groups worldwide, it’s already easy to hire people who are experts in Kubernetes, said Chen Goldberg, director of engineering for the Container Engine and Kubernetes team at Google, in her keynote at CloudNativeCon.

The training and certification from CNCF and The Linux Foundation will go even further to help develop the pool of Kubernetes talent worldwide.

Pre-register now for the online, self-paced Kubernetes Fundamentals course from The Linux Foundation and pay only $99 ($100 off registration)!

OpenSDS for Industry-Wide Software Defined Storage Collaboration

Software defined storage (SDS) brings cloud benefits to storage, but the challenge is that it must be highly reliable – you can’t lose a single byte of data. Storage can be difficult to manage in the cloud where there are many frameworks and technologies working together in virtualized / containerized environments. 

At LinuxCon Europe, Cameron Bahar, SVP and Global CTO of Huawei Storage, launched the project proposal for a new open source initiative called OpenSDS:

“What we’re proposing effectively is this virtualization layer that effectively does discovery, provisioning, management, and orchestration of advanced storage services. It allows the open source vendors to plug in their OpenSDS adapters to manage storage. Ceph, Gluster, and ZFS and what have you can plug in through that stack. It allows the vendors from EMC, Huawei, Intel, HP to plug in their adapters. … We define the interfaces once, you make them general enough, and then we’re able to update, both in an open source way and in a proprietary way, these vendor APIs.”

In this keynote, Bahar invites vendors, customers and other open source collaborators to work with them to make this mission a reality: “We want to develop an open source SDS controller platform that allows us to manage both virtualized and containerized and bare metal environments, and facilitate collaborations. Adherence to standards, leverage existing open source, and have a customer and vendor community that comes together to solve the problem together.”

Steven Tan, Chief Architect at Huawei, talked about some of the technical details for how OpenSDS could benefit people using Kubernetes and OpenStack. He outlined three key benefits: “The first one is to be able to plug in to be able to provide a seamless plugin for any framework. Second, to be able to provide an end-to-end storage management with a single solution. Third, to support a broad set of storage including closed storage.”

Tan went on to talk more about how the project will be managed as a Linux Foundation project with light governance and a technical steering committee for the technical oversight of the project. The source code will be on GitHub with Gerrit code reviews and regular IRC meetings as well as meetups.

They were joined on stage by a special guest, Reddy Chagam, Chief Architect of SDS at Intel, who talked about Intel’s involvement in the project.  

While the project hasn’t launched quite yet, stay tuned for more information about how to participate in OpenSDS. For now, you can watch the entire keynote below to learn more about the project!

LinuxCon Europe videos