
How to Install OpenStack in Less Than an Hour

OpenStack is a framework for building IT infrastructure. This framework consists of a collection of many smaller projects including OpenStack Nova (compute), Keystone (identity service), Glance (image service), Neutron (networking), and many others.

These components are combined into working software either through a do-it-yourself (DIY) approach or by using one of the many available distributions. Brave admins go the DIY route, which lets them select exactly the components they need and stay on the cutting edge, with constant access to the latest and greatest OpenStack releases, which arrive every six months.

A distribution is less work to build and maintain. You can get the whole setup installed in less than an hour, although you will have to wait for the distribution to pick up the hot new features from the latest OpenStack releases.

There are many available OpenStack distributions, including RDO/RHOS, Mirantis, SUSE, Ubuntu, DevStack, HPE, Oracle, VMware and others in the OpenStack marketplace. Not all of them are lightweight, however, and which one you choose will depend on the features you need. These include the following:

  • Supported projects

  • Support offering (to help you out if things go wrong)

  • Integration with your existing infrastructure

  • Support for different hypervisors. Some distributions don’t go beyond supporting KVM, while others include support for nearly all of the available hypervisors.

Pick your distribution

The first step is to pick your distribution. My personal favorite is Red Hat-based OpenStack running on CentOS (RDO). It offers different deployment solutions, which lead to a fairly standardized configuration. It’s lightweight, running on a minimum of 6 GiB of RAM, and supports the KVM and ESXi hypervisors. Best of all, it’s free, with fast and easy deployment using Packstack. This makes it ideal for configuring a demo OpenStack that runs as a virtual machine on your laptop, for instance.

For a POC setup, install CentOS using the “Server with GUI” installation pattern. Make sure you have at least 6 GiB of RAM; more is better! Note that the Packstack installer allocates processes according to the number of CPUs you’re using, which significantly increases RAM requirements. If you’re installing in a VM, it’s a good idea to configure it with just one CPU while installing and, if required, increase the number of CPUs once the installation has finished.

After installation, run yum search openstack. It will show the packages available for the different OpenStack releases. I recommend that you don’t use the latest release but the one before it, as it’s likely to have fewer bugs.

Installation Summary

You can install OpenStack with a few simple commands:

  • yum install centos-release-openstack-ocata

  • yum install openstack-packstack

  • packstack --gen-answer-file=/root/answers.txt

  • packstack --answer-file=/root/answers.txt (takes 10-15 minutes, depending on Internet speed)

An important part of the installation is the answer file. It contains a long list of parameters that determine what is going to be installed and how it will be installed. Below are some of the parameters I’d recommend changing to get to a simple POC setup.

  • CONFIG_DEFAULT_PASSWORD=password

  • CONFIG_SWIFT_INSTALL=n

  • CONFIG_HEAT_INSTALL=y

  • CONFIG_NTP_SERVERS=pool.ntp.org

  • CONFIG_KEYSTONE_ADMIN_PW=password

  • CONFIG_CINDER_VOLUMES_CREATE=y

  • CONFIG_HORIZON_SSL=y

  • CONFIG_HEAT_CFN_INSTALL=y

  • CONFIG_PROVISION_DEMO=n

  • CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1
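Because the answer file is plain key=value text, the changes above can also be applied non-interactively with sed instead of an editor. The sketch below uses a sample file under /tmp for illustration; the file contents are abbreviated, and in a real deployment you would point sed at the answers.txt that Packstack generated:

```shell
# Create an abbreviated sample answer file to illustrate
# (packstack --gen-answer-file produces the real, much longer one).
cat > /tmp/answers.txt <<'EOF'
CONFIG_SWIFT_INSTALL=y
CONFIG_HEAT_INSTALL=n
CONFIG_PROVISION_DEMO=y
EOF

# Flip each parameter in place; every CONFIG_* key appears once per file,
# so anchoring the pattern at the start of the line is sufficient.
sed -i 's/^CONFIG_SWIFT_INSTALL=.*/CONFIG_SWIFT_INSTALL=n/' /tmp/answers.txt
sed -i 's/^CONFIG_HEAT_INSTALL=.*/CONFIG_HEAT_INSTALL=y/' /tmp/answers.txt
sed -i 's/^CONFIG_PROVISION_DEMO=.*/CONFIG_PROVISION_DEMO=n/' /tmp/answers.txt

cat /tmp/answers.txt
```

Scripting the edits this way also makes the POC reproducible: rerun the same sed lines against a freshly generated answer file and you get the same deployment choices.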

Verifying the Installation

Once the Packstack-based installation has finished, you’ll want to find out whether it was successful. A few simple commands can help you verify your installation. To start with, use source ~/keystonerc_admin. This command sources the OpenStack credentials file that has automatically been created. This file authenticates you with Keystone, the OpenStack identity service, after which you’ll have full access to all of the OpenStack components and you’ll be able to run the different OpenStack commands.
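To see what sourcing the credentials file actually does, here is a sketch with made-up contents (the values below are hypothetical; the real file Packstack writes contains your generated admin password and your host’s Keystone endpoint):

```shell
# Illustrative keystonerc_admin with hypothetical values; Packstack writes
# the real file to /root/keystonerc_admin during installation.
cat > /tmp/keystonerc_admin <<'EOF'
export OS_USERNAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL=http://192.0.2.10:5000/v3
export OS_PROJECT_NAME=admin
export OS_IDENTITY_API_VERSION=3
EOF

# Sourcing simply exports these variables into the current shell;
# the openstack CLI reads the OS_* variables to authenticate with Keystone.
. /tmp/keystonerc_admin
env | grep '^OS_'
```

This is why the file must be sourced rather than executed: running it in a subshell would set the variables there and then throw them away.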

Another useful command is openstack-status (you’ll need to install the openstack-utils RPM). After sourcing the credentials file, it gives an overview of the complete configuration that’s currently operational. You can also use the OpenStack command-line interface. The openstack command from the CLI has hundreds of options to allow you to manipulate all parts of OpenStack. Type, for example:

openstack user list 

which will show a list of users currently configured in your OpenStack cloud.

After verifying that OpenStack has been configured successfully, you can also start a browser and connect to the Horizon configuration interface. This is a nice graphical interface that gives easy access to most of the tasks that OpenStack admins need to do on a frequent basis.


With the installation verified, services are defined in the database and exist at the Keystone level, many configuration files have been created with a common structure, and software-defined networking has been set up, all through the magic of Packstack.

Now you can get started creating a project and user and running an instance. In part 2 of this series on OpenStack, I’ll show you how to get instances up and running in 40 minutes!

Now updated for OpenStack Newton! Our Essentials of OpenStack Administration course teaches you everything you need to know to create and manage private and public clouds with OpenStack. Download a sample chapter today!

How to Write Documentation That’s Actually Useful

Now more than ever we need well-documented code. Here are four ways to make sure your applications make sense to humans as well as to computers.

Programmers love to write code, but they hate to write documentation. Developers always want to read documentation when they inherit a project, but writing it themselves? Feh!

How common is this? A recent GitHub survey found that “incomplete or outdated documentation is a pervasive problem,” according to 93 percent of respondents. Yet 60 percent of contributors to the open source code repository say they rarely or never contribute to documentation. Their reasoning, for both the open source projects and their own applications? A common attitude that “documentation is for ‘lusers’ who don’t write good code!”

That’s not a stance your business can afford to adopt.

Read more at HPE 

Tech Giants Rally Today in Support of Net Neutrality

Technology giants like Amazon, Spotify, Reddit, Facebook, Google, Twitter and many others are rallying today in a so-called “day of action” in support of net neutrality, five days ahead of the first deadline for comments on the US Federal Communications Commission’s planned rollback of the rules.

In a move that’s equal parts infuriating and exasperating, Ajit Pai, the FCC’s new chairman appointed by President Trump, wants to scrap the open internet protections installed in 2015 under the Obama administration. Those consumer protections mean providers such as AT&T, Charter, Comcast, and Verizon are prevented from blocking or slowing down access to the web.

Read more at The Verge

FD.io: Breaking the Terabit Barrier!

At launch, FD.io’s VPP technology could route/switch at half a terabit per second at multimillion-FIB-entry scale. Close examination of the bottlenecks revealed that performance was limited by the ability of the PCI bus to deliver packets from the NIC to the CPU. VPP had headroom to do more, but PCI bus bandwidth imposed limitations.

Today we are delighted to announce that this limitation has moved further out. The increased PCI bandwidth in the Intel® Xeon® Processor Scalable family has doubled the amount of traffic the PCI bus can deliver to the CPU, and VPP has risen to the occasion without the need for new software optimizations. This proves what we have long suspected: VPP can route/switch in software, at multimillion-FIB-entry scale, as much traffic as the PCI bus can throw at it.

Read more at FD.io

The Changing Face of the Hybrid Cloud

Depending upon the event you use to start the clock, cloud computing is only a little more than 10 years old. Some terms and concepts around cloud computing that we take for granted today are newer still. The National Institute of Standards and Technology (NIST) document that defined now-familiar cloud terminology—such as Infrastructure-as-a-Service (IaaS)—was only published in 2011, although it widely circulated in draft form for a while before that.

Among other definitions in that document was one for hybrid cloud. Looking at how that term has shifted during the intervening years is instructive: cloud-based infrastructures have moved beyond a relatively simplistic taxonomy, and the shift highlights how priorities familiar to adopters of open source software, such as flexibility, portability, and choice, have made their way to the hybrid cloud.

Read more at OpenSource.com

Dangerous Logic – De Morgan & Programming

Programmers are master logicians – well, they sometimes are. Most of the time they are as useless at it as the average Joe. The difference is that the average Joe can avoid logic and hence the mistakes. How good are you at logical expressions, and why exactly is Augustus De Morgan your best friend, logically speaking?

It is commonly held that programming is a logical subject.

Programmers are great at working out the logic of it all and expressing it clearly and succinctly, but logic is tough to get right.

IFs and Intervals

A logical expression is just something that works out to be true or false.

Generally you first meet logical expressions as part of learning about if statements. Most languages have a construct something like…
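De Morgan’s laws say that not (A and B) is equivalent to (not A) or (not B), and likewise not (A or B) is equivalent to (not A) and (not B). A quick sketch of the first law in shell syntax (illustrative only; the article itself works with generic if statements):

```shell
a=1; b=0   # treat a as "true" and b as "false"

# Left-hand side: not (a AND b)
if ! { [ "$a" -eq 1 ] && [ "$b" -eq 1 ]; }; then lhs=true; else lhs=false; fi

# Right-hand side: (not a) OR (not b), the De Morgan equivalent
if [ "$a" -ne 1 ] || [ "$b" -ne 1 ]; then rhs=true; else rhs=false; fi

echo "$lhs $rhs"   # the two forms always agree, whatever a and b are
```

Trying all four combinations of a and b gives the same answer on both sides, which is exactly the equivalence De Morgan guarantees.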

Read more at I Programmer

How to Get Started with Kubernetes

Kubernetes, the product of container-management work done internally at Google, provides a single framework for managing how containers are run across a whole cluster. The services it provides are generally lumped together under the catch-all term “orchestration,” but that covers a lot of territory: scheduling containers, service discovery between containers, load balancing across systems, rolling updates/rollbacks, high availability, and more.

In this guide we’ll walk through the basics of setting up Kubernetes and populating it with container-based applications. This isn’t intended to be an introduction to Kubernetes’s concepts, but rather a way to show how those concepts come together in simple examples of running Kubernetes.

Read more at InfoWorld

OpenStack: Driving the Future of the Open Cloud

As cloud computing continues to evolve, it’s clear that the OpenStack platform is guaranteeing a strong open source foundation for the cloud ecosystem. At the recent OpenStack Days conference in Melbourne, OpenStack Foundation Executive Director Jonathan Bryce noted that although the early stages of cloud technology emphasized public platforms such as AWS, Azure and Google, the latest stage is much more focused on private clouds.

According to The OpenStack Foundation User Survey, organizations everywhere have moved beyond just kicking the tires and evaluating OpenStack to deploying the platform. In fact, the survey found that OpenStack deployments have grown 44 percent year-over-year. More than 50 percent of Fortune 100 companies are running the platform, and OpenStack is a global phenomenon. According to survey findings, five million cores of compute power, distributed across 80 countries, are powered by OpenStack.

The typical size of an OpenStack cloud increased over the past year as well. Thirty-seven percent of clouds have 1,000 or more cores, compared to 29 percent a year ago, and 3 percent of clouds have more than 100,000 cores. You can see the survey findings, which are based on responses from 2,561 users, in this video overview.

The fact that OpenStack is built on open source is not lost on organizations deploying it. The OpenStack Foundation User Survey shows that avoiding vendor lock-in and accelerating the ability to innovate are the top reasons cited for OpenStack deployment. According to the survey, the highest number of OpenStack deployments fall within the Information Technology industry (56 percent), followed by telecommunications, academic/research, finance, retail/e-commerce, manufacturing/industrial, and government/defense.

The survey also found that most OpenStack deployments consist of on-premises private clouds (70 percent), with public cloud deployments at 12 percent. Interestingly, containers remain the top emerging technology of interest to OpenStack users. And, 65 percent of organizations running OpenStack services inside containers use the Docker runtime, while nearly 50 percent of those using containers to orchestrate apps on OpenStack use Kubernetes.

Organizations are building infrastructure around OpenStack, too. Survey results show that the median user runs 61–80 percent of their overall cloud infrastructure on OpenStack, while the typical large user (deployment with 1,000+ cores) reports running 81–100 percent of their total infrastructure on OpenStack.

OpenStack skills are demonstrably in high demand in the job market, and if you are seeking training and certification, opportunities abound. The OpenStack Foundation offers a Certified OpenStack Administrator (COA) exam. Developed in partnership with The Linux Foundation, the exam is performance-based and available anytime, anywhere. It allows professionals to demonstrate their OpenStack skills and helps employers gain confidence that new hires are ready to work.

The Linux Foundation also offers an OpenStack Administration Fundamentals course, which serves as preparation for the certification. The Foundation also offers comprehensive Linux training and other classes. You can explore options here.  Red Hat and Mirantis offer very popular OpenStack training options as well.

For a comprehensive look at trends in the open cloud, The Linux Foundation’s Guide to the Open Cloud report is a good place to start. The report covers not only OpenStack, but well-known projects like Docker and Xen Project, and up-and-comers such as Apache Mesos, CoreOS and Kubernetes.


Fabric 1.0: Hyperledger Releases First Production-Ready Blockchain Software

Open-source software isn’t so much built, it grows. And today, the open-source blockchain consortium Hyperledger has announced that its first production-ready solution for building applications, Fabric, has finished that process.

But even before the formal release of Fabric 1.0 today, hundreds of proofs-of-concept had been built. With contributions coming from 159 engineers across 28 organizations, no single company owns the platform for building shared, distributed ledgers, which is hosted by the Linux Foundation.

For those going forward with that work, the group’s executive director Brian Behlendorf indicated that production-grade functionality is just a download and a few tweaks away. Behlendorf told CoinDesk:

“It’s not as easy as drop in and upgrade. But the intent is that anyplace where there were changes, that those changes will be justified.”

Read more at CoinDesk

How Linux Containers Have Evolved

In the past few years, containers have become a hot topic among not just developers, but also enterprises. This growing interest has created an increased need for security improvements and hardening, and for preparation for scalability and interoperability. This has necessitated a lot of engineering, and here’s the story of how much of that engineering has happened at an enterprise level at Red Hat.

When I first met up with representatives from Docker Inc. (Docker.io) in the fall of 2013, we were looking at how to make Red Hat Enterprise Linux (RHEL) use Docker containers. (Part of the Docker project has since been rebranded as Moby.) We had several problems getting this technology into RHEL. The first big hurdle was getting a supported Copy On Write (COW) file system to handle container image layering. Red Hat ended up contributing a few COW implementations, including Device Mapper, btrfs, and the first version of OverlayFS. For RHEL, we defaulted to Device Mapper, although we are getting a lot closer on OverlayFS support.

The next major hurdle was the tooling to launch the container. At that time, upstream docker was using LXC tools for launching containers, and we did not want to support the LXC tool set in RHEL. Prior to working with upstream docker, I had been working with the libvirt team on a tool called virt-sandbox, which used libvirt-lxc for launching containers.

Read more at OpenSource.com