
Amadeus: Redefining Travel Industry Tech Through Open Source and SDN

Travel tech giant Amadeus has been moving toward a fully software-defined data center strategy over the past few years — one based on open source and software-defined networking (SDN).

Rashesh Jethi, SVP of Engineering at Amadeus, will speak at Open Networking Summit 2017, April 3-6, in Santa Clara, CA.
“We are actively leveraging software-defined networking in our existing data centers and all new infrastructure projects,” says Rashesh Jethi, SVP of Engineering and head of Research & Development for Amadeus in the North America and Latin America regions.

Jethi leads the teams responsible for developing and maintaining distribution software and airline passenger support systems at Amadeus – a multibillion-dollar technology company that connects and enables the entire travel industry, as well as travelers, around the world.

On Tuesday, April 4 he will speak at Open Networking Summit in Santa Clara about how software-defined networking and data centers are redefining the travel industry and moving millions of people every day. Here, he discusses how Amadeus uses open source software and SDN, the best way for companies to get involved in the SDN revolution, and how networking affects adjacent industries such as IoT, cloud, and big data.

Want to learn more? Register now for Open Networking Summit 2017! Linux.com readers can use code LINUXRD5 for 5% off the attendee registration.

Linux.com:  Which open source networking projects does your organization use and contribute to? Why do you participate? How are you contributing?

Jethi: Amadeus primarily uses OpenStack. Other open source projects we use that indirectly contribute to SDN include GitHub, Jenkins, Ansible, Puppet, and Chef. Amadeus is an active member of the open source community and regularly contributes code to open source libraries.

Linux.com: What’s your advice to individuals and companies getting started in SDN?

Jethi: SDN should be viewed as a means to an end. What’s important is to first understand why you want to embrace SDN and how you will get the organizational buy-in and technical talent behind the project.

Talk to other individuals and companies who have gone through it. Don’t readily believe the hype from equipment manufacturers or the promised positive outcomes at large from the community. It’s important to set realistic goals and be pragmatic along the way!

Linux.com:  How can companies and individuals best participate in the ‘Open Revolution’ in networking?

Jethi: The best participation comes from three things: learning, contributing and getting started – even if in a small way – rather than endless debates and analysis.

Linux.com: How has networking had a profound impact on adjacent “hot” industries like Cloud, Big Data, IoT, Analytics, Security, Intelligence, and others?

Jethi: They are all very interconnected in some ways. The growth of hyperscale computing platforms – whether public clouds or private clouds – would not be possible without software-defined infrastructure provisioning, deployment, and automation capabilities (the cost and complexity of legacy models are too high). The availability of these hyperscale computing platforms has, in turn, facilitated the development of data, analytics, and IoT solutions.

How to Set Up External Service Discovery with Docker Swarm Mode and Træfik

In my previous post, I showed how to use service discovery built into Docker Swarm Mode to allow containers to find other services in a cluster of hosts. This time, I’ll show you how to allow services outside the Swarm Mode cluster to discover services running in the cluster.

It turns out this isn’t as easy as it used to be. But first, allow me to talk about why one would want external service discovery, and why this has become more difficult to achieve.

Why External Service Discovery?

Most of us are not running 100 percent of our applications and services in containers. Those who are may be running them across two or more Swarm Mode clusters, and a large group of developers may be constantly deploying and working on containers. In these situations, it can become tiresome to update configuration files or DNS entries every time a service is published or changes location.

What changed?

Those of us who use Docker heavily are familiar with Docker’s “move fast and break things” philosophy. While the “break” part happens less frequently than in Docker’s early days, rollouts of significant new features such as Swarm Mode can be accompanied by a requirement to retool how one uses Docker. With earlier versions of Docker, my company used a mixture of HashiCorp’s Consul as a key/value store, along with Glider Labs’ Registrator to detect and publish container-based service information into Consul. With this setup, Consul provided us DNS-based service discovery – both within and external to the Swarm (note: Swarm, not Swarm Mode) cluster.

While Docker 1.12 brought Swarm Mode and made building a cluster of Docker hosts extremely easy, the Swarm Mode architecture is not really compatible with Registrator. There are some workarounds to get Registrator working on Swarm Mode, but after a good amount of experimentation I felt the effort didn’t justify the result.

Taking a step back, what’s wanted out of external service discovery? Basically, the ability to allow an application or person to easily and reliably access a published service, even as the service moves from host to host, or cluster to cluster (or across geographic areas, but we’ll cover that in a later post). The question I asked myself was “how can I combine Swarm Mode’s built-in service discovery with something else so I could perform name-based discovery outside the cluster?” One answer to this question would be to use a proxy that can do name-based HTTP routing, such as Træfik.

Using Træfik

For this tutorial, we’ll build up a swarm cluster using the same Vagrant setup from my previous post. I’ve added a new branch with some more exposed TCP ports for this post. To grab a copy, switch to the proper branch, and start the cluster, follow the steps below:

$ git clone https://github.com/jlk/docker-swarm-mode-vagrant.git
Cloning into 'docker-swarm-mode-vagrant'...
remote: Counting objects: 23, done.
remote: Total 23 (delta 0), reused 0 (delta 0), pack-reused 23
Unpacking objects: 100% (23/23), done.
$ cd docker-swarm-mode-vagrant/
$ git checkout -b traefik_proxy origin/traefik_proxy
$ vagrant up

If this is the first time you’re starting this cluster, this takes about 5 minutes to update and install packages as needed.

Next, let’s fire up a few WordPress containers – again, similar to the last post, but this time we’re going to launch two individual WordPress containers for different websites. While they both use the same database, you’ll notice in the docker-compose.yml file I specify different table prefixes for each site. Also in the YML you’ll see a definition for a Træfik container, and a Træfik network that’s shared with the two WordPress containers.
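Sketched out, that docker-compose.yml layout might look roughly like the following (a minimal illustration, not the actual file from the repository – the service names, table prefix, and database settings are assumptions; Træfik 1.x label syntax is shown, and the humpback service would mirror beluga with its own prefix and rule):

```yaml
version: "3"

networks:
  traefik-net:
    driver: overlay   # shared by Træfik and both WordPress services

services:
  traefik:
    image: traefik:1.2
    # Watch the swarm for services and route by hostname; --web enables the dashboard
    command: --docker --docker.swarmmode --web
    ports:
      - "80:80"
      - "8090:8080"   # dashboard
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - traefik-net

  beluga:
    image: wordpress
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_TABLE_PREFIX: beluga_   # distinct prefix so both sites share one database
    deploy:
      labels:
        - "traefik.frontend.rule=Host:beluga"   # Træfik 1.x hostname-routing rule
        - "traefik.port=80"
    networks:
      - traefik-net
```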

Let’s connect into the master node, check out the code from GitHub, switch to the appropriate branch, and then start the stack up:

$ vagrant ssh node-1
$ git clone https://github.com/jlk/traefiked-wordpress.git
$ cd traefiked-wordpress
$ docker stack deploy --compose-file docker-compose.yml traefiked-wordpress

Finally, as this example has Træfik using hostname-based routing, you will need to create a mapping for beluga and humpback to an IP address in your hosts file. If you’re not familiar with how to do this, Rackspace has a good page covering the process for various operating systems. If you’re running this example locally, 127.0.0.1 should work for the IP address.
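For example, when running locally, the entries appended to /etc/hosts (on Linux or macOS) would look like this:

```
127.0.0.1   beluga
127.0.0.1   humpback
```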

Once that’s set up, you should be able to browse to http://beluga or http://humpback in your browser, and see two separate WordPress setup pages. Also, you can hit http://beluga:8090 (or humpback, localhost, etc) and see the dashboard for Træfik.


An Added Benefit, But with a Big Caveat

One of the things that drew me to Træfik is that it comes with Let’s Encrypt support built in. This allows free, automatic TLS certificate generation, authorization, and renewal. So, if beluga had a public DNS record, you could hit https://beluga.test.com and, after a few seconds, have a valid, signed TLS certificate on the domain. Details for setting up Let’s Encrypt in Træfik can be found here.

One important caveat that I learned the hard way: when Træfik receives a signed certificate from Let’s Encrypt, it is stored in the container. Unless specified otherwise in the Træfik configuration, this file lives on ephemeral storage and is destroyed when the container is recreated. In that case, each time the Træfik container is recreated and a proxied TLS site is accessed, Træfik will send a new certificate signing request to Let’s Encrypt and receive a newly signed certificate. If this happens often enough within a short period of time, Let’s Encrypt will stop signing requests for that domain for 7 days. If this happens in production, you will be left scrambling. The important line you need to have in your traefik.toml is…

       storage = "/etc/traefik/acme.json"

…and then make sure /etc/traefik is a volume you mount in the container.
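Putting that together, the ACME section of traefik.toml might look roughly like this (a hedged sketch of Træfik 1.x syntax – the email address is a placeholder, and the entry-point name must match one defined elsewhere in your configuration):

```toml
[acme]
# Contact address registered with Let's Encrypt (placeholder)
email = "you@example.com"
# Persist certificates outside the container -- mount /etc/traefik as a volume
storage = "/etc/traefik/acme.json"
# Entry point that serves TLS traffic
entryPoint = "https"
# Request certificates automatically for hosts named in frontend rules
onHostRule = true
```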

Now we understand external, DNS-based service discovery for Swarm Mode. In the final part of this series, we’ll add high availability and failover to this mixture.

Learn more about container networking at Open Networking Summit 2017. Linux.com readers can register now with code LINUXRD5 for 5% off the attendee registration.

John Kinsella has long been active in open source projects – first using Linux in 1992, recently as a member of the PMC and security team for Apache CloudStack, and now active in the container community. He enjoys mentoring and advising people in the information security and startup communities. At the beginning of 2016 he co-founded Layered Insight, a container security startup based in Silicon Valley where he is the CTO. His nearly 20-year professional background includes datacenter, security and network operations, software development, and consulting.

Stack Overflow Developer Survey Results 2017

Each year since 2011, Stack Overflow has asked developers about their favorite technologies, coding habits, and work preferences, as well as how they learn, share, and level up. This year represents the largest group of respondents in our history: 64,000 developers took our annual survey in January.

As the world’s largest and most trusted community of software developers, we run this survey and share these results to improve developers’ lives: We want to empower developers by providing them with rich information about themselves, their industry, and their peers. And we want to use this information to educate employers about who developers are and what they need.

We learn something new every time we run our survey. This year is no exception:

  • A common misconception about developers is that they’ve all been programming since childhood. In fact, we see a wide range of experience levels. Among professional developers, 11.3% got their first coding jobs within a year of first learning how to program. A further 36.9% learned to program between one and four years before beginning their careers as developers.
  • Only 13.1% of developers are actively looking for a job. But 75.2% of developers are interested in hearing about new job opportunities.

Read more at StackOverflow

A Beginner-Friendly Introduction to Containers, VMs and Docker

If you’re a programmer or techie, chances are you’ve at least heard of Docker: a helpful tool for packing, shipping, and running applications within “containers.” It’d be hard not to, with all the attention it’s getting these days — from developers and system admins alike. Even the big dogs like Google, VMware and Amazon are building services to support it.

Regardless of whether or not you have an immediate use-case in mind for Docker, I still think it’s important to understand some of the fundamental concepts around what a “container” is and how it compares to a Virtual Machine (VM). While the Internet is full of excellent usage guides for Docker, I couldn’t find many beginner-friendly conceptual guides, particularly on what a container is made up of. So, hopefully, this post will solve that problem 🙂

Let’s start by understanding what VMs and containers even are.

Read more at FreeCodeCamp

8 Practical Examples of Linux Xargs Command for Beginners

The Linux xargs command may not be a hugely popular command-line tool, but that doesn’t take away from the fact that it’s extremely useful, especially when combined with other commands like find and grep. If you are new to xargs and want to understand its usage, you’ll be glad to know that’s exactly what we’ll be doing here.

Before we proceed, please keep in mind that all the examples presented in this tutorial have been tested on Ubuntu 14.04 LTS. The shell used is Bash, version 4.3.11.
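As a small taste of what the tutorial covers, here is the classic find-plus-xargs-plus-grep pattern (the file names below are just an illustration): find emits one matching file name per line, and xargs batches those names onto grep’s command line.

```shell
# Create two sample log files; only one contains an error line.
mkdir -p /tmp/xargs-demo
printf 'all good\n'         > /tmp/xargs-demo/a.log
printf 'ERROR: disk full\n' > /tmp/xargs-demo/b.log

# grep -l prints only the names of files that contain a match,
# so this pipeline reports which logs mention ERROR.
find /tmp/xargs-demo -name '*.log' | xargs grep -l ERROR
```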

Read more at HowtoForge

Open Source JavaScript, Node.js Devs Get NPM Orgs for Free

NPM Inc.’s NPM Orgs tool, which has been available as a paid service for JavaScript and Node.js development teams collaborating on private code, is now available for free use by teams working on open source code.

The SaaS-based tool, which features capabilities like role-based access control, semantic versioning, and package discovery, now can be used on public code on the NPM registry, NPM Inc. said on Wednesday. Developers can transition between solo projects, public group projects, and commercial projects, and users with private registries can use Orgs to combine code from public and private packages into a single project. 

Read more at InfoWorld

Bash Scripting Quirks & Safety Tips

Yesterday I was talking to some friends about Bash, and I realized that, even though I’ve been using Bash for more than 10 years now, there are still a few basic quirks about it that are not totally obvious to me. So, as usual, I thought I’d write a blog post.

We’ll cover

  • some bash basics (“how do you write a for loop”)
  • quirky things (“always quote your bash variables”)
  • and bash scripting safety tips (“always use set -u”)

If you write shell scripts and you don’t read anything else in this post, you should know that there is a shell script linter called shellcheck. Use it to make your shell scripts better!
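Two of those tips can be sketched in a few lines (an illustrative snippet, not taken from the post itself): quoting controls word splitting, and set -u turns typo’d variable names into hard errors.

```shell
#!/bin/bash
set -u    # referencing an unset variable is now a fatal error, not a silent ""

greeting="hello world"

# Unquoted, $greeting undergoes word splitting and becomes two arguments;
# quoted, it stays a single argument.
printf '%s\n' $greeting   | wc -l    # 2 lines
printf '%s\n' "$greeting" | wc -l    # 1 line
```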

Read more at Julia Evans

TripleO QuickStart Master Branch Deployment with Feature Sets and Nodes Configuration (topology) Separated

Quoting the currently posted release notes:

  Configuration files in general_config were separated
  into feature sets (to be specified with the --config
  argument) and nodes configuration (to be specified with
  the --nodes argument).

  Featureset files should contain only the list of flags
  that enable features we want to test in the deployment;
  the overcloud nodes configuration, and all that involves
  their setup, should be put into nodes configuration
  files.

End quote.

Complete text may be seen at http://dbaxps.blogspot.com/2017/03/tripleo-quickstart-master-branch.html

This Week in Open Source News: Blockchain Helps China Go Green, Old Linux Vulnerability Exposed, and More

This week in Linux and open source news, The Linux Foundation’s Hyperledger Project is helping China get greener, an old Linux vulnerability surfaces, and more! Read on to stay in the OSS know!

1) IBM and Energy-Blockchain Labs announced a blockchain-based trading platform for “green assets” that’s based on Hyperledger.

How Blockchain Is Helping China Go Greener – Fox Business

2) “A Linux developer discovered a serious security hole that’s been hiding for years in an out-of-date driver.”

Old Linux Kernel Security Bug Bites – ZDNet

3) Gates’ Radiant Earth Project hopes to “encourage the creation of more open source technologies and innovation that can help ‘solve societies’ most pressing issues.'”

Bill Gates Has Started a New Crusade to Save the World – Fortune

4) Containerd to become a CNCF project

Docker and CoreOS Plan to Donate Their Container Technologies to CNCF – CIO

5) “IBM’s public cloud will run Red Hat’s OpenStack and Ceph storage products”

IBM + Red Hat = An Open Source Hybrid Cloud – NetworkWorld

Manjaro: User-Friendly Arch Linux for Everyone

Arch Linux has never been known as a user-friendly Linux distribution. In fact, the whole premise of Arch requires the end user to make a certain amount of effort in understanding how the system works. Arch even goes so far as to use a package manager (aptly named Pacman) designed specifically for the platform. That means all that apt-get and dnf knowledge you have doesn’t necessarily roll over.

Don’t get me wrong; Arch Linux is a fantastic distribution. However (and that “however” is significant), it’s certainly not a distribution for anyone even moderately new to the world of Linux. Case in point: When you boot up an ISO of Arch Linux, you wind up at a Bash prompt, where you then walk through the numerous steps (as outlined in the Installation guide) to get Arch Linux installed. In the end, you will be rewarded with a fine-tuned Linux distribution that will serve your needs well. On top of that, by the time you’ve installed Arch, you will know more about your operating system than you would have before.

But what about those who want the benefits of Arch Linux, but don’t want to have to go through the unwieldy installation? For that, you turn to a distribution like Manjaro. This take on Arch Linux makes the platform as easy to install as any operating system and equally as user-friendly to work with. Manjaro is suited for every level of user—from beginner to expert.

The big question, however, is why would you want to give Manjaro a try? With so many Linux distributions available, is there anything particularly compelling about this platform to woo you away from your current daily driver (or to simply test out what this Arch-based distribution is all about)? Let’s take a look.

32- and 64-bit friendly

While many distributions are dropping support for 32-bit architecture, Manjaro continues to support the aging platform. This means that all of your older hardware can still make use of this Arch-based operating system with the latest-greatest releases of software. This will become more crucial in the future, when more Linux distributions stop supporting 32-bit hardware.

Rolling Release

Manjaro (currently on its 17th iteration) is a rolling release distribution. What does that mean? For those that do not know, a rolling release distribution effectively means everything is updated frequently, even the core of the system, so that there is no need for point-based releases. This also means your machine will always have the latest-greatest stable software. Due to the frequency of the updates, they are also smaller. Some consider this a superior update delivery method, as there is less chance of software breakage.

Choose your desktop

At the moment, you can choose between the Xfce, KDE, or GNOME editions. All three follow similar design concepts and offer a very clean and professional look (Figure 1).

Figure 1: The Xfce version of Manjaro keeps things clean and simple.

The Net edition provides a base installation without a pre-existing display manager, desktop environment, or any desktop software. With this particular release, you can customize it to perfectly meet your needs.

There are also community editions offering spins based on a number of other desktops.

The Manjaro developers have done a fantastic job of making Xfce, GNOME, and KDE versions look and feel the same. The biggest difference, for me, is that both the KDE and GNOME takes on the distribution are a bit more elegant and modern than Xfce (which might sway you one way or another).

Software

Beyond Manjaro’s ability to make Arch easy, one of the most impressive aspects to be found on this desktop Linux distribution is the collection of included software. Yes, you’ll find the standard productivity software:

  • LibreOffice

  • GIMP (Xfce version only)

  • Inkscape and Krita (KDE version only)

  • File managers and other standard desktop tools

  • Firefox (all three versions)

  • Thunderbird (KDE and Xfce versions)

  • Evolution (GNOME version)

But beyond the basics, you’ll also find the likes of:

  • Avahi SSH Server and Zeroconf Browser

  • Steam

  • Bulk Rename

  • Catfish File Search

  • Clipman

  • HP Device Manager

  • Orage Calendar

  • Htop

  • GParted

  • Yakuake (KDE version only)

  • Octopi CacheCleaner (KDE version only)

Along with those packages, Manjaro offers an easy-to-use Add/Remove Software tool (Figure 2) that allows you to install software from a vast collection of titles.

Figure 2: The Manjaro Add/Remove Software tool.

Understand, the pre-installed package listing will vary, depending on which desktop environment you’ve chosen to install. For example, the KDE version of Manjaro leans heavily on KDE applications, and the GNOME version on GNOME software. You will find, however, that all three official desktop iterations include LibreOffice, so your productivity is covered, regardless of environment.

The package manager GUI is as simple to use as any: Open the tool, search for what you want to install, select the software, and click Apply. Updates are just as easy. When an update has arrived, you will be notified in the system tray. Click the notification and okay the installation of the upgrades.

Settings Menu

One nice touch in the Xfce spin of Manjaro is the Settings menu. Click on the Main menu and then click Settings on the right side of the menu to reveal an impressive number of configuration options (Figure 3).

Figure 3: The Manjaro Settings menu offers a wide collection of configuration options.

With the KDE and GNOME flavors of Manjaro, you work with the standard tools of that particular desktop environment, for a bit more cohesive feel. If you’ve used a recent release of either KDE or GNOME, you’ll feel right at home. The GNOME iteration also includes the Dash to Dock extension, for those who prefer a more “dock-like” approach to the desktop.

Media

I was pleasantly surprised that Manjaro was able to play MP3s out of the box with one of its media players. The Xfce edition of Manjaro ships with both Guayadeque and Parole media players. Of the two, only Guayadeque was able to play MP3 files out of the box. YouTube videos play without issue and Netflix only requires the enabling of DRM (Figure 4) and the installation of the Random Agent Spoofer extension.

Figure 4: Enabling DRM for Netflix.

Once you’ve taken care of those two issues, Netflix plays seamlessly (Figure 5).

Figure 5: Catching a little Buffy The Vampire Slayer on Netflix.

Performance

As for performance, you can opt for any of the official editions of Manjaro and expect incredible speed. Running as a VirtualBox guest with 3GB of RAM, Manjaro ran as smoothly and quickly as its Elementary OS Loki host, which had the remaining 13GB of RAM. That should tell you all you need to know about the performance of Manjaro. As a whole, there is absolutely nothing to complain about with regard to Manjaro’s performance. It’s quick, smooth, and reliable. The GNOME, KDE, and Xfce editions are all flawless.

Who’s it for?

In the end, I think it’s safe to say that Manjaro Linux is a distribution that is perfectly capable of pleasing any level of user wanting a reliable, always up-to-date desktop. Manjaro has been around since 2011, so it’s had plenty of time to get things right… and that’s exactly what it does. If you’ve been looking for the ideal distribution to help you give Arch a try, the latest release of Manjaro is exactly what you’re looking for.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.