
Distributing Docker Cache across Hosts

Building and compiling code can take a big bite out of our time and resources. If you have dockerized your application, you may have noticed what a time-saver the Docker cache is. Lengthy build commands can be cached and not have to be run at all! This works great when you’re building on a single host; however, once you start to scale up your Docker hosts, you start to lose that caching goodness.

In order to take advantage of Docker caching on multiple hosts, we need a multi-host cache distribution system. Our requirements for preserving a single-tenant infrastructure for our customers meant we needed a horizontally scalable solution. This post will go through some methods we considered to distribute Docker cache across multiple Docker hosts.

Read more at Runnable

Gradle Goodness: Running All Tests From One Package

Hubert Klein Ikkink shows how to run all tests in Gradle from one package, complete with a set of instructions for different scenarios.

If we have a Gradle task of type Test we can use a filter on the command line when we invoke the task. We define a filter using the --tests option. If, for example, we want to run all tests from a single package, we must define the package name as value for the --tests option. It is good to define the filter between quotes, so it is interpreted as is, without any shell interference.
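As a sketch (the package name com.example.myapp is an invented example), the invocation looks like the commented line below, and a small shell demo shows why the quoting advice matters:

```shell
# The filter itself (package name is hypothetical):
#
#   gradle test --tests 'com.example.myapp.*'
#
# Why the quotes matter: an unquoted * is subject to shell globbing, so if
# a file in the current directory happened to match the pattern, the shell
# would rewrite the argument before Gradle ever saw it. A quick demo:
demo=$(mktemp -d) && cd "$demo"
touch com.example.myapp.Foo
echo com.example.myapp.*     # unquoted: the shell expands it to the matching filename
echo 'com.example.myapp.*'   # quoted: passed through to the command as-is
```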

Read more at DZone.

6 Best Email Clients for Linux Systems

Email is an old form of communication, yet it remains one of the most basic and important methods of sharing information to this day, but the way we access emails has changed over…


Read the complete article: http://www.tecmint.com/best-email-clients-linux/

Microsoft Azure Brings CoreOS Linux to China

CoreOS Linux, an open source Linux operating system, is now available in China. Microsoft Azure operator 21Vianet has become the first officially supported cloud provider to offer CoreOS Linux in China. Until now, many Chinese organizations have deployed CoreOS Linux internally, on their own.

“As a supporter of Linux and open source, we believe in the importance of working with innovators in open source like CoreOS to enable choice and flexibility for cloud customers,” said Mark Russinovich, Chief Technology Officer, Microsoft Azure in a press statement. “The combination of CoreOS Linux with the power and scale of Microsoft’s cloud will help to inspire creation of new applications and collaboration across teams around the world,” he said.

With this availability of CoreOS Linux in a new region, both small and large organizations across continents will benefit from running their applications in software containers on a consistent platform globally, said Alex Crawford, head of CoreOS Linux at CoreOS, in an interview with me.

Additionally, according to Al Gillen, group vice president, enterprise infrastructure at IDC, “With open source infrastructure solutions like CoreOS Linux available in China, Chinese businesses will be able to more easily adopt container infrastructure, while companies outside China can extend a single container platform worldwide and more easily deploy applications in China.”

Microsoft recently announced that it will continue to expand market share in China. According to a China Daily report, Microsoft increased its corporate customer base from 50,000 in 2015 to 65,000 in 2016. That’s impressive growth as Azure was launched in China only two years ago.

This growth is good news for CoreOS Linux. Crawford said, “The entire user base of Microsoft Azure now has CoreOS Linux as a best-practice option for modern, microservices container deployments. That alone constitutes a market primed for expansion.”

Cloud deployments on Azure will expand the community that already exists in China thanks to organizations like Huawei and Goyoo Networks, which today run advanced, secure, dynamic CoreOS infrastructure on-premises.

The arrival of CoreOS Linux in China will also spark interest from the developer community. It’s hard for any open source project to track how much contribution comes from a particular region, but if corporate users are consuming an open source technology locally, the engineers and developers at those companies and customers will naturally start contributing. Such work can trigger the formation of vibrant communities in that region. And that’s what may happen with CoreOS.

“The open source community in China as well as Chinese businesses who want to adopt secure, reliable container infrastructure more easily will benefit from using CoreOS Linux in China. Existing CoreOS Linux users who want to extend their presence to China and run a consistent platform for distributed applications worldwide will also benefit,” said Crawford.

Developers in China can already get started with CoreOS Linux by following the CoreOS Azure Documentation.



“CoreOS believes in bringing innovations in distributed systems and containers via open source software to communities worldwide,” said Brandon Philips, CTO at CoreOS. “Bringing CoreOS Linux to the open source community in China means that secure, automatic updates are at the fingertips of more container users worldwide.”

CoreOS, Inc. is not stopping at Microsoft Azure. “We will work with selected other providers toward official support on their platforms in the future,” said Crawford.

CoreOS, Inc. is behind many enterprise open source projects including CoreOS Linux, etcd, rkt, Tectonic, and Quay.

DevOps Students Learn the Value of Uptime With 3 a.m. Calls

Students at the Holberton School, San Francisco’s innovative new school for training students of any age to be full stack software engineers, are being woken early, really early, to learn just what it’s like to be a part of a DevOps team.

DevOps is a set of practices, a philosophy aiming for agile operations, to expand the collaboration between developers and operation folks to make them work toward the same goal: contribute to the entire product life cycle, from design, development and shipping, up to the production stage. This is a radical shift from the industry norm of separate engineering and operations departments which often operate in opposition to each other.

Holberton is partnering with PagerDuty, a six-year-old IT incident management startup, to wake students up to the reality of on-call engineering. Students will be on call 24/7, not only for their personal projects but also for group projects.

In the industry, engineers are often on call for systems they did not build, but that they still need to support. In that situation the challenge is even trickier.

“Uptime is the number one goal of any SRE/DevOps/System administrator team,” said Casey Brown, manager, Site Reliability Engineering at LinkedIn. “Nowadays, well established companies like LinkedIn, Facebook and Google are also expecting developers to be fully responsible for their code in production. Having production in mind and being ready for it is something that every good developer must have, yet no school prepares students for that.”

Hands-on DevOps training isn’t the only way we have been innovating. Since the school’s inception last year, we’ve been offering unique opportunities for students: from our tuition model and admissions process to our certificate verification process based on blockchain, the technology behind Bitcoin.

One of our core precepts is that our students learn by doing, and being on call is largely about experience; it is not something you can learn from a book. With this program, students will already have one-and-a-half years of on-call experience, because we put our students through their paces, and that sometimes means a panicked call at 3 a.m. What better way to be prepared?

Sylvain Kalache is a co-founder of Holberton School and a former Senior Site Reliability Engineer at LinkedIn.

Holberton School is a project-based alternative to college for the next generation of software engineers. Using project-based learning and peer learning, Holberton School’s mission is to train the best software engineers of their generation. At Holberton School, there are no formal teachers and no formal courses. Instead, everything is project-centered. The school gives students increasingly difficult programming challenges to solve, and gives them minimal initial directions on how to solve them. As a consequence, students naturally look for the theory and tools they need, understand them, use them, work together, and help each other.


OpenHPC Establishes Leadership & Releases Initial Software Stack

Today the Linux Foundation announced a set of technical, leadership and member investment milestones for OpenHPC, a Linux Foundation project to develop an open source framework for High Performance Computing environments.

While HPC is often thought of as a hardware-dominant industry, the software requirements for supercomputing deployments and large-scale modeling are increasingly demanding. An open source framework like OpenHPC promises to close technology gaps that hardware enhancements alone can’t address.

Read more at insideHPC


How to Stand Up a 600 Node Bare Metal Mesos Cluster in Two Weeks – Craig Neth, Verizon Labs

https://www.youtube.com/watch?v=6P8htQnXCfM?list=PLGeM09tlguZQVL7ZsfNMffX9h1rGNVqnC


In this talk, Craig Neth (Distinguished Member of Technical Staff at Verizon) will describe his experiences in bringing up a 600 node Mesos cluster – from power on to running tasks in 14 days.

How Verizon Labs Built a 600 Node Bare Metal Mesos Cluster in Two Weeks

Verizon Labs is building some impressive projects around Apache Mesos and relies on a lot of open source software for functionality: operating systems, networking, provisioning, monitoring, and administration. Open source software is popular at Verizon Labs because it gives them the flexibility and the functionality to do what they want to do, without fighting vendor restrictions.

Apache software plays a key role, including Mesos, Kafka, Spark, and the Apache HTTP server, along with a host of other open source software, including Docker, Ansible, CoreOS, DHCPD, Ubuntu Linux, and Fleet.

In his talk at MesosCon North America earlier this month, Larry Rau, Director of Architecture and Infrastructure at Verizon Labs gives a live demonstration of a large-scale messaging simulation across multiple datacenters, including a failure and automatic failover. You can see it all happening in real time during his keynote.

In the second talk, Craig Neth, Distinguished Member of the Technical Staff at Verizon Labs, describes building a 600-node Mesos cluster from bare metal in two weeks. His team didn’t really get it all done in two weeks, but it’s a fascinating peek at some ingenious methods for accelerating the installation and provisioning of the bare hardware, and some advanced ideas on hardware and rack architectures.

Keynote: Verizon Calls Mesos

Larry Rau, Director of Architecture and Infrastructure, Verizon Labs

Larry Rau, Director of Architecture and Infrastructure at Verizon Labs, gave a live demonstration of a high-volume messaging system built on Mesos. The demo simulated 110 million devices generating over 400,000 messages per second over Verizon’s wireless network, managed by multiple data centers. The demo included the failure of one data center, and seamless failover to other data centers.

Verizon’s software stack is stuffed with open source software, including CoreOS Linux, the Mesosphere data center operating system, Apache Kafka, which is a high-throughput distributed messaging system, and Apache Spark, for fast big data processing.

Rau explained that their decision to go with Mesos was to increase efficiency and flexibility: “We chose Mesos as a platform because we wanted to basically do this. We wanted to run lots of containers. We realised this, we really buy into the ‘we don’t need a virtual machine layer’ idea; we want to containerize, run microservices, and we’ve got to run lots of these different microservices within our cluster.”

“This is another key point: We didn’t want any more silos,” he said. “If I looked across how we built applications and deployed them in the past, they were all silos of machines and applications and put into these data centers. Every time you wanted to bring up a new application, you had to go source hardware, deploy hardware, deploy applications, set up new teams and monitor it. Really we didn’t want to do that anymore. We really wanted to go cluster computing, so we have lots of very similar, same types of computers running in a cluster, we run our applications across all these.”

https://www.youtube.com/watch?v=RjtPQFGI-Eg?list=PLGeM09tlguZQVL7ZsfNMffX9h1rGNVqnC

How to Stand Up a 600 Node Bare Metal Mesos Cluster in Two Weeks

Craig Neth, Distinguished Member of Technical Staff, Verizon Labs

In this video, Craig Neth tells how he and his team attended MesosCon in Seattle in August 2015 and were excited and inspired to set up their own test cluster. He asked his boss for a couple of racks, and instead was given the go-ahead for a 20-rack test lab. This may sound like being showered with riches, but it also meant being showered with headaches, because part of the deal was using experimental hardware and rack designs, and having it all done by Christmas.

His team had to find a location for their new cluster lab and then had to figure out power and cooling. The compute sleds included “a standard off-the-shelf Intel Taylor Pass motherboard. It’s got two CPU sockets…Each one of them has a plug-in 10 gig PCI nic card. We use that for our data plane stuff. We use a couple of the one-gig nics on there, one for management and one for the IPMI network. That’s how you get the four servers per 2U.” The sleds do not have power supplies, but rather draw DC power from a common bus bar across the backs of the racks. All the interconnects are on the back as well.

The storage sleds are configured differently from the compute sleds. “It’s a two-layer system. The top layer has 16 six-terabyte drives, spinning drives. The bottom layer has got another one of those Taylor Pass motherboards and a couple of SSDs down there. They’re the exact same motherboards that we run in the compute sleds. The only difference here is on this particular cluster we only have one socket populated.”

Provisioning all these machines was considerably accelerated by having the vendor do the preliminary work, and Neth is proud that they only had to connect a single serial cable to configure the first node, and then the rest was done automatically.

Nodes are cattle. They’re not pets.

Maintenance is pull-and-replace, and uses the same auto-provisioning as the initial installation. “Our maintenance model is we don’t replace components in any of these things,” Neth said. “We replace sleds. If we lose a disc, if we lose some memory, if we lose fans, whatever it is, we call up the vendor, and they overnight us a new sled, and we just pull out the old sled. We get the new sled. We get metadata for the sled so we can provision it and bring it right back up again. Nodes are cattle. They’re not pets.”

https://www.youtube.com/watch?v=6P8htQnXCfM?list=PLGeM09tlguZQVL7ZsfNMffX9h1rGNVqnC

Getting Creative With Mesos

You might also enjoy 4 Unique Ways Uber, Twitter, PayPal, and Hubspot Use Apache Mesos. And, come back for more blogs on ingenious and creative ways to hack Mesos for large-scale tasks.

MesosCon Europe 2016 offers you the chance to learn from and collaborate with the leaders, developers and users of Apache Mesos. Don’t miss your chance to attend! Register by July 15, 2016 to save $100.


Apache, Apache Mesos, and Mesos are either registered trademarks or trademarks of the Apache Software Foundation (ASF) in the United States and/or other countries. MesosCon is run in partnership with the ASF.

Keynote: Verizon Calls Mesos

https://www.youtube.com/watch?v=RjtPQFGI-Eg?list=PLGeM09tlguZQVL7ZsfNMffX9h1rGNVqnC

As data services change the way the world does business, Verizon Labs has built a platform designed around the Mesos open source system that enables the robust development of micro services for a variety of products and services. This presentation from MesosCon will use a real world example of how Verizon Lab’s Mesos-based platform integrates with America’s most reliable wireless network to transform how Verizon builds and delivers new services. 

Make Peace With Your Processes: Part 2

In this article, we continue our look at processes and go back to school for a moment, where we’ll pick up some of the basics that will help us increase our knowledge later. We will (very loosely) mimic the Process Tree output that we saw in the previous article with just the ps command. In case it’s confusing: you might stumble across “forest” puns in the manuals with regard to “trees.”

The “-e” switch is also equivalent to “-A” and will dutifully display ALL of the processes. We’re going to combine that with the “-j” option, which should give us a “jobs format.” I’m sorry to potentially give you more headaches, and I encourage you to try a few of these alternative options, but one small caveat would be that running “j” and “-j” gives different levels of information. The non-hyphenated version provides more output in this case.

We will also add the “-H” to display the hierarchical output. We can achieve this by running:

# ps -ejH

Or, in a parallel universe, you could use the following for vaguely similar results:

# ps axjf

Try it yourself. It’s far from identical to the “pstree” command seen previously, but there are similarities in the output due to the hierarchies.

Day to Day

It is easy to get overwhelmed by the level of detail that the ps command can provide. My favorite process command is:

# ps -ef 

Using the “-e” option gives us a display of all the processes in the Process Table and the newly added “-f” switch gives us what’s known as full-format listing. Apparently, we can add “-L” to this concoction to offer us process thread information. I find this command very easy to remember around Christmas time.

# ps -eLf

Et voilà, as requested, two new columns NLWP (number of threads) and LWP (thread ID) are added to the now busy display.
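If you only care about the thread count, “nlwp” also works as a standalone output column with “-o”; a quick sketch, inspecting the current shell:

```shell
# Ask ps for just the thread count (NLWP) of the current shell; the
# trailing "=" after nlwp suppresses the column header:
ps -p "$$" -o nlwp=
```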

If you only wanted to print the name of a process (which might be ideal for scripting), then you can separate the wheat from the chaff by using this command for process number “37”:

# ps -p 37 -o comm=

This is an option that I’ve used a few times in the past. You might also want to check which processes your terminal is currently responsible for, which is useful if you’re stressing your server or worrying your workstation with lots of extra load. The simple (non-hyphenated) “T” switch lets us view this.

# ps T

You can test this by running something unusual — like a for loop with a pause in it using “sleep” — or anything odd that stands out, such as this command.

# echo {1..999999} &

This simply runs a counter process in the background. And, when we run “ps T”, we can see processes associated with the terminal in question.
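Putting the earlier “-o comm=” trick together with a background job, here’s a small experiment you can run yourself (the sleep job simply stands in for something more interesting):

```shell
# Start a throwaway background job so we have a PID to play with:
sleep 30 &
BG_PID=$!

# Print only the command name for that PID; the "=" after "comm"
# suppresses the column header, which keeps scripts tidy:
ps -p "$BG_PID" -o comm=

# Clean up the background job:
kill "$BG_PID"
```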

Built-In Features

Let’s look at a few other built-in basics of the pliable ps command.

You can reverse the output of a command with a non-hyphenated “N” switch, which stands for negate. Or, more precisely, from the manual, this lets you “Select all processes except those that fulfill the specified conditions (negates the selection). Identical to --deselect.”

All is revealed in Listing 1. As you can see, there isn’t any mention of “ntp” except from our ps command.

# ps N ntp


  PID TTY      STAT   TIME COMMAND
 1414 tty1     Ss+    0:00 /sbin/mingetty /dev/tty1
 1416 tty2     Ss+    0:00 /sbin/mingetty /dev/tty2
 1418 tty3     Ss+    0:00 /sbin/mingetty /dev/tty3
 1420 tty4     Ss+    0:00 /sbin/mingetty /dev/tty4
 1426 tty5     Ss+    0:00 /sbin/mingetty /dev/tty5
 1430 tty6     Ss+    0:00 /sbin/mingetty /dev/tty6
 9896 pts/1    S      0:00 sudo -i
 9899 pts/1    S      0:00 -bash
10040 pts/1    R+     0:00 ps N ntp

Listing 1: Running processes with “ntp” excluded (note our own “ps” command appearing again)

Imagine that you wanted to see the SSH activity as well as the Hardware Abstraction Layer daemon, “hald”. These are hardly related, I agree, but you can never account for strange scenarios when it comes to computers.

The following command searches for a list of processes by name, separated by just a comma, without a space:

# ps -C sshd,hald

If you need to check any processes that are run by a particular system group, you can do so with the following command:

# ps -G ntp
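If “ntp” isn’t running on your machine, the same selection works with any group name or numeric group ID; here’s a variation using your own real group ID, which is guaranteed to match at least your current shell:

```shell
# Select processes whose real group matches our own group ID, as
# reported by id -g:
ps -G "$(id -g)"
```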

Compatibility

Although it’s not always the case, the modern ps command cleverly mitigates our migraine-inducing compatibility headaches by letting us run a simple command in several ways.

If, for example, you wanted to select a list of processes that belonged to the superuser, root, then you could achieve this with the following three commands (which admittedly display ever-so-slightly different outputs):

# ps -u root 

# ps U root 

# ps --user root

The above commands dutifully offer what’s known as the “EUID” or “effective ID” of a user but not the “real user ID”. In reality — no pun intended — every process actually has two user IDs, just to keep things simple. This also applies to groups, but let’s not worry about that.

Apparently, the kernel is most concerned with the “effective user ID” for activities such as writing to a file and whether a user is allowed to complete a request to do something that requires a privilege.

And, although this is required much of the time, there’s an important scenario in which the “real user ID” must be paid attention to. If someone or something wants to alter the “effective user ID” of an already running process, then the kernel needs to check both the “real user ID” and the “effective user ID”.

Changing the ownership of a process is particularly useful if a new user wants to do essentially the same thing (like write to a file) as the existing owner does. Rather than duplicating the process (adding extra strain to the system and potentially introducing more security aspects to consider), we can simply reassign it.

What about after a user is finished with their short task? The answer is that we only grant access temporarily and then swap it back to the original owner. If you want to select by the somewhat elusive “real” group IDs, then you can do so like this (which is the same as “-G”):

# ps --Group ntp

And, not surprisingly, we can do exactly the same thing for users as follows (we do this with “-U”):

# ps --User chrisbinnie
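Selection aside, you can also print the effective and real IDs side by side as output columns, which makes the distinction easy to see for yourself:

```shell
# "euser" is the effective user, "ruser" the real user; for most
# processes the two columns will match:
ps -eo pid,euser,ruser,comm | head -n 5
```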

If you want to query a very specific Process ID (because you’ve spotted it in “top” or a script has complained about it), then for all intents and purposes, these commands all do the same thing. I have included some output for comparison, because it’s short and easy to read in this case:

# ps -p 1248

  PID TTY          TIME CMD
 1248 ?        00:00:08 ntpd

# ps p 1248

  PID TTY      STAT   TIME COMMAND
 1248 ?        Ss     0:08 ntpd -u ntp:ntp -p /var/run/ntpd.pid -g

# ps --pid 1248

  PID TTY          TIME CMD
 1248 ?        00:00:08 ntpd

If you’ve ever wondered about the efficiencies of a system, here’s something interesting. The kernel has to be able to tidy up when a user logs out (otherwise, there would be a plethora of useless processes clogging up the pipes), so Unix-like systems dutifully group processes into “sessions”. You can select by session ID using “--sid” or, as below, with “-s”:

# ps -s 1526

Note that a session can have an associated controlling terminal (of the “tty” or “Teletype” variety); however, only one process group can be running in the foreground at any time. All these components are given numbers to keep the system nice and orderly. As a result, we have thread IDs, process IDs, process group IDs, and session IDs. And here you were thinking that the ps command didn’t have much to do.
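You can see several of these IDs at once by asking ps for them as output columns:

```shell
# One row per process, showing its process group (PGID) and session
# (SID) IDs alongside the PID and parent PID:
ps -eo pid,ppid,pgid,sid,comm | head -n 5
```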

If you’re interested in reading a little more about sessions, this book excerpt is intriguing with sentences such as “Consequently, sessions are largely the business of shells. In fact, nothing else really cares about them.”

Here, I’ve provided a bit more detail about the powerful ps command and how it can help you discover information about your processes. We’ll look at parent processes, filesystems, and more in the next few articles.

Chris Binnie is a Technical Consultant with 20 years of Linux experience and a writer for Linux Magazine and Admin Magazine. His new book Linux Server Security: Hack and Defend teaches you how to launch sophisticated attacks, make your servers invisible and crack complex passwords.