
Fedora 24 — The Best Distro for DevOps?

“Give us the tools, and we will finish the job.” Winston Churchill.

If you have been to any DevOps-focused conferences — whether it’s OpenStack Summit or DockerCon — you will see a sea of MacBooks. Thanks to its UNIX base, availability of Terminal app and Homebrew, Apple hardware is extremely popular among DevOps professionals.

What about Linux? Can it be used as a platform by developers, operations, and DevOps pros? Absolutely, says Major Hayden, Principal Architect at Rackspace, who used to be a Mac OS user but has since switched to Fedora. Hayden used Mac OS for everything: software development and operations. Mac OS has all the bells and whistles you need in a consumer operating system, and it lets software professionals get the job done. But developers are not the target audience of Mac OS, so they have to make compromises. “It seemed like I had to have one app that would do one little thing and this other app would do another little thing,” said Hayden.

In contrast, a Linux-based distribution offers a more streamlined workflow. All you need as a DevOps engineer is a terminal, a browser, and an editor. Period.

Fedora is a great platform for DevOps

Hayden is currently running Fedora 24, the latest release of Fedora, on his machine. According to him, Fedora is a great distribution because it offers the latest and greatest versions of apps and libraries. “You’ve got modern GCC, you’ve got modern Python and that kind of thing. You have the flexibility to go install your own version of Python or something like that if you want, too,” said Hayden.

Unlike many other distributions, you don’t have to bloat your system by adding too many third-party repos or PPAs to get the latest version of apps. No wonder even Linus Torvalds uses Fedora.

Fedora now comes in three versions: Workstation, Cloud, and Server. If you are going to use GUI tools, then Workstation is the right choice for you. Almost every tool you need is available either through DNF or through third-party repositories. Hayden said that he rarely needs anything from third-party repos; almost everything is in the main repos. To make life even simpler, there are DNF groups that bundle related packages together. Running “dnf group list” will show all the available groups.

“The developer tools group is really handy if you just need to bootstrap a system and have make and automake and CMake, and GCC and GCC for C++. You can just get that list of packages really quickly. Of course it includes all the tools that you would need when something goes wrong, like Valgrind,” said Hayden.
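As a sketch of that workflow (the group name “Development Tools” and the guard branch are illustrative; this assumes a Fedora/RHEL-family system for the DNF path):

```shell
# List the package groups DNF knows about. On Fedora, one group pulls in
# the whole build toolchain (make, automake, cmake, gcc, gcc-c++, valgrind, ...).
if command -v dnf >/dev/null 2>&1; then
    dnf group list --available | head -n 15
    # Then install the whole group in one shot (needs root):
    #   sudo dnf -y group install "Development Tools"
else
    echo "dnf not found: this is not a Fedora/RHEL-family system"
fi
```

The `command -v` guard just keeps the snippet safe to run on non-Fedora machines.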

“In addition to that, if you need to go in and audit why your application is using so much RAM, why something is not allocating memory properly, or why it’s leaving file handles open, you can go and investigate that with those tools, too,” he said.

With Fedora 24, you get all the tools you need to write your software, all the tools you need to compile it, and all the tools you need to look at it when something explodes.

Depending on what you are going to do, every tool is available on Fedora 24, from Ansible to Jenkins. All of the DevOps tools mentioned in this previous article are available for Fedora. If you are using Fedora and you want to install Ansible, all you need is “dnf install ansible” and that’s it. On Mac OS, by contrast, you have to figure out where Homebrew puts everything, and you need to run virtual machines to host Docker containers, whereas on Linux you can run them natively.

The best part is that even if a tool is not in DNF or the repos, you can still install it in your home directory and start using it. You don’t have to become root or have files scattered all over the place.
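A minimal sketch of that home-directory pattern (the tool name `mytool` is a made-up placeholder; `~/.local/bin` is just a conventional location):

```shell
# Install a "tool" into your home directory, no root required.
mkdir -p "$HOME/.local/bin"
printf '#!/bin/sh\necho hello from my tool\n' > "$HOME/.local/bin/mytool"
chmod +x "$HOME/.local/bin/mytool"

# Put your private bin directory on the PATH (add this to ~/.bashrc to persist):
export PATH="$HOME/.local/bin:$PATH"

mytool   # prints: hello from my tool
```

Everything stays under your own home directory, so removing the tool is just deleting one file.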

Fedora also doubles as a personal machine. It comes with GNOME as the default desktop environment, which offers a great desktop experience. So anything from browsing the web and checking email to watching Netflix can easily be done from the same machine.

Most importantly, you need your OS to be as agile as your infrastructure. Fedora keeps you up to speed with the latest versions of packages. Some distros, such as Arch Linux and Gentoo, are known for even faster access to the latest packages, but it can be counterproductive to compile packages all the time if you have a lot installed. According to Hayden, “Debian is also a pretty good platform to work from because it’s a little bit more consistent than Ubuntu.”

Hardware?

Fedora 24 isn’t demanding of hardware. But, if you are going to use your machine for coding, you need a modern processor and at least 8GB of RAM, especially if you are doing a lot of work with Java. Hardware is inexpensive these days, and going from 4GB to 8GB future-proofs you.

Additionally, Red Hat is hiring even more engineers to test more hardware, so whichever machine you buy is likely to work out of the box on Fedora and RHEL.

So, what I gather from this conversation is that there are five advantages to using Fedora 24 as a DevOps platform:

  • A lean and thin OS that comes with everything you need without any bloat

  • Access to the latest packages

  • A distraction-free platform

  • An OS that’s the foundation of the most popular Linux platform in the enterprise space: RHEL

  • The system doubles as your entertainment platform.

If you are using Linux as a DevOps platform, which distro are you using?


Use Models to Measure Cloud Performance

With good modeling, you can have a clear view of what is going on with your cloud deployment even though you don’t have control of the underlying systems.

When I was young, I made three plastic models. One was of a car—a ’57 Chevy.  Another was of a plane—a Spitfire. And a third was of the Darth Vader TIE Fighter. I was so proud of them. Each one was just like the real thing. The wheels turned on the car, and the plane’s propeller moved when you blew on it. And of course, the TIE Fighter had Darth Vader inside.

When I went to work on the internet, I had to measure things. As I discussed in my last post, “Measure cloud performance like a customer,” when you measure on the internet you need to measure in ways that are representative of your customers’ experiences. This affects how you measure in two ways. The first is the perspective you take when measuring, which I talked about last time. The second is the techniques you use to perform those measurements.

Read more at Network World

Strategies for Running Stateful Applications in Kubernetes: Volumes

One of the key challenges in running containerized workloads is dealing with persistence. Unlike virtual machines, which offer durable and persistent storage, containers come with ephemeral storage. Right from its inception, Docker encouraged the design of stateless services. Persistence and statefulness are an afterthought in the world of containers, but this design works in favor of workload scalability and portability. It is one of the reasons why containers are fueling cloud-native architectures, microservices, and web-scale deployments.

Now that the benefits of containers are clear, there is an ongoing effort to containerize stateful applications so they can run seamlessly alongside stateless ones. Docker volumes and plugins are a major step toward turning stateful applications into first-class citizens of Docker.

Read more at The New Stack

GitHub Open-Sources Internal Load-Balancing Software

GitHub will release as open source the GitHub Load Balancer (GLB), its internally developed load balancer.

GLB was originally built to accommodate GitHub’s need to serve billions of HTTP, Git, and SSH connections daily. Now the company will release components of GLB via open source, and it will share design details. 

“Historically one of the more complex components has been our load-balancing tier,” said Joe Williams, GitHub senior infrastructure engineer, and Theo Julienne, GitHub infrastructure engineering manager, in a co-authored bulletin. “Traditionally we scaled this vertically, running a small set of very large machines running haproxy and using a very specific hardware configuration allowing dedicated 10G link failover.”

Read more at InfoWorld

Blockchain Just Made Some New Friends in Congress

Meet the Blockchain Caucus.

Boy, has blockchain become respectable. It wasn’t long ago that the face of the technology, which powers the crypto-currency bitcoin, was libertarians and drug dealers. Today, it’s the banking industry and members of Congress.

On Monday, Rep. Jared Polis (D-CO) and Rep. Mick Mulvaney (R-SC) announced the creation of a “Blockchain Caucus” to promote laws and policies that encourage the development of crypto-currencies and other blockchain-related tools.

Read more at Fortune

Exactly What Is OpenStack? Red Hat’s Rich Bowen Explains

You’ve probably heard of OpenStack. It’s in the tech news a lot, and it’s an important open source project. But what exactly is it, and what is it for? Rich Bowen of Red Hat provided a high-level view of OpenStack as a software project, an open source foundation, and a community of organizations in his talk at LinuxCon North America.

OpenStack is a software stack that went from small to industry darling at warp speed. It has three major components: a compute service that runs the virtual machines (VMs), a networking service, and a storage service, plus a dashboard to manage everything. OpenStack is only six years old and was born as a solution devised by Rackspace and NASA to solve a specific problem.

Bowen says, “NASA had this problem where they were taking photographs, and individual photographs were going to take several months to upload to their AWS instances, because these photographs were terabytes and petabytes big, pictures of space from the Hubble Space Telescope. Rackspace, on the other hand, was running a very successful web hosting and VM hosting business, and they were looking for a way to automate the process of creating new VMs without actually having to have engineers go press buttons. These two organizations realized that they were solving similar and overlapping problems, and they started the OpenStack project.”

OpenStack is an open source foundation. What does that mean? Bowen tells us one of the biggest values of an open source foundation: “Vendor-neutral governance is very important in open source projects. With the vendor-neutral governance that is offered, that is enforced by the foundation, it ensures that everybody has an equal voice. It also ensures project sustainability. This is an important thing in any major open source project. If HP or Red Hat or Mirantis were suddenly to lose interest in the OpenStack project, it wouldn’t go away.”

OpenStack is a community organization. From its humble beginning, OpenStack has grown to more than 55,000 members, including more than 600 companies. The foundation covers 57 separate, semi-autonomous projects, which could be viewed as a semi-organized cat herd. Bowen says, “The projects themselves make their own technical decisions. Now, they have to submit to the judgment of the technical committee, because interoperability between projects is obviously pretty important. The technical decisions, the road maps are decided at the project level rather than at the foundation level. OpenStack provides open governance for these projects, and each project elects what’s called a project technical lead. In most projects, decision making is completely collaborative and done on the mailing list.”

Watch Rich Bowen’s talk (below) to learn how open code, open governance, and open discussion all operate to help create great communities that produce great software like OpenStack.

https://www.youtube.com/watch?v=idyiZAz1PK8&list=PLbzoR-pLrL6qBYLdrGWFHbsolIdJIjLnN


You won’t want to miss the stellar lineup of keynotes, 185+ sessions and plenty of extracurricular events for networking at LinuxCon + ContainerCon Europe, Oct. 4-6 in Berlin. Secure your spot before it’s too late! Register now.

First 5 Commands When I Connect on a Linux Server

After half a decade working as a system administrator/SRE, I know where to start when I connect to a Linux server. There is a set of information you must know about the server in order to properly (well, most of the time) debug it.

First 60 seconds on a Linux server

These commands are well known to experienced software engineers, but I realized that for beginners getting started with Linux systems, such as my students at Holberton School, they are not obvious. That’s why I decided to share the list of the first 5 commands I type when I connect to a Linux server.

w
history
top
df
netstat

These 5 commands ship with every Linux distribution, so you can use them anywhere with no extra installation needed.

w:

[ubuntu@ip-172-31-48-251 ~]$ w
23:40:25 up 273 days, 20:52,  2 users,  load average: 0.33, 0.14, 0.12
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU WHAT
ubuntu pts/0    104-7-14-91.ligh 23:39    0.00s  0.02s  0.00s w
root pts/1    104-7-14-91.ligh 23:40    5.00s  0.01s  0.03s sshd: root [priv]
[ubuntu@ip-172-31-48-251 ~]$ 

A lot of great information in there. First, you can see the server uptime, which is the time during which the server has been continuously running. You can then see which users are connected to the server, quite useful when you want to make sure you are not impacting a colleague’s work. Finally, the load average gives you a good sense of the server’s health.
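The load averages that `w` prints come straight from the kernel; a minimal, Linux-specific way to read just those numbers is via `/proc` (this assumes a Linux system, where `/proc/loadavg` always exists):

```shell
# /proc/loadavg holds the 1-, 5-, and 15-minute load averages
# as its first three whitespace-separated fields:
awk '{print "1m=" $1, "5m=" $2, "15m=" $3}' /proc/loadavg
```

A rule of thumb: sustained load averages well above the number of CPU cores are worth investigating.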

history:

[ubuntu@ip-172-31-48-251 ~]$ history
   1  cd /var/app/current/log/
   2  ls -al
   3  tail -n 3000 production.log 
   4  service apache2 status
   5  cat ../../app/services/discourse_service.rb 

`history` will tell you what was previously run by the user you are currently connected as. You will learn a lot about what type of work was previously performed on the machine, what could have gone wrong with it, and where you might want to start your debugging work.

top:

top - 23:47:54 up 273 days, 21:00,  2 users,  load average: 0.02, 0.07, 0.10
Tasks:  79 total,   2 running,  77 sleeping,   0 stopped,   0 zombie
Cpu(s):  1.0%us,  0.0%sy,  0.0%ni, 98.7%id,  0.0%wa,  0.0%hi,  0.0%si,  0.3%st
Mem:   3842624k total,  3128036k used,   714588k free,   148860k buffers
Swap:        0k total,        0k used,        0k free,  1052320k cached


  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
21095 root      20   0  513m  21m 4980 S  1.0  0.6   1237:05 python
 1380 healthd   20   0  669m  36m 5712 S  0.3  1.0 265:43.82 ruby
19703 dd-agent  20   0  142m  25m 4912 S  0.3  0.7  11:32.32 python
    1 root      20   0 19596 1628 1284 S  0.0  0.0   0:10.64 init
    2 root      20   0     0    0    0 S  0.0  0.0   0:00.00 kthreadd
    3 root      20   0     0    0    0 S  0.0  0.0  27:31.42 ksoftirqd/0
    4 root      20   0     0    0    0 S  0.0  0.0   0:00.00 kworker/0:0
    5 root       0 -20     0    0    0 S  0.0  0.0   0:00.00 kworker/0:0H
    7 root      20   0     0    0    0 S  0.0  0.0  42:51.60 rcu_sched
    8 root      20   0     0    0    0 S  0.0  0.0   0:00.00 rcu_bh

The next thing you want to know is what is currently running on this server. With `top` you can see all running processes, order them by CPU or memory utilization, and catch the ones that are resource intensive.
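`top` is interactive; when you just want a snapshot for a log or a script, `ps` from the same procps package can produce the same ordering non-interactively (the `--sort` flag is GNU procps syntax, so this assumes a typical Linux system):

```shell
# Top 5 processes by memory usage, header line included:
ps aux --sort=-%mem | head -n 6

# Same idea, sorted by CPU instead:
ps aux --sort=-%cpu | head -n 6
```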

df:

[ubuntu@ip-172-31-48-251 ~]$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1      7.8G  4.5G  3.3G  58% /
devtmpfs        1.9G   12K  1.9G   1% /dev
tmpfs           1.9G     0  1.9G   0% /dev/shm

The next important resource your server needs in order to work properly is disk space. Running out of it is a classic issue.
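When `df` shows a partition filling up, `du` is the natural next step for finding what is eating the space. A common pattern (the `/var/log` path here is just an example of a directory that often grows):

```shell
# Free space on the root filesystem:
df -h /

# Largest entries under /var/log, biggest first:
du -sh /var/log/* 2>/dev/null | sort -rh | head -n 5
```

The `-h` flags print human-readable sizes, and `sort -rh` understands those units when ordering.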

netstat:

[ubuntu@ip-172-31-48-251 ec2-user]# netstat -lp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address               Foreign Address             State       PID/Program name   
tcp        0      0 *:http                      *:*                         LISTEN      1637/nginx          
tcp        0      0 *:ssh                       *:*                         LISTEN      1209/sshd           
tcp        0      0 localhost:smtp              *:*                         LISTEN      1241/sendmail       
tcp        0      0 localhost:17123             *:*                         LISTEN      19703/python        
tcp        0      0 localhost:22221             *:*                         LISTEN      1380/puma 2.11.1 (t 
tcp        0      0 *:4242                      *:*                         LISTEN      18904/jsvc.exec     
tcp        0      0 *:ssh                       *:*                         LISTEN      1209/sshd           

Much of a server’s job is communicating with other machines via sockets, so it is critical to know which ports and IP addresses your server is listening on and which processes are using them.
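One caveat worth knowing: on many recent distributions `netstat` is no longer installed by default (it belongs to the deprecated net-tools package), and `ss` from iproute2 gives the same listening-socket view. A sketch with a fallback for older systems:

```shell
# TCP listening sockets with numeric ports; prefer ss, fall back to netstat:
if command -v ss >/dev/null 2>&1; then
    ss -tln
else
    netstat -lnt
fi
```

Add `-p` to either command (as root) to also see which process owns each socket.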

Obviously, this list might change depending on your goal and the amount of information you already have. For example, when you want to debug specifically for performance, Netflix came up with a customized list. Do you have a useful command that is not in my top 5? Please share it in the comments section!

ONOS Hummingbird SDN Release Touts Core Control Function Improvements

ON.Lab’s ONOS Project said its eighth SDN platform release expands southbound and northbound protocol support as well as legacy device support.

The telecommunications market’s choice of software-defined networking platforms continues to blossom, with the Open Networking Laboratory’s Open Network Operating System Project releasing its latest SDN platform variant under the “Hummingbird” tag.

The SDN platform, which is the eighth version from the open-source ONOS Project, is claimed to offer high availability and scalability, expanded southbound and northbound protocols, and improved ability to support incremental SDN on legacy devices. (Full details are available here.)

Read more at RCR Wireless

Introduction to OpenStack by Rich Bowen

https://www.youtube.com/watch?v=idyiZAz1PK8&list=PLbzoR-pLrL6qBYLdrGWFHbsolIdJIjLnN

In this talk, Rich, the OpenStack Community Liaison at Red Hat, will walk you through what OpenStack is, as a project, as a Foundation, and as a community of organizations. 

Improving Fuzzing Tools for More Efficient Kernel Testing

Fuzz testing (or fuzzing) is a software testing technique that involves passing invalid or random data to a program and observing the results, such as crashes or other failures. Bamvor Jian Zhang of Huawei, who will be speaking at LinuxCon Europe, realized that existing fuzz testing tools — such as trinity — can generate random or boundary values for syscall parameters and inject them into the kernel, but they don’t validate whether the results of those syscalls are correct.

In his experience, the correctness of argument passing between the C library and core kernel code is a common problem. In his talk, “Efficient Unit Test and Fuzz Tools for Kernel/Libc Porting,” Bamvor will share some ways to improve the trinity fuzzing tool. We spoke with him to learn more.

Linux.com: Why are syscall issues so common when bringing up a new architecture for the Linux kernel?

Bamvor Jian Zhang: The new porting must fulfill the requirements of the evolving kernel. Usually, there is no standard porting document/reference design. So, porting is always challenging work.

Linux.com: Why don’t existing fuzz testing tools help validate whether syscalls are correct?

Bamvor: Actually, they can do part of the job. Existing fuzz tools focus on the functionality of the syscall, not the wrapper. They are useful if the wrapper of the syscall is correct. The wrapping is done as part of porting the kernel and libc; incomplete or incorrect porting leads to useless test results with existing fuzz tools.

Linux.com: How did you discover this problem?

Bamvor: We found that trinity could not find the issues even though there were 20 failures in the Linux Test Project (LTP). And we found other issues even after we fixed all the syscall failures in LTP and the glibc test suite. Theoretically, we could add new test cases to the existing tools, but that work needs more developers. On top of that, the design of ILP32 is still evolving; it is hard to keep up with the changes and add new test cases in the limited time available.

Linux.com: How can existing tools be improved to help solve the problem?

Bamvor: Generally speaking, we could improve the situation by adding two things to the existing tools. The first thing is to issue the test case of syscall through C library instead of direct syscall. The second thing is to check the argument passing before we execute the real syscall in the kernel.
