
What is DevOps? Michael Ducy Explains

Michael Ducy is known by many as the goat whisperer and co-host of the Goat Farm podcast and blog focused on DevOps in the enterprise. He is a champion for the idea that you should staff your DevOps team with curious, hungry individuals (goats) from within your company, rather than from the outside. When he isn’t blogging, podcasting, or speaking at conferences, you can find him at Chef, where he is Manager, Solutions Architects.

Linux.com: Why are so many organizations embracing DevOps?

Michael Ducy: The way consumers interact with business has changed greatly. More and more consumers interact with a company using technology: an app on your smartphone, a website, a kiosk, and so on have all changed how companies do business. Consumers also expect to have things instantly. Think of the ride-sharing experience: a few taps, and a car has arrived. With this change in the way we as consumers interact, businesses are realizing they need to change the way they deliver technology. This “Digital Transformation” is driving the need to embrace a DevOps operating model.

Why are individuals interested in participating?

People want a better way to work. I think many realize that the way they’ve gone about IT in the past wasn’t ideal, and they are looking for something new and exciting. For many, their technology platforms have become stale, and they see DevOps as a way to experiment with new technologies and practices. For many, DevOps is that glass of water after being stranded in the desert.

What’s the primary advantage of DevOps?

It’s hard to boil it down to one thing. If I had to, I would say speed is the primary advantage: speed in experimenting to find solutions, speed in delivering technology to the consumer, and speed in responding to problems.

What is the overwhelming hurdle?

The overwhelming hurdle is that DevOps is a victim of its own success. Because this operating model is helping companies change, many vendors are getting into the DevOps space. This creates confusion as to what DevOps is, or isn’t, and vendors often shape DevOps to fit what they are trying to sell you. You also see this in the “fragmentation” of the DevOps space: Rugged DevOps, DevSecOps, Enterprise DevOps, and all these special branches of DevOps are just creating too much confusion in the market.

What advice would you give to people who want to get started in DevOps?

Start small and don’t get overwhelmed. Two principles of DevOps you’ll hear over and over are incremental change and continuous improvement. The DevOps space is so large these days that newcomers can easily get lost. Find a small area where you can make a change, learn from it, and iterate on those learnings to improve.

Learn more about DevOps from leaders in the field.

Read this Q&A with Kris Buytaert, one of the instigators of the current DevOps movement and organizer of DevOpsDays and Config Management Camp.

Read this Q&A with Patrick Debois, best known as the founder of DevOpsDays and a creator of the DevOps movement, which explains why some refer to him as the “Godfather of DevOps.”

How Docker Has Changed the DevOps Game

Cloud computing has paved the way for programmable infrastructure, which brought extreme automation into the software development lifecycle. The ability to provision resources, configure them on the fly, deploy applications, and monitor the entire process led to the DevOps culture, in which developers and operators collaborate throughout the application lifecycle. While provisioning and configuration are best left to tools such as Chef, Puppet, and Ansible, one open source tool that became the cornerstone of DevOps is Jenkins.

Software build, packaging, and deployment play a crucial role in release management. Enterprise developers often relied on tools such as Make, MS Build, Ant, Maven, and Gradle to compile and link source code. 

Read more at Forbes

9 Tech Giants Embracing The Open Source Revolution

Technology vendors are recognizing that by putting some of their technology into open source, they gain collaboration from other great minds, helping the project mature more quickly.

Developers from different companies can collaborate on projects and develop solutions for the greater good of the entire community. That’s not something that happens in a competitive commercial software market.

Proponents argue that open source solutions can evolve more quickly because so many minds are working to improve them. Plus, more organizations have access to the software because the licenses are free.

Read more at InformationWeek

TNS Research: The Present State of Container Orchestration

The New Stack ran a survey targeting users of container management technology, with 70 percent of respondents using containers to some degree. We found that there is indeed uncertainty about what it means to manage and orchestrate containers. There is also a developing consensus that scheduling, cluster management, and service discovery are necessary to orchestrate the use of containers in production.

Over 40 percent of respondents using or trialing containers said an orchestration platform was the primary way containers are managed in their organization. However, the actual products being used or planned for use were strongly influenced by a respondent’s stage of container adoption. Respondents using containers outside production were more likely to cite configuration management tools like Ansible and Chef as a method of orchestration. Platform-as-a-Service (PaaS) solutions such as Red Hat OpenShift and HashiCorp products were cited more often on the roadmaps of those still evaluating containers.

Read more at The New Stack

Customized File Monitoring with Auditd

In the previous article on auditd, I showed how to use aureport to review the events monitored by the auditd daemon. I also showed how you could, for example, check whether a user had experienced trouble logging in, which could be interpreted as a malicious attempt to access a system.

As I said before, aureport is part of a larger toolset that comes with auditd. Using auditd to monitor some preset events is already quite useful, but where it comes into its own is when you customize it to monitor whatever you want.

Customized Monitoring Rules

To push your rules into auditd on the fly, you use auditctl. But, before you insert any of your own rules, let’s check whether any defaults are already in place. Become root (or use sudo) and try this:

auditctl -l
-a never,task

The -l option lists all current active rules and, if you see the -a never,task line shown above, none of the rules following it will log anything. This rule, which is often a default in new auditd installations, is telling the daemon to append (-a) a rule to the task list (as in the list of tasks the kernel is running — don’t worry about this just yet), which will stop auditd from ever recording anything.

Because we’re not specifying which task, auditd assumes the rule applies to all of them. In plain English, this would read: “Never record anything from any of the tasks the kernel is running.” And, because auditd gives precedence from top to bottom (i.e., the first rule takes precedence over the ones following it in case of a conflict), this means nothing will be recorded despite what any of the other rules say.

You don’t want that, so the first thing to do is get rid of this rule. To delete all rules from a running auditd daemon, you can use:

auditctl -D

If you already have more than one rule and don’t want to zap them all, you can also selectively delete only this rule with

auditctl -d never,task

Now the coast is clear, so I’ll show how to build your first rule. The typical use for auditd is to have it monitor files or directories. For example: as a regular user, create a directory in your /home directory, say…

mkdir test_dir

Now become root and set up a watch on the directory you just made:

auditctl -w /home/[your_user_name]/test_dir/ -k test_watch

The -w option tells auditd to watch the test_dir/ directory for any changes. The -k option tells auditd to append the string test_watch (called a key) to the log entries it creates. The key can be anything you want, although it is a good idea to make it something memorable and related to what the rule does. As you will see, this will be useful for filtering out unrelated records when you review auditd’s logs later on.

Now, as a regular user, do some stuff in test_dir/ — make some subdirectories, create or copy some files, remove some files, or list the contents.

When you’re done, take a look at what auditd logged with

ausearch -k test_watch

See the use of -k test_watch here? Even if you have a dozen more rules logging different things, by using a key string, you can tell ausearch to only list what you’re interested in (Figure 1).

Figure 1: Output of ausearch command.

Even with this filter, the amount of information ausearch throws at you is a bit overwhelming. However, you will also notice that the information is very structured. The output is actually made up of three records per event.

Each record contains some keyword/value pairs separated by an “=” sign; some of the values are strings, others are lists enclosed in parentheses, and so on. You can read up on what each snippet of information means in the official manual, but the important thing to take away is that the structured nature of ausearch’s output makes processing it with scripts relatively easy. In fact, aureport, the tool I showed in the previous article, does a very good job of sorting things out.
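Before we get to aureport, here is a minimal sketch of that kind of scripting, assuming the test_watch rule from above. It tallies how many events each executable triggered; exe is a standard field in auditd records, and ausearch’s -i option translates numeric values (such as UIDs) into readable names:

# Minimal sketch: count events per executable under the test_watch key.
ausearch -k test_watch -i | awk -F 'exe=' '/exe=/ { split($2, f, " "); n[f[1]]++ }
    END { for (e in n) print n[e], e }' | sort -rn

Each line of output is a count followed by the binary responsible, which already gives you a rough idea of who is generating the noise.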

To prove it, let’s pipe our ausearch output through aureport and see what’s what:

ausearch -k test_watch | aureport -f -i

File Report 
=============================================== 
# date time file syscall success exe auid event 
=============================================== 
1. 05/06/16 13:04:54 sub_dir mkdir yes /usr/bin/mkdir paul 193 
2. 05/06/16 13:04:54 /home/paul/test_dir/sub_dir getxattr no /usr/bin/baloo_file paul 194 
3. 05/06/16 13:04:54 /home/paul/test_dir/sub_dir getxattr no /usr/bin/baloo_file paul 195 
4. 05/06/16 13:04:54 /home/paul/test_dir/sub_dir getxattr no /usr/bin/baloo_file paul 196 
5. 05/06/16 13:05:06 /home/paul/test_dir/testfile.txt getxattr no /usr/bin/baloo_file paul 198 
.
.
.

This is starting to make sense! You can check who is doing what to which file and when.

One thing you can see in the listing above is that, because I am using the Plasma desktop, Baloo, KDE’s indexing service, is cluttering the list with irrelevant results. That’s because, every time you create or destroy a file, Baloo comes along and indexes the fact. This makes it annoyingly hard to parse what is going on and check whether a user is up to no good. So, let’s filter Baloo’s actions out with a strategically placed grep:

ausearch -k test_watch | aureport -f -i | grep -v baloo

File Report 
=============================================== 
# date time file syscall success exe auid event 
=============================================== 
1. 05/06/16 13:04:54 sub_dir mkdir yes /usr/bin/mkdir paul 193 
9. 05/06/16 13:05:06 testfile.txt open yes /usr/bin/touch paul 197 
17. 05/06/16 13:05:29 ./be03316b71184fefba5cfbf59c21e6d5.jpg open yes /usr/bin/cp paul 210 
18. 05/06/16 13:05:29 ./big_city.jpg open yes /usr/bin/cp paul 211 
19. 05/06/16 13:05:29 ./blendertracking.jpg open yes /usr/bin/cp paul 212 
20. 05/06/16 13:05:29 ./Cover27_Draft01.jpg open yes /usr/bin/cp paul 213
37. 05/06/16 13:05:50 blendertracking.jpg unlinkat yes /usr/bin/rm paul 330 
38. 05/06/16 13:05:50 be03316b71184fefba5cfbf59c21e6d5.jpg unlinkat yes /usr/bin/rm paul 328
.
.
.

That’s much better. You can now clearly see what the users have been up to. You can follow how they create some directories and files and copy others from elsewhere. You can also check what files are being removed, and so on.

When you have no more use for it, you can remove the above watch with

auditctl -W /home/[your_user]/test_dir/ -k test_watch

One File at a Time

Monitoring whole directories makes for a lot of logged data. Sometimes it is better to just monitor strategic individual files to make sure no one is tampering with them. A classic example is to use

auditctl -w /etc/passwd -p wa -k passwd_watch

to make sure nobody is messing with your passwd file.

The -p parameter tells auditd which permissions to monitor. The available permissions are:

  • r to monitor for read accesses to a file or a directory,

  • w to monitor for write accesses,

  • x to monitor for execute accesses,

  • and a to check for changes of the file’s or directory’s attributes.

Because there are legitimate reasons for an application to read from /etc/passwd, you don’t monitor reads, to avoid false positives. It is also a bit silly to monitor for the execution of a non-executable file; hence, you tell auditd to monitor only for changes to passwd’s content (i.e., writes) and to its attributes.

If you don’t specify what permissions to monitor, auditd will assume it has to monitor all of them. That’s why, when you were monitoring the test_dir/ directory in the first examples, even a simple ls command triggered auditd.
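As another illustration of -p, here is a hypothetical rule that logs only executions of a binary, ignoring reads, writes, and attribute changes (the bin_exec key is an arbitrary choice):

# Hypothetical: log each time /usr/bin/passwd is executed.
auditctl -w /usr/bin/passwd -p x -k bin_exec

Watching executions of security-sensitive binaries like passwd is a handy way to keep an eye on privilege-related activity.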

Permanent Rules

To make your rules permanent, you can include them in /etc/audit/audit.rules or create a new rules file in the /etc/audit/rules.d/ directory. If you have been experimenting with rules using auditctl and you are happy with your current setup, you could do:

echo "-D" > /etc/audit/rules.d/my.rules
auditctl -l >> /etc/audit/rules.d/my.rules

to dump your current rules into a rules file called my.rules and save yourself some typing. If you’ve been following this tutorial and used the example rules you saw above, my.rules would end up looking like this:

-D
-w /home/[your_user]/test_dir/ -k test_watch
-w /etc/passwd -p wa -k passwd_watch

To avoid interference and conflicts, move any pre-existing rules files in /etc/audit/ and /etc/audit/rules.d to backups:

mv /etc/audit/audit.rules /etc/audit/audit.rules.bak
mv /etc/audit/rules.d/audit.rules /etc/audit/rules.d/audit.rules.bak

Then, restart the daemon with

systemctl restart auditd.service

to have auditd pick up your rules straight away.

Now, every time your system is rebooted, auditd will start monitoring whatever you told it to.
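If you want to double-check that your rules survived the reboot, list them again. Assuming the my.rules file from above, auditctl -l should report something like this (the exact format varies between versions, and a watch created without -p is shown with the full rwxa permission set):

auditctl -l
-w /home/[your_user]/test_dir/ -p rwxa -k test_watch
-w /etc/passwd -p wa -k passwd_watch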

There’s More

I had time to cover only a small portion of auditd’s capabilities here. But, as you can see, auditd is very powerful, and it can be used to monitor much more than just files and directories, with an insane level of detail. I plan to visit the more advanced topics in a future article.
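As a small taste of those capabilities, here is a hypothetical syscall rule built with the same -a syntax as the never,task default we deleted at the beginning. It asks auditd to log every openat() call made by regular (non-system) users; the b64 arch filter, the auid threshold of 1000, and the open_watch key are all illustrative choices you would adapt to your system:

# Hypothetical: log openat() syscalls from users with UID >= 1000.
auditctl -a always,exit -F arch=b64 -S openat -F auid>=1000 -k open_watch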


What is the Future of the PaaS Term?

In the beginning, Platform-as-a-Service (PaaS) was an easy to understand category of software, even if it wasn’t called Platform-as-a-Service initially. In its earliest incarnations, PaaS was a seamless if tightly constrained fabric which abstracted and made opaque the infrastructure running underneath it, from database to operating system. The promise to developers was, in strictly functional terms, serverless. No longer would developers have to concern themselves with operations minutiae like server instances. Instead, they deployed applications against a given platform and from there on out operations were, at least theoretically, the platform’s problem.

Since those first tentative releases in early 2007, PaaS has become more complicated to explain, both because the category itself has expanded its ambitions and because other, competitive layers of abstraction have emerged.

Read more at RedMonk

A Proper TSN Driver Is Being Worked On For The Linux Kernel (Time Sensitive Networking)

Henrik Austad of Cisco has published very early code for implementing a TSN core driver in the Linux kernel. TSN is short for Time Sensitive Networking and was formerly known as Audio/Video Bridging (AVB). 

TSN is designed for establishing low-latency, jitter-free, guaranteed-bandwidth links over a LAN by reserving a path on the network. In order to support TSN, the NIC and the network itself must be compatible. 

Read more at Phoronix

Portworx Aims Container Storage at Enterprise Databases

Container storage heats up pre-DockerCon.

In the days leading up to DockerCon, it seems like storage is taking its turn as the hot topic in containers, with CoreOS and EMC recently announcing container-storage projects.

Today, Portworx is up. The Redwood City, Calif.-based startup released a developer version of its container storage platform last year, and it’s now launching an enterprise version called PX-Enterprise.

Read more at SDxCentral

The Quest to Make Code Work Like Biology Just Took A Big Step

The Google search engine isn’t software that runs on a single machine. Serving millions upon millions of people around the globe, it’s software that runs on thousands of machines spread across multiple computer data centers. Google runs this entire service like a biological system, as a vast collection of self-contained pieces that work in concert. It can readily spread those cells of code across all those machines, and when machines break—as they inevitably do—it can move code to new machines and keep the whole alive.

Now, Adam Jacob wants to bring this notion to every other business on earth. Jacob is a bearded former comic-book-store clerk who, in the grand tradition of Alan Kay, views technology like a philosopher.

Read more at WIRED

Nokia Announces Horizontal IoT Platform Called Impact

The Finnish company says its expertise in consumer networks will help create efficiencies for enterprise and government customers. With the number of Internet of Things (IoT) devices set to explode in the next decade, Nokia is drawing on its networking expertise to roll out a horizontal IoT platform.

Dubbed IMPACT (Intelligent Management Platform for All Connected Things), the platform handles every aspect of machine-to-machine (M2M) connections — data collection, event processing, device management, data contextualization, data analytics, end-to-end security and applications enablement — for any device, any protocol, and across any application.

Read more at ZDNet