
8 Surprising Facts about Real Docker Adoption

With thousands of companies using Datadog to monitor their infrastructure and applications, we can see software trends emerging in real time. Today we’re excited to share our latest research into Docker adoption and usage.

Increasingly, Docker is not being run as a standalone technology but as part of a larger containerization strategy, which includes automated orchestration of workloads. Roughly half of the companies that monitor Docker with Datadog now also monitor an orchestrator such as Kubernetes or Mesos, or a hosted orchestration platform from AWS, Azure, or Google Cloud Platform.

In AWS environments, where Elastic Container Service (ECS) enables users to launch a container cluster in a matter of clicks, orchestration is especially prevalent. Roughly 70 percent of companies running both Docker and AWS infrastructure are also using orchestration. ECS continues to lead in AWS organizations, with 45 percent market share, but Kubernetes has also made steady gains and is now running in 30 percent of AWS Docker environments. With the recent launch of Amazon Elastic Container Service for Kubernetes (EKS), we expect Kubernetes adoption to accelerate in AWS over the coming months.

Read more at Datadog

This Week in Numbers: Discrimination in the Tech Industry

Regardless of their sex, one in three people would recommend an employer even if they had seen discrimination while working there. That is one takeaway from the Dice Diversity and Inclusion Report 2018. Based on a survey of US and UK tech professionals using Dice and eFinancialCareers, the report looked at discrimination based on gender, age, and politics.

The study found that 85 percent of women believe gender discrimination exists in the tech industry, while only 62 percent of men feel likewise. In other words, more than twice as many men don’t see sexism. The results mirror many other studies showing that men are much less likely to see sexism as a problem. This dynamic also plays out with regard to racism, with black and brown people much more likely to be concerned, which leads us to ask: why wasn’t discrimination based on ethnicity discussed? The omission is striking.

According to the Dice survey, more tech professionals experienced or witnessed discrimination due to age than due to gender, political affiliation, or sexual orientation. In fact, among those 55 or older, 88 percent worry that their age could hurt their career going forward.

Read more at The New Stack

The Blockchain Beyond Bitcoin

Blockchain technologies have been made popular by the creation of bitcoin, but how exactly can a blockchain benefit an enterprise? A blockchain provides an immutable store of facts. This model delivers significant value in the face of regulatory oversight by providing irrevocable proof that transactions occurred. Some even refer to these uses of a blockchain as enterprise resource planning (ERP) 2.0.

The foundational pieces that comprise a blockchain model are already in place in one fashion or another within most enterprises. They have not, however, been pieced together with the required technology components to produce the benefits of a blockchain. There are three main components required to deliver these benefits: a shared distributed ledger, smart contracts, and consensus….

These three components individually exist in some fashion in different parts of an organization, but they have not been assembled into something as well-packaged and overarching as a blockchain. Ledgers exist within accounting systems, smart contracts or FaaS exist within production software environments, and consensus algorithms exist in many places, including within expense reporting systems.

The key for an enterprise is to focus not just on the blockchain concept or isolated blockchain technology, but to understand how to integrate a blockchain’s key components into their environment.

Read more at O’Reilly

A Broad Overview of How Modern Linux Systems Boot

For reasons beyond the scope of this entry, today I feel like writing down a broad and simplified overview of how modern Linux systems boot. Due to being a sysadmin who has stubbed his toe here repeatedly, I’m going to especially focus on points of failure.

  1. The system loads and starts the basic bootloader somehow, through either BIOS MBR booting or UEFI. This can involve many steps on its own and any number of things can go wrong, such as unsigned UEFI bootloaders on a Secure Boot system. Generally these failures are the most total; the system reports there’s nothing to boot, or it repeatedly reboots, or the bootloader aborts with what is generally a cryptic error message.

    On a UEFI system, the bootloader needs to live in the EFI system partition, which is always a FAT32 filesystem. Some people have had luck making this a software RAID mirror with the right superblock format; see the comments on this entry.

  2. The bootloader loads its configuration file and perhaps additional modules from somewhere, usually your /boot but also perhaps your UEFI system partition. Failures here can result in extremely cryptic errors, dropping you into a GRUB shell, or ideally a message saying ‘can’t find your menu file’. The configuration file location is usually hardcoded, which is sometimes unfortunate if your distribution has picked a bad spot.
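As a quick illustration (these commands are not from the original post, and exact paths vary by distribution and bootloader), you can check the usual failure points from a running system before a reboot surprises you: the firmware’s boot entries, the EFI system partition, and the bootloader configuration under /boot.

# List the UEFI boot entries the firmware knows about
efibootmgr -v

# Confirm the EFI system partition is mounted (commonly at /boot/efi) and look inside it
findmnt /boot/efi
ls /boot/efi/EFI

# Check that GRUB actually has a configuration file to load
# (the path differs between distributions and between BIOS and UEFI installs)
ls -l /boot/grub/grub.cfg /boot/grub2/grub.cfg 2>/dev/null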

Read more at UTCC

Open Source Skills Soar In Demand According to 2018 Jobs Report

Linux expertise is again in the top spot as the most sought-after open source skill, says the latest Open Source Jobs Report from Dice and The Linux Foundation. The seventh annual report shows rapidly growing demand for open source skills, particularly in areas of cloud technology.

Key findings of the report include:

  • Linux tops the list as the most in-demand open source skill, making it mandatory for most entry-level open source careers. This is due in part to the growth of cloud and container technologies, as well as DevOps practices, all of which are typically built on Linux.
  • Container technology is rapidly growing in popularity and importance, with 57% of hiring managers seeking those skills, up from 27% last year.
  • Hiring open source talent is a priority for 83% of hiring managers, up from 76% in 2017.
  • Hiring managers are increasingly opting to train existing employees on new open source technologies and help them gain certifications.
  • Many organizations are getting involved in open source with the express purpose of attracting developers.

Career Building

In terms of job seeking and job hiring, the report shows high demand for open source skills and a strong career benefit from open source experience.

  • 87% of open source professionals say knowing open source has advanced their career.
  • 87% of hiring managers experience difficulties in recruiting open source talent.

Hiring managers say they are specifically looking to recruit in the following areas:

[Image: open source job skills in demand]

Diversity

This year’s survey included optional questions about companies’ initiatives to increase diversity in open source hiring, which has become a hot topic throughout the tech industry. The responses showed a significant difference between the views of hiring managers and those of open source pros — with only 52% of employees seeing those diversity efforts as effective compared with 70% of employers.

Overall, the 2018 Open Source Jobs Report indicates a strong market for open source talent, driven in part by the growth of cloud-based technologies. This market provides a wealth of opportunities for professionals with open source skills, as companies increasingly recognize the value of open source.

The 2018 Open Source Jobs Survey and Report, sponsored by Dice and The Linux Foundation, provides an overview of the latest trends for open source careers. Download the complete Open Source Jobs Report now.

This article originally appeared at The Linux Foundation.

Systemd Services: Monitoring Files and Directories

So far in this systemd multi-part tutorial, we’ve covered how to start and stop a service by hand, how to start a service when booting your OS and have it stop on power down, and how to boot a service when a certain device is detected. This installment does something different yet again and covers how to create a unit that starts a service when something changes in the filesystem. For the practical example, you’ll see how you can use one of these units to extend the surveillance system we talked about last time.

Where we left off

Last time we saw how the surveillance system took pictures, but it did nothing with them. In fact, it even overwrote the last picture it took when it detected movement so as not to fill the storage of the device.

Does that mean the system is useless? Not by a long shot. Because, you see, systemd offers yet another type of unit, the path unit, that can help you out. Path units allow you to trigger a service when an event happens in the filesystem, say, when a file gets deleted or a directory is accessed. And overwriting an image is exactly the kind of event we are talking about here.

Anatomy of a Path Unit

A systemd path unit takes the extension .path, and it monitors a file or directory. A .path unit calls another unit (usually a .service unit with the same name) when something happens to the monitored file or directory. For example, if you have a picchanged.path unit to monitor the snapshot from your webcam, you will also have a picchanged.service that will execute a script when the snapshot is overwritten.

Path units contain a new section, [Path], with a few more directives. First, you have the what-to-watch-for directives:

  • PathExists= monitors whether the file or directory exists. If it does, the associated unit gets triggered. PathExistsGlob= works in a similar fashion, but lets you use globbing, like when you use ls *.jpg to search for all the JPEG images in a directory. This lets you check, for example, whether a file with a certain extension exists.
  • PathChanged= watches a file or directory and activates the configured unit when it changes. It is not activated on every write to the watched file, but only when a file that was open for writing is changed and then closed.
  • PathModified=, on the other hand, does activate the unit when anything is changed in the file you are monitoring, even before you close the file.
  • DirectoryNotEmpty= does what it says on the box, that is, it activates the associated unit if the monitored directory contains files or subdirectories.

Then, we have Unit=, which tells the .path which .service unit to activate, in case you want to give it a different name from that of your .path unit; MakeDirectory= can be true or false (or 0 or 1, or yes or no) and creates the directory you want to monitor before monitoring starts. Obviously, using MakeDirectory= in combination with PathExists= does not make sense. However, MakeDirectory= can be used in combination with DirectoryMode=, which you use to set the mode (permissions) of the new directory. If you don’t use DirectoryMode=, the default permissions for the new directory are 0755.
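To see how these directives fit together, here is a hypothetical example (not part of the surveillance project): a path unit that watches an incoming/ directory in your home directory, creates it with restrictive permissions if it doesn’t exist yet, and triggers a service with a different name whenever something lands in it.

#watch-incoming.path (hypothetical example)
[Path]
DirectoryNotEmpty= /home/[user name]/incoming
MakeDirectory= true
DirectoryMode= 0700
Unit= process-incoming.service

Started like any other unit, this would activate process-incoming.service as soon as the directory contains files or subdirectories.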

Building picchanged.path

All these directives are very useful, but here you will just be looking for changes to a single file, so your .path unit is very simple:

#picchanged.path
[Unit] 
Wants= webcam.service 

[Path] 
PathChanged= /home/[user name]/monitor/monitor.jpg 

In the [Unit] section, you have the line that says:

Wants= webcam.service 

The Wants= directive is the preferred way of starting up a unit that the current unit needs in order to work properly. webcam.service is the name you gave the surveillance service from the previous article; it is the service that actually controls the webcam and makes it take a snap every half second. This means it’s picchanged.path that is going to start up webcam.service now, and not the Udev rule you saw in the prior article. You will use the Udev rule to start picchanged.path instead.

To summarize: the Udev rule pulls in your new picchanged.path unit, which, in turn pulls in the webcam.service as a requirement for everything to work perfectly.
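If you want to confirm that chain once the units are in place (a quick sanity check using standard systemctl commands, not something the setup requires), you can ask systemd what a unit pulls in and how it parsed the file:

# Show what picchanged.path will pull in when it starts
systemctl list-dependencies picchanged.path

# Show the unit file exactly as systemd sees it
systemctl cat picchanged.path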

The “thing” that picchanged.path monitors is the monitor.jpg file in the monitor/ directory in your home directory. As you saw last time, webcam.service calls a script, checkimage.sh, which takes a picture at the beginning of its execution and stores it in monitor/monitor.jpg. The script then takes another picture, temp.jpg, and compares it with monitor.jpg. If it finds significant differences (as when somebody walks into frame), the script overwrites monitor.jpg with temp.jpg. That is when picchanged.path fires.

As you haven’t included a Unit= directive in your .path unit, systemd expects a matching picchanged.service unit, which it will trigger when /home/[user name]/monitor/monitor.jpg gets modified:

#picchanged.service
[Service]  
Type= simple  
ExecStart= /home/[user name]/bin/picmonitor.sh

For the time being, let’s make picmonitor.sh save a time-stamped copy of monitor.jpg every time changes get detected:

#!/bin/bash
# This is the picmonitor.sh script

cp /home/[user name]/monitor/monitor.jpg /home/[user name]/monitor/"`date`.jpg"
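Note that date’s default output contains spaces (and, in most locales, colons), so the copies will have names like Mon 16 Jul ... .jpg. That works, but if you would rather have sortable, space-free file names, one option (a variation of my own, not required by the setup) is to pass date an explicit format:

# Same idea, but with a sortable, space-free timestamp (e.g. 2018-07-16_101530.jpg)
cp /home/[user name]/monitor/monitor.jpg /home/[user name]/monitor/"$(date +%F_%H%M%S)".jpg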

Udev Changes

You have to change the custom Udev rule you wrote in the previous installment so everything works. Edit /etc/udev/rules.d/01-webcam.rules so instead of looking like this:

ACTION=="add", SUBSYSTEM=="video4linux", ATTRS{idVendor}=="03f0", 
    ATTRS{idProduct}=="e207", SYMLINK+="mywebcam", TAG+="systemd", 
    MODE="0666", ENV{SYSTEMD_WANTS}="webcam.service"

It looks like this:

ACTION=="add", SUBSYSTEM=="video4linux", ATTRS{idVendor}=="03f0", 
    ATTRS{idProduct}=="e207", SYMLINK+="mywebcam", TAG+="systemd",  
    MODE="0666", ENV{SYSTEMD_WANTS}="picchanged.path"

The new rule, instead of calling webcam.service, now calls picchanged.path when your webcam gets detected. (Note that you will have to change the idVendor and idProduct values to those of your own webcam; you saw how to find these out previously.)
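After editing the rule, it is worth telling udev to re-read its rules files and replay the add event so you don’t have to physically re-plug the webcam. This is standard udevadm usage, not something specific to this tutorial:

# Reload the rules files and replay the "add" event for video4linux devices
sudo udevadm control --reload-rules
sudo udevadm trigger --action=add --subsystem-match=video4linux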

For the record, I also changed checkimage.sh from using PNG to JPEG images. I did this because I found some dependency problems with PNG images when working with mplayer on some versions of Debian. checkimage.sh now looks like this:

#!/bin/bash

# Take an initial snapshot and use it as the reference image
mplayer -vo jpeg -frames 1 tv:// \
    -tv driver=v4l2:width=640:height=480:device=/dev/mywebcam &>/dev/null
mv 00000001.jpg /home/paul/monitor/monitor.jpg

while true
do
   # Grab a new frame
   mplayer -vo jpeg -frames 1 tv:// \
       -tv driver=v4l2:width=640:height=480:device=/dev/mywebcam &>/dev/null
   mv 00000001.jpg /home/paul/monitor/temp.jpg

   # compare prints its metric on stderr, so send stderr into the pipe,
   # discard stdout, and keep only the numeric part of the result
   imagediff=`compare -metric mae /home/paul/monitor/monitor.jpg \
        /home/paul/monitor/temp.jpg /home/paul/monitor/diff.png 2>&1 \
        > /dev/null | cut -f 1 -d " "`

   # A large difference means something moved: replace the reference image.
   # This overwrite is what triggers picchanged.path.
   if [ `echo "$imagediff > 700.0" | bc` -eq 1 ]
       then
           mv /home/paul/monitor/temp.jpg /home/paul/monitor/monitor.jpg
       fi

   sleep 0.5
done

Firing up

This is a multi-unit service that you don’t have to worry much about once all its bits and pieces are in place: you plug in the designated webcam (or boot the machine with the webcam already connected), the Udev rule starts picchanged.path, which takes over, bringing up webcam.service and checking the snapshots as they change. There is nothing else you need to do.
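If you want to start the chain by hand while testing, or watch it work, standard systemctl and journalctl commands (nothing specific to this setup) are enough:

# Start the path unit manually instead of waiting for the Udev rule
sudo systemctl start picchanged.path

# Check that the path and both services are active, then follow their output
systemctl status picchanged.path picchanged.service webcam.service
journalctl -f -u picchanged.service -u webcam.service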

Conclusion

Splitting the process in two not only helps explain how path units work, but is also very useful for debugging. One service does not “touch” the other in any way, which means that you could, for example, improve the “motion detection” part, and it would be very easy to roll back if things didn’t work as expected.

Admittedly, the example is a bit goofy, as there are definitely better ways of monitoring movement using a webcam. But remember: the main aim of these articles is to help you learn how systemd units work within a context.

Next time, we’ll finish up with systemd units by looking at some of the other types of units available and showing how to improve your home-monitoring system further by setting up a service that sends images to another machine.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

Red Hat Changes Its Open-Source Licensing Rules

Red Hat-initiated open-source projects, which use GPLv2 or LGPLv2.1, will be expected to add GPLv3’s cure commitment language to their licenses.

From outside programming circles, software licensing may not seem important. In open-source, though, licensing is all important.

So, when leading Linux company Red Hat announces that — from here on out — all new Red Hat-initiated open-source projects that use the GNU General Public License (GPLv2) or GNU Lesser General Public License (LGPLv2.1) will be expected to supplement the license with GPL version 3 (GPLv3)’s cure commitment language, it’s a big deal.

Both older open-source licenses are widely used. When the GPLv3 was released, it came with an express termination approach that offered developers the chance to cure license compliance errors. 

Read more at ZDNet

How to Build a Strong DevSecOps Culture: 5 Tips

We have a tendency in IT to treat security as fundamentally a technology problem. Hence, we also tend to focus on technology solutions.

Tools and processes do matter. But if you’ll recall our recent look at the seven habits of strong security organizations, the top of the list had nothing explicitly to do with technology: these companies treat security as a culture, not a step.

That’s where the very term DevSecOps – and more importantly, the culture and practices it represents – can begin to make a difference. The mashup of traditional roles reminds teams that many of our so-called technology issues ultimately boil down to people and how they work together.

A DevSecOps culture suits our increasingly hybrid computing environments, faster and more frequent software delivery, and other demands upon modern IT. That’s one reason why DevSecOps matters to IT leaders. It’s also the hard part: Culture change makes something like replacing an outdated tool look easy.

Read more at Enterprisers Project

Evaluate a Product/Market Fit

How to identify when a fit has been achieved, and how to exit the explore stage and start exploiting a product with its identified market.

We live in a world of data overload, where any argument can find supporting data if we are not careful to validate our assumptions. Finding information to support a theory is never a problem, but testing the theory and then taking the correct action is still hard.

The second largest risk to any new product is building the wrong thing. Therefore, it is imperative that we don’t overinvest in unproven opportunities by doing the wrong thing the right way. We must begin with confidence that we are actually doing the right thing. How do we test whether our intuition is correct, especially when operating in conditions of extreme uncertainty?

Eric Ries introduced the term innovation accounting to refer to the rigorous process of defining, experimenting, measuring, and communicating the true progress of innovation for new products, business models, or initiatives. To understand whether our product is valuable and hold ourselves to account, we focus on obtaining admissible evidence and plotting a reasonable trajectory while exploring new domains.

Read more at O’Reilly

Security and Performance Help Mainframes Stand the Test of Time

As of last year, the Linux operating system was running 90 percent of public cloud workloads, held 62 percent of the embedded market share, and ran all of the supercomputers in the TOP500 list, according to The Linux Foundation Open Mainframe Project’s 2018 State of the Open Mainframe Survey report.

Despite a perceived bias that mainframes are behemoths that are costly to run and unreliable, the findings also revealed that more than nine in 10 respondents have an overall positive attitude about mainframe computing.

The project conducted the survey to better understand use of mainframes in general. “If you have this amazing technology, with literally the fastest commercial CPUs on the planet, what are some of the barriers?” said John Mertic, director of program management for the foundation and Open Mainframe Project. “The driver was, there wasn’t any hard data around trends on the mainframe.”

Read more at The Linux Foundation