Machine Learning for Operations

Managing infrastructure is a complex problem with a massive number of signals and many possible actions to take in response; that’s the classic definition of a situation where machine learning can help.

IT and operations is a natural home for machine learning and data science. According to Vivek Bhalla, until recently a Gartner research director covering AIOps and now director of product management at Moogsoft, if there isn’t a data science team in your organization, the IT team will often become the “center of excellence”.

By 2022, Gartner predicts, 40 percent of all large enterprises will use machine learning to support or even partly replace monitoring, service desk and automation processes. Today, that is only just starting to happen, and in much smaller numbers.

In a recent Gartner survey, the most popular uses of AI in IT and operations are analyzing big data (18 percent) and chatbots for IT service management — 15 percent are already using chatbots and a further 30 percent plan to do so by the end of 2019.

Read more at The New Stack

Uber Joins the Linux Foundation as a Gold Member

“Uber has been influential in the open source community for years, and we’re very excited to welcome them as a Gold member at the Linux Foundation,” said Jim Zemlin, Executive Director of the Linux Foundation. “Uber truly understands the power of open source and community collaboration, and I am honored to witness that first hand as a part of Uber Open Summit 2018.”

Through this membership, Uber will support the Linux Foundation’s mission to build ecosystems that accelerate open source technology development. Uber will continue collaborating with the community, working with other leaders in the space to solve complex technical problems and further promote open source adoption globally.

Read more at Uber

Virtualizing the Clock

Dmitry Safonov wanted to implement a namespace for time information. The twisted and bizarre thing about virtual machines is that they get more virtual all the time. There’s always some new element of the host system that can be given its own namespace and enter the realm of the virtual machine. But as that process rolls forward, virtual systems have to share aspects of themselves with other virtual systems and the host system itself—for example, the date and time.

Dmitry’s idea is that users should be able to set the day and time on their virtual systems, without worrying about other systems being given the same day and time. This is actually useful, beyond the desire to live in the past or future. Being able to set the time in a container is apparently one of the crucial elements of being able to migrate containers from one physical host to another, as Dmitry pointed out in his post.
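For a sense of what this could look like in practice, here is a hypothetical sketch, assuming the time-namespace support that later landed in the mainline kernel and the matching --time/--boottime options in a recent util-linux unshare; neither is part of the original patch set described here:


# Hypothetical: shift the boot-time clock by one day inside a new time namespace
sudo unshare --fork --time --boottime 86400 cat /proc/uptime


Inside the namespace, uptime appears 86,400 seconds longer, while the host’s clocks are left untouched.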

Read more at Linux Journal

The Growing Significance Of DevOps For Data Science

DevOps involves infrastructure provisioning, configuration management, continuous integration and deployment, testing and monitoring. DevOps teams have been working closely with development teams to manage the application lifecycle effectively.

Data science brings additional responsibilities to DevOps. Data engineering, a niche domain that deals with complex pipelines that transform the data, demands close collaboration of data science teams with DevOps. Operators are expected to provision highly available clusters of Apache Hadoop, Apache Kafka, Apache Spark and Apache Airflow that tackle data extraction and transformation. Data engineers acquire data from a variety of sources before leveraging Big Data clusters and complex pipelines for transforming it.

Read more at Forbes

6 Best Practices for High-Performance Serverless Engineering

When you write your first few lambdas, performance is the last thing on your mind. Permissions, security, identity and access management (IAM) roles and triggers all conspire to make even the first couple of lambdas, anything beyond a “hello world” trial, a struggle just to get up and working. But once your users begin to rely on the services your lambdas provide, it’s time to focus on high-performance serverless.

Here are some key things to remember when you’re trying to produce high-performance serverless applications.

1. Observability
Serverless handles scaling really well. But as scale interacts with complexity, slowdowns and bugs are inevitable. I’ll be frank: these can be a bear if you don’t plan for observability from the start.

Read more at The New Stack

Beyond Finding Stuff with the Linux find Command

Continuing the quest to become a command-line power user, in this installment, we will be taking on the find command.

Jack Wallen already covered the basics of find in an article published recently here on Linux.com. If you are completely unfamiliar with find, please read that article first to come to grips with the essentials.

Done? Good. Now, you need to know that find can be used to do much more than just search for one thing; in fact, you can use it to search for two or three things at once. For example:


find path/to/some/directory/ -type f -iname '*.svg' -o -iname '*.pdf'

This will cough up all the files with the extensions svg (or SVG) and pdf (or PDF) under the path/to/some/directory/ directory. You can add more things to search for by chaining the -o option over and over.
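So, to hunt for SVGs, PDFs and PNGs in a single pass, you would just keep chaining tests (this simply follows the same pattern as above and is only an illustration):


find path/to/some/directory/ -type f -iname '*.svg' -o -iname '*.pdf' -o -iname '*.png'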

You can also search in more than one directory simultaneously just by adding them to the path part of the command. Say you want to see what is eating up all the space on your hard drive:


find $HOME /var /etc -size +500M

This will return all the files bigger than 500 megabytes (-size +500M) in your home directory, /var and /etc.
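If the command also spits out a pile of “Permission denied” complaints for directories you are not allowed to read, an optional tweak is to throw the error messages away:


# Discard the error output so only the matching files remain
find $HOME /var /etc -size +500M 2>/dev/null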

Additionally, find also lets you do stuff with the files it… er… finds. For example, you can use the -delete action to remove everything that comes up in a search. Now, be careful with this one. If you run


# WARNING: DO NOT TRY THIS AT $HOME

find . -iname "*" -delete

find will erase everything in the current directory (. is shorthand for “the current directory“) and everything in the subdirectories under it, and then the subdirectories themselves, and then there will be nothing but emptiness and an unbearable feeling that something has gone terribly wrong.

Please do not put it to the test.
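If you do have a legitimate use for -delete, a safer habit is to run the very same search without it first, check the list, and only then add -delete back in (the *.bak pattern below is just a stand-in):


# Preview what would be removed...
find . -iname "*.bak"

# ...and only then actually remove it
find . -iname "*.bak" -delete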

Instead, let’s look at some more constructive examples…

Moving Stuff Around

Let’s say you have a bunch of pictures of Tux the penguin in several formats, spread out over dozens of directories, all under your Documents/ folder. You want to bring them all together into one directory (Tux/) to create a gallery you can revel in:


find $HOME/Documents/ \( -iname "*tux*png" -o -iname "*tux*jpg" -o -iname "*tux*svg" \) \
  -exec cp -v '{}' $HOME/Tux/ \;

Let’s break this down:

  • $HOME/Documents is the directory (and its subdirectories) find is going to search in.
  • You enclose what you want to search for between parentheses (\( ... \)) because, otherwise, -exec, the option that introduces the command you want to run on the results, would only receive the result of the last search (-iname "*tux*svg"). There are two things you have to bear in mind when you do this: (1) you have to escape the parentheses using backslashes, like this: \( ... \). You do that so the shell interpreter (Bash) doesn’t get confused (parentheses have a special meaning for Bash); and (2) there is one space between the opening bracket \( and -iname ... and another space between "*tux*svg" and the closing bracket \). If you don’t include those spaces, find will exit with an error.
  • -exec is the option you use to introduce the command you want to run on the found files. In this case it is a simple cp (copy) command. You use cp's -v option to see what is going on.
  • '{}' is the shorthand find uses to say “the file or directory I have found that matches the criteria you gave me“. '{}' gets swapped for each file or directory as it is found and, in this case, then gets copied to the Tux/ directory.
  • \; tells find to execute the command for each result sequentially, that is, one after another. There is another option, +, which runs the command with each result from find tacked onto the end, making a long sausage of a string. But (1) this is not helpful for you here, and (2) you need the '{}' to be at the end of the command for this to work. You could use + to make executable all the files with the .sh extension tucked away under your Documents/ folder like this:
    
    find $HOME/Documents/ -name "*.sh" -exec chmod a+x {} +
    
    

Once you have the basics of modifying files using find under your belt, you will discover all sorts of situations where it comes in handy. For example…

A Terrible Mish-Mash

Client X has sent you a zip file with important documents and images for the new website you are working on for them. You copy the zip into your ClientX folder (which already contains dozens of files and directories) and uncompress it with unzip newwebmedia.zip and, gosh darn it, the person who made the zip file didn’t compress the directory itself, but the contents of the directory. Now all the images, text files and subdirectories from the zip are mixed up with the original contents of your folder, which contains more images, text files, and subdirectories.

You could try to remember what the original files were and then move or delete the ones that came from the zip archive. But with dozens of entries of all kinds, you are bound to get mixed up at some point and forget to move a file or, worse, delete one of your original files.

Looking at the files’ dates (ls -la *) won’t help either: the zip program keeps the files’ original timestamps, not the time they were zipped or unzipped. This means a “new” file from the zip could very well have a date prior to some of the files that were already in the folder when you did the unzipping.

You probably can guess what comes next: find to the rescue! Move into the directory (cd path/to/ClientX), make a new directory where you want the new stuff to go (mkdir NewStuff), and then try this:


find . -cnewer newwebmedia.zip -exec mv '{}' NewStuff \;

Breaking that down:

  • The period (.) tells find to do its thing in the current directory.
  • -cnewer tells find to look for files whose status has changed more recently than the last modification of a reference file you give it. In this case the reference file is newwebmedia.zip. If you copied the file over at 12:00 and then unpacked it at 12:01, all the files that you unpacked will have a change time of 12:01, that is, later than newwebmedia.zip’s, and will match the criterion! And, as long as you didn’t change anything else, they will be the only files meeting that criterion.
  • The -exec part of the instruction simply tells find to move the files and directories to the NewStuff/ directory, thus cleaning up the mess.

If you are unsure of anything find may do, you can swap -exec for -ok. The -ok option forces find to check with you before it runs the command you have given it. Accept an action by typing y or reject it with n.
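For instance, here is the same clean-up as above, but asking for confirmation before each move:


find . -cnewer newwebmedia.zip -ok mv '{}' NewStuff \;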

Next Time

We’ll be looking at environment variables and a way to search even more deeply into files with the grep command.

OPNFV Gambia — Doing What We Do Best While Advancing Cloud Native

Today, the OPNFV community is pleased to announce the availability of Gambia, our seventh platform release! I am extremely proud of the way the community rallied together to make this happen and provide the industry with another integrated reference platform for accelerating their NFV deployments.

At a high level, Gambia represents our first step towards continuous delivery (CD) and deepens our work in cloud native, while also advancing our core capabilities in testing and integration, and the development of carrier-grade features by working upstream. As an open source pioneer in NFV, it’s amazing to see the evolution of the project to meet the needs of a quickly changing technology landscape.

Here are a few Gambia highlights I’d like to share:

Cloud Native & Continuous Deployment (CD)

A key topic at the recent ONS Europe, cloud native is becoming increasingly relevant for the networking industry. The Gambia release builds on the cloud native progress made in Fraser, with seven more projects supporting containers (a 77% increase), and new scenarios integrating cloud native features…

Read more at OPNFV

The Ceph Storage Project Gets a Dedicated Open-Source Foundation

Ceph is an open source technology for distributed storage that gets very little public attention but that provides the underlying storage services for many of the world’s largest container and OpenStack deployments. It’s used by financial institutions like Bloomberg and Fidelity, cloud service providers like Rackspace and Linode, telcos like Deutsche Telekom, car manufacturers like BMW and software firms like SAP and Salesforce.

These days, you can’t have a successful open source project without setting up a foundation that manages the many diverging interests of the community and so it’s maybe no surprise that Ceph is now getting its own foundation. Like so many other projects, the Ceph Foundation will be hosted by the Linux Foundation.

“Today’s launch of the Ceph Foundation is a testament to the strength of a diverse open source community coming together to address the explosive growth in data storage and services,” said Sage Weil, Ceph co-creator, project leader, and chief architect at Red Hat for Ceph.

Read more at TechCrunch

Systems Engineer Salary Rises Even Higher with Linux Experience

System administration is a very reactive role, with sysadmins constantly monitoring networks for issues. Systems engineers, on the other hand, can build a system that anticipates users’ needs (and potential problems). In certain cases, they must integrate existing technology stacks (e.g., following the merger of two companies), and prototype different aspects of the network before it goes “live.”

In other words, it’s a complex job, with a salary to match.  …If you want a truly impressive salary, though, consider specializing in Linux systems—that will translate into a $20,000 pay bump. 

Read more at Dice

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

LDAP Authentication In Linux

This howto will show you how to store your users in LDAP and authenticate some of your services against it. I will not show how to install particular packages, as that is distribution/system dependent. I will focus on the “pure” configuration of all the components needed to have LDAP authentication and storage of users. The howto assumes that you are migrating from regular passwd/shadow authentication, but it is also suitable for people starting from scratch.

What we want to achieve is to have our users stored in LDAP and authenticated against LDAP (either directly or via PAM), and to have some tool to manage this in a human-understandable way. That way we can use any software that has LDAP support, or fall back to the PAM LDAP module, which acts as a PAM->LDAP gateway.

Configuring OpenLDAP

OpenLDAP consists of the slapd and slurpd daemons. This howto covers a single LDAP server without replication, so we will focus only on slapd. I also assume you have installed and initialized your OpenLDAP installation (this depends on your system/distribution). If so, let’s move on to the configuration part.
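As a quick sanity check once slapd is running (a hypothetical example, assuming your directory uses the dc=example,dc=com suffix and permits anonymous reads), you can query the server with ldapsearch:


# List every entry under the (hypothetical) dc=example,dc=com base
ldapsearch -x -H ldap://localhost -b "dc=example,dc=com" "(objectClass=*)"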

Read more at HowToForge