Using Ansible playbooks, instead of Docker’s tools, opens the doors to new kinds of dev automation. A new project from the creators of the system automation framework Ansible, now owned by Red Hat, wants to make it possible to build Docker images and perform container orchestration within Ansible.
Ansible Container, still in the early stages of development, allows developers to use an Ansible playbook (the language that describes Ansible jobs) to outline how containers should be built; it uses Ansible’s stack to deploy those applications as well.
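The playbooks Ansible Container consumes are ordinary Ansible YAML. As a minimal sketch (the `web` host pattern, package name, and file paths here are illustrative, not taken from the project's documentation), a playbook describing how a service container should be built might look like:

```yaml
# main.yml -- illustrative playbook run against the service
# containers being built. Host pattern and tasks are assumptions.
- hosts: web
  tasks:
    - name: Install nginx inside the image being built
      apt:
        name: nginx
        state: present
        update_cache: yes

    - name: Copy site content into the image
      copy:
        src: index.html
        dest: /usr/share/nginx/html/index.html
```

Because the playbook is declarative, the same description can drive both the image build and later orchestration steps.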
One of the major obstacles to full-scale operationalization is operators' reliance on their existing network management and siloed operations support systems (OSS), which limits their ability to effectively fulfill and assure services in a hybrid environment. In many instances operators have taken a myopic, bottom-up approach, deploying solutions solely to manage the VNF components, which only adds complexity to an already complex hybrid physical and virtual network environment. Beyond that misalignment between service fulfillment and service assurance, operators' networks may also lack integrity between service configuration and device configuration, automatic discovery and reconciliation capability, real-time, policy-driven service management, a centralized catalog evolved to manage and blend both virtualized and non-virtualized services, and multi-party compensation and revenue management capability, all of which can hinder commercialization of SDN and NFV.
Reducing waste, encouraging experimentation, and making everyone happy
Q: What do DevOps people mean when they talk about small batches?
A: To answer that, let’s take a look at an unpublished chapter from the upcoming book The Practice of System and Network Administration, third edition, due out in October 2016.
One of the themes you will see in this book is the small batches principle: it is better to do work in small batches than big leaps. Small batches permit us to deliver results faster, with higher quality and less stress.
We begin with an example that has nothing to do with system administration in order to demonstrate the general idea. Then we focus on three IT-specific examples to show how the method applies and the benefits that follow.
The small batches principle is part of the DevOps methodology. It comes from the lean manufacturing movement, which is often called just-in-time manufacturing. It can be applied to just about any kind of process. It also enables the MVP (minimum viable product) methodology, which involves launching a small version of a service to get early feedback that informs the decisions made later in the project.
The data center network layer is the engine that manages some of the most important business data points you have. Applications, users, specific services, and even entire business segments are all tied to network capabilities and delivery architectures. And with all the growth around cloud, virtualization, and the digital workspace, the network layer has become even more important.
Most of all, we’re seeing more intelligence and integration taking place at the network layer. The biggest evolution in networking includes integration with other services, the integration of cloud, and network virtualization. Let’s pause there and take a brief look at that last concept.
There are several vendors offering a variety of flavors of SDN and network virtualization, so how are they different? Are some more open than others? Here’s a look at some of the key players in this space.
Somewhere in a world full of advanced technology that we write about regularly here on TechCrunch, there exists an ancient realm where mainframe computers are still running programs written in COBOL.
This is a programming language, mind you, that was developed in the late 1950s, and used widely in the ’60s and ’70s and even into the ’80s, but it’s never really gone away. You might think it would have been mostly eradicated from modern business by now, but you would be wrong.
As we march along, however, the pool of people who actually know how to maintain these COBOL programs grows ever smaller by the year, and companies looking to move the data (and even the archaic programs) to a more modern platform could be stuck without personnel to help guide them through the transition.
Let’s Encrypt simplifies the process of installing SSL certificates and allows you to set up a free SSL certificate on your Web site in just a few minutes.
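With the certbot client, the whole process comes down to a couple of commands. The domain names below are placeholders, and certbot must already be installed on the server:

```shell
# Obtain and install a certificate for an nginx-served site.
# Replace example.com with your own domain.
sudo certbot --nginx -d example.com -d www.example.com

# Let's Encrypt certificates expire after 90 days;
# test that automated renewal works.
sudo certbot renew --dry-run
```

The `--nginx` plugin both fetches the certificate and edits the server configuration; an `--apache` plugin and a plain `certonly` mode exist for other setups.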
There are very few technologies that have had as big an influence on businesses today as cloud computing. It has completely changed the way businesses and tech teams think and function. Prior to the adoption of public cloud technologies, businesses relied heavily on data centers to store and process their information, and a tech professional’s operational expertise was focused on hardware (i.e., servers, storage, and networking). The rise of cloud computing, however, caused this operational expertise to shift. Today, more companies are defining infrastructure as software, and defining infrastructure as code requires open source professionals to adopt new skill sets to help define and manage software-defined infrastructures.
As an open source professional, familiarity with major cloud vendors such as Amazon Web Services and Microsoft Azure is critical from a hiring manager’s perspective. While these skills are not necessarily “open source,” knowing how vendors define infrastructure is crucial for deploying cloud-based services and supporting services for everything from applications hosting and data storage to content distribution. Therefore, open source professionals will need to supplement their skills with a strong working knowledge of these platforms.
Cloud-related skills are among the fastest growing on Dice.
Vendor and cloud-related skills are among the fastest growing on Dice, with job postings for professionals with Azure experience, for example, up 87 percent year over year. A keen understanding of smaller virtual private server (VPS) providers, like Linode or DigitalOcean, is also valuable from a professional development standpoint. Employers often use these servers in conjunction with major cloud vendors to reduce costs and ensure higher levels of service.
To supplement vendor skills, employers are also in the market for open source professionals with experience in configuration management tools like Puppet, Chef, Ansible or SaltStack. Configuration management has become a growing point of entry into the open source community for many companies. All of the major configuration software vendors and sponsors began as open source projects, with most continuing to do a majority of their development in an open source model.
Configuration management can aid companies in building out infrastructure in a fully automated and repeatable fashion. Rather than relying on manual configuration and custom scripting to create and manage infrastructure, tools like Puppet and Chef are being used to expedite the deployment process and eliminate human error. On Dice, there are roughly 1,700 Puppet postings and 1,600 Chef job postings on any given day, representing roughly 2 percent each of the more than 87,000 total jobs posted on the site.
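The idea at the heart of these tools — declare a desired state and let the tool converge toward it, safely and repeatably — can be sketched in a few lines of Python. The state model and package names here are invented for illustration and are not any real tool’s API:

```python
# Minimal sketch of the desired-state convergence loop behind
# configuration management tools. The dict-based state model is
# illustrative only.

def converge(current, desired):
    """Return the actions needed to move `current` state to `desired`.

    Applying the actions and running converge() again yields an empty
    plan: the operation is idempotent, so repeated runs are safe.
    """
    actions = []
    for pkg, state in desired.items():
        if current.get(pkg) != state:
            actions.append(("set", pkg, state))
    for pkg in current:
        if pkg not in desired:
            actions.append(("remove", pkg))
    return actions

current = {"nginx": "absent", "openssl": "installed"}
desired = {"nginx": "installed", "openssl": "installed"}

print(converge(current, desired))  # [('set', 'nginx', 'installed')]
```

Idempotence is what distinguishes this model from ad-hoc scripting: a script that appends a line to a config file every run drifts, while a tool that asserts "this line must be present" does not.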
The rise of cloud computing has revolutionized the way companies and tech teams operate today. Perhaps this is why more than half (51 percent) of hiring managers and recruiters found cloud technologies to have the biggest impact on open source hiring in 2016, according to the 2016 Open Source Jobs Report. As an open source professional, expanding one’s knowledge base to include cloud-related skills isn’t just smart, it’s almost a necessity. It also doesn’t hurt that tech professionals with cloud experience are well compensated: Dice’s latest annual salary survey found that cloud (as well as big data) skills were represented among the majority of 2015’s highest earners, who made $131,121 to $142,845 on average. Cloud computing is a mainstay of the tech industry, and it seems to weigh heavily on employers’ minds as they make open source hiring decisions.
The Linux kernel was born twenty-five years ago this summer. Since that time a thriving partner ecosystem has arisen around open source platforms built on Linux, GNU and other free and open source software products. Here’s a look at milestones in the evolution of the Linux channel and partner ecosystem.
Contributing to a large complex project involves a fair bit of bureaucracy. There are standards and procedures to follow. The Mesos project provides a lot of help and support for contributors, so watch Van Remoortere and Park’s talk to learn the right way to become a Mesos contributor.
Not everyone can afford their own secret mastermind datacenter lair, with monocle and Persian cat. Frank Scholten introduces minimesos, the Mesos experimentation and testing tool for running and testing Mesos on a laptop.
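A typical minimesos session looks something like the following (Docker must be running locally; exact subcommands may vary by release):

```shell
# Start a local Mesos cluster in Docker containers on your laptop
minimesos up

# ... run your framework or tests against the local cluster ...

# Tear the cluster down when finished
minimesos destroy
```

Because the whole cluster lives in containers, it can also be spun up and torn down inside CI jobs to test Mesos frameworks automatically.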