
DevOps: A Pillar of Modern IT Infrastructure

A massive transformation is underway in the way we manage IT infrastructure. More companies are looking for improved agility and flexibility. They are moving from traditional server stacks to cloud-based infrastructure to support a new array of applications and services that must be delivered at breakneck pace in order to remain competitive.

This transition is as much about people as it is about technology. In traditional data centers, separate IT departments specialize in different pieces of the stack — networking, storage, databases, and so on. These silos operate independently, and provisioning new systems is a challenging and time-consuming task.

The adoption of cloud-based, software-defined technologies requires a much tighter collaboration between these silos. When a company needs to spin up 100 new systems in under a minute, they can’t afford 10 different departments talking to each other to provision it. They need it now. To achieve this scale and speed, they need to break the silos. And that’s exactly what has happened, giving rise to a new culture and set of IT practices called DevOps that blur the traditional line between developers and operations.

Who are DevOps pros?

DevOps is an emerging process and mindset for managing modern, cloud-based infrastructure, and it is gaining mainstream adoption. It’s the blending of development and operational needs to create maximum business agility, says Amit Nayar, Vice President of Engineering at Media Temple, a web hosting and cloud hosting provider.

DevOps professionals are developers who also have the expertise of sysadmins, and vice versa; they know the components needed for operations — networking, databases, storage, and almost everything else. The goal is to break down the silos of traditional IT infrastructure and foster tighter collaboration between “development” and “operations,” leading to the term DevOps. DevOps professionals are jacks of all trades and masters of some.

Core skillsets expected from DevOps professionals include a complete understanding of building, deploying, monitoring, and managing production and development environments.

To get a picture of the skillsets that big companies are looking for, I went through dozens of DevOps job openings at big companies like Boeing, Capital One, Booz Allen, Red Hat, Geico, and Apple. I found certain “desired” skills and knowledge common to all of those job openings:

  • Strong experience with provisioning and deployment toolchains such as Chef, Puppet, Ansible, Salt, Docker, Heroku Buildpacks, etc. (see the playbook sketch after this list)

  • Containers and container orchestration (Kubernetes, Docker, Docker Swarm)

  • Continuous integration and test automation tools (Travis CI, Jenkins, etc.)

  • Cloud and virtualization technologies: AWS, Azure, VMware, KVM, zones/containers, Vagrant, Docker

  • System monitoring experience with tools such as Nagios, Sensu, etc.

  • Proficiency in scripting languages such as Perl, Python, and Ruby

  • Experience building dev, test, and production environments in public cloud

  • Experience in configuration management and source code control tools and setting up continuous build environments

  • Experience with HashiCorp tools (Vagrant, Vault, Packer, etc.)

  • Experience with networking services and SDN

  • Good understanding of network theory, including different protocols (TCP/IP, UDP, ICMP, etc.), MAC addresses, IP packets, DNS, OSI layers, and load balancing

  • Strong working knowledge of Linux operating systems, their underlying components, system statistics, performance tuning, filesystems, and I/O.

  • A problem-solving, do-whatever-it-takes ability and attitude.
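
To make the provisioning and configuration-management items above concrete, here is a minimal, illustrative Ansible playbook of the kind these postings have in mind. It is only a sketch: the "webservers" host group, the apt module, and the nginx package are assumptions for illustration, not requirements taken from any posting.

```yaml
# Illustrative Ansible playbook: drive a group of hosts to a desired state.
# Host group, package, and service names are placeholders for this sketch.
- hosts: webservers
  become: true
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present
        update_cache: true

    - name: Ensure nginx is running and enabled at boot
      service:
        name: nginx
        state: started
        enabled: true
```

A single ansible-playbook run applies the same desired state to every host in the group, which is the repeatable, code-driven provisioning these listings describe.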

In a nutshell, DevOps pros do almost everything across these key areas — automation, building, testing, deploying, monitoring, and managing production and development environments.

“With more of the industry supporting cloud-like infrastructures — including running virtualized platforms on in-house data centers — it is critical for engineers to understand how things connect and the constraints involved in running these services and applications,” said Mike Fiedler, Director of Technical Operations at Datadog, provider of a SaaS-based monitoring and analytics platform.

The DevOps movement is not just about the technical skillset, however. “DevOps is as much a mindset as it is a skillset,” Nayar said. The collapse of silos has certainly brought a shift in skillset — the line between operations and developers is blurry now.

But, as Nayar said, it’s not just about skillset. Those job openings show that organizations are looking at their infrastructure as a whole, breaking down the traditional silos, the barriers that separated developers and operations.

Rise of DevOps

Modern infrastructures are predominantly designed to be defined by code, ensuring repeatable, programmable deployments, and this trend is driving the need for the hybrid DevOps mindset and skillset, according to Nayar.

DevOps and cloud native applications are creating infrastructure as code that can be described in a YAML file. This has interesting side effects. It’s liberating for sysadmins, who no longer have to worry about provisioning and managing individual pieces of infrastructure, such as virtual machines and firewalls; all of that moves to the developer side of the house. It’s also liberating for developers, who no longer have to deal with the operations side of things.
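
To illustrate what infrastructure described in a YAML file can look like, here is a minimal sketch of a Kubernetes Deployment manifest; the names, labels, image, and replica count are hypothetical and chosen only for this example.

```yaml
# Illustrative Kubernetes Deployment: three replicas of a stateless web container.
# The names and the nginx image are placeholders for this sketch.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.21
          ports:
            - containerPort: 80
```

Applying this file through the cluster’s API requests the desired state (three identical replicas) without anyone provisioning machines by hand.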

“They define the need through the file, abstracting out infrastructure — describing it all in a file and requests through APIs,” said Amar Kapadia, Sr. Director Product Marketing at Mirantis.

The result is that structural changes cannot take place independently of operational changes. If you want a modern infrastructure, you must adopt DevOps practices. DevOps professionals are the ones who enable a company to deliver applications and services more efficiently and cost-effectively.

“Modern infrastructure is synonymous with DevOps,” said Thomas Hatch, creator of the popular Salt configuration management tool and CTO of SaltStack.

New game, new challenges

Deploying modern infrastructure is challenging. As more companies embrace modern IT infrastructure, adopting a DevOps culture becomes an advantage; they have to transform or replace their existing independent silos with a more coherent group of IT professionals. Part of adopting DevOps, Nayar says, is moving your culture to a DevOps mindset first and finding common agreement between development and operations.

The biggest challenge for these companies is breaking old habits and challenging the status quo. To make this shift, companies must hire people with experience in the DevOps culture and toolset, but they also must train their existing staff on DevOps concepts, Nayar said. This can be an easy task for a smaller company, but the bigger the company, the harder it becomes. In either case, the increasing demand for DevOps professionals can strain employees and HR departments alike.

Different companies use different approaches

Companies are very diverse in how they run their operations. They have different business and digital requirements, and they are using different approaches to adapt to the growing need for cross-team collaboration. “Every organization is adopting the DevOps mindset in a very different way,” said Hatch.

One approach is to identify developers and sysadmins who are willing to learn more about tasks outside of their current job profile. It’s not just the companies looking for new skillsets; systems and software engineers are going through the same phase. They are aware of the changing dynamics, and they are aware of the risks of not adapting to new opportunities. It’s not really hard to find developers interested in learning more about the systems that run the applications they developed. Companies can identify such talent and encourage them to level up and acquire the knowledge needed to support their services. “The same applies for sysadmins who are looking to learn more about development,” said Fiedler.

Some companies take a different approach and cross-pollinate devs and operations. You can embed a sysadmin with the dev team so the team gets the domain-specific expertise. The sysadmin is then more involved in the dev cycle and aware of design decisions that can have an impact later on. Additionally, “developers benefit from having someone readily accessible to help them understand some of the system-level constraints and greater architecture,” said Fiedler.

Either way, the transformation requires a strong leader to focus on creating a culture shift by training and reorganizing new teams. “One place where companies making the transition to DevOps are hiring most is leadership to drive the transformation (VP level),” said Kapadia. “They are hired to own the transformation.”

Conclusion

The bottom line is that organizations can’t afford to ignore this cultural change.

“The transition from traditional dev and ops practices to coordinated, agile DevOps has been slow, but each organization has to move at an appropriate pace,” said Hatch. “Fear of missing out and competitive pressure has helped to accelerate adoption, as the benefits of DevOps can be obvious and markets move so quickly these days.”

The benefits of DevOps are being demonstrated by web-scale companies like Amazon, Facebook, and Google. Everyone seems to be on the hunt for GIFEE (Google-style Infrastructure For Everyone Else). But change is hard, and for some, impossible.

In response, DevOps pros are fond of quoting noted statistician and champion of the lean manufacturing movement, W. Edwards Deming, who famously said, “It is not necessary to change. Survival is not mandatory.”


Google, Samsung Join AT&T and Verizon in Independent CORD Project

Google and Samsung join AT&T and Verizon as partners in the ONOS project’s CORD platform, which is set to be spun off as an independent open source project. The Open Network Operating System (ONOS) project’s Central Office Re-architected as a Datacenter (CORD) platform continues to gain momentum, with the Open Networking Lab and the Linux Foundation spinning off the CORD initiative as an independent open source project.

The initiative also gained new partners in Google, Radisys and Samsung, with Google set to host the first CORD Summit this week at its Tech Corner Campus in California. The initiative also includes members AT&T, Verizon Communications, China Unicom, NTT Communications and SK Telecom, as well as vendors like Ciena, Cisco, Fujitsu, Intel, NEC and Nokia.

Read more at RCR Wireless

Why Blockchain Matters

If your familiarity with Bitcoin and Blockchain is limited to having heard about the trial of Silk Road’s Ross Ulbricht, you can be forgiven — but your knowledge is out of date. Today, Bitcoin and especially Blockchain are moving into the mainstream, with governments and financial institutions launching experiments and prototypes to understand how they can take advantage of the unique characteristics of the technology.

Why Blockchain?

The obvious question is why they’re all so interested in blockchain. The answers vary, naturally, but they seem drawn by two opportunities: cost reduction and efficiency, and innovation. The outcomes can be very different — after all, cost reduction and efficiency typically focus on improving the as-is state of affairs, while innovation disrupts or displaces the existing order of things.

Read more at Datamation

The Apache Software Foundation’s Two New Big Data Projects Tackle Science and Processing

The Apache Software Foundation is making a big commitment to Big Data. As reported in this post, in recent months the foundation has promoted a slew of open source Big Data projects to Top-Level Status.  This puts a number of them on the same kind of development fast track that catapulted the Spark project to success.

Doug Cutting, co-founder of Hadoop, recently said at the Apache Big Data conference, “The hallmark of this ecosystem that’s emerged is the way that it’s evolving. We’re seeing not just new projects added, but some of the old projects being replaced over time by things that are better. In the end, nothing is sacred. Any component can be replaced by something that is better.”

As cases in point, Apache has announced that two new Big Data projects have earned Top-Level status: OODT and Bahir. By earning Top-Level Status, OODT and Bahir will benefit from active development and strong community support.

As background, countless organizations around the world are now working with data sets so large and complex that traditional data processing applications can no longer drive optimized analytics and insights. That’s the problem that the new wave of Big Data applications aims to solve, and Apache has graduated more than 10 of these applications to Top Level in the past year.

OODT: NASA is Onboard

Originally created at NASA Jet Propulsion Laboratory in 1998 as a way to build a national framework for data sharing, OODT has also been instrumental to the National Cancer Institute’s Early Detection Research Network for managing distributed scientific data sets across 20+ institutions nationwide for more than a decade.

According to Apache:

“OODT is a grid middleware framework for science data processing, information integration, and retrieval. As ‘middleware for metadata’ (and vice versa), OODT is used for computer processing workflow, hardware and file management, information integration, and linking databases. The OODT architecture allows distributed computing and data resources to be searchable and utilized by any end user.”

“Apache OODT 1.0 is a great milestone in this project,” said Tom Barber, Vice President of Apache OODT. “Effectively managing data pools has historically been problematic for some users, and OODT addresses a number of the issues faced. v1.0 allows us to prepare for some big changes within the platform with new UI designs for user-facing apps and data flow processing under the hood. It’s an exciting time in the data management sector and we believe Apache OODT can be at the forefront of it.”

Apache OODT is in use in many scientific data system projects in Earth science, planetary science, and astronomy at NASA, such as the Lunar Mapping and Modeling Project (LMMP), NPOESS Preparatory Project (NPP) Sounder PEATE Testbed, the Orbiting Carbon Observatory-2 (OCO-2) project, and the Soil Moisture Active Passive mission testbed.

In addition, OODT is used for large-scale data management and data preparation tasks in the DARPA MEMEX and XDATA efforts, and for supporting research and data analysis within the pediatric intensive care domain in collaboration with Children’s Hospital Los Angeles (CHLA) and its Laura P. and Leland K. Whittier Virtual Pediatric Intensive Care Unit (VPICU), among many other applications.

Bahir and Big Data Processing

Apache Bahir has become a Top-Level Project (TLP), too, and Spark developers will want to take note. Bahir bolsters Big Data processing by serving as a home for existing connectors that originated under Apache Spark, and it provides additional extensions/plugins for other related distributed systems, storage systems, and query execution systems.

Bahir code is extracted from the Apache Spark project and has been spun out as a standalone project to provide implementations for different Spark-related extensions/plugins, connectors, and other pluggable components. Current extensions include:

  • streaming-akka (Akka: an open source toolkit and runtime that simplifies the construction of concurrent and distributed applications on the Java Virtual Machine)

  • streaming-mqtt (MQTT: a lightweight messaging protocol for small sensors and mobile devices, optimized for high-latency or unreliable networks)

  • streaming-twitter (Twitter: an online social networking service; Bahir allows the processing of social data from Twitter)

  • streaming-zeromq (ZeroMQ: a high-performance asynchronous messaging library aimed at use in distributed or concurrent applications)

In addition, Apache Bahir has a strong relationship with different storage layers; the project intends to extend that relationship to a number of other ASF projects and Apache-licensed initiatives.

“Apache Bahir is a new community that aims to be a place to curate extensions related to distributed analytic platforms following the Apache Governance,” said Luciano Resende, Vice President of Apache Bahir and an architect at IBM who has contributed to The Apache Software Foundation for over 10 years. “The project is initially offering a few Apache Spark extensions but it is definitely open for expanding to other platforms such as Apache Beam, Apache Flink and others.”

“We are very interested in streaming-mqtt for remote sensing applications and control/monitoring. We have a lot of Big Data needs in Earth science especially in remote and difficult to access environments and plugins such as streaming-mqtt from Bahir provide a readily accessible and Apache-based solution to that,” said Chris Mattmann, member of the Apache Bahir Project Management Committee, and Chief Architect, Instrument and Science Data Systems Section at NASA Jet Propulsion Laboratory.

“We are very motivated to increase the size and diversity of the Apache Bahir community,” added Resende. “We welcome feedback, use cases, bug reports, patch submissions, code contributions, documentation, new extension proposals, and other ways to participate.”

Are you interested in more cutting-edge Big Data projects that Apache is elevating to Top-Level? You can find a comprehensive collection of them in this post.

Docker London: Container Security [Video]

In this talk, Phil Estes walks through the core security capabilities available today in Docker and other container runtimes, and how those capabilities have improved not only pure container isolation but also the whole lifecycle of a container workflow. Phil demonstrates recent additions to the Docker engine in 2016, such as user namespaces and seccomp, and how they continue to enable better container security and isolation.
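
For readers who want a sense of what those knobs look like in practice, here is a minimal, illustrative Docker Compose sketch that tightens a container’s privileges. The service name, image, and commented-out seccomp profile path are assumptions for this sketch, not taken from the talk.

```yaml
# docker-compose.yml (sketch): a service with a few container-hardening options.
# Service name, image, and profile path are placeholders.
services:
  web:
    image: nginx:1.21
    read_only: true                  # mount the container filesystem read-only
    cap_drop:
      - ALL                          # drop all Linux kernel capabilities...
    cap_add:
      - NET_BIND_SERVICE             # ...then add back only what the service needs
    security_opt:
      - no-new-privileges:true       # block privilege escalation via setuid binaries
      # A custom seccomp profile could also be supplied here, e.g.
      # - seccomp:./seccomp-profile.json   (hypothetical path)
```

Dropping capabilities and disabling privilege escalation narrows what a compromised process inside the container can do; a custom seccomp profile and daemon-level user namespace remapping, configured separately, narrow it further.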

This talk is a fast-paced overview of the potential threats faced when containerizing applications, married to a quick run-through of the “security toolbox” available in the Docker engine via Linux kernel capabilities and features enabled by OCI’s libcontainer/runc and Docker.  This talk was given at Docker London on Wednesday, July 20th, 2016. 

Watch the complete video at Skills Matter

Remix OS for PC Upgraded to Marshmallow, Supports More Hardware

Remix OS has been putting Android 5.1 on PCs for only half a year, but now users can upgrade their devices to Android Marshmallow. The update also makes the OS compatible with additional NVIDIA and AMD GPUs, which adds support for more than a dozen x86 PCs and laptops. It can be installed on most Intel-based PCs and Macs, although Android and most of its apps will probably always work best on ARM.

Read more at The Verge

Linux 4.8 Bringing Intel MPX Enhancements, Work Towards Virtually Mapped Kernel Stacks

Ingo Molnar sent in his pull requests on Monday for the Linux 4.8 kernel. Among the interesting material this cycle were the x86/mm changes with some notable commits.

The x86 memory management work for Linux 4.8 includes prep work for supporting virtually mapped kernel stacks, a workaround for erratum of Intel’s Knights Landing hardware, Intel MPX (Memory Protection Extensions) enhancements, and other fixes and clean-ups. 

Read more at Phoronix

OpenBSD 6.0 Tightens Security by Losing Linux Compatibility

OpenBSD, one of the more prominent variants of the BSD family of Unix-like operating systems, will be released at the beginning of September, according to a note on the official OpenBSD website.

Often touted as an alternative to Linux, OpenBSD is known for the lack of proprietary influence on its software and has garnered a reputation for shipping with better default security than other OSes and for being highly vigilant (some might say strident) about the safety of its users. Many software router/firewall projects are based on OpenBSD because of its security-conscious development process.

Most significant among the latest security-related changes for OpenBSD is the removal of Linux emulation support. …

Read more at InfoWorld

How To Check Pokémon Go Server Status on Ubuntu

Playing Pokemon Go? This indicator applet for Ubuntu tells you when the game’s servers are up and running so you can head out and catch em all. 

Pokémon GO is ripping up the world right now, but trying to catch ’em all isn’t made easy thanks to continual server outages.

Any wannabe trainers among you have no doubt felt the frustration of heading off on a Pokémon hunt only to find the game server Koffing and Weezing under demand or, more often, grinding to a full-on Poké-stop. It would be great if the game could sort itself out and work properly, but that won’t happen overnight.

In the meantime there is a neat little helper you can add to your Ubuntu desktop.

Read more at OMG! Ubuntu!

Electric Cloud Automates Rolling Deployments for Zero-Downtime Updates

Electric Cloud wants to free up deployment teams’ weekends by eliminating the heavy scripting and manual steps involved in releasing new software. The latest feature for its ElectricFlow release automation software is rolling deployments with the push of a button, allowing customers to choose the deployment strategy that suits them best, whether rolling, blue/green, or canary.

“We’ve heard about the Facebooks, the Etsys, the Twitters, the unicorns, all the wonderful stuff they do in how they deliver software,” said Anders Wallgren, Electric Cloud chief technology officer. “They’ve been able to do that through massive amounts of resources. They’ve spent many years building these bespoke systems to very efficiently push software into production through their delivery pipelines. We’ve been focusing in the past few years on bringing those capabilities to companies that are not unicorns that don’t have the resources or capabilities to build these things from scratch.”

The rolling deployments feature allows teams to practice deployments in QA, staging, and pre-production environments so a full production rollout isn’t the first time it’s been done, making deployments more reliable.

Read more at The New Stack