
How To Be a Successful DevOps Engineer

With DevOps taking center stage in the software industry, the job title ‘DevOps Engineer’ is buzzing around, and today I have some thoughts that can guide you toward becoming a great DevOps engineer.

What is DevOps?

The term DevOps was coined as a combination of DEVelopers and OPerationS. According to Wikipedia:

DevOps is a term used to refer to a set of practices that emphasize the collaboration and communication of both software developers and information technology (IT) professionals while automating the process of software delivery and infrastructure changes. It aims at establishing a culture and environment where building, testing, and releasing software can happen rapidly, frequently, and more reliably.

Digital transformation is happening in every sector today, and if you don’t adapt to changing technology, your business is likely to die out in the coming years. Automation is the key: every company wants to get rid of repetitive tasks and automate as much as possible to increase productivity. This is where DevOps comes into the picture; although it is derived from agile and lean practices, it is still new to the software industry.

With the increasing use of tools and platforms like Docker, AWS, Puppet, and GitHub, companies can easily leverage automation and pave their way to success.

What is a DevOps Engineer?
A major part of adopting DevOps is creating a better working relationship between development and operations teams. Suggestions for doing this include seating the teams together, involving them in each other’s processes and workflows, and even creating one cross-functional team that does everything. In all of these approaches, Dev is still Dev and Ops is still Ops.

The term DevOps Engineer tries to blur this divide between Dev and Ops altogether, suggesting that the best approach is to hire engineers who are excellent coders and can also handle all the Ops functions. In short, a DevOps engineer is a developer who can think with an operations mindset and has the following skillset:

  • Familiarity and experience with a variety of Ops and Automation tools 
  • Great at writing scripts
  • Comfortable with frequent testing and incremental releases
  • Understanding of Ops challenges and how they can be addressed during design and development
  • Soft skills for better collaboration across the team

How can you be a great DevOps Engineer?

The key to being a great DevOps Engineer is to focus on the following:

  • Know the basic concepts of DevOps and get into the mindset of automating almost everything
  • Know about the different DevOps tools like AWS, GitHub, Puppet, Docker, Chef, New Relic, Ansible, Shippable, JIRA, Slack, etc.
  • Org-wide Ops mindset: 

    There are many common Ops pitfalls that developers need to consider while designing software. Reminding developers of these during design and development will go a long way toward avoiding problems altogether, rather than running into issues and then fixing them.

    Try to standardize this process by creating a checklist that is part of a template for design reviews.

  • End-to-end collaboration and helping others solve issues

  • You should be a scripting guru: Bash, PowerShell, Perl, Ruby, JavaScript, Python – you name it. A DevOps engineer must be able to write code to automate repeatable processes, as the short sketch below illustrates.
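For example, here is a minimal Python sketch (standard library only) of the kind of repeatable task worth automating: polling a list of service health endpoints and reporting failures. The endpoint URLs are hypothetical placeholders, not part of the original article.

```python
# Minimal sketch of automating a repeatable task: poll service health
# endpoints and report failures. The URLs below are hypothetical placeholders.
from urllib.request import urlopen

ENDPOINTS = [
    "http://app1.example.com/health",
    "http://app2.example.com/health",
]

def is_healthy(url, timeout=5):
    """Return True if the endpoint answers with HTTP 200 within the timeout."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return resp.getcode() == 200
    except OSError:  # covers connection errors and timeouts
        return False

if __name__ == "__main__":
    for url in ENDPOINTS:
        print(f"{url}: {'OK' if is_healthy(url) else 'FAILED'}")
```

A script like this would typically be wired into cron or a monitoring system rather than run by hand.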

Factors to measure DevOps success

  • Deployment frequency
  • Lead time for code changes
  • Rollback rate
  • Usage of automation tools for CI/CD
  • Test automation
  • Meeting business goals
  • Faster time to market
  • Customer satisfaction %
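To make the first three metrics above concrete, here is a hedged Python sketch of how deployment frequency, lead time for changes, and rollback rate could be computed from a hypothetical deployment log; the data layout and figures are assumptions purely for illustration.

```python
# Hedged sketch: computing a few DevOps success metrics from a
# hypothetical deployment log. The records below are invented examples.
from datetime import datetime, timedelta

deployments = [
    # (commit time, deploy time, rolled_back)
    (datetime(2017, 3, 1, 9, 0), datetime(2017, 3, 1, 15, 0), False),
    (datetime(2017, 3, 2, 10, 0), datetime(2017, 3, 3, 11, 0), True),
    (datetime(2017, 3, 6, 8, 0), datetime(2017, 3, 6, 12, 0), False),
]

period_days = 7
deploy_frequency = len(deployments) / period_days  # deploys per day
lead_times = [deploy - commit for commit, deploy, _ in deployments]
avg_lead_time = sum(lead_times, timedelta()) / len(lead_times)
rollback_rate = sum(1 for *_, rb in deployments if rb) / len(deployments)

print(f"Deployment frequency: {deploy_frequency:.2f} per day")
print(f"Average lead time:    {avg_lead_time}")
print(f"Rollback rate:        {rollback_rate:.0%}")
```

In practice these numbers would come from your CI/CD system's API or audit log rather than a hard-coded list.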

That’s a quick overview of DevOps and how to be good at it.

Understanding the Economics of OpenStack

As anyone involved with managing an OpenStack deployment quickly learns, cost savings and elimination of time-consuming tasks are among the biggest benefits that the cloud platform provides. However, leaders at many OpenStack-focused organizations, including Canonical, believe that the business technology arena is under such tremendous pressure to keep up as Software-as-a-Service, containers, and cloud platforms proliferate, that the true economics of OpenStack are misunderstood. Simply put, a lot of people involved with OpenStack don’t fully understand what they can get out of the platform and the ecosystem of tools surrounding it.

Mark Baker, OpenStack Product Manager at Canonical, provides useful background on all of this in a recent essay:

“Over the past decade, IT Directors turned to public cloud providers like AWS (Amazon Web Services), Microsoft Azure, and GPC (Google Public Cloud) as a way to offset much of the CAPEX (capital expenses) of deploying hardware and software by moving it to the cloud.  They wanted to consume applications as services and offset most of the costs to OPEX (Operating Expenses).  Initially, public cloud delivered on the CAPEX to OPEX promise. Moor Insights & Strategy analysts [point to] upwards of 45 percent in capital reductions in some cases, but organizations needing to deploy solutions at scale found themselves locked into a single cloud provider, fluctuating pricing models, and a rigid scale-up model that inhibits the organization’s ability to get the most out of their legacy hardware and software investments. Forward thinking IT directors realized they must disaggregate (put into units) their current data center environments to support scale-out. Consequently, OpenStack was introduced as a public cloud alternative for enterprises wishing to manage their IT operations as a cost-effective private or hybrid cloud environment.”

As Baker notes, it can be very challenging with OpenStack to determine where the exact year-over-year operating costs and benefits of managing the platform reach parity, not just with public cloud alternatives, but with their software licensing and critical infrastructure investments. “In a typical multi-year OpenStack deployment, labor makes up 25% of the overall costs, hardware maintenance and software license fees combined are around 20%, while hardware depreciation, networking, storage, and engineering combine to make-up the remainder,” Baker notes.
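To make Baker's rough breakdown easier to reason about, here is a toy Python sketch that applies those percentages to a hypothetical budget; the total figure is invented purely for illustration and is not from the article.

```python
# Toy cost model based on the rough percentages Baker quotes above.
# The total multi-year budget is a made-up figure for illustration only.
total_budget = 1_000_000  # assumed USD over a multi-year deployment

breakdown = {
    "labor": 0.25,                                          # ~25% per Baker
    "hardware maintenance + software licenses": 0.20,       # ~20% combined
    "hardware depreciation, networking, storage, engineering": 0.55,  # remainder
}

for item, share in breakdown.items():
    print(f"{item}: {share:.0%} -> ${total_budget * share:,.0f}")
```

The point of such a model is simply that labor dominates any single line item, which is why ongoing operations, not upfront costs, drive OpenStack economics.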

Indeed, the economic tradeoffs between public cloud solutions and platforms like OpenStack are complicated enough that many enterprises are running both types of solutions. As Forrester Research reports in a detailed brief on OpenStack economics, 82 percent of OpenStack deployments exist in parallel to other cloud platforms.

So what are some best practices for organizations that want to better understand the economics of an OpenStack deployment? Here are key thoughts from Forrester and Canonical on managing OpenStack efficiently and understanding its benefits:

Canonical: The only way to fully benefit from OpenStack is by adopting a new model for deploying and managing IT Operations… Building a private cloud infrastructure on OpenStack is an example of the big software challenge. Significant complexity exists in the design, configuration, and deployment of all production-ready OpenStack private cloud projects. While the upfront costs are negligible, the true costs are in the ongoing operations; upgrading and patching of the deployment can be expensive.

Forrester: Identify a self-contained project or area of the business. Begin with a pilot project, and use this to build familiarity with OpenStack and any partners with whom you might be working. OpenStack users have stated that working with OpenStack requires significant training in the first six months, especially if you don’t have seasoned OpenStack veterans. Ensure that the pilot — and OpenStack — have a senior champion within the business, such as the chief technology officer. How are other aspects of organizational IT currently managed, and does the rationale behind those historic deployment decisions also apply here? Figure out the right level of vendor support, given your team and your organization’s strategies.

Canonical also offers a free ebook that breaks down the economics of OpenStack in easily understood ways.

“It is important to keep in mind that OpenStack is not a destination, but rather a part of the scale-out journey to becoming cloud native,” said Canonical’s Baker. “CIOs know they must have cloud as part of their overall strategy. From a long-term perspective OpenStack will remain the key driver and enabler for hybrid cloud adoption. However, IT organizations will continue to struggle with service and applications integration while working to keep their operational costs from rising too much.”

In addition to the ebook from Canonical on OpenStack economics, The OpenStack Foundation has a free online webinar on the topic.

Learn more about trends in open source cloud computing and see the full list of the top open source cloud computing projects. Download The Linux Foundation’s Guide to the Open Cloud report today!

Keynote: The Double Helix of Open Source Software & Companies by Stormy Peters

Watch Stormy Peters describe how the interactions between companies and open source software communities influence our culture, in this keynote from LinuxCon Europe 2016.

Shaping the Culture of Open Source Companies

With all of the discussion about source code contributions in open source, sometimes we don’t spend enough time talking about the culture. In her keynote at LinuxCon Europe, Stormy Peters points out that when we say the word “culture,” we sometimes think only about diversity or hiring more women, but culture means more than that. Culture is about how we work, how we think, and how we interact with each other. 

It used to be that companies were confused by open source software, and the communities were often skeptical about companies. These days, most of the Internet, most of the web, and most of the world runs on open source software, with open source communities and companies working together, Peters points out. She compares the early days of open source to street art. Many people don’t understand why an artist would create a work of art for free, which is something we heard quite often in the early days of open source. Derivative works and building on the work of others are also common in street art, along with social norms dictating that a street artist should only make the work better and never deface the work of better artists. This is similar to how we have norms and unspoken ways of working in open source software.

One of the main open source contributions from companies comes in the form of providing careers in open source software. Peters once worried about whether people who are paid by companies to work on open source software would stop doing it when the company stopped paying them. Instead, she found that they might stop working on a project if it doesn’t seem as important anymore, but they’ll probably stay in open source software. 

Peters talked about how companies have helped open source software grow and helped many more people get involved, which has also brought us more diversity. While the technology industry as a whole is not very diverse, companies tend to have a more diverse population than open source software, and when they pay people to work, they bring that entire cohort with them. Anecdotally, she knows quite a few women that have started their careers with a paid job and become involved in open source software that way, which mirrors her experience starting in open source as part of her job at HP. These companies influence open source software, but it is a two-way cultural exchange, and we also influence these companies.

The way that we interact with the companies and the individuals around us shapes our culture, and every time we make a decision about whether to interact over the phone, on a mailing list, or have conversations over drinks, we are shaping the culture and the society that we’re creating. “We are the backbone of society right now, so I think it’s really important that we create a culture that is open, as open as our projects,” Peters says.

Watch the complete video below for more about how the interactions between companies and open source software communities influence our culture.

Interested in speaking at Open Source Summit North America on September 11-13? Submit your proposal by May 6, 2017. Submit now>>

Not interested in speaking but want to attend? Linux.com readers can register now with the discount code, LINUXRD5, for 5% off the all-access attendee registration price. Register now to save over $300!

SK Telecom CTO Discusses The Future of Software-Defined Networking in the Telco Industry

As more people access the Internet from their mobile devices, mobile operators must adapt their networks to accommodate skyrocketing data use and new traffic patterns. To do so, they’re turning to the same principles of software-defined networking (SDN) already finding success in the data center.

Next-generation 5G networks will be built with SDN technologies, revolutionizing telco infrastructure, says Dr. Alex Jinsung Choi, CTO, Executive Vice President and Head of the Corporate R&D Center at SK Telecom. Open source projects such as ONOS and CORD are leading the revolution and provide a good starting point for telco companies in SDN, he said.

“SDN technology has evolved dramatically,” with many commercial and open source solutions now available, Dr. Choi says in the Q&A below. “Now it is high time to transform our telco infrastructure using the solutions.”

Dr. Choi will give a keynote on “The Road to 5G with Open Source” at Open Networking Summit 2017, to be held April 3-6 in Santa Clara. Here, he discusses how SK Telecom is involved in open source networking, some of the successes the industry has had, and the challenges it faces in 2017.

Linux.com: How is SK Telecom using SDN today?

Dr. Alex Choi: We are using SDN to control data center network and transport network. In our data center, SDN is used to construct and control leaf-spine fabric networking using a commercial SDN solution. Additionally, we are planning to use SDN to manage the virtual network of our cloud. At this time, we are considering an open source controller and our own open source solution.

We have built T-SDN using SDN technology to control the transport network; T-SDN controls Layer 0 and Layer 1 of our transport network. We will try to expand T-SDN for bandwidth and VPN control of the transport network.

Linux.com: Which open source networking projects does your organization use and contribute to? Why do you participate? How are you contributing?

Choi: We are working with ONOS and CORD projects. We have joined ONOS in 2015 and are contributing our virtual network solution for data center and OpenStack, called SONA (Simplified Overlay Networking Architecture). Additionally, we have been contributing to the Open-CORD project. We have proposed the M-CORD project and have been leading it with ON.Lab, and also have contributed VTN (virtual tunnel networking) module for CORD Infrastructure.

We believe that cooperation with global community is very important for improving code quality, and the open source project is the best way for global cooperation.

Linux.com: What have been the biggest successes in SDN in the past year, and what do you expect the industry to accomplish in 2017?

Choi: The showcase of the M-CORD project with the 5G use cases is the biggest milestone in the SDN world. In the early stage of SDN, the concept of SDN was used for traffic engineering in large-scale L3 networks by Google. However, since then SDN has been applied mainly in the data center networking but not in the telco infrastructure.

Recently (two years ago), as the TCO reduction is inevitable in the telco industry, the CORD project has been born to transform telco’s central offices to data center, and M-CORD project has additionally been launched to transform mobile network functions such as IMS and EPC. We expect that M-CORD project would be the real reference architecture for 5G infrastructure.

Linux.com: What will be the biggest challenges for SDN in 2017?

Choi: Six years have passed since SDN was born. During these years, SDN technology has evolved dramatically: many new protocols and stacks have been developed such as FBOSS, SAI, and P4, in addition to OpenFlow, and many commercial solutions have been out in the market, and tons of SDN related open source projects have been launched.

Now it is high time to transform our telco infrastructure using the solutions. The market might not wait anymore. Even though some SDN solutions from major vendors were so successful, it was done in very limited areas such as data center networking, and the impact was not so significant. It would be the biggest challenge to show how SDN can revolutionize the telco infrastructure this year with real use cases.

Linux.com: What’s your advice to individuals and companies getting started in SDN?

Choi: The basic concept of SDN is separating the control plane and data plane of network devices, but we should not understand that SDN is just using OpenFlow protocol or applying white-box switches. These days there are tons of ways of adopting SDN technologies, including commercial solutions from traditional legacy device vendors and open source solutions from many startups.

We strongly recommend studying reference applications such as ONOS use cases and reference architecture like CORD as the starting point. They have the right use cases and light architecture, and thus it is easy to understand the SDN technologies. Also, they have large communities and it is easy to get help on any troubleshooting.

Learn more about the future of SDN at Open Networking Summit 2017. Linux.com readers can register now with code LINUXRD5 for 5% off the attendee registration. Register now!

How to Install GlusterFS with a Replicated HA Storage Volume on Ubuntu Linux 16.04 LTS

GlusterFS is a free and open source network distributed storage file system. Network distributed storage file systems are very popular among high-traffic websites, cloud computing servers, streaming media services, CDNs (content delivery networks), and more. This tutorial shows you how to install GlusterFS on an Ubuntu Linux 16.04 LTS server, configure a two-node high-availability storage volume for your web server, and enable TLS/SSL for WAN-based connections between two data centers.

A single cloud or bare metal server is a single point of failure; for example, /var/www/html/ can be a single point of failure. So you deploy two Apache web servers. But how do you make sure /var/www/html/ stays in sync on both Apache servers? You do not want to serve different images or data to clients. To keep /var/www/html/ in sync, you need clustered storage: even if one node goes down, the other keeps working, and when the failed node comes back online, it should sync the missing files in /var/www/html/ from the other server.
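As a taste of what the linked tutorial covers, here is a hedged Python sketch that drives the standard gluster CLI to form a two-node trusted pool and create a replicated volume; the hostnames and brick path are placeholders, and the full tutorial (including the TLS/SSL setup for WAN links) should be followed for a real deployment.

```python
# Abbreviated sketch of a two-node replicated GlusterFS setup, driving the
# standard gluster CLI from Python. Hostnames and the brick path are
# placeholder assumptions; run as root on the first node.
import subprocess

NODES = ["gfs01.example.com", "gfs02.example.com"]  # placeholder hostnames
BRICK = "/gluster/brick1/gvol0"                     # placeholder brick path
VOLUME = "gvol0"

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Probe the second node to form the trusted storage pool.
run(["gluster", "peer", "probe", NODES[1]])

# Create a replica-2 volume with one brick per node, then start it.
bricks = [f"{node}:{BRICK}" for node in NODES]
run(["gluster", "volume", "create", VOLUME, "replica", "2"] + bricks)
run(["gluster", "volume", "start", VOLUME])

# Clients can then mount the volume, e.g.:
#   mount -t glusterfs gfs01.example.com:/gvol0 /var/www/html
```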

Read more: How to install GlusterFS with a replicated high availability storage volume on Ubuntu Linux 16.04 LTS and set up TLS/SSL over WAN or the Internet for GlusterFS to increase security and privacy.

 

How Linux Conquered the Data Center

In 1998 Red Hat was continuing to gather together names of new allies and prospective supporters for its enterprise Linux.  Several more of the usual suspects had joined the party: Netscape, Informix, Borland’s Interbase, Computer Associates (now CA), Software AG.  These were the challengers in the Windows software market, the names to which the VARs attached extra discounts.  

One Monday in July of that year, Oracle added its name to Red Hat’s list.

“That was a seminal moment,” recalled Dirk Hohndel, now VMware’s Chief Open Source Officer.  He happened to be visiting the home of his good friend and colleague, Linus Torvalds — the man for whom Linux was named.  A colleague of theirs, Jon “Maddog” Hall, burst in to deliver the news: Their project was no longer a weekend hobby.

Read more at Data Center Knowledge

Google Cloud Container Builder Is Here for All of Your Docker Builds

Containers make the world go round, whether it’s shipping goods from China or making cat videos on YouTube work properly on your smartphone. Yesterday, Google announced that their Cloud Container Builder is finally available for general use after a year of powering Google App Engine deployments behind “gcloud app deploy”. Now you can build your Docker containers right in the Google Cloud Platform!

Google describes the Cloud Container Builder as “a stand-alone tool for building container images regardless of deployment environment.” Calling it faster and more reliable, Google hopes that users will find it more flexible with its command-line interface, automated build triggers, and build steps.
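As a rough illustration of the command-line workflow, the hedged sketch below shells out to gcloud from Python to submit a local source directory as a Docker build; the project ID and image tag are placeholders, and the exact gcloud invocation may differ between SDK versions.

```python
# Hedged sketch: submitting the current directory to Cloud Container Builder
# via the gcloud CLI. Project ID and image tag are placeholders, and the
# exact gcloud syntax may vary across SDK versions.
import subprocess

PROJECT_ID = "my-project"                      # placeholder project ID
IMAGE = f"gcr.io/{PROJECT_ID}/my-app:latest"   # placeholder image tag

subprocess.run(
    ["gcloud", "container", "builds", "submit", "--tag", IMAGE, "."],
    check=True,
)
```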

Read more at Jaxenter

The Promise of Blockchain Is a World Without Middlemen

The blockchain is a revolution that builds on another technical revolution so old that only the more experienced among us remember it: the invention of the database. First created at IBM in 1970, the importance of these relational databases to our everyday lives today cannot be overstated. Literally every aspect of our civilization is now dependent on this abstraction for storing and retrieving data. And now the blockchain is about to revolutionize databases, which will in turn revolutionize literally every aspect of our civilization.

IBM’s database model stood unchanged until about 10 years ago, when the blockchain came into this conservative space with a radical new proposition: What if your database worked like a network — a network that’s shared with everybody in the world, where anyone and anything can connect to it?

Read more at HBR

Stageless Deployment Pipelines: How Containers Change the Way We Build and Test Software

Large web services have long realized the benefits of microservices for scaling both applications and development. Most dev teams are now building microservices on containers, but they haven’t updated their deployment pipeline for the new paradigm. They still use a classic build -> stage -> test -> deploy model. It’s entrenched, and it’s just a bad way to release code.

First, the bad: staging servers get in the way of continuous delivery

Most development teams will recognize this everyday reality. You work in a sprint to get a number of changes completed. You open a new branch for each feature you work on and then send a pull request to a master, staging, or developer branch. The staging branch (or worse, your master branch), with all the changes from developers over the last few days (or two weeks), is then deployed to a staging server.

But oh no, there is a problem. The application doesn’t work, integration tests fail, there’s a bug, it’s not stable, or maybe you just sent the staging URL to the marketing team and they don’t like the way a design was implemented. Now you need to get someone to go into the staging/master branch that’s been poisoned with this change. They probably have to rebase, then re-merge a bunch of the pull requests minus the offending code, and then push it back to staging. Assuming all goes well, the entire team has only lost a day.

In your next retrospective the team talks about better controls and testing before things reach staging, but no one stops to ask why they’re using this weird staging methodology in the first place when we live in a world of containers and microservices. Staging servers were originally built for monolithic apps and were only meant to provide simple smoke tests to confirm that code didn’t just run on a developer’s local machine but would at least run on some other server somewhere. Even though they are now used for full application testing with microservices, this isn’t an efficient way to test changes.

Here’s the end result

  1. Batched changes happen in a slow cadence

  2. One person’s code only releases when everyone’s code is ready

  3. If a bug is found, the branch is now poisoned and you have to pull apart the merges to fix it (often requiring a crazy git cheatsheet to figure it out).

  4. Business owners don’t get to see code until it’s really too late to make changes

How should it work? Look at your production infrastructure

Your production infrastructure is probably ephemeral, built up of on-demand instances in Amazon/Azure/Google Cloud. Every developer should be able to spin up an instance on demand for their changes, send it to QA, iterate, etc., before sending it on to release.

Instead of thinking about staging servers, we have test environments that follow the classic git branch workflow. Each test environment can bring together all the interconnected microservices for much richer testing conditions.

Following this model changes the whole feedback and iteration loop to stay within the feature branch, never moving on to merge and production until all stakeholders are happy. Further, you can actually test each image against different versions of the microservices; a rough sketch of this per-branch approach follows below.
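As a loose illustration of the idea rather than any particular vendor's workflow, the Python sketch below names a disposable Docker Compose project after the current git branch so each feature branch gets its own isolated set of containers; the compose file, naming scheme, and teardown policy are assumptions for the sketch.

```python
# Loose sketch of per-feature-branch test environments: derive a disposable
# Docker Compose "project" name from the current git branch so each branch
# gets its own isolated set of containers. Illustrative assumptions only.
import re
import subprocess

def current_branch():
    out = subprocess.run(
        ["git", "rev-parse", "--abbrev-ref", "HEAD"],
        check=True, capture_output=True, text=True,
    )
    return out.stdout.strip()

def env_name(branch):
    # Compose project names are limited to lowercase alphanumerics/underscores.
    return "test_" + re.sub(r"[^a-z0-9]+", "_", branch.lower())

def up(project):
    # Build and start the branch's isolated environment in the background.
    subprocess.run(
        ["docker-compose", "-p", project, "up", "-d", "--build"],
        check=True,
    )

def down(project):
    # Tear down the environment and its volumes once feedback is collected.
    subprocess.run(["docker-compose", "-p", project, "down", "-v"], check=True)

if __name__ == "__main__":
    project = env_name(current_branch())
    up(project)
    print(f"Test environment '{project}' is running; tear down with down().")
```

The same naming trick lets a CI job spin up an environment for every open pull request and hand stakeholders a URL per branch.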

To accomplish this, DevOps teams can build lots of scripts, logic, and workflows that they then have to maintain, or they can use tools that already build this into a hosted CI as part of container lifecycle management.

The advantages of a test-on-demand iteration model

Once your test structure becomes untethered from a stagnant staging model, dev teams can actually produce code faster. Instead of waiting for DevOps or for approval to get changes onto a staging server where stakeholders can sign off, the code goes straight into an environment where they can share it and get feedback.

It also allows a much deeper level of testing than traditional CI by bringing all the connected microservices into a composition. You can actually write unit tests that rely on interconnected services. In this paradigm, integration testing allows for a greater variety of tests, and each testing service essentially becomes its own microservice.

Once iteration is complete, the code should be ready to go straight into master (after a rebase), eliminating the group exercise that normally takes place around staging. Testing and iteration happen at the feature level, and then code can be deployed at the feature level.

That means no more staging.

About Dan Garfield and Eran Barlev

Dan Garfield is a full-stack web developer with Codefresh, a container lifecycle management platform that provides developer teams of any size advanced pipelines and testing built specifically for containers like Docker. Check them out at https://codefresh.io

Eran is an ISTQB Certified Tester with over 20 years of experience as a software engineer working primarily in compiled languages. He is the Founder of the Canadian Software Testing Board (www.cstb.ca) and an active member of the ISTQB (www.istqb.org – International Software Testing Qualifications Board).