
As Open Source and Cloud Converge, Red Hat Expands Partnerships and Training

As open source and cloud computing converge, Red Hat is ramping up the scope of its cloud and DevOps initiatives, including building out its training offerings. If you still think of the company as primarily focused on enterprise Linux, think again. Through partnerships, such as its work with IBM, and acquisitions, such as its intent to purchase Codenvy, the cloud represents a particularly promising frontier for Red Hat. Meanwhile, the company is calling out skills gaps in the DevOps arena.

Betting on the Cloud and Container Future

IBM and Red Hat have been deepening their partnership, helping enterprises integrate Red Hat OpenStack and Ceph with IBM Private Cloud. At IBM’s recent InterConnect conference in Las Vegas, IBM executives said the partnership means that Red Hat customers will be able to extend their Red Hat-based environments into IBM’s public cloud. That, in turn, enables many of them to run the same management and software tools they use on premises while taking advantage of Red Hat’s open source platforms.

It’s worth noting that Red Hat has integrated its open tools with most of the major public cloud platforms now. Its tools are already offered for AWS, Microsoft Azure and Google’s cloud.

Meanwhile, Red Hat has announced its intent to acquire San Francisco-based startup Codenvy, which will give developers options for building out cloud-based integrated development environments. Codenvy is built on the open source Eclipse Che project, which offers a cloud-based integrated development environment (IDE). Red Hat’s openshift.io cloud-based container development service already integrates Codenvy’s Eclipse Che implementation.

In essence, Codenvy has DevOps software that can streamline coding and collaboration environments. According to Red Hat: “[Codenvy’s] workspace approach makes working with containers easier for developers. It removes the need to set up local VMs and Docker instances, enabling developers to create multi-container development environments without ever typing Docker commands or editing Kubernetes files. This is one of the biggest pain points we hear from customers, and we think that this has huge potential for simplifying the developer experience.”

The Bottom Line for the IT and DevOps Community

Recently, several executives from Red Hat participated in a panel discussion focused on skills gaps found in the IT industry. They emphasized that skills gaps are particularly acute in the areas of Big Data, DevOps, containers, microservices, and cloud computing.

With that in mind, Red Hat is expanding its training offerings. The company has partnered with universities to focus on open source-centric training, including Boston University, Rensselaer Polytechnic Institute, Duke University, and the University of Colorado at Boulder. Students at these institutions get the opportunity to work with open source tools and platforms.

In addition, Red Hat offers a number of training and certification options. The company continues to be very focused on OpenStack and has certification options that are worth considering. The company has announced a cloud management certification for Red Hat Enterprise Linux OpenStack Platform as part of the Red Hat OpenStack Cloud Infrastructure Partner Network. (The Linux Foundation also offers an OpenStack Administration Fundamentals course.)

Red Hat also offers educational options for microservices, working with middleware and more. It has announced five new training and certification offerings focused on improving open source and DevOps skills, as follows:

  • Developing Containerized Applications (course and exam);

  • OpenShift Enterprise Administration (course and exam);

  • Cloud Automation with Ansible (course and exam);

  • Managing Docker Containers with RHEL Atomic Host (course and exam); and

  • Configuration Management with Puppet (course and exam).

Ken Goetz, vice president of training at Red Hat, said: “DevOps isn’t a product but rather a culture and process. There are certain technologies and skills someone working in a DevOps environment should have. Our goal with this new RHCA concentration is to offer a way for employers to validate these critical open source skills, and in the process, further enable enterprises to accelerate application delivery.”

“Today, it is almost impossible to name a major player in IT that has not embraced open source,” Red Hat CEO Jim Whitehurst noted in a LinkedIn post. “Open source was initially adopted for low cost and lack of vendor lock-in, but customers have found that it also results in better innovation and more flexibility. Now it is pervasive, and it is challenging proprietary incumbents across technology categories.”

Are you interested in how organizations are bootstrapping their own open source programs internally? You can learn more in the Fundamentals of Professional Open Source Management training course from The Linux Foundation. Download a sample chapter now!

The Evolution of Scalable Microservices

In this article, we will look at microservices, not as a tool to scale the organization, development and release process (even though it’s one of the main reasons for adopting microservices), but from an architecture and design perspective, and put it in its true context: distributed systems. In particular, we will discuss how to leverage Events-first Domain Driven Design and Reactive principles to build scalable microservices, working our way through the evolution of a scalable microservices-based system.

Don’t build microliths

Let’s say that an organization wants to move away from the monolith and adopt a microservices-based architecture. Unfortunately, what many companies end up with is an architecture similar to the following:

Read more at O’Reilly

Serious Privilege Escalation Bug in Unix OSes Imperils Servers Everywhere

“Stack Clash” poses threat to Linux, FreeBSD, OpenBSD, and other OSes.

A raft of Unix-based operating systems—including Linux, OpenBSD, and FreeBSD—contain flaws that let attackers elevate low-level access on a vulnerable computer to unfettered root. Security experts are advising administrators to install patches or take other protective actions as soon as possible.

Stack Clash, as the vulnerability is being called, is most likely to be chained with other vulnerabilities to execute malicious code more effectively, researchers from Qualys, the security firm that discovered the bugs, said in a blog post published Monday. Such local privilege escalation vulnerabilities can also pose a serious threat to server hosting providers, because one customer can exploit the flaw to gain control over other customers’ processes running on the same server. Qualys said it’s also possible that Stack Clash could be exploited in a way that allows remote code execution directly.

Read more at ArsTechnica

What Is IT Culture? Today’s Leaders Need to Know

“Culture” is a pretty ambiguous word. Sure, reams of social science research explore exactly what “culture” is, but to the average Joe and Josephine the word means something different than it does to academics. In most scenarios, “culture” seems to map more closely to something like “the set of social norms and expectations in a group of people.” By extension, then, an “IT culture” is simply “the set of social norms and expectations pertinent to a group of people working in an IT organization.”

I suspect most people see themselves as somewhat passive contributors to this thing called “culture.” Sure, we know we can all contribute to cultural change, but I don’t think most people actually feel particularly empowered to make this kind of meaningful change. On top of that, we can also observe significant changes in cultural norms that depend on variables like time and geography. 

Read more at OpenSource.com

Hello Whale: Getting Started with Docker & Flask

When it comes to learning, I tend to retain info best by doing it myself (and failing many times in the process), and then writing a blog about it. So, surprise: I decided to create a blog explaining how you can get a Flask app up and running with Docker! Doing this on my own helped connect the dots when it came to Docker, so I hope it helps you as well. 

You can follow along with my repo here:

https://github.com/ChloeCodesThings/chloe_flask_docker_demo

First, I created a simple Flask application. I started by making a parent directory and naming it chloes_flask_demo.
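To give a sense of where this is headed, a minimal Dockerfile for containerizing a Flask app might look like the following. The file names (app.py, requirements.txt) and the base image tag are illustrative, not taken from the repo above:

```dockerfile
# Start from a slim Python base image (tag is illustrative)
FROM python:3-slim

# Copy the app into the image and install its dependencies
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .

# Flask's default development port
EXPOSE 5000

# Run the app when the container starts
CMD ["python", "app.py"]
```

With a file like this in the project directory, you would build and run the container with something like `docker build -t flask-demo .` followed by `docker run -p 5000:5000 flask-demo`.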

Read more at Codefresh.io

What Is GraphQL and Why Should You Care? The Future of APIs

“We’re going GraphQL, we’re replacing everything with GraphQL”  — Sid Sijbrandij, GitLab founder and CEO

GraphQL is an open source technology created by Facebook that is getting a fair bit of attention of late. It is set to make a major impact on how APIs are designed.

As is so often the case with these things, it’s not terribly well named. It sounds like a general purpose query language for graph traversal, am I right? Something like Cypher.

It isn’t. The name is a little deceptive. GraphQL is about graphs only if you see everything as graphs; reading the excellent, crisp docs makes clear that GraphQL is primarily about designing your APIs more effectively and being more specific about access to your data sources.
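To make that "more specific about access" point concrete, here is a small GraphQL query against a hypothetical schema. Unlike a fixed REST endpoint, the client names exactly the fields it wants back, and nothing more:

```graphql
# Fetch one user and only the fields we need (schema is hypothetical)
query {
  user(id: "42") {
    name
    email
    posts(last: 3) {
      title
    }
  }
}
```

The response mirrors the shape of the query, so clients avoid both over-fetching (unused fields) and under-fetching (extra round trips).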

Read more at RedMonk

Productivity or Efficiency: What Really Matters?

Efficiency is a quality many companies and employees are proud to tout. From making 2,000 widgets a day to processing several dozen emails within an hour, being efficient is a badge of honor in the working world.

The benefit of efficiency is that it can be relatively easy to measure. As management expert Peter Drucker once said, “If you can’t measure it, you can’t manage it.” So finding something you can measure – whether it’s email messages or widgets – makes it easier to improve your efficiency by making more of the output while using less money, less time, or both.

The problem is that focusing on efficiency to the exclusion of everything else can mean you’re focusing on the wrong things. Is it useful to generate more email messages if people aren’t clicking on them? Is it a good use of your time to write more and bigger reports if people don’t read them?

Read more at Laserfiche

Open Source Summit Brings Diverse Voices to Keynote Lineup

As Jim Zemlin announced at last year’s LinuxCon in Toronto, the event is now called Open Source Summit. The event combines the LinuxCon, ContainerCon, and CloudOpen conferences along with two new conferences: the Open Community Conference and the Diversity Empowerment Summit. And, this year, the summit will take place September 11-14 in Los Angeles, CA.

Traditionally, the event starts off with a keynote by Zemlin, in which he gives an overview of the state of Linux and open source. And one highlight of the schedule is always a keynote discussion between Zemlin and Linus Torvalds, creator of Linux and Git.

This year, attendees will also get to hear Tanmay Bakshi, a 13-year-old Algorithm-ist and Cognitive Developer, Author and TEDx Speaker, as part of the keynote lineup, which also includes:

  • Bindi Belanger, Executive Program Director, Ticketmaster

  • Christine Corbett Moran, NSF Astronomy and Astrophysics Postdoctoral Fellow, Caltech

  • Dan Lyons, FORTUNE columnist and Bestselling Author of “Disrupted: My Misadventure in the Startup Bubble”

  • Jono Bacon, Community Manager, Author, Podcaster

  • Nir Eyal, Behavioral Designer and Bestselling Author of “Hooked: How to Build Habit Forming Products”

  • Ross Mauri, General Manager, IBM z Systems & LinuxONE, IBM

  • Zeynep Tufekci, Professor, New York Times Writer, Author and Technosociologist

As one of the biggest open source events, the summit attracts more than 2,000 developers, operators, and community leadership professionals to collaborate, share information, and learn about the latest in open technologies, including Linux, containers, cloud computing, and more.

Top 5 reasons to attend Open Source Summit

Diversity: Open Source Summit strives to bring more diverse voices from the community and enterprise world. And, the new Diversity Empowerment Summit expands that goal by facilitating an increase in diversity and inclusion and providing a venue for discussion and collaboration. 

Cross-pollination: Open Source Summit brings together many different events, representing different projects, under the same umbrella. This allows for cross-pollination of ideas among different communities that are part of a much larger open source ecosystem.

Care for family: Open Source Summit is one of the few tech events where you can bring your entire family, including kids. The reason is simple — the organizers offer childcare at the venue, which allows parents to participate in the event without having to worry.

Awesome activities: Angela Brown, Vice President of Events at The Linux Foundation, not only knows how to plan top-notch events but also knows how to throw parties. The New Orleans LinuxCon, for example, hosted a Mardi Gras parade and a dinner with live jazz music. Chicago featured an event on the top floor of the Ritz hotel and a reception at the Museum of Science and Industry. Seattle included the Space Needle and the Chihuly Garden and Glass museum. The Toronto event took guests to Muzik, where they “gambled” and celebrated 25 years of Linux.

Great opportunity for networking: Open Source Summit is a great mix of attendees. You get to meet with leading developers, founders, community members, CEOs, CTOs, technologists, and users. As exciting as the sessions are, the real value of OSS is the hallway tracks where you connect and reconnect with friends and colleagues. You come back from OSS with more contacts, more friends, new perspectives, and good memories.

Register now at the discounted rate of $800 through June 24. Academic and hobbyist rates are also available. Applications are also being accepted for diversity and needs-based scholarships.

Basic Commands for Performing Docker Container Operations

In this series, we’re sharing a preview of the new self-paced Containers for Developers and Quality Assurance (LFS254) training course from The Linux Foundation. In earlier articles, we looked at installing Docker and setting up your environment, and we introduced Docker Machine. Now we’ll take a look at some basic commands for performing Docker container and image operations. Watch the videos below for more details.

To do container operations, we’ll first connect to our “dockerhost” with Docker Machine. Once connected, we can start the container in the interactive mode and explore processes inside the container.

For example, the “docker container ls” command lists the running containers. With the “docker container inspect” command, we can inspect an individual container. Or, with the “docker container exec” command, we can fork a new process inside an already running container and do some operations. We can use the “docker container stop” command to stop a container and then remove a stopped container using the “docker container rm” command.
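Put together, a typical session might look like the transcript below. The image and container names are illustrative, and the commands assume a Docker Machine host named “dockerhost” is already configured:

```bash
# Point the local Docker client at the "dockerhost" machine
$ eval $(docker-machine env dockerhost)

# Start a container from the nginx image, giving it a name
$ docker container run -d --name web nginx

# List running containers, then inspect our container's details
$ docker container ls
$ docker container inspect web

# Fork a new process inside the running container
$ docker container exec web ps aux

# Stop the container, then remove it
$ docker container stop web
$ docker container rm web
```

Note that `docker container rm` only works on stopped containers, which is why the stop comes first.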

To do Docker image operations, again, we first make sure we are connected to our “dockerhost” with Docker Machine, so that all the Docker commands are executed on the “dockerhost” running on the DigitalOcean cloud.

The basic commands you need here are similar to those above. With the “docker image ls” command, we can list the images available on our “dockerhost”. Using the “docker image pull” command, we can pull an image from our Docker Registry. And, we can remove an image from the “dockerhost” using the “docker image rm” command.
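The image workflow is shorter; a session sketch (image name illustrative, assuming the same “dockerhost” connection as above) might look like this:

```bash
# List the images available locally on the dockerhost
$ docker image ls

# Pull an image from the registry
$ docker image pull alpine:latest

# Remove the image from the dockerhost
$ docker image rm alpine:latest
```

Removing an image fails if a container (even a stopped one) still references it, so clean up containers first.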

Want to learn more? Access all the free sample chapter videos now! 

This online course is presented almost entirely on video, and the material is prepared and presented by Neependra Khare (@neependra), Founder and Principal Consultant at CloudYuga, Docker Captain, and author of the Docker Cookbook.

ODPi Webinar on DataOps at Scale: Taking Apache Hadoop Enterprise-Wide

2016 was a pivotal year for Apache Hadoop, a year in which enterprises across a variety of industries moved the technology out of PoCs and the lab and into production. Look no further than AtScale’s latest Big Data Maturity survey, in which 73 percent of respondents report running Hadoop in production.

ODPi recently ran a series of its own Twitter polls and found that 41 percent of respondents do not use Hadoop in production, while 41 percent said they do. This split may partly be due to the fact that the concept of “production” Hadoop can be misleading. For instance, pilot deployments and enterprise-wide deployments are both considered “production,” but they are vastly different in terms of DataOps, as Table 1 below illustrates.


Table 1: DataOps Considerations from Lab to Enterprise-wide Production.

As businesses move Apache Hadoop and Big Data out of proofs of concept (PoCs) and into enterprise-wide production, hybrid deployments are the norm and several important considerations must be addressed.

Dive into this topic further on June 28th in a free webinar hosted by John Mertic, Director of ODPi at The Linux Foundation, with Tamara Dull, Director of Emerging Technologies at SAS Institute.

The webinar will discuss ODPi’s recent 2017 Preview: The Year of Enterprise-wide Production Hadoop report and explore DataOps at scale, along with the considerations businesses need to make as they move Apache Hadoop and Big Data out of proofs of concept (PoCs) and into enterprise-wide, hybrid production deployments.

Register for the webinar here.

As a sneak peek to the webinar, we sat down with Mertic to learn a little more about production Hadoop needs.

Why is it that the deployment and management techniques that work in limited production may not scale when you go enterprise wide?

IT policies kick in as you move from Mode 2 IT — which tends to focus on fast-moving, experimental projects such as Hadoop deployments — to Mode 1 IT — which controls stable, enterprise-wide deployments of software. Mode 1 IT has to consider not only enterprise security and access requirements but also data regulations that impact how a tool is used. On top of that, cost and efficiency come into play, as Mode 1 IT is cost conscious.

What are some of the step-change DataOps requirements that come when you take Hadoop into enterprise-wide production? 

Integrating into Mode 1 IT’s existing toolset is the biggest requirement. Mode 1 IT doesn’t want to manage tools it’s not familiar with, nor those it doesn’t feel it can integrate into the management tools the enterprise is already using. The more uniformly Hadoop fits into the existing DevOps patterns, the more successful it will be.

Register for the webinar now.