
Tips on Scaling Open Source in the Cloud

This article was sponsored by Alibaba and written by Linux.com.

After much anticipation, LinuxCon, ContainerCon and Cloud Open China will soon be officially underway. Some of the world’s top technologists and open source leaders are gathering at the China National Convention Center in Beijing. The excitement is building around the discoveries and discussions on Linux, containers, cloud technologies, networking, microservices, and more. Attendees will also exchange insights and tips on how to navigate and lead in the open source community, and what better way than to network in person at LinuxCon China?

To preview how some leading companies are using open source and participating in the open source community, Linux.com interviewed several companies attending the conference. In this segment, Alibaba discusses how to successfully manage scaling open source in the cloud.

We spoke with Hong Tang, chief architect of Alibaba Cloud. Here are some of the insights he shared.

Linux.com: What are some of the advantages of using open source in the cloud?

Hong: I can summarize that in three points for application developers: a shorter learning curve, better security with less hassle, and more resources with increased agility.

First is the shortened learning curve. Developers just want to develop applications when they use open source. They want to focus on their particular application logic and they want to decide what features to develop. They do not want to spend time and effort on managing the physical infrastructure, an aggravation cloud computing eliminates.

Further, developers are aware that many open source products are not easy to set up and configure properly — particularly those running on a distributed set of machines, which makes them much more than a single library you can just link into your application. Managing open source on the cloud lowers the learning curve on those issues for developers.

Also, with so many different open source options available on the cloud, developers can try several and quickly figure out which will work for them. They don’t waste time learning how to set up, configure, and use a package only to discover the software doesn’t deliver what they need. So that’s the first big advantage of using open source in the cloud.

The second thing I think is very important is security. Given the open nature of open source software, everyone can see the source code, so it’s much easier to find the security vulnerabilities in the software. But not all developers are highly focused on security, so sometimes they fall behind in things like applying patches or upgrading to the latest version of the software. This is particularly true when the newer version might not be compatible, because an upgrade then may mean they have to reconfigure everything. The cloud is very helpful with that, since patches and upgrades are automatic.

Also, we have dedicated teams watching for vulnerabilities in all of those open source options, and in commercial software as well. We can manage them and protect them at the perimeter, because that work can be done outside their virtual machines or their cloud instances.

Third, running open source on the cloud combines the advantages of both open source and the cloud. Not everything the developer seeks may be available in open source, or maybe best of breed is offered in something that is not open sourced. By using both cloud and open source, developers don’t have to restrict themselves to what is within the open source software. They can leverage the best of open source with some cloud services that open source does not provide yet. We have plenty of those, by the way.

Those are the three reasons I see for why running open source on the cloud matters.

Linux.com: What are some of the problems you see in scaling open source on the cloud?

Hong: It’s not that there is a direct problem with scaling the adoption of open source on the cloud. We see people using open source and creating applications comfortably on the cloud. We see pretty good growth of open source options on the cloud. But certainly, I think there are a lot of things we can do to help developers to better leverage open source on the cloud. So, I wouldn’t call it a problem but I would say there are things that we can do to unlock the advantages of open source on the cloud.

The first thing is to make open source more manageable. A lot of the things we talked about previously require integrations between open source and the cloud to deliver that increased manageability. Essentially, we want developers to use open source as managed services on the cloud.

Why is that? Well, if they just repeat what they are already doing and simply put their software, including the open source parts, on the cloud, they’ll probably discover there’s not much difference between running their applications in an on-premises environment and running them on the cloud. A lot of people doing this kind of application migration essentially mirror the on-premises environment in a cloud environment, which basically means they didn’t really leverage the advantages of the cloud.

We want to educate developers on how to properly architect applications for the cloud so that they can capture all the benefits.

Linux.com: How does embracing DevOps make a positive difference in scaling properly?

Hong: The key difference between on-premises and cloud environments is that in an on-premises environment, the developer has a fixed set of iron boxes and services, and they want to fit their application pieces into those boxes. Of course, private cloud solutions like VMware or Docker make things a little bit easier, but they still have a fixed physical infrastructure. Basically, what developers do is follow a fixed deployment.

Developers have to think: OK, this application requires, let’s see, how many QPS? How many servers do I need to provision? Further, they think the deployment through and decide the type of servers they want to run this application on, with customizations for memory sizes, faster disks, or faster CPUs. That’s the way they do it: they buy one set of boxes for this application, another set of boxes for other applications, and so on.

On the cloud, it’s different, because there are “unlimited resources” underneath, which means you can get any combination of server specs. If you want high performance, high memory, or high-performing disks, you can get that. And you get only the things you want, with an API call, so there’s no discrepancy between the physical infrastructure provisioned and what runs on top of it. And we provide the pieces to do this. For example, there’s a component called an elastic scaler that can monitor the load on the backend, decide when you need to acquire another server instance for the application, and put a load balancer in front to hide those details.

We now have what’s called serverless computing in the industry. With that, you don’t have to put this process in that box; you don’t have to care where all that processing and storage happens. That’s why it’s called serverless. Open source also provides some of those, like HBase and Cassandra, so you don’t really know, and don’t really need to care, where a piece of data is stored or where the application’s processing is happening. So you can see that by leveraging both open source and cloud services, a developer’s work becomes much easier and faster with this multitude of options.

Also, on the cloud we have resource orchestration. You can choose resources, label them, and with that spin up a test version of your services directly. This is also sometimes called agility. You can test more easily at full scale, rather than against mocks.

All of these capabilities and options bring a different mentality when you write applications targeted for the cloud versus when you write applications for an on-premises environment. If developers take advantage of them, they can save a lot of hassle in reasoning about the scalability of their components or deciding how many resources they need, because they don’t have to worry about it.

The application can simply scale along with the workload.

Linux.com: Any final thoughts?

Hong: I hope to see many of the people reading this at LinuxCon China. We are working hard every day to engage developers, provide them with new tools, and build services they tell us they want and features that we discover by listening to attendees at conferences like this one. See you there!

This article was sponsored by Alibaba Cloud. Alibaba Cloud, Alibaba Group’s cloud computing arm, develops highly scalable platforms for cloud computing and data management. It provides a comprehensive suite of cloud computing services to support participants in Alibaba Group’s online and mobile commerce ecosystem, including sellers and other third-party customers and businesses.

As Open Source and Cloud Converge, Red Hat Expands Partnerships and Training

As open source and cloud computing converge, Red Hat is ramping up the scope of its cloud and DevOps initiatives, including building out its training offerings. If you still think of the company as primarily focused on enterprise Linux, think again. Through partnerships, such as its work with IBM, and acquisitions, such as its intent to purchase Codenvy, the cloud represents a particularly promising frontier for Red Hat. Meanwhile, the company is calling out skills gaps in the DevOps arena.

Betting on the Cloud and Container Future

Red Hat has been deepening its partnership with IBM, helping enterprises integrate Red Hat OpenStack and Ceph with IBM Private Cloud. At IBM’s recent InterConnect conference in Las Vegas, IBM executives said the partnership means that Red Hat customers will be able to extend their Red Hat-based environments into IBM’s public cloud. That, in turn, enables many of them to run the same management and software tools they have on premises while taking advantage of Red Hat’s open source platforms.

It’s worth noting that Red Hat has integrated its open tools with most of the major public cloud platforms now. Its tools are already offered for AWS, Microsoft Azure and Google’s cloud.

Meanwhile, Red Hat has announced its intent to acquire San Francisco-based startup Codenvy, which will give developers options for building out cloud-based development environments. Codenvy is built on the open source project Eclipse Che, which offers a cloud-based Integrated Development Environment (IDE). Red Hat’s openshift.io cloud-based container development service already integrates Codenvy’s Eclipse Che implementation.

In essence, Codenvy has DevOps software that can streamline coding and collaboration environments. According to Red Hat: “[Codenvy’s] workspace approach makes working with containers easier for developers. It removes the need to set up local VMs and Docker instances, enabling developers to create multi-container development environments without ever typing Docker commands or editing Kubernetes files. This is one of the biggest pain points we hear from customers, and we think that this has huge potential for simplifying the developer experience.”

The Bottom Line for the IT and DevOps Community

Recently, several executives from Red Hat participated in a panel discussion focused on skills gaps found in the IT industry. They emphasized that skills gaps are particularly acute in the areas of Big Data, DevOps, containers, microservices, and cloud computing.

With that in mind, Red Hat is expanding its training offerings. The company has partnered with universities to focus on open source-centric training, including Boston University, Rensselaer Polytechnic Institute, Duke University, and the University of Colorado at Boulder. Students at these institutions get the opportunity to work with open source tools and platforms.

In addition, Red Hat offers a number of training and certification options. The company continues to be very focused on OpenStack and has certification options that are worth considering. The company has announced a cloud management certification for Red Hat Enterprise Linux OpenStack Platform as part of the Red Hat OpenStack Cloud Infrastructure Partner Network. (The Linux Foundation also offers an OpenStack Administration Fundamentals course.)

Red Hat also offers educational options for microservices, working with middleware and more. It has announced five new training and certification offerings focused on improving open source and DevOps skills, as follows:

  • Developing Containerized Applications (course and exam);

  • OpenShift Enterprise Administration (course and exam);

  • Cloud Automation with Ansible (course and exam);

  • Managing Docker Containers with RHEL Atomic Host (course and exam); and

  • Configuration Management with Puppet (course and exam).

Ken Goetz, vice president of training at Red Hat, said: “DevOps isn’t a product but rather a culture and process. There are certain technologies and skills someone working in a DevOps environment should have. Our goal with this new RHCA concentration is to offer a way for employers to validate these critical open source skills, and in the process, further enable enterprises to accelerate application delivery.”

“Today, it is almost impossible to name a major player in IT that has not embraced open source,” Red Hat CEO Jim Whitehurst noted in a LinkedIn post. “Open source was initially adopted for low cost and lack of vendor lock-in, but customers have found that it also results in better innovation and more flexibility. Now it is pervasive, and it is challenging proprietary incumbents across technology categories.”

Are you interested in how organizations are bootstrapping their own open source programs internally? You can learn more in the Fundamentals of Professional Open Source Management training course from The Linux Foundation. Download a sample chapter now!

The Evolution of Scalable Microservices

In this article, we will look at microservices, not as a tool to scale the organization, development and release process (even though it’s one of the main reasons for adopting microservices), but from an architecture and design perspective, and put it in its true context: distributed systems. In particular, we will discuss how to leverage Events-first Domain Driven Design and Reactive principles to build scalable microservices, working our way through the evolution of a scalable microservices-based system.

Don’t build microliths

Let’s say that an organization wants to move away from the monolith and adopt a microservices-based architecture. Unfortunately, what many companies end up with is an architecture similar to the following:

Read more at O’Reilly

Serious Privilege Escalation Bug in Unix OSes Imperils Servers Everywhere

“Stack Clash” poses threat to Linux, FreeBSD, OpenBSD, and other OSes.

A raft of Unix-based operating systems—including Linux, OpenBSD, and FreeBSD—contain flaws that let attackers elevate low-level access on a vulnerable computer to unfettered root. Security experts are advising administrators to install patches or take other protective actions as soon as possible.

Stack Clash, as the vulnerability is being called, is most likely to be chained with other vulnerabilities to execute malicious code more effectively, researchers from Qualys, the security firm that discovered the bugs, said in a blog post published Monday. Such local privilege escalation vulnerabilities can also pose a serious threat to server host providers, because one customer can exploit the flaw to gain control over other customers’ processes running on the same server. Qualys said it’s also possible that Stack Clash could be exploited in a way that allows remote code execution directly.

Read more at ArsTechnica

What Is IT Culture? Today’s Leaders Need to Know

“Culture” is a pretty ambiguous word. Sure, reams of social science research explore exactly what “culture” is, but to the average Joe and Josephine the word really means something different than it does to academics. In most scenarios, “culture” seems to map more closely to something like “the set of social norms and expectations in a group of people.” By extension, then, an “IT culture” is simply “the set of social norms and expectations pertinent to a group of people working in an IT organization.”

I suspect most people see themselves as somewhat passive contributors to this thing called “culture.” Sure, we know we can all contribute to cultural change, but I don’t think most people actually feel particularly empowered to make this kind of meaningful change. On top of that, we can also observe significant changes in cultural norms that depend on variables like time and geography. 

Read more at OpenSource.com

Hello Whale: Getting Started with Docker & Flask

When it comes to learning, I tend to retain info best by doing it myself (and failing many times in the process), and then writing a blog about it. So, surprise: I decided to create a blog explaining how you can get a Flask app up and running with Docker! Doing this on my own helped connect the dots when it came to Docker, so I hope it helps you as well. 

You can follow along with my repo here:

https://github.com/ChloeCodesThings/chloe_flask_docker_demo

First, I created a simple Flask application. I started by making a parent directory and naming it chloes_flask_demo.
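A minimal version of that project skeleton might look like the following. The file contents here are my own sketch under the article’s naming, not necessarily identical to the linked repo:

```shell
# Create the parent directory and move into it
mkdir -p chloes_flask_demo
cd chloes_flask_demo

# A minimal Flask application
cat > app.py <<'EOF'
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello_whale():
    return "Hello Whale!"

if __name__ == "__main__":
    # Bind to 0.0.0.0 so the app is reachable from outside a container
    app.run(host="0.0.0.0", port=5000)
EOF

# The single dependency needed for the image build
echo "flask" > requirements.txt

# A minimal Dockerfile for the app
cat > Dockerfile <<'EOF'
FROM python:3-alpine
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "app.py"]
EOF

# With a Docker daemon available, you would then build and run with:
#   docker build -t chloe_flask_demo .
#   docker run -p 5000:5000 chloe_flask_demo
```

The Dockerfile copies and installs `requirements.txt` before copying the rest of the source, so code changes don’t invalidate the cached dependency layer on rebuilds.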

Read more at Codefresh.io

What Is GraphQL and Why Should You Care? The Future of APIs

“We’re going GraphQL, we’re replacing everything with GraphQL”  — Sid Sijbrandij, GitLab founder and CEO

GraphQL is an open source technology created by Facebook that is getting a fair bit of attention of late. It is set to make a major impact on how APIs are designed.

As is so often the case with these things, it’s not terribly well named. It sounds like a general purpose query language for graph traversal, am I right? Something like Cypher.

It isn’t. The name is a little deceptive. GraphQL is about graphs only if you see everything as graphs. Reading the excellent, crisp docs, GraphQL is primarily about designing your APIs more effectively and being more specific about access to your data sources.

Read more at RedMonk

Productivity or Efficiency: What Really Matters?

Efficiency is a quality many companies and employees are proud to tout. From making 2,000 widgets a day to processing several dozen emails within an hour, being efficient is a badge of honor in the working world.

The benefit of efficiency is that it can be relatively easy to measure. As management expert Peter Drucker once said, “If you can’t measure it, you can’t manage it.” So finding something you can measure – whether it’s email messages or widgets – makes it easier to improve your efficiency by producing more output while using less money, less time, or both.

The problem is that focusing on efficiency to the exclusion of everything else can mean you’re focusing on the wrong things. Is it useful to generate more email messages if people aren’t clicking on them? Is it a good use of your time to write more and bigger reports if people don’t read them?

Read more at Laserfiche

Open Source Summit Brings Diverse Voices to Keynote Lineup

As Jim Zemlin announced at last year’s LinuxCon in Toronto, the event is now called Open Source Summit. It combines the LinuxCon, ContainerCon, and CloudOpen conferences along with two new ones: the Open Community Conference and the Diversity Empowerment Summit. This year, the Open Source Summit will take place September 11-14 in Los Angeles, CA.

Traditionally, the event starts off with a keynote by Zemlin in which he gives an overview of the state of Linux and open source. And one highlight of the schedule is always a keynote discussion between Zemlin and Linus Torvalds, creator of Linux and Git.

This year, attendees will also get to hear Tanmay Bakshi, a 13-year-old Algorithm-ist and Cognitive Developer, Author and TEDx Speaker, as part of the keynote lineup, which also includes:

  • Bindi Belanger, Executive Program Director, Ticketmaster

  • Christine Corbett Moran, NSF Astronomy and Astrophysics Postdoctoral Fellow, CALTECH

  • Dan Lyons, FORTUNE columnist and Bestselling Author of “Disrupted: My Misadventure in the Startup Bubble”

  • Jono Bacon, Community Manager, Author, Podcaster

  • Nir Eyal, Behavioral Designer and Bestselling Author of “Hooked: How to Build Habit Forming Products”

  • Ross Mauri, General Manager, IBM z Systems & LinuxONE, IBM

  • Zeynep Tufekci, Professor, New York Times Writer, Author and Technosociologist

As one of the biggest open source events, the summit attracts more than 2,000 developers, operators, and community leadership professionals to collaborate, share information, and learn about the latest in open technologies, including Linux, containers, cloud computing, and more.

Top 5 reasons to attend Open Source Summit

Diversity: Open Source Summit strives to bring more diverse voices from the community and enterprise world. And, the new Diversity Empowerment Summit expands that goal by facilitating an increase in diversity and inclusion and providing a venue for discussion and collaboration. 

Cross-pollination: Open Source Summit brings together many different events, representing different projects, under the same umbrella. This allows for cross-pollination of ideas among different communities that are part of a much larger open source ecosystem.

Care for family: Open Source Summit is the only tech event where you can bring your entire family including kids. The reason is simple — the organizers offer childcare at the venue which allows parents to participate in the event without having to worry about childcare.  

Awesome activities: Angela Brown, Vice President of Events at The Linux Foundation, not only knows how to plan top-notch events, she also knows how to throw parties. The New Orleans LinuxCon, for example, hosted a Mardi Gras parade and a dinner with live jazz music. Chicago featured an event on the top floor of the Ritz hotel and a reception at the Museum of Science and Industry. Seattle included the Space Needle and the Chihuly Garden and Glass museum. The Toronto event took guests to Muzik, where they “gambled” and celebrated 25 years of Linux.

Great opportunity for networking: Open Source Summit is a great mix of attendees. You get to meet with leading developers, founders, community members, CEOs, CTOs, technologists, and users. As exciting as the sessions are, the real value of OSS is the hallway tracks where you connect and reconnect with friends and colleagues. You come back from OSS with more contacts, more friends, new perspectives, and good memories.

Register now at the discounted rate of $800 through June 24. Academic and hobbyist rates are also available. Applications are also being accepted for diversity and needs-based scholarships.

Basic Commands for Performing Docker Container Operations

In this series, we’re sharing a preview of the new self-paced Containers for Developers and Quality Assurance (LFS254) training course from The Linux Foundation. In earlier articles, we looked at installing Docker and setting up your environment, and we introduced Docker Machine. Now we’ll take a look at some basic commands for performing Docker container and image operations. Watch the videos below for more details.

To do container operations, we’ll first connect to our “dockerhost” with Docker Machine. Once connected, we can start the container in the interactive mode and explore processes inside the container.

For example, the “docker container ls” command lists the running containers. With the “docker container inspect” command, we can inspect an individual container. Or, with the “docker container exec” command, we can fork a new process inside an already running container and do some operations. We can use the “docker container stop” command to stop a container and then remove a stopped container using the “docker container rm” command.
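The container operations above can be sketched as a short shell session. The “web” container name and the nginx image are illustrative choices of mine, and a running Docker daemon (here, the “dockerhost” machine) is assumed:

```shell
# Point the local Docker client at the "dockerhost" created with Docker Machine
eval "$(docker-machine env dockerhost)"

# Start a container in detached mode so there is something to operate on
docker container run -d --name web nginx

# List the running containers
docker container ls

# Inspect an individual container (its full configuration, as JSON)
docker container inspect web

# Fork a new process inside the already running container
docker container exec web ps aux

# Stop the container, then remove the stopped container
docker container stop web
docker container rm web
```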

To do Docker image operations, again, we first make sure we are connected to our “dockerhost” with Docker Machine, so that all the Docker commands are executed on the “dockerhost” running on the DigitalOcean cloud.

The basic commands you need here are similar to above. With the “docker image ls” command, we can list the images available on our “dockerhost”. Using the “docker image pull” command, we can pull an image from our Docker Registry. And, we can remove an image from the “dockerhost” using the “docker image rm” command.
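The image operations above look like this as a shell session; the alpine image is just an example of mine, and the same “dockerhost” connection is assumed:

```shell
# Make sure the client is talking to the "dockerhost"
eval "$(docker-machine env dockerhost)"

# List the images available on the Docker host
docker image ls

# Pull an image from the Docker Registry
docker image pull alpine

# Remove an image from the host
docker image rm alpine
```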

Want to learn more? Access all the free sample chapter videos now! 

This online course is presented almost entirely on video, and the material is prepared and presented by Neependra Khare (@neependra), Founder and Principal Consultant at CloudYuga, Docker Captain, and author of the Docker Cookbook.