
Make your First Contribution to an Open Source Project

In this article, I’ll provide a checklist of beginner-friendly features and some tips to make your first open source contribution easy.

Understand the product

Before contributing to a project, you should understand how it works. To understand it, you need to try it for yourself. If you find the product interesting and useful, it is worth contributing to.

Too often, beginners try to contribute to a project without first using the software. They then get frustrated and give up. If you don’t use the software, you can’t understand how it works. If you don’t know how it works, how can you fix a bug or write a new feature?

Read more at OpenSource.com

How to Install Rancher Docker Container Manager on Ubuntu

If you’re looking to take your container management to the next level, the Rancher Docker Container Manager might be just what you need. Jack Wallen shows you how to get this up and running.

You’ve been working with containers for some time now—maybe you’re using docker commands to manage and deploy those containers. You’re not ready to migrate to Kubernetes (and Docker has been treating you well), but you’d like to make use of a handy web-based management tool to make your container life a bit easier. Where do you turn?

There are a number of options available, one of which is the Rancher Docker Container Manager. This particular tool should be of interest, especially considering it supports Kubernetes and can deploy and manage full stacks, so when you’re ready to make the jump, your tools are also ready.

But how do you get the Rancher Docker Container Manager (RDCM) up and running? The easiest way is (with a nod to irony) via Docker itself. I’m going to show you how to deploy a container for the RDCM quickly and easily. Once deployed, you can then log into the system, via web browser, and manage your containers.
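The TechRepublic article walks through the full deployment; as a rough sketch of the approach (the image name, tag, and port mapping here are assumptions that vary by Rancher version — 1.6 shipped as rancher/server on port 8080, while 2.x uses rancher/rancher on 80/443), a single docker run is enough to bring the server up:

```shell
# Hedged sketch: run the Rancher server itself as a Docker container.
# Image and ports are version-dependent; check the article or Rancher
# docs for the invocation matching your release.
sudo docker run -d --restart=unless-stopped \
  -p 8080:8080 \
  --name rancher-server \
  rancher/server:stable

# Once the container reports healthy, point a browser at
# http://<host-ip>:8080 to reach the web UI.
```

The --restart=unless-stopped flag matters here: it lets the management UI survive Docker daemon restarts without manual intervention.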

Read more at TechRepublic

Anaconda, CPython, PyPy, and more: Know Your Python Distributions

When you choose Python for software development, you choose a large language ecosystem with a wealth of packages covering all manner of programming needs. But in addition to libraries for everything from GUI development to machine learning, you can also choose from a number of Python runtimes—and some of these runtimes may be better suited to the use case you have at hand than others.

Here is a brief tour of the most commonly used Python distributions, from the standard implementation (CPython) to versions optimized for speed (PyPy), for special use cases (Anaconda, ActivePython), or for runtimes originally designed for entirely different languages (Jython, IronPython).
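When several of these runtimes are installed side by side, a quick way to see which one a given interpreter actually is — before reading the full tour — is to ask the interpreter itself:

```shell
# Print the implementation name (CPython, PyPy, etc.) and version of
# whatever `python3` resolves to on PATH. Substitute another binary
# (e.g. pypy3) to inspect a different runtime.
python3 -c 'import platform; print(platform.python_implementation(), platform.python_version())'
```

This uses only the standard-library platform module, so it works identically across all the distributions discussed.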

Read more at InfoWorld

Microservices Explained

Microservices is not a new term. Like containers, the concept has been around for a while, but it has become a buzzword recently as many companies embark on their cloud-native journey. But what exactly does the term microservices mean? Who should care about it? In this article, we’ll take a deep dive into the microservices architecture.

Evolution of microservices

Patrick Chanezon, Chief Developer Advocate for Docker, provided a brief history lesson during our conversation: In the late 1990s, developers started to structure their applications as monoliths, where massive apps had all features and functionality baked into them. Monoliths were easy to write and manage. Companies could have a team of developers who built their applications based on customer feedback gathered through sales and marketing teams. The entire developer team would work together to build tightly glued pieces as an app that could run on their own app servers. It was a popular way of writing and delivering web applications.

There is a flip side to the monolithic coin. Monoliths slow everything and everyone down. It’s not easy to update one service or feature of the application. The entire app needs to be updated and a new version released. It takes time. There is a direct impact on businesses. Organizations could not respond quickly to keep up with new trends and changing market dynamics. Additionally, scalability was challenging.

Around 2011, SOA (Service Oriented Architecture) became popular; it let developers package multi-tier web applications as software services inside a VM (virtual machine). It did allow them to add or update services independently of each other. However, scalability remained a problem.

“The scale out strategy then was to deploy multiple copies of the virtual machine behind a load balancer. The problems with this model are several. Your services can not scale or be upgraded independently as the VM is your lowest granularity for scale. VMs are bulky as they carry extra weight of an operating system, so you need to be careful about simply deploying multiple copies of VMs for scaling,” said Madhura Maskasky, co-founder and VP of Product at Platform9.

Some five years ago when Docker hit the scene and containers became popular, SOA faded out in favor of “microservices” architecture.  “Containers and microservices fix a lot of these problems. Containers enable deployment of microservices that are focused and independent, as containers are lightweight. The Microservices paradigm, combined with a powerful framework with native support for the paradigm, enables easy deployment of independent services as one or more containers as well as easy scale out and upgrade of these,” said Maskasky.

What are microservices?

Basically, a microservice architecture is a way of structuring applications. With the rise of containers, people have started to break monoliths into microservices. “The idea is that you are building your application as a set of loosely coupled services that can be updated and scaled separately under the container infrastructure,” said Chanezon.
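As a concrete illustration of “updated and scaled separately” (the service names below are invented for the example, and the commands assume a Docker Swarm-style orchestrator rather than anything specific from the article), an orchestrator lets you operate on one service without touching the others:

```shell
# Hypothetical app split into web, auth, and billing services,
# each running from its own container image.

# Scale only the web tier; auth and billing are untouched.
docker service scale web=5

# Roll out a new image for one service while the rest keep serving traffic.
docker service update --image example/auth:2.1 auth
```

This per-service granularity is exactly what a monolith or a VM-packaged SOA stack cannot offer, since there the whole unit must be redeployed together.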

“Microservices seem to have evolved from the more strictly defined service-oriented architecture (SOA), which in turn can be seen as an expression of object-oriented programming concepts for networked applications. Some would call it just a rebranding of SOA, but the term “microservices” often implies the use of even smaller functional components than SOA, RESTful APIs exchanging JSON, lighter-weight servers (often containerized), and modern web technologies and protocols,” said Troy Topnik, SUSE Senior Product Manager, Cloud Application Platform.

Microservices provide a way to scale the development and delivery of large, complex applications by breaking them down into smaller components that can evolve independently of each other.

“Microservices architecture brings more flexibility through the independence of services, enabling organizations to become more agile in how they deliver new business capabilities or respond to changing market conditions. Microservices allows for using the ‘right tool for the right task’, meaning that apps can be developed and delivered by the technology that will be best for the task, rather than being locked into a single technology, runtime or framework,” said Christian Posta, senior principal application platform specialist, Red Hat.

Who consumes microservices?

“The main consumers of microservices architecture patterns are developers and application architects,” said Topnik. As far as admins and DevOps engineers are concerned, their role is to build and maintain the infrastructure and processes that support microservices.

“Developers have been building their applications traditionally using various design patterns for efficient scale out, high availability and lifecycle management of their applications. Microservices done along with the right orchestration framework help simplify their lives by providing a lot of these features out of the box. A well-designed application built using microservices will showcase its benefits to the customers by being easy to scale, upgrade, debug, but without exposing the end customer to complex details of the microservices architecture,” said Maskasky.

Who needs microservices?

Everyone. Microservices is the modern approach to writing and deploying applications more efficiently. If an organization cares about writing and deploying its services at a faster rate, it should care about microservices. If you want to stay ahead of your competitors, microservices is the fastest route. Security is another major benefit of the microservices architecture, as this approach allows developers to ship security and bug fixes without having to worry about downtime.

“Application developers have always known that they should build their applications in a modular and flexible way, but now that enough of them are actually doing this, those that don’t risk being left behind by their competitors,” said Topnik.

If you are building a new application, you should design it as microservices. You never have to hold up a release because one team is late: new functionality ships when it’s ready, and a failure in one service need not take down the whole system.

“We see customers using this as an opportunity to also fix other problems around their application deployment — such as end-to-end security, better observability, deployment and upgrade issues,” said Maskasky.

Failing to do so leaves you stuck with a traditional stack that microservices can’t add much value to. If you are building new applications, microservices is the way to go.

Learn more about cloud-native at KubeCon + CloudNativeCon Europe, coming up May 2-4 in Copenhagen, Denmark.

DevOps Success: Why Continuous Is a Key Word

When implementing DevOps initiatives, the word “continuous” is the key to success. Most Agile schemes today incorporate concepts and strategies that can – and should – be implemented at all times throughout the SDLC. The most important to recognize throughout your team’s development cycle are Continuous Integration (CI), Continuous Testing (CT) and Continuous Delivery (CD). 

Often, I hear of dev teams wondering which “continuous” deployment model should be used – if at all – and when. Typically, familiar with CI and CD, they’ll pair those two off, while completely separating them from CT. While each serves different purposes, and addresses different aspects throughout the SDLC, all three can – and should – integrate to assure quality, while also maintaining velocity. 

Read more at Enterprisers Project

Making Cloud-Native Computing Universal and Sustainable

The original seed project for CNCF was Kubernetes, as orchestration is a critical piece of moving toward a cloud-native infrastructure. As many people know, Kubernetes is one of the highest-velocity open source projects of all time and is sometimes affectionately referred to as “Linux of the Cloud.” Kubernetes has become the de facto orchestration system, with more than 50 certified Kubernetes solutions and supported by the top cloud providers in the world. Furthermore, CNCF is the first open source foundation to count the top 10 cloud providers in the world as members.

However, CNCF is intended to be more than just a home for Kubernetes, as the cloud-native and open infrastructure movement encompasses more than just orchestration.

A community of open infrastructure projects

CNCF has a community of independently governed projects; as of today, there are 18 covering all parts of cloud native. For example, Prometheus integrates beautifully with Kubernetes but also brings modern monitoring practices to environments outside of cloud-native land.

Read more at OpenSource.com

This Week in Numbers: Chinese Adoption of Kubernetes

Chinese developers are, in general, less far along in their production deployment of containers and Kubernetes, according to our reading of data from a Mandarin-translated version of a Cloud Native Computing Foundation survey.

For example, 44 percent of the Mandarin respondents were using Kubernetes to manage containers while the figure jumped to 77 percent amongst the English sample. They are also much more likely to deploy containers to Alibaba Cloud and OpenStack cloud providers, compared to the English survey respondents. The Mandarin respondents were also twice as likely to cite reliability as a challenge. A full write-up of these findings can be found in the post “China vs. the World: A Kubernetes and Container Perspective.”

It is noteworthy that 46 percent of Mandarin-speaking respondents are challenged in choosing an orchestration solution, which is 20 percentage points more than the rest of the study. 

Read more at The New Stack

How Many Linux Users Are There Anyway?

True, desktop Linux has never taken off. But, even so, Linux has millions of desktop users. Don’t believe me? Let’s look at the numbers.

There are over 250 million PCs sold every year. Of all the PCs connected to the internet, NetMarketShare reports 1.84 percent were running Linux. Chrome OS, which is a Linux variant, has 0.29 percent. Late last year, NetMarketShare admitted it had been overestimating the number of Linux desktops, but it has since corrected its analysis.

Read more at ZDNet

Weekend Reading: Sysadmin 101

This series covers sysadmin basics. The first article explains how to approach alerting and on-call rotations as a sysadmin. In the second article, I discuss how to automate yourself out of a job, and in the third, I explain why and how you should use tickets. The fourth article covers some of the fundamentals of patch management under Linux, and the fifth and final article describes the overall sysadmin career path and the attributes that might make you a “senior sysadmin” instead of a “sysadmin” or “junior sysadmin”, along with some tips on how to level up.

Sysadmin 101: Alerting

In this first article, I cover on-call alerting. Like with any job title, the responsibilities given to sysadmins, DevOps and Site Reliability Engineers may differ, and in some cases, they may not involve any kind of 24×7 on-call duties, if you’re lucky. For everyone else, though, there are many ways to organize on-call alerting, and there also are many ways to shoot yourself in the foot.

Sysadmin 101: Automation

Here we cover systems administrator fundamentals. 

Read more at Linux Journal

Best Linux Distributions: Find One That’s Right for You

The landscape of Linux is vast and varied.  And, if you’re considering migrating to the open source platform or just thinking about trying a new distribution, you’ll find a world of possibilities.

Luckily, Jack Wallen has reviewed many different Linux distributions over the years in order to make your life easier. Recently, he has compiled several lists of distributions to consider based on your starting point. If you’re brand-new to Linux, for example, check out his list of distributions that work right out of the box — no muss, no fuss.

Jack also has lists of the best distros for developers, distros that won’t break the back of your old hardware, specialized distros for scientific and medical fields, and more. Check out Jack’s picks and see if one of these distributions is right for you.

Best Linux Distributions for 2018

To further simplify things, Jack breaks down his best distro choices in this article into the following categories: sysadmin, lightweight, desktop, distro with more to prove, IoT, and server. These categories should cover the needs of just about any Linux user.

Top 3 Linux Distros that “Just Work”

In this article, Jack highlights three distributions for anyone to use, without having to put in a lot of extra time for configuration or problem solving.

5 Best Linux Distros for Development

Here Jack shares his picks for development efforts. Although each of these five distributions can be used for general development (with maybe one exception), they each serve a specific purpose.

4 Best Linux Distros for Old Hardware

Jack looks at four distributions that will make your aging machines relevant again.

4 Distros Serving Scientific and Medical Communities

These four specialized distributions are tailored to the needs of scientific and medical communities.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.