
Making Cloud-Native Computing Universal and Sustainable

The original seed project for CNCF was Kubernetes, as orchestration is a critical piece of moving toward a cloud-native infrastructure. As many people know, Kubernetes is one of the highest-velocity open source projects of all time and is sometimes affectionately referred to as the “Linux of the Cloud.” Kubernetes has become the de facto orchestration system, with more than 50 certified Kubernetes solutions and support from the top cloud providers in the world. Furthermore, CNCF is the first open source foundation to count the top 10 cloud providers as members.

However, CNCF is intended to be more than just a home for Kubernetes, as the cloud-native and open infrastructure movement encompasses more than just orchestration.

A community of open infrastructure projects

CNCF has a community of independently governed projects; as of today, there are 18 covering all parts of cloud native. For example, Prometheus integrates beautifully with Kubernetes but also brings modern monitoring practices to environments outside of cloud-native land.
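Prometheus’s pull-based monitoring is easy to try outside of Kubernetes, too. As a rough illustration (my own sketch, not from the article), the snippet below instruments a plain Python service with the official prometheus_client library so a Prometheus server could scrape it; the metric names, port, and simulated workload are arbitrary choices.

```python
# Minimal sketch: exposing Prometheus metrics from a plain Python service,
# showing how Prometheus-style monitoring works outside cloud-native land too.
# Assumes the official `prometheus_client` package (pip install prometheus_client).
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

# A counter only goes up; a histogram records observed durations.
REQUESTS = Counter("demo_requests_total", "Total requests handled")
LATENCY = Histogram("demo_request_seconds", "Time spent handling a request")

@LATENCY.time()            # record how long each call takes
def handle_request():
    REQUESTS.inc()         # count every request
    time.sleep(random.uniform(0.01, 0.1))  # simulated work

if __name__ == "__main__":
    start_http_server(8000)  # metrics exposed at http://localhost:8000/metrics
    while True:
        handle_request()
```

A Prometheus server would then scrape the /metrics endpoint on a schedule, whether the process runs in a Kubernetes pod or on a bare VM.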

Read more at OpenSource.com

This Week in Numbers: Chinese Adoption of Kubernetes

Chinese developers are, in general, not as far along in their production deployment of containers and Kubernetes, according to our reading of data from a Mandarin-translated version of a Cloud Native Computing Foundation survey.

For example, 44 percent of the Mandarin respondents were using Kubernetes to manage containers while the figure jumped to 77 percent amongst the English sample. They are also much more likely to deploy containers to Alibaba Cloud and OpenStack cloud providers, compared to the English survey respondents. The Mandarin respondents were also twice as likely to cite reliability as a challenge. A full write-up of these findings can be found in the post “China vs. the World: A Kubernetes and Container Perspective.”

It is noteworthy that 46 percent of Mandarin-speaking respondents cite choosing an orchestration solution as a challenge, which is 20 percentage points more than the rest of the study.

Read more at The New Stack

How Many Linux Users Are There Anyway?

True, desktop Linux has never taken off. But, even so, Linux has millions of desktop users. Don’t believe me? Let’s look at the numbers.

There are over 250 million PCs sold every year. Of all the PCs connected to the internet, NetMarketShare reports that 1.84 percent were running Linux; Chrome OS, which is a Linux variant, accounts for another 0.29 percent. Even 1.84 percent of the roughly 250 million PCs sold in a single year works out to more than 4 million machines, and the installed base of internet-connected PCs is far larger than one year’s sales. Late last year, NetMarketShare admitted it had been overestimating the number of Linux desktops, but it has since corrected its analysis.

Read more at ZDNet

Weekend Reading: Sysadmin 101

This series covers sysadmin basics. The first article explains how to approach alerting and on-call rotations as a sysadmin. In the second article, I discuss how to automate yourself out of a job, and in the third, I explain why and how you should use tickets. The fourth article covers some of the fundamentals of patch management under Linux, and the fifth and final article describes the overall sysadmin career path and the attributes that might make you a “senior sysadmin” instead of a “sysadmin” or “junior sysadmin”, along with some tips on how to level up.

Sysadmin 101: Alerting

In this first article, I cover on-call alerting. As with any job title, the responsibilities given to sysadmins, DevOps and Site Reliability Engineers may differ, and in some cases, they may not involve any kind of 24×7 on-call duties, if you’re lucky. For everyone else, though, there are many ways to organize on-call alerting, and there also are many ways to shoot yourself in the foot.

Sysadmin 101: Automation

This second installment covers automation fundamentals for systems administrators: how to automate yourself out of a job.

Read more at Linux Journal

Best Linux Distributions: Find One That’s Right for You

The landscape of Linux is vast and varied.  And, if you’re considering migrating to the open source platform or just thinking about trying a new distribution, you’ll find a world of possibilities.

Luckily, Jack Wallen has reviewed many different Linux distributions over the years in order to make your life easier. Recently, he has compiled several lists of distributions to consider based on your starting point. If you’re brand-new to Linux, for example, check out his list of distributions that work right out of the box — no muss, no fuss.

Jack also has lists of the best distros for developers, distros that won’t break the back of your old hardware, specialized distros for scientific and medical fields, and more. Check out Jack’s picks and see if one of these distributions is right for you.

Best Linux Distributions for 2018

To further simplify things, Jack breaks down his best distro choices in this article into the following categories: sysadmin, lightweight, desktop, distro with more to prove, IoT, and server. These categories should cover the needs of just about any Linux user.

Top 3 Linux Distros that “Just Work”

In this article, Jack highlights three distributions for anyone to use, without having to put in a lot of extra time for configuration or problem solving.

5 Best Linux Distros for Development

Here Jack shares his picks for development efforts. Although each of these five distributions can be used for general development (with maybe one exception), they each serve a specific purpose.

4 Best Linux Distros for Old Hardware

Jack looks at four distributions that will make your aging machines relevant again.

4 Distros Serving Scientific and Medical Communities

These four specialized distributions are tailored to the needs of scientific and medical communities.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

Software Security Is a Shared Responsibility

Achieving effective security takes constant discipline and effort on everyone’s part – not just one team or group within a company. That was Mårten Mickos’s message in his keynote speech appropriately titled, “Security is Everyone’s Responsibility,” at The Linux Foundation’s recent Open Source Leadership Summit (OSLS).  

Mickos, CEO of HackerOne, which he described as a “hacker-powered security company,” told the audience that $100 billion has been spent on cybersecurity, yet, “Half of the money is wasted. We’ve been buying hardware and software and machines and walls and all kinds of stuff thinking that that technology and [those] products will make us secure. But that’s not true.”

Read more at The Linux Foundation

Understanding Linux Filesystems: ext4 and Beyond

The majority of modern Linux distributions default to the ext4 filesystem, just as previous Linux distributions defaulted to ext3, ext2, and—if you go back far enough—ext.

If you’re new to Linux—or to filesystems—you might wonder what ext4 brings to the table that ext3 didn’t. You might also wonder whether ext4 is still in active development at all, given the flurries of news coverage of alternate filesystems such as btrfs, xfs, and zfs.

We can’t cover everything about filesystems in a single article, but we’ll try to bring you up to speed on the history of Linux’s default filesystem, where it stands, and what to look forward to.

I drew heavily on Wikipedia’s various ext filesystem articles, kernel.org’s wiki entries on ext4, and my own experiences while preparing this overview.
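If you’re curious which filesystem your own distribution actually put you on, one quick way to check is to read /proc/mounts. The short Python sketch below is my own illustration, not from the article; it simply prints the filesystem type behind each mounted path.

```python
# Rough sketch (not from the article): list which filesystem backs each mount
# on a Linux box by reading /proc/mounts. Each line has the fields:
# device, mount point, filesystem type, options, dump, pass.
def mounted_filesystems(path="/proc/mounts"):
    with open(path) as mounts:
        for line in mounts:
            device, mountpoint, fstype, *_ = line.split()
            yield mountpoint, fstype

if __name__ == "__main__":
    for mountpoint, fstype in mounted_filesystems():
        if fstype in ("ext2", "ext3", "ext4", "btrfs", "xfs", "zfs"):
            print(f"{mountpoint} is on {fstype}")
```

On most current distributions, the root mount will report ext4, which is exactly the default the article discusses.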

Read more at OpenSource.com

Software-Defined Networking Is Harmonizing for Networking’s Future

Heather Kirksey held up her smartphone. “How often do you stare at your smartphone? How often do you use the Internet on your phone?” asked the vice president of network functions virtualization (NFV) and director at the Open Platform for NFV (OPNFV), speaking at the Open Networking Summit. “That’s why you have to care about open source networking. We are transforming the global telecommunications infrastructure.”

Perhaps you still think of networking in terms of hardware infrastructure: the Wi-Fi router in your office, the cables hiding in the plenum, or the Internet backbone cable that a backhoe just ruined. However, tomorrow’s networks will be built from open source software-defined networks (SDNs) running on a wide range of hardware, including designs from the open source Open Compute Project (OCP).

SDN and NFV started with OpenFlow in 2011. OpenFlow was based on a simple idea: to “exploit the fact that most modern Ethernet switches and routers contain flow tables (typically built from TCAMs) that run at line rate to implement firewalls, network address translation, quality of service, and to collect statistics.” With that architecture, you could create what they called “programmable networks.”

Since then, several open source projects have built on this basic idea of using software, instead of custom hardware, for networking needs. Developers, vendors, and customers are all moving forward with SDN, NFV, and related programs as fast as they can.
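To make the “programmable networks” idea concrete, here is a small conceptual sketch of an OpenFlow-style match/action flow table. It is my own illustration in plain Python, not the OpenFlow protocol or any vendor API; the header fields, priorities, and actions are made up for the example.

```python
# Conceptual sketch of the match/action idea behind OpenFlow-style flow tables.
# This models the concept in plain Python; it is not the OpenFlow wire protocol.
from dataclasses import dataclass, field

@dataclass
class FlowRule:
    match: dict           # header fields that must match, e.g. {"dst_port": 80}
    actions: list         # what to do with a matching packet
    priority: int = 0

@dataclass
class FlowTable:
    rules: list = field(default_factory=list)

    def lookup(self, packet: dict) -> list:
        # Highest-priority rule whose match fields all agree with the packet wins.
        for rule in sorted(self.rules, key=lambda r: r.priority, reverse=True):
            if all(packet.get(k) == v for k, v in rule.match.items()):
                return rule.actions
        return ["drop"]   # table-miss behaviour chosen by the controller

# A "controller" programs the switch by installing rules:
table = FlowTable()
table.rules.append(FlowRule(match={"dst_port": 80}, actions=["forward:web"], priority=10))
table.rules.append(FlowRule(match={"src_ip": "10.0.0.5"}, actions=["drop"], priority=20))

print(table.lookup({"src_ip": "10.0.0.9", "dst_port": 80}))  # ['forward:web']
print(table.lookup({"src_ip": "10.0.0.5", "dst_port": 22}))  # ['drop']
```

The point of the architecture is that the rules live in software and can be rewritten by a controller at any time, rather than being baked into custom hardware.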

Read more at HPE

Linux All-in-One: Slimbook Curve Comes with Your Distro of Choice Pre-Installed

Spanish computer maker Slimbook has unveiled the Slimbook Curve, an all-in-one with a 24-inch curved screen made for GNU/Linux.

The all-in-one is available with either Intel Core i5 or Core i7 CPUs and can be configured with 8GB or 16GB of DDR4 memory, with a 32GB option coming soon.

The Curve 24 is Slimbook’s first all-in-one PC, and it shares some hardware specs with the recently released KDE Slimbook II, which offers a few improvements on the original KDE Slimbook from 2017.

Read more at ZDNet

Automated Rollback of Helm Releases Based on Logs or Metrics

Continuous delivery is becoming a standard; if you implement the right process, you get predictable deployments. When a change is made in the code, the build, test, deploy, and monitor steps usually follow. This is the foundation for anyone looking to automate their release process.

If a failure is detected during the monitoring phase, then an operator has to verify the problem and roll back the failing release to the previous known working state. This process is time consuming and not always reliable, since it requires someone to keep an eye on the monitoring dashboard and react to it.

If the team is well structured and applies the DevOps way of working, then there will be someone on duty who receives an alert when something goes wrong. Alerts are triggered based on metrics, but still, after receiving the alert, the person on duty has to turn on their laptop (if not on-site), take a look at the graph, think for a moment, realise that the issue is coming from the last release, and decide whether or not to roll back.
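As a rough illustration of what automating that decision could look like (my own sketch, not the tooling described in the article), the script below queries Prometheus for an error-rate metric after a deploy and runs `helm rollback` if it crosses a threshold. The release name, query, Prometheus address, and threshold are hypothetical; the Prometheus instant-query endpoint (/api/v1/query) and the `helm rollback` command are standard.

```python
# Hypothetical sketch: query a metric after a deploy and roll the Helm release
# back if it looks unhealthy. Release name, query, and threshold are examples.
import json
import subprocess
import urllib.parse
import urllib.request

PROMETHEUS = "http://prometheus.example.local:9090"          # assumed address
RELEASE = "my-app"                                            # hypothetical release
QUERY = 'sum(rate(http_requests_total{status=~"5.."}[5m]))'  # example error-rate query
THRESHOLD = 1.0                                               # errors/sec considered failing

def error_rate() -> float:
    url = f"{PROMETHEUS}/api/v1/query?" + urllib.parse.urlencode({"query": QUERY})
    with urllib.request.urlopen(url) as resp:
        result = json.load(resp)["data"]["result"]
    return float(result[0]["value"][1]) if result else 0.0

def rollback(release: str) -> None:
    # `helm rollback RELEASE` with no revision returns to the previous revision.
    subprocess.run(["helm", "rollback", release], check=True)

if __name__ == "__main__":
    rate = error_rate()
    if rate > THRESHOLD:
        print(f"error rate {rate:.2f}/s above threshold, rolling back {RELEASE}")
        rollback(RELEASE)
    else:
        print(f"error rate {rate:.2f}/s looks healthy, keeping the release")
```

Run on a schedule, or as a post-deploy step in the pipeline, this is the kind of check that removes the human from the “stare at the dashboard and decide” loop.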

Read more at Container Solutions