
How Linux Became My Job

IBM engineer Phil Estes shares how his Linux hobby led him to become an open source leader, contributor, and maintainer.

I’ve been using open source since what seems like prehistoric times. Back then, there was nothing called social media. There was no Firefox, no Google Chrome (not even a Google), no Amazon, barely an internet. In fact, the hot topic of the day was the new Linux 2.0 kernel. The big technical challenges in those days? Well, the ELF format was replacing the old a.out format in binary Linux distributions, and the upgrade could be tricky on some installs of Linux.

How I transformed a personal interest in this fledgling young operating system into a career in open source is an interesting story. …This journey—from my first use of Linux to becoming a leader, contributor, and maintainer in today’s cloud-native open source world—has been extremely rewarding.

Read more at OpenSource.com

Efforts to Standardize Tracing Through OpenTracing

Industry efforts toward distributed tracing have been evolving for decades, and one of the latest initiatives in this arena is OpenTracing, an open distributed tracing standard for apps and OSS packages. APM vendors like Lightstep and Datadog are eagerly pushing the emerging specification forward, as are customer organizations like HomeAway, PayPal, and Pinterest, while some other industry leaders – including Dynatrace, New Relic, and AppDynamics – are holding back from full support. Still, contributors to the open-source spec are forging ahead with more and more integrations, and considerable conference activity is in store for later this year.

Read more at SDTimes

The fc Command Tutorial With Examples For Beginners

The fc command, short for fix commands, is a shell built-in used to list, edit, and re-execute the most recently entered commands in an interactive shell. You can edit recently entered commands in your favorite editor and run them without having to retype them in full. This is handy for correcting spelling mistakes in previously entered commands and for avoiding the repetition of long and complicated commands. Since it is a shell built-in, it is available in most shells, including Bash, Zsh, Ksh, etc. In this brief tutorial, we will learn how to use the fc command in Linux.

List the recently executed commands

If you run the “fc -l” command with no arguments, it will display the last 16 commands.
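
For a quick sense of how this looks in practice, here are a few representative invocations (Bash shown; exact option behavior can vary slightly between shells):

    fc -l              # list the last 16 commands, with their history numbers
    fc -l 100 110      # list history entries 100 through 110
    fc -e vim 105      # open history entry 105 in vim; the edited command runs when you save and quit
    fc -s foo=bar      # repeat the previous command, replacing "foo" with "bar"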

Read more at OSTechnix

Linux Kernel Developer: Steven Rostedt

Linus Torvalds recently released version 4.16 of the Linux kernel. These releases typically occur every nine to ten weeks, and each one contains the work of more than 1,600 developers representing over 200 corporations, according to the 2017 Linux Kernel Development Report, written by Jonathan Corbet and Greg Kroah-Hartman. In this series, we’re highlighting some of the developers who contribute to the kernel.

Steven Rostedt, Open Source Programmer at VMware, maintains the Real Time Stable releases of the Linux kernel, among other things. Rostedt is one of the original developers of the PREEMPT_RT patch and began working on it in 2004…

Read more at The Linux Foundation

Cybersecurity Vendor Selection: What Needs to Be in a Good Policy

Operating a company in the modern enterprise landscape requires a reliance, to some degree, on third-party vendors. It’s unavoidable. But the addition of each new vendor brings with it a certain amount of risk.

Starting small is key. Company leaders should work with their CISO or CSO to determine their minimum acceptable security standards and use those as baseline criteria, according to Gartner research director Mark Horvath. This should be done even before a request for proposal (RFP) or request for information (RFI) is written, Horvath said.

“Every organization will have a set of requirements which are informed by the relevant industry standards and the unique needs of the organization. These should be written as a policy long before any vendor inquiries are made, so that they can be addressed up front with the vendors. The goal is to avoid the problem of buying a product and then discovering later that it violates privacy or security policies in a way which hinders the business case for the purchase.”

Read more at ZDNet

5 Things to Know Before Adopting Microservice and Container Architectures

I spend a lot of time with existing and potential customers answering questions about how we use and manage containers to create a platform composed of dozens of microservices.

We definitely consider ourselves early adopters of containers, and we started packaging services in them almost as soon as Docker released its first production-ready version in the summer of 2014. Many of the customers I talk with are just now beginning — or thinking about beginning — such journeys, and they want to know everything we know. They want to know how we make it work, and how we architected it. But part of the process, I like to stress, is that they need to know what we learned from where we struggled along the way.

With that in mind, here are five key takeaways I’d like to share with anyone pondering containers and microservices:

1. Never Stop Developing

Take your adoption project seriously, and treat it like a product. Give it a name, even some internal branding, and a clear product vision. It should be managed and given a life of its own.

Read more at The New Stack

Why You Should Use Column-Indentation to Improve Your Code’s Readability

I think that the most important aspect of programming is the readability of the source code that you write or maintain. This involves many things, from the syntax of the programming language, to the variable names, comments, and indentation. Here I discuss the last one of these, indentation.

It’s not about indentation size, the choice between tabs and spaces, or whether indentation should be required, as it is in a language such as Python. A lot of people like to enforce a maximum line length for each line of code, usually 80 or 120 characters. With column indentation, there is no maximum length, and sometimes you’ll need to use the horizontal scrollbar. But don’t freak out, it is not for the whole code — it’s just for some parts of it.
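
To make the idea concrete, here is a small made-up shell snippet (not taken from the article): similar lines are indented into columns so the parts that differ stand out at a glance.

    systemctl enable  --now nginx.service       # web server
    systemctl enable  --now postgresql.service  # database
    systemctl disable --now rsyncd.service      # not needed on this host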

Read more at FreeCodeCamp

Learn Advanced SSH Commands with the New Cheat Sheet

Secure Shell (SSH) is a powerful tool for connecting to remote servers. But with all that power comes a dizzying array of options and flags. The ssh client command has many options—some for daily use and some arcane. I put together a cheat sheet for some common SSH uses. It doesn’t begin to cover all the possible options, but I hope you find it useful for your remote access needs.
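
As a taste of what such a cheat sheet covers, here are a few common ssh invocations (the usernames, hostnames, and ports below are placeholders, not entries copied from the linked sheet):

    ssh -p 2222 user@host               # connect on a non-default port
    ssh -i ~/.ssh/id_ed25519 user@host  # authenticate with a specific private key
    ssh -L 8080:localhost:80 user@host  # forward local port 8080 to port 80 on the remote machine
    ssh -J jumpuser@bastion user@host   # reach the host through a jump/bastion host (OpenSSH 7.3+)
    ssh -D 1080 user@host               # open a local SOCKS proxy on port 1080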

Read more at OpenSource.com

Containerization, Atomic Distributions, and the Future of Linux

Linux has come a long way since Linus Torvalds announced it in 1991. It has become the dominant operating system in the enterprise space. And, although we’ve seen improvements and tweaks in the desktop environment space, the model of a typical Linux distribution has largely remained the same over the past 25+ years. The traditional package-management-based model has dominated both the desktop and server space.

However, things took an interesting turn when Google launched the Linux-based Chrome OS, which deployed an image-based model. CoreOS (now owned by Red Hat) came out with an operating system, Container Linux, that was inspired by Google’s approach but targeted at enterprise customers.

Container Linux changed the way operating systems are updated, and it changed the way applications are delivered and updated. Is this the future of Linux distributions? Will it replace the traditional package-based distribution model?

Three models

Matthias Eckermann, Director of Product Management for SUSE Linux Enterprise, thinks there are not two but three models. “Outside of the traditional (RHEL/SLE) and the image-based model (RH Atomic Host), there is a third model: transactional. This is where SUSE CaaS Platform and its SUSE MicroOS lives,” said Eckermann.

What’s the difference?

Those who live in Linux land are very well aware of the traditional model. It’s made up of single packages and shared libraries. This model has its benefits: application developers don’t have to worry about bundling libraries with their apps. There is no duplication, which keeps the system lean and thin. It also saves bandwidth, as users don’t have to download a lot of packages. Distributions have total control over packages, so security issues can be fixed easily by pushing updates at the system level.

“Traditional packaging continues to provide the opportunity to carefully craft and tune an operating system to support mission-critical workloads that need to stand the test of time,” said Ron Pacheco, Director of Product Management at Red Hat Enterprise Linux.

But the traditional model has some disadvantages, too. App developers must restrict themselves to the libraries shipped with the distro, which means they can’t take advantage of newer packages for their apps if the distro doesn’t support them. It can also lead to conflicts when two different versions of the same library are needed. As a result, it creates administration challenges, as systems are often difficult to keep updated and in sync.

Image-based Model

That’s where the image-based model comes to the rescue. “The image-based model solves the problems of the traditional model as it replaces the operating system at every reiteration and doesn’t work with single packages,” said Eckermann.

“When we talk about the operating system as an image, what we’re really talking about is developing and deploying in a programmatic way and with better integrated life cycle management,” said Pacheco, giving the example of OpenShift, which is built on top of Red Hat Enterprise Linux.

Pacheco sees the image-based OS as a continuum, from hand-tooling a deployed image to a heavily automated infrastructure that can be managed at a large scale; regardless of where a customer is on this range, the same applications have to run. “You don’t want to create a silo by using a wholly different deployment model,” he said.

The image-based model replaces the entire OS with new libraries and packages, which introduces its own set of problems. The image has to be reconstructed to meet the needs of specific environments. For example, if a user needs to install a particular hardware driver or a low-level monitoring option, the image model fails, or options for finer granularity have to be re-invented.

Transactional model

The third model is transactional updates, which follows the traditional package-based approach but handles packages as if they were an image, updating all the packages that belong together in one shot.

“The difference is because they are single packages that are grouped together as well as on descending and the installation, the customer has the option to influence this if necessary. This gives the user extra flexibility by combining the benefits of both and avoiding the disadvantages associated with the traditional or image model,” said Eckermann.
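
To make the transactional idea concrete, day-to-day updates on such systems look roughly like the following (rpm-ostree on Atomic Host-style distributions, transactional-update on SUSE’s offerings); the exact commands and options vary by product and version:

    sudo rpm-ostree upgrade       # stage a new OS image; it becomes active on the next reboot
    sudo rpm-ostree rollback      # make the previous known-good deployment the default for the next boot
    sudo transactional-update up  # SUSE equivalent: apply updates into a new snapshot, then reboot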

Pacheco said that it’s becoming increasingly common for carefully crafted workloads to be deployed as images in order to deploy consistently, reliably, and with elasticity. “This is what we see our customers do today when they create and deploy virtual machines on premises or on public/private clouds as well as on traditional bare metal deployments,” he said.

Pacheco suggests that we should not look at these models as strictly a “compare and contrast scenario,” but rather as an evolution and expansion of the operating system’s role.

Arrival of Atomic Updates

Google’s Chrome OS and CoreOS’s Container Linux popularized the concept of transactional updates, a model followed by both Red Hat and SUSE.

“The real problem is the operating system underlying the container host operating system is not in focus anymore — at least not in a way the administrator should care about. Both RH Atomic Host and SUSE CaaS Platform solve this problem similarly from a user experience perspective,” said Eckermann.

Immutable infrastructure, such as that provided by SUSE CaaS Platform, Red Hat Atomic Host, and Container Linux (formerly Core OS), encourages the use of transactional updates. “Having a model where the host always moves to a ‘known good’ state enables better confidence with updates, which in turn enables a faster flow of features, security benefits, and an easier-to-adopt operational model,” said Ben Breard, senior technology product manager, Red Hat.

These newer OSes isolate the applications from the underlying host with Linux containers, thereby removing many of the traditional limitations associated with infrastructure updates.

“The real power and benefits are realized when the orchestration layer is intelligently handling the updates, deployments, and, ultimately, seamless operations,” added Breard.

The Future

What does the future hold for Linux? The answer really depends on who you ask. Container players will say the future belongs to containerized OSes, but Linux vendors, who still have a huge market, may disagree.

When asked if, in the long run, atomic distros will replace traditional distributions, Eckermann said, “If I say yes, then I am following the trend; if I say no, I will be considered old-fashioned. Nevertheless, I say no: atomic distros will not replace traditional distros in the long run — but traditional workloads and containerized workloads will live together in data centers as well as private and public cloud environments.”

Pacheco maintained that the growth in Linux deployments, in general, makes it difficult to imagine one model replacing the other. He said that instead of looking at them as competing models, we should look at atomic distributions as part of the evolution and deployment of the operating system.

Additionally, there are many use-cases that may need a mix of both species of Linux distributions. “Imagine the large number of PL/1 and Cobol systems in banks and insurance companies. Think about in-memory databases and core data bus systems,” said Eckermann.

Most of these applications can’t be containerized. As much as we would like to think otherwise, containerization is not a silver bullet that solves every problem. There will always be a mix of different technologies.

Eckermann believes that over time, a huge number of new developments and deployments will go into containerization, but there is still good reason to keep traditional deployment methods and applications in the enterprise.

“Customers need to undergo business, design, and cultural transformations in order to maximize the advantages that container-based deployments are delivering. The good news is that the industry understands this, as a similar transformation at scale occurred with the historical moves from mainframes to UNIX to x86 to virtualization,” said Pacheco.

Conclusion

It’s apparent that the volume of containerized workloads will increase in the future, which translates into more demand for atomic distros. In the meantime, a substantial percentage of workloads may remain on the traditional distros that keep them running today. What really matters is that both vendors have invested heavily in the new models and are ready to tweak their strategies as the market evolves. An external observer can clearly see that the future belongs to transactional/atomic models. We have seen the evolution of the datacenter: we have come a long way from one application per server to the function-as-a-service model. It is not far-fetched to see Linux distros entering the atomic phase.

Nitrokey Digital Tokens for Linux Kernel Developers

The Linux Foundation IT team has been working to improve the code integrity of git repositories hosted at kernel.org by promoting the use of PGP-signed git tags and commits. Doing so allows anyone to easily verify that git repositories have not been altered or tampered with, no matter which worldwide mirror they may have been cloned from. If the digital signature on your cloned repository matches the PGP key belonging to Linus Torvalds or any other maintainer, then you can be assured that what you have on your computer is an exact replica of the kernel code, without any omissions or additions.
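
To see what this verification looks like in practice, assuming the signer’s public PGP key is already in your GnuPG keyring, the commands look along these lines (the tag name is just an example):

    git tag -v v4.16        # verify the PGP signature on a signed release tag
    git verify-commit HEAD  # verify a signed commit, where maintainers sign individual commits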

To help promote the use of PGP signatures in Linux kernel development, we now offer a detailed guide within the kernel documentation tree.

Further, we are happy to announce a new special program sponsored by The Linux Foundation in partnership with Nitrokey — the developer and manufacturer of smartcard-compatible digital tokens capable of storing private keys and performing PGP operations on-chip. Under this program, any developer who is listed as a maintainer in the MAINTAINERS file or who has a kernel.org account can qualify for a free digital token to help improve the security of their PGP keys. The cost of the device, including any taxes, shipping, and handling, will be covered by The Linux Foundation.

To participate in this program, please access the special storefront on the Nitrokey website.

Who qualifies for this program?

To qualify for the program, you need to have an account at kernel.org or have your email address listed in the MAINTAINERS file (following the “M:” heading). If you do not currently qualify but think you should, the easiest course of action is to get yourself added to the MAINTAINERS file or to apply for an account at kernel.org.

Which devices are available under this program?

The program is limited to Nitrokey Start devices. There are several reasons why we picked this particular device among several available options.

First of all, many Linux kernel developers have a strong preference not just for open-source software, but for open hardware as well. Nitrokey is one of the few companies selling GnuPG-compatible smartcard devices that provide both, since the Nitrokey Start is based on the Gnuk cryptographic token firmware developed by the Free Software Initiative of Japan. It is also one of the few commercially available devices that offer native support for ECC keys, which are both computationally faster than large RSA keys and produce smaller digital signatures. With our push toward more code signing of git objects themselves, both the open nature of the device and its support for fast, modern cryptography were key points in our evaluation.

Additionally, Nitrokey devices (both Start and Pro models) are already used by open-source developers for cryptographic purposes and they are known to work well with Linux workstations.

What is the benefit of digital smartcard tokens?

With regular GnuPG operations, the private keys are stored in the home directory, where they can be stolen by malware or exposed via other means, such as poorly secured backups. Furthermore, each time a GnuPG operation is performed, the keys are loaded into system memory and can be stolen from there using sufficiently advanced techniques (the likes of Meltdown and Spectre).

A digital smartcard token like the Nitrokey Start contains a cryptographic chip that is capable of storing private keys and performing crypto operations directly on the token itself. Because the key contents never leave the device, the operating system of the computer into which the token is plugged is not able to retrieve the private keys themselves, significantly limiting the ways in which the keys can be leaked or stolen.
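
With GnuPG, moving a private key onto such a token typically looks something like the following (the key ID is a placeholder; keep an offline backup first, because the keytocard operation replaces the on-disk key with a stub once you save):

    gpg --card-status          # confirm the token is detected and see which key slots it holds
    gpg --edit-key 0xDEADBEEF  # then use the "keytocard" subcommand to move a private subkey onto the token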

Questions or problems?

If you qualify for the program, but encounter any difficulties purchasing the device, please contact Nitrokey at shop@nitrokey.com.

For any questions about the program itself or with any other comments, please reach out to info@linuxfoundation.org.

This article originally appeared on kernel.org.