
AI in 2019: 8 Trends to Watch

Forget the job-stealing robot predictions. Let’s focus on artificial intelligence trends – around talent, security, data analytics, and more – that will matter to IT leaders.

We decided to focus on the trends that matter most urgently to IT leaders – you don’t need another “AI is taking over the world” story; you need concrete insights for your team and business. So, let’s dig into the key trends in AI – as well as overlapping fields such as machine learning – that IT leaders should keep tabs on in 2019.

Open source AI communities will thrive

Many of the cloud and cloud-native technologies in use today were born out of open source projects: Consider Kubernetes Exhibit A in that argument. Expect the development of AI, machine learning, and related technologies to follow a similar path.

“Today, more leading-edge software development occurs inside open source communities than ever before, and it is becoming increasingly difficult for proprietary projects to keep up with the rapid pace of development that open source offers,” says Ibrahim Haddad, director of research at The Linux Foundation, which includes the LF Deep Learning Foundation. “AI is no different and is becoming dominated by open source software that brings together multiple collaborators and organizations.”

In particular, Haddad expects more cutting-edge technology companies and early adopters to open source their internal AI work to catalyze the next waves of development, not unlike how Google spun Kubernetes out from an internal system to an open source project.

Read more at Enterprisers Project

STIBP, Collaborate, and Listen: Linus Floats Linux Kernel That ‘Fixes’ Intel CPUs’ Spectre Slowdown

Linus Torvalds has stuck to his “no swearing” resolution with his regular Sunday night Linux kernel release candidate announcement.

Probably the most important aspect of the weekend’s release candidate is that it, in a way, improves the performance of STIBP, which is a mitigation that stops malware exploiting a Spectre security vulnerability variant in Intel processors.

In November, it emerged that STIBP (Single Thread Indirect Branch Predictors), which counters Spectre Variant 2 attacks, caused nightmare slowdowns in some cases. The mitigation didn’t play well with simultaneous multi-threading (SMT) aka Intel’s Hyper Threading, and software would take up to a 50 per cent performance hit when the security measure was enabled. Linux 4.20-rc5, emitted on Sunday, addresses this performance issue by making the security defense optional… 
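If you want to see which Spectre mitigations your own kernel is currently applying, the sysfs vulnerabilities interface reports them; the exact wording of the output varies with kernel version and CPU, so treat this as a quick sanity check rather than a definitive reading of the 4.20 change:

$ cat /sys/devices/system/cpu/vulnerabilities/spectre_v2    # prints the active Spectre v2 mitigations (retpoline, IBPB, STIBP settings) for this machine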

Read more at The Register

How to do Basic Math in Linux Command Line

The Linux bash, or the command line, lets you perform both basic and complex arithmetic and boolean operations. Commands such as expr, jot, bc, and factor help you find optimal mathematical solutions to complex problems. In this article, we will describe these commands and present examples that will serve as a basis for you to move on to more useful mathematical solutions.

We have run the commands and procedures mentioned in this article on an Ubuntu 18.04 LTS system.

We are using the Ubuntu command line, the Terminal, in order to perform all the mathematical operations. You can open the Terminal either through the system Dash or the Ctrl+Alt+T shortcut.

The expr command

The expr, or expression, command in Linux is the most commonly used command for performing mathematical calculations. You can use it to perform functions like addition, subtraction, multiplication, division, incrementing a value, and even comparing two values.
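A few illustrative invocations (expr needs spaces around operators, and * has to be escaped so the shell doesn’t expand it):

$ expr 12 + 8        # addition: prints 20
$ expr 12 - 8        # subtraction: prints 4
$ expr 4 \* 5        # multiplication: prints 20
$ expr 20 / 4        # integer division: prints 5
$ expr 20 % 3        # remainder: prints 2
$ expr 10 \> 5       # comparison: prints 1 (true)
$ count=5; count=$(expr $count + 1); echo $count    # incrementing a value: prints 6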

Read more at VITUX

Demystifying Kubernetes Operators with the Operator SDK: Part 1

You may have heard about the concept of custom Operators in Kubernetes. You may have even read about the CoreOS operator-sdk, or tried walking through the setup. The concept is cool: Operators can help you extend Kubernetes functionality to include managing any stateful applications your organization uses. At Kenzan, we see many possibilities for their use on stateful infrastructure apps, including upgrades, node recovery, and resizing a cluster. An ideal future for platform ops will inevitably include operators themselves maintaining stateful applications and keeping runtime intervention by a human to a bare minimum. We also admit the topic of operators and the operator-sdk is a tad confusing. After reading a bit, you may secretly still be mystified as to what operators exactly do, and how all the components work together.

In this article, we will demystify what an operator is, and how the CoreOS operator-sdk translates its input to the code that is then run as an operator. In this step-by-step tutorial, we will create a general example of an operator, with a few bells and whistles beyond the functionality shown in the operator-sdk user guide. By the end, you will have a solid foundation for how to build a custom operator that can be applied to real-world use cases.

Hello Operator, could you tell me what an Operator is?

To describe what an operator does, let’s go back to Kubernetes architecture for a bit. Kubernetes is essentially a desired state manager. You give it a desired state for your application (number of instances, disk space, image to use, etc.) and it attempts to maintain that state should anything get out of whack. Kubernetes uses what’s called a control plane on its master node. The control plane includes a number of controllers whose job is to reconcile against the desired state in the following way:

  • Monitor existing K8s objects (Pods, Deployments, etc.) to determine their state

  • Compare it to the K8s yaml spec for the object

  • If the state is not the same as the spec, the controller will attempt to remedy this

A common scenario where reconciling takes place: a Deployment is defined with three Pod replicas. One goes down, and, with the K8s controller watching, it recognizes that there should be three Pods running, not two. It then works to create a new instance of the Pod.
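You can watch that reconciliation loop from the command line; the deployment name and image below are just placeholders for illustration:

$ kubectl create deployment web --image=nginx      # declare the desired state
$ kubectl scale deployment web --replicas=3        # ask for three replicas
$ kubectl get pods                                 # three web-* Pods are Running
$ kubectl delete pod <one of the web-* Pod names>  # simulate a failure
$ kubectl get pods                                 # the controller is already creating a replacement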

This simplified diagram shows the role of controllers in Kubernetes architecture as follows.

 

  • The Kubectl CLI sends an object spec (Pod, Deployment, etc.) to the API server on the Master Node to run on the cluster

  • The Master Node will schedule the object to run (not shown)

  • Once running, a Controller will continuously monitor the object and reconcile it against the spec

In this way, Kubernetes works great to take much of the manual work out of maintaining the runtime for stateless applications. Yet it is limited in the number of object types (Pods, Deployments, Namespaces, Services, DaemonSets, etc.) that it will natively maintain. Each of these object types has a predetermined behavior and way of reconciling against its spec should it break, without much deviation in how it is handled.

Now, what if your application has a bit more complexity and you need to perform a custom operation to bring it to a desired running state?

Think of a stateful application. You have a database application running on several nodes. If a majority of nodes go down, you’ll need to reload the database from a specific snapshot following specific steps. Using existing object types and controllers in Kubernetes, this would be impossible to achieve. Or think of scaling nodes up, or upgrading to a new version, or disaster recovery for our stateful application. These kinds of operations often need very specific steps, and typically require manual intervention.

Enter operators.

Operators extend Kubernetes by allowing you to define a Custom Controller to watch your application and perform custom tasks based on its state (a perfect fit to automate maintenance of the stateful application we described above). The application you want to watch is defined in Kubernetes as a new object: a Custom Resource (CR) that has its own yaml spec and object type (in K8s, a kind) that is understood by the API server. That way, you can define any specific criteria in the custom spec to watch out for, and reconcile the instance when it doesn’t match the spec. The way an operator’s controller reconciles against a spec is very similar to native Kubernetes’ controllers, though it is using mostly custom components.
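As a rough sketch (the API group, kind, and spec fields here are made up for illustration), a Custom Resource instance is just another piece of YAML you apply once its CustomResourceDefinition has been registered:

$ cat <<EOF | kubectl apply -f -
apiVersion: example.com/v1alpha1      # hypothetical API group and version
kind: DatabaseCluster                 # hypothetical custom kind
metadata:
  name: my-db
spec:
  size: 3                             # custom fields the operator’s controller reconciles against
  restoreFromSnapshot: latest
EOF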

Note the primary difference in our diagram from the previous one is that the Operator is now running the custom controller to reconcile the spec. While the API Server is aware of the custom controller, the Operator runs independently, and can run either inside or outside the cluster.



Because Operators are a powerful tool for stateful applications, we are seeing a number of pre-built operators from CoreOS and other contributors for things like etcd, Vault, and Prometheus. And while these are a great starting point, the value of your operator really depends on what you do with it: what your best practice is for failed states, and how the operator functionality may have to work alongside manual intervention.

 

Dial it in: Yes, I’d like to try Building an Operator

Based on the above diagram, in order to create our custom Operator, we’ll need the following:

  1. A Custom Resource (CR) spec that defines the application we want to watch, as well as an API for the CR

  2. A Custom Controller to watch our application

  3. Custom code within the new controller that dictates how to reconcile our CR against the spec

  4. An Operator to manage the Custom Controller

  5. A deployment for the Operator and Custom Resource

All of the above could be created by writing Go code and specs by hand, or using a tool like kubebuilder to generate Kubernetes APIs. But the easiest route (and the method we’ll use here) is generating the boilerplate for these components using the CoreOS operator-sdk. It allows you to generate the skeleton for the spec, the controller, and the operator, all via a convenient CLI. Once generated, you define the custom fields in the spec and write the custom code to reconcile against the spec. We’ll walk through each of these steps in the next part of the tutorial.
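To give a feel for the workflow, an operator-sdk session from that era looked roughly like the following; subcommands and flags shifted between early SDK releases, so treat this as an outline rather than a verbatim recipe (the project, group, and kind names are placeholders):

$ operator-sdk new app-operator                                                  # scaffold the project
$ cd app-operator
$ operator-sdk add api --api-version=app.example.com/v1alpha1 --kind=App         # generate the CR types
$ operator-sdk add controller --api-version=app.example.com/v1alpha1 --kind=App  # generate the controller skeleton
$ operator-sdk generate k8s                                                      # regenerate deepcopy code after editing the spec
$ operator-sdk build quay.io/example/app-operator                                # build the operator image
$ operator-sdk up local                                                          # run the operator locally against your current kubeconfig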

Toye Idowu is a Platform Engineer at Kenzan Media.

What’s New in Bash Parameter Expansion

The bash man page is close to 40K words. It’s not quite War and Peace, but it could hold its own in a rack of cheap novels. Given the size of bash’s documentation, missing a useful feature is easy to do when looking through the man page. For that reason, as well as to look for new features, revisiting the man page occasionally can be a useful thing to do.

The sub-section of interest today is Parameter Expansion—that is, $var in its many forms. Don’t be confused by the name, though; it’s really about parameter and variable expansion.

I’m not going to cover all the different forms of parameter expansion here, just some that I think may not be as widely known as others. If you’re completely new to parameter expansion, check out my ancient post or one of the many articles elsewhere on the internet.
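A handful of forms that tend to be under-used (the last two need a reasonably recent bash, roughly 4.0 and 4.4 respectively):

$ var="hello world"
$ echo "${var:-fallback}"     # use a default when var is unset or empty
$ echo "${#var}"              # length of the value: 11
$ echo "${var/world/bash}"    # replace the first match: hello bash
$ echo "${var^^}"             # uppercase the value: HELLO WORLD (bash 4+)
$ echo "${var@Q}"             # quote it for safe reuse: 'hello world' (bash 4.4+)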

Read more at The Linux Journal

Add It Up: Serverless vs. Kubernetes vs. Containers

Has “serverless” surpassed containers? Will Kubernetes be the center of the universe for developers? Regardless of the technical benefits, your personal investment in the technologies impacts your point of view.

Containers have the edge according to a survey conducted for our ebook about serverless. Overall, 53 percent of respondents would prefer containers as the platform to standardize how their organization abstracts IT infrastructure, with 33 percent choosing functions and 10 percent opting for virtual machines. Asked a different way, 55 percent lean towards container orchestrators like Kubernetes for new applications being deployed in the next year and a half.

Read more at The New Stack

The State of Docker Container Documentation

We need to confront container documentation as the crucial, non-trivial problem that it is.

TL;DR — As far as I can tell, there’s currently no way of providing documentation for specific containers that we could fairly call canonical, “best practice,” or even all that widely used. This blog post suggests some currently available (but sadly not-great) workarounds but also points to what I think could be a fundamentally better path.

…The result is that questions like these virtually never have readily available, no-Google-involved answers:

  • How do I run this thing? Should I run a specific executable?
  • What’s the “blessed” way to run it?
  • Which ports should I use?
  • How do I debug it or run its in-container test suites?

Sometimes the problem is that no one has provided answers to these questions anywhere. More often, in my experience, the answers are out there but suffer from a really fundamental discoverability gap.
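One partial workaround, assuming the image author recorded anything at all, is to interrogate the image itself; docker inspect will at least surface declared ports, labels, and the default entrypoint (nginx here is just a stand-in for whatever image you’re puzzling over):

$ docker inspect --format '{{.Config.ExposedPorts}}' nginx                # declared ports
$ docker inspect --format '{{.Config.Labels}}' nginx                      # any documentation stuffed into labels
$ docker inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' nginx  # how the image expects to be run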

Read more at Medium

6 Methods To Rename Multiple Files At Once In Linux

As you may already know, we use the mv command to rename or move files and directories in Unix-like operating systems. But the mv command won’t support renaming multiple files at once; it can rename only one file at a time. Worry not. There are a few other utilities available, especially for batch renaming files. In this tutorial, we are going to learn how to rename multiple files at once using six different methods. All examples provided here were tested on Ubuntu 18.04 LTS; however, they should work on any Linux operating system. Let’s get started!

Rename Multiple Files At Once In Linux

There could be many commands and utilities for renaming a bunch of files. As of writing this, I know of the following methods only. I will keep updating the list if I come across any other methods in the future.

Method 1 – Using mmv

The mmv utility is used to move, copy, append and rename files in bulk using standard wildcards in Unix-like operating systems. It is available in the default repositories of Debian-based systems. To install it on Debian, Ubuntu, Linux Mint, run the following command:

$ sudo apt-get install mmv
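As a quick taste of the pattern syntax (the filenames here are made up): wildcards in the source pattern are referenced as #1, #2, and so on in the target pattern:

$ mmv '*.jpeg' '#1.jpg'                        # photo1.jpeg -> photo1.jpg, and so on
$ mmv 'report_*_draft.txt' 'final_#1.txt'      # report_q3_draft.txt -> final_q3.txt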

Read more at OSTechnix

5 Minimal Web Browsers for Linux

There are so many reasons to enjoy the Linux desktop. One reason I often state up front is the almost unlimited number of choices to be found at almost every conceivable level. From how you interact with the operating system (via a desktop interface), to how daemons run, to what tools you use, you have a multitude of options.

The same thing goes for web browsers. You can use anything from open source favorites, such as Firefox and Chromium, to closed source industry darlings like Vivaldi and Chrome. Those options are full-fledged browsers with every possible bell and whistle you’ll ever need. For some, these feature-rich browsers are perfect for everyday needs.

There are those, however, who prefer using a web browser without all the frills. In fact, there are many reasons why you might prefer a minimal browser over a standard browser. For some, it’s about browser security, while others look at a web browser as a single-function tool (as opposed to a one-stop shop application). Still others might be running low-powered machines that cannot handle the requirements of, say, Firefox or Chrome. Regardless of the reason, Linux has you covered.

Let’s take a look at five of the minimal browsers that can be installed on Linux. I’ll be demonstrating these browsers on the Elementary OS platform, but each of these browsers is available to nearly every distribution in the known Linuxverse. Let’s dive in.

GNOME Web

GNOME Web (codename Epiphany, which means “a usually sudden manifestation or perception of the essential nature or meaning of something”) is the default web browser for Elementary OS, but it can also be installed from the standard repositories. (Note, however, that the recommended installation of Epiphany is via Flatpak or Snap.) If you choose to install via the standard package manager, issue a command such as sudo apt-get install epiphany-browser -y.
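If you’d rather follow the recommended Flatpak route, and assuming Flatpak with the Flathub remote is already set up on your system, something like this should do it:

$ flatpak install flathub org.gnome.Epiphany    # install GNOME Web from Flathub
$ flatpak run org.gnome.Epiphany                # launch it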

Epiphany uses the WebKit rendering engine, which is the same engine used in Apple’s Safari browser. Couple that rendering engine with the fact that Epiphany has very little bloat to get in the way, and you will enjoy very fast page-rendering speeds. Epiphany development follows strict adherence to the following guidelines:

  • Simplicity – Feature bloat and user interface clutter are considered evil.

  • Standards compliance – No non-standard features will ever be introduced to the codebase.

  • Software freedom – Epiphany will always be released under a license that respects freedom.

  • Human interface – Epiphany follows the GNOME Human Interface Guidelines.

  • Minimal preferences – Preferences are only added when they make sense and after careful consideration.

  • Target audience – Non-technical users are the primary target audience (which helps to define the types of features that are included).

GNOME Web is as clean and simple a web browser as you’ll find (Figure 1).

Figure 1: The GNOME Web browser displaying a minimal amount of preferences for the user.

The GNOME Web manifesto reads:

A web browser is more than an application: it is a way of thinking, a way of seeing the world. Epiphany’s principles are simplicity, standards compliance, and software freedom.

Netsurf

The Netsurf minimal web browser opens almost faster than you can release the mouse button. Netsurf uses its own layout and rendering engine (designed completely from scratch), which is rather hit and miss in its rendering (Figure 2).

Figure 2: Netsurf (mis)rendering the Linux.com site.

Although you might find Netsurf to suffer from rendering issues on certain sites, understand that the Hubbub HTML parser follows the work-in-progress HTML5 specification, so issues will pop up now and then. To ease those rendering headaches, Netsurf does include HTTPS support, web page thumbnailing, URL completion, scale view, bookmarks, full-screen mode, keyboard shortcuts, and no particular GUI toolkit requirements. That last bit is important, especially when you switch from one desktop to another.

For those curious as to the requirements for Netsurf, the browser can run on a machine as slow as a 30MHz ARM 6 computer with 16MB of RAM. That’s impressive, by today’s standards.

QupZilla

If you’re looking for a minimal browser that uses the Qt Framework and the QtWebKit rendering engine, QupZilla might be exactly what you’re looking for. QupZilla does include all the standard features and functions you’d expect from a web browser, such as bookmarks, history, sidebar, tabs, RSS feeds, ad blocking, flash blocking, and CA Certificates management. Even with those features, QupZilla still manages to remain a very fast lightweight web browser. Other features include: Fast startup, speed dial homepage, built-in screenshot tool, browser themes, and more.
One feature that should appeal to average users is that QupZilla has a more standard preferences tool than is found in many lightweight browsers (Figure 3). So, if going too far outside the lines isn’t your style, but you still want something lighter weight, QupZilla is the browser for you.

Figure 3: The QupZilla preferences tool.

Otter Browser

Otter Browser is a free, open source attempt to recreate the closed-source offerings found in the Opera Browser. Otter Browser uses the WebKit rendering engine and has an interface that should be immediately familiar to any user. Although lightweight, Otter Browser does include full-blown features such as:

  • Passwords manager

  • Add-on manager

  • Content blocking

  • Spell checking

  • Customizable GUI

  • URL completion

  • Speed dial (Figure 4)

  • Bookmarks and various related features

  • Mouse gestures

  • User style sheets

  • Built-in Note tool

Figure 4: The Otter Browser Speed Dial tab.

Otter Browser can be run on nearly any Linux distribution from an AppImage, so there’s no installation required. Just download the AppImage file, give the file executable permissions (with the command chmod u+x otter-browser-*.AppImage), and then launch the app with the command ./otter-browser*.AppImage.

Otter Browser does an outstanding job of rendering websites and could function as your go-to minimal browser with ease.

Lynx

Let’s get really minimal. When I first started using Linux, back in ‘97, one of the web browsers I often turned to was a text-only browser called Lynx. It should come as no surprise that Lynx is still around and available for installation from the standard repositories. As you might expect, Lynx works from the terminal window and doesn’t display pretty pictures or render much in the way of advanced features (Figure 5). In fact, Lynx is as bare-bones a browser as you will find. Because of how bare-bones this web browser is, it’s not recommended for everyone. But if you happen to have a GUI-less web server and you need to be able to read the occasional website, Lynx can be a real lifesaver.

Figure 5: The Lynx browser rendering the Linux.com page.

I have also found Lynx an invaluable tool when troubleshooting certain aspects of a website (or if some feature on a website is preventing me from viewing the content in a regular browser). Another good reason to use Lynx is when you only want to view the content (and not the extraneous elements).
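For that content-only use case, the -dump option is handy; it renders the page as plain text and exits (the URL is just an example):

$ lynx https://www.linux.com                    # interactive, text-only browsing
$ lynx -dump https://www.linux.com | less       # just the rendered text, no interaction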

Plenty More Where This Came From

There are plenty more minimal browsers than this, but the list presented here should get you started down the path of minimalism. One (or more) of these browsers is sure to fill that need, whether you’re running it on a low-powered machine or not.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

Linux For Beginners: Understanding The Many Versions Of Ubuntu

But did you know there are seven different “flavors” of Ubuntu?

I’ll briefly explain the kind of user each Ubuntu version is designed for, what differentiates them and what you can expect from each distribution’s desktop environment (basically its look and feel). It’s not an exhaustive deep dive on each one; hopefully just enough to get you pointed adventurously in the right direction.

Read more at Forbes