
The Monty Hall Problem

The original and simplest scenario of the Monty Hall problem is this: You are in a prize contest and in front of you there are three doors (A, B and C). Behind one of the doors is a prize (a car), while behind each of the others is a goat. You first choose a door (let’s say door A). The contest host then opens another door behind which there is a goat (let’s say door B), and asks whether you want to stay with your original choice or switch to the remaining door. The question is: which is the better strategy?

The basis of the answer lies in related and unrelated events. The most common answer is that it doesn’t matter which strategy you choose because it is a 50/50 chance – but it is not. The 50/50 assumption is based on the idea that the first choice (one of three doors) and the second choice (stay or switch) are unrelated events, like flipping a coin twice. But in reality, those are related events: the second event depends on the first.

At the first step, when you choose one of the three doors, the probability that you picked the right door is 1/3 (33.33%); in other words, there is a 2/3 (66.67%) chance that you are standing at the wrong door. The fact that in the second step you are given a choice between your door and the remaining one doesn’t change the fact that you most likely started with the wrong door. Since the host always reveals a goat, switching wins exactly when your first pick was wrong, which happens two times out of three. Therefore, it is better to switch doors in the second step.
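
If you’d rather convince yourself empirically, a quick simulation makes the 2/3 advantage visible. The snippet below is not from the original post; it is a minimal Bash sketch, and the trial count and variable names are arbitrary:

wins_stay=0; wins_switch=0
for i in $(seq 1 10000); do
  prize=$((RANDOM % 3))    # door hiding the car
  pick=$((RANDOM % 3))     # contestant's first choice
  if [ "$pick" -eq "$prize" ]; then
    wins_stay=$((wins_stay + 1))      # staying wins only when the first pick was right
  else
    wins_switch=$((wins_switch + 1))  # the host removes the other goat, so switching wins
  fi
done
echo "stay wins:   $wins_stay / 10000"
echo "switch wins: $wins_switch / 10000"

Over 10,000 games, the switch counter should land near 6,667 and the stay counter near 3,333.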

Read more at There’s Something About R

New Ports Bring Linux to Arm Laptops, Android to the Pi

Like life itself, software wants to be free. In our increasingly open source era, software can more easily disperse into new ecosystems. From open source hackers fearlessly planting the Linux flag on the Sony Playstation back in the aughts to standard Linux apps appearing on Chromebooks and on Android-based Galaxy smartphones (Samsung’s DeX), Linux continues to break down barriers.

The latest Linux-related ports include an AArch64-Laptops project that enables owners of Windows-equipped Arm laptops and tablets to load Ubuntu. There’s also a Kickstarter project to develop a Raspberry Pi friendly version of Google’s low-end Android 9 Pie Go stack. Even Windows is spreading its wings. A third-party project has released a WoA installer that enables a full Windows 10 image to run on the Pi.

Ubuntu to Arm laptops

The practice of replacing Windows with Linux on Intel-based computers has been around for decades, but the arrival of Arm-based laptops has complicated matters. Last year, Microsoft partnered with Qualcomm to release the lightweight Windows 10 S on the Asus NovaGo convertible laptop and the HP Envy x2 and Lenovo Miix 630 2-in-1 tablets, all powered by a Snapdragon 835 SoC.

Reviews have been mixed, with praise for the longer battery life but criticism of sluggish performance. Since the octa-core, 10nm-fabricated Snapdragon 835 is designed to run the Linux-based Android — it also supports embedded Linux — Linux hackers naturally decided that they could do better.

As reported by Phoronix, AArch64-Laptops has posted Ubuntu 18.04 LTS images for all three of the above systems. As noted by Liliputing, the early release lacks support for WiFi, on-board storage, or hardware-accelerated graphics, and the touchpad doesn’t work on the Asus NovaGo.

The WiFi and storage issues should be solved in the coming months and accelerated graphics should be theoretically possible thanks to the open source Freedreno GPU driver project, says Phoronix. It’s unclear if AArch64-Laptops can whip up Ubuntu builds for more powerful Arm Linux systems like the Snapdragon 850 based Samsung Galaxy Book 2 and Lenovo Yoga C630.

Liliputing notes that Arm Linux lovers can also try out the Linux-driven, Rockchip RK3399 based Pinebook laptop. Later this year, Pine64 will release a consumer-grade Pinebook Pro.

Android Go to Raspberry Pi

If you like a double helping of pie, have we got a Kickstarter project for you. As reported by Geeky Gadgets, an independent group called RaspberryPi DevTeam has launched a Kickstarter campaign to develop a version of Google’s new Android 9 Pie Go stack for entry-level smartphones that can run on the Raspberry Pi 3.

Assuming the campaign meets its modest $3,382 goal by April 10, there are plans to deliver a usable build by the end of the year. Pledges range from 1 to 499 Euros.

The project will use AOSP-based code from Android 9 Pie Go, which was released last August. Go is designed for low-end phones with only 1GB RAM.

RaspberryPi DevTeam was motivated to launch the project because current Android stacks for the Raspberry Pi “normally have bugs, are unstable and run slow,” says the group. That has largely been true since hackers began attempting the feat four years ago with the quad-core, Cortex-A7 Raspberry Pi 2. Early attempts have struggled to give Android its due on a 1GB RAM SBC, even with the RPi 3B and 3B+.

The real-time focused RTAndroid has had the most success, and there have been other efforts like the unofficial, Android 7.1.2 based LineageOS 14.1 for the RPi 3. Last year, an RTAndroid-based, industrial focused emteria.OS stack arrived with more impressive performance.

A MagPi hands-on last summer was impressed with the stack, which it called “the first proper Android release running on a Raspberry Pi 3B+.” MagPi continues: “Finally there’s a proper way to install full Android on your Raspberry Pi.”

Available in free evaluation (registration required) and commercial versions, emteria.OS uses F-Droid as an open source stand-in for Google Play. The MagPi hands-on runs through an installation of Netflix and notes the availability of apps including NewPipe (YouTube), Face Slim (Facebook), and Terminal Emulator.

All these solutions should find it easier to run on next year’s Raspberry Pi 4. Its SoC will move from the current 40nm process to something larger than 7nm, but no larger than 28nm, according to RPi Trading CEO Eben Upton in a Feb. 11 Tom’s Hardware post. The SBC will have “more RAM, a faster processor, and faster I/O,” but will be the same size and price as the RPi 3B+, says the story. Interestingly, it was former Google CEO Eric Schmidt who convinced Upton and his crew to retain the $35 price for the RPi 2. The lesson seems to have stuck.

Windows 10 on RPi 3

As far back as the Raspberry Pi 2, Microsoft announced it would support the platform with its slimmed down Windows 10 IoT, which works better on the new 64-bit RPi 3 models. But why use a crippled version of Windows for low-power IoT when you could use Raspbian?

The full Windows 10 should draw more interest, and that’s what’s promised by the WOA-Project with its new WoA-Installer for the RPi 3 or 3B+. According to Windows Latest, the open source WoA (Windows on Arm) Installer was announced in January following an earlier WoA release for the Lumia 950 phones.

The WoA Installer lets you run Windows 10 Arm64 on the Pi but comes with no performance promises. The GitHub page notes: “WoA Installer needs a set of binaries, AKA the Core Package, to do its job. These binaries are not mine and are bundled and offered just for convenience…” Good luck!

How Much Memory Is Installed and Being Used on Your Linux Systems?

There are numerous ways to get information on the memory installed on Linux systems and view how much of that memory is being used. Some commands provide an overwhelming amount of detail, while others provide succinct, though not necessarily easy-to-digest, answers. In this post, we’ll look at some of the more useful tools for checking on memory and its usage.

Before we get into the details, however, let’s review a few basics. Physical memory and virtual memory are not the same. The latter includes disk space that is configured to be used as swap. Swap may include partitions set aside for this purpose, or files created to add to the available swap space when creating a new partition isn’t practical. Some Linux commands provide information on both.

Swap expands memory by providing disk space that can be used to house inactive pages in memory that are moved to disk when physical memory fills up.
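
As a quick taste before the full article (these two commands are common defaults, not necessarily the tools the post focuses on), you can summarize physical memory and swap like this:

free -h          # human-readable totals for RAM and swap, with used, free, and available columns
swapon --show    # lists each swap partition or swap file with its size and current usage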

Read more at Network World

Ampersands and File Descriptors in Bash

In our quest to examine all the clutter (&, |, ;, >, <, {, [, (, ), ], }, etc.) that is peppered throughout most chained Bash commands, we have been taking a closer look at the ampersand symbol (&).

Last time, we saw how you can use & to push processes that may take a long time to complete into the background. But, the &, in combination with angle brackets, can also be used to pipe output and input elsewhere.

In the previous tutorials on angle brackets, you saw how to use > like this:

ls > list.txt

to pipe the output from ls to the list.txt file.

Now we see that this is really shorthand for

ls 1> list.txt

And that 1, in this context, is a file descriptor that points to the standard output (stdout).

In a similar fashion 2 points to standard error (stderr), and in the following command:

ls 2> error.log

all error messages are piped to the error.log file.

To recap: 1> is the standard output (stdout) and 2> the standard error output (stderr).
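
To watch the two descriptors act independently, you can split them in a single command. This example is mine, not the article’s, and the file names and the bogus path are made up:

ls /etc/passwd /nonexistent 1> listing.txt 2> errors.txt   # the real entry lands in listing.txt, the "No such file" complaint in errors.txt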

There is a third standard file descriptor, 0<, the standard input (stdin). You can see it is an input because the arrow (<) is pointing into the 0, while for 1 and 2, the arrows (>) are pointing outwards.
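
As a tiny illustration (the file name here is a placeholder), you can make the input channel explicit:

sort 0< list.txt    # exactly the same as sort < list.txt: list.txt is wired to the command's stdin (0)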

What are the standard file descriptors good for?

If you are following this series in order, you have already used the standard output (1>) several times in its shorthand form: >.

Things like stderr (2) are also handy when, for example, you know that your command is going to throw an error, but what Bash informs you of is not useful and you don’t need to see it. If you want to make a directory in your home/ directory, for example:

mkdir newdir

and if newdir/ already exists, mkdir will show an error. But why would you care? (OK, there are some circumstances in which you may care, but not always.) At the end of the day, newdir will be there one way or another for you to fill up with stuff. You can suppress the error message by pushing it into the void, which is /dev/null:

mkdir newdir 2> /dev/null

This is not just a matter of “let’s not show ugly and irrelevant error messages because they are annoying,” as there may be circumstances in which an error message may cause a cascade of errors elsewhere. Say, for example, you want to find all the .service files under /etc. You could do this:

find /etc -iname "*.service"

But it turns out that on most systems, many of the lines spat out by find show errors because a regular user does not have read access rights to some of the folders under /etc. It makes reading the correct output cumbersome and, if find is part of a larger script, it could cause the next command in line to bork.

Instead, you can do this:

find /etc -iname "*.service" 2> /dev/null

And you get only the results you are looking for.

A Primer on File Descriptors

There are some caveats to having separate file descriptors for stdout and stderr, though. If you want to store the output in a file, doing this:

find /etc -iname "*.service" 1> services.txt

would work fine because 1> means “send standard output, and only standard output (NOT standard error) somewhere“.

But herein lies a problem: what if you *do* want to keep a record within the file of the errors along with the non-erroneous results? The instruction above won’t do that because it ONLY writes the correct results from find, and

find /etc -iname "*.service" 2> services.txt

will ONLY write the errors.

How do we get both? Try the following command:

find /etc -iname "*.service" &> services.txt

… and say hello to & again!

We have been saying all along that stdin (0), stdout (1), and stderr (2) are file descriptors. A file descriptor is a special construct that points to a channel to a file, either for reading, or writing, or both. This comes from the old UNIX philosophy of treating everything as a file. Want to write to a device? Treat it as a file. Want to write to a socket and send data over a network? Treat it as a file. Want to read from and write to a file? Well, obviously, treat it as a file.

So, when managing where the output and errors from a command go, treat the destination as a file. Hence, when you open those destinations for reading or writing, they all get file descriptors.
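
As a side note, you can even open file descriptors of your own beyond 0, 1, and 2; the descriptor number and file name below are arbitrary:

exec 3> notes.txt    # open descriptor 3 as a write channel to notes.txt
echo "hello" >&3     # anything sent to descriptor 3 ends up in notes.txt
exec 3>&-            # close the channel when you are done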

This has interesting effects. You can, for example, pipe contents from one file descriptor to another:

find /etc -iname "*.service" 1> services.txt 2>&1

This pipes stderr to stdout and stdout is piped to a file, services.txt.

And there it is again: the &, signaling to Bash that 1 is the destination file descriptor.

Another thing about the standard file descriptors is that, when you pipe from one to another, the order in which you do this is a bit counterintuitive. Take the command above, for example. It looks like it has been written the wrong way around. You may be reading it like this: “pipe the output to a file and then pipe errors to the standard output.” It would seem the error output comes too late and is sent when 1 is already done.

But that is not how file descriptors work. A file descriptor is not a placeholder for the file, but for the input and/or output channel to the file. In this case, when you do 1> services.txt, you are saying “open a write channel to services.txt and leave it open“. 1 is the name of the channel you are going to use, and it remains open until the end of the line.

If you still think it is the wrong way around, try this:

find /etc -iname "*.service" 2>&1 1>services.txt

And notice how it doesn’t work; notice how errors get piped to the terminal and only the non-erroneous output (that is stdout) gets pushed to services.txt.

That is because Bash sets up the redirections from left to right, before find even starts producing output. Think about it like this: when Bash gets to 2>&1, stdout (1) is still a channel that points to the terminal, so stderr (2) is made a copy of it; every error find produces goes straight to the terminal.

Only then does Bash point stdout at the services.txt file. Stderr keeps its earlier copy of the terminal channel, so the errors stay on screen and only the non-error results go through 1 into the file.

By contrast, in

find /etc -iname "*.service" 1>services.txt 2>&1

1 is pointing at services.txt right from the beginning, so anything that pops into 2 gets piped through 1, which is already pointing to the final resting place in services.txt, and that is why it works.

In any case, as mentioned above, &> file is shorthand for “send both standard output and standard error to file“, that is, > file 2>&1.

This is probably all a bit much, but don’t worry about it. Re-routing file descriptors here and there is commonplace in Bash command lines and scripts. And, you’ll be learning more about file descriptors as we progress through this series. See you next week!

Runc and CVE-2019-5736

This morning a container escape vulnerability in runc was announced. We wanted to provide some guidance to Kubernetes users to ensure everyone is safe and secure.

What Is Runc?

Very briefly, runc is the low-level tool which does the heavy lifting of spawning a Linux container. Other tools like Docker, Containerd, and CRI-O sit on top of runc to deal with things like data formatting and serialization, but runc is at the heart of all of these systems.

Kubernetes in turn sits on top of those tools, and so while no part of Kubernetes itself is vulnerable, most Kubernetes installations are using runc under the hood.
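
If you are curious which low-level runtime your own setup relies on, a generic check (assuming Docker; the exact output format varies by version) is:

docker info | grep -i runtime    # typically shows "Default Runtime: runc"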

What Is The Vulnerability?

While full details are still embargoed to give people time to patch, the rough version is that when running a process as root (UID 0) inside a container, that process can exploit a bug in runc to gain root privileges on the host running the container. This then allows them unlimited access to the server as well as any other containers on that server.

If the process inside the container is either trusted (something you know is not hostile) or is not running as UID 0, then the vulnerability does not apply. It can also be prevented by SELinux, if an appropriate policy has been applied. Red Hat Enterprise Linux and CentOS both include appropriate SELinux permissions with their packages and so are believed to be unaffected if SELinux is enabled.
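
As a rough illustration of that second mitigation (a generic Docker example, not guidance from the Kubernetes announcement; the UID and image are arbitrary), you can force a container’s main process to run as a non-root user:

docker run --rm --user 1000:1000 alpine id    # reports uid=1000 instead of uid=0 (root)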

The most common source of risk is attacker-controlled container images, such as unvetted images from public repositories.

Read more at Kubernetes blog

How to Use SSH to Proxy Through a Linux Jump Host

Secure Shell (SSH) includes a number of tricks up its sleeve. One particular trick you may not know about is the ability to use a jump host. A jump host is used as an intermediate hop between your source machine and your target destination. In other words, you can access X from Y using a gateway.
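
The general shape of it, before we get to any specific setup, looks like this (the host names and user are placeholders, and the -J option needs a reasonably recent OpenSSH):

ssh -J user@jumphost user@target    # hop through jumphost on the way to target

The same hop can be made permanent by adding a ProxyJump entry for the target host in ~/.ssh/config.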

There are many reasons to use a jump server. For example, jump servers are often placed between a secure zone and a DMZ. These jump servers provide for the transparent management of devices within the DMZ, as well as a single point of entry. Regardless of why you might want to use a jump server, do know that it must be a hardened machine (so don’t just depend upon an unhardened Linux machine to serve this purpose). By using a machine that hasn’t been hardened, you’re just as insecure as if you weren’t using the jump.

But how can you set this up? I’m going to show you how to create a simple jump with the following details (your setup will be defined by your network):

Read more at Tech Republic

Assess USB Performance While Exploring Storage Caching

The team here at the Dragon Propulsion Laboratory has kept busy building multiple Linux clusters as of late [1]. Some of the designs rely on spinning disks or SSD drives, whereas others use low-cost USB storage or even SD cards as boot media. In the process, I was hastily reminded of the limits of external storage media: not all flash is created equal, and in some crucial ways external drives, SD cards, and USB keys can be fundamentally different.

Turtles All the Way Down

Mass storage performance lags that of working memory in the Von Neumann architecture [2], with the need to persist data leading to the rise of caches at multiple levels in the memory hierarchy. An access speed gap of three orders of magnitude between levels makes this design decision essentially inevitable where performance is at all a concern. (See Brendan Gregg’s table of computer speed in human time [3].) The operating system itself provides the most visible manifestation of this design in Linux: Any RAM not allocated to a running program is used by the kernel to cache the reads from and buffer the writes to the storage subsystem [4], leading to the often-repeated quip that there is really no such thing as “free memory” in a Linux system.

An easy way to observe the operating system (OS) buffering a write operation is to write the right amount of data to a disk in a system with lots of RAM, as shown in Figure 1: a rather improbable half a gigabyte worth of zeros is written to a generic, low-cost USB key in half a second, but then a 30-second delay follows when the system is forced to sync [5] to disk.
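
A rough way to reproduce that experiment yourself (the mount point is a placeholder, and the timings will vary wildly with your hardware and free RAM):

dd if=/dev/zero of=/mnt/usbkey/zeros.bin bs=1M count=512   # returns almost immediately: the 512MB mostly lands in the page cache
time sync                                                  # flushes the cached writes out to the USB key; this is where the wait happens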

Read more at ADMIN magazine

Building Trust in Open Source: A Look Inside the OpenChain Project

Open source software provides businesses with a number of benefits including cost, flexibility and freedom. This freely distributed software can also be easily altered by any business that is familiar with its source code. 

However, licensing issues do arise, and they can present a major hurdle for an organisation’s legal team. This is why the OpenChain Project was set up: to introduce common standards for how companies declare that their open source efforts comply with licensing requirements.

TechRadar Pro spoke with OpenChain’s General Manager, Shane Coughlan to gain a better understanding of how open source licenses work and to learn how the Linux Foundation is making it easier for businesses to take advantage of open source software. …

The OpenChain Project is all about identifying the key requirements of a quality open source compliance program. The OpenChain Specification is the document that describes processes that companies can apply to open source compliance at inbound, internal and external inflection points. 

Read more at TechRadar

Gain Valuable Kubernetes Skills and Certification with Linux Foundation Training

Quick, what was the most dominant technology skill requested by IT firms in 2018? According to a study from job board Dice, Kubernetes skills dominated among IT firm requests, and this news followed similar findings released last year from jobs board Indeed. The Dice report, based on its available job postings, found that Kubernetes was heavily requested by IT recruiters as well as hiring managers. As SDX Central has reported: “Indeed’s work found that Kubernetes had the fastest year-over-year surge in job searches among IT professionals. It also found that related job postings increased 230 percent between September 2017 and September 2018.”

The demand for Kubernetes skills is so high that companies of all sizes are reporting skills gaps and citing difficulty finding people who have the required Kubernetes and containerization skills. That spells opportunity for those who gain Kubernetes expertise, and the good news is that you have several approachable and inexpensive options for getting trained as well as certified.

Certification Options

Certification is the gold standard in the Kubernetes arena. On that front, last year the Cloud Native Computing Foundation launched its Certified Kubernetes Application Developer exam and a Kubernetes for Developers (LFD259) course ($299). These offerings complement the Certified Kubernetes Administrator program ($300). CNCF, working in partnership with edX, also offers an Introduction to Kubernetes course that is absolutely free and requires a time commitment of only two to three hours a week for four to five weeks. You can register here, and find out more about the Kubernetes Fundamentals (LFS258) and Developer courses here.

The Kubernetes Fundamentals course comes with extensive course materials, and you can get a free downloadable chapter from the materials here. For those new to Kubernetes, the course covers architecture, networking setup and much more. There is also a free webinar here, where Kubernetes co-founder Craig McLuckie provides an introduction to the Kubernetes project and how it began when he was working at Google.

“As Kubernetes has grown, so has the demand for application developers who are knowledgeable about building on top of Kubernetes,” said Dan Kohn, Executive Director of the Cloud Native Computing Foundation. “The CKAD exam allows developers to certify their proficiency in designing and building cloud native applications for Kubernetes, while also allowing companies to confidently hire high-quality teams.”

According to the Cloud Native Computing Foundation: “With the majority of container-related job listings asking for proficiency in Kubernetes as an orchestration platform, the CKAD program will help expand the pool of Kubernetes experts in the market, thereby enabling continued growth across the broad set of organizations using the technology.”

December’s KubeCon + CloudNativeCon conference in Seattle was a sold-out event that has now ushered in a wealth of free Kubernetes-focused content that you can access. In fact, more than 100 lightning talks, keynotes, and technical sessions from the event have already been posted online, with more information here.

You can watch many videos from KubeCon on YouTube. You’ll find videos sharing basics and best practices, explaining how to integrate Kubernetes with various platforms, and discussing the future of Kubernetes. You can also hear talks in person at these upcoming conferences:

KubeCon Barcelona, May 20-23

KubeCon Shanghai, June 24-26

KubeCon San Diego, November 18-21

Kubernetes is spreading its reach rapidly for many reasons, including its extensible architecture and healthy open source community, but some still feel that it is too difficult to use. The resources found here—many of them free—will help you move toward mastery of one of today’s most compelling technology architectures.

Microsoft Joins OpenChain Open-Source Compliance Group

OpenChain, I would argue, is the most important open-source project you’ve never heard of before. This Linux Foundation consortium provides an industry standard for open-source supply chain license compliance. And now, Microsoft has joined the OpenChain Project.

OpenChain’s important because the open-source software supply chain goes from companies that are little more than a single developer in his home office to multi-billion dollar businesses. Within it, there are tens of thousands of programs with a wide variety of open-source software licenses. So, how can companies trust and manage all the code’s legal requirements? The answer is with OpenChain.

As the OpenChain project manager Shane Coughlan explained, “The basic idea was simple: Identify key recommended processes for effective open source management. The goal was equally clear: Reduce bottlenecks and risk when using third-party code to make open-source license compliance simple and consistent across the supply chain. The key was to pull things together in a manner that balanced comprehensiveness, broad applicability, and real-world usability.”

Read more at ZDNet