
Hollywood Formalizes Support for Open Source in Filmmaking

On August 10, the Academy of Motion Picture Arts & Sciences, the same organization responsible for the Academy Awards (also known as the Oscars) and not exactly one renowned for its openness, teamed up with the Linux Foundation to form the Academy Software Foundation (ASWF). The purpose of this organization is "to increase the quality and quantity of contributions to the content creation industry's open source software base." That's right; the film industry has put together an organization in support of open source software.

According to a presentation shared during this year’s SIGGRAPH conference, code repositories will be hosted on GitHub, use Jenkins and Nexus Repository Manager for automation and continuous integration, and have much clearer processes for people to submit bug reports and pull requests. Even more important, any project that’s accepted for inclusion in the ASWF will still maintain its autonomy when it comes to development. Core developers still decide what patches to include and even what license to release under, so long as it’s OSI-approved.

The foundation hasn't yet announced any official projects under its management, but it's still early days. Prime candidates to start, though, look to be the libraries that were born in the industry. I would expect projects like OpenEXR, OpenVDB, and OpenColorIO to be first in line.

Read more at OpenSource.com

SharkLinux Distro: Open Source in Action

Every so often, I run into a Linux distribution that reminds me of the power of open source software. SharkLinux is one such distribution. Created by a single developer, it attempts to change things up a bit. Some of those changes will be gladly welcomed by new users, while being scoffed at by the Linux faithful. In the end, however, thanks to open source software, the developer of SharkLinux has created a distribution exactly how he wants it to be. And that, my friends, is one amazing aspect of open source: we get to do it our way.

But what is SharkLinux and what makes it stand out? I could make one statement about SharkLinux and end this now. The developer of SharkLinux reportedly developed the entire distribution using only an Android phone. That, alone, should have you wanting to give SharkLinux a go.

Let's take a look at this little-known distribution and see what it's all about.

What Exactly is SharkLinux?

First off, SharkLinux is based on Ubuntu and makes use of a custom Mate/Xfce desktop. Outside of the package manager, the similarities between SharkLinux and Ubuntu are pretty much non-existent. Instead of aiming for the new or average user, the creator has his eyes set on developers and other users who need to lean heavily on virtualization. The primary feature set for SharkLinux includes:

  • KVM hypervisor

  • Full QEMU Utilities

  • Libvirt and Virtual Machine Manager

  • Vagrant (mutate and libvirt support)

  • LXD/LXC/QLc/LXDock

  • Docker/Kubernetes

  • VMDebootstrap

  • Virt-Install/Convert

  • Launch Local Cloud Images

  • Full System Containers GUI Included

  • Kimchi – WebVirtCloud – Guacamole

  • Vagrant Box Conversion

  • Many Dashboards, Admin Panels

  • LibGuestFS and other disk/filesystem tools

  • Nested Virtualization (hardware depending)

  • Alien (rpm) LinuxBrew (Mac) Nix Package Manager

  • Powershell, Upstream WINE (Win)

  • Cloud Optimized Desktop

  • Dozens of wrappers, automated install scripts, and expansion packs

  • Guake terminal

  • Kernel Options v4.4** -> v4.12*
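Just to give a sense of how ready-to-use that stack is, spinning up a KVM guest with the bundled virt-install tool generally takes a single command. Here is a rough sketch; the ISO path, sizes, and OS variant are placeholders of my own, not anything SharkLinux ships:

virt-install --name testvm --memory 2048 --vcpus 2 --disk size=20 --cdrom ~/isos/ubuntu-server.iso --os-variant ubuntu18.04

Once the installer finishes, the new guest should appear in Virtual Machine Manager alongside anything else libvirt is managing.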

Clearly, SharkLinux isn't built for those who simply need a desktop, browser, and office suite. It includes tools for a specific cross-section of users. Let's dive in a bit deeper.

Post Install

As per usual, I don't want to waste time on the installation of another Linux distribution, simply because that process has become so easy. It's a matter of pointing and clicking, filling out a few items, and waiting 5-10 minutes for the prompt to reboot.

Once you’ve logged into your newly installed instance of SharkLinux, you’ll immediately notice something different. The “Welcome to SharkLinux” window is clearly geared toward users with a certain level of knowledge. Tasks such as Automatic Maintenance, the creation of swap space, sudo policy, and more are all available (Figure 1).

Figure 1: The Welcome to SharkLinux window.

The first thing you should do is click the SharkLinux Expansion button. When prompted, click Yes to install this package. Without this package installed, absolutely no upstream packages are enabled for the system. Until you install the expansion, you’ll be missing out on a lot of available software. So install the SharkLinux Expansion out of the gate.

Next, you'll want to install the SharkExtras. This makes it easy to install other packages, such as Bionic, MiniKube, Portainer, Cockpit, Kimchi, Webmin, Gimp Extension Pack, Guacamole, LXDock, Mainline Kernel, Wine, and much more (Figure 2).

Figure 2: Click on a package and OK its installation.

Sudo Policy

This is where things get a bit dicey for the Linux faithful. I will say this: I get why the developer has included this. Out of the box, SharkLinux does require a sudo password, but with the Sudo Policy editor, you can easily set up the desktop such that sudo doesn’t require a password (Figure 3).

Figure 3: Changing the sudo password policy in SharkLinux.

Click on the Sudo Policy button in the Welcome to SharkLinux window and then click either Password Required or Password Not Required. Use this option with great caution, as you'd reduce the security of the desktop by disabling the need for a sudo password.
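For the curious, passwordless sudo on an Ubuntu base normally boils down to a single drop-in rule. The sketch below shows the general mechanism; the exact file name and rule that SharkLinux's Sudo Policy editor writes are my assumption, not something I have verified:

# /etc/sudoers.d/99-nopasswd -- always edit sudoers files with visudo
your_username ALL=(ALL) NOPASSWD: ALL

Deleting that drop-in (or clicking Password Required again) restores the default behavior.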

Automatic Maintenance

Another interesting feature found in the Welcome to SharkLinux window is Automatic Maintenance. If you turn this feature on (Figure 4), functions like system updates will occur automatically (without user interaction). For those who often forget to regularly update their systems, this might be a good idea. If you're like me and prefer to run updates manually on a daily basis, you'll probably opt to skip this feature.

Figure 4: Enabling Automatic System Maintenance.
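For comparison, on a stock Ubuntu system this kind of hands-off updating is usually provided by the unattended-upgrades package plus two lines of APT configuration. Whether SharkLinux uses this exact mechanism under the hood is an assumption on my part, but the sketch shows what "updates without user interaction" typically means on an Ubuntu base:

sudo apt install unattended-upgrades

# /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";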

After taking care of everything you need in the Welcome to SharkLinux window, close it out and you’ll find yourself on the desktop (Figure 5).

Figure 5: The default SharkLinux desktop.

At this point, you can start using SharkLinux as you would any desktop distribution, the difference being that you'll have quite a few more tools for virtualization and development at your disposal. One tiny word of warning: you will notice that, by default, the desktop wallpaper is set to change randomly. In that mix of wallpapers, the developer has included one that may not be quite suitable for a work environment (it's nothing too drastic, just a woman posing seductively). You can remove that photo from the Appearance Preferences window, should you choose to do so. Beyond that, SharkLinux works as well as any desktop Linux distribution you can find.

One Quirky Distribution

Of all the Linux distributions I have used over the years (and I have used PLENTY), SharkLinux might well be one of the more quirky releases. That doesn’t mean it’s one to avoid. Quite the opposite. I highly recommend everyone interested in seeing what a single developer can do with the Linux platform give SharkLinux a try. I promise you, you’ll be glad you gave it a go. SharkLinux is fun, of that there is no doubt. It’s also a flavor of desktop Linux that shows you what is possible, thanks to open source.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

Solving License Compliance at the Source: Adding SPDX License IDs

Accurately identifying the license for open source software is important for license compliance. However, determining the license can sometimes be difficult due to a lack of information, or to ambiguous information. Even when some licensing information is present, the lack of a consistent way of expressing the license can make automating license detection very difficult, requiring significant amounts of manual human effort. There are some commercial tools that apply machine learning to this problem to reduce false positives and train the license scanners, but a better solution is to fix the problem at the upstream source.

In 2013, the U-Boot project decided to use SPDX license identifiers in each source file instead of the GPL v2.0-or-later header boilerplate that had been used up to that point. The initial commit message gave an eloquent explanation of the reasons behind this transition.
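For reference, an SPDX identifier is nothing more than a single comment line at the top of each source file, which both humans and scanners can match mechanically. A minimal sketch, using the GPL-2.0+ identifier that U-Boot adopted (the exact comment style depends on the language of the file):

/* SPDX-License-Identifier: GPL-2.0+ */

And a quick way to list the C files in a tree that still lack an identifier:

grep -rL "SPDX-License-Identifier" --include="*.c" .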

Read more at The Linux Foundation

A Quick Guide to DNF for yum Users

Dandified yum, better known as DNF, is a software package manager for RPM-based Linux distributions that installs, updates, and removes packages. It was first introduced in Fedora 18 in a testable state (i.e., tech preview), but it’s been Fedora’s default package manager since Fedora 22.

Since it is the next-generation version of the traditional yum package manager, it has more advanced and robust features than you’ll find in yum. Some of the features that distinguish DNF from yum are:

  • Dependency calculation based on modern dependency-solving technology
  • Optimized memory-intensive operations
  • The ability to run in Python 2 and Python 3
  • Complete documentation available for Python APIs
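For yum veterans, the day-to-day commands carry over almost unchanged. A few illustrative examples (the package name is a placeholder):

sudo dnf install tmux
sudo dnf upgrade
sudo dnf remove tmux
dnf search editor
dnf history

In most cases, you can type dnf wherever you used to type yum and get the behavior you expect.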

Read more at OpenSource.com

Storj Labs Forges Open-Source Vendors and Cloud Services Alliance

At the Linux Foundation's Open Source Summit in Vancouver, Storj Labs, a decentralized cloud storage company, announced a partnership that will enable open-source projects to generate revenue when their users store data in the cloud: the Open Source Partner Program.

Why? Ben Golub, Storj's executive chairman and longtime open-source executive, explained there's a "major economic disconnect between the 24-million total open-source developers and the $180 billion cloud market." That's why, for example, Redis Labs recently added the controversial Commons Clause license to its Redis program.

Storj takes an entirely different approach. One that starts with Storj’s fundamental decentralized storage technology, which Golub called “AirBnB for hard drives.”

Read more at ZDNet

The Linux Foundation: Accelerating Open Source Innovation

The Linux Foundation’s job is to create engines of innovation and enable the gears of those engines to spin faster, said Executive Director Jim Zemlin, in opening remarks at Open Source Summit in Vancouver.

Examples of how the organization is driving innovation across industries can be seen in projects such as Let's Encrypt, a free, automated certificate authority working to encrypt the entire web; Automotive Grade Linux; Hyperledger; and the new Academy Software Foundation, which is focused on open collaboration within the motion picture industry.

This is open source beyond Linux and, according to Zemlin, is indicative of one of the best years and most robust periods at The Linux Foundation itself. So far in 2018, the organization has added a new member every single day, with Cloud Native Computing Foundation (CNCF), one of The Linux Foundation’s fastest growing projects, announcing 38 new members this week.

Successful projects depend on members, developers, standards, and infrastructure to develop products that the market will adopt, said Zemlin, and The Linux Foundation facilitates this success in many ways. It works downstream helping industry, government, and academia understand how to consume and contribute to open source. At the same time, it works upstream to foster development and adoption of open source solutions, showing industries how to create value and generate reinvestment.

During his keynote, Zemlin spoke with Sarah Novotny, Open Source Strategy Lead at Google Cloud, about Google’s support of open source development. In the talk, Novotny announced that Google Cloud is transferring ownership and management of the Kubernetes project’s cloud resources to CNCF community contributors and is additionally granting $9 million over three years to CNCF to cover infrastructure costs associated with Kubernetes development and distribution. Novotny, who noted that the project is actively seeking new contributors, said this commitment will provide the opportunity for more people to get involved.

In the words of Zemlin, let’s go solve big problems, one person, one project, one industry at a time.

This article originally appeared at The Linux Foundation

Is the Linux 4.18 Kernel Heading Your Way?

How soon the 4.18 kernel lands on your system or network depends a lot on which Linux distributions you use. It may be heading your way or you may already be using it.
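If you are not sure which kernel your system is running right now, a quick check from any terminal will tell you:

uname -r

If the output starts with 4.18, the new kernel has already reached you.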

If you have ever wondered whether the same kernel is used in all Linux distributions, the answer is that all Linux distributions use the same kernel more or less, but there are several big considerations that make that “more or less” quite significant:

  1. Most distributions add or remove code to make the kernel work best for them. Some of these changes might eventually work their way back to the top of the code heap where they will be merged into the mainstream, but they’ll make the distribution’s kernel unique — at least for a while.

Read more at Network World

11.5 Factor Apps

Each time someone talks about the 12 Factor Application, a weird feeling overcomes me. I like the concept, but it feels awkward, as if someone with zero operational experience wrote it. And the devil is in a really small detail.

That detail is Part III, Config. For me (and a lot of other folks I've talked to about this topic), using environment variables (as in real environment variables) is just one of the worst ideas ever. Environment variables are typically set manually, or from a script that is being executed, and there is little or no trace that lets you quickly see how a specific config was set.

Imagine I launch an app with an env variable X=foo, then your colleague stops that app and launches it with X=bar. The system's integrity has not changed, no config or binaries have been changed, but the application's behaviour could have completely changed.

Sure, I can go into /proc/pid/environ to find the params it was launched with, but I want to know what state the system is supposed to be in, and to have tooling around that which verifies the system is indeed in that state.
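For what it's worth, here is the kind of digging the author is describing: pulling the environment a running process was actually launched with out of /proc (the process name is a placeholder, and this assumes a single matching process):

cat /proc/$(pgrep -f my_app)/environ | tr '\0' '\n'

The entries in environ are NUL-separated, so tr puts each variable on its own line. It works, but it is forensics rather than configuration management, which is exactly the author's point.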

Read more at Kris Buytaert’s blog

Linux for Beginners: Moving Things Around

In previous installments of this series, you learned about directories and how permissions to access directories work. Most of what you learned in those articles can be applied to files, except how to make a file executable.

So let’s deal with that before moving on.

No .exe Needed

In other operating systems, the nature of a file is often determined by its extension. If a file has a .jpg extension, the OS guesses it is an image; if it ends in .wav, it is an audio file; and if it has an .exe tacked onto the end of the file name, it is a program you can execute.

This leads to serious problems, like trojans posing as documents. Fortunately, that is not how things work in Linux. Sure, you may see the occasional executable file ending in .sh, which indicates it is a runnable shell script, but this is mostly for the benefit of humans eyeballing files, the same way that, when you use ls --color, the names of executable files show up in bright green.

The fact is that most applications have no extension at all. What determines whether a file is really a program is the x (for executable) bit. You can make any file executable by running

chmod a+x some_program

regardless of its extension or lack thereof. The x in the command above sets the x bit and the a says you are setting it for all users. You could also set it only for the group of users that own the file (g+x), or for only one user, the owner (u+x).
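You can see the effect with ls -l. Before setting the bit, the permissions column might look like this (the file details are just an example):

-rw-r--r-- 1 user user 8192 Aug 20 10:00 some_program

and after chmod a+x some_program:

-rwxr-xr-x 1 user user 8192 Aug 20 10:00 some_program

The three x characters show that the owner, the group, and everyone else can now execute the file.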

Although we will be covering creating and running scripts from the command line later in this series, know that you can run a program by writing the path to it and then tacking the name of the program onto the end:

path/to/directory/some_program

Or, if you are currently in the same directory, you can use:

./some_program

There are other ways of making your program available from anywhere in the directory tree (hint: look up the $PATH environment variable), but you will be reading about those when we talk about shell scripting.

Copying, Moving, Linking

Obviously, there are more ways of modifying and handling files from the command line than just playing around with their permissions. Most applications will create a new file if you try to open a file that doesn't exist. Both

nano test.txt

and

vim test.txt

(nano and vim being two popular command line text editors) will create an empty test.txt file for you to edit if test.txt didn't exist beforehand.

You can also create an empty file by touching it:

touch test.txt

This creates the file but does not open it in any application.

You can use cp to make a copy of a file in another location or under a new name:

cp test.txt copy_of_test.txt

You can also copy a whole bunch of files:

cp *.png /home/images

The instruction above copies all the PNG files in the current directory into the /home/images directory. That directory has to exist before you try this, or cp will show an error. Also, be warned that, if you copy a file into a directory that contains another file with the same name, cp will silently overwrite the old file with the new one.

You can use

cp -i *.png /home/images

if you want cp to warn you of any dangers (the -i option stands for interactive).

You can also copy whole directories, but you need the -r option for that:

cp -rv directory_a/ directory_b

The -r option stands for recursive, meaning that cp will drill down into directory_a, copying over all the files and subdirectories contained within. I personally like to include the -v option, as it makes cp verbose, meaning that it will show you what it is doing instead of just copying silently and then exiting.

The mv command moves stuff. That is, it changes files from one location to another. In its simplest form, mv looks a lot like cp:

mv test.txt new_test.txt

The command above makes new_test.txt appear and test.txt disappear.

mv *.png /home/images

This moves all the PNG files in the current directory to the /home/images directory. Again, you have to be careful not to overwrite existing files by accident. Use

mv -i *.png /home/images

the same way you would with cp if you want to be on the safe side.

Apart from moving versus copying, another difference between mv and cp is when you move a directory:

mv directory_a/ directory_b

No need for a recursive flag here. This is because what you are really doing is renaming the directory, the same way that, in the first example, you were renaming the file. In fact, even when you "move" a file from one directory to another, as long as both directories are on the same storage device and partition, you are renaming the file.

You can do an experiment to prove it. time is a tool that lets you measure how long a command takes to execute. Look for a hefty file, something that weighs several hundred MBs or even a few GBs (say, something like a long video), and try copying it from one directory to another like this:

$ time cp hefty_file.mkv another_directory/
real    0m3,868s 
user    0m0,016s 
sys     0m0,887s

The first line is what you type into the terminal; the lines below it are what time outputs. The number to focus on is the one on the first line of that output, the real time. It takes nearly 4 seconds to copy the 355 MBs of hefty_file.mkv to another_directory/.

Now let’s try moving it:

$ time mv hefty_file.mkv another_directory/
real    0m0,004s
user    0m0,000s 
sys     0m0,003s

Moving is nearly instantaneous! This is counterintuitive, since it would seem that mv would have to copy the file and then delete the original. That is two things mv has to do versus cp's one. But, somehow, mv is 1000 times faster.

That is because the file system's structure, with its whole tree of directories, only exists for the user's convenience. Each partition's file system keeps an index (on Linux file systems, a table of inodes) that tells the operating system where to find each file's data on the actual physical disk. On the disk, data is not split up into directories or even files; there are tracks, sectors, and clusters instead. When you "move" a file within the same partition, all the operating system does is change the entry for that file in that index, and the entry still points to the same clusters of information on the disk.

Yes! Moving is a lie! At least within the same partition, that is. If you try to move a file to a different partition or a different device, mv still works, but it is noticeably slower than moving things around within the same partition. That is because this time the data actually is copied and then erased.
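If you want more evidence, stat can show you a file's inode number, which is its entry in that index, before and after a move within the same partition. The file, directory, and number below are just an example; yours will differ, but the two numbers will match each other:

$ stat -c %i hefty_file.mkv
3151618
$ mv hefty_file.mkv another_directory/
$ stat -c %i another_directory/hefty_file.mkv
3151618

The inode number does not change: the data never moved, only the name pointing at it did.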

Renaming

There are several distinct command line rename utilities around. None are fixtures like cp or mv and they can work in slightly different ways. What they all have in common is that they are used to change parts of the names of files.

In Debian and Ubuntu, the default rename utility uses regular expressions (patterns describing strings of characters) to mass-rename files in a directory. The instruction:

rename 's/.JPEG$/.jpg/' *

will change the extensions of all files ending in .JPEG to .jpg. The file IMG001.JPEG becomes IMG001.jpg, my_pic.JPEG becomes my_pic.jpg, and so on.

Another version of rename available by default in Manjaro, a derivative of Arch, is much simpler, but arguably less powerful:

rename .JPEG .jpg *

This does the same renaming as you saw above. In this version, .JPEG is the string of characters you want to change, .jpg is what you want to change it to, and * represents all the files in the current directory.

The bottom line is that you are better off using mv if all you want to do is rename one file or directory, because mv is reliably the same in all distributions everywhere.

Learning more

Check out both mv's and cp's man pages to learn more. Run

man cp

or

man mv

to read about all the options these commands come with, which make them more powerful and safer to use.

Opening Doors to Collaboration with Open Source Projects

One of the biggest benefits of open source is the ability to collaborate and partner with others on projects. Another is being able to package and share resources, something Michelle Noorali has done using Kubernetes. In a presentation called “Open Source Opening Doors,” Noorali, a senior software engineer at Microsoft, told an audience at the recent LC3 conference in China about her work on the Azure containers team building open source tools for Kubernetes and containers.

Her team needed a way to reliably scale several containerized applications and found Kubernetes to be a good solution and the open source community to be very welcoming, she said.

"In the process of deploying a lot of microservices to Kubernetes, we found that we wanted some additional tooling to make it easier to share and configure applications to run in our cluster," she explained. "You can deploy and scale your containerized apps by giving Kubernetes some declaration of what you want it to do in the form of a Kubernetes manifest." However, in reality, she added, to deploy one app to a cluster you may have to write several Kubernetes manifests that use many resources and run hundreds of lines long.
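As a point of reference for what a Kubernetes manifest looks like in practice, you can have kubectl generate a minimal one for a single deployment without touching the cluster. A sketch, assuming a reasonably recent kubectl (the deployment name and image are placeholders):

kubectl create deployment my-app --image=nginx --dry-run=client -o yaml

Even that trivial case prints a few dozen lines of YAML, which is why tooling that packages and shares these declarations, the problem Noorali's team was tackling, is so useful.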

Read more at The Linux Foundation