
Migrating to Linux: Disks, Files, and Filesystems

This is the second article in our series on migrating to Linux. If you missed the first one, you can find it here. As mentioned previously, there are several reasons why you might want to migrate to Linux. You might be using or developing code for Linux in your job, or you might just want to try something new.

In any case, having Linux on your main desktop will help you quickly become familiar with the methods and tools you’ll need. In this article, I’ll provide an introduction to Linux files, filesystems and disks.

Where’s My C:?

If you are coming from a Mac, Linux should feel fairly familiar, as the Mac handles files, filesystems, and disks in much the same way Linux does. On the other hand, if your experience is primarily Windows, accessing disks under Linux may seem a little confusing. Generally, Windows assigns a drive letter (like C:) to each disk. Linux does not do this. Instead, Linux presents a single hierarchy of files and directories for everything in your system.

Let's look at an example. Suppose you use a computer with a main hard drive, a CD-ROM with folders called Books and Videos, and a USB thumb drive with a directory called Transfer. Under Windows, you would see the following:

C: [Hard drive]
├ System
├ System32
├ Program Files
├ Program Files (x86)
└ <additional folders>

D: [CD-ROM]
├ Books
└ Videos

E: [USB thumb drive]
└ Transfer

A typical Linux system would instead have this:

/ (the topmost directory, called the root directory) [Hard drive]
├ bin
├ etc
├ lib
├ sbin
├ usr
├ <additional directories>
└ media
   └ <your user name>
       ├ cdrom [CD-ROM]
       │   ├ Books
       │   └ Videos
       └ Kingme_USB [USB thumb drive]
           └ Transfer

If you are using a graphical environment, the file manager in Linux will usually present the CD-ROM and the USB thumb drive with icons that look like the devices, so you may not need to know the media's specific directory.

Filesystems

Linux places a strong emphasis on filesystems. A filesystem is a set of structures on media (like a hard drive) that keeps track of all the files and directories on that media. Without a filesystem, we could still store information on a hard drive, but all the data would be in a jumbled mess; we wouldn't know which blocks of data belonged to which file. You may have heard names like Ext4, XFS, and Btrfs. These are Linux filesystem types.

Every type of media that holds files and directories has a filesystem on it. Different media types may use filesystem types that are optimized for that media. For example, CD-ROMs use the ISO9660 or UDF filesystem types, and USB thumb drives typically use FAT32 so they can be easily shared with other computer systems.
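If you're curious which filesystem type each disk on your system is using, a couple of standard commands will show you (output varies from system to system):

$ df -T       # list mounted filesystems and their types
$ lsblk -f    # list block devices, with the filesystem type on each partition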

Windows uses filesystems, too. It just doesn’t talk about them as much. For example, when you insert a CD-ROM, Windows will read the ISO9660 filesystem structures, assign a drive letter to it and display the files and directories under the letter (D: for example). So if you’re picky about details, technically Windows assigns a drive letter to a filesystem, not the whole disk.

Using that same example, Linux will also read the ISO9660 filesystem structures, but instead of a drive letter, it will attach the filesystem to a directory (a process called mounting). Linux will then display the files and directories on the CD-ROM under the attached directory (/media/<your user name>/cdrom, for example).
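Mounting is also something you can do by hand. Here's a minimal sketch, assuming the CD-ROM device shows up as /dev/sr0 (device names vary, and a desktop environment normally handles all of this automatically):

$ sudo mkdir -p /mnt/cdrom
$ sudo mount /dev/sr0 /mnt/cdrom   # attach the disc's filesystem at /mnt/cdrom
$ ls /mnt/cdrom                    # the disc's files and directories appear here
$ sudo umount /mnt/cdrom           # detach it again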

So, to answer the question "Where's my C:?": on Linux, there is no C:. It simply works differently.

Files

Windows stores files and directories (also called folders) in its filesystem. Linux, however, lets you put other kinds of things into the filesystem as well. These additional types are native filesystem objects, genuinely different from regular files. In addition to regular files and directories, Linux lets you create and use hard links, symbolic links, named pipes, device nodes, and sockets. We won't get into all the types of filesystem objects here, but a few are useful to know about.

Hard links are used to create one or more aliases for a file. Each alias is a different name for the same contents on disk. If you edit the file under one name, the changes appear under the other names as well. For example, you might have MyResume_2017.doc with a hard link called JaneDoeResume.doc. (Note that you can create a hard link by using the ln command from the command line.) This way, you can find and edit MyResume_2017.doc, then send out JaneDoeResume.doc to your prospects (the name helps them keep track of whom it's from), and it will contain all your updates.

Symbolic links are a little like Windows shortcuts. The filesystem entry contains a path to another file or directory. In many ways, they work like hard links in that they create an alias to another file. However, symbolic links can alias directories as well as files, and they can refer to items in a different filesystem on different media, which hard links cannot. (Note that you can also create symbolic links with the ln command, using the -s option.)
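Both kinds of links are created with the ln command. A quick sketch, reusing the resume example above (the file and directory names are just illustrations):

$ ln MyResume_2017.doc JaneDoeResume.doc   # hard link: one set of contents, two names
$ ln -s MyResume_2017.doc latest.doc       # symbolic link: latest.doc points at the original by path
$ ln -s /media/stan/Kingme_USB usb         # symbolic links can alias directories, too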

Permissions

Another big difference between Windows and Linux involves the permissions on filesystem objects (files, directories, and others). Windows implements a fairly complex set of permissions on files and directories. For example, users and groups can have permissions to read, write, execute, modify, and more. Users and groups can be given permission to access everything in a directory with exceptions, or they can be given no permission to anything in a directory with exceptions.

Most folks using Windows never make use of special permissions, however, so it can be surprising to discover that Linux applies and enforces a default set of permissions on everything. Linux can enforce even more sophisticated permissions using SELinux or AppArmor, but most Linux installations just use the built-in defaults.

In the default scheme, each item in the filesystem has a set of permissions for the owner of the file, for the group of the file, and for everyone else. The permissions allow reading, writing, and executing. They are checked in order: first, whether your user (your login name) is the owner and the owner has permission; if not, whether your user is in the file's group and the group has permission; if not, whether everyone else has permission. There are other permission settings as well, but these three sets of three are the ones most commonly used.

If you are using the command line, and you type ls -l, you may see permissions represented as:

-rwxrw-r-- 1 stan dndgrp 25 Oct  3 10:01 rolldice.sh

The first character shows the type of filesystem object (a dash means a regular file), and the nine letters that follow, rwxrw-r--, show the permissions. In this case, the owner (stan) can read, write, and execute the file (the first three letters, rwx); members of the group dndgrp can read and write the file but not execute it (the middle three letters, rw-); and everyone else can only read the file (the last three letters, r--).

(Note that on Windows, you make a script executable by giving the file a specific extension, .bat for example. On Linux, the file's extension means nothing to the operating system; instead, the file's permissions must be set so it is executable.)
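Permissions are changed with the chmod command. For example, using the script from the listing above:

$ chmod u+x rolldice.sh   # give the owner (u) execute permission
$ chmod a+x rolldice.sh   # or give everyone (a) execute permission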

If you get a permission denied error, chances are you are attempting to run a program or command that requires administrator privileges, or you're trying to access a file whose permissions don't grant access to your user account. If you need to do something that requires administrator privileges, you will have to switch to the user account called root, either by logging in as root or by using a helper program called sudo on the command line, which temporarily runs a command as root. The sudo tool will, of course, ask for a password to make sure you really should have permission.
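Here is what that typically looks like in practice (/etc/shadow is a real file that only root may read):

$ cat /etc/shadow        # fails with permission denied
$ sudo cat /etc/shadow   # sudo asks for your password, then runs the command as root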

Hard Drive Filesystems

Windows predominantly uses a filesystem type called NTFS for hard drives. On Linux, you get to pick which type of filesystem to use for the hard drive. Different filesystem types exhibit different features and performance characteristics. The main native Linux filesystem used today is Ext4, but you can choose from an abundance of filesystem types at installation time, such as Ext3 (the predecessor to Ext4), XFS, Btrfs, UBIFS (for embedded systems), and more. If you're not sure which one to use, Ext4 will work great.
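If you add a disk after installation, you create the filesystem on it yourself. A minimal sketch, assuming a new, empty partition at /dev/sdb1 (a hypothetical device name; mkfs erases whatever is on its target, so double-check it first):

$ sudo mkfs.ext4 /dev/sdb1            # create an Ext4 filesystem on the new partition
$ sudo mkdir /mnt/newdisk
$ sudo mount /dev/sdb1 /mnt/newdisk   # attach it to the directory tree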

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

Running Chromium with Ozone-GBM on a GNU/Linux Desktop System

Ozone is Chromium’s next-gen platform abstraction layer for graphics and input.  When developing either Ozone itself or an application that uses Ozone, it is often beneficial to be able to run the code on the development machine, which is usually a typical GNU/Linux desktop system, since doing so speeds up the development cycle.

By Alexandros Frantzis, Senior Software Engineer at Collabora.

The X11 backend for Ozone works without much trouble on a Linux desktop system. However, getting the DRM/GBM backend to run on such a system, which I recently needed to do as part of my work at Collabora, turned out to be significantly less straightforward. In this guide I will describe all the steps that are required to run Chromium with Ozone-GBM on a typical GNU/Linux desktop system.

Building Chromium

The Chromium developer documentation provides detailed build instructions for Linux. For this guide, we have to ensure that we enable Ozone and that the target OS for the build is “chromeos”:

$ gn gen out/OzoneChromeOS
$ gn args --args='use_ozone=true target_os="chromeos"' out/OzoneChromeOS
$ ninja -C out/OzoneChromeOS chrome

Building a functional minigbm

Ozone-GBM uses the GBM API to create buffers. However, it doesn’t use Mesa’s GBM implementation, but ships its own in the form of the minigbm library. The Chromium source code contains a copy of the library under third_party, but uses it only for building and testing purposes without enabling any of the minigbm hardware drivers.

In order to run Ozone-GBM on real hardware we need to create a build of minigbm that supports our target GPU. For the purposes of this guide, the simplest way to provide a functional minigbm is to build it independently and provide it at runtime to Chromium using LD_LIBRARY_PATH.

First we need to get the minigbm source code with:

$ git clone https://chromium.googlesource.com/chromiumos/platform/minigbm

minigbm depends on libdrm, so we have to ensure that we have the development files for the libdrm library and the vendor-specific extensions. On a Debian/Ubuntu system, we can get everything we need by installing the libdrm-dev package:

$ sudo apt install libdrm-dev

We can now build minigbm with the correct flags to ensure the proper GPU driver is supported:

$ make CPPFLAGS="-DDRV_I915" DRV_I915=1

Note that we need to provide the driver flag both as a preprocessor definition and a Make variable. Other driver flags for common desktop GPUs are DRV_RADEON and DRV_AMDGPU (but see below for amdgpu).
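For example, a build targeting an AMD GPU would presumably follow the same pattern with the corresponding flag (the amdgpu caveats are covered in the full post):

$ make CPPFLAGS="-DDRV_AMDGPU" DRV_AMDGPU=1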

Finally we need to create a link with the proper file name so that chrome can find the library:

$ ln -s libminigbm.so.1.0.0 libminigbm.so
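With the library built and the link in place, the final invocation generally takes the shape below. This is a hedged sketch: the minigbm directory is wherever you built the library, and --ozone-platform=gbm reflects the backend naming at the time; the full post has the exact command and remaining setup:

$ LD_LIBRARY_PATH=/path/to/minigbm ./out/OzoneChromeOS/chrome --ozone-platform=gbm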

Continue reading on Collabora’s blog.

Linux Kernel Developer: Mauro Carvalho Chehab

According to the recent Linux Kernel Development Report, the Linux operating system runs 90 percent of the public cloud workload, holds 62 percent of the embedded market share, and powers 100 percent of the TOP500 supercomputers. It also runs 82 percent of the world's smartphones and nine of the top ten public clouds. However, the sustained growth of this open source ecosystem would not be possible without the steady development of the Linux kernel.

In this series, we are highlighting the ongoing work of some Linux kernel contributors. Here, Mauro Carvalho Chehab, Open Source Director at Samsung Research Brazil, answers a few questions about his work on the kernel.

Read more at The Linux Foundation

Tech Ageism and the Myth of the ‘Digital Native’

A majority of workers over 30 are worried about losing their jobs because of ageism in tech, according to a recent report from Visier, an employee data analytics company. Visier pulled HR data from over 100 enterprise companies and mined it to answer workforce questions; those answers form the basis of the report.

According to Dave Weisbeck, Visier chief strategy officer, it’s not a surprise to anyone that there is ageism in tech. But, he said in an interview, it plays out in a way that is more subtle than we might imagine.

The Findings: The Good

When we think of the term ageism in the IT sector, we generally think of how employers and project managers will systematically or casually discriminate against individuals simply on the basis of their age.

Read more at The New Stack

How to Monitor your Docker Containers with ctop

If Docker is your container service of choice, you know how easy it is to create and deploy containers. Chances are you've already done so and have numerous containers running on your network. But do you know how well those containers are performing? If you're familiar with Linux, you might wish there were a top/htop-style app geared specifically toward containers.

There is.

That's right: one of the best means of monitoring your containers is an open source tool, found on GitHub, called ctop. With this app, you can get a quick overview of your containers: their names, their IDs, and how much CPU, memory, and network Rx/Tx they are using. Ctop even allows you to filter what you're viewing and gives you an expanded view of a selected container. Although it may not offer a massive number of features, it does the job and does it well. The tool is easy to install and even easier to use. I'll demonstrate on an Ubuntu 16.04 platform, but ctop can be installed on nearly any Linux distribution.
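Installation typically amounts to downloading a single binary from the project's GitHub releases page. Alternatively, the project documents running ctop itself as a container, roughly like this (image name as published by the project; verify against the current README):

$ docker run -ti --rm -v /var/run/docker.sock:/var/run/docker.sock quay.io/vektorlab/ctop:latest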

Read more at TechRepublic

How Did Linux Come to Dominate Supercomputing?

After years of pushing toward total domination, Linux finally did it. It is running on all 500 of the TOP500 supercomputers in the world, and who knows how many more after that. That’s even more impressive than Intel’s domination of the list, with 92 percent of the processors in the top 500.

So, how did Linux get here? How did this upstart operating system, created by a college student from Finland 26 years ago, steamroll Unix (a creation of Bell Labs, backed by giants like IBM, Sun Microsystems, and HP), Microsoft's Windows, and the other Unix derivatives?

It was a confluence of things, all of which aligned perfectly for Linux. For starters, the Unixes were fragmented and tied to vendor processors. You had AT&T, through its Bell Labs arm, licensing Unix System V to vendors who then made their own specific flavors: Sun Microsystems made Solaris, IBM made AIX, HP had HP-UX, and SGI had IRIX. None of them was compatible with the others; at best, porting required just a recompile, and that was if you were lucky.

Read more at Network World

The 5 Problem-Solving Skills of Great Software Developers

To be effective, software engineers must hone their problem-solving skills and master a complex craft that requires years of study and practice. Despite what newcomers might think, understanding a programming language, a framework or even algorithms is not the hard part of building software.

For example, languages are easy, especially the C-inspired imperative ones. There are only 32 keywords in the C language, and their meaning is easy to master…

Building software is more about solving problems than writing code or understanding technologies. Becoming good at solving problems requires a lot of practice and experience. A software engineer is a problem solver first, and a coder second. Computer languages, frameworks, and algorithms are tools that you can learn by studying. Solving problems, however, is complicated and hard to learn other than through long practice and applied mentorship.

Read more at Dev.to

Photon Could Be Your New Favorite Container OS

Containers are all the rage, and with good reason. As discussed previously, containers allow you to quickly and easily deploy new services and applications onto your network, without requiring too much in the way of added system resources. Containers are more cost-effective than using dedicated hardware or virtual machines, and they’re easier to update and reuse.

Best of all, containers love Linux (and vice versa). Without much trouble or time, you can get a Linux server up and running with Docker and deploying containers. But which Linux distribution is best suited to deploying your containers? There are a lot of options. You could go with a standard Ubuntu Server platform (which makes installing Docker and deploying containers incredibly easy), or you could opt for a lighter-weight distribution geared specifically toward deploying containers.

One such distribution is Photon. This particular platform was created in 2015 by VMware; it includes the Docker daemon and works with container frameworks, such as Mesos and Kubernetes. Photon is optimized to work with VMware vSphere, but it can also be used on bare metal, Microsoft Azure, Google Compute Engine, Amazon Elastic Compute Cloud, or VirtualBox.

Photon manages to stay slim by installing only what is absolutely necessary to run the Docker daemon. In the end, the distribution comes in at around 300 MB: just enough Linux to make it all work. The key features of Photon are:

  • Kernel tuned for performance.

  • Kernel is hardened according to the Kernel Self-Protection Project (KSPP).

  • All installed packages are built with hardened security flags.

  • Operating system boots with validated trust.

  • Photon management daemon manages firewall, network, packages, and users on remote Photon OS machines.

  • Support for persistent volumes.

  • Project Lightwave integration.

  • Timely security patches and updates.

Photon can be used via ISO, OVA, Amazon Machine Image, Google Compute Engine image, and Azure VHD. I’ll show you how to install Photon on VirtualBox, using an ISO image. The installation takes about five minutes and, in the end, you’ll have a virtual machine, ready to deploy containers.

Creating the virtual machine

Before you deploy that first container, you have to create the virtual machine and install Photon. To do this, open up VirtualBox and click the New button. Walk through the Create Virtual Machine wizard (giving Photon the necessary resources, based on the usage you predict the container server will need). Once you’ve created the virtual machine, you need to first make a change to the settings. Select the newly created virtual machine (in the left pane of the VirtualBox main window) and then click Settings. In the resulting window, click on Network (from the left navigation).

In the Networking window (Figure 1), you need to change the Attached to drop-down to Bridged Adapter. This will ensure your Photon server is reachable from your network. Once you’ve made that change, click OK.

Figure 1: Changing the VirtualBox network settings for Photon.
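If you prefer the command line, the same change can be made with VBoxManage before starting the VM (the VM name Photon and the host interface eth0 are examples; substitute your own):

VBoxManage modifyvm "Photon" --nic1 bridged --bridgeadapter1 eth0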

Select your Photon virtual machine from the left navigation and then click Start. You will be prompted to locate and attach the ISO image. Once you've done that, Photon will boot up and prompt you to hit Enter to begin the installation. The installation is ncurses based (there is no GUI), but it's incredibly simple.

In the next screen (Figure 2), you will be asked whether you want a Minimal, Full, or OSTree Server installation. I opted to go the Full route. Select whichever option you require and hit Enter.

Figure 2: Selecting your installation type.

In the next window, select the disk that will house Photon. Since we’re installing this as a virtual machine, there will be only one disk listed (Figure 3). Tab down to Auto and hit Enter on your keyboard. The installation will then require you to type (and verify) an administrator password. Once you’ve done that, the installation will begin and finish in less than five minutes.

Figure 3: Selecting your hard disk for the Photon installation.

Once the installation completes, reboot the virtual machine and log in with the username root and the password you created during installation. You are ready to start working.

Before you begin using Docker on Photon, you'll want to upgrade the platform. Photon uses the yum package manager, so log in as root and issue the command yum update. If there are any updates available, you'll be asked to okay the process (Figure 4).

Figure 4: Updating Photon.

Usage

As I mentioned, Photon comes with everything you need to deploy containers or even create a Kubernetes cluster. However, out of the box, there are a few things you'll need to do. The first is to start the Docker daemon and enable it to run at boot. To do this, issue the commands:

systemctl start docker

systemctl enable docker

Now we need to create a standard user, so we’re not running the docker command as root. To do this, issue the following commands:

useradd -m USERNAME

passwd USERNAME

Where USERNAME is the name of the user to add.

Next we need to add the new user to the docker group with the command:

usermod -a -G docker USERNAME

Where USERNAME is the name of the user just created.

Log out as the root user and log back in as the newly created user. You can now work with the docker command without having to make use of sudo or switching to the root user. Pull down an image from Docker Hub and start deploying containers.
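For example, a quick first test as that new user might look like this (hello-world is Docker's standard test image):

docker pull hello-world

docker run --rm hello-world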

An outstanding container platform

Photon is, without a doubt, an outstanding platform geared specifically toward containers. Do note that Photon is an open source project, so there is no paid support to be had. If you find yourself having trouble with Photon, hop on over to the Issues tab on the Photon project's GitHub page, where you can read and post about issues. And if you're interested in forking Photon, you'll find the source code on the project's official GitHub page.

Give Photon a try and see if it doesn’t make deploying Docker containers and/or Kubernetes clusters significantly easier.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

TNS Guide: How to Manage Passwords and Keep Your Online Accounts Secure

Massive data breaches over the past several years have shown that you can't trust online service providers to keep your account information secure. So, if you haven't done so already, it's time to carefully consider what you share with such companies and how, starting with your password.

First off, if you continue to use the same password for multiple accounts across different websites, you’re doing online security wrong. Just head over to HaveIBeenPwned.com and marvel at the list of user databases that have been compromised over the past 10 years.

Go through the descriptions of those breaches and one thing will become clear: It typically takes years before data thefts are discovered by the affected services. During that time the stolen information is sold among cybercriminals who exploit it for profit.

Read more at The New Stack

7 Things to Know About the Changing Security Landscape

If you’re a hacker or a security company, chances are you’ve had a very good year. If you’re one of the enterprises that lost millions because of malware, then not so much.

This year saw dozens of massive data breaches, and 2017 isn't over yet. It also saw record investments in security startups, with at least 20 in the $40 million and up range. Older IT giants like Cisco and IBM boosted their revenues from newer security businesses as well. With the size and scope of attacks expected to increase exponentially, security spending probably won't drop anytime soon. Cybersecurity Ventures puts it at a $1 trillion market from 2017 to 2021.

“With an expanding threat landscape, cybersecurity is the No. 1 priority for businesses worldwide,” Cisco CEO Chuck Robbins said on a conference call with investors.

Aside from bigger breaches and more security spending, what should companies expect in the year ahead? 

Read more at SDxCentral