
Baumer, Infineon, Qualcomm Innovation Center, Percepio and Silicon Labs Select Zephyr RTOS for their Next Generation of Products and Solutions

SAN FRANCISCO, January 13, 2022 – The Zephyr Project announces a major milestone today with Baumer joining as a Platinum member and Infineon Technologies, Qualcomm Innovation Center, Inc., Percepio and Silicon Labs joining as Silver members. These new members have selected Zephyr RTOS as one of the key technologies to build their next generation of connected products and solutions.

Zephyr, an open source project at the Linux Foundation that builds a safe, secure and flexible real-time operating system (RTOS) for resource-constrained devices, is easy to deploy, secure, connect and manage. It has a growing set of software libraries that can be used across various applications and industry sectors such as Industrial IoT, wearables, machine learning and more. Zephyr is built with an emphasis on broad chipset support, security, dependability, long-term support releases and a growing open source ecosystem.

“Zephyr fits where Linux can’t,” said Kate Stewart, Vice President of Dependable Embedded Systems at the Linux Foundation. “It will help these new members with development, delivery, and maintenance across a wide variety of products and models. We look forward to working with our new members to improve the technology their products and solutions are based on.”

Zephyr Long-Term Support (LTS) Release

In October 2021, the Zephyr community of almost 500 contributors made available the LTS v2 release, which offers vendors a customizable operating system that supports product longevity, security and interoperability. Product developers aren’t locked into a particular architecture, back-end platform or cloud provider and will have the freedom to choose from an ecosystem of hardware. Additionally, products based on the LTS release will benefit from a maintained code base throughout their development and deployment lifecycle. The LTS will serve as the baseline for the auditable version of Zephyr, which will benefit both the maintained LTS and development branches. Learn more about the LTS v2 here.

Commitment to Zephyr

Baumer, one of the leading international companies for smart sensors, encoders and digital cameras for industrial automation, joins other Platinum members Antmicro, Google, Intel, Meta, Nordic Semiconductor, NXP and Oticon. Roman Kellner, Embedded Software Team Lead at Baumer, will join the Governing Board, which is committed to ensuring balanced collaboration and feedback that meets the needs of the Zephyr community.

“The mission of the Governing Board is to cultivate an innovative relationship among stakeholders to advance the Zephyr Project’s support of new hardware, developer tools, sensors, and drivers, while maximizing the functionality of devices that run applications developed using the Zephyr RTOS,” said Barna Ibrahim, Zephyr Governing Board member and Marketing Committee Chair. “We are ecstatic to welcome Roman to the board and look forward to working more closely with Baumer.”

“Baumer as a sensor manufacturer relies on the capabilities of microcontrollers in a wide performance range for our product portfolio,” said Roman Kellner. “Zephyr was chosen as our next sensor platform for its MCU vendor openness, reliability, high configurability, its added value compared to a pure RTOS scheduler and the future ability to cover non-safe and safe products with the same code base. We are happy to contribute our expertise to establish Zephyr RTOS as a high-performance sensor platform.”

The Zephyr Project also welcomes Silver members:

  • Infineon, a world leader in semiconductor solutions that make life easier, safer and greener;
  • Qualcomm Innovation Center, a subsidiary of Qualcomm Technologies, that focuses on enabling and optimizing open source software that works with Qualcomm Technologies’ solutions;
  • Percepio, a leader in visual trace diagnostics for embedded systems and IoT; and
  • Silicon Labs, a leader in secure, intelligent wireless technology for a more connected world.

These members join AVSystem, BayLibre, Eclipse Foundation, Fiware, Foundries.io, Golioth, Laird Connectivity, Linaro, Memfault, Parasoft, Pat-Eta Electronics, RISC-V, SiFive, Synopsys, teenage engineering, and Wind River.

“The Zephyr Project is driving stability to developers which allows them to focus on product innovation and at Infineon, we are happy to be a part of helping customers drive differential value,” said Danny Watson, Principal Product Marketing Engineer at Infineon. “Infineon aims to be a key contributor to the underlying scalable goals of the Zephyr Project and to shape it into providing more performance and intelligent based Open Source Software for Infineon’s PSoC 6 Microcontrollers.”

“The Qualcomm Innovation Center (QuIC) is proud to become a new member of the Zephyr Project community,” said Anthony Scarpino, Senior Director of Engineering at Qualcomm Canada ULC. “QuIC looks forward to contributing to the Zephyr Project to collaborate in building the best-in-class RTOS for secure, connected, resource-constrained devices. QuIC supports the building of micro-controller-based devices as part of the hardware and software ecosystems in upcoming products and sees participation in Zephyr as a path to world-leading innovative solutions.”

“At Percepio, we’ve long recognized the potential of Zephyr RTOS as the leading independent platform for small IoT devices where Linux isn’t an option, yet capable enough for complex embedded IoT/Edge applications,” said Mike Skrtic, Vice President of Sales and Marketing at Percepio. “The latest Zephyr release brings expanded support for software tracing, which facilitates debugging and allows for improved reliability, security, and performance of embedded systems. We’re pleased to have made significant contributions to the new tracing subsystem, to provide full kernel tracing support, enabling the high-end visual trace diagnostics Tracealyzer is known for.”

“We’ve had our eye on Zephyr for some time and are excited to officially be a member of this RTOS project,” said Benny Chang, Vice President, Platform and Chief of Staff at Silicon Labs. “We appreciate the measures the Zephyr community is taking to build a reliable, well-tested RTOS for the IoT and look forward to connecting Zephyr users with our industry-leading hardware and connectivity solutions.”

To learn more about Zephyr RTOS, visit the Zephyr website and blog.

About the Zephyr Project

The Zephyr Project is an open source, scalable real-time operating system (RTOS) supporting multiple hardware architectures. To learn more, please visit www.zephyrproject.org.

About the Linux Foundation

Founded in 2000, the Linux Foundation is supported by more than 1,000 members and is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. The Linux Foundation’s projects are critical to the world’s infrastructure, including Linux, Kubernetes, Node.js, and more. The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

###


Understanding Bluetooth Technology for Linux

This article was written by Martin Woolley of the Bluetooth SIG.

Linux has been around in various forms for about 30 years, and its kernel is the basis of other operating systems such as Android and Chrome OS. It runs on supercomputers at one end of the computing spectrum and in embedded devices at the other, with laptops, desktop computers, and servers in between these extremes.

And it’s also used in single-board computers — this category includes popular devices like the Raspberry Pi.

Figure 1 – Raspberry Pi 4 running Linux

Therefore it’s fair to say that Linux has been widely adopted.

While microcontrollers and lean, mean software frameworks necessarily dominate small electronic products that are generally single-purpose devices with modest processing requirements, Linux meets the needs of another important subset. Some products have multiple features that need to be available concurrently, and some require significant processor power and RAM measured in gigabytes rather than the kilobytes more typically found in microcontrollers. IP security cameras, for example, are typically based on Linux. They can stream live video, respond to motion detection events, identify human faces in video streams in real time, record video to an SD card, transfer files over FTP, and host a web server for management and configuration purposes. That mix of concurrently available functionality requires both sufficiently powerful hardware and an operating system that supports multiple processes and threads, provides a capable file system, and has a wide selection of applications readily available for it. Linux is a perfect fit. And it’s open source and free.

Bluetooth Technology and Linux

Bluetooth® technology can be used on Linux. The controller part of the Bluetooth stack is typically a system on a chip that is either an integral part of the mainboard or implemented in a peripheral like a USB dongle. The host part of the Bluetooth stack runs as a system service, and the standard Linux Bluetooth host implementation is called BlueZ.

BlueZ supports both the Bluetooth LE Peripheral and Central roles using GAP and GATT, as well as Bluetooth mesh, provided the underlying controller supports the required Bluetooth features. And its multi-process architecture means that multiple Bluetooth applications can run simultaneously on a single device, which offers some exciting possibilities.

But for a developer, working with Bluetooth technology on Linux for the first time can be challenging. BlueZ defines a straightforward, logical API, but the way applications must use it is unlike the way developers work with Bluetooth APIs on most other platforms. This is a consequence of the system’s architecture, which, whilst not unique, is highly visible to the developer and usually needs to be well understood before those logical BlueZ APIs can be used.

The Architecture of a Linux System using BlueZ

BlueZ APIs are not called directly by applications. Instead, Linux applications that run as independent processes make inter-process communication (IPC) calls to BlueZ APIs via an IPC broker named D-Bus. D-Bus is a system service and a type of message-oriented middleware which provides IPC support for many Linux applications and services, not just BlueZ.

BlueZ runs as a system daemon, either bluetoothd to provide applications with support for GAP and GATT or bluetooth-meshd when the physical device is to be used to run applications that act as Bluetooth mesh nodes.

Figure 2 – Architecture

Using D-Bus, applications can send messages which cause methods implemented in remote services or applications to be called and the results returned in another message. Applications and system services can also communicate events that have happened in the system to other applications by emitting special messages known as signals.

Figure 3 – D-Bus messages and signals

Applications work with BlueZ by sending and receiving D-Bus messages and signals, so developers generally need some knowledge (or perhaps a lot of knowledge) of D-Bus programming.
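
To make this concrete, here is a minimal sketch in Python using the pydbus library (one of several third-party D-Bus bindings; the choice of binding here is an assumption, not a BlueZ requirement). It calls the standard ObjectManager interface that BlueZ implements at its root object path and prints any Bluetooth adapters found:

from pydbus import SystemBus

# BlueZ owns the well-known name "org.bluez" on the system bus and
# implements org.freedesktop.DBus.ObjectManager at the root path "/".
bus = SystemBus()
manager = bus.get("org.bluez", "/")

# GetManagedObjects() returns every object BlueZ currently exposes
# (adapters, devices, GATT services), keyed by D-Bus object path.
for path, interfaces in manager.GetManagedObjects().items():
    if "org.bluez.Adapter1" in interfaces:
        props = interfaces["org.bluez.Adapter1"]
        print(path, props.get("Address"), props.get("Name"))

Behind the scenes, each of those calls is an IPC message routed through D-Bus to the bluetoothd daemon; the binding simply hides the message traffic.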

You may have noticed that we are not making the most definite statements here. Why did we say that the developer usually needs to have a solid understanding of the architecture rather than always? Why do they generally need some knowledge of D-Bus programming and sometimes a lot of knowledge? The answer lies in the very nature of Linux and of the Linux ecosystem.

Developers of Android or iOS applications typically use one or two programming languages favored by the operating system (o/s) owner, in this example, either Google or Apple. The APIs are designed and documented by the o/s owner, and there’s a wealth of supporting information to help developers achieve results. But the world of Linux is not like that. It’s very modular and open, which means there’s an enormous choice in programming languages that can be used. There may be a choice of different APIs for the exact same purpose provided by different supporting libraries from different originators for any given language.

The degree to which these APIs abstract the underlying architecture (hiding details so that an application developer feels they are working directly with BlueZ rather than making remote method calls over D-Bus) varies from language to language and library to library. Still, it’s not uncommon for the developer to have to deal directly with D-Bus from their code and to need a thorough understanding of D-Bus IPC.

Some BlueZ and D-Bus APIs are well documented; others are not, which adds to the learning curve developers need to ascend. And, in some cases, there’s no documentation at all, leaving the developer to figure things out by searching the web, scrutinizing library source code, and so on. This is fine if you like that kind of thing and OK if you have the luxury of all the time in the world to finish your project. But for most people, life’s not like that.

The Bluetooth Technology for Linux Developers Study Guide

To help Linux developers quickly ascend the BlueZ learning curve, we’ve created an educational resource known as a study guide to add to our growing collection.

It’s modular and includes hands-on exercises so you can test your growing understanding of the theory by writing code and testing the results.

Figure 4 – Hands-on coding exercises included
Figure 5 – Testing

If you’re completely new to Bluetooth® Low Energy (LE), there’s a primer module that will explain the key concepts to get you started. Subsequent modules explain how Bluetooth technology works on Linux, D-Bus programming concepts and techniques, how to develop LE Central devices, and how to develop LE Peripheral devices, in both cases using BlueZ and Python. The appendix provides step-by-step instructions for configuring your Linux kernel and for building and installing BlueZ from source.
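
To give a flavor of what the LE Central material involves, here is a hedged sketch of starting device discovery through BlueZ’s org.bluez.Adapter1 interface, again using the pydbus binding; the adapter object path /org/bluez/hci0 is an assumption and may differ on your system:

from gi.repository import GLib
from pydbus import SystemBus

bus = SystemBus()
# The Adapter1 interface lives on the adapter object; hci0 is assumed here.
adapter = bus.get("org.bluez", "/org/bluez/hci0")

# StartDiscovery() asks the controller to scan for advertising LE devices.
adapter.StartDiscovery()

# Discovered devices appear as org.bluez.Device1 objects. Run a GLib main
# loop so D-Bus signals can be delivered, and stop scanning after 10 seconds.
loop = GLib.MainLoop()
GLib.timeout_add_seconds(10, loop.quit)
loop.run()

adapter.StopDiscovery()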

After completing the work in this study guide, you should:

  • Be able to explain basic Bluetooth LE concepts and terminology such as GAP Central and GATT client
  • Be able to explain what BlueZ is and how applications use BlueZ in terms of architecture, services, and communication
  • Understand the fundamentals of developing applications that use D-Bus inter-process communication
  • Be able to implement key functionality, typically required by GAP Central/GATT client Bluetooth devices

Download the Bluetooth for Linux Developers Study Guide today.

Git hooks: How to automate actions in your Git repo

Protect your Git repository from mistakes, automate manual processes, gather data about Git activity, and much more with Git hooks.

Read More at Enable Sysadmin

Please Join Us In The January 2022 SPDX Community SBOM DocFest

SPDX was designed so that tools can produce and consume SBOM documents. A decade of experience has shown us that tools may interpret fields differently: a file may be a syntactically valid SPDX SBOM, but different tools may fill in different values.
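
For a sense of what is being compared, here is an illustrative fragment of an SPDX 2.2 document header in tag-value format (all values are hypothetical):

SPDXVersion: SPDX-2.2
DataLicense: CC0-1.0
SPDXID: SPDXRef-DOCUMENT
DocumentName: example-package-1.0
DocumentNamespace: https://example.com/spdxdocs/example-package-1.0
Creator: Tool: example-sbom-tool-0.1
Created: 2022-01-13T00:00:00Z

Two tools can both emit syntactically valid documents like this for the same artifact and still disagree about how fields such as Creator or the license information should be populated; that gap is exactly what a DocFest is designed to surface.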

By coming together as a community to examine the output of multiple tools and to compare/contrast the results, we can refine the guidance to tool vendors and improve the robustness of the ecosystem sharing SPDX documents. Historically, these events were called Bake-offs, but we’ve evolved them into “DocFests.”

After a successful SPDX 2.2 DocFest in September of 2021, the SPDX community has decided to host another DocFest on January 27th from 7-11 AM PST. The purpose of this event is to bring together producers and consumers of SPDX documents and to discuss differences in tool output and interpretation for the same software artifacts.

Specifically, the goals of this DocFest are to:

  • Come to agreement on how the fields should be populated for a given artifact
  • Identify instances where different use cases might lead to different choices for fields and structures of documents
  • Assess how well the NTIA SBOM minimum elements are covered
  • Create a set of reference SPDX SBOMs as part of the corpus for further tooling evaluation.

This event will require “sweat equity” – participants who can produce SPDX documents are expected to have generated at least one SPDX document from the target set (either source, built from source, or an image/container equivalent). Participants who consume SPDX documents are expected to run at least two SPDX documents through their tooling and share any analysis results.

Those who have signed up and have submitted files by January 21, 2022, will receive a meeting invite to the DocFest.

To indicate interest to participate, please fill in the following form no later than January 16, 2022: https://forms.gle/Mq7ReinTY6gDL4cs9


Record your terminal with script and scriptreplay

Creating documentation? Make an instant, editable video of your terminal to demo a process with these Linux commands.

Read More at Enable Sysadmin

How Podman can extract a container’s external IP address

You can get external IP addresses for both rootfull and rootless containers with Podman.

Read More at Enable Sysadmin

Classic SysAdmin: How to Check Disk Space on Linux from the Command Line

This is a classic article written by Jack Wallen from the Linux.com archives. For more great SysAdmin tips and techniques check out our free intro to Linux course.

Quick question: How much space do you have left on your drives? A little or a lot? Follow up question: Do you know how to find out? If you happen to use a GUI desktop (e.g., GNOME, KDE, Mate, Pantheon, etc.), the task is probably pretty simple. But what if you’re looking at a headless server, with no GUI? Do you need to install tools for the task? The answer is a resounding no. All the necessary bits are already in place to help you find out exactly how much space remains on your drives. In fact, you have two very easy-to-use options at the ready.

In this article, I’ll demonstrate these tools. I’ll be using Elementary OS, which also includes a GUI option, but we’re going to limit ourselves to the command line. The good news is these command-line tools are readily available for every Linux distribution. On my testing system, there are a number of attached drives (both internal and external). The commands used are agnostic to where a drive is plugged in; they only care that the drive is mounted and visible to the operating system.

With that said, let’s take a look at the tools.

df

The df command is the tool I first used to discover drive space on Linux, way back in the 1990s. It’s very simple in both usage and reporting. To this day, df is my go-to command for this task. This command has a few switches but, for basic reporting, you really only need one. That command is df -H. The -H switch gives human-readable output using powers of 1000 (use -h if you prefer powers of 1024). The output of df -H will report how much space is used, how much is available, the percentage used, and the mount point of every disk attached to your system (Figure 1).

Figure 1: The output of df -H on my Elementary OS system.

What if your list of drives is exceedingly long and you just want to view the space used on a single drive? With df, that is possible. Let’s take a look at how much space has been used up on our primary drive, located at /dev/sda1. To do that, issue the command:

df -H /dev/sda1

The output will be limited to that one drive (Figure 2).

Figure 2: How much space is on one particular drive?

You can also limit the reported fields shown in the df output. Available fields are:

source — the file system source

size — total number of blocks

used — space used on a drive

avail — space available on a drive

pcent — percentage of the space used

target — mount point of a drive

Let’s display the output of all our drives, showing only the size, used, and avail (or availability) fields. The command for this would be:

df -H --output=size,used,avail

The output of this command is quite easy to read (Figure 3).

Figure 3: Specifying what output to display for our drives.

The only caveat here is that we don’t know the source of the output, so we’d want to include source like so:

df -H --output=source,size,used,avail

Now the output makes more sense (Figure 4).

Figure 4: We now know the source of our disk usage.

du

Our next command is du. As you might expect, that stands for disk usage. The du command is quite different from the df command, in that it reports on directories and not drives. Because of this, you’ll want to know the names of the directories to be checked. Let’s say I have a directory containing virtual machine files on my machine. That directory is /media/jack/HALEY/VIRTUALBOX. If I want to find out how much space is used by that particular directory, I’d issue the command:

du -h /media/jack/HALEY/VIRTUALBOX

The output of the above command will display the size of every file in the directory (Figure 5).

Figure 5: The output of the du command on a specific directory.

So far, this command isn’t all that helpful. What if we want to know the total usage of a particular directory? Fortunately, du can handle that task. On the same directory, the command would be:

du -sh /media/jack/HALEY/VIRTUALBOX/

Now we know how much total space the files are using up in that directory (Figure 6).

Figure 6: My virtual machine files are using 559GB of space.

You can also use this command to see how much space is being used on all child directories of a parent, like so:

du -h /media/jack/HALEY

The output of this command (Figure 7) is a good way to find out what subdirectories are hogging up space on a drive.

Figure 7: How much space are my subdirectories using?
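
If that recursive listing is too noisy, GNU du can also cap how deep it reports. For example, to see only the totals for each immediate subdirectory:

du -h --max-depth=1 /media/jack/HALEY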

The du command is also a great tool to use in order to see a list of directories that are using the most disk space on your system. The way to do this is by piping the output of du to two other commands: sort and head. The command to find out the top 10 directories eating space on a drive would look something like this:

du -a /media/jack | sort -n -r | head -n 10

The output would list out those directories, from largest to least offender (Figure 8).

Figure 8: Our top ten directories using up space on a drive.

Not as hard as you thought

Finding out how much space is being used on your Linux-attached drives is quite simple. As long as your drives are mounted to the Linux system, both df and du will do an outstanding job of reporting the necessary information. With df you can quickly see an overview of how much space is used on a disk and with du you can discover how much space is being used by specific directories. These two tools in combination should be considered must-know for every Linux administrator.

And, in case you missed it, I recently showed how to determine your memory usage on Linux. Together, these tips will go a long way toward helping you successfully manage your Linux servers.


Classic SysAdmin: Understanding Linux File Permissions

This is a classic article written by Jack Wallen from the Linux.com archives. For more great SysAdmin tips and techniques check out our free intro to Linux course.

Although there are already a lot of good security features built into Linux-based systems, one very important potential vulnerability can exist when local access is granted: file permission-based issues resulting from a user not assigning the correct permissions to files and directories. So, based upon the need for proper permissions, I will go over the ways to assign permissions and show you some examples where modification may be necessary.

Permission Groups

Each file and directory has three user-based permission groups:

  • owner – The Owner permissions apply only to the owner of the file or directory, they will not impact the actions of other users.
  • group – The Group permissions apply only to the group that has been assigned to the file or directory, they will not affect the actions of other users.
  • all users – The All Users permissions apply to all other users on the system, this is the permission group that you want to watch the most.

Permission Types

Each file or directory has three basic permission types:

  • read – The Read permission refers to a user’s capability to read the contents of the file.
  • write – The Write permissions refer to a user’s capability to write or modify a file or directory.
  • execute – The Execute permission affects a user’s capability to execute a file or view the contents of a directory.

Viewing the Permissions

You can view the permissions by checking the file or directory permissions in your favorite GUI file manager (which I will not cover here) or by reviewing the output of the ls -l command in the terminal while working in the directory that contains the file or folder.

The permission in the command line is displayed as: _rwxrwxrwx 1 owner:group

User rights/Permissions:

  • The first character, which I marked with an underscore, is the special permission flag that can vary.
  • The following set of three characters (rwx) is for the owner permissions.
  • The second set of three characters (rwx) is for the Group permissions.
  • The third set of three characters (rwx) is for the All Users permissions.
  • Following that grouping, the integer/number displays the number of hard links to the file.
  • The last piece is the Owner and Group assignment, formatted as Owner:Group.
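
For example, a (hypothetical) ls -l entry might look like this:

-rwxr-xr-- 1 user1 family 4096 Jan 13 12:00 file1

Reading it left to right: there is no special permission flag, the owner (user1) can read, write, and execute, members of the family group can read and execute, all other users can only read, and the 1 is the hard-link count.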

Modifying the Permissions

When in the command line, the permissions are edited by using the command chmod. You can assign the permissions explicitly or by using a binary reference as described below.

Explicitly Defining Permissions

To explicitly define permissions you will need to reference the Permission Group and Permission Types.

The Permission Groups used are:

  • u – Owner
  • g – Group
  • o – Others
  • a – All users

The potential Assignment Operators are + (plus) and – (minus); these are used to tell the system whether to add or remove the specific permissions.

The Permission Types that are used are:

  • r – Read
  • w – Write
  • x – Execute

So for example, let’s say I have a file named file1 that currently has the permissions set to _rw_rw_rw, which means that the owner, group, and all users have read and write permission. Now we want to remove the read and write permissions from all of the permission groups at once. (Note that in chmod, a covers owner, group, and others together; use o if you only want to target other users.)

To make this modification you would invoke the command: chmod a-rw file1
To add the permissions above you would invoke the command: chmod a+rw file1

As you can see, if you want to grant those permissions you would change the minus character to a plus to add those permissions.
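
You can also combine several changes in a single invocation. For instance, to add execute for the owner while removing write from both the group and all other users on the same (hypothetical) file:

chmod u+x,go-w file1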

Using Binary References to Set Permissions

Now that you understand the permissions groups and types this one should feel natural. To set the permission using binary references you must first understand that the input is done by entering three integers/numbers.

A sample permission string would be chmod 640 file1, which means that the owner has read and write permissions, the group has read permissions, and all other users have no rights to the file.

The first number represents the Owner permission; the second represents the Group permissions; and the last number represents the permissions for all other users. The numbers are a binary representation of the rwx string.

  • r = 4
  • w = 2
  • x = 1

You add the numbers to get the integer/number representing the permissions you wish to set. You will need to include the binary permissions for each of the three permission groups.

So, to set the permissions on file1 to _rwxr_____ (owner rwx, group read only, others none), you would enter chmod 740 file1.
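
To see how the arithmetic works, take 754 as another example:

owner: 4 + 2 + 1 = 7 (rwx)
group: 4 + 1 = 5 (r_x)
all others: 4 (r__)

So chmod 754 file1 would give the owner full rights, the group read and execute rights, and everyone else read-only rights.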

Owners and Groups

I have made several references to Owners and Groups above, but have not yet told you how to assign or change the Owner and Group assigned to a file or directory.

You use the chown command to change owner and group assignments; the syntax is simple:

chown owner:group filename

So, to change the owner of file1 to user1 and the group to family, you would enter chown user1:family file1.
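
To change the owner and group on a directory and everything beneath it, add the -R (recursive) switch. For example, on a hypothetical shared directory:

chown -R user1:family /home/user1/shared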

Advanced Permissions

The special permissions flag can be marked with any of the following:

  • _ – no special permissions
  • d – directory
  • l – the file or directory is a symbolic link
  • s – indicates the setuid/setgid permissions. This is not displayed in the special permission part of the permissions display, but is represented as an s in the executable portion of the owner or group permissions.
  • t – indicates the sticky bit permissions. This is not displayed in the special permission part of the permissions display, but is represented as a t in the executable portion of the all users permissions.

Setuid/Setgid Special Permissions

The setuid/setgid permissions are used to tell the system to run an executable with the permissions of the file’s owner (setuid) or group (setgid), rather than those of the user who launched it.

Be careful using setuid/setgid bits in permissions. If you incorrectly assign permissions to a file owned by root with the setuid/setgid bit set, then you can open your system to intrusion.

You can only assign the setuid/setgid bit by explicitly defining permissions. The character for the setuid/setgid bit is s.

So, to set the setgid bit on file2.sh, you would issue the command chmod g+s file2.sh; to set the setuid bit instead, use chmod u+s file2.sh.

Sticky Bit Special Permissions

The sticky bit can be very useful in shared environments because, when it has been assigned to the permissions on a directory, only a file’s owner (or root) can rename or delete that file within the directory.

You can only assign the sticky bit by explicitly defining permissions. The character for the sticky bit is t.

To set the sticky bit on a directory named dir1 you would issue the command chmod +t dir1.
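
The classic example of the sticky bit in action is the /tmp directory, which is world-writable but sticky: ls -ld /tmp shows drwxrwxrwt, and using the numeric notation described above, the same mode is written as 1777.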

When Permissions Are Important

If you come from a Mac- or Windows-based computer, you may never have thought much about permissions, because those environments don’t focus so aggressively on user-based rights on files unless you are in a corporate environment. But now you are running a Linux-based system, where permission-based security is simplified and can be easily used to restrict access as you please.

So I will show you some documents and folders that you want to focus on and show you how the optimal permissions should be set.

  • home directories – The users’ home directories are important because you do not want other users to be able to view and modify the files in another user’s documents or desktop. To remedy this, you will want the directory to have the drwx______ (700) permissions. So, let’s say we want to enforce the correct permissions on the user user1’s home directory; that can be done by issuing the command chmod 700 /home/user1.
  • bootloader configuration files – If you decide to implement a password to boot specific operating systems, then you will want to remove read and write permissions from the configuration file for all users but root. To do so, you can change the permissions of the file to 700.
  • system and daemon configuration files – It is very important to restrict rights to system and daemon configuration files to keep users from editing their contents. It may not be advisable to restrict read permissions, but restricting write permissions is a must. In these cases it may be best to modify the rights to 644.
  • firewall scripts – It may not always be necessary to block all users from reading the firewall file, but it is advisable to restrict them from writing to it. In this case, the firewall script is run by the root user automatically on boot, so all other users need no rights; you can assign the 700 permissions.

Other examples can be given, but this article is already very lengthy, so if you want to share other examples of needed restrictions please do so in the comments.


Classic SysAdmin: How to Move Files Using Linux Commands or File Managers

This is a classic article written by Jack Wallen from the Linux.com archives. For more great SysAdmin tips and techniques check out our free intro to Linux course.

There are certain tasks that are done so often, users take for granted just how simple they are. But then, you migrate to a new platform and those same simple tasks begin to require a small portion of your brain’s power to complete. One such task is moving files from one location to another. Sure, it’s most often considered one of the more rudimentary actions to be done on a computer. When you move to the Linux platform, however, you may find yourself asking “Now, how do I move files?”

If you’re familiar with Linux, you know there are always many routes to the same success. Moving files is no exception. You can opt for the power of the command line or the simplicity of the GUI – either way, you will get those files moved.

Let’s examine just how you can move those files about. First, we’ll examine the command line.

Command line moving

One of the issues so many users new to Linux face is the idea of having to use the command line. It can be somewhat daunting at first. Although modern Linux interfaces can help to ensure you rarely have to use this “old school” tool, there is a great deal of power you would be missing if you ignored it altogether. The command for moving files is a perfect illustration of this.

The command to move files is mv. It’s very simple and one of the first commands you will learn on the platform. Instead of just listing out the syntax and the usual switches for the command – and then allowing you to do the rest – let’s walk through how you can make use of this tool.

The mv command does one thing – it moves a file from one location to another. This can be somewhat misleading because mv is also used to rename files. How? Simple. Here’s an example. Say you have the file testfile in /home/jack/ and you want to rename it to testfile2 (while keeping it in the same location). To do this, you would use the mv command like so:

mv /home/jack/testfile /home/jack/testfile2

or, if you’re already within /home/jack:

mv testfile testfile2

The above commands would move /home/jack/testfile to /home/jack/testfile2 – effectively renaming the file. But what if you simply wanted to move the file? Say you want to keep your home directory (in this case /home/jack) free from stray files. You could move that testfile into /home/jack/Documents with the command:

mv /home/jack/testfile /home/jack/Documents/

With the above command, you have relocated the file into a new location, while retaining the original file name.
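
You can also combine the move and the rename in a single command:

mv /home/jack/testfile /home/jack/Documents/testfile2

That relocates testfile into Documents and renames it testfile2 in one step.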

What if you have a number of files you want to move? Luckily, you don’t have to issue the mv command for every file. You can use wildcards to help you out. Here’s an example:

You have a number of .mp3 files in your ~/Downloads directory (~/ is an easy way to represent your home directory; in our earlier example, that would be /home/jack/) and you want them in ~/Music. You could quickly move them with a single command, like so:

mv ~/Downloads/*.mp3 ~/Music/

That command would move every file ending in .mp3 from the Downloads directory into the Music directory.

Should you want to move a file into the parent directory of the current working directory, there’s an easy way to do that. Say you have the file testfile located in ~/Downloads and you want it in your home directory. If you are currently in the ~/Downloads directory, you can move it up one folder (to ~/) like so:

mv testfile ../ 

The “../” means the directory one level up. If you’re buried deeper, say in ~/Downloads/today/, you can still easily move that file up two levels with:

mv testfile ../../

Just remember, each “../” represents one level up.

As you can see, moving files from the command line isn’t difficult at all.

GUI

There are a lot of GUIs available for the Linux platform. On top of that, there are a lot of file managers you can use. The most popular file managers are Nautilus (GNOME) and Dolphin (KDE). Both are very powerful and flexible. I want to illustrate how files are moved using the Nautilus file manager.

Nautilus has probably the most efficient means of moving files about. Here’s how it’s done:

  • Open up the Nautilus file manager.
  • Locate the file you want to move and right-click said file.
  • From the pop-up menu (Figure 1) select the “Move To” option.
  • When the Select Destination window opens, navigate to the new location for the file.
  • Once you’ve located the destination folder, click Select.

This context menu also allows you to copy the file to a new location, move the file to the Trash, and more.

If you’re more of a drag-and-drop kind of person, fear not – Nautilus is ready to serve. Let’s say you have a file in your home directory and you want to drag it to Documents. By default, Nautilus will have a few bookmarks in the left pane of the window. You can drag the file onto the Documents bookmark without having to open a second Nautilus window. Simply click, hold, and drag the file from the main viewing pane to the Documents bookmark.

If, however, the destination for that file is not listed in your bookmarks (or doesn’t appear in the current main viewing pane), you’ll need to open up a second Nautilus window. Side by side, you can then drag the file from the source folder in the original window to the destination folder in the second window.

If you need to move multiple files, you’re still in luck. Similar to nearly every modern user interface, you can do a multi-select of files by holding down the Ctrl button as you click each file. After you have selected each file (Figure 2), you can either right-click one of the selected files and then choose the Move To option, or just drag and drop them into a new location.

The selected files (in this case, folders) will each be highlighted.

Moving files on the Linux desktop is incredibly easy. Either with the command line or your desktop of choice, you have numerous routes to success – all of which are user-friendly and quick to master.


Top 10 Ansible tutorials of 2021

Whether you’re new to Ansible or looking to level up your automation skills, you’ll find something of value in 2021’s top Ansible articles.

Read More at Enable Sysadmin