
Classic SysAdmin: How to Search for Files from the Linux Command Line

This is a classic article written by Jack Wallen from the Linux.com archives. For more great SysAdmin tips and techniques check out our free intro to Linux course.

It goes without saying that every good Linux desktop environment offers the ability to search your file system for files and folders. If your default desktop doesn’t — because this is Linux — you can always install an app to make searching your directory hierarchy a breeze.

But what about the command line? If you happen to frequently work in the command line or you administer GUI-less Linux servers, where do you turn when you need to locate a file? Fortunately, Linux has exactly what you need to locate the files in question, built right into the system.

The command in question is find. What makes this command even more enticing is that, once you know it, you can start working it into your Bash scripts. That’s not only convenience, that’s power.

Let’s get up to speed with the find command so you can take control of locating files on your Linux servers and desktops, without the need of a GUI.

How to use the find command

When I first glimpsed Linux, back in 1997, I didn’t quite understand how the find command worked; therefore, it never seemed to function as I expected. It seemed simple; issue the command find FILENAME (where FILENAME is the name of the file) and the command was supposed to locate the file and report back. Little did I know there was more to the command than that. Much more.

If you issue the command man find, you’ll see the syntax of the find command is:

find [-H] [-L] [-P] [-D debugopts] [-Olevel] [starting-point…] [expression]

Naturally, if you’re unfamiliar with how man works, you might be confused about or overwhelmed by that syntax. For ease of understanding, let’s simplify that. The most basic syntax of a basic find command would look like this:

find /path option filename

Now we’ll see it at work.

Find by name

Let’s break down that basic command to make it as clear as possible. At its simplest, the structure of the find command should include a path for the file, an option, and the filename itself. You may be thinking, “If I know the path to the file, I’d already know where to find it!” Well, the path for the file could be the root of your drive; so / would be a legitimate path. Entering that as your path would take find longer to process — because it has to start from scratch — but if you have no idea where the file is, you can start from there. In the name of efficiency, it is always best to have at least an idea where to start searching.

The next bit of the command is the option. As with most Linux commands, you have a number of available options. However, we are starting from the beginning, so let’s make it easy. Because we are attempting to find a file by name, we’ll use one of two options:

-name – case sensitive

-iname – case insensitive

Remember, Linux is very particular about case, so if you’re looking for a file named Linux.odt, the following command will return no results.

find / -name linux.odt

If, however, you were to alter the command by using the -iname option, the find command would locate your file, regardless of case. So the new command looks like:

find / -iname linux.odt

Find by type

What if you’re not so concerned with locating a file by name but would rather locate all files of a certain type? Some of the more common file type descriptors are:

f – regular file

d – directory

l – symbolic link

c – character devices

b – block devices

Now, suppose you want to locate all character devices (files that provide access to hardware devices) on your system. With the help of the -type option, we can do that like so:

find / -type c

The above command would result in quite a lot of output (much of it indicating permission denied), but would include output similar to:

/dev/hidraw6
/dev/hidraw5
/dev/vboxnetctl
/dev/vboxdrvu
/dev/vboxdrv
/dev/dmmidi2
/dev/midi2
/dev/kvm

Voilà! Character devices.
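Much of that “permission denied” noise comes from directories your user can’t read. If the clutter bothers you, one common trick (not shown in the original article, but standard shell practice) is to discard the error stream; here the search is narrowed to /dev just to keep the example quick:

```shell
# 2>/dev/null discards the error stream (stderr), so only matching
# paths remain on stdout. Searching /dev keeps this example fast.
find /dev -type c 2>/dev/null
```

The same redirection works with any of the find invocations in this article.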

We can use the same option to help us look for configuration files. Say, for instance, you want to locate all regular files that end in the .conf extension. This command would look something like:

find / -type f -name "*.conf"

The above command would traverse the entire directory structure to locate all regular files ending in .conf. If you know most of your configuration files are housed in /etc, you could specify that like so:

find /etc -type f -name "*.conf"

The above command would list all of your .conf files from /etc (Figure 1).


Figure 1: Locating all of your configuration files in /etc.

Outputting results to a file

One really handy trick is to output the results of the search into a file. When you know the output might be extensive, or if you want to comb through the results later, this can be incredibly helpful. For this, we’ll use the same example as above and pipe the results into a file called conf_search. This new command would look like:

find /etc -type f -name "*.conf" > conf_search

You will now have a file (conf_search) that contains all of the results from the find command issued.
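From there, ordinary shell tools can digest the saved list. A quick sketch (the filename conf_search is just the example from above):

```shell
# Run the search once, saving matching paths to conf_search.
# Stderr is discarded so error messages don't end up in the file;
# the || true keeps a script going even if find hit unreadable dirs.
find /etc -type f -name "*.conf" 2>/dev/null > conf_search || true

wc -l < conf_search    # how many files matched
head -n 5 conf_search  # peek at the first few entries
```

You could just as easily page through the file with less or filter it further with grep.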

Finding files by size

Now we get to a moment where the find command becomes incredibly helpful. I’ve had instances where desktops or servers have found their drives mysteriously filled. To quickly make space (or help locate the problem), you can use the find command to locate files of a certain size. Say, for instance, you want to go large and locate files that are over 1000MB. The find command can be issued, with the help of the -size option, like so:

find / -size +1000M

You might be surprised at how many files turn up. With the output from the command, you can comb through the directory structure and free up space or troubleshoot to find out what is mysteriously filling up your drive.

You can search with the following size descriptions:

c – bytes

k – Kilobytes

M – Megabytes

G – Gigabytes

b – 512-byte blocks
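These suffixes combine naturally with the other options covered above. For example (a sketch; note that find rounds sizes up to the unit you pick), you can hand each oversized file to ls for a human-readable listing:

```shell
# For every regular file larger than 1000 MiB, run a batched ls -lh.
# {} is replaced by the matched paths; the trailing + groups them into
# as few ls invocations as possible. The || true keeps a script going
# even when find exits nonzero after hitting unreadable directories.
find / -type f -size +1000M -exec ls -lh {} + 2>/dev/null || true
```

This prints the size next to each path, so you can see at a glance which files are worth deleting.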

Keep learning

We’ve only scratched the surface of the find command, but you now have a fundamental understanding of how to locate files on your Linux systems. Make sure to issue the command man find to get a deeper, more complete, knowledge of how to make this powerful tool work for you.
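As mentioned at the outset, find really shines inside Bash scripts. Here is a minimal sketch tying together the options covered above; the helper name find_large_logs, the default path, and the size threshold are all just illustrations to adapt to your own system:

```shell
#!/usr/bin/env bash
# Sketch: a reusable wrapper around find for scripts.
# find_large_logs is a hypothetical helper name, not a standard command.
find_large_logs() {
  local dir="${1:-/var/log}"   # where to search
  local size="${2:-+10M}"      # a find(1) -size expression, e.g. +10M
  # Discard "Permission denied" noise; print only matching paths.
  find "$dir" -type f -name "*.log" -size "$size" 2>/dev/null
}

# Example call: list any log over 10 MiB under /var/log (often nothing).
find_large_logs /var/log +10M || true
```

Dropping a function like this into a cron job or monitoring script is one small example of the power mentioned earlier.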

The post Classic SysAdmin: How to Search for Files from the Linux Command Line appeared first on Linux Foundation.

A New Year’s Message from Nithya Ruff (2022)

The last two years have demonstrated even more clearly that technology is the crucial fabric that weaves society and the economy together. From video conferencing to online shopping and delivery to remote collaboration tools for work, technology helped society continue to function throughout the pandemic in 2020 and the continuing uncertainty of 2021. All that technology (and more) is powered, quite literally, by open source, in one way or another. Software is eating the world and open source software is becoming the dominant part of software, from the operating system to the database and messaging layer up to the frameworks that drive the user experience. Few, if any, organizations and enterprises could run operations today without relying on open source.

Not surprisingly, as it becomes more pervasive and mission-critical, open source is also proving to be a larger economic force. Public and private companies focused on selling open source software or services now have a collective market value approaching half a trillion dollars. There is no easy way to account for the total economic value of open source consumed by all businesses, individuals, nonprofits, and governments; the value enabled is likely well into the trillions of dollars. Open source powers cloud computing, the Internet, Android phones, mobile apps, cars — even the Mars helicopter launched by NASA. Open source also powers much of consumer electronics on the market today. 

With prominent positions in society and the economy comes an urgent imperative to better address risk and security. The Linux Foundation is working with its members to take on these challenges for open source. We launched multiple new initiatives in 2021 to make the open source technology ecosystem and software supply chain more secure, transparent, and resilient. From the Software Bill of Materials to the Open Source Security Foundation, the LF, its members, and its projects and communities are collaborating with paramount importance to secure the open source supply chain.

Behind risk management and security considerations — and technology development in general — are real people. This is also why the Linux Foundation is making substantial investments in supporting diversity and inclusion in open source communities. We need to take action as a community to make open source more inclusive and welcoming. We can do this in a data-driven fashion with research on what issues hinder our progress and develop actions that will, we hope, drive measurable improvements. 

Working together on collective efforts, beyond just our company and ourselves, is not just good for business; it is personally rewarding. Recently one of our engineers explained that he loves working with open source because he feels it gives him a global network of teachers, all helping him become better. I believe this is why open source is one of the most powerful forces in the world today, and it is only growing stronger. Through a pandemic, through economic challenges, day in, day out, we see people helping each other regardless of their demographics. Open source brings out the best in people by encouraging them to work together to solve great challenges and dream big. This is why in 2021, I am excited to see all the new collaborations, expanding our collective efforts to address truly global problems in agriculture, public health, and other areas that are far bigger than any one project. 

After a successful 2021, and hopefully, with a pandemic fading into our rearview mirrors, I am optimistic for an even more amazing 2022. Thank you for your support and guidance, and I wish you all a Happy New Year!

Nithya Ruff
Chair of the Board of Directors, The Linux Foundation

These efforts are made possible by our members and communities. To learn how your organization can get involved with the Linux Foundation, click here.

The post A New Year’s Message from Nithya Ruff (2022) appeared first on Linux Foundation.

Classic SysAdmin: How to Kill a Process from the Linux Command Line

This is a classic article from the Linux.com archives. For more great SysAdmin tips and techniques check out our free intro to Linux course and our Essentials of System Administration eLearning course.

Picture this: You’ve launched an application (be it from your favorite desktop menu or from the command line) and you start using that launched app, only to have it lock up on you, stop performing, or unexpectedly die. You try to run the app again, but it turns out the original never truly shut down completely.

What do you do? You kill the process. But how? Believe it or not, your best bet most often lies within the command line. Thankfully, Linux has every tool necessary to empower you, the user, to kill an errant process. However, before you immediately launch that command to kill the process, you first have to know what the process is. How do you take care of this layered task? It’s actually quite simple…once you know the tools at your disposal.

Let me introduce you to said tools.

The steps I’m going to outline will work on almost every Linux distribution, whether it is a desktop or a server. I will be dealing strictly with the command line, so open up your terminal and prepare to type.

Locating the process

The first step in killing the unresponsive process is locating it. There are two commands I use to locate a process: top and ps. Top is a tool every administrator should get to know. With top, you get a full listing of currently running processes. From the command line, issue top to see a list of your running processes (Figure 1).

Figure 1: The top command gives you plenty of information.

From this list you will see some rather important information. Say, for example, Chrome has become unresponsive. According to our top display, we can discern there are four instances of chrome running with Process IDs (PID) 3827, 3919, 10764, and 11679. This information will be important to have with one particular method of killing the process.

Although top is incredibly handy, it’s not always the most efficient means of getting the information you need. Let’s say you know the Chrome process is what you need to kill, and you don’t want to have to glance through the real-time information offered by top. For that, you can make use of the ps command and filter the output through grep. The ps command reports a snapshot of the current processes, and grep prints lines matching a pattern. The reason why we filter ps through grep is simple: If you issue the ps command by itself, you will get a snapshot listing of all current processes. We only want the listing associated with Chrome. So this command would look like:

ps aux | grep chrome

The aux options are as follows:

a = show processes for all users

u = display the process’s user/owner

x = also show processes not attached to a terminal

The x option is important when you’re hunting for information regarding a graphical application.

When you issue the command above, you’ll be given more information than you need (Figure 2) for the killing of a process, but it is sometimes more efficient than using top.

Figure 2: Locating the necessary information with the ps command.
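A related shortcut, if your system ships the standard procps tools, is pgrep, which skips the grep step entirely and prints only the matching process IDs (chrome here is just the running example from above):

```shell
# Print PID and process name for every process matching "chrome".
# The || echo keeps the line from failing when nothing matches.
pgrep -l chrome || echo "no matching process found"
```

This is especially convenient in scripts, where you want just the PIDs rather than full ps output.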

Killing the process

Now we come to the task of killing the process. We have two pieces of information that will help us kill the errant process:

Process name

Process ID

Which you use will determine the command used for termination. There are two commands used to kill a process:

kill – Kill a process by ID

killall – Kill a process by name

There are also different signals that can be sent to both kill commands. What signal you send will be determined by what results you want from the kill command. For instance, you can send the HUP (hang up) signal with the kill command, which will effectively restart the process. This is always a wise choice when you need the process to immediately restart (such as in the case of a daemon). You can get a list of all the signals that can be sent to the kill command by issuing kill -l. You’ll find quite a large number of signals (Figure 3).

Figure 3: The available kill signals.

The most common kill signals are:

Signal Name   Signal Value   Effect
SIGHUP        1              Hangup
SIGINT        2              Interrupt from keyboard
SIGKILL       9              Kill signal
SIGTERM       15             Termination signal
SIGSTOP       17, 19, 23     Stop the process

What’s nice about this is that you can use the Signal Value in place of the Signal Name. So you don’t have to memorize all of the names of the various signals.

So, let’s now use the kill command to kill our instance of chrome. The structure for this command would be:

kill SIGNAL PID

Where SIGNAL is the signal to be sent and PID is the Process ID to be killed. We already know, from our ps command, that the IDs we want to kill are 3827, 3919, 10764, and 11679. So to send the kill signal, we’d issue the commands:

kill -9 3827

kill -9 3919

kill -9 10764

kill -9 11679

Once we’ve issued the above commands, all of the chrome processes will have been successfully killed.

Let’s take the easy route! If we already know the process we want to kill is named chrome, we can make use of the killall command and send the same signal to the process like so:

killall -9 chrome

The only caveat to the above command is that it may not catch all of the running chrome processes. If, after running the above command, you issue the ps aux | grep chrome command and see remaining processes running, your best bet is to go back to the kill command and send signal 9 to terminate the process by PID.
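That check-and-clean-up routine can itself be scripted. The sketch below uses pkill (a companion to pgrep that, like killall, matches by name) and then verifies that nothing survived; the -x flag requires an exact name match, and chrome remains our running example:

```shell
# Kill every process whose name is exactly "chrome", then verify.
# The || echo keeps the line from failing when nothing matches.
pkill -9 -x chrome || echo "no chrome process found"

if pgrep -x chrome >/dev/null; then
    echo "some chrome processes survived:"
    pgrep -lx chrome   # list PID and name of each survivor
else
    echo "no chrome processes remain"
fi
```

Whether you reach for killall or pkill is largely a matter of taste; both terminate by name rather than by PID.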

Ending processes made easy

As you can see, killing errant processes isn’t nearly as challenging as you might have thought. When I wind up with a stubborn process, I tend to start off with the killall command as it is the most efficient route to termination. However, when you wind up with a really feisty process, the kill command is the way to go.

The post Classic SysAdmin: How to Kill a Process from the Linux Command Line appeared first on Linux Foundation.

8 fundamental Linux file-management commands for new users

Learn how to create, copy, move, rename, and delete files and directories from the Linux command line.

Read More at Enable Sysadmin

8 essential Linux file navigation commands for new users

It’s straightforward to get around your Linux system if you know these basic commands.

Read More at Enable Sysadmin

5 scripts for getting started with the Nmap Scripting Engine

The NSE boosts Nmap’s power by adding scripting capabilities (custom or community-created) to the network scanning tool.

Read More at Enable Sysadmin

3 tools for troubleshooting packet filtering

Use Nmap, Wireshark, and tcpdump to sniff out router problems on your network.

Read More at Enable Sysadmin

Open Source Foundations Must Work Together to Prevent the Next Log4Shell Scramble

Brian Behlendorf

As someone who has spent their entire career in open source software (OSS), the Log4Shell scramble (an industry-wide four-alarm-fire to address a serious vulnerability in the Apache Log4j package) is a humbling reminder of just how far we still have to go. OSS is now central to the functioning of modern society, as critical as highway bridges, bank payment platforms, and cell phone networks, and it’s time OSS foundations started to act like it.

Organizations like the Apache Software Foundation, the Linux Foundation, the Python Foundation, and many more, provide legal, infrastructural, marketing and other services for their communities of OSS developers. In many cases the security efforts at these organizations are under-resourced and hamstrung in their ability to set standards and requirements that would mitigate the chances of major vulnerabilities, for fear of scaring off new contributors. Too many organizations have failed to apply raised funds or set process standards to improve their security practices, and have unwisely tilted in favor of quantity over quality of code.

What would “acting like it” look like? Here are a few things that OSS foundations can do to mitigate security risks:

Set up an organization-wide security team to receive and triage vulnerability reports, as well as coordinate responses and disclosures to other affected projects and organizations.

Perform frequent security scans, through CI tooling, for detecting unknown vulnerabilities in the software and recognizing known vulnerabilities in dependencies.

Perform occasional outside security audits of critical code, particularly before new major releases.

Require projects to use test frameworks, and ensure high code coverage, so that features without tests are discouraged and underused features are weeded out proactively.

Require projects to remove deprecated or vulnerable dependencies. (Some Apache projects are not vulnerable to the Log4j v2 CVE, because they are still shipping with Log4j v1, which has known weaknesses and has not received an update since 2015!)

Encourage, and then eventually require, the use of SBOM formats like SPDX to help everyone track dependencies more easily and quickly, so that vulnerabilities are easier to find and fix.

Encourage, and then eventually require, maintainers to demonstrate familiarity with the basics of secure software development practices.

Many of these are incorporated into the CII Best Practices badge, one of the first attempts to codify these into an objective comparable metric, and an effort that has now moved to OpenSSF. The OpenSSF has also published a free course for developers on how to develop secure software, and SPDX has recently been published as an ISO standard.

None of the above practices is about paying developers more, or channeling funds directly from users of software to developers. Don’t get me wrong, open source developers and the people who support them should be paid more and appreciated more in general. However, it would be an insult to most maintainers to suggest that if you’d just slipped more money into their pockets they would have written more secure code. At the same time, it’s fair to say a tragedy-of-the-commons hits when every downstream user assumes that these practices are in place, being done and paid for by someone else.

Applying these security practices and providing the resources required to address them is what foundations are increasingly expected to do for their community. Foundations should begin to establish security-related requirements for their hosted and mature projects. They should fundraise from stakeholders the resources required for regular paid audits for their most critical projects, scanning tools and CI for all their projects, and have at least a few paid staff members on a cross-project security team so that time-critical responses aren’t left to individual volunteers. In the long term, foundations should consider providing resources to move critical projects or segments of code to memory-safe languages, or fund bounties for more tests.

The Apache Software Foundation seems to have much of this right, let’s be clear. Despite being notified just before the Thanksgiving holiday, their volunteer security team worked with the Log4j maintainers and responded quickly. Log4j also has almost 8000 passing tests in its CI pipeline, but even all that testing didn’t catch the way this vulnerability could be exploited. And in general, Apache projects are not required to have test coverage at all, let alone run the kind of SAST security scans or host third party audits that might have caught this.

Many other foundations, including those hosted at the Linux Foundation, also struggle to do all this. None of it is easy to push through the laissez-faire philosophy that many foundations have regarding code quality, and third-party code audits and tests don’t come cheap. But for the sake of sustainability, reducing the impact on the broader community, and being more resilient, we have got to do better. And we’ve got to do this together, as a crisis of confidence in OSS affects us all.

This is where OpenSSF comes in, and what pulled me to the project in the first place. In the new year you’ll see us announce a set of new initiatives that build on the work we’ve been doing to “raise the floor” for security in the open source community. The only way we do this effectively is to develop tools, guidance, and standards that make adoption by the open source community encouraged and practical rather than burdensome or bureaucratic. We will be working with and making grants to other open source projects and foundations to help them improve their security game. If you want to stay close to what we’re doing, follow us on Twitter or get involved in other ways. For a taste of where we’ve been to date, read our segment in the Linux Foundation Annual Report, or watch our most recent Town Hall.

Hoping for a 2022 with fewer four-alarm fires,

Brian

Brian Behlendorf is General Manager of the Linux Foundation’s Open Source Security Foundation (OpenSSF). He was a founding member of the Apache Group, which later became the Apache Software Foundation, and served as president of the foundation for three years.

The post Open Source Foundations Must Work Together to Prevent the Next Log4Shell Scramble appeared first on Linux Foundation.

OSPOlogy: Learnings from OSPOs in 2021

In 2021, OSPOlogy covered a wide range of open source topics essential to OSPO-related activities, featuring open source experts from mature OSPOs like Bloomberg and RIT, and from the communities behind open source standards like OpenChain and CHAOSS.

The TODO Group has been paving the OSPO path over a decade of change and is now composed of a worldwide community of open source professionals working in collaboration to drive Open Source Initiatives to the next level. 

The TODO Group Member Landscape

One of the many initiatives that the TODO Group has been working on since last August is OSPOlogy. With OSPOlogy, the TODO Group aims to ease access for more organizations across sectors to understand and adopt OSPOs through open and transparent networking: engaging with open source leaders in real-time conversations.

“In OSPOlogy, we have the participation of experienced OSPO leaders like Bloomberg, Microsoft or SAP, widely adopted projects/initiatives such as OpenChain, CHAOSS or SPDX, and industry open source specialists like LF Energy or FINOS. There is a huge diversity of folks in the open source ecosystem that help people and organizations to improve their Open Source Programs, their OSPO management skills, or advance in their OSPO careers. Thus, after listening to the community demands, we decided to offer a space with dedicated resources to make these connections happen, under an open governance model designed to encourage other organizations and communities to contribute.”

AJ – OSPO Program Manager at TODO Group

What has OSPOlogy accomplished so far?

Within the 2021 OSPOlogy series, we had insightful discussions on five different OSPO topics:

[August 4, 2021] How to start an OSPO with Bloomberg

[September 1, 2021] Mentoring and Talent Management within OS Ecosystems with US Bank

[October 13, 2021] The State of OSPOs in 2021 with LF

[November 17, 2021] Academic OSPOs with CHAOSS and RIT

[December 1, 2021] Governance in the Context of Compliance and Security with OpenChain

For more information, please watch the video replays on our OSPOlogy YouTube channel.

The format is pretty simple: OSPOlogy kicks off each meeting with the OSPO news happening worldwide during that month and then moves to the topic of the day, where featured guests introduce a topic relevant to OSPOs and ways to set up open source initiatives. These two sections are recorded and published on the LF Community platform and the new OSPOlogy YouTube channel.

Once the presentation finishes, we stop the recording and move to real-time conversations and a Q&A section under the Chatham House Rule in order to keep a safe environment for the community to freely share their opinions and issues.

“One of the biggest challenges when preparing the 2021 agenda was to get used to the new platform used to host these meetings and find contributors to kick off the initiative. We keep improving the quality and experience of these meetings every month and, thanks to the feedback received from the community, building new stuff for 2022”

AJ – OSPO Program Manager at TODO Group

TODO Mission: build the next OSPOlogy 2022 series together

The TODO Group gives big importance to neutrality. That’s why this project (same as the other TODO projects) is under an open governance model, to allow people from other organizations and peers across sectors to freely contribute and grow this initiative together.

OSPOlogy has a planning doc, governance guidelines, and a topic pool agenda where you can:

Propose new topics

Offer to be a moderator

Become a speaker

https://github.com/todogroup/ospology/tree/main/meetings.

“During the past months, we have been reaching out to other communities like FINOS, LF Energy, OpenChain, SPDX, or CHAOSS. These projects have become of vital importance to many OSPO activities (either for specific activities, such as managing Open Source Compliance & ISO Standards, measuring the impact of relevant open source projects or helping to overcome entry barriers for more traditional sectors, like finance or energy industry)” 

OSPOlogy, along with the TODO Associates program, aims to bring together all these projects to introduce them to the OSPO community and drive insightful discussions. These are some of the topics proposed by the community for 2022:

How to start an OSPO within the Energy sector

How to start an OSPO within the Finance sector

Measuring the impact of the open source projects that matter to your organization

Open Source Compliance best practices in the lens of an OSPO

OSPOlogy is not just limited to LF projects and the TODO Community. Outside initiatives, foundations, or vendors that work closely with OSPOs and help the OSPO movement are also welcome to join.

We have just created a CFP form so people can easily add their OSPO topics for upcoming OSPOlogy sessions:

https://github.com/todogroup/ospology/blob/main/.github/ISSUE_TEMPLATE/call-for-papers.yml

In order to propose a topic, interested folks just need to open an issue using the call for papers GitHub form.

The TODO Group’s journey: Paving the OSPO path over a decade of change

Significant advancements and community shifts have occurred in the open source ecosystem, and in the way organizations advance in their open source journey, since the TODO Group was formed. At that time, most of the OSPOs were gathered in the bay area and led by software companies, requesting to share limited information due to the uncertainty across this industry.

OSPO Maturity Levels

However, this early version of TODO is far from what it (and OSPOs) represent in the present day.

With digital transformation forcing all organizations to be open source forward and OSPOs adopted by multiple sectors, the TODO Group is composed of a worldwide community of open source professionals working in collaboration to drive Open Source Initiatives to the next level.

It is well known that TODO Group members are also OSPO mentors and advocates who have been working in the open source industry for years.

At the TODO Group, we know the huge value these experienced OSPO leaders can bring to the community, since they can help to pave the path for the new generation of OSPOs, cultivating the open source ecosystem. Two main challenges mark 2022:

Provide structure and guidance within the OSPO industry based on the experience of mature OSPO professionals across sectors and stages.

Collaborate with other communities to enhance this guidance.

New OSPO challenges are coming, and new TODO milestones and initiatives are taking shape to adapt to help the OSPO movement succeed across sectors. You will hear from TODO 2022 strategic goals and direction news very soon!

The post OSPOlogy: Learnings from OSPOs in 2021 appeared first on Linux Foundation.