
Multiculturalism in technology and its limits: AsyncAPI and the long road to open source utopia

“Technology is not neutral. We’re inside of what we make, and it’s inside of us. We’re living in a world of connections – and it matters which ones get made and unmade.” – Donna J. Haraway

The body is the best and only tool a human being has for life; it is the physical representation of who we are, the container in which we move and present ourselves. It reflects our identity, the matter through which we are represented socially.

Human beings have differentiated themselves from other animals by creating tools, using elements that increase their physical and mental capacities, extending their limits, and mediating the way they see and understand the world. The body is, thus, transfixed and intermediated by technology.

In the contemporary era, technological progress has led to global interconnection. Global access to the Internet has become the main propeller of globalization, a democratizing and liberating weapon.

It is a place where the absence of corporeality manages to place us all on the same level. It is a pioneering experience in which the medium can favor equality. It offers a space of representation in which anonymity and the absence of gender, ethnic, and cultural constraints facilitate equal opportunities.

A temporary autonomous zone

The absence of a previous reference, of a historical past, turned the Internet into a “temporary autonomous zone.” A new space was thus constituted where identities could be expressed and constructed more freely. In this way, the Internet has provided oppressed collectives and communities with a means of alleviating cultural and gender biases, a space in which people can express themselves free of socio-political pigeonholing.

This same idea can be extrapolated to the new workspaces within technology. The modern workshop is on the network, connecting colleagues who live in every corner of the world. This situation leads to remote teamwork and multiculturalism, with all the positive aspects of that concept, creating diverse and heterogeneous teams in which nationalities, ethnicities, and backgrounds mix.

Into this idyllic world of liberated identities and newly constructed spaces to inhabit creep the shadows of the physical world, with its dense and unequal past. Open source projects have faced all of these opportunities and constraints in recent years, trying to achieve the goals articulated during the heroic era of the internet in the ’90s.

Opening doors: For whom? For all?

AsyncAPI is an open source initiative sustained and driven by its community. It is a free project intended to be made up of all the people who want to be part of it. It follows the basic idea of being created by everyone, for everyone.

Being part of the initiative is simple: join the Slack channel and contribute through GitHub. People join freely and form a team that has managed to take this project to a high level.

But all freedom is conditioned by the context and the system surrounding it. At this point, AsyncAPI as a project shows its limitations and feels constrained. Talking about an open, inclusive, and enthusiastic community is only a start.

Access to technology, and the literacy to use it, is not widespread across all geographical and social contexts. Potentially and hypothetically, the doors are open, just as the doors of libraries are open; that does not mean everyone will walk through them. The glass ceiling still shapes the technology field, and software development in particular. The conflict emerges from how difficult it is, given the limitations of the field, to build a multicultural community that is rich in gender and ethnic identities and grounded in equality.

In 2019 the number of software developers worldwide grew to 23.9 million and was expected to reach 28.7 million by 2024. Behind these promising numbers lie huge inequalities: the majority of developers come from a handful of regions, and women represent only 10% of the total.

Towards a utopian future: Let’s try it!

The data shows us that, beyond the democratizing possibilities of the Internet, most of these advances remain hypothetical rather than real. We see roughly the same numbers reflected in the AsyncAPI community. The community is aware of what is happening and wants to reverse this situation by becoming more heterogeneous and multicultural. That is a challenge influenced by many factors.

To tackle this situation, AsyncAPI has grown in all directions, creating an ecosystem that embraces variety. It comprises a community of almost 2,000 people of more than 20 nationalities, from very diverse cultures, ethnicities, and backgrounds.

AsyncAPI was born as an open source initiative, a liberating software model in every sense, code made by all and for all. It is not a model confined to the technological field but a movement with a solid ethical base that crosses screens and shapes principles. That is why AsyncAPI is committed to this model. No matter how many external factors work against it, the direction is clear.

The decisions taken now will be vital to building a better future, a freer and more inclusive one. We do not want a one-way mirror in which only some can see themselves reflected. The key is to seek a diverse and multifaceted mirror.

Aspiring to form a community that is a melting pot of cultures and identities may seem somewhat utopian, but we believe it is a goal worth keeping in mind and striving for. Proposals are welcome. Minds, eyes, and ears always remain open. Let us at least try.

Barbaño González is Education Program Manager at AsyncAPI.

Classic SysAdmin: How to Search for Files from the Linux Command Line

This is a classic article written by Jack Wallen from the Linux.com archives. For more great SysAdmin tips and techniques check out our free intro to Linux course.

It goes without saying that every good Linux desktop environment offers the ability to search your file system for files and folders. If your default desktop doesn’t — because this is Linux — you can always install an app to make searching your directory hierarchy a breeze.

But what about the command line? If you happen to frequently work in the command line or you administer GUI-less Linux servers, where do you turn when you need to locate a file? Fortunately, Linux has exactly what you need to locate the files in question, built right into the system.

The command in question is find. To make the understanding of this command even more enticing, once you know it, you can start working it into your Bash scripts. That’s not only convenience, that’s power.

Let’s get up to speed with the find command so you can take control of locating files on your Linux servers and desktops, without the need of a GUI.

How to use the find command

When I first glimpsed Linux, back in 1997, I didn’t quite understand how the find command worked; therefore, it never seemed to function as I expected. It seemed simple; issue the command find FILENAME (where FILENAME is the name of the file) and the command was supposed to locate the file and report back. Little did I know there was more to the command than that. Much more.

If you issue the command man find, you’ll see the syntax of the find command is:

find [-H] [-L] [-P] [-D debugopts] [-Olevel] [starting-point…] [expression]

Naturally, if you’re unfamiliar with how man works, you might be confused about or overwhelmed by that syntax. For ease of understanding, let’s simplify that. The most basic syntax of a basic find command would look like this:

find /path option filename

Now we’ll see it at work.

Find by name

Let’s break down that basic command to make it as clear as possible. The simplest structure of the find command should include a path for the file, an option, and the filename itself. You may be thinking, “If I know the path to the file, I’d already know where to find it!” Well, the path could be the root of your drive, so / would be a legitimate path. Entering that as your path would take find longer to process — because it has to start from scratch — but if you have no idea where the file is, you can start from there. In the name of efficiency, it is always best to have at least an idea of where to start searching.

The next bit of the command is the option. As with most Linux commands, you have a number of available options. However, we are starting from the beginning, so let’s make it easy. Because we are attempting to find a file by name, we’ll use one of two options:

-name – case sensitive

-iname – case insensitive

Remember, Linux is very particular about case, so if you’re looking for a file named Linux.odt, the following command will return no results.

find / -name linux.odt

If, however, you were to alter the command by using the -iname option, the find command would locate your file, regardless of case. So the new command looks like:

find / -iname linux.odt

Find by type

What if you’re not so concerned with locating a file by name but would rather locate all files of a certain type? Some of the more common file descriptors are:

f – regular file

d – directory

l – symbolic link

c – character devices

b – block devices

Now, suppose you want to locate all character devices (files that refer to devices) on your system. With the help of the -type option, we can do that like so:

find / -type c

The above command would result in quite a lot of output (much of it indicating permission denied), but would include output similar to:

/dev/hidraw6
/dev/hidraw5
/dev/vboxnetctl
/dev/vboxdrvu
/dev/vboxdrv
/dev/dmmidi2
/dev/midi2
/dev/kvm

Voilà! Character devices.

We can use the same option to help us look for configuration files. Say, for instance, you want to locate all regular files that end in the .conf extension. This command would look something like:

find / -type f -name "*.conf"

The above command would traverse the entire directory structure to locate all regular files ending in .conf. If you know most of your configuration files are housed in /etc, you could specify that like so:

find /etc -type f -name "*.conf"

The above command would list all of your .conf files from /etc (Figure 1).


Figure 1: Locating all of your configuration files in /etc.

Outputting results to a file

One really handy trick is to output the results of the search into a file. When you know the output might be extensive, or when you want to comb through the results later, this can be incredibly helpful. For this, we’ll use the same example as above and redirect the results into a file called conf_search. The new command would look like:

find /etc -type f -name "*.conf" > conf_search

You will now have a file (conf_search) that contains all of the results from the find command issued.
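If you just want a quick sense of how many matches the search turned up, you can run that results file through the standard wc utility (not covered in this article, but handy here); the count reported is the number of .conf files located under /etc:

wc -l conf_search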

Finding files by size

Now we get to a moment where the find command becomes incredibly helpful. I’ve had instances where desktops or servers have found their drives mysteriously filled. To quickly make space (or help locate the problem), you can use the find command to locate files of a certain size. Say, for instance, you want to go large and locate files that are over 1000MB. The find command can be issued, with the help of the -size option, like so:

find / -size +1000M

You might be surprised at how many files turn up. With the output from the command, you can comb through the directory structure and free up space or troubleshoot to find out what is mysteriously filling up your drive.

You can search with the following size descriptions:

c – bytes

k – Kilobytes

M – Megabytes

G – Gigabytes

b – 512-byte blocks
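For example, using the M suffix from the list above, the following command (a rough sketch; adjust the path and threshold to suit your system) locates regular files under /var larger than 100 Megabytes, with the permission-denied noise discarded:

find /var -type f -size +100M 2>/dev/null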

Keep learning

We’ve only scratched the surface of the find command, but you now have a fundamental understanding of how to locate files on your Linux systems. Make sure to issue the command man find to get a deeper, more complete knowledge of how to make this powerful tool work for you.
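And, as promised earlier, find slots neatly into Bash scripts. Here is a minimal sketch (the script name, variable names, and results file are all placeholders) that wraps the case-insensitive search covered above:

#!/bin/bash
# Usage: ./findfile.sh /etc "*.conf"
# Search a path for files matching a pattern (case-insensitive) and save the list.
search_path="$1"
pattern="$2"
find "$search_path" -type f -iname "$pattern" 2>/dev/null > search_results
echo "Found $(wc -l < search_results) matching files; see search_results for the full list."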


A New Year’s Message from Nithya Ruff (2022)

The last two years have demonstrated even more clearly that technology is the crucial fabric that weaves society and the economy together. From video conferencing to online shopping and delivery to remote collaboration tools for work, technology helped society continue to function throughout the pandemic in 2020 and the continuing uncertainty of 2021. All that technology (and more) is powered, quite literally, by open source, in one way or another. Software is eating the world and open source software is becoming the dominant part of software, from the operating system to the database and messaging layer up to the frameworks that drive the user experience. Few, if any, organizations and enterprises could run operations today without relying on open source.

Not surprisingly, as it becomes more pervasive and mission-critical, open source is also proving to be a larger economic force. Public and private companies focused on selling open source software or services now have a collective market value approaching half a trillion dollars. There is no easy way to account for the total economic value of open source consumed by all businesses, individuals, nonprofits, and governments; the value enabled is likely well into the trillions of dollars. Open source powers cloud computing, the Internet, Android phones, mobile apps, cars — even the Mars helicopter launched by NASA. Open source also powers much of consumer electronics on the market today. 

With prominent positions in society and the economy comes an urgent imperative to better address risk and security. The Linux Foundation is working with its members to take on these challenges for open source. We launched multiple new initiatives in 2021 to make the open source technology ecosystem and software supply chain more secure, transparent, and resilient. From the Software Bill of Materials to the Open Source Security Foundation, the LF, its members, and its projects and communities are collaborating to secure the open source supply chain, a task of paramount importance.

Behind risk management and security considerations — and technology development in general — are real people. This is also why the Linux Foundation is making substantial investments in supporting diversity and inclusion in open source communities. We need to take action as a community to make open source more inclusive and welcoming. We can do this in a data-driven fashion with research on what issues hinder our progress and develop actions that will, we hope, drive measurable improvements. 

Working together on collective efforts, beyond just our company and ourselves, is not just good for business; it is personally rewarding. Recently one of our engineers explained that he loves working with open source because he feels it gives him a global network of teachers, all helping him become better. I believe this is why open source is one of the most powerful forces in the world today, and it is only growing stronger. Through a pandemic, through economic challenges, day in, day out, we see people helping each other regardless of their demographics. Open source brings out the best in people by encouraging them to work together to solve great challenges and dream big. This is why in 2021, I am excited to see all the new collaborations, expanding our collective efforts to address truly global problems in agriculture, public health, and other areas that are far bigger than any one project. 

After a successful 2021, and hopefully, with a pandemic fading into our rearview mirrors, I am optimistic for an even more amazing 2022. Thank you for your support and guidance, and I wish you all a Happy New Year!

Nithya Ruff
Chair of the Board of Directors, The Linux Foundation

These efforts are made possible by our members and communities. To learn how your organization can get involved with the Linux Foundation, click here.


Classic SysAdmin: How to Kill a Process from the Linux Command Line

This is a classic article from the Linux.com archives. For more great SysAdmin tips and techniques check out our free intro to Linux course and our Essentials of System Administration eLearning.

Picture this: You’ve launched an application (be it from your favorite desktop menu or from the command line) and you start using that launched app, only to have it lock up on you, stop performing, or unexpectedly die. You try to run the app again, but it turns out the original never truly shut down completely.

What do you do? You kill the process. But how? Believe it or not, your best bet most often lies within the command line. Thankfully, Linux has every tool necessary to empower you, the user, to kill an errant process. However, before you immediately launch that command to kill the process, you first have to know what the process is. How do you take care of this layered task? It’s actually quite simple…once you know the tools at your disposal.

Let me introduce you to said tools.

The steps I’m going to outline will work on almost every Linux distribution, whether it is a desktop or a server. I will be dealing strictly with the command line, so open up your terminal and prepare to type.

Locating the process

The first step in killing the unresponsive process is locating it. There are two commands I use to locate a process: top and ps. Top is a tool every administrator should get to know. With top, you get a full listing of currently running processes. From the command line, issue top to see a list of your running processes (Figure 1).

Figure 1: The top command gives you plenty of information.

From this list you will see some rather important information. Say, for example, Chrome has become unresponsive. According to our top display, we can discern there are four instances of chrome running with Process IDs (PID) 3827, 3919, 10764, and 11679. This information will be important to have with one particular method of killing the process.

Although top is incredibly handy, it’s not always the most efficient means of getting the information you need. Let’s say you know the Chrome process is what you need to kill, and you don’t want to have to glance through the real-time information offered by top. For that, you can make use of the ps command and filter the output through grep. The ps command reports a snapshot of the current processes, and grep prints lines matching a pattern. The reason we filter ps through grep is simple: if you issue the ps command by itself, you will get a snapshot listing of all current processes, and we only want the listing associated with Chrome. So this command would look like:

ps aux | grep chrome

The aux options are as follows:

a = show processes for all users

u = display the process’s user/owner

x = also show processes not attached to a terminal

The x option is important when you’re hunting for information regarding a graphical application.

When you issue the command above, you’ll be given more information than you need for killing a process (Figure 2), but it is sometimes more efficient than using top.

Figure 2: Locating the necessary information with the ps command.

Killing the process

Now we come to the task of killing the process. We have two pieces of information that will help us kill the errant process:

Process name

Process ID

Which you use will determine the command used for termination. There are two commands used to kill a process:

kill – Kill a process by ID

killall – Kill a process by name

There are also different signals that can be sent with both kill commands. Which signal you send will be determined by what results you want from the command. For instance, you can send the HUP (hang up) signal with the kill command, which will effectively restart the process. This is a wise choice when you need the process to restart immediately (such as in the case of a daemon). You can get a list of all available signals by issuing kill -l. You’ll find quite a large number of them (Figure 3).

Figure 3: The available kill signals.
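For example, to send that hang up signal to a daemon whose PID you have already looked up (1234 here is purely a placeholder), either of these equivalent forms works, since HUP is signal 1:

kill -HUP 1234

kill -1 1234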

The most common kill signals are:

Signal Name   Signal Value   Effect
SIGHUP        1              Hangup
SIGINT        2              Interrupt from keyboard
SIGKILL       9              Kill signal
SIGTERM       15             Termination signal
SIGSTOP       17, 19, 23     Stop the process

What’s nice about this is that you can use the Signal Value in place of the Signal Name, so you don’t have to memorize all of the names of the various signals.

So, let’s now use the kill command to kill our instance of chrome. The structure for this command would be:

kill SIGNAL PID

Where SIGNAL is the signal to be sent and PID is the Process ID to be killed. We already know, from our ps command, that the IDs we want to kill are 3827, 3919, 10764, and 11679. So to send the kill signal, we’d issue the commands:

kill -9 3827

kill -9 3919

kill -9 10764

kill -9 11679

Once we’ve issued the above commands, all of the chrome processes will have been successfully killed.

Let’s take the easy route! If we already know the process we want to kill is named chrome, we can make use of the killall command and send the same signal to the process like so:

killall -9 chrome

The only caveat to the above command is that it may not catch all of the running chrome processes. If, after running the above command, you issue the ps aux | grep chrome command and see remaining processes running, your best bet is to go back to the kill command and send signal 9 to terminate the process by PID.
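As a quick sketch of that fallback, list any survivors and then kill each remaining Process ID by hand (PID below is a placeholder for whatever IDs still show up in the ps output):

ps aux | grep chrome

kill -9 PID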

Ending processes made easy

As you can see, killing errant processes isn’t nearly as challenging as you might have thought. When I wind up with a stubborn process, I tend to start off with the killall command as it is the most efficient route to termination. However, when you wind up with a really feisty process, the kill command is the way to go.


8 fundamental Linux file-management commands for new users

Learn how to create, copy, move, rename, and delete files and directories from the Linux command line.

Read More at Enable Sysadmin

8 essential Linux file navigation commands for new users

It’s straightforward to get around your Linux system if you know these basic commands.

Read More at Enable Sysadmin

5 scripts for getting started with the Nmap Scripting Engine

The NSE boosts Nmap’s power by adding scripting capabilities (custom or community-created) to the network scanning tool.

Read More at Enable Sysadmin

3 tools for troubleshooting packet filtering

Use Nmap, Wireshark, and tcpdump to sniff out router problems on your network.

Read More at Enable Sysadmin

Open Source Foundations Must Work Together to Prevent the Next Log4Shell Scramble

Brian Behlendorf

As someone who has spent their entire career in open source software (OSS), the Log4Shell scramble (an industry-wide four-alarm-fire to address a serious vulnerability in the Apache Log4j package) is a humbling reminder of just how far we still have to go. OSS is now central to the functioning of modern society, as critical as highway bridges, bank payment platforms, and cell phone networks, and it’s time OSS foundations started to act like it.

Organizations like the Apache Software Foundation, the Linux Foundation, the Python Foundation, and many more provide legal, infrastructural, marketing, and other services for their communities of OSS developers. In many cases the security efforts at these organizations are under-resourced and hamstrung in their ability to set standards and requirements that would mitigate the chances of major vulnerabilities, for fear of scaring off new contributors. Too many organizations have failed to apply the funds they have raised, or to set process standards, to improve their security practices, and have unwisely tilted in favor of quantity over quality of code.

What would “acting like it” look like? Here are a few things that OSS foundations can do to mitigate security risks:

Set up an organization-wide security team to receive and triage vulnerability reports, as well as coordinate responses and disclosures to other affected projects and organizations.

Perform frequent security scans, through CI tooling, for detecting unknown vulnerabilities in the software and recognizing known vulnerabilities in dependencies.

Perform occasional outside security audits of critical code, particularly before new major releases.

Require projects to use test frameworks, and ensure high code coverage, so that features without tests are discouraged and underused features are weeded out proactively.

Require projects to remove deprecated or vulnerable dependencies. (Some Apache projects are not vulnerable to the Log4j v2 CVE, because they are still shipping with Log4j v1, which has known weaknesses and has not received an update since 2015!)

Encourage, and then eventually require, the use of SBOM formats like SPDX to help everyone track dependencies more easily and quickly, so that vulnerabilities are easier to find and fix.

Encourage, and then eventually require, maintainers to demonstrate familiarity with the basics of secure software development practices.

Many of these are incorporated into the CII Best Practices badge, one of the first attempts to codify these into an objective comparable metric, and an effort that has now moved to OpenSSF. The OpenSSF has also published a free course for developers on how to develop secure software, and SPDX has recently been published as an ISO standard.

None of the above practices is about paying developers more, or channeling funds directly from users of software to developers. Don’t get me wrong: open source developers and the people who support them should be paid more and appreciated more in general. However, it would be an insult to most maintainers to suggest that if you’d just slipped more money into their pockets they would have written more secure code. At the same time, it’s fair to say that a tragedy of the commons sets in when every downstream user assumes these practices are already in place, being done and paid for by someone else.

Applying these security practices and providing the resources required to address them is what foundations are increasingly expected to do for their community. Foundations should begin to establish security-related requirements for their hosted and mature projects. They should fundraise from stakeholders the resources required for regular paid audits for their most critical projects, scanning tools and CI for all their projects, and have at least a few paid staff members on a cross-project security team so that time-critical responses aren’t left to individual volunteers. In the long term, foundations should consider providing resources to move critical projects or segments of code to memory-safe languages, or fund bounties for more tests.

Let’s be clear: the Apache Software Foundation seems to have much of this right. Despite being notified just before the Thanksgiving holiday, their volunteer security team worked with the Log4j maintainers and responded quickly. Log4j also has almost 8,000 passing tests in its CI pipeline, yet even all that testing didn’t catch the way this vulnerability could be exploited. And in general, Apache projects are not required to have test coverage at all, let alone run the kind of SAST security scans or host the third-party audits that might have caught this.

Many other foundations, including those hosted at the Linux Foundation, also struggle to do all of this. It is not easy to push past the laissez-faire philosophy that many foundations hold regarding code quality, and third-party code audits and tests don’t come cheap. But for the sake of sustainability, of reducing the impact on the broader community, and of being more resilient, we have got to do better. And we have got to do it together, because a crisis of confidence in OSS affects us all.

This is where OpenSSF comes in, and what pulled me to the project in the first place. In the new year you’ll see us announce a set of new initiatives that build on the work we’ve been doing to “raise the floor” for security in the open source community. The only way we do this effectively is to develop tools, guidance, and standards that make adoption by the open source community encouraged and practical rather than burdensome or bureaucratic. We will be working with and making grants to other open source projects and foundations to help them improve their security game. If you want to stay close to what we’re doing, follow us on Twitter or get involved in other ways. For a taste of where we’ve been to date, read our segment in the Linux Foundation Annual Report, or watch our most recent Town Hall.

Hoping for a 2022 with fewer four-alarm fires,

Brian

Brian Behlendorf is General Manager of the Linux Foundation’s Open Source Security Foundation (OpenSSF). He was a founding member of the Apache Group, which later became the Apache Software Foundation, and served as president of the foundation for three years.
