
Prepr Partners with the Linux Foundation to Provide Digital Work-Integrated Learning through the F.U.N.™ Program

December 14th, 2020 – Toronto, Canada – Prepr is excited to announce a new partnership with The Linux Foundation, the nonprofit organization enabling mass innovation through open source, that will give work-integrated learning experiences to youth facing employment barriers. The new initiative, the Flexible Upskilling Network (F.U.N.™) program, launches in collaboration with the Magnet Network and the Network for the Advancement of Black Communities (NABC). The F.U.N.™ program is a blended learning program, where participants receive opportunities to combine valuable work experience with digital skill development over a 16-week journey. The objective of the F.U.N.™ program is to support youth, with a focus on women and visible minority groups who are involuntarily not in employment, education, or training (NEET) in Ontario, by helping them gain employability skills, including soft skills like communication, collaboration, and problem-solving.

Caitlin McDonough, Chief Education Officer at Prepr, says about the F.U.N.™ program, “Digital skills are essential for the workforce of the future. We at Prepr are looking forward to the opportunity to support youth capacity development for the future of work.”

With The Linux Foundation, Prepr is committed to supporting over 180 youth participants in enrolling in and completing the F.U.N.™ program between July 2020 and March 2021. Prepr will be using its signature PIE® method to train the participants in Project Leadership, Innovation, and Entrepreneurship and to expose them to real-world business challenges. The work-integrated learning experience Prepr provides will support participants in developing both soft and hard skills, with a focus on digital skills to help them secure gainful employment for the uncertain future of work.

“In this day and age, it is essential to have a good educational foundation in technology to maximize your chances of career success,” said Clyde Seepersad, SVP and GM, Training & Certification at The Linux Foundation. “We are thrilled to partner with Prepr to bring The Linux Foundation’s vendor-neutral, expert training in the open source technologies that serve as the backbone of modern technologies to communities that will truly benefit from it. I look forward to seeing how these promising students perform and hope to partner with Prepr on future initiatives to train even more in the future.”

The program will explore digital career pathways through multiple work-related challenges. These work challenges will bring creative approaches to gaining innovative skills that are invaluable in today’s new normal of remote work and learning, while allowing individuals to become more competitive in today’s digital workforce.

Stephen Crawford, MPP for Oakville, speaking about the government’s commitment to supporting youth facing employment barriers: “This government is committed to supporting our youth, notably visible minorities, as they prepare to enter the workforce. The youth of today will be the leaders of tomorrow.” The Ontario government funding for the F.U.N. program is part of a $37 million investment in training initiatives across the province.

Through the program’s blended learning approach, participants will learn how to use Prepr’s signature PIE® tool, which addresses three essential skills gaps facing the business services sector today: expertise in innovation, project management, and business development (entrepreneurship, sales, and commercialization). At the end of the program, participants will gain a certification, along with 12 weeks of hands-on work experience, which will foster valuable, future-proof skills to secure gainful employment.

The Linux Foundation will also support participants through an introductory course to Linux and related tools: LFS101x: Introduction to Linux. The program will help to develop the digital skills essential for our new normal of work, with beginner-level challenges to fill obvious skills gaps and foster a mentality of problem-solving. With the support of open Linux Foundation resources, these challenges will be an opportunity for participants to ideate and create project solutions ready for real-world implementation.

About Prepr

Prepr provides the tools, resources, and technology to empower individuals to become lifelong problem solvers. Through triangular cooperation between the public and private sectors as well as government, Prepr aims to strengthen the collaboration on challenges that affect individuals, communities, businesses, and infrastructure to create a more sustainable future for everyone.

About The Linux Foundation

Founded in 2000, the Linux Foundation is supported by more than 1,000 members and is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. The Linux Foundation’s projects, including Linux, Kubernetes, and Node.js, are critical to the world’s infrastructure. Its methodology focuses on leveraging best practices and addressing the needs of contributors, users, and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see its trademark usage page: www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.


How to Create and Manage Archive Files in Linux

By Matt Zand and Kevin Downs

In a nutshell, an archive is a single file that contains a collection of other files and/or directories. Archive files are typically used to transfer files (locally or over the internet) or to make a backup copy of a collection of files and directories, letting you work with a single file (which, if compressed, is smaller than the sum of the files within it) instead of many. Archives are likewise used for software application packaging. This single file can be easily compressed for ease of transfer, while the files in the archive retain the structure and permissions of the original files.

We can use the tar tool to create, list, and extract files from archives. Archives made with tar are normally called “tar files,” “tar archives,” or—since all the archived files are rolled into one—“tarballs.”

This tutorial shows how to use tar to create an archive, list the contents of an archive, and extract the files from an archive. Two common options used with all three of these operations are ‘-f’ and ‘-v’: to specify the name of the archive file, use ‘-f’ followed by the file name; use the ‘-v’ (“verbose”) option to have tar output the names of files as they are processed. While the ‘-v’ option is not necessary, it lets you observe the progress of your tar operation.

For the remainder of this tutorial, we cover 3 topics: 1- Create an archive file, 2- List contents of an archive file, and 3- Extract contents from an archive file. We conclude this tutorial by surveying 6 practical questions related to archive file management. What you take away from this tutorial is essential for performing tasks related to cybersecurity and cloud technology.

1- Creating an Archive File

To create an archive with tar, use the ‘-c’ (“create”) option, and specify the name of the archive file to create with the ‘-f’ option. It’s common practice to use a name with a ‘.tar’ extension, such as ‘my-backup.tar’. Note that unless specifically mentioned otherwise, all commands and command parameters in the remainder of this article are lowercase. Keep in mind that while typing the commands from this article into your terminal, you need not type the $ prompt sign that comes at the beginning of each command line.

Give as arguments the names of the files to be archived; to create an archive of a directory and all of the files and subdirectories it contains, give the directory’s name as an argument.

 To create an archive called ‘project.tar’ from the contents of the ‘project’ directory, type:

$ tar -cvf project.tar project

This command creates an archive file called ‘project.tar’ containing the ‘project’ directory and all of its contents. The original ‘project’ directory remains unchanged.

Use the ‘-z’ option to compress the archive as it is being written. This yields the same output as creating an uncompressed archive and then using gzip to compress it, but it eliminates the extra step.
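For comparison, here is a sketch of the two-step approach that the ‘-z’ option replaces (gzip renames the archive with a ‘.gz’ suffix as it compresses):

$ tar -cvf project.tar project
$ gzip project.tar    # produces project.tar.gz, replacing project.tar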

 To create a compressed archive called ‘project.tar.gz’ from the contents of the ‘project’ directory, type:

$ tar -zcvf project.tar.gz project

This command creates a compressed archive file, ‘project.tar.gz’, containing the ‘project’ directory and all of its contents. The original ‘project’ directory remains unchanged.

NOTE: While using the ‘-z’ option, you should specify the archive name with a ‘.tar.gz’ extension and not a ‘.tar’ extension, so the file name shows that the archive is compressed. Although not required, it is a good practice to follow.

Gzip is not the only form of compression. There are also bzip2 and xz. When we see a file with an .xz extension, we know it has been compressed using xz; when we see a file with a .bz2 extension, we can infer it was compressed using bzip2. We are going to steer away from bzip2, as it is becoming unmaintained, and focus on xz. Compressing with xz takes longer, but it is typically worth the wait: the compression is much more effective, meaning the resulting file will usually be smaller than with other compression methods. Even better, decompression speed does not differ much between the methods. Below is an example of how to use xz when compressing a file with tar:

$ tar -Jcvf project.tar.xz project

We simply switch the lowercase -z used for gzip to an uppercase -J for xz. Comparing timings and resulting sizes across the three forms of compression, xz takes the longest to compress, but it does the best job of reducing file size, so it’s worth the wait. The larger the file is, the better the compression becomes, too!
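If you would like to reproduce the comparison yourself, here is a minimal sketch (reusing the ‘project’ directory from the earlier examples) that times each method and then compares the resulting archive sizes:

$ # Time each compression method on the same input
$ time tar -zcf project.tar.gz project    # gzip
$ time tar -jcf project.tar.bz2 project   # bzip2
$ time tar -Jcf project.tar.xz project    # xz
$ # Compare the resulting archive sizes
$ du -h project.tar.gz project.tar.bz2 project.tar.xz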

2- Listing Contents of an Archive File

To list the contents of a tar archive without extracting them, use tar with the ‘-t’ option.

 To list the contents of an archive called ‘project.tar’, type:

$ tar -tvf project.tar  

This command lists the contents of the ‘project.tar’ archive. Using the ‘-v’ option along with the ‘-t’ option causes tar to output the permissions and modification time of each file, along with its file name—the same format used by the ls command with the ‘-l’ option.
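For example, a listing might look something like this (the file names, sizes, and dates here are purely illustrative):

drwxrwxr-x user/user         0 2020-12-10 14:05 project/
-rw-rw-r-- user/user      1024 2020-12-10 14:05 project/notes.txt
-rw-rw-r-- user/user     12288 2020-12-10 14:03 project/report.odt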

 To list the contents of a compressed archive called ‘project.tar.gz’, type:

$ tar -ztvf project.tar.gz

3- Extracting Contents from an Archive File

To extract (or unpack) the contents of a tar archive, use tar with the ‘-x’ (“extract”) option.

 To extract the contents of an archive called ‘project.tar’, type:

$ tar -xvf project.tar

This command extracts the contents of the ‘project.tar’ archive into the current directory.

If an archive is compressed, which usually means it will have a ‘.tar.gz’ or ‘.tgz’ extension, include the ‘-z’ option.

 To extract the contents of a compressed archive called ‘project.tar.gz’, type:

$ tar -zxvf project.tar.gz

NOTE: If there are files or subdirectories in the current directory with the same name as any of those in the archive, those files will be overwritten when the archive is extracted. If you don’t know what files are included in an archive, consider listing the contents of the archive first.

Another reason to list the contents of an archive before extracting them is to determine whether the files in the archive are contained in a directory. If not, and the current directory contains many unrelated files, you might confuse them with the files extracted from the archive.

To extract the files into a directory of their own, make a new directory, move the archive to that directory, and change to that directory, where you can then extract the files from the archive.
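As a sketch, assuming an archive named ‘project.tar’ sitting in the current directory:

$ mkdir project-extract           # make a new directory
$ mv project.tar project-extract  # move the archive into it
$ cd project-extract              # change to that directory
$ tar -xvf project.tar            # extract the files there

GNU tar also accepts a -C option (for example, tar -xvf project.tar -C project-extract) that extracts into a target directory without moving the archive first.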

Now that we have learned how to create an archive file and list/extract its contents, we can move on to discuss the following 6 practical questions that are frequently asked by Linux professionals.

  • Can we add content to an archive file without unpacking it?

Unfortunately, once a file has been compressed, there is no way to add content to it. You would have to “unpack” it, or extract the contents, edit or add content, and then compress the file again. If it’s a small file, this process will not take long. If it’s a larger file, be prepared for it to take a while.
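One caveat worth noting: GNU tar can append members to a plain, uncompressed ‘.tar’ archive with the ‘-r’ option, so the full unpack-and-repack cycle is only unavoidable once compression is involved. A minimal sketch, assuming a gzip-compressed archive called file.tar.gz and a new file called newfile:

$ gunzip file.tar.gz           # decompress, leaving file.tar
$ tar -rvf file.tar newfile    # -r appends newfile to the uncompressed archive
$ gzip file.tar                # recompress to file.tar.gz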

  • Can we delete content from an archive file without unpacking it?

This depends on the version of tar being used. Newer versions of tar support a --delete option.

For example, let’s say we have files file1 and file2. They can be removed from file.tar with the following:

$ tar -vf file.tar --delete file1 file2

To remove a directory dir1:

$ tar -f file.tar --delete dir1/*

  • What are the differences between compressing a folder and archiving it?

The simplest way to look at the difference between archiving and compressing is to look at the end result. When you archive files, you are combining multiple files into one. So if we archive ten 100KB files, we end up with one 1000KB file. On the other hand, if we compress those files, we could end up with a file that is only a few KB or close to 100KB, depending on their contents.
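You can see this for yourself with a quick experiment; the file names here are illustrative, and files full of zeros are used because they compress extremely well:

$ mkdir demo && cd demo
$ # Create ten 100KB files of zeros
$ for i in $(seq 1 10); do dd if=/dev/zero of=data$i bs=1K count=100 2>/dev/null; done
$ tar -cf archive.tar data*              # archiving alone: roughly the sum of the inputs
$ gzip -c archive.tar > archive.tar.gz   # compressing shrinks it dramatically
$ du -h archive.tar archive.tar.gz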

  • How to compress archive files?

As we saw above, you can create an archive file using the tar command with the cvf options. To compress the archive file we made, there are two options: run the archive file through a compression utility such as gzip, or use a compression flag when running the tar command. The most common compression flags are -z for gzip, -j for bzip2, and -J for xz. We can see the first method below:

$ gzip file.tar

Or we can just use a compression flag when using the tar command, here we’ll see the gzip flag “z”:

$ tar -cvzf file.tar.gz /some/directory

  • How to create archives of multiple directories and/or files at one time?

It is not uncommon to be in situations where we want to archive multiple files or directories at once. And it’s not as difficult as you think to tar multiple files and directories at one time. You simply supply which files or directories you want to tar as arguments to the tar command:

$ tar -cvzf file.tar.gz file1 file2 file3

or

$ tar -cvzf file.tar.gz /some/directory1 /some/directory2

  • How to skip directories and/or files when creating an archive?

You may run into a situation where you want to archive a directory or file, but you don’t need certain files to be archived. To avoid archiving those files, or to “exclude” them, you would use the --exclude option with tar:

$ tar --exclude '/some/directory' -cvf file.tar /home/user

So in this example, /home/user would be archived, but /some/directory would be excluded if it was under /home/user. It’s important to put the --exclude option before the source and destination, and to encapsulate the file or directory being excluded in single quotation marks.
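The --exclude option can also be repeated to skip several paths or patterns at once; here is a sketch with illustrative paths:

$ tar --exclude '/home/user/some/directory' --exclude '*.log' -cvzf backup.tar.gz /home/user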

Summary

The tar command is useful for creating backups or compressing files you no longer need. It’s good practice to back up files before changing them; if something doesn’t work as intended after the change, you can always revert to the old file. Compressing files that are no longer in use helps keep systems clean and lowers disk space usage. There are other utilities available, but tar has reigned supreme for its versatility, ease of use, and popularity.

Resources

If you would like to learn more about Linux, reading the following articles and tutorials is highly recommended:

About the Authors

Matt Zand is a serial entrepreneur and the founder of three tech startups: DC Web Makers, Coding Bootcamps, and High School Technology Services. He is a leading author of the Hands-on Smart Contract Development with Hyperledger Fabric book from O’Reilly Media. He has written more than 100 technical articles and tutorials on blockchain development for the Hyperledger, Ethereum, and Corda R3 platforms. At DC Web Makers, he leads a team of blockchain experts for consulting and deploying enterprise decentralized applications. As chief architect, he has designed and developed blockchain courses and training programs for Coding Bootcamps. He has a master’s degree in business management from the University of Maryland. Prior to blockchain development and consulting, he worked as a senior web and mobile app developer and consultant, angel investor, and business advisor for a few startup companies. You can connect with him on LinkedIn: https://www.linkedin.com/in/matt-zand-64047871

Kevin Downs is a Red Hat Certified System Administrator (RHCSA). In his current job as a sysadmin at IBM, he is in charge of administering hundreds of servers running different Linux distributions. He is a Lead Linux Instructor at Coding Bootcamps, where he has authored five self-paced courses.


How to lighten the load on your container registry using Quay.io


Using Buildah, Skopeo, and Quay.io to create a container registry.
Tom Sweeney
Fri, 12/11/2020 at 5:30pm


Image by Dariusz Sankowski from Pixabay

In this post, I show you how to use Quay.io to host container images, and how to avoid over-taxing your container registry by limiting unnecessary requests for images. I use Buildah, Skopeo, and Quay.io, but the tips on limiting image pulls will work with any container registry you might use.

Topics: Containers, Linux
Read More at Enable Sysadmin

Continuous Delivery in the Age of Microservices and COVID-19

The goal of continuous delivery (CD) is to produce high-quality software rapidly. While the emergence of microservices and cloud-native technology has brought huge benefits in scalability, it has added a layer of complexity to this approach. Security is another big challenge. In this discussion with Tracy Miranda, Executive Director of the Continuous Delivery Foundation, we talked about some of the pain points the organizations face when bolstering their CD practices and how the Foundation is helping to address them.

Swapnil Bhartiya: How would you define continuous delivery? Also, what about the CI part of it because when we talk about it, we always say CI/CD?

Tracy Miranda: We define continuous delivery as a software engineering approach in which teams work in short cycles and they ensure that the code is always released at any point in time. Now, traditionally, people tend to speak a lot about continuous integration and continuous delivery (CI/CD). Continuous integration is when developers regularly commit at least once a day to a mainline and keep that main line up to date. But I see continuous delivery as really this umbrella of all the practices you need to keep that software ready to be released at any time. That includes continuous integration, security features, testing and so on. It’s a general set of practices.

Swapnil Bhartiya: CI/CD is a solved problem and there are many open-source projects around it. What role is the Foundation playing in this space?

Tracy Miranda: We know a lot about continuous delivery today and we appreciate that it is really important because it makes such a difference to every business today — not just software companies, but also banks and the healthcare industry. However, the adoption of continuous delivery practices is super low. Many people think they’re doing it, but maybe they’re doing some continuous integration and they haven’t quite figured out how to get through automation.

To top it off, what makes things even more complicated is that we’ve seen the rise of microservices and cloud-native technology. While these give us huge benefits in terms of scalability and make it easier to work on separate parts of the application, they have also increased challenges, like a proliferation of environments and teams having to contend with all the different parts that make up an application.

The Continuous Delivery Foundation is there to help support teams and organizations in the adoption of these practices both from the sense of taking advantage of open source projects in the space and democratizing the best practices. We have a very recent working group that’s spun up to help anyone in this space get better at delivering software.

Swapnil Bhartiya: Security is becoming a serious concern and no longer an after-thought. In most cases, we see that companies were compromised not because of some zero-day, but because they didn’t apply the patch to a known vulnerability. When you have billions of deployments of your applications, it becomes challenging. Talk about the role CD plays in improving security.

Tracy Miranda: Security is a top concern. I think there are lots of different elements to this. On one hand, we talk a lot about shift-left of security. We need to make sure the security professionals and the folks focused on security are tightly involved with the rest of the team. So, there are no silos. People don’t regard security as someone else’s problem. Security starts with the developers.

As an industry, I think it’s really important that we work together to solve industry-level problems such as applying patches that are already available. It’s more or less an outreach problem. We need to be better at telling people to keep their systems updated. We need to cut through the noise of all the different messaging they’re hearing. I think that’s another example where something like the Continuous Delivery Foundation can make a difference in addressing these broad industry problems.

Swapnil Bhartiya: You also mentioned microservices as a challenge for companies. What is being done around solving the problem of continuous delivery for microservices?

Tracy Miranda: That’s a great question. We definitely have the big split of folks who are used to delivering a monolith and have their existing setup all geared towards supporting that. Then, there is an increasing number of folks who are trying to take advantage of microservices and all their implications. One of the hot topics that’s emerged for us is configuration management. The way we think about this is that, earlier, the scope of your application was very well defined. With microservices, the definition of an application changes — it’s a set of microservices. How do we talk about which version of each microservice goes into a specific app? If we are continuously pushing code and integrating that, how are those different versions changing relative to each other? How are we testing it all together? So, we definitely think configuration management is a really hot topic, and people are looking at tooling in the space. I think we have a couple of interesting projects that might be coming into the pipeline at CDF that will specifically help to drive visibility into this space and give people better tooling to manage all the dependencies around microservices.

Swapnil Bhartiya: There are so many projects and open-source tools for CD, which may also lead to a problem of interoperability. How big a concern is this for the Foundation, and what are you doing to increase interoperability among these tools?

Tracy Miranda: Interoperability is one of those problems where, if you’re just working in your own organization, sometimes it’s not really a problem until it’s time to adopt a new tool or add something into your workflow. If we step back and look across the whole industry landscape, at the moment it’s hugely fragmented. There are a lot of tools doing similar things. It’s very difficult for people to move between different CI tools or different pipeline orchestration tools without having to go through a lot of pain to figure out how to do that. Providers have to implement plugins for different systems. It’s a waste of time, and it slows down innovation when we could be moving up the stack.

I think where we are today, there’s a greater appreciation from end users who are saying, “We want to simplify this. We want to find better ways for tools to interoperate.” At CDF, one of the very first special interest groups we had was an interoperability working group. This is a set of like-minded folks who got together and said, “As an industry, we should be better and we can be better. We need to figure that out.” It’s a really good group of folks who build projects like Jenkins X, Tekton, and Spinnaker. We’ve also got a lot of end-user members represented, like Ericsson and eBay, to make sure that as the problems are being solved, they apply to real-world use cases.

It’s an open group and people are welcome to join these conversations. At the moment, there is a discussion on standardizing interfaces or metadata. Why can’t we have a standardized way to express all the metadata around a release or all the metadata around a set of testing results? I am really excited about what this group is doing and look forward to if they can really achieve this very difficult goal and bring some consolidation around the tooling.

Swapnil Bhartiya: One last question before we wrap this up: how is COVID-19 affecting continuous delivery?

Tracy Miranda: It has definitely increased. We have seen some surveys that show that the adoption of continuous delivery is increasing. The pandemic has emphasized the need to be more resilient and to adapt quickly. Most organizations are going to evolve to be very distributed. Continuous delivery practices enable all those things. The companies who are already doing these practices have a significant advantage in times like these. I think one of the benefits we have as a Foundation is that open source has always been about collaboration at scale and in a distributed way. So, we’re hoping we can take all those lessons and marry open-source practices to continuous delivery practices and make it easier for everybody to adopt them. It shouldn’t be something elite that only a few companies could do. It should be something that’s possible and achievable for every company and every organization out there.

Demystifying Ansible for Linux sysadmins


Taking the labor out of labor-intensive tasks is what Ansible is all about. Learn the basics here.
Pratheek Prabhakaran
Tue, 12/8/2020 at 6:54pm


Photo by Aphiwat chuangchoem from Pexels

The life of a sysadmin involves installation, configuration, performing regular system upgrade and maintenance activities, provisioning, system monitoring, vulnerability mitigation, troubleshooting issues, and much more. Many sysadmin actions consist of step-by-step tasks performed methodically. So how can we make the life of a sysadmin easier?

[ Readers also enjoyed: An introduction to Ansible Tower ]

Topics: Linux, Ansible, Automation
Read More at Enable Sysadmin

Managing Linux users with the passwd command

Linux authentication is primarily handled with passwords and public keys. Find out how the passwd command fits into the user management process.
Read More at Enable Sysadmin

Download the Report on the 2020 FOSS Contributor Survey

Free and Open Source Software (FOSS) has become a critical part of the modern economy. It has been estimated that FOSS constitutes 80-90% of any given piece of modern software, and software is an increasingly vital resource in nearly all industries. This heavy reliance on FOSS is common in both the public and private sectors, in both tech and non-tech organizations. Therefore, ensuring the health and security of FOSS is critical to the future of nearly all industries in the modern economy.

To better understand the state of security and sustainability in the FOSS ecosystem, and how organizations and companies can support it, the Linux Foundation’s Core Infrastructure Initiative (CII) and the Laboratory for Innovation Science at Harvard (LISH) collaborated to conduct a widespread survey of FOSS contributors as part of larger efforts to take a pre-emptive approach to strengthening cybersecurity by improving open-source software security.

These efforts — recently incorporated into the Open Source Security Foundation (OpenSSF) working group on securing critical projects — aim to support, protect, and fortify open software, especially software critical to the global information infrastructure.

This survey’s primary goal is to identify how best to improve FOSS’s security and sustainability — especially those projects that are widely relied upon by the modern economy. Specifically, the survey seeks to help answer the question,

“How can we better incentivize adequate maintenance and security of the most used FOSS projects?”

Importantly, in conducting this survey, the research team sought to take a holistic view of security. The methodology for recruiting survey participants emphasized contributors to FOSS projects that have been identified as widely used via previous research that culminated in the release of “CII Census II Preliminary Report – Vulnerabilities in the Core.”

This new report summarizes the results of a survey of free/open source software (FOSS) developers in 2020. The goal was to identify key issues in improving FOSS’s security and sustainability since the world now depends on it as a critical infrastructure that underlies the modern economy. 

To capture a cross-section of the FOSS community, the research team distributed the survey to contributors to the most widely used open source projects and invited the wider FOSS contributor community through an open invitation. It captured more technical aspects of security and also considered the more human side. 

The survey included questions about contributor motivations and level of involvement, corporate involvement in FOSS, the role of economic considerations in contribution behavior, and sought to answer the following:

  1. Demographics: What are the demographics of FOSS contributors? In particular, what are their gender, employment, and geographic location?
  2. Motivations: What are their reasons for starting, continuing, or stopping contributions to FOSS? How can projects keep contributors engaged, and do contributors feel that their employers or others value their work?
  3. Pay: How many FOSS contributors are paid for their work on FOSS? If paid, by whom (e.g., by employers and/or corporate sponsorship)? If they are not, does the lack of payment lead to significantly poorer security or sustainability?
  4. Time Spent: How much time do contributors spend contributing to FOSS, and how would they like to spend it? Is there an interest in increasing time spent on security issues?
  5. Aid: What kinds of actions from external actors would help improve security (e.g., code contributions and/or money)?
  6. Current activity: What kinds of security-related activities are already taking place in the FOSS projects represented by the respondents?
  7. Education/training: How much education/training have FOSS contributors had in secure software development and operations? From which sources did they receive it?

The goals in running this survey were to understand the state of security and sustainability in FOSS and identify opportunities to improve them, and ensure FOSS’s viability in the future. In particular, this survey focused on the “human side” of FOSS, more than the technical side, although the two are certainly inter-related, and these findings relate to both. 

The results identified reasons for optimism about the future of FOSS (individuals are continuing to contribute to FOSS, companies are becoming friendlier to FOSS to the point of paying some employees to contribute, etc.), but also areas of concern (in particular, the lack of security-related efforts, and potential difficulties in motivating such efforts). 

In the end, free and open source software is, and always has been, a community-driven effort that has led to the development of some of the most critical building blocks of the modern economy. This survey highlights the importance of the security of this important dynamic asset. Likewise, it will take a community-driven effort, including individuals, companies, and institutions, to ensure FOSS is secure and sustainable for future generations.

Authors:

  • Frank Nagle, Harvard Business School
  • David A. Wheeler, The Linux Foundation
  • Hila Lifshitz-Assaf, New York University 
  • Haylee Ham, Laboratory for Innovation Science at Harvard
  • Jennifer L. Hoffman, Laboratory for Innovation Science at Harvard 

Download Report


Linux troubleshooting: Navigating in a perfect storm

A high-level walkthrough of CI/CD Automation troubleshooting techniques with multiple, significantly impeding factors blocking progress.
Read More at Enable Sysadmin

How to encrypt a single Linux filesystem


Sure, you can manually encrypt a filesystem. But, you can also automate it with Ansible.
Peter Gervase
Mon, 12/7/2020 at 4:52pm


Photo by PhotoMIX Company from Pexels

There are a few different reasons you might want to encrypt a filesystem, such as protecting sensitive information while it’s at rest or not having to worry about encrypting individual files on the filesystem. To manually encrypt a filesystem in Red Hat Enterprise Linux (RHEL), you can use the cryptsetup command. This article will walk you through how to use Ansible to do this for you on a RHEL 8 server.
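For a taste of the manual approach that the article automates with Ansible, here is a minimal LUKS sketch using cryptsetup; the device name /dev/sdb1, the mapper name secure_data, and the choice of XFS are all illustrative, and luksFormat destroys any existing data on the device:

$ sudo cryptsetup luksFormat /dev/sdb1          # initialize LUKS encryption (destructive)
$ sudo cryptsetup open /dev/sdb1 secure_data    # unlock as /dev/mapper/secure_data
$ sudo mkfs.xfs /dev/mapper/secure_data         # create a filesystem on the mapped device
$ sudo mount /dev/mapper/secure_data /mnt       # mount it like any other filesystem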

Topics: Linux, Linux Administration, Security
Read More at Enable Sysadmin

6 essential SSH guides for sysadmins

SSH continues to be a go-to command line tool for system administrators. These six guides reveal key ways that SSH plays a crucial role in getting the job done.
Read More at Enable Sysadmin