
What’s the Difference Between a Fork and Clone?

The concept of forking a project has existed for decades in free and open source software. To “fork” means to take a copy of the project, rename it, and start a new project and community around the copy. Those who fork a project rarely, if ever, contribute to the parent project again. It’s the software equivalent of the Robert Frost poem: Two paths diverged in a codebase and I, I took the one less traveled by…and that has made all the difference.

There can be many reasons for a project fork. Perhaps the project has lain fallow for a while and someone wants to revive it. Perhaps the company that has underwritten the project has been acquired and the community is afraid that the new parent company may close the project. Or perhaps there’s a schism within the community itself, where a portion of the community has decided to go a different direction with the project. Often a project fork is accompanied by a great deal of discussion and possibly also community strife. Whatever the reason, a project fork is the copying of a project with the purpose of creating a new and separate community around it. 

Read more at OpenSource.com

What Are Microservices? Lightweight Software Development Explained

Microservices architecture breaks down large monolithic applications, with their massive and complex internal architectures, into smaller, independently scalable applications. Each microservice is small and less complex to develop, update, and deploy.

When you think about it, why should those functionalities need to be built into a single application in the first place? In theory, at least, you can imagine them living in separate application and data silos without major problems. For example, if the average auction received two bids, but only a quarter of all sales received feedback, the bidding service would be at least eight times as active as the feedback application at any time of day. If these were combined into a single application, you would end up running—and updating—more code than you need more often. The bottom line: Separating different functionality groups into separate applications makes intuitive sense.

Read more at InfoWorld

How to Containerize GPU Applications

By providing self-contained execution environments without the overhead of a full virtual machine, containers have become an appealing proposition for deploying applications at scale. The credit goes to Docker for making containers easy to use and hence popular. From enabling multiple engineering teams to play around with their own configuration for development, to benchmarking or deploying a scalable microservices architecture, containers are finding uses everywhere.

GPU-based applications, especially in the deep learning field, are rapidly becoming part of the standard workflow; deploying, testing and benchmarking these applications in a containerized application has quickly become the accepted convention. But the native implementation of Docker containers does not support NVIDIA GPUs yet — that’s why we developed the nvidia-docker plugin. Here I’ll walk you through how to use it.

Read more at SuperUser

LiFT Scholarship Winners: Teens and Academic Aces Learn Open Source Skills

Four people have been named recipients of the seventh annual Linux Foundation Training (LiFT) Scholarships for 2017 in the “Academic Aces” and “Teens in Training” categories.

Teens in Training

Vinícius Almeida

Vinícius Almeida, 15, of Brazil, is the youngest recipient of an award from the foundation this year. Although he is a high school freshman, Almeida is already taking computer science courses at the Federal University of Bahia. He has written several articles on robotics and open source technologies, and is active in his local hackerspace, the Raul Hacker Club.

Almeida also volunteers to write browser extensions for the GNU Project. Almeida says he hopes the knowledge he gains from this scholarship will help him convince more individuals in Brazil to adopt open source.

“I can’t imagine my life without FOSS technologies!’’ he wrote in his application. “I love using Linux every day, and learning more about open source has already changed my opinion in lots of discussions.” Almeida added that he is further developing his programming skills every day, thanks to the open source community. “My future is FOSS technologies; today I’m using most of them, but soon I want to develop them [for] the community.”

Sydney Dykstra

Sydney Dykstra, 18, of the United States, is the second scholarship recipient in the Teens in Training category. A recent high school graduate, Dykstra has been contributing to several open source projects, including the games The Secret Chronicles of Dr. M. and SuperTux. His goal is to become a Linux systems administrator, and he hopes the scholarship will jump-start that.

“I believe that open source is the future for everything computer related, online and offline, and necessary… if we are to have a ‘free’ world where we are not worried about someone else watching us or taking advantage of our info,’’ he wrote in his scholarship application.

Dykstra says he wants to become a Linux systems administrator, not only because he enjoys working with Linux systems but because of the freedom and flexibility it provides him. “I’m only a beginner,” he wrote, “but have been using Linux for nearly five years now and have been learning more as I go.”

Academic Aces

Asirifi Charles

Asirifi Charles, 22, of Ghana, is a recipient in the Academic Aces category. He is in his final year studying computer science at the University of Ghana. Charles taught himself about web development through free online resources, and recently became interested in open source, completing the free Intro to Linux course on edX. He hopes this scholarship will help him expand his open source expertise, so he can share it with others in Ghana, where it is difficult to access an IT education. 

“Open source lets you share your contribution while learning to better your skills,’’ he wrote in his application.

Camilo Andres Cortes Hernandez

Camilo Andres Cortes Hernandez, 31, of Colombia, is the other scholarship winner in the Academic Aces category. Hernandez studies technology at EAN University in Colombia, where he also runs a nonprofit that teaches individuals about cloud computing. His focus is currently on Azure, and he hopes the scholarship will help him to obtain the MCSA: Linux on Azure certification from The Linux Foundation and Microsoft.

Not only will the scholarship improve his career, he wrote, but it will also help others to embrace open source solutions because of his work in the community. Recently, Hernandez says, he was discussing open source solutions on Azure during a free cloud event, and received good feedback.

“I want to keep teaching others about cloud and top trending technologies, especially open source solutions that can run on environments like Azure. I have a goal within my community (CloudFirst Campus) to teach people about the interoperability of solutions no matter if they are private or open — you can run anything on the cloud.” 

The Linux Foundation Training Scholarships cover the expenses for one class to be chosen by each recipient from the Scholarship Track choices, representing thousands of dollars in value (travel expenses for in-person classes are not included). 

Winners in all categories may also elect to take a Linux Foundation Certified System Administrator, Linux Foundation Certified Engineer, Certified OpenStack Administrator, Cloud Foundry Certified Developer or Certified Kubernetes Administrator exam at no cost following the completion of their training course.

Scholarships are supported by The Linux Foundation members seeking to help train the developers and IT professionals of the future.

Learn more about the LiFT Scholarship program from The Linux Foundation.

How Kubernetes Resource Classes Promise to Change the Landscape for New Workloads

The Colin Powell rule states that you should make a decision when you have 40 percent to 70 percent of the information necessary to make the decision. With Linux container technology like Kubernetes evolving so quickly, it’s difficult for companies to feel like they have 40 percent of the information they need, let alone 70 percent.

Customers often approach me and others at Red Hat to help them get beyond the 40 percent mark to make a decision about Red Hat OpenShift, which is based on Kubernetes.

For many of these customers, public cloud has become commonplace for workloads. However, translating their on-premise architecture into a proper design/architecture for each cloud is challenging (to say the least) in terms of both time and cost. An architecture that works the same, everywhere, is the promise of Kubernetes and OpenShift, but it’s also one of the heaviest burdens for engineers.

This contributed article is part of a series in advance of Kubecon/CloudNativeCon, taking place in Austin, Dec. 6 – 8.

Read more at The New Stack

The OpenChain Project: From A to Community

Communities form in open source all the time to address challenges. The majority of these communities are based around code, but others cover topics as diverse as design or governance. The OpenChain Project is a great example of the latter. What began three years ago as a conversation about reducing overlap, confusion, and wasted resources with respect to open source compliance is now poised to become an industry standard.

The idea to develop an overarching standard to describe what organizations could and should do to address open source compliance efficiently gained momentum until the formal project was born. The basic idea was simple: identify key recommended processes for effective open source management. The goal was equally clear: reduce bottlenecks and risk when using third-party code to make open source license compliance simple and consistent across the supply chain. The key was to pull things together in a manner that balanced comprehensiveness, broad applicability, and real-world usability.

Read more at The Linux Foundation


Blockchains Are Poised to End the Password Era

Blockchain technology can eliminate the need for companies and other organizations to maintain centralized repositories of identifying information, and users can gain permanent control over who can access their data (hence “self-sovereign”), says Drummond Reed, chief trust officer at Evernym, a startup that’s developing a blockchain network specifically for managing digital identities.

Self-sovereign identity systems rely on public-key cryptography, the same kind that blockchain networks use to validate transactions. Although it’s been around for decades, the technology has thus far proved difficult to implement for consumer applications. But the popularity of cryptocurrencies has inspired fresh commercial interest in making it more user-friendly.

Read more at Technology Review

How to Set Up Private DNS Servers with BIND on Ubuntu 16.04

BIND (Berkeley Internet Name Domain) is the most widely used DNS software on the Internet. The BIND package is available for all major Linux distributions, which makes installation simple and straightforward. In today’s article we will show you how to install, configure and administer BIND 9 as a private DNS server on an Ubuntu 16.04 VPS, in a few steps.

Requirements:

  • Two servers (ns1 and ns2) connected to a private network
  • In this tutorial we will use the 10.20.0.0/16 subnet
  • DNS clients that will connect to your DNS servers


1. Update both servers

Begin by updating the packages on both servers:

# sudo apt-get update

2. Install BIND on both servers

# sudo apt-get install bind9 bind9utils

3. Set BIND to IPv4 mode

We will set BIND to IPv4 mode by editing the “/etc/default/bind9” file and adding “-4” to the OPTIONS variable:

# sudo nano /etc/default/bind9

The edited file should look something like this:

# run resolvconf?
RESOLVCONF=no

# startup options for the server
OPTIONS="-4 -u bind"

Now let’s configure ns1, our primary DNS server.

4. Configuring the Primary DNS Server

Edit the named.conf.options file:

# sudo nano /etc/bind/named.conf.options

Above the options block, add a new ACL block called “trusted”. This list will allow the clients specified in it to send recursive DNS queries to our primary server:

acl "trusted" {
        10.20.30.13;  
        10.20.30.14;
        10.20.55.154;
        10.20.55.155;
};

5. Enable recursive queries on our ns1 server, and have the server listen on our private network

Next we will add a few configuration settings to enable recursive queries on our ns1 server and to have the server listen on our private network. Add the settings below the directory “/var/cache/bind” directive, as in the example below:

options {
        directory "/var/cache/bind";

        recursion yes;
        allow-recursion { trusted; };
        listen-on { 10.20.30.13; };
        allow-transfer { none; };

        forwarders {
                8.8.8.8;
                8.8.4.4;
        };
};

If the “listen-on-v6” directive is present in the named.conf.options file, delete it, as we want BIND to listen only on IPv4.

Now on ns1, open the named.conf.local file for editing:

# sudo nano /etc/bind/named.conf.local

Here we are going to add the forward zone:

zone "test.example.com" {
    type master;
    file "/etc/bind/zones/db.test.example.com";
    allow-transfer { 10.20.30.14; };
};

Our private subnet is 10.20.0.0/16, so we are going to add the reverse zone with the following lines:

zone "20.10.in-addr.arpa" {
    type master;
    file "/etc/bind/zones/db.10.20";
    allow-transfer { 10.20.30.14; };
};

If your servers are in multiple private subnets in the same physical location, you need to specify a zone and create a separate zone file for each subnet.
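For example, if you also had a hypothetical 10.30.0.0/16 subnet (not part of this tutorial’s setup), you would declare a second reverse zone pointing at its own zone file:

```
zone "30.10.in-addr.arpa" {
    type master;
    file "/etc/bind/zones/db.10.30";
    allow-transfer { 10.20.30.14; };
};
```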

6. Creating the Forward Zone File

Now we’ll create the directory where we will store our zone files:

# sudo mkdir /etc/bind/zones

We will use the sample db.local file as a template for our forward zone file. Let’s copy the file first:

# cd /etc/bind/zones
# sudo cp ../db.local ./db.test.example.com

Now edit the forward zone file we just copied:

# sudo nano /etc/bind/zones/db.test.example.com

It should look something like the example below:

$TTL    604800
@       IN      SOA     localhost. root.localhost. (
                              2         ; Serial
                         604800         ; Refresh
                          86400         ; Retry
                        2419200         ; Expire
                         604800 )       ; Negative Cache TTL
;
@       IN      NS      localhost.      ; delete this
@       IN      A       127.0.0.1       ; delete this
@       IN      AAAA    ::1             ; delete this

Now let’s edit the SOA record. Replace localhost with your ns1 server’s FQDN, then replace “root.localhost” with “admin.test.example.com”. Every time you edit the zone file, increment the serial value before you restart named; otherwise BIND won’t apply the change to the zone. We will increment the value to “3”, so it should look something like this:

@       IN      SOA     ns1.test.example.com. admin.test.example.com. (
                              3         ; Serial

Then delete the last three records that are marked with “delete this” after the SOA record.

Add the nameserver records at the end of the file:

; name servers - NS records
    IN      NS      ns1.test.example.com.
    IN      NS      ns2.test.example.com.

After that add the A records for the hosts that need to be in this zone. That means any server whose name we want to end with “.test.example.com”:

; name servers - A records
ns1.test.example.com.          IN      A       10.20.30.13
ns2.test.example.com.          IN      A       10.20.30.14

; 10.20.0.0/16 - A records
host1.test.example.com.        IN      A      10.20.55.154
host2.test.example.com.        IN      A      10.20.55.155

The db.test.example.com file should look something like the following:

$TTL    604800
@       IN      SOA     ns1.test.example.com. admin.test.example.com. (
                  3       ; Serial
             604800     ; Refresh
              86400     ; Retry
            2419200     ; Expire
             604800 )   ; Negative Cache TTL
;
; name servers - NS records
     IN      NS      ns1.test.example.com.
     IN      NS      ns2.test.example.com.

; name servers - A records
ns1.test.example.com.          IN      A       10.20.30.13
ns2.test.example.com.          IN      A       10.20.30.14

; 10.20.0.0/16 - A records
host1.test.example.com.        IN      A      10.20.55.154
host2.test.example.com.        IN      A      10.20.55.155
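Since the serial must be incremented on every edit, it can help to script the bump. Below is a minimal sketch of a hypothetical helper (not part of BIND) that assumes the stock “; Serial” comment is still on the serial line; it is demonstrated on a scratch copy rather than the live zone file:

```shell
# Scratch stand-in for /etc/bind/zones/db.test.example.com
zonefile="/tmp/db.serial.demo"
printf '        2   ; Serial\n' > "$zonefile"

# Bump the first number on the line carrying the "; Serial" comment
awk '/; Serial/ { sub(/[0-9]+/, $1 + 1) } 1' "$zonefile" > "$zonefile.new" \
  && mv "$zonefile.new" "$zonefile"

cat "$zonefile"   # the serial is now 3
```

Point zonefile at the real file once you trust the result.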

7. Creating the Reverse Zone File

We specify the PTR records for reverse DNS lookups in the reverse zone files. When the DNS server receives a PTR lookup query, for example for the IP “10.20.55.154”, it will check the reverse zone file to retrieve the FQDN of the IP address; in our case that would be “host1.test.example.com”.

We will create a reverse zone file for every single reverse zone specified in the named.conf.local file we created on ns1. We will use the sample db.127 zone file to create our reverse zone file:

# cd /etc/bind/zones
# sudo cp ../db.127 ./db.10.20

Edit the reverse zone file so it matches the reverse zone defined in named.conf.local:

# sudo nano /etc/bind/zones/db.10.20

The original file should look something like the following:

$TTL    604800
@       IN      SOA     localhost. root.localhost. (
                              1         ; Serial
                         604800         ; Refresh
                          86400         ; Retry
                        2419200         ; Expire
                         604800 )       ; Negative Cache TTL
;
@       IN      NS      localhost.      ; delete this
1.0.0   IN      PTR     localhost.      ; delete this

You should modify the SOA record and increment the serial value. It should look something like this:

@       IN      SOA     ns1.test.example.com. admin.test.example.com. (
                              3         ; Serial

Then delete the two records that are marked with “delete this” after the SOA record.

Add the nameserver records at the end of the file:

; name servers - NS records
      IN      NS      ns1.test.example.com.
      IN      NS      ns2.test.example.com.

Now add the PTR records for all hosts on this subnet to the zone file you created. That covers our hosts on the 10.20.0.0/16 subnet. In the first column, we reverse the order of the last two octets of each host’s IP address:

; PTR Records
13.30  IN      PTR     ns1.test.example.com.    ; 10.20.30.13
14.30  IN      PTR     ns2.test.example.com.    ; 10.20.30.14
154.55 IN      PTR     host1.test.example.com.  ; 10.20.55.154
155.55 IN      PTR     host2.test.example.com.  ; 10.20.55.155

Save and exit the reverse zone file.

The “/etc/bind/zones/db.10.20” reverse zone file should look something like this:

$TTL    604800
@       IN      SOA     ns1.test.example.com. admin.test.example.com. (
                              3         ; Serial
                         604800         ; Refresh
                          86400         ; Retry
                        2419200         ; Expire
                         604800 )       ; Negative Cache TTL
; name servers
      IN      NS      ns1.test.example.com.
      IN      NS      ns2.test.example.com.

; PTR Records
13.30  IN      PTR     ns1.test.example.com.    ; 10.20.30.13
14.30  IN      PTR     ns2.test.example.com.    ; 10.20.30.14
154.55 IN      PTR     host1.test.example.com.  ; 10.20.55.154
155.55 IN      PTR     host2.test.example.com.  ; 10.20.55.155
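The octet reversal used in the PTR record names above can be sketched for a single address; within the zone 20.10.in-addr.arpa, the record name is just the last two octets in reverse order:

```shell
ip="10.20.55.154"

# Split the address into its four octets
oldIFS=$IFS; IFS=.
set -- $ip              # $1=10 $2=20 $3=55 $4=154
IFS=$oldIFS

echo "record name in zone: $4.$3"                      # 154.55
echo "full PTR owner name: $4.$3.$2.$1.in-addr.arpa"   # 154.55.20.10.in-addr.arpa
```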

8. Check the Configuration Files

Use the following command to check the configuration syntax of all the named.conf files that we configured:

# sudo named-checkconf

If your configuration files don’t have any syntax problems, the output will not contain any error messages. However, if there are problems, compare the settings in the “Configuring the Primary DNS Server” section with the files that have errors, make the necessary adjustments, and then run named-checkconf again.

The named-checkzone utility can be used to check that your zone files are properly configured. You can use the following command to check the forward zone “test.example.com”:

# sudo named-checkzone test.example.com /etc/bind/zones/db.test.example.com

And if you want to check the reverse zone configuration, execute the following command:

# sudo named-checkzone 20.10.in-addr.arpa /etc/bind/zones/db.10.20

Once all the configuration and zone files are in order, restart the BIND service:

# sudo service bind9 restart

9. Configuring the Secondary DNS Server

Setting up a secondary DNS server is always a good idea as it will serve as a failover and will respond to queries if the primary server is unresponsive.

On ns2, edit the named.conf.options file:

# sudo nano /etc/bind/named.conf.options

At the top of the file, add the ACL with the private IP addresses for all your trusted servers:

acl "trusted" {
        10.20.30.13;
        10.20.30.14;
        10.20.55.154;
        10.20.55.155;
};

Just as in the named.conf.options file on ns1, add the following lines below the directory “/var/cache/bind” directive:

        recursion yes;
        allow-recursion { trusted; };
        listen-on { 10.20.30.14; };
        allow-transfer { none; };

        forwarders {
                8.8.8.8;
                8.8.4.4;
        };

Save and exit the file.

Now open the named.conf.local file for editing:

# sudo nano /etc/bind/named.conf.local

Now we should specify slave zones that match the master zones on the ns1 DNS server. The masters directive should be set to the ns1 DNS server’s private IP address:

zone "test.example.com" {
    type slave;
    file "slaves/db.test.example.com";
    masters { 10.20.30.13; };
};

zone "20.10.in-addr.arpa" {
    type slave;
    file "slaves/db.10.20";
    masters { 10.20.30.13; };
};

Now save and exit the file.

Use the following command to check the syntax of the configuration files:

# sudo named-checkconf

Then restart the BIND service:

# sudo service bind9 restart

10. Configure the DNS Clients

We will now configure the hosts in our 10.20.0.0/16 subnet to use the ns1 and ns2 servers as their primary and secondary DNS servers. This greatly depends on the OS the hosts are running but for most Linux distributions the settings that need to be changed reside in the /etc/resolv.conf file.

On the Ubuntu, Debian, and CentOS distributions, just edit the /etc/resolv.conf file. Execute the following command as root:

# nano /etc/resolv.conf

Then replace the existing nameservers with:

nameserver 10.20.30.13 #ns1
nameserver 10.20.30.14 #ns2

Now save and exit the file and your client should be configured to use the ns1 and ns2 nameservers.

Then test if your clients can send queries to the DNS servers you just configured:

# nslookup host1.test.example.com

The output from this command should be:

Output:
Server:     10.20.30.13
Address:    10.20.30.13#53

Name:   host1.test.example.com
Address: 10.20.55.154

You can also test the reverse lookup by querying the DNS server with the IP address of the host:

# nslookup 10.20.55.154

The output should look like this:

Output:
Server:     10.20.30.13
Address:    10.20.30.13#53

154.55.20.10.in-addr.arpa   name = host1.test.example.com.

Check that all of the hosts resolve correctly using the commands above; if they do, you’ve configured everything properly.

Adding a New Host to Your DNS Servers

If you need to add a host to your DNS servers just follow the steps below:

On the ns1 nameserver do the following:

  • Create an A record in the forward zone file for the host and increment the value of the Serial variable.
  • Create a PTR record in the reverse zone file for the host and increment the value of the Serial variable.
  • Add your host’s private IP address to the trusted ACL in named.conf.options.
  • Reload BIND using the following command: sudo service bind9 reload

On the ns2 nameserver do the following:

  • Add your host’s private IP address to the trusted ACL in named.conf.options.
  • Reload BIND using the following command: sudo service bind9 reload

On the host machine do the following:

  • Edit /etc/resolv.conf and change the nameservers to your DNS servers.
  • Use nslookup to test if the host queries your DNS servers.
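As a concrete sketch, adding a hypothetical host3 at 10.20.55.156 would come down to these two records (plus the serial bump in each zone file):

```
; in /etc/bind/zones/db.test.example.com (forward zone)
host3.test.example.com.        IN      A       10.20.55.156

; in /etc/bind/zones/db.10.20 (reverse zone)
156.55 IN      PTR     host3.test.example.com.  ; 10.20.55.156
```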

Removing an Existing Host from Your DNS Servers

If you want to remove the host from your DNS servers just undo the steps above.

Note: Please substitute the names and IP addresses used in this tutorial with the names and IP addresses of the hosts in your own private network.

Linux Kernel 4.15 Gets a Slightly Bigger Second RC, Linus Torvalds Isn’t Worried

The development cycle of the upcoming Linux 4.15 kernel continues with the second Release Candidate, which was announced this past weekend by Linus Torvalds.

Linus Torvalds kicked off the development of Linux kernel 4.15 last week when he announced the first Release Candidate milestone, which contained most of the changes that will land in the final version, due for release next year. Now he has announced the second RC, which is slightly bigger than the first one.

“It’s a slightly bigger RC2 than I would have wished for, but this early in the release process I don’t worry about it,” said Linus Torvalds in the mailing list announcement, which contains the shortlog with details on the fixes implemented in this second Release Candidate.

Read more at Softpedia

How to Manage Users with Groups in Linux

When you administer a Linux machine that houses multiple users, there might be times when you need to take more control over those users than the basic user tools offer. This idea comes to the fore especially when you need to manage permissions for certain users. Say, for example, you have a directory that needs to be accessed with read/write permissions by one group of users and only read permissions for another group. With Linux, this is entirely possible. To make this happen, however, you must first understand how to work with users, via groups and access control lists (ACLs).

We’ll start from the beginning with users and work our way up to the more complex ACLs. Everything you need to make this happen will be included in your Linux distribution of choice. We won’t touch on the basics of users, as the focus of this article is groups.

For the purpose of this piece, I’m going to assume the following:

You need to create two users with usernames:

  • olivia

  • nathan

You need to create two groups:

  • readers

  • editors

The user olivia needs to be a member of the group editors, while nathan needs to be a member of the group readers. The group readers needs only read permission to the directory /DATA, whereas the group editors needs both read and write permission to the /DATA directory. This, of course, is a minimal setup, but it will give you the basic information you need to expand the tasks to fit your much larger needs.

I’ll be demonstrating on the Ubuntu 16.04 Server platform. The commands will be universal—the only difference would be if your distribution of choice doesn’t make use of sudo. If this is the case, you’ll have to first su to the root user to issue the commands that require sudo in the demonstrations.

Creating the users

The first thing we need to do is create the two users for our experiment. User creation is handled with the useradd command. Instead of simply creating the users, we will create them both with their own home directories and then give them passwords.

The first thing we do is create the users. To do this, issue the commands:

sudo useradd -m olivia

sudo useradd -m nathan

We have now created our users. If you look in the /home directory, you’ll find their respective homes (because we used the -m option, which creates a home directory).

Next each user must have a password. To add passwords into the mix, you’d issue the following commands:

sudo passwd olivia

sudo passwd nathan

When you run each command, you will be prompted to enter (and verify) a new password for each user.

That’s it, your users are created.

Creating groups and adding users

Now we’re going to create the groups readers and editors and then add users to them. The commands to create our groups are:

sudo addgroup readers

sudo addgroup editors

That’s it. If you issue the command less /etc/group, you’ll see our newly created groups listed (Figure 1).

Figure 1: Our new groups ready to be used.

With our groups created, we need to add our users. We’ll add user nathan to group readers with the command:

sudo usermod -a -G readers nathan

We’ll add the user olivia to the group editors with the command:

sudo usermod -a -G editors olivia

Now we’re ready to start managing the users with groups.

Giving groups permissions to directories

Let’s say you have the directory /READERS and you need to allow all members of the readers group access to that directory. First, change the group of the folder with the command:

sudo chown -R :readers /READERS 

Next, remove write permission from the group with the command:

sudo chmod -R g-w /READERS

Now we remove the others x bit from the /READERS directory (to prevent any user not in the readers group from accessing any file within) with the command:

sudo chmod -R o-x /READERS

At this point, only the owner of the directory (root) and the members of the readers group can access any file within /READERS.

Let’s say you have the directory /EDITORS and you need to give members of the editors group read and write permission to its contents. To do that, the following command would be necessary:

sudo chown -R :editors /EDITORS

sudo chmod -R g+w /EDITORS

sudo chmod -R o-x /EDITORS

At this point, any member of the editors group can access and modify files within. All others (minus root) have no access to the files and folders within /EDITORS.
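The effect of the g+w/o-x combination can be verified on a scratch directory (using a directory under /tmp here and skipping the chown, since that needs the editors group to exist):

```shell
mkdir -p /tmp/editors_demo
chmod 775 /tmp/editors_demo        # start from rwxrwxr-x
chmod g+w,o-x /tmp/editors_demo    # group keeps write; others lose execute
stat -c '%A' /tmp/editors_demo     # prints drwxrwxr--
```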

The problem with using this method is you can only add one group to a directory at a time. This is where access control lists come in handy.

Using access control lists

Now, let’s get tricky. Say you have a single folder, /DATA, and you want to give members of the readers group read permission and members of the group editors read/write permissions. To do that, you must take advantage of the setfacl command. The setfacl command sets file access control lists for files and folders.

The structure of this command looks like this:

setfacl OPTION X:NAME:Y /DIRECTORY

Where OPTION is one of the available options, X is either u (for user) or g (for group), NAME is the name of the user or group, Y is the permissions to set, and DIRECTORY is the directory to be used. We’ll be using the option -m, for modify. So our command to give the group readers read access to the /DATA directory would look like this:

sudo setfacl -m g:readers:rx -R /DATA

Now any member of the readers group can read the files contained within /DATA, but they cannot modify them.

To give members of the editors group read/write permissions (while retaining read permissions for the readers group), we’d issue the command:

sudo setfacl -m g:editors:rwx -R /DATA 

The above command would give any member of the editors group both read and write permission, while retaining the read-only permissions to the readers group.

All the control you need

And there you have it. You can now add members to groups and control those groups’ access to various directories with all the power and flexibility you need. To read more about the above tools, issue the commands:

  • man useradd

  • man addgroup

  • man usermod

  • man setfacl

  • man chown

  • man chmod

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.