
One Month Left to Submit Your Talk to ELC + OpenIoT Summit NA 2018

Embedded Linux Conference (ELC), happening March 12-14 in Portland, OR, gathers kernel and systems developers along with the technologists building applications on embedded Linux platforms. Attendees learn about the newest and most interesting embedded technologies, gain access to leading experts, have fascinating discussions, collaborate with peers, and gain a competitive advantage with innovative embedded Linux solutions.

View Suggested Topics and Submit a Proposal to Speak

Co-located with ELC, the OpenIoT Summit serves the unique needs of system architects, firmware developers, and software developers in the booming IoT ecosystem. Join experts from the world’s leading companies and open source projects to share the information needed to lead successful IoT deployments and advance the development of IoT solutions.

View Suggested Topics and Submit a Proposal to Speak

Linux Foundation events are an excellent way to get to know the community and share your ideas and the work that you are doing. If you haven’t presented at ELC + OpenIoT Summit NA or other conferences before, we’d especially like to hear from you! If you aren’t sure about your abstract, reach out to us and we will be more than happy to work with you on your proposal.

Sign up for ELC/OpenIoT Summit updates to get the latest information:

Predictive Analytics in the Multicloud

Cloud computing has plenty of complexities. And while many IT leaders would prefer a unified infrastructure, wherein the business standardizes on one or two cloud vendors, that is not going to happen in the real world.

The reason is simple: Applications the business depends on reside on a variety of clouds. Forcing users to stop using some applications and services in the interest of simplifying the company’s cloud mix is unreasonable. That means a multicloud strategy—managing multiple clouds simultaneously—is the only logical recourse.

Even so, managing the multicloud is a difficult task and fraught with often-unexpected obstacles. For example, abstracting the platform—simplifying the user interface by pushing complex details, such as computer code, to a lower level on the platform—is helpful for developers and users, but it can be more complicated for the IT operations staff. This sort of complexity increases management issues.

Read more at HPE 

What’s the Difference Between a Fork and Clone?

The concept of forking a project has existed for decades in free and open source software. To “fork” means to take a copy of the project, rename it, and start a new project and community around the copy. Those who fork a project rarely, if ever, contribute to the parent project again. It’s the software equivalent of the Robert Frost poem: Two paths diverged in a codebase and I, I took the one less traveled by…and that has made all the difference.

There can be many reasons for a project fork. Perhaps the project has lain fallow for a while and someone wants to revive it. Perhaps the company that has underwritten the project has been acquired and the community is afraid that the new parent company may close the project. Or perhaps there’s a schism within the community itself, where a portion of the community has decided to go a different direction with the project. Often a project fork is accompanied by a great deal of discussion and possibly also community strife. Whatever the reason, a project fork is the copying of a project with the purpose of creating a new and separate community around it. 

Read more at OpenSource.com

What Are Microservices? Lightweight Software Development Explained

Microservices architecture breaks large monolithic applications, with their massive and complex internal architectures, into smaller, independently scalable applications. Each microservice is small and less complex to develop, update, and deploy.

When you think about it, why should those functionalities need to be built into a single application in the first place? In theory, at least, you can imagine them living in separate application and data silos without major problems. For example, if the average auction received two bids but only a quarter of all sales received feedback, the bidding service would be at least eight times as active as the feedback service at any time of day. If these were combined into a single application, you would end up running, and updating, more code than you need, more often. The bottom line: separating different functionality groups into separate applications makes intuitive sense.

Read more at InfoWorld

How to Containerize GPU Applications

By providing self-contained execution environments without the overhead of a full virtual machine, containers have become an appealing proposition for deploying applications at scale. Much of the credit goes to Docker for making containers easy to use and hence popular. From enabling multiple engineering teams to experiment with their own configurations for development, to benchmarking, to deploying a scalable microservices architecture, containers are finding uses everywhere.

GPU-based applications, especially in the deep learning field, are rapidly becoming part of the standard workflow; deploying, testing, and benchmarking these applications in containers has quickly become the accepted convention. But the native Docker runtime does not yet support NVIDIA GPUs, which is why we developed the nvidia-docker plugin. Here I’ll walk you through how to use it.

Read more at SuperUser

LiFT Scholarship Winners: Teens and Academic Aces Learn Open Source Skills

Four people have been named recipients of the seventh annual Linux Foundation Training (LiFT) Scholarships for 2017 in the “Academic Aces” and “Teens in Training” categories.

Teens in Training

Vinícius Almeida

Vinícius Almeida, 15, of Brazil, is the youngest recipient of an award from the foundation this year. Although he is a high school freshman, Almeida is already taking computer science courses at the Federal University of Bahia. He has written several articles on robotics and open source technologies, and is active in his local hackerspace, the Raul Hacker Club.

Almeida also volunteers to write browser extensions for the GNU Project. Almeida says he hopes the knowledge he gains from this scholarship will help him convince more individuals in Brazil to adopt open source.

“I can’t imagine my life without FOSS technologies!’’ he wrote in his application. “I love using Linux every day, and learning more about open source has already changed my opinion in lots of discussions.” Almeida added that he is further developing his programming skills every day, thanks to the open source community. “My future is FOSS technologies; today I’m using most of them, but soon I want to develop them [for] the community.”

Sydney Dykstra

Sydney Dykstra, 18, of the United States, is the second scholarship recipient in the Teens in Training category. A recent high school graduate, Dykstra has been contributing to several open source projects, including the games The Secret Chronicles of Dr. M. and SuperTux. His goal is to become a Linux systems administrator, and he hopes the scholarship will jumpstart that career.

“I believe that open source is the future for everything computer related, online and offline, and necessary… if we are to have a ‘free’ world where we are not worried about someone else watching us or taking advantage of our info,’’ he wrote in his scholarship application.

Dykstra says he wants to become a Linux systems administrator, not only because he enjoys working with Linux systems but because of the freedom and flexibility it provides him. “I’m only a beginner,” he wrote, “but have been using Linux for nearly five years now and have been learning more as I go.”

Academic Aces

Asirifi Charles

Asirifi Charles, 22, of Ghana, is a recipient in the Academic Aces category. He is in his final year studying computer science at the University of Ghana. Charles taught himself about web development through free online resources, and recently became interested in open source, completing the free Intro to Linux course on edX. He hopes this scholarship will help him expand his open source expertise, so he can share it with others in Ghana, where it is difficult to access an IT education. 

“Open source lets you share your contribution while learning to better your skills,’’ he wrote in his application.

Camilo Andres Cortes Hernandez

Camilo Andres Cortes Hernandez, 31, of Colombia, is the other scholarship winner in the Academic Aces category. Hernandez studies technology at EAN University in Colombia, where he also runs a nonprofit that teaches individuals about cloud computing. His focus is currently on Azure, and he hopes the scholarship will help him to obtain the MCSA: Linux on Azure certification from The Linux Foundation and Microsoft.

Not only will the scholarship improve his career, he wrote, but it will also help others to embrace open source solutions because of his work in the community. Recently, Hernandez says, he was discussing open source solutions on Azure during a free cloud event, and received good feedback.

“I want to keep teaching others about cloud and top trending technologies, especially open source solutions that can run on environments like Azure. I have a goal within my community (CloudFirst Campus) to teach people about the interoperability of solutions no matter if they are private or open — you can run anything on the cloud.” 

The Linux Foundation Training Scholarships cover the expenses for one class to be chosen by each recipient from the Scholarship Track choices, representing thousands of dollars in value (travel expenses for in-person classes are not included). 

Winners in all categories may also elect to take a Linux Foundation Certified System Administrator, Linux Foundation Certified Engineer, Certified OpenStack Administrator, Cloud Foundry Certified Developer or Certified Kubernetes Administrator exam at no cost following the completion of their training course.

Scholarships are supported by The Linux Foundation members seeking to help train the developers and IT professionals of the future.

Learn more about the LiFT Scholarship program from The Linux Foundation.

How Kubernetes Resource Classes Promise to Change the Landscape for New Workloads

The Colin Powell rule states that you should make a decision when you have 40 percent to 70 percent of the information necessary to make the decision. With Linux container technology like Kubernetes evolving so quickly, it’s difficult for companies to feel like they have 40 percent of the information they need, let alone 70 percent.

Customers often approach me and others at Red Hat to help them get beyond the 40 percent mark to make a decision about Red Hat OpenShift, which is based on Kubernetes.

For many of these customers, public cloud has become commonplace for workloads. However, translating their on-premise architecture into a proper design/architecture for each cloud is challenging (to say the least) in terms of both time and cost. An architecture that works the same, everywhere, is the promise of Kubernetes and OpenShift, but it’s also one of the heaviest burdens for engineers.

This contributed article is part of a series in advance of Kubecon/CloudNativeCon, taking place in Austin, Dec. 6 – 8.

Read more at The New Stack

The OpenChain Project: From A to Community

Communities form in open source all the time to address challenges. The majority of these communities are based around code, but others cover topics as diverse as design or governance. The OpenChain Project is a great example of the latter. What began three years ago as a conversation about reducing overlap, confusion, and wasted resources with respect to open source compliance is now poised to become an industry standard.

The idea to develop an overarching standard to describe what organizations could and should do to address open source compliance efficiently gained momentum until the formal project was born. The basic idea was simple: identify key recommended processes for effective open source management. The goal was equally clear: reduce bottlenecks and risk when using third-party code to make open source license compliance simple and consistent across the supply chain. The key was to pull things together in a manner that balanced comprehensiveness, broad applicability, and real-world usability.

Read more at The Linux Foundation

 

Blockchains Are Poised to End the Password Era

Blockchain technology can eliminate the need for companies and other organizations to maintain centralized repositories of identifying information, and users can gain permanent control over who can access their data (hence “self-sovereign”), says Drummond Reed, chief trust officer at Evernym, a startup that’s developing a blockchain network specifically for managing digital identities.

Self-sovereign identity systems rely on public-key cryptography, the same kind that blockchain networks use to validate transactions. Although it’s been around for decades, the technology has thus far proved difficult to implement for consumer applications. But the popularity of cryptocurrencies has inspired fresh commercial interest in making it more user-friendly.

Read more at Technology Review

How to Set Up Private DNS Servers with BIND on Ubuntu 16.04

BIND (Berkeley Internet Name Domain) is the most widely used DNS software on the Internet. The BIND package is available for all major Linux distributions, which makes installation simple and straightforward. In today’s article we will show you how to install, configure, and administer BIND 9 as a private DNS server on an Ubuntu 16.04 VPS, in a few steps.

Requirements:

  • Two servers (ns1 and ns2) connected to a private network
  • In this tutorial we will use the 10.20.0.0/16 subnet
  • DNS clients that will connect to your DNS servers


1. Update both servers

Begin by updating the packages on both servers:

# sudo apt-get update

2. Install BIND on both servers

# sudo apt-get install bind9 bind9utils

3. Set BIND to IPv4 mode

We will set BIND to IPv4 mode by editing the “/etc/default/bind9” file and adding “-4” to the OPTIONS variable:

# sudo nano /etc/default/bind9

The edited file should look something like this:

# run resolvconf?
RESOLVCONF=no

# startup options for the server
OPTIONS="-4 -u bind"

Now let’s configure ns1, our primary DNS server.

4. Configuring the Primary DNS Server

Edit the named.conf.options file:

# sudo nano /etc/bind/named.conf.options

Above the options block, add a new ACL called “trusted”. This list will allow the clients specified in it to send recursive DNS queries to our primary server:

acl "trusted" {
        10.20.30.13;  
        10.20.30.14;
        10.20.55.154;
        10.20.55.155;
};

5. Enable recursive queries on our ns1 server, and have the server listen on our private network

Next we will add a couple of configuration settings to enable recursive queries on our ns1 server and to have the server listen on our private network. Add the settings below the directory “/var/cache/bind” directive, as in the example below:

options {
        directory "/var/cache/bind";

        recursion yes;
        allow-recursion { trusted; };
        listen-on { 10.20.30.13; };
        allow-transfer { none; };

        forwarders {
                8.8.8.8;
                8.8.4.4;
        };
};

If the “listen-on-v6” directive is present in the named.conf.options file, delete it as we want BIND to listen only on IPv4.
Now on ns1, open the named.conf.local file for editing:

# sudo nano /etc/bind/named.conf.local

Here we are going to add the forward zone:

zone "test.example.com" {
    type master;
    file "/etc/bind/zones/db.test.example.com";
    allow-transfer { 10.20.30.14; };
};

Our private subnet is 10.20.0.0/16, so we are going to add the reverse zone with the following lines:

zone "20.10.in-addr.arpa" {
    type master;
    file "/etc/bind/zones/db.10.20";
    allow-transfer { 10.20.30.14; };
};

If your servers are in multiple private subnets in the same physical location, you need to specify a zone and create a separate zone file for each subnet.
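For example, if some hosts also lived in a hypothetical second subnet such as 10.30.0.0/16, you would declare an additional reverse zone with its own file alongside the first:

```
zone "30.10.in-addr.arpa" {
    type master;
    file "/etc/bind/zones/db.10.30";
    allow-transfer { 10.20.30.14; };
};
```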

6. Creating the Forward Zone File

Now we’ll create the directory where we will store our zone files:

# sudo mkdir /etc/bind/zones

We will use the sample db.local file as the basis for our forward zone file. Let’s copy the file first:

# cd /etc/bind/zones
# sudo cp ../db.local ./db.test.example.com

Now edit the forward zone file we just copied:

# sudo nano /etc/bind/zones/db.test.example.com

It should look something like the example below:

$TTL    604800
@       IN      SOA     localhost. root.localhost. (
                              2         ; Serial
                         604800         ; Refresh
                          86400         ; Retry
                        2419200         ; Expire
                         604800 )       ; Negative Cache TTL
;
@       IN      NS      localhost.      ; delete this
@       IN      A       127.0.0.1       ; delete this
@       IN      AAAA    ::1             ; delete this

Now let’s edit the SOA record. Replace localhost with your ns1 server’s FQDN, then replace “root.localhost” with “admin.test.example.com”. Every time you edit a zone file, increment the serial value before you restart named; otherwise BIND won’t apply the change to the zone. Here we will increment the value to “3”, so the record should look something like this:

@       IN      SOA     ns1.test.example.com. admin.test.example.com. (
                              3         ; Serial
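Because forgetting the serial bump is a common mistake, it can help to script it. A minimal sketch, assuming the serial is the first field on the line tagged “; Serial” (the sample file created below stands in for the real zone file):

```shell
# Create a stand-in zone file so the sketch is self-contained.
zonefile="db.test.example.com"
cat > "$zonefile" <<'EOF'
@ IN SOA ns1.test.example.com. admin.test.example.com. (
        3 ; Serial
        604800 ; Refresh
)
EOF

# Increment the first field on the "; Serial" line, then swap the file in.
awk '/; Serial/ { sub($1, $1 + 1) } { print }' "$zonefile" > "$zonefile.tmp" \
    && mv "$zonefile.tmp" "$zonefile"

grep 'Serial' "$zonefile"   # the line now reads: 4 ; Serial
```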

Then delete the last three records that are marked with “delete this” after the SOA record.

Add the nameserver records at the end of the file:

; name servers - NS records
    IN      NS      ns1.test.example.com.
    IN      NS      ns2.test.example.com.

After that add the A records for the hosts that need to be in this zone. That means any server whose name we want to end with “.test.example.com”:

; name servers - A records
ns1.test.example.com.          IN      A       10.20.30.13
ns2.test.example.com.          IN      A       10.20.30.14

; 10.20.0.0/16 - A records
host1.test.example.com.        IN      A      10.20.55.154
host2.test.example.com.        IN      A      10.20.55.155

The db.test.example.com file should look something like the following:

$TTL    604800
@       IN      SOA     ns1.test.example.com. admin.test.example.com. (
                  3       ; Serial
             604800     ; Refresh
              86400     ; Retry
            2419200     ; Expire
             604800 )   ; Negative Cache TTL
;
; name servers - NS records
     IN      NS      ns1.test.example.com.
     IN      NS      ns2.test.example.com.

; name servers - A records
ns1.test.example.com.          IN      A       10.20.30.13
ns2.test.example.com.          IN      A       10.20.30.14

; 10.20.0.0/16 - A records
host1.test.example.com.        IN      A      10.20.55.154
host2.test.example.com.        IN      A      10.20.55.155

7. Creating the Reverse Zone File

We specify the PTR records for reverse DNS lookups in the reverse zone files. When the DNS server receives a PTR lookup query, for example for the IP “10.20.55.154”, it will check the reverse zone file to retrieve the FQDN of that address; in our case that would be “host1.test.example.com”.
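The mapping from IP address to reverse zone name is mechanical: reverse the octets and append in-addr.arpa. A small sketch of the derivation for our example address:

```shell
# Split an IP into octets and build its in-addr.arpa lookup name.
ip="10.20.55.154"
set -- $(IFS=.; printf '%s ' $ip)   # $1..$4 now hold the four octets
echo "full PTR name: $4.$3.$2.$1.in-addr.arpa"

# For a /16 zone like 20.10.in-addr.arpa, a record's relative
# name is just the last two octets, reversed:
echo "relative name: $4.$3"
```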

We will create a reverse zone file for every single reverse zone specified in the named.conf.local file we created on ns1. We will use the sample db.127 zone file to create our reverse zone file:

# cd /etc/bind/zones
# sudo cp ../db.127 ./db.10.20

Edit the reverse zone file so it matches the reverse zone defined in named.conf.local:

# sudo nano /etc/bind/zones/db.10.20

The original file should look something like the following:

$TTL    604800
@       IN      SOA     localhost. root.localhost. (
                              1         ; Serial
                         604800         ; Refresh
                          86400         ; Retry
                        2419200         ; Expire
                         604800 )       ; Negative Cache TTL
;
@       IN      NS      localhost.      ; delete this
1.0.0   IN      PTR     localhost.      ; delete this

Modify the SOA record as before, replacing localhost with your ns1 server’s FQDN and “root.localhost” with “admin.test.example.com”, and increment the serial value. It should look something like this:

@       IN      SOA     ns1.test.example.com. admin.test.example.com. (
                              3         ; Serial

Then delete the last two records that are marked with “delete this” after the SOA record.

Add the nameserver records at the end of the file:

; name servers - NS records
      IN      NS      ns1.test.example.com.
      IN      NS      ns2.test.example.com.

Now add the PTR records for all of the hosts on the 10.20.0.0/16 subnet to the zone file you created. The first column is the last two octets of each host’s IP address in reverse order:

; PTR Records
13.30  IN      PTR     ns1.test.example.com.    ; 10.20.30.13
14.30  IN      PTR     ns2.test.example.com.    ; 10.20.30.14
154.55 IN      PTR     host1.test.example.com.  ; 10.20.55.154
155.55 IN      PTR     host2.test.example.com.  ; 10.20.55.155

Save and exit the reverse zone file.

The “/etc/bind/zones/db.10.20” reverse zone file should look something like this:

$TTL    604800
@       IN      SOA     ns1.test.example.com. admin.test.example.com. (
                              3         ; Serial
                         604800         ; Refresh
                          86400         ; Retry
                        2419200         ; Expire
                         604800 )       ; Negative Cache TTL
; name servers
      IN      NS      ns1.test.example.com.
      IN      NS      ns2.test.example.com.

; PTR Records
13.30  IN      PTR     ns1.test.example.com.    ; 10.20.30.13
14.30  IN      PTR     ns2.test.example.com.    ; 10.20.30.14
154.55 IN      PTR     host1.test.example.com.  ; 10.20.55.154
155.55 IN      PTR     host2.test.example.com.  ; 10.20.55.155

8. Check the Configuration Files

Use the following command to check the configuration syntax of all the named.conf files that we configured:

# sudo named-checkconf

If your configuration files don’t have any syntax problems, the output will contain no error messages. If you do have problems, compare the settings in the “Configuring the Primary DNS Server” section with the files that have errors, make the corrections, and run named-checkconf again.

The named-checkzone command can be used to verify your zone files. Use the following command to check the forward zone “test.example.com”:

# sudo named-checkzone test.example.com /etc/bind/zones/db.test.example.com

And if you want to check the reverse zone configuration, execute the following command:

# sudo named-checkzone 20.10.in-addr.arpa /etc/bind/zones/db.10.20

Once you have properly configured all the configuration and zone files, restart the BIND service:

# sudo service bind9 restart

9. Configuring the Secondary DNS Server

Setting up a secondary DNS server is always a good idea as it will serve as a failover and will respond to queries if the primary server is unresponsive.

On ns2, edit the named.conf.options file:

# sudo nano /etc/bind/named.conf.options

At the top of the file, add the ACL with the private IP addresses for all your trusted servers:

acl "trusted" {
        10.20.30.13;
        10.20.30.14;
        10.20.55.154;
        10.20.55.155;
};

Just like in the named.conf.options file on ns1, add the following lines below the directory “/var/cache/bind” directive. Note that the listen-on directive uses ns2’s own private IP address:

        recursion yes;
        allow-recursion { trusted; };
        listen-on { 10.20.30.14; };
        allow-transfer { none; };

        forwarders {
                8.8.8.8;
                8.8.4.4;
        };

Save and exit the file.

Now open the named.conf.local file for editing:

# sudo nano /etc/bind/named.conf.local

Now we should specify slave zones that match the master zones on the ns1 DNS server. The masters directive should be set to the ns1 DNS server’s private IP address:

zone "test.example.com" {
    type slave;
    file "slaves/db.test.example.com";
    masters { 10.20.30.13; };
};

zone "20.10.in-addr.arpa" {
    type slave;
    file "slaves/db.10.20";
    masters { 10.20.30.13; };
};

Now save and exit the file.

Use the following command to check the syntax of the configuration files:

# sudo named-checkconf

Then restart the BIND service:

# sudo service bind9 restart

10. Configure the DNS Clients

We will now configure the hosts in our 10.20.0.0/16 subnet to use the ns1 and ns2 servers as their primary and secondary DNS servers. This greatly depends on the OS the hosts are running but for most Linux distributions the settings that need to be changed reside in the /etc/resolv.conf file.

Generally, on the Ubuntu, Debian, and CentOS distributions you just edit the /etc/resolv.conf file. Execute the following command as root:

# nano /etc/resolv.conf

Then replace the existing nameservers with:

nameserver 10.20.30.13 #ns1
nameserver 10.20.30.14 #ns2

Now save and exit the file and your client should be configured to use the ns1 and ns2 nameservers.

Then test if your clients can send queries to the DNS servers you just configured:

# nslookup host1.test.example.com

The output from this command should be:

Output:
Server:     10.20.30.13
Address:    10.20.30.13#53

Name:   host1.test.example.com
Address: 10.20.55.154

You can also test the reverse lookup by querying the DNS server with the IP address of the host:

# nslookup 10.20.55.154

The output should look like this:

Output:
Server:     10.20.30.13
Address:    10.20.30.13#53

154.55.20.10.in-addr.arpa   name = host1.test.example.com.

Check that all of the hosts resolve correctly using the commands above; if they do, you’ve configured everything properly.

Adding a New Host to Your DNS Servers

If you need to add a host to your DNS servers just follow the steps below:

On the ns1 nameserver do the following:

  • Create an A record in the forward zone file for the host and increment the value of the Serial variable.
  • Create a PTR record in the reverse zone file for the host and increment the value of the Serial variable.
  • Add your host’s private IP address to the trusted ACL in named.conf.options.
  • Reload BIND using the following command: sudo service bind9 reload

On the ns2 nameserver do the following:

  • Add your host’s private IP address to the trusted ACL in named.conf.options.
  • Reload BIND using the following command: sudo service bind9 reload

On the host machine do the following:

  • Edit /etc/resolv.conf and change the nameservers to your DNS servers.
  • Use nslookup to test if the host queries your DNS servers.

Removing an Existing Host from Your DNS Servers

If you want to remove the host from your DNS servers just undo the steps above.

Note: Please replace the names and IP addresses used in this tutorial with those of the hosts in your own private network.