
New Linux WiFi Daemon Streamlines Networking Stack

If you’ve ever used an embedded Linux device with wireless networking, you’ve likely benefited from the work of Marcel Holtmann, the maintainer of the BlueZ Bluetooth daemon since 2004, who spoke at an Embedded Linux Conference Europe panel in October.

In 2007 Holtmann joined Intel’s Open Source Technology Center (OTC), where he created ConnMan (Internet connectivity), oFono (cellular telephony), and PACrunner (proxy handling). Over the last year, Holtmann and other OTC developers have been developing a replacement for the wpa_supplicant WiFi daemon called IWD (Internet Wireless Daemon). In the process, they have streamlined the entire Linux communications stack.

“We decided to create a wireless daemon that actually works on IoT devices,” said Holtmann in the presentation called “New Wireless Daemon for Linux.”

IWD is now mostly complete, featuring a smaller footprint and a more streamlined workflow than wpa_supplicant while adding support for the latest wireless technologies. The daemon was also developed with the help of the OTC’s Denis Kenzior, Andrew Zaborowski, Tim Kourt, Rahul Rahul, and Mat Martineau.

IWD aims to solve problems in wpa_supplicant, including its lack of persistence and limited feedback. “Wpa_supplicant doesn’t remember anything,” Holtmann told the ELCE audience in Berlin. “By comparison, like BlueZ, oFono, and neard [NFC], IWD is stateful, so whenever you repair the device, it remembers and restarts when you reboot. Wpa_supplicant does have a function that lets you redo the network configuration, but it’s so hackish and problematic that nobody uses it. Everyone stores this information at a higher layer, which complicates things and creates an imbalance.”

Wpa_supplicant manages to be overly comprehensive while also failing to reveal key information. The daemon is difficult to use because it adds support for “just about every OS or wireless extension,” including many things that are never actually used, says Holtmann. “The abstraction system actually gets in your way.”

Despite its capacity to “abstract everything,” wpa_supplicant does not expose much information. “You have to know a lot about WiFi and how things like parsing are done,” said Holtmann. “I just want to connect, not read a 2,000-page document to find out I have to use a pushbutton method to gain my credentials.”

Other limitations of wpa_supplicant include its dependence on blocking operations, in which the system must ask each peripheral to confirm an operation before it can move on to the next. This leads to “a system just waiting for something to happen,” says Holtmann.

Wpa_supplicant has other complications, like “exposing itself to user space in at least four different ways,” said Holtmann. These include the antiquated D-Bus v1 and the still-problematic D-Bus v2, which “swallows states,” as well as a binder interface and CTL, “which is great for users, but for a daemon is horrible.”

To make up for the limitations of D-Bus v2, the overall wireless stack long ago spawned an abstraction layer above D-Bus and below ConnMan called gSupplicant. While this helped offload work from ConnMan, the latter was still overloaded.

Reducing Complexity

With the addition of IWD, Holtmann and his team removed gSupplicant entirely. They also replaced the other user space interfaces with a single updated D-Bus layer. In addition, the new stack removed ioctl and the Netlink library (libnl), which Holtmann called “a blocking design that can’t track family changes.” Libnl was replaced with Generic Netlink, which does offer family discovery.

Holtmann also eliminated wireless extensions (wext) because “they’re broken and hopefully they will someday be removed from the kernel,” he said. The new wireless stack retains cfg80211 and nl80211 (Netlink), although the latter has been upgraded and pushed upstream.

The OTC team developed a new Embedded Linux Library (ELL) that features tables, queues, and ring buffers to reduce the complexity of IWD while still providing basic building blocks for Netlink and D-Bus. “We extended ELL with cryptographic support libraries instead of using OpenSSL, which is huge, and is not an easy interface,” said Holtmann. “In a lot of cases you need only 10 percent of OpenSSL, so we went a different route and generated random numbers using the getrandom() system call, with no problems with boot-up time.”

Finally, for ciphers and hashes Holtmann used AF_ALG, which he described as “an encryption interface for symmetric ciphers into the kernel.” With ELL and AF_ALG in place, the developers could eliminate OpenSSL, as well as GnuTLS and InternalTLS. The team also added tracing support for nl80211 with the help of a tool called iwmon.

“Now we can start scanning and selecting networks,” said Holtmann. “We can do active and passive scanning and SSID grouping, and support open networks. We can connect to open access points and W2 and WPA/RSN-protected access points. We have simple roaming, experimental D-Bus APIs, and EAPoL, and ELL logging for D-Bus and Generic Netlink.”

Holtmann went on to discuss new support for enterprise WiFi technologies like X.509 certificates and TLS. Recent kernels have improved X.509 support, so the OTC team is exploiting the kernel’s keyrings to better manage certificates.

Future tasks include finishing up enterprise WiFi and developing a debug API. The developers are also looking at possible integrations with Passpoint 2.0, P2P, Miracast, and Neighbor Awareness Networking (NAN). Once IWD is complete, Holtmann and the OTC will address 802.15.4, bringing improvements to 802.15.4-compliant wireless protocols like ZigBee, 6LoWPAN, and Thread.


Embedded Linux Conference + OpenIoT Summit North America will be held on February 21 – 23, 2017 in Portland, Oregon. Check out over 130 sessions on the Linux kernel, embedded development & systems, and the latest on the open Internet of Things.


Linux.com readers can register now with the discount code LINUXRD5 for 5% off the attendee registration price.

Continuous Delivery of a Microservice Architecture using Concourse.ci, Cloud Foundry and Artifactory

This comprehensive tutorial takes a simple microservice architecture and explains how to set up a Concourse pipeline to test and deploy individual microservices independently, without affecting the overall microservice system. Cloud Foundry is used as the platform to which the microservices are deployed.

Along the way all basic concourse.ci concepts are explained.

The goal of the Concourse pipeline built during this tutorial is to automatically trigger and execute the following steps whenever a developer pushes a change to a Git repository…
Read more at Specify.io

The Basics of Web Application Security

We discussed how authentication establishes the identity of a user or system (sometimes referred to as a principal or actor). Until that identity is used to assess whether an operation should be permitted or denied, it doesn’t provide much value. This process of enforcing what is and is not permitted is authorization. Authorization is generally expressed as permission to take a particular action against a particular resource, where a resource is a page, a file on the file system, a REST resource, or even the entire system.

Authorize on the Server

Among the most critical mistakes a programmer can make is hiding capabilities rather than explicitly enforcing authorization on the server. For example, it is not sufficient to simply hide the “delete user” button from users who are not administrators. The request coming from the user cannot be trusted, so the server code must perform the authorization check for the delete.
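To make the point concrete, here is a minimal sketch of enforcing authorization on the server side rather than trusting what the client chose to display. The names (Principal, require_permission, "user:delete") are hypothetical, not from the article:

```python
# Server-side authorization: the delete is refused unless the
# authenticated principal holds the required permission, regardless
# of which buttons the client rendered.
class Principal:
    def __init__(self, name, permissions):
        self.name = name
        self.permissions = set(permissions)

class AuthorizationError(Exception):
    pass

def require_permission(principal, permission):
    """Raise unless the principal holds the named permission."""
    if permission not in principal.permissions:
        raise AuthorizationError(
            f"{principal.name} lacks permission: {permission}")

def delete_user(principal, user_id, user_store):
    # Enforce authorization here, on the server; hiding the
    # "delete user" button client-side is not enough.
    require_permission(principal, "user:delete")
    user_store.pop(user_id, None)

users = {42: "alice"}
admin = Principal("admin", ["user:delete"])
viewer = Principal("viewer", [])

delete_user(admin, 42, users)        # permitted
try:
    delete_user(viewer, 7, users)    # refused on the server
except AuthorizationError as e:
    print(e)
```

The same pattern applies whatever the framework: every state-changing handler re-checks the permission against the authenticated identity, never against client-supplied hints.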

Read more at Martin Fowler blog

Report: Agile and DevOps Provide More Benefits Together Than Alone

DevOps and agile are two of the most popular ways businesses try to stay ahead of the market, but put them together and they provide even more benefits. A new report, Accelerating Velocity and Customer Value with Agile and DevOps, from CA Technologies revealed businesses experienced greater customer satisfaction and brand loyalty when integrating agile with DevOps.

According to the report, about 75% of respondents reported improved employee recruitment and retention when using agile with DevOps, compared to 30% who only used agile. In addition, businesses saw a 45% increase in employee productivity, a 29% increase in customer satisfaction, and a 78% increase in customer experience when using the two. 

Read more at SDTimes

The Hard Truths about Microservices and Software Delivery

Everybody’s talking about Microservices right now. But are you having trouble figuring out what it means for you? 

At the recent LISA conference, I had the pleasure of giving a joint talk with Avan Mathur, Product Manager of ElectricFlow, on Microservices.

With Microservices, what was once one application, with self-contained processes, is now a complex set of independent services that connect via the network. Each microservice is developed and deployed independently, often using different languages, technology stacks, and tools.

While Microservices support agility—particularly on the development side—they come with many technical challenges that greatly impact your software delivery pipelines, as well as other operations downstream.

During our session, Avan and I discussed some use cases that lend themselves well for microservices, and the implications of microservices on the architecture and design of your application, infrastructure, delivery pipeline, and operations. We discussed increased pipeline variations, complexities in integration, testing and monitoring, governance, and more. We also shared best practices on how to avoid these challenges when implementing microservices and designing your pipelines to support microservices-driven applications.

Read the full article here

Understanding Docker Networking Drivers And Their Use Cases

Application requirements and networking environments are diverse and sometimes opposing forces. In between applications and the network sits Docker networking, affectionately called the Container Network Model or CNM. It’s CNM that brokers connectivity for your Docker containers and also what abstracts away the diversity and complexity so common in networking. The result is portability, and it comes from CNM’s powerful network drivers. These are pluggable interfaces for the Docker Engine, Swarm, and UCP that provide special capabilities like multi-host networking, network layer encryption, and service discovery.

Naturally, the next question is: which network driver should I use? Each driver offers trade-offs and has different advantages depending on the use case.

Read more at Docker 

Microservices Design: Get Scale, Availability Right

The promise of microservices is that you can divide and conquer the problem of a large application by breaking it down into its constituent services, each defined by what it actually accomplishes. Each can be supported by an independent team. You get to the point where you can break the limits on productivity that Fred Brooks described in his book, The Mythical Man-Month.

Aside from being able to throw more people at the problem and—unlike what Brooks observed—actually become more efficient once you get a microservices-based application into production, you can quickly start thinking about how to scale it. Think resiliency and high-availability. And you can easily determine what services don’t need scaling, or high availability.

These things become easier than with a large, monolithic application, because each microservice can scale in its own way. Here are my insights about these variables, and the decisions you may face in designing your own microservices platform.

Read the full article here

Why Open Source is Rising Up the Networking Stack in 2017

With 2016 behind us, we can reflect on a landmark year in which open source migrated up the stack. As a result, a new breed of open service orchestration projects was announced, including ECOMP, OSM, OpenBaton, and the Linux Foundation project OPEN-O. While their scope varies between orchestrating Virtualized Network Functions (VNFs) in a cloud data center and more comprehensive end-to-end service delivery platforms, these new open service orchestration initiatives enable carriers and cable operators to automate end-to-end service delivery, ultimately minimizing the software development required for new services.

Open orchestration was propelled into the limelight as major operators gained considerable experience over the past years with open source platforms such as OpenStack and OpenDaylight. Many operators have announced ambitious network virtualization strategies that are moving from proofs of concept (PoCs) into the field, including AT&T (Domain 2.0), Deutsche Telekom (TeraStream), Vodafone (Ocean), Telefonica (Unica), NTT Communications (O3), China Mobile (NovoNet), and China Telecom (CTNet2025).

Traditional Standards Development Organizations (SDOs) and open source projects have paved the way for the emergence of open orchestration. For instance, OPNFV (the open NFV reference platform) expanded its charter to address NFV Management and Orchestration (MANO). Similarly, MEF is pursuing the Lifecycle Services Orchestration (LSO) initiative to standardize service orchestration, and intends to accelerate deployment with the OpenLSO open reference platform. Other efforts, such as the TMForum Zero-touch Orchestration, Operations and Management (ZOOM) project, are addressing the operational aspects as well.

Standards efforts are guiding the open source orchestration projects, which set the stage for 2017 to become The Year of Orchestration.

One notable example is the OPEN-O project, which delivered its initial release less than six months from the project formation. OPEN-O enables operators to deliver end-to-end composite services over NFV Infrastructure along with SDN and legacy networks. In addition to addressing the NFV MANO, OPEN-O integrates a model-driven automation framework, service design front-end, and connectivity services orchestration.

OPEN-O is backed by some of the world’s largest and most innovative SDN/NFV market leaders, including China Mobile, China Telecom, Ericsson, Huawei, Intel, and VMware. The project is also breaking new ground in demonstrating how open source can be successfully adopted for large-scale, carrier-grade platforms.

To learn more about OPEN-O and the rapidly evolving open orchestration landscape, please join us for our upcoming webinar:

Title: Introduction to Open Orchestration and OPEN-O

Date/Time: Tuesday, January 17, 2017, 10:00 – 11:00 a.m. PST

Presenter: Marc Cohn, Executive Director, OPEN-O

Register today to save your spot in this engaging and interactive webinar. Can’t make it on the 17th? Registering will also ensure you get a copy of the recording via email after the presentation is over.

For additional details on OPEN-O, visit: www.open-o.org

How to Keep Hackers out of Your Linux Machine Part 1: Top Two Security Tips

There is nothing a hacker likes more than a tasty Linux machine available on the Internet. In my recent Linux Foundation webinar I shared tactics, tools, and methods hackers use to invade your space.

In this blog series, we’ll cover the five easiest ways to keep hackers out and know if they have made it in. Want more information? Watch the free webinar on-demand.

Easy Linux Security Tip #1

If you are not using Secure Shell, you should be.

This has been a thing for a very, very long time. Telnet is insecure. rlogin is insecure. There are still systems out there that require those protocols, but they shouldn’t be exposed to the Internet. If you don’t have SSH, just turn off your Internet connection. As we always say: use SSH keys.

Rule No. 1 of SSH: Don’t use password authentication. The second rule of SSH is: Don’t use password authentication. This is really, really important.

If you have a Linux machine on the Internet for any period of time, you are going to get brute forced. It is just going to happen. The brute force is scripted.  Scanners see port 22 open to the Internet and they have to hammer it hard.

The other thing you can do is move SSH off the standard port, which many of us do. That prevents a small number of brute-force attacks but, in general, just don’t use password authentication and you’ll be safe.
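The first two rules translate into a handful of sshd_config directives. This is a minimal sketch; option names and defaults vary by distribution and OpenSSH version, so check yours before applying it:

```
# /etc/ssh/sshd_config -- key-based logins only
PubkeyAuthentication yes
PasswordAuthentication no
ChallengeResponseAuthentication no
# optional, but a sensible default:
PermitRootLogin no
```

Reload sshd after editing, and keep an existing key-based session open while you verify you can still log in.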

The third rule of SSH is: All keys have passphrases. A no-passphrase key may as well not be a key at all. I realize that makes services hard to deal with if you are trying to log into something automatically or trying to automate stuff but all keys should have passphrases.

My favorite thing to do is to compromise a host and find home directories with private keys. As soon as I have private keys, it’s game over. I can log into anything that the public key provides access to.

If you put a passphrase on your keys (it doesn’t even have to be a long one), it makes my life much, much more difficult.

Easy Linux Security Tip #2

Install Fail2ban.

Those brute force attacks that I was talking about? This helps dramatically. It will automatically activate iptables rules to block repeated attempts to SSH into your machine. Be sure to configure it in such a way that it doesn’t lock you out or doesn’t take up too many resources. But use it, love it, and watch it.

It has its own logs so make sure to watch them and check to see if it’s actually functioning. That’s a really important thing as well.
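A minimal jail configuration for SSH might look like the following. Treat this as a sketch: file paths, jail names, and defaults vary by distribution and Fail2ban version:

```
# /etc/fail2ban/jail.local -- slow down SSH brute-force attempts
[sshd]
enabled = true
port = ssh
# ban after 5 failed attempts within a 10-minute window
maxretry = 5
findtime = 600
# ban for one hour (in seconds)
bantime = 3600
# never ban yourself; add your own address here
ignoreip = 127.0.0.1/8
```

You can confirm the jail is live, and see current bans, with fail2ban-client status sshd.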

In part 2 of this series, I’ll give you three more easy security tips for keeping hackers out of your Linux machine. In part 3, I’ll answer questions from the webinar. You can also watch the entire free webinar on-demand now.

Mike Guthrie works for the Department of Energy doing Red Team engagements and penetration testing.

Essentials of OpenStack Administration Part 6: Installing DevStack (Lab)

Start exploring Essentials of OpenStack Administration by downloading the free sample chapter today.

DevStack is a GitHub-based deployment of OpenStack, provided by OpenStack.org, that allows for easy testing of new features. This tutorial, the last in our series from The Linux Foundation’s Essentials of OpenStack Administration course, will cover how to install and configure DevStack.

While DevStack is easy to deploy, it should not be considered for production use. There may be several configuration choices for new or untested code used by developers, which would not be appropriate for production.

DevStack is meant for developers, and uses a bash shell installation script instead of a package-based installation. The stack.sh script runs as a non-root user. You can change the default values by creating a local.conf file.

Should you make a mistake or want to test a new feature, you can easily unstack, clean, and stack again quickly. This makes learning and experimenting easier than rebuilding the entire system.

Setting up the Lab

One of the difficulties of learning OpenStack is that it’s tricky to install, configure and troubleshoot. And when you mess up your instance it’s usually painful to fix or reinstall it.

That’s why Linux Foundation Training introduced on-demand labs which offer a pre-configured virtual environment. Anyone enrolled in the course can click to open the exercise and then click to open a fully functional OpenStack server environment to run the exercise. If you mess it up, simply reset it. Each session is then available for up to 24 hours. It’s that easy.

Access to the lab environment is only possible for those enrolled in the course. However, you can still try this tutorial by first setting up your own AWS instance with the following specifications:

Deploy an Ubuntu Server 14.04 LTS (HVM), SSD Volume Type (ami-d732f0b7) instance with an m4.large (2 vCPUs, 8 GiB RAM) instance type, increase the root disk to 20 GB, and open up all the network ports.

See Amazon’s EC2 documentation for more direction on how to set up an instance.

Verify the System

Once you are able to log into the environment verify some information:

1. Some of the commands we will view and run require root privilege. Use sudo to become root:

  ubuntu@devstack-cc:~$ sudo -i

2. Verify the Ubuntu user has full sudo access in order to install the software:


    root@devstack-cc:~# grep ubuntu /etc/sudoers.d/*

    /etc/sudoers.d/90-cloud-init-users:# User rules for ubuntu

    /etc/sudoers.d/90-cloud-init-users:ubuntu ALL=(ALL) NOPASSWD:ALL

3. We are using a network attached to eth2 for our cloud connections. You will need the public IP, on eth0, to access the OpenStack administrative web page after installing DevStack. From the output, find the inet line and make note of the IP address. In the following example, the IP address to write down would be 166.78.151.57. Your IP address will be different. If you restart the lab, the IP address may change.


 root@devstack-cc:~# ip addr show eth0

    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000

        link/ether bc:76:4e:04:b5:9b brd ff:ff:ff:ff:ff:ff

        inet 166.78.151.57/24 brd 166.78.151.255 scope global eth0

           valid_lft forever preferred_lft forever

        inet6 2001:4800:7812:514:be76:4eff:fe04:b59b/64 scope global

           valid_lft forever preferred_lft forever

        inet6 fe80::be76:4eff:fe04:b59b/64 scope link

           valid_lft forever preferred_lft forever


     root@devstack-cc:~# ip addr show eth2

    4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000

        link/ether bc:76:4e:06:10:32 brd ff:ff:ff:ff:ff:ff

        inet 192.168.97.1/24 brd 192.168.97.255 scope global eth2

           valid_lft forever preferred_lft forever

        inet6 fe80::be76:4eff:fe06:1032/64 scope link

           valid_lft forever preferred_lft forever
Public IP: eth0
Internal IP: eth2

4. When the previous command finishes return to being the Ubuntu user:


    root@devstack-cc:~# exit

    logout

    ubuntu@devstack-cc:~$

Install the git command and DevStack software

DevStack is not typically considered safe for production, but can be useful for testing and learning. It is easy to configure and reconfigure. While other distributions may be more stable they tend to be difficult to reconfigure, with a fresh installation being the easiest option. DevStack can be rebuilt in place with just a few commands.

DevStack is under active development. What you download could be different from a download made just minutes later. While most updates are benign, there is a chance that a new version could render a system difficult or impossible to use. Never deploy DevStack on an otherwise production machine.

1. Before we can download the software we will need to update the package information and install a version control system command, git.    


    ubuntu@devstack-cc:~$ sudo apt-get update

    <output_omitted>

    ubuntu@devstack-cc:~$ sudo apt-get install git

    <output_omitted>

    After this operation, 21.6 MB of additional disk space will be used.

    Do you want to continue? [Y/n] y

    <output_omitted>

2. Now to retrieve the DevStack software:


    ubuntu@devstack-cc:~$ pwd

    /home/ubuntu

    ubuntu@devstack-cc:~$ git clone https://git.openstack.org/openstack-dev/devstack -b stable/liberty

    Cloning into ’devstack’...

    <output_omitted>

3. The newly installed software can be found in a new sub-directory named devstack. Installation is performed by a shell script called stack.sh. Take a look at the file:

    ubuntu@devstack-cc:~$ cd devstack

    ubuntu@devstack-cc:~/devstack$ less stack.sh

4. There are several files and scripts to investigate. If you have issues during installation and configuration, you can use the unstack.sh and clean.sh scripts to (usually) return the system to the starting point:

    ubuntu@devstack-cc:~/devstack$ less unstack.sh

    ubuntu@devstack-cc:~/devstack$ less clean.sh

5. We will need to create a configuration file for the installation script. A sample has been provided for review. Use the contents of the file to answer the following question.

    ubuntu@devstack-cc:~/devstack$ less samples/local.conf

6. What is the location of script output logs? _____________

7. There are several test and exercise scripts available, found in sub-directories of the same name. A good, general test is the run_tests.sh script.

Due to the constantly changing nature of DevStack, these tests are not always useful or consistent. You can expect to see errors but still be able to use OpenStack without issue. For example, missing software should be installed by the upcoming stack.sh script.

Keep the output of the tests and refer back to it as a place to start troubleshooting if you encounter an issue.

    ubuntu@devstack-cc:~/devstack$ ./run_tests.sh

While there are many possible options, we will do a simple OpenStack deployment. Create a ~/devstack/local.conf file. Parameters not found in this file will use default values, prompt for input at the command line, or be assigned a random value.

Create a local.conf file

1. We will create a basic configuration file. In our labs, we use eth2 for inter-node traffic. Use eth2 and its IP address when you create the following file.


    ubuntu@devstack-cc:~/devstack$ vi local.conf

    [[local|localrc]]

    HOST_IP=192.168.97.1

    FLAT_INTERFACE=eth2

    FIXED_RANGE=10.10.128.0/20 #Range for private IPs

    FIXED_NETWORK_SIZE=4096

    FLOATING_RANGE=192.168.100.128/25 #Range for public IPs

    MULTI_HOST=1

    LOGFILE=/opt/stack/logs/stack.sh.log

    ADMIN_PASSWORD=openstack

    MYSQL_PASSWORD=DB-secret

    RABBIT_PASSWORD=MQ-secret

    SERVICE_PASSWORD=SERVICE-secret

    SERVICE_TOKEN=ALongStringUsuallyHere

    enable_service rabbit mysql key

Install and Configure OpenStack

The following command will generate a lot of output to the terminal window. The stack.sh script will run for 15 to 20 minutes.

1. Start the installation script:


    ubuntu@devstack-cc:~/devstack$ ./stack.sh

    <output_omitted>

2. View the directory where various logs have been made. If the logs are not present you may have an issue with the syntax of the local.conf file:


    ubuntu@devstack-cc:~/devstack$ ls -l /opt/stack/logs

3. Review the output from the stack.sh script:


    ubuntu@devstack-cc:~/devstack$ less /opt/stack/logs/stack.sh.log

DevStack runs under a user account. There used to be a rejoin.sh script that could be used to attach to the ongoing screen session after a reboot, but because DevStack is not meant to be durable, the script was removed late in the Liberty release. Due to lab environment issues, if you reboot the node you may have to start the lab again.

Log into the OpenStack Browser User Interface

The Horizon software provides a web page for management. By logging into this Browser User Interface (BUI), we can configure almost everything in OpenStack. The look and feel may be different from what you see in the book; the project and vendor updates change often.

1. Open a web browser on your local system. Using the output of the ip command, find the IP address of the eth0 interface on your devstack-cc node. Type that IP into your browser URL.


    ubuntu@devstack-cc:~/devstack$ ip addr show eth0

    ...

    inet 104.22.81.13

    ...

2. Log into the BUI with a username of admin and a password of openstack. You should be viewing the Overview and Usage Summary page. It should look something like the following:

Browser User Interface

3. Navigate to the System -> Hypervisors page. Use the Hypervisor and Compute Host sub-tabs to answer the following questions.

a. How many hypervisors are there?

b. How many VCPUs are used?

c. How many VCPUs total?

d. How many compute hosts are there?

e. What is its state?

4. Navigate to the System -> Instances page.

a. How many instances are there currently?

5. Navigate to the Identity -> Projects page.

a. How many projects exist currently?

6. Navigate through the other tabs and subtabs to become familiar with the BUI.

Solutions

Task 2

6. $DEST/logs/stack.sh.log

Task 5

3. a. 1 b. 0 c. 2 d. 1 e. up

4. a. 0

5. a. 6

The Essentials of OpenStack Administration course teaches you everything you need to know to create and manage private and public clouds with OpenStack. Download a sample chapter today!

Read the other articles in the series:

Essentials of OpenStack Administration Part 1: Cloud Fundamentals

Essentials of OpenStack Administration Part 2: The Problem With Conventional Data Centers

Essentials of OpenStack Administration Part 3: Existing Cloud Solutions

Essentials of OpenStack Administration Part 4: Cloud Design, Software-Defined Networking and Storage

Essentials of OpenStack Administration Part 5: OpenStack Releases and Use Cases