
How To Install Webmin on Ubuntu 16.04


In this tutorial we will show you how to install and configure Webmin on Ubuntu 16.04. For those of you who didn’t know, Webmin is a free control panel for managing a VPS. Webmin is a web-based interface used to manage a VPS web hosting server. With the help of Webmin you can set up user accounts, Apache, DNS, file sharing, and perform other administrative tasks. Webmin is well suited to beginners who do not know much about the Unix or Linux command line.

This article assumes you have at least basic knowledge of Linux, know how to use the shell, and, most importantly, host your site on your own VPS. The installation is quite simple and assumes you are running as the root user; if not, you may need to prepend ‘sudo’ to the commands to get root privileges. I will walk you through the step-by-step installation of Webmin on an Ubuntu 16.04 LTS (Xenial Xerus) server.

Read more at idroot

LED Control From the Web Browser

About the Application

  • This is a simple application that enables the control of a GPIO line via a web browser. To toggle the line HIGH/LOW, click on a rectangle. This app can be used to control a relay Tibbit, an LED Tibbit, or some other “output” Tibbit.

  • The app utilizes the popular socket.io library to maintain an uninterrupted connection between the TPS and the browser, as well as the AngularJS V1.x.x front-end framework, which makes the development of single-page applications simple.

What you need

Hardware

*Feel free to replace the Tibbits with other output ones

Onboard Software

  • Node.js V6.x.x (pre-installed during production)

GitHub Repository

Name: tps-gpio-tutorials

Repository page: https://github.com/tibbotech/tps-gpio-tutorials

Clone URL: https://github.com/tibbotech/tps-gpio-tutorials.git

Updated At: Mon Oct 10 2016

Node.js Application

  • Socket.io and Express are used to support the web interface functionality

  • The code for web interface connectivity, LED manipulation, and the HTTP server providing web client files is located in the server.js file

  • Web client files are served from the /public folder

Installation and Configuration

git clone https://github.com/tibbotech/tps-gpio-tutorials
cd tps-gpio-tutorials
npm install .
cd one-led

  • Launch the app:

node server

server.js

Comments in the code explain how it works:

// Requires the Express, HTTP, WebSocket (socket.io), and GPIO modules
const express = require("express");
const app = express();
const http = require('http').Server(app);
const io = require('socket.io')(http);
const gpio = require("@tibbo-tps/gpio");

// Serves static assets from the 'public' folder
app.use("/", express.static('public'));

const led = gpio.init("S15A");

if(led.getDirection() === "input"){
    led.setDirection('output');
    led.setValue(1);
}

// Listens to the incoming WebSocket connection
var clients = io.on('connection', function(socket){
    // When the connection is established
    console.log('USER CONNECTED');

    // Reads I/O line state..
    // ..and broadcasts it to all the connected clients
    var value = led.getValue();

    clients.emit('tps:state:changed', value);

    // When any of the connected clients requests a change of the line state
    socket.on('web:state:changed', function(value){
        // Changes the line state...
        led.setValue(value);

        //.. and broadcasts it to all the clients
        clients.emit('tps:state:changed', value);
    });

    socket.on('disconnect', function(){
        console.log('USER DISCONNECTED');
    });
});

// Runs the HTTP server on port 3000
http.listen(3000,function(){
    console.log("LISTENING");
});
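
Note that server.js broadcasts the line state only when a browser connects or when a web client requests a change. If the line could also change from the device side, one possible extension (a sketch only, not part of the repository; the 500 ms polling interval is an arbitrary assumption) is to poll the line and reuse the same event:

// Possible extension (not in the repository): polls the line and pushes
// hardware-initiated changes to every connected client
var lastValue = led.getValue();
setInterval(function(){
    var value = led.getValue();
    if(value !== lastValue){
        lastValue = value;
        io.emit('tps:state:changed', value);
    }
}, 500);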

Web client

index.html

Comments in the code explain how it works:

<html lang="en">
    <head>
        <meta charset="UTF-8">
        <title>Title</title>
        <script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/socket.io/1.4.8/socket.io.min.js"></script>
        <script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/angular.js/1.5.0/angular.min.js"></script>
        <script type="text/javascript" src="client.js"></script>
        <link href="main.css" rel="stylesheet" type="text/css"/>
    </head>
    <body ng-app="leds"> <!-- The ng-app directive bootstraps your Angular application -->

    <!-- The ng-controller directive attaches a controller to the view -->
    <!-- The ng-hide directive hides the DOM element depending on the 'locked' variable -->
    <svg version="1.1" xmlns="http://www.w3.org/2000/svg" x="0px" y="0px" width="110px" height="110px" xml:space="preserve"
                ng-controller="ledsController"
                ng-hide="locked">

                <!-- The ng-class directive changes the class of the DOM element depending on the 'state' variable -->
        <!-- The ng-click directive invokes the function when the DOM element is clicked -->
                <g transform="translate(5,5)" class="led-icon">
                    <rect width="100" height="100" class="frame"></rect>
                    <rect x="10" y="10" width="80" height="80"
                        class="led"
                        ng-class="(state ? 'on' : 'off')"
                        ng-click="switch()">
                    </rect>
                </g>
            </svg>
    </body>
</html>

client.js

Comments in the code explain how it works:

angular.module('leds', [])
    .controller('ledsController', function($scope) {
        var socket = io(); // Connects back to the server that delivered the page

        $scope.locked = true; // Disables the view by default

        socket.on('connect', function () { // On connection established
            $scope.locked = false; // Enables the view
            $scope.$apply(); // Re-renders the view
        });

        socket.on('disconnect', function () { // Hides everything on disconnect
            $scope.locked = true;
            $scope.$apply();
        });

        socket.on('tps:state:changed', function (value) { // Catches the 'tps:state:changed' event
            $scope.state = value == 0;
            $scope.$apply();
        });

        $scope.switch = function() { // Sends the inverted value of the 'state' variable
            console.log($scope.state ? 1 : 0);
            socket.emit('web:state:changed', $scope.state ? 1 : 0);
        }
    });
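
To exercise the server without the browser UI, a throwaway Node.js client can speak the same two events. This is only a sketch: the socket.io-client package is not part of this project and must be installed separately, and <device-ip> is a placeholder for the address of your TPS.

// Hypothetical test client (assumes: npm install socket.io-client)
var socket = require('socket.io-client')('http://<device-ip>:3000');

socket.on('connect', function(){
    socket.emit('web:state:changed', 0);          // asks the server to drive the line to 0
});

socket.on('tps:state:changed', function(value){   // state broadcasts coming back from the TPS
    console.log('line state is now ' + value);
});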

 

Long-Term Maintenance, or How to (Mis-)Manage Embedded Systems for 10+ Years


In this presentation, kernel hacker Jan Lübbe will explain why apparently reasonable approaches to long-term maintenance fail and how to establish a sustainable workflow instead.

Long-term Embedded Linux Maintenance Made Easier

The good old days when security breaches only happened to Windows folk are fading fast. Malware hackers and denial of service specialists are increasingly targeting out of date embedded Linux devices, and fixing Linux security vulnerabilities was the topic of several presentations at the Embedded Linux Conference Europe (ELCE) in October.

One of the best attended was “Long-Term Maintenance, or How to (Mis-)Manage Embedded Systems for 10+ Years” by Pengutronix kernel hacker Jan Lübbe. After summarizing the growing security threats in embedded Linux, Lübbe laid out a plan to keep long-life devices secure and fully functional. “We need to move to newer, more stable kernels and do continuous maintenance to fix critical vulnerabilities,” said Lübbe. “We need to do the upstreaming and automate processes, and put in place a sustainable workflow. We don’t have any more excuses for leaving systems in the field with outdated software.”

As Linux devices grow older, traditional lifecycle procedures are no longer up to the job. “Typically, you would take a kernel from a SoC vendor or mainline, take a build system, and add user space,” said Lübbe. “You customize that and add an application, and do some testing and you’re done. But then there’s a maintenance phase for 15 years, and you better hope you have no platform changes, or want to add new features, or need to apply regulatory changes.”

All these changes increasingly expose your system to new errors, and require massive updates to keep in sync with upstream software. “But it’s not always unintentional errors that occur in the kernel that lead to problems,” said Lübbe. “These vendor kernels never went through the mainline community review process,” he added, noting the backdoor found last year in an Allwinner kernel.

“You cannot trust that your vendor will do the correct thing,” continued Lübbe. “Maybe only one or two engineers looked at that backdoor code. That would never happen if the patch was posted on a Linux kernel mailing list. Somebody would notice. Hardware vendors don’t care about security or maintenance. Maybe you get an update after one or two years, but even then it usually takes years between the time they start developing based on one fixed version to the point they declare it stable. If you then start developing on that base, you add maybe another half a year, and it’s even more obsolete.”

Increasingly, embedded developers working with long-life products build on Long Term Stable (LTS) kernels. But that doesn’t mean your work is done. “After a product is released, people don’t often follow the stable release chain anymore, so they don’t apply the security patches,” said Lübbe. “You’re getting the worst of both worlds: an obsolete kernel and no security. You don’t get the benefit of testing by many people.”

Lübbe noted that Pengutronix customers that used server-oriented distributions like Red Hat often ran into problems due to the rapid rate of customizations, as well as deployment and update systems that assume a sysadmin is on duty.

“The updates can work for some things, especially if they are x86, but each project is basically on its own to build infrastructure to update to new releases.”

Many developers choose backporting as a solution for updating long-life products. “It’s easy in the beginning, but once you are no longer in the project’s maintenance window, they don’t tell you if the version you use is affected by a bug, so it becomes much more difficult to find out if a fix is relevant,” said Lübbe. “So you pile up patches and changes and the bugs accumulate, and you have to maintain them yourself because no one else is using those patches. The benefits of using open source software are lost.”

Follow Upstream Projects

The best solution, argues Lübbe, is to follow releases maintained by upstream projects. “We’ve mostly focused on mainline based development, so we have as little difference as possible between the product and the mainstream kernel and other upstream projects. Long-term systems are well supported in mainline. Most systems that don’t use 3D graphics can run very few patches. Newer kernel versions also have lots of new hardening features that reduce the impact of vulnerabilities.”

Following mainline seems daunting to many developers, but it’s relatively easy if you implement procedures from the start, and then stick to them, said Lübbe. “You need to develop processes for everything you do on the system,” he said. “You always need to know what software is running, which is easier when you use a good build system. Each software release should define the complete system so you can update everything in the field. If you don’t know what’s there, you can’t fix it. You also want to have automated testing and automated deployment of updates.”

To “save an update cycle,” Lübbe recommends using the most recent Linux kernel when you start developing, and only moving to a stable kernel when you enter testing. After that, he suggests updating all the software in the system, including kernel, build system, user space, glibc, and components like OpenSSL every year, to versions that are supported by the upstream projects for the rest of the year.

“Just because you update at that point doesn’t mean you need to deploy,” said Lübbe. “If you see no security vulnerabilities, you can just put the patch on the shelf and have it ready if you need it.”

Finally, Lübbe recommends looking at release announcements every month, and checking out security announcements on CVE and mainline lists every week. You only need to respond “if the security announcement actually affects you,” he added. “If your kernel is current enough, it’s not too much work. You don’t want to get feedback on your product by seeing your device in the news.”

Embedded Linux Conference + OpenIoT Summit North America will be held on February 21 – 23, 2017 in Portland, Oregon. Check out over 130 sessions on the Linux kernel, embedded development & systems, and the latest on the open Internet of Things.

Linux.com readers can register now with the discount code, LINUXRD5, for 5% off the attendee registration price. Register now>>

4 Common Open Source License Compliance Failures and How to Avoid Them

The following is adapted from Open Source Compliance in the Enterprise by Ibrahim Haddad, PhD.

Companies or organizations that don’t have a strong open source compliance program often suffer from errors and limitations in processes throughout the software development cycle that can lead to open source compliance failures.

The previous article in this series covered common intellectual property failures. This time, we’ll discuss the four common open source license compliance failures and how to avoid them.

License compliance problems are typically less damaging than intellectual property problems, as they don’t have the side effect of forcing you to release your proprietary source code under an open source license. But license compliance failures may still have serious consequences including:

  • An injunction preventing a company from shipping a product until source code is released.

  • Support or customer service headaches caused by version mismatches (people calling or emailing the support hotline to ask about source code releases).

  • Embarrassment and/or bad publicity with customers and the open source community.

4 Common Open Source License Compliance Failures

Problem #1: Failure to publish or make available source code packages as part of meeting license obligations

How to avoid it: Follow a detailed compliance checklist to ensure that all compliance action items have been completed when a given product, application, or software stack is released into the market.

Problem #2: Failure to provide correct version of the source code corresponding to the shipped binaries.

How to avoid it: Add a verification step into the compliance process to ensure that you’re publishing the version of source code that exactly corresponds to the distributed binary version.

Problem #3: Failure to release modifications that were introduced to the open source software being incorporated into the shipping product.

How to avoid it:

  • Use a bill of materials (BOM) difference tool that allows the identification of software components that change across releases (a minimal sketch follows this list)

  • Re-introduce the newer version of the software component in the compliance process

  • Add a “compute diffs” step for any modified source code (eligible for open source distribution) to the checklist before releasing the open source used in the product.
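
As a purely illustrative sketch (real compliance programs typically rely on dedicated scanning and BOM tooling; the data shape below is an assumption), the core of such a difference check is simply a comparison of the component lists of two releases:

// Hypothetical BOM entries: { name, version, license }
function diffBom(previous, current) {
    var prev = new Map(previous.map(function(c){ return [c.name, c]; }));
    var curr = new Map(current.map(function(c){ return [c.name, c]; }));

    return {
        added:   current.filter(function(c){ return !prev.has(c.name); }),
        removed: previous.filter(function(c){ return !curr.has(c.name); }),
        changed: current.filter(function(c){
            return prev.has(c.name) && prev.get(c.name).version !== c.version;
        })
    };
}

// The added and changed components are the ones to re-run through the compliance process.
console.log(diffBom(
    [{ name: "openssl", version: "1.0.2j", license: "OpenSSL" }],
    [{ name: "openssl", version: "1.0.2k", license: "OpenSSL" },
     { name: "zlib",    version: "1.2.8",  license: "Zlib"    }]
));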

Problem #4: Failure to mark open source code that has been changed or to include a description of the changes.

How to avoid it:

  • Add source code marking as a checklist item before releasing source code, to ensure you flag all the changes introduced to the original copy you downloaded

  • Conduct source code inspections before releasing the source code

  • Add a milestone in the compliance process to verify modified source code has been marked as such

  • Offer training to staff to ensure they update the change logs of source code files as part of the development process.

The most important outcome of non-compliance cases has been that the companies involved ultimately had to comply with the terms of the license(s) in question, and the costs of addressing the problem after the fact have categorically exceeded those of basic compliance.

Therefore, it is really a smart idea to ensure compliance before a product ships or a service launches. In part 6 of this series, we’ll cover some of the top lessons learned in achieving open source compliance that open source professionals need to know.

Download the free e-book, Open Source Compliance in the Enterprise, for a complete guide to creating compliance processes and policies for your organization.

Read the other articles in this series:

An Introduction to Open Source Compliance in the Enterprise

Open Compliance in the Enterprise: Why Have an Open Source Compliance Program?

Open Source Compliance in the Enterprise: Benefits and Risks

3 Common Open Source IP Compliance Failures and How to Avoid Them

4 Common Open Source License Compliance Failures and How to Avoid Them

Top Lessons For Open Source Pros From License Compliance Failures

openSUSE Package Management Cheat Sheet

Debian/Ubuntu have long been my primary Linux distributions, although like all good Linux users I have used Fedora, CentOS, Gentoo, Red Hat, Slackware, Arch Linux, Mageia, and other Linux distributions because why not? It is a feast of riches and the best playground there is.

I became a SUSE employee recently, so naturally I’ve been spending more time with openSUSE. openSUSE is sponsored by SUSE, and it is an independent community project. There are two openSUSE flavors: Tumbleweed and Leap. Tumbleweed is a bleeding-edge rolling release, containing the latest software versions. Leap is more conservative, and it incorporates core code from SUSE Linux Enterprise Server (SLES) 12. Both are plenty good for everyday use.

openSUSE is an RPM-based distro so you can use your favorite RPM commands. YaST (Yet another Setup Tool) is one of the most famous, or infamous, SUSE tools. YaST supports nearly every aspect of systems administration: hardware configuration, server management, software management, networking, user management, virtualization, security…you name it, it’s probably in YaST. YaST has both a graphical and console interface. YaST is vast.

I prefer less monolithic apps, and openSUSE has another tool for package and software repository management, zypper. I like zypper. It’s fast and does the job. zypper is command-line only so you have only one interface to learn, which is something I highly approve of. Many of zypper's commands have easy-to-remember shortcuts such as se for search, in for install, and rm for remove.

Help

Run zypper help for a command listing, zypper help [command name] to get additional information on any command, and man zypper for detailed help.


Repository Management

Remember the olden days of distro-only repositories, with few third-party repos to choose from? If your particular Linux distribution did not package an application that you wanted, or did not maintain an up-to-date version, your only option was compiling from source code. When Ubuntu created Personal Package Archives (PPAs), the floodgates opened, and now third-party and special-purpose repos are everywhere.

openSUSE ships with a batch of configured repositories (see Package repositories for lists of official and third-party repos). Not all of them are enabled. You can see a table with enabled/disabled status, and also generate a table with additional information such as the repository URLs:

zypper lr
zypper lr -d

Enable and disable an installed repo, without removing it:

zypper mr -e repo-debug
zypper mr -d repo-debug

Remove and add a repo:

zypper rr repo-debug
zypper ar --name "Debug Repo" http://download.opensuse.org/debug/distribution/leap/42.2/repo/oss/ repo-debug

The repository alias (the last argument) is required but arbitrary, so you can call it anything you want; the optional --name sets a human-readable description. You can refresh all repositories, or selected repos:

zypper refresh
zypper refresh openSUSE-Leap-42.2-Update

Generate a table of all packages in a repo, and their installed status:

zypper se --repo openSUSE-Leap-42.2-Update

zypper won’t list the files in installed packages, but good old rpm will, like this example for python:

rpm -ql python

Clear your local package cache to force fresh package downloads:

zypper clean

Search For Packages

zypper's search function is fast and easy. You can run a simple name search; show version, architecture, and repository; and limit results to installed-only or uninstalled-only packages, as in these examples for tiff:

zypper se tiff
zypper se -s tiff
zypper se -i tiff
zypper se -u tiff

You can search for the exact package name, which is great for excluding giant listings of libraries and plugins:

zypper se -x tiff

Name specific repositories to search in:

zypper se --repo openSUSE-Leap-42.2-Update tiff

Install Packages

Installing packages is easy peasy, so let’s install digiKam:

zypper in digikam

Install digiKam from a special repository:

zypper in --repo myspecialrepo digikam

Do a dry-run before installation:

zypper in -D --repo myspecialrepo digikam

Download the package without installing it:

zypper in -d --repo myspecialrepo digikam

Remove Packages

Getting rid of packages is just as easy. We don’t want digiKam anymore (which is silly, because everyone wants digiKam):

zypper rm digikam

Do a dry-run first:

zypper rm -D digikam

Remember when there were flamewars over dependency-resolving package managers like apt-get, Yum, and zypper, as if automatic dependency resolution were a bad thing? We sure were weird back then. See Using Zypper for more help and some advanced usage.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

Set Up a Simple CI Pipeline to Build Docker Images for ARM

Recently I did an experiment: Can we build Docker images for ARM on ordinary cloud CI services that only provide Intel CPUs?

The idea was to get rid of self-hosted CI build agents that you have to care for. If you want to provide an ARM Docker image for an open source project, your task is to build it, not to set up and maintain a whole pipeline for it.

We at Hypriot have created several Dockerfiles for open source tools like MySQL, Træfɪk, or Node.js to make them available as Docker images for your ARM devices.

Read more at Hypriot

Distributed Fabric: A New Architecture for Container-Based Applications

There’s a palpable sense of excitement in the application development world around container technology. Containers bring a new level of agility and speed to app development, giving developers the ability to break large monolithic apps into small, manageable microservices that can talk to one another, be more easily tested and deployed, and operate more efficiently as a full application. However, containers also demand a new architecture for the application services managing these microservices and apps, particularly in regards to service discovery — locating and consuming the services of those microservices.

Read more at The New Stack

npm Best Practices – Node.js at Scale

With our new series, Node.js at Scale, we are creating a collection of articles focusing on the needs of companies with bigger Node.js installations and of developers who have already learned the basics of Node.

In the first chapter of Node.js at Scale you are going to learn the best practices on using npm as well as tips and tricks that can save you a lot of time on a daily basis.

Read more at Rising Stack

Intel’s BigDL Deep Learning Framework Snubs GPUs for CPUs

Last week Intel unveiled BigDL, a Spark-powered framework for distributed deep learning, available as an open source project. With most major IT vendors releasing machine learning frameworks, why not the CPU giant, too?

What matters most about Intel’s project may not be what it offers people building deep learning solutions on Spark clusters, but what it says about Intel’s ambitions to promote hardware that competes with GPUs for those applications. BigDL is aimed at those who want to apply machine learning to data already available through Spark or Hadoop clusters.

Read more at InfoWorld