
Set Ubuntu Derivatives Back to Default with Resetter

How many times have you dived deep into Ubuntu (or an Ubuntu derivative), configuring things and installing software, only to find that your desktop (or server) platform isn’t exactly what you wanted? This situation can be problematic when you already have all of your user files on the machine. In this case, you have a choice: you can either back up all your data, reinstall the operating system, and copy your data back onto the machine, or you can give a tool like Resetter a go.

Resetter is a new tool (written by a Canadian developer who goes by the name “gaining”), written in Python and PyQt, that will reset Ubuntu, Linux Mint (and a few other Ubuntu-based distributions) back to stock configurations. Resetter offers two different reset options: Automatic and Custom. With the Automatic option, the tool will:

  • Remove user-installed apps

  • Delete users and home directories

  • Create default backup user

  • Auto install missing pre-installed apps (MPIAs)

  • Remove non-default users

  • Remove snap packages

The Custom option will:

  • Remove user-installed apps or allow you to select which apps to remove

  • Remove old kernels

  • Allow you to choose users to delete

  • Delete users and home directories

  • Create default backup user

  • Allow you to create custom backup user

  • Auto install MPIAs or choose which MPIAs to install

  • Remove non-default users

  • View all dependent packages

  • Remove snap packages

I’m going to walk you through the process of installing and using Resetter. However, I must tell you that this tool is very much in beta. Even so, Resetter is definitely worth a go. In fact, I would encourage you to test the app and submit bug reports (you can either submit them via GitHub or send them directly to the developer’s email address, gaining7@outlook.com).

It should also be noted that, at the moment, the only supported distributions are:

  • Debian 9.2 (stable) Gnome edition

  • Linux Mint 17.3+ (support for Mint 18.3 coming soon)

  • Ubuntu 14.04+ (although I found 17.10 was not supported)

  • Elementary OS 0.4+

  • Linux Deepin 15.4+

With that said, let’s install and use Resetter. I’ll be demonstrating on Elementary OS Loki.

Installation

There are a couple of ways to install Resetter. The method I chose is by way of the gdebi helper app. Why? Because it will pick up all the necessary dependencies for installation. First, we must install that particular tool. Open up a terminal window and issue the command:

sudo apt install gdebi

Once that is installed, point your browser to the Resetter Download Page and download the most recent version of the software. Once it has downloaded, open up the file manager, navigate to the downloaded file, and click (or double-click, depending on how you’ve configured your desktop) on the resetter_XXX-stable_all.deb file (where XXX is the release number). The gdebi app will open (Figure 1). Click on the Install Package button, type your sudo password, and Resetter will install.

Figure 1: Installing Resetter with gdebi.
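
If you prefer working from the terminal, gdebi also has a command-line interface; assuming the package landed in your Downloads directory (and with XXX again standing in for the release number), the following should do the same job:

cd ~/Downloads
sudo gdebi resetter_XXX-stable_all.deb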

Once Resetter is installed, you’re ready to go.

Using Resetter

Remember, before you do this, you must back up your data. You’ve been warned.

From your terminal window, issue the command sudo resetter. You’ll be prompted for your sudo password. Once Resetter opens, it will automatically detect your distribution (Figure 2).

Figure 2: The Resetter main window.

We’re going to test the Resetter waters by running an automatic reset. From the main window, click Automatic Reset. The app will offer up a clear warning that it is about to reset your operating system (in my case, Elementary OS 0.4.1 Loki) to its factory defaults (Figure 3).

Figure 3: Resetter warns you before you continue on.

Once you click Yes, Resetter will display all of the packages it will remove (Figure 4). If you’re okay with that, click OK and the reset will begin.

Figure 4: All of the packages to be removed, in order to reset Elementary OS to factory defaults.

During the reset, the application will display a progress window (Figure 5). Depending upon how much you’ve installed, the process shouldn’t take too long.

Figure 5: The Resetter progress window.

When the process completes, Resetter will display a new username and password to use to log back into your newly reset distribution (Figure 6).

Figure 6: New username and password.

Click OK and then, when prompted, click Yes to reboot the system. Once you are prompted to log in, use the new credentials given to you by the Resetter app. After a successful login, you’ll need to recreate your original user. That user’s home directory will still be intact, so all you need to do is issue the command sudo useradd USERNAME (where USERNAME is the name of the user) and then issue the command sudo passwd USERNAME to set that user’s password. With the user/password set, you can log out and log back in as your old user (enjoying the same home directory you had before resetting the operating system).

My results

I have to confess, after adding the password back to my old user (and testing it by using the su command to change to that user), I was unable to log into the Elementary OS desktop with that user. To solve that problem, I logged in with the Resetter-created user, moved the old user home directory, deleted the old user (with the command sudo deluser jack), and recreated the old user (with the command sudo useradd -m jack).

After doing that, I checked the original home directory, only to find that its ownership had been changed from jack:jack to 1000:1000. That was easily fixed by issuing the command sudo chown -R jack:jack /home/jack. The lesson? If you use Resetter and find you cannot log in with your old user (after you’ve re-created that user and given it a new password), make sure to change the ownership of the user’s home directory.
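
Putting those recovery steps together, the condensed sequence looks like this (jack is, of course, my username; substitute your own):

sudo deluser jack                     # remove the broken account (the home directory is left in place)
sudo useradd -m jack                  # recreate the user
sudo passwd jack                      # set a new password
sudo chown -R jack:jack /home/jack    # reclaim ownership of the home directory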

Outside of that one issue, Resetter did a great job of taking Elementary OS Loki back to a default state. Although Resetter is in beta, it’s a rather impressive tool. Give it a try and see if you don’t have the same outstanding results I did.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

Small Open-Source OSs for Small IoT Devices

A range of open-source operating-system solutions are available for those confined to scaled-down dimensions—homing in on the best option does require some research, though.

Linux has become the de facto open-source operating system (OS), although there are niche alternatives like flavors of BSD (Berkeley Software Distribution). A variety of incarnations target minimal memory platforms such as Ubuntu Core/Snappy and Android Things.

Many commercial open-source solutions are available in this space, too, but they all require memory-management-unit (MMU) hardware for virtual memory.

Read more at Electronic Design

A perf Cheat Sheet

Right now I’m working on finishing up a zine about perf that I started back in May, and I’ve been struggling with how to explain all there is to say about perf in a concise way. Yesterday I finally hit on the idea of making a 1-page cheat sheet reference which covers all of the basic perf command line arguments.

All the examples in this cheat sheet are taken (with permission) from http://brendangregg.com/perf.html, which is a fantastic perf reference and has many more great examples.
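
If you haven’t met perf yet, the basic command-line arguments such a cheat sheet covers look roughly like this (a generic sampler with ./myprog as a placeholder program; these lines are not taken from the zine itself):

perf stat ./myprog         # count events (cycles, instructions, cache misses) for one run
perf record -g ./myprog    # sample stack traces while the program runs
perf report                # browse the samples collected by perf record
perf top                   # live, system-wide profiling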

Read more at Julia Evans

OpenStack Foundation Embraces Containers With “Kata Containers”

Kata Containers were one of the exciting announcements from this year’s KubeCon. See how they work and how the makers are working with the community.

On Dec. 5, as the enthusiastic container community was getting ready for KubeCon, the OpenStack Foundation renewed its long-standing friendship with that community by announcing a new effort called Kata Containers, which aims to unify the speed and manageability of containers with the security advantages of virtual machines (VMs).

Read more at DZone

Tips and Resources for Learning Kubernetes

Regardless of how you decide to begin, it’s time to start learning Kubernetes.

If you read Kubernetes’ description—“an open source system for automating deployment, scaling, and management of containerized applications”—you may think that getting started with Kubernetes is quite a daunting feat. But there are a number of great resources out there that make it easier to learn this container orchestration system.

Before we dive in, let’s examine where Kubernetes started. In the Unix world, containers have been in use for a very long time, and today Linux containers are popular thanks to projects like Docker. Google created Process Containers in 2006; when it later realized it needed a way to maintain all those containers, it created Borg as an internal Google project. Many tools sprang from it, including Omega, which was built by iterating on Borg. Omega maintained cluster states separate from the cluster members, thus breaking Borg’s monolith. Finally, Kubernetes sprang from Google, and it is now maintained by the Cloud Native Computing Foundation’s members and contributors.

Read more at OpenSource.com

Unraveling the MEC Standards Puzzle

Multi-access Edge Computing (MEC) is quickly gaining traction as a disruptive technology that promises to bring applications and content closer to the network edge. It is also expected to reduce latency in networks and make new services possible.

Analyst Iain Gillott of iGR Research says that he expects MEC to be as disruptive to the market as 5G and software-defined networking (SDN). … There is currently no MEC standard; however, several projects are working on standards and trying to bring some order to this burgeoning technology.

Here are four groups that are involved in MEC that are worth watching: 

Read more at SDxCentral

Container Basics: Terms You Need to Know

In the previous article, we talked about what containers are and how they breed innovation and help companies move faster. And, in the following articles in this series, we will discuss how to use them. Before we dive more deeply into the topic, however, we need to understand some of the terms and commands used in the container world. Without a confident grasp of this terminology, things could get confusing.

Let’s explore some of the basic terms used in the Docker container world.

Container: What exactly is a container? It’s a runtime instance of a Docker image. It contains a Docker image, an execution environment, and instructions. It’s totally isolated from the host, so multiple containers can run on the same system, completely oblivious of each other. You can replicate multiple containers from the same image to scale a service when demand is high, and nuke those containers when demand is low.
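
For example, spinning up two isolated containers from the same image takes seconds (assuming Docker is installed, and using the public nginx image purely as a stand-in):

docker run -d --name web1 nginx    # first container from the nginx image
docker run -d --name web2 nginx    # a second, fully isolated copy
docker rm -f web1 web2             # “nuke” them both when demand drops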

Docker Image: This is no different from the image of a Linux distribution that you download. It’s a package of dependencies and information for creating, deploying, and executing a container. You can spin up as many containers as you want in a matter of seconds. Each behaves exactly the same. Images are built on top of one another in layers. Once an image is created, it doesn’t change. If you want to make changes to your container, you simply create a new image and deploy new containers from that image.

Repository (repo): Linux users will already be familiar with the term repository; it’s a reservoir where packages are stored that can be downloaded and installed on a system. In the context of Docker containers, the only difference is that it hosts Docker images, which are categorized via labels. You can have different variants of the same application, or different versions, all tagged appropriately.

Registry: Think of this as something like GitHub. It’s an online service that hosts and provides access to repositories of Docker images. DockerHub, for example, is the default registry for public images. Vendors can upload their repositories to DockerHub so that their customers can download and use official images. Some companies offer their own registries for their images. Registries don’t have to be run and managed by third-party vendors; organizations can run on-premises registries to manage organization-wide access to repositories.

Tag: When you create a Docker image, you can tag it appropriately so that different variants or versions can be easily identified. It’s no different from what you see in any software package. Docker images are tagged when you add them to the repository.
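
For instance, a public repository such as ubuntu carries many tagged variants side by side (these image names are just common public examples):

docker pull ubuntu:16.04    # one tagged version
docker pull ubuntu:17.10    # another variant of the same repository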

Now that you have an understanding of the basics, the next phase is understanding the terminology used when working with actual Docker containers.

Dockerfile: This is a text file containing the commands that would otherwise be executed manually to build a Docker image. Docker uses these instructions to build images automatically.
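
A minimal sketch of what a Dockerfile can look like (app.py is a placeholder for your own application):

FROM python:3
COPY app.py /app/app.py
CMD ["python", "/app/app.py"]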

Build: This is the process of creating an image from a Dockerfile.

Push: Once the image is created, “push” is the process of publishing that image to a repository. The term is also used as part of a command that we will learn in the next articles.

Pull: A user can retrieve an image from a repository through the “pull” process.
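
Taken together, build, push, and pull map onto three commands; a quick sketch, with myuser/myapp:1.0 as a placeholder repository and tag:

docker build -t myuser/myapp:1.0 .    # build an image from the Dockerfile in the current directory
docker push myuser/myapp:1.0          # publish it to the registry
docker pull myuser/myapp:1.0          # retrieve it on another machine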

Compose: Most complex applications comprise more than one container. Compose is a command-line tool for running multi-container applications: it lets you bring up all the containers an application needs with a single command, easing the headache of managing them individually.
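
As a sketch of how this looks in practice, here’s a hypothetical docker-compose.yml describing a two-container application (the service names and images are placeholders):

version: "3"
services:
  web:
    image: nginx
    ports:
      - "8080:80"
  db:
    image: postgres

Bringing the whole stack up is then a single command:

docker-compose up -d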

Conclusion

The scope of container terminology is massive, but these are some basic terms that you will frequently encounter. The next time you see these terms, you will know exactly what they mean. In the next article, we will get started working with Docker containers.

The Sweetness of JAMstack: JavaScript, APIs and Markup

The JAMstack approach to web development has been emerging for several years, but really took off in 2017. More a design philosophy than an explicit framework, JAMstack takes the concept of static, database-free websites to the next level via an architecture advocates are calling “the future of the internet.”

Which only makes sense. Browsers themselves have essentially become mini operating systems capable of running complex client-side applications while interacting with myriad APIs. Meanwhile, with the help of Node.js and npm, JavaScript has leaped the divide between front end and back end, enabling real-time, two-way communication between client and server. JAMstack is simply harnessing these factors in a logical and effective way.

(A word on static vs. dynamic site architecture: static in this context refers to how websites are built, powered and served, which in no way means that a static site lacks interactivity.)

Read more at The New Stack

One Small Step to Harden USB Over IP on Linux

The USB over IP kernel driver allows a server system to export its USB devices to a client system over an IP network via the USB over IP protocol. Exportable USB devices include physical devices and software entities created on the server using the USB gadget subsystem. This article will cover a major bug related to USB over IP in the Linux kernel that was recently uncovered; it created some significant security issues but was resolved with help from the kernel community.

The Basics of the USB Over IP Protocol

There are two USB over IP server kernel modules:

  • usbip-host (stub driver): A stub USB device driver that can be bound to physical USB devices to export them over the network.
  • usbip-vudc: A virtual USB Device Controller that exports a USB device created with the USB Gadget Subsystem.

There is one USB over IP client kernel module:

  • usbip-vhci (vhci-hcd): A virtual USB host controller driver that imports USB devices from the server.
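
To make those roles concrete, a typical session with the usbip userspace tools looks roughly like this (a sketch only; the hostname and bus ID are placeholders):

On the server:

sudo modprobe usbip-host
sudo usbipd -D            # start the USB over IP daemon
usbip list -l             # find the bus ID of the device to export
sudo usbip bind -b 1-1    # export that device

On the client:

sudo modprobe vhci-hcd
usbip list -r server.example.com                  # list devices the server exports
sudo usbip attach -r server.example.com -b 1-1    # attach the exported device locally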

Customizing a Linux System for an Autonomous Arctic Monitoring Station

Developing an embedded system for remote field duty is hard enough, but what if you had to contend with -40ºC temperatures, high winds, ice-encased cables, and attacks from Arctic wildlife? These are just some of the harsh realities faced by the developers of a Linux-driven sensor buoy deployed on the sea ice off the north coast of Alaska.

At the recent Embedded Linux Conference Europe (ELCE), Satish Chetty talked about his volunteer work setting up a sea ice monitoring station funded by Ice911. The principal goal is to study changes in ice formation and melting due to global warming. Chetty’s day job is VP of software engineering at Hera Systems, a Silicon Valley startup that develops Earth imaging satellites and edge analytics solutions.

The mostly autonomous monitoring buoy has been evolving since 2009. Planted in or near sea ice from November to July every year, the station measures weather, water temperature, water depth (sonar), ice depth and melt, sunlight, and albedo (the reflection of sunlight). Cameras are used for visual analysis.

A custom, multi-sensor, 1-Wire temperature string is attached to the buoy and embedded into the ice, “with sensors at every depth so you get a profile of water and ice thickness,” said Chetty. “Where we were testing, most of the melt happens from the bottom up because the meltwater flows into the water, heating it up.”
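
As a point of reference (this is generic Linux behavior, not Chetty’s code): with the kernel’s w1-gpio and w1-therm drivers loaded, each sensor on a 1-Wire temperature string appears under sysfs and can be read with nothing more than:

cat /sys/bus/w1/devices/28-*/w1_slave    # one entry per sensor; the t= field is the temperature in millidegrees Celsius (the 28- prefix assumes DS18B20-family sensors)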

Like the underwater, Linux-driven ESP monitoring station described by Brent Roman at last year’s ELCE, Chetty’s Arctic buoy is severely constrained by power. The site is just off the Arctic Ocean coast near Barrow, the northernmost town in the United States. The location sits in darkness for 65 days of the year, and even in warmer months, a battery bank is required to augment the solar panels.

Four panels are positioned at almost 90-degree angles to track the sun as it passes just over the horizon in a circular path. This configuration increases exposure to the fierce winds caused by the site’s peninsular location. As a result, Chetty’s team was forced to use small, 5-10 Watt panels so they wouldn’t blow over.

Originally, they used non-rechargeable lithium batteries. For various reasons, including the greater difficulty of replacement, as well as the regulatory hassles of transporting the batteries by air, they switched to banks of smartphone LiPo batteries. The developers and researchers are based away from Barrow, so regular maintenance is typically performed by armed bear guards, who also accompany researchers during their visits to the buoy.

Wireless power hogs

The station’s biggest power draw comes from the cell modems, followed by multiple cameras. The station relies primarily on a $50 Huawei 3G cellular modem to transmit data to an archiving server. To avoid cellular service charges, the team originally started to set up WiFi repeaters, but abandoned the project due to the complexities of maintenance.

They did, however, add a WiFi access point, which is used for close-range communications with researchers’ mobile devices. “Sometimes the 3G and satellite modems fail so we have to go out and retrieve the SD card,” explained Chetty. “During melting, the buoy is surrounded by slushy, dangerous water, so we had to put a board down to reach it. It was hard pulling an SD card wearing gloves while balancing on a board. It’s much easier to use WiFi.”

Chetty and his team chose WiFi over Bluetooth to ease simultaneous access by multiple researchers. Yet, WiFi added other challenges. “Certain WiFi drivers require other network drivers before you can compile, so it adds to the complexity and boot time, and it burns more power,” said Chetty.

Power efficiency was the main consideration in system design, followed by cost, size, and weight. “The equipment needs to be small and light enough to be carried by an ATV or a sled pulled by snowmobiles, and so it can be easily dragged into a boat in July,” said Chetty. The system was also designed so it could be quickly disassembled. “The ice melt happens within a single week so you want to be able to quickly disassemble it,” said Chetty.

The station runs Linux on a Technologic TS-7400-v2 SBC connected to a Belkin USB hub. Chetty’s team considered using a cheaper and more power-efficient microcontroller-based system, but selected Linux for several reasons. One was that most of the sensors they wanted to use were low-cost, off-the-shelf devices with USB drivers. “Instead of making custom PCBs, it was easier to use a Linux system and just plug in the sensors.” Chetty developed a custom kernel for the board with a Debian stack that was trimmed to remove non-essential packages.

Chetty praised the TS-7400-v2 for its $150 price, fanless operation, power efficiency, and -40 to 85ºC operating range. The ARM9-based i.MX286 SoC can be configured down to 454MHz to save power. “The SBC can run at half a watt, and it can operate at 8 to 24V power, which is good because the battery doesn’t maintain charge all the time,” said Chetty. “There’s a built-in sleep timer that you can program to shut off after doing tasks, and we can turn peripherals on and off via software.”

The board includes a Real Time Clock (RTC), but at extremely low temperatures it slows down, causing time synchronization issues. “Every three or four days we do a time sync,” said Chetty. Originally, the developers performed remote config updates using ssh, but now they update once a year during the summer.
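
Chetty didn’t detail the mechanism, but on a Debian-based system a forced sync can be as simple as the following (one common approach, offered only as an illustration, not necessarily what his team uses):

sudo ntpdate pool.ntp.org    # step the clock against a public NTP pool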

Prepping for cold, ice, and polar bears

Unlike most industrial systems, the station experiences -40ºC temperatures on a regular basis. The SBC works fine at -40ºC, as do the $75, USB-connected Logitech webcams, said Chetty. “Our 3G modem is rated only for -20ºC,” he added. “Lower than that it still connects, but it occasionally drops connections during handshakes.” In that case, sensor data is stored on the SD card.

Ice buildup proved to be a bigger challenge than low temperatures. For example, the Logitech cameras are housed in a fishing bait box that resists ice build-up, but still allows icicles to extend into the camera’s cutout view. When the cameras grabbed stills from the video, they focused on the icicles instead of the landscape.

Chetty’s solution was to run video capture for 3-5 seconds before taking the still, giving the cameras time to refocus. The system could then identify the good stills to save while discarding the remaining video to save disk space. “Compiling that at the kernel level was important,” said Chetty.
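
The same warm-up-then-capture idea can be sketched with stock tools; for example, with ffmpeg reading a V4L2 webcam (the device path and timing are placeholders, and this is not Chetty’s actual pipeline):

ffmpeg -f v4l2 -i /dev/video0 -ss 4 -frames:v 1 still.jpg    # stream for ~4 seconds so the camera can refocus, then keep a single frame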

Ice and rime buildup on cables and sensors was a bigger problem. “The sonar sensor got so much ice on it after every blizzard that we kept getting incorrect readings,” said Chetty. “For a while, we sent people out to chip off the ice, but it happened so often we decided to change the sensor. Just because it’s temperature rated doesn’t mean it can handle every situation. At -40ºC, cables get encased in ice and can get brittle; if you tap them, you can break them. The ice makes it hard to open the box up to repair things. One time we broke the board pins and ruined the experiment. We can’t take the whole thing back to the lab to fix it because the sensors are embedded into the ice. For our next version, we’ll put connectors outside instead of running cables inside.”

If all this wasn’t enough, there are also the animal attacks. “One time, a fox chewed out our sensors, so we put a cap on it,” said Chetty. “We think a polar bear stepped on one of the arms and broke some other sensors. When we see the sensor data acting weird, we know something has happened.”

You can watch the entire presentation below: