Whether your budget permits you to attend large, global events or just small local shows, there’s a Linux and open source conference to suit everyone.
Even if you don’t live and breathe open source, I highly recommend you attend at least one conference that fits your schedule and travel budget. The technical know-how you gain can make your life easier, and it’s helpful to know what’s on the horizon. Sometimes, a single how-to presentation can save you a week of work or a panel discussion can help you formulate your company’s IT strategy—and that justifies the cost.
Plus, in the sense of enlightened self-interest, attending conferences is an investment in your own career: You need to keep your tech skills honed. Even introverts can get something out of the personal networking experience, which helps when you want to find your next job. You can’t beat the “hallway” track at a conference for learning what people really think about the latest and greatest programs. Webinars and online streaming of keynote speeches are all well and good, but nothing’s quite as rewarding as meeting people of like minds in real life.
How many times have you dived deep into Ubuntu (or an Ubuntu derivative), configuring things and installing software, only to find that your desktop (or server) platform isn’t exactly what you wanted? This situation can be problematic when you already have all of your user files on the machine. In this case, you have a choice: you can either back up all your data, reinstall the operating system, and copy your data back onto the machine, or you can give a tool like Resetter a go.
Resetter is a new tool, written in Python and PyQt by a Canadian developer who goes by the name “gaining,” that will reset Ubuntu, Linux Mint, and a few other Ubuntu-based distributions back to their stock configurations. Resetter offers two different reset options: Automatic and Custom. With the Automatic option, the tool will:
Remove user-installed apps
Delete users and home directories
Create default backup user
Auto install missing pre-installed apps (MPIAs)
Remove non-default users
Remove snap packages
The Custom option will:
Remove user-installed apps or allow you to select which apps to remove
Remove old kernels
Allow you to choose users to delete
Delete users and home directories
Create default backup user
Allow you to create custom backup user
Auto install MPIAs or choose which MPIAs to install
Remove non-default users
View all dependent packages
Remove snap packages
I’m going to walk you through the process of installing and using Resetter. However, I must tell you that this tool is very much in beta. Even so, Resetter is definitely worth a go. In fact, I would encourage you to test the app and submit bug reports (you can either submit them via GitHub or send them directly to the developer’s email address, gaining7@outlook.com).
It should also be noted that, at the moment, the only supported distributions are:
Debian 9.2 (stable) Gnome edition
Linux Mint 17.3+ (support for Mint 18.3 coming soon)
Ubuntu 14.04+ (although I found 17.10 not supported)
Elementary OS 0.4+
Linux Deepin 15.4+
With that said, let’s install and use Resetter. I’ll be demonstrating on Elementary OS Loki.
Installation
There are a couple of ways to install Resetter. The method I chose is by way of the gdebi helper app. Why? Because it will pick up all the necessary dependencies for installation. First, we must install that particular tool. Open up a terminal window and issue the command:
sudo apt install gdebi
Once that is installed, point your browser to the Resetter Download Page and download the most recent version of the software. Once it has downloaded, open up the file manager, navigate to the downloaded file, and click (or double-click, depending on how you’ve configured your desktop) on the resetter_XXX-stable_all.deb file (where XXX is the release number). The gdebi app will open (Figure 1). Click on the Install Package button, type your sudo password, and Resetter will install.
Figure 1: Installing Resetter with gdebi.
Once Resetter is installed, you’re ready to go.
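The installation steps above can be sketched as a short shell session. Note that the download URL and version number below are placeholders, not the real ones; grab the actual file name from the Resetter download page:

```shell
# Install gdebi so that package dependencies are resolved automatically
sudo apt install -y gdebi

# Download the latest Resetter .deb (URL and version are placeholders;
# use the file from the project's actual download page)
wget https://example.com/downloads/resetter_X.Y.Z-stable_all.deb

# gdebi on the command line works just like the GUI shown in Figure 1:
# it installs the .deb along with any missing dependencies
sudo gdebi resetter_X.Y.Z-stable_all.deb
```

Using gdebi from the terminal is equivalent to the click-through flow described above; it is handy when you are working on a machine over SSH.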
Using Resetter
Remember, before you do this, you must back up your data. You’ve been warned.
From your terminal window, issue the command sudo resetter. You’ll be prompted for your sudo password. Once Resetter opens, it will automatically detect your distribution (Figure 2).
Figure 2: The Resetter main window.
We’re going to test the Resetter waters by running an automatic reset. From the main window, click Automatic Reset. The app will offer up a clear warning that it is about to reset your operating system (in my case, Elementary OS 0.4.1 Loki) to its factory defaults (Figure 3).
Figure 3: Resetter warns you before you continue on.
Once you click Yes, Resetter will display all of the packages it will remove (Figure 4). If you’re okay with that, click OK and the reset will begin.
Figure 4: All of the packages to be removed, in order to reset Elementary OS to factory defaults.
During the reset, the application will display a progress window (Figure 5). Depending upon how much you’ve installed, the process shouldn’t take too long.
Figure 5: The Resetter progress window.
When the process completes, Resetter will display a new username and password for you to use, in order to log back into your newly reset distribution (Figure 6).
Figure 6: New username and password.
Click OK and then, when prompted, click Yes to reboot the system. Once you are prompted to log in, use the new credentials given to you by the Resetter app. After a successful login, you’ll need to recreate your original user. That user’s home directory will still be intact, so all you need to do is issue the command sudo useradd USERNAME (where USERNAME is the name of the user). Once you’ve done that, issue the command sudo passwd USERNAME (where USERNAME is the name of the user). With the user/password set, you can log out and log back in as your old user (enjoying the same home directory you had before resetting the operating system).
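The two recovery commands boil down to this (using jack as an example username):

```shell
# Re-create the original account; its home directory still exists,
# so -m (create a new home directory) is not needed here
sudo useradd jack

# Set a password for the re-created account so it can log in
sudo passwd jack
```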
My results
I have to confess, after adding the password back to my old user (and testing it by using the su command to change to that user), I was unable to log into the Elementary OS desktop with that user. To solve that problem, I logged in with the Resetter-created user, moved the old user home directory, deleted the old user (with the command sudo deluser jack), and recreated the old user (with the command sudo useradd -m jack).
After doing that, I checked the original home directory, only to find out the ownership had been changed from jack:jack to 1000:1000. That could have been fixed simply by issuing the command sudo chown -R jack:jack /home/jack. The lesson? If you use Resetter and find you cannot log in with your old user (after you’ve re-created the user and given it a new password), make sure to change the ownership of the user’s home directory.
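One way to stitch the workaround together, again with jack as the example user (run these while logged in as the Resetter-created backup user):

```shell
# Set the old home directory aside and remove the broken account
sudo mv /home/jack /home/jack.old
sudo deluser jack

# Re-create the account with a fresh home directory and password
sudo useradd -m jack
sudo passwd jack

# Copy the original files back (cp -a preserves attributes and
# the trailing /. includes dotfiles), then fix the ownership,
# which the reset had left at UID/GID 1000:1000
sudo cp -a /home/jack.old/. /home/jack/
sudo chown -R jack:jack /home/jack
```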
Outside of that one issue, Resetter did a great job of taking Elementary OS Loki back to a default state. Although Resetter is in beta, it’s a rather impressive tool. Give it a try and see if you don’t have the same outstanding results I did.
Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.
A range of open-source operating-system solutions are available for those working within scaled-down hardware constraints, though homing in on the best option does require some research.
Linux has become the de facto open-source operating system (OS), although there are niche alternatives like flavors of BSD (Berkeley Software Distribution). A variety of incarnations target minimal memory platforms such as Ubuntu Core/Snappy and Android Things.
Many commercial open-source solutions are available in this space, too, but they all require memory-management-unit (MMU) hardware for virtual memory.
Right now I’m working on finishing up a zine about perf that I started back in May, and I’ve been struggling with how to explain all there is to say about perf in a concise way. Yesterday I finally hit on the idea of making a 1-page cheat sheet reference which covers all of the basic perf command line arguments.
All the examples in this cheat sheet are taken (with permission) from http://brendangregg.com/perf.html, which is a fantastic perf reference and has many more great examples.
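As a taste of what the cheat sheet covers, here are a few of the most common invocations (all of these appear, with much more context, on Brendan Gregg's perf page):

```shell
# Sample on-CPU stack traces system-wide at 99 Hertz for 10 seconds
sudo perf record -F 99 -a -g -- sleep 10

# Summarize the samples just recorded (reads perf.data in the
# current directory)
sudo perf report

# CPU counter statistics (cycles, instructions, etc.) for one command
sudo perf stat ls

# Live, top-like view of the hottest functions system-wide
sudo perf top
```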
Kata Containers were one of the exciting announcements from this year’s KubeCon. See how they work and how the makers are working with the community.
On Dec. 5, when the enthusiastic container community was getting ready for KubeCon, the OpenStack Foundation renewed its long-standing friendship with the container world, announcing a new effort called Kata Containers, which aims to unify the speed and manageability of containers with the security advantages of virtual machines (VMs).
Regardless of how you decide to begin, it’s time to start learning Kubernetes.
If you read Kubernetes’ description (“an open source system for automating deployment, scaling, and management of containerized applications”), you may think that getting started with Kubernetes is quite a daunting feat. But there are a number of great resources out there that make it easier to learn this container orchestration system.
Before we dive in, let’s examine where Kubernetes started. In the Unix world, containers have been in use for a very long time, and today Linux containers are popular thanks to projects like Docker. Google created Process Containers in 2006; when it later realized it needed a way to maintain all those containers, it created Borg as an internal Google project. Many tools sprang from Borg’s users, including Omega, which was built by iterating on Borg. Omega maintained cluster state separately from the cluster members, thus breaking up Borg’s monolith. Finally, Kubernetes sprang from Google, and it is now maintained by the Cloud Native Computing Foundation’s members and contributors.
Multi-access Edge Computing (MEC) is quickly gaining traction as a disruptive technology that promises to bring applications and content closer to the network edge. It is also expected to reduce latency in networks and make new services possible.
Analyst Iain Gillott of iGR Research says that he expects MEC to be as disruptive to the market as 5G and software-defined networking (SDN). … There currently is no MEC standard; however, several projects are working on standards and trying to bring some order to this burgeoning technology.
Here are four groups that are involved in MEC that are worth watching:
In the previous article, we talked about what containers are and how they breed innovation and help companies move faster. And, in the following articles in this series, we will discuss how to use them. Before we dive more deeply into the topic, however, we need to understand some of the terms and commands used in the container world. Without a confident grasp of this terminology, things could get confusing.
Let’s explore some of the basic terms used in the Docker container world.
Container: What exactly is a container? It’s a runtime instance of a Docker image, consisting of a Docker image, an execution environment, and a standard set of instructions. It’s totally isolated from the system, so multiple containers can run on the same system, completely oblivious of each other. You can replicate multiple containers from the same image to scale the service when demand is high, and nuke those containers when demand is low.
Docker Image: This is no different from the image of a Linux distribution that you download. It’s a package of dependencies and information for creating, deploying, and executing a container. You can spin up as many containers as you want in a matter of seconds. Each behaves exactly the same. Images are built on top of one another in layers. Once an image is created, it doesn’t change. If you want to make changes to your container, you simply create a new image and deploy new containers from that image.
Repository (repo): Linux users will already be familiar with the term repository; it’s a reservoir where packages are stored that can be downloaded and installed on a system. In the context of Docker containers, the only difference is that it hosts Docker images, which are categorized via tags. You can have different variants of the same application or different versions, all tagged appropriately.
Registry: Think of this as like GitHub. It’s an online service that hosts and provides access to repositories of Docker images. Docker Hub, for example, is the default registry for public images. Vendors can upload their repositories to Docker Hub so that their customers can download and use official images. Some companies offer their own registries for their images. Registries don’t have to be run and managed by third-party vendors; organizations can have on-prem registries to manage organization-wide access to repositories.
Tag: When you create a Docker image, you can tag it appropriately so that different variants or versions can be easily identified. It’s no different from what you see in any software package. Docker images are tagged when you add them to the repository.
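Each of these terms maps directly to something you can see from the Docker CLI. A minimal sketch (the image names, tags, and repository names below are arbitrary examples):

```shell
# Pull an image from the default registry (Docker Hub);
# "ubuntu" is the repository, "22.04" is the tag
docker pull ubuntu:22.04

# Start a container -- a runtime instance of that image
docker run -it --name demo ubuntu:22.04 /bin/bash

# Tag the same image under a different repository/variant name
docker tag ubuntu:22.04 myrepo/ubuntu:custom

# List local images and running containers
docker images
docker ps
```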
Now that you have an understanding of the basics, the next phase is understanding the terminology used when working with actual Docker containers.
Dockerfile: This is a text file that contains the commands you would otherwise execute manually in order to build a Docker image. Docker uses these instructions to build images automatically.
Build: This is the process of creating an image from a Dockerfile.
Push: Once the image is created, “push” is the process to publish that image to the repository. The term is also used as part of a command that we will learn in the next articles.
Pull: A user can retrieve that image from a repository through the “pull” process.
Compose: Most complex applications comprise more than one container. Compose is a command-line tool that’s used to run a multi-container application. It allows you to run a multi-container application with one command. It eases the headache of running multiple containers needed for that application.
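To tie the last four terms together, here is a hedged end-to-end sketch; the image and repository names are made up for illustration:

```shell
# A Dockerfile (a text file of build instructions) might contain:
#   FROM ubuntu:22.04
#   RUN apt-get update && apt-get install -y nginx
#   CMD ["nginx", "-g", "daemon off;"]

docker build -t myrepo/myapp:1.0 .   # Build: create an image from the Dockerfile
docker push myrepo/myapp:1.0         # Push: publish the image to a registry
docker pull myrepo/myapp:1.0         # Pull: retrieve it on another machine
docker-compose up                    # Compose: start the multi-container app
                                     # defined in docker-compose.yml
```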
Conclusion
The scope of container terminology is massive, but these are some basic terms that you will frequently encounter. The next time you see these terms, you will know exactly what they mean. In the next article, we will get started working with Docker containers.
Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.
The JAMstack approach to web development has been emerging for several years, but really took off in 2017. More a design philosophy than an explicit framework, JAMstack takes the concept of static, database-free websites to the next level via an architecture advocates are calling “the future of the internet.”
Which only makes sense. Browsers themselves have essentially become mini operating systems capable of running complex client-side applications while interacting with myriad APIs. Meanwhile, with the help of Node.js and npm, JavaScript has leaped the divide between front end and back end to enable real-time, two-way communication between client and server. JAMstack simply harnesses these factors in a logical and effective way.
(A word on static vs. dynamic site architecture: static in this context refers to how websites are built, powered and served, which in no way means that a static site lacks interactivity.)
The USB over IP kernel driver allows a server system to export its USB devices to a client system over an IP network via the USB over IP protocol. Exportable USB devices include physical devices and software entities created on the server using the USB gadget subsystem. This article covers a major bug related to USB over IP in the Linux kernel that was recently uncovered; it created some significant security issues but was resolved with help from the kernel community.
The Basics of the USB Over IP Protocol
There are two USB over IP server kernel modules:
usbip-host (stub driver): A stub USB device driver that can be bound to physical USB devices to export them over the network.
usbip-vudc: A virtual USB Device Controller that exports a USB device created with the USB Gadget Subsystem.
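In practice, the stub driver is exercised through the usbip userspace tool. A typical session looks like the following; the bus ID (1-2) and server address are examples only:

```shell
# --- On the server ---
sudo modprobe usbip-host         # load the stub driver
sudo usbipd -D                   # start the USB/IP daemon in the background
usbip list -l                    # list local devices and their bus IDs
sudo usbip bind -b 1-2           # bind bus ID 1-2 to usbip-host, exporting it

# --- On the client ---
sudo modprobe vhci-hcd           # virtual host controller that receives devices
usbip list -r 192.168.1.10       # list devices exported by the server
sudo usbip attach -r 192.168.1.10 -b 1-2   # attach the remote device locally
```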