
AWS Lambda and the Spectrum of Compute

Amazon fully understand the reality of the compute spectrum, but they are also completely focused on making it easier and easier to begin new development projects on Lambda for a wide variety of scenarios. This makes perfect sense: as we noted previously, Serverless is volume compute for a new generation of applications, with significant upside for the providers in usage of adjacent services, and it is also an efficient disruptor of established processes.

Additionally, by expanding the various entry points to the Serverless paradigm for developers, via routes such as AWS DeepLens, AWS Greengrass, and so forth, Amazon are focusing minds on the end product required rather than solving for the underlying operational complexity.

Read more at RedMonk

General Data Protection Regulation: A Checklist to Compliance

The General Data Protection Regulation (GDPR) is perhaps the most sweeping data privacy law in history. Within its nearly 100 articles, it outlines new requirements for organizations that have access to the personal information of European Union (EU) citizens, giving average consumers far more power over how their data is used.

Failure to comply will mean heavy fines of up to €20 million (approximately $24 million) or 4% of a company's global annual revenue, whichever is greater.

Despite the passing of this regulation in 2016, many businesses still don’t consider it a priority. This is particularly true of U.S.-based organizations, some of which don’t even realize they’re required to comply.

Read more at HPE

Linux Kernel Developer: Julia Lawall

A kernel that has had nearly 83,000 patches applied will certainly have a few bugs introduced along with the new features, states the 2017 Linux Kernel Development Report, written by Jonathan Corbet and Greg Kroah-Hartman.

To find and report those bugs, Linux kernel developers depend on a wide community of testers. And, according to convention, when a bug-fixing patch is applied to the kernel, it should contain a "Reported-by" tag to credit the tester who found the problem. During the period covered by the most recent report, more than 4,100 patches carried such tags, and the report lists the top 21 bug reporters.

Read more at The Linux Foundation

Securing the Linux Filesystem with Tripwire

Linux users need to know how to protect their servers or personal computers from destruction, and the first step they need to take is to protect the filesystem.

In this article, we'll look at Tripwire, an excellent tool for protecting Linux filesystems. Tripwire is an integrity checking tool that enables system administrators, security engineers, and others to detect alterations to system files. Although it's not the only option available (AIDE and Samhain offer similar features), Tripwire is arguably the most commonly used integrity checker for Linux system files, and it is available as open source software.
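
As a rough sketch of the typical workflow (package names, key setup, and policy file locations vary by distribution), you initialize a baseline database once and then run periodic checks against it:

$ sudo apt-get install tripwire     # Debian/Ubuntu package name; other distros differ
$ sudo tripwire --init              # build a baseline database of the current filesystem state
$ sudo tripwire --check             # later: report any files that have changed since the baseline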

Read more at OpenSource.com

Meltdown and Spectre Fallout Leads to First RC9 of a Linux Kernel Since 2011

In an almost unprecedented move, Linus Torvalds has delayed the release of a final build of the Linux kernel 4.15, instead announcing an unusual ninth release candidate, the first time he has felt the need to do so since 2011.

And you can be fairly sure that Torvalds is not happy with the release, as everyone has been busy dealing with the fallout from Meltdown and Spectre, even though the impact on Linux is minimal.

Read more at The Inquirer

How to Create a Docker Image

In the previous article, we learned about how to get started with Docker on Linux, macOS, and Windows. In this article, we will get a basic understanding of creating Docker images. There are prebuilt images available on DockerHub that you can use for your own project, and you can publish your own image there.

We are going to use prebuilt images to get the base Linux subsystem, as it’s a lot of work to build one from scratch. You can get Alpine (the official distro used by Docker Editions), Ubuntu, BusyBox, or scratch. In this example, I will use Ubuntu.
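
You don't have to download the base image manually; docker build will pull it automatically the first time it's needed, but if you prefer you can fetch it ahead of time:

$ docker pull ubuntu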

Before we start building our images, let’s “containerize” them! By this I just mean creating directories for all of your Docker images so that you can maintain different projects and stages isolated from each other.

$ mkdir dockerprojects

$ cd dockerprojects

Now create a Dockerfile inside the dockerprojects directory using your favorite text editor; I prefer nano, which is also easy for new users.

$ nano Dockerfile

And add this line:

FROM ubuntu


Save it with Ctrl+X, then Y.

Now create your new image and provide it with a name (run these commands within the same directory):

$ docker build -t dockp .

(Note the dot at the end of the command.) This should build successfully, so you’ll see:

Sending build context to Docker daemon  2.048kB

Step 1/1 : FROM ubuntu

---> 2a4cca5ac898

Successfully built 2a4cca5ac898

Successfully tagged dockp:latest

It’s time to run and test your image:

$ docker run -it ubuntu

You should see root prompt:

root@c06fcd6af0e8:/# 

This means you are literally running a bare-minimum Ubuntu environment inside Linux, Windows, or macOS. You can run all native Ubuntu commands and CLI utilities.
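
For example (these particular commands are just an illustration), you can check the release information, refresh the package index, and then exit back to your host shell:

root@c06fcd6af0e8:/# cat /etc/os-release
root@c06fcd6af0e8:/# apt-get update
root@c06fcd6af0e8:/# exit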


Let’s check all the Docker images you have in your directory:

$ docker images


REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE

dockp               latest              2a4cca5ac898        1 hour ago          111MB

ubuntu              latest              2a4cca5ac898        1 hour ago          111MB

hello-world         latest              f2a91732366c        8 weeks ago         1.85kB

You can see all three images: dockp, ubuntu, and hello-world (which I created a few weeks ago while working on the previous articles of this series). Note that dockp and ubuntu share the same image ID: our one-line Dockerfile added no new layers, so dockp is simply another tag for the same ubuntu image. Building a whole LAMP stack can be challenging, so we are going to create a simple Apache server image with a Dockerfile.

A Dockerfile is basically a set of instructions to install all the needed packages, configure the system, and copy files into the image. In this case, the package we install is Apache.

You may also want to create an account on DockerHub and log into your account before building images, in case you are pulling something from DockerHub. To log into DockerHub from the command line, just run:

$ docker login

Enter your username and password and you are logged in.

Next, create a directory for Apache inside the dockerprojects directory:

$ mkdir apache

Create a Dockerfile inside the apache folder:

$ nano Dockerfile

And paste these lines:

FROM ubuntu

MAINTAINER Kimbro Staken version: 0.1

RUN apt-get update && apt-get install -y apache2 && apt-get clean && rm -rf /var/lib/apt/lists/*


ENV APACHE_RUN_USER www-data

ENV APACHE_RUN_GROUP www-data

ENV APACHE_LOG_DIR /var/log/apache2


EXPOSE 80


CMD ["/usr/sbin/apache2ctl", "-D", "FOREGROUND"]
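
If you later want Apache to serve a page of your own instead of the default one, you could add a COPY instruction to this Dockerfile (the index.html here is a hypothetical file placed next to the Dockerfile; /var/www/html is Ubuntu's default document root):

COPY index.html /var/www/html/index.html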

Then, build the image:

$ docker build -t apache .

(Note the dot after a space at the end.)

It will take some time; then you should see a successful build, like this:

Successfully built e7083fd898c7

Successfully tagged apache:latest

Swapnil:apache swapnil$

Now let’s run the server:

$ docker run -d apache

a189a4db0f7c245dd6c934ef7164f3ddde09e1f3018b5b90350df8be85c8dc98

Eureka! Your container is running. Check all the running containers:

$ docker ps

CONTAINER ID  IMAGE        COMMAND                 CREATED            

a189a4db0f7 apache "/usr/sbin/apache2ctl"  10 seconds ago
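
Note that EXPOSE 80 only declares the port inside the image; it isn't published to the host. If you want to reach Apache from a browser on your machine, you can start a container with the port published (8080 on the host side is just an example), and the default Apache page should then appear at http://localhost:8080:

$ docker run -d -p 8080:80 apache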

You can kill the container with the docker kill command:

$ docker kill a189a4db0f7

So, you see, the image itself is persistent: it stays on your system, while containers are created and thrown away as needed. Now you can create as many images as you want and spin up and tear down as many containers as you need from those images.
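
For instance (purely illustrative), you can start several containers from the same apache image, list them, and then stop and remove the ones you no longer need, replacing <container-id> with an ID shown by docker ps:

$ docker run -d apache
$ docker run -d apache
$ docker ps
$ docker stop <container-id>
$ docker rm <container-id>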

That’s how to create an image and run containers.

To learn more, you can open your web browser and check out the documentation on how to build more complicated Docker images, like the whole LAMP stack. Here is a Dockerfile for you to play with. In the next article, I'll show how to push images to DockerHub.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

Security with the Trusted Platform Module

The Trusted Platform Module on your computer’s motherboard could lead to better security for your Linux system.

The security of any operating system (OS) layer depends on the security of every layer below it. If the CPU can’t be trusted to execute code correctly, there’s no way to run secure software on that CPU. If the bootloader has been tampered with, you cannot trust the kernel that the bootloader boots. Secure Boot allows the firmware to validate a bootloader before executing it, but if the firmware itself has been backdoored, you have no way to verify that Secure Boot functioned correctly.

This problem seems insurmountable: You can only trust the OS to verify that the firmware is untampered with if the firmware itself has not been tampered with. How can you verify the state of a system without having to trust it first?

Read more at Linux Pro

The Meaning of Open

There are a lot of misconceptions about what open means, when it is the right strategy to apply, and the fundamental tradeoffs that go along with it. It’s very easy to cargo-cult the notion of open — using it in an imprecise or half-baked way that can obscure the real dynamics of an ecosystem, or even lead you in the wrong strategic direction. Here are a few important things to know about openness in the context of ecosystems.

Openness is a spectrum. A thing is not “open” or “closed” — it instead exists upon a spectrum of how open the system is.

Read more at HackerNoon

Unix: Dealing with Signals

On Unix systems, there are several ways to send signals to processes: with a kill command, with a keyboard sequence (like Control-C), or from your own program (e.g., using the kill() system call in C). Signals are also generated by hardware exceptions (such as segmentation faults and illegal instructions), by timers, and by child process termination.

But how do you know which signals a process will react to? After all, what a process is programmed to handle and what it is able to ignore is another issue.
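
As a quick illustration (these commands are not from the article itself), on Linux you can list the available signals with kill -l and inspect which signals a particular process is blocking, ignoring, or catching via the Sig* bitmasks in /proc:

$ kill -l                        # list signal names and numbers
$ grep '^Sig' /proc/$$/status    # SigBlk, SigIgn, SigCgt bitmasks for the current shell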

Read more at Network World

Tips for Automating Distributed Logging on Production Kubernetes

Any Kubernetes production environment will rely heavily on logs. Using built-in Kubernetes capabilities along with some additional data collection tools, you can easily automate log collection and aggregation for ongoing analysis of your Kubernetes clusters.

At Kenzan, we typically try to separate out platform logging from application logging. This may be done via very different tooling and applications, or even by filtering and tagging within the logs themselves.
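
On the built-in side, kubectl can already fetch and follow container logs for you; as a minimal sketch (the pod and deployment names below are hypothetical):

$ kubectl logs my-pod
$ kubectl logs -f deployment/my-app
$ kubectl logs my-pod --previous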

Read more at The New Stack