
Containers, the GPL, and Copyleft: No Reason for Concern

Though open source is thoroughly mainstream, new software technologies and old technologies that get newly popularized sometimes inspire hand-wringing about open source licenses. Most often the concern is about the GNU General Public License (GPL), and specifically the scope of its copyleft requirement, which is often described (somewhat misleadingly) as the GPL’s derivative work issue.

One imperfect way of framing the question is whether GPL-licensed code, when combined in some sense with proprietary code, forms a single modified work such that the proprietary code could be interpreted as being subject to the terms of the GPL. While we haven’t yet seen much of that concern directed to Linux containers, we expect more questions to be raised as adoption of containers continues to grow. But it’s fairly straightforward to show that containers do not raise new or concerning GPL scope issues.

Read more at OpenSource.com

How to Fix the Docker and UFW Security Flaw

If you use Docker on Linux, chances are your firewall duties are relegated to Uncomplicated Firewall (UFW). If that’s the case, you may not know this, but the combination of Docker and UFW poses a bit of a security issue. Why? Because Docker bypasses UFW and directly alters iptables, so that a container can bind to a port. This means all those UFW rules you have set won’t apply to Docker containers.

Let me demonstrate this.

I’m going to set up UFW (running on Ubuntu Server 16.04), so that the only thing it will allow through is SSH traffic. To do this, I open a terminal and issue the following commands:

sudo ufw allow ssh
sudo ufw default deny incoming
sudo ufw enable
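The fix the full article works toward boils down to telling Docker not to write iptables rules itself, so published ports have to pass through the chains UFW manages. A minimal sketch of that change on Ubuntu 16.04 (the file path and option shown are the commonly documented ones for this setup; newer installs typically set "iptables": false in /etc/docker/daemon.json instead):

```shell
# /etc/default/docker -- stop Docker from manipulating iptables directly.
# With this set, ports published via -p are no longer opened behind UFW's back.
DOCKER_OPTS="--iptables=false"
```

After saving the file, restart the daemon (sudo systemctl restart docker) and open container ports explicitly, e.g. sudo ufw allow 8080/tcp. The trade-off is that Docker no longer sets up its own NAT rules, so outbound masquerading for containers may need manual configuration.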

Read more at TechRepublic

Linux Kernel 4.15: ‘An Unusual Release Cycle’

Linus Torvalds released version 4.15 of the Linux kernel on Sunday, once again a week later than scheduled, making this the second release in a row to slip. The culprits for the late release were the Meltdown and Spectre bugs, as these two vulnerabilities forced developers to submit major patches well into what should have been the last cycle. Torvalds was not comfortable rushing the release, so he gave it another week.

Unsurprisingly, the first big bunch of patches worth mentioning were those designed to sidestep Meltdown and Spectre. To avoid Meltdown, a problem that affects Intel chips, developers have implemented Page Table Isolation (PTI) for the x86 architecture. If for any reason you want to turn this off, you can use the pti=off kernel boot option.

Spectre v2 affects both Intel and AMD chips and, to avoid it, the kernel now comes with the retpoline mechanism. Retpoline requires a version of GCC that supports the -mindirect-branch=thunk-extern functionality. As with PTI, the Spectre-inhibiting mechanism can be turned off. To do so, use the spectre_v2=off option at boot time. Although developers are working to address Spectre v1, at the moment of writing there is still no solution, so there is no patch for this bug in 4.15.

The solution for Meltdown on ARM has also been pushed to the next development cycle, but there is a remedy for the bug on PowerPC, with the RFI flush of L1-D cache feature included in this release.

An interesting side effect of all of the above is that new kernels now come with a /sys/devices/system/cpu/vulnerabilities/ virtual directory. This directory shows the vulnerabilities affecting your CPU and the mitigations currently applied.
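You can read that directory with a short loop; a sketch (file names and mitigation strings vary by CPU and kernel build):

```shell
# Print each known vulnerability and its mitigation status.
# On kernels older than 4.15 the directory simply does not exist.
for f in /sys/devices/system/cpu/vulnerabilities/*; do
    [ -e "$f" ] || { echo "no vulnerabilities directory (kernel < 4.15?)"; break; }
    printf '%-12s %s\n' "$(basename "$f"):" "$(cat "$f")"
done
```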

The issues with buggy chips (and the manufacturers that keep things like this secret) have revived the call for the development of viable open source alternatives. This brings us to the partial support for RISC-V chips that has now been merged into the mainline kernel. RISC-V is an open instruction set architecture that allows manufacturers to create their own implementations of RISC-V chips, and it has already resulted in several open source chips. While RISC-V chips are currently used mainly in embedded devices, powering things like smart hard disks or Arduino-like development boards, RISC-V proponents argue that the architecture is also well-suited for use on personal computers and even in multi-node supercomputers.

The support for RISC-V, as mentioned above, is still incomplete and includes the architecture code but no device drivers. This means that, although a Linux kernel will run on RISC-V, there is no significant way to actually interact with the underlying hardware. That said, RISC-V is not vulnerable to any of the bugs that have dogged other closed architectures, and development of its support is progressing at a brisk pace, as the RISC-V Foundation has the support of some of the industry’s biggest heavyweights.

Other stuff that’s new in kernel 4.15

Torvalds has often declared he likes things boring. Fortunately for him, he says, apart from the Spectre and Meltdown messes, most of the other things that happened in 4.15 were very much run of the mill, such as incremental improvements for drivers, support for new devices, and so on. However, there were a few more things worth pointing out:

  • AMD got support for Secure Encrypted Virtualization. This allows the kernel to fence off the memory a virtual machine is using by encrypting it. The encrypted memory can only be decrypted by the virtual machine that is using it. Not even the hypervisor can see inside it. This means that data being worked on by VMs in the cloud, for example, is safe from being spied on by any other process outside the VM.
  • AMD GPUs get a substantial boost thanks to the inclusion of display code. This gives mainline support to Radeon RX Vega and Raven Ridge cards and also implements HDMI/DP audio for AMD cards.
  • Raspberry Pi aficionados will be glad to know that the 7” touchscreen is now natively supported, which is guaranteed to lead to hundreds of fun projects.

To find out more, you can check out the write-ups at Kernel Newbies and Phoronix.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

Index: A Focus on the Future of Code and Community

One of the most significant challenges developers face is keeping up with the increasingly rapid pace of change in our industry. With each new innovation comes a new crop of vendors and best practices, and staying on top of your game can become a second profession in itself.

Cloud, containers, data, analytics, IoT, AI, machine learning, serverless architecture, blockchain: Behind all of these rapidly evolving technologies are the programming languages and developers who are leading the charge into the next era of innovation.

An ideal way for developers to understand all this is through conversations with other developers. We believe conversation about development—like innovation itself—is best when it happens in the open. This idea was the catalyst for Index, a first-of-its-kind, developer-focused event that will take place in San Francisco Feb. 20-22 at Moscone West.

Read more at IBM developerWorks

A Look Inside Facebook’s Open Source Program

Open source becomes more ubiquitous every year, appearing everywhere from government municipalities to universities. Companies of all sizes are also increasingly turning to open source software. In fact, some companies are taking open source a step further by supporting projects financially or working with developers.

Facebook’s open source program, for example, encourages others to release their code as open source, while working and engaging with the community to support open source projects.

Read more at OpenSource.com

Q&A on Machine Learning and Kubernetes with David Aronchick of Google from Kubecon 2017

At the recently concluded Kubecon in Austin, TX, attended by over 4,000 engineers, Kubernetes was front, left, and center. Due to the nature of the workloads and the typically heavy compute requirements of training algorithms, machine learning and its synergy with Kubernetes were discussed in many sessions.

Kubeflow is a platform for making Machine Learning on Kubernetes easy, portable and scalable by providing manifests for creating:

  • A JupyterHub to create and manage Jupyter notebooks
  • Tensorflow training controller to adapt for both CPUs and GPUs, and
  • A Tensorflow serving container

Read more at InfoQ

Why You Should Care About Diversity and Inclusion

Aubrey Blanche, Global Head of Diversity and Inclusion at Atlassian, joins us in this latest edition of The New Stack Makers podcast to talk about the difference between diversity and inclusion and why anyone should care.

“Diversity is being invited to the party,” she said. “Inclusion is being glad you’re there.”

When you create an inclusive culture, Blanche explained, business thrives. Employees who feel comfortable bringing their authentic selves to work perform better and are happier at work, which leads to less turnover, which leads to greater profits.

Read more at The New Stack

How to Use DockerHub

In the previous articles, we learned the basics of Docker terminology, how to install Docker on desktop Linux, macOS, and Windows, and how to create container images and run them on your system. In this last article in the series, we will talk about using images from DockerHub and publishing your own images to DockerHub.

First things first: what is DockerHub and why is it important? DockerHub is a cloud-based repository run and managed by Docker Inc. It’s an online repository where Docker images  can be published and used by other users. There are both public and private repositories. If you are a company, you can have a private repository for use within your own organization, whereas public images can be used by anyone.

You can also use official Docker images that are published publicly. I use many such images, including for my test WordPress installations, KDE plasma apps, and more. Although we learned last time how to create your own Docker images, you don’t have to. There are thousands of images published on DockerHub for you to use. DockerHub is hardcoded into Docker as the default registry, so when you run the docker pull command against any image, it will be downloaded from DockerHub.

Download images from Docker Hub and run locally

Please check out the previous articles in the series to get started. Then, once you have Docker running on your system, you can open the terminal and run:

$ docker images

This command will show all the docker images currently on your system. Let’s say you want to deploy Ubuntu on your local machine; you would do:

$ docker pull ubuntu

If you already have the Ubuntu image on your system, the command will automatically update that image to the latest version. So, if you want to update your existing images, just run the docker pull command, easy peasy. It’s like apt-get upgrade without any muss and fuss.

You already know how to run an image:

$ docker run -it <image name>

$ docker run -it ubuntu

The command prompt should change to something like this:

root@1b3ec4621737:/#

Now you can run any command and utility that you use on Ubuntu. It’s all safe and contained. You can run all the experiments and tests you want on that Ubuntu. Once you are done testing, you can nuke the image and download a new one. There is no system overhead that you would get with a virtual machine.

You can exit that container by running the exit command:

$ exit

Now let’s say you want to install Nginx on your system. Run search to find the desired image:

$ docker search nginx


As you can see, there are many images of Nginx on DockerHub. Why? Because anyone can publish an image. Various images are optimized for different projects, so you can choose the appropriate one for your use case.

Let’s say you want to pull Bitnami’s Nginx container:

$ docker pull bitnami/nginx

Now run it with:

$ docker run -it bitnami/nginx

How to publish images to Docker Hub?

Previously, we learned how to create a Docker image, and we can easily publish that image to DockerHub. First, you need to log into DockerHub. If you don’t already have an account, please create one. Then, you can open a terminal and log in:

$ docker login --username=<USERNAME>

Replace <USERNAME> with the name of your username for Docker Hub. In my case it’s arnieswap:

$ docker login --username=arnieswap

Enter the password, and you are logged in. Now run the docker images command to get the ID of the image that you created last time.

$ docker images


Now, suppose you want to push the ng image to DockerHub. First, we need to tag that image (learn more about tags):

$ docker tag e7083fd898c7 arnieswap/my_repo:testing

Now push that image:

$ docker push arnieswap/my_repo

The push refers to repository [docker.io/arnieswap/my_repo]

12628b20827e: Pushed

8600ee70176b: Mounted from library/ubuntu

2bbb3cec611d: Mounted from library/ubuntu

d2bb1fc88136: Mounted from library/ubuntu

a6a01ad8b53f: Mounted from library/ubuntu

833649a3e04c: Mounted from library/ubuntu

testing: digest: sha256:286cb866f34a2aa85c9fd810ac2cedd87699c02731db1b8ca1cfad16ef17c146 size: 1569

Eureka! Your image is being uploaded. Once finished, open DockerHub, log into your account, and you can see your very first Docker image. Now anyone can deploy your image. It’s the easiest and fastest way to develop and distribute software. Whenever you update the image, users can simply run:

$ docker run arnieswap/my_repo

Now you know why people love Docker containers. They solve many problems that traditional workloads face and allow you to develop, test, and deploy applications in no time. And, by following the steps in this series, you can try them out for yourself.


CNCF to Host the Rook Project to Further Cloud-Native Storage Capabilities

Today, the Cloud Native Computing Foundation (CNCF) Technical Oversight Committee (TOC) voted to accept Rook as the 15th hosted project alongside Kubernetes, Prometheus, OpenTracing, Fluentd, Linkerd, gRPC, CoreDNS, containerd, rkt, CNI, Envoy, Jaeger, Notary and TUF.

Rook has been accepted as an inception-level project, under the CNCF Graduation Criteria v1.0. The CNCF provides every project an associated maturity level of either inception, incubating or graduated. At a minimum, an inception-level project is required to add value to cloud native computing and be aligned with the CNCF charter.

Rook brings File, Block and Object storage systems into the Kubernetes cluster, running them seamlessly alongside other applications and services that are consuming the storage. By doing so, the cloud-native cluster becomes self-sufficient and portable across public cloud and on-premise deployments. The project has been developed to enable organizations to modernize their data centers with dynamic application orchestration for distributed storage systems running in on-premise and public cloud environments.

“Storage is one of the most important components of cloud native computing, yet persistent storage systems typically run outside the cloud native environments today,” said Chris Aniszczyk, COO of Cloud Native Computing Foundation. “Rook was one of the early adopters of the Kubernetes operator pattern and we’re excited to bring in Rook as an inception level project to advance the state of cloud native storage.”

Instead of building an entirely new storage system, which requires many years to mature, Rook focuses on turning existing battle-tested storage systems like Ceph into a set of cloud-native services that run seamlessly on top of Kubernetes. Rook integrates deeply into Kubernetes, providing a seamless experience for security, policies, quotas, lifecycle management, and resource management.

In this Software Engineering Daily podcast, Bassam Tabbara, CEO of Upbound and creator of Rook, said: “Rook is essentially using the operator pattern to extend Kubernetes to support storage systems. We’ve added a concept of a storage cluster, a storage pool, an object store and a file system. Those are all new abstractions that we’ve used to extend Kubernetes”

An alpha version of Rook (release 0.6) is available now, with beta and production-ready versions to follow in the first half of 2018.

Main features:

  • Software-defined storage running on commodity hardware
  • File, block and object storage presentations integrated with Ceph
  • Hyper-scale or hyper-converged storage options
  • Elastic storage that can easily scale up or down
  • Zero-touch management
  • Integrated data protection with snapshot, cloning and versioning
  • Deployable on Kubernetes.

The latest release of Kubernetes 1.9 introduced a CSI alpha implementation that makes installing new volume plugins as easy as deploying a pod, and enables third-party storage providers to develop their solutions without adding to the core Kubernetes codebase. Rook will expose storage through CSI to Kubernetes.

“It’s a natural fit to run a storage cluster on Kubernetes. It makes perfect sense to bring it into the fold and keep the unified management interface,” said Dan Kerns, Senior Director at Quantum, the initial sponsor of the Rook project. “With Rook, we wanted to create a software-defined storage cluster that could run really well in modern cloud-native environments, and the storage cluster becomes even more resilient with an orchestrator like Kubernetes.”

Community support for Rook is growing rapidly as companies and users deploy Rook in their cloud-native environments (on-premise and public cloud). Companies and organizations like HBO, UCSD Nautilus Project, Norwegian Welfare, Verne Global, FlexShopper, and Acaleph have implemented Rook as part of their storage platforms.

Notable Milestones:

  • 47 contributors
  • 1,935 GitHub stars
  • 13 releases
  • 1,463 commits
  • 1.25M+ container downloads

“We used Rook underneath our Prometheus servers at HBO, running on Kubernetes and deployed on AWS,” said Illya Chekrygin, former senior staff engineer at HBO and founding member of Upbound. “Rook made a significant improvement on the Prometheus pod restart time, virtually eliminating downtime and metrics scrape gaps. We are looking forward to Rook being in a production ready state.”

As a CNCF hosted project, Rook will be part of a neutral foundation aligned with technical interests, receive help with project governance and be provided marketing support to reach a wider audience.

“Operating storage in cloud-native environments is a significantly more difficult task than stateless containers,” said Benjamin Hindman, co-founder of Mesosphere and CNCF TOC representative and project sponsor. “We’re thrilled to have Rook as the first CNCF inception project that begins to address the difficult problem of storage orchestration.”

For more read the Rook blog, Quantum’s recent announcement on the momentum of the project, Upbound’s blog, and listen to The New Stack’s Makers Podcast or Software Engineering Daily featuring Bassam Tabbara discussing Rook and Storage on Kubernetes.

This article originally appeared at Cloud Native Computing Foundation.

What Happens When You Want to Create a Special File with All Special Characters in Linux?

I recently joined Holberton School as a student, hoping to learn full-stack software development. What I did not expect was that in two weeks I would be pretty much proficient with creating shell scripts that would make my coding life easy and fast!

So what is this post about? It is about a novel problem that my peers and I faced when we were asked to create a file whose name contained no regular letters or numbers, only special characters! Just to give you a look at the kind of file name we were dealing with —

*\'"Holberton School"'\*$?*****:)

What a novel file name! Of course, this question was met with the collective groaning and long drawn sighs of all 55 (batch #5) students!


Some proceeded to make their lives easier by breaking the file name into pieces in a doc file and adding a "\" in front of each special character, which resulted in something like this format –

\*\\'"Holberton School"\'\\*$\?\*\*\*\*\*:)


Everyone trying to get the \ right

Bamboozled? Me, too! I did not want to believe that this was the only way to solve this, as I was getting frustrated with every "\" that was required to escape those special characters and print them as normal characters!

If you’re new to shell scripting, here is a quick walkthrough of why so many "\" were required, and where.

In shell scripting, double quotes (" ") and single quotes (' ') have special usage, and once you understand and remember when and where to use them, they can make your life easier!

Double Quoting

The first type of quoting we will look at is double quotes. If you place text inside double quotes, all the special characters used by the shell lose their special meaning and are treated as ordinary characters. The exceptions are "$", "\" (backslash), and "`" (back-quote). This means that word-splitting, pathname expansion, tilde expansion, and brace expansion are suppressed, but parameter expansion, arithmetic expansion, and command substitution are still carried out. Using double quotes, we can cope with filenames containing embedded spaces.

So this means that you can create files with names that have spaces between words, if that is your thing, but I would suggest you not do that, as it is inconvenient and rather unpleasant to try to find that file when you need it!

Quoting "THE" guide for Linux that I follow and read like it is the Harry Potter of the Linux coding world —

Say you were the unfortunate victim of a file called two words.txt. If you tried to use this on the command line, word-splitting would cause this to be treated as two separate arguments rather than the desired single argument:

[me@linuxbox me]$ ls -l two words.txt

ls: cannot access two: No such file or directory
ls: cannot access words.txt: No such file or directory

By using double quotes, you can stop the word-splitting and get the desired result; further, you can even repair the damage:

[me@linuxbox me]$ ls -l "two words.txt"
-rw-rw-r-- 1 me me 18 2008-02-20 13:03 two words.txt
[me@linuxbox me]$ mv "two words.txt" two_words.t

There! Now we don’t have to keep typing those pesky double quotes.
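One thing the example above does not show is the other half of the rule: parameter expansion, arithmetic expansion, and command substitution still run inside double quotes, while globbing and word-splitting stay suppressed. A quick sketch:

```shell
name="world"

# Expansions still happen inside double quotes:
echo "hello $name"          # -> hello world
echo "sum: $((2 + 2))"      # -> sum: 4
echo "cmd: $(echo demo)"    # -> cmd: demo

# But the glob character is literal, and spaces do not split words:
echo "*"                    # -> *
printf '<%s>\n' "two words" # -> <two words>
```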

Now, let us talk about single quotes and their significance in the shell —

Single Quotes

Enclosing characters in single quotes ('') preserves the literal value of each character within the quotes. A single quote may not occur between single quotes, even when preceded by a backslash.

Yes! That got me, and I was wondering how I would use it. Apparently, when I was googling to find an easier way to do it, I stumbled across this piece of information on the internet —

Strong quoting

Strong quoting is very easy to explain:

Inside a single-quoted string nothing is interpreted, except the single-quote that closes the string.

echo 'Your PATH is: $PATH'

$PATH won’t be expanded, it’s interpreted as ordinary text because it’s surrounded by strong quotes.

In practice that means to produce a text like Here's my test… as a single-quoted string, you have to leave and re-enter the single quoting to get the character “'” as literal text:

# WRONG
echo 'Here's my test...'
# RIGHT
echo 'Here'\''s my test...'
# ALTERNATIVE: It's also possible to mix-and-match quotes for readability:
echo "Here's my test"

Well, now you’re wondering: "that explains the quotes, but what about the "\"?"

So for certain characters, we need a special way to escape those pesky "\" we saw in that file name.

Escaping Characters

Sometimes you only want to quote a single character. To do this, you can precede a character with a backslash, which in this context is called the escape character. Often this is done inside double quotes to selectively prevent an expansion:

[me@linuxbox me]$ echo "The balance for user $USER is: \$5.00"
The balance for user me is: $5.00

It is also common to use escaping to eliminate the special meaning of a character in a filename. For example, it is possible to use characters in filenames that normally have special meaning to the shell. These would include "$", "!", "&", " " (space), and others. To include a special character in a filename, you can do this:

[me@linuxbox me]$ mv bad\&filename good_filename

To allow a backslash character to appear, escape it by typing "\\". Note that within single quotes, the backslash loses its special meaning and is treated as an ordinary character.
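Putting the escapes together, a small runnable sketch (printf is used for the bare backslash because echo’s backslash handling varies between shells):

```shell
# Escape $ inside double quotes so it prints literally:
echo "The balance is: \$5.00"   # -> The balance is: $5.00

# A literal backslash is produced by escaping the escape character itself:
printf '%s\n' \\                # -> \

# Escaping lets special characters appear in filenames:
touch bad\&filename             # creates a file literally named bad&filename
ls bad\&filename
rm bad\&filename
```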

Looking at the filename now, we can understand better why the "\" were used in front of all those special characters.

So, to print the file name without losing "\" and the other special characters, what others did was escape each special character with "\". To print the single quotes, there are a few ways you can do that:

1. echo $'It\'s Shell Programming'   # ksh, bash, and zsh only, does not expand variables
2. echo "It's Shell Programming"     # all shells, expands variables
3. echo 'It'\''s Shell Programming'  # all shells, single quote is outside the quotes
4. echo 'It'"'"'s Shell Programming' # all shells, single quote is inside double quotes
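All four variants print the same text. A sketch you can run to confirm (option 1’s $'…' quoting needs ksh, bash, or zsh; watch where each backslash sits):

```shell
#!/usr/bin/env bash
expected="It's Shell Programming"

one=$(echo $'It\'s Shell Programming')     # 1: ANSI-C quoting
two=$(echo "It's Shell Programming")       # 2: double quotes
three=$(echo 'It'\''s Shell Programming')  # 3: leave and re-enter single quoting
four=$(echo 'It'"'"'s Shell Programming')  # 4: quote the quote inside double quotes

[ "$one" = "$expected" ] && [ "$two" = "$expected" ] &&
  [ "$three" = "$expected" ] && [ "$four" = "$expected" ] && echo "all match"
```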

Looking at option 3, I realized this would mean that I would only need to use "\" and single quotes in certain places to be able to write the whole file name without getting frustrated with "\" placements.

So, with that hope in mind and fewer trials and errors, I was actually able to print out the file name like this:

'*\'\''"Holberton School"'\''\*$?*****:)'

To understand this better, I have replaced my single quotes with an "a" so that the file name and the process become clearer. For a better understanding, I’ll break them down into modules:


a*\a \' a"Holberton School"a \' a\*$?*****:)a

Module 1 — a*\a

Here the use of single quotes (a) creates a safe suppression for *\ and, as mentioned before in strong quoting, the only way we can print the ' is to leave and re-enter the single quoting to get the character.

Module 2, 4 — \'

The \ escapes the single quote as a standalone module.

Module 3 — a"Holberton School"a

Here the use of single quotes (a) creates a safe suppression for the double quotes along with the regular text.

Module 5 — a\*$?*****:)a

Here the use of single quotes (a) creates a safe suppression for all the special characters being used, such as *, \, $, ?, : and ).

So, in the end, I was able to be lazy, maintain my sanity, and get away with only using single quotes to create small modules and "\" in certain places.
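You can check that the quoting really produces the target name by creating the file and listing it; a quick sketch, done in a scratch directory:

```shell
# Build the file name from the modules above and confirm it exists.
cd "$(mktemp -d)"
touch '*\'\''"Holberton School"'\''\*$?*****:)'
ls -b                                            # lists the newly created file
[ -e '*\'\''"Holberton School"'\''\*$?*****:)' ] && echo "file created"
```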


And, that is how I was able to get the file to work right! After a few misses, it felt amazing and it was great to learn a new way to do things!


Handled that curve-ball pretty well! Hope this helps you in the future when, someday, you might need to create a special file for a special reason in the shell!

Mitali Sengupta is a former digital marketing professional, currently enrolled as a full-stack engineering student at Holberton School. She is passionate about innovation in AI and Blockchain technologies. You can contact Mitali on Twitter, LinkedIn, or GitHub.