
This Week in Open Source News: Toyota Picks AGL for 2018 Camry, Raspberry Pi Vulnerability & More

This week in open source and Linux news, Toyota’s 2018 Camry will feature an Automotive Grade Linux (AGL) infotainment system, older Raspberry Pi boards are at risk from malware if they haven’t been updated, and more. Read on!

1) Toyota has adopted the Automotive Grade Linux (AGL) platform for its infotainment systems. The 2018 Toyota Camry will be the company’s first vehicle to have it preinstalled.

Toyota Moves to Automotive Grade Linux for Infotainment – BlackBerry Hits Back – IoTNews

2) Older Raspberry Pi devices may be vulnerable to cryptocurrency-mining malware if they haven’t been updated in a while.

Linux Malware Enslaves Raspberry Pi to Mine Cryptocurrency – ZDNet

3) Toyota has decided not to offer Apple CarPlay or Android Auto, favoring a Linux-based system instead. What will this mean for proprietary software fans?

Toyota owners to get Linux system instead of Apple CarPlay, Android Auto. Hooray? – The Car Connection

4) “[Red Hat Summit & OpenStack Summit] brought unique open source perspectives as a business and as a community.”

Red Hat Summit And OpenStack Summit: Two Weeks Of Open Source Software In Boston – Forbes

5) Eric S. Raymond has brought back Colossal Cave Adventure as an open source program.

One of the First Computer Games Is Born Again in Open Source – ZDNet

Understanding Linux Links

Linux is, without a doubt, one of the most flexible operating system platforms on the planet. With its flagship open source ecosystem, there is almost nothing you cannot do. What makes Linux so flexible? The answer depends on your needs, but the list of answers is significant, starting at the kernel and working its way out to the desktop environment. This flexibility was built into the operating system from the beginning, borrowing quite a lot of features from UNIX. One such feature is links.

What are links? I’m glad you asked.

Links are a very handy way to create a shortcut to an original file or directory. Links are used in many instances: sometimes to create a convenient path to a directory buried deep within the file hierarchy. Other uses for links include:

  • Linking libraries

  • Making sure files are in constant locations (without having to move the original)

  • Keeping a “copy” of a single file in multiple locations

But aren’t these just “shortcuts”?

In a way, yes…but not exactly. Within the realm of Linux, there’s more to links than just creating a shortcut to another location. Consider this: a shortcut is simply a pseudo-file that points to the original location of the file. For instance, create a shortcut on the Windows desktop to a particular folder and, when you click that icon, it will automatically open your file manager at the original location. On Linux, by contrast, when you create a link and open it, it opens in the exact location in which the link was created.

Let me explain. Say, for instance, you have an external drive attached to your Windows machine. On that drive is a folder called Music. If you create a shortcut to that directory on your desktop, when you click to open the shortcut, your file manager will open to the Music directory on your external drive.

Now, say you have that drive attached to a Linux machine. That drive is mounted at, say, /data, and on that drive is the folder Music. You create a link to that location in your home directory, so you now have a link at ~/Music that points to /data/Music. If you open the link in your home directory, the file manager opens in ~/Music, rather than /data/Music. Any changes you make in ~/Music will automatically be reflected in /data/Music. And that is the big difference.
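As a preview of the syntax we’ll cover below, the link in this example would be created with a single command:

ln -s /data/Music ~/Music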

Types of links

In Linux there are two different types of links:

  • Hard links

  • Symbolic links

The differences between the two are significant. With hard links, you can only link to files (not directories); you cannot reference a file on a different disk or volume; and the link references the same inode as the original source. A hard link remains usable even if the original file is removed.

Symbolic links, on the other hand, can link to directories, can reference a file or folder on a different disk or volume, will be left as a broken (unusable) link if the original location is deleted, reference pathnames (as opposed to physical locations), and are given their own unique inode.

Now comes the fun part. How do you work with links? Let’s find out how to create both hard and symbolic links.

Working with hard links

We’re going to make this very simple. The basic command structure for creating a hard link is:

ln SOURCE LINK

Where SOURCE is the original file and LINK is the new file you will create that points to the original source. So let’s say we want to create a link pointing to /data/file1, and we want to create the link in the ~/ directory. (Remember, hard links cannot span volumes, so this assumes /data is on the same filesystem as your home directory.) The command for this would be:

ln /data/file1 ~/file1

The above command will create the file ~/file1 as a hard link to /data/file1. If you open both files, you will see they have exactly the same contents, and if you alter the contents of one, the changes will be reflected in both. One of the benefits of using hard links is that if you were to delete /data/file1, ~/file1 would still remain usable. If you want to simply remove the link, you can use the rm command like so:

rm ~/file1
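To see the hard link behavior for yourself, here’s a minimal sketch using throwaway file names (again, both paths must live on the same filesystem):

echo "hello" > /data/file1
ln /data/file1 ~/file1
rm /data/file1
cat ~/file1    # still prints "hello"; the data survives via the hard link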

Working with symbolic links

The command structure for symbolic links works in the same manner as it does for hard links:

ln -s SOURCE LINK

The primary difference between hard and symbolic link creation is the -s option. Let’s create a symbolic link from ~/file2 to /data/file2, in similar fashion to what we did above, only this time we’ll create a symbolic link instead of a hard link. Here’s how that would be accomplished:

ln -s /data/file2 ~/file2 

The above command will create a symbolic link from ~/file2 to the original location /data/file2. If you update the file in either location, it will update in both.
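You can confirm where a symbolic link points with ls -l or readlink:

ls -l ~/file2    # the listing ends with: file2 -> /data/file2
readlink ~/file2    # prints /data/file2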

It is also important to note that you can use symbolic links for directories. Say, for instance, you have /data/directory1 and you want to create a symbolic link to that directory in ~/. This is done in the same way as creating a link to a file:

ln -s /data/directory1 ~/directory1

The above command will create the link ~/directory1, which points to /data/directory1. You can then add to that directory from either location, and the change will be reflected in both.
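For example, a file created through either path shows up in both:

touch ~/directory1/newfile
ls /data/directory1    # newfile is listed here, too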

To see the difference between how each type of link looks from a terminal window, issue the command ls -li. You will see that each is represented slightly differently (Figure 1).

Figure 1: Both hard links and symbolic links represented in the terminal window.

One interesting thing to note is how inodes are treated by the different types of links. In Figure 1, you can see that the inode (the number in the first column) is the same for the hard links, whereas the inodes for the symbolic links are different. This can be further illustrated by removing the original target of a symbolic link. When you do that, the link breaks: the link file itself remains behind, but it no longer resolves. Why? The file the symbolic link pointed to no longer exists.

Unlike with hard links, if you delete the original file or directory, the symbolic link will remain; however, it will now be a broken link and will be unusable. Remember, with hard links, you can remove the original and the link will remain and still be usable.
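A quick sketch, again with throwaway files, makes the broken-link behavior concrete:

ls -li /data/file2 ~/file2    # note the different inode numbers in the first column
rm /data/file2
cat ~/file2    # fails with "No such file or directory"; the link is now broken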

Learn more

Of course, you’re going to want to know more about using links. If you issue the command man ln, you can read the manual page for the ln command and gain a more in-depth understanding of how links work.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

Opening Up the Way to Industry Transformation

Heather Kirksey leads the OPNFV community that is changing the way the telecoms industry innovates and the way it works. She talks to Alan Burkitt-Gray.

There’s a deep cultural change rolling through the industry. The way things have been done for the past century and a half – with vendors and operators doing their own R&D and competing vigorously – is being replaced by a new spirit of collaboration.  At the heart of this is the move to software-defined networks (SDN) and network functions virtualisation (NFV) – two abbreviations that mean, in short, using IT industry-standard hardware in the network with software to define and run the services.

And leading the move is an organisation called Open Platform for NFV (OPNFV), whose director for the past two years has been Heather Kirksey.  



Read more at Global Telecoms Business

Five Tips on Building Serverless Teams in an Enterprise

Streaming video provider Toons.TV — owned by Finnish enterprise Rovio Entertainment and most famous for its Angry Birds cartoon series — has amassed some 8.5 billion streaming views in the past four years. Marcia Villalba, Full Stack Developer at Rovio Entertainment, spoke at the recent Serverlessconf conference in Austin about how her team reoriented toward a serverless approach to meet that scale and to speed up their backend systems.

And much like a game of Angry Birds itself, the challenge of implementing serverless in the enterprise can be seen as a source of frustration or as a challenge to reach the next level. Here are some of the lessons she picked up on the way:

Read more at The New Stack

You Are Not Google

Software engineers go crazy for the most ridiculous things. We like to think that we’re hyper-rational, but when we have to choose a technology, we end up in a kind of frenzy — bouncing from one person’s Hacker News comment to another’s blog post until, in a stupor, we float helplessly toward the brightest light and lay prone in front of it, oblivious to what we were looking for in the first place.

This is not how rational people make decisions, but it is how software engineers decide to use MapReduce.

As Joe Hellerstein sideranted to his undergrad databases class (54 min in):

The thing is there’s like 5 companies in the world that run jobs that big. For everybody else… you’re doing all this I/O for fault tolerance that you didn’t really need. People got kinda Google mania in the 2000s: “we’ll do everything the way Google does because we also run the world’s largest internet data service” [tilts head sideways and waits for laughter]

Read more at Bradfield

10 Critical Skills That Every DevOps Engineer Needs for Success

Enterprises including Adobe, Amazon, and Target are increasingly turning to DevOps as a way to deliver software and security updates more rapidly, both internally and to customers. And the spread of the workflow means there are more DevOps engineer positions available than ever.

DevOps engineer came in at no. 3 on Indeed’s list of best jobs in America for 2017, in terms of salary, number of job postings, and opportunities for growth. These positions grew by 106% in the past few years, Indeed found, and boast an average base salary of $123,165.

Read more at Tech Republic

Container Technologies Overview

Containers are lightweight OS-level virtualizations that allow us to run an application and its dependencies in a resource-isolated process. All the components required to run an application are packaged as a single image and can be re-used. When an image is executed, it runs in an isolated environment and does not share the memory, CPU, or disk of the host OS. This guarantees that processes inside the container cannot watch any processes outside the container.
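To make that isolation concrete, here’s a minimal sketch (assuming Docker is installed): listing processes from inside a container shows only the container’s own processes, not the host’s.

docker run --rm alpine ps aux    # typically shows just the ps command itself, running as PID 1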

Read more at DZone

The Economics of Software Security: What Car Makers Can Teach Enterprises

Now back to software security. When it comes to embedding software security controls in the software development lifecycle, we may have to stop the car assembly line and incur some up-front cost in terms of changing the way we build software, but over time this cost will be properly amortized into the total cost of development. 

Consider that there are two types of security controls available: controls that prevent defects before release and controls that detect defects after release. A good example of a preventive control is secure code review with an automated tool that helps to identify bugs in the source code well before software ships or is put into production. Detective controls identify defects as well, but only after release.

Read more at DarkReading

Containers Running Containers with LinuxKit

Some genuinely exciting news piqued my interest at this year’s DockerCon: the new operating system (OS) LinuxKit, announced and immediately on offer from the undisputed heavyweight container company, Docker. The container giant has announced a flexible, extensible operating system in which system services run inside containers for portability. You might be surprised to hear that even includes the Docker runtime daemon itself.

In this article, I’ll take a quick look at what’s promised in LinuxKit, how to try it out for yourself, and look also at ever-shrinking, optimized containers.

Less Is More

There’s no denying that users have been looking for a stripped-down version of Linux on which to run their microservices. With containerization, you’re trying your hardest to minimize each application so that it becomes a standalone process which sits inside a container of its own. However, constantly shifting containers around because you’re patching the host that the containers reside on causes issues. In fact, without an orchestrator like Kubernetes or Docker Swarm that container-shuffling is almost always going to cause downtime.

Needless to say, that’s just one of many reasons to keep your OS as minuscule as possible.

A favorite quote I’ve repeated on a number of occasions comes from the talented Dutch programmer Wietse Venema, who brought us the email stalwart Postfix and TCP Wrappers, amongst other renowned software.

The Postfix website states that even if you’re as careful with your coding as Wietse, for “every 1000 lines [you] introduce one additional bug into Postfix.” From my professional DevSecOps perspective, I might be forgiven for loosely translating “bug” into “security issue,” too.

From a security perspective, it’s precisely for this reason that less is more in the world of code. Simply put, fewer lines of code bring a number of benefits, namely in security, administration time, and performance: there are fewer security bugs, less time spent updating packages, and faster boot times.

Look deeper inside

Think about what runs your application from inside a container.

A good starting point is Alpine Linux, a low-fat, boiled-down, reduced OS commonly preferred over more bloated host favourites such as Ubuntu or CentOS. Alpine also provides a miniroot filesystem (for use within containers) which comes in at a staggeringly small 1.8MB at last check. Indeed, the ISO download for a fully working Linux operating system comes in at a remarkable 80MB.

If you decide to utilize a Docker base image from Alpine Linux, then you can find one on the Docker Hub where Alpine Linux describes itself as: “A minimal Docker image based on Alpine Linux with a complete package index and only 5 MB in size!”.
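If you want to see that claim for yourself, a quick sketch (assuming Docker is installed; the exact size will vary by release):

docker pull alpine    # fetches the minimal Alpine base image
docker images alpine    # the SIZE column shows the roughly 5MB footprint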

It’s been said, and I won’t attempt to verify this meme, that the ubiquitous Windows Start button is around the same file size! I’ll refrain from commenting further.

In all seriousness, I hope that gives you an idea of the power of innovative Unix-type OSs like Alpine Linux.

Lock everything up

What’s more, the project’s site goes on to explain that Alpine Linux is (not surprisingly) based on BusyBox, the famous neatly packaged set of Linux commands that, unbeknownst to many people, sits inside their broadband router, smart television, and of course many of the IoT devices in their homes as they read this.

The About page of the Alpine Linux site states:

“Alpine Linux was designed with security in mind. The kernel is patched with an unofficial port of grsecurity/PaX, and all userland binaries are compiled as Position Independent Executables (PIE) with stack smashing protection. These proactive security features prevent exploitation of entire classes of zero-day and other vulnerabilities.”

In other words, the boiled-down binaries bundled inside Alpine Linux builds, which give the system its functionality, have already been sieved through clever, industry-standard security tooling to help mitigate buffer overflow attacks.

Odd socks

Why do the innards of containers matter when we’re dealing with Docker’s new OS, you may ask?

Well, as you might have guessed, when it comes to containers, their construction is all about losing bloat. It’s about not including anything unless it’s absolutely necessary. It’s about having confidence so that you can reap the rewards of decluttering your cupboards, garden shed, garage, and sock drawer with total impunity.

Docker certainly deserve some credit for their foresight. Reportedly, in early 2016, Docker hired a key driving force behind Alpine Linux, Natanael Copa, who helped switch the default official image library away from Ubuntu to Alpine. The bandwidth that Docker Hub saved from the newly streamlined image downloads alone must have been welcome.

And, bringing us up to date, that work will stand arm-in-arm with the latest container-based OS work: Docker’s LinuxKit.

For clarity, LinuxKit is not destined to replace Alpine but rather to sit underneath the containers and act as a stripped-down OS on which you can happily spin up your runtime daemon (in this case, the Docker daemon, which spawns your containers).

Blondie’s Atomic

A finely tuned host is by no means a new thing (I mentioned the household devices embedded with Linux previously), and the evil geniuses who have been optimizing Linux for the last couple of decades realized some time ago that the underlying OS was key to churning out a server estate full of hosts brimming with containers.

For example, the mighty Red Hat have long been touting Atomic Host, having contributed to Project Atomic. The latter explains:

“Based on proven technology either from Red Hat Enterprise Linux or the CentOS and Fedora projects, Atomic Host is a lightweight, immutable platform, designed with the sole purpose of running containerized applications.”

There’s good reason that the underlying, immutable Atomic OS is put forward as the recommended choice for Red Hat’s OpenShift PaaS (Platform as a Service) product. It’s minimal, performant, and sophisticated.

Features

The mantra that less is more was evident throughout Docker’s announcement of LinuxKit. The project to realise the vision of LinuxKit was apparently no small undertaking: guided by Justin Cormack, a Docker veteran and unikernel expert, and built in partnership with HPE, Intel, ARM, IBM, and Microsoft, LinuxKit can run on mainframes as well as IoT-based fridge-freezers.

The configurable, pluggable, and extensible nature of LinuxKit will appeal to many projects looking for a baseline upon which to build their services. By open-sourcing the project, Docker are wisely inviting input from every man and their dog, and its functionality will undoubtedly mature like a good cheese over time.

Proof of the pudding

Having promised to point those eager to get going with this new OS in the right direction, let us wait no longer. If you want to get your hands on LinuxKit, you can do so from the project’s GitHub page: LinuxKit

On the GitHub page, there are instructions on how to get up and running along with some features.
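As a rough sketch of that getting-started flow, the steps at the time of writing looked like the following; the tool and file names here come from the project’s early README and may well have changed since, so treat the repository itself as the source of truth:

git clone https://github.com/linuxkit/linuxkit.git
cd linuxkit
make    # builds the image-assembly tooling into bin/
bin/moby build linuxkit.yml    # assembles a bootable LinuxKit image from a YAML spec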

Time permitting, I plan to get my hands much dirtier with LinuxKit. The somewhat-contentious Kubernetes versus Docker Swarm orchestration capabilities will be interesting to try out. I’d like to see memory footprints, boot times, and diskspace-usage benchmarking, too.

If the promises are true, then pluggable system services running as containers make for a fascinating way to build an OS. Docker blogged the following on its tiny footprint: “Because LinuxKit is container-native, it has a very minimal size – 35MB with a very minimal boot time. All system services are containers, which means that everything can be removed or replaced.”

I don’t know about you, but that certainly whets my appetite.

Call the cops

Features aside, with my DevSecOps hat on, I will be interested in seeing how the promise of security looks in reality.

Docker quotes the National Institute of Standards and Technology (NIST) and claims that:

“Security is a top-level objective and aligns with NIST stating, in their draft Application Container Security Guide: ‘Use container-specific OSes instead of general-purpose ones to reduce attack surfaces. When using a container-specific OS, attack surfaces are typically much smaller than they would be with a general-purpose OS, so there are fewer opportunities to attack and compromise a container-specific OS.’”

Possibly the most important container-to-host and host-to-container security innovation will be the fact that system containers (system services) are apparently heavily sandboxed into their own unprivileged space, given just the external access that they need.

Couple that functionality with the collaboration of the Kernel Self Protection Project (KSPP), and, with a resounding thumbs-up from me, it looks like Docker have focused on something very worthwhile. For those unfamiliar, KSPP’s raison d’être is as follows:

“This project starts with the premise that kernel bugs have a very long lifetime, and that the kernel must be designed in ways to protect against these flaws.”

The KSPP site goes on to state admirably that:

“Those efforts are important and on-going, but if we want to protect our billion Android phones, our cars, the International Space Station, and everything else running Linux, we must get proactive defensive technologies built into the upstream Linux kernel. We need the kernel to fail safely, instead of just running safely.”

And even if Docker only take baby steps with LinuxKit initially, the benefits the project brings as it matures will likely help it make great strides in the container space.

The end is far from nigh

As the powerhouse that is Docker continues to grow, there’s no doubt whatsoever that these giant-sized leaps in the direction of solid progress will benefit users and other software projects alike.

Chris Binnie is a Technical Consultant with 20 years of Linux experience and a writer for Linux Magazine and Admin Magazine. His new book Linux Server Security: Hack and Defend teaches you how to launch sophisticated attacks, make your servers invisible and crack complex passwords.

Advance your career in Linux System Administration! Check out the Essentials of System Administration course from The Linux Foundation.

This article originally appeared on DevSecOps.

Submit Your Talk for MesosCon NA: CFP Closes June 30

The MesosCon program committee is now seeking your fresh ideas, enlightening case studies, best practices, or deep technical knowledge to share with the Apache Mesos community at MesosCon North America and Europe in 2017.

Submit a proposal to speak at MesosCon North America. The deadline is June 30.

MesosCon is an annual conference held in three locations around the globe and organized by the Apache Mesos community in partnership with The Linux Foundation. The events bring together users and developers of the open source orchestration framework to share knowledge and learn about the project and its growing ecosystem.

Best practices, lessons learned, and case studies are among the topics the program committee is seeking for 2017. Sample topics include:  

  • Best practices and lessons on deploying and running Mesos at scale

  • Deep dives and tutorials into Mesos

  • Interesting extensions to Mesos (e.g., new communication models, support for new containerizers, new resource types and allocation models, etc.)

  • Improvements/additions to the Mesos ecosystem (packaging systems, monitoring, log aggregation, load balancing, service discovery, etc.)

  • New frameworks

  • Microservice design

  • Continuous Delivery / DevOps (automating into production)

This list is by no means an exhaustive set of topics for submissions, and we welcome you to submit proposals that fall outside the mentioned areas. Check out these videos of previous talks to see the types of presentations that have been accepted in the past.

All 2017 MesosCon events will be held directly following Open Source Summit events in China, North America, and Europe. Dates are as follows:

MesosCon Asia June 21 – 22, 2017 in Beijing, China

MesosCon North America September 14 – 15, 2017 in Los Angeles, California, USA

MesosCon Europe October 26 – 27, 2017 in Prague, Czech Republic

Not interested in speaking but want to attend? Linux.com readers receive 5% off the “attendee” registration with code LINUXRD5.

Apache, Apache Mesos, and Mesos are either registered trademarks or trademarks of the Apache Software Foundation (ASF) in the United States and/or other countries. MesosCon is run in partnership with the ASF.