
Linus Torvalds Speaks Openly about Work and Code at TED2016 [Video]

Linus Torvalds, creator of the Linux operating system and the Git source code management system, opened the “Code Power” session at the recent TED2016 conference, speaking in an interview with TED Curator Chris Anderson.

In the talk, Torvalds, whose blunt approach in dealing with people is well known, stated, “I’m actually not a people person. I don’t really love other people, but I do love other people who get involved in my project.”  

Torvalds went on to discuss his belief that “code either works or it doesn’t.” He should know. The current Linux kernel is one of the largest collaborative projects ever attempted, with more than 20 million lines of code and more than 12,000 contributors so far. Additionally, an average of 185 changes are accepted into the kernel every day — nearly 1,300 per week — and Torvalds ultimately has the final say on what code is accepted.

In the TED talk, Torvalds admitted that he is sometimes “myopic, when it comes to other people’s feelings…” However, he said, “What I love about open source is that it really allows different people to work together.”

Torvalds was listed as one of the most influential people in the world by TIME magazine back in 2004. In that profile, Lawrence Lessig wrote, “there is no doubt that the open, collaborative model that produced GNU/Linux has changed the business of software development forever.”

Nonetheless, the typically self-deprecating Torvalds doesn’t see himself as a visionary. Instead, he says: “I’m an engineer. I’m happy with the people who are wandering around looking at the stars but I am looking at the ground and I want to fix the pothole before I fall in.”

Watch the video at TED.com. 
 

Atom Editor: Your Next Go-To Text Editor

The text editor is a tool with which Linux users have either a casual or a very deep relationship. If you’re one of those users who only opens a text editor on the rare occasion that a configuration file must be tweaked, then you’re probably good with the likes of Nano. Developers, on the other hand, need something much more powerful. On the Linux platform, you can easily turn to Vi or Emacs, but some developers prefer to have a GUI at their fingertips.

Figure 1: The Atom welcome guide is ready to help you get to know the text editor.
That’s where Atom comes in. Atom is a text editor of a different ilk. It has the power of hard-core editors with a user-friendly GUI. Atom offers all the features you’d need in a platform ready for developers:

  • Easy extensibility

  • Cross-platform editing

  • Built-in package manager

  • Smart autocompletion

  • Built-in file browser

  • Multi-pane viewing

  • Find and replace

  • Themable

  • Customize styling with your own CSS/LESS

  • And much more

As for packages, you can browse the nearly four thousand available extensions that can be added to Atom. If you’re looking for your next favorite text editor, look no further.

Let’s install Atom and use it.

Installation

I will be demonstrating Atom on Elementary OS Freya. From the Atom home page, you can download either an .rpm or a .deb package for installation. To install Atom on a Debian-based platform, download the .deb package and save it in your ~/Downloads directory. Once the file has downloaded, follow these steps (a consolidated command sketch follows the list):

  1. Open up a terminal window

  2. Change into the ~/Downloads directory with the command cd ~/Downloads

  3. Issue the command sudo dpkg -i atom-XXX.deb (where XXX is the architecture of the downloaded file, e.g., amd64)

  4. Type your sudo password and hit Enter

  5. Allow the installation to complete
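
For reference, the whole installation boils down to something like this (the filename below is an assumption for a 64-bit download; substitute whatever filename you actually saved):

$ cd ~/Downloads
$ sudo dpkg -i atom-amd64.deb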

The installation should go off without a hitch. However, I tested the same installation on Ubuntu Mate 16.04, and it installed with errors (meaning it wouldn’t run). If you find that is the case on your Ubuntu system, you can fix it with the following steps:

  1. Open up a terminal window (or remain in the one used for installing Atom)

  2. Issue the command sudo apt-get install -f

  3. Type your sudo password (if necessary) and hit Enter

  4. Allow apt-get to do its thing

That should fix the dependency issue. You’re ready to go.

First Launch

When you first launch Atom (either from your desktop menu or from the command line, with the command atom), you will be greeted by the welcome guide (Figure 1 above).

This welcome guide appears only the first time you open Atom; thereafter, the editor opens directly to the editor window. To get back to the Welcome Guide, open Atom and then click Help > Welcome Guide.

From the Welcome Guide, you can easily open a project, install new packages, customize the styling, hack the Atom init script, create snippets for later use, and learn keyboard shortcuts (memorize Shift+Ctrl+P, the command that opens the keyboard shortcut drop-down).

Installing Packages

Figure 2: Installing packages in Atom is quite simple.
This will probably be one of the first things you do with Atom. Out of the box, Atom offers quite a lot of features. Even so, you might find you need a feature that isn’t included. Installing packages is quite simple. Here’s how:

  1. Open up Atom

  2. From the Welcome Guide, click Install a Package

  3. Click Open Installer

  4. From the newly opened pane (Figure 2) scroll through the listing of packages (or do a search for a keyword or name)

  5. When you find the package you want to install, click the associated Install button

  6. Allow the installation to complete

Once you’ve installed a package, you’ll find a newly created sub-menu in the Packages menu. Click on that sub-menu to see what the package offers.
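
If you prefer the terminal, Atom also ships with apm, its command-line package manager. A quick sketch (the package name here is just an example; any package from the Atom registry works the same way):

$ apm install minimap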

Let me show you a really cool example. Say you write in C or C++. Out of the box, Atom cannot run code written in those languages. However, there is an outstanding package, aptly named script, that can run C and C++ code. Here’s what you do:

  1. Open Atom

  2. Go to the Welcome Guide

  3. Click Install package

  4. Enter script in the search field

  5. Locate the script package, published by rgbkrk

  6. Click Install

Figure 3: A C++ script with proper color-coding.
Once the package has been installed, click File > New File and either type in your code or copy/paste it. When you’ve added your code, click File > Save and make sure to give the file a proper extension (such as .c). Once the file is saved, proper color-coding will appear (Figure 3), and you’re ready to run the script.
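
For example, here is a minimal C++ sketch in the spirit of the random number generator used below (the article doesn’t show its exact source, so treat this as an illustrative stand-in):

#include <cstdlib>   // std::rand, std::srand
#include <ctime>     // std::time
#include <iostream>  // std::cout

int main() {
    // Seed the generator so each run produces different numbers
    std::srand(static_cast<unsigned>(std::time(0)));
    // Print five random numbers between 0 and 99
    for (int i = 0; i < 5; ++i) {
        std::cout << std::rand() % 100 << std::endl;
    }
    return 0;
}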

I’ve added a C++ script for generating random numbers. Click Packages > Script > Run Script and (if the code works) the results of the run will appear in a pane at the bottom of the window (Figure 4).

Figure 4: Running the random number generator.

It’s the Little Things

Atom is filled with some pretty amazing features and tools. There are also several little additions that make this text editor spectacular. For example, say you’re looking for a matching bracket in a large snippet of code. All you have to do is click on one of the brackets and then click Packages > Bracket Matcher > Go to matching bracket. The cursor will be immediately teleported to the matching bracket, so you won’t have to go on a hunt for that missing character.

Another nice feature exists in the bottom right corner of the window. After you save a file, the bottom right corner will display:

  • Line break type

  • Encoding

  • Syntax highlighting

Figure 5: Changing the syntax highlighting in Atom.
Say, for example, the syntax highlighting associated with your C++ file is set to C. If you click the C, you can then select the proper highlighting from the popup menu (Figure 5).

Atom offers something for just about everyone. I’ve only scratched the surface of what this powerful text editor can do. If you’re looking for the perfect combination of features and ease of use, Atom is ready to become your go-to text editor.

Academics Claim Google Android Two-Factor Authentication Is Breakable

Computer security researchers warn that security shortcomings in Android and the Play Store undermine the protection offered by all SMS-based two-factor authentication (2FA).

The issue – first reported to Google more than a year ago – revolves around an alleged security weakness rather than a straightforward software vulnerability. The BAndroid vulnerability was presented at the Android Security Symposium in Vienna last September by Victor van der Veen of Vrije Universiteit Amsterdam. On the BAndroid microsite (featuring a video and FAQ), the Dutch researchers explain the cause and scope of the alleged vulnerability.

If attackers have control over the browser on the PC of a user of Google services (such as Gmail or Google+), they can push any app with any permission onto any of the user’s Android devices and activate it, allowing them to bypass two-factor authentication via the phone.

Read more at The Register

What to Know before Using Windows 10’s New Linux System

Curious about what the new Linux subsystem in Windows 10 can and can’t do? Here’s what we’ve learned about its first release. What sounded like an April Fools’ joke turned out to be anything but: Core Linux tools, including the shell, are now available to run natively inside Windows 10 thanks to an official Microsoft project that translates Linux system calls.

If you’re using the Linux command line at all, odds are you consider yourself a pro. Consequently, the Linux subsystem in Windows is hidden behind a “for pros only” side entrance that you can only get into if you’re running Windows 10 from the Fast Ring developer builds numbered 14316 or greater, via the Windows Insider program.

Read more at InfoWorld

Google Adds Cloud Test Lab Integration to New Android Studio 2.0

Google has updated its key Android development tool, Android Studio, to version 2.0 and added cloud test integration, a GPU debugger, and faster emulation and resource allocation.

Mountain View touts the Instant Run feature as just about the most important new feature in the upgrade: it analyses Android app code as it runs and determines ways it can be deployed faster, without requiring app reinstallation.

The tool’s Android emulator is three times faster, too, and connections over the command-line tool Android Debug Bridge are 10 times faster than in the previous version.

Read more at The Register

A Closer Look into Google Stackdriver

Last month at the GCP Next conference, Google announced the public beta of the Stackdriver cloud monitoring and logging service. It is designed to be a hybrid monitoring service spanning both Amazon Web Services and Google Cloud Platform.

After launching Compute Engine in 2012, Google moved fast in adding the new infrastructure services required by ops teams. To add monitoring capabilities to its cloud platform, Google acquired Stackdriver in May 2014. A year later, it resurfaced as the preview of the Google Cloud Monitoring service for Compute Engine, App Engine, Cloud Pub/Sub, and Cloud SQL. As expected, Google conveniently dropped support for AWS. Like most GCP services, Cloud Monitoring had its own set of APIs.

Stackdriver is Google’s answer to Amazon CloudWatch and CloudTrail. The service has the potential to become the core DevOps platform for applications and workloads deployed in Google Cloud Platform.

Read more at The New Stack.

50 Embedded Linux Conference Presentation Slide Decks on Tap

The Linux Foundation has posted slide presentations from this week’s Embedded Linux Conference, which featured the first-ever ELC keynote by Linus Torvalds. In case you missed this week’s North American Embedded Linux Conference and OpenIoT Summit in San Diego, you’ll be happy to know that videos of the live-streamed event will be released in the coming weeks. Meanwhile, the Linux Foundation has posted slide presentations from the event…

This year’s event marks the first time Linux creator and kernel overseer Linus Torvalds gave a keynote at an Embedded Linux Conference (ELC). His appearance reflects the growing importance of embedded in the Linux universe, especially of the IoT variety.

Read more at LinuxGizmos

This Week in Linux News: Civil Infrastructure Project Launches, Skype for Linux Disappoints, & More

This week in Linux news, The Linux Foundation launches the Civil Infrastructure Project (CIP), Skype for Linux users are disappointed, and more! Catch up on the latest in Linux news with our weekly digest.

1) The Linux Foundation launches a new Collaborative Project to help expand civil infrastructure.

The Linux Foundation Launches Linux-Based Civil Infrastructure Platform – ZDNet

2) Microsoft partners with R3 to further blockchain tools; its lead rival is IBM, which is using the Hyperledger Project.

Microsoft Blockchain-as-a-Service Gains Momentum with Banking Partnership – TechRepublic

3) Users complain that Skype for Linux is missing important features and lacks reliability.

Skype for Linux is lagging behind and falling apart due to Microsoft’s neglect

4) Linus Torvalds comments on the ubiquity of Linux in embedded systems at Embedded Linux Conference/OpenIoT Summit.

Linux’s Torvalds surprised by IoT uptake – ReadWrite

5) Windows isn’t worried about destructive rm -rf / command. 

Linux’s deadliest command doesn’t faze Bash on Windows 10 – PCWorld

OpenStack Mitaka Aimed at Simplifying Cloud Operations

The goal with Mitaka is to enable easier integration and management of all the projects in the OpenStack Big Tent model.

Mitaka, the first OpenStack cloud platform release of 2016, is now out after six months of development and the participation of a global community of 2,336 developers from 293 organizations. OpenStack Mitaka is the 13th release from the open-source cloud effort, which Rackspace and NASA began in June 2010.

Read more at eWeek

Docker Volumes and Networks with Compose

In my previous article on Docker Compose, I showed how to build a two-container application using a MySQL container and a Ghost container built from official Docker Hub images. In part two of this Docker Compose series, I will look at a few Docker Compose commands to manage the application, and I will introduce Docker Volumes and Docker Networks, which can be specified in the YAML file describing our Compose application.

Docker Compose Commands

Last time I showed how to bring up the application with the single `docker-compose up -d` command. Let’s first look at what this does and how you could modify it.

The `-d` option makes the `docker-compose` command return immediately, running the containers in detached mode. If you omit it, `docker-compose` will not return, and you will see the logs of the two containers on stdout. Try it.
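
For instance, both forms below assume you are in the directory containing the Compose file:

$ docker-compose up       # foreground: logs from both containers stream to stdout
$ docker-compose up -d    # detached: the command returns and containers run in the background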

This is helpful when you start writing your Compose file and need to see why some containers may not be starting. The second particularity is that Compose automatically looks for your application in the file `docker-compose.yml` (or with the `.yaml` extension). If your application were in a file with a different name, you could specify it with the `-f` option, as in `docker-compose -f otherfile.yaml up`.

Now, enter `docker-compose -h` at the command line. The complete usage of the command will be displayed, and you will see many more commands than just `up`. We saw `ps` already, which is the equivalent of `docker ps`. The most interesting point about these Compose commands is that you can manage individual services. For example, if you wanted to stop one of the services in your application, you would use `stop`:

$ docker-compose stop ghost

The container would stop. You would bring it back up with `start` or `restart`:

$ docker-compose start ghost

The name of the service is the name you gave it in your Compose file. Restarting services might be needed if there is a dependency between them and one stopped because another one was not ready yet.
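
For example, restarting just the ghost service looks like this:

$ docker-compose restart ghost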

Similar to the Docker `run` and `exec` commands, the Compose `run` command executes a one-off command in a new container for a service; for instance, to run `/bin/date` in your ghost service, do the following:

$ docker-compose run ghost /bin/date

Mon Mar 21 10:16:24 UTC 2016

There are lots of commands; make sure you check the usage and experiment. Finally, to bring down the entire application and remove the containers and networks (and, with additional flags, the images and named volumes as well), use the `down` command.

$ docker-compose down

During testing, bringing down the entire application is very useful and avoids confusion. If you only stop the containers, they will be restarted unchanged; if you make changes to an image or want to rebuild one, make sure that your containers are indeed picking up your changes. In a future post, we will look at the `scale` command, which starts multiple containers of a specific service. This is particularly useful if you use Compose in conjunction with Swarm and run your application in a cluster of Docker hosts.
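
As a sketch, a clean rebuild cycle could look like the following (the `build` step only applies if your Compose file uses `build` directives rather than prebuilt images):

$ docker-compose down     # remove the old containers and networks
$ docker-compose build    # rebuild images from their Dockerfiles
$ docker-compose up -d    # start fresh containers from the new images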

Docker Volumes

In the previous article, we built a new image for Ghost. We did this to write the proper configuration file and add it to the Docker image started by Compose. This illustrated the use of the `build` directive in a Compose file. However, we can directly mount a configuration file in a running container using volumes, by replacing a `build` directive with `image` and `volumes` like so:

ghost:
  image: ghost
  volumes:
    - ./ghost/config.js:/var/lib/ghost/config.js
...


In this YAML snippet, we defined a volume for the ghost container, which mounted the local `config.js` file into the `/var/lib/ghost/config.js` file in the container. Docker created a volume for the `/var/lib/ghost` directory and pointed the container `config.js` file to the one we have in our project directory. If you were to edit the file on the host and restart the container, the changes would take effect immediately.

The other use of volumes in Docker is for persistent data. So far, our database is not persistent. If we remove the MySQL container, we will lose all the data we put into our blog. Less than ideal! To avoid this situation, we can create a Docker volume and mount it in `/var/lib/mysql` of the database container. The life of this volume would be totally separate from the container lifecycle.

Thankfully, Compose can help us with managing these so-called named volumes. They need to be defined under the `volumes` key in a Compose file and can be used in a service definition.

Below is a snippet of our modified Compose file, which creates a `mysql` named volume and uses it in the `mysql` service.

version: '2'
services:
  mysql:
    image: mysql
    container_name: mysql
    volumes:
      - mysql:/var/lib/mysql
...
volumes:
  mysql:


Compose will automatically create this named volume, and you will be able to see it with the `docker volume ls` command as well as find its path with `docker volume inspect <volume_name>`. Here is an example:

$ docker volume ls | grep mysql
local               vagrant_mysql
$ docker volume inspect vagrant_mysql
[
    {
        "Name": "vagrant_mysql",
        "Driver": "local",
        "Mountpoint": "/var/lib/docker/volumes/vagrant_mysql/_data"
    }
]

Be careful, however. If you bring down the application with `docker-compose down -v` (adding the `--volumes` flag), the named volume will be removed along with the containers, and you will lose your data.
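
A sketch of the difference:

$ docker-compose down       # removes containers and networks; named volumes survive
$ docker-compose down -v    # also removes named volumes declared in the Compose file (data loss!)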

Docker Networks

To finish up this article in our Docker Compose series, I’ll illustrate the use of Docker networks. As with volumes, Docker can manage networks. When defining services in our Compose file, we can specify a network for each one. This allows us to build tiered applications, where each container lives in its own network, providing added isolation between services.

To showcase this functionality, we are going to add an Nginx container to our application. Nginx will proxy the requests to the Ghost service, which will access the database container. Nginx will be in its own `proxy` network, the database will be in its own `db` network, and the ghost service will have access to both the `proxy` and `db` networks.

We can do this by defining both networks under a `networks` directive (similar to `services` and `volumes`) and adding a `networks` map in each service.

For the Nginx service, we will write a local configuration file `default.conf` which will be mounted inside the Nginx container at `/etc/nginx/conf.d/default.conf`. The file will be minimal, containing only:


server {
    listen       80;
    location / {
        proxy_pass http://ghost:2368;
    }
}

It instructs Nginx to listen on port 80 and proxy all requests to the hostname `ghost` on port 2368. Our final Compose file, containing the volume and network definitions, is below:

version: '2'

services:
  nginx:
    image: nginx
    depends_on:
      - ghost
    volumes:
      - ./default.conf:/etc/nginx/conf.d/default.conf
    ports:
      - "80:80"
    networks:
      - proxy
  mysql:
    image: mysql
    container_name: mysql
    volumes:
      - mysql:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=ghost
      - MYSQL_USER=ghost
      - MYSQL_PASSWORD=password
    networks:
      - db
  ghost:
    image: ghost
    volumes:
      - ./ghost/config.js:/var/lib/ghost/config.js
    depends_on:
      - mysql
    networks:
      - db
      - proxy

volumes:
  mysql:

networks:
  proxy:
  db:

Note that, in this Compose file, we removed the port definitions for MySQL and Ghost: the services reach each other directly on their default ports over the shared networks, and only Nginx’s port 80 is published to the host.
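
To check that the proxy is wired up correctly, you could hit the published port from the Docker host (a quick sanity check, assuming all three containers came up):

$ curl -I http://localhost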

And, we’re done! We have moved from a simple Compose file that created a blog using Ghost to a three-container application, with volumes for configuration and persistent data as well as isolated networks for a tiered application.

Read my next article, How to Use Docker Machine to Create a Swarm Cluster, to learn how to start this application on a Docker Swarm cluster and scale the Ghost application.