
Docker Volumes and Networks with Compose

In my previous article on Docker Compose, I showed how to build a two-container application with a MySQL container and a Ghost container, both based on official Docker Hub images. In part two of this Docker Compose series, I will look at a few Docker Compose commands to manage the application, and I will introduce Docker Volumes and Docker Networks, which can be specified in the YAML file describing our Compose application.

Docker Compose Commands

Last time I showed how to bring up the application with the single `docker-compose up -d` command. Let’s first look at what this does and how you could modify it.

The `-d` option runs the application in detached mode, so the `docker-compose` command returns immediately. If you omit it, `docker-compose` stays in the foreground and streams the logs of the two containers to stdout. Try it.

This is helpful when you start writing your Compose file and need to see why some containers may not be starting. The second thing to note is that Compose automatically looked for your application in the file `docker-compose.yml` (or with the `.yaml` extension). If your application were in a file with a different name, you could specify it with the `-f` option, as in `docker-compose -f otherfile.yaml up`.
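If your application is already up in detached mode and you want to look at the container output after the fact, Compose also has a `logs` command. For example, to see the logs of the ghost service:

$ docker-compose logs ghost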

Now, enter `docker-compose -h` at the command line. The complete usage of the command will be printed, and you will see many more commands than just `up`. We saw `ps` already, which is the Compose equivalent of `docker ps`. The most interesting point about these Compose commands is that you can manage individual services. For example, if you wanted to stop one of the services in your application, you would use `stop`:

$ docker-compose stop ghost

The container would stop. You would bring it back up with `start` or `restart`:

$ docker-compose start ghost

The name of the service is the name you gave it in your Compose file. Restarting a service might be needed when there is a dependency between services and one stopped because another was not yet ready.
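For example, if ghost came up before the database was ready, you would simply restart it:

$ docker-compose restart ghost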

Compose's `run` command is the equivalent of `docker run`: it starts a one-off container for a service and executes a command in it. For instance, to run `/bin/date` in your ghost service, do the following:

$ docker-compose run ghost /bin/date

Mon Mar 21 10:16:24 UTC 2016

There are lots of commands; make sure you check the usage and experiment. Finally, to bring down the entire application, removing the containers and networks Compose created, use the `down` command (the `-v` and `--rmi` options also remove the named volumes and images):

$ docker-compose down

During testing, bringing down the entire application is very useful and avoids confusion. If you only stop containers, they will be restarted without changes. If you modify an image or its build context, make sure your containers actually pick up the changes (see the example below). In a future post, we will look at the `scale` command, which starts multiple containers for a specific service. This is particularly useful if you use Compose in conjunction with Swarm and run your application on a cluster of Docker hosts.
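On the note of picking up image changes: if a service is defined with a `build` directive (as ghost was in the previous article), a reliable sequence is to rebuild the image and then let `up` recreate the affected containers:

$ docker-compose build ghost
$ docker-compose up -d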

Docker Volumes

In the previous article, we built a new image for Ghost. We did this to write the proper configuration file and add it to the Docker image started by Compose. This illustrated the use of the `build` directive in a Compose file. However, we can instead mount a configuration file directly into a running container using volumes, replacing the `build` directive with `image` and `volumes` like so:

ghost:
  image: ghost
  volumes:
    - ./ghost/config.js:/var/lib/ghost/config.js
...

In this YAML snippet, we defined a volume for the ghost container, which mounted the local `config.js` file into the `/var/lib/ghost/config.js` file in the container. Docker created a volume for the `/var/lib/ghost` directory and pointed the container `config.js` file to the one we have in our project directory. If you were to edit the file on the host and restart the container, the changes would take effect immediately.
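A quick way to check the mount is to read the file from a one-off container with the `run` command we saw earlier:

$ docker-compose run ghost cat /var/lib/ghost/config.js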

The other use of volumes in Docker is for persistent data. So far, our database is not persistent. If we remove the MySQL container, we will lose all the data we put into our blog. Less than ideal! To avoid this situation, we can create a Docker volume and mount it in `/var/lib/mysql` of the database container. The life of this volume would be totally separate from the container lifecycle.

Thankfully, Compose can help us manage these so-called named volumes. They need to be defined under a top-level `volumes` key in the Compose file and can then be used in a service definition.

Below is a snippet of our modified Compose file, which creates a `mysql` named volume and uses it in the `mysql` service.

version: '2'

services:
  mysql:
    image: mysql
    container_name: mysql
    volumes:
      - mysql:/var/lib/mysql
...

volumes:
  mysql:


Compose will automatically create this named volume, and you will be able to see it with the `docker volume ls` command as well as find its path with `docker volume inspect <volume_name>`. Here is an example:

$ docker volume ls | grep mysql
local               vagrant_mysql

$ docker volume inspect vagrant_mysql
[
    {
        "Name": "vagrant_mysql",
        "Driver": "local",
        "Mountpoint": "/var/lib/docker/volumes/vagrant_mysql/_data"
    }
]

Be careful, however. `docker-compose down` leaves named volumes in place, but if you add the `-v` option (`docker-compose down -v`), the persistent volume will be deleted and you will lose your data.
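If you want to keep a copy of the data before removing the volume, one common pattern is to archive its contents from a throwaway container. A sketch, assuming the volume was created as `vagrant_mysql` as in the example above:

$ docker run --rm -v vagrant_mysql:/var/lib/mysql -v $(pwd):/backup busybox tar cf /backup/mysql-backup.tar -C /var/lib/mysql .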

Docker Networks

To finish up this article in our Docker Compose series, I’ll illustrate the use of Docker networks. As with volumes, Docker can manage networks. When defining services in our Compose file, we can specify a network for each one. This allows us to build tiered applications, where each container lives in its own network, providing added isolation between services.

To showcase this functionality, we are going to add an Nginx container to our application. Nginx will proxy requests to the Ghost service, which in turn will access the database container. Nginx will be in its own `proxy` network, the database will be in its own `db` network, and the ghost service will have access to both the `proxy` and `db` networks.

We can do this by defining both networks under a top-level `networks` directive, similar to `services` and `volumes`, and adding a `networks` list in each service.
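In outline, the additions look like this; the complete file follows below:

services:
  ghost:
    ...
    networks:
      - db
      - proxy
...
networks:
  proxy:
  db: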

For the Nginx service, we will write a local configuration file `default.conf` which will be mounted inside the Nginx container at `/etc/nginx/conf.d/default.conf`. The file will be minimal, containing only:


server {
    listen       80;
    location / {
        proxy_pass http://ghost:2368;
    }
}

It instructs Nginx to listen on port 80 and proxy all requests to the hostname `ghost` on port 2368. Our final Compose file, containing the volume and network definitions, is below:

version: '2'

services:
  nginx:
    image: nginx
    depends_on:
      - ghost
    volumes:
      - ./default.conf:/etc/nginx/conf.d/default.conf
    ports:
      - "80:80"
    networks:
      - proxy

  mysql:
    image: mysql
    container_name: mysql
    volumes:
      - mysql:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=ghost
      - MYSQL_USER=ghost
      - MYSQL_PASSWORD=password
    networks:
      - db

  ghost:
    image: ghost
    volumes:
      - ./ghost/config.js:/var/lib/ghost/config.js
    depends_on:
      - mysql
    networks:
      - db
      - proxy

volumes:
  mysql:

networks:
  proxy:
  db:


Note that, in this Compose file, we removed the host port mappings for MySQL and Ghost: each service reaches the others on their default ports directly over the shared networks, so only Nginx needs to publish a port to the host.
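You can confirm that Compose created the two networks and see which containers are attached to each. A quick check, assuming Compose prefixed the network names with the project directory name, as it did for the volume:

$ docker network ls
$ docker network inspect vagrant_proxy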

And, we’re done! We have moved from a simple Compose file that created a blog using Ghost to a three-container application, containing volumes for configurations and persistent data as well as isolated networks for a tiered application.
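As a quick smoke test, once the application is up you should be able to reach the blog through Nginx on port 80 of the Docker host (substitute your Docker Machine IP if you are not running locally):

$ curl -I http://localhost/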

Read my next article, How to Use Docker Machine to Create a Swarm Cluster, for how to start this application on a Docker Swarm cluster and scale the Ghost application.


MariaDB Targets Big Data Analytics Market with ColumnStore

The relational database company’s upcoming MariaDB ColumnStore is a columnar storage engine for massively parallel distributed query execution and data loading, supporting use cases ranging from real-time to batch to algorithmic.

MariaDB on Tuesday moved to unite transactional and analytical processing in a single relational database with the announcement of the upcoming release of MariaDB ColumnStore. “We’re uniting transactional and big data analytics all together under one roof,” says Michael Howard, CEO of MariaDB. “It’s the same MariaDB interface, security, SQL richness simplifying management. You don’t need to buy specialized hardware. It’s one unified platform.”

Read more at CIO

Google Reveals its Shift to an Open Security Architecture

Perimeter fences down, security posture up – Google shows how it’s done. Google has revealed how it completely changed its security architecture, shifting from a traditional infrastructure to a more open model in which all network traffic is treated with suspicion.

The project, called BeyondCorp, shifted the company from a perimeter security model to one where access to services and tools are not gated according to a user’s physical location or their originating network, but instead deploys access policies based on information about a device, its state and associated user.

The architecture was disclosed in a detailed article published on Usenix. “BeyondCorp considers both internal networks and external networks to be completely untrusted, and gates access to applications by dynamically asserting and enforcing levels, or ‘tiers’, of access,” claim the Google engineers behind BeyondCorp.

Read more at Computing

Puppet Expands Support for Docker, Kubernetes

The Puppet Enterprise 2016.1 automation platform features support modules for managing containers and microservices architectures.  

DevOps staple Puppet, formerly Puppet Labs, is upgrading its Puppet Enterprise IT automation platform and offering new and expanded support for infrastructure like Docker containers and Kubernetes container management. Puppet automates the software delivery process to bridge traditional infrastructure with more contemporary technology, including public and private clouds and microservices architectures. It has even been suggested as a tool for users to build their own PaaS clouds.

Read more at ITWorld

Ubuntu Patches Linux Kernel Security Bugs

An Ubuntu update released on Wednesday fixes a bug in a Linux kernel driver that could be used to take control of a machine.

Canonical has released an update that patches four bugs, including one that could allow an attacker to execute code.

Ubuntu users have been notified of a reasonably pressing update to install that addresses four security issues, though none are remotely exploitable. The bugs affect Ubuntu 14.04 Long Term Support (LTS), which gets five years of coverage.

Read more at ZDNet

LibreOffice 5.1.2 Officially Released with Over 80 Bug Fixes and Improvements

We have just been informed by Italo Vignoli of The Document Foundation about the availability of the second maintenance release of the LibreOffice 5.1 open-source office suite.

LibreOffice 5.1.2 is the second point release in the current stable and most advanced version of the office suite, which comes preinstalled by default in numerous GNU/Linux operating systems. It arrives approximately one month after the first maintenance build and fixes many of the bugs reported by users.

“LibreOffice 5.1.2 is targeted at technology enthusiasts, early adopters and power users. For more conservative users, and for enterprise deployments, TDF suggests the “still” version: LibreOffice 5.0.5. …”

Read more at Softpedia


Linux Botnet Attacks Increase in Scale

Linux-targeting malware family is a “high” risk, warn security researchers.

Hackers are using malware that targets Linux to build botnets for launching distributed denial of service (DDoS) attacks, security researchers have warned.

The so-called BillGates Trojan botnet family of malware – apparently so named by the virus writers because it targets machines running Linux, not Windows – has been labelled with a “high” risk factor in a threat advisory issued by Akamai’s Security Intelligence Research Team.

Read more at ZDNet

How to Integrate ClamAV into PureFTPd for Virus Scanning on CentOS 7

This tutorial explains how you can integrate ClamAV into PureFTPd for virus scanning on a CentOS 7 system. In the end, whenever a file gets uploaded through PureFTPd, ClamAV will check the file and delete it if it contains a virus or malware.

Read more at HowTo Forge

Containers, Virtual Machines, or Bare Metal?

Which technology will you use to deploy your next big application?

The data center is changing. Again. In the olden days, really not all that many years ago, pretty much every server that sat on a rack in a data center was fairly straightforward (if you could call it that). Each machine ran a single operating system, and often many programs, each requiring their own updates, upgrades, and patches. It was, putting it nicely, a hard situation to maintain, though many management tools emerged to help administrators keep all of their machines safe, secure, and up-to-date.

Read more at OpenSource.com

The Linux Foundation Partners with Kids on Computers to Support the Next Generation of Open Source Professionals

At The Linux Foundation we support a variety of community initiatives and organizations that are advancing free and open source software and creating opportunities for people from all backgrounds and ages to contribute. We focus this support through partnerships, donations and activities like the workshop we’re planning with Kids on Computers.

Just last week we announced a comprehensive, new partnership with Kids on Computers in an effort to further our commitment to supporting the future of open source. Our partnership includes a hands-on workshop for children to run their first Linux installation and will likely be hosted at a LinuxCon event this year. It also includes free and discounted passes to Linux Foundation events for members and volunteers of Kids on Computers. We are also making a donation to the organization to help advance their larger education and technology accessibility mission. 

Kids on Computers is a nonprofit organization that establishes computer labs around the world for kids who don’t have access to technology. The organization uses free and open source software (FOSS) to build and supply computers to eighteen computer labs throughout Mexico, India, Morocco, Nepal, and Argentina. Their work extends opportunities to the next generation of FOSS developers and sysadmins.

“Linux provides a free and open source platform for our kids to learn about technology,” said Avni Khatri, Kids on Computers President. “This partnership with The Linux Foundation aligns with our mission perfectly and will allow us to impact the lives of even more kids. Partnerships like this are what keep the technology field growing and moving in new directions. We thank The Linux Foundation for helping to bring technology to underprivileged kids!”

Through our community giving work we hope to help advance like-minded organizations that protect and advance free and open source software; increase diversity in technology and the open source community; support career development opportunities for the next generation of IT managers and developers, regardless of background or circumstances; and empower open source professionals to take on leadership opportunities and advance open source. The greatest shared technology investments begin with a vision, a passion for innovation, and a talent pool of individuals who understand the power of collaboration. We’re excited about our work with Kids on Computers and for what the future holds.