Sometimes we find ourselves using technologies that — although we may not realize it — stem from way back in the history of the Internet. The other day, I was using Trivial File Transfer Protocol (TFTP) and looked up its Request For Comments (RFC) only to discover that it's been around rather longer than I suspected: since June 1981, to be exact. That may not be the 1970s, but FTP and TFTP can certainly be considered founding protocols.
In an unusual twist, TFTP doesn't use the now almost-mandatory Transmission Control Protocol (TCP) for moving its data around. TCP offers resilience through error recovery, but TFTP instead uses the User Datagram Protocol (UDP), presumably because of the “trivial” nature of its file transfers.
The feature set included with TFTP is admittedly quite limited but, make no mistake, it can still be very useful on a local area network (LAN). Unlike the well-known FTP service, which is commonly used for moving files back and forth across the Internet (and which counts encrypted successors such as SFTP and FTPS among its family members), TFTP doesn't even allow the listing of directories, so you can't see which files are available. If you want to use TFTP, you need to know the filenames in advance (they are sometimes made deliberately complex and lengthy to provide a small dose of security through obscurity) before connecting to a server.
Other somewhat surprising limitations, relative to its cousin FTP, include the lack of authentication and of any ability to delete or rename files. Admittedly, there may have been improvements since its original design, but the RFC also states that, in essence, the only errors it can report are an incorrectly specified user, a requested file that doesn't exist, and other access violations.
Now that you’re firmly sold on using this somewhat-deprecated protocol, let’s have a think about what it might be used for.
If you’re creating new machines from images, then TFTP is perfect for bootstrapping a new server with predefined config and a sprinkling of packages. It might also be used during boot time to pull the latest version of a file from a local server so that all clients are guaranteed to be up-to-date with a certain software package.
You may also want to use TFTP — as several vendors do — for firmware updates. Why choose TFTP over FTP or even HTTP, you may ask? Simply because, even if you don't have a TFTP server already up and running, it's relatively easy to get one started quickly. Also, the number of parameters required to retrieve a file (and therefore the number of things that can go wrong) is very limited. It tends to work or it doesn't; there's little middle ground. This functional simplicity is definitely a bonus.
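To give a feel for that simplicity, here's a minimal sketch of retrieving a file from the command line, assuming the tftp-hpa client; the host address and filename are hypothetical:

# Retrieve a single file non-interactively; no login, no directory listing
tftp 10.10.10.10 -c get firmware-v2.1.bin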
If you’ve ever maintained switches or routers, then you’ve likely used TFTP either to back up or restore config files or possibly to perform a firmware upgrade. Many of the major vendors still prefer this method, possibly because there’s a feeling of comfort (in relation to security) when moving files around inside a LAN relative to doing so across the Internet.
On a network device, for example, you might encounter a transaction similar to that seen in Listing 1:
Router# copy running-config tftp:
Address or name of remote host []? 10.10.10.10
Destination filename [router_config_backup_v1]? router_config_backup_v1
!!!!
3151 bytes copied in 1.21 secs (2604 bytes/sec)
Router#
Listing 1: The type of transaction that you may see when backing up a network device’s config via TFTP.
As you can see in this listing, the exclamation marks provide a progress bar of sorts, with each one indicating the successful transfer of ten packets.
Let’s look at how to get a TFTP server up and running.
In the olden days, inetd ruled the roost and was responsible for letting many local services out onto the network so that they could communicate with other users and machines. On the majority of systems that I used, thanks to inetd's security limitations, it was ultimately replaced by xinetd, which closed down more unneeded services by default. Thankfully, however, we can avoid installing xinetd as well (which was the norm until a few years ago) and instead focus solely on the tftpd package.
On Debian derivatives, installing tftpd is as simple as running:
# apt-get install tftpd
As you can see in Figure 1, inetd is indeed mentioned as a supplementary package, but it is of little consequence: the filesystem footprint remains minuscule.
On Red Hat derivatives, there are a few other considerations. You could use an alternative package with similar ease, but here we'll opt in to the more advanced xinetd by running a command such as:
# yum install tftp-server xinetd
This pulls down tftp-server along with xinetd, the more sophisticated replacement for inetd. Incidentally, tftpd-hpa is what gets pulled down on Debian systems if you try to install tftp-server, and you would edit the file /etc/default/tftpd-hpa to configure your service. Look for the Debian-specific README to allow file uploads, too.
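On a stock Debian install, that file typically looks something like the following (exact defaults vary between releases):

# /etc/default/tftpd-hpa
TFTP_USERNAME="tftp"
TFTP_DIRECTORY="/srv/tftp"
TFTP_ADDRESS="0.0.0.0:69"
TFTP_OPTIONS="--secure"

Adding --create to TFTP_OPTIONS is what permits clients to upload new files.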
Back to Red Hat. The description for the tftp-server package is as follows, echoing what we've said until now:
“The Trivial File Transfer Protocol (TFTP) is normally used only for booting diskless workstations. The tftp-server package provides the server for TFTP, which allows users to transfer files to and from a remote machine. TFTP provides very little security, and should not be enabled unless it is expressly needed. The TFTP server is run from /etc/xinetd.d/tftp, and is disabled by default.”
If you haven't used xinetd before, note that it uses an individual config file per service. For example, inside the file /etc/xinetd.d/tftp you need to make a couple of small changes to get started. Have a look at Listing 2.
service tftp
{
        socket_type     = dgram
        protocol        = udp
        wait            = yes
        user            = root
        server          = /usr/sbin/in.tftpd
        server_args     = -s /tftp_server_directory
        disable         = yes
        per_source      = 11
        cps             = 100 2
        flags           = IPv4
}
Listing 2: A sample “xinetd” config for a TFTP service.
As you can see in this listing, we will need to change the “disable” setting to “no” if we want this service to start. Additionally, we might need to alter the “server_args” option away from “-s /tftp_server_directory” if we want to serve files from another directory. If you want to allow file uploads then simply add a “-c” option before the aforementioned “-s” on that line.
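In other words, after those edits the relevant lines of Listing 2 would read as follows, assuming /tftp_server_directory is where you want to serve files from and accept uploads into:

        server_args     = -c -s /tftp_server_directory
        disable         = no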
In the next article, we’ll look more closely at the main config file and talk about how to enable and disable tftpd services.
Chris Binnie is a Technical Consultant with 20 years of Linux experience and a writer for Linux Magazine and Admin Magazine. His new book Linux Server Security: Hack and Defend teaches you how to launch sophisticated attacks, make your servers invisible and crack complex passwords.
Advance your career in Linux System Administration! Check out the Essentials of System Administration course from The Linux Foundation.
Cloud Foundry, a massive open source project that allows enterprises to host their own platform-as-a-service for running cloud applications in their own data center or in a public cloud, today announced the launch of its “Cloud Foundry Certified Developer” program.
The Cloud Foundry Foundation calls this “the world's largest cloud-native developer certification initiative,” and while we obviously still have to wait and see how successful this initiative will be, it already has the backing of the likes of Dell EMC, IBM, SAP and Pivotal (the commercial venture that incubated the Cloud Foundry project). The Foundation is partnering with The Linux Foundation to deliver the program through its eLearning infrastructure.
Read more at TechCrunch
The Linux Foundation is on track to break the 1,000 participating organizations mark some time in 2017 and has set its sights on bringing more new and diverse voices into open source technology through training and outreach efforts. Even as the open source community continues to grow, Executive Director Jim Zemlin said at the Open Source Leadership Summit in February that the Foundation’s goal remains the same: to create a sustainable ecosystem for open source technology through good governance and innovation.
“We think that the job of the Foundation,” Zemlin said, “is to create that sustainable ecosystem. It’s to work with projects that solve a meaningful problem in society, in the market, to create really good communities.”
According to Zemlin, The Linux Foundation has trained more than 800,000 students, many of them at no cost. Training is crucial, he said, because it lowers, a little every day, the barrier both to contributing to open source and to using open source projects in more settings.
“We are trying to make sure that the projects that we work with have a set of practitioners and developers that can further increase the adoption of that particular code,” he said.
Zemlin is also thrilled that companies not traditionally known for their open source contributions are becoming excited about the opportunities The Linux Foundation and open source code can provide.
“The thing I’m most proud about that is the fact that companies are coming in now from wholesale new sectors that hadn’t done a lot of open source work in the past,” Zemlin said. “Telecom, automotive, etc., are really learning how to do shared software development, understanding the intellectual property regimes that open source represents, and just greasing the skids for broader flow of code, which is incredibly important if your mission is to create a greater shared technology resource in the world.”
Zemlin was particularly excited about Automotive Grade Linux (AGL), a middleware project that was represented at the Consumer Electronics Show this year. “This is such a sleeper project at The Linux Foundation that’s going to have a huge impact just as more and more production vehicles roll out with the AGL code in it,” Zemlin said. “It’s at CES this year. Daimler announced that they’re joining our Automotive Grade Linux initiative so now we have Toyota, Daimler, and a dozen of the world’s biggest automotive OEMs all working together to create the future automotive middleware and informatics systems that will really define what an automotive cockpit experience looks like.”
The goal for that project, and all the various projects that the different open source foundations are shepherding in 2017, is to create value for both the contributors and the organizations investing their time and money.
“The best projects, the projects that are meaningful and that you can count on for decades to come, are those who have a good developer community solving a really big problem where that code is used to create real value,” Zemlin said. “Value in the form of profit for companies.”
For that value to be created, foundations such as The Linux Foundation must continue their hard work by supporting the developers and other professionals leading their passion projects.
“Ecosystems take real work,” Zemlin said. “This is what foundations do… We create a governance structure where you can pull intellectual property for long-term safe harbor.”
Learn how successful companies gain a business advantage with open source software in our online, self-paced Fundamentals of Professional Open Source Management course. Download a free sample chapter now!
Docker is an open source tool that automates the deployment of applications inside software containers.
The easiest way to get the idea behind Docker is to compare it to, well… standard shipping containers.
Back in the day, transportation companies faced a fundamental challenge: how to move goods of wildly different types and sizes together without a dedicated vehicle and handling process for every kind of cargo.
With the introduction of containers, bricks can be put over glass, and chemicals can be stored next to food. Cargo of various sizes can be put inside a standardized container and loaded or unloaded by the same vehicle.
Let’s go back to containers in software development.
When you develop an application, you need to provide your code along with all of its dependencies: libraries, a web server, databases, and so on. You may end up in a situation where the application works on your computer but won't even start on a staging server, or on a dev or QA machine.
This challenge can be addressed by isolating the app to make it independent of the system.
Traditionally, virtual machines were used to avoid this unexpected behavior. The main problem with VMs is that the “extra OS” on top of the host operating system adds gigabytes of space to the project. Most of the time your server will host several VMs, which take up even more space. And, by the way, at the moment most cloud-based server providers will charge you for that extra space. Another significant drawback of VMs is slow boot times.
Docker eliminates all the above by simply sharing OS kernel across all the containers that are running as separate processes of the host OS.

Keep in mind that Docker is not the first and not the only containerization platform. However, at the moment Docker is the biggest and the most powerful player on the market.
The list of benefits includes: identical behavior on a local machine and on dev, staging, and production servers; handy application encapsulation; a faster development process; and easy scaling.
Docker's native platform is Linux, as it's based on features provided by the Linux kernel. However, you can still run it on macOS and Windows; the only difference is that there, Docker is encapsulated into a tiny virtual machine. At the moment, Docker for macOS and Windows has reached a significant level of usability and feels more like a native app.
Moreover, there are a lot of supplementary apps, such as Kitematic or Docker Machine, which help you install and operate Docker on non-Linux platforms.
You can find the installation instructions in the official Docker documentation.
If you're running Docker on Linux, you need to run all of the following commands as root, or add your user to the docker group and re-login:
sudo usermod -aG docker $(whoami)
Container — a running instance that encapsulates required software. Containers are always created from images.
A container can expose ports and volumes to interact with other containers and/or the outer world.
A container can be easily killed or removed and re-created again in a very short time.
Image — the basic element of every container. When you create an image, every step is cached and can be reused (the copy-on-write model). Depending on the image, building it can take some time. Containers, on the other hand, can be started from images right away.
Port — a TCP/UDP port in its original meaning. To keep things simple, let's assume that ports can either be exposed to the outer world (accessible from the host OS) or connected to other containers (accessible only from those containers and invisible to the outer world).
Volume — can be described as a shared folder. Volumes are initialized when a container is created. Volumes are designed to persist data, independent of the container’s lifecycle.
Registry — a server that stores Docker images. It can be compared to GitHub: you can pull an image from the registry to deploy it locally, and you can push locally built images to the registry.
Docker Hub — a registry with a web interface, provided by Docker Inc. It stores a lot of Docker images for different software. Docker Hub is the source of the “official” Docker images, made by the Docker team or in cooperation with the original software manufacturer (which doesn't necessarily mean that these “original” images come from the official software manufacturers). Official images list their potential vulnerabilities; this information is available to any logged-in user. There are both free and paid accounts. You can have one private image per account and an unlimited number of public images for free.
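To make the pull/push round trip concrete, here is a short sketch; the my-image and mydockerid names are hypothetical placeholders:

# Pull an official image from Docker Hub
docker pull redis:3.2-alpine

# Tag a locally built image into your own namespace
docker tag my-image mydockerid/my-image:1.0

# Log in and push it to the registry
docker login
docker push mydockerid/my-image:1.0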

It’s time to run your first container:
docker run ubuntu /bin/echo 'Hello world'
Console output:
Unable to find image 'ubuntu:latest' locally
latest: Pulling from library/ubuntu
d54efb8db41d: Pull complete
f8b845f45a87: Pull complete
e8db7bf7c39f: Pull complete
9654c40e9079: Pull complete
6d9ef359eaaa: Pull complete
Digest: sha256:dd7808d8792c9841d0b460122f1acf0a2dd1f56404f8d1e56298048885e45535
Status: Downloaded newer image for ubuntu:latest
Hello world
Let's try to create an interactive shell inside a Docker container:
docker run -i -t --rm ubuntu /bin/bash
If you want to keep the container running after the end of the session, you need to daemonize it:
docker run --name daemon -d ubuntu /bin/sh -c "while true; do echo hello world; sleep 1; done"
Let’s see what containers we have at the moment:
docker ps -a
Console output:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1fc8cee64ec2 ubuntu "/bin/sh -c 'while..." 32 seconds ago Up 30 seconds daemon
c006f1a02edf ubuntu "/bin/echo 'Hello ..." About a minute ago Exited (0) About a minute ago gifted_nobel
The ps output shows that we have two containers: the daemonized one (daemon) and the first “Hello world” one (gifted_nobel, an automatically generated name).
Note: there is no second container (the one with the interactive shell) because we set the --rm option. As a result, this container is automatically deleted right after execution.
Let's check the logs and see what the daemon container is doing right now:
docker logs -f daemon
Console output:
...
hello world
hello world
hello world
Now let's stop the daemon container:
docker stop daemon
Let’s make sure that the container has stopped.
docker ps -a
Console output:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1fc8cee64ec2 ubuntu "/bin/sh -c 'while..." 5 minutes ago Exited (137) 5 seconds ago daemon
c006f1a02edf ubuntu "/bin/echo 'Hello ..." 6 minutes ago Exited (0) 6 minutes ago gifted_nobel
The container is stopped. We can start it again:
docker start daemon
Let’s ensure that it is running:
docker ps -a
Console output:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1fc8cee64ec2 ubuntu "/bin/sh -c 'while..." 5 minutes ago Up 3 seconds daemon
c006f1a02edf ubuntu "/bin/echo 'Hello ..." 6 minutes ago Exited (0) 7 minutes ago gifted_nobel
Now let’s stop it again and remove all the containers manually:
docker stop daemon
docker rm <your first container name>
docker rm daemon
To remove all containers we can use the following command:
docker rm -f $(docker ps -aq)
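The -f flag forces removal even of containers that are still running. While we're cleaning up, two related commands are worth knowing; this is a sketch, so run them only if you really want to discard the data:

# Remove dangling (untagged) images left behind by rebuilds
docker rmi $(docker images -q -f dangling=true)

# Remove volumes no longer referenced by any container
docker volume rm $(docker volume ls -q -f dangling=true)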
Starting from this example, you'll need several additional files, which you can find in my GitHub repo. You can clone the repo or download the sample files directly from there.
It's time to create and run a more meaningful container, like Nginx.
Change the directory to examples/nginx.
docker run -d --name test-nginx -p 80:80 -v $(pwd):/usr/share/nginx/html:ro nginx:latest
Console output:
Unable to find image 'nginx:latest' locally
latest: Pulling from library/nginx
693502eb7dfb: Pull complete
6decb850d2bc: Pull complete
c3e19f087ed6: Pull complete
Digest: sha256:52a189e49c0c797cfc5cbfe578c68c225d160fb13a42954144b29af3fe4fe335
Status: Downloaded newer image for nginx:latest
436a602273b0ca687c61cc843ab28163c720a1810b09005a36ea06f005b0c971
Important: the run command accepts only absolute paths for volume mounts. In our example, we've used $(pwd) to supply the current directory as an absolute path.
Now you can check http://localhost in your web browser.
We can try to change examples/nginx/index.html (which is mounted as a volume to the /usr/share/nginx/html directory inside the container) and refresh the page.
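For example, from the examples/nginx directory (the page content below is, of course, just an illustration):

# Overwrite the mounted page on the host; a browser refresh shows the change,
# even though the container mounts the volume read-only (:ro)
echo '<h1>Hello from a Docker volume</h1>' > index.html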
Let’s get the information about test-nginx container:
docker inspect test-nginx
This command outputs a large JSON document with low-level information about the test-nginx container: its state, image, network settings, exposed ports, mounted volumes, and so on.
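The output is verbose; if you only need a single field, the --format option takes a Go template, for example:

# Print just the container's IP address on the default bridge network
docker inspect --format '{{ .NetworkSettings.IPAddress }}' test-nginx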
To build a Docker image, you need to create a Dockerfile. It is a plain text file with instructions and arguments. Here is a description of the instructions we're going to use in our next example:

FROM — set the base image
RUN — execute a command in the container
ENV — set an environment variable
WORKDIR — set the working directory
VOLUME — create a mount point for a volume
CMD — set the default executable of the container

You can check the Dockerfile reference for more details.
Let's create an image that will get the contents of a website with curl and store it in a text file. We need to pass the website URL via the environment variable SITE_URL. The resulting file will be placed in a directory mounted as a volume.
FROM ubuntu:latest
RUN apt-get update
RUN apt-get install --no-install-recommends --no-install-suggests -y curl
ENV SITE_URL https://google.com/
WORKDIR /data
VOLUME /data
CMD sh -c "curl -L $SITE_URL > /data/results"
The Dockerfile is ready; it's time to build the actual image.
Go to examples/curl and execute the following command to build an image:
docker build . -t test-curl
Console output:
Sending build context to Docker daemon 3.584 kB
Step 1/7 : FROM ubuntu:latest
---> 0ef2e08ed3fa
Step 2/7 : RUN apt-get update
---> Running in 4aa839bb46ec
Get:1 http://archive.ubuntu.com/ubuntu xenial InRelease [247 kB]
Get:2 http://archive.ubuntu.com/ubuntu xenial-updates InRelease [102 kB]
...
Fetched 24.9 MB in 4s (5208 kB/s)
Reading package lists...
---> 35ac5017c794
Removing intermediate container 4aa839bb46ec
Step 3/7 : RUN apt-get install --no-install-recommends --no-install-suggests -y curl
---> Running in 3ca9384ecf8d
Reading package lists...
Building dependency tree...
Reading state information...
The following additional packages will be installed...
---> f3c6d26b95e6
Removing intermediate container 3ca9384ecf8d
Step 4/7 : ENV SITE_URL https://google.com/
---> Running in 21b0022b260f
---> 9a733ee39a46
Removing intermediate container 21b0022b260f
Step 5/7 : WORKDIR /data
---> c024301ddfb8
Removing intermediate container 3bc973e5584c
Step 6/7 : VOLUME /data
---> Running in a9594a8958fe
---> 6802707a7114
Removing intermediate container a9594a8958fe
Step 7/7 : CMD sh -c "curl -L $SITE_URL > /data/results"
---> Running in 37503bc4e386
---> 5ebb2a65d771
Removing intermediate container 37503bc4e386
Successfully built 5ebb2a65d771
Now we have the new image and we can see it in the list of existing images:
docker images
Console output:
REPOSITORY TAG IMAGE ID CREATED SIZE
test-curl latest 5ebb2a65d771 37 minutes ago 180 MB
nginx latest 6b914bbcb89e 7 days ago 182 MB
ubuntu latest 0ef2e08ed3fa 8 days ago 130 MB
We can create and run a container from the image. Let's try it with the default parameters:
docker run --rm -v $(pwd)/vol:/data/:rw test-curl
To see the results saved to file, run:
cat ./vol/results
Let’s try with facebook.com:
docker run --rm -e SITE_URL=https://facebook.com/ -v $(pwd)/vol:/data/:rw test-curl
To see the results saved to file, run:
cat ./vol/results
Docker Compose is the tool of choice for connecting containers with each other.
In this example, I am going to connect Python and Redis containers.
version: '2'
services:
  app:
    build:
      context: ./app
    depends_on:
      - redis
    environment:
      - REDIS_HOST=redis
    ports:
      - "5000:5000"
  redis:
    image: redis:3.2-alpine
    volumes:
      - redis_data:/data
volumes:
  redis_data:
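The app service is built from ./app, which contains app.py and requirements.txt (flask and redis, as the build log below shows). The actual file ships with the repo; a minimal sketch consistent with the REDIS_HOST, REDIS_PORT, and BIND_PORT environment variables might look like this:

import os

from flask import Flask
from redis import StrictRedis

app = Flask(__name__)
redis = StrictRedis(host=os.environ.get('REDIS_HOST', 'localhost'),
                    port=int(os.environ.get('REDIS_PORT', 6379)))

@app.route('/')
def index():
    # Atomically increment the page view counter stored in Redis
    hits = redis.incr('hits')
    return 'This page has been viewed {0} times!\n'.format(hits)

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=int(os.environ.get('BIND_PORT', 5000)), debug=True)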
Go to examples/compose and execute the following command:
docker-compose --project-name app-test -f docker-compose.yml up
Console output:
Creating network "apptest_default" with the default driver
Creating volume "apptest_redis_data" with default driver
Pulling redis (redis:3.2-alpine)...
3.2-alpine: Pulling from library/redis
627beaf3eaaf: Pull complete
a503a4771a4a: Pull complete
72c5d910c683: Pull complete
6aadd3a49c30: Pull complete
adf925aa1ad1: Pull complete
0565da0f872e: Pull complete
Digest: sha256:9cd405cd1ec1410eaab064a1383d0d8854d1eef74a54e1e4a92fb4ec7bdc3ee7
Status: Downloaded newer image for redis:3.2-alpine
Building app
Step 1/9 : FROM python:3.5.2-alpine
3.5.2-alpine: Pulling from library/python
b7f33cc0b48e: Pull complete
8eda8bb6fee4: Pull complete
4613e2ad30ef: Pull complete
f344c00ca799: Pull complete
Digest: sha256:8efcb12747ff958de32b32424813708f949c472ae48ca28691078475b3373e7c
Status: Downloaded newer image for python:3.5.2-alpine
---> e70a322afafb
Step 2/9 : ENV BIND_PORT 5000
---> Running in 8518936700b3
---> 0f652cdd2cee
Removing intermediate container 8518936700b3
Step 3/9 : ENV REDIS_HOST localhost
---> Running in 027286e90699
---> 6da3674f79fa
Removing intermediate container 027286e90699
Step 4/9 : ENV REDIS_PORT 6379
---> Running in 0ef17cb512ed
---> c4c514aa3008
Removing intermediate container 0ef17cb512ed
Step 5/9 : COPY ./requirements.txt /requirements.txt
---> fd523d64faae
Removing intermediate container 8c94c82e0aa8
Step 6/9 : COPY ./app.py /app.py
---> be61f59b3cd5
Removing intermediate container 93e38cd0b487
Step 7/9 : RUN pip install -r /requirements.txt
---> Running in 49aabce07bbd
Collecting flask==0.12 (from -r /requirements.txt (line 1))
Downloading Flask-0.12-py2.py3-none-any.whl (82kB)
Collecting redis==2.10.5 (from -r /requirements.txt (line 2))
Downloading redis-2.10.5-py2.py3-none-any.whl (60kB)
Collecting itsdangerous>=0.21 (from flask==0.12->-r /requirements.txt (line 1))
Downloading itsdangerous-0.24.tar.gz (46kB)
Collecting Werkzeug>=0.7 (from flask==0.12->-r /requirements.txt (line 1))
Downloading Werkzeug-0.11.15-py2.py3-none-any.whl (307kB)
Collecting Jinja2>=2.4 (from flask==0.12->-r /requirements.txt (line 1))
Downloading Jinja2-2.9.5-py2.py3-none-any.whl (340kB)
Collecting click>=2.0 (from flask==0.12->-r /requirements.txt (line 1))
Downloading click-6.7-py2.py3-none-any.whl (71kB)
Collecting MarkupSafe>=0.23 (from Jinja2>=2.4->flask==0.12->-r /requirements.txt (line 1))
Downloading MarkupSafe-1.0.tar.gz
Installing collected packages: itsdangerous, Werkzeug, MarkupSafe, Jinja2, click, flask, redis
Running setup.py install for itsdangerous: started
Running setup.py install for itsdangerous: finished with status 'done'
Running setup.py install for MarkupSafe: started
Running setup.py install for MarkupSafe: finished with status 'done'
Successfully installed Jinja2-2.9.5 MarkupSafe-1.0 Werkzeug-0.11.15 click-6.7 flask-0.12 itsdangerous-0.24 redis-2.10.5
---> 18c5d1bc8804
Removing intermediate container 49aabce07bbd
Step 8/9 : EXPOSE $BIND_PORT
---> Running in f277fa7dfcd5
---> 9f9bec2abf2e
Removing intermediate container f277fa7dfcd5
Step 9/9 : CMD python /app.py
---> Running in a2babc256093
---> 2dcc3b299859
Removing intermediate container a2babc256093
Successfully built 2dcc3b299859
WARNING: Image for service app was built because it did not already exist. To rebuild this image you must use `docker-compose build` or `docker-compose up --build`.
Creating apptest_redis_1
Creating apptest_app_1
Attaching to apptest_redis_1, apptest_app_1
redis_1 | 1:C 08 Mar 09:56:55.765 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
redis_1 | _._
redis_1 | _.-``__ ''-._
redis_1 | _.-`` `. `_. ''-._ Redis 3.2.8 (00000000/0) 64 bit
redis_1 | .-`` .-```. ```/ _.,_ ''-._
redis_1 | ( ' , .-` | `, ) Running in standalone mode
redis_1 | |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379
redis_1 | | `-._ `._ / _.-' | PID: 1
redis_1 | `-._ `-._ `-./ _.-' _.-'
redis_1 | |`-._`-._ `-.__.-' _.-'_.-'|
redis_1 | | `-._`-._ _.-'_.-' | http://redis.io
redis_1 | `-._ `-._`-.__.-'_.-' _.-'
redis_1 | |`-._`-._ `-.__.-' _.-'_.-'|
redis_1 | | `-._`-._ _.-'_.-' |
redis_1 | `-._ `-._`-.__.-'_.-' _.-'
redis_1 | `-._ `-.__.-' _.-'
redis_1 | `-._ _.-'
redis_1 | `-.__.-'
redis_1 |
redis_1 | 1:M 08 Mar 09:56:55.767 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
redis_1 | 1:M 08 Mar 09:56:55.767 # Server started, Redis version 3.2.8
redis_1 | 1:M 08 Mar 09:56:55.767 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
redis_1 | 1:M 08 Mar 09:56:55.767 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
redis_1 | 1:M 08 Mar 09:56:55.767 * The server is now ready to accept connections on port 6379
app_1 | * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
app_1 | * Restarting with stat
app_1 | * Debugger is active!
app_1 | * Debugger pin code: 299-635-701
The current example will increment a view counter in Redis. Open http://localhost:5000 in your web browser and check it.
Using docker-compose is a topic for a separate article. To get started, you can play with some images from Docker Hub or, if you want to create your own images, follow the recommendations listed at the end of this article. The only thing I can add in terms of using docker-compose: always give explicit names to your volumes in docker-compose.yml (if the image has volumes). This simple rule will save you from an issue in the future when you're inspecting your volumes.
version: '2'
services:
  ...
  redis:
    image: redis:3.2-alpine
    volumes:
      - redis_data:/data
volumes:
  redis_data:
In this case, redis_data is the name used inside the docker-compose.yml file; the real volume name will be prefixed with the project name.
To see volumes run:
docker volume ls
Console output:
DRIVER VOLUME NAME
local apptest_redis_data
Without an explicit volume name, the name will be a UUID. Here's an example from my local machine:
DRIVER VOLUME NAME
local ec1a5ac0a2106963c2129151b27cb032ea5bb7c4bd6fe94d9dd22d3e72b2a41b
local f3a664ce353ba24dd43d8f104871594de6024ed847054422bbdd362c5033fc4c
local f81a397776458e62022610f38a1bfe50dd388628e2badc3d3a2553bb08a5467f
local f84228acbf9c5c06da7be2197db37f2e3da34b7e8277942b10900f77f78c9e64
local f9958475a011982b4dc8d8d8209899474ea4ec2c27f68d1a430c94bcc1eb0227
local ff14e0e20d70aa57e62db0b813db08577703ff1405b2a90ec88f48eb4cdc7c19
local polls_pg_data
local polls_public_files
local polls_redis_data
local projectdev_pg_data
local projectdev_redis_data
Docker has some restrictions and requirements depending on the architecture of your system (the applications that you pack into containers). You can ignore these requirements or find some workarounds, but in that case you won't get all the benefits of using Docker. My strong advice is to follow these recommendations:

- One application = one container.
- Run processes in the foreground (don't use systemd, upstart, or similar tools).
- Keep data out of containers; use volumes instead.
- Don't use SSH to step into a container (if you need to get inside, use the docker exec command).
- Avoid manual configuration or actions inside a container.
To summarize all of the above: alongside an IDE and Git, Docker has become one of the must-have developer tools.
We at Django Stars have successfully implemented Docker in numerous projects. Stay tuned if you are interested in such advanced tutorials — “How to set up Django app in Docker?” and “How to use Docker and CircleCI?”.
Have you already used Docker on your project? Leave us a comment or ask questions below!
Free and open source software has been part of our technical and organizational foundation since Google’s early beginnings. From servers running the Linux kernel to an internal culture of being able to patch any other team’s code, open source is part of everything we do. In return, we’ve released millions of lines of open source code, run programs like Google Summer of Code and Google Code-in, and sponsor open source projects and communities through organizations like Software Freedom Conservancy, the Apache Software Foundation, and many others.
Today, we’re launching opensource.google.com, a new website for Google Open Source that ties together all of our initiatives with information on how we use, release, and support open source.
Read more at Google
Last month at the RSA Conference, I saw a lot of companies selling security incident response automation. Their promise was to replace people with computers – sometimes with the addition of machine learning or other artificial intelligence techniques – and to respond to attacks at computer speeds.
While this is a laudable goal, there's a fundamental problem with doing this in the short term. You can only automate what you're certain about, and there is still an enormous amount of uncertainty in cybersecurity. Automation has its place in incident response, but the focus needs to be on making the people effective, not on replacing them: security orchestration, not automation.
This isn’t just a choice of words – it’s a difference in philosophy. The US military went through this in the 1990s. What was called the Revolution in Military Affairs (RMA) was supposed to change how warfare was fought. Satellites, drones and battlefield sensors were supposed to give commanders unprecedented information about what was going on, while networked soldiers and weaponry would enable troops to coordinate to a degree never before possible. In short, the traditional fog of war would be replaced by perfect information, providing certainty instead of uncertainty. They, too, believed certainty would fuel automation and, in many circumstances, allow technology to replace people.
Read more at Schneier on Security
A group of engineers from every leading container orchestrator maker have gathered together, virtually, around an initiative to explore a common lexicon for container-based data storage. Initially proposed by Mesosphere’s Benjamin Hindman, the Container Storage Interface initiative — which, for now, is essentially a GitHub document — is exploring the issue of whether the community at large, and their users, would benefit from a standardized API for addressing and managing storage volumes.
“The goal of this standard is to have a single, cluster-level volumes plugin API that is shared by all orchestrators,” Goelzer writes in the group’s preamble. “So, for example, conformant storage plugins written for Docker would run unmodified in Kubernetes (and vice-versa).”
Read more at The New Stack
How do you develop and sustain an operating system primed for the continuously evolving nature of the internet of things? You model it, in part, on the highly successful Linux platform, which is exactly the tactic of the Zephyr Project, an open, real-time operating system overseen by the nonprofit Linux Foundation along with a variety of other big-name industry players.
The Zephyr Project, which celebrated its one-year anniversary in February 2017, is a modular, scalable platform designed for connected, resource-strained devices. The open source RTOS — which, in fact, includes no Linux code, but rather is based on the Wind River Rocket IoT OS technology acquired by Intel — is able to integrate with myriad third-party libraries and embedded devices, regardless of architecture, and was built with security in mind, according to project members.
Read more at TechTarget
To summarize: your Agile transformation is stuck. You’ve thought about your why, as in Becoming an Agile Leader, Part 1: Define Your Why. You’ve started to measure possibilities. You have an idea of who you might talk with as in Becoming an Agile Leader, Part 2: Who to Approach. You’ve considered who you need as allies and how to enlist them in Becoming an Agile Leader, Part 3: How to Create Allies. In Becoming an Agile Leader, Part 4: Determining Next Steps, you thought about creating win-wins with influence. Now, it’s time to think about how you and the people involved (or not involved!) learn.
Read more at DZone