Apr 28 2015

Slim application containers (using Docker)

Another talk I gave at Linux.conf.au was about making slim containers (youtube): ones that contain only the barest essentials needed to run an application.

And I thought I’d do it from source, as most “Built from source” images also contain the tools used to build the software.

1. Make the Docker base image you’re going to use to build the software

In January 2015, the main base images and their sizes looked like:

scratch             latest              511136ea3c5a        19 months ago       0 B
busybox             latest              4986bf8c1536        10 days ago         2.433 MB
debian              7.7                 479215127fa7        10 days ago         85.1 MB
ubuntu              15.04               b12dbb6f7084        10 days ago         117.2 MB
centos              centos7             acc1b23376ec        10 days ago         224 MB
fedora              21                  834629358fe2        10 days ago         250.2 MB
crux                3.1                 7a73a3cc03b3        10 days ago         313.5 MB

I’ll pick Debian, as I know it, and it has the fewest restrictions on what contents you’re permitted to redistribute (and because bootstrapping busybox would be an amazing talk on its own).

Because I’m experimenting, I’m starting by seeing how small I can make a new Debian base image –  starting with:

FROM debian:7.7

RUN rm -r /usr/share/doc /usr/share/doc-base \
          /usr/share/man /usr/share/locale /usr/share/zoneinfo

CMD ["/bin/sh"]

Then make a new single-layer (squashed) image by running `docker export` and `docker import`.
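
A minimal sketch of that squash step (the container and image names here are mine, not from the talk):

# build the trimmed image, run a throwaway container from it, then
# export/import its filesystem so it collapses into a single layer
docker build -t debian-trimmed .
docker run --name trim debian-trimmed true
docker export trim | docker import - our/debian:jessie
docker rm trim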

REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
debian              7.7                 479215127fa7        10 days ago         85.1 MB
our/debian:jessie   latest              cba1d00c3dc0        1 seconds ago       46.6 MB

Ok, not quite half, but you get the idea.

It’s well worth continuing this exercise, using things like `dpkg --get-selections` to remove anything else you won’t need.
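
For example (the package names below are purely illustrative; check what’s actually installed in your image before purging anything):

# list what dpkg thinks is installed, then purge what the app can live without
dpkg --get-selections | awk '{print $1}'
apt-get purge -y --auto-remove ed whiptail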

Importantly, once you’ve made your smaller base image, you should use it consistently for ALL the containers you use. This means that whenever there are important security fixes, that base image will be downloadable as quickly as possible –  and all your related images can be restarted quickly.

This also means that you do NOT want to squish your images to one or two layers, but rather into some logical set of layers that match your deployment update risks –  a common root base, and then layers based on common infrastructure, and lastly application and customisation layers.
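
As a sketch of that layering (three separate Dockerfiles shown together; the names are placeholders of my own):

# 1. the shared, security-patched base, built once and tagged our/debian:base
FROM debian:7.7
RUN rm -r /usr/share/doc /usr/share/doc-base \
          /usr/share/man /usr/share/locale /usr/share/zoneinfo

# 2. a common infrastructure layer, tagged our/infra:nginx
FROM our/debian:base
RUN apt-get update && apt-get install -y nginx

# 3. the application and customisation layer on top
FROM our/infra:nginx
COPY site.conf /etc/nginx/conf.d/site.conf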

2. Build static binaries –  or not

Building a static binary of your application (in typical `Go` style) makes some things simpler –  but in the end, I’m not really convinced it makes a useful difference.

But in my talk, I did it anyway.

Make a Dockerfile that installs all the tools needed, builds nginx, and then outputs a tar file that is a new build context for another Docker image (and contains the libraries ldd tells us we need):

cat Dockerfile.build-static-nginx | docker build -t build-nginx.static -
docker run --rm build-nginx.static cat /opt/nginx.tar > nginx.tar
cat nginx.tar | docker import - micronginx
docker run --rm -it -p 80:80 micronginx /opt/nginx/sbin/nginx -g "daemon off;"
nginx: [emerg] getpwnam("nobody") failed (2: No such file or directory)

oh. I need more than just libraries?

3. Use inotify to find out what files nginx actually needs!

Use the same image, but start it with Bash –  use that to install and run inotify, and then use `docker exec` to start nginx:

docker run --rm build-nginx.static bash
$ apt-get install -yq inotify-tools iwatch
# inotifywait -rm /etc /lib /usr/lib /var
Setting up watches.  Beware: since -r was given, this may take a while!
Watches established.
/lib/x86_64-linux-gnu/ CLOSE_NOWRITE,CLOSE libnss_files-2.13.so
/lib/x86_64-linux-gnu/ CLOSE_NOWRITE,CLOSE libnss_nis-2.13.so
/lib/x86_64-linux-gnu/ CLOSE_NOWRITE,CLOSE ld-2.13.so
/lib/x86_64-linux-gnu/ CLOSE_NOWRITE,CLOSE libc-2.13.so
/lib/x86_64-linux-gnu/ CLOSE_NOWRITE,CLOSE libnsl-2.13.so
/lib/x86_64-linux-gnu/ CLOSE_NOWRITE,CLOSE libnss_compat-2.13.so
/etc/ OPEN passwd
/etc/ OPEN group
/etc/ ACCESS passwd
/etc/ ACCESS group
/etc/ CLOSE_NOWRITE,CLOSE group
/etc/ CLOSE_NOWRITE,CLOSE passwd
/etc/ OPEN localtime
/etc/ ACCESS localtime
/etc/ CLOSE_NOWRITE,CLOSE localtime

Perhaps it shouldn’t be too surprising that nginx expects to rifle through your user password files when it starts :(

4. Generate a new minimal Dockerfile and tar-file Docker build context, and pass that to a new `docker build`

The trick is that the build container Dockerfile can generate the minimal Dockerfile and tar context, which can then be used to build a new minimal Docker image.

The excerpt from the Dockerfile that does it looks like:


# Add a Dockerfile to the tar file
RUN echo "FROM busybox" > /Dockerfile \
    && echo "ADD * /" >> /Dockerfile \
    && echo "EXPOSE 80 443" >> /Dockerfile \
    && echo 'CMD ["/opt/nginx/sbin/nginx", "-g", "daemon off;"]' >> /Dockerfile

RUN tar cf /opt/nginx.tar \
           /Dockerfile \
           /opt/nginx \
           /etc/passwd /etc/group /etc/localtime /etc/nsswitch.conf /etc/ld.so.cache \
           /lib/x86_64-linux-gnu

This tar file can then be passed on using:

cat nginx.tar | docker build -t busyboxnginx .

Result

Comparing the sizes: our build container is about 1.4 GB, the official nginx image about 100 MB, and our minimal nginx container 21 MB to 24 MB, depending on whether we add busybox to it or not:

REPOSITORY          TAG            IMAGE ID            CREATED              VIRTUAL SIZE
micronginx          latest         52ec332b65fc        53 seconds ago       21.13 MB
nginxbusybox        latest         80a526b043fd        About a minute ago   23.56 MB
build-nginx.static  latest         4ecdd6aabaee        About a minute ago   1.392 GB
nginx               latest         1822529acbbf        8 days ago           91.75 MB

It’s interesting to remember that we rely heavily on `I know this, it’s a UNIX system` – application services can have all sorts of hidden assumptions that won’t be revealed without putting them into more constrained environments.

In the same way that we don’t ship the VM / filesystem of our build server, you should not be shipping the container you’re building from source.

This analysis doesn’t try to restrict nginx to only opening certain network ports, devices, or IPC mechanisms – so there’s more to be done…
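
As a taste of what that could look like (these are standard docker run flags; whether nginx tolerates them without extra tweaking is left as an exercise):

# drop every capability except binding low ports, and make the root filesystem read-only
docker run --rm -p 80:80 \
    --cap-drop ALL --cap-add NET_BIND_SERVICE \
    --read-only \
    micronginx /opt/nginx/sbin/nginx -g "daemon off;"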


Sep 26 2014

Speeding up CPAN module contributions using the Docker language stack images

Tags: devops, Docker, enterprise, new, open source, perl
Sven Dowideit @ 5:11 pm

Docker Inc. just released our first set of programming language images on the Docker Hub. They cover c/c++ (gcc), clojure, go (golang), hy (hylang), java, node, perl, php, python, rails, and ruby.

As I need to do some work on API testing when I come back from holidays, I thought I’d look at the Net::Docker CPAN module – and of course, there is no Perl on my Boot2Docker image, so it’s a perfect opportunity to see what I should do.

After forking and cloning the Git repository, I created the following initial Dockerfile:


FROM perl:5.20
MAINTAINER Sven Dowideit

COPY . /docker-perl
WORKDIR /docker-perl

RUN cpanm --installdeps .
RUN perl Build.PL
RUN ./Build build
RUN ./Build test

It fails to build during the ‘test’ step:


$ docker build -t docker-perl .

... snip ...

Step 6 : RUN ./Build test
---> Running in 367afe04c77e
Can't open socket var/run/docker.sock: No such file or directory at /usr/local/lib/perl5/site_perl/5.20.0/LWP/Protocol/http/SocketUnixAlt.pm line 27. at t/docker-api.t line 9.
# Tests were run but no plan was declared and done_testing() was not seen.
# Looks like your test exited with 255 just after 1.
t/docker-api.t ....
Dubious, test returned 255 (wstat 65280, 0xff00)
All 1 subtests passed
Can't locate IO/String.pm in @INC (you may need to install the IO::String module) (@INC contains: /docker-perl/blib/arch /docker-perl/blib/lib /usr/local/lib/perl5/site_perl/5.20.0/x86_64-linux /usr/local/lib/perl5/site_perl/5.20.0 /usr/local/lib/perl5/5.20.0/x86_64-linux /usr/local/lib/perl5/5.20.0 .) at t/docker-start.t line 3.
BEGIN failed--compilation aborted at t/docker-start.t line 3.
t/docker-start.t ..
Dubious, test returned 2 (wstat 512, 0x200)
No subtests run

Test Summary Report
-------------------
t/docker-api.t (Wstat: 65280 Tests: 1 Failed: 0)
Non-zero exit status: 255
Parse errors: No plan found in TAP output
t/docker-start.t (Wstat: 512 Tests: 0 Failed: 0)
Non-zero exit status: 2
Parse errors: No plan found in TAP output
Files=2, Tests=1, 0 wallclock secs ( 0.02 usr 0.00 sys + 0.21 cusr 0.03 csys = 0.26 CPU)
Result: FAIL
2014/09/26 16:08:19 The command [/bin/sh -c ./Build test] returned a non-zero code: 1

I’m going to have to give this Dockerfile a DOCKER_HOST setting (incorrectly using http://, pointing at one of my insecure plain-text TCP servers :), and add IO::String and JSON::XS to the cpanfile.

Unfortunately, because cpanm --installdeps . uses the files in the build context, this way does not use the build cache – so it’s slow. It’s worth duplicating the contents of the cpanfile as individual cpanm RUN steps before the COPY instruction, for speed.

So the working Dockerfile looks like:


FROM perl:5.20
MAINTAINER Sven Dowideit

RUN cpanm Module::Build::Tiny
RUN cpanm Moo
#', '1.002000';
RUN cpanm JSON
RUN cpanm JSON::XS
RUN cpanm LWP::UserAgent
RUN cpanm LWP::Protocol::http::SocketUnixAlt
RUN cpanm URI
RUN cpanm AnyEvent
RUN cpanm AnyEvent::HTTP
RUN cpanm IO::String

COPY . /docker-perl
WORKDIR /docker-perl

RUN cpanm --installdeps .
RUN perl Build.PL
RUN ./Build build

# This is a terrible cheat.
ENV DOCKER_HOST http://10.10.10.4:2375

RUN ./Build test
RUN ./Build install

CMD ["docker.pl", "ps"]

and then docker build -t docker-perl . results in:


bash-3.2$ docker build -t docker-perl .
Sending build context to Docker daemon 138.8 kB
Sending build context to Docker daemon
Step 0 : FROM perl:5.20
---> 4d4674548e76
Step 1 : MAINTAINER Sven Dowideit
---> Using cache
---> 4ad0946e76aa
Step 2 : RUN cpanm Module::Build::Tiny
---> Using cache
---> f1b94d36a51c
Step 3 : RUN cpanm Moo
---> Using cache
---> 98de8c3a19a8
Step 4 : RUN cpanm JSON
---> Using cache
---> 73debd4ee367
Step 5 : RUN cpanm JSON::XS
---> Using cache
---> 89378a425f0b
Step 6 : RUN cpanm LWP::UserAgent
---> Using cache
---> 252fe329cf22
Step 7 : RUN cpanm LWP::Protocol::http::SocketUnixAlt
---> Using cache
---> a77d289faf19
Step 8 : RUN cpanm URI
---> Using cache
---> 6804b418778d
Step 9 : RUN cpanm AnyEvent
---> Using cache
---> c595f66bcf73
Step 10 : RUN cpanm AnyEvent::HTTP
---> Using cache
---> 31b25b2da3c4
Step 11 : RUN cpanm IO::String
---> Using cache
---> e54cd3d01988
Step 12 : COPY . /docker-perl
---> 4d4801209a79
Removing intermediate container c42897136186
Step 13 : WORKDIR /docker-perl
---> Running in 36575a59e465
---> 7042c67cf1b7
Removing intermediate container 36575a59e465
Step 14 : RUN cpanm --installdeps .
---> Running in c1b5cbb75c4a
--> Working on .
Configuring Net-Docker-0.002005 ... OK
<== Installed dependencies for .. Finishing.
---> 071f9caca472
Removing intermediate container c1b5cbb75c4a
Step 15 : RUN perl Build.PL
---> Running in fae9bbce142f
Creating new 'Build' script for 'Net-Docker' version '0.002005'
---> 2800182bd0ff
Removing intermediate container fae9bbce142f
Step 16 : RUN ./Build build
---> Running in a98cb6c7a808
cp lib/Net/Docker.pm blib/lib/Net/Docker.pm
cp script/docker.pl blib/script/docker.pl
---> f5ba5be85f9d
Removing intermediate container a98cb6c7a808
Step 17 : ENV DOCKER_HOST http://10.10.10.4:2375
---> Running in 1e8b3273974c
---> fffb42d69011
Removing intermediate container 1e8b3273974c
Step 18 : RUN ./Build test
---> Running in 3baacccbf17e
t/docker-api.t .... ok
t/docker-start.t .. ok
All tests successful.
Files=2, Tests=41, 5 wallclock secs ( 0.02 usr 0.02 sys + 0.26 cusr 0.06 csys = 0.36 CPU)
Result: PASS
---> f5d371cdc1fa
Removing intermediate container 3baacccbf17e
Step 19 : RUN ./Build install
---> Running in 60cd90714e02
Installing /usr/local/lib/perl5/site_perl/5.20.0/Net/Docker.pm
Installing /usr/local/bin/docker.pl
---> 62c6368a2fb0
Removing intermediate container 60cd90714e02
Step 20 : CMD ["docker.pl", "ps"]
---> Running in cb5ade11e146
---> 94984ed5756d
Removing intermediate container cb5ade11e146
Successfully built 94984ed5756d

So now I can use it:


bash-3.2$ docker run --rm -it docker-perl
ID IMAGE COMMAND CREATED STATUS PORTS
e619112eae2f 10.10.10.2:5001/sve bash 1411104597 Up 7 days ARRAY(0x2b84a48)
363ec1c45841 10.10.10.2:5001/sve bash 1411104470 Up 7 days ARRAY(0x29bae20)

You can also run the container with bash – `docker run --rm -it docker-perl bash` – so you can do some more testing, or try out more complex examples.

In this case, the `./Build test` step probably needs to happen in the `docker run` phase, as it needs access to a working Docker daemon – this issue will be true for modules that talk to external resources.
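
A sketch of what that could look like, assuming the ENV DOCKER_HOST cheat is removed from the Dockerfile and the host’s socket is handed to the container instead:

# run the test suite at `docker run` time, against the host's Docker daemon
docker run --rm -it \
    -v /var/run/docker.sock:/var/run/docker.sock \
    docker-perl ./Build test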

I’ve made a pull request for the tiny changes to get me this far. Perhaps Dockerfiles like this could be a gateway into the world of contributing quick fixes for open source libraries.


Mar 31 2014

Docker, containers and simplicity.

Tags: devops, Docker, enterprise, new, redhat, RPM, virtualisation, windows
Sven Dowideit @ 10:25 pm

I’ve now been working for Docker Inc. for 2 months. My primary role is Enterprise Support Engineer: I’m one of the guys that your company can turn to when the going gets tough, for training, or just generally to ask questions.

In these months, I’ve been working on Boot2Docker (OSX, Windows installers), our Documentation, and generally helping users come to terms with the broad spectrum of effects that Docker has on developing, managing and thinking about software components.

I’m still trying to work out ways to explain what Docker does – this is March’s version:

Virtual machines emulate complete computers, so you setup, maintain and run a complete Operating System, and copy around complete monolithic filesystem images.
Docker Containers emulate Operating Systems, allowing you to build, manage and run applications and services. And you copy around your application, data and configurations.

This might not quite feel right, given that images are built ‘FROM’ a base image – but one thought I have is that, since that base image (and most often some local modifications) is likely to be common to your entire infrastructure, that layer will be shared by all your containers. Chances are, you didn’t build it either – Tianon did :).

Solomon keeps reminding me that Dockerfiles are like Makefiles – and in the back of my mind, I think of our application image layers as packages, thin wrappers around applications that are then orchestrated together to produce your service. The base image you choose is only there to support that, and over time I’m sure we’ll simplify those much more.


Nov 27 2013

Docker 0.7 is here – welcome RPM distros (and anyone else that lacks AUFS)

The Docker project has continued its mostly-monthly releases with the long-anticipated 0.7 release, this time making the storage backend pluggable, so Fedora/Red Hat based users can use it without building a custom kernel.

I’m curious to see the performance differences between the 3 storage backends we have now – but I need to assimilate the wonders of Linking containers for ad-hoc scaling first.

Try it out – I’m even more convinced that Docker containers have an interesting future :)


Oct 15 2013

easy install of Docker.io on Debian

Tags: debian, devops, Docker, new, virtualisation
Sven Dowideit @ 9:05 pm

UPDATE: as of Docker 0.6.5, the Ubuntu Debian package also installs on Debian. You still need to enable IPv4 forwarding as below – then re-start the docker daemon.

I’ve been doing some work on Docker – learning golang, Docker internals, and just some of the command line options that I didn’t know I needed to know about.

Because I was in a hurry, I threw an old unused disk into one of my old laptops and installed Ubuntu. That was enough for me to learn that I wanted to know a lot more about Docker.

So, I’m back to using the loaner T530 with my 128GB SSD in it – it’s been running Debian since the day I got the SSD, over 2 years ago.

It turns out that on Debian testing (with the 3.10-3-amd64 kernel), it’s incredibly easy to run Docker:

sudo apt-get install lxc wget bsdtar curl golang git aufs-tools mercurial iptables
wget --output-document=docker https://get.docker.io/builds/Linux/x86_64/docker-latest
chmod +x docker
sudo su -
#enable IPv4 forwarding
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
sysctl -p /etc/sysctl.conf
# set up and mount the cgroup mountpoint
echo 'none /sys/fs/cgroup cgroup defaults 0 0' | sudo tee -a /etc/fstab
mount /sys/fs/cgroup
#OK, you might need to reboot if it fails to mount?
./docker -d &

Done.
From there, you can run the docker CLI as normal (except that it’s not in your path yet).
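
A quick smoke test looks something like this (the image and command are just examples):

# pull an image and run a throwaway container from it
./docker pull ubuntu
./docker run -i -t ubuntu /bin/echo hello from a container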

I’m going to pull over the apt pinning installation documentation I wrote for publican the other week and re-write it (and test) for installing docker on Debian Stable, and we’ll all be much happier.


Aug 21 2013

I wonder if Docker can replace Puppet.

I’ve finally spent a little time playing with Docker, and to be honest, the really simple

here’s a list of commands that get run to set up the image

feels awesome.

To test it out, I wrote the simplest steps I could think of to create a working foswiki installation into a Dockerfile:

FROM ubuntu
MAINTAINER    Sven Dowideit <svendowideit@home.org.au>

RUN echo deb http://fosiki.com/Foswiki_debian/ stable main contrib > /etc/apt/sources.list.d/fosiki.list
RUN echo deb http://archive.ubuntu.com/ubuntu precise main restricted universe multiverse >> /etc/apt/sources.list
RUN gpg --keyserver the.earth.li --recv-keys 379393E0AAEE96F6
RUN apt-key add //.gnupg/pubring.gpg
RUN apt-get update
RUN apt-get install -y foswiki

#create the tmp dir
RUN mkdir /var/lib/foswiki/working/tmp
RUN chmod 777 /var/lib/foswiki/working/tmp
#TODO: randomise the admin pwd..
RUN htpasswd -cb /var/lib/foswiki/data/.htpasswd admin admin
RUN mv /etc/foswiki/LocalSite.cfg /etc/foswiki/LocalSite.cfg.orig
RUN grep --invert-match {Password} /etc/foswiki/LocalSite.cfg.orig > /etc/foswiki/LocalSite.cfg
RUN chown www-data:www-data /etc/foswiki/LocalSite.cfg

RUN bash -c 'echo "/usr/sbin/apachectl start" >> /.bashrc'
RUN bash -c 'echo "echo foswiki configure admin user password is admin" >> /.bashrc'

EXPOSE 80

and then I can create the image with a simple:

docker build -t svendowideit/ubuntu-foswiki .

and run that image by calling:

docker run -t -i -p 8888:80 svendowideit/ubuntu-foswiki /bin/bash

Which (assuming that port 8888 is unused on my host computer) means I can do some testing by pointing my web client to http://localhost:8888/foswiki

When I exit the bash shell, which allows me to debug what is happening, everything is shutdown, and all changes are lost. If I make changes, I can commit them, but at this point, I prefer to make a new Dockerfile.
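
For completeness, the commit route looks something like this (the container id is a placeholder):

# find the stopped container, then snapshot its filesystem changes as a new image
docker ps -a
docker commit <container_id> svendowideit/ubuntu-foswiki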

The interesting thing is that Docker seems to create an image tag for every command, so if I add some RUN lines, or make changes, it doesn’t need to re-do steps that it has done before… which sounds to me just like Rex, Puppet, Ansible etc – but more re-usable.

And so, I’m curious to see how hard it would be to push out Docker versioned configuration changesets over ssh to ‘anywhere’, with some kind of idempotency via system ‘tags’.

 

PS, the Docker image is available from https://index.docker.io/u/svendowideit/ubuntu-foswiki/ , and uses my Debian packages, so you should install new extensions using apt-get install.

