Dec 09 2013

Boot2Docker for orchestration.

Tag: devops, Docker, enterprise, open source, virtualisation
Sven Dowideit @ 12:35 pm

Imagine that you have a PXE / USB boot setup that boots in seconds and, as its last step, starts a privileged Docker image tagged ‘dom0:latest’.

Most importantly, each of these servers would then ‘docker pull’ and ‘docker run’ the services and applications it provides – without modifying the base OS or ‘dom0’.

It also gives the server farm admin the opportunity to make a customised ‘dom0:latest’ image – the farm’s own ‘dom0:latest’ – thus orchestrating the inner container configurations further.
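As a rough sketch (the image and service names here are hypothetical, the options assume Docker 0.7’s single-dash syntax, and it assumes the dom0 container can reach a Docker daemon), the whole arrangement boils down to a handful of docker commands:

# last boot step on the base OS: fetch and start the farm's dom0 container
docker pull dom0:latest
docker run -d -privileged -name dom0 dom0:latest

# inside dom0: pull and run the services this particular server provides
docker pull myfarm/webapp
docker run -d -name webapp -p 80:80 myfarm/webapp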

 

I’m using boot2docker on my ‘docker-server’ (an old X61 ThinkPad), using the built-in hard drive to store the Docker images that are run on it. I’m completely avoiding modifying the base OS and doing all the work in containers – enabling very fast run and teardown for development and testing.

Now if only ‘turnbull-net’ weren’t totally useless for pulling images (gee, thanks Malcolm Turnbull, 1MB ADSL with constant dropouts in the middle of Brisbane is sooooooo conducive to working).

 


Dec 05 2013

Docker container network portability

Tag: devops, Docker, open source, virtualisation
Sven Dowideit @ 4:18 pm

Rather than hardcoding network links between a service consumer and provider, Docker encourages service portability.

e.g. instead of two containers talking directly to each other:

(consumer) --> (redis)

requiring you to restart the consumer to attach it to a different redis service, you can add ambassador containers:

(consumer) --> (redis-ambassador) --> (redis)

or

(consumer) --> (redis-ambassador) ---network---> (redis-ambassador) --> (redis)

When you need to rewire your consumer to talk to a different redis server, you can just restart the redis-ambassador container that the consumer is connected to.

This pattern also allows you to transparently move the redis server to a different docker host from the consumer.

Using the svendowideit/ambassador container, the link wiring is controlled entirely from the docker run parameters.

Two-host example

Start the actual redis server on one Docker host:

big-server $ docker run -d -name redis crosbymichael/redis

Then add an ambassador linked to the redis server, mapping a port to the outside world

big-server $ docker run -d -link redis:redis -name redis_ambassador -p 6379:6379 svendowideit/ambassador

On the other host, you can set up another ambassador, setting an environment variable for each remote port you want to proxy to the big-server:

client-server $ docker run -d -name redis_ambassador -expose 6379 -e REDIS_PORT_6379_TCP=tcp://192.168.1.52:6379 svendowideit/ambassador

Then on the client-server host, you can use a redis client container to talk to the remote redis server, just by linking to the local redis ambassador.

client-server $ docker run -i -t -rm -link redis_ambassador:redis relateiq/redis-cli
redis 172.17.0.160:6379> ping
PONG
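To rewire the consumer to a different redis server, as described earlier, you would just replace the client-side ambassador and point it at the new backend (the new IP here is made up):

client-server $ docker stop redis_ambassador
client-server $ docker rm redis_ambassador
client-server $ docker run -d -name redis_ambassador -expose 6379 -e REDIS_PORT_6379_TCP=tcp://192.168.1.99:6379 svendowideit/ambassador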

How it works

The following example shows what the svendowideit/ambassador container does automatically (with a tiny amount of sed):

On the docker host (192.168.1.52) that redis will run on:

# start actual redis server
$ docker run -d -name redis crosbymichael/redis

# get a redis-cli container for connection testing
$ docker pull relateiq/redis-cli

# test the redis server by talking to it directly
$ docker run -t -i -rm -link redis:redis relateiq/redis-cli
redis 172.17.0.136:6379> ping
PONG
^D

# add redis ambassador
$ docker run -t -i -link redis:redis -name redis_ambassador -p 6379:6379 busybox sh

In the redis_ambassador container, you can see the linked redis container’s environment:

$ env
REDIS_PORT=tcp://172.17.0.136:6379
REDIS_PORT_6379_TCP_ADDR=172.17.0.136
REDIS_NAME=/redis_ambassador/redis
HOSTNAME=19d7adf4705e
REDIS_PORT_6379_TCP_PORT=6379
HOME=/
REDIS_PORT_6379_TCP_PROTO=tcp
container=lxc
REDIS_PORT_6379_TCP=tcp://172.17.0.136:6379
TERM=xterm
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/

This environment is used by the ambassador’s socat script to expose redis to the world (via the -p 6379:6379 port mapping):

$ docker rm redis_ambassador
$ sudo ./contrib/mkimage-unittest.sh
$ docker run -t -i -link redis:redis -name redis_ambassador -p 6379:6379 docker-ut sh

$ socat TCP4-LISTEN:6379,fork,reuseaddr TCP4:172.17.0.136:6379

Then ping the redis server via the ambassador.

Now go to a different server:

$ sudo ./contrib/mkimage-unittest.sh
$ docker run -t -i  -expose 6379 -name redis_ambassador docker-ut sh

$ socat TCP4-LISTEN:6379,fork,reuseaddr TCP4:192.168.1.52:6379

And get the redis-cli image so we can talk over the ambassador bridge:

$ docker pull relateiq/redis-cli
$ docker run -i -t -rm -link redis_ambassador:redis relateiq/redis-cli
redis 172.17.0.160:6379> ping
PONG

The svendowideit/ambassador Dockerfile

The svendowideit/ambassador image is a small busybox image with socat built in. When you start the container, it uses a small sed script to parse out the (possibly multiple) link environment variables to set up the port forwarding. On the remote host, you need to set the variable using the -e command line option.

-expose 1234 -e REDIS_PORT_1234_TCP=tcp://192.168.1.52:6379 will forward local port 1234 to the remote IP and port – in this case 192.168.1.52:6379.
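For example, the full client-side command corresponding to those options would look like this (the local port 1234 is just an illustration):

client-server $ docker run -d -name redis_ambassador -expose 1234 -e REDIS_PORT_1234_TCP=tcp://192.168.1.52:6379 svendowideit/ambassador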

#
#
# first you need to build the docker-ut image using ./contrib/mkimage-unittest.sh
# then
#   docker build -t SvenDowideit/ambassador .
#   docker tag SvenDowideit/ambassador ambassador
# then to run it (on the host that has the real backend on it)
#   docker run -t -i -link redis:redis -name redis_ambassador -p 6379:6379 ambassador
# on the remote host, you can set up another ambassador
#    docker run -t -i -name redis_ambassador -expose 6379 sh

FROM    docker-ut
MAINTAINER      SvenDowideit@home.org.au

CMD     env | grep _TCP= | sed 's/.*_PORT_\([0-9]*\)_TCP=tcp:\/\/\(.*\):\(.*\)/socat TCP4-LISTEN:\1,fork,reuseaddr TCP4:\2:\3 \&/'  | sh && top
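With the environment shown earlier (REDIS_PORT_6379_TCP=tcp://172.17.0.136:6379), that CMD pipeline generates and runs exactly one line, then keeps the container in the foreground with top:

socat TCP4-LISTEN:6379,fork,reuseaddr TCP4:172.17.0.136:6379 &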

(this is pull request https://github.com/dotcloud/docker/pull/3038, so it will eventually find its way into the Docker documentation)

Nov 27 2013

Docker 0.7 is here – welcome RPM distros (and anyone else that lacks AUFS)

The Docker project has continued its mostly-monthly releases with the long-anticipated 0.7 release, this time making the storage backend pluggable, so Fedora/Red Hat based users can use it without building a custom kernel.

I’m curious to see the performance differences between the three storage backends we have now – but I need to assimilate the wonders of linking containers for ad-hoc scaling first.

Try it out – I’m even more convinced that Docker containers have an interesting future :)


Oct 15 2013

easy install of Docker.io on Debian

Tag: debian, devops, Docker, new, virtualisation
Sven Dowideit @ 9:05 pm

UPDATE: for Docker 0.6.5, the Ubuntu deb package also installs on Debian. You still need to enable IPv4 forwarding as below, then restart the docker daemon.

I’ve been doing some work on Docker – learning golang, Docker internals, and just some of the command line options that I didn’t know I needed to know about.

Because I was in a hurry, I threw an old unused disk into one of my old laptops and installed Ubuntu. That was enough for me to learn that I wanted to know a lot more about Docker.

So I’m back to using the loaner T530 with my 128GB SSD in it – it’s been running Debian since the day I got the SSD, over 2 years ago.

It turns out that on Debian testing (with the 3.10-3-amd64 kernel), it’s incredibly easy to run Docker:

sudo apt-get install lxc wget bsdtar curl golang git aufs-tools mercurial iptables
wget --output-document=docker https://get.docker.io/builds/Linux/x86_64/docker-latest
chmod +x docker
sudo su -
#enable IPv4 forwarding
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
sysctl -p /etc/sysctl.conf
# set up and mount the cgroup mountpoint
echo 'none /sys/fs/cgroup cgroup defaults 0 0' | sudo tee -a /etc/fstab
mount /sys/fs/cgroup
#OK, you might need to reboot if it fails to mount?
./docker -d &

Done.
From there, you can run the docker CLI as normal (except that it’s not in your path yet).
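For example, still in that root shell, a quick sanity check (the image choice is arbitrary):

./docker version
./docker run -i -t ubuntu /bin/bash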

I’m going to pull over the apt pinning installation documentation I wrote for Publican the other week and re-write (and test) it for installing Docker on Debian stable, and we’ll all be much happier.


Aug 21 2013

I wonder if Docker can replace Puppet.

I’ve finally spent a little time playing with Docker, and to be honest, the really simple

“here’s a list of commands that get run to set up the image”

approach feels awesome.

To test it out, I wrote the simplest steps I could think of to create a working Foswiki installation into a Dockerfile:

FROM ubuntu
MAINTAINER    Sven Dowideit <svendowideit@home.org.au>

RUN echo deb http://fosiki.com/Foswiki_debian/ stable main contrib > /etc/apt/sources.list.d/fosiki.list
RUN echo deb http://archive.ubuntu.com/ubuntu precise main restricted universe multiverse >> /etc/apt/sources.list
RUN gpg --keyserver the.earth.li --recv-keys 379393E0AAEE96F6
RUN apt-key add //.gnupg/pubring.gpg
RUN apt-get update
RUN apt-get install -y foswiki

#create the tmp dir
RUN mkdir /var/lib/foswiki/working/tmp
RUN chmod 777 /var/lib/foswiki/working/tmp
#TODO: randomise the admin pwd..
RUN htpasswd -cb /var/lib/foswiki/data/.htpasswd admin admin
RUN mv /etc/foswiki/LocalSite.cfg /etc/foswiki/LocalSite.cfg.orig
RUN grep --invert-match {Password} /etc/foswiki/LocalSite.cfg.orig > /etc/foswiki/LocalSite.cfg
RUN chown www-data:www-data /etc/foswiki/LocalSite.cfg

RUN bash -c 'echo "/usr/sbin/apachectl start" >> /.bashrc'
RUN bash -c 'echo "echo foswiki configure admin user password is 'admin'" >> /.bashrc'

EXPOSE 80

and then I can create the image with a simple:

docker build -t svendowideit/ubuntu-foswiki .

and run that image by calling:

docker run -t -i -p 8888:80 svendowideit/ubuntu-foswiki /bin/bash

Which (assuming that port 8888 is unused on my host computer) means I can do some testing by pointing my web client at http://localhost:8888/foswiki.
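For a quick check from the host (assuming curl is installed), something like:

curl -I http://localhost:8888/foswiki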

When I exit the bash shell (which allows me to debug what is happening), everything is shut down and all changes are lost. If I make changes, I can commit them, but at this point I prefer to make a new Dockerfile.

The interesting thing is that Docker seems to create an image tag for every command, so if I add some RUN lines or make changes, it doesn’t need to re-do steps that it has done before… which sounds to me just like Rex, Puppet, Ansible etc – but more re-usable.

And so, I’m curious to see how hard it would be to push out Docker versioned configuration changesets over ssh to ‘anywhere’, with some kind of idempotency via system ‘tags’.

 

PS: the Docker image is available from https://index.docker.io/u/svendowideit/ubuntu-foswiki/ and uses my Debian packages, so you should install new extensions using apt-get install.


Jun 05 2013

inkChilli wiki to Sharepoint migration tool

Tag: enterprise, foswiki, sharepoint, twiki, wiki, windows
Sven Dowideit @ 11:17 am

Two months ago I was asked for help migrating a company’s TWiki site to Sharepoint, and then again a few days later by someone whose company had declared that they needed to shift their Foswiki information to Sharepoint too.

In response, I spent those two months researching much of Microsoft’s undocumented APIs and implementing a tool to do the job.

If you need to convert from TWiki or Foswiki to Sharepoint (2007, 2010, 2012 and Office 365), please head over to http://inkChilli.com, or email me.

 

 

