Wednesday, July 16, 2014

Running NetflixOSS and Cloud Fabric on Docker

Previously I had set up a Docker environment to run an IBM Cloud Services Fabric prototype powered in part by NetflixOSS.  Andrew has a lot of great posts on these topics.

It is time, however, for me to refresh my environment.  I'll use the public Git repository that was open sourced earlier this year.  In this post I'll simply capture my experiences. All of this is well documented in Andrew's YouTube video, the Git repository readme and on Andrew's blog.

Conclusions 

This was really easy based on the instructions in the Git repository.  Total time was approximately 4 hours, but the majority of that was waiting for the Docker images to build.  It would be good to restructure the build layers so that this is faster when making changes, and it would also be good to publish the images on Docker Hub.

While the times to detect and recover were much longer than expected, working within this environment is a lot of fun, and it is very interesting to be able to test reasonably complex scenarios on distributed environments.  Very valuable as a producer of micro-services packaged within Docker containers.

I look forward to scenarios that allow me to test locally like this, then pass an application container to a pipeline that tests and deploys new versions of the application into a production environment that supports these Cloud Fabric and NetflixOSS services. 

Setup

Review topology

Setup docker virtual machine 

I set up a clean Docker environment so that I can compare this environment to my previous one.  To do this I'll simply use Vagrant to stand up a virtual machine.
$ vagrant up
$ vagrant halt
I updated my script to forward all the ports Docker containers like to use to the virtual machine. In VirtualBox I changed the settings to 4096 MB of memory and 4 cores, then restarted the virtual machine.
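The forwarding itself boils down to a loop over VBoxManage NAT rules. Here is a rough sketch of that script; the VM name and port range are placeholders for illustration, so adjust them for your own setup:

#!/bin/sh
# Sketch of the port-forwarding script.  Run while the VM is halted.
# VM_NAME and the 49153-49900 range are assumptions for this example.
VM_NAME="vagrant-docker-vm"
for port in $(seq 49153 49900); do
  VBoxManage modifyvm "$VM_NAME" --natpf1 "docker-$port,tcp,,$port,,$port"
done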
$ vagrant ssh

Pull git repository and build containers 

The Git repository lives here. On my Docker virtual machine:
$ mkdir acme-air
$ git clone https://github.com/EmergingTechnologyInstitute/acmeair-netflixoss-dockerlocal.git 
$ cd acmeair-netflixoss-dockerlocal/bin
I reviewed env.sh to make sure it mapped to my environment, and updated id_rsa.pub and id_rsa with my SSH keys.
$ ./acceptlicenses.sh
$ ./buildimages.sh
This took a long time ... like a couple of hours.

Starting the containers 

There is a useful script for starting the containers which only took about a minute to run.
$ ./startminimum.sh
 

Basic testing  

Open up two terminals to the Docker virtual machine: one to monitor the instances and the other to run tests.
Monitor Health Manager:
$ ./showhmlog.sh
Welcome to Ubuntu 14.04 LTS (GNU/Linux 3.13.0-27-generic x86_64)
D, [2014-07-17T15:56:11.156272 #16] DEBUG -- : HM ---> target=1, actual=1, stalled=0 asg=acmeair_webapp account=user01
D, [2014-07-17T15:56:11.157862 #16] DEBUG -- : HM ---> target=1, actual=1, stalled=0 asg=acmeair_auth_service account=user01

Execute some basic validation tests
vagrant@vagrant-ubuntu-trusty-64:~/acme-air/acmeair-netflixoss-dockerlocal/bin$ ./testauth.sh 
200 HTTP://172.17.0.30/rest/api/authtoken/byuserid/uid0@email.com
id=71600f08-c220-44d2-9004-51ce1d5ffa8a
200 HTTP://172.17.0.30/rest/api/authtoken/71600f08-c220-44d2-9004-51ce1d5ffa8a
vagrant@vagrant-ubuntu-trusty-64:~/acme-air/acmeair-netflixoss-dockerlocal/bin$ ./testwebapp.sh 
200 HTTP://172.17.0.31/rest/api/login
200 HTTP://172.17.0.31/rest/api/customer/byid/uid0@email.com
200 HTTP://172.17.0.31/rest/api/flights/queryflights
200 HTTP://172.17.0.31/rest/api/login/logout?login=uid0@email.com
200 HTTP://172.17.0.31/rest/api/login
By default this is testing directly against a single instance in the autoscaling group.
vagrant@vagrant-ubuntu-trusty-64:~/acme-air/acmeair-netflixoss-dockerlocal/bin$ ./showasginstances.sh 
Pseudo-terminal will not be allocated because stdin is not a terminal.
Welcome to Ubuntu 14.04 LTS (GNU/Linux 3.13.0-27-generic x86_64)
"logging in @http://localhost:56785/asgcc/ user=user01 key=***** ..."
"OK"
"listing instances for autoscaling group: acmeair_auth_service"
INSTANCE_ID  | STATUS  | AVAILABILITY_ZONE | PRIVATE_IP_ADDRESS | HOSTNAME                       
-------------|---------|-------------------|--------------------|--------------------------------
9625eaaa3028 | RUNNING | docker-local-1a   | 172.17.0.30        | acmeair-auth-service-731744efdd

"listing instances for autoscaling group: acmeair_webapp"
INSTANCE_ID  | STATUS  | AVAILABILITY_ZONE | PRIVATE_IP_ADDRESS | HOSTNAME                 
-------------|---------|-------------------|--------------------|--------------------------
e57c9d74c97c | RUNNING | docker-local-1a   | 172.17.0.31        | acmeair-webapp-9023aa3b81
It is also worth noting that the scripts SSH directly into the running containers to get information. So if you are using boot2docker you need to run them from the boot2docker virtual machine, not from the local Mac command line.

Next let's test the web application through the Zuul edge service to make sure that Zuul has gotten the location of the web application from Eureka.
Find the IP address of the Zuul edge service, and then pass that into the basic test script:
vagrant@vagrant-ubuntu-trusty-64:~/acme-air/acmeair-netflixoss-dockerlocal/bin$ ./showipaddrs.sh 
172.17.0.19 skydns
172.17.0.20 skydock
172.17.0.22 cassandra1
172.17.0.25 eureka
172.17.0.26 zuul
172.17.0.27 microscaler
172.17.0.28 microscaler-agent
172.17.0.29 asgard
172.17.0.30 acmeair-auth-service-731744efdd
172.17.0.31 acmeair-webapp-9023aa3b81
vagrant@vagrant-ubuntu-trusty-64:~/acme-air/acmeair-netflixoss-dockerlocal/bin$ ./testwebapp.sh 172.17.0.26
200 HTTP://172.17.0.26/rest/api/login
200 HTTP://172.17.0.26/rest/api/customer/byid/uid0@email.com
200 HTTP://172.17.0.26/rest/api/flights/queryflights
200 HTTP://172.17.0.26/rest/api/login/logout?login=uid0@email.com

Experimenting with the environment 

Testing the health manager and service lookup

Next let's test what happens with auto-scaling groups, failover and recovery. Currently we just have a single instance in the auto-scaling group; we will experiment with more later, and working with a single instance should make cause and effect pretty straightforward. To do this we can start up the containers with auto-scaling groups, run some tests continuously, and then kill the application containers. We should expect that the health manager will detect that a container is no longer responding and will start another instance of the container.

I'll have 4 windows open
  1. window running continuous tests directly against instance
  2. window running continuous tests against zuul
  3. window tailing on the health manager logs
  4. command prompt from which I can monitor and kill containers
I copied testwebapp.sh to conttestwebapp.sh and made a quick adjustment so that it keeps running:
#!/bin/sh

. ./env.sh

webapp_addr=$1

if [ -z "$webapp_addr" ]; then
  webapp_addr=$($docker_cmd ps | grep 'acmeair/webapp' | head -n 1 | cut -d' ' -f1 | xargs $docker_cmd inspect --format '{{.NetworkSettings.IPAddress}}')
fi
while true; do
        sleep 2
        for i in `seq 0 1 10`
        do
         curl -sL -w "%{http_code} %{url_effective}\\n" -o /dev/null -c cookie.txt --data "login=uid0@email.com&password=password" $webapp_addr/rest/api/login
         curl -sL -w "%{http_code} %{url_effective}\\n" -o /dev/null -b cookie.txt $webapp_addr/rest/api/customer/byid/uid0@email.com
         curl -sL -w "%{http_code} %{url_effective}\\n" -o /dev/null -b cookie.txt --data "fromAirport=CDG&toAirport=JFK&fromDate=2014/03/31&returnDate=2014/03/31&oneWay=false" $webapp_addr/rest/api/flights/queryflights
         curl -sL -w "%{http_code} %{url_effective}\\n" -o /dev/null -b cookie.txt $webapp_addr/rest/api/login/logout?login=uid0@email.com
        done
done

Window1: get IP address of web-app instance, and test continuously
$ ./showipaddrs.sh
172.17.0.19 skydns
172.17.0.20 skydock
172.17.0.22 cassandra1
172.17.0.25 eureka
172.17.0.26 zuul
172.17.0.27 microscaler
172.17.0.28 microscaler-agent
172.17.0.29 asgard
172.17.0.30 acmeair-auth-service-731744efdd
172.17.0.33 acmeair-webapp-dec27996fa
$ ./conttestwebapp.sh 172.17.0.33
200 HTTP://172.17.0.33/rest/api/login

Window2: test continuously against zuul
$ ./showipaddrs.sh | grep zuul 
172.17.0.26 zuul
$ ./conttestwebapp.sh 172.17.0.26
200 HTTP://172.17.0.33/rest/api/login

Window3: monitor health manager
$ ./showhmlog.sh 
Pseudo-terminal will not be allocated because stdin is not a terminal.
Welcome to Ubuntu 14.04 LTS (GNU/Linux 3.13.0-27-generic x86_64)

 * Documentation:  https://help.ubuntu.com/
stdin: is not a tty
D, [2014-07-17T16:26:54.349445 #16] DEBUG -- : HM ---> target=1, actual=1, stalled=0 asg=acmeair_webapp account=user01
D, [2014-07-17T16:26:54.351620 #16] DEBUG -- : HM ---> target=1, actual=1, stalled=0 asg=acmeair_auth_service account=user01
D, [2014-07-17T16:27:14.361714 #16] DEBUG -- : HM ---> target=1, actual=1, stalled=0 asg=acmeair_webapp account=user01

Window4: kill something
$ docker ps | grep webapp 
df371d9a62e9        acmeair/webapp-liberty:latest         /usr/bin/supervisord   7 minutes ago       Up 7 minutes        0.0.0.0:49180->22/tcp, 0.0.0.0:49181->9080/tcp, 0.0.0.0:49182->9443/tcp   acmeair-webapp-dec27996fa         

$ docker stop df371d9a62e9 
df371d9a62e9

$ docker rm df371d9a62e9
df371d9a62e9
Now we can see the instance become stale in the health manager logs, the tests failing and a new instance starting.


After a short time the new instance will be up and running and the tests will begin to pass. The direct tests against the web application will need to be redirected to the new instance, as the IP address has changed, while the tests against Zuul will automatically pick up the new instance and begin to pass again. 


Pretty neat.  The NetflixOSS libraries picked up that the instance was no longer running and started another one using the micro-scaler for Docker.

Default detection and recovery times

Killing webapp 

  • ~15 seconds for health manager to detect  stale instance
  • ~29 seconds for the health manager to successfully start a new instance 
  • ~30 seconds for direct tests to pass 
  • ~2 minutes 10 seconds for zuul tests to pass indicating it has a new reference to the new instance of the web application
Killing authentication service 
Very similar test.  In this case I killed the authentication service and ran tests against the authentication service directly, while at the same time running webapp tests against Zuul.  As expected I saw a new container started, and the web application tests (dependent upon auth) fail for some period of time until the webapp detects that a new instance of the authentication service is available.
  • ~15 seconds for HM to detect the stale instance 
  • ~30 seconds for the HM to start a new auth instance 
  • ~55 seconds until the new auth service is successfully responding to tests 
  • ~1 minute 43 seconds for the web application tests (via Zuul) to pass, indicating that it had successfully picked up a new reference to the authentication service via Eureka

Working with auto-scaling groups via asgard 

In this post I won't explore too many scenarios, but I would like to make sure that the environment is set up and functioning correctly. Asgard provides the interface to manage auto-scaling groups (which are scaled using the micro-scaler). So let's have a look at the Asgard console and play around.
Find the IP address of Asgard, and quickly test that I can access it:
$ ./showipaddrs.sh | grep asgard 
172.17.0.12 asgard

$ curl -sL -w "%{http_code}" -o /dev/null http://172.17.0.12
200
I can now open up a web browser and navigate to Asgard to see my auto-scaling groups. However, I do not currently appear to be able to modify the existing auto-scaling groups in the web UI.

 

Working with auto-scaling groups via command line

Because I am currently unable to update the auto-scaling groups via Asgard, I'll borrow from the command line scripts and configure them directly on the micro-scaler. I created a simple script which logs in to the micro-scaler via SSH and updates the min, max and desired size of the auto-scaling groups.
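The script is roughly the sketch below. The asgcc invocation inside the microscaler container is a placeholder for whatever the real CLI accepts, so treat it as illustrative only:

#!/bin/sh
# updateasg.sh - sketch only
. ./env.sh

MIN=1; MAX=4; DESIRED=1; ASG=acmeair_webapp
while getopts "m:x:d:a:" opt; do
  case $opt in
    m) MIN=$OPTARG ;;
    x) MAX=$OPTARG ;;
    d) DESIRED=$OPTARG ;;
    a) ASG=$OPTARG ;;
  esac
done
echo "setting MIN=$MIN, MAX=$MAX and DESIRED=$DESIRED"

# find the microscaler container and update the ASG over SSH;
# the asgcc arguments below are a guess, not verified against the repo
ms_addr=$($docker_cmd inspect --format '{{.NetworkSettings.IPAddress}}' microscaler)
ssh -i id_rsa root@$ms_addr \
  "asgcc update-asg --name $ASG --min-size $MIN --max-size $MAX --desired-capacity $DESIRED"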
 
$ ./updateasg.sh -m 3 -x 6 -d 4 -a acmeair_auth_service 
setting MIN=3, MAX=6 and DESIRED=4
{"name"=>"acmeair_auth_service", "min_size"=>3, "max_size"=>6, "desired_capacity"=>4}
"updating autoscaling group {\"name\"=>\"acmeair_auth_service\", \"min_size\"=>3, \"max_size\"=>6, \"desired_capacity\"=>4} ..."
"OK"


$ ./updateasg.sh -m 1 -x 4 -d 2 
setting MIN=1, MAX=4 and DESIRED=2
{"name"=>"acmeair_webapp", "min_size"=>1, "max_size"=>4, "desired_capacity"=>2}
"updating autoscaling group {\"name\"=>\"acmeair_webapp\", \"min_size\"=>1, \"max_size\"=>4, \"desired_capacity\"=>2} ..."
"OK"


$ ./showasginstances.sh 
"listing instances for autoscaling group: acmeair_auth_service"
INSTANCE_ID  | STATUS  | AVAILABILITY_ZONE | PRIVATE_IP_ADDRESS | HOSTNAME                       
-------------|---------|-------------------|--------------------|--------------------------------
4fc7096f16db | RUNNING | docker-local-1a   | 172.17.0.13        | acmeair-auth-service-68810614db
07ec4db6ee74 | RUNNING | docker-local-1a   | 172.17.0.17        | acmeair-auth-service-c181cfb84d
df72cc31d9a7 | RUNNING | docker-local-1a   | 172.17.0.21        | acmeair-auth-service-783abcf2d8
cf7683e75d22 | RUNNING | docker-local-1a   | 172.17.0.22        | acmeair-auth-service-1575c11f44

"listing instances for autoscaling group: acmeair_webapp"
INSTANCE_ID  | STATUS   | AVAILABILITY_ZONE | PRIVATE_IP_ADDRESS | HOSTNAME                 
-------------|----------|-------------------|--------------------|--------------------------
3c9adb9e1c90 | RUNNING  | docker-local-1a   | 172.17.0.19        | acmeair-webapp-60b3d9c19c
ab3bbbb8dc58 | STOPPING | docker-local-1a   | 172.17.0.18        | acmeair-webapp-f4eb98e58b
425fd6f254c2 | STOPPING | docker-local-1a   | 172.17.0.16        | acmeair-webapp-3c41308a4c
7a86ce9cb5cb | RUNNING  | docker-local-1a   | 172.17.0.20        | acmeair-webapp-5e66cb688e

which is now reflected in the Asgard console

Various Problems 

I'm including these as this information may be useful to help others debug and use the containers.

Problem: accessing asgard from the local machine

Typically to access Docker containers I leverage port forwarding. In VirtualBox I set up my network to forward all the ports that Docker maps onto the virtual machine, and then I have the Docker daemon map ports using the -P or -p options when running the container. This way I can access the exposed ports in my container from outside the Docker host. However, since these containers do not expose ports, I'll follow the instructions on setting up a host-only network posted by Andrew. This allows me to directly access the IP addresses of any running containers from my local machine. This is pretty nice and behaves like a small private IaaS cloud. It is, however, not clear to me why not simply expose certain ports in the Docker containers and then leverage port forwarding out of the box. I'll explore this more later.
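The effect of that host-only setup on the Mac side boils down to routing the container subnet via the VM's host-only address. Roughly (the gateway address here is a made-up example, not the one from the instructions):

# route the Docker bridge subnet via the VM's host-only address (example address)
$ sudo route -n add -net 172.17.0.0/16 192.168.100.10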

Problem (open): confusing ports on webapp

It is not clear to me why the webapp container exposes port 9080 when the Liberty server is listening on port 80.
 docker ps 
CONTAINER ID        IMAGE                                 COMMAND                CREATED             STATUS              PORTS                                                                     NAMES
e9e9031f0632        acmeair/webapp-liberty:latest         /usr/bin/supervisord   11 minutes ago      Up 11 minutes       0.0.0.0:49168->22/tcp, 0.0.0.0:49169->9080/tcp, 0.0.0.0:49170->9443/tcp   acmeair-webapp-5b9d28bae2
It is also not clear to me why the Docker containers are not exposing ports. For example, the webapp-liberty container should perhaps have 'EXPOSE 80' in its Dockerfile.

Problem: Containers would not start after docker vm was shut down

After rebooting my machine no containers were running.  Starting a new set of containers failed, as the names were all already taken by the now stopped containers.  I could have started each of the containers again, but it was simpler to remove them and start over.  To remove all my stopped containers:
$ docker rm $(docker ps -a -q)
Now I can run
$ ./startminimum.sh

Problem: No asgs running and incorrect network configuration for my docker host 

This happened because I did not read anything or watch the video fully; if I had, I would have saved myself time (classic). I'm including this in the post as the debug process may itself be interesting to new users of this environment.

After building and starting the containers I would expect to have a set of containers running for the NetflixOSS services, and to have two auto-scaling groups running (one for the webapp, and one for the authentication service). To validate this I'll look at the auto-scaling groups:
vagrant@vagrant-ubuntu-trusty-64:~/acme-air/acmeair-netflixoss-dockerlocal/bin$ ./showasgs.sh 
Pseudo-terminal will not be allocated because stdin is not a terminal.
Welcome to Ubuntu 14.04 LTS (GNU/Linux 3.13.0-27-generic x86_64)

 * Documentation:  https://help.ubuntu.com/
stdin: is not a tty
"logging in @http://localhost:56785/asgcc/ user=user01 key=***** ..."
"OK"
NAME                 | STATE   | AVAILABILITY_ZONES | URL | MIN_SIZE | MAX_SIZE | DESIRED_CAPACITY | LAUNCH_CONFIGURATION
---------------------|---------|--------------------|-----|----------|----------|------------------|---------------------
acmeair_webapp       | started | ["docker-local-... | N/A | 1        | 4        | 1                | acmeair_webapp      
acmeair_auth_service | started | ["docker-local-... | N/A | 1        | 4        | 1                | acmeair_auth_service
which did not seem to match the running containers
vagrant@vagrant-ubuntu-trusty-64:~/acme-air/acmeair-netflixoss-dockerlocal/bin$ docker ps 
CONTAINER ID        IMAGE                              COMMAND                CREATED             STATUS              PORTS                                                NAMES
a45cb9300bc0        acmeair/asgard:latest              /usr/bin/supervisord   8 minutes ago       Up 8 minutes        22/tcp, 80/tcp, 8009/tcp                             asgard              
f9844069ffa0        acmeair/microscaler-agent:latest   /usr/bin/supervisord   8 minutes ago       Up 8 minutes        22/tcp                                               microscaler-agent   
41cc3ce7c103        acmeair/microscaler:latest         /usr/bin/supervisord   9 minutes ago       Up 9 minutes        22/tcp                                               microscaler         
a9bb6e8b86de        acmeair/zuul:latest                /usr/bin/supervisord   9 minutes ago       Up 9 minutes        22/tcp, 80/tcp, 8009/tcp                             zuul                
1b43bfb7aa49        acmeair/eureka:latest              /usr/bin/supervisord   9 minutes ago       Up 9 minutes        22/tcp, 80/tcp, 8009/tcp                             eureka              
2a2b68aeae1a        acmeair/cassandra:latest           /usr/bin/supervisord   10 minutes ago      Up 10 minutes       22/tcp                                               cassandra1          
276039ec2255        crosbymichael/skydock:latest       /go/bin/skydock -ttl   10 minutes ago      Up 10 minutes                                                            skydock             
c26630a74fa5        crosbymichael/skydns:latest        skydns -http 0.0.0.0   10 minutes ago      Up 10 minutes       172.17.42.1:53->53/udp, 172.17.42.1:8080->8080/tcp   skydns              
so I'll check the health manager to see if the asgs were started as expected
vagrant@vagrant-ubuntu-trusty-64:~/acme-air/acmeair-netflixoss-dockerlocal/bin$ ./showhmlog.sh 
Pseudo-terminal will not be allocated because stdin is not a terminal.
Welcome to Ubuntu 14.04 LTS (GNU/Linux 3.13.0-27-generic x86_64)

 * Documentation:  https://help.ubuntu.com/
stdin: is not a tty
D, [2014-07-16T19:25:42.138373 #14] DEBUG -- : HM ---> asg=acmeair_webapp account=user01 is in COOLDOWN state! no action taken until cooldown expires.
D, [2014-07-16T19:25:42.140839 #14] DEBUG -- : HM ---> target=1, actual=0, stalled=0 asg=acmeair_auth_service account=user01
D, [2014-07-16T19:25:42.141929 #14] DEBUG -- : HM ---> scale-up asg=acmeair_auth_service account=user01
D, [2014-07-16T19:25:42.146276 #14] DEBUG -- : HM ---> starting 1 instances for acmeair_auth_service account=user01
D, [2014-07-16T19:25:42.148137 #14] DEBUG -- : launching instance for asg=acmeair_auth_service and user=user01 with az=docker-local-1a
D, [2014-07-16T19:25:42.151127 #14] DEBUG -- : {"name"=>"acmeair_auth_service", "availability_zones"=>["docker-local-1a"], "launch_configuration"=>"acmeair_auth_service", "min_size"=>1, "max_size"=>4, "desired_capacity"=>1, "scale_out_cooldown"=>300, "scale_in_cooldown"=>60, "domain"=>"auth-service.local.flyacmeair.net", "state"=>"started", "url"=>"N/A", "last_scale_out_ts"=>1405538742}
D, [2014-07-16T19:25:42.156640 #14] DEBUG -- : cannot lease lock for account user01 and asg acmeair_auth_service
W, [2014-07-16T19:25:42.156809 #14]  WARN -- : could not acquire lock for updating n instances
D, [2014-07-16T19:26:02.192764 #14] DEBUG -- : HM ---> asg=acmeair_webapp account=user01 is in COOLDOWN state! no action taken until cooldown expires.
nope :-(
It looks as though I have set up my network incorrectly: when I look at the logs for Asgard I can see connection refused exceptions.
vagrant@vagrant-ubuntu-trusty-64:~/acme-air/acmeair-netflixoss-dockerlocal/bin$ ./showipaddrs.sh 
172.17.0.107 skydns
172.17.0.108 skydock
172.17.0.110 cassandra1
172.17.0.112 eureka
172.17.0.113 zuul
172.17.0.114 microscaler
172.17.0.115 microscaler-agent
172.17.0.116 asgard
vagrant@vagrant-ubuntu-trusty-64:~/acme-air/acmeair-netflixoss-dockerlocal/bin$ ssh -i id_rsa root@172.17.0.116
Welcome to Ubuntu 14.04 LTS (GNU/Linux 3.13.0-27-generic x86_64)

 * Documentation:  https://help.ubuntu.com/
Last login: Wed Jul 16 19:32:52 2014 from 172.17.42.1
root@asgard:~# more /opt/tomcat/logs/asgard.log 
[2014-07-16 19:31:05,268] [localhost-startStop-1] grails.web.context.GrailsContextLoader    Error initializing the application: Error creating bean with name 'com.netflix.asgard.LoadingFilters': Initialization of bean failed; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'initService': Initialization of bean failed; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'dockerLocalService': Cannot create inner bean '(inner bean)' while setting bean property 'target'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name '(inner bean)#7': Invocation of init method failed; nested exception is org.apache.http.conn.HttpHostConnectException: Connection to http://172.17.42.1:2375 refused
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'com.netflix.asgard.LoadingFilters': Initialization of bean failed; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'initService': Initialization of bean failed; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'dockerLocalService': Cannot create inner bean '(inner bean)' while setting bean property 'target'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name '(inner bean)#7': Invocation of init method failed; nested exception is org.apache.http.conn.HttpHostConnectException: Connection to http://172.17.42.1:2375 refused
 at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
 at java.util.concurrent.FutureTask.run(FutureTask.java:262)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:745)
The URL http://172.17.42.1:2375 is based on information that I set up in env.sh for the base Docker URL. From my container instances I cannot even ping the IP address of the Docker host (172.17.42.1). I need to enable remote API access to the Docker daemon via a TCP socket, which was, by the way, clearly called out in the instructions that I did not read.
vagrant@vagrant-ubuntu-trusty-64:~/acme-air/acmeair-netflixoss-dockerlocal/bin$ sudo vi /etc/default/docker 
added:
DOCKER_OPTS="-H tcp://0.0.0.0:2375 -H unix://var/run/docker.sock"
Then restart Docker and the containers:
$ ./stopall.sh 
$ sudo service docker restart 
$ ./startminimum.sh
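As a quick sanity check (my own addition, not one of the repository scripts) the daemon's remote API can be probed from inside the VM at the same address Asgard uses:

$ curl http://172.17.42.1:2375/version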
$ ./showasginstances.sh 
Pseudo-terminal will not be allocated because stdin is not a terminal.
Welcome to Ubuntu 14.04 LTS (GNU/Linux 3.13.0-27-generic x86_64)

 * Documentation:  https://help.ubuntu.com/
stdin: is not a tty
"logging in @http://localhost:56785/asgcc/ user=user01 key=***** ..."
"OK"
"listing instances for autoscaling group: acmeair_auth_service"
INSTANCE_ID  | STATUS  | AVAILABILITY_ZONE | PRIVATE_IP_ADDRESS | HOSTNAME                       
-------------|---------|-------------------|--------------------|--------------------------------
e255c316a75a | RUNNING | docker-local-1a   | 172.17.0.13        | acmeair-auth-service-886ff9d9ee

"listing instances for autoscaling group: acmeair_webapp"
INSTANCE_ID  | STATUS  | AVAILABILITY_ZONE | PRIVATE_IP_ADDRESS | HOSTNAME                 
-------------|---------|-------------------|--------------------|--------------------------
a02d0b2bdcd0 | RUNNING | docker-local-1a   | 172.17.0.14        | acmeair-webapp-a66656d356

$ ./showhmlog.sh 
Pseudo-terminal will not be allocated because stdin is not a terminal.
Welcome to Ubuntu 14.04 LTS (GNU/Linux 3.13.0-27-generic x86_64)

 * Documentation:  https://help.ubuntu.com/
stdin: is not a tty
D, [2014-07-16T20:00:14.447052 #14] DEBUG -- : HM ---> target=1, actual=1, stalled=0 asg=acmeair_webapp account=user01
D, [2014-07-16T20:00:14.449903 #14] DEBUG -- : HM ---> target=1, actual=1, stalled=0 asg=acmeair_auth_service account=user01
Looks good.

Friday, July 11, 2014

Continuous Delivery with Docker: Part 2, creating a docker pipeline


A basic pipeline for my container(s)

Let us start with an overview of a basic pipeline.  A pipeline is made up of a set of related environments.  Code enters at one end and moves through the pipeline based on executed tests, approvals and events.  The purpose of the pipeline is to establish a repeatable process for taking code changes, testing those changes and deploying good changes into production or staging environments.

This is similar to the idea of "every time I check in a change, I should run unit tests and automatically build the application", we just take the notion a step further to also run automated functional and system tests and to deploy to production.  

 There are obviously many variations on Pipelines but here is a sketch of the one I want to build out:   
 
If it is not clear in the sketch here is an explanation of the phases in my pipeline: 
  • On the left-hand side ServiceDeveloper is making a code change.
  • The modified code then enters Continuous Integration where the application is built and Docker images are created. Those images are then deployed and tested, and a status is set on that particular version of the application indicating whether it is good or not.
  • The image is pushed into the Private Docker Registry. A private Docker Registry is a way to share docker images amongst your friends without having to publish the images off your network to the DockerHub. It is worth noting that DockerHub has private repositories as well as public repositories but many people want to host their own within their firewall. The new image is pushed with the ‘latest’ tag, and if the tests have passed it is also pushed with the ‘tested’ tag.
  • ‘Consumer’ is a team member or someone that wants to get access to the Dockerized application so that they can deploy it, test their service alongside it or simply deploy it for their own purposes. If they want the latest they pull that tag, if they only want good versions they pull tested. For example to get the latest good version
    $docker pull leanjazz.rtp.raleigh.ibm.com:5000/simpletopologyservice:tested
    
  • The ‘Integration Test Env’ is like a staging environment and in this case I want to be able to deploy the latest version of my application regardless of the test status.
  • The ‘Integral LeanJazz Production’ environment should only have good versions of the application deployed into it.
  • ‘End Users’ are engineers in the organization that want to use the application/service. In this case they will use the deployed application to get access to a pre-deployed CLM topology
  • ‘Public DockerHub’ is a hosted registry for Docker Containers. This is a great way to make your packaged application available to others. 

Setting up my pipeline 

To manage these environments, test execution, the status of versions and any approvals needed, I'll use IBM Urbancode Deploy.  IBM Urbancode Deploy (UCD) is good at tracking inventory, managing automated processes and providing an application-centric view of the world.  I'll also need to set up a pipeline service that moves a new version of my application through the pipeline, but more on that later.

Step 1:  Creating my application and environments. 
  • Create a ContinuousTestServices Application.  While currently I'll just have one component/service, I'll use this application moving forward for all services associated with my continuous test efforts. 
  • Under my ContinuousTestServices Application create a set of environments for my pipeline:



 For each environment I need to set up an agent and resources.   The agent will be used to execute any processes I want to run against a specific environment.  The Test Environment is a virtual machine with its own agent.  Similarly, the leanJazz Production environment will have an agent running on it.  The Private Docker Registry and Public DockerHub are slightly different; for those I'll re-use the agent running on my leanJazz production server and create a new resource representing each of these repositories.  Here is my resource structure:   



Next I want to set up a gate on the environments that should only ever have good versions of the application deployed to them.  A gate in UCD filters the versions of a component that can be deployed to an environment based on their status.  In this case both environments to the right of the stoplight in the sketch should be limited in this way (leanJazz production and Public DockerHub), and we only want to deploy versions that have passed their functional regression testing.  
 




Next I set up an approval process.  I needed a very basic approval process to record consent for versions of the application to be published on hub.docker.com.   When a deployment is requested to the Public DockerHub environment, an email will be sent to a set of 'approvers'; they can reject or approve the deployment after recording consent information.  



Next I want to represent the various states a container may be in on a Docker host.  In UCD, under Settings, I created a status to represent the states that a Docker container may have on a resource (Loaded, Running, Hosted).  This will allow us to see the state across multiple environments. 

That completes the layout of environments for my purposes.  Now it is time to set up my processes to build and run Docker containers. 

Step 2:  Creating component process to build, test and deploy docker images and containers
 I'll represent each of my micro-services as a component in IBM Urbancode Deploy.  In this case I just have one component, my SimpleTopologyService.  I created a component and then set up the source of my component to be my git repository on IBM DevOps Services. 

  For each environment there is a set of information about my component that I'll want to know, such as the imageID, containerID, running instances and port information.  To start with I just decided to track the imageID and containerID information.  

To keep track of this information I created a set of Component Environment Properties.  These are properties which will have a value on all environments but can differ from one environment to another. 
 


Now I can create a set of component processes representing actions I will perform on my dockerized application.     

 Each of these processes is fairly simple.  They are common docker commands that developers are (probably) used to executing locally.  Capturing them in a UCD component process allows us to execute these actions on various environments.  I'll point out a few interesting parts of the component processes.

The build process creates Docker images from Dockerfiles and sets interesting information on the environment properties such as the imageids.

To parse out information from docker we can use post-processing scripts.  Post-processing scripts in UCD run as part of each step in a process and can be used to determine whether a step has executed successfully or to pull out interesting information.   In this case I scanned for the imageID and set that value on a property. 
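The same idea expressed purely in shell (in UCD the regular expression lives in the post-processing script rather than the step itself; the tag name here is just for illustration):

# build the image, keeping the output visible so it can be scanned,
# then pick the ID out of the "Successfully built <id>" line
docker build -t simpletopologyservice . | tee build.log
IMAGE_ID=$(awk '/Successfully built/ {print $3}' build.log)
echo "imageid=$IMAGE_ID"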


The docker-run process sets environment information about the running container, and changes the inventory status to 'Running', representing that the image is not only loaded but now active.
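In shell terms that step is not much more than the following sketch (the image name is the one from Part 1; the property name is mine):

# start the container and capture its ID so it can be stored as a
# component environment property
CONTAINER_ID=$(docker run -P -d --name simpletopologyservice rjminsha/simpletopologyservice)
echo "containerid=$CONTAINER_ID"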



 The test process takes the mocha tests which have been packaged in a Docker container (see Part 1) and attaches that container to the SimpleTopologyService.  Based on the result of the tests, the process will set a status on the component version of either FunctionalRegressionPassed or FunctionalRegressionFailed.  As we saw earlier when setting up the environments, this status will be used to ensure that only good versions of the application and containers are deployed to production environments or shared. 
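Conceptually that step is the linked test container from Part 1 plus a status derived from its exit code; a sketch (how the status is written back to UCD is not shown here):

# run the dockerized test suite linked to the running application;
# mocha exits non-zero if any test fails
if docker run --link simpletopologyservice:sts_alias rjminsha/simpletopologyservicetest; then
  STATUS=FunctionalRegressionPassed
else
  STATUS=FunctionalRegressionFailed
fi
echo "version status: $STATUS"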




  We can also look at the inventory of the component to get a view on quality and state of versions.


The publish process is used to take a new Docker image and make it available to others via a Docker registry.  To do this we tag the image using the hostname of the private registry, and then push it to the registry.   If the tests passed then we also push the image using the tag 'tested'.  Now anyone on the team that wants to get the latest version of the application can pull directly from the registry using either:
$docker pull leanjazz.rtp.raleigh.ibm.com:5000/simpletopologyservice:latest
or (for the last good image)
$docker pull leanjazz.rtp.raleigh.ibm.com:5000/simpletopologyservice:tested
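Behind the scenes the publish process is essentially a tag-and-push.  Roughly, using the registry hostname above and the test status captured in the previous sketch:

# always publish the newest image as :latest
docker tag rjminsha/simpletopologyservice leanjazz.rtp.raleigh.ibm.com:5000/simpletopologyservice:latest
docker push leanjazz.rtp.raleigh.ibm.com:5000/simpletopologyservice:latest

# only re-tag and push :tested when the functional regression tests passed
if [ "$STATUS" = "FunctionalRegressionPassed" ]; then
  docker tag rjminsha/simpletopologyservice leanjazz.rtp.raleigh.ibm.com:5000/simpletopologyservice:tested
  docker push leanjazz.rtp.raleigh.ibm.com:5000/simpletopologyservice:tested
fi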

Step 3:  Creating application processes

 Component processes represent operations that we can do with our Docker image and containers.  Application processes represent a higher level of operation that we will execute on our environments.  Application processes are made up of a combination of component processes.  For this exercise I created 4 processes.  

The build_test_publish process takes a new version of the application, builds the Docker containers, deploys those containers, then tests the containers and finally pushes the images to the private repository with :latest and :tested tags as appropriate.




The pull_run process pulls the image from the private Docker registry, and then starts the container.
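As shell commands that amounts to something like this (registry and image names as above):

# fetch the last known-good image from the private registry and run it
docker pull leanjazz.rtp.raleigh.ibm.com:5000/simpletopologyservice:tested
docker run -P -d --name simpletopologyservice leanjazz.rtp.raleigh.ibm.com:5000/simpletopologyservice:tested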

Publish simply pushes a Docker image out to a Docker registry; in this case we will use it to log in and push to DockerHub. 

Running application processes 

Now I have my application dockerized, I have set up a set of environments for my pipeline, and I have automated actions against my Docker containers as IBM Urbancode Deploy processes.  When a commit is made to the Git repository a new version of the component will be pulled in.  We can then run the appropriate processes on our pipeline.

For example, to build and test my containers I will run the build_test_publish Application process on my Private Docker Registry environment.  The result of this will be a new image available in the registry with the latest application changes, and a status set according to the test results. 



To simply deploy a new version of the application in my production or test environment I will run the pull_run Application process.  This will grab the version of the application from the docker private registry and run it as a container. 


When I am ready to share the application with a broader community I can run the publish process on the DockerHub environment, which will generate an approval record and then proceed with pushing the new version of the application/image to my dockerhub registry. 




Conclusions 

Docker is a fantastic technology.  It provides many of the benefits of Virtual Images without the downsides.  The use of docker registries makes it very easy to share versions of the application.  Tagging images allows a simple means for down-stream consumers to get either the latest or last good version of a micro-service. 

The usage of IBM Urbancode Deploy provides a framework to manage multiple Docker environments and manage a deployment pipeline for micro-services. This allows us to easily set up and manage a set of environments to support both our development efforts, as well as the testing and development efforts of teams consuming the application.  We can do this on an on-going basis since the cost of pulling a new version of a container and running it is minimal. 

This was a rough experiment to see how these two technologies can be used together.  With some improvements we should be able to make this simple moving forward.   I look forward to making some time in the future to improve these processes and to see what can be done to make them re-usable. 

Continuous Delivery with Docker: Part 1, creating a dockerized application and test suite

Introduction  

Docker is gaining popularity and from a continuous delivery standpoint it is very exciting.  Docker allows us to easily package up an application in a container that can then be moved around from one environment to another in a nice self contained package.  This has many of the benefits of traditional virtual machines without the cost of very large files that are difficult to move and update.  From a developer perspective these containers are nice to work with because they are so light and they have nice features such as layering.  Layering means that if I am simply updating one portion of the stack in the container only that layer (and those below it) get updated.  

From an operations perspective they are very attractive because they are easy to consume, run and have a light footprint.  Where things get especially interesting is in the development of distributed applications or micro-services and I’ll talk about that at another time.    

For this post I simply wanted to explore how I could setup a basic pipeline that took an application, packaged that application in a docker container and then moved it through a set of environments for automated test, integration testing, production and sharing with others.   This post is in two parts: 
  • Part 1: creating a dockerized application and test suite 
  • Part 2: creating a pipeline for my dockerized application
Some motivations 
Within our organization we can reliably provision complicated deployments of our IBM Collaborative Lifecycle Management stack on WebSphere and DB2.  To do this we use IBM Urbancode Deploy and cloud platforms such as IBM Pure Application System.  However, even with automation these topologies take approximately an hour to provision.  Our engineers want instant access to the latest builds.  We wanted to build a SimpleTopologyService which would simply receive notifications of new builds, pre-deploy a set of environments and then cache them for our engineers to 'checkout'.  Over time we also want to break down our large build processes into a set of services based on 12 factor app ideas.  This will allow us to rapidly innovate on certain areas without living in fear that we will 'break the build'.  
So developing a basic application for a SimpleTopologyService offers up an opportunity to experiment and learn about cool technologies like Docker, and to form an opinion on what role they should play in our evolving DevOps story.  

Try it 
If you have not used Docker you should take 5 minutes and do so.  Even knowing nothing, you will have a good experience with the online tutorial.  


A few up front admissions and conclusions 
I am not a Node developer, nor am I an expert in Docker or even Urbancode Deploy.  I wanted to spend a day looking at how these technologies work together.  Ultimately this wound up being 3 days of work due to typical interruptions and falling into a few rat-holes.  I’ll provide more information on the rat-holes later. 

All of these technologies are a lot of fun and are reasonably easy to pick up, thanks to the great communities building around them.  I believe strongly that Docker will play an increasing role in Continuous Delivery and DevOps efforts.  As we see efforts around cloud with no boundaries, and Docker as a means to quickly package, move and deploy applications, we will see (and contribute) many tools and services to make DevOps processes simple.


It is also worth noting that most if not all of the content below is covered in depth in various blogs and sites.  I've included some references at the end of this post.  

My simple application 

I chose Node.js and MongoDB and leveraged Express with Jade templates and Twitter Bootstrap.  I used mocha as a testing framework.  The application has a basic web UI and a REST (like) API.  

I won't get into the details of developing the application itself as there are a lot of existing posts on using these technologies together.  One aspect that is worth noting is the application configuration.  I want the application to be able to use a local mongo database, or to use an external existing one.  I also need the application to be configurable in terms of what port to run on, whether or not to keep test data around, etc.  Initially I had these as properties in the node application itself (bad), then moved them into a config.json file, which worked OK.  However, based on http://12factor.net/config I needed to have the configuration elements that vary from one deployment to another set using environment variables.  With this in mind I used https://github.com/flatiron/nconf so that the application would default to a config file, but environment variables could be set for aspects that may vary.

In my node application I can configure this order like so:

var fs = require('fs');
var nconf = require('nconf');
nconf.argv().env().file({ file: './config.json'});

This allows me to have defaults on my local host that can then be overridden by the environment so that I can run something like:
$ WEB_PORT=3000 DB_HOSTNAME=mydbhost DB_PORT=27017 node app.js 
Connecting to mongo with:mongodb://mydbhost:27017/simpleTopologyService
Express server listening on port 3000

This approach became useful both for deploying the application but also targeting the test suite to a deployed instance.

Setting up Docker 

There are great docs on setting up Docker.  I’ll quickly describe a few approaches I took. 

Setting up Docker on my personal machine
My development environment is a Mac.  To run docker locally there are a few options.  

The first is to use boot2docker, which provides a small virtual machine running on VirtualBox.  It also provides a local command (boot2docker) to start and SSH into that virtual machine, and allows you to use docker commands directly from the command prompt.  In addition it sets up a host-only network that automatically allows you to access your deployed containers via mapped ports.  To access your applications you use the IP address of the boot2docker-vm, which can be found simply by typing:
$boot2docker ip
The VM's Host only interface IP address is: 192.168.59.103
 A second approach is to use Vagrant to set up an Ubuntu image with Docker. With this approach you can then set up a shared folder in VirtualBox so that you can share source on your local machine with the virtual machine. In order to access your application running in a container you then need to map ports from your local machine to ports on the running virtual machine (which in turn are mapped to the ports exposed by each container).
For the second approach I used a Vagrantfile that provisions Ubuntu with Docker installed. With this I would simply run vagrant up and then vagrant ssh to connect to the Docker virtual machine and go to work, mapping the ports I needed to access from my local machine. Both approaches worked well.
Setting up docker on my integration and production machines 
I deployed a Red Hat and an Ubuntu virtual machine in the lab. I had no problems following the instructions to install Docker on the Ubuntu image. On the Red Hat image a kernel upgrade was necessary, after which Docker would run but did not behave as expected (my containers could not get out, even with iptables configured) - this was a two-hour rat-hole, so I gave up and simply used Ubuntu.
Setting up a private registry for Docker Containers

Could not have been simpler. On a host with Docker installed I simply ran:
     docker run -p 5000:5000 registry 
To push an image to the private registry:
          docker tag [local-image] myregistry.hostname:5000/myname 
          docker push myregistry.hostname:5000/myname 
To pull an image in the private registry simply:
          docker pull myregistry.hostname:5000/image:tag 

A few gotchas ... currently there is not a means (that I am aware of) to easily remove images from a local registry. For authentication you can have the registry listen on localhost only and then use SSH tunneling to connect:
ssh -i mykey.key -L 5000:localhost:5000 user@myregistry.hostname
The registry is evolving so check here.
Setting up public (or private) registry on DockerHub 
Create an account at https://hub.docker.com/ and setup your registries https://registry.hub.docker.com/ 

Dockerizing the application

Why 
First off, why should I dockerize my application? It is a simple application; to run it I simply need to: 
$ mongod 
$ npm install 
$ node app.js 
Express server listening on port 3000
However, while this is simple, it depends on me as the application developer having mongo installed or access to a remote MongoDB, having installed node, having internet access to run npm install, and being good about specifying versions of modules in my package.json, etc. Dockerizing my application allows me to capture my application and all of its dependencies in a container, so that all anyone needs to do to run it is:
$ docker run -P rjminsha/simpletopologyservice
To run the container I don’t even need to know what the technology stack is. All I am doing is running the container. If I am consuming this application to test with, or to deploy into production this is really nice.

To get an updated version of the application I can simply pull the application
$docker pull rjminsha/simpletopologyservice
and then run it. The pull will only pull down the layers in the application that have changed. Also really nice.

Getting a basic node application running in a docker container 
The following article and example by Daniel Gasienica were pretty straightforward to follow and got me going within a few minutes.  Now I have a basic template to follow for taking my Node application and running it within a Docker container.  

Options
My container needs some flexibility.  It needs to be able to run both node and mongodb in the same container, run mongodb in a separate container, or simply connect to an existing mongodb in some SaaS environment. 

A common approach to running multiple processes in a container is to use supervisord. Supervisord provides a way to manage multiple processes (including restarting them). This approach is very popular as it also provides a way to run an SSH daemon in the container so that you can log in via SSH to the running container.

A second approach to running multiple processes is outlined in Alexander Beletsky's blog: basically, have a startup script that starts mongo in the background if needed, then starts node:

/usr/bin/mongod --dbpath /data/db &
node /app/app.js
The Dockerfile then simply runs this script.
CMD ["/app/start.sh”]
I tried both; both worked easily. Currently I am going with the second approach because it makes viewing the logs of my process simple, and because someone yelled at me that I should not treat containers like a virtual machine. There appear to be differing opinions on one process per container vs. a lightweight VM.
$docker logs simpletopologyservice
WEB_PORT:3001
DB_HOSTNAME:localhost
DB_PORT:27017
Starting Mongo DB
Starting node application
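Pieced together from that output, start.sh is approximately the following sketch (the exact conditional for detecting an external database may differ):

#!/bin/sh
echo "WEB_PORT:$WEB_PORT"
echo "DB_HOSTNAME:$DB_HOSTNAME"
echo "DB_PORT:$DB_PORT"

# only start a local mongod when no external database was supplied
if [ "$DB_HOSTNAME" = "localhost" ]; then
  echo "Starting Mongo DB"
  /usr/bin/mongod --dbpath /data/db &
fi

echo "Starting node application"
node /app/app.js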
While I could (or should) have re-used some existing Docker images, my Dockerfile is not very polished, but it is good enough right now to continue experimenting.

Testing with Docker containers   

Mocha and BDD 
To test my application I used a framework called mocha.  It is similar to Cucumber and allows you to write automated tests following Behavior Driven Development (BDD) concepts.  I really enjoyed using Mocha.  In BDD you use natural language to describe expected behavior, you implement the test case, then you implement the code.  A very simple example is: 

describe('SimpleTopologyService Webui Tests', function() {
  describe('GET /topology/topologies', function() {
    it('should return a 200 response code', function(done) {
      http.get({ hostname: topologyHostname, path: '/topology/topologies', port: topologyPort }, function(res) {
        assert.equal(res.statusCode, 200,
           'Expected: 200 Actual: ' + res.statusCode);
        done();
      });
    });
  });
});
One very nice thing is that you can have a test suite that has pending tests. For example:
describe('Topology Pool responds to a notification that there is a new build', function() {
    it('Topology provides REST API to notify of a new build');
    it('Topology creates event when new build is received');
    it('Topology Pool receives event when a new build is created');
    it('When a new build event is received the Pool should purge old instances');
    it('When a new build event is received the Pool should create new instances of the build');
    it('When a new build event is received the Pool should notify existing users that a new instance is available');
});
These tests do not have implementations. I can write a set of descriptions of the expected behavior up front, which helps us think about how we want the application to behave.  It is also a great way to collaborate with team members.  Pending tests are reported as pending rather than failed, so the general flow is to write your test cases up front, implement code and work your way through the function, making sure you have not introduced regressions.
$ mocha --reporter=spec  
… 
    POST /api/v1/topology/topologies

      ✓ should return at 200 response code on success and return json 
      ✓ should return 400 response code if I try to create a document with the same name 
      ✓ should be able to delete new topology record and recieve 200 response code in response 
      ✓ should no longer be able to locate record of the topology I just removed so should recieve a 400 response code 
      ✓ should return at 400 response code if there is a validation error 
    GET /api/v1/topology/topologies:id
      ✓ topology name should be correct 
      ✓ topology should list of URIs for pools of this topology 
      ✓ topology should list of providers for this topology which include type, username, password 
    PUT /api/v1/topology/topologies:id
      ✓ should return at 200 response code if a valid referenceURL is passed 
      ✓ should return at 400 response code if invalid data is passed 
      ✓ should return at 404 response code if the tasks does not exist 
      ✓ can update pools with new pool URL 
SimpleTopologyService Topology API after removed test data
  35 passing (2s)
  15 pending
Testing with my dockerized application
Testing locally is fine but I needed to be able to test my application running in a container.  To do this I can pass in properties telling mocha about the location of my running container.  So if I run my container: 
 

$ docker run -P -d rjminsha/simpletopologyservice
And inspect the running container 
CONTAINER ID        IMAGE                                   COMMAND             CREATED             STATUS              PORTS                                                                         NAMES
137a72b6aca1        rjminsha/simpletopologyservice:latest   /app/start.sh       32 seconds ago      Up 30 seconds       0.0.0.0:49153->27017/tcp, 0.0.0.0:49154->28017/tcp, 0.0.0.0:49155->3001/tcp   drunk_goodall 
I see that the application is running, and the port 49155 is mapped to my application running in the container on port 3001. By running
boot2docker ip 
The VM's Host only interface IP address is: 192.168.59.103
I see my host-only network address is 192.168.59.103, so my application can be reached at http://192.168.59.103:49155/
$ curl http://192.168.59.103:49155 
<!--   Licensed under the Apache License, Version 2.0 (the "License"); -->
To run my test suite against the application in the container (using the local db in container) I can pass in this information
$ env WEB_PORT=49155 WEB_HOSTNAME=192.168.59.103 DB_HOSTNAME=192.168.59.103 DB_PORT=49153 mocha --reporter spec 
  35 passing (1s)
  15 pending
This helped me find some bugs in my test cases which in some places had assumed port information. The ability to develop code and write tests at the same time that are more than unit tests and don’t assume the location of the application is critical for having a continuous test process later in the pipeline.  

Dockerizing my test cases
I would like others to also be able to run the tests for my application as a part of the deployment process. Dockerizing my tests within a container is a nice way for me to hand off the test suite with the application. This has the added benefit of removing the need to understand all the information the tests need to know about the location of the application. When you link containers, Docker will update /etc/hosts with the IP address of the linked container and set environment variables about the container, such as the exposed ports and addresses. This allows me to leverage those environment variables to automatically run the test suite against the linked container. I created a simple Dockerfile, very similar to the application's, that installs node and runs npm install to get dependencies such as mocha. This time the Dockerfile runs a shell script to invoke mocha using the alias information Docker provides when linking containers. The last CMD in my Dockerfile executes the shell script
CMD ["/app/runtests.sh”] 
which contains
env
cd /app
echo "Running full test suite"
env WEB_PORT=$STS_ALIAS_PORT_3001_TCP_PORT WEB_HOSTNAME=$STS_ALIAS_PORT_3001_TCP_ADDR DB_PORT=$STS_ALIAS_PORT_27017_TCP_PORT DB_HOSTNAME=$STS_ALIAS_PORT_27017_TCP_ADDR mocha --reporter spec

Now to run my tests I first need to start the application
$docker run -P -d --name simpletopologyservice rjminsha/simpletopologyservice 
(note this time I named it simpletopologyservice)
$docker ps 
CONTAINER ID        IMAGE                                   COMMAND             CREATED             STATUS              PORTS                                                                         NAMES
d29ae62d874a        rjminsha/simpletopologyservice:latest   /app/start.sh       11 minutes ago      Up 11 minutes       0.0.0.0:49153->27017/tcp, 0.0.0.0:49154->28017/tcp, 0.0.0.0:49155->3001/tcp   silly_wilson/sts_alias,simpletopologyservice
And then run a linked tests container:
$ docker run --link simpletopologyservice:sts_alias rjminsha/simpletopologyservicetest
Connecting to mongo with:mongodb://192.168.59.103:49153/simpleTopologyService
topologyPort:49155
topologyHostname:192.168.59.103

  35 passing (2s)
  15 pending
This dockerized test suite is going to be useful later on if I integrate the application with a health manager.

At this point I have a basic application, I have dockerized that application and can run tests locally against it. I have also dockerized my test suite and can run those using an attached container or by pointing the test suite at a remote instance of the application. It is time to setup a Pipeline for the application so that I can check-in code and have those changes tested, shared and deployed.  

References
Things I read to learn a bit about Node
  • http://www.ibm.com/developerworks/library/wa-nodejs-polling-app/
  • http://pixelhandler.com/posts/develop-a-restful-api-using-nodejs-with-express-and-mongoose
  • http://www.andreagrandi.it/2013/02/24/using-twitter-bootstrap-with-node-js-express-and-jade/
  • http://www.vinaysahni.com/best-practices-for-a-pragmatic-restful-api
  • http://madhatted.com/2013/3/19/suggested-rest-api-practices
Posts on ways to run node and mongo in containers 
  • Multiple containers: http://luiselizondo.net/blogs/luis-elizondo/how-create-docker-nodejs-mongodb-varnish-environment 
  • Single container: http://beletsky.net/2013/12/run-several-processes-in-docker-container.html
  • Using supervisord: http://docs.docker.com/examples/using_supervisord/