If you haven’t checked out CoreOS and fleet, you should definitely do so right now. I’ve tried all of the Docker orchestration services, and so far the CoreOS/fleet combination has been the most reliable and barebones way to host an application. Kubernetes and Swarm have really nice utilities, but if those utilities fail or stop working one day while your live application is running, you’re kind of screwed.

I’ll go into more detail on CoreOS later on, but for now here’s an easy way of setting up your web app on a fleet cluster.

Your service file should look something like this:

sampleweb@.service

[Unit]
Description=Sample Web App
After=docker.service
Requires=docker.service

[Service]
User=core
Restart=always

ExecStartPre=-/usr/bin/docker stop sampleweb-%i
ExecStartPre=-/usr/bin/docker rm -f sampleweb-%i
ExecStartPre=/usr/bin/docker pull msanterre/sampleweb

ExecStart=/bin/bash -c '/usr/bin/docker run --name sampleweb-%i \
-p 4444:80 msanterre/sampleweb'

[X-Fleet]
X-Conflicts=sampleweb@*.service

This is a pretty standard service file. If you don’t understand what’s going on here, take a look at the official fleet documentation.
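
With the unit file saved as sampleweb@.service, you launch it with fleetctl. Here’s a minimal sketch (the instance number 1 is arbitrary, it just fills in the %i specifier):

# Submit the unit template to the cluster
fleetctl submit sampleweb@.service

# Start one instance of the template
fleetctl start sampleweb@1.service

# See which machine it landed on and whether it's running
fleetctl list-units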

And that should get your container running on fleet! Success!

The problem now is that fleet starts your container on one of your machines, so where does your DNS point?

If you’re an experienced AWS user, you know to use Route 53 to point to an Elastic Load Balancer (ELB).
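
If you haven’t set that up before, it looks roughly like this with the AWS CLI; the hosted zone ID, domain, and ELB DNS name below are all placeholders you’d swap for your own:

# Alias your domain to the ELB (all IDs and names here are placeholders)
aws route53 change-resource-record-sets \
  --hosted-zone-id <your-hosted-zone-id> \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "app.example.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "<elb-hosted-zone-id>",
          "DNSName": "sampleweb-123456789.us-east-1.elb.amazonaws.com",
          "EvaluateTargetHealth": false
        }
      }
    }]
  }'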

Now that you have a way to point your domain at your machines, you could just add all of your fleet machines to your ELB and call it a day.

ELB health checks keep polling your app on the specified port (4444 in this case). When the container launches on one of the machines, that machine will eventually pass the ELB health check and start receiving traffic. When the container dies, the machine fails the health check a few times and stops getting traffic. The catch is that people can still hit a stopped application during the window before AWS fails the health check.
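
For reference, here’s roughly how that health check could be configured with the AWS CLI; the load balancer name and the thresholds are assumptions you’d tune yourself:

# Check the host port the container publishes (4444) every 10 seconds
aws elb configure-health-check \
  --load-balancer-name sampleweb \
  --health-check Target=HTTP:4444/,Interval=10,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=2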

If this is good enough for you, good! It makes things pretty simple. If not, just keep on reading.

Make these additions to your service file:

ExecStartPre=/usr/bin/docker pull anigeo/awscli

ExecStartPost=/bin/bash -c '\
/usr/bin/docker run --rm \
-e AWS_SECRET_ACCESS_KEY=`etcdctl get /aws/secret_access_key` \
-e AWS_ACCESS_KEY_ID=`etcdctl get /aws/access_key_id` \
-e AWS_DEFAULT_REGION=`etcdctl get /aws/region`  \
anigeo/awscli elb register-instances-with-load-balancer --load-balancer-name sampleweb --instances $(wget -q -O - http://169.254.169.254/latest/meta-data/instance-id)'

ExecStop=/bin/bash -c '\
/usr/bin/docker run --rm \
-e AWS_SECRET_ACCESS_KEY=`etcdctl get /aws/secret_access_key` \
-e AWS_ACCESS_KEY_ID=`etcdctl get /aws/access_key_id` \
-e AWS_DEFAULT_REGION=`etcdctl get /aws/region`  \
anigeo/awscli elb deregister-instances-from-load-balancer --load-balancer-name sampleweb --instances $(wget -q -O - http://169.254.169.254/latest/meta-data/instance-id)'

Note: Make sure your AWS credentials are stored in etcd under the keys referenced above (/aws/access_key_id, /aws/secret_access_key, and /aws/region).
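
Setting them once from any machine in the cluster is enough; something like this (values are placeholders):

etcdctl set /aws/access_key_id <your-access-key-id>
etcdctl set /aws/secret_access_key <your-secret-access-key>
etcdctl set /aws/region us-east-1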

Everything here can stay the same for you except the load balancer name (sampleweb), which should match your own ELB.

This will register the machine running your application with the ELB when the unit starts and deregister it when you stop it. That way, no one should be hitting dead applications.
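
If you want to double-check that the registration went through, you can ask the ELB directly (again assuming it’s named sampleweb):

aws elb describe-instance-health --load-balancer-name sampleweb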

Final file

sampleweb@.service

[Unit]
Description=Sample Web App
After=docker.service
Requires=docker.service

[Service]
User=core
Restart=always

ExecStartPre=-/usr/bin/docker stop sampleweb-%i
ExecStartPre=-/usr/bin/docker rm -f sampleweb-%i
ExecStartPre=/usr/bin/docker pull msanterre/sampleweb
ExecStartPre=/usr/bin/docker pull anigeo/awscli

ExecStart=/bin/bash -c '/usr/bin/docker run --name sampleweb-%i \
-p 4444:80 msanterre/sampleweb'

ExecStartPost=/bin/bash -c '\
/usr/bin/docker run --rm \
-e AWS_SECRET_ACCESS_KEY=`etcdctl get /aws/secret_access_key` \
-e AWS_ACCESS_KEY_ID=`etcdctl get /aws/access_key_id` \
-e AWS_DEFAULT_REGION=`etcdctl get /aws/region`  \
anigeo/awscli elb register-instances-with-load-balancer --load-balancer-name sampleweb --instances $(wget -q -O - http://169.254.169.254/latest/meta-data/instance-id)'

ExecStop=/bin/bash -c '\
/usr/bin/docker run --rm \
-e AWS_SECRET_ACCESS_KEY=`etcdctl get /aws/secret_access_key` \
-e AWS_ACCESS_KEY_ID=`etcdctl get /aws/access_key_id` \
-e AWS_DEFAULT_REGION=`etcdctl get /aws/region`  \
anigeo/awscli elb deregister-instances-from-load-balancer --load-balancer-name sampleweb --instances $(wget -q -O - http://169.254.169.254/latest/meta-data/instance-id)'

[X-Fleet]
X-Conflicts=sampleweb@*.service
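
Because of the X-Conflicts line, fleet will never schedule two instances of this template on the same machine, so scaling out is just a matter of starting more instances:

# Run three instances, each on a different machine
fleetctl start sampleweb@1.service sampleweb@2.service sampleweb@3.service
fleetctl list-units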