Production anywhere

In a previous post, I gave some arguments for deploying production on a dev host. Here are some technical details about how to do it. :)

Test? There’s no Test. There’s only Production.

The first wrong assumption when someone creates a service is to consider that a test/dev stage doesn't need to be like production. Of course there are differences, but only because of physical constraints, and those are not the goal. The real goal is to have the other stages as close as possible to production.

Of course this is not entirely possible: we can't conjure up CPU or RAM. But those differences should be seen as opportunities: your app is stressed, and you get to test it at some of its limits.

And that's it. Everything else can be worked around.

I will show some of this with docker, but it can be done with other tools (such as vagrant) with more sweat.

You are not alone

We are used to considering that an app is nothing without the resources it needs, be they files, DBs, APIs,… But the app is also useless if it's not reachable: it is just one piece of the group that provides the service. So the infrastructure is also part of this service, and what your infrastructure offers has to be present on all stages too.

Listening

Let's focus on the entry point of the app: the port it listens on. I'll illustrate this with a docker-compose.yml, started with docker-compose --project example up:

version: '2'
services:
    app:
        image: emilevauge/whoami
        command: --port 8080

For the sake of the example, the app listens on port 8080. In reality, it has to be reachable on port 80. If the app is served over SSL, it will be on port 443, with the added security layer. But why would you have to code this port translation or this SSL logic yourself? And if you have multiple instances, you also need load balancing.

This is precisely the moment when the reverse proxy becomes part of the service itself: the app doesn't work properly without it. And I'm not even talking about the webapps that need more than a port translation. Yes, I'm thinking of you, wordpress and jenkins (and I curse you BTW).

So let’s write an updated docker-compose.yml:

version: '2'
services:
    app:
        image: emilevauge/whoami
        command: --port 8080
        labels:
            - traefik.frontend.rule=Host:example.com

    lb:
        image: traefik
        command: -c /dev/null --web --docker --docker.domain=docker.localhost --logLevel=DEBUG
        ports:
          - "80:80"
          - "8080:8080"
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock

I use traefik in this example because it lets us avoid writing configuration: it is able to ask docker for the containers' labels and configure itself. Here the app container is labelled to receive all the traffic sent to example.com.

As we have published the ports, we can reach our load balancer on ports 80 and 8080. Just for validation, we can see its current configuration on localhost:8080.
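That configuration can also be fetched from the command line. This is a sketch, assuming a traefik 1.x image with the --web provider enabled and port 8080 published, as above:

```shell
# Ask traefik for its current configuration as JSON
# (assumption: traefik 1.x web API, reachable on the published port 8080).
curl -s http://localhost:8080/api/providers
```

If the docker provider picked up our labels, the app's frontend and backend should appear in the output.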

Ok, that's good, but we still can't reach our app correctly. If we go to localhost:80, the domain we use is localhost and not example.com, so traefik can't match our requests. A solution would be to add the DNS resolution to /etc/hosts:

127.0.0.1   example.com

Ok. That's enough to reach my app on example.com, but I have to do this every time for each app, and there's a risk I forget to remove it. Also, I can't test scaling, as the published port blocks the operation. Another solution would be to use the IP address of the container, obtained with docker inspect --format '{{ .NetworkSettings.IPAddress }}' example_traefik_1, but I would have to do that every time too. That's tedious and ineffective.
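For a quick one-off check there is also a way to avoid touching /etc/hosts at all: traefik matches on the Host header, so we can set it by hand (a sketch, assuming the stack above is running):

```shell
# Pretend to be example.com without any DNS trick: override the
# Host header, which is all traefik matches on in this setup.
curl -H 'Host: example.com' http://localhost/
```

It works, but it only helps with command-line tools; a browser is another story.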

It should be as simple as using a DNS name without having to set it.

To dockerize or not to dockerize

The DNS name is available inside the docker world. So the process has to be inside a container (and in the correct network) to use it:

docker run --rm -ti --net example_default alpine wget -qO- http://app:8080

You could even use a dockerized firefox, but then you would have a tool to maintain (plugins, settings, users' tastes,…).

Open a door

For fun, I created an image that does SSH over a container's stdin/stdout. I wanted stdin/stdout because that is the only contact point you know for sure when you start a container. We can make this container part of our project's docker network:

ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o ProxyCommand="docker run --rm -i --net example_default cell/ssh-over-docker $(cat ~/.ssh/id_rsa.pub)" root@127.0.0.1

Ok, so far that's what you would get by default with docker, but here we are using SSH, which means we can use its dynamic port forwarding option:

ssh -CD 1080 -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o ProxyCommand="docker run --rm -i --net example_default cell/ssh-over-docker $(cat ~/.ssh/id_rsa.pub)" root@127.0.0.1

Now we have a SOCKS proxy that uses docker's DNS. We can access the app without caring about the container's IP:

curl --socks5-hostname 127.0.0.1:1080 http://app:8080

We can also set this SOCKS proxy in the browser's settings (and don't forget to enable remote DNS resolution: here, step 2, or use FoxyProxy). If there are no SOCKS settings, tsocks is a good workaround.
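As a sketch, tsocks wraps a command and redirects its TCP connections through a SOCKS server configured in /etc/tsocks.conf; the values below assume the SSH dynamic forward from above on port 1080:

```shell
# /etc/tsocks.conf — minimal config pointing at the SSH SOCKS proxy
# (assumption: ssh -D 1080 from the previous step is still running)
server = 127.0.0.1
server_port = 1080
server_type = 5
```

Then prefixing a command with tsocks routes its connections through the proxy. One caveat: tsocks only intercepts TCP connections, so name resolution still happens locally, unlike the --socks5-hostname trick above.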

This is not the Internet you’re looking for

This is going in the right direction, but we are still not able to use our app with its production URL, http://example.com. Fixing this is easy: just attach the name resolution you want to your load balancer via a network alias:

version: '2'
services:
    app:
        image: emilevauge/whoami
        command: --port 8080
        labels:
            - traefik.backend=whoami
            - traefik.frontend.rule=Host:example.com
            - traefik.port=8080

    lb:
        image: traefik
        command: -c /dev/null --web --docker --docker.domain=docker.localhost --logLevel=DEBUG
        volumes:
            - /var/run/docker.sock:/var/run/docker.sock
        networks:
            default:
                aliases:
                    - "example.com"

Now we can access the app with its final name:

curl --socks5-hostname 127.0.0.1:1080 http://example.com

We can even test if it’s scaling correctly:

docker-compose -p example scale app=5
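The whoami image answers with, among other things, the hostname of the container that served the request, so a few repeated requests are enough to watch the load balancing happen (a sketch, assuming the SOCKS proxy from above is still running):

```shell
# Each scaled container has a different hostname; round-robin should
# show several of them across ten requests.
for i in $(seq 1 10); do
    curl -s --socks5-hostname 127.0.0.1:1080 http://example.com | grep Hostname
done
```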

Now, that's production on a dev host.

Free to play

Throughout this post, I only considered a dev host, but everything can be applied to test as well, and thanks to the isolation docker provides, we can start multiple environments in parallel without collision. As test and dev are the same, automating the tests also becomes easier.
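Starting a second environment side by side is only a matter of picking another project name (example2 here is a hypothetical name); compose then creates a separate network, example2_default, with its own containers, so the two stacks never collide:

```shell
# Same compose file, second isolated environment: a different project
# name gives separate container names and a separate default network.
docker-compose --project example2 up -d
```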

If manual UAT can't be avoided, it's good to use SSH's SOCKS proxy as the entry point of the environment (see the -g option, or a future post :) ).

Once you are able to run your platform wherever you want, you can keep an eye on your dependencies and possible vendor lock-in, and be prepared for whatever change comes next.