Environments are so 2000s
What environments are made for, and why we should avoid them when possible.
Gates of Hell
An application is never fully tested until it is running in production. Since it’s a bit risky to just deploy new stuff and impact customers, we try to replicate the production environment somewhere else.
But the hard part is not building an environment, it’s keeping it close to the moving target (i.e. production); especially when you need this ersatz more than once and you start to call those environments test, dev, pre-production,… What are the differences? IPs, names, connection pools,… Well, usually, that’s just configuration changes.
Let’s make that clear: a change of a value in a configuration file is as important as a change of a function in your code. And you want to change the configuration between stages? That means you won’t be testing your configuration. That’s where Hell opens its gates.
How can we keep things simple?
I sort the differences between environments into three categories:
What is in fact static
As it’s a best practice in code to use named constants, people tend to name everything, and thus they end up with possibly-changing values all around. What is a constant in the code becomes a variable in the infrastructure.
How often has the MySQL port number changed? Or the HTTP port? Or the domain name?
Those values are static and have to be treated as such.
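A minimal illustration (the entries are made up): this is what typically ends up in every per-environment file, even though none of it ever moves.
# Found in test.env, dev.env, preprod.env, prod.env… always identical:
MYSQL_PORT=3306    # MySQL's default port has not changed in years
HTTP_PORT=80       # likewise
DOMAIN=example.com # could simply be a constant in the code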
What can be deduced
Some values, like the amount of RAM, the disk space, or the number of CPU cores, can be discovered by the app.
Accordingly, the maximum number of threads or the maximum RAM used by the app can be computed at launch time by the app itself, or the configuration file can be generated before the app starts.
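As a sketch of the second option (file names, ratios and the final command are illustrative, assuming a Linux host), a launch script can derive those values and render the configuration just before starting the app:
# Discover what the machine offers
CORES=$(nproc)
MEM_MB=$(awk '/MemTotal/ {print int($2/1024)}' /proc/meminfo)
# Deduce the tunables, e.g. one worker per core and 75% of the RAM
WORKERS=$CORES
HEAP_MB=$((MEM_MB * 3 / 4))
# Render the configuration from a template, then start the app
sed -e "s/@WORKERS@/$WORKERS/" -e "s/@HEAP_MB@/$HEAP_MB/" \
    app.conf.template > app.conf
exec ./app --config app.conf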
What is known only by a few
In fact, all the information relevant to a machine can be discovered on that machine itself. The question is how to make this information available to the others.
This is not as trivial as it first seems. What is needed here is a service directory. But should this functionality be provided by the infrastructure or by the application?
If it comes from the infrastructure, then how to contact the service directory becomes part of the characteristics of the environment. If it’s part of the application, we have a chicken-and-egg problem: how do we provide a meeting point which changes at each launch of the complete environment?
My solution is in fact in between: just use a PaaS solution :)
Here there be buzzwords
If you have been careful with the first two points, and you remove the third category from the configuration, then both application and configuration are immutable. Immutable images and PaaS: that smells like easy provisioning!
I admit it: as a PaaS solution relying on immutable images, I am thinking about Docker.
Docker
It’s perfect for spawning VM-like instances whatever the machine is: a dev’s Ubuntu, a QA’s Debian, or CoreOS in production.
Once our images are created with a Dockerfile (well, the Dockerfile syntax is not powerful enough, so I call puppet apply from the Dockerfile), you have your immutable app.
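As a sketch (base image, manifest path and final command are illustrative), such a Dockerfile stays short and delegates the real work to Puppet:
# A minimal sketch; everything below is an assumption about your setup
FROM debian:stable
COPY manifests/ /etc/puppet/manifests/
# The Dockerfile syntax stops here; puppet does the actual provisioning
RUN apt-get update && apt-get install -y puppet \
 && puppet apply /etc/puppet/manifests/app.pp \
 && apt-get clean
CMD ["/usr/local/bin/app"]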
Now, let’s look at the service directory.
Consul
Based on what I have written above, it would be good to have each app able to talk to and find the others. Of course, having a simple, light and fault-tolerant system is a must. So let’s choose Consul.
Consul is a Go binary using a gossip protocol between instances to exchange information about their states, but it has other cool features:
- Sessions, keep-alive, dead-man detection: just to check whether your apps are responding.
- DNS interface: not very interesting at first glance, but it’s an API all applications already speak.
- Key/Value store: perfect for exchanging basic information (see the curl example after this list), like:
- A shared secret (generated by the first member)
- Master election
- Highly available: thanks to the gossip protocol. It’s also possible to have a core (HA as well) dedicated to the consistency of the K/V store.
- UI: just to see what the state is.
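As an example of the K/V store (assuming a local agent on the default HTTP port 8500; the key and value are made up), storing and retrieving that shared secret is just two HTTP calls:
curl -X PUT -d 'S3cr3t' localhost:8500/v1/kv/cluster/shared-secret
curl localhost:8500/v1/kv/cluster/shared-secret
# returns a JSON document whose "Value" field is the base64-encoded secret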
Demo time!
I built an image for playing with Consul, with basic default settings. Here is the repo, and an image is available: cell/consul. I promise I kept the Docker magic as low as possible.
Now, to start the platform, you just have to follow these steps:
- The first server, to bootstrap the core:
docker run --rm -ti --name first cell/consul -server -bootstrap
- The two other servers, to reach the quorum:
docker run --rm -ti --link first:consul cell/consul -server
- One client, like an app would be:
docker run --rm -ti --link first:consul --name app cell/consul
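At this point, you can already check the cluster from any of the containers (assuming the consul binary is on the PATH inside the image); it should list all four nodes:
docker exec -ti first consul members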
You can access the UI provided by any of the Consul instances. First, get an IP address:
docker inspect -f '{{.NetworkSettings.IPAddress}}' first
Then go to http://<IP>:8500. There you will see that there’s one service (Consul itself) and four nodes.
An app can easily declare its service using Consul’s HTTP API. As we deploy a local agent, it’s easy to find. First, install curl in the container:
docker exec -ti app opkg-install curl
Now register the app (as a service aptly named app):
docker exec -ti app curl -X PUT -d '{ "Name":"app","Port":8080 }' localhost:8500/v1/agent/service/register
In the UI, you immediately see the registered service.
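You can also double-check without the UI, by asking the catalog (again, default HTTP port assumed):
docker exec -ti app curl localhost:8500/v1/catalog/service/app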
If another service wants to find it… well, it’s not just easy, it’s obvious:
Start the other container:
docker run --rm -ti --link first:consul --name frontend cell/consul
And ping the app:
docker exec -ti frontend ping app
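The bare name app resolves because the local agent answers DNS queries; presumably the image points the container’s resolver at it. You can query the DNS interface directly (assuming dig is available in the container, installable the same way as curl, and Consul’s default DNS port 8600):
docker exec -ti frontend dig @127.0.0.1 -p 8600 app.service.consul SRV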
Actually, this is exactly how we bootstrap our production environment, but, as you can guess, test and dev as well. With good service discovery, we don’t need configuration changes between environments. In fact, if the app is well designed, we don’t need any adaptation at all.
In the end, we are able to deploy our production-like environment on a dev host, and if we want to replicate the live environment, we just need hardware powerful enough.