Late last year, octohost was created as a system to host websites:
- With little or no manual intervention.
- Regardless of the underlying technology or framework.
- As a personal mini-PaaS modeled after Heroku, with a git push interface to deploy these sites.
- Using disposable, immutable and rebuildable containers of source code.
What have we found?
- Docker is an incredible tool that takes containers on Linux to the next level.
- If you keep your containers simple and ruthlessly purge unnecessary features, they can run uninterrupted for long periods of time.
- Having the ability to install anything in a disposable container is awesome.
- You can utilize your server resources much more efficiently using containers to host individual websites.
As we’ve been using it, we’ve also been thinking about ways to make it better:
- How can we make it faster?
- How can we make it simpler and more reliable?
- How big can we make it? How many sites can we put on a single server?
- How can we combine multiple octohosts together as a distributed cluster that’s bigger and more fault-tolerant than a single one?
- How can we run the same container on different octohosts for fault-tolerance and additional scalability for a particular website?
- How can we persist configuration data beyond the lifecycle of the disposable container?
- How can we distribute and make this configuration data available around the system?
- How can we integrate remote data stores so that we can still keep the system itself relatively disposable?
- How can we trace an HTTP request through the entire chain, from the proxy to the container and back?
- How can we lower the barrier to entry so that octohost is easier to build and spin up?
Some of these have been ‘accomplished’, and we’ve made several large changes to enable the next phases of octohost’s lifecycle.
- We replaced the Hipache proxy with OpenResty, which immediately sped everything up and allowed us to use Lua to extend the proxy’s capabilities.
- We moved from etcd to Consul to store and distribute our persistent configuration data. That change allowed us to make use of Consul’s Services and Health Check features.
- We removed the tentacles container, which used Ruby, Sinatra and Redis to store each website’s endpoint. Because of how it was hooked into nginx, it was queried on every request to determine which endpoint to route to. Its data model was also limited to a single endpoint and required a number of moving parts. I prefer fewer moving parts, so removing it was a win in many ways.
- We refactored the octo command and the gitreceive script, which enabled launching multiple containers for a single site.
- We added a configuration flag to use a private registry, so that an image only has to be built once and can be pulled onto other members of the cluster quickly and easily.
- We added a plugin architecture for the octo command; the first plugin handles MySQL user and database creation.
- We replaced tentacles with the octoconfig gem, which pulls the service and configuration data out of Consul and writes an nginx config file. The gem should be extensible enough that we can re-use it for other daemons as needed.
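To make that last step concrete, here is a rough sketch of what an octoconfig-style renderer does. This is an illustrative Python sketch, not the gem’s actual Ruby code: the service names and container addresses below are made up, and the real gem queries a live Consul agent rather than using hard-coded data.

```python
# Illustrative sketch of an octoconfig-style renderer: turn service data
# (of the shape Consul would return) into an nginx configuration string.
# The data here is hard-coded; the real gem reads it from Consul.

def render_nginx(services):
    """Render an upstream block and a server block for each site."""
    parts = []
    for name, backends in sorted(services.items()):
        # One upstream entry per running container for this site.
        servers = "\n".join(f"    server {addr};" for addr in backends)
        parts.append(f"upstream {name} {{\n{servers}\n}}")
        # A server block that proxies the site's hostname to its upstream.
        parts.append(
            f"server {{\n"
            f"    listen 80;\n"
            f"    server_name {name}.example.com;\n"
            f"    location / {{ proxy_pass http://{name}; }}\n"
            f"}}"
        )
    return "\n\n".join(parts)

if __name__ == "__main__":
    # Two containers backing one site, one backing another (made-up data).
    services = {
        "blog": ["10.0.0.2:49153", "10.0.0.3:49161"],
        "docs": ["10.0.0.2:49201"],
    }
    print(render_nginx(services))
```

Because the proxy only reads a generated file, the renderer can be re-run whenever Consul’s view of the world changes, and the same approach extends to config files for other daemons.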
So what are we working on going forward?
- Getting octohost clustered easily and reliably. At a small enough size and workload, each system should be able to proxy for any container in the cluster.
- Working on the movement, coordination and duplication of containers from octohost to octohost.
- Improving the consistency and efficiency of octohost’s current set of base images. We will be starting from Ubuntu 14.04 LTS and rebuilding from there.
- Continuing to improve the traceability of HTTP requests through the proxy, to the container and back.
- Improving the performance wherever bottlenecks are found.
- Improving the documentation and setup process.
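On the traceability front, the usual pattern is to tag each request with an ID at the proxy and carry it through every hop, so the proxy’s logs and the container’s logs can be stitched together. Here is a minimal WSGI middleware sketch of that pattern in Python; it illustrates the general technique, not octohost’s actual implementation, and the header name X-Request-Id is just the common convention.

```python
import uuid

# Minimal WSGI middleware sketch: make sure every request carries an
# X-Request-Id header so it can be traced from the proxy, through the
# container, and back. Illustrative only, not octohost's code.

class RequestId:
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        # Reuse the proxy's ID if it already set one, otherwise mint one here.
        rid = environ.get("HTTP_X_REQUEST_ID") or uuid.uuid4().hex
        environ["HTTP_X_REQUEST_ID"] = rid

        def start(status, headers, exc_info=None):
            # Echo the ID on the response so the proxy can correlate
            # the request and response in its own logs.
            return start_response(
                status, headers + [("X-Request-Id", rid)], exc_info
            )

        return self.app(environ, start)
```

Wrapping an application in this middleware means every log line on either side of the proxy can include the same ID, which is the property we are after when tracing a request through the whole chain.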
What are some pain points that you’ve found? What do you think of our plans?