Using octohost with Heroku Addons.

One of octohost's core assumptions is that the code you git push over, turn into a Docker container, and run is 'immutable' - it doesn't change at all. No file uploads. No databases inside the container.

If you're used to working with Heroku, this is no surprise at all - but you may still need to connect to a database or NoSQL data store.

octohost has some basic support for data stores, but I'm still not convinced that running MySQL, Postgresql, or any other data store inside a Docker container is a good idea. That still really scares me.

Heroku has amazing support for "Addons" - adding a Postgres database is as easy as: heroku addons:add heroku-postgresql. Adding MySQL: heroku addons:add cleardb. There are Addons for almost anything - and best of all, you don't have to manage them at all. Just add them, then use them from your app.

After we recently added the ability to specify environment variables for octohost with octo config, I started thinking:

What if I deployed an octohost in us-east-1 (where Heroku is located) - configured some Addons for a non-existent Heroku app - and then used those Addons from octohost?

Turns out that this is pretty easy now - let's create a Heroku app and attach Redis and Postgresql to it:

[master] darron@~/Dropbox/src/octoservices: heroku create octoservices
Creating octoservices... done, stack is cedar
Git remote heroku added
[master] darron@~/Dropbox/src/octoservices: heroku addons:add redistogo
Adding redistogo on octoservices... done, v3 (free)
Use `heroku addons:docs redistogo` to view documentation.
[master] darron@~/Dropbox/src/octoservices: heroku addons:add heroku-postgresql
Adding heroku-postgresql on octoservices... done, v4 (free)
Database has been created and is available
 ! This database is empty. If upgrading, you can transfer
 ! data from another database with pgbackups:restore.
Use `heroku addons:docs heroku-postgresql` to view documentation.
[master] darron@~/Dropbox/src/octoservices: heroku config
=== octoservices Config Vars
REDISTOGO_URL:              redis://

So we have a Heroku app created, nothing's pushed and we have a couple of data stores provisioned to it.

Now let's connect those data stores to our octohost app - from the octohost server:

ubuntu@server:~$ octo config:set herokuleech/REDISTOGO_URL "redis://"
ubuntu@server:~$ octo config:set herokuleech/DATABASE_URL "postgres://"
ubuntu@server:~$ octo config herokuleech

We've set the environment variables, which are stored in etcd - for the octohost app named 'herokuleech'.

Now we push the app over to our octohost - and refer to: ENV['REDISTOGO_URL'] and ENV['DATABASE_URL'].
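
Inside the app, those variables can be read from the environment like any other configuration. Here's a minimal Ruby sketch of that pattern - the connection URLs below are hypothetical placeholders for the real (redacted) addon values, and parsing uses the stdlib URI class rather than an actual Redis or Postgres client:

```ruby
require "uri"

# Hypothetical stand-ins for the real Heroku addon URLs set via octo config.
ENV["REDISTOGO_URL"] ||= "redis://redistogo:secret@catfish.redistogo.com:9402/"
ENV["DATABASE_URL"]  ||= "postgres://user:pass@ec2-1-2-3-4.compute-1.amazonaws.com:5432/dbname"

# Parse the URLs into the pieces a client library would need.
redis = URI.parse(ENV["REDISTOGO_URL"])
db    = URI.parse(ENV["DATABASE_URL"])

puts "Redis host: #{redis.host}, port: #{redis.port}, password: #{redis.password}"
puts "Postgres host: #{db.host}, database: #{db.path.delete_prefix('/')}"
```

In a real app you'd hand those parsed values (or the raw URL) straight to redis-rb or pg - the point is that the container itself stays stateless.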

Here's an example app - herokuleech. It uses Postgresql, Redis and Sendgrid - all from Heroku.

Give it a try - this gives you:

  1. The ability to discard your app containers at will.
  2. The ability to upgrade your octohost without worrying about your data stores at all.
  3. The ability to use many of the Heroku Addons that exist - and there are a lot of them.

Note that:

  1. You don't actually have to use Heroku or their Addons.
  2. You can now run your services outside of octohost and just refer to them via Environment Variables.

Using octo config - easy environment variables - on octohost.

I was working on a problem this weekend - not a super hard problem - but an annoying one:

When you've got a system that is supposed to trigger at certain times, how can you verify that it's actually happening?

The system I was working with is web-based - it hits a specific URL at a specific time. It's called keepalive and it runs on Heroku. It hits a few URLs daily - but I wanted to make sure:

  1. It was working at all.
  2. It was happening at the right time.

So I built a quick little hack to do that - trigger emails when a specific URL is hit - and I uploaded it to Heroku.
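
The core idea can be sketched in a few lines of Ruby - this is not Canary's actual code, just an illustration of recording when a URL is hit and checking that a hit landed inside an expected daily window:

```ruby
# Toy model of the "did keepalive fire, and at the right time?" check.
# A real version would record hits from a web endpoint and send email.
class HitTracker
  def initialize
    @hits = []
  end

  # Record the time a monitored URL was hit.
  def record(time)
    @hits << time
  end

  # Was there a hit on the given day between `hour` and `hour` + tolerance?
  def hit_in_window?(hour, tolerance_minutes: 15, on: Time.now)
    window_start = Time.new(on.year, on.month, on.day, hour)
    window_end   = window_start + tolerance_minutes * 60
    @hits.any? { |t| t >= window_start && t <= window_end }
  end
end

tracker = HitTracker.new
tracker.record(Time.new(2014, 3, 10, 6, 5))                    # hit at 06:05
puts tracker.hit_in_window?(6, on: Time.new(2014, 3, 10, 12))  # true
puts tracker.hit_in_window?(9, on: Time.new(2014, 3, 10, 12))  # false
```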

I felt a little guilty when I added it to Heroku, because we're building octohost and this seemed like a perfect fit - a simple little web app with no database requirement. But octohost was missing one thing I really like: easily updatable environment variables. I really like Heroku's 12-factor pattern - especially as it relates to configuration.

On Sunday, I rebuilt the 'quick little hack' that I made on Saturday night and added a bunch of new features - I called it Canary. By the way - Canary did help me to see that keepalive was working correctly (other than the DST change) - so it fulfilled its purpose.

Monday morning I decided I'd add some easily configured environment variables to octohost. I patterned them after Heroku's Config Vars.

It's pretty easy to use:

octohost:/home/git# octo config canary
octohost:/home/git# octo config:set canary/TESTING "This is only a test."
This is only a test.
octohost:/home/git# octo config canary
/canary/TESTING:This is only a test.
octohost:/home/git# octo config:rm canary/TESTING

octohost:/home/git# octo config canary
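
Under the hood, octo config stores each variable in etcd, namespaced under the app's name (as the /canary/TESTING key above shows). Here's a toy in-memory model of those three commands - a sketch of the assumed key layout, not octohost's actual implementation:

```ruby
# In-memory stand-in for the etcd keyspace that backs `octo config`.
class OctoConfig
  def initialize
    @store = {}
  end

  # octo config:set app/KEY "value"
  def set(path, value)
    app, key = path.split("/", 2)
    @store["/#{app}/#{key}"] = value
    value
  end

  # octo config app - list every key under the app's namespace
  def list(app)
    @store.select { |k, _| k.start_with?("/#{app}/") }
          .map { |k, v| "#{k}:#{v}" }
  end

  # octo config:rm app/KEY
  def rm(path)
    app, key = path.split("/", 2)
    @store.delete("/#{app}/#{key}")
  end
end

config = OctoConfig.new
config.set("canary/TESTING", "This is only a test.")
puts config.list("canary")   # prints /canary/TESTING:This is only a test.
config.rm("canary/TESTING")
puts config.list("canary")   # prints nothing - the key is gone
```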

I'm pretty happy with how it all turned out.

You can see Canary here - more information on octohost here.

NOTE: We're only using octohost to serve really small sites at the moment. The more this works out, the more sites we'll be able to use on it.

100 days of commits.

100 days ago, I made a goal for myself to commit something to Github every day.

Github contributions

I was feeling a little stagnant and wanted to challenge myself and make sure that I was learning every day.

100 days later, I feel like I've kept my momentum.

I've written Chef cookbooks, created a Chef skeleton framework for myself, built a few websites, started an entire project (octohost), posted to my blog, learned about a whole bunch of new technologies (Serf, Docker, Laravel), built a whole bunch of AMIs, Droplets and Rackspace Images with Packer and Vagrant, wrote an article for Sysadvent, learned about Ansible, built some packages, updated some Heroku buildpacks, released a Capistrano 3 example repo and lots of other things.

Always. Be. Committing.

I've been feeling pretty renewed and invigorated as I have built and learned things I didn't know before.

I love it - going to see how long I can keep this up. At the pace I'm at, it's pretty sustainable - I just need to keep going, one commit at a time.

NOTE: If you're not logged into my account, it looks like I've only got 15 days in a row - but lots of my commits are to private repos. I wish those would show on the graph - but that's how it is.

Dweet and the Laravel PHP framework - a learning experience.

A few days ago, I found out about Dweet, which is billed as "Ridiculously simple data sharing for the Internet of Things."

The entire concept was pretty interesting to me. I liked the simplicity of it, and how it was so transient - made for machines to talk to machines. I also liked that if a message wasn't picked up in 24 hours, then it clearly wasn't important and that message disappeared.

I have also wanted to try a web framework called Laravel for a little while. Laravel bills itself as "the PHP framework for web artisans" and looked like something I could work with. It has all the right and proper buzzwords and seemed capable at first glance. At nonfiction we have been using mostly Ruby for our app creation for the last few years - even though our CMS is written in PHP - so I wanted to use a PHP framework that was more modern than our in-house toolset.

So I decided to re-implement Dweet in Laravel - and in a few hours - under 5 in total - I did.

Overall I'm pretty happy with the result and happy that I was able to bend Laravel to my will. I have started a few projects like this before and wasn't able to actually launch - even after spending more than 5 hours - and that's frustrating.

The result is up and running, and the code is available on Github.

It is mostly feature complete and on par with Dweet's public documentation. It provided a small project where I could get used to at least parts of the framework, and unlike some other tools, I was able to work through it fairly quickly and without too much hassle.

If you're interested in Dweet or Laravel, give it a spin.

If you've got a framework you want to try out, give this type of project a shot - it's a small enough feature set that's pretty easy to bite off and accomplish.

Virtual hosts and domain names with octohost.

On our main test server, we are running all sorts of different web applications.

When we deployed this octohost, we pointed a wildcard DNS record at the server.

Every time we push a new container, the octohost knows the name of the git repository and tells Hipache to direct all requests for that hostname to that container.

You can also add additional domain names in a CNAME file inside the repository. Here's an example with multiple domain records. Please note: one record I had to set up manually - because of the wildcard, the others are automatic.
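
Hipache routes requests by looking up Redis keys named frontend:&lt;domain&gt;, so each line of the CNAME file just needs to become one of those keys pointing at the container. Here's a Ruby sketch of that mapping - the domains and container address are made up, and octohost's exact key handling may differ:

```ruby
# Map each domain in a CNAME file to a Hipache frontend key and its backend.
def hipache_keys(cname_contents, container_url)
  cname_contents.each_line
                .map(&:strip)
                .reject(&:empty?)
                .map { |domain| ["frontend:#{domain}", container_url] }
end

# Hypothetical CNAME file contents and container address.
cname = <<~CNAME
  www.example.com
  example.com
  example.org
CNAME

hipache_keys(cname, "http://172.17.0.5:80").each do |key, target|
  puts "#{key} -> #{target}"
end
```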

Watch what happens when we push to the octohost:

[master] darron@~/Dropbox/src/octo-examples/virtual-host: git remote add octo
[master] darron@~/Dropbox/src/octo-examples/virtual-host: git push octo master
remote: Put repo in src format somewhere.
remote: Building Docker image.
remote: Base: virtual-test
remote: Nothing running - no need to look for a port.
remote: Uploading context 9.216 kB
remote: Uploading context 
remote: Step 0 : FROM octohost/nginx
remote:  ---> 664d4931580f
remote: Step 1 : ADD . /srv/www/
remote:  ---> 10f909ec8970
remote: Step 2 : EXPOSE 80
remote:  ---> Running in ff0858dad9cb
remote:  ---> 1aab2064e7c3
remote: Step 3 : CMD nginx
remote:  ---> Running in c64ba17e47c8
remote:  ---> 52c1ef70cfe3
remote: Successfully built 52c1ef70cfe3
remote: Adding
remote: Adding
remote: Adding
remote: Adding
remote: Not killing any containers.
remote: Your site is available at:
remote: Your site is available at:
 * [new branch]      master -> master

At the end, you can see each domain record being set up for Hipache - and all three point to the same location:


This deploy added another container to that server - for a total of 28 containers:

root@ip-10-250-22-233:~# octo status
ghost (56 MB): OK
hapi (16 MB): OK
harp (3 MB): OK
hexo (58 MB): OK
html (3 MB): OK
jekyll (3 MB): OK
kraken (51 MB): OK
martini (13 MB): OK
middleman (6 MB): OK
mojolicious (24 MB): OK
(4 MB): OK
octopress (3 MB): OK
padrino (31 MB): OK
perldancer (14 MB): OK
php5-nginx (24 MB): OK
rails2 (40 MB): OK
rails3 (54 MB): OK
rails4 (55 MB): OK
rails4-ruby-2.1 (65 MB): OK
ramaze (30 MB): OK
revel (18 MB): OK
sails (95 MB): OK
sinatra (27 MB): OK
slim (13 MB): OK
ssl (3 MB): OK
virtual-test (4 MB): OK
web.go (13 MB): OK
www (4 MB): OK

For another example, the container responds to:


One of the best things about Docker is how it encapsulates all of an application's dependencies inside an LXC container. That makes it possible to run applications with different - and even conflicting - dependencies on the same server.

On this one server we're running Ruby apps, Go apps, Node.js apps, Perl apps, PHP apps and static sites with Nginx - with all sorts of different versions of dependencies. Docker makes this easy and allows us to support all of these different languages and frameworks without a problem.

Doing that on a regular server without Docker would be terrifying and likely impossible.