Aloak is the worst domain registrar I have ever used.

Late last year, I started to move 4 domain names off of Aloak - a registrar I had used for years. I had several concerns:

  1. They weren't responsive to any request I had made in the last few years - I always had to ask, re-ask, and keep asking for even small changes.

  2. Their web interface was abysmal and didn't work properly. I couldn't change items I needed to change.

  3. Their SSL certificate had actually expired in 2010.

After a couple of months, I had to abandon the effort - I stopped emailing them after nothing was done.

In May of 2014 I picked up the effort again, and in June it was finally done - we had enlisted DNSimple and their Concierge service to get the transfers completed.

On July 9th I emailed a customer of ours and asked them to change their domain name server records - unfortunately their registrar was also Aloak - the worst domain registrar ever.

As I had previously experienced, the domain name changes that had been requested just weren't done.

We kept trying throughout July, August and now into September, and the domain name servers still haven't been changed. Every so often we get a response like this:

Today - we got this response:

Over 3 months to change some domain name records - and it still hasn't been done.

Hey CIRA - they're "CIRA Certified"? Can you guys do anything about this?

I would transfer the domain name - but the last time it took approximately 6 months.

Any ideas for my client?

TestKitchen, Dropbox and Growl - a remote build server

I've been working on a lot of Chef cookbooks lately. We've been upgrading some old ones, adding tests, integrating them with TestKitchen and generally making them a lot better.

As a result, we've been running a ton of integration tests. Once you add a few test suites, a cookbook that targets 3 different platforms turns into a 9 VM run. While it doesn't take a lot of memory, it certainly takes a lot of horsepower and time to launch 9 VMs, converge them, and then run the integration tests.

I have a few machines in my home office, and I've been on the lookout for more efficient ways to use them. Here's one great way to pretty effortlessly use a different (and possibly more powerful) machine to run your tests.

Why would you want to do this?

You may not always be working on your most powerful machine, or you may be doing other things that you'd like additional horsepower for on your local machine - so why not use an idle machine to run the tests for you?

What obstacles do we need to overcome?

  1. We need to get the files we're changing from one machine to another.
  2. We need to get that machine to automatically run the test suites.
  3. We need to get the results of those test suites back to the other machine.

What do you need?

  1. A cookbook to test using TestKitchen.
  2. Dropbox installed and working on both machines. (This helps with #1 above.)
  3. Growl installed on both machines. Make sure to enable forwarded notifications and enter passwords where needed. (This helps with #3 above.)
  4. Growlnotify installed on the build machine - it can also be installed via Homebrew: brew cask install growlnotify
  5. Guard and Growl gems - here's an example Gemfile. (This helps with #2 above.)
  6. A Guardfile with Growl notification enabled - here's an example Guardfile
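For reference, here's a minimal sketch of what those two files might contain - the gem names and watch patterns below are assumptions based on the tools mentioned above, so adjust them to match your cookbook's actual tooling:

```ruby
## Gemfile (sketch - gem names are assumptions)
source 'https://rubygems.org'

gem 'test-kitchen'
gem 'guard'
gem 'guard-kitchen'
gem 'guard-rspec'
gem 'ruby_gntp' # GNTP gem so Guard can talk to Growl

## Guardfile (sketch - watch patterns are assumptions)
notification :gntp

guard 'kitchen' do
  watch(%r{^recipes/.+\.rb$})
  watch(%r{^attributes/.+\.rb$})
  watch(%r{^templates/.+$})
end

guard :rspec, cmd: 'bundle exec rspec' do
  watch(%r{^spec/.+_spec\.rb$})
end
```

The `notification :gntp` line is what routes Guard's pass/fail messages through Growl so they can be forwarded to your development machine.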

How do you start?

On the build box:

Change to the directory where you have your cookbook and run guard.

This will start up Guard, which runs any lint/syntax tests, runs kitchen create for all of your integration suite and platform targets, and then waits for changes. Some sample output is below:

darron@: guard
11:33:24 - INFO - Guard is using Growl to send notifications.
11:33:24 - INFO - Inspecting Ruby code style of all files
Inspecting 16 files

16 files inspected, no offenses detected
11:33:25 - INFO - Linting all cookbooks

11:33:26 - INFO - Guard::RSpec is running
11:33:26 - INFO - Running all specs
Run options: exclude {:wip=>true}

Finished in 0.58145 seconds (files took 2.07 seconds to load)
9 examples, 0 failures

11:33:30 - INFO - Guard::Kitchen is starting
-----> Starting Kitchen (v1.2.1)
-----> Creating <default-ubuntu-1004>...
       Bringing machine 'default' up with 'virtualbox' provider...
       ==> default: Importing base box 'chef-ubuntu-10.04'...
       ==> default: Matching MAC address for NAT networking...
       ==> default: Setting the name of the VM: default-ubuntu-1004_default_1408728822930
       ==> default: Clearing any previously set network interfaces...
       ==> default: Preparing network interfaces based on configuration...
           default: Adapter 1: nat
       ==> default: Forwarding ports...
           default: 22 => 2222 (adapter 1)
       ==> default: Booting VM...
       ==> default: Waiting for machine to boot. This may take a few minutes...
           default: SSH address:
           default: SSH username: vagrant
# Lots of output snipped....
-----> Creating <crawler-ubuntu-1404>...
       Bringing machine 'default' up with 'virtualbox' provider...
       ==> default: Importing base box 'chef-ubuntu-14.04'...
       ==> default: Matching MAC address for NAT networking...
       ==> default: Setting the name of the VM: crawler-ubuntu-1404_default_1408729156367
       ==> default: Fixed port collision for 22 => 2222. Now on port 2207.
       ==> default: Clearing any previously set network interfaces...
       ==> default: Preparing network interfaces based on configuration...
           default: Adapter 1: nat
       ==> default: Forwarding ports...
           default: 22 => 2207 (adapter 1)
       ==> default: Booting VM...
       ==> default: Waiting for machine to boot. This may take a few minutes...
           default: SSH address:
           default: SSH username: vagrant
           default: SSH auth method: private key
           default: Warning: Connection timeout. Retrying...
       ==> default: Machine booted and ready!
       ==> default: Checking for guest additions in VM...
       ==> default: Setting hostname...
       ==> default: Machine not provisioning because `--no-provision` is specified.
       Vagrant instance <crawler-ubuntu-1404> created.
       Finished creating <crawler-ubuntu-1404> (0m45.51s).
-----> Kitchen is finished. (6m16.96s)
11:39:48 - INFO - Guard is now watching at '~/test-cookbook'
[1] guard(main)>

All of these suites and their respective platforms are now ready:

darron@: kitchen list
Instance             Driver   Provisioner  Last Action
default-ubuntu-1004  Vagrant  ChefZero     Created
default-ubuntu-1204  Vagrant  ChefZero     Created
default-ubuntu-1404  Vagrant  ChefZero     Created
jenkins-ubuntu-1004  Vagrant  ChefZero     Created
jenkins-ubuntu-1204  Vagrant  ChefZero     Created
jenkins-ubuntu-1404  Vagrant  ChefZero     Created
crawler-ubuntu-1004  Vagrant  ChefZero     Created
crawler-ubuntu-1204  Vagrant  ChefZero     Created
crawler-ubuntu-1404  Vagrant  ChefZero     Created

On your development box:

Once kitchen create is complete - if you've set up Dropbox and Growl correctly - you should get a notification on your screen. Here are the notifications I received:

In my case, Guard ran some syntax/lint tests, RSpec tests, and then got all of the integration platforms and suites ready to go.

Let's get our tests to run automagically.

In your cookbook, make a change to your code and save it.

Very quickly (a couple of seconds in my case), Dropbox will send your file to the other machine, Guard will notice that a file has changed and will run the tests automatically. If you're working on your integration tests, it will run a kitchen converge and kitchen verify for each suite and platform combination.

Once that's complete, you should get a notification on your screen - this is what I see:

If you're working on some ChefSpec tests, this may be what you'd see:

To sum it up - this allows you to:

  1. Develop on one machine.
  2. Run your builds on another.
  3. Get notifications when the builds are complete.
  4. Profit.

If you've got a spare machine lying around your office - maybe even an underutilized Mac Pro - give it a try!

Any questions? Any problems? Let me know!

The recent octohost changes - where we're headed.

Late last year, octohost was created as a system to host websites:

  1. With little or no manual intervention.
  2. With very little regard to underlying technology or framework.
  3. As a personal mini-PaaS modeled after Heroku with a git push interface to deploy these sites.
  4. Using disposable, immutable and rebuildable containers of source code.

What have we found?

  1. Docker is an incredible tool that takes containers on Linux to the next level.
  2. If you keep your containers simple and ruthlessly purge unnecessary features, they can run uninterrupted for long periods of time.
  3. Having the ability to install anything in a disposable container is awesome.
  4. You can utilize your server resources much more efficiently using containers to host individual websites.

As we've been using it, we've also been thinking about ways to make it better:

  1. How can we make it faster?
  2. How can we make it simpler and more reliable?
  3. How big can we make it? How many sites can we put on a single server?
  4. How can we combine multiple octohosts together as a distributed cluster that's bigger and more fault-tolerant than a single one?
  5. How can we run the same container on different octohosts for fault-tolerance and additional scalability for a particular website?
  6. How can we persist configuration data beyond the lifecycle of the disposable container?
  7. How can we distribute and make this configuration data available around the system?
  8. How can we integrate remote data stores so that we can still keep the system itself relatively disposable?
  9. How can we trace an HTTP request through the entire chain from the proxy, to container and back?
  10. How can we lower the barrier to entry so that it can be built/spun up easier?

A number of these have been 'accomplished', and we've made several large changes to help enable the next phases of octohost's lifecycle.

  1. We replaced the Hipache proxy with Openresty which immediately sped everything up and allowed us to use Lua to extend the proxy's capabilities.
  2. We moved from etcd to Consul to store and distribute our persistent configuration data. That change allowed us to make use of Consul's Services and Health Check features.
  3. We removed the tentacles container, which used Ruby, Sinatra and Redis to store a website's endpoint. Due to how it was hooked up to nginx, it was queried on every hit so that nginx knew which endpoint to route the request to. The data model was also limited to a single endpoint and required a number of moving parts. I like fewer moving parts - removing it was a win in many ways.
  4. We refactored the octo command and the gitreceive script which enabled the launching of multiple containers for a single site.
  5. We added a configuration flag to use a private registry, so that an image only has to be built once and can be pulled onto other members of the cluster quickly and easily.
  6. We added a plugin architecture for the octo command, and the first plugin was for MySQL user and database creation.
  7. We replaced tentacles with the octoconfig gem that pulls the Service and configuration data out of Consul and writes an nginx config file. The gem should be extensible enough that we can re-use it for other daemons as needed.

So what are we working on going forward?

  1. Getting octohost clustered easily and reliably. At a small enough size and workload, each system should be able to proxy for any container in the cluster.
  2. Working on the movement, co-ordination and duplication of containers from octohost to octohost.
  3. Improving the consistency and efficiency of octohost's current set of base images. We will be starting from Ubuntu 14.04 LTS and rebuilding from there.
  4. Continuing to improve the traceability of HTTP requests through the proxy, to the container and back.
  5. Improving the performance wherever bottlenecks are found.
  6. Improving the documentation and setup process.

What are some pain points that you've found? What do you think of our plans?

Send any comments to Darron or hit us up on Twitter.

Getting Apache basic authorization working using mod_authn_dbd and MySQL on Ubuntu 14.04 LTS (Trusty).

I'm converting a number of old websites that were using mod_auth_mysql - which doesn't work anymore - and was having a hard time finding clear, concise and working information.

First off - DO NOT INSTALL libapache2-mod-auth-mysql - it doesn't work with Apache 2.4, and I'm not sure why it's still in Ubuntu.

Here's how to get Apache 2.4 / mod_authn_dbd and MySQL to play nicely together:

apt-get install apache2 apache2-utils
apt-get install mysql-server-5.6
apt-get install libaprutil1-dbd-mysql

Create a MySQL user that you can query your databases with.
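A sketch of that step - the user, password and database names below are the same placeholders used in the config that follows, so substitute your own:

```sql
-- Create a read-only user for Apache's password lookups.
-- 'username_here', 'password_here' and 'database_name_goes_here'
-- are placeholders - substitute your own values.
CREATE USER 'username_here'@'localhost' IDENTIFIED BY 'password_here';
GRANT SELECT ON database_name_goes_here.* TO 'username_here'@'localhost';
FLUSH PRIVILEGES;
```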

Once that's done, let's set up the global dbd_mysql configuration in this file: /etc/apache2/conf-available/dbd_mysql.conf:

DBDriver mysql
DBDParams "host= port=3306 user=username_here pass=password_here"
DBDMin  2
DBDKeep 4
DBDMax  10
DBDExptime 300

Now you need to enable a number of modules and this new configuration file:

a2enmod dbd
a2enmod authn_dbd
a2enconf dbd_mysql

Now configure the virtualhost where you need the Basic authentication - add something like this:

DBDParams "dbname=database_name_goes_here"

<Directory /var/www/password-protected-site>
  AuthName "You Must Login"
  AuthType Basic
  AuthBasicProvider dbd
  AuthDBDUserPWQuery "SELECT encrypt(password) AS password FROM password WHERE username = %s"
  Require valid-user
</Directory>

NOTE: The 'encrypt(password)' in the SQL statement is because the legacy information I'm moving over is in plaintext. If you've got your passwords encrypted, then you can use one of the options here and skip the encrypt call.

I am using a password table that looks like this:

CREATE TABLE `password` (
  `id` int(11) unsigned NOT NULL auto_increment,
  `username` varchar(255) default NULL,
  `password` varchar(255) default NULL,
  PRIMARY KEY (`id`)
);

Insert a user and password into the table, then service apache2 restart and you're ready to go.
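For example, with a made-up test user (stored in plaintext here because of the encrypt() call in the query above):

```sql
-- 'testuser' / 'secret' are hypothetical example credentials.
INSERT INTO `password` (`username`, `password`) VALUES ('testuser', 'secret');
```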

Hopefully this helps - I know I was pretty frustrated this afternoon with all the misinformation I found online.

Using logspout to get Docker container logs into Papertrail.

Two days ago, Jeff Lindsay released logspout - a Docker container that is:

A log router for Docker container output that runs entirely inside Docker. It attaches to all containers on a host, then routes their logs wherever you want.

As soon as I saw it, I knew that I had to see how I could get the logs out of my Docker containers and into something like Papertrail.
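For reference, launching logspout and pointing it at Papertrail looks roughly like this - the Papertrail host and port below are hypothetical placeholders, so use the log destination from your own Papertrail account:

```shell
# Run logspout as a container, mounting the Docker socket so it can
# attach to every container's output on the host.
# logs.papertrailapp.com:55555 is a placeholder destination.
docker run -d --name logspout \
  -v /var/run/docker.sock:/tmp/docker.sock \
  progrium/logspout \
  syslog://logs.papertrailapp.com:55555
```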

With our current Docker setup, we see the logs come into the HTTP proxy server and then out - but there wasn't a great way to see the logs from inside each Docker container. We have servers with a few dozen containers - we were really missing the visibility that comes with being able to see the logs easily.

NGINX Plus allows you to log to a remote syslog destination, but we're not using it as it would be cost prohibitive with our setup. Some online posts that talk about "how" to do it either want you to run another daemon or a log-tailing utility. That seems a little kludgy - I don't want to manage more running processes.

The post that finally helped me solve it was here:

daemon off;
error_log /dev/stdout info;

http {
  access_log /dev/stdout;
}

I tried to create those devices in my Dockerfile:

RUN cd /dev && MAKEDEV fd

The devices were created when I built the image, but they didn't actually show up when I launched the container. Then I noticed that they were already there - just symlinks into /proc:

root@bd9e6c27ddce:/dev# MAKEDEV fd
root@bd9e6c27ddce:/dev# ls -l
total 0
crw------- 1 root root 136, 3 May 15 00:31 console
lrwxrwxrwx 1 root root     13 May 15 00:31 fd -> /proc/self/fd
crw-rw-rw- 1 root root   1, 7 May 15 00:31 full
crw-rw-rw- 1 root root   1, 3 May 15 00:31 null
lrwxrwxrwx 1 root root      8 May 15 00:31 ptmx -> pts/ptmx
drwxr-xr-x 2 root root      0 May 15 00:31 pts
crw-rw-rw- 1 root root   1, 8 May 15 00:31 random
drwxrwxrwt 2 root root     40 May 15 00:31 shm
lrwxrwxrwx 1 root root      4 May 15 00:31 stderr -> fd/2
lrwxrwxrwx 1 root root      4 May 15 00:31 stdin -> fd/0
lrwxrwxrwx 1 root root      4 May 15 00:31 stdout -> fd/1
crw-rw-rw- 1 root root   5, 0 May 15 00:31 tty
crw-rw-rw- 1 root root   1, 9 May 15 00:31 urandom
crw-rw-rw- 1 root root   1, 5 May 15 00:31 zero

So - one simple change to our nginx config - and we have all of our nginx logs from a Docker instance aggregated in one place:

This works with Apache too - should work with almost anything.

I have some work ahead of me to make the logs more useful and have better information in them - but at least now I can see what's happening inside each container without having to type docker logs over and over and over again.

Thanks Jeff - like I said the other night - you write some bad-ass tools.