Sorry Chef

Metaphoric link-bait subtitle: Hey there Chef, I still like to cook, but Docker is a microwave, and Heroku is a dinner buffet.

I’ve been a longtime supporter of using Chef to describe server infrastructure in code. Chef is powerful and flexible, and I’ve enjoyed learning it and working with it (most of the time). I’ve written about Chef. I’ve given talks on Chef. I’ve created walkthroughs of setting up applications with Chef and Capistrano.

After using Chef for as long as I have, it is clear that there are some issues, and I’m finding other approaches to server deployment are better options. I’m going to run through the issues I see with Chef, then discuss why I’m leaning towards deploying things with Docker. Finally, I’ll explain why, in other cases, I’d prefer to just use Heroku.

Chef’s Issues

Deployment

Despite efforts to standardize my Chef approach, there is always something different with each project. Slight differences between environments cause complexity and frustration. Comparing my Vagrant VM, Linode’s Ubuntu 14.04, and Amazon’s Ubuntu 14.04, the little differences add up. Even the pre-installed package list, which Chef is actually pretty good at normalizing, can trip you up: one platform may ship extra packages, and Chef will only remove them if you explicitly instruct it to.

The setup and configuration of networking is where I have the biggest issues. My local Vagrant networking setup is the most unlike the others: it uses DHCP and a different IP address range, and adds a layer of NAT. Sure, there are other networking options in Vagrant, but DHCP seems to work the best. Then in production there is often an AWS or Linode load balancer in front. I haven’t found great ways to abstract those differences, and the simple solution adds configuration noise: loading different configuration files per environment.
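That noise looks something like this in practice. A minimal sketch of an attributes file, assuming illustrative attribute names and a ‘vagrant’ Chef environment:

    # Pick networking attributes based on where the node is running.
    # Attribute and environment names here are hypothetical.
    case node.chef_environment
    when 'vagrant'
      default['app']['bind_address'] = node['ipaddress'] # DHCP-assigned, behind NAT
    else
      default['app']['bind_address'] = '0.0.0.0' # behind the cloud load balancer
    end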

Provisioning

Chef runs always seem to be a process of trial and error: run, see what fails, run again. The failures are often due to networking issues, mirrors going down, and the like. Sure, I could code around some of those issues, with a larger list of mirrors, for example, but I just want it to work the first time.
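One small mitigation: Chef resources take a retries property, so a transient mirror or network hiccup doesn’t kill the whole run. A hedged sketch, with an illustrative package name:

    # Retry a flaky package install a few times before failing the run.
    package 'imagemagick' do
      retries 3
      retry_delay 15 # seconds between attempts
    end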

BitRot

This is by far the worst part of my Chef experience. When returning to a project a couple of months later, I can almost guarantee something isn’t going to work. Sometimes the operating system no longer lists the version of a package you are using. Other times it’s as simple as a mirror missing from the list in your Chef recipe. Sometimes a package update just needs to confirm a configuration change, and that prompt for confirmation will cause your noninteractive apt-get to fail.
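For the confirmation-prompt case specifically, there is a standard apt-level workaround: tell dpkg to keep the existing config files so nothing stops to ask. This is generic apt-get usage, not anything Chef-specific:

    # Keep existing config files on upgrade instead of prompting.
    sudo DEBIAN_FRONTEND=noninteractive apt-get -y \
      -o Dpkg::Options::="--force-confdef" \
      -o Dpkg::Options::="--force-confold" \
      upgrade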

Flexibility

Chef’s flexibility is both good and bad. Building recipes that run on many host operating systems takes effort. As a consultant, I can’t normalize everything to the same host operating system; different customers have different requirements or existing machines. Any recipes I write need to be portable, or need to be adjusted each time we use them. It is good that we can do this, but it also becomes a support nightmare.

Docker’s Advantages

Deployment

One of the greatest things about Docker is how little setup it requires. Once you have a compatible host operating system, Docker is simple to install. And once the Docker service and client are working, you just authenticate to your Docker registry, pull the images you need, and start them up with little effort.
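The whole deployment story fits in a few commands. A sketch, assuming an image already pushed to a private registry (the registry, image name, and port are illustrative):

    docker login registry.example.com
    docker pull registry.example.com/myapp:1.4.2
    docker run -d --name myapp -p 8080:8080 registry.example.com/myapp:1.4.2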

Provisioning

In the beginning I had a hard time figuring out how to get things set up and running with Docker. Now, with most operating systems moving to systemd, my approach to starting and running containers is nearly universal: no matter what host operating system I’m using, my systemd unit files and Docker images should be portable.
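Here is a minimal sketch of the kind of unit file I mean, assuming an illustrative service name, image, and port:

    [Unit]
    Description=myapp container
    Requires=docker.service
    After=docker.service

    [Service]
    # Clean up any stale container, then run the pinned image in the
    # foreground so systemd can supervise it.
    ExecStartPre=-/usr/bin/docker rm -f myapp
    ExecStart=/usr/bin/docker run --rm --name myapp -p 8080:8080 registry.example.com/myapp:1.4.2
    ExecStop=/usr/bin/docker stop myapp
    Restart=always

    [Install]
    WantedBy=multi-user.target

Drop that in /etc/systemd/system/myapp.service, and enabling the app is the same systemctl enable myapp on any systemd host.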

BitRot

If the image is in a repository, it will always be the same, no matter how long it has been. This is like putting all your executables and supporting files into Git: Docker gives you that reproducible file system guarantee. Six or more months down the line, I should be able to pull that image and run it. That is reassuring for getting back into development on an idle project quickly. Sure, we’d want to update packages for security fixes and so on, but being able to pick a project back up without failed Chef runs is a great feeling.

Flexibility

The container I build on my Vagrant VM works everywhere (on the same CPU architecture). That alone is a wonderful thing.

Disposable nature

Heroku is powerful and flexible if you accept that the running dyno is disposable, but Heroku also has some downsides. Using a similar approach of disposable Docker containers, you can get similar power and flexibility, except that with Docker you can host your containers anywhere, with no vendor lock-in. Taking the approach of versioning your entire running application as a Docker image makes deployments and rollbacks as easy as Heroku makes them. You get powerful flexibility and control, as long as you can manage your data correctly in volumes or external services.
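Concretely, a deploy or a rollback is just a matter of which image tag you run. A sketch with illustrative names and tags:

    # Deploy version 42.
    docker pull registry.example.com/myapp:v42
    docker stop myapp && docker rm myapp
    docker run -d --name myapp -p 8080:8080 registry.example.com/myapp:v42

    # Rolling back to v41 is the same dance with the previous tag.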

What Chef is still good for

Service Setup

Chef is great for installing infrastructure services (nginx, PostgreSQL, Redis, Docker) with reasonable defaults. Creating and maintaining basic service configuration is a place where Chef can really shine. I still plan to use Chef for setting up databases, configuring user accounts, and establishing default configuration.
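A hedged sketch of the kind of recipe I mean, with illustrative package and template names:

    # Install nginx with a sane default site; let Chef keep the
    # config file and the running service in line.
    package 'nginx'

    service 'nginx' do
      action [:enable, :start]
    end

    template '/etc/nginx/sites-available/default' do
      source 'default-site.erb'
      notifies :reload, 'service[nginx]', :delayed
    end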

Consistent installs on the same base operating system

Yes, Chef should work across any distribution, but there are differences in what you start with. Chef won’t automatically remove those differences; you need to code around them. Having the power and flexibility to do so is still worth something.

Insanely flexible

Chef is Ruby code for your servers: if you can think it up and express it in Ruby, you can do it. If that is important to you, consider sticking with Chef, or at least keeping it as part of your toolset.
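As a small illustration (the resource name and file path are made up), you can drop into plain Ruby at converge time whenever the built-in resources don’t cover you:

    # Arbitrary Ruby running during the converge.
    ruby_block 'record converge time' do
      block do
        File.write('/etc/converged_at', "#{Time.now}\n")
      end
    end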

Does it matter? Consider Heroku

My advice is that if your application can run on Heroku, start there. I used to feel Heroku was too expensive to make sense: as a recurring cost for a new startup or a bootstrapped business, Heroku seems expensive compared to a Linode or DigitalOcean server, and considering the RAM allocations on Heroku, that is even more true. But Heroku is so simple and easy that you should start there anyway. If you have a lot of free time to set up your own solution, you can save money that way, but once you factor in the time spent building a custom setup, I would expect Heroku to be way cheaper.

Heroku is great until you don’t fit within its model; once you outgrow it, you will start to run into its limits.

What I’m doing now

I’m not rushing to move everything I’m currently supporting into Docker. I’ll continue to improve my usage of Chef in the places where I’m already using it; it is working for me in those environments.

Most new projects I spin up go on Heroku by default. It has such a low barrier to entry, and it is easy to get going. If Heroku’s pricing becomes an issue, or I want more control over things, then I’ll continue to use cloud VMs as I have in the past. Those VMs are where I’ll start to do things differently.

I expect to, at least for now, continue to use Chef to set up infrastructure like nginx, PostgreSQL, and Redis. I think I will start to put my application code into Docker images. I’ll keep Chef’s power and established patterns for handling well-known software packages, and I’ll remove the complexity of my application’s own deployment from Chef and Capistrano. Docker will make running my application code as simple as running any other easy-to-install binary on my virtual machine. In addition, Docker’s image concept makes released versions of code very clean and clear, and rolling back much easier. This approach lets me treat my application as ‘just another binary’ on the system, orchestrated there by Chef.
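A hedged sketch of what that hybrid could look like, with illustrative file and service names: Chef owns the unit file, Docker owns the application.

    # Chef drops the systemd unit that runs the app container...
    template '/etc/systemd/system/myapp.service' do
      source 'myapp.service.erb'
      notifies :run, 'execute[systemd-reload]', :immediately
    end

    execute 'systemd-reload' do
      command 'systemctl daemon-reload'
      action :nothing
    end

    # ...and keeps it enabled and running like any other service.
    service 'myapp' do
      action [:enable, :start]
    end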

Having so many great deployment options is a wonderful problem to have. These tools are all so powerful, you can’t really go wrong. I think the important thing to keep in mind is that the landscape is constantly changing, so keep your eyes open for new ways to make your life much easier when it comes to infrastructure management and application deployment.