When you build tons of kernels every day like my team does, you look for speed improvements anywhere you can find them. Caching repositories, artifacts, and compiled objects speeds up kernel builds and reduces infrastructure costs.

Need for speed

We use GitLab CI in plenty of places, which means we have a lot of gitlab-runner configurations for OpenShift (using the kubernetes executor) and AWS (using the docker-machine executor). The runner’s built-in caching makes it easy to upload and download cached items from object storage services such as Google Cloud Storage or Amazon S3.

However, there’s an often overlooked feature hiding in the configuration for the docker executor that provides a great performance boost: mounting tmpfs inside your container. Not familiar with tmpfs? Arch Linux has a great wiki page for tmpfs, and James Coyle has a well-written blog post about how it differs from the older ramfs.

RAM is much faster than your average cloud provider’s block storage. It also has incredibly low latency relative to most storage media. There’s a great interactive latency page that allows you to use a slider to travel back in time to 1990 and compare all kinds of storage performance numbers. (It’s really fun! Go drag the slider and be amazed.)

Better yet, many cloud providers give you lots of RAM per CPU on their instances, so if your work isn’t terribly memory intensive, you can use a lot of this RAM for faster storage.
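As a rough illustration (not part of the original setup), you can compare write throughput on a tmpfs mount against regular disk. This sketch assumes a Linux machine where /dev/shm is a tmpfs mount, which is true on most distributions:

```shell
# Write 16 MiB to tmpfs (/dev/shm) and to /tmp, timing each.
# conv=fdatasync forces the second write to actually reach storage.
# Note: on some systems /tmp is itself a tmpfs, which would hide the gap.
dd if=/dev/zero of=/dev/shm/ramtest bs=1M count=16 conv=fdatasync
dd if=/dev/zero of=/tmp/disktest bs=1M count=16 conv=fdatasync
rm -f /dev/shm/ramtest /tmp/disktest
```

The dd timing lines make the latency difference obvious at a glance; on cloud block storage the gap is usually dramatic.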

Enabling tmpfs in Docker containers

⚠️ Beware of the dangers of tmpfs before adjusting your runner configuration! See the warnings at the end of this post.

This configuration is buried in the middle of the docker executor documentation. You will need to add some extra configuration to your [runners.docker] section to make it work:

[runners.docker]
  [runners.docker.tmpfs]
      "/ramdisk" = "rw,noexec"

This configuration mounts a tmpfs volume underneath /ramdisk inside the container. By default, this directory will be mounted with noexec, but if you need to execute scripts from that directory, change noexec to exec:

[runners.docker]
  [runners.docker.tmpfs]
      "/ramdisk" = "rw,exec"

In our case, compiling kernels requires executing scripts, so we use exec for our tmpfs mounts.
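You can confirm which flags a tmpfs mount actually ended up with from inside the container. This check is not from the runner documentation; it simply reads /proc/mounts, which lists active mounts and their options on any Linux system:

```shell
# Show tmpfs mounts and their options; look for exec or noexec
# on /ramdisk (the mount point from the runner config above).
grep tmpfs /proc/mounts
```

If /ramdisk shows noexec in its option list, scripts run from it will fail with "Permission denied" even when their execute bit is set.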

You must specify exec explicitly! As an example, this tmpfs volume will be mounted with noexec since that is the default:

[runners.docker]
  [runners.docker.tmpfs]
      "/ramdisk" = "rw"

Extra speed

For even more speed, we moved the objects generated by ccache to the ramdisk. Access latency there is much lower, which lets ccache find its cached objects much more quickly.
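One way to wire this up (the paths and job name here are illustrative, not from our actual pipeline) is to point ccache’s CCACHE_DIR environment variable at the ramdisk in .gitlab-ci.yml:

```yaml
# .gitlab-ci.yml sketch: keep the ccache object store on the tmpfs mount.
# /ramdisk is the mount point from the runner configuration above.
variables:
  CCACHE_DIR: /ramdisk/ccache

build-kernel:
  script:
    - ccache --zero-stats
    - make -j"$(nproc)"
    - ccache --show-stats   # confirm hits are landing in the ramdisk cache
```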

Git repositories are also great things to stash on tmpfs. Big kernel repositories are usually 1.5GB to 2GB in size with tons of files. Checkouts are really fast when they’re done in tmpfs.
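One way to get those checkouts onto the ramdisk (an assumption on our part, not a detail the runner docs spell out for this use case) is to point the runner’s builds_dir at the tmpfs mount, so clones land there by default:

```toml
[[runners]]
  # Put working copies on the tmpfs mount defined below, so git clones
  # and checkouts happen in RAM. Path is illustrative.
  builds_dir = "/ramdisk/builds"
  [runners.docker]
    [runners.docker.tmpfs]
        "/ramdisk" = "rw,exec"
```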

Dangers are lurking

⚠️ As mentioned earlier, beware of the dangers of tmpfs.

  • All of the containers on the machine will share the same amount of RAM for their tmpfs volumes. Be sure to account for how much each container will use and how many containers could be present on the same machine.

  • Be aware of how much memory your tests will use when they run. In our case, kernel compiles can consume 2-4GB of RAM, depending on configuration, so we try our best to leave some memory free.

  • By default, these volumes have no limit on how much data they can hold. If you put too much data into a tmpfs volume and the system runs critically low on available RAM, you could see a huge drop in performance, system instability, or even a crash. 🔥
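If you want a hard cap, Docker’s tmpfs mounts accept a size option. Assuming gitlab-runner passes these mount options through to Docker unchanged (worth verifying against your runner version), the volume could be limited like this:

```toml
[runners.docker]
  [runners.docker.tmpfs]
      # size=4g caps the volume: writes beyond it fail with ENOSPC
      # instead of eating all available RAM. Option pass-through is an
      # assumption — check it against your gitlab-runner version.
      "/ramdisk" = "rw,exec,size=4g"
```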
