major.io words of wisdom from a systems engineer

Very unscientific GlusterFS benchmarks

I’ve been getting requests for GlusterFS benchmarks from every direction lately and I’ve been a bit slow on getting them done. You may suspect that you know the cause of the delays, and you’re probably correct. ;-)

Quite a few sites argue that the default performance translator configuration generated by glusterfs-volgen doesn't allow for good performance, while others say you should stick with the script's defaults. I decided to run some simple tests to see which was true in my environment.
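For reference, glusterfs-volgen produces a client volume file that stacks performance translators on top of the cluster translator. Here's a sketch of the general shape — the hostnames, volume names, and option values are hypothetical, and the exact translator set varies by GlusterFS version (newer volgen output may also include quick-read and stat-prefetch):

```
volume server1
  type protocol/client
  option transport-type tcp
  option remote-host server1.example.com
  option remote-subvolume brick1
end-volume

volume server2
  type protocol/client
  option transport-type tcp
  option remote-host server2.example.com
  option remote-subvolume brick1
end-volume

volume replicate
  type cluster/replicate
  subvolumes server1 server2
end-volume

volume writebehind
  type performance/write-behind
  option cache-size 4MB
  subvolumes replicate
end-volume

volume readahead
  type performance/read-ahead
  option page-count 4
  subvolumes writebehind
end-volume

volume iocache
  type performance/io-cache
  option cache-size 64MB
  subvolumes readahead
end-volume
```

Each `volume … end-volume` stanza wraps the one named in its `subvolumes` line, so reads and writes pass down through the performance translators before hitting the replicated pair.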

Here’s the testbed:

  • GlusterFS 3.0.5 running on RHEL 5.4 Xen guests with ext3 filesystems
  • one GlusterFS client and two GlusterFS servers are running in separate Xen guests
  • cluster/replicate translator is being used to keep the servers in sync
  • the instances are served by a gigabit network

It’s about time for some pretty graphs, isn’t it?

[Graph: iozone re-reader benchmark results with default GlusterFS translators from glusterfs-volgen]

[Graph: iozone re-reader benchmark results with no GlusterFS translators]

The test run on the left used the stock client and server volume files as generated by glusterfs-volgen. The test run on the right used a client volume file with no performance translators (the server volume file was untouched). Between each test run, the GlusterFS mount was unmounted and remounted. I repeated this process four times (for a total of five runs) and averaged the data.
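For clarity, stripping the performance translators means the client volume file simply ends at the cluster/replicate translator. A sketch of what that looks like (hostnames and volume names here are hypothetical):

```
volume server1
  type protocol/client
  option transport-type tcp
  option remote-host server1.example.com
  option remote-subvolume brick1
end-volume

volume server2
  type protocol/client
  option transport-type tcp
  option remote-host server2.example.com
  option remote-subvolume brick1
end-volume

volume replicate
  type cluster/replicate
  subvolumes server1 server2
end-volume
```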

You’ll have to forgive the color mismatches and the lack of labeling on the legend (that’s KB/sec transferred) as I’m far from an Excel expert.

The graphs show that running without any performance translators drastically hinders read caching in GlusterFS, exactly as I expected. Without translators, performance is very even across the board. Since my instances had 256MB of RAM each, the io-cache translator was limited to about 51MB of cache. That's reflected in the graph on the left: look for the vertical red/blue divider between the 32MB and 64MB file sizes. I'll be playing around with that value soon to see how it can improve performance for large and small files.
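That ~51MB figure works out to roughly 20% of the guest's 256MB of RAM, which suggests the io-cache size was derived as a fraction of system memory. The 20% ratio is my assumption from the numbers, not something I've confirmed in the volgen script:

```shell
# Assumption: io-cache sized at ~20% of RAM. With 256MB guests:
echo $((256 * 20 / 100))   # prints 51 (MB), matching the cache limit above
```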

Keep in mind that this test was very unscientific and your results may vary depending on your configuration. While I hope to have more detailed benchmarks soon, this should help some of the folks who have been asking for something basic and easy to understand.