Today's guest post comes from Itamar Haber. Itamar is Chief Developer Advocate at Redis Labs, a commercial Redis provider and Google Cloud Platform partner.

Imagine this (and this is based on my favorite NoSQL joke, believe it or not):

You walk into a bar known for its flashiness and ask for two glasses of wine, one red, one white. Then you proceed to watch, awestruck, as the bartender uncorks at least 30 bottles of red wine in a fraction of a second, whirling and spinning, pouring a little of each into your glass. Sure, he spills some wine here and there, but what does it matter when he's putting on such a show? Then he does the same for your glass of white, except this time he uses 50 bottles.

Now imagine your expression when you get the bar tab, and you notice he’s billed you for 80 bottles of wine.

This is pretty much what we expect in the industry: flashy, high-speed servers, a big bill, and a little bit of messiness. But we recently found out that by using Google Compute Engine, we pay for only two servers, as opposed to 80, for the same speed.

Here’s how we found this out:

Backstory

A while back, we set out to answer a question: How many Google Compute Engine nodes do you need to serve one million operations per second? Given our experience with Google Cloud Platform, we were fairly confident that we'd need only a few.

Benchmark Setup

To run the benchmark, we chose the biggest Google Compute Engine nodes currently available in us-central: the n1-highcpu-16. We ended up using two such servers to run the Redis Labs Enterprise Cluster software (downloadable from here). We used another pair of servers, this time n1-highcpu-8, to generate the load with memtier_benchmark, using the following command-line arguments:

memtier_benchmark -s 10.0.0.1 -p 6379 --test-time=120 -d 100 -t 2 -c 50 --pipeline=50 --ratio=1:1
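For reference, here's how the three runs described below map to memtier_benchmark invocations, assuming only the --ratio flag (which controls the SET:GET mix) is varied between them:

# Read-only run: all GETs, no SETs
memtier_benchmark -s 10.0.0.1 -p 6379 --test-time=120 -d 100 -t 2 -c 50 --pipeline=50 --ratio=0:1

# Write-only run: all SETs, no GETs
memtier_benchmark -s 10.0.0.1 -p 6379 --test-time=120 -d 100 -t 2 -c 50 --pipeline=50 --ratio=1:0

# Mixed run: equal parts SETs and GETs
memtier_benchmark -s 10.0.0.1 -p 6379 --test-time=120 -d 100 -t 2 -c 50 --pipeline=50 --ratio=1:1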

Benchmark Results

We ran the benchmark three times: first serving only reads, then only writes, and finally an equal mix of both. The results from these runs are as follows:
  1. For read-only operations, our Redis database provided throughput of 1.29M read operations per second at an average latency of 0.15 milliseconds per operation.
  2. With a write-only load, the cluster's measured throughput was 1.14M operations per second at an average latency of 0.36 milliseconds per operation.
  3. An equal mix of read and write operations gave a throughput of 1.16M operations per second at an average latency of 0.17 milliseconds per operation.
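If you'd like to sanity-check latency numbers against your own Redis endpoint, redis-cli's built-in latency mode is a quick way to do it (the host and port below are placeholders; substitute your own):

# Continuously samples round-trip time and reports min/max/avg latency in milliseconds
redis-cli --latency -h 10.0.0.1 -p 6379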

Our assumption was correct: Redis needed only two Google Compute Engine servers in the cluster to cross the 1M ops/s threshold.

With Redis running on Compute Engine, getting to, and exceeding, 1 million operations per second doesn't require a truckload of cloud servers. One or two will do just fine.
Learn more about running Redis on Compute Engine.