Benchmarking Apache Cassandra (40 Nodes) vs ScyllaDB (4 Nodes)

Since the NoSQL revolution in database management systems kicked off over a decade ago, organizations of all sizes have benefitted from a key feature: massive scale using relatively inexpensive commodity hardware. It has enabled organizations to deploy architectures that would have been prohibitively expensive and impossible to scale with traditional relational database systems.

Commodity hardware itself has undergone a transformation over the same decade, but most modern software doesn’t take advantage of modern computing resources. Most frameworks that scale out for data-intensive applications don’t scale up. They aren’t able to take advantage of the resources offered by large nodes, such as the added CPU, memory and solid-state drives (SSDs), nor can they store large amounts of data on disk efficiently. Managed runtimes, like Java, are further constrained by heap size. Multi-threaded code, with its locking overhead and lack of attention to non-uniform memory access (NUMA), imposes a significant performance penalty on modern hardware architectures.

Software’s inability to keep up with hardware advancements has led to the widespread belief that running database infrastructure on many small nodes is the optimal architecture for scaling massive workloads. The alternative, using small clusters of large nodes, is often viewed with skepticism. A few common concerns are that large nodes won’t be fully utilized, that they have a hard time streaming data when scaling out and, finally, they might have a catastrophic effect on recovery times.

Based on our experience with ScyllaDB, a fast and scalable database that takes full advantage of modern infrastructure and networking capabilities, we were confident that scaling up beats scaling out. So, we put it to the test.

ScyllaDB is API-compatible with Apache Cassandra (and DynamoDB compatible too); it provides the same Cassandra Query Language (CQL) interface and queries, the same drivers, even the same on-disk SSTable format. Our core Apache Cassandra 4.0 performance benchmarking used identical three-node hardware for both ScyllaDB and Cassandra (TL;DR, ScyllaDB performed 3x-8x better). Since ScyllaDB scales well vertically, we executed what we are calling a “4×40” test, with a large-scale setup where we used node sizes optimal for each database. Since ScyllaDB’s architecture takes full advantage of extremely large nodes, we compared a setup of four i3.metal machines (288 vCPUs in total) vs. 40 i3.4xlarge Cassandra machines (640 vCPUs in total, almost 2.5 times ScyllaDB’s resources).
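
Because the query language and drivers are identical, application code does not need to change between the two clusters. The following is a minimal sketch using the Python cassandra-driver (which also works against ScyllaDB); the contact points, keyspace and table are placeholders for illustration, not the benchmark’s actual schema.

```python
# Minimal sketch: the same driver and CQL statements work against either cluster.
# Contact points, keyspace and table names are illustrative placeholders.
from cassandra import ConsistencyLevel
from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

# Point this at either the 4-node ScyllaDB cluster or the 40-node Cassandra cluster.
cluster = Cluster(contact_points=["10.0.0.1", "10.0.0.2"], port=9042)
session = cluster.connect()

session.execute("""
    CREATE KEYSPACE IF NOT EXISTS demo
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3}
""")
session.execute("CREATE TABLE IF NOT EXISTS demo.kv (pk bigint PRIMARY KEY, value text)")

# Reads and writes at CL=QUORUM, matching the consistency level used in the benchmark.
insert = SimpleStatement(
    "INSERT INTO demo.kv (pk, value) VALUES (%s, %s)",
    consistency_level=ConsistencyLevel.QUORUM,
)
session.execute(insert, (42, "hello"))

select = SimpleStatement(
    "SELECT value FROM demo.kv WHERE pk = %s",
    consistency_level=ConsistencyLevel.QUORUM,
)
print(session.execute(select, (42,)).one().value)
```

Swapping the contact points is the only change needed to run the same code against the other cluster.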

In terms of CPU count, RAM volume or cluster topology, this would appear to be an apples-to-oranges comparison. You might wonder why anyone would ever consider such a test. After all, we’re comparing four machines to 40 very different machines. Thanks to ScyllaDB’s shard-per-core architecture and custom memory management, we know that ScyllaDB can make full use of very powerful hardware. Cassandra, by contrast, together with the JVM garbage collectors it depends on, is tuned for heavily distributed deployments of many smaller nodes.

The true purpose of this test was to see whether both CQL solutions could perform similarly in this duel, even with Cassandra using about 2.5 times more hardware, at roughly 2.5 times the cost. What’s really at stake here is a reduction in administrative burden: Could a DBA maintain just four servers instead of 40?

The 4 x 40 Node Setup

We set up clusters on Amazon EC2 in a single Availability Zone of the us-east-2 region. The ScyllaDB cluster consisted of four i3.metal instances and the competing Cassandra cluster consisted of 40 i3.4xlarge instances. Servers were initialized with clean machine images (AMIs) of Ubuntu 20.04 (Cassandra 4.0) or CentOS 7.9 (ScyllaDB 4.4).

Apart from the cluster, 15 loader machines were used to run cassandra-stress: first to insert data and, later, to provide a background load at CL=QUORUM while the administrative operations were running.
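
For reference, a loader invocation along these lines drives that kind of load; a rough sketch follows, in which the node addresses, operation count and thread count are illustrative assumptions rather than the exact parameters used in the benchmark.

```python
# Illustrative sketch of a loader machine invoking cassandra-stress.
# Node addresses, operation count and thread count are placeholders.
import subprocess

CONTACT_POINTS = "10.0.0.1,10.0.0.2,10.0.0.3,10.0.0.4"  # placeholder node IPs

def run_stress(command: str, ops: int, threads: int) -> None:
    """Run a uniform-key cassandra-stress workload at RF=3 and CL=QUORUM."""
    subprocess.run(
        [
            "cassandra-stress", command, f"n={ops}", "cl=quorum",
            "-schema", "replication(factor=3)",  # RF=3, as in the benchmark
            "-pop", f"dist=uniform(1..{ops})",   # every partition equally likely to be hit
            "-rate", f"threads={threads}",
            "-node", CONTACT_POINTS,
        ],
        check=True,
    )

# Initial data load; later invocations (reads and writes) provide the
# background load while the administrative operations run.
run_stress("write", 1_000_000_000, 200)
```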

Once up and running, both databases were loaded with random data at RF=3 until the cluster’s total disk usage reached approximately 40 TB. This translated to 1 TB of data per Cassandra node and 10 TB of data per ScyllaDB node. After loading was done, we flushed the data and waited until the compactions finished, so we could start the actual benchmarking.
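
One way to script that “flush, then wait for compactions” step is sketched below, assuming nodetool can reach every node; the host names and polling interval are placeholders.

```python
# Rough sketch: flush memtables on every node, then poll until no
# compactions remain. Host names and polling interval are placeholders.
import subprocess
import time

HOSTS = ["node-1", "node-2", "node-3", "node-4"]  # placeholder host names

def pending_compactions(host: str) -> int:
    """Parse 'pending tasks' from `nodetool compactionstats` on one node."""
    out = subprocess.run(
        ["nodetool", "-h", host, "compactionstats"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        if line.startswith("pending tasks"):
            return int(line.split(":")[1].strip().split()[0])
    return 0

for host in HOSTS:
    subprocess.run(["nodetool", "-h", host, "flush"], check=True)

while any(pending_compactions(h) > 0 for h in HOSTS):
    time.sleep(60)
```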

Spoiler: We found that a ScyllaDB cluster can be 10 times smaller in node count and 2.5 times less expensive to run, yet still deliver performance equivalent to Cassandra 4.0. Here’s how it played out.

Throughput and Latencies

UPDATE Queries

The following shows the 90th- and 99th-percentile latencies of UPDATE queries, as measured on:

  • Four-node ScyllaDB cluster (4 x i3.metal, 288 vCPUs in total)
  • 40-node Cassandra cluster (40 x i3.4xlarge, 640 vCPUs in total)

The workload was uniformly distributed — every partition in the multi-terabyte dataset had an equal chance of being selected/updated.

Under low load, Cassandra slightly outperformed ScyllaDB. The reason is that ScyllaDB automatically runs more compaction when it is otherwise idle, and its default scheduler tick of 0.5 milliseconds hurts P99 latency. (There is a parameter that controls this, but we wanted to present out-of-the-box results with zero custom tuning or configuration.)

Under high load with P99 latency <10 milliseconds, ScyllaDB’s throughput on four nodes was 33% higher than Cassandra’s on 40 nodes.

SELECT Queries

The following shows the 99th-percentile latencies of SELECT queries, as measured on:

  • Four-node ScyllaDB cluster (4 x i3.metal, 288 vCPUs in total)
  • 40-node Cassandra cluster (40 x i3.4xlarge, 640 vCPUs in total)

The workload was uniformly distributed: every partition in the multi-terabyte dataset had an equal chance of being selected. Under low load, Cassandra slightly outperformed ScyllaDB. Under high load with P99 latency <10 milliseconds, ScyllaDB’s throughput on four nodes was again 33% higher than Cassandra’s on 40 nodes.

Scaling Up the Cluster by 25%

In this benchmark, we increase the capacity of the cluster by 25%:

  • By adding a single ScyllaDB node to the cluster (from four nodes to five)
  • By adding 10 Cassandra nodes to the cluster (from 40 nodes to 50 nodes)


Performing Major Compaction

In this benchmark, we measure the throughput of a major compaction. To compensate for Cassandra having 10 times more nodes (each having 1/10th of the data), this benchmark measures the throughput of a single ScyllaDB node performing major compaction and the collective throughput of 10 Cassandra nodes performing major compactions concurrently.

Here, ScyllaDB ran on a single i3.metal machine (72 vCPUs) and competed with a 10-node cluster of Cassandra 4 (10x i3.4xlarge machines; 160 vCPUs in total). ScyllaDB can split this problem across CPU cores, which Cassandra cannot. ScyllaDB performed 32x better in this case.
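
On both databases, a major compaction can be triggered with nodetool compact; a minimal timing sketch is shown below. The host name is a placeholder, and keyspace1/standard1 are the default names created by cassandra-stress, so adjust them if a different schema is used.

```python
# Minimal sketch: trigger a major compaction on one node and time it.
# Host is a placeholder; keyspace1/standard1 are cassandra-stress defaults.
import subprocess
import time

HOST = "node-1"
start = time.time()
subprocess.run(["nodetool", "-h", HOST, "compact", "keyspace1", "standard1"], check=True)
print(f"Major compaction took {time.time() - start:.0f}s")
```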

The Bottom Line

The bottom line is that a ScyllaDB cluster can be 10 times smaller and 2.5 times less expensive, yet still outperform Cassandra 4.0 by 42%. In this use case, choosing ScyllaDB over Cassandra 4.0 would result in hundreds of thousands of dollars in annual savings on hardware costs alone, without factoring in reduced administration costs or environmental impact. Scaling the cluster is 11 times faster, and ScyllaDB provides additional features, from change data capture (CDC) to cache bypass and native Prometheus support. That’s why teams at companies such as Discord, Expedia, Fanatics and Rakuten have switched.

For more details on how this test was configured and how to replicate it, read the complete benchmark.