Big ScyllaDB Performance Gains on Google Cloud’s New Smaller Z3 Instances
Benchmarks of ScyllaDB on Google Cloud's new small Z3 instances achieved higher throughput and lower latency than N2 equivalents, especially under heavy load.

ScyllaDB recently had the privilege of examining Google Cloud's shiny new small-shape Z3 GCE instances in an early preview. The Z3 series is optimized for workloads that require low-latency, high-performance access to large data sets. ScyllaDB, in turn, is engineered to deliver predictable low latency, even with workloads of millions of OPS per machine. Naturally, both ScyllaDB and Google Cloud were curious to see how these innovations translated into performance gains for data-intensive use cases. So we partnered with Google Cloud to test ScyllaDB on the new instances.

## TL;DR

When we tested ScyllaDB on the new small-shape Z3 instances against the previous generation of N2 instances, we found significant throughput improvements as well as reduced latencies, particularly under high load.

## Why the New Z3 Instances Matter

Z3 is Google Cloud's first generation of Storage Optimized VMs, designed to combine the latest CPU, memory, network, and high-density local SSD advancements. Its largest shape offers 36 TB of local SSD with up to 100 Gbps of network throughput, and the series brings significant software-level improvements such as partitioned placement policies, enhanced maintenance configurations, and optimized Hyperdisk support.

The Z3 series has been available for over a year now, but previously only in large configurations (88 and 176 vCPUs). With this new addition to the Z3 family, users can choose from a broader range of high-performance instances, including shapes with 8, 16, 22, 32, and 44 vCPUs – all built on 4th Gen Intel Xeon Scalable processors (Sapphire Rapids), DDR5 memory, and local SSDs configured for maximum density and throughput.
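As a concrete illustration, provisioning one of the new small shapes is a one-line change to an instance definition. The sketch below uses the z3-highmem-16-highlssd shape from our tests; the instance name, zone, and image values are placeholders, not the configuration we benchmarked:

```shell
# Illustrative only: create a small-shape Z3 VM with dense local SSD.
# Zone and image are placeholders -- pick ones available in your project.
gcloud compute instances create my-z3-node \
  --machine-type=z3-highmem-16-highlssd \
  --zone=us-central1-a \
  --image-family=ubuntu-2204-lts \
  --image-project=ubuntu-os-cloud
```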
The new instance types – especially those in the 8 to 44 vCPU range – extend Z3's performance advantages to a broader set of workloads and customer profiles. And now that ScyllaDB X Cloud has introduced support for mixed-instance clusters, the timing is perfect: our customers can use these instances to expand and contract capacity with high precision, or start small and seamlessly shift to larger instances as their traffic grows.

## Test Methodology

We evaluated the new Z3 instances against our current N2-based configurations using our standard weekly regression testing suite. These tests measure latency across a range of throughput levels, including an unthrottled phase to identify the maximum operations per second.

For all tests, each cluster consisted of 3 ScyllaDB nodes. The Z3 clusters used z3-highmem-16-highlssd instances, while the N2 clusters used n2-highmem-16 instances with 6 TB of attached high-performance SSDs to match the Z3 clusters' storage. Both instance families come with 16 vCPUs and 128 GB RAM. The replication factor was set to 3 to reflect our typical production setup.

Four workloads were tested on ScyllaDB version 2025.1.2 with vnode-based keyspaces:

- Read (100% cache hit)
- Read (100% cache miss)
- Write
- Mixed (50% reads, 50% writes)

For load generation, we used cassandra-stress with a 1 KB row size (one column). Each workload was run at multiple fixed throughput levels, followed by an unthrottled phase. For throttled scenarios, we aimed for sub-millisecond to ~10 ms latencies; for unthrottled loads, latency was disregarded to maximize throughput.

## Benchmark Results

First off, here's an overview of the combined throughput results. Now for the details…

### 1. Read Workload (100% Cache Hit)

Latency results:

| Load | N2 P99 [ms] | Z3 P99 [ms] |
|------|-------------|-------------|
| 150k | 0.64 | 0.5 |
| 300k | 1.37 | 0.86 |
| 450k | 7.23 | 6.23 |
| 600k | – | 15.02 |
| 700k | – | 13.1 |

The Z3 cluster consistently delivered better tail latencies across all load levels.
For higher loads, the N2-based cluster couldn't keep up, so only Z3 results are shown.

Maximum throughput results:

| Load | N2 Throughput | Z3 Throughput | Diff % |
|------|---------------|---------------|--------|
| Max | 569,566 | 1,151,739 | 102 |

Thanks to superb performance gains from the CPU family upgrade, the Z3 cluster achieved a staggering 102% higher throughput than N2 at the unthrottled level.

### 2. Read Workload (100% Cache Miss)

Latency results:

| Load | N2 P99 [ms] | Z3 P99 [ms] |
|------|-------------|-------------|
| 80k | 2.53 | 2.02 |
| 165k | 3.99 | 3.11 |
| 250k | – | 4.7 |

Again, the Z3 cluster achieved better latencies across all tested loads and could serve higher throughput while keeping latencies low.

Maximum throughput results:

| Load | N2 Throughput | Z3 Throughput | Diff % |
|------|---------------|---------------|--------|
| Max | 236,528 | 310,880 | 31 |

With a 100% cache-miss read workload, which is bounded by a mix of disk and CPU performance, the Z3 cluster achieved a significant 31% gain in maximum throughput.

### 3. Write Workload

Latency results:

| Load | N2 P99 [ms] | Z3 P99 [ms] |
|------|-------------|-------------|
| 200k | 3.27 | 3.21 |
| 300k | >100 | 4.19 |

Although latencies remained similar under moderate load, the N2 instances couldn't sustain them under higher loads.

Maximum throughput results:

| Load | N2 Throughput | Z3 Throughput | Diff % |
|------|---------------|---------------|--------|
| Max | 349,995 | 407,951 | 17 |

Because the write workload involves heavy compactions and intensive disk utilization, it also benefits from Z3's advancements: here, Z3 achieved 17% higher throughput.

### 4. Mixed Workload (50% Read / 50% Write)

Latency results:

| Load | N2 P99 Write [ms] | Z3 P99 Write [ms] | N2 P99 Read [ms] | Z3 P99 Read [ms] |
|------|-------------------|-------------------|------------------|------------------|
| 50k | 2.07 | 2.04 | 2.08 | 2.11 |
| 150k | 2.27 | 2.65 | 2.65 | 2.93 |
| 300k | 4.71 | 3.88 | 5.12 | 4.15 |
| 450k | >100 | 15.49 | >100 | 16.13 |

The Z3 cluster matched the N2 cluster's latencies in the lower throughput ranges and kept a consistent edge in the higher ones, serving data reliably across a wider range of loads.

Maximum throughput results:

| Load | N2 Throughput | Z3 Throughput | Diff % |
|------|---------------|---------------|--------|
| Max | 519,154 | 578,380 | 11 |

With a 50:50 read/write ratio, the Z3 instances achieved 11% higher throughput for both read and write operations.
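The Diff % figures in the tables above are simple relative gains of Z3 over N2, rounded to whole percent. As a quick sanity check, here is that arithmetic on the unthrottled maxima (our own illustrative script, not part of the test suite):

```shell
#!/bin/sh
# Relative throughput gain of Z3 over N2, in whole percent.
# Uses integer math scaled by 10 so the result rounds to nearest.
pct_gain() {
  echo $(( (($2 - $1) * 1000 / $1 + 5) / 10 ))
}

pct_gain 569566 1151739   # cache-hit read  -> 102
pct_gain 236528 310880    # cache-miss read -> 31
pct_gain 349995 407951    # write           -> 17
pct_gain 519154 578380    # mixed           -> 11
```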
## Sustained Low-Latency Throughput Comparison

In addition to peak throughput, we measured the maximum throughput each instance type can sustain while keeping latencies within our defined SLA thresholds. This scenario reflects typical production environments, where maintaining predictable response times under load is critical.

| Workload | N2 Throughput [kops] | Z3 Throughput [kops] | Diff % |
|----------|----------------------|----------------------|--------|
| Read (100% cache hit) | 530 | 1060 | 100 |
| Read (100% cache miss) | 165 | 280 | 70 |
| Write | 230 | 340 | 48 |
| Mixed | 450 | 500 | 11 |

These results confirm that Z3 can handle higher sustained loads within latency budgets, making it a stronger choice for latency-sensitive applications operating at scale.

## Our Verdict on the New Z3 Instances

The addition of smaller Z3 shapes brings new flexibility to ScyllaDB Cloud users. Whether you're looking to scale down while retaining high SSD performance or to ramp up throughput in cost-sensitive environments, Z3 offers a compelling alternative to N2.

We just added support for the smaller Z3 instance types in ScyllaDB Cloud. These VMs complement the existing N2 options and enable more flexible deployment profiles for workloads that demand high storage IOPS and network bandwidth without committing to extremely large core counts.

## What's Next

This first round of testing found that Z3's performance improvements become significantly more pronounced as load scales, which we believe stems from ScyllaDB's ability to fully utilize the underlying hardware. Moving forward, we'll continue validating Z3 under other scenarios (e.g., higher disk utilization, large partitions, compaction pressure, heterogeneous cluster mixing) and update our internal tuning recommendations accordingly.