Yes, you get 10x the throughput while maintaining far better tail latency with Scylla over Cassandra.
How is 10ms vs 475ms not a major improvement? How is 4 nodes vs 40 not a major improvement? If you're an SRE, then how is managing 4 servers with far less tuning and maintenance not a major improvement? Also, the 99.9th percentile still matters. They're testing at 300k ops/sec, which means 300 requests/sec are facing extreme latency spikes, which can be enough to fail and/or cause cascading issues throughout the application.
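To make the arithmetic explicit, here's a rough back-of-the-envelope sketch (the 300k ops/sec figure is the benchmark load quoted above):

```java
public class TailMath {
    public static void main(String[] args) {
        // The 99.9th percentile means the slowest 0.1% of requests land
        // beyond whatever the P99.9 latency figure is.
        double opsPerSec = 300_000;
        double tailFraction = 1.0 - 0.999; // 0.1%
        System.out.printf("~%.0f requests/sec sit beyond P99.9%n",
                opsPerSec * tailFraction); // ~300
    }
}
```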
There's no metric where Cassandra is better here, and you can't tune your way to the same performance in the first place, which is the whole point of Scylla. What even is your claim here? Spend more to get less?
2-3x maybe; 10x is not likely unless you compare it to untuned Cassandra.
What makes it possible to run Cassandra/Scylla on nodes with TBs of data density is TWCS, the Time Window Compaction Strategy from Jeff Jirsa. He was just a Cassandra power user at the time, and I like to think that the invention was possible because of Java.
So, next time you read an ad piece from Scylla about replacing 40 mid-size boxes running CMS with 4 big boxes, don't forget about TWCS.
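For anyone who hasn't run into it: TWCS groups SSTables into time windows and only compacts within a window, so TTL'd time-series data expires as whole SSTables instead of being endlessly rewritten, which is what keeps multi-TB nodes manageable. A minimal sketch of turning it on via the DataStax Java driver, with a hypothetical metrics table (keyspace, table, columns, window size, and TTL are all made up for illustration):

```java
import com.datastax.oss.driver.api.core.CqlSession;

public class TwcsExample {
    public static void main(String[] args) {
        // Connects to a local node with driver defaults; point at your own cluster in practice.
        try (CqlSession session = CqlSession.builder().withKeyspace("metrics").build()) {
            session.execute(
                "CREATE TABLE IF NOT EXISTS sensor_readings ("
                    + "  sensor_id uuid,"
                    + "  ts timestamp,"
                    + "  value double,"
                    + "  PRIMARY KEY (sensor_id, ts)"
                    + ") WITH CLUSTERING ORDER BY (ts DESC)"
                    // One SSTable window per day; fully expired windows get dropped wholesale.
                    + " AND compaction = {"
                    + "   'class': 'TimeWindowCompactionStrategy',"
                    + "   'compaction_window_unit': 'DAYS',"
                    + "   'compaction_window_size': 1}"
                    // 30-day TTL so day-sized SSTables age out together.
                    + " AND default_time_to_live = 2592000");
        }
    }
}
```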
It's 4 servers doing the same as 40. That is 10x throughput, and with lower tail latency.
Scylla is far more than a compaction strategy. If it were that simple, then Cassandra would already be able to do it.
It's an objectively faster database in every metric. DataStax's enterprise distribution has more functionality, but core Cassandra is now entirely outclassed by Scylla in speed and features.
Again, that's 40 mid-size boxes with questionable GC (a 32G to 48G heap is the no man's land, a G1GC target pause time of 500ms will of course result in 500ms P999 latency, etc.), versus 4 big boxes, which have more than 4x higher specs. So divide the 10x by the 4x, and that's the 2-3x I mentioned.
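For concreteness, the settings being described are JVM flags along the lines of -Xms48G -Xmx48G -XX:+UseG1GC -XX:MaxGCPauseMillis=500 (the flag spellings are standard HotSpot; the sizes are the ones from my comment above, not from the benchmark). A small, generic Java snippet, nothing Cassandra-specific, to see what a running JVM was actually started with:

```java
import java.lang.management.ManagementFactory;

public class PrintJvmFlags {
    public static void main(String[] args) {
        // Dumps the flags this JVM was launched with, e.g.
        //   -Xms48G -Xmx48G -XX:+UseG1GC -XX:MaxGCPauseMillis=500
        // Heaps between roughly 32G and 48G forfeit compressed oops without
        // gaining much usable space, and a 500ms pause-time target is an
        // open invitation for ~500ms P999 stalls under collection pressure.
        ManagementFactory.getRuntimeMXBean().getInputArguments()
                .forEach(System.out::println);
    }
}
```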
TWCS is just a good example that raw engine performance is not everything. Overall performance also comes from things like compaction strategy, data modeling, and access patterns, while users and stakeholders also care about things like ease of modification, a friendly license, and steady stewardship.