Seastar is a fundamentally different way of programming from what you mentioned above. Let me give you an example. Seastar takes all the memory up front and never gives it back to the operating system (you can control how much via -m2G, etc.). This gives you deterministic allocation latency: an allocation is just incrementing a couple of pointers. Memory is split evenly across the number of cores, and the way you communicate between cores is message passing, which means you explicitly say which thread is allowed to read which inbox (similar to actors). I wrote about it here in 2017: https://www.alexgallego.org/concurrency/smf/2017/12/16/futur...
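Roughly, the cross-core model looks like this. This is a minimal sketch, assuming a recent Seastar build with app_template and smp::submit_to; the target shard index and the log message are just illustration, not anything from the post above:

    // Ship a lambda to another shard's inbox instead of sharing memory
    // between threads; only the owning shard ever touches its memory.
    #include <seastar/core/app-template.hh>
    #include <seastar/core/future.hh>
    #include <seastar/core/smp.hh>
    #include <seastar/util/log.hh>

    static seastar::logger lg("demo");

    int main(int argc, char** argv) {
        seastar::app_template app;  // understands -m/--memory, -c/--smp, etc.
        return app.run(argc, argv, [] {
            // Pick a remote shard if we have one, otherwise stay on shard 0.
            const unsigned target = seastar::smp::count > 1 ? 1u : 0u;
            return seastar::smp::submit_to(target, [] {
                // Runs on the target shard, using only that shard's memory.
                return 42;
            }).then([](int answer) {
                lg.info("remote shard replied with {}", answer);
            });
        });
    }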
The point of Seastar is that you do not tune a GC for each application workload, so bringing that up misses the whole point of Seastar. Instead the programmer explicitly reserves memory for each subsystem - say 30% for RPC, 20% for the app-specific page cache (since it's all DMA, there is no kernel page cache), 20% for write-behinds, etc. (obviously in practice most of this is dynamic). It is not one dimension as suggested, and it is not apples to oranges - it is apples to apples. You have a service, you connect your clients, unchanged, and one has better latency. It's that simple.
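To make the budget idea concrete, here is an illustrative sketch of one shard's memory carved into those reservations. This is application-level arithmetic, not Seastar's API; the struct name and the percentages are made up for the example:

    #include <cstddef>
    #include <cstdio>

    struct memory_budget {
        std::size_t per_core_bytes;  // what this shard owns outright
        std::size_t rpc()          const { return per_core_bytes * 30 / 100; }
        std::size_t page_cache()   const { return per_core_bytes * 20 / 100; }
        std::size_t write_behind() const { return per_core_bytes * 20 / 100; }
        std::size_t remainder()    const {
            return per_core_bytes - rpc() - page_cache() - write_behind();
        }
    };

    int main() {
        // e.g. a 2 GiB budget split across 4 cores -> 512 MiB per shard
        memory_budget b{std::size_t{512} * 1024 * 1024};
        std::printf("rpc=%zu page_cache=%zu write_behind=%zu free=%zu\n",
                    b.rpc(), b.page_cache(), b.write_behind(), b.remainder());
    }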
It may be your experience that when you download a Kafka binary, say 2.4.1, you change the GC settings, but in a multi-tenant environment that's a moving target. Most enterprises I have talked to just use the default startup script for Kafka as far as GC and memory settings go (they may change some writer settings, caching, etc.).
At the end of the day there is no substitute for testing with your own app, your own firewall settings, and your own hardware. The result should still give you 10x lower latency.
I am familiar with Seastar too. On its own it is just one component and pretty useless by itself. What is relevant in this topic is what is around it: the functionality that you provide. This is why Scylla is copying Cassandra. You can come up with a nice way of programming whatever you want, but at the end of the day the business functionality is what matters, and there are still different tradeoffs involved.
What do you mean, "copying" Cassandra? Obviously they're offering the same API. Many people like the Cassandra data model and its multi-region capabilities, and that's why it was chosen.
What Scylla is doing is unlocking new performance potential with a C++ rewrite and an entirely different shard-per-core architecture that gets around the fundamental limitations of Cassandra and makes it easier to run. This performance and stability have also allowed the team to make existing C* features like LWT, secondary indexes, and read repair even faster and better than the original implementations.