"Ever tried to ssh into a one-thread-per-connection setup under heavy load? Assuming you managed to log in, it'll be very very difficult to get htop to execute when it's competing with 2k other processes for cpu time."
Nothing about your comment is compatible with my experience. It is certainly not true that a Linux box with 2000 threads blocked on i/o will be having any sort of bad time. If you've really got 2000 threads competing for CPU time then your problem transcends execution architecture: you've simply admitted more work than you can reasonably discharge. Neither threads nor callbacks can solve that problem for you.
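To make that concrete, here's a sketch (Python for brevity; the kernel behaviour is the same for any threaded runtime). The numbers — 2000 threads, 256 KiB stacks, a one-second observation window — are illustrative, not tuned. Blocked threads cost memory and kernel bookkeeping, not CPU time:

```python
import os
import threading
import time

# Park 2000 threads on a blocking wait and measure how much CPU the
# process actually burns while they sit there.
threading.stack_size(256 * 1024)  # keep reserved stack space modest

stop = threading.Event()
threads = [threading.Thread(target=stop.wait) for _ in range(2000)]
for t in threads:
    t.start()

time.sleep(1.0)             # all 2000 threads are blocked in the kernel
cpu = sum(os.times()[:2])   # user + system CPU seconds consumed so far

stop.set()
for t in threads:
    t.join()

print(f"CPU seconds with 2000 blocked threads: {cpu:.2f}")
```

Nearly all of the small CPU total reported is the one-time cost of creating the threads, not of their sitting blocked.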
Also I'm not sure why you think that the kernel thread scheduler is all that good, or why it shouldn't be considered "outside our control." For my users the kernel thread scheduler is just another black object inside a dark box. The standard scheduler is pretty good for general purposes but I doubt it's optimal for any particular case. Under some loads it might be useful to yield to a specific thread that you think is holding a mutex your thread needs to acquire. This is cooperative multitasking, basically.
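Here's a toy illustration of that kind of directed yield, using a hand-rolled cooperative scheduler (all of the names and the scheduler itself are invented for the example — real implementations live in the kernel or a runtime, not application code):

```python
from collections import deque

def run(tasks):
    """Toy cooperative scheduler. `tasks` maps name -> generator.
    A task yields the name of another task to hand the CPU straight
    to it (say, a suspected mutex holder), or None for round-robin."""
    ready = deque(tasks)
    trace = []
    while ready:
        name = ready.popleft()
        try:
            target = tasks[name].send(None)
        except StopIteration:
            continue          # task finished; drop it from rotation
        trace.append(name)
        ready.append(name)
        if target is not None and target in ready:
            ready.remove(target)
            ready.appendleft(target)   # directed yield: run it next
    return trace

def normal():
    yield
    yield

def impatient():
    yield "a"   # we believe "a" holds the mutex we need: run it next
    yield

trace = run({"a": normal(), "b": impatient(), "c": normal()})
# "a" runs immediately after "b" hands it the CPU, jumping ahead of "c"
```

The point of the directed yield is visible in the trace: after "b" runs, "a" gets the CPU next even though "c" was ahead of it in the ready queue.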
> It is certainly not true that a Linux box with 2000 threads blocked on i/o will be having any sort of bad time.
You and I must have very different perceptions about the way a server is "having a bad time" :)
I was assuming they were at various stages of processing an incoming request, which means they were blocked on either legitimate disk i/o or swapping. It's very difficult to log in even locally in that case, because the login process can't read /etc/passwd in time.
> Also I'm not sure why you think that the kernel thread scheduler is all that good, or why it shouldn't be considered "outside our control."
I was just trying to say that it's dangerous to give non-trusted peers big influence on the way the scheduler behaves.
OK, but it sounds like the main problem there is local disk access, which is the great satan anyway. You can generate a machine-hosing writeout workload using only one thread on Linux, because Linux loves to starve readers if it can write instead.
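For scale: a single thread can dirty pages far faster than a disk can write them back. A toy version of that workload (the size here is deliberately tiny; reproducing actual reader starvation takes many gigabytes and a real disk behind the page cache):

```python
import os
import tempfile

# Dirty a pile of page cache without ever fsync'ing, so the kernel
# owes all of this data to the disk as deferred writeback.
SIZE_MB = 64                  # kept small; the real effect needs far more
chunk = b"\0" * (1 << 20)     # 1 MiB per write

fd, path = tempfile.mkstemp()
try:
    with os.fdopen(fd, "wb") as f:
        for _ in range(SIZE_MB):
            f.write(chunk)    # dirties page cache; no fsync
    written = os.path.getsize(path)
finally:
    os.remove(path)
```

Scale SIZE_MB up past the vm.dirty_ratio threshold on a box with a slow disk and watch every reader in the system stall behind the writeback.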
I think the unstated second dimension of your comment is that most operating system distributions come out of the installer with absolutely the wrong parameters for running many threads. I think the default thread stack size is 2MB still, and the socket buffers are all huge, and there are limits on how many processes you can have that make people think those limits are meaningful, when they're really not.
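Those defaults are all adjustable. In Python, for instance, the per-thread stack reservation can be shrunk before spawning (256 KiB here is an arbitrary illustrative figure, not a recommendation):

```python
import threading

# Default pthread stacks reserve several MB of virtual address space
# per thread (ulimit -s); with thousands of threads that adds up fast.
# Set a smaller stack BEFORE the threads are created:
threading.stack_size(256 * 1024)

results = []

def worker(i):
    results.append(i)   # list.append is atomic under the GIL

threads = [threading.Thread(target=worker, args=(i,)) for i in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The C-level equivalent is pthread_attr_setstacksize() before pthread_create(); either way, the "limit" people hit is usually address-space reservation, not a hard cap on threads.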