
Interesting how the compilation benchmarks don't scale at all from the 24- to the 32- to the 64-core Threadrippers, for either the Linux kernel or LLVM. I wonder why scaling is so bad.


At some point when you build LLVM with the default options, you hit a moment where most of what's left to do is linking a number of executables, all statically linking the LLVM libraries. They are all relatively slow to link, and there aren't enough of them to fill 32 or 64 cores. Linking also requires a lot of RAM, so if you don't have enough you might end up swapping too.
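
If that link-stage serialization is what's biting you, LLVM's CMake setup has knobs for it. A rough sketch (job count and generator are my choices, tune them for your box):

    # Cap concurrent link jobs so they don't exhaust RAM, and link the
    # tools against one shared libLLVM instead of statically linking
    # everything. LLVM_PARALLEL_LINK_JOBS needs the Ninja generator.
    cmake -G Ninja ../llvm \
      -DCMAKE_BUILD_TYPE=Release \
      -DLLVM_PARALLEL_LINK_JOBS=4 \
      -DLLVM_LINK_LLVM_DYLIB=ON \
      -DLLVM_USE_LINKER=lld
    ninja

Using lld as the host linker also tends to shorten that long serial tail compared to BFD ld.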


The article mentions storage seeming to be a bottleneck.


Storage isn't the whole story. I've observed a kernel build not hitting all the cores at 100% even with everything in a tmpfs on a 3970X. I didn't investigate why.
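
For anyone who wants to reproduce that setup, roughly (the size and mount point here are arbitrary placeholders):

    # Build entirely from RAM to take storage out of the equation:
    mount -t tmpfs -o size=32G tmpfs /mnt/ramdisk
    cp -r linux /mnt/ramdisk && cd /mnt/ramdisk/linux
    make defconfig
    make -j"$(nproc)"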


Showing some curiosity might help. When you learn to profile and dig deeper, you'll be able to understand what's going on, troubleshoot it, and find the bottleneck(s). I remember the kernel compiling in under 10 seconds on Egenera's multi-million-dollar quad-socket blade racks a decade ago. Given today's computing power, memory speed, bus speed, SSD performance, and code size, there's no material reason similar figures shouldn't be attainable now.
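
A starting point for that kind of digging, as a sketch (the tool choices are mine, not from the article):

    # Watch per-core utilization while the build runs:
    mpstat -P ALL 1

    # Count cycles, context switches, and stalls for the whole build:
    perf stat -- make -j"$(nproc)"

    # Or record a system-wide profile to see where the time really goes:
    perf record -a -g -- make -j"$(nproc)"
    perf report

If mpstat shows cores idling while perf shows the linker or a single process dominating, you've found your serial section.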



