This is SUPER helpful! Just the other day I was wondering how someone like me could get involved in the hard scalability problems I read so much about here on the hackers news. But how to turn my boring old highly cacheable read-only web traffic into a major scalability problem? Then I read this blog entry, and wow, now each log entry on my site turns into a random B-tree update in MongoDB made while holding a global write lock. Thanks again hackers news, and thanks again BIG DATA!
Or think about it a different way: instead of adding disk I/O on the server itself, you're offloading log processing to another server that delays writes (you don't usually need an immediate sync for remote logging) and gives you better log-processing capabilities (semi-structured data).
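To make the delayed-writes point concrete, here is a minimal sketch of a batching log sender. `BatchingLogSender` and its `sink` callable are hypothetical names, not from the blog post; the sink stands in for a network write to a remote log collector, so the app pays one remote write per batch instead of per entry.

```python
import time

class BatchingLogSender:
    """Buffers semi-structured log entries and flushes them in batches
    to a remote sink. The app never blocks on per-entry disk or network
    I/O; writes are delayed until a size or time threshold is hit."""

    def __init__(self, sink, batch_size=100, flush_interval=5.0):
        self.sink = sink                    # hypothetical remote-write callable
        self.batch_size = batch_size
        self.flush_interval = flush_interval
        self.buffer = []
        self.last_flush = time.monotonic()

    def log(self, **fields):
        # Semi-structured entry: arbitrary key/value fields plus a timestamp.
        fields.setdefault("ts", time.time())
        self.buffer.append(fields)
        if (len(self.buffer) >= self.batch_size
                or time.monotonic() - self.last_flush >= self.flush_interval):
            self.flush()

    def flush(self):
        if self.buffer:
            self.sink(self.buffer)          # one remote write per batch
            self.buffer = []
        self.last_flush = time.monotonic()

# Usage: collect batches in a list to stand in for the remote collector.
batches = []
sender = BatchingLogSender(batches.append, batch_size=3)
for i in range(7):
    sender.log(event="page_view", path="/post/%d" % i)
sender.flush()  # drain the partial final batch
```

With `batch_size=3` and seven entries, the sink sees three writes (3 + 3 + 1 entries) rather than seven, which is the whole trade: a little delay for far fewer I/O operations.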
If your workload cannot be handled this way, that's another matter. But how did we get from "mongo is webscale" to "mongo cannot be used for anything at all"? What happened to benchmarking and making serious decisions backed by real data?
For write-only logging from stateless processes bound to a single machine: yes. For analytics, automatically tracking stateful sessions across many nodes, preserving context, or dumping binary fragments: no, at least for me it did not always work.
You can reconstruct almost any system flow with good logging. That's not always ideal (especially if you need to query the data), and the more structured your data gets, the less it is a simple log. As you increase the specificity of your tools, they become less useful in the general case, Turing tarpit notwithstanding.