
When I was more junior, I was always told to use caching as a last resort. It's a good attitude to take to make sure you're not doing something stupid and hiding it with caching. These days, though, I look for caching opportunities up-front. In fact, I'll design with them in mind.

I did some work a while back for a client who was expecting a lot of read load. Their backend had a bunch of constantly changing data in mongo - but it only refreshed every 10 seconds. I told them initially to just output the aggregated data to an S3 object and have all the clients access it from there. They decided to run loads of servers instead, muttering something about AWS Autoscale (even though I told them that wouldn't help).
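
Roughly what I had in mind, as a minimal sketch (the bucket name, collection, and aggregation pipeline below are placeholders, not their actual setup):

    # Run the aggregation once every 10 seconds and publish the result to S3,
    # so every client reads one static object instead of hitting mongo.
    import json
    import time

    import boto3
    from pymongo import MongoClient

    s3 = boto3.client("s3")
    coll = MongoClient("mongodb://localhost:27017")["appdb"]["events"]  # placeholder db/collection

    PIPELINE = [{"$group": {"_id": "$type", "count": {"$sum": 1}}}]  # placeholder aggregation

    while True:
        payload = json.dumps(list(coll.aggregate(PIPELINE)), default=str)
        s3.put_object(
            Bucket="example-aggregates",   # placeholder bucket
            Key="latest.json",
            Body=payload.encode("utf-8"),
            ContentType="application/json",
            CacheControl="max-age=10",     # the data only changes every ~10 seconds anyway
        )
        time.sleep(10)

One job writes, everything else reads a flat file; S3 does the fan-out for you.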

As expected, I got a call one Friday evening asking if I could take a look at why their servers were timing out. When I got there, about 15 frontend servers were hammering one poor mongo box, running the same aggregation query again and again - and within any 10-second window always getting the same result. I stripped it down to one frontend box with an nginx cache (after jumping through a hoop to support jsonp).
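
The nginx side doesn't need much more than stock proxy caching; something along these lines, where the cache path, zone name, and upstream port are illustrative rather than their real config:

    # Cache the aggregated response for 10 seconds so concurrent clients
    # share one upstream hit instead of each triggering the query.
    proxy_cache_path /var/cache/nginx/agg keys_zone=agg:10m max_size=50m;

    server {
        listen 80;

        location /aggregate {
            proxy_pass http://127.0.0.1:8080;     # illustrative upstream app
            proxy_cache agg;
            proxy_cache_valid 200 10s;            # matches the 10-second refresh window
            proxy_cache_lock on;                  # collapse simultaneous misses into one request
            proxy_cache_use_stale updating;
        }
    }

(Worth noting that nginx's default proxy_cache_key includes the query string, so distinct jsonp callback parameters get cached as separate entries; whether that matters depends on how the callbacks are generated.)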

After the dust settled, they apparently didn't want to admit that the problem could be solved with nothing more than caching, so it was described to the business as a configuration issue.


