Cache management is hard to get right. Make sure the data you send is cacheable and let the browser manage it.
Browsers have complex mechanisms in place to deal with broken firewalls, high-latency links, non-conformant servers and proxies. Any client-side caching implementation is going to be slower and less complete than its browser counterpart.
I have to agree. As nice as these kinds of systems sound (I've designed experimental ones myself), in most applications it's just not the right way to handle it. The first gotcha is that localStorage isn't async. If you have lots of data to pull out or store, you can end up blocking the main thread and bringing the rest of the app to a standstill.
Another important issue is that all browsers already have caching mechanisms built in. Spend your time verifying that the cache headers for your JS and stylesheets are being set properly, and correcting them if they aren't.
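For static assets that usually comes down to a couple of response headers. A minimal sketch in Node (the max-age value and the `immutable` directive here are illustrative choices, not requirements):

```javascript
// Headers for a long-lived static asset (JS, CSS). The one-year max-age
// and "immutable" are example values; pick ones that fit your deploys.
function staticAssetHeaders(etag) {
  return {
    'Cache-Control': 'public, max-age=31536000, immutable',
    'ETag': etag,
  };
}

// For HTML that changes often, force revalidation on every request instead.
function htmlHeaders(etag) {
  return {
    'Cache-Control': 'no-cache',
    'ETag': etag,
  };
}
```

The long max-age is only safe if the asset filename changes when its content does (e.g. a content hash in the name), so stale copies can never be served under a new URL.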
Also, localStorage only gives you 5 MB of storage. That space can be used up very quickly if you aren't careful.
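It's easy to underestimate how fast that quota goes, because localStorage stores strings as UTF-16, so every character costs two bytes. A rough back-of-the-envelope estimator (the two-bytes-per-code-unit accounting is an assumption about how engines charge the quota, and is approximate):

```javascript
// Rough size estimate for a key/value pair headed into localStorage.
// Assumes UTF-16 accounting (2 bytes per code unit) for both key and
// value; treat the result as an estimate, not an exact quota charge.
function storageCostBytes(key, value) {
  return (key.length + value.length) * 2;
}

// Example: a single 100 KB cached API response already eats ~4% of 5 MB.
const payload = 'x'.repeat(100 * 1024);
const cost = storageCostBytes('api-cache:/foo', payload);
```

Cache a few dozen responses like that and you hit the ceiling, at which point writes start throwing quota errors.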
There are some very limited cases where this kind of thing can be useful. One that I'm reminded of is mobile web applications. Steve Souders talked about this briefly at the jQuery conference around this time last year. However, all the major mobile browsers already support cache manifests, which mostly obviates the need for something like this.
I agree, it's not a good approach in general. Correct cache headers on the server are probably the way to go in most cases.
I originally created this while working with an external API in a simple static-files-only app. It works really well for that use case and solved my issues. It was then easy to generalize, so I decided to put it out in the wild and see if I could help anybody else.
Perhaps I should add an area to the README to define when you should (and maybe more importantly when you shouldn't) use this.
Browsers don't have built-in partial caching, which I consider the biggest missed opportunity for improving the state of modern web development. Cache HTML blocks, then send a list of tokens identifying those blocks to the server, so it doesn't re-generate those pieces of content. It's conceptually simple, declarative, possible to implement as 100% backwards-compatible and could be used pretty much everywhere.
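A server-side sketch of that idea (everything here is hypothetical; no such protocol is standardized): the client sends the tokens of the blocks it already holds, and the server re-generates only the ones that changed.

```javascript
// Hypothetical token-based partial caching. `blocks` maps a block name
// to { token, render }, where `token` changes whenever the block's
// content changes (e.g. a content hash or version counter).
function renderPartial(blocks, clientTokens) {
  const fresh = {};   // blocks the server must re-generate and send
  const reused = [];  // blocks the client can keep from its cache
  for (const [name, block] of Object.entries(blocks)) {
    if (clientTokens[name] === block.token) {
      reused.push(name);
    } else {
      fresh[name] = { token: block.token, html: block.render() };
    }
  }
  return { fresh, reused };
}

const blocks = {
  header: { token: 'h1', render: () => '<header>…</header>' },
  feed:   { token: 'f9', render: () => '<ul>…</ul>' },
};

// The client already caches the header at token "h1"; its feed is stale.
const result = renderPartial(blocks, { header: 'h1', feed: 'f2' });
```

The appeal is exactly what the comment describes: the server skips both generating and transferring the unchanged blocks, and a client that sends no tokens simply gets the whole page.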
Another big thing that's missing is partial forced expiration. This is when the entire page is cached, but certain blocks are excluded from caching. Upon the next request to the same page, the browser sends a normal request with one addition: an extra header listing the transient blocks. Thus, the server has an opportunity to respond with updates only, rather than the entire page.
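That exchange could be sketched like this (the header name and its comma-separated format are invented for illustration; no browser sends anything of the sort):

```javascript
// Hypothetical request header for partial forced expiration: the client
// lists the blocks it excluded from its cached copy of the page.
function transientBlocksHeader(blockIds) {
  return { 'X-Transient-Blocks': blockIds.join(', ') };
}

// On the server, respond with updates for just the listed blocks,
// instead of re-generating the entire page.
function updatesFor(headerValue, renderers) {
  const updates = {};
  for (const id of headerValue.split(',').map((s) => s.trim())) {
    if (renderers[id]) updates[id] = renderers[id]();
  }
  return updates;
}
```

So a request carrying `X-Transient-Blocks: cart, user` would get back only the rendered cart and user blocks, and the browser would splice them into its cached page.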
SPDY only affects content transfer, whereas what I'm describing would affect both transfer and content generation.
Moreover, SPDY is a highly complex proposition, whereas what I'm describing is dead simple. I have implemented something very similar using JS, cookies and local storage. Of course, built-in browser support would be much, much more desirable.
That’s pretty complex for a caching solution. Not to mention the edge cache would need to keep multiple copies of the same document to diff them, which makes it completely unusable for anything with quick turnover.
With HTML5 Offline (App Cache) you could easily do this on the application level, of course. And with SPDY you can just fetch the fragments separately without any performance penalty. Basically, the current trend is to enable more gradual caching instead of implementing some higher-level resource multiplexing.
Biggest missed opportunity in my opinion? Preparsed DOM/XML. E.g. Fast Infoset[1].
I think you're misunderstanding what I'm asking for. As I've said, I've already implemented that kind of caching in JS. It's really simple. The problem is, local storage and cookies are not ideal implementations.
Local storage is a bad fit for caching. You can compose the document from smaller fragments and leverage the browser cache. It’s simple AJAX. That’s what I meant by gradual caching.
I thought a bit about your solution, and I guess it’s doable using existing technology AND leveraging gradual caching without JS.
Using the standard HTTP/1.1 session protocol this would require an additional request for /foo/fragment; SPDY can just use server push.
> GET /foo/fragment
< HTTP/1.1 200 OK
< ETag: 1
[…]
Now on refresh conditional GET for /foo hits cache, and conditional GET for /foo/fragment doesn’t. Abracadabra: partial caching.
> GET /foo
> If-None-Match: 1
< HTTP/1.1 304 Not Modified
> GET /foo/fragment
> If-None-Match: 1
< HTTP/1.1 200 OK
< ETag: 2
[…]
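The validation the server performs in that exchange boils down to an ETag comparison, which can be sketched as a small function (a simplification: real If-None-Match handling also covers `*` and weak validators, which this ignores):

```javascript
// Simplified conditional-GET decision: compare the ETag the client
// cached against the resource's current ETag.
function conditionalGet(clientEtag, resource) {
  if (clientEtag === resource.etag) {
    return { status: 304 };                       // cache hit: no body sent
  }
  return { status: 200, etag: resource.etag, body: resource.body };
}

// /foo is unchanged; /foo/fragment has moved from ETag 1 to ETag 2.
const page = conditionalGet('1', { etag: '1', body: '<html>…</html>' });
const frag = conditionalGet('1', { etag: '2', body: '<section>…</section>' });
```

The page request comes back 304 with no body, and only the fragment is re-transferred, which is the partial caching effect described above.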
While it looks like some new technology, it’s simple really. Browsers already compose pages by fetching scripts, images, embedded objects, iframes. This is not that different.
Moreover, this approach provides backwards compatibility, because if the browser doesn’t resolve XInclude, you can just do that yourself using AJAX.
This solves your problem, but requires changes only on the client side.