The timing of this post is funny, as I just finished reworking our fork of django-cache-machine. As the post points out, the limitation of Cache Machine as currently built is that only objects already contained in a queryset's results can invalidate that queryset. This is fine for selects on primary keys, but beyond that the invalidation logic is incomplete.
My changes ( https://github.com/theatlantic/django-cache-machine/ ) inspect the ORDER BY clauses (when there's a limit or offset) and the WHERE constraints in a query, and save the list of "model-invalidating columns" (i.e., columns which, when changed, should broadly invalidate queries on the model) under a key in the cache. They also associate these queries with a model-flush list. Then, in a pre_save signal, they check whether any of those columns have changed and mark the instance to invalidate the associated model-flush list when it is passed to the post_save signal. We have these changes running on a few of our sites and, if all goes well, we're looking to move invalidation down to the column level to push the cache-hit ratio even higher.
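To make the mechanism concrete, here's a minimal framework-free sketch of the decision logic described above. All names here (MODEL_INVALIDATING_COLUMNS, fake_cache, pre_save, post_save) are illustrative stand-ins, not the actual django-cache-machine API, and the column set is hard-coded where the real fork derives it from the query's ORDER BY and WHERE clauses:

```python
# Columns whose changes should broadly invalidate cached queries on the model.
# (Hypothetical; in the described fork this set is computed from the query and
# stored under a key in the cache.)
MODEL_INVALIDATING_COLUMNS = {"status", "publish_date"}

# Simulated cache: a model-flush list mapping the model to its cached query keys.
fake_cache = {
    "flush:article": ["qs-key-1", "qs-key-2"],
    "qs-key-1": "cached result A",
    "qs-key-2": "cached result B",
}

def pre_save_check(old_row, new_row):
    """pre_save step: flag the save if any model-invalidating column changed."""
    if old_row is None:  # brand-new row: invalidate broadly
        return True
    return any(old_row.get(c) != new_row.get(c) for c in MODEL_INVALIDATING_COLUMNS)

def post_save_flush(needs_flush, table="article"):
    """post_save step: if flagged, evict every query on the model-flush list."""
    if needs_flush:
        for key in fake_cache.pop(f"flush:{table}", []):
            fake_cache.pop(key, None)

# Change a column NOT in the invalidating set: the cache is left alone.
old = {"status": "live", "publish_date": "2012-01-01", "title": "A"}
post_save_flush(pre_save_check(old, dict(old, title="B")))
assert "flush:article" in fake_cache

# Change an invalidating column: the whole model-flush list is evicted.
post_save_flush(pre_save_check(old, dict(old, status="draft")))
assert "flush:article" not in fake_cache
```

The split across the two signals matters: the old column values are only observable before the row is written, so the comparison has to happen in pre_save, while the actual eviction waits for post_save so the cache isn't flushed for a save that never commits.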