June 2009 (updated January 2013)
This is part five of a five-part series on effectively scaling your App Engine-based apps. For the other articles in the series, see Related links.
This article explores techniques for using memcache to improve the performance of your application.
What is it?
Memcache is a distributed RAM cache in which you can store transient data using a key-value model. Writes to memcache never touch the disk and are approximately ten times faster than writes to the datastore, and direct gets from memcache are approximately five times faster than gets from the datastore. The tradeoff is that data held in memcache is transient: it is evicted as the system runs out of memcache space, and on rare occasions all cached data can be evicted at once. Thanks to its faster operation times, though, using memcache as a lookup layer or as transient storage can greatly improve the responsiveness of your application.
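For example, in Python the memcache API is a simple key-value interface (a minimal sketch; the key and value here are arbitrary):

from google.appengine.api import memcache

# Store a value under a key, with an optional expiration time in seconds.
memcache.set('greeting', 'Hello, memcache!', time=60)

# get() returns the stored value, or None if the key has expired or
# been evicted.
value = memcache.get('greeting')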
For more details, see the memcache section of the App Engine documentation.
How should I use it?
Depending on the design of your application, there are likely several places in which memcache could be used to improve performance. Here are a few examples that might spark ideas about where memcache would make sense in your own application:
- Caching popular pages
- Saving transient, frequently updated data
- Caching frequently fetched entities
Popular "pages"
If you have pages that are requested frequently and show the same data to all users, e.g. the front page of your blog or your website's homepage, you can improve both performance and cost by caching the rendered data in memcache.
To expand on the blog example, say that you use a Django template to render the 12 most recent blog posts which are stored in the datastore. Since new blog posts are added infrequently (once a day), there is no need to re-query and re-render the same content over and over. The rendered page could be stored in memcache with an expiration time. When the front page is requested, a cache hit will avoid the cost in time and quota for the datastore query and generation of the HTML from the template.
In Python, using the App Engine memcache API, the technique looks something like this (render_template() and front_page_template stand in for your own template code):
from google.appengine.api import memcache

# Check the cache first; get() returns None on a miss.
html = memcache.get('blog_front_page')
if html is None:
    # Cache miss: query the datastore and render the page.
    template_values = {'recent_posts': BlogPost.all().order('-date').fetch(12)}
    html = render_template(template_values, front_page_template)
    # Cache the rendered page for five minutes.
    memcache.set('blog_front_page', html, time=300)
self.response.out.write(html)
This technique can also be used for a page that is often served to an individual user, though care must be taken to ensure that the correct cached data is sent to the current user. In such cases, you can use the current user's ID as part of the memcache key.
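For example, a per-user cache key might be built like this (a sketch; the 'dashboard' prefix and render_dashboard_for() are hypothetical placeholders for your own page and rendering code):

from google.appengine.api import memcache, users

# Assumes the user is signed in.
user = users.get_current_user()
cache_key = 'dashboard:%s' % user.user_id()  # one cache entry per user

html = memcache.get(cache_key)
if html is None:
    html = render_dashboard_for(user)  # your own rendering code
    memcache.set(cache_key, html, time=300)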
Transient and frequently updated data
The caveat here is that this data may be cleared from memcache at any time. If you are storing something exclusively in memcache, your application should gracefully handle a situation in which this data is cleared.
One example of this type of usage is a page view counter. Each visit increments a counter, and frequent updates achieve much higher throughput against memcache than against the datastore. Multiple writes per second to the same datastore entity (or entity group) can lead to contention; techniques for avoiding contention are explored in more detail in Avoiding Datastore Contention. Instead of writing to the datastore on each increment, the application can increment a counter in memcache and use a cron job to persist the view count. The persistence job runs periodically, adds the hits accumulated since the last run to the persistent counter in the datastore, and resets the memcache counter to zero, after which the memcache counter records hits until the next persistence job runs.
In Python:
# Page view code: bump the counter for this page in memcache.
# initial_value=0 creates the counter if it is missing or was evicted.
memcache.incr(current_page_name + '_hits', initial_value=0)
# Cron job request handler. PageHitCounter is assumed to be a db.Model
# with an integer 'count' property.
hits = memcache.get('front_page_hits')
if hits:
    persistent_counter = PageHitCounter.get_by_key_name('front_page_hits')
    persistent_counter.count += hits
    persistent_counter.put()
    # Reset the cached count. Increments arriving between get() and set()
    # are lost, which is acceptable for an approximate counter.
    memcache.set('front_page_hits', 0)
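The persistence job itself is scheduled in your app's cron.yaml; a minimal entry might look like this (/tasks/persist_counters is a hypothetical URL mapped to the handler above):

cron:
- description: persist memcache page view counters to the datastore
  url: /tasks/persist_counters
  schedule: every 5 minutes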
The open source samples include example sharded counters, written in Python and Java, that use memcache. The design differs slightly from what I've described above, in that a counter shard is written to the datastore on each increment. Trading exact counts for fuzzier counts with fewer datastore writes could reduce the cost of these examples and improve scalability.
In this scenario, we are not concerned that our page hit counter be 100% accurate. It is okay if the number of hits is underreported.
Caching frequently fetched entities
Performing a get against memcache is usually faster than a datastore get on an entity. As of this writing, datastore gets usually take low to mid tens of milliseconds, while memcache gets usually take around one to two milliseconds (though these numbers may change in the future). If you have an entity that you plan to fetch frequently by key, key name, or ID, storing a copy of the entity in memcache can speed up access. The design for this type of caching is straightforward: an update to an entity should be written to both the datastore and memcache, and reads should check memcache for the entity first, falling back to a datastore get if the entity is not in the cache.
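As a sketch of this pattern, assuming a hypothetical Employee model fetched by key name:

from google.appengine.api import memcache
from google.appengine.ext import db

class Employee(db.Model):
    # Hypothetical model used only for illustration.
    name = db.StringProperty()

def get_employee(key_name):
    # Read path: check memcache first, fall back to the datastore.
    employee = memcache.get('employee:' + key_name)
    if employee is None:
        employee = Employee.get_by_key_name(key_name)
        if employee is not None:
            memcache.set('employee:' + key_name, employee)
    return employee

def update_employee(employee):
    # Write path: update the datastore and the cache together.
    employee.put()
    memcache.set('employee:' + employee.key().name(), employee)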
Conclusion
When designing your application, take the time to consider which datasets can be cached for future reuse: commonly viewed pages or frequently read datastore entities, to name a couple. There may also be data in your application that you would like to share among all instances of your app but do not need to persist forever. In such cases, memcache can improve the scalability of your app by providing a fast and efficient distributed storage system for transient data. Adding memcache logic to your server-side code is often well worth the few extra lines of code.