Caching In Elasticsearch

Responding to queries takes CPU time, memory, and in unfortunate cases, wall time as well. Increasing the power of a cluster helps, but over-provisioning can be very expensive. Caching is one of the first tools pulled out of the optimization box. While older versions of Elasticsearch cached everything cacheable, newer versions are quite selective by default. So what caching does Elasticsearch support, and what is the best way to take advantage of it?

Elasticsearch supports three kinds of caches: the node query cache, the shard request cache, and the field data cache.

Node Query Cache

The node query cache is an LRU cache shared by all shards on a node. It caches the results of queries used in a filter context, and in previous versions of Elasticsearch it was called the filter cache for this reason. Clauses in a filter context include (or exclude) documents from the result set but do not contribute to scoring. Furthermore, Elasticsearch observed that many filters are quite fast to compute, particularly on small segments, and that others are rarely reused. To reduce churn, the node query cache will only include filters that:

  • Have been used multiple times in the last 256 queries
  • Belong to segments holding more than 10,000 documents (or 3% of the total documents, whichever is larger)
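
As a sketch of how this looks in practice (the index and field names here are invented for illustration), the term and range clauses under "filter" are simple yes/no checks that skip scoring and are the candidates for the node query cache, while the match clause under "must" still contributes to the score:

GET /logs/_search
{
  "query": {
    "bool": {
      "must": [
        { "match": { "message": "timeout" } }
      ],
      "filter": [
        { "term": { "status": "error" } },
        { "range": { "bytes": { "gte": 1024 } } }
      ]
    }
  }
}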

Shard Request Cache

The shard-level request cache caches query results independently for each shard, also with LRU eviction. By default, it is also selective about what it stores:

  • Only requests of size 0, such as aggregations, counts, and suggestions, will be cached. If you think another query should be cached, add the “request_cache=true” flag to the request.
  • Not all clauses will be cached. Date-time clauses containing “now” will not be cached.

The shard request cache is invalidated each time the shard is updated, which can lead to poor performance in a frequently-updated index.
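
For example, a size-0 aggregation like the sketch below (index and field names again invented) is cacheable by default; the request_cache=true parameter is how you opt in requests that would not otherwise qualify:

GET /logs/_search?request_cache=true
{
  "size": 0,
  "aggs": {
    "requests_per_status": {
      "terms": { "field": "status" }
    }
  }
}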

Field Data Cache

When Elasticsearch computes aggregations on a field, it loads all the field values into memory. For this reason, computing aggregations can be one of the most expensive operations in an Elasticsearch query. The field data cache holds these field values while computing aggregations. While Elasticsearch does not track hit/miss rates for this cache, the recommendation is to set it large enough to hold all the values for a field in memory.
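
The size of the field data cache is a static node setting in elasticsearch.yml; the 30% figure below is just a placeholder to show the syntax, not a recommendation for any particular cluster:

indices.fielddata.cache.size: 30%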

Monitoring Caching

A number of integrations are available for monitoring Elasticsearch; Sematext and Datadog are two of the more common ones. But what if you just need to spot-check during development?

Elasticsearch gives a number of ways to check the cache utilization, but I like the _cat nodes API in this case, because it will give all of the above in one call:

GET _cat/nodes?v&h=id,queryCacheMemory,queryCacheEvictions,requestCacheMemory,requestCacheHitCount,requestCacheMissCount,flushTotal,flushTotalTime
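
If you need more detail per cache, the node stats API breaks the same information out by node; I believe the index-metric filter below works on recent versions, but verify it against your own cluster:

GET _nodes/stats/indices/query_cache,request_cache,fielddata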

Implications

  1. Separate aggregations from ‘normal’ query processing. This seems counter-intuitive: why compute something twice, or make two round-trip calls for something that stems from the same clauses? In practice, aggregations will be more cacheable. In addition, this will keep aggregations from being re-calculated as users page through data.
  2. Distinguish filters from match clauses. Common filters are highly cacheable and quick to compute; scoring is more expensive and difficult to cache. Caching a broad filter and then scoring a more precise subset may look redundant, but it pays off.
  3. Use reusable filters to narrow the result set before scoring. Similarly, use scripted fields for scoring, but not for filtering.
  4. Filters are executed more-or-less in order. ES does some query re-writing, but in general, put the cheap filters first and more expensive filters second.
  5. If you must filter by timestamp, use a coarse granularity so the query value changes infrequently (see the sketch after this list).
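
To make the last two points concrete, here is a rough sketch (the field names are invented): the cheap term filter comes before the range filter, and the timestamp is rounded to the hour with date math so the same filter value is reused, and can be served from the cache, for up to an hour at a time. An unrounded "now-1d" would change on every request and never be reusable.

GET /logs/_search
{
  "query": {
    "bool": {
      "filter": [
        { "term": { "status": "error" } },
        { "range": { "timestamp": { "gte": "now-1d/h" } } }
      ]
    }
  }
}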


As ever, do let us know if we can help with your Elasticsearch project!

Did you know we offer training courses in how to tune relevance for Solr- and Elasticsearch-based search engines?