Reducing requested filter cache size to the maximum allowed size – How to solve related issues

Opster Team

Feb-20, Version: 1.7-8.0

Before you begin reading this guide, we recommend you run the Elasticsearch Error Check-Up, which can resolve issues that cause many errors.

This guide will help you check for common problems that cause the log “reducing requested filter cache size of [{}] to the maximum allowed size of [{}]” to appear. To understand the issues related to this log, read the explanation below about the following Elasticsearch concepts: cache, filter and indices.

Advanced users might want to skip right to the common problems section in each concept, or try running the Check-Up to analyze your Elasticsearch configuration and help resolve this error.
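In practice, this warning means that the configured filter cache size resolves to more bytes than the underlying cache implementation can hold. The message comes from the older, Guava-backed filter cache, so it is mainly seen on 1.x clusters, where the cache size is controlled by the indices.cache.filter.size setting (10% of the heap by default; it accepts either a heap percentage or an absolute value such as 512mb). For example, on a 1.x node the size can be lowered in elasticsearch.yml (20% here is just an illustrative value):

    indices.cache.filter.size: 20%

Elasticsearch clamps an oversized request to the maximum and continues, so the warning is informational rather than fatal; configuring a size at or below the logged maximum makes it disappear. In later versions the filter cache was folded into the query cache, sized by indices.queries.cache.size.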

Log Context

The log “reducing requested filter cache size of [{}] to the maximum allowed size of [{}]” is emitted from the class IndicesFilterCache.java.
We extracted the following from the Elasticsearch source code for those seeking deeper context:

     }

    private void computeSizeInBytes() {
        long sizeInBytes = MemorySizeValue.parseBytesSizeValueOrHeapRatio(size).bytes();
        if (sizeInBytes > ByteSizeValue.MAX_GUAVA_CACHE_SIZE.bytes()) {
            logger.warn("reducing requested filter cache size of [{}] to the maximum allowed size of [{}]", new ByteSizeValue(sizeInBytes),
                    ByteSizeValue.MAX_GUAVA_CACHE_SIZE);
            sizeInBytes = ByteSizeValue.MAX_GUAVA_CACHE_SIZE.bytes();
            // Even though it feels wrong for size and sizeInBytes to get out of
            // sync we don't update size here because it might cause the cache
            // to be rebuilt every time new settings are applied.
        }
        this.sizeInBytes = sizeInBytes;
    }
Try AutoOps to detect and fix issues in your cluster.