Exception during periodic field data cache cleanup – How to solve this Elasticsearch error

Opster Team

Aug-23, Version: 6.8-8.9

Briefly, this error occurs when Elasticsearch hits a problem during its routine cleanup of the field data cache. Common causes are insufficient heap memory, heavy load on the cluster, or a bug in the software. To resolve the issue, increase the JVM heap size if memory is the bottleneck, reduce the load on the cluster by optimizing queries or adding nodes, or upgrade Elasticsearch to the latest version to pick up any bug fixes.
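Before changing any settings, it helps to see how much heap the field data cache is actually using and how often entries are being evicted. The sketch below is a minimal example, assuming a node reachable at localhost:9200 and the official low-level Java REST client (elasticsearch-rest-client) on the classpath; the class name FielddataCacheCheck is just illustrative. It only calls the standard node stats and cat APIs to report field data usage. If usage turns out to be high, capping the cache with the static indices.fielddata.cache.size setting in elasticsearch.yml (followed by a node restart) or raising the heap in jvm.options are the usual next steps.

    import org.apache.http.HttpHost;
    import org.apache.http.util.EntityUtils;
    import org.elasticsearch.client.Request;
    import org.elasticsearch.client.Response;
    import org.elasticsearch.client.RestClient;

    public class FielddataCacheCheck {
        public static void main(String[] args) throws Exception {
            // Connect to a local node; adjust the host and port for your cluster.
            try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
                // Per-node field data cache size and eviction counts.
                Response nodeStats = client.performRequest(new Request("GET", "/_nodes/stats/indices/fielddata"));
                System.out.println(EntityUtils.toString(nodeStats.getEntity()));

                // Per-field breakdown, useful for spotting the heaviest fields.
                Response catFielddata = client.performRequest(new Request("GET", "/_cat/fielddata?v"));
                System.out.println(EntityUtils.toString(catFielddata.getEntity()));
            }
        }
    }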

This guide will help you check for common problems that cause the log "Exception during periodic field data cache cleanup:" to appear. To understand the issues related to this log, read the explanation below about the following Elasticsearch concepts: indices, cache.

Log Context

The log "Exception during periodic field data cache cleanup:" is generated by the class IndicesService.java. We extracted the following snippet from the Elasticsearch source code for those seeking in-depth context:

                logger.trace("running periodic field data cache cleanup");
            }
            try {
                this.cache.getCache().refresh();
            } catch (Exception e) {
                logger.warn("Exception during periodic field data cache cleanup:"; e);
            }
            if (logger.isTraceEnabled()) {
                logger.trace(
                    "periodic field data cache cleanup finished in {} milliseconds";
                    TimeValue.nsecToMSec(System.nanoTime() - startTimeNS)

 
