Briefly, this error occurs when OpenSearch encounters a problem during the routine cleanup of the field data cache. Possible causes include insufficient heap memory, a corrupted cache, or a bug in OpenSearch itself. To resolve it, you can increase the JVM heap size to provide more memory, manually clear the field data cache to remove any potential corruption, or upgrade OpenSearch to the latest version to pick up fixes for known bugs. If the problem persists, review your indexing and query patterns to reduce the load placed on the field data cache.
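The remediation steps above can be sketched with the standard cache APIs. This is a minimal, hedged example: the cluster address `localhost:9200` and the suggested setting values are assumptions you should adapt to your deployment, and secured clusters will additionally need authentication flags.

```shell
# Assumption: cluster reachable at localhost:9200 without auth; adjust as needed.

# Inspect current field data cache usage per node and field
curl -s "localhost:9200/_cat/fielddata?v"

# Manually clear only the field data cache across all indices
curl -s -X POST "localhost:9200/_cache/clear?fielddata=true"

# To bound the cache so entries are evicted before pressuring the heap,
# set a cap in opensearch.yml (static setting; requires a node restart).
# The 30% figure is illustrative, not a recommendation:
#   indices.fielddata.cache.size: 30%

# Heap size is raised in config/jvm.options (example values only):
#   -Xms4g
#   -Xmx4g
```

Clearing the cache is safe but causes field data to be rebuilt on the next query that needs it, so expect a temporary latency increase afterwards.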
This guide will help you check for common problems that cause the log "Exception during periodic field data cache cleanup:" to appear. To understand the issues related to this log, read the explanation below about the following OpenSearch concepts: cache, indices.
Log Context
The log "Exception during periodic field data cache cleanup:" is emitted from the class IndicesService.java.
We extracted the following from the OpenSearch source code for those seeking in-depth context:
if (logger.isTraceEnabled()) {
    logger.trace("running periodic field data cache cleanup");
}
try {
    this.cache.getCache().refresh();
} catch (Exception e) {
    logger.warn("Exception during periodic field data cache cleanup:", e);
}
if (logger.isTraceEnabled()) {
    logger.trace(
        "periodic field data cache cleanup finished in {} milliseconds",
        TimeValue.nsecToMSec(System.nanoTime() - startTimeNS)
    );
}
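Note that the surrounding timing messages are logged at trace level, so they are hidden by default. As a hedged sketch, you can raise the log level for this class dynamically through the cluster settings API to watch the periodic cleanup run; `localhost:9200` is an assumed address, and remember to reset the level afterwards since trace logging is verbose:

```shell
# Assumption: cluster reachable at localhost:9200 without auth.
# Enable trace logging for IndicesService to see the cleanup messages:
curl -s -X PUT "localhost:9200/_cluster/settings" \
  -H 'Content-Type: application/json' \
  -d '{"transient": {"logger.org.opensearch.indices.IndicesService": "trace"}}'

# Revert to the default level when finished:
curl -s -X PUT "localhost:9200/_cluster/settings" \
  -H 'Content-Type: application/json' \
  -d '{"transient": {"logger.org.opensearch.indices.IndicesService": null}}'
```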