Failed to clean store before starting shard – How to solve this Elasticsearch error

Opster Team

Aug-23, Version: 1.7-1.7

Briefly, this error occurs when Elasticsearch is unable to delete the existing data in a shard before starting it. This could be due to insufficient permissions, disk space issues, or a file lock. To resolve this, you can check and adjust the permissions of the Elasticsearch data directory, ensure there is enough disk space, or identify and remove any file locks. Restarting the Elasticsearch node may also help resolve the issue.
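
As a quick first check for the permission and disk space causes mentioned above, the data directory on the affected node can be inspected directly. Below is a minimal sketch in Java (the same language as the source excerpt further down); the path /var/lib/elasticsearch is only an assumed default and should be replaced with the path.data configured on the node.

import java.io.IOException;
import java.nio.file.FileStore;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class ShardStoreCheck {
    public static void main(String[] args) throws IOException {
        // Assumed data path; replace with the node's configured path.data
        Path dataPath = Paths.get(args.length > 0 ? args[0] : "/var/lib/elasticsearch");

        // The Elasticsearch process user must be able to both read and write the shard store
        System.out.println("Readable: " + Files.isReadable(dataPath));
        System.out.println("Writable: " + Files.isWritable(dataPath));

        // A nearly full disk can also cause cleanup of the old shard files to fail
        FileStore store = Files.getFileStore(dataPath);
        long freeMb = store.getUsableSpace() / (1024 * 1024);
        long totalMb = store.getTotalSpace() / (1024 * 1024);
        System.out.printf("Free space: %d MB of %d MB%n", freeMb, totalMb);
    }
}

If either check fails, adjusting ownership of the data directory so it is writable by the user running Elasticsearch, or freeing disk space, is usually enough; otherwise look for stale lock files or other processes holding the shard's files open.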

This guide will help you check for common problems that cause the log “failed to clean store before starting shard” to appear. To understand the issues related to this log, read the explanation below about the following Elasticsearch concepts: index and shard.

Log Context

The log “failed to clean store before starting shard” is generated by the class NoneIndexShardGateway.java.
We extracted the following from the Elasticsearch source code for those seeking in-depth context:

        indexShard.store().incRef();
        try {
            logger.debug("cleaning shard content before creation");
            // Wipe any existing files from the shard's Lucene directory
            Lucene.cleanLuceneIndex(indexShard.store().directory());
        } catch (IOException e) {
            logger.warn("failed to clean store before starting shard", e);
        } finally {
            indexShard.store().decRef();
        }
        recoveryState.getTranslog().totalOperations(0);
        recoveryState.getTranslog().totalOperationsOnStart(0);
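
For context, Lucene.cleanLuceneIndex is an Elasticsearch helper that wipes the existing files from the shard's Lucene directory so the shard can start from an empty store. The sketch below is a simplified illustration of that idea using only the standard Lucene Directory API, not the actual Elasticsearch implementation; the IOException caught in the excerpt above is exactly the kind of failure such deletions produce when permissions, disk space, or file locks get in the way.

import java.io.IOException;
import org.apache.lucene.store.Directory;

public final class StoreCleanerSketch {
    // Simplified illustration only: Elasticsearch's Lucene.cleanLuceneIndex additionally
    // recreates an empty index in the directory after removing the old files.
    static void cleanDirectory(Directory directory) throws IOException {
        for (String file : directory.listAll()) {
            // Each deletion can throw IOException if the process lacks permissions,
            // the disk is full, or another process still holds a lock on the file,
            // which then surfaces as "failed to clean store before starting shard".
            directory.deleteFile(file);
        }
    }
}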

 
