After allocating node would have more than the allowed – How to solve this Elasticsearch error

Opster Team

Aug-23, Version: 6.8-7.15

Briefly, this error occurs when allocating a shard to a node would push that node's disk usage above the high disk watermark, i.e. the node would be left with less free disk space than the threshold allows, so Elasticsearch prevents the allocation. This is usually caused by nodes whose disks are already nearly full or by uneven disk usage across the cluster. To resolve this issue, free up disk space on the affected nodes, delete or shrink indices you no longer need, add nodes or storage capacity, or, as a temporary measure, raise the high watermark (cluster.routing.allocation.disk.watermark.high) in the cluster settings.
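To verify that disk watermarks are what is blocking allocation, and to buy time while you free up capacity, you can inspect disk usage and the watermark settings through the cluster APIs. The sketch below is a minimal example in Python using the requests library; it assumes an unsecured cluster reachable at http://localhost:9200 and uses 92% only as an illustrative watermark value, so adjust the host, credentials and thresholds for your environment.

import requests

ES = "http://localhost:9200"  # assumption: local, unauthenticated cluster

# 1. Show disk usage and shard counts per node.
print(requests.get(f"{ES}/_cat/allocation?v").text)

# 2. Show the current disk-based allocation settings (watermarks).
print(requests.get(
    f"{ES}/_cluster/settings",
    params={"include_defaults": "true",
            "filter_path": "**.routing.allocation.disk*"},
).json())

# 3. Temporarily raise the high watermark so shards can be allocated
#    while disk space is freed or nodes are added (example value only).
resp = requests.put(
    f"{ES}/_cluster/settings",
    json={"transient": {
        "cluster.routing.allocation.disk.watermark.high": "92%"
    }},
)
print(resp.json())

Remember to reset the transient override (set it back to null) once disk pressure is resolved, so the cluster returns to its normal protection.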

This guide will help you check for common problems that cause the log "after allocating [{}] node [{}] would have more than the allowed" to appear. To understand the issues related to this log, read the explanation below about the following Elasticsearch concepts: cluster, routing, node, allocation.

Log Context

The log "after allocating [{}] node [{}] would have more than the allowed" is emitted from DiskThresholdDecider.java. For those seeking in-depth context, we extracted the following from the Elasticsearch source code:

                diskThresholdSettings.getHighWatermarkRaw(),
                diskThresholdSettings.getFreeBytesThresholdHigh(),
                freeBytesValue, new ByteSizeValue(shardSize));
        }
        if (freeSpaceAfterShard < diskThresholdSettings.getFreeDiskThresholdHigh()) {
            logger.warn("after allocating [{}] node [{}] would have more than the allowed " +
                    "{} free disk threshold ({} free), preventing allocation",
                    shardRouting, node.nodeId(),
                    Strings.format1Decimals(diskThresholdSettings.getFreeDiskThresholdHigh(), "%"),
                    Strings.format1Decimals(freeSpaceAfterShard, "%"));
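When the disk threshold decider vetoes an allocation like this, the cluster allocation explain API reports the decision and the watermark that was exceeded. A minimal sketch, again in Python with requests, assuming a cluster at http://localhost:9200 and a hypothetical index name my-index:

import requests

ES = "http://localhost:9200"  # assumption: local, unauthenticated cluster

# Ask the cluster why a specific shard is unassigned. When the disk
# threshold decider blocked it, the response typically includes a
# "disk_threshold" decider entry with the watermark explanation.
resp = requests.post(
    f"{ES}/_cluster/allocation/explain",
    json={"index": "my-index", "shard": 0, "primary": True},
)
print(resp.json())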

 
