After allocating node would have more than the allowed – How to solve this OpenSearch error

Opster Team

Aug-23, Version: 1-1.1

Before you dig into reading this guide, have you tried asking OpsGPT what this log means? You’ll receive a customized analysis of your log.

Try OpsGPT now for step-by-step guidance and tailored insights into your OpenSearch operation.

Briefly, this error occurs when OpenSearch tries to allocate shards to a node, but doing so would exceed the maximum allowed storage capacity on that node. This is controlled by the cluster.routing.allocation.disk.watermark settings. To resolve this issue, you can either increase the disk capacity of the node, delete unnecessary data to free up space, or adjust the disk watermark settings to allow more storage usage. However, be cautious with the last option as it could lead to disk space issues. Alternatively, you can add more nodes to the cluster to distribute the load.
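To see how close each node is to its disk watermarks, and to adjust those watermarks if you choose that route, a minimal sketch using OpenSearch's cat allocation and cluster settings APIs looks like this (the watermark values shown are illustrative, not recommendations):

```
GET _cat/allocation?v

PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "90%",
    "cluster.routing.allocation.disk.watermark.high": "95%"
  }
}
```

Transient settings are cleared on a full cluster restart, which makes them a reasonable fit for temporarily relaxing the watermarks while you free up disk space or add nodes.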

For a complete solution to your search operation, try AutoOps for Elasticsearch & OpenSearch for free. With AutoOps and Opster’s proactive support, you don’t have to worry about your search operation – we take charge of it. Get improved performance & stability with less hardware.

This guide will help you check for common problems that cause the log “after allocating [{}] node [{}] would have more than the allowed” to appear. To understand the issues related to this log, read the explanation below about the following OpenSearch concepts: allocation, routing, cluster, node.
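When a shard cannot be allocated, the cluster allocation explain API reports which allocation deciders rejected each node, and its output should mention the disk watermark if this error is the cause. A minimal request, assuming a hypothetical index name my-index:

```
GET _cluster/allocation/explain
{
  "index": "my-index",
  "shard": 0,
  "primary": true
}
```

The response lists each candidate node with a decision and an explanation string, which makes it straightforward to confirm whether disk thresholds are blocking allocation.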

Log Context

Log “after allocating [{}] node [{}] would have more than the allowed” classname is DiskThresholdDecider.java.
We extracted the following from the OpenSearch source code for those seeking in-depth context:

                freeBytesValue, new ByteSizeValue(shardSize));
        if (freeSpaceAfterShard < diskThresholdSettings.getFreeDiskThresholdHigh()) {
            logger.warn("after allocating [{}] node [{}] would have more than the allowed {} free disk threshold ({} free), preventing allocation",
                    shardRouting, node.nodeId(), ...);
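Once disk space has been freed or the watermarks adjusted, allocation normally resumes on its own. If shards remain unassigned after repeated failed attempts, you can ask OpenSearch to retry them explicitly:

```
POST _cluster/reroute?retry_failed=true
```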


