Elasticsearch Cluster Concurrent Rebalance High / Low

Opster Team

Nov 2020


What Does it Mean

The cluster concurrent rebalance setting (cluster.routing.allocation.cluster_concurrent_rebalance) determines the maximum number of shard relocations the cluster is allowed to perform at any one time when rebalancing shards across the nodes.

Rebalancing moves shards between nodes to even out resource usage, such as disk space, across the cluster. Each shard move consumes cluster resources (network bandwidth and disk I/O), so it is advisable to keep the concurrent rebalance setting low enough that the cluster doesn't use up too many resources moving shards at any one time. The default value for this setting is 2.
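To see how many shard relocations are currently in flight, the cluster health API can be used; its response includes a relocating_shards count:

GET _cluster/health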

If, on the other hand, the concurrent rebalance setting is too low (setting it to 0 disables rebalancing entirely), the cluster may not be able to rebalance shards quickly enough. Some nodes could then be unable to allocate shards because their disks are full, even though space is available on other nodes. This could cause the cluster to go yellow or red and prevent new data from being written to certain indices.
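To check whether disk usage is unevenly distributed across nodes, the cat allocation API shows per-node shard counts and disk usage:

GET _cat/allocation?v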

How to Fix it

Check the current cluster settings.

GET _cluster/settings
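Note that by default this returns only settings that have been explicitly changed. To also see default values such as cluster_concurrent_rebalance, the defaults can be included:

GET _cluster/settings?include_defaults=true&flat_settings=true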

If necessary, change the concurrent rebalance setting. Remember that the default value is 2.

PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.cluster_concurrent_rebalance": 2
  }
}
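A transient setting is lost when the cluster fully restarts. To make the change survive restarts, use "persistent" instead of "transient". Setting the value to null removes the override and restores the default:

PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.cluster_concurrent_rebalance": null
  }
}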
