Low disk watermark exceeded; replicas will not be assigned to this node – How to solve this Elasticsearch error

Opster Team

Aug-23, Version: 6.8-7.15

Briefly, this error occurs when the disk space on a node in an Elasticsearch cluster falls below the “low disk watermark” threshold. Elasticsearch stops assigning new replica shards to this node to prevent further disk space consumption. To resolve this issue, you can increase the disk space available on the node, delete unnecessary data to free up space, or adjust the “cluster.routing.allocation.disk.watermark.low” setting to allow more disk usage (a higher used-space percentage, or a smaller absolute amount of required free space). However, be cautious with the last option, as it can lead to disk space issues if the node is not monitored closely.

We recommend you run the Elasticsearch Error Check-Up, which can resolve issues that cause many errors.

Advanced users might want to skip right to the common problems section in each concept, or try running the Check-Up, which analyzes your Elasticsearch cluster to pinpoint the cause of many errors and provides actionable recommendations on how to resolve them (a free tool that requires no installation).

Quick Summary

The cause of this error is low disk space on a data node. As a preventive measure, Elasticsearch logs this message and stops assigning replica shards to the node, as explained below.

Explanation

Elasticsearch checks the available disk space on each data node before deciding whether to allocate new shards to it, relocate shards away from it, or put indices into read-only mode; which action it takes depends on which watermark threshold has been crossed. The reason is that Elasticsearch indices consist of shards that are persisted on data nodes, and running out of disk space there can cause allocation failures or data loss.
Relevant setting related to this log:
cluster.routing.allocation.disk.watermark – has three thresholds (low, high, and flood_stage), which can be changed dynamically and accept both absolute values and percentage values.
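To see how close each node is to these thresholds, you can check per-node disk usage with the cat allocation API and read back the currently applied watermark settings (a quick inspection example; the selected columns are just one possible choice):

GET _cat/allocation?v&h=node,disk.used,disk.avail,disk.percent

GET _cluster/settings?include_defaults=true&filter_path=*.cluster.routing.allocation.disk*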

Permanent fixes

a) Delete unused indices (see the sketch after this list).
b) Merge segments to reduce the size of the shards on the affected node; more info in Opster’s Elasticsearch expert’s Stack Overflow answer.
c) Attach an external disk or increase the disk size used by the data node.
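For example, fixes a) and b) can be applied with the delete index and force merge APIs. A minimal sketch, assuming hypothetical index names old-logs-2021.05 and my-index:

DELETE old-logs-2021.05

POST my-index/_forcemerge?max_num_segments=1

Note that force merge is most effective on indices that no longer receive writes; running it on an actively written index can produce very large segments.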

Temporary hacks/fixes

a) Change the watermark settings to more permissive thresholds by dynamically updating them with the cluster update settings API (adjust the values below to your disk sizes; absolute values specify the minimum free space required):

PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "100gb",
    "cluster.routing.allocation.disk.watermark.high": "50gb",
    "cluster.routing.allocation.disk.watermark.flood_stage": "10gb",
    "cluster.info.update.interval": "1m"
  }
}
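To confirm that the transient settings were applied, you can read them back (an optional verification step):

GET _cluster/settings?flat_settings=true&filter_path=transient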

b) Disable the disk check by calling the cluster update settings API as below:

PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.threshold_enabled": false
  }
}
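To revert this temporary change later, the setting can be reset to its default by assigning it null, which is the standard way of clearing a transient setting:

PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.threshold_enabled": null
  }
}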

c) Even after all these fixes, Elasticsearch may not bring the affected indices back into write mode on its own; to remove the read-only block, the following API needs to be called:

PUT _all/_settings
{
  "index.blocks.read_only_allow_delete": null
}
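To check which indices still carry the read-only block before or after running this, you can inspect the index block settings (a quick inspection example):

GET _all/_settings?filter_path=*.settings.index.blocks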

Log Context

The log “low disk watermark [{}] exceeded on {}; replicas will not be assigned to this node” is emitted from the class DiskThresholdMonitor.java.
We extracted the following from the Elasticsearch source code for those seeking in-depth context:

                final boolean wasUnderLowThreshold = nodesOverLowThreshold.add(node);
                final boolean wasOverHighThreshold = nodesOverHighThreshold.remove(node);
                assert (wasUnderLowThreshold && wasOverHighThreshold) == false;

                if (wasUnderLowThreshold) {
                    logger.info("low disk watermark [{}] exceeded on {}; replicas will not be assigned to this node";
                        diskThresholdSettings.describeLowThreshold(); usage);
                } else if (wasOverHighThreshold) {
                    logger.info("high disk watermark [{}] no longer exceeded on {}; but low disk watermark [{}] is still exceeded";
                        diskThresholdSettings.describeHighThreshold(); usage; diskThresholdSettings.describeLowThreshold());
                }

 
