How To Solve Issues Related to Log – High disk watermark exceeded on ; shards will be relocated away from this node


Updated: Jan-20


Quick Summary

This error indicates that disk space is running low on a data node. As a preventive measure, Elasticsearch logs this message and takes the protective actions explained below.

Disk watermarks in Elasticsearch

Elasticsearch considers the available disk space on a node before deciding whether to allocate new shards to that node, relocate shards away from it, or mark all indices read-only. Each of these actions is triggered at a different threshold. The reason is that Elasticsearch indices consist of shards which are persisted on data nodes, and low disk space on a node can cause the issues above.
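To see how close each data node is to the watermarks, per-node disk usage can be checked with the cat allocation API:

GET _cat/allocation?v

The response lists, per data node, the number of shards together with disk.used, disk.avail, disk.total and disk.percent; a disk.percent at or above the high watermark (90% by default) triggers this log message.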

Relevant settings

The cluster.routing.allocation.disk.watermark settings have three thresholds: low, high, and flood_stage. They can be changed dynamically and accept both absolute values and percentages. More information is available in the official Elasticsearch documentation.
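By default, the low, high and flood_stage watermarks are 85%, 90% and 95% of disk usage respectively. The values currently in effect (including defaults) can be inspected with:

GET _cluster/settings?include_defaults=true&flat_settings=true

Look for the cluster.routing.allocation.disk.watermark.* keys in the response.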

How to fix log messages related to disk watermarks

Permanent fixes:

a) Delete unused indices (see the example after this list).

b) Merge segments to reduce the size of the shard on the affected node; more information is available in Opster's Elasticsearch expert's Stack Overflow answer, and a sketch follows this list.

c) Attach an external disk or increase the disk available to the data node.
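As a minimal sketch of a) and b) (old-logs-2019 is a hypothetical index name, assuming it is genuinely unused):

# a) Delete an unused index - this is irreversible, so verify the name first
DELETE /old-logs-2019

# b) Merge an index's segments down to one segment per shard
POST /old-logs-2019/_forcemerge?max_num_segments=1

Note that a force merge temporarily needs extra disk space while the merged segments are written, so it is best run after some space has already been freed.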

Temporary workarounds:

a) Raise the watermark thresholds by dynamically updating the settings with the cluster update settings API below.

PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "100gb",
    "cluster.routing.allocation.disk.watermark.high": "50gb",
    "cluster.routing.allocation.disk.watermark.flood_stage": "10gb",
    "cluster.info.update.interval": "1m"
  }
}

Adjust the values according to your situation. Note that absolute values refer to the free disk space remaining, which is why low (100gb) is larger than high (50gb) and flood_stage (10gb).
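To confirm that the new thresholds are in effect, the transient settings can be read back:

GET _cluster/settings?flat_settings=true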

b) Disable the disk threshold check entirely with the cluster update settings API below (use with caution, since the node can then fill up completely):

PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.threshold_enabled": false
  }
}
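Once disk space has been recovered, remember to re-enable the check; setting the value to null restores the default (true):

PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.threshold_enabled": null
  }
}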

Even after all these fixes, Elasticsearch will not bring the indices back into write mode on its own; for that, the read-only block must be removed with the API below.

PUT _all/_settings
{
  "index.blocks.read_only_allow_delete": null
}
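To remove the block from a single index rather than all indices, the same body can be sent to that index's settings endpoint (my-index is a hypothetical placeholder):

PUT /my-index/_settings
{
  "index.blocks.read_only_allow_delete": null
}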

Troubleshooting background

To troubleshoot the Elasticsearch log "High disk watermark exceeded on ; shards will be relocated away from this node", it's important to understand the common problems related to the Elasticsearch concepts involved: allocation, cluster, monitor, node, rest-high-level, routing and shards. See the detailed explanations below, complete with common problems, examples and useful tips.

Cluster in Elasticsearch

What is it

In Elasticsearch, a cluster is a collection of one or more nodes (servers/VMs). A cluster can consist of an unlimited number of nodes. The cluster provides an interface for indexing and storing data, and search capability across all of the data stored on the data nodes.

Each cluster has a single master node that is elected by the master-eligible nodes. In cases where the master is not available, the other connected master-eligible nodes elect a new master. Clusters are identified by a unique name, which defaults to "elasticsearch".
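For example, the cluster name and overall state can be checked with the cluster health API:

GET _cluster/health

The response includes cluster_name, the status (green, yellow or red) and the number of nodes and data nodes in the cluster.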


To help troubleshoot related issues, we have gathered selected Q&A from the community and issues from GitHub. Please review the following for further information:

1. High disk watermark exceeded even when there is not much data in my index

2. high disk watermark [90%] exceeded in JHipster Spring Boot app


Log Context

The log "high disk watermark [{}] exceeded on {}; shards will be relocated away from this node" is emitted from the class DiskThresholdMonitor.java. We have extracted the following from the Elasticsearch source code to provide in-depth context:

        // Check absolute disk values
        if (usage.getFreeBytes() < diskThresholdSettings.getFreeBytesThresholdFloodStage().getBytes()) {
            logger.warn("flood stage disk watermark [{}] exceeded on {}; all indices on this node will be marked read-only",
                diskThresholdSettings.getFreeBytesThresholdFloodStage(), usage);
        } else if (usage.getFreeBytes() < diskThresholdSettings.getFreeBytesThresholdHigh().getBytes()) {
            logger.warn("high disk watermark [{}] exceeded on {}; shards will be relocated away from this node",
                diskThresholdSettings.getFreeBytesThresholdHigh(), usage);
        } else if (usage.getFreeBytes() < diskThresholdSettings.getFreeBytesThresholdLow().getBytes()) {
            logger.info("low disk watermark [{}] exceeded on {}; replicas will not be assigned to this node",
                diskThresholdSettings.getFreeBytesThresholdLow(), usage);
        }

About Opster

Opster's solution incorporates deep knowledge and a broad history of Elasticsearch issues. It identifies and predicts root causes of Elasticsearch problems, provides recommendations, and can automatically perform various actions to manage, troubleshoot and prevent issues.

Learn more: Glossary | Blog| Troubleshooting guides | Error Repository

Need help with any Elasticsearch issue? Contact Opster.
