Log high disk watermark [90%] exceeded on – How To Solve Related Issues




Opster Team

Feb-20, Version: 1.7-8.0



Before you begin reading this guide, we recommend you try running the Elasticsearch Error Check-Up, which can resolve issues that cause many log errors (free and no installation required).

 

This guide will help you check for common problems that cause the log “high disk watermark [90%] exceeded on” to appear. To understand the issues related to this log, start by reading the general overview on common issues and tips related to the following Elasticsearch concepts: allocation, cluster, rest-high-level, routing, shards and threshold.


Advanced users might want to skip right to the common problems section in each concept, or try running the Check-Up, which analyzes Elasticsearch to discover the cause of many errors and provides actionable recommendations.


Quick Summary

The cause of this log is low disk space on a data node. As a preventive measure, Elasticsearch emits this log message and takes the protective actions explained below.


Explanation


Elasticsearch considers the available disk space before deciding whether to allocate new shards to a node, relocate shards away from it, or put all of its indices into read-only mode, depending on which threshold has been crossed. The reason is that Elasticsearch indices consist of shards that are persisted on data nodes, and low disk space can cause allocation and indexing issues.

Relevant settings related to this log:

cluster.routing.allocation.disk.watermark – has three thresholds (low, high and flood_stage), can be changed dynamically and accepts absolute values as well as percentage values.
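To see how close each data node is to these thresholds, you can check per-node disk usage and the currently effective watermark settings. The requests below are a minimal sketch using standard APIs; the disk watermarks appear in the response under cluster.routing.allocation.disk.

GET _cat/allocation?v

GET _cluster/settings?include_defaults=true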


Permanent fixes


a) Delete unused indices.
b) Merge segments to reduce the size of the shards on the affected node (more info in Opster’s Elasticsearch expert’s Stack Overflow answer); see the example requests after this list.
c) Attach an external disk or increase the size of the disk used by the data node.
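For illustration, the requests below show how the first two fixes could be applied. The index names old-logs-2019 and my-index are placeholders; replace them with indices from your own cluster.

DELETE /old-logs-2019

POST /my-index/_forcemerge?max_num_segments=1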

Temporary hacks/fixes

a) Change the settings values to a higher threshold by dynamically updating them through the cluster update settings API (adjust the values according to your situation):

PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "100gb",
    "cluster.routing.allocation.disk.watermark.high": "50gb",
    "cluster.routing.allocation.disk.watermark.flood_stage": "10gb",
    "cluster.info.update.interval": "1m"
  }
}
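Note that transient settings do not survive a full cluster restart. If the higher thresholds should persist across restarts, the same request can be sent with a persistent block instead, for example:

PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.disk.watermark.low": "100gb",
    "cluster.routing.allocation.disk.watermark.high": "50gb",
    "cluster.routing.allocation.disk.watermark.flood_stage": "10gb"
  }
}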
b) Disable the disk threshold check entirely using the same cluster update settings API:

PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.threshold_enabled": false
  }
}
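Once enough disk space has been freed, the disk check should be turned back on. Setting the value to null resets it to its default (enabled):

PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.threshold_enabled": null
  }
}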
c) Even after all these fixes, Elasticsearch won’t bring the indices back into write mode on its own; the following request needs to be sent to remove the read-only block:

PUT _all/_settings
{
  "index.blocks.read_only_allow_delete": null
}
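To verify that the write block has been removed, you can inspect the index settings; if no read_only_allow_delete entry is returned, the indices are writable again:

GET _all/_settings/index.blocks*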

Log Context

The log “high disk watermark [90%] exceeded on” is generated in the class DiskThresholdDecider.java.
We extracted the following from the Elasticsearch source code for those seeking in-depth context:

                                     entry, DiskThresholdDecider.this.rerouteInterval);
                        }
                    }
                }
                if (reroute) {
                    logger.info("high disk watermark exceeded on one or more nodes; rerouting shards");
                    // Execute an empty reroute; but don't block on the response
                    client.admin().cluster().prepareReroute().execute();
                }
            }
        }




 

Related issues to this log

We have gathered selected Q&A from the community and issues from GitHub that can help fix related issues. Please review the following for further information:

1. High disk watermark exceeded even when there is not much data in my index – views: 37.89 K, score: 38

2. low disk watermark [??%] exceeded on – views: 59.87 K, score: 58

 

About Opster

Opster’s line of products and support services detects, prevents, optimizes and automates everything needed to manage mission-critical Elasticsearch.
