How To Solve Issues Related to Log – Flood stage disk watermark exceeded on ; all indices on this node will be marked read-only

Updated: Jan-20

Elasticsearch Version: 1.7-8.0

Background

Before you begin reading this guide, try our beta Elasticsearch Health Check-Up. It analyzes JSON responses from your cluster to provide personalized recommendations that can improve your cluster's performance.


To troubleshoot the log “Flood stage disk watermark exceeded on ; all indices on this node will be marked read-only”, it’s important to understand a few Elasticsearch concepts: allocation, cluster, indices, monitor, node, routing, and threshold. See below for important tips and explanations of these concepts.

"

Quick Summary

The cause of this error is low disk space on a data node. As a protective measure, Elasticsearch logs this message and takes the preventive actions explained below.


Explanation


Elasticsearch considers the available disk space before deciding whether to allocate new shards to a node, relocate shards away from it, or mark all of its indices read-only, depending on which threshold has been breached. The reason is that Elasticsearch indices consist of shards that are persisted on data nodes, and low disk space can cause issues there.
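
To see how close each data node is to these thresholds, you can check per-node disk usage with the cat allocation API; its output includes the disk.used, disk.avail and disk.percent columns:
GET _cat/allocation?v
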
The relevant setting for this log is:
cluster.routing.allocation.disk.watermark – has three thresholds, low, high, and flood_stage, which can be changed dynamically and accept absolute values as well as percentage values.
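
To see which values are currently in effect (the defaults are 85% for low, 90% for high and 95% for flood_stage), you can query the cluster settings including defaults:
GET _cluster/settings?include_defaults=true&filter_path=*.cluster.routing.allocation.disk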


Permanent fixes


a) Delete unused indices (see the example requests after this list).
b) Merge segments to reduce the size of the shard on the affected node; for more info, see Opster's Elasticsearch expert's Stack Overflow answer and the force merge example below.
c) Attach an external disk or increase the disk space available to the data node.
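
As a sketch of fixes a) and b), assuming a no-longer-needed index named old-logs-2019 and a large index named my-index (both hypothetical names), the requests would look like this:
DELETE /old-logs-2019
POST /my-index/_forcemerge?max_num_segments=1
Note that a force merge is I/O intensive, so it is best run on indices that are no longer being written to.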

Temporary hacks/fixes

a) Change the settings to higher thresholds by dynamically updating them via the cluster update settings API: PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "100gb",
    "cluster.routing.allocation.disk.watermark.high": "50gb",
    "cluster.routing.allocation.disk.watermark.flood_stage": "10gb",
    "cluster.info.update.interval": "1m"
  }
}
Adjust these values according to your situation; note that when the watermarks are given as absolute values, they specify the minimum free disk space required.
b) Disable the disk check by calling the cluster update settings API as below: PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.threshold_enabled": false
  }
}
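Disabling the disk check means a node can fill its disk completely, so treat this as a last resort. Once disk space has been freed, the default behavior can be restored by resetting the transient setting to null: PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.threshold_enabled": null
  }
}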
c) Even after all these fixes, Elasticsearch won't put the indices back into write mode on its own (versions before 7.4 never remove the block automatically); for that, this API needs to be called: PUT _all/_settings
{
  "index.blocks.read_only_allow_delete": null
}
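To verify that the block has been removed from a given index (my-index is a hypothetical name), you can check its settings; an empty response means no block is set: GET my-index/_settings?filter_path=*.settings.index.blocks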
"

Log Context

The log “Flood stage disk watermark [{}] exceeded on {}; all indices on this node will be marked read-only” is emitted from the class DiskThresholdMonitor.java.
We extracted the following from the Elasticsearch source code for those seeking in-depth context:

    /**
     * Warn about the given disk usage if the low or high watermark has been passed
     */
    private void warnAboutDiskIfNeeded(DiskUsage usage) {
        // Check absolute disk values
        if (usage.getFreeBytes() < diskThresholdSettings.getFreeBytesThresholdFloodStage().getBytes()) {
            logger.warn("flood stage disk watermark [{}] exceeded on {}; all indices on this node will be marked read-only",
                diskThresholdSettings.getFreeBytesThresholdFloodStage(), usage);
        } else if (usage.getFreeBytes() < diskThresholdSettings.getFreeBytesThresholdHigh().getBytes()) {
            logger.warn("high disk watermark [{}] exceeded on {}; shards will be relocated away from this node",
                diskThresholdSettings.getFreeBytesThresholdHigh(), usage);
        } else if (usage.getFreeBytes() < diskThresholdSettings.getFreeBytesThresholdLow().getBytes()) {




Related issues to this log

We have gathered selected Q&A from the community and issues from GitHub that can help fix related issues. Please review the following for further information:

1. Elasticsearch error: cluster_block_exception [FORBIDDEN
2. elasticsearch 6 index change to read only after few second

About Opster

Opster detects the root causes of Elasticsearch problems, provides automated recommendations, and can perform various actions to prevent issues and optimize performance.
