Flood stage disk watermark exceeded on all indices on this node will be marked read-only – How to solve this Elasticsearch error

Opster Team

Aug-23, Version: 6.8-7.15

Briefly, this error occurs when the disk usage exceeds the “flood stage” watermark level, which is 95% by default in Elasticsearch. This is a protective measure to prevent the node from crashing due to lack of disk space. To resolve this issue, you can either increase your disk space, delete unnecessary data, or adjust the “flood stage” watermark level. However, adjusting the watermark level should be done cautiously as it might lead to disk space issues.

Crossing high disk watermarks can be avoided if detected early. Before you read this guide, we strongly recommend you run the Elasticsearch Error Check-Up, which detects issues in Elasticsearch that cause errors, and specifically problems that cause disk space to run out quickly.
The tool can help prevent the flood stage disk watermark [95%] from being exceeded again. It’s a free tool that requires no installation and takes 2 minutes to complete. You can run the Check-Up here.

Quick summary

This error is caused by low disk space on a data node. As a preventive measure, Elasticsearch emits this log message and takes protective actions, as explained below.

To pinpoint how to resolve the issues causing the flood stage disk watermark [95%] to be breached, run Opster’s free Elasticsearch Health Check-Up. The tool runs several checks on disk watermarks and can provide actionable recommendations on how to resolve and prevent this from occurring (even without increasing disk space).

Explanation

Elasticsearch considers the available disk space on a data node before deciding whether to allocate new shards to that node, relocate shards away from it, or block all index write operations on it, with each action tied to a different watermark threshold. This is because Elasticsearch indices consist of shards that are persisted on data nodes, and running a node out of disk space can cause serious issues.

Settings relevant to this log:

cluster.routing.allocation.disk.watermark – There are three thresholds: low, high, and flood_stage. They can be updated dynamically and accept either percentage or absolute byte values. Percentages are generally more flexible and easier to maintain, especially when nodes have disks of different sizes (as in hot/warm deployments).
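
Before applying any fix, it helps to confirm how full each data node actually is and which watermark values are currently in effect. A minimal check, assuming access to the cluster APIs (for example via Kibana Dev Tools):

# Per-node disk usage
GET _cat/allocation?v&h=node,disk.percent,disk.used,disk.avail,disk.total

# Effective settings, including defaults (look for cluster.routing.allocation.disk.*)
GET _cluster/settings?include_defaults=true&flat_settings=true

The first call shows how much disk each node is using; the second shows the watermark thresholds the cluster is actually applying.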

Permanent fixes

  1. Delete unused indices.
  2. Attach an external disk or increase the disk size of the data node.
  3. Manually move shards away from the node using the cluster reroute API (see the sketch after this list).
  4. Reduce the replica count to 1 (if replicas > 1), also shown below.
  5. Add new data nodes.
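
The sketches below illustrate fixes 1, 3 and 4. The index names (old-logs-2021.01, my-index) and node names (node-1, node-2) are placeholders for illustration only; substitute your own values:

# 1. Delete an index that is no longer needed (irreversible)
DELETE /old-logs-2021.01

# 3. Move shard 0 of my-index from a full node to one with free space
POST _cluster/reroute
{
    "commands": [
        {
            "move": {
                "index": "my-index",
                "shard": 0,
                "from_node": "node-1",
                "to_node": "node-2"
            }
        }
    ]
}

# 4. Reduce the replica count of an index to 1
PUT /my-index/_settings
{
    "index.number_of_replicas": 1
}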

Temporary hacks/fixes

1. Relax the thresholds by dynamically updating the settings with the cluster update settings API. Note that byte values specify the minimum free space required, while percentage values specify the maximum used space allowed:

PUT _cluster/settings
{
    "transient": {
        "cluster.routing.allocation.disk.watermark.low": "100gb",
        "cluster.routing.allocation.disk.watermark.high": "50gb",
        "cluster.routing.allocation.disk.watermark.flood_stage": "10gb",
        "cluster.info.update.interval": "1m"
    }
}
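
The same watermarks can also be expressed as percentages of used disk space, which scales better when nodes have disks of different sizes. An illustrative variant (the exact values here are only an example):

PUT _cluster/settings
{
    "transient": {
        "cluster.routing.allocation.disk.watermark.low": "90%",
        "cluster.routing.allocation.disk.watermark.high": "95%",
        "cluster.routing.allocation.disk.watermark.flood_stage": "97%"
    }
}

Once the underlying disk pressure is resolved, revert these transient overrides by setting each value back to null so the defaults apply again.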

2. Disable the disk threshold check entirely using the cluster update settings API:

PUT /_cluster/settings
{
    "transient": {
        "cluster.routing.allocation.disk.threshold_enabled": false
    }
}
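
Disabling the disk threshold check removes an important safety net, so it should only be a stop-gap. Once disk space has been freed, re-enable the check by resetting the setting to its default; a minimal sketch:

PUT /_cluster/settings
{
    "transient": {
        "cluster.routing.allocation.disk.threshold_enabled": null
    }
}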

3. Even after these fixes, Elasticsearch may not remove the write block on indices automatically (versions prior to 7.4 never release it on their own). To clear the block manually, call the following API:

PUT _all/_settings
{
    "index.blocks.read_only_allow_delete": null
}
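
To verify that the block has actually been cleared, you can query the index settings for the block flag; an empty result means the block is gone (_all is used here only as an example, a specific index name also works):

GET _all/_settings/index.blocks.read_only_allow_delete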

Log Context

The log “flood stage disk watermark [{}] exceeded on {}; all indices on this node will be marked read-only” is generated in DiskThresholdMonitor.java.
We extracted the following from the Elasticsearch source code for those seeking in-depth context:

                        indicesToMarkReadOnly.add(indexName);
                        indicesNotToAutoRelease.add(indexName);
                    }
                }

                logger.warn("flood stage disk watermark [{}] exceeded on {}; all indices on this node will be marked read-only";
                    diskThresholdSettings.describeFloodStageThreshold(); usage);

                continue;
            }

 
