Overview
Elasticsearch checks the available disk space on each data node before deciding whether to allocate new shards to that node, relocate shards away from it, or put indices into read-only mode; each of these actions is triggered by a different disk watermark threshold. The reason is that Elasticsearch indices consist of shards that are persisted on data nodes, and low disk space on a node can cause serious issues.
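To see how close each data node is to these thresholds, the cat allocation API gives a quick per-node disk usage summary. A minimal example request:
GET _cat/allocation?v
The response lists, for every data node, the number of shards it holds, the disk space taken by its indices, the total disk used, available and total disk space, and the disk usage percentage.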
Relevant settings
The cluster.routing.allocation.disk.watermark settings define three thresholds: low, high, and flood_stage. They accept absolute values as well as percentages and can be changed dynamically. More information is available in the official Elasticsearch documentation.
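By default these watermarks are percentage-based (roughly 85% for low, 90% for high and 95% for flood_stage). A minimal sketch of setting them statically in elasticsearch.yml, assuming percentage values are preferred:
cluster.routing.allocation.disk.watermark.low: "85%"
cluster.routing.allocation.disk.watermark.high: "90%"
cluster.routing.allocation.disk.watermark.flood_stage: "95%"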
How to fix log messages related to disk watermarks
Permanent fixes
1. Delete unused indices.
2. Force merge segments to reduce the size of the shards on the affected node (see Opster's Elasticsearch expert's answer on Stack Overflow for more detail), as shown in the example requests after this list.
3. Attach an additional disk or increase the disk capacity of the data node.
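A minimal sketch of the first two fixes, assuming hypothetical index names old-logs-2021 (unused) and my-index (kept):
DELETE /old-logs-2021

POST /my-index/_forcemerge?max_num_segments=1
Note that a force merge is an expensive operation and can temporarily increase disk usage while new segments are being written, so it is best run when some headroom is still available.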
Temporary hacks/fixes
1. Raise the watermark thresholds by dynamically updating the settings with the cluster settings update API below (adjust the values to your own disk capacity):
PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "100gb",
    "cluster.routing.allocation.disk.watermark.high": "50gb",
    "cluster.routing.allocation.disk.watermark.flood_stage": "10gb",
    "cluster.info.update.interval": "1m"
  }
}
2. Disable the disk threshold check entirely with the following cluster settings update:
PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.threshold_enabled": false
  }
}
Even after all these fixes, Elasticsearch will not automatically put the affected indices back into write mode; the read-only block has to be removed explicitly with this API call:
PUT _all/_settings
{
  "index.blocks.read_only_allow_delete": null
}
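Once enough disk space has been freed, it is worth removing any temporary overrides so the default protections apply again. Setting a transient setting to null resets it to its default; a sketch covering the settings changed above:
PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": null,
    "cluster.routing.allocation.disk.watermark.high": null,
    "cluster.routing.allocation.disk.watermark.flood_stage": null,
    "cluster.info.update.interval": null,
    "cluster.routing.allocation.disk.threshold_enabled": null
  }
}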