Crossing high disk watermarks can be avoided if the problem is detected early. In addition to reading this guide, we strongly recommend you run the Elasticsearch Error Check-Up, which detects issues in Elasticsearch that cause errors, including problems that cause disk space to run out quickly and the high disk watermark to be exceeded. It is a free tool that requires no installation and takes 2 minutes to complete. You can run the Check-Up here.
Quick Summary
The cause of this error is low disk space on a data node. As a precaution, Elasticsearch logs this message and takes the preventive measures explained below.
Explanation
Elasticsearch considers the available disk space before deciding whether to allocate new shards to a node, relocate shards away from it, or mark all indices on the node read-only, with a different threshold for each of these actions. The reason is that Elasticsearch indices consist of shards which are persisted on data nodes, and low disk space can cause allocation and indexing issues.
Relevant settings related to log:
cluster.routing.allocation.disk.watermark – has three thresholds, low, high, and flood_stage. They can be changed dynamically and accept absolute values as well as percentage values.
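For reference, the default thresholds are percentage based: 85% disk used for low, 90% for high, and 95% for flood_stage, with disk usage polled every 30 seconds. A minimal elasticsearch.yml sketch making these defaults explicit (the same keys can also be set dynamically through the cluster settings API):
cluster.routing.allocation.disk.watermark.low: "85%"
cluster.routing.allocation.disk.watermark.high: "90%"
cluster.routing.allocation.disk.watermark.flood_stage: "95%"
cluster.info.update.interval: "30s"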
Permanent fixes
a) Delete unused indices.
b) Force merge segments to reduce the size of shards on the affected node (for more detail, see Opster's Elasticsearch expert's answer on Stack Overflow).
c) Attach external disk or increase the disk used by the data node.
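A minimal sketch of the first two fixes, assuming hypothetical index names – an unused index called old-logs-2022.12 that can be deleted, and a read-only index called app-logs-2023.01 whose segments can be force merged (force merging is only advisable on indices that are no longer being written to):
DELETE old-logs-2022.12
POST app-logs-2023.01/_forcemerge?max_num_segments=1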
Temporary hacks/fixes
a) Change the watermark settings to higher thresholds by updating them dynamically with the cluster settings API:
PUT _cluster/settings
{ ""transient"": { ""cluster.routing.allocation.disk.watermark.low"": ""100gb"", -->adjust according to your situations ""cluster.routing.allocation. disk.watermark.high"": ""50gb"", ""cluster.routing.allocation. disk.watermark.flood_stage"": ""10gb"", ""cluster.info.update. interval"": ""1m"" } }
b) Disable the disk threshold check entirely using the cluster settings API:
{ ""transient"": { ""cluster.routing.allocation.disk.threshold_enabled"" : false } }
c) Even after these fixes, Elasticsearch will not automatically put indices back into write mode; the read-only block must be removed with the following API call:
PUT _all/_settings
{ ""index.blocks.read_only_allow_delete"": null }
Overview
There are various “watermark” thresholds on your Elasticsearch cluster. As the disk fills up on a node, the first threshold to be crossed will be the “low disk watermark”. Once this threshold is passed, then the cluster will stop allocating new shards on the node in question but will continue to write data on existing shards on the node. The second threshold will then be the “high disk watermark”. If you pass this threshold then Elasticsearch will try to relocate shards away from the node to other nodes in the cluster.
How to resolve this issue
Passing this threshold is a warning, and you should act before the higher flood_stage threshold is reached. Here are possible actions you can take to resolve the issue:
- Delete old indices
- Remove documents from existing indices
- Reduce the number of replicas (on older indices) – see the example below
- Increase disk space on all nodes
- Add new nodes to the cluster
Although you may be reluctant to delete data, in a logging system it is often better to delete old indices (which you may be able to restore from a snapshot later if available) than to lose new data. However, this decision will depend upon the architecture of your system and the queueing mechanisms you have available.
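If you decide to reduce the number of replicas on older indices, this can be done with a single dynamic settings update. A minimal sketch, assuming a hypothetical index pattern logs-2022.* whose replica count can safely be dropped to zero:
PUT logs-2022.*/_settings
{
  "index.number_of_replicas": 0
}
Bear in mind that with zero replicas the data on those indices is no longer redundant, so this is usually only acceptable for indices that are also covered by snapshots.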
Check the disk space on each node
You can see the space you have available on each node by running:
GET _nodes/stats/fs
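For a more compact, human-readable view, the _cat allocation API summarizes disk usage and shard counts per node:
GET _cat/allocation?v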
Check if the cluster is rebalancing
If the high disk watermark has been passed, then Elasticsearch should start relocating shards from that node to other nodes which are still below the low watermark. You can check to see if any rebalancing is going on by calling:
GET _cluster/health/
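In the cluster health response, the relocating_shards field shows how many shards are currently being moved. To see which specific shards are relocating, and between which nodes, the _cat shards API can be used, for example:
GET _cat/shards?v&h=index,shard,prirep,state,node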
If you think that your cluster should be rebalancing shards to other nodes but it is not, there are probably some other cluster allocation rules which are preventing this from happening. The most likely causes are:
- The other nodes are already above the low disk watermark
- There are cluster allocation rules which govern the distribution of shards between nodes and conflict with the rebalancing requirements (e.g. zone awareness allocation).
- There are already too many rebalancing operations in progress
- The other nodes already contain the primary or replica shards of the shards that could be rebalanced.
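To find out exactly why a particular shard is not being moved, the cluster allocation explain API reports which allocation deciders are blocking it. A minimal sketch (the index name and shard number below are placeholders):
GET _cluster/allocation/explain
{
  "index": "my-index",
  "shard": 0,
  "primary": false
}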
Check the cluster settings
You can see the settings you have applied with this command:
GET _cluster/settings
If they are not appropriate, you can modify them using a command such as below:
PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "85%",
    "cluster.routing.allocation.disk.watermark.high": "90%",
    "cluster.routing.allocation.disk.watermark.flood_stage": "95%",
    "cluster.info.update.interval": "1m"
  }
}
Note: Thresholds can be specified as either percentage or byte values, but percentages are more flexible and easier to maintain (in case different nodes have different disk sizes, as in hot/warm deployments).
How to prevent
There are various mechanisms that allow you to automatically delete stale data.
How to automatically delete stale data:
- Apply ILM (Index Lifecycle Management)
Using ILM you can get Elasticsearch to automatically delete an index once it reaches a given age (a policy sketch is shown after this list).
- Use date based indices
If your application uses date based indices, then it is easy to delete old indices using either a script, ILM or a tool such as Elasticsearch curator.
- Use snapshots to store data offline
It may be appropriate to store snapshotted data offline and restore it in the event that the archived data needs to be reviewed or studied.
- Automate / simplify process to add new data nodes
Use automation tools such as Terraform to automate the addition of new nodes to the cluster. If this is not possible, at the very least ensure you have a clearly documented process to create new nodes, add TLS certificates and configuration, and bring them into the Elasticsearch cluster in a short and predictable time frame.
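As an illustration of the ILM option above, here is a minimal policy sketch that deletes an index 30 days after creation (the policy name and retention period are placeholders, and the policy still needs to be attached to your indices via an index template or index settings):
PUT _ilm/policy/logs-retention
{
  "policy": {
    "phases": {
      "delete": {
        "min_age": "30d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}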
Overview
An Elasticsearch cluster consists of a number of servers (nodes) working together as one. Clustering is a technology which enables Elasticsearch to scale up to hundreds of nodes that together are able to store many terabytes of data and respond coherently to large numbers of requests at the same time.
Search or indexing requests will usually be load-balanced across the Elasticsearch data nodes, and the node that receives the request will relay requests to other nodes as necessary and coordinate the response back to the user.
Notes and good things to know
The key elements to clustering are:
Cluster State – Refers to information about which indices are in the cluster, their data mappings and other information that must be shared between all the nodes to ensure that all operations across the cluster are coherent.
Master Node – Each cluster must elect a single master node responsible for coordinating the cluster and ensuring that each node contains an up-to-date copy of the cluster state.
Cluster Formation – Elasticsearch requires a set of configurations to determine how the cluster is formed, which nodes can join the cluster, and how the nodes collectively elect a master node responsible for controlling the cluster state. These configurations are usually held in the elasticsearch.yml config file, environment variables on the node, or within the cluster state.
Node Roles – In small clusters it is common for all nodes to fill all roles; all nodes can store data, become master nodes or process ingestion pipelines. However, as the cluster grows, it is common to allocate specific roles to specific nodes in order to simplify configuration and to make operation more efficient. In particular, it is common to define a limited number of dedicated master nodes (see the configuration sketch after this list).
Replication – Data may be replicated across a number of data nodes. This means that if one node goes down, data is not lost. It also means that a search request can be dealt with by more than one node.
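As an example of dedicated node roles, a node can be restricted to the master role in elasticsearch.yml (this sketch uses the node.roles syntax available from Elasticsearch 7.9 onwards; older versions use the node.master / node.data / node.ingest booleans instead):
node.roles: [ master ]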
Common problems
Many Elasticsearch problems are caused by operations which place an excessive burden on the cluster because they require an excessive amount of information to be held and transmitted between the nodes as part of the cluster state. For example:
- Shards too small
- Too many fields (field explosion)
Problems may also be caused by inadequate configurations leading to situations where the Elasticsearch cluster is unable to safely elect a master node.
Backups
Because Elasticsearch is a clustered technology, it is not sufficient to have backups of each node’s data directory. This is because the backups will have been made at different times and so there may not be complete coherency between them. As such, the only way to backup an Elasticsearch cluster is through the use of snapshots, which contain the full picture of an index at any one time.
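A minimal sketch of registering a shared-filesystem snapshot repository and taking a snapshot (the repository name, path and snapshot name are placeholders; the location must also be listed under path.repo in elasticsearch.yml on every node):
PUT _snapshot/my_backup
{
  "type": "fs",
  "settings": {
    "location": "/mnt/es_backups"
  }
}
PUT _snapshot/my_backup/snapshot_1?wait_for_completion=true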
Cluster resilience
When designing an Elasticsearch cluster, it is important to think about cluster resilience. In particular – what happens when a single node goes down? And for larger clusters where several nodes may share common services such as a network or power supply – what happens if that network or power supply goes down? This is where it is useful to ensure that the master eligible nodes are spread across availability zones, and to use shard allocation awareness to ensure that shards are spread across different racks or availability zones in your data center.
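Shard allocation awareness is driven by custom node attributes. A minimal elasticsearch.yml sketch, assuming a hypothetical attribute named zone (each node declares its own value, and the cluster is told to spread shard copies across the values it sees):
node.attr.zone: zone-a
cluster.routing.allocation.awareness.attributes: zone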
Log Context
The log “High disk watermark [{}] exceeded on {}; shards will be relocated away from this node” comes from the class DiskThresholdMonitor.java.
We extracted the following from the Elasticsearch source code for those seeking in-depth context:
                entry, DiskThresholdDecider.this.rerouteInterval);
            }
        }
    }
    if (reroute) {
        logger.info("high disk watermark exceeded on one or more nodes, rerouting shards");
        // Execute an empty reroute, but don't block on the response
        client.admin().cluster().prepareReroute().execute();
    }
}
Run the Check-Up to get customized insights on your system.