This guide will help you check for common problems that cause the log "Cluster state applier task [{}] took [{}] above the warn threshold of {}" to appear. To understand the issues related to this log, read the explanation below about the following Elasticsearch concepts: cluster, task and threshold.
Overview
A task is an Elasticsearch operation; it can be any request performed on an Elasticsearch cluster, such as a delete-by-query request or a search request. Elasticsearch provides a dedicated Task API for task management, which covers various actions, from retrieving the status of currently running tasks to cancelling long-running ones.
Examples
Get all currently running tasks on all nodes of the cluster
Among other information, the response to the request below contains the IDs of all running tasks, which can be used to retrieve detailed information about a particular task.
GET _tasks
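By default the listing is compact. Adding the detailed parameter (at the cost of a more expensive response) includes a human-readable description of each task:

GET _tasks?detailed=true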
Get detailed information of a particular task
Where clQFAL_VRrmnlRyPsu_p8A:1132678759 is the ID of the task in the request below:
GET _tasks/clQFAL_VRrmnlRyPsu_p8A:1132678759
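For tasks that support it (such as reindex or update-by-query), you can also block until the task finishes by adding wait_for_completion, with an optional timeout:

GET _tasks/clQFAL_VRrmnlRyPsu_p8A:1132678759?wait_for_completion=true&timeout=10s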
Get all tasks currently running on particular nodes
GET _tasks?nodes=nodeId1,nodeId2
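The nodes filter can be combined with other parameters. For example, to list only search-related tasks on those nodes:

GET _tasks?nodes=nodeId1,nodeId2&actions=*search*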
Cancel a task
Where clQFAL_VRrmnlRyPsu_p8A:1132678759 is the ID of the task in the request below:
POST /_tasks/clQFAL_VRrmnlRyPsu_p8A:1132678759/_cancel?pretty
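Tasks can also be cancelled in bulk by node and action rather than by ID. For example, to cancel all reindex tasks on the given nodes:

POST _tasks/_cancel?nodes=nodeId1,nodeId2&actions=*reindex*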
Notes
- The Task API is most useful when you want to investigate a spike in resource utilization in the cluster, or when you want to cancel a long-running operation; see the example below.
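When investigating such a spike, grouping tasks by their parent task makes it easier to see which top-level request spawned the work:

GET _tasks?detailed=true&group_by=parents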
Overview
Elasticsearch uses several parameters to manage hard disk storage across the cluster.
What it’s used for
- Elasticsearch will actively try to relocate shards away from nodes which exceed the disk watermark high threshold.
- Elasticsearch will NOT allocate new shards or relocate shards on to nodes which exceed the disk watermark low threshold.
- Elasticsearch will prevent all writes to an index which has any shard on a node that exceeds the disk.watermark.flood_stage threshold.
- The info update interval (cluster.info.update.interval) determines how often Elasticsearch re-checks disk usage on each node.
Examples
PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "85%",
    "cluster.routing.allocation.disk.watermark.high": "90%",
    "cluster.routing.allocation.disk.watermark.flood_stage": "95%",
    "cluster.info.update.interval": "1m"
  }
}
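To see how close each node currently is to these thresholds, you can check per-node disk usage with the cat allocation API:

GET _cat/allocation?v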
Notes and good things to know
- You can use absolute values (100gb) or percentages (90%), but you cannot mix the two on the same cluster.
- In general, it is recommended to use percentages, since these keep working if the disks are resized.
- You can put the cluster settings in the elasticsearch.yml file of each node, but it is recommended to use the PUT _cluster/settings API because it is easier to manage and ensures that the settings are consistent across the cluster.
- Elasticsearch comes with sensible defaults for these settings, so think twice before modifying them. If you find you are spending a lot of time fine-tuning these settings, then it is probably time to invest in new disk space.
- In the event of the flood_stage threshold being exceeded, once you delete data Elasticsearch should detect automatically that the block can be released (bearing in mind the update interval, which could be, for instance, a minute). However, if you want to accelerate this process, you can unblock an index manually with the following call:
PUT /my_index/_settings
{
  "index.blocks.read_only_allow_delete": null
}
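To find out which indices currently carry this block, you can query the setting across all indices (only indices where it is set will return a value):

GET _all/_settings/index.blocks.read_only_allow_delete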
Common problems
Inappropriate cluster settings (for example, a disk watermark.low that is set too low) can make it impossible for Elasticsearch to allocate shards on the cluster. In particular, bear in mind that these parameters work in combination with other cluster settings (for example, shard allocation awareness) which place further constraints on how Elasticsearch can allocate shards.
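When shards cannot be allocated, the cluster allocation explain API reports which allocation deciders, including the disk threshold decider, are blocking the allocation:

GET _cluster/allocation/explain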

Log Context
The log "Cluster state applier task [{}] took [{}] above the warn threshold of {}" is emitted from the class ClusterApplierService.java.
We extracted the following from the Elasticsearch source code for those seeking in-depth context:
protected void warnAboutSlowTaskIfNeeded(TimeValue executionTime, String source) {
    if (executionTime.getMillis() > slowTaskLoggingThreshold.getMillis()) {
        logger.warn("cluster state applier task [{}] took [{}] above the warn threshold of {}",
            source, executionTime, slowTaskLoggingThreshold);
    }
}
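The slowTaskLoggingThreshold compared against here is controlled by the dynamic cluster setting cluster.service.slow_task_logging_threshold (30 seconds by default in recent versions). If you understand why applier tasks are slow and want to reduce log noise, the threshold can be raised, though it is usually better to address the underlying slowness:

PUT _cluster/settings
{
  "persistent": {
    "cluster.service.slow_task_logging_threshold": "60s"
  }
}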