Elasticsearch Status Yellow


Opster Team

July 2020, Version: 1.7-8.0

Before you begin reading the explanation below, try running the free ES Health Check-Up to get actionable recommendations that can improve Elasticsearch performance and prevent serious incidents. It takes just 2 minutes to complete, and you can check your threadpools, memory, snapshots and many more settings.

What does Yellow Status mean?

Yellow status indicates that one or more of the replica shards on the Elasticsearch cluster are not allocated to a node. No need to panic! There are several reasons why a yellow status can be perfectly normal, and in many cases Elasticsearch will recover to green by itself, so the worst thing you can do is to start tweaking things without knowing exactly what the cause is. While the status is yellow, search and index operations are still available.

How to resolve

There are several reasons why your Elasticsearch cluster could indicate a yellow status.

1. You only have 1 node

(Or number_of_replicas is greater than or equal to the number of nodes)

Elasticsearch will never assign a replica to the same node as the primary shard, so if you only have one node it is perfectly normal and expected for your cluster to indicate yellow.  If you feel better about it being green, then change the number of replicas on each index to be 0.

PUT /my-index/_settings
{
    "index" : {
        "number_of_replicas" : 0
    }
}
Similarly if the number of replicas is equal to or exceeds the number of nodes, then it will not be possible to allocate one or more of the shards for the same reason.
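
One quick way to check whether this is the case is to compare the number of nodes with each index's replica setting. The index name my-index below is just an example:

GET _cat/nodes?v

GET /my-index/_settings?filter_path=*.settings.index.number_of_replicas

For every replica to be assignable, the cluster needs at least number_of_replicas + 1 data nodes.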

2. You have restarted a node

If you have temporarily restarted a node, then normally no action is necessary: Elasticsearch will recover the shards automatically and return to a green status. You can monitor this process by calling:

GET _cluster/health 

You will see the number of unassigned shards progressively decrease until green status is reached.
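
The fields to watch in the response are status, unassigned_shards and delayed_unassigned_shards. An illustrative (not actual) response fragment might look like:

{
  "status" : "yellow",
  "number_of_nodes" : 2,
  "unassigned_shards" : 3,
  "delayed_unassigned_shards" : 3,
  "active_shards_percent_as_number" : 85.0
}

You can also block until recovery completes with GET _cluster/health?wait_for_status=green&timeout=50s, which returns when the status turns green or the timeout expires.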

However if you see that this process is occurring repeatedly, then some other issue is causing the cluster to become unstable and requires investigation.

3. Node Crashes

If nodes become overwhelmed or stop operating for any reason, the first symptom will probably be that the cluster status turns yellow or red as shards fail to sync. Nodes can also disconnect as a result of long GC pauses, caused by "out of memory" errors or high memory demand from heavy searches.
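
A useful first check for memory pressure is the nodes stats API, which reports heap usage and garbage collection statistics per node (the filter_path shown just trims the response):

GET _nodes/stats/jvm?filter_path=nodes.*.name,nodes.*.jvm.mem.heap_used_percent,nodes.*.jvm.gc

Consistently high heap_used_percent (roughly above 85%) or rapidly growing GC collection times are signs that a node is struggling.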

4. Networking Issues

If nodes are not able to reach each other reliably, then the nodes will lose contact with one another and shards will get out of sync resulting in a red or yellow status.  You may be able to detect this situation by finding repeated messages in the logs about nodes leaving or rejoining the cluster.

5. Disk Space Issues

Insufficient disk space may prevent Elasticsearch from allocating a shard to a node. Typically this will happen when disk utilisation goes above the low disk watermark, set via cluster.routing.allocation.disk.watermark.low (85% by default).
Here the solution requires deleting indices, increasing disk size, or adding a new node to the cluster.  Of course you can also temporarily increase the watermark to keep things running while you decide what to do, but just putting off the decision until later is not the best course of action.

PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "85%",
    "cluster.info.update.interval": "1m"
  }
}
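
To see how close each node is to the watermark, you can check per-node disk usage with:

GET _cat/allocation?v

The disk.percent column shows current utilisation per node, alongside the number of shards allocated to it.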

You may also see the following message:

cannot allocate because allocation is not permitted to any of the nodes

Typically this happens when a node's disk utilisation goes above the flood stage watermark, creating a write block on the cluster. As above, you must delete data or add a new node. You can buy time with:

PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.flood_stage": "97%",
    "cluster.info.update.interval": "1m"
  }
}
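
Note that once disk space has been freed, Elasticsearch versions before 7.4 do not remove the flood-stage write block automatically; it has to be cleared per index (my-index is an example name):

PUT /my-index/_settings
{
  "index.blocks.read_only_allow_delete": null
}

From version 7.4 onwards, the block is released automatically when disk usage falls below the high watermark.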

6. Node Allocation Awareness

Sometimes there may be specific issues with the allocation rules that have been created on the cluster which prevent it from allocating shards. For example, it is possible to create rules that require that a shard's replicas be spread over a specific set of nodes ("allocation awareness"), such as AWS availability zones or different host machines in a Kubernetes setup. On occasion, these rules may conflict with other rules (such as disk space) and prevent shards from being allocated.
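
As a sketch of how such rules are set up: zone awareness is typically configured by tagging each node with a custom attribute in elasticsearch.yml and then listing that attribute in the cluster settings (the attribute name zone and the value us-east-1a are example values):

node.attr.zone: us-east-1a

PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.awareness.attributes": "zone"
  }
}

With this in place, Elasticsearch tries not to allocate multiple copies of a shard to the same zone, which is why such rules can leave replicas unassigned when they conflict with disk-based rules.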

Find the Cause of Non-Allocation

You can use the cluster allocation explain API:

GET /_cluster/allocation/explain

Running the above command returns an explanation of the allocation status of the first unassigned shard found.

{
  "index" : "my_index",
  "shard" : 0,
  "primary" : false,
  "current_state" : "unassigned",
  "unassigned_info" : {
    "reason" : "NODE_LEFT",
    "at" : "2017-01-04T18:53:59.498Z",
    "details" : "node_left[G92ZwuuaRY-9n8_tc-IzEg]",
    "last_allocation_status" : "no_attempt"
  },
  "can_allocate" : "allocation_delayed",
  "allocate_explanation" : "cannot allocate because the cluster is still waiting 59.8s for the departed node holding a replica to rejoin, despite being allowed to allocate the shard to at least one other node",
  "configured_delay" : "1m",
  "configured_delay_in_millis" : 60000,
  "remaining_delay" : "59.8s",
  "remaining_delay_in_millis" : 59824,
  "node_allocation_decisions" : [
    {
      "node_id" : "pmnHu_ooQWCPEFobZGbpWw",
      "node_name" : "node_t2",
      "transport_address" : "",
      "node_decision" : "yes"
    },
    {
      "node_id" : "3sULLVJrRneSg0EfBB-2Ew",
      "node_name" : "node_t0",
      "transport_address" : "",
      "node_decision" : "no",
      "store" : {
        "matching_size" : "4.2kb",
        "matching_size_in_bytes" : 4325
      },
      "deciders" : [
        {
          "decider" : "same_shard",
          "decision" : "NO",
          "explanation" : "the shard cannot be allocated to the same node on which a copy of the shard already exists [[my_index][0], node[3sULLVJrRneSg0EfBB-2Ew], [P], s[STARTED], a[id=eV9P8BN1QPqRc3B4PLx6cg]]"
        }
      ]
    }
  ]
}

The above API returns:

"unassigned_info" => The reason why the shard became unassigned.

"node_allocation_decisions" => A list of explanations for each node, indicating whether it could potentially receive the shard.

"deciders" => The decision and the explanation of that decision.
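
If the explanation shows that a shard has hit the allocation retry limit (controlled by index.allocation.max_retries, 5 attempts by default), you can ask the cluster to retry allocation once the underlying cause has been fixed:

POST /_cluster/reroute?retry_failed=true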

About Opster

Opster detects, prevents, optimizes and automates everything needed to run mission-critical Elasticsearch.
