Primary shards were not active [shards={}; active={}] – How to solve this Elasticsearch exception

Opster Team

August-23, Version: 7.13-8.9

Briefly, this error occurs when Elasticsearch cannot find any active primary shards. This could be due to a number of reasons such as network issues, disk space problems, or node failures. To resolve this issue, you can try restarting the Elasticsearch cluster, checking for any network issues, ensuring there is enough disk space, or checking the health of the nodes. If the problem persists, you may need to reindex your data.

Before you dig into reading this guide, have you tried asking OpsGPT what this log means? You’ll receive a customized analysis of your log.

Try OpsGPT now for step-by-step guidance and tailored insights into your Elasticsearch/OpenSearch operation.


For a complete solution to your search operation, try AutoOps for Elasticsearch & OpenSearch for free. With AutoOps and Opster’s proactive support, you don’t have to worry about your search operation – we take charge of it. Get improved performance & stability with less hardware.

This guide will help you check for common problems that cause the log “Primary shards were not active [shards={}; active={}]” to appear. To understand the issues related to this log, read the explanation below about the following Elasticsearch concepts: shards, plugin.

The “primary shards not active” error message indicates that the cluster has not (yet) recovered from a red status: one or more of the indices you are trying to update or query do not have their primary shard allocated.

What this error means

This error message indicates that a primary shard has been lost or has not been allocated on the cluster. However, users should not panic and start firing off commands without finding out what is really going on first, because Elasticsearch has mechanisms in place to remedy the situation automatically.

Why this error occurs

There can be several reasons why this error would appear:

1. The cluster is in the recovery process, which is not yet complete

If you carry out the command: 

GET _cluster/health

This shows the cluster status and lets you check whether any shards are in the “initializing” state. If so, you can wait and the cluster will recover on its own.
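
If the cluster is still recovering, the response will typically show a non-zero “initializing_shards” count. A trimmed example response might look like this (the field names are real; the values are purely illustrative):

{
  "cluster_name" : "my_cluster",
  "status" : "red",
  "active_primary_shards" : 12,
  "initializing_shards" : 3,
  "unassigned_shards" : 5,
  "active_shards_percent_as_number" : 60.0
}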

2. Disk space issues

Insufficient disk space may prevent Elasticsearch from allocating a shard to a node, and this could be the reason why shards are not active. Typically this will happen when disk utilization goes above the setting below (by default 85%).

cluster.routing.allocation.disk.watermark.low
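
To confirm that disk space is the problem, check per-node disk utilization, for example with the cat allocation API (the ?v parameter simply adds column headers):

GET _cat/allocation?v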

Here, the solution would require deleting indices, increasing disk size, or adding a new node to the cluster. Users can also temporarily increase the watermark to keep things running while deciding what to do, but doing nothing is not the best course of action.

PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "85%",
    "cluster.info.update.interval": "1m"
  }
}
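
Once enough disk space has been freed, the temporary override can be removed by resetting the setting to null, which restores the default value:

PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": null
  }
}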

You can also get: 

cannot allocate because allocation is not permitted to any of the nodes

Typically this happens when node disk utilization goes above the flood stage watermark, which places a write block on the affected indices. Just like above, users must delete data or add a new node. The following can be used to buy some time:

PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.flood_stage": "97%",
    "cluster.info.update.interval": "1m"
  }
}
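
In the versions covered by this guide, the write block is released automatically once disk usage drops back below the high watermark, but it can also be cleared manually, for example across all indices:

PUT */_settings
{
  "index.blocks.read_only_allow_delete": null
}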

For more information, read this guide.

3. Node allocation awareness

Sometimes there may be specific issues with the allocation rules created on the cluster, which prevent the cluster from allocating shards. For example, it is possible to create rules that require that a shard’s replicas be spread over a specific set of nodes (“allocation awareness”), such as AWS availability zones or different host machines in a Kubernetes setup. On occasion, these rules conflict with other rules (such as disk-based limits) and prevent shards from being allocated.
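
For reference, allocation awareness is typically enabled through a custom node attribute. The sketch below assumes each node has a node.attr.zone value configured in elasticsearch.yml:

PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.awareness.attributes": "zone"
  }
}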

For further information about Node Allocation Awareness, read this guide.

Find the cause of non-allocation

Use the cluster allocation explain API:

GET /_cluster/allocation/explain

Running the above command returns an explanation of the allocation status of the first unassigned shard it finds, for example:

{
  "index" : "my_index",
  "shard" : 0,
  "primary" : false,
  "current_state" : "unassigned",
  "unassigned_info" : {
    "reason" : "NODE_LEFT",
    "at" : "2017-01-04T18:53:59.498Z",
    "details" : "node_left[G92ZwuuaRY-9n8_tc-IzEg]",
    "last_allocation_status" : "no_attempt"
  },
  "can_allocate" : "allocation_delayed",
  "allocate_explanation" : "cannot allocate because the cluster is still waiting 59.8s for the departed node holding a replica to rejoin, despite being allowed to allocate the shard to at least one other node",
  "configured_delay" : "1m",                      
  "configured_delay_in_millis" : 60000,
  "remaining_delay" : "59.8s",                    
  "remaining_delay_in_millis" : 59824,
  "node_allocation_decisions" : [
    {
      "node_id" : "pmnHu_ooQWCPEFobZGbpWw",
      "node_name" : "node_t2",
      "transport_address" : "127.0.0.1:9402",
      "node_decision" : "yes"
    },
    {
      "node_id" : "3sULLVJrRneSg0EfBB-2Ew",
      "node_name" : "node_t0",
      "transport_address" : "127.0.0.1:9400",
      "node_decision" : "no",
      "store" : {                                 
        "matching_size" : "4.2kb",
        "matching_size_in_bytes" : 4325
      },
      "deciders" : [
        {
          "decider" : "same_shard",
          "decision" : "NO",
          "explanation" : "the shard cannot be allocated to the same node on which a copy of the shard already exists [[my_index][0], node[3sULLVJrRneSg0EfBB-2Ew], [P], s[STARTED], a[id=eV9P8BN1QPqRc3B4PLx6cg]]"
        }
      ]
    }
  ]
}

The above API returns:

“unassigned_info” => the reason why the shard became unassigned.
“node_allocation_decisions” => a list of explanations for each node, stating whether it could potentially receive the shard.
“deciders” => each decider’s decision and the explanation behind it.
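
To examine a specific shard rather than the first unassigned one the API finds, the explain API also accepts a request body; in this sketch the index name and shard number are placeholders:

GET /_cluster/allocation/explain
{
  "index": "my_index",
  "shard": 0,
  "primary": true
}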

Log Context

The log “Primary shards were not active [shards={}; active={}]” comes from the class GetGlobalCheckpointsAction.java. We extracted the following from the Elasticsearch source code for those seeking in-depth context:

handleIndexNotReady(state, request, listener);
} else {
    int active = routingTable.primaryShardsActive();
    int total = indexMetadata.getNumberOfShards();
    listener.onFailure(
        new UnavailableShardsException(null, "Primary shards were not active [shards={}; active={}]", total, active)
    );
}
}
}

 
