Failing snapshot of shard on closed node – Elasticsearch Log Diagnostic

Navigate directly to one of the sections in this page:

Troubleshooting Background – start here to get the full picture
Log Context – useful for experts
Related Issues – selected resources from the community
About Opster – offering a different approach to troubleshooting Elasticsearch



Troubleshooting background

To troubleshoot the Elasticsearch log “Failing snapshot of shard on closed node”, it’s important to understand the common problems related to the Elasticsearch concepts involved: node, shard and snapshot. See the detailed explanations below, complete with common problems, examples and useful tips.

Nodes in Elasticsearch

What it is

Simply explained, a node is a single server that is part of a cluster. Each node is assigned one or more roles, which describe its responsibilities and operations. Data nodes store the data and participate in the cluster’s indexing and search capabilities, while master nodes are responsible for managing the cluster’s activities and storing the cluster state, including the metadata.

While it’s possible to run several node instances of Elasticsearch on the same hardware, it’s considered a best practice to limit a server to a single running instance of Elasticsearch.

Nodes connect to each other and form a cluster by using a discovery method. 
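
For example, with the default discovery mechanism each node is given a list of seed hosts to contact on startup. A minimal sketch in elasticsearch.yml, assuming Elasticsearch 7.x or later and three hypothetical hosts:

# elasticsearch.yml – hypothetical host names for illustration
discovery.seed_hosts:
  - es-node-1.example.com
  - es-node-2.example.com
  - es-node-3.example.com
# Only needed once, when bootstrapping a brand-new cluster:
cluster.initial_master_nodes:
  - es-node-1
  - es-node-2
  - es-node-3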

Roles
Master node

Master nodes are in charge of cluster-wide settings and changes – deleting or creating indices and fields, adding or removing nodes, and allocating shards to nodes. Each cluster has a single master node, elected from the master-eligible nodes using a distributed consensus algorithm; a new master is elected if the current master node fails.
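
To see which node is the currently elected master and which nodes are master-eligible, you can use the cat nodes API (name, node.role and master are standard _cat/nodes columns):

GET _cat/nodes?v&h=name,node.role,master

In the output, the elected master is marked with * in the master column, and master-eligible nodes carry an m in their role string.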

Coordinator Node (aka client node)

A coordinator node is a node that holds no configured role: it doesn’t hold data, is not part of the master-eligible group and does not execute ingest pipelines. A coordinator node serves incoming search requests and acts as the query coordinator, running the query and fetch phases and sending requests to every node that holds a queried shard. It also distributes bulk indexing operations and routes queries to shard copies based on the nodes’ responsiveness.
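
To run a dedicated coordinating-only node, you strip all roles from it. A minimal sketch, assuming Elasticsearch 7.9 or later, where roles are configured via node.roles:

# elasticsearch.yml – a coordinating-only node holds no roles at all
node.roles: [ ]

On older versions the same effect is achieved by setting node.master, node.data and node.ingest to false.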

Shards in Elasticsearch

What it is

Data in an Elasticsearch index can grow to massive proportions. In order to keep it manageable, it is split into a number of shards. Each Elasticsearch shard is an Apache Lucene index, with each individual Lucene index containing a subset of the documents in the Elasticsearch index. Splitting indices in this way keeps resource usage under control. An Apache Lucene index has a limit of 2,147,483,519 documents.

Examples

The number of shards is set when an index is created, and it cannot be changed later without reindexing the data. When creating an index, you can set the number of shards and replicas as properties of the index:

PUT /sensor
{
    "settings" : {
        "index" : {
            "number_of_shards" : 6,
            "number_of_replicas" : 2
        }
    }
}

The ideal number of shards should be determined based on the amount of data in an index. Generally, an optimal shard should hold 30-50GB of data. For example, if you expect to accumulate around 300GB of application logs in a day, having around 10 shards in that index would be reasonable.

During their lifetime, shards can go through a number of states, including:

  • Initializing: An initial state before the shard can be used.
  • Started: A state in which the shard is active and can receive requests.
  • Relocating: A state that occurs when shards are in the process of being moved to a different node. This may be necessary under certain conditions, for example, when the node they are on is running out of disk space.
  • Unassigned: The state of a shard that has failed to be assigned. A reason is provided when this happens, for example, if the node hosting the shard is no longer in the cluster (NODE_LEFT) or due to restoring into a closed index (EXISTING_INDEX_RESTORED).

In order to view all shards, their states, and other metadata, use the following request:

GET _cat/shards

To view shards for a specific index, append the name of the index to the URL, for example sensor:

GET _cat/shards/sensor

This command produces output, such as in the following example. By default, the columns shown include the name of the index, the name (i.e. number) of the shard, whether it is a primary shard or a replica, its state, the number of documents, the size on disk, the IP address, and the node ID.

sensor 5 p STARTED    0  283b 127.0.0.1 ziap
sensor 5 r UNASSIGNED                   
sensor 2 p STARTED    1 3.7kb 127.0.0.1 ziap
sensor 2 r UNASSIGNED                   
sensor 3 p STARTED    3 7.2kb 127.0.0.1 ziap
sensor 3 r UNASSIGNED                   
sensor 1 p STARTED    1 3.7kb 127.0.0.1 ziap
sensor 1 r UNASSIGNED                   
sensor 4 p STARTED    2 3.8kb 127.0.0.1 ziap
sensor 4 r UNASSIGNED                   
sensor 0 p STARTED    0  283b 127.0.0.1 ziap
sensor 0 r UNASSIGNED
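
For any UNASSIGNED shard, such as the replicas above (unassigned here because the example cluster has a single node), you can ask the cluster to explain its allocation decision. A sketch against the sensor index from the example:

GET _cluster/allocation/explain
{
  "index": "sensor",
  "shard": 0,
  "primary": false
}
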
Notes and good things to know
  • Having shards that are too large is simply inefficient. Moving huge indices across machines is a time- and labor-intensive process: Lucene merges take longer to complete and require greater resources, and moving shards across nodes for rebalancing also takes longer, extending recovery time. By splitting the data and spreading it across a number of machines, it can be kept in manageable chunks, minimizing these risks.
  • Having the right number of shards is important for performance, so it is wise to plan in advance. When queries are run across different shards in parallel, they execute faster than against an index composed of a single shard, but only if each shard is located on a different node and there are sufficient nodes in the cluster. At the same time, shards consume memory and disk space, both in terms of indexed data and cluster metadata. Having too many shards can slow down queries, indexing requests, and management operations, so maintaining the right balance is critical (see the request below).
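
To keep an eye on that balance, you can sort the shard listing by store size; v, h and s are standard cat API parameters:

GET _cat/shards?v&h=index,shard,prirep,state,docs,store&s=store:desc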

Snapshots in Elasticsearch

What it is

An Elasticsearch snapshot is a backup of an index taken from a running cluster. Snapshots are taken incrementally: when creating a snapshot of an index, Elasticsearch will not copy any data that is already stored in the repository as part of an earlier, completed snapshot of that index (provided the data has not been written to since). You can therefore take snapshots often and efficiently.

You can restore snapshots into a running cluster via the restore API. Snapshots can only be restored to versions of Elasticsearch that can read the indices, so check version compatibility before you restore; you can’t restore an index to a cluster that is more than one major version above the version the index was created in.

The following repository types are supported:

  • File System Location

  • S3 Object Storage 

  • HDFS

  • Azure and Google Cloud Storage

Examples: 

An example of using an S3 repository for Elasticsearch (named backup, to match the snapshot examples below):

PUT _snapshot/backup
{
    "type": "s3",
    "settings": {
      "bucket": "elastic",
      "endpoint": "10.3.10.10:9000",
      "protocol": "http"
    }
}

You will also need to set the S3 access key and secret key in the Elasticsearch keystore:

bin/elasticsearch-keystore add s3.client.default.access_key
bin/elasticsearch-keystore add s3.client.default.secret_key
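
If you prefer a shared file system over S3, the repository definition is analogous. A minimal sketch, assuming the hypothetical mount point /mount/backups is registered under path.repo in elasticsearch.yml on every node:

PUT _snapshot/backup_fs
{
    "type": "fs",
    "settings": {
        "location": "/mount/backups"
    }
}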

Taking a snapshot:
Once the repo is set, taking a snapshot is just an API call.

PUT /_snapshot/backup/my_snapshot-01-10-2019

Where backup is the name of the snapshot repository and my_snapshot-01-10-2019 is the name of the snapshot. The above example takes a snapshot of all indices. To take a snapshot of specific indices, provide their names:

PUT /_snapshot/backup/my_snapshot-01-10-2019
{
  "indices": "my_index_1,my_index_2"
}
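
By default the snapshot request returns as soon as the snapshot is initialized and it continues in the background; passing the standard wait_for_completion parameter makes the call block until the snapshot finishes:

PUT /_snapshot/backup/my_snapshot-01-10-2019?wait_for_completion=true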

Restoring a Snapshot:
Restoring from a snapshot is also an API call:

POST /_snapshot/backup/my_snapshot-01-10-2019/_restore
{
  "indices": "index_1,index_2"
}

This will restore index_1 and index_2 from the snapshot my_snapshot-01-10-2019 in the backup repository.

Notes and good things to know:
  • A snapshot repository needs to be set up before you can take a snapshot, and you will need to install the S3 repository plugin if you plan to use a repository with S3 as backend storage.

sudo bin/elasticsearch-plugin install repository-s3
  • You can use the curator_cli tool to automate taking snapshots, scheduled via cron, Jenkins, or a Kubernetes CronJob; a plain cron entry calling the snapshot API works as well, as sketched below.
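
A hypothetical crontab line taking a daily snapshot at 01:00 by calling the snapshot API directly (note that % must be escaped in crontab):

# m h dom mon dow  command
0 1 * * *  curl -s -X PUT "http://localhost:9200/_snapshot/backup/daily-$(date +\%Y.\%m.\%d)"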

  • It is better to use Elasticsearch snapshots instead of disk backups/snapshots.

  • If you are going to restore an index that already exists (because it may have incomplete data or have gone corrupt), the restore will fail unless you first close or delete the existing index, as shown below.
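
For example, to restore index_1 over an existing copy, close it first and then run the restore; both are standard API calls:

POST /index_1/_close

POST /_snapshot/backup/my_snapshot-01-10-2019/_restore
{
  "indices": "index_1"
}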

  • The snapshot and restore mechanism can also be used to copy data from one cluster to another.

  • If you don’t have S3 storage, you can run MinIO with an NFS backend to create an S3 equivalent for your cluster snapshots.

Common Problems 
  • When taking snapshots to remote repositories over low bandwidth, or when the repository storage has low write throughput, the snapshot may fail due to timeouts, resulting in a partial snapshot (see how to check for this below).
  • Retrying the snapshot operation (perhaps several times) will eventually produce a complete snapshot, and each retry is relatively fast compared to the initial attempt, since only the previously failed shards are backed up again. It’s better to keep the snapshot repository on the same local network as Elasticsearch, or to configure/design the repository for high write throughput, so that you don’t have to deal with partial snapshots.
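
To check whether a snapshot completed fully, fetch it from the repository and look at its state; a partial run reports "state": "PARTIAL" together with the per-shard failures:

GET /_snapshot/backup/my_snapshot-01-10-2019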

  • The snapshot operation will fail if an index is missing. Setting the ignore_unavailable option to true causes indices that do not exist to be ignored during the snapshot operation, as in the following request.
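
A sketch of a snapshot request that skips missing indices:

PUT /_snapshot/backup/my_snapshot-01-10-2019
{
  "indices": "my_index_1,my_index_2",
  "ignore_unavailable": true
}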

  • If you are using an open source security tool such as SearchGuard, you will need to configure the Elasticsearch snapshot restore settings on the cluster before you can restore any snapshot.

  • In elasticsearch.yml

searchguard.enable_snapshot_restore_privilege: true


Log Context

The log “Failing snapshot of shard on closed node” is emitted from the class SnapshotsService.java.
To help get the right context about this log, we have extracted the following from the Elasticsearch source code:

                                 if (nodes.nodeExists(shardStatus.nodeId())) {
                                    shards.put(shardId, shardStatus);
                                } else {
                                    // TODO: Restart snapshot on another node?
                                    snapshotChanged = true;
                                    logger.warn("failing snapshot of shard [{}] on closed node [{}]",
                                        shardId, shardStatus.nodeId());
                                    shards.put(shardId, new ShardSnapshotStatus(shardStatus.nodeId(), State.FAILED, "node shutdown"));
                                }
                            } else {
                                shards.put(shardId, shardStatus);






To help troubleshoot related issues, we have gathered selected answers from Stack Overflow and Elastic Discuss, as well as issues from GitHub. Please review the following for further information:

Github Issue Number 35229
github.com/elastic/elasticsearch/issues/35229

 

Snapshot Failed With Partial State
discuss.elastic.co/t/snapshot-failed-with-partial-state/160613

 


About Opster

Incorporating deep knowledge and a broad history of Elasticsearch issues, Opster’s solution identifies and predicts root causes of Elasticsearch problems, provides recommendations, and can automatically perform various actions to manage, troubleshoot and prevent issues.

We are constantly updating our analysis of Elasticsearch logs, errors, and exceptions, sharing best practices and providing troubleshooting guides.

Learn more: Glossary | Blog | Troubleshooting guides | Error Repository

Need help with any Elasticsearch issue? Contact Opster.
