Briefly, this error occurs when Elasticsearch encounters an issue during the recovery process of a shard, causing it to fail. This could be due to a variety of reasons such as disk space issues, network connectivity problems, or corruption of the shard data. To resolve this issue, you can try freeing up disk space, checking network connectivity, or restoring the shard from a backup. If the shard is not critical, you can also consider deleting it and reindexing the data.
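As a first diagnostic step, the cluster allocation explain API can report why a particular shard failed to recover, and once the underlying cause (such as a full disk or a network outage) has been addressed, a reroute with retry_failed asks Elasticsearch to re-attempt allocations that previously failed. A minimal sketch, where my_index is a placeholder index name:

GET _cluster/allocation/explain
{
  "index": "my_index",
  "shard": 0,
  "primary": true
}

POST _cluster/reroute?retry_failed=true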
This guide will help you check for common problems that cause the log "unexpected error during recovery [{}]; failing shard" to appear. To understand the issues related to this log, read the explanation below about the following Elasticsearch concepts: indices, recovery, shard.
Overview
In Elasticsearch, recovery refers to the process of recovering an index or shard when something goes wrong. There are many ways to recover an index or shard, such as by re-indexing the data from a backup / failover cluster to the current one, or by restoring from an Elasticsearch snapshot. Alternatively, Elasticsearch performs recoveries automatically, such as when a node restarts or disconnects and connects again. There is an API to check the updated status of index / shard recoveries.
GET /<index>/_recovery
GET /_recovery
In summary, recovery can happen in the following scenarios:
- Node startup or failure (local store recovery)
- Replication of primary shards to replica shards
- Relocation of a shard to a different node in the same cluster
- Restoring a snapshot
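For the snapshot case, recovery starts when a snapshot is restored. A minimal sketch, assuming a registered repository named my_repository and a snapshot named my_snapshot (both placeholder names):

POST _snapshot/my_repository/my_snapshot/_restore
{
  "indices": "my_index"
}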
Examples
Getting recovery information about several indices:
GET my_index1,my_index2/_recovery
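The response can be made easier to read by adding the human flag, and detailed=true additionally lists the files being copied by any recovery that is still in progress:

GET my_index1,my_index2/_recovery?human&detailed=true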
Notes and good things to know
- When a node is disconnected from the cluster, all of its shards go to an unassigned state. After a certain amount of time, the shards will be allocated to other nodes. The following setting determines how many concurrent shard recoveries are allowed per node:
PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.node_concurrent_recoveries": 3
  }
}
- You can also control when to start recovery after a node disconnects. This is useful if the node just restarts, for example, because you may not want to initiate any recovery for such transient events.
PUT _all/_settings
{
  "settings": {
    "index.unassigned.node_left.delayed_timeout": "6m"
  }
}
- Elasticsearch limits the bandwidth allocated to recovery in order to avoid overloading the cluster. This setting can be updated to make recovery faster or slower, depending on your requirements:
PUT _cluster/settings
{
  "transient": {
    "indices.recovery.max_bytes_per_sec": "100mb"
  }
}
Overview
Data in an Elasticsearch index can grow to massive proportions. In order to keep it manageable, it is split into a number of shards. Each Elasticsearch shard is an Apache Lucene index, with each individual Lucene index containing a subset of the documents in the Elasticsearch index. Splitting indices in this way keeps resource usage under control. An Apache Lucene index has a limit of 2,147,483,519 documents.
Examples
The number of shards is set when an index is created, and this number cannot be changed later without reindexing the data. When creating an index, you can set the number of shards and replicas as properties of the index using:
PUT /sensor
{
  "settings": {
    "index": {
      "number_of_shards": 6,
      "number_of_replicas": 2
    }
  }
}
The ideal number of shards should be determined based on the amount of data in an index. Generally, an optimal shard should hold 30-50GB of data. For example, if you expect to accumulate around 300GB of application logs in a day, having around 10 shards in that index would be reasonable.
During their lifetime, shards can go through a number of states, including:
- Initializing: An initial state before the shard can be used.
- Started: A state in which the shard is active and can receive requests.
- Relocating: A state that occurs when shards are in the process of being moved to a different node. This may be necessary under certain conditions, such as when the node they are on is running out of disk space.
- Unassigned: The state of a shard that has failed to be assigned. A reason is provided when this happens. For example, if the node hosting the shard is no longer in the cluster (NODE_LEFT) or due to restoring into a closed index (EXISTING_INDEX_RESTORED).
In order to view all shards, their states, and other metadata, use the following request:
GET _cat/shards
To view shards for a specific index, append the name of the index to the URL; for example, for the sensor index:
GET _cat/shards/sensor
This command produces output, such as in the following example. By default, the columns shown include the name of the index, the name (i.e. number) of the shard, whether it is a primary shard or a replica, its state, the number of documents, the size on disk, the IP address, and the node ID.
sensor 5 p STARTED    0 283b  127.0.0.1 ziap
sensor 5 r UNASSIGNED
sensor 2 p STARTED    1 3.7kb 127.0.0.1 ziap
sensor 2 r UNASSIGNED
sensor 3 p STARTED    3 7.2kb 127.0.0.1 ziap
sensor 3 r UNASSIGNED
sensor 1 p STARTED    1 3.7kb 127.0.0.1 ziap
sensor 1 r UNASSIGNED
sensor 4 p STARTED    2 3.8kb 127.0.0.1 ziap
sensor 4 r UNASSIGNED
sensor 0 p STARTED    0 283b  127.0.0.1 ziap
sensor 0 r UNASSIGNED
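The v flag adds column headers to the output, and the h parameter selects specific columns, including the reason a shard is unassigned, for example:

GET _cat/shards/sensor?v&h=index,shard,prirep,state,unassigned.reason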
Notes and good things to know
- Having shards that are too large is simply inefficient. Moving huge indices across machines is both time- and labor-intensive. First, Lucene merges take longer to complete and require greater resources. Moreover, moving shards across nodes for rebalancing takes longer and extends recovery time. By splitting the data and spreading it across a number of machines, it can be kept in manageable chunks and risk is minimized.
- Having the right number of shards is important for performance. It is thus wise to plan in advance. When queries are run across different shards in parallel, they execute faster than an index composed of a single shard, but only if each shard is located on a different node and there are sufficient nodes in the cluster. At the same time, however, shards consume memory and disk space, both in terms of indexed data and cluster metadata. Having too many shards can slow down queries, indexing requests, and management operations, and so maintaining the right balance is critical.
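To check whether existing shards fall within the recommended size range, the cat shards output can be sorted by on-disk size. A minimal sketch (the s parameter sorts by the given column):

GET _cat/shards?v&h=index,shard,prirep,state,docs,store&s=store:desc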
Log Context
The log "unexpected error during recovery [{}]; failing shard" is generated by the class PeerRecoveryTargetService.java.
We extracted the following from the Elasticsearch source code for those seeking in-depth context:
@Override
public void onFailure(Exception e) {
    try (RecoveryRef recoveryRef = onGoingRecoveries.getRecovery(recoveryId)) {
        if (recoveryRef != null) {
            logger.error(() -> new ParameterizedMessage("unexpected error during recovery [{}]; failing shard", recoveryId), e);
            onGoingRecoveries.failRecovery(
                recoveryId,
                new RecoveryFailedException(recoveryRef.target().state(), "unexpected error", e),
                true // be safe
            );