Before you begin reading this guide, we recommend you try running the Elasticsearch Error Check-Up which analyzes 2 JSON files to detect many configuration errors.
To easily locate the root cause and resolve this issue try AutoOps for Elasticsearch & OpenSearch. It diagnoses problems by analyzing hundreds of metrics collected by a lightweight agent and offers guidance for resolving them.
This guide will help you check for common problems that cause the log “Failed to remove snapshot from cluster state” to appear. To understand the issues related to this log, read the explanation below about the following Elasticsearch concepts: snapshot and cluster.
Overview
An Elasticsearch snapshot is a backup of an index taken from a running cluster. Snapshots are taken incrementally. This means that when Elasticsearch creates a snapshot of an index, it will not copy any data that was already backed up in an earlier snapshot of the index (unless it was changed). Therefore, it is recommended to take snapshots often.
You can restore snapshots into a running cluster via the restore API. Snapshots can only be restored to versions of Elasticsearch that can read the indices. Check the version compatibility before you restore. You can’t restore an index to a cluster that is more than one version above the index version.
The following repository types are supported:
- File system location
- S3 object storage
- HDFS
- Azure and Google Cloud storage
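For instance, the simplest of these, a shared file system repository, can be registered with the standard repository API. This is a minimal sketch: the repository name my_fs_backup and the path /mnt/es_backups are illustrative, and the path must be listed under path.repo in elasticsearch.yml on every node before the call will succeed.

PUT _snapshot/my_fs_backup
{
  "type": "fs",
  "settings": {
    "location": "/mnt/es_backups"
  }
}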
Examples
An example of registering an S3 repository for Elasticsearch (named backup, to match the snapshot examples below):

PUT _snapshot/backup
{
  "type": "s3",
  "settings": {
    "bucket": "elastic",
    "endpoint": "10.3.10.10:9000",
    "protocol": "http"
  }
}
You will also need to set the S3 access key and secret key in the Elasticsearch keystore.
bin/elasticsearch-keystore add s3.client.default.access_key
bin/elasticsearch-keystore add s3.client.default.secret_key
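On Elasticsearch versions where the S3 client credentials are reloadable secure settings, they can be picked up without restarting the nodes by calling the reload secure settings API (on older versions a node restart is required instead):

POST _nodes/reload_secure_settings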
Taking a snapshot
Once the repository is set up, taking a snapshot is just an API call.
PUT /_snapshot/backup/my_snapshot-01-10-2019
Here backup is the name of the snapshot repository and my_snapshot-01-10-2019 is the name of the snapshot. The example above takes a snapshot of all indices. To take a snapshot of specific indices, provide their names in the request body:
PUT /_snapshot/backup/my_snapshot-01-10-2019
{
  "indices": "my_index_1,my_index_2"
}
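To verify that the snapshot completed, and to spot PARTIAL or FAILED states, you can query the snapshot afterwards. These are standard snapshot API calls, shown here against the example repository and snapshot names used in this guide:

GET /_snapshot/backup/my_snapshot-01-10-2019
GET /_snapshot/backup/my_snapshot-01-10-2019/_status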
Restoring a snapshot
Restoring from a snapshot is also an API call:
POST /_snapshot/backup/my_snapshot-01-10-2019/_restore
{
  "indices": "index_1,index_2"
}
This will restore index_1 and index_2 from the snapshot my_snapshot-01-10-2019 in the backup repository.
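If those indices still exist and are open in the cluster (see the notes below about closing or deleting them first), one option is to restore them under new names using the standard rename_pattern and rename_replacement restore options. The restored_ prefix below is just an illustrative choice:

POST /_snapshot/backup/my_snapshot-01-10-2019/_restore
{
  "indices": "index_1,index_2",
  "rename_pattern": "(.+)",
  "rename_replacement": "restored_$1"
}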
Notes and good things to know
- A snapshot repository needs to be set up before you can take a snapshot. If you plan to use S3 as the backend storage, you will also need to install the S3 repository plugin:
sudo bin/elasticsearch-plugin install repository-s3
- You can use the curator_cli tool to automate taking snapshots, scheduled for example via cron, Jenkins or a Kubernetes CronJob.
- It is better to use Elasticsearch snapshots than disk-level backups/snapshots. Note that an existing index must be closed before a snapshot can be restored over it.
- Another option is to delete the index before restoring it. The snapshot and restore mechanism can also be used to copy data from one cluster to another.
- If you don’t have S3 storage, you can run MinIO with an NFS backend to create an S3 equivalent for your cluster snapshots.
- When a failed snapshot operation is retried, it will only snapshot the shards that failed in the initial attempt, and the retry can be repeated until the snapshot succeeds.
- It is better to have the snapshot repo on the local network with Elasticsearch or configure/design the repository for high write throughput so that you don’t have to deal with partial snapshots.
- The snapshot operation will fail if an index is missing. Setting the ignore_unavailable option to true causes indices that do not exist to be skipped during the snapshot operation (see the example request after this list).
- If you are using some open source security tool such as SearchGuard, you will need to configure the Elasticsearch snapshot restore settings on the cluster before you can restore any snapshot.
- In elasticsearch.yml:
searchguard.enable_snapshot_restore_privilege: true
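As referenced above, here is a sketch of a snapshot request that skips missing indices. ignore_unavailable is a standard snapshot option, and the repository and snapshot names reuse the examples from this guide:

PUT /_snapshot/backup/my_snapshot-01-10-2019
{
  "indices": "my_index_1,my_index_2",
  "ignore_unavailable": true
}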
Create data backups automatically without using snapshots
If having backups of your data is important to you and your operations, snapshots may not be ideal for you. Firstly, there are the problems mentioned above, but you also run the risk of losing any data generated in the time elapsed since the last snapshot was stored.
If, for example, you designate a snapshot and restore process to occur every 5 minutes, the data being backed up is always 5 minutes behind. If a cluster fails 4 minutes after the last snapshot was taken, 4 minutes of data will be completely lost.
Opster’s Multi-Cluster Load Balancer mirrors data to multiple clusters in real time to ensure complete data recovery, meaning there are zero time gaps and you’ll never run the risk of losing valuable data. To book a demo of the Multi-Cluster Load Balancer, click here.
Overview
An Elasticsearch cluster consists of a number of servers (nodes) working together as one. Clustering is a technology which enables Elasticsearch to scale up to hundreds of nodes that together are able to store many terabytes of data and respond coherently to large numbers of requests at the same time.
Search or indexing requests will usually be load-balanced across the Elasticsearch data nodes, and the node that receives the request will relay requests to other nodes as necessary and coordinate the response back to the user.
Notes and good things to know
The key elements to clustering are:
Cluster State – Refers to information about which indices are in the cluster, their data mappings and other information that must be shared between all the nodes to ensure that all operations across the cluster are coherent.
Master Node – Each cluster must elect a single master node responsible for coordinating the cluster and ensuring that each node contains an up-to-date copy of the cluster state.
Cluster Formation – Elasticsearch requires a set of configurations to determine how the cluster is formed, which nodes can join the cluster, and how the nodes collectively elect a master node responsible for controlling the cluster state. These configurations are usually held in the elasticsearch.yml config file, environment variables on the node, or within the cluster state.
Node Roles – In small clusters it is common for all nodes to fill all roles; all nodes can store data, become master nodes or process ingestion pipelines. However as the cluster grows, it is common to allocate specific roles to specific nodes in order to simplify configuration and to make operation more efficient. In particular, it is common to define a limited number of dedicated master nodes.
Replication – Data may be replicated across a number of data nodes. This means that if one node goes down, data is not lost. It also means that a search request can be dealt with by more than one node.
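To see these elements in practice, the cluster health and nodes APIs can be queried. The calls below are standard Elasticsearch endpoints shown purely for illustration; the master column in _cat/nodes marks the elected master node with an asterisk:

GET _cluster/health
GET _cat/nodes?v&h=name,node.role,master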
Common problems
Many Elasticsearch problems are caused by operations which place an excessive burden on the cluster because they require an excessive amount of information to be held and transmitted between the nodes as part of the cluster state. For example:
- Shards too small
- Too many fields (field explosion)
Problems may also be caused by inadequate configuration that leaves the Elasticsearch cluster unable to safely elect a master node.
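As a minimal illustration of the cluster formation settings involved (Elasticsearch 7.x-style settings; the hosts and node names below are hypothetical), each master-eligible node’s elasticsearch.yml typically contains:

cluster.name: my-cluster
# Addresses of other master-eligible nodes to contact during discovery
discovery.seed_hosts: ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
# Only used when bootstrapping a brand-new cluster for the first time
cluster.initial_master_nodes: ["node-1", "node-2", "node-3"]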
Backups
Because Elasticsearch is a clustered technology, it is not sufficient to have backups of each node’s data directory. This is because the backups will have been made at different times and so there may not be complete coherency between them. As such, the only way to back up an Elasticsearch cluster is through the use of snapshots, which contain the full picture of an index at any one time.
Cluster resilience
When designing an Elasticsearch cluster, it is important to think about cluster resilience. In particular – what happens when a single node goes down? And for larger clusters where several nodes may share common services such as a network or power supply – what happens if that network or power supply goes down? This is where it is useful to ensure that the master eligible nodes are spread across availability zones, and to use shard allocation awareness to ensure that shards are spread across different racks or availability zones in your data center.
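For example, shard allocation awareness is configured with standard settings in elasticsearch.yml; the zone attribute name and its values below are illustrative assumptions:

# Tag each node with the zone (or rack) it runs in
node.attr.zone: zone-a
# Tell the master to spread primary and replica shards across zones
cluster.routing.allocation.awareness.attributes: zone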
Log Context
The log “Failed to remove snapshot from cluster state” comes from the class SnapshotsService.java. We extracted the following from the Elasticsearch source code for those seeking in-depth context:
@Override
public void onFailure(String source, Exception e) {
    logger.warn(() -> new ParameterizedMessage("[{}] failed to remove snapshot metadata", snapshot), e);
    failSnapshotCompletionListeners(
        snapshot, new SnapshotException(snapshot, "Failed to remove snapshot from cluster state", e));
}

@Override
public void onNoLongerMaster(String source) {
    failSnapshotCompletionListeners(
See how you can use AutoOps to resolve issues