Opster Team
Before you begin reading this guide, we recommend you run Elasticsearch Error Check-Up which analyzes 2 JSON files to detect many errors.
To easily locate the root cause and resolve this issue try AutoOps for Elasticsearch & OpenSearch. It diagnoses problems by analyzing hundreds of metrics collected by a lightweight agent and offers guidance for resolving them.
This guide will help you check for common problems that cause the log "Failed to index snapshot history item in index" to appear. To understand the issues related to this log, read the explanation below about the following Elasticsearch concepts: index, plugin and snapshot.

Index and indexing in Elasticsearch
Overview
In Elasticsearch, an index (plural: indices) contains a schema and can have one or more shards and replicas. An Elasticsearch index is divided into shards and each shard is an instance of a Lucene index.
Indices are used to store the documents in dedicated data structures corresponding to the data type of fields. For example, text fields are stored inside an inverted index whereas numeric and geo fields are stored inside BKD trees.
Examples
Create index
The following example applies to Elasticsearch version 7.x onwards (mappings without types). It creates an index named test_index1 with two shards, each having one replica:
PUT /test_index1?pretty
{
  "settings" : {
    "number_of_shards" : 2,
    "number_of_replicas" : 1
  },
  "mappings" : {
    "properties" : {
      "tags" : { "type" : "keyword" },
      "updated_at" : { "type" : "date" }
    }
  }
}
List indices
All the index names and their basic information can be retrieved using the following command:
GET _cat/indices?v
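The _cat API also accepts an index pattern and an h parameter to limit the output to specific columns, for example:
GET _cat/indices/test_*?v&h=index,health,status,docs.count,store.size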
Index a document
Let’s add a document in the index with the command below:
PUT test_index1/_doc/1
{
  "tags": [ "opster", "elasticsearch" ],
  "updated_at": "2020-01-01"
}
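To confirm the document was indexed, you can retrieve it by its ID:
GET test_index1/_doc/1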
Query an index
GET test_index1/_search
{
  "query": {
    "match_all": {}
  }
}
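Beyond match_all, you can query the keyword field defined in the mapping above, for example with a term query:
GET test_index1/_search
{
  "query": {
    "term": {
      "tags": "opster"
    }
  }
}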
Query multiple indices
It is possible to search multiple indices with a single request. In a raw HTTP request, index names are passed as a comma-separated list, as shown in the example below; with a programming-language client such as Python or Java, index names are usually passed as a list.
GET test_index1,test_index2/_search
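The same works with a wildcard pattern in the index name:
GET test_index*/_search
{
  "query": {
    "match_all": {}
  }
}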
Delete indices
DELETE test_index1
Common problems
- It is good practice to define the settings and mappings of an index explicitly wherever possible, because otherwise Elasticsearch guesses the data type of each field at indexing time. This automatic mapping can cause problems such as mapping conflicts, duplicated data and incorrect data types being set in the index. If the fields are not known in advance, it's better to use dynamic templates (see the sketch after this list).
- Elasticsearch supports wildcard patterns in index names, which sometimes helps when querying multiple indices but can also be very destructive. For example, it is possible to delete all indices with a single command:
DELETE /*
To prevent wildcard deletions, require explicit index names for destructive actions by adding the following line to elasticsearch.yml:
action.destructive_requires_name: true
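As a minimal sketch of the dynamic templates mentioned in the first point above (the index and template names are illustrative), the following mapping stores every new string field as a keyword instead of relying on the default dynamic mapping:
PUT /dynamic_index1
{
  "mappings": {
    "dynamic_templates": [
      {
        "strings_as_keywords": {
          "match_mapping_type": "string",
          "mapping": { "type": "keyword" }
        }
      }
    ]
  }
}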
Overview
Plugins are used to extend the core functionality of Elasticsearch. Elasticsearch provides some core plugins as part of its release installation. In addition to those core plugins, it is possible to write your own custom plugins as well. There are several community plugins available on GitHub for various use cases.
Examples
Get all of the instructions for the plugin:
sudo bin/elasticsearch-plugin -h
Installing the S3 plugin for storing Elasticsearch snapshots on S3:
sudo bin/elasticsearch-plugin install repository-s3
Removing a plugin:
sudo bin/elasticsearch-plugin remove repository-s3
Installing a plugin using the file’s path:
sudo bin/elasticsearch-plugin install file:///path/to/plugin.zip
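Listing the plugins installed on a node:
sudo bin/elasticsearch-plugin list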
Notes and good things to know
- Plugins are installed and removed using the elasticsearch-plugin script, which ships as a part of the Elasticsearch installation and can be found inside the bin/ directory of the Elasticsearch installation path.
- A plugin has to be installed on every node of the cluster and each of the nodes has to be restarted to make the plugin visible.
- You can also download the plugin manually and then install it using the elasticsearch-plugin install command, providing the file name/path of the plugin’s source file.
- When a plugin is removed, you will need to restart every Elasticsearch node in order to complete the removal process.
Common issues
- Managing permission issues during and after plugin installation is the most common problem. If Elasticsearch was installed using the DEB or RPM packages then the plugin has to be installed using the root user. Otherwise you can install the plugin as the user that owns all of the Elasticsearch files.
- In the case of DEB or RPM package installation, it is important to check the permissions of the plugins directory after you install it. You can update the permission if it has been modified using the following command:
chown -R elasticsearch:elasticsearch path_to_plugin_directory
- If your Elasticsearch nodes are running in a private subnet without internet access, you cannot install a plugin directly. In this case, you can download the plugin and copy the files into the plugins directory of the Elasticsearch installation path on every node. The nodes have to be restarted in this case as well; afterwards you can verify the installation as shown below.
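Once the nodes are back up, you can verify that the plugin is loaded on every node using the _cat API:
GET _cat/plugins?v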
Overview
An Elasticsearch snapshot is a backup of an index taken from a running cluster. Snapshots are taken incrementally. This means that when Elasticsearch creates a snapshot of an index, it will not copy any data that was already backed up in an earlier snapshot of the index (unless it was changed). Therefore, it is recommended to take snapshots often.
You can restore snapshots into a running cluster via the restore API. Snapshots can only be restored to versions of Elasticsearch that can read the indices, so check version compatibility before you restore. You can't restore an index to a cluster that is more than one major version above the version the index was created in.
The following repository types are supported:
- File system location
- S3 object storage
- HDFS
- Azure and Google Cloud storage
Examples
An example of registering an S3 repository for Elasticsearch snapshots:
PUT _snapshot/backup
{
  "type": "s3",
  "settings": {
    "bucket": "elastic",
    "endpoint": "10.3.10.10:9000",
    "protocol": "http"
  }
}
You will also need to set the S3 access key and secret key in the Elasticsearch keystore:
bin/elasticsearch-keystore add s3.client.default.access_key
bin/elasticsearch-keystore add s3.client.default.secret_key
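On recent Elasticsearch versions the S3 client credentials are reloadable secure settings, so after adding them you can apply the change without a full restart (on older versions a node restart is required):
POST _nodes/reload_secure_settings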
Taking a snapshot
Once the repo is set, taking a snapshot is just an API call.
PUT /_snapshot/backup/my_snapshot-01-10-2019
Here, backup is the name of the snapshot repository and my_snapshot-01-10-2019 is the name of the snapshot. The example above takes a snapshot of all indices. To take a snapshot of specific indices, provide their names in the request body:
PUT /_snapshot/backup/my_snapshot-01-10-2019
{
  "indices": "my_index_1,my_index_2"
}
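You can then check the state of the snapshot (the repository and snapshot names match the example above):
GET /_snapshot/backup/my_snapshot-01-10-2019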
Restoring a snapshot
Restoring from a snapshot is also an API call:
POST /_snapshot/backup/my_snapshot-01-10-2019/_restore
{
  "indices": "index_1,index_2"
}
This will restore index_1 and index_2 from the snapshot my_snapshot-01-10-2019 in the backup repository.
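If one of these indices already exists and is open in the cluster, the restore will fail. Besides closing or deleting the index, you can restore it under a different name using the rename options of the restore API (the target name below is illustrative):
POST /_snapshot/backup/my_snapshot-01-10-2019/_restore
{
  "indices": "index_1",
  "rename_pattern": "index_1",
  "rename_replacement": "restored_index_1"
}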
Notes and good things to know
- A snapshot repository needs to be set up before you can take a snapshot, and you will also need to install the S3 repository plugin if you plan to use S3 as the backend storage:
sudo bin/elasticsearch-plugin install repository-s3
- You can use the curator_cli tool to automate taking snapshots, scheduled via cron, Jenkins or a Kubernetes job.
- It is better to use Elasticsearch snapshots than disk-level backups/snapshots. An index that already exists in the cluster must be closed before it can be restored from a snapshot.
- Another option is to delete the index before restoring it. The snapshot and restore mechanism can also be used to copy data from one cluster to another cluster.
- If you don't have S3 storage, you can run MinIO with an NFS backend to create an S3-compatible store for your cluster snapshots.
- When the operation is retried, it will only try to snapshot any shards that failed on the initial operation, until the snapshot succeeds.
- It is better to have the snapshot repo on the local network with Elasticsearch or configure/design the repository for high write throughput so that you don’t have to deal with partial snapshots.
- The snapshot operation will fail if an index is missing. Setting the ignore_unavailable option to true will cause missing indices to be ignored during the snapshot operation (an example follows this list).
- If you are using an open source security plugin such as Search Guard, you will need to enable the snapshot restore privilege on the cluster before you can restore any snapshot. In elasticsearch.yml:
searchguard.enable_snapshot_restore_privilege: true
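For the ignore_unavailable option mentioned above, the flag goes in the snapshot request body, for example:
PUT /_snapshot/backup/my_snapshot-01-10-2019
{
  "indices": "my_index_1,my_index_2",
  "ignore_unavailable": true
}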
Create data backups automatically without using snapshots
If having backups of your data is important to you and your operations, snapshots may not be ideal for you. Firstly, there are the problems mentioned above, but you also run the risk of losing any data generated in the time elapsed since the last snapshot was stored.
If, for example, you designate a snapshot and restore process to occur every 5 minutes, the data being backed up is always 5 minutes behind. If a cluster fails 4 minutes after the last snapshot was taken, 4 minutes of data will be completely lost.
Opster's Multi-Cluster Load Balancer mirrors data to multiple clusters in real time to ensure complete data recovery, meaning there are zero time gaps and you'll never run the risk of losing valuable data. To book a demo of the Multi-Cluster Load Balancer, click here.

Log Context
The log "failed to index snapshot history item in index [{}]: [{}]" is emitted from the class SnapshotHistoryStore.java.
We extracted the following from the Elasticsearch source code for those seeking in-depth context:
.source(builder);
client.index(request, ActionListener.wrap(indexResponse -> {
    logger.debug("successfully indexed snapshot history item with id [{}] in index [{}]: [{}]",
        indexResponse.getId(), SLM_HISTORY_ALIAS, item);
}, exception -> {
    logger.error(new ParameterizedMessage("failed to index snapshot history item in index [{}]: [{}]",
        SLM_HISTORY_ALIAS, item), exception);
}));
} catch (IOException exception) {
    logger.error(new ParameterizedMessage("failed to index snapshot history item in index [{}]: [{}]",
        SLM_HISTORY_ALIAS, item), exception);