How To Solve Issues Related to Log – Failed to open / find files while reading metadata snapshot


Last update: Jan-20

Elasticsearch Error Guide In-Page Navigation (click to jump):

Troubleshooting Background – start here to get the full picture       
Related Issues – selected resources on related issues  
Log Context – useful for experts
About Opster – offering a different approach to troubleshooting Elasticsearch


Troubleshooting background

To troubleshoot the Elasticsearch log “Failed to open / find files while reading metadata snapshot”, it’s important to understand common problems related to the Elasticsearch concepts involved: index, metadata and snapshot. See the detailed explanations below, complete with common problems, examples and useful tips.

Index in Elasticsearch

What it is

In Elasticsearch, an index (indices in plural) can be thought of as a table inside a database that has a schema and can have one or more shards and replicas. An Elasticsearch index is divided into shards and each shard is an instance of a Lucene index.

Indices are used to store the documents in dedicated data structures corresponding to the data type of fields. For example, text fields are stored inside an inverted index whereas numeric and geo fields are stored inside BKD trees.

Create Index

The following example applies to Elasticsearch version 5.x onwards. It creates an index named test_index1 with two shards, each having one replica:

PUT /test_index1?pretty
{
    "settings" : {
        "number_of_shards" : 2,
        "number_of_replicas" : 1
    },
    "mappings" : {
        "properties" : {
            "tags" : { "type" : "keyword" },
            "updated_at" : { "type" : "date" }
        }
    }
}
List Indices

All the index names and their basic information can be retrieved using the following command:

GET _cat/indices?v
Index a document

Let’s add a document to the index with the below command:

PUT test_index1/_doc/1
{
  "tags": [ "tag1" ],
  "date": "01-01-2020"
}
Query an index
GET test_index1/_search
{
  "query": {
    "match_all": {}
  }
}
Query Multiple Indices

It is possible to search multiple indices with a single request. If it is a raw HTTP request, index names should be sent in comma-separated format, as shown in the example below; in the case of a query via a programming-language client such as Python or Java, index names can be sent as a list.

GET test_index1,test_index2/_search
Delete Indices
DELETE test_index1
Common Problems
  • It is good practice to define the settings and mapping of an Index wherever possible because if this is not done, Elasticsearch tries to automatically guess the data type of fields at the time of indexing. This automatic process may have disadvantages, such as mapping conflicts, duplicate data and incorrect data types being set in the index. If the fields are not known in advance, it’s better to use dynamic index templates.
  • Elasticsearch supports wildcard patterns in index names, which sometimes aids with querying multiple indices, but can also be very destructive. For example, it is possible to delete all the indices with a single command:

DELETE *

To disable this, you can add the following lines in the elasticsearch.yml:

action.destructive_requires_name: true
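
Following the tip above on dynamic index templates, here is a minimal sketch (the index name test_dynamic and the template name strings_as_keywords are hypothetical) that maps any newly encountered string field to the keyword type instead of letting Elasticsearch guess:

PUT test_dynamic
{
  "mappings": {
    "dynamic_templates": [
      {
        "strings_as_keywords": {
          "match_mapping_type": "string",
          "mapping": { "type": "keyword" }
        }
      }
    ]
  }
}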

Metadata in Elasticsearch

What it is

Metadata is information about the data. In Elasticsearch, each document has associated metadata such as _id and _index meta fields.

  • Routing meta field:
    • _routing, a routing value that places a document in a particular shard.
  • Other meta field
    • _meta, not used by Elasticsearch but can be used to store application-specific metadata.
PUT index_01
{
  "mappings": {
    "_meta": {
      "class": "App01::User01",
      "version": "01"
    }
  }
}

  • Identity meta fields: 
    • _id, the ID of the document; for example, querying based on the _id field:
PUT index01/_doc/1
{
  "text": "Document with ID 1"
}
PUT index01/_doc/2
{
  "text": "Document with ID 2"
}
GET index01/_search
{
  "query": {
    "terms": {
      "_id": [ "1", "2" ]
    }
  }
}
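
To illustrate the _routing meta field mentioned above, a minimal sketch (the routing value user1 is hypothetical) that indexes and then searches a document using a custom routing value, so the request is directed to a particular shard:

PUT index01/_doc/3?routing=user1
{
  "text": "Document routed with user1"
}

GET index01/_search?routing=user1
{
  "query": {
    "match_all": {}
  }
}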

Snapshots in Elasticsearch

What it is

An Elasticsearch snapshot is a backup of an index taken from a running cluster. Snapshots are incremental: when creating a snapshot of an index, Elasticsearch will not copy any data that is already stored in the repository as part of an earlier, completed snapshot of that index. Therefore you can take snapshots often and efficiently.

You can restore snapshots into a running cluster via the restore API. Snapshots can only be restored to versions of Elasticsearch that can read the indices. Check the version compatibility before you restore. You can’t restore an index to a cluster that is more than one version above the index version.

The following repository types are supported:

  • File System Location

  • S3 Object Storage 

  • HDFS

  • Azure, and Google Cloud Storage


An example of using S3 repository for Elasticsearch

PUT _snapshot/backups
{
  "type": "s3",
  "settings": {
    "bucket": "elastic",
    "endpoint": "",
    "protocol": "http"
  }
}

You will also need to set the S3 access key and secret key in the Elasticsearch keystore:

bin/elasticsearch-keystore add s3.client.default.access_key
bin/elasticsearch-keystore add s3.client.default.secret_key
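
After registering the repository, it is worth verifying that all nodes in the cluster can write to it. Assuming the backups repository name from the example above, the repository verify API can be used:

POST /_snapshot/backups/_verify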

Taking a snapshot:
Once the repo is set, taking a snapshot is just an API call.

PUT /_snapshot/backup/my_snapshot-01-10-2019

Where backup is the name of the snapshot repository, and my_snapshot-01-10-2019 is the name of the snapshot. The above example will take a snapshot of all the indices. To take a snapshot of specific indices, you can provide their names:

PUT /_snapshot/backup/my_snapshot-01-10-2019
{
  "indices": "my_index_1,my_index_2"
}
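
To monitor a snapshot while it is running, the snapshot status API can be queried (using the repository and snapshot names from the example above):

GET /_snapshot/backup/my_snapshot-01-10-2019/_status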

Restoring a Snapshot:
Restoring from a snapshot is also an API call:

POST /_snapshot/backup/my_snapshot-01-10-2019/_restore
{
  "indices": "index_1,index_2"
}

This will restore index_1 and index_2 from the snapshot my_snapshot-01-10-2019 in the backup repository.

Notes and good things to know:
  • A snapshot repository needs to be set up before you can take a snapshot, and you will also need to install the S3 repository plugin if you plan to use a repository with S3 as the backend storage.

sudo bin/elasticsearch-plugin install repository-s3
  • You can use the curator_cli tool to automate taking snapshots, scheduled for example via cron, Jenkins or a Kubernetes job.

  • It is better to use Elasticsearch snapshots instead of disk backups/snapshots.

  • If you are going to restore an index that already exists (for example, because it contains incomplete or corrupted data), the restore will fail unless you first close or delete the existing index.
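
Following the note above, a sketch of closing an existing index (the name index_1 is hypothetical) before restoring it from a snapshot:

POST /index_1/_close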

  • The snapshot and restore mechanism can also be used to copy data from one cluster to another.

  • If you don’t have S3 storage, you can run MinIO with an NFS backend to create an S3 equivalent for your cluster snapshots.

Common Problems 
  • When taking snapshots to remote repositories over low bandwidth, or when the repository storage has low throughput, the snapshot may fail due to timeouts, resulting in partial snapshots.
  • Retrying the snapshot operation (perhaps several times) will eventually produce a complete snapshot, and each retry is relatively fast compared to the initial snapshot because it only backs up the shards that previously failed. It’s better to keep the snapshot repository on the same local network as Elasticsearch, or to configure/design the repository for high write throughput, so that you don’t have to deal with partial snapshots.

  • The snapshot operation will fail if an index is missing. Setting the ignore_unavailable option to true causes indices that do not exist to be ignored during the snapshot operation.
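
The ignore_unavailable option is passed in the snapshot request body; a sketch using the example repository and index names from earlier:

PUT /_snapshot/backup/my_snapshot-01-10-2019
{
  "indices": "my_index_1,my_index_2",
  "ignore_unavailable": true
}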

  • If you are using an open-source security tool such as Search Guard, you will need to configure the snapshot restore settings on the cluster before you can restore any snapshot. In elasticsearch.yml:

searchguard.enable_snapshot_restore_privilege: true

To help troubleshoot related issues, we have gathered selected Q&A from the community and issues from GitHub. Please review the following for further information:

1. Corrupted index on Elasticsearch after a network connectivity issue with AWS

2. How To Get Back My Primary Shards      

Github Issue Number 15392

Log Context

We have extracted the following from the Elasticsearch source code to provide in-depth context for the log “Failed to open / find files while reading metadata snapshot”:

            failIfCorrupted(dir, shardId);
            return new MetadataSnapshot(null, dir, logger);
        } catch (IndexNotFoundException ex) {
            // that's fine - happens all the time no need to log
        } catch (FileNotFoundException | NoSuchFileException ex) {
            logger.info("Failed to open / find files while reading metadata snapshot", ex);
        } catch (ShardLockObtainFailedException ex) {
            logger.info(() -> new ParameterizedMessage("{}: failed to obtain shard lock", shardId), ex);
        }
        return MetadataSnapshot.EMPTY;

About Opster

Incorporating deep knowledge and a broad history of Elasticsearch issues, Opster’s solution identifies and predicts root causes of Elasticsearch problems, provides recommendations, and can automatically perform various actions to manage, troubleshoot and prevent issues.

Learn more: Glossary | Blog| Troubleshooting guides | Error Repository

Need help with any Elasticsearch issue? Contact Opster
