How to set up snapshot repositories in OpenSearch (S3, GCS, Azure)

Opster Expert Team - Gustavo

Dec 20, 2022 | 3 min read

To easily set up OpenSearch snapshot repositories and define your ISM policies, you can use Opster’s Management Console (OMC). By using the OMC you can also deploy multiple clusters, configure node roles, scale cluster resources, and more – all from a single interface, for free. Check it out here.

Introduction

In OpenSearch Snapshot: Backing-up an Index (opster.com), we learned how to configure a repository, take a backup, and restore it. This article will show you how to configure a repository for each of the following service providers:

  • Amazon S3
  • Azure Blob Storage
  • Google Cloud Storage (GCS)

NOTE: If you are using macOS and see the error “could not find java in bundled /Users/youruser/…/java” when running the sudo commands, pass the -E flag to sudo (for example, sudo -E ./bin/opensearch-plugin).

How to configure a snapshot repository for Amazon S3

1. Install the repository-s3 plugin on all nodes.

sudo ./bin/opensearch-plugin install repository-s3

2. Add your AWS credentials to the keystore. You will be prompted for each value as you run these commands. This also needs to be done on all nodes.

sudo ./bin/opensearch-keystore add s3.client.default.access_key
sudo ./bin/opensearch-keystore add s3.client.default.secret_key

Note: You can store multiple sets of credentials for different repositories by changing “default” to a different string. This value must match the “client” value of the repository you want to associate the credentials with.
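For instance, to keep a separate set of credentials for a second repository, you could register them under a client name of your choosing (the name “backup” and the bucket name below are just examples) and then reference that name when registering the repository:

sudo ./bin/opensearch-keystore add s3.client.backup.access_key
sudo ./bin/opensearch-keystore add s3.client.backup.secret_key

PUT _snapshot/my-backup-repository
{
  "type": "s3",
  "settings": {
    "bucket": "my-other-bucket",
    "client": "backup"
  }
}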

3. If you are using temporary credentials, use the following to add the session token:

sudo ./bin/opensearch-keystore add s3.client.default.session_token

4. If you need to connect through a proxy, use the following:

sudo ./bin/opensearch-keystore add s3.client.default.proxy.username
sudo ./bin/opensearch-keystore add s3.client.default.proxy.password

For extra settings, like using IAM credentials, you can refer to the official docs.

If you changed anything in the opensearch.yml file, you need to restart each node. If you didn’t, you can run the following command to reload the keystore values:

POST _nodes/reload_secure_settings

Once you have loaded the credentials, you can go ahead and register the repository.

PUT _snapshot/my-s3-repository-name
{
  "type": "s3",
  "settings": {
    "bucket": "my-s3-bucket",
    "base_path": "my/snapshot/directory",
    "client": "default"
  }
}

The bucket must already exist in S3, and the client value must match the credentials name used in the keystore.
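To confirm that all nodes can reach the bucket with the credentials you stored, you can verify the repository (the repository name below matches the example above):

POST _snapshot/my-s3-repository-name/_verify

If verification succeeds, the response lists the nodes that were able to access the bucket.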

How to configure a snapshot repository for Azure Blob Storage

1. Install the repository-azure plugin on all nodes.

sudo ./bin/opensearch-plugin install repository-azure

2. Add your Azure credentials to the keystore. You will be prompted for each value as you run these commands. This also needs to be done on all nodes.

sudo ./bin/opensearch-keystore add azure.client.default.account
sudo ./bin/opensearch-keystore add azure.client.default.key

Note: You can store multiple sets of credentials for different repositories by changing “default” to a different string. This value must match the “client” value of the repository you want to associate the credentials with.

3. If you need to connect through a proxy, use the following:

sudo ./bin/opensearch-keystore add azure.client.default.proxy.username
sudo ./bin/opensearch-keystore add azure.client.default.proxy.password

If you changed anything in the opensearch.yml file, you need to restart each node. If you didn’t, you can run the following command to reload the keystore values:

POST _nodes/reload_secure_settings

Once you have loaded the credentials, you can go ahead and register the repository.

PUT _snapshot/my-azure-repository-name
{
  "type": "azure",
  "settings": {
    "container": "my-azure-bucket",
    "base_path": "my/snapshot/directory",
    "client": "default"
  }
}

The container must already exist in Azure Blob Storage, and the client value must match the credentials name used in the keystore.
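Once the repository is registered, you can take a snapshot into it. The snapshot name below is just an example; wait_for_completion=true makes the request block until the snapshot finishes instead of returning immediately:

PUT _snapshot/my-azure-repository-name/my-first-snapshot?wait_for_completion=true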

How to configure a snapshot repository for Google Cloud Storage

1. Install the repository-gcs plugin on all nodes.

sudo ./bin/opensearch-plugin install repository-gcs

2. Add your GCP credentials to the keystore. You will be prompted for each value as you run these commands. This also needs to be done on all nodes.

You need to add the Service Account Key JSON file to the keystore. The service account needs the storage.admin role so that OpenSearch can read, write, and list the bucket objects.

sudo ./bin/opensearch-keystore add-file gcs.client.default.credentials_file /path/service-account.json

Note: You can store multiple sets of credentials for different repositories by changing “default” to a different string. This value must match the “client” value of the repository you want to associate the credentials with.

3. If you need to connect through a proxy, use the following:

sudo ./bin/opensearch-keystore add gcs.client.default.proxy.username
sudo ./bin/opensearch-keystore add gcs.client.default.proxy.password

If you changed anything in the opensearch.yml file, you need to restart each node. If you didn’t, you can run the following command to reload the keystore values:

POST _nodes/reload_secure_settings

Once you have loaded the credentials, you can go ahead and register the repository.

PUT _snapshot/my-gcs-repository-name
{
  "type": "gcs",
  "settings": {
    "bucket": "my-gcs-bucket",
    "base_path": "my/snapshot/directory",
    "client": "default"
  }
}

The bucket must already exist in Google Cloud Storage, and the client value must match the credentials name used in the keystore.
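With the repository in place, restoring works as described in the snapshot article linked above. As a quick reminder, assuming a snapshot named my-snapshot already exists in this repository, a restore request looks like this (the snapshot name and index pattern are examples):

POST _snapshot/my-gcs-repository-name/my-snapshot/_restore
{
  "indices": "my-index-*"
}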

