How to Reduce the Number of Shards in an OpenSearch Cluster

By Opster Expert Team - Saskia

Updated: Aug 16, 2023

| 5 min read


Overview

When you have too many shards in your cluster, there are a few steps you can take to reduce their number. Adjusting index templates, removing replicas, shrinking indices and reindexing into larger indices are covered in this Opster guide.

Below, we will review how to reduce the number of shards for newly created indices, for already existing indices (including the number of primary shards), for time-based indices and for multi-tenant indices.

How to reduce the number of shards of newly created indices

When creating new indices, you should make sure to configure the number of shards to the lowest number possible.

To demonstrate this option, let’s assume you’re running OpenSearch and using it to store your logs. By default, 1 primary shard is created per index, and a single shard can easily hold 20-50GB of data. If you know that you generate a much smaller amount of data, keep the default of 1 shard per index; as a general rule, aim for roughly 1 shard per 50GB of data per index.

The easiest way to achieve this is to create an index template and store it in your cluster state. 

This request will create an index template called “template_1”. It will be applied to all newly created indices whose names start with “log” and set the number of shards per index to 1.

PUT _index_template/template_1
{
  "index_patterns": ["log*"],
  "template": {
    "settings": {
      "index.number_of_shards": 1
    }
  }
}

How to reduce the number of shards of your existing indices

Reduce the number of replica shards

An easy way to reduce the number of shards is to reduce the number of replicas. Changing the number of replicas can be done dynamically with a request and takes just a few seconds. 

Usually it is recommended to have 1 replica per shard, i.e. one copy of each primary shard allocated on another node (unless you have many search requests running in parallel, in which case more replicas can help). Replicas provide fail-safety: if one node goes down, your data is still available for searching and indexing. They also enhance cluster resilience and distribute the search load more evenly.

However, if your cluster is completely overloaded, a quick and easy fix is to set the number of replicas to zero. If you do so, be aware that this is not ideal and shouldn’t be kept that way for long periods of time.

This request will set the number of replica shards to 0 for all index names starting with “log”:

PUT log*/_settings
{
  "index": {
    "number_of_replicas": 0
  }
}

Reduce the number of primary shards

The number of primary shards per index cannot be changed dynamically – it is immutable. The routing algorithm used to distribute new documents among the shards relies on the total number of primary shards. That’s why it’s a bit harder to reduce the number of primary shards of an existing index.

You basically need to create a new index and copy all the data over. There are 2 APIs that you can use for that: the Shrink API and the Reindex API. Each API has its own pros and cons, but one will usually fit your setup and requirements better. The Shrink API is faster, but it can only be used on read-only indices (indices where you don’t update or insert any documents). The Reindex API can be used on active indices.

Reduce the number of shards with the Shrink API

You could shrink oversharded indices down to 1 shard, or perhaps more depending on the size of your index – refer to this documentation about sizing for guidance. There are strict requirements on the number of shards you can shrink an index down to: the target shard count must be a factor of the source index’s shard count. Please read the docs to make sure this option is right for you.

Keep in mind that some maintenance time is to be expected when shrinking indices. First, all the primary shards of an index need to be allocated to the same node. Please make sure that there is enough storage space on that node (if there isn’t, you might consider using the Reindex API as shown below). Then, the index needs to be marked as read only. 

This request will set all indices whose names start with “log” to read-only and move all of their shards to the same node (replace “shrink_node_name” with the name of that node).

PUT log*/_settings
{
  "settings": {
    "index.blocks.write": true,
    "index.routing.allocation.require._name": "shrink_node_name"
  }
}

You will need to shrink your indices one by one with this command. The settings in the request body clear the write block and the allocation requirement on the new index, so its shards are allocated normally and it is writable again.

POST log-1/_shrink/log-shrink-1
{
  "settings": {
    "index.routing.allocation.require._name": null,
    "index.blocks.write": null
  }
}

Both the Shrink API and the Reindex API mentioned below are methods which involve creating a new index and copying data. This can be resource-intensive and may impact the performance of your cluster. Therefore, these operations should be planned and executed carefully, preferably during off-peak hours.

Reduce the number of shards with the Reindex API

If the index you’re trying to shrink is still actively written to, you can use the Reindex API instead.

The Reindex API can be used to copy over documents in batches from one index to another. In this case all the work that needs to be performed at index time needs to be repeated. This makes using the Reindex API slower than copying over entire shards, but it’s still a fairly quick process. 

The Reindex API creates a snapshot of the current state of the index and then copies over all of those documents. Documents that are newly created during the reindexing process are not copied. In order to copy those newly created documents to the new index, you need to repeat the reindexing process with a setting that makes sure you only create documents that don’t already exist in the destination.

If your index is being actively written to and you cannot allow any documents to be missing in the new index, it’s recommended to write to both indices for some time to make sure there is no delta. 

This request will copy all documents from the index called “log-1” to a new index called “log-shrink-1”.

Make sure that the index template from the first section is applied. 

POST _reindex
{
  "source": {
    "index": "log-1"
  },
  "dest": {
    "index": "log-shrink-1",
    "op_type": "index"
  }
}

Run a second iteration to add only the documents that were created during the first reindex operation:

POST _reindex
{
  "source": {
    "index": "log-1"
  },
  "dest": {
    "index": "log-shrink-1",
    "op_type": "create"
  }
}

Remember that this should be done during off-peak hours as it can be resource-intensive and impact the performance of your cluster.

Reducing the number of shards of your time-based indices

If you’re using time-based index names, for example daily indices for logging, and you don’t have enough data, a good way to reduce the number of shards would be to switch to a weekly or a monthly pattern. 

You can also group old read-only indices by month, quarter or year. The easiest way to do this is to use the Reindex API, as shown below.
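
As an illustration, assuming daily logging indices named like “log-2023.08.01”, “log-2023.08.02” and so on (these index names are placeholders for your own naming pattern), a single reindex request with a wildcard source can consolidate a whole month into one monthly index:

POST _reindex
{
  "source": {
    "index": "log-2023.08.*"
  },
  "dest": {
    "index": "log-2023.08"
  }
}

Once the reindex has completed and you have verified the document counts, the old daily indices can be deleted.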

Reducing the number of shards for multi-tenant indices

When you have a multi-tenancy use case with an index per customer, reducing shards seems much harder. 

In multi-tenant situations, you might have a few big customers whose indices, with one or more shards each, are about the right size. At the same time, you’ll also have many customers with much smaller indices.

If you need to reduce shards in such a setup while still needing an index per customer, you can use Filtered Aliases.

Reducing the number of shards with Filtered Aliases

Filtered Aliases are not as well known as “normal” index aliases, but they allow you to create multiple views of one index. 

Instead of creating one index per tenant or customer, you create one index for all smaller tenants that are too small to deserve their own index. Then you add a keyword or numeric field to distinguish those tenants (customer name or customer ID). 

After creating the index, you can simply store a filter on that index as an alias and use the tenant name as the alias name.

The applications will not notice that you’re using a filter in the background – they can simply use the alias as if it were an index name. But you can store many different tenants in one index and avoid oversharding.

This is all assuming that you are using a compatible data model for all customers. 

How to create a Filtered Alias on an index:

POST _aliases
{
  "actions": [
    {
      "add": {
        "index": "small-customers",
        "alias": "customer-1",
        "filter": {
          "term": {
            "customer-id": "1"
          }
        }
      }
    }
  ]
}
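
Once the alias exists, applications can query it exactly as if it were an index name. The filter is applied automatically, so only documents belonging to that tenant are returned:

GET customer-1/_search
{
  "query": {
    "match_all": {}
  }
}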

To reduce the number of shards in a multi-tenant environment, you can create a new index for all of the small tenants and simply copy over the data with the Reindex API. If you need to add a new field to distinguish the tenants, such as “customer-id”, you can use an ingest pipeline to add it during the reindex, as shown below.
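
As a sketch of that approach (the pipeline name, the field value and the index names below are placeholders for illustration), you could create an ingest pipeline with a set processor and reference it in the reindex request:

PUT _ingest/pipeline/add-customer-id
{
  "description": "Adds a customer-id field to every document that passes through the pipeline",
  "processors": [
    {
      "set": {
        "field": "customer-id",
        "value": "1"
      }
    }
  ]
}

POST _reindex
{
  "source": {
    "index": "old-customer-1"
  },
  "dest": {
    "index": "small-customers",
    "pipeline": "add-customer-id"
  }
}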

Read more about shards

OpenSearch shards – https://opster.com/guides/opensearch/opensearch-basics/opensearch-shards/

Oversharding – https://opster.com/guides/opensearch/opensearch-capacity-planning/opensearch-oversharding/

Choosing the correct number of shards – https://opster.com/guides/opensearch/opensearch-capacity-planning/how-to-choose-the-correct-number-of-shards-per-index-in-opensearch/

Max Shards Per Node Exceeded – https://opster.com/guides/opensearch/opensearch-basics/opensearch-max-shards-per-node-exceeded/

Shards are too large – https://opster.com/guides/opensearch/opensearch-basics/opensearch-shards-too-large/
