Choosing the right amount of memory based on the number of shards in Elasticsearch

By Opster Team

Updated: Mar 10, 2024 | 3 min read

Introduction

In version 8.x and later, this guidance is no longer applicable, because the memory overhead per shard in Elasticsearch has been significantly reduced.

In older Elasticsearch versions, the rule of thumb is that a given number of shards on disk requires a minimum amount of memory. The exact number of shards per 1 GB of memory depends on the use case, but the common best practice is 1 GB of memory for every 20 shards on disk.
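
To see where your cluster stands relative to this rule of thumb, you can compare the total number of shards in the cluster with the heap available on your data nodes. A minimal check using the cluster health and cat nodes APIs (the columns chosen here are just one reasonable selection):

# Total number of shard copies in the cluster
GET _cluster/health?filter_path=active_shards

# Heap configured on each node
GET _cat/nodes?v&h=name,node.role,heap.max

Dividing active_shards by the combined heap (in GB) of the data nodes gives a shards-per-GB figure to compare against the 20-shards-per-GB guideline.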

Background

Elasticsearch requires each node to keep the names and locations of all the cluster’s shards in memory, together with all index mappings (collectively known as the ‘cluster state’). If the cluster state is large, it will consume significant memory.

If the ratio of memory to the number of shards in the cluster is low, you have insufficient memory for the volume of shards in the cluster and are likely to experience cluster instability. In addition, each data node must hold certain data structures in memory for every shard it stores on disk. Too many shards per node strains that limited memory and increases the risk of degraded performance and out_of_memory errors.

If, on the other hand, the ratio of memory to shards is high, it means that the data nodes have more memory than required for the number of shards they hold, and this could be an opportunity to reduce your hardware costs.
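
To see how many shards each data node is actually carrying, and how evenly they are spread, the cat allocation API is a quick check (sorting by shard count is optional):

# Shards and index disk usage per data node, busiest nodes first
GET _cat/allocation?v&s=shards:desc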

What to do when memory to shard ratio is low

There are several steps you can take to rectify the ratio between your memory and shards.

1. Delete unnecessary indices

You can delete unnecessary indices using:

DELETE my-index
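
To identify candidates for deletion, one option is to list indices sorted by creation date and look for old ones that are no longer queried; the columns shown here are illustrative:

# Oldest indices first, with document counts and on-disk size
GET _cat/indices?v&h=index,docs.count,store.size,creation.date.string&s=creation.date:asc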

2. Close infrequently used indices where possible

Closing an index means it is no longer searchable, but it frees almost all of the memory associated with maintaining the index in the cluster state. Closing can easily be reversed by running the _open command.

POST my-index/_close
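
If the data is needed again later, reopening the index makes it searchable once more:

POST my-index/_open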

3. Shrink indices where possible

It is possible to run a shrink operation on an index to reduce its number of primary shards.

This is only feasible if:

  1. The resulting shards would each be smaller than 50 GB
  2. The index is no longer being written to

To do this, a copy of every shard in the index must first be located on a single node, and it is recommended to remove replicas during the shrink operation to save resources.

The commands below do this: the first removes replicas, forces all shards onto a single node (replace node_name with the name of one of your data nodes), and blocks writes; the second performs the shrink and clears those temporary settings on the new index:

PUT /my-index/_settings
{
  "settings": {
    "index.number_of_replicas": 0,
    "index.routing.allocation.require._name": "node_name",
    "index.blocks.write": true
  }
}

POST /my-index/_shrink/my-shrinked-index
{
  "settings": {
    "index.routing.allocation.require._name": null, 
    "index.blocks.write": null 
  }
}
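
Once the shrink completes, a typical follow-up (a sketch, assuming you want one replica back on the new index) is to wait for the shrunken index to turn green and then restore replicas:

# Wait for the shrunken index to be fully allocated
GET _cluster/health/my-shrinked-index?wait_for_status=green&timeout=60s

# Restore replicas on the new index
PUT /my-shrinked-index/_settings
{
  "index.number_of_replicas": 1
}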

4. Merge small indices into larger indices

In general, this is much easier if the indices in question are no longer being actively updated or written to. The merge is done by reindexing the smaller indices into a larger one.

For example, here is how you would reindex daily indices into monthly indices for the month of March 2022:

POST _reindex
{
  "source": {
    "index": "my-index-2022-03-*"
  },
  "dest": {
    "index": "reindexed-2022-03"
  }
}
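
For large indices, the reindex can take longer than an HTTP request should stay open, so one option is to run it as a background task and poll its progress (the task ID placeholder below is returned by the first call):

POST _reindex?wait_for_completion=false
{
  "source": {
    "index": "my-index-2022-03-*"
  },
  "dest": {
    "index": "reindexed-2022-03"
  }
}

# Check progress using the task ID from the response above
GET _tasks/<task_id>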

Once the operation is complete (and you have checked that all records have been copied correctly), you can add an alias to the new index so that applications can continue to read the data as though it were in the original indices, and then delete the old indices.

POST _aliases
{
  "actions": [
    {
      "add": {
        "index": "reindexed-2022-03",
        "alias": "my-index-2022-03"
      }
    }
  ]
}
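
One simple way to confirm the copy before cleaning up is to compare document counts, then delete the old daily indices (note that wildcard deletes are rejected when action.destructive_requires_name is set to true):

# Compare document counts between the old and new indices
GET my-index-2022-03-*/_count
GET reindexed-2022-03/_count

# Remove the old daily indices once the counts match
DELETE my-index-2022-03-*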

5. Increase memory per node

You can increase the RAM of the node, but bear in mind that the JVM heap cannot exceed 26 GB (in some cases up to 30 GB) if it is to keep using compressed ordinary object pointers. Note: the JVM heap should be no more than 50% of total RAM, so when increasing memory, make sure that at least 50% remains available for the operating system cache.
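
After changing the heap size, you can verify that the nodes still use compressed ordinary object pointers, and see the effective heap limit, through the nodes info API:

GET _nodes/jvm?filter_path=nodes.*.jvm.mem.heap_max_in_bytes,nodes.*.jvm.using_compressed_ordinary_object_pointers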

6. Add new data nodes to the cluster

This will reduce the number of shards per node and bring the ratio of shards to memory back to a suitable level.

What to do when memory to shard ratio is high

If the ratio of memory to the number of shards in your cluster is high, it may indicate that you can reduce node resources. Because the nodes have more memory than they actually need, scaling them down will not hurt Elasticsearch performance, and it is an opportunity to cut costs.

As a target, you can follow the best practice of 1 GB of memory for every 20 shards on disk; the checks after the recommendations below can help confirm actual utilization before you scale down.

Recommendations:

  1. Consider reducing the number of data nodes
  2. Switch to instance types with less memory
  3. Reduce JVM memory, leaving more available memory for the operating system cache
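
Before making any of these changes, it is worth checking current heap and RAM utilization alongside the shard count per node, for example:

# Heap and RAM utilization per node
GET _cat/nodes?v&h=name,node.role,heap.percent,ram.percent

# Shard count and index data size per node
GET _cat/allocation?v&h=node,shards,disk.indices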
