Elasticsearch Too Much Memory Allocated to Content Nodes

By Opster Team

Updated: Jul 5, 2023 | 2 min read

What does this mean? 

The memory allocated to the content nodes in your Elasticsearch cluster can be reduced. This excess memory allocation can lead to inefficiencies and increased costs.

This issue is monitored by Opster AutoOps in real time, with personalized recommendations provided for your system. You can also configure notifications to prevent this issue from forming in the future.

Why does this occur?

This occurs when the memory-to-disk ratio in your Elasticsearch cluster is not optimal. The memory-to-disk ratio is a critical factor in determining the performance and cost efficiency of your search deployment. An imbalance in this ratio can lead to increased hardware needs, inefficient resource utilization, and higher costs.
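As an illustration of the ratio math, here is a minimal Python sketch. The node sizes and the 30:1 disk-to-RAM target are assumptions chosen for the example, not Elasticsearch defaults; the right target depends on your workload:

```python
def memory_to_disk_ratio(ram_gb: float, disk_gb: float) -> float:
    """Return GB of disk served per GB of RAM on a node."""
    return disk_gb / ram_gb

# Hypothetical content node: 64 GB RAM, 960 GB of disk.
ratio = memory_to_disk_ratio(64, 960)
print(f"disk:RAM ratio = {ratio:.0f}:1")  # disk:RAM ratio = 15:1

# Illustrative target: assume this workload runs well at ~30 GB of disk
# per GB of RAM. RAM beyond disk_gb / target is a candidate for reclaiming.
TARGET = 30
excess_ram_gb = 64 - 960 / TARGET
print(f"estimated excess RAM: {excess_ram_gb:.0f} GB")  # estimated excess RAM: 32 GB
```

In this hypothetical, the node serves only 15 GB of disk per GB of RAM, so roughly half its memory is doing little useful work and smaller instances would serve the same data.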

Possible impact and consequences

The possible impact of this issue includes:

  1. Reduced performance efficiency: Excess memory allocation can lead to inefficient use of resources, resulting in suboptimal performance.
  2. Higher memory requirements: Allocating more memory than required can increase the overall memory needs of your search deployment.
  3. Inefficient scaling: An imbalanced memory-to-disk ratio can hinder the ability to scale your search deployment efficiently.
  4. Increased I/O operations: Excess memory allocation can lead to increased I/O operations, which can negatively impact performance.
  5. Reduced resource consolidation: An imbalanced memory-to-disk ratio can make it difficult to consolidate resources, leading to increased hardware needs and costs.

How to resolve

To resolve the issue of excess memory allocation to content nodes in Elasticsearch, follow these recommendations:

1. Improve your memory-to-disk ratio: Move to instances with less memory, and reduce the heap allocated to the nodes. You can do this by creating a custom JVM options file in the config/jvm.options.d directory and setting an appropriate heap size for your data nodes. For example:

# Set the heap size to 50% of available memory, up to a maximum of 32GB.
# Example for a content node with 32GB of RAM (keep -Xms and -Xmx equal):
-Xms16g
-Xmx16g
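After restarting the node, you can confirm that the new heap size took effect with the cat nodes API, which reports each node's configured maximum heap alongside its total RAM:

GET _cat/nodes?v&h=name,heap.max,heap.percent,ram.max

If heap.max still shows the old value, check that the custom options file is in the correct directory and that the node was actually restarted.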

2. Reduce the number of content data nodes and increase disk allocated accordingly: By reducing the number of content data nodes in your cluster, you can consolidate resources and allocate more disk space to the remaining content nodes. This can be done by updating the cluster settings to drain data to other content nodes so that the specified content node can be deprovisioned:

PUT /_cluster/settings
{
  "transient": {
    "cluster.routing.allocation.exclude._ip": "IP_ADDRESS_OF_NODE_TO_REMOVE"
  }
}

Replace "IP_ADDRESS_OF_NODE_TO_REMOVE" with the IP address of the content data node you want to remove from the cluster.
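Before deprovisioning the node, verify that the drain has completed, i.e. that no shards remain on it and nothing is still relocating:

GET _cluster/health?filter_path=relocating_shards

GET _cat/shards?v&h=index,shard,prirep,state,node

Once relocating_shards is 0 and the node no longer appears in the shard listing, it can safely be shut down. Afterwards, clear the transient exclusion (set cluster.routing.allocation.exclude._ip to null) so a future node reusing that IP address is not unintentionally excluded from allocation.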


By following this guide, you can address the issue of excess memory allocation to content nodes in Elasticsearch, leading to improved performance efficiency, lower memory requirements, efficient scaling, reduced I/O operations, and streamlined resource consolidation. This will ultimately result in cost savings for your search deployment.
