Elasticsearch Memory Allocation Issues in Cold Nodes

By Opster Team

Updated: Jul 4, 2023

2 min read

What does this mean?

The amount of memory assigned to the cold nodes in your Elasticsearch cluster is higher than necessary. Cold nodes are used to store less frequently accessed data, and optimizing their memory allocation can lead to improved performance and cost savings.

This issue is monitored by Opster AutoOps in real time, with personalized recommendations provided for your own system. You can also configure notifications to prevent this issue from recurring.

Why does this occur?

This occurs when the memory-to-disk ratio in your Elasticsearch cluster is not optimized. The memory-to-disk ratio is a crucial factor in determining the efficiency of your search deployment. An unoptimized ratio can lead to increased memory requirements, reduced scaling efficiency, and increased I/O operations, which can negatively impact the overall performance and cost of your Elasticsearch deployment.
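Before changing anything, it helps to see the current ratio on each node. One way, assuming you have access to the cluster's REST API (for example via Kibana Dev Tools), is the cat nodes API with explicit column headers:

GET /_cat/nodes?v&h=name,node.role,heap.max,disk.total,disk.used_percent

Comparing heap.max against disk.total for each cold node gives a rough memory-to-disk ratio to judge against your workload.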

Possible impact and consequences of memory allocation issues

The potential consequences include:

  1. Increased hardware requirements: Unoptimized memory allocation can lead to the need for additional hardware resources to maintain the same level of performance.
  2. Reduced performance efficiency: Excessive memory allocation can result in reduced performance efficiency, as resources are not being utilized optimally.
  3. Increased costs: The need for additional hardware and reduced efficiency can lead to increased costs for your Elasticsearch deployment.

How to resolve

To resolve the issue of excessive memory allocation in Elasticsearch cold nodes, you can take the following steps:

1. Improve your memory-to-disk ratio: Move to instances with less memory and reduce the heap allocated to the nodes. This can be done by creating a custom JVM options file (located in the config/jvm.options.d folder) and setting the appropriate heap size for your data nodes. For example:

# Example: a 16GB heap. Keep the heap at or below 50% of available memory,
# and never above ~32GB (the compressed-oops threshold).
-Xms16g
-Xmx16g
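As a concrete sketch of this step, the following shell commands create such an override file. The path and filename here are assumptions for illustration; on a real node, ES_HOME would be your actual Elasticsearch installation directory (for example /usr/share/elasticsearch), and the node must be restarted for the change to take effect.

```shell
# Stand-in path for this example -- replace with your Elasticsearch home directory.
ES_HOME="/tmp/elasticsearch-demo"

mkdir -p "$ES_HOME/config/jvm.options.d"

# Pin both the initial (-Xms) and maximum (-Xmx) heap to the same value,
# so the JVM never resizes the heap at runtime.
cat > "$ES_HOME/config/jvm.options.d/cold-node-heap.options" <<'EOF'
-Xms16g
-Xmx16g
EOF
```

Elasticsearch reads every .options file in config/jvm.options.d at startup, so the override survives upgrades that replace the main jvm.options file.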

2. Reduce the number of cold data nodes and increase their disk allocation accordingly: By reducing the number of cold data nodes in your cluster, you can consolidate resources and allocate more disk space to the remaining cold nodes. This can be done by updating the cluster settings to drain data to other cold nodes so that the specified cold node can be deprovisioned:

PUT /_cluster/settings
{
  "transient": {
    "cluster.routing.allocation.exclude._ip": "IP_ADDRESS_OF_NODE_TO_REMOVE"
  }
}

Replace “IP_ADDRESS_OF_NODE_TO_REMOVE” with the IP address of the cold data node you want to remove from the cluster.
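Once the excluded node has drained and been deprovisioned, it is good practice to clear the exclusion so it does not silently affect future allocation. A sketch of both checks, using the cluster health and cluster settings APIs:

# Wait until no shards are still relocating
GET /_cluster/health?filter_path=relocating_shards

# Then remove the transient exclusion
PUT /_cluster/settings
{
  "transient": {
    "cluster.routing.allocation.exclude._ip": null
  }
}

Setting the value to null removes the exclusion from the cluster state.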

Conclusion

By addressing the issue of excessive memory allocation in Elasticsearch cold nodes, you can optimize your search deployment’s performance and reduce costs. Following the steps outlined in this guide will help you maintain a balanced memory-to-disk ratio and ensure efficient resource utilization in your Elasticsearch cluster.
