Elasticsearch/OpenSearch Memory and Disk Usage Management

By Opster Team

Updated: Jun 28, 2023 | 2 min read

Before you dig into the details of this technical guide, have you tried asking OpsGPT?

You'll receive concise answers that will help streamline your Elasticsearch/OpenSearch operations.


Try OpsGPT now for step-by-step guidance and tailored insights into your Elasticsearch/OpenSearch operation.


To easily resolve issues in your deployment and locate their root cause, try AutoOps for OpenSearch. It diagnoses problems by analyzing hundreds of metrics collected by a lightweight agent and offers guidance for resolving them. Try AutoOps for free.

How to fine-tune the disk and memory resources needed in OpenSearch

AutoOps optimizes search performance and helps users reduce their costs while improving resource utilization. One of the many recommendations AutoOps provides to save costs is related to disk to memory ratio. You can try AutoOps for free here.

This article is also related to Opster’s Cost Insight tool. Cost Insight is free, does not require any installation and helps users reduce Elasticsearch and OpenSearch hardware costs. Read more about it here, and run the tool here.

Overview

When you’d like to check whether your resources are being used efficiently and cost-effectively, one way to do so is to evaluate the ratio of disk usage to the memory allocated.

OpenSearch nodes require a significant amount of RAM for both indexing and search operations. The RAM required to run an OpenSearch cluster is generally proportional to the volume of data on the cluster.

Recommended disk/RAM ratio

The recommended disk/RAM ratio differs according to tier. The recommendations are as follows (a quick way to check your own nodes’ ratios is sketched after this list):

  • Data/Hot/Content – 30x
  • Warm – 160x
  • Cold – 300x
  • Frozen – 1000x
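
To see where your own nodes stand, you can pull each node’s physical RAM and total disk from the _cat/nodes API and compute the ratio yourself. Below is a minimal sketch, assuming an unauthenticated cluster at http://localhost:9200 and the Python requests library; the endpoint, credentials and exact thresholds are assumptions to adapt to your environment.

    # Minimal sketch: compute each node's disk-to-RAM ratio via the _cat/nodes API.
    # Assumes an unauthenticated cluster at http://localhost:9200 -- adjust URL/auth.
    import requests

    CLUSTER_URL = "http://localhost:9200"  # assumption: your cluster endpoint

    # Recommended disk/RAM ratios per tier, from the list above.
    TIER_RATIOS = {"hot": 30, "warm": 160, "cold": 300, "frozen": 1000}

    resp = requests.get(
        f"{CLUSTER_URL}/_cat/nodes",
        params={"h": "name,node.role,ram.max,disk.total", "format": "json", "bytes": "b"},
        timeout=10,
    )
    resp.raise_for_status()

    for node in resp.json():
        if not node.get("disk.total") or not node.get("ram.max"):
            continue  # skip nodes that don't report disk or RAM
        ratio = int(node["disk.total"]) / int(node["ram.max"])
        print(f"{node['name']:<20} roles={node['node.role']:<12} disk/RAM = {ratio:.0f}x")

    print("Reference ratios:", TIER_RATIOS)

Compare each node’s printed ratio against the recommendation for its tier to see whether it is over- or under-provisioned on memory.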

Memory-to-disk ratio is high

According to best practice for the memory-to-disk ratio, having more than 1 GB of RAM per 20 GB of disk space is considered a high memory-to-disk ratio, meaning the cluster has more memory than it needs for the volume of data it holds.

If the cluster’s performance is good and you’re looking to reduce costs, trimming memory is a likely opportunity: with such a high ratio, it is improbable that you will be able to take advantage of all of the RAM on your cluster.

You may have a high memory-to-disk ratio in situations such as:

  • Very short data retention (e.g. 1 week)
  • High volume of updates rather than new data indexing
  • Search intensive applications (large number of queries or heavy aggregations against a relatively low volume of data)

If you’re interested in reducing costs, consider reducing the RAM on the existing nodes.
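
If you suspect the third situation above (a search-heavy workload over a relatively small data set), the stats APIs can give a rough feel for it. The sketch below is only a heuristic and is not part of any Opster tool; it assumes an unauthenticated cluster at http://localhost:9200 and the Python requests library.

    # Rough heuristic sketch: compare cumulative search query counts with total
    # store size to gauge how search-intensive the workload is relative to data
    # volume. Counters are cumulative since each node last restarted, so treat
    # the output as an indicator only.
    import requests

    CLUSTER_URL = "http://localhost:9200"  # assumption: your cluster endpoint

    # Total search queries served, summed across all nodes.
    node_stats = requests.get(f"{CLUSTER_URL}/_nodes/stats/indices/search", timeout=10).json()
    total_queries = sum(
        n["indices"]["search"]["query_total"] for n in node_stats["nodes"].values()
    )

    # Total on-disk store size across the cluster.
    cluster_stats = requests.get(f"{CLUSTER_URL}/_cluster/stats", timeout=10).json()
    store_gb = cluster_stats["indices"]["store"]["size_in_bytes"] / 1024 ** 3

    print(f"~{total_queries:,} queries served against ~{store_gb:.1f} GB of data "
          f"({total_queries / max(store_gb, 1):.0f} queries per GB stored)")

A high query count against a small store is consistent with a search-intensive cluster that legitimately needs a higher share of memory.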

Memory-to-disk ratio is low

According to best practice for the memory-to-disk ratio, having less than 1 GB of RAM per 80 GB of disk space means the cluster does not have enough memory resources.

In this case you will likely be unable to take advantage of all of the available disk space, or, if you do, you are likely to run into performance issues. You may have a low memory-to-disk ratio in situations such as:

  • Very long data retention periods
  • Non-search intensive applications (low client query rates, minimal aggregations)
  • Warm data tier nodes

If your cluster performance is poorer than you’d like, then you may want to consider one or more of the following options (a quick sanity check is sketched after this list):

  • Increase the RAM of your nodes, up to a heap size of 32 GB
  • Reduce the disk size on your nodes or add additional data nodes
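
Before resizing, it can help to see which nodes actually fall outside the rough 20x–80x disk/RAM band described above, and how close their JVM heaps already are to the 32 GB bound. A minimal sketch, under the same assumptions as the earlier examples (unauthenticated cluster at http://localhost:9200, Python requests):

    # Minimal sketch: flag nodes outside the rough 20x-80x disk/RAM band discussed
    # above, and show how far each JVM heap is from the 32 GB upper bound.
    # Assumes an unauthenticated cluster at http://localhost:9200 -- adjust URL/auth.
    import requests

    CLUSTER_URL = "http://localhost:9200"  # assumption: your cluster endpoint
    GB = 1024 ** 3

    resp = requests.get(
        f"{CLUSTER_URL}/_cat/nodes",
        params={"h": "name,heap.max,ram.max,disk.total", "format": "json", "bytes": "b"},
        timeout=10,
    )
    resp.raise_for_status()

    for node in resp.json():
        if not node.get("disk.total") or not node.get("ram.max"):
            continue  # skip nodes that don't report disk or RAM
        ratio = int(node["disk.total"]) / int(node["ram.max"])
        heap_gb = int(node["heap.max"]) / GB
        if ratio < 20:
            verdict = "high memory-to-disk ratio (RAM may be reducible)"
        elif ratio > 80:
            verdict = "low memory-to-disk ratio (consider more RAM or more nodes)"
        else:
            verdict = "within the rough 20x-80x band"
        print(f"{node['name']}: disk/RAM={ratio:.0f}x, heap={heap_gb:.1f} GB "
              f"(guide's ceiling: 32 GB) -> {verdict}")

Nodes whose heap is already at 32 GB have no headroom left for the first option, so reducing disk per node or adding data nodes becomes the practical choice.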



Log errors related to this OpenSearch concept


No handler for type type declared on field fieldName
Cannot determine current memory usage due to JDK-8207200
Override handler for allocation id
GetFreePhysicalMemorySize is not available
Exception retrieving free physical memory
GetTotalPhysicalMemorySize is not available
Exception retrieving total physical memory
Transport response handler not found of id
Unable to retrieve max size virtual memory JNACLibrary strerror Native getLastError
Unable to lock JVM memory Failed to set working set size Error code
Unknown error when adding console ctrl handler
Cannot register console handler because JNA is not available
