Elasticsearch Memory Usage Guide

Opster Team

March 2021


In addition to reading this guide, we recommend you run the Elasticsearch Health Check-Up. It will detect issues and improve your Elasticsearch performance by analyzing your shard sizes, threadpools, memory, snapshots, disk watermarks and more.

The Elasticsearch Check-Up is free and requires no installation.

You can avoid memory usage issues in Elasticsearch by running the Elasticsearch Health Check-Up. It will help check for many configuration errors that are related to memory usage in your cluster and offer overall improvements and suggestions to your setup.

Run the Elasticsearch check-up to receive recommendations like this:

The high disk watermark threshold is about to be reached in specific nodes

Description

There are various “watermark” thresholds on each Elasticsearch cluster. When the high disk watermark threshold has been exceeded, it means disk space is running out. The node will…

Recommendation

Based on your specific ES deployment you should…

curl -XPUT -H [a customized code snippet to resolve the issue]

Elasticsearch memory requirements

The Elasticsearch process is very memory intensive. Elasticsearch runs on the JVM (Java Virtual Machine), and close to 50% of the memory available on a node should be allocated to the JVM heap, which Elasticsearch and Lucene use for internal data structures such as the information needed to locate index values on disk. The other 50% should be left for the file system cache, which keeps regularly accessed data in memory.

For a full explanation of JVM management, please see: Heap Size Usage and JVM Garbage Collection in ES – A Detailed Guide. 

It is also common to receive warnings from the different types of circuit breakers. This is discussed here: How to Handle Circuit Breaker Exceptions in Elasticsearch.
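As a quick check, the node stats API also exposes the current state of each circuit breaker. The request below is illustrative, and the breakers and limits you see will depend on your version and configuration; the excerpt shows only the parent breaker, with example values:

GET _nodes/stats/breaker

"breakers" : {
  "parent" : {
    "limit_size_in_bytes" : 1020054732,
    "limit_size" : "972.7mb",
    "estimated_size_in_bytes" : 258714272,
    "estimated_size" : "246.7mb",
    "overhead" : 1.0,
    "tripped" : 0
  },
  ...
}

A non-zero tripped count means that breaker has rejected requests since the node started.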

jvm.mem

The most important memory section is the JVM heap. 

GET _nodes/stats/jvm

The output could look like this:

"jvm" : {
    	"timestamp" : 1603351829573,
    	"uptime_in_millis" : 150932107,
    	"mem" : {
      	"heap_used_in_bytes" : 258714272,
      	"heap_used_percent" : 24,
      	"heap_committed_in_bytes" : 1073741824,
      	"heap_max_in_bytes" : 1073741824,
      	"non_heap_used_in_bytes" : 192365488,

      	"non_heap_committed_in_bytes" : 209186816,

Note that heap_used_in_bytes in a healthy JVM will follow a sawtooth pattern due to the garbage collection process: it increases steadily up to around 70%, then drops sharply back to around 30% when garbage collection occurs.
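A quick way to keep an eye on heap usage across the cluster is the cat nodes API; the request below is just one possible selection of columns:

GET _cat/nodes?v&h=name,heap.percent,heap.current,heap.max

A node whose heap.percent stays pinned above roughly 85% without ever dropping back down is usually a sign that garbage collection cannot keep up.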

The JVM heap_max will depend on the value set in the jvm.options file, and you should set it to 50% of the RAM available to your container or server.
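As an illustration, on a server or container with 8GB of RAM available to Elasticsearch, the heap settings in jvm.options (or in a file under jvm.options.d/ on recent versions) might look like this:

# Set minimum and maximum heap to the same value, roughly 50% of available RAM
-Xms4g
-Xmx4g

Keep -Xms and -Xmx equal to avoid heap resizing pauses, and keep the heap below around 31GB so the JVM can still use compressed object pointers.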

For more information on JVM heap issues, please see: Heap Size Usage and JVM Garbage Collection in ES – A Detailed Guide. 

Explaining the different types of memory statistics

When looking at the memory statistics, we need to be aware that many Elasticsearch clusters run inside containers on much larger machines. This is typical if you are using a hosted service such as the AWS Elasticsearch Service or Elastic Cloud, or if you are running Elasticsearch on Docker or Kubernetes. In such cases, it’s important to be careful when interpreting the memory statistics available to us.

There are various memory statistics available from the Elasticsearch monitoring APIs, as explained below:

GET _nodes/stats/os

 	"os" : {
    	"timestamp" : 1603350306857,
    	"cpu" : {
      	"percent" : 13,
      	"load_average" : {
        	"1m" : 3.37,
        	"5m" : 3.18,
        	"15m" : 2.8
      	}
    	},
    	"mem" : {
      	"total_in_bytes" : 16703369216,
      	"free_in_bytes" : 361205760,
      	"used_in_bytes" : 16342163456,
      	"free_percent" : 2,
      	"used_percent" : 98
    	},
    	"swap" : {
      	"total_in_bytes" : 1023406080,
      	"free_in_bytes" : 1302528,
      	"used_in_bytes" : 1022103552
    	},
    	"cgroup" : {
      	"cpuacct" : {
        	"control_group" : "/",
        	"usage_nanos" : 2669636420088
      	},
      	"cpu" : {
        	"control_group" : "/",
        	"cfs_period_micros" : 100000,
        	"cfs_quota_micros" : -1,
        	"stat" : {
          	"number_of_elapsed_periods" : 0,
          	"number_of_times_throttled" : 0,
          	"time_throttled_nanos" : 0
        	}
      	},
      	"memory" : {
        	"control_group" : "/",
        	"limit_in_bytes" : "9223372036854771712",
        	"usage_in_bytes" : "4525641728"
      	}
    	}
  	}

The above statistics are for a development node running on Docker. Let’s interpret each of the sections we received here:

os.mem

The first section, “mem”, refers to the host machine on which the container is running. In this case we are running on Docker, so the 16GB refers to the memory of the host machine rather than the container. Note that it is quite normal for a machine to be using close to 100% of its memory, and this on its own does not indicate a problem.
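The same operating-system memory figure also feeds the ram.percent column of the cat nodes API, so a value close to 100 there is not in itself a cause for alarm. An illustrative request:

GET _cat/nodes?v&h=name,ram.percent,ram.current,ram.max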

os.swap

The swap section also refers to the host machine. In this case, we can see that the host machine has swapping enabled. This is quite normal when running a container inside a host that still allows swapping. We can double-check that the Elasticsearch process inside the Docker container is not allowed to swap by running:

GET _nodes?filter_path=**.mlockall
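A sketch of the kind of response this returns (the node ID below is just a placeholder); mlockall will only be true if bootstrap.memory_lock: true is set in elasticsearch.yml and the node is actually permitted to lock memory:

{
  "nodes" : {
    "g3VN1XyFQk2pZ9a-example" : {
      "process" : {
        "mlockall" : true
      }
    }
  }
}

If it comes back false, check the bootstrap.memory_lock setting and the memlock ulimit available to the Elasticsearch process.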

os.cgroup

Finally, we can see the cgroup section, which refers to the container itself. In this case the container is using about 4.5GB of memory (usage_in_bytes), and the limit_in_bytes value of 9223372036854771712 indicates that no explicit memory limit has been set on the cgroup.
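If you only want the container-level figures, you can narrow the response with filter_path; this request is illustrative rather than required:

GET _nodes/stats/os?filter_path=nodes.*.os.cgroup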

process.mem

We also have access to statistics on the process’s virtual memory:

GET _nodes/stats/process

  	"process" : {
    	"timestamp" : 1603352751181,
    	"open_file_descriptors" : 436,
    	"max_file_descriptors" : 1048576,
    	"cpu" : {
      	"percent" : 0,
      	"total_in_millis" : 1964850
    	},
    	"mem" : {
      	"total_virtual_in_bytes" : 5035376640
    	}

Note that total_virtual_in_bytes includes the entire virtual memory available to the process, including memory mapped via mmap.


