Overview
Memory is one of the most critical resources to monitor in Elasticsearch. Elasticsearch runs on the JVM and uses heap memory for the query cache, the request cache, accessing Lucene segments, and storing fielddata for aggregations and sorting.
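To see how much memory these areas are actually consuming, you can query the node stats API. The sketch below is a minimal example using the Python requests library; it assumes an unsecured cluster reachable at http://localhost:9200, so adjust the URL and authentication for your environment.

```python
import requests

# Minimal sketch: list the main heap consumers per node via the node stats API.
# Assumes a local, unsecured cluster at http://localhost:9200.
ES_URL = "http://localhost:9200"

resp = requests.get(f"{ES_URL}/_nodes/stats/indices")
resp.raise_for_status()

for node_id, node in resp.json()["nodes"].items():
    indices = node["indices"]
    print(f"node: {node.get('name', node_id)}")
    print("  fielddata:    ", indices["fielddata"]["memory_size_in_bytes"], "bytes")
    print("  query cache:  ", indices["query_cache"]["memory_size_in_bytes"], "bytes")
    print("  request cache:", indices["request_cache"]["memory_size_in_bytes"], "bytes")
    # On recent versions most segment data lives off-heap, so this may report 0.
    print("  segments:     ", indices["segments"].get("memory_in_bytes", 0), "bytes")
```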
Common problems and important points
- The most common error in Elasticsearch is the OutOfMemoryError, which occurs when a node cannot obtain the heap space it needs. To avoid it, closely monitor heap utilization and garbage collector performance (a monitoring sketch follows this list).
- Current best practice is to allocate no more than 50 percent of total RAM to the JVM heap. From Elasticsearch version 5.x onward, the heap is set with the -Xms and -Xmx parameters in the jvm.options configuration file; both the minimum and maximum default to 1 GB.
- In any case, the heap size should not exceed roughly 31 GB; above that threshold the JVM can no longer use compressed object pointers and garbage collection performance degrades.
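The following sketch checks each node against the guidance above: heap at most 50 percent of RAM, heap below roughly 31 GB, and old-generation GC time as a rough health signal. It uses the Python requests library and the _nodes/stats API, and again assumes an unsecured local cluster at http://localhost:9200; treat the thresholds and URL as placeholders for your own setup.

```python
import requests

# Minimal sketch: compare each node's heap settings and utilization against
# the 50%-of-RAM and ~31 GB guidelines, and print old-generation GC time.
ES_URL = "http://localhost:9200"
COMPRESSED_OOPS_LIMIT = 31 * 1024 ** 3  # ~31 GB, above which compressed oops are lost

resp = requests.get(f"{ES_URL}/_nodes/stats/jvm,os")
resp.raise_for_status()

for node_id, node in resp.json()["nodes"].items():
    heap_max = node["jvm"]["mem"]["heap_max_in_bytes"]
    heap_used_pct = node["jvm"]["mem"]["heap_used_percent"]
    ram_total = node["os"]["mem"]["total_in_bytes"]
    old_gc_ms = node["jvm"]["gc"]["collectors"]["old"]["collection_time_in_millis"]

    print(f"node: {node.get('name', node_id)}")
    print(f"  heap used: {heap_used_pct}%  heap max: {heap_max / 1024**3:.1f} GB")
    print(f"  old-generation GC time since start: {old_gc_ms} ms")
    if heap_max > ram_total / 2:
        print("  WARNING: heap is larger than 50% of total RAM")
    if heap_max > COMPRESSED_OOPS_LIMIT:
        print("  WARNING: heap exceeds ~31 GB (compressed object pointers disabled)")
```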
Log errors related to this ES concept