Before you begin reading this guide, we recommend you try running the Elasticsearch Error Check-Up, which can resolve the issues that cause many of these errors.
This guide will help you check for common problems that cause the log "New used memory {} [{}] for data of [{}] would be larger than configured breaker: {} [{}]; breaking" to appear. It's important to understand the issues related to this log, so to get started, read the general overview of common issues and tips related to the Elasticsearch concepts: circuit breaker and memory.
Advanced users might want to skip right to the common problems section in each concept, or try running the Check-Up, which analyzes your Elasticsearch deployment to pinpoint the cause of many errors and provides actionable recommendations on how to resolve them (a free tool that requires no installation).
Breaker in Elasticsearch
Elasticsearch has the concept of circuit breakers to deal with OutOfMemory errors that cause nodes to crash. When a request reaches an Elasticsearch node, the circuit breakers first estimate the amount of memory needed to load the required data. They then compare the estimated size with the configured limit, which is defined as a percentage of the heap. If the estimated size exceeds the limit, the request is terminated and an exception is thrown, so the node never tries to load more data than the available heap can hold.
What it is used for:
Elasticsearch has several circuit breakers, such as the fielddata, request, in-flight requests (network), accounting and script compilation breakers. Each breaker limits the memory a particular type of operation can use. In addition, Elasticsearch has a parent circuit breaker, which limits the combined memory used by all the other circuit breakers.
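To see how much memory each breaker is currently tracking, how often it has tripped and what its configured limit is, you can query the node stats API (a standard Elasticsearch endpoint; the human flag simply formats the byte values for readability):
GET /_nodes/stats/breaker?human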
Examples:
Increasing the circuit breaker limit for fielddata – The default limit for the fielddata breaker is 40% of the heap. The following command can be used to increase it to 60%:
PUT /_cluster/settings
{
  "persistent": {
    "indices.breaker.fielddata.limit": "60%"
  }
}
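You can then verify that the new limit was applied by retrieving the persistent cluster settings, for example:
GET /_cluster/settings?filter_path=persistent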
Notes:
- Each breaker ships with a default limit, which can be modified. However, these are expert-level settings, and you should understand the pitfalls carefully before changing the limits; otherwise the node may start throwing OOM exceptions.
- Sometimes it is better to fail a query than to hit an OOM exception, because when an OOM error occurs the JVM can become unresponsive.
- It is important to keep indices.breaker.request.limit lower than indices.breaker.total.limit, so that the request circuit breaker trips before the total (parent) circuit breaker, as shown in the example below.
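For example, both limits can be adjusted together so that the request breaker stays below the parent breaker (the 60% and 70% values here are only illustrative; choose limits that fit your own workload and heap size):
PUT /_cluster/settings
{
  "persistent": {
    "indices.breaker.request.limit": "60%",
    "indices.breaker.total.limit": "70%"
  }
}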
Common problems:
- The most common error resulting from circuit breakers is "Data too large", returned with a 429 (Too Many Requests) status code. The application should be ready to handle such exceptions.
- If the application starts throwing exceptions because of circuit breaker limits, it is important to review the queries and their memory requirements. In most cases, scaling the cluster by adding more resources is required; reviewing the currently configured limits, as shown below, is a good first step.
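To review the breaker limits currently in effect, including the defaults, you can query the cluster settings (both query parameters below are standard; flat_settings just returns the keys in dotted form, so the breaker settings appear under keys beginning with indices.breaker):
GET /_cluster/settings?include_defaults=true&flat_settings=true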
Overview
Memory is one of the most critical resources to monitor in Elasticsearch. Elasticsearch runs on the JVM and uses heap memory for the query cache, request cache, access to Lucene segments, and fielddata for aggregations and sorting.
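Heap usage and garbage collection statistics can be monitored per node with the node stats API (a standard endpoint; the human flag only formats the values):
GET /_nodes/stats/jvm?human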
Common problems and important points
- The most common error that arises in Elasticsearch is the OutOfMemory error. This error occurs when the node cannot allocate the heap space it needs. To avoid it, you need to closely monitor heap utilization and garbage collector performance.
- As per the most up-to-date best practices, you should not allocate more than 50 percent of total RAM to the JVM heap. From Elasticsearch version 5.x onward, this can be set using the -Xms and -Xmx parameters in the jvm.options configuration file (see the example after this list). The defaults are 1 GB for both minimum and maximum heap size.
- In any case, the heap size should not be set above approximately 31 GB; beyond that point the JVM can no longer use compressed object pointers and garbage collection performance degrades.
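A minimal jvm.options sketch, assuming a machine with 8 GB of RAM (the 4 GB value is only illustrative; keep -Xms and -Xmx equal, at no more than 50% of RAM and below ~31 GB):
# jvm.options
-Xms4g
-Xmx4g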
Log Context
The log "[{}] New used memory {} [{}] for data of [{}] would be larger than configured breaker: {} [{}]; breaking" is generated in ChildMemoryCircuitBreaker.java.
We extracted the following from the Elasticsearch source code for those seeking in-depth context:
        new ByteSizeValue(bytes), label, new ByteSizeValue(newUsed),
        memoryBytesLimit, new ByteSizeValue(memoryBytesLimit),
        newUsedWithOverhead, new ByteSizeValue(newUsedWithOverhead));
}
if (memoryBytesLimit > 0 && newUsedWithOverhead > memoryBytesLimit) {
    logger.warn("[{}] New used memory {} [{}] for data of [{}] would be larger than configured breaker: {} [{}]; breaking",
        this.name, newUsedWithOverhead, new ByteSizeValue(newUsedWithOverhead), label,
        memoryBytesLimit, new ByteSizeValue(memoryBytesLimit));
    circuitBreak(label, newUsedWithOverhead);
}