Attempting to trigger G1GC due to high heap usage – How to solve this OpenSearch error

Opster Team

Aug-23, Version: 1-2.9

Before you dig into reading this guide, have you tried asking OpsGPT what this log means? You’ll receive a customized analysis of your log.

Try OpsGPT now for step-by-step guidance and tailored insights into your OpenSearch operation.

Briefly, this error occurs when OpenSearch is trying to trigger the Garbage First Garbage Collector (G1GC) due to high heap usage. This means that the JVM heap space is almost full, which can lead to performance issues or even crashes. To resolve this issue, you can increase the heap size if your server has enough memory. Alternatively, you can optimize your queries or indices to reduce memory usage. Also, consider checking for memory leaks in your application or using a tool to analyze heap usage.
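If the node's host has spare memory, the heap can be raised in OpenSearch's jvm.options file. The values below are illustrative, not a recommendation for your cluster: common guidance is to set Xms equal to Xmx, keep the heap at no more than about 50% of the machine's RAM, and stay below roughly 32 GB so the JVM retains compressed object pointers.

```
# config/jvm.options (illustrative sizes -- adjust to your hardware)
-Xms8g
-Xmx8g
```

After changing the heap size, restart the node for the new settings to take effect.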

For a complete solution to your search operation, try AutoOps for Elasticsearch & OpenSearch for free. With AutoOps and Opster’s proactive support, you don’t have to worry about your search operation – we take charge of it. Get improved performance & stability with less hardware.

This guide will help you check for common problems that cause the log “attempting to trigger G1GC due to high heap usage [{}]” to appear. To understand the issues related to this log, read the explanation below about the following OpenSearch concepts: breaker, indices.

Log Context

The log “attempting to trigger G1GC due to high heap usage [{}]” is emitted from the class HierarchyCircuitBreakerService.java.
We extracted the following from the OpenSearch source code for those seeking in-depth context:

                    long begin = timeSupplier.getAsLong();
                    leader = begin >= lastCheckTime + minimumInterval;
                    overLimitTriggered(leader);
                    if (leader) {
                        long initialCollectionCount = gcCountSupplier.getAsLong();
                        logger.info("attempting to trigger G1GC due to high heap usage [{}]", memoryUsed.baseUsage);
                        long localBlackHole = 0;
                        // number of allocations, corresponding to (approximately) number of free regions + 1
                        int allocationCount = Math.toIntExact((maxHeap - memoryUsed.baseUsage) / g1RegionSize + 1);
                        // allocations of half-region size become a single humongous alloc, thus taking up a full region.
                        int allocationSize = (int) (g1RegionSize >> 1);
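To make the allocation arithmetic in the excerpt concrete, here is a small standalone sketch that plugs in hypothetical numbers (a 4 GiB max heap, 4 MiB G1 regions, roughly 95% heap used). The class name and all values are assumptions for demonstration, not OpenSearch code:

```java
// Illustrative sketch of the circuit breaker's G1 arithmetic above.
// Hypothetical inputs: 4 GiB max heap, 4 MiB G1 regions, ~95% heap used.
public class G1TriggerMath {

    // One allocation per (approximately) free region, plus one.
    static int allocationCount(long maxHeap, long baseUsage, long g1RegionSize) {
        return Math.toIntExact((maxHeap - baseUsage) / g1RegionSize + 1);
    }

    // Half-region allocations are "humongous" in G1, so each occupies a full region.
    static int allocationSize(long g1RegionSize) {
        return (int) (g1RegionSize >> 1);
    }

    public static void main(String[] args) {
        long maxHeap = 4L * 1024 * 1024 * 1024; // 4 GiB
        long g1RegionSize = 4L * 1024 * 1024;   // 4 MiB
        long baseUsage = maxHeap * 95 / 100;    // ~95% of the heap in use

        System.out.println(allocationCount(maxHeap, baseUsage, g1RegionSize)); // 52
        System.out.println(allocationSize(g1RegionSize)); // 2097152 bytes = 2 MiB
    }
}
```

With these numbers, about 51 regions remain free, so the breaker would make 52 half-region (2 MiB) allocations, each consuming a full region, to coax G1 into a collection.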

