Updating [max_merge_at_once] from [{}] to [{}] – How to solve this Elasticsearch error

Opster Team

Aug-23, Version: 2.3-2.3

Briefly, this log appears when Elasticsearch updates the maximum number of segments that can be merged at once. This can happen after a configuration change, or when Elasticsearch automatically adjusts the value to keep it consistent with the rest of the merge policy. To control this behavior, you can explicitly set the “index.merge.policy.max_merge_at_once” index setting. You should also monitor your system’s performance to ensure merging is not overloading it, and keep your Elasticsearch version up to date, since older versions contained bugs in this area.
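As a sketch of the manual fix, the merge policy setting can be updated dynamically through the index settings API on a 2.x cluster. The index name `my_index`, the host `localhost:9200` and the value `10` below are illustrative placeholders, not values from the log:

```shell
# Raise max_merge_at_once for an existing index (dynamic setting in 2.x).
# "my_index" and the value 10 are placeholders - adjust for your cluster.
curl -XPUT 'http://localhost:9200/my_index/_settings' -d '{
  "index.merge.policy.max_merge_at_once": 10
}'
```

Note that `max_merge_at_once` is expected to be no larger than `index.merge.policy.segments_per_tier`; if it is, Elasticsearch adjusts the value itself, which is exactly what the `adjustMaxMergeAtOnceIfNeeded` call in the source excerpt below does.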

This guide will help you check for common problems that cause the log “updating [max_merge_at_once] from [{}] to [{}]” to appear. To understand the issues related to this log, read the explanation below about the following Elasticsearch concepts: index, merge and shard.

Log Context

The log “updating [max_merge_at_once] from [{}] to [{}]” is generated in the class MergePolicyConfig.java.
We extracted the following from the Elasticsearch source code for those seeking in-depth context:


        final int oldMaxMergeAtOnce = mergePolicy.getMaxMergeAtOnce();
        int maxMergeAtOnce = settings.getAsInt(INDEX_MERGE_POLICY_MAX_MERGE_AT_ONCE, oldMaxMergeAtOnce);
        if (maxMergeAtOnce != oldMaxMergeAtOnce) {
            logger.info("updating [max_merge_at_once] from [{}] to [{}]", oldMaxMergeAtOnce, maxMergeAtOnce);
            maxMergeAtOnce = adjustMaxMergeAtOnceIfNeeded(maxMergeAtOnce, segmentsPerTier);
        }

        final int oldMaxMergeAtOnceExplicit = mergePolicy.getMaxMergeAtOnceExplicit();
