By setting max_primary_shard_size, the target index will contain [x] shards – How to solve this Elasticsearch error

Opster Team

Aug-23, Version: 7.12-7.15

Briefly, this log appears when the max_primary_shard_size specified for a shrink operation is too low relative to the source index's store size: the shard count computed from it would exceed the number of shards in the source index, which a shrink cannot do, so Elasticsearch falls back to using the source index's shard count. To resolve this, you can raise max_primary_shard_size, specify the target shard count explicitly instead, or reduce the size of the source index (for example by deleting or reindexing data) before shrinking.
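You can estimate the effect of a given max_primary_shard_size before running the shrink by reproducing Elasticsearch's arithmetic: the target shard count is the source index's store size divided by max_primary_shard_size, rounded up. A minimal sketch in Python (the function name and the sample sizes are illustrative, not taken from the Elasticsearch codebase):

```python
def target_shard_count(source_store_bytes, max_primary_shard_size_bytes):
    """Ceiling division: the smallest shard count at which every shard
    stays at or below max_primary_shard_size."""
    shards = source_store_bytes // max_primary_shard_size_bytes
    if shards * max_primary_shard_size_bytes < source_store_bytes:
        shards += 1  # round up when the division leaves a remainder
    return max(shards, 1)

gb = 1024 ** 3
# A 100 GB index with a 30 GB max primary shard size needs 4 shards.
print(target_shard_count(100 * gb, 30 * gb))  # 4
```

If this estimate comes out higher than the source index's current shard count, Elasticsearch will emit the log discussed in this guide and cap the target at the source shard count instead.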

This guide will help you check for common problems that cause the log "By setting max_primary_shard_size to [{}]; the target index [{}] will contain [{}] shards;" to appear. To understand the issues related to this log, read the explanation below about the following Elasticsearch concepts: index, indices, admin.

Log Context

The log "By setting max_primary_shard_size to [{}]; the target index [{}] will contain [{}] shards;" is emitted from the Elasticsearch source code. We extracted the following for those seeking in-depth context:

                    long minShardsNum = sourceIndexStorageBytes / maxPrimaryShardSizeBytes;
                    if (minShardsNum * maxPrimaryShardSizeBytes < sourceIndexStorageBytes) {
                        minShardsNum = minShardsNum + 1;
                    }
                    if (minShardsNum > sourceIndexShardsNum) {
                        logger.info(
                            "By setting max_primary_shard_size to [{}], the target index [{}] will contain [{}] shards,"
                                + " which will be greater than [{}] shards in the source index [{}],"
                                + " using [{}] for the shard count of the target index [{}]",
                            maxPrimaryShardSize.toString(), targetIndexName, minShardsNum, sourceIndexShardsNum,
                            sourceMetadata.getIndex().getName(), sourceIndexShardsNum, targetIndexName);
                        numShards = sourceIndexShardsNum;
                    }
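The branch above can be paraphrased as follows: Elasticsearch rounds the storage-based shard count up, and if that count exceeds the number of shards in the source index, it logs this message and uses the source shard count instead. A hedged Python sketch of that logic (variable and function names are illustrative):

```python
def resolve_num_shards(source_storage_bytes, max_shard_bytes, source_shards):
    """Mirror of the shard-count decision shown in the Java snippet above."""
    min_shards = source_storage_bytes // max_shard_bytes
    if min_shards * max_shard_bytes < source_storage_bytes:
        min_shards += 1  # round up on remainder
    if min_shards > source_shards:
        # This is the case in which Elasticsearch emits the log line:
        # the target cannot have more shards than the source index.
        return source_shards
    return min_shards

gb = 1024 ** 3
# 50 GB index with a 5 GB cap would need 10 shards, but the source
# only has 3, so the target is capped at 3:
print(resolve_num_shards(50 * gb, 5 * gb, 3))  # 3
```

In other words, the log is informational: the shrink still proceeds, just with the source index's shard count rather than the count implied by max_primary_shard_size.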

