Can't collect more than – How to solve this Elasticsearch exception

Opster Team

August-23, Version: 7.9-7.15

Briefly, this error occurs when an aggregation attempts to collect more buckets than Elasticsearch can address in a single request (more than Integer.MAX_VALUE, i.e. 2,147,483,647 buckets). This is usually caused by aggregating on high-cardinality fields or by deeply nested sub-aggregations, where bucket counts multiply at each level. To resolve this issue, reduce the number of buckets your query produces: lower the size of terms aggregations, use coarser intervals in date_histogram aggregations, remove unnecessary levels of nesting, or paginate the results with a composite aggregation.
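
As a rough sketch of the pagination approach (the index, field, and aggregation names below are hypothetical), a composite aggregation walks through buckets in fixed-size pages instead of collecting them all in one request:

GET my-index/_search
{
  "size": 0,
  "aggs": {
    "paged_users": {
      "composite": {
        "size": 1000,
        "sources": [
          { "user": { "terms": { "field": "user_id" } } }
        ]
      }
    }
  }
}

Each response includes an after_key; passing it back as the "after" parameter of the next request retrieves the following page, so no single request has to materialize the full bucket set.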

This guide will help you check for common problems that cause the log "Can't collect more than [" to appear. To understand the issues related to this log, read the explanation below about the following Elasticsearch concepts: aggregations and search.

Log Context

The log "Can't collect more than [" is thrown by the class BucketsAggregator.java. We extracted the following from the Elasticsearch source code for those seeking in-depth context:

    final long bucketCount = bucketOrds.bucketsInOrd(owningBucketOrds[ordIdx]);
    bucketsInOrd[ordIdx] = (int) bucketCount;
    totalOrdsToCollect += bucketCount;
}
if (totalOrdsToCollect > Integer.MAX_VALUE) {
    throw new AggregationExecutionException("Can't collect more than [" + Integer.MAX_VALUE
        + "] buckets but attempted [" + totalOrdsToCollect + "]");
}
long[] bucketOrdsToCollect = new long[(int) totalOrdsToCollect];
int b = 0;
for (int ordIdx = 0; ordIdx < owningBucketOrds.length; ordIdx++) {

 

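Note that the Integer.MAX_VALUE check in the snippet above is a hard limit that cannot be raised. In practice, the softer search.max_buckets cluster setting usually trips first (with a too_many_buckets_exception). That setting is dynamic; as a sketch only (the value below is illustrative, and raising it increases memory pressure during the reduce phase), it can be adjusted like this:

PUT _cluster/settings
{
  "persistent": {
    "search.max_buckets": 100000
  }
}

Raising the soft limit only postpones the problem; reducing the number of buckets a query produces is the more sustainable fix.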