Max running job capacity localMaxAllowedRunningJobs reached – How to solve this Elasticsearch exception

Opster Team

August 2023, Version: 7.2-7.15

Briefly, this error occurs when the number of concurrently running machine learning jobs on an Elasticsearch node reaches the configured maximum. This limit exists to prevent overloading the node. To resolve the issue, you can raise the per-node limit (the `xpack.ml.max_open_jobs` node setting), or manage your jobs better by closing jobs that are no longer needed before opening new ones. Also consider optimizing your jobs to run more efficiently, thus freeing up capacity.
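As an illustration of those steps, the jobs consuming capacity can be inspected and closed through the machine learning APIs, and the per-node limit can be raised via the `xpack.ml.max_open_jobs` node setting. The host, job ID, and limit value below are placeholders; adjust them for your cluster:

```
# List anomaly detection jobs and their states to see what is consuming capacity
curl -s "localhost:9200/_ml/anomaly_detectors/_stats?pretty"

# Close a job that is no longer needed ("my_job" is a placeholder job ID)
curl -s -X POST "localhost:9200/_ml/anomaly_detectors/my_job/_close"

# Raise the per-node limit in elasticsearch.yml (a static node setting,
# so it requires a node restart); choose a value that matches the node's
# CPU and memory headroom:
#
#   xpack.ml.max_open_jobs: 40
```

Closing idle jobs is usually preferable to raising the limit, since each open job holds memory and threads on the node even when it is not receiving data.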

This guide will help you check for common problems that cause the log "max running job capacity [" + localMaxAllowedRunningJobs + "] reached" to appear. To understand the issues related to this log, read the explanation below about the following Elasticsearch concept: plugin.

Log Context

For those seeking in-depth context on the log "max running job capacity [" + localMaxAllowedRunningJobs + "] reached", we extracted the following from the Elasticsearch source code:

 // Closing jobs can still be using some or all threads in MachineLearning.JOB_COMMS_THREAD_POOL_NAME
 // that an open job uses; so include them too when considering if enough threads are available.
 int currentRunningJobs = processByAllocation.size();
 // TODO: in future this will also need to consider jobs that are not anomaly detector jobs
 if (currentRunningJobs > localMaxAllowedRunningJobs) {
     throw new ElasticsearchStatusException("max running job capacity [" + localMaxAllowedRunningJobs + "] reached");
 }
 String jobId = jobTask.getJobId();
 notifyLoadingSnapshot(jobId, autodetectParams);

