Could not open job because no ML nodes with sufficient capacity were found – How to solve this Elasticsearch exception

Opster Team

August 2023, Versions: 6.8-7.15

Briefly, this error occurs when Elasticsearch Machine Learning (ML) jobs are unable to start due to insufficient capacity on ML nodes. This could be due to high resource usage or limited memory. To resolve this, you can either increase the memory capacity of your existing ML nodes, add more ML nodes to your cluster, or reduce the memory requirement of your ML jobs. Also, ensure that your ML nodes are properly configured and are not experiencing any network or hardware issues.
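Before resizing anything, it helps to see what capacity the cluster currently reports. A minimal sketch using standard Elasticsearch APIs (host and output values will vary by cluster; run these from Kibana Dev Tools or via curl):

```
# Show ML defaults and limits, e.g. max_model_memory_limit if one is configured
GET _ml/info

# List nodes with their roles and memory pressure; ML-capable nodes carry the "ml" role
GET _cat/nodes?v&h=name,node.role,heap.percent,ram.percent
```

If no node lists the ml role, or the ML nodes show consistently high memory usage, that points to the capacity problem described above.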

This guide will help you check for common problems that cause the log “Could not open job because no ML nodes with sufficient capacity were found” to appear. To understand the issues related to this log, read the explanation below about the following Elasticsearch concepts: plugin, task.

Log Context

The log “Could not open job because no ML nodes with sufficient capacity were found” is generated in the class OpenJobPersistentTasksExecutor.java. We extracted the following from the Elasticsearch source code for those seeking in-depth context:

 static ElasticsearchException makeNoSuitableNodesException(Logger logger, String jobId, String explanation) {
     String msg = "Could not open job because no suitable nodes were found, allocation explanation [" + explanation + "]";
     logger.warn("[{}] {}", jobId, msg);
     Exception detail = new IllegalStateException(msg);
     return new ElasticsearchStatusException("Could not open job because no ML nodes with sufficient capacity were found",
         RestStatus.TOO_MANY_REQUESTS, detail);
 }

 static ElasticsearchException makeAssignmentsNotAllowedException(Logger logger, String jobId) {
     String msg = "Cannot open jobs because persistent task assignment is disabled by the ["

 
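If adding memory or nodes is not an option, the job's own memory requirement can be lowered by reducing its model_memory_limit via the anomaly detection job update API. A minimal sketch (the job name my_job and the 256mb value are illustrative; in most versions the job must be closed before this setting can be reduced):

```
# Close the job so analysis_limits can be updated
POST _ml/anomaly_detectors/my_job/_close

# Reduce the memory the job is allowed to request
POST _ml/anomaly_detectors/my_job/_update
{
  "analysis_limits": {
    "model_memory_limit": "256mb"
  }
}

# Reopen the job; allocation should now succeed if a node has 256mb of ML capacity free
POST _ml/anomaly_detectors/my_job/_open
```

Note that setting the limit too low can cause the job to hit its memory limit during analysis, so lower it gradually rather than in one large step.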
