Could not open job because no ML nodes with sufficient capacity were found – How to solve this Elasticsearch error

Opster Team

July 2020, Version: 1.7-8.0

Before you begin reading this guide, we recommend running the Elasticsearch Error Check-Up, which analyzes two JSON files to detect many configuration errors.

To easily locate the root cause and resolve this issue, try AutoOps for Elasticsearch & OpenSearch. It diagnoses problems by analyzing hundreds of metrics collected by a lightweight agent and offers guidance for resolving them.

This guide will help you check for common problems that cause the log "Could not open job because no ML nodes with sufficient capacity were found" to appear. To understand the issues related to this log, read the explanation below about the following Elasticsearch concept: plugin.
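This error means the cluster has no machine learning (ml) node with enough free capacity to accept the job. A useful first step is to confirm which nodes actually carry the ml role and what the ML capacity settings are. A minimal sketch, assuming a cluster reachable on localhost:9200 with default X-Pack security disabled:

```shell
# List nodes with their roles; ML-capable nodes show an "m" in node.role
curl -s "localhost:9200/_cat/nodes?v&h=name,node.role,heap.max"

# Inspect the ML capacity settings that govern job allocation
curl -s "localhost:9200/_cluster/settings?include_defaults=true&flat_settings=true" |
  grep -E '"xpack\.ml\.(max_open_jobs|max_machine_memory_percent|node_concurrent_job_allocations)"'
```

If no node shows the ml role, or `xpack.ml.max_open_jobs` is already reached on every ML node, the job cannot be allocated; either add ML capacity or close unused jobs before opening new ones.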


Log Context

The log "Could not open job because no ML nodes with sufficient capacity were found" is raised in TransportOpenJobAction.java. We extracted the following from the Elasticsearch source code for those seeking in-depth context:

static ElasticsearchException makeNoSuitableNodesException(Logger logger, String jobId, String explanation) {
    String msg = "Could not open job because no suitable nodes were found, allocation explanation [" + explanation + "]";
    logger.warn("[{}] {}", jobId, msg);
    Exception detail = new IllegalStateException(msg);
    return new ElasticsearchStatusException("Could not open job because no ML nodes with sufficient capacity were found",
        RestStatus.TOO_MANY_REQUESTS, detail);
}

static ElasticsearchException makeAssignmentsNotAllowedException(Logger logger, String jobId) {
    String msg = "Cannot open jobs because persistent task assignment is disabled by the ["
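As the snippet shows, the exception is returned with HTTP status 429 (TOO_MANY_REQUESTS), which signals a transient condition: a client may retry the open-job request once ML capacity frees up. A minimal retry sketch; the request command and job name below are hypothetical placeholders, not part of the code above:

```shell
# open_job_with_retry: retries a request up to 3 times while it returns 429.
# $1 is the name of a command/function that prints the HTTP status code.
open_job_with_retry() {
  for attempt in 1 2 3; do
    status=$("$1")
    if [ "$status" != "429" ]; then
      echo "$status"
      return 0
    fi
    sleep "$attempt"   # linear backoff before the next attempt
  done
  echo "$status"
  return 1
}

# Real usage would wrap curl against the ML open-job endpoint, e.g.:
# do_open() { curl -s -o /dev/null -w "%{http_code}" -X POST \
#   "localhost:9200/_ml/anomaly_detectors/my_job/_open"; }
# open_job_with_retry do_open
```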

 
