Could not open job because no ML nodes with sufficient capacity were found – How to solve this Elasticsearch error

Opster Team

July 2020, Version: 1.7-8.0

Before you begin reading this guide, we recommend you try running the Elasticsearch Error Check-Up, which can resolve issues that cause many errors.

This guide will help you check for common problems that cause the log "Could not open job because no ML nodes with sufficient capacity were found" to appear. It's important to understand the issues related to this log, so to get started, read the general overview on common issues and tips related to the Elasticsearch concept: plugin.

Advanced users might want to skip right to the common problems section in each concept, or try running the Check-Up, which analyzes Elasticsearch to pinpoint the cause of many errors and provides actionable recommendations on how to resolve them (a free tool that requires no installation).

Log Context

The log "Could not open job because no ML nodes with sufficient capacity were found" is generated by the class TransportOpenJobAction.java. We extracted the following from the Elasticsearch source code for those seeking in-depth context:

static ElasticsearchException makeNoSuitableNodesException(Logger logger, String jobId, String explanation) {
    String msg = "Could not open job because no suitable nodes were found, allocation explanation [" + explanation + "]";
    logger.warn("[{}] {}", jobId, msg);
    Exception detail = new IllegalStateException(msg);
    return new ElasticsearchStatusException("Could not open job because no ML nodes with sufficient capacity were found",
        RestStatus.TOO_MANY_REQUESTS, detail);
}

static ElasticsearchException makeAssignmentsNotAllowedException(Logger logger, String jobId) {
    String msg = "Cannot open jobs because persistent task assignment is disabled by the ["
 

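As the snippet shows, the exception is returned with HTTP status 429 (TOO_MANY_REQUESTS): the job could not be assigned because no machine learning node had enough spare memory. A common first step is to confirm that the cluster actually has nodes with the ml role and to check how much memory existing anomaly detection jobs already consume. The sketch below is only an illustration, not part of the Elasticsearch source above: it assumes a local cluster reachable at localhost:9200 and uses the Elasticsearch low-level Java REST client to list node roles and fetch anomaly detection job stats.

import org.apache.http.HttpHost;
import org.apache.http.util.EntityUtils;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

public class MlCapacityCheck {
    public static void main(String[] args) throws Exception {
        // Assumes a cluster running locally on port 9200; adjust the host as needed.
        try (RestClient client = RestClient.builder(
                new HttpHost("localhost", 9200, "http")).build()) {

            // List nodes with their roles and memory usage; ML jobs can only be
            // assigned to nodes that carry the "ml" role.
            Response nodes = client.performRequest(
                new Request("GET", "/_cat/nodes?v&h=name,node.role,heap.percent,ram.percent"));
            System.out.println(EntityUtils.toString(nodes.getEntity()));

            // Inspect the state and memory usage of existing anomaly detection jobs.
            // Closing unused jobs frees ML memory so that new jobs can be assigned.
            Response jobStats = client.performRequest(
                new Request("GET", "/_ml/anomaly_detectors/_stats"));
            System.out.println(EntityUtils.toString(jobStats.getEntity()));
        }
    }
}

If the output shows that all ML nodes are already at capacity, typical remedies include closing jobs that are no longer needed, adding nodes with the ml role, or giving ML more headroom, for example by reviewing memory-related ML settings such as xpack.ml.max_machine_learning_memory_percent.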
Run the Check-Up to get a customized report.
