Before you begin reading this guide, we recommend you run the Elasticsearch Error Check-Up, which can resolve issues that cause many errors.
This guide will help you check for common problems that cause the log "Falling back to allocating job by job counts because a memory requirement refresh could not be scheduled" to appear. It's important to understand the issues related to the log, so to get started, read the general overview of common issues and tips related to the Elasticsearch concepts involved: memory, plugins and refresh.
Advanced users might want to skip right to the common problems section in each concept, or try running the Check-Up, which analyzes your Elasticsearch deployment to pinpoint the cause of many errors and provides actionable recommendations on how to resolve them (a free tool that requires no installation).
Overview: memory
Memory is one of the most critical resources to monitor in Elasticsearch. Elasticsearch runs on the JVM and uses heap memory for the query cache, the request cache, accessing Lucene segments, and storing fielddata for aggregations and sorting.
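For example, you can get a quick view of heap utilization across the cluster with the cat nodes API (a minimal sketch; the columns shown are just one reasonable selection):
GET _cat/nodes?v&h=name,heap.percent,heap.max,ram.percent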
Common problems and important points
- The most common error that arises in Elasticsearch is the OutOfMemoryError. It occurs when a node cannot allocate the heap space it requires. To avoid it, you need to closely monitor heap utilization and garbage collector performance.
- As per current best practices, you should not allocate more than 50 percent of total RAM to the JVM heap. From Elasticsearch version 5.x onward, the heap is set using the -Xms and -Xmx parameters in the jvm.options configuration file (see the sketch after this list). The defaults are 1 GB for both minimum and maximum heap size.
- The heap size should not be set above ~31 GB in any case, so that the JVM can keep using compressed object pointers; beyond that threshold garbage collection performance degrades.
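As an illustration, the relevant jvm.options entries might look like this (a minimal sketch assuming a machine with 16 GB of RAM; minimum and maximum should be set to the same value):
# jvm.options
-Xms8g
-Xmx8g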
Overview: plugins
A plugin is used to extend the core functionality of Elasticsearch. Elasticsearch provides some core plugins as part of its release installation. In addition to those core plugins, you can write your own custom plugins, and there are several community plugins available on GitHub for various use cases.
Examples
Get usage instructions for the elasticsearch-plugin script
sudo bin/elasticsearch-plugin -h
Installing the S3 plugin for storing Elasticsearch snapshots on S3
sudo bin/elasticsearch-plugin install repository-s3
Removing a plugin
sudo bin/elasticsearch-plugin remove repository-s3
Installing a plugin using the file’s path
sudo bin/elasticsearch-plugin install file:///path/to/plugin.zip
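Listing the plugins currently installed (useful for verifying that a plugin is present on every node)
sudo bin/elasticsearch-plugin list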
Notes and good things to know
- Plugins are installed and removed using the elasticsearch-plugin script, which ships with Elasticsearch and can be found in the bin/ directory of the installation path.
- A plugin has to be installed on every node of the cluster, and each node has to be restarted to make the plugin visible.
- You can also download the plugin manually and then install it using the elasticsearch-plugin install command, providing the file name/path of the plugin’s source file.
- When a plugin is removed, you will need to restart every Elasticsearch node in order to complete the removal process.
Common issues
- Permission issues during and after plugin installation are the most common problem. If Elasticsearch was installed using the deb or rpm package, the plugin has to be installed as the root user; otherwise, install the plugin as the user that owns all of the Elasticsearch files.
- In the case of a deb or rpm package installation, it is important to check the permissions of the plugins directory after plugin installation, and to restore them if they have been modified, using the following command:
chown -R elasticsearch:elasticsearch path_to_plugin_directory
- If your Elasticsearch nodes are running in a private subnet without internet access, you cannot install a plugin directly from a repository. In this case, you can download the plugin files in advance and install them from a local file on every node (see the sketch after this list). The node has to be restarted in this case as well.
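As an illustration, an offline installation might look like the following sketch (the plugin, version, paths and hostname are assumptions; official plugin archives follow this URL pattern):
wget https://artifacts.elastic.co/downloads/elasticsearch-plugins/repository-s3/repository-s3-7.10.2.zip
scp repository-s3-7.10.2.zip node1:/tmp/
ssh node1 'sudo /usr/share/elasticsearch/bin/elasticsearch-plugin install file:///tmp/repository-s3-7.10.2.zip && sudo systemctl restart elasticsearch'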
Overview: refresh
When indexing data, Elasticsearch requires a “refresh” operation to make indexed information available for search. This means that there is a time delay between indexing and the updated information actually becoming available for the client applications.
How it works
Index operations initially occur in memory: documents accumulate in an indexing buffer until a refresh writes the buffer's contents to a newly created Lucene segment, making them searchable. A refresh happens every second by default, but you can change this frequency for a given index, or request a refresh directly through the refresh API.
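A minimal illustration of the delay (the index name is an assumption; Kibana Dev Tools syntax):
POST /my_index/_doc
{ "message": "hello" }
# a search immediately afterwards may not return the new document yet
GET /my_index/_search
# after forcing a refresh, the document becomes visible
POST /my_index/_refresh
GET /my_index/_search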
Examples
You can set the refresh interval on an index like this:
PUT /my_index/_settings
{ "index" : { "refresh_interval" : "30s" } }
You can use a value of -1 to stop refreshing, but remember to set it back once you’ve finished indexing!
You can force a refresh on a given index like this:
POST my_index/_refresh
You can also force a refresh at the end of an index operation by adding an extra parameter in the URL like this:
PUT /my_index/_doc/1?refresh=wait_for
In this case, the "wait_for" value makes the request wait until the change has been made visible by a refresh before returning (useful in scripts). Alternatively, you can use "refresh=true" to force an immediate refresh as part of the indexing request.
Notes and good things to know
Refreshing is very resource intensive, so you can increase indexing speed by reducing the refresh rate. You can also do this temporarily if you need to reload a lot of data; a typical bulk-load workflow is sketched below. For some logging applications it is perfectly acceptable to have a 30s latency, for instance, before data actually becomes available.
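For example, a minimal bulk-load sketch (the index name and the restored interval are assumptions):
# disable refresh while loading
PUT /my_index/_settings
{ "index" : { "refresh_interval" : "-1" } }
# ... run the bulk indexing ...
# restore the interval and force one refresh when done
PUT /my_index/_settings
{ "index" : { "refresh_interval" : "1s" } }
POST /my_index/_refresh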
Beware of the refresh interval when scripting or updating: scripts often run faster than the refresh interval, so you might need to call a refresh before retrieving or updating data in your scripts, or use the wait_for parameter while indexing, as described above.
Log Context
The log "Falling back to allocating job [{}] by job counts because a memory requirement refresh could not be scheduled" is emitted from the class TransportOpenJobAction.java.
We extracted the following from the Elasticsearch source code for those seeking in-depth context:
// Try to allocate jobs according to memory usage, but if that's not possible (maybe due to a mixed version cluster or maybe
// because of some weird OS problem) then fall back to the old mechanism of only considering numbers of assigned jobs
boolean allocateByMemory = isMemoryTrackerRecentlyRefreshed;
if (isMemoryTrackerRecentlyRefreshed == false) {
    logger.warn("Falling back to allocating job [{}] by job counts because a memory requirement refresh could not be scheduled", jobId);
}
List<String> reasons = new LinkedList<>();
long maxAvailableCount = Long.MIN_VALUE;
Run the Check-Up to get customized recommendations like this:
[Example recommendation] Heavy merges detected in specific nodes — a large number of small shards can slow down searches and cause cluster instability. Based on your specific ES deployment, the Check-Up supplies a customized code snippet to resolve the issue.