Falling back to allocating job by job counts because a memory requirement refresh could not be scheduled – How to solve related issues

Average Read Time: 2 Mins

Opster Team

Jan-20, Version: 1.7-8.0

Before you begin reading this guide, we recommend you run the Elasticsearch Error Check-Up, which can help resolve issues that cause many errors.

This guide will help you check for common problems that cause the log "Falling back to allocating job by job counts because a memory requirement refresh could not be scheduled" to appear. It's important to understand the issues related to the log, so to get started, read the general overview on common issues and tips related to the Elasticsearch concepts: memory, plugin and refresh.

Advanced users might want to skip right to the common problems section in each concept, or try running the Check-Up, which analyzes your Elasticsearch cluster to pinpoint the cause of many errors and provides actionable recommendations on how to resolve them (a free tool that requires no installation).

Log Context

The log "Falling back to allocating job [{}] by job counts because a memory requirement refresh could not be scheduled" is emitted from the class TransportOpenJobAction.java.
We extracted the following from the Elasticsearch source code for those seeking in-depth context:

 
        // Try to allocate jobs according to memory usage; but if that's not possible (maybe due to a mixed version cluster or maybe
        // because of some weird OS problem) then fall back to the old mechanism of only considering numbers of assigned jobs
        boolean allocateByMemory = isMemoryTrackerRecentlyRefreshed;
        if (isMemoryTrackerRecentlyRefreshed == false) {
            logger.warn("Falling back to allocating job [{}] by job counts because a memory requirement refresh could not be scheduled",
                jobId);
        }

        // Collect the reasons why candidate nodes cannot accept the job
        List<String> reasons = new LinkedList<>();
        long maxAvailableCount = Long.MIN_VALUE;
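As the comment in the extract explains, when the ML memory tracker has not been refreshed recently, the job is allocated by job counts instead of by memory usage. If you want to inspect the memory figures that memory-based allocation relies on, the following is a minimal sketch using the Elasticsearch low-level Java REST client; the host, port and the job name "my-job" are placeholder assumptions, not values taken from this guide.

    // Minimal sketch (assumptions: cluster reachable on localhost:9200, anomaly detection job "my-job").
    // It fetches the job's statistics, whose model_size_stats section (model_bytes) reflects
    // the memory the job is using, the kind of figure the ML memory tracker refreshes.
    import org.apache.http.HttpHost;
    import org.apache.http.util.EntityUtils;
    import org.elasticsearch.client.Request;
    import org.elasticsearch.client.Response;
    import org.elasticsearch.client.RestClient;

    public class JobMemoryCheck {
        public static void main(String[] args) throws Exception {
            try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
                // GET _ml/anomaly_detectors/<job_id>/_stats returns model_size_stats for the job
                Request request = new Request("GET", "/_ml/anomaly_detectors/my-job/_stats");
                Response response = client.performRequest(request);
                System.out.println(EntityUtils.toString(response.getEntity()));
            }
        }
    }

The job's configured model_memory_limit can be retrieved separately with GET _ml/anomaly_detectors/my-job if you want to compare it against the reported model_bytes.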

Run the Check-Up to get a customized report for your cluster.
