How To Solve Issues Related to Log – Requested thread pool size for is too large; setting to maximum instead


Elasticsearch Error Guide In-Page Navigation:

Troubleshooting Background – start here to get the full picture
Related Issues – selected resources on related issues
Log Context – useful for experts
About Opster – offering a different approach to troubleshooting Elasticsearch



Troubleshooting background

To troubleshoot the Elasticsearch log “Requested thread pool size for is too large; setting to maximum instead”, it is important to understand a few related Elasticsearch concepts: pools, threads and thread pools. The explanations below cover common problems, examples and useful tips.

Elasticsearch Threadpool

What it is

Elasticsearch uses Threadpools to manage how requests are processed and to optimize the use of resources on each node in the cluster.

What it is used for

The main threadpools are for search, get and write, but there are a number of others which you can see by running: 

GET /_cat/thread_pool/?v&h=id,name,active,rejected,completed,size,type&pretty

Running the above command shows that each node has a number of different thread pools, what the size and type of each pool are, and which nodes have rejected operations. Elasticsearch automatically configures the thread pool management parameters based on the number of processors detected on each node.
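
If you want to inspect the configured settings (rather than live statistics), the nodes info API also exposes each node's thread pool configuration. A minimal sketch, assuming you are only interested in the search and write pools (adjust the filter_path as needed):

GET /_nodes/thread_pool?filter_path=nodes.*.thread_pool.search,nodes.*.thread_pool.write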

Threadpool Types
Fixed

A fixed number of threads, with a fixed queue size.

thread_pool:
    write:
        size: 30
        queue_size: 1000
Scaling

A variable number of threads that Elasticsearch scales automatically according to workload.

thread_pool:
    warmer:
        core: 1   # minimum number of threads kept in the pool
        max: 8    # maximum number of threads the pool can scale up to
fixed_auto_queue_size

A fixed number of threads with a variable queue size, which changes dynamically in order to maintain a target response time.

thread_pool:
    search:
        size: 30                     # number of threads
        queue_size: 500              # initial queue size
        min_queue_size: 10           # lower bound for the auto-adjusted queue
        max_queue_size: 1000         # upper bound for the auto-adjusted queue
        auto_queue_frame_size: 2000  # number of operations measured before each adjustment
        target_response_time: 1s     # target average response time for tasks in the queue
Examples

To see which threads are using the highest CPU or taking the longest time, you can use the following request. This may help you find operations that are causing your cluster to underperform.

GET /_nodes/hot_threads
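
The hot threads API also accepts parameters controlling how the sampling is done. The defaults are usually sufficient, so treat the values below as illustrative only:

GET /_nodes/hot_threads?threads=5&interval=1s&type=cpu

You can also scope the request to a single node, e.g. GET /_nodes/<node_id>/hot_threads, when you already know which node is struggling.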
Notes and good things to know:

In general it is not recommended to tweak thread pool settings. However, it is worth noting that the thread pools are sized based on the number of processors that Elasticsearch has detected on the underlying hardware. If that detection fails, you should explicitly set the number of available processors in elasticsearch.yml, like this:

processors: 4
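
To check how many processors Elasticsearch has actually detected on each node, you can query the nodes info API. A minimal sketch (note that on more recent versions the setting above is named node.processors rather than processors):

GET /_nodes?filter_path=nodes.*.name,nodes.*.os.available_processors,nodes.*.os.allocated_processors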

Most thread pools also have queues associated with them, which enable Elasticsearch to store requests in memory while waiting for resources to become available to process them. However, the queues are of finite size, and if that size is exceeded, Elasticsearch will reject the request.

Sometimes you may be tempted to increase the queue size to prevent requests from being rejected, but this only treats the symptom and not the underlying cause of the problem. Indeed, it may even be counterproductive: by allowing a larger queue, the node needs more memory to store the queue and has less available to actually process requests. Furthermore, increasing the queue size also increases the length of time that operations are kept in the queue, which can result in client applications facing timeout issues.

Usually, the only case where increasing the queue size is justified is when requests arrive in uneven surges and you are unable to manage this on the client side.
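
If you do decide to raise a queue size, it is a static node setting configured in elasticsearch.yml and only takes effect after a node restart. A hedged example for the write pool (the value 2000 is purely illustrative):

thread_pool:
    write:
        queue_size: 2000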

You can monitor thread pools to better understand the performance of your Elasticsearch cluster. The Elasticsearch monitoring panel in Kibana shows graphs of the search, get and write thread queues and any queue rejections. Growing queues indicate that Elasticsearch is having difficulty keeping up with requests, and rejections indicate that queues have grown to the point that Elasticsearch rejects calls to the server.

Check the underlying causes of growing queues. Try to balance activity across the nodes in the cluster, and reduce peaks in demand on the thread pools by taking action on the client side.
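
If you are not using Kibana, the same rejection counters can be pulled directly from the cluster, for example with the cat thread pool API (the column selection below is just one possibility):

GET /_cat/thread_pool/search,get,write?v&h=node_name,name,active,queue,rejected,completed

The equivalent numeric statistics are also available via GET /_nodes/stats/thread_pool if you want to feed them into an external monitoring system.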


To help troubleshoot related issues, we have gathered selected Q&A from the community and issues from GitHub. Please review the following for further information:

ElasticSearch gives error about queue size
stackoverflow.com/questions/20683440/elasticsearch-gives-error-about-queue-size

Number of views: 33.58K | Score on Stack Overflow: 26

Logstash Isn’t Authenticating With Elasticsearch Shield
stackoverflow.com/questions/36338580/logstash-isnt-authenticating-with-elasticsearch-shield

Number of views: 0.67K | Score on Stack Overflow: 1


Log Context

The log “Requested thread pool size for is too large; setting to maximum instead” is generated in the class ThreadPool.java.
We have extracted the following from the Elasticsearch source code to provide in-depth context:

         if ((name.equals(Names.BULK) || name.equals(Names.INDEX)) && size > availableProcessors) {
            // We use a hard max size for the indexing pools, because if too many threads enter Lucene's IndexWriter, it means
            // too many segments written, too frequently, too much merging, etc.:
            // TODO: I would love to be loud here (throw an exception if you ask for a too-big size), but I think this is dangerous
            // because on upgrade this setting could be in cluster state and hard for the user to correct?
            logger.warn("requested thread pool size [{}] for [{}] is too large; setting to maximum [{}] instead",
                        size, name, availableProcessors);
            size = availableProcessors;
        }

        return size;
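
Reading the snippet above, the warning fires when the configured size of the bulk/index pool (named write in newer versions) exceeds the number of available processors, and the requested size is then silently capped. A purely illustrative configuration that could trigger the warning on a small node (the pool name depends on your Elasticsearch version):

thread_pool:
    bulk:
        size: 64   # larger than the node's processor count, so it will be capped with the warning above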






About Opster

Incorporating deep knowledge and a broad history of Elasticsearch issues, Opster’s solution identifies and predicts root causes of Elasticsearch problems, provides recommendations, and can automatically perform various actions to manage, troubleshoot and prevent issues.

We are constantly updating our analysis of Elasticsearch logs, errors, and exceptions, sharing best practices and providing troubleshooting guides.

Learn more: Glossary | Blog | Troubleshooting guides | Error Repository

Need help with any Elasticsearch issue? Contact Opster.
