Elasticsearch Queue


Last Update: February 2020

Queue in Elasticsearch


In Elasticsearch, the term queue is used in the context of thread pools. Each node of an Elasticsearch cluster holds several thread pools to manage memory consumption for the different types of requests handled on that node. Each thread pool queue is created with a default limit derived from the node size, but the limit can be modified, either dynamically via the cluster settings API or statically in elasticsearch.yml, depending on the Elasticsearch version (see the Notes below).

What it is used for

Queues hold pending requests for the corresponding thread pool instead of having them rejected outright. For example, if more search requests arrive at a node than can be processed at the same time, the excess requests are placed in the search thread pool queue.

Examples

Monitor the thread pools using the _cat API:

GET /_cat/thread_pool?v
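
If you only care about queue depth and rejections, the same _cat API can be narrowed to specific pools and columns. The call below is a sketch: the column names are standard _cat/thread_pool headers, and the pool names assume a recent version in which the bulk pool is named write:

GET /_cat/thread_pool/search,write?v&h=node_name,name,active,queue,rejected,queue_size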

Get details about each thread pool, including the configured queue size:

GET /_nodes/thread_pool
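
The response is a JSON document per node; a heavily trimmed, illustrative excerpt might look like the following (pool names, sizes and queue defaults all vary by Elasticsearch version and node hardware, so treat the values as placeholders):

{
  "nodes": {
    "<node_id>": {
      "thread_pool": {
        "write":  { "type": "fixed", "size": 8,  "queue_size": 10000 },
        "search": { "type": "fixed", "size": 13, "queue_size": 1000 }
      }
    }
  }
}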

Notes
  • Thread pool queues are among the most important stats to monitor in Elasticsearch, as they have a direct impact on cluster performance and may halt indexing and search requests.
  • Each thread pool's queue size can be changed using its type-specific parameters.
  • In version 2.x, thread pool queue sizes can be updated dynamically using the cluster settings API.
  • From Elasticsearch version 5.x onward, thread pool settings can no longer be updated dynamically via the cluster settings API. They are node-level settings that must be configured in elasticsearch.yml on each node, and a node restart is required after the change (see the configuration sketch after this list).
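
As a sketch of the two approaches above (the exact setting names are assumptions based on the 2.x-era threadpool.* naming and the 5.x+ thread_pool.* naming, and the values are only illustrative; note that the write pool was called bulk before version 6.x):

On 2.x, dynamically via the cluster settings API:

PUT /_cluster/settings
{
  "transient": {
    "threadpool.search.queue_size": 2000
  }
}

On 5.x and later, statically in elasticsearch.yml on each node, followed by a restart:

thread_pool.search.queue_size: 2000
thread_pool.write.queue_size: 1000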

Common Problems
  • The most common queue-related problem in Elasticsearch is the EsRejectedExecutionException, which occurs when a queue is full and the node cannot keep up with the rate of incoming requests; it can even leave nodes unresponsive. To deal with this, monitor the thread pools continuously and, based on queue utilization, either review and throttle the indexing/search load or add resources to the cluster (see the monitoring example after this list).
  • In case of bulk indexing queue rejections, increasing the queue size makes the node keep more data in memory, which can cause requests to take longer to complete and more heap space to be consumed. As a result, cluster performance and stability may suffer.
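
One simple way to watch for rejections over time is to poll the node statistics and track the rejected counters; a steadily increasing value means the corresponding queue is regularly filling up. A minimal sketch using the node stats API:

GET /_nodes/stats/thread_pool

Each pool in the response reports threads, queue, active, rejected, largest and completed counters. The rejected counter is cumulative since node startup, so compare successive samples rather than absolute values.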

