Reloading watcher reason cancelled queued tasks – How to solve this Elasticsearch error

Opster Team

Aug-23, Version: 6.8-8.9

Before you dig into reading this guide, have you tried asking OpsGPT what this log means? You’ll receive a customized analysis of your log.

Try OpsGPT now for step-by-step guidance and tailored insights into your Elasticsearch operation.

Briefly, this error occurs when Elasticsearch’s Watcher, the alerting and notification feature that reacts to changes in data, is reloaded while tasks are still waiting in its execution queue; the queued tasks are cancelled, and the log reports how many. This is often caused by high load on the cluster or slow processing of watch executions. To resolve the issue, reduce the load on the system by optimizing your queries or increasing system resources. Alternatively, increase the Watcher queue size or timeout settings to allow more time for tasks to be processed.
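As a sketch of the queue-size suggestion above: Watcher runs its executions on a dedicated fixed thread pool, which in recent versions is configured under `xpack.watcher.thread_pool`. The values below are illustrative only; check the Watcher settings documentation for your Elasticsearch version before applying them.

```yaml
# elasticsearch.yml -- illustrative sketch, not recommended values.
# A larger queue lets more triggered watches wait for a worker instead of
# being rejected; more threads let executions drain faster.
xpack.watcher.thread_pool.size: 10          # watch-execution worker threads
xpack.watcher.thread_pool.queue_size: 2000  # queued executions before rejection
```

Note that a bigger queue only buys time; if watches are consistently produced faster than they are executed, the underlying load or slow watch actions still need to be addressed.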

For a complete solution to your search operation, try AutoOps for Elasticsearch & OpenSearch for free. With AutoOps and Opster’s proactive support, you don’t have to worry about your search operation – we take charge of it. Get improved performance & stability with less hardware.

This guide will help you check for common problems that cause the log “reloading watcher; reason [{}]; cancelled [{}] queued tasks” to appear. To understand the issues related to this log, read the explanation below about the following Elasticsearch concept: plugin.

Log Context

The log “reloading watcher; reason [{}]; cancelled [{}] queued tasks” appears in the Elasticsearch source code. We extracted the following from the source for those seeking in-depth context:

        // changes

        int cancelledTaskCount = executionService.clearExecutionsAndQueue(() -> {});
        logger.info("reloading watcher; reason [{}]; cancelled [{}] queued tasks", reason, cancelledTaskCount);

        executor.execute(wrapWatcherService(() -> reloadInner(state, reason, false),
            e -> logger.error("error reloading watcher", e)));
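To illustrate the pattern in the snippet above, here is a minimal, self-contained Java sketch of what clearing an execution queue and counting the cancelled tasks looks like. It uses a plain `ThreadPoolExecutor` rather than Watcher's internal execution service, and `clearQueue` is a hypothetical stand-in for `executionService.clearExecutionsAndQueue`:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class QueueClearSketch {

    // Hypothetical stand-in for Watcher's executionService.clearExecutionsAndQueue():
    // drains all queued (not yet started) tasks and returns how many were cancelled.
    static int clearQueue(ThreadPoolExecutor executor) {
        List<Runnable> cancelled = new ArrayList<>();
        executor.getQueue().drainTo(cancelled);
        return cancelled.size();
    }

    public static void main(String[] args) throws Exception {
        // A single worker with an unbounded queue, so extra tasks pile up.
        ThreadPoolExecutor executor = new ThreadPoolExecutor(
            1, 1, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<>());

        CountDownLatch gate = new CountDownLatch(1);
        // Occupy the only worker so subsequent tasks stay queued.
        executor.execute(() -> {
            try { gate.await(); } catch (InterruptedException ignored) { }
        });
        for (int i = 0; i < 5; i++) {
            executor.execute(() -> { });   // these five remain in the queue
        }

        int cancelledTaskCount = clearQueue(executor);
        // Mirrors the shape of the Watcher log line:
        System.out.printf("reloading watcher; reason [%s]; cancelled [%d] queued tasks%n",
            "demo", cancelledTaskCount);

        gate.countDown();
        executor.shutdown();
        executor.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

In the real code, a reload does the same thing at a larger scale: anything still waiting for a Watcher execution thread is thrown away, and only the count survives in the log, which is why a high cancelled-task count is a sign of a backed-up Watcher queue.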


