This log is related to search problems. In addition to reading the guide below, you can use the free Search Log Analyzer. With Opster's Analyzer, you can easily locate slow searches and understand what caused them to add additional load to your system. You'll receive customized recommendations for how to reduce search latency and improve your search performance. The tool is free and takes just 2 minutes to run.
Overview
In Elasticsearch, the concept of scroll comes into play when you have a large set of search results. Large result sets are taxing for both the Elasticsearch cluster and the requesting client in terms of memory and processing. The scroll API lets you take a snapshot of the results of a single search request and retrieve them in batches.
Examples
To perform a scroll search, you need to add the scroll parameter to a search query and specify how long Elasticsearch should keep the search context alive.
GET mydocs-2019/_search?scroll=40s
{
  "size": 5000,
  "query": {
    "match_all": {}
  },
  "sort": [
    {
      "_doc": {
        "order": "asc"
      }
    }
  ]
}
This query will return a maximum of 5000 hits per call. If the scroll is idle for more than 40 seconds, its search context is deleted. The response returns the first page of results along with a scroll ID, which you can use to retrieve additional documents from the scroll. You can keep retrieving documents until you have all of them, as shown in the example below.
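To retrieve the next batch, pass the scroll ID back to the _search/scroll endpoint and renew the keep-alive. Below is a minimal sketch; the scroll_id value is a placeholder for the ID returned by the previous response:

GET _search/scroll
{
  "scroll": "40s",
  "scroll_id": "<scroll_id returned by the previous response>"
}

Each call returns the next batch of up to 5000 hits together with a scroll ID; when the hits array comes back empty, all documents have been retrieved.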
Notes
- Changes made to documents after the scroll has been created will not show up in your results, because the scroll reflects the state of the index at the time of the initial search request.
- When you are done with the scroll, you should delete it manually using the scroll ID to free the search context.
DELETE _search/scroll/<scroll_id>
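- If several scroll searches are open at once, every search context can also be cleared in a single call using the _all variant of the same endpoint:

DELETE _search/scroll/_all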
Overview
Search refers to searching for documents in an index or across multiple indices. The simplest search is just a GET request to the _search endpoint. The search query can be provided either as a query string or through a request body (both forms are shown in the examples below).
Examples
When looking for documents in this index, if no search parameters are provided, every document is a hit and, by default, the first 10 hits are returned.
GET my_documents/_search
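The same endpoint also accepts a query in the request body. The sketch below uses a match query on a hypothetical title field; the field name and search term are placeholders rather than part of the original example:

GET my_documents/_search
{
  "query": {
    "match": {
      "title": "elasticsearch"
    }
  }
}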
A JSON object is returned in response to a search query. A 200 response code means the request completed successfully.
{ "took" : 1, "timed_out" : false, "_shards" : { "total" : 2, "successful" : 2, "failed" : 0 }, "hits" : { "total" : 2, "max_score" : 1.0, "hits" : [ ... ] } }
Notes and good things to know
- Distributed search is challenging: every shard of the index needs to be searched for hits, and those hits are then combined into a single sorted list as the final result.
- There are two phases of search: the query phase and the fetch phase.
- In the query phase, the query is executed on each shard locally and top hits are returned to the coordinating node. The coordinating node merges the results and creates a global sorted list.
- In the fetch phase, the coordinating node retrieves the actual documents for the top hit IDs from the shards that hold them and returns them to the requesting client.
- A coordinating node needs enough memory and CPU in order to handle the fetch phase.
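- To illustrate the two phases, consider a paginated request against the my_documents index from above (the from and size values are arbitrary). During the query phase, each shard returns the IDs and sort values of its top from + size hits, the coordinating node merges these into a global sorted list, and only the 10 documents actually requested are fetched in the fetch phase:

GET my_documents/_search
{
  "from": 20,
  "size": 10,
  "query": {
    "match_all": {}
  }
}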
Log Context
The log “Clear SC failed on node[{}]” is generated in the class ClearScrollController.java.
We extracted the following from the Elasticsearch source code for those seeking in-depth context:
            listener.onResponse(new ClearScrollResponse(succeeded, freedSearchContexts.get()));
        }
    }

    private void onFailedFreedContext(Throwable e, DiscoveryNode node) {
        logger.warn(() -> new ParameterizedMessage("Clear SC failed on node[{}]", node), e);
        /*
         * We have to set the failure marker before we count down otherwise we can expose the failure marker before we have set it to a
         * racing thread successfully freeing a context. This would lead to that thread responding that the clear scroll succeeded.
         */
        hasFailed.set(true);