Before you begin reading this guide, we recommend you run the Elasticsearch Error Check-Up, which can resolve issues that cause many errors.
This guide will help you check for common problems that cause the log "Failed to refresh settings for" to appear. It's important to understand the issues related to this log, so to get started, read the general overview of common issues and tips related to the Elasticsearch concepts: node, refresh and settings.
Advanced users might want to skip right to the common problems section in each concept, or try running the Check-Up, which analyzes ES to pinpoint the cause of many errors and provides actionable recommendations on how to resolve them (a free tool that requires no installation).
Overview
Simply put, a node is a single server that is part of a cluster. Each node is assigned one or more roles, which describe the node's responsibilities and operations. Data nodes store the data and participate in the cluster's indexing and search capabilities, while master nodes are responsible for managing the cluster's activities and storing the cluster state, including the metadata.
While it is possible to run several node instances of Elasticsearch on the same hardware, it’s considered a best practice to limit a server to a single running instance of Elasticsearch.
Nodes connect to each other and form a cluster by using a discovery method.
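As a minimal sketch of such a discovery configuration (setting names vary between Elasticsearch versions; this uses the 7.x-style settings and placeholder host names), each node's elasticsearch.yml tells it which other nodes to contact on startup:

# elasticsearch.yml (Elasticsearch 7.x style settings)
cluster.name: my-cluster
# other master-eligible nodes to contact when joining the cluster
discovery.seed_hosts: ["es-node-1", "es-node-2", "es-node-3"]
# only needed when bootstrapping a brand-new cluster
cluster.initial_master_nodes: ["es-node-1", "es-node-2", "es-node-3"]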
Roles
Master node
Master nodes are in charge of cluster-wide settings and changes – deleting or creating indices and fields, adding or removing nodes and allocating shards to nodes. Each cluster has a single master node that is elected from the master-eligible nodes using a distributed consensus algorithm and is re-elected if the current master node fails.
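As a sketch in the same style as the elasticsearch.yml examples later in this article (the legacy role flags, used before the newer node.roles setting), a dedicated master-eligible node could be configured like this:

# elasticsearch.yml - dedicated master-eligible node
node.master: true
node.data: false
node.ingest: false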
Coordinator or client node
Coordinator nodes are nodes that do not hold any configured role. They don't hold data, are not part of the master-eligible group, and do not execute ingest pipelines. Coordinator nodes serve incoming search requests and act as the query coordinator, running the query and fetch phases and sending requests to every node that holds a shard being queried. The coordinating (client) node also distributes bulk indexing operations and routes queries to shards based on each node's responsiveness.
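Accordingly, a coordinating-only node is simply one with all of these roles disabled; a sketch using the same legacy flags:

# elasticsearch.yml - coordinating-only (client) node
node.master: false
node.data: false
node.ingest: false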
Overview
When indexing data, Elasticsearch requires a “refresh” operation to make indexed information available for search. This means that there is a time delay between indexing and the updated information actually becoming available for the client applications.
How it works
Index operations occur in memory. The operations are accumulated in a buffer until a refresh occurs, which transfers the contents of the buffer to a newly created Lucene segment. A refresh happens every second by default, but it is also possible to change this frequency for a given index, or to request a refresh directly through the refresh API.
Examples
You can set the refresh interval on an index like this:
PUT /my_index/_settings { "index" : { "refresh_interval" : "30s" } }
You can use a value of -1 to stop refreshing, but remember to set it back once you've finished indexing!
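For example, a common pattern (sketched here with a placeholder index name) is to disable automatic refresh before a heavy bulk load and restore the default afterwards:

PUT /my_index/_settings
{ "index" : { "refresh_interval" : "-1" } }

# ... run the bulk indexing here ...

PUT /my_index/_settings
{ "index" : { "refresh_interval" : "1s" } }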
You can force a refresh on a given index like this:
POST my_index/_refresh
You can also force a refresh at the end of an individual indexing operation by adding an extra parameter to the URL, like this:
POST /my_index/_doc?refresh=wait_for
{ "field" : "value" }
In this case, refresh=wait_for makes the request wait until the change has been made visible by a refresh before returning (useful in scripts), or you can use refresh=true to force an immediate refresh of the affected shards as part of the request.
Notes and good things to know
Refreshing is very resource intensive, so you can increase indexing speed by reducing the refresh rate. You can do this temporarily if you need to reload a lot of data. For some logging applications it is perfectly acceptable to have a 30s latency, for instance, before data actually becomes available.
Beware of the refresh interval when scripting or updating. Scripts often run faster than the refresh interval, so if necessary you might need to call a refresh before retrieving or updating data in your scripts, or use the refresh=wait_for parameter while indexing, as described above.
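As a small illustration of that pitfall (the index name and document are placeholders), a script that indexes and then immediately searches may need an explicit refresh in between:

POST /my_index/_doc
{ "user" : "alice" }

# without this, the search below may not see the document yet
POST /my_index/_refresh

GET /my_index/_search
{ "query" : { "match" : { "user" : "alice" } } }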
Overview
In Elasticsearch, you can configure cluster-level settings, node-level settings and index-level settings. Here we discuss each of them.
A. Cluster settings
These settings can be either persistent, meaning they apply across restarts, or transient, meaning they won't survive a full cluster restart. If a transient setting is reset, the first of the following values that is defined is applied (see the example after this list):
- the persistent setting
- the setting in the configuration file
- the default value
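For instance, resetting a transient setting by sending null clears it, after which the fallback order above applies (shown here with the same recovery setting used in the examples below):

PUT /_cluster/settings
{
  "transient" : {
    "indices.recovery.max_bytes_per_sec" : null
  }
}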
The order of precedence for cluster settings is (you can check the values currently in effect as shown after this list):
- transient cluster settings
- persistent cluster settings
- settings in the elasticsearch.yml configuration file
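To inspect which values are currently in effect, you can query the cluster settings API; the include_defaults parameter also returns the built-in defaults:

GET /_cluster/settings?include_defaults=true&flat_settings=true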
Examples
An example of a persistent cluster settings update:
PUT /_cluster/settings { "persistent" : { "indices.recovery.max_bytes_per_sec" : "500mb" } }
An example of a transient update:
PUT /_cluster/settings { "transient" : { "indices.recovery.max_bytes_per_sec" : "40mb" } }
B. Index settings
These are the settings that are applied to individual indices. There is an API to update index level settings.
Examples
The following API call sets the number of replica shards to 5 for the my_index index.
PUT /my_index/_settings { "index" : { "number_of_replicas" : 5 } }
To revert a setting to the default value, use null.
PUT /my_index/_settings { "index" : { "refresh_interval" : null } }
C. Node settings
These settings apply to individual nodes. Nodes can fulfill different roles, including the master, data, and coordinating roles. Node settings are set through the elasticsearch.yml file for each node.
Examples
Setting a node to be a data node (in the elasticsearch.yml file)
node.data: true
Disabling the ingest role for the node (which is enabled by default)
node.ingest: false
For production clusters, it is recommended to run each node role on a dedicated machine, with at least two nodes of each role for high availability (and a minimum of three master-eligible nodes).
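To check how roles ended up being distributed across a running cluster, one quick (deployment-agnostic) way is the cat nodes API:

GET /_cat/nodes?v&h=name,node.role,master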
Notes and good things to know
- Learning the cluster settings and index settings is important and can spare you a lot of trouble. For example, if you are going to ingest a huge amount of data into an index and the number of replica shards is set to, say, 5, the indexing process will be very slow because the data is replicated at the same time it is indexed. To speed up indexing, you can set the number of replica shards to 0 by updating the settings, and set it back to the original number once indexing is done, using the settings API.
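A sketch of that workflow, assuming an index named my_index that normally runs with 5 replicas:

PUT /my_index/_settings
{ "index" : { "number_of_replicas" : 0 } }

# ... run the heavy indexing here ...

PUT /my_index/_settings
{ "index" : { "number_of_replicas" : 5 } }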
- Another useful example of using cluster-level settings is when a node has just joined the cluster but the cluster is not assigning any shards to it. Although shard allocation is enabled by default on all nodes, someone may have disabled shard allocation at some point (for example, in order to perform a rolling restart) and forgotten to re-enable it later. To enable shard allocation, you can update the cluster settings API:
PUT /_cluster/settings{"transient":{"cluster.routing.allocation.enable":"all"}}
- It's better to set cluster-wide settings with the cluster settings API rather than with the elasticsearch.yml file, and to use the file only for local, node-specific changes. This keeps the same setting on all nodes, whereas if you accidentally define different settings on different nodes via elasticsearch.yml, it is hard to notice the discrepancies.
- See also: Recovery
Log Context
The log "failed to refresh settings for [{}]" is generated by the class NodeSettingsService.java.
We extracted the following from the Elasticsearch source code for those seeking in-depth context:
                        ESLoggerFactory.getLogger(component).setLevel(entry.getValue());
                    }
                }
            }
        } catch (Exception e) {
            logger.warn("failed to refresh settings for [{}]", e, "logger");
        }
        lastSettingsApplied = event.state().metaData().settings();
        globalSettings = lastSettingsApplied;
    }