What Does it Mean?
Master nodes are responsible for actions such as creating or deleting indices, deciding which shards should be allocated on which nodes, and maintaining the cluster state on all of the nodes. The cluster state includes information about which shards are on which node, index mappings, which nodes are in the cluster and other settings necessary for the cluster to operate. Even though these actions are not resource intensive, it is essential for cluster stability to ensure that the master nodes remain available at all times to carry out these tasks.
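As a quick way to see this in practice, the cat APIs report which node currently holds the master role. A minimal sketch, assuming a cluster reachable on localhost:9200:

```shell
# Show the currently elected master node (id, host, ip, node name)
curl -s "http://localhost:9200/_cat/master?v"

# List all nodes with their roles; the 'master' column marks the
# elected master with '*'
curl -s "http://localhost:9200/_cat/nodes?v&h=name,node.role,master"
```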
Although in small clusters it is possible to have master nodes that also carry out search and index operations (which is the default configuration), both searching and indexing are resource intensive. Under load, such a node may not have sufficient resources left to carry out its master duties, ultimately resulting in cluster instability.
For this reason, once a cluster reaches a certain size it is highly recommended to create 3 dedicated master nodes in different availability zones. The master nodes require excellent connectivity with the rest of the nodes in the cluster and should be in the same network.
How to Create a Dedicated Master Node Configuration
Create 3 (and exactly 3) Dedicated Master Nodes
Elasticsearch uses quorum-based decision making to create a robust architecture and prevent the “split brain” problem, a situation in which nodes that lose contact with the cluster elect their own master, potentially leaving you with two independent clusters. Elasticsearch prevents this by requiring that a majority (more than half) of master-eligible nodes vote to elect a new master node. For this reason it is highly recommended to use 3 master-eligible nodes, a structure which is capable of losing 1 master node and remaining stable.
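The majority requirement boils down to simple arithmetic, sketched below as a shell function (an illustration only, not an Elasticsearch API):

```shell
# Votes required to elect a master among N master-eligible nodes:
# a strict majority, i.e. floor(N/2) + 1
quorum() { echo $(( $1 / 2 + 1 )); }

quorum 3   # 2 -> the cluster survives the loss of 1 master node
quorum 4   # 3 -> still only tolerates 1 lost node, so 4 buys nothing
quorum 2   # 2 -> losing either node halts elections; avoid 2 masters
```

This is why 3 is the sweet spot: it is the smallest number of master-eligible nodes that tolerates a failure, and even numbers add cost without adding fault tolerance.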
Depending on the size of the cluster, a master node typically requires fewer resources than a data node. For example, in a 20-node cluster where the data nodes use machines with 64GB of RAM, it would be usual to find 3 master nodes with only 1 or 2GB of RAM each.
node.master: true
node.voting_only: false
node.data: false
node.ingest: false
node.ml: false
node.transform: false
node.remote_cluster_client: false
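Note that the node.* boolean settings above are the legacy (pre-7.9) format. On Elasticsearch 7.9 and later, the same dedicated-master configuration is expressed more compactly with the node.roles setting in elasticsearch.yml:

```yaml
# A node with only the master role is a dedicated master node:
# it will not hold data, ingest documents, or run ML jobs
node.roles: [ master ]
```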
Set Node Master False on Remaining Nodes
If your data nodes are currently configured to be master-eligible, you can stop them acting as master nodes by setting the following in elasticsearch.yml:

node.master: false
VERY IMPORTANT! You should change this setting on one node at a time, restarting each node and waiting 60 seconds to ensure that Elasticsearch has time to remove the node from the voting configuration before moving on to the next one. Failing to do so can cause your cluster to fail. See Minimum Master Node Higher Than for more information.
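To confirm that a node has actually left the voting configuration before restarting the next one, you can inspect the cluster coordination metadata in the cluster state. A sketch, assuming a cluster on localhost:9200:

```shell
# The committed voting configuration lists the node IDs that currently
# get a vote; wait until the restarted node's ID no longer appears
# before moving on to the next node
curl -s "http://localhost:9200/_cluster/state/metadata?filter_path=metadata.cluster_coordination.last_committed_config"
```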
Alternatively, you can use the voting configuration exclusions API to manually remove a node from the voting configuration.
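A sketch of that approach, assuming a recent (7.8+) Elasticsearch on localhost:9200 and a node named node-1 (a placeholder name):

```shell
# Exclude node-1 from the voting configuration; it remains in the
# cluster but no longer counts as a master-eligible voter
curl -s -X POST "http://localhost:9200/_cluster/voting_config_exclusions?node_names=node-1"

# Once the reconfiguration is complete, clear the exclusions list so
# stale entries don't linger (the list has a small size limit)
curl -s -X DELETE "http://localhost:9200/_cluster/voting_config_exclusions"
```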