Before you begin reading this guide, we recommend you try running the Elasticsearch Error Check-Up, which can resolve issues that cause many errors.
This guide will help you check for common problems that cause the log "failed to perform on node" to appear. To get started, it's important to understand the issues related to this log, so read the general overview on common issues and tips related to the Elasticsearch concepts: node and replication.
Advanced users might want to skip right to the common problems section in each concept, or try running the Check-Up, which analyzes Elasticsearch to pinpoint the cause of many errors and provides actionable recommendations on how to resolve them (a free tool that requires no installation).
Overview
Simply put, a node is a single server that is part of a cluster. Each node is assigned one or more roles, which describe the node's responsibilities and operations. Data nodes store the data and participate in the cluster's indexing and search capabilities, while master nodes are responsible for managing the cluster's activities and storing the cluster state, including the metadata.
While it is possible to run several node instances of Elasticsearch on the same hardware, it’s considered a best practice to limit a server to a single running instance of Elasticsearch.
Nodes connect to each other and form a cluster by using a discovery method.
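For example, a minimal seed-hosts discovery setup in elasticsearch.yml might look like the sketch below (the host names es-node-1, es-node-2 and es-node-3 are placeholders, and cluster.initial_master_nodes is only needed when bootstrapping a brand-new cluster):

# elasticsearch.yml (host names are hypothetical)
cluster.name: my-cluster
node.name: es-node-1
# Nodes to contact in order to discover and join the cluster
discovery.seed_hosts: ["es-node-1", "es-node-2", "es-node-3"]
# Only required the first time a brand-new cluster is bootstrapped
cluster.initial_master_nodes: ["es-node-1", "es-node-2", "es-node-3"]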
Roles
Master node
Master nodes are in charge of cluster-wide settings and changes – deleting or creating indices and fields, adding or removing nodes and allocating shards to nodes. Each cluster has a single master node that is elected from the master eligible nodes using a distributed consensus algorithm and is reelected if the current master node fails.
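If you want to see which node currently holds the elected master role, the cat master API gives a one-line answer (a quick sketch; the values returned will naturally differ per cluster):

GET /_cat/master?v

The response lists the id, host, IP and name of the currently elected master node.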
Coordinator or client node
Coordinating nodes are nodes that do not hold any other configured role. They don't hold data, are not part of the master-eligible group, and do not execute ingest pipelines. A coordinating node serves incoming search requests and acts as the query coordinator, running the query and fetch phases and sending requests to every node that holds a shard being queried. It also distributes bulk indexing operations and routes queries to shards based on the nodes' responsiveness.
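As an illustration, on Elasticsearch 7.9 and later a coordinating-only node can be created by leaving its role list empty in elasticsearch.yml:

# elasticsearch.yml - no roles assigned, so the node only coordinates requests
node.roles: []

You can then verify node roles with GET /_cat/nodes?v&h=name,node.role,master; a coordinating-only node shows "-" in the node.role column (the header selection here is just one convenient choice).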
Overview
Replication refers to storing a redundant copy of the data. Starting from version 7.x, Elasticsearch creates each index with one primary shard and a replication factor of 1 by default. Replicas are never assigned to the same node as their primary shard, which means you need at least two nodes in the cluster for replicas to be assigned. If a primary shard goes down, its replica is automatically promoted to primary.
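To see how this plays out, the cat shards API shows where the primary and replica copies of an index were placed (the index name api-logs is only an example, matching the one used later in this guide):

GET /_cat/shards/api-logs?v

On a single-node cluster the replica rows will remain UNASSIGNED until a second node joins.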
What it is used for
Replicas are used to provide high availability and failover. A higher number of replicas also helps search performance, since search requests can be served by either primary or replica shards.
Examples
Update replica count
PUT /api-logs/_settings?pretty
{
  "index" : {
    "number_of_replicas" : 2
  }
}
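After the update you can confirm the new replica count and check where the additional copies were allocated (again assuming the api-logs index from the example above):

GET /api-logs/_settings?filter_path=*.settings.index.number_of_replicas

GET /_cat/shards/api-logs?v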
Common problems
- By default, Elasticsearch stops allocating new replicas to nodes that have crossed the low disk watermark (85% disk usage) and logs a warning instead. See the example after this list for how to check disk usage per node.
- Creating too many replicas may cause a problem if there are not enough resources available in the cluster.
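To check disk usage per node before adding replicas, the cat allocation API provides a quick per-node summary, including the disk.percent column that the watermark above refers to:

GET /_cat/allocation?v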
Log Context
The log "{} failed to perform {} on node {}" is generated by the class TransportReplicationAction.java.
We extracted the following from the Elasticsearch source code for those seeking in-depth context:
@Override
public void handleException(TransportException exp) {
    onReplicaFailure(nodeId, exp);
    logger.trace("[{}] transport failure during replica request [{}], action [{}]", exp, node, replicaRequest, transportReplicaAction);
    if (ignoreReplicaException(exp) == false) {
        logger.warn("{} failed to perform {} on node {}", exp, shardId, transportReplicaAction, node);
        shardStateAction.shardFailed(shard, indexUUID, "failed to perform " + actionName + " on replica on node " + node, exp);
    }
}