Before you dig into the details of this technical guide, have you tried asking OpsGPT?
You'll receive concise answers that will help streamline your Elasticsearch/OpenSearch operations.
Try OpsGPT now for step-by-step guidance and tailored insights into your Elasticsearch/OpenSearch operation.
To set up Elasticsearch node roles correctly, try AutoOps for Elasticsearch. AutoOps will also help you optimize other important settings to improve performance.
What are coordinating nodes?
A node is Coordinating Only (CO) – also often called a “dedicated coordinating node” – when it is neither a data node nor a master-eligible node. When a cluster is designed with such nodes, HTTP clients are configured so that all indexing requests, all search requests, or both are sent exclusively to them.
The main purpose of this arrangement is to isolate data and master nodes from the resource demands of handling those HTTP(S) requests.
Benefits of dedicated coordinating nodes
As described below, coordinating indexing and search requests can place a higher heap demand on a node than the shard-level requests handled by data nodes. This demand grows further when the request rate is high enough that some requests must be queued.
CPU and networking demand can also be an issue, depending on the size and complexity of these requests.
In certain scenarios, use of CO nodes can provide lower maximum and average request latencies.
Inclusion of CO nodes can also simplify the configuration of client applications. As a cluster has data nodes added, the client configuration (or interposing load balancer) can continue to have references to just the CO nodes.
The process of search requests
When a node receives a search request, it determines which indices must be queried, and sends the request to one copy of each shard of those indices (the query phase). Each shard returns document IDs and relevance scores for up to “size” documents.
The coordinating node merges these responses, sorts them by relevance, and retrieves only the top “size” documents from the shards that reported them (the fetch phase). They are then included in the response to the original search request.
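The merge step between the query and fetch phases can be sketched as follows. This is an illustrative simplification, not Elasticsearch source code: each shard has returned (doc_id, score) pairs for up to "size" documents, and the coordinating node keeps only the global top "size" before fetching the full documents.

```python
import heapq

def merge_shard_results(shard_results, size):
    """Merge per-shard query-phase results on the coordinating node.

    shard_results: list of lists of (doc_id, score) tuples, one list per shard.
    Returns the global top `size` hits, highest relevance score first; only
    these document IDs proceed to the fetch phase.
    """
    all_hits = [hit for shard in shard_results for hit in shard]
    return heapq.nlargest(size, all_hits, key=lambda hit: hit[1])

# Two shards each report their local top hits from the query phase:
shard_a = [("doc1", 3.2), ("doc2", 1.1)]
shard_b = [("doc3", 2.7), ("doc4", 0.4)]
top = merge_shard_results([shard_a, shard_b], size=2)
# Only doc1 and doc3 are fetched and returned to the client.
```

Note that every shard may return up to "size" candidates, so the coordinating node can momentarily hold (number of shards × size) hits in heap before trimming, which is exactly the memory pressure that CO nodes are meant to absorb.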
The process of indexing requests
When a node receives an indexing request for a document, it must route the request to the primary shard where the document will be stored, which then forwards it to each of that shard’s replicas. In the case of a bulk indexing request, this must be done for each document in the request, and the outcome of each individual operation must be tracked so that the correct bulk response can be composed and sent.
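The routing decision itself can be sketched in a few lines. This is a deliberate simplification: Elasticsearch actually uses a murmur3 hash of the routing value (the document ID by default) and accounts for settings such as index.routing_partition_size, but crc32 stands in here so the example is self-contained.

```python
import zlib

def route_to_shard(routing_value, num_primary_shards):
    """Simplified sketch of primary-shard selection for an indexing request.

    Real Elasticsearch hashes the routing value with murmur3; crc32 is used
    here only to keep the example dependency-free and deterministic.
    """
    return zlib.crc32(routing_value.encode("utf-8")) % num_primary_shards

# The coordinating node forwards the document to this primary shard, which
# then replicates it; for a bulk request this is repeated per document.
shard = route_to_shard("my-doc-id", 5)
```

Because the hash is deterministic, the same document ID always maps to the same primary shard, which is what allows any node to route a request without consulting the data nodes first.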
Costs of adding coordinating only nodes
Every node of any type that belongs to an Elasticsearch cluster must keep an up-to-date copy of the cluster state. The elected master node sends each change in the cluster state to each of those nodes, and waits for an acknowledgement. Hence, the addition of any node represents a potential increase in the latency of applying any given change.
The more obvious cost is that of committing additional physical resources to another Elasticsearch instance after first determining the optimal design (number of CPUs, amount of heap, networking capacity).
When to use dedicated coordinating nodes
A cluster should have CO nodes when the benefits outweigh the costs. The decision typically follows observations of an existing deployment: some nodes becoming unstable (especially under heap pressure), or nodes showing periodic high per-request latency caused by intermixed handling of HTTP requests and shard-level requests.
Factors that influence the cost/benefit of using CO nodes include, among others:
- The number of data nodes and their sizing
- The uniformity and complexity of documents and search requests
- How “bursty” indexing and search requests are over time
- How finely each index is sharded
However, there are no firm thresholds for any of these factors that determine when to use CO nodes, how many to use, or how large they should be. In widespread practice, CO nodes tend to be introduced when a cluster reaches between 10 and 20 data nodes, though in some scenarios it happens with fewer, and there is no substitute for testing and monitoring your cluster to determine the optimal configuration. The recommended approach is to use a benchmarking tool in a test environment, such as Rally (https://esrally.readthedocs.io/en/stable/), where you can evaluate query and indexing throughput for a given cluster configuration.
How to use dedicated coordinating nodes
If you choose to use CO nodes, you should have at least two (for redundancy), and traffic should be load balanced across them. The nodes will need enough RAM (heap) to manage the gather/merge phase of requests, which depends on the size and frequency of the search and bulk indexing responses to be collated.
Once you have determined the desired sizing for your additional nodes, update their configuration to assign node roles:
node.roles: [ ]
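For example, a minimal elasticsearch.yml for a coordinating-only node might look like the following (the node name and cluster name are illustrative and must match your own deployment):

```yaml
# elasticsearch.yml for a coordinating-only node
node.name: es-co-1          # illustrative name
node.roles: [ ]             # empty list = no data, master, ingest, or other roles
cluster.name: my-cluster    # must match the existing cluster
```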
Deploy and launch them using the same methods as the existing nodes in the target cluster.
Once they have joined the cluster and you confirm that they are accepting requests, reconfigure any HTTP clients to target the new nodes, preferably in a round-robin fashion.
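If your client library or load balancer does not handle distribution for you, round-robin selection is straightforward to sketch (hostnames below are illustrative; most official Elasticsearch clients accept a list of hosts and rotate across them automatically):

```python
import itertools

# Coordinating-only node endpoints (hostnames are illustrative).
CO_NODES = ["http://es-co-1:9200", "http://es-co-2:9200"]
_node_cycle = itertools.cycle(CO_NODES)

def next_node():
    """Pick the next coordinating node in round-robin order."""
    return next(_node_cycle)

# Each request targets the next CO node in turn:
targets = [next_node() for _ in range(3)]
```

In production you would usually place the CO nodes behind a load balancer or pass the host list to the client library instead of hand-rolling this, but the effect is the same: requests are spread evenly across the coordinating tier.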
Beware of creating a new bottleneck
If you use coordinating nodes, beware of creating a new bottleneck. If you provision too few of them, they can lack the capacity to perform the coordination role, leaving your data nodes under-utilized.
Shard routing preference
Remember that if you want to encourage your cluster to route search requests from the same client to the same shard copies, to take full advantage of node-level caching, you can use the “preference” search parameter with a custom string (such as a user or session ID).
This parameter works on both coordinating nodes and standard data nodes.
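A sketch of such a request follows (the index name and preference string are illustrative); repeated searches that pass the same custom preference string are routed to the same shard copies, improving cache hit rates:

```
GET /my-index/_search?preference=user-1234
{
  "query": { "match": { "title": "coordinating nodes" } }
}
```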