Elasticsearch Node Disconnected

Opster Team

October 2021

Average Read Time: 2 Mins

In addition to reading this guide, we recommend you run the Elasticsearch Health Check-Up. It will detect issues and improve your Elasticsearch performance by analyzing your shard sizes, threadpools, memory, snapshots, disk watermarks and more.

The Elasticsearch Check-Up is free and requires no installation.


Run the Elasticsearch check-up to receive recommendations like this:

The following nodes have disconnected: 123, 456, 789

Description

One or more data nodes can disconnect from a cluster for a number of possible reasons. When a node disconnects it causes cluster instability because of the strain on the other nodes that need to take on all the activity that was previously shared. Additionally, the cluster needs to dedicate resources to regenerating the replica shards that have been lost from the cluster...


Recommendation

According to your specific configuration, we recommend that you...

curl -XPUT -H "Content-Type: application/json" [customized recommendation]

Overview

There are a number of possible reasons for a node to become disconnected from a cluster. It is important to take into account that node disconnection is often a symptom of some underlying problem which must be investigated and solved. 

How to diagnose

The best way to understand what is going on in your cluster is to:

  • Look at monitoring data
  • Look at Elasticsearch logs
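For example, a quick way to see which nodes are currently in the cluster and how loaded they are is the cat nodes API. A minimal sketch, assuming Elasticsearch is reachable on localhost:9200 (adjust the host, port and authentication for your deployment):

# List the nodes currently in the cluster, with role, heap, CPU, load and uptime per node
curl -s "http://localhost:9200/_cat/nodes?v&h=name,ip,node.role,master,heap.percent,cpu,load_1m,uptime"

A node that is missing from this list, or that shows a very short uptime, has recently left or restarted the cluster and is a good place to start reading logs.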

Possible causes

Excessive garbage collection from JVM

If you can see that the JVM heap is not following a regular sawtooth pattern, but is instead showing an irregular upward curve, or if you see many logs like this:

[2020-04-10T13:35:51,628][WARN ][o.e.m.j.JvmGcMonitorService] [ES2-QUERY] [gc][550567] overhead, spent [615ms] collecting in the last [1s]

Then you almost certainly have a JVM garbage collection issue. This in turn is likely to be caused by configuration issues or the type and intensity of queries or indexing on the cluster, as explained on the following page: Heap Size Usage in Elasticsearch – A Detailed Guide with Pictures
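To confirm, you can sample heap usage per node with the node stats API. A minimal sketch, again assuming localhost:9200:

# Show the current heap usage percentage for each node
curl -s "http://localhost:9200/_nodes/stats/jvm?filter_path=nodes.*.name,nodes.*.jvm.mem.heap_used_percent&pretty"

A node whose heap_used_percent stays high and never drops back down after garbage collection runs is a likely candidate for long GC pauses and, eventually, disconnection.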

Configuration issues

Configuration issues typically appear immediately when a node is started or restarted, or when nodes are added to or removed from the cluster. However, some configuration issues only come to the surface when the cluster is under stress (see excessive garbage collection above) or loses one or more nodes. Learn more here: Master Node Not Discovered in Elasticsearch – An In Depth Explanation
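As a starting point for a configuration review, these are the discovery-related settings in elasticsearch.yml that most often prevent a node from joining or rejoining a cluster. The values below are illustrative only, for a hypothetical three-node cluster:

# elasticsearch.yml – illustrative discovery settings
cluster.name: my-cluster                                    # must be identical on every node
network.host: 0.0.0.0                                       # bind to an address reachable by the other nodes
discovery.seed_hosts: ["es1:9300", "es2:9300", "es3:9300"]  # transport addresses of the other nodes
cluster.initial_master_nodes: ["es1", "es2", "es3"]         # only used when bootstrapping a brand-new cluster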

Other issues

  • Intentional node restart/reboot
  • Intentional increase or reduction in the number of nodes
  • Hardware / networking issues
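For the networking case, it is worth verifying that each node can reach the other nodes on the transport port (9300 by default), since that is the channel nodes use to communicate with each other. A quick check, with a hypothetical node name:

# Test TCP connectivity to another node's transport port
nc -zv es-node2 9300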

How to prevent node disconnection

It is highly recommended to send your cluster’s monitoring data to an independent Elasticsearch cluster, so that the monitoring data is still available when you need it. The last thing you want is to be unable to see your monitoring data because your Elasticsearch cluster has gone down.
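If you are using the built-in (legacy) monitoring collection, shipping the data to a separate monitoring cluster amounts to configuring an HTTP exporter. A sketch, assuming a monitoring cluster reachable at monitor-es:9200 (recent Elasticsearch versions recommend collecting this data with Metricbeat instead):

# elasticsearch.yml on the production nodes
xpack.monitoring.exporters.remote_monitor:
  type: http
  host: ["http://monitor-es:9200"]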

Look out for warnings and errors in your Elasticsearch logs, which may indicate issues that could bring down a node and possibly your entire cluster. Acting on these issues proactively can resolve them before they cause more serious problems.
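A simple way to surface these proactively is to scan the logs for WARN and ERROR entries. For example, assuming a package install with the default log location (adjust the path and file name to your cluster):

# Show the 20 most recent warnings and errors in the Elasticsearch log
grep -E "\[(WARN |ERROR)\]" /var/log/elasticsearch/elasticsearch.log | tail -n 20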

Optimize your search queries based on this guide: 10 Important Tips to Improve Search in Elasticsearch.

Optimize your indexing performance based on this guide: Improve Elasticsearch Indexing Speed with These Tips.

You can use Opster’s free Check-Up tool to provide you with suggestions to improve your Elasticsearch configuration.

You can also use Opster’s Essentials to resolve and prevent ES incidents, and to get access to advanced tools that improve performance and automate operations.


