Elasticsearch Node Disconnected – Possible Root Causes

Opster Team

July 2020, Version: 1.7-8.0

Before you begin reading the explanation below, try running the free ES Health Check-Up to get actionable recommendations that can improve Elasticsearch performance and prevent serious incidents. It takes just 2 minutes to complete, and you can check your thread pools, memory, snapshots and much more.

What Does it Mean?

There are a number of possible reasons for a node to become disconnected from a cluster. It is important to bear in mind that node disconnection is often a symptom of an underlying problem, which must be investigated and resolved.

How To Diagnose 

The best way to understand what is going on in your cluster is to:

  • Look at monitoring data
  • Look at Elasticsearch logs
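As an illustration of the second point, a short Python sketch that filters an Elasticsearch log for WARN- and ERROR-level entries, which are the ones worth investigating first (the sample lines and node name are taken from the log excerpt discussed below; the helper name is our own):

```python
def suspicious_lines(log_lines):
    """Return log lines at WARN or ERROR level, which merit investigation.

    Elasticsearch log lines carry the level in brackets after the
    timestamp, e.g. "[WARN ]" or "[ERROR]".
    """
    return [line for line in log_lines
            if "[WARN " in line or "[ERROR" in line]

# Example log excerpt (one routine INFO line, one GC overhead warning).
sample = [
    "[2020-04-10T13:35:50,101][INFO ][o.e.n.Node] [ES2-QUERY] started",
    "[2020-04-10T13:35:51,628][WARN ][o.e.m.j.JvmGcMonitorService] "
    "[ES2-QUERY] [gc][550567] overhead, spent [615ms] collecting in the last [1s]",
]

for line in suspicious_lines(sample):
    print(line)
```

In a real deployment you would read the lines from the node's log file rather than a hard-coded list.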

Possible Causes

Excessive Garbage Collection from JVM

If you can see that the JVM heap is not following its regular sawtooth pattern, but is instead showing an irregular curve upwards, or if you see many log entries like this:

[2020-04-10T13:35:51,628][WARN ][o.e.m.j.JvmGcMonitorService] [ES2-QUERY] [gc][550567] overhead, spent [615ms] collecting in the last [1s]

Then you almost certainly have a JVM garbage collection issue. This in turn is likely to be caused by configuration issues or the type and intensity of queries or indexing on the cluster, as explained on the following page: Heap Size Usage in Elasticsearch – A Detailed Guide with Pictures
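To make the overhead figure in that log line concrete, here is a small Python sketch that converts such a line into the percentage of time spent collecting (the regex is tailored to the sample line above, not a general-purpose log parser):

```python
import re

# Matches the timing fields of a JvmGcMonitorService "overhead" line,
# e.g. "spent [615ms] collecting in the last [1s]".
OVERHEAD = re.compile(r"spent \[(\d+)ms\] collecting in the last \[(\d+)s\]")

def gc_overhead_pct(log_line):
    """Return GC time as a percentage of elapsed time, or None for other lines."""
    m = OVERHEAD.search(log_line)
    if not m:
        return None
    collected_ms, window_s = int(m.group(1)), int(m.group(2))
    return 100.0 * collected_ms / (window_s * 1000)

line = ("[2020-04-10T13:35:51,628][WARN ][o.e.m.j.JvmGcMonitorService] "
        "[ES2-QUERY] [gc][550567] overhead, spent [615ms] collecting in the last [1s]")
print(f"{gc_overhead_pct(line):.1f}% of the last second spent in GC")  # 61.5%
```

A node spending over 60% of its time in garbage collection has almost no capacity left to respond to cluster pings, which is exactly how a GC problem turns into a disconnection.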

Configuration Issues

Configuration issues typically appear immediately when a node is started or restarted, or when nodes are added to or removed from the cluster. However, some configuration issues only surface when the cluster is under stress (see excessive garbage collection above) or loses one or more nodes. Learn more here: Master Node Not Discovered in Elasticsearch – An In Depth Explanation
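For example, discovery settings that don't match across nodes can prevent a restarted node from rejoining the cluster. A minimal illustrative elasticsearch.yml fragment for Elasticsearch 7.x and later (cluster name and hostnames here are placeholders):

```yaml
# elasticsearch.yml — illustrative discovery settings; values are placeholders.
cluster.name: my-cluster            # must be identical on every node
discovery.seed_hosts:               # hosts to contact when discovering the cluster
  - es-node-1.example.com
  - es-node-2.example.com
cluster.initial_master_nodes:       # only used when bootstrapping a brand-new cluster
  - es-node-1
```

A node with a mismatched cluster.name or an out-of-date seed host list will log discovery warnings and fail to join, which looks from the outside like a disconnection.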

Other Issues

  • Intentional node restart/reboot
  • Intentional increase or reduction in the number of nodes
  • Hardware or networking issues

How to Prevent it 

It is highly recommended to store your monitoring data on a separate, independent Elasticsearch cluster, so that the data is still available when you need it. The last thing you want is to lose access to your monitoring data because your Elasticsearch cluster has gone down.

Look out for warnings and errors in your Elasticsearch logs, which may indicate issues that could bring down a node, and possibly your entire cluster. Acting on these warnings proactively can resolve issues before they cause more serious problems.

Optimize your search queries based on this guide: 10 Important Tips to Improve Search in Elasticsearch.

Optimize your indexing performance based on this guide: Improve Elasticsearch Indexing Speed with These Tips.

You can use Opster’s free check up tool to provide you with suggestions to improve your Elasticsearch configuration.

You can also use Opster’s Essentials to resolve and prevent ES incidents and get access to advanced tools that improve performance and automate operations.

About Opster

Opster detects, prevents, optimizes and automates everything needed to run mission-critical Elasticsearch.
