Elasticsearch guides

  • a
  • Autocomplete Guide
    How to avoid critical performance mistakes, why the Elasticsearch default solution doesn't cut it and important implementation considerations. Background All modern-day websites have autocomplete features on their search bar to improve user experience (no one wants to type entire search terms...). It's imperative that the autocomplete be faster than the standard search, as the whole point of autocomplete is to start showing the results while the user is typing. If the latency is(...) Read More
  • b
  • Bootstrap Checks
    Bootstrap Checks in Elasticsearch Overview Elasticsearch has many settings that can cause significant performance problems if not set correctly. To prevent this from happening, Elasticsearch carries out what are known as bootstrap checks to ensure that these important settings have been covered. If any of the checks fail, Elasticsearch will write an error to the logs and will not start. Bootstrap checks are carried out when the network.host setting(...) Read More
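    For illustration, bootstrap checks are enforced as soon as a node binds to a non-loopback address, i.e. once elasticsearch.yml contains something like the following (the address is just an example):
      # elasticsearch.yml - binding to a non-loopback address puts the node in production mode,
      # which enforces the bootstrap checks
      network.host: 0.0.0.0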
  • c
  • Cluster Blocks Read-Only
    An Explanation On cluster.blocks.read_only & cluster.blocks.read_only_allow_delete What Does it Mean? A read-only delete block can be applied automatically by the cluster because of a disk space issue, or may be applied manually by an operator to prevent indexing to the Elasticsearch cluster. There are two types of block: cluster.blocks.read_only and cluster.blocks.read_only_allow_delete. A read-only block is typically applied by an operator because some sort of cluster(...) Read More
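    As a sketch (the index name is a placeholder), the block can be inspected and, once the underlying disk issue has been fixed, removed via the settings APIs:
      # check for cluster-wide and index-level blocks
      GET /_cluster/settings?flat_settings=true
      # clear the block on a specific index once disk space has been freed
      PUT /my-index/_settings
      { "index.blocks.read_only_allow_delete": null }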
  • Cluster Concurrent Rebalance High / Low
    An overview of CLUSTER_CONCURRENT_REBALANCE_HIGH and CLUSTER_CONCURRENT_REBALANCE_LOW. What Does it Mean The cluster concurrent rebalance setting determines the maximum number of shards the cluster can move at any one time in order to rebalance disk usage across the nodes. Moving shards in this way uses cluster resources. Therefore, it’s(...) Read More
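    For illustration, the corresponding dynamic setting can be adjusted through the cluster settings API (2 is the default value):
      PUT /_cluster/settings
      { "transient": { "cluster.routing.allocation.cluster_concurrent_rebalance": 2 } }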
  • d
  • Dangerous Default Settings
    A review of two dangerous default settings in Elasticsearch: Cluster Name and Data Path. Cluster Name is Default ‘elasticsearch’ What Does it Mean? It is important to change the name of the cluster in elasticsearch.yml to avoid Elasticsearch nodes joining the wrong cluster. This is particularly important when development, staging and production environments can find themselves on the same network.  How to Prevent it from Happening If you want to change the name of the(...) Read More
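    A minimal elasticsearch.yml sketch, with placeholder values, that overrides both dangerous defaults:
      # elasticsearch.yml
      cluster.name: my-production-cluster    # avoid the default name "elasticsearch"
      path.data: /var/lib/elasticsearch-data # set the data path explicitly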
  • Dedicated Client Node / Coordinating and Ingest Nodes
    What Does it Mean? There is some confusion in the use of coordinating node terminology. Client nodes were removed from Elasticsearch after version 2.4 and became Coordinating Nodes. At the same time a new node type, Ingest Node, also appeared. Many clusters do not use dedicated coordinating or ingest nodes, and leave the ingest and coordination functions to the data nodes.  Coordinating Node A coordinating (or client) node is a node which has: node.master: false(...) Read More
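    Following the legacy node.* syntax quoted in the excerpt, a dedicated coordinating node would be configured roughly as follows (newer versions express this as node.roles: []):
      # elasticsearch.yml - dedicated coordinating node (legacy syntax)
      node.master: false
      node.data: false
      node.ingest: false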
  • Dedicated Master Node
    What Does it Mean? Master nodes are responsible for actions such as creating or deleting indices, deciding which shards should be allocated on which nodes, and maintaining the cluster state on all of the nodes. The cluster state includes information about which shards are on which node, index mappings, which nodes are in the cluster and other settings necessary for the cluster to operate. Even though these actions are not resource intensive, it is essential for cluster stability to(...) Read More
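    For comparison, a dedicated master node in the same legacy syntax would look roughly like this (newer versions use node.roles: [ master ]):
      # elasticsearch.yml - dedicated master node (legacy syntax)
      node.master: true
      node.data: false
      node.ingest: false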
  • Disk Watermark
    Disk watermarks in Elasticsearch Elasticsearch considers the available disk space on a node before deciding whether to allocate new shards to it, relocate shards away from it, or put all indices into read-only mode, with each action triggered at a different threshold. The reason is that Elasticsearch indices consist of shards which are persisted on data nodes, and low disk space can cause issues. The relevant settings start with cluster.routing.allocation.disk.watermark and have(...) Read More
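    For illustration, the thresholds (shown here with their default values) can be tuned dynamically:
      PUT /_cluster/settings
      {
        "transient": {
          "cluster.routing.allocation.disk.watermark.low": "85%",
          "cluster.routing.allocation.disk.watermark.high": "90%",
          "cluster.routing.allocation.disk.watermark.flood_stage": "95%"
        }
      }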
  • e
  • Enable Adaptive Replica Selection
    What Does it Mean? Adaptive replica selection is a process intended to prevent a distressed Elasticsearch node from delaying the response to queries, while reducing the search load on that node. To understand how it works, imagine a situation where a single node is in distress. This could be because of hardware, network or configuration issues, but as a consequence the response time for shards on that node is much longer than the response time from the other nodes. When an(...) Read More
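    As a sketch, adaptive replica selection is controlled by a single dynamic cluster setting, enabled by default in recent versions:
      PUT /_cluster/settings
      { "transient": { "cluster.routing.use_adaptive_replica_selection": true } }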
  • Enable Shard Rebalance and Shard Allocation
    What does it mean? Cluster shard rebalancing and allocation are often confused with each other. Cluster shard allocation This refers to the process by which any shard, including new, recovered or rebalanced shards, is allocated to Elasticsearch nodes. Cluster shard allocation may be temporarily disabled during maintenance in order to prevent shards from being relocated to nodes which are being restarted and may temporarily leave the cluster. If cluster shard allocation is NOT(...) Read More
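    A typical maintenance sketch: restrict allocation before restarting a node, then reset the setting afterwards ("primaries" is one common choice of restriction):
      PUT /_cluster/settings
      { "persistent": { "cluster.routing.allocation.enable": "primaries" } }
      # ...perform the maintenance, then restore the default:
      PUT /_cluster/settings
      { "persistent": { "cluster.routing.allocation.enable": null } }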
  • Enable X-Pack Basic Security
    What does it mean? The growing popularity of Elasticsearch has made both Elasticsearch and Kibana targets for hackers and ransomware, so it is important never to leave your Elasticsearch cluster unprotected. From Elasticsearch version 6.8 onwards, the X-Pack Basic (free) license includes security in the standard Elasticsearch distribution, while prior to that it was a paid-for feature. How to resolve Bear in mind that the following steps will inevitably require some cluster down(...) Read More
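    At a minimum, and as a sketch only (TLS and built-in user passwords still need to be configured), basic security is switched on in elasticsearch.yml:
      # elasticsearch.yml
      xpack.security.enabled: true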
  • Expensive Queries are Allowed to Run
    What does it mean? By default this setting is set to true. This means that users can use certain query types which require a lot of resources to return results, causing slow results for other users and possibly affecting the stability of the cluster. Disabling expensive queries is particularly appropriate in installations where you have no control over the queries being run (e.g. where users have access to Kibana or other graphical interface tools). Setting this to false will prevent running the following(...) Read More
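    For illustration, the corresponding dynamic setting (available from Elasticsearch 7.7) can be switched off like this:
      PUT /_cluster/settings
      { "persistent": { "search.allow_expensive_queries": false } }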
  • f
  • Flood stage disk watermark
    What Does it Mean? There are various “watermark” thresholds on your Elasticsearch cluster.  As the disk fills up on a node, the first threshold to be crossed will be the “low disk watermark”.  The second threshold will then be the “high disk watermark threshold”.  Finally, the “disk flood stage” will be reached. Once this threshold is passed, the cluster will then block writing to ALL indices that have one shard (primary or replica) on the node which has passed the watermark. Reads(...) Read More
  • h
  • Heap Size Usage and JVM Garbage Collection
    What Does it Mean? The heap size is the amount of RAM allocated to the Java Virtual Machine of an Elasticsearch node.   As a general rule, you should set -Xms and -Xmx to the SAME value, which should be 50% of your total available RAM subject to a maximum of (approximately) 31GB.    A higher heap size will give your node more memory for indexing and search operations. However, your node also requires memory for cache, so using 50% maintains a healthy balance between the two. For(...) Read More
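    For example, on a node with 32GB of RAM the heap might be pinned to 16GB in jvm.options (values here are purely illustrative):
      # jvm.options - set minimum and maximum heap to the same value
      -Xms16g
      -Xmx16g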
  • Heavy Merges Were Detected
    What Does it Mean? Elasticsearch indices are stored in shards, and each shard in turn stores the data on disk in segments. Elasticsearch processes such as updates and deletion can result in many small segments being created on disk, which Elasticsearch will merge into bigger segments in order to optimize disk usage. The merging process uses CPU, memory and disk resources, which can slow down the cluster’s response speed. How to Fix it In general, the Elasticsearch(...) Read More
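    Where an index is no longer being written to, segments can be merged down explicitly (a sketch; the index name is a placeholder, and force-merging an actively written index is not recommended):
      POST /my-index/_forcemerge?max_num_segments=1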
  • High Disk Watermark
    What Does it Mean? There are various “watermark” thresholds on your Elasticsearch cluster.  As the disk fills up on a node, the first threshold to be crossed will be the “low disk watermark”.  The second threshold will then be the “high disk watermark threshold”.  If you pass this threshold then Elasticsearch will try to relocate shards from the node to other nodes in the cluster. How to Resolve it Passing this threshold is a warning and you should not delay in taking action(...) Read More
  • l
  • Loaded Client Nodes/Coordinating Nodes
    What Does it Mean Sometimes you can observe that the CPU and load on some coordinating nodes (client nodes) is higher than on others. This can be caused by applications that are not load balancing correctly across the coordinating nodes, and are making all their HTTP calls to just one or some of the nodes. Possible Effects A saturated coordinating node could cause an increase in search or indexing response latency, or an increase in write queue/search queue when the cluster is under(...) Read More
  • Loaded Data Nodes
    What Does it Mean? Sometimes you can observe that the CPU and load on some of your data nodes is higher than on others. This can occasionally be caused by applications that are not load balancing correctly across the data nodes, and are making all their HTTP calls to just one or some of the nodes. You should fix this in your application. However it is more frequently caused by “hot” indices being located on just a small number of nodes.  A typical example of this would be a(...) Read More
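    A quick way to spot unevenly loaded data nodes is the cat nodes API (a sketch):
      GET /_cat/nodes?v&h=name,node.role,cpu,load_1m,heap.percent&s=cpu:desc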
  • Loaded Master Nodes
    What Does it Mean Sometimes you can observe that the CPU and load on one of your master nodes is higher than on others. This is absolutely normal behavior assuming that the loaded master node is the elected master. Although you need more than one master node (and ideally an odd number), only one of these nodes will be active at any one time. If CPU is very high and the node appears to be overloaded, then this may be cause for concern, since an overloaded master node may cause(...) Read More
  • Low Disk Watermark
    What Does it Mean? There are various “watermark” thresholds on your Elasticsearch cluster.  As the disk fills up on a node, the first threshold to be crossed will be the “low disk watermark”.  Once this threshold is crossed, the Elasticsearch cluster will stop allocating replica shards to that node.  This means that your cluster may become YELLOW. How to Resolve it Passing this threshold is a warning and you should not delay in taking action before the higher thresholds are(...) Read More
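    To see how close each node is to the watermarks, the cat allocation API shows disk usage and shard counts per node (a sketch):
      GET /_cat/allocation?v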
  • m
  • Master Node Not Discovered
    What Does it Mean? An Elasticsearch cluster requires a master node to be identified in the cluster in order for it to start properly. Furthermore, the election of the master node requires a quorum of more than 50% of the nodes with voting rights. If the cluster lacks a quorum, it will not start. For further information please see this guide on the split-brain problem. Possible Causes Incorrect Discovery Settings If you are getting this warning in the(...) Read More
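    On Elasticsearch 7 and above, discovery is typically configured along these lines (host and node names are placeholders):
      # elasticsearch.yml
      discovery.seed_hosts: ["es-node-1", "es-node-2", "es-node-3"]
      cluster.initial_master_nodes: ["es-node-1", "es-node-2", "es-node-3"]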
  • Max Shards Per Node Exceeded
    What Does it Mean? Elasticsearch permits you to set a limit of shards per node, which could result in shards not being allocated once that limit is exceeded. The effect of having unallocated replica shards is that you do not have replica copies of your data, and could lose data if the primary shard is lost or corrupted (cluster yellow). The outcome of having unallocated primary shards is that you are not able to write data to the index at all (cluster red). If you get this warning it(...) Read More
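    As a temporary measure the limit can be raised dynamically (a sketch; 1,000 shards per data node is the default, and reducing the shard count is the better long-term fix):
      PUT /_cluster/settings
      { "persistent": { "cluster.max_shards_per_node": 2000 } }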
  • Minimum Master Node Higher Than
    An Overview on Errors What Does it Mean? This error is produced when the Elasticsearch cluster does not have a “quorum” of nodes with voting rights to elect a new master node.   Nodes with voting rights may be any nodes with either of the following configurations: node.master: true node.voting_only: true It does not matter whether the node is a dedicated master node or not. Quorum can be lost for one or more of the following reasons: Bad configuration (insufficient(...) Read More
  • n
  • Node Disconnected - Possible Root Causes
    What Does it Mean? There are a number of possible reasons for a node to become disconnected from a cluster. It is important to take into account that node disconnection is often a symptom of some underlying problem which must be investigated and solved.  How To Diagnose  The best way to understand what is going on in your cluster is to: Look at monitoring data. Look at Elasticsearch logs. Possible Causes Excessive Garbage Collection from JVM If you can see that the JVM(...) Read More
  • r
  • Register Snapshot Repository
    What does it mean? To backup Elasticsearch indices you need to use the Elasticsearch snapshot mechanism. It is not sufficient to have backups of the individual data directories of the data nodes, because if you were to restore these directories there is no guarantee that the data recovered would form a consistent copy of the cluster. At best, data could be lost, and at worst it could be impossible to restore the cluster entirely. To create and restore snapshots, you need to register(...) Read More
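    A minimal sketch of registering a shared-filesystem repository (names and path are placeholders; for the fs type the path must also be listed under path.repo in elasticsearch.yml):
      PUT /_snapshot/my_backup_repo
      {
        "type": "fs",
        "settings": { "location": "/mnt/es-backups" }
      }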
  • s
  • Search is Slow in nodesNames
    What Does it Mean Slow search might become a bottleneck and may cause a waiting queue to build. There are a number of possible causes for slow search on particular nodes. Your application is not load balancing properly across all of the data nodes. Search and/or indexing operations are concentrated on specific nodes because of the way shards are allocated. The queries running on certain indices (concentrated on the nodes in question) are slow and need optimization. There are other(...) Read More
  • Search Latency In-Depth Guide
    Opster incorporates deep knowledge learned from some of the best Elasticsearch experts around the world. This troubleshooting guide is based on our very own Elasticsearch expert’s first-hand encounter with a burst of search traffic and focuses on how the correct configuration of primary shards and replicas can help ES  handle such cases (explained through a case study). For the basic internals and optimization of shards and replicas please visit our blog post: Elasticsearch Shards and(...) Read More
  • Shards Too Large
    What Does it Mean? It is a best practice that Elasticsearch shard size should not go above 50GB for a single shard. The limit for shard size is not directly enforced by Elasticsearch. However, if you go above this limit you can find that Elasticsearch is unable to relocate or recover index shards (with the consequence of possible loss of data) or you may reach the Lucene hard limit of 2³¹ documents per shard. How to Resolve it If your shards are too large, then you have(...) Read More
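    One common approach (a sketch; index names are placeholders) is to reindex the data into a new index that was created beforehand with more primary shards:
      POST /_reindex
      {
        "source": { "index": "my-large-index" },
        "dest": { "index": "my-large-index-resharded" }
      }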
  • Shards Too Small (Oversharding)
    What Does it Mean? While there is no minimum limit for an Elasticsearch shard size, a large number of shards on an Elasticsearch cluster requires extra resources since the cluster needs to maintain metadata on the state of all the shards in the cluster state. While there is no absolute limit, as a guideline, the ideal shard size is between a few GB and a few tens of GB. You can learn more about scalability in this official guide. This issue should be considered in combination with(...) Read More
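    A quick way to find very small shards is the cat shards API, sorted by store size (a sketch):
      GET /_cat/shards?v&h=index,shard,prirep,docs,store&s=store
    Indices made up of many tiny shards can then be shrunk or reindexed into fewer, larger shards.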
  • Slow Indexing in Nodes
    What does it mean? If the indexing queue is high or produces timeouts, this indicates that one or more Elasticsearch nodes cannot keep up with the rate of indexing. Rejected indexing might occur as a result of slow indexing. Elasticsearch will reject indexing requests when the number of queued index requests exceeds the queue size. See the recommendations below to resolve this. Possible Causes Suboptimal Indexing Procedure Apply as many of the indexing tips as you can from(...) Read More
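    To check whether write queues are backing up or requests are being rejected, the cat thread pool API is useful (a sketch):
      GET /_cat/thread_pool/write?v&h=node_name,active,queue,rejected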
  • Slow Log Search Queries
    Overview Search Queries Slow Log can be very handy while troubleshooting Elasticsearch performance issues. There are two main operations in Elasticsearch (search and indexing) and both are logged separately. This troubleshooting snippet targets search-heavy systems where search TPS (transactions per second) is much higher than the indexing TPS, such as e-commerce sites or Medium- and Quora-like platforms. Slow queries are often caused by:  Poorly written or expensive search(...) Read More
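    For illustration, slow log thresholds are set per index (the index name and threshold values are placeholders):
      PUT /my-index/_settings
      {
        "index.search.slowlog.threshold.query.warn": "10s",
        "index.search.slowlog.threshold.query.info": "2s"
      }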
  • Split Brain
    Overview Elasticsearch is a distributed system and may contain one or more nodes in each cluster. For a cluster to become operational, Elasticsearch needs a quorum of a minimum number of master nodes. By default, every node in Elasticsearch is master eligible. These master nodes are responsible for all the cluster coordination tasks to manage the cluster state.  When you create a cluster, no matter how many nodes you are configuring, the quorum is by default set to one. That means if a(...) Read More
  • Status Red
    A red status indicates that one or more indices do not have allocated primary shards. The causes may be similar to those described in Status Yellow, but certainly indicate that  something is not right with the cluster. What does it mean? A red status indicates that not only has the primary shard been lost, but also that a replica has not been promoted to primary in its place.  However, just as with yellow status, you should not panic and start firing off commands without finding(...) Read More
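    A good first diagnostic step (a sketch) is to ask Elasticsearch directly why a shard is unassigned:
      GET /_cluster/allocation/explain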
  • Status Yellow
    There are several reasons why your Elasticsearch cluster could indicate a yellow status. What does it mean? Yellow status indicates that one or more of the replica shards on the Elasticsearch cluster are not allocated to a node. No need to  panic! There are several reasons why a yellow status can be perfectly normal, and in many cases Elasticsearch will recover to green by itself, so the worst thing you can do is start tweaking things without knowing exactly what the cause is.(...) Read More
  • t
  • The Bootstrap Memory Lock Setting is Set to False
    What Does it Mean Elasticsearch performance can be heavily penalised if the node is allowed to swap memory to disk. Elasticsearch can be configured to automatically prevent memory swapping on its host machine by adding the bootstrap.memory_lock: true setting to elasticsearch.yml. If bootstrap checks are enabled, Elasticsearch will not start if memory swapping is not disabled. You can learn more about bootstrap checks here: Bootstrap Checks in Elasticsearch - A Detailed Guide With(...) Read More
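    A minimal sketch: enable the setting in elasticsearch.yml and, after a restart, verify that the node actually locked its memory (GET /_nodes?filter_path=**.mlockall should report mlockall: true):
      # elasticsearch.yml
      bootstrap.memory_lock: true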
  • u
  • Use of Wildcards Can Accidentally Cause Index Deletion
    What Does it Mean? It is possible to reduce the risk of accidental deletion of indices by preventing the use of wildcards for destructive (delete) operations. How to Fix it To check whether this setting exists on the cluster, run: GET /_cluster/settings/action* Look for a setting called: action.destructive_requires_name To apply this setting use: PUT /_cluster/settings { "transient": { "action.destructive_requires_name": true } } To remove this setting(...) Read More
  • z
  • ZEN_DISCOVERY_SETTINGS_NOT_USED
    What Does it Mean? Zen discovery settings for cluster formation were deprecated in Elasticsearch version 7. If these settings are included in elasticsearch.yml files for version 7 and above, they should be removed to avoid confusion. Reason for the Changes Up until version 6 it was possible, using the zen discovery mechanism, to inadvertently set unsafe settings which could result in a cluster becoming separated into two separate clusters (the split brain problem). The changes(...) Read More
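    For reference, the version 7+ equivalents of the old zen settings look roughly like this (host names are placeholders):
      # elasticsearch.yml (version 7 and above)
      discovery.seed_hosts: ["es-node-1", "es-node-2"]          # replaces discovery.zen.ping.unicast.hosts
      cluster.initial_master_nodes: ["es-node-1", "es-node-2"]  # only needed when bootstrapping a brand-new cluster;
                                                                 # discovery.zen.minimum_master_nodes is no longer used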