Elasticsearch Cluster State

By Opster Team

Updated: Mar 14, 2024


To learn how to resolve issues related to large cluster state causing time-outs and errors while syncing, check out this customer post mortem.

Cluster state – introduction and API

Elasticsearch clusters are managed by the master nodes, or more specifically the elected master node. The master node tracks all of the cluster data, including the state of all other nodes, by maintaining a dataset known as the “cluster state”. The cluster state includes metadata about the nodes, indices, shards, shard allocation, index mappings and settings, and more. All of this information must be shared between all the nodes to ensure that operations across the cluster are coherent.
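
You can inspect this data directly with the cluster state API. For example, you can retrieve the full cluster state or, preferably, only the parts you are interested in, since the full response can be very large:

GET _cluster/state
GET _cluster/state/metadata,routing_table
GET _cluster/state/version,master_node,nodes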

The Elasticsearch cluster needs to maintain the cluster state in memory on each and every node, which can require a large amount of resources. If the cluster state becomes too large, it can (at best) reduce performance and (at worst) cause the cluster to become unstable.

The main causes of excessively large cluster state are:

  • Many indices and shards on the cluster 
  • Many fields in the indices
  • Many templates, some of which might not be in use
  • Many ingest pipelines
  • Many stored scripts
  • Large routing tables

In this guide, we’ll be diving into the first two main causes of large cluster state: having too many shards in the cluster and having an excessive number of fields.  

All of the situations listed above can cause cluster instability and performance degradation, which often leads to an increase in cost as well. If you’re looking to reduce the cost of your Elasticsearch deployment, the following solutions could resolve the issues you’re experiencing while also decreasing the resources needed and the cost involved.

Too many shards in the cluster

This is typically caused by the creation of a large number of small indices, also known as oversharding. The underlying cause is usually application related. For example:

  • You have many small, daily indices 
  • Many small customer indices
  • A lack of “housekeeping” to tidy up unnecessary data
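
To get a quick sense of whether oversharding is an issue, you can check the total shard count and list indices sorted by size, for example:

GET _cluster/health?filter_path=active_shards,active_primary_shards
GET _cat/indices?v&h=index,pri,rep,docs.count,store.size&s=store.size:asc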

There are various ways to fix the above:

1. Delete or close indices

The quickest way to reduce the number of shards and improve cluster performance is to delete or close indices that are no longer needed, for example:

DELETE myindex-2014*
POST myindex-2014*/_close

Closing an index releases the memory resources it uses while keeping its data on disk, and is easily and quickly reversed (POST myindex/_open).

2. Reindex small indices into bigger indices

The recommended maximum size for an Elasticsearch shard is 30-50GB, so it would be ideal to reindex smaller indices into bigger indices that are as close to this size as possible. In practice, any size between 10 and 50GB would be reasonable.

For instance, you can:

  • Reindex daily indices into monthly indices
  • Reindex multiple “customer” indices into a single index for all customers
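
For example, a month’s worth of daily indices could be combined into one monthly index using the reindex API (the index names below are just illustrative):

POST _reindex
{
  "source": {
    "index": "myindex-2023-01-*"
  },
  "dest": {
    "index": "myindex-2023-01"
  }
}

Once the reindexing has completed and the results have been verified, the original daily indices can be deleted.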

Sometimes this type of reindexing requires work on the aliases to ensure the data stays consistent and the index queries won’t be affected. This type of automation can be carried out by AutoOps – learn more here.
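
For example, if your applications query through an alias, the alias can be switched over to the new monthly index in a single atomic step (the alias and index names below are placeholders):

POST _aliases
{
  "actions": [
    { "remove": { "index": "myindex-2023-01-*", "alias": "myindex-search" } },
    { "add": { "index": "myindex-2023-01", "alias": "myindex-search" } }
  ]
}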

You can also reduce the number of replicas to 1 or 0.
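
For example, to set a single replica across a group of indices (the index pattern is illustrative):

PUT myindex-*/_settings
{
  "index": {
    "number_of_replicas": 1
  }
}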

3. Use ILM to optimize index shard size

If you have time-based indices (such as logs) then Index Lifecycle Management (ILM) is a very effective tool. Using ILM you can:

  1. Maintain optimal shard size by automatically creating a new index every time a shard reaches the optimal size.
  2. Delete indices automatically once they reach a certain age according to a data retention policy.
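
A minimal ILM policy covering both points might look like the sketch below; the policy name, shard size and retention period are examples that should be adapted to your own requirements (the max_primary_shard_size condition requires a recent Elasticsearch version; older versions can use max_size instead):

PUT _ilm/policy/logs-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_primary_shard_size": "50gb",
            "max_age": "30d"
          }
        }
      },
      "delete": {
        "min_age": "90d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}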

If you aren’t using ILM, it is recommended to disable it. Kibana monitoring, ILM and SLM all create small daily indices that are sometimes empty, so if these features are not necessary, it’s best to disable them to avoid the creation of empty indices.

4. Clean up empty indices

Empty indices also contribute to the number of shards on the cluster and add to the burden of maintaining the cluster state.   

Empty indices are often caused by ILM (Index Lifecycle Management) rolling over indices because they have reached a certain age. For example, if an ILM policy defines that an index should be rolled over every 30 days, an empty index would continue to be rolled over at the specified time even though it is unused, creating a number of unnecessary empty indices. To solve this problem you should ensure that you delete all unnecessary ILM-managed indices.

Sometimes applications can create unnecessary indices which need to be cleaned up periodically. If this is the case, consider using automated scripts to detect unused empty indices and delete them regularly.
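
One simple way to spot empty indices is to list indices sorted by document count, for example:

GET _cat/indices?v&h=index,docs.count,pri,rep,store.size&s=docs.count:asc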

Excessive number of fields in the indices

Elasticsearch mappings for all indices are also stored in the cluster state. If you have a large number of fields (especially if replicated across a large number of indices), then this can also add to the burden of maintaining the cluster state.

Here are a few ways to avoid creating an excessive number of fields and to resolve this issue if it’s present:

1. Eliminate dynamic mapping

Consider switching off dynamic mapping on your index templates. This can be done by adding the following to your templates:

"dynamic": false
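
For example, a composable index template that disables dynamic mapping might look like the sketch below (the template name and index pattern are placeholders):

PUT _index_template/my-logs-template
{
  "index_patterns": ["mylogs-*"],
  "template": {
    "mappings": {
      "dynamic": false
    }
  }
}

With dynamic set to false, unmapped fields are still stored in _source but are not indexed or searchable until they are added to the mapping explicitly.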

2. Optimize dynamic mapping rules / Remove multi-field mappings

If you choose not to disable dynamic mapping, then consider modifying the default dynamic mapping rules so that they do not create multiple fields for every string. By default, Elasticsearch maps each new string value as a text field with a keyword sub-field, doubling the number of fields in your index. You can modify this behavior by changing the dynamic mapping rules, as shown below:

"mappings" : {
            "dynamic_templates" : [
              {
                "analyzed_text_strings" : {
                  "match_pattern" : "regex",
                  "mapping" : {
                    "type" : "text"
                  },
                  "match_mapping_type" : "string",
                  "match" : "description|body|title|message"
                }
              },
              {
                "keyword_strings" : {
                  "mapping" : {
                    "ignore_above" : 256,
                    "type" : "keyword"
                  },
                  "match_mapping_type" : "string"
                }
              }
            ],

The above dynamic templates apply a full-text analyzer only to string fields whose names match description, body, title or message. All remaining strings are indexed only as keyword fields.

This modification is recommended for new indices, but be careful with indices that are already in production. You should also be aware that you would need to modify your search applications and re-index historical data to account for the changed field names, since “my_field.keyword” would need to be replaced by “my_field” without the suffix.

3. Reduce the number of fields in your templates

Often mapping templates contain a large number of unnecessary fields which client applications add “just in case” to avoid the risk of mapping conflicts. This is particularly the case with the “beats” family of applications, whose default templates often declare thousands of fields to cover a large variety of logging applications so that they work “out of the box”. While this can be convenient, it also reduces the performance of your cluster, so consider replacing these templates with reduced versions of your own that contain only the fields you really need. This is especially important if you have a large number of indices linked to these templates.
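
To review which templates exist on your cluster and inspect the fields they declare, you can list them and then fetch individual templates (the template name below is a placeholder):

GET _cat/templates?v&s=name
GET _index_template/my-logs-template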
