OpenSearch Disk Threshold

By Opster Team

Updated: Mar 29, 2023 | 1 min read

Before you begin reading this guide, we recommend you run the free OpenSearch Error Check-Up which analyzes 2 JSON files to detect many configuration errors.

To automate the monitoring and management of your OpenSearch disk thresholds, try AutoOps for OpenSearch. AutoOps offers a range of automated features to help optimize your OpenSearch settings and boost performance.

Overview

OpenSearch uses several disk watermark settings to manage hard disk storage across the cluster.

What it’s used for

  • OpenSearch will actively try to relocate shards away from nodes which exceed the disk watermark high threshold.
  • OpenSearch will NOT allocate new shards or relocate shards onto nodes which exceed the disk watermark low threshold.
  • OpenSearch will prevent all writes to an index which has any shard on a node that exceeds the disk.watermark.flood_stage threshold.
  • The cluster info update interval defines how often OpenSearch re-checks the disk usage on each node.
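
To inspect the watermark values currently in effect (including the defaults), you can query the cluster settings. A minimal sketch; the flat_settings parameter just makes the output easier to scan:

GET _cluster/settings?include_defaults=true&flat_settings=true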

Examples

PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "85%",
    "cluster.routing.allocation.disk.watermark.high": "90%",
    "cluster.routing.allocation.disk.watermark.flood_stage": "95%",
    "cluster.info.update.interval": "1m"
  }
}
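
To see how close each node is to these thresholds, you can check per-node disk usage with the cat allocation API. A minimal sketch; the h= column selection is optional:

GET _cat/allocation?v&h=node,disk.percent,disk.used,disk.avail,disk.total

The disk.percent column is the used-space percentage that the percentage-based watermarks are compared against.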

Notes and good things to know

  • You can use absolute values (100gb) or percentages (90%), but you cannot mix the two on the same cluster (see the example at the end of these notes).
  • In general, it is recommended to use percentages, since these will keep working if the disks are resized.
  • You can put the cluster settings in the opensearch.yml of each node, but it is recommended to use the PUT _cluster/settings API because it is easier to manage and ensures that the settings are consistent across the cluster.
  • OpenSearch comes with sensible defaults for these settings, so think twice before modifying them. If you find you are spending a lot of time fine-tuning these settings, then it is probably time to invest in new disk space.
  • If the flood_stage threshold has been exceeded, then once you delete data, OpenSearch should automatically detect that the block can be released (bearing in mind the update interval, which could be, for instance, one minute). However, if you want to accelerate this process, you can unblock an index manually with the following call:
PUT /my_index/_settings
{
  "index.blocks.read_only_allow_delete": null
}
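
As mentioned in the notes above, the watermarks can also be expressed as absolute values. When byte values are used, each threshold refers to the minimum free disk space that must remain on a node, so the values decrease from low to flood_stage. A minimal sketch, with purely illustrative sizes:

PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "50gb",
    "cluster.routing.allocation.disk.watermark.high": "25gb",
    "cluster.routing.allocation.disk.watermark.flood_stage": "10gb"
  }
}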

Common problems

Inappropriate cluster settings (for example, a disk.watermark.low that is set too low) can make it impossible for OpenSearch to allocate shards on the cluster. In particular, bear in mind that these parameters work in combination with other cluster settings (for example, shard allocation awareness) which place further constraints on how OpenSearch can allocate shards.
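
If shards stay unassigned and you suspect the disk thresholds are the cause, the cluster allocation explain API shows which allocation deciders (including the disk threshold decider) are rejecting each node. A minimal sketch, which explains the first unassigned shard it finds:

GET _cluster/allocation/explain

You can also send a request body specifying a particular index, shard number and primary flag to get the explanation for a specific shard.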
