Max-open-files – How to solve related issues

Opster Team

Feb 2020, Versions: 1.7–8.0

Before you begin reading this guide, we recommend running the Elasticsearch Error Check-Up, which analyzes two JSON files to detect many configuration errors.

Briefly, this error occurs when Elasticsearch reaches the system's limit on open file descriptors, which can cause it to crash or stop functioning. This typically happens when the system's ulimit settings are not set high enough for Elasticsearch to function properly. To resolve the issue, you can increase the ulimit settings for the affected user or process, or reduce the number of files Elasticsearch keeps open by optimizing its configuration or reducing its workload.
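As a rough sketch of the remediation steps above: the service user name `elasticsearch`, the file paths, and the value 65535 are assumptions that may differ on your system, so adjust them to match your installation.

```shell
# Show the open-file limit for the current shell/user
ulimit -n

# What Elasticsearch itself reports (assumes a node listening on localhost:9200):
#   curl -s "localhost:9200/_nodes/stats/process?filter_path=**.max_file_descriptors"

# Raise the limit persistently for the elasticsearch user by adding to
# /etc/security/limits.conf (takes effect on the next login/service restart):
#   elasticsearch  -  nofile  65535

# For systemd-managed installs, set the limit in a unit override instead:
#   [Service]
#   LimitNOFILE=65535
```

After raising the limit, restart the Elasticsearch service and re-check the value reported by the nodes stats API to confirm the new limit took effect.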

To easily locate the root cause and resolve this issue try AutoOps for Elasticsearch & OpenSearch. It diagnoses problems by analyzing hundreds of metrics collected by a lightweight agent and offers guidance for resolving them. Take a self-guided product tour to see for yourself (no registration required).

This guide will help you check for common problems that cause the log "max_open_files" to appear. To understand the issues related to this log, read the explanation below about the following Elasticsearch concept: bootstrap.

Log Context

Log "max_open_files [{}]" class name is Bootstrap.java.
We extracted the following from the Elasticsearch source code for those seeking in-depth context:

            PidFile.create(environment.pidFile(), true);
        }

        if (System.getProperty("es.max-open-files", "false").equals("true")) {
            ESLogger logger = Loggers.getLogger(Bootstrap.class);
            logger.info("max_open_files [{}]", ProcessProbe.getInstance().getMaxFileDescriptorCount());
        }

        // warn if running using the client VM
        if (JvmInfo.jvmInfo().getVmName().toLowerCase(Locale.ROOT).contains("client")) {
            ESLogger logger = Loggers.getLogger(Bootstrap.class);

