How to Upgrade Elasticsearch Versions

By Opster Team

Updated: Jan 28, 2024 | 7 min read

For guides on how to upgrade specific versions, see:

  1. How to Upgrade Elasticsearch from Version 5 to Version 6
  2. How to Upgrade Elasticsearch from Version 6 to Version 7
  3. How to Upgrade Elasticsearch from Version 7 to Version 8

Introduction 

Updating distributed systems, including Elasticsearch, can be a complex process due to the large volume of data, the number of nodes involved and the different configurations you may have in your cluster. In this article, we explain how to minimize risks when upgrading your Elasticsearch cluster.

There are two approaches to upgrading Elasticsearch versions: a full cluster restart and a rolling restart. In the rest of this article we will discuss each in turn.

Always keep in mind that any changes to your system may result in data loss if you do not follow the instructions correctly. Always test and plan your upgrade carefully and take a backup of your data before performing any upgrades.

We’ll start with a review of important considerations, but you can jump to the instructions for full cluster restart upgrades and rolling restart upgrades anytime. 

Elastic product EOL (End Of Life) dates 

Each Elasticsearch version has its own end-of-life (EOL) date, and Elastic states that each major release of its products is supported for 18 months after the release date.

Elasticsearch, Kibana, Logstash & Beats

Version    EOL Date
5.0.x      2018-04-26
6.0.x      2019-05-14
7.0.x      2020-10-10
7.10.x     2022-05-11
7.11.x     2022-08-10
7.12.x     2022-09-23
7.13.x     2022-11-25
7.14.x     2023-02-03
7.15.x     2023-03-22
7.16.x     2023-06-07
7.17.x     2023-08-01
8.0.x      2023-08-10


Note: In general it is recommended to always upgrade to the highest version possible, subject to the compatibility and availability of any client libraries or plugins you are using. Also note that Elastic no longer publishes EOL dates per minor version; instead, it states that the 8.x release train will be maintained until “the later of 2024-08-10 or 6 months after the release date of 9.0.”

Compatibility matrix

  • Elasticsearch 5.6.16 → Elasticsearch 6.8.23: rolling upgrade supported; no full cluster restart needed; no JDK upgrade needed; Kibana Upgrade Assistant supported
  • Elasticsearch 6.8.23 → Elasticsearch 7.17.x: rolling upgrade supported; no full cluster restart needed; no JDK upgrade needed; Kibana Upgrade Assistant supported
  • Elasticsearch 7.17.x → Elasticsearch 8.7.1: rolling upgrade supported; no full cluster restart needed; JDK upgrade needed if still running on JDK 1.8; Kibana Upgrade Assistant supported

Note: Elasticsearch version 7.17 is the last version that works with JDK 1.8.

Master nodes or non-master nodes? Which should be upgraded first? 

Before starting an upgrade, you need to divide your cluster into: 

  • Master-eligible nodes
  • Master-ineligible nodes

The reason for this categorization is that higher version Elasticsearch master-ineligible nodes can always join a cluster with a lower version master node, but lower version master-ineligible nodes cannot join a higher version cluster.

This means that if you upgrade your master-eligible nodes before the master-ineligible nodes, some master-ineligible nodes may leave the cluster and be unable to rejoin. For this reason it is necessary to upgrade all master-ineligible nodes first.

[Diagram: a cluster containing a mix of old and new master-eligible nodes and old and new master-ineligible nodes during an upgrade]
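
If you are not sure which of your nodes are master-eligible, the cat nodes API can show each node's roles. A minimal example (the column selection is just one possible choice): master-eligible nodes include "m" in the node.role column, and the elected master is marked with "*" in the master column:

GET _cat/nodes?v&h=name,node.role,master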

Preparing to upgrade Elasticsearch nodes

Elasticsearch nodes cannot be downgraded after upgrading. Before starting the upgrade process you should review the following: 

Check the deprecation log

You should read and resolve any issues highlighted in the deprecation log. These logs are usually located in: 

/var/log/elasticsearch/Your-Cluster-Name_deprecation.log
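
In addition to tailing the log file, on Elasticsearch 7.x and 8.x you can query the deprecation info API for a cluster-wide summary of deprecated settings and features (on 6.x the same API lives under /_xpack/migration/deprecations):

GET /_migration/deprecations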

Upgrade Assistant in Kibana 

In the last minor version of each major version, e.g. 6.8.23 or 7.17.10, you will find an Upgrade Assistant to help you ensure that your cluster settings and configurations are compatible with the next version. Use the menu in Kibana to navigate to the assistant. You must clear all issues flagged by the assistant before attempting to upgrade to the next major version.

Review the breaking changes

With each new version, breaking changes documentation is published to make you aware of any functionality that may change or disappear. You should always check it to ensure that none of the affected settings, configurations or mappings are being used in your setup.

Check the Elasticsearch plugins compatibility

If you are using any Elasticsearch plugins, you should check the availability and compatibility of the plugin with the new version.
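
To see which plugins, and which plugin versions, are currently installed on each node, you can use the cat plugins API, for example:

GET _cat/plugins?v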

Set up a test environment

You should test the upgrade process in a test or staging environment before upgrading your production cluster in order to check and resolve all issues before upgrading production.

Take backups and snapshots of your data

Remember that it is not possible to downgrade an Elasticsearch node, and the only practical way to reverse a failed upgrade is to create a new cluster running the old version and restore your data from snapshots. It is therefore essential to take snapshots of all Elasticsearch indices before starting the upgrade process.
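
As a rough sketch, assuming a shared filesystem repository (the repository name, snapshot name and location below are placeholder values, and the location must be listed under path.repo in elasticsearch.yml):

# "my_backup", "pre_upgrade_snapshot" and the location are example values
PUT _snapshot/my_backup
{
  "type": "fs",
  "settings": {
    "location": "/mnt/backups/my_backup"
  }
}

PUT _snapshot/my_backup/pre_upgrade_snapshot?wait_for_completion=true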

Full cluster restart upgrade (offline upgrade)

A full cluster restart upgrade consists of shutting down all the Elasticsearch nodes at once, upgrading all of them and then restarting them. This type of upgrade inevitably requires your Elasticsearch cluster to be down for the duration of the upgrade process.

Offline upgrades are usually easier than online upgrades since you do not have to manage a cluster with different versions of nodes at the same time. 

The steps are:

  1. Disable shard allocation
  2. Optionally stop non-essential indexing and perform a flush (see the flush example below)
  3. Optionally stop the tasks associated with ML jobs and datafeeds
  4. Stop all Elasticsearch nodes and upgrade them
  5. Upgrade any plugins
  6. Start the Elasticsearch cluster: master-eligible nodes first, then master-ineligible nodes
  7. Re-enable shard allocation
  8. Upgrade client libraries to the new version
  9. Optionally restart the tasks associated with ML jobs and datafeeds

Note that in a full cluster restart the master-eligible nodes must be started before the master-ineligible nodes. This ensures that the master nodes can form the cluster so that other nodes can join it, and is the opposite of a rolling restart upgrade, where the master-ineligible nodes should be upgraded before the master-eligible ones.
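
For step 2 above, stopping indexing and flushing persists any operations still held in memory or in the transaction log, which speeds up shard recovery when the nodes come back. A minimal example:

POST _flush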

Rolling restart upgrade (online upgrade)

A rolling restart upgrade will upgrade a cluster without any downtime. In this case, each node is upgraded and restarted in turn without ever stopping the entire Elasticsearch cluster.

It is NOT possible to carry out rolling restart upgrades involving a change in MAJOR versions, with the exception of the following:

  • Upgrading Elasticsearch version 5.6.16 to version 6.x.x
  • Upgrading Elasticsearch version 6.8.23 to version 7.x.x
  • Upgrading Elasticsearch version 7.17.10 to version 8.x.x

For this reason, if you want to carry out a rolling restart upgrade between major versions, you should ALWAYS use the latest minor version as a stepping stone to upgrade to the next major version. For example, if you are using Elasticsearch 5.x.x you can upgrade to 5.6.16 and then to 6.8.23.
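
Before planning the upgrade path, it helps to confirm the exact version your nodes are currently running; querying a node's root endpoint returns a version.number field, for example:

GET /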

The correct order for upgrading nodes

  1. Begin by upgrading nodes that are not master-eligible. To find these nodes, use either the GET /_nodes/_all,master:false/_none API call or locate nodes configured with node.master: false.
  2. Proceed with the upgrade tier-by-tier, starting with the frozen tier. Complete the upgrade for all nodes in each data tier before moving to the next one. Upgrade the frozen tier first, followed by the cold, warm, and finally the hot tier. This ensures that the data can still flow through the tiers during the upgrade. To obtain a list of nodes in a specific tier, use the GET /_nodes request. For example, GET /_nodes/data_frozen:true/_none.
  3. Finally, upgrade the master-eligible nodes. Retrieve a list of these nodes using GET /_nodes/master:true.

Following this order guarantees that all nodes can join the cluster during the upgrade process. Upgraded nodes can join a cluster with an older master, but older nodes may not be able to join a cluster with an upgraded master.
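
During the rolling upgrade it is also useful to keep track of which nodes have already been upgraded. One way to do this (the column selection and sort order are just an example) is with the cat nodes API:

GET _cat/nodes?v&h=name,version,node.role&s=version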

How to upgrade nodes in a rolling upgrade


The process for upgrading your nodes is as follows, upgrading all NON master-eligible nodes first.

  1. Make sure your cluster is stable and green

    You need to make sure that all replicas are available in order to ensure that shutting down the node will not result in loss of data.

  2. Disable unnecessary indexing

    Wherever it is practically feasible to do so, you should stop all indexing processes, since this will increase cluster stability.

  3. Optionally stop the tasks associated with ML jobs and datafeeds

    Even though it is possible to leave your machine learning jobs running during the upgrade process, doing so will put unnecessary pressure on the cluster.

  4. Disable shard allocation

    It is important to stop shards rebalancing so that when you stop a node for upgrade the cluster does not reallocate shards to another node. (See command below).

  5. Stop the Elasticsearch node

    Stop the Elasticsearch node before moving on to the next step.

  6. Upgrade Elasticsearch

    The upgrade method depends on how Elasticsearch was originally installed (e.g. package manager, archive or Docker image).

  7. Upgrade plugins

    Elasticsearch will not start if an installed plugin's version does not exactly match the Elasticsearch version, so upgrade all plugins together with the node.

  8. Start Elasticsearch

    Start Elasticsearch before moving on to the next step.

  9. Re-enable shard allocation

    Using the command given below.

  10. Check that the upgraded node has rejoined the cluster

    Using the command below, you can check how many nodes are in the cluster.

  11. Wait for cluster status to turn green

    The command provided below will also show you the progress of the shard recovery process on the upgraded node, until the cluster reaches a green state.
    Do not rush to upgrade the next node; wait for the cluster to fully recover before moving on. If the cluster does not go green, look in the logs for any issues that may indicate problems with the upgrade or configuration.

  12. Repeat

    Repeat the full process above for each node.

  13. Optionally restart ML jobs and datafeeds
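
For steps 3 and 13, if your cluster runs machine learning jobs, the ML upgrade mode API can pause and resume all of them in bulk rather than stopping each job individually:

POST _ml/set_upgrade_mode?enabled=true

and, once all nodes have been upgraded:

POST _ml/set_upgrade_mode?enabled=false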

To disable shard allocation, run:

PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.enable": "primaries"
  }
}

To re-enable shard allocation, run:

PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.enable": null
  }
}

Get cluster status and see how many nodes are in the cluster using:

GET _cluster/health
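
If you prefer to block until the cluster recovers rather than polling, the cluster health API accepts a wait_for_status parameter (the timeout value here is just an example), and the cat recovery API lists shard recoveries that are still in progress:

GET _cluster/health?wait_for_status=green&timeout=60s

GET _cat/recovery?active_only=true&v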
