Briefly, this warning appears when a cluster state update that the node published locally is neither applied nor marked as failed, usually because of a delay or failure while processing the new cluster state version. Common causes are network issues, heavy load on the cluster, or software bugs. To resolve it, try the following: 1) check and improve network connectivity between nodes, 2) reduce the load on the cluster by optimizing queries or adding resources, 3) upgrade to the latest OpenSearch version to pick up bug fixes, and 4) review the cluster's health and logs for anomalies.
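As a starting point for step 4, the following read-only calls show overall health and whether cluster state updates are queuing up on the elected master (a minimal sketch):

# Overall status, node count and unassigned shards
GET _cluster/health

# Cluster state update tasks still waiting to be processed by the master
GET _cluster/pending_tasks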
This guide will help you check for common problems that cause the log “cluster state with version [{}] that is published locally has neither been processed nor failed” to appear. To understand the issues related to this log, read the explanation below about the following OpenSearch concepts: version, cluster, discovery.
Overview
A version corresponds to OpenSearch’s built-in tracking system that records changes to each document. When a document is indexed for the first time, it is assigned version 1 in its _version field. Every subsequent index, update or delete API call on that document increments _version by 1.
What it is used for
A version is used to handle concurrency issues in OpenSearch, which come into play when multiple users access an index simultaneously. OpenSearch handles this with optimistic locking, using the _version parameter to prevent multiple users from unknowingly overwriting each other’s changes to the same document, and thereby protecting them from producing incorrect data.
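As an illustration, writing the same document twice shows the counter advancing; the ratings index and document ID below are hypothetical:

# First write of the document returns "_version": 1
PUT /ratings/_doc/123
{ "name": "Joker", "rating": 45 }

# A second write to the same ID returns "_version": 2
PUT /ratings/_doc/123
{ "name": "Joker", "rating": 50 }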
Notes
You cannot see the history of a document using _version; OpenSearch does not use _version to keep track of the original changes that were performed on the document. For example, if a document has been updated 10 times, its _version will be 11, but you cannot go back and see what version 5 of the document looked like. This has to be implemented independently.
Common problems
If multiple users try to update the same version of a document at the same time, OpenSearch rejects the later request with a version-conflict error and a 409 status code, indicating that the document has already changed since that version was read. With optimistic locking, the update below only succeeds if the document is still at the specified version:
POST /ratings/_doc/123?version=50
{
  "name": "Joker",
  "rating": 50
}
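Note that recent OpenSearch versions generally reject the internal version parameter for concurrency control on write requests; the equivalent check uses if_seq_no and if_primary_term, as sketched below with illustrative values taken from a previous read of the document:

# Fails with 409 if the document changed since seq_no 50 / primary term 1 were read
PUT /ratings/_doc/123?if_seq_no=50&if_primary_term=1
{
  "name": "Joker",
  "rating": 50
}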
Overview
An OpenSearch cluster consists of a number of servers (nodes) working together as one. Clustering is a technology which enables OpenSearch to scale up to hundreds of nodes that together are able to store many terabytes of data and respond coherently to large numbers of requests at the same time.
Search or indexing requests will usually be load-balanced across the OpenSearch data nodes, and the node that receives the request will relay requests to other nodes as necessary and coordinate the response back to the user.
Notes and good things to know
The key elements to clustering are:
Cluster State – Refers to information about which indices are in the cluster, their data mappings and other information that must be shared between all the nodes to ensure that all operations across the cluster are coherent.
Master Node – Each cluster must elect a single master node responsible for coordinating the cluster and ensuring that each node contains an up-to-date copy of the cluster state.
Cluster Formation – OpenSearch requires a set of configurations to determine how the cluster is formed, which nodes can join the cluster, and how the nodes collectively elect a master node responsible for controlling the cluster state. These configurations are usually held in the opensearch.yml config file, environment variables on the node, or within the cluster state.
Node Roles – In small clusters it is common for all nodes to fill all roles; all nodes can store data, become master nodes or process ingestion pipelines. However, as the cluster grows, it is common to allocate specific roles to specific nodes in order to simplify configuration and make operation more efficient. In particular, it is common to define a limited number of dedicated master nodes.
Replication – Data may be replicated across a number of data nodes. This means that if one node goes down, data is not lost. It also means that a search request can be dealt with by more than one node.
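To inspect these elements on a running cluster, a few read-only APIs can help (a minimal sketch):

# Current cluster state version, elected master and node list
GET _cluster/state/version,master_node,nodes

# Overall health: status, number of nodes, unassigned shards
GET _cluster/health

# Which roles each node holds, and which node is the elected master
GET _cat/nodes?v&h=name,node.role,master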
Common problems
Many OpenSearch problems are caused by operations which place an excessive burden on the cluster because they require an excessive amount of information to be held and transmitted between the nodes as part of the cluster state. For example:
- Shards too small
- Too many fields (field explosion)
Problems may also be caused by inadequate configuration that leaves the OpenSearch cluster unable to safely elect a master node. These types of problems include:
- Master node not discovered
- Split brain problem
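A few quick checks help diagnose both classes of problem (a sketch; on newer OpenSearch releases _cat/master is also available as _cat/cluster_manager):

# List shards sorted by store size to spot indices with many small shards
GET _cat/shards?v&s=store:asc

# Field type counts across the cluster, useful for detecting field explosion
GET _cluster/stats?filter_path=indices.mappings

# Verify that a master node has actually been elected
GET _cat/master?v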
Backups
Because OpenSearch is a clustered technology, it is not sufficient to have backups of each node’s data directory. This is because the backups will have been made at different times and so there may not be complete coherency between them. As such, the only way to back up an OpenSearch cluster is through the use of snapshots, which contain the full picture of an index at any one time.
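For reference, a minimal filesystem snapshot workflow looks like this (a sketch: the repository name my_backup and location /mnt/snapshots are placeholders, and the location must be listed under path.repo on every node):

# Register a shared-filesystem snapshot repository
PUT _snapshot/my_backup
{
  "type": "fs",
  "settings": { "location": "/mnt/snapshots" }
}

# Take a snapshot of all indices plus the cluster state
PUT _snapshot/my_backup/snapshot_1?wait_for_completion=true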
Cluster resilience
When designing an OpenSearch cluster, it is important to think about cluster resilience. In particular: what happens when a single node goes down? And for larger clusters, where several nodes may share common services such as a network or power supply, what happens if that network or power supply fails? This is where it is useful to ensure that the master-eligible nodes are spread across availability zones, and to use shard allocation awareness to ensure that shards are spread across different racks or availability zones in your data center.
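As an illustration, shard allocation awareness based on a custom zone attribute can be configured in opensearch.yml like this (a sketch; the attribute name zone and the zone values are placeholders):

# Tag each node with its physical location (set per node)
node.attr.zone: zone-a

# Spread primaries and replicas across the values of the zone attribute
cluster.routing.allocation.awareness.attributes: zone

# Optionally force awareness so replicas are never all placed in a single zone
cluster.routing.allocation.awareness.force.zone.values: zone-a,zone-b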
Overview
The process known as discovery occurs when an OpenSearch node starts, restarts or loses contact with the master node for any reason. In those cases, the node needs to contact other nodes in the cluster to find any existing master node or initiate the election of a new master node.
How it works
Upon startup, each node looks for other nodes, firstly by contacting the IP addresses of eligible master nodes held in the previous cluster state. If they are not available, it will look for nodes based upon the seed host provider mechanisms available.
Seed host providers may be defined in 3 ways: list based, file based or plugin based. All of these methods provide a list of IP addresses or hostnames which the node should contact in order to obtain a list of master eligible nodes. The node will contact all of these addresses in turn, until either an active master is found, or failing that, until sufficient nodes can be found to elect a new master node.
Examples
The simplest form is to define a static list of seed hosts in opensearch.yml:
discovery.seed_hosts:
  - 192.168.1.10:9300
  - 192.168.1.11
  - seeds.mydomain.com
An alternative way is to refer to a file using the following setting:
discovery.seed_providers: file
The file must be placed at the following path: $OPENSEARCH_PATH_CONF/unicast_hosts.txt
10.10.10.5
10.10.10.6:9305
10.10.10.5:10005
# an IPv6 address
[2001:0db8:85a3:0000:0000:8a2e:0370:7334]:9301
Note that the use of a port is optional. If not used, then the default port range of 9300-9400 will be used.
If you use AWS or GCE, you can install a discovery plugin that obtains the list of seed hosts from the cloud provider’s API. A plugin also exists for Azure (classic), but it has long been deprecated.
AWS plugin
A typical configuration could be as follows:
discovery.seed_providers: ec2
discovery.ec2.tag.role: master
discovery.ec2.tag.environment: dev
discovery.ec2.endpoint: ec2.us-east-1.amazonaws.com
cloud.node.auto_attributes: true
cluster.routing.allocation.awareness.attributes: aws_availability_zone
The above configuration looks for all nodes with a tag called “environment” set to “dev” and a tag called “role” set to “master”, in the AWS region us-east-1. The last two lines set up cluster routing allocation awareness based on AWS availability zones (not required, but useful to have).
GCE plugin
A typical configuration could be as follows:
discovery.seed_providers: gce
cloud.gce.project_id: <your-google-project-id>
cloud.gce.zone: <your-zone>
discovery.gce.tags: <my-tag-name>
The above configuration looks for all virtual machines inside your project and zone that carry the tag name you provide.
Notes and good things to know
Cluster formation depends on correct setup of the network.host settings in opensearch.yml. Make sure that the nodes can reach each other across the network using their IP addresses or hostnames, and that they are not blocked by firewall rules on the required ports.
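A minimal network section of opensearch.yml might look like the following (a sketch; the address is a placeholder, and 9300/9200 are the default transport and HTTP ports):

# Address this node binds to and publishes to the rest of the cluster
network.host: 192.168.1.10

# Node-to-node (transport) communication port
transport.port: 9300

# REST/HTTP client port
http.port: 9200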
Log Context
The log “cluster state with version [{}] that is published locally has neither been processed nor failed” is emitted from the class ZenDiscovery.java.
We extracted the following from the OpenSearch source code for those seeking in-depth context:
boolean sentToApplier = processNextCommittedClusterState("master " + newState.nodes().getMasterNode()
        + " committed version [" + newState.version() + "] source [" + clusterChangedEvent.source() + "]");
if (sentToApplier == false && processedOrFailed.get() == false) {
    assert false : "cluster state published locally neither processed nor failed: " + newState;
    logger.warn("cluster state with version [{}] that is published locally has neither been processed nor failed",
            newState.version());
    publishListener.onFailure(new FailedToCommitClusterStateException("cluster state that is published locally has neither "
            + "been processed nor failed"));
}
}