Before you dig into reading this guide, have you tried asking OpsGPT what this log means? You’ll receive a customized analysis of your log.
Try OpsGPT now for step-by-step guidance and tailored insights into your OpenSearch operation.
Briefly, this log appears when OpenSearch is performing cleanup operations on a specific repository. This could be due to a scheduled maintenance task or an automatic process triggered by the system. It’s not necessarily an error, but an informational message. However, if the cleanup is causing issues, you can resolve them by checking the repository’s health and configuration, ensuring there’s enough disk space, and verifying the repository’s permissions. If the cleanup operation is not completing successfully, you may need to intervene manually to resolve any underlying issues.
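This log line is printed when a repository cleanup starts, and the same operation can be triggered manually through the repository cleanup API. A minimal sketch, assuming a registered repository named my_repo_01:

POST _snapshot/my_repo_01/_cleanup

The response reports how many stale bytes and blobs were removed, which makes this a convenient first step when investigating leftover snapshot data.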
For a complete solution to your search operation, try AutoOps for Elasticsearch & OpenSearch for free. With AutoOps and Opster’s proactive support, you don’t have to worry about your search operation – we take charge of it. Get improved performance & stability with less hardware.
This guide will help you check for common problems that cause the log “Running cleanup operations on repository [{}][{}]” to appear. To understand the issues related to this log, read the explanation below about the following OpenSearch concepts: repository, cluster, admin.
Overview
An OpenSearch snapshot provides a backup mechanism that takes the current state and data in the cluster and saves it to a repository (read snapshot for more information). The backup process requires a repository to be created first. The repository needs to be registered using the _snapshot endpoint, and multiple repositories can be created per cluster. The following repository types are supported:
Repository types
Repository type | Configuration type |
---|---|
Shared file system | Type: “fs” |
S3 | Type: “s3” |
HDFS | Type: “hdfs” |
Azure | Type: “azure” |
Google Cloud Storage | Type: “gcs” |
Examples
To register an “fs” repository:
PUT _snapshot/my_repo_01
{
  "type": "fs",
  "settings": {
    "location": "/mnt/my_repo_dir"
  }
}
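To confirm that the repository was registered, you can fetch its definition back (using the my_repo_01 name from the example above):

GET _snapshot/my_repo_01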
Notes and good things to know
- S3, HDFS, Azure and Google Cloud Storage require a relevant repository plugin to be installed before they can be used for snapshots.
- The setting path.repo: /mnt/my_repo_dir needs to be added to opensearch.yml on all the nodes if you are planning to use the shared file system repository type; otherwise, registering the repository will fail (see the sketch after this list).
- When using remote repositories, the network bandwidth and repository storage throughput should be high enough to complete the snapshot operations normally; otherwise, you will end up with partial snapshots.
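As a minimal sketch of the file system setup described in the notes above, every node’s opensearch.yml would contain the following line (the path is the one from the earlier example):

path.repo: ["/mnt/my_repo_dir"]

Once the repository is registered, you can also ask OpenSearch to verify that all nodes can access and write to it:

POST _snapshot/my_repo_01/_verify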
Overview
An OpenSearch cluster consists of a number of servers (nodes) working together as one. Clustering is a technology which enables OpenSearch to scale up to hundreds of nodes that together are able to store many terabytes of data and respond coherently to large numbers of requests at the same time.
Search or indexing requests will usually be load-balanced across the OpenSearch data nodes, and the node that receives the request will relay requests to other nodes as necessary and coordinate the response back to the user.
Notes and good things to know
The key elements to clustering are:
Cluster State – Information about which indices are in the cluster, their mappings, and any other data that must be shared between all the nodes so that operations across the cluster remain coherent.
Master Node – Each cluster must elect a single master node responsible for coordinating the cluster and ensuring that each node contains an up-to-date copy of the cluster state.
Cluster Formation – OpenSearch requires a set of configurations to determine how the cluster is formed, which nodes can join the cluster, and how the nodes collectively elect a master node responsible for controlling the cluster state. These configurations are usually held in the opensearch.yml config file, environment variables on the node, or within the cluster state.
Node Roles – In small clusters it is common for all nodes to fill all roles: all nodes can store data, become master nodes or process ingestion pipelines. However, as the cluster grows, it is common to allocate specific roles to specific nodes in order to simplify configuration and make operation more efficient. In particular, it is common to define a limited number of dedicated master nodes.
Replication – Data may be replicated across a number of data nodes. This means that if one node goes down, data is not lost. It also means that a search request can be dealt with by more than one node.
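To see these elements in practice, the cluster APIs provide a quick view; a minimal sketch:

GET _cluster/health
GET _cat/nodes?v

The first call returns the overall cluster status, node counts and shard totals; the second lists every node with its roles and marks the currently elected master node.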
Common problems
Many OpenSearch problems are caused by operations which place an excessive burden on the cluster because they require an excessive amount of information to be held and transmitted between the nodes as part of the cluster state. For example:
- Shards too small
- Too many fields (field explosion)
Problems may also be caused by inadequate configurations that leave the OpenSearch cluster unable to safely elect a master node. These types of problems include (see the configuration sketch after this list):
- Master node not discovered
- Split brain problem
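A minimal sketch of the discovery settings in opensearch.yml that guard against both problems, assuming three master-eligible nodes with the hypothetical names node-1, node-2 and node-3:

# Hosts contacted to discover the rest of the cluster:
discovery.seed_hosts: ["node-1", "node-2", "node-3"]
# Master-eligible nodes used to bootstrap the very first election:
cluster.initial_master_nodes: ["node-1", "node-2", "node-3"]

With three master-eligible nodes, an election requires a quorum of two, so a single failed or isolated node can never produce a second master, which is what prevents split brain. (Recent OpenSearch versions also accept the alias cluster.initial_cluster_manager_nodes.)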
Backups
Because OpenSearch is a clustered technology, it is not sufficient to have backups of each node’s data directory. This is because the backups will have been made at different times and so there may not be complete coherency between them. As such, the only way to back up an OpenSearch cluster is through the use of snapshots, which contain the full picture of an index at any one time.
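A minimal sketch of taking such a snapshot into the repository registered earlier (snapshot_1 is an arbitrary name):

PUT _snapshot/my_repo_01/snapshot_1?wait_for_completion=true

Omitting wait_for_completion makes the call return immediately while the snapshot runs in the background; its progress can then be checked with GET _snapshot/my_repo_01/snapshot_1.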
Cluster resilience
When designing an OpenSearch cluster, it is important to think about cluster resilience. In particular – what happens when a single node goes down? And for larger clusters where several nodes may share common services such as a network or power supply – what happens if that network or power supply goes down? This is where it is useful to ensure that the master-eligible nodes are spread across availability zones, and to use shard allocation awareness to ensure that shards are spread across different racks or availability zones in your data center (see the sketch below).
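A minimal sketch of shard allocation awareness in opensearch.yml, assuming each node is tagged with a custom zone attribute (the attribute name and value are examples):

# On each node, declare the zone it lives in (the value differs per node):
node.attr.zone: zone-a
# Tell the allocator to spread shard copies across zones:
cluster.routing.allocation.awareness.attributes: zone

With this in place, OpenSearch avoids placing a shard’s replica in the same zone as its primary, so losing an entire zone still leaves a complete copy of the data.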
Log Context
The log “Running cleanup operations on repository [{}][{}]” is generated by the class TransportCleanupRepositoryAction.java.
We extracted the following from the OpenSearch source code for those seeking in-depth context:
final BlobStoreRepository blobStoreRepository = (BlobStoreRepository) repository;
final StepListener<RepositoryData> repositoryDataListener = new StepListener<>();
repository.getRepositoryData(repositoryDataListener);
repositoryDataListener.whenComplete(repositoryData -> {
    final long repositoryStateId = repositoryData.getGenId();
    logger.info("Running cleanup operations on repository [{}][{}]", repositoryName, repositoryStateId);
    clusterService.submitStateUpdateTask(
        "cleanup repository [" + repositoryName + "][" + repositoryStateId + ']',
        new ClusterStateUpdateTask() {
            private boolean startedCleanup = false;