Log Can not be imported as a dangling index, as index with same name already exists in cluster metadata – How To Solve Related Issues




Opster Team

Jan-20, Version: 1.7-8.0



Before you begin reading this guide, we recommend you try running the Elasticsearch Error Check-Up, which can resolve issues that cause many log errors (free and no installation required).

 

This guide will help you check for common problems that cause the log “Can not be imported as a dangling index, as index with same name already exists in cluster metadata” to appear. It’s important to understand the issues related to the log, so to get started, read the general overview of common issues and tips related to the Elasticsearch concepts: cluster, dangling index, indices and metadata.


Advanced users might want to skip right to the common problems section in each concept, or try running the Check-Up, which analyzes Elasticsearch to discover the cause of many errors and provides actionable recommendations.


Quick Overview


When you get this log, it means the cluster is trying to import a stale (dangling) index.
How to solve:
1. Call GET _cat/indices?v and look at the uuid column. These UUIDs should match the directory names inside the node data path under “nodes/0/indices”. Any directory whose name does not match one of these UUIDs is a dangling index that the node is trying to import (see the sketch after these steps).
2. For every dangling index, move its directory out of nodes/0/indices and into the same directory structure of a new Elasticsearch node installation. This will result in two nodes, and you can then decide which node you want to keep.
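Below is a minimal sketch of the UUID comparison in step 1. It assumes the cluster is reachable at http://localhost:9200 and that the node data path is /var/lib/elasticsearch/nodes/0/indices; both values are assumptions and should be adjusted to your environment.

    # Minimal sketch: compare index UUIDs known to the cluster with on-disk directories.
    # ES_URL and DATA_PATH are assumptions; adjust them to your environment.
    import json
    from pathlib import Path
    from urllib.request import urlopen

    ES_URL = "http://localhost:9200"
    DATA_PATH = Path("/var/lib/elasticsearch/nodes/0/indices")

    # Same data as GET _cat/indices?v, requested as JSON with only the index and uuid columns
    with urlopen(f"{ES_URL}/_cat/indices?format=json&h=index,uuid") as resp:
        known_uuids = {row["uuid"]: row["index"] for row in json.load(resp)}

    # Any directory whose name is not a known UUID is a candidate dangling index
    for directory in DATA_PATH.iterdir():
        if directory.is_dir() and directory.name not in known_uuids:
            print(f"possible dangling index directory: {directory.name}")

If the script prints any directories, those are the candidates to move out in step 2; do not delete them until you have confirmed which copy of the data you want to keep.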

Log Context

The log “[{}] can not be imported as a dangling index, as index with same name already exists in cluster metadata” is emitted from DanglingIndicesState.java.
We extracted the following from the Elasticsearch source code for those seeking in-depth context:

            final List<IndexMetaData> indexMetaDataList = metaStateService.loadIndicesStates(excludeIndexPathIds::contains);
            // Indices found on disk that are not yet part of the cluster metadata
            Map<Index, IndexMetaData> newIndices = new HashMap<>(indexMetaDataList.size());
            final IndexGraveyard graveyard = metaData.indexGraveyard();
            for (IndexMetaData indexMetaData : indexMetaDataList) {
                if (metaData.hasIndex(indexMetaData.getIndex().getName())) {
                    logger.warn("[{}] can not be imported as a dangling index, as index with same name already exists in cluster metadata",
                        indexMetaData.getIndex());
                } else if (graveyard.containsIndex(indexMetaData.getIndex())) {
                    logger.warn("[{}] can not be imported as a dangling index, as an index with the same name and UUID exist in the " +
                                "index tombstones.  This situation is likely caused by copying over the data directory for an index " +
                                "that was previously deleted.", indexMetaData.getIndex());




 

Related issues to this log

We have gathered selected Q&A from the community and issues from GitHub that can help fix related issues. Please review the following for further information:

1 How To Resolve Dangling Indices Err  

Can Not Be Imported As A Dangling I  

 

About Opster

Opster’s line of products and support services detects, prevents, optimizes and automates everything needed to manage mission-critical Elasticsearch.
