Before you begin reading this guide, we recommend you try running the Elasticsearch Error Check-Up, which can help resolve the issues that cause many of these errors.
This guide will help you check for common problems that cause the log “Hadoop authentication method is set to SIMPLE; but a Kerberos principal is” to appear. It’s important to understand the issues related to the log, so to get started, read the general overview on common issues and tips related to the Elasticsearch concepts: plugins, repositories and repository-hdfs.
Advanced users might want to skip right to the common problems section in each concept, or try running the Check-Up, which analyzes your Elasticsearch deployment to pinpoint the cause of many errors and provides actionable recommendations on how to resolve them (a free tool that requires no installation).
Overview
A plugin is used to enhance the core functionalities of Elasticsearch. Elasticsearch provides some core plugins as a part of their release installation. In addition to those core plugins, it is possible to write your own custom plugins as well. There are several community plugins available on GitHub for various use cases.
Examples
Get all the instructions for the plugin
sudo bin/elasticsearch-plugin -h
Installing the S3 plugin for storing Elasticsearch snapshots on S3
sudo bin/elasticsearch-plugin install repository-s3
Removing a plugin
sudo bin/elasticsearch-plugin remove repository-s3
Installing a plugin using the file’s path
sudo bin/elasticsearch-plugin install file:///path/to/plugin.zip
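Listing the plugins that are currently installed on a node (a quick way to verify an installation)
sudo bin/elasticsearch-plugin list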
Notes and good things to know
- Plugins are installed and removed using the elasticsearch-plugin script, which ships as a part of the Elasticsearch installation and can be found inside the bin/ directory of the Elasticsearch installation path.
- A plugin has to be installed on every node of the cluster and each of the nodes has to be restarted to make the plugin visible.
- You can also download the plugin manually and then install it using the elasticsearch-plugin install command, providing the file name/path of the plugin’s source file.
- When a plugin is removed, you will need to restart every Elasticsearch node in order to complete the removal process.
Common issues
- Managing permission issues during and after plugin installation is the most common problem. If Elasticsearch was installed using the deb or rpm package, the plugin has to be installed as the root user; otherwise, install the plugin as the user that owns all of the Elasticsearch files.
- In the case of a deb or rpm package installation, it is important to check the permissions of the plugins directory after plugin installation, and to restore them if they have been modified, using the following command:
chown -R elasticsearch:elasticsearch path_to_plugin_directory
- If your Elasticsearch nodes are running in a private subnet without internet access, you cannot install a plugin directly. In this case, you can download the plugin files in advance and install them on every node from the local file path, or copy them into the plugins directory of the Elasticsearch installation path (see the sketch after this list). The nodes have to be restarted in this case as well.
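As a sketch of the offline approach above: the plugin zip can be downloaded on a machine that does have internet access, copied to each node, and installed from the local file path before restarting the node. The plugin version, host name and paths below are illustrative only and must be adjusted to your environment (the plugin version has to match your Elasticsearch version exactly).
# On a machine with internet access: download the plugin zip (repository-hdfs shown as an example)
wget https://artifacts.elastic.co/downloads/elasticsearch-plugins/repository-hdfs/repository-hdfs-7.10.2.zip
# Copy the zip to each Elasticsearch node (placeholder host and path)
scp repository-hdfs-7.10.2.zip user@es-node-1:/tmp/
# On every node: install from the local file, then restart Elasticsearch
sudo bin/elasticsearch-plugin install file:///tmp/repository-hdfs-7.10.2.zip
sudo systemctl restart elasticsearch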
Overview
An Elasticsearch snapshot provides a backup mechanism that takes the current state and data in the cluster and saves it to a repository (read the glossary term snapshot for more information). The backup process requires a repository to be created first. The repository needs to be registered using the _snapshot endpoint, and multiple repositories can be created per cluster. The following repository types are supported:
Repository types
| Repository type | Configuration type |
|---|---|
| Shared file system | Type: “fs” |
| S3 | Type: “s3” |
| HDFS | Type: “hdfs” |
| Azure | Type: “azure” |
| Google Cloud Storage | Type: “gcs” |
Examples
To register a repository of type fs:
PUT _snapshot/my_repo_01
{
  "type": "fs",
  "settings": {
    "location": "/mnt/my_repo_dir"
  }
}
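Since the log in question comes from the HDFS repository plugin, here is a minimal sketch of registering a repository of type hdfs; the repository name, uri and path values are placeholders to be replaced with your own:
PUT _snapshot/my_hdfs_repo
{
  "type": "hdfs",
  "settings": {
    "uri": "hdfs://namenode:8020/",
    "path": "elasticsearch/snapshots/my_hdfs_repo"
  }
}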
Notes and good things to know
- S3, HDFS, Azure and Google Cloud Storage each require the relevant repository plugin to be installed before they can be used for a snapshot.
- The setting path.repo: /mnt/my_repo_dir needs to be added to elasticsearch.yml on all the nodes if you are planning to use a repository of type fs; otherwise, registering the repository will fail (see the sketch after this list).
- When using remote repositories, the network bandwidth and repository storage throughput should be high enough to complete the snapshot operations normally; otherwise you will end up with partial snapshots.
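As a sketch of the shared file system case mentioned above (the directory path and the repository/snapshot names are placeholders), the repository location is whitelisted in elasticsearch.yml on every node, and a snapshot can then be taken against the registered repository:
# elasticsearch.yml (on all nodes, followed by a restart)
path.repo: ["/mnt/my_repo_dir"]
# Take a snapshot into the repository registered earlier
PUT _snapshot/my_repo_01/snapshot_1?wait_for_completion=true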
Log Context
The log “Hadoop authentication method is set to [SIMPLE]; but a Kerberos principal is” is generated by the class HdfsRepository.java.
We extracted the following from the Elasticsearch source code for those seeking an in-depth context:
// Check if the user added a principal to use, and that there is a keytab file provided
String kerberosPrincipal = repositorySettings.get(CONF_SECURITY_PRINCIPAL);

// Check to see if the authentication method is compatible
if (kerberosPrincipal != null && authMethod.equals(AuthenticationMethod.SIMPLE)) {
    logger.warn("Hadoop authentication method is set to [SIMPLE]; but a Kerberos principal is " +
        "specified. Continuing with [KERBEROS] authentication.");
    SecurityUtil.setAuthenticationMethod(AuthenticationMethod.KERBEROS, hadoopConfiguration);
} else if (kerberosPrincipal == null && authMethod.equals(AuthenticationMethod.KERBEROS)) {
    throw new RuntimeException("HDFS Repository does not support [KERBEROS] authentication without " +
        "a valid Kerberos principal and keytab. Please specify a principal in the repository settings with [" +
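In other words, the warning appears when a Kerberos principal is specified in the repository settings while the Hadoop authentication method resolves to SIMPLE; the plugin then continues with KERBEROS on its own. A minimal sketch of a registration that declares the principal explicitly is shown below; the repository name, uri, path and principal are placeholders, and the setting key security.principal corresponds to the CONF_SECURITY_PRINCIPAL constant referenced in the snippet above. The matching keytab file also has to be made available to the repository-hdfs plugin on every node (typically as a krb5.keytab inside the plugin’s configuration directory).
PUT _snapshot/my_secure_hdfs_repo
{
  "type": "hdfs",
  "settings": {
    "uri": "hdfs://namenode:8020/",
    "path": "elasticsearch/snapshots/my_secure_hdfs_repo",
    "security.principal": "elasticsearch@REALM.EXAMPLE.COM"
  }
}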