Before you begin reading this guide, we recommend you try running the Elasticsearch Error Check-Up, which analyzes two JSON files to detect many configuration errors.
To easily locate the root cause and resolve this issue, try AutoOps for Elasticsearch & OpenSearch. It diagnoses problems by analyzing hundreds of metrics collected by a lightweight agent and offers guidance for resolving them.
This guide will help you check for common problems that cause the log "Invalid azure client settings with name [" + clientName + "]" to appear. To understand the issues related to this log, read the explanation below about the following Elasticsearch concepts: settings, repositories, plugins, client and repository-azure.
Settings in Elasticsearch
In Elasticsearch, you can configure cluster-level settings, node-level settings and index-level settings. Here is a quick rundown of each level.
A. Cluster settings
These settings can either be:
- Persistent, meaning they apply across restarts, or
- Transient, meaning they won’t survive a full cluster restart.
If a transient setting is reset, the first one of these values that is defined is applied:
- The persistent setting
- The setting in the configuration file
- The default value
The order of precedence for cluster settings is:
- Transient cluster settings
- Persistent cluster settings
- Settings in the elasticsearch.yml configuration file
Examples
An example of persistent cluster settings update:
PUT /_cluster/settings { "persistent" : { "indices.recovery.max_bytes_per_sec" : "500mb" } }
An example of a transient update:
PUT /_cluster/settings { "transient" : { "indices.recovery.max_bytes_per_sec" : "40mb" } }
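To reset a transient setting so that the precedence order described above takes effect, set it to null (shown here for the same recovery setting used above):
PUT /_cluster/settings { "transient" : { "indices.recovery.max_bytes_per_sec" : null } }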
B. Index settings
These are the settings that apply to individual indices. They can be updated with the index settings API.
Examples
The following API call sets the number of replica shards to 5 for the my_index index.
PUT /my_index/_settings { "index" : { "number_of_replicas" : 5 } }
To revert a setting to the default value, use null.
PUT /my_index/_settings { "index" : { "refresh_interval" : null } }
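To verify the settings currently applied to an index (using the same my_index example), you can retrieve them with:
GET /my_index/_settings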
C. Node settings
These settings apply to individual nodes. Nodes can fulfill different roles, including the master, data and coordinating roles, and node settings are set in each node’s elasticsearch.yml file.
Examples
Setting a node to be a data node (in the elasticsearch.yml file):
node.data: true
Disabling the ingest role for the node (which is enabled by default):
node.ingest: false
For production clusters, each node role should run on dedicated machines, with at least two nodes per role for high availability (and a minimum of three master-eligible nodes).
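As an illustration, a dedicated master-eligible node could be configured in elasticsearch.yml using the same legacy role settings shown above (newer Elasticsearch versions use node.roles instead):
node.master: true
node.data: false
node.ingest: false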
Notes and good things to know
- Learning more about cluster settings and index settings is important and can spare you a lot of trouble. For example, if you ingest a huge amount of data into an index whose number of replica shards is set to, say, 5, indexing will be very slow because the data is replicated at the same time it is indexed. To speed up indexing, you can set the number of replica shards to 0 via the settings API and restore the original value once indexing is done.
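For example (using the hypothetical index name my_index), the replicas can be dropped before a bulk load:
PUT /my_index/_settings { "index" : { "number_of_replicas" : 0 } }
and restored once indexing is complete:
PUT /my_index/_settings { "index" : { "number_of_replicas" : 5 } }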
- Another useful example of using cluster-level settings is when a node has just joined the cluster but the cluster is not assigning any shards to it. Although shard allocation is enabled by default on all nodes, someone may have disabled it at some point (for example, in order to perform a rolling restart) and forgotten to re-enable it later. To re-enable shard allocation, update it via the cluster settings API:
PUT /_cluster/settings { "transient" : { "cluster.routing.allocation.enable" : "all" } }
- It’s better to set cluster-wide settings with the cluster settings API rather than in the elasticsearch.yml file, and to use the file only for local, node-specific changes. The API keeps the setting identical on all nodes, whereas settings accidentally defined differently on different nodes in elasticsearch.yml produce discrepancies that are hard to notice.
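To help spot such discrepancies, you can review the settings the cluster is actually using; the cluster settings API accepts the standard include_defaults and flat_settings query parameters:
GET /_cluster/settings?include_defaults=true&flat_settings=true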
- See also: Recovery
Overview
An Elasticsearch snapshot provides a backup mechanism that takes the current state and data in the cluster and saves it to a repository (read snapshot for more information). The backup process requires a repository to be created first. The repository needs to be registered using the _snapshot endpoint, and multiple repositories can be created per cluster. The following repository types are supported:
Repository types
Repository type | Configuration type
---|---
Shared file system | Type: "fs"
S3 | Type: "s3"
HDFS | Type: "hdfs"
Azure | Type: "azure"
Google Cloud Storage | Type: "gcs"
Examples
To register an “fs” repository:
PUT _snapshot/my_repo_01 { "type": "fs", "settings": { "location": "/mnt/my_repo_dir" } }
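Since this guide concerns the repository-azure plugin, here is a sketch of registering an Azure repository once the plugin is installed and the client credentials are configured (my_azure_repo, my-container and the client name default are illustrative values):
PUT _snapshot/my_azure_repo { "type": "azure", "settings": { "client": "default", "container": "my-container" } }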
Notes and good things to know
- S3, HDFS, Azure and Google Cloud Storage repositories require the relevant repository plugin to be installed before they can be used for snapshots.
- If you plan to use a shared file system ("fs") repository, the setting path.repo: /mnt/my_repo_dir needs to be added to elasticsearch.yml on all the nodes (see the example below); otherwise, registering the repository will fail.
- When using remote repositories, the network bandwidth and repository storage throughput should be high enough to complete the snapshot operations normally; otherwise, you will end up with partial snapshots.
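For the shared file system example above, the repository path is declared in elasticsearch.yml on each node like this:
path.repo: ["/mnt/my_repo_dir"]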
Overview
A plugin is used to enhance the core functionalities of Elasticsearch. Elasticsearch provides some core plugins as part of its release installation. In addition to those core plugins, it is possible to write your own custom plugins, and there are several community plugins available on GitHub for various use cases.
Examples
Get all of the instructions for the plugin:
sudo bin/elasticsearch-plugin -h
Installing the S3 plugin for storing Elasticsearch snapshots on S3:
sudo bin/elasticsearch-plugin install repository-s3
Removing a plugin:
sudo bin/elasticsearch-plugin remove repository-s3
Installing a plugin using the file’s path:
sudo bin/elasticsearch-plugin install file:///path/to/plugin.zip
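The repository-azure plugin that this log relates to is installed the same way:
sudo bin/elasticsearch-plugin install repository-azure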
Notes and good things to know
- Plugins are installed and removed using the elasticsearch-plugin script, which ships as a part of the Elasticsearch installation and can be found inside the bin/ directory of the Elasticsearch installation path.
- A plugin has to be installed on every node of the cluster and each of the nodes has to be restarted to make the plugin visible.
- You can also download the plugin manually and then install it using the elasticsearch-plugin install command, providing the file name/path of the plugin’s source file.
- When a plugin is removed, you will need to restart every Elasticsearch node in order to complete the removal process.
Common issues
- Managing permission issues during and after plugin installation is the most common problem. If Elasticsearch was installed using the DEB or RPM packages, then the plugin has to be installed as the root user. Otherwise, you can install the plugin as the user that owns all of the Elasticsearch files.
- In the case of DEB or RPM package installation, it is important to check the permissions of the plugins directory after installation. If the permissions have been modified, you can restore them using the following command:
chown -R elasticsearch:elasticsearch path_to_plugin_directory
- If your Elasticsearch nodes are running in a private subnet without internet access, you cannot install a plugin directly. In this case, you can simply download the plugins and copy the files inside the plugins directory of the Elasticsearch installation path on every node. The node has to be restarted in this case as well.
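After copying the plugin files and restarting, you can verify that the plugin is visible on each node with:
sudo bin/elasticsearch-plugin list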
Log Context
Log "Invalid azure client settings with name [" + clientName + "]" class name is AzureStorageService.java. We extracted the following from the Elasticsearch source code for those seeking an in-depth context:
    throw new SettingsException("Unable to find client with name [" + clientName + "]");
}
try {
    return new Tuple<>(buildClient(azureStorageSettings), () -> buildOperationContext(azureStorageSettings));
} catch (InvalidKeyException | URISyntaxException | IllegalArgumentException e) {
    throw new SettingsException("Invalid azure client settings with name [" + clientName + "]", e);
}
}

private CloudBlobClient buildClient(AzureStorageSettings azureStorageSettings) throws InvalidKeyException, URISyntaxException {
    final CloudBlobClient client = createClient(azureStorageSettings);
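This exception wraps an InvalidKeyException, URISyntaxException or IllegalArgumentException, so it typically means that the account name, key or endpoint configured for the named client cannot be used to build the Azure client (for example, a malformed or incomplete account key). A common remediation, assuming the default client name, is to re-add the secure settings to the keystore on every node and then reload them:
bin/elasticsearch-keystore add azure.client.default.account
bin/elasticsearch-keystore add azure.client.default.key
POST _nodes/reload_secure_settings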