Opster Team
What the security index is
From Elasticsearch version 6.8 onwards, the security feature is available for free. This means you can secure your cluster by creating multiple users and roles, and all of this information is stored in a dedicated index called .security-<es-major-version> (for example, .security-7 on a 7.x cluster).
Please note the dot ‘.’ at the beginning of the index name.
What this error means
An Elasticsearch index can be in several states, and various factors can make it unavailable, for instance missing primary shards or a cluster that has run out of disk space. When Elasticsearch needs to read the user information for a request, several steps occur internally.
Note that `_security` is the endpoint name used for the security APIs. These APIs require Elasticsearch to find the information stored in the security index. The following happens internally when Elasticsearch resolves the user information (its ID, roles, permissions, etc.):
- Elasticsearch takes a frozen, point-in-time view of the security index state, so that the sensitive (security) information is read against a consistent snapshot.
- Elasticsearch checks whether the security index is available.
- If the index isn’t available, there is no point in querying it, so Elasticsearch short-circuits the query and logs the error message below:
security index is unavailable. short circuiting retrieval of user.
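For context, any authenticated request causes Elasticsearch to look up the calling user in the security index. A hypothetical request that exercises this path (host, port and credentials below are placeholders for your own cluster):

```shell
# Fetch a user via the security API; this forces a read of the .security index.
curl -s -u elastic:changeme "http://localhost:9200/_security/user/elastic?pretty"
# If the security index is unavailable, this request fails and the
# "security index is unavailable. short circuiting retrieval of user"
# error appears in the Elasticsearch server log.
```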
Quick troubleshooting steps
- Check whether the `.security` index exists, using the `GET _cat/indices/.security*?v` API. If the index exists, the output of this API will look like this:
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
green open .security-7 9blPln4uSKScEzWtMfJXNA 1 0 7 0 24.3kb 24.3kb
- Check whether the security index is available. There is no direct API that shows this but, as mentioned earlier, a RED cluster state or a lack of disk space can make an index unavailable, so checking for and fixing these issues will help make the index available again.
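The two checks above can be sketched as shell commands (host, port and credentials are placeholders; the parsing step uses the sample `_cat/indices` output shown above):

```shell
# Check 1: does the .security index exist, and is it green?
# curl -s "localhost:9200/_cat/indices/.security*?v"
# Parsing a captured sample of that output:
output="health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
green open .security-7 9blPln4uSKScEzWtMfJXNA 1 0 7 0 24.3kb 24.3kb"
if echo "$output" | awk 'NR > 1 && $3 ~ /^\.security/ && $1 == "green" { found = 1 } END { exit !found }'; then
  echo "security index present and green"
else
  echo "security index missing or unhealthy"
fi
# prints: security index present and green

# Check 2: is the cluster RED, or is a node running out of disk?
# curl -s "localhost:9200/_cluster/health?pretty"   # "status" should not be "red"
# curl -s "localhost:9200/_cat/allocation?v"        # look at the disk.percent column
```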
This Opster Guide can help identify and fix issues caused by low disk space.
Overview
There are various “watermark” thresholds on your Elasticsearch cluster. As the disk fills up on a node, the first threshold to be crossed will be the “low disk watermark”. Once this threshold is crossed, the Elasticsearch cluster will stop allocating shards to that node. This means that your cluster may become yellow.
The second threshold is the “high disk watermark”; once it is crossed, Elasticsearch will try to relocate shards away from the node. Finally, the “disk flood stage” will be reached. Once this threshold is passed, the cluster will block writes to ALL indices that have at least one shard (primary or replica) on the node which has passed the watermark. Reads (searches) will still be possible.
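As a back-of-the-envelope illustration of where the thresholds fall, here is a short shell sketch. The 85%/90%/95% figures are the Elasticsearch defaults; the 100 GB data node is hypothetical:

```shell
# Hypothetical 100 GB data node and the default disk watermarks.
disk_gb=100
low=$(( disk_gb * 85 / 100 ))     # low: shard allocation to this node stops
high=$(( disk_gb * 90 / 100 ))    # high: ES tries to relocate shards away
flood=$(( disk_gb * 95 / 100 ))   # flood stage: writes blocked on affected indices
echo "low=${low}GB high=${high}GB flood=${flood}GB"
# prints: low=85GB high=90GB flood=95GB
```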
How to resolve it
Crossing the low disk watermark is a warning, and you should take action before the higher thresholds are reached. Here are possible actions you can take to resolve the issue:
- Delete old indices
- Remove documents from existing indices
- Increase disk space on the node
- Add new nodes to the cluster
You can see the settings you have applied with this command:
GET _cluster/settings
If they are not appropriate, you can modify them using a command such as below:
PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "85%",
    "cluster.routing.allocation.disk.watermark.high": "90%",
    "cluster.routing.allocation.disk.watermark.flood_stage": "95%",
    "cluster.info.update.interval": "1m"
  }
}
How to avoid it
There are various mechanisms to automatically delete stale data:
- Apply ISM (Index State Management)
Using ISM you can get OpenSearch to automatically delete an index when it reaches a given age or size.
- Use date-based indices
If your application uses date-based indices, then it is easy to delete old indices using a script.
- Use snapshots to store data offline
It may be appropriate to store snapshotted data offline and restore it in the event that the archived data needs to be reviewed or studied.
- Automate / simplify process to add new data nodes
Use automation tools such as Terraform to automate the addition of new nodes to the cluster. If this is not possible, at the very least ensure you have a clearly documented process to create new nodes, add TLS certificates and configuration, and bring them into the OpenSearch cluster in a short and predictable time frame.
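The date-based deletion idea above can be sketched as a small script. The logs-YYYY.MM.DD naming is an assumption; adapt the pattern and the retention period to your own indices. In production, the index list would come from `_cat/indices` rather than being hard-coded:

```shell
# Select date-based indices older than a retention cutoff for deletion.
retention_days=30
# GNU date first, BSD/macOS date as a fallback:
cutoff=$(date -d "-${retention_days} days" +%Y.%m.%d 2>/dev/null || date -v-${retention_days}d +%Y.%m.%d)

# Hard-coded sample; in reality:
#   indices=$(curl -s "localhost:9200/_cat/indices/logs-*?h=index")
indices="logs-2020.01.01 logs-2020.02.15 logs-$(date +%Y.%m.%d)"

to_delete=""
for idx in $indices; do
  idx_date=${idx#logs-}
  # YYYY.MM.DD compares correctly as a plain string:
  if [[ "$idx_date" < "$cutoff" ]]; then
    to_delete="$to_delete $idx"
    echo "would delete: $idx"
    # curl -X DELETE "localhost:9200/${idx}"   # uncomment to actually delete
  fi
done
```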
Overview
The growing popularity of Elasticsearch has made both Elasticsearch and Kibana targets for hackers and ransomware, so it is important never to leave your Elasticsearch cluster unprotected.
From Elasticsearch version 6.8 onwards, the X-Pack Basic license (free) includes security in the standard Elasticsearch distribution; prior to that, it was a paid feature.
How to resolve it
Bear in mind that the following steps will inevitably require some cluster down time. If your cluster is already in production, it is advisable to carry out the following on a staging environment first to ensure that you familiarise yourself with all the steps involved before causing down-time in production.
Enable security
In elasticsearch.yml:
xpack.security.enabled: true
Do not restart your nodes yet; complete the following steps first.
Create and install TLS certificates on all nodes
Note that the certificates must be inside your elasticsearch configuration directory, with permissions set to allow the elasticsearch user to read the files.
Optionally, you can use different certificates for transport and http, but usually it is sufficient to use the same certificates for both purposes.
It is usually preferable to use self-signed certificates with relatively long expiry dates rather than Let's Encrypt or similar, in order to avoid the complexity of restarting your nodes every time the certificates renew.
You can create certificates for your nodes using the elasticsearch-certutil tool bundled with each Elasticsearch node, as described here: elasticsearch-certutil | Elasticsearch Reference [7.9]
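A typical self-signed setup with elasticsearch-certutil looks roughly like the following, run from the Elasticsearch home directory. The output file names are the tool's defaults; paths and ownership details will vary with your installation:

```shell
# Create a CA, then node certificates in PEM format signed by that CA:
bin/elasticsearch-certutil ca                                     # writes elastic-stack-ca.p12
bin/elasticsearch-certutil cert --pem --ca elastic-stack-ca.p12   # writes certificate-bundle.zip
# The zip contains instance/instance.crt, instance/instance.key and ca/ca.crt.
# Place them inside the Elasticsearch configuration directory and make them
# readable by the elasticsearch user, e.g.:
# unzip certificate-bundle.zip -d /etc/elasticsearch/certs
# chown -R root:elasticsearch /etc/elasticsearch/certs
```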
Include TLS paths in your Elasticsearch config files
Modify elasticsearch.yml:
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.key: certs/instance/instance.key
#xpack.security.transport.ssl.key_passphrase: mypassphrase
xpack.security.transport.ssl.certificate: certs/instance/instance.crt
xpack.security.transport.ssl.certificate_authorities: [ "certs/ca.crt" ]
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.key: certs/instance/instance.key
#xpack.security.http.ssl.key_passphrase: mypassphrase
xpack.security.http.ssl.certificate: certs/instance/instance.crt
xpack.security.http.ssl.certificate_authorities: [ "certs/ca.crt" ]
Restart your nodes
Be prepared for some down time while you restart all your nodes. Even if you get your configuration right the first time, there will still be some down time while you restart all the nodes, set up the passwords for the first time, and finally update all of your client applications with the new configuration.
Check your logs on the Elasticsearch nodes to pick up any configuration errors or permissions issues.
Set up passwords
Run the following command from the /usr/share/elasticsearch directory:
bin/elasticsearch-setup-passwords interactive
Implement HTTPS on all of your Elasticsearch client applications
Once you have implemented HTTPS on your cluster, you will have to update the configurations of all of your client applications. Typically this will involve changing http to https, adding a user and password, and providing the path to the CA certificate that was used to sign the Elasticsearch certificates installed on your cluster.
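As an illustration, a client request changes roughly as follows; the credentials and the certificate path are placeholders for your own setup:

```shell
# Before enabling security:
# curl "http://localhost:9200/_cluster/health?pretty"
# After: the same request over HTTPS, with credentials and the CA certificate:
curl --cacert /etc/elasticsearch/certs/ca/ca.crt \
     -u elastic:yourpassword \
     "https://localhost:9200/_cluster/health?pretty"
```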
Protect your Elasticsearch and Kibana ports from unauthorised users
It is also recommended to restrict access to the Elasticsearch and Kibana ports (9200-9300 and 5601) using your firewall. If this is not possible, then consider using software protection to rate-limit access for users with repeated password failures, e.g. Fail2Ban.
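For example, with ufw the restriction could look like this. The trusted subnet 10.0.0.0/24 is a placeholder; substitute the addresses of your own clients:

```shell
# Allow the Elasticsearch and Kibana ports only from a trusted subnet,
# and deny everyone else:
sudo ufw allow from 10.0.0.0/24 to any port 9200:9300 proto tcp
sudo ufw allow from 10.0.0.0/24 to any port 5601 proto tcp
sudo ufw deny 9200:9300/tcp
sudo ufw deny 5601/tcp
```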
Enabling security without TLS
If you have a single-node cluster that listens only on the loopback interface (localhost), you can enable security without setting up HTTPS. In that case, all that is necessary is the following:
In elasticsearch.yml:
xpack.security.enabled: true
Run the following command from the /usr/share/elasticsearch directory:
bin/elasticsearch-setup-passwords interactive
However note that this only provides a minimum deterrent, and does not provide production-grade security.

Log Context
The log “Security index is unavailable. short circuiting retrieval of user [{}]” is emitted from the class NativeUsersStore.java.
We extracted the following from the Elasticsearch source code for those seeking in-depth context:
final SecurityIndexManager frozenSecurityIndex = securityIndex.freeze();
if (frozenSecurityIndex.isAvailable() == false) {
    if (frozenSecurityIndex.indexExists()) {
        logger.trace("could not retrieve user [{}] because security index does not exist", user);
    } else {
        logger.error("security index is unavailable. short circuiting retrieval of user [{}]", user);
    }
    listener.onResponse(null);
} else {
    securityIndex.checkIndexVersionThenExecute(listener::onFailure, () -> executeAsyncWithOrigin(client.threadPool().getThreadContext(), SECURITY_ORIGIN,