How To Solve Issues Related to Log – Failed to bulk index audit events:

Updated: Jan-20


Troubleshooting background

To troubleshoot the Elasticsearch log “Failed to bulk index audit events:”, it’s important to understand common problems related to the Elasticsearch concepts bulk, index and plugin. See the detailed explanations below, complete with common problems, examples and useful tips.

Bulk in Elasticsearch

What it is

In Elasticsearch, when using the Bulk API it is possible to perform many write operations in a single API call, which increases the indexing speed. Using the Bulk API is more efficient than sending multiple, separate requests. This can be done for the following four actions:

  • Index
  • Update
  • Create 
  • Delete
Examples

The bulk request below will index a document, delete another document, and update an existing document.

POST _bulk
{ "index" : { "_index" : "myindex", "_id" : "1" } }
{ "field1" : "value" }
{ "delete" : { "_index" : “myindex", "_id" : "2" } }
{ "update" : {"_id" : "1", "_index" : "myindex"} }
{ "doc" : {"field2" : "value5"} }
Notes
  • The Bulk API is useful when you need to index data streams that can be queued up and indexed in batches of hundreds or thousands, such as logs.
  • There is no single correct number of actions to send in one bulk call; the optimum batch size has to be found by experimentation, taking into account the cluster size, number of nodes, hardware specs and document size. Also note that a bulk call can succeed at the HTTP level while individual actions inside it fail, as shown in the illustrative response below.
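
The log message covered in this guide is written when a bulk response reports such item-level failures (the code in the Log Context section checks response.hasFailures()). A trimmed, illustrative failure response might look like the following; the field values are examples only, not output from a real cluster:

{
  "took": 30,
  "errors": true,
  "items": [
    {
      "index": {
        "_index": "myindex",
        "_id": "1",
        "status": 400,
        "error": {
          "type": "mapper_parsing_exception",
          "reason": "failed to parse field [field1]"
        }
      }
    }
  ]
}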

Index in Elasticsearch

What it is

In Elasticsearch, an index (indices in plural) can be thought of as a table inside a database that has a schema and can have one or more shards and replicas. An Elasticsearch index is divided into shards and each shard is an instance of a Lucene index.

Indices are used to store the documents in dedicated data structures corresponding to the data type of fields. For example, text fields are stored inside an inverted index whereas numeric and geo fields are stored inside BKD trees.

Examples
Create Index

The following request creates an index named test_index1 with two shards, each having one replica. Note that the typeless mapping syntax shown here applies to Elasticsearch version 7.x onwards.

PUT /test_index1?pretty
{
    "settings" : {
        "number_of_shards" : 2,
        "number_of_replicas" : 1
    },
    "mappings" : {
        "properties" : {
            "tags" : { "type" : "keyword" },
            "updated_at" : { "type" : "date" }
        }
    }
}
List Indices

All the index names and their basic information can be retrieved using the following command:

GET _cat/indices?v
Index a document

Let’s add a document to the index with the command below:

PUT test_index1/_doc/1
{
  "tags": [
    "opster",
    "elasticsearch"
  ],
  "date": "01-01-2020"
}
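
To confirm that the document was stored, it can be fetched back by its ID:

GET test_index1/_doc/1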
Query an index
GET test_index1/_search
{
  "query": {
    "match_all": {}
  }
}
Query Multiple Indices

It is possible to search multiple indices with a single request. In a raw HTTP request, index names are sent in comma-separated format, as shown in the example below; when querying through a programming language client such as Python or Java, index names are passed as a list. A wildcard pattern can also be used, as shown in the second example.

GET test_index1,test_index2/_search
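
Index names also accept wildcard patterns, so the same search can target every index that starts with the test_index prefix:

GET test_index*/_search
{
  "query": {
    "match_all": {}
  }
}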
Delete Indices
DELETE test_index1
Common Problems
  • It is good practice to define the settings and mapping of an index wherever possible, because if this is not done, Elasticsearch tries to guess the data type of each field at indexing time. This automatic process can lead to mapping conflicts, duplicate data and incorrect data types being set in the index. If the fields are not known in advance, it’s better to use dynamic index templates (see the sketch at the end of this section).
  • Elasticsearch supports wildcard patterns in index names, which sometimes helps when querying multiple indices but can also be very destructive. For example, it is possible to delete all the indices with a single command:
DELETE /*

To prevent such wildcard deletions, you can add the following line to elasticsearch.yml:

action.destructive_requires_name: true
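
When field names are not known in advance, a dynamic template can control how Elasticsearch maps newly seen fields. The sketch below maps every dynamically added string field as a keyword; it assumes Elasticsearch 7.8 or later (where the composable _index_template API is available), and the template name and index pattern are placeholders:

PUT _index_template/example_template
{
  "index_patterns": ["example-*"],
  "template": {
    "mappings": {
      "dynamic_templates": [
        {
          "strings_as_keywords": {
            "match_mapping_type": "string",
            "mapping": { "type": "keyword" }
          }
        }
      ]
    }
  }
}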

Plugin in Elasticsearch

What it is

A plugin is used to enhance the core functionalities of Elasticsearch. Elasticsearch provides some core plugins as part of the release installation. In addition to those core plugins, it is possible to write your own custom plugins. There are also several community plugins available on GitHub for various use cases.

Examples:
  • Get all the instructions for the plugin usage
sudo bin/elasticsearch-plugin -h
  • Installing the repository-s3 plugin, which is used for storing Elasticsearch snapshots on S3
sudo bin/elasticsearch-plugin install repository-s3
  • Removing a plugin
sudo bin/elasticsearch-plugin remove repository-s3
  • Installing a plugin using the file path
sudo bin/elasticsearch-plugin install file:///path/to/plugin.zip

Notes:
  • Plugins are installed and removed using the elasticsearch-plugin script, which ships as a part of Elasticsearch installation and can be found inside the bin/ directory of the Elasticsearch installation path.
  • A plugin has to be installed on every node of the cluster and each of the nodes has to be restarted to make the plugin visible.
  • You can also download the plugin manually and then install it using the elasticsearch-plugin install command, providing the file name/path of the plugin’s source file.
  • When a plugin is removed, you will need to restart every Elasticsearch node in order to complete the removal process. The commands below can be used to verify which plugins are currently installed.
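
As a quick verification step after installing or removing a plugin, the plugins present on a node can be listed with the elasticsearch-plugin script, and across the whole cluster with the cat API:

sudo bin/elasticsearch-plugin list
GET _cat/plugins?v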

Common Problems:
  • Permission issues during and after plugin installation are the most common problem. If Elasticsearch was installed using the deb or rpm package, the plugin has to be installed as the root user; otherwise, install the plugin as the user that owns all of the Elasticsearch files.
  • In the case of a deb or rpm package installation, it is important to check the permissions of the plugins directory after plugin installation, and to restore them if they have been modified, using the following command:
chown -R elasticsearch:elasticsearch path_to_plugin_directory 
  • If your Elasticsearch nodes are running in a private subnet without internet access, you cannot install a plugin directly from the Elastic repository. In this case, you can download the plugin archive in advance, copy it to every node and install it from the local file path, as sketched below. The nodes have to be restarted in this case as well.
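
A minimal sketch of that offline workflow is shown below; the download URL, plugin version and systemd service name are examples and should be adapted to your environment (the plugin version must match the Elasticsearch version exactly):

# On a machine with internet access, download the plugin archive
wget https://artifacts.elastic.co/downloads/elasticsearch-plugins/repository-s3/repository-s3-7.10.2.zip

# Copy the archive to each node, install it from the local path, then restart the node
sudo bin/elasticsearch-plugin install file:///tmp/repository-s3-7.10.2.zip
sudo systemctl restart elasticsearch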

To help troubleshoot related issues, we have gathered selected Q&A from the community and issues from GitHub. Please review the following for further information:

1. Upgrade Elasticsearch from 6.0.0 to …

2. GitHub issue #15622

3. GitHub issue #12318


Log Context

The log "Failed to bulk index audit events: [{}]" is generated in the class IndexAuditTrail.java.
We have extracted the following from the Elasticsearch source code to provide in-depth context:

             }

            @Override
            public void afterBulk(long executionId, BulkRequest request, BulkResponse response) {
                if (response.hasFailures()) {
                    logger.info("failed to bulk index audit events: [{}]", response.buildFailureMessage());
                }
            }

            @Override
            public void afterBulk(long executionId, BulkRequest request, Throwable failure) {




About Opster

Opster identifies and predicts root causes of Elasticsearch problems, provides recommendations and can automatically perform various actions to prevent issues, optimize performance and save resources.

Learn more: Glossary | Blog | Troubleshooting guides | Error Repository

Need help with any Elasticsearch issue? Contact Opster