Before you begin reading this guide, we recommend you run the Elasticsearch Error Check-Up, which can help resolve issues that cause many errors.
This guide will help you check for common problems that cause the log "Unable to process bulk failure" to appear. It's important to understand the issues related to the log, so to get started, read the general overview of common issues and tips related to the Elasticsearch concepts: bulk, delete, delete-by-query and plugins.
Advanced users might want to skip right to the common problems section in each concept, or try running the Check-Up, which analyses ES to pinpoint the cause of many errors and provides suitable, actionable recommendations on how to resolve them (a free tool that requires no installation).
Overview
In Elasticsearch, when using the Bulk API it is possible to perform many write operations in a single API call, which increases the indexing speed. Using the Bulk API is more efficient than sending multiple separate requests. This can be done for the following four actions: index, create, update and delete.
Examples
The bulk request below will index a document, delete another document, and update an existing document.
POST _bulk
{ "index" : { "_index" : "myindex", "_id" : "1" } }
{ "field1" : "value" }
{ "delete" : { "_index" : "myindex", "_id" : "2" } }
{ "update" : { "_id" : "1", "_index" : "myindex" } }
{ "doc" : { "field2" : "value5" } }
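Bulk failures are easy to miss, because a bulk request returns HTTP 200 even when individual actions fail; the per-item errors are reported inside the response body. One way to surface only the failures is the standard filter_path response-filtering parameter, as sketched below (the index name myindex is just an example):
POST _bulk?filter_path=errors,items.*.error
{ "index" : { "_index" : "myindex", "_id" : "1" } }
{ "field1" : "value" }
{ "delete" : { "_index" : "myindex", "_id" : "2" } }
If all actions succeed, the filtered response contains only "errors": false; any failed action appears with its error type and reason.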
Notes
- The Bulk API is useful when you need to index data streams that can be queued up and indexed in batches of hundreds or thousands, such as logs.
- There is no fixed limit on the number of actions in a single bulk call; you will need to find the optimal batch size by experimentation, taking into account the cluster size, number of nodes, hardware specs, etc.
Overview
DELETE is an Elasticsearch API which removes a document from a specific index. This API requires the index name and the document's _id in order to delete the document.
Delete a document
DELETE /my_index/_doc/1
Notes
- A delete request returns a 404 error code if the document does not exist in the index (an example response is sketched after this list).
- If you want to delete a set of documents that matches a query, you need to use the delete by query API.
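As an illustration of the first note, deleting a document that does not exist returns a 404 status together with a response similar to the trimmed sketch below (field names can vary slightly between versions; the index name and document id are examples):
DELETE /my_index/_doc/42

{
  "_index" : "my_index",
  "_id" : "42",
  "result" : "not_found",
  ...
}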
Overview
Delete-by-query is an Elasticsearch API, introduced in version 5.0, which provides functionality to delete all documents that match the provided query. In earlier versions, users had to install the Delete-By-Query plugin and use the DELETE /_query endpoint for this same use case.
What it is used for
This API is used for deleting all the documents from indices based on a query. By default the request blocks until the deletion is complete, but you can pass wait_for_completion=false to have Elasticsearch run the process in the background, so you don't have to wait for it to finish.
Examples
Delete all the documents of an index without deleting the mapping and settings:
POST /my_index/_delete_by_query?conflicts=proceed&pretty
{
  "query": {
    "match_all": {}
  }
}
The conflicts parameter in the request is used to proceed with the request even if version conflicts occur for some documents. The default behavior is to abort the request altogether.
Notes
- A long-running delete_by_query can be terminated using the _tasks API (see the sketch after this list).
- Inside the query body, you can use the same query syntax that is available under the _search API.
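To sketch how the _tasks API fits in: if you start the delete by query with wait_for_completion=false, Elasticsearch returns a task id instead of blocking, and you can then find and cancel the task. The index name my_index is an example and <task_id> is a placeholder for the id returned in the response:
POST /my_index/_delete_by_query?wait_for_completion=false
{
  "query": {
    "match_all": {}
  }
}

GET _tasks?detailed=true&actions=*/delete/byquery

POST _tasks/<task_id>/_cancel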
Common problems
Elasticsearch takes a snapshot of the index when you issue a delete by query request and uses the _version of the documents to process the request. If a document gets updated in the meantime, it will result in a version conflict error and the delete operation for that document will fail.
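With conflicts=proceed, conflicting documents are skipped and counted rather than failing the whole request, and the counts appear in the response. A trimmed response might look roughly like this (the numbers are made up for illustration):
{
  "took" : 147,
  "timed_out" : false,
  "total" : 119,
  "deleted" : 114,
  "version_conflicts" : 5,
  "failures" : [ ]
}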
Log Context
The log "unable to process bulk failure" is generated by the class TransportDeleteByQueryAction.java.
We extracted the following from the Elasticsearch source code for those seeking in-depth context:
    }
    logger.trace("scrolling document terminated due to scroll request failure [{}]", scrollId);
    finishHim(scrollId, hasTimedOut(), failure);
} catch (Throwable t) {
    logger.error("unable to process bulk failure", t);
    finishHim(scrollId, false, t);
}
}

void finishHim(final String scrollId, boolean scrollTimedOut, Throwable failure) {
Run the Check-Up to get customized recommendations like this:

Heavy merges detected in specific nodes

Description
A large number of small shards can slow down searches and cause cluster instability. Some indices have shards that are too small…

Recommendations
Based on your specific ES deployment you should…
curl -XPUT -H [a customized code snippet to resolve the issue]