Opster Team
This guide will help you check for common problems that cause the log "built a DLS BitSet that uses [{}] bytes; the DLS BitSet cache has a maximum size of [{}] bytes;" to appear. To understand the issues related to this log, read the explanation below about the following Elasticsearch concepts: cache, document and plugin.
Overview
Elasticsearch uses three types of caches to improve the efficiency of its operations:
- Node query cache
- Shard request cache
- Field data cache
How they work
The node query cache maintains the results of queries used in a filter context. Results are evicted on a least-recently-used (LRU) basis.
The shard request cache maintains the results of frequently used search requests with size=0, most notably the results of aggregations. This cache is particularly relevant for logging use cases, where data on older indices is no longer updated, so the results of regular aggregations can be kept in the cache and reused.
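For illustration, here is a minimal sketch of the kind of request this cache serves, assuming a hypothetical logging index named web-logs with an @timestamp field. Because size=0, the response contains only aggregation results and is eligible for the shard request cache:

POST /web-logs/_search
{
  "size": 0,
  "aggs": {
    "errors_per_day": {
      "date_histogram": {
        "field": "@timestamp",
        "calendar_interval": "day"
      }
    }
  }
}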
The field data cache is used for sorting and aggregations. To keep these operations fast, Elasticsearch loads the relevant field values into memory.
Examples
Elasticsearch usually manages its caches behind the scenes, without the need for any specific settings. However, it is possible to limit the amount of memory each node dedicates to a given cache type by putting the following in elasticsearch.yml:
indices.queries.cache.size: 10%
indices.fielddata.cache.size: 30%
Note that the query cache value above is its default, so there is no need to set it explicitly; the field data cache, by contrast, is unbounded by default, so indices.fielddata.cache.size is the way to cap it. The defaults are good for most use cases and should rarely be modified.
You can monitor the use of caches on each node like this:
GET /_nodes/stats/indices/fielddata
GET /_nodes/stats/indices/query_cache
GET /_nodes/stats/indices/request_cache
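If a cache has grown too large, it can also be cleared manually using the clear cache API. The index name my-index below is hypothetical; the second request clears the caches of all indices:

POST /my-index/_cache/clear?fielddata=true
POST /_cache/clear

Clearing a cache is disruptive, since subsequent requests must rebuild it, so this is best reserved for troubleshooting.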
Notes and good things to know
Construct your queries with reusable filters. Certain parts of your query are good candidates to be reused across a large number of queries, and you should design your queries with this in mind. Anything that does not need to be scored should go in the filter section of a bool query. For example, time ranges, language selectors, or clauses that exclude inactive documents are all likely to appear in a large number of queries, and should be included in the filter part of the query so that they can be cached and reused.
In particular, take care with time filters. "now-15m" cannot be reused, because "now" continually changes as the time window moves on. On the other hand, "now-15m/m" rounds to the minute, and can be reused (via the cache) for 60 seconds before rolling over to the next minute.
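For instance, the only difference between these two hypothetical range filters is the rounding, yet the first produces a new cache entry on virtually every execution, while the second can be served from the cache for up to a minute:

{ "range": { "@timestamp": { "gte": "now-15m" } } }
{ "range": { "@timestamp": { "gte": "now-15m/m" } } }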
For example, when a user enters the search term "brexit", we may also want to filter on language and time period to return relevant articles. The query below leaves only the term "brexit" in the "must" part of the query, because this is the only part that should affect the relevance score. The time filter and language filter can then be reused across many different searches.
POST results/_search
{
  "query": {
    "bool": {
      "must": [
        {
          "match": {
            "message": {
              "query": "brexit"
            }
          }
        }
      ],
      "filter": [
        {
          "range": {
            "@timestamp": {
              "gte": "now-10d/d"
            }
          }
        },
        {
          "term": {
            "lang.keyword": {
              "value": "en",
              "boost": 1
            }
          }
        }
      ]
    }
  }
}
Limit the use of field data. Be careful about setting fielddata=true in your mapping on fields where the number of distinct terms results in high cardinality. If you must use fielddata=true, you can also reduce the field data cache requirements for a given index by applying a field data frequency filter, as sketched below.
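A field data frequency filter can be added to the field mapping so that only terms within a given document-frequency range are loaded into the cache. The index name and thresholds below are illustrative only:

PUT results
{
  "mappings": {
    "properties": {
      "message": {
        "type": "text",
        "fielddata": true,
        "fielddata_frequency_filter": {
          "min": 0.001,
          "max": 0.1,
          "min_segment_size": 500
        }
      }
    }
  }
}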
Document in Elasticsearch
What is an Elasticsearch document?
While an SQL database has rows of data stored in tables, Elasticsearch stores data as multiple documents inside an index. This is where the analogy ends, however, since the way that Elasticsearch treats documents and indices differs significantly from a relational database.
For example, documents could be:
- Products in an e-commerce index
- Log lines in a data logging application
- Invoice lines in an invoicing system
Document fields
Each document is essentially a JSON structure, which is ultimately considered to be a series of key:value pairs. These pairs are then indexed in a way that is determined by the document mapping. The mapping defines each field's data type as text, keyword, date, float, geo_point or various other data types.
Elasticsearch documents are described as schema-less because Elasticsearch does not require us to pre-define the index field structure, nor does it require all documents in an index to have the same structure. However, once a field is mapped to a given data type, then all documents in the index must maintain that same mapping type.
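To illustrate, suppose a hypothetical index myindex in which the first document indexed causes the count field to be dynamically mapped as a number; a later document with an incompatible value for that field will then be rejected:

POST /myindex/_doc
{ "count": 1 }

POST /myindex/_doc
{ "count": "five" }

The second request fails with a mapper_parsing_exception, because "five" cannot be parsed into the numeric type already assigned to the field.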
Each field can also be mapped in more than one way in the index. This can be useful because we may want a keyword structure for aggregations, and at the same time be able to keep an analysed data structure which enables us to carry out full text searches for individual words in the field.
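A common sketch of this pattern maps a field as text for full-text search and adds a keyword sub-field for aggregations and exact matches; the index and field names here are hypothetical:

PUT /products
{
  "mappings": {
    "properties": {
      "title": {
        "type": "text",
        "fields": {
          "raw": { "type": "keyword" }
        }
      }
    }
  }
}

Full-text queries can then target title, while aggregations, sorting and exact filtering use title.raw.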
For a full discussion on mapping, please see the official Elasticsearch mapping documentation.
Document source
An Elasticsearch document's _source consists of the original JSON source data as provided at index time. It is returned by default whenever a document is fetched or matched by a search query.
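Because the _source is returned by default, it can be suppressed or filtered when only part of it is needed. The examples below assume the users index created later in this guide (_source_includes is the parameter name in 7.x and later):

GET /users/_doc/1?_source=false
GET /users/_doc/1?_source_includes=name,email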
Document metadata
Each document is also associated with metadata, the most important items being:
_index – The index where the document is stored
_id – The unique ID which identifies the document in the index
Documents and index architecture
Note that different applications could consider a “document” to be a different thing. For example, in an invoicing system, we could have an architecture which stores invoices as documents (1 document per invoice), or we could have an index structure which stores multiple documents as “invoice lines” for each invoice. The choice would depend on how we want to store, map and query the data.
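As a purely hypothetical sketch of the two designs, the first stores an entire invoice as a single document with its lines as an array, while the second stores one document per invoice line:

POST /invoices/_doc
{
  "invoice_id": "INV-001",
  "customer": "Acme",
  "lines": [
    { "item": "Widget", "qty": 2, "price": 9.99 },
    { "item": "Gadget", "qty": 1, "price": 24.50 }
  ]
}

POST /invoice-lines/_doc
{ "invoice_id": "INV-001", "item": "Widget", "qty": 2, "price": 9.99 }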
Examples:
Creating a document in the users index:
POST /users/_doc
{
  "name" : "Petey",
  "lastname" : "Cruiser",
  "email" : "petey@gmail.com"
}
In the above request, we haven't specified an ID for the document, so the index operation generates a unique ID for it. Here, _doc is the document type (see the note on types below).
POST /users/_doc/1
{
  "name" : "Petey",
  "lastname" : "Cruiser",
  "email" : "petey@gmail.com"
}
In the above request, the document will be created with ID 1.
You can use the following GET request to retrieve a document from the index by its ID:
GET /users/_doc/1
Below is the result, which contains the document (in the _source field) along with its metadata:
{ "_index": "users", "_type": "_doc", "_id": "1", "_version": 1, "_seq_no": 1, "_primary_term": 1, "found": true, "_source": { "name": "Petey", "lastname": "Cruiser", "email": "petey@gmail.com" } }
Notes
Starting with version 7.0, mapping types are deprecated. For backward compatibility, all documents in 7.x are placed under the single type '_doc'; from 8.x onward, types are removed from the Elasticsearch APIs entirely.
Overview
A plugin is used to enhance the core functionalities of Elasticsearch. Elasticsearch provides some core plugins as a part of their release installation. In addition to those core plugins, it is possible to write your own custom plugins as well. There are several community plugins available on GitHub for various use cases.
Examples
Get usage instructions for the elasticsearch-plugin script:
sudo bin/elasticsearch-plugin -h
Installing the S3 plugin for storing Elasticsearch snapshots on S3:
sudo bin/elasticsearch-plugin install repository-s3
Removing a plugin:
sudo bin/elasticsearch-plugin remove repository-s3
Installing a plugin using the file’s path:
sudo bin/elasticsearch-plugin install file:///path/to/plugin.zip
Notes and good things to know
- Plugins are installed and removed using the elasticsearch-plugin script, which ships as a part of the Elasticsearch installation and can be found inside the bin/ directory of the Elasticsearch installation path.
- A plugin has to be installed on every node of the cluster and each of the nodes has to be restarted to make the plugin visible.
- You can also download the plugin manually and then install it using the elasticsearch-plugin install command, providing the file name/path of the plugin’s source file.
- When a plugin is removed, you will need to restart every Elasticsearch node in order to complete the removal process.
Common issues
- Managing permission issues during and after plugin installation is the most common problem. If Elasticsearch was installed using the DEB or RPM packages then the plugin has to be installed using the root user. Otherwise you can install the plugin as the user that owns all of the Elasticsearch files.
- In the case of a DEB or RPM package installation, it is important to check the permissions of the plugins directory after installation. If the permissions have been modified, you can restore them using the following command:
chown -R elasticsearch:elasticsearch path_to_plugin_directory
- If your Elasticsearch nodes are running in a private subnet without internet access, you cannot install a plugin directly. In this case, you can download the plugin yourself and either copy the files inside the plugins directory of the Elasticsearch installation path on every node, or install it from the downloaded file as sketched below. The node has to be restarted in this case as well.
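A sketch of the offline installation, assuming the repository-s3 plugin and a 7.17.0 cluster; the download URL follows the pattern of the official Elastic artifacts site, but verify it for your plugin and version:

# On a machine with internet access; the plugin version must match the Elasticsearch version
wget https://artifacts.elastic.co/downloads/elasticsearch-plugins/repository-s3/repository-s3-7.17.0.zip

# Copy the zip to each node, then install from the local file
sudo bin/elasticsearch-plugin install file:///tmp/repository-s3-7.17.0.zip

# Restart the node to make the plugin visible
sudo systemctl restart elasticsearch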

Log Context
The log "built a DLS BitSet that uses [{}] bytes; the DLS BitSet cache has a maximum size of [{}] bytes;" is emitted by the class DocumentSubsetBitsetCache.java.
We extracted the following from the Elasticsearch source code for those seeking in-depth context:
    return NULL_MARKER;
} else {
    final BitSet bs = BitSet.of(s.iterator(), context.reader().maxDoc());
    final long bitSetBytes = bs.ramBytesUsed();
    if (bitSetBytes > this.maxWeightBytes) {
        logger.warn("built a DLS BitSet that uses [{}] bytes; the DLS BitSet cache has a maximum size of [{}] bytes;"
                + " this object cannot be cached and will need to be rebuilt for each use;"
                + " consider increasing the value of [{}]",
            bitSetBytes, maxWeightBytes, CACHE_SIZE_SETTING.getKey());
    } else if (bitSetBytes + bitsetCache.weight() > maxWeightBytes) {
        maybeLogCacheFullWarning();
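The warning itself points to the remedy: increase the cache size setting referenced by CACHE_SIZE_SETTING. In recent Elasticsearch versions this corresponds to xpack.security.dls.bitset.cache.size (verify the exact key and default for your version); as a sketch, in elasticsearch.yml:

xpack.security.dls.bitset.cache.size: 20%

This is a static node setting, so every node must be restarted for the change to take effect. The 20% value (a share of the JVM heap) is illustrative, not a recommendation.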
Find & fix Elasticsearch problems
Opster AutoOps diagnoses & fixes issues in Elasticsearch based on analyzing hundreds of metrics.
Fix Your Cluster IssuesConnect in under 2 minutes
Billy McCarthy
Senior SysAdmin at Backblaze