Schedule was triggered for job [ – Elasticsearch Log Diagnostic

Opster has analyzed this log so you don’t have to
Navigate directly to one of the sections on this page:

Troubleshooting Background – start here to get the full picture
Log Context – useful for experts
Related Issues – selected resources from the community
About Opster – offering a different approach to troubleshooting Elasticsearch


Troubleshooting background

To troubleshoot the Elasticsearch log “Schedule was triggered for job [” it’s important to understand the related Elasticsearch concepts: indexer, indexing and plugins. See the detailed explanations below, complete with common problems, examples and useful tips.

Elasticsearch Indexing

What it is

Indexing is the process of adding new documents to, or updating existing documents in, an Elasticsearch index.

Examples

In its simplest form, you can index a document like this:

POST /test/_doc
{
  "message": "Opster Rocks Elasticsearch Management"
}

This will create the index “test” (if it doesn’t already exist) and add a document with the source equal to the body of the POST call.  In this case, the ID will be created automatically. If you repeat this command, a second document will be created with an identical source but a different ID.
Alternatively, you can do this: 

PUT /test/_doc/1
{
  "message": "Opster Elasticsearch Management and Troubleshooting"
}

This is almost the same, but in this case, the call sets the ID of the document to 1.  If you repeat the command modifying the message, you will modify the original document, replacing the previous source with the latest source.

However, note that this is NOT the same as an UPDATE operation, which uses a different API and allows us to modify certain fields of the document while leaving others unchanged.
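
As an illustrative sketch (reusing the “test” index and document ID 1 from above, with a hypothetical “views” field), an update via the Update API could look like this:

POST /test/_update/1
{
  "doc": {
    "views": 5
  }
}

This changes only the “views” field; the existing “message” field is left untouched.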

Notes and good things to know

You can set your own ID if necessary (especially if you will later need to update the same document), but this comes with a performance penalty.  If you don’t need to update documents, then let Elasticsearch set its own ID automatically.

If you need to index many documents at once, it is much more efficient to use the BULK API to carry out these operations with a single call.
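
For illustration, a minimal sketch of a bulk request (reusing the hypothetical “test” index) interleaves one action line and one source line per document:

POST /_bulk
{ "index" : { "_index" : "test" } }
{ "message" : "first bulk document" }
{ "index" : { "_index" : "test" } }
{ "message" : "second bulk document" }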

Indexing is not an immediate, automatic process: documents will not be available for search until the index has been refreshed. By default the refresh interval is 1 second. Increasing this interval reduces the indexing burden on the cluster and increases indexing speed. The refresh interval can be modified in the index settings, as shown below.
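
As a sketch only (the “test” index and the 30s value are assumptions for illustration), the refresh interval can be changed via the index settings API:

PUT /test/_settings
{
  "index" : {
    "refresh_interval" : "30s"
  }
}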

You can apply version control by setting the version parameter (?version=3) and indicating version_type=external.  By doing this, Elasticsearch will reject any index request where the version specified is less than or equal to the current version.  This can be useful when running distributed processes where you cannot guarantee that updated documents arrive in the correct order.

PUT test/_doc/1?version=20&version_type=external
{
  "message" : "using external versioning, the document will be modified only if the version is greater than the previous one!"
}

The process of indexing is as follows:

The index request is sent to the primary shard. Once the primary shard has been updated, the request is relayed to the replica shards. The command will not return until at least the primary shard has been updated. For greater resilience, you can require a minimum number of active shard copies before the operation proceeds by using the parameter ?wait_for_active_shards=2, as sketched below.
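
A minimal sketch of such a request (the “test” index is assumed; the value 2 means that the primary plus one replica must be active before indexing proceeds):

PUT /test/_doc/1?wait_for_active_shards=2
{
  "message": "indexed only once at least 2 shard copies are active"
}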

You can also specify which shard the index operation is sent to by using the routing parameter.  There are 2 reasons this might be done:

  • Certain Elasticsearch features (parent-child documents) require that the parent and child documents be held on the same shard.
  • Secondly, it may be possible to increase search speed and reduce load on Elasticsearch by storing similar documents together on the same shard and then specifying the same routing value for both indexing and searching, as sketched after this list.  Although routing can be set explicitly during indexing, this is not recommended; it is preferable to set it up in the index mapping, so that the routing is determined by an ID value on the source document.
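
As an illustrative sketch only (the index name “test” and the routing value “user1” are assumptions), the same routing value can be supplied on both the index and search requests:

PUT /test/_doc/1?routing=user1
{
  "message": "stored on the shard selected by the routing value user1"
}

GET /test/_search?routing=user1
{
  "query": {
    "match": { "message": "stored" }
  }
}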

Plugins in Elasticsearch

What it is

Plugins are used to extend the functionality of Elasticsearch. In addition to the core plugins available to you, it is possible to write custom plugins as well. Plugins are packaged in a zip format with a mandatory file structure.

Examples:
  • Core plugins: X-Pack for security and monitoring, discovery plugins for EC2
  • Adding S3 plugin for storing snapshots on S3
sudo bin/elasticsearch-plugin install repository-s3
  • Adding HDFS plugin for storing snapshots on HDFS
sudo bin/elasticsearch-plugin install repository-hdfs
  • Removing a plugin
sudo bin/elasticsearch-plugin remove repository-hdfs
Notes:
  • Plugins are installed using the elasticsearch-plugin script, which enables actions such as listing, removing and installing plugins (see the sketch after this list).
  • Core plugins can be installed simply by providing the name of the plugin to the elasticsearch-plugin command.
  • You can also download the plugin manually and then install it using the elasticsearch-plugin install command, providing the file name/path of the plugin’s source file.
  • When a plugin is removed, you will need to restart the Elasticsearch node(s) in order to complete the removal process.
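
As a sketch (the plugin file name below is purely hypothetical), listing the installed plugins and installing one from a local file could look like this:

sudo bin/elasticsearch-plugin list
sudo bin/elasticsearch-plugin install file:///path/to/custom-plugin.zip
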
Common Problems:
  • You need to install the required plugins in your Elasticsearch deployment before moving it to production machines, as production machines are often behind a proxy and it is very hard to install plugins from behind a proxy.
  • The same is true when you deploy Elasticsearch using Docker images: you will most likely rebuild the standard image and include your required plugins in a custom Docker build. Make sure the Docker build is run on a build machine that is not behind a proxy, otherwise the plugin installation will fail during the build.

Click here to get to our list of the most frequent issues caused by Elasticsearch plugins.


Log Context

The log “Schedule was triggered for job [” is generated in the class AsyncTwoPhaseIndexer.java.
To help get the right context for this log, we have extracted the following from the Elasticsearch source code:

         final IndexerState currentState = state.get();
        switch (currentState) {
        case INDEXING:
        case STOPPING:
        case ABORTING:
            logger.warn("Schedule was triggered for job [" + getJobId() + "]; but prior indexer is still running " +
                "(with state [" + currentState + "]");
            return false;

        case STOPPED:
            logger.debug("Schedule was triggered for job [" + getJobId() + "] but job is stopped.  Ignoring trigger.");






To help troubleshoot related issues, we have gathered selected answers from Stack Overflow and Discuss, as well as issues from GitHub. Please review the following for further information:

Question On Watcher Trigger Schedule
discuss.elastic.co/t/question-on-watcher-trigger-schedule-activate-from-6am-to-10pm-daily-run-every-5-minutes/99654

 

Open Distro For Elasticsearch Job Scheduler Is Under Development
discuss.opendistrocommunity.dev/t/open-distro-for-elasticsearch-job-scheduler-is-under-development/397

 


About Opster

Incorporating deep knowledge and a broad history of Elasticsearch issues, Opster’s solution identifies and predicts root causes of Elasticsearch problems, provides recommendations and can automatically perform various actions to manage, troubleshoot and prevent issues.

We are constantly updating our analysis of Elasticsearch logs, errors and exceptions, sharing best practices and providing troubleshooting guides.

Learn more: Glossary | Blog | Troubleshooting guides | Error Repository

Need help with any Elasticsearch issue? Contact Opster.
