How To Solve Issues Related to Log – Cannot access in container :


Troubleshooting background

To troubleshoot the Elasticsearch log “Cannot access in container :”, it’s important to understand a few related Elasticsearch concepts: containers, plugins, repositories and the repository-azure plugin. See the detailed explanations below, complete with common problems, examples and useful tips.

Plugin in Elasticsearch

What it is

A plugin is used to enhance the core functionalities of Elasticsearch. Elasticsearch provides some core plugins as part of its release distribution. In addition to those core plugins, it is possible to write your own custom plugins as well. There are several community plugins available on GitHub for various use cases.

Examples:
  • Show help and usage instructions for the elasticsearch-plugin script
sudo bin/elasticsearch-plugin -h
  • Installing the S3 repository plugin, used for storing Elasticsearch snapshots on S3
sudo bin/elasticsearch-plugin install repository-s3
  • Removing a plugin
sudo bin/elasticsearch-plugin remove repository-s3
  • Installing a plugin using the file path
sudo bin/elasticsearch-plugin install file:///path/to/plugin.zip

Notes:
  • Plugins are installed and removed using the elasticsearch-plugin script, which ships as a part of Elasticsearch installation and can be found inside the bin/ directory of the Elasticsearch installation path.
  • A plugin has to be installed on every node of the cluster and each of the nodes has to be restarted to make the plugin visible.
  • You can also download the plugin manually and then install it using the elasticsearch-plugin install command, providing the file name/path of the plugin’s source file.
  • When a plugin is removed, you will need to restart every Elasticsearch node in order to complete the removal process.

Common Problems:
  • Permission issues during and after plugin installation are the most common problem. If Elasticsearch was installed using the deb or rpm package, the plugin has to be installed as the root user; otherwise, install the plugin as the user that owns all of the Elasticsearch files.
  • For deb or rpm installations, it is important to check the permissions of the plugins directory after plugin installation and, if they have been modified, restore them with the following command:
chown -R elasticsearch:elasticsearch path_to_plugin_directory
  • If your Elasticsearch nodes are running in a private subnet without internet access, you cannot install a plugin directly. In this case, download the plugin archive in advance and either install it from the local file path or copy its files into the plugins directory of the Elasticsearch installation path on every node (see the sketch below). The nodes have to be restarted in this case as well.
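As an illustration, here is a minimal sketch of an offline installation of the repository-azure plugin. The download URL, the version 7.10.2 and the paths below are assumptions; use the plugin build that exactly matches your Elasticsearch version and your own installation paths:

  • On a machine with internet access, download the plugin archive
wget https://artifacts.elastic.co/downloads/elasticsearch-plugins/repository-azure/repository-azure-7.10.2.zip
  • Copy the archive to every node and install it from the local file path
sudo bin/elasticsearch-plugin install file:///tmp/repository-azure-7.10.2.zip
  • Restore ownership of the plugins directory if needed (deb/rpm installations) and restart the node
sudo chown -R elasticsearch:elasticsearch /usr/share/elasticsearch/plugins
sudo systemctl restart elasticsearch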

Repository in Elasticsearch

What it is

An Elasticsearch snapshot provides a backup mechanism that takes the current state and data in the cluster and saves it to a repository (read the Glossary term Snapshot for more information). The backup process requires a repository to be created first. The repository needs to be registered using the _snapshot endpoint, and multiple repositories per cluster can be created. The following repository types are supported. 

Repository Types:

  Repository Type          Configuration Type
  Shared file system       "fs"
  S3                       "s3"
  HDFS                     "hdfs"
  Azure                    "azure"
  Google Cloud Storage     "gcs"

Examples

To register a repository of type fs:

PUT _snapshot/my_repo_01
{
  "type": "fs",
  "settings": {
    "location": "/mnt/my_repo_dir"
  }
}
Notes and common problems
  • S3, HDFS, Azure and Google Cloud Storage repositories require the relevant plugin to be installed before they can be used for a snapshot.
  • The setting path.repo: /mnt/my_repo_dir needs to be added to elasticsearch.yml on all the nodes if you plan to use a shared file system ("fs") repository; otherwise repository registration will fail.
  • When using remote repositories, the network bandwidth and repository storage throughput should be high enough for snapshot operations to complete normally; otherwise you will end up with partial snapshots.
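Since the log in this guide comes from the Azure repository plugin, here is a minimal sketch of registering an Azure repository. The repository name my_azure_repo and the container name es-snapshots are illustrative; the container must already exist in your storage account and be reachable with the credentials configured for the Azure client, otherwise operations against it can fail with “cannot access … in container” errors:

PUT _snapshot/my_azure_repo
{
  "type": "azure",
  "settings": {
    "container": "es-snapshots"
  }
}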


To help troubleshoot related issues, we have gathered selected Q&A from the community and issues from GitHub. Please review the following for further information:

  • Cannot access Elasticsearch in Docker for Windows (Linux containers)
  • Unable to access Kibana web UI and Elasticsearch running in Docker container from host machine
  • Running Elasticsearch on Azure VM using Docker: cannot access via localhost


Log Context

The log “cannot access [{}] in container {{}}: {}” originates from the class AzureBlobStore.java.
We have extracted the following from the Elasticsearch source code to provide in-depth context:

     public void delete(BlobPath path) throws IOException {
        final String keyPath = path.buildAsString();
        try {
            service.deleteFiles(clientName, container, keyPath);
        } catch (URISyntaxException | StorageException e) {
            logger.warn("cannot access [{}] in container {{}}: {}", keyPath, container, e.getMessage());
            throw new IOException(e);
        }
    }
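This warning is logged when the plugin fails to delete snapshot files because the configured blob container cannot be reached, for example due to invalid storage account credentials, a container that no longer exists, or network restrictions between the nodes and Azure. As a hedged starting point (the client name "default" below is an assumption; use the client your repository settings actually reference), verify the Azure credentials stored in the Elasticsearch keystore on every node:

  • Add or re-enter the Azure storage account credentials in the Elasticsearch keystore
bin/elasticsearch-keystore add azure.client.default.account
bin/elasticsearch-keystore add azure.client.default.key

After updating these secure settings, reload them with POST _nodes/reload_secure_settings (or restart the nodes) and confirm that the container named in the repository settings actually exists in the storage account.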

    





About Opster

Incorporating deep knowledge and a broad history of Elasticsearch issues, Opster’s solution identifies and predicts root causes of Elasticsearch problems, provides recommendations and can automatically perform various actions to manage, troubleshoot and prevent issues.

Learn more: Glossary | Blog | Troubleshooting guides | Error Repository

Need help with any Elasticsearch issue? Contact Opster