It’s always important to keep more than one copy of your data, especially when operating a mission-critical Elasticsearch deployment. Should your cluster fail or become unavailable for any reason, you still need to be able to maintain service and keep data available for search operations.
Elasticsearch provides a built-in mechanism for creating multiple copies of data (replica shards), which serves high availability within a single cluster, but it isn’t sufficient on its own, since the entire cluster could become unavailable for any reason.
To provide higher availability for mission-critical data, it’s advised to store the data in two separate Elasticsearch clusters.
One way to do this is with snapshot and restore operations. Snapshots let you store an offline copy of your data in a separate location, such as S3, GCS or any other backend repository that has an official plugin. To support high availability and data recovery, you can schedule periodic backups of one cluster and then restore the backed-up data into a secondary cluster.
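As a sketch of what that setup looks like, the following requests register an S3 snapshot repository and define a nightly snapshot-lifecycle-management (SLM) policy using Elasticsearch's REST API. The repository name, policy name and bucket are placeholders you would replace with your own values, and the S3 repository type assumes the official `repository-s3` plugin is installed.

```
PUT _snapshot/my_s3_repo
{
  "type": "s3",
  "settings": {
    "bucket": "my-es-snapshots"
  }
}

PUT _slm/policy/nightly-snapshots
{
  "schedule": "0 30 1 * * ?",
  "name": "<nightly-snap-{now/d}>",
  "repository": "my_s3_repo",
  "retention": {
    "expire_after": "30d"
  }
}
```

On the secondary cluster, registering the same repository (typically as read-only) lets you restore the latest snapshot with the `_snapshot/.../_restore` API.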
The disadvantage of this method is that data is not backed up and restored in real time. The built-in delay means users can lose valuable data indexed or updated between snapshots. In addition, running the periodic snapshot process consumes significant resources on the cluster.
With Opster’s Multi-Cluster Load Balancer, setting up multiple backends for a single tenant is a simple matter of configuration. The Load Balancer mirrors the traffic routed to the cluster to all configured backends in real time, maintaining two identical copies of the same index on two different clusters simultaneously.
There’s no time delay, and therefore no data loss, no matter when a disaster occurs. The process requires no communication between the clusters, sparing the system the additional load of synchronization.
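Opster’s implementation is proprietary, but the core idea of mirroring writes can be illustrated with a minimal, hypothetical Python sketch. The `MirroringProxy` and `StubCluster` classes below are illustrative names, not part of any real API: the proxy simply sends every indexing request to both backends, so the clusters never need to talk to each other.

```python
class StubCluster:
    """Stand-in for an Elasticsearch cluster client (illustrative only)."""

    def __init__(self):
        self.docs = {}

    def index(self, index, doc_id, document):
        # Store the document under (index, id), as a real cluster would.
        self.docs[(index, doc_id)] = document
        return {"result": "created"}


class MirroringProxy:
    """Duplicates every write to a primary and a mirror backend."""

    def __init__(self, primary, mirror):
        self.primary = primary
        self.mirror = mirror

    def index(self, index, doc_id, document):
        # Send the identical request to both backends in turn; no
        # cross-cluster synchronization is required because both
        # receive the same traffic.
        return [
            backend.index(index, doc_id, document)
            for backend in (self.primary, self.mirror)
        ]


primary, mirror = StubCluster(), StubCluster()
proxy = MirroringProxy(primary, mirror)
proxy.index("logs", "1", {"message": "hello"})
# Both backends now hold identical data.
assert primary.docs == mirror.docs
```

A production mirror would also have to handle partial failures (one backend accepting a write while the other rejects it), which is where real load-balancer products add queueing and retry logic.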