Hosting too many tenants on a single OpenSearch cluster adds unnecessary load to your system. With too many shards, indices and nodes, the cluster state and metadata grow far larger than optimal, master nodes struggle to keep up, and management operations slow down.
OpenSearch generally scales well, but less so when the data involved is not homogeneous. With multiple users and tenants, the data is heterogeneous by nature, which makes efficient scaling far more difficult.
The clear solution is to split the large cluster into smaller clusters. However, to keep operations intact, you would need to write a routing component built into the application itself. This component would need to be aware of multiple clusters at once and know which cluster each user's data resides on. Every request to OpenSearch would have to pass through that component so it can be routed to the relevant cluster.
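To make the idea concrete, here is a minimal sketch of such an application-side routing component. The patterns, cluster URLs, and function name below are illustrative assumptions, not part of any real product or API:

```python
import fnmatch

# Hypothetical tenant-to-cluster routing table: index patterns mapped to
# the backend cluster that holds that tenant's data (illustrative URLs).
CLUSTER_ROUTES = {
    "tenant-a-*": "https://cluster-1.example.com:9200",
    "tenant-b-*": "https://cluster-2.example.com:9200",
}

# Requests for indices that match no pattern fall back to a default cluster.
DEFAULT_CLUSTER = "https://cluster-default.example.com:9200"

def route_request(index_name: str) -> str:
    """Return the URL of the cluster that should serve this index."""
    for pattern, cluster_url in CLUSTER_ROUTES.items():
        if fnmatch.fnmatch(index_name, pattern):
            return cluster_url
    return DEFAULT_CLUSTER
```

Every OpenSearch call in the application would then be sent to `route_request(index)` rather than to a fixed endpoint, which is exactly the coupling between application code and cluster topology that a standalone load balancer removes.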
Opster’s Multi-Cluster Load Balancer performs this exact operation straight out-of-the-box. Without making any changes to the application, you can specify which tenant resides on which cluster. This is done through a simple configuration file stating the cluster locations of different tenants based on the request index patterns and where non-specified tenant requests should be routed to by default.
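Conceptually, such a configuration file maps index patterns to backend clusters and names a default. The sketch below is a hypothetical illustration of that idea, not Opster's actual configuration syntax:

```yaml
# Hypothetical sketch only -- not the product's real configuration format.
clusters:
  cluster-1:
    url: https://cluster-1.example.com:9200
  cluster-2:
    url: https://cluster-2.example.com:9200

routing:
  - index_pattern: "tenant-a-*"   # tenant A's indices live on cluster-1
    cluster: cluster-1
  - index_pattern: "tenant-b-*"   # tenant B's indices live on cluster-2
    cluster: cluster-2

# Requests whose index matches no pattern above go here.
default_cluster: cluster-1
```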
From the application's perspective, the Multi-Cluster Load Balancer looks like a single OpenSearch cluster to which all requests are sent. The Load Balancer then routes each request seamlessly to the appropriate backend cluster according to the user.
With the help of the Multi-Cluster Load Balancer, scaling with multiple tenants is simple and efficient: requests are routed to different clusters without changing the application itself or adding load to the system. Adding new clusters as the system grows is quick and easy, as is relocating tenants between clusters.
You can also benefit from better visibility and the ability to see the usage patterns of each user and tenant.