The general recommendation is to have only a single instance of IMongoClient, since it pools connections internally. And this works great as long as you always want to connect with the same user. But in a multi-tenant application, where each tenant has unique authentication credentials with access to only a small set of databases, that recommendation comes into question.
I’ve searched the web for answers to this many times over the years, but I have yet to come across a definitive best practice. Prior to version 2.0 (I think), you could “re-authenticate” an already connected client, which reduced the overhead, so back then I suppose the best practice in a multi-tenant situation was to stick to a singleton MongoClient and just change its authentication on every new web request. But since that’s no longer possible, I’m not sure what the best practice is anymore.
Do you recommend that we keep a separate IMongoClient for each tenant? How will that scale when we have thousands of tenants? Would that not lead to a huge number of simultaneous connections? Or should we set a low “max pool size” for each MongoClient?
Does the MongoDB team have any advice for dealing with this situation? It applies to all drivers equally, but we happen to use the C# driver.
Historically, the C# driver was designed to work with a more traditional multi-tenant approach, where authentication and authorization are handled by a dedicated service or by an application layer. With such an architecture, having one (or a few) instances of IMongoClient is sufficient.
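As a minimal sketch of what that looks like, assuming an application that uses the Microsoft dependency injection container (the connection string and names are illustrative):

```csharp
// Hedged sketch: register one IMongoClient for the whole application, so all
// requests share the same connection pools. The connection string is illustrative.
using Microsoft.Extensions.DependencyInjection;
using MongoDB.Driver;

var services = new ServiceCollection();
services.AddSingleton<IMongoClient>(
    _ => new MongoClient("mongodb://appUser:appPassword@localhost:27017"));

using var provider = services.BuildServiceProvider();
var client = provider.GetRequiredService<IMongoClient>(); // the same instance everywhere
```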
Unfortunately, a short lifetime for IMongoClient is not supported yet. There is a feature request for this here.
You are absolutely right that having a large number of IMongoClient instances will lead to an enormous number of connections and is not recommended. Each IMongoClient maintains a connection pool, and each connection periodically pings the server, so this imposes a high connection load on the server and consumes application resources. For example, 100 unique users connecting to a 3-member replica set will require at least 300 connections constantly pinging the MongoDB server, for each app instance.
If a short lifetime for IMongoClient is absolutely crucial, you could try experimenting with invoking ClusterRegistry.UnregisterAndDisposeCluster(client.Cluster) to clean up an IMongoClient. Please note that this is untested functionality which might result in unpredictable issues, and it is not recommended usage.
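A minimal sketch of that experiment, assuming the registry is reached through ClusterRegistry.Instance in current 2.x drivers and that the connection string is illustrative; again, this is not supported usage:

```csharp
// Hedged sketch: tear down a tenant-specific client's cluster when it is no
// longer needed. ClusterRegistry is not part of the supported public surface
// for this purpose and may behave differently across driver versions.
using MongoDB.Driver;
using MongoDB.Driver.Core.Clusters;

var tenantClient = new MongoClient("mongodb://tenantUser:tenantPassword@localhost:27017/tenantDb");

// ... perform tenant-scoped work with tenantClient ...

// Dispose of the client's connection pools and monitoring connections.
ClusterRegistry.Instance.UnregisterAndDisposeCluster(tenantClient.Cluster);
```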
I’m not looking for a short lifetime for IMongoClient. I just want to know: if a low number of MongoClients is desirable, then how do we change the user of those MongoClients without creating new ones?
Hi John,
It is not possible to re-authenticate the user on an existing MongoClient instance. As you mentioned, this applies to all drivers; there is no cross-driver support for this feature.
Thank you for requesting clarification on our recommendations.
When architecting a multi-tenant application, you have two main choices for implementing security:
1. Authenticate/authorize as a single application user and implement tenant authentication/authorization within the application layer. This is often done by augmenting operations with a tenantId at the database, collection, or document level, depending on your design.
2. Authenticate/authorize each tenant as a unique database user and delegate access control to the database.
Each technique has its own advantages and disadvantages, which you’ve likely already considered. We generally recommend the first approach using a single application user as it allows drivers to effectively use connection pooling when connecting to the cluster.
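For illustration, a minimal sketch of the first approach, assuming a single shared client and a tenantId discriminator field (the database, collection, and field names are hypothetical):

```csharp
// Hedged sketch of approach 1: one shared, application-authenticated client,
// with tenant isolation enforced in the application layer by scoping every
// operation to a tenantId. "app", "orders", and "tenantId" are illustrative names.
using System.Collections.Generic;
using System.Threading.Tasks;
using MongoDB.Bson;
using MongoDB.Driver;

public class OrderRepository
{
    private readonly IMongoCollection<BsonDocument> _orders;

    public OrderRepository(IMongoClient client)
    {
        // One client (and one set of connection pools) shared by all tenants.
        _orders = client.GetDatabase("app").GetCollection<BsonDocument>("orders");
    }

    public Task<List<BsonDocument>> GetOrdersAsync(string tenantId)
    {
        // Every operation is augmented with the tenant discriminator.
        var filter = Builders<BsonDocument>.Filter.Eq("tenantId", tenantId);
        return _orders.Find(filter).ToListAsync();
    }
}
```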
If you want to use the second approach, then you have to use a MongoClient per unique tenant. If you have a large number of tenants, you should consider managing an LRU list of MongoClient instances and disposing of old clients (including their clusters and associated connection pools).
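A rough sketch of what such an LRU cache of per-tenant clients might look like; the capacity, the credential lookup helper, and the use of ClusterRegistry for disposal are all assumptions rather than an endorsed pattern:

```csharp
// Hedged sketch: cache at most N per-tenant clients and dispose of the least
// recently used one on eviction. GetTenantConnectionString is a hypothetical
// helper, and the ClusterRegistry call is the unsupported cleanup mentioned above.
using System;
using System.Collections.Generic;
using MongoDB.Driver;
using MongoDB.Driver.Core.Clusters;

public class TenantClientCache
{
    private readonly int _capacity;
    private readonly Dictionary<string, LinkedListNode<(string TenantId, IMongoClient Client)>> _map = new();
    private readonly LinkedList<(string TenantId, IMongoClient Client)> _lru = new();
    private readonly object _lock = new();

    public TenantClientCache(int capacity) => _capacity = capacity;

    public IMongoClient GetClient(string tenantId)
    {
        lock (_lock)
        {
            if (_map.TryGetValue(tenantId, out var node))
            {
                _lru.Remove(node);   // move to front: most recently used
                _lru.AddFirst(node);
                return node.Value.Client;
            }

            if (_map.Count >= _capacity)
            {
                var oldest = _lru.Last;   // evict the least recently used client
                _lru.RemoveLast();
                _map.Remove(oldest.Value.TenantId);
                // Dispose of its cluster, connection pools, and monitoring connections.
                ClusterRegistry.Instance.UnregisterAndDisposeCluster(oldest.Value.Client.Cluster);
            }

            var client = new MongoClient(GetTenantConnectionString(tenantId));
            var newNode = new LinkedListNode<(string TenantId, IMongoClient Client)>((tenantId, client));
            _lru.AddFirst(newNode);
            _map[tenantId] = newNode;
            return client;
        }
    }

    // Hypothetical helper: resolve the tenant's credentials/connection string.
    private static string GetTenantConnectionString(string tenantId) =>
        throw new NotImplementedException();
}
```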
As Boris noted, ClusterRegistry.UnregisterAndDisposeCluster(client.Cluster) was not designed for this purpose and may have unintended bugs when the AppDomain is up for days, weeks, or months. This functionality is used by our test runner where the lifetime of an AppDomain is around 30-60 minutes - long enough to run our test suite against a given configuration. We are considering this scenario of a reliably disposable MongoClient (CSHARP-3431) for a future release.
Also worth noting is that connection pooling and monitoring are implemented per server and per cluster key. Changing credentials changes the cluster key and thus results in a different set of connection pools and monitoring connections. You will want to ensure that maxPoolSize is configured appropriately if you have a large number of tenants. You may also want to consider modifying other connection pool options to account for the potentially large number of connection pools and monitoring connections.
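For example, a minimal sketch of tightening the pool settings for a per-tenant client; the connection string and values are illustrative, not recommendations:

```csharp
// Hedged sketch: keep per-tenant pools small so that many tenant clients do not
// exhaust server connections. Values are illustrative, not recommendations.
using System;
using MongoDB.Driver;

var settings = MongoClientSettings.FromConnectionString(
    "mongodb://tenantUser:tenantPassword@host1,host2,host3/tenantDb?replicaSet=rs0");
settings.MaxConnectionPoolSize = 5;                        // default is 100
settings.MinConnectionPoolSize = 0;                        // don't hold idle connections open
settings.MaxConnectionIdleTime = TimeSpan.FromMinutes(1);  // release idle connections quickly

var tenantClient = new MongoClient(settings);
```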
Please let us know if you have any additional questions.
Approach 1 is what we have today, and internally we’re fine with it. It would just be a great selling point to be able to tell customers that even if a user of another tenant somehow managed to hack the application, they would still not have access to YOUR database, since it requires different credentials. We’ve never had any such incident, but it would still look good on paper to have that kind of data isolation.
But approach 2 isn’t really appealing either, so I guess we’ll just do nothing for now and hope things will change on your end in the future.