I'm building a multi-tenant REST server application with Spring Boot 2.x, Hibernate 5.x, Spring Data REST and MySQL 5.7. Spring Boot 2.x uses HikariCP for connection pooling by default.
I'm going to use a database-per-tenant approach, so every tenant will have its own database.
I created my MultiTenantConnectionProvider in this way:
import java.sql.Connection;
import java.sql.SQLException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import javax.validation.constraints.NotNull;

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.hibernate.engine.jdbc.connections.spi.MultiTenantConnectionProvider;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Profile;
import org.springframework.stereotype.Component;

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

@Component
@Profile("prod")
public class MultiTenantConnectionProviderImpl implements MultiTenantConnectionProvider {

    private static final long serialVersionUID = 3193007611085791247L;

    private Logger log = LogManager.getLogger();

    // one HikariCP pool per tenant, created lazily and cached
    private Map<String, HikariDataSource> dataSourceMap = new ConcurrentHashMap<String, HikariDataSource>();

    @Autowired
    private TenantRestClient tenantRestClient;

    @Autowired
    private PasswordEncrypt passwordEncrypt;

    @Override
    public void releaseAnyConnection(Connection connection) throws SQLException {
        connection.close();
    }

    @Override
    public Connection getAnyConnection() throws SQLException {
        Connection connection = getDataSource(TenantIdResolver.TENANT_DEFAULT).getConnection();
        return connection;
    }

    @Override
    public Connection getConnection(String tenantId) throws SQLException {
        Connection connection = getDataSource(tenantId).getConnection();
        return connection;
    }

    @Override
    public void releaseConnection(String tenantId, Connection connection) throws SQLException {
        log.info("releaseConnection " + tenantId);
        connection.close();
    }

    @Override
    public boolean supportsAggressiveRelease() {
        return false;
    }

    @Override
    public boolean isUnwrappableAs(Class unwrapType) {
        return false;
    }

    @Override
    public <T> T unwrap(Class<T> unwrapType) {
        return null;
    }

    public HikariDataSource getDataSource(@NotNull String tenantId) throws SQLException {
        if (dataSourceMap.containsKey(tenantId)) {
            return dataSourceMap.get(tenantId);
        } else {
            HikariDataSource dataSource = createDataSource(tenantId);
            dataSourceMap.put(tenantId, dataSource);
            return dataSource;
        }
    }

    public HikariDataSource createDataSource(String tenantId) throws SQLException {
        log.info("Create Datasource for tenant {}", tenantId);
        try {
            Database database = tenantRestClient.getDatabase(tenantId);
            DatabaseInstance databaseInstance = tenantRestClient.getDatabaseInstance(tenantId);
            if (database != null && databaseInstance != null) {
                HikariConfig hikari = new HikariConfig();
                String driver = "";
                String options = "";
                switch (databaseInstance.getType()) {
                    case MYSQL:
                        driver = "jdbc:mysql://";
                        options = "?useLegacyDatetimeCode=false&serverTimezone=UTC&useUnicode=yes&characterEncoding=UTF-8&useSSL=false";
                        break;
                    default:
                        driver = "jdbc:mysql://";
                        options = "?useLegacyDatetimeCode=false&serverTimezone=UTC&useUnicode=yes&characterEncoding=UTF-8&useSSL=false";
                }
                hikari.setJdbcUrl(driver + databaseInstance.getHost() + ":" + databaseInstance.getPort() + "/" + database.getName() + options);
                hikari.setUsername(database.getUsername());
                hikari.setPassword(passwordEncrypt.decryptPassword(database.getPassword()));
                // MySQL optimizations, see
                // https://github.com/brettwooldridge/HikariCP/wiki/MySQL-Configuration
                hikari.addDataSourceProperty("cachePrepStmts", true);
                hikari.addDataSourceProperty("prepStmtCacheSize", "250");
                hikari.addDataSourceProperty("prepStmtCacheSqlLimit", "2048");
                hikari.addDataSourceProperty("useServerPrepStmts", "true");
                hikari.addDataSourceProperty("useLocalSessionState", "true");
                hikari.addDataSourceProperty("useLocalTransactionState", "true");
                hikari.addDataSourceProperty("rewriteBatchedStatements", "true");
                hikari.addDataSourceProperty("cacheResultSetMetadata", "true");
                hikari.addDataSourceProperty("cacheServerConfiguration", "true");
                hikari.addDataSourceProperty("elideSetAutoCommits", "true");
                hikari.addDataSourceProperty("maintainTimeStats", "false");
                hikari.setMinimumIdle(3);
                hikari.setMaximumPoolSize(5);
                hikari.setIdleTimeout(30000);
                hikari.setPoolName("JPAHikari_" + tenantId);
                // MySQL wait_timeout is 600 seconds, so keep maxLifetime just below it
                hikari.setMaxLifetime(580000);
                hikari.setLeakDetectionThreshold(60 * 1000);
                HikariDataSource dataSource = new HikariDataSource(hikari);
                return dataSource;
            } else {
                throw new SQLException(String.format("DB not found for tenant %s!", tenantId));
            }
        } catch (Exception e) {
            throw new SQLException(e.getMessage());
        }
    }
}
In my implementation I read the tenantId and fetch the database instance details from a central management system. I create a new pool for each tenant and cache the pool so it doesn't have to be recreated on every request.
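For completeness, the provider is plugged into Hibernate through the JPA configuration, roughly like this (a simplified sketch, not my exact configuration: the entity package and the tenant identifier resolver bean are placeholders for my real ones):

import java.util.HashMap;
import java.util.Map;

import org.hibernate.MultiTenancyStrategy;
import org.hibernate.cfg.Environment;
import org.hibernate.context.spi.CurrentTenantIdentifierResolver;
import org.hibernate.engine.jdbc.connections.spi.MultiTenantConnectionProvider;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean;
import org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter;

// Simplified JPA configuration: the connection provider above and my
// CurrentTenantIdentifierResolver are handed to Hibernate as JPA properties.
@Configuration
public class MultiTenantJpaConfig {

    @Bean
    public LocalContainerEntityManagerFactoryBean entityManagerFactory(
            MultiTenantConnectionProvider connectionProvider,
            CurrentTenantIdentifierResolver tenantIdentifierResolver) {

        Map<String, Object> jpaProperties = new HashMap<>();
        jpaProperties.put(Environment.MULTI_TENANT, MultiTenancyStrategy.DATABASE);
        jpaProperties.put(Environment.MULTI_TENANT_CONNECTION_PROVIDER, connectionProvider);
        jpaProperties.put(Environment.MULTI_TENANT_IDENTIFIER_RESOLVER, tenantIdentifierResolver);

        LocalContainerEntityManagerFactoryBean emf = new LocalContainerEntityManagerFactoryBean();
        emf.setPackagesToScan("com.example.app"); // placeholder for my entity package
        emf.setJpaVendorAdapter(new HibernateJpaVendorAdapter());
        emf.setJpaPropertyMap(jpaProperties);
        return emf;
    }
}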
I read this interesting question, but my question is quite different. I'm planning to use AWS (both for the server instance and the RDS DB instance).
Let's hypothesize a concrete scenario in which I have 100 tenants. The application is management/point-of-sale software and will be used only by agents. Let's say each tenant has an average of 3 agents working concurrently at any given moment.
With those numbers in mind, and according to this article, the first thing I realize is that it seems hard to have a pool for each tenant.
For 100 tenants I would like to think that a db.r4.large (2 vCPUs, 15.25 GB RAM and fast disk access) with Aurora should be enough (about 150€/month).
According to the formula to size a connection pool:
connections = ((core_count * 2) + effective_spindle_count)
I should have (2 cores * 2) + 1 = 5 connections in the pool.
From what I understand, this should be the maximum number of connections in the pool to maximize performance on that DB instance.
1st solution
So my first question is pretty simple: how can I create a separate connection pool for each tenant, given that I should only use 5 connections in total?
It doesn't seem possible to me. Even if I assigned just 2 connections to each tenant, I would have 200 connections to the DBMS!
According to this question, on a db.r4.large instance I could have at most 1300 connections, so it seems the instance should handle the load quite well.
But according to the article I mentioned before, it seems bad practice to use hundreds of connections to the DB:
If you have 10,000 front-end users, having a connection pool of 10,000 would be shear insanity. 1000 still horrible. Even 100 connections, overkill. You want a small pool of a few dozen connections at most, and you want the rest of the application threads blocked on the pool awaiting connections.
2nd solution
The second solution I have in mind is to share a connection pool among tenants on the same DBMS. This means that all 100 tenants would use the same HikariCP pool of 5 connections (which honestly seems quite low to me).
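To make the idea concrete, the shared-pool provider I'm imagining would look roughly like this (just an untested sketch: it assumes all tenant databases live on the same MySQL/Aurora instance, that the pool's user can access all of them, and that the tenant id matches the database name):

import java.sql.Connection;
import java.sql.SQLException;

import org.hibernate.engine.jdbc.connections.spi.MultiTenantConnectionProvider;

import com.zaxxer.hikari.HikariDataSource;

// Sketch of the shared-pool idea: one HikariCP pool for the whole instance,
// and each connection is pointed at the tenant's database when it is checked out.
public class SharedPoolMultiTenantConnectionProvider implements MultiTenantConnectionProvider {

    private final HikariDataSource dataSource; // the single pool of ~5 connections

    public SharedPoolMultiTenantConnectionProvider(HikariDataSource dataSource) {
        this.dataSource = dataSource;
    }

    @Override
    public Connection getAnyConnection() throws SQLException {
        return dataSource.getConnection();
    }

    @Override
    public void releaseAnyConnection(Connection connection) throws SQLException {
        connection.close();
    }

    @Override
    public Connection getConnection(String tenantId) throws SQLException {
        Connection connection = dataSource.getConnection();
        // switch to the tenant's database (the MySQL driver issues a USE statement)
        connection.setCatalog(tenantId);
        return connection;
    }

    @Override
    public void releaseConnection(String tenantId, Connection connection) throws SQLException {
        // point the connection back at the default database before returning it to the pool
        connection.setCatalog(TenantIdResolver.TENANT_DEFAULT);
        connection.close();
    }

    @Override
    public boolean supportsAggressiveRelease() {
        return false;
    }

    @Override
    public boolean isUnwrappableAs(Class unwrapType) {
        return false;
    }

    @Override
    public <T> T unwrap(Class<T> unwrapType) {
        return null;
    }
}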
Would this be the right way to maximize performance and reduce the response time of the application?
Do you have a better idea of how to manage this scenario with Spring, Hibernate and MySQL (hosted on AWS RDS Aurora)?