
I have a huge project where the data access layer opens and closes a connection on every HTTP request, or even more frequently. A lot of components depend on this behavior of the data access layer. At peak traffic we see up to 100 requests/second. The database is MySQL or PostgreSQL.

The question is: has anyone run into real issues with this approach to communicating with the database?

Shadow
  • Yes, that is why connection poolers were created. Connections have overhead; poolers reduce it. – Adrian Klaver Jan 14 '23 at 17:46
  • I hope this link would help your question https://stackoverflow.com/questions/4111594/why-always-close-database-connection – william livins Jan 14 '23 at 17:52
  • @AdrianKlaver I think so - a connection pool is the answer here. Where do you think the overhead comes from: the TCP protocol level or the DB's internal design? – Anton Komarov Jan 14 '23 at 17:57
  • In the Postgres case, each connection is a separate process. Make a bunch of connections to the server and then at the command line run `ps ax | grep postgres`. Setting up and tearing down those processes consumes resources, as do the resources they hold while up. – Adrian Klaver Jan 14 '23 at 17:59
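  • To make the pooling idea from the comments above concrete, here is a minimal, illustrative sketch of a connection pool: connections are created once up front and checked out/returned instead of being opened and closed per request. The `SimplePool` class and its method names are my own invention for illustration (real projects would use something like pgbouncer, HikariCP, or a driver's built-in pool), and `sqlite3` stands in for MySQL/PostgreSQL only to keep the example self-contained.

```python
import sqlite3
import queue


class SimplePool:
    """Illustrative fixed-size connection pool (not production-grade)."""

    def __init__(self, factory, size):
        self._q = queue.Queue(maxsize=size)
        for _ in range(size):
            # Pay the connection-setup cost once, at startup.
            self._q.put(factory())

    def acquire(self):
        # Blocks until a connection is free if all are checked out.
        return self._q.get()

    def release(self, conn):
        # Return the connection for reuse instead of closing it.
        self._q.put(conn)


# sqlite3 is a stand-in for a MySQL/PostgreSQL connection factory.
pool = SimplePool(
    lambda: sqlite3.connect(":memory:", check_same_thread=False),
    size=4,
)

# Per-request usage: borrow, query, give back -- no open/close overhead.
conn = pool.acquire()
try:
    result = conn.execute("SELECT 1").fetchone()[0]
finally:
    pool.release(conn)
```

  The per-request cost drops to a queue get/put, while the TCP handshake, authentication, and (for Postgres) backend-process fork happen only once per pooled connection.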

0 Answers