
I'm deploying a Django app on Heroku and I'm seeing that most of the time in my requests is spent in the psycopg2:connect function.

See the New Relic graphs (blue is psycopg2:connect):

[Image: New Relic chart]

[Image: New Relic table]

I don't think spending 60% of the request time on the database connection is normal...

I tried using connection pooling with django-postgrespool but couldn't notice any difference.
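For reference, the pooling setup looked roughly like this, following the django-postgrespool README (a sketch with placeholder credentials, not my exact settings):

```python
# settings.py (sketch) -- django-postgrespool is enabled by swapping
# the database ENGINE; the rest of the settings stay the same.
DATABASES = {
    'default': {
        # pool-backed engine instead of 'django.db.backends.postgresql_psycopg2'
        'ENGINE': 'django_postgrespool',
        'NAME': 'mydb',              # placeholder values
        'USER': 'myuser',
        'PASSWORD': 'secret',
        'HOST': 'ec2-xx.compute-1.amazonaws.com',
        'PORT': '5432',
    }
}
```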

I'm using waitress as the server (following this article: http://blog.etianen.com/blog/2014/01/19/gunicorn-heroku-django/). The app runs on a Hobby dyno with a Hobby Basic PostgreSQL database (would upgrading either of these help?).

Any pointers as to how I can speed up these connections?


[UPDATE] I did some more digging and this doesn't seem to happen when using the Django REST Framework browsable API:

[Image: no problem with the browsable API]

In the screenshot above, the requests after 14:20 go to the same views but without ?format=json, and you can see that psycopg2:connect is much faster. Maybe there's a configuration issue somewhere in Django REST Framework?

Corentin S.
  • Are these issues consistently logged as psycopg2:connect? I have a similar unanswered question here: http://stackoverflow.com/questions/29088113/heroku-sporadic-high-response-time but the issue seems to be elusive and not always db related. – grokpot Jul 27 '15 at 19:28
  • Regarding pooling here: we normally set `CONN_MAX_AGE` to `None` in Django to keep one connection alive per worker. Not pooling, but it works well for us. About the database: having worked with New Relic for quite some time, their classification of where time is spent is sometimes not perfectly correct, though here it does look database-related somehow. How do you authenticate your API requests? The browsable API will go through session auth; what about the normal API? – Denis Cornehl Dec 04 '19 at 06:34
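For anyone trying the persistent-connection suggestion from the comment above: `CONN_MAX_AGE` is set per database in Django's settings (a sketch with placeholder values; `None` means the connection is never recycled, while `0` means a fresh connection per request):

```python
# settings.py (sketch) -- keep one database connection alive per
# worker process instead of reconnecting on every request.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'mydb',            # placeholder values
        'USER': 'myuser',
        'PASSWORD': 'secret',
        'HOST': 'ec2-xx.compute-1.amazonaws.com',
        'PORT': '5432',
        'CONN_MAX_AGE': None,      # never close; 0 = close after each request
    }
}
```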

0 Answers