
We are working out a couple of performance issues on one of our web sites, and we have noticed that the command "SET TIME ZONE 'America/Chicago'" is being executed so often that, over a 24-hour period, just under 1 hour (around 4% of total DB CPU resources) is spent running that command.

Note that the "USE_TZ" setting is False, so based on my understanding, everything should be stored as UTC in the database, and only converted in the UI to the local time zone when necessary.

Do you have any ideas on how we can remove this strain on the database server?

Kevin Christopher Henry
Keith
    I don't know why that command is so slow, but your understanding of `USE_TZ` is backwards. See the [time zone documentation](https://docs.djangoproject.com/en/dev/topics/i18n/timezones/). – Kevin Christopher Henry Apr 17 '19 at 21:58
  • Well, look at that! Reading the first line of that was all it took to correct me. – Keith Apr 18 '19 at 16:06

1 Answer


For PostgreSQL, Django always sets the time zone when it opens a connection: either the project's `TIME_ZONE` setting (when `USE_TZ = False`) or UTC (when `USE_TZ = True`). That way Django supports "live switching" of `settings.USE_TZ` for the PostgreSQL DB backend.
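A rough sketch of that per-connection behavior (this is not Django's actual code, which lives in `django.db.backends.postgresql`; the function and fake cursor here are simplified stand-ins):

```python
# Sketch (not Django's real implementation) of what the PostgreSQL backend
# does every time it opens a new connection: it sends SET TIME ZONE.
def init_connection_state(cursor, use_tz, time_zone):
    # With USE_TZ = True Django normalizes the connection to UTC;
    # otherwise it uses the project's TIME_ZONE setting.
    tz = "UTC" if use_tz else time_zone
    cursor.execute("SET TIME ZONE %s", [tz])

class FakeCursor:
    """Stand-in cursor that just records the statements it is given."""
    def __init__(self):
        self.executed = []

    def execute(self, sql, params=None):
        self.executed.append((sql, params))

cur = FakeCursor()
init_connection_state(cur, use_tz=False, time_zone="America/Chicago")
```

So a `SET TIME ZONE 'America/Chicago'` for every request is consistent with a new DB connection being opened on every request.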

How have you actually determined that this is a bottleneck?

Usually `SET TIME ZONE` is only called when a connection to the DB is created. Maybe you should use persistent connections by setting `settings.DATABASES[...]['CONN_MAX_AGE'] = GREATER_THAN_ZERO` (docs). That way connections will be reused and you'll have fewer calls to `SET TIME ZONE`. But if you use that approach, you should also take a closer look at your PostgreSQL configuration:

  • `max_connections` should be greater than 1 + the maximum concurrency of your WSGI server + the maximum number of simultaneous cron jobs that use Django (if you have them) + the maximum concurrency of your Celery workers (if you have them) + any other potential sources of connections to Postgres
  • if you are running a cron job that calls `pg_terminate_backend`, then make sure that `CONN_MAX_AGE` is greater than the "idle timeout"
  • if you are running Postgres on a VPS, then in some cases there might be limits on the number of open sockets
  • if you are using something like PgBouncer, then it may already be reusing connections
  • if you are killing the server that serves your Django project with SIGKILL (`kill -9`), then it may leave some unclosed connections to the DB (but I'm not sure)
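A minimal sketch of the persistent-connection change in `settings.py` (the engine is Django's standard PostgreSQL backend; the name, user, and other connection values here are placeholders, not from the question):

```python
# settings.py (sketch; NAME/USER/PASSWORD/HOST are placeholder values)
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "app_db",
        "USER": "app_user",
        "PASSWORD": "secret",
        "HOST": "localhost",
        "PORT": "5432",
        # Reuse each connection for up to 10 minutes instead of opening
        # (and re-running SET TIME ZONE on) a new one per request.
        "CONN_MAX_AGE": 600,
    }
}
```

Note that `CONN_MAX_AGE` lives inside the per-database dict; a top-level `settings.CONN_MAX_AGE` is not a Django setting and is silently ignored.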

I think this may also happen if you use `django.utils.timezone.activate`, though I'm not sure of it. It could come up if you call it manually in your code, or if you use middleware that does so.

Another possible explanation: the way you are "profiling" your requests may actually be showing you the time of the whole transaction.

imposeren
  • Thank you for the info. We use Azure Database for PostgreSQL server, which has built in query performance monitoring. This is a VERY high-trafficked web site: in the last 24 hours, the query performance app is showing 195,000+ executions of the `SET TIME ZONE` command. – Keith Apr 18 '19 at 15:12
  • That was the exact answer I needed. We inherited this project from another company, and settings.CONN_MAX_AGE was being set to 60 seconds instead of settings.DATABASES['default']['CONN_MAX_AGE'], so it was effectively not set and was using the default of 0. The "SET TIME ZONE" was a symptom, not the cause, and our utilization dropped from 15-20% to just under 5% with this update alone. – Keith Apr 18 '19 at 18:46