Intuitively, these two intervals represent the same amount of time. But the difference shows up around daylight-saving time changes, when "1 day" can mean either "23 hours" (in the spring) or "25 hours" (in the autumn).
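As a minimal illustration of those 23- and 25-hour days (a sketch, assuming PostgreSQL and the Europe/Bratislava zone; the autumn transition date 2020-10-25 is just an example date I picked):

SET timezone TO 'Europe/Bratislava';
-- subtracting two timestamptz values gives the real elapsed time
SELECT timestamptz '2020-03-30 00:00' - timestamptz '2020-03-29 00:00' AS spring_day,  -- 23:00:00
       timestamptz '2020-10-26 00:00' - timestamptz '2020-10-25 00:00' AS autumn_day;  -- 25:00:00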
I tested this with PostgreSQL, and there the two intervals don't represent the same amount of time:
SET timezone TO 'CET';
SELECT timestamp with time zone '2020-03-29 0:00 Europe/Bratislava' + INTERVAL '1' DAY,
       timestamp with time zone '2020-03-29 0:00 Europe/Bratislava' + INTERVAL '24' HOUR;
returns this:
2020-03-29T22:00:00.000Z 2020-03-29T23:00:00.000Z
The client's time zone was UTC, which is why the returned values are displayed in UTC. The important point is that they are not the same.
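To make the one-hour gap visible in local time, the two results can be converted back to the Europe/Bratislava zone; a sketch, assuming the session time zone is still set to CET as above:

SELECT (timestamptz '2020-03-29 0:00 Europe/Bratislava' + INTERVAL '1' DAY)
         AT TIME ZONE 'Europe/Bratislava' AS plus_one_day,   -- 2020-03-30 00:00:00 (local clock moved one calendar day)
       (timestamptz '2020-03-29 0:00 Europe/Bratislava' + INTERVAL '24' HOUR)
         AT TIME ZONE 'Europe/Bratislava' AS plus_24_hours;  -- 2020-03-30 01:00:00 (exactly 24 elapsed hours)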
I also tried MySQL, but it seems to me that it doesn't support named time zones, only time zone offsets. Zone offsets don't have DST changes, so a day is always 24 hours.
On the other hand, Apache Calcite, which backs many SQL implementations such as Apache Drill, Apache Flink, Apache Beam, and more, represents interval literals as Java's BigDecimal: the day-second interval is converted to milliseconds, and a day is assumed to always be 24 hours.
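As a consequence, in a Calcite-based engine the two interval literals below are the same value internally (86,400,000 ms), so the two additions cannot differ; a hedged sketch, since the exact output rendering depends on the engine:

SELECT TIMESTAMP '2020-03-29 00:00:00' + INTERVAL '1' DAY   AS plus_one_day,   -- 2020-03-30 00:00:00
       TIMESTAMP '2020-03-29 00:00:00' + INTERVAL '24' HOUR AS plus_24_hours;  -- 2020-03-30 00:00:00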
My question is: which approach is correct according to the SQL standard?
EDIT:
Checked more DBs:
- Oracle: SELECT INTERVAL '1' DAY FROM DUAL returns +01 00:00:00. Adding either 1 day or 24 hours to 2020-03-29 0:00 CET gives the same result: 24 hours are added (see the sketch after this list).
- SQL Server and DB2: as far as I can tell, only time zone offsets are supported. So it's the same case as MySQL: they don't support time zones with DST changes.
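A sketch of the Oracle check with a region-based zone (Europe/Bratislava is my substitution for CET, assuming that region name is available; Oracle performs its datetime arithmetic in UTC, so both columns should name the same instant):

SELECT TIMESTAMP '2020-03-29 00:00:00 Europe/Bratislava' + INTERVAL '1' DAY  AS plus_one_day,
       TIMESTAMP '2020-03-29 00:00:00 Europe/Bratislava' + INTERVAL '24' HOUR AS plus_24_hours
FROM DUAL;
-- expected: both expressions yield the same instant, exactly 24 hours later (2020-03-30 01:00 local time)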
Conclusion: PostgreSQL seems to be the only exception, treating 1 day as different from 24 hours.