You get the correct timestamp if you interpret the input time string as a time in UTC.

Note: that conclusion is based only on the single ("20150219-0700", 1424329200000) pair, which is consistent with UTC but is not enough in the general case. For example, you get exactly the same result if the input time string is in the Europe/London timezone, which has a zero UTC offset now (GMT+00:00) but a one-hour offset in the summer (BST+01:00); i.e., your code will be wrong by one hour for "20150619-0700" if the input is in 'Europe/London' and not in UTC.
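Here's a quick demonstration of that discrepancy (a sketch of my own, reusing the same pytz approach as the code below):

#!/usr/bin/env python3
from datetime import datetime, timedelta
import pytz  # $ pip install pytz
naive_dt = datetime.strptime("20150619-0700", "%Y%m%d-%H%M")
as_utc = pytz.utc.localize(naive_dt)  # interpret it as UTC
as_london = pytz.timezone('Europe/London').localize(naive_dt, is_dst=None)  # interpret it as London time (BST in June)
print((as_utc - as_london) // timedelta(hours=1))
# -> 1 i.e., the two interpretations are one hour apart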
You have to know the input timezone to convert the time into a POSIX timestamp:
#!/usr/bin/env python3
from datetime import datetime, timedelta
import pytz # $ pip install pytz
Epoch = datetime(1970, 1, 1, tzinfo=pytz.utc)
tz = pytz.timezone('Europe/London') # input timezone
naive_dt = datetime.strptime("20150219-0700", "%Y%m%d-%H%M")
dt = tz.localize(naive_dt, is_dst=None) # make it aware
timestamp_millis = (dt - Epoch) // timedelta(milliseconds=1)  # whole milliseconds since the epoch, exact integer arithmetic
# -> 1424329200000
Note: the latter expression may give a more precise result than the int(dt.timestamp() * 1000) or int(diff.total_seconds() * 1000) formulae. See the discussion in the Python issue "timedelta.total_seconds needlessly inaccurate, especially for negative timedeltas", e.g., "n/10**6 and n/1e6 aren't the same thing".
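As a rough illustration (my own example, using the stdlib timezone.utc): for a time half a millisecond before the epoch, the two formulae disagree, because int() truncates toward zero while the floor division rounds toward minus infinity:

#!/usr/bin/env python3
from datetime import datetime, timedelta, timezone
Epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
dt = datetime(1969, 12, 31, 23, 59, 59, 999500, tzinfo=timezone.utc)  # 0.5 ms before the epoch
print((dt - Epoch) // timedelta(milliseconds=1))  # -> -1 (floor division)
print(int(dt.timestamp() * 1000))                 # -> 0  (int() truncates)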
If you are sure that the input timezone is UTC, then you could leave the input as a naive datetime object:
#!/usr/bin/env python3
from datetime import datetime, timedelta
Epoch = datetime(1970, 1, 1)  # naive datetime representing the epoch (UTC)
utc_dt = datetime.strptime("20150219-0700", "%Y%m%d-%H%M")  # naive, assumed to be UTC
timestamp_millis = (utc_dt - Epoch) // timedelta(milliseconds=1)
# -> 1424329200000