I know there are some posts about using timedelta objects in Python, and the Python docs are clear enough to understand everything. But I cannot figure out one thing. Let's assume we have two timestamps:
t1 = 'Fri 11 Feb 2078 00:05:21 +0400'
t2 = 'Mon 29 Dec 2064 03:33:48 -1100'
I parsed both t1 and t2 using the code below to find the difference between them in seconds:
from datetime import datetime, timedelta

def offset(arg):
    # turn the numeric UTC offset into a timedelta, e.g. 400 -> 4:00:00
    return timedelta(hours=arg / 100, minutes=arg % 100)

def normalize(time, offset, sign):
    # shift the parsed local time by the offset, based on its sign
    return time + offset if sign == '-' else time - offset

def main():
    # t1 and t2 are the strings defined above
    t1offset = offset(int(t1[-5:]))
    t2offset = offset(int(t2[-5:]))
    d1 = normalize(datetime.strptime(t1[:-6], "%a %d %b %Y %H:%M:%S"),
                   t1offset, t1[-5])
    d2 = normalize(datetime.strptime(t2[:-6], "%a %d %b %Y %H:%M:%S"),
                   t2offset, t2[-5])
    if d1 > d2:
        print((d1 - d2).total_seconds())
    elif d2 > d1:
        print((d2 - d1).total_seconds())
    else:
        print(0)

main()
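For reference, here is what the slicing and the offset helper actually produce for t2 (a quick trace, assuming Python 3, where / is float division and timedelta accepts the float hours):

from datetime import timedelta

t2 = 'Mon 29 Dec 2064 03:33:48 -1100'
print(t2[:-6])   # Mon 29 Dec 2064 03:33:48  (timestamp without the offset)
print(t2[-5:])   # -1100  (the UTC offset, sign included)
print(t2[-5])    # -      (the sign character passed to normalize)
print(timedelta(hours=int(t2[-5:]) / 100, minutes=int(t2[-5:]) % 100))
                 # -1 day, 13:00:00, i.e. -11:00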
The right answer is |t1 - t2| = 413962293, while my result is 414041493: a difference of 79200 seconds, i.e. exactly 22 hours. What am I doing wrong? What did I miss, or what should I think about to solve this problem?