[Edited: I didn't originally print them to quite ridiculous enough levels of precision, so the answer gave incorrect reasoning behind the result.]
If you print them out to ridiculous levels of precision, the truth comes out:
#include <stdio.h>

int main(void) {
    printf("%20.20f, %20.20f\n", 1.45, 1.445);
    return 0;
}
Result:
1.44999999999999995559, 1.44500000000000006217
So, as converted to a double, 1.45 ends up ever so minutely smaller than the true decimal value 1.45, and 1.445 ends up ever so slightly greater than the true decimal value 1.445.
So, of course, when we round 1.45 to one decimal place, it rounds down, but when we round 1.445 to two decimal places, it rounds upward.
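A minimal sketch of that rounding behavior, assuming a C library whose printf rounds the exact stored binary value correctly (as glibc does):

#include <stdio.h>

int main(void) {
    /* Stored value is just below 1.45, so one-decimal rounding goes down. */
    printf("%.1f\n", 1.45);
    /* Stored value is just above 1.445, so two-decimal rounding goes up. */
    printf("%.2f\n", 1.445);
    return 0;
}

On such an implementation the first line prints 1.4 rather than the 1.5 that naive decimal round-half-up would suggest, while the second prints 1.45.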