I want to specify a sequence of large integers (with many zeros) like:
a = [1e13, 1e14, 1e19, ...]
My intuition is to use scientific notation, but in Python that produces a float instead of an integer. Is there an easy way in Python to write these integer literals without writing out all the zeros? Making sure the number of zeros is correct is a nightmare.
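For example, in a CPython 3 REPL:

>>> type(1e13)
<class 'float'>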
I believe I can cast the floats back to integers with int(), but I wonder if there is a better way?
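The cast seems fine for the smaller values, but I noticed it can go wrong silently once the exponent is large enough that the float is no longer exact:

>>> int(1e13)
10000000000000
>>> int(1e23)  # nearest double to 10**23, not the exact value
99999999999999991611392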