I would like to implement multi-precision multiplication of 255-bit integers in radix 2^16 in C.
It was suggested to me that I represent such a big number as an array of 16 limbs, typedef uint16_t bignumber[16]. However, I don't get the intuition behind that (as far as I know, it could just as well be 8 limbs with typedef uint32_t bignumber[8]).
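For context, here is how I currently picture that layout (a minimal sketch, assuming the limbs are stored little-endian, i.e. the least significant 16 bits at index 0):

```c
#include <stdint.h>

/* 16 limbs of 16 bits each = 256 bits, enough to hold a 255-bit value.
   The number represented would be
   b[0] + b[1]*2^16 + b[2]*2^32 + ... + b[15]*2^240. */
typedef uint16_t bignumber[16];
```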
How do I then perform the multiplication of those big numbers? Also, in order to check the result (with Sage, for example), I need to print the numbers in base 10, but I don't know how to do that.
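To show what I have tried so far, here is a rough sketch of the schoolbook approach as I understand it, plus a base-10 printer that repeatedly divides by 10. The function names big_mul, big_is_zero, and big_print_dec are just my own, the little-endian limb order is my assumption, and 2^255 - 19 is only an arbitrary 255-bit test value; I am not sure this is the intended way to do it:

```c
#include <stdint.h>
#include <stdio.h>

typedef uint16_t bignumber[16];

/* Schoolbook multiplication: the product of two 255-bit numbers can be
   up to 510 bits, so the result needs 32 limbs of 16 bits. */
static void big_mul(uint16_t result[32], const bignumber a, const bignumber b)
{
    for (int i = 0; i < 32; i++)
        result[i] = 0;

    for (int i = 0; i < 16; i++) {
        uint32_t carry = 0;
        for (int j = 0; j < 16; j++) {
            /* 16x16 -> 32-bit partial product, plus what is already
               stored at this position, plus the incoming carry. */
            uint32_t t = (uint32_t)a[i] * b[j] + result[i + j] + carry;
            result[i + j] = (uint16_t)t;  /* keep the low 16 bits  */
            carry = t >> 16;              /* propagate the high 16 */
        }
        result[i + 16] = (uint16_t)carry;
    }
}

static int big_is_zero(const uint16_t *num, int nlimbs)
{
    for (int i = 0; i < nlimbs; i++)
        if (num[i] != 0)
            return 0;
    return 1;
}

/* Print an nlimbs-limb number in base 10 by repeated division by 10,
   collecting the remainders as decimal digits (slow, but it should be
   fine for checking a result against Sage). */
static void big_print_dec(const uint16_t *num, int nlimbs)
{
    uint16_t tmp[32];
    char digits[160];   /* a 510-bit value has at most 154 decimal digits */
    int ndigits = 0;

    for (int i = 0; i < nlimbs; i++)
        tmp[i] = num[i];

    do {
        uint32_t rem = 0;
        /* Divide the whole number by 10, most significant limb first. */
        for (int i = nlimbs - 1; i >= 0; i--) {
            uint32_t cur = (rem << 16) | tmp[i];
            tmp[i] = (uint16_t)(cur / 10);
            rem = cur % 10;
        }
        digits[ndigits++] = (char)('0' + rem); /* least significant digit first */
    } while (!big_is_zero(tmp, nlimbs));

    for (int i = ndigits - 1; i >= 0; i--)     /* print in reading order */
        putchar(digits[i]);
    putchar('\n');
}

int main(void)
{
    /* Example operand: 2^255 - 19 (limb 0 is the least significant). */
    bignumber a = { 0xFFED, 0xFFFF, 0xFFFF, 0xFFFF, 0xFFFF, 0xFFFF, 0xFFFF, 0xFFFF,
                    0xFFFF, 0xFFFF, 0xFFFF, 0xFFFF, 0xFFFF, 0xFFFF, 0xFFFF, 0x7FFF };
    uint16_t product[32];

    big_mul(product, a, a);          /* square it */
    big_print_dec(product, 32);      /* compare the decimal output with Sage */
    return 0;
}
```

The intuition I am following is that each 16x16-bit partial product fits in a uint32_t, so the carries can be propagated limb by limb. Is that the right way to think about it, and is this the reason for choosing 16-bit limbs over 32-bit ones?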
Any help clearly explaining the concept would be appreciated.
Thanks