I have been playing around with Big Integer arithmetic for a few months (2+) now and have explored many ways of implementing it. I have working versions of a string-based implementation (highly inefficient), base-1e9 and base-1e18 implementations (more efficient, but still slightly wasteful) and a modular-arithmetic-based implementation. Currently, I'm working on a new idea I've had in mind. (Note: it might not be a radically different idea for working with big integers, but because I've been exploring the concept on my own, it seems new to me. I've never looked at the implementations in the Boost Multiprecision library, GNU GMP or any other library, and have come up with most of the algorithms on my own.)
The new idea: I'm representing the Big Integer in base 2, stored in 32-bit slots, with negative values kept in two's complement form.
(Base 10) 100
= (Base 2) 00000000000000000000000001100100
(Base 10) 10000000000000000
= (Base 2) 00000000001000111000011011110010 01101111110000010000000000000000
...and so on for bigger numbers.
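For concreteness, here is a minimal sketch of the layout I mean (the names are illustrative, not my actual class). Limbs are stored little-endian, so the examples above map onto the vector like this:

```cpp
#include <cstdint>
#include <vector>

// Illustrative layout (not my actual class): limbs[0] holds bits 0..31,
// limbs[1] holds bits 32..63, and so on. Negative values would be kept
// in two's complement form.
struct BigInt {
    std::vector<std::uint32_t> limbs;
};

// (Base 10) 100 fits in one limb:
//   BigInt a{{0x00000064u}};
// (Base 10) 10000000000000000 == 0x002386F26FC10000 needs two limbs:
//   BigInt b{{0x6FC10000u, 0x002386F2u}};
```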
This representation wastes essentially no space, and implementing algorithms in base 2 is a whole lot easier. The only problem I'm facing is converting a huge base-2 Big Integer into a human-readable base-10 representation. Currently, I use my base-1e9 Big Integer implementation alongside this one to do the conversion. However, that is very inefficient and forces a trade-off between algorithm speed and memory cost, and I don't want that trade-off to exist. Why should I pay such a high memory cost every time I need to display an integer, when every display means running the base-1e9 code to compute and sum pure powers of 2?
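To make that cost concrete, here is a simplified sketch of the kind of conversion I currently run. My real code goes through my base-1e9 class; the names here are illustrative, and the sketch assumes a non-negative value in the limb layout shown above:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

using Chunks = std::vector<std::uint32_t>;   // base-1e9 chunks, little-endian
constexpr std::uint32_t BASE = 1'000'000'000u;

// acc += addend, both in base 1e9 (safe even when acc and addend alias)
void add(Chunks& acc, const Chunks& addend) {
    std::uint64_t carry = 0;
    for (std::size_t i = 0; i < addend.size() || carry; ++i) {
        if (i == acc.size()) acc.push_back(0);
        std::uint64_t s = acc[i] + carry + (i < addend.size() ? addend[i] : 0);
        acc[i] = static_cast<std::uint32_t>(s % BASE);
        carry = s / BASE;
    }
}

// Convert little-endian 32-bit binary limbs to base-1e9 chunks by
// walking the bits and summing the corresponding powers of two.
Chunks to_base1e9(const std::vector<std::uint32_t>& limbs) {
    Chunks result{0};
    Chunks pow2{1};                          // 2^0 in base 1e9
    for (std::uint32_t limb : limbs) {
        for (int bit = 0; bit < 32; ++bit) {
            if ((limb >> bit) & 1u) add(result, pow2);
            add(pow2, pow2);                 // pow2 *= 2
        }
    }
    return result;
}
```

Every bit position costs a full big-number addition to keep `pow2` up to date, and every set bit costs another one, on top of holding the base-1e9 power-of-two accumulator in memory alongside the binary limbs. That is the overhead I want to get rid of.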
So far, I haven't been able to come up with a clever way of doing the conversion, and I would like some advice on how to proceed. Is there an efficient way? Can the base-10 representation be built solely by looking at the set bits?