I have an array of bytes and the length of that array. The goal is to output a string containing the number those bytes represent, written in base 10.
The array is little-endian, meaning the first byte (arr[0]) is the least significant one. Here is an example:
#include <iostream>
#include <string>

using namespace std;

typedef unsigned char Byte;

// The function I want to implement: it should return the base-10 string
// representation of the number stored in the little-endian byte array.
string hexToDec(const Byte *arr, int len);

int main() {
    int len = 5;
    Byte *arr = new Byte[len];
    int i = 0;
    arr[i++] = 0x12;
    arr[i++] = 0x34;
    arr[i++] = 0x56;
    arr[i++] = 0x78;
    arr[i++] = 0x9A;

    cout << hexToDec(arr, len) << endl;

    delete[] arr;
}
The array consists of [0x12, 0x34, 0x56, 0x78, 0x9A]. The function hexToDec that I want to implement should return "663443878930", which is that number written in decimal.
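As a quick sanity check of that expected value: the five bytes read as a little-endian number are 0x9A78563412, which still fits into a 64-bit type, so it can be verified with a throwaway snippet (not part of my actual program):

#include <iostream>

using namespace std;

int main() {
    // The bytes [0x12, 0x34, 0x56, 0x78, 0x9A] read little-endian form 0x9A78563412.
    unsigned long long value = 0x9A78563412ULL;
    cout << value << endl;  // prints 663443878930
}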
But the problem is that my machine is 32-bit, so instead it outputs 2018915346 (notice that this number comes from integer overflow: it is the value of just the lowest four bytes). The cause is my naive approach: iterate over the array, raise 256 to the power of the position, multiply that by the byte at that position, and add the product to a running sum. This of course leads to integer overflow.
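For reference, this is roughly what my naive attempt looks like (a sketch rather than my exact code, reusing the Byte typedef from the example above):

// Naive conversion: sum of arr[i] * 256^i, accumulated in a plain int.
// Both the running sum and the power wrap around once the true value
// exceeds the 32-bit range, which is why I get 2018915346 instead of
// the correct result.
int naiveHexToDec(const Byte *arr, int len) {
    int sum = 0;
    int power = 1;                 // 256^i
    for (int i = 0; i < len; ++i) {
        sum += arr[i] * power;
        power *= 256;              // overflows when it reaches 256^4
    }
    return sum;                    // would then be converted to a string
}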
I also tried long long int, but at some point integer overflow occurs again: even an unsigned 64-bit integer holds only 8 bytes' worth of value, so any longer array wraps around.
The arrays I want to convert can be very long (more than 1000 bytes, which is roughly 2400 decimal digits), so this definitely requires a much cleverer algorithm than my naive one.
Question
What would be a good algorithm to achieve this? And what is the optimal complexity for it? Can it be done in linear time, O(n), where n is the length of the array? I really cannot come up with a good idea; implementation is not the problem, my lack of ideas is.
Advice or an idea of how to do this will be enough, but if it is easier to explain with some code, feel free to write it in C++.