
I had to convert the 128 bits of a character array of size 16 (1 byte per character) into decimal and hexadecimal, without using any libraries other than the ones already included. Converting to hexadecimal was easy, since four bits are processed at a time and the resulting digit is printed as soon as it is generated.

But when it comes to decimal, converting it the normal mathematical way, where each bit is multiplied by 2 raised to the power of its position, is not possible here, since the result would not fit in any built-in integer type.

So I thought I would convert it the way I did with hexadecimal, printing digit by digit. The problem is that this does not work for decimal: the largest digit is 9, which needs 4 bits to be represented, yet 4 bits can represent values up to 15. I tried to build some mechanism to carry the extra part, but couldn't find a way to do it, and I don't think that approach would have worked anyway. I have been trying aimlessly for three days with no idea what to do, and I couldn't find any helpful solution on the internet.

So, I want some way to get this done.

Here is My Complete Code:

#include <iostream>
#include <cstring>
#include <cmath>

using namespace std;

const int strng = 128;
const int byts = 16;

class BariBitKari {

    char bits_ar[byts];

public:

    BariBitKari(char inp[strng]) {

        set_bits_ar(inp);
    }

    // Pack the 128 '0'/'1' input characters into 16 bytes (8 bits per byte)
    void set_bits_ar(char in_ar[strng]) {
        char b_ar[byts] = {};   // zero-initialize so every bit starts defined

        cout << "Binary 1: ";
        for (int i=0, j=0; i<byts; i++) {

            for (int k=7; k>=0; k--) {
                if (in_ar[j] == '1') {
                    cout << '1';
                    b_ar[i] |= 1UL << k;
                }
                else if (in_ar[j] == '0') {
                    cout << '0';
                    b_ar[i] &= ~(1UL << k);
                }

                j++;
            }
        }
        cout << endl;

        memcpy(bits_ar, b_ar, byts);   // raw bytes (not a C string), so memcpy instead of strcpy
    }

    char * get_bits_ar() {
        return bits_ar;
    }


    // Functions

    // Attempt at printing the value in decimal; the digit code below is
    // commented out because the carry handling doesn't work yet
    void print_deci() {

        char b_ar[byts];

        memcpy(b_ar, get_bits_ar(), byts);

        int sum = 0;
        int carry = 0;

        cout << "Decimal : ";

        for (int i=byts-1; i >= 0; i--){

            for (int j=4; j>=0; j-=4) {

                char y = (b_ar[i] << j) >> 4;

                // sum = 0;

                for (int k=0; k <= 3; k++) {

                    if ((y >> k) & 1) {
                        sum += pow(2, k);
                    }
                }

                // sum += carry;
                // if (sum > 9) {
                //  carry = 1;
                //  sum -= 10;
                // }
                // else {
                //  carry = 0;
                // }
                // cout << sum;
            }
        }

        cout << endl;
    }

    // Prints the value in hexadecimal, one 4-bit nibble at a time
    void print_hexa() {

        char b_ar[byts];

        memcpy(b_ar, get_bits_ar(), byts);

        char hexed;

        int sum;

        cout << "Hexadecimal : 0x";

        for (int i=0; i < byts; i++){

            for (int j=0; j<=4; j+=4) {

                char y = (b_ar[i] << j) >> 4;

                sum = 0;

                for (int k=3; k >= 0; k--) {

                    if ((y >> k) & 1) {
                        sum += pow(2, k);
                    }
                }

                if (sum > 9) {
                    hexed = sum + 55;
                }
                else {
                    hexed = sum + 48;
                }
                cout << hexed;
            }
        }
        cout << endl;
    }
};

int main() {

    char ar[strng];

    for (int i=0; i<strng; i++) {
        if ((i+1) % 8 == 0) {
            ar[i] = '0';
        }
        else {
            ar[i] = '1';
        }
    }

    BariBitKari arr(ar);
    arr.print_hexa();
    arr.print_deci();

    return 0;
}
DaniyalAhmadSE
    You may need a Big Number library as standard C++ doesn't have a 128-bit integer number. Your compiler may have one, but you'll need to check the documentation. – Thomas Matthews Apr 04 '21 at 19:06
  • @ThomasMatthews I am not allowed to use any library for this purpose. – DaniyalAhmadSE Apr 04 '21 at 19:07
  • You have to implement a basic long-division algorithm yourself, byte-by-byte. It's highly doubtful that anyone on Stack Overflow will write all that code for you. Instead you'll have to try to implement it yourself, and ask ***specific*** programming-related questions if you get stuck during your implementation, explaining what exactly you're stuck on. – Sam Varshavchik Apr 04 '21 at 19:07
  • @SamVarshavchik can you give me some idea of this method, some direction, about how I have to treat each byte? – DaniyalAhmadSE Apr 04 '21 at 19:09
  • Simply ask yourself, how did you learn to do long division in grade school? You do it the same way, but this time using bytes instead of decimal digits. – Sam Varshavchik Apr 04 '21 at 19:10
  • I would create an array `char[128][39]`, then stuff it with the decimal representation of powers of 2, where each char is a value from 0-9, so each vector is the decimal equivalent of the power of 2 (same as the bit location). Then just go through each sequential bit and, if it's 1, add in the vector at that location, then do a pass to deal with carries (overflows beyond 9). Pretty easy, and you can create the initial set using constexpr so it should be reasonably fast. – doug Apr 04 '21 at 19:46
  • Does this answer your question? [How to convert a 128-bit integer to a decimal ascii string in C?](https://stackoverflow.com/questions/8023414/how-to-convert-a-128-bit-integer-to-a-decimal-ascii-string-in-c) – phuclv Apr 05 '21 at 01:10
  • [How to convert a 128-bit integer to a decimal ascii string in C?](https://stackoverflow.com/q/8023414/995714), [Print int128 value from struct](https://stackoverflow.com/q/66825998/995714) – phuclv Apr 05 '21 at 01:11
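
To give the long-division suggestion in the comments above a little more shape, here is a minimal sketch of that approach (the function name BytesToDecimal and the assumption that the 16 bytes are stored most-significant byte first are mine, not from the comments). It repeatedly divides the whole byte array by 10 and collects each remainder as one decimal digit:

// Sketch only: repeatedly divide the byte array (big-endian) by 10;
// each pass yields one decimal digit, least significant first.
// Note: it modifies the input array in place.
#include <string>
#include <algorithm>

std::string BytesToDecimal(unsigned char* bytes, int len)
{
    std::string digits;
    bool allZero = false;
    while (!allZero)
    {
        int remainder = 0;
        allZero = true;
        for (int i = 0; i < len; i++)           // long division, MSB first
        {
            int cur = remainder * 256 + bytes[i];
            bytes[i] = (unsigned char)(cur / 10);
            remainder = cur % 10;
            if (bytes[i] != 0) allZero = false;
        }
        digits.push_back('0' + remainder);      // next decimal digit
    }
    std::reverse(digits.begin(), digits.end());
    return digits;
}

Run on a copy of the 16 raw bytes, this should produce the same digits as the string-addition approach in the answer below.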

1 Answer


To convert a 128-bit number into a "decimal" string, I'm going to make the assumption that the large decimal value just needs to be contained in a string and that we're only in the "positive" space. Without using a proper big number library, I'll demonstrate a way to convert any array of bytes into a decimal string. It's not the most efficient way because it continually parses, copies, and scans strings of digit characters.

We'll take advantage of the fact that any large number such as the following:

0x87654321 == 2,271,560,481

can be broken into a series of bytes, each shifted left in 8-bit steps. Adding these shifted chunks back together reproduces the original value:

0x87 << 24   == 0x87000000 == 2,264,924,160
0x65 << 16   == 0x00650000 ==     6,619,136
0x43 << 8    == 0x00004300 ==        17,152
0x21 << 0    == 0x00000021 ==            33

Sum          == 0x87654321 == 2,271,560,481

So our strategy for converting a 128-bit number into a string will be to:

  • Convert the original 16-byte array into 16 strings, each representing the decimal equivalent of one byte of the array

  • "Shift left" each string by the appropriate number of bits based on the index of the original byte in the array, taking advantage of the fact that a left shift by one bit is equivalent to multiplying by 2

  • Add all these shifted strings together

So to make this work, we introduce a function that can "Add" two strings (consisting only of digits) together:

// Headers needed for the snippets below: <string>, <algorithm> (for std::swap
// and std::reverse), and <iostream> for the test programs at the end.
#include <string>
#include <algorithm>
#include <iostream>

using namespace std;

// s1 and s2 are strings consisting of digit chars only ('0'..'9')
// This function will compute the "sum" of s1 and s2 as a string
string SumStringValues(const string& s1, const string& s2)
{
    string result;
    string str1=s1, str2=s2;

    // make str2 the bigger string
    if (str1.size() > str2.size())
    {
        swap(str1, str2);
    }

    // pad zeros onto the front of str1 so it's the same size as str2
    while (str1.size() < str2.size())
    {
        str1 = string("0") + str1;
    }

    // now do the addition as a digit-by-digit loop over these strings
    size_t len = str1.size();
    bool carry = false;
    while (len)
    {
        len--;

        int d1 = str1[len] - '0';
        int d2 = str2[len] - '0';

        int sum = d1 + d2 + (carry ? 1 : 0);
        carry = (sum > 9);
        if (carry)
        {
            sum -= 10;
        }

        result.push_back('0' + sum);
    }

    if (carry)
    {
        result.push_back('1');
    }

    std::reverse(result.begin(), result.end());
    return result;
}
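
For example, a couple of quick checks of the carry handling:

cout << SumStringValues("999", "1") << endl;   // prints 1000
cout << SumStringValues("58", "67") << endl;   // prints 125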

Next, we need a function to do a "shift left" on a decimal string:

// s is a string of digits only (interpreted as decimal number)
// This function will "shift left" the string by N bits
// Basically "multiplying by 2" N times
string ShiftLeftString(const string& s, size_t N)
{
    string result = s;

    while (N > 0)
    {
        result = SumStringValues(result, result); // multiply by 2
        N--;
    }
    return result;
}
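
For example, shifting the string "1" left by 10 bits just doubles it ten times:

cout << ShiftLeftString("1", 10) << endl;   // prints 1024, i.e. 2^10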

Then, to put it all together and convert a byte array to a decimal string:

string MakeStringFromByteArray(unsigned char* data, size_t len)
{
    string result = "0";
    for (size_t i = 0; i < len; i++)
    {
        auto tmp = to_string((unsigned int)data[i]);   // byte to decimal string
        tmp = ShiftLeftString(tmp, (len - i - 1) * 8); // shift left
        result = SumStringValues(result, tmp);         // sum
    }
    return result;
}

Now let's test it out on the original 32-bit value we used above:

int main()
{
    // 0x87654321
    unsigned char data[4] = { 0x87,0x65,0x43,0x21 };
    cout << MakeStringFromByteArray(data, 4) << endl;
    return 0;
}

The resulting program will print out: 2271560481 - same as above.

Now let's try it out on a 16 byte value:

int main()
{
    // 0x87654321aabbccddeeff432124681111
    unsigned char data[16] = { 0x87,0x65,0x43,0x21,0xaa,0xbb,0xcc,0xdd,0xee,0xff,0x43,0x21,0x24,0x68,0x11,0x11 };
    std::cout << MakeStringFromByteArray(data, sizeof(data)) << endl;
    return 0;
}

The above prints: 179971563002487956319748178665913454865

And we'll use Python to double-check our results:

Python 3.8.3 (tags/v3.8.3:6f8c832, May 13 2020, 22:37:02) [MSC v.1924 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> int("0x87654321aabbccddeeff432124681111", 16)
179971563002487956319748178665913454865
>>>

Looks good to me.

I originally had an implementation that would do the chunking and summation in 32-bit chunks instead of 8-bit chunks. However, little-endian vs. big-endian byte order issues come into play there. I'll leave that potential optimization as an exercise for another day.
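
For reference, that chunked variant might look something like the sketch below (my own reconstruction, not the original implementation; MakeStringFromByteArray32 is a made-up name). Assembling each 32-bit chunk byte-by-byte, most significant byte first, keeps it independent of the host byte order:

// Sketch: same algorithm, but consuming the byte array in 32-bit chunks.
// Assumes len is a multiple of 4 and reuses SumStringValues/ShiftLeftString.
string MakeStringFromByteArray32(const unsigned char* data, size_t len)
{
    string result = "0";
    for (size_t i = 0; i < len; i += 4)
    {
        // Build one chunk from four bytes, most significant byte first
        unsigned long chunk = ((unsigned long)data[i] << 24) |
                              ((unsigned long)data[i + 1] << 16) |
                              ((unsigned long)data[i + 2] << 8) |
                              ((unsigned long)data[i + 3]);
        auto tmp = to_string(chunk);                    // chunk -> decimal string
        tmp = ShiftLeftString(tmp, (len - i - 4) * 8);  // shift into position
        result = SumStringValues(result, tmp);          // accumulate
    }
    return result;
}

Because it shifts by 32 bits at a time, it needs far fewer string-doubling passes than the byte-at-a-time version.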

selbie
  • Nice. I suggest a micro-simplification for the padding though: `str1 = std::string(str2.size() - str1.size(), '0') + str1;` – Ted Lyngmo Apr 04 '21 at 21:14