7

I want to convert an integer to binary string and then store each bit of the integer string to an element of a integer array of a given size. I am sure that the input integer's binary expression won't exceed the size of the array specified. How to do this in c++?

lovespeed
  • Why would you want to do that? Ints are already natively an "array of bits"; you can access each bit. – Mat Dec 31 '12 at 17:09
  • A "binary string"? As in characters of 1s and 0s? What a strange task... – Mooing Duck Dec 31 '12 at 17:09
  • @Mat: reread the question, he wants to convert an integer into an array of _int_, where each integer in the array holds a bit from the original integer. – Mooing Duck Dec 31 '12 at 17:10
  • @MooingDuck: I understand. That's like a 32x or 64x storage increase. Doesn't change my question. – Mat Dec 31 '12 at 17:12
  • @Mat: There are several reasons to do so. The original int might be a bitfield and he wants to extract all the data at once, or maybe he's doing IO. – Mooing Duck Dec 31 '12 at 17:13
  • @MooingDuck: precisely, there are a lot of potential reasons, some of which might be legitimate, some of which might have _much_ better alternatives. Don't you think the OP would be better off with an answer that actually addresses their actual problem? – Mat Dec 31 '12 at 17:16
  • More likely, this is a school exercise! – Mats Petersson Dec 31 '12 at 17:22
  • LSB first or last in the array? – James Dec 31 '12 at 17:33

8 Answers

12

Pseudo code:

int value = ????  // assuming a 32 bit int
int i;

for (i = 0; i < 32; ++i) {
    array[i] = (value >> i) & 1;
}
mah
  • Why not `array[i] = (theValue >> i) & 1` – I'm sure the compiler does the same thing, but seeing that "there isn't going to be a branch in there" makes me happier. – Mats Petersson Dec 31 '12 at 17:23
  • The question is tagged as C++ and so you must use templates, otherwise it's C. – James Dec 31 '12 at 17:24
  • Works well, but the order of bits is reversed, so instead of array[i] I suggest using array[31 - i]. – Marek Apr 07 '15 at 02:41
  • Could 32 be changed to sizeof(int) * 8? – jin zhenhui Apr 13 '20 at 03:36
  • @jinzhenhui my answer (8 years ago) was, as indicated, intended to be pseudo code. There are several variations one could apply when writing this for real, including your suggestion – but I would not literally use sizeof(int). Instead, I would use sizeof(value), which, yes, is currently the same thing. The benefit of specifying the variable, not its type, is that if the variable's type should change in the future, you only need to change it in one place; other code that refers to the variable does not become broken or require a change. – mah Apr 14 '20 at 15:06
6
#include <climits>   // CHAR_BIT
#include <iostream>
#include <iterator>  // std::begin, std::end

template<class output_iterator>
void convert_number_to_array_of_digits(const unsigned number, 
         output_iterator first, output_iterator last) 
{
    const unsigned number_bits = CHAR_BIT*sizeof(int);
    //extract bits one at a time
    for(unsigned i=0; i<number_bits && first!=last; ++i) {
        const unsigned shift_amount = number_bits-i-1;
        const unsigned this_bit = (number>>shift_amount)&1;
        *first = this_bit;
        ++first;
    }
    //pad the rest with zeros
    while(first != last) {
        *first = 0;
        ++first;
    }
}

int main() {
    int number = 413523152;
    int array[32];
    convert_number_to_array_of_digits(number, std::begin(array), std::end(array));
    for(int i=0; i<32; ++i)
        std::cout << array[i] << ' ';
}

Proof of compilation here

Mooing Duck
6

You could use C++'s bitset library, as follows.

#include <iostream>
#include <bitset>

int main()
{
  int N; // input number in base 10
  std::cin >> N;
  int O[32]; // the output array
  std::bitset<32> A = N; // A holds the binary representation of N
  for (int i = 0, j = 31; i < 32; i++, j--)
  {
    // assign the bits one by one, MSB first
    O[i] = A[j];
  }
  return 0;
}

A couple of points to note here. First, the 32 in the bitset declaration tells the compiler that you want 32 bits to represent your number, so even if your number takes fewer bits to represent, the bitset variable will still have 32 bits, possibly with many leading zeroes. Second, bitset is a really flexible way of handling binary: you can give it a string or a number as input, and you can use the bitset as an array or as a string. It's a really handy library. You can print out the bitset variable A with cout<<A; and see how it works.

Aravind
2

You can do it like this:

int index = 0; // assumes result[] is large enough
while (input != 0) {
    if (input & 1)
        result[index] = 1;
    else
        result[index] = 0;
    input >>= 1; // dividing by two
    index++;
}

Note that this stores the least-significant bit first (at result[0]).
Alfred
1

As Mat mentioned above, an int is already a bit-vector (using bitwise operations, you can check each bit). So, you can simply try something like this:

#include <climits> // CHAR_BIT

// Note: bit shifts operate on values, not byte layout,
// so this does not depend on the endianness of your machine.
int x = 0xdeadbeef; // your integer?
int arr[sizeof(int) * CHAR_BIT];
for (unsigned i = 0; i < sizeof(int) * CHAR_BIT; ++i) {
  arr[i] = (x & (1u << i)) ? 1 : 0; // take the i-th bit (LSB first)
}
RageD
1

Decimal to Binary: Size independent

Two ways: both store the binary representation into a dynamically allocated array of bits (MSB first to LSB last).

First Method:

#include <limits.h> // CHAR_BIT
#include <stdlib.h> // calloc
int* binary(int dec){
  int* bits = calloc(sizeof(int) * CHAR_BIT, sizeof(int));
  if(bits == NULL) return NULL;
  int i = 0;

  // conversion
  int left = sizeof(int) * CHAR_BIT - 1; 
  for(i = 0; left >= 0; left--, i++){
    bits[i] = !!(dec & ( 1u << left ));      
  }

  return bits;
}

Second Method:

#include <limits.h> // CHAR_BIT
#include <stdlib.h> // calloc
int* binary(unsigned int num)
{
   unsigned int mask = 1u << ((sizeof(int) * CHAR_BIT) - 1);   
                      // mask = 1000...0, only the highest bit set
   int* bits = calloc(sizeof(int) * CHAR_BIT, sizeof(int));
   if(bits == NULL) return NULL;
   int i = 0;

   //conversion 
   while(mask > 0){
     if((num & mask) == 0 )
         bits[i] = 0;
     else
         bits[i] = 1;
     mask = mask >> 1 ;  // Right Shift 
     i++;
   }

   return bits;
}
Grijesh Chauhan
0

I know it doesn't add as many zeros as you might wish for positive numbers, but for negative binary numbers it works pretty well. I just wanted to post a solution for once :) Note that the result is an int whose decimal digits spell out the binary representation, rather than an array.

int BinToDec(int Value, int Padding = 8)
{
    int Bin = 0;

    for (int I = 1, Pos = 1; I < (Padding + 1); ++I, Pos *= 10)
    {
        Bin += ((Value >> (I - 1)) & 1) * Pos; // bit I-1 becomes decimal digit I
    }
    return Bin;
}
Brandon
0

This is what I use; it also lets you give the number of bits that will be in the final vector, filling any unused bits with leading 0s.

std::vector<int> to_binary(int num_to_convert_to_binary, int num_bits_in_out_vec)
{
    std::vector<int> r;

    // build the binary vector of minimum size backwards
    // (LSB ends up at .begin(), MSB toward .end())
    while (num_to_convert_to_binary > 0)
    {
        if (num_to_convert_to_binary % 2 == 0)
            r.push_back(0);
        else
            r.push_back(1);
        num_to_convert_to_binary = num_to_convert_to_binary / 2;
    }

    // pad with 0s up to the requested width
    while ((int)r.size() < num_bits_in_out_vec)
        r.push_back(0);

    return r;
}
perry_the_python