
I'm trying to convert a user-entered integer to binary form, but I keep getting a warning that "binary" is not initialized in the last printf statement.

#include <stdio.h>

int main(void)
{
    long int integer, binary;

    printf("Enter an integer: \n");
    scanf("%ld", &integer);
    while(integer != 0)
    {
        binary = integer % 2;
        integer = integer / 2;
    }
    printf("The integer in binary is %ld", binary);
    return 0;
}
tz013

2 Answers


Welcome to Stack Overflow.

The variable binary is set inside the while loop, but what happens if you actually enter a zero for the integer? In that case the loop never runs, and binary never gets a value at all. Surprise!

That's why it's complaining.

However, even if you enter a nonzero value, the algorithm you're using overwrites binary on every pass through the loop, so it ends up holding only the last remainder rather than the whole number. You'll need different code that builds up the binary value as you run through the integer.

Basically what you're trying to do is turn a binary value into a kind of decimal representation of binary, which has enough limitations that I'm not sure it's worth doing.

Still:

    long int binary = 0;
    long int place = 1;                   // decimal place where the next bit goes

    while (integer != 0)
    {
        binary += (integer % 2) * place;  // drop this bit (1 or 0) into its decimal place
        place *= 10;                      // the next bit lands one "binary" decimal place higher
        integer /= 2;
    }
    printf("Integer in binary is %ld\n", binary);
    return 0;
}

This works, but it has a severe limitation: it can only represent relatively small values, because the decimal digits of a long run out well before the bits do.

The most common way people solve this exercise is to convert the integer to a string rather than to another integer, which makes it easy to display.

I wonder if that's the problem you're trying to solve?
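
If that's the case, a rough sketch of the string approach might look something like this; the bits buffer and the do/while structure are just my own illustration, and it assumes a non-negative input:

#include <stdio.h>

int main(void)
{
    long int integer;
    char bits[sizeof integer * 8];      /* one character per bit is plenty */
    int i = 0;

    printf("Enter an integer: \n");
    scanf("%ld", &integer);

    /* note: a negative input would need extra handling */
    do {                                /* do/while so an input of 0 still yields "0" */
        bits[i++] = (integer % 2) ? '1' : '0';
        integer /= 2;
    } while (integer != 0);

    printf("The integer in binary is ");
    while (i > 0)                       /* bits were stored low bit first, so print in reverse */
        putchar(bits[--i]);
    putchar('\n');

    return 0;
}

Because the digits go into a character buffer instead of a long, this also sidesteps the size limit above: a 64-bit value never needs more than 64 characters.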

Steve Friedl

An integer made up of only the decimal digits 1 and 0 is not binary. An int on a computer is already binary; the %d format specifier creates a character-string representation of that value in decimal. It is mathematically nonsensical to generate a value that, when printed in decimal, merely looks like the binary form of some other value, not least because that approach is good for only a 10-bit value in a 32-bit int (INT_MAX is 2147483647, so at most ten 1-and-0 digits fit) or 19 bits in a 64-bit int.

Moreover, the solution requires further consideration (and more code) to handle negative integer values - although how you do that is ambiguous due to the limited number of bits you can represent.

Since the int is already a binary value, it is far simpler to present the binary bit pattern directly than to calculate some decimal value that happens to resemble a binary value:

    // Skip leading zero bits (stop at bit 0 so a zero input still prints "0")
    // uint32_t comes from <stdint.h>; this assumes the value fits in 32 bits
    uint32_t mask = 0x80000000u ;
    while( mask != 1 && (integer & mask) == 0 ) mask >>= 1 ;

    // Output the remaining significant digits
    printf("The integer in binary is ");
    while( mask != 0 )
    {
        putchar( (integer & mask) == 0 ? '0' : '1' ) ;
        mask >>= 1 ;
    }
    putchar( '\n' ) ;
Clifford