
I'm currently working on a simulation of the MIPS processor in C++ for a computer architecture class, and I'm having some problems converting from decimal numbers to binary (signed numbers, both ways). Everything works fine until the very last bit, because my current algorithm shifts the mask out of range for int on 1<<=31. Just need a nudge in the right direction to get it up and running. Thanks!

//Assume 32 bit decimal number
string DecimalToBinaryString(int a)
{
    string binary = "";
    int mask = 1;
    for(int i = 0; i < 31; i++)
    {
        if((mask&a) >= 1)
            binary = "1"+binary;
        else
            binary = "0"+binary;
        mask<<=1;
    }
    cout<<binary<<endl;
    return binary;
}

I'm also including my other algorithm for completeness. I apologize for the lack of comments, but it's fairly straightforward.

int BinaryStringToDecimal(string a)
{
    int num = 0;
    bool neg = false;
    if(a.at(0) == '1')
    {
        neg = true;
        for(int x = a.length()-1; x >= 0; x--)
        {
            if(a.at(x) == '1')
                a.at(x) = '0';
            else a.at(x) = '1';
        }
        a.at(a.length()-1) += 1;
        for(int x = a.length()-1; x >= 0; x--)
        {
            if(a.at(x) == '2')
            {
                if(x-1 >= 0)
                {
                    if(a.at(x-1) == '1')
                        a.at(x-1) = '2';
                    if(a.at(x-1) == '0')
                        a.at(x-1) = '1';
                    a.at(x) = '0';
                }
            }
            else if(a.at(x) == '3')
            {
                if(x-1 >= 0)
                    a.at(x-1) += '2';
                a.at(x) = '1';
            }
        }
        if(a.at(0) == '2')
            a.at(0) = '0';
        else if(a.at(0) == '3')
            a.at(0) = '1';
    }
    for(int x = a.length()-1; x >= 0; x--)
    {
        if(a.at(x) == '1')
            num += pow(2.0, a.length()-x-1);
    }
    if(neg)
        num = num*-1;   
    return num;
 }

Also if anyone knows any good ways to go about writing these more efficiently I'd love to hear it. I've only had the two introductory programming classes but have been playing with different techniques to see how well I like their style.

Paul Ruiz
  • One option for altering the efficiency would be to work 4 bits at a time, mapping each nybble to the corresponding 4-character string. This would do fewer string assignments. I'd also look at producing the answer in MSB to LSB order rather than vice versa, so you could use `binary += pattern[nybble];`. Extending the string on the right might be (almost immeasurably) more efficient than what you have where you're effectively inserting on the left. – Jonathan Leffler Nov 22 '11 at 04:58
  • Unless I really needed to do the conversion myself, I'd use `std::bitset` to handle the conversion and printing: `std::cout << std::bitset<32>(a);`. – Jerry Coffin Nov 22 '11 at 05:26
  • In addition to working left to right like @JonathanLeffler said, I'd suggest you preallocate the entire string, since you know the output is going to be 32 `char`s. It may or may not make a difference depending on your string implementation, but it certainly doesn't hurt. – Pablo Nov 22 '11 at 05:46

5 Answers


There are actually standard one-liners for these.

#include <bitset>

std::string s = std::bitset< 64 >( 12345 ).to_string(); // string conversion

std::cout << std::bitset< 64 >( 54321 ) << ' '; // direct output

std::bitset< 64 > input;
std::cin >> input;
unsigned long ul = input.to_ulong();

You can see this running as a demo on ideone.

Potatoswatter
  • Note that I had to use the full form of `std::bitset<64>(12345).to_string<char,std::char_traits<char>,std::allocator<char> >()` for it to work. This also is how it's documented here: http://www.cplusplus.com/reference/bitset/bitset/to_string I don't know why your code works for you. GCC version? STL version? – Chaos_99 Jul 24 '13 at 07:38
  • IDEone uses GCC. They would never specify the standard library such that you would have to write all that. What are you using, old MSVC? Also, avoid cplusplus.com, they have no quality control. – Potatoswatter Jul 25 '13 at 12:15
  • I'm using GCC too. GCC 3.3.5 to be precise. – Chaos_99 Jul 25 '13 at 13:41
  • @Chaos_99 That's [unreasonably old](http://gcc.gnu.org/releases.html). Where did you even get that? – Potatoswatter Jul 25 '13 at 22:53
  • That's 9 years. Well, in certain environments that's barely worn in. You can't always choose what to work with ... – Chaos_99 Jul 26 '13 at 06:21
  • @Chaos_99 Long ago nobody could manage to make a reliable compiler, and there weren't enough users and testcases to do adequate prerelease QA. Those days are over, but some choose to stay in them. – Potatoswatter Jul 26 '13 at 10:44
  • Don't tell me, I would switch rather sooner than later. But if you have to guarantee that your code compiles exactly the same for 20+ years, you can't just do an `apt-get upgrade` every now and then. But let's end this here. This doesn't help anybody else. – Chaos_99 Jul 26 '13 at 11:54
  • what is the time complexity of the above method? – dorado Aug 26 '15 at 17:25
  • Surprisingly, no one seems to mention that bitset's to_string will return the binary representation padded. Obviously, since bitset is returning the binary representation of the set, that is exactly what it should return. But if someone is "misusing" bitset to convert ints to binary string representation they will find the output padded. `cout << bitset<16>(21); // outputs 0000000000010101` – Vlatko Šurlan Nov 20 '16 at 15:59
  • bitset seems to be working good with "decimal" value passed, but not with "binary" value. Like it understands "12" and "0x1100" as different values, and hence shows different output for them. Any clues, why is it so ? – parasrish Apr 23 '17 at 12:46
  • @parasrish `0x` indicates hexadecimal, not binary. C++14 introduces binary literals which begin `0b`. – Potatoswatter Apr 23 '17 at 14:13
  • oh yes with "0b" its working as expected. and since, i was trying working on each digits after the conversion (as string), therefore no difference in expected observed. thank you for pointing out @Potatoswatter ! – parasrish Apr 24 '17 at 03:05
  • @Potatoswatter ! Method 1: as stated above, was ok to understand the two cases. But, (Method2) when attempted with:: `while(num) { tempCount += (num & 1); num = num >> 1; } ` for counting number of 1s, then it did not had any problems with either input. any clues why it would be? (details: http://stackoverflow.com/a/43571822/4361073) – parasrish Apr 24 '17 at 03:16
  • Do you happen to know why to_string() does not work on Codility? I get this error `Compiler output: func.cpp: In function 'int solution(int)': func.cpp:11:33: error: 'std::__cxx11::string {aka class std::__cxx11::basic_string}' has no member named 'to_string'; did you mean 'basic_string'?` How would I use basic_string? – AlphaFoxtrot Aug 19 '20 at 21:10
  • @AlphaFoxtrot I don’t know Codility, but please be sure the language dialect isn’t defaulting to C++03. – Potatoswatter Aug 19 '20 at 21:42
  • @Potatoswatter codility.com is a free to use website for tech recruiters to assign tests and also has exercises for training. I'm using Visual Studio and set it to C++ 14 and Codility asks to write in C++ 14. Codility compiles differently than VS though and asks to write portable, standards-compliant code. So I guess to_string() is not portable and/or standards-compliant? – AlphaFoxtrot Aug 19 '20 at 22:15

Replace:

if((mask&a) >= 1)

with either:

if ((mask & a) != 0)

or:

if (mask & a)

Your problem is that the last bit gives you a negative number, not a positive one.

Jonathan Leffler

I checked your code and couldn't find any error. Here is the code that I used...

#include <cstdlib>
#include <iostream>
#include <string>
using namespace std;

int main()
{
    int a = 1111165117;
    string binary("");
    int mask = 1;
    for (int i = 0; i < 31; i++)
    {
        if ((mask & a) >= 1)
            binary = "1" + binary;
        else
            binary = "0" + binary;
        mask <<= 1;
    }
    cout << binary << endl;
    system("PAUSE");         // optional if not using ideone
    return EXIT_SUCCESS;     // optional if not using ideone
}

The output will be 1001110001010110101111010011101. You can also run this on ideone.

Ravi Kumar

The problem with 1<<=31 has been addressed in another answer. Concerning the code piece for string -> int conversion, you have several options:

  • convert the string to a stream and use operator>>(int&) defined for streams (correction: never mind that >:-) the setbase() stream modifier does not support 2 as an argument)
  • use the standard C function strtol(), which takes a base argument (2 for binary)
  • or if you really want to implement the conversion yourself try the following code:

    int BinaryStringToDecimal(string a) 
    {
        int Rslt = 0;
        int Mask = 1;
        for (int i = a.length()-1; i >= 0; --i, Mask <<= 1) {
            if (a.at(i) != '0') {
                Rslt |= Mask;
            }
        }
        return (Rslt);
    }
    

Note that this code deals with negative numbers differently from your code: in your function, the highest-order bit is taken as the signum. If the leftmost bit in the string argument to your function is not in position 32 (counting from the right), your function may produce incorrect results. The code suggested here has no special signum treatment. But if you get a string of 32 digits with '1' as the leftmost of them, the MSB of the int result will be 1 and the integer will be negative (as it should be).


And why can't you just cast the int to an unsigned int? Then it's very easy to generate the binary string because you don't have to worry about the sign bit. Same goes for converting a binary string to an int: build it as an unsigned int, and then cast it to int:

string DecimalToBinaryString(int a)
{
    unsigned int b = (unsigned int)a;  // C++ has no built-in uint type
    string binary = "";
    unsigned int mask = 0x80000000u;
    while (mask > 0)
    {
        binary += ((b & mask) == 0) ? '0' : '1';
        mask >>= 1;
    }
    cout << binary << endl;
    return binary;
}

And, of course, you can apply the mentioned optimizations, like pre-allocating the string buffer, etc.

Going the other way:

unsigned int b = 0;
for (int i = 0; i < 32; ++i)  // walk the string from MSB (index 0) to LSB
{
    b <<= 1;
    if (a.at(i) == '1')
        b |= 1;
}
int num = (int)b;
Jim Mischel