1

The below function is intended to convert its parameter, an integer, from decimal to octal.

#include <string>

std::string dec_to_oct(int num) {
    std::string output;
    for(int i=10; i>=0; --i) {
        output += std::to_string( (num >> i*3) & 0b111 );
    }
    return output;
}

It works for any positive input. However, for num = -1 it returns 77777777777 when it should return 37777777777, so the first digit needs to be a 3 instead of a 7. Why is this happening? The function appears to be incorrect for all negative input. How can I adjust the algorithm so that it returns the correct result for negative numbers?

Note: this is a CS assignment so I'd appreciate hints/tips.

Dando18
  • 622
  • 9
  • 22
  • Wouldn't you prefer to have `77777777777`? `37777777777` doesn't mean -1 in octal. – wally Sep 01 '17 at 20:30
  • @rex With 32-bit integers, it does. `77777777777` indicates 33 one bits, while `37777777777` correctly indicates 32 one bits. – Daniel H Sep 01 '17 at 20:32
  • @rex not sure, but the assignment says `dec_to_oct(-1)` should return `37777777777` and `std::oct << -1` returns the same thing. – Dando18 Sep 01 '17 at 20:32
  • Also take a look at the [std::oct](http://en.cppreference.com/w/cpp/io/manip/hex) stream manipulator. – Ron Sep 01 '17 at 20:43
  • 4
    The input to this function is not in decimal. It's an `int`. `std::cout << num` would print decimal by default, but that implicitly involves a conversion to decimal. – user2357112 Sep 01 '17 at 20:49

2 Answers

2

This is because right-shifting a signed integer is an arithmetic shift, which preserves the sign of the number (the sign bit is copied into the vacated high bits). To overcome this, cast the input integer to the equivalent unsigned type first, so the shift is a logical shift that fills with zeros.

(((unsigned int)num) >> 3*i) & 7


Going further, you can make the function a template and reinterpret the input through a uint8_t*, using sizeof to calculate the number of octal digits (as suggested by DanielH). However, that will be a bit more involved, as the bits for a given digit may straddle two bytes.
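Plugging the cast into the original function gives a minimal corrected sketch (keeping the fixed 11-digit output, and using `std::int32_t` to pin the width this answer assumes):

```cpp
#include <cstdint>
#include <string>

// Convert a 32-bit integer to its 11-digit octal representation.
// Viewing the bit pattern as unsigned makes the right shift logical,
// so the sign bit is not smeared across the high digits.
std::string dec_to_oct(std::int32_t num) {
    std::uint32_t bits = static_cast<std::uint32_t>(num);
    std::string output;
    for (int i = 10; i >= 0; --i) {
        output += std::to_string((bits >> (i * 3)) & 0b111);
    }
    return output;
}
```

With this change `dec_to_oct(-1)` yields `37777777777`: the top digit reads only the 2 remaining high bits (`11` = 3) instead of sign-extended ones.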

meowgoesthedog
  • 14,670
  • 4
  • 27
  • 40
  • 1
    I’d go further and make the parameter be unsigned. You should also use `sizeof(num) * CHAR_BIT / 3` (with a correction in case that’s divisible by 3) as the top of the loop bounds, because on some systems `int` can be 64 bits or even stranger. – Daniel H Sep 01 '17 at 20:33
  • @DanielH I know the system will be armv8 with 4 byte integers, but I can see how that would help in the future. Also if I make the loop parameter unsigned, then the for-loop will run forever. – Dando18 Sep 01 '17 at 20:36
  • @Dando18 I didn’t mean to make the loop variable `unsigned` (although if you wanted to you could use `for (unsigned int i = 11; i-- > 0; )`); I meant to make `num` unsigned (`std::string dec_to_oct(unsigned int num)`). – Daniel H Sep 01 '17 at 20:40
  • @Dando18 because after the loop for `i=0`, it will become the maximum value of the unsigned type instead of -1, thus the loop condition is always satisfied. You should reverse the loop, and *pre-pend* (instead of append) the new character: `output = [string] + output` – meowgoesthedog Sep 01 '17 at 20:40
  • I understand now. Thanks for the answer! – Dando18 Sep 01 '17 at 20:41
  • @DanielH maybe `(sizeof(num) * 8 + 2) / 3` to round up in case the bit length is 1 or 2 bits larger than a multiple of 3. For example a 64-bit integer needs 22 octal digits instead of 21 (need to include the MSB) – meowgoesthedog Sep 01 '17 at 20:56
  • @meowgoesthedog Other than `CHAR_BIT` not being guaranteed to be 8, that looks good. – Daniel H Sep 01 '17 at 21:00
  • @DanielH I guess `uint8_t` then haha – meowgoesthedog Sep 01 '17 at 21:49
  • @meowgoesthedog Since `sizeof(char)` is guaranteed to be 1, `sizeof(num) * CHAR_BIT` is guaranteed to be the number of bits in `num`. – Daniel H Sep 01 '17 at 22:05
  • @DanielH so you mean 1 byte is not guaranteed to be 8 bits? Or `sizeof` is in terms of `char` sizes? – meowgoesthedog Sep 01 '17 at 22:27
  • 1
    @meowgoesthedog [One byte is only guaranteed to be *at least* 8 bits](https://stackoverflow.com/questions/5516044/system-where-1-byte-8-bit), but (the way C and C++ define things) a byte is whatever the size of `char` is. On any general-purpose system you’re likely to encounter, it’s 8 bits (unless they decide to switch to larger bytes in some future hardware), but some obscure systems have larger bytes, and anyway `CHAR_BIT` is better than having magic numbers. – Daniel H Sep 01 '17 at 23:05
  • @DanielH wow OK. I sure learned something today. So much for a *standard* lol – meowgoesthedog Sep 01 '17 at 23:13
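Combining the suggestions from this thread (an `unsigned` parameter, a `CHAR_BIT`-based digit count rounded up, and counting up while prepending so an unsigned loop variable never underflows), one possible sketch:

```cpp
#include <climits>
#include <string>

// Octal conversion whose digit count is derived from the actual
// width of unsigned int, rounded up so the partial top digit
// (e.g. the leading '3' of 32-bit -1) is included.
std::string dec_to_oct(unsigned int num) {
    const unsigned int digits = (sizeof(num) * CHAR_BIT + 2) / 3;
    std::string output;
    for (unsigned int i = 0; i < digits; ++i) {
        // Prepend, so the least significant digit ends up last.
        output = std::to_string((num >> (i * 3)) & 0b111) + output;
    }
    return output;
}
```

On a platform with 32-bit `unsigned int` this produces 11 digits; on a 64-bit `unsigned int` it would produce 22, per the rounding discussion above.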
0

Copied from the documentation for `std::oct`:

ios_base& oct (ios_base& str);

Use octal base

Sets the basefield format flag for the str stream to oct.

Example

// modify basefield
#include <iostream>     // std::cout, std::dec, std::hex, std::oct

int main () {
  int n = 70;
  std::cout << std::dec << n << '\n';
  std::cout << std::hex << n << '\n';
  std::cout << std::oct << n << '\n';
  return 0;
}

Output:

70
46
106

So, bottom line: you are reinventing the wheel.

Marek R
  • 32,568
  • 6
  • 55
  • 140
  • I'm aware of these, however this was for a school assignment that required the use of bitwise operations. – Dando18 Sep 01 '17 at 21:52