
I know that everything is stored as 0s and 1s in computer memory. To be specific, let's take the example of an integer. We assign n = 5, and its binary equivalent would be 101 with 29 leading 0s. It's easy for us to find the binary by repeatedly computing n % 2 and n / 2. But my question is: how does a computer do this conversion, since the computer cannot do any operation on the given decimal number? If anyone is thinking about writing a program to convert decimal to binary, there too the input number is first stored as binary before any operations are done. So, coming down to my question: how does a computer find the binary of a number without doing any arithmetic operations? I am highly curious to know about this process.

Correct me if I am wrong somewhere in my thinking process.

devi_D
    `5` is how the computer is *showing* you something that is represented as `00000101` (if you ask it to). Numbers are numbers, and all of the base representations are equivalent. – Eugene Sh. Jan 28 '21 at 20:50
  • (1) How did you convert the string "5" to binary? How would you convert the string "50" to binary (hint 5 * 10)? Computer does the same thing. – Richard Critten Jan 28 '21 at 20:51
  • There's no _"conversion"_?!? I have no clue what you're talking about. – πάντα ῥεῖ Jan 28 '21 at 20:52
  • ASCII codes are 1-byte binary integers that computers can manipulate. For a specific x86 example, see [NASM Assembly convert input to integer?](https://stackoverflow.com/a/49548057) (this question was originally tagged [assembly]. In assembly, everything, even code, is just bytes in memory (or registers).) – Peter Cordes Jan 28 '21 at 21:21
  • And BTW, one large paragraph with every other sentence bolded doesn't really help readability. Too much is highlighted for anything to stand out, beyond the first bolded sentence. – Peter Cordes Jan 29 '21 at 01:08

2 Answers


A decimal number, as you type it into the computer, is stored as a series of characters, each of which is one of the characters '0' through '9'. In the popular ASCII character set, these have the character codes 48 to 57, and the character codes themselves are stored in binary. For example, the number 1234, as you type it in, is stored as an array of the four numbers

49 50 51 52

To convert such an array to an integer, first, each character code is converted to the digit it represents. With ASCII, this is achieved by subtracting 48.

1 2 3 4

Then, each digit is multiplied by the power of 10 corresponding to its place value.

1000 200 30 4

Finally, the numbers are summed up to obtain the desired number.

1234

The last two steps are usually performed at once using a Horner scheme.

At no point does the computer need to perform arithmetic on anything but binary numbers to perform this conversion.

Historically, there have also been other methods to perform this sort of conversion. For example, many historical computers allowed numbers to be stored in decimal as BCD (binary coded decimal), where each decimal digit was stored in four bits.

Computers could then convert this BCD to binary and back using variants of the double dabble algorithm. Modern x86 processors still contain this circuitry as a part of the `fbld` and `fbstp` instructions, although it remains unused by mainstream software.

fuz

Decimal is a text representation of a number. And text is just a sequence of numbers to a computer.

So when you find 24 in a program, this is really two numbers stored on disk. Specifically, they are fifty and fifty-two.[1] Somewhere in the compiler is a function that will perform something ultimately equivalent to

(50 - 48) * 10 + (52 - 48)

This will produce the number twenty-four. Not the text which is the decimal representation of the number, but the actual number twenty-four. The computer can work with that as a number instead of a string.


  1. Assuming an ASCII-based encoding
ikegami