1

I've looked for the answer in many places, but they all say the same thing. Whenever someone explains how to convert decimal numbers to binary, they show the technique of repeated division by two.

If we declare a number in a program (e.g. int x = 229), this doesn't make any sense to me: the computer doesn't yet know what this number is, so it can't divide it by two.

From what I understand, when an int is declared, the computer initially treats the digits as simple characters. To get the binary number, the only approach that makes sense to me is something like this:

  • The computer uses the ASCII table to recognize the symbols (2 - 2 - 9)
  • It takes the symbol 9, finds "00111001" (57 in the ASCII table), and gets its real binary value "1001" (9) by subtracting 48 [57 - 48]
  • It takes the tens digit 2, finds "00110010" (50), binary value "10" (2); knowing it is the second symbol from the right, it multiplies it by "1010" (10) and obtains "10100" (20)
  • It sums "10100" and "1001" = 11101 (29)
  • It takes the hundreds digit 2, finds "10" (2); knowing it is the third symbol from the right, it multiplies it by "1100100" (100) and obtains "11001000" (200)
  • It sums "11001000" and "11101" = 11100101 (229)
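A minimal Python sketch of the steps above (ascii_to_int is a hypothetical name; it folds the per-digit multiplications into a running value * 10 + digit, which gives the same result as multiplying each digit by 1, 10, 100 and summing):

```python
def ascii_to_int(s):
    # Hypothetical sketch: accumulate the value digit by digit.
    # value = value * 10 + digit is equivalent to multiplying each
    # digit by its place value (1, 10, 100, ...) and summing.
    value = 0
    for ch in s:
        digit = ord(ch) - 48  # subtract ASCII '0' (48), as in the steps above
        value = value * 10 + digit
    return value

print(bin(ascii_to_int("229")))  # prints 0b11100101
```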

Am I on the right track?

This conversion and its inverse (binary to decimal) would resemble the C functions atoi and itoa, but performed entirely with binary arithmetic, relying on a small knowledge base (the ASCII table and the binary values of 10, 100, 1000, etc.).

I want to clarify that:

  • I have already looked into topics such as floating-point arithmetic and binary-coded decimal (as used in calculators)
  • I know that decimals are only useful for human understanding
  • Ints were chosen in the example for their simplicity

The question is not related to how numbers are stored but rather to how they are interpreted.

Thank you!

peppone
  • From what I understand, when an int is declared, the computer treats the digits as simple characters. This is not correct! Handling type conversions is entirely up to the compiler, not the CPU itself. You declare an int value, then the compiler interprets it as the corresponding binary value, whose bit size depends on the CPU architecture. All the values and code you write are nothing but an ASCII stream to the compiler. The compiler then goes through a few phases to process this stream and turn it into something that makes sense to the CPU, i.e. hex code, a.k.a. object code. – Kozmotronik Nov 22 '20 at 17:39
  • @Kozmotronik, sure, it is only text (a program could even be written in Microsoft Word), but this does not change the meaning of my question: by which method does the compiler transform the characters **229** into `11100101`? – peppone Nov 22 '20 at 23:33
  • Man, it is all about the allocation of memory. Actually there is no such transformation; the compiler just puts the value where it has to be. Consider this example: int a = 15; // No matter the binary transformation. The compiler first translates it to the machine's assembly code, depending on the architecture; something like: MOV a, #15 – Kozmotronik Nov 23 '20 at 06:07
  • @Kozmotronik, it doesn't matter whether it happens in the compiler, the computer, the CPU, or a register, or whether its format is text or decimal: the number 15 has to be converted to binary somewhere. – peppone Nov 23 '20 at 18:03
  • 1
    Perhaps I couldn't give a good example, anyway. Finally, I can only tell you that the one place where the number 15 is converted to binary is when the hex code is deployed to the processor or memory, in other words when it becomes an executable. Since a computer can consist of a variety of processors, the tool that performs this conversion might be a special programming tool, or the compiler itself (this is the case for general-purpose desktop computers). For the rest it's nothing but ASCII. Maybe a professional computer scientist can inform you better than my examples and explanations. Good luck. – Kozmotronik Nov 24 '20 at 09:42

2 Answers

0

The computer uses hexadecimal as an intermediate notation when translating the string into what the machine will understand. So the number 229 from your example is e5 in hex, which is 11100101 in binary. In Python:

hx = hex(229)            # '0xe5'
print(bin(int(hx, 16)))  # parse the hex string back and print it in binary

will give you 0b11100101

b10n1k
  • I don't think there is a list that assigns every possible decimal value its binary/hex counterpart; there is definitely a conversion algorithm. Probably my question is unclear ... – peppone Nov 23 '20 at 18:06
0

After some research I came to these conclusions:

  • The code we write is nothing more than a series of characters, which will be parsed by the compiler and transformed into instructions for the CPU
  • Characters are themselves stored as bits, but there is no need to go deeper into that here.
  • These instructions can be written as a sequence of hex numbers for readability, but we are always talking about a bit sequence.
  • Going down to a lower-level language such as assembly, the point is the same: the text (our assembly code) is converted into machine instructions by the assembler.
  • The CPU itself doesn't contain logic to convert the bits of the character sequence "2-2-9" into 11100101; a conversion must be done first.
  • In a scenario like C code -> ASM -> Machine language, this conversion takes place before the machine language is generated.
  • If no one has implemented a method for this conversion (which has nothing to do with the schoolbook base-10 to base-2 method) and we have no library and no external tools to do it for us, the conversion is done as shown in this answer: Assembler - write binary in assembly from decimal
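Even the multiplication by ten in that approach can be reduced to shifts and adds (10·x = 8x + 2x), so the whole conversion really does use binary arithmetic only. A hypothetical sketch in Python (parse_decimal is an invented name):

```python
def parse_decimal(s):
    # value * 10 computed purely with shifts and adds:
    # (value << 3) + (value << 1) == 8*value + 2*value == 10*value
    value = 0
    for ch in s:
        value = (value << 3) + (value << 1) + (ord(ch) - 48)
    return value

print(bin(parse_decimal("229")))  # prints 0b11100101
```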
peppone