
I was given a task to read a decimal number and print its binary equivalent via an "array" in assembly. The code is complete and working; the trouble is that it only accepts numbers 0-99.

Ideally, the program should be able to convert any number up to at least 255 decimal, so the code can be reused in future exercises where I might need to handle different 16-bit values in registers.
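For context, the general way to accept more than two digits is to loop: multiply the running value by 10 and add each new digit. A rough C sketch of that idea (the function name `parse_decimal` is just illustrative, not part of the assembly below):

```c
#include <assert.h>

/* Illustrative sketch, not the assembly itself: extend the running
   value one decimal digit at a time, value = value*10 + digit.
   This is the same MUL-by-10 / ADD pattern the code below applies to
   exactly two digits, but looped so any 16-bit value fits. */
unsigned parse_decimal(const char *s)
{
    unsigned value = 0;
    while (*s >= '0' && *s <= '9') {
        value = value * 10u + (unsigned)(*s - '0'); /* like sub al,30h */
        s++;
    }
    return value;
}
```

Reading keystrokes with INT 21h/AH=01h until Enter (0Dh) and applying this step per digit would remove the 0-99 limit.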

I appreciate all advice given, the code is as follows:

.model small
.stack
.data
                             ;Variables used:

   cad  db 9 dup (' '),'$'   ;cad will hold the string of bits
   var1 db ?                 ;var1 holds each remainder during conversion
   num  db ?                 ;variable for the input number
   aux  db ?                 ;auxiliary variable

   msg db 10,13, "Enter decimal number 0-99: $"   ;bytes after '$' never print, so none follow

.code
.startup

   mov ah,9
   lea dx,msg
   int 21h      ;Shows first message

   mov var1,0   ;Initializes var1 value to 0
   mov ah,01h   ;Int to obtain input
   int 21h      
   sub al,30h   ; Ascii code value to real decimal value conversion (subtracts 48d)
   mov num,al   ;Input number from AL is moved to variable num

   mov al,num   
   mov bl,10    ;10 is stored in bl
   mul bl       ;Number to convert is multiplied by 10
   mov aux,al   ;aux variable is assigned the result

   mov var1,0   ;We obtain the second user number input
   mov ah,01h
   int 21h      
   sub al,30h   


   add aux,al   ;Second digit is added to the first digit times 10
   mov bl,aux
   mov num,bl   ;Combined result is stored back in num

   mov ah,02h   ;Prints '=' sign symbol after decimal input 
   mov dl,'='
   int 21h

   mov SI,6     ;SI indexes cad from right to left; the loop divides by 2
   L1:          ;L1  label

      xor ah,ah     ;Clear AH (the remainder lands here)
      mov al,num
      mov bl,2
      div bl        ;AL = quotient, AH = remainder
      mov var1,ah
      mov num,al

      mov dl,var1
      add dl,30h    ;Remainder bit to ASCII

      mov cad[SI],dl ;Store the bit in the string, right to left

      dec SI       ;Move left one position in cad
      jne L1       ;DEC sets ZF, so this loops until SI reaches 0
                   ;(six iterations, enough for any value 0-99; the
                   ;original cmp num,1 had no effect because DEC
                   ;overwrote its flags, and the code after je exit
                   ;was unreachable)

   exit:          ;exit label

      mov dl,num    ;Final quotient (0 or 1) is the top bit
      add dl,30h

      mov cad[SI],dl ;Stored at cad[0]

      mov ah,09h
      lea dx,cad    ;Print the binary string
      int 21h

      mov ah,4ch
      int 21h
      end  
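For clarity, here is a hedged C sketch of what the conversion loop above does: divide by 2, keep each remainder as an ASCII digit, and fill the buffer from right to left the way `cad[SI]` is written with SI counting down (the buffer layout here is an assumption matching `cad`'s 9 bytes):

```c
#include <assert.h>
#include <string.h>

/* Sketch of the loop above: the remainder of each DIV by 2 (AH) is one
   bit, written right to left like cad[SI] with SI counting down.
   out has 8 digit slots plus a terminator, mirroring cad's 9 bytes. */
void to_binary8(unsigned char num, char out[9])
{
    memset(out, ' ', 8);         /* cad db 9 dup (' ') */
    out[8] = '\0';               /* '$' in the DOS version */
    int si = 7;
    do {
        out[si--] = (char)('0' + num % 2); /* remainder -> ASCII bit */
        num /= 2;                          /* keep the quotient      */
    } while (num != 0);
}
```

Note the loop runs while the quotient is nonzero, so an 8-bit buffer covers the full 0-255 range the question asks about.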
  • Does it need to run on original 8086, or can you use 386 features in 16-bit mode, like `lea ax, [eax + eax*4]` to multiply by 5, or `imul ax, 10`? Also, `div` is a terribly slow way to divide by 2. Use a right shift. – Peter Cordes Sep 28 '18 at 03:10
  • Can you use SSE2 to convert a 16-bit binary integer into a 16-byte ASCII string in a few instructions? [How to efficiently convert an 8-bit bitmap to array of 0/1 integers with x86 SIMD](https://stackoverflow.com/a/52105856) (Yes, SIMD instructions do work in 16-bit mode, but only the SSE encoding not AVX.) – Peter Cordes Sep 28 '18 at 03:14
  • @PeterCordes First off, thank you for your response. For class purposes we're using only instructions of the original 8086. The professor has hinted we'll be making use of shift and rotation instructions to manipulate bits in future tasks, but I wouldn't know too well how to handle this yet. – J Gustavo Munoz Sep 28 '18 at 04:23
  • Ok, well there's still a huge amount of room to optimize this for 8086. There's no need to store to memory and reload, just keep your data in registers. `shr bx, 1` puts the bit shifted out into CF, where you can use `mov al, '0'` / `adc al, 0` to get an ASCII 0/1 in `al`. – Peter Cordes Sep 28 '18 at 04:36
  • 1
    And BTW, you mention wanting 16-bit, but you say "up to decimal 255". That's 8-bit. Anyway, do you want to optimize for machine-code size, or for speed? And if speed, for original 8086 (where code-fetch was a major bottleneck) or for modern x86 like Skylake or Ryzen in 16-bit mode? (Everything I mentioned so far is pure win for both all CPUs, but there can be tradeoffs in the fine details like whether `mul` by a constant 10 is worth it vs. shift-and-add (`x*8 + x*2`). Modern x86 has very good multiply performance, but 8086 didn't.) – Peter Cordes Sep 28 '18 at 04:43
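To illustrate the shift idea from the comments: a right shift drops the low bit out (into CF on x86), so '0' plus that bit is the ASCII digit with no DIV at all. A rough C equivalent for a full 16-bit value (the function name is made up for illustration):

```c
#include <assert.h>
#include <string.h>

/* Sketch of the shift-based approach suggested above: test each bit
   from MSB to LSB, so a 16-bit value always produces 16 ASCII digits.
   In 8086 terms this would be a shift plus ADC instead of DIV. */
void to_binary16(unsigned value, char out[17])
{
    for (int i = 0; i < 16; i++)
        out[i] = (char)('0' + ((value >> (15 - i)) & 1u));
    out[16] = '\0';
}
```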
