1

EDIT: I asked around a bit, and apparently my mistake was this: we usually edit our 8086 Assembly code in a special debug.exe environment on MS-DOS. This particular environment indeed defaults to hexadecimal numbers, but other assemblers for 8086 default to decimal.


In writing Assembly language (e.g. for Intel's 8086), we can represent numbers either as 3F or 3FH, or as 16 or 16H, because all numbers default to hexadecimal notation.

In my experience there is no real difference between the two representations as far as the assembler is concerned: it happily accepts both, even when they are mixed.

My question is: are there any strict rules on when or when not to append -h/-H after a number?

I can see that the suffix helps prevent the confusion (for beginning Assembly programmers) that arises when a number we usually read as decimal is actually hexadecimal, as in my 16 vs. 16H example, where the 16 is hexadecimal for decimal 22 -- I have been bitten by this several times myself. But is clarity really the only criterion?
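To make the pitfall concrete, here is a minimal sketch in NASM-style syntax, assuming an assembler that (like most) defaults to decimal; the exact literal forms accepted vary by assembler:

```asm
; Minimal sketch, assuming a decimal-default assembler such as NASM
mov ax, 16      ; decimal 16
mov ax, 16h     ; hexadecimal 16h = decimal 22
mov ax, 0x16    ; same value, C-style prefix (accepted by NASM)
mov ax, 3Fh     ; hexadecimal 3Fh = decimal 63
mov ax, 0FFh    ; constants starting with A-F need a leading 0 in many
                ; assemblers so the token isn't read as a label name
```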

Peter Cordes
  • 328,167
  • 45
  • 605
  • 847
  • Related: [How to represent hex value such as FFFFFFBB in x86 assembly programming?](https://stackoverflow.com/q/11733731) – Peter Cordes Oct 28 '18 at 15:56

3 Answers

2

Yes, there are strict rules and they should be mentioned in the documentation of your assembler (usually in a section named "Numeric literals"). To be honest, I've never encountered an assembler which defaults to hex; pretty much all of them default to decimal. The syntax can be all over the map; the most common notations for hex are [0]dddh and 0xddd, but sometimes you can also have h'ddd, $ddd or 16_ddd.
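For illustration, here are the spellings listed above applied to the same constant (decimal 255). This is only a sketch: no single assembler accepts all of these forms, so check your assembler's "Numeric literals" documentation for the ones it supports.

```asm
; Illustrative only -- not all of these are valid in any one assembler
mov al, 0FFh    ; [0]dddh  - MASM/TASM/NASM style, leading 0 required here
mov al, 0xFF    ; 0xddd    - C-style prefix (NASM and others)
mov al, $FF     ; $ddd     - used by some assemblers
mov al, h'FF'   ; h'ddd'   - seen in some microcontroller assemblers
mov al, 16_FF   ; 16_ddd   - base-prefix style used by a few assemblers
```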

Igor Skochinsky
  • 24,629
  • 2
  • 72
  • 109
  • This is a little on a tangent, but what does it mean for a hex value to be denoted with a `C` after the number and before the `h` suffix? For example: `var_1C= dword ptr -1Ch` – olliezhu Dec 12 '14 at 20:51
  • I've read that DOS `debug.exe` defaults to hex for everything. It's obsolete and not very featureful so I wouldn't recommend using it. Related: [How to represent hex value such as FFFFFFBB in x86 assembly programming?](https://stackoverflow.com/q/11733731) summarizes which assembler supports what for some modern assemblers. – Peter Cordes Mar 29 '20 at 06:02
1

I don't believe all assemblers assume all numeric constants are in hex, so it's a good idea not just for clarity, but for portability.

Scott Hunter
  • 48,888
  • 12
  • 60
  • 101
  • I'm only familiar (a bit) with Intel 80x86 and Zilog Z80 assembly languages. But if not all assemblers assume hex (I'm assuming you mean something like "different assemblers for one single processor type"), then why can I even choose to forego the -H? – Ruben Tavernier Jan 11 '12 at 17:03
  • 1
It seems that, for the x86 assembler(s) you are using, the default for numeric constants is to consider them hexadecimal. But I recall (it's been a while!) the default being decimal, and I have seen code posted online making the same assumption. – Scott Hunter Jan 11 '12 at 17:07
  • Thank you for your answer. I would upvote it, but don't have enough reputation yet. I chose Igor's answer because it contained more precise references. – Ruben Tavernier Jan 11 '12 at 21:40
0

In MS-DOS, when you use DEBUG as the assembler for your code, it reads values as hex even if you don't put an H after the value.
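For instance, a DEBUG session along these lines (segment value and exact spacing will differ on your machine) shows the immediate 16 being taken as hex 0016, i.e. decimal 22:

```
C:\>DEBUG
-A 100
1234:0100 MOV AX,16
1234:0103
-U 100 102
1234:0100 B81600        MOV     AX,0016
-Q
```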

Skye
  • 1