The emulator is wrong, and the source code shows a totally wrong implementation: it sets OF with the exact same condition as CF. Those flags are separate for a reason! It also sets SF any time CF is set, even when bit 15 of the 16-bit result is zero.
It's open source: https://github.com/YJDoc2/8086-Emulator-Web is a front-end for the 8086-Emulator library, written in Rust (compiled to wasm) by the same folks who wrote the front-end.
It's great that an open-source emulator with single-step debugging and a nice GUI exists, but if they got this very basic thing wrong, who knows what other bugs might exist? I wouldn't trust it or recommend it to anyone until it passes some stress-tests that try to check all the results of every instruction for many input values. I've heard of such tests for various 8-bit ISAs, and I assume they exist for 8086.
I had a look in their 8086-Emulator library, and luckily found the emulation code for add in the first file that seemed likely, src/lib/instructions/arithmetic.rs.
The code sets both CF and OF according to the same condition: zero-extend the inputs to 32-bit, add, then check for greater than 0xffff unsigned. That works for carry-out, but it fails to detect signed overflow in cases like 0x7fff + 1, and it false-positives when adding two small negative numbers like you are.
pub fn word_add(vm: &mut VM, op1: u16, op2: u16) -> u16 {
    let res = op1 as u32 + op2 as u32; // 32-bit sum of zero-extended inputs
    set_all_flags(
        vm,
        FlagsToSet {
            zero: res == 0,                     // also a BUG: misses 0x8000 + 0x8000, where the 32-bit sum is 0x10000 but the 16-bit result is zero
            overflow: res > u16::MAX as u32,    // BUG: same condition as CF
            parity: has_even_parity(res as u8), // correct, PF is set from the low 8 bits
            sign: res >= 1 << 15,               // BUG: higher bits can satisfy this
            carry: res > u16::MAX as u32,       // correct
            auxillary: ((op1 & 0xF) + (op2 & 0xF)) > 0xF, // correct, carry-out from the low nibble. Typo for "auxiliary", though.
        },
    );
    res as u16
}
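To make the OF bug concrete, here's a small standalone reproduction (my own sketch, not the library's code) comparing their condition against the real 8086 definition of signed overflow:

```rust
// Hedged reproduction: compare the emulator's OF condition against the real
// 8086 definition (signed 16-bit overflow) on the two failure modes.
fn of_conditions(a: u16, b: u16) -> (bool, bool) {
    let res = a as u32 + b as u32;           // their 32-bit zero-extended sum
    let emulator_of = res > u16::MAX as u32; // their condition -- identical to CF
    let real_of = (a as i16).overflowing_add(b as i16).1; // true signed overflow
    (emulator_of, real_of)
}

fn main() {
    // 0x7fff + 1 overflows signed but not unsigned;
    // 0xfffe + 0xfffd (-2 + -3) carries out but does not overflow signed.
    for (a, b) in [(0x7fff_u16, 1), (0xfffe, 0xfffd)] {
        let (emu, real) = of_conditions(a, b);
        println!("{a:#06x} + {b:#06x}: emulator OF = {emu}, real OF = {real}");
    }
}
```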
Rust has so many ways they could have gotten this correct, like checking (a as i16).wrapping_add(b as i16) as i32 == a as i16 as i32 + b as i16 as i32 (equal means no signed overflow). Or any other way of sign-extending both sides to 32-bit before adding to get the true value, then comparing with a wrapped 16-bit add, or with the truncation of the true value back to 16-bit.
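As a sketch of that widening approach (the function name is mine, not the library's):

```rust
// Sketch: detect signed 16-bit overflow by doing the add in 32 bits.
fn add_overflows(a: u16, b: u16) -> bool {
    let wide = a as i16 as i32 + b as i16 as i32; // true signed sum, can't overflow i32
    wide != wide as i16 as i32                    // does it survive narrowing to 16 bits?
}
```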
Or (a as i16).checked_add(b as i16) and check that the result isn't None, or (a as i16).overflowing_add(b as i16) to get separate i16 and bool outputs (https://doc.rust-lang.org/std/primitive.i16.html#method.overflowing_add). That i16 output can easily be used to get SF more simply, as signed_sum < 0.
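Putting those pieces together, a corrected flag computation could look like this standalone sketch (made-up names, not a patch against their FlagsToSet code):

```rust
// Sketch of correct CF/OF/SF/ZF for a 16-bit add (not the library's code).
fn word_add_flags(op1: u16, op2: u16) -> (u16, bool, bool, bool, bool) {
    let (res, carry) = op1.overflowing_add(op2); // unsigned wrap -> result and CF
    let (signed_sum, overflow) = (op1 as i16).overflowing_add(op2 as i16); // -> OF
    let sign = signed_sum < 0; // SF from the actual top bit of the 16-bit result
    let zero = res == 0;       // ZF on the wrapped 16-bit result, not the wide sum
    (res, carry, overflow, sign, zero)
}
```

Note that 0x8000 + 0x8000 now comes out right: result 0, CF=1, OF=1, SF=0, ZF=1.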
See also http://teaching.idallen.com/dat2343/10f/notes/040_overflow.txt re: more ways of understanding carry-out (unsigned overflow) vs. signed overflow in addition. For addition, signed overflow can only happen when both inputs have the same sign and the result has the opposite sign. Or an easy way that always works for add and subtract is to sign-extend the inputs, do a wider operation, and check that the full result can be narrowed back to 8 or 16 bits without changing the value (e.g. res as i16 as i32 to redo sign-extension from 16-bit, giving a 32-bit value to compare against the full result).
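That same-sign rule can also be checked branchlessly on the raw 16-bit values with the classic XOR trick (again my sketch, not their code):

```rust
// Signed overflow iff both inputs have the same sign and the result's sign
// differs: bit 15 of (op1 ^ res) & (op2 ^ res) is set exactly in that case.
fn add_overflow_bits(op1: u16, op2: u16) -> bool {
    let res = op1.wrapping_add(op2);
    (op1 ^ res) & (op2 ^ res) & 0x8000 != 0
}
```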
The SF condition is wrong, too. They'd set SF on 0x8000 + 0x8000, which wraps back to zero, because they're comparing 0x10000u >= 0x8000u. They need to test just that bit, like (res >> 15) & 1 != 0, not any higher bits that might hold carry-out.
Their hack of using wider arithmetic does make it easy for them to implement even adc with correct (I think) carry-flag handling, but the same wrong handling of OF.
(Emulating adc using only operations of the same width as the adc is quite hard, because the usual carry-out condition of a+b < a unsigned doesn't work as a+b + CF < a: adding 0xFFFFu + 1u to something is the same as adding zero if you're using wrapping 16-bit addition, so you'd have to check each addition step separately. Unless you have higher bits to hold that carry, or language or hardware support, which Rust exposes with carrying_add(self, rhs: i16, carry: bool) -> (i16, bool) in a new API that's still only in the nightly builds. That for i64 would totally solve the problem.)
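For completeness, the wider-arithmetic trick done right for adc might look like this sketch (assumed names, not the library's implementation):

```rust
// Sketch: 16-bit ADC emulated with 32-bit arithmetic, with correct CF and OF.
fn word_adc(op1: u16, op2: u16, carry_in: bool) -> (u16, bool, bool) {
    let wide = op1 as u32 + op2 as u32 + carry_in as u32; // zero-extend -> carry-out
    let cf = wide > u16::MAX as u32;
    let signed = op1 as i16 as i32 + op2 as i16 as i32 + carry_in as i32; // sign-extend
    let of = signed != signed as i16 as i32; // true sum doesn't survive narrowing?
    (wide as u16, cf, of)
}
```

E.g. 0xFFFF + 0 with carry-in gives result 0 with CF=1 but OF=0 (-1 + 1 = 0 is fine signed), while 0x7FFF + 0 with carry-in gives CF=0 but OF=1.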