I'm trying to find or work out the algorithm for converting a signed 64-bit int (two's complement, natch) to the closest-value IEEE double (64-bit), staying within bitwise operations. What I'm looking for is generic "C-like" pseudocode; I'm implementing a toy JVM on a platform that is not C and doesn't have a native int64 type, so I'm operating on 8-byte arrays (the details of that are mercifully outside this scope) and that's the domain the data needs to stay in.
So: input is a big-endian string of 64 bits, signed two's complement. Output is a big-endian string of 64 bits in IEEE double format that represents the original int64 value as closely as possible. In between is some set of masks, shifts, etc.! The algorithm absolutely does not need to be especially clever or optimized; I just want to be able to get to the result and, ideally, understand what the process is.
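For concreteness, here's my rough sketch of what I *think* the steps are, written in C with a native `uint64_t` purely for readability (an assumption for the sketch; on my platform every one of these operations would really happen across the 8-byte array). The rounding step is the part I'm least sure of:

```c
#include <stdint.h>

/* Sketch: bits of a signed 64-bit two's-complement integer in,
 * bits of the nearest IEEE-754 double out. Pure shifts/masks/adds,
 * no floating-point hardware. uint64_t stands in for my byte array. */
uint64_t int64_bits_to_double_bits(uint64_t v) {
    if (v == 0)
        return 0; /* +0.0 */

    uint64_t sign = v & 0x8000000000000000ULL;   /* keep the sign bit as-is */
    uint64_t mag  = sign ? ~v + 1 : v;           /* two's-complement negate;
                                                    INT64_MIN negates to itself
                                                    (2^63), which still works */

    /* Locate the highest set bit: msb = floor(log2(mag)),
       which becomes the unbiased exponent. */
    int msb = 63;
    while (!(mag >> msb))
        msb--;
    uint64_t exponent = (uint64_t)msb + 1023;    /* apply IEEE double bias */

    uint64_t mantissa;
    if (msb <= 52) {
        /* Value is exactly representable: slide the top bit up to
           position 52, then mask off the implicit leading 1. */
        mantissa = (mag << (52 - msb)) & 0x000FFFFFFFFFFFFFULL;
    } else {
        /* msb is 53..63: we lose (msb - 52) low bits and must round
           to nearest, ties to even. */
        int shift        = msb - 52;
        uint64_t rounded = mag >> shift;
        uint64_t rem     = mag & ((1ULL << shift) - 1);
        uint64_t half    = 1ULL << (shift - 1);
        if (rem > half || (rem == half && (rounded & 1)))
            rounded += 1;
        if (rounded >> 53) {     /* rounding carried out of the mantissa, */
            rounded >>= 1;       /* e.g. all-ones: renormalize and bump   */
            exponent += 1;       /* the exponent                          */
        }
        mantissa = rounded & 0x000FFFFFFFFFFFFFULL;
    }

    return sign | (exponent << 52) | mantissa;
}
```

(If that sketch is broadly right, a walkthrough of *why* those steps work, especially the round-to-nearest-even part, is exactly what I'm after.)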
I'm having trouble tracking this down because I suspect it's an unusual need. This answer addresses a parallel question (I think) in x86 SSE, but I don't speak SSE, and my attempts at translation leave me more confused than enlightened.
I'd love for someone to either point me in the right direction for a recipe or, ideally, explain the bitwise math behind it so I actually understand it. Thanks!