In various contexts, such as bioinformatics, computations on byte-sized integers are sufficient. For best performance, many processor architectures offer SIMD instruction sets (e.g. MMX, SSE, AVX) that partition registers into byte-, halfword-, and word-sized components, then perform arithmetic, logical, and shift operations individually on corresponding components.
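For illustration, this is what such hardware support looks like on x86 with SSE2, where the byte-wise signed comparison is a single instruction (vsetles16 is a hypothetical wrapper name, mirroring the semantics of vsetles4 below; SSE2 only provides "greater than", so the complement is formed):

#include <emmintrin.h>

/* byte-wise a <= b, sixteen lanes at once: a <= b  <=>  !(a > b) */
__m128i vsetles16 (__m128i a, __m128i b)
{
    __m128i gt = _mm_cmpgt_epi8 (a, b);              // 0xff where a > b
    return _mm_andnot_si128 (gt, _mm_set1_epi8 (1)); // 0x01 where a <= b
}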
However, some architectures do not offer such SIMD instructions, so they must be emulated, which often requires a significant amount of bit-twiddling. At the moment, I am looking at SIMD comparisons, in particular the parallel comparison of signed byte-sized integers. I have a solution that I think is quite efficient using portable C code (see the function vsetles4 below). It is based on an observation made in 2000 by Peter Montgomery in a newsgroup posting, that (A + B) / 2 = (A AND B) + (A XOR B) / 2 holds without overflow in intermediate computation.
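As a standalone illustration of Montgomery's identity (avg_floor is a hypothetical helper, not part of the code in question): the AND term captures the bits common to both addends, while the halved XOR term contributes half of the differing bits, so no intermediate result can exceed 32 bits:

#include <stdint.h>

/* floor ((a + b) / 2) without overflow in intermediate computation */
uint32_t avg_floor (uint32_t a, uint32_t b)
{
    return (a & b) + ((a ^ b) >> 1);
}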
Can this particular emulation code (function vsetles4) be accelerated further? To first order, any solution with a lower count of basic operations would qualify. I am looking for solutions in portable ISO C99, without the use of machine-specific intrinsics. Most architectures support ANDN (a & ~b), so this may be assumed to be available as a single operation in terms of efficiency.
#include <stdint.h>
/*
   vsetles4 treats its inputs as arrays of bytes, each of which comprises
   a signed integer in [-128,127]. Compute, in byte-wise fashion, between
   corresponding bytes of 'a' and 'b', the boolean predicate "less than
   or equal" as a value in [0,1] into the corresponding byte of the result.
*/
/* reference implementation */
uint32_t vsetles4_ref (uint32_t a, uint32_t b)
{
    uint8_t a0 = (uint8_t)((a >>  0) & 0xff);
    uint8_t a1 = (uint8_t)((a >>  8) & 0xff);
    uint8_t a2 = (uint8_t)((a >> 16) & 0xff);
    uint8_t a3 = (uint8_t)((a >> 24) & 0xff);
    uint8_t b0 = (uint8_t)((b >>  0) & 0xff);
    uint8_t b1 = (uint8_t)((b >>  8) & 0xff);
    uint8_t b2 = (uint8_t)((b >> 16) & 0xff);
    uint8_t b3 = (uint8_t)((b >> 24) & 0xff);
    int p0 = (int32_t)(int8_t)a0 <= (int32_t)(int8_t)b0;
    int p1 = (int32_t)(int8_t)a1 <= (int32_t)(int8_t)b1;
    int p2 = (int32_t)(int8_t)a2 <= (int32_t)(int8_t)b2;
    int p3 = (int32_t)(int8_t)a3 <= (int32_t)(int8_t)b3;
    return (((uint32_t)p3 << 24) | ((uint32_t)p2 << 16) |
            ((uint32_t)p1 <<  8) | ((uint32_t)p0 <<  0));
}
/* Optimized implementation:

   a <= b  <=>  a - b <= 0  <=>  a + ~b + 1 <= 0  <=>  a + ~b < 0
           <=>  (a + ~b) / 2 < 0

   Compute avg (a, ~b) without overflow, rounding towards -INF; then
   lteq (a, b) = sign bit of the result. In other words: compute 'lteq' as

       (a & ~b) + arithmetic_right_shift (a ^ ~b, 1)

   giving the desired predicate in the MSB of each byte.
*/
uint32_t vsetles4 (uint32_t a, uint32_t b)
{
    uint32_t m, s, t, nb;
    nb = ~b;            // ~b
    s = a & nb;         // a & ~b
    t = a ^ nb;         // a ^ ~b
    m = t & 0xfefefefe; // don't cross byte boundaries during shift
    m = m >> 1;         // logical portion of arithmetic right shift
    s = s + m;          // start (a & ~b) + arithmetic_right_shift (a ^ ~b, 1)
    s = s ^ t;          // complete arithmetic right shift and addition
    s = s & 0x80808080; // MSB of each byte now contains predicate
    t = s >> 7;         // result is byte-wise predicate in [0,1]
    return t;
}
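For completeness, a simple exhaustive test (a hypothetical harness, assumed to be appended to the code above) that replicates each of the 65536 possible byte pairs across all four lanes and checks the optimized version against the reference; it exercises every byte pair, though not every cross-lane combination:

#include <stdio.h>
#include <stdlib.h>
#include <inttypes.h>

int main (void)
{
    for (uint32_t i = 0; i < 256; i++) {
        for (uint32_t j = 0; j < 256; j++) {
            uint32_t a = i * 0x01010101u; // same byte in all four lanes
            uint32_t b = j * 0x01010101u;
            if (vsetles4 (a, b) != vsetles4_ref (a, b)) {
                printf ("mismatch: a=%08" PRIx32 " b=%08" PRIx32 "\n", a, b);
                return EXIT_FAILURE;
            }
        }
    }
    printf ("vsetles4 matches vsetles4_ref\n");
    return EXIT_SUCCESS;
}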