I am writing a hash table implementation that accepts arbitrarily sized keys and custom hash functions (returning uint32_t).
I have one instance of this hash table that uses 16-byte UUIDs as keys. Since UUIDs already have a very wide distribution, I thought the hash function could simply be:
typedef unsigned char UUID[16];
static inline uint32_t uuid_hash_fn(const UUID uuid)
{
    return *((uint32_t*)(uuid + 4)); // skipping the 4 MSB that are constant in my UUIDs
}
Is there anything wrong with this?
Also, is the function, as written, already optimal in the sense that it simply takes enough bytes to fill a uint32_t?
EDIT: following commenters' advice, I'd like to clarify that by "anything wrong with this" I meant: is there a possibility of unexpected behavior with this approach?
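For comparison, I also considered a variant that reads the four bytes through memcpy instead of a pointer cast, on the assumption that this sidesteps any alignment or strict-aliasing concerns (the function name below is just mine for illustration):

```c
#include <stdint.h>
#include <string.h>

typedef unsigned char UUID[16];

/* Copies bytes 4..7 of the UUID into a uint32_t instead of casting the
   pointer. memcpy is always well-defined regardless of alignment or the
   effective type of the buffer, and compilers typically turn this
   fixed-size copy into a single 32-bit load anyway. The result is still
   endianness-dependent, but for a hash value that should not matter. */
static inline uint32_t uuid_hash_fn_memcpy(const UUID uuid)
{
    uint32_t h;
    memcpy(&h, uuid + 4, sizeof h);
    return h;
}
```

Would this variant be necessary here, or is the cast fine in practice?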