This site needs a way to allow anonymous followups in addition to anonymous answers.
Why, more than once, do I see this insane assertion that an "index" must be in units of 1 byte? It's the complete opposite of convention. An "index" is usually symbolic: a logical position whose physical byte offset is determined by the size of the elements in the array (or vector, which may not even have the physical layout of an array, but then memcpy() is irrelevant too, of course).
So, the element at index 5 in an array (the sixth element, counting from zero) has:
- If the elements are char, a byte offset of 5.
- If the elements are short (on x86), a byte offset of 10.
- If the elements are int (on x86), a byte offset of 20.
- If the elements are some large 48-byte object, a byte offset of 240.
Which access style is correct for a given situation is a side point. The important part is that you understand the difference, choose one, and make the code correct.
On the meaning of words, I would much rather read:
void* memcpy_offset(void *s1, const void *s2, size_t offset, size_t n);
than:
void* memcpy_index(void *s1, const void *s2, size_t index, size_t n);
I find the idea that a completely generic void * could have an "index" to be misleading. (While we're here, "dest" and "source" or "in" and "out" would be much less ambiguous than "s1" and "s2". Code doesn't need as many comments when you pick self-explanatory variable names.)