
Given the following array (for example):

int arr[6] = {1 , 2 , 3 , 4 , 5, 6};

If I pass it to a function with the following declaration:

int func(void* array, int n);

how can I pass, from func to the following function:

int f(void* element);

the address of some element in the array?

I tried something like f(array + i); (in order to pass &array[i]), but I get the following error:

pointer of type 'void *' used in arithmetic

So, how can I do it?
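
For reference, here is a minimal sketch of the situation (using the names from above); the marked line is the one the compiler rejects:

int f(void* element);

int func(void* array, int n)
{
    int i = 0;
    return f(array + i);   /* diagnostic: pointer of type 'void *' used in arithmetic */
}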

Software_t

2 Answers


Accepting, for whatever reason, that you can't write

int func(int* array, int n);

you need to change the type of array back to something on which pointer arithmetic is valid:

int* real_array = (int*)array;

Then you can use the tractable notation real_array[i] to access elements.
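
A minimal sketch of that approach, assuming func is meant to call f on every element (the question doesn't show func's body, so the loop and the return value here are placeholders):

int f(void* element);

int func(void* array, int n)
{
    int* real_array = (int*)array;   /* pointer arithmetic is valid again */
    for (int i = 0; i < n; ++i)
        f(&real_array[i]);           /* equivalently: f(real_array + i) */
    return 0;
}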

If you want to keep func generic though in the sense that you don't know the type, you'd have to pass an element size along with the array size:

int func(void* array, int/*ToDo - use size_t here too*/ n, size_t element_size)

then you could use

f((char*)array + i * element_size)

although I fear that is merely pushing the problem of resolving the genericity further down the call stack.
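
For concreteness, a sketch of how the pieces fit together (the loop and the accumulated return value are assumptions, since the question doesn't show func's body):

#include <stddef.h>

int f(void* element);

int func(void* array, size_t n, size_t element_size)
{
    int result = 0;
    for (size_t i = 0; i < n; ++i)
    {
        /* char has size 1, so the offset is counted in bytes */
        result += f((char*)array + i * element_size);
    }
    return result;
}

For the array in the question you would call it as func(arr, 6, sizeof arr[0]);.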

Bathsheba

If func needs to be able to work with arrays of different types, then you'll need to add a new size_t argument to func to tell it the size of the array items. Otherwise, something like array[1] is ill-defined: do you mean array + 4 bytes, array + 8 bytes, array + 1 byte, or something else?

If you have an item size parameter, you can cast array to a char* and offset by i * item_size to get a pointer to the ith element.

(Note that your item size parameter should probably be size_t rather than int.)
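
A sketch of that computation, with a hypothetical helper name element_at chosen just for illustration:

#include <stddef.h>

/* Returns a pointer to the i-th item of an array whose items are
   item_size bytes each; casting to char* makes the arithmetic byte-wise. */
static void* element_at(void* array, size_t i, size_t item_size)
{
    return (char*)array + i * item_size;
}

Inside func you would then call f(element_at(array, i, item_size));.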

Daniel Pryden
  • Does the type of this argument have to be `size_t`? Can't it be `int`? – Software_t Jun 28 '18 at 13:59
  • @Software_t: The difference between two `void*` can always be represented by `size_t`, but is not guaranteed to be representable by `int`. So `size_t` is more correct. – Daniel Pryden Jun 28 '18 at 14:00
  • Which types can cause errors if I define it as `int`? (That is, in which cases may it fail if I define it as `int`?) – Software_t Jun 28 '18 at 14:01
  • If the types you're working with are all small then there shouldn't be a difference in practice. But pretty much *any* time you're working with a value that semantically *means* the "size" of something, it should be `size_t`. That's what `size_t` is *for*, and that's why the `sizeof` operator is defined to return `size_t` and not `int`. – Daniel Pryden Jun 28 '18 at 14:02
  • See also https://stackoverflow.com/questions/131803/unsigned-int-vs-size-t for further discussion of the difference between `size_t` and `int`. Note also that `size_t` is guaranteed to be unsigned, and `int` is AFAIK guaranteed to be signed, so that's an obvious difference right off the bat. – Daniel Pryden Jun 28 '18 at 14:03
  • It fails when I pass it an array of my pictures of Earth from space, each 16 GB in size. It fails whenever the size of the array elements can't be represented as an int, so you will probably never see it fail. But do write correct code and use `size_t`. – Goswin von Brederlow Jun 28 '18 at 14:04
  • "int is AFAIK guaranteed to be signed" --> not when used in a bit-field. Use `signed int` to insure sign-ness there. – chux - Reinstate Monica Jun 28 '18 at 14:56