
So I've been searching Stack Overflow for code that can find the size of an array through a pointer to it. I couldn't find any.

#include <iostream>

int main() {
    int array[] = {6, 3, 4, 6, 2};
    int *sizes = array;
    std::cout << sizeof(sizes); // output is 8
}

Using sizeof doesn't work. Can anyone suggest a good solution? Thanks a lot!

**EDIT:**

I want to find the size of the array using the pointer `sizes`. I know how to find the size using the `array` variable.

Manny Lim
  • That's the size of the pointer, not what it points to. There's no way to know the size of the array the pointer is pointing to, unless it contains some known terminator value (like with C strings) or otherwise encodes it in its data. Otherwise, you just have to "remember" it. – François Andrieux Sep 29 '18 at 01:25
  • [What is array decaying?](https://stackoverflow.com/questions/1461432/what-is-array-decaying) – crayzeewulf Sep 29 '18 at 01:25
  • *So I've been searching on Stackoverflow for all the code that can find the size of a pointer to an array* -- What exactly are you looking for? How many elements there are in the array? – PaulMcKenzie Sep 29 '18 at 01:25
  • Good solution: Don't use raw arrays. Use `std::array` or `std::vector`, which actually carry the size information. – aschepler Sep 29 '18 at 01:26
  • Once an array has decayed to a pointer to its first element, that pointer to the first element is all you have. You can get the size of *the pointer* or of the *first element*, but you no longer have the size of the whole array. – Some programmer dude Sep 29 '18 at 01:28
  • There's no way to find the size of the array using the pointer? – Manny Lim Sep 29 '18 at 01:30
  • Why do you think the pointer to the array isn't 8 bytes? I strongly suspect that all data pointers on your platform are 8 bytes and so 8 is the size of the pointer to the array. – David Schwartz Sep 29 '18 at 02:11
  • @MannyLim The pointer is just the memory address of the first byte of the first elements of the array. The only size information it would have is in its type, and it's a pointer to a single `int`. It doesn't know how many contiguous integers happen to be in memory starting at that address. Since this is C++, you should consider using more appropriate types that have the behavior you want. (As @aschepler suggested.) – David Schwartz Sep 29 '18 at 02:14
  • ***Can anyone suggest a good solution?*** The answer is you can't do what you want. A pointer does not contain any size information beyond the size of the pointer itself or the size of the first element. So any solution would involve changing the problem by using something different (perhaps a container from the standard library instead of a pointer, or perhaps putting the size in a variable). – drescherjm Sep 29 '18 at 12:17

3 Answers


You can read the C++ documentation for `std::array::size`: http://www.cplusplus.com/reference/array/array/size/

For example:

// array::size
#include <iostream>
#include <array>

int main ()
{
  std::array<int,5> myints;
  std::cout << "size of myints: " << myints.size() << std::endl;
  std::cout << "sizeof(myints): " << sizeof(myints) << std::endl;

  return 0;
}

If you want a C-level answer, why did you tag this question as c++?

#include <iostream>

int main() {
    int array[] = {6, 3, 4, 6, 2};
    std::cout << sizeof(array) / sizeof(int);
    return 0;
}

It may not be a satisfying answer, but it's impossible to find an array's size using only a pointer to it. Unlike an array, a pointer can point to any type of variable with the right cast; it carries no information about how much memory is allocated at the address it holds. That's why sizeof(pointer) always yields 8 bytes on a 64-bit OS architecture or 4 bytes on a 32-bit OS architecture.

Read about the differences between pointers and arrays in C.

Coral Kashri

Within the scope that array was declared in, sizeof(array) is the number of bytes in the array.¹

If you want the number of elements in array, that’s (sizeof(array)/sizeof(array[0])).

Since sizes is declared as an int*, sizeof(sizes) is the size of a pointer. That will be 8 for a 64-bit program, 4 for a 32-bit program, or some other size on an unusual architecture.

There is one other wrinkle: if you pass array to a function, such as:

#include <cassert>
#include <cstddef>

int* reverse_array( int a[], const size_t n )
{
  assert( sizeof(a) == sizeof(int*) ); // a is really just an int* here
  // ...
}

Then the array parameter, a, automatically decays to a pointer, and the compiler forgets the array's actual size. This is for backward-compatibility with C.

To use an array within another function, you must pass the size as a separate parameter (in this case n), or use a type such as std::array<int, N>, std::vector<int>, or a struct. The latter is what Bjarne Stroustrup’s C++ guidelines recommend, although, if you use an STL template in the ABI of a library, you are introducing a dependency on a particular implementation of the STL.

¹ Since this community loves language-lawyering: some historical C compilers measured sizes in increments other than bytes. A C++ compiler might hypothetically make char more than 8 bits wide (although not less!) and claim to be technically conforming to the standard. You don’t need to worry about that possibility right now. Seriously, you don’t.

Davislor
  • Not “claim to be technically conforming”. Such an implementation conforms to the standard. Full stop. The requirements were written that way because not all the world is 8-bit bytes. Granted, there aren’t many 9-bit hardware systems any more; on the other hand, there are now some systems with 32-bit `char`s. – Pete Becker Sep 29 '18 at 03:13
  • @PeteBecker Exactly. Tens if not hundreds of thousands of programs would break on that compiler, including basically all programs that have binary input or output or exchange data over a network, but it could advertise itself as technically conformant to the letter of the Standard. For what that’s worth. – Davislor Sep 29 '18 at 03:21
  • @PeteBecker Some other C-like languages have 32-bit `char`, and some C/C++ implementations have 32-bit `wchar_t`, but the Standard says that `sizeof(char)` must be 1. Therefore, `char` in C or C++ cannot be 32 bits wide without giving up the ability to address objects that are not 32-bit aligned and breaking a whole lot of code. That might hypothetically make sense on a 32-bit, word-addressed microcontroller. – Davislor Sep 29 '18 at 03:26
  • Yes, if `char` is 32-bits wide, it means that you cannot address objects that are not 32-bit aligned. If your program makes any other assumption, the program is broken. C and C++ define an **abstract machine**; if you play by the rules of the abstract machine, your code is (more or less) portable. One of the rules is that portable code **cannot assume** that `char` is 8 bits wide. And, no, that doesn't make it impossible to write code that accesses data over a network. It just means that you have to pay attention to alignment, and you might have to do some bitmasking on some hardware. – Pete Becker Sep 29 '18 at 11:25
  • I should also mention that the macro `CHAR_BIT` tells you how many bits there are in a `char`. – Pete Becker Sep 29 '18 at 11:39
  • @PeteBecker I think I agree with what you just said: it’s theoretically *possible* to re-write any existing codebase to make it portable to an implementation that can only address aligned 32-bit words, not individual bytes. For example, it could use four times as much memory to store strings. The problem is, any real-world codebase would break, often silently. So it wouldn’t be useful for compiling existing code. Therefore, there is no reason to worry whether the code you write would hypothetically be portable to it. Nobody would expect it to be able to compile your code! – Davislor Sep 29 '18 at 17:18
  • At the risk of sounding overly pedantic: if portability is a requirement, code that does not allow for the possibility of `CHAR_BIT` being something other than 8 is already broken. I write code that I fully expect will work correctly for any size `char`. Other people may not be as fastidious. – Pete Becker Sep 29 '18 at 17:51
  • @PeteBecker In principle, I agree: the attitude that types would always be exactly 16 or 32 bits wide has caused a lot of problems. On the other hand, if you write code that needs to interchange some kind of binary data (UTF-8, network packets, file headers), and suddenly someone releases a compiler where all objects *must* be padded to 32 bits or cannot be individually addressed at all, and using several times the memory weren't acceptable, your code would probably require some kind of modification to compile and run successfully. And that's fine. – Davislor Sep 29 '18 at 19:07
  • @PeteBecker Especially if it also makes no assumptions about endian-ness, or the layout of bitfields. – Davislor Sep 29 '18 at 19:17