
I am using Code::Blocks 16.01. In the code given below I have used the `new` operator and declared an array through a pointer.

#include <iostream>
using namespace std;

int main()
{
    int *p = new int[2];

    for (int i = 0; i < 10; i++)
    {
        cout << "\n Enter the element  " << i << " :  ";
        cin >> *(p + i);
    }

    for (int i = 0; i < 10; i++)
        cout << "\n" << *(p + i) << " :  " << i << "th element ";
}

Here is why I have a doubt: the size declared is 2, but the array is able to store 7 elements as well. Why is this happening? Even when I increased the size to 5, it still took and stored values for 7 elements, but for the rest it printed addresses. Why?

  • Accessing beyond the end is undefined behavior. Anything can happen. – Raymond Chen Jul 01 '18 at 04:18
  • Try a final `delete[] p` and you will be surprised. – 273K Jul 01 '18 at 04:18
  • Arrays are const pointers!!! – Eduardo Pascual Aseff Jul 01 '18 at 04:23
  • @EduardoPascualAseff, I'm not sure what you're getting at, but arrays and pointers are distinct types. – chris Jul 01 '18 at 04:24
  • Nothing in your example shows that operator `new` allocated more memory than requested. Your example only means that your code accesses more elements than it has allocated, and you have seen no symptoms. The reasons for that could be anything - for example, tromping on memory outside the array that happens to correspond to other data, and your code (or the host system) isn't detecting that. With another compiler, the same code could behave completely differently. Undefined behaviour means "there are no guarantees". It doesn't mean "the program should crash". – Peter Jul 01 '18 at 04:25
  • @chris what I said was a little informal, but what I mean is that the name of an array is a const pointer that points to the first element of the array, and this is how the compiler treats them, so there isn't any check on array bounds. Sorry for my English – Eduardo Pascual Aseff Jul 01 '18 at 04:32
  • @EduardoPascualAseff, If that were true, `sizeof(arr)` would return the size of a pointer for `int arr[3];`. However, this isn't the best place for this conversation and most of what I'd have to say is already in [this Q&A](http://stackoverflow.com/questions/4810664/how-do-i-use-arrays-in-c). – chris Jul 01 '18 at 04:35
  • @chris I'm not saying they are the same types, arrays have different sizes that depend on the declaration! The output of this `void fun(int a[]){ cout << sizeof(a) << "\n";} ... int x[4]; cout << sizeof(x) << "\n"; fun(x);` is `16 4`. – Eduardo Pascual Aseff Jul 01 '18 at 04:47
  • @OP Read up on [guard bytes](https://en.wikipedia.org/wiki/Guard_byte). – PaulMcKenzie Jul 01 '18 at 05:18

3 Answers


C++ array element accesses are not automatically checked against the array bounds.

You can often reference memory outside the size allocated for the array, but you shouldn't: outside the allocated array elements, the memory can belong to something else.

There are tools like e.g. valgrind that can instrument your program to detect out-of-bounds access and other problematic actions. They can be very interesting to experiment with.
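To make the contrast concrete, here is a small sketch (mine, not part of the original answer; the index 7 just mirrors the question): `std::vector::at()` does check the index and throws `std::out_of_range`, whereas the raw `new[]` access is unchecked and may even appear to work.

#include <iostream>
#include <stdexcept>
#include <vector>

int main()
{
    std::vector<int> v(2);          // room for exactly 2 ints

    try {
        v.at(7) = 42;               // bounds-checked access: throws
    } catch (const std::out_of_range& e) {
        std::cout << "caught: " << e.what() << "\n";
    }

    int *p = new int[2];
    p[7] = 42;                      // unchecked access: undefined behaviour, may seem to work
    delete[] p;
}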

Bob Jacobsen

Accessing beyond the end of an array is behaviour undefined by the C++ standard. It is your job as a programmer to ensure that never happens.

If it does happen, the program can do anything without violating the C++ standard; it can format your hard drive, email your browser history and credit card numbers to Antarctica, or even time travel. Amusingly, the last one is not a joke.

In practice, C++ compilers implement the abstract machine on your hardware on top of a naive page-based memory provider that your OS supplies. The C/C++ runtime manages a memory heap where it slices the OS-provided pages into the chunks of memory you ask for. This means your process owns the address space around most of the addresses returned by new; often the data immediately around the returned block contains bookkeeping information the C/C++ runtime uses when the address is recycled.

By writing there, you are overwriting some other data structure in your program's memory space, or using memory you did not claim you were going to write to. In a larger program this will pretty reliably result in heap corruption and a seemingly random crash at an unexpected and seemingly unrelated part of your code. In your toy program, the memory you corrupted wasn't used, so no obvious symptoms occurred and it "seemed to work". Welcome to undefined behaviour.

All of this is why it is not considered a good idea (in at least some circles) to use new/malloc and pointer arithmetic "in the raw", but rather to dress them up in patterns that make it far less likely you'll cause heap corruption: things like std::vector, range-based for loops and the like. C/C++ programmers' failure to keep memory access under control has inspired dozens of highly successful and slow languages whose primary virtue is that accessing an invalid pointer is not undefined behaviour.
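As an illustration of that advice (my sketch, not the answer author's code), here is the question's program rewritten with std::vector and a range-based for loop, so the container always knows its own size and releases the memory by itself.

#include <cstddef>
#include <iostream>
#include <vector>

int main()
{
    std::size_t n = 0;
    std::cout << "How many elements? ";
    std::cin >> n;

    std::vector<int> values(n);              // size chosen at run time
    for (std::size_t i = 0; i < values.size(); ++i)
    {
        std::cout << "Enter element " << i << ": ";
        std::cin >> values[i];
    }

    for (int v : values)                     // range-based for cannot run past the end
        std::cout << v << "\n";
}                                            // the vector frees its memory here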

Yakk - Adam Nevraumont
  • In addition, a debug runtime may allocate extra memory to check whether the programmer has overrun the buffer, not solely for bookkeeping information. – PaulMcKenzie Jul 01 '18 at 05:21

You reserved space for 2 integers. With the pointer you point "somewhere" into RAM. The first two accesses hit the int array; the last 8 accesses just write somewhere else. This is a typical C issue: writing outside an array, destroying some data and not noticing...
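A minimal fix along the lines this answer describes (my sketch, with the loop bound of 10 taken from the question): make the allocation as large as the loops actually use, and release it afterwards.

#include <iostream>

int main()
{
    const int n = 10;
    int *p = new int[n];               // allocate as many ints as the loops will use

    for (int i = 0; i < n; ++i)
    {
        std::cout << "Enter element " << i << ": ";
        std::cin >> *(p + i);          // always inside the allocated block
    }

    for (int i = 0; i < n; ++i)
        std::cout << *(p + i) << "\n";

    delete[] p;                        // free what new[] allocated
}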

maze
  • It's not actually a "typical c issue". It is a typical *programmer* issue that gets blamed on the language, because programmers prefer to blame their tools rather than themselves for consequences of bad code. – Peter Jul 01 '18 at 04:28