
Possible Duplicate:
What's the difference between new char[10] and new char(10)

What is the difference between

char* t1=new char

and

char* t2=new char[10];

Both allocate memory, and both t1[100]='m' and t2[100]='m' seem to work for them.

-----------after edit:

But why can we use t1[100] if t1 is a dynamically allocated char, not an array of chars?

maysam
  • What do you mean by "t1[100]='m' is correct for them"? It is just a coincidence that you can set this memory to 'm' without your program crashing. You haven't allocated that memory, so you are writing off the end of the array into memory that could be anything. – D.C. Oct 23 '10 at 22:06
    Basically a duplicate of: http://stackoverflow.com/questions/3902011/whats-the-difference-between-new-char10-and-new-char10, except for that final comment in your question. `t[100] = 'm'` is well-formed, but leads to undefined behavior for both cases. Don't. – GManNickG Oct 23 '10 at 22:07
    You can use `t1[100]` because when `X` is a pointer `X[I]` is equivalent to `*(X + I)`. You just blindly move the pointer value 100 elements over and dereference. C++ doesn't care or try to protect the programmer from doing stupid things. It's up to you. – GManNickG Oct 23 '10 at 22:21
  • Use the vector class and its at() function to be safe. I have an example in my update answer. – chrisaycock Oct 24 '10 at 00:02

3 Answers


Your first case creates a single char element (1 byte), whereas your second case creates 10 consecutive char elements (10 bytes). However, the access t1[100]='m' (or t2[100]='m') is undefined in both cases. That is, you are writing 100 bytes past the pointer, into memory you don't own, which most likely holds unrelated data.

In other words, your assignment of 'm' will overwrite whatever is already there, which could be data from another array. Thus, you may encounter some bizarre errors during runtime.

C and C++ allow programmers to access arrays out of bounds because array indexing is just pointer arithmetic: t1[100] means *(t1 + 100), the memory 100 elements past the pointer, no matter what happens to be there. No bounds check is performed.

If you want "safe" arrays, use the std::vector class and invoke its at() member function. This throws an out_of_range exception if the index is invalid.

Stroustrup gives the following example:

template<class T> class Vec : public vector<T> {
public:
    Vec() : vector<T>() {}
    Vec(int s) : vector<T>(s) {}

    // this-> is required: at() lives in the dependent base vector<T>
    T& operator[] (int i) { return this->at(i); }
    const T& operator[] (int i) const { return this->at(i); }
};

This class is boundary-safe. I can use it like this:

Vec<char> t3(10);                // vector of 10 char elements
try {
    char t = t3[100];            // access something we shouldn't
}
catch (const out_of_range&) {
    cerr << "Error!" << endl;    // now we can't shoot ourselves in the foot
}
chrisaycock

You need to delete these differently since arrays are allocated using a different variant of operator new:

delete t1;
delete [] t2;
Steve Townsend

t1 points to a dynamically allocated char; t2 points to a dynamically allocated array of 10 chars. But I believe this is C++, not C. And this is definitely a duplicate.

Revision after OP's edit: p[n], where p is a pointer and n is an integer, is equivalent to *(p+n), so it accesses the memory 100 elements away from what p points to. In both your cases (t1 and t2) the element at index 100 is beyond what you own, so it's UB. The same fact makes it legal to write 2[array] interchangeably with array[2]. Fancy, but don't do that :)

Armen Tsirunyan