
There are lots of examples of undefined/unspecified behavior when doing pointer arithmetic - pointers have to point inside the same array (or one past the end) or inside the same object, there are restrictions on when you can do comparisons/operations based on the above, etc.

Is the following operation well-defined?

int* p = 0;
p++;
Luchian Grigore
  • I'm curious as to why you think it wouldn't be... – Borgleader Apr 23 '15 at 13:44
  • @Borgleader because of all the restrictions I mentioned. If it was pointing to a single value, it'd be defined, but (only?) because it would be treated as a pointer to an array of size 1. What about in this case? – Luchian Grigore Apr 23 '15 at 13:46
  • After all it's just (pointer) arithmetic. Arithmetic is always well-defined. – Matt Apr 23 '15 at 13:48
  • @LuchianGrigore I guess I'm missing the point of the question, because it seems like as long as you don't *do* anything with that pointer, I don't see how that would be undefined behavior. – Borgleader Apr 23 '15 at 13:49
  • OK. But then why so? Are there architectures with special 'pointer overflow' condition? – Matt Apr 23 '15 at 13:54
  • @user4419802: first of all: because the specification says so! And the reason for that is probably rooted in the virtual and/or physical address representation of all those different architectures. – dhein Apr 23 '15 at 14:23
  • @user4419802 say you have a "fake" 64-bit machine that can only do a 36-bit address space (which is not unrealistic, since no x64 implements the full 64-bit address space yet), with the leading 28 bits being 1 by default, or used as tags; then adding 1 to `null` will return `null` plus a tag, if it works at all. – user3528438 Apr 23 '15 at 14:23
  • @user3528438 I mean that usually standard regs are used for pointers. Thus any possible overflow is not automatically checked until dereferencing. I admit that it's theoretically possible to build the machine which uses only "special" regs for dereferencing with all arithmetic on them protected by HW exceptions. The question is whether anything like that was actually done? – Matt Apr 23 '15 at 14:37
  • @ddriver, Mainly the one in Columbo's answer: *If both the pointer operand and the result point to elements of the same array object, or one past the last element of the array object, the evaluation shall not produce an overflow; otherwise, the behavior is undefined.* `p + 4` would point to the fifth element of `arr`, but that doesn't exist. The result pointer would not point to an element of the same array, or one past the end (unlike `p + 3`, which is okay). – chris Apr 23 '15 at 14:42
  • @chris I am still not clear how that is being figured out. You have a function that accepts a pointer and a size integer, how does "it know" that it is in the array, or whether it is an array to begin with? – dtech Apr 23 '15 at 14:45
  • @ddriver, Being undefined behaviour, it doesn't have to know. It can be assumed that you follow the rules and don't incur UB. – chris Apr 23 '15 at 14:47
  • @chris - so it will magically produce defined behavior while you are in the confines of the array and just as magically produce undefined behavior when it leaves it (without dereferencing), even though it has no idea or mechanism to determine when that is? Pardon my persistence, but I am really puzzled how exactly that will happen. – dtech Apr 23 '15 at 14:53
  • @ddriver Undefined behaviour isn't something that "happens" – Tavian Barnes Apr 23 '15 at 14:56
  • @ddriver, Well, imagine the implementation traps if you overflow past 0x1000. You have an array of 3 four-byte ints located at 0xFF0. `arr + 1` would be implemented as `0xFF0 + 1*4 = 0xFF4`. Similarly for `arr + 2` through `arr + 4`. Now imagine `arr + 5`. There's the trap from going past 0x1000, but that's okay since it's undefined behaviour. Placing the array any farther in memory would not be okay, lest `arr + 4` traps. Now imagine it's at 0x100. `arr + 5` is implemented as `0x100 + 5*4 = 0x114`. It's out of range, but no trap. This is also okay because it's undefined behaviour, but "works" – chris Apr 23 '15 at 14:59
  • @chris If it is an array of length 3, then `arr+4` is undefined. N'est-ce pas? – Theodore Norvell Apr 23 '15 at 15:03
  • @TheodoreNorvell, Not quite, and the idea is carried over to iterators. Fact is it's handy to have a pointer to one past the end that you can't dereference. The half-open intervals represented by iterators can be overlain so that the end of one is the beginning of another without needing any pesky `+ 1`s or `- 1`s, which are harder to reason about and lead to off-by-one errors. It also allows things like `std::find` to use this one-past-the-end as a return value when the element was not found. – chris Apr 23 '15 at 15:06
  • OK, you go past, but so what? That's just a value, I'd certainly understand the UB when dereferencing it, but prior to that it is just a value representing a memory address. Judging from assembly code, it is just an arithmetic operation with no side effects whatsoever. It only takes effect when you try using the value as a memory address to read from or write to. – dtech Apr 23 '15 at 15:07
  • @ddriver, From a practical standpoint, yes. From a language standpoint, C++ does not limit the hardware very much. I'm personally unaware of any hardware that traps on overflow, but if there is such hardware, it's most likely supported. As seen before, given that undefined behaviour is allowed to work without anything bad happening, it simplifies the implementation. No need to worry about whether everything is valid, just add and be done with it. If it traps, oh well. If it overflows, oh well. That also means no need for validity checks taking up possibly-precious CPU cycles. – chris Apr 23 '15 at 15:09
  • I understand now, it is more of a "just in case of" thing reserved for some corner case hardware. – dtech Apr 23 '15 at 15:11
  • Good question - I had a fight with an OS vendor once about exactly this bug in their code. I eventually had to give up and do it correctly myself. – Carl Norum Apr 23 '15 at 15:22
  • @ddriver Optimizing compilers are fond of assuming "UB never happens". So they're free to assume that your code that invokes UB is unreachable and proceed to destroy everything using that contradiction. – CodesInChaos Apr 23 '15 at 15:27
  • @ddriver If you think that UB can only make problems if it would make "sense" according to the used architecture, you're at least ten years behind on compilers. Example: x86 has 2s complement arithmetic, so `int overflows(int x) { return x + 1 < x;}` should always give you the right result, right? On modern gccs you have a good chance that the function will be optimized to `return false`. – Voo Apr 23 '15 at 17:32
  • @chris If `arr` is an array of length 3, then `arr+0` is the address of the first item, `arr+1` is the address of the second item, `arr+2` is the address of the third item, `arr+3` is one past the end, and `arr+4` is undefined behaviour (see the short sketch after this comment thread). – Theodore Norvell Apr 23 '15 at 19:20
  • @TheodoreNorvell, Yes, that's right. – chris Apr 23 '15 at 19:27
  • @Voo: Integer overflow need not grant license for full Undefined Behavior to allow that optimization; defining the term "partially-indeterminate value" to refer to an integer of size N whose N lowest bits have a defined value, but which may behave as though it has additional upper bits that would hold arbitrary values, would suffice. Such a definition could improve optimization if it were allowed as an "implementation-defined behavior" when attempting to store an out-of-range value into a variable of signed integer type (since it would mean that the optimization could be used... – supercat May 20 '15 at 20:28
  • ...with variables of type `int32_t` even on systems where `int` is 64 bits). Further, on any implementation which defined such a rule, operations like `uint32_t x = -3; x*=x;` would behave consistently regardless of the size of `int` since any wackiness would be confined to bits that get lopped off in the conversion back to `uint32_t`. – supercat May 20 '15 at 20:30
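
To make the bounds being debated in these comments concrete, here is a minimal sketch (the variable names are illustrative only) of which additions on a three-element array fall inside the rule quoted in the answers below, and which do not:

int arr[3];
int *p = arr;

int *q1 = p + 1;    // points at arr[1]: fine
int *q3 = p + 3;    // one past the last element: a valid pointer value,
                    // but it must not be dereferenced
// int *q4 = p + 4; // neither an element of arr nor one past its end:
                    // undefined behaviour, whether or not it "works"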

9 Answers


§5.2.6/1:

The value of the operand object is modified by adding 1 to it, unless the object is of type bool [..]

And additive expressions involving pointers are defined in §5.7/5:

If both the pointer operand and the result point to elements of the same array object, or one past the last element of the array object, the evaluation shall not produce an overflow; otherwise, the behavior is undefined.
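
As a minimal sketch of how that rule catches the code in the question (the single-object case relies on the neighbouring rule in §5.7 that a pointer to a non-array object behaves as a pointer to the first element of an array of length one):

int x;
int *s = &x;
s++;        // fine: &x behaves as a pointer into an array of length one,
            // so s now holds the one-past-the-end value

int *p = 0;
// p++;     // p points to no element of any array object at all, so the
            // "otherwise, the behavior is undefined" clause applies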

Columbo
  • I'm curious how "array object" is defined in the standard. Is the return value of `malloc` considered an array object? – user3528438 Apr 23 '15 at 14:05
  • How do you guarantee that `1` has a corresponding pointer representation for **mappings between pointers and integers are otherwise implementation-defined**? – Lingxi Apr 23 '15 at 14:13
  • @Lingxi The first part of that quote says that you can convert it. The resulting value is implementation-defined, but the operation's legality is guaranteed, isn't it? (Actually, it should've been sizeof(int), forgot to adjust that - still, converting from 1 to it should be valid.) – Columbo Apr 23 '15 at 14:16
  • @Columbo: An implementation could legitimately specify that any integer value which it recognizes as never having been the result of a pointer-to-integer cast may yield a trap representation when converting to a pointer. – supercat Apr 23 '15 at 15:17
  • If hypothetically a call to `Foo *p = static_cast<Foo*>(malloc(0));` resulted in `p` holding a non-NULL pointer, then it would be UB to increment that `p` as well. – jxh Apr 23 '15 at 16:48
  • The question by @user3528438 about malloc is interesting. The std says "The pointer returned if the allocation succeeds is suitably aligned so that it may be assigned to a pointer to any type of object with a fundamental alignment requirement and then used to access such an object or an array of such objects in the space allocated ... ." So the intention is clear for malloc and friends. I'm not sure, though, that one can write a malloc replacement in a useful, standard conforming, 100% portable, way. – Theodore Norvell Apr 23 '15 at 21:22
  • @TheodoreNorvell: One could write an essentially-portable malloc/free/realloc library by defining a `union` containing an item of the coarsest required alignment and an `unsigned char[]` of the required heap size. The malloc function would be required to return pointers whose offset from the 'holds-everything" object was a multiple of the coarsest required alignment. There are only two notable problems with portability here: (1) some segmented architectures may require the use of multiple disjoint data holders, since the largest item size is much less than the max space available. – supercat Apr 23 '15 at 21:33
  • (2) For portability, one would have to use names other than `malloc`/`free`/`realloc`, since any attempt to define functions with any of those names would be Undefined Behavior. – supercat Apr 23 '15 at 21:35
  • For a specific example of how this can be undefined, some embedded processors memory-map all of the registers at the low end of the address space, starting with 0x0000. This would suggest that null is actually a pointer to the first register, but this is not the case. What they actually do is set null to be a different value, such as 0xFFFF. This is legal in C (the assignment p=0 must yield p to be a null pointer, but it does not specify the bit pattern, as odd as that sounds). If we then do p++, this would overflow. Many such platforms allow you to trap an overflow. – Cort Ammon Apr 28 '15 at 23:59
  • ... Such a trap is typically UB because the spec doesn't like to include such very hardware-specific handlers. – Cort Ammon Apr 29 '15 at 00:00

There seems to be quite a low level of understanding of what "undefined behaviour" means.

In C, C++, and related languages like Objective-C, there are four kinds of behaviour: There is behaviour defined by the language standard. There is implementation-defined behaviour, which means the language standard explicitly says that the implementation must define the behaviour. There is unspecified behaviour, where the language standard says that several behaviours are possible. And there is undefined behaviour, where the language standard doesn't say anything about the result. Because the language standard doesn't say anything about the result, anything at all can happen with undefined behaviour.

Some people here assume that "undefined behaviour" means "something bad happens". That's wrong. It means "anything can happen", and that includes "something bad can happen", not "something bad must happen". In practice it means "nothing bad happens when you test your program, but as soon as it is shipped to a customer, all hell breaks loose". Since anything can happen, the compiler can actually assume that there is no undefined behaviour in your code - because either it is true, or it is false, in which case anything can happen, which means whatever happens because of the compiler's wrong assumption is still correct.

Someone claimed that when p points to an array of 3 elements, and p + 4 is calculated, nothing bad will happen. Wrong. Here comes your optimising compiler. Say this is your code:

int f (int x)
{
    int a [3], b [4];
    int* p = (x == 0 ? &a [0] : &b [0]);
    p + 4;
    return x == 0 ? 0 : 1000000 / x;
}

Evaluating p + 4 is undefined behaviour if p points to a [0], but not if it points to b [0]. The compiler is therefore allowed to assume that p points to b [0]. The compiler is therefore allowed to assume that x != 0, because x == 0 leads to undefined behaviour. The compiler is therefore allowed to remove the x == 0 check in the return statement and just return 1000000 / x. Which means your program crashes when you call f (0) instead of returning 0.

Another assumption made was that if you increment a null pointer and then decrement it again, the result is again a null pointer. Wrong again. Apart from the possibility that incrementing a null pointer might just crash on some hardware, what about this: Since incrementing a null pointer is undefined behaviour, the compiler checks whether a pointer is null and only increments the pointer if it isn't a null pointer, so p + 1 is again a null pointer. And normally it would do the same for the decrementing, but being a clever compiler it notices that p + 1 is always undefined behaviour if the result was a null pointer, therefore it can be assumed that p + 1 isn't a null pointer, therefore the null pointer check can be omitted. Which means (p + 1) - 1 is not a null pointer if p was a null pointer.
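
A compact way to see that last point (a hypothetical sketch of a transformation the rules permit; it is not a claim about what any particular compiler actually emits):

int *g (int *p)
{
    int *q = p + 1;       // undefined behaviour if p is a null pointer
    if (p == nullptr)     // an optimizer may assume this test is false,
        return nullptr;   // since a null p would already have made the
                          // previous line undefined, and drop the check
    return q - 1;
}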

gnasher729
  • This would seem to be the only answer on the page that actually knows what "undefined behaviour" means. – Fattie Apr 24 '15 at 09:39
  • This rant doesn't even attempt to address that thing up the top - ahhh - what's it called... oh yeah - ***the question***, which is ***not*** *"how might undefined behaviour manifest"*, but whether the code in the question has undefined behaviour or not. (In fairness, it does eventually reach a discussion that's *premised* on incrementing a null pointer being undefined without stating or justifying such.) This answer would be better moved to an appropriate question.... – Tony Delroy Apr 24 '15 at 10:43

Operations on a pointer (like incrementing, adding, etc) are generally only valid if both the initial value of the pointer and the result point to elements of the same array (or to one past the last element). Otherwise the result is undefined. There are various clauses in the standard for the various operators saying this, including for incrementing and adding.

(There are a couple of exceptions like adding zero to NULL or subtracting zero from NULL being valid, but that doesn't apply here).

A NULL pointer does not point at anything, so incrementing it gives undefined behaviour (the "otherwise" clause applies).

Peter
  • They can point to an object, and one byte past an object, too. – Columbo Apr 23 '15 at 13:56
  • @Columbo: what are you referring to? – dhein Apr 23 '15 at 14:25
  • @Zaibis Probably the rule that a pointer to one element past the end of an array is valid i.e. that `a+3` is valid when you have a `int a[3]`, despite pointing to no valid element. But talking about "one byte" is a bit weird. – CodesInChaos Apr 23 '15 at 15:24
  • You are wrong about adding zero; no exception is made in additive operations when adding zero, and for an invalid pointer value it can cause UB. This is one reason why there are cases where `p[0]` is not strictly equivalent to `*p` (when the contents of that location is not actually used), see for instance [this answer](http://stackoverflow.com/a/29251966/1436796). – Marc van Leeuwen Apr 24 '15 at 07:20
  • What you say is true for C, but is not true for C++, Marc. First sentence of C++98 - Section 5.7, para 8 says "If the value 0 is added to or subtracted from a pointer value, the result compares equal to the original pointer value." (I don't have more recent versions of the C++ standard at hand to check section numbers right now). There's an article by Andrew Koenig discussing why, http://www.drdobbs.com/cpp/why-does-c-allow-arithmetic-on-null-poin/240001022 – Peter Apr 24 '15 at 08:59
  • @Peter You don't have more recent versions? Not only has C++03 been available for free for years, final drafts for every standard since then are available online (for free). Also, I was referring to the fact that an object itself is treated as an array, but your wording implies that `&obj + 1` induces UB, which is incorrect. – Columbo May 21 '15 at 22:30
  • @Peter Also, `(int*)0 + 0` (or the subtracting equivalent) does have undefined behavior, by §5.2.6/1. Your quote does not dispute that result but solely provides a superfluous guarantee. – Columbo May 21 '15 at 22:32

As Columbo said, it is UB. And from a language-lawyer point of view this is the definitive answer.

However, all C++ compiler implementations I know of will give the same result:

int *p = 0;
intptr_t ip = (intptr_t) (p + 1);

cout << ip - sizeof(int) << endl;

gives 0, meaning that p + 1 has the value 4 on a 32-bit implementation and 8 on a 64-bit one.

Said differently:

int *p = 0;
intptr_t ip = (intptr_t) p; // well defined behaviour
ip += sizeof(int); // integer addition : well defined behaviour 
int *p2 = (int *) ip;      // formally UB
p++;               // formally UB
assert ( p2 == p );  // works on all major implementations
Serge Ballesta
  • I wouldn't trust a modern optimizing compiler with this. For example it's quite plausible that it decides that in `p2=p+1; if(p==nullptr){...}` the condition will never be fulfilled (since p==null would have resulted in UB on the first statement) and remove the whole `if` statement. – CodesInChaos Apr 23 '15 at 15:31
  • As I said in the first line, it **is** definitely UB. But I could not find an example of a program, compiler, and parameters that exhibits the problem you describe. – Serge Ballesta Apr 23 '15 at 15:39
  • @SergeBallesta: GCC, and even by default (that's why there is a `-fno-delete-null-pointer-checks` flag) – MSalters Apr 23 '15 at 17:05
  • @MSalters I'm not sure if current compilers detect `p++` as UB, or if they only do this when you dereference the pointer. But no matter if they detect it now, it's only a small step from optimizing the dereference case step to "optimizing" this case as well. – CodesInChaos Apr 23 '15 at 17:17
  • @CodesInChaos: It would be far simpler to detect it at the lowest level. And the GCC case became known from breaking the linux kernel on the equivalent of `int p = foo->bar`. At the HW level, both are just pointer increments. One adds `sizeof(*p)` bytes, the other `offsetof(cFoo, bar)`. – MSalters Apr 23 '15 at 17:25
  • @MSalters : I have only access to a gcc4.8.2. I tried to put that code in a separate compilation unit `int testptr(int *p) { intptr_t ip; int *p2 = p + 1; ip = (intptr_t) p2; if (p == nullptr) { ip *= 2; } else { ip *= -2; } return (int) ip; }` but even with `-O3` it still return a positive value when passed a null pointer. – Serge Ballesta Apr 23 '15 at 21:05
  • @MSalters I can confirm that clang 3.4.1 still does the same. So I repeat my first comment : I could not find an example to exhibit the problem ! (but I do know it is UB and could break in a future version :-) ) – Serge Ballesta Apr 23 '15 at 21:24
  • @SergeBallesta: Clang now supports nullable and non-null pointers. Declare the pointer as non-null and it breaks. – gnasher729 Apr 23 '15 at 23:46
  • @gnasher729: but the whole point is to exhibit a problem caused by the undefined behaviour of incrementing a null pointer. If the pointer is non-null I will get an error first instead of UB. – Serge Ballesta Apr 24 '15 at 06:14
  • [An example demonstrating UB can be found here.](http://stackoverflow.com/a/30351819/193887) – Jeremy Mar 07 '17 at 13:30

From ISO/IEC 14882:2011 §5.2.6:

The value of a postfix ++ expression is the value of its operand. [ Note: the value obtained is a copy of the original value —end note ] The operand shall be a modifiable lvalue. The type of the operand shall be an arithmetic type or a pointer to a complete object type.

Since a nullptr is a pointer to a complete object type, I wouldn't see why this would be undefined behaviour.

As has been said before, the same document also states in §5.7/5:

If both the pointer operand and the result point to elements of the same array object, or one past the last element of the array object, the evaluation shall not produce an overflow; otherwise, the behavior is undefined.

This expression seems a bit ambiguous. In my interpretation, the undefined part might very well be the evaluation of the object. And I think nobody would disagree with this being the case. However, pointer arithmetic seems to only require a complete object type.

Of course, postfix [] operators and subtractions or multiplications on pointers to array objects are only well defined if they in fact point to the same array. This is mostly important because one might be tempted to think that two arrays defined in succession in one object can be iterated over as if they were a single array.
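
For instance, a minimal sketch of that temptation (the struct and its member names are made up for illustration; nothing is claimed about the actual layout):

struct Pair { int a[3]; int b[3]; };
Pair t;

int *p   = t.a;
int *end = p + 3;   // one past the end of t.a: a valid pointer value
// p + 4;           // not defined to reach t.b, even on implementations
                    // where b happens to be laid out immediately after a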

So my conclusion would be that the operation is well defined, but evaluation would not be.

laurisvr

The C Standard requires that no object which is created via Standard-defined means can have an address which is equal to a null pointer. Implementations may allow for the existence of objects which are not created via Standard-defined means, however, and the Standard says nothing about whether such an object might have an address which (likely because of hardware design issues) is the same as a null pointer.

If an implementation documents the existence of a multi-byte object whose address would compare equal to null, then on that implementation saying char *p = (char*)0; would make p hold a pointer to the first byte of that object [which would compare equal to a null pointer], and p++ would make it point to the second byte. Unless an implementation either documents the existence of such an object, however, or specifies that it will perform pointer arithmetic as though such an object exists, there is no reason to expect any particular behavior. Having an implementation deliberately trap attempts to perform any kind of arithmetic on null pointers other than adding or subtracting zero or other null pointers can be a useful safety measure, and code that would increment null pointers for some intended useful purpose would be incompatible with it. Worse, some "clever" compilers may decide that they can omit null checks on pointers that would get incremented even if they hold null, thus allowing all manner of havoc to ensue.

supercat

It turns out it's actually undefined. There are systems for which this is true:

int *p = NULL;
if (*(int *)&p == 0xFFFF)

Therefore, ++p would trip the undefined overflow rule (it turns out that sizeof(int *) == 2). Pointers aren't guaranteed to be unsigned integers, so the unsigned wrap rule doesn't apply.

Joshua
  • It converts the value of p to an integer. The strange expression is required to prevent the compiler from generating the code to replace NULL with 0. The actual bitwise value is relevant here. – Joshua Apr 23 '15 at 23:31
  • That is not incrementing NULL. It is reassigning the value of the pointer. – Peter Apr 24 '15 at 10:45

Back in the fun C days, if p was a pointer to something, p++ effectively added the size of the thing p pointed to to the pointer value, making p point at the next something. If you set the pointer p to 0, then it stands to reason that p++ would still point it at the next thing by adding the size of the pointed-to type to it.

What's more, you could do things like add or subtract numbers from p to move it along through memory (p+4 would point at the 4th something past p.) These were good times that made sense. Depending on the compiler, you could go anywhere you wanted within your memory space. Programs ran fast, even on slow hardware because C just did what you told it to and crashed if you got too crazy/sloppy.

So the real answer is that setting a pointer to 0 is well-defined and incrementing a pointer is well-defined. Any other constraints are placed on you by compiler builders, os developers and hardware designers.

  • Isn't it the other way round? C standard doesn't define it, but compiler vendors are free to define it for a particular platform. – Kos Apr 24 '15 at 07:50
  • I just remember what my old K&R said when it talked about pointers, that this is the way they were supposed to work. If compiler vendors made it not work intuitively then I'd probably not use that compiler unless my arm was twisted roughly. :-) – The Software Barbarian Apr 24 '15 at 08:11
  • OTOH, you didn't get this level of optimisation from compilers back then. Tradeoffs, tradeoffs... – Kos Apr 24 '15 at 08:59
  • Question is about C++. "Real answer" is what the standard defines or does not define. – Luchian Grigore Apr 24 '15 at 09:09
  • As I understand it, the definition of what's a pointer in C and what's a pointer in C++ is the same. There are no pointer arithmetic differences, in any case. Assigning a pointer to 0 is legal - your code may in fact use it to detect the end of a chain, for instance. And incrementing a pointer should take you to the next thing in an array, making no assumptions about the interpretation of what the pointer's value is. Optimisation be damned, if I write an algorithm to work a particular way using pointers to bounce around a list of objects, it had better not make legal code stop working. – The Software Barbarian Apr 29 '15 at 21:10

Given that you can increment any pointer of a well-defined size (so anything that isn't a void pointer), and the value of any pointer is just an address (there's no special handling for NULL pointers once they exist), I suppose there's no reason why an incremented null pointer wouldn't (uselessly) point to the 'one after NULL'est item.

Consider this:

// These functions are horrible, but they do return the 'next'
// and 'prev' items of an int array if you pass in a pointer to a cell.
int *get_next(int *p) { return p+1; }
int *get_prev(int *p) { return p-1; }

int *j = 0;

int *also_j = get_prev(get_next(j));

also_j has had maths done to it, but it's equal to j so it's a null pointer.

Therefore, I would suggest that it's well-defined, just useless.

(And the null pointer appearing to have the value zero when printfed is irrelevant. The value of the null pointer is platform dependent. The use of a zero in the language to initialise pointer variables is a language definition.)

  • A good implementation should trap any attempt to add or subtract any integer from a null pointer at run-time (most of the harm from the "billion dollar mistake" stems from platforms' failures to do so). There are very few platforms where such behavior would ever be necessary, and in practice such behavior is almost always followed by a stray memory access. The only time in which implied arithmetic on a null pointer would be useful would be in cases which are entirely compile-time resolvable as the difference between two pointers which have constant displacements from a common base. – supercat Apr 23 '15 at 15:14
  • Your assumption that it's correct since it's intuitive or works on your system is wrong. C++ is governed by the language rules, which are described by the standard. Some rules are counter-intuitive, but the reasoning behind them is that it allows implementations to perform certain optimizations which otherwise wouldn't be possible. – Luchian Grigore Apr 23 '15 at 22:48
  • @LuchianGrigore: There's also the problem that if incrementing a null pointer isn't undefined behaviour, then it must be defined somehow. Well, I wouldn't want to be responsible for defining what incrementing a null pointer does. Microprocessor Cat claims that the result is defined as "something that produces a null pointer if you decrement it". – gnasher729 Apr 23 '15 at 23:38
  • I'd like to know why this answer was downvoted. Was the code specifically incorrect, or my conclusions? – Microprocessor Cat Apr 24 '15 at 09:29
  • Micro - your answer was about your ideas on how or what is sensible. (You may or may not be completely correct.) The question is completely unrelated to "what is sensible." It's simply a "language law" question. – Fattie Apr 24 '15 at 09:40