People have already mentioned (a) strict language-lawyer interpretations of the standard, and (b) x86 segments, usually in 16-bit code (but also on some older 32-bit OSes).
Let me mention another example, close to my heart:
Andy Glew. 1990. An empirical investigation of OR indexing. SIGMETRICS Perform. Eval. Rev. 17, 2 (January 1990), 41-49. DOI=10.1145/378893.378896 http://doi.acm.org/10.1145/378893.378896
That's me.
In this paper, I proposed an instruction set that used OR rather than ADD as its memory addressing mode.
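To give a flavor of why that can work at all: as long as the base address is aligned and the offset fits entirely within the base's zero low-order bits, OR and ADD produce the same effective address, so the cheaper OR can stand in for the add. A tiny sketch of that property (my own illustration here, not from the paper):

#include <cassert>
#include <cstdint>

int main() {
    std::uintptr_t base   = 0x1000;  // 4 KB aligned: low 12 bits are zero
    std::uintptr_t offset = 0x024;   // fits entirely within those 12 bits
    assert((base | offset) == (base + offset));            // OR == ADD here

    std::uintptr_t unaligned = 0x1004;                     // base no longer aligned
    assert((unaligned | offset) != (unaligned + offset));  // 0x1024 vs 0x1028
    return 0;
}

Adjusted pointers like the one below are exactly the kind of thing that destroys that alignment assumption.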
Now, since C++ programmers can always write
int* a = new int[5] + 1;
compilers must handle this sort of thing correctly.
But they may be less efficient when doing so.
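For instance (my sketch, not from the question), every access through such an adjusted pointer has to have the extra offset folded in, and the adjustment has to be undone before freeing:

int* a = new int[5] + 1;   // a[-1] .. a[3] name the five allocated ints
for (int i = -1; i <= 3; ++i)
    a[i] = i;              // the compiler must fold the +1 into each access
delete[] (a - 1);          // delete[] must receive the pointer new[] returned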
It turns out that I am not the only one who thought of this. Some real shipping computers have used this technique, although I do not have references at hand.
Anyway, that's just an example.
Overall, though:
a) What you suggest will work on most machines (and of course, most machines today are x86s running Windows or Linux, or ARMs running UNIX-like systems such as iOS and Android).
b) But it is arguably illegal: it may break some compiler optimizations, or be broken by them, etc.
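For concreteness, the classic questionable form is building a pointer that points before the start of an array; the standard does not bless that pointer value even if you never dereference it:

int buf[10];
int* one_based = buf - 1;  // arguably illegal: points before the array
one_based[3] = 42;         // same location as buf[2]; works on most
                           // machines today, but nothing guarantees it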
By the way, on x86 1-based arrays cost little more to code and almost nothing in machine code. If you say something like
#include <cstddef>

template<typename T, std::size_t size>
class One_Based_Array {
private: T array[size];    // storage for a[1] .. a[size]
public:  T& operator[](std::size_t i) { return array[i-1]; }
};
used like
One_Based_Array<int,100> a;
int tmp = a[i];
the machine code will look something like
MOV EAX, [a.array + ebx*4 - 4]
i.e. 1-based stuff can usually be folded into the x86's basereg+indexreg+offset addressing mode.
On such machines this usually costs nothing, although the code may be a bit larger.
In fact, Intel's compilers often emit code that looks like
ebx = -i
MOV EAX, address_of_end_of_array[ebx]
i.e. using subtractive rather than additive indexing (adding in a negative number), because a loop test against 0 is more efficient than a test against +N.
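In source-level terms that is roughly the "count up toward zero" idiom; a hedged sketch of the shape the compiler is aiming for (function and variable names are mine):

#include <cstddef>

int sum_array(const int* p, std::size_t n) {
    const int* end = p + n;            // "address_of_end_of_array"
    int sum = 0;
    for (std::ptrdiff_t i = -static_cast<std::ptrdiff_t>(n); i != 0; ++i)
        sum += end[i];                 // end[-n] .. end[-1] == p[0] .. p[n-1]
    return sum;
}

The exit test against 0 can often reuse the condition codes that the increment already set.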