It is formally undefined behaviour, as @ouah said, and it is error prone, so it should never exist in production code.
That said, it is accepted and produces the expected result with most (if not all) common compilers (gcc, clang and msvc do).
If you include the .h file containing extern int a[4];
in the .c file containing int a[10];
you will get an error, because you are redeclaring a with an incompatible type (as others have already said).
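For example, a minimal sketch of that error case (the file names a.h and bad.c are just for illustration); a typical compiler rejects it with a diagnostic such as "conflicting types for 'a'" :
a.h :
extern int a[4];
bad.c :
#include "a.h"
int a[10];   /* error: the two declarations of a have incompatible types */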
If you only include the .h in the other compilation units, the linker ignores the size and links it correctly.
You will simply get sizeof(a) == 10 * sizeof(int)
in the .c file where it is defined and sizeof(a) == 4 * sizeof(int)
in the other compilation units that include the .h declaring it.
Working example :
foo.c :
#include <stdio.h>
/* the real definition: 10 elements */
int a[10];
void display(void);
int main(void) {
    /* sizeof(a) is 10 * sizeof(int) here, so the whole array gets filled */
    for (size_t i = 0; i < sizeof(a) / sizeof(a[0]); i++) {
        a[i] = (int)i;
    }
    printf("sizeof(a)=%zu\n", sizeof(a));   /* %zu: sizeof yields a size_t */
    display();
    return 0;
}
foo2.c :
#include <stdio.h>
/* mismatched declaration: claims only 4 elements */
extern int a[4];
void display(void) {
    printf("sizeof(a)=%zu\n", sizeof(a));   /* 4 * sizeof(int) here */
    for (size_t i = 0; i < sizeof(a) / sizeof(a[0]); i++) {
        printf(" %2d", a[i]);
    }
    fputs("\n", stdout);
}
Compilation + link : cc foo.c foo2.c -o foo (not even a warning)
Execution :
sizeof(a)=40
sizeof(a)=16
0 1 2 3
This was commonly used with COMMON blocks in Fortran, where a compilation unit could declare only the beginning of a COMMON block, but I cannot imagine a real use case for such a horror in C.
Why it works
Compilers cannot detect at compile time that the program contains declarations with incompatible types, because those declarations live in different translation units that are processed by separate compiler invocations, possibly at different times.
At link time, the linker only sees the addresses of the different declarations of a
and makes sure that all the .o (or .obj) files get the same address. It would be hard to do otherwise without breaking multi-language compatibility : this is how an array is shared between a C module and an assembly-language one.
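You can check this by inspecting the object files (a sketch assuming a Unix-like toolchain; the exact nm output format varies) :
cc -c foo.c foo2.c
nm foo.o    # 'a' appears as a defined symbol, its storage coming from int a[10]
nm foo2.o   # 'a' appears only as U (undefined): a bare name to resolve, with no element type or bound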
Why you should not use it
You could say that nothing prevents a compiler from doing the right thing when it faces what the standard defines as undefined behaviour. But Hans Passant once gave me a link to an article about research on future compilers. Here are some extracts :
This article is about a new memory-safe interpretation of the C abstract machine that provides stronger protection to benefit security and debugging
... [Writers] demonstrate that it is possible for a memory-safe implementation of C to support not just the C abstract machine as specified, but a broader interpretation that is still compatible with existing code. By enforcing the model in hardware, our implementation provides memory safety that can be used to provide high-level security properties for C ...
[Implementation] memory capabilities are represented as the triplet (base, bound, permissions), which is loosely packed into a 256-bit value. Here base provides an offset into a virtual address region, and bound limits the size of the region accessed ... Special capability load and store instructions allow capabilities to be spilled to the stack or stored in data structures, just like pointers ... with the caveat that pointer subtraction is not allowed.
The addition of permissions allows capabilities to be tokens granting certain rights to the referenced memory. For example, a memory capability may have permissions to read data and capabilities, but not to write them (or just to write data but not capabilities). Attempting any of the operations that is not permitted will cause a trap.
[The] results confirm that it is possible to retain the strong semantics of a capability-system memory model (which provides non-bypassable memory protection) without sacrificing the advantages of a low-level language.
(emphasis mine)
TL/DR : Nothing prevents future compilers from adding size information for arrays to the object (compiled) module and raising an error when the declarations are not compatible. Research on such features already exists.