I just had Apple's C/C++ compiler initialize a float to a non-zero value (approximately -0.1).
That was a big surprise - and it only happened occasionally (but 100% repeatably, if you ran through the same function calls / args beforehand). It took a long time to track down (using assertions).
I'd thought floats were zero-initialized. Googling suggests that I was thinking of C++ (which of course is much more precise about this stuff - cf. SO: What are primitive types default-initialized to in C++?).
But maybe Apple's excuse here is that their compiler was running in C mode ... so: what about C? What should happen, and (more importantly) what's typical?
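My current understanding so far (corrections welcome): C only guarantees zero-initialization for objects with static storage duration (C99 6.7.8 p10); automatic variables like my float hold an indeterminate value until you assign them. A minimal sketch of the difference - this is my own illustration, names and all, not the original code:

#include <stdio.h>

float fileScopeFloat;          // static storage duration: guaranteed zero-initialized

void demo(void)
{
    static float staticLocal;  // also static storage duration: zero-initialized
    float autoLocal;           // automatic storage duration: indeterminate

    printf("file scope:   %f\n", fileScopeFloat); // always 0.000000
    printf("static local: %f\n", staticLocal);    // always 0.000000
    printf("auto local:   %f\n", autoLocal);      // could be anything - reading it is undefined behaviour
}

int main(void)
{
    demo();
    return 0;
}

In practice the "anything" is whatever was last left in that bit of stack, which would explain exactly why my bug was occasional-but-repeatable: the same sequence of prior calls leaves the same garbage behind.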
(OF COURSE I should have initialized it manually - I normally do - but in this one case I failed. I didn't expect it to blow up, though!)
(Google is proving worse than useless for any discussion of this - their current search refuses to show "C" without "C++". It keeps deciding I'm too stupid, and ignores my input even when running in advanced mode.)
Here's the actual source example where it happened. At first I thought there might be a problem with the definitions of MAX and ABS (maybe MAX(ABS,ABS) doesn't always do what you'd expect?) ... but digging with assertions and the debugger, I eventually found it was the missing initialization - that float was getting init'd to a non-zero value VERY occasionally:
float crossedVectorX = ... // generates a float
float crossedVectorY = ... // generates a float
float infitesimal; // no manual init - so its value is indeterminate at this point
float smallPositiveFloat = 2.0 / MAX( ABS(crossedVectorX), ABS(crossedVectorY));
// NB: confirmed with debugger + assertions that smallPositiveFloat was always positive
infitesimal += smallPositiveFloat; // read-modify-write of an indeterminate value
NSAssert( infitesimal >= 0.0, @"This is sometimes NOT TRUE" );
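For completeness, the fix is simply the explicit initialization the code was missing:

float infitesimal = 0.0f; // explicit init - now the += and the assertion are well-defined

And to close the loop on the MAX/ABS suspicion: assuming the usual naive macro definitions (e.g. #define MAX(a,b) ((a) > (b) ? (a) : (b))), MAX(ABS(x), ABS(y)) does evaluate one of its arguments twice - a genuine hazard when the arguments have side effects, but with plain floats like these it's merely wasteful, not the cause of this bug.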