I was commenting on "Why should I always enable compiler warnings?" and pointed out:
In the embedded world, the warnings that worry me most are "possible loss of precision" and "comparison between signed and unsigned" warnings. I find it difficult to grasp how many "programmers" ignore these (in fact, I am not really sure why they are not errors).
Can anyone explain why trying to put a possible quart into a certified pint pot is not treated as an error? Surely it's just a disaster waiting to happen?