Interesting question. A truly random number would not have worked, since it could lie in the range of typical values of many variables, which would violate best practices for fill values. Hence the convention of choosing a number extremely large in absolute magnitude, so it does not conflict with the dynamic range of valid data.

I examined the properties of NC_FILL_FLOAT and NC_FILL_DOUBLE once and found nothing too remarkable about them. Setting them equal makes sense, because users often convert between float and double. So that leaves the question of what single value to choose. The optimal choice, in my opinion, would be a number as close as possible to NC_FLOAT_MAX (or its negative) whose bit pattern is amenable to compression. Since many datasets are replete with missing values, using a number with long runs of identical bits would allow such datasets to be compressed better by most algorithms. The DEFLATE compression of a dataset with NC_FILL_FLOAT is pretty good, because DEFLATE is good and NC_FILL_FLOAT in binary is
9.9692099683868690e+36f = 0 11111001 11100000000000000000000 (sign | exponent | mantissa)
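If anyone wants to double-check that bit pattern, here is a minimal C sketch; the literal is hard-coded (it is the same value netcdf.h assigns to NC_FILL_FLOAT), so no netCDF headers are needed:

```c
#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void)
{
    /* Same literal that netcdf.h uses for NC_FILL_FLOAT */
    float fill = 9.9692099683868690e+36f;
    uint32_t bits;
    memcpy(&bits, &fill, sizeof bits);   /* reinterpret the IEEE-754 encoding */

    /* Print as sign | exponent | mantissa */
    printf("sign=%u exponent=", (unsigned)(bits >> 31));
    for (int i = 30; i >= 23; i--)
        putchar(((bits >> i) & 1u) ? '1' : '0');
    printf(" mantissa=");
    for (int i = 22; i >= 0; i--)
        putchar(((bits >> i) & 1u) ? '1' : '0');
    printf("\nhex=0x%08X\n", bits);
    return 0;
}
```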
They could have chosen a slightly more compressible number, e.g.,
-2.3509885615147286E-38f = 1 00000001 11111111111111111111111
but they didn't, and I'm also curious where NC_FILL_FLOAT came from.
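As a rough (and admittedly degenerate) probe of the compressibility argument, one could DEFLATE a buffer of repeated fill values with zlib and compare the two candidates; link with -lz. A buffer that is all fill values compresses to almost nothing either way, so the interesting case, fill values interleaved with real data, is not captured by this toy:

```c
#include <stdio.h>
#include <stdlib.h>
#include <zlib.h>

/* Compress n copies of one float value with DEFLATE and return the size */
static unsigned long deflate_size(float value, size_t n)
{
    uLong  slen = (uLong)(n * sizeof(float));
    float *src  = malloc(slen);
    uLongf dlen = compressBound(slen);
    Bytef *dst  = malloc(dlen);
    for (size_t i = 0; i < n; i++)
        src[i] = value;                  /* fill the whole buffer with the candidate */
    compress2(dst, &dlen, (const Bytef *)src, slen, Z_BEST_COMPRESSION);
    free(src);
    free(dst);
    return (unsigned long)dlen;
}

int main(void)
{
    size_t n = 1u << 20;                 /* about a million floats of each candidate */
    printf("NC_FILL_FLOAT ( 9.9692099683868690e+36f): %lu bytes\n",
           deflate_size(9.9692099683868690e+36f, n));
    printf("alternative   (-2.3509885615147286e-38f): %lu bytes\n",
           deflate_size(-2.3509885615147286e-38f, n));
    return 0;
}
```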