The typical reason given for using a biased exponent (also known as offset binary) in floating-point numbers is that it makes comparisons easier.
If the fields are arranged so that the sign bit occupies the most significant bit position, the biased exponent the middle bits, and the significand the least significant bits, then the resulting values are ordered properly whether they are interpreted as floating-point values or as integers. The purpose of this is to enable high-speed comparisons between floating-point numbers using fixed-point hardware.
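As a rough illustration of that idea (a minimal C sketch of my own, assuming `float` is IEEE 754 binary32; the `float_bits` helper is just an illustrative name): for two non-negative, non-NaN floats, comparing the raw bit patterns as unsigned integers gives the same result as comparing the values themselves.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Reinterpret a float's bits as a 32-bit unsigned integer.
   Assumes float is IEEE 754 binary32. */
static uint32_t float_bits(float f) {
    uint32_t u;
    memcpy(&u, &f, sizeof u);   /* well-defined, unlike a pointer cast */
    return u;
}

int main(void) {
    float a = 1.5f, b = 2.75f;
    /* For non-negative, non-NaN values the two comparisons agree. */
    printf("float compare:   %d\n", a < b);                          /* 1 */
    printf("integer compare: %d\n", float_bits(a) < float_bits(b));  /* 1 */
    return 0;
}
```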
However, because the sign bit of IEEE 754 floating-point numbers is 1 for negative numbers and 0 for positive numbers, negative floating-point numbers interpreted as unsigned integers compare greater than positive ones. If the convention were reversed, that would not be the case: every positive floating-point number, interpreted as an unsigned integer, would be greater than every negative one.
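To make that concrete, here is a small C sketch of my own (the `float_bits` and `total_order_key` helpers are illustrative names, not from any library): the raw bit pattern of a negative float compares greater than that of a positive float, and the usual fix-up, flipping all bits of negative values and setting the sign bit on non-negative ones, yields a key whose unsigned ordering matches the floating-point ordering for every non-NaN value.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static uint32_t float_bits(float f) {
    uint32_t u;
    memcpy(&u, &f, sizeof u);
    return u;
}

/* Map a float to a key whose *unsigned* ordering matches the
   floating-point ordering for all non-NaN values.  Negative values
   (sign bit set) are flipped entirely, because sign-magnitude orders
   them in reverse; non-negative values just get the sign bit set so
   they land above all the negatives. */
static uint32_t total_order_key(float f) {
    uint32_t u = float_bits(f);
    return (u & 0x80000000u) ? ~u : (u | 0x80000000u);
}

int main(void) {
    float neg = -1.0f, pos = 1.0f;
    /* Raw bit patterns: the negative value compares *greater*. */
    printf("raw: %d\n", float_bits(neg) > float_bits(pos));           /* 1 */
    /* After the fix-up, the ordering matches the floating-point one. */
    printf("key: %d\n", total_order_key(neg) < total_order_key(pos)); /* 1 */
    return 0;
}
```

One caveat with this mapping: -0.0 and +0.0 get distinct keys, whereas IEEE 754 comparison treats them as equal.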
I understand this wouldn't completely trivialize comparisons, because NaN != NaN and must be handled separately (although whether that behavior is even desirable is questionable, as discussed in that question). Regardless, it's strange that this is the reason given for using a biased exponent representation when it is seemingly defeated by the chosen sign convention of the sign-and-magnitude representation.
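For reference, a short sketch (again my own illustration) of why the NaN case must be handled separately: as an integer, the typical quiet-NaN bit pattern orders like any other value, sitting above +Infinity, even though the floating-point comparison must report it unordered.

```c
#include <math.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static uint32_t float_bits(float f) {
    uint32_t u;
    memcpy(&u, &f, sizeof u);
    return u;
}

int main(void) {
    float qnan = NAN, pinf = INFINITY;
    printf("nan == nan:      %d\n", qnan == qnan);                      /* 0 */
    printf("bits(nan) > inf: %d\n", float_bits(qnan) > float_bits(pinf));
    /* The second line typically prints 1: the usual quiet-NaN pattern
       (0x7FC00000) lies numerically above +Infinity (0x7F800000), so a
       pure integer comparison would silently order NaN as "largest". */
    return 0;
}
```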
There is more discussion in the questions "Why do we bias the exponent of a floating-point number?" and "Why IEEE floating point number calculate exponent using a biased form?". The accepted answer to the first even mentions this (emphasis mine):
The IEEE 754 encodings have a convenient property that an order comparison can be performed between two positive non-NaN numbers by simply comparing the corresponding bit strings lexicographically, or equivalently, by interpreting those bit strings as unsigned integers and comparing those integers. This works across the entire floating-point range from +0.0 to +Infinity (and then it's a simple matter to extend the comparison to take sign into account).
I can imagine two reasons: first, using a sign bit of 1 for negative values allows IEEE 754 floating-point values to be defined in the form (-1)^s x 1.f x 2^(e-b), where s is the sign bit, f the fraction, e the biased exponent, and b the bias; and second, the floating-point number corresponding to a bit string of all 0s is equal to +0 instead of -0.
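Both points are easy to check; the following is just an illustrative sketch (the field widths and bias 127 are for single precision, and the decoding formula applies only to normal numbers, zero being a special case with an exponent field of 0 and no implicit leading 1).

```c
#include <math.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    /* Decode 2.75f by hand using (-1)^s x 1.f x 2^(e-b), b = 127. */
    float x = 2.75f;
    uint32_t u;
    memcpy(&u, &x, sizeof u);

    uint32_t s = u >> 31;              /* sign bit           */
    uint32_t e = (u >> 23) & 0xFF;     /* biased exponent    */
    uint32_t f = u & 0x7FFFFF;         /* fraction (23 bits) */

    double value = (s ? -1.0 : 1.0)
                 * (1.0 + f / 8388608.0)        /* 1.f, with 2^23 = 8388608 */
                 * pow(2.0, (int)e - 127);      /* 2^(e - b)                */
    printf("decoded: %g\n", value);             /* 2.75 */

    /* The all-zeros bit pattern decodes to +0.0, not -0.0. */
    uint32_t zero_bits = 0;
    float z;
    memcpy(&z, &zero_bits, sizeof z);
    printf("all zeros: %g, signbit = %d\n", z, signbit(z));   /* 0, 0 */
    return 0;
}
```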
I don't see either of these as particularly meaningful, especially considering the common rationale for using a biased exponent.