According to Wikipedia, the mathematical convention is that unary minus has a lower precedence than exponentiation. Some programming languages follow this, others don't.
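For a concrete illustration, here is a minimal sketch of the two camps; it assumes you have bash and a `python3` interpreter on your PATH:

```bash
# Python follows the mathematical convention: ** binds tighter than unary minus.
python3 -c 'print(-2 ** 2)'   # prints -4, i.e. -(2 ** 2)

# bash does not: unary minus binds tighter than **.
echo $(( -2 ** 2 ))           # prints 4, i.e. (-2) ** 2
```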
But the above article also gives examples of different conventions for mathematical notation used in scientific publications¹; e.g. the precedence of different ways of writing multiplication and division.
You asked: Why?
Well, in most cases there isn't a clear rationale for why particular language designers made particular choices; see the answers to this Q&A. However, we certainly can't justify the position that any particular precedence system is "correct" from a theoretical standpoint.
In general, the guiding principles for PL precedence systems seem to be:
- Try to be consistent with the language's ancestors.
- Try to be consistent with perceived mathematical convention.
- Do what "feels right" at the time.
The results are not consistent.
Fortunately:
- people tend to get used to the quirks of the languages that they use most of the time, and
- the exponentiation operator is not used very often², and even less often with unary minus.
So it doesn't usually matter. (Except when someone gets it wrong in a context that has huge impact or consequences. And even then, there should be processes in place to deal with human error.)
The operator precedence for expression evaluation in bash is documented as being based on C operator precedence. (See `man bash`.) C doesn't have an exponentiation operator, but it does make unary `+` and `-` higher precedence than multiplication and division.

So in order to be consistent with C, the bash implementors needed to put the operator precedence of `**` above `*`, `/` and `%`, and below unary `-`. (Putting `**` above unary `-` goes against the clear intent of C ... which is that unary `-` is above all other arithmetic operators.)
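You can see both halves of that decision in a minimal sketch (assuming a bash version recent enough to support `**`):

```bash
# ** binds tighter than *, / and %: 2 ** 3 * 2 parses as (2 ** 3) * 2.
echo $(( 2 ** 3 * 2 ))    # prints 16

# ...but unary minus binds tighter still: -2 ** 2 parses as (-2) ** 2.
echo $(( -2 ** 2 ))       # prints 4

# Parenthesize explicitly to get the mathematical reading, -(2 ** 2).
echo $(( -(2 ** 2) ))     # prints -4
```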
If your real question is not "why did they do it" but "are the reasons documented", you will probably need to trawl through developer mailing lists, source code repositories and so on for clues. Or maybe try asking the designers ... though they may not remember the reasons accurately.
¹ - If mathematicians can't be consistent about notation, why is it a big deal that programming language designers aren't either?

² - Indeed, many programming languages don't even support an exponentiation operator.