I think that these methods use iteration-based processing to compute the result, and stop only when the difference between the values of two successive iterations falls below a given error constant.
There are iterative methods that converge very quickly to the result of a power operation... so I think they run in nearly constant time.
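To illustrate the kind of stopping criterion I mean, here is a sketch (my own illustration, not the actual `Math.Pow` implementation) using Newton's method for square roots, which stops when two successive iterates differ by less than a tolerance:

```java
// Illustrative only: Newton's method for sqrt(x), stopping when two
// successive iterates differ by less than a fixed tolerance `eps`.
// Real library pow() implementations are more sophisticated, but the
// "iterate until the change is tiny" idea is the same.
public class NewtonSqrt {
    static double sqrtNewton(double x, double eps) {
        double guess = x / 2.0;                // crude initial guess (x > 0 assumed)
        double prev;
        do {
            prev = guess;
            guess = 0.5 * (guess + x / guess); // Newton update step
        } while (Math.abs(guess - prev) > eps);
        return guess;
    }

    public static void main(String[] args) {
        System.out.println(sqrtNewton(2.0, 1e-12)); // close to Math.sqrt(2.0)
    }
}
```

Because Newton's method roughly doubles the number of correct digits per step, only a handful of iterations are ever needed for double precision, which is why such methods feel constant-time.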
This question has a lot of great explanations:
How is Math.Pow() implemented in .NET Framework?
EDIT
I have found a lot of good material to work with on http://math.stackexchange.com.
This one is very interesting, as it explains a way of calculating exponentiation in plain language:
Thoughts
I'm not a math genius, but as far as I can see, the time it takes does not depend much on the values you choose, but rather on the number of precise digits you want. In other words, it does depend on the arguments, but there is an upper bound.
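A toy example of that trade-off (my own sketch, not how any library actually implements it): summing the Taylor series for exp(x) until the next term drops below a requested tolerance. The iteration count grows as the tolerance shrinks, but for a fixed tolerance it is bounded, so the running time is effectively constant:

```java
// Sketch: exp(x) via its Taylor series, stopping once the next term is
// smaller than the requested tolerance `eps`. The number of terms
// depends on the precision asked for, not (much) on x itself.
public class ExpSeries {
    static double expSeries(double x, double eps) {
        double term = 1.0;  // current Taylor term x^n / n!
        double sum  = 1.0;  // running partial sum, starts at the n=0 term
        for (int n = 1; Math.abs(term) > eps; n++) {
            term *= x / n;  // advance to the next term
            sum  += term;
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(expSeries(1.0, 1e-15)); // close to Math.E
    }
}
```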
Also, to support this theory, take a look at this algorithm (implemented by Sun): http://pastebin.com/LDjS5mAR. There are no loops, only ifs. I think that's because the people who implemented it chose a fixed target precision... and then unrolled all the iterations needed to guarantee that precision.
For instance, a loop with a fixed number of iterations can easily be unrolled like this:
for (int it = 0; it < 5; it++)
    a *= a;
Is the same as:
a *= a; a *= a; a *= a; a *= a; a *= a;
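To spell out why they are the same: both versions square `a` five times, so both compute a^(2^5) = a^32 with identical floating-point operations. A quick self-contained check:

```java
// Both the loop and its unrolled form perform the same five squarings,
// i.e. they compute a^(2^5) = a^32 and yield bit-identical results.
public class Unrolled {
    static double looped(double a) {
        for (int it = 0; it < 5; it++)
            a *= a;
        return a;
    }

    static double unrolled(double a) {
        a *= a; a *= a; a *= a; a *= a; a *= a;
        return a;
    }

    public static void main(String[] args) {
        System.out.println(looped(1.1) == unrolled(1.1)); // same result
        System.out.println(looped(2.0));                  // 2^32
    }
}
```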