The question here seems badly motivated. Because exp(x) converges to 1 as x goes to 0, subtracting 1 from a computed exp(x) cancels the leading digits, so at the same floating point precision a direct exp(x)-1 routine retains more significant figures than exp(x) for small x. None of that applies to sqrt(x), which converges to 0 as x goes to 0 and therefore already carries full relative precision. In other words, exp(x)-1 can be made fractionally more precise than exp(x) for small x, but the same is not true for 1-sqrt(x) -- that would in fact get worse, since it moves a value from something near 0 (say 1e-6) to something near 1 (0.999999), discarding relative precision rather than preserving it.
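To make the contrast concrete, here is a small sketch (in Python, chosen just for illustration since the question does not fix a language): exp(x)-1 suffers cancellation that the library's expm1 avoids, while sqrt(x) needs no such treatment.

```python
import math

x = 1e-12

# exp(x) is ~1, so subtracting 1 cancels most of the mantissa:
naive = math.exp(x) - 1

# expm1 computes exp(x) - 1 directly, keeping full relative precision:
better = math.expm1(x)

# sqrt(x) is near 0 for small x, so it already uses the whole mantissa
# for the small quantity; there is no analogous cancellation to avoid:
s = math.sqrt(x)

print(naive, better, s)
```

For x this small, expm1(x) agrees with x to nearly full double precision, while the naive subtraction can be off in the fifth significant digit.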
If, on the other hand, you instead wanted to calculate sqrt(1+x) for very small x (that is, an accurate value of sqrt very near the argument 1), then sqrt(1+x)-1 would be a more accurate floating point computation, and its Taylor series works very well: for |x| < 1e-9, I find that x/2 - x^2/8 + x^3/16 approximates sqrt(1+x)-1 to within an RMS fractional error of 3e-29 (with a maximum of 8e-29 at the edges) -- twice as many digits as are accurate in a double. Even the quadratic approximation x/2 - x^2/8 is probably good enough, with roughly 20 digits of accuracy.
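A quick Python sketch of that cubic approximation, checked against the identity sqrt(1+x) - 1 = expm1(log1p(x)/2), which avoids cancellation for all x > -1 (the function names here are my own, not from any library):

```python
import math

def sqrt1pm1_taylor(x):
    """Cubic Taylor approximation of sqrt(1+x) - 1, for |x| << 1."""
    return x/2 - x*x/8 + x**3/16

def sqrt1pm1_ref(x):
    """Cancellation-free reference via sqrt(1+x) - 1 = expm1(log1p(x)/2)."""
    return math.expm1(0.5 * math.log1p(x))

x = 1e-12
naive = math.sqrt(1 + x) - 1   # 1+x rounds away low bits of x, then the
                               # subtraction cancels the leading digits
print(sqrt1pm1_taylor(x), sqrt1pm1_ref(x), naive)
```

The Taylor and reference values agree to essentially full double precision, while the naive form can lose several digits at this magnitude of x.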