How to calculate the geometric mean along a dimension using PyTorch? Some numbers can be negative. The function must be differentiable.
2 Answers
11
A well-known, reasonably numerically stable way to compute the geometric mean is to go through log space:
import torch

def gmean(input_x, dim):
    # Compute in log space: exp(mean(log(x))) avoids the overflow/underflow
    # of multiplying many values together directly.
    log_x = torch.log(input_x)
    return torch.exp(torch.mean(log_x, dim=dim))

x = torch.Tensor([2.0] * 1000).requires_grad_(True)
print(gmean(x, dim=0))
# tensor(2.0000, grad_fn=<ExpBackward>)
This kind of implementation can be found, for example, in SciPy's `scipy.stats.gmean` (see here), a mature and widely used library.
The implementation above does not handle zeros and negative numbers: `log(0)` evaluates to `-inf` and the log of a negative number is `nan`. Some will argue that the geometric mean is not well-defined for inputs containing negative numbers, at least when not all of them are negative.
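For illustration, here is what those two failure modes look like with the `gmean` defined above:

import torch

x = torch.tensor([2.0, 0.0, 3.0])
print(gmean(x, dim=0))  # tensor(0.) -- log(0) = -inf, and exp(-inf) = 0

x = torch.tensor([2.0, -1.0, 3.0])
print(gmean(x, dim=0))  # tensor(nan) -- log of a negative number is nan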

Berriel
- This solves the problem of underflow by computing in log space, but does not handle negative values, i.e. if `x` contains a negative value then the log will compute to `nan`. I think the only way to handle the negative values will be to treat them as tiny positive values (i.e. `log(negative_something)` = huge negative value). – Bazyli Debowski Aug 25 '21 at 14:51
- To handle negative values you can replace `torch.log(input_x)` with `torch.log(torch.clamp(input_x, min=torch.finfo(input_x.dtype).tiny))` – Bazyli Debowski Aug 25 '21 at 15:03
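For reference, a runnable sketch of that suggestion (the name `gmean_clamped` is just for illustration; clamping zeros and negatives to `tiny` is a heuristic workaround, not a mathematically meaningful geometric mean):

import torch

def gmean_clamped(input_x, dim):
    # tiny is the smallest positive normal value for the dtype, so zeros and
    # negatives become a huge negative number in log space instead of -inf/nan
    tiny = torch.finfo(input_x.dtype).tiny
    log_x = torch.log(torch.clamp(input_x, min=tiny))
    return torch.exp(torch.mean(log_x, dim=dim))

x = torch.tensor([2.0, -1.0, 3.0], requires_grad=True)
print(gmean_clamped(x, dim=0))  # a tiny positive value instead of nan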
- @BazyliDebowski gmean is actually not well-defined for negative values. Some argue that it is well-defined when all of them are negative. This implementation also does not handle 0s, which results in 0. Most will argue that it only makes sense to compute it for positive numbers. – Berriel Aug 25 '21 at 16:12
- I agree negative values don't really make sense for gmean. That being said, the question did specifically state that some values may be negative, so I was attempting to address that. I don't know if the solution in my previous comment would be differentiable though... intuitively I think not, but I haven't worked it out. – Bazyli Debowski Aug 25 '21 at 17:08
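On differentiability: `torch.clamp` is differentiable in the autograd sense, passing a gradient of 1 where the input exceeds the threshold and 0 where it was clamped, so negative entries simply receive zero gradient. A quick check using the `gmean_clamped` sketch above:

x = torch.tensor([2.0, -1.0, 3.0], requires_grad=True)
gmean_clamped(x, dim=0).backward()
print(x.grad)  # nonzero gradients for 2.0 and 3.0, exactly 0 for -1.0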
- @BazyliDebowski oh, the mention of negative numbers is still in there... I remember all this was discussed with the OP in the comments at the time (as the comments were mostly conversational, they are now deleted). Anyway, I'll add a note to my answer, as our comments might be deleted as well. I wouldn't suggest any specific way of handling negative numbers (except when all of them are negative), unless we want to support complex numbers. – Berriel Aug 25 '21 at 17:25
0
`torch.prod()` helps:
import torch

# three random values drawn uniformly from [0, 1)
x = torch.FloatTensor(3).uniform_().requires_grad_(True)
print(x)

# geometric mean: n-th root of the product of the n elements
y = x.prod() ** (1.0 / x.shape[0])
print(y)

y.backward()
print(x.grad)
# tensor([0.5692, 0.7495, 0.1702], requires_grad=True)
# tensor(0.4172, grad_fn=<PowBackward0>)
# tensor([0.2443, 0.1856, 0.8169])
EDIT: what about
y = (x.abs() ** (1.0 / x.shape[0]) * x.sign()).prod()
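A quick sanity check of that EDIT on a tensor with a negative entry (an illustrative sketch, not from the original answer):

import torch

x = torch.tensor([2.0, -1.0, 3.0], requires_grad=True)
n = x.shape[0]
# n-th root of each magnitude, with the original sign reattached
y = (x.abs() ** (1.0 / n) * x.sign()).prod()
print(y)       # about -1.8171: the single negative factor flips the sign
y.backward()
print(x.grad)  # finite gradients; x.sign() itself contributes zero gradient

Note that this variant still multiplies values directly, so the overflow issue raised in the comment below still applies.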

Alexey Birukov
- If you do `x = torch.Tensor([2.0] * 1000).requires_grad_(True)`, you'll get infinity when you do `x.prod()`. I would like a more numerically stable method. – CrabMan Jan 14 '20 at 09:55
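That overflow is easy to reproduce, and the log-space version from the accepted answer avoids it:

import torch

x = torch.Tensor([2.0] * 1000)
print(x.prod())                             # tensor(inf): 2**1000 overflows float32
print(torch.exp(torch.mean(torch.log(x))))  # about 2.0, stable in log space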