In the official PyTorch C++ example, there is this bit:
float Loss = 0, Acc = 0;
for (const auto& batch : loader) {
    auto data = batch.data.to(options.device);
    auto targets = batch.target.to(options.device).view({-1});
    auto output = network->forward(data);
    auto loss = torch::nll_loss(output, targets);
    assert(!std::isnan(loss.template item<float>()));
    auto acc = output.argmax(1).eq(targets).sum();
    Loss += loss.template item<float>();
    Acc += acc.template item<float>();
}
I am curious what this idiom means:
Loss += loss.template item<float>();
Acc += acc.template item<float>();
Why not just use loss.item<float>()? Is this specific to PyTorch?
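For context, here is a minimal standalone sketch of the two call forms outside PyTorch; Tensor and item<T>() below are stand-ins I made up to mimic the shape of the libtorch call, not the real library types:

#include <iostream>

// Stand-in for torch::Tensor; item<T>() mimics the member function template
// being called in the example (not the real libtorch definition).
struct Tensor {
    template <typename T>
    T item() const { return static_cast<T>(42); }
};

// Ordinary function: the object's type is known, so the short call parses.
void concrete(const Tensor& t) {
    std::cout << t.item<float>() << '\n';
}

// Function template: t's type depends on the template parameter T, so the
// parser cannot know that item names a template; `.template` disambiguates.
template <typename T>
void generic(const T& t) {
    std::cout << t.template item<float>() << '\n';
    // t.item<float>() here would parse as (t.item < float) > (), an error.
}

int main() {
    Tensor t;
    concrete(t);  // short form compiles in a non-dependent context
    generic(t);   // dependent context needs the .template form
    return 0;
}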