I found a post here that tries to find an equivalent of tf.nn.softmax_cross_entropy_with_logits in PyTorch, but the answer is still confusing to me.
Here is the TensorFlow 2 code:
import tensorflow as tf
import numpy as np
# a batch of 2 examples with 5 classes
preds = np.array([[.4, 0, 0, 0.6, 0], [.8, 0, 0, 0.2, 0]])
labels = np.array([[0, 0, 0, 1.0, 0], [1.0, 0, 0, 0, 0]])
tf_preds = tf.convert_to_tensor(preds, dtype=tf.float32)
tf_labels = tf.convert_to_tensor(labels, dtype=tf.float32)
loss = tf.nn.softmax_cross_entropy_with_logits(logits=tf_preds, labels=tf_labels)
It gives me the loss as:
<tf.Tensor: shape=(2,), dtype=float32, numpy=array([1.2427604, 1.0636061], dtype=float32)>
Here is the PyTorch code:
import torch
import numpy as np
preds = np.array([[.4, 0, 0, 0.6, 0], [.8, 0, 0, 0.2, 0]])
labels = np.array([[0, 0, 0, 1.0, 0], [1.0, 0, 0, 0, 0]])
torch_preds = torch.tensor(preds).float()
torch_labels = torch.tensor(labels).float()
loss = torch.nn.functional.cross_entropy(torch_preds, torch_labels)
However, it raises:
RuntimeError: 1D target tensor expected, multi-target not supported
It seems that the problem is still unsolved. How can I implement tf.nn.softmax_cross_entropy_with_logits in PyTorch?
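A manual version based on log_softmax is what I think the TF op computes (loss_i = -sum_j labels_ij * log_softmax(preds_i)_j), but I am not sure it is exactly equivalent:

import torch
import torch.nn.functional as F
import numpy as np

preds = np.array([[.4, 0, 0, 0.6, 0], [.8, 0, 0, 0.2, 0]])
labels = np.array([[0, 0, 0, 1.0, 0], [1.0, 0, 0, 0, 0]])

torch_preds = torch.tensor(preds).float()
torch_labels = torch.tensor(labels).float()

# softmax cross-entropy with soft targets, one loss value per example
manual_loss = -(torch_labels * F.log_softmax(torch_preds, dim=1)).sum(dim=1)
print(manual_loss)  # I would expect this to match the TF values above

Is this the right way to do it, or is there a built-in function for it?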
What about tf.nn.sigmoid_cross_entropy_with_logits?
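For the sigmoid case, my guess is that torch.nn.functional.binary_cross_entropy_with_logits with reduction='none' plays the same role (elementwise losses, like the TF op), but I have not verified this:

import torch
import torch.nn.functional as F

# my guess at a counterpart for tf.nn.sigmoid_cross_entropy_with_logits;
# reduction='none' keeps the per-element losses instead of averaging them
logits = torch.tensor([[.4, 0, 0, 0.6, 0], [.8, 0, 0, 0.2, 0]])
targets = torch.tensor([[0, 0, 0, 1.0, 0], [1.0, 0, 0, 0, 0]])
sig_loss = F.binary_cross_entropy_with_logits(logits, targets, reduction='none')
print(sig_loss)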