Can I always replace torch.matmul with Python's built-in @ operator for matrix multiplication? Please assume that I know the difference between torch.matmul, torch.mm, and the other variants. I just want to confirm which of them can be safely replaced by the @ operator without sacrificing speed or any native support from torch.
If it does no harm, I would like to use it extensively in the future.
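For context, here is the kind of quick sanity check I have in mind (a minimal sketch, assuming a recent PyTorch version where `Tensor.__matmul__` dispatches to the same operation as `torch.matmul`):

```python
import torch

# 2-D case: @ and torch.matmul appear to give identical results.
a = torch.randn(2, 3)
b = torch.randn(3, 4)
assert torch.equal(a @ b, torch.matmul(a, b))

# Batched/broadcasting case: torch.matmul broadcasts batch dimensions,
# and @ seems to behave the same way here.
x = torch.randn(5, 2, 3)
y = torch.randn(3, 4)
assert torch.equal(x @ y, torch.matmul(x, y))
assert (x @ y).shape == (5, 2, 4)
```

But I am unsure whether this equivalence is guaranteed in all situations (e.g. autograd, scripting/tracing, or other tensor-like types), which is the point of my question.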