
I have a variable called pts which is shaped [batch, ch, h, w]. This is a heatmap and I want to convert it to 2D coordinates. The goal is pts_o = heatmap_to_pts(pts), where pts_o will be [batch, ch, 2]. I have written this function so far:

def heatmap_to_pts(self, pts):  # pts: [batch, 68, 128, 128]

    pt_num = []

    for i in range(len(pts)):

        pt = pts[i]
        if type(pt) == torch.Tensor:

            # get the indices from the heatmaps
            d = torch.tensor(128)
            m = pt.view(68, -1).argmax(1)
            indices = torch.cat(((m / d).view(-1, 1), (m % d).view(-1, 1)), dim=1)

            # store the indices in a list
            pt_num.append(indices.type(torch.DoubleTensor))

    # trying to convert a list of tensors with grad
    # to a tensor like [batch, 68, 2]
    b = torch.Tensor(68, 2)
    c = torch.cat(pt_num, out=b)  # <- error here
    c = c.reshape(68, 2)

    return c

The error says "cat(): functions with out=... arguments don't support automatic differentiation, but one of the arguments requires grad." It's unable to do the operation because the tensors in pt_num require grad.

How can I convert that list to a tensor?


1 Answer


The error says,

cat(): functions with out=... arguments don't support automatic differentiation, but one of the arguments requires grad.

What that means is that the output of functions such as torch.cat(), which accept an out= kwarg, cannot be used as input to the autograd engine (which performs automatic differentiation): writing the result into a preallocated out= tensor is not something autograd can track.
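For illustration, here is a minimal, self-contained repro of that behaviour (the tensors a, b, and out here are made up for this example, not the ones from your function):

import torch

a = torch.randn(68, 2, requires_grad=True)  # requires grad
b = torch.randn(68, 2)                      # does not require grad
out = torch.empty(136, 2)                   # preallocated out= tensor

try:
    torch.cat([a, b], out=out)              # out= plus a grad input -> RuntimeError
except RuntimeError as e:
    print(e)  # cat(): functions with out=... arguments don't support ...

c = torch.cat([a, b])                       # fine: c.requires_grad is True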

The reason is that at least one of the tensors in your Python list pt_num has requires_grad=True, and autograd cannot track a write into a preallocated out= tensor.
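You can confirm which tensors in your list carry grad with a quick check (pt_num being your list):

print([t.requires_grad for t in pt_num])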

In your code, the following line is (logically) troublesome:

c = torch.cat(pt_num, out=b) 

The return value of torch.cat(), irrespective of whether you use the out= kwarg, is the concatenation of the input tensors along the given dimension.

So, the tensor c is already the concatenated version of the individual tensors in pt_num. Using out=b is redundant. Thus, you can simply get rid of the out=b and everything should be fine:

c = torch.cat(pt_num)
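One caveat: torch.cat(pt_num) concatenates along dim 0, so the c.reshape(68, 2) from your code only works for a batch size of 1. If you need [batch, 68, 2] for larger batches, a sketch using torch.stack instead (an alternative to the cat-and-reshape approach, written as a standalone function with the 68/128 constants taken from your shapes) could look like this:

import torch

def heatmap_to_pts(pts):  # pts: [batch, 68, 128, 128]
    pt_num = []
    for pt in pts:  # pt: [68, 128, 128]
        d = pt.shape[-1]               # 128, the heatmap width
        m = pt.view(68, -1).argmax(1)  # flat argmax per channel
        # recover (row, col) from the flat index; note that argmax
        # itself is not differentiable, so no gradient flows through it
        indices = torch.stack((m // d, m % d), dim=1).double()
        pt_num.append(indices)
    return torch.stack(pt_num)  # [batch, 68, 2]

pts = torch.randn(4, 68, 128, 128)
print(heatmap_to_pts(pts).shape)  # torch.Size([4, 68, 2])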