Hello everyone, I'm a beginner at PyTorch. I just defined a very simple linear regression model, but unfortunately my program raises an error. I searched for this error but was unable to resolve the problem. Can someone help me? Thank you in advance. My program is as follows:

import torch
import numpy as np
import torch.nn as nn

x_values = [i for i in range(11)]
x_train = np.array(x_values, dtype=np.float32) 
x_train = x_train.reshape(-1, 1)  

y_values = [2*i + 1 for i in range(len(x_values))]
y_train = np.array(y_values, dtype=np.float32)
y_train = y_train.reshape(-1, 1)

class LinearRegressionModel(nn.Module):    
    def __init__(self, input_dim, output_dim):
        super(LinearRegressionModel, self).__init__()  
        self.linear = nn.Linear(input_dim, output_dim)  

    def forward(self, x):  
        out = self.linear(x)
        return out

input_dim = 1
output_dim = 1
model = LinearRegressionModel(input_dim, output_dim)
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model.to(device)  

epochs = 1000 
learning_rate = 0.01
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate) 
criterion = nn.MSELoss() 

for epoch in range(epochs):
    epoch += 1
    inputs = torch.from_numpy(x_train).to(device)  
    labels = torch.from_numpy(y_train).to(device)  
    optimizer.zero_grad()
    outputs = model(inputs)
    loss = criterion(outputs, labels)
    loss.backward()  
    optimizer.step()  
    if epoch % 50 == 0:
        print('epoch {}, loss {}'.format(epoch, loss.item()))

torch.save(model.state_dict(), 'model.pkl')
print(model.load_state_dict(torch.load('model.pkl')))

predicted = model(torch.from_numpy(x_train).requires_grad_()).data.numpy()
print('predicted:', predicted)


I have a rough understanding of the cause of the error: all tensors involved in a computation should live on the same device. I intended to train the linear regression model on the GPU, and I put both the model and the training inputs on the GPU, but the program still reports an error. The error occurs at the following line:

predicted = model(torch.from_numpy(x_train).requires_grad_()).data.numpy()
Traceback (most recent call last):
  File "F:\pytorch_Study\My_program.py", line 71, in <module>
    predicted = model(torch.from_numpy(x_train).requires_grad_()).data.numpy()
  File "D:\Anaconda\envs\pytorch\lib\site-packages\torch\nn\modules\module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "F:\pytorch_Study\My_program.py", line 27, in forward
    out = self.linear(x)
  File "D:\Anaconda\envs\pytorch\lib\site-packages\torch\nn\modules\module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\Anaconda\envs\pytorch\lib\site-packages\torch\nn\modules\linear.py", line 103, in forward
    return F.linear(input, self.weight, self.bias)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument mat1 in method wrapper_addmm)
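
From the traceback, my guess is that the model's parameters are on cuda:0 while the tensor I create from x_train at prediction time is still on the CPU. The following is only my attempt at a fix (moving the inference input to the model's device and copying the result back to the CPU before converting it to NumPy, using the device variable defined earlier in my program); is something like this the correct way to run inference, or is something else wrong?

with torch.no_grad():
    inputs = torch.from_numpy(x_train).to(device)  # move the input to the same device as the model
    predicted = model(inputs).cpu().numpy()        # copy back to the CPU before converting to NumPy
print('predicted:', predicted)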
  • Does this answer your question? [RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! when resuming training](https://stackoverflow.com/questions/66091226/runtimeerror-expected-all-tensors-to-be-on-the-same-device-but-found-at-least) – paisanco Nov 05 '22 at 03:25
  • Unfortunately, the problem in the link you mentioned is not quite the same as the one I encountered. I checked the question in that link before posting my question, and I don't think I made the mistake described there. – Leon Brant Dec 05 '22 at 02:06
