As the error says, R-squared is not well-defined for single predictions; in fact, scoring for single predictions does not make much sense in general, either.
Nevertheless, if you must do it for other (e.g. programming) reasons, you can use other performance metrics for regression, like RMSE or MAE (which, by definition, are equal for single predictions):
from sklearn.metrics import mean_squared_error, mean_absolute_error
# dummy data - must be passed as (single-element) array-likes, not bare scalars, otherwise an error is thrown
y_true = [3]
y_pred = [2.5]
# RMSE:
mean_squared_error(y_true, y_pred, squared=False)
# 0.5
# MAE:
mean_absolute_error(y_true, y_pred)
# 0.5
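As a side note, more recent scikit-learn versions (1.4 or later, if I am not mistaken) deprecate the squared=False argument in favour of a dedicated root_mean_squared_error function, which gives the same result here.
For completeness, here is a minimal sketch of what happens if you do try R-squared on a single prediction; the exact behaviour (a warning plus nan, as shown, or an outright error) may depend on your scikit-learn version:
from sklearn.metrics import r2_score
r2_score(y_true, y_pred)
# UndefinedMetricWarning: R^2 score is not well-defined with less than two samples.
# nan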
FWIW, RMSE & MAE make much more sense as performance measures in such predictive settings than R-squared; for details, see the last part of my own answer in scikit-learn & statsmodels - which R-squared is correct?
Notice that these quantities should be reported as-is, and not as percentages (again, computing any percentage quantity for a single prediction does not make any sense); you may have already noticed that, in the special case of single predictions, they have a very natural interpretation, i.e. they are simply the absolute difference between the prediction and the ground truth (here 0.5).
Having clarified that, you could of course make your code slightly more efficient by simply taking the absolute difference between the prediction and the ground truth:
import numpy as np
# absolute difference, element-wise (plain Python lists do not support subtraction)
np.abs(np.array(y_true) - np.array(y_pred))
# array([0.5])
resting assured that what you actually compute is the RMSE/MAE, and not something ad hoc.
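If you want to double-check the equivalence programmatically, a quick sanity check (reusing the y_true, y_pred and imports from above) confirms that all three quantities coincide for a single prediction:
rmse = mean_squared_error(y_true, y_pred, squared=False)
mae = mean_absolute_error(y_true, y_pred)
diff = np.abs(np.array(y_true) - np.array(y_pred))[0]
rmse == mae == diff
# True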