
While practicing the simple linear regression model I got this error; I think there is something wrong with my data set.

Here is my data set:

Here is independent variable X:

Here is dependent variable Y:

Here is X_train:

Here is Y_train:

This is the error body:

ValueError: Expected 2D array, got 1D array instead:
array=[ 7.   8.4 10.1  6.5  6.9  7.9  5.8  7.4  9.3 10.3  7.3  8.1].
Reshape your data either using array.reshape(-1, 1) if your data has a single feature or array.reshape(1, -1) if it contains a single sample.

And this is my code:

import pandas as pd
import matplotlib as pt

#import data set

dataset = pd.read_csv('Sample-data-sets-for-linear-regression1.csv')
x = dataset.iloc[:, 1].values
y = dataset.iloc[:, 2].values

#Splitting the dataset into the Training set and Test set
from sklearn.cross_validation import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size= 0.2, random_state=0)

#Linear Regression

from sklearn.linear_model import LinearRegression

regressor = LinearRegression()
regressor.fit(x_train,y_train)

y_pred = regressor.predict(x_test)

Thank you

CodeMonkey123
danyialKhan

8 Answers


You need to give both the fit and predict methods 2D arrays. Your x_train and x_test are currently only one-dimensional. What the error message suggests should work:

x_train= x_train.reshape(-1, 1)
x_test = x_test.reshape(-1, 1)

This uses NumPy's reshape to transform your array. For example, x = [1, 2, 3] would be transformed to a matrix x' = [[1], [2], [3]] (the -1 tells NumPy to infer the number of rows from the length of the array and the remaining dimensions, and the 1 is the number of columns, giving an n x 1 matrix where n is the input length).
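
As a quick illustration with a made-up array, this is what the reshape does to the shape:

import numpy as np

x = np.array([1, 2, 3])
print(x.shape)           # (3,)  -- a 1D array, which fit() rejects

x_2d = x.reshape(-1, 1)  # -1 lets NumPy infer the number of rows
print(x_2d.shape)        # (3, 1) -- n samples, 1 feature
print(x_2d)              # [[1]
                         #  [2]
                         #  [3]]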

Questions about reshape have been answered in the past; this one, for example, explains what reshape(-1, 1) fully means: What does -1 mean in numpy reshape? (Some of the other answers below explain this very well too.)

CodeMonkey123
  • You should not reshape `y_train`, since you want it as a 1D array. – George Pipis Sep 24 '20 at 08:34
  • I have no idea why this is needed ;( – WestCoastProjects Dec 02 '22 at 06:13
  • @WestCoastProjects If you read the documentation for [sklearn fit](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html#sklearn.linear_model.LinearRegression.fit), the input X and Y must both be 2D arrays, to quote the documentation X is an: ` {array-like, sparse matrix} of shape (n_samples, n_features)` – CodeMonkey123 Jan 04 '23 at 10:18

A lot of times when doing linear regression problems, people like to envision this graph:

[figure: one-variable input linear regression]

On the input side, we have an X such as X = [1, 2, 3, 4, 5].

However, many regression problems have multidimensional inputs. Consider the prediction of housing prices: it's not one attribute that determines housing prices, it's multiple features (e.g. number of rooms, location, etc.).

If you look at the documentation for fit, you will see the description of the expected input. It tells us that the rows consist of the samples while the columns consist of the features.

However, consider what happens when we have one feature as our input. Then we need an n x 1 dimensional input where n is the number of samples and the 1 column represents our only feature.

Why does the array.reshape(-1, 1) suggestion work? The -1 means: choose whatever number of rows works, given the number of columns provided (here, 1).
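
A small sketch (with made-up numbers) of the two reshapes mentioned in the error message:

import numpy as np

x = np.array([1, 2, 3, 4, 5])

# single feature, many samples: n rows x 1 column
print(x.reshape(-1, 1).shape)   # (5, 1)

# single sample, many features: 1 row x n columns
print(x.reshape(1, -1).shape)   # (1, 5)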

Neel Sandell

If you look at the documentation of scikit-learn's LinearRegression:

fit(X, y, sample_weight=None)

X : numpy array or sparse matrix of shape [n_samples,n_features]

predict(X)

X : {array-like, sparse matrix}, shape = (n_samples, n_features)

As you can see, X has two dimensions, whereas your x_train and x_test clearly have one. As suggested, add:

x_train = x_train.reshape(-1, 1)
x_test = x_test.reshape(-1, 1)

before fitting the model and predicting.
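
Put together, a minimal sketch of the fixed fit/predict calls (the numbers here are made up for illustration):

import numpy as np
from sklearn.linear_model import LinearRegression

# made-up single-feature data
x_train = np.array([7.0, 8.4, 10.1, 6.5])      # shape (4,) -- would raise the ValueError
y_train = np.array([1.1, 1.5, 2.0, 0.9])

x_train = x_train.reshape(-1, 1)               # shape (4, 1) -- accepted by fit
x_test = np.array([6.9, 7.9]).reshape(-1, 1)   # shape (2, 1)

regressor = LinearRegression()
regressor.fit(x_train, y_train)                # y_train can stay 1D
y_pred = regressor.predict(x_test)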

CodeMonkey123

Use

y_pred = regressor.predict([[x_test]])
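
Note that this only applies when x_test is a single scalar value: wrapping it in [[...]] turns it into a 1 x 1 2D array (one sample, one feature). A minimal sketch with made-up data:

import numpy as np
from sklearn.linear_model import LinearRegression

# made-up single-feature training data
regressor = LinearRegression()
regressor.fit(np.array([[7.0], [8.4], [10.1]]), np.array([1.1, 1.5, 2.0]))

x_test = 6.5                            # a single scalar value
y_pred = regressor.predict([[x_test]])  # [[6.5]] has shape (1, 1)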
desertnaut

I would suggest reshaping X at the beginning, before you split it into train and test sets:

import pandas as pd
import matplotlib as pt

#import data set

dataset = pd.read_csv('Sample-data-sets-for-linear-regression1.csv')
x = dataset.iloc[:, 1].values
y = dataset.iloc[:, 2].values
# Here is the trick
x = x.reshape(-1,1)

#Splitting the dataset into the Training set and Test set
from sklearn.model_selection import train_test_split  # sklearn.cross_validation has been removed in newer versions
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size= 0.2, random_state=0)

#Linear Regression

from sklearn.linear_model import LinearRegression

regressor = LinearRegression()
regressor.fit(x_train,y_train)

y_pred = regressor.predict(x_test)
George Pipis

This is what I use

X_train = X_train.values.reshape(-1, 1)
y_train = y_train.values.reshape(-1, 1)
X_test = X_test.values.reshape(-1, 1)
y_test = y_test.values.reshape(-1, 1)
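
This assumes X_train, y_train, etc. are still pandas Series or DataFrames (e.g. the columns were selected without .values); the .values attribute converts them to NumPy arrays before reshaping. A minimal sketch under that assumption, with made-up column names:

import pandas as pd
from sklearn.model_selection import train_test_split

# hypothetical DataFrame with a single feature column 'x' and a target 'y'
df = pd.DataFrame({'x': [7.0, 8.4, 10.1, 6.5, 6.9], 'y': [1.1, 1.5, 2.0, 0.9, 1.0]})

X_train, X_test, y_train, y_test = train_test_split(df['x'], df['y'],
                                                    test_size=0.2, random_state=0)

# X_train is a pandas Series here, so .values gives the underlying NumPy array
X_train = X_train.values.reshape(-1, 1)
X_test = X_test.values.reshape(-1, 1)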
Rami Alloush

This is the solution

regressor.predict([[x_test]])

And for polynomial regression:

regressor_2.predict(poly_reg.fit_transform([[x_test]]))
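
Here poly_reg and regressor_2 are assumed to follow the usual scikit-learn polynomial-regression pattern: a PolynomialFeatures transformer plus a LinearRegression fitted on the transformed features, with x_test again being a single scalar value. A minimal sketch under those assumptions:

import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

# made-up single-feature training data
X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([1.2, 4.1, 9.3, 15.8])

poly_reg = PolynomialFeatures(degree=2)
X_poly = poly_reg.fit_transform(X)        # adds the bias and x^2 columns

regressor_2 = LinearRegression()
regressor_2.fit(X_poly, y)

x_test = 2.5                              # a single scalar value
y_pred = regressor_2.predict(poly_reg.fit_transform([[x_test]]))  # transform() would also work here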
szelesaron

Modify

regressor.fit(x_train,y_train)
y_pred = regressor.predict(x_test)

to

regressor.fit(x_train.values.reshape(-1,1), y_train)
y_pred = regressor.predict(x_test.values.reshape(-1,1))

(drop the .values if x_train and x_test are already NumPy arrays, as they are in the question's code).
Rakshit p