147

I want to apply scaling (using StandardScaler() from sklearn.preprocessing) to a pandas DataFrame. The following code returns a numpy array, so I lose all the column names and indices. This is not what I want.

features = df[["col1", "col2", "col3", "col4"]]
autoscaler = StandardScaler()
features = autoscaler.fit_transform(features)

A "solution" I found online is:

features = features.apply(lambda x: autoscaler.fit_transform(x))

It appears to work, but it leads to a DeprecationWarning:

/usr/lib/python3.5/site-packages/sklearn/preprocessing/data.py:583: DeprecationWarning: Passing 1d arrays as data is deprecated in 0.17 and will raise ValueError in 0.19. Reshape your data either using X.reshape(-1, 1) if your data has a single feature or X.reshape(1, -1) if it contains a single sample.

I therefore tried:

features = features.apply(lambda x: autoscaler.fit_transform(x.reshape(-1, 1)))

But this gives:

Traceback (most recent call last):
  File "./analyse.py", line 91, in <module>
    features = features.apply(lambda x: autoscaler.fit_transform(x.reshape(-1, 1)))
  File "/usr/lib/python3.5/site-packages/pandas/core/frame.py", line 3972, in apply
    return self._apply_standard(f, axis, reduce=reduce)
  File "/usr/lib/python3.5/site-packages/pandas/core/frame.py", line 4081, in _apply_standard
    result = self._constructor(data=results, index=index)
  File "/usr/lib/python3.5/site-packages/pandas/core/frame.py", line 226, in __init__
    mgr = self._init_dict(data, index, columns, dtype=dtype)
  File "/usr/lib/python3.5/site-packages/pandas/core/frame.py", line 363, in _init_dict
    dtype=dtype)
  File "/usr/lib/python3.5/site-packages/pandas/core/frame.py", line 5163, in _arrays_to_mgr
    arrays = _homogenize(arrays, index, dtype)
  File "/usr/lib/python3.5/site-packages/pandas/core/frame.py", line 5477, in _homogenize
    raise_cast_failure=False)
  File "/usr/lib/python3.5/site-packages/pandas/core/series.py", line 2885, in _sanitize_array
    raise Exception('Data must be 1-dimensional')
Exception: Data must be 1-dimensional

How do I apply scaling to the pandas dataframe, leaving the dataframe intact? Without copying the data if possible.

Louic

11 Answers

130

You could convert the DataFrame to a numpy array using as_matrix(). Example on a random dataset:

Edit: changed as_matrix() to values (it doesn't change the result), per the last sentence of the as_matrix() docs:

Generally, it is recommended to use ‘.values’.

import pandas as pd
import numpy as np  # for the random integer example

df = pd.DataFrame(np.random.randint(0.0, 100.0, size=(10, 4)),
                  index=range(10, 20),
                  columns=['col1', 'col2', 'col3', 'col4'],
                  dtype='float64')

Note, indices are 10-19:

In [14]: df.head(3)
Out[14]:
    col1  col2  col3  col4
10     3    38    86    65
11    98     3    66    68
12    88    46    35    68

Now fit_transform the DataFrame to get the scaled_features array:

from sklearn.preprocessing import StandardScaler
scaled_features = StandardScaler().fit_transform(df.values)

In [15]: scaled_features[:3,:] #lost the indices
Out[15]:
array([[-1.89007341,  0.05636005,  1.74514417,  0.46669562],
       [ 1.26558518, -1.35264122,  0.82178747,  0.59282958],
       [ 0.93341059,  0.37841748, -0.60941542,  0.59282958]])

Assign the scaled data to a DataFrame (note: use the index and columns keyword arguments to keep your original indices and column names):

scaled_features_df = pd.DataFrame(scaled_features, index=df.index, columns=df.columns)

In [17]: scaled_features_df.head(3)
Out[17]:
        col1      col2      col3      col4
10 -1.890073  0.056360  1.745144  0.466696
11  1.265585 -1.352641  0.821787  0.592830
12  0.933411  0.378417 -0.609415  0.592830

Edit 2:

Came across the sklearn-pandas package. It's focused on making scikit-learn easier to use with pandas. sklearn-pandas is especially useful when you need to apply more than one type of transformation to column subsets of the DataFrame, a more common scenario. It's well documented, but here is how you'd achieve the transformation we just performed:

from sklearn_pandas import DataFrameMapper

mapper = DataFrameMapper([(df.columns, StandardScaler())])
scaled_features = mapper.fit_transform(df.copy())  # transform a copy so the original df is untouched
scaled_features_df = pd.DataFrame(scaled_features, index=df.index, columns=df.columns)
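As a commenter points out below, newer versions of sklearn-pandas can also return a DataFrame directly via the df_out flag. A minimal sketch, assuming a sklearn-pandas version that supports df_out (each column is mapped separately here so the output names stay simple):

mapper = DataFrameMapper([([col], StandardScaler()) for col in df.columns], df_out=True)
scaled_features_df = mapper.fit_transform(df.copy())  # already a DataFrame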
Kevin

  • Thank you for the answer, but the problem still is that the rows are renumbered when the new dataframe is created from the array. The original dataframe does not contain consecutively numbered rows because some of them have been removed. I suppose I could also add an index=[...] keyword with the old index values. If you update your answer accordingly I can accept it. – Louic Mar 01 '16 at 13:46
  • I hope the edit helps; I think your intuition about setting the index values from the first df was correct. The numbers I used are consecutive (just wanted to show you can reset them to anything, and range(10,20) was the best I could think of), but it will work with any random index on the original df. HTH! – Kevin Mar 01 '16 at 14:04
  • I see that you have the last step as converting the output of the `DataFrameMapper` to a `DataFrame`... so the output is not *already* a `DataFrame`? – WestCoastProjects Nov 20 '17 at 13:40
  • @StephenBoesch: Yes, the output is not a `DataFrame`. If you want to get it directly from the mapper, you have to use the `df_out=True` option for `DataFrameMapper`. – Nerxis Nov 26 '20 at 10:36
  • @Kevin You'd probably want to use `df.to_numpy()` these days instead of `df.values`, as recommended in the [docs](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.values.html). – constantstranger Mar 03 '23 at 22:03
34

import pandas as pd
from sklearn.preprocessing import StandardScaler

df = pd.read_csv('your file here')
ss = StandardScaler()
df_scaled = pd.DataFrame(ss.fit_transform(df), columns=df.columns)

df_scaled will be the 'same' dataframe, now containing the scaled values.
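Note that this rebuilds the frame with a fresh default index; if the original index matters (as in the question), pass it through as well. A minimal extension of the line above:

df_scaled = pd.DataFrame(ss.fit_transform(df), columns=df.columns, index=df.index)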

cody
Joe
16

Reassigning back to df.values preserves both index and columns.

df.values[:] = StandardScaler().fit_transform(df)
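A quick self-contained sketch of this approach. One assumption to flag: it relies on df.values returning a writable view of a single homogeneous float block, which holds in classic pandas but may not under copy-on-write (pandas 2.x with CoW enabled):

import pandas as pd
import numpy as np
from sklearn.preprocessing import StandardScaler

df = pd.DataFrame(np.random.rand(5, 3), index=range(10, 15), columns=['a', 'b', 'c'])
df.values[:] = StandardScaler().fit_transform(df)  # writes in place; index and columns untouched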
Jim
11

features = ["col1", "col2", "col3", "col4"]
autoscaler = StandardScaler()
df[features] = autoscaler.fit_transform(df[features])
zzHQzz

  • While this code may answer the question, providing additional context regarding how and/or why it solves the problem would improve the answer's long-term value. – Piotr Labunski Mar 19 '20 at 10:03
  • This now throws: "SettingWithCopyError: A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_indexer,col_indexer] = value instead". – Vega Apr 08 '21 at 10:25
  • @Vega how do you deal with this? – jajamaharaja Oct 10 '21 at 16:12
  • This is the reason I came here, but I have not found an answer yet. I asked this new question about it: https://stackoverflow.com/questions/72232036/replace-entire-pandas-dataframe-after-scaling-without-warning – Quinten C May 13 '22 at 15:38
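Regarding the SettingWithCopy error mentioned in the comments above, a minimal workaround sketch, assuming df itself owns its data and using the .loc assignment that the warning message suggests:

features = ["col1", "col2", "col3", "col4"]
df.loc[:, features] = StandardScaler().fit_transform(df[features])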
8

This worked for me with MinMaxScaler for getting the array values back into the original dataframe. It should work with StandardScaler as well.

data_scaled = pd.DataFrame(scaled_features, index=df.index, columns=df.columns)

where data_scaled is the new data frame, scaled_features is the array after normalization, and df is the original dataframe whose index and columns we need back.
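For completeness, a minimal end-to-end sketch of this pattern (assuming df is an all-numeric DataFrame):

import pandas as pd
from sklearn.preprocessing import MinMaxScaler

scaled_features = MinMaxScaler().fit_transform(df)  # plain numpy array
data_scaled = pd.DataFrame(scaled_features, index=df.index, columns=df.columns)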

5

Works for me:

from sklearn.preprocessing import StandardScaler

cols = list(train_df_x_num.columns)
scaler = StandardScaler()
train_df_x_num[cols] = scaler.fit_transform(train_df_x_num[cols])
5

Since scikit-learn version 1.2, estimators can return a DataFrame, keeping the column names. The output can be configured per estimator by calling the set_output method, or globally by setting set_config(transform_output="pandas").

See Release Highlights for scikit-learn 1.2 - Pandas output with set_output API

Example for set_output():

from sklearn.preprocessing import StandardScaler
scaler = StandardScaler().set_output(transform="pandas")

Example for set_config():

from sklearn import set_config
set_config(transform_output="pandas")
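And a minimal end-to-end sketch (assuming scikit-learn >= 1.2 is installed; the column names and index values are illustrative):

import pandas as pd
from sklearn.preprocessing import StandardScaler

df = pd.DataFrame({"col1": [1.0, 2.0, 3.0], "col2": [10.0, 20.0, 30.0]}, index=[10, 11, 12])

scaler = StandardScaler().set_output(transform="pandas")
scaled_df = scaler.fit_transform(df)  # a DataFrame; index and column names are preserved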
DataJanitor
2

This is what I did:

X.Column1 = StandardScaler().fit_transform(X.Column1.values.reshape(-1, 1))
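The reshape is needed because a single column comes back as a 1-D array. Selecting with a list of column names keeps the data 2-D, which avoids the reshape; a small variant, assuming the column is named Column1:

X[["Column1"]] = StandardScaler().fit_transform(X[["Column1"]])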
Fredrik
0

You can mix multiple data types in scikit-learn using Neuraxle:

Option 1: discard the row names and column names

from sklearn.preprocessing import StandardScaler
from neuraxle.pipeline import Pipeline
from neuraxle.base import NonFittableMixin, BaseStep

class PandasToNumpy(NonFittableMixin, BaseStep):
    def transform(self, data_inputs, expected_outputs): 
        return data_inputs.values

pipeline = Pipeline([
    PandasToNumpy(),
    StandardScaler(),
])

Then, you proceed as you intended:

features = df[["col1", "col2", "col3", "col4"]]  # ... your df data
pipeline, scaled_features = pipeline.fit_transform(features)

Option 2: keep the original column names and row names

You could even do this with a wrapper, like so:

import pandas as pd
from sklearn.preprocessing import StandardScaler
from neuraxle.base import MetaStepMixin, BaseStep

class PandasValuesChangerOf(MetaStepMixin, BaseStep):
    def transform(self, data_inputs, expected_outputs): 
        new_data_inputs = self.wrapped.transform(data_inputs.values)
        new_data_inputs = self._merge(data_inputs, new_data_inputs)
        return new_data_inputs

    def fit_transform(self, data_inputs, expected_outputs): 
        self.wrapped, new_data_inputs = self.wrapped.fit_transform(data_inputs.values)
        new_data_inputs = self._merge(data_inputs, new_data_inputs)
        return self, new_data_inputs

    def _merge(self, data_inputs, new_data_inputs): 
        new_data_inputs = pd.DataFrame(
            new_data_inputs,
            index=data_inputs.index,
            columns=data_inputs.columns
        )
        return new_data_inputs

df_scaler = PandasValuesChangerOf(StandardScaler())

Then, you proceed as you intended:

features = df[["col1", "col2", "col3", "col4"]]  # ... your df data
df_scaler, scaled_features = df_scaler.fit_transform(features)
Guillaume Chevalier
-1

You can try this code; it will give you a DataFrame with indices:

import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.datasets import load_boston  # boston housing dataset

boston = load_boston()  # load the dataset once
dt = boston.data
col = boston.feature_names

# Make a dataframe
df = pd.DataFrame(data=dt, columns=col)

# define a method to scale data, looping thru the columns, and passing a scaler
def scale_data(data, columns, scaler):
    for col in columns:
        data[col] = scaler.fit_transform(data[col].values.reshape(-1, 1))
    return data

# specify a scaler, and call the method on boston data
scaler = StandardScaler()
df_scaled = scale_data(df, col, scaler)

# view first 10 rows of the scaled dataframe
df_scaled[0:10]
Hassan K
  • Thanks for your answer, but the solutions given in the accepted answer are much better. Also, it can be done with dask-ml: `from dask_ml.preprocessing import StandardScaler; StandardScaler().fit_transform(df)` – Louic Jan 30 '20 at 13:21
-1

You could directly assign a numpy array to a data frame by using slicing.

from sklearn.preprocessing import StandardScaler
features = df[["col1", "col2", "col3", "col4"]]
autoscaler = StandardScaler()
features[:] = autoscaler.fit_transform(features.values)
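Note that if features was created by selecting columns from df (as in the question), this slice assignment targets a copy and pandas may emit a SettingWithCopy warning. A minimal variant that takes an explicit copy first to make the intent unambiguous:

features = df[["col1", "col2", "col3", "col4"]].copy()  # explicit copy silences the warning
features[:] = StandardScaler().fit_transform(features.values)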
abysslover