
I have a pandas dataframe with mixed type columns, and I'd like to apply sklearn's min_max_scaler to some of the columns. Ideally, I'd like to do these transformations in place, but haven't figured out a way to do that yet. I've written the following code that works:

import pandas as pd
import numpy as np
from sklearn import preprocessing

scaler = preprocessing.MinMaxScaler()

dfTest = pd.DataFrame({'A':[14.00,90.20,90.95,96.27,91.21],'B':[103.02,107.26,110.35,114.23,114.68], 'C':['big','small','big','small','small']})
min_max_scaler = preprocessing.MinMaxScaler()

def scaleColumns(df, cols_to_scale):
    for col in cols_to_scale:
        df[col] = pd.DataFrame(min_max_scaler.fit_transform(pd.DataFrame(df[col])), columns=[col])
    return df

dfTest

    A   B   C
0    14.00   103.02  big
1    90.20   107.26  small
2    90.95   110.35  big
3    96.27   114.23  small
4    91.21   114.68  small

scaled_df = scaleColumns(dfTest,['A','B'])
scaled_df

A   B   C
0    0.000000    0.000000    big
1    0.926219    0.363636    small
2    0.935335    0.628645    big
3    1.000000    0.961407    small
4    0.938495    1.000000    small

I'm curious if this is the preferred/most efficient way to do this transformation. Is there a way I could use df.apply that would be better?

I'm also surprised I can't get the following code to work:

bad_output = min_max_scaler.fit_transform(dfTest['A'])

If I pass an entire dataframe to the scaler it works:

dfTest2 = dfTest.drop('C', axis = 1)
good_output = min_max_scaler.fit_transform(dfTest2)
good_output

I'm confused why passing a series to the scaler fails. In my full working code above I had hoped to just pass a series to the scaler then set the dataframe column = to the scaled series.
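For reference, the failure comes down to dimensionality: `fit_transform` expects a 2-D array of shape `(n_samples, n_features)`, while a Series is 1-D (recent scikit-learn versions raise a `ValueError` for 1-D input). A minimal sketch of the reshape workaround:

```python
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

dfTest = pd.DataFrame({'A': [14.00, 90.20, 90.95, 96.27, 91.21]})
scaler = MinMaxScaler()

# dfTest['A'] is 1-D; reshape into a single-feature 2-D array first
scaled = scaler.fit_transform(dfTest['A'].values.reshape(-1, 1))
```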

flyingmeatball
  • Does it work if you do this `bad_output = min_max_scaler.fit_transform(dfTest['A'].values)`? Accessing the `values` attribute returns a numpy array; for some reason, sometimes the scikit-learn API will correctly call the right method so that pandas returns a numpy array, and sometimes it doesn't. – EdChum Jul 09 '14 at 06:57
  • Pandas' dataframes are quite complicated objects with conventions that do not match scikit-learn's conventions. If you convert everything to NumPy arrays, scikit-learn gets a lot easier to work with. – Fred Foo Jul 09 '14 at 12:49
  • @EdChum - `bad_output = min_max_scaler.fit_transform(dfTest['A'].values)` did not work either. @larsmans - yeah I had thought about going down this route, it just seems like a hassle. I don't know if it is a bug or not that Pandas can pass a full dataframe to a sklearn function, but not a series. My understanding of a dataframe was that it is a dict of series. Reading in the "Python for Data Analysis" book, it states that pandas is built on top of numpy to make it easy to use in NumPy-centric applications. – flyingmeatball Jul 09 '14 at 14:16

9 Answers


I am not sure if previous versions of pandas prevented this, but now the following snippet works perfectly for me and produces exactly what you want, without having to use apply:

>>> import pandas as pd
>>> from sklearn.preprocessing import MinMaxScaler


>>> scaler = MinMaxScaler()

>>> dfTest = pd.DataFrame({'A':[14.00,90.20,90.95,96.27,91.21],
                           'B':[103.02,107.26,110.35,114.23,114.68],
                           'C':['big','small','big','small','small']})

>>> dfTest[['A', 'B']] = scaler.fit_transform(dfTest[['A', 'B']])

>>> dfTest
          A         B      C
0  0.000000  0.000000    big
1  0.926219  0.363636  small
2  0.935335  0.628645    big
3  1.000000  0.961407  small
4  0.938495  1.000000  small
LetsPlayYahtzee
    Neat! A more generalized version `df[df.columns] = scaler.fit_transform(df[df.columns])` – citynorman Aug 31 '17 at 17:33
  • I know this is a delayed comment from original date, but why is there two square brackets in dfTest[['A', 'B']]? I can see it doesn't work with single bracket, but couldn't understand the reason. – Rajesh Mappu Feb 03 '18 at 04:50
    @RajeshThevar The outer brackets are pandas' typical selector brackets, telling pandas to select a column from the dataframe. The inner brackets indicate a list. You're passing a list to the pandas selector. If you just use single brackets - with one column name followed by another, separated by a comma - pandas interprets this as if you're trying to select a column from a dataframe with multi-level columns (a MultiIndex) and will throw a keyerror. – ken Feb 07 '18 at 23:00
  • To add to @ken's answer: if you want to see exactly how pandas implements this indexing logic, and why a tuple of values is interpreted differently than a list, you can look at how DataFrames implement the `__getitem__` method. Specifically, you can open your ipython and run `pd.DataFrame.__getitem__??`; after you import pandas as pd, of course ;) – LetsPlayYahtzee Feb 09 '18 at 10:58
  • This works great, however when I try to do `scaler.fit()` and `scaler.transform()` on two separate lines I'm getting the dreaded `SettingWithCopyWarning`. Anyone have any idea why? – Adam Stewart Jun 06 '18 at 18:31
    A practical note: for those using train/test data splits, you'll want to only fit on your training data, not your testing data. – David J. Sep 21 '18 at 19:48
    `df[df.columns] = scaler.fit_transform(df[df.columns])` -- perfect @citynorman! – satsumas Oct 10 '18 at 12:31
    To scale all but the timestamps column, combine with `columns =df.columns.drop('timestamps') df[df.columns] = scaler.fit_transform(df[df.columns]` – intotecho Feb 01 '19 at 05:51
    @DavidJ. can you provide a reference for why? – compguy24 Apr 04 '19 at 17:27
    Correction of @intotecho's comment. You should do `columns = df.columns.drop('timestamps')` and `df[columns] = scaler.fit_transform(df[columns])`. It should be `columns` in the square brackets, not `df.columns` – JolonB May 07 '20 at 22:03
  • How to inverse a value in this way? – HFX Dec 03 '20 at 15:31
  • Can use `df[:] = scaler.fit_transform(df)` if scaling the entire dataset. – ssp Jan 12 '22 at 21:06
  • @compguy24 The test and validation datasets should not have access to any data from training dataset to prevent the bias, that is why. – Mr. Panda Jul 17 '22 at 19:58
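Two follow-ups raised in the comments - fitting only on the training split, and inverting the scaling - could be sketched like this (illustrative, not part of the original answer):

```python
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

train = pd.DataFrame({'A': [14.00, 90.20, 90.95], 'B': [103.02, 107.26, 110.35]})
test = pd.DataFrame({'A': [96.27, 91.21], 'B': [114.23, 114.68]})

scaler = MinMaxScaler()
train[['A', 'B']] = scaler.fit_transform(train[['A', 'B']])  # fit on train only
test[['A', 'B']] = scaler.transform(test[['A', 'B']])        # reuse train min/max

# inverse_transform maps scaled values back to the original units
restored = scaler.inverse_transform(test[['A', 'B']])
```

Note that scaled test values can fall outside [0, 1], since the min/max come from the training data only.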
scale = MinMaxScaler()
df = pd.DataFrame(scale.fit_transform(df.values), columns=df.columns, index=df.index)

This should work without deprecation warnings.

athlonshi

Like this?

dfTest = pd.DataFrame({
           'A':[14.00,90.20,90.95,96.27,91.21],
           'B':[103.02,107.26,110.35,114.23,114.68], 
           'C':['big','small','big','small','small']
         })
dfTest[['A','B']] = dfTest[['A','B']].apply(
                           lambda x: MinMaxScaler().fit_transform(x))
dfTest

    A           B           C
0   0.000000    0.000000    big
1   0.926219    0.363636    small
2   0.935335    0.628645    big
3   1.000000    0.961407    small
4   0.938495    1.000000    small
Indy9000

As mentioned in pir's comment, the .apply(lambda el: scale.fit_transform(el)) approach will produce the following warning:

DeprecationWarning: Passing 1d arrays as data is deprecated in 0.17 and will raise ValueError in 0.19. Reshape your data either using X.reshape(-1, 1) if your data has a single feature or X.reshape(1, -1) if it contains a single sample.

Converting your columns to numpy arrays should do the job (I prefer StandardScaler):

from sklearn.preprocessing import StandardScaler
scale = StandardScaler()

dfTest[['A','B','C']] = scale.fit_transform(dfTest[['A','B','C']].as_matrix())

-- Edit Nov 2018 (Tested for pandas 0.23.4)--

As Rob Murray mentions in the comments, in the current (v0.23.4) version of pandas, .as_matrix() produces a FutureWarning. It should therefore be replaced by .values:

from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()

scaler.fit_transform(dfTest[['A','B']].values)

-- Edit May 2019 (Tested for pandas 0.24.2)--

As joelostblom mentions in the comments, "Since 0.24.0, it is recommended to use .to_numpy() instead of .values."

Updated example:

import pandas as pd
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
dfTest = pd.DataFrame({
               'A':[14.00,90.20,90.95,96.27,91.21],
               'B':[103.02,107.26,110.35,114.23,114.68],
               'C':['big','small','big','small','small']
             })
dfTest[['A', 'B']] = scaler.fit_transform(dfTest[['A','B']].to_numpy())
dfTest
          A         B      C
0 -1.995290 -1.571117    big
1  0.436356 -0.603995  small
2  0.460289  0.100818    big
3  0.630058  0.985826  small
4  0.468586  1.088469  small
Low Yield Bond
    use `.values` in place of `.as_matrix()` as `as_matrix()` now gives a `FutureWarning`. – Rob Murray Nov 16 '18 at 20:41
    Since `0.24.0`, it is [recommended to use `.to_numpy()` instead of `.values`](https://pandas.pydata.org/pandas-docs/stable/whatsnew/v0.24.0.html#accessing-the-values-in-a-series-or-index). – joelostblom May 27 '19 at 18:16

You can do it using pandas only:

dfTest = pd.DataFrame({'A':[14.00,90.20,90.95,96.27,91.21],
                       'B':[103.02,107.26,110.35,114.23,114.68],
                       'C':['big','small','big','small','small']})
df = dfTest[['A', 'B']]
df_norm = (df - df.min()) / (df.max() - df.min())
print(df_norm)
print(pd.concat((df_norm, dfTest.C), axis=1))

          A         B
0  0.000000  0.000000
1  0.926219  0.363636
2  0.935335  0.628645
3  1.000000  0.961407
4  0.938495  1.000000
          A         B      C
0  0.000000  0.000000    big
1  0.926219  0.363636  small
2  0.935335  0.628645    big
3  1.000000  0.961407  small
4  0.938495  1.000000  small
CT Zhu
    I know that I can do it just in pandas, but I may want to eventually apply a different sklearn method that isn't as easy to write myself. I'm more interested in figuring out why applying to a series doesn't work as I expected than I am in coming up with a strictly simpler solution. My next step will be to run a RandomForestRegressor, and I want to make sure I understand how Pandas and sklearn work together. – flyingmeatball Jul 09 '14 at 04:11
    **This answer is dangerous** because `df.max() - df.min()` can be 0, leading to an exception. Moreover, `df.min()` is computed twice which is inefficient. Note that `df.ptp()` is equivalent to `df.max() - df.min()`. – Asclepius Oct 15 '18 at 01:47
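Regarding the zero-range caveat above, a guarded variant (the constant-column handling here is my own addition, not from the answer) replaces a zero range with 1, so constant columns map to 0 instead of NaN:

```python
import pandas as pd

df = pd.DataFrame({'A': [14.00, 90.20, 90.95, 96.27, 91.21],
                   'const': [5.0] * 5})

# Guard against constant columns: a zero range would divide by zero
rng = (df.max() - df.min()).replace(0, 1)
df_norm = (df - df.min()) / rng
```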

I know it's a very old question, but still:

Instead of using single brackets (dfTest['A']), use double brackets (dfTest[['A']]).

i.e.: min_max_scaler.fit_transform(dfTest[['A']]).

I believe this will give the desired result.
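The reason double brackets work is dimensionality: dfTest['A'] is a 1-D Series, while dfTest[['A']] is a one-column 2-D DataFrame, which matches the (n_samples, n_features) shape the scaler expects. A quick check:

```python
import pandas as pd

dfTest = pd.DataFrame({'A': [14.00, 90.20, 90.95, 96.27, 91.21]})

series = dfTest['A']    # 1-D: shape (5,)
frame = dfTest[['A']]   # 2-D: shape (5, 1)
```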

WAN

(Tested for pandas 1.0.5)

Based on @athlonshi's answer (which raises ValueError: could not convert string to float: 'big' on column C), a full working example without warnings:

import pandas as pd
from sklearn.preprocessing import MinMaxScaler
scale = MinMaxScaler()

df = pd.DataFrame({
           'A':[14.00,90.20,90.95,96.27,91.21],
           'B':[103.02,107.26,110.35,114.23,114.68], 
           'C':['big','small','big','small','small']
         })
print(df)
df[["A","B"]] = pd.DataFrame(scale.fit_transform(df[["A","B"]].values), columns=["A","B"], index=df.index)
print(df)

       A       B      C
0  14.00  103.02    big
1  90.20  107.26  small
2  90.95  110.35    big
3  96.27  114.23  small
4  91.21  114.68  small
          A         B      C
0  0.000000  0.000000    big
1  0.926219  0.363636  small
2  0.935335  0.628645    big
3  1.000000  0.961407  small
4  0.938495  1.000000  small
Ricardo Rendich
  • it should be "scale = MinMaxScaler()", instead of "scale = preprocessing.MinMaxScaler()" – yts61 Aug 21 '22 at 22:55

Using set_output(transform='pandas') works on scikit-learn >= 1.2.

import pandas as pd
import numpy as np
from sklearn.preprocessing import MinMaxScaler

scaler = MinMaxScaler().set_output(transform='pandas') # set_output works from version 1.2

dfTest = pd.DataFrame({'A':[14.00,90.20,90.95,96.27,91.21],
                       'B':[103.02,107.26,110.35,114.23,114.68], 
                       'C':['big','small','big','small','small']})

dfTest[['A', 'B']] = scaler.fit_transform(dfTest[['A', 'B']])
dfTest.head()
Erick Hernández

I tried applying min_max_scaler.fit_transform() to multiple columns of a pd.DataFrame and was getting the following message:

ValueError: Expected 2D array, got 1D array instead:
array=[0.31428571 0.32142857 0.288... 0.46428571]
Reshape your data either using array.reshape(-1, 1) if your data has a single feature...

My data really had only one feature (dimension) and so the following approach worked:

from sklearn import preprocessing

columns_to_normalize = ['a', 'b']
min_max_scaler = preprocessing.MinMaxScaler()

for col in columns_to_normalize:
    df[col] = min_max_scaler.fit_transform(df[col].values.reshape(-1, 1))
pink_demon