
I am running a snippet that I borrowed from the scikit-learn official website to plot a learning curve.

My code is pretty simple, like the following:

import matplotlib.pyplot as plt
from sklearn.model_selection import learning_curve
from sklearn.model_selection import ShuffleSplit
from sklearn.metrics import r2_score
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split


from lightgbm import LGBMRegressor

lgb = LGBMRegressor()
std = StandardScaler()
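# df and `features` are defined elsewhere (UCI Parkinson's telemonitoring data; see the comments below)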
x = std.fit_transform(df[features])
y = df['total_UPDRS']

title = lgb
cv = ShuffleSplit(n_splits=5, test_size=0.4, random_state=0)
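# plot_learning_curve is the helper from scikit-learn's learning-curve example (see the comments below);
# note the ShuffleSplit above is never passed on, since cv=3 is given explicitly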
plot_learning_curve(lgb, title, x, y, cv=3, ylim=(0.0, 1.01), n_jobs=16)
plt.show()

I am running this on a 16 vCPU machine with 60 GB of memory. The process spiked for a few minutes and then just died with no measurable activity. I don't know what went wrong with the setup, because I can output the graph on my MacBook Pro's local Anaconda installation (it just takes 10-15 minutes to run). What am I doing wrong?

  • Can you recover any error messages (to get an indication of where the code crashes/stops)? It is impossible to reproduce your example, as df is missing. Also, the line `title = lgb` is suspicious (do you really want to set the title of the figure to an object of type `LGBMRegressor`?) – Eolmar Apr 05 '18 at 07:25
  • The data set is simply the Parkinson's telemonitoring set from the UCI data repository, and it's very small (5,876 rows x 20+ columns). The `title = lgb` works fine when I run it with AdaBoost in a Google Colaboratory notebook [https://colab.research.google.com/drive/1xf__2fI2xGOr4kv_YS7c5CX3TDqNLdpT]; it reproduces the content of the regressor in the title nicely. That's what I don't understand, as Colaboratory has the same engine behind it. – Ghostintheshell Apr 05 '18 at 07:28
  • What is the `plot_learning_curve` function? How is `features` defined? – Eolmar Apr 05 '18 at 07:39
  • The function is simply a helper based on scikit-learn's learning_curve example [http://scikit-learn.org/stable/auto_examples/model_selection/plot_learning_curve.html#sphx-glr-auto-examples-model-selection-plot-learning-curve-py]. `features` is just a typical numpy array that any scikit-learn estimator can take. – Ghostintheshell Apr 05 '18 at 07:43

1 Answer


UPDATE

I am able to run the code below with no problems in both a GCE VM instance and a Google Cloud Datalab instance, using both Python 2 and Python 3. However, I think there is some issue going on with the lightgbm package: if I set n_jobs=1 it runs quite fast (less than 1 minute), but if I set n_jobs to e.g. 16, or whatever the number of available cores is, it gets super slow (10-15 minutes). Maybe it would be worth opening an issue in the GitHub repo to find out about this.
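
For reference, here is a minimal sketch (my addition, not part of the original test) that times learning_curve with both settings on synthetic data, so the slowdown can be reproduced without the Parkinson's files. It pins LightGBM's own threads with n_jobs=1, on the assumption that the contention is between the outer cross-validation workers and LightGBM's internal threading:

import time
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import learning_curve
from lightgbm import LGBMRegressor

# synthetic stand-in, roughly the shape of the Parkinson's telemonitoring data
X, y = make_regression(n_samples=5000, n_features=20, random_state=0)

# time the cross-validated learning curve with 1 worker vs 16 workers
for outer_jobs in (1, 16):
    start = time.time()
    learning_curve(LGBMRegressor(n_jobs=1), X, y, cv=3, n_jobs=outer_jobs,
                   train_sizes=np.linspace(0.1, 1.0, 5))
    print("learning_curve n_jobs=%d: %.1f s" % (outer_jobs, time.time() - start))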

(By the way: note that I'm not using the %matplotlib inline command in Datalab; it doesn't look like it's needed.)

import matplotlib.pyplot as plt
from sklearn.model_selection import learning_curve
from sklearn.model_selection import ShuffleSplit
from sklearn.metrics import r2_score
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
import pandas as pd
import numpy as np

!pip install lightgbm

from lightgbm import LGBMRegressor

def plot_learning_curve(estimator, title, X, y, ylim=None, cv=None,
                        n_jobs=1, train_sizes=np.linspace(.1, 1.0, 5)):

    plt.figure()
    plt.title(title)
    if ylim is not None:
        plt.ylim(*ylim)
    plt.xlabel("Training examples")
    plt.ylabel("Score")
    train_sizes, train_scores, test_scores = learning_curve(
        estimator, X, y, cv=cv, n_jobs=n_jobs, train_sizes=train_sizes)
    train_scores_mean = np.mean(train_scores, axis=1)
    train_scores_std = np.std(train_scores, axis=1)
    test_scores_mean = np.mean(test_scores, axis=1)
    test_scores_std = np.std(test_scores, axis=1)
    plt.grid()

    plt.fill_between(train_sizes, train_scores_mean - train_scores_std,
                     train_scores_mean + train_scores_std, alpha=0.1,
                     color="r")
    plt.fill_between(train_sizes, test_scores_mean - test_scores_std,
                     test_scores_mean + test_scores_std, alpha=0.1, 
                     color="g")
    plt.plot(train_sizes, train_scores_mean, 'o-', color="r",
             label="Training score")
    plt.plot(train_sizes, test_scores_mean, 'o-', color="g",
             label="Cross-validation score")

    plt.legend(loc="best")
    return plt

# Parkinson's data set from https://archive.ics.uci.edu/ml/datasets/parkinsons,
# downloaded locally (pd.DataFrame.from_csv is deprecated, so use read_csv)
df = pd.read_csv('parkinsons.data')

lgb = LGBMRegressor(min_data=1,random_state=5, n_jobs=1)
std = StandardScaler()
x = std.fit_transform(df[['MDVP:Fo(Hz)','MDVP:Fhi(Hz)']])
y = df['status']

title = lgb
cv = ShuffleSplit(n_splits=5, test_size=0.4, random_state=0)
plt = plot_learning_curve(lgb, title, x, y, cv=3, ylim=(0.0, 1.01), n_jobs=1)
plt.show()

If you are using a VM and not Jupyter notebooks or similar: in my case I was accessing the machine over SSH, so there is no user interface. With plt.show() it doesn't crash, but it doesn't actually show anything either. To prove it works alright, instead of plt.show() I added plt.savefig("filename.png"), which successfully created a filename.png in the same folder as my .py file.

I imported matplotlib this way (following this thread) to avoid display errors:

import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
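
With the Agg backend in place, the savefig pattern described above looks like this (a short sketch; the filename is arbitrary):

plt = plot_learning_curve(lgb, title, x, y, cv=3, ylim=(0.0, 1.01), n_jobs=1)
plt.savefig("filename.png")  # written next to the .py file; no display needed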

Also, after getting an error claiming the data set was too small, I changed lgb = LGBMRegressor() to lgb = LGBMRegressor(min_data=1).
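
For what it's worth, min_data is a LightGBM alias for min_data_in_leaf; in the scikit-learn wrapper the documented parameter name is min_child_samples (to the best of my knowledge), so an equivalent call would be:

# min_data is an alias for min_data_in_leaf; min_child_samples is the sklearn-API name
lgb = LGBMRegressor(min_child_samples=1, random_state=5, n_jobs=1)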

  • I have the %matplotlib inline statement added at the beginning of the notebook, so it should display the chart correctly without any issue. I only run into the hang when I run it with xgboost or lightgbm. I later changed the setting to n_jobs=1 and the problem went away, and that's where I am puzzled again. Does it mean I cannot run on multiple cores in a virtual environment? Same problem when I run it locally on my MacBook Pro. – Ghostintheshell Apr 23 '18 at 12:06
  • You are actually using this: https://cloud.google.com/datalab/? Or when you say 'I am running on a 16 vCPU with 60GB memory. ' what exactly do you mean? – VictorGGl Apr 23 '18 at 12:10
  • I tried both. I ran it on a VM, in Colaboratory, and in a Datalab instance. For the VM and Datalab, I set up 16 vCPUs with no GPU. – Ghostintheshell Apr 23 '18 at 13:04
  • The notebook I shared with you is the one I generated using colaboratory. – Ghostintheshell Apr 23 '18 at 13:06
  • @Ghostintheshell I updated the answer. There is indeed something going on with the n_jobs parameter, but it seems to be due to the lightgbm package, nothing to do with Datalab/VMs. – VictorGGl Apr 25 '18 at 10:05
  • Thanks for looking into this issue. So where and what should I report in connection with this case? – Ghostintheshell Apr 25 '18 at 10:30
  • @Ghostintheshell you're welcome! You could go to the lightgbm GitHub repo here: https://github.com/Microsoft/LightGBM and there, under the Issues tab at the top, open a "New issue". You should add the code snippet and describe the issue: mainly that setting n_jobs=1 works fast and it gets really slow when setting a higher number. Add that you have tried it in different environments (Cloud Datalab, VM instances, your MacBook...). – VictorGGl Apr 26 '18 at 10:17
  • Thanks. Will do when I am back to my desk – Ghostintheshell Apr 26 '18 at 10:19