UPDATE
I am able to run the code below with no problems on both a GCE VM instance and a Google Cloud Datalab instance, using both Python 2 and Python 3. However, I think there is an issue with the lightgbm
package: with n_jobs=1 it runs quite fast (under a minute), but with n_jobs set to, e.g., 16 (or whatever the number of available cores) it becomes very slow (10-15 minutes). It might be worth opening an issue in the GitHub repo to find out what's going on.
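For anyone who wants to reproduce the slowdown, a minimal timing sketch along these lines should do (the data here is random and purely for illustration):
import time
import numpy as np
from lightgbm import LGBMRegressor

# Random toy regression data, just to compare fit times
X = np.random.rand(1000, 10)
y = np.random.rand(1000)

for n_jobs in (1, 16):
    model = LGBMRegressor(random_state=5, n_jobs=n_jobs)
    start = time.time()
    model.fit(X, y)
    print("n_jobs=%d: %.2f s" % (n_jobs, time.time() - start))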
(By the way, note that I'm not using the %matplotlib inline command in Datalab; it doesn't seem to be needed.)
import matplotlib.pyplot as plt
from sklearn.model_selection import learning_curve
from sklearn.model_selection import ShuffleSplit
from sklearn.metrics import r2_score
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
import pandas as pd
import numpy as np
!pip install lightgbm
from lightgbm import LGBMRegressor
def plot_learning_curve(estimator, title, X, y, ylim=None, cv=None,
                        n_jobs=1, train_sizes=np.linspace(.1, 1.0, 5)):
    plt.figure()
    plt.title(title)
    if ylim is not None:
        plt.ylim(*ylim)
    plt.xlabel("Training examples")
    plt.ylabel("Score")
    train_sizes, train_scores, test_scores = learning_curve(
        estimator, X, y, cv=cv, n_jobs=n_jobs, train_sizes=train_sizes)
    train_scores_mean = np.mean(train_scores, axis=1)
    train_scores_std = np.std(train_scores, axis=1)
    test_scores_mean = np.mean(test_scores, axis=1)
    test_scores_std = np.std(test_scores, axis=1)
    plt.grid()
    plt.fill_between(train_sizes, train_scores_mean - train_scores_std,
                     train_scores_mean + train_scores_std, alpha=0.1,
                     color="r")
    plt.fill_between(train_sizes, test_scores_mean - test_scores_std,
                     test_scores_mean + test_scores_std, alpha=0.1,
                     color="g")
    plt.plot(train_sizes, train_scores_mean, 'o-', color="r",
             label="Training score")
    plt.plot(train_sizes, test_scores_mean, 'o-', color="g",
             label="Cross-validation score")
    plt.legend(loc="best")
    return plt
# Local copy of the Parkinsons dataset from
# https://archive.ics.uci.edu/ml/datasets/parkinsons
df = pd.read_csv('parkinsons.data')
lgb = LGBMRegressor(min_data=1, random_state=5, n_jobs=1)
std = StandardScaler()
x = std.fit_transform(df[['MDVP:Fo(Hz)', 'MDVP:Fhi(Hz)']])
y = df['status']
title = str(lgb)  # use the estimator repr as the plot title
cv = ShuffleSplit(n_splits=5, test_size=0.4, random_state=0)
plt = plot_learning_curve(lgb, title, x, y, cv=cv, ylim=(0.0, 1.01), n_jobs=1)
plt.show()
If using a VM and not Jupyter notebooks or similar:
In my case I was accessing the machine over SSH, so there is no display: adding plt.show() doesn't crash, but it doesn't show anything either. To verify the plot is actually generated, I replaced plt.show() with plt.savefig("filename.png"), which successfully created filename.png in the same folder as my .py file.
I imported matplotlib this way (following this thread) to avoid display errors:
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
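As a self-contained smoke test (the figure content here is arbitrary), this runs fine over SSH with no display attached:
import matplotlib
matplotlib.use('Agg')            # must be set before importing pyplot
import matplotlib.pyplot as plt

plt.plot([0, 1, 2], [0, 1, 4], 'o-')
plt.title("Headless smoke test")
plt.savefig("filename.png")      # written to disk instead of opening a window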
Also, after getting an error claiming the dataset was too small, I changed lgb = LGBMRegressor() to lgb = LGBMRegressor(min_data=1).
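For reference, min_data is LightGBM's raw-parameter alias for the scikit-learn-style min_child_samples, so these two calls should be equivalent:
from lightgbm import LGBMRegressor

# Relax the minimum-samples-per-leaf constraint for tiny datasets
lgb = LGBMRegressor(min_data=1)            # raw-parameter alias, as used above
lgb = LGBMRegressor(min_child_samples=1)   # sklearn-style equivalent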