Using the iris data set from sklearn, I split the data, fit a Perceptron, and record scores in a dictionary that maps the sample size used to fit the model (key) to a tuple of (training score, test score) (value).
Since I run the loop 3 times, this gives 3 dictionaries. How can I find the average of the scores over the 3 iterations? I tried storing the dictionaries in a list and averaging, but it did not work.
For example, if the three dictionaries are
{21: (0.85, 0.82), 52: (0.80, 0.62), 73: (0.82, 0.45), 94: (0.81, 0.78)}
{21: (0.95, 0.91), 52: (0.80, 0.89), 73: (0.84, 0.87), 94: (0.79, 0.41)}
{21: (0.809, 0.83), 52: (0.841, 0.77), 73: (0.84, 0.44), 94: (0.79, 0.33)}
the output should be {21: (0.870, 0.853), 52: ...},
where the first element of the value for key 21 is (0.85 + 0.95 + 0.809) / 3 and the second is (0.82 + 0.91 + 0.83) / 3.
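To make the intended computation concrete, here is a minimal sketch of the averaging I am after, using the three example dictionaries above hard-coded into a list (`trial_results` and `averaged` are names I made up for illustration):

```python
# Collect each trial's dict in a list, then average per key and per tuple position.
trial_results = [
    {21: (0.85, 0.82), 52: (0.80, 0.62), 73: (0.82, 0.45), 94: (0.81, 0.78)},
    {21: (0.95, 0.91), 52: (0.80, 0.89), 73: (0.84, 0.87), 94: (0.79, 0.41)},
    {21: (0.809, 0.83), 52: (0.841, 0.77), 73: (0.84, 0.44), 94: (0.79, 0.33)},
]

# For every sample size, average the j-th tuple element across all trials.
averaged = {
    size: tuple(sum(d[size][j] for d in trial_results) / len(trial_results)
                for j in range(2))
    for size in trial_results[0]
}
print(averaged)
```

This assumes every trial dictionary has exactly the same keys, which holds here because `train_size` is fixed, so `int(prop * len(X_train))` gives the same sizes each trial.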
import numpy as np
import pandas as pd
from sklearn.datasets import load_iris
from sklearn.linear_model import Perceptron
from sklearn.model_selection import train_test_split
iris = load_iris()
props = [0.2, 0.5, 0.7, 0.9]
df = pd.DataFrame(data=np.c_[iris['data'], iris['target']],
                  columns=iris['feature_names'] + ['target'])
y = df['target']
X = df.drop(columns='target')

# number of trials
for trial in range(3):
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, train_size=0.7)
    results = {}
    for prop in props:  # renamed from i, which shadowed the outer loop variable
        size = int(prop * len(X_train))
        ix = np.random.choice(X_train.index, size=size, replace=False)
        sampleX = X_train.loc[ix]
        sampleY = y_train.loc[ix]
        # apply model
        modelP = Perceptron(tol=1e-3)
        modelP.fit(sampleX, sampleY)
        train_score = modelP.score(sampleX, sampleY)
        test_score = modelP.score(X_test, y_test)
        # store in dictionary keyed by sample size
        results[size] = (train_score, test_score)
    print(results)
Also, if someone knows statistics: is there a way to find the standard error of the scores over the trials and print it for each sample size (dictionary key)?
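To clarify what I mean by standard error: something like the sketch below, which computes the standard error of the mean (sample standard deviation with `ddof=1`, divided by the square root of the number of trials) per key and per tuple position. The list `trial_results` and the choice of `ddof=1` are my assumptions; `scipy.stats.sem` would give the same numbers.

```python
import numpy as np

# Standard error of the mean (SEM) per sample size, pooled over trials.
trial_results = [
    {21: (0.85, 0.82), 52: (0.80, 0.62), 73: (0.82, 0.45), 94: (0.81, 0.78)},
    {21: (0.95, 0.91), 52: (0.80, 0.89), 73: (0.84, 0.87), 94: (0.79, 0.41)},
    {21: (0.809, 0.83), 52: (0.841, 0.77), 73: (0.84, 0.44), 94: (0.79, 0.33)},
]

sem = {}
for size in trial_results[0]:
    scores = np.array([d[size] for d in trial_results])  # shape (n_trials, 2)
    # sample standard deviation (ddof=1) divided by sqrt(n_trials)
    sem[size] = tuple(scores.std(axis=0, ddof=1) / np.sqrt(len(scores)))
print(sem)
```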