
Using the iris data set from sklearn, I am splitting the data, applying a Perceptron, and recording scores in a dictionary that maps the sample size used to fit the model (key) to the corresponding training and test scores as a tuple (value).

This gives 3 dictionaries because I run the loop 3 times. How can I find the average of the scores over the 3 iterations? I tried storing the dictionaries in a list and averaging, but it did not work.

For example, if the dictionaries are

{21: (0.85, 0.82), 52: (0.80, 0.62), 73: (0.82, 0.45), 94: (0.81, 0.78)}
{21: (0.95, 0.91), 52: (0.80, 0.89), 73: (0.84, 0.87), 94: (0.79, 0.41)}
{21: (0.809, 0.83), 52: (0.841, 0.77), 73: (0.84, 0.44), 94: (0.79, 0.33)}

The output should be {21: (0.869, 0.853), 52: ...}, where the first element of the value for key 21 is (0.85 + 0.95 + 0.809) / 3 and the second is (0.82 + 0.91 + 0.83) / 3.

import numpy as np
import pandas as pd
from sklearn.datasets import load_iris
from sklearn.linear_model import Perceptron
from sklearn.model_selection import train_test_split

score_list = []
shape_list = []  # note: `score_list = shape_list = []` would bind both names to the same list
iris = load_iris()
props=[0.2,0.5,0.7,0.9]
df = pd.DataFrame(data= np.c_[iris['data'], iris['target']], columns= iris['feature_names'] + ['target'])
y=df[list(df.loc[:,df.columns.values =='target'])]
X=df[list(df.loc[:,df.columns.values !='target'])]

# number of trials
for i in range(3):
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, train_size=0.7)
    results = {}
    for prop in props:  # avoid reusing the outer loop variable `i`
        size = int(prop * len(X_train))
        ix = np.random.choice(X_train.index, size=size, replace = False)
        sampleX = X_train.loc[ix]
        sampleY = y_train.loc[ix]
        #apply model
        modelP = Perceptron(tol=1e-3)
        modelP.fit(sampleX, sampleY)
        train_score = modelP.score(sampleX,sampleY)
        test_score = modelP.score(X_test,y_test)
        #store in dictionary
        results[size] = (train_score, test_score)

    print(results)

Also, for anyone who knows statistics: is there a way to find the standard error of the scores over the trials and print it for each sample size (key of the dictionary)?

Trenton McKinney
freshman_2021

2 Answers

  • Update your existing loop to save each results dict into a list, rl
  • Load rl into a dataframe, since you're already using pandas
  • Expand the columns of tuples into separate columns
  • Use .agg to get the metrics
  • Tested with python 3.8 and pandas 1.3.1
    • f-strings (e.g. f'TrS{c}', f'TeS{c}') require python >= 3.6

Updates to Existing Code

# select columns for X and y
y = df.loc[:, 'target']
X = df.loc[:, iris['feature_names']]

# number of trials
rl = list()  # add: save results to a list
for i in range(3):
    ...
    results = {}
    for i in props:
        ...
        ...
    rl.append(results)  # add: append results

New code to get metrics

  • The metrics are collected as a list of tuples rather than a tuple of tuples, because a tuple is immutable once created: tuples can be appended to an existing list, but not to an existing tuple.
    • Therefore, it's easier to use defaultdict to build a list of tuples per key, and then convert each value to a tuple with map.
    • k[3:] assumes the column names always have the number starting at index 3 (e.g. 'TrS21').
from collections import defaultdict

# convert rl to a dataframe
rl = [{21: (0.5714285714285714, 0.6888888888888889), 52: (0.6153846153846154, 0.7111111111111111), 73: (0.7123287671232876, 0.6222222222222222), 94: (0.7127659574468085, 0.6)}, {21: (0.6190476190476191, 0.6444444444444445), 52: (0.6923076923076923, 0.6444444444444445), 73: (0.3698630136986301, 0.35555555555555557), 94: (0.7978723404255319, 0.7777777777777778)}, {21: (0.8095238095238095, 0.5555555555555556), 52: (0.7307692307692307, 0.5555555555555556), 73: (0.7534246575342466, 0.5777777777777777), 94: (0.6170212765957447, 0.7555555555555555)}]
df = pd.DataFrame(rl)

# display(df)
                                         21                                        52                                         73                                        94
0  (0.5714285714285714, 0.6888888888888889)  (0.6153846153846154, 0.7111111111111111)   (0.7123287671232876, 0.6222222222222222)                 (0.7127659574468085, 0.6)
1  (0.6190476190476191, 0.6444444444444445)  (0.6923076923076923, 0.6444444444444445)  (0.3698630136986301, 0.35555555555555557)  (0.7978723404255319, 0.7777777777777778)
2  (0.8095238095238095, 0.5555555555555556)  (0.7307692307692307, 0.5555555555555556)   (0.7534246575342466, 0.5777777777777777)  (0.6170212765957447, 0.7555555555555555)

# expand the tuples
for c in df.columns:
    df[[f'TrS{c}', f'TeS{c}']] = pd.DataFrame(df[c].tolist(), index= df.index)
    df.drop(c, axis=1, inplace=True)

# get the mean and std
metrics = df.agg(['mean', 'std']).round(3)

# display(metrics)
      TrS21  TeS21  TrS52  TeS52  TrS73  TeS73  TrS94  TeS94
mean  0.667  0.630  0.679  0.637  0.612  0.519  0.709  0.711
std   0.126  0.068  0.059  0.078  0.211  0.143  0.090  0.097

# convert to dict
dd = defaultdict(list)

for k, v in metrics.to_dict().items(): 
    dd[int(k[3:])].append(tuple(v.values()))
    
dd = dict(zip(dd, map(tuple, dd.values())))
print(dd)

[out]:
{21: ((0.667, 0.126), (0.63, 0.068)),
 52: ((0.679, 0.059), (0.637, 0.078)),
 73: ((0.612, 0.211), (0.519, 0.143)),
 94: ((0.709, 0.09), (0.711, 0.097))}
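To read a metric back out of dd, each value is ((train_mean, train_std), (test_mean, test_std)), so plain tuple unpacking works; a quick sketch with the result above as a literal:

```python
# dd as produced above, written out as a literal so the example is self-contained
dd = {21: ((0.667, 0.126), (0.63, 0.068)),
      52: ((0.679, 0.059), (0.637, 0.078)),
      73: ((0.612, 0.211), (0.519, 0.143)),
      94: ((0.709, 0.09), (0.711, 0.097))}

# unpack the metrics for sample size 21
(train_mean, train_std), (test_mean, test_std) = dd[21]
print(train_mean, test_std)  # 0.667 0.068
```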
Trenton McKinney

Assuming all results are stored in list rl, the following program will do the calculation:

rl = [
    {21: (0.85, 0.82), 52: (0.80, 0.62), 73: (0.82, 0.45), 94: (0.81, 0.78)},
    {21: (0.95, 0.91), 52: (0.80, 0.89), 73: (0.84, 0.87), 94: (0.79, 0.41)},
    {21: (0.809, 0.83), 52: (0.841, 0.77), 73: (0.84, 0.44), 94: (0.79, 0.33)}
]

vd = {}
for k in rl[0].keys():
    vals = [[], []]
    for i in range(len(rl)):
        vals[0].append(rl[i][k][0])
        vals[1].append(rl[i][k][1])
    vd[k] = sum(vals[0])/len(vals[0]), sum(vals[1])/len(vals[1])
print(vd)

Output:

# {21: (0.8696666666666667, 0.8533333333333334),
#  52: (0.8136666666666666, 0.7600000000000001),
#  73: (0.8333333333333334, 0.5866666666666667),
#  94: (0.7966666666666667, 0.5066666666666667)}

Alternatively, using zip and numpy on the same list rl, we can compute the spread of the scores in the same fashion as the mean (note that np.std returns the standard deviation, not the standard error of the mean):

import numpy as np

rl2 = list(zip(rl[0].keys(), rl[0].values(), rl[1].values(), rl[2].values()))
vd2 = {rl2[i][0]: np.mean(list(zip(*rl2[i][1:])), axis=1) for i in range(len(rl2))}
print(vd2)
vd2_std = {rl2[i][0]: np.std(list(zip(*rl2[i][1:])), axis=1) for i in range(len(rl2))}
print("Standard deviation\n", vd2_std)

Output:

# {21: array([0.86966667, 0.85333333]),
#  52: array([0.81366667, 0.76      ]),
#  73: array([0.83333333, 0.58666667]),
#  94: array([0.79666667, 0.50666667])}
# Standard deviation
# {21: array([0.05921899, 0.04027682]),
#  52: array([0.01932759, 0.11045361]),
#  73: array([0.00942809, 0.20038851]),
#  94: array([0.00942809, 0.19601587])}
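Since the question asks for the standard error specifically: the standard error of the mean is the sample standard deviation divided by sqrt(n), which is not what np.std computes by default (population standard deviation, ddof=0). A minimal sketch for one key, using the train scores for key 21 from the question's example:

```python
import numpy as np

# train scores for key 21 from the question's example dictionaries
scores = np.array([0.85, 0.95, 0.809])

std = scores.std(ddof=1)          # sample standard deviation (ddof=1)
sem = std / np.sqrt(len(scores))  # standard error of the mean
print(round(sem, 5))              # 0.04187
```

The same adjustment can be applied per key in the dictionary comprehensions above by replacing np.std(..., axis=1) with np.std(..., axis=1, ddof=1) / np.sqrt(len(rl)).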
VirtualScooter