
I have data like this:

ID    8-Jan  15-Jan  22-Jan  29-Jan   5-Feb  12-Feb  LowerBound  UpperBound
001     618     720     645     573     503     447           -           -
002      62      80      67      94      81      65           -           -
003      32      10      23      26      26      31           -           -
004      22      13       1      28      19      25           -           -
005       9       7       9       6       8       4           -           -

I want to create two columns with the lower and upper bounds for each product using 95% confidence intervals. I know the manual way of writing a function that loops through each product ID:

import numpy as np
import scipy.stats

# Method copied from http://stackoverflow.com/questions/15033511/compute-a-confidence-interval-from-sample-data
def mean_confidence_interval(data, confidence=0.95):
    a = 1.0 * np.array(data)                   # coerce to a float array
    n = len(a)
    m, se = np.mean(a), scipy.stats.sem(a)     # sample mean and standard error
    h = se * scipy.stats.t.ppf((1 + confidence) / 2., n - 1)  # t-based half-width
    return m - h, m + h
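
Applied row by row, that manual approach would look something like this (a sketch; the small frame just mirrors the first two rows of the table above):

import pandas as pd

week_cols = ['8-Jan', '15-Jan', '22-Jan', '29-Jan', '5-Feb', '12-Feb']
df = pd.DataFrame([[618, 720, 645, 573, 503, 447],
                   [62, 80, 67, 94, 81, 65]],
                  index=['001', '002'], columns=week_cols)

# Loop over product IDs and fill the bounds one row at a time.
for idx, row in df.iterrows():
    lo, hi = mean_confidence_interval(row[week_cols])
    df.loc[idx, 'LowerBound'] = lo
    df.loc[idx, 'UpperBound'] = hi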

Is there an efficient way to do this in pandas (a one-liner kind of thing)?

– muazfaiz

3 Answers


Of course; you want df.apply. Note that you need to modify mean_confidence_interval to return pd.Series([m - h, m + h]), as sketched below.

df[['LowerBound','UpperBound']] = df.apply(mean_confidence_interval, axis=1)
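
For illustration, a minimal sketch of that modification (the apply call assumes df holds only the six numeric week columns at that point):

import numpy as np
import pandas as pd
import scipy.stats

def mean_confidence_interval(data, confidence=0.95):
    a = 1.0 * np.array(data)
    n = len(a)
    m, se = np.mean(a), scipy.stats.sem(a)
    h = se * scipy.stats.t.ppf((1 + confidence) / 2., n - 1)
    return pd.Series([m - h, m + h])  # a Series expands into two columns under apply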
– gzc

The standard error of the mean is straightforward to calculate, so you can easily vectorize this:

import numpy as np
import scipy.stats as ss

df.mean(axis=1) + ss.t.ppf(0.975, df.shape[1] - 1) * df.std(axis=1) / np.sqrt(df.shape[1])

will give you the upper bound. Subtract the ss.t.ppf term instead to get the lower bound.
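
Putting both bounds together, a minimal sketch (again assuming df contains only the numeric week columns):

import numpy as np
import scipy.stats as ss

n = df.shape[1]                     # number of weekly observations per row
mean = df.mean(axis=1)
half_width = ss.t.ppf(0.975, n - 1) * df.std(axis=1) / np.sqrt(n)
df['LowerBound'] = mean - half_width
df['UpperBound'] = mean + half_width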

Also, pandas has a sem method for the standard error of the mean. If you have a large dataset, I don't suggest using apply over rows; it is pretty slow. Here are some timings:

df = pd.DataFrame(np.random.randn(100, 10))

%timeit df.apply(mean_confidence_interval, axis=1)
100 loops, best of 3: 18.2 ms per loop

%%timeit
dist = ss.t.ppf(0.975, df.shape[1]-1) * df.sem(axis=1)
mean = df.mean(axis=1)
mean - dist, mean + dist
1000 loops, best of 3: 598 µs per loop
– ayhan
    @MaxU Thanks. It seems like pandas has a method for sem too. It is faster than apply, yeah. – ayhan Apr 12 '17 at 15:29

Since you already created a function for calculating the confidence interval, simply apply it to each row of your data:

def mean_confidence_interval(data):
    confidence = 0.95
    m = data.mean()
    se = scipy.stats.sem(data)
    h = se * scipy.stats.t.ppf((1 + confidence) / 2, data.shape[0] - 1)  # interval half-width
    return pd.Series((m - h, m + h))

interval = df.apply(mean_confidence_interval, axis=1)
interval.columns = ("LowerBound", "UpperBound")
pd.concat([df, interval], axis=1)
– DYZ