I'm trying to reduce meteorological data using pandas 0.13.1. I have a large DataFrame of floats. Thanks to this answer I have grouped the data into half-hour intervals efficiently. I am using groupby + apply instead of resample because I need to examine multiple columns at once.

>>> winddata
                            sonic_Ux  sonic_Uy  sonic_Uz
TIMESTAMP                                               
2014-04-30 14:13:12.300000  0.322444  2.530129  0.347921
2014-04-30 14:13:12.400000  0.357793  2.571811  0.360840
2014-04-30 14:13:12.500000  0.469529  2.400510  0.193011
2014-04-30 14:13:12.600000  0.298787  2.212599  0.404752
2014-04-30 14:13:12.700000  0.259310  2.054919  0.066324
2014-04-30 14:13:12.800000  0.342952  1.962965  0.070500
2014-04-30 14:13:12.900000  0.434589  2.210533 -0.010147
                                 ...       ...       ...

[4361447 rows x 3 columns]
>>> winddata.dtypes
sonic_Ux    float64
sonic_Uy    float64
sonic_Uz    float64
dtype: object
>>> hhdata = winddata.groupby(TimeGrouper('30T')); hhdata
<pandas.core.groupby.DataFrameGroupBy object at 0xb440790c>

I want to use math.atan2 on the 'Ux'/'Uy' columns, but I'm having trouble applying any function at all: I get tracebacks about a missing ndim attribute:

>>> hhdata.apply(lambda g: atan2(g['sonic_Ux'].mean(), g['sonic_Uy'].mean()))
Traceback (most recent call last):
      <<snip>>
  File "/usr/local/lib/python2.7/dist-packages/pandas-0.13.1-py2.7-linux-i686.egg/pandas/tools/merge.py", line 989, in __init__
    if not 0 <= axis <= sample.ndim:
AttributeError: 'float' object has no attribute 'ndim'
>>> 
>>> hhdata.apply(lambda g: 42)
Traceback (most recent call last):
      <<snip>>
  File "/usr/local/lib/python2.7/dist-packages/pandas-0.13.1-py2.7-linux-i686.egg/pandas/tools/merge.py", line 989, in __init__
    if not 0 <= axis <= sample.ndim:
AttributeError: 'int' object has no attribute 'ndim'

I can loop through the groupby object just fine. I can also wrap the result in a Series or DataFrame, but doing so adds an extra index level that gets tuple-ed with my original index. Following the advice of this answer to remove the duplicate index didn't work as expected. Since I can reproduce both the problem and the solution from that question, I suspect the behavior differs because I am grouping on a DatetimeIndex rather than on a column.

>>> for name, g in hhdata:
...     print name, atan2(g['sonic_Ux'].mean(), g['sonic_Uy'].mean()), '   wd'
... 
2014-04-30 14:00:00 0.13861912975    wd
2014-04-30 14:30:00 0.511709085506    wd
2014-04-30 15:00:00 -1.5088990774    wd
2014-04-30 15:30:00 0.13200013186    wd
    <<snip>>
>>> def winddir(g):
...     return pd.Series(atan2( np.mean(g['sonic_Ux']), np.mean(g['sonic_Uy']) ), name='wd')
... 
>>> hhdata.apply(winddir)
2014-04-30 14:00:00  0    0.138619
2014-04-30 14:30:00  0    0.511709
2014-04-30 15:00:00  0   -1.508899
2014-04-30 15:30:00  0    0.132000
...
2014-05-05 14:00:00  0   -2.551593
2014-05-05 14:30:00  0   -2.523250
2014-05-05 15:00:00  0   -2.698828
Name: wd, Length: 243, dtype: float64
>>> hhdata.apply(winddir).index[0]
(Timestamp('2014-04-30 14:00:00', tz=None), 0)
>>> def winddir(g):
...     return pd.DataFrame({'wd':atan2(g['sonic_Ux'].mean(), g['sonic_Uy'].mean())}, index=[g.name])
... 
>>> hhdata.apply(winddir)
                                               wd
2014-04-30 14:00:00 2014-04-30 14:00:00  0.138619
2014-04-30 14:30:00 2014-04-30 14:30:00  0.511709
2014-04-30 15:00:00 2014-04-30 15:00:00 -1.508899
2014-04-30 15:30:00 2014-04-30 15:30:00  0.132000
                                              ...

[243 rows x 1 columns]
>>> hhdata.apply(winddir).index[0]
(Timestamp('2014-04-30 14:00:00', tz=None), Timestamp('2014-04-30 14:00:00', tz=None))
>>> 
>>> tsfast.groupby(TimeGrouper('30T')).apply(lambda g:
...     Series({'wd': atan2(g.sonic_Ux.mean(), g.sonic_Uy.mean()), 
...             'ws': np.sqrt(g.sonic_Ux.mean()**2 + g.sonic_Uy.mean()**2)}))
2014-04-30 14:00:00  wd    0.138619
                     ws    1.304311
2014-04-30 14:30:00  wd    0.511709
                     ws    0.143762
2014-04-30 15:00:00  wd   -1.508899
                     ws    0.856643
...
2014-05-05 14:30:00  wd   -2.523250
                     ws    3.317810
2014-05-05 15:00:00  wd   -2.698828
                     ws    3.279520
Length: 486, dtype: float64
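(Aside: on later pandas versions, returning a Series of named stats from apply appears to produce the two-column frame directly rather than the stacked Series above. A minimal sketch on synthetic stand-in data; the values are made up:)

```python
import numpy as np
import pandas as pd

# Four 15-minute samples -> two half-hour groups (synthetic stand-in data).
idx = pd.date_range('2014-04-30 14:00', periods=4, freq='15min')
df = pd.DataFrame({'sonic_Ux': [0.3, 0.4, 0.2, 0.1],
                   'sonic_Uy': [2.5, 2.4, 1.9, 2.0]}, index=idx)

result = df.groupby(pd.Grouper(freq='30min')).apply(
    lambda g: pd.Series({'wd': np.arctan2(g['sonic_Ux'].mean(),
                                          g['sonic_Uy'].mean()),
                         'ws': np.hypot(g['sonic_Ux'].mean(),
                                        g['sonic_Uy'].mean())}))
# result is a DataFrame with columns ['wd', 'ws'] and the group
# timestamps as its index -- one row per half-hour group.
```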

Edit: Notice the extra index level when a Series or DataFrame is returned? And that following the formula of the previously linked answer results in a hierarchical index?

My original question was: what kind of value should my applied function return so that a groupby-apply operation results in a 1-column DataFrame or Series whose length equals the number of groups, with the group names (e.g. Timestamps) as index values?

After feedback and further investigation, what I am really asking is: why does grouping on an index behave differently than grouping on a column? Observe that converting the DatetimeIndex to a column of string values, to achieve grouping equivalent to TimeGrouper('30T'), produces the behavior I was expecting:

>>> winddata.index.name = 'WASINDEX'
>>> data2 = winddata.reset_index()
>>> def to_hh(x): # <-- big hammer
...     ts = x.isoformat()
...     return ts[:14] + ('30:00' if int(ts[14:16]) >= 30 else '00:00')
... 
>>> data2['TS'] = data2['WASINDEX'].apply(lambda x: to_hh(x))
>>> wd = data2.groupby('TS').apply(lambda df: Series({'wd': np.arctan2(df.sonic_Ux.mean(), df.sonic_Uy.mean())}))
>>> type(wd)
pandas.core.frame.DataFrame
>>> wd.columns
Index([u'wd'], dtype=object)
>>> wd.index
Index([u'2014-04-30T14:00:00', u'2014-04-30T14:30:00', <<snip>> dtype=object)
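The core of that workaround, reduced to a self-contained sketch (synthetic values, with the half-hour key precomputed as a string column):

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for data2: half-hour bin already encoded as a string key.
df = pd.DataFrame({
    'TS': ['2014-04-30T14:00:00'] * 3 + ['2014-04-30T14:30:00'] * 3,
    'sonic_Ux': [0.3, 0.4, 0.5, 0.2, 0.1, 0.3],
    'sonic_Uy': [2.5, 2.4, 2.2, 1.9, 2.0, 2.1],
})

# Grouping on a *column* and returning a Series of named stats gives a
# DataFrame indexed by the group names -- no extra index level is added.
wd = df.groupby('TS')[['sonic_Ux', 'sonic_Uy']].apply(
    lambda g: pd.Series({'wd': np.arctan2(g['sonic_Ux'].mean(),
                                          g['sonic_Uy'].mean())}))
```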
patricktokeeffe
    this will be much more efficient to not use apply at all, rather compute the mean aggregates first, then use np.atan2. I'll put up an example tomorrow – Jeff Jun 23 '14 at 01:13
  • Just looking at your exception, looks like you're trying to apply function to each row but didn't specify axis=1 e.g. df.apply(f, axis=1) #apply function to each row – Bill Jun 23 '14 at 03:06

1 Answer

In [30]: N = 4361447

In [31]: pd.set_option('max_rows',10)

In [32]: winddata = DataFrame({ 'x' : np.random.randn(N), 'y' : np.random.randn(N)+2, 'z' : np.random.randn(N) },pd.date_range('20140430 14:13:12',periods=N,freq='100ms'))

In [33]: winddata
Out[33]: 
                                   x         y         z
2014-04-30 14:13:12        -0.065350  0.567525  2.212534
2014-04-30 14:13:12.100000 -0.436498  2.591799  2.424359
2014-04-30 14:13:12.200000 -1.059038  3.120631 -0.645579
2014-04-30 14:13:12.300000  1.973474  0.630424  0.966405
2014-04-30 14:13:12.400000  0.575082  1.941845 -0.674695
...                              ...       ...       ...
2014-05-05 15:22:16.200000  0.601962  0.027834 -0.101967
2014-05-05 15:22:16.300000  0.741777  1.764745  0.991516
2014-05-05 15:22:16.400000 -0.494253  1.765930  2.493000
2014-05-05 15:22:16.500000 -2.643749  0.671604  0.275096
2014-05-05 15:22:16.600000  0.676698  0.958903  0.946942

[4361447 rows x 3 columns]

In [34]: winddata.info()
<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 4361447 entries, 2014-04-30 14:13:12 to 2014-05-05 15:22:16.600000
Freq: 100L
Data columns (total 3 columns):
x    float64
y    float64
z    float64
dtypes: float64(3)

In pandas < 0.14.0, use pd.TimeGrouper instead of pd.Grouper:

In [35]: g = winddata.groupby(pd.Grouper(freq='30T'))

In [36]: results = DataFrame({'x' : g['x'].mean(), 'y' : g['y'].mean() })

In [37]: results['wd'] = np.arctan2(results['x'],results['y'])

In [38]: results['ws'] = np.sqrt(results['x']**2+results['y']**2)

In [39]: results
Out[39]: 
                            x         y        wd        ws
2014-04-30 14:00:00  0.005060  1.986778  0.002547  1.986784
2014-04-30 14:30:00  0.004922  2.015551  0.002442  2.015557
2014-04-30 15:00:00 -0.004209  1.988889 -0.002116  1.988893
2014-04-30 15:30:00  0.008410  2.003453  0.004198  2.003470
2014-04-30 16:00:00  0.004027  1.997369  0.002016  1.997373
...                       ...       ...       ...       ...
2014-05-05 13:00:00  0.006901  1.991252  0.003466  1.991264
2014-05-05 13:30:00  0.005458  2.008731  0.002717  2.008739
2014-05-05 14:00:00 -0.000805  2.000045 -0.000402  2.000045
2014-05-05 14:30:00 -0.004556  1.997437 -0.002281  1.997443
2014-05-05 15:00:00  0.003444  2.000182  0.001722  2.000185

[243 rows x 4 columns]
Jeff