Unfortunately, pandas' groupby silently drops NaN values, so here's a somewhat dirty way of doing what you want (dirty in the sense that I create a fake column holding a sentinel value in place of NaN >_>).
As an aside, the way the itertools.groupby function works is that it groups consecutive items that have the same key function value. enumerate pairs each index with the corresponding value of nanindices (e.g. if nanindices is [0, 1, 4, 5, 6], enumerate returns [(0, 0), (1, 1), (2, 4), (3, 5), (4, 6)]). The key function is the index minus the value. Note that when the value and index both go up by one at the same time (i.e. are consecutive), that difference stays the same. Therefore, this groups consecutive numbers.
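To see the trick in isolation, here is a minimal sketch using the same example list of indices as above (the variable names are just for illustration):

```python
import itertools

# Indices of NaN positions; [0, 1] and [4, 5, 6] are the consecutive runs.
nanindices = [0, 1, 4, 5, 6]

groups = []
for k, g in itertools.groupby(enumerate(nanindices), lambda ix: ix[0] - ix[1]):
    # Within a run of consecutive values, index - value is constant,
    # so groupby lumps the whole run into one group.
    groups.append([x for _, x in g])

print(groups)  # [[0, 1], [4, 5, 6]]
```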
itemgetter(n) is just a callable object you can apply to an item to get its n^th element using its __getitem__ function. I mapped it over the result of the groupby simply because you can't call len() directly on the iterable, g, that it returns. You could simply convert g to a list and call len() on that if you don't care about the actual consecutive values.
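For instance, itemgetter(1) picks the second element out of each (index, value) pair that enumerate produces (toy values here, purely for illustration):

```python
from operator import itemgetter

pair = (0, 5)            # like one (index, value) pair from enumerate
get_second = itemgetter(1)

print(get_second(pair))  # 5

# Mapped over several pairs, it strips the enumerate indices back off:
values = list(map(itemgetter(1), [(0, 0), (1, 1), (2, 4)]))
print(values)            # [0, 1, 4]
```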
import numpy as np
import pandas as pd
import itertools
from operator import itemgetter

locations = []
df = pd.DataFrame([np.nan]*2 + [5]*3 + [np.nan]*3 + [4]*3 + [3]*2 + [np.nan]*4, columns=['A'])
# The fake column: replace NaN with a sentinel (-1) so groupby doesn't drop it.
df['B'] = df['A'].fillna(-1)
# Array of row indices where column A was NaN.
nanindices = df.reset_index().groupby('B')['index'].apply(np.array).loc[-1]
for k, g in itertools.groupby(enumerate(nanindices), lambda ix: ix[0] - ix[1]):
    consec = list(map(itemgetter(1), g))
    num_consec = len(consec)
    if num_consec >= 3:
        locations.append((consec[0], num_consec))
print(locations)
For the sample DataFrame I used, the data looks like:
A
0 NaN
1 NaN
2 5.0
3 5.0
4 5.0
5 NaN
6 NaN
7 NaN
8 4.0
9 4.0
10 4.0
11 3.0
12 3.0
13 NaN
14 NaN
15 NaN
16 NaN
And the program prints:
[(5, 3), (13, 4)]