
Let's assume you have a pandas DataFrame that holds frequency information like this:

data = [[1,1,2,3],
        [1,2,3,5],
        [2,1,6,1],
        [2,2,2,4]]
df = pd.DataFrame(data, columns=['id', 'time', 'CountX1', 'CountX2'])

#    id  time  CountX1  CountX2
# 0   1     1        2        3
# 1   1     2        3        5
# 2   2     1        6        1
# 3   2     2        2        4

I am looking for a simple command (e.g. using pd.pivot() or pd.melt()) to expand these frequencies back into tidy data that should look like this:

id  time  variable
 1     1  X1
 1     1  X1
 1     1  X2
 1     1  X2
 1     1  X2
 1     2  X1
 1     2  X1
 1     2  X1
 1     2  X2  ...  # 5x repeated
 2     1  X1  ...  # 6x repeated
 2     1  X2  ...  # 1x repeated
 2     2  X1  ...  # 2x repeated
 2     2  X2  ...  # 4x repeated
user3637203
  • The R code for this would be `uncount(df, freq)` with tidyr >= 0.8, see https://stackoverflow.com/a/48571794/3637203 – user3637203 Jun 21 '18 at 09:49
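
For comparison, a minimal pandas sketch in the spirit of tidyr's uncount might look like this (the uncount helper below is hypothetical, not a pandas function; it assumes the df defined above, melted down to a single count column first, since that is what tidyr expects):

import pandas as pd

def uncount(frame, weights):
    # repeat each row of `frame` by the integer column `weights`, then drop that column
    out = frame.loc[frame.index.repeat(frame[weights])]
    return out.drop(columns=weights).reset_index(drop=True)

# tidyr's uncount expects a single count column, so melt the two Count* columns first
melted = df.melt(['id', 'time'])        # columns: id, time, variable, value
tidy = uncount(melted, 'value')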

2 Answers


You can use set_index + stack, then repeat the index labels by the counts:

a = df.set_index(['id','time']).stack()           # long Series of counts
df = (a.loc[a.index.repeat(a)]                    # repeat each label by its count
        .reset_index()
        .rename(columns={'level_2':'a'})          # the stacked column-name level
        .drop(0, axis=1))                         # drop the count values
print(df)
    id  time        a
0    1     1  CountX1
1    1     1  CountX1
2    1     1  CountX2
3    1     1  CountX2
4    1     1  CountX2
5    1     2  CountX1
6    1     2  CountX1
7    1     2  CountX1
8    1     2  CountX2
9    1     2  CountX2
10   1     2  CountX2
11   1     2  CountX2
12   1     2  CountX2
13   2     1  CountX1
14   2     1  CountX1
15   2     1  CountX1
16   2     1  CountX1
17   2     1  CountX1
18   2     1  CountX1
19   2     1  CountX2
20   2     2  CountX1
21   2     2  CountX1
22   2     2  CountX2
23   2     2  CountX2
24   2     2  CountX2
25   2     2  CountX2
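
The key step is Index.repeat: each (id, time, column) label of the stacked Series is repeated as many times as its own count, and .loc then materialises those repeated labels as rows. A quick sketch of just that step, reusing the a defined above:

# a.loc[(1, 1)] is the pair of counts for id=1, time=1:
# CountX1    2
# CountX2    3

idx = a.index.repeat(a)            # every label repeated by its own value
print(a.loc[idx].index.tolist()[:5])
# [(1, 1, 'CountX1'), (1, 1, 'CountX1'),
#  (1, 1, 'CountX2'), (1, 1, 'CountX2'), (1, 1, 'CountX2')]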

A first solution was initially deleted because it returns the rows in a different order and needs an extra sort:

a = df.melt(['id','time'])
df = (a.loc[a.index.repeat(a['value'])]   # repeat each melted row by its count
       .drop('value', axis=1)
       .sort_values(['id', 'time'])
       .reset_index(drop=True))
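
Both variants keep the full column names (CountX1 / CountX2) in the repeated column. If you want the bare X1 / X2 labels shown in the question, one extra step is enough, for example (the column is named 'a' in the first variant and 'variable' in the second):

# strip the 'Count' prefix to get X1 / X2
df['variable'] = df['variable'].str.replace('Count', '')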
jezrael
  • After fixing your answer, it is `100 loops, best of 3: 6.84 ms per loop`. You can make your own timings if you aren't convinced. :) – cs95 Jan 26 '18 at 11:16
  • @cᴏʟᴅsᴘᴇᴇᴅ - no problem, please add it to your timings ;) – jezrael Jan 26 '18 at 11:17

You can use melt + repeat.

v = df.melt(['id', 'time'])   # long frame: id, time, variable, value
r = v.pop('value')            # counts used as repeat factors

df = pd.DataFrame(
        v.values.repeat(r, axis=0), columns=v.columns
)\
       .sort_values(['id', 'time'])\
       .reset_index(drop=True)

   id time variable
0   1    1  CountX1
1   1    1  CountX1
2   1    1  CountX2
3   1    1  CountX2
4   1    1  CountX2
5   1    2  CountX1
6   1    2  CountX1
7   1    2  CountX1
8   1    2  CountX2
9   1    2  CountX2
10  1    2  CountX2
11  1    2  CountX2
12  1    2  CountX2
13  2    1  CountX1
14  2    1  CountX1
15  2    1  CountX1
16  2    1  CountX1
17  2    1  CountX1
18  2    1  CountX1
19  2    1  CountX2
20  2    2  CountX1
21  2    2  CountX1
22  2    2  CountX2
23  2    2  CountX2
24  2    2  CountX2
25  2    2  CountX2

This produces the ordering as depicted in your question.
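
A small caveat with this approach: v.values on a frame that mixes integer and string columns is a single object array, so id and time come back as object dtype after the repeat. If that matters, they can be converted back afterwards, e.g.:

# restore numeric dtypes lost by going through the object array
df[['id', 'time']] = df[['id', 'time']].astype(int)
# or, more generally: df = df.infer_objects()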


Performance

df = pd.concat([df] * 100, ignore_index=True)  # enlarge the sample 100x for timing

# jezrael's stack solution

%%timeit
a = df.set_index(['id','time']).stack()
a.loc[a.index.repeat(a)].reset_index().rename(columns={'level_2':'a'}).drop(0, axis=1)

1 loop, best of 3: 173 ms per loop

# jezrael's melt solution
%%timeit
a = df.melt(['id','time'])
a.loc[a.index.repeat(a['value'])].drop('value', axis=1).sort_values(['id', 'time']).reset_index(drop=True)

100 loops, best of 3: 6.84 ms per loop

# in this answer

%%timeit
v = df.melt(['id', 'time'])
r = v.pop('value')

pd.DataFrame(
        v.values.repeat(r, axis=0),  columns=v.columns
)\
       .sort_values(['id', 'time'])\
       .reset_index(drop=True)

100 loops, best of 3: 4.65 ms per loop
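
The %%timeit numbers above come from IPython; a rough standalone equivalent using the timeit module (assuming the enlarged df from the setup line above) could look like this:

import timeit

def melt_repeat():
    v = df.melt(['id', 'time'])
    r = v.pop('value')
    return (pd.DataFrame(v.values.repeat(r, axis=0), columns=v.columns)
              .sort_values(['id', 'time'])
              .reset_index(drop=True))

# total seconds for 100 runs; divide by 100 for a per-loop figure
print(timeit.timeit(melt_repeat, number=100))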
cs95