If the timestamps carry seconds, you can first strip them so the grouping happens on whole minutes.
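To make the snippets below runnable on their own, assume a small hypothetical df consistent with the output shown further down (the real df comes from the question; the seconds values here are made up for illustration):
import pandas as pd

# Hypothetical sample data -- seconds vary within each minute
df = pd.DataFrame({
    'timestamp': pd.to_datetime([
        '2019-01-01 09:00:03', '2019-01-01 09:00:12', '2019-01-01 09:00:31',
        '2019-01-01 09:00:45', '2019-01-01 09:00:57',
        '2019-01-01 09:01:10', '2019-01-01 09:01:40',
    ]),
    'status': ['PASSED', 'FAILED', 'PASSED', 'FAILED', 'UNKNOWN',
               'PASSED', 'FAILED'],
})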
df2 = (
df
# truncate each timestamp to the start of its minute, then count statuses
.groupby(df['timestamp'].map(lambda x: x.replace(second=0)))['status']
.value_counts()
.unstack(fill_value=0)   # one column per status, missing combinations -> 0
.reset_index()
)
>>> df2
status            timestamp  FAILED  PASSED  UNKNOWN
0       2019-01-01 09:00:00       2       2        1
1       2019-01-01 09:01:00       1       1        0
You may also wish to fill in every minute in the range, including minutes with no rows. Use the same code as above, but don't reset the index at the end. Then:
df2 = df2.reindex(pd.date_range(df2.index[0], df2.index[-1], freq='1min'), fill_value=0)
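Putting the two steps together, the minute-filling variant is the same pipeline without the final reset_index, followed by the reindex (same hypothetical sample as above):
# Keep the minute-truncated timestamp as the index so it can be reindexed
df2 = (
    df
    .groupby(df['timestamp'].map(lambda x: x.replace(second=0)))['status']
    .value_counts()
    .unstack(fill_value=0)
)
# Fill any missing minutes between the first and last timestamp with zero counts
df2 = df2.reindex(pd.date_range(df2.index[0], df2.index[-1], freq='1min'), fill_value=0)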
Timings
Timings will certainly vary based on the dataset (small vs. large, heterogeneous vs. homogeneous data, etc.). Since the dataset is basically a log, one would expect many rows with high variation in the timestamps. To create more suitable test data, let's make the sample dataframe 100,000 times larger and then make the timestamps unique (one per minute).
# Repeat the sample 100,000 times, then give every row its own minute
df_ = pd.concat([df] * 100000)
df_['timestamp'] = pd.date_range(df_.timestamp.iat[0], periods=len(df_), freq='1min')
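As a quick sanity check (the row count assumes the 7-row hypothetical sample above):
df_.shape                      # (700000, 2) with the 7-row sample
df_['timestamp'].is_unique     # True: one distinct minute per row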
And here are the new timings:
%timeit pd.crosstab(df_['timestamp'], df_['status'])
# 4.27 s ± 150 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit df_.groupby(['timestamp', 'status']).size().unstack(fill_value=0)
# 567 ms ± 34.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%%timeit
(
df_
.groupby(['timestamp', 'status'])
.size()
.unstack(fill_value=0)
.reset_index()
)
# 614 ms ± 27.6 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%%timeit
(
df_
.groupby(df_['timestamp'].map(lambda x: x.replace(second=0)))['status']
.value_counts()
.unstack(fill_value=0)
.reset_index()
)
# 147 ms ± 6.66 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)