I'm trying to use a list as a positional index in a DataFrame subtract operation. However, I get the following error: `cannot do positional indexing on Index with these indexers`
I have these two DataFrames:
df1:
index | t1 | t2 | t3 | t4 | t5 | ... | t950 |
---|---|---|---|---|---|---|---|
a,1 | 0,00001 | 0,00002 | 0,00003 | 0,00004 | 0,00008 | ... | 0,00004 |
a,2 | 0,00001 | 0,00002 | 0,00003 | 0,00005 | 0,00007 | ... | 0,00004 |
b,1 | 0,00004 | 0,00003 | 0,00002 | 0,00006 | 0,00006 | ... | 0,00001 |
b,2 | 0,00005 | 0,00004 | 0,00003 | 0,00007 | 0,00005 | ... | 0,00002 |
df2:
index | t1 | t2 | t3 | t4 | t5 | ... | t950 |
---|---|---|---|---|---|---|---|
a,1 | 0,00008 | 0,00007 | 0,00007 | 0,00006 | 0,00004 | ... | 0,00002 |
a,2 | 0,00007 | 0,00006 | 0,00005 | 0,00004 | 0,00003 | ... | 0,00002 |
b,1 | 0,00002 | 0,00001 | 0,00002 | 0,00003 | 0,00004 | ... | 0,00004 |
b,2 | 0,00005 | 0,00006 | 0,00007 | 0,00008 | 0,00009 | ... | 0,00004 |
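For reference, here is a small reproducible version of the two frames, truncated to t1-t5. I'm assuming plain string row labels like "a,1" here, and the decimal commas above become points in the code:

```python
import pandas as pd

idx = ["a,1", "a,2", "b,1", "b,2"]
cols = ["t1", "t2", "t3", "t4", "t5"]

df1 = pd.DataFrame(
    [[0.00001, 0.00002, 0.00003, 0.00004, 0.00008],
     [0.00001, 0.00002, 0.00003, 0.00005, 0.00007],
     [0.00004, 0.00003, 0.00002, 0.00006, 0.00006],
     [0.00005, 0.00004, 0.00003, 0.00007, 0.00005]],
    index=idx, columns=cols,
)
df2 = pd.DataFrame(
    [[0.00008, 0.00007, 0.00007, 0.00006, 0.00004],
     [0.00007, 0.00006, 0.00005, 0.00004, 0.00003],
     [0.00002, 0.00001, 0.00002, 0.00003, 0.00004],
     [0.00005, 0.00006, 0.00007, 0.00008, 0.00009]],
    index=idx, columns=cols,
)
```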
I also have a list which holds, for each row, the zero-based column position in df2 from where the subtraction should start:
```python
index_col = [2, 3, 1, 2]
```
My current code is as follows:

```python
result = df1.subtract(
    df2.iloc[:, index_col:]  # <- this is where the error is raised
        .rename(columns=dict(zip(df2.iloc[:, index_col:].columns, df2.columns)))
)
```

I suspect the problem is that `.iloc` expects a scalar as the start of a slice, not a whole list.
My expected result is:
index | t1 | t2 | t3 | t4 | t5 | ... | t950 |
---|---|---|---|---|---|---|---|
a,1 | -0,00006 | -0,00004 | -0,00001 | ... | ... | ... | 0,00002 |
a,2 | -0,00003 | -0,00001 | ... | ... | ... | ... | 0,00002 |
b,1 | -0,00003 | -0,00001 | 0,00001 | -0,00002 | ... | ... | 0,00004 |
b,2 | -0,00002 | -0,00004 | -0,00006 | ... | ... | ... | 0,00004 |
Where, for example, in the first row the pairing is t1 - t3, t2 - t4, t3 - t5, and so on, because in df2 the subtraction should start at the third column (as the first index_col value, 2, reflects). For t1 that gives 0,00001 - 0,00007 = -0,00006.
Do you know how I can calculate this subtraction using my list as the per-row column offset? I know I can do it with a loop (a sketch of that version is below), but I want to avoid the loop and use the power of vectorization.
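For completeness, this is roughly the loop version I'm trying to get rid of. It's only a sketch under my own assumptions: positions where df2 has run out of columns are filled with NaN, and the result keeps df1's labels.

```python
import numpy as np
import pandas as pd

# Row-by-row version: for row i, take df2 starting at column index_col[i],
# subtract it from the leading columns of df1, and pad the tail with NaN.
rows = []
for i, start in enumerate(index_col):
    shifted = df2.iloc[i, start:].to_numpy()   # df2 row from the offset onward
    n = len(shifted)
    diff = np.full(df1.shape[1], np.nan)
    diff[:n] = df1.iloc[i, :n].to_numpy() - shifted
    rows.append(diff)

result = pd.DataFrame(rows, index=df1.index, columns=df1.columns)
```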
Thank you so much!