I have a data frame with more than 700 columns and an array whose length equals the number of rows of the data frame, so each array value corresponds to one row. The value in the array is essentially a (random) column index from the data frame. This is what I am looking for:
For each row, I want to shift the column indicated by the array so that it lands in the 100th column of the data frame. To do this I can add zero columns to the start of the row, depending on how much shift is needed. All the values in the array are less than 100, so I know I will have to add zero columns for every row.
For instance, if this is the data frame:
DF1
col1  col2  col3  col4  col5  col6  ...  col100  ...  col700
21    321   52    74    74    55    ...  20      ...  447
array[0] = 6, so col6 has to end up in col100, which means a shift of 94 positions: I will add 94 zeros to the start of that row, and the modified data frame will be:
modified_DF1
col1  col2  col3  col4  col5  col6  ...  col98  col99  col100  ...  col794
0     0     0     0     0     0     ...  74     74     55      ...  447
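To make the rule concrete, here is a tiny runnable sketch of a single row (the toy values, the 700-column width and the df1 name are just for illustration):

import numpy as np
import pandas as pd

# toy stand-in for DF1: one row, 700 columns named col1 .. col700, where colj holds the value j
df1 = pd.DataFrame([np.arange(1, 701)], columns=[f"col{j}" for j in range(1, 701)])

shift = 100 - 6                                        # array[0] = 6, so 94 leading zeros
padded = np.concatenate([np.zeros(shift, dtype=int), df1.iloc[0].to_numpy()])
print(padded[99])                                      # index 99 is col100; it now holds the old col6 value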
I have tried the following code, but it does not seem to work:
import numpy as np

def shift_r_peak(df, shift_array):
    for i in range(df.shape[0]):
        shift_val = 100 - shift_array[i]
        new_row = np.zeros(df.shape[1])  # create a new row of zeros, same width as the data frame
        # copy the first shift_array[i]+1 values of the original row to the shifted position
        new_row[shift_val:shift_val + shift_array[i] + 1] = df.iloc[i, :shift_array[i] + 1].values
        df.iloc[i] = new_row  # assign the new row back to the original DataFrame
    return df
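I was also wondering whether it would be better to build the result in a separate zero array instead of overwriting rows in place. This is a rough sketch of what I have in mind (the function name shift_rows_to_col100 and the 1-based col1..colN naming of the output are just my assumptions), but I am not sure it is the right approach:

import numpy as np
import pandas as pd

def shift_rows_to_col100(df, shift_array):
    # the largest left padding determines the width of the output
    max_pad = 100 - int(np.min(shift_array))
    out = np.zeros((df.shape[0], df.shape[1] + max_pad))
    for i in range(df.shape[0]):
        pad = 100 - shift_array[i]                     # leading zeros for this row
        out[i, pad:pad + df.shape[1]] = df.iloc[i].to_numpy()
    return pd.DataFrame(out, columns=[f"col{j}" for j in range(1, out.shape[1] + 1)])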