I need to perform the same set of pandas DataFrame operations on multiple files. Here is my code for a single file:
import pandas as pd

df1 = pd.read_csv("~/pathtofile/sample1.csv")
some_df = pd.read_csv("~/pathtofile/metainfo.csv")
df1 = df1.sort_values('col2')  # sort_values returns a new frame, so assign it back
df1 = df1[df1.col5 != 'N']
df1['new_col'] = df1['col3'] - df1['col2'] + 1
# format each row as 'col1:col2-col3(col4)'
f = lambda row: '{col1}:{col2}-{col3}({col4})'.format(**row)
df4 = df1.assign(Unique=df1.astype(str).apply(f, axis=1))
# print(df4)
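To double-check my understanding of the lambda, here is what it produces on a tiny made-up frame (toy values, not my real data):

```python
import pandas as pd

# toy one-row frame (made-up values) just to see what the lambda produces
toy = pd.DataFrame({'col1': ['geneA'], 'col2': [100], 'col3': [200], 'col4': ['+']})
f = lambda row: '{col1}:{col2}-{col3}({col4})'.format(**row)
unique = toy.astype(str).apply(f, axis=1)
print(unique.tolist())  # ['geneA:100-200(+)']
```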
##merge columns
df44 = df4.merge(some_df, left_on='genes', right_on='name', suffixes=('','_1'))
df44 = df44.rename(columns={'id':'id_new'}).drop(['name_1'], axis=1)
# print(df44)
df44['some_col'] = df44['some_col'] + ':E' + df44.groupby('some_col').cumcount().add(1).astype(str).str.zfill(3)
print(df44)
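For reference, this is how I understand the cumcount/zfill suffix trick, shown on toy repeated values (made-up data):

```python
import pandas as pd

# toy repeated values to check the running-counter suffix
toy = pd.DataFrame({'some_col': ['A', 'A', 'B']})
# cumcount numbers duplicates within each group starting at 0; add(1) and zfill(3)
# turn that into '001', '002', ... per group
suffix = toy.groupby('some_col').cumcount().add(1).astype(str).str.zfill(3)
labelled = toy['some_col'] + ':E' + suffix
print(labelled.tolist())  # ['A:E001', 'A:E002', 'B:E001']
```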
##drop unwanted columns adapted from http://stackoverflow.com/questions/13411544/delete-column-from-pandas-dataframe
df4 = df44
df4.drop(df4.columns[[3,7,9,11,12,13]], axis=1, inplace=True)
df4 = df4[['col0', 'col1', 'col2', 'col4', 'col5', 'col6', 'col8']]
# print(df4)
df4.to_csv('foo.csv', index=False)
The code above is just for one file. A few questions:
1) I have ~15 files; how can I run this set of commands on all 15 files?
2) How can I write the results to 15 different CSVs?
3) How can I merge certain columns from all 15 DataFrames into a matrix? For example, merging just 3 dfs:
sample1 = df4.set_index('col1')['col4']
sample2 = df5.set_index('col1')['col4']
sample3 = df6.set_index('col1')['col4']
concat = pd.concat([sample1,sample2,sample3], axis=1).fillna(0)
# print(concat)
concat.reset_index(level=0, inplace=True)
concat.columns = ["newcol0", "col1", "col2", "col3"]
concat.to_csv('bar.csv', index=False)
Is there a better way to do this than copy-pasting the whole block 15 times?
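What I was imagining is something like the following loop; `process_file` is a made-up placeholder for the per-file pipeline above, and the toy CSVs are only created so the sketch runs end-to-end (untested against my real data):

```python
import glob
import os
import tempfile
import pandas as pd

# make a throwaway directory with toy input CSVs so the sketch is self-contained
tmp = tempfile.mkdtemp()
for i in range(1, 4):
    pd.DataFrame({'col1': ['g1', 'g2'],
                  'col4': [i, i * 10]}).to_csv(
        os.path.join(tmp, 'sample%d.csv' % i), index=False)

def process_file(path):
    """Stand-in for the full single-file pipeline above (hypothetical helper)."""
    df = pd.read_csv(path)
    # ... the sort_values / filtering / merge steps would go here ...
    return df

results = {}
for path in sorted(glob.glob(os.path.join(tmp, 'sample*.csv'))):
    name = os.path.splitext(os.path.basename(path))[0]
    df = process_file(path)
    df.to_csv(os.path.join(tmp, name + '_out.csv'), index=False)  # one CSV per input
    results[name] = df

# build the matrix from all processed frames instead of naming each one by hand
series = [df.set_index('col1')['col4'].rename(n) for n, df in results.items()]
concat = pd.concat(series, axis=1).fillna(0)
print(concat)
```

The same `results` dict would then feed `concat.to_csv('bar.csv')` as in my 3-file version, just without hard-coding `df4`, `df5`, `df6`.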