I'm trying to merge multiple .txt files in a directory, joining them on a column X that is common to all of the resulting dataframes.
These are the previous posts I looked at:
- Import multiple csv files into pandas and concatenate into one DataFrame
- Python pandas - merge csv files in directory into one
- Import multiple nested csv files and concatenate into one DataFrame
- Python for merging multiple files from a directory into one single file
For example, two of the files look like this once read into dataframes:

import pandas as pd

df1 = pd.DataFrame({'X': ['X0', 'X1', 'X2', 'X3'],
                    'B': ['B0', 'B1', 'B2', 'B3'],
                    'C': ['C0', 'C1', 'C2', 'C3'],
                    'D': ['D0', 'D1', 'D2', 'D3']})
df2 = pd.DataFrame({'X': ['X0', 'X1', 'X2', 'X3'],
                    'G': ['G0', 'G1', 'G2', 'G3'],
                    'H': ['H0', 'H1', 'H2', 'H3'],
                    'J': ['J0', 'J1', 'J2', 'J3']})
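For these two example dataframes, the join I'm after would look like this (a minimal illustration using pd.merge on the common column, just to show the expected shape of the result):

pd.merge(df1, df2, on='X')
#     X   B   C   D   G   H   J
# 0  X0  B0  C0  D0  G0  H0  J0
# 1  X1  B1  C1  D1  G1  H1  J1
# 2  X2  B2  C2  D2  G2  H2  J2
# 3  X3  B3  C3  D3  G3  H3  J3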
Following the solutions from those posts, I built the code below:
import glob

filepath = "D:\\test"
data = []
for file in glob.iglob(filepath + '/*.txt', recursive=True):
    print(file)
    df = pd.read_csv(file, header=0, skiprows=0, skipfooter=0, na_values=("", " ", "NA"))
    data = data.append(df)
data_merge = pd.concat(data, keys=('X'))
but I got this error:
AttributeError: 'NoneType' object has no attribute 'append'
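From what I understand, the AttributeError comes from list.append returning None, so data = data.append(df) replaces the list with None after the first file. Below is a rough sketch of what I think the fixed loop plus a pairwise merge on X could look like, assuming the same example directory D:\test and the common column named X:

import glob
from functools import reduce

import pandas as pd

filepath = "D:\\test"  # example directory, as above

# Read every .txt file in the directory into its own dataframe.
frames = []
for file in glob.iglob(filepath + '/*.txt', recursive=True):
    frame = pd.read_csv(file, header=0, na_values=("", " ", "NA"))
    frames.append(frame)  # list.append returns None, so don't reassign it

# Merge the dataframes pairwise on the common column X instead of concatenating.
if frames:
    data_merge = reduce(lambda left, right: pd.merge(left, right, on='X'), frames)
    print(data_merge.head())

If several files share other column names besides X, pd.merge will add _x/_y suffixes to disambiguate them; I'm not sure whether that matters for my data. Is this reduce-with-merge approach reasonable, or is there a more idiomatic pandas way?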
How can I join the dataframes on the common column X? The expected output is a single dataframe with one row per X value and all of the remaining columns (B, C, D, G, H and J in the example above) side by side.
Thanks.