You are running out of memory. Even if you manage to load all of the files (with pandas or any other package), your system will still run out of memory for every task you want to perform on this data.
Assuming you want to perform different operations on different columns of all the tables, the best approach is to perform each task separately, preferably in batches of columns, since each file has more than 1k of them, as you say.
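For example, a single file could be processed one batch of columns at a time using read_csv's usecols argument. This is only a minimal sketch: the file name, separator and batch size below are placeholders you would adapt to your data.

import pandas as pd

filename = 'table1.txt'  # placeholder file name
batch_size = 100         # placeholder: number of columns loaded per pass

# Read only the header row to get the column names without loading any data
all_columns = pd.read_csv(filename, sep=' ', nrows=0).columns

column_sums = {}
for start in range(0, len(all_columns), batch_size):
    batch = list(all_columns[start:start + batch_size])
    # usecols limits how many columns are held in memory at once
    df = pd.read_csv(filename, sep=' ', usecols=batch)
    column_sums.update(df.sum().to_dict())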
Let's say you want to sum the values in the first column of each file (assuming they are numbers...) and store these results in a list:
import glob
import pandas as pd
import numpy as np
filelist = glob.glob('*.txt') # Make sure you're working in the directory containing the files
sum_first_columns = []
for file in filelist:
    df = pd.read_csv(file, sep=' ')  # Adjust the separator for your case
    sum_temp = np.sum(df.iloc[:, 0])
    sum_first_columns.append(sum_temp)
You now have a list of 120 values, one sum per file. Because df is overwritten on each iteration, only one file's data is held in memory at a time.
This is how I would handle each operation if I had to work on my own computer/system.
Please note that this process will also be very time-consuming, given the size of your files. You can either try to reduce your data or use a cloud server to compute everything.
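If even a single file is too large to load at once, read_csv can also stream it in row chunks. A minimal sketch along the same lines as the loop above (the separator and chunk size are again placeholders):

import glob
import pandas as pd

sum_first_columns = []
for file in glob.glob('*.txt'):
    total = 0
    # chunksize makes read_csv return an iterator of smaller DataFrames
    for chunk in pd.read_csv(file, sep=' ', usecols=[0], chunksize=100000):
        total += chunk.iloc[:, 0].sum()
    sum_first_columns.append(total)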