I am a beginner in Python and I am facing a problem. I would like the user to select a CSV file to be read in. If the program cannot locate or parse the file, it should show an error instead.
I have successfully implemented this for small files (< 50,000 rows), but when the selected file is larger (e.g. > 50,000 rows), the program freezes.
The following are some characteristics to consider:
- My computer has 8GB of RAM.
- The selected file is only 200k+ rows, which hardly qualifies as "Big Data."
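As a sanity check, pandas on its own handles files of this size when they are read in chunks, so the file size alone should not be the problem. A minimal sketch (the helper name `read_csv_chunked` is my own, not part of my program):

```python
import pandas as pd

def read_csv_chunked(path, chunksize=50_000):
    """Read a CSV in fixed-size chunks and concatenate the pieces,
    keeping peak memory bounded by the chunk size."""
    chunks = pd.read_csv(path, chunksize=chunksize, low_memory=False)
    return pd.concat(chunks, ignore_index=True)
```

Reading my 200k-row file this way outside the GUI completes quickly, which makes me suspect the freeze is in how my function is wired into tkinter rather than in pandas itself.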
The following is my attempt at an implementation:
def File_DATALOG():
    global df_LOG
    try:
        dataloggerfile = tk.filedialog.askopenfilename(
            parent=root,
            title='Choose Logger File',
            filetypes=(("csv files", "*.csv"),
                       ("All Files", "*.*")))
        if len(dataloggerfile) == 0:
            return None
        lb.insert(tk.END, dataloggerfile)
        if dataloggerfile[-4:] == ".csv":
            df_LOG = pd.read_csv(dataloggerfile)
            # Re-read with the preamble skipped if the first read picked
            # up header junk instead of real column names.
            if 'Unnamed: 1' in df_LOG.columns:
                df_LOG = pd.read_csv(dataloggerfile, skiprows=5, low_memory=False)
        else:
            df_LOG = pd.read_excel(dataloggerfile, skiprows=5)
        df_LOG.rename(columns={'Date/Time': 'DateTime'}, inplace=True)
        df_LOG.drop_duplicates(subset=None, keep=False, inplace=True)
        df_LOG['DateTime'] = df_LOG['DateTime'].apply(lambda x: insert_space(x, 19))
        df_LOG['DateTime'] = pd.to_datetime(df_LOG['DateTime'], dayfirst=False, errors='coerce')
        df_LOG.sort_values('DateTime', inplace=True)
        df_LOG = df_LOG[~df_LOG.DateTime.duplicated(keep='first')]
        # Upsample to one row per second, forward-filling the gaps.
        df_LOG = df_LOG.set_index('DateTime').resample('1S').pad()
        print(df_LOG)
        columnsDict['Logger'] = df_LOG.columns.to_list()
    except Exception as ex:
        tk.messagebox.showerror(title="Title", message=ex)
    return None
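Since the whole load runs on the tkinter main thread, I also tried moving it to a worker thread so the mainloop keeps processing events. This is only a sketch of that idea (the names `load_in_background` and `load_func` are mine, not from my program; in the real GUI I poll the queue with `root.after(...)` instead of blocking):

```python
import queue
import threading

def load_in_background(load_func, result_queue):
    """Run a slow load off the main (GUI) thread and hand the result
    back through a queue, so a Tk mainloop stays responsive."""
    def worker():
        try:
            result_queue.put(("ok", load_func()))
        except Exception as ex:
            result_queue.put(("error", ex))
    t = threading.Thread(target=worker, daemon=True)
    t.start()
    return t
```

Is this the right direction, or is there something in the pandas pipeline above (e.g. the per-second resample) that is the real cause of the freeze on larger files?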