Does anyone see a problem with creating a dictionary whose values are DataFrames in a PySpark application, as described below? Would this cause any memory issues? Many thanks.
def create_a_dict_with_df(df1, df2, df3):
    """Return a dict mapping string keys to existing DataFrames."""
    dict_result = dict()
    dict_result['key1'] = df1  # as an example, assign a DataFrame to one value
    dict_result['key2'] = df2  # as an example, assign a DataFrame to another value
    dict_result['key3'] = df3
    return dict_result  # a dict with DataFrames as values
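
For context, here is a minimal runnable sketch of how the dict would be built and used. The SparkSession setup, the toy DataFrames, and the key name 'key2' are placeholders for illustration, not my real inputs:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("dict-of-dfs").getOrCreate()

# Toy DataFrames standing in for the real inputs
df1 = spark.createDataFrame([(1, "a")], ["id", "val"])
df2 = spark.createDataFrame([(2, "b")], ["id", "val"])
df3 = spark.createDataFrame([(3, "c")], ["id", "val"])

dfs = create_a_dict_with_df(df1, df2, df3)
dfs['key2'].show()  # look up one DataFrame by key and run an action on it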