I have a data set that expands horizontally as it grows, with a different number of entries per record: each user row carries repeated column groups like Address.0.Street, Address.0.City, Address.0.State, Address.1.Street, and so on.
I have a for loop that works: it iterates through the rows and appends each address set as a new row in a new DataFrame. However, the data set is growing steadily larger and the loop is slowing down. Following several other threads I've tried melt, wide_to_long, and stack, but I either lose data or can't pivot the melted frame back out to the columns I need (a sketch of the reshape I've been attempting is below the loop).
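To make the layout concrete, here is a small made-up frame of the shape I'm describing; the real data has far more users, and the number of Address.N.* groups keeps growing:

import pandas as pd

# Made-up sample: user 2 has only one address, so the Address.1.* slots are empty.
df = pd.DataFrame({
    'User ID': [1, 2],
    'Address.0.Street': ['1 Main St', '9 Elm St'],
    'Address.0.City': ['Springfield', 'Shelbyville'],
    'Address.0.State': ['IL', 'IL'],
    'Address.1.Street': ['2 Oak Ave', None],
    'Address.1.City': ['Capital City', None],
    'Address.1.State': ['IL', None],
})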
I thought I had previously found a similar address-history question on Stack Overflow, but I can no longer find it. This is my current for loop, which works and gives the results I need:
import pandas as pd

def df_transpose(df):
    rows = []
    for _, row in df.iterrows():
        user_id = row.loc['User ID']
        i = 0
        while True:
            try:
                street = row.loc['Address.{}.Street'.format(i)]
            except KeyError:
                # no more Address.N columns: done with this row
                break
            # any other error propagates on its own, which keeps it visible
            city = row.loc['Address.{}.City'.format(i)]
            state = row.loc['Address.{}.State'.format(i)]
            rows.append([user_id, street, city, state, i])
            i += 1
    return pd.DataFrame(rows, columns=['User ID', 'Street', 'City', 'State', 'position'])
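For reference, this is roughly the wide_to_long reshape I've been attempting, sketched against the made-up frame above. It assumes every address column follows the Address.N.Field pattern; wide_to_long wants the numeric suffix at the end, so 'Address.0.Street' has to be flipped to 'Street.0' first.

def flip(col):
    # 'Address.0.Street' -> 'Street.0'; leave 'User ID' and friends alone
    parts = col.split('.')
    return '{}.{}'.format(parts[2], parts[1]) if len(parts) == 3 else col

long_df = (
    pd.wide_to_long(
        df.rename(columns=flip),
        stubnames=['Street', 'City', 'State'],
        i='User ID',
        j='position',
        sep='.',
        suffix=r'\d+',
    )
    .dropna(how='all')   # drop the empty Address.1 slot for user 2
    .reset_index()
)

On the sample this gives one row per user per filled address slot, with position taken from the numeric suffix, i.e. the same shape the loop builds; whether it holds up on the real, much wider data is what I'm unsure about.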