I don't know that this functionality is built into Pandas; frankly, Excel isn't meant to be used like this, but people still tend to abuse the heck out of it. Man, I hate Excel... but that's a topic for another thread.
I think your best bet here is to define a custom function based on the logic that you know applies to these files. As I am currently in the middle of a project dealing with diverse and poorly-formatted Excel files, I'm all too familiar with this kind of garbage.
This is my suggestion, based on my understanding of the data and what you're asking. It may need to be changed depending on the specifics of your files.
last_valid = None   # most recent fully valid row (a Series copy)
lvi = None          # its index label, so we can go back and update that row
check_cols = list(df.columns)  # narrow this down if only a subset of columns determines "validity"

for i, s in df.iterrows():  # row-by-row is slow, but probably necessary in this case
    # If the whole row is valid, keep it as a reference in case
    # the following rows are not.
    if s[check_cols].notna().all():
        # store the index label and the series so we can go back and update that row
        lvi, last_valid = i, s
        continue
    elif last_valid is None:
        continue  # partial row before any valid one was seen; nothing to merge into
    else:  # here is the critical part
        extra_vals = s[s.notna()]  # cells in this row that hold actual values
        for col in extra_vals.index:
            # I'm collecting the values into a list here since I don't know
            # your values or how exactly they need to be combined
            if isinstance(last_valid[col], list):
                last_valid[col].append(extra_vals[col])
            else:
                last_valid[col] = [last_valid[col], extra_vals[col]]
            # write the merged cell back into the last valid row by label
            # (the column should be object dtype if a cell has to hold a list)
            df.at[lvi, col] = last_valid[col]

# drop the leftover (now merged) partial rows
df = df.dropna(axis=0, subset=check_cols)
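For what it's worth, here's a tiny made-up example of the behaviour I'd expect from the loop above; the column names and values are just placeholders, not your actual data:

import numpy as np
import pandas as pd

# toy frame: the middle row is a "continuation" of the first one
df = pd.DataFrame({
    "id":   [1,     np.nan, 2],
    "name": ["foo", np.nan, "bar"],
    "note": ["a",   "b",    "c"],
}).astype(object)  # object dtype so a single cell can hold a list

# ...run the loop from above...

# Afterwards I'd expect the first row's "note" to become ["a", "b"] and the
# half-empty middle row to be dropped, leaving two rows.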
Hope this works for ya!