Without further knowledge regarding the dataset, I would do the following:
If the 52 entries in each dataset are related to each other, you could flatten each dataset into a single string of sentences and then concatenate the datasets together. An LSTM can generally learn sentence boundaries (start-of-sentence and end-of-sentence, SOS and EOS) without being explicitly told where each sentence ends.
e.g.:
df1 = [['Today was a good day.'], ['Tomorrow will be even better.'], ...]
new_df1 = [sent for alist in df1 for sent in alist]
new_df1
Out[1]: ['Today was a good day.', 'Tomorrow will be even better.', ...]
new_df1 = " ".join(new_df1)
new_df1
Out[2]: 'Today was a good day. Tomorrow will be even better. ...'
If the entries are unrelated to each other, I would need more information on why you wouldn't concatenate the datasets anyway and treat each entry as an individual input; best practice is to combine all datasets into a single dataset. If you absolutely don't want to do that, you will still have to load each dataset into its own variable so the model can process it.
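The combining step above can be sketched as follows. This is a minimal example, assuming each dataset is a list of single-sentence lists as shown earlier; the names df1 and df2 are placeholders for your actual datasets:

    # Placeholder datasets: each is a list of single-sentence lists.
    df1 = [['Today was a good day.'], ['Tomorrow will be even better.']]
    df2 = [['The weather is nice.'], ['It might rain later.']]

    # Flatten each dataset, then combine everything into one corpus.
    datasets = [df1, df2]
    corpus = [sent for df in datasets for alist in df for sent in alist]

    # Join the sentences into a single string for the LSTM to tokenize.
    text = " ".join(corpus)
    print(text)
    # Today was a good day. Tomorrow will be even better. The weather is nice. It might rain later.

From here you would tokenize `text` (or `corpus`, if you prefer per-sentence inputs) and feed it to the model as one training set.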