I looked into itertools.product()'s runtime. It seems to run in O(n*m). In the worst case, when the two lists have the same length, this ends up being an inefficient O(n^2).
Assuming you're using something like a Jupyter notebook, maybe you can create the product first in a separate cell above the loop? That may help speed up the loop's cell.
So something like this:
prd = product(df2["Description"].to_list(), df1["Question"].to_list())
followed by the next cell:
for i, j in prd:
...
Edit: Some more reasoning. I think it's the use of product() that is slowing your code down, and that's what you should focus on. If your 'Description' and 'Question' columns each have 60,000 rows and you apply product() to both of them as lists, it yields 60,000 * 60,000 = 3,600,000,000 pairs, and building a list out of them means holding every pair in memory at once.
3.6 billion items is a massive list for a present-day computer to handle. If we try to imitate this with the code below:
from itertools import product

a = [0] * 60000
b = [1] * 60000
c = product(a, b)      # the iterator itself is cheap to create
print(len(list(c)))    # list() materializes all 3.6 billion pairs at once
sure enough, my computer starts to run out of memory.
However, I think we can both agree my original solution isn't satisfactory, so I found this answer, which better explains the difficulty of handling a list with several billion items and proposes a solution involving emulated lists. If you really need a list that big, I suggest looking into that or figuring out how to process the pairs concurrently.
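In the meantime, here is a minimal sketch of what consuming the product lazily in chunks could look like, so the full list is never built in memory. The chunk size and the loop body are assumptions on my part, and df1/df2 are your DataFrames from the question:

from itertools import islice, product

pairs = product(df2["Description"].to_list(), df1["Question"].to_list())  # lazy iterator

CHUNK_SIZE = 100_000  # arbitrary; tune to your memory budget
while True:
    chunk = list(islice(pairs, CHUNK_SIZE))  # pull at most CHUNK_SIZE pairs at a time
    if not chunk:
        break
    for description, question in chunk:
        ...  # your per-pair work goes here

Each pass only ever holds CHUNK_SIZE pairs in memory, so memory use stays flat, but keep in mind the total amount of work (3.6 billion iterations) is unchanged.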