I have this join in a PySpark script.
d = d.join(p, [
    d.p_hash == p.hash,
    d.dy >= p.mindy,
    d.dy <= p.maxdy,
], "left") \
    .drop(p.hash) \
    .drop(p.mindy) \
    .drop(p.maxdy)
The variables 'd' and 'p' are Spark DataFrames. Is there any way to do this in pandas?
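Here's the closest I've gotten. Since pandas `merge` only supports equality keys, I merge on the hash first and apply the range condition afterwards, then re-add the rows of `d` that end up with no valid match so the result still behaves like a left join (the toy `d` and `p` frames below are made up for illustration; I also assume `d` has a unique default index):

```python
import pandas as pd

# toy stand-ins for the real Spark dataframes
d = pd.DataFrame({"p_hash": ["a", "a", "b", "c"],
                  "dy":     [1,   5,   3,   7],
                  "val":    [10,  20,  30,  40]})
p = pd.DataFrame({"hash":  ["a", "b"],
                  "mindy": [0,   4],
                  "maxdy": [3,   9]})

# 1) equality part of the join; keep d's index so rows can be tracked
m = d.reset_index().merge(p, left_on="p_hash", right_on="hash", how="left")

# 2) range part of the join; NaN comparisons come out False,
#    so unmatched hashes fail here too
ok = m["dy"].between(m["mindy"], m["maxdy"])
matched = m[ok]

# 3) d rows with no valid match at all come through once, like a left join
unmatched = m[~m["index"].isin(matched["index"])].drop_duplicates(subset="index")

# 4) recombine and drop p's columns, matching the .drop() calls in Spark
result = (pd.concat([matched, unmatched])
            .sort_values("index")
            .drop(columns=["index", "hash", "mindy", "maxdy"])
            .reset_index(drop=True))
```

With the toy data above, only the first row of `d` satisfies both conditions, and the other three rows survive as left-join non-matches, so `result` has the same four rows as `d`. Is there a more idiomatic way, or does pandas have something closer to a conditional join built in?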