I am filtering a DataFrame, and when I pass an integer value to the filter condition, it only returns rows where the column value, rounded to an integer, satisfies the condition. Why is this happening? See the screenshot below: the two filters give different results. I am using Spark 2.2 and tested with Python 2.6 and Python 3.5; the results are the same.
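Since the screenshot may not render, here is a minimal sketch of the two filters I am comparing. The column name `value` and the sample data are placeholders for illustration, not my actual table:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical stand-in for my real data; my actual column comes from a table.
df = spark.createDataFrame([(0.6,), (1.2,), (1.7,), (2.4,)], ["value"])

# Filter 1: compare against an integer literal -- this behaves as if the
# column were rounded to an integer before the comparison.
df.filter(df["value"] > 1).show()

# Filter 2: compare against a double literal -- this gives a different result.
df.filter(df["value"] > 1.0).show()
```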
Update
I tried the same thing with Spark SQL. If I do not cast the field to double, it gives the same answer as the first filter above. However, if I cast the column to double before filtering, it gives the correct answer.
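Roughly, this is what I ran in Spark SQL (again with placeholder names; `t` and `value` stand in for my real view and column):

```python
# Register the DataFrame as a temporary view so it can be queried with SQL.
df.createOrReplaceTempView("t")

# Without a cast: same (unexpected) result as the integer-literal filter above.
spark.sql("SELECT * FROM t WHERE value > 1").show()

# Casting the column to double first returns the rows I expect.
spark.sql("SELECT * FROM t WHERE CAST(value AS DOUBLE) > 1").show()
```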