This is not related to Spark; you can try that division in pretty much any programming language and you will get the same result:
print(10.62 / 100) # result is always 0.10619999999999999
In most programming languages, floating-point arithmetic follows the IEEE 754 standard.
Here's a clearer explanation of how the IEEE 754 standard works:
In the IEEE-754 standard, hardware designers are allowed any value of error/epsilon as long as it's less than one half of one unit in the last place, and the result only has to be less than one half of one unit in the last place for one operation.
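In other words, the stored value itself can already be off by up to half a ULP before any arithmetic happens. You can see this without Spark, using only Python's standard decimal module (a minimal illustration):

from decimal import Decimal

# Decimal(float) exposes the exact binary value the literal is stored as.
print(Decimal(10.62) < Decimal("10.62"))  # True: the stored double is slightly below 10.62
print(10.62 / 100 == 0.1062)              # False: the quotient is a different double than 0.1062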
If you stick with doubles, I'm afraid the only real fix is rounding.
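For example, with pyspark.sql.functions.round (a sketch assuming df is your DataFrame; the scale of 4 decimal places is just a placeholder):

from pyspark.sql.functions import lit, regexp_replace, round as spark_round

# Strip the "%" sign, divide as a double, then round away the binary noise.
df = df.withColumn(
    "percentage",
    spark_round(regexp_replace(lit("10.62%"), "%", "").cast("double") / 100, 4),
)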
Another possible solution is to use BigDecimal (DecimalType in Spark). You have to specify a precision, but only for the input; you don't have to care about the precision of the result:
from pyspark.sql.functions import lit, regexp_replace
from pyspark.sql.types import DecimalType

df.withColumn("percentage", regexp_replace(lit("10.62%"), "%", "").cast(DecimalType(10, 2)) / 100)
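Here DecimalType(10, 2) means at most 10 digits in total with 2 of them after the decimal point; Spark derives the precision and scale of the division result on its own, which is why only the input column needs an explicit type.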