I have a fresh installation of Hadoop 2.8 for Spark 2.2.1. When I start pyspark, Spark throws java.lang.NumberFormatException: For input string: "100M".
I am following this question for a solution.
Additional info: I am trying to create Spark sessions with AWS ARN roles so that Spark can access different data sources using the AssumeRole capability in AWS (a sketch of what I'm attempting is below).
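To clarify what I mean by "assume role capability", here is a rough sketch of what I'm trying to do. The role ARN, session name, and bucket path are placeholders; it relies on hadoop-aws's TemporaryAWSCredentialsProvider for passing the temporary STS credentials to S3A, which is my understanding of why Hadoop 2.8+ is needed:

```python
import boto3
from pyspark.sql import SparkSession

ROLE_ARN = "arn:aws:iam::123456789012:role/my-data-access-role"  # placeholder

# Assume the IAM role via STS to get temporary credentials.
sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn=ROLE_ARN,
    RoleSessionName="spark-session",  # placeholder session name
)["Credentials"]

# Build a SparkSession whose S3A connector uses those temporary credentials.
spark = (
    SparkSession.builder
    .appName("assume-role-test")
    .config("spark.hadoop.fs.s3a.aws.credentials.provider",
            "org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider")
    .config("spark.hadoop.fs.s3a.access.key", creds["AccessKeyId"])
    .config("spark.hadoop.fs.s3a.secret.key", creds["SecretAccessKey"])
    .config("spark.hadoop.fs.s3a.session.token", creds["SessionToken"])
    .getOrCreate()
)

# Read data from a bucket that only the assumed role can access (placeholder path).
df = spark.read.csv("s3a://my-bucket/some-data.csv", header=True)
```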
Edit: I installed Hadoop 2.8 for Spark 2.2.1. I previously had Hadoop 2.7 by default, but it doesn't support AWS roles for Spark sessions.