I am writing a Google BigQuery connector for Spark, and underneath it uses the Google Hadoop connector.
Currently the Google Hadoop connector requires an environment variable (GOOGLE_APPLICATION_CREDENTIALS) pointing to the credentials JSON file.
This can be annoying to set up when you're launching clusters outside the Dataproc world.
Is it bad practice to set it at runtime in the code? Or is there a workaround to tell the Hadoop connector to ignore the environment variable, since the credentials have already been set via the "fs.gs.auth.service.account.json.keyfile" Hadoop configuration property?
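For context, this is roughly what I mean by setting it through the Hadoop configuration instead of the environment variable — a sketch using Spark's standard `spark.hadoop.*` prefix (which Spark copies into the Hadoop Configuration); the keyfile path is a placeholder:

```
# spark-defaults.conf, or passed as --conf flags to spark-submit
spark.hadoop.fs.gs.auth.service.account.enable        true
spark.hadoop.fs.gs.auth.service.account.json.keyfile  /path/to/creds.json
```

The same could presumably be done programmatically via `spark.sparkContext.hadoopConfiguration.set(...)` after the session is created — that's the "set it at runtime in the code" option I'm asking about.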
Dennis, since you're a contributor on the project, perhaps you can help this time too?