You can't save your dataset to a specific filename using the Spark API; there are a couple of workarounds for that:
- as Vladislav suggested, collect your dataset and then write it to your filesystem using the Scala/Java/Python API;
- apply repartition(1)/coalesce(1), write your dataset, and then rename the resulting part file.
Neither approach is really recommended: on large datasets, collecting to the driver can cause an OOM, and coalescing to a single partition throws away Spark's parallelism. Both are sketched below.
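A minimal Scala sketch of both workarounds, assuming `df` is a small `Dataset[String]`, `spark` is your `SparkSession`, and the paths `/tmp/output.txt` / `/tmp/output_dir` are just placeholders for your own target:

```scala
import java.io.PrintWriter
import org.apache.hadoop.fs.{FileSystem, Path}

// 1) Collect to the driver and write with plain JVM I/O.
//    Only safe when the whole dataset fits in driver memory.
val pw = new PrintWriter("/tmp/output.txt")
try df.collect().foreach(pw.println) finally pw.close()

// 2) Coalesce to a single partition, let Spark write one part file,
//    then rename that part file to the name you actually want.
df.coalesce(1).write.text("/tmp/output_dir")

val fs = FileSystem.get(spark.sparkContext.hadoopConfiguration)
val partFile = fs.globStatus(new Path("/tmp/output_dir/part-*"))(0).getPath
fs.rename(partFile, new Path("/tmp/output.txt"))
```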
As for the second issue: you are getting a Parquet file because Parquet is Spark's default output format. To write plain text instead, use:
df.write.format("text").save("/path/to/save")
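One caveat: the text source can only write a single string-typed column, so if your DataFrame has multiple columns you need to concatenate them first (or write CSV instead). A rough sketch, assuming `df` is a DataFrame whose columns can be rendered as strings (the "," separator is just an example):

```scala
import org.apache.spark.sql.functions.{col, concat_ws}

// Collapse all columns into one string column, then write as text.
df.select(concat_ws(",", df.columns.map(col): _*))
  .write
  .format("text")
  .save("/path/to/save")
```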