I have approximately 1 million text files stored in S3. I want to rename all of the files based on the name of the folder they sit in.
How can I do that in Spark with Scala?
I am looking for some sample code.
I am using Zeppelin to run my Spark script.
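To make the intent concrete, here is a minimal sketch of the kind of renaming I am after, using the Hadoop FileSystem API. The bucket name, folder layout, and the folderName- prefix convention are made up for illustration, and as far as I know a "rename" on S3 is really a copy followed by a delete, so doing this for a million files one at a time will be slow.

import java.net.URI
import org.apache.hadoop.fs.{FileSystem, Path}

// Hypothetical bucket and layout, for illustration only.
val root = new Path("s3://my-bucket/data")

// Get the FileSystem for the S3 URI from the SparkContext's Hadoop configuration.
val conf = sc.hadoopConfiguration
val fs = FileSystem.get(new URI("s3://my-bucket"), conf)

// For each sub-folder, prefix every file in it with the folder's own name.
fs.listStatus(root).filter(_.isDirectory).foreach { dir =>
  val folderName = dir.getPath.getName
  fs.listStatus(dir.getPath).filter(_.isFile).foreach { file =>
    val renamed = new Path(dir.getPath, s"$folderName-${file.getPath.getName}")
    fs.rename(file.getPath, renamed) // on S3 this is a copy + delete per file
  }
}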
I have tried the code below, as suggested in an answer:
import org.apache.hadoop.fs._
val src = new Path("s3://trfsmallfffile/FinancialLineItem/MAIN")
val dest = new Path("s3://trfsmallfffile/FinancialLineItem/MAIN/dest")
val conf = sc.hadoopConfiguration // assuming sc = spark context
val fs = Path.getFileSystem(conf) // this is the line that fails to compile
fs.rename(src, dest)
But I am getting the error below:
<console>:110: error: value getFileSystem is not a member of object org.apache.hadoop.fs.Path
val fs = Path.getFileSystem(conf)
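If I read the Hadoop API correctly, getFileSystem is an instance method on a Path, not a static method on the Path object, which is exactly what the compiler error is saying. Presumably the snippet should instead look like this (same paths as above; FileSystem.get(uri, conf) should work as well):

import org.apache.hadoop.fs._

val src = new Path("s3://trfsmallfffile/FinancialLineItem/MAIN")
val dest = new Path("s3://trfsmallfffile/FinancialLineItem/MAIN/dest")
val conf = sc.hadoopConfiguration // sc is the SparkContext that Zeppelin provides

// Call getFileSystem on a Path instance rather than on the Path object.
val fs = src.getFileSystem(conf)
fs.rename(src, dest) // returns false if the rename cannot be performed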