As we can read here about the -put command:
This command is used to copy files from the local file system to the
HDFS filesystem. This command is similar to the -copyFromLocal command.
This command will not work if the file already exists unless the -f
flag is given to the command. This overwrites the destination if the
file already exists before the copy.
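For example (data.txt here is just an illustrative name), the overwrite behaviour looks like this:
- Copy a local file into HDFS; the first attempt succeeds
hadoop fs -put data.txt data.txt
- Running the same command again fails, because the destination now exists
hadoop fs -put data.txt data.txt
- Adding the -f flag forces the overwrite
hadoop fs -put -f data.txt data.txt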
Which makes it clear why it doesn't work and throws the No such file or directory
message: it can't find any file named project-data.txt
in the current directory of your local filesystem.
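You can check this yourself before calling -put (running both commands from the directory that's supposed to contain the file):
- Verify the file actually exists in your local working directory
ls project-data.txt
- Only then copy it into HDFS
hadoop fs -put project-data.txt project-data.txt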
But you plan on moving a file between directories inside HDFS, so instead of using -put,
you can simply use the -mv
command, just as you would on your local filesystem!
Tested it out on my own HDFS as follows:
- Create the source and destination directories in HDFS
hadoop fs -mkdir source_dir dest_dir
- Create an empty file under the source directory (just for the test; on older Hadoop releases, -touchz is the equivalent for creating a zero-length file)
hadoop fs -touch source_dir/test.txt
- Move the empty file to the destination directory
hadoop fs -mv source_dir/test.txt dest_dir/test.txt
(Notice how the /user/username/
part of the path is not needed for either the file or the destination directory, because HDFS resolves relative paths against your home directory, /user/username/, by default. Also note that writing the destination out as dest_dir/test.txt is optional here: when the destination is an existing directory, the moved file simply keeps its name.)
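For example, both of the following are equivalent to the move above, assuming dest_dir already exists and your HDFS home directory is /user/username/:
- Destination given as a directory; the file keeps its original name
hadoop fs -mv source_dir/test.txt dest_dir
- The same move spelled out with absolute paths
hadoop fs -mv /user/username/source_dir/test.txt /user/username/dest_dir/test.txt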
You can also verify the move from the command line before opening the UI:
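- The file is no longer listed under the source directory
hadoop fs -ls source_dir
- ...and it now shows up under the destination directory
hadoop fs -ls dest_dir
And you can see below with the HDFS browser that the empty text file has been moved to the destination directory: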
