I'm working on replacing HDFS with Ceph in my Hadoop (YARN) environment. According to my research — the guideline from Hortonworks and the question "Replace HDFS form local disk to s3 getting error" — I need to modify core-site.xml under $HADOOP_HOME/etc/hadoop.
My modification is like below:
<property>
  <name>fs.s3a.access.key</name>
  <value>xxxxxxxxxxxxxx</value>
</property>
<property>
  <name>fs.s3a.secret.key</name>
  <value>xxxxxxxxxxxxx</value>
</property>
<property>
  <name>fs.default.name</name>
  <value>s3a://bucket_name</value>
</property>
<property>
  <name>fs.defaultFS</name>
  <value>s3a://bucket_name</value>
</property>
<property>
  <name>fs.s3a.endpoint</name>
  <value>http://x.x.x.x:xxxx</value>
</property>
<property>
  <name>fs.AbstractFileSystem.s3a.imp</name>
  <value>org.apache.hadoop.fs.s3a.S3A</value>
</property>
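For what it's worth, I assume the S3A connector itself could be exercised without starting any HDFS daemons, something like the sketch below (here `bucket_name` is still a placeholder for my actual bucket, and this relies on the core-site.xml above being picked up):

```shell
# Sanity check (sketch): list the bucket through the S3A connector directly,
# bypassing the NameNode entirely. bucket_name is a placeholder.
bin/hadoop fs -ls s3a://bucket_name/
```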
However, when I try to start Hadoop with sbin/start-all.sh, I get the error below:
java.lang.IllegalArgumentException: Invalid URI for NameNode address (check fs.defaultFS): s3a://bucket_name is not of scheme 'hdfs'.
For your information, my Hadoop version is 3.2.0.
Thanks in advance for your help.