We have a Hadoop cluster, HDP 2.6, running on Red Hat 7.x machines.
We ran the following command to capture the files that have corrupted blocks:
Example 1
[root@master_3 ~]# su hdfs
[hdfs@master_3 root]$ hdfs fsck / -list-corruptfileblocks
blk_1097240344  /localhdp/Rtrone/intercept_by_type/2018/4/10/16/2018_4_03_11_45.parquet/part-00002-be0f80a9-2c7c-4c50-b18d-db0f94a98cff.snly.parquet
blk_1097240348  /localhdp/Rtrone/intkjd_country/2018/4/10/16/2018_4_03_11_45.parquet/part-00003-8600d0e2-c6b6-49b7-89cd-ef243cda4c5e.snly.parquet
The filesystem under path '/' has 2 CORRUPT files
So it seems the files are:
/localhdp/Rtrone/intercept_by_type/2018/4/10/16/2018_4_03_11_45.parquet/part-00002-be0f80a9-2c7c-4c50-b18d-db0f94a98cff.snly.parquet
/localhdp/Rtrone/intkjd_country/2018/4/10/16/2018_4_03_11_45.parquet/part-00003-8600d0e2-c6b6-49b7-89cd-ef243cda4c5e.snly.parquet
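Before deleting anything, I assume it can also help to check which blocks and DataNodes each of these files maps to, to see whether the replicas are really lost or a DataNode is just down. A sketch of that check, run against the first file from the list above:
# print each block of the file together with the DataNodes holding its replicas
hdfs fsck /localhdp/Rtrone/intercept_by_type/2018/4/10/16/2018_4_03_11_45.parquet/part-00002-be0f80a9-2c7c-4c50-b18d-db0f94a98cff.snly.parquet -files -blocks -locations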
From searching on Google, I understand that the procedure to handle the corrupted blocks in these files should be as follows:
This will delete the corrupted HDFS blocks:
hdfs fsck / -delete
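As a side note, I assume fsck can also be given a narrower path than '/', so the -delete only scans the directory tree that actually holds the corrupted files, for example:
# limit the scan and delete to the affected directory instead of the whole filesystem
hdfs fsck /localhdp/Rtrone -delete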
Run hdfs fsck / -list-corruptfileblocks again to find out whether the corrupted blocks were deleted successfully:
hdfs fsck / -list-corruptfileblocks
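I assume the summary counters can also be used to confirm that the corrupt/missing counts dropped back to zero, for example:
# grep the fsck summary for the corrupt / missing counters
hdfs fsck / | egrep -i 'corrupt|missing'
# the dfsadmin report summary should also show the missing-blocks counter
hdfs dfsadmin -report | grep -i missing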
If corrupted blocks still exist (as in Example 1), then we need to delete the files themselves, as follows:
hdfs dfs -rm /localhdp/Rtrone/intercept_by_type/2018/4/10/16/2018_4_03_11_45.parquet/part-00002-be0f80a9-2c7c-4c50-b18d-db0f94a98cff.snly.parquet
hdfs dfs -rm /localhdp/Rtrone/intkjd_country/2018/4/10/16/2018_4_03_11_45.parquet/part-00003-8600d0e2-c6b6-49b7-89cd-ef243cda4c5e.snly.parquet
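In case the list of corrupted files ever gets long, I assume the removal could be scripted from the fsck output itself (assuming each output line is the block ID followed by the file path, as in Example 1). A rough sketch:
# delete every file reported by -list-corruptfileblocks
# (field 2 of each "blk_..." line is assumed to be the file path)
hdfs fsck / -list-corruptfileblocks 2>/dev/null \
  | grep '^blk_' \
  | awk '{print $2}' \
  | while read -r f; do
      hdfs dfs -rm "$f"    # add -skipTrash to bypass the .Trash folder
    done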
Am I right, or am I missing something?
Please let me know what I need to add or update in my procedure.