
I have a directory containing many files with different names, and some of them have the same contents.

The directory tree is like:

dir1/case-1.sc001
dir1/case-1.sc002
.................
dir1/case-1.sc010
dir1/case-1.sc011
..................
.............
dir1/case-1.sc998
dir1/case-1.sc999

There may be more or fewer than 999 files.

I want to keep the first file of each set of identical files in the main directory and move the other files with the same contents into a new directory. Is there any way to do this? I tried diff, cksum, fslint, rdfind, and fdupes (the commands I ran are shown below), but none of them worked for me on files with different names in the same directory.

diff *
fslint .
rdfind .
fdupes -r
cksum case-1*
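
What I am hoping to end up with is something like the following rough sketch (assuming bash 4+ is the interpreter, since `declare -A` fails under plain sh, and using a placeholder destination directory dups/); it is only an outline of the idea, not a tested solution:

#!/usr/bin/env bash
# Sketch: keep the first file with each checksum, move later duplicates
# into dups/ (placeholder name). Must be run with bash, not sh.
mkdir -p dups
declare -A seen                          # "CRC-size" key -> first file seen
for f in *; do
    [ -f "$f" ] || continue              # skip directories such as dups/
    sum=$(cksum "$f" | awk '{print $1 "-" $2}')   # CRC plus byte count
    if [ -n "${seen[$sum]}" ]; then
        mv -- "$f" dups/                 # same contents as an earlier file
    else
        seen[$sum]=$f                    # first occurrence (in glob order) stays
    fi
done
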
  • Show the directory tree, the input and expected output, and the code that you have tried that is **not working** – Jetchisel Jun 18 '20 at 04:45
  • Dear @Jetchisel, I have updated the post as per your direction. – astha Jun 18 '20 at 04:51
  • https://stackoverflow.com/questions/61584817/how-do-i-find-duplicate-files-by-comparing-them-by-size-ie-not-hashing-in-bas is one link. – Jetchisel Jun 18 '20 at 05:16
  • 1
    https://superuser.com/questions/386199/how-to-remove-duplicated-files-in-a-directory This link is using `rm` just change it to `mv` and point it to the destination directory. – Jetchisel Jun 18 '20 at 05:23
  • Neither is working for me. With the second link, my bash does not support `declare -A arr`, even though my bash version is 4.4.20. – astha Jun 18 '20 at 08:12
  • With the first link, I used the first accepted answer and got the error `Syntax error: "(" unexpected`. I could not figure out the cause. The second solution from the same answer creates a potential_dups directory and copies every file from the main directory into it. – astha Jun 18 '20 at 08:15

0 Answers