1

I need to search through a directory which contains many subdirectories, each of which contains files. The files are named like question1234_01, where 1234 is a run of random digits and the suffix _01 numbers the messages that share the prefix, meaning they are part of the same continuing thread.

find . -name 'quest*' | cut -d_ -f1  | awk '{print $1}' | uniq -c | sort -n  

example output:

1 quest1234    
10 quest1523

This searches for all the files and then sorts them in order.

What I want to do is print all the files belonging to the prefix with the most occurrences; in my example, that is the one with 10 matches.

So it should only output quest1523_01 through quest1523_11.

  • Possible duplicate of [Find all files with a filename beginning with a specified string?](https://stackoverflow.com/q/4034896/608639) – jww May 21 '19 at 01:34
  • `find` outputs full paths. Where does your pipeline filter out to get the basename? Was the awk supposed to do that? – William Pursell May 21 '19 at 09:51

3 Answers

1

If I understood you correctly, and you want a list of items sorted by frequency, you can pipe through something like:

| sort | uniq -c | sort -k1nr

Eg:

Input:

file1
file2
file1
file1
file3
file2
file2
file1
file4

Output:

4 file1
3 file2
1 file3
1 file4

Update

By the way, what are you using awk for?

find . -name 'quest*' | cut -d_ -f1  | sort | uniq -c | sort -k1nr | head -n10

This returns the 10 items found most often.
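
If you only need the single most frequent prefix and then every file that belongs to it, a possible follow-up is the untested sketch below (it assumes GNU tools and no whitespace in the paths, and it traverses the tree twice):

# prefix that occurs most often, including its leading path, e.g. ./somedir/quest1523
top=$(find . -name 'quest*' | cut -d_ -f1 | sort | uniq -c | sort -k1nr \
      | head -n1 | awk '{print $2}')

# list every file whose name starts with that prefix
find . -path "${top}_*"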

Update

Here is a much improved version. The only drawback is that it does not sort by the number of occurrences; however, I'm going to figure out how to fix that :)

find . -name 'question*' | sort \
    | sed "s#\(.*/question\([0-9]\+\)_[0-9]\+\)#\2 \1#" \
    | awk '{ cnt[$1]++; files[$1][NR] = $2 } END{for(i in files){ print i" ("cnt[i]")"; for (j in files[i]) { print "    "files[i][j] } }}'
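
One possible way to also get the groups ordered by the number of occurrences is the untested sketch below; it needs GNU awk 4+ for PROCINFO["sorted_in"], on top of the multidimensional arrays the one-liner above already uses:

find . -name 'question*' | sort \
    | sed "s#\(.*/question\([0-9]\+\)_[0-9]\+\)#\2 \1#" \
    | gawk '{ cnt[$1]++; files[$1][NR] = $2 }
        END {
            PROCINFO["sorted_in"] = "@val_num_asc"      # outer loop: groups ordered by count
            for (i in cnt) {
                printf "%s (%d)\n", i, cnt[i]
                PROCINFO["sorted_in"] = "@ind_num_asc"  # inner loop: keep input order
                for (j in files[i]) print "    " files[i][j]
                PROCINFO["sorted_in"] = "@val_num_asc"  # restore before the next group
            }
        }'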

Update

After testing on ~1.4M records (it took 23''), I decided that awk was too inefficient to handle all the grouping stuff, etc., so I wrote it in Python:

#!/usr/bin/env python

import sys, re

file_re = re.compile(r"(?P<name>.*/question(?P<id>[0-9]+)_[0-9]+)")


def runstuff(stream):
    counts = {}
    files = {}

    ## Count occurrences and remember the file names for each question id
    for infile in stream:
        infile = infile.strip()
        m = file_re.match(infile)
        _name = m.group('name')
        _id = m.group('id')
        if _id not in counts:
            counts[_id] = 0
        counts[_id] += 1
        if _id not in files:
            files[_id] = []
        files[_id].append(_name)

    ## Calculate groups: count -> list of ids having that count
    grouped = {}
    for k in counts:
        if counts[k] not in grouped:
            grouped[counts[k]] = []
        grouped[counts[k]].append(k)

    ## Print results, least frequent group first
    for k, v in sorted(grouped.items()):
        for fg in v:
            print "%s (%s)" % (fg, counts[fg])
            for f in sorted(files[fg]):
                print "    %s" % f


if __name__ == '__main__':
    runstuff(sys.stdin)

This one does the whole job of splitting, grouping and sorting, and it took just about 3'' to run on the same input file (with all the sorting added).

If you need even more speed, you could try compiling it with Cython, which is usually at least 30% faster.

Update - Cython

Ok, I just tried with Cython.

Just save the above file as calculate2.pyx. In the same folder, create setup.py:

from distutils.core import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext

setup(
    cmdclass = {'build_ext': build_ext},
    ext_modules = [Extension("calculate2", ["calculate2.pyx"])]
)

And a launcher script (I named it calculate2_run.py):

import calculate2
import sys
if __name__ == '__main__':
    calculate2.runstuff(sys.stdin)

Then make sure you have Cython installed, and run:

python setup.py build_ext --inplace

That should generate, amongst other stuff, a calculate2.so file.

Now, use calculate2_run.py as you normally would (just pipe in the results from find).
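
For example (this is just my assumed invocation; any way of piping the find output into the launcher works):

find . -name 'question*' | python calculate2_run.py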

I ran it, without any further optimization, on the same input file; this time it took 1.99''.

redShadow
  • Thanks, so how would I list all the files named file1, for example, which has 4 occurrences? As for using awk, I have only been using bash for a week, so I just use what I know. –  Oct 14 '12 at 21:44
  • You should double-filter the list, but you need to apply some caching on the search to speed things up... I'm writing a solution, wait a sec :) – redShadow Oct 14 '12 at 21:55
  • This is amazing!!!! You have gone far out of your way to help me, and for that I thank you! –  Oct 14 '12 at 23:20
  • you're welcome! It's always fun to try and solve this kind of challenges :) – redShadow Oct 14 '12 at 23:45
0

You can do something like this:

  1. Save your initial search result in a temporary file.
  2. Pick out the prefix with the highest file count
  3. Search for the prefix in that temporary file, then remove the temporary file


find . -name 'quest*' | sort -o tempf
target=$(awk -F_ '{print $1}' tempf \
         | uniq -c | sort -n | tail -1 \
         | sed 's/^ *[0-9]\+ //')
grep "$target" tempf
rm -f tempf

Note:

  1. I assumed that files with the same prefix are in the same subdirectory.
  2. The output contains paths relative to the current directory. If you want just the basename, pipe the output of the grep through something like sed 's/.*\///' (see the example below).
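
For instance, the last step with basenames only could look like this (untested sketch, reusing tempf and $target from above):

grep "$target" tempf | sed 's/.*\///'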
doubleDown
0

Your solution is not selecting the basename of the files, but I think you are looking for:

awk 'NF{ b=$(NF-1); v[b]=v[b] (v[b]?",":"") $NF; a = ++c[b] }
    a > max { max = a; n = b }
    END { split(v[n], d, ","); for (i in d) print n "_" d[i] }' FS='[/_]'

There's no need to sort the data; full sorting is very expensive.
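
Fed straight from find (my assumption about the invocation, since only the awk part is shown above), the whole command would be:

find . -name 'quest*' \
    | awk 'NF{ b=$(NF-1); v[b]=v[b] (v[b]?",":"") $NF; a = ++c[b] }
           a > max { max = a; n = b }
           END { split(v[n], d, ","); for (i in d) print n "_" d[i] }' FS='[/_]'

It prints the basenames of the files in the most frequent thread (quest1523_01, quest1523_02, ... in the question's example), though in arbitrary order.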

William Pursell