
I am trying to run the following command, but I get an "Argument list too long" error. Can you help?

HOST# grep -rl 'pattern' /home/*/public_html/*
-bash: /bin/grep: Argument list too long

Is there a way to get around this error and grep for the pattern in every user's public_html directory? There are 500+ users on the same server.

John
  • Use [xargs](http://linux.die.net/man/1/xargs) to break it up into manageable chunks (one way to do that is sketched after these comments). – Paul R Apr 19 '15 at 07:44
  • @PaulR can you elaborate instead of a hint? That would be really helpful. – John Apr 19 '15 at 07:47
  • The link in the previous comment takes you to a man page for `xargs`. Note that your question is off-topic for Stack Overflow as it's not a programming question - try http://superuser.com or http://unix.stackexchange.com. – Paul R Apr 19 '15 at 07:49
  • Related: [Does "argument list too long" apply to shell builtins?](https://stackoverflow.com/questions/47443380/does-argument-list-too-long-restriction-apply-to-shell-builtins) – codeforester Nov 23 '17 at 00:45
  • This worked for me: https://www.saotn.org/bash-grep-through-large-number-files-argument-list-too-long/ – Scientist Jul 03 '18 at 15:38
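
One common way to apply the xargs suggestion above (a sketch, assuming GNU find and xargs; the -print0/-0 pair keeps filenames containing spaces or newlines intact): find emits the filenames and xargs feeds them to grep in batches small enough to stay under the argument-length limit.

find /home/*/public_html -type f -print0 | xargs -0 grep -l 'pattern'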

1 Answer


Use `find`:

find /home/*/public_html -type f -exec grep -l 'pattern' {} +

The `+` terminator tells `find` to pass the collected filenames to `grep` in batches as large as the system allows (unlike `\;`, which runs `grep` once per file), so no single invocation exceeds the argument-length limit.

However, you can also do this with `grep -r`. Its arguments should be directory names, not filenames:

grep -rl 'pattern' /home/*/public_html

This expands to just the 500+ directory names, not thousands of filenames, so it stays well under the limit.
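
To see why the original command overflowed, a quick check (a sketch, assuming a Linux system) compares the size of the expanded glob against the kernel's limit. printf can take the huge glob because it is a shell builtin, so it never hits the exec-time limit that /bin/grep did:

getconf ARG_MAX                              # kernel limit on the combined size of argv + environment, in bytes
printf '%s ' /home/*/public_html/* | wc -c   # approximate size of the expansion that failed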

Barmar
  • Right. The GNU guys REALLY screwed up grep when they gave it an argument to recursively find files. Why not give it an option to sort the output too? The UNIX command to **find** files is named `find` and the command to **g**lobally search in a file for a **r**egular **e**xpression and **p**rint the result is named `grep`. Always just use the right tool for the job as shown above. – Ed Morton Apr 19 '15 at 15:06
  • I understand where you're coming from @EdMorton, but it is like syntactic sugar in programming languages. Sometimes it is not the optimal way, but I tend to appreciate when a tool goes out of its way a little to help me do things quicker and easier. – Marton Tatai Feb 15 '16 at 08:08
  • @EdMorton I think you'll find that lots of GNU tools have convenience options to replace common combinations. It's like the -z option to `tar`, because it was so common to pipe the input/output to `gzip`. Or tools like `zless` that combine `less` and `zcat`, along with the options that were added to `less` to make it work (a few of these equivalences are written out in the sketch after these comments). – Barmar Feb 15 '16 at 11:57
  • Not only GNU tools - `sort` doesn't NEED a `-u` option since you could do `sort | uniq`. All the convenience options I can think of though, like that and the ones you mention, are in the same vein as the tool's intent (e.g. sorting uniquely, catting a gzipped file instead of a regular one) while giving grep an option to go crawling through my directory structure has nothing at all to do with the tool's intent. Why not give `cat`, `sort`, `sed`, and every other similar tool an option to go recursively looking for files? It'd make as much sense as having `grep` do it. It's simply a bad idea. – Ed Morton Feb 15 '16 at 13:45
  • @EdMorton Because searching for something in a folder is far more common than printing or sorting all files in a folder. BTW, the reason for `sort -u` is not just convenience, it's efficiency; sorting is O(n log n), so if you filter out the duplicates before sorting you can reduce the sorting time significantly. – Barmar Feb 15 '16 at 18:49
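
For reference, the convenience-option equivalences discussed in these comments, written out as a sketch (archive.tar.gz, dir, and file are hypothetical names):

tar -czf archive.tar.gz dir    # roughly: tar -cf - dir | gzip > archive.tar.gz
sort -u file                   # roughly: sort file | uniq
grep -rl 'pattern' dir         # roughly: find dir -type f -exec grep -l 'pattern' {} +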