53

Does anybody know a way to perform a quick fuzzy search on the Linux console?

Quite often I come across situations where I need to find a file in a project but I don't remember the exact filename.

In the Sublime Text editor I would press Ctrl+P and type part of the name, which produces a list of matching files to select from. That's an amazing feature I'm quite happy with. The problem is that in most cases I have to browse code in a console on remote machines via ssh. I'm wondering if there is a tool similar to Sublime's "Goto Anything" feature for the Linux console?

Matthias Braun
nab

12 Answers

72

You may find fzf useful. It's a general purpose fuzzy finder written in Go that can be used with any list of things: files, processes, command history, Git branches, etc.

Its install script sets up a Ctrl+T key binding for your shell. Pressing Ctrl+T lets you fuzzy-search for a file or directory and insert its path into your command line.

The following GIF shows example usage of fzf including its Vim integration:

Animation of using FZF
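To see what "fuzzy" means here, the matching idea can be imitated in plain shell: every typed character must appear in the candidate, in order, with anything in between. The `fuzzy_match` function below is a made-up toy for illustration, not part of fzf:

```shell
# Toy subsequence matcher in the spirit of fzf
# (fuzzy_match is a hypothetical helper, not an fzf command):
fuzzy_match() {
  # turn "smg" into the regex "s.*m.*g.*"
  pattern=$(printf '%s' "$1" | sed 's/./&.*/g')
  grep -i "$pattern"
}

printf 'Makefile\nsrc/main.go\ndocs/readme.md\n' | fuzzy_match smg
# → src/main.go
```

fzf itself adds interactive filtering and smarter ranking, but the subsequence idea is the same.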

Matthias Braun
Junegunn Choi
  • sounds good. Could you make a youtube video about it? I also want to know how you insert kill -9 so quickly. – anonymous Feb 17 '17 at 11:25
  • How do I jump directly to a file with `fzf`? If I write `fzf` in the terminal I find the location of a file: if I press enter it just shows me the path as if in an `echo` command. I'd like to find and access a file, whatever the extension type, not only for vim but for images and pdf and so on. – nilon Jul 12 '17 at 15:34
  • 1
    @nilon: You might find [this answer](https://unix.stackexchange.com/questions/403916/getting-started-with-fzf-on-arch-linux/403917#403917) of mine helpful to get started with fzf. – Matthias Braun Nov 11 '22 at 20:57
7

Most of these answers won't do fuzzy searching the way Sublime Text does it: they may match part of the name, but they don't have the nice 'just find all the letters in this order' behavior.

I think this is a bit closer to what you want. I put together a special version of cd ('fcd') that uses fuzzy matching to find the target directory. Super simple: just add this to your ~/.bashrc:

function joinstr { local IFS="$1"; shift; echo "$*"; }
function fcd { cd $(joinstr \* $(echo "$*" | fold -w1))*; }

This will add an * between each letter in the input, so if I want to go to, for instance,

/home/dave/results/sample/today

I can just type any of the following:

fcd /h/d/r/spl/t
fcd /h/d/r/s/t
fcd /h/d/r/sam/t
fcd /h/d/r/s/ty

Using the first as an example, this will execute cd /*h*/*d*/*r*/*s*p*l*/*t* and let the shell sort out what actually matches.

As long as the first character is correct, and one letter from each directory in the path is written, it will find what you're looking for. Perhaps you can adapt this for your needs? The important bit is:

$(joinstr \* $(echo "$*" | fold -w1))*

which creates the fuzzy search string.
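To see the fuzzy string being built, you can run joinstr by hand (a throwaway example):

```shell
function joinstr { local IFS="$1"; shift; echo "$*"; }

# "spl" is split into single characters by fold, then joined with "*":
joinstr \* $(echo "spl" | fold -w1)
# → s*p*l
```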

dlonie
  • 3
    FYI, fzf (from the other comment) does exactly that, for command history, file listing, and whatever extra thing you wanna pipe into it. Bonus: live update, highlighting and gradual async indexing (doesn't block when you're fuzzy matching a directory of 30k files). – chenglou Jan 18 '15 at 05:07
  • Agreed! Since writing that answer I stumbled across fzf somewhere along the way, and now that's what I use. Great tool. – dlonie Jun 04 '15 at 15:34
  • `shopt -s nocaseglob` before the `find /home/user/*fuzzy*/*file*/*search*/` will make case insensitive to match: `/home/user/fuzzy/FILE/Search/` or whatever – Reed Feb 08 '21 at 23:04
5

The fasd shell script is probably worth taking a look at too.

fasd offers quick access to files and directories for POSIX shells. It is inspired by tools like autojump, z and v. Fasd keeps track of files and directories you have accessed, so that you can quickly reference them in the command line.

It differs a little from a complete find of all files, as it only searches recently opened files. However, it is still very useful.

Chris Farmiloe
5
find . -iname '*foo*'

A case-insensitive find of filenames containing foo.
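A quick demonstration with a throwaway tree (the /tmp paths and file names are invented for the demo):

```shell
# Build a small demo tree:
rm -rf /tmp/fuzzydemo
mkdir -p /tmp/fuzzydemo/sub
touch /tmp/fuzzydemo/MyFooBar.txt /tmp/fuzzydemo/sub/foo.log /tmp/fuzzydemo/other.txt

# -iname matches the file name case-insensitively:
find /tmp/fuzzydemo -iname '*foo*'
# → /tmp/fuzzydemo/MyFooBar.txt and /tmp/fuzzydemo/sub/foo.log (order may vary)
```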

ssc
user1338062
  • 1
    Correct usage as in...? – user1338062 Feb 21 '14 at 18:40
  • When I run this, it fails with `find: illegal option -- i`. I'm on a Mac right now though. Maybe `find` works differently between Mac and Linux. – evanrmurphy Feb 21 '14 at 18:46
  • Yes, OSX ships its own versions of most basic utilities. The example I wrote is for GNU find. – user1338062 Feb 23 '14 at 07:34
  • 2
    @evanrmurphy: the code sample simply was wrong, the first parameter `find` takes is the folder to search; I've taken the liberty to fix the code using the current folder (`.`) as an example; OSX / GNU find are no different in that respect – ssc Jun 21 '17 at 14:02
  • @scc: ah yes, somehow I missed that missing dot. Thanks! – user1338062 Jun 21 '17 at 14:09
5

I usually use:

ls -R | grep -i [whatever I can remember of the file name]

from a directory above where I expect the file to be. The higher up you go in the directory tree, the slower this will be.

When I find the exact file name, I use it in find:

find . -name [discovered file name]

This could be collapsed into one line:

for f in $(ls --color=never -R | grep --color=never -i partialName); do find . -name "$f"; done

(I use --color=never because ls and grep are often aliased to "--color=auto", whose escape codes break the matching.)

mmrtnt
  • If the searched item may be hidden, you may alternate command like: `ls -laR | grep -i [fuzzy file name]` (added l (long format) and a (all files, including those dotted files) before R). – Semo May 14 '14 at 10:29
3

I don't know how familiar you are with the terminal, but this could help you:

find | grep 'report'
find | grep 'report.*2008'

Sorry if you already know grep and were looking for something more advanced.
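For comparison, here are both styles on a throwaway tree (file names invented for the demo); the `find -name` form suggested in the comments does the matching without piping through a second command:

```shell
# Small demo tree:
rm -rf /tmp/reportdemo && mkdir -p /tmp/reportdemo
touch /tmp/reportdemo/report-2008.pdf /tmp/reportdemo/summary-2007.txt

# Pipe style: grep filters the whole printed path
find /tmp/reportdemo | grep 'report.*2008'

# Letting find match the name itself:
find /tmp/reportdemo -name '*report*2008*'
# both print /tmp/reportdemo/report-2008.pdf
```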

Adiel Mittmann
  • 7
    That's really not how you use find. Use `find . -name 'report*2008'`. – Chris Eberle Feb 24 '12 at 23:14
  • @Chris You need to quote your search pattern to prevent the shell from expanding the glob. +1 for the point made. – jordanm Feb 24 '12 at 23:17
  • @Chris That's not what I was looking for but what's wrong with `find | grep 'report.*2008'` form? – nab Feb 24 '12 at 23:31
  • @nab it's ridiculously inefficient. – Chris Eberle Feb 24 '12 at 23:31
  • @Chris This is obviously not supposed to be run from `/`. For an average project of 10K - 100K files this will work just fine. – nab Feb 24 '12 at 23:49
  • 4
    @nab I'm sorry, there's really no argument here. My version is shorter to write and more efficient. If you really like slowing down things as a matter of habit, go for it. But when telling someone else, use the generally accepted (general case) way. – Chris Eberle Feb 24 '12 at 23:52
  • 1
    When I wrote my answer, I considered that my audience (i.e., the OP) is a beginner. I definitely didn't bother about performance here. I think that for beginners it's nice to know about a command that finds files, another one that filters and a way to compose them. – Adiel Mittmann Feb 24 '12 at 23:59
  • The "unix way" is the source of some of the worst shell scripts and practices out there. You don't need to start beginners off with bad habits. – jordanm Feb 25 '12 at 00:29
  • I'm with Chris and jordanm on this... Beginner or not, do it right the first time. Piping the output of a command like find into another means you're spawning a grep process for each line find returns, which is utterly stupid and irresponsible to do as an admin. Find is heavy already on a CPU, you don't want to add more when you can avoid it. See my answer which uses find2perl. It's much faster than find too! – Yanick Girouard Feb 25 '12 at 04:14
  • underrated, it fills exactly what I need and it's efficient enough for an instant response – Francisco Nov 28 '20 at 17:11
2

fd is a simple, fast and user-friendly alternative to find.

Demo from the GitHub project page:

user1338062
1

You can try c- (Cminus), a fuzzy directory-changing tool written as a bash script that uses bash completion. It is somewhat limited in that it only matches previously visited paths, but it is really convenient and quite fast.


GitHub project: whitebob/cminus

Introduction on YouTube: https://youtu.be/b8Bem53Cz9A

Jeremy Harris
whitebob
1

You might want to try AGREP or something else that uses the TRE Regular Expression library.

(From their site:)

TRE is a lightweight, robust, and efficient POSIX compliant regexp matching library with some exciting features such as approximate (fuzzy) matching.
At the core of TRE is a new algorithm for regular expression matching with submatch addressing. The algorithm uses linear worst-case time in the length of the text being searched, and quadratic worst-case time in the length of the used regular expression. In other words, the time complexity of the algorithm is O(M²N), where M is the length of the regular expression and N is the length of the text. The used space is also quadratic on the length of the regex, but does not depend on the searched string. This quadratic behaviour occurs only on pathological cases which are probably very rare in practice.

TRE is not just yet another regexp matcher. TRE has some features which are not there in most free POSIX compatible implementations. Most of these features are not present in non-free implementations either, for that matter.

Approximate pattern matching allows matches to be approximate, that is, allows the matches to be close to the searched pattern under some measure of closeness. TRE uses the edit-distance measure (also known as the Levenshtein distance) where characters can be inserted, deleted, or substituted in the searched text in order to get an exact match. Each insertion, deletion, or substitution adds the distance, or cost, of the match. TRE can report the matches which have a cost lower than some given threshold value. TRE can also be used to search for matches with the lowest cost.
Stabledog
Tony Laidig
  • 3
    That's very vague without explaining why he would want to use TRE regex. What does it offer that ERE or PCRE doesn't? – jordanm Feb 24 '12 at 23:18
1

You can do the following

grep -iR "text to search for" .

where "." being the starting point, so you could do something like

grep -iR "text to search" /home/

This will make grep search for the given text inside every file under /home/ and list files which contain that text.
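A minimal check with made-up files (note that this searches file contents, not file names):

```shell
# Two throwaway files, only one containing the text:
rm -rf /tmp/grepdemo && mkdir -p /tmp/grepdemo
printf 'Some TEXT to search for here\n' > /tmp/grepdemo/notes.txt
printf 'nothing relevant\n' > /tmp/grepdemo/other.txt

# -i ignores case, -R recurses into the directory:
grep -iR "text to search" /tmp/grepdemo
# → /tmp/grepdemo/notes.txt:Some TEXT to search for here
```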

dgw
Snake007uk
0

Search for a file or folder from the zsh terminal and open or navigate to it with a combination of find, fzf, vim and cd.

Install fzf, add the script below to ~/.zshrc, then reload the shell with source ~/.zshrc:

fzf-file-search() {
  item="$(find '/' -type d \( -path '/proc/*' -o -path '/dev/*' \) -prune -false -o -iname '*' 2>/dev/null | FZF_DEFAULT_OPTS="--height ${FZF_TMUX_HEIGHT:-40%} --reverse --bind=ctrl-z:ignore $FZF_DEFAULT_OPTS $FZF_CTRL_T_OPTS" $(__fzfcmd) -m "$@")"
  if [[ -d ${item} ]]; then
    cd "${item}" || return 1
  elif [[ -f ${item} ]]; then
    (vi "${item}" < /dev/tty) || return 1
  else
    return 1
  fi
  zle accept-line
}
zle     -N   fzf-file-search
bindkey '^f' fzf-file-search

Press Ctrl+F to run it (the shortcut can be changed in the bindkey '^f' line). It searches all files and folders (find), lets you pick one fuzzily (fzf), and depending on the type either navigates to the directory (cd) or opens the file in a text editor (vim).

Also quickly open recent files/folders with fasd:

fasd-fzf-cd-vi() {
  item="$(fasd -Rl "$1" | fzf -1 -0 --no-sort +m)"
  if [[ -d ${item} ]]; then
    cd "${item}" || return 1
  elif [[ -f ${item} ]]; then
    (vi "${item}" < /dev/tty) || return 1
  else
    return 1
  fi
  zle accept-line
}
zle -N fasd-fzf-cd-vi
bindkey '^e' fasd-fzf-cd-vi

This widget is bound to the keyboard shortcut Ctrl+E.

Check out other useful tips and tricks for fast navigation inside the terminal at https://github.com/webdev4422/.dotfiles

webdev4422
0

You could use find like this for a complex regex:

find . -type f -regextype posix-extended -iregex ".*YOUR_PARTIAL_NAME.*" -print

Or this for simpler glob-like matches:

find . -type f -name "*YOUR_PARTIAL_NAME*" -print

Or you could also use find2perl (which is quite a bit faster and more optimized than find), like this:

find2perl . -type f -name "*YOUR_PARTIAL_NAME*" -print | perl

If you just want to see how Perl does it, remove the | perl part and you'll see the code it generates. It's a very good way to learn by the way.

Alternatively, write a quick bash wrapper like this, and call it whenever you want:

#!/bin/bash
FIND_BASE="$1"
GLOB_PATTERN="$2"
if [ $# -ne 2 ]; then
    echo "Syntax: $(basename "$0") <FIND_BASE> <GLOB_PATTERN>"
else
    find2perl "$FIND_BASE" -type f -name "*$GLOB_PATTERN*" -print | perl
fi

Name this something like qsearch and then call it like this: qsearch . something

Yanick Girouard