
I have lines like these, and I want to know how many lines I actually have...

09:16:39 AM  all    2.00    0.00    4.00    0.00    0.00    0.00    0.00    0.00   94.00
09:16:40 AM  all    5.00    0.00    0.00    4.00    0.00    0.00    0.00    0.00   91.00
09:16:41 AM  all    0.00    0.00    4.00    0.00    0.00    0.00    0.00    0.00   96.00
09:16:42 AM  all    3.00    0.00    1.00    0.00    0.00    0.00    0.00    0.00   96.00
09:16:43 AM  all    0.00    0.00    1.00    0.00    1.00    0.00    0.00    0.00   98.00
09:16:44 AM  all    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
09:16:45 AM  all    2.00    0.00    6.00    0.00    0.00    0.00    0.00    0.00   92.00

Is there a way to count them all using linux commands?

built1n
Alucard

28 Answers


Use wc:

wc -l <filename>

This will output the number of lines in <filename>:

$ wc -l /dir/file.txt
3272485 /dir/file.txt

Or, to omit the <filename> from the result, use wc -l < <filename>:

$ wc -l < /dir/file.txt
3272485

You can also pipe data to wc:

$ cat /dir/file.txt | wc -l
3272485
$ curl yahoo.com --silent | wc -l
63
Mike
user85509
  • this is great!! you might use awk to get rid of the file name appended to the line count as such: `wc -l | awk '{print $1}'` – CheeHow Apr 03 '14 at 04:25
  • Even shorter, you could do `wc -l < ` – Tensigh May 16 '14 at 06:32
  • This gives me one extra line than all the lines? – CMCDragonkai Jun 02 '14 at 05:33
  • @GGB667 you can also get rid of the file name with `cat | wc -l` – baptx Feb 10 '15 at 12:42
  • and with `watch wc -l ` you can follow this file in real-time. That's useful for log files, for example. – DarkSide Jun 02 '15 at 13:06
  • None of the suggestions in this answer are actually bash answers. But while we're recommending other tools, you could avoid the whitespace by just using awk: `awk 'END{print NR}' /dir/file.txt`, or sed: `sed -n '$=' /dir/file.txt`. Or heck, if you wanted an actual **bash** solution, you could count the lines in a loop! `while read _; do ((n++)); done < /dir/file.txt; echo $n` – ghoti Aug 30 '16 at 18:35
  • So simple, thanks. How could I write this into a variable inside a bash script, so that I can collect line counts from various files and then use those variables later on in my script? So like $LINECOUNT1 (from file1.txt), $LINECOUNT2 (from file2.txt), etc. And then if I want to I can just take a sum of variable1 + variable2 + variable3. – MitchellK Jun 19 '17 at 14:24
  • Never mind, figured it out: `WC1=$(wc -l < file1.txt)` `WC2=$(wc -l < file2.txt)` – MitchellK Jun 19 '17 at 14:29
  • Beware that `wc -l` counts newlines. If you have a file with 2 lines of text and one newline symbol between them, wc will output "1" instead of "2". – Konstantin Jul 24 '17 at 14:11
  • @user85509 `wc -l` gives the number of newlines, which might be different from the actual number of lines in a file. (If the last line is unterminated, `wc -l` gives 1 less than the actual number of lines.) – asdf Jun 17 '18 at 06:50
  • In a bash script, how do I assign the output of `wc -l < /dir/file.txt` to a variable? – sveti petar Dec 04 '18 at 09:09
  • @jovan I would use the `$()` (command substitution) operator. – Dragas Feb 08 '19 at 11:31
  • @asdf Actually, `wc -l` usually gives the real number of lines in a compliant Linux text file. The last line in a file is always supposed to end with `\n`, so that `cat` prints the prompt on a new line, `wc -l` gives the right line count, etc. A lot of text editors (and IDEs) will always introduce a newline at the end of a text file when you save it for this reason. So you shouldn't assume you need to increment; if you care, you should check whether the file is non-compliant (the last char is not `'\n'`), and add one in that case. – Theodore Murdock Jul 05 '19 at 21:12
  • **This answer is not POSIX-compliant and can easily miscount lines.** `wc` counts newlines, the character, and not lines. This will lead to miscounts if your EOF is not `\n`, which POSIX does not require. I've answered this in detail [here](https://stackoverflow.com/a/59573533/2899048). – Chiru Jan 03 '20 at 05:22
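The caveat raised in the last few comments is easy to demonstrate. Below is a small sketch (temporary files created on the spot, names illustrative): wc -l counts newline characters, so a final unterminated line is missed, while grep -c ^ counts lines started:

```shell
# A file with two lines of text but no trailing newline:
tmp=$(mktemp -d)
printf 'one\ntwo' > "$tmp/no_trailing_nl.txt"

wc -l < "$tmp/no_trailing_nl.txt"    # prints 1 -- only one \n in the file
grep -c ^ "$tmp/no_trailing_nl.txt"  # prints 2 -- two lines were started

rm -r "$tmp"
```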

To count all lines use:

$ wc -l file

To filter and count only lines with pattern use:

$ grep -w "pattern" -c file  

Or use -v to invert match:

$ grep -w "pattern" -c -v file 

See the grep man page for the -e, -i and -x options.
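A quick sketch with an illustrative file and pattern (both made up for the example):

```shell
tmp=$(mktemp -d)
printf 'error: disk full\nok\nerror: timeout\n' > "$tmp/app.log"

grep -w "error" -c "$tmp/app.log"     # 2 -- lines containing the word "error"
grep -w "error" -c -v "$tmp/app.log"  # 1 -- lines NOT containing it

rm -r "$tmp"
```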

Davis Broda
Lauro Oliveira
  • Oddly, sometimes `grep -c` works better for me, mainly due to `wc -l`'s annoying "feature" of padding the count with spaces. – MarkHu Sep 28 '16 at 01:07
  • Additionally, when your last line does not end with an LF or CRLF, `wc -l` gives a wrong number of lines, as it only counts line endings. So `grep` with a pattern like `^.*$` will actually give you the true line count. – Nexonus Mar 16 '21 at 15:55
wc -l <file.txt>

Or

command | wc -l
John Kugelman

wc -l does not count lines.

Yes, this answer may be a bit late to the party, but I haven't found anyone document a more robust solution in the answers yet.

Contrary to popular belief, POSIX does not require files to end with a newline character at all. Yes, the definition of a POSIX 3.206 Line is as follows:

A sequence of zero or more non-<newline> characters plus a terminating <newline> character.

However, what many people are not aware of is that POSIX also defines POSIX 3.195 Incomplete Line as:

A sequence of one or more non-<newline> characters at the end of the file.

Hence, files without a trailing LF are perfectly POSIX-compliant.

If you choose not to support both EOF types, your program is not POSIX-compliant.

As an example, let's have a look at the following file.

1 This is the first line.
2 This is the second line.

No matter the EOF, I'm sure you would agree that there are two lines. You figured that out by looking at how many lines have been started, not by looking at how many lines have been terminated. In other words, as per POSIX, these two files both have the same number of lines:

1 This is the first line.\n
2 This is the second line.\n

1 This is the first line.\n
2 This is the second line.

The man page is relatively clear about wc counting newlines, with a newline just being a 0x0a character:

NAME
       wc - print newline, word, and byte counts for each file

Hence, wc doesn't even attempt to count what you might call a "line". Using wc to count lines can very well lead to miscounts, depending on the EOF of your input file.

POSIX-compliant solution

You can use grep to count lines just as in the example above. This solution is both more robust and precise, and it supports all the different flavors of what a line in your file could be:

$ grep -c ^ FILE
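A minimal demonstration with illustrative temp files: grep -c ^ reports two lines for both flavors of the example file above, while wc -l misses the incomplete line:

```shell
tmp=$(mktemp -d)
printf 'a\nb\n' > "$tmp/complete.txt"    # terminating newline present
printf 'a\nb'  > "$tmp/incomplete.txt"   # POSIX "incomplete line" at EOF

grep -c ^ "$tmp/complete.txt"     # 2
grep -c ^ "$tmp/incomplete.txt"   # 2
wc -l < "$tmp/incomplete.txt"     # 1 -- counts newline characters only

rm -r "$tmp"
```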
Claudio
Chiru
  • This should be the accepted answer. Not only because it is correct, but also because `grep` is more than twice as fast as `wc`. – Eric Jul 02 '21 at 08:38
  • Wow, this is a good answer. It needs to be the accepted answer because of the good explanation, and the POSIX specs are clearly outlined. – netrox Sep 25 '21 at 21:33
  • Very nice: you might want to comment on [this](https://stackoverflow.com/q/729692/8344060) – kvantour Nov 03 '21 at 19:57
  • Spot on with what was needed in the question – erPe Sep 12 '22 at 09:22
  • This is the best answer because this is true, more precise than the other answers, and **that darn set of spaces at the beginning of `wc` output doesn't show up with `grep`**. I'm using the number of lines in a file for mathematical processing in a program, and those spaces are a pain, especially because I can't use `cut` since I don't know how many digits are going to be in the number of lines, so I can't always just cut out the number. This just outputs a number and nothing but a number. It should be the accepted answer :) – O5 Command Stands With Ukraine Mar 25 '23 at 14:47
  • Another reason why a file isn't required to finish with a newline: `grep -f ` will see the last line as a pattern and match anything. – AxelH Apr 19 '23 at 12:41

There are many ways; using wc is one.

wc -l file

others include

awk 'END{print NR}' file

sed -n '$=' file (GNU sed)

grep -c ".*" file
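For a newline-terminated file, all four commands agree; a quick sanity check with an illustrative temp file:

```shell
tmp=$(mktemp -d)
printf 'a\nb\nc\n' > "$tmp/f.txt"

wc -l < "$tmp/f.txt"               # 3
awk 'END{print NR}' "$tmp/f.txt"   # 3
sed -n '$=' "$tmp/f.txt"           # 3
grep -c ".*" "$tmp/f.txt"          # 3

rm -r "$tmp"
```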
ghostdog74
  • Yes, but `wc -l file` gives you the number of lines AND the filename. To get just the number you can do: `wc -l < /filepath/filename.ext` – ggb667 Nov 22 '13 at 15:00
  • Using the GNU grep -H argument returns filename and count: `grep -Hc ".*" file` – Zlemini Oct 28 '16 at 19:27
  • I upvoted these solutions because `wc -l` counts newline characters and not the actual lines in a file. All the other commands included in this answer will give you the right number in case you need the lines. – growlingchaos Dec 16 '19 at 13:53

The tool wc is the "word counter" in UNIX and UNIX-like operating systems, but you can also use it to count lines in a file by adding the -l option.

wc -l foo will count the number of lines in foo. You can also pipe output from a program like this: ls -l | wc -l, which will tell you how many files are in the current directory (plus one).

built1n
  • `ls -l | wc -l` will actually give you the number of files in the directory +1 for the total-size line. You can do `ls -ld * | wc -l` to get the correct number of files. – Joshua Lawrence Austill Aug 14 '17 at 19:52
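The "plus one" comes from the "total" header that ls -l prints; a sketch in a throwaway directory makes it visible:

```shell
tmp=$(mktemp -d)
touch "$tmp/a" "$tmp/b" "$tmp/c"

ls -l "$tmp" | wc -l   # 4 -- three entries plus the "total ..." line
ls "$tmp" | wc -l      # 3 -- one line per entry when output is piped

rm -r "$tmp"
```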

If you want to count the total lines of all the files in a directory, you can use find and wc:

find . -type f -exec wc -l {} +
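With `+`, wc prints one count per file plus "total" lines. If only the grand total is wanted, one option (a sketch, with an illustrative *.log pattern) is to concatenate first:

```shell
# Per-file counts, plus one or more "total" lines:
find . -type f -name '*.log' -exec wc -l {} +

# Only the grand total:
find . -type f -name '*.log' -exec cat {} + | wc -l
```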
fedorqui
storen

Use wc:

wc -l <filename>
Vivin Paliath

If all you want is the number of lines (and not the number of lines and the stupid file name coming back):

wc -l < /filepath/filename.ext

As previously mentioned these also work (but are inferior for other reasons):

awk 'END{print NR}' file       # not on all unixes
sed -n '$=' file               # (GNU sed) also not on all unixes
grep -c ".*" file              # overkill and probably also slower
ggb667
  • This answer was posted 3 years after the question was asked and it is just copying other ones. The first part is trivial, and the second is all [ghostdog's answer](http://stackoverflow.com/a/3137621/1983854) was adding. Downvoting. – fedorqui Jun 10 '15 at 15:32
  • 4 years on... downvoting. Let's see if we can get a decade-long downvote streak! – Damien Roche Mar 10 '16 at 17:52
  • No, you are wrong; ghostdog's answer does not answer the original question. It gives you the number of lines AND the filename. To get just the number you can do: `wc -l < /filepath/filename.ext`. Which is why I posted the answer. awk, sed and grep are all slightly inferior ways of doing this. The proper way is the one I listed. – ggb667 Dec 22 '16 at 18:41

Use nl like this:

nl filename

From man nl:

Write each FILE to standard output, with line numbers added. With no FILE, or when FILE is -, read standard input.

fedorqui
decimal
  • This is the first answer I have found that works with a file that has a single line of text that does not end in a newline, which `wc -l` reports as 0. Thank you. – Scott Joudry Sep 26 '17 at 16:36

I've been using this:

cat myfile.txt | wc -l

I prefer it over the accepted answer because it does not print the filename, and you don't have to use awk to fix that. Accepted answer:

wc -l myfile.txt

But I think the best one is GGB667's answer:

wc -l < myfile.txt

I will probably be using that from now on. It's slightly shorter than my way. I am putting up my old way of doing it in case anyone prefers it. The output is the same with those two methods.

fedorqui
Buttle Butkus
  • The first and last methods give the same result; the last one is better because it doesn't spawn an extra process. – May 31 '15 at 17:48

The above are the preferred methods, but the cat command can also be helpful:

cat -n <filename>

This shows the whole content of the file with line numbers.

Yogesh

wc -l file_name

e.g.: wc -l file.txt

This will give you the total number of lines in that file.

To get the last line, use tail -1 file_name.

Vikas Sharma

I saw this question while I was looking for a way to count the lines of multiple files, so if you want to count the lines of multiple .txt files you can do this:

cat *.txt | wc -l

It will also run on a single .txt file ;)

talsibony

wc -l <filename>

This will give you the number of lines and the filename in the output.

Eg.

wc -l 24-11-2019-04-33-01-url_creator.log

Output

63 24-11-2019-04-33-01-url_creator.log

Use

wc -l <filename> | cut -d ' ' -f1

to get only the number of lines in the output.

Eg.

wc -l 24-11-2019-04-33-01-url_creator.log | cut -d ' ' -f1

Output

63

Harsh Sarohi
  • Where is the benefit of repeating the accepted (ten-year-old) answer? – jeb Jan 06 '20 at 07:48
  • Because I couldn't find a command to get only the line count in the output in this thread. – Harsh Sarohi Jan 06 '20 at 12:30
  • It's the second example in the accepted answer: `wc -l < filename` – jeb Jan 06 '20 at 12:53
  • `wc -l < filename` gives the filename as well as the number of lines in the output. – Harsh Sarohi Jan 07 '20 at 07:36
  • No, `wc -l < filename` is different from `wc -l filename`; the first uses redirection, and then there isn't any filename in the output, as shown in [the answer from user85509](https://stackoverflow.com/a/3137099/463115) – jeb Jan 07 '20 at 07:42
cat file.log | wc -l | grep -oE '[0-9]+'
  • grep -oE '[0-9]+': returns only the digits. (Note that `\d` is not supported by grep -E; use a `[0-9]` class, or `grep -oP '\d+'` with GNU grep.)
AechoLiu

To count the number of lines and store the result in a variable, use this command:

count=$(wc -l < file.txt)
echo "Number of lines: $count"
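Building on that, the counts of several files can be summed with shell arithmetic; the file names below are illustrative and created on the spot so the sketch is self-contained:

```shell
tmp=$(mktemp -d)
printf 'a\nb\n'    > "$tmp/file1.txt"   # 2 lines
printf 'c\nd\ne\n' > "$tmp/file2.txt"   # 3 lines

total=0
for f in "$tmp/file1.txt" "$tmp/file2.txt"; do
    n=$(wc -l < "$f")     # capture each file's count
    total=$((total + n))  # accumulate
done
echo "Total lines: $total"   # Total lines: 5

rm -r "$tmp"
```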

Konstantin F

I tried wc -l to get the number of lines from a file.

To do more filtering, for example to count the number of commented lines in the file, use grep '#' Filename.txt | wc -l:

echo "Number of lines in the file $FILENAME"
wc -l < "$FILENAME"
echo "Total number of commented lines in $FILENAME"
grep '#' "$FILENAME" | wc -l

Just in case: it's also possible to do it for many files, in conjunction with the find command.

find . -name '*.java' | xargs wc -l 
Jorge Tovar
  • Don't use `xargs`. The `find` command has an `-exec` verb that is much simpler to use. Someone already suggested its use 6 years ago, although this question does not ask anything about multiple files. https://stackoverflow.com/a/28016686 – miken32 Nov 16 '21 at 21:04
wc -l file.txt | cut -f3 -d" "

Returns only the number of lines. (This relies on wc padding the count with exactly two leading spaces, which not all implementations do; `wc -l < file.txt` is more portable.)

Umur Kontacı

Redirecting/piping the file's contents to wc -l should suffice, like the following:

cat /etc/fstab | wc -l

which then provides only the number of lines.

fedorqui
tk3000

Or count all lines in subdirectories with a file name pattern (e.g. logfiles with timestamps in the file name):

wc -l ./**/*_SuccessLog.csv

(In bash, the recursive ** glob requires shopt -s globstar; it works out of the box in zsh.)
jwebuser

This drop-in shell function works like a charm. Just add the following snippet to your .bashrc file (or the equivalent for your shell environment).

# ---------------------------------------------
#  Count lines in a file
#
#  @1 = path to file
#
#  EXAMPLE USAGE: `count_file_lines $HISTFILE`
# ---------------------------------------------
count_file_lines() {
    local subj=$(wc -l "$1")  # quote "$1" so paths containing spaces work
    subj="${subj//$1/}"
    echo ${subj//[[:space:]]}
}

This works in bash, zsh, and ksh. Note that `local` and the `${var//...}` pattern replacement are not part of strict POSIX sh, so it won't run in a plain POSIX shell.

blizzrdof77

Awk saves time (and lines too):

awk '{c++};END{print c}' < file

If you want to make sure you are not counting empty lines, you can do:

awk '{/^./ && c++};END{print c}' < file
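Both variants on an illustrative three-line file with one empty line:

```shell
tmp=$(mktemp -d)
printf 'a\n\nb\n' > "$tmp/f.txt"   # three lines, the middle one empty

awk '{c++};END{print c}' < "$tmp/f.txt"          # 3 -- every line counted
awk '{/^./ && c++};END{print c}' < "$tmp/f.txt"  # 2 -- the empty line is skipped

rm -r "$tmp"
```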
smac89
  • `awk` used this way is 16 times slower than `grep -c '^'` – Eric Jul 02 '21 at 08:36
  • @Eric does `grep` also count the lines? – smac89 Jul 02 '21 at 14:50
  • Sure: `grep -c -E ^` will count the number of "start of line" markers, hence the number of lines. – Eric Jul 06 '21 at 06:56
  • @Eric Ah cool. I was going to suggest you post that answer, but it looks like [someone](https://stackoverflow.com/a/3137127/2089675) else already did. Anyway, when I posted this answer I had just discovered `awk`, and this was one of the many things I discovered it could do. I also just tested with a 1 GB file, and awk was only 4x slower, not 16x. I created the test file using `base64 /dev/urandom | head -c 1000000000`, but with smaller files (which is most likely what these answers will be used for) the speed is hardly variable. – smac89 Jul 06 '21 at 15:30
  • Yeah, I also get a ratio of 4 with this sort of file. So depending on the file, your mileage may vary. The point is that it's always in favour of `grep`. – Eric Aug 11 '21 at 12:17

I know this is old, but still: count filtered lines.

My file looks like:

Number of files sent
Company 1 file: foo.pdf OK
Company 1 file: foo.csv OK
Company 1 file: foo.msg OK
Company 2 file: foo.pdf OK
Company 2 file: foo.csv OK
Company 2 file: foo.msg Error
Company 3 file: foo.pdf OK
Company 3 file: foo.csv OK
Company 3 file: foo.msg Error
Company 4 file: foo.pdf OK
Company 4 file: foo.csv OK
Company 4 file: foo.msg Error

If I want to know how many files are sent OK:

grep "OK" <filename> | wc -l

OR

grep -c "OK" filename
Benjamin W.

As others said, wc -l is the best solution, but for future reference you can use Perl:

perl -lne 'END { print $. }'

$. contains the current line number, and the END block executes at the end of the script.

Majid Azimi

I just made a program to do this (with Node):

npm install gimme-lines
gimme-lines verbose --exclude=node_modules,public,vendor --exclude_extensions=html

https://github.com/danschumann/gimme-lines/tree/master

dansch

If you're on a BSD-based system like macOS, I'd recommend the GNU version of wc. It doesn't trip up on certain binary files the way BSD wc does, and its performance is at least somewhat usable. BSD tail, on the other hand, is painfully slow.

As for AWK, one minor caveat: since it operates under the default assumption of \n-delimited lines, if your file happens to lack a trailing newline delimiter, AWK will count one more line than either BSD or GNU wc. Also, if you're piping in data with no newlines at all (such as the output of echo -n), then depending on whether you measure in the END { } section or at FNR==1, the value of NR will differ.

RARE Kpop Manifesto