I have a problem using the grep or cut commands to get out the file sizes. I have this listing:

   4096 Feb 15 21:52 f1
      0 Feb 15 18:24 f4
6928808 Feb 10 16:59 install_flash_player_11_linux.i386.tar.gz
     87 Feb 14 18:43 sc1.sh
    281 Feb 14 19:11 sc2.sh
    168 Feb 14 21:40 sc3.sh
    345 Feb 15 21:15 sc4.sh
    278 Feb 15 19:27 sc4.sh~
      6 Feb 15 18:27 sc5.sh
    472 Feb 16 11:01 sc6.sh
    375 Feb 16 11:01 sc6.sh~
    359 Feb 17 01:18 sc7.sh
    358 Feb 17 01:17 sc7.sh~
    230 Feb 16 09:31 toUppefi.sh
    230 Feb 16 02:07 toUppefi.sh~

I need to get only the first number from each line, for example:

4096
0
...

I used `ls -l . | cut -d" " -f5` to get only the sizes, but the result is blank, because of the spaces before the numbers. When I use the delimiter " " with `-f`, it only gives me the biggest number, the one that starts at the left edge. I hope you understand my problem.
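For illustration, the failure is easy to reproduce: `cut` treats every single space as a delimiter, so each run of padding spaces produces a run of empty fields.

```shell
# The padded lines start with several empty fields, so -f1 selects
# an empty one and prints a blank line:
printf '   4096 Feb 15 21:52 f1\n' | cut -d' ' -f1

# Only the widest number has no leading padding, so it alone really
# is field 1:
printf '6928808 Feb 10 16:59 f.tar.gz\n' | cut -d' ' -f1
# prints: 6928808
```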

  • If you want to get the first field of lines like these look at `awk`. That being said you don't want to [parse the output from ls](http://mywiki.wooledge.org/ParsingLs) in the first place. – Etan Reisner Feb 17 '15 at 01:24
  • 1
    possible duplicate of [How to make the 'cut' command treat several sequential delimiters as one?](http://stackoverflow.com/questions/4143252/how-to-make-the-cut-command-treat-several-sequential-delimiters-as-one) – fedorqui Feb 17 '15 at 10:13

3 Answers

You could do `ls -l . | awk '{print $1}'`, but you should follow the general advice to avoid parsing the output of ls.

The usual way to avoid parsing the output of ls is to loop over the files to get the information you need. To get the size of the files, you could use wc -c.

for file in *; do
    if [ -e "$file" ]; then   # test that the file exists, to avoid problems with an empty directory
        wc -c "$file"
    fi
done

If you really only need the size, just pipe through awk:

for file in *; do
    if [ -e "$file" ]; then
        wc -c "$file" | awk '{print $1}'
    fi
done

Getting the size without using awk (@tripleee's suggestion):

for file in *; do
    if [ -e "$file" ]; then
        wc -c < "$file"
    fi
done
sonologico
  • Thank you for your reply. Can't we do it with just the cut command? – Zitrov Feb 17 '15 at 01:55
  • The other answers have good explanations of the problem of using `cut` in this case. – sonologico Feb 17 '15 at 10:04
  • The answer which uses `stat` is closer to what `ls` does internally, and will return a size for directories, too. You can avoid Awk by doing `wc -c <"$file"` with a redirection. – tripleee Feb 18 '15 at 04:08
  • The `ls -l` column he wanted and `wc -c` give the same answer here. That's why I suggested it. So does `stat`, actually, but it doesn't have a `-c` flag here (like in one of the answers). – sonologico Feb 18 '15 at 12:40
The problem is that cut does not support patterns as delimiters, e.g. `[ \t]+`. This can be mitigated to some extent with `tr -s`; if all rows start with at least one space, this works:

tr -s ' ' | cut -d' ' -f2 
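A complete pipeline, assuming every line is padded with at least one leading space (note that the widest line in the question is not), might look like:

```shell
# -s squeezes runs of spaces to one; the remaining leading space
# still delimits one empty field, so the size lands in field 2:
printf '   4096 Feb 15 21:52 f1\n      0 Feb 15 18:24 f4\n' |
    tr -s ' ' | cut -d' ' -f2
# prints:
# 4096
# 0
```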

An alternative is to use sed to remove all whitespace from the start of line, e.g.:

sed 's/^ *//' | cut -d' ' -f1
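The sed variant also handles lines that have no padding at all, since stripping zero leading spaces is a no-op and the size is then already field 1:

```shell
# Strip leading spaces, then the size is field 1 on every line,
# including the unpadded widest one:
printf '   4096 Feb 15 21:52 f1\n6928808 Feb 10 16:59 big\n' |
    sed 's/^ *//' | cut -d' ' -f1
# prints:
# 4096
# 6928808
```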

On the other hand, to retrieve file sizes you are better off using stat:

stat -c '%s %n' *
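If only the sizes themselves are wanted, dropping `%n` leaves one number per line. (This assumes GNU stat; BSD/macOS stat uses a different option syntax, `-f '%z'`.)

```shell
# GNU stat: %s is the size in bytes; prints one size per file
stat -c '%s' *
```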
Thor
The problem with cut is that it cannot use a regex as its delimiter.
So if you set it to a single space and ask for the first field, you get only the one line whose size has no leading padding:

ls -l . | cut -f 1 -d " "
6928808

But with this awk we set the whole line to its first field (`$0=$1`) and then print it (the trailing `1` is an always-true condition whose default action is to print the line):

ls -l . | awk '{$0=$1}1'
4096
0
6928808
87
281
168
345
278
6
472
375
359
358
230
230

Or you could do this: ls -l . | awk '{print $1}'

Jotne