108

In the shell, I pipe to awk when I need a particular column.

This prints column 9, for example:

... | awk '{print $9}'

How can I tell awk to print all the columns including and after column 9, not just column 9?

Lazer
  • possible duplicate of [Using awk to print all columns from the nth to the last](http://stackoverflow.com/questions/2961635/using-awk-to-print-all-columns-from-the-nth-to-the-last) –  Sep 16 '15 at 04:24

11 Answers

107
awk '{ s = ""; for (i = 9; i <= NF; i++) s = s $i " "; print s }'
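For example, on a made-up space-separated line (not from the original answer), this prints field 9 through the last field, plus the trailing space the loop leaves behind:

$ echo 'a b c d e f g h i j k' | awk '{ s = ""; for (i = 9; i <= NF; i++) s = s $i " "; print s }'
i j k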
Amadan
  • a couple of slight refinements: `awk -v N=9 '{sep=""; for (i=N; i<=NF; i++) {printf("%s%s",sep,$i); sep=OFS}; printf("\n")}'` – glenn jackman Feb 22 '11 at 18:45
  • Thanks @glenn, that is indeed a bit more general. Anyway - I'd definitely agree it'd be better to use `cut` or `perl` for this. Use this only if you really insist on having it in `awk`. – Amadan Feb 22 '11 at 19:14
  • A few additional improvements: `awk -v N=9 '{for(i=1;i – SiegeX Feb 22 '11 at 19:32
  • @SiegeX: It doesn't add NUL bytes, it leaves the FS in place between each empty field. – Dennis Williamson Feb 22 '11 at 20:30
  • @Dennis ahh, so it does. Would have been more apparent if I didn't use the default FS. – SiegeX Feb 22 '11 at 22:37
  • Please see @Ascherer's answer for elegance. –  May 01 '13 at 09:50
  • @veryhungrymike: Elegance is nice, but I'd rather be correct. :p – Amadan May 01 '13 at 10:36
  • and this can be put inside a bash shell script (e.g. columns_after.sh 9) and passed the column like so: `awk -v start_col="$1" '{ s = ""; for (i = start_col; i <= NF; i++) s = s $i " "; print s }' ` – TheWestIsThe... May 02 '14 at 12:11
  • I usually prefer sed/cut to awk, but without awk I need to condense the spaces in columnated output, like that from ps. Throwing in a `tr -s ' '` could potentially lose data, though only rarely, when there are spaces inside the data you care about; a regex with sed/grep works otherwise, but that seems like more effort than necessary. – Pysis Jun 17 '20 at 19:12
  • Code-only answer. Some explanation of what it does and how would be welcome... :( – Itération 122442 Sep 13 '21 at 06:26
82

When you want a range of fields, awk doesn't really have a straightforward way to do it. I would recommend cut instead:

cut -d' ' -f 9- ./infile

Edit

Added the space field delimiter, since the default is a tab. Thanks to Glenn for pointing this out.
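A quick check on a made-up line that really is single-space separated (see the comments below for the caveat about runs of whitespace):

$ echo 'a b c d e f g h i j k' | cut -d' ' -f 9-
i j k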

SiegeX
  • One thing about cut is that it uses a specific delimiter (tab by default), where awk uses "whitespace". With cut, 2 consecutive tabs delimit an empty field. – glenn jackman Feb 22 '11 at 21:48
  • As @glennjackman pointed out, awk's delimiter is "whitespace" (any amount of it, too), so setting the cut delimiter to a single space does not match that behavior either. Unfortunately, the loop is the best one can do, it seems. – poncha Apr 10 '13 at 11:06
  • This one does not work properly. Try the command `find . | xargs ls -l | cut -d' ' -f 9-`. For some reason double spaces are counted as well. Example: ```lrwxrwxrwx 1 me me 21 Dec 12 00:00 ./file_a lrwxrwxrwx 1 me me 64 Dec 6 00:06 ./file_b``` will result in ```./file_a 00:06 ./file_b``` – m3o Mar 03 '15 at 19:14
  • @MarcoPashkov please elaborate on *This one does not work properly*, especially considering you use the **exact** same code in your pipeline. By the way, you should [never try to parse the output of ls](http://mywiki.wooledge.org/ParsingLs) – SiegeX Mar 14 '15 at 06:09
  • cut does not do the job here. For example, if your input is "foo bar" (single space) for one line, and "foo ___ bar" (i.e. multiple spaces, but SO is too smart to show it) for another, cut will process them differently. – UKMonkey Oct 17 '18 at 11:09
  • @UKMonkey Easy fix for that: pipe your output through `tr -s ' '` before piping to `cut`. This will ensure there are no repeated spaces when cut delimits. – SiegeX Oct 18 '18 at 18:46
58
awk '{print substr($0, index($0,$9))}'

Edit: Note, this doesn't work if any field before the ninth contains the same value as the ninth.
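A made-up pair of lines shows both the normal case and the caveat: in the second line the ninth field, i, also appears as field 1, so index() matches too early and the whole line is printed:

$ printf 'a b c d e f g h i j k\ni b c d e f g h i j k\n' | awk '{print substr($0, index($0,$9))}'
i j k
i b c d e f g h i j k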

Ascherer
  • @veryhungrymike: ...and doesn't work if any field before ninth contains the same value as the ninth. – Amadan May 01 '13 at 10:35
  • True, but hopefully your file doesn't have that issue! Not the only, nor the best, solution to the OP's problem, but it can work. – Ascherer May 01 '13 at 14:07
  • Probably because of the classic sentence "hopefully your file doesn't have that issue". It's a total __no-no__ in s/w engineering to state: *"we're not going to waste time including error-checking for input of e. g. negative values, because 'we hope the user will be intelligent enough to not try them out, crashing our tool'".* HAHAHA! Always love to hear this! (I like a good sense of humor) Well, as idiots *do* exist, it's the developer's duty to make his stuff *idiot-proof*! Instead of "hoping for the good in man". That's rather an attitude expected of philosophers, not s/w engineers... LOL – syntaxerror Dec 09 '14 at 19:43
  • I wasn't saying not to check for errors, but if you know you're not going to run into the issue, then this solution is fine, like I stated. But thank you for the unnecessary downvote @syntaxerror. This solution will work for some, as the (currently) 19 upvotes will show, but if it doesn't, then don't use it for your solution. There are lots of ways to solve the OP's problem. – Ascherer Dec 09 '14 at 22:59
  • If you are using awk on the command line in your daily work, this is definitely the solution you want. Is it not obvious? Error checking, etc., doesn't really matter in that case since you are typing it in and can catch these sorts of things before you press enter (personally, I don't think awk should be used for anything else anyway; that's why we've got perl, python, tcl, and about 100+ other, better, faster, less annoying scripting languages!). 'Course maybe I'm giving my fellow software developers too much credit and they really do need error checking even on the stuff they type on the fly (??) – osirisgothra Apr 02 '15 at 11:42
  • @Ascherer please at least include Amadan's note in your answer. – atti Mar 05 '19 at 15:43
  • Not that it needed it, as it's right below the answer, but I added it, @atti. – Ascherer Mar 06 '19 at 17:15
13
sed -re 's,\s+, ,g' | cut -d ' ' -f 9-

Instead of dealing with variable-width whitespace, replace all whitespace with a single space. Then use a simple cut with the fields of interest.

It doesn't use awk, so it isn't strictly germane, but it seemed appropriate given the other answers/comments.
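A quick check on a made-up line containing runs of spaces (GNU sed syntax, as in the answer above):

$ echo 'a  b   c d e f g h i  j   k' | sed -re 's,\s+, ,g' | cut -d ' ' -f 9-
i j k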

Kevin
Some Fool
11

Generally perl replaces awk/sed/grep et al., and is much more portable (as well as just being a better penknife).

perl -lane 'print "@F[8..$#F]"'

TIMTOWTDI applies, of course.
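For example, on a made-up space-separated line (@F is zero-indexed, hence the 8 for field 9):

$ echo 'a b c d e f g h i j k' | perl -lane 'print "@F[8..$#F]"'
i j k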

Burhan Ali
bobbogo
5
awk -v m="\x01" -v N="3" '{$N=m$N ;print substr($0, index($0,m)+1)}'

This chops off everything before the given field number N and prints all the rest of the line, including field N, maintaining the original spacing (it does not reformat). It doesn't matter if the field's string also appears somewhere else in the line, which is the problem with Ascherer's answer.

Define a function:

fromField () {
  awk -v m="\x01" -v N="$1" '{$N=m$N; print substr($0,index($0,m)+1)}'
}

And use it like this:

$ echo "  bat   bi       iru   lau bost   " | fromField 3
iru   lau bost   
$ echo "  bat   bi       iru   lau bost   " | fromField 2
bi       iru   lau bost   

The output maintains everything, including trailing spaces. For N=0 it returns the whole line as-is, and for N>NF the empty string.

Robert Vila
  • This is a good idea. It doesn't quite work on a current Mac using typical gawk, because $0 collapses. The fix is to set a variable to $0 as the first step, such as: '{s=$0; ... print substr(s,index(s,m)+1)}' – joelparkerhenderson Mar 28 '15 at 06:21
  • That definitely WILL reformat the line since `$N=m$N` is changing the value of a field which causes awk to rebuild $0 replacing all FSs with OFSs. I can't imagine how you're getting the output you show given that script. – Ed Morton Jan 09 '22 at 13:40
1

Here is an example of ls -l output:

-rwxr-----@ 1 ricky.john  1493847943   5610048 Apr 16 14:09 00-Welcome.mp4
-rwxr-----@ 1 ricky.john  1493847943  27862521 Apr 16 14:09 01-Hello World.mp4
-rwxr-----@ 1 ricky.john  1493847943  21262056 Apr 16 14:09 02-Typical Go Directory Structure.mp4
-rwxr-----@ 1 ricky.john  1493847943  10627144 Apr 16 14:09 03-Where to Get Help.mp4

My solution to print anything from $9 on is awk '{print substr($0, 61, 50)}' (61 being the character position where the file names begin in this particular listing, and 50 an upper bound on their length).

rickydj
0

To display the first 3 fields and then the remaining fields, you can use:

awk '{s = ""; for (i = 4; i <= NF; i++) s = s $i " "; print $1, $2, $3, s}' filename

where $1 $2 $3 are the first 3 fields.
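For instance, on a short made-up line (the output ends with a trailing space from the loop):

$ echo 'a b c d e f g' | awk '{s = ""; for (i = 4; i <= NF; i++) s = s $i " "; print $1, $2, $3, s}'
a b c d e f g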

0
function print_fields(field_num1, field_num2){
    input_line = $0                      # save the original record

    j = 1;
    for (i = field_num1; i <= field_num2; i++){
        $(j++) = $(i);                   # shift the wanted fields to the front
    }
    NF = field_num2 - field_num1 + 1;    # keep only those fields
    print $0                             # $0 is rebuilt with OFS between the fields

    $0 = input_line                      # restore the record for any later rules
}
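The answer doesn't show an invocation; one way to use it for the original question, assuming the function above plus a rule like { print_fields(9, NF) } are saved in a file (print_fields.awk is just a placeholder name):

$ echo 'a b c d e f g h i j k' | awk -f print_fields.awk
i j k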
cakan
msol01
0

Using cut instead of awk, and avoiding the problem of figuring out which column to start at, by using cut's -c (character) option.

Here I am saying, give me all but the first 49 characters of the output.

 ls -l /some/path/*/* | cut -c 50-

The /*/* at the end of the ls command says to show what is in the subdirectories too.

You can also pull out certain ranges of characters (this example is from the cut man page): show the names and login times of the currently logged-in users:

       who | cut -c 1-16,26-38
Joe
-3
ruby -lane 'print $F[3..-1].join(" ")' file
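Same idea as the perl one-liner above, with $F again zero-indexed; here adjusted to start at field 9 for the question (the posted answer starts at field 4), on a made-up line:

$ echo 'a b c d e f g h i j k' | ruby -lane 'print $F[8..-1].join(" ")'
i j k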
kurumi