Without using `sed` or `awk`, only `cut`, how do I get the last field when the number of fields is unknown or changes with every line?

-
You can't. Why do you have to use `cut`? – tom Mar 29 '14 at 04:46
-
Are you in love with the `cut` command :)? Why not any other Linux commands? – Jayesh Bhoi Mar 29 '14 at 04:54
-
Without `sed` or `awk`: `perl -pe 's/^.+\s+([^\s]+)$/$1/'`. – jordanm Mar 29 '14 at 04:56
-
@jaynesh As zfus correctly guessed, yes, this is homework and we can't use sed or awk. I have lines of simple text where each is a web address, with a delimiter of '.'. I need to extract the last field from those web addresses, i.e. the com, net, nz. The number of '.' (the delimiter) changes for each address, but it is always the last field I want. I thought cut was the obvious choice :\ – noobcoder Mar 29 '14 at 05:00
-
If you restrict your question to only the answers you expect (in this case `cut`), you're preventing yourself from learning about anything new. – Charles Duffy Mar 29 '14 at 05:08
-
Possible duplicate of [how to split a string in shell and get the last field](http://stackoverflow.com/questions/3162385/how-to-split-a-string-in-shell-and-get-the-last-field) – Amir Ali Akbari Oct 13 '14 at 19:43
-
This one is quite silly too: `cut` alone **can not** perform this task. You *require* other tools. So you can not use `sed` or `awk` but you can use `grep` and `rev`? What about invoking an inline `python` or `perl` script? What about not using `cut` at all? – MestreLion Sep 20 '15 at 23:44
-
@MestreLion Many times people read a question to find a solution to a variation of a problem. This one starts with the false premise that `cut` supports something it doesn't. But I thought it was useful, in that it forces the reader to consider code that's easier to follow. I wanted a quick, simple way to use `cut` without needing to use multiple syntaxes for `awk`, `grep`, `sed`, etc. The `rev` thing did the trick; very elegant, and something I've never considered (even if clunky for other situations). I also liked reading the other approaches from the other answers. – Beejor Aug 23 '16 at 03:07
-
@EliranMalka Thank you. I appreciate the feedback. I am trying. – zedfoxus Feb 15 '17 at 15:49
-
Came here with a real-life problem: I want to find all the different file extensions in a source tree, to update a .gitattributes file with. So `find | cut -d. -f` is the natural inclination. – studog Dec 13 '17 at 19:49
-
@studog, as an aside, `find . -printf '%f\n'` will emit only the filenames on its own, if on a GNU platform. – Charles Duffy Aug 16 '18 at 16:01
14 Answers
You could try something like this:
echo 'maps.google.com' | rev | cut -d'.' -f 1 | rev
Explanation
- `rev` reverses "maps.google.com" to be `moc.elgoog.spam`
- `cut` uses dot (i.e. '.') as the delimiter, and chooses the first field, which is `moc`
- lastly, we reverse it again to get `com`
-
It's not using only `cut`, but it is without `sed` or `awk`. So what does the OP think? – Jayesh Bhoi Mar 29 '14 at 05:02
-
@tom OP has asked more questions than just this in the last few hours. Based on our interactions with the OP we know that awk/sed/etc. are not allowed in his homework, but a reference to rev has not been made. So it was worth a shot – zedfoxus Mar 29 '14 at 05:03
-
@zfus I think you need another `rev` as well, for conditions like `echo 'www.google.com' | rev | cut -d'.' -f 1 | rev`. – Jayesh Bhoi Mar 29 '14 at 05:06
-
This is a very clever solution, but after all these years does `cut` really not have a way to access the final element indirectly? This feels hacky. – h0r53 Jul 21 '20 at 18:15
-
Dinging this solution for needing 3 pipes: `% g3bn 81181` -> `% 81181=Epik High 에픽하이`, but `% g3bn 81181 | gawk '$_ = $NF'` -> `% 에픽하이` – RARE Kpop Manifesto Oct 22 '22 at 14:04
Use a parameter expansion. This is much more efficient than any kind of external command, `cut` (or `grep`) included.
data=foo,bar,baz,qux
last=${data##*,}
See BashFAQ #100 for an introduction to native string manipulation in bash.
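A quick sketch of both directions of this expansion, using the answer's own sample data (plain POSIX sh, nothing assumed beyond the answer):

```shell
data=foo,bar,baz,qux
last=${data##*,}    # strip the longest prefix ending in ',' -> last field
first=${data%%,*}   # strip the longest suffix starting at ',' -> first field
echo "$last"   # qux
echo "$first"  # foo
```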

-
@ErwinWessels: Because bash is really slow. Use bash to run pipelines, not to process data in bulk. I mean, this is great if you have one line of text already in a shell variable, or if you want to do `while IFS= read -ra array_var; do :;done <(cmd)` to process a few lines. But for a big file, rev|cut|rev is probably faster! (And of course awk will be faster than that.) – Peter Cordes Dec 07 '15 at 06:30
-
@PeterCordes, awk will be faster for a big file, sure, but it takes a fair bit of input to overcome the constant-factor startup costs. (There also exist shells -- like ksh93 -- with performance closer to awk, where the syntax given in this answer remains valid; bash is exceptionally sluggish, but it's not even close to the only option available). – Charles Duffy Dec 07 '15 at 06:33
-
Thanks @PeterCordes; as usual I guess each tool has its use cases. – Erwin Wessels Dec 07 '15 at 11:50
-
This is by far the fastest and most concise way of trimming down a single variable inside a `bash` script (assuming you're already using a `bash` script). No need to call anything external. – Ken Sharp Jul 28 '17 at 04:34
-
While this seems really neat, I highly prefer the double rev, which isn't bash-specific. And it taught me a new tool! *Note that I always use bash, but will never remember any of those weird and barbaric syntaxes.* – Balmipour Oct 13 '17 at 09:36
-
@Balmipour, ...however, `rev` *is* specific to whatever OS you're using that provides it -- it's not standardized across all UNIX systems. See the [chapter listing for the POSIX section on commands and utilities](http://pubs.opengroup.org/onlinepubs/009696699/utilities/contents.html) -- it's not there. And `${var##prefix_pattern}` is *not* in fact bash-specific; it's in the [POSIX sh standard](http://pubs.opengroup.org/onlinepubs/009695399/utilities/xcu_chap02.html#tag_02_06_02), see the end of section 2.6.2 (linked), so unlike `rev`, it's always available on any compliant shell. – Charles Duffy Oct 13 '17 at 11:15
-
@Balmipour, ...and if you're in the business of learning new tools, you might consider the benefit of learning tools that have good runtime performance characteristics. Half the reason shell has a reputation of being slow is because a great many people habitually write highly inefficient scripts, using external commands when internal ones will do. (The other half is what Peter and I were arguing over earlier -- interpreter performance -- but if you're spinning up external tools inside a tight loop, that overhead makes interpreter performance unnoticeable by comparison). – Charles Duffy Oct 13 '17 at 11:33
-
@Charles Duffy Thanks for those precisions. I never needed performances in *my* shell scripts, but you are obviously right, both on that point and the fact that it's POSIX standard (which I didn't know). Guess the choice highly depends on one's need, but I'm glad to learn a little more about this :) – Balmipour Oct 13 '17 at 12:45
-
The claim that rev | cut | rev is faster is completely unfounded. It is extremely slow compared to string expansion. On my system, 10000 repetitions with string expansion take 0.398 seconds. rev|cut|rev takes a staggering 1 minute 6 seconds. – Bruno9779 Aug 16 '18 at 13:41
-
@Bruno9779, so, it depends on implementation details. If you spin up a new pipeline for each string you want to reverse, that's extremely slow -- as you noticed. If you reuse a single pipeline by sending 10,000 strings through it, it'll be faster than the equivalent native bash -- that's presumably what PeterCordes was talking about. That said, the reuse-a-single-pipeline case is rarely actually practical, so I agree with you that in general a parameter expansion is the right way to go. – Charles Duffy Aug 16 '18 at 13:55
-
@Bruno9779, ...to provide a concrete example of using just one pipeline to process a whole lot of lines very quickly: in `for ((i=0; i<10000; i++)); do echo "foo,bar,baz,$RANDOM"; done >file; time { rev` … `/dev/null; }`, the portion covered by `time` takes 0m0.026s wall-clock time on my local system. – Charles Duffy Aug 16 '18 at 18:27
-
Can you generalize this to get the nth-from-last field? The `rev|cut|rev` answer is trivially adaptable to fetch any field... – Giacomo Alzetta Jun 06 '19 at 07:51
-
@GiacomoAlzetta, `n=2; IFS=, read -r -a fields; echo "${fields[${#fields[@]}-n]}"` -- see it running at https://ideone.com/gMUu1x – Charles Duffy Jun 06 '19 at 12:33
-
@GiacomoAlzetta: with `awk` you can generalize that to `awk '$!NF = $( NF < n ? NF=_ : NF - n)'` without any double-reversal. – RARE Kpop Manifesto Oct 22 '22 at 14:11
It is not possible using just `cut`. Here is a way using `grep`:
grep -o '[^,]*$'
Replace the comma with whatever other delimiter you need.
Explanation:
- `-o` (`--only-matching`) only outputs the part of the input that matches the pattern (the default is to print the entire line if it contains a match).
- `[^,]` is a character class that matches any character other than a comma.
- `*` matches the preceding pattern zero or more times, so `[^,]*` matches zero or more non-comma characters.
- `$` matches the end of the string.
- Putting this together, the pattern matches zero or more non-comma characters at the end of the string.
- When there are multiple possible matches, `grep` prefers the one that starts earliest. So the entire last field will be matched.
Full example:
If we have a file called data.csv containing
one,two,three
foo,bar
then `grep -o '[^,]*$' < data.csv`
will output
three
bar
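For the question's dot-delimited web addresses, the same idea works with the comma swapped for a dot (a sketch; the sample addresses are made up):

```shell
printf '%s\n' 'maps.google.com' 'foo.co.nz' | grep -o '[^.]*$'
# com
# nz
```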

-
To do the opposite, and find everything except the last field do: `grep -o '^.*,'` – Ariel Mar 11 '16 at 06:59
-
This was especially useful, because `rev` caused an issue with multibyte Unicode characters in my case. – bric3 Dec 21 '17 at 15:04
-
I was trying to do this on MinGW but my grep version doesn't support -o, so I used `sed 's/^.*,//'` which replaces all characters up to and including the last comma with an empty string. – TamaMcGlinn Apr 04 '18 at 14:16
Without awk?... But it's so simple with awk:
echo 'maps.google.com' | awk -F. '{print $NF}'
AWK is a far more powerful tool to have in your pocket. `-F` is for the field separator; `NF` is the number of fields (and also stands for the index of the last).
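Since `NF` is just a number, arithmetic on it reaches fields counted from the end as well, which is something the `rev|cut|rev` trick can't do as directly. A small sketch using the question's example input:

```shell
echo 'maps.google.com' | awk -F. '{print $NF}'      # last field: com
echo 'maps.google.com' | awk -F. '{print $(NF-1)}'  # next-to-last field: google
```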

-
This is universal and it works exactly as expected every time. In this scenario, using `cut` to achieve the OP's final output is like using a spoon to "cut" steak (pun intended :) ). `awk` is the steak knife. – Hickory420 Oct 11 '18 at 01:04
-
Avoid unnecessary use of `echo`, which may slow down the script for long files; use `awk -F. '{print $NF}' <<< 'maps.google.com'` instead. – Anil_M Oct 17 '18 at 20:36
There are multiple ways. You may use this too.
echo "Your string here"| tr ' ' '\n' | tail -n1
> here
Obviously, the blank-space input for the `tr` command should be replaced with the delimiter you need.
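As a comment notes, piping a whole file through this collapses everything into a single stream, so only the very last field of the file survives. If per-line results matter, one sketch is to apply the same `tr`/`tail` idea line by line (the sample addresses here are made up):

```shell
# Print the last dot-delimited field of each input line.
while IFS= read -r line; do
  printf '%s\n' "$line" | tr '.' '\n' | tail -n1
done <<'EOF'
maps.google.com
example.co.nz
EOF
# com
# nz
```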

-
This feels like the simplest answer to me, fewer pipes and clearer meaning – joeButler Jan 12 '17 at 14:14
-
That will not work for an entire file, which is what the OP probably meant. – Amir Apr 26 '17 at 07:24
This is the only possible solution using nothing but `cut`:
echo "s.t.r.i.n.g." | cut -d'.' -f2- [repeat_following_part_forever_or_until_out_of_memory:] | cut -d'.' -f2-
Using this solution, the number of fields can indeed be unknown and vary from time to time. However, since line length must not exceed LINE_MAX characters, including the newline character, an arbitrarily large number of fields can never be a real precondition for this solution.
Yes, a very silly solution, but the only one that meets the criteria, I think.

-
I love when everyone says something is impossible and then someone chimes in with a working answer. Even if it is indeed very silly. – Beejor Aug 23 '16 at 03:13
-
One could iterate `cut -f2-` in a loop until the output no longer changes. – loa_in_ Jun 25 '18 at 11:11
-
I think you'd have to read the file line-by-line and ***then*** iterate the `cut -f2-` until it no longer changes. Otherwise you'd have to buffer the entire file. – Tripp Kinetics May 14 '21 at 18:02
It is better to use `awk` while working with tabular data. If it can be achieved by `awk`, why not use that? I suggest you do not waste your precious time, and use a handful of commands to get the job done.
Example:
# $NF refers to the last column in awk
ll | awk '{print $NF}'

-
And to change the field separator, you can use `-F`, e.g. `awk -F':' '{print $NF}'` (regular shell escaping applies). (Credit: https://stackoverflow.com/a/2609565/737956) – Dalibor Filus Apr 17 '23 at 19:08
-
Thank you for pointing that out. In addition to `$NF`, which refers to the last field, if you want to print fields other than the last then you can use the field number with a $ sign. **For example:** `awk -F, '{print $2}' /tmp/test.csv` will print the second field rather than the last. – Maso Mahboob Apr 28 '23 at 00:32
If your input string doesn't contain forward slashes then you can use `basename` and a subshell:
$ basename "$(echo 'maps.google.com' | tr '.' '/')"
This doesn't use `sed` or `awk`, but it also doesn't use `cut` either, so I'm not quite sure if it qualifies as an answer to the question as it's worded.
This doesn't work well if processing input strings that can contain forward slashes. A workaround for that situation would be to replace the forward slash with some other character that you know isn't part of a valid input string. For example, the pipe (`|`) character is also not allowed in filenames, so this would work:
$ basename "$(echo 'maps.google.com/some/url/things' | tr '/' '|' | tr '.' '/')" | tr '|' '/'
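A comment on this answer points out that `tr` can swap two characters in a single pass, which avoids needing a "forbidden" placeholder character at all. A sketch of that variant (the input string is made up):

```shell
# Swap '/' and '.', take the basename (last dot-field, now slash-delimited),
# then swap back to restore the original characters.
s='maps.google.com/some/url/things'
basename "$(printf '%s\n' "$s" | tr '/.' './')" | tr '/.' './'
# com/some/url/things
```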

-
Of *course* the pipe character is allowed in filenames. Just try `touch \|`. – Tripp Kinetics May 14 '21 at 18:04
-
I will change from downvote to upvote if you remove the false claim about `|` being not allowed in file names. But almost every `tr` out there supports `\0` or some other way of expressing the nul byte, and that definitely isn't allowed in file names, so you can use that as a place holder. Also `tr ab ba` just swaps all `a` and `b` without problems, so you can just avoid having to find a disallowed character entirely. Just pipe through `tr './' './'` once to swap before the `basename` and then again to swap back after. – mtraceur Jun 26 '21 at 11:34
-
Just realized I have a typo: "just pipe through `tr '/.' './'` once to swap before the basename and then again after". – mtraceur Jul 09 '21 at 01:31
The following implements a friend's suggestion:
#!/bin/bash
rcut(){
nu="$( echo "$1" | cut -d"$DELIM" -f 2- )"
if [ "$nu" != "$1" ]
then
rcut "$nu"
else
echo "$nu"
fi
}
$ export DELIM=.
$ rcut a.b.c.d
d

-
You need double quotes around the arguments to `echo` in order for this to work reliably and robustly. See https://stackoverflow.com/questions/10067266/when-to-wrap-quotes-around-a-shell-variable – tripleee Dec 30 '17 at 14:24
An alternative using perl would be:
perl -pe 's/(.*)\t(.*)$/$2/' file
where you may change `\t` for whichever delimiter `file` uses.

choose -1
choose supports negative indexing (the syntax is similar to Python's slices).

If you have a file named filelist.txt that is a list of paths such as the following:
c:/dir1/dir2/file1.h
c:/dir1/dir2/dir3/file2.h
then you can do this:
rev filelist.txt | cut -d"/" -f1 | rev

Adding an approach to this old question just for the fun of it:
$ cat input.file # file containing input that needs to be processed
a;b;c;d;e
1;2;3;4;5
no delimiter here
124;adsf;15454
foo;bar;is;null;info
$ cat tmp.sh # showing off the script to do the job
#!/bin/bash
delim=';'
while read -r line; do
while [[ "$line" =~ "$delim" ]]; do
line=$(cut -d"$delim" -f 2- <<<"$line")
done
echo "$line"
done < input.file
$ ./tmp.sh # output of above script/processed input file
e
5
no delimiter here
15454
info
Besides bash, only cut is used. Well, and echo, I guess.

-
Meh, why not just remove cut completely and only use bash... x] `while read -r line; do echo ${line/*;}; done` – Kaffe Myers May 27 '19 at 09:50
I realized that if we just ensure a trailing delimiter exists, it works. So in my case, with comma and whitespace delimiters, I add a space at the end:
$ ans="a, b"
$ ans+=" "; echo ${ans} | tr ',' ' ' | tr -s ' ' | cut -d' ' -f2
b

-
And `ans="a, b, c"` produces `b`, which does not meet the requirements of *"number of fields are unknown or change with every line"*. – jww Mar 15 '19 at 08:17