
Using the Linux `sort` command, how do you sort the words within the lines of a text file?

A normal sort reorders the lines of the file until they're sorted, while I want to reorder the words within each line until they're sorted.

Example:

Input.txt

z y x v t
c b a

Output.txt

t v x y z
a b c
Bob

6 Answers


To sort words within lines using sort, you would need to read line by line, and call sort once for each line. It gets quite tricky though, and in any case, running one sort process for each line wouldn't be very efficient.
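For completeness, that per-line approach can be sketched with standard tools; a minimal version, assuming the input file is `Input.txt` and the words are whitespace-separated (`set -f` guards the deliberately unquoted expansion against glob expansion):

```shell
# One sort process per line: simple, but inefficient for large files.
set -f                       # disable pathname expansion before word-splitting
while IFS= read -r line; do
  printf '%s\n' $line |      # unquoted on purpose: split the line into words
    sort |                   # sort the words, one per line
    paste -s -d ' '          # rejoin them into a single line
done < Input.txt
set +f
```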

You could do better by using Perl (thanks @glenn-jackman for the awesome tip!):

perl -lape '$_ = qq/@{[sort @F]}/' file
mosh
janos
  • With `echo $(printf ... | sort)`, you risk glob expansion, no? You could just do `printf ... | sort` instead. – Benjamin W. Nov 27 '17 at 21:38
  • `$line` itself is subject to pathname expansion. – chepner Nov 27 '17 at 21:39
  • @BenjaminW. the wrapping in `echo $(...)` is to turn the multiline result of `sort` back into a single line – janos Nov 27 '17 at 21:41
  • Nope, now you are only writing a single line to `sort`, because `"$line"` isn't subject to word-splitting. – chepner Nov 27 '17 at 21:42
  • You need to split the line without exposing it to pathname expansion; either use `printf '%s\n' "$(read -a arr <<< "$line"; printf '%s\n' "${arr[@]}" | sort)"`, or turn pathname expansion off with `printf '%s\n' "$(set -f; printf '%s\n' $line | sort)"`. – chepner Nov 27 '17 at 21:46
  • @chepner thanks... I dropped that idea entirely, I'd rather resort to Perl for this. – janos Nov 27 '17 at 21:48
  • Can be written as `perl -lape '$_="@{[sort @F]}"' file` – glenn jackman Nov 28 '17 at 01:28

If you have GNU awk, then it can be done in a single command using the `asort` function:

awk '{for(i=1; i<=NF; i++) c[i]=$i; n=asort(c); 
for (i=1; i<=n; i++) printf "%s%s", c[i], (i<n?OFS:RS); delete c}' file

t v x y z
a b c
glenn jackman
anubhava

Here's a fun way that actually uses the Linux `sort` command (plus `xargs`):

while read line; do xargs -n1 <<< $line | sort | xargs; done < input.txt

Now, this makes several assumptions (which are probably not always true), but the main idea is that `xargs -n1` takes all the tokens in a line and emits them on separate lines on stdout. This output gets piped through `sort`, and then a final `xargs` with no arguments puts them all back onto a single line.
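The stages can be inspected one at a time; a quick sketch using the first example line (one caveat: `xargs` without `-0` or `-d` interprets quotes and backslashes, so words containing those would be mangled):

```shell
line='z y x v t'
xargs -n1 <<< "$line"                 # splits into tokens, one per line
xargs -n1 <<< "$line" | sort          # sorted tokens, one per line
xargs -n1 <<< "$line" | sort | xargs  # rejoined onto a single line: t v x y z
```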

carl.anderson

I was looking for a magic switch but found my own solution more intuitive:

$ line="102 103 101 102 101"
$ echo $(echo "${line}"|sed 's/\W\+/\n/g'|sort -un)
101 102 103

Thank you!

RoGoR

It's a little awkward, but this uses only a basic sort command, so it's perhaps a little more portable than something that requires GNU sort:

while read -r -a line; do
  printf "%s " $(sort <<<"$(printf '%s\n' "${line[@]}")")
  echo
done < input.txt

The echo is included to insert a newline, which printf doesn't include by default.

ghoti
  • This is subject to globbing: if I have files `f1.txt` and `f2.txt` in the current directory and one of the words in `input.txt` is `f?.txt`, it'll expand to `f1.txt f2.txt`. – Benjamin W. Nov 27 '17 at 22:31
  • Something like `sort <<< "$(printf '%s\n' "${line[@]}")" | paste -s -d ' '` might work, assuming that the words are separated by a single space each. – Benjamin W. Nov 27 '17 at 22:38

Having tried various ways to solve this (involving, e.g., GNU parallel and/or xargs), I found a simple way to do it using only GNU coreutils:

split -l 1 --filter 'tr " " "\n" | sort | paste -s -d " "' \
in.txt > out.txt

Unfortunately, I think the --filter option to split is a GNU addition...

sebbit