16

The one-liner should:

  • solve a real-world problem
  • not be extensively cryptic (should be easy to understand and reproduce)
  • be worth the time it takes to write it (should not be too clever)

I'm looking for practical tips and tricks (complementary examples for perldoc perlrun).

Timur Shtatland
jfs
  • "last" means "final", as in, I'd never write another again. That's not gonna happen. I'm gonna keep writing Perl one-liners as long as the prompt accepts it. Perhaps you meant "latest". – Randal Schwartz Oct 02 '08 at 22:13
  • Thank you. I've corrected the title. The time for my last meal has not come yet :) – jfs Oct 03 '08 at 01:11

24 Answers

14

Please see my slides for "A Field Guide To The Perl Command Line Options."

xtreak
Andy Lester
  • `-e examples' slide: on Windows I prefer q() and qq() instead of quotes. It allows me to use the same one-liner on Linux by replacing just the two outside quotes. Windows: perl -E"say q(Hello, World)". Linux: perl -E'say q(Hello, World)' – jfs Sep 21 '08 at 20:15
13

Squid log files. They're great, aren't they? Except by default they have seconds-from-the-epoch as the time field. Here's a one-liner that reads from a squid log file and converts the time into a human readable date:

perl -pe's/([\d.]+)/localtime $1/e;' access.log

With a small tweak, you can make it only display lines with a keyword you're interested in. The following watches for stackoverflow.com accesses and prints only those lines, with a human readable date. To make it more useful, I'm giving it the output of tail -f, so I can see accesses in real time:

tail -f access.log | perl -ne's/([\d.]+)/localtime $1/e,print if /stackoverflow\.com/'
quantum
pjf
11

The problem: a media player does not automatically load subtitles because their names differ from those of the corresponding video files.

Solution: rename all *.srt files (subtitles) to match the *.avi files (video).

perl -e'while(<*.avi>) { s/avi$/srt/; rename <*.srt>, $_ }'

CAVEAT: The sort order of the original video and subtitle filenames must be the same.

Here is a more verbose version of the above one-liner:

my @avi = glob('*.avi');
my @srt = glob('*.srt');

for my $i (0..$#avi)
{
  my $video_filename = $avi[$i];
  $video_filename =~ s/avi$/srt/;   # 'movie1.avi' -> 'movie1.srt'

  my $subtitle_filename = $srt[$i]; # 'film1.srt'
  rename($subtitle_filename, $video_filename); # 'film1.srt' -> 'movie1.srt'
}
jfs
11

You may not think of this as Perl, but I use ack religiously (it's a smart grep replacement written in Perl) and that lets me edit, for example, all of my Perl tests which access a particular part of our API:

vim $(ack --perl -l 'api/v1/episode' t)

As a side note, if you use vim, you can run all of the tests in your editor's buffers.

For something with more obvious (if simple) Perl, I needed to know how many test programs used our test fixtures in the t/lib/TestPM directory (I've cut down the command for clarity).

ack $(ls t/lib/TestPM/|awk -F'.' '{print $1}'|xargs perl -e 'print join "|" => @ARGV') aggtests/ t -l

Note how the "join" turns the results into a regex to feed to ack.

zb226
Ovid
11

The common idiom of using find ... -exec rm {} \; to delete a set of files somewhere in a directory tree is not particularly efficient in that it executes the rm command once for each file found. One of my habits, born from the days when computers weren't quite as fast (dagnabbit!), is to replace many calls to rm with one call to perl:

find . -name '*.whatever' | perl -lne unlink

The perl part of the command line reads the list of files emitted* by find, one per line, trims the newline off, and deletes the file using perl's built-in unlink() function, which takes $_ as its argument if no explicit argument is supplied. ($_ is set to each line of input thanks to the -n flag.) (*These days, most find commands do -print by default, so I can leave that part out.)

I like this idiom not only because of the efficiency (possibly less important these days) but also because it has fewer chorded/awkward keys than typing the traditional -exec rm {} \; sequence. It also avoids quoting issues caused by file names with spaces, quotes, etc., of which I have many. (A more robust version might use find's -print0 option and then ask perl to read null-delimited records instead of lines, but I'm usually pretty confident that my file names do not contain embedded newlines.)
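The more robust -print0 variant mentioned above can be sketched like this (scratch directory and filenames are made up; -0 makes perl read NUL-delimited records, so names with spaces or newlines survive):

```shell
# Create a scratch tree with one awkward filename, then delete *.tmp.
dir=$(mktemp -d)
touch "$dir/a.tmp" "$dir/b c.tmp" "$dir/keep.txt"
# -print0 emits NUL-terminated names; -0 sets perl's record separator
# to NUL, chomp strips it, and unlink defaults to $_.
find "$dir" -name '*.tmp' -print0 | perl -0ne 'chomp; unlink'
```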

John Siracusa
    I've been using xargs to solve that problem from a time before Perl was a glint in Larry's eye :-). – paxdiablo Oct 13 '08 at 13:15
  • Fair because it's a Perl related topic; but another more robust POSIX version could use `find ... -print0 | xargs -0 -- rm` or with GNU findutils: `find ... --exec rm {} +` [⁽ʳᵉᶠ⁾](https://man7.org/linux/man-pages/man1/find.1.html) – bufh Aug 27 '20 at 12:26
8

All one-liners from the answers collected in one place:

  • perl -pe's/([\d.]+)/localtime $1/e;' access.log

  • ack $(ls t/lib/TestPM/|awk -F'.' '{print $1}'|xargs perl -e 'print join "|" => @ARGV') aggtests/ t -l

  • perl -e'while(<*.avi>) { s/avi$/srt/; rename <*.srt>, $_ }'

  • find . -name '*.whatever' | perl -lne unlink

  • tail -F /var/log/squid/access.log | perl -ane 'BEGIN{$|++} $F[6] =~ m{\Qrad.live.com/ADSAdClient31.dll} && printf "%02d:%02d:%02d %15s %9d\n", sub{reverse @_[0..2]}->(localtime $F[0]), @F[2,4]'

  • export PATH=$(perl -F: -ane'print join q/:/, grep { !$c{$_}++ } @F'<<<$PATH)

  • alias e2d="perl -le \"print scalar(localtime($ARGV[0]));\""

  • perl -ple '$_=eval'

  • perl -00 -ne 'print sort split /^/'

  • perl -pe'1while+s/\t/" "x(8-pos()%8)/e'

  • tail -f log | perl -ne '$s=time() unless $s; $n=time(); $d=$n-$s; if ($d>=2) { print qq ($. lines in last $d secs, rate ),$./$d,qq(\n); $. =0; $s=$n; }'

  • perl -MFile::Spec -e 'print join(qq(\n),File::Spec->path).qq(\n)'

See corresponding answers for their descriptions.

Tim Lewis
jfs
6

The Perl one-liner I use the most is the Perl calculator:

perl -ple '$_=eval'
4

One of the biggest bandwidth hogs at $work is downloading web advertising, so I'm looking at the low-hanging fruit waiting to be picked. I've got rid of Google ads; now I have Microsoft in my sights. So I run a tail on the log file and pick out the lines of interest:

tail -F /var/log/squid/access.log | \
perl -ane 'BEGIN{$|++} $F[6] =~ m{\Qrad.live.com/ADSAdClient31.dll}
    && printf "%02d:%02d:%02d %15s %9d\n",
        sub{reverse @_[0..2]}->(localtime $F[0]), @F[2,4]'

The Perl pipe begins by setting autoflush to true, so that each line that is acted upon is printed out immediately. Otherwise the output is chunked up and one receives a batch of lines only when the output buffer fills. The -a switch splits each input line on whitespace and saves the results in the array @F (functionality inspired by awk's capacity to split input records into its $1, $2, $3... variables).

It checks whether the 7th field in the line contains the URI we seek (using \Q to save us the pain of escaping uninteresting metacharacters). If a match is found, it pretty-prints the time, the source IP and the number of bytes returned from the remote site.

The time is obtained by taking the epoch time in the first field and using 'localtime' to break it down into its components (second, minute, hour, day, month, year). It takes a slice of the first three elements returned (second, minute and hour) and reverses their order to get hour, minute and second. This is returned as a three-element list, along with a slice of the third (IP address) and fifth (size) fields from the original @F array. These five arguments are passed to printf, which formats the results.

dland
4

@dr_pepper

Remove literal duplicates in $PATH:

$ export PATH=$(perl -F: -ane'print join q/:/, grep { !$c{$_}++ } @F'<<<$PATH)

Print unique, clean paths from the %PATH% environment variable (it doesn't touch ../ and the like; replace File::Spec->rel2abs with Cwd::realpath if that is desirable). It is not a one-liner, in order to be more portable:

#!/usr/bin/perl -w
use File::Spec; 

$, = "\n"; 
print grep { !$count{$_}++ } 
      map  { File::Spec->rel2abs($_) } 
      File::Spec->path;
jfs
  • Thanks for showing me this, I was looking for a shorter one-liner to do this. In my environment, white space is the separator when using lowercase $path. Is it better to use upper case $PATH? – dr_pepper Oct 03 '08 at 00:49
  • In my shell (bash) $path and $PATH are different variables (names are case-sensitive: $ a=2; A=3; echo $(($a * $A)) This prints '6'. – jfs Oct 03 '08 at 01:08
  • Duplicates could be removed using a combination of programs `tr`, `sort`, `uniq`, `cut` and a pipe. – jfs Oct 03 '08 at 01:15
  • But, using tr, sort, etc changes the path order, which may cause undesirable side effects. – dr_pepper Oct 04 '08 at 15:16
  • In ZSH the variable `path` is bound the variable `PATH`, so `PATH` is always the elements of `path`, join with a colon, and `path` is always an array containing the chunks of `PATH` split by a column. To make them unique, just apply the -U modifier to one of the variables: typeset -U PATH – jkramer Oct 10 '08 at 11:08
3

In response to Ovid's Vim/ack combination:

I too am often searching for something and then want to open the matching files in Vim, so I made myself a little shortcut some time ago (works in Z shell only, I think):

function vimify-eval; {
    if [[ ! -z "$BUFFER" ]]; then
        if [[ $BUFFER = 'ack'* ]]; then
            BUFFER="$BUFFER -l"
        fi
        BUFFER="vim  \$($BUFFER)"
        zle accept-line
    fi
}

zle -N vim-eval-widget vimify-eval

bindkey '^P' vim-eval-widget

It works like this: I search for something using ack, like ack some-pattern. I look at the results and if I like them, I press arrow-up to get the ack line again and then press Ctrl + P. What happens then is that Z shell appends "-l" (list filenames only) if the command starts with "ack". Then it puts "$(...)" around the command and "vim" in front of it. Then the whole thing is executed.

Peter Mortensen
jkramer
3

I use this quite frequently to quickly convert epoch times to a useful datestamp.

perl -l -e 'print scalar(localtime($ARGV[0]))'

Make an alias in your shell:

alias e2d="perl -le \"print scalar(localtime($ARGV[0]));\""

Then pipe an epoch number to the alias.

echo 1219174516 | e2d

Many programs and utilities on Unix/Linux use epoch values to represent time, so this has proved invaluable for me.

jtimberman
3

Extracting Stack Overflow reputation without having to open a web page:

perl -nle "print '  Stack Overflow        ' . $1 . '  (no change)' if /\s{20,99}([0-9,]{3,6})<\/div>/;" "SO.html"  >> SOscores.txt

This assumes the user page has already been downloaded to file SO.html. I use wget for this purpose. The notation here is for Windows command line; it would be slightly different for Linux or Mac OS X. The output is appended to a text file.

I use it in a BAT script to automate sampling of reputation on the four sites in the family: Stack Overflow, Server Fault, Super User and Meta Stack Overflow.

Klaus Byskov Pedersen
Peter Mortensen
3

Remove duplicates in path variable:

set path=(`echo $path | perl -e 'foreach(split(/ /,<>)){print $_," " unless $s{$_}++;}'`)
dr_pepper
3

I often need to see a readable version of the PATH while shell scripting. The following one-liners print every path entry on its own line.

Over time this one-liner has evolved through several phases:

Unix (version 1):

perl -e 'print join("\n",split(":",$ENV{"PATH"}))."\n"'

Windows (version 2):

perl -e "print join(qq(\n),split(';',$ENV{'PATH'})).qq(\n)"

Both Unix/Windows (using q/qq tip from @j-f-sebastian) (version 3):

perl -MFile::Spec -e 'print join(qq(\n), File::Spec->path).qq(\n)' # Unix
perl -MFile::Spec -e "print join(qq(\n), File::Spec->path).qq(\n)" # Windows
Peter Mortensen
Tim Lewis
3

Remove MS-DOS line-endings.

perl -p -i -e 's/\r\n$/\n/' htdocs/*.asp
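A round-trip check in a scratch directory (the file and its contents are made up): a CRLF file goes in, an LF-only file comes out, edited in place by -i.

```shell
# Write a small CRLF file, then strip the carriage returns in place.
dir=$(mktemp -d)
printf 'line one\r\nline two\r\n' > "$dir/dos.txt"
perl -p -i -e 's/\r\n$/\n/' "$dir/dos.txt"
```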
JDrago
2

Filters a stream of white-space separated stanzas (name/value pair lists), sorting each stanza individually:

perl -00 -ne 'print sort split /^/'
Peter Mortensen
mtk
  • `sort()` will put empty lines at paragraph's top. I guess you actually mean this: perl -00 -ne'($n, @a) = sort split /^/; print @a, $n' Both one-liners will fail if there is no newline after the last paragraph. – jfs Sep 23 '08 at 17:20
2

Get human-readable output from du, sorted by size:

perl -e '%h=map{/.\s/;7x(ord$&&10)+$`,$_}`du -h`;print@h{sort%h}'
Adam Bellaire
2

One of the most recent one-liners that got a place in my ~/bin:

perl -ne '$s=time() unless $s; $n=time(); $d=$n-$s; if ($d>=2) { print "$. lines in last $d secs, rate ",$./$d,"\n"; $. =0; $s=$n; }'

You would use it against a tail of a log file, and it will print the rate at which lines are being output.

Want to know how many hits per second you are getting on your webservers? tail -f log | this_script.

melo
2

Network administrators tend to misconfigure a "subnet address" as a "host address", especially while using the Cisco ASDM auto-suggest. This straightforward one-liner scans the configuration files for any such errors.

incorrect usage: permit host 10.1.1.0

correct usage: permit 10.1.1.0 255.255.255.0

perl -ne "print if /host ([\w\-\.]+){3}\.0 /" *.conf

This was tested and used on Windows; please suggest if it should be modified in any way for correct usage.

jfs
Benny
1

Expand all tabs to spaces: perl -pe'1while+s/\t/" "x(8-pos()%8)/e'

Of course, this could be done with :set et, :ret in Vim.

ephemient
1

I have a list of tags with which I identify portions of text. The master list is of the format:

text description {tag_label}

It's important that the {tag_label} are not duplicated. So there's this nice simple script:

perl -ne '($c) = $_ =~ /({.*?})/; print $c,"\n" ' $1 | sort  | uniq -c | sort -d

I know that I could do the whole lot in shell or Perl, but this was the first thing that came to mind.

singingfish
  • `perl -ne'$f{$1}++ while /({.*?})/g; END{ print "$f{$_} $_\n" for (sort {$f{$a} <=> $f{$b}} keys %f) }' $1`. You're right that for such tasks the first thing in mind is good enough. btw, are you sure that there could be only one tag per line? – jfs Sep 18 '09 at 19:24
1

Often I have had to convert tabular data into configuration files. For example, network cabling vendors provide the patching record in Excel format, and we have to use that information to create configuration files. That is,

Interface, Connect to, Vlan
Gi1/0/1, Desktop, 1286
Gi1/0/2, IP Phone, 1317

should become:

interface Gi1/0/1
 description Desktop
 switchport access vlan 1286

and so on. The same task reappears in several forms in various administration tasks where tabular data needs to be prepended with its field names and transposed to a flat structure. I have seen some DBAs waste a lot of time preparing their SQL statements from Excel sheets. It can be achieved using this simple one-liner: just save the tabular data in CSV format using your favourite spreadsheet tool and run it. The field names in the header row get prepended to the individual cell values, so you may have to edit the output to match your requirements.

perl -F, -lane "if ($.==1) {@keys = @F} else{print @keys[$_].$F[$_] foreach(0..$#F)} " 

The caveat is that none of the field names or values should contain any commas. Perhaps this can be further elaborated to catch such exceptions in a one-liner; please improve this if possible.

Benny
0

Here is one that I find handy when dealing with a collection of compressed log files:

   open STATFILE, "zcat $logFile|" or die "Can't open zcat of $logFile" ;
Kwondri
-5

At some point I found that anything I would want to do with Perl that is short enough for the command line with 'perl -e' can be done better, easier and faster with normal Z shell features, without the hassle of quoting. E.g., the example above could be done like this:

srt=(*.srt); for foo in *.avi; mv $srt[1] ${foo:r}.srt && srt=($srt[2,-1])
Peter Mortensen
jkramer
  • glob-in-scalar-context is really really easy to get wrong; it should be avoided wherever possible. – ysth Sep 23 '08 at 07:27
  • Doesn't the new version get out of sync on an mv failure? – ysth Sep 23 '08 at 07:29
  • Well, the whole idea of this thing is somewhat instable, since it assumes that for every .avi there's a .srt and that both, when sorted alphabetically, have each avi/srt pair at the same position in the lists. However, you can replace the && with ; and put braces around it. ;) – jkramer Sep 23 '08 at 17:17