I have a multi-GB file to process in Perl. Reading the file line-by-line takes several minutes; reading it into a scalar via File::Slurp takes a couple of seconds. Good. Now, what is the most efficient way to process each "line" of the scalar? I assume I should avoid modifying the scalar (e.g., lopping off each successive line as I process it), so the scalar isn't repeatedly reallocated.
I tried this:
use File::Slurp;
my $file_ref = read_file( '/tmp/tom_timings/tom_timings_15998', scalar_ref => 1 );
for my $line ( split /\n/, $$file_ref ) {
    # process line
}
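For comparison, here's a sketch of one alternative I'm aware of (shown on toy data, not benchmarked against the real file): opening an in-memory filehandle on the scalar ref, so readline() walks the string in place instead of split materializing every line as a huge list up front.

```perl
use strict;
use warnings;

# Toy stand-in for the multi-GB slurped scalar.
my $data     = "alpha\nbeta\ngamma\n";
my $file_ref = \$data;

# Perl can open a filehandle on a scalar reference; reads then
# come straight out of the in-memory string, line by line.
open my $fh, '<', $file_ref or die "in-memory open failed: $!";
my @seen;
while ( my $line = <$fh> ) {
    chomp $line;
    push @seen, $line;    # process $line here
}
close $fh;
print "@seen\n";    # alpha beta gamma
```

This keeps per-line semantics identical to ordinary file reading (including $/ handling) without building the whole line list at once.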
And it's sub-minute: adequate but not great. Is there a faster way to do this? (I have more memory than God.)
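One other direction I've wondered about, sketched here on toy data: iterating with a /g match so no giant list of lines is ever built. The capture still copies each line into $1, so whether this actually beats split is exactly what I'd like to know.

```perl
use strict;
use warnings;

my $data     = "alpha\nbeta\ngamma";    # note: no trailing newline
my $file_ref = \$data;

my @seen;
# /g remembers pos() between iterations; ".*\n" grabs a full line,
# and the ".+" alternative catches a final line with no newline.
while ( $$file_ref =~ /(.*\n|.+)/g ) {
    my $line = $1;
    chomp $line;
    push @seen, $line;    # process $line here
}
print scalar(@seen), "\n";    # 3
```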