Here's a rewrite of your code in saner Perl. All for loops are guaranteed to terminate because I changed them to use the list version of for instead of C-style for, which is a Good Perl Habit: list-for is a lot harder to get wrong. I also changed the outermost for to iterate over the list directly instead of looping over the indexes, since you didn't actually use the index for anything other than pulling items out of the list anyhow.
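To illustrate the difference, here's a minimal sketch (the @items array and process() sub are made up for the example, not taken from your code):

sub process { print "got $_[0]\n" }    # stand-in for real work
my @items = ('a', 'b', 'c');

# C-style: you manage the counter yourself; an off-by-one in the
# condition or a forgotten increment skips items or loops forever.
for (my $i = 0; $i <= $#items; $i++) {
    process($items[$i]);
}

# List-style: Perl walks the list for you; no counter to get wrong.
for my $item (@items) {
    process($item);
}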
The bare <> operator reads from the files listed in @ARGV (or from STDIN if @ARGV is empty), so I changed that. There's rarely a need to explicitly open $ARGV[0].
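Roughly the difference, shown as two alternatives side by side (the variable names are mine; run one or the other, not both):

# Explicit open: sees only the first argument, and never STDIN.
open my $in, '<', $ARGV[0] or die "Can't open $ARGV[0]: $!";
print while <$in>;

# The diamond operator: every file named in @ARGV, or STDIN if none.
print while <>;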
Finally, I changed the open for the output file to use a lexical filehandle and the three-argument form of open. Both are recommended practices. Lexical filehandles avoid globals and are automatically closed when they go out of scope (...and you forgot to close your OUT...), while three-arg open avoids security bugs when the filename comes from outside the program.
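A quick sketch of why, using a hypothetical $file that might be user-supplied:

my $file = shift // 'out.txt';    # hypothetical, possibly user-supplied name

# Bareword handle + two-arg open: OUT is a global, and the mode is parsed
# out of the string, so a $file beginning with '>' silently switches you
# to append mode (and a plain two-arg open will even run a command when
# the name ends in '|').
open OUT, ">$file" or die "Can't open $file: $!";
close OUT;    # ...which you must remember to do yourself

# Lexical handle + three-arg open: the mode is pinned to '>', the filename
# is taken literally, and $out is closed automatically at end of scope.
open my $out, '>', $file or die "Can't open $file: $!";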
I also made it run cleanly under strict and warnings, both of which you should always use.
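For example, strict turns variable-name typos into compile-time errors instead of silently wrong output (this snippet deliberately fails to compile):

use strict;
use warnings;

my $count = 10;
print $cuont;    # typo: under strict this is a compile-time error
                 # ("Global symbol "$cuont" requires explicit package
                 # name"); without strict, Perl silently prints nothing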
The revised code:
#!/usr/bin/env perl
use strict;
use warnings;
use 5.010;    # for say()

# Slurp all input lines from the files in @ARGV (or STDIN), sans newlines.
my @list = <>;
chomp @list;

for my $line (@list) {
    # Whitespace-separated fields; field 0 names the output file.
    my @id = split(/\s+/, $line);

    open my $outfile, '>', "tmp/$id[0].diso"
        or die "Failed to open output file: $!";

    # Fields from index $id[1] through the last hold ranges like "n1-n2".
    for my $j ($id[1] .. $#id) {
        my ($n1, $n2) = split(/-/, $id[$j]);

        # Write out every number in the range, one per line.
        for my $k ($n1 .. $n2) {
            say $outfile $k;
        }
    }
}
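As a rough sanity check: assuming an input line of the form "prot1 2 5-10 20-30" (a made-up example, reading field 1 as the index of the first range field), the script would create tmp/prot1.diso containing the numbers 5 through 10 and 20 through 30, one per line. Note that the tmp/ directory must already exist, or the open will die.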
Unfortunately, I am not able to say whether this will actually work correctly for your data set because you have not provided any sample data to test it against.