I'm using WWW::Mechanize to read a particular webpage in a loop that runs every few seconds. Occasionally, the 'GET' times out and the script stops running. How can I recover from such a timeout so that the script continues the loop and tries the 'GET' again the next time around?
4 Answers
Use eval:
eval {
    my $resp = $mech->get($url);
    $resp->is_success or die $resp->status_line;
    # your code
};
if ($@) {
    print "Recovered from a GET error\n";
}
The eval block will catch any error while GETting the page.
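In the polling scenario from the question, the eval goes inside the loop. A minimal sketch of that (the URL, timeout, and poll interval are placeholders):

use strict;
use warnings;
use WWW::Mechanize ();

my $url  = 'https://example.com/';               # placeholder URL
my $mech = WWW::Mechanize->new( timeout => 10 ); # fail fast instead of hanging

while (1) {
    eval {
        my $resp = $mech->get($url);
        $resp->is_success or die $resp->status_line;
        # process the page here
    };
    warn "Recovered from a GET error: $@" if $@;
    sleep 5;    # poll every few seconds, as in the question
}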

Eugene Yarmash
One option would be to implement a method to handle timeout errors and hook it into the mech object at construction time as the onerror handler. See Constructor and Startup in the docs.
You could even ignore errors by setting a null error handler, for example:
my $mech = WWW::Mechanize->new( onerror => undef );
but I wouldn't recommend that - you'll just get weird problems later.
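For the first option, a sketch of a non-fatal onerror handler, assuming get() still returns the (possibly failed) response once the handler declines to die; the URL and poll interval are placeholders:

use strict;
use warnings;
use WWW::Mechanize ();

my $url = 'https://example.com/';   # placeholder URL

# onerror expects a die-compatible function; a handler that only warns
# logs the error and lets the script carry on.
my $mech = WWW::Mechanize->new(
    onerror => sub { warn @_ },
);

while (1) {
    my $resp = $mech->get($url);
    if ($resp && $resp->is_success) {
        # process the page here
    }
    sleep 5;
}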

martin clayton
This solution keeps retrying the GET until it succeeds. Note that it retries immediately, with no pause between attempts:
do {
    eval {
        $mech->get($url);
    };
} while ($@ ne '');
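A variant with a short pause between attempts and a cap on retries (both limits are arbitrary):

my $tries = 0;
do {
    eval { $mech->get($url) };
    sleep 2 if $@;    # brief pause before the next attempt
} while ($@ && ++$tries < 10);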

Brett Pennings
For a more complete solution, you can use a module like Try::Tiny::Retry. It allows you to specify a code block to run, catches any errors, then retries that code block a configurable number of times. The syntax is pretty clean.
use WWW::Mechanize ();
use Try::Tiny::Retry ':all';

my $mech = WWW::Mechanize->new();

retry {
    $mech->get("https://stackoverflow.com/");
}
on_retry {
    warn("Failed. Retrying. Error was: $_");
}
delay {
    # max of 100 tries, sleeping 5 seconds between each failure
    return if $_[0] >= 100;
    sleep(5);    # Perl's sleep takes seconds
}; # don't forget this semicolon

# dump all the links found on the page
print join "\n", map { $_->text } $mech->links;
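The same module also offers delay_exp for randomized exponential backoff instead of a fixed sleep; as I read its docs, the block returns the number of tries allowed and a base delay in microseconds:

use Try::Tiny::Retry ':all';

# up to 100 tries, backing off exponentially from a 1-second base
retry {
    $mech->get("https://stackoverflow.com/");
}
delay_exp { 100, 1e6 };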

Trenton