
I want my program to try to open a page and get its data within the time interval [0, t]; if that time expires, the connection should be closed. I'm using urllib2 to try to accomplish the task.

import urllib2

t = 1  # timeout in seconds
url = "http://example.com"
response = urllib2.urlopen(url, timeout=t)
html = response.read()

This seems to work if the URL exists. However, with a nonexistent URL it takes far too long for the error to stop the program, and if this code ran on a web site, users would have to wait much too long for the error message.
Is there a way to stop execution of the urlopen call if it takes longer than the set time?
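
A minimal sketch of the same call with explicit error handling (exception names are from the Python 2 standard library) shows where the delay comes from:

import socket
import urllib2

t = 1  # intended timeout in seconds
url = "http://example.com"
try:
    response = urllib2.urlopen(url, timeout=t)
    html = response.read()
except urllib2.URLError as e:
    # Covers refused connections and DNS failures. DNS resolution is
    # not bounded by the timeout parameter, which is why a nonexistent
    # host can hang far longer than t seconds.
    print "urlopen failed:", e
except socket.timeout:
    # read() itself can also time out after the connection is made.
    print "read timed out after", t, "seconds"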

enedene

2 Answers


If you're just checking if the link is correct, use a HEAD request.
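
For example, a minimal sketch with urllib2, which has no built-in HEAD helper; the usual trick is to override get_method on a Request subclass (the URL and timeout here are placeholders):

import urllib2

class HeadRequest(urllib2.Request):
    # urlopen issues whatever HTTP method get_method returns
    def get_method(self):
        return "HEAD"

try:
    response = urllib2.urlopen(HeadRequest("http://example.com"), timeout=1)
    print "Status:", response.getcode()  # 200 means the link resolves
except urllib2.URLError as e:
    print "Link check failed:", e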

Kenan Banks
  • No, I want it to stop in less than 1 second if the link is wrong. Try putting a wrong link into the code you've linked to; you'll have to wait 20 seconds before you get an error message. – enedene Nov 11 '11 at 17:52

I'm not sure why you're experiencing such long delays.

When I try to make a request to a non-existent domain, I get urllib2.URLError: <urlopen error [Errno 11004] getaddrinfo failed> raised in about 0.2 seconds.

What's the exact code you're running, and which domain are you fetching?

Try the requests library and its timeout parameter.
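
For example, a minimal sketch assuming requests is installed (the URL is a placeholder; Timeout is a subclass of RequestException, so it must be caught first):

import requests

try:
    # timeout is in seconds; a slow or unreachable server raises
    # requests.exceptions.Timeout instead of hanging
    response = requests.get("http://example.com", timeout=1)
    html = response.text
except requests.exceptions.Timeout:
    print "Request timed out"
except requests.exceptions.RequestException as e:
    # connection errors, invalid URLs, etc.
    print "Request failed:", e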

Acorn