
I am fetching a number of links with requests, but some of the links are "bad": they simply never finish loading, which causes my program to hang and eventually crash.

Is there a way to set a time limit on a request, so that if that time passes without a response from the URL it just raises some kind of error? Or is there some other way I can prevent bad links from breaking my program?

TheBandit
    Possible duplicate of [Timeout for python requests.get entire response](http://stackoverflow.com/questions/21965484/timeout-for-python-requests-get-entire-response) – TheoretiCAL Dec 06 '16 at 21:56

1 Answer


urllib2 is one option: urlopen accepts a timeout argument (in seconds), and you can catch the error and skip the bad link instead of letting it hang.

import socket
import urllib2

test_url = "http://www.test.com"

try:
    # timeout makes urlopen give up after 10 seconds instead of hanging forever
    urllib2.urlopen(test_url, timeout=10)
except (urllib2.URLError, socket.timeout) as e:
    # the link is "bad" -- log it and move on instead of crashing
    print("skipping {}: {}".format(test_url, e))
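
Since you are already using requests, note that it supports this directly as well: requests.get takes a timeout parameter (in seconds) and raises requests.exceptions.Timeout when it expires. A minimal sketch, assuming your links are collected in a hypothetical list called urls:

import requests

urls = ["http://www.test.com"]  # hypothetical list of links to check

for url in urls:
    try:
        # give up if the server does not respond within 5 seconds
        response = requests.get(url, timeout=5)
        response.raise_for_status()
    except requests.exceptions.RequestException as e:
        # Timeout, ConnectionError, HTTPError, etc. all inherit from
        # RequestException, so a bad link is skipped rather than crashing
        print("skipping {}: {}".format(url, e))

Keep in mind that timeout applies to establishing the connection and to each read, not to the total download time; the duplicate linked in the comments covers putting a cap on the entire response.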