
This is a part of my code in Python:

url="http://export.finam.ru/very long line"
data = urlopen(url).read()

These URLs change, but they are definitely correct. Sometimes my script freezes at this line. Nothing happens: urlopen cannot get the data, no error message appears, and the script just waits for hours. How can I break out of this waiting and run

data = urlopen(url).read() 

again? Comments suggested that the timeout parameter will solve my problem. How can I use it to retry

data = urlopen(url).read() 

?
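A minimal sketch of the retry-with-timeout approach, assuming Python 3's `urllib.request` (on Python 2, `urllib2.urlopen` accepts `timeout`, but plain `urllib.urlopen` does not); the timeout and retry counts are placeholder values:

```python
import socket
from urllib.request import urlopen
from urllib.error import URLError

def fetch_with_retry(url, timeout=30, retries=3):
    """Download `url`, giving up on each attempt after `timeout`
    seconds and retrying up to `retries` times."""
    for attempt in range(1, retries + 1):
        try:
            # timeout makes urlopen raise instead of blocking forever
            return urlopen(url, timeout=timeout).read()
        except (socket.timeout, URLError) as exc:
            print(f"attempt {attempt} failed: {exc}")
    raise RuntimeError(f"could not fetch {url} after {retries} attempts")
```

With this, a hung connection raises `socket.timeout` after 30 seconds instead of blocking indefinitely, and the loop simply calls `urlopen` again.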

Kosmonavt
  • possible duplicate of https://stackoverflow.com/questions/492519/timeout-on-a-function-call – FrenchMasterSword Jan 12 '19 at 12:19
  • You can set the timeout parameter on `urlopen`. Add the timeout parameter in my suggested dupe and handle it in accordance with the answer. – roganjosh Jan 12 '19 at 12:21
  • Possible duplicate of [Handling urllib2's timeout? - Python](https://stackoverflow.com/questions/2712524/handling-urllib2s-timeout-python) – roganjosh Jan 12 '19 at 12:22
  • @FrenchMasterSword it's not a duplicate of that; requests come with a specific timeout parameter, there's no need to treat it as some generic function that you want to wrap in some timeout - it's done for you – roganjosh Jan 12 '19 at 12:23
  • Yes, I didn't know about it. You may post it as an answer, I think. – FrenchMasterSword Jan 12 '19 at 12:25
  • There is no timeout parameter in this function urllib.urlopen(url[, data[, proxies[, context]]]) – Kosmonavt Jan 12 '19 at 12:27
  • @FrenchMasterSword I've flagged as a duplicate of another question. There is no need for me to answer it, I'm hoping for it to be closed as a dupe because all the info needed already exists – roganjosh Jan 12 '19 at 12:27
  • @Kosmonavt why are you using the base urllib? You haven't mentioned that anywhere in the question, I already took a guess to work out it was urllib2. On that theme, is there a reason you're not using `requests`? – roganjosh Jan 12 '19 at 12:28

0 Answers