
I'm having an issue with Python Requests. Running Python 2.7.

This is my code:

import requests
URL = 'http://www.chiefscientist.gov.au/category/archives/media-releases/'
urlHeaders = {"User-Agent": "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2.8) Gecko/20100722 Firefox/3.6.8 GTB7.1 (.NET CLR 3.5.30729)", "Referer": "http://example.com"}
r = requests.get(URL, headers=urlHeaders, timeout=None)

I'm using timeout=None, so the server can take as long as it likes.
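As a side note, `timeout=None` makes the client wait indefinitely, which can hide where the failure happens. A sketch with an explicit (connect, read) timeout and error handling (the timeout values below are arbitrary, not from the question) makes the failure mode show up faster:

```python
import requests

URL = 'http://www.chiefscientist.gov.au/category/archives/media-releases/'
urlHeaders = {"User-Agent": "Mozilla/5.0", "Referer": "http://example.com"}

try:
    # A (connect, read) timeout tuple fails fast instead of hanging;
    # the values here are illustrative only.
    r = requests.get(URL, headers=urlHeaders, timeout=(5, 30))
    print(r.status_code)
except requests.exceptions.ConnectionError as e:
    # Covers DNS failures, refused connections, and resets.
    print("connection failed: %s" % e)
```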

I get this error:

requests.exceptions.ConnectionError: ('Connection aborted.', error(10060, ...

This is not a duplicate of this question, I'm certain.

I have tried both http and https. With https the server returns 'not found'; with http the script hangs for 10 or more seconds and then raises the error above.

Any thoughts?

JasTonAChair
    your code is working fine for me. Did you check your DNS settings? – Ozgur Vatansever Oct 07 '15 at 01:41
  • I couldn't figure out how to do that on Windows (work computer) but I replaced the domain name with the IP address (175.107.139.110), and I'm getting the same error. – JasTonAChair Oct 07 '15 at 01:58
  • Actually, I tried this on my Linux laptop, and all good. I'll change my Windows DNS settings in future. – JasTonAChair Oct 07 '15 at 02:21
  • Did you try to use the libraries as Scrapy or Beautiful Soup? Or Do you think that your IP address is blocked in www.chiefscientist.gov.au server? – Nguyen Sy Thanh Son Oct 07 '15 at 02:52
  • It's not an issue with parsing the markup, so I didn't use BS. It works on Linux, so as ozgur pointed out, it's likely the Windows DNS settings I was using. – JasTonAChair Oct 07 '15 at 03:07
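Since the comments point at Windows DNS settings, a quick way to separate name resolution from the HTTP layer (the hostname below is the one from the question) is a plain stdlib lookup:

```python
import socket

HOST = 'www.chiefscientist.gov.au'

try:
    # Resolve the hostname the same way requests ultimately would;
    # if this fails, the problem is DNS, not requests itself.
    ip = socket.gethostbyname(HOST)
    print("resolved to %s" % ip)
except socket.gaierror as e:
    print("DNS lookup failed: %s" % e)
```

If the lookup fails here but works on another machine (as on the Linux laptop above), the local resolver settings are the likely culprit.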

0 Answers