
I'm using a Python script with 'lxml' and 'requests' to scrape a web page. My goal is to grab an element from the page and download it, but the content is served over HTTPS and I'm getting an error when trying to access the page. I'm sure there is some kind of certificate or authentication I have to include, but I'm struggling to find the right resources. I'm using:

page = requests.get("https://[example-page.com]", auth=('[username]','[password]'))

and the error is:

requests.exceptions.SSLError: [Errno 185090050] _ssl.c:340: error:0B084002:x509 certificate routines:X509_load_cert_crl_file:system lib
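
For context, here's a minimal sketch of the fetch-and-parse flow I'm attempting (the URL, credentials, and XPath below are placeholders, not my real values):

import requests
from lxml import html

# Fetch the page over HTTPS with basic auth (placeholder values).
page = requests.get("https://example-page.com", auth=("username", "password"))

# Parse the HTML and pull out the element(s) I want to download.
tree = html.fromstring(page.content)
links = tree.xpath("//a/@href")  # placeholder XPath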

1 Answer


Adding verify=False to the GET request solves the issue.

page = requests.get("https://[example-page.com]", auth=('[username]','[password]'), verify=False)
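
Note that verify=False turns off SSL certificate verification entirely, so requests will warn about an insecure connection and you lose protection against man-in-the-middle attacks. If you can get hold of the server's certificate (or the CA bundle that signed it), pointing verify at it is the safer fix; a minimal sketch, with a placeholder bundle path:

import requests

# Verify against a specific CA bundle instead of disabling checks
# (the path is a placeholder for wherever your certificate lives).
page = requests.get("https://example-page.com",
                    auth=("username", "password"),
                    verify="/path/to/ca-bundle.pem")

If you do stick with verify=False, then depending on your requests/urllib3 version you can silence the InsecureRequestWarning it emits, e.g. with urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning).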
  • Glad it helped. It's alright to mark your own answer as correct. I don't know how much else there is to this question, though it might be flagged as a duplicate :) – Jason Sperske May 01 '14 at 21:50