
I’m trying to automate downloading a PDF from a website, and I have tried numerous approaches. I’m currently using Python and Selenium/PhantomJS to find the PDF href link in the page source, and then something like wget to download and store the PDF on my local drive.

Whilst I have no issues finding all the href links on the page with find_elements_by_xpath("//a/@href"), or narrowing in on the element that holds the URL with find_element_by_link_text('Active Saver') and then printing it with the get_attribute('href') method, it does not display the link correctly.

This is the source element, an a tag, that I need the link from:

<a href="#" data-ng-mouseup="loadProductSheetPdf($event, download.ProductType)" target="_blank" data-ng-click="$event.preventDefault()" analytics-event="{event_name:'file_download', event_category: 'download', event_label:'product summary'}" class="ng-binding ng-isolate-scope">Active Saver</a>

As you can see the href attribute is href="#" and when I run get_attribute('href') on this element I get:

https://www.bupa.com.au/health-insurance/cover/active-saver#

Which is not the link to the PDF. I know this because when I open the page in Firefox and inspect the element, I can see the actual, JavaScript-executed source:

<a href="https://bupaanzstdhtauspub01.blob.core.windows.net/productfiles/J6_ActiveSaver_NSWACT_20180401_000000.pdf" data-ng-mouseup="loadProductSheetPdf($event, download.ProductType)" target="_blank" data-ng-click="$event.preventDefault()" analytics-event="{event_name:'file_download', event_category: 'download', event_label:'product summary'}" class="ng-binding ng-isolate-scope">Active Saver</a>

This https://bupaanzstdhtauspub01.blob.core.windows.net/productfiles/J6_ActiveSaver_NSWACT_20180401_000000.pdf is the link I need.

https://www.bupa.com.au/health-insurance/cover/active-saver is the link to the page that houses the PDF. As you can see, the PDF is stored on another domain, not www.bupa.com.au.

Any help with this would be very appreciated.

I realised that this is actually an AJAX request, and when executed it obtains the PDF URL that I'm after. I'm now trying to figure out how to extract that URL from the response object returned by the POST request.

My code so far is:

import requests

url = "post_url"
data = {}  # data dictionary to send with the request, extracted from dev tools
response = requests.post(url, data=data)
response.json()

However, I keep getting an error indicating that no JSON object could be decoded. I can look at the response using response.text, and I get:

 u'<html>\r\n<head>\r\n<META NAME="robots" CONTENT="noindex,nofollow">\r\n<script src="/_Incapsula_Resource?SWJIYLWA=719d34d31c8e3a6e6fffd425f7e032f3">\r\n</script>\r\n<script>\r\n(function() { \r\nvar z="";var b="7472797B766172207868723B76617220743D6E6577204461746528292E67657454696D6528293B766172207374617475733D227374617274223B7661722074696D696E673D6E65772041727261792833293B77696E646F772E6F6E756E6C6F61643D66756E6374696F6E28297B74696D696E675B325D3D22723A222B286E6577204461746528292E67657454696D6528292D74293B646F63756D656E742E637265617465456C656D656E742822696D6722292E7372633D222F5F496E63617073756C615F5265736F757263653F4553324C555243543D363726743D373826643D222B656E636F6465555249436F6D706F6E656E74287374617475732B222028222B74696D696E672E6A6F696E28292B222922297D3B69662877696E646F772E584D4C4874747052657175657374297B7868723D6E657720584D4C48747470526571756573747D656C73657B7868723D6E657720416374697665584F626A65637428224D6963726F736F66742E584D4C4854545022297D7868722E6F6E726561647973746174656368616E67653D66756E6374696F6E28297B737769746368287868722E72656164795374617465297B6361736520303A7374617475733D6E6577204461746528292E67657454696D6528292D742B223A2072657175657374206E6F7420696E697469616C697A656420223B627265616B3B6361736520313A7374617475733D6E6577204461746528292E67657454696D6528292D742B223A2073657276657220636F6E6E656374696F6E2065737461626C6973686564223B627265616B3B6361736520323A7374617475733D6E6577204461746528292E67657454696D6528292D742B223A2072657175657374207265636569766564223B627265616B3B6361736520333A7374617475733D6E6577204461746528292E67657454696D6528292D742B223A2070726F63657373696E672072657175657374223B627265616B3B6361736520343A7374617475733D22636F6D706C657465223B74696D696E675B315D3D22633A222B286E6577204461746528292E67657454696D6528292D74293B6966287868722E7374617475733D3D323030297B706172656E742E6C6F636174696F6E2E72656C6F616428297D627265616B7D7D3B74696D696E675B305D3D22733A222B286E6577204461746528292E67657454696D6528292D74293B7868722E6F70656E2822474554222C222F5F496E6361707375
6C615F5265736F757263653F535748414E45444C3D363634323839373431333131303432323133352C353234303631363938363836323232363836382C393038303935393835353935393539353435312C31303035363336222C66616C7365293B7868722E73656E64286E756C6C297D63617463682863297B7374617475732B3D6E6577204461746528292E67657454696D6528292D742B2220696E6361705F6578633A20222B633B646F63756D656E742E637265617465456C656D656E742822696D6722292E7372633D222F5F496E63617073756C615F5265736F757263653F4553324C555243543D363726743D373826643D222B656E636F6465555249436F6D706F6E656E74287374617475732B222028222B74696D696E672E6A6F696E28292B222922297D3B";for (var i=0;i<b.length;i+=2){z=z+parseInt(b.substring(i, i+2), 16)+",";}z = z.substring(0,z.length-1); eval(eval(\'String.fromCharCode(\'+z+\')\'));})();\r\n</script></head>\r\n<body>\r\n<iframe style="display:none;visibility:hidden;" src="//content.incapsula.com/jsTest.html" id="gaIframe"></iframe>\r\n</body></html>'

This clearly does not contain the URL I'm after. The frustrating thing is that I can see the URL being obtained when I use Firefox's dev tools:

Screenshot of Firefox dev tools showing the link
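Note that the HTML body above is an Incapsula bot-protection challenge page, not JSON, which is exactly why response.json() reports that no JSON object could be decoded. A minimal sketch (a hypothetical helper, not part of my script) that tells the two cases apart before parsing:

```python
import json

def parse_json_or_flag_challenge(body):
    """Return parsed JSON, or None if the body looks like an HTML challenge page."""
    stripped = body.lstrip()
    if stripped.startswith("<"):  # HTML, e.g. an Incapsula interstitial
        return None
    return json.loads(body)

# A challenge page is detected instead of raising a decode error:
print(parse_json_or_flag_challenge('<html><head></head></html>'))  # None
print(parse_json_or_flag_challenge('{"FilePath": "x.pdf"}'))       # {'FilePath': 'x.pdf'}
```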

Can anyone help me with this?

Alan
  • Possible duplicate of [How can I download a file on a click event using selenium?](https://stackoverflow.com/questions/18439851/how-can-i-download-a-file-on-a-click-event-using-selenium) – JeffC Apr 23 '18 at 01:53
  • Any reason to trim the `tagName` from the HTML provided within the question? – undetected Selenium Apr 23 '18 at 05:46
  • I only just signed up to stackoverflow today so I'm still working out how to use it. For some reason when copied and pasted the full link it didn't seem to publish the full link with tag (only innerhtml) – Alan Apr 23 '18 at 05:50
  • Well, welcome to Stack Overflow then! As a new member, you may want to read the [Tour] some time. Formatting help and a preview are available while you are entering your question; for more, see [How do I format my posts using Markdown or HTML?](https://stackoverflow.com/help/formatting). Since HTML is a valid formatting option, its codes disappear from view; to prevent that, use `\`code ticks\`` (when inline) or `code blocks` – I have edited your post to use the latter, as it is a rather long line, but still a single line, of data that you get. – Jongware Apr 23 '18 at 10:13

1 Answer


I was able to solve this by ensuring that both the header information and the request payload (data) sent with the POST request were complete and accurate (obtained from the Firefox dev tools web console). Once I received the response data for the POST request, it was relatively trivial to extract the URL linking to the PDF file I wanted to download. I then downloaded the PDF using urlretrieve from the urllib module.

I modeled my script on the script from this page. However, I ended up using urllib2.Request from the urllib2 module instead of requests.post from the requests module; for some reason, urllib2 worked more consistently than requests. My working code ended up looking like this (these two methods come from my class object, but they show the working code):
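In Python 3 terms (urllib.request, the successor of urllib2), the request-building part of the fix looks roughly like this. The payload fields mirror my search_request; the real values come from Firefox dev tools, and the request is shown without actually being sent:

```python
import json
from urllib.request import Request

# Hypothetical payload; the real values are copied from Firefox dev tools.
search_request = {"PackageEntityName": "Active Saver",
                  "ProductType": 1,
                  "Excess": None}
payload = json.dumps(search_request).encode("utf-8")

# Attaching a bytes body makes this a POST, and the JSON content type
# matches what the API expects.
req = Request("https://www.bupa.com.au/api/cover/datasheets/search",
              data=payload,
              headers={"Content-Type": "application/json"})

print(req.get_method())                # POST (a Request with data defaults to POST)
print(req.get_header("Content-type"))  # application/json
```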

import json
import os
import urllib as ul
import urllib2

....
    def post_request(self, url, data):
        self.data = data
        self.url = url
        req = urllib2.Request(self.url)
        req.add_header('Content-Type', 'application/json')
        res = urllib2.urlopen(req, self.data)
        out = json.load(res)
        return out

    def get_pdf(self):
        link = 'https://www.bupa.com.au/api/cover/datasheets/search'
        directory = '/Users/U1085012/OneDrive/PDS data project/Bupa/PDS Files/'
        excess = [None, 0, 50, 100, 500]

        # singles
        for product in get_product_names_singles():
            self.search_request['PackageEntityName'] = product
            print product
            if 'extras' in product:
                self.search_request['ProductType'] = 2
            else:
                self.search_request['ProductType'] = 1
            # try each excess value until the request succeeds
            for i in range(len(excess)):
                try:
                    self.search_request['Excess'] = excess[i]
                    payload = json.dumps(self.search_request)
                    output = self.post_request(link, payload)
                except urllib2.HTTPError:
                    continue
                else:
                    break

            path = output['FilePath'].encode('ascii')
            file_name = output['FileName'].encode('ascii')
            # retrieve the file only if it doesn't already exist
            if not os.path.exists(directory + file_name):
                ul.urlretrieve(path, directory + file_name)
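The "relatively trivial" extraction step amounts to reading two fields out of the JSON response. A self-contained Python 3 sketch with a made-up response body (only the field names FilePath and FileName come from my code; the values and the directory are illustrative):

```python
import json
import os

# Made-up response body shaped like the real one.
body = ('{"FilePath": "https://example.blob.core.windows.net/files/sheet.pdf",'
        ' "FileName": "sheet.pdf"}')
output = json.loads(body)

directory = '/tmp/pds-files/'
destination = os.path.join(directory, output['FileName'])
print(output['FilePath'])  # the PDF URL to pass to urlretrieve
print(destination)         # /tmp/pds-files/sheet.pdf
```

os.path.join is used here instead of plain string concatenation so a missing trailing slash in directory doesn't silently break the path.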
Alan