I need to render a page for scraping with requests-html in Python. No matter what I try, I can't get the page to render; the response stays as the plain, pre-JavaScript HTML.
I did find this site, which led me to believe that I'm unable to render the page because it is behind a login screen and therefore requires cookies (is that correct?).
Here is a part of my current code:
from requests_html import HTMLSession

session = HTMLSession()  # assumed to be logged in already (login steps omitted)

r = session.get("https://ib.nab.com.au/nabib/acctInfo_acctBal.ctl#/")
r.html.render(cookies=session.cookies)
item = r.html.find('#myAccounts > div > ib-my-accounts > div > section > div > ib-my-account-summary-net-position > div > table > tbody > tr:nth-child(3) > td > div > span > number-signed-balance > div > span')
print(item)
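Since the account page is behind a login, my assumption is that session.cookies has to be populated by a login request before the get above. A minimal sketch of what I mean (the login URL and form field names here are placeholders, not NAB's real ones):

from requests_html import HTMLSession

session = HTMLSession()

# Hypothetical login request: the URL and field names are placeholders
# and would need to match NAB's actual login form.
session.post(
    "https://ib.nab.com.au/nabib/login.ctl",
    data={"userid": "...", "password": "..."},
)

# session.cookies should now hold the auth cookies that render()
# needs in order to see the logged-in version of the page.
r = session.get("https://ib.nab.com.au/nabib/acctInfo_acctBal.ctl#/")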
And here is the error I get:
r.html.render(cookies=session.cookies)
TypeError: render() got an unexpected keyword argument 'cookies'
This argument is even in the documentation, so why won't render() accept it?
NOTE
The render() function does work on my machine. The example code below confirms this, hence my belief that this is a cookie issue.
from requests_html import HTMLSession

# Create an HTML Session object
session = HTMLSession()

# Use the session to fetch the page
resp = session.get("https://finance.yahoo.com/quote/NFLX/options?p=NFLX")

# Without JavaScript
option_tags = resp.html.find("option")

# Run the JavaScript on the page
resp.html.render()

# With JavaScript (re-query the now-rendered DOM; the original had
# resp.html.html.find, which calls str.find and returns an index)
option_tags_js = resp.html.find("option")

print(option_tags)
print(option_tags_js)
UPDATE!
I managed to fix the cookie issue! The requests-html library must be installed from the GitHub repo, not directly from PyPI via pip. The version number is the same, but the render() function has been updated on GitHub to accept a cookies argument. I am, however, still unable to render this page and pull out the XPath data I require.
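For anyone hitting the same thing: installing straight from the repo looks something like `pip install git+https://github.com/psf/requests-html` (adjust the URL if the repo location has changed), and you can confirm which render() signature you actually have installed with a quick check:

import inspect
from requests_html import HTML

# Print the installed render() signature; per the update above, the
# GitHub version lists a cookies parameter, while the PyPI release did not.
print(inspect.signature(HTML.render))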