I'm testing the code below.
from bs4 import BeautifulSoup
import requests
import time
from selenium import webdriver

# Firefox profile that accepts the intranet's untrusted certificate
profile = webdriver.FirefoxProfile()
profile.accept_untrusted_certs = True

wd = webdriver.Firefox(executable_path="C:/Utility/geckodriver.exe", firefox_profile=profile)

url = "https://corp_intranet"
wd.get(url)

# give the login page a moment to load, then fill in the form
time.sleep(2)
username = wd.find_element_by_id("id_email")
username.send_keys("my_email@corp.com")
password = wd.find_element_by_id("id_password")
password.send_keys("my_password")

# separate plain HTTP request (this does not use the logged-in Selenium session)
r = requests.get(url)
content = r.content.decode('utf-8')
print(BeautifulSoup(content, 'html.parser'))
This logs into my corporate intranet fine, but the final print only shows very basic information. Hitting F12 shows that a lot of the data on the page is rendered with JavaScript. I did some research and tried to find a way to grab what I actually see on the screen, rather than a heavily diluted version of it. Is there some way to do a full dump of all the data that is displayed on the page? A rough sketch of the kind of thing I mean is below. Thanks.
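Just to illustrate what I mean by a dump, here is a minimal sketch (continuing from the script above) that reuses the Selenium session that is already logged in and grabs the rendered DOM, instead of making a separate requests call. The wait target "id_of_some_dynamic_element" is a placeholder I made up, and I don't know if this is the right approach.

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# Wait until the JavaScript-rendered content appears instead of sleeping
# ("id_of_some_dynamic_element" is a made-up placeholder id).
WebDriverWait(wd, 10).until(
    EC.presence_of_element_located((By.ID, "id_of_some_dynamic_element"))
)

# page_source returns the DOM as the browser currently sees it, after the JS has run.
rendered_html = wd.page_source
soup = BeautifulSoup(rendered_html, 'html.parser')

# Dump all visible text from the rendered page.
print(soup.get_text(separator='\n', strip=True))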