
I still haven't figured out the answer to this question here:

Python Requests Library returning 429 on 1 request

But I figured I should do the responsible thing and try to work out an alternate approach. I already have a function that opens a specific browser and logs in with my credentials, which leaves me with a browser currently being controlled by Selenium.

If I have already managed to log in with a different function, is there a way to make the requests library use that browser session, which already has the correct credentials?

Login function is here:

from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

# driver and wait (a WebDriverWait wrapped around driver) are created earlier in the script
def riot_login():
    print('Running Login')
    username = 'StackOverflow1218'
    password = 'helpme0biwankenobi'
    # https://stackoverflow.com/questions/56380889/wait-for-element-to-be-clickable-using-python-and-selenium
    wait.until(EC.visibility_of_element_located((By.NAME, "username"))).send_keys(username)
    wait.until(EC.visibility_of_element_located((By.NAME, "password"))).send_keys(password)
    wait.until(EC.element_to_be_clickable((By.XPATH, "/html/body/div/div/div/div[2]/div[1]/div/button"))).click()

Once I have the Selenium-controlled browser open to the page I want, I'd like to reload the page and then use the requests library to grab the response data I'm looking for. Is this possible without having to pass the credentials again? Is there a better way to do this?
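What I'm imagining is something along these lines: pull the cookies out of the Selenium driver after logging in, load them into a requests.Session, and hit the endpoint directly. This is only an untested sketch; session_from_driver and the endpoint URL are placeholder names I made up, and it would only work if the login state is carried in cookies rather than in something like a token in local storage.

import requests

def session_from_driver(driver):
    # Copy the logged-in browser's cookies into a requests session
    session = requests.Session()
    for cookie in driver.get_cookies():
        session.cookies.set(cookie['name'], cookie['value'], domain=cookie.get('domain'))
    # Mirror the browser's User-Agent so the server sees a consistent client
    session.headers['User-Agent'] = driver.execute_script("return navigator.userAgent;")
    return session

# After riot_login() has finished:
# session = session_from_driver(driver)
# data = session.get('https://example.com/the/json/endpoint').json()  # placeholder URL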

Note: Don't worry about the username/password associated with this question; there's no personal data attached to that account.

Thanks.

S.Slusky
  • You're already logged in. Why do you think you need requests to get the data? You can already get the data using Selenium. –  Dec 20 '20 at 01:48
  • I can scrape the page once it is loaded, and have done so 150,000 times, but it's incredibly slow. I'd rather read a portion of the network data used to generate the page and drop it to a local JSON file. I don't think Selenium has any functions to grab that piece of data. – S.Slusky Dec 20 '20 at 01:51

0 Answers