This question is for Python 3.6.3, bs4 and Selenium 3.8 on Win10.
I am trying to scrape pages with dynamic content, specifically numbers and text (from http://www.oddsportal.com, for example). From my understanding, requests + BeautifulSoup will not do the job, because the dynamic content is rendered by JavaScript after the initial page load and never appears in the raw HTML response. So I have to use another tool such as Selenium WebDriver.
Given that I will be using Selenium WebDriver anyway, do you recommend skipping BeautifulSoup and sticking with Selenium's own functions, e.g.
elem = driver.find_element_by_name("q")
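For context, the Selenium-only route I have in mind would look roughly like this (a rough sketch; the CSS selector is just a placeholder I have not verified against the site):

    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC

    driver = webdriver.Chrome()
    driver.get("http://www.oddsportal.com")

    # Wait until the dynamically loaded table is present
    # ("table.table-main" is a placeholder selector, not verified)
    WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.CSS_SELECTOR, "table.table-main"))
    )

    # Extract the rows directly with Selenium's own locator functions
    rows = driver.find_elements_by_css_selector("table.table-main tr")
    for row in rows:
        print(row.text)

    driver.quit()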
Or is it considered better practice to use Selenium together with BeautifulSoup?
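What I picture for that second route is something like the following (again only a sketch, with the same placeholder selector):

    from bs4 import BeautifulSoup
    from selenium import webdriver

    driver = webdriver.Chrome()
    driver.get("http://www.oddsportal.com")

    # Hand the fully rendered HTML over to BeautifulSoup for parsing
    soup = BeautifulSoup(driver.page_source, "html.parser")
    driver.quit()

    # Placeholder selector, for illustration only
    for cell in soup.select("table.table-main td"):
        print(cell.get_text(strip=True))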
Do you have any opinion as to which of the two routes will give me more convenient functions to work with?