You probably don't need the overhead of a Selenium/chromedriver setup; you can do it with requests instead:
import requests
from bs4 import BeautifulSoup

r = requests.get('https://www.imdb.com/title/tt0454848/?ref_=adv_li_i')
soup = BeautifulSoup(r.text, 'html.parser')

# The genre chips are the direct children of the chip-list scroller
genres = soup.select_one('div.ipc-chip-list__scroller')
for genre in genres.contents:
    print(genre.text)
This prints out:
Crime
Drama
Mystery
BeautifulSoup documentation can be found at https://www.crummy.com/software/BeautifulSoup/bs4/doc/
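To see how select_one and .contents interact without hitting the network, here is a minimal offline sketch. The HTML snippet is a simplified stand-in that mimics IMDb's chip-list markup, not the live page source:

```python
from bs4 import BeautifulSoup

# Simplified, hypothetical snippet mimicking IMDb's genre chip list
html = """
<div class="ipc-chip-list__scroller">
  <a class="ipc-chip"><span class="ipc-chip__text">Crime</span></a>
  <a class="ipc-chip"><span class="ipc-chip__text">Drama</span></a>
  <a class="ipc-chip"><span class="ipc-chip__text">Mystery</span></a>
</div>
"""

soup = BeautifulSoup(html, "html.parser")
scroller = soup.select_one("div.ipc-chip-list__scroller")
# .contents lists the direct children of the matched <div>, including
# whitespace text nodes, so filter to tags (whose .name is not None)
genres = [child.text for child in scroller.contents if child.name is not None]
print(genres)  # ['Crime', 'Drama', 'Mystery']
```

Note the filter on whitespace text nodes: on a pretty-printed page, .contents interleaves tags with newline strings, which is why iterating it blindly can print blank lines.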
UPDATE: To get the complete genre list, you can use Selenium on its own. The full code is below:
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
chrome_options = Options()
chrome_options.add_argument("--no-sandbox")
webdriver_service = Service("chromedriver/chromedriver")  # path to where you saved the chromedriver binary
browser = webdriver.Chrome(service=webdriver_service, options=chrome_options)
url = 'https://www.imdb.com/title/tt0454848/?ref_=adv_li_i'
# url = 'https://www.imdb.com/title/tt0765429/'
browser.get(url)
# Scroll down so the lazily loaded Storyline section comes into view
browser.execute_script("window.scrollBy(0,2200);")
# Wait until the Storyline section (populated by a GraphQL request) has loaded
WebDriverWait(browser, 20).until(EC.element_to_be_clickable((By.XPATH, '//div[@data-testid="storyline-plot-summary"]')))
genres = WebDriverWait(browser, 20).until(EC.presence_of_all_elements_located((By.XPATH, "//span[text()='Genres']/following-sibling::div//child::li")))
for g in genres:
    print(g.text)
This will print out:
Crime
Drama
Mystery
Thriller
This solution is based on Selenium alone, and it will wait as long as needed (well, up to 20 seconds) for the data to be pulled from the database by the GraphQL query.
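As an aside, IMDb title pages also embed structured data in a script tag of type application/ld+json, which includes a genre field; parsing that can be a lighter-weight fallback. Here is a sketch on a hardcoded, trimmed-down sample payload (the sample is an assumption; the real block contains many more fields):

```python
import json
from bs4 import BeautifulSoup

# Hypothetical, trimmed-down JSON-LD block of the kind IMDb embeds
# in <script type="application/ld+json">; the real payload is larger
html = """
<script type="application/ld+json">
{"@type": "Movie", "name": "Zodiac",
 "genre": ["Crime", "Drama", "Mystery", "Thriller"]}
</script>
"""

soup = BeautifulSoup(html, "html.parser")
# .string holds the raw text inside the <script> tag; parse it as JSON
data = json.loads(soup.find("script", type="application/ld+json").string)
print(data["genre"])  # ['Crime', 'Drama', 'Mystery', 'Thriller']
```

Whether the JSON-LD on the live page lists all four genres or only the top three would need checking against the actual response.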