I'm trying to use a multi-threaded strategy with Selenium. In short, I'm trying to fill an input field with IDs.
This is my script:
from concurrent.futures import ThreadPoolExecutor
from selenium import webdriver
from selenium.webdriver.common.by import By
import numpy as np
import time

def driver_setup():
    path = "geckodriver.exe"
    options = webdriver.FirefoxOptions()
    options.add_argument('--incognito')
    # options.add_argument('--headless')
    driver = webdriver.Firefox(options=options, executable_path=path)
    return driver

def fetcher(id, driver):
    print(id)  # this works
    # this doesn't work
    driver.get("https://www.roboform.com/filling-test-all-fields")
    driver.find_element(By.XPATH, '//input[@name="30_user_id"]').send_keys(id)
    time.sleep(2)
    print(id, " sent")
    #return data

def crawler(ids):
    for id in ids:
        print(id)
        results = fetcher(id, driver_setup())

drivers = [driver_setup() for _ in range(4)]
ids = list(range(0, 50))  # generates ids
print(ids)
chunks = np.array_split(np.array(ids), 4)  # splits the id list into 4 chunks

with ThreadPoolExecutor(max_workers=4) as executor:
    bucket = executor.map(crawler, chunks)
    #results = [item for block in bucket for item in block]

[driver.quit() for driver in drivers]
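For reference, this is what the `np.array_split` step produces: near-equal chunks, with the leading chunks taking any extra elements (shown here on 10 ids instead of 50 to keep the output short):

```python
import numpy as np

# Split 10 ids into 4 chunks; the first 10 % 4 = 2 chunks get one extra element.
chunks = np.array_split(np.array(range(10)), 4)
print([c.tolist() for c in chunks])
# → [[0, 1, 2], [3, 4, 5], [6, 7], [8, 9]]
```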
Everything seems to work except the send_keys method. Both print() calls work, so the ids appear to reach both functions. Weirdly, I don't get an error message (I get PyCharm's "Process finished with exit code 0" notice), so I don't know what I'm doing wrong.
Any idea what is missing?
I used this example: https://blog.devgenius.io/multi-threaded-web-scraping-with-selenium-dbcfb0635e83 if it helps
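For what it's worth, while debugging I noticed that `executor.map` can hide worker exceptions entirely: it returns a lazy iterator, and an exception raised inside a worker only surfaces when that worker's result is consumed. A minimal sketch of that behavior (the `boom` function is just a made-up stand-in for a failing worker):

```python
from concurrent.futures import ThreadPoolExecutor

def boom(x):
    # Stand-in for a worker that fails, e.g. on a Selenium call.
    raise ValueError(f"worker failed on {x}")

with ThreadPoolExecutor(max_workers=2) as executor:
    bucket = executor.map(boom, [1, 2])
    # No exception here: map() returns a lazy iterator, so the
    # script can finish with exit code 0 despite the failures.

with ThreadPoolExecutor(max_workers=2) as executor:
    bucket = executor.map(boom, [1, 2])
    try:
        list(bucket)  # forcing evaluation makes the error propagate
    except ValueError as e:
        print("caught:", e)
```

So if the commented-out `results = [...]` line stays commented, any error inside `crawler` would never be reported.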