The code:
from bs4 import BeautifulSoup
import undetected_chromedriver as uc
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
options = uc.ChromeOptions()
options.add_argument('--blink-settings=imagesEnabled=false') # disable images so the page loads faster
options.add_argument('--disable-notifications')
prefs = {"profile.default_content_setting_values.notifications" : 2}
options.add_experimental_option("prefs", prefs)
driver = uc.Chrome(options=options)
driver.get('https://www.hepsiburada.com/pinar-tam-yagli-sut-4x1-lt-pm-zypinar153100004')
try:
    price_element = WebDriverWait(driver, 10).until(EC.visibility_of_element_located((By.ID, 'offering-price')))
    price = float(price_element.find_element_by_xpath('./span[1]').text + '.' + price_element.find_element_by_xpath('./span[2]').text)
except:
    print("there is no price info...")
Hello friends, what could be the reason I can't get the price information when running this on a GCP virtual machine, and how can I fix the data extraction problem? On a normal computer the same code retrieved the price information without issues. What changes when you run it on a virtual machine?
On the virtual machine it throws AttributeError: 'WebElement' object has no attribute 'find_element_by_xpath'. The normal computer does not throw this error. Both have Chrome installed, and I also turned off the firewall on the virtual machine.
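For reference, the find_element_by_* helpers were deprecated in Selenium 4 and removed in later 4.x releases, so a newer selenium package on the virtual machine would raise exactly this AttributeError. A minimal sketch of the Selenium 4 style call, assuming the rest of the script stays the same (whole_part and fraction_part are just illustrative names):
# Selenium 4 style: locate child elements with find_element(By.XPATH, ...)
# instead of the removed find_element_by_xpath helper
whole_part = price_element.find_element(By.XPATH, './span[1]').text
fraction_part = price_element.find_element(By.XPATH, './span[2]').text
price = float(whole_part + '.' + fraction_part)
Comparing the installed version on both machines, for example with importlib.metadata.version('selenium'), would confirm whether this is the difference.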
I also added these options to my code, but nothing changed:
options.add_argument('--no-sandbox')
options.add_argument('--disable-dev-shm-usage')
options.add_argument('--disable-gpu')
options.add_argument('--disable-extensions')
options.add_argument('--disable-notifications')
options.add_argument('--disable-popup-blocking')
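If the element wait times out on the VM even with the corrected locator calls, dumping a screenshot and the page source shows what the VM actually rendered (for example a bot-detection page). A small diagnostic sketch, assuming the driver from the code above; timeout.png and page_source.html are just illustrative file names:
from selenium.common.exceptions import TimeoutException
try:
    price_element = WebDriverWait(driver, 10).until(EC.visibility_of_element_located((By.ID, 'offering-price')))
except TimeoutException:
    # capture what the VM actually rendered, e.g. a bot-detection or consent page
    driver.save_screenshot('timeout.png')
    with open('page_source.html', 'w', encoding='utf-8') as f:
        f.write(driver.page_source)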
Version information:
OS: Microsoft Windows Server 2022 Datacenter
Google Chrome: 111.0.5563.65 (Official Build) (64-bit)
JupyterLab: 3.4.4