I am trying to scrape the links in the href attributes of the child elements of the parent with id='search-properties' on this site. I first tried locating the parent with find_elements_by_id and then locating the links with find_elements_by_css_selector, but I constantly got AttributeError: 'list' object has no attribute 'find_elements_by_css_selectors' while doing it. So I tried find_elements_by_tag_name as well as find_elements_by_xpath, but instead of scraping the links those scraped the text inside the links, which is of no use to me. After a lot of looking around I finally found this code:
from selenium import webdriver
from selenium.webdriver.support.ui import Select
from selenium.webdriver.support.ui import WebDriverWait
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.keys import Keys
import time
import pandas as pd
import csv
PATH = "C:/ProgramData/Anaconda3/scripts/chromedriver.exe"  # always keep chromedriver.exe inside scripts to save hours of debugging
driver = webdriver.Chrome(PATH)  # pretty important part
driver.get("https://www.gharbazar.com/property/search/?_q=&_vt=1&_r=0&_pt=residential&_si=0&_srt=latest")
driver.implicitly_wait(10)
house = driver.find_elements_by_tag_name("a")
# traverse the list and print every href attribute
for lnk in house:
    print(lnk.get_attribute('href'))
The problem with this code is that it scrapes all the links on the page, including ones that are completely unnecessary, like the javascript:void(0) entries shown in this image. Finally, for pagination I tried to follow this answer but ended up in an infinite loop, so I had to remove the pagination code. In conclusion, I am trying to get the links inside the element with id='search-properties' across multiple pages.
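For what it's worth, here is a sketch of what I think the fix should look like, based on what I have read: scope the search to the parent container instead of the whole page, filter out the junk links, and click "next" until it disappears. It uses the newer Selenium By-based API (my code above uses the older find_elements_by_* methods), and the a[rel='next'] selector for the pagination control is an assumption on my part; I have not confirmed what the site's next button actually looks like:

```python
def clean_links(hrefs):
    # keep only real URLs, dropping None and javascript:void(0) entries
    return [h for h in hrefs if h and not h.startswith("javascript")]


if __name__ == "__main__":
    # selenium imports are deferred so clean_links() works without a browser
    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC

    driver = webdriver.Chrome()  # assumes chromedriver is on the PATH
    driver.get("https://www.gharbazar.com/property/search/"
               "?_q=&_vt=1&_r=0&_pt=residential&_si=0&_srt=latest")
    wait = WebDriverWait(driver, 10)

    all_links = []
    while True:
        # wait for the parent container, then search only inside it
        container = wait.until(
            EC.presence_of_element_located((By.ID, "search-properties")))
        anchors = container.find_elements(By.CSS_SELECTOR, "a[href]")
        all_links.extend(clean_links(a.get_attribute("href") for a in anchors))

        # hypothetical pagination: stop when no "next" control is left
        nxt = driver.find_elements(By.CSS_SELECTOR, "a[rel='next']")
        if not nxt:
            break
        nxt[0].click()

    driver.quit()
    print(all_links)
```

I split the filtering into clean_links() so that part can be checked without launching a browser; the rest is only a sketch of the structure I am aiming for.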