
I'm trying to scrape a web site into a CSV file, and there are some text elements that I just can't locate.

I keep getting: "Message: no such element: Unable to locate element"

I'm trying to get the elements with XPath, and I wait for the site to load before I start looking for them.

My line of code that works for other elements on the site (like the H1 title) is:

driver.find_element_by_xpath("//*[contains(@class,'descriptionContainer')]/p[1]").text

and I tried a couple of XPaths to make sure:

//*[contains(@class,'VenueHeroBanner__description')]/p
//*div[contains(@class,'VenueHeroBanner__description')]/p
//*[contains(@class,'descriptionContainer')]/p[1]
//*[@id='venueHeroBanner']/div[2]/div[1]/p

All of them work in the Chrome extension "XPath Helper", but not in my script.

Note that I'm opening the link in a new tab and then trying to get the element, if it matters:

driver.execute_script(f'''window.open("{rest_links[0]}","_blank");''')
CryptoNight77
    This sounds like an [X-Y problem](http://xyproblem.info/). Instead of asking for help with your solution to the problem, edit your question and ask about the actual problem. What are you trying to do? – undetected Selenium Jun 28 '20 at 15:21
  • I just need the entire page in a CSV sheet – CryptoNight77 Jun 28 '20 at 15:36
  • You are seeing [NoSuchElementException](https://stackoverflow.com/questions/47993443/selenium-selenium-common-exceptions-nosuchelementexception-when-using-chrome/47995294#47995294) how would you get the entire page in csv sheet? Please [edit the question](/posts/62619554/edit) to limit it to a specific problem with enough detail to identify an adequate answer. Avoid asking multiple distinct questions at once. See the [How to Ask](https://stackoverflow.com/help/how-to-ask) page for help clarifying this question. – undetected Selenium Jun 28 '20 at 15:40
  • I hope it's clearer for you now – CryptoNight77 Jun 28 '20 at 16:29

2 Answers


The website seems to need some time to load the data.


You should add expected conditions to your script.

Imports:

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

Then use:

WebDriverWait(driver, 20).until(EC.visibility_of_element_located((By.XPATH, "your_XPath"))).text
...

Side note: you should fix your last XPath (it returns 2 elements). For example:

WebDriverWait(driver, 20).until(EC.visibility_of_element_located((By.XPATH, "(//a[contains(@class,'venueInfoLink')])[1]"))).get_attribute("href")
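Putting the pieces together with the new-tab detail from the question, a minimal sketch might look like the following. Note two assumptions not stated in the question or this answer: the `driver.switch_to.window(...)` call (after `window.open`, the driver may still be pointing at the original tab, so lookups would fail there), and the helper name `scrape_description` (hypothetical, for illustration).

```python
# Hypothetical end-to-end sketch: open the venue page in a new tab, switch the
# driver to it, then wait for the description to be visible before reading it.
DESCRIPTION_XPATH = "//*[contains(@class,'descriptionContainer')]"

def scrape_description(driver, url, timeout=20):
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC

    # Open the page in a new tab (as in the question) ...
    driver.execute_script(f'window.open("{url}","_blank");')
    # ... and switch to that tab: find_element only searches the window
    # the driver currently points at (assumption about the failure).
    driver.switch_to.window(driver.window_handles[-1])

    # Explicit wait: block until the element is rendered, then read its text.
    return WebDriverWait(driver, timeout).until(
        EC.visibility_of_element_located((By.XPATH, DESCRIPTION_XPATH))
    ).text
```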
E.Wiest
  • Thanks, but that's not it. I'm waiting 10 seconds in the code, and I actually see the page loading. For example, if you enter this link: [link](https://wolt.com/en/isr/rishon-lezion/restaurant/pizza-prego-rishon-lezion), under the H1 (the restaurant name) there is the category name. I can get the restaurant name any time, but I can't get the category name by any means. It's driving me crazy; I tried a bunch of XPaths and it just doesn't get it: ```//*[contains(@class,'VenueHeroBanner__description')]/p``` – CryptoNight77 Jun 28 '20 at 15:49

I didn't understand why, but the XPath works if you give it the parent XPath of the child that you want to get:

<div class="VenueHeroBanner__descriptionContainer___3Q-jT">
    <p class="VenueHeroBanner__description___1wQwD xh-highlight">Bar & Restaurant</p>
</div>

Not working:

driver.find_element_by_xpath("//*[contains(@class,'descriptionContainer')]/p[1]").text

This one works:

driver.find_element_by_xpath("//*[contains(@class,'descriptionContainer')]").text
[output] "Bar & Restaurant"
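Interestingly, both the failing and the working XPath do match the static snippet above. This is easy to verify offline with lxml (a separate library, not Selenium — an assumption that it's available), which suggests the DOM Selenium sees at lookup time differs from the HTML copied here:

```python
# Offline sanity check of the two XPaths against the HTML snippet above,
# using lxml instead of Selenium.
from lxml import html

snippet = """
<div class="VenueHeroBanner__descriptionContainer___3Q-jT">
    <p class="VenueHeroBanner__description___1wQwD xh-highlight">Bar &amp; Restaurant</p>
</div>
"""

tree = html.fromstring(snippet)

# The "not working" XPath matches this static snippet...
child = tree.xpath("//*[contains(@class,'descriptionContainer')]/p[1]")
print(child[0].text)                     # Bar & Restaurant

# ...and so does the "working" parent XPath.
parent = tree.xpath("//*[contains(@class,'descriptionContainer')]")
print(parent[0].text_content().strip())  # Bar & Restaurant
```

Since both XPaths are valid against the copied markup, the difference in Selenium is likely about what is rendered in the live page at the moment of the lookup, not about XPath syntax.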
CryptoNight77
  • Great! Good job. Maybe the child of the `div` is not evaluated as a `p` element. What happens if you use: `//*[contains(@class,'descriptionContainer')]/*[1]`? Increase the position index (1) if necessary. – E.Wiest Jun 29 '20 at 04:32