I have code that crawls websites using Selenium. It works fine on my Windows desktop, but it fails when I run it on AWS Lambda, and only for certain websites, such as:
- https://despread-creative.notion.site/6f7b61a2f09b41488d63492c665aadf4?v=1f42aaf6a4d546839700383df006b862
- http://korbit.co.kr/market/research
- https://xangle.io/insight/research
import time
from selenium.webdriver.common.by import By

driver.get('http://korbit.co.kr/market/research')
time.sleep(5)
print(driver.find_element(By.XPATH, '//*[@id="list"]/div/div[1]/a/h3').text)
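In case the driver setup matters, this is roughly how the driver gets created inside the Lambda handler. The /opt binary paths and the Chrome flag list follow the usual Lambda-layer conventions, so treat this as a sketch rather than my exact code:

```python
# Flags commonly needed to run headless Chromium in the Lambda sandbox
# (read-only filesystem except /tmp, no GPU, restricted process model).
LAMBDA_CHROME_FLAGS = [
    "--headless",
    "--no-sandbox",
    "--single-process",
    "--disable-dev-shm-usage",
    "--disable-gpu",
    "--window-size=1280,1024",
    "--user-data-dir=/tmp/user-data",
    "--data-path=/tmp/data-path",
    "--disk-cache-dir=/tmp/cache-dir",
    "--homedir=/tmp",
]

def build_driver(binary_path="/opt/headless-chromium",
                 driver_path="/opt/chromedriver"):
    # Imported lazily so the module can be loaded without Selenium installed.
    from selenium import webdriver

    opts = webdriver.ChromeOptions()
    opts.binary_location = binary_path  # assumed layer path, adjust to yours
    for flag in LAMBDA_CHROME_FLAGS:
        opts.add_argument(flag)
    # Selenium 3.x signature: chrome_options (renamed to options in 4.x).
    return webdriver.Chrome(executable_path=driver_path, chrome_options=opts)
```

Usage inside the handler would then be `driver = build_driver()` followed by the `driver.get(...)` call above.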
The simple code above doesn't work. It fails with the message: unable to locate element.
I built my Lambda layer for Selenium with these commands:
1. pip3.7 install -t selenium/python/lib/python3.7/site-packages selenium==3.8.0
2. curl -SL https://chromedriver.storage.googleapis.com/2.37/chromedriver_linux64.zip > chromedriver.zip
3. curl -SL https://github.com/adieuadieu/serverless-chrome/releases/download/v1.0.0-41/stable-headless-chromium-amazonlinux-2017-03.zip > headless-chromium.zip
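After downloading, the binaries still have to be unpacked and zipped into the layer. A minimal sketch of that packaging step (the directory layout is an assumption based on the usual convention that layer contents land under /opt at runtime):

```shell
# Assumed layout: binaries in bin/, Selenium package under python/.
mkdir -p layer/bin
unzip -o chromedriver.zip -d layer/bin
unzip -o headless-chromium.zip -d layer/bin
chmod +x layer/bin/chromedriver layer/bin/headless-chromium

# Include the pip-installed Selenium package from step 1.
cp -r selenium/python layer/python

# Zip from inside the directory so paths in the archive are relative.
(cd layer && zip -r ../selenium-layer.zip .)
```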
Any help would be appreciated.