
I am currently having an issue with my scraper when I set options.add_argument("--headless"); it works perfectly fine when that argument is removed. Could anyone advise how I can achieve the same results in headless mode?

Below is my Python code:

from seleniumwire import webdriver as wireDriver
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.chrome.options import Options

chromedriverPath = '/Users/applepie/Desktop/chromedrivermac'

def scraper(search):
    mit = "https://orbit-kb.mit.edu/hc/en-us/search?utf8=✓&query="  # Empty search on mit site
    mit += "+".join(search) + "&commit=Search"
    results = []

    options = Options()
    options.add_argument("--headless")
    options.add_argument("--window-size=1440,900")
    driver = webdriver.Chrome(options=options, executable_path=chromedriverPath)

    driver.get(mit)
    # Wait 20 seconds for page to load
    timeout = 20
    try:
        WebDriverWait(driver, timeout).until(EC.visibility_of_element_located((By.CLASS_NAME, "header")))
        search_results = driver.find_element_by_class_name("search-results")
        for result in search_results.find_elements_by_class_name("search-result"):
            resultObject = {
                "url": result.find_element_by_class_name('search-result-link').get_attribute("href")
            }
            results.append(resultObject)
        driver.quit()
    except TimeoutException:
        print("Timed out waiting for page to load")
        driver.quit()

    return results

Here is also a screenshot of what I get when I print(driver.page_source) after get():

[screenshot: the printed page source shows a Cloudflare block page]


2 Answers


This screenshot implies that Cloudflare has detected your requests to the website as coming from an automated bot and is subsequently denying you access to the application.


Solution

In these cases, a potential solution is to use undetected-chromedriver in headless mode to initialize the browsing context.

undetected-chromedriver is an optimized Selenium ChromeDriver patch that does not trigger anti-bot services such as Distil Networks / Imperva / DataDome / Botprotect.io. It automatically downloads the driver binary and patches it.

  • Code Block:

    import undetected_chromedriver as uc
    from selenium import webdriver

    # Configure headless mode on standard ChromeOptions
    options = webdriver.ChromeOptions()
    options.headless = True
    # uc.Chrome downloads and patches the driver binary automatically,
    # so no executable_path is needed
    driver = uc.Chrome(options=options)
    driver.get(url)  # url: the page you want to scrape
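For context, here is a minimal sketch of how the question's scraper() could be rewired around undetected-chromedriver; this is an assumption about how you would integrate it (the URL and selectors are taken from the question, and the package installs via pip install undetected-chromedriver):

    import undetected_chromedriver as uc
    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC
    from selenium.common.exceptions import TimeoutException

    def scraper(search):
        # Build the search URL exactly as in the question
        mit = "https://orbit-kb.mit.edu/hc/en-us/search?utf8=✓&query="
        mit += "+".join(search) + "&commit=Search"
        results = []

        options = webdriver.ChromeOptions()
        options.headless = True
        # No executable_path: the patched driver binary is fetched automatically
        driver = uc.Chrome(options=options)

        driver.get(mit)
        try:
            # Wait up to 20 seconds for the results page to render
            WebDriverWait(driver, 20).until(
                EC.visibility_of_element_located((By.CLASS_NAME, "header")))
            search_results = driver.find_element_by_class_name("search-results")
            for result in search_results.find_elements_by_class_name("search-result"):
                results.append({
                    "url": result.find_element_by_class_name("search-result-link").get_attribute("href")
                })
        except TimeoutException:
            print("Timed out waiting for page to load")
        finally:
            driver.quit()
        return results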


  • I'm getting the following error: `ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1122) ` – ApplePie Jan 06 '21 at 14:08
  • @ApplePie `SSLCertVerificationError` is a certificate issue, which is much more granular than this generic question about accessing the application in `headless` mode. Can you raise a new question as per your new requirement? Stack Overflow contributors will be happy to help you out. – undetected Selenium Jan 06 '21 at 18:59

I know this is a super old question, but recently (2023) Chrome upgraded its headless mode, which now allows headless to work with extensions.

See this webpage for more details.

Just replace the headless option above with the one below.

options.add_argument('--headless=new')

According to that page, if you just use --headless, you still get the old implementation; you have to explicitly opt in to the new one for this to work.
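Applied to the options from the question, that is a one-line change; a sketch, reusing the question's imports (the --load-extension line is commented out and uses a hypothetical path, purely to illustrate the new extension support):

    options = Options()
    options.add_argument("--headless=new")  # opt in to the new headless implementation
    options.add_argument("--window-size=1440,900")
    # Extensions now work in the new headless mode, e.g.:
    # options.add_argument("--load-extension=/path/to/unpacked/extension")
    driver = webdriver.Chrome(options=options)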
