
I have a Selenium project and I want to use the Crawlera proxy with it. I already have a Crawlera API key.

        from selenium import webdriver
        from selenium.webdriver.common.proxy import Proxy, ProxyType
        from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

        # Point Selenium at the local headless proxy listening on port 3128
        headless_proxy = "127.0.0.1:3128"
        proxy = Proxy({
            'proxyType': ProxyType.MANUAL,
            'httpProxy': headless_proxy,
            'ftpProxy' : headless_proxy,
            'sslProxy' : headless_proxy,
            'noProxy'  : ''
        })

        user_agent = ('Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_3) AppleWebKit/537.36 (KHTML, like Gecko) '
                      'Chrome/80.0.3987.132 Safari/537.36')

        chrome_option = webdriver.ChromeOptions()
        chrome_option.add_argument('--no-sandbox')
        chrome_option.add_argument('--disable-dev-shm-usage')
        chrome_option.add_argument('--ignore-certificate-errors')
        chrome_option.add_argument('--disable-blink-features=AutomationControlled')
        chrome_option.add_argument(f'user-agent={user_agent}')
        chrome_option.headless = True

        # Disable image loading to speed up page loads
        prefs = {"profile.managed_default_content_settings.images": 2}
        chrome_option.add_experimental_option("prefs", prefs)

        capabilities = dict(DesiredCapabilities.CHROME)
        proxy.add_to_capabilities(capabilities)

        driver = webdriver.Chrome(desired_capabilities=capabilities, options=chrome_option)
        driver.set_page_load_timeout(600)
        #driver = webdriver.Chrome(options=chrome_option)

So how can I set the API key? I also want to deploy the code to Scrapinghub. How can I use Crawlera with Selenium so that it works correctly on Scrapy Cloud? Please help me. Thanks.

Patrick Klein

0 Answers
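For reference, one common way to supply the API key is to authenticate against the Crawlera endpoint directly instead of an unauthenticated local proxy. Chrome does not accept proxy credentials on `--proxy-server`, so the minimal sketch below uses the third-party selenium-wire package, which is an assumption on top of the question's code; the endpoint `proxy.crawlera.com:8010` and the placeholder key are illustrative. Crawlera authenticates with the API key as the proxy username and an empty password. If the `127.0.0.1:3128` setup is kept instead, the API key is configured on the local crawlera-headless-proxy process itself, not in Selenium.

    # Hedged sketch, not the question's code: uses selenium-wire (pip install selenium-wire)
    # to pass authenticated proxy credentials that plain ChromeDriver cannot send.
    from seleniumwire import webdriver  # drop-in replacement for selenium's webdriver

    CRAWLERA_API_KEY = '<YOUR_API_KEY>'  # placeholder, not a real key
    crawlera = f'http://{CRAWLERA_API_KEY}:@proxy.crawlera.com:8010'  # key as username, empty password

    seleniumwire_options = {
        'proxy': {
            'http': crawlera,
            'https': crawlera,
            'no_proxy': 'localhost,127.0.0.1',
        }
    }

    chrome_option = webdriver.ChromeOptions()
    chrome_option.add_argument('--ignore-certificate-errors')  # Crawlera re-signs TLS traffic
    chrome_option.headless = True

    driver = webdriver.Chrome(options=chrome_option,
                              seleniumwire_options=seleniumwire_options)
    driver.get('https://example.com')
    driver.quit()

selenium-wire is used here only because it can inject the proxy credentials on Chrome's behalf; whichever process ends up holding the key (selenium-wire or a local headless proxy) also has to run in the environment where the code is deployed.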