
I'm trying to create a dryscrape session on a Mac. Here is the code I'm trying to run:

import dryscrape
session = dryscrape.Session(base_url = 'http://google.com')

But when I run it I get this permission error:

Traceback (most recent call last):
  File "<ipython-input-37-5e3204f25ebb>", line 3, in <module>
    session = dryscrape.Session(base_url = 'http://google.com')
  File "/Users/MyName/anaconda/lib/python3.5/site-packages/dryscrape/session.py", line 22, in __init__
    self.driver = driver or DefaultDriver()
  File "/Users/MyName/anaconda/lib/python3.5/site-packages/dryscrape/driver/webkit.py", line 30, in __init__
    super(Driver, self).__init__(**kw)
  File "/Users/MyName/anaconda/lib/python3.5/site-packages/webkit_server.py", line 230, in __init__
    self.conn = connection or ServerConnection()
  File "/Users/MyName/anaconda/lib/python3.5/site-packages/webkit_server.py", line 507, in __init__
    self._sock = (server or get_default_server()).connect()
  File "/Users/MyName/anaconda/lib/python3.5/site-packages/webkit_server.py", line 450, in get_default_server
    _default_server = Server()
  File "/Users/MyName/anaconda/lib/python3.5/site-packages/webkit_server.py", line 416, in __init__
    stderr = subprocess.PIPE)
  File "/Users/MyName/anaconda/lib/python3.5/subprocess.py", line 947, in __init__
    restore_signals, start_new_session)
  File "/Users/MyName/anaconda/lib/python3.5/subprocess.py", line 1551, in _execute_child
    raise child_exception_type(errno_num, err_msg)

PermissionError: [Errno 13] Permission denied

I've tried running it in the terminal with sudo, but I still get the same error. Thanks for helping! Note: I will upvote all answers, and accept the best one.

whackamadoodle3000

2 Answers


I have this working:

# scrape.py
import dryscrape

s = dryscrape.Session()
s.visit("https://www.google.com/search?q={}".format('query'))
print(s.body().encode("utf-8"))

That should print the page's HTML. I redirect the output to a file:

python scrape.py > results.html

and then open results.html in a browser to check the result.
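
If you'd rather keep everything in Python, here's a minimal sketch along the same lines (the filename save_results.py is just a placeholder) that writes the page to results.html directly instead of redirecting in the shell:

# save_results.py
import dryscrape

s = dryscrape.Session()
s.visit("https://www.google.com/search?q={}".format('query'))

# write the rendered page straight to a file, keeping the encoding explicit
with open("results.html", "wb") as f:
    f.write(s.body().encode("utf-8"))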

infosecDaemon

This is a very basic example from the dryscrape documentation:

import dryscrape
import sys

if 'linux' in sys.platform:
    # start xvfb in case no X is running. Make sure xvfb 
    # is installed, otherwise this won't work!
    dryscrape.start_xvfb()

search_term = 'dryscrape'

# set up a web scraping session
sess = dryscrape.Session(base_url = 'http://google.com')

# we don't need images
sess.set_attribute('auto_load_images', False)

# visit homepage and search for a term
sess.visit('/')
q = sess.at_xpath('//*[@name="q"]')
q.set(search_term)
q.form().submit()

# extract all links
for link in sess.xpath('//a[@href]'):
    print(link['href'])

# save a screenshot of the web page
sess.render('google.png')
print("Screenshot written to 'google.png'")

Aaron Zolla