
I built a scraper that retrieves concert data from Songkick using their API. However, retrieving all the data for these artists takes a long time. After roughly 15 hours the script was still running, but the JSON file had stopped changing. I interrupted the script and tried to access my data with TinyDB, but I get the following error. Does anybody know why this is happening?

Error:

('cannot fetch url', 'http://api.songkick.com/api/3.0/artists/8689004/gigography.json?apikey=###########&min_date=2015-04-25&max_date=2017-03-01')
8961344


Traceback (most recent call last):
  File "C:\Users\rmlj\Dropbox\Data\concerts.py", line 42, in <module>
    load_events()
  File "C:\Users\rmlj\Dropbox\Data\concerts.py", line 27, in load_events
    print(artist)
  File "C:\Python27\lib\idlelib\PyShell.py", line 1356, in write
    return self.shell.write(s, self.tags)
KeyboardInterrupt

>>> mydat = db.all()

Traceback (most recent call last):
  File "<pyshell#0>", line 1, in <module>
    mydat = db.all()
  File "C:\Python27\lib\site-packages\tinydb\database.py", line 304, in all
    return list(itervalues(self._read()))
  File "C:\Python27\lib\site-packages\tinydb\database.py", line 277, in _read
    return self._storage.read()
  File "C:\Python27\lib\site-packages\tinydb\database.py", line 31, in read
    raw_data = (self._storage.read() or {})[self._table_name]
  File "C:\Python27\lib\site-packages\tinydb\storages.py", line 105, in read
    return json.load(self._handle)
  File "C:\Python27\lib\json\__init__.py", line 287, in load
    return loads(fp.read(),
MemoryError

below you can find my script

import requests

from tinydb import TinyDB

db = TinyDB('events.json')


def load_events():
    MIN_DATE = "2015-04-25"
    MAX_DATE = "2017-03-01"
    API_KEY = "###############"
    url_base = 'http://api.songkick.com/api/3.0/artists/{}/gigography.json?apikey={}&min_date={}&max_date={}'
    with open('artistid.txt', 'r') as f:
        for line in f:
            artist = line.strip()
            print(artist)
            url = url_base.format(artist, API_KEY, MIN_DATE, MAX_DATE)
            # url = u'http://api.songkick.com/api/3.0/search/artists.json?query=' + artist + '&apikey=###############'
            try:
                r = requests.get(url)
                resp = r.json()
                if resp['resultsPage']['totalEntries']:
                    results = resp['resultsPage']['results']['event']
                    for event in results:
                        print(event)
                        db.insert(event)
            except Exception:
                print('cannot fetch url', url)


load_events()
db.close()
print("End of script")
MRJJ17
    You've removed the API key from your code, but it's visible on the first line of your error. – alxwrd May 30 '17 at 08:56

1 Answer


MemoryError is a built-in Python exception (https://docs.python.org/3.6/library/exceptions.html#MemoryError), so it looks like the process ran out of memory; this isn't really related to Songkick. TinyDB's default JSON storage keeps the whole database in a single JSON document, so `db.all()` has to parse the entire file at once, which fails when the file has grown very large.

This question probably has the information you need to debug this: How to debug a MemoryError in Python? Tools for tracking memory use?
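One way to sidestep both the slow writes and the MemoryError on read is to skip the single JSON document entirely and append each event as one JSON object per line (the JSON Lines format), which can then be read back one record at a time. This is a minimal sketch under that assumption; the function names and the `events.jsonl` file name are illustrative, not from the original post:

```python
import json


def append_event(event, path="events.jsonl"):
    """Append a single event dict as one JSON line."""
    with open(path, "a") as f:
        f.write(json.dumps(event) + "\n")


def iter_events(path="events.jsonl"):
    """Yield events one at a time without loading the whole file."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)
```

In the scraper you would call `append_event(x)` instead of `db.insert(x)`, and later process the results with `for event in iter_events(): ...`, which keeps memory use constant regardless of file size.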

grahamlyons