The code below is an extract from a previous question of mine. The aim of this Python code is to create a Twitter crawler, whereby I plan to run the crawler for 60 seconds at the start of every hour. I am going to do this around the clock for 7 days to look at trends within tweets.
Currently this code runs until it is manually stopped:
    import time
    import threading

    from tweepy.streaming import StreamListener


    class listener(StreamListener):

        def on_data(self, data):
            try:
                print data
                # append the raw tweet data to the CSV file
                saveFile = open('twitDB.csv', 'a')
                saveFile.write(data)
                saveFile.write('\n')
                saveFile.close()
                return True
            except BaseException, e:
                print 'failed ondata,', str(e)
                time.sleep(5)

        def on_error(self, status):
            print status
            time.sleep(5)