I have multiple URLs to scrape stored in a CSV file, where each row is a separate URL, and I'm using this code to run it:
```python
def start_requests(self):
    with open('csvfile', 'rb') as f:
        list = []
        for line in f.readlines():
            array = line.split(',')
            url = array[9]
            list.append(url)
        list.pop(0)
        for url in list:
            if url != "":
                yield scrapy.Request(url=url, callback=self.parse)
```
It gives me the following error: `IndexError: list index out of range`. Can anyone help me correct this, or suggest another way to use that csv file?
Edit: the csv file looks like this:

```
http://example.org/page1
http://example.org/page2
```

There are 9 such rows.
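For reference, here is a minimal sketch of how a one-URL-per-row file like the one shown above could be read with Python's `csv` module instead of manual splitting. This is only an assumption about the file layout (one column, no header); `io.StringIO` stands in for the real file handle:

```python
import csv
import io

# Stand-in for open('csvfile'): one URL per row, no header
# (an assumption based on the example rows above).
sample = io.StringIO("http://example.org/page1\nhttp://example.org/page2\n")

urls = []
for row in csv.reader(sample):
    # Each row is a list with a single column holding the URL;
    # skip blank lines so empty strings never reach the list.
    if row and row[0].strip():
        urls.append(row[0].strip())

print(urls)
# ['http://example.org/page1', 'http://example.org/page2']
```

With this layout each row has only one column, so indexing past `row[0]` would raise the same `IndexError` seen above.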