
I am building a simple scraper that takes URLs from a CSV file and then crawls them. The problem is that Scrapy raises a file-not-found error even though the file exists in the same directory as the spider:

FileNotFoundError: [Errno 2] No such file or directory: 'urlList.csv'

When I read the file from a normal (non-Scrapy) script it works fine. This is my first time using Scrapy, so I am struggling to work out what is going wrong.
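For reference, this is roughly the standalone reading logic that works with no error (a sketch; it assumes `urlList.csv` is semicolon-delimited with `URL`, `ID`, and `Year` columns, as in the spider below):

```python
import csv

def read_urls(path="urlList.csv"):
    # Same parsing as the spider: semicolon-delimited rows read
    # into dicts keyed by the header row (URL, ID, Year).
    with open(path, newline="") as f:
        return list(csv.DictReader(f, delimiter=";"))
```

Run from the project directory, `read_urls()` returns the rows without any `FileNotFoundError`.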

Here's my code:

import scrapy
import csv

class infoSpider(scrapy.Spider):
    name = 'info2'

    url_file = "urlList.csv"

    debug = False

    def start_requests(self):
  
        # newline="" is the mode the csv module recommends; the old
        # "U" flag is deprecated and was removed in Python 3.11
        with open(self.url_file, newline="") as f:
            reader = csv.DictReader(f, delimiter=';')
            for record in reader:
                yield scrapy.Request(
                    url=record["URL"],
                    callback=self.parse_event,
                    meta={'ID': record["ID"], 'Date': record["Year"]},
                )

                if self.debug:
                    break  # DEBUG
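One thing I suspect is that `scrapy crawl` is launched from a different working directory, so the relative path never resolves. A possible workaround (just a sketch, not verified) would be to anchor the path to the spider module itself instead of the process working directory:

```python
from pathlib import Path

def csv_path(name="urlList.csv"):
    # Build an absolute path next to this module, so it no longer
    # depends on where the scrapy process was started from.
    return Path(__file__).resolve().parent / name
```

The spider could then use `url_file = csv_path()` instead of the bare filename, but I am not sure whether this is the idiomatic Scrapy way to ship data files with a spider.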


    
Robsmith

0 Answers