
I'm currently having some issues trying to adapt my Scrapy program. What I'm trying to do is make a different parser run depending on the "site" I'm on.

Currently I have this start request:

def start_requests(self):
    txtfile = open('productosABuscar.txt', 'r')
    keywords = txtfile.readlines()
    txtfile.close()
    for keyword in keywords:
        yield Request(self.search_url.format(keyword))

I want to find a way to call a different parser to extract the data from the page, depending on which keyword I get from the txt file.

Is there a way to accomplish this?

Manuel
    You could use a dictionary to map strings to function calls, as described [here](https://stackoverflow.com/a/11479840/102937). – Robert Harvey Dec 20 '18 at 17:54
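As a small illustration of the dictionary-dispatch idea from the linked answer (the handler names here are made up, not from the question):

```python
# Map strings to functions, then look the function up and call it.
# handle_books / handle_music are hypothetical handlers for illustration.
def handle_books(url):
    return f"parsing books page: {url}"

def handle_music(url):
    return f"parsing music page: {url}"

handlers = {
    'books': handle_books,
    'music': handle_music,
}

# Dispatch: pick the function by keyword, then call it with the URL.
result = handlers['books']('http://example.com/books')
print(result)  # parsing books page: http://example.com/books
```

The same pattern works with bound methods (e.g. `self.parse_keyword1`) as the dictionary values, which is what the answer below the question relies on.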

1 Answer


What about matching the callback depending on the keyword you get inside `start_requests`? Something like:

def start_requests(self):
    keyword_callback = {
        'keyword1': self.parse_keyword1,
        'keyword2': self.parse_keyword2,
    }

    txtfile = open('productosABuscar.txt', 'r')
    keywords = txtfile.readlines()
    txtfile.close()

    for keyword in keywords:
        keyword = keyword.strip()  # readlines() keeps the trailing '\n'
        yield Request(self.search_url.format(keyword), callback=keyword_callback[keyword])
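Two details worth handling when adapting this: `readlines()` keeps the trailing `'\n'` on each keyword (so the dictionary lookup would miss and the formatted URL would contain a newline), and a keyword with no entry in the dictionary would raise a `KeyError`. A minimal pure-Python sketch of the lookup logic, with strings standing in for the spider's callback methods:

```python
import io

def load_keywords(f):
    # Strip the trailing '\n' that readlines() keeps and skip blank lines,
    # so dictionary lookups on the keyword actually match.
    return [line.strip() for line in f if line.strip()]

# Simulate productosABuscar.txt with an in-memory file.
fake_file = io.StringIO("keyword1\nkeyword2\nsomething_else\n")
keywords = load_keywords(fake_file)

# In the real spider these values would be bound methods like
# self.parse_keyword1; strings stand in here so the sketch runs alone.
keyword_callback = {
    'keyword1': 'parse_keyword1',
    'keyword2': 'parse_keyword2',
}

# .get() with a default avoids a KeyError for unmapped keywords.
resolved = {kw: keyword_callback.get(kw, 'parse') for kw in keywords}
print(resolved)
# {'keyword1': 'parse_keyword1', 'keyword2': 'parse_keyword2', 'something_else': 'parse'}
```

In the spider itself, `keyword_callback.get(keyword, self.parse)` gives every request a valid callback, so unmapped keywords fall through to the default `parse` method instead of crashing `start_requests`.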
eLRuLL
  • I think this is the way to go. The problem is that, when I use this method, the spider is not "entering" the link associated with that keyword, so when the program tries to parse, no info comes out. I don't know if I explained myself correctly. If you need more information on how I built the program, I have no problem updating the post. Thanks! – Manuel Dec 20 '18 at 20:11