I have a Scrapy crawler which works fine. I now want to use its parse function to parse a given URL. There is a command-line utility that does this for a single URL:
scrapy parse <options> <url>
But I want to do this from inside my Python code (and no, starting a new process for every URL is not an option).
From what I can figure, what I need is essentially a way to create a Response for a given URL. Since the response Scrapy passes to its callbacks is not the same as a plain HTTPResponse, I am not sure how to obtain one given only a URL.
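For concreteness, it looks like a Scrapy response can at least be constructed by hand when the body is already in hand (a minimal sketch, assuming scrapy.http.HtmlResponse; the hard-coded body is just for illustration), but that still leaves the step of getting from a URL to such a response:

```python
from scrapy.http import HtmlResponse

# A Scrapy response can be built manually if the body is already known --
# but this is not the same as downloading a given URL through Scrapy.
response = HtmlResponse(
    url="http://example.com",
    body=b"<html><body><a href='/page'>link</a></body></html>",
    encoding="utf-8",
)
print(response.xpath("//a/@href").extract())  # ['/page']
```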
I did find the method make_requests_from_url, which does the obvious, but I am not sure how to get from a Scrapy Request to a Scrapy Response that I can pass to my parse function.
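To show exactly where I am stuck, here is a minimal sketch (TestSpider is a placeholder for my actual spider):

```python
import scrapy

class TestSpider(scrapy.Spider):
    name = "test"

    def parse(self, response):
        # stand-in for my real parsing logic
        yield {"url": response.url}

spider = TestSpider()
request = spider.make_requests_from_url("http://example.com")

# 'request' is a scrapy.Request, but parse() expects a Scrapy Response.
# The step below is the part I don't know how to do without starting
# a whole crawl:
# response = ???          # how to turn the Request into a Response?
# items = list(spider.parse(response))
```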