
I have sites and I want to scrape their logos.

PROBLEM:

I have an outer class in which I save all the data about the logos (URLs, links, and so on); everything works correctly:

class PatternUrl:

    def __init__(self, path_to_img="", list_of_conditionals=[]):
        self.url_pattern = ""
        self.file_url = ""
        self.path_to_img = path_to_img
        self.list_of_conditionals = list_of_conditionals

    def find_obj(self, response):
        for el in self.list_of_conditionals:
            if el:
                if self.path_to_img:
                    url = response
                    file_url = str(self.path_to_img)
                    print(file_url)
                    yield LogoScrapeItem(url=url, file_url=file_url)

class LogoSpider(scrapy.Spider):
    ....

    def parse(self, response):
        a = PatternUrl(
            response.css("header").xpath("//a[@href='" + response.url + "/']/img/@src").extract_first(),
            [response.css("header").xpath("//a[@href='" + response.url + "/']")],
        )
        a.find_obj(response)

The problem is with the yield line:

yield LogoScrapeItem(url=url, file_url=file_url)

For some reason, when I comment out this line, all the lines in this method are executed.

Output when the yield is commented out:

#yield LogoScrapeItem(url=url, file_url=file_url)

2017-12-25 11:09:32 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://time.com> (referer: None)
data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAKQAAAAyCAYAAAD........
2017-12-25 11:09:32 [scrapy.core.engine] INFO: Closing spider (finished)
2017-12-25 11:09:32 [scrapy.statscollectors] INFO: Dumping Scrapy stats:

Output when the yield is not commented out:

yield LogoScrapeItem(url=url, file_url=file_url)

2017-12-25 11:19:28 [scrapy.core.engine] INFO: Spider opened
2017-12-25 11:19:28 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-12-25 11:19:28 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6024
2017-12-25 11:19:28 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://git-scm.com/robots.txt> (referer: None)
2017-12-25 11:19:28 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://git-scm.com/docs/git-merge> (referer: None)
2017-12-25 11:19:28 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://time.com/robots.txt> (referer: None)
2017-12-25 11:19:29 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://time.com> (referer: None)
2017-12-25 11:19:29 [scrapy.core.engine] INFO: Closing spider (finished)
2017-12-25 11:19:29 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 926,

QUESTION:

Why is the function not executed when there is a yield statement?

2 Answers


Your find_obj method is actually a generator because of the yield keyword. For a thorough explanation of generators and yield, I recommend this Stack Overflow question.

To get results from your method, you should call it in a manner similar to this:

for logo_scrape_item in a.find_obj(response):
    # perform an action on your logo_scrape_item
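
Applied to the spider from the question, that loop lives inside parse so that Scrapy's engine actually receives the items. A minimal sketch, assuming the PatternUrl class and LogoScrapeItem item shown above (the spider name and start_urls are placeholders):

import scrapy

class LogoSpider(scrapy.Spider):
    name = "logo"                      # placeholder name
    start_urls = ["http://time.com"]   # placeholder start URL

    def parse(self, response):
        href = "//a[@href='" + response.url + "/']"
        a = PatternUrl(
            response.css("header").xpath(href + "/img/@src").extract_first(),
            [response.css("header").xpath(href)],
        )
        # Calling find_obj() only creates the generator; iterating it runs the body.
        # Re-yield each item so Scrapy collects it.
        for logo_scrape_item in a.find_obj(response):
            yield logo_scrape_item
        # equivalently: yield from a.find_obj(response)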
Glenn D.J.

yield makes a function return a generator.

It looks like you should run your find_obj as:

for x in a.find_obj(response):

instead.
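
The reason nothing inside find_obj ran is that calling a generator function only creates a generator object; the body executes only when the generator is iterated. A minimal standalone sketch (the demo function is purely illustrative, not from the question):

def demo():
    print("body runs")  # executed only when the generator is iterated
    yield 1

g = demo()      # prints nothing: this just creates the generator object
print(list(g))  # iterating it prints "body runs" and then [1]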

For details on yield, please see What does the "yield" keyword do?

Gnudiff