
I have an array of links that defines the structure of a website. While downloading images from these links, I want to place the downloaded images in a folder structure that mirrors the website's structure, not just rename them (as answered in Scrapy image download how to use custom filename).

My code for this is as follows:

import os
from urlparse import urlparse

from scrapy.http import Request
from scrapy.contrib.pipeline.images import ImagesPipeline


class MyImagesPipeline(ImagesPipeline):
    """Custom image pipeline to rename images as they are being downloaded."""
    page_url = None

    def image_key(self, url):
        page_url = self.page_url
        image_guid = url.split('/')[-1]
        return '%s/%s/%s' % (page_url, image_guid.split('_')[0], image_guid)

    def get_media_requests(self, item, info):
        # e.g. http://store.abc.com/b/n/s/m
        os.system('mkdir ' + item['sku'][0].encode('ascii', 'ignore'))
        # The parent page's URL is stored in the item's start_url field.
        self.page_url = urlparse(item['start_url']).path
        for image_url in item['image_urls']:
            yield Request(image_url)

It creates the required folder structure, but when I go deeper into the folders I see that the files have been misplaced.

I suspect this happens because get_media_requests and image_key execute asynchronously, so the value of page_url changes before image_key uses it.


2 Answers


You are absolutely right: asynchronous item processing prevents sharing state via self within the pipeline. You will have to store the path in each Request's meta and override a few more methods (untested; the ... marks the rest of the stock method bodies, which stay unchanged):

def image_key(self, url, page_url):
    image_guid = url.split('/')[-1]
    return '%s/%s/%s' % (page_url, image_guid.split('_')[0], image_guid)

def get_media_requests(self, item, info):
    for image_url in item['image_urls']:
        yield Request(image_url, meta=dict(page_url=urlparse(item['start_url']).path))

def get_images(self, response, request, info):
    key = self.image_key(request.url, request.meta.get('page_url'))
    ...

def media_to_download(self, request, info):
    ...
    key = self.image_key(request.url, request.meta.get('page_url'))
    ...

def media_downloaded(self, response, request, info):
    ...
    try:
        key = self.image_key(request.url, request.meta.get('page_url'))
    ...
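
For newer Scrapy versions the same idea is simpler, because ImagesPipeline exposes an overridable file_path() method (which, since Scrapy 2.4, also receives the item). A minimal sketch, assuming the same start_url and image_urls item fields as in the question; the class name is just illustrative, and Scrapy creates the intermediate folders itself, so no mkdir call is needed:

from urllib.parse import urlparse

from scrapy import Request
from scrapy.pipelines.images import ImagesPipeline


class FolderTreeImagesPipeline(ImagesPipeline):
    """Save each image under <page path>/<name prefix>/<filename>."""

    def get_media_requests(self, item, info):
        # Carry the parent page's path on each request rather than on self,
        # so concurrently processed items cannot overwrite each other's value.
        page_path = urlparse(item['start_url']).path.strip('/')
        for image_url in item['image_urls']:
            yield Request(image_url, meta={'page_path': page_path})

    def file_path(self, request, response=None, info=None, *, item=None):
        # The returned path is relative to IMAGES_STORE.
        image_guid = request.url.split('/')[-1]
        return '%s/%s/%s' % (request.meta['page_path'],
                             image_guid.split('_')[0], image_guid)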

This Scrapy pipeline extension provides an easy way to store downloaded files in a folder tree.

Install it first:

pip install scrapy_folder_tree

and then add the pipeline to your settings:

ITEM_PIPELINES = {
    'scrapy_folder_tree.ImagesHashTreePipeline': 300
}
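
As with any Scrapy images pipeline, IMAGES_STORE must also point at the root download directory (the path below is just a placeholder):

IMAGES_STORE = '/path/to/image/store'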

Disclaimer: I'm the author of scrapy-folder-tree
