Scrapy doesn't download files with File


I have no clue what's happening with my code. I wrote my spider and loaded the items as described at https://docs.scrapy.org/en/latest/topics/media-pipeline.html, but Scrapy doesn't download any files:

2022-07-19 01:35:09 [scrapy.core.engine] INFO: Spider opened
2022-07-19 01:35:09 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2022-07-19 01:35:09 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2022-07-19 01:35:09 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (301) to <GET https://www.tp-link.com/br/support/download/> from <GET https://www.tp-link.com/br/support/download>
2022-07-19 01:35:09 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.tp-link.com/br/support/download/> (referer: None)
2022-07-19 01:35:10 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.tp-link.com/br/support/download/archer-c7/> (referer: https://www.tp-link.com/br/support/download/)
2022-07-19 01:35:10 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.tp-link.com/br/support/download/archer-c7/v4/> (referer: https://www.tp-link.com/br/support/download/archer-c7/)
2022-07-19 01:35:10 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.tp-link.com/br/support/download/archer-c7/v4/>
{'file_urls': ['https://static.tp-link.com/2019/201912/20191206/Archer C7(US)_V4_190411.zip',
               'https://static.tp-link.com/2018/201804/20180428/Archer C7(US)_V4_180425.zip',
               'https://static.tp-link.com/2017/201712/20171221/Archer C7(US)_V4_171101.zip'],
 'model': 'Archer C7 ',
 'vendor': 'TP-Link',
 'version': 'V4'}
2022-07-19 01:35:10 [scrapy.core.engine] INFO: Closing spider (finished)
2022-07-19 01:35:10 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 1082,
 'downloader/request_count': 4,
 'downloader/request_method_count/GET': 4,
 'downloader/response_bytes': 77004,
 'downloader/response_count': 4,
 'downloader/response_status_count/200': 3,
 'downloader/response_status_count/301': 1,
 'elapsed_time_seconds': 1.45197,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2022, 7, 19, 1, 35, 10, 957079),
 'httpcompression/response_bytes': 667235,
 'httpcompression/response_count': 3,
 'item_scraped_count': 1,
 'log_count/DEBUG': 6,
 'log_count/INFO': 10,
 'memusage/max': 93999104,
 'memusage/startup': 93999104,
 'request_depth_max': 2,
 'response_received_count': 3,
 'scheduler/dequeued': 4,
 'scheduler/dequeued/memory': 4,
 'scheduler/enqueued': 4,
 'scheduler/enqueued/memory': 4,
 'start_time': datetime.datetime(2022, 7, 19, 1, 35, 9, 505109)}
2022-07-19 01:35:10 [scrapy.core.engine] INFO: Spider closed (finished)

My Item class: I defined some custom fields, but file_urls and files are declared exactly as in the Scrapy documentation:

class FirmwareItem(scrapy.Item):
    # define the fields for your item here like:
    vendor = scrapy.Field()
    model = scrapy.Field()
    version = scrapy.Field()
    file_urls = scrapy.Field()
    files = scrapy.Field()

Parse method: here I extract the zip file links from the page and load them as a list into 'file_urls':

def parseVersion(self, response):
    firmware = FirmwareItem()
    firmware['vendor'] = "TP-Link"
    firmware['model'] = response.xpath('//em[@id="model-version-name"]/text()').get()
    firmware['version'] = response.xpath('//em[@id="model-version-name"]/span/text()').get()
    urls = response.xpath('//div[@data-id="Firmware"]//a[@]/@href').getall()

    # the download URLs contain literal spaces, which must be percent-encoded
    firmware['file_urls'] = [url.replace(' ', '%20') for url in urls]
    yield firmware
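A more robust alternative to a bare str.replace is urllib.parse.quote, which percent-encodes spaces (and any other unsafe characters) while leaving the URL structure intact. A minimal sketch; the helper name is my own:

```python
from urllib.parse import quote

def encode_file_url(url: str) -> str:
    """Percent-encode unsafe characters in a download URL."""
    # safe= keeps the scheme separator, path slashes, and parentheses unescaped
    return quote(url, safe=':/()')

encode_file_url('https://static.tp-link.com/x/Archer C7(US)_V4.zip')
# -> 'https://static.tp-link.com/x/Archer%20C7(US)_V4.zip'
```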

Settings: enabling the item pipeline and setting the download directory:

ITEM_PIPELINES = {
    'router.pipelines.RouterPipeline': 300,
}
FILES_STORE = "downloaded"

Pipeline: kept as the default pass-through:

from itemadapter import ItemAdapter

class RouterPipeline:
    def process_item(self, item, spider):
        return item

CodePudding user response:

You are almost there... Make sure you have all of these settings in your settings.py:

ITEM_PIPELINES = {
    'scrapy.pipelines.files.FilesPipeline': 1,
    'router.pipelines.RouterPipeline': 300,
}

FILES_STORE = '/path/to/save/directory'  # set your preferred download directory

FILES_URLS_FIELD = 'file_urls'  # must match the item field name
FILES_RESULT_FIELD = 'files'    # must match the item field name
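With the FilesPipeline enabled, downloaded files land under FILES_STORE with names derived from the SHA-1 hash of the request URL. A stdlib sketch approximating that default naming scheme (the helper name is my own), which can help you locate the results on disk:

```python
import hashlib
import os
from urllib.parse import urlparse

def default_file_path(url: str) -> str:
    """Approximate the path the default FilesPipeline stores a file under."""
    media_guid = hashlib.sha1(url.encode('utf-8')).hexdigest()  # hash of the URL
    media_ext = os.path.splitext(urlparse(url).path)[1]         # keep the extension
    return f'full/{media_guid}{media_ext}'

default_file_path('https://static.tp-link.com/x/Archer%20C7(US)_V4.zip')
# -> 'full/<40-hex-digit sha1>.zip'
```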