Scrapy doesn't follow new requests


I have written this code:

import scrapy
import logging

curl_command = "curl blah blah"

class MySpider(scrapy.Spider):
    name = 'myspider'
    allowed_domains = ['some_domain', ]
    start_urls = ['someurl', ]

    postal_codes = ['some_postal_code', ]

    def start_requests(self):
        for postal_code in self.postal_codes:
            curl_req = scrapy.Request.from_curl(curl_command=curl_command)
            curl_req._cb_kwargs = {'page': 0}

            yield curl_req

    def parse(self, response, **kwargs):
        cur_page = kwargs.get('page', 1)

        logging.info("Doing some logic")
        num_pages = do_some_logic()
        yield mySpiderItem

        if cur_page < num_pages:
            logging.info("New Request")
            curl_req = scrapy.Request.from_curl(curl_command=curl_command)
            curl_req._cb_kwargs = {'page': cur_page + 1}

            yield curl_req
            yield scrapy.Request(url="https://jsonplaceholder.typicode.com/posts")

Now the problem is that the parse method gets called only once. In other words, the log looks something like this:

Doing some logic
New Request
Spider closing

I don't get what's happening to the new request. Logically, the new request should also lead to a "Doing some logic" log, but for some reason it doesn't.

Am I missing something here? Is there another way to yield a new request?

CodePudding user response:

It's kind of hard to know exactly what the problem is from the code sample, but my guess is that you don't use the page number in the request.

As an example, I modified your code for another website:

import scrapy
import logging


curl_command = 'curl "https://scrapingclub.com/exercise/list_basic/"'


class MySpider(scrapy.Spider):
    name = 'myspider'
    allowed_domains = ['scrapingclub.com']
    #start_urls = ['someurl', ]

    postal_codes = ['some_postal_code', ]

    def start_requests(self):
        for postal_code in self.postal_codes:
            curl_req = scrapy.Request.from_curl(curl_command=curl_command, dont_filter=True)
            curl_req._cb_kwargs = {'page': 1}

            yield curl_req

    def parse(self, response, **kwargs):
        cur_page = kwargs.get('page', 1)

        logging.info("Doing some logic")
        #num_pages = do_some_logic()
        #yield mySpiderItem
        num_pages = 4
        if cur_page < num_pages:
            logging.info("New Request")
            curl_req = scrapy.Request.from_curl(curl_command=f'{curl_command}?page={cur_page + 1}', dont_filter=True)
            curl_req._cb_kwargs = {'page': cur_page + 1}
            yield curl_req
            yield scrapy.Request(url="https://jsonplaceholder.typicode.com/posts")

Output:

[scrapy.core.engine] DEBUG: Crawled (200) <GET https://scrapingclub.com/exercise/list_basic/> (referer: None)
[root] INFO: Doing some logic
[root] INFO: New Request
[scrapy.spidermiddlewares.offsite] DEBUG: Filtered offsite request to 'jsonplaceholder.typicode.com': <GET https://jsonplaceholder.typicode.com/posts>
[scrapy.core.engine] DEBUG: Crawled (200) <GET https://scrapingclub.com/exercise/list_basic/?page=2> (referer: https://scrapingclub.com/exercise/list_basic/)
[root] INFO: Doing some logic
[root] INFO: New Request
[scrapy.core.engine] DEBUG: Crawled (200) <GET https://scrapingclub.com/exercise/list_basic/?page=3> (referer: https://scrapingclub.com/exercise/list_basic/?page=2)
[root] INFO: Doing some logic
[root] INFO: New Request
[scrapy.core.engine] DEBUG: Crawled (200) <GET https://scrapingclub.com/exercise/list_basic/?page=4> (referer: https://scrapingclub.com/exercise/list_basic/?page=3)

Scrapy has a built-in duplicate filter which is enabled by default, so repeated requests to the same URL are silently dropped. If you don't want this behavior, set dont_filter=True on the request so duplicate URLs are not ignored.
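As a side note, from_curl forwards any extra keyword arguments to the Request constructor, so (assuming the same curl_command and ?page= query pattern used above) the page number and the filter flag can be passed directly instead of assigning the private _cb_kwargs attribute. A minimal sketch of the relevant lines inside parse():

# inside parse(), replacing the two curl_req lines above
curl_req = scrapy.Request.from_curl(
    curl_command=f'{curl_command}?page={cur_page + 1}',  # assumes the site paginates via a ?page= query parameter
    dont_filter=True,                                     # keep the duplicate filter from dropping repeated URLs
    cb_kwargs={'page': cur_page + 1},                     # arrives in parse() as the 'page' keyword argument
)
yield curl_req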

CodePudding user response:

I think you forgot the callback in the request. Check the code I got from the documentation; in your case it should be callback=self.parse.

class MySpider(scrapy.Spider):
    name = 'myspider'

    def start_requests(self):
        return [scrapy.FormRequest("http://www.example.com/login",
                                   formdata={'user': 'john', 'pass': 'secret'},
                                   callback=self.logged_in)]

    def logged_in(self, response):
        # here you would extract links to follow and return Requests for
        # each of them, with another callback
        pass
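Applied to the curl-based request in the question, it could look roughly like this (a sketch only; from_curl passes callback and cb_kwargs through to the Request constructor, and parse is also the default callback when none is given):

def start_requests(self):
    for postal_code in self.postal_codes:
        yield scrapy.Request.from_curl(
            curl_command=curl_command,
            callback=self.parse,       # explicit callback, as suggested above
            cb_kwargs={'page': 0},     # avoids touching the private _cb_kwargs attribute
        )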