Scrapy - redis crawler after running for a while won't take the url from the queue
Time:11-06
I have stored two thousand start_urls in the Redis queue, but every time I run the crawler it only fetches a few dozen or a few hundred items before entering a state where it just waits for start_urls.
CodePudding user response:
Could someone help me analyze this? I have searched Baidu extensively but found no solution. The spider can take URLs from the queue and crawl them, but after running for a while it stops getting data. If I restart the crawler, it will pick up some start_urls and crawl again, but shortly afterward it stops taking start_urls once more.
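CodePudding user response:

Without seeing your settings it is hard to say for sure, but the symptom you describe (a restart briefly revives crawling, then the spider idles again) is often caused by the scrapy-redis duplicate filter: with a persisted dupefilter, any start_url it has already seen is silently dropped, so the queue drains without producing requests. A minimal configuration sketch to check against (the Redis URL and idle timeout below are assumptions for illustration, not your actual values):

```python
# settings.py -- minimal scrapy-redis configuration sketch.
# REDIS_URL and SCHEDULER_IDLE_BEFORE_CLOSE are example values; adjust them.

# Route request scheduling and de-duplication through Redis.
SCHEDULER = "scrapy_redis.scheduler.Scheduler"
DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"

# Keep the queue and dupefilter between runs. Note: a persisted dupefilter
# will silently discard start_urls it has already seen, which can look like
# the spider "refusing" to take URLs from the queue after a restart.
SCHEDULER_PERSIST = True

# Seconds the queue may stay empty before the spider closes instead of
# waiting forever for new start_urls.
SCHEDULER_IDLE_BEFORE_CLOSE = 10

# Assumed local Redis instance.
REDIS_URL = "redis://127.0.0.1:6379"
```

If this matches your setup, try deleting the spider's dupefilter key in Redis (by default scrapy-redis names it `<spidername>:dupefilter`, though your project may override it) before re-pushing start_urls, and watch whether the spider keeps consuming the queue. Also confirm you are pushing to the exact `redis_key` your RedisSpider reads, e.g. `LPUSH <spidername>:start_urls <url>` from redis-cli.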