Whenever I run my spider with scrapy crawl test -O test.json in my Visual Studio Code terminal, I get output like this:
2023-01-31 14:31:45 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.example.com/product/1>
{'price': 100,
'newprice': 90
}
2023-01-31 14:31:50 [scrapy.core.engine] INFO: Closing spider (finished)
2023-01-31 14:31:50 [scrapy.extensions.feedexport] INFO: Stored json feed (251 items) in: test.json
2023-01-31 14:31:50 [selenium.webdriver.remote.remote_connection] DEBUG: DELETE http://localhost:61169/session/996866d968ab791730e4f6d87ce2a1ea {}
2023-01-31 14:31:50 [urllib3.connectionpool] DEBUG: http://localhost:61169 "DELETE /session/996866d968ab791730e4f6d87ce2a1ea HTTP/1.1" 200 14
2023-01-31 14:31:50 [selenium.webdriver.remote.remote_connection] DEBUG: Remote response: status=200 | data={"value":null} | headers=HTTPHeaderDict({'Content-Length': '14', 'Content-Type': 'application/json; charset=utf-8', 'cache-control': 'no-cache'})
2023-01-31 14:31:50 [selenium.webdriver.remote.remote_connection] DEBUG: Finished Request
2023-01-31 14:31:52 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 91321,
'downloader/request_count': 267,
'downloader/request_method_count/GET': 267,
'downloader/response_bytes': 2730055,
'downloader/response_count': 267,
'downloader/response_status_count/200': 267,
'dupefilter/filtered': 121,
'elapsed_time_seconds': 11.580893,
'feedexport/success_count/FileFeedStorage': 1,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2023, 1, 31, 13, 31, 50, 495392),
'httpcompression/response_bytes': 9718676,
'httpcompression/response_count': 267,
'item_scraped_count': 251,
'log_count/DEBUG': 537,
'log_count/INFO': 11,
'request_depth_max': 2,
'response_received_count': 267,
'scheduler/dequeued': 267,
'scheduler/dequeued/memory': 267,
'scheduler/enqueued': 267,
'scheduler/enqueued/memory': 267,
'start_time': datetime.datetime(2023, 1, 31, 13, 31, 38, 914499)}
2023-01-31 14:31:52 [scrapy.core.engine] INFO: Spider closed (finished)
I want to log all of this, including the print('hi') lines in my spiders, but I DON'T want the scraped item output logged, in this case {'price': 100, 'newprice': 90}.

Inspecting the above, I think I need to disable only the downloader/response_bytes.

I've been reading https://docs.scrapy.org/en/latest/topics/logging.html, but I'm not sure where or how to configure my exact use case. I have hundreds of spiders, and I don't want to have to add a configuration to each one, but rather apply the logging config to all spiders. Do I need to add a separate config file, or add it to an existing one like scrapy.cfg?
UPDATE 1
So here's my folder structure, where I created settings.py:

Scrapy\
    tt_spiders\
        myspiders\
            spider1.py
            spider2.py
            settings.py
        middlewares.py
        pipelines.py
        settings.py
    scrapy.cfg
    settings.py

settings.py:
if __name__ == "__main__":
    disable_list = ['scrapy.core.engine', 'scrapy.core.scraper', 'scrapy.spiders']
    for element in disable_list:
        logger = logging.getLogger(element)
        logger.disabled = True

    spider = 'example_spider'
    settings = get_project_settings()
    settings['USER_AGENT'] = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.169 Safari/537.36'
    process = CrawlerProcess(settings)
    process.crawl(spider)
    process.start()
This throws 3 errors, which makes sense, as I have not defined these:

- "logging" is not defined
- "get_project_settings" is not defined
- "CrawlerProcess" is not defined

But more importantly, what I don't understand is that this code contains spider = 'example_spider', whereas I want this logic to apply to ALL spiders.
So I reduced it to:
if __name__ == "__main__":
    disable_list = ['scrapy.core.scraper']
But still the output is logged. What am I missing?
CodePudding user response:
Let's assume that we have this spider:
spider.py:
import scrapy


class ExampleSpider(scrapy.Spider):
    name = 'example_spider'
    allowed_domains = ['scrapingclub.com']
    start_urls = ['https://scrapingclub.com/exercise/detail_basic/']

    def parse(self, response):
        item = dict()
        item['title'] = response.xpath('//h3/text()').get()
        item['price'] = response.xpath('//div[@class="card-body"]/h4/text()').get()
        yield item
And its output is:
...
[scrapy.middleware] INFO: Enabled item pipelines:
[]
[scrapy.core.engine] INFO: Spider opened
[scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
[scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
[scrapy.core.engine] DEBUG: Crawled (200) <GET https://scrapingclub.com/exercise/detail_basic/> (referer: None)
[scrapy.core.scraper] DEBUG: Scraped from <200 https://scrapingclub.com/exercise/detail_basic/>
{'title': 'Long-sleeved Jersey Top', 'price': '$12.99'}
[scrapy.core.engine] INFO: Closing spider (finished)
[scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 329,
'downloader/request_count': 1,
...
If you want to disable the logging of a specific line, just copy the text inside the square brackets and disable its logger.
E.g., for [scrapy.core.scraper] DEBUG: Scraped from <200 https://scrapingclub.com/exercise/detail_basic/>, disable the scrapy.core.scraper logger.
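Since those "Scraped from ..." item dumps are emitted at DEBUG level by scrapy.core.scraper, a softer option than disabling the logger entirely is to raise just that logger's level; a minimal sketch:

import logging

# Hide only the DEBUG "Scraped from <200 ...>" item dumps;
# INFO/WARNING/ERROR records from scrapy.core.scraper still come through.
logging.getLogger('scrapy.core.scraper').setLevel(logging.INFO)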
main.py:
import logging

from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

if __name__ == "__main__":
    disable_list = ['scrapy.core.engine', 'scrapy.core.scraper', 'scrapy.spiders']
    for element in disable_list:
        logger = logging.getLogger(element)
        logger.disabled = True

    spider = 'example_spider'
    settings = get_project_settings()
    settings['USER_AGENT'] = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.169 Safari/537.36'
    process = CrawlerProcess(settings)
    process.crawl(spider)
    process.start()
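Note that disabling a logger outright also swallows its warnings and errors. If you only want to drop the item dumps while keeping everything else from that logger, one alternative is a logging.Filter; a minimal sketch (DropScrapedItemsFilter is a name I made up, assuming every item dump starts with "Scraped from"):

import logging

class DropScrapedItemsFilter(logging.Filter):
    # Reject only the "Scraped from <...>" item-dump records;
    # all other records from this logger pass through unchanged.
    def filter(self, record):
        return not record.getMessage().startswith('Scraped from')

logging.getLogger('scrapy.core.scraper').addFilter(DropScrapedItemsFilter())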
If you want to disable some of the extensions, you can set them to None in settings.py:
EXTENSIONS = {
    'scrapy.extensions.telnet.TelnetConsole': None,
    'scrapy.extensions.logstats.LogStats': None,
    'scrapy.extensions.corestats.CoreStats': None,
}
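The telnet console also has its own documented toggle, so instead of removing the extension you can switch it off with a single setting:

# settings.py
TELNETCONSOLE_ENABLED = False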
Update 1:
Add just this to settings.py:
import logging

disable_list = ['scrapy.core.engine', 'scrapy.core.scraper', 'scrapy.spiders']
for element in disable_list:
    logger = logging.getLogger(element)
    logger.disabled = True
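Because settings.py is loaded for every spider in the project, this applies to ALL of them without touching the individual spiders. Another project-wide route is Scrapy's LOG_FORMATTER setting: when a log formatter method returns None, that event is not logged at all. A minimal sketch (the module and class names below are placeholders you would create yourself):

# tt_spiders/logformatter.py (hypothetical module)
from scrapy import logformatter

class QuietItemLogFormatter(logformatter.LogFormatter):
    def scraped(self, item, response, spider):
        # Returning None tells Scrapy to skip logging scraped items entirely.
        return None

And in settings.py:

LOG_FORMATTER = 'tt_spiders.logformatter.QuietItemLogFormatter'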