Scraping Tripadvisor attractions using scrapy and python


I am trying to scrape TripAdvisor's attractions, but I cannot get the name and address of each attraction. I suspect my product.css(...) selectors are wrong (or is the data embedded as JSON?).

Can anyone tell me how to correct the code to get the name and address of each attraction?

My current code:

import scrapy


class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = [
        'https://www.tripadvisor.com/Attractions-g187427-Activities-oa90-Spain'
    ]

    def parse(self, response):
        for link in response.css('.EsZYd a::attr(href)'):
            yield response.follow(link.get(), callback=self.parse_categories)

    def parse_categories(self, response):
        products = response.css('div.eeqnt')
        for product in products:
            yield {
                'name' : product.css('h1.WlYyy cPsXC GeSzT::text').get().strip(),
                'address' : product.css('span.WlYyy cacGK Wb::text').get().strip(),
            }

CodePudding user response:

This isn't really a Python problem, but a CSS-selector one.

CSS classes must be chained with dots, not separated by spaces: WlYyy.cPsXC.GeSzT. A space in a selector means "descendant", so h1.WlYyy cPsXC GeSzT looks for an element named GeSzT nested inside other elements, which matches nothing.
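To see what the dot-chained selector actually means, here is a small stdlib-only sketch: an element matches h1.WlYyy.cPsXC.GeSzT only if its space-separated class attribute contains all three class names. (The class names and the sample markup are taken from the question, not from live TripAdvisor pages.)

```python
from html.parser import HTMLParser

class ClassMatcher(HTMLParser):
    """Collects the text of <h1> tags carrying ALL of the wanted classes,
    i.e. what the CSS selector h1.WlYyy.cPsXC.GeSzT would match."""
    def __init__(self, wanted):
        super().__init__()
        self.wanted = set(wanted)
        self.capture = False
        self.titles = []

    def handle_starttag(self, tag, attrs):
        # The class attribute is a space-separated list of class names.
        classes = set(dict(attrs).get('class', '').split())
        if tag == 'h1' and self.wanted <= classes:
            self.capture = True

    def handle_endtag(self, tag):
        if tag == 'h1':
            self.capture = False

    def handle_data(self, data):
        if self.capture:
            self.titles.append(data)

parser = ClassMatcher(['WlYyy', 'cPsXC', 'GeSzT'])
parser.feed('<h1 class="WlYyy cPsXC GeSzT">Park Guell</h1>'
            '<h1 class="WlYyy">Other</h1>')
print(parser.titles)  # ['Park Guell']
```

Only the first h1 matches, because the second one carries just one of the three required classes.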

The best approach is to use Chrome with its dev tools. They let you get a CSS selector or XPath to a specific element: just right-click the element in the DOM tree and pick the Copy menu item.

Avoid using classes (especially ones without semantic meaning) as anchor points. They may change from page to page, or over time.

It is better to use semantically meaningful nodes. In your case, an XPath for the title would look like this: //main//header//div[@data-automation="main_h1"]//h1.
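You can sanity-check an XPath like that without launching a spider. A minimal sketch using the stdlib ElementTree (which supports a small XPath subset: paths must start with ".", and attribute predicates like [@x='y'] work); the HTML fragment below is an assumption modelled on the selector above, not the live TripAdvisor markup:

```python
import xml.etree.ElementTree as ET

# Illustrative fragment shaped like the selector in the answer above.
fragment = """
<main>
  <header>
    <div data-automation="main_h1">
      <h1>Park Guell</h1>
    </div>
  </header>
</main>
"""

root = ET.fromstring(fragment)
# ElementTree's XPath subset: relative path plus an attribute predicate.
title = root.find(".//div[@data-automation='main_h1']/h1")
print(title.text.strip())  # Park Guell
```

In the spider itself the equivalent would be response.xpath('//main//header//div[@data-automation="main_h1"]//h1/text()').get(), since Scrapy's selectors support full XPath 1.0.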

CodePudding user response:

You don't need a for loop on each listing page: every detail page contains a single attraction, so yield one item directly.

from scrapy.crawler import CrawlerProcess
import scrapy

class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = [
        'https://www.tripadvisor.com/Attractions-g187427-Activities-oa90-Spain'
    ]

    def parse(self, response):
        for link in response.css('.EsZYd a::attr(href)').getall():
            #print(link)
            yield response.follow(link, callback=self.parse_categories)

    def parse_categories(self, response):
        yield {
            # Same selectors as in the question, with the classes
            # chained by dots instead of spaces
            'name': response.css('h1.WlYyy.cPsXC.GeSzT::text').get(),
            'address': ''.join(response.css('span.WlYyy.cacGK.Wb::text').getall()),
            'url': response.url
        }
if __name__ == "__main__":
    # crawl() takes the spider class; CrawlerProcess() takes settings
    process = CrawlerProcess()
    process.crawl(QuotesSpider)
    process.start()