Python Scrapy Web Scraping: problem with getting URL inside the onclick element which has ajax content

Time:05-14

I am a beginner at web scraping with Scrapy. I am trying to scrape user reviews for a specific book from goodreads.com. I want to scrape all of the reviews for the book, so I have to parse every review page. There is a next_page button at the bottom of each review page, but its target is embedded in the onclick attribute, and there is a problem: the onclick value contains an Ajax request, and I don't know how to handle this situation. Thanks for your help in advance.

Picture of the next_page button

It's the content of the onclick attribute

It's the remaining part of the onclick attribute

I am also new to posting on Stack Overflow, so I am sorry for any mistakes. :)

I am sharing my scraping code below.

Also, here is an example link for one of the books; the review section is at the bottom of the page.

a book_link

import scrapy
from ..items import GoodreadsItem


class CrawlnscrapeSpider(scrapy.Spider):
    name = 'crawlNscrape'
    allowed_domains = ['www.goodreads.com']
    start_urls = ['https://www.goodreads.com/list/show/702.Cozy_Mystery_Series_First_Book_of_a_Series']

    def parse(self, response):
        # collect all book links on this page, then request each one
        # and parse it with parse_page
        for href in response.css("a.bookTitle::attr(href)"):
            url = response.urljoin(href.extract())
            yield scrapy.Request(url, callback=self.parse_page)

        # follow the list's next page and call parse again
        next_page = response.xpath("(//a[@class='next_page'])[1]/@href")
        if next_page:
            url = response.urljoin(next_page[0].extract())
            yield scrapy.Request(url, callback=self.parse)


    def parse_page(self, response):
        # create a GoodreadsItem and fill it with the title and all
        # reviews visible on this page
        book = GoodreadsItem()
        book['title'] = response.css("#bookTitle::text").get()
        book['reviews'] = response.css(".readable span:nth-child(2)::text").getall()

        # I want to follow every review page for the book, but the
        # next_page link fires an Ajax request from its onclick
        # attribute, so the value extracted below is JavaScript,
        # not a URL, and this request does not work.
        next_page = response.xpath("(//a[@class='next_page'])[1]/@onclick")
        if next_page:
            url = response.urljoin(next_page[0].extract())
            yield scrapy.Request(url, callback=self.parse_page)

        yield book

CodePudding user response:

Instead of the following code:

next_page = response.xpath("(//a[@class='next_page'])[1]/@onclick")
if next_page:
    url = response.urljoin(next_page[0].extract())
    yield scrapy.Request(url,callback=self.parse_page)

Try this instead.

First, add this import:

from re import search

Then use the following for pagination:

next_page_html = response.xpath("//a[@class='next_page' and @href='#']/@onclick").get()
if next_page_html is not None:
    # the onclick value wraps the target URL in Request('...'),
    # so pull out the quoted string that follows "Request("
    next_page_href = search(r"Request\('([^']+)", next_page_html)
    if next_page_href:
        url = response.urljoin(next_page_href.group(1))
        yield scrapy.Request(url, callback=self.parse_page)
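To see what the regex does outside of the spider, here is a minimal sketch. The onclick string below is illustrative only (the exact markup Goodreads emits may differ), but it is assumed to wrap the target URL in `Request('...')`, which is what the pattern relies on:

```python
from re import search

# Hypothetical onclick value; the real attribute may contain more arguments,
# but the relative URL is assumed to be the first quoted Request() argument.
onclick = ("new Ajax.Request('/book/reviews/12345?page=2', "
           "{asynchronous:true, evalScripts:true, method:'get'}); "
           "return false;")

# capture everything between the opening quote after "Request(" and the
# next single quote, i.e. the relative URL of the next review page
match = search(r"Request\('([^']+)", onclick)
if match:
    print(match.group(1))  # -> /book/reviews/12345?page=2
```

The captured group is a relative path, which is why the answer passes it through `response.urljoin()` before yielding the next request.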