Selenium: Next pages with javascript throwing ElementClickInterceptedException Error

Time:12-13

My code works fine, but the pagination portion throws the following exception:

selenium.common.exceptions.ElementClickInterceptedException: 
Message: element click intercepted: 
Element <a href="#cpricehistory" data-toggle="tab"  id="btn_cpricehistory" aria-expanded="true">...</a> 
is not clickable at point (165, 19). 
Other element would receive the click: <a href="#">...</a>

Your help is much appreciated.

Script:

import time
import pandas as pd
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from webdriver_manager.chrome import ChromeDriverManager
from selenium.webdriver.support.ui import Select
from bs4 import BeautifulSoup
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import NoSuchElementException


url = 'https://www.sharesansar.com/company/shl'

cdm = ChromeDriverManager().install()
driver = webdriver.Chrome(cdm)

driver.maximize_window()
time.sleep(8)
driver.get(url)
time.sleep(10)
data =[]

while True:
    driver.find_element_by_link_text('Price History').click()
    time.sleep(3)

    select = Select(WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.XPATH, '//*[@name="myTableCPriceHistory_length"]'))))
    select.select_by_visible_text("50")

    soup = BeautifulSoup(driver.page_source,'lxml')

    tables =soup.select('#myTableCPriceHistory tbody tr')

    for table in tables:
        _open = table.select_one('td:nth-child(3)').text
        high = table.select_one('td:nth-child(4)').text
        low = table.select_one('td:nth-child(5)').text
        close = table.select_one('td:nth-child(6)').text

        print ( f"""
        Opening:{_open}
        High:{high}
        Low:{low} 
        """)

    print("-" * 85)

    
    # next_page=driver.find_element_by_xpath('//a[contains(text(),"Next")]')
    # if next_page:
    #     next_page.click()
    #     time.sleep(3)
    # else:
    #     break
#while True:
    try:
        WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, '//*[@]/span/following-sibling::a'))).click()
        print("Clicked on  Next Page »")
    except TimeoutException:
        print("No more Next Page »")
        break
driver.quit()

CodePudding user response:

The full error message shows the problem is clicking 'Price History' again, but you don't need to click it again to get the next page. You should click it only once - before the while loop.

The same goes for selecting 50: you should select it only once - before the while loop.

Another problem comes from the Next Page button: it still exists on the last page, so the loop clicks it again and again, reloading the last page forever.

Normally this button has the class "paginate_button next", but on the last page it has the class "paginate_button next disabled" - so if you search only for the class "paginate_button next" (without "disabled"), the wait will time out on the last page and you can detect it:

'//a[@class="paginate_button next"]'
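The last-page detection reduces to a check on the button's class attribute. A minimal sketch of that logic (the helper name `is_last_page` is mine, not from the site):

```python
def is_last_page(class_attr: str) -> bool:
    """DataTables adds 'disabled' to the Next button's class on the last page."""
    return "disabled" in class_attr.split()

# Class strings as described above:
print(is_last_page("paginate_button next"))           # → False
print(is_last_page("paginate_button next disabled"))  # → True
```

In Selenium you would feed it `element.get_attribute("class")`; the XPath filter above achieves the same effect by only matching the non-disabled button.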

Full working code:

from webdriver_manager.chrome import ChromeDriverManager
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.support.ui import Select
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import NoSuchElementException, TimeoutException
from bs4 import BeautifulSoup
import time

url = 'https://www.sharesansar.com/company/shl'

cdm = ChromeDriverManager().install()
driver = webdriver.Chrome(service=Service(cdm))  # Selenium 4 expects a Service object

driver.maximize_window()
driver.get(url)
time.sleep(10)

data = []

driver.find_element(By.LINK_TEXT, 'Price History').click()
time.sleep(3)

select = Select(WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.XPATH, '//*[@name="myTableCPriceHistory_length"]'))))
select.select_by_visible_text("50")

while True:
    
    soup = BeautifulSoup(driver.page_source, 'lxml')

    tables = soup.select('#myTableCPriceHistory tbody tr')

    for table in tables:
        _open = table.select_one('td:nth-child(3)').text
        high = table.select_one('td:nth-child(4)').text
        low = table.select_one('td:nth-child(5)').text
        close = table.select_one('td:nth-child(6)').text

        print(f"Opening: {_open}\nHigh: {high}\nLow: {low}\n")

    print("-" * 85)
    
    try:
        WebDriverWait(driver, 5).until(EC.element_to_be_clickable((By.XPATH, '//a[@class="paginate_button next"]'))).click()
        print("Clicked on Next Page »")
        time.sleep(5)  # page needs time to load new data
    except TimeoutException:
        print("No more Next Page »")
        break
        
driver.quit()

BTW:

There was a similar question yesterday where I showed how to get this table using only requests instead of Selenium. That approach fetches JSON data directly from the API, so it doesn't need BeautifulSoup either.

scrape responsive table from site whose url doesnt change
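For reference, the requests-based approach generally looks like the sketch below. The endpoint URL and parameter names here are placeholders (assumptions) - copy the real ones from your browser's DevTools Network tab, where the table's XHR request is recorded:

```python
import requests

# Hypothetical endpoint - replace with the XHR URL seen in DevTools (assumption)
API_URL = "https://www.sharesansar.com/company-price-history"

def build_payload(page: int, page_size: int = 50) -> dict:
    # DataTables-style paging parameters; the exact key names must be taken
    # from the recorded request (these are assumptions)
    return {"draw": page, "start": (page - 1) * page_size, "length": page_size}

def fetch_history_page(page: int) -> dict:
    resp = requests.post(
        API_URL,
        data=build_payload(page),
        headers={"X-Requested-With": "XMLHttpRequest"},  # many AJAX endpoints expect this
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # JSON rows - no HTML parsing needed
```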
