Selenium could not get last element with XPATH

I am writing an automation for the Fiverr question/answer (FAQ) section, but Selenium cannot get the text of the last answer.

from selenium import webdriver
from selenium.webdriver.common.by import By
import time

url = [
    'https://www.fiverr.com/volkeins/provide-10x-dofollow-backlinks-from-amazon-da96-permanent',
]

for x in url:
    driver = webdriver.Chrome()
    driver.maximize_window()
    driver.get(x)

    # PASS CAPTCHA
    time.sleep(5)
    driver.execute_script("document.getElementsByClassName('perimeterx-async-challenge')[0].style.display='none';")

    # Questions are extracted correctly
    questions = []
    questions_elements = driver.find_elements(By.XPATH, "//div/p[@class='question']")
    for question in questions_elements:
        questions.append(question.text)
    print("Question:", questions)

    # it could not get the last answer
    answers = []
    answers_elements = driver.find_elements(By.XPATH, "//div/p[@class='answer']")
    for answer in answers_elements:
        answers.append(answer.text)
    print("Answers:", answers)

    driver.quit()

Print:

Question: ['Do you provide profile backlink?', 'Can you publish my article']
Answers:  ['Never, we provide custom product backlink', '']

Note: it works fine with BeautifulSoup.

CodePudding user response:

Instead of using the .text property, use get_attribute("innerHTML").
This gave me the correct output:

answers = []
answers_elements = driver.find_elements(By.XPATH, "//div/p[@class='answer']")
for answer in answers_elements:
    answers.append(answer.get_attribute("innerHTML"))
print("Answers:", answers)

CodePudding user response:

The issue is that Selenium's .text only returns text that is actually rendered on the page, so the collapsed answer comes back as an empty string.

However, with BS4 you must have parsed the raw page source (driver.page_source), which contains the text regardless of whether it is visible.
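
For reference, a minimal sketch of that BS4 route (assuming BeautifulSoup is installed and that the answers carry the same class='answer' markup targeted below):

from bs4 import BeautifulSoup

# The raw HTML still contains hidden text, so every answer comes back
soup = BeautifulSoup(driver.page_source, "html.parser")
answers = [p.get_text(strip=True) for p in soup.select("p.answer")]
print("Answers:", answers)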

Solution:

Scroll to the respective element and then click its toggle so that the answer becomes visible, just like you would do manually.

Code:

url = [
    'https://www.fiverr.com/volkeins/provide-10x-dofollow-backlinks-from-amazon-da96-permanent',
]

for x in url:
    driver = webdriver.Chrome()
    driver.maximize_window()
    wait = WebDriverWait(driver, 20)
    driver.get(x)

    # PASS CAPTCHA
    time.sleep(5)
    driver.execute_script("document.getElementsByClassName('perimeterx-async-challenge')[0].style.display='none';")

    # Questions are extracted correctly
    questions = []
    questions_elements = driver.find_elements(By.XPATH, "//div/p[@class='question']")
    for question in questions_elements:
        questions.append(question.text)
    print("Question:", questions)

    # it could not get the last answer
    answers = []
    answers_elements = wait.until(EC.presence_of_all_elements_located((By.XPATH, "//div/p[@class='answer']")))
    print(len(answers_elements))
    i = 1
    for answer in answers_elements:
        driver.execute_script("arguments[0].scrollIntoView(true);", answer)
        arrow_btn = wait.until(EC.element_to_be_clickable((By.XPATH, f"((//h2[@class='section-title'])[3]//following-sibling::div//descendant::*[name()='svg'])[{i}]")))
        ActionChains(driver).move_to_element(arrow_btn).click().perform()
        answers.append(wait.until(EC.visibility_of_element_located((By.XPATH, f"(//div/p[@class='answer'])[{i}]"))).text)
        i = i + 1
    print("Question:", answers)

Imports:

from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.action_chains import ActionChains

Output:

Question: ['Do you provide profile backlink?', 'Can you publish my article']
2
Question: ['Never, we provide custom product backlink', 'Yes exactly, we can publish your article instead of our article']

Process finished with exit code 0