Selenium - Why does a NoSuchElementException happen in the second for loop iteration?


I'm trying to loop over a list of web elements matching a div tag. The first iteration goes well, but the second one throws a NoSuchElementException. Here is a minimal example of my code:

for div in driver.find_elements_by_xpath("//div[@class='class_name']"):
    print(div.text)
    print(f"Current url 1: {driver.current_url}") # url 
    new_url = url + "/page/"
    time.sleep(2)
    driver.get(new_url)
    print(f"Current url 2: {driver.current_url}") # new_url
    time.sleep(2)
    # Then get info from the new url

    # Go back
    # driver.execute_script("window.history.go(-1)")
    driver.back()
    print(f"Current url 3: {driver.current_url}") # url
    print("Sleeping for 3 seconds from now...")
    time.sleep(3)

Thank you!

CodePudding user response:

You are getting a StaleElementReferenceException because the reference to the web element you are trying to use is no longer valid, i.e. stale.
See the Selenium documentation or any other resource about the stale element reference exception.
Since you navigated to another page, all the web elements you located on the initial page become stale, even after you navigate back to it.
To overcome this you have to locate those elements again.
So, instead of your current code, I'd suggest something like the following:

divs = driver.find_elements_by_xpath("//div[@class='class_name']")
for i in range(len(divs)):
    divs = driver.find_elements_by_xpath("//div[@class='class_name']")
    div = divs[i]
    print(div.text)
    print(f"Current url 1: {driver.current_url}") # url 
    new_url = url + "/page/"
    time.sleep(2)
    driver.get(new_url)
    print(f"Current url 2: {driver.current_url}") # new_url
    time.sleep(2)
    # Then get info from the new url

    # Go back
    # driver.execute_script("window.history.go(-1)")
    driver.back()
    print(f"Current url 3: {driver.current_url}") # url
    print("Sleeping for 3 seconds from now...")
    time.sleep(3)
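
As a side note, the find_elements_by_xpath helper used above was removed in Selenium 4.3, so on a current Selenium the same re-fetch pattern goes through By.XPATH. A minimal sketch, assuming the same class name and page structure:

from selenium.webdriver.common.by import By

# Selenium 4 style: re-locate the divs on every iteration so they are never stale
divs = driver.find_elements(By.XPATH, "//div[@class='class_name']")
for i in range(len(divs)):
    divs = driver.find_elements(By.XPATH, "//div[@class='class_name']")
    div = divs[i]
    print(div.text)
    # ... navigate, scrape, and driver.back() as in the snippet above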

Alternatively, you can locate the specific div inside the loop by its index, as follows:

divs = driver.find_elements_by_xpath("//div[@class='class_name']")
for i in range(len(divs)):
    div = driver.find_element_by_xpath("(//div[@class='class_name'])[" + str(i + 1) + "]")  # XPath positions are 1-based
    print(div.text)
    print(f"Current url 1: {driver.current_url}") # url 
    new_url = url + "/page/"
    time.sleep(2)
    driver.get(new_url)
    print(f"Current url 2: {driver.current_url}") # new_url
    time.sleep(2)
    # Then get info from the new url

    # Go back
    # driver.execute_script("window.history.go(-1)")
    driver.back()
    print(f"Current url 3: {driver.current_url}") # url
    print("Sleeping for 3 seconds from now...")
    time.sleep(3)
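
In both variants the fixed time.sleep calls can be fragile: if the listing page is slow to render again after driver.back(), re-locating the divs may fail. You can wait explicitly for them to reappear instead; a minimal sketch, assuming Selenium's standard support classes and the same XPath:

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# After driver.back(), wait up to 10 seconds for the divs to be present again
# before locating the one for the current iteration
WebDriverWait(driver, 10).until(
    EC.presence_of_all_elements_located((By.XPATH, "//div[@class='class_name']"))
)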