I am web scraping a web table that looks like the following:
| # | A | B | C | D |
|---|------|---------|-------|----------------------------|
| 1 | Name | Surname | Route | href="link with more info" |
| 2 | Name | Surname | Route | href="link with more info" |
| 3 | Name | Surname | Route | href="link with more info" |
I collect the link elements with:

```python
links = driver.find_elements(by='xpath', value='//a[@title="route detail"]')
```
So far so good, I get what I want.
Now I want to click each collected link to gather the info on the subpage (which I know how to do), return to the main page, move to the second row, and so forth:
```python
for link in links:
    # links = driver.find_elements(by='xpath', value='//a[@title="route detail"]')
    link.click()
    time.sleep(2)
    driver.back()
```
The code above works for the first iteration and then throws:

```
Message: stale element reference: element is not attached to the page document
```
I tried adding various waits, refreshes, etc., but with no success. I am using Selenium 4.6.0. By the way, if I execute the statements one by one outside of the for loop in a Jupyter Notebook, everything works.
I also added the find_elements line inside the for loop (commented out above), but it still doesn't work.
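Concretely, that attempt looked roughly like this:

```python
import time

links = driver.find_elements(by='xpath', value='//a[@title="route detail"]')
for link in links:
    link.click()
    time.sleep(2)
    driver.back()
    # re-collecting here, but the next iteration still fails
    links = driver.find_elements(by='xpath', value='//a[@title="route detail"]')
```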
CodePudding user response:
When you navigate to another page, every web element previously collected by Selenium (they are actually references to physical elements in the DOM) becomes invalid, because the page is rebuilt when you open it again.
To make your code work, you need to collect the `links` list again on the main page each time you navigate back. Note that re-collecting inside a `for link in links:` loop is not enough on its own: the loop keeps iterating over the original, stale list, so the code below indexes into the freshly collected list instead.
So, this code should work:
```python
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

wait = WebDriverWait(driver, 10)
links = driver.find_elements(By.XPATH, '//a[@title="route detail"]')
for index in range(len(links)):
    links[index].click()  # index into the freshly re-collected list
    # do what you want to do on the opened page
    driver.back()
    # re-collect the links; the explicit wait replaces the time.sleep()
    links = wait.until(EC.visibility_of_all_elements_located(
        (By.XPATH, '//a[@title="route detail"]')))
```
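As a side note, if clicking and navigating back remains fragile, an alternative that avoids stale references entirely is to collect the href values up front and open each URL directly. A minimal sketch, assuming each link exposes a usable `href` attribute (`main_page_url` is a placeholder for your starting page, not something from your code):

```python
from selenium.webdriver.common.by import By

links = driver.find_elements(By.XPATH, '//a[@title="route detail"]')
# plain strings cannot go stale, unlike WebElement references
urls = [link.get_attribute('href') for link in links]

for url in urls:
    driver.get(url)
    # do what you want to do on the opened page

driver.get(main_page_url)  # placeholder: return to the main page if needed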