I'm trying to scrape Google search results using Python and Selenium WebDriver, and I can't figure out why find_elements returns only one element from the class (there are multiple h3 elements in the class, each containing a different title).
for element in driver.find_elements(by=By.XPATH, value="//*[@class='v7W49e']"):
    title = element.find_element(by=By.XPATH, value='//h3').text
    print(title)
The function doesn't return any error, just the title of the first result.
Thanks.
CodePudding user response:
To search for element(s) inside another element, you should use an XPath starting with a dot (.).
Otherwise Selenium will return the first element in the DOM matching the passed locator, //h3 in this case.
So, your code can be as follows:
for element in driver.find_elements(by=By.XPATH, value="//*[@class='v7W49e']"):
    title = element.find_element(by=By.XPATH, value='.//h3').text
    print(title)
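For reference, here is a minimal self-contained sketch of that relative .//h3 lookup. The v7W49e class and the search URL are assumptions taken from your locator; Google's generated class names change frequently, so treat them as placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By

# Assumes a locally available ChromeDriver (Selenium Manager resolves it in Selenium 4.6+).
driver = webdriver.Chrome()
driver.get('https://www.google.com/search?q=selenium')  # placeholder query

for element in driver.find_elements(by=By.XPATH, value="//*[@class='v7W49e']"):
    # The leading dot scopes the search to the current element's subtree
    title = element.find_element(by=By.XPATH, value='.//h3').text
    print(title)

driver.quit()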
UPD
There is only one element matching the //*[@class='v7W49e']
XPath locator there, so driver.find_elements(by=By.XPATH, value="//*[@class='v7W49e']")
will always return a single web element.
You should use //*[@class='v7W49e']//h3
there instead.
This code will work much better.
It is still not perfect, since not all the elements matching the //*[@class='v7W49e']//h3
XPath locator will have text to show, but it will give you multiple results.
for element in driver.find_elements(by=By.XPATH, value="//*[@class='v7W49e']//h3"):
    title = element.text
    print(title)
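If you also want to skip the matches that carry no visible text and give the results page a moment to render, one possible variant is the sketch below, assuming an explicit wait on the same locator (the 10-second timeout is an arbitrary choice).
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# Wait until the h3 elements are present, then print only non-empty titles.
wait = WebDriverWait(driver, 10)
elements = wait.until(
    EC.presence_of_all_elements_located((By.XPATH, "//*[@class='v7W49e']//h3"))
)

for element in elements:
    title = element.text
    if title:  # not all matched h3 nodes have text to show
        print(title)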