Trying to scrape a website using Selenium but getting this error

Time: 01-09

from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.chrome.options import Options
chrome_options = Options()
chrome_options.add_experimental_option("detach", True)

s = Service(r'I:\chromedriver_win32\chromedriver.exe')

# Website to scrape
website='https://www.adamchoi.co.uk/overs/detailed'


driver=webdriver.Chrome(service=s,options=chrome_options)
driver.get(website)


#Locating and clicking an element
all_matches_button=driver.find_element(by='xpath',value="//label[normalize-space()='All matches']").click()


matches=driver.find_elements(by="xpath",value='tr')
for match in matches:
    print(match.text)

Error: "USB: usb_device_handle_win.cc:1045 Failed to read descriptor from node connection: A device attached to the system is not functioning. (0x1F)" and "Bluetooth: bluetooth_adapter_winrt.cc:1074 Getting Default Adapter failed."

I'm looking for a solution to my problem.

CodePudding user response:

  1. How did you get USB and Bluetooth errors? What is going on?

  2. Where did you copy-paste "type hefrom selenium import webdriver" from? The stray "type he" in front of the import is a syntax error; the line should read "from selenium import webdriver".

  3. The variable all_matches_button is never used in your code, and there is no need to assign the result to a variable when you click an element; click() returns None.

    #Locating and clicking an element
    all_matches_button=driver.find_element(by='xpath',value="//label[normalize-space()='All matches']").click()
    
  4. Here is working code as a starting point to dig deeper:

    from time import sleep
    
    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.chrome.options import Options
    
    chrome_options = Options()
    chrome_options.add_experimental_option("detach", True)
    
    website = 'https://www.adamchoi.co.uk/overs/detailed'
    
    driver = webdriver.Chrome(options=chrome_options)
    driver.get(website)
    sleep(4)
    
    tr_elements = driver.find_elements(By.XPATH, "//tr")
    for tr in tr_elements:
        print(tr.tag_name, tr.get_attribute('textContent'))
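The sleep(4) in the snippet above is a fixed delay; an explicit wait is usually more reliable because it proceeds as soon as the rows exist. Below is a minimal sketch of the same scrape using WebDriverWait. The parse_rows helper and scrape_rows wrapper are my own illustration, not part of the original answer:

```python
def parse_rows(rows):
    """Split raw row text into whitespace-separated cells, dropping blank rows."""
    return [row.split() for row in rows if row.strip()]


def scrape_rows(url='https://www.adamchoi.co.uk/overs/detailed', timeout=10):
    """Open the page, wait for table rows to appear, and return their text."""
    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC

    driver = webdriver.Chrome()
    try:
        driver.get(url)
        # Wait up to `timeout` seconds for at least one <tr>, instead of sleep(4).
        WebDriverWait(driver, timeout).until(
            EC.presence_of_all_elements_located((By.XPATH, '//tr'))
        )
        return [tr.text for tr in driver.find_elements(By.XPATH, '//tr')]
    finally:
        driver.quit()


# The parsing step works on plain strings, so it can be exercised
# without launching a browser:
print(parse_rows(['Everton 1 - 0 Arsenal', '', 'Everton 2 - 1 Spurs']))
# → [['Everton', '1', '-', '0', 'Arsenal'], ['Everton', '2', '-', '1', 'Spurs']]
```

Keeping the row parsing in a plain function separate from the browser driving also makes the scraper easier to test and debug.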

CodePudding user response:

However, these messages shouldn't prevent you from getting scraping results; they are Chrome log output, not Python exceptions. Try correcting the XPath in your last line by adding slashes:

    matches = driver.find_elements(by="xpath", value='//tr')
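The slashes matter because the bare path 'tr' is resolved relative to the current context node, which has no direct tr children, while '//tr' selects tr elements anywhere in the document. The same relative-vs-descendant distinction can be illustrated offline with the standard library's ElementTree, whose limited XPath support behaves analogously (the sample HTML here is made up for the demonstration):

```python
import xml.etree.ElementTree as ET

# Rows sit inside <tbody>, so they are not direct children of <table>.
html = ('<table><tbody>'
        '<tr><td>row 1</td></tr>'
        '<tr><td>row 2</td></tr>'
        '</tbody></table>')
root = ET.fromstring(html)

# 'tr' looks only at direct children of the context node: nothing found.
print(len(root.findall('tr')))      # → 0
# './/tr' (the analogue of '//tr') searches all descendants: both rows found.
print(len(root.findall('.//tr')))   # → 2
```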