Why is XPath not returning any results?


I was trying to get the data back from this page, but it wouldn't work; the same code did return results on a Formula 1 website. Your assistance would be highly appreciated, thanks.

import requests
from bs4 import BeautifulSoup
from lxml import etree

url = "https://www.etenders.gov.za/Home/opportunities?id=1"

# Fetch the page, parse it with BeautifulSoup, then hand the markup to lxml for XPath queries
webpage = requests.get(url)
soup = BeautifulSoup(webpage.content, "html.parser")
dom = etree.HTML(str(soup))

# Try to read the bold text in the first cell of the expanded detail table
res = dom.xpath('//*[@id="tendeList"]/tbody/tr[2]/td/table/tbody/tr[2]/td[1]/b/text()')
for i in res:
    print(i)
    print("----")

CodePudding user response:

The main issue here is not the XPath. The table is built dynamically from data returned by an XHR request, which you can inspect on the Network tab of your browser's devtools. So I would recommend using that structured JSON data rather than other scraping solutions such as Selenium.

import requests

# Call the XHR endpoint that the page itself uses to load the tender list
url = "https://www.etenders.gov.za/Home/TenderOpportunities/?status=1"
headers = {'user-agent': 'Mozilla/5.0'}

response = requests.get(url, headers=headers)

# The endpoint returns a JSON list of tender records
response.json()

Output:

[{'id': 23545,
  'tender_No': 'CORP5619 Notification of Award',
  'type': 'Request for Bid(Open-Tender)',
  'delivery': 'N/A - Notification of Award - Germiston - Germiston - 1400',
  'department': 'ESKOM',
  'date_Published': '2022-09-16T00:00:00',
  'cbrief': False,
  'cd': 'Friday, 30 September 2022 - 10:00',
  'dp': 'Friday, 16 September 2022',
  'closing_Date': '2022-09-30T10:00:00',
  'brief': '<not available>',
  'compulsory_briefing_session': None,
  'status': 'Published',
  'category': 'Civil engineering',
  'description': 'Notification of Award - Construction of Removable Bundwall at Apollo Substation',
  'province': 'National',
  'contactPerson': 'Godfrey Radzelani',
  'email': '[email protected]',
  'telephone': '011-871-3165',
  'fax': '011-871-3160',
  'briefingVenue': None,
  'conditions': 'None',
  'sd': [{'supportDocumentID': 'd2b5a3f7-3d3f-4c25-8808-740d55bf4352',
    'fileName': 'Notification of Award.pdf',
    'extension': '.pdf',
    'tendersID': 23545,
    'active': True,
    'updatedBy': '[email protected]',
    'dateModified': '2022-06-10T10:18:19.4281873',
    'tenders': None}],
  'bf': ' NO',
  'bc': ' NO'},
 {'id': 31660,
  'tender_No': 'MWP1593TX',
  'type': 'Request for Bid(Open-Tender)',
  'delivery': 'Eskom Megawatt Park Tender Office - Suninghill - Johannesburg - 2000',
  'department': 'ESKOM',
  'date_Published': '2022-09-16T00:00:00',
  'cbrief': True,
  'cd': 'Thursday, 22 September 2022 - 10:00',
  'dp': 'Friday, 16 September 2022',
  'closing_Date': '2022-09-22T10:00:00',
  'brief': 'Tuesday, 13 September 2022 - 10:00',
  'compulsory_briefing_session': '2022-09-13T10:00:00',
  'status': 'Published',
  'category': 'Services: Professional',
  'description': 'Provision of Land Surveying Services Panels for the Transmission Division on an “as and when required” basis from the start date until 30 June 2027',
  'province': 'National',
  'contactPerson': 'Godfrey Radzelani',
  'email': '[email protected]',
  'telephone': '011-871-3165',
  'fax': '011-871-3160',
  'briefingVenue': 'MS Teams',
  'conditions': 'N/A',
  'sd': [{'supportDocumentID': '6f8e65a5-6294-4b56-8fa4-11c869ecb45f',
    'fileName': '32- 136 Contractor Health and Safety Requirements.pdf',
    'extension': '.pdf',
    'tendersID': 31660,
    'active': True,
    'updatedBy': '[email protected]',
    'dateModified': '2022-09-01T10:26:13.4253523',
    'tenders': None},...]
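
If you only need a couple of fields (roughly what the original XPath was aiming at), you can pick them straight out of the parsed JSON. A minimal sketch, assuming the same endpoint and the field names visible in the output above (`tender_No`, `description`, `closing_Date`):

import requests

url = "https://www.etenders.gov.za/Home/TenderOpportunities/?status=1"
headers = {'user-agent': 'Mozilla/5.0'}

tenders = requests.get(url, headers=headers).json()

# Print a few fields per tender instead of digging through nested <td> elements
for t in tenders:
    print(t['tender_No'], '-', t['description'])
    print('Closes:', t['closing_Date'])
    print("----")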

CodePudding user response:

If you inspect your webpage.text you will find that the tbody element is not present in your response (most probably because the page is loaded dynamically using JS).
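
You can check this quickly by searching the raw response text; a minimal sketch, reusing the `webpage` variable from the question:

# Quick sanity check on the raw HTML returned by requests:
# rows injected later by JavaScript will not appear here
print('tendeList' in webpage.text)
print('tbody' in webpage.text.lower())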

To address this, you can use Selenium and have the script wait for the DOM to load before parsing the HTML:

from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC

url = "https://www.etenders.gov.za/Home/opportunities?id=1"

driver = webdriver.Chrome()
wait = WebDriverWait(driver, 30)  # timeout is in seconds
driver.get(url)

# Click the first row's expand control so the nested detail table is rendered
expand = wait.until(EC.visibility_of_element_located((By.XPATH, '//*[@id="tendeList"]/tbody/tr[1]/td[1]')))
expand.click()

# Wait for the nested table to become visible, then read all of its cells
table = wait.until(EC.visibility_of_element_located((By.XPATH, '//*[@id="tendeList"]/tbody/tr[2]/td/table/tbody')))
elements = table.find_elements(By.TAG_NAME, 'td')

for el in elements:
    print(el.text)

driver.quit()
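
If you prefer to keep the BeautifulSoup/lxml XPath flow from the question, you can also hand Selenium's rendered HTML to it once the table has loaded. A minimal sketch, assuming the `driver` object from the snippet above (run it before `driver.quit()`):

from bs4 import BeautifulSoup
from lxml import etree

# driver.page_source now contains the JavaScript-rendered DOM,
# so the original XPath has something to match against
soup = BeautifulSoup(driver.page_source, "html.parser")
dom = etree.HTML(str(soup))

for cell in dom.xpath('//*[@id="tendeList"]/tbody/tr[2]/td/table/tbody/tr/td'):
    print(''.join(cell.itertext()).strip())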

I would also suggest taking a closer look at your XPaths. From my understanding, you are trying to reach the expandable table, which requires clicking on the plus sign first. If that is the case, the XPath you indicated is incorrect.

Another way to approach such a web scraping project is to use a third-party scraping API. For example, WebScrapingAPI handles JavaScript rendering.

Here is an implementation example using WebScrapingAPI, which stays closer to your original code:

import requests
from bs4 import BeautifulSoup
from lxml import etree

API_KEY = '<YOUR_API_KEY>'
SCRAPER_URL = 'https://api.webscrapingapi.com/v1'

TARGET_URL = 'https://www.etenders.gov.za/Home/opportunities?id=1'

PARAMS = {
    "api_key": API_KEY,
    "url": TARGET_URL,
    # Render the page with JavaScript enabled and give it time to load
    "render_js": 1,
    "timeout": 40000,
    "wait_for": 10000,
    # Ask the API to click the expand control before returning the HTML
    "js_instructions": '[{"action":"click","selector":"button#btn-show-all-children","timeout": 4000}]'
}

response = requests.get(SCRAPER_URL, params=PARAMS)

# Parse the rendered HTML the same way as in the original script
soup = BeautifulSoup(response.content, "html.parser")
dom = etree.HTML(str(soup))
els = dom.xpath('//*[@id="tendeList"]/tbody/tr[2]/td/table/tbody/tr/td')

for el in els:
    print(el.text)
    print("----")