I am trying to scrape a webpage for bond information. Selenium lets me get the data for the first few rows of the table, but some rows and columns are not being scraped, and I do not know why.
The webpage is the SGX page containing bond information.
The input code:
import pandas as pd
from selenium.webdriver.common.by import By

# Collect every row of the bond table, then pull the cell text out of each row.
a = driver.find_elements(By.TAG_NAME, 'sgx-table-row')
combined = []
for num in range(len(a)):
    combined.append([])
counter = 0
for item in a:
    ticker = item.find_elements(By.TAG_NAME, 'a')
    name = item.find_elements(By.TAG_NAME, 'sgx-table-cell-text')
    price1 = item.find_elements(By.TAG_NAME, 'sgx-table-cell-number')
    for cell in ticker:
        if len(cell.text) != 0:
            combined[counter].append(cell.text)
    for cell in name:
        if len(cell.text) != 0:
            combined[counter].append(cell.text)
    for cell in price1:
        if len(cell.text) != 0:
            combined[counter].append(cell.text)
    counter += 1
df = pd.DataFrame(combined)
print(df)
The output:
0 N518100E 230201 CMHS 99.000 99 0.827 98.173 ﹣ ﹣ 0
1 N519100A 240201 LSHS 97.000 97 0.945 96.055 ﹣ ﹣ 0
2 N520100A 251101 QGES ﹣ ﹣ 0.111 ﹣ ﹣ ﹣ 0
3 N521100V 261101 IRRS ﹣ ﹣ 0 ﹣ ﹣ ﹣ 0
4 NA12100N 420401 PH1S 110.000 110 0.842 109.158 ﹣ ﹣ 0
5 NA16100H 460301 BJGS 108.000 108 1.069 106.931 ﹣ ﹣ 0
6 NA20100F 500301 ZL8S 108.000 108 0.729 107.271 ﹣ ﹣ 0
7 NA21200W 511001 ZFGS 87.000 87 0 87 ﹣ ﹣ 0
8 NX13100H 230701 R1MS 101.500 101.5 0.157 101.343 ﹣ ﹣ 0
9 NX15100Z 250601 AFUS 99.701 99.701 0.331 99.37 ﹣ ﹣ 0
10 NX16100F 260601 BJHS 102.000 102 0.296 101.704 ﹣ ﹣ 0
11 NX18100A 280501 CMGS 90.000 90 0.585 89.415 ﹣ ﹣ 0
12 NX21100N 310701 RXYS ﹣ ﹣ 0.093 ﹣ ﹣ ﹣ 0
13 NY07100X 220901 7PMS 101.380 101.38 1.214 100.166 ﹣ ﹣ 0
14 None None None None None None None None None
15 None None None None None None None None None
16 None None None None None None None None None
17 None None None None None None None None None
18 None None None None None None None None None
19 None None None None None None None None None
20 None None None None None None None None None
21 None None None None None None None None None
22 None None
As seen, past a certain point find_elements returns elements whose text is empty, so the later rows end up as None in the DataFrame, even though the HTML on the webpage is in the same format (same class names and tags).
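One possible explanation is that the sgx-table widget only renders the rows that are currently scrolled into view, so rows outside the viewport exist in the DOM but have no text yet. Below is a minimal sketch of a workaround along those lines, assuming that scrolling each row into view makes the widget fill in its cells; driver is the same Selenium WebDriver used above.

import time
import pandas as pd
from selenium.webdriver.common.by import By

combined = []
rows = driver.find_elements(By.TAG_NAME, 'sgx-table-row')
for row in rows:
    # Force the row into the viewport so the widget renders its cell text.
    driver.execute_script("arguments[0].scrollIntoView({block: 'center'});", row)
    time.sleep(0.2)  # brief pause to let the cells render

    cells = (row.find_elements(By.TAG_NAME, 'a')
             + row.find_elements(By.TAG_NAME, 'sgx-table-cell-text')
             + row.find_elements(By.TAG_NAME, 'sgx-table-cell-number'))
    combined.append([c.text for c in cells if c.text])

df = pd.DataFrame(combined)
print(df)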
CodePudding user response:
The page loads its data dynamically by calling a couple of APIs that return JSON. Would the following approach help?
import requests
import pandas as pd

# Query the SGX bonds API directly instead of scraping the rendered table.
r = requests.get('https://api.sgx.com/securities/v1.1/bonds?params=nc,adjusted-vwap,bond_accrued_interest,bond_clean_price,bond_dirty_price,bond_date,b,bv,p,c,change_vs_pc,change_vs_pc_percentage,cx,cn,dp,dpc,du,ed,fn,h,iiv,iopv,lt,l,o,p_,pv,ptd,s,sv,trading_time,v_,v,vl,vwap,vwap-currency')
df = pd.DataFrame(r.json()['data']['prices'])
df
This returns a dataframe with 38 rows × 34 columns:
pv bond_dirty_price lt fn trading_time dp type du bv dpc ... p_ p bond_accrued_interest change_vs_pc s nc cx vl v bond_date
0 1.014 101.4 1.014 None 20220722_090753 None retailbonds None 45.0 None ... X 0.000 0.501 None 1.019 RMRB 0.0 45.0 45630.0 1658419200000
1 0.998 99.7 0.997 None 20220722_090824 None retailbonds None 22.0 None ... X -0.100 0.380 None 1.000 5A1B 0.0 41.0 40874.0 1658419200000
2 0.964 96.3 0.963 None 20220722_090824 None retailbonds None 7.0 None ... X -0.104 1.068 None 0.966 6AZB 0.0 82.0 79028.0 1658419200000
3 1.013 101.3 1.013 None 20220722_090824 None retailbonds None 20.0 None ... X 0.000 0.678 None 1.015 V7AB 0.0 80.0 81040.0 1658419200000
4 1.011 101.3 1.013 None 20220722_090825 None retailbonds None 22.0 None ... X 0.198 0.983 None 1.013 V7BB 0.0 9.0 9117.0 1658419200000
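If the timestamp columns in that DataFrame are awkward to work with, they can be converted with pandas. A small follow-up sketch, assuming every row uses the formats seen in the sample above (bond_date as epoch milliseconds, trading_time as YYYYMMDD_HHMMSS):

# Convert the timestamp columns shown in the sample output into datetimes.
df['bond_date'] = pd.to_datetime(df['bond_date'], unit='ms')
df['trading_time'] = pd.to_datetime(df['trading_time'], format='%Y%m%d_%H%M%S')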