scrape responsive table from site whose url doesnt change


I want to scrape the price history from this site: https://www.sharesansar.com/company/shl. When the "Price History" button is clicked, the table gets loaded but the URL stays the same. I want to scrape that loaded table.

import requests
from bs4 import BeautifulSoup

url = "https://www.sharesansar.com/company/shl"
rr = requests.get(url)
htmll = rr.text
soup = BeautifulSoup(htmll, "html.parser")

CodePudding user response:

Using DevTools (tab: Network) in Chrome/Firefox you can see that this page uses JavaScript to load the data from another URL:

https://www.sharesansar.com/company-price-history?draw=1&columns[0][data]=DT_Row_Index&columns[0][name]=&columns[0][searchable]=false&columns[0][orderable]=false&columns[0][search][value]=&columns[0][search][regex]=false&columns[1][data]=published_date&columns[1][name]=&columns[1][searchable]=true&columns[1][orderable]=false&columns[1][search][value]=&columns[1][search][regex]=false&columns[2][data]=open&columns[2][name]=&columns[2][searchable]=false&columns[2][orderable]=false&columns[2][search][value]=&columns[2][search][regex]=false&columns[3][data]=high&columns[3][name]=&columns[3][searchable]=false&columns[3][orderable]=false&columns[3][search][value]=&columns[3][search][regex]=false&columns[4][data]=low&columns[4][name]=&columns[4][searchable]=false&columns[4][orderable]=false&columns[4][search][value]=&columns[4][search][regex]=false&columns[5][data]=close&columns[5][name]=&columns[5][searchable]=false&columns[5][orderable]=false&columns[5][search][value]=&columns[5][search][regex]=false&columns[6][data]=per_change&columns[6][name]=&columns[6][searchable]=false&columns[6][orderable]=false&columns[6][search][value]=&columns[6][search][regex]=false&columns[7][data]=traded_quantity&columns[7][name]=&columns[7][searchable]=false&columns[7][orderable]=false&columns[7][search][value]=&columns[7][search][regex]=false&columns[8][data]=traded_amount&columns[8][name]=&columns[8][searchable]=false&columns[8][orderable]=false&columns[8][search][value]=&columns[8][search][regex]=false&start=0&length=20&search[value]=&search[regex]=false&company=95&_=1639245456705

Using requests with this URL you can get the table as JSON data, so you don't need BeautifulSoup.


In the code below I converted all the values from this URL into a payload dictionary, so you can easily replace values to get different data.
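For example, this kind of conversion can be done automatically with urllib.parse - a small sketch (the captured URL is shortened here for readability; paste the full URL from DevTools in its place):

from urllib.parse import urlsplit, parse_qsl
from pprint import pprint

# shortened example - use the full URL captured in the Network tab
captured = ('https://www.sharesansar.com/company-price-history'
            '?draw=1&columns[0][data]=DT_Row_Index&company=95&start=0&length=20')

# keep_blank_values=True keeps empty parameters like columns[0][name]=
payload = dict(parse_qsl(urlsplit(captured).query, keep_blank_values=True))
pprint(payload)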

If you change 'start' (20, 40, etc.) then you should get the next pages of the table - see the pagination sketch after the result below.

If you use 'length": 50 then you get more values in one request - but bigger values doesn't work.

BTW: this URL needs the header X-Requested-With, which is used by AJAX requests.

import requests

# query parameters copied from the URL found in DevTools
payload = {
 '_': '1639245456705',
 'columns[0][data]': 'DT_Row_Index',
 'columns[0][orderable]': 'false',
 'columns[0][search][regex]': 'false',
 'columns[0][searchable]': 'false',
 'columns[1][data]': 'published_date',
 'columns[1][orderable]': 'false',
 'columns[1][search][regex]': 'false',
 'columns[1][searchable]': 'true',
 'columns[2][data]': 'open',
 'columns[2][orderable]': 'false',
 'columns[2][search][regex]': 'false',
 'columns[2][searchable]': 'false',
 'columns[3][data]': 'high',
 'columns[3][orderable]': 'false',
 'columns[3][search][regex]': 'false',
 'columns[3][searchable]': 'false',
 'columns[4][data]': 'low',
 'columns[4][orderable]': 'false',
 'columns[4][search][regex]': 'false',
 'columns[4][searchable]': 'false',
 'columns[5][data]': 'close',
 'columns[5][orderable]': 'false',
 'columns[5][search][regex]': 'false',
 'columns[5][searchable]': 'false',
 'columns[6][data]': 'per_change',
 'columns[6][orderable]': 'false',
 'columns[6][search][regex]': 'false',
 'columns[6][searchable]': 'false',
 'columns[7][data]': 'traded_quantity',
 'columns[7][orderable]': 'false',
 'columns[7][search][regex]': 'false',
 'columns[7][searchable]': 'false',
 'columns[8][data]': 'traded_amount',
 'columns[8][orderable]': 'false',
 'columns[8][search][regex]': 'false',
 'columns[8][searchable]': 'false',
 'company': '95',
 'draw': '1',
 'length': '20',
 'search[regex]': 'false',
 'start': '0'
}

headers = {
    # required - the endpoint only answers AJAX requests
    'X-Requested-With': 'XMLHttpRequest'
}

url = 'https://www.sharesansar.com/company-price-history'

response = requests.get(url, params=payload, headers=headers)
data = response.json() 
#print(data)

print('NR  | DATE       | OPEN   | CLOSE  |')
for number, item in enumerate(data['data'], 1):
    print(f'{number:3} |', item['published_date'], "|", item['open'], "|", item['close'], "|")

Result:

NR  | DATE       | OPEN   | CLOSE  |
  1 | 2021-12-09 | 208.70 | 206.00 |
  2 | 2021-12-08 | 214.90 | 205.00 |
  3 | 2021-12-07 | 218.00 | 211.00 |
  4 | 2021-12-06 | 208.00 | 214.00 |
  5 | 2021-12-05 | 215.00 | 211.00 |
  6 | 2021-12-02 | 225.00 | 217.10 |
  7 | 2021-12-01 | 226.00 | 224.50 |
  8 | 2021-11-30 | 224.60 | 225.00 |
  9 | 2021-11-29 | 227.00 | 225.00 |
 10 | 2021-11-28 | 233.00 | 227.00 |
 11 | 2021-11-25 | 233.00 | 237.00 |
 12 | 2021-11-24 | 228.00 | 230.00 |
 13 | 2021-11-23 | 233.50 | 230.10 |
 14 | 2021-11-22 | 238.00 | 237.00 |
 15 | 2021-11-21 | 242.70 | 234.40 |
 16 | 2021-11-18 | 236.00 | 240.00 |
 17 | 2021-11-17 | 243.00 | 240.00 |
 18 | 2021-11-16 | 232.00 | 239.90 |
 19 | 2021-11-15 | 226.00 | 231.50 |
 20 | 2021-11-14 | 228.00 | 225.60 |
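To get older history you can repeat the request while increasing 'start' by 'length'. A rough sketch of that pagination (it rebuilds the same DataTables-style payload in a loop; the stop condition assumes the endpoint returns an empty 'data' list after the last page):

import requests

# same column layout as in the payload above
columns = ['DT_Row_Index', 'published_date', 'open', 'high', 'low',
           'close', 'per_change', 'traded_quantity', 'traded_amount']

payload = {
    'draw': '1',
    'company': '95',       # SHL
    'length': '20',
    'search[value]': '',
    'search[regex]': 'false',
}

for i, name in enumerate(columns):
    payload[f'columns[{i}][data]'] = name
    payload[f'columns[{i}][searchable]'] = 'true' if name == 'published_date' else 'false'
    payload[f'columns[{i}][orderable]'] = 'false'
    payload[f'columns[{i}][search][value]'] = ''
    payload[f'columns[{i}][search][regex]'] = 'false'

headers = {'X-Requested-With': 'XMLHttpRequest'}
url = 'https://www.sharesansar.com/company-price-history'

all_rows = []

for start in range(0, 100, 20):      # first five pages of 20 rows each
    payload['start'] = str(start)
    response = requests.get(url, params=payload, headers=headers)
    rows = response.json().get('data', [])
    if not rows:                     # assumption: an empty page means no more history
        break
    all_rows.extend(rows)

print('rows collected:', len(all_rows))
for item in all_rows[:3]:
    print(item['published_date'], item['open'], item['close'])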