I tried BeautifulSoup, but it only scrapes the scripts from the URL.
from urllib import request
from bs4 import BeautifulSoup

url = 'https://ekartlogistics.com/shipmenttrack/FMPP0944216480'
read = request.urlopen(url)                  # fetch the raw HTML
soup = BeautifulSoup(read, 'html.parser')    # parse it with BeautifulSoup
print(soup.prettify())
It returns the scripts along with the rest of the HTML, but not the table data I need.
I am trying to get this table data from this URL.
CodePudding user response:
The page loads its data dynamically with JavaScript, so you can't grab it with BeautifulSoup alone. You can use a browser-automation tool such as Selenium. Here I use Selenium to render the JavaScript and pandas to parse the table, as follows:
Code:
import time
import pandas as pd
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome('chromedriver.exe')    # path to your ChromeDriver binary
driver.maximize_window()
driver.get("https://ekartlogistics.com/shipmenttrack/FMPP0944216480")
time.sleep(5)                                    # give the JavaScript time to render the table
table = driver.find_element(By.CSS_SELECTOR, 'table.table').get_attribute('outerHTML')
df = pd.read_html(table)[0]                      # parse the table HTML into a DataFrame
print(df)
driver.quit()
Output:
                   Date         Time       Place                           Status
0     Sunday 17 October  04:24:26 PM     Kolkata                 Shipment Created
1     Sunday 17 October  04:24:31 PM     Kolkata     Dispatched to CentralHub_BAG
2     Sunday 17 October  04:56:00 PM     Kolkata       Received at CentralHub_BAG
3     Sunday 17 October  04:56:03 PM     Kolkata       Received at CentralHub_BAG
4     Monday 18 October  03:10:35 AM       Patna     Dispatched to CentralHub_BHT
5    Tuesday 19 October  04:48:44 AM       Patna       Received at CentralHub_BHT
6    Tuesday 19 October  05:03:44 PM  Samastipur  Dispatched to SatelliteHub_SAMA
7  Wednesday 20 October  02:47:44 AM  Samastipur    Received at SatelliteHub_SAMA
8   Thursday 21 October  09:21:52 AM  Samastipur                 Out For Delivery
9     Friday 22 October  07:38:36 AM  Samastipur                        Delivered
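A fixed time.sleep() works, but it can be flaky on slow connections. As a variation (just a sketch, assuming the same table.table selector and a ChromeDriver that Selenium can locate), an explicit WebDriverWait blocks only until the table is actually rendered, and wrapping the HTML in StringIO avoids the warning newer pandas versions raise for literal strings:

import pandas as pd
from io import StringIO
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()   # assumes ChromeDriver is on PATH (Selenium 4.6+ can also fetch it automatically)
driver.get("https://ekartlogistics.com/shipmenttrack/FMPP0944216480")

# Wait (up to 15 s) until the JavaScript has rendered the tracking table
table_el = WebDriverWait(driver, 15).until(
    EC.presence_of_element_located((By.CSS_SELECTOR, 'table.table'))
)

# Wrap the HTML in StringIO: pandas 2.1+ warns when given a literal string
df = pd.read_html(StringIO(table_el.get_attribute('outerHTML')))[0]
print(df)
driver.quit()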
CodePudding user response:
NOTE: The solution below is for Google Colab.
Credits: https://stackoverflow.com/users/12848411/fazlul
# Install Selenium and ChromeDriver on Colab
!pip install selenium
!apt-get update                                   # update Ubuntu package lists so apt install works
!apt install chromium-chromedriver
!cp /usr/lib/chromium-browser/chromedriver /usr/bin

import sys
sys.path.insert(0, '/usr/lib/chromium-browser/chromedriver')   # ChromeDriver path

import pandas as pd
from selenium import webdriver
from selenium.webdriver.common.by import By

# Colab has no display, so Chrome has to run headless
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument('--headless')
chrome_options.add_argument('--no-sandbox')
chrome_options.add_argument('--disable-dev-shm-usage')

wd = webdriver.Chrome('chromedriver', options=chrome_options)

wd.get("https://ekartlogistics.com/shipmenttrack/FMPP0944216480")
table = wd.find_element(By.CSS_SELECTOR, 'table.table').get_attribute('outerHTML')
df = pd.read_html(table)[0]                       # parse the table HTML into a DataFrame
print(df)
wd.quit()
Output:
                   Date         Time       Place                           Status
0     Sunday 17 October  04:24:26 PM     Kolkata                 Shipment Created
1     Sunday 17 October  04:24:31 PM     Kolkata     Dispatched to CentralHub_BAG
2     Sunday 17 October  04:56:00 PM     Kolkata       Received at CentralHub_BAG
3     Sunday 17 October  04:56:03 PM     Kolkata       Received at CentralHub_BAG
4     Monday 18 October  03:10:35 AM       Patna     Dispatched to CentralHub_BHT
5    Tuesday 19 October  04:48:44 AM       Patna       Received at CentralHub_BHT
6    Tuesday 19 October  05:03:44 PM  Samastipur  Dispatched to SatelliteHub_SAMA
7  Wednesday 20 October  02:47:44 AM  Samastipur    Received at SatelliteHub_SAMA
8   Thursday 21 October  09:21:52 AM  Samastipur                 Out For Delivery
9     Friday 22 October  07:38:36 AM  Samastipur                        Delivered
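Note: recent Selenium releases (4.10 and later) dropped the positional driver-path argument, so the webdriver.Chrome('chromedriver', options=...) call above can fail there. A minimal sketch of the Selenium 4 style construction, assuming chromedriver was copied to /usr/bin/chromedriver as in the install steps above:

from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options

chrome_options = Options()
chrome_options.add_argument('--headless')
chrome_options.add_argument('--no-sandbox')
chrome_options.add_argument('--disable-dev-shm-usage')

# Selenium 4 passes the driver path through a Service object
wd = webdriver.Chrome(service=Service('/usr/bin/chromedriver'), options=chrome_options)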
CodePudding user response:
Maybe try loading the page headless with Selenium and then extracting the HTML? I couldn't get it to work with a plain urllib request either.
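For completeness, a minimal headless sketch along those lines (assuming Selenium 4 with a ChromeDriver it can locate, and the same table.table selector used in the answers above):

import pandas as pd
from io import StringIO
from selenium import webdriver
from selenium.webdriver.common.by import By

options = webdriver.ChromeOptions()
options.add_argument('--headless')              # run Chrome without opening a window
driver = webdriver.Chrome(options=options)

driver.get("https://ekartlogistics.com/shipmenttrack/FMPP0944216480")
driver.implicitly_wait(10)                      # wait up to 10 s for elements to appear
html = driver.find_element(By.CSS_SELECTOR, 'table.table').get_attribute('outerHTML')
print(pd.read_html(StringIO(html))[0])
driver.quit()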