Web scraping from html code of a database using python

Time:04-20

I am new to Python and am learning things slowly. I have previously made API calls to databases to extract information. However, I am now dealing with a particular Indian database, and its HTML seems confusing; I can't work out how to extract the particular information I am looking for. Basically, my input is a list of herb links that look like this (only the ID changes):

http://envis.frlht.org/plantdetails/3315/fd01bd598f0869d65fe5a2861845f9f8
http://envis.frlht.org/plantdetails/2133/fd01bd598f0869d65fe5a2861845f9f9
http://envis.frlht.org/plantdetails/845/fd01bd598f0869d65fe5a2861845f9f10
http://envis.frlht.org/plantdetails/363/fd01bd598f0869d65fe5a2861845f9f11

When I open each of these, I want to extract the "Distribution" detail for the herb from the webpage. That's all. But I can't figure out which element in the HTML holds that detail. I tried a lot before coming here. Can someone please help me? Thanks in advance.

Code:

import requests
import urllib.request
import time
from bs4 import BeautifulSoup
import json
import pandas as pd
import os
from pathlib import Path
from pprint import pprint

user_home = os.path.expanduser('~')
OUTPUT_DIR = os.path.join(user_home, 'vk_frlht')
Path(OUTPUT_DIR).mkdir(parents=True, exist_ok=True)

herb_url = 'http://envis.frlht.org/bot_search'
response = requests.get(herb_url)
soup = BeautifulSoup(response.text, "html.parser")
token = soup.find('input', {'type': 'hidden', 'name': 'token'})  # hidden token field of the search form, if present
herb_query_url = 'http://envis.frlht.org/plantdetails/3315/fd01bd598f0869d65fe5a2861845f9f8'

response = requests.get('http://envis.frlht.org/plantdetails/3315/fd01bd598f0869d65fe5a2861845f9f8')

#optional code for many links at once

with open('herb_links.txt', 'r') as f:  # placeholder path: one "id/hash" suffix per line
    frlhtinput = f.readlines()
frlht = [x.rstrip('\n') for x in frlhtinput]

for line in frlht:
    out = requests.get(f'http://envis.frlht.org/plantdetails/{line}')
#end of the optional code
herb_query_soup = BeautifulSoup(response.text, "html.parser")
text = herb_query_soup.find('div', {'id': 'result-details'})
pprint(text)

CodePudding user response:

This is how the page looks after scraping:

[screenshot: the plant-details area shows only a loading spinner]

The loading sign in the middle means the content is filled in only after JavaScript executes; the data is not in the HTML that requests downloads. You have to drive a real browser with Selenium instead of using BS4 alone.

See tutorial here on how to use it.
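A minimal illustration of why BS4 alone comes up empty here. The HTML below is a stand-in, not the site's actual markup: the server sends an empty container div that a script fills in after the page loads in a browser, and BeautifulSoup only ever sees the empty container.

```python
from bs4 import BeautifulSoup

# Stand-in for what the server initially sends: the details div is empty
# and only gets populated by JavaScript running in a real browser.
static_html = """
<div id="result-details"></div>
<script>loadPlantDetails();  // runs in a browser, never in requests/BS4</script>
"""

soup = BeautifulSoup(static_html, "html.parser")
div = soup.find("div", {"id": "result-details"})
print(repr(div.get_text(strip=True)))  # '' -- the data simply isn't there
```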

CodePudding user response:

Try this:

import requests
from bs4 import BeautifulSoup
from pprint import pprint

plant_ids = ["3315", "2133", "845", "363"]
results = []
for plant_id in plant_ids:
    herb_query_url = f"http://envis.frlht.org/plantdetails/{plant_id}/fd01bd598f0869d65fe5a2861845f9f8"
    headers = {
        "Referer": herb_query_url,
    }
    response = requests.get(
        f"http://envis.frlht.org/bot_search/plantdetails/plantid/{plant_id}/nocache/0.7763327765552295/referredfrom/extplantdetails",
        headers=headers,
    )
    herb_query_soup = BeautifulSoup(response.text, "html.parser")
    # each detail row is a div whose text has the form "Label:value"
    result = herb_query_soup.find_all("div", {"class": "initbriefdescription"})
    for r in result:
        label, value = r.text.split(":", 1)
        results.append({label.strip(): value.strip()})
pprint(results)
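Since only the "Distribution" entry is wanted, the label:value pairs can be merged into one dict per plant and filtered. A small sketch of that post-processing; the sample strings below mimic the div texts, and `parse_details` is a helper name of my own:

```python
def parse_details(div_texts):
    """Split each 'Label:value' string on the first colon into a dict."""
    details = {}
    for text in div_texts:
        label, _, value = text.partition(":")
        details[label.strip()] = value.strip()
    return details

sample = [
    "Family:AMARANTHACEAE",
    "Used in:Ayurveda, Siddha, Folk",
    "Distribution:Globally distributed in Africa, Asia and India.",
]
print(parse_details(sample).get("Distribution"))
# Globally distributed in Africa, Asia and India.
```

Using `.get("Distribution")` rather than indexing matters because, as the output further down shows, some plant pages have no Distribution entry at all.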

CodePudding user response:

The information is loaded from another URL, which can be constructed from the URLs you have. First build the required URL (found by watching the browser's network requests), then request it.

Try the following:

import requests
from bs4 import BeautifulSoup

urls = [
    "http://envis.frlht.org/plantdetails/3315/fd01bd598f0869d65fe5a2861845f9f8",
    "http://envis.frlht.org/plantdetails/2133/fd01bd598f0869d65fe5a2861845f9f9",
    "http://envis.frlht.org/plantdetails/845/fd01bd598f0869d65fe5a2861845f9f10",
    "http://envis.frlht.org/plantdetails/363/fd01bd598f0869d65fe5a2861845f9f11",
]

for url in urls:
    print('\n', url)
    url_split = url.split('/')
    url_details = f"http://envis.frlht.org/bot_search/plantdetails/plantid/{url_split[4]}/nocache/{url_split[5]}/referredfrom/extplantdetails"
    
    headers = {
        'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.47 Safari/537.36',
        'Referer' : url,
    }
    
    req = requests.get(url_details, headers=headers)
    soup = BeautifulSoup(req.content, "html.parser")
    
    for div in soup.find_all('div', class_="initbriefdescription"):
        print("  ", div.get_text(strip=True))

For your 4 IDs, it displays:

 http://envis.frlht.org/plantdetails/3315/fd01bd598f0869d65fe5a2861845f9f8
   Accepted Name:Amaranthus hybridusL. subsp.cruentusvar.paniculatusTHELL.
   Family:AMARANTHACEAE
   Synonyms:Amaranthus paniculatusL.
   Used in:Ayurveda, Siddha, Folk
   Distribution:This species is globally distributed in Africa, Asia and India. It is said to be cultivated as a leafy vegetable in Maharashtra, Karnataka (Coorg) and on the Nilgiri hills of Tamil Nadu. It is also found as an escape.

 http://envis.frlht.org/plantdetails/2133/fd01bd598f0869d65fe5a2861845f9f9
   Accepted Name:Triticum aestivumL.
   Family:POACEAE
   Synonyms:Triticum sativumLAM.Triticum vulgareWILL.
   Used in:Ayurveda, Siddha, Unani, Folk, Chinese, Modern

 http://envis.frlht.org/plantdetails/845/fd01bd598f0869d65fe5a2861845f9f10
   Accepted Name:Dolichos biflorusL.
   Family:FABACEAE
   Synonyms:Dolichos uniflorusLAMK.Macrotyloma uniflorum(LAM.) VERDC.
   Used in:Ayurveda, Siddha, Unani, Folk, Sowa Rigpa
   Distribution:This species is native to India, globally distributed in the Paleotropics. Within India, it occurs all over up to an altitude of 1500 m. It is an important pulse crop particularly in Madras, Mysore, Bombay and Hyderabad.

 http://envis.frlht.org/plantdetails/363/fd01bd598f0869d65fe5a2861845f9f11
   Accepted Name:Brassica oleraceaL.
   Family:BRASSICACEAE
   Used in:Ayurveda, Siddha
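The asker's original script sets up an output directory, so once the fields are collected they can be written out. A hedged sketch of that last step: the rows here are abbreviated stand-ins for the parsed output above, and the filename is illustrative.

```python
import csv

# Stand-in rows; in practice, one dict per plant built from the div texts.
rows = [
    {"Accepted Name": "Amaranthus hybridus", "Distribution": "Africa, Asia and India"},
    {"Accepted Name": "Triticum aestivum"},  # some pages have no Distribution
]

with open("distribution.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Accepted Name", "Distribution"])
    for row in rows:
        # .get() with a default keeps plants lacking a field from crashing the run
        writer.writerow([row.get("Accepted Name", ""), row.get("Distribution", "")])
```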
