I am new to Beautiful Soup. I am trying to scrape some Excel files from a source. URL to source: https://droughtmonitor.unl.edu/DmData/GISData.aspx/?mode=table&aoi=county&date= Original data source: https://droughtmonitor.unl.edu/DmData/GISData.aspx/
My main objective is to scrape the data from this URL into a data frame, covering all the files in the original data source, and to have any newly added files downloaded automatically and appended as well.
from bs4 import BeautifulSoup
import requests
import json
import pandas as pd
url2 = 'https://droughtmonitor.unl.edu/DmData/GISData.aspx/?mode=table&aoi=county&date=2022-06-21'
r2 = requests.get(url2)
soup = BeautifulSoup(r2.text,'html.parser')
raw_data = [data.text for data in soup]
The above code gives me this output:
["MapDate,FIPS,County,State,Nothing,D0,D1,D2,D3,D4,ValidStart,ValidEnd\r\n20220621,01001,Autauga County,AL,100.00,0.00,0.00,0.00,0.00,0.00,2022-06-21,2022-06-27\r\n20220621,01003,Baldwin County,AL,81.22,18.78,0.00,0.00,0.00,0.00,2022-06-21,2022-06-27\r\n20220621,01005,Barbour County,AL,100.00,0.00,0.00,0.00,0.00,0.00,2022-06-21,2022-06-27\r\n20220621,01007,Bibb County,AL,100.00,0.00,0.00,0.00,0.00,0.00,2022-06-21,2022-06-27\r\n20220621,01009,Blount County,AL,100.00,0.00,0.00,0.00,0.00,0.00,2022-06-21,2022-06-27\r\n20220621,01011,Bullock County,AL,100.00,0.00,0.00,0.00,0.00,0.00,2022-06-21,2022-06-27\r\n20220621,01013,Butler County,AL,100.00,0.00,0.00,0.00,0.00,0.00,2022-06-21,2022-06-27\r\n20220621,01015,Calhoun County,AL,100.00,0.00,0.00,0.00,0.00,0.00,2022-06-21,2022-06-27\r\n20220621,01017,Chambers County,AL,100.00,0.00,0.00,0.00,0.00,0.00,2022-06-21,2022-06-27\r\n20220621,01019,Cherokee County,AL,69.27,30.73,0.00,0.00,0.00,0.00,2022-06-21,2022-06-27\r\n20220621,01021,Chilton County,AL,100.00,0.00,0.00,0.00,0.00,0.00,2022-06-21,2022-06-27\r\n
I want the initial 12 values (MapDate,FIPS,County,State,Nothing,D0,D1,D2,D3,D4,ValidStart,ValidEnd) to be my column names and the rest to be the values associated with them.
Also, the original data source URL has values from the year 2000 to 2022. I need all the data in the same format and in a single CSV.
I also need the code to automatically extract any new data added to the source.
Can someone guide me on this?
CodePudding user response:
The server sends a CSV file, so you don't need BeautifulSoup. You can use io with pandas.read_csv():
import requests
import pandas as pd
import io
url = 'https://droughtmonitor.unl.edu/DmData/GISData.aspx/?mode=table&aoi=county&date=2022-06-21'
response = requests.get(url)
fh = io.StringIO(response.text) # create file in memory
df = pd.read_csv(fh)
print(df)
Or you can use io with the csv module:
import requests
import csv
import io
url = 'https://droughtmonitor.unl.edu/DmData/GISData.aspx/?mode=table&aoi=county&date=2022-06-21'
response = requests.get(url)
fh = io.StringIO(response.text) # create file in memory
data = list(csv.reader(fh))
print(data)
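If you go the csv route, the asker's question about column names still applies: the first row returned by csv.reader is the header, and the rest are the values. A minimal sketch of turning that into a DataFrame; the sample string here is a two-row excerpt of the output shown in the question, standing in for response.text so the example runs without a network call:

```python
import csv
import io

import pandas as pd

# Two-row excerpt of the Drought Monitor response, used in place of
# response.text so this runs offline.
sample = (
    "MapDate,FIPS,County,State,Nothing,D0,D1,D2,D3,D4,ValidStart,ValidEnd\r\n"
    "20220621,01001,Autauga County,AL,100.00,0.00,0.00,0.00,0.00,0.00,2022-06-21,2022-06-27\r\n"
    "20220621,01003,Baldwin County,AL,81.22,18.78,0.00,0.00,0.00,0.00,2022-06-21,2022-06-27\r\n"
)

rows = list(csv.reader(io.StringIO(sample)))
# First row becomes the column names, remaining rows become the values.
df = pd.DataFrame(rows[1:], columns=rows[0])
print(df[["FIPS", "County", "D0"]])
```

With the real response you would pass io.StringIO(response.text) instead of the sample string.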
EDIT:
You can even use the URL directly with pandas:
import pandas as pd
url = 'https://droughtmonitor.unl.edu/DmData/GISData.aspx/?mode=table&aoi=county&date=2022-06-21'
df = pd.read_csv(url)
print(df)
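The snippets above fetch a single date, but the question also asks for all weeks from 2000 onward in one CSV, plus automatic pickup of new releases. A sketch of one way to do it, assuming the Drought Monitor publishes one county table per week dated on Tuesdays (that release-day assumption is mine, and the weekly_dates and build_csv names are hypothetical helpers, not part of any API):

```python
import datetime as dt


def weekly_dates(start, end):
    """Yield ISO date strings for every Tuesday from start to end, inclusive.

    Assumes one table per week, dated on Tuesdays.
    """
    # Advance to the first Tuesday on or after `start` (Monday == 0).
    day = start + dt.timedelta(days=(1 - start.weekday()) % 7)
    while day <= end:
        yield day.isoformat()
        day += dt.timedelta(days=7)


def build_csv(out_path="drought_all.csv"):
    """Download every weekly table and concatenate into one CSV."""
    import pandas as pd

    base = ("https://droughtmonitor.unl.edu/DmData/GISData.aspx/"
            "?mode=table&aoi=county&date=")
    frames = [pd.read_csv(base + d)
              for d in weekly_dates(dt.date(2000, 1, 4), dt.date.today())]
    pd.concat(frames, ignore_index=True).to_csv(out_path, index=False)


if __name__ == "__main__":
    build_csv()
```

Because the end date is dt.date.today(), re-running the script picks up any newly published weeks; to avoid re-downloading everything, you could instead read the last MapDate already in the CSV and fetch only dates after it.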