I am a novice programmer trying to speed up my data analysis by automating the conversion of .ict files to .csv files.
I am trying to create a Python program that converts .ict files from NASA's Earthdata website into .csv files for data analysis. My plan is to do this with a data scraper that downloads the files, but they sit behind a user authentication wall. The data sets I plan to access are found at this link: https://asdc.larc.nasa.gov/data/AJAX/O3_1/2018/02/28/AJAX-O3_ALPHA_20180228_R1_F220.ict
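(For context, this is the conversion step I have in mind once I can actually download a file. It is only a sketch: it assumes the common ICARTT layout, where the first value on line 1 is the number of header lines and the last header line holds the comma-separated column names, and the function and file names are placeholders.)

import csv

def ict_to_csv(ict_path, csv_path):
    """Convert an ICARTT (.ict) file to .csv, assuming the usual layout:
    the first value on line 1 is the header line count, and the last
    header line is the comma-separated list of column names."""
    with open(ict_path) as f:
        lines = f.read().splitlines()

    n_header = int(lines[0].split(",")[0])              # e.g. "63, 1001" -> 63
    columns = [c.strip() for c in lines[n_header - 1].split(",")]
    data_rows = [
        [v.strip() for v in line.split(",")]
        for line in lines[n_header:]
        if line.strip()
    ]

    with open(csv_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(columns)
        writer.writerows(data_rows)

# Example (placeholder file names):
# ict_to_csv("AJAX-O3_ALPHA_20180228_R1_F220.ict", "AJAX-O3_ALPHA_20180228_R1_F220.csv")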
Here is the code I generated with https://curlconverter.com/# and extended to send the form data that should "log in" my session:
import requests
from bs4 import BeautifulSoup

cookies = {
    '_ga': '',
    '_gid': '',
    '_gat_GSA_ENOR0': '1',
    '_gat_UA-62340125-1': '1',
    '_gat_eui_tracker': '1',
    '_gat_UA-50960810-3': '1',
    '_urs-gui_session': '',
    '_gat_UA-62340125-2': '1',
}

headers = {
    'Connection': 'keep-alive',
    'Cache-Control': 'max-age=0',
    'sec-ch-ua': '" Not A;Brand";v="99", "Chromium";v="96", "Google Chrome";v="96"',
    'sec-ch-ua-mobile': '?0',
    'sec-ch-ua-platform': '"macOS"',
    'Upgrade-Insecure-Requests': '1',
    'Origin': 'https://urs.earthdata.nasa.gov',
    'Content-Type': 'application/x-www-form-urlencoded',
    'User-Agent': '',
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
    'Sec-Fetch-Site': 'same-origin',
    'Sec-Fetch-Mode': 'navigate',
    'Sec-Fetch-User': '?1',
    'Sec-Fetch-Dest': 'document',
    'Referer': 'https://urs.earthdata.nasa.gov/oauth/authorize?response_type=code&client_id=OLpAZlE4HqIOMr0TYqg7UQ&redirect_uri=https://d53njncz5taqi.cloudfront.net/urs_callback&state=https://search.earthdata.nasa.gov/search?ee=prod',
    'Accept-Language': 'en-US,en;q=0.9',
}

data = {
    'utf8': '',
    'authenticity_token': '',
    'username': '',
    'password': '',
    'client_id': '',
    'redirect_uri': '',
    'response_type': 'code',
    'state': 'https://search.earthdata.nasa.gov/search?ee=prod',
    'stay_in': '1',
    'commit': 'Log in',
}

# First attempt: a one-off POST to the login endpoint
response = requests.post('https://urs.earthdata.nasa.gov/login', headers=headers, cookies=cookies, data=data)

# Second attempt: log in inside a session, then request the .ict file
# (the bare expressions below only show output in an interactive session)
s = requests.Session()
s.post('https://urs.earthdata.nasa.gov/login', headers=headers, cookies=cookies, data=data)
response = s.get('https://asdc.larc.nasa.gov/data/AJAX/O3_1/2018/02/28/AJAX-O3_ALPHA_20180228_R1_F220.ict')
response

# Requesting the file directly, without any session or login
result = requests.get('https://asdc.larc.nasa.gov/data/AJAX/O3_1/2018/02/28/AJAX-O3_ALPHA_20180228_R1_F220.ict')
result.status_code
result.headers

content = result.content
soup = BeautifulSoup(content, features='lxml')
print(soup.prettify())
This print call just shows me the HTML of the login page. Does anyone know how to access the data on the other side of the login through Python?
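(For reference, here is the rough check I am using to tell whether a response is the login page or an actual data file; is_login_page is just a name I made up, and it only looks at the Content-Type header and the first character of the body.)

def is_login_page(resp):
    """Rough check: the .ict files are plain text, while the login page is HTML."""
    content_type = resp.headers.get('Content-Type', '')
    first_chars = resp.text.lstrip()[:10].lower()
    return 'html' in content_type or first_chars.startswith('<')

# Example:
# print(is_login_page(response))  # True -> still stuck at the login page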
CodePudding user response:
A couple of things are missing from your data, namely the value of authenticity_token and the encoded value of state. The following is how I would do it. Make sure to fill in the username and password fields accordingly before executing the script.
import requests
from bs4 import BeautifulSoup

# OAuth authorize URL for the ASDC data application; the state parameter is the
# base64-encoded URL of the .ict file you want to download
url = 'https://urs.earthdata.nasa.gov/oauth/authorize?splash=false&client_id=iQGRa5KtDl_e-fgYqB5x5Q&response_type=code&redirect_uri=https://asdc.larc.nasa.gov/data/urs&state=aHR0cDovL2FzZGMubGFyYy5uYXNhLmdvdi9kYXRhL0FKQVgvTzNfMS8yMDE4LzAyLzI4L0FKQVgtTzNfQUxQSEFfMjAxODAyMjhfUjFfRjIyMC5pY3Q'
link = 'https://urs.earthdata.nasa.gov/login'

with requests.Session() as s:
    s.headers['User-Agent'] = 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.104 Safari/537.36'

    # Load the login form and collect every named input, which picks up the
    # hidden fields (authenticity_token, client_id, redirect_uri, state, ...)
    r = s.get(url)
    soup = BeautifulSoup(r.text, "lxml")
    payload = {i['name']: i.get('value', '') for i in soup.select('input[name]')}
    payload['username'] = 'your_username'
    payload['password'] = 'your_password'

    # Submit the login form; after the redirects, res.text should be the .ict file
    res = s.post(link, data=payload)
    print(res.text)
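Once the login works, you can save the response body to disk and run your .ict-to-.csv conversion on the saved file. A minimal follow-up sketch, assuming res.text is the raw ICARTT text rather than an error page (the output file name is just a placeholder):

    # Continuing inside the "with" block, right after res = s.post(link, data=payload):
    if res.ok and not res.text.lstrip().startswith('<'):
        with open('AJAX-O3_ALPHA_20180228_R1_F220.ict', 'w') as out:
            out.write(res.text)   # the saved file can then be converted to .csv
    else:
        print('Still getting HTML back - the login probably did not go through.')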