I'm replacing requests.get() with pd.read_csv() and would like to write some exception logic for the case where pandas does not get the equivalent of a status code 200.
With requests, I can write:
response = requests.get(report_url)
if response.status_code != 200:
    # handle the error here
How can I apply the same logic to pd.read_csv()? Are there any status codes I can check on?
CodePudding user response:
You can use a url in read_csv(), but it has no method that gives you the status code. It simply raises an error when it gets a non-200 status code, and you have to use try/except to catch it. There is an example in the other answer.
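For example, a minimal sketch of that try/except (pandas fetches HTTP URLs with urllib, so a non-200 response normally surfaces as urllib.error.HTTPError; the URL is the sample file used below):
import urllib.error
import pandas as pd

url = "https://people.sc.fsu.edu/~jburkardt/data/csv/addresses.csv"

try:
    df = pd.read_csv(url)
except urllib.error.HTTPError as err:
    # err.code holds the HTTP status code (e.g. 404)
    print('download failed:', err.code, err.reason)
    df = None

print(df)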
But if you have to use requests, then you can later use io.StringIO to create a file-like object (a file in memory) and use it in read_csv().
import io
import requests
import pandas as pd

response = requests.get("https://people.sc.fsu.edu/~jburkardt/data/csv/addresses.csv")
print('status_code:', response.status_code)

#if response.status_code == 200:
if response.ok:
    # wrap the downloaded text in a file-like object and parse it with pandas
    df = pd.read_csv(io.StringIO(response.text))
else:
    df = None

print(df)
In the same way you can use io.StringIO when you create a web page which receives a csv file using an HTML <form>.
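A minimal sketch of that idea, with a hypothetical csv_text string standing in for the decoded content of an uploaded file:
import io
import pandas as pd

# hypothetical CSV content, e.g. the body of a file sent through an HTML <form>
csv_text = "name,age\nAlice,30\nBob,25\n"

df = pd.read_csv(io.StringIO(csv_text))
print(df)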
As far as I know, read_csv(url) works in a similar way - it downloads the file data from the server (using urllib under the hood, not requests) and later reads it like a file in memory.
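If you want to see the status code without installing requests, here is a sketch using only the standard library - urlopen() raises urllib.error.HTTPError for 4xx/5xx responses, so reaching the read means the download succeeded:
import io
import urllib.request
import pandas as pd

url = "https://people.sc.fsu.edu/~jburkardt/data/csv/addresses.csv"

with urllib.request.urlopen(url) as response:
    print('status:', response.status)  # HTTP status code, e.g. 200
    df = pd.read_csv(io.StringIO(response.read().decode('utf-8')))

print(df)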
CodePudding user response:
My suggestion is to write a custom reader that makes it possible to catch the error raised for an invalid URL, although this somewhat defeats the purpose:
import urllib.error
import pandas as pd

def custom_read(url):
    try:
        return_file = pd.read_csv(url)
    except urllib.error.URLError as err:
        # pandas fetches the URL with urllib, so a bad URL or a non-200
        # response raises urllib.error.URLError / HTTPError, not a requests error
        raise
    else:
        return return_file
A valid URL will work:
my_file = custom_read("https://people.sc.fsu.edu/~jburkardt/data/csv/addresses.csv")
This fails and raises a urllib error:
my_file1 = custom_read("https://uhoh.com")
Otherwise, there is no way to access the status code of the URL from a DataFrame object once it has been read.