Having trouble when web scraping returns nothing


I'm building a real estate web scraper and I'm having problems when a certain index doesn't exist in the HTML.

How can I fix this? This is the line that's causing the trouble:

info_extra = container.find_all('div', class_="info-right text-xs-right")[0].text

I'm new to web scraping, so I'm a bit lost.

Thanks!

CodePudding user response:

One general approach is to check the length of the result before you try to access the index.

divs = container.find_all('div', class_="info-right text-xs-right")
if len(divs) > 0:
    info_extra = divs[0].text
else:
    info_extra = None

You can simplify this further because an empty list is falsy.

divs = container.find_all('div', class_="info-right text-xs-right")
if divs:
    info_extra = divs[0].text
else:
    info_extra = None

You can simplify even further by using the walrus operator := (available in Python 3.8+):

if (divs := container.find_all('div', class_="info-right text-xs-right")):
    info_extra = divs[0].text
else:
    info_extra = None

Or all in one line:

info_extra = divs[0].text if (divs := container.find_all('div', class_="info-right text-xs-right")) else None
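
If you only ever need the first match, another option (assuming container is a BeautifulSoup element, which find_all suggests) is find(), which returns the first matching element or None instead of a list:

# find() gives back the first match, or None when nothing matches
div = container.find('div', class_="info-right text-xs-right")
info_extra = div.text if div is not None else None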

CodePudding user response:

I'm new to web scraping too, and most of my problems happen when I ask for an element on the page that doesn't exist.

Have you tried a try/except block?

try:
    info_extra = container.find_all('div', class_="info-right text-xs-right")[0].text
except IndexError:
    # the div wasn't found on this page, so fall back to None
    info_extra = None
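
In a scraping loop you would typically apply this per listing, so one missing element doesn't stop the whole run. A minimal sketch, assuming containers is a list of parsed listing elements (the name is hypothetical):

results = []
for container in containers:  # containers: hypothetical list of parsed listings
    try:
        info_extra = container.find_all('div', class_="info-right text-xs-right")[0].text
    except IndexError:
        info_extra = None  # this listing has no matching div
    results.append(info_extra)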

https://docs.python.org/3/tutorial/errors.html

Good luck
