Link scraping errors


url = "https://www.cnn.com/"

response = requests.get(url)

soup = BeautifulSoup(response.text, "html.parser")

links = []

for link in soup(response).find_all("a", href=True):
    links.append(link["href"])

for link in links:
    print(links)

AttributeError: ResultSet object has no attribute 'find_all'. You're probably treating a list of elements like a single element. Did you call find_all() when you meant to call find()?

I'm not sure why I'm getting this error; I'm trying to scrape all the href links from this website.

CodePudding user response:

You don't need to call soup(response); just call find_all directly on soup. The soup object was already built from response.text when you created it, so passing response in again is redundant.
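For context, a BeautifulSoup object is callable, and calling it is shorthand for find_all. So soup(response) returns a ResultSet (a list-like collection of matches) rather than the soup itself, and calling find_all on that list is what raises the error. A small illustration, reusing the soup and response variables from your code:

# Calling the soup object is shorthand for soup.find_all(...),
# so this returns a ResultSet, not a single tag or the soup itself
results = soup(response)
print(type(results))    # <class 'bs4.element.ResultSet'>

# A ResultSet is just a list of tags, so it has no find_all method;
# this line raises the AttributeError from your traceback
results.find_all("a", href=True)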

# Replace this:
for link in soup(response).find_all("a", href=True):

# With this:
for link in soup.find_all("a", href=True):
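Putting it together, here is a minimal sketch of the corrected script (assuming requests and beautifulsoup4 are installed). It also prints each link individually; your last loop prints the whole links list on every iteration because it uses print(links) instead of print(link).

import requests
from bs4 import BeautifulSoup

url = "https://www.cnn.com/"
response = requests.get(url)
soup = BeautifulSoup(response.text, "html.parser")

# Collect the href of every anchor tag on the page
links = [link["href"] for link in soup.find_all("a", href=True)]

# Print each link on its own line
for link in links:
    print(link)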