BeautifulSoup not returning the title of page


I tried to get the title of a web page by scraping it with the BeautifulSoup4 Python module, but it returns the string "Not Acceptable!" as the title, even though the title shown in the browser is different. I also tried looping through a list of links to extract the titles of all the pages, and it returned the same "Not Acceptable!" string for every link.

Here is the Python code:

from bs4 import BeautifulSoup
import requests


URL = 'https://insights.blackcoffer.com/how-is-login-logout-time-tracking-for-employees-in-office-done-by-ai/'
result = requests.get(URL)
doc = BeautifulSoup(result.text, 'html.parser')
tag = doc.title
print(tag.get_text())

Here is the corresponding web page: https://insights.blackcoffer.com/how-is-login-logout-time-tracking-for-employees-in-office-done-by-ai/

I don't know whether the problem is with BeautifulSoup4 or with the requests library. Is it because the site has bot protection enabled and does not return the HTML when the request comes from a script?

CodePudding user response:

An easy way to debug this kind of issue is to print (or write to a file) result.text. Some servers don't allow scraping, and some websites generate their HTML with JavaScript at runtime (e.g. YouTube). In both of those scenarios, the response body can differ from the source HTML you see in the browser. Here, the text below was returned by the server.

<head><title>Not Acceptable!</title></head><body><h1>Not Acceptable!</h1><p>An appropriate representation of the requested resource could not be found on this server. This error was generated by Mod_Security.</p></body></html>
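
As a sketch of that debugging step (the variable names match the question's script; the output file name is arbitrary):

import requests

URL = 'https://insights.blackcoffer.com/how-is-login-logout-time-tracking-for-employees-in-office-done-by-ai/'
result = requests.get(URL)

print(result.status_code)  # prints 406 here rather than 200
# write the raw body to a file so it can be compared with the browser's view-source
with open('response.html', 'w', encoding='utf-8') as f:
    f.write(result.text)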

Edit: As pointed out by DYZ, this is a 406 error, caused by the missing User-Agent header in the request.

https://www.exai.com/blog/406-not-acceptable

The 406 Not Acceptable status code is a client-side error. It is part of the HTTP response status codes in the 4xx category, which are considered client error responses.
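
If you want the script to fail loudly on this kind of error instead of silently parsing an error page, requests has a built-in check (a small sketch, not part of the original answer):

import requests

URL = 'https://insights.blackcoffer.com/how-is-login-logout-time-tracking-for-employees-in-office-done-by-ai/'
result = requests.get(URL)
try:
    result.raise_for_status()  # raises HTTPError for any 4xx/5xx status
except requests.exceptions.HTTPError as err:
    print(err)  # e.g. "406 Client Error: Not Acceptable for url: ..."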

CodePudding user response:

The server expects the User-Agent header. Interestingly, it is happy with any User-Agent, even a fictitious one:

result = requests.get(URL, headers={'User-Agent': 'My User Agent 1.0'})
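
Putting that together with the question's script, a minimal sketch of the complete fix (the User-Agent string itself is arbitrary, as noted above):

from bs4 import BeautifulSoup
import requests

URL = 'https://insights.blackcoffer.com/how-is-login-logout-time-tracking-for-employees-in-office-done-by-ai/'
headers = {'User-Agent': 'My User Agent 1.0'}  # any non-empty value satisfies this server

result = requests.get(URL, headers=headers)
doc = BeautifulSoup(result.text, 'html.parser')
print(doc.title.get_text())  # now prints the page's real title instead of "Not Acceptable!"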