I am trying to do web scraping with Python and have made a request like below and got the response.

Time:10-28

I would like to extract the links from the response.

Request:

import requests

headers = {
    'authority': 'www.xxxxxx.net',
    'sec-ch-ua': '"Google Chrome";v="95", "Chromium";v="95", ";Not A Brand";v="99"',
    'accept': 'text/javascript, application/javascript, application/ecmascript, application/x-ecmascript, */*; q=0.01',
    'x-requested-with': 'XMLHttpRequest',
    'sec-ch-ua-mobile': '?0',
    'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.54 Safari/537.36',
    'sec-ch-ua-platform': '"Windows"',
    'sec-fetch-site': 'same-origin',
    'sec-fetch-mode': 'cors',
    'sec-fetch-dest': 'empty',
    'referer': 'https://www.xxxxxx.net/',
    'accept-language': 'en-GB,en-US;q=0.9,en;q=0.8',
    'cookie': 'bnState={"impressions":1,"delayStarted":0}; pnState={"impressions":2,"delayStarted":1635254046187}',
}

params = (
    ('alt', 'json-in-script'),
    ('max-results', '12'),
    ('start-index', '13'),
    ('callback', 'jQuery22404432064732296963_1635254045161'),
    ('_', '1635254045166'),
)

response = requests.get('https://www.xxxxxx.net/feeds/posts/default', headers=headers, params=params)

print(response.text)

Response:

"2048" data-original-width="1367" src="https://blogger.googleusercontent.com/img/a/AVvXsEiLyCl2Qw06N8kiyC5cxMQhamcar6Nhkuh6JV5_xqLurbc5zM2E1LH8IepLzhJ_T-OsbcAF6qQZXpcmMy6ikkpOjaRu1uaBs2CsSpl0VwBrEMOln-X2-BOAsQRTbEJJNlqIbwQwzY7_WoMFKyvLd-iYP23oFK3nWiNcO1Ws"/\u003E\u003C/a\u003E\u003C/div\u003E\u003Cdiv style="clear: both;"\u003E\u003Ca href="https://blogger.googleusercontent.com/img/a/AVvXsEgPwHY717f8RMYR-GZ9U9-k1L7h3J4MSRI-bCPjF2Yb3qHv3p0Bitd73n-rGbJQfVetNuhxJdBlzPdPrtzjWWTPvrYFBGl-gcO6cPiccSygST8yR23eduWYfxAsRdh6gvVsjCpIfiWod9Qd_--wU" style="display: block; padding: 1em 0; text-align: center; "\u003E\u003Cimg alt="" border="0" data-original-height="2048" data-original-width="1365" src="https://blogger.googleusercontent.com/img/a/AVvXsEgPYR-GZ9U9-k1L7h3J4MSR1bWYocsC5PbI-bCPjF2Yb3d73n-rGbJQwZqzRqCfPrlzPdPrtzjWWTPvrYFBGl-gcO6cPiccSygST8yR23o6z4Tq8ptl4vVaeduWYfxAsRdh6gvVsjCpIfiWod9Qd_--wU"/\u003E\u003C/a\u003E\u003C/div\u003E\u003Cdiv style="clear: both;"\u003E\u003Ca href="https://blogger.googleusercontent.com/img/a/AVvXsEjEV6skKy5be_5LoMzHD-AeZWFV80c7KXV4BVpS7KTKkNTzl0U5-itDje-DbDgE0KHuoGI3ePDmfn_0AQMP1BjXPx2nn4mB1jUI9Rb7u9NQNMURGSAmk4aQK7h8qqiGH_lafBcHeNupHrm" style="display: block; padding: 1em 0; text-align: center; "\u003E\u003Cimg alt="" border="0" data-original-height="2048" data-original-width="1367" src="https://blogger.googleusercontent.com/img/a/AVvXsEjEV6skKy5be_5LoMzHD-AeZWFV80cQNKXV4BVpS7KTKkNTzl0U5-itDje-DbDgIS8A18QP7aVvME1wzZMb53ePDmfn_0AQMP1BjXRGSAmk4aQK7h8qqiGH_lafBcHeNupHrm"/\u003E\u003C/a\u003E\u003C/div\u003E\u003Cdiv style="clear: both;"\u003E\u003Ca href="https://blogger.googleusercontent.com/img/a/AVvXsEhvK-fVZGPmgnkif5OWAMDk-d22Y73FDLYRSXQQe4AYOazvk25-0DQ-o4XX35meuORitAk7WoN1vKSLdtH_P1wTa91B94vAI4ZGhlho0eE99oyOb8vRO055ZXuLloFz1n-_Y" style="display: block; padding: 1em 0; text-align: center;

Note: Please advise me how to process the response. Also note that I have changed the URL for privacy reasons.

Thanks in advance for your help.

CodePudding user response:

If you're doing web scraping, I highly recommend using the BeautifulSoup library to parse your response. Initialize it as shown below:

from bs4 import BeautifulSoup
response_text = ""  # your response text (e.g. response.text)
soup = BeautifulSoup(response_text, "html.parser")  # parse the response into a soup object

To get all hrefs:

hrefs = soup.find_all(href=True)        # every tag that has an href attribute
links = [tag['href'] for tag in hrefs]  # a list with all your links
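If installing BeautifulSoup is not an option, the standard library's `html.parser` can collect `href` attributes in much the same way. A minimal sketch (the sample HTML here is made up for illustration; in practice you would feed it your decoded response text):

```python
from html.parser import HTMLParser

class HrefCollector(HTMLParser):
    """Collects the value of every href attribute encountered."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the tag's attributes
        for name, value in attrs:
            if name == "href" and value:
                self.links.append(value)

collector = HrefCollector()
collector.feed('<div><a href="https://example.com/a">one</a>'
               '<a href="https://example.com/b">two</a></div>')
print(collector.links)  # ['https://example.com/a', 'https://example.com/b']
```

BeautifulSoup is still the more convenient choice when it is available, since it tolerates malformed markup better and supports richer queries.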

CodePudding user response:

This is an ASCII string with Unicode escape sequences in it. You need to convert it into a normal Unicode string first. Try this:

html_content = bytes(response.text, "ascii").decode("unicode-escape")

After that you get a normal string in HTML/XML format. Then you can use BeautifulSoup4 to parse it and extract the content you need.
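As a quick check of the decode step, here is the same conversion applied to a short fragment in the escaped form shown in the question (the fragment itself is shortened and made up for illustration):

```python
# Raw string simulates response text containing literal \u003C / \u003E sequences
escaped = r'\u003Cdiv\u003Ehello\u003C/div\u003E'

# Re-encode to bytes, then decode the \uXXXX escapes into real characters
html_content = bytes(escaped, "ascii").decode("unicode-escape")
print(html_content)  # <div>hello</div>
```

Note that `bytes(..., "ascii")` will raise a `UnicodeEncodeError` if the response already contains non-ASCII characters, so this approach assumes the response is pure ASCII with escaped Unicode, as in the example above.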
