I'm trying to scrape token info from poocoin. I can get all the other information, but I can't scrape the time-series data from the chart.
import requests, re
from bs4 import BeautifulSoup
import pandas as pd
url = 'https://poocoin.app/tokens/0x7606267a4bfff2c5010c92924348c3e4221955f2'
response = requests.get(url)
soup = BeautifulSoup(response.text, 'html.parser')
CodePudding user response:
You can make it work by sending a request directly to their API (I believe), decoding the response to JSON via requests' .json() method, and grabbing the data you need the same way you would access a dictionary: ["some_key"].
To locate where to send the request: Dev tools -> Network -> Fetch/XHR -> find the request name and click on it (in this case: candles-bsc?..) -> Preview (check that the response is what you want) -> Headers -> copy the Request URL -> make a request -> optionally add extra request headers if the response != 200.
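For illustration, here's how the copied Request URL relates to a params dict (a sketch using a subset of the parameters from the full example further down; requests.Request(...).prepare() builds the URL without sending anything):

import requests

# the same candle parameters that appear in the copied Request URL's query string
params = {
    "to": "2021-11-29T09:15:00.000Z",
    "limit": "321",
    "interval": "15m",
}

# build (but don't send) the request to see the final URL requests would hit
prepared = requests.Request("GET", "https://api2.poocoin.app/candles-bsc", params=params).prepare()
print(prepared.url)
# -> https://api2.poocoin.app/candles-bsc?to=2021-11-29T09%3A15%3A00.000Z&limit=321&interval=15m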
You can use Insomnia to test a response. Find the name under Fetch/XHR -> right click -> copy as cURL (bash) -> paste it into Insomnia -> see the response.
In this case, you only need to pass a user-agent in the request headers in order to receive a 200 status code; otherwise it will throw a 403 or 503 status code. Check what your user-agent is.
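To see why the block happens, you can print the User-Agent that requests sends when you don't override it (a quick check using requests' public utils):

import requests

# by default, requests announces itself as python-requests/<version>,
# which the API rejects with 403/503
print(requests.utils.default_headers()["User-Agent"])
# e.g. python-requests/2.26.0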
Pass the user-agent:
headers = {
    "user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.45 Safari/537.36",
}
response = requests.get("URL", headers=headers)
Full code and example:
import requests

headers = {
    "user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.45 Safari/537.36",
}

params = {
    "to": "2021-11-29T09:15:00.000Z",
    "limit": "321",
    "lpAddress": "0xd8b6A853095c334aD26621A301379Cc3614f9663",
    "interval": "15m",
    "baseLp": "0x58F876857a02D6762E0101bb5C46A8c1ED44Dc16",
}

response = requests.get("https://api2.poocoin.app/candles-bsc", params=params, headers=headers).json()
# whole response from the API call for a particular token (I believe);
# some values may need adjusting (open/close price, etc.)
for result in response:
    count = result["count"]
    _time = result["time"]
    open_price = result["open"]
    close_price = result["close"]
    high = result["high"]
    low = result["low"]
    volume = result["volume"]
    base_open = result["baseOpen"]
    base_close = result["baseClose"]
    base_high = result["baseHigh"]
    base_low = result["baseLow"]

    print(f"{count}\n"
          f"{_time}\n"
          f"{open_price}\n"
          f"{close_price}\n"
          f"{high}\n"
          f"{low}\n"
          f"{volume}\n"
          f"{base_open}\n"
          f"{base_close}\n"
          f"{base_high}\n"
          f"{base_low}\n")
# part of the output:
'''
194
2021-11-29T06:00:00.000Z
6.6637177e-13
6.5189422e-13
6.9088173e-13
5.9996067e-13
109146241968737.17
610.0766516756873
611.1764494818917
612.3961994618185
606.7446709385977
1
2021-11-25T16:15:00.000Z
1.7132448e-13
1.7132448e-13
1.7132448e-13
1.7132448e-13
874858231833.1771
643.611707269882
642.5014860521045
644.5105804619558
638.9447353699617
# ...
'''
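Since the response is a flat list of candle dicts, and you already import pandas in the question, it drops straight into a DataFrame for time-series work. A sketch, assuming the API keeps returning the shape shown above (the column names are the keys used in the loop):

import pandas as pd
import requests

headers = {
    "user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.45 Safari/537.36",
}
params = {
    "to": "2021-11-29T09:15:00.000Z",
    "limit": "321",
    "lpAddress": "0xd8b6A853095c334aD26621A301379Cc3614f9663",
    "interval": "15m",
    "baseLp": "0x58F876857a02D6762E0101bb5C46A8c1ED44Dc16",
}

# same call as above; the API returns a flat list of candle dicts
candles = requests.get("https://api2.poocoin.app/candles-bsc", params=params, headers=headers).json()

df = pd.DataFrame(candles)               # each dict key becomes a column: time, open, close, ...
df["time"] = pd.to_datetime(df["time"])  # parse the ISO timestamps
df = df.set_index("time").sort_index()   # index by time for time-series work

print(df[["open", "close", "high", "low", "volume"]].head())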
P.S. I have a dedicated web scraping blog. Whenever you need to parse search engines, have a look at SerpApi.
Disclaimer: I work for SerpApi.