I am trying to write a program that builds links to the API. To do this, I use bs4 to search for the div I need, but I get an error and the program does not work correctly. I only want to pick up the coin names that are in the coin list below. How can I fix it? Please give me a hand.
My code:
import requests
from bs4 import BeautifulSoup

s = requests.Session()
response = s.get(url=url, headers=headers)
soup = BeautifulSoup(response.text, 'lxml')
page_list = int(soup.find_all('li', class_='page')[-1].text)  # number of the last pagination page
coin_api_list = []
coin_api_list_not_in_div = []
coins_list = ["Metis Token","TownCoin","JasmyCoin","SIDUS","Paribus","Qredo Token","SENATE","YAM","CircuitsOfValue","E-RADIX","Ross Ulbricht Genesis Collection","ParaSwap","CERE Network","Victoria VR","Boba Token","Ribbon","Gelato Network Token","decentral.games","ConstitutionDAO","Octopus Network Token","HEX","AtariToken","88mph.app","BOND","Yield","Juicebox","NAGA Coin","Lido DAO Token","Life Crypto","Anyswap","AnRKey X","MarsToken","Wrapped TONCoin", "MoonRabbit","LQTY","Boson Token","GP","PolkaBridge","VISOR","TRVL","Gold Fever Native Gold","DeFine Art","Clearpool","Vader","Cellframe Token","ARCx Governance Token","DAFI Token","UniBright","NFTrade Token","RAZOR","Quickswap","Skyrim Finance","Mimir Token","DAOstack","10Set Token","SAITO","Nectar","Deri","Olympus","Dogs Of Elon","VesperToken","Geeq","PYR Token","Nahmii","Biconomy Token","Student Coin","Creaticles","Polkamarkets","Spell Token","OnX.finance","Morpheus.Network","XY Token","Unit Protocol","Dusk Network","XDEFI","EQIFi Token","NUM Token","Coinweb","pTokens TLOS","ASSEMBLE","UniTrade","SPLYT SHOPX","GDT","XCAD Token","Synapse","Game Coin","YFLink","BTRST","Tiger King","SOS","Kishu Inu","TON Coin","DeRace Token","FLOKI","Saitama Inu","ImpactXP","HyperDao","MetaCat","VLaunch","Shiryo-Inu","Radio Caca V2","Strips Token","Merit Circle","Gas DAO","DotOracle","Eden","Pendle","Tempus","Gods Unchained","Phantasma Stake","Klever","BitDAO","MCDEX Token","Keanu Inu"]
for page in range(1, page_list + 1):
    r = requests.get(url=f'https://coinmarketcap.com/?page={page}', headers=headers)
    soup = BeautifulSoup(r.content, 'html.parser')
    find_coin_href = soup.findAll('div', class_='sc-16r8icm-0 escjiH')
    for coin_name in find_coin_href:
        coin_name = soup.findAll('p', class_='sc-1eb5slv-0 iworPT')
        for check_name in coin_name:
            check_name = check_name.text
            if check_name == coins_list:
                for links in find_coin_href:
                    for link in links.find_all('a', href=True):
                        main_link = baseurl + link['href']
                        coin_api_list.append(main_link)
    print(f'Processed {page}/{page_list}')

with open('apilinks2.text', 'w') as file:
    for url in coin_api_list + coin_api_list_not_in_div:
        name = url.split('/')[-2]
        file.write(f'https://api.coinmarketcap.com/data-api/v3/cryptocurrency/market-pairs/latest?slug={name}&start=1&limit=100&category=spot&sort=cmc_rank_advanced \n')
CodePudding user response:
There are two issues with your code:

- if check_name == coins_list: will always be false, since check_name is a string and coins_list is a list. You want if check_name in coins_list: instead.
- baseurl isn't defined in the code snippet. Change it to url.

Perform both of these changes, and you should have a nonempty output in your text file. The URLs in this file appear to be well-formed.
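For reference, here is a minimal sketch of the inner part of the loop with both fixes applied. It is not a complete program: it reuses the soup, coins_list, coin_api_list and url variables exactly as they appear in the question, and only shows the changed lines in context.

    find_coin_href = soup.findAll('div', class_='sc-16r8icm-0 escjiH')
    for coin_name in soup.findAll('p', class_='sc-1eb5slv-0 iworPT'):
        check_name = coin_name.text
        # 'in' tests whether this single name appears in the list;
        # '==' compared a string against the whole list and was always False
        if check_name in coins_list:
            for links in find_coin_href:
                for link in links.find_all('a', href=True):
                    # use the existing 'url' variable, since 'baseurl' is undefined
                    coin_api_list.append(url + link['href'])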