python looping fast through links

import requests
import json
from tqdm import tqdm

List of links to loop through:

links =['https://www.google.com/','https://www.google.com/','https://www.google.com/']

For loop over the links using requests:

data = []
for link in tqdm(range(len(links))):
    response = requests.get(links[link])
    response = response.json()
    data.append(response)

The above for loop goes through the whole list of links, but it is very time-consuming when I try it on around a thousand links. Any help?

CodePudding user response:

As suggested in the comments, you can use asyncio and aiohttp.

import asyncio
import aiohttp

links = ["your", "links", "here"]

# create aio connector
conn = aiohttp.TCPConnector(limit_per_host=100, limit=0, ttl_dns_cache=300)

# set number of parallel requests - if you are requesting different domains you are likely to be able to set this higher, otherwise you may be rate limited
PARALLEL_REQUESTS = 10

# Create results array to collect results
results = []

async def gather_with_concurrency(n):
    # Create semaphore for async i/o  
    semaphore = asyncio.Semaphore(n)

    # create an aiohttp session using the previous connector
    session = aiohttp.ClientSession(connector=conn)

    # await logic for get request
    async def get(url):
        async with semaphore:
            async with session.get(url, ssl=False) as response:
                obj = await response.read()
                # once object is acquired we append to list
                results.append(obj)
    # wait for all requests to be gathered and then close session
    await asyncio.gather(*(get(url) for url in links))
    await session.close()

# get async event loop
loop = asyncio.get_event_loop()
# run using number of parallel requests
loop.run_until_complete(gather_with_concurrency(PARALLEL_REQUESTS))
# Close connection
conn.close()

# loop through results and do something to them
for res in results:
    do_something(res)
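
As a side note, on Python 3.10+ calling asyncio.get_event_loop() outside a running loop emits a DeprecationWarning. Below is a minimal sketch of the same idea restructured around asyncio.run(); the name fetch_all is just a placeholder, and links / PARALLEL_REQUESTS mirror the snippet above.

import asyncio
import aiohttp

links = ["your", "links", "here"]
PARALLEL_REQUESTS = 10

async def fetch_all(urls):
    # limit the number of requests in flight at once
    semaphore = asyncio.Semaphore(PARALLEL_REQUESTS)
    conn = aiohttp.TCPConnector(limit_per_host=100, limit=0, ttl_dns_cache=300)
    # the session context manager closes both the session and the connector
    async with aiohttp.ClientSession(connector=conn) as session:
        async def get(url):
            async with semaphore:
                async with session.get(url, ssl=False) as response:
                    return await response.read()
        # gather preserves the order of urls in the returned list
        return await asyncio.gather(*(get(url) for url in urls))

results = asyncio.run(fetch_all(links))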

I have tried to comment the code as thoroughly as possible.

I have used BS4 (BeautifulSoup) to parse responses fetched in this manner (in the do_something logic), but that will really depend on your use case.
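
For example, a do_something along these lines is a rough sketch of what that parsing step could look like; the title extraction is only illustrative and should be adapted to whatever you actually need from each page.

from bs4 import BeautifulSoup

def do_something(raw_bytes):
    # parse the raw response body returned by response.read()
    soup = BeautifulSoup(raw_bytes, "html.parser")
    # illustrative only: pull the page title, if any
    title = soup.title.string if soup.title else None
    print(title)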

CodePudding user response:

You can iterate over links directly, so you can use link instead of links[link]. To avoid requesting duplicated links, you can use a set (so you don't make the same request twice); see the sketch after the loop below.

data = []
for link in tqdm(links):
    response = requests.get(link)
    response = response.json()
    data.append(response)
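
A minimal sketch of the dedup idea mentioned above: drop the duplicates once before looping, so each unique link is fetched only one time (dict.fromkeys is just one way to dedup while keeping the original order).

# drop duplicates while preserving the original order
unique_links = list(dict.fromkeys(links))

data = []
for link in tqdm(unique_links):
    response = requests.get(link)
    data.append(response.json())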