page.close() not working as expected in Playwright and asyncio


I have written a web scraper which needs to scrape a few hundred pages asynchronously with Playwright-Python after logging in. I came across aiometer by @Florimond Manca (https://github.com/florimondmanca/aiometer) to limit the request rate in the main async function - this works well.

The problem I'm having at the moment is closing the pages after they've been scraped. The async function keeps increasing the number of open pages - as it should - but memory consumption grows significantly once a few hundred are loaded. In the function I'm opening a single browser context and passing it to each async scraping request, the rationale being that this reduces memory overhead and preserves the state from my login function (implemented in my main script - not shown).

How can I close the pages after being scraped (in the scrape function)?
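For reference, the login happens in my main script before scraping starts. A minimal sketch of what that step looks like - the login URL and the #username/#password/#submit selectors below are placeholders, not my actual ones:

async def login(context, login_url, username, password):
    # Placeholder login flow - the real URL and selectors aren't shown here.
    page = await context.new_page()
    await page.goto(login_url)
    await page.fill("#username", username)
    await page.fill("#password", password)
    await page.click("#submit")
    await page.wait_for_load_state()
    # The session cookies are stored on the context, so every page
    # opened from this context afterwards is already logged in.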

import asyncio
import functools
from playwright.async_api import async_playwright
from bs4 import BeautifulSoup
import pandas as pd
import aiometer

urls = [
    "https://scrapethissite.com/pages/ajax-javascript/#2015",
    "https://scrapethissite.com/pages/ajax-javascript/#2014",
    "https://scrapethissite.com/pages/ajax-javascript/#2013",
    "https://scrapethissite.com/pages/ajax-javascript/#2012",
    "https://scrapethissite.com/pages/ajax-javascript/#2011",
    "https://scrapethissite.com/pages/ajax-javascript/#2010"
]

async def scrape(context, url):
    page = await context.new_page()
    await page.goto(url) 
    await page.wait_for_load_state(state="networkidle")
    await page.wait_for_timeout(1000)
    # Getting results off the page
    html = await page.content()
    soup = BeautifulSoup(html, "lxml")
    tables = soup.find_all('table')
    dfs = pd.read_html(str(tables))
    df = dfs[0]
    print(f"Dataframe in page {url} scraped")
    page.close
    return df


async def main(urls):
    async with async_playwright() as p:
        browser = await p.chromium.launch(headless=False)
        context = await browser.new_context()
        master_results = pd.DataFrame()
        async with aiometer.amap(
            functools.partial(scrape, context),
            urls,
            max_at_once=5, # Limit maximum number of concurrently running tasks.
            max_per_second=3,  # Limit request rate to not overload the server.
        ) as results:
            async for data in results:
                print(data)
                master_results = pd.concat([master_results,data], ignore_index=True)
        print(master_results)


asyncio.run(main(urls))

I've tried putting the await keyword before page.close() or context.close(), but it throws an error: "TypeError: object method can't be used in 'await' expression".

CodePudding user response:

After reading a few pages, including the Playwright bug tracker on GitHub (https://github.com/microsoft/playwright/issues/10476), I found the problem: I forgot the parentheses on my page.close call (and with them in place, await works as expected).

await page.close()
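That also explains the TypeError from the question: without the parentheses, await is applied to the bound method object itself instead of the coroutine it returns. Inside an async function:

await page.close    # TypeError: object method can't be used in 'await' expression
await page.close()  # correct - the call returns a coroutine, which can be awaited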

So simple - yet it took me hours to get there. Probably part of learning to code.
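One further improvement worth considering (not strictly needed for the fix): wrapping the scrape body in try/finally guarantees the page is closed even if navigation or parsing raises, so a failed URL can't leak an open page:

async def scrape(context, url):
    page = await context.new_page()
    try:
        await page.goto(url)
        await page.wait_for_load_state(state="networkidle")
        await page.wait_for_timeout(1000)
        html = await page.content()
        soup = BeautifulSoup(html, "lxml")
        tables = soup.find_all('table')
        df = pd.read_html(str(tables))[0]
        print(f"Dataframe in page {url} scraped")
        return df
    finally:
        # Runs whether scraping succeeded or raised, so the page
        # never stays open and memory stays bounded.
        await page.close()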
