Looping until max results


I'm pretty new to web scraping but enjoying it so far, so I thought I'd test myself!

I've written this script to scrape this website, but I'm wondering if there's a way of making it more efficient. At the moment I've had to set the max page to 87, as this is the last page that guitars appear on. However, amps only have 15 pages of results, but I'm still looping through all 87. Any ideas appreciated!

import pandas as pd
import requests
from bs4 import BeautifulSoup

guitar_products = []
n = 88

#ELECTRIC GUITAR DATA

for category in ['guitars/electric/','guitars/bass/','amps/','guitars/acoustic/','pedals/']:
    for x in range(1,n):
        url = "https://www.guitarguitar.co.uk/"   category   "page-"   str(x)
        
        print(url)
        
        page = requests.get(url)
        
        soup = BeautifulSoup(page.content, 'html.parser')
        
        products = [product.text.strip() for product in soup.findAll('h3', {'class': 'qa-product-list-item-title'})]
        prices = [price.text.strip()[:-1] for price in soup.findAll('span', {'class': 'js-pounds'})]
        avails = [avail.text.strip() for avail in soup.findAll('div', {'class': 'availability'})]
        
        for index in range(0, len(products)):
            guitar_products.append({
                'product': products[index],
                'price': prices[index],
                'avail': avails[index]
            })

guitar_data = pd.DataFrame(guitar_products)

guitar_data['price'] = pd.to_numeric(guitar_data['price'].str.replace(r'[^\d.]', '', regex=True))

Thanks

CodePudding user response:

Try the following approach:

import pandas as pd
import requests
from bs4 import BeautifulSoup

guitar_products = []

#ELECTRIC GUITAR DATA

for category in ['guitars/electric/', 'guitars/bass/', 'amps/', 'guitars/acoustic/', 'pedals/']:
    page_number = 1
    
    while True:
        url = f"https://www.guitarguitar.co.uk/{category}page-{page_number}"
        print(url)
        page_number += 1
        
        req = requests.get(url)
        soup = BeautifulSoup(req.content, 'html.parser')
        
        for div_product in soup.find_all('div', class_="product-inner"):
            product = div_product.find('h3', {'class': 'qa-product-list-item-title'}).get_text(strip=True)
            price = div_product.find('span', {'class': 'js-pounds'}).get_text(strip=True)
            avail = div_product.find('div', {'class': 'availability'}).get_text(strip=True)

            guitar_products.append({'product' : product, 'price' : price, 'avail' : avail})
        
        # Is there a next button?
        if not soup.find('a', class_="next-page-button"):
            print("No more")
            break

guitar_data = pd.DataFrame(guitar_products)
guitar_data['price'] = pd.to_numeric(guitar_data['price'].str.replace(r'[^\d.]', '', regex=True))

Improvements:

  1. It looks for the Next button on each page; when there isn't one, it moves on to the next category.
  2. It locates the <div> holding each product and then uses a single find per product detail. This avoids building multiple parallel lists and joining them by index (see the defensive variant sketched after this list).
  3. It builds the URL using a Python f-string.
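One caveat: find() returns None when an element is missing, so a product card without a price or availability div would raise an AttributeError. A minimal defensive variant of the extraction loop (same class names as above; the guard is my addition, not something this site is known to require):

for div_product in soup.find_all('div', class_="product-inner"):
    title_tag = div_product.find('h3', {'class': 'qa-product-list-item-title'})
    price_tag = div_product.find('span', {'class': 'js-pounds'})
    avail_tag = div_product.find('div', {'class': 'availability'})

    # Skip incomplete product cards rather than crashing on None
    if not (title_tag and price_tag and avail_tag):
        continue

    guitar_products.append({
        'product': title_tag.get_text(strip=True),
        'price': price_tag.get_text(strip=True),
        'avail': avail_tag.get_text(strip=True)
    })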

CodePudding user response:

You can check the H1:

    soup = BeautifulSoup(page.content, 'html.parser')

    if soup.find('h1').contents[0] == 'Page Not Found':
        break

or change the loop from for to while:

is_page = True
x = 0
while is_page:
    x = x + 1
    . . .

    if soup.find('h1').contents[0] == 'Page Not Found':
        is_page = False
        break
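
Putting that together, a minimal runnable sketch of the while-loop version for a single category (the 'Page Not Found' heading is this answer's assumption about the site; the None guard is mine):

import requests
from bs4 import BeautifulSoup

category = 'amps/'
x = 0

while True:
    x = x + 1
    url = f"https://www.guitarguitar.co.uk/{category}page-{x}"
    page = requests.get(url)
    soup = BeautifulSoup(page.content, 'html.parser')

    h1 = soup.find('h1')
    # Stop when the site serves its "Page Not Found" heading (or has no <h1> at all)
    if h1 is None or h1.get_text(strip=True) == 'Page Not Found':
        break

    # ... scrape the products on this page ...

Note that the break alone already ends the loop, so the is_page flag is redundant; while True suffices.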

CodePudding user response:

This is probably not the most elegant solution, but it is functional and straightforward: an infinite loop that ends when no products are found.

import pandas as pd
import requests
from bs4 import BeautifulSoup

guitar_products = []
n = 1

# ELECTRIC GUITAR DATA

for category in ['guitars/electric/', 'guitars/bass/', 'amps/', 'guitars/acoustic/', 'pedals/']:
    while True:
        url = "https://www.guitarguitar.co.uk/"   category   "page-"   str(n)

        print(url)

        page = requests.get(url)

        soup = BeautifulSoup(page.content, 'html.parser')

        products = [product.text.strip() for product in soup.findAll('h3', {'class': 'qa-product-list-item-title'})]
        prices = [price.text.strip()[:-1] for price in soup.findAll('span', {'class': 'js-pounds'})]
        avails = [avail.text.strip() for avail in soup.findAll('div', {'class': 'availability'})]

        for index in range(0, len(products)):
            guitar_products.append({
                'product': products[index],
                'price': prices[index],
                'avail': avails[index]
            })

        if len(products) == 0:
            n = 1
            break
        else:
            n += 1

guitar_data = pd.DataFrame(guitar_products)

guitar_data['price'] = pd.to_numeric(guitar_data['price'].str.replace(r'[^\d.]', '', regex=True))
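
As a quick sanity check of the price-cleaning step (the sample values are made up for illustration), the regex strips everything except digits and the decimal point before the numeric conversion:

import pandas as pd

sample = pd.Series(['£1,299.00', '£549.99'])
print(pd.to_numeric(sample.str.replace(r'[^\d.]', '', regex=True)))
# 0    1299.00
# 1     549.99
# dtype: float64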