What is the best way to scrape multiple pages on a website that uses an offset rather than a page number to change the content?


I want to scrape this website: https://repositori.usu.ac.id/handle/123456789/165?offset=0
When the offset in the link is set to 0, the page shows the first 20 items (screenshot: "Website When Offset is 0"). When the offset is set to 1, the topmost item of the list is removed and the one below it shifts up (screenshot: "Website When Offset is 1").

The website only shows up to 20 items at a time, and each increment of the offset removes one item from the top and appends the next one at the bottom. Hence an offset of 20 is needed to change the list of content entirely, as the sketch below illustrates.
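
Concretely, with 20 items per page, consecutive offsets overlap in 19 items, and only offsets that are multiples of 20 give disjoint pages. A minimal sketch of that arithmetic (the index sets are hypothetical, purely to illustrate the overlap):

# Each offset shows 20 consecutive items; model a page as the set of item indices it displays
PAGE_SIZE = 20
def page(offset):
    return set(range(offset, offset + PAGE_SIZE))

print(len(page(0) & page(1)))   # 19 -> consecutive offsets share almost everything
print(len(page(0) & page(20)))  # 0  -> offsets 20 apart are disjoint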

I want to write a web-scraping program for this website, but I ran into difficulty when I needed to scrape more than 20 distinct items. Because an offset works differently from a page number, I always end up scraping the same items two or three times when scraping a range of offsets, which is not ideal.
This is my code. (Ideally the offset entered would be 0, but the script won't accept that and always starts at 1; I didn't know how to default the offset to 0. A possible workaround is sketched after the code.)

import requests
from bs4 import BeautifulSoup
import csv
import urllib3

# Suppress the warning triggered by verify=False
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)


fakdep = '165'
offset = input('Please enter number of offset:')
# NOTE: this builds e.g. ...?offset=10, and the loop below appends another
# digit to it, producing offsets like 101, 102, ... -- part of the problem
url = 'https://repositori.usu.ac.id/handle/123456789/{}?offset={}0'.format(fakdep, offset)
headers = {
    'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36'
    }

datas = []
count_offset = 0

for offset in range(1, 6):
    count_offset += 1
    print('Scraping Page:', count_offset)
    result = requests.get(url + str(offset), headers=headers, verify=False)
    soup = BeautifulSoup(result.text, 'html.parser')
    items = soup.find_all('li', 'ds-artifact-item')
    for it in items:
        author = it.find('span', 'author h4').text
        title = ''.join(it.find('a', href=True).text.strip().replace('\n', ' '))
        year = it.find('span', 'date').text
        abstract = ''.join(it.find('div', 'artifact-abstract').text.strip().replace('\n', ' '))
        link = it.find('a')['href']
        datas.append([author, title, year, abstract, "https://repositori.usu.ac.id" + link])

kepala = ['Author', 'Title', 'Year', 'Abstract', 'Link']
thewriter = csv.writer(open('results/{}_{}.csv'.format(fakdep, offset), 'w', newline=''), delimiter=';')
thewriter.writerow(kepala)
for d in datas:
    thewriter.writerow(d)
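
(On the default-offset issue mentioned above: one common Python idiom, not in the original script, is to fall back to 0 when the prompt is left empty:)

# Empty input ('' is falsy) falls back to 0; otherwise the typed number is used
offset = int(input('Please enter number of offset [0]: ') or 0)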

I have yet to find another way to fix the problem.
I would appreciate any kind of help.
Thanks!

CodePudding user response:

You can handle the pagination with the offset directly inside the for loop. Each page (i.e., each offset step) holds 20 items and there are 598 items in total, so the pagination logic is range(0, 598, 20). You can use a pandas DataFrame to store the data as CSV; it is the easiest and most robust way to save the data on your local system.

import requests
from bs4 import BeautifulSoup
import urllib3
import pandas as pd

# Suppress the warning triggered by verify=False
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

url = 'https://repositori.usu.ac.id/handle/123456789/165?offset={offset}'
headers = {
    'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36'
    }

datas = []

# Step through offsets 0, 20, 40, ..., 580: pages 20 apart never overlap
for offset in range(0, 598, 20):
    result = requests.get(url.format(offset=offset), headers=headers, verify=False)
    soup = BeautifulSoup(result.text, 'html.parser')
    items = soup.find_all('li', 'ds-artifact-item')
    for it in items:
        author = it.find('span', 'author h4').text
        title = ''.join(it.find('a', href=True).text.strip().replace('\n', ' '))
        year = it.find('span', 'date').text
        abstract = ''.join(it.find('div', 'artifact-abstract').text.strip().replace('\n', ' '))
        link = it.find('a')['href']
        datas.append([author, title, year, abstract, "https://repositori.usu.ac.id" + link])

kepala = ['Author', 'Title', 'Year', 'Abstract', 'Link']

df = pd.DataFrame(datas, columns=kepala)
df.to_csv('out.csv', index=False)

Output:

  Author  ...                                               Link
0     Angin, Tifany Perangin  ...  https://repositori.usu.ac.id/handle/123456789/...
1    Fandya, Rafif Akhmad Al  ...  https://repositori.usu.ac.id/handle/123456789/...
2         Rizkiansyah, Halqi  ...  https://repositori.usu.ac.id/handle/123456789/...
3      Sitompul, Ummi Balqis  ...  https://repositori.usu.ac.id/handle/123456789/...
4       Manalu, Hari Purnomo  ...  https://repositori.usu.ac.id/handle/123456789/...
..                       ...  ...                                                ...
593                 Caroline  ...  https://repositori.usu.ac.id/handle/123456789/...
594              Afif, Ridho  ...  https://repositori.usu.ac.id/handle/123456789/...
595    Putra, M.Ananda Rizki  ...  https://repositori.usu.ac.id/handle/123456789/...
596          Ignasius, Angga  ...  https://repositori.usu.ac.id/handle/123456789/...
597     Tarigan, Modalina Br  ...  https://repositori.usu.ac.id/handle/123456789/...

[598 rows x 5 columns]
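
If you'd rather not hardcode the total of 598 items, a variation is to keep stepping the offset by 20 until a page comes back empty. This is a sketch, assuming the site simply renders no ds-artifact-item entries once the offset runs past the end:

import requests
from bs4 import BeautifulSoup
import urllib3

urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

url = 'https://repositori.usu.ac.id/handle/123456789/165?offset={offset}'
offset = 0
links = []
while True:
    result = requests.get(url.format(offset=offset), verify=False)
    soup = BeautifulSoup(result.text, 'html.parser')
    items = soup.find_all('li', 'ds-artifact-item')
    if not items:   # nothing rendered: we've walked past the last page
        break
    for it in items:
        links.append(it.find('a')['href'])  # collect whatever fields you need here
    offset += 20    # jump a full page at a time to avoid the overlap problem

print(len(links))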