How to keep iterating through next pages in Python using BeautifulSoup


The following code parses the first page. It collects all the article links, but the results also include a link to the next page.

Looking at the structure of this website, the link to the next page is something like https://slow-communication.jp/news/?pg=2.

from bs4 import BeautifulSoup
from urllib.request import Request, urlopen

main_url = 'https://slow-communication.jp'
req = Request(main_url, headers={'User-Agent': 'Mozilla/5.0'})
webpage = urlopen(req).read()

soup = BeautifulSoup(webpage, "lxml")

# print every link that points somewhere under /news/
for link in soup.find_all('a'):
    _link = str(link.get('href'))
    if '/news/' in _link:
        article_id = _link.split("/news/")[-1]
        if len(article_id) > 0:
            print(_link)

Using this code, I get

https://slow-communication.jp/news/3589/
https://slow-communication.jp/news/3575/
https://slow-communication.jp/news/3546/
https://slow-communication.jp/news/?pg=2

But what I would like to do is keep every article link and then follow the next-page links. So I would keep

https://slow-communication.jp/news/3589/
https://slow-communication.jp/news/3575/
https://slow-communication.jp/news/3546/

and then go to https://slow-communication.jp/news/?pg=2 and keep doing the same thing until the website has no more next pages.

How do I do that?

CodePudding user response:

You can paginate with a for loop over range(), building each page URL with the format method. You can increase or decrease the page range to whatever you need.

from bs4 import BeautifulSoup
from urllib.request import Request, urlopen

# the ?pg= query parameter selects a page of the news listing
main_url = 'https://slow-communication.jp/news/?pg={page}'

for page in range(1, 11):
    req = Request(main_url.format(page=page), headers={'User-Agent': 'Mozilla/5.0'})
    webpage = urlopen(req).read()

    soup = BeautifulSoup(webpage, "lxml")

    for link in soup.find_all('a'):
        _link = str(link.get('href'))
        if '/news/' in _link:
            article_id = _link.split("/news/")[-1]
            if len(article_id) > 0:
                print(_link)
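
A fixed range(1, 11) assumes you already know how many pages exist. A minimal sketch of a variant that stops on its own, assuming the ?pg= scheme holds and that article URLs end in a numeric id like /news/3589/:

import re
from bs4 import BeautifulSoup
from urllib.request import Request, urlopen

main_url = 'https://slow-communication.jp/news/?pg={page}'
page = 1
while True:
    req = Request(main_url.format(page=page), headers={'User-Agent': 'Mozilla/5.0'})
    soup = BeautifulSoup(urlopen(req).read(), "lxml")
    # keep only numeric article links such as /news/3589/
    articles = [a.get('href') for a in soup.find_all('a')
                if a.get('href') and re.search(r'/news/\d+/$', a.get('href'))]
    if not articles:
        break  # a page with no article links: assume we ran past the last page
    print(*articles, sep='\n')
    page += 1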

CodePudding user response:

You can set the number of pages you want to scrape. If there isn't a next page, it will return all the news articles it found.

import requests
from bs4 import BeautifulSoup

LINK = "https://slow-communication.jp/news/"

def get_news(link, pages=1, news=None):
    # avoid a mutable default argument so repeated calls don't share state
    if news is None:
        news = []
    if pages == 0:
        return news

    res = requests.get(link, headers={'User-Agent': 'Mozilla/5.0'})
    if res.status_code == 200:
        print("getting posts from", link)
        posts, link = extract_news_and_link(res.text)
        news.extend(posts)
        if link:
            return get_news(link, pages - 1, news)
        return news
    else:
        print("error getting news")
        return news

def extract_news_and_link(html):
    soup = BeautifulSoup(html, "html.parser")
    news = [post.get("href") for post in soup.select(".post-arc")]
    # the next-page link may be absent on the last page
    next_tag = soup.select_one("main > a")
    link = next_tag.get("href") if next_tag else None
    return news, link
    

def main():
    news = get_news(LINK, 10)
    print("Posts:")
    for post in news:
        print(post)

if __name__ == "__main__":
    main()
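
Note that get_news recurses once per page, so a very large page count would eventually hit Python's default recursion limit (about 1000 frames). For a site of this size that is not a concern, but an iterative loop like the one in the next answer avoids it entirely.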

CodePudding user response:

You could use a while loop that moves to each next page and breaks if there is no more next page available:

while True:
    req = Request(url, headers={'User-Agent': 'Mozilla/5.0'})
    webpage = urlopen(req).read()
    soup = BeautifulSoup(webpage, "lxml")

    ### perform some action

    # follow the next-page link if there is one, otherwise stop
    next_page = soup.select_one('a[href*="?pg="]')
    if next_page:
        url = next_page['href']
        print(url)
    else:
        break

You could also collect some data and store it in a structured way in a global list:

for a in soup.select('a.post-arc'):
    data.append({
        'title': a.h2.text,
        'url': a['href']
    })

Example

from bs4 import BeautifulSoup
from urllib.request import Request, urlopen
import pandas as pd

main_url = 'https://slow-communication.jp'
url = main_url

data = []

while True:
    req = Request(url, headers={'User-Agent': 'Mozilla/5.0'})
    webpage = urlopen(req).read()
    soup = BeautifulSoup(webpage, "lxml")

    # collect title and url of every article teaser on the page
    for a in soup.select('a.post-arc'):
        data.append({
            'title': a.h2.text,
            'url': a['href']
        })

    # follow the next-page link if there is one, otherwise stop
    next_page = soup.select_one('a[href*="?pg="]')
    if next_page:
        url = next_page['href']
        print(url)
    else:
        break

pd.DataFrame(data)

Output

title url
0 都立高校から「ブラック校則」が なくなる https://slow-communication.jp/news/3589/
1 北京パラリンピックが おわった https://slow-communication.jp/news/3575/
2 「優生保護法で手術された人に 国は おわびのお金を払え」という判決が出た https://slow-communication.jp/news/3546/
3 ロシアが ウクライナを 攻撃している https://slow-communication.jp/news/3535/
4 東京都が「同性パートナーシップ制度」を作る https://slow-communication.jp/news/3517/
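
If you want to keep the results, a small follow-up writes the collected table to a CSV file (the output filename here is just a hypothetical choice):

pd.DataFrame(data).to_csv('slow_communication_news.csv', index=False)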