import requests as r
from bs4 import BeautifulSoup as bs
url = r.get("https://en.wikipedia.org/wiki/Wikipedia#Nupedia")
soup = bs(url.text, 'html.parser')
print(soup)
product = soup.find('div', class_="mw-parser-output")
head = product.find('span', id="Nupedia").text
para = product.find_all('p', class_=False)
print(para)
It's not working.
CodePudding user response:
Question is not quite clear - to get only the next <p> you could go with:
product.find('span',id="Nupedia").find_next('p').text
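For context, a minimal self-contained version of that one-liner (a sketch, assuming the section anchor is still rendered as a <span id="Nupedia"> inside the heading, as in the question's code):

import requests as r
from bs4 import BeautifulSoup as bs

resp = r.get("https://en.wikipedia.org/wiki/Wikipedia#Nupedia")
soup = bs(resp.text, 'html.parser')
# find_next() walks forward through the whole document from the anchor,
# so this grabs the first <p> that appears after the "Nupedia" heading
print(soup.find('span', id="Nupedia").find_next('p').text)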
If you'd like to extract the headlines and their corresponding <p>s, you can go with something like this:
for h in soup.select('h3'):
    print(h.text)
    for t in h.next_siblings:
        if t.name == 'h3':
            break
        if t.name == 'p':
            print(t.text)
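h.next_siblings iterates over everything that follows the heading at the same level of the tree, so the inner loop collects every <p> until it reaches the next <h3> and breaks - that way each headline is printed together with only its own paragraphs.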
CodePudding user response:
To get the text from the <p> tag, you can use .find_next_sibling('p'):
import requests as r
from bs4 import BeautifulSoup as bs
url = r.get("https://en.wikipedia.org/wiki/Wikipedia#Nupedia")
soup = bs(url.text, 'html.parser')
#print(soup)
product = soup.find('div', class_="mw-parser-output")
head = product.find('span', id="Nupedia")
# the section's first paragraph sits right after the right-floated thumbnail div
para = product.find('div', class_="thumb tright").find_next_sibling('p').get_text(strip=True)
print(para)
Output:
Other collaborative online encyclopedias were attempted before Wikipedia, but none were as successful.[17]Wikipedia began as a complementary project forNupedia, a free online English-language encyclopedia project whose articles were written by experts and reviewed under a formal process.[18]It was founded on March 9, 2000, under the ownership ofBomis, aweb portalcompany. Its main figures were Bomis CEOJimmy WalesandLarry Sanger,editor-in-chieffor Nupedia and later Wikipedia.[1][19]Nupedia was initially licensed under its own NupediaOpen ContentLicense, but even before Wikipedia was founded, Nupedia switched to theGNU Free Documentation Licenseat the urging ofRichard Stallman.[20]Wales is credited with defining the goal of making a publicly editable encyclopedia,[21][22]while Sanger is credited with the strategy of using awikito reach that goal.[23]On January 10, 2001, Sanger proposed on the Nupedia mailing list to create a wiki as a "feeder" project for Nupedia.[24]
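Note the missing spaces around the linked names in the output (e.g. forNupedia, ofBomis): get_text(strip=True) strips the whitespace from every text node before joining them, which squashes the words around inline <a> tags. If you want readable spacing, pass a separator as the first argument:

para = product.find('div', class_="thumb tright").find_next_sibling('p').get_text(' ', strip=True)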