I have written this script, which retrieves the contents of a web page:
import requests
import bs4

with requests.session() as r:
    r = requests.get("https://www.example.com")
    response = r.text
    print(response)
However, I have a list of URLs in a text file. Is there any way I can pass the contents of this file to requests.get() instead of typing each URL manually?
CodePudding user response:
You can just use a loop. Assuming file.txt is your file:
import requests

with requests.session() as r:
    with open('file.txt') as f:
        for line in f:
            # strip the trailing newline, or the request URL will be invalid
            r = requests.get(line.strip())
            response = r.text
            print(response)
CodePudding user response:
Just put it all in a loop.
import requests
import bs4

text_file_name = "list_of_urls.txt"

with requests.session() as session:
    with open(text_file_name) as file:
        for line in file:
            url = line.strip()
            if url:
                resp = session.get(url)
                response = resp.text
                print(response)
Note: you weren't using the requests session object, so I fixed that.
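Since every approach in this thread starts by reading URLs from a text file, that step can be pulled into a small helper. A minimal sketch, assuming a hypothetical file name (the helper names are my own, not from the answers here):

    import requests

    def read_urls(path):
        """Return the non-blank lines of a text file, stripped of whitespace."""
        with open(path) as f:
            return [line.strip() for line in f if line.strip()]

    def fetch_all(path):
        """Fetch every URL listed in the file, reusing one session for all requests."""
        with requests.session() as session:
            for url in read_urls(path):
                resp = session.get(url)
                print(resp.text)

Reusing a single session lets requests keep the underlying TCP connection alive across URLs on the same host, which is noticeably faster than opening a fresh connection per request.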
CodePudding user response:
You can loop over all the URLs and execute a requests.get() for each one:
import requests
import bs4

with requests.session() as r:
    with open("urls.txt", "r") as f:
        urls = f.readlines()
    for url in urls:
        url = url.strip()  # drop the trailing newline
        r = requests.get(url)
        response = r.text
        print("Response for " + url)
        print(response)
CodePudding user response:
import requests

with open('myfile.txt', 'r') as file1:
    urls = file1.readlines()

for url in urls:
    r = requests.get(url.strip())
    response = r.text
    print(response)
This will print the text content of all the URLs.
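One thing to keep in mind: readlines() keeps the trailing newline on each line, so each URL needs to be stripped before it is passed to requests.get(). A variant of the file-reading step using pathlib, which also closes the file for you (the file name is only an example):

    from pathlib import Path

    def read_urls(path):
        """Return the non-blank lines of the file with surrounding whitespace removed."""
        return [line.strip() for line in Path(path).read_text().splitlines() if line.strip()]

    # urls = read_urls("myfile.txt")  # each entry can then go straight to requests.get()

Path.read_text() reads the whole file at once, which is fine for a list of URLs; for very large files the line-by-line loop from the earlier answers is preferable.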