Bs4 Python How to download .css files


Hello, I am trying to make a scraper that saves all .css files from a web page into a folder, but when I run my script I get this error:

    with open(href, 'wb') as f:
    FileNotFoundError: [Errno 2] No such file or directory: 'https://url.com/cache/themes/theme1/index.min.css'

And here is my code:

from bs4 import BeautifulSoup
import requests
import os

proxies = {
    'http': 'socks5h://127.0.0.1:9050',
    'https': 'socks5h://127.0.0.1:9050'
}

url = "https://url.com"
folder = "Files"
resp = requests.get(url, proxies=proxies)
soup = BeautifulSoup(resp.text, features='lxml')

def Downloader(url, folder):
    os.mkdir(os.path.join(os.getcwd(), folder))
    os.chdir(os.path.join(os.getcwd(), folder))   
    
    css = soup.find_all('link', rel="stylesheet")
    for link in css:
        href = link['href']
        if "http://" in href:
            with open(href, 'wb') as f:
                response = requests.get(href, proxies=proxies)
                f.write(response.content)

Downloader(url=url, folder=folder)

Does anyone know what the issue might be? Thank you <3

CodePudding user response:

You are trying to write to a file whose name contains `/`, which the filesystem interprets as a directory separator. So either remove/replace those characters, or build in the logic that creates the corresponding folder structure, and the write will succeed.

import requests
from bs4 import BeautifulSoup
import os

proxies = {
    'http': 'socks5h://127.0.0.1:9050',
    'https': 'socks5h://127.0.0.1:9050'
}

url = "https://url.com"
folder = "Files"
resp = requests.get(url, proxies=proxies)
soup = BeautifulSoup(resp.text, features='lxml')

def Downloader(url, folder):
    # Create the output folder if it does not already exist
    os.makedirs(folder, exist_ok=True)
    os.chdir(folder)
    
    css = soup.find_all('link', rel="stylesheet")
    for each in css:
        href = each['href']
        if "http://" in href or "https://" in href:
            filename = href.split('//')[-1].replace('/','_').replace('.','_')
            with open(filename, 'wb') as f:
                response = requests.get(href, proxies=proxies)
                f.write(response.content)

Downloader(url=url, folder=folder)
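The second option mentioned above, creating the folder structure instead of flattening the name, can be sketched like this. The `local_path` helper is hypothetical (not part of the original code); it assumes you want to mirror the URL's path under the download folder:

```python
import os
from urllib.parse import urlparse

def local_path(href, folder):
    """Map a stylesheet URL to a local path under `folder`,
    creating any intermediate directories as needed."""
    parts = urlparse(href)
    # e.g. https://url.com/cache/themes/theme1/index.min.css
    #   -> Files/url.com/cache/themes/theme1/index.min.css
    relative = parts.path.lstrip('/')
    full = os.path.join(folder, parts.netloc, *relative.split('/'))
    os.makedirs(os.path.dirname(full), exist_ok=True)
    return full
```

The download loop can then call `open(local_path(href, folder), 'wb')` directly, with no `os.chdir`, and the `.css` extension is preserved in the saved file.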