How can I skip empty values when iterating?


This code gets partway through before TypeError: 'NoneType' object is not iterable is thrown; many rows are written into my CSV file before the error appears. I believe it's due to one of the fieldnames not being present when iterating. Does anyone know what I can add to skip these fields and move on to the next item?

with open("urls.txt",'r') as urllist, open('data.csv','w') as outfile:
    writer = csv.DictWriter(outfile, fieldnames=["product_name","product_image","product_desc","product_company","product_country","product_type","product_abv","product_taste"],quoting=csv.QUOTE_ALL)
    writer.writeheader()
    for url in urllist.read().splitlines():
        data = scrape(url) 
        if data:
            for r in data['product']:
                writer.writerow(r)

CodePudding user response:

Perhaps try defaulting the product key?

if data:
    for r in data.get('product', []):   # missing key falls back to an empty list
        writer.writerow(r)
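One caveat, assuming scrape() can also return a dict where 'product' is explicitly None rather than absent: .get('product', []) would still hand None to the loop. An or fallback covers both cases:

if data:
    # `or []` handles both a missing 'product' key and an explicit None value
    for r in data.get('product') or []:
        writer.writerow(r)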

CodePudding user response:

Try this:

with open("urls.txt",'r') as urllist, open('data.csv','w') as outfile:
    writer = csv.DictWriter(outfile, fieldnames=["product_name","product_image","product_desc","product_company","product_country","product_type","product_abv","product_taste"],quoting=csv.QUOTE_ALL)
    writer.writeheader()
    for url in urllist.read().splitlines():
        if not url:
            continue
        data = scrape(url) 
        if data and data['product']:
            for r in data['product']:
                writer.writerow(r)
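If the problem is instead that individual product dicts are missing some of the fieldnames, csv.DictWriter can handle that itself via its restval and extrasaction parameters (a sketch of the writer setup only):

import csv

writer = csv.DictWriter(
    outfile,
    fieldnames=["product_name", "product_image", "product_desc",
                "product_company", "product_country", "product_type",
                "product_abv", "product_taste"],
    quoting=csv.QUOTE_ALL,
    restval='',              # written for any fieldname missing from a row dict
    extrasaction='ignore')   # drop keys not in fieldnames instead of raising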

CodePudding user response:

with open('urls.txt', 'r') as urllist:
    for line in urllist:            # iterating the file yields one line at a time
        if len(line.strip()) < 1:   # lines keep their trailing '\n', so strip first
            continue                # continue skips the line; pass would do nothing

Or maybe, instead of len(line):

if line.strip() == '':
    continue

Would this work in your case? I felt a bit ignorant the first time I learned that you don't necessarily have to call a read function on an opened file: iterating the file object yields its lines one at a time. Unless I'm getting mixed up and confusing that with HTML parsing and BeautifulSoup.
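Putting that together with the scraping loop (a minimal sketch, assuming the same scrape function and csv writer from the earlier answers):

with open('urls.txt', 'r') as urllist:
    for line in urllist:
        url = line.strip()          # drop the trailing newline and surrounding spaces
        if not url:
            continue                # skip blank lines
        data = scrape(url)
        if data:
            for r in data.get('product') or []:
                writer.writerow(r)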
