Loop thats supposed to append data into separate text files for each API query I do, is duplicating

Time:03-09

I have a list of websites that I'm querying through an API. I also have a separate list that stores a shortened name for each website, which I want to use as the name of the text file the data from the API query gets appended to.

What I want is for each website's data to be appended to its own file: website1.txt and so on. Instead, the data from website1, website2, and website3 is getting appended to every file (website1.txt, website2.txt, etc.). All the text files end up with the same data, rather than each file getting its own site's data.

Here's my code:

list_of_websites = ['api.cats.com/animaldata', 'api.elephants.com/animaldata', 'api.dogs.com/animaldata']

name_of_websites = ['cats data', 'elephants data', 'dogs data']

for website in list_of_websites:

    counter = 1
    while True:
        response = requests.get(f"https://api.superComputer.info?page={counter}", headers={"Accept": "application/.v3 json", "Authorization": "Bearer 123456"}, params={"site": f"{website}"})

        if response.json():
            for site in name_of_websites:
                file_name = f"{site}.txt"

                f = open(file_name, "a")

                f.write(json.dumps(response.json(), indent=4))
            counter += 1

        else:
            break

CodePudding user response:

Since there is a 1-to-1 mapping between the website URLs and the website names, you can zip the two lists and iterate over the pairs.
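For example, with the two lists from the question, zip pairs the i-th URL with the i-th file name:

```python
list_of_websites = ['api.cats.com/animaldata', 'api.elephants.com/animaldata', 'api.dogs.com/animaldata']
name_of_websites = ['cats data', 'elephants data', 'dogs data']

# zip yields (url, name) tuples: one pair per website
for website, website_name in zip(list_of_websites, name_of_websites):
    print(website, '->', f"{website_name}.txt")
```

So inside the loop body, `website` and `website_name` always refer to the same site, which is what removes the need for the inner loop over all names.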
And since all the pages for one website are appended to a single file, you can open that file just once, outside the while loop.

import json
import requests

list_of_websites = ['api.cats.com/animaldata', 'api.elephants.com/animaldata', 'api.dogs.com/animaldata']
name_of_websites = ['cats data', 'elephants data', 'dogs data']

for website, website_name in zip(list_of_websites, name_of_websites):

    counter = 1
    file_name = f"{website_name}.txt"
    with open(file_name, 'a') as f:
        while True:
            response = requests.get("https://api.superComputer.info", headers={"Accept": "application/.v3 json", "Authorization": "Bearer 123456"}, params={"page": counter, "site": f"{website}"})
            if response.ok:
                f.write(json.dumps(response.json(), indent=4))
                counter += 1
            else:
                break
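One caveat worth noting: appending several `json.dumps(...)` outputs to one file produces a file that is not itself a single valid JSON document, so it can't be re-parsed with one `json.load` call. If the files need to be read back later, one common option is the JSON Lines convention: write each page as one JSON document per line (the sample `pages` data below is made up for illustration):

```python
import json

# stand-in for the per-page API responses
pages = [{"page": 1, "animals": ["cat"]},
         {"page": 2, "animals": ["lion"]}]

# JSON Lines: one document per line, so the file stays parseable line by line
with open("cats data.txt", "a") as f:
    for page in pages:
        f.write(json.dumps(page) + "\n")
```

Reading it back is then just `json.loads(line)` for each line of the file.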
