import time
import requests

run_time = 60
run_until = time.time() + run_time  # run for 60 seconds
while time.time() < run_until:
    if run_time % 5 == 0:
        url = 'URL'
        csv_file = open('cam_data1.csv', 'a')
        req = requests.get(url)
        data = req.json()
        csv_file.write(str(data))
        csv_file.close()
This is the code I wrote. It calls the URL and saves the output data to a CSV file every 5 seconds. The output data is JSON and looks like this:
{'blood_pressure_diastolic_value': 70.0, 'blood_pressure_systolic_value': 120.0, 'heart_rate_value': 120.0, 'respiratory_rate_value': 55.0, 'sat02_value': 95.0}
I have to compare the values with another dataset. The problem is that the code puts all the data in one cell, which makes working with it hard because tracing the timestamp for each reading is complicated.
I want each new output to be stored in a new row and each of the values to have its own column, so 5 columns and a new row for every output.
Can anybody help me with this or show me an alternative way to access the data as it is?
CodePudding user response:
Here is a solution:
import time
import requests

wait_seconds = 60
while True:
    url = 'URL'
    csv_file = open('cam_data1.csv', 'a')
    req = requests.get(url)
    data = req.json()
    csv_file.write(str(data))
    csv_file.close()
    time.sleep(wait_seconds)
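If you want the loop to stop after a fixed total run time, as in the original run_time = 60 setup, instead of running forever, a minimal sketch along the same lines could look like this; poll_interval and deadline are placeholder names I've chosen, not anything from the question:

import time
import requests

poll_interval = 5      # seconds between requests (assumed)
total_runtime = 60     # stop after one minute, like the original code
deadline = time.time() + total_runtime

while time.time() < deadline:
    data = requests.get('URL').json()
    with open('cam_data1.csv', 'a') as csv_file:
        csv_file.write(str(data) + "\n")  # still one dict per line, as in the question
    time.sleep(poll_interval)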
CodePudding user response:
A good way of doing it would be:
import time
import requests

wait_seconds = 60
while True:
    url = 'URL'
    # the context manager closes the file automatically
    with open('cam_data1.csv', 'a') as csv_file:
        req = requests.get(url)
        data = req.json()  # this returns a dictionary
        # we only take the values of the dict, not the keys (the titles of your columns)
        for value in data.values():
            csv_file.write(f"{value};")
        # IMPORTANT: if you want a 'classic' CSV, replace the semicolon with a comma
        csv_file.write("\n")  # this starts a new row
    time.sleep(wait_seconds)