I am using the Python requests library to download data from a web server, e.g.:
import requests
fname = 'guppy-0.1.10.tar.gz'
url = 'https://pypi.python.org/packages/source/g/guppy/' + fname
r = requests.get(url)
with open(fname, 'wb') as f:
    f.write(r.content)
This works as expected.
But now I need to download (and save) data from a web server that streams data continuously.
Is there a callback mechanism in the requests API that lets me do this?
CodePudding user response:
When downloading files, you should enable the stream option:
requests.get(url, stream=True)
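With stream=True, requests fetches only the headers up front and downloads the body as you iterate over it with iter_content, so you can write a large or continuously streaming response to disk chunk by chunk. A minimal sketch (the helper name download_stream and the chunk size are my own choices, not part of the question):

```python
import requests

def download_stream(url, fname, chunk_size=8192):
    """Save a (possibly large or continuously streaming) HTTP response
    to a file chunk by chunk, without buffering it all in memory."""
    # stream=True defers downloading the body until we iterate over it
    with requests.get(url, stream=True) as r:
        r.raise_for_status()
        with open(fname, 'wb') as f:
            for chunk in r.iter_content(chunk_size=chunk_size):
                if chunk:  # skip keep-alive chunks
                    f.write(chunk)
```

Because the loop runs once per chunk as data arrives, the loop body effectively plays the role of the callback you asked about.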
Of course, this has limitations in some cases:
- The file is large
- The connection is slow
In these cases the request can take a long time, and if you pass a timeout argument, requests will raise a timeout exception when the server is too slow. If you face such an issue, catch the exception and retry or increase the timeout.
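One way to handle that is to pass an explicit timeout and catch requests.exceptions.Timeout. A sketch under my own assumptions (the helper name download_with_timeout and the timeout values are illustrative; requests has no timeout by default and would otherwise wait indefinitely):

```python
import requests

def download_with_timeout(url, fname, timeout=(5, 30)):
    """Download url to fname with an explicit timeout: 5 s to connect,
    30 s per read. Returns True on success, False on timeout."""
    try:
        with requests.get(url, stream=True, timeout=timeout) as r:
            r.raise_for_status()
            with open(fname, 'wb') as f:
                for chunk in r.iter_content(chunk_size=8192):
                    f.write(chunk)
        return True
    except requests.exceptions.Timeout:
        return False
```

Note that the read timeout applies per chunk received, not to the whole download, so a slow but steadily streaming server will not trip it.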