Save a large file using the Python requests library

Disclaimer: this page is a bilingual (Chinese-English) translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you reuse or share it, you must do so under the same CC BY-SA terms and attribute it to the original authors (not me). Original: http://stackoverflow.com/questions/14114729/

Date: 2020-08-18 10:30:58 | Source: igfitidea


Tags: python, file, http, download, request

Asked by Matt Williamson

Possible Duplicate:
How to download image using requests


I know that fetching a URL is as simple as requests.get, and I can get at the raw response body and save it to a file, but for large files, is there a way to stream directly to a file? Like if I'm downloading a movie with it or something?


Accepted answer by Blender

Oddly enough, requests doesn't have anything simple for this. You'll have to iterate over the response and write those chunks to a file:


import requests

response = requests.get('http://www.example.com/image.jpg', stream=True)

# Throw an error for bad status codes
response.raise_for_status()

with open('output.jpg', 'wb') as handle:
    for block in response.iter_content(1024):
        handle.write(block)

I usually just use urllib.urlretrieve(). It works, but if you need to use a session or some sort of authentication, the above code works as well.

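The chunked-write pattern in the accepted answer can be exercised without any network access. Below is a minimal sketch, where FakeResponse is a hypothetical stand-in for a streamed requests.Response (only the iter_content method matters); the names save_stream and output.bin are likewise made up for illustration:

```python
import io

def save_stream(response, path, chunk_size=1024):
    """Write a streamed response to disk chunk by chunk,
    so memory use stays bounded by chunk_size."""
    with open(path, 'wb') as handle:
        for block in response.iter_content(chunk_size):
            handle.write(block)

class FakeResponse:
    """Hypothetical stand-in for a streamed requests.Response,
    so the pattern can be tested without a network call."""
    def __init__(self, payload):
        self._buf = io.BytesIO(payload)

    def iter_content(self, chunk_size):
        # Yield the payload in fixed-size blocks, like
        # requests.Response.iter_content does for a streamed body.
        while True:
            block = self._buf.read(chunk_size)
            if not block:
                break
            yield block

# 5000 bytes arrive as blocks of at most 1024 bytes each.
save_stream(FakeResponse(b'x' * 5000), 'output.bin', chunk_size=1024)
```

With a real download, you would pass the object returned by requests.get(url, stream=True) in place of FakeResponse.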