Python ConnectionResetError: An existing connection was forcibly closed by the remote host

Disclaimer: this page is a translation of a popular StackOverflow question and its answers, provided under the CC BY-SA 4.0 license. If you use or share it, you must do so under the same license and attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/41110531/

ConnectionResetError: An existing connection was forcibly closed by the remote host

Tags: python, python-3.x

Asked by Kenny

I'm working on a script to download a group of files. I completed this successfully and it's working decently. Now I have tried adding a dynamic printout of the download progress.

For small downloads (these are .mp4 files, by the way) such as 5MB, the progress works great and the file closes successfully, resulting in a complete, working downloaded .mp4 file. For larger files, such as 250MB and above, it does not work; I get the following error:

(Screenshot of the traceback, ending in: ConnectionResetError: An existing connection was forcibly closed by the remote host)

And here's my code:

import urllib.request
import shutil
import os
import sys
import io

script_dir = os.path.dirname('C:/Users/Kenny/Desktop/')
rel_path = 'stupid_folder/video.mp4'
abs_file_path = os.path.join(script_dir, rel_path)
url = 'https://archive.org/download/SF145/SF145_512kb.mp4'
# Download the file from `url` and save it locally under `file_name`:

with urllib.request.urlopen(url) as response, open(abs_file_path, 'wb') as out_file:

    eventID = 123456

    resp = urllib.request.urlopen(url)
    length = resp.getheader('content-length')
    if length:
        length = int(length)
        blocksize = max(4096, length//100)
    else:
        blocksize = 1000000 # just made something up

    # print(length, blocksize)

    buf = io.BytesIO()
    size = 0
    while True:
        buf1 = resp.read(blocksize)
        if not buf1:
            break
        buf.write(buf1)
        size += len(buf1)
        if length:
            print('\r[{:.1f}%] Downloading: {}'.format(size/length*100, eventID), end='')#print('\rDownloading: {:.1f}%'.format(size/length*100), end='')
    print()

    shutil.copyfileobj(response, out_file)

This works perfectly with small files, but with larger ones I get the error. However, I do NOT get the error with larger files if I comment out the progress indicator code:

with urllib.request.urlopen(url) as response, open(abs_file_path, 'wb') as out_file:

    # eventID = 123456
    # 
    # resp = urllib.request.urlopen(url)
    # length = resp.getheader('content-length')
    # if length:
    #     length = int(length)
    #     blocksize = max(4096, length//100)
    # else:
    #     blocksize = 1000000 # just made something up
    # 
    # # print(length, blocksize)
    # 
    # buf = io.BytesIO()
    # size = 0
    # while True:
    #     buf1 = resp.read(blocksize)
    #     if not buf1:
    #         break
    #     buf.write(buf1)
    #     size += len(buf1)
    #     if length:
    #         print('\r[{:.1f}%] Downloading: {}'.format(size/length*100, eventID), end='')#print('\rDownloading: {:.1f}%'.format(size/length*100), end='')
    # print()

    shutil.copyfileobj(response, out_file)

Does anyone have any ideas? This is the last part of my project and I would really like to be able to see the progress. Once again, this is Python 3.5. Thanks for any help provided!

Accepted answer by Jean-François Fabre

You're opening your url twice, once as response and once as resp. With your progress bar code, you're consuming the data, so when the file is copied using copyfileobj, the data is empty (well, maybe that is inaccurate as it works for small files, but you are doing things twice here and it is probably the origin of your problem).
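
As a minimal illustration of what those two opens do (my sketch, not part of the original answer; url is the question's variable):

response = urllib.request.urlopen(url)  # connection 1: opened by the with-statement
resp = urllib.request.urlopen(url)      # connection 2: the one the progress loop reads
# The progress loop drains connection 2 to the end while connection 1 sits
# idle. On a large, slow download the server may reset that idle connection,
# so the final shutil.copyfileobj(response, out_file) raises
# ConnectionResetError.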

To get a progress bar AND a valid file, do this:

with urllib.request.urlopen(url) as response, open(abs_file_path, 'wb') as out_file:

    eventID = 123456

    length = response.getheader('content-length')
    if length:
        length = int(length)
        blocksize = max(4096, length//100)
    else:
        blocksize = 1000000 # just made something up


    size = 0
    while True:
        buf1 = response.read(blocksize)
        if not buf1:
            break
        out_file.write(buf1)
        size += len(buf1)
        if length:
            print('\r[{:.1f}%] Downloading: {}'.format(size/length*100, eventID), end='')
    print()

Simplifications made to your code:

  • only one urlopen, as response
  • no BytesIO: write directly to out_file
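
As a side note, here's a minimal alternative sketch (mine, not from the answer) that gets a similar progress readout from the standard library's urllib.request.urlretrieve and its reporthook callback, reusing the question's url, path, and eventID values:

import urllib.request

url = 'https://archive.org/download/SF145/SF145_512kb.mp4'
abs_file_path = 'C:/Users/Kenny/Desktop/stupid_folder/video.mp4'  # folder must already exist
eventID = 123456

def reporthook(block_num, block_size, total_size):
    # urlretrieve calls this after each block; total_size is -1 when the
    # server sends no Content-Length header
    if total_size > 0:
        percent = min(block_num * block_size / total_size * 100, 100)
        print('\r[{:.1f}%] Downloading: {}'.format(percent, eventID), end='')

urllib.request.urlretrieve(url, abs_file_path, reporthook)
print()

The Python documentation describes urlretrieve as a legacy interface that may become deprecated in the future, so the explicit read/write loop in the accepted answer remains the more future-proof approach.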