Broken pipe when pushing to git repository

Note: the content below is taken from a popular StackOverflow question and is provided under the CC BY-SA 4.0 license. If you use or share it, you must do so under the same license and attribute it to the original authors (not me): StackOverflow
Original question: http://stackoverflow.com/questions/19120120/
Asked by Deadlock
I'm trying to push code to my git repository for the first time, but I get the following error:
Counting objects: 222026, done.
Compressing objects: 100% (208850/208850), done.
Write failed: Broken pipe222026)
error: pack-objects died of signal 13
fatal: The remote end hung up unexpectedly
error: failed to push some refs to 'ssh://[email protected]/<...>'
I tried to increase the HTTP buffer size (git config http.postBuffer 524288000), and I tried git repack, but neither worked.
I was able to push code of a very similar size to another repository (at first it was not working either, but after a git repack it did work). I'm trying to push this one to Bitbucket.
Any ideas?
Answered by Milan Saha
Simple solution is to increase the HTTP post buffer size to allow for larger chunks to be pushed up to the remote repo. To do that, simply type:
git config http.postBuffer 52428800
The number is in bytes, so in this case I have set it to 50MB. The default is 1MB.
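For reference, a minimal sketch of applying the same setting globally (for every repository on the machine) and then verifying it; the 50 MB value is just the example from above:

git config --global http.postBuffer 52428800
git config --global --get http.postBuffer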
Answered by TsTiX
I had that issue when working with an Arch distro on VMware.
Adding

IPQoS=throughput

to my ssh config (~/.ssh/config) did the trick for me.
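For context, a minimal sketch of what such an entry might look like; the Host * pattern is my assumption, and you could scope it to just your Git host instead:

# ~/.ssh/config
Host *
    IPQoS throughput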
Answered by Joe
I had the same problem, and this worked for me:
git gc --aggressive --prune
It took a while, but after it was done all git operations started working faster.
The push operation that previously failed then succeeded, probably because it became fast enough to avoid some timeout related issue.
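As an optional check (not part of the original answer), you can compare the repository size before and after the cleanup:

git count-objects -vH
git gc --aggressive --prune
git count-objects -vH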
Answered by Kasey
Because I haven't seen this answer yet: change your Wi-Fi network. Mine was blocking me and gave me the broken pipe error. After using my iPhone as a hotspot, it worked perfectly!
Answered by VonC
Note that a push can still freeze (even with postBuffer increased) when its pack files are corrupted (i.e., pack-objects fails).
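As a hedged aside (not part of the original answer), this kind of local corruption can often be spotted before pushing with:

git fsck --full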
That will be fixed in Git 2.9 (June 2016), and it is better managed with Git 2.25 (Q1 2020).
See commit c4b2751, commit df85757, commit 3e8b06d, commit c792d7b, commit 739cf49 (19 Apr 2016) by Jeff King (peff).
(Merged by Junio C Hamano -- gitster -- in commit d689301, 29 Apr 2016)
"git push" from a corrupt repository that attempts to push a large number of refs deadlocked; the thread to relay rejection notices for these ref updates blocked on writing them to the main thread, after the main thread at the receiving end notices that the push failed and decides not to read these notices and return a failure.
Commit 739cf49 has all the details.
send-pack: close demux pipe before finishing async process

This fixes a deadlock on the client side when pushing a large number of refs from a corrupted repo.
With Git 2.25 (Q1 2020), error handling after "git push" finishes sending the pack data and waits for the response from the remote side has been improved.
See commit ad7a403 (13 Nov 2019) by Jeff King (peff).
(Merged by Junio C Hamano -- gitster -- in commit 3ae8def, 01 Dec 2019)
send-pack: check remote ref status on pack-objects failure

Helped-by: SZEDER Gábor
Signed-off-by: Jeff King

When we're pushing a pack and our local pack-objects fails, we enter an error code path that does a few things:

- Set the status of every ref to REF_STATUS_NONE
- Call receive_unpack_status() to try to get an error report from the other side
- Return an error to the caller

If pack-objects failed because the connection to the server dropped, there's not much more we can do than report the hangup. And indeed, step 2 will try to read a packet from the other side, which will die() in the packet-reading code with "the remote end hung up unexpectedly".

But if the connection didn't die, then the most common issue is that the remote index-pack or unpack-objects complained about our pack (we could also have a local pack-objects error, but this ends up being the same; we'd send an incomplete pack and the remote side would complain).

In that case we do report the error from the other side (because of step 2), but we fail to say anything further about the refs. The issue is two-fold:

- In step 1, the "NONE" status is not "we saw an error, so we have no status". It generally means "this ref did not match our refspecs, so we didn't try to push it". So when we print out the push status table, we won't mention any refs at all! But even if we had a status enum for "pack-objects error", we wouldn't want to blindly set it for every ref. For example, in a non-atomic push we might have rejected some refs already on the client side (e.g., REF_STATUS_REJECT_NODELETE) and we'd want to report that.
- In step 2, we read just the unpack status. But receive-pack will also tell us about each ref (usually that it rejected them because of the unpacker error).

So a much better strategy is to leave the ref status fields as they are (usually EXPECTING_REPORT) and then actually receive (and print) the full per-ref status.

This case is actually covered in the test suite, as t5504.8, which writes a pack that will be rejected by the remote unpack-objects. But it's racy. Because our pack is small, most of the time pack-objects manages to write the whole thing before the remote rejects it, and so it returns success and we print out the errors from the remote. But very occasionally (or when run under --stress), it goes slow enough to see a failure in writing, and git push reports nothing for the refs. With this patch, the test should behave consistently.

There shouldn't be any downside to this approach:

- If we really did see the connection drop, we'd already die in receive_unpack_status(), and we'll continue to do so.
- If the connection drops after we get the unpack status but before we see any ref status, we'll still print the remote unpacker error, but will now say "remote end hung up" instead of returning the error up the call-stack. But as discussed, we weren't showing anything more useful than that with the current code. And anyway, that case is quite unlikely (the connection dropping at that point would have to be unrelated to the pack-objects error, because of the ordering of events).

In the future we might want to handle packet-read errors ourself instead of dying, which would print a full ref status table even for hangups. But in the meantime, this patch should be a strict improvement.
Answered by seki
I ran into the same problem when uploading gigabytes of data to a GitHub repository. Increasing the HTTP buffer size did not work for this amount of data. I am not sure whether it is a problem with git itself or with the GitHub server. Anyway, I made a shell script to handle this problem: it uploads the files in the current directory step by step, pushing less than 100 MB of data in each step. It works fine for me. It takes time, but I can just detach the screen session and wait overnight.
Here is the shell script: https://gist.github.com/sekika/570495bd0627acff6c836de18e78f6fd
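For illustration, a minimal sketch of the same chunked-push idea (this is not the linked gist; the 100 MB threshold, the commit message, and the assumption that the files to upload are untracked files in the current repository are all mine):

#!/bin/bash
# Stage untracked files until roughly 100 MB is accumulated, then commit and
# push that batch; repeat until everything has been uploaded.
limit=$((100 * 1024 * 1024))   # bytes per batch
batch=0
git ls-files --others --exclude-standard -z | while IFS= read -r -d '' f; do
    size=$(wc -c < "$f")
    if [ "$batch" -gt 0 ] && [ $((batch + size)) -gt "$limit" ]; then
        git commit -m "partial upload" && git push
        batch=0
    fi
    git add -- "$f"
    batch=$((batch + size))
done
# commit and push whatever remains staged from the last batch
git commit -m "partial upload" && git push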