Git pull fatal: Out of memory, malloc failed
Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. You are free to use and share it, but you must keep the same license, link the original question, and attribute it to the original authors (not me): StackOverflow.
Original question: http://stackoverflow.com/questions/14038074/
Asked by Elmor
I have a repo on https://bitbucket.org/
A few days ago a large number of image files were pushed to the repo by mistake; the files were then deleted in another push. After that the repo worked fine, but today when I try to pull from it:
$ git pull
Password for 'https://[email protected]':
warning: no common commits
remote: Counting objects: 4635, done.
remote: Compressing objects: 100% (1710/1710), done.
fatal: Out of memory, malloc failed (tried to allocate 4266852665 bytes)
fatal: index-pack failed
I've tried:
1) git config --global pack.windowMemory 1024m
2) Checking the object counts:
$ git count-objects -v
count: 9
size: 48
in-pack: 4504
packs: 1
size-pack: 106822
prune-packable: 0
garbage: 0
No luck there, and I'm not sure what to do next.
The repo should only be around 10-20 MB of code. What actions should I take next?
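For reference, pack.windowMemory is only one of several settings that bound Git's memory use while packing; a hedged sketch of the related knobs follows (the values are illustrative, not recommendations), though note that they mainly affect repacking and may not rescue a pull where a single object is itself larger than the memory available:
# limit memory used while delta-compressing / repacking (example values)
git config --global pack.windowMemory "256m"
git config --global pack.packSizeLimit "256m"
git config --global pack.threads "1"
# treat very large blobs as undeltifiable so they need less memory to pack
git config --global core.bigFileThreshold "50m"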
UPDATE 1
I executed these commands:
$ git filter-branch --index-filter 'git rm --cached --ignore-unmatch public/images/*' HEAD
Rewrite a1c9fb8324a2d261aa745fc176ce2846d7a2bfd7 (288/288)
WARNING: Ref 'refs/heads/master' is unchanged
and
$ git push --force --all
Counting objects: 4513, done.
Compressing objects: 100% (1614/1614), done.
Writing objects: 100% (4513/4513), 104.20 MiB | 451 KiB/s, done.
Total 4513 (delta 2678), reused 4500 (delta 2671)
remote: bb/acl: ayermolenko is allowed. accepted payload.
To https://[email protected]/repo.git
+ 203e824...ed003ce demo -> demo (forced update)
+ d59fd1b...a1c9fb8 master -> master (forced update)
Pull then works fine:
$ git pull
Already up-to-date.
But when I try to clone the repo I get:
~/www/clone$ git clone [email protected]:repo.git
Cloning into 'clone'...
remote: Counting objects: 5319, done.
remote: Compressing objects: 100% (1971/1971), done.
fatal: Out of memory, malloc failed (tried to allocate 4266852665 bytes)
fatal: index-pack failed
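A likely reason the clone still fails is that the objects rewritten away by filter-branch can stay reachable from backup refs and reflogs (and, on the server, from old packs until it runs gc). A sketch of the local cleanup usually done after git filter-branch, before force-pushing, assuming the default refs/original backup location:
# drop the backup refs that filter-branch leaves behind
git for-each-ref --format='%(refname)' refs/original/ | xargs -n 1 git update-ref -d
# expire reflog entries that still point at the old history
git reflog expire --expire=now --all
# repack and delete the now-unreferenced objects
git gc --prune=now --aggressive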
UPDATE 2
Sadly, I didn't find all of the large files; some are still left, so I asked support to kill all the logs of the repo.
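A common way to track down the remaining large files is to list the biggest objects in the pack and map them back to the paths they were committed under; a sketch (the tail count is arbitrary, and the SHA is whatever the first command reports):
# list the largest objects in the pack (the 3rd column is the object size)
git verify-pack -v .git/objects/pack/pack-*.idx | sort -k 3 -n | tail -10
# map an object SHA back to its path
git rev-list --objects --all | grep <sha-from-previous-step>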
UPDATE 3
In the end I had to kill the old repo and create a new one.
Accepted answer by VonC
If you are the only one using this repo, you can follow the git filter-branch option described in "How to purge a huge file from commits history in Git?"
The simpler option is to clone the repo at an old commit and force push it, as described in "git-filter-branch to delete large file".
Either one would force any collaborator to reset his/her own local repo to the new state you are publishing. Again, if you are the only collaborator, it isn't an issue.
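A rough sketch of that second option, where <good-commit> and <needed-commit> are placeholders for the last commit before the images were added and for any later commits worth keeping:
git checkout -b clean-master <good-commit>    # start a branch at the last good commit
git cherry-pick <needed-commit>               # re-apply any later commits you still want
git push --force origin clean-master:master   # overwrite the remote branch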
Answered by steinkel
In my case it was something as simple as trying to pull a big repo on a 1 GB RAM box with no swap.
I followed this tutorial, https://www.digitalocean.com/community/tutorials/how-to-add-swap-on-ubuntu-14-04, to create some swap space on the server, and it worked.
Their "faster" way:
sudo fallocate -l 4G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
You can make these changes permanent by adding the following to /etc/fstab:
/swapfile none swap sw 0 0
They also recommend adding the following to /etc/sysctl.conf:
vm.swappiness=10
vm.vfs_cache_pressure=50
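The sysctl.conf entries are only read at boot (or via sudo sysctl -p); to apply them immediately and confirm the new swap is active, something like this should work:
sudo sysctl -w vm.swappiness=10
sudo sysctl -w vm.vfs_cache_pressure=50
swapon -s    # or: free -m, to confirm the swap file is in use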
Answered by Basile Starynkevitch
Even if the big image files were deleted after being pushed, they still stay in the git history.
I would suggest forcibly removing them from the git history (I think that is possible, but it involves a delicate procedure that I don't know).
Alternatively, pull the repository as it was before the files were mistakenly added, apply the relevant small patches to it, clone that, and use it (perhaps with a dump/restore) as your master git.
I don't know the details well, but I have read that it should be possible.
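One commonly used tool for that kind of forcible removal is the BFG Repo-Cleaner; a sketch, assuming bfg.jar has been downloaded and you work on a fresh mirror clone (the repo URL and the 10M threshold are placeholders):
git clone --mirror https://bitbucket.org/user/repo.git
java -jar bfg.jar --strip-blobs-bigger-than 10M repo.git
cd repo.git
git reflog expire --expire=now --all
git gc --prune=now --aggressive
git push    # a mirror clone pushes all rewritten refs back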
Answered by mbarnettjones
I recently encountered this issue with one of my repositories: a similar error, hinting at a large file hidden somewhere in the repo.
Cloning into 'test_framework'...
remote: Counting objects: 11889, done.
remote: Compressing objects: 100% (5991/5991), done.
Receiving objects: 66% (7847/11889), 3.22 MiB | 43 KiB/sremote: fatal: Out of memory, malloc failed (tried to allocate 893191377 bytes)
remote: aborting due to possible repository corruption on the remote side.
fatal: early EOFs: 66% (7933/11889), 3.24 MiB
fatal: index-pack failed
To get around this I temporarily created a large swap drive (in excess of the 893191377 bytes the server was asking for), following Method 2 from here: http://www.thegeekstuff.com/2010/08/how-to-add-swap-space/
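Method 2 in that guide builds the swap file with dd; roughly (run as root, and the 1 GB size here is only an example, it needs to exceed the allocation the server asked for):
dd if=/dev/zero of=/swapfile1 bs=1M count=1024   # create an empty 1 GB file
chmod 600 /swapfile1
mkswap /swapfile1
swapon /swapfile1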
This allowed me to successfully clone and then remove the culprit (someone had checked in an SQL dump file). You can use:
git filter-branch --tree-filter 'rm -rf dumpfile.sql' HEAD
to remove the file from the git repo.
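On a repo with a long history, the --index-filter form of the same cleanup is usually much faster, because it rewrites the index without checking out every commit; a sketch, followed by the force push needed to publish the rewritten history:
git filter-branch --index-filter 'git rm --cached --ignore-unmatch dumpfile.sql' HEAD
git push --force --all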