macos IOError: [Errno 22] Invalid argument when reading/writing large bytestring

Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. You are free to use/share it, but you must do so under the same license and attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/11662960/

IOError: [Errno 22] Invalid argument when reading/writing large bytestring

python, macos, python-3.x

Asked by Dougal

I'm getting

IOError: [Errno 22] Invalid argument

when I try to write a large bytestring to disk with f.write(), where f was opened with mode wb.

I've seen lots of people online getting this error when using a Windows network drive, but I'm on OSX (10.7 when I originally asked the question but 10.8 now, with a standard HFS+ local filesystem). I'm using Python 3.2.2 (happens on both a python.org binary and a homebrew install). I don't see this problem with the system Python 2.7.2.

I also tried mode w+b based on this Windows bug workaround, but of course that didn't help.

The data is coming from a large numpy array (almost 4GB of floats). It works fine if I manually loop over the string and write it out in chunks. But because I can't write it all in one pass, np.save and np.savez fail -- since they just use f.write(ary.tostring()). I get a similar error when I try to save it into an existing HDF5 file with h5py.
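
A minimal sketch of the chunked-write workaround I mean (the helper name and the 1 GiB chunk size are my own choices, nothing numpy provides):

    import numpy as np

    def write_in_chunks(f, data, chunk_size=1 << 30):
        # Wrap in a memoryview so each slice is a view, not a copy;
        # keeping individual writes well under 2 GiB sidesteps the failure.
        view = memoryview(data)
        for start in range(0, len(view), chunk_size):
            f.write(view[start:start + chunk_size])

    ary = np.zeros(10, dtype=np.float64)  # stand-in for the ~4GB array
    with open('out.bin', 'wb') as f:
        write_in_chunks(f, ary.tostring())  # tobytes() in newer numpy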

Note that I get the same problem when reading a file opened with file(filename, 'rb'): f.read() gives this IOError, while f.read(chunk_size) for reasonable chunk_size works.
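
The corresponding chunked read, again just a sketch with names of my own:

    def read_in_chunks(f, chunk_size=1 << 30):
        # Collect the pieces and join once at the end; repeated += on
        # bytes would copy the accumulated data over and over.
        chunks = []
        while True:
            chunk = f.read(chunk_size)
            if not chunk:  # b'' signals EOF
                break
            chunks.append(chunk)
        return b''.join(chunks)

    with open('out.bin', 'rb') as f:
        data = read_in_chunks(f)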

Any thoughts?

Accepted answer by Dougal

This appears to be a general OSX bug with fread / fwrite and so isn't really fixable by a Python user. See numpy #3858, this torch7 commit, this SO question/answer, ....

Supposedly it's been fixed in Mavericks, but I'm still seeing the issue.

Python 2 may have worked around this or its io module may have always buffered large reads/writes; I haven't investigated thoroughly.
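
If you need np.save itself to work despite the bug, one possible workaround (my own sketch, not from the original answer -- np.save accepts file-like objects in the numpy versions I've checked, but verify yours) is a thin wrapper whose write() splits oversized buffers before they reach the broken fwrite path:

    class ChunkedWriter:
        # File-like wrapper whose write() never passes more than
        # chunk_size bytes to the underlying file in one call.
        def __init__(self, f, chunk_size=1 << 30):
            self.f = f
            self.chunk_size = chunk_size

        def write(self, data):
            view = memoryview(data)
            for start in range(0, len(view), self.chunk_size):
                self.f.write(view[start:start + self.chunk_size])

        def __getattr__(self, name):
            # Delegate everything else (seek, tell, flush, ...) to the real file.
            return getattr(self.f, name)

    with open('big.npy', 'wb') as f:
        np.save(ChunkedWriter(f), ary)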

Answered by user318904

Perhaps try not opening with the b flag; I didn't think that was supported on all OSes / filesystems.
