Writing a binary buffer to a file in Python

Note: this page is a translation of a popular StackOverflow question and its answers, provided under the CC BY-SA 4.0 license. If you reuse or share it, you must do so under the same license and attribute it to the original authors (not the translator). Original question: http://stackoverflow.com/questions/652535/


Writing a binary buffer to a file in python

Tags: python, binary, io

Asked by Jamie Love

I have some python code that:

  1. Takes a compressed BLOB from a database.
  2. Calls an uncompression routine in C that uncompresses the data.
  3. Writes the uncompressed data to a file.

It uses ctypes to call the C routine, which is in a shared library.

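(For reference, loading such a shared library with ctypes might look like the sketch below; the library file name and path are assumptions, not something given in the question.)

from ctypes import CDLL

# assumed file name/path for the shared library; on Windows this would be a .dll
mylib = CDLL("./mylib.so")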

This mostly works, except for the actual writing to the file. To uncompress, I get the data uncompressed into a Python buffer, created using the ctypes create_string_buffer method:

c_uncompData_p = create_string_buffer(64000)

so the uncompression call is like this:

c_uncompSize = mylib.explodeCharBuffer (c_data_p, c_data_len, c_uncompData_p)

The size of the resulting uncompressed data is returned as the return value.

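(A hedged sketch of how the function could be declared so that ctypes checks the arguments and interprets the returned size as an int; the exact C signature of explodeCharBuffer is an assumption, and c_data_p is assumed to be a byte string or c_char_p.)

from ctypes import POINTER, c_char, c_char_p, c_int

# assumed C signature: int explodeCharBuffer(const char *data, int data_len, char *out)
mylib.explodeCharBuffer.argtypes = [c_char_p, c_int, POINTER(c_char)]
mylib.explodeCharBuffer.restype = c_int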

But... I have no idea how to force Python to write out only c_uncompSize bytes. If I do:

myfile.write (c_uncompData_p.raw)

it writes the whole 64k buffer out (the data is binary, so it is not null-terminated).

So, my question is: using Python 2.5, how do I get only c_uncompSize bytes written out, rather than the whole 64k?

Thanks Jamie

Accepted answer by elo80ka

Slicing works for c_char_Array objects too:

myfile.write(c_uncompData_p[:c_uncompSize])
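(For completeness, a minimal usage sketch around the accepted answer; the output file name is an assumption. Opening the file in binary mode matters, especially on Windows.)

myfile = open("uncompressed.bin", "wb")            # assumed file name; "wb" = write binary
try:
    myfile.write(c_uncompData_p[:c_uncompSize])    # write only the valid bytes
finally:
    myfile.close()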

Answer by jfs

buffer() might help to avoid unnecessary copying (caused by the slicing in @elo80ka's answer):

myfile.write(buffer(c_uncompData_p.raw, 0, c_uncompSize))

In your example it doesn't matter (because c_uncompData_p is written only once and it is small), but in general it could be useful.


Just for the sake of exercise, here's an answer that uses C stdio's fwrite():

from ctypes import *

# load the C runtime library
try:
    libc = cdll.msvcrt                 # Windows
except (AttributeError, OSError):
    libc = CDLL("libc.so.6")           # Linux

# fopen(): declare the return type so the FILE* is not truncated on 64-bit systems
libc.fopen.restype = c_void_p
def errcheck(res, func, args):
    if not res: raise IOError
    return res
libc.fopen.errcheck = errcheck
# errcheck() could be similarly defined for `fwrite`, `fclose`

# fwrite()/fclose(): declare argument types so the FILE* is passed through intact
libc.fwrite.argtypes = [c_void_p, c_size_t, c_size_t, c_void_p]
libc.fwrite.restype = c_size_t
libc.fclose.argtypes = [c_void_p]

# write data
file_p = libc.fopen("output.bin", "wb")
sizeof_item = 1  # bytes
nitems = libc.fwrite(c_uncompData_p, sizeof_item, c_uncompSize, file_p)
retcode = libc.fclose(file_p)
if nitems != c_uncompSize:  # not all data were written
    pass
if retcode != 0:  # the file was NOT successfully closed
    pass