Linux 下多个进程可以使用 fopen 追加写入同一个文件而不产生并发问题吗?
声明:本页面是 StackOverFlow 热门问题的中英对照翻译,遵循 CC BY-SA 4.0 协议。如果您需要使用它,必须同样遵循 CC BY-SA 许可,注明原文地址和作者信息,并将其归于原作者(不是我):StackOverFlow
原文地址: http://stackoverflow.com/questions/7552451/
Warning: these are provided under cc-by-sa 4.0 license. You are free to use/share it, but you must attribute it to the original authors (not me):
StackOverFlow
Can multiple processes append to a file using fopen without any concurrency problems?
提问by Deleted
I have a process opening a file in append mode. In this case it is a log file. Sample code:
我有一个以追加模式打开文件的进程。在这种情况下,它是一个日志文件。示例代码:
#include <stdio.h>

int main(int argc, char **argv) {
    FILE *f;
    f = fopen("log.txt", "a");
    if (f == NULL)
        return 1;
    fprintf(f, "log entry line");
    fclose(f);
    return 0;
}
Two questions:
两个问题:
- If I have multiple processes appending to the same file, will each log line appear distinctly or can they be interlaced as the processes context switch?
- Will this write block if lots of processes require access to the file, therefore causing concurrency problems?
- 如果我有多个进程追加写入同一个文件,每个日志行会完整地显示,还是会在进程上下文切换时相互交错?
- 如果大量进程需要访问该文件,这种写入会阻塞并导致并发问题吗?
I am considering either doing this in its simplest incarnation or using zeromq to pump log entries over pipes to a log collector.
我正在考虑要么以最简单的形式实现它,要么使用 zeromq 将日志条目通过管道发送到一个日志收集器。
I did consider syslog but I don't really want any platform dependencies on the software.
我确实考虑过 syslog,但我真的不希望对软件有任何平台依赖性。
The default platform is Linux for this btw.
顺便说一句,默认平台是 Linux。
采纳答案by thiton
You'll certainly have platform dependencies since Windows can't handle multiple processes appending to the same file.
您肯定会有平台依赖性,因为 Windows 无法处理多个进程追加写入同一个文件的情况。
Regarding synchronization problems, I think that line-buffered output /should/ save you most of the time, i.e. more than 99.99% of short log lines should be intact according to my short shell-based test, but not every time. Explicit semantics are definitely preferable, and since you won't be able to write this hack system-independently anyway, I'd recommend a syslog approach.
关于同步问题,我认为行缓冲输出在大多数情况下/应该/能保全你的日志,即根据我简短的基于 shell 的测试,超过 99.99% 的短日志行应该是完整的,但并非每次都如此。显式的同步语义肯定更可取,而且既然你反正无法以平台无关的方式实现这个 hack,我建议采用 syslog 方案。
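For reference, a minimal sketch of the syslog(3) approach recommended above (the program name "myapp" is a placeholder, not from the answer):
作为参考,下面是上面推荐的 syslog(3) 方案的最小草稿(程序名 "myapp" 为占位符,并非来自该回答):

```c
/* Minimal syslog(3) usage; "myapp" is a placeholder identifier. */
#include <syslog.h>

void log_entry(const char *msg) {
    /* LOG_PID tags each entry with the process id, so concurrent
     * processes are distinguishable in the log. */
    openlog("myapp", LOG_PID | LOG_NDELAY, LOG_USER);
    syslog(LOG_INFO, "%s", msg);   /* the syslog daemon serializes writers */
    closelog();
}
```

Since the syslog daemon serializes messages from all clients, no extra locking is needed on the application side.
由于 syslog 守护进程会对所有客户端的消息进行串行化,应用侧不需要额外加锁。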
回答by Andrey Atapin
When your processes write something like:
当您的进程写入如下内容时:
"Here's process #1"
"Here's process #2"
you will probably get something like:
你可能会得到类似的东西:
"Hehere's process #2re's process #1"
You will need to synchronize them.
您将需要同步它们。
回答by harald
Unless you do some sort of synchronization the log lines may overlap. So to answer number two, that depends on how you implement the locking and logging code. If you just lock, write to file and unlock, that may cause problems if you have lots of processes trying to access the file at the same time.
除非您进行某种同步,否则日志行可能会重叠。所以要回答第二个问题,这取决于您如何实现锁定和日志记录代码。如果您只是锁定、写入文件并解锁,那么如果您有许多进程试图同时访问该文件,则可能会导致问题。
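A minimal sketch of the lock-write-unlock pattern this answer describes, using flock(2) advisory locking on Linux (the helper name log_line is an assumption, not from the answer):
该回答描述的 lock-write-unlock 模式的最小草稿,使用 Linux 上的 flock(2) 建议性锁(辅助函数名 log_line 为假设,并非来自该回答):

```c
/* Lock-write-unlock sketch with flock(2) advisory locks. */
#include <stdio.h>
#include <sys/file.h>   /* flock() */

int log_line(const char *path, const char *msg) {
    FILE *f = fopen(path, "a");
    if (f == NULL)
        return -1;
    /* Block until we hold an exclusive lock on the file. */
    if (flock(fileno(f), LOCK_EX) == -1) {
        fclose(f);
        return -1;
    }
    fprintf(f, "%s\n", msg);
    fflush(f);                      /* flush while still holding the lock */
    flock(fileno(f), LOCK_UN);      /* fclose() would also drop the lock */
    return fclose(f) == 0 ? 0 : -1;
}
```

Because the lock is taken around a single short write, contention stays low; holding it across many writes is what causes the blocking problem the answer warns about.
由于锁只包住一次短写入,竞争很低;如果在大量写入期间一直持有锁,才会造成该回答警告的阻塞问题。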
回答by cnicutar
I don't know about fopen and fprintf, but you could open the file using O_APPEND. Then each write will go at the end of the file without a hitch (without getting mixed with another write).
我不知道 fopen 和 fprintf 的情况,但你可以用 O_APPEND 标志来 open 文件。这样每次 write 都会顺利写到文件末尾(不会与另一次写入混在一起)。
Actually looking in the standard:
实际上查看一下标准:
The file descriptor associated with the opened stream shall be allocated and opened as if by a call to open() with the following flags:
a or ab O_WRONLY|O_CREAT|O_APPEND
与打开的流关联的文件描述符的分配和打开方式,应等同于以下列标志调用 open():
a or ab O_WRONLY|O_CREAT|O_APPEND
So I guess it's safe to fprintf from multiple processes as long as the file has been opened with "a".
所以我想,只要文件是以 "a" 模式打开的,从多个进程 fprintf 应该是安全的。
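To illustrate, a sketch of the open()/O_APPEND approach this answer describes (the helper name append_line is an assumption; on Linux, a single write() of modest size appends atomically in practice):
作为示意,下面是该回答描述的 open()/O_APPEND 方式的草稿(辅助函数名 append_line 为假设;在 Linux 上,适中大小的单次 write() 在实践中会原子地追加):

```c
/* O_APPEND sketch: the kernel moves the offset to end-of-file and
 * writes in one step, so whole lines do not get interleaved. */
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

int append_line(const char *path, const char *line) {
    int fd = open(path, O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd == -1)
        return -1;
    /* One write() call per log line keeps the line in one system call. */
    ssize_t n = write(fd, line, strlen(line));
    close(fd);
    return n == (ssize_t)strlen(line) ? 0 : -1;
}
```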
回答by glglgl
The standard (for open/write, not fopen/fwrite) states that
该标准(针对 open/write,而非 fopen/fwrite)指出:
If the O_APPEND flag of the file status flags is set, the file offset shall be set to the end of the file prior to each write and no intervening file modification operation shall occur between changing the file offset and the write operation.
如果设置了文件状态标志的 O_APPEND 标志,则文件偏移量应在每次写入之前设置为文件末尾,并且在更改文件偏移量和写入操作之间不应发生中间文件修改操作。
For fprintf to be used, you have to disable buffering on the file.
要使用 fprintf,您必须禁用该文件的缓冲。
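A sketch of this suggestion (the helper names are ours); note that even with buffering disabled, one fprintf() may still be split into several write() calls, so formatting first and issuing a single fwrite() is safer:
这一建议的草稿(辅助函数名为假设);注意即使禁用缓冲,一次 fprintf() 仍可能被拆成多次 write() 调用,因此先格式化、再用一次 fwrite() 写出更稳妥:

```c
#include <stdio.h>
#include <string.h>

/* Open the log in append mode with stdio buffering disabled. */
FILE *open_log(const char *path) {
    FILE *f = fopen(path, "a");
    if (f == NULL)
        return NULL;
    setvbuf(f, NULL, _IONBF, 0);   /* _IONBF: unbuffered stream */
    return f;
}

/* Emit a pre-formatted line with one fwrite(), so the whole line is
 * handed to the kernel in a single call. */
int write_log_line(FILE *f, const char *line) {
    size_t n = strlen(line);
    return fwrite(line, 1, n, f) == n ? 0 : -1;
}
```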
回答by Pete Wilson
EDIT to answer your questions explicitly:
编辑以明确回答您的问题:
- If I have multiple processes appending to the same file, will each log line appear distinctly or can they be interlaced as the processes context switch?
- 如果我有多个进程附加到同一个文件,每个日志行会清晰地显示还是可以在进程上下文切换时交错?
Yes, each log line will appear intact because according to msdn/vs2010:
是的,每个日志行都会完好无损,因为根据msdn/vs2010:
"This function [that is, fwrite( )] locks the calling thread and is therefore thread-safe. For a non-locking version, see _fwrite_nolock."
“该函数 [即 fwrite()] 锁定调用线程,因此是线程安全的。对于非锁定版本,请参阅 _fwrite_nolock。”
The same is implied on the GNU manpage:
GNU 手册页也暗示了同样的结论:
"— Function: size_t fwrite (const void *data, size_t size, size_t count, FILE *stream)
"— 函数:size_t fwrite (const void *data, size_t size, size_t count, FILE *stream)
This function writes up to count objects of size size from the array data, to the stream stream. The return value is normally count, if the call succeeds. Any other value indicates some sort of error, such as running out of space.
此函数从数组 data 向流 stream 最多写入 count 个大小为 size 的对象。调用成功时返回值通常为 count,任何其他值都表示出现了某种错误,例如空间不足。
— Function: size_t fwrite_unlocked (const void *data, size_t size, size_t count, FILE *stream)
— 函数:size_t fwrite_unlocked (const void *data, size_t size, size_t count, FILE *stream)
The fwrite_unlocked function is equivalent to the fwrite function except that it does not implicitly lock the stream.
fwrite_unlocked 函数等价于 fwrite 函数,区别在于它不会隐式地锁定流。
This function [i.e., fwrite_unlocked( )] is a GNU extension. "
此函数 [即 fwrite_unlocked()] 是一个 GNU 扩展。"
- Will this write block if lots of processes require access to the file, therefore causing concurrency problems?
- 如果大量进程需要访问文件,则此写入会阻塞,从而导致并发问题吗?
Yes, by implication from question 1.
是的,来自问题 1 的暗示。