Windows CreateFileMapping, MapViewOfFile: how to avoid holding up system memory

Notice: this page is a translation of a popular StackOverflow question and is provided under the CC BY-SA 4.0 license. If you reuse or share it, you must do so under the same license and attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/1880714/


CreateFileMapping, MapViewOfFile, how to avoid holding up the system memory

windows, performance, file-io, memory-management

Asked by Guillermo Prandi

I'm developing an application targeted at desktop systems which may have as little as 256 MB of RAM (Windows 2000 and up). In my application I have a large file (>256 MB) which contains fixed-size records of about 160 bytes each. The application has a rather lengthy process in which, over time, it will randomly access about 90% of the file (for reading and writing). Any given record write will not be more than 1,000 record accesses away from the read of that particular record (I can tune this value).


I have two obvious options for this process: regular I/O (ReadFile, WriteFile) and memory mapping (CreateFileMapping, MapViewOfFile). The latter should be much more efficient on systems with enough memory, but on low-memory systems it will swap out most of the other applications' memory, which in my application is a no-no. Is there a way to keep the process from eating up all memory (e.g., by forcing the flushing of memory pages I'm no longer accessing)? If this is not possible, then I must fall back to regular I/O; I would have liked to use overlapped I/O for the writing part (since access is so random), but the documentation says writes of less than 64K are always served synchronously.

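For reference, a minimal sketch of the regular-I/O option for fixed-size records, using SetFilePointerEx together with ReadFile/WriteFile. The file name, the 160-byte record size, the record index and the ReadRecord/WriteRecord helper names are illustrative assumptions, error handling is kept to a minimum, and an ANSI (non-UNICODE) build is assumed:

#include <windows.h>
#include <stdio.h>

#define RECORD_SIZE 160

BOOL ReadRecord( HANDLE hfile, LONGLONG index, void* buffer )
{
    LARGE_INTEGER pos;
    DWORD         got = 0;
    pos.QuadPart = index * RECORD_SIZE;
    if( !SetFilePointerEx( hfile, pos, NULL, FILE_BEGIN ) ) return FALSE;
    return ReadFile( hfile, buffer, RECORD_SIZE, &got, NULL ) && got == RECORD_SIZE;
}

BOOL WriteRecord( HANDLE hfile, LONGLONG index, const void* buffer )
{
    LARGE_INTEGER pos;
    DWORD         put = 0;
    pos.QuadPart = index * RECORD_SIZE;
    if( !SetFilePointerEx( hfile, pos, NULL, FILE_BEGIN ) ) return FALSE;
    return WriteFile( hfile, buffer, RECORD_SIZE, &put, NULL ) && put == RECORD_SIZE;
}

int main( void )
{
    char record[RECORD_SIZE];

    // FILE_FLAG_RANDOM_ACCESS hints the cache manager not to read ahead.
    HANDLE hfile = CreateFile( "c:\\temp\\records.dat", GENERIC_READ | GENERIC_WRITE,
                               0, NULL, OPEN_EXISTING, FILE_FLAG_RANDOM_ACCESS, NULL );
    if( hfile == INVALID_HANDLE_VALUE ) return 1;

    if( ReadRecord( hfile, 12345, record ) )
    {
        record[0]++;                        // modify the record in place
        WriteRecord( hfile, 12345, record );
    }

    CloseHandle( hfile );
    return 0;
}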

Any ideas for improving I/O are welcomed.


Answered by Guillermo Prandi

I finally found the way, derived from a thread here. The trick is using VirtualUnlock() on the ranges I need to uncommit; although this function returns FALSE with error 0x9e ("The segment is already unlocked"), memory is actually released, even if the pages were modified (file is correctly updated).


Here's my sample test program:


#include "stdafx.h"

void getenter(void)
{
    int     ch;
    for(;;)
    {
        ch = _getch();
        if( ch == '\n' || ch == '\r' ) return;
    }
}

int main(int argc, char* argv[])
{
    const char* fname = "c:\\temp\\MMFTest\\TestFile.rar";   // 54 MB
    HANDLE  hfile = CreateFile( fname, GENERIC_READ | GENERIC_WRITE, 0, NULL, OPEN_EXISTING, FILE_FLAG_RANDOM_ACCESS, NULL );
    if( hfile == INVALID_HANDLE_VALUE )
    {
        fprintf( stderr, "CreateFile() error 0x%08x\n", GetLastError() );
        getenter();
        return 1;
    }

    HANDLE map_handle = CreateFileMapping( hfile, NULL, PAGE_READWRITE | SEC_RESERVE, 0, 0, 0);
    if( map_handle == NULL )
    {
        fprintf( stderr, "CreateFileMapping() error 0x%08x\n", GetLastError() );
        getenter();
        CloseHandle(hfile);
        return 1;
    }

    char* map_ptr = (char*) MapViewOfFile( map_handle, FILE_MAP_WRITE | FILE_MAP_READ, 0, 0, 0 );
    if( map_ptr == NULL )
    {
        fprintf( stderr, "MapViewOfFile() error 0x%08x\n", GetLastError() );
        getenter();
        CloseHandle(map_handle);
        CloseHandle(hfile);
        return 1;
    }

    // Memory usage here is 704KB
    printf("Mapped.\n"); getenter();

    for( int n = 0 ; n < 10000 ; n++ )
    {
        map_ptr[n*4096]++;
    }

    // Memory usage here is ~40MB
    printf("Used.\n"); getenter();

    if( !VirtualUnlock( map_ptr, 5000 * 4096 ) )
    {
        // Memory usage here is ~20MB
        // 20MB already freed!
        fprintf( stderr, "VirtualUnlock() error 0x%08x\n", GetLastError() );
        getenter();
        UnmapViewOfFile(map_ptr);
        CloseHandle(map_handle);
        CloseHandle(hfile);
        return 1;
    }

    // Code never reached
    printf("VirtualUnlock() executed.\n"); getenter();

    UnmapViewOfFile(map_ptr);
    CloseHandle(map_handle);
    CloseHandle(hfile);

    printf("Unmapped and closed.\n"); getenter();

    return 0;
}

As you can see, the program's working set shrinks after VirtualUnlock() executes, just as I needed. I only need to keep track of the pages I change so I can unlock them as appropriate.

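To keep track of the touched pages, something along these lines could work. This is only an illustrative sketch and not part of the original answer: the MappedFileTrimmer class, its names and the decision to trim everything in one pass are my own assumptions. It records which page each write falls into and later calls VirtualUnlock() on those pages:

#include <windows.h>
#include <vector>

class MappedFileTrimmer
{
public:
    MappedFileTrimmer( char* base ) : base_(base)
    {
        SYSTEM_INFO si;
        GetSystemInfo( &si );
        page_size_ = si.dwPageSize;
    }

    // Remember the page containing 'offset' whenever a record is written.
    void NoteWrite( size_t offset )
    {
        touched_.push_back( offset / page_size_ );
    }

    // Trim the working set for every touched page. VirtualUnlock() is expected
    // to fail with ERROR_NOT_LOCKED (0x9e), but the pages are released anyway.
    void Trim()
    {
        for( size_t i = 0 ; i < touched_.size() ; i++ )
            VirtualUnlock( base_ + touched_[i] * page_size_, page_size_ );
        touched_.clear();
    }

private:
    char*               base_;
    size_t              page_size_;
    std::vector<size_t> touched_;
};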

Answered by Bruno Martinez

Just map the whole file to memory. This consumes virtual but not physical memory. The file is read from disk piecewise and is evicted from memory by the same policies that govern the swap file.


Answered by W.F

VirtualUnlock does not appear to work. What you need to do is call FlushViewOfFile(map_ptr, 0) immediately before UnmapViewOfFile(map_ptr). Note that Windows Task Manager will not show the physical memory usage; use Process Explorer from Sysinternals instead.

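As a small sketch of the teardown order this answer suggests, reusing the map_ptr, map_handle and hfile names from the sample program above:

FlushViewOfFile( map_ptr, 0 );    // write all dirty pages of the view to disk
UnmapViewOfFile( map_ptr );       // release the view (and its physical pages)
CloseHandle( map_handle );
CloseHandle( hfile );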

Answered by Anders

Are you mapping the whole file as one block with MapViewOfFile? If so, try mapping smaller parts. You can flush a view with FlushViewOfFile().

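A minimal sketch of mapping only a window of the file instead of the whole thing, reusing map_handle from the sample program above; the 10 MB offset and 4 MB window size are arbitrary assumptions for the example. The offset passed to MapViewOfFile must be a multiple of the system allocation granularity (typically 64 KB):

SYSTEM_INFO si;
GetSystemInfo( &si );
DWORD granularity = si.dwAllocationGranularity;    // typically 64 KB

ULONGLONG desired_offset = 10ULL * 1024 * 1024;    // byte we want to touch
ULONGLONG view_base      = desired_offset - (desired_offset % granularity);
SIZE_T    view_size      = 4 * 1024 * 1024;        // 4 MB window

char* view = (char*) MapViewOfFile( map_handle, FILE_MAP_WRITE | FILE_MAP_READ,
                                    (DWORD)(view_base >> 32),
                                    (DWORD)(view_base & 0xFFFFFFFF),
                                    view_size );
if( view != NULL )
{
    view[desired_offset - view_base]++;            // touch a byte inside the window
    FlushViewOfFile( view, 0 );                    // push dirty pages back to the file
    UnmapViewOfFile( view );                       // release the window's memory
}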