C# Large Object Heap Fragmentation

Note: this content is taken from a popular StackOverflow question and is provided under the CC BY-SA 4.0 license. If you use or share it, you must follow the same license and attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/686950/

Large Object Heap Fragmentation

c# .net memory-management memory-leaks windbg

Asked by Paul Ruane

The C#/.NET application I am working on is suffering from a slow memory leak. I have used CDB with SOS to try to determine what is happening, but the data does not seem to make any sense, so I was hoping one of you may have experienced this before.

The application is running on the 64 bit framework. It is continuously calculating and serialising data to a remote host and is hitting the Large Object Heap (LOH) a fair bit. However, I expect most of the LOH objects to be transient: once a calculation is complete and has been sent to the remote host, the memory should be freed. What I am seeing, however, is a large number of (live) object arrays interleaved with free blocks of memory, e.g., taking a random segment from the LOH:

0:000> !DumpHeap 000000005b5b1000  000000006351da10
         Address               MT     Size
...
000000005d4f92e0 0000064280c7c970 16147872
000000005e45f880 00000000001661d0  1901752 Free
000000005e62fd38 00000642788d8ba8     1056       <--
000000005e630158 00000000001661d0  5988848 Free
000000005ebe6348 00000642788d8ba8     1056
000000005ebe6768 00000000001661d0  6481336 Free
000000005f214d20 00000642788d8ba8     1056
000000005f215140 00000000001661d0  7346016 Free
000000005f9168a0 00000642788d8ba8     1056
000000005f916cc0 00000000001661d0  7611648 Free
00000000600591c0 00000642788d8ba8     1056
00000000600595e0 00000000001661d0   264808 Free
...

Obviously I would expect this to be the case if my application were creating long-lived, large objects during each calculation. (It does do this, and I accept there will be a degree of LOH fragmentation, but that is not the problem here.) The problem is the very small (1056 byte) object arrays you can see in the dump above, which I cannot see being created anywhere in the code and which are somehow remaining rooted.

Also note that CDB is not reporting the type when the heap segment is dumped: I am not sure if this is related or not. If I dump the marked (<--) object, CDB/SOS reports it fine:

0:015> !DumpObj 000000005e62fd38
Name: System.Object[]
MethodTable: 00000642788d8ba8
EEClass: 00000642789d7660
Size: 1056(0x420) bytes
Array: Rank 1, Number of elements 128, Type CLASS
Element Type: System.Object
Fields:
None

The elements of the object array are all strings, and the strings are recognisable as coming from our application code.

Also, I am unable to find their GC roots as the !GCRoot command hangs and never comes back (I have even tried leaving it overnight).

So, I would very much appreciate it if anyone could shed any light on why these small (<85k) object arrays are ending up on the LOH: in what situations will .NET put a small object array there? Also, does anyone happen to know of an alternative way of ascertaining the roots of these objects?



Update 1

Another theory I came up with late yesterday is that these object arrays started out large but have been shrunk, leaving the blocks of free memory that are evident in the memory dumps. What makes me suspicious is that the object arrays always appear to be 1056 bytes long (128 elements): 128 * 8 bytes for the references plus 32 bytes of overhead.

The idea is that perhaps some unsafe code in a library or in the CLR is corrupting the number-of-elements field in the array header. Bit of a long shot, I know...



Update 2

Thanks to Brian Rasmussen (see accepted answer) the problem has been identified as fragmentation of the LOH caused by the string intern table! I wrote a quick test application to confirm this:

static void Main()
{
    const int ITERATIONS = 100000;

    // Phase 1: create unique strings without interning them; they become garbage immediately.
    for (int index = 0; index < ITERATIONS; ++index)
    {
        string str = "NonInterned" + index;
        Console.Out.WriteLine(str);
    }

    Console.Out.WriteLine("Continue.");
    Console.In.ReadLine();

    // Phase 2: intern each unique string, permanently rooting it in the intern table.
    for (int index = 0; index < ITERATIONS; ++index)
    {
        string str = string.Intern("Interned" + index);
        Console.Out.WriteLine(str);
    }

    Console.Out.WriteLine("Continue?");
    Console.In.ReadLine();
}

The application first creates and dereferences unique strings in a loop. This is just to prove that the memory does not leak in this scenario. Obviously it should not and it does not.

In the second loop, unique strings are created and interned. This action roots them in the intern table. What I did not realise is how the intern table is represented. It appears it consists of a set of pages -- object arrays of 128 string elements -- that are created in the LOH. This is more evident in CDB/SOS:

0:000> .loadby sos mscorwks
0:000> !EEHeap -gc
Number of GC Heaps: 1
generation 0 starts at 0x00f7a9b0
generation 1 starts at 0x00e79c3c
generation 2 starts at 0x00b21000
ephemeral segment allocation context: none
 segment    begin allocated     size
00b20000 00b21000  010029bc 0x004e19bc(5118396)
Large object heap starts at 0x01b21000
 segment    begin allocated     size
01b20000 01b21000  01b8ade0 0x00069de0(433632)
Total Size  0x54b79c(5552028)
------------------------------
GC Heap Size  0x54b79c(5552028)

Taking a dump of the LOH segment reveals the pattern I saw in the leaking application:

0:000> !DumpHeap 01b21000 01b8ade0
...
01b8a120 793040bc      528
01b8a330 00175e88       16 Free
01b8a340 793040bc      528
01b8a550 00175e88       16 Free
01b8a560 793040bc      528
01b8a770 00175e88       16 Free
01b8a780 793040bc      528
01b8a990 00175e88       16 Free
01b8a9a0 793040bc      528
01b8abb0 00175e88       16 Free
01b8abc0 793040bc      528
01b8add0 00175e88       16 Free    total 1568 objects
Statistics:
      MT    Count    TotalSize Class Name
00175e88      784        12544      Free
793040bc      784       421088 System.Object[]
Total 1568 objects

Note that the object array size is 528 (rather than 1056) because my workstation is 32 bit and the application server is 64 bit: 16 bytes of overhead plus 128 four-byte references, as opposed to 32 bytes of overhead plus 128 eight-byte references. The object arrays are still 128 elements long.

So the moral of this story is: be very careful when interning. If the strings you are interning are not known to come from a finite set, then your application will leak due to fragmentation of the LOH, at least in version 2 of the CLR.

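For illustration, a minimal sketch of the kind of guard that keeps interning safe, assuming the values really do come from a small, finite set (the SafeIntern name and the whitelist contents are made up for this example, not taken from our code):

using System.Collections.Generic;

static class SafeIntern
{
    // Hypothetical whitelist: identifiers known to form a small, finite set.
    private static readonly HashSet<string> KnownValues =
        new HashSet<string> { "OrderCreated", "OrderAmended", "OrderCancelled" };

    public static string InternIfKnown(string value)
    {
        // Only members of the finite set ever reach the intern table;
        // arbitrary strings are returned untouched and remain collectable.
        return KnownValues.Contains(value) ? string.Intern(value) : value;
    }
}
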
In our application's case, there is general code in the deserialisation code path that interns entity identifiers during unmarshalling: I now strongly suspect this is the culprit. However, the developer's intentions were obviously good: they wanted to make sure that if the same entity is deserialised multiple times, only one instance of the identifier string is kept in memory.

Accepted answer by Brian Rasmussen

The CLR uses the LOH to preallocate a few objects (such as the array used for interned strings). Some of these are less than 85000 bytes and thus would not normally be allocated on the LOH.

It is an implementation detail, but I assume the reason for this is to avoid unnecessary garbage collection of instances that are supposed to survive as long as the process itself.

Also, due to a somewhat esoteric optimization, any double[] of 1000 or more elements is allocated on the LOH.

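A rough way to observe both behaviours from code, assuming a 32-bit .NET Framework process (where the double[] special case applies) and relying on the fact that objects on the LOH are reported as belonging to generation 2 as soon as they are allocated; the thresholds are implementation details and may differ on newer runtimes:

using System;

static class LohThresholdDemo
{
    static void Main()
    {
        // Ordinary allocations follow the 85,000-byte threshold.
        Console.WriteLine(GC.GetGeneration(new byte[85000]));  // 2: on the LOH
        Console.WriteLine(GC.GetGeneration(new byte[84000]));  // 0: small object heap

        // The esoteric case: a double[] of 1000 elements is only ~8,000 bytes,
        // yet it is still placed on the LOH.
        Console.WriteLine(GC.GetGeneration(new double[1000])); // 2: on the LOH
        Console.WriteLine(GC.GetGeneration(new double[999]));  // 0
    }
}
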
Answered by HUAGHAGUAH

If the format is recognizable as coming from your application, why haven't you identified the code that is generating this string format? If there are several possibilities, try adding unique data to figure out which code path is the culprit.

The fact that the arrays are interleaved with large freed items leads me to guess that they were originally paired or at least related. Try to identify the freed objects to figure out what was generating them and the associated strings.

Once you identify what is generating these strings, try to figure out what would be keeping them from being GCed. Perhaps they're being stuffed in a forgotten or unused list for logging purposes or something similar.



EDIT: Ignore the memory region and the specific array size for the moment: just figure out what is being done with these strings to cause a leak. Try !GCRoot when your program has created or manipulated these strings just once or twice, when there are fewer objects to trace.

Answered by Daniel Earwicker

Reading descriptions of how the GC works, in particular that long-lived objects end up in generation 2 and that LOH objects are collected only during a full collection, just like generation 2, the idea that springs to mind is: why not just keep generation 2 and large objects in the same heap, since they are going to get collected together?

If that's what actually happens, then it would explain how small objects end up in the same place as the LOH, provided they live long enough to end up in generation 2.

And so your problem would appear to be a pretty good rebuttal to the idea that occurs to me - it would result in the fragmentation of the LOH.

Summary: your problem could be explained by the LOH and generation 2 sharing the same heap region, although that is by no means proof that this is the explanation.

Update: the output of !dumpheap -stat pretty much blows this theory out of the water! Generation 2 and the LOH have their own regions.

Answered by Ian Ringrose

Great question, I learned a lot just by reading the question.

I think other bits of the deserialisation code path are also using the large object heap, hence the fragmentation. If all the strings were interned at the SAME time, I think you would be OK.

Given how good the .NET garbage collector is, just letting the deserialisation code path create normal string objects is likely to be good enough. Don't do anything more complex until the need is proven.

I would at most look at keeping a hash table of the last few strings you have seen and reusing those. By limiting the hash table size, and passing the size in when you create the table, you can stop most fragmentation. You would then need a way to remove strings you have not seen recently from the hash table to limit its size. But if the strings the deserialisation code path creates are short-lived anyway, you will not gain much, if anything.

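A minimal sketch of that idea, assuming a crude size limit is acceptable (the BoundedStringPool name and the reset-when-full policy are illustrative; a real implementation would want proper eviction and, if shared across threads, locking):

using System.Collections.Generic;

// Folds duplicate strings onto a single instance, much like string.Intern, but the
// pool is an ordinary object that is bounded and can be discarded entirely.
class BoundedStringPool
{
    private readonly int _capacity;
    private readonly Dictionary<string, string> _pool;

    public BoundedStringPool(int capacity)
    {
        _capacity = capacity;
        _pool = new Dictionary<string, string>(capacity);
    }

    public string GetOrAdd(string value)
    {
        string existing;
        if (_pool.TryGetValue(value, out existing))
            return existing;

        if (_pool.Count >= _capacity)
            _pool.Clear();  // crude: reset rather than grow without bound

        _pool.Add(value, value);
        return value;
    }
}

The deserialisation path would then call pool.GetOrAdd(identifier) instead of string.Intern(identifier).
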
Answered by Naveen

Here are a couple of ways to identify the exact call stack of an LOH allocation.

To avoid LOH fragmentation, pre-allocate a large array of objects and pin them, then reuse these objects when needed. Here is a post on LOH fragmentation; something like this could help in avoiding LOH fragmentation.

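As a rough sketch of that approach, assuming fixed buffer counts and sizes chosen up front (the LargeBufferPool name, the numbers and the null-on-exhaustion policy are illustrative; a production pool would also need thread safety):

using System.Collections.Generic;

// Pre-allocates a fixed set of large buffers at start-up and reuses them, so the LOH
// sees a stable population of long-lived objects instead of constant churn.
class LargeBufferPool
{
    private readonly Stack<byte[]> _free = new Stack<byte[]>();

    public LargeBufferPool(int bufferCount, int bufferSize)
    {
        for (int i = 0; i < bufferCount; i++)
            _free.Push(new byte[bufferSize]);  // e.g. buffers well over 85,000 bytes each
    }

    public byte[] Rent()
    {
        return _free.Count > 0 ? _free.Pop() : null;  // null here signals exhaustion
    }

    public void Return(byte[] buffer)
    {
        _free.Push(buffer);
    }
}
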
Answered by Andre Abrantes

.NET Framework 4.5.1 has the ability to explicitly compact the large object heap (LOH) during garbage collection.

using System.Runtime;

// Request a one-off compaction of the LOH on the next blocking generation-2 collection.
GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;
GC.Collect();

See more info in GCSettings.LargeObjectHeapCompactionMode
