Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you use or share it, you must do so under the same license and attribute it to the original authors (not me). Original StackOverflow question: http://stackoverflow.com/questions/2811537/

Date: 2020-08-13 13:08:02  Source: igfitidea

Is Java HashMap.clear() and remove() memory effective?

Tags: java, memory, hashmap

Asked by Illarion Kovalchuk

Consider the following HashMap.clear() code:

/**
 * Removes all of the mappings from this map.
 * The map will be empty after this call returns.
 */
public void clear() {
    modCount++;
    Entry[] tab = table;
    for (int i = 0; i < tab.length; i++)
        tab[i] = null;
    size = 0;
}

It seems that the internal array (table) of Entry objects is never shrunk. So when I add 10000 elements to a map and then call map.clear(), it will keep 10000 nulls in its internal array. My question is: how does the JVM handle this array of nothing, and is HashMap therefore memory-effective?

Accepted answer by Joachim Sauer

The idea is that clear() is only called when you want to re-use the HashMap. An object should be reused for the same purpose it served before, so chances are you'll have roughly the same number of entries. To avoid pointless shrinking and resizing of the Map, the capacity is kept the same when clear() is called.

If all you want to do is discard the data in the Map, then you need not (and in fact should not) call clear() on it; simply clear all references to the Map itself, and it will eventually be garbage collected.
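To make the distinction concrete, here is a minimal sketch (the class and method names are made up for illustration) contrasting re-use via clear() with discarding the map by dropping the reference:

```java
import java.util.HashMap;
import java.util.Map;

public class ClearVsDiscard {

    // Re-use: clear() nulls out the entries but keeps the same backing
    // object alive, including its (possibly large) internal table.
    static Map<Integer, Integer> reuseWithClear(Map<Integer, Integer> map) {
        map.clear();            // mappings removed, capacity retained
        return map;             // same instance, ready for re-use
    }

    // Discard: dropping the only reference makes the whole map, internal
    // table included, eligible for garbage collection.
    static Map<Integer, Integer> discardAndReplace() {
        return new HashMap<>(); // fresh map with the default (small) capacity
    }

    public static void main(String[] args) {
        Map<Integer, Integer> map = new HashMap<>();
        for (int i = 0; i < 10_000; i++) {
            map.put(i, i);
        }

        Map<Integer, Integer> reused = reuseWithClear(map);
        System.out.println(reused == map);    // true: same object
        System.out.println(reused.isEmpty()); // true: all mappings removed

        map = discardAndReplace();            // old map is now unreachable
        System.out.println(map.isEmpty());    // true
    }
}
```

Re-use keeps the same instance and its grown table alive; replacing the reference lets the whole old map, table included, be collected.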

Answered by Konerak

You are right, but considering that growing the array is a much more expensive operation, it's not unreasonable for the HashMap to reason: "once the user has grown the array, chances are he'll need an array of this size again later", and simply keep the array rather than shrinking it and risking having to expand it expensively again. It's a heuristic, I guess; you could argue for the opposite approach too.

Answered by danben

Another thing to consider is that each element in table is simply a reference. Setting these entries to null removes the references to the items in your Map, which then become eligible for garbage collection. So it isn't as if you aren't freeing any memory at all.
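A short sketch of this point (the class name is invented for illustration): after clear(), the map no longer holds a reference to its former values, so a large value becomes collectable even though the internal table keeps its length:

```java
import java.util.HashMap;
import java.util.Map;

public class ClearReleasesEntries {
    public static void main(String[] args) {
        Map<String, int[]> map = new HashMap<>();
        map.put("big", new int[1_000_000]); // ~4 MB payload reachable via the map

        map.clear(); // the table slot is set to null, so the map no longer
                     // references the array

        // With no other references to it, the array is now eligible for
        // garbage collection, even though the table keeps its capacity.
        System.out.println(map.containsKey("big")); // false
        System.out.println(map.size());             // 0
    }
}
```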

However, if you need to free even the memory used by the Map itself, then you should release it as per Joachim Sauer's suggestion.

Answered by polygenelubricants

Looking at the source code, it does look like HashMap never shrinks. The resize method is called to double the capacity whenever required, but there is nothing analogous to ArrayList.trimToSize().

If you're using a HashMap in such a way that it often grows and shrinks dramatically, you may want to just create a new HashMap instead of calling clear().
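Since HashMap offers no trimToSize(), the usual workaround is to copy the surviving entries into a fresh map. The trimmedCopy helper below is hypothetical (not a JDK method), sketched for illustration:

```java
import java.util.HashMap;
import java.util.Map;

public class TrimMap {

    // Hypothetical helper (not part of the JDK): "trims" an oversized map by
    // copying its entries into a new HashMap sized for the current contents,
    // letting the old map and its large internal table be garbage collected.
    static <K, V> Map<K, V> trimmedCopy(Map<K, V> map) {
        return new HashMap<>(map);
    }

    public static void main(String[] args) {
        Map<Integer, String> big = new HashMap<>();
        for (int i = 0; i < 10_000; i++) {
            big.put(i, "v" + i);
        }
        big.keySet().removeIf(k -> k >= 10); // shrink to 10 entries; the
                                             // large table is still retained

        Map<Integer, String> small = trimmedCopy(big); // right-sized copy
        System.out.println(small.size()); // 10
    }
}
```

Dropping the reference to the original map afterwards lets its oversized table be collected, which is exactly the "create a new HashMap" approach suggested above.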