System.gc();
System.out.println((Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory()) / 1024 / 1024); // prints ~1M
Map<String, String> map = new HashMap<>();
for (int i = 0; i < 100000; i++) {
    map.put("key" + i, "i");
}
System.gc();
System.out.println((Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory()) / 1024 / 1024); // prints ~10M
map.clear();
System.gc();
System.out.println((Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory())); // prints less than 1M
It seems the memory is reduced when the clear method is called. However, from looking at other answers it seems clear never shrinks the HashMap. So why is the memory reduced?
CodePudding user response:
If you're referring to this question's answers, they're telling you that the entries array (table) in the HashMap is never shrunk. Instead, its entries are all set to null.

But clearing the map makes the 100,000 strings you created ("key0", "key1", ...) and their associated Map.Entry objects eligible for garbage collection, despite table not getting smaller.
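To see this directly, here's a small sketch of my own (not from the linked answers) that reads the internal table field via reflection. The field name is an OpenJDK implementation detail, and on Java 9+ the program needs --add-opens java.base/java.util=ALL-UNNAMED for the setAccessible call to succeed:

import java.lang.reflect.Field;
import java.util.HashMap;

public class ClearDemo {
    // Returns the length of the map's internal bucket array (its capacity).
    static int tableLength(HashMap<?, ?> map) throws Exception {
        Field f = HashMap.class.getDeclaredField("table");
        f.setAccessible(true);
        Object[] table = (Object[]) f.get(map);
        return table == null ? 0 : table.length;
    }

    public static void main(String[] args) throws Exception {
        HashMap<String, String> map = new HashMap<>();
        for (int i = 0; i < 100000; i++) {
            map.put("key" + i, "i");
        }
        System.out.println("capacity before clear: " + tableLength(map)); // grown well past the default 16
        map.clear();
        System.out.println("capacity after clear:  " + tableLength(map)); // unchanged
        System.out.println("size after clear:      " + map.size());       // 0
    }
}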
CodePudding user response:
It's an implementation detail, so the answer may change depending on the exact version of Java.
Here's the Java 8 implementation of HashMap::clear:
public void clear() {
    Node<K,V>[] tab;
    modCount++;
    if ((tab = table) != null && size > 0) {
        size = 0;
        for (int i = 0; i < tab.length; ++i)
            tab[i] = null;
    }
}
The table of buckets is completely emptied, but the table itself, and so the non-default capacity, is retained.
Regardless of the exact implementation, you would expect clearing to free up a significant chunk of memory, because all of those non-interned strings created by "key" + i become eligible for collection.
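For what it's worth, a quick illustrative sketch (my addition) showing that each such string is a distinct runtime object rather than a shared interned literal:

int i = 0;
String s = "key" + i;                 // built at runtime; not a compile-time constant
System.out.println(s == "key0");      // false: a distinct heap object from the interned literal
System.out.println(s.equals("key0")); // true: same contents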
If you really care about reducing the capacity back to the default, just replace the HashMap with a new instance, as sketched below.
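A minimal sketch (assuming nothing else still references the old map):

// Dropping the only reference to the old map lets the whole HashMap,
// including its grown table array, become eligible for garbage collection.
map = new HashMap<>();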