RFR: 8324573: HashMap::putAll should resize to sum of both map sizes
Volker Simonis
simonis at openjdk.org
Wed Jan 24 19:09:25 UTC 2024
On Wed, 24 Jan 2024 00:26:09 GMT, Joshua Cao <duke at openjdk.org> wrote:
> This change mirrors what we did for ConcurrentHashMap in https://github.com/openjdk/jdk/pull/17116. When we add all entries from one map to another, we should resize that map to the sum of both maps' sizes.
>
> I used the command below to run the benchmarks. I set a high heap to reduce garbage collection noise.
>
> java -Xms25G -jar benchmarks.jar -p size=100000 -p addSize=100000 -gc true org.openjdk.bench.java.util.HashMapBench
>
>
> Before change
>
>
> Benchmark            (addSize)        (mapType)  (size)  Mode  Cnt   Score   Error  Units
> HashMapBench.putAll     100000         HASH_MAP  100000  avgt    4  22.927 ± 3.170  ms/op
> HashMapBench.putAll     100000  LINKED_HASH_MAP  100000  avgt    4  25.198 ± 2.189  ms/op
>
>
> After change
>
>
> Benchmark            (addSize)        (mapType)  (size)  Mode  Cnt   Score   Error  Units
> HashMapBench.putAll     100000         HASH_MAP  100000  avgt    4  16.780 ± 0.526  ms/op
> HashMapBench.putAll     100000  LINKED_HASH_MAP  100000  avgt    4  19.721 ± 0.349  ms/op
>
>
> We see average time improvements of about 26% in HashMap and 20% in LinkedHashMap.
>
> The current benchmark and the change don't really cover the case where many keys exist in _both_ maps. Could you add a benchmark for that? Also `int s = size() + m.size();` can overflow now, leading to different behavior. It might make sense to just cap the value?
@SirYwell, I agree with the "*overflow*" part, which could easily be fixed by changing the type of `s` to double.
I don't understand the first part about "*the case where many keys exist in _both_ maps*". The benchmark and the results presented in the PR are for a hash map with 100000 elements into which we insert (i.e. `putAll()`) another 100000 elements. Or am I missing something?
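For context, the pre-sizing idea the PR describes can be sketched in isolation as follows. This is only a minimal, self-contained illustration of the principle (size the destination table for the combined number of mappings before copying, so no intermediate resizes happen), not the actual change to HashMap.putMapEntries; the helper name mergedCopy and the explicit capacity computation are made up for this sketch.

    import java.util.HashMap;
    import java.util.Map;

    public class PreSizedPutAll {

        // Size the destination for target.size() + source.size() entries at
        // the default load factor of 0.75 so that putAll never has to resize
        // mid-copy. This is roughly the effect the patch aims for inside
        // HashMap itself.
        static <K, V> Map<K, V> mergedCopy(Map<K, V> target, Map<K, V> source) {
            int expected = target.size() + source.size();
            Map<K, V> merged = new HashMap<>((int) (expected / 0.75f) + 1);
            merged.putAll(target);
            merged.putAll(source);
            return merged;
        }

        public static void main(String[] args) {
            Map<String, Integer> a = new HashMap<>(Map.of("x", 1, "y", 2));
            Map<String, Integer> b = new HashMap<>(Map.of("y", 3, "z", 4));
            System.out.println(mergedCopy(a, b)); // {x=1, y=3, z=4}, order may vary
        }
    }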
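On the overflow point: with the patch the pre-sizing value becomes size() + m.size(), and adding two large int sizes can wrap around to a negative number. A minimal sketch of the capping idea mentioned above might look like the following; combinedSize is a hypothetical helper for illustration, not code from the PR.

    public class SaturatingSum {

        // Do the addition in long and clamp at Integer.MAX_VALUE instead of
        // letting the int addition silently wrap around to a negative value.
        static int combinedSize(int currentSize, int addedSize) {
            long sum = (long) currentSize + (long) addedSize;
            return (int) Math.min(sum, Integer.MAX_VALUE);
        }

        public static void main(String[] args) {
            int a = 1_500_000_000, b = 1_500_000_000;
            System.out.println(a + b);              // -1294967296 (wrapped)
            System.out.println(combinedSize(a, b)); // 2147483647 (capped)
        }
    }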
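As for the "keys exist in _both_ maps" case: the point raised in the review is that the current benchmark does not specifically exercise a putAll in which most keys are already present in the target map, so the insert only overwrites values instead of growing the table. A JMH sketch of such a variant could look like the one below. The class, method, and parameter names are made up for illustration; the real harness is org.openjdk.bench.java.util.HashMapBench.

    import java.util.HashMap;
    import java.util.Map;
    import org.openjdk.jmh.annotations.Benchmark;
    import org.openjdk.jmh.annotations.Param;
    import org.openjdk.jmh.annotations.Scope;
    import org.openjdk.jmh.annotations.Setup;
    import org.openjdk.jmh.annotations.State;

    // Both maps are filled from the same key range, so every key in the
    // source already exists in the target and putAll only overwrites values
    // instead of adding new mappings.
    @State(Scope.Thread)
    public class OverlappingPutAllBench {

        @Param({"100000"})
        int size;

        Map<Integer, Integer> source;

        @Setup
        public void setup() {
            source = new HashMap<>();
            for (int i = 0; i < size; i++) {
                source.put(i, i);
            }
        }

        @Benchmark
        public Map<Integer, Integer> putAllOverlapping() {
            Map<Integer, Integer> target = new HashMap<>();
            for (int i = 0; i < size; i++) {   // same key range as source
                target.put(i, -i);
            }
            target.putAll(source);             // every key already present
            return target;
        }
    }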
-------------
PR Comment: https://git.openjdk.org/jdk/pull/17544#issuecomment-1908751920