RFR: 8338977: Parallel: Improve heap resizing heuristics [v3]
Zhengyu Gu
zgu at openjdk.org
Mon Jul 7 00:42:41 UTC 2025
On Sun, 6 Jul 2025 20:59:49 GMT, Albert Mingkun Yang <ayang at openjdk.org> wrote:
>> Returning `null` outside of a safepoint does have unexpected negative effects, e.g. `HeapDumpOnOutOfMemoryError` may not get a good heap dump: other threads can come in and run additional GCs, and the heap dump may then contain confusingly little live data.
>>
>> I wonder if you can improve this situation.
>
> Let me try to rephrase your concern to make sure I understand you correctly: after `_gc_overhead_counter >= GCOverheadLimitThreshold`, it's possible that another GC is triggered, which reclaims a large amount of memory and resets `_gc_overhead_counter` to zero. Then `HeapDumpOnOutOfMemoryError` will not be able to capture the intended heap snapshot.
>
> Is my understanding correct?
>
> (It seems the baseline also resets the condition and can encounter the same problem.)
>
>
> if (limit_exceeded && softrefs_clear) {
>   *gc_overhead_limit_was_exceeded = true;
>   size_policy()->set_gc_overhead_limit_exceeded(false);
Yes. I don't think this is a problem only for the exceeding-overhead-limit path; the OOM and the heap dump have to be, in a sense, `atomic`. We encountered this problem in a real production system.
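
To illustrate what I mean by `atomic`, here is a minimal standalone sketch of the race. This is not HotSpot code; all names (gc_overhead_counter, run_full_gc, dump_heap, GC_OVERHEAD_LIMIT_THRESHOLD) are made up for illustration. The thread that hits the overhead limit decides to fail the allocation outside a safepoint, another thread runs one more GC that frees memory and resets the counter, and the heap dump taken afterwards no longer shows the state that caused the failure.

// Sketch only: models the race between overhead-limit detection and the
// heap dump; none of these names exist in HotSpot.
#include <atomic>
#include <cstdio>
#include <thread>

static std::atomic<int> gc_overhead_counter{0};
static std::atomic<long long> live_bytes{0};
static const int GC_OVERHEAD_LIMIT_THRESHOLD = 5;

// Stand-in for a full GC: reclaims most memory and resets the counter,
// erasing the condition the allocating thread just observed.
static void run_full_gc() {
  live_bytes.store(16 * 1024);
  gc_overhead_counter.store(0);
}

// Stand-in for the HeapDumpOnOutOfMemoryError snapshot.
static void dump_heap(const char* when) {
  std::printf("[%s] live_bytes=%lld overhead_counter=%d\n",
              when, live_bytes.load(), gc_overhead_counter.load());
}

int main() {
  // Heap is nearly full and the overhead limit has just been reached.
  live_bytes.store(2LL * 1024 * 1024 * 1024);
  gc_overhead_counter.store(GC_OVERHEAD_LIMIT_THRESHOLD);

  // Allocating thread decides to fail the allocation, but nothing keeps
  // the heap dump atomic with that decision.
  bool limit_exceeded =
      gc_overhead_counter.load() >= GC_OVERHEAD_LIMIT_THRESHOLD;

  // Another thread slips in with one more GC before the dump happens.
  std::thread other(run_full_gc);
  other.join();

  if (limit_exceeded) {
    dump_heap("OOM heap dump");  // shows confusingly little live data
  }
  return 0;
}

Whatever the exact mechanism, the point is that nothing ties the dump to the moment the limit was detected.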
-------------
PR Review Comment: https://git.openjdk.org/jdk/pull/25000#discussion_r2188761699