Feedback on G1GC
kirk at kodewerk.com
Mon Dec 21 08:02:30 UTC 2015
Hi Charlie,
>
> The gist of the issue is whether G1 should reduce the size of eden space when MaxGCPauseMillis is exceeded.
>
> To pick up where this thread is going …
>
> If the workload is very reproducible, then is it unreasonable to ask for a run that enables ParallelRefProcEnabled on top of the set of command-line options that were used in the first run? How about we exercise some good practice here and change one configuration setting at a time?
Very much agreed here. It seems like a number of things changed, which is why I would also like to back off all the changes except ParallelRefProcEnabled, if that is at all possible.
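Just to be explicit about what "one change at a time" means here, the run I'd like to see is the first run's command line with only ParallelRefProcEnabled added. Everything in angle brackets below is a placeholder for whatever was actually used, not a suggestion:

    java <exactly the flags and heap sizes from the first run> \
         -XX:+ParallelRefProcEnabled \
         <the application>

That keeps the comparison down to a single variable.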
> And, let’s also ensure we have results that are reproducible. We have several unanswered questions between the first and second run: why did ref proc times drop so drastically, and is it all due to ParallelRefProcEnabled? How could a forced larger Eden size allow ref proc times to be reduced? Is the workload producing repeatable results / behavior?
Completely agree...
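As for getting the data to answer those questions, assuming the runs are on a JDK 8 era VM, a log with reference processing and the ergonomic decisions broken out would go a long way. Something along these lines (illustrative, not what either run actually used):

    -Xloggc:gc.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps \
    -XX:+PrintReferenceGC -XX:+PrintAdaptiveSizePolicy

PrintReferenceGC reports the per-reference-type processing times, and PrintAdaptiveSizePolicy reports the decisions G1 makes when it resizes Eden, which is exactly the behaviour being questioned here.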
>
> Aside from the specifics just mentioned, I think the key thing to understand here is the school of thought behind shrinking the size of Eden when GC pauses exceed MaxGCPauseMillis, and why it is not considered a good idea to grow the size of Eden in such a case. Perhaps one of the long-time GC engineers would like to join the fun? ;-)
>
> @Kirk: You mentioned, “reference processing times clearly dominated resulting in Eden being shrunk in a feeble attempt to meet the pause time goal”. Can you offer some better alternatives that G1 could apply adaptively to meet the pause time goal in the presence of high reference processing times, and, for bonus points, could you file those enhancements in JIRA so they can be further evaluated and vetted?
I wish I could, but at this point I don’t have a good answer. The problem is that I simply have too few G1 GC logs from real applications to see the problems clearly. My feeling is that adaptive sizing, as currently implemented, doesn’t take all of the factors into account, and that simply shrinking or growing the heap in reaction to meeting or missing a pause time goal may be too simplistic.

Unfortunately, one cannot really generalize from the results produced by this app, as there hasn’t been a methodical attempt to understand what is going on. What I can say is that quite often, if one gets a poor reaction to a tuning strategy, one should tune in the opposite direction. In this case taking memory away didn’t work, so the logical reaction is to add memory. That is what was done here, but many other things seem to have changed between runs, so we are left trying to discuss inconclusive results.

My understanding is that this is a production application, and if its current performance is acceptable I fear we’re done. I would like to see a run with the min shrinkage percent relaxed, but unless someone is willing to take the time and effort to move it to a proper test environment…..
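For the record, the run I have in mind, assuming the “min shrinkage percent” maps onto the experimental G1NewSizePercent floor on young generation size (my reading, not something confirmed in this thread, and the value below is purely illustrative), would be something like:

    -XX:+UnlockExperimentalVMOptions -XX:G1NewSizePercent=20

That would stop adaptive sizing from shrinking the young generation below 20% of the heap, which would tell us whether the pauses really are dominated by reference processing rather than by young generation size.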
Regards,
Kirk