JDK-8166500 Adaptive sizing for IHOP causes excessively long mixed GC pauses
yu.zhang at oracle.com
Wed Oct 12 15:31:07 UTC 2016
Ildar,
Another thing you can try is to increase G1HeapWastePercent to get rid
of the expensive mixed GCs. From the log snippet, the heap is not tight.
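For example (an illustrative value, not a recommendation - the default is
5, if I remember correctly):

  -XX:G1HeapWastePercent=10

With that setting G1 stops doing mixed collections once the reclaimable
space falls below 10% of the heap, so the most expensive old regions are
never added to a collection set.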
Thanks
Jenny
On 10/12/2016 05:51 AM, Thomas Schatzl wrote:
> Hi,
>
> On Wed, 2016-10-12 at 11:39 +0300, Ildar Nurislamov wrote:
>> Hi Thomas,
>>
>> It was too early to make conclusions.
>> After some prolonged testing I've noticed that more thorough tuning
>> may be required to avoid this issue completely, and
>> -XX:-G1UseAdaptiveIHOP is not always enough either.
>>
>> What bothers me is the steep jump in time between the last
>> Mixed GC and the previous one:
>> In the 9th it took 129.8ms to evacuate 104 old regions:
>> [64394.771s][info][gc,phases   ] GC(38781)   Evacuate Collection Set: 129.8ms
>> [64394.771s][info][gc,phases   ] GC(38781)   Code Roots: 0.0ms
>> [64394.771s][info][gc,phases   ] GC(38781)   Clear Card Table: 3.4ms
>> [64394.771s][info][gc,phases   ] GC(38781)   Expand Heap After Collection: 0.0ms
>> [64394.771s][info][gc,phases   ] GC(38781)   Free Collection Set: 3.9ms
>> [64394.771s][info][gc,phases   ] GC(38781)   Merge Per-Thread State: 0.1ms
>> [64394.771s][info][gc,phases   ] GC(38781)   Other: 13.8ms
>> [64394.771s][info][gc,heap     ] GC(38781) Eden regions: 37->0(37)
>> [64394.771s][info][gc,heap     ] GC(38781) Survivor regions: 3->3(5)
>> [64394.771s][info][gc,heap     ] GC(38781) Old regions: 457->353
>> [64394.771s][info][gc,heap     ] GC(38781) Humongous regions: 3->3
>> [64394.771s][info][gc,metaspace] GC(38781) Metaspace: 70587K->70587K(83968K)
>> [64394.771s][info][gc          ] GC(38781) Pause Mixed (G1 Evacuation Pause) 15972M->11457M(65536M) (64394.620s, 64394.771s) 150.931ms
>>
>> While in the 10th (the last) it took 3401.3ms to evacuate 87:
>> [64398.393s][info][gc,phases   ] GC(38782)   Evacuate Collection Set: 3401.3ms
>> [64398.393s][info][gc,phases   ] GC(38782)   Code Roots: 0.0ms
>> [64398.393s][info][gc,phases   ] GC(38782)   Clear Card Table: 2.8ms
>> [64398.393s][info][gc,phases   ] GC(38782)   Expand Heap After Collection: 0.0ms
>> [64398.393s][info][gc,phases   ] GC(38782)   Free Collection Set: 4.3ms
>> [64398.393s][info][gc,phases   ] GC(38782)   Merge Per-Thread State: 0.1ms
>> [64398.393s][info][gc,phases   ] GC(38782)   Other: 12.2ms
>> [64398.393s][info][gc,heap     ] GC(38782) Eden regions: 37->0(37)
>> [64398.393s][info][gc,heap     ] GC(38782) Survivor regions: 3->3(5)
>> [64398.393s][info][gc,heap     ] GC(38782) Old regions: 353->266
>> [64398.393s][info][gc,heap     ] GC(38782) Humongous regions: 3->3
>> [64398.393s][info][gc,metaspace] GC(38782) Metaspace: 70587K->70587K(83968K)
>> [64398.393s][info][gc          ] GC(38782) Pause Mixed (G1 Evacuation Pause) 12641M->8678M(65536M) (64394.973s, 64398.393s) 3420.666ms
>>
>> It looks like on average old regions in the 10th Mixed GC were 31.5
>> times more expensive than in the 9th, and it took 39ms to collect just
>> one region. Does that make sense? To what extent can one old region be
>> more expensive than another?
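>> (That is: 3401.3ms / 87 regions ≈ 39.1ms per region in the 10th,
>> versus 129.8ms / 104 regions ≈ 1.25ms per region in the 9th.)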
> Mostly remembered set operations.
>
>> I wish a G1Ergonomics rule similar to "reason: predicted time is too
>> high", but for order-of-magnitude jumps, had kicked in here even though
>> the minimum number of old regions had not been reached. We hadn't used
>> up all of -XX:G1MixedGCCountTarget=12 here yet.
>>
>> Log file: https://www.dropbox.com/s/ubpkosh0a8tomss/jdk9_135_tuned_11_10_16.log.zip?dl=0
>> Sadly it contains no ergonomics output.
>>
>> The next thing I'm going to try is adjusting
>> -XX:G1MixedGCLiveThresholdPercent.
> I did not have time to look at the logs yet, but you can try to avoid
> this by either increasing G1MixedGCCountTarget further - as you noticed,
> it is only a hint for G1 anyway - or by trying to get rid of these
> expensive regions. One way is to decrease G1MixedGCLiveThresholdPercent
> (default 85), as regions with high occupancy often also have a large
> remembered set that is expensive to reclaim.
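>
> A combination along these lines might be worth a try (illustrative
> values only, to be validated against your workload; note that
> G1MixedGCLiveThresholdPercent is an experimental option and needs
> -XX:+UnlockExperimentalVMOptions):
>
>   -XX:G1MixedGCCountTarget=16 -XX:G1MixedGCLiveThresholdPercent=75
>
> The former spreads old region evacuation over more pauses, the latter
> excludes regions that are more than 75% live from the mixed collection
> candidates.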
>
> Another way to explore this is to look at statistics for remembered set
> sizes directly. There is -XX:G1SummarizeRSetStatsPeriod, which takes a
> number telling G1 to collect and print these statistics every
> G1SummarizeRSetStatsPeriod'th GC. Note that this is an expensive
> operation, so you might only want to do it every 10th or so GC (it needs
> -XX:+UnlockDiagnosticVMOptions).
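>
> For example (my assumption is that on JDK 9 the output additionally
> needs the gc+remset log tags enabled at trace level - please
> double-check on your build):
>
>   java -XX:+UnlockDiagnosticVMOptions -XX:G1SummarizeRSetStatsPeriod=10 \
>        -Xlog:gc+remset=trace ...
>
> That prints a remembered set summary every 10th GC, which should show
> whether the regions left for the last mixed GC carry outsized
> remembered sets.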
>
> Thanks,
> Thomas
>