Odd pause with ParNew
Srinivas Ramakrishna
ysr1729 at gmail.com
Mon Jan 23 17:45:34 PST 2012
Hi Greg -- one of the first things I check in such cases is whether that
particular scavenge happened to copy much more data than the rest, since
scavenge times are directly proportional to copy/allocation costs. The second
thing I check is whether the scavenge immediately followed a compacting
collection (for which there is a known allocation pathology with an existing CR).
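To illustrate that first check (a hypothetical helper, not part of the original mail): a small script could pull the surviving volume and pause time out of each ParNew record of a -XX:+PrintGCDetails log, so a copy spurt shows up next to its pause:

```python
import re

# Two sample ParNew records in -XX:+PrintGCDetails style; the second one
# mirrors the 0.97 s scavenge from Greg's log, the first is a made-up
# "normal" scavenge for contrast.
LOG = """\
: 610922K->34048K(613440K), 0.0512340 secs] 9900000K->9600000K(12514816K), 0.0513000 secs]
: 610922K->68096K(613440K), 0.9741030 secs] 10005726K->9689043K(12514816K), 0.9742160 secs]
"""

# Matches the young-gen part of a record: "occupied->surviving(capacity), pause secs]"
PAT = re.compile(r": (\d+)K->(\d+)K\(\d+K\), ([\d.]+) secs\]")

def scavenges(log):
    """Yield (surviving_kb, pause_secs) for each ParNew record in the log."""
    for line in log.splitlines():
        m = PAT.search(line)  # first match on the line is the young gen
        if m:
            yield int(m.group(2)), float(m.group(3))

for surviving_kb, secs in scavenges(LOG):
    print(f"survived {surviving_kb} KB in {secs:.3f} s")
```

If the slow scavenge shows roughly proportionally more surviving data than its neighbours, the copy cost alone explains the pause.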
For the first check above, there isn't sufficient surrounding info/context, but
the "new threshold 1 (max 6)" line indicates that there may have been a spurt
in promotion (that is what the sudden lowering of the tenuring threshold
suggests), so you may find upon further analysis that there was indeed a sudden
jump in surviving objects (and hence perhaps a concomitant increase in objects
copied).
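The arithmetic behind that inference, using the numbers from Greg's log (a sketch of the policy, not actual JVM code): the tenuring threshold is lowered to the first age at which the cumulative surviving bytes overflow the desired survivor size.

```python
# Numbers taken directly from Greg's log excerpt.
desired_survivor_size = 34865152   # "Desired survivor size 34865152 bytes"
age1_bytes            = 69728352   # "- age 1: 69728352 bytes, 69728352 total"

# Age 1 alone already overflows the survivor target, so the adaptive
# tenuring policy clamps the threshold to 1 ("new threshold 1 (max 6)").
overflow = age1_bytes - desired_survivor_size
assert age1_bytes > desired_survivor_size
print(f"age-1 survivors exceed the survivor target by {overflow} bytes")
```

That overflow of ~33 MB of age-1 survivors is consistent with a sudden jump in surviving (and therefore copied) objects.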
From the safepoint stats you present, it doesn't look like we'd have to look
beyond GC to find an explanation (i.e. safepointing or non-GC processing do not
seem to be implicated here).
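As a quick sanity check on that reading (illustrative only): summing the time columns of the matching safepoint entry shows that almost all of the ~1.0 s stop is the vmop itself (the ParNew), not safepoint synchronization.

```python
# Column layout from -XX:+PrintSafepointStatistics output:
#   [time: spin block sync cleanup vmop]   (all in milliseconds)
fields = ["spin", "block", "sync", "cleanup", "vmop"]

# The 38668.316 GenCollectForAllocation entry from Greg's log.
entry = dict(zip(fields, [0, 0, 30, 0, 974]))

total = sum(entry.values())
print(f"total stop ~{total} ms, of which vmop (the GC itself) = {entry['vmop']} ms")
```

That 1004 ms total matches the reported "Total time for which application threads were stopped: 1.0050700 seconds", with only 30 ms spent reaching the safepoint.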
HTH.
-- ramki
On Mon, Jan 23, 2012 at 2:39 PM, Greg Bowyer <gbowyer at fastmail.co.uk> wrote:
> Hi all, while working through my GC issues I have found that the fix for CMS
> weak references is making my GC far more predictable.
>
> However, I occasionally see a ParNew collection of 1 second; outside of the
> number of VM operations for the safepoint there is nothing that would seem
> to be an issue. Does anyone know why this ParNew claims a 1-second pause?
>
> This is for a JDK compiled from the tip of the jdk7/jdk7 repo in OpenJDK,
> with the recent fix for CMS parallel marking.
>
> ---- %< ----
> Total time for which application threads were stopped: 0.0005490 seconds
> Application time: 0.7253910 seconds
> 38668.349: [GC 38668.349: [ParNew38669.323: [SoftReference, 0 refs, 0.0000160 secs]38669.323: [WeakReference, 12 refs, 0.0000080 secs]38669.323: [FinalReference, 0 refs, 0.0000040 secs]38669.323: [PhantomReference, 0 refs, 0.0000060 secs]38669.323: [JNI Weak Reference, 0.0000080 secs]
> Desired survivor size 34865152 bytes, new threshold 1 (max 6)
> - age 1: 69728352 bytes, 69728352 total
> : 610922K->68096K(613440K), 0.9741030 secs] 10005726K->9689043K(12514816K), 0.9742160 secs] [Times: user=7.26 sys=0.01, real=0.97 secs]
> Total time for which application threads were stopped: 1.0050700 seconds
> ---- >% ----
>
> This matches the following safepoint (I guess):
> ---- %< ----
> 38668.316: GenCollectForAllocation [ 55 7 10 ] [ 0 0 30 0 974 ] 7
> ---- >% ----
>
> Which appears, for reference, in the surrounding safepoint log:
>
> ---- %< ----
> vmop [threads: total initially_running wait_to_block] [time: spin block sync cleanup vmop] page_trap_count
> 38643.172: GenCollectForAllocation [ 55 9 12 ] [ 0 0 148 0 130 ] 9
> 38646.203: GenCollectForAllocation [ 55 12 12 ] [ 2 0 22 0 139 ] 12
> 38648.293: GenCollectForAllocation [ 55 13 13 ] [ 1 0 1 0 144 ] 13
> 38649.852: CMS_Final_Remark [ 55 14 14 ] [ 0 0 1 0 168 ] 14
> 38652.328: FindDeadlocks [ 55 16 16 ] [ 0 0 59 0 0 ] 16
> 38652.387: FindDeadlocks [ 55 4 4 ] [ 0 0 0 0 0 ] 4
> 38652.762: GenCollectForAllocation [ 55 9 9 ] [ 0 0 24 0 132 ] 9
> 38655.586: GenCollectForAllocation [ 55 10 10 ] [ 0 1 9 0 294 ] 10
> 38656.961: GenCollectForAllocation [ 55 6 6 ] [ 0 0 0 0 215 ] 5
> 38658.125: GenCollectForAllocation [ 55 3 4 ] [ 0 0 1 0 91 ] 2
> 38658.223: CMS_Initial_Mark [ 55 2 4 ] [ 0 0 0 0 6 ] 1
> 38658.926: GenCollectForAllocation [ 55 6 6 ] [ 0 0 1 0 102 ] 6
> 38661.621: GenCollectForAllocation [ 55 5 5 ] [ 0 0 1 0 72 ] 5
> 38663.527: GenCollectForAllocation [ 55 7 7 ] [ 0 0 0 0 56 ] 7
> 38665.180: GenCollectForAllocation [ 55 5 5 ] [ 0 0 1 0 659 ] 4
> 38667.230: GenCollectForAllocation [ 55 11 11 ] [ 0 0 0 0 88 ] 11
> 38667.566: FindDeadlocks [ 55 13 13 ] [ 0 0 0 0 0 ] 13
> 38667.566: FindDeadlocks [ 55 5 5 ] [ 0 0 0 0 0 ] 5
> 38667.582: RevokeBias [ 55 10 11 ] [ 0 0 8 0 0 ] 9
> > 38668.316: GenCollectForAllocation [ 55 7 10 ] [ 0 0 30 0 974 ] 7
> 38670.551: GenCollectForAllocation [ 55 10 10 ] [ 0 0 1 0 391 ] 10
> 38671.875: GenCollectForAllocation [ 55 6 6 ] [ 0 0 25 0 409 ] 6
> 38674.230: GenCollectForAllocation [ 55 11 11 ] [ 1 0 1 0 415 ] 10
> 38676.121: GenCollectForAllocation [ 55 1 1 ] [ 0 0 0 0 558 ] 0
> 38677.691: GenCollectForAllocation [ 55 7 9 ] [ 0 0 1 0 388 ] 6
> 38679.488: GenCollectForAllocation [ 55 6 9 ] [ 0 0 10 0 297 ] 6
> 38680.367: CMS_Final_Remark [ 55 12 12 ] [ 0 0 0 0 310 ] 12
> 38682.016: GenCollectForAllocation [ 55 10 11 ] [ 0 0 0 0 295 ] 10
> 38683.473: FindDeadlocks [ 55 6 8 ] [ 1 0 9 0 0 ] 4
> 38683.484: FindDeadlocks [ 55 7 7 ] [ 0 0 0 0 0 ] 7
> 38683.867: GenCollectForAllocation [ 55 7 8 ] [ 1 0 2 0 198 ] 7
> 38685.070: GenCollectForAllocation [ 55 8 8 ] [ 0 0 0 0 430 ] 8
> 38687.312: GenCollectForAllocation [ 55 6 8 ] [ 0 0 1 0 287 ] 6
> 38690.094: GenCollectForAllocation [ 55 8 10 ] [ 0 0 16 0 49 ] 7
> 38692.910: CMS_Initial_Mark [ 55 3 3 ] [ 0 0 1 0 129 ] 2
> 38694.043: no vm operation [ 55 7 9 ] [ 0 0 644 0 0 ] 5
> 38696.605: GenCollectForAllocation [ 55 9 9 ] [ 0 0 272 0 90 ] 9
> 38698.535: FindDeadlocks [ 55 8 13 ] [ 0 0 40 0 0 ] 8
> 38698.578: FindDeadlocks [ 55 8 8 ] [ 0 0 3 0 0 ] 8
> 38702.559: GenCollectForAllocation [ 55 6 6 ] [ 0 0 1 0 55 ] 6
> 38709.008: GenCollectForAllocation [ 55 1 1 ] [ 0 0 0 0 48 ] 0
> 38712.719: CMS_Final_Remark [ 55 3 3 ] [ 0 0 0 0 51 ] 2
> 38713.625: FindDeadlocks [ 55 2 2 ] [ 0 0 0 0 0 ] 0
> 38713.625: FindDeadlocks [ 55 2 2 ] [ 0 0 0 0 0 ] 2
> 38720.492: CMS_Initial_Mark [ 55 4 4 ] [ 0 0 1 0 88 ] 4
> 38721.059: GenCollectForAllocation [ 55 4 4 ] [ 0 0 1 0 58 ] 3
> 38725.684: GenCollectForAllocation [ 55 3 4 ] [ 0 0 0 0 57 ] 2
> 38725.742: CMS_Final_Remark [ 55 2 4 ] [ 0 0 0 0 254 ] 2
> 38729.410: FindDeadlocks [ 55 5 5 ] [ 0 0 0 0 0 ] 5
> 38729.410: FindDeadlocks [ 55 5 5 ] [ 0 0 0 0 0 ] 5
> 38730.527: GenCollectForAllocation [ 55 11 10 ] [ 1 0 16 0 43 ] 8
> 38734.977: GenCollectForAllocation [ 55 2 2 ] [ 0 0 0 0 53 ] 0
>
> ---- >% ----
>
> Any thoughts? Many thanks.
>
> -- Greg
> _______________________________________________
> hotspot-gc-use mailing list
> hotspot-gc-use at openjdk.java.net
> http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use
>