potential causes of long remark pause?
Srinivas Ramakrishna
ysr1729 at gmail.com
Sat Apr 11 02:16:15 UTC 2015
Hi Brian --
Note that the JVM terminology "weak refs processing" covers not just Java's
WeakReferences but all subtypes of java.lang.Reference together; sorry if
that has caused confusion. It therefore includes the time the garbage
collector spends dealing with SoftReferences, WeakReferences,
FinalReferences, PhantomReferences and JNI weak global references. I'd go
with Jon's suggestion of turning on -XX:+PrintReferenceGC to see which kind
of references caused the issue in your case, and from that whether
-XX:+ParallelRefProcEnabled would help.
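To make that concrete, here is a minimal sketch (the class and variable names
are purely illustrative) of the Reference subtypes that all get walked in that
single "weak refs processing" phase; running something like it with
-XX:+PrintGCDetails -XX:+PrintReferenceGC shows the per-type counts and times:

import java.lang.ref.PhantomReference;
import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;
import java.lang.ref.SoftReference;
import java.lang.ref.WeakReference;

// Each of these, plus the FinalReferences the JVM creates for objects that
// override finalize() and the JNI weak global references created by native
// code, is discovered by CMS and then processed during the
// "weak refs processing" part of the remark pause.
public class ReferenceKinds {
    public static void main(String[] args) {
        ReferenceQueue<Object> queue = new ReferenceQueue<>();
        Object payload = new Object();

        Reference<Object> soft    = new SoftReference<>(payload, queue);    // cleared under memory pressure
        Reference<Object> weak    = new WeakReference<>(payload, queue);    // cleared once only weakly reachable
        Reference<Object> phantom = new PhantomReference<>(payload, queue); // enqueued once the referent is reclaimable

        payload = null;  // drop the only strong reference
        System.gc();     // with -XX:+PrintReferenceGC the GC log reports the
                         // count of, and time spent on, each reference type

        System.out.println(soft.get() + " / " + weak.get() + " / " + phantom.get());
    }
}
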
Relatedly, it might make sense for HotSpot's concurrent collectors to
consider whether some form of the Ugawa-Jones-Ritson technique could be
gainfully applied to reduce the pause time needed for Reference object
processing.
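For concreteness, the diagnostic step being suggested amounts to adding
something like the following to the existing command line (the "..." stands
in for the rest of the flags from the PrintCommandLineFlags output quoted
below; -XX:+ParallelRefProcEnabled is worth adding only once
-XX:+PrintReferenceGC confirms that reference processing is the consistent
offender):

java ... \
    -XX:+PrintGCDetails -XX:+PrintGCDateStamps \
    -XX:+PrintReferenceGC \
    -XX:+ParallelRefProcEnabled \
    ...
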
-- ramki
On Fri, Apr 10, 2015 at 4:04 PM, Brian Williams <brian.williams at mayalane.com> wrote:
> Thanks Jon. We noticed that as well. We do not create any WeakReferences
> within our application directly, and we couldn't find any in the libraries
> that we use. Perhaps they are coming from something within the JVM
> libraries. To our knowledge nothing has changed in the application load, so
> we were surprised to have such a large outlier. I guess we'll just see if
> it happens again and look for a pattern.
>
> Brian
>
> > On Apr 10, 2015, at 1:23 PM, Jon Masamitsu <jon.masamitsu at oracle.com> wrote:
> >
> > Brian,
> >
> > It looks like it's the weak reference processing.
> >
> > 2015-04-09T12:36:38.313-0700: [Rescan (parallel) , 0.0218780
> secs]2015-04-09T12:36:38.334-0700: [weak refs processing, 5.0944360 secs]
> >
> >
> > If you turn on PrintReferenceGC, you might get more information.
> > If you usually see long reference processing times, try
> > turning on ParallelRefProcEnabled. But if you only see
> > very intermittent spikes in reference processing times,
> > it might not be worth it (the parallel overhead will slow
> > down the remark times).
> >
> > Jon
> >
> >
> > On 4/9/2015 3:07 PM, Brian Williams wrote:
> >> I'm looking for ideas on what would lead to a 5-second CMS Remark
> pause. Looking through previous GC logs, the largest remark pause I could
> find was 0.58 seconds, with the average being 0.15 seconds. The snippet of
> the GC log that covers the full CMS cycle is below.
> >>
> >> We're using Java 1.7.0_55 running on "Linux 2.6.32-431.29.2.el6.x86_64
> amd64" with 24 virtual CPUs.
> >>
> >> Here are the JVM options as reported by -XX:+PrintCommandLineFlags:
> >>
> >> -XX:+CMSConcurrentMTEnabled -XX:CMSInitiatingOccupancyFraction=80
> -XX:CMSMaxAbortablePrecleanTime=3600000 -XX:+CMSParallelRemarkEnabled
> -XX:+CMSParallelSurvivorRemarkEnabled -XX:+CMSScavengeBeforeRemark
> -XX:ConcGCThreads=6 -XX:+HeapDumpOnOutOfMemoryError
> -XX:InitialHeapSize=94489280512 -XX:MaxHeapSize=94489280512
> -XX:MaxNewSize=2147483648 -XX:MaxTenuringThreshold=6 -XX:NewSize=2147483648
> -XX:OldPLABSize=16 -XX:PermSize=67108864 -XX:+PrintCommandLineFlags
> -XX:PrintFLSStatistics=1 -XX:+PrintGCDateStamps -XX:+PrintGCDetails
> -XX:RefDiscoveryPolicy=1 -XX:+UseBiasedLocking -XX:+UseConcMarkSweepGC
> -XX:+UseParNewGC
> >>
> >> And the details of the CMS collection with the long remark pause:
> >>
> >> 2015-04-09T12:36:10.393-0700: [GC [1 CMS-initial-mark:
> 45088837K(90177536K)] 45111889K(92065024K), 0.0296420 secs] [Times:
> user=0.01 sys=0.00, real=0.03 secs]
> >> 2015-04-09T12:36:24.055-0700: [CMS-concurrent-mark: 13.633/13.633 secs]
> [Times: user=75.95 sys=2.86, real=13.63 secs]
> >> 2015-04-09T12:36:24.371-0700: [CMS-concurrent-preclean: 0.315/0.315
> secs] [Times: user=0.44 sys=0.02, real=0.32 secs]
> >> 2015-04-09T12:36:29.659-0700: [GCBefore GC:
> >> Statistics for BinaryTreeDictionary:
> >> ------------------------------------
> >> Total Free Space: 1476244390
> >> Max Chunk Size: 1476244390
> >> Number of Blocks: 1
> >> Av. Block Size: 1476244390
> >> Tree Height: 1
> >> Before GC:
> >> Statistics for BinaryTreeDictionary:
> >> ------------------------------------
> >> Total Free Space: 0
> >> Max Chunk Size: 0
> >> Number of Blocks: 0
> >> Tree Height: 0
> >> 2015-04-09T12:36:29.660-0700: [ParNew: 1680929K->4322K(1887488K),
> 0.0794930 secs] 46769766K->45093205K(92065024K)After GC:
> >> Statistics for BinaryTreeDictionary:
> >> ------------------------------------
> >> Total Free Space: 1476237004
> >> Max Chunk Size: 1476237004
> >> Number of Blocks: 1
> >> Av. Block Size: 1476237004
> >> Tree Height: 1
> >> After GC:
> >> Statistics for BinaryTreeDictionary:
> >> ------------------------------------
> >> Total Free Space: 0
> >> Max Chunk Size: 0
> >> Number of Blocks: 0
> >> Tree Height: 0
> >> , 0.0797770 secs] [Times: user=1.40 sys=0.00, real=0.08 secs]
> >> 2015-04-09T12:36:38.230-0700: [CMS-concurrent-abortable-preclean:
> 13.774/13.859 secs] [Times: user=18.14 sys=0.50, real=13.86 secs]
> >> 2015-04-09T12:36:38.232-0700: [GC[YG occupancy: 845557 K (1887488
> K)]2015-04-09T12:36:38.232-0700: [GCBefore GC:
> >> Statistics for BinaryTreeDictionary:
> >> ------------------------------------
> >> Total Free Space: 1476237004
> >> Max Chunk Size: 1476237004
> >> Number of Blocks: 1
> >> Av. Block Size: 1476237004
> >> Tree Height: 1
> >> Before GC:
> >> Statistics for BinaryTreeDictionary:
> >> ------------------------------------
> >> Total Free Space: 0
> >> Max Chunk Size: 0
> >> Number of Blocks: 0
> >> Tree Height: 0
> >> 2015-04-09T12:36:38.232-0700: [ParNew: 845557K->4086K(1887488K),
> 0.0805340 secs] 45934440K->45093032K(92065024K)After GC:
> >> Statistics for BinaryTreeDictionary:
> >> ------------------------------------
> >> Total Free Space: 1476225574
> >> Max Chunk Size: 1476225574
> >> Number of Blocks: 1
> >> Av. Block Size: 1476225574
> >> Tree Height: 1
> >> After GC:
> >> Statistics for BinaryTreeDictionary:
> >> ------------------------------------
> >> Total Free Space: 0
> >> Max Chunk Size: 0
> >> Number of Blocks: 0
> >> Tree Height: 0
> >> , 0.0808020 secs] [Times: user=1.42 sys=0.00, real=0.08 secs]
> >> 2015-04-09T12:36:38.313-0700: [Rescan (parallel) , 0.0218780
> secs]2015-04-09T12:36:38.334-0700: [weak refs processing, 5.0944360
> secs]2015-04-09T12:36:43.429-0700: [scrub string table, 0.0013140 secs] [1
> CMS-remark: 45088945K(90177536K)] 45093032K(92065024K), 5.3492750 secs]
> [Times: user=6.98 sys=0.00, real=5.35 secs]
> >> CMS: Large Block: 0x00007f8e42f4aed0; Proximity: 0x0000000000000000 ->
> 0x00007f8e276f5b08
> >> CMS: Large block 0x00007f8e42f4aed0
> >> 2015-04-09T12:36:55.505-0700: [CMS-concurrent-sweep: 11.913/11.924
> secs] [Times: user=16.93 sys=0.26, real=11.93 secs]
> >> 2015-04-09T12:36:55.703-0700: [CMS-concurrent-reset: 0.187/0.187 secs]
> [Times: user=0.21 sys=0.00, real=0.19 secs]