High Reference Processing/Object Copy time
Sundara Mohan M
email.sundarms at gmail.com
Thu Jul 13 15:51:57 UTC 2017
Thanks for the tuning link.
I have two more JVMs (31 G heap, CMS GC) running on the same machine, but
their GC logs never showed sys time > 1.4; only the JVM running with G1GC
shows very high sys time.
Another thing I noticed is that its resident size is high compared to the
processes running CMS GC.
 PID USER PR NI  VIRT   RES SHR S  %CPU %MEM     TIME+ COMMAND
1438 test 20  0 42.0g   27g 34m S 283.8 21.7   5232:24 java .    >>> CMS GC
 619 test 20  0 42.0g   26g 34m S 105.0 21.3   5085:47 java .    >>> CMS GC
 600 test 20  0 43.4g *33g* 33m S   4.3 26.9 361:54.17 java .    >>> G1GC
Is this something to do with G1GC's region-based allocation?
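
One way to confirm how much of that resident set is THP-backed is to sum
the AnonHugePages entries from smaps (a diagnostic sketch; PID 600 is the
G1GC process in the listing above):

  # total THP-backed anonymous memory for the G1 process, in MB
  grep AnonHugePages /proc/600/smaps | awk '{ sum += $2 } END { print sum/1024 " MB" }'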
Thanks
Sundar
On Thu, Jul 13, 2017 at 1:08 AM, Thomas Schatzl <thomas.schatzl at oracle.com>
wrote:
> Hi,
>
> copying the answer from the hotspot-gc-dev list here (thanks!):
>
> The documentation offers some more potential issues when there is high
> system time:
> https://docs.oracle.com/javase/9/gctuning/garbage-first-garbage-collector-tuning.htm#GUID-8D9B2530-E370-4B8B-8ADD-A43674FC6658
> (The section is applicable to both JDK 8 and 9.)
>
> The VM/garbage collector is a user level program. High system time (at
> least as high as in your snippet) strongly indicates a problem in the
> environment (or in the interaction with your environment, i.e. memory
> or I/O related).
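>
> A quick way to rule swapping and paging in or out while such a pause
> happens (a sketch; sar needs the sysstat package installed):
>
>   vmstat 1     # nonzero si/so columns mean pages are being swapped in/out
>   sar -B 1     # paging statistics: faults, page scans, reclaim activity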
>
> [...]
>
> Thanks,
> Thomas
>
> On Wed, 2017-07-12 at 23:52 -0700, Sundara Mohan M wrote:
> > Sending to the correct list,
> > --
> > ________________________________
> > From: hotspot-gc-dev <hotspot-gc-dev-bounces at openjdk.java.net> on
> > behalf of Sundara Mohan M <email.sundarms at gmail.com>
> > Sent: Thursday, July 13, 2017 4:11:41 AM
> > To: hotspot-gc-dev at openjdk.java.net
> > Subject: High Reference Processing/Object Copy time
> >
> > Hi,
> > I am observing an odd behaviour (a one-off, very high ref proc time)
> > with G1GC.
> >
> > Flags used, followed by a GC log snippet:
> > Java HotSpot(TM) 64-Bit Server VM (25.112-b15) for linux-amd64 JRE
> > (1.8.0_112-b15), built on Sep 22 2016 21:10:53 by "java_re" with gcc
> > 4.3.0 20080428 (Red Hat 4.3.0-8)
> > Memory: 4k page, physical 132290100k(1065596k free), swap
> > 132120572k(131992000k free)
> > CommandLine flags: -XX:G1MaxNewSizePercent=30
> > -XX:G1OldCSetRegionThresholdPercent=20 -XX:GCLogFileSize=20971520
> > -XX:+HeapDumpOnOutOfMemoryError
> > -XX:HeapDumpPath=out-of-memory-heap-dump
> > -XX:InitialHeapSize=33285996544 -XX:MaxGCPauseMillis=500
> > -XX:MaxHeapSize=33285996544 -XX:MetaspaceSize=536870912
> > -XX:NumberOfGCLogFiles=20 -XX:+ParallelRefProcEnabled
> > -XX:+PrintAdaptiveSizePolicy -XX:+PrintGC
> > -XX:+PrintGCApplicationStoppedTime -XX:+PrintGCDateStamps
> > -XX:+PrintGCDetails -XX:+PrintGCTimeStamps
> > -XX:+PrintTenuringDistribution -XX:+UnlockExperimentalVMOptions
> > -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseG1GC
> > -XX:+UseGCLogFileRotation -XX:+UseStringDeduplication
> >
> >
> > ....
> > 2017-07-12T17:02:40.227+0000: 77743.943: [GC pause (G1 Evacuation Pause) (young)
> > Desired survivor size 104857600 bytes, new threshold 2 (max 15)
> > - age 1: 38456192 bytes, 38456192 total
> > - age 2: 86746408 bytes, 125202600 total
> > 77743.943: [G1Ergonomics (CSet Construction) start choosing CSet, _pending_cards: 149039, predicted base time: 374.57 ms, remaining time: 125.43 ms, target pause time: 500.00 ms]
> > 77743.943: [G1Ergonomics (CSet Construction) add young regions to CSet, eden: 174 regions, survivors: 24 regions, predicted young region time: 1277.98 ms]
> > 77743.943: [G1Ergonomics (CSet Construction) finish choosing CSet, eden: 174 regions, survivors: 24 regions, old: 0 regions, predicted pause time: 1652.55 ms, target pause time: 500.00 ms]
> > 77751.132: [G1Ergonomics (Concurrent Cycles) request concurrent cycle initiation, reason: occupancy higher than threshold, occupancy: 21147680768 bytes, allocation request: 0 bytes, threshold: 14978698425 bytes (45.00 %), source: end of GC]
> > , 7.1891696 secs]
> >    [Parallel Time: 2253.1 ms, GC Workers: 13]
> >       [GC Worker Start (ms): Min: 77743943.2, Avg: 77743943.3, Max: 77743943.4, Diff: 0.2]
> >       [Ext Root Scanning (ms): Min: 1.7, Avg: 3.5, Max: 6.5, Diff: 4.8, Sum: 44.9]
> >       [Update RS (ms): Min: 39.2, Avg: 42.4, Max: 45.1, Diff: 5.9, Sum: 551.8]
> >          [Processed Buffers: Min: 26, Avg: 57.4, Max: 78, Diff: 52, Sum: 746]
> >       [Scan RS (ms): Min: 1.8, Avg: 3.7, Max: 4.5, Diff: 2.7, Sum: 47.5]
> >       [Code Root Scanning (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.0]
> >       [Object Copy (ms): Min: 2198.1, Avg: 2198.7, Max: 2202.7, Diff: 4.6, Sum: 28583.3]
> >       [Termination (ms): Min: 0.0, Avg: 4.5, Max: 4.9, Diff: 4.9, Sum: 58.4]
> >          [Termination Attempts: Min: 1, Avg: 16.7, Max: 28, Diff: 27, Sum: 217]
> >       [GC Worker Other (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.4]
> >       [GC Worker Total (ms): Min: 2252.7, Avg: 2252.8, Max: 2252.9, Diff: 0.2, Sum: 29286.3]
> >       [GC Worker End (ms): Min: 77746196.1, Avg: 77746196.1, Max: 77746196.1, Diff: 0.0]
> >    [Code Root Fixup: 0.1 ms]
> >    [Code Root Purge: 0.0 ms]
> >    [String Dedup Fixup: 167.7 ms, GC Workers: 13]
> >       [Queue Fixup (ms): Min: 0.0, Avg: 0.4, Max: 1.2, Diff: 1.2, Sum: 5.1]
> >       [Table Fixup (ms): Min: 165.5, Avg: 165.9, Max: 166.3, Diff: 0.9, Sum: 2156.9]
> >    [Clear CT: 1.5 ms]
> >    [Other: 4766.8 ms]
> >       [Choose CSet: 0.0 ms]
> >       [Ref Proc: 4763.9 ms]
> >       [Ref Enq: 0.8 ms]
> >       [Redirty Cards: 0.7 ms]
> >       [Humongous Register: 0.2 ms]
> >       [Humongous Reclaim: 0.1 ms]
> >       [Free CSet: 0.4 ms]
> >    [Eden: 1392.0M(1392.0M)->0.0B(1440.0M) Survivors: 192.0M->144.0M Heap: 20.8G(31.0G)->19.6G(31.0G)]
> >    [Times: user=22.82 sys=13.83, real=7.19 secs]
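> >
> > (For scale, from the [Times] line above: user 22.82 s + sys 13.83 s =
> > 36.65 s of CPU over 7.19 s of wall time, i.e. roughly 5 of the 13 GC
> > workers' worth of CPU, with sys accounting for nearly 40 % of it.)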
> >
> > Questions:
> > 1. Is there a way to find out why Ref Proc took 4.7 s on this occasion
> > only? In all other instances it was under a second.
> > 2. Why did object copy take 2.1 s, even though the young gen is only
> > about 1.3 G here and not much garbage was collected?
> > 3. Why does this happen only occasionally, and is there a way to
> > enable more logging when it does?
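> >
> > (On question 3, a suggestion rather than a flag from the command line
> > above: on JDK 8, -XX:+PrintReferenceGC makes the collector break the
> > Ref Proc time down by reference type, which would show where the 4.7 s
> > went.)
> >
> >   -XX:+PrintReferenceGC   # logs per-type times: SoftReference, WeakReference,
> >                           # FinalReference, PhantomReference, JNI Weak Reference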
> >
> >
> > Thanks,
> > Sundar
> > Date: Thu, 13 Jul 2017 04:58:51 +0000
> > From: Bernd Eckenfels <ecki at zusammenkunft.net>
> > To: "hotspot-gc-dev at openjdk.java.net"
> > <hotspot-gc-dev at openjdk.java.net>
> > Subject: Re: High Reference Processing/Object Copy time
> >
> > The sys time is very high in this snippet; how do the other snippets
> > compare? Did you turn off transparent huge pages (THP) in your OS, and
> > have you verified that no swapping is happening?
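> >
> > (A sketch for checking, and if necessary disabling, THP at runtime; on
> > some older RHEL kernels the path is redhat_transparent_hugepage:)
> >
> >   cat /sys/kernel/mm/transparent_hugepage/enabled    # [always] means THP is on
> >   echo never > /sys/kernel/mm/transparent_hugepage/enabled    # disable, as root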
> >
> > BTW: this is more a discussion for the user mailing list.
> >
> > Gruss
> > Bernd
> > --
> > http://bernd.eckenfels.net
> > ________________________________
> >
> > I do see THP being used by the process; there are a lot of entries
> > like this one:
> > /proc/619/smaps:AnonHugePages: 120832 kB
> >
> > There is also some swap activity on the system.
> > The machine has 124 G of RAM.
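> >
> > (If the swap activity lines up with the long pauses, lowering the
> > kernel's eagerness to swap is one knob to try; a sketch, not a verified
> > fix for this case:)
> >
> >   sysctl vm.swappiness        # default is typically 60
> >   sysctl -w vm.swappiness=1   # as root; prefer reclaiming cache over swapping anon pages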
> >
> > Thanks
> > Sundar
> >
> >
> > _______________________________________________
> > hotspot-gc-use mailing list
> > hotspot-gc-use at openjdk.java.net
> > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use
>