G1 - Mixed and Full GC's
Wolfgang Pedot
wolfgang.pedot at finkzeit.at
Thu Dec 15 13:00:03 UTC 2016
It is my understanding that what gets collected during a mixed cycle is
determined at the beginning of the cycle, so if you have a high
allocation rate of short-lived humongous objects you can fill up the
heap with objects the concurrent cycle will not get rid of, even though
they would be reclaimable. Newer JRE versions can reclaim humongous
regions during young-generation collections, so they deal better with
this, but I guess it's still best to avoid humongous allocations where
easily possible.
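
Just to illustrate the "humongous" threshold with a quick sketch of my
own (not taken from your logs or the VM sources, the class and numbers
are made up, only the rule of thumb matters): with your 2048K regions,
any single allocation of at least half a region, roughly 1MB, bypasses
the young generation and goes straight into dedicated humongous regions.

import java.util.ArrayList;
import java.util.List;

// Rough sketch only: the real check lives inside the VM and also
// accounts for object/array headers, this just shows the rule of thumb.
public class HumongousSketch {

    static final long REGION_SIZE = 2L * 1024 * 1024; // 2048K regions, as in your log

    // G1 treats an allocation of at least half a region as humongous
    static boolean likelyHumongous(long allocationBytes) {
        return allocationBytes >= REGION_SIZE / 2;
    }

    public static void main(String[] args) {
        System.out.println(likelyHumongous(512 * 1024)); // false -> normal young-gen allocation
        System.out.println(likelyHumongous(15728656L));  // true  -> the 15MB request from your log

        // One way around it: keep individual buffers below the threshold,
        // e.g. a list of 512K chunks instead of one 15MB byte[]. Only
        // practical if the consuming code can work with chunks, of course.
        int total = 15728656;
        int chunk = 512 * 1024;
        List<byte[]> buffers = new ArrayList<>();
        for (int remaining = total; remaining > 0; remaining -= chunk) {
            buffers.add(new byte[Math.min(chunk, remaining)]);
        }
        System.out.println(buffers.size() + " chunks instead of one humongous array");
    }
}

The other direction is what I suggested below: with the maximum region
size (-XX:G1HeapRegionSize=32m) a 15MB request would not be humongous
at all any more, at the cost of coarser regions; whether that is a good
trade-off depends on your allocation pattern.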
Can you check in your log how long G1 usually keeps doing mixed
collections? -XX:+G1PrintRegionLivenessInfo may also be an interesting
option; it lets you see how many humongous objects are in your heap at
the start of a concurrent cycle and how full your regions are.
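
If I remember correctly it is a diagnostic flag, so it has to be
unlocked first; something like this added to your current option list
(please double-check on your exact build):

-XX:+UnlockDiagnosticVMOptions -XX:+G1PrintRegionLivenessInfo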
regards
Wolfgang Pedot
On 15.12.2016 at 13:29, Gopal, Prasanna CWK wrote:
>
> Hi Wolfgang
>
> Thanks for your reply. Yes, I agree we have a high GC overhead. We have
> the following settings for CPU cores / GC threads.
>
> virtualProcessorCount=48
>
> ParallelGCThreads=8
>
> > AFAIR humongous regions are only collected during mixed/full GC in
> > your version.
>
> This is my understanding as well, but I am not sure why these
> objects were not collected during mixed GC cycles. The mixed cycle is
> seeing only 29 MB as reclaimable, but the Full GC was able to reclaim
> 4041.8 MB within 100 ms.
>
> 45349.970: [G1Ergonomics (Mixed GCs) do not start mixed GCs, reason:
> reclaimable percentage not over threshold, candidate old regions: 46
> regions, reclaimable: 31277496 bytes (0.42 %), threshold: 10.00 %]
>
> Regards
>
> Prasanna
>
> -----Original Message-----
> From: hotspot-gc-use [mailto:hotspot-gc-use-bounces at openjdk.java.net]
> On Behalf Of Wolfgang Pedot
> Sent: 15 December 2016 12:10
> To: hotspot-gc-use at openjdk.java.net
> Subject: Re: G1 - Mixed and Full GC's
>
> Hi,
>
> not sure about my "expert"-ness but something caught my eye:
>
> you do have a pretty high GC overhead there (recent GC overhead: 30.14
> %); it is possible that G1 simply cannot keep up with your
> allocations and that this leads to a full GC. How many CPU cores does
> your machine have, and how many GC threads are used?
>
> Can you determine the size of your humongous objects? Getting rid of
> those by increasing the region size could help; AFAIR humongous
> regions are only collected during mixed/full GC in your version.
>
> regards
>
> Wolfgang Pedot
>
> On 15.12.2016 at 10:44, Gopal, Prasanna CWK wrote:
>
> >
>
> > Hi Experts
>
> >
>
> > We have one of our application running G1 with following
>
> > configuration
>
> >
>
> > -Xmx7G
>
> >
>
> > -Xms7G
>
> >
>
> > -XX:+UseG1GC
>
> >
>
> > -XX:+UnlockExperimentalVMOptions
>
> >
>
> > -XX:InitiatingHeapOccupancyPercent=60
>
> >
>
> > -XX:G1ReservePercent=20
>
> >
>
> > -XX:G1MixedGCLiveThresholdPercent=85
>
> >
>
> > -XX:MaxGCPauseMillis=500
>
> >
>
> > -XX:+ParallelRefProcEnabled
>
> >
>
> > -XX:+PrintAdaptiveSizePolicy
>
> >
>
> > -XX:+PrintHeapAtGC
>
> >
>
> > -XX:+PrintReferenceGC
>
> >
>
> > -XX:+PrintGCDateStamps
>
> >
>
> > -XX:+PrintTenuringDistribution
>
> >
>
> > -XX:+PrintGCApplicationConcurrentTime
>
> >
>
> > -XX:+PrintGCApplicationStoppedTime
>
> >
>
> > -Duser.timezone=America/Los_Angeles
>
> >
>
> > -d64 -Dserver
>
> >
>
> > JDK: jdk_7u40_x64 (yes, we need to move to the latest JDK)
>
> >
>
> > We are seeing the following behaviour
>
> >
>
> > End of a concurrent cycle
>
> >
>
> > 2016-12-14T12:59:01.307-0500: 45342.193: Total time for which
>
> > application threads were stopped: 0.0189970 seconds
>
> >
>
> > 2016-12-14T12:59:01.307-0500: 45342.193: Application time: 0.0001190
>
> > seconds
>
> >
>
> > 2016-12-14T12:59:01.308-0500: 45342.193: [GC cleanup
>
> > 5093M->3488M(7168M), 0.0075650 secs]
>
> >
>
> > [Times: user=0.06 sys=0.00, real=0.01 secs]
>
> >
>
> > 2016-12-14T12:59:01.315-0500: 45342.201: Total time for which
>
> > application threads were stopped: 0.0081250 seconds
>
> >
>
> > 2016-12-14T12:59:01.315-0500: 45342.201: [GC concurrent-cleanup-start]
>
> >
>
> > 2016-12-14T12:59:01.318-0500: 45342.203: [GC concurrent-cleanup-end,
>
> > 0.0022490 secs]
>
> >
>
> > After 8 seconds, a minor GC kicks in and decides not to start any
>
> > mixed GCs
>
> >
>
> > 2016-12-14T12:59:09.083-0500: 45349.969: [PhantomReference, 0 refs,
>
> > 0.0001880 secs]2016-12-14T12:59:09.083-0500: 45349.969: [JNI Weak
>
> > Reference, 0.0000050 secs] 45349.970: [G1Ergonomics (Heap Sizing)
>
> > attempt heap expansion, reason: recent GC overhead higher than
>
> > threshold after GC, recent GC overhead: 30.14 %, threshold: 10.00 %,
>
> > uncommitted: 0 bytes, calculated expansion amount: 0 bytes (20.00 %)]
>
> >
>
> > 45349.970: [G1Ergonomics (Concurrent Cycles) do not request
>
> > concurrent cycle initiation, reason: still doing mixed collections,
>
> > occupancy: 6926893056 bytes, allocation request: 0 bytes, threshold:
>
> > 4509715620 bytes (60.00 %), source: end of GC]
>
> >
>
> > 45349.970: [G1Ergonomics (Mixed GCs) do not start mixed GCs, reason:
>
> > reclaimable percentage not over threshold, candidate old regions: 46
>
> > regions, reclaimable: 31277496 bytes (0.42 %), threshold: 10.00 %]
>
> >
>
> > , 0.0765450 secs]
>
> >
>
> > [Parallel Time: 74.1 ms, GC Workers: 8]
>
> >
>
> > [GC Worker Start (ms): Min: 45349894.0, Avg: 45349894.1, Max:
>
> > 45349894.2, Diff: 0.3]
>
> >
>
> > [Ext Root Scanning (ms): Min: 4.3, Avg: 5.1, Max: 6.4, Diff:
>
> > 2.1, Sum: 41.2]
>
> >
>
> > [Update RS (ms): Min: 64.6, Avg: 66.2, Max: 67.1, Diff: 2.5,
>
> > Sum: 529.2]
>
> >
>
> > [Processed Buffers: Min: 324, Avg: 363.8, Max: 524, Diff:
>
> > 200, Sum: 2910]
>
> >
>
> > [Scan RS (ms): Min: 0.0, Avg: 0.5, Max: 0.6, Diff: 0.6, Sum:
>
> > 3.7]
>
> >
>
> > [Object Copy (ms): Min: 1.6, Avg: 2.0, Max: 2.3, Diff: 0.7, Sum:
>
> > 15.9]
>
> >
>
> > [Termination (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum:
>
> > 0.0]
>
> >
>
> > [GC Worker Other (ms): Min: 0.0, Avg: 0.0, Max: 0.1, Diff: 0.0,
>
> > Sum: 0.4]
>
> >
>
> > [GC Worker Total (ms): Min: 73.7, Avg: 73.8, Max: 73.9, Diff:
>
> > 0.2, Sum: 590.3]
>
> >
>
> > [GC Worker End (ms): Min: 45349967.8, Avg: 45349967.9, Max:
>
> > 45349967.9, Diff: 0.0]
>
> >
>
> > [Code Root Fixup: 0.0 ms]
>
> >
>
> > [Clear CT: 0.1 ms]
>
> >
>
> > [Other: 2.3 ms]
>
> >
>
> > [Choose CSet: 0.0 ms]
>
> >
>
> > [Ref Proc: 1.2 ms]
>
> >
>
> > [Ref Enq: 0.2 ms]
>
> >
>
> > [Free CSet: 0.6 ms]
>
> >
>
> > [Eden: 310.0M(1382.0M)->0.0B(352.0M) Survivors: 4096.0K->6144.0K
>
> > Heap: 6603.4M(7168.0M)->6297.9M(7168.0M)]
>
> >
>
> > Heap after GC invocations=4395 (full 207):
>
> >
>
> > garbage-first heap total 7340032K, used 6449092K
>
> > [0x000000063ae00000, 0x00000007fae00000, 0x00000007fae00000)
>
> >
>
> > region size 2048K, 3 young (6144K), 3 survivors (6144K)
>
> >
>
> > compacting perm gen total 77824K, used 77354K [0x00000007fae00000,
>
> > 0x00000007ffa00000, 0x0000000800000000)
>
> >
>
> > the space 77824K, 99% used [0x00000007fae00000,
>
> > 0x00000007ff98a840, 0x00000007ff98aa00, 0x00000007ffa00000)
>
> >
>
> > No shared spaces configured.
>
> >
>
> > }
>
> >
>
> > [Times: user=0.60 sys=0.00, real=0.08 secs]
>
> >
>
> > Immediately after this, another concurrent cycle kicks in
>
> >
>
> > 45349.974: [G1Ergonomics (Concurrent Cycles) request concurrent cycle
>
> > initiation, reason: occupancy higher than threshold, occupancy:
>
> > 6943670272 bytes, allocation request: 15728656 bytes, threshold:
>
> > 4509715620 bytes (60.00 %), source: concurrent humongous allocation]
>
> >
>
> > 2016-12-14T12:59:09.088-0500: 45349.974: Application time: 0.0004930
>
> > seconds
>
> >
>
> > 45349.974: [G1Ergonomics (Concurrent Cycles) request concurrent cycle
>
> > initiation, reason: requested by GC cause, GC cause: G1 Humongous
>
> > Allocation]
>
> >
>
> > Before the concurrent cycle can complete, a Full GC kicks in
>
> >
>
> > 2016-12-14T12:59:09.195-0500: 45350.081: [Full
>
> > GC2016-12-14T12:59:11.445-0500: 45352.331: [SoftReference, 765 refs,
>
> > 0.0001920 secs]2016-12-14T12:59:11.445-0500: 45352.331:
>
> > [WeakReference, 2417 refs, 0.0004400
>
> > secs]2016-12-14T12:59:11.446-0500: 45352.331: [FinalReference, 898
>
> > refs, 0.0001680 secs]2016-12-14T12:59:11.446-0500: 45352.332:
>
> > [PhantomReference, 10 refs, 0.0000060
>
> > secs]2016-12-14T12:59:11.446-0500: 45352.332: [JNI Weak Reference,
>
> > 0.0000190 secs] 6557M->2515M(7168M), 6.1312980 secs]
>
> >
>
> > [Eden: 2048.0K(348.0M)->0.0B(3070.0M) Survivors: 10.0M->0.0B Heap:
>
> > 6557.1M(7168.0M)->2515.3M(7168.0M)]
>
> >
>
> > It would be helpful for us to understand:
>
> >
>
> > 1) If a Full GC can clean up nearly 4 GB
>
> > (6557.1M(7168.0M)->2515.3M(7168.0M)), why didn't the mixed GCs kick
>
> > in earlier? Does this mean all our old regions are tightly packed
>
> > (live occupancy greater than G1MixedGCLiveThresholdPercent=85)?
>
> >
>
> > 2) We also see a high amount of humongous object allocation. Could it
>
> > be the case that only Full GCs can remove the humongous objects?
>
> >
>
> > 3) Are there any other tuning parameters we can use to mitigate this
>
> > scenario?
>
> >
>
> > Please do let me know if you need any more information. Thanks in
>
> > advance for your help.
>
> >
>
> > Regards
>
> >
>
> > Prasanna
>
> >
>
>
> > _______________________________________________
>
> > hotspot-gc-use mailing list
>
> > hotspot-gc-use at openjdk.java.net
>
> > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use
>
> _______________________________________________
>
> hotspot-gc-use mailing list
>
> hotspot-gc-use at openjdk.java.net
>
> http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use
>
>