From prasanna.gopal at blackrock.com  Thu Dec 15 09:44:09 2016
From: prasanna.gopal at blackrock.com (Gopal, Prasanna CWK)
Date: Thu, 15 Dec 2016 09:44:09 +0000
Subject: G1 - Mixed and Full GC's
Message-ID: <37d0012dc87e475abe6c464ca4d6be23@UKPMSEXD202N02.na.blkint.com>

Hi Experts,

We have one of our applications running G1 with the following configuration:

-Xmx7G
-Xms7G
-XX:+UseG1GC
-XX:+UnlockExperimentalVMOptions
-XX:InitiatingHeapOccupancyPercent=60
-XX:G1ReservePercent=20
-XX:G1MixedGCLiveThresholdPercent=85
-XX:MaxGCPauseMillis=500
-XX:+ParallelRefProcEnabled
-XX:+PrintAdaptiveSizePolicy
-XX:+PrintHeapAtGC
-XX:+PrintReferenceGC
-XX:+PrintGCDateStamps
-XX:+PrintTenuringDistribution
-XX:+PrintGCApplicationConcurrentTime
-XX:+PrintGCApplicationStoppedTime
-Duser.timezone=America/Los_Angeles
-d64 -Dserver

JDK: jdk_7u40_x64 (yes, we know we need to move to a newer JDK)
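As a quick sanity check on the IHOP setting (my own arithmetic, so please correct me if it is off):

    heap capacity  = 7 * 1024^3        = 7516192768 bytes
    IHOP threshold = 7516192768 * 0.60 = 4509715660 bytes

which, give or take rounding, is the "threshold: 4509715620 bytes (60.00 %)" that shows up in the logs below.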
We are seeing the following behaviour.

End of a concurrent cycle:

2016-12-14T12:59:01.307-0500: 45342.193: Total time for which application threads were stopped: 0.0189970 seconds
2016-12-14T12:59:01.307-0500: 45342.193: Application time: 0.0001190 seconds
2016-12-14T12:59:01.308-0500: 45342.193: [GC cleanup 5093M->3488M(7168M), 0.0075650 secs]
 [Times: user=0.06 sys=0.00, real=0.01 secs]
2016-12-14T12:59:01.315-0500: 45342.201: Total time for which application threads were stopped: 0.0081250 seconds
2016-12-14T12:59:01.315-0500: 45342.201: [GC concurrent-cleanup-start]
2016-12-14T12:59:01.318-0500: 45342.203: [GC concurrent-cleanup-end, 0.0022490 secs]

Eight seconds later, a minor GC kicks in and decides not to do any mixed GCs:

2016-12-14T12:59:09.083-0500: 45349.969: [PhantomReference, 0 refs, 0.0001880 secs]2016-12-14T12:59:09.083-0500: 45349.969: [JNI Weak Reference, 0.0000050 secs] 45349.970: [G1Ergonomics (Heap Sizing) attempt heap expansion, reason: recent GC overhead higher than threshold after GC, recent GC overhead: 30.14 %, threshold: 10.00 %, uncommitted: 0 bytes, calculated expansion amount: 0 bytes (20.00 %)]
45349.970: [G1Ergonomics (Concurrent Cycles) do not request concurrent cycle initiation, reason: still doing mixed collections, occupancy: 6926893056 bytes, allocation request: 0 bytes, threshold: 4509715620 bytes (60.00 %), source: end of GC]
45349.970: [G1Ergonomics (Mixed GCs) do not start mixed GCs, reason: reclaimable percentage not over threshold, candidate old regions: 46 regions, reclaimable: 31277496 bytes (0.42 %), threshold: 10.00 %]
, 0.0765450 secs]
   [Parallel Time: 74.1 ms, GC Workers: 8]
      [GC Worker Start (ms): Min: 45349894.0, Avg: 45349894.1, Max: 45349894.2, Diff: 0.3]
      [Ext Root Scanning (ms): Min: 4.3, Avg: 5.1, Max: 6.4, Diff: 2.1, Sum: 41.2]
      [Update RS (ms): Min: 64.6, Avg: 66.2, Max: 67.1, Diff: 2.5, Sum: 529.2]
         [Processed Buffers: Min: 324, Avg: 363.8, Max: 524, Diff: 200, Sum: 2910]
      [Scan RS (ms): Min: 0.0, Avg: 0.5, Max: 0.6, Diff: 0.6, Sum: 3.7]
      [Object Copy (ms): Min: 1.6, Avg: 2.0, Max: 2.3, Diff: 0.7, Sum: 15.9]
      [Termination (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.0]
      [GC Worker Other (ms): Min: 0.0, Avg: 0.0, Max: 0.1, Diff: 0.0, Sum: 0.4]
      [GC Worker Total (ms): Min: 73.7, Avg: 73.8, Max: 73.9, Diff: 0.2, Sum: 590.3]
      [GC Worker End (ms): Min: 45349967.8, Avg: 45349967.9, Max: 45349967.9, Diff: 0.0]
   [Code Root Fixup: 0.0 ms]
   [Clear CT: 0.1 ms]
   [Other: 2.3 ms]
      [Choose CSet: 0.0 ms]
      [Ref Proc: 1.2 ms]
      [Ref Enq: 0.2 ms]
      [Free CSet: 0.6 ms]
   [Eden: 310.0M(1382.0M)->0.0B(352.0M) Survivors: 4096.0K->6144.0K Heap: 6603.4M(7168.0M)->6297.9M(7168.0M)]
Heap after GC invocations=4395 (full 207):
 garbage-first heap   total 7340032K, used 6449092K [0x000000063ae00000, 0x00000007fae00000, 0x00000007fae00000)
  region size 2048K, 3 young (6144K), 3 survivors (6144K)
 compacting perm gen  total 77824K, used 77354K [0x00000007fae00000, 0x00000007ffa00000, 0x0000000800000000)
   the space 77824K,  99% used [0x00000007fae00000, 0x00000007ff98a840, 0x00000007ff98aa00, 0x00000007ffa00000)
No shared spaces configured.
}
 [Times: user=0.60 sys=0.00, real=0.08 secs]

Immediately after this, another concurrent cycle kicks in:

45349.974: [G1Ergonomics (Concurrent Cycles) request concurrent cycle initiation, reason: occupancy higher than threshold, occupancy: 6943670272 bytes, allocation request: 15728656 bytes, threshold: 4509715620 bytes (60.00 %), source: concurrent humongous allocation]
2016-12-14T12:59:09.088-0500: 45349.974: Application time: 0.0004930 seconds
45349.974: [G1Ergonomics (Concurrent Cycles) request concurrent cycle initiation, reason: requested by GC cause, GC cause: G1 Humongous Allocation]

Before the concurrent cycle can complete, a Full GC kicks in:

2016-12-14T12:59:09.195-0500: 45350.081: [Full GC2016-12-14T12:59:11.445-0500: 45352.331: [SoftReference, 765 refs, 0.0001920 secs]2016-12-14T12:59:11.445-0500: 45352.331: [WeakReference, 2417 refs, 0.0004400 secs]2016-12-14T12:59:11.446-0500: 45352.331: [FinalReference, 898 refs, 0.0001680 secs]2016-12-14T12:59:11.446-0500: 45352.332: [PhantomReference, 10 refs, 0.0000060 secs]2016-12-14T12:59:11.446-0500: 45352.332: [JNI Weak Reference, 0.0000190 secs] 6557M->2515M(7168M), 6.1312980 secs]
 [Eden: 2048.0K(348.0M)->0.0B(3070.0M) Survivors: 10.0M->0.0B Heap: 6557.1M(7168.0M)->2515.3M(7168.0M)]

It would be helpful for us to understand:

1) If a Full GC can clean up nearly 4 GB (Heap: 6557.1M(7168.0M)->2515.3M(7168.0M)), why didn't mixed GCs kick in earlier? Does this mean all our old regions are tightly packed (more live than G1MixedGCLiveThresholdPercent=85)?

2) We also see a high amount of humongous object allocation. Could it be the case that only Full GCs can remove the humongous objects? (See the illustration below.)

3) Are there any other tuning parameters we can use to mitigate this scenario?
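For 2), our understanding (please correct us if this is wrong) is that G1 allocates any single object of at least half a region directly into humongous regions, so with the 2048K region size from the heap printout above, anything from 1 MB upwards never goes through the young generation. Illustration only, not our real code:

    // With 2048K regions, any allocation >= 1024K is humongous.
    byte[] small = new byte[512 * 1024];      // regular young-gen allocation
    byte[] big   = new byte[2 * 1024 * 1024]; // humongous, needs 2 contiguous regions

    // The "allocation request: 15728656 bytes" in the log above would need
    // ceil(15728656 / 2097152) = 8 contiguous free regions.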
Please do let me know if you need any more information. Thanks in advance for your help.

Regards
Prasanna


From wolfgang.pedot at finkzeit.at  Thu Dec 15 12:10:12 2016
From: wolfgang.pedot at finkzeit.at (Wolfgang Pedot)
Date: Thu, 15 Dec 2016 13:10:12 +0100
Subject: G1 - Mixed and Full GC's
In-Reply-To: <37d0012dc87e475abe6c464ca4d6be23@UKPMSEXD202N02.na.blkint.com>
References: <37d0012dc87e475abe6c464ca4d6be23@UKPMSEXD202N02.na.blkint.com>
Message-ID: <58528824.4010005@finkzeit.at>

Hi,

not sure about my "expert"-ness, but something caught my eye: you have a pretty high GC overhead there (recent GC overhead: 30.14 %), so it is possible that G1 simply cannot keep up with your allocations, and that is what leads to a full GC. How many CPU cores does your machine have, and how many GC threads are used?

Can you determine the size of your humongous objects? Getting rid of them by increasing the region size could help; AFAIR humongous regions are only collected during mixed/full GCs in your version.
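Something like this, for example (only a sketch; the region size must be a power of two between 1m and 32m, so pick it based on the object sizes you actually find):

    java -Xms7G -Xmx7G -XX:+UseG1GC -XX:G1HeapRegionSize=8m ...

With 8m regions, only allocations of 4m and above would still be humongous.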
regards
Wolfgang Pedot
From prasanna.gopal at blackrock.com  Thu Dec 15 12:29:25 2016
From: prasanna.gopal at blackrock.com (Gopal, Prasanna CWK)
Date: Thu, 15 Dec 2016 12:29:25 +0000
Subject: G1 - Mixed and Full GC's
In-Reply-To: <58528824.4010005@finkzeit.at>
References: <37d0012dc87e475abe6c464ca4d6be23@UKPMSEXD202N02.na.blkint.com> <58528824.4010005@finkzeit.at>
Message-ID: <8fd10c74db7a4a1ab9504438da8a5542@UKPMSEXD202N02.na.blkint.com>

Hi Wolfgang,

Thanks for your reply. Yes, I agree we have high GC overhead. We have the following settings for CPU cores / GC threads:

virtualProcessorCount=48
ParallelGCThreads=8

> AFAIR humongous regions are only collected during mixed/full GC in your version.

This is my understanding as well, but I am not sure why these objects were not collected during the mixed GC cycles. The mixed cycle sees only 29 MB as reclaimable, yet the Full GC was able to reclaim 4041.8 MB (6557.1M -> 2515.3M):

45349.970: [G1Ergonomics (Mixed GCs) do not start mixed GCs, reason: reclaimable percentage not over threshold, candidate old regions: 46 regions, reclaimable: 31277496 bytes (0.42 %), threshold: 10.00 %]
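If I read that line correctly, the 0.42 % is just

    31277496 / (7 * 1024^3) = 0.42 % of the heap

which is far below the 10 % threshold (I believe that threshold is G1HeapWastePercent), so G1 decides mixed collections are not worth the effort. What puzzles us is why only 46 old regions with ~30 MB reclaimable are candidates at all when a Full GC can free ~4 GB.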
Regards
Prasanna
From wolfgang.pedot at finkzeit.at  Thu Dec 15 13:00:03 2016
From: wolfgang.pedot at finkzeit.at (Wolfgang Pedot)
Date: Thu, 15 Dec 2016 14:00:03 +0100
Subject: G1 - Mixed and Full GC's
In-Reply-To: <8fd10c74db7a4a1ab9504438da8a5542@UKPMSEXD202N02.na.blkint.com>
References: <37d0012dc87e475abe6c464ca4d6be23@UKPMSEXD202N02.na.blkint.com> <58528824.4010005@finkzeit.at> <8fd10c74db7a4a1ab9504438da8a5542@UKPMSEXD202N02.na.blkint.com>
Message-ID: <585293D3.3060400@finkzeit.at>

It is my understanding that what gets collected during a mixed cycle is determined at the beginning of the cycle, so if you have a high allocation rate of short-lived humongous objects you can fill up the heap with garbage that the concurrent cycle will not get rid of, even though it would be reclaimable. Newer JRE versions can reclaim humongous regions during young-generation collects, so they deal with this better, but I guess it is still best to avoid humongous allocations where easily possible.

Can you check in your log how long G1 usually keeps doing mixed collects?

-XX:+G1PrintRegionLivenessInfo may also be an interesting option; it lets you see how many humongous objects are in your heap at the start of a concurrent cycle and how full your regions are.
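If I remember correctly it is a diagnostic flag, so enabling it should look roughly like this (from memory, please double-check):

    java -XX:+UnlockDiagnosticVMOptions -XX:+G1PrintRegionLivenessInfo ...

Expect verbose output: it prints a line per region, and a 7168M heap with 2048K regions has up to 3584 of them per marking cycle.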
regards
Wolfgang Pedot
From yu.zhang at oracle.com  Thu Dec 15 17:28:20 2016
From: yu.zhang at oracle.com (yu.zhang at oracle.com)
Date: Thu, 15 Dec 2016 09:28:20 -0800
Subject: G1 - Mixed and Full GC's
In-Reply-To: <585293D3.3060400@finkzeit.at>
References: <37d0012dc87e475abe6c464ca4d6be23@UKPMSEXD202N02.na.blkint.com> <58528824.4010005@finkzeit.at> <8fd10c74db7a4a1ab9504438da8a5542@UKPMSEXD202N02.na.blkint.com> <585293D3.3060400@finkzeit.at>
Message-ID: <43487cd7-9bbe-c5d9-9bbe-f82f03515a74@oracle.com>

Hi,

In the cleanup pause, about 1.6G is reclaimed (5093M->3488M). Those regions are freed because everything in them is dead; it is possible they are regions for humongous objects. As Wolfgang already mentioned, in later JDKs certain types of humongous objects can be collected during young GCs.

To be more specific on the comments about what is collected during a mixed GC cycle: it is determined after remark, not at the beginning of the marking cycle.

As to why the mixed GC sees only 29M reclaimable while the full GC can reclaim 4041.8M, there could be several reasons:

1. -XX:G1MixedGCLiveThresholdPercent=85 is still too low. This means that if a region is more than 85% live, it is not considered a candidate for mixed GC. But this might not be the reason, since the full GC cleans up 4041.8M.

2. In your PrintAdaptiveSizePolicy output there is information about how the collection set for a mixed GC is decided. Can you post it? Though not likely, there is a chance it excludes some regions because the pause-time goal has been reached.

3. Full GC cleans everything, including classes and other references. In later JDKs classes are unloaded during the marking cycle, but I am not sure whether JDK 7 has that.

4. As Wolfgang mentioned, maybe the allocation rate is too high for G1 to keep up.

Can you try:

1. Increasing the region size? It will not help the 15M allocation, but it may help reduce other humongous objects.

2. Lowering -XX:InitiatingHeapOccupancyPercent=60 (the default is 45) to start the marking cycle earlier.

3. Depending on how long the marking cycle is, increasing the concurrent marking threads? (A combined sketch follows below.)
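As a sketch only (the concrete numbers are illustrative starting points to validate against your logs, not a recommendation):

    java -Xms7G -Xmx7G -XX:+UseG1GC \
         -XX:G1HeapRegionSize=8m \
         -XX:InitiatingHeapOccupancyPercent=45 \
         -XX:ConcGCThreads=4 \
         ...

ConcGCThreads sets the concurrent marking threads; by default it is derived from ParallelGCThreads (roughly (ParallelGCThreads + 2) / 4), so with ParallelGCThreads=8 you are currently getting 2.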
Thanks
Jenny
From prasanna.gopal at blackrock.com  Thu Dec 15 17:52:28 2016
From: prasanna.gopal at blackrock.com (Gopal, Prasanna CWK)
Date: Thu, 15 Dec 2016 17:52:28 +0000
Subject: G1 - Mixed and Full GC's
In-Reply-To: <43487cd7-9bbe-c5d9-9bbe-f82f03515a74@oracle.com>
References: <37d0012dc87e475abe6c464ca4d6be23@UKPMSEXD202N02.na.blkint.com> <58528824.4010005@finkzeit.at> <8fd10c74db7a4a1ab9504438da8a5542@UKPMSEXD202N02.na.blkint.com> <585293D3.3060400@finkzeit.at> <43487cd7-9bbe-c5d9-9bbe-f82f03515a74@oracle.com>
Message-ID: <8fdca83ee4124a80b44e60cb0d8b6de8@UKPMSEXD202N02.na.blkint.com>

Hi Jenny / Wolfgang,

Thanks for your comments.

@Jenny - please find the adaptive size policy output you requested; it is not picking any old regions. It also does not look like the pause-time goal was reached, since we are not running a very aggressive pause-time target.

45349.894: [G1Ergonomics (CSet Construction) start choosing CSet, _pending_cards: 735623, predicted base time: 79.06 ms, remaining time: 420.94 ms, target pause time: 500.00 ms]
45349.894: [G1Ergonomics (CSet Construction) add young regions to CSet, eden: 155 regions, survivors: 2 regions, predicted young region time: 26.61 ms]
45349.894: [G1Ergonomics (CSet Construction) finish choosing CSet, eden: 155 regions, survivors: 2 regions, old: 0 regions, predicted pause time: 105.67 ms, target pause time: 500.00 ms]
2016-12-14T12:59:09.082-0500: 45349.968: [SoftReference, 0 refs, 0.0003840 secs]2016-12-14T12:59:09.083-0500: 45349.969: [WeakReference, 13 refs, 0.0002010 secs]2016-12-14T12:59:09.083-0500: 45349.969: [FinalReference, 137 refs, 0.0002550 secs]2016-12-14T12:59:09.083-0500: 45349.969: [PhantomReference, 0 refs, 0.0001880 secs]2016-12-14T12:59:09.083-0500: 45349.969: [JNI Weak Reference, 0.0000050 secs] 45349.970: [G1Ergonomics (Heap Sizing) attempt heap expansion, reason: recent GC overhead higher than threshold after GC, recent GC overhead: 30.14 %, threshold: 10.00 %, uncommitted: 0 bytes, calculated expansion amount: 0 bytes (20.00 %)]
45349.970: [G1Ergonomics (Concurrent Cycles) do not request concurrent cycle initiation, reason: still doing mixed collections, occupancy: 6926893056 bytes, allocation request: 0 bytes, threshold: 4509715620 bytes (60.00 %), source: end of GC]
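Reading those numbers (assuming we interpret them correctly): the predicted pause is 79.06 ms base + 26.61 ms young = 105.67 ms against a 500 ms target, so roughly 400 ms of pause budget went unused. Old regions were not dropped for time reasons; the ergonomics simply never started mixed GCs.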
Those are cleaned because everything in the region is dead. It is possible those are regions for humongous objects. Wolfgang mentioned this already, in later jdk, certain type of humongous objects can be collected during young gc. To be more specific on the comments about what is collected during a mixed gc cycle, it is determined after REMARK, not the beginning of the marking cycle. About the question why mixed gc sees only 29M while full gc can claim 4041.8M, there could be several reasons: 1. -XX:G1MixedGCLiveThresholdPercent=85 is still too low. This means if the region is above 85% full, they are not considered to be candidate of mixed gc. But it might not be the reason, since full gc cleans up 4041.8M. 2. In your PrintAdaptiveSizePolicy output, there is information about how the collection set for mixed gc is decided. Can you post it? Though not likely, there is a chance it is not including some regions since the pause time goal is reached. 3. Full gc cleans everything, including classes and other references. In later jdk, classes are unloaded during marking cycle. But I am not sure if jdk7 has that. 4. As Wolfgang mentioned, maybe the allocation rate is too high for g1 to keep up. Can you try: 1 increase the region size? It will not help the 15m allocation, but maybe helpful to reduce other humongous objects. 2. lower the -XX:InitiatingHeapOccupancyPercent=60 (default is 45) to introduce the marking cycle earlier 3. Depends on how long the marking cycle is, maybe increase the concurrent marking threads, Thanks Jenny On 12/15/2016 05:00 AM, Wolfgang Pedot wrote: > It is my understanding that what gets collected during a mixed cycle > is determined at the beginning of the cycle so if you have a high > allocation rate of short lived humongous objects you can fill up the > heap with stuff the concurrent cycle will not get rid of even though > they would be reclaimable. Newer JRE-Versions can reclaim humongous > regions during young generation collects so they can deal better with > them but I guess its still best to avoid humongous allocations if > easily possible. > > Can you check in your log how long G1 is usually doing mixed collects? > > -XX:+G1PrintRegionLivenessInfo may also be an interesting option, it > lets you see how many humongous objects are in your heap at the start > of a concurrent cycle and how full your regions are. > > regards > Wolfgang Pedot > > Am 15.12.2016 um 13:29 schrieb Gopal, Prasanna CWK: >> >> Hi Wolfgang >> >> Thanks for your reply. Yes I agree we have high CPU overhead. We have >> the following settings for CPU-Cores / GC Threads. >> >> virtualProcessorCount=48 >> >> ParallelGCThreads=8 >> >> *> AFAIR humongous regions are only collected during mixed/full GC in >> your version.* >> >> **This is my understanding as well, but I am not sure why these >> objects were not collected during Mixed GC cycles. 
Thanks
Jenny

On 12/15/2016 05:00 AM, Wolfgang Pedot wrote:
> It is my understanding that what gets collected during a mixed cycle
> is determined at the beginning of the cycle so if you have a high
> allocation rate of short lived humongous objects you can fill up the
> heap with stuff the concurrent cycle will not get rid of even though
> they would be reclaimable.
_______________________________________________
hotspot-gc-use mailing list
hotspot-gc-use at openjdk.java.net
http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use

From wolfgang.pedot at finkzeit.at  Thu Dec 15 19:03:05 2016
From: wolfgang.pedot at finkzeit.at (Wolfgang Pedot)
Date: Thu, 15 Dec 2016 20:03:05 +0100
Subject: G1 - Mixed and Full GC's
In-Reply-To: <43487cd7-9bbe-c5d9-9bbe-f82f03515a74@oracle.com>
References: <37d0012dc87e475abe6c464ca4d6be23@UKPMSEXD202N02.na.blkint.com>
 <58528824.4010005@finkzeit.at>
 <8fd10c74db7a4a1ab9504438da8a5542@UKPMSEXD202N02.na.blkint.com>
 <585293D3.3060400@finkzeit.at>
 <43487cd7-9bbe-c5d9-9bbe-f82f03515a74@oracle.com>
Message-ID: <5852E8E9.8070809@finkzeit.at>

Hi,

thanks for correcting me Jenny, learned something new today.
Here is a short example from one of my servers (1.8.0_101) during normal
operation:

2016-12-15T14:19:42.749+0100: 961594.487: [GC pause (G1 Evacuation Pause) (young) (initial-mark)
2016-12-15T14:19:42.955+0100: 961594.693: [GC concurrent-root-region-scan-start]
2016-12-15T14:19:43.050+0100: 961594.788: [GC concurrent-root-region-scan-end, 0.0947334 secs]
2016-12-15T14:19:43.050+0100: 961594.788: [GC concurrent-mark-start]
2016-12-15T14:19:46.094+0100: 961597.832: [GC concurrent-mark-end, 3.0445129 secs]
2016-12-15T14:19:46.112+0100: 961597.850: [GC remark 2016-12-15T14:19:46.112+0100: 961597.850: [Finalize Marking, 0.0023737 secs] 2016-12-15T14:19:46.114+0100: 961597.852: [GC ref-proc, 0.0095962 secs] 2016-12-15T14:19:46.124+0100: 961597.862: [Unloading 2016-12-15T14:19:46.131+0100: 961597.869: [System Dictionary Unloading, 0.1269161 secs] 2016-12-15T14:19:46.258+0100: 961597.996: [Parallel Unloading, 0.0361294 secs] 2016-12-15T14:19:46.294+0100: 961598.032: [Deallocate Metadata, 0.0017183 secs] 2016-12-15T14:19:46.296+0100: 961598.034: [String Deduplication Unlink, 0.0375517 secs], 0.2094571 secs], 0.2401670 secs]
2016-12-15T14:19:46.366+0100: 961598.104: [GC cleanup
2016-12-15T14:19:46.552+0100: 961598.290: [GC concurrent-cleanup-start]
2016-12-15T14:19:46.552+0100: 961598.290: [GC concurrent-cleanup-end, 0.0000160 secs]
2016-12-15T14:19:58.388+0100: 961610.126: [GC pause (G1 Evacuation Pause) (young)
2016-12-15T14:19:59.684+0100: 961611.422: [GC pause (G1 Evacuation Pause) (mixed)
2016-12-15T14:20:00.537+0100: 961612.275: [GC pause (G1 Evacuation Pause) (mixed)
2016-12-15T14:20:01.023+0100: 961612.761: [GC pause (G1 Evacuation Pause) (mixed)
2016-12-15T14:20:03.012+0100: 961614.750: [GC pause (G1 Evacuation Pause) (mixed)
2016-12-15T14:20:04.339+0100: 961616.077: [GC pause (G1 Evacuation Pause) (mixed)
2016-12-15T14:20:08.207+0100: 961619.945: [GC pause (G1 Evacuation Pause) (mixed)
2016-12-15T14:20:12.424+0100: 961624.162: [GC pause (G1 Evacuation Pause) (young)

Here is how I read the log snippet: the concurrent cycle was initiated at
14:19:42 and REMARK was done at 14:19:46. Mixed collects were started
afterwards and the last one happened at 14:20:08, so the first chance to
start a new concurrent cycle would have been at 14:20:12.

If there are short-lived small allocations during that time, the
young/mixed collects can get rid of them right away, but new humongous
objects can not be collected until the next batch of mixed collects in
your JRE. I have no idea how long such a cycle takes on your system, but
the longer it takes, the higher the chance of an allocation failure
leading to a full GC.

The eden space is drastically reduced during mixed collects (10-15% of
normal in my case) so that they happen more rapidly than pure young
collects; I guess that if mainly humongous allocations are happening, this
would slow down the mixed collects.
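Putting rough numbers on that window (a quick throwaway sketch; the uptime
timestamps are copied straight from my snippet above):

// Quick arithmetic on the snippet above; timestamps are JVM uptime seconds.
public class CycleWindow {
    public static void main(String[] args) {
        double initialMark = 961594.487; // (young) (initial-mark): cycle begins
        double lastMixed   = 961619.945; // last mixed pause of the batch
        double nextYoung   = 961624.162; // first pause that could start a new cycle
        System.out.printf("mark + mixed batch: %.1f s%n", lastMixed - initialMark); // ~25.5 s
        System.out.printf("earliest new cycle: %.1f s%n", nextYoung - initialMark); // ~29.7 s
        // Humongous garbage allocated anywhere in this window has to wait
        // for the *next* batch of mixed collects on the older JDKs.
    }
}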
regards
Wolfgang

Am 15.12.2016 um 18:28 schrieb yu.zhang at oracle.com:
> Hi,
>
> In the clean up cycle, about 1.6G is reclaimed. Those regions are cleaned
> because everything in them is dead. It is possible those are regions for
> humongous objects.
_______________________________________________
hotspot-gc-use mailing list
hotspot-gc-use at openjdk.java.net
http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use

From prasanna.gopal at blackrock.com  Fri Dec 16 15:46:42 2016
From: prasanna.gopal at blackrock.com (Gopal, Prasanna CWK)
Date: Fri, 16 Dec 2016 15:46:42 +0000
Subject: G1 - Mixed and Full GC's
In-Reply-To: <5852E8E9.8070809@finkzeit.at>
References: <37d0012dc87e475abe6c464ca4d6be23@UKPMSEXD202N02.na.blkint.com>
 <58528824.4010005@finkzeit.at>
 <8fd10c74db7a4a1ab9504438da8a5542@UKPMSEXD202N02.na.blkint.com>
 <585293D3.3060400@finkzeit.at>
 <43487cd7-9bbe-c5d9-9bbe-f82f03515a74@oracle.com>
 <5852E8E9.8070809@finkzeit.at>
Message-ID: <0c5c4ab0140f438a882efadd3cb0d54e@UKPMSEXD202N02.na.blkint.com>

Hi Wolfgang

Thanks for providing these logs. It looks like, as you pointed out, our
issue is due to a high humongous object allocation rate.

After the following concurrent cycle, our application does a lot of
humongous allocation:

2016-12-14T12:59:01.307-0500: 45342.193: Total time for which application threads were stopped: 0.0189970 seconds
2016-12-14T12:59:01.307-0500: 45342.193: Application time: 0.0001190 seconds
2016-12-14T12:59:01.308-0500: 45342.193: [GC cleanup 5093M->3488M(7168M), 0.0075650 secs]

About a second later, I am seeing a lot of logs like this:

45343.883: [G1Ergonomics (Concurrent Cycles) do not request concurrent cycle initiation, reason: still doing mixed collections, occupancy: 4508876800 bytes, allocation request: 15728656 bytes, threshold: 4509715620 bytes (60.00 %), source: concurrent humongous allocation]

G1 couldn't start the next concurrent cycle because it had only just
completed one and was still expecting mixed collections. It could not
start a mixed GC either, because the young generation was not filling up
enough to trigger one. This continues to the point where there is no more
space left to allocate humongous objects in the old gen:

45349.891: [G1Ergonomics (Concurrent Cycles) do not request concurrent cycle initiation, reason: still doing mixed collections, occupancy: 6926893056 bytes, allocation request: 15728656 bytes, threshold: 4509715620 bytes (60.00 %), source: concurrent humongous allocation]
45349.891: [G1Ergonomics (Heap Sizing) attempt heap expansion, reason: humongous allocation request failed, allocation request: 15728656 bytes]
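As an aside, I did a quick check of which region sizes would still treat
these 15728656-byte requests as humongous (just a sketch, assuming the
usual G1 rule that a single allocation larger than half a region goes
humongous; the class and method names are mine):

// Sketch: which G1 region sizes keep a 15728656-byte request humongous?
public class HumongousCheck {
    static boolean isHumongous(long allocBytes, long regionBytes) {
        return allocBytes > regionBytes / 2; // assumed half-region rule
    }
    public static void main(String[] args) {
        long alloc = 15728656L; // the allocation request from our logs
        for (long regionMb : new long[] {2, 4, 8, 16, 32}) {
            System.out.printf("region %2dM -> humongous: %b%n",
                    regionMb, isHumongous(alloc, regionMb << 20));
        }
        // With our current 2M regions every such request is humongous; only
        // a 32M region would put 15728656 bytes under the half-region mark.
    }
}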
Once the heap allocation failed, G1 tried starting a young GC, but it did
not find much to collect:

2016-12-14T12:59:09.008-0500: 45349.894: [GC pause (young)
Desired survivor size 91226112 bytes, new threshold 15 (max 15)
- age 1: 1608504 bytes, 1608504 total
45349.894: [G1Ergonomics (CSet Construction) start choosing CSet, _pending_cards: 735623, predicted base time: 79.06 ms, remaining time: 420.94 ms, target pause time: 500.00 ms]
45349.894: [G1Ergonomics (CSet Construction) add young regions to CSet, eden: 155 regions, survivors: 2 regions, predicted young region time: 26.61 ms]
45349.894: [G1Ergonomics (CSet Construction) finish choosing CSet, eden: 155 regions, survivors: 2 regions, old: 0 regions, predicted pause time: 105.67 ms, target pause time: 500.00 ms]
2016-12-14T12:59:09.082-0500: 45349.968: [SoftReference, 0 refs, 0.0003840 secs]2016-12-14T12:59:09.083-0500: 45349.969: [WeakReference, 13 refs, 0.0002010 secs]2016-12-14T12:59:09.083-0500: 45349.969: [FinalReference, 137 refs, 0.0002550 secs]2016-12-14T12:59:09.083-0500: 45349.969: [PhantomReference, 0 refs, 0.0001880 secs]2016-12-14T12:59:09.083-0500: 45349.969: [JNI Weak Reference, 0.0000050 secs] 45349.970: [G1Ergonomics (Heap Sizing) attempt heap expansion, reason: recent GC overhead higher than threshold after GC, recent GC overhead: 30.14 %, threshold: 10.00 %, uncommitted: 0 bytes, calculated expansion amount: 0 bytes (20.00 %)]
45349.970: [G1Ergonomics (Concurrent Cycles) do not request concurrent cycle initiation, reason: still doing mixed collections, occupancy: 6926893056 bytes, allocation request: 0 bytes, threshold: 4509715620 bytes (60.00 %), source: end of GC]
45349.970: [G1Ergonomics (Mixed GCs) do not start mixed GCs, reason: reclaimable percentage not over threshold, candidate old regions: 46 regions, reclaimable: 31277496 bytes (0.42 %), threshold: 10.00 %]
, 0.0765450 secs]

After this young GC, G1 started a concurrent cycle, as the initiating
threshold was still breached. Due to the high object allocation rate, a
Full GC kicked in before the concurrent cycle could complete:

45349.974: [G1Ergonomics (Concurrent Cycles) initiate concurrent cycle, reason: concurrent cycle initiation requested]
2016-12-14T12:59:09.088-0500: 45349.974: [GC pause (young) (initial-mark)

1) In this scenario, I am not sure whether lowering
-XX:InitiatingHeapOccupancyPercent=60 will help or not. G1 is not able to
start a concurrent cycle because it is waiting for mixed GCs to happen
(though there is nothing for them to collect). The mixed GC does not
happen promptly because the young generation is not filling up. G1 only
gets to the mixed GC (and eventually the next concurrent cycle) once it
runs out of heap space for humongous allocations in the old generation.
Does my observation sound logical?
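To make question 1 concrete, this is the threshold arithmetic as I
understand it (a sketch; the class name is mine, not JVM code):

// Sketch: IHOP threshold arithmetic with the numbers from our logs.
public class IhopMath {
    public static void main(String[] args) {
        long heap = 7168L << 20;      // -Xmx7G = 7516192768 bytes
        long occupancy = 6926893056L; // from the ergonomics line
        for (int ihop : new int[] {60, 45}) { // our setting vs the default
            long threshold = heap / 100 * ihop; // 60% gives the 4509715620 in the log
            System.out.printf("IHOP %d%%: threshold %d bytes, occupancy over: %b%n",
                    ihop, threshold, occupancy > threshold);
        }
        // Occupancy is far above the threshold either way, so lowering the
        // IHOP moves the trigger point but would not by itself unblock a
        // cycle that is being held back by pending mixed collections.
    }
}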
2) We certainly need to make some application changes to fix this issue,
but just out of curiosity: is there any G1 parameter we can use to force a
concurrent cycle (when we don't have any space) without waiting for the
mixed GCs to happen? I know it would be sub-optimal in most scenarios, but
I am just curious to know.

Thanks in advance for your comments. Appreciate your help.

Thanks and Regards
Prasanna

-----Original Message-----
From: Wolfgang Pedot [mailto:wolfgang.pedot at finkzeit.at]
Sent: 15 December 2016 19:03
To: yu.zhang at oracle.com; Gopal, Prasanna CWK ; hotspot-gc-use at openjdk.java.net
Subject: Re: G1 - Mixed and Full GC's
_______________________________________________
hotspot-gc-use mailing list
hotspot-gc-use at openjdk.java.net
http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use

From yu.zhang at oracle.com  Fri Dec 16 18:17:56 2016
From: yu.zhang at oracle.com (yu.zhang at oracle.com)
Date: Fri, 16 Dec 2016 10:17:56 -0800
Subject: G1 - Mixed and Full GC's
In-Reply-To: <0c5c4ab0140f438a882efadd3cb0d54e@UKPMSEXD202N02.na.blkint.com>
References: <37d0012dc87e475abe6c464ca4d6be23@UKPMSEXD202N02.na.blkint.com>
 <58528824.4010005@finkzeit.at>
 <8fd10c74db7a4a1ab9504438da8a5542@UKPMSEXD202N02.na.blkint.com>
 <585293D3.3060400@finkzeit.at>
 <43487cd7-9bbe-c5d9-9bbe-f82f03515a74@oracle.com>
 <5852E8E9.8070809@finkzeit.at>
 <0c5c4ab0140f438a882efadd3cb0d54e@UKPMSEXD202N02.na.blkint.com>
Message-ID:

Hi,

On 12/16/2016 07:46 AM, Gopal, Prasanna CWK wrote:
> Hi Wolfgang
>
> Thanks for providing these logs. It looks like, as you pointed out, our
> issue is due to a high humongous object allocation rate.
> 1) In this scenario, I am not sure whether lowering
> -XX:InitiatingHeapOccupancyPercent=60 will help or not. G1 is not able
> to start a concurrent cycle because it is waiting for mixed GCs to
> happen (though there is nothing for them to collect). The mixed GC does
> not happen promptly because the young generation is not filling up. G1
> only gets to the mixed GC (and eventually the next concurrent cycle)
> once it runs out of heap space for humongous allocations in the old
> generation. Does my observation sound logical?

This sounds right. Also, the 0 old regions in the collection set for the
mixed gc indicates they might be humongous regions.
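Rough arithmetic on that (a sketch; the numbers come from your earlier
logs): the full gc freed about 4041.8M, while the mixed gc candidates only
covered about 29.8M, so roughly 4G of garbage was sitting in regions that
were never candidates, which fits humongous regions and regions above the
85% live threshold:

// Sketch: how much garbage sat outside the mixed-GC candidate regions?
public class GarbageGap {
    public static void main(String[] args) {
        double freedByFullGcMb = 6557.1 - 2515.3;        // 4041.8M, Full GC line
        double candidateMb = 31277496 / 1024.0 / 1024.0; // ~29.8M, ergonomics line
        System.out.printf("garbage invisible to mixed GC: ~%.0fM%n",
                freedByFullGcMb - candidateMb);          // ~4012M
    }
}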
> 1) In this scenario, I am not sure whether lowering
> -XX:InitiatingHeapOccupancyPercent=60 will help. G1 is not able to
> start a concurrent cycle because it is waiting for mixed GCs to happen
> (though there is nothing to collect), and mixed GCs are not happening
> promptly because the young generation is not filling up. G1 is only
> able to start mixed GCs (and eventually a concurrent cycle) once it
> sees no more heap space for allocating humongous objects in the old
> generation. Does my observation sound logical?

This sounds right. Also, the 0 old regions in the collection set for the
mixed GC indicates they might be humongous regions.

> 2) We certainly need to make some application changes to fix this
> issue, but out of curiosity: is there any G1 parameter we can use to
> force a concurrent cycle (when we have no space left) without waiting
> for a mixed GC to happen? I know it would be sub-optimal in most
> scenarios, but I am just curious.

I cannot think of a way to force a marking cycle. You can try increasing
the region size to 32M; it might get rid of most of the humongous
objects.

Thanks
Jenny
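For reference, the region size Jenny suggests is set explicitly with
-XX:G1HeapRegionSize, which takes power-of-two values up to 32m; a
sketch of the change against the options quoted in this thread, as an
illustration rather than a tested recommendation:

    -Xmx7G -Xms7G -XX:+UseG1GC -XX:G1HeapRegionSize=32m

With 32 MB regions the humongous threshold becomes 16 MB, so requests
just under that (like the 15728656-byte ones above) would no longer take
the humongous path; whether that covers all of the application's large
allocations depends on their actual sizes.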
> Thanks in advance for your comments. Appreciate your help.
>
> Thanks and Regards
> Prasanna
>
> -----Original Message-----
> From: Wolfgang Pedot [mailto:wolfgang.pedot at finkzeit.at]
> Sent: 15 December 2016 19:03
> To: yu.zhang at oracle.com; Gopal, Prasanna CWK; hotspot-gc-use at openjdk.java.net
> Subject: Re: G1 - Mixed and Full GC's
>
> Hi,
>
> thanks for correcting me Jenny, learned something new today.
>
> Here is a short example from one of my servers (1.8.0_101) during
> normal operation:
>
> 2016-12-15T14:19:42.749+0100: 961594.487: [GC pause (G1 Evacuation Pause) (young) (initial-mark)
> 2016-12-15T14:19:42.955+0100: 961594.693: [GC concurrent-root-region-scan-start]
> 2016-12-15T14:19:43.050+0100: 961594.788: [GC concurrent-root-region-scan-end, 0.0947334 secs]
> 2016-12-15T14:19:43.050+0100: 961594.788: [GC concurrent-mark-start]
> 2016-12-15T14:19:46.094+0100: 961597.832: [GC concurrent-mark-end, 3.0445129 secs]
> 2016-12-15T14:19:46.112+0100: 961597.850: [GC remark 2016-12-15T14:19:46.112+0100: 961597.850: [Finalize Marking, 0.0023737 secs] 2016-12-15T14:19:46.114+0100: 961597.852: [GC ref-proc, 0.0095962 secs] 2016-12-15T14:19:46.124+0100: 961597.862: [Unloading 2016-12-15T14:19:46.131+0100: 961597.869: [System Dictionary Unloading, 0.1269161 secs] 2016-12-15T14:19:46.258+0100: 961597.996: [Parallel Unloading, 0.0361294 secs] 2016-12-15T14:19:46.294+0100: 961598.032: [Deallocate Metadata, 0.0017183 secs] 2016-12-15T14:19:46.296+0100: 961598.034: [String Deduplication Unlink, 0.0375517 secs], 0.2094571 secs], 0.2401670 secs]
> 2016-12-15T14:19:46.366+0100: 961598.104: [GC cleanup
> 2016-12-15T14:19:46.552+0100: 961598.290: [GC concurrent-cleanup-start]
> 2016-12-15T14:19:46.552+0100: 961598.290: [GC concurrent-cleanup-end, 0.0000160 secs]
> 2016-12-15T14:19:58.388+0100: 961610.126: [GC pause (G1 Evacuation Pause) (young)
> 2016-12-15T14:19:59.684+0100: 961611.422: [GC pause (G1 Evacuation Pause) (mixed)
> 2016-12-15T14:20:00.537+0100: 961612.275: [GC pause (G1 Evacuation Pause) (mixed)
> 2016-12-15T14:20:01.023+0100: 961612.761: [GC pause (G1 Evacuation Pause) (mixed)
> 2016-12-15T14:20:03.012+0100: 961614.750: [GC pause (G1 Evacuation Pause) (mixed)
> 2016-12-15T14:20:04.339+0100: 961616.077: [GC pause (G1 Evacuation Pause) (mixed)
> 2016-12-15T14:20:08.207+0100: 961619.945: [GC pause (G1 Evacuation Pause) (mixed)
> 2016-12-15T14:20:12.424+0100: 961624.162: [GC pause (G1 Evacuation Pause) (young)
>
> Here is how I read the log snippet: the concurrent cycle was initiated
> at 14:19:42 and REMARK was done at 14:19:46. Mixed collects started
> afterwards; the last one happened at 14:20:08, so the first chance to
> start a new concurrent cycle would have been 14:20:12.
>
> If there are short-lived small allocations during that time, the
> young/mixed collects can get rid of them right away, but new humongous
> objects cannot be collected until the next batch of mixed collects in
> your JRE version. I have no idea how long such a cycle takes on your
> system, but the longer it takes, the higher the chance of an allocation
> failure leading to a full GC.
>
> The eden space is drastically reduced during mixed collects (10-15% of
> normal in my case) so that they happen more rapidly than pure young
> collects. I guess if there are mainly humongous allocations happening,
> this would slow down the mixed collects.
>
> regards
> Wolfgang
>
> Am 15.12.2016 um 18:28 schrieb yu.zhang at oracle.com:
>> Hi,
>>
>> In the cleanup cycle, about 1.6G is reclaimed. Those regions are
>> cleaned because everything in them is dead; it is possible they are
>> regions for humongous objects.
>>
>> Wolfgang mentioned this already: in later JDKs, certain types of
>> humongous objects can be collected during young GC. To be more
>> specific on the comments about what is collected during a mixed GC
>> cycle: it is determined after REMARK, not at the beginning of the
>> marking cycle.
>>
>> About the question why the mixed GC sees only 29M while the full GC
>> can reclaim 4041.8M, there could be several reasons:
>> 1. -XX:G1MixedGCLiveThresholdPercent=85 is still too low. This means
>> that if a region is more than 85% live, it is not considered a
>> candidate for mixed GC. But that might not be the reason, since the
>> full GC cleans up 4041.8M.
>> 2. In your PrintAdaptiveSizePolicy output there is information about
>> how the collection set for mixed GC is decided. Can you post it?
>> Though not likely, there is a chance it excludes some regions because
>> the pause time goal has been reached.
>> 3. Full GC cleans everything, including classes and other references.
>> In later JDKs, classes are unloaded during the marking cycle, but I am
>> not sure whether JDK 7 has that.
>> 4. As Wolfgang mentioned, maybe the allocation rate is too high for G1
>> to keep up.
>>
>> Can you try:
>> 1. Increasing the region size? It will not help the 15M allocations,
>> but it may help get rid of other humongous objects.
>> 2. Lowering -XX:InitiatingHeapOccupancyPercent=60 (the default is 45)
>> to start the marking cycle earlier.
>> 3. Depending on how long the marking cycle is, maybe increasing the
>> number of concurrent marking threads.
>>
>> Thanks
>> Jenny
>>
>> On 12/15/2016 05:00 AM, Wolfgang Pedot wrote:
>>> It is my understanding that what gets collected during a mixed cycle
>>> is determined at the beginning of the cycle, so if you have a high
>>> allocation rate of short-lived humongous objects you can fill up the
>>> heap with objects the concurrent cycle will not get rid of, even
>>> though they would be reclaimable. Newer JRE versions can reclaim
>>> humongous regions during young-generation collects, so they can deal
>>> better with them, but I guess it is still best to avoid humongous
>>> allocations if easily possible.
>>> Can you check in your log how long G1 usually spends doing mixed
>>> collects?
>>>
>>> -XX:+G1PrintRegionLivenessInfo may also be an interesting option; it
>>> lets you see how many humongous objects are in your heap at the
>>> start of a concurrent cycle and how full your regions are.
>>>
>>> regards
>>> Wolfgang Pedot
>>>
>>> Am 15.12.2016 um 13:29 schrieb Gopal, Prasanna CWK:
>>>> Hi Wolfgang
>>>>
>>>> Thanks for your reply. Yes, I agree we have a high GC overhead. We
>>>> have the following settings for CPU cores / GC threads:
>>>>
>>>> virtualProcessorCount=48
>>>> ParallelGCThreads=8
>>>>
>>>>> AFAIR humongous regions are only collected during mixed/full GC in
>>>>> your version.
>>>>
>>>> This is my understanding as well, but I am not sure why these
>>>> objects were not collected during the mixed GC cycles. The mixed
>>>> cycle is seeing only 29 MB as reclaimable, yet the Full GC was able
>>>> to reclaim 4041.8 MB:
>>>>
>>>> 45349.970: [G1Ergonomics (Mixed GCs) do not start mixed GCs, reason: reclaimable percentage not over threshold, candidate old regions: 46 regions, reclaimable: 31277496 bytes (0.42 %), threshold: 10.00 %]
>>>>
>>>> Regards
>>>> Prasanna
>>>>
>>>> -----Original Message-----
>>>> From: hotspot-gc-use [mailto:hotspot-gc-use-bounces at openjdk.java.net] On Behalf Of Wolfgang Pedot
>>>> Sent: 15 December 2016 12:10
>>>> To: hotspot-gc-use at openjdk.java.net
>>>> Subject: Re: G1 - Mixed and Full GC's
>>>>
>>>> Hi,
>>>>
>>>> not sure about my "expert"-ness, but something caught my eye: you do
>>>> have a pretty high GC overhead there (recent GC overhead: 30.14 %).
>>>> It is possible that G1 simply cannot keep up with your allocations,
>>>> and that leads to a full GC. How many CPU cores does your machine
>>>> have, and how many GC threads are used?
>>>>
>>>> Can you determine the size of your humongous objects? Getting rid of
>>>> those by increasing the region size could help; AFAIR humongous
>>>> regions are only collected during mixed/full GC in your version.
>>>>
>>>> regards
>>>> Wolfgang Pedot
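A note on the option Wolfgang mentions: G1PrintRegionLivenessInfo is, as
far as I know, a diagnostic flag, so it would have to be unlocked first,
along these lines:

    -XX:+UnlockDiagnosticVMOptions -XX:+G1PrintRegionLivenessInfo

During each marking cycle it prints a per-region table (region type,
live bytes, GC efficiency), which makes it straightforward to count
humongous regions and to see whether the old regions are packed above
the G1MixedGCLiveThresholdPercent=85 bar.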
>>>> Am 15.12.2016 um 10:44 schrieb Gopal, Prasanna CWK:
>>>>> [original post quoted in full; snipped, see the first message in this thread]
From amit.mishra at redknee.com Fri Dec 23 08:45:21 2016
From: amit.mishra at redknee.com (Amit Mishra)
Date: Fri, 23 Dec 2016 08:45:21 +0000
Subject: Regarding leap second change on 31st Dec 2016@23:59:60
Message-ID:

Hello Charlie/Poonam/JDK team,

As we all know, a leap second will occur on 31st Dec 2016 at 23:59:59
UTC: an extra second (23:59:60) will be added to UTC (and, at the
corresponding local offset, for the rest of the world). For this, Oracle
has released the tzdata2016g timezone update, which needs to be
installed for all JRE instances.
My question: if the system (OS) clock will be synced by an NTP server
for this change, is there still a need to install this tzdata patch for
applications running on Java 1.6? What are the associated risks if we
don't update our JRE instances' timezone data using tzdata2016g?

I also want to understand how the different Java date/time APIs on 1.6
calculate the current time. Do they take it from the system clock, or do
they have some other mechanism or reference source from which they
derive the accurate current time? If they take it from the system clock,
then why is Oracle releasing tzupdates, given that the correct time will
be picked up by all Java applications after a restart, fetched from the
system clock (which is updated via NTP)?

I am a bit confused about how IANA database updates help Java apps with
time-related calculations if Java depends solely on the system clock as
its reference clock source.

Your prompt response will be appreciated. Many thanks for your support
and cooperation.

Regards,
Amit Mishra
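For what it's worth, a small sketch of the split being asked about: the
current instant comes from the OS clock (which is what NTP disciplines),
while tzdata is only consulted when that instant is converted into a
zone's wall-clock representation. The classes used are standard JDK; the
program itself is purely illustrative:

    import java.text.SimpleDateFormat;
    import java.util.Date;
    import java.util.TimeZone;

    public class ClockVsTzdata {
        public static void main(String[] args) {
            // The instant itself comes straight from the OS clock,
            // which is what NTP keeps in sync.
            long now = System.currentTimeMillis();

            // tzdata is only consulted when rendering that instant as a
            // zone's wall-clock time (offsets, DST rules, zone names).
            SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss zzz");
            fmt.setTimeZone(TimeZone.getTimeZone("America/Los_Angeles"));

            System.out.println("epoch millis (OS clock):  " + now);
            System.out.println("local rendering (tzdata): " + fmt.format(new Date(now)));
        }
    }

Note also that java.util.Date has no representation for 23:59:60; as far
as I understand it, the JDK assumes 60-second minutes and leaves it to
the OS clock (via NTP stepping or slewing) to absorb the leap second,
which is part of why an NTP-synced clock and a tzdata update address
different things.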