From simone.bordet at gmail.com Mon Oct 7 17:17:00 2013 From: simone.bordet at gmail.com (Simone Bordet) Date: Tue, 8 Oct 2013 02:17:00 +0200 Subject: G1 young dominated by code root marking Message-ID: Hi, using JDK 8 b109 (see below for details), and I see that when a concurrent cycle is requested, the young pause is completely dominated by code root marking. Below a typical log entry where a total pause of 210 ms spent 155 ms in code root marking. Is this normal behaviour ? Does it depend on the ReservedCodeCacheSize (below at 256 MiB) ? Any hint to speed this up ? Thanks ! -------- Java HotSpot(TM) 64-Bit Server VM (build 25.0-b51, mixed mode) -XX:InitialHeapSize=1073741824 -XX:InitiatingHeapOccupancyPercent=60 -XX:MaxGCPauseMillis=100 -XX:MaxHeapSize=1073741824 -XX:MetaspaceSize=402653184 -XX:+PrintAdaptiveSizePolicy -XX:+PrintCommandLineFlags -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:ReservedCodeCacheSize=268435456 -XX:+UseCodeCacheFlushing -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseG1GC 0.005: [G1Ergonomics (Heap Sizing) expand the heap, requested expansion amount: 1073741824 bytes, attempted expansion amount: 1073741824 bytes] java version "1.8.0-ea" Java(TM) SE Runtime Environment (build 1.8.0-ea-b109) Java HotSpot(TM) 64-Bit Server VM (build 25.0-b51, mixed mode) 2013-10-08T00:41:52.118+0200: [GC pause (G1 Evacuation Pause) (young) (initial-mark) 56754.114: [G1Ergonomics (CSet Construction) start choosing CSet, _pending_cards: 2562, predicted base time: 25.80 ms, remaining time: 74.20 ms, target pause time: 100.00 ms] 56754.114: [G1Ergonomics (CSet Construction) add young regions to CSet, eden: 154 regions, survivors: 3 regions, predicted young region time: 65.36 ms] 56754.114: [G1Ergonomics (CSet Construction) finish choosing CSet, eden: 154 regions, survivors: 3 regions, old: 0 regions, predicted pause time: 91.15 ms, target pause time: 100.00 ms] , 0.2012130 secs] [Parallel Time: 199.3 ms, GC Workers: 2] [GC Worker Start (ms): 
Min: 56754114.4, Avg: 56754114.4, Max: 56754114.4, Diff: 0.0] [Ext Root Scanning (ms): Min: 11.4, Avg: 13.6, Max: 15.7, Diff: 4.4, Sum: 27.1] [Code Root Marking (ms): Min: 155.7, Avg: 156.1, Max: 156.6, Diff: 0.9, Sum: 312.3] [Update RS (ms): Min: 4.9, Avg: 5.3, Max: 5.7, Diff: 0.8, Sum: 10.7] [Processed Buffers: Min: 6, Avg: 13.5, Max: 21, Diff: 15, Sum: 27] [Scan RS (ms): Min: 3.8, Avg: 3.8, Max: 3.8, Diff: 0.1, Sum: 7.6] [Code Root Scanning (ms): Min: 0.1, Avg: 0.1, Max: 0.1, Diff: 0.1, Sum: 0.2] [Object Copy (ms): Min: 18.1, Avg: 20.3, Max: 22.5, Diff: 4.4, Sum: 40.6] [Termination (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.0] [GC Worker Other (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.1] [GC Worker Total (ms): Min: 199.2, Avg: 199.2, Max: 199.2, Diff: 0.0, Sum: 398.4] [GC Worker End (ms): Min: 56754313.6, Avg: 56754313.6, Max: 56754313.6, Diff: 0.0] [Code Root Fixup: 0.2 ms] [Code Root Migration: 0.0 ms] [Clear CT: 0.2 ms] [Other: 1.5 ms] [Choose CSet: 0.1 ms] [Ref Proc: 0.5 ms] [Ref Enq: 0.0 ms] [Free CSet: 0.4 ms] [Eden: 154.0M(117.0M)->0.0B(45.0M) Survivors: 3072.0K->6144.0K Heap: 721.1M(1024.0M)->570.1M(1024.0M)] [Times: user=0.35 sys=0.00, real=0.21 secs] -- Simone Bordet http://bordet.blogspot.com --- Finally, no matter how good the architecture and design are, to deliver bug-free software with optimal performance and reliability, the implementation technique must be flawless. Victoria Livschitz From simone.bordet at gmail.com Mon Oct 7 17:22:58 2013 From: simone.bordet at gmail.com (Simone Bordet) Date: Tue, 8 Oct 2013 02:22:58 +0200 Subject: G1: no concurrent cycle initiation for humongous allocation In-Reply-To: References: Message-ID: Resending, as the 100 KiB attached log was rejected. I can provide the log separately. On Tue, Oct 1, 2013 at 1:28 PM, Simone Bordet wrote: > Hi, > > can I use some experience here to interpret the attached log ? 
> > Seems to me that G1 is keeping the young gen really small (~50 MiB) to > meet the pause goal (100 ms). This results in a high promotion rate of > mostly dead objects, so that a marking cycle is able to cleanup 250+ > MiB (out of a 1 GiB heap). > But after the marking cycle, many log lines (10s to 100s) of this kind appear few ms apart from each other: > > [G1Ergonomics (Concurrent Cycles) do not request concurrent cycle > initiation, reason: still doing mixed collections, occupancy: > 642777088 bytes, allocation request: 2322928 bytes, threshold: > 644245080 bytes (60.00 %), source: concurrent humongous allocation] > > These lines appear after several seconds (30+) of no GC logging, so > the application did not trigger a young GC despite the young gen is > really small. > Yet, seems to me that G1 thinks the heap is at IHOP despite the marking cycle just > freed 250+ MiB, and apparently the application did not allocate more > than ~50 MiB (otherwise a young GC would have triggered). > > Thanks ! > > -- > Simone Bordet > http://bordet.blogspot.com > --- > Finally, no matter how good the architecture and design are, > to deliver bug-free software with optimal performance and reliability, > the implementation technique must be flawless. Victoria Livschitz From thomas.schatzl at oracle.com Mon Oct 7 23:55:58 2013 From: thomas.schatzl at oracle.com (Thomas Schatzl) Date: Tue, 08 Oct 2013 08:55:58 +0200 Subject: G1 young dominated by code root marking In-Reply-To: References: Message-ID: <1381215358.2594.9.camel@cirrus> Hi, On Tue, 2013-10-08 at 02:17 +0200, Simone Bordet wrote: > Hi, > > using JDK 8 b109 (see below for details), and I see that when a > concurrent cycle is requested, the young pause is completely dominated > by code root marking. > Below a typical log entry where a total pause of 210 ms spent 155 ms > in code root marking. > > Is this normal behaviour ? 
We are aware of code cache scanning taking up a significant time of the collection pause - that's why starting with b106 the code cache scanning has been moved into an extra phase and parallelized. Previously the issue should have been worse (i.e. serialized) - or did this problem only surface with the new build? (Previously you would get a very long "Ext root scan" phase) The references scanned are references from the code cache (not from the collected regions) into the Java heap. > Does it depend on the ReservedCodeCacheSize (below at 256 MiB) ? So, yes, the time taken is somewhat proportional to the amount of code cache. > Any hint to speed this up ? I am afraid that at the moment afaik there is nothing but to reduce code cache size (or increase the number of available processors to benefit from parallelization). To help us analyze possible alternative remedies (i.e. class unloading after a concurrent marking cycle), does this problem persist even after stale code has been cleaned out, e.g. full gc? (I think full gc cleans out old code) I.e. could you try a "jcmd GC.run" when this issue occurs and report back results? Thanks, Thomas From simone.bordet at gmail.com Tue Oct 8 11:42:07 2013 From: simone.bordet at gmail.com (Simone Bordet) Date: Tue, 8 Oct 2013 20:42:07 +0200 Subject: G1 young dominated by code root marking In-Reply-To: <1381215358.2594.9.camel@cirrus> References: <1381215358.2594.9.camel@cirrus> Message-ID: Hi, On Tue, Oct 8, 2013 at 8:55 AM, Thomas Schatzl wrote: > We are aware of code cache scanning taking up a significant time of the > collection pause - that's why starting with b106 the code cache scanning > has been moved into an extra phase and parallelized. > Previously the issue should have been worse (i.e. serialized) - or did > this problem only surface with the new build? > (Previously you would get a very long "Ext root scan" phase) I did not test jdk 8 prior to b109, unfortunately.
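As an aside for anyone following along at home: the phase times quoted in this thread can be pulled out of -XX:+PrintGCDetails output mechanically. A minimal sketch (the class name and regex are illustrative, not part of any JDK tool):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative helper: extracts the per-worker average from a G1 phase line
// such as "[Code Root Marking (ms): Min: 155.7, Avg: 156.1, Max: 156.6, ...]".
public class CodeRootMarkingParser {
    private static final Pattern AVG =
            Pattern.compile("Code Root Marking \\(ms\\):.*?Avg: ([0-9.]+)");

    public static double parseAvg(String logLine) {
        Matcher m = AVG.matcher(logLine);
        if (!m.find()) {
            throw new IllegalArgumentException("no Code Root Marking entry: " + logLine);
        }
        return Double.parseDouble(m.group(1));
    }

    public static void main(String[] args) {
        // Phase line copied from the log at the top of this thread:
        String line = "[Code Root Marking (ms): Min: 155.7, Avg: 156.1, Max: 156.6, Diff: 0.9, Sum: 312.3]";
        System.out.println("avg code root marking: " + parseAvg(line) + " ms");
    }
}
```

Running each pause's phase lines through something like this makes it easy to check whether code root marking only spikes on initial-mark pauses, as reported above.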
> I am afraid that at the moment afaik there is nothing but to reduce code > cache size (or increase the number of available processors to benefit > from parallelization). > > To help us analyze possible alternative remedies (i.e. class unloading > after a concurrent marking cycle), does this problem persist even after > stale code has been cleaned out, e.g. full gc? > (I think full gc cleans out old code) A preliminary test, done by triggering a full GC after detecting a long "code root marking" as you suggested, showed a small reduction in those times, but they remain dominant and exceed the G1 max GC pause. Perhaps at steady state the code cache does not change much, so the amount of work related to it remains the same despite full GCs. I am guessing code root marking cannot be made concurrent ? Or split, like old generation regions evacuation is split during mixed GCs ? Thanks ! -- Simone Bordet http://bordet.blogspot.com --- Finally, no matter how good the architecture and design are, to deliver bug-free software with optimal performance and reliability, the implementation technique must be flawless. Victoria Livschitz From thomas.schatzl at oracle.com Mon Oct 14 03:07:37 2013 From: thomas.schatzl at oracle.com (Thomas Schatzl) Date: Mon, 14 Oct 2013 12:07:37 +0200 Subject: G1: no concurrent cycle initiation for humongous allocation In-Reply-To: References: Message-ID: <1381745257.2715.34.camel@cirrus> Hi Simone, first, sorry for getting back so late.... On Tue, 2013-10-08 at 02:22 +0200, Simone Bordet wrote: > Resending, as the 100 KiB attached log was rejected. > I can provide the log separately. > > On Tue, Oct 1, 2013 at 1:28 PM, Simone Bordet wrote: > > Hi, > > > > can I use some experience here to interpret the attached log ? 
This results in a high promotion rate of > > mostly dead objects, so that a marking cycle is able to cleanup 250+ > > MiB (out of a 1 GiB heap). > > But after the marking cycle, many log lines (10s to 100s) of this kind appear few ms apart from each other: > > > > [G1Ergonomics (Concurrent Cycles) do not request concurrent cycle > > initiation, reason: still doing mixed collections, occupancy: > > 642777088 bytes, allocation request: 2322928 bytes, threshold: > > 644245080 bytes (60.00 %), source: concurrent humongous allocation] > > > > These lines appear after several seconds (30+) of no GC logging, so > > the application did not trigger a young GC despite the young gen is > > really small. > > Yet, seems to me that G1 thinks the heap is at IHOP despite the marking cycle just > > freed 250+ MiB, and apparently the application did not allocate more > > than ~50 MiB (otherwise a young GC would have triggered). > > If a humongous/large object allocation makes the occupied heap (after allocation of the humongous object) larger than the threshold, G1 tries to schedule a concurrent cycle. At the same time G1 is still in the mixed-gc phase (cleaning up the regular heap), and the collection policy does not allow initiation of a concurrent cycle during that phase. The message in essence just indicates that. Is there any problem, apart from the messages? 
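The policy described here can be condensed into a small sketch (a simplification written for this summary, not the actual HotSpot code; the numbers in main() are copied from the log message quoted above):

```java
// Simplified sketch of the G1 ergonomics decision described above; the real
// policy lives in the collector, this only mirrors the logged reasoning.
public class ConcurrentCycleDecision {
    public static boolean overThreshold(long occupancy, long allocationRequest, long threshold) {
        return occupancy + allocationRequest > threshold;
    }

    // Even when over the threshold, a concurrent cycle is not requested while
    // mixed collections are still in progress ("still doing mixed collections").
    public static boolean requestConcurrentCycle(long occupancy, long allocationRequest,
                                                 long threshold, boolean doingMixedCollections) {
        return overThreshold(occupancy, allocationRequest, threshold) && !doingMixedCollections;
    }

    public static void main(String[] args) {
        long occupancy = 642_777_088L;  // from the "do not request" message
        long request = 2_322_928L;
        long threshold = 644_245_080L;  // 60% of the 1 GiB heap
        System.out.println("over threshold: " + overThreshold(occupancy, request, threshold));
        System.out.println("request cycle:  " + requestConcurrentCycle(occupancy, request, threshold, true));
    }
}
```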
Thomas From thomas.schatzl at oracle.com Mon Oct 14 03:12:32 2013 From: thomas.schatzl at oracle.com (Thomas Schatzl) Date: Mon, 14 Oct 2013 12:12:32 +0200 Subject: G1 young dominated by code root marking In-Reply-To: References: <1381215358.2594.9.camel@cirrus> Message-ID: <1381745552.2715.38.camel@cirrus> Hi, On Tue, 2013-10-08 at 20:42 +0200, Simone Bordet wrote: > Hi, > > On Tue, Oct 8, 2013 at 8:55 AM, Thomas Schatzl > wrote: > > We are aware of code cache scanning taking up a significant time of the > > collection pause - that's why starting with b106 the code cache scanning > > has been moved into an extra phase and parallelized. > > Previously the issue should have been worse (i.e. serialized) - or does > > this problem only surfaced with the new build? > > (Previously you would get a very long "Ext root scan" phase) > > I did not test jdk 8 prior b109, unfortunately. Okay, thanks anyway. > > > I am afraid that at the moment afaik there is nothing but to reduce code > > cache size (or increase the number of available processors to benefit > > from parallelization). > > > > To help us analyze possible alternative remedies (i.e. class unloading > > after a concurrent marking cycle), does this problem persist even after > > stale code has been cleaned out, e.g. full gc? > > (I think full gc cleans out old code) > > A preliminary test done by triggering a full GC after detecting a long > "code root marking" like you suggested showed a small reduction in > those times, but they remain dominant and exceed the G1 max GC pause. > Perhaps at steady state the code cache does not change much so the > amount of work related to it remain the same despite full GCs. > > I am guessing code root marking cannot be made concurrent ? > Or split, like old generation regions evacuation is split during mixed GCs ? 
> In G1, with class-unloading at concurrent mark in place and enabled, this phase will disappear in initial-mark young GCs iirc (except for code on the stack; other code roots will be weak roots in this case), however there will be a new cleanup phase during remark :) It may be shorter though. Other than that I think it should be possible to make (parts of) the phase concurrent, but I am not sure if it is on our radar yet. Thomas From simone.bordet at gmail.com Mon Oct 14 12:27:40 2013 From: simone.bordet at gmail.com (Simone Bordet) Date: Mon, 14 Oct 2013 21:27:40 +0200 Subject: G1 young dominated by code root marking In-Reply-To: <1381745552.2715.38.camel@cirrus> References: <1381215358.2594.9.camel@cirrus> <1381745552.2715.38.camel@cirrus> Message-ID: Hi, On Mon, Oct 14, 2013 at 12:12 PM, Thomas Schatzl wrote: > In G1, with class-unloading at concurrent mark in place and enabled, this > phase will disappear in initial-mark young GCs iirc (except for code on > the stack; other code roots will be weak roots in this case), however > there will be a new cleanup phase during remark :) It may be shorter > though. I am not sure I understood this. Are you saying that by explicitly enabling the class unloading option on the command line, code root marking will be done concurrently during the normal marking phase ? If so, can this be made the default ? > Other than that I think it should be possible to make (parts of) the > phase concurrent, but I am not sure if it is on our radar yet. My worry is the following: the code root marking phase happens only from time to time, and G1 has been tuned to stay within the pause goal, and can nicely do so; but from time to time the pause goal is exceeded by a large percentage because of code root marking. Seems like a shame that a nicely behaving G1 exceeds the pause goal for code root marking only. In any case, I just wanted to give a heads up in case the issue was not known. Thanks ! 
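For scale, the overshoot described here can be quantified from the log at the top of the thread (illustrative arithmetic only; the numbers are copied from that entry):

```java
// Back-of-the-envelope numbers from the first log in this thread:
// a 201.2 ms total young pause against a 100 ms -XX:MaxGCPauseMillis goal,
// with Code Root Marking averaging 156.1 ms per worker.
public class PauseGoalOvershoot {
    public static double overshootFactor(double pauseMs, double goalMs) {
        return pauseMs / goalMs;
    }

    public static void main(String[] args) {
        double pauseMs = 201.2;
        double goalMs = 100.0;
        double codeRootMarkingMs = 156.1;
        System.out.printf("pause is %.1fx the goal; code root marking alone is %.0f%% of the pause%n",
                overshootFactor(pauseMs, goalMs), 100.0 * codeRootMarkingMs / pauseMs);
    }
}
```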
-- Simone Bordet http://bordet.blogspot.com --- Finally, no matter how good the architecture and design are, to deliver bug-free software with optimal performance and reliability, the implementation technique must be flawless. Victoria Livschitz From simone.bordet at gmail.com Mon Oct 14 13:18:42 2013 From: simone.bordet at gmail.com (Simone Bordet) Date: Mon, 14 Oct 2013 22:18:42 +0200 Subject: G1: no concurrent cycle initiation for humongous allocation In-Reply-To: <1381745257.2715.34.camel@cirrus> References: <1381745257.2715.34.camel@cirrus> Message-ID: Hi, On Mon, Oct 14, 2013 at 12:07 PM, Thomas Schatzl wrote: > If a humongous/large object allocation makes the occupied heap (after > allocation of the humongous object) larger than the threshold, G1 tries > to schedule a concurrent cycle. > > At the same time G1 is still in the mixed-gc phase (cleaning up the regular > heap), and the collection policy does not allow initiation of a > concurrent cycle during that phase. > > The message in essence just indicates that. > > Is there any problem, apart from the messages? See the logs excerpt below. What I think happens is that the end of the concurrent cycle cleans up (as in the "GC Cleanup" phase) a large number of regions because they are all garbage (early promotions from young since young is small). The GC Cleanup phase moves from 509->268 MiB. Then some time passes (~50s: from 989 to 1041); during this time young has not been filled (even if it's only 140 MiB), yet at 1041.161 G1 reports that 643 MiB are now occupied. After 1041.161, another 136 lines of the same type happen (and nothing else), all of them within 32s, all of them requesting the same allocation size. The last of these lines happens at 1073.224, showing that during those 32s (1073-1041) ~288 MiB have been allocated (931-643 MiB), probably all of them as humongous objects directly in old generation (because no young GC has been triggered despite its small size). 
If the same happened during the prior ~50s between the GC Cleanup and 1041.161, then young would still be "empty" but it would explain why a concurrent cycle would have been requested: the heap is actually indeed over IHOP because of humongous allocations. At the beginning I could not believe that only humongous allocations were happening, but it seems to be the case. I guess this case is not handled well with the default region size, but it's a good case where setting the region size to 3 MiB (because the humongous allocations seems all to be of size 1438336=1.4 MiB) would "solve" this problem. If I am wrong in my interpretation, I'd be happy to hear. Thanks ! -- Simone Bordet http://bordet.blogspot.com --- Finally, no matter how good the architecture and design are, to deliver bug-free software with optimal performance and reliability, the implementation technique must be flawless. Victoria Livschitz 989.437: [G1Ergonomics (Concurrent Cycles) request concurrent cycle initiation, reason: occupancy higher than threshold, occupancy: 644874240 bytes, allocation request: 1438336 bytes, threshold: 644245080 bytes (60.00 %), source: concurrent humongous allocation] 989.438: [G1Ergonomics (Concurrent Cycles) request concurrent cycle initiation, reason: requested by GC cause, GC cause: G1 Humongous Allocation] 989.438: [G1Ergonomics (Concurrent Cycles) initiate concurrent cycle, reason: concurrent cycle initiation requested] 2013-09-30T18:07:10.376+0200: [GC pause (young) (initial-mark) 989.439: [G1Ergonomics (CSet Construction) start choosing CSet, _pending_cards: 6593, predicted base time: 68.56 ms, remaining time: 31.44 ms, target pause time: 100.00 ms] 989.439: [G1Ergonomics (CSet Construction) add young regions to CSet, eden: 140 regions, survivors: 4 regions, predicted young region time: 22.72 ms] 989.439: [G1Ergonomics (CSet Construction) finish choosing CSet, eden: 140 regions, survivors: 4 regions, old: 0 regions, predicted pause time: 91.28 ms, target pause 
time: 100.00 ms] , 0.0879000 secs] [Parallel Time: 86.0 ms, GC Workers: 2] [GC Worker Start (ms): Min: 989439.0, Avg: 989439.7, Max: 989440.3, Diff: 1.3] [Ext Root Scanning (ms): Min: 55.7, Avg: 58.9, Max: 62.2, Diff: 6.5, Sum: 117.8] [Update RS (ms): Min: 0.0, Avg: 4.5, Max: 8.9, Diff: 8.9, Sum: 8.9] [Processed Buffers: Min: 0, Avg: 31.0, Max: 62, Diff: 62, Sum: 62] [Scan RS (ms): Min: 0.0, Avg: 1.3, Max: 2.6, Diff: 2.5, Sum: 2.6] [Object Copy (ms): Min: 18.7, Avg: 20.6, Max: 22.4, Diff: 3.7, Sum: 41.1] [Termination (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.0] [GC Worker Other (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.1] [GC Worker Total (ms): Min: 84.6, Avg: 85.3, Max: 85.9, Diff: 1.3, Sum: 170.6] [GC Worker End (ms): Min: 989524.9, Avg: 989524.9, Max: 989524.9, Diff: 0.0] [Code Root Fixup: 0.0 ms] [Clear CT: 0.2 ms] [Other: 1.8 ms] [Choose CSet: 0.3 ms] [Ref Proc: 0.7 ms] [Ref Enq: 0.1 ms] [Free CSet: 0.2 ms] [Eden: 140.0M(274.0M)->0.0B(194.0M) Survivors: 4096.0K->7168.0K Heap: 638.0M(1024.0M)->501.4M(1024.0M)] [Times: user=0.12 sys=0.00, real=0.09 secs] 2013-09-30T18:07:10.464+0200: [GC concurrent-root-region-scan-start] 2013-09-30T18:07:10.486+0200: [GC concurrent-root-region-scan-end, 0.0217720 secs] 2013-09-30T18:07:10.486+0200: [GC concurrent-mark-start] 2013-09-30T18:07:11.002+0200: [GC concurrent-mark-end, 0.5160090 secs] 2013-09-30T18:07:11.002+0200: [GC remark 2013-09-30T18:07:11.003+0200: [GC ref-proc, 0.0038790 secs], 0.0398310 secs] [Times: user=0.04 sys=0.00, real=0.04 secs] 2013-09-30T18:07:11.044+0200: [GC cleanup 509M->268M(1024M), 0.0141640 secs] [Times: user=0.02 sys=0.00, real=0.02 secs] 2013-09-30T18:07:11.058+0200: [GC concurrent-cleanup-start] 2013-09-30T18:07:11.059+0200: [GC concurrent-cleanup-end, 0.0007930 secs] 1041.161: [G1Ergonomics (Concurrent Cycles) do not request concurrent cycle initiation, reason: still doing mixed collections, occupancy: 643825664 bytes, allocation request: 1438336 bytes, threshold: 
644245080 bytes (60.00 %), source: concurrent humongous allocation] ... Other 135 lines of the same format 1073.224: [G1Ergonomics (Concurrent Cycles) do not request concurrent cycle initiation, reason: still doing mixed collections, occupancy: 931135488 bytes, allocation request: 1438336 bytes, threshold: 644245080 bytes (60.00 %), source: concurrent humongous allocation] From monica.b at servergy.com Mon Oct 14 17:21:58 2013 From: monica.b at servergy.com (Monica Beckwith) Date: Mon, 14 Oct 2013 19:21:58 -0500 Subject: G1: no concurrent cycle initiation for humongous allocation In-Reply-To: References: <1381745257.2715.34.camel@cirrus> Message-ID: <525C8AA6.1000206@servergy.com> Hi again, Simone - Thanks for sending the log file. Two things stand out from the log file: 1) It seems to me that the application has a lot of "humongous" allocations <3MB. Please set your -XX:G1HeapRegionSize=8M and you shouldn't see (as much of) the behavior where the concurrent marking's being pushed off due to mixed GC cycles. 2) High external root scanning times (coupled with incorrect prediction times (earlier on in the log)) are the reason why G1 doesn't increase its nursery. Here's a link to the JavaOne 2013 presentation that has more details on humongous objects: http://www.slideshare.net/MonicaBeckwith/con5497. -Monica On 10/14/13 3:18 PM, Simone Bordet wrote: > Hi, > > On Mon, Oct 14, 2013 at 12:07 PM, Thomas Schatzl > wrote: >> If a humongous/large object allocation makes the occupied heap (after >> allocation of the humongous object) larger than the threshold, G1 tries >> to schedule a concurrent cycle. >> >> At the same time G1 is still in the mixed-gc phase (cleaning up the regular >> heap), and the collection policy does not allow initiation of a >> concurrent cycle during that phase. >> >> The message in essence just indicates that. >> >> Is there any problem, apart from the messages? > See the logs excerpt below. 
> > What I think it happens is that the end of the concurrent cycle cleans > up (as in the "GC Cleanup" phase) a large number of regions because > they are all garbage (early promotions from young since young is > small). > The GC Cleanup phase moves from 509->268 MiB. > > Then some time passes (~50s: from 989 to 1041); during this time young > has not been filled (even if it's only 140 MiB), yet at 1041.161 G1 > reports that 643 MiB are now occupied. > > After 1041.161, other 136 lines of the same type happen (and nothing > else), all of them within 32s, all of them requesting the same > allocation size. > The last of these lines happens at 1073.224, showing that during those > 32s (1073-1041) ~288 MiB have been allocated (931-643 MiB), probably > all of them as humongous object directly in old generation (because no > young GC has been triggered despite its small size). > If the same happened during the prior ~50s between the GC Cleanup and > 1041.161, then young would still be "empty" but it would explain why a > concurrent cycle would have been requested: the heap is actually > indeed over IHOP because of humongous allocations. > > At the beginning I could not believe that only humongous allocations > were happening, but it seems to be the case. > > I guess this case is not handled well with the default region size, > but it's a good case where setting the region size to 3 MiB (because > the humongous allocations seems all to be of size 1438336=1.4 MiB) > would "solve" this problem. > > If I am wrong in my interpretation, I'd be happy to hear. > > Thanks ! > From daubman at gmail.com Mon Oct 14 18:36:57 2013 From: daubman at gmail.com (Aaron Daubman) Date: Mon, 14 Oct 2013 21:36:57 -0400 Subject: Troubleshooting a ~40-second minor collection Message-ID: Hi All, I have unfortunately lost my GC log file (server restarted shortly after the event) but have AppDynamics stats. I am running Solr 3.6.1 in a jetty 9 container under jdk 1.7u25. Max heap is 16G. 
<7G were used at the time of the event. Typical heap usage is ~28%. There were around 10 minor collection events (fairly typical) during the minute the event occurred. The event was an almost 40-second max minor collection time. Around that time JVM Heap utilization was only between 27% and 31% - I cannot remember the last time we had a major collection, and I have also never seen such a long minor collection time. There is nothing I can see about traffic to the JVM that appeared abnormal. I did see some long external (JDBC) query times about this time as well, but thought they were more likely a symptom of the minor collection pause, rather than a cause. AppDynamics monitors Code Cache, G1 Eden Space, G1 Old Gen, G1 Perm Gen, and G1 Survivor - the max of which at the time was Perm Gen at only 60%. Is there anything else I can do (without the GC log file) to try and determine the cause of the unexpected 40s minor collection pause time? Thanks, Aaron -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20131014/cdb78287/attachment.html From wolfgang.pedot at finkzeit.at Tue Oct 15 08:55:01 2013 From: wolfgang.pedot at finkzeit.at (Wolfgang Pedot) Date: Tue, 15 Oct 2013 17:55:01 +0200 Subject: G1 collector, incremental PermGen collection possible? Message-ID: <525D6555.1090901@finkzeit.at> Hello, we are running a web-based java application-server (heap 15000MB, young-gen 5500MB, old-gen usage ~6GB). Since it's interactive we have had some issues with full-GCs in the past (takes ~15sec) and worked around that by using CMS with class-unloading enabled. That works reasonably well but it also has the occasional promotion-failure triggering an STW full-GC. After running the (smaller) test-system on java7 and G1 for a while I have switched the live-system to java7/G1 yesterday and the young/old-gens look fine so far. 
The young-collector takes a little more time than before, but it also keeps the old-gen in check by doing mixed-collects, so we have not had a "normal" old-collect all day (more than 20 per day before). We have had 1226 young-collects so far taking a total of 291sec; there have been 2 full-GCs taking 15sec each, which I call "abnormal" because I have to get rid of them. One has been triggered by "out of to-space" and the second one by a full PermGen. The problem is PermGen: part of the application creates a lot of dynamic classes, and so PermGen gets full at some point, which will trigger a full-GC blocking the system for ~15sec once or twice a day. With CMS and class unloading enabled that work was distributed throughout the day, but I guess since the young-collector also takes care of some old-gen stuff in G1 it never looks at PermGen until it's too late. Is there any way to get G1 to collect PermGen without falling back to a 15sec full-GC? Here are the relevant parameters I used for java7u40, I based the command-line on the one we used for java6/CMS: -Xmx15000M -Xms15000M -Xmn5500M -Xss228k -Xloggc:gclog.txt -XX:ParallelGCThreads=12 -XX:+PrintTenuringDistribution -XX:+PrintGC -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:LargePageSizeInBytes=2m -XX:+UseLargePages -XX:SurvivorRatio=8 -XX:ReservedCodeCacheSize=128m -XX:+UseG1GC -XX:MaxGCPauseMillis=150 -XX:TargetSurvivorRatio=90 -XX:MaxTenuringThreshold=31 Quite a long mail, I hope I included all the relevant parts... 
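For context, the figures quoted above work out as follows (illustrative arithmetic only, not the output of any GC tool):

```java
// Quick arithmetic on the numbers in the mail: 1226 young collections
// totalling 291 s, plus two 15 s full GCs (to-space / full PermGen).
public class GcPauseStats {
    public static double avgPauseMs(int collections, double totalSeconds) {
        return totalSeconds * 1000.0 / collections;
    }

    public static void main(String[] args) {
        double avgYoungMs = avgPauseMs(1226, 291.0);
        // The two full GCs are 15 000 ms each, roughly two orders of magnitude
        // above the 150 ms MaxGCPauseMillis target, which is why distributing
        // the PermGen cleanup (as CMS class unloading did) matters here.
        System.out.printf("average young pause: %.0f ms vs the 150 ms target%n", avgYoungMs);
    }
}
```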
kind regards Wolfgang From Andreas.Mueller at mgm-tp.com Mon Oct 21 10:09:54 2013 From: Andreas.Mueller at mgm-tp.com (Andreas Müller) Date: Mon, 21 Oct 2013 17:09:54 +0000 Subject: ParallelGC issue: collector does only Full GC by default and above NewSize=1800m Message-ID: <46FF8393B58AD84D95E444264805D98FBDDF067B@edata01.mgm-edv.de> Hi all, while experimenting a bit with different Garbage Collectors and applying them to my homegrown micro benchmarks I stumbled into the following problem: I run the below sample with the following command line (using Java 1.7.0_40 on Windows and probably others): java -Xms6g -Xmx6g -XX:+UseParallelGC - de.am.gc.benchmarks.MixedRandomList 100 8 12500000 The Default and proven ParallelGC collector does mostly Full GCs and shows only poor out-of-the-box performance, more than a factor 10 lower than the ParNew collector. More tests adding the -XX:NewSize=<n>m and -XX:MaxNewSize=<n>m reveal that the problem occurs as soon as the NewSize rises beyond 1800m, which it obviously does by default. Below that threshold ParallelGC performance is similar to ParNewGC (in the range of 7500 MB/s on my i7-2500MHz notebook), but at NewSize=2000m it is as low as 600 MB/s. Any ideas why this might happen? Note that the sample is constructed such that the live heap is always around 3GB. If anything, I would expect a problem only at around NewSize=3GB, when Old Gen shrinks to less than the live heap size. As a matter of fact, ParNewGC can do >7000 MB/s from NewSize=400m to NewSize=3500m with little variation around a maximum of 7600 MB/s at NewSize=2000m. I also provide source, gc.log and a plot of the NewSize dependency to anyone interested in that problem. 
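A note on why the default crosses 1800m: assuming HotSpot's default generation split of -XX:NewRatio=2 (an assumption for this sketch, not stated in the thread), a 6g heap gets a young generation of heap/(NewRatio+1):

```java
// Illustrative check: with the (assumed) default -XX:NewRatio=2, the young
// generation of a 6g heap would be 6144/3 = 2048 MB, above the 1800m point
// where the reported ParallelGC slowdown sets in.
public class DefaultNewSize {
    public static long defaultNewSizeMb(long heapMb, int newRatio) {
        return heapMb / (newRatio + 1);
    }

    public static void main(String[] args) {
        long heapMb = 6 * 1024;  // -Xms6g -Xmx6g
        long newSizeMb = defaultNewSizeMb(heapMb, 2);
        System.out.println("default young gen: " + newSizeMb + " MB (> 1800 MB: "
                + (newSizeMb > 1800) + ")");
    }
}
```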
Regards Andreas

-------------------------------------------------------MixedRandomList.java------------------------------------------------------------------------------------------------------------------------
package de.am.gc.benchmarks;

import java.util.ArrayList;
import java.util.List;

/**
 * GC benchmark producing a mix of lifetime=0 and lifetime>0 objects which are kept in randomly updated lists.
 *
 * @author Andreas Mueller
 */
public class MixedRandomList {
    private static final int DEFAULT_NUMBEROFTHREADS=1;
    // object size in bytes
    private static final int DEFAULT_OBJECTSIZE=100;

    private static int numberOfThreads=DEFAULT_NUMBEROFTHREADS;
    private static int objectSize=DEFAULT_OBJECTSIZE;
    // number of objects to fill half of the available memory with (permanent) live objects
    private static long numLive = (Runtime.getRuntime().maxMemory()/objectSize/5);

    /**
     * @param args the command line arguments
     */
    public static void main(String[] args) {
        if( args.length>0 ) {
            // first, optional argument is the size of the objects
            objectSize = Integer.parseInt(args[0]);
            // second, optional argument is the number of threads
            if( args.length>1 ) {
                numberOfThreads = Integer.parseInt(args[1]);
                // third, optional argument is the number of live objects
                if( args.length>2 ) {
                    numLive = Long.parseLong(args[2]);
                }
            }
        }
        for( int i=0; i<numberOfThreads; i++ ) {
            // run several GarbageProducer threads, each with its own mix of lifetime=0 and higher lifetime objects
            new Thread(new GarbageProducer((int)Math.pow(50.0,(double)(i+1)), numLive/numberOfThreads)).start();
        }
        try {
            Thread.sleep(1200000);
        } catch( InterruptedException iexc) {
            iexc.printStackTrace();
        }
        System.exit(0);
    }

    private static char[] getCharArray(int length) {
        char[] retVal = new char[length];
        for(int i=0; i<length; i++) {
            retVal[i] = 'a';
        }
        return retVal;
    }

    public static class GarbageProducer implements Runnable {

        // the fraction of newly created objects that do not become garbage immediately but are stored in the liveList
        int fractionLive;
        // the size of the liveList
        long myNumLive;

        /**
         * Each GarbageProducer creates objects that become garbage immediately (lifetime=0) and
         * objects that become garbage only after a lifetime>0 which is distributed about an average lifetime.
         * This average lifetime is a function of fractionLive and numLive
         *
         * @param fractionLive
         * @param numLive
         */
        public GarbageProducer(int fractionLive, long numLive) {
            this.fractionLive = fractionLive;
            this.myNumLive = numLive;
        }

        @Override
        public void run() {
            int osize = objectSize;
            char[] chars = getCharArray(objectSize);
            List liveList = new ArrayList((int)myNumLive);
            // initially, the lifeList is filled
            for(int i=0; i<myNumLive; i++) {
                // [remainder of run(), the loop that randomly updates liveList, was truncated in the archive]
Sitz der Gesellschaft: München Geschäftsführer: Hamarz Mehmanesh Handelsregister: AG München HRB 105068 -------------- next part -------------- A non-text attachment was scrubbed... Name: SrcAndLog.zip Type: application/x-zip-compressed Size: 2992 bytes Desc: SrcAndLog.zip Url : http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20131021/5e296ba3/SrcAndLog-0001.zip -------------- next part -------------- A non-text attachment was scrubbed... Name: GCThroughput.jpg Type: image/jpeg Size: 47777 bytes Desc: GCThroughput.jpg Url : http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20131021/5e296ba3/GCThroughput-0001.jpg From jon.masamitsu at oracle.com Mon Oct 21 15:29:10 2013 From: jon.masamitsu at oracle.com (Jon Masamitsu) Date: Mon, 21 Oct 2013 15:29:10 -0700 Subject: ParallelGC issue: collector does only Full GC by default and above NewSize=1800m In-Reply-To: <46FF8393B58AD84D95E444264805D98FBDDF067B@edata01.mgm-edv.de> References: <46FF8393B58AD84D95E444264805D98FBDDF067B@edata01.mgm-edv.de> Message-ID: <5265AAB6.1050700@oracle.com> Andreas, There was a bug fixed in jdk8 that had similar symptoms. If you can try a jdk8 build, that might tell us something. If jdk8 doesn't help, it's likely that the prediction model thinks that there is not enough free space in the old gen to support a young collection. We've been working on 7098155 to fix that. 
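The kind of check Jon alludes to can be sketched as follows (a deliberate simplification, not the actual HotSpot prediction model; the sizes in main() are illustrative, derived from the benchmark's 6g heap, ~3 GB live set and a 2000m young generation):

```java
// Sketch of promotion-guarantee style reasoning: if the old generation cannot
// absorb a worst-case promotion of the entire young generation, the policy may
// fall back to a full GC instead of a young collection. (The real model
// predicts promotion volume adaptively rather than assuming the worst case.)
public class PromotionGuarantee {
    public static boolean youngGcIsSafe(long oldFreeBytes, long youngUsedBytes) {
        return oldFreeBytes >= youngUsedBytes;
    }

    public static void main(String[] args) {
        long mb = 1024L * 1024L;
        // Illustrative sizes: 6g heap, 2000m young gen, ~3 GB live data in old gen:
        long oldGen = 6 * 1024 * mb - 2000 * mb;
        long oldFree = oldGen - 3 * 1024 * mb;
        System.out.println("young GC considered safe: " + youngGcIsSafe(oldFree, 2000 * mb));
    }
}
```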
Jon

On 10/21/2013 10:09 AM, Andreas Müller wrote:
> Hi all,
>
> while experimenting a bit with different Garbage Collectors and applying them to my homegrown micro benchmarks I stumbled into the following problem:
> I run the below sample with the following command line (using Java 1.7.0_40 on Windows and probably others):
> java -Xms6g -Xmx6g -XX:+UseParallelGC - de.am.gc.benchmarks.MixedRandomList 100 8 12500000
>
> The default and proven ParallelGC collector does mostly Full GCs and shows only poor out-of-the-box performance, more than a factor of 10 lower than the ParNew collector.
> More tests adding the -XX:NewSize=<n>m and -XX:MaxNewSize=<n>m flags reveal that the problem occurs as soon as NewSize rises beyond 1800m, which it obviously does by default.
> Below that threshold, ParallelGC performance is similar to ParNewGC (in the range of 7500 MB/s on my i7-2500MHz notebook), but at NewSize=2000m it is as low as 600 MB/s.
>
> Any ideas why this might happen?
>
> Note that the sample is constructed such that the live heap is always around 3GB. If anything, I would expect a problem only at around NewSize=3GB, when the old gen shrinks to less than the live heap size. As a matter of fact, ParNewGC can do >7000 MB/s from NewSize=400m to NewSize=3500m with little variation around a maximum of 7600 MB/s at NewSize=2000m.
>
> I also provide source, gc.log and a plot of the NewSize dependency to anyone interested in that problem.
>
> Regards
> Andreas
>
> [MixedRandomList.java source and signature snipped; the listing appears in full in the original post above]

_______________________________________________
hotspot-gc-use mailing list
hotspot-gc-use at openjdk.java.net
http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use
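Jon's hypothesis above, that the prediction model concludes the old generation lacks the free space to absorb a young collection's promotions, can be illustrated with a simplified sketch. This is not HotSpot source; the class, method, and numbers below are invented for illustration only:

```java
// Simplified sketch (NOT the HotSpot implementation) of the mechanism Jon
// describes: the Parallel collector's policy only attempts a young collection
// if a padded estimate of the bytes it will promote fits into the old
// generation's free space; otherwise it falls back to a full collection.
public class PromotionEstimateSketch {

    static boolean youngGcIsFeasible(double avgPromotedBytes,
                                     double promotedStdDev,
                                     int promotedPadding,
                                     long oldGenFreeBytes) {
        // The padding multiplies the standard deviation of past promotion
        // volumes to make the estimate conservative; a large pad (or a noisy
        // promotion history) can push the estimate past old-gen free space
        // and cause full GCs even when young GCs would have succeeded.
        double paddedEstimate = avgPromotedBytes + promotedPadding * promotedStdDev;
        return paddedEstimate <= oldGenFreeBytes;
    }

    public static void main(String[] args) {
        long mb = 1024L * 1024L;
        // Hypothetical history: 400 MB promoted on average, 150 MB stddev,
        // 1200 MB free in the old gen.
        System.out.println(youngGcIsFeasible(400.0 * mb, 150.0 * mb, 3, 1200 * mb)); // true: 850 MB fits
        System.out.println(youngGcIsFeasible(400.0 * mb, 150.0 * mb, 6, 1200 * mb)); // false: 1300 MB does not
    }
}
```

Under this (assumed) model, lowering the padding shifts, but does not remove, the young-gen size at which the padded estimate stops fitting.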
From Andreas.Mueller at mgm-tp.com Wed Oct 23 01:03:27 2013
From: Andreas.Mueller at mgm-tp.com (Andreas Müller)
Date: Wed, 23 Oct 2013 08:03:27 +0000
Subject: ParallelGC issue: collector does only Full GC
Message-ID: <46FF8393B58AD84D95E444264805D98FBDDF07EA@edata01.mgm-edv.de>

Hi Jon,

thanks for the hint to Java 8.
I have verified with jdk1.8.0-ea-b112 (from October 17): the behavior remains as described.

Best regards
Andreas

----------------------------------------------------------------------

Date: Mon, 21 Oct 2013 15:29:10 -0700
From: Jon Masamitsu
Subject: Re: ParallelGC issue: collector does only Full GC by default and above NewSize=1800m
To: hotspot-gc-use at openjdk.java.net
Message-ID: <5265AAB6.1050700 at oracle.com>
Content-Type: text/plain; charset="iso-8859-1"

[Jon's reply and the quoted original post snipped; both appear in full above]
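The "live heap is always around 3GB" figure from the original post can be sanity-checked with back-of-the-envelope arithmetic. The per-object sizes below are rough assumptions for a 64-bit JVM with compressed oops, not measurements:

```java
// Back-of-the-envelope check (assumed object layout, not measured) of the
// claim that the benchmark's live set is about 3GB when started with
// "100 8 12500000" (objectSize=100, 8 threads, numLive=12,500,000).
public class LiveSetEstimate {
    public static void main(String[] args) {
        long numLive = 12_500_000L;          // third command-line argument
        long charArrayBytes = 16 + 2 * 100;  // array header + 100 UTF-16 chars
        long stringBytes = 32;               // the String object itself (rough)
        long liveBytes = numLive * (charArrayBytes + stringBytes);
        // The ArrayList of references adds roughly another 50-100 MB on top.
        System.out.println(liveBytes / (1024 * 1024) + " MB"); // prints 2956 MB, i.e. about 3GB
    }
}
```

With these assumptions the estimate lands just under 3GB, consistent with Andreas's description and with his expectation of trouble only once the old gen shrinks below that size.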
------------------------------

_______________________________________________
hotspot-gc-use mailing list
hotspot-gc-use at openjdk.java.net
http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use

End of hotspot-gc-use Digest, Vol 68, Issue 5
*********************************************

From charlesjhunt at gmail.com Thu Oct 24 05:00:57 2013
From: charlesjhunt at gmail.com (charlie hunt)
Date: Thu, 24 Oct 2013 07:00:57 -0500
Subject: ParallelGC issue: collector does only Full GC
In-Reply-To: <46FF8393B58AD84D95E444264805D98FBDDF07EA@edata01.mgm-edv.de>
References: <46FF8393B58AD84D95E444264805D98FBDDF07EA@edata01.mgm-edv.de>
Message-ID:

I did a little experimenting with this ... I think Jon's hypothesis is right.

I first reproduced the behavior as described by Andreas. Then, I set -XX:PromotedPadding=1 in the case where NewSize/MaxNewSize is at 1800m. That eliminated the issue Andreas observed at 1800m. But, as I suspected, the threshold merely moved to a larger young gen sizing: the change in behavior now occurs at about 2300m, up from 1800m.

So, it does look like there is an issue with the prediction model, since PromotedPadding can influence the prediction model.

The prediction model code does not look trivial, as I'm sure Jon knows. ;)

hths,

charlie ...

On Wed, Oct 23, 2013 at 3:03 AM, Andreas Müller wrote:
> Hi Jon,
>
> thanks for the hint to Java 8.
> I have verified with jdk1.8.0-ea-b112 (from October 17): the behavior remains as described.
>
> Best regards
> Andreas
>
> [remainder of the quoted digest snipped; the messages appear in full above]

From bengt.rutisson at oracle.com Thu Oct 24 05:31:53 2013
From: bengt.rutisson at oracle.com (Bengt Rutisson)
Date: Thu, 24 Oct 2013 14:31:53 +0200
Subject: Fwd: Heads up: Deprecating the CMS foreground mode
In-Reply-To: <5269090B.2020702@oracle.com>
References: <5269090B.2020702@oracle.com>
Message-ID: <52691339.40101@oracle.com>

I sent this email to hotspot-gc-dev at openjdk.java.net, but I should probably have sent it to this list too (hotspot-gc-use at openjdk.java.net).
As mentioned in the email below, please keep any discussion in the review request email thread on hotspot-gc-dev at openjdk.java.net.

Thanks,
Bengt

-------- Original Message --------
Subject: Heads up: Deprecating the CMS foreground mode
Date: Thu, 24 Oct 2013 04:48:27 -0700 (PDT)
From: Bengt Rutisson
To: hotspot-gc-dev at openjdk.java.net

Hi all,

Just a heads up for anyone using the CMS foreground collector. I just sent out a review request to this list that proposes a change to print warning messages for the flags that enable the CMS foreground collector. The review request is titled "JDK-8027132: Print deprecation warning message for the flags controlling the CMS foreground collector", and it would be good if any discussion can be handled in that email thread.

We don't know of anyone using the foreground collector, so I thought I'd send out an extra email to draw some attention to the review request in case anybody is using it.

Just to be clear: the change proposed now only prints warning messages for some flags. Everything will keep working as before. This is just to communicate that the foreground collector has been deprecated. Hopefully we will follow this up with a change that actually removes support for the foreground collector in one of the future major releases.

Thanks,
Bengt

From Andreas.Mueller at mgm-tp.com Thu Oct 24 08:00:10 2013
From: Andreas.Mueller at mgm-tp.com (Andreas Müller)
Date: Thu, 24 Oct 2013 15:00:10 +0000
Subject: ParallelGC issue: collector does only Full GC
In-Reply-To:
References: <46FF8393B58AD84D95E444264805D98FBDDF07EA@edata01.mgm-edv.de>
Message-ID: <46FF8393B58AD84D95E444264805D98FBDDF1113@edata01.mgm-edv.de>

Hi Charlie,

thanks for having a look, confirming and explaining the issue.
Hi Jon,

I have created a bug at http://bugs.sun.com/ (note the URL :-) about this issue. It received the (preliminary) Id 9007664 in the bug tracking system.

Because ParNewGC's retirement has been announced for Java 8, ParallelGC needs to be fixed. With some satisfaction, I noticed that in Java 1.8.0-ea-b112 -XX:+UseParNewGC is still supported. It just prints a message about deprecation and "likely" future retirement. Is that imminent for JDK 1.8.0-GA?

Best regards
Andreas

________________________________
From: charlie hunt [charlesjhunt at gmail.com]
Sent: Thursday, October 24, 2013 14:00
To: Andreas Müller
Cc: jon.masamitsu at oracle.com; hotspot-gc-use at openjdk.java.net
Subject: Re: ParallelGC issue: collector does only Full GC

I did a little experimenting with this ... I think Jon's hypothesis is right.

[rest of charlie's message snipped; it appears in full in his Oct 24 post above]

On Wed, Oct 23, 2013 at 3:03 AM, Andreas Müller wrote:

Hi Jon,

thanks for the hint to Java 8.
I have verified with jdk1.8.0-ea-b112 (from October 17): the behavior remains as described.

Best regards
Andreas

[remainder of the quoted digest snipped; the messages appear in full above]

From ysr1729 at gmail.com Thu Oct 24 10:12:12 2013
From: ysr1729 at gmail.com (Srinivas Ramakrishna)
Date: Thu, 24 Oct 2013 10:12:12 -0700
Subject: ParallelGC issue: collector does only Full GC
In-Reply-To:
References: <46FF8393B58AD84D95E444264805D98FBDDF07EA@edata01.mgm-edv.de>
Message-ID:

I had called in this bug earlier this year -- the model can be easily improved. I will try and dig up the email in which I had described the issue and the suggested fix.

-- ramki

On Thu, Oct 24, 2013 at 5:00 AM, charlie hunt wrote:
> I did a little experimenting with this ... I think Jon's hypothesis is right.
>
> I first reproduced the behavior as described by Andreas. Then, I set -XX:PromotedPadding=1 in the case where NewSize/MaxNewSize is at 1800m. That eliminated the issue Andreas observed at 1800m.
> But, as I suspected, the threshold at which the change in behavior occurs
> merely moved to a higher young gen sizing: it now occurs at about 2300m,
> up from 1800m.
>
> So, it does look like there is an issue with the prediction model, since
> PromotedPadding can influence the prediction model.
>
> The prediction model code does not look trivial, as I'm sure Jon knows. ;)
>
> hths,
>
> charlie ...

> On Wed, Oct 23, 2013 at 3:03 AM, Andreas Müller <Andreas.Mueller at mgm-tp.com> wrote:

>> Hi Jon,
>>
>> thanks for the hint to Java 8.
>> I have verified with jdk1.8.0-ea-b112 (from October 17): behavior remains as described.
>>
>> Best regards
>> Andreas
>>
>> ----------------------------------------------------------------------
>>
>> Date: Mon, 21 Oct 2013 15:29:10 -0700
>> From: Jon Masamitsu
>> Subject: Re: ParallelGC issue: collector does only Full GC by default
>>         and above NewSize=1800m
>> To: hotspot-gc-use at openjdk.java.net
>> Message-ID: <5265AAB6.1050700 at oracle.com>
>> Content-Type: text/plain; charset="iso-8859-1"
>>
>> Andreas,
>>
>> There was a bug fixed in jdk8 that had similar symptoms. If you can try
>> a jdk8 build that might tell us something.
>>
>> If jdk8 doesn't help it's likely that the prediction model thinks that
>> there is not enough free space in the old gen to support a young collection.
>> We've been working on 7098155 to fix that.
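[Editorially, the failure mode Jon describes -- the collector comparing a padded promotion estimate against the old generation's free space, and falling back to full GCs when it does not fit -- can be illustrated with a toy model. This is a sketch only, not HotSpot's actual sizing code; the class name, the sample numbers, and the exact padding formula (average plus padding standard deviations) are assumptions:]

```java
// Toy model of a padded promotion predictor (NOT HotSpot source).
// A young GC is considered viable only if the padded estimate of how much
// will be promoted fits in the old generation's free space; a noisy
// promotion history plus a large padding can tip the decision toward
// back-to-back full GCs even when the plain average would fit.
public class PaddedPromotionSketch {

    // Average of the promotion history, padded by `padding` sample
    // standard deviations.
    static double paddedEstimate(double[] promoted, double padding) {
        double avg = 0;
        for (double p : promoted) avg += p;
        avg /= promoted.length;
        double var = 0;
        for (double p : promoted) var += (p - avg) * (p - avg);
        double dev = Math.sqrt(var / Math.max(1, promoted.length - 1));
        return avg + padding * dev;
    }

    static boolean youngGcOk(double[] promoted, double oldGenFree, double padding) {
        return paddedEstimate(promoted, padding) <= oldGenFree;
    }

    public static void main(String[] args) {
        // MB promoted per young GC -- made-up numbers with high variance
        double[] history = {100, 300, 120, 280, 110};
        System.out.println(youngGcOk(history, 400, 1)); // true: ~281 MB fits
        System.out.println(youngGcOk(history, 400, 3)); // false: ~479 MB does not
    }
}
```

This shows why lowering -XX:PromotedPadding (as in Charlie's experiment) can move the point at which the model gives up on young collections.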
>> >> Jon
>>
>> On 10/21/2013 10:09 AM, Andreas Müller wrote:
>> > Hi all,
>> > [snip -- original problem description, benchmark source and signature quoted in full earlier in this digest]
>>
>> _______________________________________________
>> hotspot-gc-use mailing list
>> hotspot-gc-use at openjdk.java.net
>> http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use

From charlesjhunt at gmail.com Thu Oct 24 10:17:32 2013
From: charlesjhunt at gmail.com (charlie hunt)
Date: Thu, 24 Oct 2013 12:17:32 -0500
Subject: ParallelGC issue: collector does only Full GC
In-Reply-To: References: <46FF8393B58AD84D95E444264805D98FBDDF07EA@edata01.mgm-edv.de>
Message-ID:

Thanks Ramki!

On Thu, Oct 24, 2013 at 12:12 PM, Srinivas Ramakrishna wrote:
> I had called in this bug earlier this year -- the model can be easily
> improved. I will try and dig up the email in which I had described the issue
> and the suggested fix.
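[For readers who want to reproduce the threshold Andreas and Charlie describe, the NewSize sweep can be scripted. A dry-run sketch: it only prints the command lines to run, bracketing the reported 1800m threshold. The heap flags, benchmark class, and arguments are taken from the thread; the class name and the added -verbose:gc flag are assumptions:]

```java
// Print one reproduction command line per -XX:NewSize setting.
// The throughput collapse reported in the thread appears between
// 1800m and 2000m by default (around 2300m with -XX:PromotedPadding=1).
public class NewSizeSweep {

    static String command(int newSizeMb) {
        return "java -Xms6g -Xmx6g -XX:+UseParallelGC"
             + " -XX:NewSize=" + newSizeMb + "m -XX:MaxNewSize=" + newSizeMb + "m"
             + " -verbose:gc de.am.gc.benchmarks.MixedRandomList 100 8 12500000";
    }

    public static void main(String[] args) {
        // sweep young gen sizes around the threshold
        for (int m = 1400; m <= 2400; m += 200) {
            System.out.println(command(m));
        }
    }
}
```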
> > -- ramki
> >
> > On Thu, Oct 24, 2013 at 5:00 AM, charlie hunt wrote:
> >> I did a little experimenting with this ... I think Jon's hypothesis is right.
> >> [snip -- remainder quoted in full earlier in this digest]
>
> _______________________________________________
> hotspot-gc-use mailing list
> hotspot-gc-use at openjdk.java.net
> http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20131024/6cf4203d/attachment-0001.html

From ysr1729 at gmail.com Thu Oct 24 10:50:57 2013
From: ysr1729 at gmail.com (Srinivas Ramakrishna)
Date: Thu, 24 Oct 2013 10:50:57 -0700
Subject: ParallelGC issue: collector does only Full GC
In-Reply-To: References: <46FF8393B58AD84D95E444264805D98FBDDF07EA@edata01.mgm-edv.de>
Message-ID:

The email I had in mind was this one:-

http://mail.openjdk.java.net/pipermail/hotspot-gc-dev/2013-May/007092.html

It appears as though the attachments I had sent were scrubbed, so folks on the list may not have seen the attachments. I'll see if I can dig them up and reconstruct it again. I also couldn't find the email in which I promised to send the fix I had in mind, although that can pretty much be constructed from the email description.

I'll see if I can find the relevant attachments and upload them into a bug report for this. I have only been half-following this exchange, so it's possible that it's somewhat tangentially related, but definitely related to the prediction model for how much gets promoted.

-- ramki

On Thu, Oct 24, 2013 at 10:12 AM, Srinivas Ramakrishna wrote:
> I had called in this bug earlier this year -- the model can be easily
> improved. I will try and dig up the email in which I had described the issue
> and the suggested fix.
> [snip -- remainder quoted in full earlier in this digest]

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20131024/f2de300a/attachment-0001.html

From Peter.B.Kessler at Oracle.COM Thu Oct 24 11:09:42 2013
From: Peter.B.Kessler at Oracle.COM (Peter B. Kessler)
Date: Thu, 24 Oct 2013 11:09:42 -0700
Subject: ParallelGC issue: collector does only Full GC
In-Reply-To: References: <46FF8393B58AD84D95E444264805D98FBDDF07EA@edata01.mgm-edv.de>
Message-ID: <52696266.7030807@Oracle.COM>

Pack-rat that I am*, I have your original message including the attachments (two charts: BetterPredictionFilter_time.tiff and BetterPredictionFilter_time_start.tiff).
If you have trouble finding them I can send them back to you. If you have trouble uploading them to the bug report, maybe I can do that. ... peter * I don't need a garbage collector, I just need a lot of memory. :-) On 10/24/13 10:50, Srinivas Ramakrishna wrote: > > The email I had in mind was this one:- > > http://mail.openjdk.java.net/pipermail/hotspot-gc-dev/2013-May/007092.html > > It appears as though the attachments I had sent were scrubbed, so folks on the list may not have seen the > attachments. I'll see if I can dig them up and reconstruct it again. I also couldn't find the email in which I promised to > send the fix I had in mind, although that can pretty much be constructed from the email description. > > I'll see if I can find the relevant attachments and upload them into a bug report of this. I have only been > half-following this exchange, so it's possible that it's somewhat tangentially related, but definitely > related to the prediction model for how much gets promoted. > > -- ramki > > > > On Thu, Oct 24, 2013 at 10:12 AM, Srinivas Ramakrishna > wrote: > > I had called in this bug earlier this year -- the model can be easily improved. I will try and dig up the email in which I had described the issue > and the suggested fix. > > -- ramki > > > On Thu, Oct 24, 2013 at 5:00 AM, charlie hunt > wrote: > > I did a little experimenting with this ... I think Jon's hypothesis is right. > > I first reproduced the behavior as described by Andreas. Then, I set -XX:PromotedPadding=1 in the case where NewSize/MaxNewSize is at 1800m. That eliminated the issue Andreas observed at 1800m. But, as I suspected the threshold at which the change in behavior merely changed at a higher sizing of young gen. It now occurs at about 2300m, up from 1800m. > > So, it does look like there is an issue with the prediction model since PromotedPadding can influence the prediction model. > > The prediction model code does not look trivial, as I'm sure Jon knows. 
;) > > hths, > > charlie ... > > > > On Wed, Oct 23, 2013 at 3:03 AM, Andreas M?ller > wrote: > > Hi Jon, > > thanks for the hint to Java 8. > I have verified with jdk1.8.0-ea-b112 (from October 17): behavior remains as described > > Best regards > Andreas > > ---------------------------------------------------------------------- > > Date: Mon, 21 Oct 2013 15:29:10 -0700 > From: Jon Masamitsu > > Subject: Re: ParallelGC issue: collector does only Full GC by default > and above NewSize=1800m > To: hotspot-gc-use at openjdk.java.net > Message-ID: <5265AAB6.1050700 at oracle.com > > Content-Type: text/plain; charset="iso-8859-1" > > Andreas, > > There was a bug fixed in jdk8 that had similar symptoms. If you can try a jdk8 build that might tell us something. > > If jdk8 doesn't help it's likely that the prediction model thinks that there is not enough > free space in the old gen to support a young collection. We've been > working on 7098155 to > fix that. > > Jon > > > On 10/21/2013 10:09 AM, Andreas M?ller wrote: > > Hi all, > > > > while experimenting a bit with different Garbage Collectors and > > applying them to my homegrown micro benchmarks I stumbled into the following problem: > > I run the below sample with the following command line (using Java 1.7.0_40 on Windows and probably others): > > java -Xms6g -Xmx6g -XX:+UseParallelGC - > > de.am.gc.benchmarks.MixedRandomList 100 8 12500000 > > > > The Default and proven ParallelGC collector does mostly Full GCs and shows only poor out-of-the-box performance, more than a factor 10 lower than the ParNew collector. > > More tests adding the -XX:NewSize=m and -XX:MaxNewSize=m reveal that the problem occurs as soon as the NewSize rises beyond 1800m which it obviously does by default. > > Below that threshold ParallelGC performance is similar to ParNewGC (in the range of 7500 MB/s on my i7-2500MHz notebook), but at NewSize=2000m is as low as 600 MB/s. > > > > Any ideas why this might happen? 
> > > > Note that the sample is constructed such that the live heap is always around 3GB. If anything, I would expect a problem only at around NewSize=3GB, when the old gen shrinks to less than the live heap size. As a matter of fact, ParNewGC can do >7000 MB/s from NewSize=400m to NewSize=3500m with little variation around a maximum of 7600 MB/s at NewSize=2000m. > > > > I also provide source, gc.log and a plot of the NewSize dependency to anyone interested in that problem. > > > > Regards > > Andreas > > > > -------------------------------------------------------MixedRandomList.java------------------------------------------------------------------------------------------------------------------ > > package de.am.gc.benchmarks; > > > > import java.util.ArrayList; > > import java.util.List; > > > > /** > > * GC benchmark producing a mix of lifetime=0 and lifetime>0 objects which are kept in randomly updated lists. > > * > > * @author Andreas Mueller > > */ > > public class MixedRandomList { > > private static final int DEFAULT_NUMBEROFTHREADS=1; > > // object size in bytes > > private static final int DEFAULT_OBJECTSIZE=100; > > > > private static int numberOfThreads=DEFAULT_NUMBEROFTHREADS; > > private static int objectSize=DEFAULT_OBJECTSIZE; > > // number of objects to fill half of the available memory with (permanent) live objects > > private static long numLive = > > (Runtime.getRuntime().maxMemory()/objectSize/5); > > > > /** > > * @param args the command line arguments > > */ > > public static void main(String[] args) { > > if( args.length>0 ) { > > // first, optional argument is the size of the objects > > objectSize = Integer.parseInt(args[0]); > > // second, optional argument is the number of threads > > if( args.length>1 ) { > > numberOfThreads = Integer.parseInt(args[1]); > > // third, optional argument is the number of live objects > > if( args.length>2 ) { > > numLive = Long.parseLong(args[2]); > > } > > } > > } > > for( int i=0; i<numberOfThreads; i++ ) { > > // run several GarbageProducer threads, each with its own mix of lifetime=0 and higher lifetime objects > > new Thread(new GarbageProducer((int)Math.pow(50.0,(double)(i+1)), numLive/numberOfThreads)).start(); > > } > > try { > > Thread.sleep(1200000); > > } catch( InterruptedException iexc) { > > iexc.printStackTrace(); > > } > > System.exit(0); > > } > > > > private static char[] getCharArray(int length) { > > char[] retVal = new char[length]; > > for(int i=0; i<length; i++) { > > retVal[i] = 'a'; > > } > > return retVal; > > } > > > > public static class GarbageProducer implements Runnable { > > > > // the fraction of newly created objects that do not become garbage immediately but are stored in the liveList > > int fractionLive; > > // the size of the liveList > > long myNumLive; > > > > /** > > * Each GarbageProducer creates objects that become garbage immediately (lifetime=0) and > > * objects that become garbage only after a lifetime>0 which is distributed around an average lifetime. > > * This average lifetime is a function of fractionLive and numLive > > * > > * @param fractionLive > > * @param numLive > > */ > > public GarbageProducer(int fractionLive, long numLive) { > > this.fractionLive = fractionLive; > > this.myNumLive = numLive; > > } > > > > @Override > > public void run() { > > int osize = objectSize; > > char[] chars = getCharArray(objectSize); > > List<String> liveList = new ArrayList<String>((int)myNumLive); > > // initially, the liveList is filled > > for(int i=0; i<myNumLive; i++) { > > liveList.add(new String(chars)); > > } > > while(true) { > > // create the majority of objects as garbage > > for(int i=0; i<fractionLive-1; i++) { > > String garbageObject = new String(chars); > > } > > // keep the fraction of objects live by placing them in the list (at a random index) > > int index = (int)(Math.random()*myNumLive); > > liveList.set(index, new String(chars)); > > } > > } > > } > > } > > ---------------------------------------------------------------------- > > 
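The archived listing above lost its angle brackets to the list's HTML scrubber (the loop bounds and the List's type parameter are missing), so for reference here is a self-contained, single-threaded sketch of the same workload. The loop bound `fractionLive - 1` is an assumption reconstructed from the surrounding comments, not confirmed from the original attachment, and the constants are scaled down so the sketch finishes quickly.

```java
import java.util.ArrayList;
import java.util.List;

// Single-threaded sketch of the MixedRandomList workload: a fixed-size list
// of live Strings plus a stream of immediately-dead Strings.
public class MixedRandomListSketch {

    // Build a char[] filled with 'a', as in the original getCharArray.
    static char[] getCharArray(int length) {
        char[] retVal = new char[length];
        for (int i = 0; i < length; i++) {
            retVal[i] = 'a';
        }
        return retVal;
    }

    // One allocation round: (fractionLive - 1) short-lived objects, then one
    // object that replaces a random slot in liveList and stays live.
    static void allocateRound(List<String> liveList, char[] chars, int fractionLive) {
        for (int i = 0; i < fractionLive - 1; i++) {
            String garbageObject = new String(chars); // becomes garbage immediately
        }
        int index = (int) (Math.random() * liveList.size());
        liveList.set(index, new String(chars));
    }

    public static void main(String[] args) {
        char[] chars = getCharArray(100);
        List<String> liveList = new ArrayList<String>(1000);
        for (int i = 0; i < 1000; i++) {
            liveList.add(new String(chars));
        }
        for (int round = 0; round < 10000; round++) {
            allocateRound(liveList, chars, 50);
        }
        // The live set never grows: set() replaces, so the size stays 1000.
        System.out.println("live=" + liveList.size()); // prints "live=1000"
    }
}
```

Run with, e.g., `java -Xms6g -Xmx6g -XX:+UseParallelGC MixedRandomListSketch` (after raising the round count) to reproduce the allocation pattern under discussion.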
---------------------------------------------------------------------- > > --------------------------------------------------------------------- > > > > Andreas Müller > > > > mgm technology partners GmbH > > Frankfurter Ring 105a > > 80807 München > > Tel. +49 (89) 35 86 80-633 > > Fax +49 (89) 35 86 80-288 > > E-Mail Andreas.Mueller at mgm-tp.com > > > Innovation Implemented. > > Sitz der Gesellschaft: München > > Geschäftsführer: Hamarz Mehmanesh > > Handelsregister: AG München HRB 105068 > > > > _______________________________________________ > > hotspot-gc-use mailing list > > hotspot-gc-use at openjdk.java.net > > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use > > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20131021/62d044c7/attachment-0001.html > > ------------------------------ > > _______________________________________________ > hotspot-gc-use mailing list > hotspot-gc-use at openjdk.java.net > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use > > > End of hotspot-gc-use Digest, Vol 68, Issue 5 > ********************************************* > _______________________________________________ > hotspot-gc-use mailing list > hotspot-gc-use at openjdk.java.net > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use > From jon.masamitsu at oracle.com Thu Oct 24 11:54:40 2013 From: jon.masamitsu at oracle.com (Jon Masamitsu) Date: Thu, 24 Oct 2013 11:54:40 -0700 Subject: ParallelGC issue: collector does only Full GC In-Reply-To: 
<46FF8393B58AD84D95E444264805D98FBDDF1113@edata01.mgm-edv.de> References: <46FF8393B58AD84D95E444264805D98FBDDF07EA@edata01.mgm-edv.de>, <46FF8393B58AD84D95E444264805D98FBDDF1113@edata01.mgm-edv.de> Message-ID: <52696CF0.6080302@oracle.com> On 10/24/13 8:00 AM, Andreas Müller wrote: > Hi Charlie, > > thanks for having a look, confirming and explaining the issue. > > Hi Jon, > > I have created a bug at http://bugs.sun.com/ > (Note the URL :-) ) about this issue. It received the (preliminary) Id > 9007664 in the bug tracking system. > Because ParNewGC's retirement has been announced for Java 8, > ParallelGC needs to be fixed. > > With some satisfaction, I noticed that in Java 1.8.0-ea-b112 > -XX:+UseParNewGC is still supported. > It just prints a message about deprecation and "likely" future > retirement. Is that imminent for JDK1.8.0-GA ? No, UseParNewGC will still work in jdk8. We deprecate at a major release and don't remove it until a later major release. Jon > > Best regards > Andreas > > ------------------------------------------------------------------------ > *From:* charlie hunt [charlesjhunt at gmail.com] > *Sent:* Thursday, 24 October 2013 14:00 > *To:* Andreas Müller > *Cc:* jon.masamitsu at oracle.com; hotspot-gc-use at openjdk.java.net > *Subject:* Re: ParallelGC issue: collector does only Full GC > > I did a little experimenting with this ... I think Jon's hypothesis is > right. > > I first reproduced the behavior as described by Andreas. Then, I set > -XX:PromotedPadding=1 in the case where NewSize/MaxNewSize is at > 1800m. That eliminated the issue Andreas observed at 1800m. But, as I > suspected, the threshold at which the behavior changes merely moved to > a higher sizing of young gen. It now occurs at about 2300m, up from > 1800m. > > So, it does look like there is an issue with the prediction model > since PromotedPadding can influence the prediction model. 
> > The prediction model code does not look trivial, as I'm sure Jon knows. ;) > > hths, > > charlie ... > > [remainder of quoted message snipped: it repeats Andreas Müller's original report and the full MixedRandomList.java listing quoted earlier in this digest] > > -------------- next part -------------- > An HTML attachment was scrubbed... 
> URL: > http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20131021/62d044c7/attachment-0001.html > > ------------------------------ > > _______________________________________________ > hotspot-gc-use mailing list > hotspot-gc-use at openjdk.java.net > > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use > > > End of hotspot-gc-use Digest, Vol 68, Issue 5 > ********************************************* > _______________________________________________ > hotspot-gc-use mailing list > hotspot-gc-use at openjdk.java.net > > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20131024/293f276c/attachment-0001.html From jon.masamitsu at oracle.com Thu Oct 24 12:01:35 2013 From: jon.masamitsu at oracle.com (Jon Masamitsu) Date: Thu, 24 Oct 2013 12:01:35 -0700 Subject: ParallelGC issue: collector does only Full GC In-Reply-To: References: <46FF8393B58AD84D95E444264805D98FBDDF07EA@edata01.mgm-edv.de> Message-ID: <52696E8F.2070408@oracle.com> On 10/24/13 5:00 AM, charlie hunt wrote: > I did a little experimenting with this ... I think Jon's hypothesis is > right. > > I first reproduced the behavior as described by Andreas. Then, I set > -XX:PromotedPadding=1 in the case where NewSize/MaxNewSize is at > 1800m. That eliminated the issue Andreas observed at 1800m. But, as I > suspected the threshold at which the change in behavior merely changed > at a higher sizing of young gen. It now occurs at about 2300m, up from > 1800m. > > So, it does look like there is an issue with the prediction model > since PromotedPadding can influence the prediction model. > > The prediction model code does not look trivial, as I'm sure Jon knows. ;) Part of the problem is that some of the inputs to the prediction model are not updated at a Full GC. 
So depending on how much space gets freed up during the Full GC, the prediction model can get frozen into the wrong decision. That's what we're trying to fix with 7098155. Jon > > hths, > > charlie ... > > [remainder of quoted message snipped: it repeats Andreas Müller's original report and the full MixedRandomList.java listing quoted earlier in this digest] > > -------------- next part -------------- > An HTML attachment was scrubbed... 
> URL: > http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20131021/62d044c7/attachment-0001.html > > ------------------------------ > > _______________________________________________ > hotspot-gc-use mailing list > hotspot-gc-use at openjdk.java.net > > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use > > > End of hotspot-gc-use Digest, Vol 68, Issue 5 > ********************************************* > _______________________________________________ > hotspot-gc-use mailing list > hotspot-gc-use at openjdk.java.net > > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20131024/7c4cc59a/attachment-0001.html From ysr1729 at gmail.com Thu Oct 24 12:28:33 2013 From: ysr1729 at gmail.com (Srinivas Ramakrishna) Date: Thu, 24 Oct 2013 12:28:33 -0700 Subject: ParallelGC issue: collector does only Full GC In-Reply-To: <52696E8F.2070408@oracle.com> References: <46FF8393B58AD84D95E444264805D98FBDDF07EA@edata01.mgm-edv.de> <52696E8F.2070408@oracle.com> Message-ID: Yes, exactly: that's point (3) of my email from 5/9, a pointer to which I included earlier in this (slightly bifurcated) thread. Thanks for fixing it! I'd submit that the other two points, especially point (1), should also be done. -- ramki On Thu, Oct 24, 2013 at 12:01 PM, Jon Masamitsu wrote: > > On 10/24/13 5:00 AM, charlie hunt wrote: > > I did a little experimenting with this ... I think Jon's hypothesis is > right. > > I first reproduced the behavior as described by Andreas. Then, I set > -XX:PromotedPadding=1 in the case where NewSize/MaxNewSize is at > 1800m. That eliminated the issue Andreas observed at 1800m. But, as I > suspected the threshold at which the change in behavior merely changed > at a higher sizing of young gen. It now occurs at about 2300m, up from > 1800m. 
> > So, it does look like there is an issue with the prediction model since > PromotedPadding can influence the prediction model. > > The prediction model code does not look trivial, as I'm sure Jon knows. > ;) > > > Part of the problem is that some of the inputs to the prediction model > are not updated at a Full GC. So depending on how much space > gets freed up during the Full GC, the prediction model can get > frozen into the wrong decision. That's what we're trying to fix > with 7098155. > > Jon > > > > hths, > > charlie ... > > > > > On Wed, Oct 23, 2013 at 3:03 AM, Andreas M?ller < > Andreas.Mueller at mgm-tp.com> wrote: > >> Hi Jon, >> >> thanks for the hint to Java 8. >> I have verified with jdk1.8.0-ea-b112 (from October 17): behavior remains >> as described >> >> Best regards >> Andreas >> >> ---------------------------------------------------------------------- >> >> Date: Mon, 21 Oct 2013 15:29:10 -0700 >> From: Jon Masamitsu >> Subject: Re: ParallelGC issue: collector does only Full GC by default >> and above NewSize=1800m >> To: hotspot-gc-use at openjdk.java.net >> Message-ID: <5265AAB6.1050700 at oracle.com> >> Content-Type: text/plain; charset="iso-8859-1" >> >> Andreas, >> >> There was a bug fixed in jdk8 that had similar symptoms. If you can try >> a jdk8 build that might tell us something. >> >> If jdk8 doesn't help it's likely that the prediction model thinks that >> there is not enough >> free space in the old gen to support a young collection. We've been >> working on 7098155 to >> fix that. 
>> >> Jon >> >> >> On 10/21/2013 10:09 AM, Andreas M?ller wrote: >> > Hi all, >> > >> > while experimenting a bit with different Garbage Collectors and >> > applying them to my homegrown micro benchmarks I stumbled into the >> following problem: >> > I run the below sample with the following command line (using Java >> 1.7.0_40 on Windows and probably others): >> > java -Xms6g -Xmx6g -XX:+UseParallelGC - >> > de.am.gc.benchmarks.MixedRandomList 100 8 12500000 >> > >> > The Default and proven ParallelGC collector does mostly Full GCs and >> shows only poor out-of-the-box performance, more than a factor 10 lower >> than the ParNew collector. >> > More tests adding the -XX:NewSize=m and -XX:MaxNewSize=m >> reveal that the problem occurs as soon as the NewSize rises beyond 1800m >> which it obviously does by default. >> > Below that threshold ParallelGC performance is similar to ParNewGC (in >> the range of 7500 MB/s on my i7-2500MHz notebook), but at NewSize=2000m is >> as low as 600 MB/s. >> > >> > Any ideas why this might happen? >> > >> > Note that the sample is constructed such that the live heap is always >> around 3GB. If any I would expect a problem only at around NewSize=3GB, >> when Old Gen shrinks to less than the live heap size. As a matter of fact, >> ParNewGC can do >7000 MB/s from NewSize=400m to NewSize=3500m with little >> variation around a maximum of 7600 MB/s at NewSize=2000m. >> > >> > I also provide source, gc.log and a plot of the NewSize dependency to >> anyone interested in that problem. 
>> > >> > Regards >> > Andreas >> > >> > -------------------------------------------------------MixedRandomList >> > .java----------------------------------------------------------------- >> > ------------------------------------------------------- >> > package de.am.gc.benchmarks; >> > >> > import java.util.ArrayList; >> > import java.util.List; >> > >> > /** >> > * GC benchmark producing a mix of lifetime=0 and lifetime>0 objects >> which are kept in randomly updated lists. >> > * >> > * @author Andreas Mueller >> > */ >> > public class MixedRandomList { >> > private static final int DEFAULT_NUMBEROFTHREADS=1; >> > // object size in bytes >> > private static final int DEFAULT_OBJECTSIZE=100; >> > >> > private static int numberOfThreads=DEFAULT_NUMBEROFTHREADS; >> > private static int objectSize=DEFAULT_OBJECTSIZE; >> > // number of objects to fill half of the available memory with >> (permanent) live objects >> > private static long numLive = >> > (Runtime.getRuntime().maxMemory()/objectSize/5); >> > >> > /** >> > * @param args the command line arguments >> > */ >> > public static void main(String[] args) { >> > if( args.length>0 ) { >> > // first, optional argument is the size of the objects >> > objectSize = Integer.parseInt(args[0]); >> > // second, optional argument is the number of live objects >> > if( args.length>1 ) { >> > numberOfThreads = Integer.parseInt(args[1]); >> > // third, optional argument is the number of live >> objects >> > if( args.length>2 ) { >> > numLive = Long.parseLong(args[2]); >> > } >> > } >> > } >> > for( int i=0; i> > // run several GarbageProducer threads, each with its own >> mix of lifetime=0 and higher lifetime objects >> > new Thread(new >> GarbageProducer((int)Math.pow(50.0,(double)(i+1)), >> numLive/numberOfThreads)).start(); >> > } >> > try { >> > Thread.sleep(1200000); >> > } catch( InterruptedException iexc) { >> > iexc.printStackTrace(); >> > } >> > System.exit(0); >> > } >> > >> > private static char[] getCharArray(int 
length) { >> > char[] retVal = new char[length]; >> > for(int i=0; i> > retVal[i] = 'a'; >> > } >> > return retVal; >> > } >> > >> > public static class GarbageProducer implements Runnable { >> > >> > // the fraction of newly created objects that do not become >> garbage immediately but are stored in the liveList >> > int fractionLive; >> > // the size of the liveList >> > long myNumLive; >> > >> > /** >> > * Each GarbageProducer creates objects that become garbage >> immediately (lifetime=0) and >> > * objects that become garbage only after a lifetime>0 which >> is distributed about an average lifetime. >> > * This average lifetime is a function of fractionLive and >> numLive >> > * >> > * @param fractionLive >> > * @param numLive >> > */ >> > public GarbageProducer(int fractionLive, long numLive) { >> > this.fractionLive = fractionLive; >> > this.myNumLive = numLive; >> > } >> > >> > @Override >> > public void run() { >> > int osize = objectSize; >> > char[] chars = getCharArray(objectSize); >> > List liveList = new >> ArrayList((int)myNumLive); >> > // initially, the lifeList is filled >> > for(int i=0; i> > liveList.add(new String(chars)); >> > } >> > while(true) { >> > // create the majority of objects as garbage >> > for(int i=0; i> > String garbageObject = new String(chars); >> > } >> > // keep the fraction of objects live by placing them >> in the list (at a random index) >> > int index = (int)(Math.random()*myNumLive); >> > liveList.set(index, new String(chars)); >> > } >> > } >> > } >> > } >> > ---------------------------------------------------------------------- >> > ---------------------------------------------------------------------- >> > --------------------------------------------------------------------- >> > >> > Andreas M?ller >> > >> > mgm technology partners GmbH >> > Frankfurter Ring 105a >> > 80807 M?nchen >> > Tel. +49 (89) 35 86 80-633 >> > Fax +49 (89) 35 86 80-288 >> > E-Mail Andreas.Mueller at mgm-tp.com >> > Innovation Implemented. 
>> > Sitz der Gesellschaft: München
>> > Geschäftsführer: Hamarz Mehmanesh
>> > Handelsregister: AG München HRB 105068
>> >
>> >
>> >
>> >
>> > _______________________________________________
>> > hotspot-gc-use mailing list
>> > hotspot-gc-use at openjdk.java.net
>> > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use
>>
>> -------------- next part --------------
>> An HTML attachment was scrubbed...
>> URL:
>> http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20131021/62d044c7/attachment-0001.html
>>
>> ------------------------------
>>
>> _______________________________________________
>> hotspot-gc-use mailing list
>> hotspot-gc-use at openjdk.java.net
>> http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use
>>
>>
>> End of hotspot-gc-use Digest, Vol 68, Issue 5
>> *********************************************
>> _______________________________________________
>> hotspot-gc-use mailing list
>> hotspot-gc-use at openjdk.java.net
>> http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use
>>
>
>
>
> _______________________________________________
> hotspot-gc-use mailing list
> hotspot-gc-use at openjdk.java.net
> http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20131024/67bce8c3/attachment.html

From ysr1729 at gmail.com Thu Oct 24 13:00:17 2013
From: ysr1729 at gmail.com (Srinivas Ramakrishna)
Date: Thu, 24 Oct 2013 13:00:17 -0700
Subject: ParallelGC issue: collector does only Full GC
In-Reply-To:
References: <46FF8393B58AD84D95E444264805D98FBDDF07EA@edata01.mgm-edv.de> <52696E8F.2070408@oracle.com>
Message-ID:

Just looked at the publicly visible description of 7098155, but it doesn't directly talk about the "freeze" in input telemetry of promotion volume. Just about the need to resize at a full gc.
That latter to me is a somewhat orthogonal (but not unrelated) issue. Maybe the non-public parts of the CR talk about the lack of new telemetry which would correct the promotion estimate/prediction.

-- ramki

On Thu, Oct 24, 2013 at 12:28 PM, Srinivas Ramakrishna wrote:

> Yes, exactly: that's point (3) of my email from 5/9, a pointer to which I
> included earlier in this (slightly bifurcated) thread.
> Thanks for fixing it!
>
> I'd submit that the other two points, especially point (1), should also be
> done.
> -- ramki
>
>
> On Thu, Oct 24, 2013 at 12:01 PM, Jon Masamitsu wrote:
>
>>
>> On 10/24/13 5:00 AM, charlie hunt wrote:
>>
>> I did a little experimenting with this ... I think Jon's hypothesis is
>> right.
>>
>> I first reproduced the behavior as described by Andreas. Then, I set
>> -XX:PromotedPadding=1 in the case where NewSize/MaxNewSize is at 1800m.
>> That eliminated the issue Andreas observed at 1800m. But, as I suspected,
>> the threshold at which the behavior changes merely moved to a larger
>> young gen sizing. It now occurs at about 2300m, up from 1800m.
>>
>> So, it does look like there is an issue with the prediction model since
>> PromotedPadding can influence the prediction model.
>>
>> The prediction model code does not look trivial, as I'm sure Jon knows.
>> ;)
>>
>>
>> Part of the problem is that some of the inputs to the prediction model
>> are not updated at a Full GC. So depending on how much space
>> gets freed up during the Full GC, the prediction model can get
>> frozen into the wrong decision. That's what we're trying to fix
>> with 7098155.
>>
>> Jon
>>
>>
>>
>> hths,
>>
>> charlie ...
>>
>>
>>
>>
>> On Wed, Oct 23, 2013 at 3:03 AM, Andreas Müller <
>> Andreas.Mueller at mgm-tp.com> wrote:
>>
>>> Hi Jon,
>>>
>>> thanks for the hint to Java 8.
>>> I have verified with jdk1.8.0-ea-b112 (from October 17): behavior
>>> remains as described
>>>
>>> Best regards
>>> Andreas
>>>
>>> ----------------------------------------------------------------------
>>>
>>> Date: Mon, 21 Oct 2013 15:29:10 -0700
>>> From: Jon Masamitsu
>>> Subject: Re: ParallelGC issue: collector does only Full GC by default
>>> and above NewSize=1800m
>>> To: hotspot-gc-use at openjdk.java.net
>>> Message-ID: <5265AAB6.1050700 at oracle.com>
>>> Content-Type: text/plain; charset="iso-8859-1"
>>>
>>> Andreas,
>>>
>>> There was a bug fixed in jdk8 that had similar symptoms. If you can try
>>> a jdk8 build that might tell us something.
>>>
>>> If jdk8 doesn't help it's likely that the prediction model thinks that
>>> there is not enough
>>> free space in the old gen to support a young collection. We've been
>>> working on 7098155 to
>>> fix that.
>>>
>>> Jon
>>>
>>>
>>> On 10/21/2013 10:09 AM, Andreas Müller wrote:
>>> > Hi all,
>>> >
>>> > while experimenting a bit with different Garbage Collectors and
>>> > applying them to my homegrown micro benchmarks I stumbled into the
>>> following problem:
>>> > I run the below sample with the following command line (using Java
>>> 1.7.0_40 on Windows and probably others):
>>> > java -Xms6g -Xmx6g -XX:+UseParallelGC -
>>> > de.am.gc.benchmarks.MixedRandomList 100 8 12500000
>>> >
>>> > The default and proven ParallelGC collector does mostly Full GCs and
>>> shows only poor out-of-the-box performance, more than a factor 10 lower
>>> than the ParNew collector.
>>> > More tests adding the -XX:NewSize=<n>m and -XX:MaxNewSize=<n>m flags
>>> reveal that the problem occurs as soon as the NewSize rises beyond 1800m,
>>> which it obviously does by default.
>>> > Below that threshold ParallelGC performance is similar to ParNewGC (in
>>> the range of 7500 MB/s on my i7-2500MHz notebook), but at NewSize=2000m it
>>> is as low as 600 MB/s.
>>> >
>>> > Any ideas why this might happen?
>>> >
>>> > Note that the sample is constructed such that the live heap is always
>>> around 3GB. If any I would expect a problem only at around NewSize=3GB,
>>> when Old Gen shrinks to less than the live heap size. As a matter of fact,
>>> ParNewGC can do >7000 MB/s from NewSize=400m to NewSize=3500m with little
>>> variation around a maximum of 7600 MB/s at NewSize=2000m.
>>> >
>>> > I also provide source, gc.log and a plot of the NewSize dependency to
>>> anyone interested in that problem.
>>> >
>>> > Regards
>>> > Andreas
>>> >
>>> > [MixedRandomList.java source and signature quoted in full earlier in this digest; elided]
>>> >
>>> > _______________________________________________
>>> > hotspot-gc-use mailing list
>>> > hotspot-gc-use at openjdk.java.net
>>> > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use
>>>
>>> -------------- next part --------------
>>> An HTML attachment was scrubbed...
>>> URL:
>>> http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20131021/62d044c7/attachment-0001.html
>>>
>>> ------------------------------
>>>
>>> _______________________________________________
>>> hotspot-gc-use mailing list
>>> hotspot-gc-use at openjdk.java.net
>>> http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use
>>>
>>>
>>> End of hotspot-gc-use Digest, Vol 68, Issue 5
>>> *********************************************
>>> _______________________________________________
>>> hotspot-gc-use mailing list
>>> hotspot-gc-use at openjdk.java.net
>>> http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use
>>>
>>
>>
>>
>> _______________________________________________
>> hotspot-gc-use mailing list
>> hotspot-gc-use at openjdk.java.net
>> http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use
>>
>>
> -------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20131024/177c3faa/attachment-0001.html

From jon.masamitsu at oracle.com Thu Oct 24 15:54:56 2013
From: jon.masamitsu at oracle.com (Jon Masamitsu)
Date: Thu, 24 Oct 2013 15:54:56 -0700
Subject: ParallelGC issue: collector does only Full GC
In-Reply-To:
References: <46FF8393B58AD84D95E444264805D98FBDDF07EA@edata01.mgm-edv.de> <52696E8F.2070408@oracle.com>
Message-ID: <5269A540.1000007@oracle.com>

Ramki,

I think your reading is correct. I thought growing the young gen might get the GC to modify the data going into the prediction model but Tao has convinced me that it most often won't. False hope on my part.

Jon

On 10/24/13 1:00 PM, Srinivas Ramakrishna wrote:
> Just looked at the publicly visible description of 7098155, but it
> doesn't directly talk about the "freeze" in input telemetry
> of promotion volume. Just about the need to resize at a full gc.
> May be the non-public parts of the CR talk about the lack of new > telemetry which would correct the promotion estimate/prediction. > > -- ramki > > > > > On Thu, Oct 24, 2013 at 12:28 PM, Srinivas Ramakrishna > > wrote: > > Yes, exactly: that's point (3) of my email from 5/9, a pointer to > which I included earlier in this (slightly bifuracted) thread. > Thanks for fixing it! > > I'd submit that the other two points, especially point (1), should > also be done. > -- ramki > > > On Thu, Oct 24, 2013 at 12:01 PM, Jon Masamitsu > > wrote: > > > On 10/24/13 5:00 AM, charlie hunt wrote: >> I did a little experimenting with this ... I think Jon's >> hypothesis is right. >> >> I first reproduced the behavior as described by Andreas. >> Then, I set -XX:PromotedPadding=1 in the case where >> NewSize/MaxNewSize is at 1800m. That eliminated the issue >> Andreas observed at 1800m. But, as I suspected the threshold >> at which the change in behavior merely changed at a higher >> sizing of young gen. It now occurs at about 2300m, up from 1800m. >> >> So, it does look like there is an issue with the prediction >> model since PromotedPadding can influence the prediction model. >> >> The prediction model code does not look trivial, as I'm sure >> Jon knows. ;) > > Part of the problem is that some of the inputs to the > prediction model > are not updated at a Full GC. So depending on how much space > gets freed up during the Full GC, the prediction model can get > frozen into the wrong decision. That's what we're trying to fix > with 7098155. > > Jon > > >> >> hths, >> >> charlie ... >> >> >> >> On Wed, Oct 23, 2013 at 3:03 AM, Andreas M?ller >> > > wrote: >> >> Hi Jon, >> >> thanks for the hint to Java 8. 
>> I have verified with jdk1.8.0-ea-b112 (from October 17): >> behavior remains as described >> >> Best regards >> Andreas >> >> ---------------------------------------------------------------------- >> >> Date: Mon, 21 Oct 2013 15:29:10 -0700 >> From: Jon Masamitsu > > >> Subject: Re: ParallelGC issue: collector does only Full >> GC by default >> and above NewSize=1800m >> To: hotspot-gc-use at openjdk.java.net >> >> Message-ID: <5265AAB6.1050700 at oracle.com >> > >> Content-Type: text/plain; charset="iso-8859-1" >> >> Andreas, >> >> There was a bug fixed in jdk8 that had similar symptoms. >> If you can try a jdk8 build that might tell us something. >> >> If jdk8 doesn't help it's likely that the prediction >> model thinks that there is not enough >> free space in the old gen to support a young collection. >> We've been >> working on 7098155 to >> fix that. >> >> Jon >> >> >> On 10/21/2013 10:09 AM, Andreas M?ller wrote: >> > Hi all, >> > >> > while experimenting a bit with different Garbage >> Collectors and >> > applying them to my homegrown micro benchmarks I >> stumbled into the following problem: >> > I run the below sample with the following command line >> (using Java 1.7.0_40 on Windows and probably others): >> > java -Xms6g -Xmx6g -XX:+UseParallelGC - >> > de.am.gc.benchmarks.MixedRandomList 100 8 12500000 >> > >> > The Default and proven ParallelGC collector does mostly >> Full GCs and shows only poor out-of-the-box performance, >> more than a factor 10 lower than the ParNew collector. >> > More tests adding the -XX:NewSize=m and >> -XX:MaxNewSize=m reveal that the problem occurs as >> soon as the NewSize rises beyond 1800m which it obviously >> does by default. >> > Below that threshold ParallelGC performance is similar >> to ParNewGC (in the range of 7500 MB/s on my i7-2500MHz >> notebook), but at NewSize=2000m is as low as 600 MB/s. >> > >> > Any ideas why this might happen? 
>> >
>> > Note that the sample is constructed such that the live
>> heap is always around 3GB. If any I would expect a
>> problem only at around NewSize=3GB, when Old Gen shrinks
>> to less than the live heap size. As a matter of fact,
>> ParNewGC can do >7000 MB/s from NewSize=400m to
>> NewSize=3500m with little variation around a maximum of
>> 7600 MB/s at NewSize=2000m.
>> >
>> > I also provide source, gc.log and a plot of the NewSize
>> dependency to anyone interested in that problem.
>> >
>> > Regards
>> > Andreas
>> >
>> > [MixedRandomList.java source and signature quoted in full earlier in this digest; elided]
>> >
>> > _______________________________________________
>> > hotspot-gc-use mailing list
>> > hotspot-gc-use at openjdk.java.net
>> > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use
>>
>> -------------- next part --------------
>> An HTML attachment was scrubbed...
>> URL: >> http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20131021/62d044c7/attachment-0001.html >> >> ------------------------------ >> >> _______________________________________________ >> hotspot-gc-use mailing list >> hotspot-gc-use at openjdk.java.net >> >> http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use >> >> >> End of hotspot-gc-use Digest, Vol 68, Issue 5 >> ********************************************* >> _______________________________________________ >> hotspot-gc-use mailing list >> hotspot-gc-use at openjdk.java.net >> >> http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use >> >> > > > _______________________________________________ > hotspot-gc-use mailing list > hotspot-gc-use at openjdk.java.net > > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20131024/f832e909/attachment-0001.html From charlesjhunt at gmail.com Thu Oct 24 16:28:47 2013 From: charlesjhunt at gmail.com (charlie hunt) Date: Thu, 24 Oct 2013 18:28:47 -0500 Subject: ParallelGC issue: collector does only Full GC In-Reply-To: <5269A540.1000007@oracle.com> References: <46FF8393B58AD84D95E444264805D98FBDDF07EA@edata01.mgm-edv.de> <52696E8F.2070408@oracle.com> <5269A540.1000007@oracle.com> Message-ID: Here's some additional GC data I just captured with a build of the latest OpenJDK 8 code where I'm running Andreas's example. If you do the math, once it starts doing full GCs, you'll notice there's about 100ms between when the previous full GC ends, and when the next one starts. This is a hypothesis on my part that something in the prediction model is not getting updated upon full GCs. And, I'm also ignoring that the first full GC occurs at ~ 65% of old gen occupancy. So, the initial full GC prediction is off a little too. 
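The ~100 ms figure can be checked directly from the timestamps: each gap is the next Full GC's start time minus the previous one's end time (start plus pause duration). A minimal sketch (the (start, duration) pairs are copied from the Full GC lines in the log that follows; the class name is made up for illustration):

```java
public class FullGcGaps {
    public static void main(String[] args) {
        // (start timestamp, pause duration) in seconds, taken from the
        // "Full GC (Ergonomics)" entries in the gc log below
        double[][] fullGcs = {
            {2.326, 1.3776390}, {3.807, 1.6885150}, {5.599, 1.7805610},
            {7.483, 1.9159070}, {9.503, 1.9360530}, {11.541, 1.9209240},
            {13.564, 1.8002220}, {15.466, 1.9347450}
        };
        for (int i = 1; i < fullGcs.length; i++) {
            // end of the previous Full GC = its start + its pause time
            double prevEnd = fullGcs[i - 1][0] + fullGcs[i - 1][1];
            // mutator time between the previous Full GC ending and the next starting
            double gapMs = (fullGcs[i][0] - prevEnd) * 1000.0;
            System.out.printf("gap before Full GC at %.3fs: %.0f ms%n",
                              fullGcs[i][0], gapMs);
        }
    }
}
```

All seven gaps come out between roughly 102 and 104 ms, consistent with the ~100 ms estimate above.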
java -Xmx6g -Xms6g -XX:+UseParallelGC -XX:+PrintGCTimeStamps -XX:+PrintGCDetails MixedRandomList 100 8 12500000 0.316: [GC (Allocation Failure) [PSYoungGen: 1572864K->262124K(1835008K)] 1572864K->1474356K(6029312K), 0.8066690 secs] [Times: user=1.97 sys=6.04, real=0.80 secs] 1.224: [GC (Allocation Failure) [PSYoungGen: 1834988K->262140K(1835008K)] 3047220K->2905620K(6029312K), 0.9182400 secs] [Times: user=1.83 sys=7.30, real=0.92 secs] 2.243: [GC (Allocation Failure) [PSYoungGen: 1835004K->262140K(1835008K)] 4478484K->2995044K(6029312K), 0.0832930 secs] [Times: user=0.41 sys=0.36, real=0.08 secs] 2.326: [Full GC (Ergonomics) [PSYoungGen: 262140K->0K(1835008K)] [ParOldGen: 2732904K->2978769K(4194304K)] 2995044K->2978769K(6029312K), [Metaspace: 2456K->2456K(1056768K)], 1.3776390 secs] [Times: user=8.27 sys=0.11, real=1.38 secs] 3.807: [Full GC (Ergonomics) [PSYoungGen: 1572864K->0K(1835008K)] [ParOldGen: 2978769K->2978769K(4194304K)] 4551633K->2978769K(6029312K), [Metaspace: 2456K->2456K(1056768K)], 1.6885150 secs] [Times: user=12.18 sys=0.03, real=1.69 secs] 5.599: [Full GC (Ergonomics) [PSYoungGen: 1572864K->0K(1835008K)] [ParOldGen: 2978769K->2978769K(4194304K)] 4551633K->2978769K(6029312K), [Metaspace: 2456K->2456K(1056768K)], 1.7805610 secs] [Times: user=13.42 sys=0.05, real=1.78 secs] 7.483: [Full GC (Ergonomics) [PSYoungGen: 1572864K->0K(1835008K)] [ParOldGen: 2978769K->2978769K(4194304K)] 4551633K->2978769K(6029312K), [Metaspace: 2456K->2456K(1056768K)], 1.9159070 secs] [Times: user=14.67 sys=0.05, real=1.91 secs] 9.503: [Full GC (Ergonomics) [PSYoungGen: 1572864K->0K(1835008K)] [ParOldGen: 2978769K->2978769K(4194304K)] 4551633K->2978769K(6029312K), [Metaspace: 2456K->2456K(1056768K)], 1.9360530 secs] [Times: user=14.99 sys=0.00, real=1.93 secs] 11.541: [Full GC (Ergonomics) [PSYoungGen: 1572864K->0K(1835008K)] [ParOldGen: 2978769K->2978769K(4194304K)] 4551633K->2978769K(6029312K), [Metaspace: 2456K->2456K(1056768K)], 1.9209240 secs] [Times: user=14.76 
sys=0.03, real=1.92 secs] 13.564: [Full GC (Ergonomics) [PSYoungGen: 1572864K->0K(1835008K)] [ParOldGen: 2978769K->2978769K(4194304K)] 4551633K->2978769K(6029312K), [Metaspace: 2456K->2456K(1056768K)], 1.8002220 secs] [Times: user=13.57 sys=0.04, real=1.80 secs] 15.466: [Full GC (Ergonomics) [PSYoungGen: 1572864K->0K(1835008K)] [ParOldGen: 2978769K->2978769K(4194304K)] 4551633K->2978769K(6029312K), [Metaspace: 2456K->2456K(1056768K)], 1.9347450 secs] [Times: user=14.84 sys=0.05, real=1.93 secs] On Thu, Oct 24, 2013 at 5:54 PM, Jon Masamitsu wrote: > Ramki, > > I think your reading is correct. I thought the growing the young gen might > get the GC to modify the data going into the prediction model but Tao > has convinced me that it most often won't. False hope on my part. > > Jon > > > On 10/24/13 1:00 PM, Srinivas Ramakrishna wrote: > > Just looked at the publicly visible description of 7098155, but it > doesn't directly talk about the "freeze" in input telemetry > of promotion volume. Just about the need to resize at a full gc. > That latter to me is a somewhat orthogonal (but not unrelated) issue. > May be the non-public parts of the CR talk about the lack of new > telemetry which would correct the promotion estimate/prediction. > > -- ramki > > > > > On Thu, Oct 24, 2013 at 12:28 PM, Srinivas Ramakrishna wrote: > >> Yes, exactly: that's point (3) of my email from 5/9, a pointer to which >> I included earlier in this (slightly bifuracted) thread. >> Thanks for fixing it! >> >> I'd submit that the other two points, especially point (1), should also >> be done. >> -- ramki >> >> >> On Thu, Oct 24, 2013 at 12:01 PM, Jon Masamitsu > > wrote: >> >>> >>> On 10/24/13 5:00 AM, charlie hunt wrote: >>> >>> I did a little experimenting with this ... I think Jon's hypothesis is >>> right. >>> >>> I first reproduced the behavior as described by Andreas. Then, I set >>> -XX:PromotedPadding=1 in the case where NewSize/MaxNewSize is at 1800m. 
>>> That eliminated the issue Andreas observed at 1800m. But, as I suspected >>> the threshold at which the change in behavior merely changed at a higher >>> sizing of young gen. It now occurs at about 2300m, up from 1800m. >>> >>> So, it does look like there is an issue with the prediction model >>> since PromotedPadding can influence the prediction model. >>> >>> The prediction model code does not look trivial, as I'm sure Jon >>> knows. ;) >>> >>> >>> Part of the problem is that some of the inputs to the prediction model >>> are not updated at a Full GC. So depending on how much space >>> gets freed up during the Full GC, the prediction model can get >>> frozen into the wrong decision. That's what we're trying to fix >>> with 7098155. >>> >>> Jon >>> >>> >>> >>> hths, >>> >>> charlie ... >>> >>> >>> >>> >>> On Wed, Oct 23, 2013 at 3:03 AM, Andreas M?ller < >>> Andreas.Mueller at mgm-tp.com> wrote: >>> >>>> Hi Jon, >>>> >>>> thanks for the hint to Java 8. >>>> I have verified with jdk1.8.0-ea-b112 (from October 17): behavior >>>> remains as described >>>> >>>> Best regards >>>> Andreas >>>> >>>> ---------------------------------------------------------------------- >>>> >>>> Date: Mon, 21 Oct 2013 15:29:10 -0700 >>>> From: Jon Masamitsu >>>> Subject: Re: ParallelGC issue: collector does only Full GC by default >>>> and above NewSize=1800m >>>> To: hotspot-gc-use at openjdk.java.net >>>> Message-ID: <5265AAB6.1050700 at oracle.com> >>>> Content-Type: text/plain; charset="iso-8859-1" >>>> >>>> Andreas, >>>> >>>> There was a bug fixed in jdk8 that had similar symptoms. If you can >>>> try a jdk8 build that might tell us something. >>>> >>>> If jdk8 doesn't help it's likely that the prediction model thinks that >>>> there is not enough >>>> free space in the old gen to support a young collection. We've been >>>> working on 7098155 to >>>> fix that. 
>>>> >>>> Jon >>>> >>>> >>>> On 10/21/2013 10:09 AM, Andreas M?ller wrote: >>>> > Hi all, >>>> > >>>> > while experimenting a bit with different Garbage Collectors and >>>> > applying them to my homegrown micro benchmarks I stumbled into the >>>> following problem: >>>> > I run the below sample with the following command line (using Java >>>> 1.7.0_40 on Windows and probably others): >>>> > java -Xms6g -Xmx6g -XX:+UseParallelGC - >>>> > de.am.gc.benchmarks.MixedRandomList 100 8 12500000 >>>> > >>>> > The Default and proven ParallelGC collector does mostly Full GCs and >>>> shows only poor out-of-the-box performance, more than a factor 10 lower >>>> than the ParNew collector. >>>> > More tests adding the -XX:NewSize=m and -XX:MaxNewSize=m >>>> reveal that the problem occurs as soon as the NewSize rises beyond 1800m >>>> which it obviously does by default. >>>> > Below that threshold ParallelGC performance is similar to ParNewGC >>>> (in the range of 7500 MB/s on my i7-2500MHz notebook), but at NewSize=2000m >>>> is as low as 600 MB/s. >>>> > >>>> > Any ideas why this might happen? >>>> > >>>> > Note that the sample is constructed such that the live heap is always >>>> around 3GB. If any I would expect a problem only at around NewSize=3GB, >>>> when Old Gen shrinks to less than the live heap size. As a matter of fact, >>>> ParNewGC can do >7000 MB/s from NewSize=400m to NewSize=3500m with little >>>> variation around a maximum of 7600 MB/s at NewSize=2000m. >>>> > >>>> > I also provide source, gc.log and a plot of the NewSize dependency to >>>> anyone interested in that problem. 
>>>> >
>>>> > Regards
>>>> > Andreas
>>>> >
>>>> > [MixedRandomList.java source quoted in full earlier in this digest; elided]
>>>> > >>>> > Andreas M?ller >>>> > >>>> > mgm technology partners GmbH >>>> > Frankfurter Ring 105a >>>> > 80807 M?nchen >>>> > Tel. +49 (89) 35 86 80-633 <%2B49%20%2889%29%2035%2086%2080-633> >>>> > Fax +49 (89) 35 86 80-288 <%2B49%20%2889%29%2035%2086%2080-288> >>>> > E-Mail Andreas.Mueller at mgm-tp.com >>>> > Innovation Implemented. >>>> > Sitz der Gesellschaft: M?nchen >>>> > Gesch?ftsf?hrer: Hamarz Mehmanesh >>>> > Handelsregister: AG M?nchen HRB 105068 >>>> > >>>> > >>>> > >>>> > >>>> > _______________________________________________ >>>> > hotspot-gc-use mailing list >>>> > hotspot-gc-use at openjdk.java.net >>>> > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use >>>> >>>> -------------- next part -------------- >>>> An HTML attachment was scrubbed... >>>> URL: >>>> http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20131021/62d044c7/attachment-0001.html >>>> >>>> ------------------------------ >>>> >>>> _______________________________________________ >>>> hotspot-gc-use mailing list >>>> hotspot-gc-use at openjdk.java.net >>>> http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use >>>> >>>> >>>> End of hotspot-gc-use Digest, Vol 68, Issue 5 >>>> ********************************************* >>>> _______________________________________________ >>>> hotspot-gc-use mailing list >>>> hotspot-gc-use at openjdk.java.net >>>> http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use >>>> >>> >>> >>> >>> _______________________________________________ >>> hotspot-gc-use mailing list >>> hotspot-gc-use at openjdk.java.net >>> http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use >>> >>> >> > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20131024/769ca915/attachment-0001.html From tao.mao at oracle.com Thu Oct 24 17:23:28 2013 From: tao.mao at oracle.com (Tao Mao) Date: Thu, 24 Oct 2013 17:23:28 -0700 Subject: ParallelGC issue: collector does only Full GC by default and above NewSize=1800m In-Reply-To: <46FF8393B58AD84D95E444264805D98FBDDF067B@edata01.mgm-edv.de> References: <46FF8393B58AD84D95E444264805D98FBDDF067B@edata01.mgm-edv.de> Message-ID: <5269BA00.6080908@oracle.com> Hi Andreas, What's your exact VM options for ParNewGC? If possible, please attach ParNew gc log. I'd like to investigate and compare the two cases to see GC behavioral differences. Thanks. Tao On 10/21/13 10:09 AM, Andreas M?ller wrote: > > Hi all, > > while experimenting a bit with different Garbage Collectors and > applying them to my homegrown micro benchmarks I stumbled into the > > following problem: > > I run the below sample with the following command line (using Java > 1.7.0_40 on Windows and probably others): > > java -Xms6g -Xmx6g -XX:+UseParallelGC - > de.am.gc.benchmarks.MixedRandomList 100 8 12500000 > > The Default and proven ParallelGC collector does mostly Full GCs and > shows only poor out-of-the-box performance, more than a factor 10 > lower than the ParNew collector. > > More tests adding the --XX:NewSize=m and --XX:MaxNewSize=m > reveal that the problem occurs as soon as the NewSize rises beyond > 1800m which it obviously does by default. > > Below that threshold ParallelGC performance is similar to ParNewGC (in > the range of 7500 MB/s on my i7-2500MHz notebook), but at > NewSize=2000m is as low as 600 MB/s. > > Any ideas why this might happen? > > Note that the sample is constructed such that the live heap is always > around 3GB. If any I would expect a problem only at around > NewSize=3GB, when Old Gen shrinks to less than the live heap size. 
> As a matter of fact, ParNewGC can do >7000 MB/s from NewSize=400m to NewSize=3500m with little variation around a maximum of 7600 MB/s at NewSize=2000m.
>
> I also provide source, gc.log and a plot of the NewSize dependency to anyone interested in that problem.
>
> Regards
> Andreas
>
> -------------------------------------------------------MixedRandomList.java-------------------------------------------------------
>
> package de.am.gc.benchmarks;
>
> import java.util.ArrayList;
> import java.util.List;
>
> /**
>  * GC benchmark producing a mix of lifetime=0 and lifetime>0 objects which are kept in randomly updated lists.
>  *
>  * @author Andreas Mueller
>  */
> public class MixedRandomList {
>     private static final int DEFAULT_NUMBEROFTHREADS=1;
>     // object size in bytes
>     private static final int DEFAULT_OBJECTSIZE=100;
>
>     private static int numberOfThreads=DEFAULT_NUMBEROFTHREADS;
>     private static int objectSize=DEFAULT_OBJECTSIZE;
>     // number of objects to fill half of the available memory with (permanent) live objects
>     private static long numLive = (Runtime.getRuntime().maxMemory()/objectSize/5);
>
>     /**
>      * @param args the command line arguments
>      */
>     public static void main(String[] args) {
>         if( args.length>0 ) {
>             // first, optional argument is the size of the objects
>             objectSize = Integer.parseInt(args[0]);
>             // second, optional argument is the number of threads
>             if( args.length>1 ) {
>                 numberOfThreads = Integer.parseInt(args[1]);
>                 // third, optional argument is the number of live objects
>                 if( args.length>2 ) {
>                     numLive = Long.parseLong(args[2]);
>                 }
>             }
>         }
>         for( int i=0; i<numberOfThreads; i++ ) {
>             // run several GarbageProducer threads, each with its own mix of lifetime=0 and higher lifetime objects
>             new Thread(new GarbageProducer((int)Math.pow(50.0,(double)(i+1)), numLive/numberOfThreads)).start();
>         }
>         try {
>             Thread.sleep(1200000);
>         } catch( InterruptedException iexc ) {
>             iexc.printStackTrace();
>         }
>         System.exit(0);
>     }
>
>     private static char[] getCharArray(int length) {
>         char[] retVal = new char[length];
>         for(int i=0; i<length; i++) {
>             retVal[i] = 'a';
>         }
>         return retVal;
>     }
>
>     public static class GarbageProducer implements Runnable {
>
>         // the fraction of newly created objects that do not become garbage immediately but are stored in the liveList
>         int fractionLive;
>         // the size of the liveList
>         long myNumLive;
>
>         /**
>          * Each GarbageProducer creates objects that become garbage immediately (lifetime=0) and
>          * objects that become garbage only after a lifetime>0 which is distributed about an average lifetime.
>          * This average lifetime is a function of fractionLive and numLive
>          *
>          * @param fractionLive
>          * @param numLive
>          */
>         public GarbageProducer(int fractionLive, long numLive) {
>             this.fractionLive = fractionLive;
>             this.myNumLive = numLive;
>         }
>
>         @Override
>         public void run() {
>             int osize = objectSize;
>             char[] chars = getCharArray(objectSize);
>             List<String> liveList = new ArrayList<String>((int)myNumLive);
>             // initially, the liveList is filled
>             for(int i=0; i<myNumLive; i++) {
>                 liveList.add(new String(chars));
>             }
>             while(true) {
>                 // create the majority of objects as garbage
>                 for(int i=0; i<fractionLive; i++) {
>                     String garbageObject = new String(chars);
>                 }
>                 // keep the fraction of objects live by placing them in the list (at a random index)
>                 int index = (int)(Math.random()*myNumLive);
>                 liveList.set(index, new String(chars));
>             }
>         }
>     }
> }
> -------------------------------------------------------------------------------------------------------------------
>
> Andreas Müller
>
> mgm technology partners GmbH
> Frankfurter Ring 105a
> 80807 München
>
> Tel.
> +49 (89) 35 86 80-633
> Fax +49 (89) 35 86 80-288
> E-Mail Andreas.Mueller at mgm-tp.com
>
> Innovation Implemented.
>
> Sitz der Gesellschaft: München
> Geschäftsführer: Hamarz Mehmanesh
> Handelsregister: AG München HRB 105068
>
> _______________________________________________
> hotspot-gc-use mailing list
> hotspot-gc-use at openjdk.java.net
> http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20131024/999d5996/attachment-0001.html

From memoleaf at gmail.com Thu Oct 24 21:45:56 2013
From: memoleaf at gmail.com (Ji Cheng)
Date: Fri, 25 Oct 2013 12:45:56 +0800
Subject: How to read the output of +PrintTenuringDistribution
Message-ID:

Hello,

I have gc log enabled with -XX:+PrintTenuringDistribution. But I'm quite confused with the tenuring distribution below.

=============
2013-10-19T19:46:30.244+0800: 169797.045: [GC2013-10-19T19:46:30.244+0800: 169797.045: [ParNew
Desired survivor size 87359488 bytes, new threshold 4 (max 4)
- age 1: 10532656 bytes, 10532656 total
- age 2: 14082976 bytes, 24615632 total
- age 3: 15155296 bytes, 39770928 total
- age 4: 13938272 bytes, 53709200 total
: 758515K->76697K(853376K), 0.0748620 secs] 4693076K->4021899K(6120832K), 0.0756370 secs] [Times: user=0.42 sys=0.00, real=0.07 secs]
2013-10-19T19:47:10.909+0800: 169837.710: [GC2013-10-19T19:47:10.909+0800: 169837.711: [ParNew
Desired survivor size 87359488 bytes, new threshold 4 (max 4)
- age 1: 9167144 bytes, 9167144 total
- age 2: 9178824 bytes, 18345968 total
- age 3: 16101552 bytes, 34447520 total
- age 4: 21369776 bytes, 55817296 total
: 759449K->63442K(853376K), 0.0776450 secs] 4704651K->4020310K(6120832K), 0.0783500 secs] [Times: user=0.43 sys=0.00, real=0.07 secs]
=============

From what I read, there are 10532656 bytes in age 1 (survived from 1 GC) in the first GC.
In the second GC, 9178824 bytes are in age 2 (survived from 2 GCs). This is fine since some objects died between the first and second GC.

But in the second GC, 16101552 bytes are in age 3 while only 14082976 bytes were in age 2 in the first GC. I don't know why this number is increasing. Shouldn't all bytes in age n come from age n-1 in the previous GC? Or have I misinterpreted those numbers?

btw, the jvm version is 1.7.0_40.

Thanks.

Ji Cheng
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20131025/827bd930/attachment.html

From tao.mao at oracle.com Thu Oct 24 23:40:40 2013
From: tao.mao at oracle.com (Tao Mao)
Date: Thu, 24 Oct 2013 23:40:40 -0700
Subject: How to read the output of +PrintTenuringDistribution
In-Reply-To:
References:
Message-ID: <526A1268.1040203@oracle.com>

Hi Ji,

From what you've reported, it definitely looks weird. Are these two GCs consecutive (i.e. no other GCs in between)?

Thanks.
Tao

On 10/24/13 9:45 PM, Ji Cheng wrote:
> Hello,
>
> I have gc log enabled with -XX:+PrintTenuringDistribution. But I'm
> quite confused with the tenuring distribution below.
> > ============= > 2013-10-19T19:46:30.244+0800: 169797.045: > [GC2013-10-19T19:46:30.244+0800: 169797.045: [ParNew > Desired survivor size 87359488 bytes, new threshold 4 (max 4) > - age 1: 10532656 bytes, 10532656 total > - age 2: 14082976 bytes, 24615632 total > - age 3: 15155296 bytes, 39770928 total > - age 4: 13938272 bytes, 53709200 total > : 758515K->76697K(853376K), 0.0748620 secs] > 4693076K->4021899K(6120832K), 0.0756370 secs] [Times: user=0.42 > sys=0.00, real=0.07 secs] > 2013-10-19T19:47:10.909+0800: 169837.710: > [GC2013-10-19T19:47:10.909+0800: 169837.711: [ParNew > Desired survivor size 87359488 bytes, new threshold 4 (max 4) > - age 1: 9167144 bytes, 9167144 total > - age 2: 9178824 bytes, 18345968 total > - age 3: 16101552 bytes, 34447520 total > - age 4: 21369776 bytes, 55817296 total > : 759449K->63442K(853376K), 0.0776450 secs] > 4704651K->4020310K(6120832K), 0.0783500 secs] [Times: user=0.43 > sys=0.00, real=0.07 secs] > ============= > > From What I read, there are 10532656 bytes in age 1 (survived from 1 > GC) in the first gc. In the second gc, 9178824 bytes in age 2 > (survived from 2 GCs). This is fine since some objects died between > the first and second GC. > > But in the second GC, 16101552 bytes are in age 3 while only 14082976 > bytes in age 2 in the first GC. I don't why this number is increasing. > Shouldn't all bytes in age n come from age n-1 in the previous GC? Or > I misinterpreted those numbers? > > btw, the jvm version is 1.7.0_40. > > Thanks. > > Ji Cheng > > > _______________________________________________ > hotspot-gc-use mailing list > hotspot-gc-use at openjdk.java.net > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use -------------- next part -------------- An HTML attachment was scrubbed... 
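Ji's arithmetic over the two distributions quoted above can be made mechanical. The following standalone sketch (hypothetical helper, not part of the original thread) copies the per-age byte counts out of the two PrintTenuringDistribution samples and reports how each age in the second GC compares with its source age (n-1) in the first GC; a positive delta is exactly the anomaly being discussed, since in a normal promotion flow age n can only be fed from age n-1 of the previous collection.

```java
// Hypothetical checker for the two PrintTenuringDistribution samples
// quoted above: bytes in age n at GC k should only come from age n-1
// at GC k-1, so delta(i) below should never be positive.
public class TenuringCheck {
    // bytes per age, index 0 = age 1, copied from the quoted log
    static final long[] GC1 = { 10532656L, 14082976L, 15155296L, 13938272L };
    static final long[] GC2 = {  9167144L,  9178824L, 16101552L, 21369776L };

    /** bytes that age (i+2) in GC2 gained relative to age (i+1) in GC1 */
    static long delta(int i) {
        return GC2[i + 1] - GC1[i];
    }

    public static void main(String[] args) {
        for (int i = 0; i < GC1.length - 1; i++) {
            // a positive value means the age cohort "grew", which is the anomaly
            System.out.println("age " + (i + 2) + " delta: " + delta(i) + " bytes");
        }
    }
}
```

Running it shows that not only age 3 (+2018576 bytes) but also age 4 (+6214480 bytes) grew between the two collections, so the anomaly is not limited to the single line Ji pointed out.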
URL: http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20131024/33cc2890/attachment.html

From ysr1729 at gmail.com Fri Oct 25 00:27:35 2013
From: ysr1729 at gmail.com (Srinivas Ramakrishna)
Date: Fri, 25 Oct 2013 00:27:35 -0700
Subject: How to read the output of +PrintTenuringDistribution
In-Reply-To: <526A1268.1040203@oracle.com>
References: <526A1268.1040203@oracle.com>
Message-ID:

Hi Ji --

Are you using ParNew by itself without CMS in the old gen, or are you using CMS in the old gen? If the former, I have a possible explanation (although you will need to evaluate the statistical probability of an event based on the configuration and object demographics to determine if it's plausible in your case).

If, however, you are using CMS in the old gen, then I don't have an explanation.

So, what is your config? :-)

-- ramki

On Thu, Oct 24, 2013 at 11:40 PM, Tao Mao wrote:
> Hi Ji,
>
> From what you've reported, it definitely looks weird. Are these two GC's
> consecutive two GC's (i.e. no other GC/s in between)?
>
> Thanks.
> Tao
>
> On 10/24/13 9:45 PM, Ji Cheng wrote:
> Hello,
>
> I have gc log enabled with -XX:+PrintTenuringDistribution. But I'm quite
> confused with the tenuring distribution below.
> > ============= > 2013-10-19T19:46:30.244+0800: 169797.045: > [GC2013-10-19T19:46:30.244+0800: 169797.045: [ParNew > Desired survivor size 87359488 bytes, new threshold 4 (max 4) > - age 1: 10532656 bytes, 10532656 total > - age 2: 14082976 bytes, 24615632 total > - age 3: 15155296 bytes, 39770928 total > - age 4: 13938272 bytes, 53709200 total > : 758515K->76697K(853376K), 0.0748620 secs] 4693076K->4021899K(6120832K), > 0.0756370 secs] [Times: user=0.42 sys=0.00, real=0.07 secs] > 2013-10-19T19:47:10.909+0800: 169837.710: [GC2013-10-19T19:47:10.909+0800: > 169837.711: [ParNew > Desired survivor size 87359488 bytes, new threshold 4 (max 4) > - age 1: 9167144 bytes, 9167144 total > - age 2: 9178824 bytes, 18345968 total > - age 3: 16101552 bytes, 34447520 total > - age 4: 21369776 bytes, 55817296 total > : 759449K->63442K(853376K), 0.0776450 secs] 4704651K->4020310K(6120832K), > 0.0783500 secs] [Times: user=0.43 sys=0.00, real=0.07 secs] > ============= > > From What I read, there are 10532656 bytes in age 1 (survived from 1 GC) > in the first gc. In the second gc, 9178824 bytes in age 2 (survived from 2 > GCs). This is fine since some objects died between the first and second GC. > > But in the second GC, 16101552 bytes are in age 3 while only 14082976 > bytes in age 2 in the first GC. I don't why this number is increasing. > Shouldn't all bytes in age n come from age n-1 in the previous GC? Or I > misinterpreted those numbers? > > btw, the jvm version is 1.7.0_40. > > Thanks. > > Ji Cheng > > > _______________________________________________ > hotspot-gc-use mailing listhotspot-gc-use at openjdk.java.nethttp://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use > > > _______________________________________________ > hotspot-gc-use mailing list > hotspot-gc-use at openjdk.java.net > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20131025/8c7e0dbd/attachment.html From ysr1729 at gmail.com Fri Oct 25 00:46:28 2013 From: ysr1729 at gmail.com (Srinivas Ramakrishna) Date: Fri, 25 Oct 2013 00:46:28 -0700 Subject: How to read the output of +PrintTenuringDistribution In-Reply-To: References: <526A1268.1040203@oracle.com> Message-ID: Never mind my question, born of a bit of confusion on my part. I think my explanation works in both cases, although the probability of the event is increased if the old gen collector is not CMS. Here's the issue: In the code for copying a target object into the survivor space or into old gen, several threads may race to claim an object. In the case where the object's age is under the tenuring threshold, or if the older generation is not CMS, we will first copy the object then claim the object by swapping in the forwarding pointer to the copy. The other copies are discarded and the winning thread continues. The problem is that the age table is incremented by all of the threads racing to do the copying. The fix is that only the winner of the race should increment the age table to avoid multiple increments. That should fix the problem you are seeing. The problem could be more acute in certain kinds of object graph structures, and also when the old generation is not CMS then the possibility of such races is slightly increased because it's present also when copying into the old generation. (I can't recall why we don't always first claim the object and then do the copying and then update the forwarding pointer, as is done when the target space is the CMS space. I'll let others reconstruct that reason, if that reason, probably a performance reason, is still relevant today....) -- ramki On Fri, Oct 25, 2013 at 12:27 AM, Srinivas Ramakrishna wrote: > Hi Ji -- > > Are you using ParNew by itself without CMS in the old gen, or are you > using CMS in the old gen. 
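Ramki's race can be modeled in isolation. The toy program below is an illustration of the mechanism he describes, not HotSpot code: several worker threads speculatively "copy" each object they find unforwarded, then race on a CAS to install the forwarding pointer. A counter bumped by every copier over-counts in exactly the way the age table does, while a counter bumped only by the CAS winner stays exact.

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicReferenceArray;

// Toy model of the age-table race described above (not HotSpot code):
// each worker copies an object it finds unforwarded, then races to
// install the forwarding pointer; the "racy" counter is bumped by every
// copier, the "exact" counter only by the CAS winner.
public class AgeTableRace {
    static final int OBJECTS = 10_000, WORKERS = 4;
    static int exactCount, racyCount;   // results published by main()

    public static void main(String[] args) {
        AtomicInteger exact = new AtomicInteger();
        AtomicInteger racy = new AtomicInteger();
        AtomicReferenceArray<Object> fwd = new AtomicReferenceArray<>(OBJECTS);

        Thread[] workers = new Thread[WORKERS];
        for (int w = 0; w < WORKERS; w++) {
            workers[w] = new Thread(() -> {
                for (int i = 0; i < OBJECTS; i++) {
                    if (fwd.get(i) != null) continue;      // already forwarded
                    Object copy = new Object();            // speculative copy
                    racy.incrementAndGet();                // buggy: every copier counts
                    if (fwd.compareAndSet(i, null, copy)) {
                        exact.incrementAndGet();           // fixed: winner only
                    }                                      // losers discard their copy
                }
            });
            workers[w].start();
        }
        for (Thread t : workers) {
            try { t.join(); } catch (InterruptedException e) { /* ignore */ }
        }
        exactCount = exact.get();
        racyCount = racy.get();
        System.out.println("exact=" + exactCount + " racy=" + racyCount);
    }
}
```

Each slot's CAS succeeds exactly once, so the winner-only counter always equals OBJECTS; the racy counter exceeds it whenever two workers copied the same object between the null check and the CAS, which is precisely the over-count the proposed fix eliminates.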
> If the former, I have a possible explanation (although you will need to > evaluate the statistical probability > of an event based on the configuration and object demographics to > determine if it's plausible in your > case). > > If, however, you are using CMS in the old gen, then I don't have an > explanation. > > So, what is your config? :-) > > -- ramki > > > > On Thu, Oct 24, 2013 at 11:40 PM, Tao Mao wrote: > >> Hi Ji, >> >> From what you've reported, it definitely looks weird. Are these two GC's >> consecutive two GC's (i.e. no other GC/s in between)? >> >> Thanks. >> Tao >> >> >> On 10/24/13 9:45 PM, Ji Cheng wrote: >> >> Hello, >> >> I have gc log enabled with -XX:+PrintTenuringDistribution. But I'm >> quite confused with the tenuring distribution below. >> >> ============= >> 2013-10-19T19:46:30.244+0800: 169797.045: >> [GC2013-10-19T19:46:30.244+0800: 169797.045: [ParNew >> Desired survivor size 87359488 bytes, new threshold 4 (max 4) >> - age 1: 10532656 bytes, 10532656 total >> - age 2: 14082976 bytes, 24615632 total >> - age 3: 15155296 bytes, 39770928 total >> - age 4: 13938272 bytes, 53709200 total >> : 758515K->76697K(853376K), 0.0748620 secs] 4693076K->4021899K(6120832K), >> 0.0756370 secs] [Times: user=0.42 sys=0.00, real=0.07 secs] >> 2013-10-19T19:47:10.909+0800: 169837.710: >> [GC2013-10-19T19:47:10.909+0800: 169837.711: [ParNew >> Desired survivor size 87359488 bytes, new threshold 4 (max 4) >> - age 1: 9167144 bytes, 9167144 total >> - age 2: 9178824 bytes, 18345968 total >> - age 3: 16101552 bytes, 34447520 total >> - age 4: 21369776 bytes, 55817296 total >> : 759449K->63442K(853376K), 0.0776450 secs] 4704651K->4020310K(6120832K), >> 0.0783500 secs] [Times: user=0.43 sys=0.00, real=0.07 secs] >> ============= >> >> From What I read, there are 10532656 bytes in age 1 (survived from 1 >> GC) in the first gc. In the second gc, 9178824 bytes in age 2 (survived >> from 2 GCs). 
>> This is fine since some objects died between the first and second GC.
>>
>> But in the second GC, 16101552 bytes are in age 3 while only 14082976
>> bytes in age 2 in the first GC. I don't why this number is increasing.
>> Shouldn't all bytes in age n come from age n-1 in the previous GC? Or I
>> misinterpreted those numbers?
>>
>> btw, the jvm version is 1.7.0_40.
>>
>> Thanks.
>>
>> Ji Cheng
>>
>> _______________________________________________
>> hotspot-gc-use mailing list
>> hotspot-gc-use at openjdk.java.net
>> http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20131025/8eaf140c/attachment-0001.html

From ysr1729 at gmail.com Fri Oct 25 00:50:26 2013
From: ysr1729 at gmail.com (Srinivas Ramakrishna)
Date: Fri, 25 Oct 2013 00:50:26 -0700
Subject: How to read the output of +PrintTenuringDistribution
In-Reply-To:
References: <526A1268.1040203@oracle.com>
Message-ID:

Sorry, time for bed... I am not thinking straight... Scratch the part about the old generation being CMS reducing the incidence of the problem. The age table is only incremented when the target goes to a survivor space, so the probability of the race is independent of whether the old generation is CMS or not.

I think the fix outlined above should take care of the bad accounting. It would also keep the adaptive tenuring algorithm (in extreme cases) from being too pessimistic and using a lower tenuring threshold than the more accurate accounting would have yielded.

-- ramki

On Fri, Oct 25, 2013 at 12:46 AM, Srinivas Ramakrishna wrote:
> Never mind my question, born of a bit of confusion on my part.
> > I think my explanation works in both cases, although the probability of > the event is increased if the old gen collector is not > CMS. > > Here's the issue: In the code for copying a target object into the > survivor space or into old gen, several threads may race to > claim an object. In the case where the object's age is under the tenuring > threshold, or if the older generation is not CMS, > we will first copy the object then claim the object by swapping in the > forwarding pointer to the copy. The other copies > are discarded and the winning thread continues. The problem is that the > age table is incremented by all of the threads > racing to do the copying. The fix is that only the winner of the race > should increment the age table to avoid multiple increments. > > That should fix the problem you are seeing. The problem could be more > acute in certain kinds of object graph structures, and also when the old > generation is not CMS then the possibility of such races is slightly > increased because it's present also when > copying into the old generation. > > (I can't recall why we don't always first claim the object and then do the > copying and then update the forwarding pointer, as is > done when the target space is the CMS space. I'll let others reconstruct > that reason, if that reason, probably a performance reason, is still > relevant today....) > > -- ramki > > > > On Fri, Oct 25, 2013 at 12:27 AM, Srinivas Ramakrishna wrote: > >> Hi Ji -- >> >> Are you using ParNew by itself without CMS in the old gen, or are you >> using CMS in the old gen. >> If the former, I have a possible explanation (although you will need to >> evaluate the statistical probability >> of an event based on the configuration and object demographics to >> determine if it's plausible in your >> case). >> >> If, however, you are using CMS in the old gen, then I don't have an >> explanation. >> >> So, what is your config? 
:-) >> >> -- ramki >> >> >> >> On Thu, Oct 24, 2013 at 11:40 PM, Tao Mao wrote: >> >>> Hi Ji, >>> >>> From what you've reported, it definitely looks weird. Are these two GC's >>> consecutive two GC's (i.e. no other GC/s in between)? >>> >>> Thanks. >>> Tao >>> >>> >>> On 10/24/13 9:45 PM, Ji Cheng wrote: >>> >>> Hello, >>> >>> I have gc log enabled with -XX:+PrintTenuringDistribution. But I'm >>> quite confused with the tenuring distribution below. >>> >>> ============= >>> 2013-10-19T19:46:30.244+0800: 169797.045: >>> [GC2013-10-19T19:46:30.244+0800: 169797.045: [ParNew >>> Desired survivor size 87359488 bytes, new threshold 4 (max 4) >>> - age 1: 10532656 bytes, 10532656 total >>> - age 2: 14082976 bytes, 24615632 total >>> - age 3: 15155296 bytes, 39770928 total >>> - age 4: 13938272 bytes, 53709200 total >>> : 758515K->76697K(853376K), 0.0748620 secs] >>> 4693076K->4021899K(6120832K), 0.0756370 secs] [Times: user=0.42 sys=0.00, >>> real=0.07 secs] >>> 2013-10-19T19:47:10.909+0800: 169837.710: >>> [GC2013-10-19T19:47:10.909+0800: 169837.711: [ParNew >>> Desired survivor size 87359488 bytes, new threshold 4 (max 4) >>> - age 1: 9167144 bytes, 9167144 total >>> - age 2: 9178824 bytes, 18345968 total >>> - age 3: 16101552 bytes, 34447520 total >>> - age 4: 21369776 bytes, 55817296 total >>> : 759449K->63442K(853376K), 0.0776450 secs] >>> 4704651K->4020310K(6120832K), 0.0783500 secs] [Times: user=0.43 sys=0.00, >>> real=0.07 secs] >>> ============= >>> >>> From What I read, there are 10532656 bytes in age 1 (survived from 1 >>> GC) in the first gc. In the second gc, 9178824 bytes in age 2 (survived >>> from 2 GCs). This is fine since some objects died between the first and >>> second GC. >>> >>> But in the second GC, 16101552 bytes are in age 3 while only 14082976 >>> bytes in age 2 in the first GC. I don't why this number is increasing. >>> Shouldn't all bytes in age n come from age n-1 in the previous GC? Or I >>> misinterpreted those numbers? 
>>> btw, the jvm version is 1.7.0_40.
>>>
>>> Thanks.
>>>
>>> Ji Cheng
>>>
>>> _______________________________________________
>>> hotspot-gc-use mailing list
>>> hotspot-gc-use at openjdk.java.net
>>> http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20131025/95f38f90/attachment.html

From Andreas.Mueller at mgm-tp.com Fri Oct 25 01:29:15 2013
From: Andreas.Mueller at mgm-tp.com (Andreas Müller)
Date: Fri, 25 Oct 2013 08:29:15 +0000
Subject: AW: ParallelGC issue: collector does only Full GC by default and above NewSize=1800m
In-Reply-To: <5269BA00.6080908@oracle.com>
References: <46FF8393B58AD84D95E444264805D98FBDDF067B@edata01.mgm-edv.de> <5269BA00.6080908@oracle.com>
Message-ID: <46FF8393B58AD84D95E444264805D98FBDDF121A@edata01.mgm-edv.de>

Hi Tao,

start the sample with ParNewGC and the same additional parameters as for ParallelGC:

java -Xms6g -Xmx6g -XX:NewSize=%1m -XX:MaxNewSize=%1m -XX:+UseParNewGC -Xloggc:gc_parnew_6g_New%1m.log -XX:+PrintGCDetails -cp GCBench.jar de.am.gc.benchmarks.MixedRandomList 100 8 12500000

(so %1 is the script parameter controlling the new size)

I have attached the (shortened) gc.log for NewSize=MaxNewSize=2500m. Note that the absolute throughput with ParNewGC is in fact a bit (5%) lower than shown in my previous graph. Your question made me check the start script, and I found that by mistake the ParNew script ran the sample with 20% less live heap than the other collectors (the last parameter was 10000000 instead of 12500000). This made ParNew look even better than it actually is.
Anyway, as you can see from the sample log, it does not have ParallelGC's problem above NewSize=1800m.

Regards
Andreas

From: Tao Mao [mailto:tao.mao at oracle.com]
Sent: Friday, 25 October 2013 02:23
To: Andreas Müller
Cc: 'hotspot-gc-use at openjdk.java.net' (hotspot-gc-use at openjdk.java.net)
Subject: Re: ParallelGC issue: collector does only Full GC by default and above NewSize=1800m

Hi Andreas,

What are your exact VM options for ParNewGC? If possible, please attach the ParNew gc log. I'd like to investigate and compare the two cases to see GC behavioral differences.

Thanks.
Tao

On 10/21/13 10:09 AM, Andreas Müller wrote:

Hi all,

while experimenting a bit with different Garbage Collectors and applying them to my homegrown micro benchmarks I stumbled into the following problem:

I run the below sample with the following command line (using Java 1.7.0_40 on Windows and probably others):

java -Xms6g -Xmx6g -XX:+UseParallelGC de.am.gc.benchmarks.MixedRandomList 100 8 12500000

The default and proven ParallelGC collector does mostly Full GCs and shows only poor out-of-the-box performance, more than a factor of 10 lower than the ParNew collector.

More tests adding the -XX:NewSize=<n>m and -XX:MaxNewSize=<n>m flags reveal that the problem occurs as soon as the NewSize rises beyond 1800m, which it obviously does by default.

Below that threshold ParallelGC performance is similar to ParNewGC (in the range of 7500 MB/s on my i7-2500MHz notebook), but at NewSize=2000m it is as low as 600 MB/s.

Any ideas why this might happen?

Note that the sample is constructed such that the live heap is always around 3GB. If anything, I would expect a problem only at around NewSize=3GB, when Old Gen shrinks to less than the live heap size. As a matter of fact, ParNewGC can do >7000 MB/s from NewSize=400m to NewSize=3500m with little variation around a maximum of 7600 MB/s at NewSize=2000m.

I also provide source, gc.log and a plot of the NewSize dependency to anyone interested in that problem.
Regards

Andreas

-------------------------------------------------------MixedRandomList.java-------------------------------------------------------

package de.am.gc.benchmarks;

import java.util.ArrayList;
import java.util.List;

/**
 * GC benchmark producing a mix of lifetime=0 and lifetime>0 objects which are kept in randomly updated lists.
 *
 * @author Andreas Mueller
 */
public class MixedRandomList {
    private static final int DEFAULT_NUMBEROFTHREADS=1;
    // object size in bytes
    private static final int DEFAULT_OBJECTSIZE=100;

    private static int numberOfThreads=DEFAULT_NUMBEROFTHREADS;
    private static int objectSize=DEFAULT_OBJECTSIZE;
    // number of objects to fill half of the available memory with (permanent) live objects
    private static long numLive = (Runtime.getRuntime().maxMemory()/objectSize/5);

    /**
     * @param args the command line arguments
     */
    public static void main(String[] args) {
        if( args.length>0 ) {
            // first, optional argument is the size of the objects
            objectSize = Integer.parseInt(args[0]);
            // second, optional argument is the number of threads
            if( args.length>1 ) {
                numberOfThreads = Integer.parseInt(args[1]);
                // third, optional argument is the number of live objects
                if( args.length>2 ) {
                    numLive = Long.parseLong(args[2]);
                }
            }
        }
        for( int i=0; i<numberOfThreads; i++ ) {
            // run several GarbageProducer threads, each with its own mix of lifetime=0 and higher lifetime objects
            new Thread(new GarbageProducer((int)Math.pow(50.0,(double)(i+1)), numLive/numberOfThreads)).start();
        }
        try {
            Thread.sleep(1200000);
        } catch( InterruptedException iexc ) {
            iexc.printStackTrace();
        }
        System.exit(0);
    }

    private static char[] getCharArray(int length) {
        char[] retVal = new char[length];
        for(int i=0; i<length; i++) {
            retVal[i] = 'a';
        }
        return retVal;
    }

    public static class GarbageProducer implements Runnable {

        // the fraction of newly created objects that do not become garbage immediately but are stored in the liveList
        int fractionLive;
        // the size of the liveList
        long myNumLive;

        /**
         * Each GarbageProducer creates objects that become garbage immediately (lifetime=0) and
         * objects that become garbage only after a lifetime>0 which is distributed about an average lifetime.
         * This average lifetime is a function of fractionLive and numLive
         *
         * @param fractionLive
         * @param numLive
         */
        public GarbageProducer(int fractionLive, long numLive) {
            this.fractionLive = fractionLive;
            this.myNumLive = numLive;
        }

        @Override
        public void run() {
            int osize = objectSize;
            char[] chars = getCharArray(objectSize);
            List<String> liveList = new ArrayList<String>((int)myNumLive);
            // initially, the liveList is filled
            for(int i=0; i<myNumLive; i++) {
                liveList.add(new String(chars));
            }
            while(true) {
                // create the majority of objects as garbage
                for(int i=0; i<fractionLive; i++) {
                    String garbageObject = new String(chars);
                }
                // keep the fraction of objects live by placing them in the list (at a random index)
                int index = (int)(Math.random()*myNumLive);
                liveList.set(index, new String(chars));
            }
        }
    }
}
-------------------------------------------------------------------------------------------------------------------

Andreas Müller

mgm technology partners GmbH
Frankfurter Ring 105a
80807 München

Tel. +49 (89) 35 86 80-633
Fax +49 (89) 35 86 80-288
E-Mail Andreas.Mueller at mgm-tp.com

Innovation Implemented.
Registered office: München. Managing director: Hamarz Mehmanesh. Commercial register: AG München HRB 105068. _______________________________________________ hotspot-gc-use mailing list hotspot-gc-use at openjdk.java.net http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20131025/124f53d8/attachment-0001.html -------------- next part -------------- A non-text attachment was scrubbed... Name: gc_parnew_6g_New2500m.7z Type: application/octet-stream Size: 26899 bytes Desc: gc_parnew_6g_New2500m.7z Url : http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20131025/124f53d8/gc_parnew_6g_New2500m-0001.7z From memoleaf at gmail.com Fri Oct 25 02:43:27 2013 From: memoleaf at gmail.com (Ji Cheng) Date: Fri, 25 Oct 2013 17:43:27 +0800 Subject: How to read the output of +PrintTenuringDistribution In-Reply-To: References: <526A1268.1040203@oracle.com> Message-ID: Thanks Tao and Srinivas for the responses. Yes, it's from two consecutive GCs. If this is a bug in the jvm itself, here are some more details if that helps. The log is from Apache Cassandra, a distributed database, running on an 8-core machine. The jvm options are shown below: ================ root 10267 31.7 58.9 139587620 9683720 ? 
SLl Oct17 3562:09 /usr/lib/jvm/java-7-oracle/bin/java -ea -javaagent:bin/../lib/jamm-0.2.5.jar -XX:+UseThreadPriorities -XX:ThreadPriorityPolicy=42 -Xms6G -Xmx6G -Xmn1000M -XX:+HeapDumpOnOutOfMemoryError -Xss256k -verbose:gc *-XX:+UseParNewGC -XX:+UseConcMarkSweepGC* -XX:+CMSParallelRemarkEnabled -XX:SurvivorRatio=4 -XX:MaxTenuringThreshold=4 -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseTLAB -XX:+UseCondCardMark -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintTenuringDistribution -XX:+PrintPromotionFailure -Xloggc:/var/log/cassandra/gc.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=10M -Djava.net.preferIPv4Stack=true -Djava.rmi.server.hostname=cassandra-1.lan -Dcom.sun.management.jmxremote.port=7199 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Dlog4j.configuration=log4j-server.properties -Dlog4j.defaultInitOverride=true -cp bin/../conf:bin/../build/classes/main:bin/../build/classes/thrift:bin/../lib/antlr-3.2.jar:bin/../lib/apache-cassandra-1.2.10.jar:bin/../lib/apache-cassandra-clientutil-1.2.10.jar:bin/../lib/apache-cassandra-thrift-1.2.10.jar:bin/../lib/avro-1.4.0-fixes.jar:bin/../lib/avro-1.4.0-sources-fixes.jar:bin/../lib/commons-cli-1.1.jar:bin/../lib/commons-codec-1.2.jar:bin/../lib/commons-lang-2.6.jar:bin/../lib/compress-lzf-0.8.4.jar:bin/../lib/concurrentlinkedhashmap-lru-1.3.jar:bin/../lib/guava-13.0.1.jar:bin/../lib/high-scale-lib-1.1.2.jar:bin/../lib/jackson-core-asl-1.9.2.jar:bin/../lib/jackson-mapper-asl-1.9.2.jar:bin/../lib/jamm-0.2.5.jar:bin/../lib/jbcrypt-0.3m.jar:bin/../lib/jline-1.0.jar:bin/../lib/jna-platform.jar:bin/../lib/jna.jar:bin/../lib/json-simple-1.1.jar:bin/../lib/libthrift-0.7.0.jar:bin/../lib/log4j-1.2.16.jar:bin/../lib/lz4-1.1.0.jar:bin/../lib/metrics-core-2.2.0.jar:bin/../lib/netty-3.6.6.Final.jar:bin/../lib/servlet-api-2.5-20081211.jar:bin/../lib/slf4j-api-1.7.2.jar:bin/../lib/slf4j-log4j12-1.7.2.jar:bin/../lib/snakeya
ml-1.6.jar:bin/../lib/snappy-java-1.0.5.jar:bin/../lib/snaptree-0.1.jar org.apache.cassandra.service.CassandraDaemon ================ Actually this problem happens quite often. All the GCs below are consecutive. I just copied the output of tail -f here. 2013-10-25T16:42:08.768+0800: 677135.570: [GC2013-10-25T16:42:08.769+0800: 677135.570: [ParNew Desired survivor size 87359488 bytes, new threshold 4 (max 4) - age 1: 16930360 bytes, 16930360 total *- age 2: 10648456 bytes, * 27578816 total - age 3: 13359440 bytes, 40938256 total - age 4: 10518080 bytes, 51456336 total : 737954K->56968K(853376K), 0.1159900 secs] 4369426K->3692926K(6120832K), 0.1168410 secs] [Times: user=0.70 sys=0.00, real=0.12 secs] 2013-10-25T16:43:10.423+0800: 677197.225: [GC2013-10-25T16:43:10.424+0800: 677197.225: [ParNew Desired survivor size 87359488 bytes, new threshold 4 (max 4) - age 1: 12756896 bytes, 12756896 total - age 2: 10679104 bytes, 23436000 total *- age 3: 11557408 bytes, * 34993408 total - age 4: 12170432 bytes, 47163840 total : 739720K->53771K(853376K), 0.1204730 secs] 4375678K->3697974K(6120832K), 0.1213100 secs] [Times: user=0.72 sys=0.00, real=0.12 secs] 2013-10-25T16:43:15.412+0800: 677202.213: [GC2013-10-25T16:43:15.412+0800: 677202.214: [ParNew Desired survivor size 87359488 bytes, new threshold 4 (max 4) - age 1: 2347520 bytes, 2347520 total - age 2: 10569072 bytes, 12916592 total *- age 3: 10655296 bytes, * 23571888 total - age 4: 10500536 bytes, 34072424 total : 736523K->41743K(853376K), 0.1057610 secs] 4380726K->3694848K(6120832K), 0.1064420 secs] [Times: user=0.72 sys=0.00, real=0.11 secs] 2013-10-25T16:43:20.705+0800: 677207.507: [GC2013-10-25T16:43:20.706+0800: 677207.507: [ParNew Desired survivor size 87359488 bytes, new threshold 4 (max 4) - age 1: 2224608 bytes, 2224608 total - age 2: 1864384 bytes, 4088992 total - age 3: 11297576 bytes, 15386568 total *- age 4: 16933488 bytes, *32320056 total : 724495K->40915K(853376K), 0.1025910 secs] 
4377600K->3702271K(6120832K), 0.1033820 secs] [Times: user=0.78 sys=0.00, real=0.10 secs] 2013-10-25T16:44:55.234+0800: 677302.036: [GC2013-10-25T16:44:55.235+0800: 677302.036: [ParNew Desired survivor size 87359488 bytes, new threshold 4 (max 4) - age 1: 23557744 bytes, 23557744 total - age 2: 576560 bytes, 24134304 total - age 3: 694352 bytes, 24828656 total - age 4: 11104504 bytes, 35933160 total : 723667K->41493K(853376K), 0.1091820 secs] 4385023K->3712159K(6120832K), 0.1100190 secs] [Times: user=0.79 sys=0.00, real=0.11 secs] 2013-10-25T16:46:24.655+0800: 677391.456: [GC2013-10-25T16:46:24.655+0800: 677391.457: [ParNew Desired survivor size 87359488 bytes, new threshold 4 (max 4) - age 1: 16052176 bytes, 16052176 total *- age 2: 13515752 bytes, * 29567928 total - age 3: 568304 bytes, 30136232 total - age 4: 685288 bytes, 30821520 total : 724245K->35895K(853376K), 0.1193780 secs] 4394911K->3715432K(6120832K), 0.1201740 secs] [Times: user=0.71 sys=0.00, real=0.12 secs] 2013-10-25T16:48:08.928+0800: 677495.730: [GC2013-10-25T16:48:08.929+0800: 677495.730: [ParNew Desired survivor size 87359488 bytes, new threshold 4 (max 4) - age 1: 11795376 bytes, 11795376 total - age 2: 16954104 bytes, 28749480 total *- age 3: 16548488 bytes, *45297968 total - age 4: 564904 bytes, 45862872 total : 718647K->50696K(853376K), 0.1075270 secs] 4398184K->3730910K(6120832K), 0.1083250 secs] [Times: user=0.71 sys=0.00, real=0.11 secs] 2013-10-25T16:49:49.994+0800: 677596.795: [GC2013-10-25T16:49:49.994+0800: 677596.795: [ParNew Desired survivor size 87359488 bytes, new threshold 4 (max 4) - age 1: 10775376 bytes, 10775376 total *- age 2: 8955800 bytes, *19731176 total - age 3: 11600936 bytes, 31332112 total - age 4: 15432856 bytes, 46764968 total : 733448K->52432K(853376K), 0.1129030 secs] 4413662K->3733199K(6120832K), 0.1136890 secs] [Times: user=0.68 sys=0.00, real=0.11 secs] 2013-10-25T16:51:30.850+0800: 677697.651: [GC2013-10-25T16:51:30.851+0800: 677697.652: [ParNew Desired 
survivor size 87359488 bytes, new threshold 4 (max 4) - age 1: 13071608 bytes, 13071608 total - age 2: 7039704 bytes, 20111312 total *- age 3: 9879160 bytes, * 29990472 total - age 4: 10465456 bytes, 40455928 total : 735184K->52264K(853376K), 0.1201120 secs] 4415951K->3746066K(6120832K), 0.1210270 secs] [Times: user=0.70 sys=0.00, real=0.12 secs] Thanks, Ji Cheng On Fri, Oct 25, 2013 at 3:50 PM, Srinivas Ramakrishna wrote: > > Sorry, time for bed... I am not thinking straight... > > Scratch the part about the old generation being CMS reducing the incidence > of the problem. > The age table is only incremented when the target goes to a survivor > space, so the probability of the race is independent of whether the old > generation is CMS or not. > > I think the fix outlined above should take care of the bad accounting. And > would avoid (in extreme cases) > the adaptive tenuring algorithm from being too pessimistic and using a > lower tenuring threshold than the > more accurate accounting would have yielded. > > -- ramki > > > > On Fri, Oct 25, 2013 at 12:46 AM, Srinivas Ramakrishna wrote: > >> Never mind my question, born of a bit of confusion on my part. >> >> I think my explanation works in both cases, although the probability of >> the event is increased if the old gen collector is not >> CMS. >> >> Here's the issue: In the code for copying a target object into the >> survivor space or into old gen, several threads may race to >> claim an object. In the case where the object's age is under the tenuring >> threshold, or if the older generation is not CMS, >> we will first copy the object then claim the object by swapping in the >> forwarding pointer to the copy. The other copies >> are discarded and the winning thread continues. The problem is that the >> age table is incremented by all of the threads >> racing to do the copying. The fix is that only the winner of the race >> should increment the age table to avoid multiple increments. 
>> >> That should fix the problem you are seeing. The problem could be more >> acute in certain kinds of object graph structures, and also when the old >> generation is not CMS then the possibility of such races is slightly >> increased because it's present also when >> copying into the old generation. >> >> (I can't recall why we don't always first claim the object and then do >> the copying and then update the forwarding pointer, as is >> done when the target space is the CMS space. I'll let others reconstruct >> that reason, if that reason, probably a performance reason, is still >> relevant today....) >> >> -- ramki >> >> >> >> On Fri, Oct 25, 2013 at 12:27 AM, Srinivas Ramakrishna > > wrote: >> >>> Hi Ji -- >>> >>> Are you using ParNew by itself without CMS in the old gen, or are you >>> using CMS in the old gen. >>> If the former, I have a possible explanation (although you will need to >>> evaluate the statistical probability >>> of an event based on the configuration and object demographics to >>> determine if it's plausible in your >>> case). >>> >>> If, however, you are using CMS in the old gen, then I don't have an >>> explanation. >>> >>> So, what is your config? :-) >>> >>> -- ramki >>> >>> >>> >>> On Thu, Oct 24, 2013 at 11:40 PM, Tao Mao wrote: >>> >>>> Hi Ji, >>>> >>>> From what you've reported, it definitely looks weird. Are these two >>>> GC's consecutive two GC's (i.e. no other GC/s in between)? >>>> >>>> Thanks. >>>> Tao >>>> >>>> >>>> On 10/24/13 9:45 PM, Ji Cheng wrote: >>>> >>>> Hello, >>>> >>>> I have gc log enabled with -XX:+PrintTenuringDistribution. But I'm >>>> quite confused with the tenuring distribution below. 
>>>> >>>> ============= >>>> 2013-10-19T19:46:30.244+0800: 169797.045: >>>> [GC2013-10-19T19:46:30.244+0800: 169797.045: [ParNew >>>> Desired survivor size 87359488 bytes, new threshold 4 (max 4) >>>> - age 1: 10532656 bytes, 10532656 total >>>> - age 2: 14082976 bytes, 24615632 total >>>> - age 3: 15155296 bytes, 39770928 total >>>> - age 4: 13938272 bytes, 53709200 total >>>> : 758515K->76697K(853376K), 0.0748620 secs] >>>> 4693076K->4021899K(6120832K), 0.0756370 secs] [Times: user=0.42 sys=0.00, >>>> real=0.07 secs] >>>> 2013-10-19T19:47:10.909+0800: 169837.710: >>>> [GC2013-10-19T19:47:10.909+0800: 169837.711: [ParNew >>>> Desired survivor size 87359488 bytes, new threshold 4 (max 4) >>>> - age 1: 9167144 bytes, 9167144 total >>>> - age 2: 9178824 bytes, 18345968 total >>>> - age 3: 16101552 bytes, 34447520 total >>>> - age 4: 21369776 bytes, 55817296 total >>>> : 759449K->63442K(853376K), 0.0776450 secs] >>>> 4704651K->4020310K(6120832K), 0.0783500 secs] [Times: user=0.43 sys=0.00, >>>> real=0.07 secs] >>>> ============= >>>> >>>> From what I read, there are 10532656 bytes in age 1 (survived from 1 >>>> GC) in the first gc. In the second gc, 9178824 bytes in age 2 (survived >>>> from 2 GCs). This is fine since some objects died between the first and >>>> second GC. >>>> >>>> But in the second GC, 16101552 bytes are in age 3 while there were only 14082976 >>>> bytes in age 2 in the first GC. I don't know why this number is increasing. >>>> Shouldn't all bytes in age n come from age n-1 in the previous GC? Or have I >>>> misinterpreted those numbers? >>>> >>>> btw, the jvm version is 1.7.0_40. >>>> >>>> Thanks. 
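Ji's question ("shouldn't all bytes in age n come from age n-1 in the previous GC?") can be checked mechanically. A small sketch that reports which ages violate that invariant; the class and method names are ours, and the byte counts are copied from the two log records quoted above:

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Checks the invariant behind Ji's reading of -XX:+PrintTenuringDistribution:
 * bytes recorded at age n in one ParNew collection should never exceed bytes
 * at age n-1 in the previous collection, because objects reach age n only by
 * surviving from age n-1.
 */
public class TenuringInvariant {

    /** Returns the ages (>= 2) whose byte count grew, violating the invariant. */
    public static List<Integer> violations(long[] prevGc, long[] nextGc) {
        List<Integer> bad = new ArrayList<>();
        for (int age = 2; age <= nextGc.length; age++) {
            if (nextGc[age - 1] > prevGc[age - 2]) {
                bad.add(age);
            }
        }
        return bad;
    }

    public static void main(String[] args) {
        // ages 1..4 from the 169797.045 and 169837.710 records quoted above
        long[] gc1 = {10532656L, 14082976L, 15155296L, 13938272L};
        long[] gc2 = { 9167144L,  9178824L, 16101552L, 21369776L};
        // age 3 grew (16101552 > 14082976) and so did age 4 (21369776 > 15155296)
        System.out.println(violations(gc1, gc2)); // prints [3, 4]
    }
}
```

Note that age 4 in the second record is anomalous as well, not just the age 3 that Ji called out.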
>>>> >>>> Ji Cheng >>>> >>>> >>>> _______________________________________________ >>>> hotspot-gc-use mailing list hotspot-gc-use at openjdk.java.net http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use >>>> >>>> >>>> _______________________________________________ >>>> hotspot-gc-use mailing list >>>> hotspot-gc-use at openjdk.java.net >>>> http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use >>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20131025/e7d64394/attachment-0001.html From ysr1729 at gmail.com Fri Oct 25 11:07:11 2013 From: ysr1729 at gmail.com (Srinivas Ramakrishna) Date: Fri, 25 Oct 2013 11:07:11 -0700 Subject: How to read the output of +PrintTenuringDistribution In-Reply-To: References: <526A1268.1040203@oracle.com> Message-ID: Hi Ji, Tao -- Yes, as I outlined, this is a bug in the JVM's ParNew object copying and age table book-keeping, because of which the age table entries can end up being incremented more than they should be when two or more GC threads race to copy an object to a survivor space and only one succeeds in doing the copy (but all the racers increment in their local tables). This causes the counts to go awry. (There is no corresponding bug in Parallel Scavenge because it doesn't keep an age table.) Since the incrementing is done to a per-thread local age table, there are two possible solutions to avoid this overcounting: either have the thread decrement its count when it loses a race, or have the thread increment its count only when it is sure that it has won the race. One can look at the structure of the code to see which of these looks like a better solution to fix the book-keeping badness. I am assuming Tao or colleagues will create a bug for this and fix it. Or that you have already filed an official bug for this. Thanks for reporting the problem! 
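Ramki's second proposed fix (increment the age table only after winning the race) can be modeled in miniature. The real code is C++ inside HotSpot's ParNew collector; this Java sketch, with an AtomicReference standing in for the forwarding-pointer CAS and entirely invented names, only illustrates the winner-only accounting:

```java
import java.util.concurrent.atomic.AtomicReference;

/**
 * Illustrative model of the race described above: several GC threads each
 * make a speculative private copy of an object and then race to install a
 * forwarding pointer to their copy. The buggy book-keeping let every racer
 * bump its per-thread age table; the fix counts only the CAS winner.
 */
public class AgeTableRace {

    static class Obj {
        final AtomicReference<Object> forwardee = new AtomicReference<>();
        final int age;
        Obj(int age) { this.age = age; }
    }

    /** Per-thread age table: sizes[a] = bytes copied of objects now at age a. */
    static class AgeTable {
        final long[] sizes = new long[16];
    }

    /** Fixed copy protocol: account for the object only if our CAS won. */
    static boolean copyToSurvivor(Obj o, AgeTable local, long objBytes) {
        Object myCopy = new Object();            // speculative private copy
        boolean won = o.forwardee.compareAndSet(null, myCopy);
        if (won) {
            local.sizes[o.age + 1] += objBytes;  // only the winner counts it
        }
        // a loser discards myCopy and uses o.forwardee.get() instead
        return won;
    }

    public static void main(String[] args) {
        Obj o = new Obj(2);
        AgeTable t1 = new AgeTable(), t2 = new AgeTable();
        copyToSurvivor(o, t1, 100);
        copyToSurvivor(o, t2, 100); // simulated losing racer
        // exactly one table accounted for the object: 100 bytes total, not 200
        System.out.println(t1.sizes[3] + t2.sizes[3]); // prints 100
    }
}
```

With the pre-fix behavior (incrementing before the CAS outcome is known), both tables would record 100 bytes and the merged age distribution would show 200, which is exactly the kind of age-bucket inflation visible in Ji's logs.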
-- ramki On Fri, Oct 25, 2013 at 2:43 AM, Ji Cheng wrote: > Thanks Tao and Srinivas for the responses. > > Yes, it's from two consecutive GCs. > > If this is a bug in jvm itself, here are some more details if that helps. > > The log is from Apache Cassandra, a distributed database, running on an > 8-core machine. The jvm options are shown as below: > > ================ > root 10267 31.7 58.9 139587620 9683720 ? SLl Oct17 3562:09 > /usr/lib/jvm/java-7-oracle/bin/java -ea > -javaagent:bin/../lib/jamm-0.2.5.jar -XX:+UseThreadPriorities > -XX:ThreadPriorityPolicy=42 -Xms6G -Xmx6G -Xmn1000M > -XX:+HeapDumpOnOutOfMemoryError -Xss256k -verbose:gc *-XX:+UseParNewGC > -XX:+UseConcMarkSweepGC* -XX:+CMSParallelRemarkEnabled > -XX:SurvivorRatio=4 -XX:MaxTenuringThreshold=4 > -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly > -XX:+UseTLAB -XX:+UseCondCardMark -XX:+PrintGCDetails > -XX:+PrintGCDateStamps -XX:+PrintTenuringDistribution > -XX:+PrintPromotionFailure -Xloggc:/var/log/cassandra/gc.log > -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=10M > -Djava.net.preferIPv4Stack=true -Djava.rmi.server.hostname=cassandra-1.lan > -Dcom.sun.management.jmxremote.port=7199 > -Dcom.sun.management.jmxremote.ssl=false > -Dcom.sun.management.jmxremote.authenticate=false > -Dlog4j.configuration=log4j-server.properties > -Dlog4j.defaultInitOverride=true -cp > 
bin/../conf:bin/../build/classes/main:bin/../build/classes/thrift:bin/../lib/antlr-3.2.jar:bin/../lib/apache-cassandra-1.2.10.jar:bin/../lib/apache-cassandra-clientutil-1.2.10.jar:bin/../lib/apache-cassandra-thrift-1.2.10.jar:bin/../lib/avro-1.4.0-fixes.jar:bin/../lib/avro-1.4.0-sources-fixes.jar:bin/../lib/commons-cli-1.1.jar:bin/../lib/commons-codec-1.2.jar:bin/../lib/commons-lang-2.6.jar:bin/../lib/compress-lzf-0.8.4.jar:bin/../lib/concurrentlinkedhashmap-lru-1.3.jar:bin/../lib/guava-13.0.1.jar:bin/../lib/high-scale-lib-1.1.2.jar:bin/../lib/jackson-core-asl-1.9.2.jar:bin/../lib/jackson-mapper-asl-1.9.2.jar:bin/../lib/jamm-0.2.5.jar:bin/../lib/jbcrypt-0.3m.jar:bin/../lib/jline-1.0.jar:bin/../lib/jna-platform.jar:bin/../lib/jna.jar:bin/../lib/json-simple-1.1.jar:bin/../lib/libthrift-0.7.0.jar:bin/../lib/log4j-1.2.16.jar:bin/../lib/lz4-1.1.0.jar:bin/../lib/metrics-core-2.2.0.jar:bin/../lib/netty-3.6.6.Final.jar:bin/../lib/servlet-api-2.5-20081211.jar:bin/../lib/slf4j-api-1.7.2.jar:bin/../lib/slf4j-log4j12-1.7.2.jar:bin/../lib/snakeyaml-1.6.jar:bin/../lib/snappy-java-1.0.5.jar:bin/../lib/snaptree-0.1.jar > org.apache.cassandra.service.CassandraDaemon > > ================ > > Actually this problem happens quite often. All the GCs below are > consecutive. I just copied the output of tail -f here. 
> > > 2013-10-25T16:42:08.768+0800: 677135.570: [GC2013-10-25T16:42:08.769+0800: > 677135.570: [ParNew > Desired survivor size 87359488 bytes, new threshold 4 (max 4) > - age 1: 16930360 bytes, 16930360 total > *- age 2: 10648456 bytes, * 27578816 total > - age 3: 13359440 bytes, 40938256 total > - age 4: 10518080 bytes, 51456336 total > : 737954K->56968K(853376K), 0.1159900 secs] 4369426K->3692926K(6120832K), > 0.1168410 secs] [Times: user=0.70 sys=0.00, real=0.12 secs] > 2013-10-25T16:43:10.423+0800: 677197.225: [GC2013-10-25T16:43:10.424+0800: > 677197.225: [ParNew > Desired survivor size 87359488 bytes, new threshold 4 (max 4) > - age 1: 12756896 bytes, 12756896 total > - age 2: 10679104 bytes, 23436000 total > *- age 3: 11557408 bytes, * 34993408 total > - age 4: 12170432 bytes, 47163840 total > : 739720K->53771K(853376K), 0.1204730 secs] 4375678K->3697974K(6120832K), > 0.1213100 secs] [Times: user=0.72 sys=0.00, real=0.12 secs] > 2013-10-25T16:43:15.412+0800: 677202.213: [GC2013-10-25T16:43:15.412+0800: > 677202.214: [ParNew > Desired survivor size 87359488 bytes, new threshold 4 (max 4) > - age 1: 2347520 bytes, 2347520 total > - age 2: 10569072 bytes, 12916592 total > *- age 3: 10655296 bytes, * 23571888 total > - age 4: 10500536 bytes, 34072424 total > : 736523K->41743K(853376K), 0.1057610 secs] 4380726K->3694848K(6120832K), > 0.1064420 secs] [Times: user=0.72 sys=0.00, real=0.11 secs] > 2013-10-25T16:43:20.705+0800: 677207.507: [GC2013-10-25T16:43:20.706+0800: > 677207.507: [ParNew > Desired survivor size 87359488 bytes, new threshold 4 (max 4) > - age 1: 2224608 bytes, 2224608 total > - age 2: 1864384 bytes, 4088992 total > - age 3: 11297576 bytes, 15386568 total > *- age 4: 16933488 bytes, *32320056 total > : 724495K->40915K(853376K), 0.1025910 secs] 4377600K->3702271K(6120832K), > 0.1033820 secs] [Times: user=0.78 sys=0.00, real=0.10 secs] > 2013-10-25T16:44:55.234+0800: 677302.036: [GC2013-10-25T16:44:55.235+0800: > 677302.036: [ParNew > Desired 
survivor size 87359488 bytes, new threshold 4 (max 4) > - age 1: 23557744 bytes, 23557744 total > - age 2: 576560 bytes, 24134304 total > - age 3: 694352 bytes, 24828656 total > - age 4: 11104504 bytes, 35933160 total > : 723667K->41493K(853376K), 0.1091820 secs] 4385023K->3712159K(6120832K), > 0.1100190 secs] [Times: user=0.79 sys=0.00, real=0.11 secs] > 2013-10-25T16:46:24.655+0800: 677391.456: [GC2013-10-25T16:46:24.655+0800: > 677391.457: [ParNew > Desired survivor size 87359488 bytes, new threshold 4 (max 4) > - age 1: 16052176 bytes, 16052176 total > *- age 2: 13515752 bytes, * 29567928 total > - age 3: 568304 bytes, 30136232 total > - age 4: 685288 bytes, 30821520 total > : 724245K->35895K(853376K), 0.1193780 secs] 4394911K->3715432K(6120832K), > 0.1201740 secs] [Times: user=0.71 sys=0.00, real=0.12 secs] > 2013-10-25T16:48:08.928+0800: 677495.730: [GC2013-10-25T16:48:08.929+0800: > 677495.730: [ParNew > Desired survivor size 87359488 bytes, new threshold 4 (max 4) > - age 1: 11795376 bytes, 11795376 total > - age 2: 16954104 bytes, 28749480 total > *- age 3: 16548488 bytes, *45297968 total > - age 4: 564904 bytes, 45862872 total > : 718647K->50696K(853376K), 0.1075270 secs] 4398184K->3730910K(6120832K), > 0.1083250 secs] [Times: user=0.71 sys=0.00, real=0.11 secs] > 2013-10-25T16:49:49.994+0800: 677596.795: [GC2013-10-25T16:49:49.994+0800: > 677596.795: [ParNew > Desired survivor size 87359488 bytes, new threshold 4 (max 4) > - age 1: 10775376 bytes, 10775376 total > *- age 2: 8955800 bytes, *19731176 total > - age 3: 11600936 bytes, 31332112 total > - age 4: 15432856 bytes, 46764968 total > : 733448K->52432K(853376K), 0.1129030 secs] 4413662K->3733199K(6120832K), > 0.1136890 secs] [Times: user=0.68 sys=0.00, real=0.11 secs] > 2013-10-25T16:51:30.850+0800: 677697.651: [GC2013-10-25T16:51:30.851+0800: > 677697.652: [ParNew > Desired survivor size 87359488 bytes, new threshold 4 (max 4) > - age 1: 13071608 bytes, 13071608 total > - age 2: 7039704 bytes, 
20111312 total > *- age 3: 9879160 bytes, * 29990472 total > - age 4: 10465456 bytes, 40455928 total > : 735184K->52264K(853376K), 0.1201120 secs] 4415951K->3746066K(6120832K), > 0.1210270 secs] [Times: user=0.70 sys=0.00, real=0.12 secs] > > > > Thanks, > > Ji Cheng > > > > On Fri, Oct 25, 2013 at 3:50 PM, Srinivas Ramakrishna wrote: > >> >> Sorry, time for bed... I am not thinking straight... >> >> Scratch the part about the old generation being CMS reducing the >> incidence of the problem. >> The age table is only incremented when the target goes to a survivor >> space, so the probability of the race is independent of whether the old >> generation is CMS or not. >> >> I think the fix outlined above should take care of the bad accounting. >> And would avoid (in extreme cases) >> the adaprtive tenuring algorithm from being too pessimistic and using a >> lower tenuring threshold than the >> more accurate accounting would have yielded. >> >> -- ramki >> >> >> >> On Fri, Oct 25, 2013 at 12:46 AM, Srinivas Ramakrishna > > wrote: >> >>> Never mind my question, born of a bit of confusion on my part. >>> >>> I think my explanation works in both cases, although the probability of >>> the event is increased if the old gen collector is not >>> CMS. >>> >>> Here's the issue: In the code for copying a target object into the >>> survivor space or into old gen, several threads may race to >>> claim an object. In the case where the object's age is under the >>> tenuring threshold, or if the older generation is not CMS, >>> we will first copy the object then claim the object by swapping in the >>> forwarding pointer to the copy. The other copies >>> are discarded and the winning thread continues. The problem is that the >>> age table is incremented by all of the threads >>> racing to do the copying. The fix is that only the winner of the race >>> should increment the age table to avoid multiple increments. >>> >>> That should fix the problem you are seeing. 
The problem could be more >>> acute in certain kinds of object graph structures, and also when the old >>> generation is not CMS then the possibility of such races is slightly >>> increased because it's present also when >>> copying into the old generation. >>> >>> (I can't recall why we don't always first claim the object and then do >>> the copying and then update the forwarding pointer, as is >>> done when the target space is the CMS space. I'll let others reconstruct >>> that reason, if that reason, probably a performance reason, is still >>> relevant today....) >>> >>> -- ramki >>> >>> >>> >>> On Fri, Oct 25, 2013 at 12:27 AM, Srinivas Ramakrishna < >>> ysr1729 at gmail.com> wrote: >>> >>>> Hi Ji -- >>>> >>>> Are you using ParNew by itself without CMS in the old gen, or are you >>>> using CMS in the old gen. >>>> If the former, I have a possible explanation (although you will need to >>>> evaluate the statistical probability >>>> of an event based on the configuration and object demographics to >>>> determine if it's plausible in your >>>> case). >>>> >>>> If, however, you are using CMS in the old gen, then I don't have an >>>> explanation. >>>> >>>> So, what is your config? :-) >>>> >>>> -- ramki >>>> >>>> >>>> >>>> On Thu, Oct 24, 2013 at 11:40 PM, Tao Mao wrote: >>>> >>>>> Hi Ji, >>>>> >>>>> From what you've reported, it definitely looks weird. Are these two >>>>> GC's consecutive two GC's (i.e. no other GC/s in between)? >>>>> >>>>> Thanks. >>>>> Tao >>>>> >>>>> >>>>> On 10/24/13 9:45 PM, Ji Cheng wrote: >>>>> >>>>> Hello, >>>>> >>>>> I have gc log enabled with -XX:+PrintTenuringDistribution. But I'm >>>>> quite confused with the tenuring distribution below. 
>>>>> >>>>> ============= >>>>> 2013-10-19T19:46:30.244+0800: 169797.045: >>>>> [GC2013-10-19T19:46:30.244+0800: 169797.045: [ParNew >>>>> Desired survivor size 87359488 bytes, new threshold 4 (max 4) >>>>> - age 1: 10532656 bytes, 10532656 total >>>>> - age 2: 14082976 bytes, 24615632 total >>>>> - age 3: 15155296 bytes, 39770928 total >>>>> - age 4: 13938272 bytes, 53709200 total >>>>> : 758515K->76697K(853376K), 0.0748620 secs] >>>>> 4693076K->4021899K(6120832K), 0.0756370 secs] [Times: user=0.42 sys=0.00, >>>>> real=0.07 secs] >>>>> 2013-10-19T19:47:10.909+0800: 169837.710: >>>>> [GC2013-10-19T19:47:10.909+0800: 169837.711: [ParNew >>>>> Desired survivor size 87359488 bytes, new threshold 4 (max 4) >>>>> - age 1: 9167144 bytes, 9167144 total >>>>> - age 2: 9178824 bytes, 18345968 total >>>>> - age 3: 16101552 bytes, 34447520 total >>>>> - age 4: 21369776 bytes, 55817296 total >>>>> : 759449K->63442K(853376K), 0.0776450 secs] >>>>> 4704651K->4020310K(6120832K), 0.0783500 secs] [Times: user=0.43 sys=0.00, >>>>> real=0.07 secs] >>>>> ============= >>>>> >>>>> From What I read, there are 10532656 bytes in age 1 (survived from 1 >>>>> GC) in the first gc. In the second gc, 9178824 bytes in age 2 (survived >>>>> from 2 GCs). This is fine since some objects died between the first and >>>>> second GC. >>>>> >>>>> But in the second GC, 16101552 bytes are in age 3 while only >>>>> 14082976 bytes in age 2 in the first GC. I don't why this number is >>>>> increasing. Shouldn't all bytes in age n come from age n-1 in the previous >>>>> GC? Or I misinterpreted those numbers? >>>>> >>>>> btw, the jvm version is 1.7.0_40. >>>>> >>>>> Thanks. 
>>>>> >>>>> Ji Cheng >>>>> >>>>> >>>>> _______________________________________________ >>>>> hotspot-gc-use mailing listhotspot-gc-use at openjdk.java.nethttp://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use >>>>> >>>>> >>>>> _______________________________________________ >>>>> hotspot-gc-use mailing list >>>>> hotspot-gc-use at openjdk.java.net >>>>> http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use >>>>> >>>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20131025/2443d5fb/attachment-0001.html From memoleaf at gmail.com Sun Oct 27 05:36:57 2013 From: memoleaf at gmail.com (Ji Cheng) Date: Sun, 27 Oct 2013 20:36:57 +0800 Subject: How to read the output of +PrintTenuringDistribution In-Reply-To: References: <526A1268.1040203@oracle.com> Message-ID: Hi Srinivas, Thanks a lot for your help and detailed explanation. =) I didn't file a bug report. I think it's easier for Tao or your colleagues to file it on the new jira system (I can't create an account). Ji Cheng On Sat, Oct 26, 2013 at 2:07 AM, Srinivas Ramakrishna wrote: > Hi Ji, Tao -- > > Yes, as I outlined this is a bug in the JVM's parnew object copying and > age table book-keeping > because of which the agetable entries can end up being incremented more > than they should be > when two or more GC threads race to copy an object to a survivor space and > only one succeeds > in doing the copy (but all the racers increment in their local tables). > This causes the counts to go awry. > (There is no corresponding bug in Parallel Scavenge because it doesn't > keep an age table.) > > Since the incrementing is done to a per-thread local age table, there are > two possible solutions to avoid > this overcounting: Either have the thread decrement the its count when it > loses a race, or have the > thread increment its count only when it is sure that it has won the race. 
> > One can look at the structure of the code to see which of these looks like > a better solution to fix the > book-keeping badness. > > I am assuming Tao or colleagues will create a bug for this and fix it. Or > that you have already called in > an official bug for this. > > Thanks for reporting the problem! > -- ramki > > > > On Fri, Oct 25, 2013 at 2:43 AM, Ji Cheng wrote: > >> Thanks Tao and Srinivas for the responses. >> >> Yes, it's from two consecutive GCs. >> >> If this is a bug in jvm itself, here are some more details if that helps. >> >> The log is from Apache Cassandra, a distributed database, running on an >> 8-core machine. The jvm options are shown as below: >> >> ================ >> root 10267 31.7 58.9 139587620 9683720 ? SLl Oct17 3562:09 >> /usr/lib/jvm/java-7-oracle/bin/java -ea >> -javaagent:bin/../lib/jamm-0.2.5.jar -XX:+UseThreadPriorities >> -XX:ThreadPriorityPolicy=42 -Xms6G -Xmx6G -Xmn1000M >> -XX:+HeapDumpOnOutOfMemoryError -Xss256k -verbose:gc *-XX:+UseParNewGC >> -XX:+UseConcMarkSweepGC* -XX:+CMSParallelRemarkEnabled >> -XX:SurvivorRatio=4 -XX:MaxTenuringThreshold=4 >> -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly >> -XX:+UseTLAB -XX:+UseCondCardMark -XX:+PrintGCDetails >> -XX:+PrintGCDateStamps -XX:+PrintTenuringDistribution >> -XX:+PrintPromotionFailure -Xloggc:/var/log/cassandra/gc.log >> -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=10M >> -Djava.net.preferIPv4Stack=true -Djava.rmi.server.hostname=cassandra-1.lan >> -Dcom.sun.management.jmxremote.port=7199 >> -Dcom.sun.management.jmxremote.ssl=false >> -Dcom.sun.management.jmxremote.authenticate=false >> -Dlog4j.configuration=log4j-server.properties >> -Dlog4j.defaultInitOverride=true -cp >> 
bin/../conf:bin/../build/classes/main:bin/../build/classes/thrift:bin/../lib/antlr-3.2.jar:bin/../lib/apache-cassandra-1.2.10.jar:bin/../lib/apache-cassandra-clientutil-1.2.10.jar:bin/../lib/apache-cassandra-thrift-1.2.10.jar:bin/../lib/avro-1.4.0-fixes.jar:bin/../lib/avro-1.4.0-sources-fixes.jar:bin/../lib/commons-cli-1.1.jar:bin/../lib/commons-codec-1.2.jar:bin/../lib/commons-lang-2.6.jar:bin/../lib/compress-lzf-0.8.4.jar:bin/../lib/concurrentlinkedhashmap-lru-1.3.jar:bin/../lib/guava-13.0.1.jar:bin/../lib/high-scale-lib-1.1.2.jar:bin/../lib/jackson-core-asl-1.9.2.jar:bin/../lib/jackson-mapper-asl-1.9.2.jar:bin/../lib/jamm-0.2.5.jar:bin/../lib/jbcrypt-0.3m.jar:bin/../lib/jline-1.0.jar:bin/../lib/jna-platform.jar:bin/../lib/jna.jar:bin/../lib/json-simple-1.1.jar:bin/../lib/libthrift-0.7.0.jar:bin/../lib/log4j-1.2.16.jar:bin/../lib/lz4-1.1.0.jar:bin/../lib/metrics-core-2.2.0.jar:bin/../lib/netty-3.6.6.Final.jar:bin/../lib/servlet-api-2.5-20081211.jar:bin/../lib/slf4j-api-1.7.2.jar:bin/../lib/slf4j-log4j12-1.7.2.jar:bin/../lib/snakeyaml-1.6.jar:bin/../lib/snappy-java-1.0.5.jar:bin/../lib/snaptree-0.1.jar >> org.apache.cassandra.service.CassandraDaemon >> >> ================ >> >> Actually this problem happens quite often. All the GCs below are >> consecutive. I just copied the output of tail -f here. 
>> >> >> 2013-10-25T16:42:08.768+0800: 677135.570: >> [GC2013-10-25T16:42:08.769+0800: 677135.570: [ParNew >> Desired survivor size 87359488 bytes, new threshold 4 (max 4) >> - age 1: 16930360 bytes, 16930360 total >> *- age 2: 10648456 bytes, * 27578816 total >> - age 3: 13359440 bytes, 40938256 total >> - age 4: 10518080 bytes, 51456336 total >> : 737954K->56968K(853376K), 0.1159900 secs] 4369426K->3692926K(6120832K), >> 0.1168410 secs] [Times: user=0.70 sys=0.00, real=0.12 secs] >> 2013-10-25T16:43:10.423+0800: 677197.225: >> [GC2013-10-25T16:43:10.424+0800: 677197.225: [ParNew >> Desired survivor size 87359488 bytes, new threshold 4 (max 4) >> - age 1: 12756896 bytes, 12756896 total >> - age 2: 10679104 bytes, 23436000 total >> *- age 3: 11557408 bytes, * 34993408 total >> - age 4: 12170432 bytes, 47163840 total >> : 739720K->53771K(853376K), 0.1204730 secs] 4375678K->3697974K(6120832K), >> 0.1213100 secs] [Times: user=0.72 sys=0.00, real=0.12 secs] >> 2013-10-25T16:43:15.412+0800: 677202.213: >> [GC2013-10-25T16:43:15.412+0800: 677202.214: [ParNew >> Desired survivor size 87359488 bytes, new threshold 4 (max 4) >> - age 1: 2347520 bytes, 2347520 total >> - age 2: 10569072 bytes, 12916592 total >> *- age 3: 10655296 bytes, * 23571888 total >> - age 4: 10500536 bytes, 34072424 total >> : 736523K->41743K(853376K), 0.1057610 secs] 4380726K->3694848K(6120832K), >> 0.1064420 secs] [Times: user=0.72 sys=0.00, real=0.11 secs] >> 2013-10-25T16:43:20.705+0800: 677207.507: >> [GC2013-10-25T16:43:20.706+0800: 677207.507: [ParNew >> Desired survivor size 87359488 bytes, new threshold 4 (max 4) >> - age 1: 2224608 bytes, 2224608 total >> - age 2: 1864384 bytes, 4088992 total >> - age 3: 11297576 bytes, 15386568 total >> *- age 4: 16933488 bytes, *32320056 total >> : 724495K->40915K(853376K), 0.1025910 secs] 4377600K->3702271K(6120832K), >> 0.1033820 secs] [Times: user=0.78 sys=0.00, real=0.10 secs] >> 2013-10-25T16:44:55.234+0800: 677302.036: >> 
[GC2013-10-25T16:44:55.235+0800: 677302.036: [ParNew >> Desired survivor size 87359488 bytes, new threshold 4 (max 4) >> - age 1: 23557744 bytes, 23557744 total >> - age 2: 576560 bytes, 24134304 total >> - age 3: 694352 bytes, 24828656 total >> - age 4: 11104504 bytes, 35933160 total >> : 723667K->41493K(853376K), 0.1091820 secs] 4385023K->3712159K(6120832K), >> 0.1100190 secs] [Times: user=0.79 sys=0.00, real=0.11 secs] >> 2013-10-25T16:46:24.655+0800: 677391.456: >> [GC2013-10-25T16:46:24.655+0800: 677391.457: [ParNew >> Desired survivor size 87359488 bytes, new threshold 4 (max 4) >> - age 1: 16052176 bytes, 16052176 total >> *- age 2: 13515752 bytes, * 29567928 total >> - age 3: 568304 bytes, 30136232 total >> - age 4: 685288 bytes, 30821520 total >> : 724245K->35895K(853376K), 0.1193780 secs] 4394911K->3715432K(6120832K), >> 0.1201740 secs] [Times: user=0.71 sys=0.00, real=0.12 secs] >> 2013-10-25T16:48:08.928+0800: 677495.730: >> [GC2013-10-25T16:48:08.929+0800: 677495.730: [ParNew >> Desired survivor size 87359488 bytes, new threshold 4 (max 4) >> - age 1: 11795376 bytes, 11795376 total >> - age 2: 16954104 bytes, 28749480 total >> *- age 3: 16548488 bytes, *45297968 total >> - age 4: 564904 bytes, 45862872 total >> : 718647K->50696K(853376K), 0.1075270 secs] 4398184K->3730910K(6120832K), >> 0.1083250 secs] [Times: user=0.71 sys=0.00, real=0.11 secs] >> 2013-10-25T16:49:49.994+0800: 677596.795: >> [GC2013-10-25T16:49:49.994+0800: 677596.795: [ParNew >> Desired survivor size 87359488 bytes, new threshold 4 (max 4) >> - age 1: 10775376 bytes, 10775376 total >> *- age 2: 8955800 bytes, *19731176 total >> - age 3: 11600936 bytes, 31332112 total >> - age 4: 15432856 bytes, 46764968 total >> : 733448K->52432K(853376K), 0.1129030 secs] 4413662K->3733199K(6120832K), >> 0.1136890 secs] [Times: user=0.68 sys=0.00, real=0.11 secs] >> 2013-10-25T16:51:30.850+0800: 677697.651: >> [GC2013-10-25T16:51:30.851+0800: 677697.652: [ParNew >> Desired survivor size 87359488 
bytes, new threshold 4 (max 4) >> - age 1: 13071608 bytes, 13071608 total >> - age 2: 7039704 bytes, 20111312 total >> *- age 3: 9879160 bytes, * 29990472 total >> - age 4: 10465456 bytes, 40455928 total >> : 735184K->52264K(853376K), 0.1201120 secs] 4415951K->3746066K(6120832K), >> 0.1210270 secs] [Times: user=0.70 sys=0.00, real=0.12 secs] >> >> >> >> Thanks, >> >> Ji Cheng >> >> >> >> On Fri, Oct 25, 2013 at 3:50 PM, Srinivas Ramakrishna wrote: >> >>> >>> Sorry, time for bed... I am not thinking straight... >>> >>> Scratch the part about the old generation being CMS reducing the >>> incidence of the problem. >>> The age table is only incremented when the target goes to a survivor >>> space, so the probability of the race is independent of whether the old >>> generation is CMS or not. >>> >>> I think the fix outlined above should take care of the bad accounting. >>> And would prevent (in extreme cases) >>> the adaptive tenuring algorithm from being too pessimistic and using a >>> lower tenuring threshold than the >>> more accurate accounting would have yielded. >>> >>> -- ramki >>> >>> >>> >>> On Fri, Oct 25, 2013 at 12:46 AM, Srinivas Ramakrishna < >>> ysr1729 at gmail.com> wrote: >>> >>>> Never mind my question, born of a bit of confusion on my part. >>>> >>>> I think my explanation works in both cases, although the probability of >>>> the event is increased if the old gen collector is not >>>> CMS. >>>> >>>> Here's the issue: In the code for copying a target object into the >>>> survivor space or into old gen, several threads may race to >>>> claim an object. In the case where the object's age is under the >>>> tenuring threshold, or if the older generation is not CMS, >>>> we will first copy the object then claim the object by swapping in the >>>> forwarding pointer to the copy. The other copies >>>> are discarded and the winning thread continues. The problem is that the >>>> age table is incremented by all of the threads >>>> racing to do the copying. 
The fix is that only the winner of the race >>>> should increment the age table to avoid multiple increments. >>>> >>>> That should fix the problem you are seeing. The problem could be more >>>> acute in certain kinds of object graph structures, and also when the old >>>> generation is not CMS then the possibility of such races is slightly >>>> increased because it's present also when >>>> copying into the old generation. >>>> >>>> (I can't recall why we don't always first claim the object and then do >>>> the copying and then update the forwarding pointer, as is >>>> done when the target space is the CMS space. I'll let others >>>> reconstruct that reason, if that reason, probably a performance reason, is >>>> still relevant today....) >>>> >>>> -- ramki >>>> >>>> >>>> >>>> On Fri, Oct 25, 2013 at 12:27 AM, Srinivas Ramakrishna < >>>> ysr1729 at gmail.com> wrote: >>>> >>>>> Hi Ji -- >>>>> >>>>> Are you using ParNew by itself without CMS in the old gen, or are you >>>>> using CMS in the old gen. >>>>> If the former, I have a possible explanation (although you will need >>>>> to evaluate the statistical probability >>>>> of an event based on the configuration and object demographics to >>>>> determine if it's plausible in your >>>>> case). >>>>> >>>>> If, however, you are using CMS in the old gen, then I don't have an >>>>> explanation. >>>>> >>>>> So, what is your config? :-) >>>>> >>>>> -- ramki >>>>> >>>>> >>>>> >>>>> On Thu, Oct 24, 2013 at 11:40 PM, Tao Mao wrote: >>>>> >>>>>> Hi Ji, >>>>>> >>>>>> From what you've reported, it definitely looks weird. Are these two >>>>>> GC's consecutive two GC's (i.e. no other GC/s in between)? >>>>>> >>>>>> Thanks. >>>>>> Tao >>>>>> >>>>>> >>>>>> On 10/24/13 9:45 PM, Ji Cheng wrote: >>>>>> >>>>>> Hello, >>>>>> >>>>>> I have gc log enabled with -XX:+PrintTenuringDistribution. But I'm >>>>>> quite confused with the tenuring distribution below. 
>>>>>> >>>>>> ============= >>>>>> 2013-10-19T19:46:30.244+0800: 169797.045: >>>>>> [GC2013-10-19T19:46:30.244+0800: 169797.045: [ParNew >>>>>> Desired survivor size 87359488 bytes, new threshold 4 (max 4) >>>>>> - age 1: 10532656 bytes, 10532656 total >>>>>> - age 2: 14082976 bytes, 24615632 total >>>>>> - age 3: 15155296 bytes, 39770928 total >>>>>> - age 4: 13938272 bytes, 53709200 total >>>>>> : 758515K->76697K(853376K), 0.0748620 secs] >>>>>> 4693076K->4021899K(6120832K), 0.0756370 secs] [Times: user=0.42 sys=0.00, >>>>>> real=0.07 secs] >>>>>> 2013-10-19T19:47:10.909+0800: 169837.710: >>>>>> [GC2013-10-19T19:47:10.909+0800: 169837.711: [ParNew >>>>>> Desired survivor size 87359488 bytes, new threshold 4 (max 4) >>>>>> - age 1: 9167144 bytes, 9167144 total >>>>>> - age 2: 9178824 bytes, 18345968 total >>>>>> - age 3: 16101552 bytes, 34447520 total >>>>>> - age 4: 21369776 bytes, 55817296 total >>>>>> : 759449K->63442K(853376K), 0.0776450 secs] >>>>>> 4704651K->4020310K(6120832K), 0.0783500 secs] [Times: user=0.43 sys=0.00, >>>>>> real=0.07 secs] >>>>>> ============= >>>>>> >>>>>> From what I read, there are 10532656 bytes in age 1 (survived from >>>>>> 1 GC) in the first gc. In the second gc, 9178824 bytes in age 2 (survived >>>>>> from 2 GCs). This is fine since some objects died between the first and >>>>>> second GC. >>>>>> >>>>>> But in the second GC, 16101552 bytes are in age 3 while only >>>>>> 14082976 bytes in age 2 in the first GC. I don't know why this number is >>>>>> increasing. Shouldn't all bytes in age n come from age n-1 in the previous >>>>>> GC? Or have I misinterpreted those numbers? >>>>>> >>>>>> btw, the jvm version is 1.7.0_40. >>>>>> >>>>>> Thanks. 
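The race Srinivas describes above can be sketched outside the VM (a hypothetical Java model for illustration only — `ObjHeader`, `AgeTable` and `copy` are made-up names, not HotSpot code):

```java
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical model of the ParNew copy path described above: several GC
// threads race to copy the same object; in the buggy accounting every
// racer bumps the age table, so a later GC can report more bytes at age n
// than existed at age n-1 in the previous GC.
class AgeTableRace {
    static class ObjHeader {
        final AtomicReference<Object> forwardee = new AtomicReference<>();
        final int age;
        final long size;
        ObjHeader(int age, long size) { this.age = age; this.size = size; }
    }

    static class AgeTable {
        final long[] bytesByAge = new long[16];
        synchronized void add(int age, long bytes) { bytesByAge[age] += bytes; }
    }

    // The proposed fix: copy first, then CAS in the forwarding pointer,
    // and let ONLY the CAS winner update the age table.
    static Object copy(ObjHeader h, AgeTable ages) {
        Object myCopy = new Object();                  // speculative copy
        if (h.forwardee.compareAndSet(null, myCopy)) { // claim the object
            ages.add(h.age + 1, h.size);               // winner accounts once
            return myCopy;
        }
        return h.forwardee.get();                      // loser discards its copy
    }
}
```

With this shape, two threads calling `copy` on the same header account the object's bytes exactly once, which restores the "age n comes only from age n-1" invariant the question relies on.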
>>>>>> >>>>>> Ji Cheng >>>>>> >>>>>> >>>>>> _______________________________________________ >>>>>> hotspot-gc-use mailing list hotspot-gc-use at openjdk.java.net http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use >>>>>> >>>>>> >>>>>> _______________________________________________ >>>>>> hotspot-gc-use mailing list >>>>>> hotspot-gc-use at openjdk.java.net >>>>>> http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use >>>>>> >>>>>> >>>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20131027/8f6c5a36/attachment-0001.html From thomas.schatzl at oracle.com Mon Oct 28 01:26:51 2013 From: thomas.schatzl at oracle.com (Thomas Schatzl) Date: Mon, 28 Oct 2013 09:26:51 +0100 Subject: How to read the output of +PrintTenuringDistribution In-Reply-To: References: <526A1268.1040203@oracle.com> Message-ID: <1382948811.2687.2.camel@cirrus> Hi all, On Sun, 2013-10-27 at 20:36 +0800, Ji Cheng wrote: > Hi Srinivas, > > Thanks a lot for your help and detailed explanation. =) > > I didn't file a bug report. I think it's easier for Tao or your > colleagues to file it on the new jira system (I can't create an > account). > I filed a new bug at https://bugs.openjdk.java.net/browse/JDK-8027363 . Thanks for your report and your detailed explanations (which seem correct after some code review). Thomas From wolfgang.pedot at finkzeit.at Mon Oct 28 08:50:31 2013 From: wolfgang.pedot at finkzeit.at (Wolfgang Pedot) Date: Mon, 28 Oct 2013 16:50:31 +0100 Subject: G1 collector stays on small young-generation size longer than expected after concurrent cycle Message-ID: <526E87C7.8050708@finkzeit.at> Hello, I have spent quite some time tuning G1 for my use case now and I am generally pleased with the results (ignoring PermGen for now). 
I do have a question about how the automatic sizing of the young generation works, though; here are parts of my GC-log (only Date/Timestamp and sizes): 2013-10-28T15:55:56.323+0100: 21273.211: [GC pause (young) [Eden: 4032.0M(4032.0M)->0.0B(3960.0M) Survivors: 400.0M->432.0M Heap: 13.5G(14.6G)->9834.7M(14.6G)] 2013-10-28T15:56:18.653+0100: 21295.541: [GC pause (young) [Eden: 3960.0M(3960.0M)->0.0B(3944.0M) Survivors: 432.0M->416.0M Heap: 13.5G(14.6G)->9868.7M(14.6G)] 2013-10-28T15:57:00.727+0100: 21337.615: [GC pause (young) [Eden: 3944.0M(3944.0M)->0.0B(3880.0M) Survivors: 416.0M->416.0M Heap: 13.5G(14.6G)->9931.4M(14.6G)] 2013-10-28T15:57:43.165+0100: 21380.053: [GC pause (young) [Eden: 3880.0M(3880.0M)->0.0B(3808.0M) Survivors: 416.0M->432.0M Heap: 13.5G(14.6G)->9987.4M(14.6G)] 2013-10-28T15:58:04.731+0100: 21401.619: [GC pause (young) [Eden: 3808.0M(3808.0M)->0.0B(3784.0M) Survivors: 432.0M->408.0M Heap: 13.5G(14.6G)->10036.7M(14.6G)] Up to here everything is normal, heap usage reaches the threshold. 
2013-10-28T15:58:33.923+0100: 21430.811: [GC pause (young) (initial-mark) [Eden: 3784.0M(3784.0M)->0.0B(3728.0M) Survivors: 408.0M->408.0M Heap: 13.5G(14.6G)->10096.0M(14.6G)] 2013-10-28T15:58:36.234+0100: 21433.122: [GC concurrent-cleanup-end, 0.0004450 secs] 2013-10-28T15:59:19.111+0100: 21475.998: [GC pause (young) [Eden: 3728.0M(3728.0M)->0.0B(360.0M) Survivors: 408.0M->384.0M Heap: 12.9G(14.6G)->9559.3M(14.6G)] Now mixed collects start and young-gen size is reduced to facilitate collection of old-regions (I guess): 2013-10-28T15:59:22.778+0100: 21479.666: [GC pause (mixed) [Eden: 360.0M(360.0M)->0.0B(648.0M) Survivors: 384.0M->96.0M Heap: 9919.3M(14.6G)->8419.4M(14.6G)] 2013-10-28T15:59:30.652+0100: 21487.540: [GC pause (mixed) [Eden: 648.0M(648.0M)->0.0B(648.0M) Survivors: 96.0M->96.0M Heap: 9067.4M(14.6G)->7480.0M(14.6G)] 2013-10-28T15:59:38.671+0100: 21495.559: [GC pause (mixed) [Eden: 648.0M(648.0M)->0.0B(648.0M) Survivors: 96.0M->96.0M Heap: 8128.0M(14.6G)->7263.3M(14.6G)] During the last collect G1 decided there is no more need for mixed: 21495.811: [G1Ergonomics (Mixed GCs) do not continue mixed GCs, reason: reclaimable percentage not over threshold, candidate old regions: 203 regions, reclaimable: 784279232 bytes (4.99 %), threshold: 5.00 %] Now I would expect the young-generation size to increase again but as you can see below that does not happen for another 11 collects: 2013-10-28T15:59:44.685+0100: 21501.573: [GC pause (young) [Eden: 648.0M(648.0M)->0.0B(648.0M) Survivors: 96.0M->96.0M Heap: 7911.3M(14.6G)->7293.7M(14.6G)] 2013-10-28T15:59:51.136+0100: 21508.024: [GC pause (young) [Eden: 648.0M(648.0M)->0.0B(648.0M) Survivors: 96.0M->96.0M Heap: 7941.7M(14.6G)->7391.1M(14.6G)] 2013-10-28T15:59:58.723+0100: 21515.610: [GC pause (young) [Eden: 648.0M(648.0M)->0.0B(648.0M) Survivors: 96.0M->96.0M Heap: 8039.1M(14.6G)->7477.8M(14.6G)] 2013-10-28T16:00:01.284+0100: 21518.172: [GC pause (young) [Eden: 648.0M(648.0M)->0.0B(648.0M) Survivors: 96.0M->96.0M 
Heap: 8125.8M(14.6G)->7548.1M(14.6G)] 2013-10-28T16:00:05.597+0100: 21522.485: [GC pause (young) [Eden: 648.0M(648.0M)->0.0B(648.0M) Survivors: 96.0M->96.0M Heap: 8196.1M(14.6G)->7582.0M(14.6G)] 2013-10-28T16:00:06.595+0100: 21523.483: [GC pause (young) [Eden: 648.0M(648.0M)->0.0B(648.0M) Survivors: 96.0M->96.0M Heap: 8230.0M(14.6G)->7629.0M(14.6G)] 2013-10-28T16:00:07.417+0100: 21524.305: [GC pause (young) [Eden: 648.0M(648.0M)->0.0B(648.0M) Survivors: 96.0M->96.0M Heap: 8277.0M(14.6G)->7634.9M(14.6G)] 2013-10-28T16:00:09.160+0100: 21526.048: [GC pause (young) [Eden: 648.0M(648.0M)->0.0B(648.0M) Survivors: 96.0M->96.0M Heap: 8282.9M(14.6G)->7659.5M(14.6G)] 2013-10-28T16:00:10.519+0100: 21527.407: [GC pause (young) [Eden: 648.0M(648.0M)->0.0B(648.0M) Survivors: 96.0M->96.0M Heap: 8307.5M(14.6G)->7675.9M(14.6G)] 2013-10-28T16:00:11.861+0100: 21528.749: [GC pause (young) [Eden: 648.0M(648.0M)->0.0B(648.0M) Survivors: 96.0M->96.0M Heap: 8323.9M(14.6G)->7724.2M(14.6G)] 2013-10-28T16:00:13.625+0100: 21530.513: [GC pause (young) [Eden: 648.0M(648.0M)->0.0B(648.0M) Survivors: 96.0M->96.0M Heap: 8372.2M(14.6G)->7774.0M(14.6G)] 2013-10-28T16:00:19.956+0100: 21536.844: [GC pause (young) [Eden: 648.0M(648.0M)->0.0B(6368.0M) Survivors: 96.0M->88.0M Heap: 8422.0M(14.6G)->7768.2M(14.6G)] 2013-10-28T16:01:04.700+0100: 21581.588: [GC pause (young) [Eden: 6368.0M(6368.0M)->0.0B(6208.0M) Survivors: 88.0M->176.0M Heap: 13.8G(14.6G)->7858.4M(14.6G)] Predicted and real pause-times for those collects are well below the pause-target (~120ms vs 300ms) and mixed collects have stopped, is there another reason to keep young-gen small during that time? I can provide the full log if it helps. 
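The `[Eden: ...]` fields in lines like the ones above can be extracted mechanically when charting young-gen sizing over a full log (a sketch only; the regex assumes the exact PrintGCDetails shape shown in this thread, and the class name is made up):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Pulls eden used-before, capacity-before and the new target capacity out
// of a G1 log fragment such as "Eden: 3784.0M(3784.0M)->0.0B(3728.0M)".
class EdenParser {
    private static final Pattern EDEN = Pattern.compile(
        "Eden: ([0-9.]+)([BKMG])\\(([0-9.]+)([BKMG])\\)->"
        + "([0-9.]+)([BKMG])\\(([0-9.]+)([BKMG])\\)");

    // Normalize a value with its unit suffix to megabytes.
    static double toMB(double v, String unit) {
        switch (unit) {
            case "B": return v / (1024.0 * 1024.0);
            case "K": return v / 1024.0;
            case "M": return v;
            case "G": return v * 1024.0;
            default:  throw new IllegalArgumentException(unit);
        }
    }

    /** Returns {usedBeforeMB, capacityBeforeMB, capacityAfterMB}, or null. */
    static double[] parse(String line) {
        Matcher m = EDEN.matcher(line);
        if (!m.find()) return null;
        return new double[] {
            toMB(Double.parseDouble(m.group(1)), m.group(2)),
            toMB(Double.parseDouble(m.group(3)), m.group(4)),
            toMB(Double.parseDouble(m.group(7)), m.group(8)),
        };
    }
}
```

Plotting the third value per pause makes the drop to 648M after the mixed phase, and the late jump back up, immediately visible.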
Relevant parameters: /opt/jdk1.7.0_45/bin/java -Xmx15000M -Xms15000M -XX:+UseG1GC -XX:MaxGCPauseMillis=300 -XX:G1HeapRegionSize=8m -XX:+ParallelRefProcEnabled -XX:InitiatingHeapOccupancyPercent=64 -XX:G1ReservePercent=5 -XX:G1MixedGCLiveThresholdPercent=70 -XX:G1HeapWastePercent=5 I have been playing around with the percentages because we had too many inefficient concurrent cycles and heap is rather tight. I manually set the region-size to 8MB because the heap-size is close to 16GB. Any ideas? Wolfgang Pedot From cconroy at squareup.com Mon Oct 28 13:20:28 2013 From: cconroy at squareup.com (Chris Conroy) Date: Mon, 28 Oct 2013 16:20:28 -0400 Subject: GC Time by cause/phase Message-ID: (I tried sending this to hotspot-dev 5 days ago but it still has not been approved by the moderator. Perhaps this is the more appropriate list?) I'm trying to get detailed GC timing metrics. In particular, I'm interested in tracking time spent in stop the world GC vs parallel GC in my application. I was hoping that the GarbageCollectionNotificationInfo would give me what I need, but I'm having some trouble. I can get total wallclock time per collector by polling the Garbage Collector MX Beans or watching the corresponding notifications. However, I'd like to be able to record time spent in the different phases within the collector (e.g. rescan stop-the-world vs. concurrent sweep). I see that recently GC Cause tracing was fixed for CMS in hotspot ( http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=8008500). I tried testing jdk7u45 to see if I could get the corresponding cause information in the GarbageCollectionNotificationInfo MBean notification. Confusingly, the cause for CMS runs reported via this notification is always "No GC" even though running the same code with -XX:+PrintGCDetails shows me the various CMS phases like Initial Mark or Concurrent Mark. 
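For reference, the notification wiring in question looks like this (a sketch using the `com.sun.management` API; it prints whatever cause and action the VM propagates, so on the builds discussed here CMS cycles may still report "No GC"):

```java
import com.sun.management.GarbageCollectionNotificationInfo;
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import javax.management.NotificationEmitter;
import javax.management.NotificationListener;
import javax.management.openmbean.CompositeData;

// Registers a listener on every collector bean; each collection emits a
// notification carrying name, action ("end of minor GC" / "end of major
// GC"), cause, and duration.
class GcWatcher {
    static void install() {
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            NotificationEmitter emitter = (NotificationEmitter) gc;
            NotificationListener listener = (notification, handback) -> {
                if (GarbageCollectionNotificationInfo.GARBAGE_COLLECTION_NOTIFICATION
                        .equals(notification.getType())) {
                    GarbageCollectionNotificationInfo info =
                        GarbageCollectionNotificationInfo.from(
                            (CompositeData) notification.getUserData());
                    System.out.println(info.getGcName()
                        + " action=" + info.getGcAction()
                        + " cause=" + info.getGcCause()
                        + " duration=" + info.getGcInfo().getDuration() + "ms");
                }
            };
            emitter.addNotificationListener(listener, null, null);
        }
    }
}
```

The `getGcAction()` field at least distinguishes minor from major collections even when the cause is unhelpful; per-phase CMS timings (initial mark vs. concurrent sweep) are not exposed through this interface.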
I'm not familiar with the hotspot code base, but it would seem that the cause should propagate up through gcNotifier.cpp and into this notification, but there is definitely a disconnect between the verbose GC logs and the notifications I get. It doesn't look like jstat or any of the other standard tools give this information, but it would be really valuable to understand how much time we've been completely stopped vs. doing parallel GC as the former has important implications for an application trying to service requests on a tight deadline. Is this a bug in what hotspot sends to GarbageCollectionNotificationInfo? Is there some other programmatic way of getting this information, or do I just need to tail and parse the JVM GC Logs? -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20131028/792de98f/attachment.html From thomas.schatzl at oracle.com Wed Oct 30 08:14:28 2013 From: thomas.schatzl at oracle.com (Thomas Schatzl) Date: Wed, 30 Oct 2013 16:14:28 +0100 Subject: G1 collector stays on small young-generation size longer than expected after concurrent cycle In-Reply-To: <526E87C7.8050708@finkzeit.at> References: <526E87C7.8050708@finkzeit.at> Message-ID: <1383146068.2824.7.camel@cirrus> Hi, On Mon, 2013-10-28 at 16:50 +0100, Wolfgang Pedot wrote: > Hello, > > I have spend quite some time tuning G1 for my use case now and I am > generally pleased with the results (ignoring PermGen for now). I do have > a question of understanding about the automatic sizing of > young-generation though, here are parts of my GC-log (only > Date/Timestamp and sizes): > > > 2013-10-28T15:59:44.685+0100: 21501.573: [GC pause (young) > [Eden: 648.0M(648.0M)->0.0B(648.0M) Survivors: 96.0M->96.0M Heap: > 7911.3M(14.6G)->7293.7M(14.6G)] >[...] 
> 2013-10-28T16:01:04.700+0100: 21581.588: [GC pause (young) > [Eden: 6368.0M(6368.0M)->0.0B(6208.0M) Survivors: 88.0M->176.0M > Heap: 13.8G(14.6G)->7858.4M(14.6G)] > > Predicted and real pause-times for those collects are well below the > pause-target (~120ms vs 300ms) and mixed collects have stopped, is there > another reason to keep young-gen small during that time? I can provide > the full log if it helps. This seriously looks like a bug. Unfortunately in that area of the code (I think around G1CollectorPolicy::update_young_list_target_length()) there is absolutely no good logging. Could you try running with -XX:+PrintAdaptiveSizePolicy and send the relevant log snippet? Maybe it gives some useful information at the end of mixed gcs. > Relevant parameters: > > /opt/jdk1.7.0_45/bin/java -Xmx15000M -Xms15000M -XX:+UseG1GC > -XX:MaxGCPauseMillis=300 -XX:G1HeapRegionSize=8m > -XX:+ParallelRefProcEnabled -XX:InitiatingHeapOccupancyPercent=64 > -XX:G1ReservePercent=5 -XX:G1MixedGCLiveThresholdPercent=70 > -XX:G1HeapWastePercent=5 > > I have been playing around with the percentages because we had to many > inefficient concurrent cycles and heap is rather tight. > I manually set the region-size to 8MB because the heap-size is close to > 16GB. The defaults are typically somewhat conservative, and should have no impact on this behavior. Thanks, Thomas From wolfgang.pedot at finkzeit.at Wed Oct 30 11:19:22 2013 From: wolfgang.pedot at finkzeit.at (Wolfgang Pedot) Date: Wed, 30 Oct 2013 19:19:22 +0100 Subject: G1 out of memory behaviour Message-ID: <52714DAA.6020605@finkzeit.at> Hi (again), yesterday I had a pretty bad out-of-memory situation (Heap was completely full); unfortunately the VM was so unresponsive that I could not find out what the original problem was, but I suspect there was a single thread allocating memory in an endless loop. We have had that situation once before due to a scripting error and with CMS the system was recoverable by terminating that thread. 
The situation yesterday was only recoverable by terminating the VM because it was so extremely unresponsive. Heap-Usage jumped from a normal 6-7GB in old-gen to the full 14.6GB within minutes and CPU usage toggled between 100% (1 core active) and 1200% (all 12 cores active) in the end. There appear to be some unusually large humongous-allocations in the log (40-100MB); normally we should not have objects of this size, so I am guessing these are huge arrays. I managed to get the server down semi-gracefully but it took a very long time to clean up. In the end the gclog was filled with collects like this: 2013-10-29T16:58:44.561+0100: 111441.449: [GC pause (young) Desired survivor size 50331648 bytes, new threshold 15 (max 15) 111441.450: [G1Ergonomics (CSet Construction) start choosing CSet, _pending_cards: 0, predicted base time: 38.89 ms, remaining time: 261.11 ms, target pause time: 300.00 ms] 111441.450: [G1Ergonomics (CSet Construction) add young regions to CSet, eden: 0 regions, survivors: 0 regions, predicted young region time: 0.00 ms] 111441.450: [G1Ergonomics (CSet Construction) finish choosing CSet, eden: 0 regions, survivors: 0 regions, old: 0 regions, predicted pause time: 38.89 ms, target pause time: 300.00 ms] 111441.503: [G1Ergonomics (Heap Sizing) attempt heap expansion, reason: recent GC overhead higher than threshold after GC, recent GC overhead: 92.91 %, threshold: 10.00 %, uncommitted: 0 bytes, calculated expansion amount: 0 bytes (20.00 %)] , 0.0535920 secs] [Parallel Time: 51.0 ms, GC Workers: 12] [GC Worker Start (ms): 111441449.7 111441449.7 111441449.7 111441449.7 111441449.7 111441449.7 111441449.8 111441449.8 111441449.8 111441449.8 111441449.8 111441449.8 Min: 111441449.7, Avg: 111441449.7, Max: 111441449.8, Diff: 0.1] [Ext Root Scanning (ms): 31.7 40.4 42.6 31.6 31.9 34.1 32.4 39.8 31.7 40.4 31.7 38.0 Min: 31.6, Avg: 35.5, Max: 42.6, Diff: 11.0, Sum: 426.3] [SATB Filtering (ms): 0.0 0.0 0.0 0.3 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 Min: 0.0, Avg: 
0.0, Max: 0.3, Diff: 0.3, Sum: 0.3] [Update RS (ms): 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.0] [Processed Buffers: 1 0 0 0 0 0 0 0 0 0 0 0 Min: 0, Avg: 0.1, Max: 1, Diff: 1, Sum: 1] [Scan RS (ms): 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.0] [Object Copy (ms): 2.5 3.2 8.2 2.5 2.5 1.2 2.2 3.1 2.4 3.2 2.5 3.0 Min: 1.2, Avg: 3.1, Max: 8.2, Diff: 6.9, Sum: 36.6] [Termination (ms): 16.7 7.2 0.0 16.5 16.4 15.5 16.2 7.8 16.7 7.2 16.6 9.7 Min: 0.0, Avg: 12.2, Max: 16.7, Diff: 16.7, Sum: 146.4] [Termination Attempts: 1 1 1 1 1 1 1 1 1 1 1 1 Min: 1, Avg: 1.0, Max: 1, Diff: 0, Sum: 12] [GC Worker Other (ms): 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.1] [GC Worker Total (ms): 50.9 50.9 50.8 50.8 50.8 50.8 50.8 50.8 50.8 50.8 50.8 50.8 Min: 50.8, Avg: 50.8, Max: 50.9, Diff: 0.1, Sum: 609.8] [GC Worker End (ms): 111441500.6 111441500.6 111441500.6 111441500.6 111441500.6 111441500.6 111441500.6 111441500.6 111441500.6 111441500.6 111441500.6 111441500.6 Min: 111441500.6, Avg: 111441500.6, Max: 111441500.6, Diff: 0.0] [Code Root Fixup: 0.0 ms] [Clear CT: 0.1 ms] [Other: 2.6 ms] [Choose CSet: 0.0 ms] [Ref Proc: 1.0 ms] [Ref Enq: 0.1 ms] [Free CSet: 0.0 ms] [Eden: 0.0B(744.0M)->0.0B(744.0M) Survivors: 0.0B->0.0B Heap: 14.6G(14.6G)->14.6G(14.6G)] [Times: user=0.61 sys=0.00, real=0.05 secs] I read this output as "not a single byte available anywhere". What puzzles me is why there has not been a single visible OutOfMemoryError during the whole time while there are a whole bunch of different exceptions in the log. If the problem was a single thread, an OOM could have terminated it. This application has been running for years (several weeks since the last update) and there has only been the one OOM situation before. Is there any documentation available on what triggers an OOM with G1? 
regards Wolfgang Pedot From thomas.schatzl at oracle.com Thu Oct 31 05:19:14 2013 From: thomas.schatzl at oracle.com (Thomas Schatzl) Date: Thu, 31 Oct 2013 13:19:14 +0100 Subject: G1 out of memory behaviour In-Reply-To: <52714DAA.6020605@finkzeit.at> References: <52714DAA.6020605@finkzeit.at> Message-ID: <1383221954.2892.76.camel@cirrus> Hi, On Wed, 2013-10-30 at 19:19 +0100, Wolfgang Pedot wrote: > Hi (again), > > yesterday I had a pretty bad out-of memory situation (Heap was > completely full), unfortunately the VM was so unresponsive that I could > not find out what the original problem was but I suspect there was a > single thread allocating memory in an endless loop. We have had that > situation once before due to a scripting error and with CMS the system > was recoverable by terminating that thread. > > The situation yesterday was only recoverable by terminating the VM > because it was so extremely unresponsive. > > Heap-Usage jumped from a normal 6-7GB in old-gen to the full 14.6GB > within minutes and cpu usage toggled between 100% (1 core active) and Full GC :) > 1200% (all 12 cores active) in the end. There appear to be some Young GC. > unusually large humongous-allocations in the log (40-100MB) normally we > should not have objects these sizes, I am guessing these are huge arrays. > > I managed to get the server down semi-gracefully but it took a very long > time to clean up. 
In the end the gclog was filled with collects like this: > > 2013-10-29T16:58:44.561+0100: 111441.449: [GC pause (young) > Desired survivor size 50331648 bytes, new threshold 15 (max 15) > 111441.450: [G1Ergonomics (CSet Construction) start choosing CSet, > _pending_cards: 0, predicted base time: 38.89 ms, remaining time: 261.11 > ms, target pause time: 300.00 ms] > 111441.450: [G1Ergonomics (CSet Construction) add young regions to > CSet, eden: 0 regions, survivors: 0 regions, predicted young region > time: 0.00 ms] > 111441.450: [G1Ergonomics (CSet Construction) finish choosing CSet, > eden: 0 regions, survivors: 0 regions, old: 0 regions, predicted pause > time: 38.89 ms, target pause time: 300.00 ms] > 111441.503: [G1Ergonomics (Heap Sizing) attempt heap expansion, > reason: recent GC overhead higher than threshold after GC, recent GC > overhead: 92.91 %, threshold: 10.00 %, uncommitted: 0 bytes, calculated > expansion amount: 0 bytes (20.00 %)] > , 0.0535920 secs] > [...] > [Eden: 0.0B(744.0M)->0.0B(744.0M) Survivors: 0.0B->0.0B Heap: > 14.6G(14.6G)->14.6G(14.6G)] > [Times: user=0.61 sys=0.00, real=0.05 secs] > > > I read this output as "not a single byte available anywhere". Yes. > What puzzles me is why there has not been a single visible > OutOfMemoryError during the hole time while there are a whole bunch of > different exceptions in the log. If the problem was a single thread an > OOM could have terminated it. This application has been running for > years (several weeks since the last update) and there has only been the > one OOM situation before. The cause for this behavior is likely the large object/LOB. So the application allocates this LOB, does something, and additional allocations trigger the full gc because the heap is completely full. This full gc can reclaim some space (there's no log output after the full gc). This reclaimed space is large enough for G1 to continue for a little while (i.e. 
the GC thinks everything is "okay"), however with only a very small young gen, so these young GCs likely follow very closely upon each other (explaining the high gc overhead of 92%), but making some progress at least. After a short while heap is full again, starting the cycle. Since some progress is made all the time, there is no OOME. > > Is there any documentation available on what triggers an OOM with G1? Does above explanation fit the situation/log you have? Thomas From wolfgang.pedot at finkzeit.at Thu Oct 31 05:53:27 2013 From: wolfgang.pedot at finkzeit.at (Wolfgang Pedot) Date: Thu, 31 Oct 2013 13:53:27 +0100 Subject: G1 out of memory behaviour In-Reply-To: <1383221954.2892.76.camel@cirrus> References: <52714DAA.6020605@finkzeit.at> <1383221954.2892.76.camel@cirrus> Message-ID: <527252C7.7080705@finkzeit.at> Hi, thanks for your explanations and effort, see my additional comments below. > >> What puzzles me is why there has not been a single visible >> OutOfMemoryError during the hole time while there are a whole bunch of >> different exceptions in the log. If the problem was a single thread an >> OOM could have terminated it. This application has been running for >> years (several weeks since the last update) and there has only been the >> one OOM situation before. > > The cause for this behavior is likely the large object/LOB. > > So the application allocates this LOB, does something, and additional > allocations trigger the full gc because the heap is completely full. > > This full gc can reclaim some space (there's no log output after the > full gc). > > This reclaimed space is large enough for G1 to continue for a little > while (i.e. the GC thinks everything is "okay"), however with only a > very small young gen, so these young GCs likely follow very closely upon > each other (explaining the high gc overhead of 92%), but making some > progress at least. 
As I read the logs, the young-gen is actually 0B and the CSet in those collects consists of 0 regions, so they do not seem to help much. There is some progress during the full-GCs but because the values are in GB it's not possible to get exact numbers. I can see up to ~20 young-collects per second over a quite long time and the GC-overhead reaches values above 99.5%. I guess the extreme sluggishness comes from the fact that G1 scaled the young-gen down to 0 regions in this case. CMS with a fixed eden-size would probably throw an OOM when the latest survivors no longer fit into old-gen. I will try to replicate this behaviour on the test-system (much smaller heap) and see what happens there. Here are some of the full-GCs; the first one was in the morning (caused by perm-gen) and that's what normally happens. As you can see the fun begins around 16:32 and there is not much time in between those collects to do anything else. 2013-10-29T08:51:39.412+0100: 82216.300: [Full GC 12G->5801M(14G), 15.0086860 secs] 2013-10-29T16:32:52.984+0100: 109889.872: [Full GC 14G->10G(14G), 21.9944800 secs] 2013-10-29T16:33:25.881+0100: 109922.769: [Full GC 14G->12G(14G), 23.9814220 secs] 2013-10-29T16:33:55.213+0100: 109952.101: [Full GC 14G->13G(14G), 24.5648510 secs] 2013-10-29T16:34:23.449+0100: 109980.337: [Full GC 14G->13G(14G), 25.0227810 secs] 2013-10-29T16:34:50.487+0100: 110007.375: [Full GC 14G->13G(14G), 24.7523580 secs] 2013-10-29T16:35:22.647+0100: 110039.535: [Full GC 14G->13G(14G), 25.5301280 secs] 2013-10-29T16:35:50.341+0100: 110067.229: [Full GC 14G->13G(14G), 26.2003390 secs] 2013-10-29T16:36:18.202+0100: 110095.089: [Full GC 14G->14G(14G), 25.1388210 secs] 2013-10-29T16:36:49.917+0100: 110126.805: [Full GC 14G->14G(14G), 25.6623660 secs] 2013-10-29T16:37:24.125+0100: 110161.013: [Full GC 14G->14G(14G), 26.1288850 secs] 2013-10-29T16:37:51.479+0100: 110188.367: [Full GC 14G->14G(14G), 25.5675010 secs] 2013-10-29T16:38:18.206+0100: 110215.094: [Full GC 14G->14G(14G), 
25.4115790 secs] 2013-10-29T16:38:44.440+0100: 110241.328: [Full GC 14G->14G(14G), 25.2001720 secs] 2013-10-29T16:39:10.771+0100: 110267.659: [Full GC 14G->14G(14G), 25.0099190 secs] 2013-10-29T16:39:36.640+0100: 110293.528: [Full GC 14G->14G(14G), 24.8488030 secs] 2013-10-29T16:40:02.489+0100: 110319.377: [Full GC 14G->14G(14G), 26.0805850 secs] 2013-10-29T16:40:29.287+0100: 110346.175: [Full GC 14G->14G(14G), 27.2401450 secs] . . . Would it be feasible to use the GC-overhead to decide that its time for an OOM-Error? Wolfgang From thomas.schatzl at oracle.com Thu Oct 31 06:20:42 2013 From: thomas.schatzl at oracle.com (Thomas Schatzl) Date: Thu, 31 Oct 2013 14:20:42 +0100 Subject: G1 out of memory behaviour In-Reply-To: <527252C7.7080705@finkzeit.at> References: <52714DAA.6020605@finkzeit.at> <1383221954.2892.76.camel@cirrus> <527252C7.7080705@finkzeit.at> Message-ID: <1383225642.2892.101.camel@cirrus> Hi, On Thu, 2013-10-31 at 13:53 +0100, Wolfgang Pedot wrote: > Hi, > > thanks for your explanations and effort, see my additional comments below. > > > > >> What puzzles me is why there has not been a single visible > >> OutOfMemoryError during the hole time while there are a whole bunch of > >> different exceptions in the log. If the problem was a single thread an > >> OOM could have terminated it. This application has been running for > >> years (several weeks since the last update) and there has only been the > >> one OOM situation before. > > > > The cause for this behavior is likely the large object/LOB. > > > > So the application allocates this LOB, does something, and additional > > allocations trigger the full gc because the heap is completely full. > > > > This full gc can reclaim some space (there's no log output after the > > full gc). > > > > This reclaimed space is large enough for G1 to continue for a little > > while (i.e. 
the GC thinks everything is "okay"), however with only a
> > very small young gen, so these young GCs likely follow very closely upon
> > each other (explaining the high gc overhead of 92%), but making some
> > progress at least.

> As I read the logs the young-gen is actually 0B and the CSet in those
> collects consists of 0 regions, so they do not seem to help much. There
> is some progress during the full-GCs but because the values are in GB
> it's not possible to get exact numbers. I can see up to ~20
> young-collects per second over quite a long time and the GC-overhead
> reaches values above 99.5%.

Looking at this line from the young gc output you gave:

[Eden: 0.0B(744.0M)->0.0B(744.0M) Survivors: 0.0B->0.0B Heap: 14.6G(14.6G)->14.6G(14.6G)]

It means that eden capacity is 744M (i.e. there is eden space available), but there is nothing in it. Other than that the heap seems full (14.6G of 14.6G used). And no CSet or survivors, but that is not surprising given that the occupancy of the eden regions before the collection is zero bytes.

Another possible explanation (given that I do not have enough log information): G1 tries to allocate a LOB, but fails (seen by the failing expansion requests). It then starts a young GC in the hope that the young GC creates a large enough contiguous memory region. That does not work out since the heap is full anyway. For some reason the LOB can also not occupy the regions occupied by the young gen (likely because of fragmentation of the young gen). After that a full gc starts. That one manages to free enough memory, I guess - after all there are the 744M of the young gen, and as the heap gets compacted, you will get a contiguous area of memory. Otherwise you would get an OOME after a few unsuccessful attempts at full gcs in a row.

So the application basically seems to allocate LOBs in a tight loop, do something on it, and repeat.
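The allocation pattern Thomas describes could be sketched roughly as below. All names and sizes here are illustrative assumptions of mine, not values taken from Wolfgang's application; the only G1-specific fact used is that an allocation of at least half a region is treated as "humongous" and must be placed in contiguous old-gen regions.

```java
// Sketch of the hypothesized LOB pattern: allocate a large object in a
// tight loop, work on it, drop it, repeat. With the heap nearly full,
// each humongous allocation forces G1 through expansion attempts, a
// young GC, and finally a full GC before it would give up with an OOME.
public class LobPattern {

    // G1 treats an allocation of at least half a region as humongous.
    static boolean isHumongous(long objectBytes, long regionBytes) {
        return objectBytes >= regionBytes / 2;
    }

    public static void main(String[] args) {
        // Plausible region size for a ~14.6G heap (heap size / 2048,
        // rounded to a power of two) -- an assumption, not from the logs.
        long regionBytes = 8L * 1024 * 1024;

        for (int i = 0; i < 3; i++) {
            byte[] lob = new byte[16 * 1024 * 1024]; // humongous with 8M regions
            lob[0] = 1;        // "do something on it"
            // lob becomes unreachable here; the next iteration re-allocates
        }

        System.out.println(isHumongous(16L * 1024 * 1024, regionBytes)); // true
    }
}
```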
If you think that might still be wrong, please provide a complete sequence of log messages showing both young and full GCs.

> Would it be feasible to use the GC-overhead to decide that it's time for
> an OOM-Error?

What do the others think? It seems reasonable under the right conditions. Maybe you can file a request for enhancement on bugs.openjdk.java.net/bugs.sun.com?

Thomas

From wolfgang.pedot at finkzeit.at  Wed Oct 30 08:30:27 2013
From: wolfgang.pedot at finkzeit.at (Wolfgang Pedot)
Date: Wed, 30 Oct 2013 16:30:27 +0100
Subject: G1 collector stays on small young-generation size longer than expected after concurrent cycle
In-Reply-To: <1383146068.2824.7.camel@cirrus>
References: <526E87C7.8050708@finkzeit.at> <1383146068.2824.7.camel@cirrus>
Message-ID: <52712613.7050705@finkzeit.at>

Thanks for the reply. I have that exact log with AdaptiveSizePolicy enabled, and I have also added it as an attachment for convenience. Here is the log, beginning with the initial-mark and ending with the decision to grow the young-generation. I have noticed that this does not always happen; sometimes young-gen is grown immediately after the mixed collects are done (see second attachment "good.txt").
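Returning to the GC-overhead question discussed earlier in the thread: an application can already approximate such a check itself by sampling cumulative collection time from GarbageCollectorMXBean and failing fast when overhead stays near 100%. This is only a sketch of that idea, not an existing VM feature; the class and method names are mine, and the 98% threshold borrows the default of the parallel collector's GCTimeLimit.

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// Application-level approximation of a GC-overhead limit: compare GC time
// accumulated over a sampling interval to the wall-clock time of that
// interval, and raise an OOM-style error when the ratio is extreme.
public class GcOverheadMonitor {

    // Pure helper: percentage of wall time spent in GC over an interval.
    static double overheadPercent(long gcMillisDelta, long wallMillisDelta) {
        if (wallMillisDelta <= 0) return 0.0;
        return 100.0 * gcMillisDelta / wallMillisDelta;
    }

    // Cumulative GC time across all collectors, in milliseconds.
    static long totalGcMillis() {
        long total = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            long t = gc.getCollectionTime(); // -1 if unsupported
            if (t > 0) total += t;
        }
        return total;
    }

    public static void main(String[] args) throws InterruptedException {
        long gcBefore = totalGcMillis();
        long wallBefore = System.currentTimeMillis();

        Thread.sleep(200); // stand-in for a slice of application work

        double pct = overheadPercent(totalGcMillis() - gcBefore,
                                     System.currentTimeMillis() - wallBefore);
        if (pct > 98.0) {
            // Fail fast instead of limping along at >99% GC overhead.
            throw new OutOfMemoryError("GC overhead " + pct + "% over sampling interval");
        }
        System.out.println("GC overhead: " + pct + "%");
    }
}
```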
regards Wolfgang 2013-10-28T15:58:33.923+0100: 21430.811: [GC pause (young) (initial-mark) Desired survivor size 276824064 bytes, new threshold 7 (max 15) - age 1: 55460056 bytes, 55460056 total - age 2: 79721520 bytes, 135181576 total - age 3: 53921760 bytes, 189103336 total - age 4: 24968016 bytes, 214071352 total - age 5: 42768296 bytes, 256839648 total - age 6: 15861352 bytes, 272701000 total - age 7: 28221288 bytes, 300922288 total 21430.811: [G1Ergonomics (CSet Construction) start choosing CSet, _pending_cards: 200850, predicted base time: 103.40 ms, remaining time: 196.60 ms, target pause time: 300.00 ms] 21430.811: [G1Ergonomics (CSet Construction) add young regions to CSet, eden: 473 regions, survivors: 51 regions, predicted young region time: 76.73 ms] 21430.811: [G1Ergonomics (CSet Construction) finish choosing CSet, eden: 473 regions, survivors: 51 regions, old: 0 regions, predicted pause time: 180.13 ms, target pause time: 300.00 ms] , 0.2397300 secs] [Parallel Time: 232.7 ms, GC Workers: 12] [GC Worker Start (ms): 21430811.5 21430811.6 21430811.7 21430811.8 21430811.8 21430811.8 21430811.9 21430811.9 21430811.9 21430812.0 21430812.0 21430812.0 Min: 21430811.5, Avg: 21430811.8, Max: 21430812.0, Diff: 0.4] [Ext Root Scanning (ms): 45.3 46.2 103.6 46.3 46.8 45.7 46.4 63.9 66.3 45.9 47.4 59.8 Min: 45.3, Avg: 55.3, Max: 103.6, Diff: 58.3, Sum: 663.5] [Update RS (ms): 46.4 46.2 0.0 46.5 46.0 46.8 46.7 25.2 26.1 46.5 45.8 29.8 Min: 0.0, Avg: 37.7, Max: 46.8, Diff: 46.8, Sum: 451.9] [Processed Buffers: 109 107 0 99 114 95 109 92 74 108 107 112 Min: 0, Avg: 93.8, Max: 114, Diff: 114, Sum: 1126] [Scan RS (ms): 0.3 0.3 0.1 0.1 0.2 0.3 0.0 0.3 0.2 0.5 0.4 0.2 Min: 0.0, Avg: 0.2, Max: 0.5, Diff: 0.4, Sum: 2.9] [Object Copy (ms): 102.4 101.8 128.5 101.3 101.3 101.4 101.0 104.7 101.5 101.2 100.4 104.2 Min: 100.4, Avg: 104.1, Max: 128.5, Diff: 28.1, Sum: 1249.6] [Termination (ms): 37.9 37.9 0.0 37.9 37.9 37.9 37.9 37.9 37.9 37.9 37.9 37.9 Min: 0.0, Avg: 34.7, Max: 
37.9, Diff: 37.9, Sum: 416.5] [Termination Attempts: 36 45 1 37 33 43 1 35 33 32 36 23 Min: 1, Avg: 29.6, Max: 45, Diff: 44, Sum: 355] [GC Worker Other (ms): 0.1 0.1 0.0 0.0 0.1 0.1 0.1 0.0 0.0 0.1 0.1 0.1 Min: 0.0, Avg: 0.1, Max: 0.1, Diff: 0.1, Sum: 0.6] [GC Worker Total (ms): 232.4 232.3 232.2 232.1 232.1 232.1 232.1 232.0 232.0 232.0 231.9 231.9 Min: 231.9, Avg: 232.1, Max: 232.4, Diff: 0.5, Sum: 2785.1] [GC Worker End (ms): 21431043.9 21431043.9 21431043.9 21431043.9 21431043.9 21431043.9 21431043.9 21431043.9 21431043.9 21431043.9 21431043.9 21431043.9 Min: 21431043.9, Avg: 21431043.9, Max: 21431043.9, Diff: 0.1] [Code Root Fixup: 0.0 ms] [Clear CT: 0.7 ms] [Other: 6.3 ms] [Choose CSet: 0.0 ms] [Ref Proc: 3.4 ms] [Ref Enq: 0.1 ms] [Free CSet: 0.6 ms] [Eden: 3784.0M(3784.0M)->0.0B(3728.0M) Survivors: 408.0M->408.0M Heap: 13.5G(14.6G)->10096.0M(14.6G)] [Times: user=2.66 sys=0.02, real=0.24 secs] 2013-10-28T15:58:34.163+0100: 21431.051: [GC concurrent-root-region-scan-start] 2013-10-28T15:58:34.214+0100: 21431.101: [GC concurrent-root-region-scan-end, 0.0506100 secs] 2013-10-28T15:58:34.214+0100: 21431.101: [GC concurrent-mark-start] 2013-10-28T15:58:35.986+0100: 21432.874: [GC concurrent-mark-end, 1.7727150 secs] 2013-10-28T15:58:35.994+0100: 21432.881: [GC remark 2013-10-28T15:58:36.008+0100: 21432.896: [GC ref-proc, 0.1192260 secs], 0.2022820 secs] [Times: user=1.47 sys=0.00, real=0.20 secs] 2013-10-28T15:58:36.198+0100: 21433.086: [GC cleanup 10G->9836M(14G), 0.0347900 secs] [Times: user=0.28 sys=0.00, real=0.03 secs] 2013-10-28T15:58:36.234+0100: 21433.121: [GC concurrent-cleanup-start] 2013-10-28T15:58:36.234+0100: 21433.122: [GC concurrent-cleanup-end, 0.0004450 secs] 2013-10-28T15:59:19.111+0100: 21475.998: [GC pause (young) Desired survivor size 272629760 bytes, new threshold 6 (max 15) - age 1: 63592160 bytes, 63592160 total - age 2: 38460008 bytes, 102052168 total - age 3: 63355736 bytes, 165407904 total - age 4: 44680008 bytes, 210087912 total - age 
5: 22742184 bytes, 232830096 total - age 6: 42023016 bytes, 274853112 total - age 7: 15800936 bytes, 290654048 total 21475.999: [G1Ergonomics (CSet Construction) start choosing CSet, _pending_cards: 180993, predicted base time: 98.72 ms, remaining time: 201.28 ms, target pause time: 300.00 ms] 21475.999: [G1Ergonomics (CSet Construction) add young regions to CSet, eden: 466 regions, survivors: 51 regions, predicted young region time: 80.67 ms] 21475.999: [G1Ergonomics (CSet Construction) finish choosing CSet, eden: 466 regions, survivors: 51 regions, old: 0 regions, predicted pause time: 179.39 ms, target pause time: 300.00 ms] 21476.176: [G1Ergonomics (Mixed GCs) start mixed GCs, reason: candidate old regions available, candidate old regions: 635 regions, reclaimable: 3505373264 bytes (22.29 %), threshold: 5.00 %] , 0.1775420 secs] [Parallel Time: 171.9 ms, GC Workers: 12] [GC Worker Start (ms): 21475999.2 21475999.3 21475999.3 21475999.3 21475999.4 21475999.4 21475999.4 21475999.5 21475999.5 21475999.5 21475999.5 21475999.5 Min: 21475999.2, Avg: 21475999.4, Max: 21475999.5, Diff: 0.3] [Ext Root Scanning (ms): 100.6 45.5 64.1 45.1 45.4 44.9 59.1 46.0 45.5 44.7 44.8 63.3 Min: 44.7, Avg: 54.1, Max: 100.6, Diff: 55.9, Sum: 649.0] [Update RS (ms): 0.0 44.5 24.5 44.8 44.6 45.0 29.6 45.0 44.5 44.8 44.9 25.2 Min: 0.0, Avg: 36.5, Max: 45.0, Diff: 45.0, Sum: 437.4] [Processed Buffers: 0 125 71 113 123 117 89 97 100 118 100 54 Min: 0, Avg: 92.2, Max: 125, Diff: 125, Sum: 1107] [Scan RS (ms): 0.1 0.1 0.3 0.2 0.1 0.2 0.4 0.2 0.3 0.5 0.4 0.1 Min: 0.1, Avg: 0.2, Max: 0.5, Diff: 0.4, Sum: 2.8] [Object Copy (ms): 70.7 81.3 82.4 81.2 81.0 81.2 82.1 79.9 80.9 81.1 81.0 82.5 Min: 70.7, Avg: 80.4, Max: 82.5, Diff: 11.8, Sum: 965.4] [Termination (ms): 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.0] [Termination Attempts: 3 1 4 5 5 3 4 3 4 5 2 2 Min: 1, Avg: 3.4, Max: 5, Diff: 4, Sum: 41] [GC Worker Other (ms): 0.3 0.1 0.1 0.1 0.2 0.1 
0.2 0.0 0.0 0.1 0.2 0.1 Min: 0.0, Avg: 0.1, Max: 0.3, Diff: 0.2, Sum: 1.5] [GC Worker Total (ms): 171.7 171.4 171.4 171.4 171.5 171.4 171.4 171.2 171.2 171.2 171.3 171.2 Min: 171.2, Avg: 171.3, Max: 171.7, Diff: 0.5, Sum: 2056.2] [GC Worker End (ms): 21476170.9 21476170.7 21476170.7 21476170.7 21476170.8 21476170.8 21476170.9 21476170.7 21476170.7 21476170.7 21476170.8 21476170.8 Min: 21476170.7, Avg: 21476170.8, Max: 21476170.9, Diff: 0.2] [Code Root Fixup: 0.0 ms] [Clear CT: 0.8 ms] [Other: 4.8 ms] [Choose CSet: 0.0 ms] [Ref Proc: 2.0 ms] [Ref Enq: 0.1 ms] [Free CSet: 0.6 ms] [Eden: 3728.0M(3728.0M)->0.0B(360.0M) Survivors: 408.0M->384.0M Heap: 12.9G(14.6G)->9559.3M(14.6G)] [Times: user=2.07 sys=0.00, real=0.18 secs] 2013-10-28T15:59:22.778+0100: 21479.666: [GC pause (mixed) Desired survivor size 50331648 bytes, new threshold 1 (max 15) - age 1: 95113344 bytes, 95113344 total - age 2: 33760400 bytes, 128873744 total - age 3: 33282608 bytes, 162156352 total - age 4: 45404280 bytes, 207560632 total - age 5: 36638688 bytes, 244199320 total - age 6: 21839560 bytes, 266038880 total 21479.666: [G1Ergonomics (CSet Construction) start choosing CSet, _pending_cards: 127727, predicted base time: 87.54 ms, remaining time: 212.46 ms, target pause time: 300.00 ms] 21479.666: [G1Ergonomics (CSet Construction) add young regions to CSet, eden: 45 regions, survivors: 48 regions, predicted young region time: 56.91 ms] 21479.667: [G1Ergonomics (CSet Construction) finish adding old regions to CSet, reason: old CSet region num reached max, old: 188 regions, max: 188 regions] 21479.667: [G1Ergonomics (CSet Construction) finish choosing CSet, eden: 45 regions, survivors: 48 regions, old: 188 regions, predicted pause time: 217.53 ms, target pause time: 300.00 ms] 21479.921: [G1Ergonomics (Mixed GCs) continue mixed GCs, reason: candidate old regions available, candidate old regions: 447 regions, reclaimable: 2132693056 bytes (13.56 %), threshold: 5.00 %] , 0.2550780 secs] [Parallel Time: 
243.3 ms, GC Workers: 12] [GC Worker Start (ms): 21479667.1 21479667.1 21479667.1 21479667.2 21479667.2 21479667.2 21479667.3 21479667.3 21479667.3 21479667.3 21479667.4 21479667.4 Min: 21479667.1, Avg: 21479667.3, Max: 21479667.4, Diff: 0.3] [Ext Root Scanning (ms): 45.6 63.8 59.3 45.9 46.0 46.9 46.5 46.0 45.6 45.2 89.0 63.4 Min: 45.2, Avg: 53.6, Max: 89.0, Diff: 43.8, Sum: 643.3] [Update RS (ms): 29.6 10.2 14.7 29.4 28.9 29.5 28.3 28.9 29.3 29.9 0.0 10.1 Min: 0.0, Avg: 22.4, Max: 29.9, Diff: 29.9, Sum: 268.8] [Processed Buffers: 69 32 57 76 54 67 86 69 74 75 0 26 Min: 0, Avg: 57.1, Max: 86, Diff: 86, Sum: 685] [Scan RS (ms): 39.4 39.5 39.7 40.1 39.6 39.8 39.7 39.5 40.2 39.5 5.5 39.6 Min: 5.5, Avg: 36.8, Max: 40.2, Diff: 34.7, Sum: 442.0] [Object Copy (ms): 128.6 129.5 129.3 127.7 128.4 126.8 128.4 128.4 127.8 128.3 148.3 129.7 Min: 126.8, Avg: 130.1, Max: 148.3, Diff: 21.5, Sum: 1561.3] [Termination (ms): 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.0] [Termination Attempts: 1 4 13 12 7 8 8 7 2 8 7 8 Min: 1, Avg: 7.1, Max: 13, Diff: 12, Sum: 85] [GC Worker Other (ms): 0.1 0.1 0.1 0.0 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 Min: 0.0, Avg: 0.1, Max: 0.1, Diff: 0.1, Sum: 1.0] [GC Worker Total (ms): 243.2 243.2 243.2 243.1 243.1 243.0 243.0 243.0 243.0 243.0 242.9 242.9 Min: 242.9, Avg: 243.0, Max: 243.2, Diff: 0.4, Sum: 2916.6] [GC Worker End (ms): 21479910.3 21479910.3 21479910.3 21479910.3 21479910.3 21479910.3 21479910.3 21479910.3 21479910.3 21479910.3 21479910.2 21479910.3 Min: 21479910.2, Avg: 21479910.3, Max: 21479910.3, Diff: 0.1] [Code Root Fixup: 0.0 ms] [Clear CT: 2.0 ms] [Other: 9.8 ms] [Choose CSet: 0.5 ms] [Ref Proc: 2.9 ms] [Ref Enq: 0.1 ms] [Free CSet: 1.6 ms] [Eden: 360.0M(360.0M)->0.0B(648.0M) Survivors: 384.0M->96.0M Heap: 9919.3M(14.6G)->8419.4M(14.6G)] [Times: user=2.77 sys=0.00, real=0.25 secs] 2013-10-28T15:59:30.652+0100: 21487.540: [GC pause (mixed) Desired survivor size 50331648 bytes, new 
threshold 15 (max 15) - age 1: 7901552 bytes, 7901552 total 21487.540: [G1Ergonomics (CSet Construction) start choosing CSet, _pending_cards: 218925, predicted base time: 105.38 ms, remaining time: 194.62 ms, target pause time: 300.00 ms] 21487.540: [G1Ergonomics (CSet Construction) add young regions to CSet, eden: 81 regions, survivors: 12 regions, predicted young region time: 23.27 ms] 21487.541: [G1Ergonomics (CSet Construction) finish adding old regions to CSet, reason: predicted time is too high, predicted time: 1.33 ms, remaining time: 1.08 ms, old: 182 regions, min: 80 regions] 21487.541: [G1Ergonomics (CSet Construction) finish choosing CSet, eden: 81 regions, survivors: 12 regions, old: 182 regions, predicted pause time: 298.92 ms, target pause time: 300.00 ms] 21487.843: [G1Ergonomics (Mixed GCs) continue mixed GCs, reason: candidate old regions available, candidate old regions: 265 regions, reclaimable: 1054790352 bytes (6.71 %), threshold: 5.00 %] , 0.3030120 secs] [Parallel Time: 290.2 ms, GC Workers: 12] [GC Worker Start (ms): 21487541.6 21487541.6 21487541.6 21487541.8 21487541.8 21487541.8 21487541.8 21487541.9 21487541.9 21487541.9 21487541.9 21487541.9 Min: 21487541.6, Avg: 21487541.8, Max: 21487541.9, Diff: 0.4] [Ext Root Scanning (ms): 46.3 45.6 47.1 65.1 46.3 46.1 59.6 46.6 45.4 64.4 97.2 45.8 Min: 45.4, Avg: 54.6, Max: 97.2, Diff: 51.8, Sum: 655.6] [Update RS (ms): 55.1 55.4 55.7 35.0 54.8 54.8 40.6 54.1 55.5 35.2 0.0 54.9 Min: 0.0, Avg: 45.9, Max: 55.7, Diff: 55.7, Sum: 551.1] [Processed Buffers: 95 87 102 72 102 107 104 81 105 93 0 116 Min: 0, Avg: 88.7, Max: 116, Diff: 116, Sum: 1064] [Scan RS (ms): 36.6 36.8 36.3 36.4 36.6 36.6 36.4 36.6 36.7 36.6 16.9 36.9 Min: 16.9, Avg: 35.0, Max: 36.9, Diff: 20.0, Sum: 419.4] [Object Copy (ms): 151.9 151.9 150.7 153.0 151.8 152.2 153.0 152.3 151.9 153.3 175.4 151.9 Min: 150.7, Avg: 154.1, Max: 175.4, Diff: 24.7, Sum: 1849.2] [Termination (ms): 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 Min: 0.0, 
Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.0] [Termination Attempts: 1 1 2 2 1 2 1 2 2 1 1 1 Min: 1, Avg: 1.4, Max: 2, Diff: 1, Sum: 17] [GC Worker Other (ms): 0.1 0.1 0.1 0.0 0.0 0.0 0.1 0.0 0.1 0.1 0.1 0.1 Min: 0.0, Avg: 0.1, Max: 0.1, Diff: 0.1, Sum: 0.9] [GC Worker Total (ms): 290.0 289.9 289.8 289.7 289.7 289.6 289.7 289.6 289.6 289.5 289.6 289.5 Min: 289.5, Avg: 289.7, Max: 290.0, Diff: 0.5, Sum: 3476.2] [GC Worker End (ms): 21487831.6 21487831.5 21487831.5 21487831.4 21487831.5 21487831.4 21487831.5 21487831.4 21487831.5 21487831.5 21487831.5 21487831.5 Min: 21487831.4, Avg: 21487831.5, Max: 21487831.6, Diff: 0.1] [Code Root Fixup: 0.0 ms] [Clear CT: 2.4 ms] [Other: 10.4 ms] [Choose CSet: 0.9 ms] [Ref Proc: 1.8 ms] [Ref Enq: 0.1 ms] [Free CSet: 2.1 ms] [Eden: 648.0M(648.0M)->0.0B(648.0M) Survivors: 96.0M->96.0M Heap: 9067.4M(14.6G)->7480.0M(14.6G)] [Times: user=3.51 sys=0.00, real=0.30 secs] 2013-10-28T15:59:38.671+0100: 21495.559: [GC pause (mixed) Desired survivor size 50331648 bytes, new threshold 15 (max 15) - age 1: 24399800 bytes, 24399800 total - age 2: 4322520 bytes, 28722320 total 21495.559: [G1Ergonomics (CSet Construction) start choosing CSet, _pending_cards: 199259, predicted base time: 103.72 ms, remaining time: 196.28 ms, target pause time: 300.00 ms] 21495.559: [G1Ergonomics (CSet Construction) add young regions to CSet, eden: 81 regions, survivors: 12 regions, predicted young region time: 24.30 ms] 21495.560: [G1Ergonomics (CSet Construction) finish adding old regions to CSet, reason: reclaimable percentage not over threshold, old: 62 regions, max: 188 regions, reclaimable: 784279232 bytes (4.99 %), threshold: 5.00 %] 21495.560: [G1Ergonomics (CSet Construction) finish choosing CSet, eden: 81 regions, survivors: 12 regions, old: 62 regions, predicted pause time: 222.80 ms, target pause time: 300.00 ms] 21495.811: [G1Ergonomics (Mixed GCs) do not continue mixed GCs, reason: reclaimable percentage not over threshold, candidate old regions: 203 
regions, reclaimable: 784279232 bytes (4.99 %), threshold: 5.00 %] , 0.2522200 secs] [Parallel Time: 241.2 ms, GC Workers: 12] [GC Worker Start (ms): 21495560.3 21495560.4 21495560.4 21495560.4 21495560.4 21495560.5 21495560.5 21495560.5 21495560.5 21495560.5 21495560.6 21495560.6 Min: 21495560.3, Avg: 21495560.5, Max: 21495560.6, Diff: 0.3] [Ext Root Scanning (ms): 64.9 45.7 64.9 45.9 45.6 47.1 99.2 59.5 45.4 45.5 45.2 46.0 Min: 45.2, Avg: 54.6, Max: 99.2, Diff: 54.0, Sum: 654.8] [Update RS (ms): 42.9 63.9 43.1 63.8 63.8 63.6 0.0 48.9 63.8 63.6 64.1 63.0 Min: 0.0, Avg: 53.7, Max: 64.1, Diff: 64.1, Sum: 644.4] [Processed Buffers: 70 97 93 92 108 97 0 97 102 100 104 96 Min: 0, Avg: 88.0, Max: 108, Diff: 108, Sum: 1056] [Scan RS (ms): 34.9 35.0 34.9 34.6 34.8 34.9 16.6 34.8 34.8 34.8 34.8 35.0 Min: 16.6, Avg: 33.3, Max: 35.0, Diff: 18.5, Sum: 400.0] [Object Copy (ms): 98.3 96.2 97.9 96.6 96.6 95.2 125.0 97.5 96.8 96.7 96.6 96.7 Min: 95.2, Avg: 99.2, Max: 125.0, Diff: 29.8, Sum: 1190.1] [Termination (ms): 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.0] [Termination Attempts: 11 1 10 8 4 6 9 7 5 1 6 12 Min: 1, Avg: 6.7, Max: 12, Diff: 11, Sum: 80] [GC Worker Other (ms): 0.1 0.1 0.1 0.1 0.0 0.1 0.0 0.1 0.1 0.0 0.1 0.1 Min: 0.0, Avg: 0.1, Max: 0.1, Diff: 0.1, Sum: 0.9] [GC Worker Total (ms): 241.0 241.0 241.0 240.9 240.8 240.9 240.7 240.8 240.8 240.7 240.7 240.7 Min: 240.7, Avg: 240.8, Max: 241.0, Diff: 0.3, Sum: 2890.2] [GC Worker End (ms): 21495801.3 21495801.3 21495801.3 21495801.3 21495801.3 21495801.3 21495801.3 21495801.3 21495801.3 21495801.3 21495801.3 21495801.3 Min: 21495801.3, Avg: 21495801.3, Max: 21495801.3, Diff: 0.1] [Code Root Fixup: 0.3 ms] [Clear CT: 2.6 ms] [Other: 8.1 ms] [Choose CSet: 0.6 ms] [Ref Proc: 2.0 ms] [Ref Enq: 0.1 ms] [Free CSet: 1.2 ms] [Eden: 648.0M(648.0M)->0.0B(648.0M) Survivors: 96.0M->96.0M Heap: 8128.0M(14.6G)->7263.3M(14.6G)] [Times: user=2.93 sys=0.00, real=0.25 secs] 
2013-10-28T15:59:44.685+0100: 21501.573: [GC pause (young) Desired survivor size 50331648 bytes, new threshold 15 (max 15) - age 1: 26676824 bytes, 26676824 total - age 2: 17917840 bytes, 44594664 total - age 3: 3599704 bytes, 48194368 total 21501.574: [G1Ergonomics (CSet Construction) start choosing CSet, _pending_cards: 160602, predicted base time: 101.20 ms, remaining time: 198.80 ms, target pause time: 300.00 ms] 21501.574: [G1Ergonomics (CSet Construction) add young regions to CSet, eden: 81 regions, survivors: 12 regions, predicted young region time: 60.87 ms] 21501.574: [G1Ergonomics (CSet Construction) finish choosing CSet, eden: 81 regions, survivors: 12 regions, old: 0 regions, predicted pause time: 162.08 ms, target pause time: 300.00 ms] , 0.1277600 secs] [Parallel Time: 123.2 ms, GC Workers: 12] [GC Worker Start (ms): 21501573.9 21501574.0 21501574.1 21501574.1 21501574.1 21501574.1 21501574.2 21501574.3 21501574.3 21501574.3 21501574.3 21501574.3 Min: 21501573.9, Avg: 21501574.2, Max: 21501574.3, Diff: 0.4] [Ext Root Scanning (ms): 46.0 45.5 58.9 100.0 63.9 64.1 45.7 45.2 45.4 45.2 45.3 44.8 Min: 44.8, Avg: 54.2, Max: 100.0, Diff: 55.2, Sum: 650.1] [Update RS (ms): 51.6 52.3 38.6 0.0 32.2 31.9 51.4 52.0 52.1 51.9 51.7 52.0 Min: 0.0, Avg: 43.1, Max: 52.3, Diff: 52.3, Sum: 517.8] [Processed Buffers: 77 72 73 0 69 56 79 88 78 73 88 80 Min: 0, Avg: 69.4, Max: 88, Diff: 88, Sum: 833] [Scan RS (ms): 0.6 0.6 0.8 0.0 0.7 0.8 0.9 0.6 0.4 0.8 0.8 0.8 Min: 0.0, Avg: 0.6, Max: 0.9, Diff: 0.9, Sum: 7.6] [Object Copy (ms): 20.1 19.9 19.9 22.9 21.2 21.3 20.0 20.2 20.0 20.2 20.2 20.3 Min: 19.9, Avg: 20.5, Max: 22.9, Diff: 3.0, Sum: 246.2] [Termination (ms): 4.8 4.8 4.8 0.0 4.8 4.8 4.8 4.8 4.8 4.7 4.8 4.8 Min: 0.0, Avg: 4.4, Max: 4.8, Diff: 4.8, Sum: 52.6] [Termination Attempts: 148 161 127 1 142 147 162 139 163 1 154 155 Min: 1, Avg: 125.0, Max: 163, Diff: 162, Sum: 1500] [GC Worker Other (ms): 0.1 0.1 0.1 0.0 0.0 0.1 0.1 0.1 0.1 0.1 0.1 0.1 Min: 0.0, Avg: 0.1, Max: 
0.1, Diff: 0.1, Sum: 0.8] [GC Worker Total (ms): 123.2 123.1 123.0 123.0 122.9 122.9 122.9 122.8 122.8 122.8 122.8 122.8 Min: 122.8, Avg: 122.9, Max: 123.2, Diff: 0.4, Sum: 1475.1] [GC Worker End (ms): 21501697.1 21501697.1 21501697.1 21501697.0 21501697.1 21501697.1 21501697.1 21501697.1 21501697.1 21501697.1 21501697.1 21501697.1 Min: 21501697.0, Avg: 21501697.1, Max: 21501697.1, Diff: 0.1] [Code Root Fixup: 0.0 ms] [Clear CT: 0.8 ms] [Other: 3.7 ms] [Choose CSet: 0.0 ms] [Ref Proc: 1.9 ms] [Ref Enq: 0.1 ms] [Free CSet: 0.2 ms] [Eden: 648.0M(648.0M)->0.0B(648.0M) Survivors: 96.0M->96.0M Heap: 7911.3M(14.6G)->7293.7M(14.6G)] [Times: user=1.50 sys=0.00, real=0.13 secs] 2013-10-28T15:59:51.136+0100: 21508.024: [GC pause (young) Desired survivor size 50331648 bytes, new threshold 3 (max 15) - age 1: 22020176 bytes, 22020176 total - age 2: 20834232 bytes, 42854408 total - age 3: 16616152 bytes, 59470560 total - age 4: 3427912 bytes, 62898472 total 21508.025: [G1Ergonomics (CSet Construction) start choosing CSet, _pending_cards: 61972, predicted base time: 78.13 ms, remaining time: 221.87 ms, target pause time: 300.00 ms] 21508.025: [G1Ergonomics (CSet Construction) add young regions to CSet, eden: 81 regions, survivors: 12 regions, predicted young region time: 20.76 ms] 21508.025: [G1Ergonomics (CSet Construction) finish choosing CSet, eden: 81 regions, survivors: 12 regions, old: 0 regions, predicted pause time: 98.89 ms, target pause time: 300.00 ms] , 0.1165410 secs] [Parallel Time: 112.6 ms, GC Workers: 12] [GC Worker Start (ms): 21508025.1 21508025.1 21508025.2 21508025.2 21508025.3 21508025.3 21508025.4 21508025.4 21508025.4 21508025.4 21508025.5 21508025.5 Min: 21508025.1, Avg: 21508025.3, Max: 21508025.5, Diff: 0.4] [Ext Root Scanning (ms): 46.1 45.4 45.6 64.0 44.9 47.0 45.8 58.7 91.3 64.7 45.6 45.2 Min: 44.9, Avg: 53.7, Max: 91.3, Diff: 46.4, Sum: 644.3] [Update RS (ms): 16.0 16.6 16.6 0.0 16.8 15.1 16.2 2.2 0.0 0.0 16.1 16.7 Min: 0.0, Avg: 11.0, Max: 16.8, 
Diff: 16.8, Sum: 132.3] [Processed Buffers: 41 67 65 0 49 68 51 28 0 0 58 61 Min: 0, Avg: 40.7, Max: 68, Diff: 68, Sum: 488] [Scan RS (ms): 0.0 0.1 0.0 0.0 0.1 0.0 0.0 0.1 0.0 0.0 0.1 0.0 Min: 0.0, Avg: 0.0, Max: 0.1, Diff: 0.1, Sum: 0.5] [Object Copy (ms): 23.7 23.7 23.6 21.7 23.8 23.6 23.6 24.5 20.8 20.8 23.7 23.5 Min: 20.8, Avg: 23.1, Max: 24.5, Diff: 3.8, Sum: 277.1] [Termination (ms): 26.6 26.6 26.6 26.6 26.6 26.6 26.6 26.6 0.0 26.6 26.6 26.6 Min: 0.0, Avg: 24.4, Max: 26.6, Diff: 26.6, Sum: 292.4] [Termination Attempts: 42 1 48 47 49 44 49 49 1 51 47 50 Min: 1, Avg: 39.8, Max: 51, Diff: 50, Sum: 478] [GC Worker Other (ms): 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.0 0.1 0.1 0.0 Min: 0.0, Avg: 0.1, Max: 0.1, Diff: 0.1, Sum: 0.8] [GC Worker Total (ms): 112.5 112.5 112.4 112.4 112.3 112.3 112.2 112.2 112.1 112.1 112.1 112.1 Min: 112.1, Avg: 112.3, Max: 112.5, Diff: 0.4, Sum: 1347.3] [GC Worker End (ms): 21508137.6 21508137.6 21508137.6 21508137.6 21508137.6 21508137.6 21508137.6 21508137.6 21508137.5 21508137.6 21508137.6 21508137.6 Min: 21508137.5, Avg: 21508137.6, Max: 21508137.6, Diff: 0.1] [Code Root Fixup: 0.0 ms] [Clear CT: 0.2 ms] [Other: 3.7 ms] [Choose CSet: 0.0 ms] [Ref Proc: 2.0 ms] [Ref Enq: 0.1 ms] [Free CSet: 0.2 ms] [Eden: 648.0M(648.0M)->0.0B(648.0M) Survivors: 96.0M->96.0M Heap: 7941.7M(14.6G)->7391.1M(14.6G)] [Times: user=1.35 sys=0.01, real=0.11 secs] 2013-10-28T15:59:58.723+0100: 21515.610: [GC pause (young) Desired survivor size 50331648 bytes, new threshold 3 (max 15) - age 1: 20444800 bytes, 20444800 total - age 2: 14599736 bytes, 35044536 total - age 3: 17794192 bytes, 52838728 total 21515.611: [G1Ergonomics (CSet Construction) start choosing CSet, _pending_cards: 90998, predicted base time: 82.82 ms, remaining time: 217.18 ms, target pause time: 300.00 ms] 21515.611: [G1Ergonomics (CSet Construction) add young regions to CSet, eden: 81 regions, survivors: 12 regions, predicted young region time: 19.51 ms] 21515.611: [G1Ergonomics (CSet 
Construction) finish choosing CSet, eden: 81 regions, survivors: 12 regions, old: 0 regions, predicted pause time: 102.33 ms, target pause time: 300.00 ms] , 0.1198280 secs] [Parallel Time: 116.2 ms, GC Workers: 12] [GC Worker Start (ms): 21515611.0 21515611.0 21515611.1 21515611.1 21515611.1 21515611.2 21515611.2 21515611.3 21515611.3 21515611.3 21515611.3 21515611.3 Min: 21515611.0, Avg: 21515611.2, Max: 21515611.3, Diff: 0.4] [Ext Root Scanning (ms): 45.7 45.9 45.8 64.0 45.6 64.2 45.5 94.2 46.6 59.6 45.7 45.8 Min: 45.5, Avg: 54.0, Max: 94.2, Diff: 48.7, Sum: 648.5] [Update RS (ms): 24.9 24.4 24.7 5.3 25.0 5.5 25.2 0.0 25.4 9.8 24.5 24.6 Min: 0.0, Avg: 18.3, Max: 25.4, Diff: 25.4, Sum: 219.5] [Processed Buffers: 66 54 63 34 45 33 54 0 58 57 65 56 Min: 0, Avg: 48.8, Max: 66, Diff: 66, Sum: 585] [Scan RS (ms): 0.0 0.2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.1 0.0 0.0 Min: 0.0, Avg: 0.0, Max: 0.2, Diff: 0.2, Sum: 0.4] [Object Copy (ms): 20.9 20.9 20.9 22.1 20.8 21.7 20.6 21.5 19.2 21.7 21.0 20.8 Min: 19.2, Avg: 21.0, Max: 22.1, Diff: 2.9, Sum: 252.2] [Termination (ms): 24.5 24.5 24.5 24.5 24.5 24.5 24.5 0.0 24.5 24.5 24.5 24.5 Min: 0.0, Avg: 22.5, Max: 24.5, Diff: 24.5, Sum: 269.8] [Termination Attempts: 1 1 1 1 1 1 1 1 1 1 1 1 Min: 1, Avg: 1.0, Max: 1, Diff: 0, Sum: 12] [GC Worker Other (ms): 0.0 0.1 0.1 0.1 0.1 0.1 0.1 0.0 0.0 0.1 0.1 0.0 Min: 0.0, Avg: 0.1, Max: 0.1, Diff: 0.1, Sum: 0.9] [GC Worker Total (ms): 116.1 116.1 116.1 116.1 116.0 116.0 115.9 115.8 115.8 115.8 115.8 115.8 Min: 115.8, Avg: 115.9, Max: 116.1, Diff: 0.4, Sum: 1391.3] [GC Worker End (ms): 21515727.1 21515727.1 21515727.1 21515727.1 21515727.1 21515727.1 21515727.1 21515727.0 21515727.1 21515727.1 21515727.1 21515727.1 Min: 21515727.0, Avg: 21515727.1, Max: 21515727.1, Diff: 0.1] [Code Root Fixup: 0.0 ms] [Clear CT: 0.1 ms] [Other: 3.5 ms] [Choose CSet: 0.0 ms] [Ref Proc: 1.8 ms] [Ref Enq: 0.1 ms] [Free CSet: 0.2 ms] [Eden: 648.0M(648.0M)->0.0B(648.0M) Survivors: 96.0M->96.0M Heap: 
8039.1M(14.6G)->7477.8M(14.6G)] [Times: user=1.39 sys=0.00, real=0.12 secs] 2013-10-28T16:00:01.284+0100: 21518.172: [GC pause (young) Desired survivor size 50331648 bytes, new threshold 3 (max 15) - age 1: 22178080 bytes, 22178080 total - age 2: 15771712 bytes, 37949792 total - age 3: 12924848 bytes, 50874640 total 21518.172: [G1Ergonomics (CSet Construction) start choosing CSet, _pending_cards: 74257, predicted base time: 77.45 ms, remaining time: 222.55 ms, target pause time: 300.00 ms] 21518.172: [G1Ergonomics (CSet Construction) add young regions to CSet, eden: 81 regions, survivors: 12 regions, predicted young region time: 16.66 ms] 21518.172: [G1Ergonomics (CSet Construction) finish choosing CSet, eden: 81 regions, survivors: 12 regions, old: 0 regions, predicted pause time: 94.12 ms, target pause time: 300.00 ms] , 0.1202740 secs] [Parallel Time: 114.7 ms, GC Workers: 12] [GC Worker Start (ms): 21518172.6 21518172.7 21518172.7 21518172.7 21518172.8 21518172.8 21518172.8 21518172.9 21518173.0 21518173.0 21518173.0 21518173.0 Min: 21518172.6, Avg: 21518172.8, Max: 21518173.0, Diff: 0.4] [Ext Root Scanning (ms): 61.8 46.5 46.0 45.1 45.3 45.2 45.4 45.9 59.6 47.8 92.0 64.5 Min: 45.1, Avg: 53.8, Max: 92.0, Diff: 46.9, Sum: 645.1] [Update RS (ms): 2.4 18.7 20.2 20.8 19.9 19.9 34.4 19.4 4.5 16.9 0.0 0.6 Min: 0.0, Avg: 14.8, Max: 34.4, Diff: 34.4, Sum: 177.8] [Processed Buffers: 32 53 43 51 70 47 19 46 33 46 0 1 Min: 0, Avg: 36.8, Max: 70, Diff: 70, Sum: 441] [Scan RS (ms): 0.1 0.1 0.0 0.0 0.0 0.1 0.0 0.0 0.1 0.1 0.0 0.0 Min: 0.0, Avg: 0.0, Max: 0.1, Diff: 0.1, Sum: 0.5] [Object Copy (ms): 22.1 21.0 20.1 20.4 20.9 20.9 6.3 39.3 21.8 21.2 21.1 20.9 Min: 6.3, Avg: 21.3, Max: 39.3, Diff: 33.0, Sum: 256.0] [Termination (ms): 27.1 27.1 27.1 27.1 27.1 27.1 27.1 8.6 28.2 27.1 0.0 27.1 Min: 0.0, Avg: 23.4, Max: 28.2, Diff: 28.2, Sum: 280.6] [Termination Attempts: 23 17 15 16 18 17 14 1 17 14 1 14 Min: 1, Avg: 13.9, Max: 23, Diff: 22, Sum: 167] [GC Worker Other (ms): 0.1 0.1 
0.1 0.1 0.1 0.1 0.1 0.1 0.0 0.1 0.0 0.1 Min: 0.0, Avg: 0.1, Max: 0.1, Diff: 0.1, Sum: 0.9] [GC Worker Total (ms): 113.6 113.5 113.5 113.4 113.3 113.3 113.3 113.3 114.2 113.2 113.1 113.2 Min: 113.1, Avg: 113.4, Max: 114.2, Diff: 1.1, Sum: 1360.9] [GC Worker End (ms): 21518286.2 21518286.1 21518286.2 21518286.2 21518286.2 21518286.2 21518286.2 21518286.2 21518287.1 21518286.2 21518286.1 21518286.2 Min: 21518286.1, Avg: 21518286.2, Max: 21518287.1, Diff: 1.1] [Code Root Fixup: 0.0 ms] [Clear CT: 0.2 ms] [Other: 5.4 ms] [Choose CSet: 0.0 ms] [Ref Proc: 3.8 ms] [Ref Enq: 0.1 ms] [Free CSet: 0.2 ms] [Eden: 648.0M(648.0M)->0.0B(648.0M) Survivors: 96.0M->96.0M Heap: 8125.8M(14.6G)->7548.1M(14.6G)] [Times: user=1.31 sys=0.00, real=0.12 secs] 2013-10-28T16:00:05.597+0100: 21522.485: [GC pause (young) Desired survivor size 50331648 bytes, new threshold 15 (max 15) - age 1: 16592056 bytes, 16592056 total - age 2: 17463768 bytes, 34055824 total - age 3: 15349624 bytes, 49405448 total 21522.486: [G1Ergonomics (CSet Construction) start choosing CSet, _pending_cards: 76409, predicted base time: 77.52 ms, remaining time: 222.48 ms, target pause time: 300.00 ms] 21522.486: [G1Ergonomics (CSet Construction) add young regions to CSet, eden: 81 regions, survivors: 12 regions, predicted young region time: 18.97 ms] 21522.486: [G1Ergonomics (CSet Construction) finish choosing CSet, eden: 81 regions, survivors: 12 regions, old: 0 regions, predicted pause time: 96.49 ms, target pause time: 300.00 ms] , 0.1174670 secs] [Parallel Time: 113.2 ms, GC Workers: 12] [GC Worker Start (ms): 21522485.8 21522485.9 21522485.9 21522486.0 21522486.1 21522486.2 21522486.2 21522486.2 21522486.2 21522486.2 21522486.3 21522486.3 Min: 21522485.8, Avg: 21522486.1, Max: 21522486.3, Diff: 0.4] [Ext Root Scanning (ms): 45.3 91.8 46.0 46.9 63.4 58.7 45.1 63.1 45.3 46.3 45.7 48.3 Min: 45.1, Avg: 53.8, Max: 91.8, Diff: 46.7, Sum: 645.9] [Update RS (ms): 20.4 0.0 19.4 19.7 0.8 6.0 19.9 0.8 19.9 18.8 19.6 17.1 Min: 
0.0, Avg: 13.5, Max: 20.4, Diff: 20.4, Sum: 162.4] [Processed Buffers: 57 0 68 73 8 33 55 2 48 62 61 52 Min: 0, Avg: 43.2, Max: 73, Diff: 73, Sum: 519] [Scan RS (ms): 0.0 0.0 0.1 0.1 0.0 0.0 0.1 0.0 0.1 0.1 0.1 0.0 Min: 0.0, Avg: 0.0, Max: 0.1, Diff: 0.1, Sum: 0.5] [Object Copy (ms): 21.5 21.0 21.6 20.2 22.6 22.1 21.7 22.9 21.5 21.5 21.4 21.3 Min: 20.2, Avg: 21.6, Max: 22.9, Diff: 2.7, Sum: 259.2] [Termination (ms): 25.7 0.0 25.7 25.7 25.7 25.7 25.7 25.7 25.7 25.7 25.7 25.7 Min: 0.0, Avg: 23.6, Max: 25.7, Diff: 25.7, Sum: 283.0] [Termination Attempts: 19 1 13 16 14 9 17 18 15 20 1 16 Min: 1, Avg: 13.2, Max: 20, Diff: 19, Sum: 159] [GC Worker Other (ms): 0.1 0.0 0.1 0.0 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 Min: 0.0, Avg: 0.1, Max: 0.1, Diff: 0.1, Sum: 1.0] [GC Worker Total (ms): 112.9 112.8 112.8 112.7 112.6 112.6 112.6 112.6 112.6 112.6 112.5 112.5 Min: 112.5, Avg: 112.7, Max: 112.9, Diff: 0.4, Sum: 1351.9] [GC Worker End (ms): 21522598.7 21522598.7 21522598.8 21522598.7 21522598.8 21522598.8 21522598.8 21522598.8 21522598.8 21522598.8 21522598.8 21522598.8 Min: 21522598.7, Avg: 21522598.8, Max: 21522598.8, Diff: 0.1] [Code Root Fixup: 0.0 ms] [Clear CT: 0.2 ms] [Other: 4.1 ms] [Choose CSet: 0.0 ms] [Ref Proc: 2.5 ms] [Ref Enq: 0.1 ms] [Free CSet: 0.2 ms] [Eden: 648.0M(648.0M)->0.0B(648.0M) Survivors: 96.0M->96.0M Heap: 8196.1M(14.6G)->7582.0M(14.6G)] [Times: user=1.36 sys=0.00, real=0.11 secs] 2013-10-28T16:00:06.595+0100: 21523.483: [GC pause (young) Desired survivor size 50331648 bytes, new threshold 3 (max 15) - age 1: 26815464 bytes, 26815464 total - age 2: 12126232 bytes, 38941696 total - age 3: 15333080 bytes, 54274776 total - age 4: 13133256 bytes, 67408032 total 21523.484: [G1Ergonomics (CSet Construction) start choosing CSet, _pending_cards: 40586, predicted base time: 68.66 ms, remaining time: 231.34 ms, target pause time: 300.00 ms] 21523.484: [G1Ergonomics (CSet Construction) add young regions to CSet, eden: 81 regions, survivors: 12 regions, predicted 
young region time: 28.52 ms] 21523.484: [G1Ergonomics (CSet Construction) finish choosing CSet, eden: 81 regions, survivors: 12 regions, old: 0 regions, predicted pause time: 97.18 ms, target pause time: 300.00 ms] , 0.1269940 secs] [Parallel Time: 118.4 ms, GC Workers: 12] [GC Worker Start (ms): 21523483.8 21523483.8 21523483.9 21523483.9 21523483.9 21523483.9 21523483.9 21523483.9 21523483.9 21523484.0 21523484.1 21523484.0 Min: 21523483.8, Avg: 21523483.9, Max: 21523484.1, Diff: 0.2] [Ext Root Scanning (ms): 87.3 52.5 69.8 96.8 69.7 60.8 53.7 72.7 70.6 54.0 54.1 86.7 Min: 52.5, Avg: 69.1, Max: 96.8, Diff: 44.3, Sum: 828.6] [Update RS (ms): 0.0 20.6 2.1 0.0 2.0 12.0 20.6 1.2 2.4 19.4 46.9 0.0 Min: 0.0, Avg: 10.6, Max: 46.9, Diff: 46.9, Sum: 127.2] [Processed Buffers: 0 62 18 0 12 35 53 4 22 42 23 0 Min: 0, Avg: 22.6, Max: 62, Diff: 62, Sum: 271] [Scan RS (ms): 0.0 0.0 0.1 0.0 0.0 0.0 0.0 0.0 0.0 0.2 0.0 0.0 Min: 0.0, Avg: 0.0, Max: 0.2, Diff: 0.2, Sum: 0.4] [Object Copy (ms): 15.0 29.2 30.5 21.3 30.5 29.5 27.9 36.0 29.2 28.6 1.7 15.4 Min: 1.7, Avg: 24.6, Max: 36.0, Diff: 34.4, Sum: 294.8] [Termination (ms): 15.9 15.9 15.8 0.0 15.8 15.8 15.8 8.1 15.8 15.8 15.3 16.0 Min: 0.0, Avg: 13.8, Max: 16.0, Diff: 16.0, Sum: 166.0] [Termination Attempts: 36 66 109 1 76 66 71 1 66 71 1 35 Min: 1, Avg: 49.9, Max: 109, Diff: 108, Sum: 599] [GC Worker Other (ms): 0.1 0.1 0.1 0.0 0.1 0.1 0.0 0.1 0.1 0.1 0.1 0.1 Min: 0.0, Avg: 0.1, Max: 0.1, Diff: 0.1, Sum: 1.0] [GC Worker Total (ms): 118.3 118.3 118.2 118.2 118.2 118.2 118.1 118.2 118.2 118.1 118.0 118.1 Min: 118.0, Avg: 118.2, Max: 118.3, Diff: 0.2, Sum: 1418.1] [GC Worker End (ms): 21523602.1 21523602.1 21523602.1 21523602.0 21523602.1 21523602.1 21523602.1 21523602.1 21523602.1 21523602.1 21523602.1 21523602.1 Min: 21523602.0, Avg: 21523602.1, Max: 21523602.1, Diff: 0.1] [Code Root Fixup: 0.0 ms] [Clear CT: 0.2 ms] [Other: 8.4 ms] [Choose CSet: 0.0 ms] [Ref Proc: 6.6 ms] [Ref Enq: 0.1 ms] [Free CSet: 0.2 ms] [Eden: 
648.0M(648.0M)->0.0B(648.0M) Survivors: 96.0M->96.0M Heap: 8230.0M(14.6G)->7629.0M(14.6G)] [Times: user=1.17 sys=0.00, real=0.13 secs] 2013-10-28T16:00:07.417+0100: 21524.305: [GC pause (young) Desired survivor size 50331648 bytes, new threshold 15 (max 15) - age 1: 10249616 bytes, 10249616 total - age 2: 21417864 bytes, 31667480 total - age 3: 11865880 bytes, 43533360 total 21524.305: [G1Ergonomics (CSet Construction) start choosing CSet, _pending_cards: 73249, predicted base time: 85.31 ms, remaining time: 214.69 ms, target pause time: 300.00 ms] 21524.305: [G1Ergonomics (CSet Construction) add young regions to CSet, eden: 81 regions, survivors: 12 regions, predicted young region time: 17.95 ms] 21524.305: [G1Ergonomics (CSet Construction) finish choosing CSet, eden: 81 regions, survivors: 12 regions, old: 0 regions, predicted pause time: 103.27 ms, target pause time: 300.00 ms] , 0.1186000 secs] [Parallel Time: 115.5 ms, GC Workers: 12] [GC Worker Start (ms): 21524305.2 21524305.2 21524305.2 21524305.2 21524305.2 21524305.3 21524305.3 21524305.4 21524305.4 21524305.4 21524305.4 21524305.4 Min: 21524305.2, Avg: 21524305.3, Max: 21524305.4, Diff: 0.2] [Ext Root Scanning (ms): 47.6 63.0 70.1 65.3 48.5 61.7 64.8 67.3 47.5 60.0 93.6 47.8 Min: 47.5, Avg: 61.5, Max: 93.6, Diff: 46.1, Sum: 737.4] [Update RS (ms): 30.7 14.1 7.9 11.4 30.3 31.4 11.6 8.8 29.8 16.6 0.0 33.5 Min: 0.0, Avg: 18.8, Max: 33.5, Diff: 33.5, Sum: 226.0] [Processed Buffers: 34 33 30 28 63 21 23 26 63 29 0 34 Min: 0, Avg: 32.0, Max: 63, Diff: 63, Sum: 384] [Scan RS (ms): 0.0 0.1 0.0 0.0 0.1 0.0 0.0 0.0 0.1 0.0 0.0 0.0 Min: 0.0, Avg: 0.0, Max: 0.1, Diff: 0.1, Sum: 0.5] [Object Copy (ms): 15.7 16.7 16.0 17.3 15.1 1.3 17.5 17.8 16.5 17.3 21.5 12.6 Min: 1.3, Avg: 15.5, Max: 21.5, Diff: 20.2, Sum: 185.5] [Termination (ms): 21.2 21.3 21.2 21.2 21.2 20.8 21.2 21.2 21.2 21.2 0.0 21.2 Min: 0.0, Avg: 19.4, Max: 21.3, Diff: 21.3, Sum: 233.2] [Termination Attempts: 37 36 44 34 45 1 37 42 42 36 1 32 Min: 1, Avg: 
32.2, Max: 45, Diff: 44, Sum: 387] [GC Worker Other (ms): 0.1 0.1 0.1 0.1 0.0 0.1 0.1 0.1 0.1 0.1 0.0 0.1 Min: 0.0, Avg: 0.1, Max: 0.1, Diff: 0.1, Sum: 0.9] [GC Worker Total (ms): 115.4 115.4 115.4 115.4 115.3 115.3 115.3 115.2 115.2 115.2 115.1 115.2 Min: 115.1, Avg: 115.3, Max: 115.4, Diff: 0.3, Sum: 1383.5] [GC Worker End (ms): 21524420.6 21524420.6 21524420.6 21524420.6 21524420.6 21524420.6 21524420.6 21524420.6 21524420.6 21524420.6 21524420.5 21524420.6 Min: 21524420.5, Avg: 21524420.6, Max: 21524420.6, Diff: 0.1] [Code Root Fixup: 0.0 ms] [Clear CT: 0.2 ms] [Other: 3.0 ms] [Choose CSet: 0.0 ms] [Ref Proc: 1.5 ms] [Ref Enq: 0.1 ms] [Free CSet: 0.2 ms] [Eden: 648.0M(648.0M)->0.0B(648.0M) Survivors: 96.0M->96.0M Heap: 8277.0M(14.6G)->7634.9M(14.6G)] [Times: user=1.19 sys=0.01, real=0.11 secs] 2013-10-28T16:00:09.160+0100: 21526.048: [GC pause (young) Desired survivor size 50331648 bytes, new threshold 15 (max 15) - age 1: 6000328 bytes, 6000328 total - age 2: 9520600 bytes, 15520928 total - age 3: 20602264 bytes, 36123192 total - age 4: 11786432 bytes, 47909624 total 21526.048: [G1Ergonomics (CSet Construction) start choosing CSet, _pending_cards: 27950, predicted base time: 74.37 ms, remaining time: 225.63 ms, target pause time: 300.00 ms] 21526.048: [G1Ergonomics (CSet Construction) add young regions to CSet, eden: 81 regions, survivors: 12 regions, predicted young region time: 16.51 ms] 21526.048: [G1Ergonomics (CSet Construction) finish choosing CSet, eden: 81 regions, survivors: 12 regions, old: 0 regions, predicted pause time: 90.88 ms, target pause time: 300.00 ms] , 0.1238860 secs] [Parallel Time: 119.6 ms, GC Workers: 12] [GC Worker Start (ms): 21526048.4 21526048.4 21526048.5 21526048.5 21526048.5 21526048.6 21526048.6 21526048.6 21526048.6 21526048.7 21526048.7 21526048.8 Min: 21526048.4, Avg: 21526048.6, Max: 21526048.8, Diff: 0.4] [Ext Root Scanning (ms): 64.3 48.7 56.1 59.2 84.8 66.6 48.1 60.6 98.8 66.3 48.2 61.0 Min: 48.1, Avg: 63.6, Max: 98.8, 
Diff: 50.7, Sum: 762.8] [Update RS (ms): 1.5 15.9 9.9 7.0 0.0 0.0 16.0 3.5 0.0 0.0 15.9 3.2 Min: 0.0, Avg: 6.1, Max: 16.0, Diff: 16.0, Sum: 73.1] [Processed Buffers: 2 40 29 21 0 0 38 13 0 0 31 22 Min: 0, Avg: 16.3, Max: 40, Diff: 40, Sum: 196] [Scan RS (ms): 0.0 0.0 0.1 0.0 0.0 0.0 0.1 0.0 0.0 0.0 0.1 0.0 Min: 0.0, Avg: 0.0, Max: 0.1, Diff: 0.1, Sum: 0.5] [Object Copy (ms): 17.9 19.1 17.6 22.1 2.5 17.0 19.3 19.4 20.5 17.1 19.2 19.1 Min: 2.5, Avg: 17.6, Max: 22.1, Diff: 19.5, Sum: 210.9] [Termination (ms): 35.7 35.7 35.7 31.1 32.0 35.7 35.7 35.7 0.0 35.7 35.7 35.7 Min: 0.0, Avg: 32.1, Max: 35.7, Diff: 35.7, Sum: 384.6] [Termination Attempts: 78 1 57 1 1 70 60 59 1 62 71 66 Min: 1, Avg: 43.9, Max: 78, Diff: 77, Sum: 527] [GC Worker Other (ms): 0.0 0.1 0.0 0.0 0.1 0.0 0.1 0.1 0.0 0.1 0.0 0.1 Min: 0.0, Avg: 0.0, Max: 0.1, Diff: 0.1, Sum: 0.6] [GC Worker Total (ms): 119.5 119.6 119.5 119.4 119.4 119.4 119.4 119.4 119.3 119.3 119.2 119.1 Min: 119.1, Avg: 119.4, Max: 119.6, Diff: 0.4, Sum: 1432.5] [GC Worker End (ms): 21526167.9 21526168.0 21526168.0 21526167.9 21526168.0 21526167.9 21526168.0 21526168.0 21526167.9 21526168.0 21526167.9 21526168.0 Min: 21526167.9, Avg: 21526168.0, Max: 21526168.0, Diff: 0.1] [Code Root Fixup: 0.0 ms] [Clear CT: 0.2 ms] [Other: 4.1 ms] [Choose CSet: 0.0 ms] [Ref Proc: 2.7 ms] [Ref Enq: 0.1 ms] [Free CSet: 0.2 ms] [Eden: 648.0M(648.0M)->0.0B(648.0M) Survivors: 96.0M->96.0M Heap: 8282.9M(14.6G)->7659.5M(14.6G)] [Times: user=1.24 sys=0.00, real=0.12 secs] 2013-10-28T16:00:10.519+0100: 21527.407: [GC pause (young) Desired survivor size 50331648 bytes, new threshold 15 (max 15) - age 1: 8936336 bytes, 8936336 total - age 2: 3011208 bytes, 11947544 total - age 3: 9009224 bytes, 20956768 total - age 4: 19393152 bytes, 40349920 total - age 5: 9924104 bytes, 50274024 total 21527.407: [G1Ergonomics (CSet Construction) start choosing CSet, _pending_cards: 35510, predicted base time: 76.50 ms, remaining time: 223.50 ms, target pause time: 300.00 ms] 
21527.407: [G1Ergonomics (CSet Construction) add young regions to CSet, eden: 81 regions, survivors: 12 regions, predicted young region time: 12.59 ms] 21527.407: [G1Ergonomics (CSet Construction) finish choosing CSet, eden: 81 regions, survivors: 12 regions, old: 0 regions, predicted pause time: 89.09 ms, target pause time: 300.00 ms] , 0.1681470 secs] [Parallel Time: 154.5 ms, GC Workers: 12] [GC Worker Start (ms): 21527407.6 21527407.6 21527407.7 21527407.7 21527407.7 21527407.7 21527407.7 21527407.8 21527407.8 21527407.8 21527407.8 21527407.8 Min: 21527407.6, Avg: 21527407.7, Max: 21527407.8, Diff: 0.2] [Ext Root Scanning (ms): 65.0 134.3 48.8 93.4 49.3 55.4 67.5 48.0 47.8 48.3 67.3 51.3 Min: 47.8, Avg: 64.7, Max: 134.3, Diff: 86.6, Sum: 776.3] [Update RS (ms): 0.0 0.0 12.9 0.0 13.1 6.4 0.0 13.0 13.2 12.7 0.0 10.0 Min: 0.0, Avg: 6.8, Max: 13.2, Diff: 13.2, Sum: 81.4] [Processed Buffers: 0 0 31 0 26 28 0 41 39 42 0 54 Min: 0, Avg: 21.8, Max: 54, Diff: 54, Sum: 261] [Scan RS (ms): 0.0 0.0 0.0 0.0 0.1 0.1 0.0 0.1 0.0 0.1 0.0 0.0 Min: 0.0, Avg: 0.0, Max: 0.1, Diff: 0.1, Sum: 0.5] [Object Copy (ms): 16.7 19.9 19.9 2.6 19.1 19.7 14.0 20.5 20.5 20.4 14.1 20.2 Min: 2.6, Avg: 17.3, Max: 20.5, Diff: 18.0, Sum: 207.7] [Termination (ms): 72.6 0.0 72.6 58.3 72.6 72.6 72.6 72.6 72.6 72.6 72.7 72.6 Min: 0.0, Avg: 65.4, Max: 72.7, Diff: 72.7, Sum: 784.6] [Termination Attempts: 12 1 15 1 12 16 12 13 9 1 12 14 Min: 1, Avg: 9.8, Max: 16, Diff: 15, Sum: 118] [GC Worker Other (ms): 0.1 0.0 0.1 0.1 0.0 0.1 0.1 0.1 0.1 0.1 0.1 0.1 Min: 0.0, Avg: 0.1, Max: 0.1, Diff: 0.1, Sum: 1.0] [GC Worker Total (ms): 154.5 154.3 154.4 154.3 154.2 154.3 154.3 154.2 154.2 154.2 154.2 154.2 Min: 154.2, Avg: 154.3, Max: 154.5, Diff: 0.3, Sum: 1851.4] [GC Worker End (ms): 21527562.0 21527561.9 21527562.0 21527562.0 21527561.9 21527562.0 21527562.0 21527562.0 21527562.0 21527562.0 21527562.0 21527562.0 Min: 21527561.9, Avg: 21527562.0, Max: 21527562.0, Diff: 0.1] [Code Root Fixup: 0.0 ms] [Clear CT: 0.2 
ms] [Other: 13.4 ms] [Choose CSet: 0.0 ms] [Ref Proc: 11.9 ms] [Ref Enq: 0.1 ms] [Free CSet: 0.2 ms] [Eden: 648.0M(648.0M)->0.0B(648.0M) Survivors: 96.0M->96.0M Heap: 8307.5M(14.6G)->7675.9M(14.6G)] [Times: user=1.66 sys=0.00, real=0.17 secs] 2013-10-28T16:00:11.861+0100: 21528.749: [GC pause (young) Desired survivor size 50331648 bytes, new threshold 6 (max 15) - age 1: 8421648 bytes, 8421648 total - age 2: 7620344 bytes, 16041992 total - age 3: 2522808 bytes, 18564800 total - age 4: 7182464 bytes, 25747264 total - age 5: 18835480 bytes, 44582744 total - age 6: 9228976 bytes, 53811720 total 21528.749: [G1Ergonomics (CSet Construction) start choosing CSet, _pending_cards: 39666, predicted base time: 81.53 ms, remaining time: 218.47 ms, target pause time: 300.00 ms] 21528.749: [G1Ergonomics (CSet Construction) add young regions to CSet, eden: 81 regions, survivors: 12 regions, predicted young region time: 17.48 ms] 21528.749: [G1Ergonomics (CSet Construction) finish choosing CSet, eden: 81 regions, survivors: 12 regions, old: 0 regions, predicted pause time: 99.00 ms, target pause time: 300.00 ms] , 0.1418840 secs] [Parallel Time: 128.3 ms, GC Workers: 12] [GC Worker Start (ms): 21528749.7 21528749.7 21528749.7 21528749.8 21528749.8 21528749.8 21528749.9 21528749.9 21528749.9 21528749.9 21528749.9 21528750.8 Min: 21528749.7, Avg: 21528749.9, Max: 21528750.8, Diff: 1.1] [Ext Root Scanning (ms): 51.1 52.6 67.6 74.4 70.3 67.6 67.7 67.8 99.4 62.3 62.7 66.5 Min: 51.1, Avg: 67.5, Max: 99.4, Diff: 48.2, Sum: 810.0] [Update RS (ms): 23.1 20.8 3.2 0.0 3.1 4.2 4.8 4.5 0.0 10.3 12.5 3.6 Min: 0.0, Avg: 7.5, Max: 23.1, Diff: 23.1, Sum: 90.1] [Processed Buffers: 50 47 32 0 17 21 26 26 0 28 28 18 Min: 0, Avg: 24.4, Max: 50, Diff: 50, Sum: 293] [Scan RS (ms): 0.0 0.1 0.0 0.0 18.8 0.0 0.0 0.1 0.0 0.0 0.0 0.1 Min: 0.0, Avg: 1.6, Max: 18.8, Diff: 18.8, Sum: 19.2] [Object Copy (ms): 20.6 21.3 24.0 20.4 2.7 23.0 22.2 22.3 28.3 24.6 41.7 23.7 Min: 2.7, Avg: 22.9, Max: 41.7, Diff: 38.9, 
Sum: 274.8] [Termination (ms): 33.1 33.1 33.1 33.1 33.0 33.1 33.1 33.1 0.0 30.5 10.8 33.1 Min: 0.0, Avg: 28.2, Max: 33.1, Diff: 33.1, Sum: 338.8] [Termination Attempts: 1 11 8 8 1 5 5 10 1 1 1 6 Min: 1, Avg: 4.8, Max: 11, Diff: 10, Sum: 58] [GC Worker Other (ms): 0.1 0.1 0.1 0.0 0.1 0.1 0.1 0.1 0.0 0.1 0.0 0.1 Min: 0.0, Avg: 0.1, Max: 0.1, Diff: 0.1, Sum: 0.7] [GC Worker Total (ms): 128.0 128.0 128.0 127.9 127.9 127.9 127.9 127.8 127.8 127.8 127.7 126.9 Min: 126.9, Avg: 127.8, Max: 128.0, Diff: 1.1, Sum: 1533.6] [GC Worker End (ms): 21528877.7 21528877.7 21528877.7 21528877.7 21528877.7 21528877.7 21528877.7 21528877.7 21528877.6 21528877.7 21528877.6 21528877.7 Min: 21528877.6, Avg: 21528877.7, Max: 21528877.7, Diff: 0.1] [Code Root Fixup: 0.0 ms] [Clear CT: 0.2 ms] [Other: 13.4 ms] [Choose CSet: 0.0 ms] [Ref Proc: 11.8 ms] [Ref Enq: 0.1 ms] [Free CSet: 0.2 ms] [Eden: 648.0M(648.0M)->0.0B(648.0M) Survivors: 96.0M->96.0M Heap: 8323.9M(14.6G)->7724.2M(14.6G)] [Times: user=1.36 sys=0.00, real=0.14 secs] 2013-10-28T16:00:13.625+0100: 21530.513: [GC pause (young) Desired survivor size 50331648 bytes, new threshold 6 (max 15) - age 1: 10421320 bytes, 10421320 total - age 2: 6261944 bytes, 16683264 total - age 3: 7196032 bytes, 23879296 total - age 4: 2495936 bytes, 26375232 total - age 5: 5485976 bytes, 31861208 total - age 6: 18536368 bytes, 50397576 total 21530.513: [G1Ergonomics (CSet Construction) start choosing CSet, _pending_cards: 52102, predicted base time: 87.75 ms, remaining time: 212.25 ms, target pause time: 300.00 ms] 21530.513: [G1Ergonomics (CSet Construction) add young regions to CSet, eden: 81 regions, survivors: 12 regions, predicted young region time: 11.64 ms] 21530.513: [G1Ergonomics (CSet Construction) finish choosing CSet, eden: 81 regions, survivors: 12 regions, old: 0 regions, predicted pause time: 99.40 ms, target pause time: 300.00 ms] , 0.1134930 secs] [Parallel Time: 110.3 ms, GC Workers: 12] [GC Worker Start (ms): 21530513.5 21530513.6 
21530513.6 21530513.6 21530513.6 21530513.6 21530513.7 21530513.7 21530513.7 21530513.7 21530513.7 21530514.8 Min: 21530513.5, Avg: 21530513.7, Max: 21530514.8, Diff: 1.2] [Ext Root Scanning (ms): 47.1 90.1 65.0 28.2 59.1 56.9 66.6 47.2 64.7 60.2 87.9 47.1 Min: 28.2, Avg: 60.0, Max: 90.1, Diff: 61.9, Sum: 720.1] [Update RS (ms): 21.5 0.0 3.0 11.2 9.5 13.9 2.1 21.4 2.6 9.1 0.0 20.2 Min: 0.0, Avg: 9.6, Max: 21.5, Diff: 21.5, Sum: 114.6] [Processed Buffers: 54 0 22 42 33 36 10 43 26 26 0 40 Min: 0, Avg: 27.7, Max: 54, Diff: 54, Sum: 332] [Scan RS (ms): 0.0 0.0 0.0 0.1 0.0 0.0 0.0 0.1 0.1 0.1 0.0 0.0 Min: 0.0, Avg: 0.0, Max: 0.1, Diff: 0.1, Sum: 0.4] [Object Copy (ms): 18.0 20.0 18.5 55.1 17.9 15.7 17.8 17.8 19.1 17.0 2.6 18.1 Min: 2.6, Avg: 19.8, Max: 55.1, Diff: 52.6, Sum: 237.7] [Termination (ms): 23.5 0.0 23.6 15.5 23.6 23.5 23.6 23.5 23.6 23.5 19.6 23.5 Min: 0.0, Avg: 20.6, Max: 23.6, Diff: 23.6, Sum: 247.0] [Termination Attempts: 6 1 6 1 7 2 2 4 2 1 1 1 Min: 1, Avg: 2.8, Max: 7, Diff: 6, Sum: 34] [GC Worker Other (ms): 0.1 0.0 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 Min: 0.0, Avg: 0.1, Max: 0.1, Diff: 0.1, Sum: 0.9] [GC Worker Total (ms): 110.2 110.2 110.2 110.2 110.2 110.1 110.1 110.1 110.1 110.1 110.1 109.0 Min: 109.0, Avg: 110.1, Max: 110.2, Diff: 1.2, Sum: 1320.7] [GC Worker End (ms): 21530623.8 21530623.7 21530623.8 21530623.8 21530623.8 21530623.8 21530623.8 21530623.8 21530623.8 21530623.8 21530623.8 21530623.8 Min: 21530623.7, Avg: 21530623.8, Max: 21530623.8, Diff: 0.1] [Code Root Fixup: 0.0 ms] [Clear CT: 0.1 ms] [Other: 3.0 ms] [Choose CSet: 0.0 ms] [Ref Proc: 1.5 ms] [Ref Enq: 0.1 ms] [Free CSet: 0.2 ms] [Eden: 648.0M(648.0M)->0.0B(648.0M) Survivors: 96.0M->96.0M Heap: 8372.2M(14.6G)->7774.0M(14.6G)] [Times: user=1.15 sys=0.02, real=0.12 secs] 2013-10-28T16:00:19.956+0100: 21536.844: [GC pause (young) Desired survivor size 50331648 bytes, new threshold 15 (max 15) - age 1: 4026408 bytes, 4026408 total - age 2: 8853712 bytes, 12880120 total - age 3: 
5294968 bytes, 18175088 total - age 4: 6427000 bytes, 24602088 total - age 5: 2472488 bytes, 27074576 total - age 6: 5429760 bytes, 32504336 total 21536.844: [G1Ergonomics (CSet Construction) start choosing CSet, _pending_cards: 76487, predicted base time: 89.50 ms, remaining time: 210.50 ms, target pause time: 300.00 ms] 21536.844: [G1Ergonomics (CSet Construction) add young regions to CSet, eden: 81 regions, survivors: 12 regions, predicted young region time: 10.64 ms] 21536.844: [G1Ergonomics (CSet Construction) finish choosing CSet, eden: 81 regions, survivors: 12 regions, old: 0 regions, predicted pause time: 100.15 ms, target pause time: 300.00 ms] , 0.1148320 secs] [Parallel Time: 111.4 ms, GC Workers: 12] [GC Worker Start (ms): 21536844.7 21536844.7 21536844.7 21536844.8 21536844.8 21536844.9 21536844.9 21536844.9 21536845.0 21536845.0 21536845.0 21536845.0 Min: 21536844.7, Avg: 21536844.9, Max: 21536845.0, Diff: 0.3] [Ext Root Scanning (ms): 57.9 89.6 48.0 57.9 65.5 58.2 61.3 58.4 56.6 43.4 61.4 60.8 Min: 43.4, Avg: 59.9, Max: 89.6, Diff: 46.2, Sum: 719.1] [Update RS (ms): 14.0 0.0 24.7 14.0 6.6 13.6 8.8 13.6 15.1 27.7 9.0 9.2 Min: 0.0, Avg: 13.0, Max: 27.7, Diff: 27.7, Sum: 156.2] [Processed Buffers: 41 0 73 46 38 55 55 48 61 67 33 43 Min: 0, Avg: 46.7, Max: 73, Diff: 73, Sum: 560] [Scan RS (ms): 0.0 0.0 0.0 0.1 0.1 0.0 0.1 0.0 0.0 0.1 0.0 0.1 Min: 0.0, Avg: 0.0, Max: 0.1, Diff: 0.1, Sum: 0.5] [Object Copy (ms): 14.2 21.4 13.4 14.1 13.8 14.1 15.7 13.9 14.1 14.7 15.5 15.7 Min: 13.4, Avg: 15.0, Max: 21.4, Diff: 8.0, Sum: 180.3] [Termination (ms): 24.9 0.0 24.9 24.9 24.9 24.9 24.9 24.9 24.9 24.9 24.9 24.9 Min: 0.0, Avg: 22.9, Max: 24.9, Diff: 24.9, Sum: 274.2] [Termination Attempts: 51 1 51 49 44 50 52 52 53 1 56 46 Min: 1, Avg: 42.2, Max: 56, Diff: 55, Sum: 506] [GC Worker Other (ms): 0.1 0.0 0.1 0.1 0.1 0.1 0.1 0.1 0.0 0.1 0.1 0.1 Min: 0.0, Avg: 0.1, Max: 0.1, Diff: 0.1, Sum: 0.9] [GC Worker Total (ms): 111.1 111.0 111.1 111.0 111.0 110.9 110.9 110.9 110.8 
110.8 110.8 110.8 Min: 110.8, Avg: 110.9, Max: 111.1, Diff: 0.3, Sum: 1331.2] [GC Worker End (ms): 21536955.8 21536955.7 21536955.8 21536955.8 21536955.8 21536955.8 21536955.8 21536955.8 21536955.8 21536955.8 21536955.8 21536955.8 Min: 21536955.7, Avg: 21536955.8, Max: 21536955.8, Diff: 0.1] [Code Root Fixup: 0.0 ms] [Clear CT: 0.1 ms] [Other: 3.3 ms] [Choose CSet: 0.0 ms] [Ref Proc: 1.6 ms] [Ref Enq: 0.1 ms] [Free CSet: 0.2 ms] [Eden: 648.0M(648.0M)->0.0B(6368.0M) Survivors: 96.0M->88.0M Heap: 8422.0M(14.6G)->7768.2M(14.6G)] [Times: user=1.20 sys=0.01, real=0.12 secs] -------------- next part -------------- 2013-10-28T15:58:33.923+0100: 21430.811: [GC pause (young) (initial-mark) Desired survivor size 276824064 bytes, new threshold 7 (max 15) - age 1: 55460056 bytes, 55460056 total - age 2: 79721520 bytes, 135181576 total - age 3: 53921760 bytes, 189103336 total - age 4: 24968016 bytes, 214071352 total - age 5: 42768296 bytes, 256839648 total - age 6: 15861352 bytes, 272701000 total - age 7: 28221288 bytes, 300922288 total 21430.811: [G1Ergonomics (CSet Construction) start choosing CSet, _pending_cards: 200850, predicted base time: 103.40 ms, remaining time: 196.60 ms, target pause time: 300.00 ms] 21430.811: [G1Ergonomics (CSet Construction) add young regions to CSet, eden: 473 regions, survivors: 51 regions, predicted young region time: 76.73 ms] 21430.811: [G1Ergonomics (CSet Construction) finish choosing CSet, eden: 473 regions, survivors: 51 regions, old: 0 regions, predicted pause time: 180.13 ms, target pause time: 300.00 ms] , 0.2397300 secs] [Parallel Time: 232.7 ms, GC Workers: 12] [GC Worker Start (ms): 21430811.5 21430811.6 21430811.7 21430811.8 21430811.8 21430811.8 21430811.9 21430811.9 21430811.9 21430812.0 21430812.0 21430812.0 Min: 21430811.5, Avg: 21430811.8, Max: 21430812.0, Diff: 0.4] [Ext Root Scanning (ms): 45.3 46.2 103.6 46.3 46.8 45.7 46.4 63.9 66.3 45.9 47.4 59.8 Min: 45.3, Avg: 55.3, Max: 103.6, Diff: 58.3, Sum: 663.5] [Update RS (ms): 
46.4 46.2 0.0 46.5 46.0 46.8 46.7 25.2 26.1 46.5 45.8 29.8 Min: 0.0, Avg: 37.7, Max: 46.8, Diff: 46.8, Sum: 451.9] [Processed Buffers: 109 107 0 99 114 95 109 92 74 108 107 112 Min: 0, Avg: 93.8, Max: 114, Diff: 114, Sum: 1126] [Scan RS (ms): 0.3 0.3 0.1 0.1 0.2 0.3 0.0 0.3 0.2 0.5 0.4 0.2 Min: 0.0, Avg: 0.2, Max: 0.5, Diff: 0.4, Sum: 2.9] [Object Copy (ms): 102.4 101.8 128.5 101.3 101.3 101.4 101.0 104.7 101.5 101.2 100.4 104.2 Min: 100.4, Avg: 104.1, Max: 128.5, Diff: 28.1, Sum: 1249.6] [Termination (ms): 37.9 37.9 0.0 37.9 37.9 37.9 37.9 37.9 37.9 37.9 37.9 37.9 Min: 0.0, Avg: 34.7, Max: 37.9, Diff: 37.9, Sum: 416.5] [Termination Attempts: 36 45 1 37 33 43 1 35 33 32 36 23 Min: 1, Avg: 29.6, Max: 45, Diff: 44, Sum: 355] [GC Worker Other (ms): 0.1 0.1 0.0 0.0 0.1 0.1 0.1 0.0 0.0 0.1 0.1 0.1 Min: 0.0, Avg: 0.1, Max: 0.1, Diff: 0.1, Sum: 0.6] [GC Worker Total (ms): 232.4 232.3 232.2 232.1 232.1 232.1 232.1 232.0 232.0 232.0 231.9 231.9 Min: 231.9, Avg: 232.1, Max: 232.4, Diff: 0.5, Sum: 2785.1] [GC Worker End (ms): 21431043.9 21431043.9 21431043.9 21431043.9 21431043.9 21431043.9 21431043.9 21431043.9 21431043.9 21431043.9 21431043.9 21431043.9 Min: 21431043.9, Avg: 21431043.9, Max: 21431043.9, Diff: 0.1] [Code Root Fixup: 0.0 ms] [Clear CT: 0.7 ms] [Other: 6.3 ms] [Choose CSet: 0.0 ms] [Ref Proc: 3.4 ms] [Ref Enq: 0.1 ms] [Free CSet: 0.6 ms] [Eden: 3784.0M(3784.0M)->0.0B(3728.0M) Survivors: 408.0M->408.0M Heap: 13.5G(14.6G)->10096.0M(14.6G)] [Times: user=2.66 sys=0.02, real=0.24 secs] 2013-10-28T15:58:34.163+0100: 21431.051: [GC concurrent-root-region-scan-start] 2013-10-28T15:58:34.214+0100: 21431.101: [GC concurrent-root-region-scan-end, 0.0506100 secs] 2013-10-28T15:58:34.214+0100: 21431.101: [GC concurrent-mark-start] 2013-10-28T15:58:35.986+0100: 21432.874: [GC concurrent-mark-end, 1.7727150 secs] 2013-10-28T15:58:35.994+0100: 21432.881: [GC remark 2013-10-28T15:58:36.008+0100: 21432.896: [GC ref-proc, 0.1192260 secs], 0.2022820 secs] [Times: user=1.47 
sys=0.00, real=0.20 secs] 2013-10-28T15:58:36.198+0100: 21433.086: [GC cleanup 10G->9836M(14G), 0.0347900 secs] [Times: user=0.28 sys=0.00, real=0.03 secs] 2013-10-28T15:58:36.234+0100: 21433.121: [GC concurrent-cleanup-start] 2013-10-28T15:58:36.234+0100: 21433.122: [GC concurrent-cleanup-end, 0.0004450 secs] 2013-10-28T15:59:19.111+0100: 21475.998: [GC pause (young) Desired survivor size 272629760 bytes, new threshold 6 (max 15) - age 1: 63592160 bytes, 63592160 total - age 2: 38460008 bytes, 102052168 total - age 3: 63355736 bytes, 165407904 total - age 4: 44680008 bytes, 210087912 total - age 5: 22742184 bytes, 232830096 total - age 6: 42023016 bytes, 274853112 total - age 7: 15800936 bytes, 290654048 total 21475.999: [G1Ergonomics (CSet Construction) start choosing CSet, _pending_cards: 180993, predicted base time: 98.72 ms, remaining time: 201.28 ms, target pause time: 300.00 ms] 21475.999: [G1Ergonomics (CSet Construction) add young regions to CSet, eden: 466 regions, survivors: 51 regions, predicted young region time: 80.67 ms] 21475.999: [G1Ergonomics (CSet Construction) finish choosing CSet, eden: 466 regions, survivors: 51 regions, old: 0 regions, predicted pause time: 179.39 ms, target pause time: 300.00 ms] 21476.176: [G1Ergonomics (Mixed GCs) start mixed GCs, reason: candidate old regions available, candidate old regions: 635 regions, reclaimable: 3505373264 bytes (22.29 %), threshold: 5.00 %] , 0.1775420 secs] [Parallel Time: 171.9 ms, GC Workers: 12] [GC Worker Start (ms): 21475999.2 21475999.3 21475999.3 21475999.3 21475999.4 21475999.4 21475999.4 21475999.5 21475999.5 21475999.5 21475999.5 21475999.5 Min: 21475999.2, Avg: 21475999.4, Max: 21475999.5, Diff: 0.3] [Ext Root Scanning (ms): 100.6 45.5 64.1 45.1 45.4 44.9 59.1 46.0 45.5 44.7 44.8 63.3 Min: 44.7, Avg: 54.1, Max: 100.6, Diff: 55.9, Sum: 649.0] [Update RS (ms): 0.0 44.5 24.5 44.8 44.6 45.0 29.6 45.0 44.5 44.8 44.9 25.2 Min: 0.0, Avg: 36.5, Max: 45.0, Diff: 45.0, Sum: 437.4] [Processed 
Buffers: 0 125 71 113 123 117 89 97 100 118 100 54 Min: 0, Avg: 92.2, Max: 125, Diff: 125, Sum: 1107] [Scan RS (ms): 0.1 0.1 0.3 0.2 0.1 0.2 0.4 0.2 0.3 0.5 0.4 0.1 Min: 0.1, Avg: 0.2, Max: 0.5, Diff: 0.4, Sum: 2.8] [Object Copy (ms): 70.7 81.3 82.4 81.2 81.0 81.2 82.1 79.9 80.9 81.1 81.0 82.5 Min: 70.7, Avg: 80.4, Max: 82.5, Diff: 11.8, Sum: 965.4] [Termination (ms): 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.0] [Termination Attempts: 3 1 4 5 5 3 4 3 4 5 2 2 Min: 1, Avg: 3.4, Max: 5, Diff: 4, Sum: 41] [GC Worker Other (ms): 0.3 0.1 0.1 0.1 0.2 0.1 0.2 0.0 0.0 0.1 0.2 0.1 Min: 0.0, Avg: 0.1, Max: 0.3, Diff: 0.2, Sum: 1.5] [GC Worker Total (ms): 171.7 171.4 171.4 171.4 171.5 171.4 171.4 171.2 171.2 171.2 171.3 171.2 Min: 171.2, Avg: 171.3, Max: 171.7, Diff: 0.5, Sum: 2056.2] [GC Worker End (ms): 21476170.9 21476170.7 21476170.7 21476170.7 21476170.8 21476170.8 21476170.9 21476170.7 21476170.7 21476170.7 21476170.8 21476170.8 Min: 21476170.7, Avg: 21476170.8, Max: 21476170.9, Diff: 0.2] [Code Root Fixup: 0.0 ms] [Clear CT: 0.8 ms] [Other: 4.8 ms] [Choose CSet: 0.0 ms] [Ref Proc: 2.0 ms] [Ref Enq: 0.1 ms] [Free CSet: 0.6 ms] [Eden: 3728.0M(3728.0M)->0.0B(360.0M) Survivors: 408.0M->384.0M Heap: 12.9G(14.6G)->9559.3M(14.6G)] [Times: user=2.07 sys=0.00, real=0.18 secs] 2013-10-28T15:59:22.778+0100: 21479.666: [GC pause (mixed) Desired survivor size 50331648 bytes, new threshold 1 (max 15) - age 1: 95113344 bytes, 95113344 total - age 2: 33760400 bytes, 128873744 total - age 3: 33282608 bytes, 162156352 total - age 4: 45404280 bytes, 207560632 total - age 5: 36638688 bytes, 244199320 total - age 6: 21839560 bytes, 266038880 total 21479.666: [G1Ergonomics (CSet Construction) start choosing CSet, _pending_cards: 127727, predicted base time: 87.54 ms, remaining time: 212.46 ms, target pause time: 300.00 ms] 21479.666: [G1Ergonomics (CSet Construction) add young regions to CSet, eden: 45 regions, survivors: 48 regions, 
predicted young region time: 56.91 ms] 21479.667: [G1Ergonomics (CSet Construction) finish adding old regions to CSet, reason: old CSet region num reached max, old: 188 regions, max: 188 regions] 21479.667: [G1Ergonomics (CSet Construction) finish choosing CSet, eden: 45 regions, survivors: 48 regions, old: 188 regions, predicted pause time: 217.53 ms, target pause time: 300.00 ms] 21479.921: [G1Ergonomics (Mixed GCs) continue mixed GCs, reason: candidate old regions available, candidate old regions: 447 regions, reclaimable: 2132693056 bytes (13.56 %), threshold: 5.00 %] , 0.2550780 secs] [Parallel Time: 243.3 ms, GC Workers: 12] [GC Worker Start (ms): 21479667.1 21479667.1 21479667.1 21479667.2 21479667.2 21479667.2 21479667.3 21479667.3 21479667.3 21479667.3 21479667.4 21479667.4 Min: 21479667.1, Avg: 21479667.3, Max: 21479667.4, Diff: 0.3] [Ext Root Scanning (ms): 45.6 63.8 59.3 45.9 46.0 46.9 46.5 46.0 45.6 45.2 89.0 63.4 Min: 45.2, Avg: 53.6, Max: 89.0, Diff: 43.8, Sum: 643.3] [Update RS (ms): 29.6 10.2 14.7 29.4 28.9 29.5 28.3 28.9 29.3 29.9 0.0 10.1 Min: 0.0, Avg: 22.4, Max: 29.9, Diff: 29.9, Sum: 268.8] [Processed Buffers: 69 32 57 76 54 67 86 69 74 75 0 26 Min: 0, Avg: 57.1, Max: 86, Diff: 86, Sum: 685] [Scan RS (ms): 39.4 39.5 39.7 40.1 39.6 39.8 39.7 39.5 40.2 39.5 5.5 39.6 Min: 5.5, Avg: 36.8, Max: 40.2, Diff: 34.7, Sum: 442.0] [Object Copy (ms): 128.6 129.5 129.3 127.7 128.4 126.8 128.4 128.4 127.8 128.3 148.3 129.7 Min: 126.8, Avg: 130.1, Max: 148.3, Diff: 21.5, Sum: 1561.3] [Termination (ms): 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.0] [Termination Attempts: 1 4 13 12 7 8 8 7 2 8 7 8 Min: 1, Avg: 7.1, Max: 13, Diff: 12, Sum: 85] [GC Worker Other (ms): 0.1 0.1 0.1 0.0 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 Min: 0.0, Avg: 0.1, Max: 0.1, Diff: 0.1, Sum: 1.0] [GC Worker Total (ms): 243.2 243.2 243.2 243.1 243.1 243.0 243.0 243.0 243.0 243.0 242.9 242.9 Min: 242.9, Avg: 243.0, Max: 243.2, Diff: 0.4, Sum: 
2916.6] [GC Worker End (ms): 21479910.3 21479910.3 21479910.3 21479910.3 21479910.3 21479910.3 21479910.3 21479910.3 21479910.3 21479910.3 21479910.2 21479910.3 Min: 21479910.2, Avg: 21479910.3, Max: 21479910.3, Diff: 0.1] [Code Root Fixup: 0.0 ms] [Clear CT: 2.0 ms] [Other: 9.8 ms] [Choose CSet: 0.5 ms] [Ref Proc: 2.9 ms] [Ref Enq: 0.1 ms] [Free CSet: 1.6 ms] [Eden: 360.0M(360.0M)->0.0B(648.0M) Survivors: 384.0M->96.0M Heap: 9919.3M(14.6G)->8419.4M(14.6G)] [Times: user=2.77 sys=0.00, real=0.25 secs] 2013-10-28T15:59:30.652+0100: 21487.540: [GC pause (mixed) Desired survivor size 50331648 bytes, new threshold 15 (max 15) - age 1: 7901552 bytes, 7901552 total 21487.540: [G1Ergonomics (CSet Construction) start choosing CSet, _pending_cards: 218925, predicted base time: 105.38 ms, remaining time: 194.62 ms, target pause time: 300.00 ms] 21487.540: [G1Ergonomics (CSet Construction) add young regions to CSet, eden: 81 regions, survivors: 12 regions, predicted young region time: 23.27 ms] 21487.541: [G1Ergonomics (CSet Construction) finish adding old regions to CSet, reason: predicted time is too high, predicted time: 1.33 ms, remaining time: 1.08 ms, old: 182 regions, min: 80 regions] 21487.541: [G1Ergonomics (CSet Construction) finish choosing CSet, eden: 81 regions, survivors: 12 regions, old: 182 regions, predicted pause time: 298.92 ms, target pause time: 300.00 ms] 21487.843: [G1Ergonomics (Mixed GCs) continue mixed GCs, reason: candidate old regions available, candidate old regions: 265 regions, reclaimable: 1054790352 bytes (6.71 %), threshold: 5.00 %] , 0.3030120 secs] [Parallel Time: 290.2 ms, GC Workers: 12] [GC Worker Start (ms): 21487541.6 21487541.6 21487541.6 21487541.8 21487541.8 21487541.8 21487541.8 21487541.9 21487541.9 21487541.9 21487541.9 21487541.9 Min: 21487541.6, Avg: 21487541.8, Max: 21487541.9, Diff: 0.4] [Ext Root Scanning (ms): 46.3 45.6 47.1 65.1 46.3 46.1 59.6 46.6 45.4 64.4 97.2 45.8 Min: 45.4, Avg: 54.6, Max: 97.2, Diff: 51.8, Sum: 655.6] 
[Update RS (ms): 55.1 55.4 55.7 35.0 54.8 54.8 40.6 54.1 55.5 35.2 0.0 54.9 Min: 0.0, Avg: 45.9, Max: 55.7, Diff: 55.7, Sum: 551.1] [Processed Buffers: 95 87 102 72 102 107 104 81 105 93 0 116 Min: 0, Avg: 88.7, Max: 116, Diff: 116, Sum: 1064] [Scan RS (ms): 36.6 36.8 36.3 36.4 36.6 36.6 36.4 36.6 36.7 36.6 16.9 36.9 Min: 16.9, Avg: 35.0, Max: 36.9, Diff: 20.0, Sum: 419.4] [Object Copy (ms): 151.9 151.9 150.7 153.0 151.8 152.2 153.0 152.3 151.9 153.3 175.4 151.9 Min: 150.7, Avg: 154.1, Max: 175.4, Diff: 24.7, Sum: 1849.2] [Termination (ms): 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.0] [Termination Attempts: 1 1 2 2 1 2 1 2 2 1 1 1 Min: 1, Avg: 1.4, Max: 2, Diff: 1, Sum: 17] [GC Worker Other (ms): 0.1 0.1 0.1 0.0 0.0 0.0 0.1 0.0 0.1 0.1 0.1 0.1 Min: 0.0, Avg: 0.1, Max: 0.1, Diff: 0.1, Sum: 0.9] [GC Worker Total (ms): 290.0 289.9 289.8 289.7 289.7 289.6 289.7 289.6 289.6 289.5 289.6 289.5 Min: 289.5, Avg: 289.7, Max: 290.0, Diff: 0.5, Sum: 3476.2] [GC Worker End (ms): 21487831.6 21487831.5 21487831.5 21487831.4 21487831.5 21487831.4 21487831.5 21487831.4 21487831.5 21487831.5 21487831.5 21487831.5 Min: 21487831.4, Avg: 21487831.5, Max: 21487831.6, Diff: 0.1] [Code Root Fixup: 0.0 ms] [Clear CT: 2.4 ms] [Other: 10.4 ms] [Choose CSet: 0.9 ms] [Ref Proc: 1.8 ms] [Ref Enq: 0.1 ms] [Free CSet: 2.1 ms] [Eden: 648.0M(648.0M)->0.0B(648.0M) Survivors: 96.0M->96.0M Heap: 9067.4M(14.6G)->7480.0M(14.6G)] [Times: user=3.51 sys=0.00, real=0.30 secs] 2013-10-28T15:59:38.671+0100: 21495.559: [GC pause (mixed) Desired survivor size 50331648 bytes, new threshold 15 (max 15) - age 1: 24399800 bytes, 24399800 total - age 2: 4322520 bytes, 28722320 total 21495.559: [G1Ergonomics (CSet Construction) start choosing CSet, _pending_cards: 199259, predicted base time: 103.72 ms, remaining time: 196.28 ms, target pause time: 300.00 ms] 21495.559: [G1Ergonomics (CSet Construction) add young regions to CSet, eden: 81 regions, survivors: 12 
regions, predicted young region time: 24.30 ms] 21495.560: [G1Ergonomics (CSet Construction) finish adding old regions to CSet, reason: reclaimable percentage not over threshold, old: 62 regions, max: 188 regions, reclaimable: 784279232 bytes (4.99 %), threshold: 5.00 %] 21495.560: [G1Ergonomics (CSet Construction) finish choosing CSet, eden: 81 regions, survivors: 12 regions, old: 62 regions, predicted pause time: 222.80 ms, target pause time: 300.00 ms] 21495.811: [G1Ergonomics (Mixed GCs) do not continue mixed GCs, reason: reclaimable percentage not over threshold, candidate old regions: 203 regions, reclaimable: 784279232 bytes (4.99 %), threshold: 5.00 %] , 0.2522200 secs] [Parallel Time: 241.2 ms, GC Workers: 12] [GC Worker Start (ms): 21495560.3 21495560.4 21495560.4 21495560.4 21495560.4 21495560.5 21495560.5 21495560.5 21495560.5 21495560.5 21495560.6 21495560.6 Min: 21495560.3, Avg: 21495560.5, Max: 21495560.6, Diff: 0.3] [Ext Root Scanning (ms): 64.9 45.7 64.9 45.9 45.6 47.1 99.2 59.5 45.4 45.5 45.2 46.0 Min: 45.2, Avg: 54.6, Max: 99.2, Diff: 54.0, Sum: 654.8] [Update RS (ms): 42.9 63.9 43.1 63.8 63.8 63.6 0.0 48.9 63.8 63.6 64.1 63.0 Min: 0.0, Avg: 53.7, Max: 64.1, Diff: 64.1, Sum: 644.4] [Processed Buffers: 70 97 93 92 108 97 0 97 102 100 104 96 Min: 0, Avg: 88.0, Max: 108, Diff: 108, Sum: 1056] [Scan RS (ms): 34.9 35.0 34.9 34.6 34.8 34.9 16.6 34.8 34.8 34.8 34.8 35.0 Min: 16.6, Avg: 33.3, Max: 35.0, Diff: 18.5, Sum: 400.0] [Object Copy (ms): 98.3 96.2 97.9 96.6 96.6 95.2 125.0 97.5 96.8 96.7 96.6 96.7 Min: 95.2, Avg: 99.2, Max: 125.0, Diff: 29.8, Sum: 1190.1] [Termination (ms): 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.0] [Termination Attempts: 11 1 10 8 4 6 9 7 5 1 6 12 Min: 1, Avg: 6.7, Max: 12, Diff: 11, Sum: 80] [GC Worker Other (ms): 0.1 0.1 0.1 0.1 0.0 0.1 0.0 0.1 0.1 0.0 0.1 0.1 Min: 0.0, Avg: 0.1, Max: 0.1, Diff: 0.1, Sum: 0.9] [GC Worker Total (ms): 241.0 241.0 241.0 240.9 240.8 240.9 
240.7 240.8 240.8 240.7 240.7 240.7 Min: 240.7, Avg: 240.8, Max: 241.0, Diff: 0.3, Sum: 2890.2] [GC Worker End (ms): 21495801.3 21495801.3 21495801.3 21495801.3 21495801.3 21495801.3 21495801.3 21495801.3 21495801.3 21495801.3 21495801.3 21495801.3 Min: 21495801.3, Avg: 21495801.3, Max: 21495801.3, Diff: 0.1] [Code Root Fixup: 0.3 ms] [Clear CT: 2.6 ms] [Other: 8.1 ms] [Choose CSet: 0.6 ms] [Ref Proc: 2.0 ms] [Ref Enq: 0.1 ms] [Free CSet: 1.2 ms] [Eden: 648.0M(648.0M)->0.0B(648.0M) Survivors: 96.0M->96.0M Heap: 8128.0M(14.6G)->7263.3M(14.6G)] [Times: user=2.93 sys=0.00, real=0.25 secs] 2013-10-28T15:59:44.685+0100: 21501.573: [GC pause (young) Desired survivor size 50331648 bytes, new threshold 15 (max 15) - age 1: 26676824 bytes, 26676824 total - age 2: 17917840 bytes, 44594664 total - age 3: 3599704 bytes, 48194368 total 21501.574: [G1Ergonomics (CSet Construction) start choosing CSet, _pending_cards: 160602, predicted base time: 101.20 ms, remaining time: 198.80 ms, target pause time: 300.00 ms] 21501.574: [G1Ergonomics (CSet Construction) add young regions to CSet, eden: 81 regions, survivors: 12 regions, predicted young region time: 60.87 ms] 21501.574: [G1Ergonomics (CSet Construction) finish choosing CSet, eden: 81 regions, survivors: 12 regions, old: 0 regions, predicted pause time: 162.08 ms, target pause time: 300.00 ms] , 0.1277600 secs] [Parallel Time: 123.2 ms, GC Workers: 12] [GC Worker Start (ms): 21501573.9 21501574.0 21501574.1 21501574.1 21501574.1 21501574.1 21501574.2 21501574.3 21501574.3 21501574.3 21501574.3 21501574.3 Min: 21501573.9, Avg: 21501574.2, Max: 21501574.3, Diff: 0.4] [Ext Root Scanning (ms): 46.0 45.5 58.9 100.0 63.9 64.1 45.7 45.2 45.4 45.2 45.3 44.8 Min: 44.8, Avg: 54.2, Max: 100.0, Diff: 55.2, Sum: 650.1] [Update RS (ms): 51.6 52.3 38.6 0.0 32.2 31.9 51.4 52.0 52.1 51.9 51.7 52.0 Min: 0.0, Avg: 43.1, Max: 52.3, Diff: 52.3, Sum: 517.8] [Processed Buffers: 77 72 73 0 69 56 79 88 78 73 88 80 Min: 0, Avg: 69.4, Max: 88, Diff: 88, 
Sum: 833] [Scan RS (ms): 0.6 0.6 0.8 0.0 0.7 0.8 0.9 0.6 0.4 0.8 0.8 0.8 Min: 0.0, Avg: 0.6, Max: 0.9, Diff: 0.9, Sum: 7.6] [Object Copy (ms): 20.1 19.9 19.9 22.9 21.2 21.3 20.0 20.2 20.0 20.2 20.2 20.3 Min: 19.9, Avg: 20.5, Max: 22.9, Diff: 3.0, Sum: 246.2] [Termination (ms): 4.8 4.8 4.8 0.0 4.8 4.8 4.8 4.8 4.8 4.7 4.8 4.8 Min: 0.0, Avg: 4.4, Max: 4.8, Diff: 4.8, Sum: 52.6] [Termination Attempts: 148 161 127 1 142 147 162 139 163 1 154 155 Min: 1, Avg: 125.0, Max: 163, Diff: 162, Sum: 1500] [GC Worker Other (ms): 0.1 0.1 0.1 0.0 0.0 0.1 0.1 0.1 0.1 0.1 0.1 0.1 Min: 0.0, Avg: 0.1, Max: 0.1, Diff: 0.1, Sum: 0.8] [GC Worker Total (ms): 123.2 123.1 123.0 123.0 122.9 122.9 122.9 122.8 122.8 122.8 122.8 122.8 Min: 122.8, Avg: 122.9, Max: 123.2, Diff: 0.4, Sum: 1475.1] [GC Worker End (ms): 21501697.1 21501697.1 21501697.1 21501697.0 21501697.1 21501697.1 21501697.1 21501697.1 21501697.1 21501697.1 21501697.1 21501697.1 Min: 21501697.0, Avg: 21501697.1, Max: 21501697.1, Diff: 0.1] [Code Root Fixup: 0.0 ms] [Clear CT: 0.8 ms] [Other: 3.7 ms] [Choose CSet: 0.0 ms] [Ref Proc: 1.9 ms] [Ref Enq: 0.1 ms] [Free CSet: 0.2 ms] [Eden: 648.0M(648.0M)->0.0B(648.0M) Survivors: 96.0M->96.0M Heap: 7911.3M(14.6G)->7293.7M(14.6G)] [Times: user=1.50 sys=0.00, real=0.13 secs] 2013-10-28T15:59:51.136+0100: 21508.024: [GC pause (young) Desired survivor size 50331648 bytes, new threshold 3 (max 15) - age 1: 22020176 bytes, 22020176 total - age 2: 20834232 bytes, 42854408 total - age 3: 16616152 bytes, 59470560 total - age 4: 3427912 bytes, 62898472 total 21508.025: [G1Ergonomics (CSet Construction) start choosing CSet, _pending_cards: 61972, predicted base time: 78.13 ms, remaining time: 221.87 ms, target pause time: 300.00 ms] 21508.025: [G1Ergonomics (CSet Construction) add young regions to CSet, eden: 81 regions, survivors: 12 regions, predicted young region time: 20.76 ms] 21508.025: [G1Ergonomics (CSet Construction) finish choosing CSet, eden: 81 regions, survivors: 12 regions, old: 0 
regions, predicted pause time: 98.89 ms, target pause time: 300.00 ms] , 0.1165410 secs] [Parallel Time: 112.6 ms, GC Workers: 12] [GC Worker Start (ms): 21508025.1 21508025.1 21508025.2 21508025.2 21508025.3 21508025.3 21508025.4 21508025.4 21508025.4 21508025.4 21508025.5 21508025.5 Min: 21508025.1, Avg: 21508025.3, Max: 21508025.5, Diff: 0.4] [Ext Root Scanning (ms): 46.1 45.4 45.6 64.0 44.9 47.0 45.8 58.7 91.3 64.7 45.6 45.2 Min: 44.9, Avg: 53.7, Max: 91.3, Diff: 46.4, Sum: 644.3] [Update RS (ms): 16.0 16.6 16.6 0.0 16.8 15.1 16.2 2.2 0.0 0.0 16.1 16.7 Min: 0.0, Avg: 11.0, Max: 16.8, Diff: 16.8, Sum: 132.3] [Processed Buffers: 41 67 65 0 49 68 51 28 0 0 58 61 Min: 0, Avg: 40.7, Max: 68, Diff: 68, Sum: 488] [Scan RS (ms): 0.0 0.1 0.0 0.0 0.1 0.0 0.0 0.1 0.0 0.0 0.1 0.0 Min: 0.0, Avg: 0.0, Max: 0.1, Diff: 0.1, Sum: 0.5] [Object Copy (ms): 23.7 23.7 23.6 21.7 23.8 23.6 23.6 24.5 20.8 20.8 23.7 23.5 Min: 20.8, Avg: 23.1, Max: 24.5, Diff: 3.8, Sum: 277.1] [Termination (ms): 26.6 26.6 26.6 26.6 26.6 26.6 26.6 26.6 0.0 26.6 26.6 26.6 Min: 0.0, Avg: 24.4, Max: 26.6, Diff: 26.6, Sum: 292.4] [Termination Attempts: 42 1 48 47 49 44 49 49 1 51 47 50 Min: 1, Avg: 39.8, Max: 51, Diff: 50, Sum: 478] [GC Worker Other (ms): 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.0 0.1 0.1 0.0 Min: 0.0, Avg: 0.1, Max: 0.1, Diff: 0.1, Sum: 0.8] [GC Worker Total (ms): 112.5 112.5 112.4 112.4 112.3 112.3 112.2 112.2 112.1 112.1 112.1 112.1 Min: 112.1, Avg: 112.3, Max: 112.5, Diff: 0.4, Sum: 1347.3] [GC Worker End (ms): 21508137.6 21508137.6 21508137.6 21508137.6 21508137.6 21508137.6 21508137.6 21508137.6 21508137.5 21508137.6 21508137.6 21508137.6 Min: 21508137.5, Avg: 21508137.6, Max: 21508137.6, Diff: 0.1] [Code Root Fixup: 0.0 ms] [Clear CT: 0.2 ms] [Other: 3.7 ms] [Choose CSet: 0.0 ms] [Ref Proc: 2.0 ms] [Ref Enq: 0.1 ms] [Free CSet: 0.2 ms] [Eden: 648.0M(648.0M)->0.0B(648.0M) Survivors: 96.0M->96.0M Heap: 7941.7M(14.6G)->7391.1M(14.6G)] [Times: user=1.35 sys=0.01, real=0.11 secs] 
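The flattened per-phase statistics in these entries are easier to compare once the per-phase `Avg` values are pulled out. Below is a minimal sketch of such a parser — a hypothetical helper, not part of any JDK tooling; the sample string reduces the young pause above to its summary statistics, and in real use the `GC Worker Start`/`GC Worker End` lines (which also carry an `(ms)` suffix but hold timestamps, not durations) would need filtering out:

```python
import re

# Pulls the per-phase "Avg" value out of a -XX:+PrintGCDetails parallel-phase
# segment such as:
#   [Ext Root Scanning (ms): 46.1 45.4 ... Min: 44.9, Avg: 53.7, Max: 91.3, ...]
# so the phases that dominate a pause can be ranked per worker.
PHASE_RE = re.compile(r"\[([A-Z][\w ]+?) \(ms\):[^\]]*?Avg: ([0-9.]+)")

def phase_averages(log_text):
    """Return {phase name: average ms per worker} for one GC pause entry."""
    return {name: float(avg) for name, avg in PHASE_RE.findall(log_text)}

# The young pause above, reduced to its summary statistics:
entry = ("[Ext Root Scanning (ms): Min: 44.9, Avg: 53.7, Max: 91.3, Diff: 46.4, Sum: 644.3] "
         "[Update RS (ms): Min: 0.0, Avg: 11.0, Max: 16.8, Diff: 16.8, Sum: 132.3] "
         "[Object Copy (ms): Min: 20.8, Avg: 23.1, Max: 24.5, Diff: 3.8, Sum: 277.1]")
averages = phase_averages(entry)
# Ext Root Scanning dominates this pause on a per-worker basis.
assert max(averages, key=averages.get) == "Ext Root Scanning"
```

Ranking phases this way makes the pattern in these logs easy to see: Ext Root Scanning is the largest per-worker cost in the young pauses above, while Update RS and Object Copy stay comparatively small.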
2013-10-28T15:59:58.723+0100: 21515.610: [GC pause (young) Desired survivor size 50331648 bytes, new threshold 3 (max 15) - age 1: 20444800 bytes, 20444800 total - age 2: 14599736 bytes, 35044536 total - age 3: 17794192 bytes, 52838728 total 21515.611: [G1Ergonomics (CSet Construction) start choosing CSet, _pending_cards: 90998, predicted base time: 82.82 ms, remaining time: 217.18 ms, target pause time: 300.00 ms] 21515.611: [G1Ergonomics (CSet Construction) add young regions to CSet, eden: 81 regions, survivors: 12 regions, predicted young region time: 19.51 ms] 21515.611: [G1Ergonomics (CSet Construction) finish choosing CSet, eden: 81 regions, survivors: 12 regions, old: 0 regions, predicted pause time: 102.33 ms, target pause time: 300.00 ms] , 0.1198280 secs] [Parallel Time: 116.2 ms, GC Workers: 12] [GC Worker Start (ms): 21515611.0 21515611.0 21515611.1 21515611.1 21515611.1 21515611.2 21515611.2 21515611.3 21515611.3 21515611.3 21515611.3 21515611.3 Min: 21515611.0, Avg: 21515611.2, Max: 21515611.3, Diff: 0.4] [Ext Root Scanning (ms): 45.7 45.9 45.8 64.0 45.6 64.2 45.5 94.2 46.6 59.6 45.7 45.8 Min: 45.5, Avg: 54.0, Max: 94.2, Diff: 48.7, Sum: 648.5] [Update RS (ms): 24.9 24.4 24.7 5.3 25.0 5.5 25.2 0.0 25.4 9.8 24.5 24.6 Min: 0.0, Avg: 18.3, Max: 25.4, Diff: 25.4, Sum: 219.5] [Processed Buffers: 66 54 63 34 45 33 54 0 58 57 65 56 Min: 0, Avg: 48.8, Max: 66, Diff: 66, Sum: 585] [Scan RS (ms): 0.0 0.2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.1 0.0 0.0 Min: 0.0, Avg: 0.0, Max: 0.2, Diff: 0.2, Sum: 0.4] [Object Copy (ms): 20.9 20.9 20.9 22.1 20.8 21.7 20.6 21.5 19.2 21.7 21.0 20.8 Min: 19.2, Avg: 21.0, Max: 22.1, Diff: 2.9, Sum: 252.2] [Termination (ms): 24.5 24.5 24.5 24.5 24.5 24.5 24.5 0.0 24.5 24.5 24.5 24.5 Min: 0.0, Avg: 22.5, Max: 24.5, Diff: 24.5, Sum: 269.8] [Termination Attempts: 1 1 1 1 1 1 1 1 1 1 1 1 Min: 1, Avg: 1.0, Max: 1, Diff: 0, Sum: 12] [GC Worker Other (ms): 0.0 0.1 0.1 0.1 0.1 0.1 0.1 0.0 0.0 0.1 0.1 0.0 Min: 0.0, Avg: 0.1, Max: 0.1, Diff: 0.1, Sum: 
0.9] [GC Worker Total (ms): 116.1 116.1 116.1 116.1 116.0 116.0 115.9 115.8 115.8 115.8 115.8 115.8 Min: 115.8, Avg: 115.9, Max: 116.1, Diff: 0.4, Sum: 1391.3] [GC Worker End (ms): 21515727.1 21515727.1 21515727.1 21515727.1 21515727.1 21515727.1 21515727.1 21515727.0 21515727.1 21515727.1 21515727.1 21515727.1 Min: 21515727.0, Avg: 21515727.1, Max: 21515727.1, Diff: 0.1] [Code Root Fixup: 0.0 ms] [Clear CT: 0.1 ms] [Other: 3.5 ms] [Choose CSet: 0.0 ms] [Ref Proc: 1.8 ms] [Ref Enq: 0.1 ms] [Free CSet: 0.2 ms] [Eden: 648.0M(648.0M)->0.0B(648.0M) Survivors: 96.0M->96.0M Heap: 8039.1M(14.6G)->7477.8M(14.6G)] [Times: user=1.39 sys=0.00, real=0.12 secs] 2013-10-28T16:00:01.284+0100: 21518.172: [GC pause (young) Desired survivor size 50331648 bytes, new threshold 3 (max 15) - age 1: 22178080 bytes, 22178080 total - age 2: 15771712 bytes, 37949792 total - age 3: 12924848 bytes, 50874640 total 21518.172: [G1Ergonomics (CSet Construction) start choosing CSet, _pending_cards: 74257, predicted base time: 77.45 ms, remaining time: 222.55 ms, target pause time: 300.00 ms] 21518.172: [G1Ergonomics (CSet Construction) add young regions to CSet, eden: 81 regions, survivors: 12 regions, predicted young region time: 16.66 ms] 21518.172: [G1Ergonomics (CSet Construction) finish choosing CSet, eden: 81 regions, survivors: 12 regions, old: 0 regions, predicted pause time: 94.12 ms, target pause time: 300.00 ms] , 0.1202740 secs] [Parallel Time: 114.7 ms, GC Workers: 12] [GC Worker Start (ms): 21518172.6 21518172.7 21518172.7 21518172.7 21518172.8 21518172.8 21518172.8 21518172.9 21518173.0 21518173.0 21518173.0 21518173.0 Min: 21518172.6, Avg: 21518172.8, Max: 21518173.0, Diff: 0.4] [Ext Root Scanning (ms): 61.8 46.5 46.0 45.1 45.3 45.2 45.4 45.9 59.6 47.8 92.0 64.5 Min: 45.1, Avg: 53.8, Max: 92.0, Diff: 46.9, Sum: 645.1] [Update RS (ms): 2.4 18.7 20.2 20.8 19.9 19.9 34.4 19.4 4.5 16.9 0.0 0.6 Min: 0.0, Avg: 14.8, Max: 34.4, Diff: 34.4, Sum: 177.8] [Processed Buffers: 32 53 43 51 70 47 
19 46 33 46 0 1 Min: 0, Avg: 36.8, Max: 70, Diff: 70, Sum: 441] [Scan RS (ms): 0.1 0.1 0.0 0.0 0.0 0.1 0.0 0.0 0.1 0.1 0.0 0.0 Min: 0.0, Avg: 0.0, Max: 0.1, Diff: 0.1, Sum: 0.5] [Object Copy (ms): 22.1 21.0 20.1 20.4 20.9 20.9 6.3 39.3 21.8 21.2 21.1 20.9 Min: 6.3, Avg: 21.3, Max: 39.3, Diff: 33.0, Sum: 256.0] [Termination (ms): 27.1 27.1 27.1 27.1 27.1 27.1 27.1 8.6 28.2 27.1 0.0 27.1 Min: 0.0, Avg: 23.4, Max: 28.2, Diff: 28.2, Sum: 280.6] [Termination Attempts: 23 17 15 16 18 17 14 1 17 14 1 14 Min: 1, Avg: 13.9, Max: 23, Diff: 22, Sum: 167] [GC Worker Other (ms): 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.0 0.1 0.0 0.1 Min: 0.0, Avg: 0.1, Max: 0.1, Diff: 0.1, Sum: 0.9] [GC Worker Total (ms): 113.6 113.5 113.5 113.4 113.3 113.3 113.3 113.3 114.2 113.2 113.1 113.2 Min: 113.1, Avg: 113.4, Max: 114.2, Diff: 1.1, Sum: 1360.9] [GC Worker End (ms): 21518286.2 21518286.1 21518286.2 21518286.2 21518286.2 21518286.2 21518286.2 21518286.2 21518287.1 21518286.2 21518286.1 21518286.2 Min: 21518286.1, Avg: 21518286.2, Max: 21518287.1, Diff: 1.1] [Code Root Fixup: 0.0 ms] [Clear CT: 0.2 ms] [Other: 5.4 ms] [Choose CSet: 0.0 ms] [Ref Proc: 3.8 ms] [Ref Enq: 0.1 ms] [Free CSet: 0.2 ms] [Eden: 648.0M(648.0M)->0.0B(648.0M) Survivors: 96.0M->96.0M Heap: 8125.8M(14.6G)->7548.1M(14.6G)] [Times: user=1.31 sys=0.00, real=0.12 secs] 2013-10-28T16:00:05.597+0100: 21522.485: [GC pause (young) Desired survivor size 50331648 bytes, new threshold 15 (max 15) - age 1: 16592056 bytes, 16592056 total - age 2: 17463768 bytes, 34055824 total - age 3: 15349624 bytes, 49405448 total 21522.486: [G1Ergonomics (CSet Construction) start choosing CSet, _pending_cards: 76409, predicted base time: 77.52 ms, remaining time: 222.48 ms, target pause time: 300.00 ms] 21522.486: [G1Ergonomics (CSet Construction) add young regions to CSet, eden: 81 regions, survivors: 12 regions, predicted young region time: 18.97 ms] 21522.486: [G1Ergonomics (CSet Construction) finish choosing CSet, eden: 81 regions, survivors: 12 
regions, old: 0 regions, predicted pause time: 96.49 ms, target pause time: 300.00 ms] , 0.1174670 secs] [Parallel Time: 113.2 ms, GC Workers: 12] [GC Worker Start (ms): 21522485.8 21522485.9 21522485.9 21522486.0 21522486.1 21522486.2 21522486.2 21522486.2 21522486.2 21522486.2 21522486.3 21522486.3 Min: 21522485.8, Avg: 21522486.1, Max: 21522486.3, Diff: 0.4] [Ext Root Scanning (ms): 45.3 91.8 46.0 46.9 63.4 58.7 45.1 63.1 45.3 46.3 45.7 48.3 Min: 45.1, Avg: 53.8, Max: 91.8, Diff: 46.7, Sum: 645.9] [Update RS (ms): 20.4 0.0 19.4 19.7 0.8 6.0 19.9 0.8 19.9 18.8 19.6 17.1 Min: 0.0, Avg: 13.5, Max: 20.4, Diff: 20.4, Sum: 162.4] [Processed Buffers: 57 0 68 73 8 33 55 2 48 62 61 52 Min: 0, Avg: 43.2, Max: 73, Diff: 73, Sum: 519] [Scan RS (ms): 0.0 0.0 0.1 0.1 0.0 0.0 0.1 0.0 0.1 0.1 0.1 0.0 Min: 0.0, Avg: 0.0, Max: 0.1, Diff: 0.1, Sum: 0.5] [Object Copy (ms): 21.5 21.0 21.6 20.2 22.6 22.1 21.7 22.9 21.5 21.5 21.4 21.3 Min: 20.2, Avg: 21.6, Max: 22.9, Diff: 2.7, Sum: 259.2] [Termination (ms): 25.7 0.0 25.7 25.7 25.7 25.7 25.7 25.7 25.7 25.7 25.7 25.7 Min: 0.0, Avg: 23.6, Max: 25.7, Diff: 25.7, Sum: 283.0] [Termination Attempts: 19 1 13 16 14 9 17 18 15 20 1 16 Min: 1, Avg: 13.2, Max: 20, Diff: 19, Sum: 159] [GC Worker Other (ms): 0.1 0.0 0.1 0.0 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 Min: 0.0, Avg: 0.1, Max: 0.1, Diff: 0.1, Sum: 1.0] [GC Worker Total (ms): 112.9 112.8 112.8 112.7 112.6 112.6 112.6 112.6 112.6 112.6 112.5 112.5 Min: 112.5, Avg: 112.7, Max: 112.9, Diff: 0.4, Sum: 1351.9] [GC Worker End (ms): 21522598.7 21522598.7 21522598.8 21522598.7 21522598.8 21522598.8 21522598.8 21522598.8 21522598.8 21522598.8 21522598.8 21522598.8 Min: 21522598.7, Avg: 21522598.8, Max: 21522598.8, Diff: 0.1] [Code Root Fixup: 0.0 ms] [Clear CT: 0.2 ms] [Other: 4.1 ms] [Choose CSet: 0.0 ms] [Ref Proc: 2.5 ms] [Ref Enq: 0.1 ms] [Free CSet: 0.2 ms] [Eden: 648.0M(648.0M)->0.0B(648.0M) Survivors: 96.0M->96.0M Heap: 8196.1M(14.6G)->7582.0M(14.6G)] [Times: user=1.36 sys=0.00, real=0.11 secs] 
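What stands out in these pauses is not just the average root-scanning cost but its spread across workers: in the entry above one worker spends 91.8 ms in Ext Root Scanning against a 53.8 ms average, and the remaining workers then sit roughly 25.7 ms each in Termination waiting for it. A small sketch of that skew computation (`imbalance` is a hypothetical helper; the per-worker values are copied from the log entry above):

```python
def imbalance(times_ms):
    """Max/Avg skew for one parallel GC phase; ~1.0 means an even work split."""
    avg = sum(times_ms) / len(times_ms)
    return max(times_ms) / avg

# Per-worker Ext Root Scanning times (ms) from the pause above:
ext_root = [45.3, 91.8, 46.0, 46.9, 63.4, 58.7, 45.1, 63.1, 45.3, 46.3, 45.7, 48.3]
skew = imbalance(ext_root)
assert skew > 1.5  # one worker carries ~1.7x the average load
```

A skew near 1.0 would mean the root-scanning work was evenly split; a value around 1.7 here suggests one large, indivisible root group — consistent with the code-root-marking suspicion in the original question — serializing part of each pause while the other eleven workers idle in Termination.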
2013-10-28T16:00:06.595+0100: 21523.483: [GC pause (young) Desired survivor size 50331648 bytes, new threshold 3 (max 15) - age 1: 26815464 bytes, 26815464 total - age 2: 12126232 bytes, 38941696 total - age 3: 15333080 bytes, 54274776 total - age 4: 13133256 bytes, 67408032 total 21523.484: [G1Ergonomics (CSet Construction) start choosing CSet, _pending_cards: 40586, predicted base time: 68.66 ms, remaining time: 231.34 ms, target pause time: 300.00 ms] 21523.484: [G1Ergonomics (CSet Construction) add young regions to CSet, eden: 81 regions, survivors: 12 regions, predicted young region time: 28.52 ms] 21523.484: [G1Ergonomics (CSet Construction) finish choosing CSet, eden: 81 regions, survivors: 12 regions, old: 0 regions, predicted pause time: 97.18 ms, target pause time: 300.00 ms] , 0.1269940 secs] [Parallel Time: 118.4 ms, GC Workers: 12] [GC Worker Start (ms): 21523483.8 21523483.8 21523483.9 21523483.9 21523483.9 21523483.9 21523483.9 21523483.9 21523483.9 21523484.0 21523484.1 21523484.0 Min: 21523483.8, Avg: 21523483.9, Max: 21523484.1, Diff: 0.2] [Ext Root Scanning (ms): 87.3 52.5 69.8 96.8 69.7 60.8 53.7 72.7 70.6 54.0 54.1 86.7 Min: 52.5, Avg: 69.1, Max: 96.8, Diff: 44.3, Sum: 828.6] [Update RS (ms): 0.0 20.6 2.1 0.0 2.0 12.0 20.6 1.2 2.4 19.4 46.9 0.0 Min: 0.0, Avg: 10.6, Max: 46.9, Diff: 46.9, Sum: 127.2] [Processed Buffers: 0 62 18 0 12 35 53 4 22 42 23 0 Min: 0, Avg: 22.6, Max: 62, Diff: 62, Sum: 271] [Scan RS (ms): 0.0 0.0 0.1 0.0 0.0 0.0 0.0 0.0 0.0 0.2 0.0 0.0 Min: 0.0, Avg: 0.0, Max: 0.2, Diff: 0.2, Sum: 0.4] [Object Copy (ms): 15.0 29.2 30.5 21.3 30.5 29.5 27.9 36.0 29.2 28.6 1.7 15.4 Min: 1.7, Avg: 24.6, Max: 36.0, Diff: 34.4, Sum: 294.8] [Termination (ms): 15.9 15.9 15.8 0.0 15.8 15.8 15.8 8.1 15.8 15.8 15.3 16.0 Min: 0.0, Avg: 13.8, Max: 16.0, Diff: 16.0, Sum: 166.0] [Termination Attempts: 36 66 109 1 76 66 71 1 66 71 1 35 Min: 1, Avg: 49.9, Max: 109, Diff: 108, Sum: 599] [GC Worker Other (ms): 0.1 0.1 0.1 0.0 0.1 0.1 0.0 0.1 0.1 0.1 0.1 
0.1 Min: 0.0, Avg: 0.1, Max: 0.1, Diff: 0.1, Sum: 1.0] [GC Worker Total (ms): 118.3 118.3 118.2 118.2 118.2 118.2 118.1 118.2 118.2 118.1 118.0 118.1 Min: 118.0, Avg: 118.2, Max: 118.3, Diff: 0.2, Sum: 1418.1] [GC Worker End (ms): 21523602.1 21523602.1 21523602.1 21523602.0 21523602.1 21523602.1 21523602.1 21523602.1 21523602.1 21523602.1 21523602.1 21523602.1 Min: 21523602.0, Avg: 21523602.1, Max: 21523602.1, Diff: 0.1] [Code Root Fixup: 0.0 ms] [Clear CT: 0.2 ms] [Other: 8.4 ms] [Choose CSet: 0.0 ms] [Ref Proc: 6.6 ms] [Ref Enq: 0.1 ms] [Free CSet: 0.2 ms] [Eden: 648.0M(648.0M)->0.0B(648.0M) Survivors: 96.0M->96.0M Heap: 8230.0M(14.6G)->7629.0M(14.6G)] [Times: user=1.17 sys=0.00, real=0.13 secs] 2013-10-28T16:00:07.417+0100: 21524.305: [GC pause (young) Desired survivor size 50331648 bytes, new threshold 15 (max 15) - age 1: 10249616 bytes, 10249616 total - age 2: 21417864 bytes, 31667480 total - age 3: 11865880 bytes, 43533360 total 21524.305: [G1Ergonomics (CSet Construction) start choosing CSet, _pending_cards: 73249, predicted base time: 85.31 ms, remaining time: 214.69 ms, target pause time: 300.00 ms] 21524.305: [G1Ergonomics (CSet Construction) add young regions to CSet, eden: 81 regions, survivors: 12 regions, predicted young region time: 17.95 ms] 21524.305: [G1Ergonomics (CSet Construction) finish choosing CSet, eden: 81 regions, survivors: 12 regions, old: 0 regions, predicted pause time: 103.27 ms, target pause time: 300.00 ms] , 0.1186000 secs] [Parallel Time: 115.5 ms, GC Workers: 12] [GC Worker Start (ms): 21524305.2 21524305.2 21524305.2 21524305.2 21524305.2 21524305.3 21524305.3 21524305.4 21524305.4 21524305.4 21524305.4 21524305.4 Min: 21524305.2, Avg: 21524305.3, Max: 21524305.4, Diff: 0.2] [Ext Root Scanning (ms): 47.6 63.0 70.1 65.3 48.5 61.7 64.8 67.3 47.5 60.0 93.6 47.8 Min: 47.5, Avg: 61.5, Max: 93.6, Diff: 46.1, Sum: 737.4] [Update RS (ms): 30.7 14.1 7.9 11.4 30.3 31.4 11.6 8.8 29.8 16.6 0.0 33.5 Min: 0.0, Avg: 18.8, Max: 33.5, Diff: 
33.5, Sum: 226.0] [Processed Buffers: 34 33 30 28 63 21 23 26 63 29 0 34 Min: 0, Avg: 32.0, Max: 63, Diff: 63, Sum: 384] [Scan RS (ms): 0.0 0.1 0.0 0.0 0.1 0.0 0.0 0.0 0.1 0.0 0.0 0.0 Min: 0.0, Avg: 0.0, Max: 0.1, Diff: 0.1, Sum: 0.5] [Object Copy (ms): 15.7 16.7 16.0 17.3 15.1 1.3 17.5 17.8 16.5 17.3 21.5 12.6 Min: 1.3, Avg: 15.5, Max: 21.5, Diff: 20.2, Sum: 185.5] [Termination (ms): 21.2 21.3 21.2 21.2 21.2 20.8 21.2 21.2 21.2 21.2 0.0 21.2 Min: 0.0, Avg: 19.4, Max: 21.3, Diff: 21.3, Sum: 233.2] [Termination Attempts: 37 36 44 34 45 1 37 42 42 36 1 32 Min: 1, Avg: 32.2, Max: 45, Diff: 44, Sum: 387] [GC Worker Other (ms): 0.1 0.1 0.1 0.1 0.0 0.1 0.1 0.1 0.1 0.1 0.0 0.1 Min: 0.0, Avg: 0.1, Max: 0.1, Diff: 0.1, Sum: 0.9] [GC Worker Total (ms): 115.4 115.4 115.4 115.4 115.3 115.3 115.3 115.2 115.2 115.2 115.1 115.2 Min: 115.1, Avg: 115.3, Max: 115.4, Diff: 0.3, Sum: 1383.5] [GC Worker End (ms): 21524420.6 21524420.6 21524420.6 21524420.6 21524420.6 21524420.6 21524420.6 21524420.6 21524420.6 21524420.6 21524420.5 21524420.6 Min: 21524420.5, Avg: 21524420.6, Max: 21524420.6, Diff: 0.1] [Code Root Fixup: 0.0 ms] [Clear CT: 0.2 ms] [Other: 3.0 ms] [Choose CSet: 0.0 ms] [Ref Proc: 1.5 ms] [Ref Enq: 0.1 ms] [Free CSet: 0.2 ms] [Eden: 648.0M(648.0M)->0.0B(648.0M) Survivors: 96.0M->96.0M Heap: 8277.0M(14.6G)->7634.9M(14.6G)] [Times: user=1.19 sys=0.01, real=0.11 secs] 2013-10-28T16:00:09.160+0100: 21526.048: [GC pause (young) Desired survivor size 50331648 bytes, new threshold 15 (max 15) - age 1: 6000328 bytes, 6000328 total - age 2: 9520600 bytes, 15520928 total - age 3: 20602264 bytes, 36123192 total - age 4: 11786432 bytes, 47909624 total 21526.048: [G1Ergonomics (CSet Construction) start choosing CSet, _pending_cards: 27950, predicted base time: 74.37 ms, remaining time: 225.63 ms, target pause time: 300.00 ms] 21526.048: [G1Ergonomics (CSet Construction) add young regions to CSet, eden: 81 regions, survivors: 12 regions, predicted young region time: 16.51 ms] 
21526.048: [G1Ergonomics (CSet Construction) finish choosing CSet, eden: 81 regions, survivors: 12 regions, old: 0 regions, predicted pause time: 90.88 ms, target pause time: 300.00 ms] , 0.1238860 secs] [Parallel Time: 119.6 ms, GC Workers: 12] [GC Worker Start (ms): 21526048.4 21526048.4 21526048.5 21526048.5 21526048.5 21526048.6 21526048.6 21526048.6 21526048.6 21526048.7 21526048.7 21526048.8 Min: 21526048.4, Avg: 21526048.6, Max: 21526048.8, Diff: 0.4] [Ext Root Scanning (ms): 64.3 48.7 56.1 59.2 84.8 66.6 48.1 60.6 98.8 66.3 48.2 61.0 Min: 48.1, Avg: 63.6, Max: 98.8, Diff: 50.7, Sum: 762.8] [Update RS (ms): 1.5 15.9 9.9 7.0 0.0 0.0 16.0 3.5 0.0 0.0 15.9 3.2 Min: 0.0, Avg: 6.1, Max: 16.0, Diff: 16.0, Sum: 73.1] [Processed Buffers: 2 40 29 21 0 0 38 13 0 0 31 22 Min: 0, Avg: 16.3, Max: 40, Diff: 40, Sum: 196] [Scan RS (ms): 0.0 0.0 0.1 0.0 0.0 0.0 0.1 0.0 0.0 0.0 0.1 0.0 Min: 0.0, Avg: 0.0, Max: 0.1, Diff: 0.1, Sum: 0.5] [Object Copy (ms): 17.9 19.1 17.6 22.1 2.5 17.0 19.3 19.4 20.5 17.1 19.2 19.1 Min: 2.5, Avg: 17.6, Max: 22.1, Diff: 19.5, Sum: 210.9] [Termination (ms): 35.7 35.7 35.7 31.1 32.0 35.7 35.7 35.7 0.0 35.7 35.7 35.7 Min: 0.0, Avg: 32.1, Max: 35.7, Diff: 35.7, Sum: 384.6] [Termination Attempts: 78 1 57 1 1 70 60 59 1 62 71 66 Min: 1, Avg: 43.9, Max: 78, Diff: 77, Sum: 527] [GC Worker Other (ms): 0.0 0.1 0.0 0.0 0.1 0.0 0.1 0.1 0.0 0.1 0.0 0.1 Min: 0.0, Avg: 0.0, Max: 0.1, Diff: 0.1, Sum: 0.6] [GC Worker Total (ms): 119.5 119.6 119.5 119.4 119.4 119.4 119.4 119.4 119.3 119.3 119.2 119.1 Min: 119.1, Avg: 119.4, Max: 119.6, Diff: 0.4, Sum: 1432.5] [GC Worker End (ms): 21526167.9 21526168.0 21526168.0 21526167.9 21526168.0 21526167.9 21526168.0 21526168.0 21526167.9 21526168.0 21526167.9 21526168.0 Min: 21526167.9, Avg: 21526168.0, Max: 21526168.0, Diff: 0.1] [Code Root Fixup: 0.0 ms] [Clear CT: 0.2 ms] [Other: 4.1 ms] [Choose CSet: 0.0 ms] [Ref Proc: 2.7 ms] [Ref Enq: 0.1 ms] [Free CSet: 0.2 ms] [Eden: 648.0M(648.0M)->0.0B(648.0M) Survivors: 
96.0M->96.0M Heap: 8282.9M(14.6G)->7659.5M(14.6G)] [Times: user=1.24 sys=0.00, real=0.12 secs] 2013-10-28T16:00:10.519+0100: 21527.407: [GC pause (young) Desired survivor size 50331648 bytes, new threshold 15 (max 15) - age 1: 8936336 bytes, 8936336 total - age 2: 3011208 bytes, 11947544 total - age 3: 9009224 bytes, 20956768 total - age 4: 19393152 bytes, 40349920 total - age 5: 9924104 bytes, 50274024 total 21527.407: [G1Ergonomics (CSet Construction) start choosing CSet, _pending_cards: 35510, predicted base time: 76.50 ms, remaining time: 223.50 ms, target pause time: 300.00 ms] 21527.407: [G1Ergonomics (CSet Construction) add young regions to CSet, eden: 81 regions, survivors: 12 regions, predicted young region time: 12.59 ms] 21527.407: [G1Ergonomics (CSet Construction) finish choosing CSet, eden: 81 regions, survivors: 12 regions, old: 0 regions, predicted pause time: 89.09 ms, target pause time: 300.00 ms] , 0.1681470 secs] [Parallel Time: 154.5 ms, GC Workers: 12] [GC Worker Start (ms): 21527407.6 21527407.6 21527407.7 21527407.7 21527407.7 21527407.7 21527407.7 21527407.8 21527407.8 21527407.8 21527407.8 21527407.8 Min: 21527407.6, Avg: 21527407.7, Max: 21527407.8, Diff: 0.2] [Ext Root Scanning (ms): 65.0 134.3 48.8 93.4 49.3 55.4 67.5 48.0 47.8 48.3 67.3 51.3 Min: 47.8, Avg: 64.7, Max: 134.3, Diff: 86.6, Sum: 776.3] [Update RS (ms): 0.0 0.0 12.9 0.0 13.1 6.4 0.0 13.0 13.2 12.7 0.0 10.0 Min: 0.0, Avg: 6.8, Max: 13.2, Diff: 13.2, Sum: 81.4] [Processed Buffers: 0 0 31 0 26 28 0 41 39 42 0 54 Min: 0, Avg: 21.8, Max: 54, Diff: 54, Sum: 261] [Scan RS (ms): 0.0 0.0 0.0 0.0 0.1 0.1 0.0 0.1 0.0 0.1 0.0 0.0 Min: 0.0, Avg: 0.0, Max: 0.1, Diff: 0.1, Sum: 0.5] [Object Copy (ms): 16.7 19.9 19.9 2.6 19.1 19.7 14.0 20.5 20.5 20.4 14.1 20.2 Min: 2.6, Avg: 17.3, Max: 20.5, Diff: 18.0, Sum: 207.7] [Termination (ms): 72.6 0.0 72.6 58.3 72.6 72.6 72.6 72.6 72.6 72.6 72.7 72.6 Min: 0.0, Avg: 65.4, Max: 72.7, Diff: 72.7, Sum: 784.6] [Termination Attempts: 12 1 15 1 12 16 12 13 
9 1 12 14 Min: 1, Avg: 9.8, Max: 16, Diff: 15, Sum: 118] [GC Worker Other (ms): 0.1 0.0 0.1 0.1 0.0 0.1 0.1 0.1 0.1 0.1 0.1 0.1 Min: 0.0, Avg: 0.1, Max: 0.1, Diff: 0.1, Sum: 1.0] [GC Worker Total (ms): 154.5 154.3 154.4 154.3 154.2 154.3 154.3 154.2 154.2 154.2 154.2 154.2 Min: 154.2, Avg: 154.3, Max: 154.5, Diff: 0.3, Sum: 1851.4] [GC Worker End (ms): 21527562.0 21527561.9 21527562.0 21527562.0 21527561.9 21527562.0 21527562.0 21527562.0 21527562.0 21527562.0 21527562.0 21527562.0 Min: 21527561.9, Avg: 21527562.0, Max: 21527562.0, Diff: 0.1] [Code Root Fixup: 0.0 ms] [Clear CT: 0.2 ms] [Other: 13.4 ms] [Choose CSet: 0.0 ms] [Ref Proc: 11.9 ms] [Ref Enq: 0.1 ms] [Free CSet: 0.2 ms] [Eden: 648.0M(648.0M)->0.0B(648.0M) Survivors: 96.0M->96.0M Heap: 8307.5M(14.6G)->7675.9M(14.6G)] [Times: user=1.66 sys=0.00, real=0.17 secs] 2013-10-28T16:00:11.861+0100: 21528.749: [GC pause (young) Desired survivor size 50331648 bytes, new threshold 6 (max 15) - age 1: 8421648 bytes, 8421648 total - age 2: 7620344 bytes, 16041992 total - age 3: 2522808 bytes, 18564800 total - age 4: 7182464 bytes, 25747264 total - age 5: 18835480 bytes, 44582744 total - age 6: 9228976 bytes, 53811720 total 21528.749: [G1Ergonomics (CSet Construction) start choosing CSet, _pending_cards: 39666, predicted base time: 81.53 ms, remaining time: 218.47 ms, target pause time: 300.00 ms] 21528.749: [G1Ergonomics (CSet Construction) add young regions to CSet, eden: 81 regions, survivors: 12 regions, predicted young region time: 17.48 ms] 21528.749: [G1Ergonomics (CSet Construction) finish choosing CSet, eden: 81 regions, survivors: 12 regions, old: 0 regions, predicted pause time: 99.00 ms, target pause time: 300.00 ms] , 0.1418840 secs] [Parallel Time: 128.3 ms, GC Workers: 12] [GC Worker Start (ms): 21528749.7 21528749.7 21528749.7 21528749.8 21528749.8 21528749.8 21528749.9 21528749.9 21528749.9 21528749.9 21528749.9 21528750.8 Min: 21528749.7, Avg: 21528749.9, Max: 21528750.8, Diff: 1.1] [Ext Root Scanning 
(ms): 51.1 52.6 67.6 74.4 70.3 67.6 67.7 67.8 99.4 62.3 62.7 66.5 Min: 51.1, Avg: 67.5, Max: 99.4, Diff: 48.2, Sum: 810.0] [Update RS (ms): 23.1 20.8 3.2 0.0 3.1 4.2 4.8 4.5 0.0 10.3 12.5 3.6 Min: 0.0, Avg: 7.5, Max: 23.1, Diff: 23.1, Sum: 90.1] [Processed Buffers: 50 47 32 0 17 21 26 26 0 28 28 18 Min: 0, Avg: 24.4, Max: 50, Diff: 50, Sum: 293] [Scan RS (ms): 0.0 0.1 0.0 0.0 18.8 0.0 0.0 0.1 0.0 0.0 0.0 0.1 Min: 0.0, Avg: 1.6, Max: 18.8, Diff: 18.8, Sum: 19.2] [Object Copy (ms): 20.6 21.3 24.0 20.4 2.7 23.0 22.2 22.3 28.3 24.6 41.7 23.7 Min: 2.7, Avg: 22.9, Max: 41.7, Diff: 38.9, Sum: 274.8] [Termination (ms): 33.1 33.1 33.1 33.1 33.0 33.1 33.1 33.1 0.0 30.5 10.8 33.1 Min: 0.0, Avg: 28.2, Max: 33.1, Diff: 33.1, Sum: 338.8] [Termination Attempts: 1 11 8 8 1 5 5 10 1 1 1 6 Min: 1, Avg: 4.8, Max: 11, Diff: 10, Sum: 58] [GC Worker Other (ms): 0.1 0.1 0.1 0.0 0.1 0.1 0.1 0.1 0.0 0.1 0.0 0.1 Min: 0.0, Avg: 0.1, Max: 0.1, Diff: 0.1, Sum: 0.7] [GC Worker Total (ms): 128.0 128.0 128.0 127.9 127.9 127.9 127.9 127.8 127.8 127.8 127.7 126.9 Min: 126.9, Avg: 127.8, Max: 128.0, Diff: 1.1, Sum: 1533.6] [GC Worker End (ms): 21528877.7 21528877.7 21528877.7 21528877.7 21528877.7 21528877.7 21528877.7 21528877.7 21528877.6 21528877.7 21528877.6 21528877.7 Min: 21528877.6, Avg: 21528877.7, Max: 21528877.7, Diff: 0.1] [Code Root Fixup: 0.0 ms] [Clear CT: 0.2 ms] [Other: 13.4 ms] [Choose CSet: 0.0 ms] [Ref Proc: 11.8 ms] [Ref Enq: 0.1 ms] [Free CSet: 0.2 ms] [Eden: 648.0M(648.0M)->0.0B(648.0M) Survivors: 96.0M->96.0M Heap: 8323.9M(14.6G)->7724.2M(14.6G)] [Times: user=1.36 sys=0.00, real=0.14 secs] 2013-10-28T16:00:13.625+0100: 21530.513: [GC pause (young) Desired survivor size 50331648 bytes, new threshold 6 (max 15) - age 1: 10421320 bytes, 10421320 total - age 2: 6261944 bytes, 16683264 total - age 3: 7196032 bytes, 23879296 total - age 4: 2495936 bytes, 26375232 total - age 5: 5485976 bytes, 31861208 total - age 6: 18536368 bytes, 50397576 total 21530.513: [G1Ergonomics (CSet 
Construction) start choosing CSet, _pending_cards: 52102, predicted base time: 87.75 ms, remaining time: 212.25 ms, target pause time: 300.00 ms] 21530.513: [G1Ergonomics (CSet Construction) add young regions to CSet, eden: 81 regions, survivors: 12 regions, predicted young region time: 11.64 ms] 21530.513: [G1Ergonomics (CSet Construction) finish choosing CSet, eden: 81 regions, survivors: 12 regions, old: 0 regions, predicted pause time: 99.40 ms, target pause time: 300.00 ms] , 0.1134930 secs] [Parallel Time: 110.3 ms, GC Workers: 12] [GC Worker Start (ms): 21530513.5 21530513.6 21530513.6 21530513.6 21530513.6 21530513.6 21530513.7 21530513.7 21530513.7 21530513.7 21530513.7 21530514.8 Min: 21530513.5, Avg: 21530513.7, Max: 21530514.8, Diff: 1.2] [Ext Root Scanning (ms): 47.1 90.1 65.0 28.2 59.1 56.9 66.6 47.2 64.7 60.2 87.9 47.1 Min: 28.2, Avg: 60.0, Max: 90.1, Diff: 61.9, Sum: 720.1] [Update RS (ms): 21.5 0.0 3.0 11.2 9.5 13.9 2.1 21.4 2.6 9.1 0.0 20.2 Min: 0.0, Avg: 9.6, Max: 21.5, Diff: 21.5, Sum: 114.6] [Processed Buffers: 54 0 22 42 33 36 10 43 26 26 0 40 Min: 0, Avg: 27.7, Max: 54, Diff: 54, Sum: 332] [Scan RS (ms): 0.0 0.0 0.0 0.1 0.0 0.0 0.0 0.1 0.1 0.1 0.0 0.0 Min: 0.0, Avg: 0.0, Max: 0.1, Diff: 0.1, Sum: 0.4] [Object Copy (ms): 18.0 20.0 18.5 55.1 17.9 15.7 17.8 17.8 19.1 17.0 2.6 18.1 Min: 2.6, Avg: 19.8, Max: 55.1, Diff: 52.6, Sum: 237.7] [Termination (ms): 23.5 0.0 23.6 15.5 23.6 23.5 23.6 23.5 23.6 23.5 19.6 23.5 Min: 0.0, Avg: 20.6, Max: 23.6, Diff: 23.6, Sum: 247.0] [Termination Attempts: 6 1 6 1 7 2 2 4 2 1 1 1 Min: 1, Avg: 2.8, Max: 7, Diff: 6, Sum: 34] [GC Worker Other (ms): 0.1 0.0 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 Min: 0.0, Avg: 0.1, Max: 0.1, Diff: 0.1, Sum: 0.9] [GC Worker Total (ms): 110.2 110.2 110.2 110.2 110.2 110.1 110.1 110.1 110.1 110.1 110.1 109.0 Min: 109.0, Avg: 110.1, Max: 110.2, Diff: 1.2, Sum: 1320.7] [GC Worker End (ms): 21530623.8 21530623.7 21530623.8 21530623.8 21530623.8 21530623.8 21530623.8 21530623.8 21530623.8 
21530623.8 21530623.8 21530623.8 Min: 21530623.7, Avg: 21530623.8, Max: 21530623.8, Diff: 0.1] [Code Root Fixup: 0.0 ms] [Clear CT: 0.1 ms] [Other: 3.0 ms] [Choose CSet: 0.0 ms] [Ref Proc: 1.5 ms] [Ref Enq: 0.1 ms] [Free CSet: 0.2 ms] [Eden: 648.0M(648.0M)->0.0B(648.0M) Survivors: 96.0M->96.0M Heap: 8372.2M(14.6G)->7774.0M(14.6G)] [Times: user=1.15 sys=0.02, real=0.12 secs] 2013-10-28T16:00:19.956+0100: 21536.844: [GC pause (young) Desired survivor size 50331648 bytes, new threshold 15 (max 15) - age 1: 4026408 bytes, 4026408 total - age 2: 8853712 bytes, 12880120 total - age 3: 5294968 bytes, 18175088 total - age 4: 6427000 bytes, 24602088 total - age 5: 2472488 bytes, 27074576 total - age 6: 5429760 bytes, 32504336 total 21536.844: [G1Ergonomics (CSet Construction) start choosing CSet, _pending_cards: 76487, predicted base time: 89.50 ms, remaining time: 210.50 ms, target pause time: 300.00 ms] 21536.844: [G1Ergonomics (CSet Construction) add young regions to CSet, eden: 81 regions, survivors: 12 regions, predicted young region time: 10.64 ms] 21536.844: [G1Ergonomics (CSet Construction) finish choosing CSet, eden: 81 regions, survivors: 12 regions, old: 0 regions, predicted pause time: 100.15 ms, target pause time: 300.00 ms] , 0.1148320 secs] [Parallel Time: 111.4 ms, GC Workers: 12] [GC Worker Start (ms): 21536844.7 21536844.7 21536844.7 21536844.8 21536844.8 21536844.9 21536844.9 21536844.9 21536845.0 21536845.0 21536845.0 21536845.0 Min: 21536844.7, Avg: 21536844.9, Max: 21536845.0, Diff: 0.3] [Ext Root Scanning (ms): 57.9 89.6 48.0 57.9 65.5 58.2 61.3 58.4 56.6 43.4 61.4 60.8 Min: 43.4, Avg: 59.9, Max: 89.6, Diff: 46.2, Sum: 719.1] [Update RS (ms): 14.0 0.0 24.7 14.0 6.6 13.6 8.8 13.6 15.1 27.7 9.0 9.2 Min: 0.0, Avg: 13.0, Max: 27.7, Diff: 27.7, Sum: 156.2] [Processed Buffers: 41 0 73 46 38 55 55 48 61 67 33 43 Min: 0, Avg: 46.7, Max: 73, Diff: 73, Sum: 560] [Scan RS (ms): 0.0 0.0 0.0 0.1 0.1 0.0 0.1 0.0 0.0 0.1 0.0 0.1 Min: 0.0, Avg: 0.0, Max: 0.1, Diff: 
0.1, Sum: 0.5] [Object Copy (ms): 14.2 21.4 13.4 14.1 13.8 14.1 15.7 13.9 14.1 14.7 15.5 15.7 Min: 13.4, Avg: 15.0, Max: 21.4, Diff: 8.0, Sum: 180.3] [Termination (ms): 24.9 0.0 24.9 24.9 24.9 24.9 24.9 24.9 24.9 24.9 24.9 24.9 Min: 0.0, Avg: 22.9, Max: 24.9, Diff: 24.9, Sum: 274.2] [Termination Attempts: 51 1 51 49 44 50 52 52 53 1 56 46 Min: 1, Avg: 42.2, Max: 56, Diff: 55, Sum: 506] [GC Worker Other (ms): 0.1 0.0 0.1 0.1 0.1 0.1 0.1 0.1 0.0 0.1 0.1 0.1 Min: 0.0, Avg: 0.1, Max: 0.1, Diff: 0.1, Sum: 0.9] [GC Worker Total (ms): 111.1 111.0 111.1 111.0 111.0 110.9 110.9 110.9 110.8 110.8 110.8 110.8 Min: 110.8, Avg: 110.9, Max: 111.1, Diff: 0.3, Sum: 1331.2] [GC Worker End (ms): 21536955.8 21536955.7 21536955.8 21536955.8 21536955.8 21536955.8 21536955.8 21536955.8 21536955.8 21536955.8 21536955.8 21536955.8 Min: 21536955.7, Avg: 21536955.8, Max: 21536955.8, Diff: 0.1] [Code Root Fixup: 0.0 ms] [Clear CT: 0.1 ms] [Other: 3.3 ms] [Choose CSet: 0.0 ms] [Ref Proc: 1.6 ms] [Ref Enq: 0.1 ms] [Free CSet: 0.2 ms] [Eden: 648.0M(648.0M)->0.0B(6368.0M) Survivors: 96.0M->88.0M Heap: 8422.0M(14.6G)->7768.2M(14.6G)] [Times: user=1.20 sys=0.01, real=0.12 secs]
-------------- next part --------------
2013-10-30T01:29:30.664+0100: 27690.612: [GC pause (young) (initial-mark) Desired survivor size 255852544 bytes, new threshold 15 (max 15) - age 1: 54112176 bytes, 54112176 total - age 2: 25195152 bytes, 79307328 total - age 3: 45793088 bytes, 125100416 total - age 4: 9753864 bytes, 134854280 total - age 5: 4775680 bytes, 139629960 total - age 6: 7555296 bytes, 147185256 total - age 7: 3015480 bytes, 150200736 total - age 8: 2843864 bytes, 153044600 total - age 9: 2650656 bytes, 155695256 total - age 10: 126168 bytes, 155821424 total - age 11: 2832664 bytes, 158654088 total - age 12: 3099000 bytes, 161753088 total - age 13: 9933720 bytes, 171686808 total - age 14: 9524488 bytes, 181211296 total - age 15: 17084240 bytes, 198295536 total 27690.612: [G1Ergonomics (CSet Construction)
start choosing CSet, _pending_cards: 59857, predicted base time: 48.38 ms, remaining time: 251.62 ms, target pause time: 300.00 ms] 27690.612: [G1Ergonomics (CSet Construction) add young regions to CSet, eden: 445 regions, survivors: 39 regions, predicted young region time: 40.15 ms] 27690.612: [G1Ergonomics (CSet Construction) finish choosing CSet, eden: 445 regions, survivors: 39 regions, old: 0 regions, predicted pause time: 88.53 ms, target pause time: 300.00 ms] , 0.1458620 secs] [Parallel Time: 141.1 ms, GC Workers: 12] [GC Worker Start (ms): 27690612.0 27690612.1 27690612.1 27690612.2 27690612.2 27690612.3 27690612.3 27690612.3 27690612.4 27690612.4 27690612.4 27690612.5 Min: 27690612.0, Avg: 27690612.3, Max: 27690612.5, Diff: 0.5] [Ext Root Scanning (ms): 20.5 28.3 30.4 21.7 20.8 21.0 18.7 18.2 43.1 35.2 21.3 26.6 Min: 18.2, Avg: 25.5, Max: 43.1, Diff: 24.9, Sum: 305.7] [Update RS (ms): 20.1 12.3 9.7 19.5 19.6 19.7 15.0 34.1 0.8 0.0 19.7 13.5 Min: 0.0, Avg: 15.3, Max: 34.1, Diff: 34.1, Sum: 184.1] [Processed Buffers: 43 30 17 51 35 41 25 6 1 0 35 24 Min: 0, Avg: 25.7, Max: 51, Diff: 51, Sum: 308] [Scan RS (ms): 0.1 0.4 0.0 0.4 0.4 0.1 0.0 0.1 0.3 0.1 0.0 0.3 Min: 0.0, Avg: 0.2, Max: 0.4, Diff: 0.4, Sum: 2.2] [Object Copy (ms): 99.9 99.6 100.4 98.9 99.6 99.5 106.5 87.9 96.0 104.9 99.2 99.7 Min: 87.9, Avg: 99.3, Max: 106.5, Diff: 18.6, Sum: 1192.2] [Termination (ms): 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.1] [Termination Attempts: 47 52 39 49 1 41 42 59 47 51 58 57 Min: 1, Avg: 45.2, Max: 59, Diff: 58, Sum: 543] [GC Worker Other (ms): 0.1 0.2 0.0 0.0 0.0 0.0 0.1 0.1 0.1 0.1 0.0 0.1 Min: 0.0, Avg: 0.1, Max: 0.2, Diff: 0.1, Sum: 0.9] [GC Worker Total (ms): 140.8 140.7 140.5 140.4 140.4 140.4 140.5 140.5 140.3 140.3 140.2 140.3 Min: 140.2, Avg: 140.4, Max: 140.8, Diff: 0.5, Sum: 1685.3] [GC Worker End (ms): 27690752.8 27690752.8 27690752.7 27690752.7 27690752.7 27690752.7 27690752.8 27690752.8 27690752.7 
27690752.7 27690752.7 27690752.7 Min: 27690752.7, Avg: 27690752.7, Max: 27690752.8, Diff: 0.1] [Code Root Fixup: 0.0 ms] [Clear CT: 0.8 ms] [Other: 3.9 ms] [Choose CSet: 0.0 ms] [Ref Proc: 2.1 ms] [Ref Enq: 0.1 ms] [Free CSet: 0.5 ms] [Eden: 3560.0M(3560.0M)->0.0B(3544.0M) Survivors: 312.0M->312.0M Heap: 13.1G(14.6G)->9874.5M(14.6G)] [Times: user=1.19 sys=0.13, real=0.16 secs] 2013-10-30T01:29:30.818+0100: 27690.765: [GC concurrent-root-region-scan-start] 2013-10-30T01:29:30.856+0100: 27690.803: [GC concurrent-root-region-scan-end, 0.0382270 secs] 2013-10-30T01:29:30.856+0100: 27690.803: [GC concurrent-mark-start] 2013-10-30T01:29:32.334+0100: 27692.281: [GC concurrent-mark-end, 1.4779140 secs] 2013-10-30T01:29:32.337+0100: 27692.284: [GC remark 2013-10-30T01:29:32.344+0100: 27692.292: [GC ref-proc, 0.3883890 secs], 0.4504230 secs] [Times: user=4.26 sys=0.05, real=0.45 secs] 2013-10-30T01:29:32.788+0100: 27692.736: [GC cleanup 10127M->9532M(14G), 0.0284350 secs] [Times: user=0.17 sys=0.00, real=0.03 secs] 2013-10-30T01:29:32.817+0100: 27692.765: [GC concurrent-cleanup-start] 2013-10-30T01:29:32.817+0100: 27692.765: [GC concurrent-cleanup-end, 0.0003630 secs] 2013-10-30T01:37:26.669+0100: 28166.616: [GC pause (young) Desired survivor size 255852544 bytes, new threshold 15 (max 15) - age 1: 40919168 bytes, 40919168 total - age 2: 30378112 bytes, 71297280 total - age 3: 23561576 bytes, 94858856 total - age 4: 42991232 bytes, 137850088 total - age 5: 9633512 bytes, 147483600 total - age 6: 4743768 bytes, 152227368 total - age 7: 7511072 bytes, 159738440 total - age 8: 3012496 bytes, 162750936 total - age 9: 2842096 bytes, 165593032 total - age 10: 2643488 bytes, 168236520 total - age 11: 124928 bytes, 168361448 total - age 12: 2832592 bytes, 171194040 total - age 13: 3099000 bytes, 174293040 total - age 14: 9906104 bytes, 184199144 total - age 15: 9524160 bytes, 193723304 total 28166.616: [G1Ergonomics (CSet Construction) start choosing CSet, _pending_cards: 226256, 
predicted base time: 89.33 ms, remaining time: 210.67 ms, target pause time: 300.00 ms] 28166.617: [G1Ergonomics (CSet Construction) add young regions to CSet, eden: 443 regions, survivors: 39 regions, predicted young region time: 68.73 ms] 28166.617: [G1Ergonomics (CSet Construction) finish choosing CSet, eden: 443 regions, survivors: 39 regions, old: 0 regions, predicted pause time: 158.06 ms, target pause time: 300.00 ms] 28166.745: [G1Ergonomics (Mixed GCs) start mixed GCs, reason: candidate old regions available, candidate old regions: 697 regions, reclaimable: 4317871456 bytes (27.45 %), threshold: 5.00 %] , 0.1286010 secs] [Parallel Time: 118.0 ms, GC Workers: 12] [GC Worker Start (ms): 28166616.7 28166616.7 28166616.7 28166616.8 28166616.8 28166616.9 28166616.9 28166617.0 28166617.0 28166617.0 28166617.0 28166617.0 Min: 28166616.7, Avg: 28166616.9, Max: 28166617.0, Diff: 0.4] [Ext Root Scanning (ms): 22.0 28.7 30.6 21.9 22.1 21.8 22.3 28.6 21.6 21.8 29.7 31.8 Min: 21.6, Avg: 25.2, Max: 31.8, Diff: 10.2, Sum: 302.9] [Update RS (ms): 37.6 30.1 28.6 37.9 37.2 37.6 36.8 31.6 37.5 37.3 28.8 20.9 Min: 20.9, Avg: 33.5, Max: 37.9, Diff: 17.0, Sum: 402.0] [Processed Buffers: 88 60 62 90 88 91 90 82 109 86 75 51 Min: 51, Avg: 81.0, Max: 109, Diff: 58, Sum: 972] [Scan RS (ms): 4.2 4.4 4.1 3.9 4.4 4.1 4.4 4.1 4.2 4.3 4.5 4.4 Min: 3.9, Avg: 4.3, Max: 4.5, Diff: 0.5, Sum: 51.1] [Object Copy (ms): 53.3 53.8 53.8 53.3 53.4 53.4 53.2 52.5 53.5 53.2 53.8 59.6 Min: 52.5, Avg: 53.9, Max: 59.6, Diff: 7.1, Sum: 646.9] [Termination (ms): 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.3] [Termination Attempts: 1 104 96 113 111 116 102 96 117 103 115 100 Min: 1, Avg: 97.8, Max: 117, Diff: 116, Sum: 1174] [GC Worker Other (ms): 0.2 0.2 0.1 0.1 0.1 0.0 0.0 0.0 0.1 0.3 0.3 0.0 Min: 0.0, Avg: 0.1, Max: 0.3, Diff: 0.3, Sum: 1.5] [GC Worker Total (ms): 117.3 117.3 117.2 117.1 117.1 117.0 116.9 116.9 116.9 117.1 117.1 116.8 Min: 116.8, Avg: 
117.1, Max: 117.3, Diff: 0.5, Sum: 1404.7] [GC Worker End (ms): 28166734.0 28166734.1 28166733.9 28166733.9 28166733.9 28166733.9 28166733.9 28166733.8 28166733.9 28166734.1 28166734.1 28166733.8 Min: 28166733.8, Avg: 28166733.9, Max: 28166734.1, Diff: 0.3] [Code Root Fixup: 0.0 ms] [Clear CT: 2.5 ms] [Other: 8.2 ms] [Choose CSet: 0.0 ms] [Ref Proc: 6.1 ms] [Ref Enq: 0.4 ms] [Free CSet: 0.6 ms] [Eden: 3544.0M(3544.0M)->0.0B(440.0M) Survivors: 312.0M->304.0M Heap: 12.5G(14.6G)->9294.2M(14.6G)] [Times: user=1.43 sys=0.01, real=0.12 secs] 2013-10-30T01:38:07.937+0100: 28207.885: [GC pause (mixed) Desired survivor size 50331648 bytes, new threshold 2 (max 15) - age 1: 34304808 bytes, 34304808 total - age 2: 18459760 bytes, 52764568 total - age 3: 28615288 bytes, 81379856 total - age 4: 19659904 bytes, 101039760 total - age 5: 42360080 bytes, 143399840 total - age 6: 9181056 bytes, 152580896 total - age 7: 4688968 bytes, 157269864 total - age 8: 7360688 bytes, 164630552 total - age 9: 2994952 bytes, 167625504 total - age 10: 2825216 bytes, 170450720 total - age 11: 2638880 bytes, 173089600 total - age 12: 123536 bytes, 173213136 total - age 13: 2827664 bytes, 176040800 total - age 14: 3097464 bytes, 179138264 total - age 15: 9902944 bytes, 189041208 total 28207.885: [G1Ergonomics (CSet Construction) start choosing CSet, _pending_cards: 51567, predicted base time: 46.37 ms, remaining time: 253.63 ms, target pause time: 300.00 ms] 28207.885: [G1Ergonomics (CSet Construction) add young regions to CSet, eden: 55 regions, survivors: 38 regions, predicted young region time: 40.63 ms] 28207.885: [G1Ergonomics (CSet Construction) finish adding old regions to CSet, reason: old CSet region num reached max, old: 188 regions, max: 188 regions] 28207.885: [G1Ergonomics (CSet Construction) finish choosing CSet, eden: 55 regions, survivors: 38 regions, old: 188 regions, predicted pause time: 277.13 ms, target pause time: 300.00 ms] 28208.038: [G1Ergonomics (Mixed GCs) continue mixed 
GCs, reason: candidate old regions available, candidate old regions: 509 regions, reclaimable: 2832392416 bytes (18.01 %), threshold: 5.00 %] , 0.1535180 secs] [Parallel Time: 142.9 ms, GC Workers: 12] [GC Worker Start (ms): 28207885.6 28207885.6 28207885.7 28207885.7 28207885.8 28207885.8 28207885.8 28207885.8 28207885.8 28207885.9 28207885.9 28207885.9 Min: 28207885.6, Avg: 28207885.8, Max: 28207885.9, Diff: 0.4] [Ext Root Scanning (ms): 22.3 29.9 31.7 28.0 22.6 30.4 21.9 22.3 21.9 21.9 28.8 22.0 Min: 21.9, Avg: 25.3, Max: 31.7, Diff: 9.9, Sum: 303.7] [Update RS (ms): 14.2 5.4 0.0 7.8 13.6 4.8 14.4 13.7 14.0 13.3 8.3 13.9 Min: 0.0, Avg: 10.3, Max: 14.4, Diff: 14.4, Sum: 123.4] [Processed Buffers: 27 12 0 17 24 17 36 37 25 21 18 28 Min: 0, Avg: 21.8, Max: 37, Diff: 37, Sum: 262] [Scan RS (ms): 22.2 23.2 19.2 22.6 22.4 22.4 23.4 22.3 22.3 24.3 22.1 22.3 Min: 19.2, Avg: 22.4, Max: 24.3, Diff: 5.1, Sum: 268.7] [Object Copy (ms): 83.6 83.7 91.2 83.7 83.5 84.4 82.3 83.7 83.8 82.5 82.7 83.6 Min: 82.3, Avg: 84.1, Max: 91.2, Diff: 8.9, Sum: 1008.8] [Termination (ms): 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.1] [Termination Attempts: 12 17 15 12 17 15 21 12 16 2 24 16 Min: 2, Avg: 14.9, Max: 24, Diff: 22, Sum: 179] [GC Worker Other (ms): 0.0 0.1 0.1 0.0 0.0 0.0 0.1 0.0 0.0 0.2 0.1 0.0 Min: 0.0, Avg: 0.1, Max: 0.2, Diff: 0.2, Sum: 0.7] [GC Worker Total (ms): 142.3 142.2 142.2 142.2 142.1 142.1 142.1 142.0 142.0 142.1 142.1 141.9 Min: 141.9, Avg: 142.1, Max: 142.3, Diff: 0.4, Sum: 1705.3] [GC Worker End (ms): 28208027.9 28208027.9 28208027.9 28208027.9 28208027.9 28208027.9 28208027.9 28208027.9 28208027.9 28208028.0 28208028.0 28208027.9 Min: 28208027.9, Avg: 28208027.9, Max: 28208028.0, Diff: 0.1] [Code Root Fixup: 0.3 ms] [Clear CT: 1.8 ms] [Other: 8.6 ms] [Choose CSet: 0.3 ms] [Ref Proc: 4.0 ms] [Ref Enq: 0.1 ms] [Free CSet: 1.5 ms] [Eden: 440.0M(440.0M)->0.0B(648.0M) Survivors: 304.0M->96.0M Heap: 
9734.2M(14.6G)->7886.8M(14.6G)] [Times: user=1.73 sys=0.00, real=0.15 secs] 2013-10-30T01:39:00.272+0100: 28260.220: [GC pause (mixed) Desired survivor size 50331648 bytes, new threshold 15 (max 15) - age 1: 7067616 bytes, 7067616 total - age 2: 27834904 bytes, 34902520 total 28260.220: [G1Ergonomics (CSet Construction) start choosing CSet, _pending_cards: 180534, predicted base time: 75.39 ms, remaining time: 224.61 ms, target pause time: 300.00 ms] 28260.220: [G1Ergonomics (CSet Construction) add young regions to CSet, eden: 81 regions, survivors: 12 regions, predicted young region time: 26.33 ms] 28260.221: [G1Ergonomics (CSet Construction) finish adding old regions to CSet, reason: predicted time is too high, predicted time: 1.49 ms, remaining time: 0.35 ms, old: 148 regions, min: 88 regions] 28260.221: [G1Ergonomics (CSet Construction) finish choosing CSet, eden: 81 regions, survivors: 12 regions, old: 148 regions, predicted pause time: 299.65 ms, target pause time: 300.00 ms] 28260.407: [G1Ergonomics (Mixed GCs) continue mixed GCs, reason: candidate old regions available, candidate old regions: 361 regions, reclaimable: 1806105664 bytes (11.48 %), threshold: 5.00 %] , 0.1872570 secs] [Parallel Time: 173.4 ms, GC Workers: 12] [GC Worker Start (ms): 28260220.8 28260220.8 28260220.9 28260220.9 28260221.0 28260221.0 28260221.0 28260221.0 28260221.1 28260221.1 28260221.1 28260221.1 Min: 28260220.8, Avg: 28260221.0, Max: 28260221.1, Diff: 0.4] [Ext Root Scanning (ms): 22.3 22.9 29.8 28.3 22.4 32.1 30.3 29.4 22.5 13.1 28.0 31.4 Min: 13.1, Avg: 26.0, Max: 32.1, Diff: 19.0, Sum: 312.5] [Update RS (ms): 36.4 36.6 28.2 31.8 36.2 27.6 28.0 30.2 36.4 32.4 30.1 18.8 Min: 18.8, Avg: 31.1, Max: 36.6, Diff: 17.7, Sum: 372.8] [Processed Buffers: 62 69 64 74 66 55 55 63 69 69 68 46 Min: 46, Avg: 63.3, Max: 74, Diff: 28, Sum: 760] [Scan RS (ms): 25.7 25.4 25.8 25.3 26.1 25.7 25.5 25.6 26.4 26.2 25.4 25.7 Min: 25.3, Avg: 25.7, Max: 26.4, Diff: 1.1, Sum: 308.7] [Object Copy (ms): 
87.9 87.4 88.4 86.8 87.4 86.7 88.3 87.5 86.7 100.3 88.4 96.0 Min: 86.7, Avg: 89.3, Max: 100.3, Diff: 13.6, Sum: 1071.9] [Termination (ms): 0.7 0.7 0.7 0.7 0.7 0.7 0.7 0.0 0.7 0.7 0.7 0.7 Min: 0.0, Avg: 0.6, Max: 0.7, Diff: 0.7, Sum: 7.3] [Termination Attempts: 114 135 127 118 107 128 129 1 127 136 123 135 Min: 1, Avg: 115.0, Max: 136, Diff: 135, Sum: 1380] [GC Worker Other (ms): 0.0 0.0 0.1 0.0 0.0 0.0 0.2 0.0 0.0 0.0 0.1 0.2 Min: 0.0, Avg: 0.1, Max: 0.2, Diff: 0.2, Sum: 0.8] [GC Worker Total (ms): 173.0 173.0 172.9 172.8 172.8 172.8 172.9 172.7 172.7 172.7 172.7 172.8 Min: 172.7, Avg: 172.8, Max: 173.0, Diff: 0.3, Sum: 2074.0] [GC Worker End (ms): 28260393.8 28260393.8 28260393.8 28260393.8 28260393.8 28260393.8 28260393.9 28260393.8 28260393.8 28260393.8 28260393.9 28260394.0 Min: 28260393.8, Avg: 28260393.8, Max: 28260394.0, Diff: 0.2] [Code Root Fixup: 0.4 ms] [Clear CT: 1.8 ms] [Other: 11.6 ms] [Choose CSet: 0.4 ms] [Ref Proc: 6.8 ms] [Ref Enq: 0.1 ms] [Free CSet: 1.4 ms] [Eden: 648.0M(648.0M)->0.0B(648.0M) Survivors: 96.0M->96.0M Heap: 8534.8M(14.6G)->6992.0M(14.6G)] [Times: user=2.02 sys=0.01, real=0.19 secs] 2013-10-30T01:40:32.405+0100: 28352.352: [GC pause (mixed) Desired survivor size 50331648 bytes, new threshold 15 (max 15) - age 1: 16158568 bytes, 16158568 total - age 2: 708616 bytes, 16867184 total - age 3: 18023256 bytes, 34890440 total 28352.353: [G1Ergonomics (CSet Construction) start choosing CSet, _pending_cards: 166653, predicted base time: 71.94 ms, remaining time: 228.06 ms, target pause time: 300.00 ms] 28352.353: [G1Ergonomics (CSet Construction) add young regions to CSet, eden: 81 regions, survivors: 12 regions, predicted young region time: 24.66 ms] 28352.353: [G1Ergonomics (CSet Construction) finish adding old regions to CSet, reason: predicted time is too high, predicted time: 2.09 ms, remaining time: 0.31 ms, old: 114 regions, min: 88 regions] 28352.353: [G1Ergonomics (CSet Construction) finish choosing CSet, eden: 81 regions, 
survivors: 12 regions, old: 114 regions, predicted pause time: 299.69 ms, target pause time: 300.00 ms] 28352.552: [G1Ergonomics (Mixed GCs) continue mixed GCs, reason: candidate old regions available, candidate old regions: 247 regions, reclaimable: 1111160680 bytes (7.06 %), threshold: 5.00 %] , 0.1996140 secs] [Parallel Time: 189.6 ms, GC Workers: 12] [GC Worker Start (ms): 28352353.3 28352353.4 28352353.5 28352353.6 28352353.6 28352353.6 28352353.6 28352353.6 28352353.7 28352353.7 28352353.7 28352353.8 Min: 28352353.3, Avg: 28352353.6, Max: 28352353.8, Diff: 0.4] [Ext Root Scanning (ms): 21.9 30.3 22.0 21.6 30.6 22.0 33.3 28.2 21.9 29.6 21.4 21.5 Min: 21.4, Avg: 25.4, Max: 33.3, Diff: 11.9, Sum: 304.4] [Update RS (ms): 50.2 41.1 50.2 50.2 40.8 50.0 29.2 43.3 49.7 42.9 50.5 50.7 Min: 29.2, Avg: 45.7, Max: 50.7, Diff: 21.4, Sum: 548.8] [Processed Buffers: 56 52 65 71 54 67 34 68 66 67 62 62 Min: 34, Avg: 60.3, Max: 71, Diff: 37, Sum: 724] [Scan RS (ms): 22.3 21.9 21.9 22.2 21.9 22.0 22.1 22.7 22.2 22.1 21.8 22.0 Min: 21.8, Avg: 22.1, Max: 22.7, Diff: 0.9, Sum: 265.1] [Object Copy (ms): 95.0 96.0 95.1 95.1 95.8 95.1 104.5 94.9 95.3 94.3 95.2 94.8 Min: 94.3, Avg: 95.9, Max: 104.5, Diff: 10.1, Sum: 1151.2] [Termination (ms): 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.3] [Termination Attempts: 78 78 85 64 81 86 67 1 90 82 91 86 Min: 1, Avg: 74.1, Max: 91, Diff: 90, Sum: 889] [GC Worker Other (ms): 0.0 0.0 0.0 0.2 0.1 0.1 0.0 0.1 0.0 0.1 0.0 0.0 Min: 0.0, Avg: 0.1, Max: 0.2, Diff: 0.1, Sum: 0.7] [GC Worker Total (ms): 189.4 189.4 189.3 189.3 189.2 189.3 189.2 189.2 189.1 189.1 189.0 189.0 Min: 189.0, Avg: 189.2, Max: 189.4, Diff: 0.4, Sum: 2270.5] [GC Worker End (ms): 28352542.8 28352542.8 28352542.8 28352542.9 28352542.8 28352542.9 28352542.8 28352542.8 28352542.8 28352542.8 28352542.8 28352542.8 Min: 28352542.8, Avg: 28352542.8, Max: 28352542.9, Diff: 0.1] [Code Root Fixup: 0.5 ms] [Clear CT: 1.9 ms] [Other: 7.6 
ms] [Choose CSet: 0.4 ms] [Ref Proc: 2.5 ms] [Ref Enq: 0.1 ms] [Free CSet: 1.4 ms] [Eden: 648.0M(648.0M)->0.0B(648.0M) Survivors: 96.0M->96.0M Heap: 7640.0M(14.6G)->6393.4M(14.6G)] [Times: user=2.26 sys=0.00, real=0.20 secs] 2013-10-30T01:42:22.386+0100: 28462.334: [GC pause (mixed) Desired survivor size 50331648 bytes, new threshold 15 (max 15) - age 1: 11777336 bytes, 11777336 total - age 2: 7846592 bytes, 19623928 total - age 3: 546648 bytes, 20170576 total - age 4: 17067728 bytes, 37238304 total 28462.334: [G1Ergonomics (CSet Construction) start choosing CSet, _pending_cards: 152752, predicted base time: 72.54 ms, remaining time: 227.46 ms, target pause time: 300.00 ms] 28462.334: [G1Ergonomics (CSet Construction) add young regions to CSet, eden: 81 regions, survivors: 12 regions, predicted young region time: 21.02 ms] 28462.335: [G1Ergonomics (CSet Construction) finish adding old regions to CSet, reason: reclaimable percentage not over threshold, old: 59 regions, max: 188 regions, reclaimable: 782535160 bytes (4.98 %), threshold: 5.00 %] 28462.335: [G1Ergonomics (CSet Construction) finish choosing CSet, eden: 81 regions, survivors: 12 regions, old: 59 regions, predicted pause time: 191.14 ms, target pause time: 300.00 ms] 28462.502: [G1Ergonomics (Mixed GCs) do not continue mixed GCs, reason: reclaimable percentage not over threshold, candidate old regions: 188 regions, reclaimable: 782535160 bytes (4.98 %), threshold: 5.00 %] , 0.1683460 secs] [Parallel Time: 157.3 ms, GC Workers: 12] [GC Worker Start (ms): 28462334.7 28462334.8 28462334.8 28462334.9 28462334.9 28462334.9 28462334.9 28462335.0 28462335.0 28462335.0 28462335.0 28462335.3 Min: 28462334.7, Avg: 28462334.9, Max: 28462335.3, Diff: 0.6] [Ext Root Scanning (ms): 35.8 30.5 30.5 31.9 56.7 30.8 30.4 31.6 30.3 30.1 31.6 25.1 Min: 25.1, Avg: 32.9, Max: 56.7, Diff: 31.6, Sum: 395.4] [Update RS (ms): 38.5 45.5 45.7 40.5 17.6 43.1 45.4 41.1 42.6 42.9 41.4 44.2 Min: 17.6, Avg: 40.7, Max: 45.7, Diff: 28.1, 
Sum: 488.5] [Processed Buffers: 55 58 52 52 36 66 56 68 52 64 46 64 Min: 36, Avg: 55.8, Max: 68, Diff: 32, Sum: 669] [Scan RS (ms): 16.0 16.4 16.2 16.4 15.9 16.2 16.4 16.0 16.3 16.3 15.6 16.4 Min: 15.6, Avg: 16.2, Max: 16.4, Diff: 0.8, Sum: 194.0] [Object Copy (ms): 66.3 64.2 64.2 67.7 66.3 66.4 64.3 67.7 67.1 67.1 67.7 70.4 Min: 64.2, Avg: 66.6, Max: 70.4, Diff: 6.2, Sum: 799.5] [Termination (ms): 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.1] [Termination Attempts: 20 21 23 35 28 24 1 25 26 33 29 22 Min: 1, Avg: 23.9, Max: 35, Diff: 34, Sum: 287] [GC Worker Other (ms): 0.0 0.0 0.0 0.0 0.0 0.0 0.1 0.1 0.0 0.0 0.2 0.1 Min: 0.0, Avg: 0.1, Max: 0.2, Diff: 0.2, Sum: 0.7] [GC Worker Total (ms): 156.7 156.6 156.6 156.6 156.6 156.5 156.5 156.5 156.4 156.4 156.6 156.2 Min: 156.2, Avg: 156.5, Max: 156.7, Diff: 0.5, Sum: 1878.2] [GC Worker End (ms): 28462491.4 28462491.4 28462491.4 28462491.4 28462491.4 28462491.4 28462491.4 28462491.5 28462491.4 28462491.4 28462491.6 28462491.5 Min: 28462491.4, Avg: 28462491.5, Max: 28462491.6, Diff: 0.2] [Code Root Fixup: 0.5 ms] [Clear CT: 2.1 ms] [Other: 8.5 ms] [Choose CSet: 0.3 ms] [Ref Proc: 4.7 ms] [Ref Enq: 0.2 ms] [Free CSet: 0.8 ms] [Eden: 648.0M(648.0M)->0.0B(7616.0M) Survivors: 96.0M->88.0M Heap: 7041.4M(14.6G)->6091.8M(14.6G)] [Times: user=1.77 sys=0.01, real=0.16 secs]
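As an aside for anyone reading these logs: to see which parallel phase dominates across many pauses (here, Ext Root Scanning and Update RS are the big contributors), a small parser can pull out the per-phase `Avg:` values. This is only an illustrative sketch, not part of the original thread; the `phase_averages` helper and the regex are my own, written against the `PrintGCDetails` phase lines shown above.

```python
import re

# Matches parallel-phase lines such as
#   [Ext Root Scanning (ms): 57.9 89.6 ... Min: 43.4, Avg: 59.9, Max: 89.6, ...]
# and captures the phase name plus its Avg value.
PHASE_RE = re.compile(
    r"\[(Ext Root Scanning|Update RS|Scan RS|Object Copy) \(ms\):"
    r".*?Avg: ([0-9.]+)"
)

def phase_averages(log_text):
    """Return {phase_name: [avg_ms_per_pause, ...]} for each phase found."""
    averages = {}
    for phase, avg in PHASE_RE.findall(log_text):
        averages.setdefault(phase, []).append(float(avg))
    return averages

# Tiny sample in the same format as the logs above.
sample = (
    "[Ext Root Scanning (ms): 57.9 89.6 Min: 43.4, Avg: 59.9, Max: 89.6]"
    " [Update RS (ms): 14.0 0.0 Min: 0.0, Avg: 13.0, Max: 27.7]"
)
print(phase_averages(sample))
```

Feeding it a whole log file instead of `sample` gives one list per phase, which makes trends like the growing Update RS times in the mixed pauses easy to chart.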