From thomas.schatzl at oracle.com Wed Apr 5 10:13:56 2017
From: thomas.schatzl at oracle.com (Thomas Schatzl)
Date: Wed, 05 Apr 2017 12:13:56 +0200
Subject: Updates to JDK9 Early Access documentation - Improved G1 documentation
Message-ID: <1491387236.3603.34.camel@oracle.com>

Hi all,

yesterday the JDK 9 early access documentation was made available at http://docs.oracle.com/javase/9/. This includes the GC tuning guide at http://docs.oracle.com/javase/9/gctuning/toc.htm: while we tried to improve the documentation throughout, we are aware that there is still lots of work to do on the overall document.

However, we basically rewrote chapter 9, "Garbage-First Collector", and chapter 10, "Garbage-First Garbage Collector Tuning", to coincide with G1 becoming the default collector. The tuning chapter in particular tries to collect answers to many problems people have had with G1 over the last few years, not least the ones discussed on this mailing list. (Thanks everyone for coming here and talking about the issues you had.)

I hope the document is a significant improvement on the old one for you. Feedback about any part of it is highly appreciated. If you see anything missing or wrong, we will try to fix it, or at least save your feedback for the next bigger update.

Thanks,
  Thomas

P.S.: And yes, chapter 10 can be seen as a to-do list for future improvements :)

From phaosaw2 at illinois.edu Thu Apr 13 18:22:54 2017
From: phaosaw2 at illinois.edu (Amarin Phaosawasdi)
Date: Thu, 13 Apr 2017 13:22:54 -0500
Subject: g1 gc pause (young) high object copy time
Message-ID: <9cf76706-5e89-db9d-c198-84f960a0daed@illinois.edu>

Hi,

I'm trying to explain the GC behavior of two programs that produce the same results but use memory differently.

One program seems to spend much longer in GC than the other, although the GC counts and the average heap space freed are similar.

------------------------------

java version "1.7.0_79"
Java(TM) SE Runtime Environment (build 1.7.0_79-b15)
Java HotSpot(TM) 64-Bit Server VM (build 24.79-b02, mixed mode)
Flags: -XX:+UseG1GC -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintTenuringDistribution -XX:+PrintHeapAtGC -XX:+PrintAdaptiveSizePolicy -XX:MaxPermSize=256m

------------------------------

Program 1: uses a file reader and iterates line by line. Each line can be garbage collected as soon as the program finishes processing that line.
Extra flag: -Xmx500M

Number of young GCs: 3599
Average heap space freed (young): 296.74
Total young GC pause time: 11.84 seconds
Number of mixed GCs: 0
Number of full GCs: 0

------------------------------

Program 2: reads a portion of the file into memory and iterates through that first. The whole portion is released at once when the program finishes processing all the lines in this portion. The rest uses a file reader to iterate line by line.
Extra flag: -Xmx1000M

Number of young GCs: 3086
Average heap space freed (young): 369.42
Total young GC pause time: 129.37 seconds
Number of mixed GCs: 344
Average heap space freed (mixed): 56.80
Total mixed GC pause time: 9.35 seconds
Number of full GCs: 0

------------------------------

Program 2 takes much more time doing GC than program 1. When I look at the GC logs, it seems that program 2 spends more time on object copying.
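In case it helps, the two access patterns look roughly like this (a simplified sketch, not my actual code; the chunk size and process() are made up):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class AccessPatterns {
    // Program 1: each line becomes garbage as soon as it has been
    // processed, so very little is live when a young GC happens.
    static void streaming(String file) throws IOException {
        try (BufferedReader reader = new BufferedReader(new FileReader(file))) {
            String line;
            while ((line = reader.readLine()) != null) {
                process(line); // 'line' is unreachable after this call
            }
        }
    }

    // Program 2: a whole chunk stays reachable until every line in it
    // has been processed, so the chunk survives all young GCs that
    // happen in the meantime and is released only at the end.
    static void chunked(String file) throws IOException {
        try (BufferedReader reader = new BufferedReader(new FileReader(file))) {
            List<String> chunk = new ArrayList<>();
            String line;
            while (chunk.size() < 1_000_000 && (line = reader.readLine()) != null) {
                chunk.add(line);
            }
            for (String l : chunk) {
                process(l);
            }
            chunk = null; // the whole chunk becomes garbage at once
            // the remaining lines are then streamed as in program 1
        }
    }

    static void process(String line) { /* work on one line */ }
}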
------------------------------

Snippet from program 1's GC log:

{Heap before GC invocations=11 (full 0):
 garbage-first heap   total 512000K, used 391986K [0x00000000d0c00000, 0x00000000f0000000, 0x00000000f0000000)
  region size 1024K, 300 young (307200K), 16 survivors (16384K)
 compacting perm gen  total 31744K, used 30804K [0x00000000f0000000, 0x00000000f1f00000, 0x0000000100000000)
   the space 31744K, 97% used [0x00000000f0000000, 0x00000000f1e152e0, 0x00000000f1e15400, 0x00000000f1f00000)
No shared spaces configured.
2.666: [GC pause (young)
Desired survivor size 19922944 bytes, new threshold 15 (max 15)
- age   1:     758408 bytes,     758408 total
- age   2:     929040 bytes,    1687448 total
- age   3:    1163024 bytes,    2850472 total
- age   4:      39368 bytes,    2889840 total
- age   5:        408 bytes,    2890248 total
- age   9:    6765952 bytes,    9656200 total
- age  10:     818416 bytes,   10474616 total
- age  11:     993376 bytes,   11467992 total
 2.666: [G1Ergonomics (CSet Construction) start choosing CSet, _pending_cards: 1024, predicted base time: 3.57 ms, remaining time: 196.43 ms, target pause time: 200.00 ms]
 2.666: [G1Ergonomics (CSet Construction) add young regions to CSet, eden: 284 regions, survivors: 16 regions, predicted young region time: 25.28 ms]
 2.666: [G1Ergonomics (CSet Construction) finish choosing CSet, eden: 284 regions, survivors: 16 regions, old: 0 regions, predicted pause time: 28.85 ms, target pause time: 200.00 ms]
, 0.0044930 secs]
   [Parallel Time: 3.0 ms, GC Workers: 18]
      [GC Worker Start (ms): Min: 2666.3, Avg: 2667.2, Max: 2669.0, Diff: 2.8]
      [Ext Root Scanning (ms): Min: 0.0, Avg: 0.6, Max: 1.1, Diff: 1.1, Sum: 10.9]
      [Update RS (ms): Min: 0.0, Avg: 0.0, Max: 0.1, Diff: 0.1, Sum: 0.3]
         [Processed Buffers: Min: 0, Avg: 0.2, Max: 1, Diff: 1, Sum: 4]
      [Scan RS (ms): Min: 0.0, Avg: 0.0, Max: 0.1, Diff: 0.1, Sum: 0.5]
      [Code Root Scanning (ms): Min: 0.0, Avg: 0.0, Max: 0.1, Diff: 0.1, Sum: 0.2]
      [Object Copy (ms): Min: 0.0, Avg: 0.9, Max: 1.5, Diff: 1.5, Sum: 16.8]
      [Termination (ms): Min: 0.0, Avg: 0.3, Max: 0.4, Diff: 0.4, Sum: 5.7]
      [GC Worker Other (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.2]
      [GC Worker Total (ms): Min: 0.0, Avg: 1.9, Max: 2.8, Diff: 2.8, Sum: 34.6]
      [GC Worker End (ms): Min: 2669.1, Avg: 2669.1, Max: 2669.1, Diff: 0.0]
   [Code Root Fixup: 0.1 ms]
   [Code Root Migration: 0.4 ms]
   [Clear CT: 0.2 ms]
   [Other: 0.9 ms]
      [Choose CSet: 0.0 ms]
      [Ref Proc: 0.4 ms]
      [Ref Enq: 0.0 ms]
      [Free CSet: 0.4 ms]
   [Eden: 284.0M(284.0M)->0.0B(281.0M) Survivors: 16.0M->19.0M Heap: 382.8M(500.0M)->101.3M(500.0M)]
Heap after GC invocations=12 (full 0):
 garbage-first heap   total 512000K, used 103737K [0x00000000d0c00000, 0x00000000f0000000, 0x00000000f0000000)
  region size 1024K, 19 young (19456K), 19 survivors (19456K)
 compacting perm gen  total 31744K, used 30804K [0x00000000f0000000, 0x00000000f1f00000, 0x0000000100000000)
   the space 31744K, 97% used [0x00000000f0000000, 0x00000000f1e152e0, 0x00000000f1e15400, 0x00000000f1f00000)
No shared spaces configured.
}
 [Times: user=0.04 sys=0.00, real=0.00 secs]

------------------------------

Snippet from program 2's GC log:

{Heap before GC invocations=13 (full 0):
 garbage-first heap   total 1024000K, used 852897K [0x00000000b1800000, 0x00000000f0000000, 0x00000000f0000000)
  region size 1024K, 368 young (376832K), 15 survivors (15360K)
 compacting perm gen  total 31744K, used 30938K [0x00000000f0000000, 0x00000000f1f00000, 0x0000000100000000)
   the space 31744K, 97% used [0x00000000f0000000, 0x00000000f1e36bf8, 0x00000000f1e36c00, 0x00000000f1f00000)
No shared spaces configured.
3.623: [GC pause (young)
Desired survivor size 24117248 bytes, new threshold 15 (max 15)
- age   1:    8124360 bytes,    8124360 total
 3.623: [G1Ergonomics (CSet Construction) start choosing CSet, _pending_cards: 11449, predicted base time: 26.10 ms, remaining time: 173.90 ms, target pause time: 200.00 ms]
 3.623: [G1Ergonomics (CSet Construction) add young regions to CSet, eden: 353 regions, survivors: 15 regions, predicted young region time: 52.90 ms]
 3.623: [G1Ergonomics (CSet Construction) finish choosing CSet, eden: 353 regions, survivors: 15 regions, old: 0 regions, predicted pause time: 79.00 ms, target pause time: 200.00 ms]
 3.656: [G1Ergonomics (Heap Sizing) attempt heap expansion, reason: recent GC overhead higher than threshold after GC, recent GC overhead: 44.58 %, threshold: 10.00 %, uncommitted: 0 bytes, calculated expansion amount: 0 bytes (20.00 %)]
, 0.0330160 secs]
   [Parallel Time: 28.1 ms, GC Workers: 18]
      [GC Worker Start (ms): Min: 3623.0, Avg: 3627.2, Max: 3640.1, Diff: 17.1]
      [Ext Root Scanning (ms): Min: 0.0, Avg: 0.6, Max: 1.0, Diff: 1.0, Sum: 11.0]
      [SATB Filtering (ms): Min: 0.0, Avg: 0.0, Max: 0.1, Diff: 0.1, Sum: 0.1]
      [Update RS (ms): Min: 0.0, Avg: 3.0, Max: 5.4, Diff: 5.4, Sum: 54.4]
         [Processed Buffers: Min: 0, Avg: 3.6, Max: 10, Diff: 10, Sum: 64]
      [Scan RS (ms): Min: 0.0, Avg: 0.1, Max: 0.6, Diff: 0.6, Sum: 2.7]
      [Code Root Scanning (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.0]
      [Object Copy (ms): Min: 10.5, Avg: 19.8, Max: 22.4, Diff: 11.9, Sum: 355.7]
      [Termination (ms): Min: 0.0, Avg: 0.3, Max: 0.4, Diff: 0.4, Sum: 5.0]
      [GC Worker Other (ms): Min: 0.0, Avg: 0.0, Max: 0.1, Diff: 0.0, Sum: 0.4]
      [GC Worker Total (ms): Min: 10.9, Avg: 23.8, Max: 28.0, Diff: 17.1, Sum: 429.3]
      [GC Worker End (ms): Min: 3651.0, Avg: 3651.0, Max: 3651.0, Diff: 0.0]
   [Code Root Fixup: 0.0 ms]
   [Code Root Migration: 0.0 ms]
   [Clear CT: 0.2 ms]
   [Other: 4.6 ms]
      [Choose CSet: 0.0 ms]
      [Ref Proc: 3.9 ms]
      [Ref Enq: 0.0 ms]
      [Free CSet: 0.6 ms]
   [Eden: 353.0M(353.0M)->0.0B(330.0M) Survivors: 15.0M->35.0M Heap: 832.9M(1000.0M)->500.4M(1000.0M)]
Heap after GC invocations=14 (full 0):
 garbage-first heap   total 1024000K, used 512417K [0x00000000b1800000, 0x00000000f0000000, 0x00000000f0000000)
  region size 1024K, 35 young (35840K), 35 survivors (35840K)
 compacting perm gen  total 31744K, used 30938K [0x00000000f0000000, 0x00000000f1f00000, 0x0000000100000000)
   the space 31744K, 97% used [0x00000000f0000000, 0x00000000f1e36bf8, 0x00000000f1e36c00, 0x00000000f1f00000)
No shared spaces configured.
}
 [Times: user=0.39 sys=0.00, real=0.03 secs]

------------------------------

I would like to understand why object copying for program 2 takes so much longer.

How should I debug this further?

Please let me know if you need more information or would like me to run anything.

Thank you.

Amarin

From yu.zhang at oracle.com Thu Apr 13 18:34:00 2017
From: yu.zhang at oracle.com (yu.zhang at oracle.com)
Date: Thu, 13 Apr 2017 11:34:00 -0700
Subject: g1 gc pause (young) high object copy time
In-Reply-To: <9cf76706-5e89-db9d-c198-84f960a0daed@illinois.edu>
References: <9cf76706-5e89-db9d-c198-84f960a0daed@illinois.edu>
Message-ID: <795b2f39-4935-5641-21bc-bd1539c847b5@oracle.com>

Amarin,

The 2nd program holds the objects alive for a longer period, so they either get copied to survivor space or promoted to old. Eventually the objects get collected by mixed GC.
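You can also see this in the two snippets you posted (rough arithmetic, assuming those pauses are typical):

  Program 1: collects Eden 284.0M + Survivors 16.0M; heap goes 382.8M->101.3M,
             so ~281.5M is freed and only ~19M survives (Survivors end at 19.0M).
  Program 2: collects Eden 353.0M + Survivors 15.0M; heap goes 832.9M->500.4M,
             so ~332.5M is freed but ~35M survives (Survivors end at 35.0M).

So each young pause in the 2nd program copies roughly twice as many bytes, and with a tenuring threshold of 15 the retained data can be copied between survivor regions again on each following young GC until it dies or is promoted.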
Thanks
Jenny

On 04/13/2017 11:22 AM, Amarin Phaosawasdi wrote:
> Hi,
>
> I'm trying to explain the GC behavior of two programs that produce the
> same results but use memory differently.
> [...]
From wolfgang.pedot at finkzeit.at Thu Apr 13 18:46:31 2017
From: wolfgang.pedot at finkzeit.at (Wolfgang Pedot)
Date: Thu, 13 Apr 2017 20:46:31 +0200
Subject: g1 gc pause (young) high object copy time
In-Reply-To: <9cf76706-5e89-db9d-c198-84f960a0daed@illinois.edu>
References: <9cf76706-5e89-db9d-c198-84f960a0daed@illinois.edu>
Message-ID: <58EFC787.7000106@finkzeit.at>

Hi,

What I see is that your first snippet shows only a small amount of surviving objects (survivor space grows by 3 MB), while your second log shows an increase of ~20 MB in survivor space, which means far more objects (or bigger objects) are still alive and need to be copied.

I don't know whether that can explain all of the time, but I have looked at some of my own logs, and there is always more object copy time when survivor space increases.

kind regards
Wolfgang Pedot

Am 13.04.2017 um 20:22 schrieb Amarin Phaosawasdi:
> Hi,
>
> I'm trying to explain the GC behavior of two programs that produce the
> same results but use memory differently.
> [...]
From phaosaw2 at illinois.edu Thu Apr 13 19:13:34 2017
From: phaosaw2 at illinois.edu (Amarin Phaosawasdi)
Date: Thu, 13 Apr 2017 14:13:34 -0500
Subject: g1 gc pause (young) high object copy time
In-Reply-To: <795b2f39-4935-5641-21bc-bd1539c847b5@oracle.com>
References: <9cf76706-5e89-db9d-c198-84f960a0daed@illinois.edu> <795b2f39-4935-5641-21bc-bd1539c847b5@oracle.com>
Message-ID: <12e5f38c-7c89-14df-1786-7f0d05d26480@illinois.edu>

This makes a lot of sense. Thanks.

Amarin

On 04/13/2017 01:34 PM, yu.zhang at oracle.com wrote:
> Amarin,
>
> The 2nd program holds the objects alive for a longer period, so they
> either get copied to survivor space or promoted to old. Eventually the
> objects get collected by mixed GC.
> [...]
From phaosaw2 at illinois.edu Thu Apr 13 19:15:48 2017
From: phaosaw2 at illinois.edu (Amarin Phaosawasdi)
Date: Thu, 13 Apr 2017 14:15:48 -0500
Subject: g1 gc pause (young) high object copy time
In-Reply-To: <58EFC787.7000106@finkzeit.at>
References: <9cf76706-5e89-db9d-c198-84f960a0daed@illinois.edu> <58EFC787.7000106@finkzeit.at>
Message-ID: <1559ea00-eba8-0f50-36b7-49ab4bc2e49f@illinois.edu>

I've looked at the rest of the logs, and the survivor space in the second program is indeed much bigger than in the first. It does seem to be the case.

Thanks for the pointers.

Amarin

On 04/13/2017 01:46 PM, Wolfgang Pedot wrote:
> Hi,
>
> What I see is that your first snippet shows only a small amount of
> surviving objects (survivor space grows by 3 MB), while your second log
> shows an increase of ~20 MB in survivor space, which means far more
> objects (or bigger objects) are still alive and need to be copied.
> [...]
From ramkri123 at gmail.com Mon Apr 17 21:41:51 2017
From: ramkri123 at gmail.com (Ram Krishnan)
Date: Mon, 17 Apr 2017 14:41:51 -0700
Subject: trouble passing JVM startup options using JTREG
In-Reply-To: <58F533C6.10507@oracle.com>
References: <58F533C6.10507@oracle.com>
Message-ID:

Many thanks Jon for the immediate reply. I am copying the hotspot-gc-use team.

Thanks,
Ramki

On Mon, Apr 17, 2017 at 2:29 PM, Jonathan Gibbons <jonathan.gibbons at oracle.com> wrote:

> On 04/17/2017 02:18 PM, Ram Krishnan wrote:
>
> Hi,
>
> I have been able to successfully run all the tests in hotspot/test/gc/g1
> using jtreg.
>
> The only gotcha I am facing is that the JVM startup options specified in
> the process builder do not have any effect. I have confirmed this through
> prints in the JVM code base.
>
> For example, hotspot/test/gc/g1/TestEagerReclaimHumongousRegionsClearMarkBits.java
> sets "-XX:ConcGCThreads=1", but inside the JVM code the value of
> ConcGCThreads is still zero.
>
> I am new to jtreg and openjdk and probably missing something obvious.
> Your help would be much appreciated.
>
> Thanks in advance,
> Ramki
>
> Ramki,
>
> This does not look like an issue with jtreg, since the behavior you are
> apparently seeing is all within the test code and its libraries.
>
> You might want to follow up with the Hotspot team, who would have more
> familiarity with these tests and the associated libraries.
>
> -- Jon

--
Thanks,
Ramki

From ramkri123 at gmail.com Mon Apr 17 21:55:08 2017
From: ramkri123 at gmail.com (Ram Krishnan)
Date: Mon, 17 Apr 2017 14:55:08 -0700
Subject: trouble passing JVM startup options using JTREG
In-Reply-To:
References: <58F533C6.10507@oracle.com>
Message-ID:

On Mon, Apr 17, 2017 at 2:41 PM, Ram Krishnan wrote:
> Many thanks Jon for the immediate reply. I am copying the hotspot-gc-use
> team.
> [...]

--
Thanks,
Ramki
From ramkri123 at gmail.com Mon Apr 17 23:33:26 2017
From: ramkri123 at gmail.com (Ram Krishnan)
Date: Mon, 17 Apr 2017 16:33:26 -0700
Subject: trouble passing JVM startup options using JTREG
In-Reply-To: <58F533C6.10507@oracle.com>
References: <58F533C6.10507@oracle.com>
Message-ID:

Many thanks Jonathan for the immediate reply.

I am copying the hotspot gc team.

Hotspot gc team -- your help would be much appreciated on the topic below.

Thanks,
Ramki

On Mon, Apr 17, 2017 at 2:29 PM, Jonathan Gibbons <jonathan.gibbons at oracle.com> wrote:
> On 04/17/2017 02:18 PM, Ram Krishnan wrote:
> > Hi,
> >
> > I have been able to successfully run all the tests in hotspot/test/gc/g1
> > using jtreg.
> [...]

From yu.zhang at oracle.com Mon Apr 17 23:49:39 2017
From: yu.zhang at oracle.com (Jenny Zhang)
Date: Mon, 17 Apr 2017 16:49:39 -0700
Subject: trouble passing JVM startup options using JTREG
In-Reply-To:
References: <58F533C6.10507@oracle.com>
Message-ID: <58F55493.2000302@oracle.com>

Ramki,

Can you do the following to check whether hotspot took the parameter?

  java -XX:ConcGCThreads=1 -XX:+PrintFlagsFinal

I am using jdk9b154; the output shows it changed ConcGCThreads to 1.
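For example (the relevant output line, quoted from memory, so the exact format may differ between builds):

  $ java -XX:ConcGCThreads=1 -XX:+PrintFlagsFinal -version | grep ConcGCThreads
     uint ConcGCThreads                    = 1          {product} {command line}

(-version is only there so the VM exits after printing the flags)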
Thanks
Jenny

On 4/17/2017 4:33 PM, Ram Krishnan wrote:
> Many thanks Jonathan for the immediate reply.
> [...]

From ramkri123 at gmail.com Tue Apr 18 00:09:48 2017
From: ramkri123 at gmail.com (Ram Krishnan)
Date: Mon, 17 Apr 2017 17:09:48 -0700
Subject: trouble passing JVM startup options using JTREG
In-Reply-To: <58F55493.2000302@oracle.com>
References: <58F533C6.10507@oracle.com> <58F55493.2000302@oracle.com>
Message-ID:

Hi Jenny,

I tried what you suggested. The hotspot output indeed shows ConcGCThreads as 1.

The problem seems to be in the interaction with jtreg.

Thanks,
Ramki

On Mon, Apr 17, 2017 at 4:49 PM, Jenny Zhang wrote:
> Ramki,
>
> Can you do the following to check whether hotspot took the parameter?
>
>   java -XX:ConcGCThreads=1 -XX:+PrintFlagsFinal
>
> I am using jdk9b154; the output shows it changed ConcGCThreads to 1.
> [...]
From dmitry.fazunenko at oracle.com  Tue Apr 18 06:21:54 2017
From: dmitry.fazunenko at oracle.com (Dmitry Fazunenko)
Date: Tue, 18 Apr 2017 09:21:54 +0300
Subject: trouble passing JVM startup options using JTREG
In-Reply-To: 
References: <58F533C6.10507@oracle.com> <58F55493.2000302@oracle.com>
Message-ID: <79871f46-0082-5db4-9f95-2988930ad763@oracle.com>

Hi Ramki,

It's very unlikely to be an issue related to jtreg. I ran the test you mentioned manually; this is the quote from the .jtr file:
...
Command line: [/jdk9/solaris-sparcv9/bin/java -d64 -cp /home/fa/hs-int/hotspot/test/gc/g1/JTwork/classes/gc/g1:/home/fa/hs-int/hotspot/test/gc/g1:/home/fa/hs-int/hotspot/test/gc/g1/JTwork/classes/test/lib:/home/fa/hs-int/test/lib:/home/fa/jtreg/lib/javatest.jar:/home/fa/jtreg/lib/jtreg.jar -XX:+UseG1GC -Xms128M -Xmx128M -Xmn2M -XX:G1HeapRegionSize=1M -XX:InitiatingHeapOccupancyPercent=0 -Xlog:gc -XX:+UnlockDiagnosticVMOptions -XX:+VerifyAfterGC *-XX:ConcGCThreads=1* -XX:+IgnoreUnrecognizedVMOptions -XX:+G1VerifyBitmaps ReclaimRegionFast ]
...
All the VM options are passed as expected.

> I have confirmed this through prints in the JVM code base.
I'm not sure what you mean here, but I guess you did something wrong.

Please note, during execution of this test two JVMs are launched:
- the first one started by jtreg (the TestEagerReclaimHumongousRegionsClearMarkBits class)
- the second one started by the test (the ReclaimRegionFast class)

In the first one ConcGCThreads should be set to 0.

Thanks,
Dima
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
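As a shortcut for inspecting this locally: the resolved command line of the child JVM is recorded in the test's .jtr report, so something like the following pulls it out (the relative path follows the JTwork layout shown above and depends on where jtreg was started):

grep "Command line:" JTwork/gc/g1/TestEagerReclaimHumongousRegionsClearMarkBits.jtr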
From ramkri123 at gmail.com  Tue Apr 18 14:56:48 2017
From: ramkri123 at gmail.com (Ram Krishnan)
Date: Tue, 18 Apr 2017 07:56:48 -0700
Subject: trouble passing JVM startup options using JTREG
In-Reply-To: <79871f46-0082-5db4-9f95-2988930ad763@oracle.com>
References: <58F533C6.10507@oracle.com> <58F55493.2000302@oracle.com> <79871f46-0082-5db4-9f95-2988930ad763@oracle.com>
Message-ID: 

Hi Dmitry,

Thanks, more below.

With the expanded command line, ConcGCThreads is indeed set to 1, as expected, in the ReclaimRegionFast JVM. With the direct jtreg invocation, ConcGCThreads is 0 in both JVMs. The usage details are below. My build is based on JDK 9 and I downloaded the latest jtreg. There may be something wrong in my jtreg usage -- can you please clarify? 
Using jtreg directly does not work ---------------------------------- /home/ramki/9dev/build/linux-x86_64-normal-server-release/images/jdk/bin/java -jar /home/ramki/jtreg/lib/jtreg.jar -verbose:all /home/ramki/9dev/hotspot/test/gc/g1/TestEagerReclaimHumongousRegionsClearMarkBits.java Command line option works ------------------------- /home/ramki/9dev/build/linux-x86_64-normal-server-release/images/jdk/bin/java -cp /home/ramki/9dev/hotspot/test/JTwork/classes/gc/g1:/home/ramki/9dev/hotspot/test/gc/g1:/home/ramki/9dev/hotspot/test/JTwork/classes/test/lib:/home/ramki/9dev/test/lib:/home/ramki/jtreg/lib/javatest.jar:/home/ramki/jtreg/lib/jtreg.jar -XX:+UseG1GC -Xms128M -Xmx128M -Xmn2M -XX:G1HeapRegionSize=1M -XX:InitiatingHeapOccupancyPercent=0 -Xlog:gc -XX:+UnlockDiagnosticVMOptions -XX:+VerifyAfterGC -XX:ConcGCThreads=1 -XX:+IgnoreUnrecognizedVMOptions -XX:+G1VerifyBitmaps -XX:G1GcCpuLLCCachePartitionPercent=48 ReclaimRegionFast Thanks, Ramki On Mon, Apr 17, 2017 at 11:21 PM, Dmitry Fazunenko < dmitry.fazunenko at oracle.com> wrote: > Hi Ramki, > > It's very unlikely to be an issue related to jtreg somehow. > I ran the test you mentioned manually, this is the quote from .jtr file: > ... > Command line: [/jdk9/solaris-sparcv9/bin/java -d64 -cp > /home/fa/hs-int/hotspot/test/gc/g1/JTwork/classes/gc/g1:/ > home/fa/hs-int/hotspot/test/gc/g1:/home/fa/hs-int/hotspot/ > test/gc/g1/JTwork/classes/test/lib:/home/fa/hs-int/test/ > lib:/home/fa/jtreg/lib/javatest.jar:/home/fa/jtreg/lib/jtreg.jar > -XX:+UseG1GC -Xms128M -Xmx128M -Xmn2M -XX:G1HeapRegionSize=1M -XX: > InitiatingHeapOccupancyPercent=0 -Xlog:gc -XX:+UnlockDiagnosticVMOptions > -XX:+VerifyAfterGC *-XX:ConcGCThreads=1* -XX:+IgnoreUnrecognizedVMOptions > -XX:+G1VerifyBitmaps > ?? > ReclaimRegionFast ] > ... > All the VM options are passed as expected. > > > I have confirmed this through prints in the JVM code base. > I'm not sure what do you mean here, but I guess you did something wrong. > > Please note, during execution of this test two JVM are launched: > - the first one started by jtreg (TestEagerReclaimHumongousRegionsClearMarkBits > class) > - the second started by test (ReclaimRegionFast class) > > In the first one > ?? > ConcGCThread should be set to 0. > > Thanks, > Dima > > On 18.04.2017 3:09, Ram Krishnan wrote: > > Hi Jenny, > > I tried what you suggested. Hotspot output indeed shows ?ConcGCThreads as > 1. > > The problem seems to be interaction with jtreg. > > Thanks, > Ramki > > On Mon, Apr 17, 2017 at 4:49 PM, Jenny Zhang wrote: > >> Ramki, >> >> Can you do the following to be sure that hotspot did not take the >> parameter? >> java -XX: >> ?? >> ConcGCThreads=1 -XX:+PrintFlagsFinal >> >> I am using jdk9b154, the output shows it changed the ConcGCThreads to 1 >> >> Thanks >> Jenny >> >> On 4/17/2017 4:33 PM, Ram Krishnan wrote: >> >> Many thanks Jonathan for the immediate reply. >> >> I am copying the hotspot gc team. >> >> Hotspot gc team -- your help would be much appreciated on the topic below. >> >> Thanks, >> Ramki >> >> On Mon, Apr 17, 2017 at 2:29 PM, Jonathan Gibbons < >> jonathan.gibbons at oracle.com> wrote: >> >>> >>> >>> On 04/17/2017 02:18 PM, Ram Krishnan wrote: >>> >>> Hi, >>> >>> I have been able to successfully run all the tests in hotspot/test/gc/g1 >>> using jtreg. >>> >>> The only gotcha I am facing is that the JVM startup options specified in >>> process builder does not have any effect. I have confirmed this through >>> prints in the JVM code base. >>> >>> For example, >>> ?/hotspot/test/gc/g1/? 
TestEagerReclaimHumongousRegionsClearMarkBits.java >>> modifies the "-XX:ConcGCThreads=1", but inside the JVM code to value of >>> ConcGCThreads is still zero. >>> >>> ?I am new to jtreg and openjdk and probably missing something obvious. >>> Your help would be much appreciated. >>> >>> Thanks in advance,? >>> ?Ramki? >>> >>> >>> Ramki, >>> >>> This does not look like an issue with jtreg, since the behavior you are >>> apparently seeing is all within the test code and its libraries. >>> >>> You might want to follow up with the Hotspot team, who would have more >>> familiarity with these tests and the associated libraries. >>> >>> -- Jon >>> >>> >> >> >> -- >> Thanks, >> Ramki >> >> >> > > > -- > Thanks, > Ramki > > > -- Thanks, Ramki -------------- next part -------------- An HTML attachment was scrubbed... URL: From ramkri123 at gmail.com Tue Apr 18 16:16:50 2017 From: ramkri123 at gmail.com (Ram Krishnan) Date: Tue, 18 Apr 2017 09:16:50 -0700 Subject: trouble passing JVM startup options using JTREG In-Reply-To: <727762a3-29f6-5b28-478c-e0838868b6c9@oracle.com> References: <58F533C6.10507@oracle.com> <58F55493.2000302@oracle.com> <79871f46-0082-5db4-9f95-2988930ad763@oracle.com> <727762a3-29f6-5b28-478c-e0838868b6c9@oracle.com> Message-ID: Hi Dima, Thanks. I tried your suggestion and also examined the .jtr file in ?JTWork/gc/g1/ folder. I am getting the same results as before. Thanks, Ramki On Tue, Apr 18, 2017 at 8:12 AM, Dmitry Fazunenko < dmitry.fazunenko at oracle.com> wrote: > Hi Ramki, > > I think you need to specify "-jdk:" option to jtreg: > > /home/ramki/9dev/build/linux-x86_64-normal-server-release/images/jdk/bin/java > -jar /home/ramki/jtreg/lib/jtreg.jar -verbose:all > *-jdk:/home/ramki/9dev/build/linux-x86_64-normal-server-release/images/jdk* > /home/ramki/9dev/hotspot/test/gc/g1/TestEagerReclaimHumongousRegionsClearMarkBits.java > > > More information about test execution you can find in the .jtr file > created in > ?? > JTWork/gc/g1/ folder. > > Thanks, > Dima > > > > On 18.04.2017 17:56, Ram Krishnan wrote: > > Hi Dmitry, > > Thanks, more below. > > In the expanded command line option, ?ConcGCThread is indeed set to 1 as > expected in the ?ReclaimRegionFastclass JVM. In the direct jtreg option, > ?ConcGCThread is 0 in both JVMs. The usage details are below. My build is > based on JDK 9 and I downloaded the latest jtreg. There may be something > wrong in my jtreg usage -- can you please clarify? 
> > Using jtreg directly does not work > ---------------------------------- > /home/ramki/9dev/build/linux-x86_64-normal-server-release/images/jdk/bin/java > -jar /home/ramki/jtreg/lib/jtreg.jar -verbose:all > /home/ramki/9dev/hotspot/test/gc/g1/TestEagerReclaimHumongousRegionsClearMarkBits.java > > > Command line option works > ------------------------- > /home/ramki/9dev/build/linux-x86_64-normal-server-release/images/jdk/bin/java > -cp /home/ramki/9dev/hotspot/test/JTwork/classes/gc/g1:/home/ > ramki/9dev/hotspot/test/gc/g1:/home/ramki/9dev/hotspot/test/ > JTwork/classes/test/lib:/home/ramki/9dev/test/lib:/home/ > ramki/jtreg/lib/javatest.jar:/home/ramki/jtreg/lib/jtreg.jar -XX:+UseG1GC > -Xms128M -Xmx128M -Xmn2M -XX:G1HeapRegionSize=1M -XX: > InitiatingHeapOccupancyPercent=0 -Xlog:gc -XX:+UnlockDiagnosticVMOptions > -XX:+VerifyAfterGC -XX:ConcGCThreads=1 -XX:+IgnoreUnrecognizedVMOptions > -XX:+G1VerifyBitmaps -XX:G1GcCpuLLCCachePartitionPercent=48 > ReclaimRegionFast > > Thanks, > Ramki > > On Mon, Apr 17, 2017 at 11:21 PM, Dmitry Fazunenko < > dmitry.fazunenko at oracle.com> wrote: > >> Hi Ramki, >> >> It's very unlikely to be an issue related to jtreg somehow. >> I ran the test you mentioned manually, this is the quote from .jtr file: >> ... >> Command line: [/jdk9/solaris-sparcv9/bin/java -d64 -cp >> /home/fa/hs-int/hotspot/test/gc/g1/JTwork/classes/gc/g1:/hom >> e/fa/hs-int/hotspot/test/gc/g1:/home/fa/hs-int/hotspot/tes >> t/gc/g1/JTwork/classes/test/lib:/home/fa/hs-int/test/lib:/ >> home/fa/jtreg/lib/javatest.jar:/home/fa/jtreg/lib/jtreg.jar -XX:+UseG1GC >> -Xms128M -Xmx128M -Xmn2M -XX:G1HeapRegionSize=1M >> -XX:InitiatingHeapOccupancyPercent=0 -Xlog:gc >> -XX:+UnlockDiagnosticVMOptions -XX:+VerifyAfterGC *-XX:ConcGCThreads=1* >> -XX:+IgnoreUnrecognizedVMOptions -XX:+G1VerifyBitmaps >> ?? >> ReclaimRegionFast ] >> ... >> All the VM options are passed as expected. >> >> > I have confirmed this through prints in the JVM code base. >> I'm not sure what do you mean here, but I guess you did something wrong. >> >> Please note, during execution of this test two JVM are launched: >> - the first one started by jtreg (TestEagerReclaimHumongousRegionsClearMarkBits >> class) >> - the second started by test (ReclaimRegionFast class) >> >> In the first one >> ?? >> ConcGCThread should be set to 0. >> >> Thanks, >> Dima >> >> On 18.04.2017 3:09, Ram Krishnan wrote: >> >> Hi Jenny, >> >> I tried what you suggested. Hotspot output indeed shows ?ConcGCThreads as >> 1. >> >> The problem seems to be interaction with jtreg. >> >> Thanks, >> Ramki >> >> On Mon, Apr 17, 2017 at 4:49 PM, Jenny Zhang wrote: >> >>> Ramki, >>> >>> Can you do the following to be sure that hotspot did not take the >>> parameter? >>> java -XX: >>> ?? >>> ConcGCThreads=1 -XX:+PrintFlagsFinal >>> >>> I am using jdk9b154, the output shows it changed the ConcGCThreads to 1 >>> >>> Thanks >>> Jenny >>> >>> On 4/17/2017 4:33 PM, Ram Krishnan wrote: >>> >>> Many thanks Jonathan for the immediate reply. >>> >>> I am copying the hotspot gc team. >>> >>> Hotspot gc team -- your help would be much appreciated on the topic >>> below. >>> >>> Thanks, >>> Ramki >>> >>> On Mon, Apr 17, 2017 at 2:29 PM, Jonathan Gibbons < >>> jonathan.gibbons at oracle.com> wrote: >>> >>>> >>>> >>>> On 04/17/2017 02:18 PM, Ram Krishnan wrote: >>>> >>>> Hi, >>>> >>>> I have been able to successfully run all the tests in >>>> hotspot/test/gc/g1 using jtreg. 
>>>> >>>> The only gotcha I am facing is that the JVM startup options specified >>>> in process builder does not have any effect. I have confirmed this through >>>> prints in the JVM code base. >>>> >>>> For example, >>>> ?/hotspot/test/gc/g1/? TestEagerReclaimHumongousRegionsClearMarkBits.java >>>> modifies the "-XX:ConcGCThreads=1", but inside the JVM code to value of >>>> ConcGCThreads is still zero. >>>> >>>> ?I am new to jtreg and openjdk and probably missing something obvious. >>>> Your help would be much appreciated. >>>> >>>> Thanks in advance,? >>>> ?Ramki? >>>> >>>> >>>> Ramki, >>>> >>>> This does not look like an issue with jtreg, since the behavior you are >>>> apparently seeing is all within the test code and its libraries. >>>> >>>> You might want to follow up with the Hotspot team, who would have more >>>> familiarity with these tests and the associated libraries. >>>> >>>> -- Jon >>>> >>>> >>> >>> >>> -- >>> Thanks, >>> Ramki >>> >>> >>> >> >> >> -- >> Thanks, >> Ramki >> >> >> > > > -- > Thanks, > Ramki > > > -- Thanks, Ramki -------------- next part -------------- An HTML attachment was scrubbed... URL: From dmitry.fazunenko at oracle.com Tue Apr 18 15:12:02 2017 From: dmitry.fazunenko at oracle.com (Dmitry Fazunenko) Date: Tue, 18 Apr 2017 18:12:02 +0300 Subject: trouble passing JVM startup options using JTREG In-Reply-To: References: <58F533C6.10507@oracle.com> <58F55493.2000302@oracle.com> <79871f46-0082-5db4-9f95-2988930ad763@oracle.com> Message-ID: <727762a3-29f6-5b28-478c-e0838868b6c9@oracle.com> Hi Ramki, I think you need to specify "-jdk:" option to jtreg: /home/ramki/9dev/build/linux-x86_64-normal-server-release/images/jdk/bin/java -jar /home/ramki/jtreg/lib/jtreg.jar -verbose:all *-jdk:/home/ramki/9dev/build/linux-x86_64-normal-server-release/images/jdk* /home/ramki/9dev/hotspot/test/gc/g1/TestEagerReclaimHumongousRegionsClearMarkBits.java More information about test execution you can find in the .jtr file created in JTWork/gc/g1/ folder. Thanks, Dima On 18.04.2017 17:56, Ram Krishnan wrote: > Hi Dmitry, > > Thanks, more below. > > In the expanded command line option, ?ConcGCThread is indeed set to 1 > as expected in the ?ReclaimRegionFastclass JVM. In the direct jtreg > option, ?ConcGCThread is 0 in both JVMs. The usage details are below. > My build is based on JDK 9 and I downloaded the latest jtreg. There > may be something wrong in my jtreg usage -- can you please clarify? 
> > Using jtreg directly does not work > ---------------------------------- > /home/ramki/9dev/build/linux-x86_64-normal-server-release/images/jdk/bin/java > -jar /home/ramki/jtreg/lib/jtreg.jar -verbose:all > /home/ramki/9dev/hotspot/test/gc/g1/TestEagerReclaimHumongousRegionsClearMarkBits.java > > > Command line option works > ------------------------- > /home/ramki/9dev/build/linux-x86_64-normal-server-release/images/jdk/bin/java > -cp > /home/ramki/9dev/hotspot/test/JTwork/classes/gc/g1:/home/ramki/9dev/hotspot/test/gc/g1:/home/ramki/9dev/hotspot/test/JTwork/classes/test/lib:/home/ramki/9dev/test/lib:/home/ramki/jtreg/lib/javatest.jar:/home/ramki/jtreg/lib/jtreg.jar > -XX:+UseG1GC -Xms128M -Xmx128M -Xmn2M -XX:G1HeapRegionSize=1M > -XX:InitiatingHeapOccupancyPercent=0 -Xlog:gc > -XX:+UnlockDiagnosticVMOptions -XX:+VerifyAfterGC -XX:ConcGCThreads=1 > -XX:+IgnoreUnrecognizedVMOptions -XX:+G1VerifyBitmaps > -XX:G1GcCpuLLCCachePartitionPercent=48 ReclaimRegionFast > > Thanks, > Ramki > > On Mon, Apr 17, 2017 at 11:21 PM, Dmitry Fazunenko > > wrote: > > Hi Ramki, > > It's very unlikely to be an issue related to jtreg somehow. > I ran the test you mentioned manually, this is the quote from .jtr > file: > ... > Command line: [/jdk9/solaris-sparcv9/bin/java -d64 -cp > /home/fa/hs-int/hotspot/test/gc/g1/JTwork/classes/gc/g1:/home/fa/hs-int/hotspot/test/gc/g1:/home/fa/hs-int/hotspot/test/gc/g1/JTwork/classes/test/lib:/home/fa/hs-int/test/lib:/home/fa/jtreg/lib/javatest.jar:/home/fa/jtreg/lib/jtreg.jar > -XX:+UseG1GC -Xms128M -Xmx128M -Xmn2M -XX:G1HeapRegionSize=1M > -XX:InitiatingHeapOccupancyPercent=0 -Xlog:gc > -XX:+UnlockDiagnosticVMOptions -XX:+VerifyAfterGC > *-XX:ConcGCThreads=1* -XX:+IgnoreUnrecognizedVMOptions > -XX:+G1VerifyBitmaps > ?? > ReclaimRegionFast ] > ... > All the VM options are passed as expected. > > > I have confirmed this through prints in the JVM code base. > I'm not sure what do you mean here, but I guess you did something > wrong. > > Please note, during execution of this test two JVM are launched: > - the first one started by jtreg > (TestEagerReclaimHumongousRegionsClearMarkBits class) > - the second started by test (ReclaimRegionFast class) > > In the first one > ?? > ConcGCThread should be set to 0. > > Thanks, > Dima > > On 18.04.2017 3:09, Ram Krishnan wrote: >> Hi Jenny, >> >> I tried what you suggested. Hotspot output indeed shows >> ?ConcGCThreads as 1. >> >> The problem seems to be interaction with jtreg. >> >> Thanks, >> Ramki >> >> On Mon, Apr 17, 2017 at 4:49 PM, Jenny Zhang > > wrote: >> >> Ramki, >> >> Can you do the following to be sure that hotspot did not take >> the parameter? >> java -XX: >> ?? >> ConcGCThreads=1 -XX:+PrintFlagsFinal >> >> I am using jdk9b154, the output shows it changed the >> ConcGCThreads to 1 >> >> Thanks >> Jenny >> >> On 4/17/2017 4:33 PM, Ram Krishnan wrote: >>> Many thanks Jonathan for the immediate reply. >>> >>> I am copying the hotspot gc team. >>> >>> Hotspot gc team -- your help would be much appreciated on >>> the topic below. >>> >>> Thanks, >>> Ramki >>> >>> On Mon, Apr 17, 2017 at 2:29 PM, Jonathan Gibbons >>> >> > wrote: >>> >>> >>> >>> On 04/17/2017 02:18 PM, Ram Krishnan wrote: >>>> Hi, >>>> >>>> I have been able to successfully run all the tests in >>>> hotspot/test/gc/g1 using jtreg. >>>> >>>> The only gotcha I am facing is that the JVM startup >>>> options specified in process builder does not have any >>>> effect. I have confirmed this through prints in the JVM >>>> code base. 
>>>> >>>> For example, >>>> ?/hotspot/test/gc/g1/? >>>> TestEagerReclaimHumongousRegionsClearMarkBits.java >>>> modifies the "-XX:ConcGCThreads=1", but inside the JVM >>>> code to value of ConcGCThreads is still zero. >>>> >>>> ?I am new to jtreg and openjdk and probably missing >>>> something obvious. Your help would be much appreciated. >>>> >>>> Thanks in advance,? >>>> ?Ramki? >>>> >>> >>> Ramki, >>> >>> This does not look like an issue with jtreg, since the >>> behavior you are apparently seeing is all within the >>> test code and its libraries. >>> >>> You might want to follow up with the Hotspot team, who >>> would have more familiarity with these tests and the >>> associated libraries. >>> >>> -- Jon >>> >>> >>> >>> >>> -- >>> Thanks, >>> Ramki >> >> >> >> >> -- >> Thanks, >> Ramki > > > > > -- > Thanks, > Ramki -------------- next part -------------- An HTML attachment was scrubbed... URL: From igor.ignatyev at oracle.com Tue Apr 18 16:23:51 2017 From: igor.ignatyev at oracle.com (Igor Ignatyev) Date: Tue, 18 Apr 2017 09:23:51 -0700 Subject: trouble passing JVM startup options using JTREG In-Reply-To: References: <58F533C6.10507@oracle.com> <58F55493.2000302@oracle.com> <79871f46-0082-5db4-9f95-2988930ad763@oracle.com> <727762a3-29f6-5b28-478c-e0838868b6c9@oracle.com> Message-ID: <95042001-9806-4330-9688-ACF50E15CCE7@oracle.com> Ramki, if you want jtreg to pass a flag to JDK under test,you should specify by jtreg -javaoptions flag[1]: /home/ramki/9dev/build/linux-x86_64-normal-server-release/images/jdk/bin/java -jar /home/ramki/jtreg/lib/jtreg.jar -verbose:all -javaoptions:-XX:ConcGCThreads=1 -jdk:/home/ramki/9dev/build/linux-x86_64-normal-server-release/images/jdk /home/ramki/9dev/hotspot/test/gc/g1/TestEagerReclaimHumongousRegionsClearMarkBits.java [1] http://openjdk.java.net/jtreg/command-help.html Thanks, -- Igor > On Apr 18, 2017, at 9:16 AM, Ram Krishnan wrote: > > Hi Dima, > > Thanks. > > I tried your suggestion and also examined the .jtr file in ?JTWork/gc/g1/ folder. I am getting the same results as before. > > Thanks, > Ramki > > On Tue, Apr 18, 2017 at 8:12 AM, Dmitry Fazunenko > wrote: > Hi Ramki, > > I think you need to specify "-jdk:" option to jtreg: > > /home/ramki/9dev/build/linux-x86_64-normal-server-release/images/jdk/bin/java -jar /home/ramki/jtreg/lib/jtreg.jar -verbose:all -jdk:/home/ramki/9dev/build/linux-x86_64-normal-server-release/images/jdk /home/ramki/9dev/hotspot/test/gc/g1/TestEagerReclaimHumongousRegionsClearMarkBits.java > > More information about test execution you can find in the .jtr file created in ??JTWork/gc/g1/ folder. > > Thanks, > Dima > > > > On 18.04.2017 17:56, Ram Krishnan wrote: >> Hi Dmitry, >> >> Thanks, more below. >> >> In the expanded command line option, ?ConcGCThread is indeed set to 1 as expected in the ?ReclaimRegionFastclass JVM. In the direct jtreg option, ?ConcGCThread is 0 in both JVMs. The usage details are below. My build is based on JDK 9 and I downloaded the latest jtreg. There may be something wrong in my jtreg usage -- can you please clarify? 
>> >> Using jtreg directly does not work >> ---------------------------------- >> /home/ramki/9dev/build/linux-x86_64-normal-server-release/images/jdk/bin/java -jar /home/ramki/jtreg/lib/jtreg.jar -verbose:all /home/ramki/9dev/hotspot/test/gc/g1/TestEagerReclaimHumongousRegionsClearMarkBits.java >> >> Command line option works >> ------------------------- >> /home/ramki/9dev/build/linux-x86_64-normal-server-release/images/jdk/bin/java -cp /home/ramki/9dev/hotspot/test/JTwork/classes/gc/g1:/home/ramki/9dev/hotspot/test/gc/g1:/home/ramki/9dev/hotspot/test/JTwork/classes/test/lib:/home/ramki/9dev/test/lib:/home/ramki/jtreg/lib/javatest.jar:/home/ramki/jtreg/lib/jtreg.jar -XX:+UseG1GC -Xms128M -Xmx128M -Xmn2M -XX:G1HeapRegionSize=1M -XX:InitiatingHeapOccupancyPercent=0 -Xlog:gc -XX:+UnlockDiagnosticVMOptions -XX:+VerifyAfterGC -XX:ConcGCThreads=1 -XX:+IgnoreUnrecognizedVMOptions -XX:+G1VerifyBitmaps -XX:G1GcCpuLLCCachePartitionPercent=48 ReclaimRegionFast >> >> Thanks, >> Ramki >> >> On Mon, Apr 17, 2017 at 11:21 PM, Dmitry Fazunenko > wrote: >> Hi Ramki, >> >> It's very unlikely to be an issue related to jtreg somehow. >> I ran the test you mentioned manually, this is the quote from .jtr file: >> ... >> Command line: [/jdk9/solaris-sparcv9/bin/java -d64 -cp /home/fa/hs-int/hotspot/test/gc/g1/JTwork/classes/gc/g1:/home/fa/hs-int/hotspot/test/gc/g1:/home/fa/hs-int/hotspot/test/gc/g1/JTwork/classes/test/lib:/home/fa/hs-int/test/lib:/home/fa/jtreg/lib/javatest.jar:/home/fa/jtreg/lib/jtreg.jar -XX:+UseG1GC -Xms128M -Xmx128M -Xmn2M -XX:G1HeapRegionSize=1M -XX:InitiatingHeapOccupancyPercent=0 -Xlog:gc -XX:+UnlockDiagnosticVMOptions -XX:+VerifyAfterGC -XX:ConcGCThreads=1 -XX:+IgnoreUnrecognizedVMOptions -XX:+G1VerifyBitmaps >> ?? >> ReclaimRegionFast ] >> ... >> All the VM options are passed as expected. >> >> > I have confirmed this through prints in the JVM code base. >> I'm not sure what do you mean here, but I guess you did something wrong. >> >> Please note, during execution of this test two JVM are launched: >> - the first one started by jtreg (TestEagerReclaimHumongousRegionsClearMarkBits class) >> - the second started by test (ReclaimRegionFast class) >> >> In the first one >> ?? >> ConcGCThread should be set to 0. >> >> Thanks, >> Dima >> >> On 18.04.2017 3:09, Ram Krishnan wrote: >>> Hi Jenny, >>> >>> I tried what you suggested. Hotspot output indeed shows ?ConcGCThreads as 1. >>> >>> The problem seems to be interaction with jtreg. >>> >>> Thanks, >>> Ramki >>> >>> On Mon, Apr 17, 2017 at 4:49 PM, Jenny Zhang > wrote: >>> Ramki, >>> >>> Can you do the following to be sure that hotspot did not take the parameter? >>> java -XX: >>> ?? >>> ConcGCThreads=1 -XX:+PrintFlagsFinal >>> >>> I am using jdk9b154, the output shows it changed the ConcGCThreads to 1 >>> >>> Thanks >>> Jenny >>> >>> On 4/17/2017 4:33 PM, Ram Krishnan wrote: >>>> Many thanks Jonathan for the immediate reply. >>>> >>>> I am copying the hotspot gc team. >>>> >>>> Hotspot gc team -- your help would be much appreciated on the topic below. >>>> >>>> Thanks, >>>> Ramki >>>> >>>> On Mon, Apr 17, 2017 at 2:29 PM, Jonathan Gibbons > wrote: >>>> >>>> >>>> On 04/17/2017 02:18 PM, Ram Krishnan wrote: >>>>> Hi, >>>>> >>>>> I have been able to successfully run all the tests in hotspot/test/gc/g1 using jtreg. >>>>> >>>>> The only gotcha I am facing is that the JVM startup options specified in process builder does not have any effect. I have confirmed this through prints in the JVM code base. 
>>>>> >>>>> For example, >>>>> ?/hotspot/test/gc/g1/? TestEagerReclaimHumongousRegionsClearMarkBits.java modifies the "-XX:ConcGCThreads=1", but inside the JVM code to value of ConcGCThreads is still zero. >>>>> >>>>> ?I am new to jtreg and openjdk and probably missing something obvious. Your help would be much appreciated. >>>>> >>>>> Thanks in advance,? >>>>> ?Ramki? >>>>> >>>> >>>> Ramki, >>>> >>>> This does not look like an issue with jtreg, since the behavior you are apparently seeing is all within the test code and its libraries. >>>> >>>> You might want to follow up with the Hotspot team, who would have more familiarity with these tests and the associated libraries. >>>> >>>> -- Jon >>>> >>>> >>>> >>>> >>>> -- >>>> Thanks, >>>> Ramki >>> >>> >>> >>> >>> -- >>> Thanks, >>> Ramki >> >> >> >> >> -- >> Thanks, >> Ramki > > > > > -- > Thanks, > Ramki -------------- next part -------------- An HTML attachment was scrubbed... URL: From ramkri123 at gmail.com Tue Apr 18 22:35:30 2017 From: ramkri123 at gmail.com (Ram Krishnan) Date: Tue, 18 Apr 2017 15:35:30 -0700 Subject: trouble passing JVM startup options using JTREG In-Reply-To: <95042001-9806-4330-9688-ACF50E15CCE7@oracle.com> References: <58F533C6.10507@oracle.com> <58F55493.2000302@oracle.com> <79871f46-0082-5db4-9f95-2988930ad763@oracle.com> <727762a3-29f6-5b28-478c-e0838868b6c9@oracle.com> <95042001-9806-4330-9688-ACF50E15CCE7@oracle.com> Message-ID: Hi Igor, It is indeed working now. Many thanks! Thanks, Ramki On Tue, Apr 18, 2017 at 9:23 AM, Igor Ignatyev wrote: > Ramki, > > if you want jtreg to pass a flag to JDK under test,you should specify by > jtreg -javaoptions flag[1]: > > ?? > /home/ramki/9dev/build/linux-x86_64-normal-server-release/images/jdk/bin/java > -jar /home/ramki/jtreg/lib/jtreg.jar -verbose:all -javaoptions:-XX:ConcGCThreads=1 > -jdk:/home/ramki/9dev/build/linux-x86_64-normal-server- > release/images/jdk /home/ramki/9dev/hotspot/test/gc/g1/ > TestEagerReclaimHumongousRegionsClearMarkBits.java > > [1] http://openjdk.java.net/jtreg/command-help.html > > Thanks, > -- Igor > > On Apr 18, 2017, at 9:16 AM, Ram Krishnan wrote: > > Hi Dima, > > Thanks. > > I tried your suggestion and also examined the .jtr file in ? > ?? > JTWork/gc/g1/ folder. I am getting the same results as before. > > Thanks, > Ramki > > On Tue, Apr 18, 2017 at 8:12 AM, Dmitry Fazunenko < > dmitry.fazunenko at oracle.com> wrote: > >> Hi Ramki, >> >> I think you need to specify "-jdk:" option to jtreg: >> >> /home/ramki/9dev/build/linux-x86_64-normal-server-release/images/jdk/bin/java >> -jar /home/ramki/jtreg/lib/jtreg.jar -verbose:all >> *-jdk:/home/ramki/9dev/build/linux-x86_64-normal-server-release/images/jdk* >> /home/ramki/9dev/hotspot/test/gc/g1/TestEagerReclaimHumongousRegionsClearMarkBits.java >> >> >> More information about test execution you can find in the .jtr file >> created in >> ?? >> JTWork/gc/g1/ folder. >> >> Thanks, >> Dima >> >> >> >> On 18.04.2017 17:56, Ram Krishnan wrote: >> >> Hi Dmitry, >> >> Thanks, more below. >> >> In the expanded command line option, ?ConcGCThread is indeed set to 1 as >> expected in the ?ReclaimRegionFastclass JVM. In the direct jtreg option, >> ?ConcGCThread is 0 in both JVMs. The usage details are below. My build is >> based on JDK 9 and I downloaded the latest jtreg. There may be something >> wrong in my jtreg usage -- can you please clarify? 
>> >> Using jtreg directly does not work >> ---------------------------------- >> /home/ramki/9dev/build/linux-x86_64-normal-server-release/images/jdk/bin/java >> -jar /home/ramki/jtreg/lib/jtreg.jar -verbose:all >> /home/ramki/9dev/hotspot/test/gc/g1/TestEagerReclaimHumongousRegionsClearMarkBits.java >> >> >> Command line option works >> ------------------------- >> /home/ramki/9dev/build/linux-x86_64-normal-server-release/images/jdk/bin/java >> -cp /home/ramki/9dev/hotspot/test/JTwork/classes/gc/g1:/home/ram >> ki/9dev/hotspot/test/gc/g1:/home/ramki/9dev/hotspot/test/J >> Twork/classes/test/lib:/home/ramki/9dev/test/lib:/home/ramki >> /jtreg/lib/javatest.jar:/home/ramki/jtreg/lib/jtreg.jar -XX:+UseG1GC >> -Xms128M -Xmx128M -Xmn2M -XX:G1HeapRegionSize=1M >> -XX:InitiatingHeapOccupancyPercent=0 -Xlog:gc >> -XX:+UnlockDiagnosticVMOptions -XX:+VerifyAfterGC -XX:ConcGCThreads=1 >> -XX:+IgnoreUnrecognizedVMOptions -XX:+G1VerifyBitmaps >> -XX:G1GcCpuLLCCachePartitionPercent=48 ReclaimRegionFast >> >> Thanks, >> Ramki >> >> On Mon, Apr 17, 2017 at 11:21 PM, Dmitry Fazunenko < >> dmitry.fazunenko at oracle.com> wrote: >> >>> Hi Ramki, >>> >>> It's very unlikely to be an issue related to jtreg somehow. >>> I ran the test you mentioned manually, this is the quote from .jtr file: >>> ... >>> Command line: [/jdk9/solaris-sparcv9/bin/java -d64 -cp >>> /home/fa/hs-int/hotspot/test/gc/g1/JTwork/classes/gc/g1:/hom >>> e/fa/hs-int/hotspot/test/gc/g1:/home/fa/hs-int/hotspot/test/ >>> gc/g1/JTwork/classes/test/lib:/home/fa/hs-int/test/lib:/home >>> /fa/jtreg/lib/javatest.jar:/home/fa/jtreg/lib/jtreg.jar -XX:+UseG1GC >>> -Xms128M -Xmx128M -Xmn2M -XX:G1HeapRegionSize=1M >>> -XX:InitiatingHeapOccupancyPercent=0 -Xlog:gc >>> -XX:+UnlockDiagnosticVMOptions -XX:+VerifyAfterGC *-XX:ConcGCThreads=1* >>> -XX:+IgnoreUnrecognizedVMOptions -XX:+G1VerifyBitmaps >>> ?? >>> ReclaimRegionFast ] >>> ... >>> All the VM options are passed as expected. >>> >>> > I have confirmed this through prints in the JVM code base. >>> I'm not sure what do you mean here, but I guess you did something wrong. >>> >>> Please note, during execution of this test two JVM are launched: >>> - the first one started by jtreg (TestEagerReclaimHumongousRegionsClearMarkBits >>> class) >>> - the second started by test (ReclaimRegionFast class) >>> >>> In the first one >>> ?? >>> ConcGCThread should be set to 0. >>> >>> Thanks, >>> Dima >>> >>> On 18.04.2017 3:09, Ram Krishnan wrote: >>> >>> Hi Jenny, >>> >>> I tried what you suggested. Hotspot output indeed shows ?ConcGCThreads >>> as 1. >>> >>> The problem seems to be interaction with jtreg. >>> >>> Thanks, >>> Ramki >>> >>> On Mon, Apr 17, 2017 at 4:49 PM, Jenny Zhang >>> wrote: >>> >>>> Ramki, >>>> >>>> Can you do the following to be sure that hotspot did not take the >>>> parameter? >>>> java -XX: >>>> ?? >>>> ConcGCThreads=1 -XX:+PrintFlagsFinal >>>> >>>> I am using jdk9b154, the output shows it changed the ConcGCThreads to 1 >>>> >>>> Thanks >>>> Jenny >>>> >>>> On 4/17/2017 4:33 PM, Ram Krishnan wrote: >>>> >>>> Many thanks Jonathan for the immediate reply. >>>> >>>> I am copying the hotspot gc team. >>>> >>>> Hotspot gc team -- your help would be much appreciated on the topic >>>> below. 
>>>> Thanks, >>>> Ramki

-- 
Thanks,
Ramki
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From jw_list at headissue.com  Thu Apr 20 02:49:30 2017
From: jw_list at headissue.com (Jens Wilke)
Date: Thu, 20 Apr 2017 09:49:30 +0700
Subject: Benchmark scenario with high G1 performance degradation
Message-ID: <1857671.YkCaYY5MNi@tapsy>

Hello,

I am currently benchmarking different in-process caching libraries. In some benchmark scenarios I found very "odd" results when using the G1 garbage collector. In a particular benchmark scenario the performance (as measured in ops/s of the particular benchmark) drops to about 30% when compared to CMS; typically I expect (and observe) only a degradation to around 80% of the performance with CMS.

This blog post has a little bit more background on what I am doing:
https://cruftex.net/2017/03/28/The-6-Memory-Metrics-You-Should-Track-in-Your-Java-Benchmarks.html

For the particular scenario I have made the following observations:

- Java 8 VM 25.131-b11: VmRSS, as reported by Linux, grows to 4.2 GB
- Java 8 VM 25.131-b11: Average benchmark throughput is 2098524 ops/s
- Java 8 VM 25.131-b11: Throughput is quite steady
- Java 9 VM 9-EA+165: VmRSS, as reported by Linux, grows to 6.3 GB
- Java 9 VM 9-EA+165: Average benchmark throughput is 566699 ops/s
- Java 9 VM 9-EA+165: Throughput has big variations and a tendency to decrease
- Java 9 VM 9-EA+165: Profiling shows that 44.19% of CPU cycles are spent in OtherRegionsTable::add_reference (for Java 8 G1 it is similar)

And less quantified:

- With Java 8 G1 it seems even worse
- Scenarios with smaller heap/cache sizes don't show the high performance drop when comparing CMS and G1
- Java 9 VM 9-EA+165 with the options -XX:+UseParallelGC -XX:+UseParallelOldGC seems to have 50% of the Java 8 performance and higher memory consumption (What are the correct parameters to restore the old default behavior?)
- The overall GC activity and time spent on GC is quite low
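(On the parenthetical question above: a quick way to see which collector a given JDK enables by default is to dump the ergonomic flag values; the egrep pattern is just for readability, and the flag question itself is answered later in the thread:)

java -XX:+PrintFlagsFinal -version | egrep 'Use(G1|Parallel|ParallelOld|ConcMarkSweep)GC'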
To reproduce the measurements:

Hardware: Intel(R) Core(TM) i7-5600U CPU @ 2.60GHz, 1x4GB, 1x16GB @ DDR3 1600MHz, 2 cores with hyperthreading enabled
OS/Kernel: Linux version 4.4.0-72-generic (buildd at lcy01-17) (gcc version 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.4) ) #93-Ubuntu SMP Fri Mar 31 14:07:41 UTC 2017

git clone https://github.com/headissue/cache2k-benchmark.git
cd cache2k-benchmark
git checkout d68d7608f18ed6c5a10671f6dd3c48f76afdf0a8
mvn -DskipTests package
java -jar jmh-suite/target/benchmarks.jar \\.RandomSequenceBenchmark -jvmArgs -server\ -Xmx20G\ -XX:BiasedLockingStartupDelay=0\ -verbose:gc\ -XX:+PrintGCDetails -f 1 -wi 0 -i 10 -r 20s -t 4 -prof org.cache2k.benchmark.jmh.LinuxVmProfiler -p cacheFactory=org.cache2k.benchmark.Cache2kFactory -rf json -rff result.json

When not on Linux, strip "-prof org.cache2k.benchmark.jmh.LinuxVmProfiler".

I have the feeling this could be worth a closer look. If there are any questions or things I can help with, let me know.

I would be interested to know whether there is something that I can change in the code to avoid triggering this behavior.

Best,

Jens

-- 
"Everything superfluous is wrong!"

// Jens Wilke - headissue GmbH - Germany \// https://headissue.com
From stefan.johansson at oracle.com  Thu Apr 20 12:59:30 2017
From: stefan.johansson at oracle.com (Stefan Johansson)
Date: Thu, 20 Apr 2017 14:59:30 +0200
Subject: Benchmark scenario with high G1 performance degradation
In-Reply-To: <1857671.YkCaYY5MNi@tapsy>
References: <1857671.YkCaYY5MNi@tapsy>
Message-ID: 

Hi Jens,

Thanks for reaching out and for providing such a good step-by-step guide on how to run the benchmark the same way you are.

I've tried with CMS, G1 and Parallel, both with 10g and 20g heap, but so far I can't reproduce your problems. It would be great if you could provide us with some more information, for example GC logs and the result files. We might be able to dig something out of them.

Some comments below:

On 2017-04-20 04:49, Jens Wilke wrote:
> Hello,
>
> I am currently benchmarking different in-process caching libraries. In some benchmark scenarios I found very "odd" results when using the G1 garbage collector. In a particular benchmark scenario the performance (as measured in ops/s of the particular benchmark) drops to about 30% when compared to CMS; typically I expect (and observe) only a degradation to around 80% of the performance with CMS.

From my runs it looks like G1 is about 5-10% behind CMS and 10-15% behind Parallel for both JDK 8 and 9.

> This blog post has a little bit more background on what I am doing:
> https://cruftex.net/2017/03/28/The-6-Memory-Metrics-You-Should-Track-in-Your-Java-Benchmarks.html
>
> For the particular scenario I have made the following observations:
>
> - Java 8 VM 25.131-b11: VmRSS, as reported by Linux, grows to 4.2 GB
> - Java 8 VM 25.131-b11: Average benchmark throughput is 2098524 ops/s
> - Java 8 VM 25.131-b11: Throughput is quite steady
> - Java 9 VM 9-EA+165: VmRSS, as reported by Linux, grows to 6.3 GB

For me it is the other way around: running G1 with -Xmx10g I get a VmRSS of 6 GB for JDK 8 and 4.6 GB for JDK 9. Throughput seems to be more or less on par between 8 and 9, but the results vary a bit, so it is hard to say for sure without doing a deeper analysis. 
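(A note for anyone replaying the JDK 8 vs JDK 9 comparison: -XX:+PrintGCDetails is deprecated under JDK 9's unified logging, so comparable GC logs come from -Xlog instead; the "..." below stands for the unchanged rest of the benchmark invocation:)

JDK 8:  java -verbose:gc -XX:+PrintGCDetails ...
JDK 9:  java -Xlog:gc* ...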
> - Java 9 WM 9-EA+165: Average benchmark performance throughput is 566699 ops/s > - Java 9 WM 9-EA+165: Throughput has big variations and the tendency to > decrease > - Java 9 WM 9-EA+165: Profiling shows that 44.19% of CPU cycles is spent in > OtherRegionsTable::add_reference (for Java 8 G1 it is similar) > > And less quantified: > > - With Java 8 G1 it seems more worse > - Scenarios with smaller heap/cache sizes don't show the high performance drop > when comparing CMS and G1 > - Java 9 WM 9-EA+165 with the options -XX:+UseParallelGC -XX: > +UseParallelOldGC, seems to have 50% performance of Java 8 and higher memory > consumption > (What are the correct parameters to restore the old default behavior?) You only have to set -XX:+UseParallelGC, that will enable the correct "old collector" as well (but setting UseParallelOldGC should be fine as it is the correct one). I see a bit higher memory consumption too, but the score is more or less the same between 8 and 9 for Parallel. > - The overall GC activity and time spend for GC is quite low > > To reproduce the measurements: > > Hardware: > Intel(R) Core(TM) i7-5600U CPU @ 2.60GHz, 1x4GB, 1x16GB @ DDR3 1600MHz, 2 > cores with hyperthreading enabled > OS/Kernel: > Linux version 4.4.0-72-generic (buildd at lcy01-17) (gcc version 5.4.0 20160609 > (Ubuntu 5.4.0-6ubuntu1~16.04.4) ) #93-Ubuntu SMP Fri Mar 31 14:07:41 UTC 2017 > > git clone https://github.com/headissue/cache2k-benchmark.git > cd cache2k-benchmark > git checkout d68d7608f18ed6c5a10671f6dd3c48f76afdf0a8 > mvn -DskipTests package > java -jar jmh-suite/target/benchmarks.jar \\.RandomSequenceBenchmark -jvmArgs > -server\ -Xmx20G\ -XX:BiasedLockingStartupDelay=0\ -verbose:gc\ -XX: > +PrintGCDetails -f 1 -wi 0 -i 10 -r 20s -t 4 -prof > org.cache2k.benchmark.jmh.LinuxVmProfiler -p > cacheFactory=org.cache2k.benchmark.Cache2kFactory -rf json -rff result.json I took a quick look at the blog as well, and there the system had 32GB ram and the runs were done with -Xmx10g. The system you describe here only have 20GB ram and you are using -Xmx20G, is that correct or has there been a typo? Otherwise you might be running into swapping issues, and that could explain your problems. Thanks, Stefan > When not on Linux, strip "-prof org.cache2k.benchmark.jmh.LinuxVmProfiler". > > I have the feeling this could be worth a closer look. > If there are any questions or things I can help with, let me know. > > I would be interested to know whether there is something that I can change in > the code to avoid triggering this behavior. > > Best, > > Jens > From jw_list at headissue.com Fri Apr 21 11:03:45 2017 From: jw_list at headissue.com (Jens Wilke) Date: Fri, 21 Apr 2017 18:03:45 +0700 Subject: Benchmark scenario with high G1 performance degradation In-Reply-To: References: <1857671.YkCaYY5MNi@tapsy> Message-ID: <4070940.YM86NbNA8Q@tapsy> Hi Stefan, On Donnerstag, 20. April 2017 19:59:30 ICT Stefan Johansson wrote: > Thanks for reaching out and for providing such a good step-by-step guide > on how to run the benchmark the same way you are. Thanks for the quick reply! > I've tried with CMS, G1 and Parallel, both with 10g and 20g heap, but so > far I can't reproduce your problems. It would be great if you could > provide us with some more information. For example GC-logs and the > result files. We might be able to dig something out of them. 
The logs from the measurement on my notebook for the first mail (see below) are available at (only 30 days valid):

http://ovh.to/FzKbgrb

What environment are you testing on?

Please mind the core count. My stomach tells me that it could have something to do with the hash table arrays. When you are testing with a system that reports more than 8 cores, the allocated arrays will be smaller than in my case, since the cache is doing segmentation.

> From my runs it looks like G1 is about 5-10% behind CMS and 10-15%
> behind Parallel for both JDK 8 and 9.

That seems okay.

Actually, I'd like to publish my next benchmark results; however, I am somewhat stuck with this issue now. Benchmarking with CMS only doesn't really make sense at the current point in time. Also, I don't like to be in doubt that there is something wrong in the setup.

> I took a quick look at the blog as well, and there the system had 32GB
> ram and the runs were done with -Xmx10g. The system you describe here
> only has 20GB ram and you are using -Xmx20G, is that correct or has
> there been a typo?

My bad, sorry for the confusion. There was enough free memory and RSS was only at 6GB, so the system was not swapping. I did play with the parameters to see whether they make a difference, but forgot to put them in a reasonable range when sending the report.

The effects on the isolated benchmark system with 32GB and -Xmx10g or -Xmx20G are the same (see the blog article for parameters).

The hopping point seems to be the function OtherRegionsTable::add_reference. When I run with -prof perfasm and Java 8u121, with and without G1, on the benchmark system I get this:

.../jdk1.8.0_121/bin/java -jar jmh-suite/target/benchmarks.jar \\.RandomSequenceBenchmark -jvmArgs -server\ -Xmx20G\ -XX:BiasedLockingStartupDelay=0\ -verbose:gc\ -XX:+PrintGCDetails -f 1 -wi 1 -w 20s -i 1 -r 20s -t 4 -prof org.cache2k.benchmark.jmh.LinuxVmProfiler -prof org.cache2k.benchmark.jmh.MiscResultRecorderProfiler -p cacheFactory=org.cache2k.benchmark.Cache2kFactory -rf json -rff result.json -prof perfasm

....[Hottest Methods (after inlining)]..............................................................
22.48% 6.61% C2, level 4 org.cache2k.core.AbstractEviction::removeAllFromReplacementListOnEvict, version 897 21.06% 8.18% C2, level 4 org.cache2k.core.HeapCache::insertNewEntry, version 913 9.38% 7.13% libjvm.so SpinPause 9.13% 9.54% C2, level 4 org.cache2k.benchmark.jmh.suite.eviction.symmetrical.generated.RandomSequenceBenchmark_operation_jmhTest::operation_thrpt_jmhStub, version 873 5.00% 3.88% libjvm.so _ZN13InstanceKlass17oop_push_contentsEP18PSPromotionManagerP7oopDesc 4.54% 4.70% perf-5104.map [unknown] 3.57% 3.86% C2, level 4 org.cache2k.core.AbstractEviction::removeFromHashWithoutListener, version 838 2.86% 15.40% libjvm.so _ZN13ObjectMonitor11NotRunnableEP6ThreadS1_ 2.48% 12.53% libjvm.so _ZN13ObjectMonitor20TrySpin_VaryDurationEP6Thread 2.46% 1.72% C2, level 4 org.cache2k.core.AbstractEviction::refillChunk, version 906 2.31% 3.33% libjvm.so _ZN18PSPromotionManager22copy_to_survivor_spaceILb0EEEP7oopDescS2_ 2.24% 6.37% C2, level 4 java.util.concurrent.locks.StampedLock::acquireRead, version 864 2.03% 2.89% libjvm.so _ZN18PSPromotionManager18drain_stacks_depthEb 1.44% 1.43% libjvm.so _ZN13ObjArrayKlass17oop_push_contentsEP18PSPromotionManagerP7oopDesc 1.29% 0.72% kernel [unknown] 1.03% 1.27% libjvm.so _ZN18CardTableExtension26scavenge_contents_parallelEP16ObjectStartArrayP12MutableSpaceP8HeapWordP18PSPromotionManagerjj 0.79% 1.53% C2, level 4 java.util.concurrent.locks.StampedLock::acquireWrite, version 865 0.74% 4.21% runtime stub StubRoutines::SafeFetch32 0.71% 0.50% C2, level 4 org.cache2k.core.ClockProPlusEviction::sumUpListHits, version 772 0.70% 0.39% libc-2.19.so __clock_gettime 3.76% 3.73% <...other 147 warm methods...> .................................................................................................... 100.00% 99.93% .../jdk1.8.0_121/bin/java -jar jmh-suite/target/benchmarks.jar \\.RandomSequenceBenchmark -jvmArgs -server\ -Xmx20G\ -XX:BiasedLockingStartupDelay=0\ -verbose:gc\ -XX:+PrintGCDetails\ -XX:+UseG1GC -f 1 -wi 1 -w 20s -i 1 -r 20s -t 4 -prof org.cache2k.benchmark.jmh.LinuxVmProfiler -prof org.cache2k.benchmark.jmh.MiscResultRecorderProfiler -p cacheFactory=org.cache2k.benchmark.Cache2kFactory -rf json -rff result.json -prof perfasm ....[Hottest Methods (after inlining)].............................................................. 
49.11% 41.16% libjvm.so _ZN17OtherRegionsTable13add_referenceEPvi 10.25% 3.37% C2, level 4 org.cache2k.core.ClockProPlusEviction::removeFromReplacementListOnEvict, version 883 4.93% 1.43% C2, level 4 org.cache2k.core.SegmentedEviction::submitWithoutEviction, version 694 4.31% 5.89% libjvm.so _ZN29G1UpdateRSOrPushRefOopClosure6do_oopEPj 3.18% 4.17% libjvm.so _ZN13ObjArrayKlass20oop_oop_iterate_nv_mEP7oopDescP24FilterOutOfRegionClosure9MemRegion 3.17% 3.00% libjvm.so _ZN29G1BlockOffsetArrayContigSpace18block_start_unsafeEPKv 2.95% 3.16% perf-5226.map [unknown] 2.19% 1.00% C2, level 4 org.cache2k.benchmark.Cache2kFactory$1::getIfPresent, version 892 1.58% 1.50% libjvm.so _ZN8G1RemSet11refine_cardEPajb 1.42% 5.02% libjvm.so _ZNK10HeapRegion12block_is_objEPK8HeapWord 1.41% 3.31% libjvm.so _ZN10HeapRegion32oops_on_card_seq_iterate_carefulE9MemRegionP24FilterOutOfRegionClosurebPa 1.13% 3.05% libjvm.so _ZN13InstanceKlass18oop_oop_iterate_nvEP7oopDescP24FilterOutOfRegionClosure 0.98% 0.51% libjvm.so _ZN14G1HotCardCache6insertEPa 0.89% 4.27% libjvm.so _ZN13ObjectMonitor11NotRunnableEP6ThreadS1_ 0.85% 1.17% C2, level 4 org.cache2k.core.HeapCache::insertNewEntry, version 899 0.74% 3.59% libjvm.so _ZN13ObjectMonitor20TrySpin_VaryDurationEP6Thread 0.74% 0.57% libjvm.so _ZN20G1ParScanThreadState10trim_queueEv 0.70% 0.70% C2, level 4 org.cache2k.core.Hash2::remove, version 864 0.69% 0.81% C2, level 4 org.cache2k.core.ClockProPlusEviction::findEvictionCandidate, version 906 0.65% 1.59% C2, level 4 org.cache2k.benchmark.jmh.suite.eviction.symmetrical.generated.RandomSequenceBenchmark_operation_jmhTest::operation_thrpt_jmhStub, version 857 8.14% 10.65% <...other 331 warm methods...> .................................................................................................... 100.00% 99.91% Best, Jens -- "Everything superfluous is wrong!" // Jens Wilke - headissue GmbH - Germany \// https://headissue.com From jw_list at headissue.com Mon Apr 24 07:29:06 2017 From: jw_list at headissue.com (Jens Wilke) Date: Mon, 24 Apr 2017 14:29:06 +0700 Subject: Benchmark scenario with high G1 performance degradation In-Reply-To: <4070940.YM86NbNA8Q@tapsy> References: <1857671.YkCaYY5MNi@tapsy> <4070940.YM86NbNA8Q@tapsy> Message-ID: <2768289.V3k6RVfHne@tapsy> On Freitag, 21. April 2017 18:03:45 ICT Jens Wilke wrote: > Please mind the core count. My stomach tells me that it could have something > to do with the hash table arrays. When you are testing with a system that > reports more than 8 cores the allocated arrays will be smaller than in my > case, since the cache is doing segmentation. Update: I did some tests with different segmentation levels (up to 512) to reduce the internal hashtable array size in cache2k. The high CPU consumption via OtherRegionsTable::add_reference is still present. So this does not seem the problem cause. Best, Jens -- "Everything superfluous is wrong!" // Jens Wilke - headissue GmbH - Germany \// https://headissue.com From stefan.johansson at oracle.com Mon Apr 24 14:06:15 2017 From: stefan.johansson at oracle.com (Stefan Johansson) Date: Mon, 24 Apr 2017 16:06:15 +0200 Subject: Benchmark scenario with high G1 performance degradation In-Reply-To: <2768289.V3k6RVfHne@tapsy> References: <1857671.YkCaYY5MNi@tapsy> <4070940.YM86NbNA8Q@tapsy> <2768289.V3k6RVfHne@tapsy> Message-ID: <29de7e50-4665-8cd6-061f-5fb07cded5e0@oracle.com> On 2017-04-24 09:29, Jens Wilke wrote: > On Freitag, 21. April 2017 18:03:45 ICT Jens Wilke wrote: >> Please mind the core count. 
My stomach tells me that it could have something >> to do with the hash table arrays. When you are testing with a system that >> reports more than 8 cores the allocated arrays will be smaller than in my >> case, since the cache is doing segmentation. > Update: I did some tests with different segmentation levels (up to 512) to > reduce the internal hashtable array size in cache2k. The high CPU consumption > via OtherRegionsTable::add_reference is still present. So this does not seem > the problem cause. One thing you could try is to run with a larger G1HeapRegionSize than default. In the G1 log you provided I see that the heap region size is 4M, so you could try to run with 8M or 16M (-XX:G1HeapRegionSize=8m) to see if that improves the situation. The calls to add_reference are caused by region to region pointers that G1 needs to keep track of. Stefan > Best, > > Jens > From stefan.johansson at oracle.com Mon Apr 24 14:16:20 2017 From: stefan.johansson at oracle.com (Stefan Johansson) Date: Mon, 24 Apr 2017 16:16:20 +0200 Subject: Benchmark scenario with high G1 performance degradation In-Reply-To: <4070940.YM86NbNA8Q@tapsy> References: <1857671.YkCaYY5MNi@tapsy> <4070940.YM86NbNA8Q@tapsy> Message-ID: Hi Jens, On 2017-04-21 13:03, Jens Wilke wrote: > Hi Stefan, > > On Donnerstag, 20. April 2017 19:59:30 ICT Stefan Johansson wrote: >> Thanks for reaching out and for providing such a good step-by-step guide >> on how to run the benchmark the same way you are. > Thanks for the quick reply! > >> I've tried with CMS, G1 and Parallel, both with 10g and 20g heap, but so >> far I can't reproduce your problems. It would be great if you could >> provide us with some more information. For example GC-logs and the >> result files. We might be able to dig something out of them. > The logs from the measurement on my notebook for the first mail (see below) are available at (only 30 days valid): > > http://ovh.to/FzKbgrb > > What environment you are testing on? I only did some quick testing on my desktop with has 12 cores and hyper-threading, so the default is to use 18 parallel GC threads on my system. > > Please mind the core count. My stomach tells me that it could have something to do with the hash table arrays. When you are testing with a system that reports more than 8 cores the allocated arrays will be smaller than in my case, since the cache is doing segmentation. > >> From my runs it looks like G1 is about 5-10% behind CMS and 10-15% >> behind Parallel for both JDK 8 and 9. > That seems okay. > > Actually, I'd like to publish my next benchmark results, however, I am somehow stuck with this issue now. Benchmarking with CMS only doesn't really make sense at the current point in time. Also I don't like to be in doubt that there is something wrong in the setup. > >> I took a quick look at the blog as well, and there the system had 32GB >> ram and the runs were done with -Xmx10g. The system you describe here >> only have 20GB ram and you are using -Xmx20G, is that correct or has >> there been a typo? > My bad, sorry for the confusion. There was enough free memory and RSS was only at 6GB so the system was not swapping. I did play with the parameters to see whether it makes a difference, but forgot to put it in a reasonable range when sending the report. > > The effects on the isolated benchmark system with 32GB and-Xmx10g or -Xmx20G are the same (see blog article for parameters). > > The hopping point seems to be the function OtherRegionsTable::add_reference. 
> When I run with -prof perfasm and Java 8U121 with and without G1 on the benchmark system I get this: > > .../jdk1.8.0_121/bin/java -jar jmh-suite/target/benchmarks.jar \\.RandomSequenceBenchmark -jvmArgs -server\ -Xmx20G\ -XX:BiasedLockingStartupDelay=0\ -verbose:gc\ -XX:+PrintGCDetails -f 1 -wi 1 -w 20s -i 1 -r 20s -t 4 -prof org.cache2k.benchmark.jmh.LinuxVmProfiler -prof org.cache2k.benchmark.jmh.MiscResultRecorderProfiler -p cacheFactory=org.cache2k.benchmark.Cache2kFactory -rf json -rff result.json -prof perfasm > > ....[Hottest Methods (after inlining)].............................................................. > 22.48% 6.61% C2, level 4 org.cache2k.core.AbstractEviction::removeAllFromReplacementListOnEvict, version 897 > 21.06% 8.18% C2, level 4 org.cache2k.core.HeapCache::insertNewEntry, version 913 > 9.38% 7.13% libjvm.so SpinPause > 9.13% 9.54% C2, level 4 org.cache2k.benchmark.jmh.suite.eviction.symmetrical.generated.RandomSequenceBenchmark_operation_jmhTest::operation_thrpt_jmhStub, version 873 > 5.00% 3.88% libjvm.so _ZN13InstanceKlass17oop_push_contentsEP18PSPromotionManagerP7oopDesc > 4.54% 4.70% perf-5104.map [unknown] > 3.57% 3.86% C2, level 4 org.cache2k.core.AbstractEviction::removeFromHashWithoutListener, version 838 > 2.86% 15.40% libjvm.so _ZN13ObjectMonitor11NotRunnableEP6ThreadS1_ > 2.48% 12.53% libjvm.so _ZN13ObjectMonitor20TrySpin_VaryDurationEP6Thread > 2.46% 1.72% C2, level 4 org.cache2k.core.AbstractEviction::refillChunk, version 906 > 2.31% 3.33% libjvm.so _ZN18PSPromotionManager22copy_to_survivor_spaceILb0EEEP7oopDescS2_ > 2.24% 6.37% C2, level 4 java.util.concurrent.locks.StampedLock::acquireRead, version 864 > 2.03% 2.89% libjvm.so _ZN18PSPromotionManager18drain_stacks_depthEb > 1.44% 1.43% libjvm.so _ZN13ObjArrayKlass17oop_push_contentsEP18PSPromotionManagerP7oopDesc > 1.29% 0.72% kernel [unknown] > 1.03% 1.27% libjvm.so _ZN18CardTableExtension26scavenge_contents_parallelEP16ObjectStartArrayP12MutableSpaceP8HeapWordP18PSPromotionManagerjj > 0.79% 1.53% C2, level 4 java.util.concurrent.locks.StampedLock::acquireWrite, version 865 > 0.74% 4.21% runtime stub StubRoutines::SafeFetch32 > 0.71% 0.50% C2, level 4 org.cache2k.core.ClockProPlusEviction::sumUpListHits, version 772 > 0.70% 0.39% libc-2.19.so __clock_gettime > 3.76% 3.73% <...other 147 warm methods...> > .................................................................................................... > 100.00% 99.93% > > .../jdk1.8.0_121/bin/java -jar jmh-suite/target/benchmarks.jar \\.RandomSequenceBenchmark -jvmArgs -server\ -Xmx20G\ -XX:BiasedLockingStartupDelay=0\ -verbose:gc\ -XX:+PrintGCDetails\ -XX:+UseG1GC -f 1 -wi 1 -w 20s -i 1 -r 20s -t 4 -prof org.cache2k.benchmark.jmh.LinuxVmProfiler -prof org.cache2k.benchmark.jmh.MiscResultRecorderProfiler -p cacheFactory=org.cache2k.benchmark.Cache2kFactory -rf json -rff result.json -prof perfasm > > ....[Hottest Methods (after inlining)].............................................................. 
> 49.11% 41.16% libjvm.so _ZN17OtherRegionsTable13add_referenceEPvi
> 10.25% 3.37% C2, level 4 org.cache2k.core.ClockProPlusEviction::removeFromReplacementListOnEvict, version 883
> 4.93% 1.43% C2, level 4 org.cache2k.core.SegmentedEviction::submitWithoutEviction, version 694
> 4.31% 5.89% libjvm.so _ZN29G1UpdateRSOrPushRefOopClosure6do_oopEPj
> 3.18% 4.17% libjvm.so _ZN13ObjArrayKlass20oop_oop_iterate_nv_mEP7oopDescP24FilterOutOfRegionClosure9MemRegion
> 3.17% 3.00% libjvm.so _ZN29G1BlockOffsetArrayContigSpace18block_start_unsafeEPKv
> 2.95% 3.16% perf-5226.map [unknown]
> 2.19% 1.00% C2, level 4 org.cache2k.benchmark.Cache2kFactory$1::getIfPresent, version 892
> 1.58% 1.50% libjvm.so _ZN8G1RemSet11refine_cardEPajb
> 1.42% 5.02% libjvm.so _ZNK10HeapRegion12block_is_objEPK8HeapWord
> 1.41% 3.31% libjvm.so _ZN10HeapRegion32oops_on_card_seq_iterate_carefulE9MemRegionP24FilterOutOfRegionClosurebPa
> 1.13% 3.05% libjvm.so _ZN13InstanceKlass18oop_oop_iterate_nvEP7oopDescP24FilterOutOfRegionClosure
> 0.98% 0.51% libjvm.so _ZN14G1HotCardCache6insertEPa
> 0.89% 4.27% libjvm.so _ZN13ObjectMonitor11NotRunnableEP6ThreadS1_
> 0.85% 1.17% C2, level 4 org.cache2k.core.HeapCache::insertNewEntry, version 899
> 0.74% 3.59% libjvm.so _ZN13ObjectMonitor20TrySpin_VaryDurationEP6Thread
> 0.74% 0.57% libjvm.so _ZN20G1ParScanThreadState10trim_queueEv
> 0.70% 0.70% C2, level 4 org.cache2k.core.Hash2::remove, version 864
> 0.69% 0.81% C2, level 4 org.cache2k.core.ClockProPlusEviction::findEvictionCandidate, version 906
> 0.65% 1.59% C2, level 4 org.cache2k.benchmark.jmh.suite.eviction.symmetrical.generated.RandomSequenceBenchmark_operation_jmhTest::operation_thrpt_jmhStub, version 857
> 8.14% 10.65% <...other 331 warm methods...>
> ....................................................................................................
> 100.00% 99.91%

As I mentioned in my reply to your other mail, these calls are caused by
region-to-region pointers in G1. Adding those references can be done
either during a safepoint or concurrently. Looking at your profile, it
seems that most calls come from the concurrent path, and since your
system has few cores, having the concurrent refinement threads do a lot
of work has a bigger impact on overall performance.
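To make the mechanism a bit more concrete, here is a much-simplified Java
sketch of the bookkeeping that add_reference performs. This is only an
illustration of the idea, not the actual HotSpot code: the real implementation
is C++ inside the VM, records 512-byte cards in per-region sparse, fine and
coarse tables, and is fed by dirty-card queues that the refinement threads
drain concurrently. None of the names below are real HotSpot names.

import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Toy model of G1's remembered sets: for each region, remember which
// foreign cards may contain pointers into it.
public class RememberedSetSketch {

    static final int REGION_SIZE = 4 * 1024 * 1024; // 4M, as in the posted log
    static final int CARD_SIZE = 512;                // G1's card size in bytes

    // Per-region set of card indices; real G1 keeps this in OtherRegionsTable.
    static final Map<Integer, Set<Long>> rememberedSets = new HashMap<>();

    static int regionOf(long address) {
        return (int) (address / REGION_SIZE);
    }

    static long cardOf(long address) {
        return address / CARD_SIZE;
    }

    // Conceptually invoked whenever the application stores a reference:
    // the field at 'fromAddr' now points to the object at 'toAddr'.
    static void addReference(long fromAddr, long toAddr) {
        int from = regionOf(fromAddr);
        int to = regionOf(toAddr);
        if (from == to) {
            return; // intra-region pointers need no remembered-set entry
        }
        // Cross-region pointer: record the originating card in the
        // target region's remembered set. This is the hot path in
        // your profile, driven by the cache constantly relinking
        // entries that live in different regions.
        rememberedSets.computeIfAbsent(to, r -> new HashSet<>())
                      .add(cardOf(fromAddr));
    }

    public static void main(String[] args) {
        addReference(0L, 5L * REGION_SIZE);                 // region 0 -> region 5
        addReference(REGION_SIZE + 1024L, 5L * REGION_SIZE); // region 1 -> region 5
        System.out.println("cards remembered for region 5: "
                + rememberedSets.get(5).size()); // prints 2
    }
}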
Stefan

> Best,
>
> Jens
>

From jw_list at headissue.com Tue Apr 25 03:25:12 2017
From: jw_list at headissue.com (Jens Wilke)
Date: Tue, 25 Apr 2017 10:25:12 +0700
Subject: Benchmark scenario with high G1 performance degradation
In-Reply-To:
References: <1857671.YkCaYY5MNi@tapsy> <4070940.YM86NbNA8Q@tapsy>
Message-ID: <5790613.ZTTdDqVsLp@tapsy>

Hi Stefan,

On Monday, 24 April 2017 21:16:20 ICT Stefan Johansson wrote:
> >> I've tried with CMS, G1 and Parallel, both with 10g and 20g heaps, but so
> >> far I can't reproduce your problems. It would be great if you could
> >> provide us with some more information, for example GC logs and the
> >> result files. We might be able to dig something out of them.
> > The logs from the measurement on my notebook for the first mail (see
> > below) are available at (link valid for 30 days only):
> >
> > http://ovh.to/FzKbgrb
> >
> > What environment are you testing on?
> I only did some quick testing on my desktop, which has 12 cores and
> hyper-threading, so the default is to use 18 parallel GC threads on my
> system.

The benchmarks I am conducting use four workload threads on four CPU cores.
The example I sent runs with four workload threads, so in your environment
you have enough spare cores for GC work and you don't see the performance
difference relative to the CMS collector.

The benchmark is designed to have a constrained core count and to keep those
cores maximally busy.

> As I mentioned in my reply to your other mail, these calls are caused by
> region-to-region pointers in G1. Adding those references can be done
> either during a safepoint or concurrently. Looking at your profile, it
> seems that most calls come from the concurrent path, and since your
> system has few cores, having the concurrent refinement threads do a lot
> of work has a bigger impact on overall performance.

Yes.

I have the feeling that there is some kind of "tipping point" in the whole
system that causes the high "refinement" activity; it would be interesting
to understand.

For the moment I will postpone digging into this more deeply. It's "just" a
benchmark scenario that triggers this effect. I believe that interactive
applications that would make use of G1 and its low pause times don't have
such large cache sizes.

Getting reliable benchmark results with JMH for scenarios with large heaps
needs some more work, too. AFAIK I am the only one doing "not so micro"
benchmarks with JMH.

Thanks for looking into this!

Best,

Jens

--
"Everything superfluous is wrong!"

// Jens Wilke - headissue GmbH - Germany
\// https://headissue.com

From stefan.johansson at oracle.com Thu Apr 27 08:29:01 2017
From: stefan.johansson at oracle.com (Stefan Johansson)
Date: Thu, 27 Apr 2017 10:29:01 +0200
Subject: Benchmark scenario with high G1 performance degradation
In-Reply-To: <5790613.ZTTdDqVsLp@tapsy>
References: <1857671.YkCaYY5MNi@tapsy> <4070940.YM86NbNA8Q@tapsy> <5790613.ZTTdDqVsLp@tapsy>
Message-ID: <2496b06e-e97c-2d46-8511-9c79cfe2fa3c@oracle.com>

Hi Jens,

On 2017-04-25 05:25, Jens Wilke wrote:
> Hi Stefan,
>
> On Monday, 24 April 2017 21:16:20 ICT Stefan Johansson wrote:
>>>> I've tried with CMS, G1 and Parallel, both with 10g and 20g heaps, but so
>>>> far I can't reproduce your problems. It would be great if you could
>>>> provide us with some more information, for example GC logs and the
>>>> result files. We might be able to dig something out of them.
>>> The logs from the measurement on my notebook for the first mail (see
>>> below) are available at (link valid for 30 days only):
>>>
>>> http://ovh.to/FzKbgrb
>>>
>>> What environment are you testing on?
>> I only did some quick testing on my desktop, which has 12 cores and
>> hyper-threading, so the default is to use 18 parallel GC threads on my
>> system.
> The benchmarks I am conducting use four workload threads on four CPU
> cores. The example I sent runs with four workload threads, so in your
> environment you have enough spare cores for GC work and you don't see
> the performance difference relative to the CMS collector.
>
> The benchmark is designed to have a constrained core count and to keep
> those cores maximally busy.

I see; under those circumstances G1 will have a harder time keeping up
than the other collectors, due to concurrent refinement. You might be
able to tune your way out of this, or at least improve the situation,
but I'm not sure that is what you're looking for.
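If you do want to experiment, the knobs that most directly affect this path
are the region size and the concurrent refinement setup. A sketch of such a
run, following your invocation (the values below are only starting points to
try, not recommendations):

  .../jdk1.8.0_121/bin/java -jar jmh-suite/target/benchmarks.jar \\.RandomSequenceBenchmark \
      -jvmArgs -server\ -Xmx20G\ -XX:+UseG1GC\ -XX:G1HeapRegionSize=16m\ -XX:G1ConcRefinementThreads=2\ -XX:G1RSetUpdatingPauseTimePercent=20 ...

Larger regions mean fewer region-to-region pointers to record, capping the
refinement threads leaves more CPU for your four workload threads, and a
higher G1RSetUpdatingPauseTimePercent shifts more of the remembered-set
update work from the concurrent threads into the pauses.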
>> As I mentioned in my reply to your other mail, these calls are caused by
>> region-to-region pointers in G1. Adding those references can be done
>> either during a safepoint or concurrently. Looking at your profile, it
>> seems that most calls come from the concurrent path, and since your
>> system has few cores, having the concurrent refinement threads do a lot
>> of work has a bigger impact on overall performance.
> Yes.
>
> I have the feeling that there is some kind of "tipping point" in the whole
> system that causes the high "refinement" activity; it would be interesting
> to understand.
>
> For the moment I will postpone digging into this more deeply. It's "just" a
> benchmark scenario that triggers this effect. I believe that interactive
> applications that would make use of G1 and its low pause times don't have
> such large cache sizes.

I agree that this is not a benchmark or scenario where we expect G1 to be
the best choice. My impression is that this is a very throughput-oriented
benchmark, and especially when run in a constrained environment that will
be tough on G1. Still, as you said, it would be interesting to understand
at which point things start to go bad, and to work on improving that.

> Getting reliable benchmark results with JMH for scenarios with large heaps
> needs some more work, too. AFAIK I am the only one doing "not so micro"
> benchmarks with JMH.
>
> Thanks for looking into this!

Thanks again for sharing your findings, and if you have more interesting
benchmarks/results to share, please do so.

Stefan

>
> Best,
>
> Jens
>