Major GC happening before suggested CMSInitiatingOccupancyFraction
Which version of the JDK are you using? It does seem curious that the CMS collections are starting so early.

CMS uses several criteria for deciding when to start a concurrent collection; the occupancy of the CMS generation (the tenured generation) is only one of them. You can add the flag -XX:+UseCMSInitiatingOccupancyOnly, which tells CMS to use only the occupancy of the CMS generation in deciding when to start.

When you say you are seeing high CPU utilization, I assume you mean during the CMS concurrent collection? In the example below, between

8784.678: [GC [1 CMS-initial-mark: 1003680K(1998848K)] 1003698K(2097088K), 0.0052064 secs]

and

8798.633: [CMS-concurrent-sweep: 9.686/9.834 secs]

(about a 14-second period)?

On 01/12/09 00:59, prashant Kumar wrote:
Hi There,
I am using the options listed below for my server. The issue is that I have set CMSInitiatingOccupancyFraction=65, but major GC is happening at a lower heap occupancy (approximately 50%), though it should happen at around 65%. Because of this, CPU utilization increases to a very high value. I am not able to find the reason for the major GC.
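As a quick arithmetic check of the "approximately 50%" figure: the CMS-initial-mark line in the log below reports 1003680K used of a 1998848K tenured generation. A minimal sketch of that calculation (the CmsOccupancy class is mine, not from the thread):

```java
// Check the tenured-generation occupancy at the moment CMS started,
// using the numbers from the CMS-initial-mark line in the log:
//   [1 CMS-initial-mark: 1003680K(1998848K)]
public class CmsOccupancy {
    static double occupancyPercent(long usedK, long capacityK) {
        return 100.0 * usedK / capacityK;
    }

    public static void main(String[] args) {
        // used = 1003680K, capacity = 1998848K, as reported in the log
        double pct = occupancyPercent(1003680, 1998848);
        System.out.printf("%.1f%%%n", pct); // well below the configured 65%
    }
}
```

This works out to roughly 50%, consistent with the observation that CMS cycles are starting well before the configured 65% threshold.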
P.S. Logs are attached with the mail.
*Options:*
-Xdebug -Xnoagent -debug -Xrunjdwp:transport=dt_socket,address=8000,server=y,suspend=n,stdalloc=y -Xmx2048m -Xms2048m -XX:NewSize=96m -XX:MaxNewSize=96m -XX:+UseTLAB -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=65 -XX:+DisableExplicitGC -verbose:gc -XX:+PrintGCTimeStamps -XX:+PrintGCDetails -XX:+PrintTenuringDistribution -XX:PrintCMSStatistics=1 -XX:PrintFLSStatistics=1 -XX:+PrintHeapAtGC -XX:PermSize=64m -XX:MaxPermSize=64m
*This is the output of the logs:*
Desired survivor size 32768 bytes, new threshold 0 (max 0)
: 98176K->0K(98240K), 0.1769591 secs] 1093934K->1003680K(2097088K)Statistics for BinaryTreeDictionary:
------------------------------------
Total Free Space: 252988590
Max Chunk Size: 252988590
Number of Blocks: 1
Av. Block Size: 252988590
Tree Height: 1
Statistics for BinaryTreeDictionary:
------------------------------------
Total Free Space: 0
Max Chunk Size: 0
Number of Blocks: 0
Tree Height: 0
Heap after GC invocations=447:
Heap
par new generation total 98240K, used 0K [0x75800000, 0x7b800000, 0x7b800000)
eden space 98176K, 0% used [0x75800000, 0x75800000, 0x7b7e0000)
from space 64K, 0% used [0x7b7f0000, 0x7b7f0000, 0x7b800000)
to space 64K, 0% used [0x7b7e0000, 0x7b7e0000, 0x7b7f0000)
concurrent mark-sweep generation total 1998848K, used 1003680K [0x7b800000, 0xf5800000, 0xf5800000)
concurrent-mark-sweep perm gen total 65536K, used 16879K [0xf5800000, 0xf9800000, 0xf9800000)
} , 0.1779375 secs]
After GC:
After GC:
8784.678: [GC [1 CMS-initial-mark: 1003680K(1998848K)] 1003698K(2097088K), 0.0052064 secs]
8784.684: [CMS-concurrent-mark-start]
8787.045: [CMS-concurrent-mark: 2.361/2.361 secs]
(CMS-concurrent-mark yielded 0 times)
8787.046: [CMS-concurrent-preclean-start]
(cardTable: 2726 cards, re-scanned 2726 cards, 1 iterations)
8788.370: [CMS-concurrent-preclean: 1.320/1.324 secs]
(CMS-concurrent-preclean yielded 0 times)
8788.371: [GC8788.371: [Rescan (parallel) , 0.3424891 secs]8788.714: [weak refs processing, 0.0569442 secs] [1 CMS-remark: 1003680K(1998848K)] 1055351K(2097088K), 0.4066270 secs]
8788.799: [CMS-concurrent-sweep-start]
8791.891: [GC {Heap before GC invocations=447:
Heap
par new generation total 98240K, used 98176K [0x75800000, 0x7b800000, 0x7b800000)
eden space 98176K, 100% used [0x75800000, 0x7b7e0000, 0x7b7e0000)
from space 64K, 0% used [0x7b7f0000, 0x7b7f0000, 0x7b800000)
to space 64K, 0% used [0x7b7e0000, 0x7b7e0000, 0x7b7f0000)
concurrent mark-sweep generation total 1998848K, used 742483K [0x7b800000, 0xf5800000, 0xf5800000)
concurrent-mark-sweep perm gen total 65536K, used 16879K [0xf5800000, 0xf9800000, 0xf9800000)
Statistics for BinaryTreeDictionary:
------------------------------------
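For concreteness, the change Jon suggests amounts to adding one flag to the posted option list so that only the tenured-generation occupancy (here 65%) triggers a CMS cycle. A sketch, with the debugging and logging flags kept as posted ("ServerMain" is a placeholder for the actual server entry point):

```shell
# Add -XX:+UseCMSInitiatingOccupancyOnly so CMS starts a concurrent cycle
# based solely on CMSInitiatingOccupancyFraction, ignoring its other
# internal heuristics. All other flags are unchanged from the original post.
java -Xmx2048m -Xms2048m -XX:NewSize=96m -XX:MaxNewSize=96m \
     -XX:+UseTLAB -XX:+UseParNewGC -XX:+UseConcMarkSweepGC \
     -XX:CMSInitiatingOccupancyFraction=65 \
     -XX:+UseCMSInitiatingOccupancyOnly \
     -XX:+DisableExplicitGC \
     -verbose:gc -XX:+PrintGCTimeStamps -XX:+PrintGCDetails \
     -XX:PermSize=64m -XX:MaxPermSize=64m \
     ServerMain
```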
participants (2)
- Jon Masamitsu
- prashant Kumar