GC notification oddities

Ralf Helbing rhelbing at icubic.de
Tue Jul 10 04:54:40 PDT 2012


Am 10.07.2012 01:04, schrieb John Cuthbertson:
> Hi Ralf,
>
> There are some things that occur during the safepoint that the GC times 
> may not take into account (for example TLAB retirement) but may be 
> included in the GC notification durations. I'd have to look at the 
> code to determine the full list. How do the GC times and the durations 
> in the GC notifications compare against the output from 
> -XX:+PrintGCApplicationStoppedTime?


Let's see: before the first ParNew there are 102 stops in the 
sub-millisecond range, totalling 0.0236405 s. Then we have

2012-07-10T10:49:37.719+0200: 23.053: [GC 23.053: [ParNew: 
3356843K->20067K(3774912K), 0.0186376 secs] 3356843K->20067K(3780288K), 
0.0187348 secs] [Times: user=0.13 sys=0.00, real=0.02 secs]
Total time for which application threads were stopped: 0.0190405 seconds

which seems roughly in line.

At the same time, the notifications report:

2012-07-10 10:49:37    Young Gen GC: - 1 ParNew (from Allocation 
Failure) 266,511 us
2012-07-10 10:49:37    Par Eden Space (fully expanded) used: 100.0% -> 
0.0% (1MB) / CMS Old Gen (still expandable) used: 0.0% -> 0.0% (1MB) / 
CMS Perm Gen (fully expanded) used: 12.1% -> 12.1% (31MB) / Code Cache 
(still expandable) used: 98.8% -> 98.8% (11MB) / Par Survivor Space 
(fully expanded) used: 0.3% -> 4.7% (20MB)

This is the first collection, and the times are already off by more than 
a factor of 10.

A while later, just an example:

2012-07-10T11:29:50.479+0200: 2435.710: [GC 2435.710: [ParNew: 
3404672K->41414K(3774912K), 0.0910539 secs] 
7114602K->3751514K(11308916K), 0.0911489 secs] [Times: user=1.18 
sys=0.01, real=0.09 secs]
Total time for which application threads were stopped: 0.0930148 seconds

and:

2012-07-10 11:29:50    Young Gen GC: - 258 ParNew (from Allocation 
Failure) 1,303,481 us
2012-07-10 11:29:50    Par Eden Space (fully expanded) used: 100.0% -> 
0.0% (1MB) / CMS Old Gen (still expandable) used: 49.2% -> 49.2% 
(3624MB) / CMS Perm Gen (fully expanded) used: 23.7% -> 23.7% (61MB) / 
Code Cache (still expandable) used: 98.7% -> 98.7% (40MB) / Par Survivor 
Space (fully expanded) used: 11.7% -> 9.8% (41MB)

Neighbouring entries are 9 s away in both directions, so I am certain 
that these log lines refer to the same GCs.
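As a sanity check, dividing the notified durations by the corresponding stopped times (values quoted from the two log pairs above) gives roughly the same factor in both cases:

```java
public class GcRatio {
    public static void main(String[] args) {
        // First pair: notification duration (us) vs. stopped time (s -> us)
        double first  = 266_511.0 / (0.0190405 * 1_000_000);
        // Second pair, ~40 minutes later
        double second = 1_303_481.0 / (0.0930148 * 1_000_000);
        System.out.printf("first: %.1fx, second: %.1fx%n", first, second);
        // prints "first: 14.0x, second: 14.0x"
    }
}
```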

The 1,303,481 µs figure is what GcInfo.getDuration() returns.
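For reference, a minimal listener along these lines shows where GcInfo.getDuration() comes from (a sketch, not necessarily the exact code behind the figures above; the cast to NotificationEmitter assumes a HotSpot JVM). Note that the javadoc specifies the duration in milliseconds, which may be relevant to the unit discrepancy:

```java
import com.sun.management.GarbageCollectionNotificationInfo;
import com.sun.management.GcInfo;
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import javax.management.NotificationEmitter;
import javax.management.openmbean.CompositeData;

public class GcDurations {
    public static void main(String[] args) throws Exception {
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            // On HotSpot, the GC beans also implement NotificationEmitter
            NotificationEmitter emitter = (NotificationEmitter) gc;
            emitter.addNotificationListener((notification, handback) -> {
                if (!notification.getType().equals(
                        GarbageCollectionNotificationInfo.GARBAGE_COLLECTION_NOTIFICATION)) {
                    return;
                }
                GarbageCollectionNotificationInfo info =
                        GarbageCollectionNotificationInfo.from(
                                (CompositeData) notification.getUserData());
                GcInfo gcInfo = info.getGcInfo();
                // getDuration() is documented as milliseconds, not microseconds
                System.out.printf("%s (%s): %d ms%n",
                        info.getGcName(), info.getGcCause(), gcInfo.getDuration());
            }, null, null);
        }
        System.gc();        // provoke a collection so the listener fires
        Thread.sleep(500);  // give the asynchronous notification time to arrive
    }
}
```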

My JVM parameters are

-Xss2m
-Xmx16G
-XX:NewSize=4G
-XX:PermSize=256M
-XX:+UseConcMarkSweepGC
-XX:ParGCCardsPerStrideChunk=4096

-XX:+UseCompressedOops
-XX:+DoEscapeAnalysis
-XX:+TieredCompilation

-Xloggc:logs/gc.log
-verbose:gc
-XX:+PrintGCDateStamps
-XX:+PrintGCDetails
-XX:+PrintPromotionFailure
-XX:+PrintGCApplicationStoppedTime

-XX:+PrintFlagsFinal
-XX:+LogVMOutput
-XX:LogFile=logs/jvm.log
-XX:+UnlockDiagnosticVMOptions

The numbers are from a dual-socket Xeon server with 16 hardware threads. 
Other machines show the same behaviour.




More information about the hotspot-gc-use mailing list