young gc time
the.6th.month at gmail.com
Mon Nov 5 00:43:15 PST 2012
Hi all,
I am using the ParNew collector in our production environment, and I have noticed that young GC times are consistently high. I am wondering whether there is a way to lower them.
Monitoring the JMX garbage-collection metrics, we found that each ParNew collection took roughly 150-200 ms, with a collection occurring every 15-20 seconds.
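For context, the kind of per-collector counts and times we track can be sampled in-process via the standard GarbageCollectorMXBean API; this is just a sketch of that approach, not our actual monitoring code:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// Sketch: poll the GC MXBeans and print cumulative collection count and
// time per collector. On a CMS+ParNew JVM the names are typically
// "ParNew" (young) and "ConcurrentMarkSweep" (old).
public class GcMonitor {
    public static void main(String[] args) throws InterruptedException {
        for (int i = 0; i < 3; i++) {
            for (GarbageCollectorMXBean gc :
                    ManagementFactory.getGarbageCollectorMXBeans()) {
                System.out.printf("%s: count=%d, time=%dms%n",
                        gc.getName(),
                        gc.getCollectionCount(),
                        gc.getCollectionTime());
            }
            Thread.sleep(1000); // sample once per second
        }
    }
}
```

Dividing the delta of getCollectionTime() by the delta of getCollectionCount() between samples gives the average pause per collection, which is how we arrived at the 150-200 ms figure.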
I've pasted a snippet of the GC log below:
: 864507K->87635K(917504K), 0.1231380 secs] 3699003K->2931949K(4476928K),
0.1236430 secs] [Times: user=0.43 sys=0.00, real=0.12 secs]
2012-11-05T16:23:23.949+0800: 236482.439: [GC 236482.439: [ParNew
Desired survivor size 67108864 bytes, new threshold 6 (max 6)
- age 1: 17253464 bytes, 17253464 total
- age 2: 14360944 bytes, 31614408 total
- age 3: 25430792 bytes, 57045200 total
- age 4: 2902680 bytes, 59947880 total
- age 5: 5009480 bytes, 64957360 total
- age 6: 5909656 bytes, 70867016 total
: 874067K->90642K(917504K), 0.1477390 secs] 3718381K->2938612K(4476928K),
0.1481920 secs] [Times: user=0.53 sys=0.00, real=0.15 secs]
2012-11-05T16:23:44.631+0800: 236503.121: [GC 236503.121: [ParNew
Desired survivor size 67108864 bytes, new threshold 6 (max 6)
- age 1: 15747824 bytes, 15747824 total
- age 2: 15988992 bytes, 31736816 total
- age 3: 10270928 bytes, 42007744 total
- age 4: 20606448 bytes, 62614192 total
- age 5: 2049256 bytes, 64663448 total
- age 6: 4744976 bytes, 69408424 total
: 877074K->100618K(917504K), 0.1410630 secs] 3725044K->2954405K(4476928K),
0.1414400 secs] [Times: user=0.48 sys=0.00, real=0.14 secs]
2012-11-05T16:24:15.290+0800: 236533.780: [GC 236533.781: [ParNew
Desired survivor size 67108864 bytes, new threshold 5 (max 6)
- age 1: 19065656 bytes, 19065656 total
- age 2: 12890184 bytes, 31955840 total
- age 3: 15095912 bytes, 47051752 total
- age 4: 8298736 bytes, 55350488 total
- age 5: 15055264 bytes, 70405752 total
- age 6: 1900328 bytes, 72306080 total
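For reference, tenuring-distribution output like the above is produced by logging flags along these lines (a guess at the flag set; the exact options on our servers may differ):

```
-XX:+UseConcMarkSweepGC -XX:+UseParNewGC
-XX:+PrintGCDetails -XX:+PrintGCDateStamps
-XX:+PrintTenuringDistribution
-Xloggc:gc.log
```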
And here is the output of jmap -heap:
Server compiler detected.
JVM version is 20.1-b02
using parallel threads in the new generation.
using thread-local object allocation.
Concurrent Mark-Sweep GC
Heap Configuration:
MinHeapFreeRatio = 40
MaxHeapFreeRatio = 70
MaxHeapSize = 4718592000 (4500.0MB)
NewSize = 1073741824 (1024.0MB)
MaxNewSize = 1073741824 (1024.0MB)
OldSize = 5439488 (5.1875MB)
NewRatio = 2
SurvivorRatio = 6
PermSize = 205520896 (196.0MB)
MaxPermSize = 205520896 (196.0MB)
Heap Usage:
New Generation (Eden + 1 Survivor Space):
capacity = 939524096 (896.0MB)
used = 668354776 (637.3927841186523MB)
free = 271169320 (258.60721588134766MB)
71.13758751324245% used
Eden Space:
capacity = 805306368 (768.0MB)
used = 602192224 (574.2952575683594MB)
free = 203114144 (193.70474243164062MB)
74.77802832921346% used
From Space:
capacity = 134217728 (128.0MB)
used = 66162552 (63.09752655029297MB)
free = 68055176 (64.90247344970703MB)
49.29494261741638% used
To Space:
capacity = 134217728 (128.0MB)
used = 0 (0.0MB)
free = 134217728 (128.0MB)
0.0% used
concurrent mark-sweep generation:
capacity = 3644850176 (3476.0MB)
used = 3004386160 (2865.2059173583984MB)
free = 640464016 (610.7940826416016MB)
82.42824848556957% used
Perm Generation:
capacity = 205520896 (196.0MB)
used = 110269304 (105.16100311279297MB)
free = 95251592 (90.83899688720703MB)
53.65357301673111% used
Given the card-table mechanism HotSpot uses for young GC, and given that our old generation is about 3.5 GB, each young GC has to scan roughly 7 million card-table entries (one byte per 512 bytes of old-gen heap). I am wondering whether that is the reason for the relatively slow young GCs.
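The 7-million figure is just back-of-envelope arithmetic from the jmap output, assuming HotSpot's default card size of 512 bytes:

```java
// Back-of-envelope check of the card-table estimate in the question:
// HotSpot keeps one card-table byte per 512 bytes of heap, so the CMS
// generation capacity reported by jmap maps to ~7.1 million cards.
public class CardTableEstimate {
    public static void main(String[] args) {
        long oldGenBytes = 3_644_850_176L; // CMS capacity from jmap -heap
        long cardSize = 512;               // default HotSpot card size
        long cards = oldGenBytes / cardSize;
        System.out.println(cards + " cards"); // 7118848 cards
    }
}
```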
Any advice? Thanks.
All the best,
Leon
More information about the hotspot-gc-use mailing list