YGC time increasing suddenly
Luciano Molinari
lucmolinari at gmail.com
Wed Dec 18 10:58:09 PST 2013
Hi everybody,
We have a standalone Java app that receives requests through RMI, and almost
all the objects it creates are short-lived (< ~100 ms).
The app runs on a 24-core server with 16 GB of RAM (Red Hat Linux).
During our performance tests (10k requests/second) we ran into a problem
where the throughput drops suddenly just a few minutes after the app is
started.
So I started to investigate the GC behaviour and made some adjustments
(increasing memory, switching to CMS, ...), and now the app runs properly for
about 35 minutes. At that point the time spent in young collections grows
sharply, even though no Full GC is executed (the old gen is only ~4% full).
I've tested many different parameter combinations, but currently I'm using
the following:
java -server -verbose:gc -XX:+PrintGCDetails -XX:+PrintTenuringDistribution
-XX:+PrintGCTimeStamps -XX:PrintFLSStatistics=1 -XX:SurvivorRatio=4
-XX:ParallelGCThreads=8 -XX:PermSize=256m -XX:+UseParNewGC
-XX:MaxPermSize=256m -Xms7g -Xmx7g -XX:NewSize=4608m -XX:MaxNewSize=4608m
-XX:MaxTenuringThreshold=15 -Dsun.rmi.dgc.client.gcInterval=3600000
-Dsun.rmi.dgc.server.gcInterval=3600000
-Djava.rmi.server.hostname=IP_ADDRESS
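For reference, here is a quick sanity check of the generation sizing these
flags imply. It's only a sketch: it assumes HotSpot's usual young-gen layout
of eden plus two survivor spaces and the default -XX:TargetSurvivorRatio=50,
and the class name is just for illustration.

import static java.lang.System.out;

public class GenSizing {
    public static void main(String[] args) {
        long newSizeMb = 4608;                              // -XX:NewSize=4608m / -XX:MaxNewSize=4608m
        int survivorRatio = 4;                              // -XX:SurvivorRatio=4
        long survivorMb = newSizeMb / (survivorRatio + 2);  // 768 MB per survivor space
        long edenMb = survivorMb * survivorRatio;           // 3072 MB eden
        long desiredSurvivorMb = survivorMb / 2;            // 384 MB -- matches the
                                                            // "Desired survivor size 402653184 bytes" in the logs
        out.printf("eden=%d MB, survivor=%d MB, desired survivor=%d MB%n",
                   edenMb, survivorMb, desiredSurvivorMb);
    }
}

The 3932160K young-gen capacity reported in the logs below is eden plus one
survivor space (3072 MB + 768 MB), so the numbers line up.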
If I use this same configuration (just without CMS), the same problem occurs
after 20 minutes, so it doesn't seem to be related to CMS. In fact, as I
mentioned above, CMS (Full GC) isn't executed at all during the tests.
Some logs I've collected:
1992.748: [ParNew
Desired survivor size 402653184 bytes, new threshold 15 (max 15)
- age 1: 9308728 bytes, 9308728 total
- age 2: 3448 bytes, 9312176 total
- age 3: 1080 bytes, 9313256 total
- age 4: 32 bytes, 9313288 total
- age 5: 34768 bytes, 9348056 total
- age 6: 32 bytes, 9348088 total
- age 15: 2712 bytes, 9350800 total
: 3154710K->10313K(3932160K), 0.0273150 secs] 3215786K->71392K(6553600K)
// 14 young collections happened between these two snapshots
2021.165: [ParNew
Desired survivor size 402653184 bytes, new threshold 15 (max 15)
- age 1: 9459544 bytes, 9459544 total
- age 2: 3648200 bytes, 13107744 total
- age 3: 3837976 bytes, 16945720 total
- age 4: 3472448 bytes, 20418168 total
- age 5: 3586896 bytes, 24005064 total
- age 6: 3475560 bytes, 27480624 total
- age 7: 3520952 bytes, 31001576 total
- age 8: 3612088 bytes, 34613664 total
- age 9: 3355160 bytes, 37968824 total
- age 10: 3823032 bytes, 41791856 total
- age 11: 3304576 bytes, 45096432 total
- age 12: 3671288 bytes, 48767720 total
- age 13: 3558696 bytes, 52326416 total
- age 14: 3805744 bytes, 56132160 total
- age 15: 3429672 bytes, 59561832 total
: 3230658K->77508K(3932160K), 0.1143860 secs] 3291757K->142447K(6553600K)
Besides the longer collection time, I also noticed that all 15 ages started
to show much larger values.
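For what it's worth, between the two snapshots above the surviving bytes grew
from ~9.3 MB to ~59.6 MB while the pause grew from 0.027 s to 0.114 s, so the
extra time does appear to go into copying the extra survivors. To track that
over a whole run, here is a minimal log-scraping sketch (the class name and
the file argument are just placeholders) that prints the final "total" of
each tenuring distribution block from a -XX:+PrintTenuringDistribution log:

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TenuringTotals {
    public static void main(String[] args) throws IOException {
        // Matches lines like "- age 1: 9308728 bytes, 9308728 total"
        Pattern age = Pattern.compile("- age\\s+\\d+:\\s+\\d+ bytes,\\s+(\\d+) total");
        long lastTotal = -1;
        try (BufferedReader r = new BufferedReader(new FileReader(args[0]))) {
            String line;
            while ((line = r.readLine()) != null) {
                Matcher m = age.matcher(line);
                if (m.find()) {
                    lastTotal = Long.parseLong(m.group(1));   // remember the running total
                } else if (lastTotal >= 0 && line.contains("secs]")) {
                    System.out.println(lastTotal);            // end of one ParNew record
                    lastTotal = -1;
                }
            }
        }
    }
}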
I must say I'm a little confused by this scenario. Does anyone have any tips?
Thanks in advance,
--
Luciano