Can CMS maximum free chunk size provide advance warning before full GC?
Nadav Wiener
nadav.wiener+hotspot at gmail.com
Mon Apr 29 01:05:55 PDT 2013
In the context of a soft real-time system that must not pause for more
than 200ms, we're looking for a way to get advance warning that a full
GC is imminent. We realize we might not be able to avoid it, but we'd
like to fail over to another node before the system stalls.
We've come up with a scheme that gives us such a warning ahead of a full
GC that could stall the system for several seconds (which we need to
avoid). It relies on CMS free list statistics, enabled with
-XX:PrintFLSStatistics=1. This prints free list statistics into the GC
log after every GC cycle, including young GCs, so the information is
available at short intervals, and appears even more frequently during
periods of high allocation rate. It probably costs a little in terms of
performance, but our working assumption is that we can afford it.
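Since the statistic only shows up as text in the GC log, acting on it means parsing the log as it is written. Below is a minimal sketch of how the "Max Chunk Size" line could be extracted from such a log; the class name is ours, and the regex is written against the output format shown below, which may vary slightly between JVM versions.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch: pull the CMS max free chunk size (reported in heap words)
// out of a -XX:PrintFLSStatistics=1 log line. The exact column
// spacing in the log can vary, so whitespace is matched loosely.
public class FlsStatsParser {
    private static final Pattern MAX_CHUNK =
        Pattern.compile("Max\\s+Chunk Size:\\s*(\\d+)");

    /** Returns the max chunk size in words, or -1 if the line does not match. */
    public static long parseMaxChunkWords(String logLine) {
        Matcher m = MAX_CHUNK.matcher(logLine);
        return m.find() ? Long.parseLong(m.group(1)) : -1;
    }
}
```

A log tailer would feed each new line through `parseMaxChunkWords` and keep the most recent non-negative result as the current reading.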
The output in the log looks like this:
Statistics for BinaryTreeDictionary:
------------------------------------
Total Free Space: 382153298
Max Chunk Size: 382064598
Number of Blocks: 28
Av. Block Size: 13648332
Tree Height: 8
In particular, the maximum free chunk size is 382064598 words. With 64-bit
words this should amount to just below 2915MB. This number has been
decreasing very slowly, at a rate of roughly 1MB per hour.
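The conversion from words to megabytes is simple enough to sanity-check in code (assuming 8-byte heap words on a 64-bit JVM; the helper name is ours):

```java
// Sketch: convert a chunk size reported in heap words to megabytes,
// assuming 8-byte words on a 64-bit JVM.
public class ChunkSize {
    static final long BYTES_PER_WORD = 8;

    public static double wordsToMb(long words) {
        // 382064598 words * 8 bytes/word / 2^20 ≈ 2914.9 MB,
        // i.e. just below 2915MB as stated above.
        return words * BYTES_PER_WORD / (1024.0 * 1024.0);
    }
}
```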
It is our understanding that, so long as the maximum free chunk size is
larger than the young generation (assuming no humongous object
allocation), every object promotion should succeed.
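That rule of thumb could be checked at runtime by comparing the reported chunk size against the young generation's capacity, read via the standard `java.lang.management` API. This is only a sketch: the pool-name matching assumes ParNew+CMS naming ("Par Eden Space", "Par Survivor Space"); other collectors name their pools differently, and `getMax()` can be undefined for some pools.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

// Sketch: does the largest contiguous free block in the old generation
// still exceed the young generation's capacity? If so, a young GC
// promoting everything at once should still succeed.
public class PromotionHeadroom {
    public static long youngGenBytes() {
        long total = 0;
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            String name = pool.getName();
            if (name.contains("Eden") || name.contains("Survivor")) {
                long max = pool.getUsage().getMax();
                // getMax() returns -1 when undefined; fall back to committed.
                total += max > 0 ? max : pool.getUsage().getCommitted();
            }
        }
        return total;
    }

    public static boolean hasHeadroom(long maxChunkWords) {
        return maxChunkWords * 8 > youngGenBytes(); // 8-byte words, 64-bit JVM
    }
}
```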
Recently we ran a several-days-long stress test and saw that CMS was
able to maintain maximum chunk sizes upward of 94% of total old
generation space. The maximum free chunk size appears to decrease at
less than 1MB/hour, which should be fine: at that rate we won't be
hitting full GC any time soon, and the servers will likely be down for
maintenance more often than a full GC can occur.
In a previous test, when the system was less memory efficient, we were
able to run it for a good 10 hours. During the first hour the maximum
free chunk size decreased to 100MB, where it stayed for over 8 hours.
During the last 40 minutes of the run it declined at a steady rate
towards 0, at which point a full GC occurred. This was very encouraging:
for that workload we effectively got a 40-minute advance warning, from
the moment the chunk size began its steady decline towards 0.
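The observation above suggests watching the slope of the statistic rather than its absolute value. A minimal sketch of that idea: fit a least-squares slope over a sliding window of (time, max chunk size) samples and raise the failover warning once the slope stays negative. The window length and any alerting threshold are placeholders, not tuned values.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch: detect a sustained downward trend in the max free chunk size
// by fitting a least-squares slope over a sliding window of samples.
public class ChunkTrend {
    private final Deque<double[]> samples = new ArrayDeque<>(); // {timeSec, mb}
    private final int window;

    public ChunkTrend(int window) { this.window = window; }

    public void record(double timeSec, double maxChunkMb) {
        samples.addLast(new double[]{timeSec, maxChunkMb});
        if (samples.size() > window) samples.removeFirst();
    }

    /** Least-squares slope in MB/sec over the window (NaN if < 2 samples). */
    public double slope() {
        int n = samples.size();
        if (n < 2) return Double.NaN;
        double sx = 0, sy = 0, sxy = 0, sxx = 0;
        for (double[] s : samples) {
            sx += s[0]; sy += s[1]; sxy += s[0] * s[1]; sxx += s[0] * s[0];
        }
        return (n * sxy - sx * sy) / (n * sxx - sx * sx);
    }
}
```

A steadily negative slope, sustained across the whole window, would be the trigger; dividing the current chunk size by the slope also gives a rough time-to-exhaustion estimate.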
**My question to you**: assuming all of this reflects a prolonged peak
workload (production load at any given point in time will only be
lower), does this sound like a valid approach? How reliably do you
reckon we can count on the maximum free chunk size statistic from the
GC log?
We are definitely open to suggestions, but ask that they be limited to
solutions available on HotSpot (no Azul for us, at least for now). Also,
G1 by itself is no solution unless we can come up with a similar metric
that gives advance warning before full GCs, or before any GC that
significantly exceeds our SLA (and these can occasionally occur).
More information about the hotspot-gc-use
mailing list