<html>
<head>
<meta content="text/html; charset=windows-1252"
http-equiv="Content-Type">
</head>
<body text="#000000" bgcolor="#FFFFFF">
Tom,<br>
<br>
<a class="moz-txt-link-freetext" href="http://cr.openjdk.java.net/~tbenson/8060697/webrev/src/share/vm/gc/g1/g1CollectorPolicy.cpp.frames.html">http://cr.openjdk.java.net/~tbenson/8060697/webrev/src/share/vm/gc/g1/g1CollectorPolicy.cpp.frames.html</a><br>
<br>
<pre><span class="new">1053 // Compute the ratio of just this last pause time to the entire time range stored</span>
<span class="new">1054 // in the vectors.</span>
<span class="new">1055 _last_pause_time_ratio = </span>
<span class="new">1056 (pause_time_ms * _recent_prev_end_times_for_all_gcs_sec->num()) / interval_ms;</span></pre>
<br>
<span class="new">_last_pause_time_ratio is the ratio of the last
pause over the <br>
average interval in the truncated sequence? By the latter<br>
I mean<br>
<br>
(interval_ms / </span><span class="new">_recent_prev_end_times_for_all_gcs_sec->num())<br>
<br>
If the truncated sequence has N samples and "interval_ms" is<br>
measured from the oldest sample, I'm calling interval_ms / N<br>
the average interval.<br>
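Spelling that out (an illustrative sketch only, not the webrev code;<br>
the helper and its arguments are made up):<br>
<pre>// (pause_time_ms * N) / interval_ms  ==  pause_time_ms / (interval_ms / N)
double last_pause_time_ratio(double pause_time_ms, double interval_ms, int num_samples) {
  double avg_interval_ms = interval_ms / num_samples;  // interval_ms / N
  return pause_time_ms / avg_interval_ms;              // last pause / average interval
}</pre>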
<br>
Is my description correct? Why do you prefer that to the most
recent<br>
pause time ratio?<br>
<br>
</span>Should the 1*M below be 1 region size?<br>
<pre>1545 const size_t min_expand_bytes = 1*M;</pre>
<span class="new"><br>
As the uncommitted space in the heap drops, the growth rate drops.<br>
<br>
</span><br>
<pre>1550 size_t expand_bytes_via_pct =
1551 uncommitted_bytes * G1ExpandByPercentOfAvailable / 100;</pre>
<span class="new"><br>
The scale_factor will increase that by up to a factor of 2, but the
policy<br>
still seems to grow slowly toward the maximum. Is there a reason not to get<br>
to the maximum heap size quickly?<br>
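For concreteness, the way I'm reading the growth step (a rough sketch<br>
only, not the actual webrev code; the function and parameter names are<br>
mine):<br>
<pre>size_t expansion_amount(size_t uncommitted_bytes, double scale_factor /* 0.2 .. 2.0 */) {
  // A percentage of the *uncommitted* space, so increments shrink as the heap fills up...
  size_t expand_bytes_via_pct = uncommitted_bytes * G1ExpandByPercentOfAvailable / 100;
  // ...and the scaling can at most double that.
  return (size_t)(expand_bytes_via_pct * scale_factor);
}</pre>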
<br>
Jon<br>
<br>
<br>
</span>
<div class="moz-cite-prefix">On 11/25/2015 06:06 AM, Tom Benson
wrote:<br>
</div>
<blockquote cite="mid:5655C06D.1010209@oracle.com" type="cite">Hi
Kim,
<br>
Thanks very much for the review. I've implemented all your
suggestions.
<br>
About this:
<br>
<br>
<blockquote type="cite">I suspect you are introducing some
implicit conversions that will cause
<br>
problems for the SAP folks; see discussion of 8143215:
<br>
</blockquote>
<br>
... I think there's one, which is:
<br>
<br>
expand_bytes = expand_bytes * scale_factor;
<br>
<br>
scale_factor is explicitly limited to being between 0.2 and 2.0,
and expand_bytes is a fraction of the heap size, so there's no
chance of overflow. Would you object to the static cast in this
case? How about with a comment?
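<br>
Something like this, for example (just a sketch of what I mean):
<br>
<pre>// scale_factor is clamped to [0.2, 2.0] and expand_bytes is a fraction of
// the heap size, so the product cannot overflow size_t.
expand_bytes = static_cast<size_t>(expand_bytes * scale_factor);</pre>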
<br>
<br>
Tom
<br>
<br>
On 11/24/2015 9:55 PM, Kim Barrett wrote:
<br>
<blockquote type="cite">On Nov 23, 2015, at 10:02 PM, Tom Benson
<a class="moz-txt-link-rfc2396E" href="mailto:tom.benson@oracle.com"><tom.benson@oracle.com></a> wrote:
<br>
<blockquote type="cite">Hi,
<br>
Here is a proposed change to the G1 heap growth code for
review. I've added a detailed description to the CR, but here
is the short version: After a GC pause, the average ratio of
time spent in recent GC pauses vs overall time is computed. If
it exceeds GCTimeRatio, the heap is expanded by a fixed
amount. With the new code, some deficiencies in the ratio
tracking are addressed, and the expansion size is scaled
according to how much the desired ratio is, on average,
exceeded by. The target ratio itself is also scaled at the
lowest heap sizes.
<br>
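Roughly, the check works like this (illustrative pseudocode only, not
the patch itself; the helper names are made up):
<br>
<pre>// Goal derived from GCTimeRatio: spend at most 1/(1+GCTimeRatio) of total time in GC.
double gc_overhead_goal_pct = 100.0 * (1.0 / (1.0 + GCTimeRatio));
if (recent_avg_pause_time_ratio() * 100.0 > gc_overhead_goal_pct) {
  // Expansion size is scaled by how far, on average, we are over the goal.
  expand_heap(scaled_expansion_amount());
}</pre>
<br>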
<br>
The case that triggered this was actually JDK-8132077, where
the JVM'08 Compress benchmark saw a 40% degradation. It was
due to the heap being about half the size in some runs,
because of the way heap growth worked.
<br>
<br>
I'm still collecting the final performance data for this
version, and will attach it to the CR. Earlier experimental
versions showed good improvements in consistency of heap
sizes. A couple of benchmarks average a percentage point or
two lower, while others improve by that much or more. No
growth percentage or scaling is going to be ideal for every
test, but the goal was to maintain performance without growing
too large. In fact, some tests now use much smaller heaps.
<br>
<br>
CR:
<br>
<a class="moz-txt-link-freetext" href="https://bugs.openjdk.java.net/browse/JDK-8060697">https://bugs.openjdk.java.net/browse/JDK-8060697</a>
<br>
Webrev:
<br>
<a class="moz-txt-link-freetext" href="http://cr.openjdk.java.net/~tbenson/8060697/webrev/">http://cr.openjdk.java.net/~tbenson/8060697/webrev/</a>
<br>
</blockquote>
Generally looks good; just a few very minor things, most of
which you can
<br>
take or ignore as you prefer. I don't need a new webrev for any
of these.
<br>
<br>
The comments were very helpful in understanding what's going on.
<br>
<br>
I suspect you are introducing some implicit conversions that
will cause
<br>
problems for the SAP folks; see discussion of 8143215: gcc
4.1.2: fix three
<br>
issues breaking the build. But the resolution for that is still
being
<br>
discussed, and we don't have an easy way to discover these for
ourselves, so
<br>
I don't think you should worry about it here right now.
<br>
<br>
------------------------------------------------------------------------------
<br>
src/share/vm/gc/g1/g1CollectorPolicy.cpp
<br>
1571 static double const MinScaleDownFactor = 0.2;
<br>
1572 static double const MaxScaleUpFactor = 2;
<br>
1573 static double const StartScaleDownAt =
_gc_overhead_perc;
<br>
1574 static double const StartScaleUpAt =
_gc_overhead_perc * 1.5;
<br>
1575 static double const ScaleUpRange = _gc_overhead_perc
* 2.0;
<br>
<br>
I suggest these not be static.
<br>
<br>
It doesn't really matter for the first two.
<br>
<br>
But for the others, there is a hidden cost to making them
static, due to
<br>
some compilers ensuring thread-safe initialization. C++11
mandates
<br>
thread-safe initialization of function scoped statics. gcc has
implemented
<br>
that starting circa gcc4.0 (if I recall correctly), controlled
by a command-line option
<br>
(-f[no-]threadsafe-statics). Visual Studio 2013 also includes
this feature,
<br>
as part of their incremental rollout of C++11 (and later)
features. I don't
<br>
know about other compilers.
<br>
<br>
The cost of making them static is likely at least comparable to
computing
<br>
them. And making them static implies that _gc_overhead_perc is
effectively a
<br>
constant, which appears to be true today, but who knows what
will happen
<br>
tomorrow.
<br>
<br>
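Sketch of the non-static form I'm suggesting (recomputed on every call,
which is probably no more costly than a guarded static initialization):
<br>
<pre>const double MinScaleDownFactor = 0.2;
const double MaxScaleUpFactor   = 2.0;
const double StartScaleDownAt   = _gc_overhead_perc;
const double StartScaleUpAt     = _gc_overhead_perc * 1.5;
const double ScaleUpRange       = _gc_overhead_perc * 2.0;</pre>
<br>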
------------------------------------------------------------------------------
<br>
src/share/vm/gc/g1/g1CollectorPolicy.cpp
<br>
1587 scale_factor = MAX2<double>(scale_factor,
MinScaleDownFactor);
<br>
1590 scale_factor = MIN2<double>(scale_factor,
MaxScaleUpFactor);
<br>
<br>
The explicit double template parameter isn't needed here, since
the
<br>
arguments are already both doubles.
<br>
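That is (sketch; argument deduction already picks double for both):
<br>
<pre>scale_factor = MAX2(scale_factor, MinScaleDownFactor);
scale_factor = MIN2(scale_factor, MaxScaleUpFactor);</pre>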
<br>
------------------------------------------------------------------------------
<br>
src/share/vm/gc/g1/g1CollectorPolicy.cpp
<br>
1525 threshold = (threshold * ((double)_g1->capacity() /
(double)(_g1->max_capacity() / 2)));
<br>
<br>
This might be easier to read if it used "threshold *= ...".
<br>
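I.e. (sketch of the suggested form):
<br>
<pre>threshold *= (double)_g1->capacity() / (double)(_g1->max_capacity() / 2);</pre>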
<br>
------------------------------------------------------------------------------
<br>
src/share/vm/gc/g1/g1CollectorPolicy.cpp
<br>
1526 threshold = MAX2<double>(threshold, 1);
<br>
<br>
The explicit double template parameter wouldn't be needed if "1"
was
<br>
replaced with "1.0".
<br>
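I.e. (sketch):
<br>
<pre>threshold = MAX2(threshold, 1.0);</pre>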
<br>
------------------------------------------------------------------------------
<br>
<br>
</blockquote>
<br>
</blockquote>
<br>
</body>
</html>