G1 heap growth very aggressive in spite of low heap usage

Peter Schuller peter.schuller at infidyne.com
Sun Mar 21 06:12:59 PDT 2010


Hello,

what is the intended behavior of G1 with respect to heap size? I was
playing around with a fairly simple test[1] designed for something
else, when I realized that G1 was expanding my heap in ways that I
find unexpected[2]. Reading g1CollectedHeap.cpp, there appear to be
attempts to adhere to MaxHeapFreeRatio and MinHeapFreeRatio. However,
even when I run with deliberately extreme values of 10/25 for the
minimum/maximum free ratio, heap expansion still happens as in [2].
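
If I understand the semantics of those options correctly, then with
roughly 122M live after a young pause (as in [2] below), a 10/25
minimum/maximum free ratio should keep the committed heap somewhere
between about 122M / 0.90 = ~136M and 122M / 0.75 = ~163M, rather
than the ~3.7-4G it actually grows to.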

My expected behavior would be for the heap size to remain within some
reasonable percentage of actual memory usage (assuming I only specify
the maximum heap size and leave the minimum untouched). This is
desirable in part because concurrent marking only seems to trigger as
a function of heap usage relative to heap size. In many situations it
is desirable for the JVM heap size to be a reasonable indicator of
actual memory requirements/use. If I am only using a fraction of the
heap size (5% in this case), the delay in concurrent marking means
that my application (unless it *only* generates ephemeral garbage)
will grow unreasonably until concurrent marking is triggered, after
which heap use drops very significantly. The end result is that (1) I
actually need a lot more memory than I otherwise would, and (2) it is
difficult to monitor the real memory requirements of the application.
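
For what it's worth, the kind of monitoring I have in mind is nothing
more elaborate than polling used versus committed heap; something
like the following sketch (class name made up, not part of the test
itself):

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryUsage;

    // Illustrative only: periodically print used vs. committed heap.
    public class HeapWatcher {
        public static void main(String[] args) throws InterruptedException {
            while (true) {
                MemoryUsage heap =
                    ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
                System.out.printf("used=%dM committed=%dM max=%dM%n",
                        heap.getUsed() >> 20,
                        heap.getCommitted() >> 20,
                        heap.getMax() >> 20);
                Thread.sleep(1000);
            }
        }
    }

With G1 expanding the heap as in [2], "committed" stops telling me
anything useful about what the application actually needs.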

I do understand that maximum throughput tends to be achieved by
postponing concurrent marking to maximize yield, but if that were my
aim, I would specify -Xms == -Xmx. With a small -Xms and a large -Xmx,
I would want the heap to expand only as "necessary" (though I
understand that is difficult to define), resulting in a much more
reasonable trigger threshold for concurrent marking.

The test is a fairly simple web server that I use 'ab' to throw
traffic at. It spawns a number of threads per request, each of which
tries to produce a non-trivial stack depth and then sleeps for a
random period. The intent was to see what kind of overhead is
incurred for root set scanning during young generation pauses when
there are many threads (though few active).
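
In rough Java terms (the actual test is Clojure, see [1]; the names
and constants below are made up for illustration), each request does
something like:

    import java.util.Random;

    // Rough approximation of the per-request behaviour of the test.
    public class RequestLoad {
        static final int THREADS_PER_REQUEST = 10;
        static final Random random = new Random();

        public static void main(String[] args) {
            handleRequest(); // simulate a single incoming request
        }

        // Called once per incoming HTTP request.
        static void handleRequest() {
            for (int i = 0; i < THREADS_PER_REQUEST; i++) {
                new Thread(new Runnable() {
                    public void run() {
                        deepCall(50);
                    }
                }).start();
            }
        }

        // Recurse to build up a non-trivial stack depth, then sleep for
        // a random period so that many mostly-idle threads exist during
        // young generation pauses.
        static void deepCall(int depth) {
            if (depth == 0) {
                try {
                    Thread.sleep(100 + random.nextInt(5000));
                } catch (InterruptedException ignored) {
                }
            } else {
                deepCall(depth - 1);
            }
        }
    }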

I then added some generation of permanent data that I can trigger by
submitting HTTP requests. The sudden growth seen in [2], where the
heap spikes from 100-200 MB to 3-4 GB, happens when I submit such
requests. The actual amount of data retained is very limited,
however, and as you can see the committed heap very quickly grows far
beyond anything a reasonable 'min free' percentage should allow.
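
The "permanent data" generation amounts to nothing more than
retaining objects in a long-lived collection when a particular
request arrives; roughly (again, illustrative Java rather than the
actual Clojure code):

    import java.util.ArrayList;
    import java.util.List;

    // Illustrative sketch of the "permanent data" trigger.
    public class RetainedData {
        static final List<byte[]> retained = new ArrayList<byte[]>();

        public static void main(String[] args) {
            retainSome(); // simulate one triggering request
        }

        // Each call keeps roughly 10 MB alive for the life of the process.
        static synchronized void retainSome() {
            for (int i = 0; i < 10; i++) {
                retained.add(new byte[1024 * 1024]);
            }
        }
    }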

The options I used for the run from which [2] comes are:

   -XX:+PrintGC
   -XX:+UnlockExperimentalVMOptions
   -XX:+UnlockDiagnosticVMOptions
   -XX:+UseG1GC
   -XX:MaxGCPauseMillis=25
   -XX:GCPauseIntervalMillis=50
   -XX:+G1ParallelRSetUpdatingEnabled
   -XX:+G1ParallelRSetScanningEnabled
   -Xmx4G
   -XX:G1ConfidencePercent=100
   -XX:MinHeapFreeRatio=10
   -XX:MaxHeapFreeRatio=25

This was run on a recent checkout of the bsd-port of OpenJDK 7:

   changeset:   192:9f250d0d1b40
   tag:         tip
   parent:      187:417d1a0aa480
   parent:      191:6b1069f53fbc
   user:        Greg Lewis <glewis at eyesbeyond.com>
   date:        Sat Mar 20 11:11:42 2010 -0700
   summary:     Merge from main OpenJDK repository

I can provide an executable jar file if someone wants to test it but
is not set up to build Clojure/Leiningen projects.

[1] http://github.com/scode/httpgctest

[2]:

Here is an excerpt. Pauses are 0.25-5 seconds apart, depending on
whether I am generating permanent data at the time.

[GC pause (young) 45M->35M(110M), 0.0482680 secs]
[GC pause (young) 49M->42M(110M), 0.0428210 secs]
[GC pause (young) 53M->46M(110M), 0.0238750 secs]
[GC pause (young) 59M->51M(110M), 0.0312480 secs]
[GC pause (young) 63M->58M(110M), 0.0404710 secs]
[GC pause (young) 74M->62M(110M), 0.0262080 secs]
[GC pause (young) 75M->69M(220M), 0.0509200 secs]
[GC pause (young) 85M->77M(440M), 0.0478070 secs]
[GC pause (young) 95M->86M(880M), 0.0560680 secs]
[GC pause (young) 102M->96M(1524M), 0.0618900 secs]
[GC pause (young) 112M->102M(2039M), 0.0404470 secs]
[GC pause (young) 116M->111M(2451M), 0.0546120 secs]
[GC pause (young) 126M->120M(2780M), 0.0441630 secs]
[GC pause (young) 137M->121M(3044M), 0.0201850 secs]
[GC pause (young) 136M->122M(3255M), 0.0113520 secs]
[GC pause (young) 134M->122M(3424M), 0.0111860 secs]
[GC pause (young) 133M->122M(3559M), 0.0089910 secs]
[GC pause (young) 133M->122M(3667M), 0.0079770 secs]
[GC pause (young) 140M->122M(3753M), 0.0074160 secs]
[GC pause (young) 144M->136M(3753M), 0.0738920 secs]
[GC pause (young) 146M->141M(3753M), 0.0363720 secs]
[GC pause (young) 154M->147M(3753M), 0.0256830 secs]
[GC pause (young) 158M->148M(3753M), 0.0116360 secs]
[GC pause (young) 157M->149M(3753M), 0.0172780 secs]
[GC pause (young) 157M->154M(3753M), 0.0273040 secs]
[GC pause (young) 168M->163M(3753M), 0.0526970 secs]
[GC pause (young) 179M->172M(3822M), 0.0547310 secs]
[GC pause (young) 190M->180M(3877M), 0.0337350 secs]
[GC pause (young) 195M->189M(3921M), 0.0530290 secs]
[GC pause (young) 206M->199M(3956M), 0.0598740 secs]
[GC pause (young) 218M->206M(3984M), 0.0320840 secs]
[GC pause (young) 222M->207M(4007M), 0.0140790 secs]
[GC pause (young) 221M->207M(4025M), 0.0084930 secs]
[GC pause (young) 218M->207M(4040M), 0.0066170 secs]
[GC pause (young) 219M->207M(4052M), 0.0069570 secs]
[GC pause (young) 223M->207M(4061M), 0.0067710 secs]
[GC pause (young) 227M->207M(4061M), 0.0071300 secs]
[GC pause (young) 231M->207M(4061M), 0.0070520 secs]


-- 
/ Peter Schuller

