RFR(s): 8077417: Cleanup of Universe::initialize_heap()
Kim Barrett
kim.barrett at oracle.com
Mon Apr 13 05:42:57 UTC 2015
On Apr 10, 2015, at 10:57 AM, Per Liden <per.liden at oracle.com> wrote:
>
> Hi,
>
> Universe::initialize_heap() is kind of messy, as it contains lots of #if INCLUDE_ALL_GCS; this patch tries to make that code a bit more readable. Also, all collectors except ParallelScavenge take their CollectorPolicy as an argument to the constructor. This patch aligns ParallelScavenge with that pattern, further improving the readability of the code in initialize_heap().
>
> Bug: https://bugs.openjdk.java.net/browse/JDK-8077417
>
> Webrev: http://cr.openjdk.java.net/~pliden/8077417/webrev.0/
------------------------------------------------------------------------------
src/share/vm/memory/universe.cpp
 708   jint status;
 709
 710 #if !INCLUDE_ALL_GCS
 711   if (UseParallelGC) {
 712     fatal("UseParallelGC not supported in this VM.");
 ...
 731   if (status != JNI_OK) {
 732     return status;
I think this might produce a compiler warning about an uninitialized
"status" at line 731 on at least some platforms. As far as I can tell,
fatal() is not marked as non-returning, so to the compiler it appears
that control flow can reach line 731 with "status" still uninitialized.
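As a standalone illustration (my own example, not HotSpot code), this
is the shape that tends to trigger gcc's -Wmaybe-uninitialized when the
terminating function lacks a noreturn annotation:

  #include <cstdio>
  #include <cstdlib>

  // Stand-in for fatal(): it never actually returns, but without a
  // noreturn annotation the compiler must assume that it might.
  static void my_fatal(const char* msg) {
    std::fprintf(stderr, "%s\n", msg);
    std::abort();
  }

  static int initialize(bool unsupported) {
    int status;                  // deliberately left uninitialized
    if (unsupported) {
      my_fatal("not supported in this VM");
    } else {
      status = 0;
    }
    return status;               // may be uninitialized on the fatal path
  }

  int main() {
    return initialize(false);
  }

Annotating my_fatal() as noreturn would also silence the warning, but
initializing "status" is the more robust fix.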
When do we build with INCLUDE_ALL_GCS false? I'm guessing only for
embedded, and maybe client builds? Was either of those part of the
testing for these changes?
The simplest fix is to initialize status to something, probably *not*
JNI_OK; maybe JNI_ERR?
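That is, at line 708 something like (sketch, untested):

  jint status = JNI_ERR;  // deliberately not JNI_OK, so any path that
                          // forgets to set it fails the check at 731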
------------------------------------------------------------------------------
src/share/vm/memory/universe.cpp
727 else { // UseSerialGC
Add a guarantee that UseSerialGC is true here?
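Something along these lines (sketch only; I've elided the rest of the
branch):

  } else {
    guarantee(UseSerialGC, "unexpected GC flag combination");
    ...
  }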
------------------------------------------------------------------------------
src/share/vm/memory/universe.cpp
735 ThreadLocalAllocBuffer::set_max_size(Universe::heap()->max_tlab_size());
This used to be done before Universe::heap()->initialize(); now it is
called after the heap has been initialized. Is that change of order
ok? I looked at it and it seems fine, in which case I think the change
of order is actually an improvement.
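For reference, the ordering after the patch is roughly this (my rough
paraphrase of the webrev, not an exact quote):

  status = Universe::heap()->initialize();
  if (status != JNI_OK) {
    return status;
  }
  // max_tlab_size() is now queried on a fully initialized heap.
  ThreadLocalAllocBuffer::set_max_size(Universe::heap()->max_tlab_size());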
While I was looking around, I noticed what might be a separate problem.
It seems possible that, with a sufficiently large heap, G1's
max_tlab_size computation might exceed the int[Integer.MAX_VALUE] size
limit discussed in CollectedHeap::max_tlab_size. I'm not sure whether
that limit is actually relevant to G1, though. If it is relevant, a
new CR should be filed.
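For scale (back-of-the-envelope arithmetic, mine): an int array of
Integer.MAX_VALUE elements occupies roughly

  header + (2^31 - 1) * sizeof(jint) ~= 8 GB

so G1 would only run into that limit when its computed TLAB bound
exceeds about 8 GB, i.e. only on quite large heaps.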
------------------------------------------------------------------------------