Object chunking in Hotspot GCs
Jon Masamitsu
jon.masamitsu at oracle.com
Fri Feb 21 19:21:38 UTC 2014
On 2/20/2014 3:58 AM, Thomas Schatzl wrote:
> Hi all,
>
> I am currently trying to finalize JDK-8027545:
> "Improve object array chunking test in G1's
> copy_to_survivor_space" (https://bugs.openjdk.java.net/browse/JDK-8027545).
>
> The goal is basically to change G1's large object chunking code into
> something resembling the Parallel GC code, because it seems better than
> the code G1 has.
>
> Looking at ParNew too, it seems the collectors disagree about the object
> size at which to start chunking, and about the conditions for it.
>
> The parallel code looks as follows:
>
> _array_chunk_size = ParGCArrayScanChunk;
> // let's choose 1.5x the chunk size
> _min_array_size_for_chunking = 3 * _array_chunk_size / 2;
>
> The actual decision for chunking:
>
> if (new_obj_size > _min_array_size_for_chunking &&
>     new_obj->is_objArray() &&
>     PSChunkLargeArrays) {
>
> Starts chunking if a reference array is larger than 1.5 times the
> ParGCArrayScanChunk size.
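
For illustration, that decision boils down to roughly the following
standalone sketch (names and the assumed default of 50 for
ParGCArrayScanChunk are mine, not the exact HotSpot source):

    #include <cstddef>

    // Assumed values for illustration; in HotSpot these come from the
    // ParGCArrayScanChunk and PSChunkLargeArrays flags.
    static const size_t chunk_size = 50;            // ParGCArrayScanChunk
    static const bool   chunk_large_arrays = true;  // PSChunkLargeArrays
    // 1.5x the chunk size, computed once at initialization.
    static const size_t min_array_size_for_chunking = 3 * chunk_size / 2;

    // 'new_obj_size_words' is the copied object's size in heap words,
    // 'is_obj_array' stands in for new_obj->is_objArray().
    bool should_chunk(size_t new_obj_size_words, bool is_obj_array) {
      return new_obj_size_words > min_array_size_for_chunking
          && is_obj_array
          && chunk_large_arrays;
    }
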
>
> Following up on chunked objects:
>
> if (end > (int) _min_array_size_for_chunking) {
>   // we'll chunk more
>   start = end - _array_chunk_size;
>   [...]
>
> I.e. Parallel GC chunks from the end of the object.
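
Read as a standalone sketch (hypothetical helper names, not the Parallel GC
code quoted above), that follow-up step works roughly like this:

    #include <cstdio>

    // Stand-in for scanning the references in elements [start, end).
    static void scan(int start, int end) {
      std::printf("scan [%d, %d)\n", start, end);
    }

    // Chunk from the end: peel chunk_size elements off the tail and hand
    // the shortened prefix back (the real code does this by shrinking the
    // stored array length and re-pushing the masked oop onto the queue).
    void process_from_end(int length, int chunk_size, int min_size_for_chunking) {
      int end = length;
      while (end > min_size_for_chunking) {
        int start = end - chunk_size;
        scan(start, end);
        end = start;
      }
      scan(0, end);   // final piece: the remaining prefix as a whole
    }
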
>
> Looking at ParNew:
>
> Decision to chunk:
>
> bool ParScanThreadState::should_be_partially_scanned(oop new_obj,
>                                                      oop old_obj) const {
>   return new_obj->is_objArray() &&
>          arrayOop(new_obj)->length() > ParGCArrayScanChunk &&
>          new_obj != old_obj;
> }
>
> I.e. ParNew starts chunking as soon as the array length exceeds
> ParGCArrayScanChunk; it also reverses the order of the conditions and
> does not check PSChunkLargeArrays.
>
> Following up on chunked objects:
> if (remainder > 2 * ParGCArrayScanChunk) {
>   // Test above combines last partial chunk with a full chunk
>   end = start + ParGCArrayScanChunk;
>
> Seems to start from the beginning of the object.
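
Again as a standalone sketch (hypothetical names; the real logic sits in the
ParNew/G1 partial-array handling), forward chunking with the "combine the
last partial chunk" rule looks roughly like:

    #include <cstdio>

    static const int ParGCArrayScanChunk = 50;   // assumed default

    // Stand-in for scanning the references in elements [start, end).
    static void scan(int start, int end) {
      std::printf("scan [%d, %d)\n", start, end);
    }

    // Chunk from the front; the last chunk absorbs any partial tail so
    // that no chunk smaller than ParGCArrayScanChunk is ever queued.
    void process_from_front(int length) {
      int start = 0;
      while (start < length) {
        int remainder = length - start;
        int end = (remainder > 2 * ParGCArrayScanChunk)
            ? start + ParGCArrayScanChunk   // take one chunk, re-push the rest
            : length;                       // combine last partial chunk with a full one
        scan(start, end);
        start = end;
      }
    }
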
>
> G1 code is copied from ParNew.
>
> What do you think, should I try to make the three GCs behave the same in
> that respect?
> I.e.
> - put the size check first (it's last in G1)
ParallelGC has the object size available, right? So in that case it should
be less expensive to check the size first than to go through the klass
to check the type. If you don't have the size already, I'm guessing
(and I really mean guessing) that it is faster to check the type first.
>
> - start chunking only if object size > 1.5x ParGCArrayScanChunk
> because starting at ParGCArrayScanChunk would just push the object onto
> the task queue as a chunked object and then immediately process it as a
> whole (in G1 and CMS). The decision for 1.5x is just arbitrary; I am fine
> with 2x too.
Is there a microbenchmark that you can run to see if there is a
difference between 1.5x and 2x?
>
> (There is no significant difference in performance across specjvm2008,
> specjbb05/13 in any of these cases on G1).
ParallelGC was tuned to run well with jbb2000. Can you check that?
>
> - have all collectors take PSChunkLargeArrays into account by setting
> ParGCArrayScanChunk to the maximum value if it is false. While this is not
> exactly the same behavior as before, it should be close enough.
The PS in PSChunkLargeArrays implies parallel-scavenge (aka ParallelGC).
If you're going to make the flag affect the other collectors, introduce a
new flag "GCChunkLargeArrays" and retire PSChunkLargeArrays.
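
Hedged sketch of what that could look like (GCChunkLargeArrays is only the
proposal above, and the mapping to "max value" is the idea from Thomas'
mail, not existing HotSpot code):

    #include <cstddef>
    #include <limits>

    // Proposed shared flag (assumption: default mirrors PSChunkLargeArrays).
    static const bool GCChunkLargeArrays = true;
    static size_t ParGCArrayScanChunk    = 50;   // assumed default

    // If chunking is disabled, push the threshold out of reach so the
    // existing size checks never fire; otherwise leave the value alone.
    void initialize_array_chunking() {
      if (!GCChunkLargeArrays) {
        ParGCArrayScanChunk = std::numeric_limits<size_t>::max();
      }
    }
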
>
> - keep the iteration order as is (backwards for parallel, forward for
> others)
Might you change to backwards for all if the chunking for ParallelGC still
seems better after you make the other changes?
>
> I could put the calculations for the effective values of the minimum
> object size and the actual chunk size into the collector policy (so that
> they are calculated only once).
> I could also keep the 1.5x/2x ParGCArrayScanChunk multiplier specific to
> each collector, but I do not see a real reason to.
I like having the multiplier the same but don't know if 1.5x or 2x makes
a difference.
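
For the "compute once" part, a rough sketch of the shared spot (hypothetical
struct and field names; the real change would go into the collector policy):

    #include <cstddef>

    static const size_t ParGCArrayScanChunk = 50;   // assumed default

    // Effective chunking values, computed once and shared by the collectors.
    struct ArrayChunkingValues {
      size_t chunk_size;               // elements scanned per chunk
      size_t min_size_for_chunking;    // start chunking above this size

      void initialize() {
        chunk_size = ParGCArrayScanChunk;
        // One shared multiplier (1.5x here; 2x would be the same pattern).
        min_size_for_chunking = 3 * chunk_size / 2;
      }
    };
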
Jon
>
> Opinions?
>
> Thanks,
> Thomas
>
>