Any plans to increase G1 max region size to 64m?
Thomas Viessmann
thomas.viessmann at oracle.com
Mon Feb 9 11:41:00 UTC 2015
Hi Thomas,
many thanks for your detailed explanation. Here is some more background:
it seems that a humongous allocation triggers a (long) evacuation pause
even though the eden is still (almost) empty. Such long pauses have only
been observed during humongous allocations.
Is this what you meant when you said:
/ - copying around large objects tends to fragment the survivor and old
gen space. I.e. at the moment, there is just a single current allocation
region for all threads during GC.
So if, due to timing, you copy such a large object, and another thread
allocates only 16 bytes into that same region, there is not enough space
left for another such large object, throwing away the entire remainder
region.
This is also an implementation issue, and will likely be improved soon,
but still relevant at least for 8u40.
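To make the quoted scenario concrete, here is a minimal arithmetic sketch
(a plain Java toy with illustrative sizes, not actual G1 internals): once a
16 MB object and a 16-byte allocation share a 32 MB GC allocation region,
a second 16 MB object no longer fits, and the remainder of the region is
lost when the region is retired.

// Toy arithmetic for the single-allocation-region scenario quoted above;
// sizes are illustrative, nothing here touches the real G1 code.
public class RegionWasteSketch {
    public static void main(String[] args) {
        long regionSize  = 32L * 1024 * 1024;  // 32 MB GC allocation region
        long largeObject = 16L * 1024 * 1024;  // a 16 MB object is copied in first
        long smallAlloc  = 16;                 // another thread then claims 16 bytes

        long remaining = regionSize - largeObject - smallAlloc;
        // The second 16 MB object does not fit any more, so the rest of the
        // region is effectively wasted once the region is retired.
        System.out.println("Second 16 MB object fits: " + (remaining >= largeObject));
        System.out.printf("Space lost if the region is retired: %.1f MB%n",
                remaining / (1024.0 * 1024.0));
    }
}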
Here is an example from the gc.log:
173913.719: [G1Ergonomics (Concurrent Cycles) request concurrent cycle
initiation, reason: occupancy higher than threshold, occupancy:
11710496768 bytes, allocation request: 33554448 bytes, threshold:
11596411665 bytes (45.00 %), source: concurrent humongous allocation]
173913.721: [G1Ergonomics (Concurrent Cycles) request concurrent cycle
initiation, reason: requested by GC cause, GC cause: G1 Humongous
Allocation]
{Heap before GC invocations=11416 (full 0):
garbage-first heap total 25165824K, used 11532190K
[0x00000001e0000000, 0x00000007e0000000, 0x00000007e0000000)
region size 32768K, 6 young (196608K), 5 survivors (163840K)
compacting perm gen total 262144K, used 72815K [0x00000007e0000000,
0x00000007f0000000, 0x0000000800000000)
the space 262144K, 27% used [0x00000007e0000000,
0x00000007e471bd28, 0x00000007e471be00, 0x00000007f0000000)
No shared spaces configured.
173913.721: [G1Ergonomics (Concurrent Cycles) initiate concurrent
cycle, reason: concurrent cycle initiation requested]
2014-12-13T00:34:06.802-0600: 173913.721: [GC pause (G1 Humongous
Allocation) (young) (initial-mark) 173913.721: [G1Ergonomics (CSet
Construction) start choosing CSet, _pending_cards: 3141273, predicted
base time: 925.47 ms, remaining time: 0.00 ms, target pause time: 400.00 ms]
173913.721: [G1Ergonomics (CSet Construction) add young regions to
CSet, eden: 1 regions, survivors: 5 regions, predicted young region
time: 49.96 ms]
173913.721: [G1Ergonomics (CSet Construction) finish choosing CSet,
eden: 1 regions, survivors: 5 regions, old: 0 regions, predicted pause
time: 975.43 ms, target pause time: 400.00 ms]
, 1.0386970 secs]
[Parallel Time: 1019.3 ms, GC Workers: 18]
[GC Worker Start (ms): Min: 173913721.2, Avg: 173913721.5, Max:
173913721.8, Diff: 0.5]
[Ext Root Scanning (ms): Min: 3.8, Avg: 4.2, Max: 5.1, Diff: 1.3,
Sum: 76.1]
[Code Root Marking (ms): Min: 0.0, Avg: 0.6, Max: 3.9, Diff: 3.9,
Sum: 11.0]
[Update RS (ms): Min: 954.6, Avg: 957.4, Max: 958.0, Diff: 3.5,
Sum: 17232.4]
[Processed Buffers: Min: 601, Avg: 682.2, Max: 852, Diff: 251,
Sum: 12279]
[Scan RS (ms): Min: 1.5, Avg: 1.9, Max: 2.3, Diff: 0.7, Sum: 35.0]
[Code Root Scanning (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff:
0.0, Sum: 0.0]
[Object Copy (ms): Min: 54.0, Avg: 54.5, Max: 54.8, Diff: 0.7,
Sum: 980.1]
[Termination (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.3]
[GC Worker Other (ms): Min: 0.0, Avg: 0.1, Max: 0.1, Diff: 0.1,
Sum: 1.1]
[GC Worker Total (ms): Min: 1018.4, Avg: 1018.7, Max: 1019.0,
Diff: 0.5, Sum: 18336.1]
[GC Worker End (ms): Min: 173914740.1, Avg: 173914740.2, Max:
173914740.2, Diff: 0.1]
[Code Root Fixup: 0.1 ms]
[Code Root Migration: 0.0 ms]
[Clear CT: 0.4 ms]
[Other: 18.9 ms]
[Choose CSet: 0.0 ms]
[Ref Proc: 10.7 ms]
[Ref Enq: 0.4 ms]
[Free CSet: 0.2 ms]
[Eden: 32.0M(1056.0M)->0.0B(1056.0M) Survivors: 160.0M->160.0M Heap:
11.0G(24.0G)->11.5G(24.0G)]
Heap after GC invocations=11417 (full 0):
garbage-first heap total 25165824K, used 12101473K
[0x00000001e0000000, 0x00000007e0000000, 0x00000007e0000000)
region size 32768K, 5 young (163840K), 5 survivors (163840K)
compacting perm gen total 262144K, used 72815K [0x00000007e0000000,
0x00000007f0000000, 0x0000000800000000)
the space 262144K, 27% used [0x00000007e0000000,
0x00000007e471bd28, 0x00000007e471be00, 0x00000007f0000000)
No shared spaces configured.
}
[Times: user=18.39 sys=0.01, real=1.04 secs]
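As a quick sanity check of the numbers above (my own arithmetic based on the
region size reported in the log, not anything G1 prints): with 32768K regions,
anything larger than half a region is humongous, and the 33554448-byte request
even exceeds a full region, so it needs two contiguous 32m regions.

// Worked check of the log numbers; the class name is made up for illustration,
// only the 32m region size and the request size come from the log above.
public class HumongousThresholdSketch {
    public static void main(String[] args) {
        long regionSize = 32L * 1024 * 1024;      // region size 32768K from the log
        long humongousThreshold = regionSize / 2; // G1 treats larger objects as humongous
        long request = 33554448L;                 // "allocation request: 33554448 bytes"

        System.out.println("Humongous: " + (request > humongousThreshold));   // true
        System.out.println("Exceeds one region: " + (request > regionSize));  // true
        long regionsNeeded = (request + regionSize - 1) / regionSize;
        System.out.println("Contiguous regions needed: " + regionsNeeded);    // 2
    }
}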
Thanks and Regards
Thomas
On 02/06/15 14:27, Thomas Schatzl wrote:
> Hi Thomas,
>
> On Thu, 2015-02-05 at 15:44 +0100, Thomas Viessmann wrote:
>> Hi,
>>
>> currently maximum value is -XX:G1HeapRegionSize=32m.
>> Are there plans to increase that number as there are applications
>> which allocate bigger objects which then result in slow humongous
>> allocations which in turn typically exceed the target pause time.
> I have to disagree a little here: while humongous allocations are much
> slower than regular allocations, any application allocating too many of
> them too quickly will run into out-of-memory situations anyway.
>
> I.e. before you need to be concerned about the performance of humongous
> allocations (you are going to actually do something with them?), you
> will most likely first run into trouble with available memory.
>
> These are my observations though, feel free to give yours.
>
> Allocations of humongous objects have no impact on pause time, except
> if they continuously trigger garbage collections, and then you will
> probably run into the previously mentioned full GCs anyway, unless you
> count the GC time as part of the allocation time.
>
> We have tried increasing the maximum humongous object size on some very
> large heap (>=100G) applications, without good results.
>
> The main problem is that a heap region size of X results in a maximum
> regular (non-humongous) object size of at most X/2.
>
> This results in some or all of these issues:
>
> - generally, large objects are very slow to copy around during young
> GC. Just copying a 16M object (at region size 32M) takes a long time,
> and then processing up to 16M/sizeof(pointer) references is very slow.
>
> (At the moment there is no load balancing of the copying across threads,
> so many threads may end up waiting on a single thread for a long time;
> we have seen balancing issues because of this. That's an implementation
> issue and could of course be fixed.)
>
> - copying around large objects tends to fragment the survivor and old
> gen space. I.e. at the moment, there is just a single current allocation
> region for all threads during GC.
> So if, due to timing, you copy such a large object, and another thread
> allocates only 16 bytes into that same region, there is not enough space
> left for another such large object, throwing away the entire remainder
> region.
> This is also an implementation issue, and will likely be improved soon,
> but still relevant at least for 8u40.
>
> - allocation granularity during GC (and also for the TLABs, i.e. during
> mutator time) is a region. This may lead to a lot of wasted space at
> the start and end of GC because of the single-allocation-region rule
> above.
>
> - the decrease in remembered set management overhead from going from 32M
> to 64M is of course significant (it roughly halves), but overall things
> did not seem that much better in the cases we tried.
>
> We have found that it is often much better to keep humongous objects
> humongous, and then try to reclaim them at every GC. This works extremely
> well already, and is hopefully going to get better in the future :)
>
> [Ignoring the fact that applications could help GC a little in that
> respect by better initial sizing or managing of their large objects in
> the first place.]
>
> Given all this, to me there does not seem to be a case right now to
> increase this limit.
>
> You can try for yourselves whether it makes sense for your application to
> increase HeapRegionBounds::MAX_REGION_SIZE. As far as I am concerned it
> seems to work, although it is not officially supported.
>
> Thanks,
> Thomas
>
>
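On the bracketed point above about applications managing their large objects
themselves: below is a small, self-contained sketch (my own example, not from
the G1 code; the class name and plannedBytes value are made up) that reads the
effective G1HeapRegionSize via HotSpotDiagnosticMXBean and checks whether a
planned allocation would cross the half-region humongous threshold.

// Standalone sketch, run with -XX:+UseG1GC; only the half-region humongous
// rule is taken from the discussion above, everything else is illustrative.
import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

public class HumongousCheck {
    public static void main(String[] args) {
        HotSpotDiagnosticMXBean diag =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        // Effective region size in bytes; on some builds this can read 0 if the
        // size was chosen ergonomically rather than via -XX:G1HeapRegionSize.
        long regionSize =
                Long.parseLong(diag.getVMOption("G1HeapRegionSize").getValue());
        long threshold = regionSize / 2; // objects larger than this are humongous

        long plannedBytes = 33554448L;   // e.g. the request size seen in the log
        System.out.println("Region size: " + regionSize
                + " bytes, humongous threshold: " + threshold + " bytes");
        System.out.println("Planned allocation would be humongous: "
                + (regionSize > 0 && plannedBytes > threshold));
    }
}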
--
THOMAS VIESSMANN | Senior Principal Technical Support Engineer - Java
Phone: +498914302496 | Mobile: +491743005467
Oracle Customer Technical Support - Java
ORACLE Deutschland B.V. & Co. KG | Riesstr. 25 | D-80992 Muenchen