Minor question about logging
Bernd Eckenfels
ecki at zusammenkunft.net
Mon Mar 9 10:40:08 UTC 2020
Hello Eugene,
As far as I know, the numbers in brackets are the current capacity, not the heuristic target for the next iteration. But even if it were the heuristic's decision, the GC cannot force regions to be filled if you do almost no allocations.
BTW can you comment on the underlying problem? Do you really want to use G1 with such a small heap or are you just trying to debug a specific problem - if yes, can you describe it?
Regards
Bernd
--
http://bernd.eckenfels.net
________________________________
From: hotspot-gc-use <hotspot-gc-use-bounces at openjdk.java.net> on behalf of Eugeniu Rabii <eugen.rabii at gmail.com>
Sent: Monday, March 9, 2020 11:32:43 AM
To: Stefan Johansson <stefan.johansson at oracle.com>; hotspot-gc-use at openjdk.java.net <hotspot-gc-use at openjdk.java.net>
Subject: Re: Minor question about logging
Hello Stefan,
I know these are humongous allocations; the 2 MB was chosen on purpose
(I could have chosen 1 MB too, I know).
The first GC (GC(0), a young collection) is actually the result of the
allocation of those humongous objects. Because the humongous allocation
happened, a concurrent cycle was triggered (GC(1)), which first triggers
the young collection (GC(0)); these are concepts I do seem to get.
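Just to spell out "humongous" here: with the 1MB region size from my logs, G1 treats any allocation of at least half a region as humongous. A minimal sketch of that rule (the class and helper names are purely illustrative, not JDK code):

```java
public class HumongousCheck {
    // Region size taken from the GC logs: "region size 1024K".
    static final long REGION_SIZE = 1024 * 1024;

    // G1 considers an object humongous when it occupies
    // at least half a heap region.
    static boolean isHumongous(long objectSizeBytes) {
        return objectSizeBytes >= REGION_SIZE / 2;
    }

    public static void main(String[] args) {
        long twoMb = 2L * 1024 * 1024;  // size of each byte[] in the test program
        System.out.println(isHumongous(twoMb));       // true: spans two full regions
        System.out.println(isHumongous(100 * 1024));  // false: normal fast-path allocation
    }
}
```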
My question here is different. After the young collection is done, there
are entries like this in logs:
[0.071s][info ][gc,heap ] GC(0) Eden regions: 1->0(9)
[0.071s][info ][gc,heap ] GC(0) Survivor regions: 0->1(2)
The way I read it is: there was 1 Eden region before the collection;
everything was cleared from it (that is the 0) and the heuristics decided
that the next cycle should have 9 Eden regions. The same reading applies
to the Survivor regions. As such, the next cycle would have 11 young
regions in total: 9 Eden, 2 Survivor.
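In other words, I am reading each `before->after(target)` fragment mechanically, like this small illustrative parser does (the names are my own, not from the JDK):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RegionTransition {
    // Matches log fragments like "Eden regions: 1->0(9)" or "Old regions: 0->0".
    static final Pattern P =
        Pattern.compile("regions: (\\d+)->(\\d+)(?:\\((\\d+)\\))?");

    // Returns {before, after, target}; target is -1 when the log line
    // has no parenthesized value (as for Old regions).
    static int[] parse(String line) {
        Matcher m = P.matcher(line);
        if (!m.find()) throw new IllegalArgumentException(line);
        int target = m.group(3) == null ? -1 : Integer.parseInt(m.group(3));
        return new int[] { Integer.parseInt(m.group(1)),
                           Integer.parseInt(m.group(2)),
                           target };
    }

    public static void main(String[] args) {
        int[] eden = parse("Eden regions: 1->0(9)");
        // before the pause, after the pause, heuristic target for next cycle
        System.out.println(eden[0] + " " + eden[1] + " " + eden[2]);
    }
}
```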
I am expecting the third cycle (GC(2)) to start with:
[0.076s][debug][gc,heap ] GC(2) Heap before GC invocations=2
(full 0): .....
[0.076s][debug][gc,heap ] GC(2) region size 1024K, 11 young
(2048K), 2 survivors (1024K)
Instead it prints:
[0.076s][debug][gc,heap ] GC(2) region size 1024K, 2 young
(2048K), 1 survivors (1024K)
Does this make it a better explanation now?
Thank you,
Eugene.
On 3/9/20 4:14 AM, Stefan Johansson wrote:
> Hi Eugeniu,
>
> The second GC is most likely also caused by having many humongous
> allocations. This was the cause of GC(0) as well, and since your
> application only allocates large (humongous) objects, it will not use a
> lot of space for other objects.
>
> If you are not familiar with the concept of humongous objects in G1,
> these are objects that are too large to be allocated in the normal fast
> path. They are instead allocated in separate regions. This requires
> some special handling, and that's the reason we trigger GCs more
> quickly if a lot of such objects are allocated. In your setup the
> region size will be 1MB, so all objects of at least half a region
> (512KB) will be considered humongous.
>
> Hope this helps,
> StefanJ
>
>
> On 2020-03-08 06:01, Eugeniu Rabii wrote:
>> Hello,
>>
>> I have a very simple program that constantly allocates some byte
>> arrays (of 2 MB) each (running with the latest jdk-13). I run it with :
>>
>> -Xms20M
>> -Xmx20M
>> -Xmn10M
>> "-Xlog:heap*=debug" "-Xlog:gc*=debug" "-Xlog:ergo*=debug"
>>
>>
>> For example:
>>
>> public static void main(String[] args) {
>>     while (true) {
>>         System.out.println(invokeMe());
>>     }
>> }
>>
>> public static int invokeMe() {
>>     int x = 1024;
>>     int factor = 2;
>>     byte[] allocation1 = new byte[factor * x * x];
>>     allocation1[2] = 3;
>>     byte[] allocation2 = new byte[factor * x * x];
>>     byte[] allocation3 = new byte[factor * x * x];
>>     byte[] allocation4 = new byte[factor * x * x];
>>
>>     return Arrays.hashCode(allocation1) ^ Arrays.hashCode(allocation2)
>>         ^ Arrays.hashCode(allocation3) ^ Arrays.hashCode(allocation4);
>> }
>>
>> In logs, I see something that is puzzling me:
>>
>>
>> [0.066s][debug][gc,ergo ] Request concurrent cycle initiation
>> (requested by GC cause). GC cause: G1 Humongous Allocation
>> [0.066s][debug][gc,heap ] GC(0) Heap before GC invocations=0
>> (full 0): garbage-first heap total 20480K, used 6908K
>> [0x00000007fec00000, 0x0000000800000000)
>> [0.066s][debug][gc,heap ] GC(0) region size 1024K, 1 young
>> (1024K), 0 survivors (0K)
>>
>> OK, so Heap Before: 1 young, 0 survivors.
>>
>> Then:
>>
>> [0.071s][info ][gc,heap ] GC(0) Eden regions: 1->0(9)
>> [0.071s][info ][gc,heap ] GC(0) Survivor regions: 0->1(2)
>> [0.071s][info ][gc,heap ] GC(0) Old regions: 0->0
>>
>> So the next cycle will have 9 Eden Regions and 2 Survivor ones (at
>> least this is how I read the source code of where this is logged)
>>
>> Then a GC(1) concurrent cycle happens:
>>
>> [0.071s][info ][gc ] GC(1) Concurrent Cycle
>>
>> And the next cycle is where I fail to understand the logging:
>>
>> [0.076s][debug][gc,heap ] GC(2) Heap before GC
>> invocations=2 (full 0): garbage-first heap total 20480K, used 7148K
>> [0x00000007fec00000, 0x0000000800000000)
>> [0.076s][debug][gc,heap ] GC(2) region size 1024K, 2
>> young (2048K), 1 survivors (1024K)
>>
>> How come 2 young, 1 survivors? When the previous cycle said 9 Eden, 2
>> Survivor.
>>
>> Thank you,
>> Eugene.
>>
>> _______________________________________________
>> hotspot-gc-use mailing list
>> hotspot-gc-use at openjdk.java.net
>> https://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use
>>