Minor question about logging
Stefan Johansson
stefan.johansson at oracle.com
Mon Mar 9 10:41:32 UTC 2020
Hi Eugeniu,
I should have been clearer: your understanding of the numbers is
correct. But as Thomas also responded, these are estimates and we might
have to start a GC due to other circumstances.
See more below.
On 2020-03-09 11:32, Eugeniu Rabii wrote:
> Hello Stefan,
>
> I know these are humongous allocations; the 2 MB was chosen on purpose
> (I could have chosen 1 MB too, I know).
>
> The first GC (GC(0), a young collection) is actually the result of the
> allocation of those humongous objects.
>
> Because the humongous allocation happened, a concurrent GC was triggered
> (GC(1)), which triggers the young collection first (GC(0)); these are
> concepts I do seem to get.
>
> My question here is different. After the young collection is done, there
> are entries like this in the logs:
>
> [0.071s][info ][gc,heap ] GC(0) Eden regions: 1->0(9)
> [0.071s][info ][gc,heap ] GC(0) Survivor regions: 0->1(2)
>
> The way I read it is: there was 1 Eden region before the collection;
> everything was cleared from it (that zero), and the heuristics just said
> that the next cycle should have 9 Eden regions.
Correct, but this is an estimate and we might have to GC before we fill
up the 9 young regions, for example if there are a lot of humongous
allocations. As I mentioned, humongous allocations are treated
differently and aren't counted as young.
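For illustration, here is a minimal sketch (the class name and the 600KB
size are made up for this example; run with -Xms20M -Xmx20M and
-Xlog:gc* so the region size is 1MB) that should trigger
"G1 Humongous Allocation" GCs while the Eden count stays far below its
target:

public class HumongousPressure {
    public static void main(String[] args) {
        // With 1MB regions, a 600KB array is larger than half a region,
        // so it is allocated as a humongous object in its own region,
        // never in Eden.
        for (int i = 0; i < 10_000; i++) {
            byte[] b = new byte[600 * 1024];
            b[0] = 1; // touch the array so it isn't trivially dead
        }
    }
}

Each 600KB array occupies a whole 1MB region, so the humongous regions
fill the heap long before 9 Eden regions are ever used.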
>
> The same explanation would apply to the Survivor regions. As such there
> would be: 11 young, 2 survivors.
>
>
> I am expecting the third cycle (GC(2)) to start with:
>
>
> [0.076s][debug][gc,heap ] GC(2) Heap before GC invocations=2
> (full 0): .....
>
> [0.076s][debug][gc,heap ] GC(2) region size 1024K, 11 young
> (2048K), 2 survivors (1024K)
>
>
> Instead it prints:
>
>
> [0.076s][debug][gc,heap ] GC(2) region size 1024K, 2 young
> (2048K), 1 survivors (1024K)
>
>
> Does this make it a better explanation now?
Your expectation is correct, and if GC(2) isn't caused by a humongous
allocation this is unexpected behavior. It would help a lot if you could
post more of your log, especially the cause for GC(2).
Thanks,
Stefan
>
> Thank you,
>
> Eugene.
>
>
> On 3/9/20 4:14 AM, Stefan Johansson wrote:
>> Hi Eugeniu,
>>
>> The second GC is most likely also caused by having many humongous
>> allocations. This was the cause of GC(0) as well, and since your
>> application only allocates large (humongous) objects, it will not use a
>> lot of space for other objects.
>>
>> If you are not familiar with the concept of humongous objects in G1:
>> these are objects that are too large to be allocated in the normal fast
>> path. They are instead allocated in separate regions. This requires
>> some special handling, and that's the reason we trigger GCs more
>> quickly if a lot of such objects are allocated. In your setup the
>> region size will be 1MB, so any object larger than half a region
>> (512KB) will be considered humongous.
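>>
>> As a rough illustration (a sketch only; the method name is made up,
>> not HotSpot's actual code):
>>
>> // An allocation larger than half a region is treated as humongous.
>> static boolean isHumongous(long objectBytes, long regionBytes) {
>>     return objectBytes > regionBytes / 2;
>> }
>>
>> // With 1MB regions: isHumongous(2L * 1024 * 1024, 1024 * 1024) is
>> // true (your 2MB arrays), while a 400KB array would not be humongous.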
>>
>> Hope this helps,
>> StefanJ
>>
>>
>> On 2020-03-08 06:01, Eugeniu Rabii wrote:
>>> Hello,
>>>
>>> I have a very simple program that constantly allocates byte arrays
>>> (of 2 MB each), running with the latest jdk-13. I run it with:
>>>
>>> -Xms20M
>>> -Xmx20M
>>> -Xmn10M
>>> "-Xlog:heap*=debug" "-Xlog:gc*=debug" "-Xlog:ergo*=debug"
>>>
>>>
>>> For example:
>>>
>>> import java.util.Arrays;
>>>
>>> public class GcTest {
>>>     public static void main(String[] args) {
>>>         while (true) {
>>>             System.out.println(invokeMe());
>>>         }
>>>     }
>>>
>>>     public static int invokeMe() {
>>>         int x = 1024;
>>>         int factor = 2;
>>>         // Each array is 2 MB; with 1 MB regions these are humongous.
>>>         byte[] allocation1 = new byte[factor * x * x];
>>>         allocation1[2] = 3;
>>>         byte[] allocation2 = new byte[factor * x * x];
>>>         byte[] allocation3 = new byte[factor * x * x];
>>>         byte[] allocation4 = new byte[factor * x * x];
>>>
>>>         // XOR the hashes so none of the allocations is dead code.
>>>         return Arrays.hashCode(allocation1) ^ Arrays.hashCode(allocation2)
>>>             ^ Arrays.hashCode(allocation3) ^ Arrays.hashCode(allocation4);
>>>     }
>>> }
>>>
>>> In the logs, I see something that is puzzling me:
>>>
>>>
>>> [0.066s][debug][gc,ergo ] Request concurrent cycle initiation
>>> (requested by GC cause). GC cause: G1 Humongous Allocation
>>> [0.066s][debug][gc,heap ] GC(0) Heap before GC invocations=0
>>> (full 0): garbage-first heap total 20480K, used 6908K
>>> [0x00000007fec00000, 0x0000000800000000)
>>> [0.066s][debug][gc,heap ] GC(0) region size 1024K, 1 young
>>> (1024K), 0 survivors (0K)
>>>
>>> OK, so Heap Before: 1 young, 0 survivors.
>>>
>>> Then:
>>>
>>> [0.071s][info ][gc,heap ] GC(0) Eden regions: 1->0(9)
>>> [0.071s][info ][gc,heap ] GC(0) Survivor regions: 0->1(2)
>>> [0.071s][info ][gc,heap ] GC(0) Old regions: 0->0
>>>
>>> So the next cycle will have 9 Eden regions and 2 Survivor ones (at
>>> least this is how I read the source code where this is logged).
>>>
>>> Then a GC(1) concurrent cycle happens:
>>>
>>> [0.071s][info ][gc ] GC(1) Concurrent Cycle
>>>
>>> And the next cycle is where I fail to understand the logging:
>>>
>>> [0.076s][debug][gc,heap ] GC(2) Heap before GC
>>> invocations=2 (full 0): garbage-first heap total 20480K, used 7148K
>>> [0x00000007fec00000, 0x0000000800000000)
>>> [0.076s][debug][gc,heap ] GC(2) region size 1024K, 2
>>> young (2048K), 1 survivors (1024K)
>>>
>>> How come 2 young, 1 survivor, when the previous cycle said 9 Eden,
>>> 2 Survivor?
>>>
>>> Thank you,
>>> Eugene.
>>>