From tequilaron at gmail.com  Mon Mar  2 21:10:12 2020
From: tequilaron at gmail.com (Ron Reynolds)
Date: Mon, 2 Mar 2020 13:10:12 -0800
Subject: Full-GC class-histogram generation time
Message-ID: <CAEii5KDRYRbNieQB-ZRAJ5_-WwvhqZ00wFKT-LipKUCTUrVG6A@mail.gmail.com>

Going through a GC log from a box that had several Full GCs (OpenJDK 64-Bit
Server VM (11.0.3+7-201905271830) on CentOS 6.9 with 20 cores and 132GB),
I came across some really surprising times in the output from
 -Xlog:gc*=info,gc+classhisto*=trace:
(notice the timestamps)

[2020-02-29T*02:39:42.403*+0000][trace][gc,classhisto,start] Class
Histogram (before full gc)
[2020-02-29T02:40:37.810+0000][trace][gc,classhisto      ] GC(790)  num
#instances         #bytes  class name (module)
...
[2020-02-29T02:40:37.877+0000][trace][gc,classhisto      ] GC(790) Total
 1714972606   101189395832
[2020-02-29T*02:40:37.878*+0000][trace][gc,classhisto      ] GC(790) Class
Histogram (before full gc) *55475.179ms*
[2020-02-29T02:40:37.878+0000][info ][gc,task            ] GC(790) Using 15
workers of 15 for full compaction
[2020-02-29T02:40:37.878+0000][info ][gc,start           ] GC(790) Pause
Full (G1 Evacuation Pause)
[2020-02-29T02:40:37.954+0000][info ][gc,phases,start    ] GC(790) Phase 1:
Mark live objects
[2020-02-29T02:40:44.115+0000][info ][gc,stringtable     ] GC(790) Cleaned
string and symbol table, strings: 76105 processed, 198 removed, symbols:
254466 processed, 0 removed
[2020-02-29T02:40:44.115+0000][info ][gc,phases          ] GC(790) Phase 1:
Mark live objects 6160.375ms
[2020-02-29T02:40:44.115+0000][info ][gc,phases,start    ] GC(790) Phase 2:
Prepare for compaction
[2020-02-29T02:40:47.010+0000][info ][gc,phases          ] GC(790) Phase 2:
Prepare for compaction 2894.766ms
[2020-02-29T02:40:47.010+0000][info ][gc,phases,start    ] GC(790) Phase 3:
Adjust pointers
[2020-02-29T02:40:52.241+0000][info ][gc,phases          ] GC(790) Phase 3:
Adjust pointers 5231.599ms
[2020-02-29T02:40:52.241+0000][info ][gc,phases,start    ] GC(790) Phase 4:
Compact heap
[2020-02-29T02:40:56.657+0000][info ][gc,phases          ] GC(790) Phase 4:
Compact heap 4415.453ms
[2020-02-29T02:40:56.730+0000][info ][gc,heap            ] GC(790) Eden
regions: 0->0(161)
[2020-02-29T02:40:56.730+0000][info ][gc,heap            ] GC(790) Survivor
regions: 0->0(21)
[2020-02-29T02:40:56.730+0000][info ][gc,heap            ] GC(790) Old
regions: 3112->2776
[2020-02-29T02:40:56.730+0000][info ][gc,heap            ] GC(790)
Humongous regions: 115->24
[2020-02-29T02:40:56.730+0000][info ][gc,metaspace       ] GC(790)
Metaspace: 104224K->104200K(106496K)
[2020-02-29T02:40:56.730+0000][info ][gc                 ] GC(790) Pause
Full (G1 Evacuation Pause) 103045M->89309M(103264M) *18852.025ms*
[2020-02-29T02:40:56.730+0000][trace][gc,classhisto,start] Class Histogram
(after full gc)
[2020-02-29T02:41:39.715+0000][trace][gc,classhisto      ] GC(790)  num
#instances         #bytes  class name (module)
...
[2020-02-29T02:41:39.779+0000][trace][gc,classhisto      ] GC(790) Total
 1628116065    93487648064
[2020-02-29T02:41:39.780+0000][trace][gc,classhisto      ] GC(790) Class
Histogram (after full gc) *43049.823ms*
[2020-02-29T02:41:39.780+0000][info ][gc,cpu             ] GC(790)
User=372.08s Sys=0.09s *Real=117.36s*

Did the JVM really take 55.4 SECONDS to generate the pre-full-GC class
histogram and 43 seconds to generate the post-full-GC one, for a GC that
would otherwise have taken only 18.8 seconds?  I mean, sure, avoid Full GCs
at all costs, but was it always this expensive to get the class histogram?
This feature has helped us narrow down a lot of memory issues; it would be
a shame to have to disable it because it makes Full GCs 6x slower.  Any
info/suggestions gratefully accepted.

From stefan.johansson at oracle.com  Tue Mar  3 10:55:37 2020
From: stefan.johansson at oracle.com (Stefan Johansson)
Date: Tue, 3 Mar 2020 11:55:37 +0100
Subject: Full-GC class-histogram generation time
In-Reply-To: <CAEii5KDRYRbNieQB-ZRAJ5_-WwvhqZ00wFKT-LipKUCTUrVG6A@mail.gmail.com>
References: <CAEii5KDRYRbNieQB-ZRAJ5_-WwvhqZ00wFKT-LipKUCTUrVG6A@mail.gmail.com>
Message-ID: <8ee24061-80d2-720b-514d-b07df63170ee@oracle.com>

Hi Ron,

You are correct, the generation of class histograms takes a long time. 
The feature is enabled under the 'trace' level and should be used with 
caution. The reason it takes so long is that it is not parallelized, so 
the whole heap is inspected by a single thread. To my knowledge this has 
always been the case. What has changed is that G1, since JDK 10, does the 
Full GC in parallel, so in the past the overhead caused by creating the 
class histograms wasn't as big as it is now.

There is currently an RFE out to enable the use of parallel threads when 
doing heap inspection, so this behavior might improve in future releases.

One thing you can do to avoid always paying this overhead is to skip 
-Xlog:gc+classhisto*=trace and instead generate a class histogram only when 
you suspect something is wrong. This can be done using the jmap tool:
jmap -histo[:live] <pid>
   to connect to running process and print histogram of java object heap
   if the "live" suboption is specified, only count live objects
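
As a rough, hypothetical illustration of that on-demand approach (the class
name is made up, and it assumes jmap from the same JDK is on the PATH), the
histogram can be captured by a small helper that shells out to jmap for the
current JVM:

    import java.io.IOException;

    public class HistogramOnDemand {
        public static void main(String[] args) throws IOException, InterruptedException {
            // "jmap -histo:live <pid>" counts only live objects (forcing a GC first);
            // drop ":live" to avoid the extra collection.
            long pid = ProcessHandle.current().pid();
            Process jmap = new ProcessBuilder("jmap", "-histo:live", Long.toString(pid))
                    .inheritIO()
                    .start();
            jmap.waitFor();
        }
    }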

Hope this helps.

Thanks,
Stefan



From eugen.rabii at gmail.com  Sun Mar  8 05:01:43 2020
From: eugen.rabii at gmail.com (Eugeniu Rabii)
Date: Sun, 8 Mar 2020 00:01:43 -0500
Subject: Minor question about logging
Message-ID: <88eefabc-fdd7-cdb4-0e11-b2803df131ce@gmail.com>

Hello,

I have a very simple program that constantly allocates some byte arrays 
of 2 MB each (running with the latest jdk-13). I run it with:

-Xms20M
-Xmx20M
-Xmn10M
"-Xlog:heap*=debug" "-Xlog:gc*=debug" "-Xlog:ergo*=debug"


For example:

    public static void main(String[] args) {
        while (true) {
            System.out.println(invokeMe());
        }
    }

    public static int invokeMe() {
        int x = 1024;
        int factor = 2;
        byte[] allocation1 = new byte[factor * x * x];
        allocation1[2] = 3;
        byte[] allocation2 = new byte[factor * x * x];
        byte[] allocation3 = new byte[factor * x * x];
        byte[] allocation4 = new byte[factor * x * x];

        return Arrays.hashCode(allocation1) ^ Arrays.hashCode(allocation2)
            ^ Arrays.hashCode(allocation3) ^ Arrays.hashCode(allocation4);
    }

In logs, I see something that is puzzling me:


[0.066s][debug][gc,ergo       ] Request concurrent cycle initiation 
(requested by GC cause). GC cause: G1 Humongous Allocation
[0.066s][debug][gc,heap       ] GC(0) Heap before GC invocations=0 (full 
0): garbage-first heap   total 20480K, used 6908K [0x00000007fec00000, 
0x0000000800000000)
[0.066s][debug][gc,heap       ] GC(0)   region size 1024K, 1 young 
(1024K), 0 survivors (0K)

OK, so Heap Before: 1 young, 0 survivors.

Then:

[0.071s][info ][gc,heap        ] GC(0) Eden regions: 1->0(9)
[0.071s][info ][gc,heap        ] GC(0) Survivor regions: 0->1(2)
[0.071s][info ][gc,heap        ] GC(0) Old regions: 0->0

So the next cycle will have 9 Eden Regions and 2 Survivor ones (at least 
this is how I read the source code of where this is logged)

Then a GC(1) concurrent cycle happens:

[0.071s][info ][gc             ] GC(1) Concurrent Cycle

And the next cycle is where I fail to understand the logging:

[0.076s][debug][gc,heap           ] GC(2) Heap before GC invocations=2 
(full 0): garbage-first heap   total 20480K, used 7148K 
[0x00000007fec00000, 0x0000000800000000)
[0.076s][debug][gc,heap           ] GC(2)   region size 1024K, 2 young 
(2048K), 1 survivors (1024K)

How come 2 young, 1 survivor, when the previous cycle said 9 Eden, 2 
Survivor?

Thank you,
Eugene.

From lvfangmin at gmail.com  Sun Mar  8 18:17:44 2020
From: lvfangmin at gmail.com (Fangmin Lv)
Date: Sun, 8 Mar 2020 11:17:44 -0700
Subject: High young GC pause time due to G1GC long termination attempts with
 OpenJDK 11
Message-ID: <CAGJhZLJXK56y1rgnR_MhC=Zy3y0xTdM_z2QWqXgg4j81Td=Yag@mail.gmail.com>

Hi guys,

I need a GC expert to help me understand a strange G1GC behavior here. It
has been causing trouble for us for a while, and I've spent more than a week
investigating it without fully addressing it. I'd really appreciate some
help here.

Our code is running on OpenJDK 11 with the following GC settings:


-Xms144179m -Xmx144179m -XX:+AlwaysPreTouch -XX:+UseG1GC
> -XX:MaxGCPauseMillis=500 -Xlog:gc*=debug:file=/tmp/zeus.gc.log


And it's running on a machine with 256GB of memory and 80 processors.

There are long GC pauses (> 2s) from time to time during young GC, and most
of the time is spent in "Evacuating Collection Set". By enabling the GC
debug log, I found most of the time was spent in the "Object Copy" phase:


...
[74835.843s][debug  ][gc,phases         ] GC(43)     Object Copy (ms):
    Min: 2271.3, Avg: 2287.8, Max: 2401.6, Diff: 130.2, Sum: 86935.4,
Workers: 38
[74835.843s][debug  ][gc,phases         ] GC(43)     Termination (ms):
    Min:  3.7, Avg:  4.2, Max:  5.0, Diff:  1.3, Sum: 160.7, Workers: 38
[74835.843s][debug  ][gc,phases         ] GC(43)       Termination
Attempts:     Min: 3523, Avg: 3654.5, Max: 3782, Diff: 259, Sum: 138872,
Workers: 38
...
[74835.843s][info      ][gc,heap             ] GC(43) Eden regions:
1261->0(1258)
[74835.843s][info      ][gc,heap             ] GC(43) Survivor regions:
90->93(169)
[74835.843s][info      ][gc,heap             ] GC(43) Old regions: 266->266
[74835.843s][info      ][gc,heap             ] GC(43) Humongous regions:
7->7
[74835.843s][info      ][gc,metaspace   ] GC(43) Metaspace:
46892K->46892K(49152K)
[74835.843s][debug  ][gc,heap            ] GC(43) Heap after GC
invocations=44 (full 0): garbage-first heap   total 147652608K, used
11987833K [0x00007f0c34000000, 0x00007f2f68000000)
[74835.843s][debug  ][gc,heap            ] GC(43)   region size 32768K, 93
young (3047424K), 93 survivors (3047424K)
[74835.843s][debug  ][gc,heap            ] GC(43)  Metaspace       used
46892K, capacity 47403K, committed 47712K, reserved 49152K
[74835.843s][info      ][gc                     ] GC(43) Pause Young
(Normal) (G1 Evacuation Pause) 51966M->11706M(144192M) 2439.941ms
[74835.843s][info      ][gc,cpu              ] GC(43) User=14.54s Sys=1.86s
Real=2.44s


I looked into the JDK docs and other blogs shared online to understand
G1GC, and tried various settings like reducing the minimal young gen size
with -XX:G1NewSizePercent=1, -XX:ParallelGCThreads=64, PLAB size, etc., but
they didn't address the issue.

Some blogs mention that things like memory swapping or disk IO might cause
a long Object Copy, but there is plenty of memory in the system, and we
didn't see swap kick in from atop.

Since I could not find the answer online, I tried to read the code and add
instrumentation logs to understand the GC behavior (you can find the changed
code in the attachment), and found that the time is spent stealing work from
other task queues for a long time; the full log is attached.

No single GC thread got caught in root scanning or CPU starvation; all of
them finished trim_queue within a few ms and then started the long
steal_and_trim_queue loop. G1ParScanThreadState::do_oop_partial_array deals
with arrays specially: it will only scan a subset of an object array and
push the remainder back if the array is bigger than ParGCArrayScanChunk,
and we do sometimes have (temporary) array objects larger than 500k. I
thought this might be the issue, but we still see the long stealing behavior
even after raising this value from 50 to 500, and we saw cases where there
is no large array but the problem still occurs.

By adding more logs, I saw that all the GC threads seemed to steal the same
task, which was then stolen by another; the tasks should be different, but
the behavior seems weird:

[24632.803s][debug  ][gc,ergo           ] GC(11) worker 1 stealed from
queue 32, its size was 1
[24632.803s][debug  ][gc,ergo           ] GC(11) worker 28 stealed from
queue 1, its size was 2
[24632.803s][debug  ][gc,ergo           ] GC(11) worker 24 stealed from
queue 28, its size was 1
[24632.803s][debug  ][gc,ergo           ] GC(11) worker 4 stealed from
queue 24, its size was 1
[24632.804s][debug  ][gc,ergo           ] GC(11) worker 5 stealed from
queue 4, its size was 1
[24632.804s][debug  ][gc,ergo           ] GC(11) worker 15 stealed from
queue 5, its size was 1
[24632.804s][debug  ][gc,ergo           ] GC(11) worker 37 stealed from
queue 15, its size was 1
[24632.804s][debug  ][gc,ergo           ] GC(11) worker 16 stealed from
queue 37, its size was 1
[24632.804s][debug  ][gc,ergo           ] GC(11) worker 7 stealed from
queue 16, its size was 1
[24632.804s][debug  ][gc,ergo           ] GC(11) worker 2 stealed from
queue 7, its size was 1
[24632.804s][debug  ][gc,ergo           ] GC(11) worker 29 stealed from
queue 2, its size was 1
[24632.804s][debug  ][gc,ergo           ] GC(11) worker 15 stealed from
queue 29, its size was 1

I tried to follow the code more and found that G1ScanEvacuatedObjClosure
might push the obj onto the task queue after copying it to survivor space,
but I'm not able to work out why this could keep happening for a few
thousand steals. Please give me some insight into why this happened; are
there any known scenarios that could trigger this?

One thing I'm trying is to cap the stealing behavior: skip the steal if the
task queue has fewer than 5 tasks. This might reduce the contention and CPU
cost due to busy stealing from those GC threads, but I'm not sure it fully
addresses the issue here; I'll update based on the testing.

Thanks again for taking the time to go through this; I really appreciate
any help!

Thanks,
Fangmin
-------------- next part --------------
A non-text attachment was scrubbed...
Name: gc.log.gz
Type: application/x-gzip
Size: 7849 bytes
Desc: not available
URL: <https://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20200308/aa31ac13/gc.log.gz>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: instrumented_g1gc.diff
Type: application/octet-stream
Size: 6253 bytes
Desc: not available
URL: <https://mail.openjdk.java.net/pipermail/hotspot-gc-use/attachments/20200308/aa31ac13/instrumented_g1gc.diff>

From stefan.johansson at oracle.com  Mon Mar  9 08:14:07 2020
From: stefan.johansson at oracle.com (Stefan Johansson)
Date: Mon, 9 Mar 2020 09:14:07 +0100
Subject: Minor question about logging
In-Reply-To: <88eefabc-fdd7-cdb4-0e11-b2803df131ce@gmail.com>
References: <88eefabc-fdd7-cdb4-0e11-b2803df131ce@gmail.com>
Message-ID: <051eeaea-45d4-d761-2549-756a8f7b5aca@oracle.com>

Hi Eugeniu,

The second GC is most likely also caused by having many humongous 
allocations. This was the cause of GC(0) as well, and since your 
application only allocates large (humongous) objects it will not use a 
lot of space for other objects.

If you are not familiar with the concept of humongous objects in G1, 
these are objects that are too large to be allocated in the normal fast 
path. They are instead allocated in separate regions. This requires some 
special handling, and that's the reason we trigger GCs more quickly if a 
lot of such objects are allocated. In your setup the region size will be 
1MB, so all objects larger than 500KB will be considered humongous.
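
As a tiny sketch of that arithmetic (assuming the usual rule of thumb that
anything of at least half a region is treated as humongous; the class name
is made up for illustration):

    public class HumongousCheck {
        public static void main(String[] args) {
            long regionSize = 1024 * 1024;        // 1MB regions with a 20M heap
            long threshold  = regionSize / 2;     // ~500KB, half a region
            long allocation = 2L * 1024 * 1024;   // byte[2 * 1024 * 1024] from the example
            // The 2MB arrays are well above the threshold, so each allocation
            // goes straight to humongous regions instead of the young gen.
            System.out.println("humongous? " + (allocation >= threshold));
        }
    }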

Hope this helps,
StefanJ



From stefan.johansson at oracle.com  Mon Mar  9 08:33:40 2020
From: stefan.johansson at oracle.com (Stefan Johansson)
Date: Mon, 9 Mar 2020 09:33:40 +0100
Subject: High young GC pause time due to G1GC long termination attempts
 with OpenJDK 11
In-Reply-To: <CAGJhZLJXK56y1rgnR_MhC=Zy3y0xTdM_z2QWqXgg4j81Td=Yag@mail.gmail.com>
References: <CAGJhZLJXK56y1rgnR_MhC=Zy3y0xTdM_z2QWqXgg4j81Td=Yag@mail.gmail.com>
Message-ID: <db9470f8-40e0-8ef7-b663-0fce27634b28@oracle.com>

Hi Fangmin,

It is very hard to say for sure what's causing this, but since you are 
seeing high system times, something outside the JVM might be causing the 
long pause. Since you ruled out swapping, another possible cause can be 
Transparent Hugepages; there is a short section about this here:
https://docs.oracle.com/javase/9/gctuning/garbage-first-garbage-collector-tuning.htm#GUID-8D9B2530-E370-4B8B-8ADD-A43674FC6658

I suggest that you check whether transparent hugepages are configured on 
your system. If they are, turn them off and re-run your test to see if it 
helps.
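
As a rough sketch of that check (it assumes the usual Linux sysfs location
for the THP setting; adjust the path for your distribution):

    import java.nio.file.Files;
    import java.nio.file.Path;

    public class ThpCheck {
        public static void main(String[] args) throws Exception {
            // On most Linux systems the active THP mode is shown in brackets,
            // e.g. "[always] madvise never".
            Path thp = Path.of("/sys/kernel/mm/transparent_hugepage/enabled");
            System.out.println(Files.exists(thp)
                    ? Files.readString(thp).trim()
                    : "THP interface not found");
        }
    }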

Regarding your finding around the stealing, there can of course be 
something problematic there as well, but before going down that path I 
would like to rule out transparent hugepages first.

Thanks,
Stefan


From thomas.schatzl at oracle.com  Mon Mar  9 10:32:43 2020
From: thomas.schatzl at oracle.com (Thomas Schatzl)
Date: Mon, 9 Mar 2020 11:32:43 +0100
Subject: Minor question about logging
In-Reply-To: <88eefabc-fdd7-cdb4-0e11-b2803df131ce@gmail.com>
References: <88eefabc-fdd7-cdb4-0e11-b2803df131ce@gmail.com>
Message-ID: <8fc9ab6e-972a-21dd-4765-8d11697bcae0@oracle.com>

Hi,

On 08.03.20 06:01, Eugeniu Rabii wrote:
> Hello,
> 
> I have a very simple program that constantly allocates some byte arrays 
> (of 2 MB) each (running with the latest jdk-13). I run it with :
> 
> -Xms20M
> -Xmx20M
> -Xmn10M
> "-Xlog:heap*=debug" "-Xlog:gc*=debug" "-Xlog:ergo*=debug"
> 
> 
> For example:
> 
>      public static void main(String[] args) {
[...]
>      }
> 
> In logs, I see something that is puzzling me:
> 
> 
> [0.066s][debug][gc,ergo       ] Request concurrent cycle initiation 
> (requested by GC cause). GC cause: G1 Humongous Allocation
> [0.066s][debug][gc,heap       ] GC(0) Heap before GC invocations=0 (full 
> 0): garbage-first heap   total 20480K, used 6908K [0x00000007fec00000, 
> 0x0000000800000000)
> [0.066s][debug][gc,heap       ] GC(0)   region size 1024K, 1 young 
> (1024K), 0 survivors (0K)
> 
> OK, so Heap Before: 1 young, 0 survivors.
> 
> Then:
> 
> [0.071s][info ][gc,heap        ] GC(0) Eden regions: 1->0(9)
> [0.071s][info ][gc,heap        ] GC(0) Survivor regions: 0->1(2)
> [0.071s][info ][gc,heap        ] GC(0) Old regions: 0->0
> 
> So the next cycle will have 9 Eden Regions and 2 Survivor ones (at least 
> this is how I read the source code of where this is logged)

The nine eden regions are estimates based on pause time and some factors 
like estimated allocation rate. The two survivor ones are actually 
survivor regions allowed in the current GC (allowed survivor region 
length is always determined at the start of GC). It should probably read

Survivor regions: 0(2)->1

In this case the limit for the entire young gen is G1MaxNewSizePercent, 
which by default is 60%; 60% of 20M is 12M, which is distributed across 
Survivor and Eden according to SurvivorRatio (=8).
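
Spelling that arithmetic out as a back-of-the-envelope sketch (the simple
"survivor = young / SurvivorRatio, rounded up" split is a simplification;
the real sizing code is more involved):

    public class YoungGenEstimate {
        public static void main(String[] args) {
            long heapBytes  = 20L * 1024 * 1024;                 // -Xmx20M
            long regionSize = 1024 * 1024;                       // 1MB regions
            long maxYoung   = heapBytes * 60 / 100 / regionSize; // G1MaxNewSizePercent=60 -> 12
            long survivor   = (maxYoung + 7) / 8;                // SurvivorRatio=8 -> 2
            long eden       = maxYoung - survivor;               // -> 10
            System.out.println("max young=" + maxYoung
                    + " survivor target=" + survivor + " eden target=" + eden);
        }
    }

That roughly lines up with the Eden (9)/(10) and Survivor (2) targets seen
in the logs in this thread.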

> 
> Then a GC(1) concurrent cycle happens:
> 
> [0.071s][info ][gc             ] GC(1) Concurrent Cycle
> 
> And the next cycle is where I fail to understand the logging:
> 
> [0.076s][debug][gc,heap           ] GC(2) Heap before GC invocations=2 
> (full 0): garbage-first heap   total 20480K, used 7148K 
> [0x00000007fec00000, 0x0000000800000000)
> [0.076s][debug][gc,heap           ] GC(2)   region size 1024K, 2 young 
> (2048K), 1 survivors (1024K)
> 
> How come 2 young, 1 survivors? When the previous cycle said 9 Eden, 2 
> Survivor.

Humongous allocations can make initial estimations about young gen size 
obsolete.

I.e. the humongous allocations (directly into old gen) made old gen 
occupancy cross the current threshold to start old gen reclamation. 
Otherwise the humongous objects would have filled up the entire heap, 
causing full gc.

Thanks,
   Thomas

From eugen.rabii at gmail.com  Mon Mar  9 10:32:43 2020
From: eugen.rabii at gmail.com (Eugeniu Rabii)
Date: Mon, 9 Mar 2020 06:32:43 -0400
Subject: Minor question about logging
In-Reply-To: <051eeaea-45d4-d761-2549-756a8f7b5aca@oracle.com>
References: <88eefabc-fdd7-cdb4-0e11-b2803df131ce@gmail.com>
 <051eeaea-45d4-d761-2549-756a8f7b5aca@oracle.com>
Message-ID: <2ab61625-e308-a21f-421a-668afbc3f638@gmail.com>

Hello Stefan,

I know these are humongous allocations; the 2 MB was chosen on purpose 
(I could have chosen 1 MB too, I know).

The first GC (0 - a young collection) is actually the result of the 
allocation of those humongous objects.

Because the humongous allocation happened, a concurrent GC was triggered 
(GC (1)), which triggers the young collection first (GC (0)); these are 
concepts I do seem to get.

My question here is different. After the young collection is done, there 
are entries like this in the logs:

[0.071s][info ][gc,heap        ] GC(0) Eden regions: 1->0(9)
[0.071s][info ][gc,heap        ] GC(0) Survivor regions: 0->1(2)

The way I read it is: there was 1 Eden region before the collection; 
everything was cleared from it (that zero), and the heuristics just said 
that the next cycle should have 9 Eden regions.

The same explanation applies to the Survivor regions.  As such there would 
be: 11 young, 2 survivor.


I am expecting the third cycle (GC (2)) to start with:

[0.076s][debug][gc,heap           ] GC(2) Heap before GC invocations=2 
(full 0): .....

[0.076s][debug][gc,heap           ] GC(2)   region size 1024K, 11 young 
(2048K), 2 survivors (1024K)

Instead it prints:

[0.076s][debug][gc,heap           ] GC(2)   region size 1024K, 2 young 
(2048K), 1 survivors (1024K)


Does this make it a better explanation now?

Thank you,

Eugene.



From ecki at zusammenkunft.net  Mon Mar  9 10:40:08 2020
From: ecki at zusammenkunft.net (Bernd Eckenfels)
Date: Mon, 9 Mar 2020 10:40:08 +0000
Subject: Minor question about logging
In-Reply-To: <2ab61625-e308-a21f-421a-668afbc3f638@gmail.com>
References: <88eefabc-fdd7-cdb4-0e11-b2803df131ce@gmail.com>
 <051eeaea-45d4-d761-2549-756a8f7b5aca@oracle.com>,
 <2ab61625-e308-a21f-421a-668afbc3f638@gmail.com>
Message-ID: <AM6PR03MB438976D89B5A64286557A136FFFE0@AM6PR03MB4389.eurprd03.prod.outlook.com>

Hello Eugene,

As far as I know, the numbers in brackets are the current capacity, not the heuristics for the next iteration. But even if it is the heuristics' decision, the GC cannot force regions to be filled if you do nearly no allocation.

BTW, can you comment on the underlying problem? Do you really want to use G1 with such a small heap, or are you just trying to debug a specific problem? If so, can you describe it?

Regards,
Bernd


--
http://bernd.eckenfels.net
________________________________
From: hotspot-gc-use <hotspot-gc-use-bounces at openjdk.java.net> on behalf of Eugeniu Rabii <eugen.rabii at gmail.com>
Sent: Monday, March 9, 2020 11:32:43 AM
To: Stefan Johansson <stefan.johansson at oracle.com>; hotspot-gc-use at openjdk.java.net <hotspot-gc-use at openjdk.java.net>
Subject: Re: Minor question about logging


From stefan.johansson at oracle.com  Mon Mar  9 10:41:32 2020
From: stefan.johansson at oracle.com (Stefan Johansson)
Date: Mon, 9 Mar 2020 11:41:32 +0100
Subject: Minor question about logging
In-Reply-To: <2ab61625-e308-a21f-421a-668afbc3f638@gmail.com>
References: <88eefabc-fdd7-cdb4-0e11-b2803df131ce@gmail.com>
 <051eeaea-45d4-d761-2549-756a8f7b5aca@oracle.com>
 <2ab61625-e308-a21f-421a-668afbc3f638@gmail.com>
Message-ID: <d423f8e7-b5b1-d0cd-6731-980cc1bd3ae6@oracle.com>

Hi Eugeniu,

I should have been clearer that your understanding of the numbers is 
correct. But as Thomas also responded, these are estimates and we might 
have to start a GC due to other circumstances.

See more below.

On 2020-03-09 11:32, Eugeniu Rabii wrote:
> Hello Stefan,
> 
> I know these are humongous allocations, the 2 MB was chosen on purpose 
> (I could have chose 1 MB too, I know).
> 
> The first GC (0 - young collection) is actually the result of the 
> allocation of those humongous Objects.
> 
> Because the humongous allocation happened, a concurrent GC was triggered 
> (GC (1)) that triggers the young collection first (GC (0)); these are 
> concepts I do seem do get.
> 
> My question here is different. After the young collection is done, there 
> are entries like this in logs:
> 
> [0.071s][info ][gc,heap        ] GC(0) Eden regions: 1->0(9)
> [0.071s][info ][gc,heap        ] GC(0) Survivor regions: 0->1(2)
> 
> The way I read it it is: there were 1 Eden Regions before the 
> collection; everything was cleared from them (that zero) and the 
> heuristics just said that the next cycle should have 9 Eden Regions.
Correct, but this is an estimate and we might have to GC before we fill 
up the 9 young regions, for example if there are a lot of humongous 
allocations. The humongous allocations are, as I mentioned, treated 
differently and aren't considered young.

> 
> Same explanation would happen for Survivor Regions.  As such there would 
> be : 11 young, 2 survivor.
> 
> 
> I am expecting the third cycle (GC (2)) to start with :
> 
> 
> [0.076s][debug][gc,heap           ] GC(2) Heap before GC invocations=2 
> (full 0): .....
> 
> [0.076s][debug][gc,heap           ] GC(2)   region size 1024K, 11 young 
> (2048K), 2 survivors (1024K)
> 
> 
> Instead it prints:
> 
> 
> [0.076s][debug][gc,heap           ] GC(2)   region size 1024K, 2 young 
> (2048K), 1 survivors (1024K)
> 
> 
> Does this makes it a better explanation now?
Your expectation is correct, and if GC(2) isn't caused by a humongous 
allocation, this is unexpected behavior. It would help a lot if you could 
post more of your log, especially the cause of GC(2).

Thanks,
Stefan

> 
> Thank you,
> 
> Eugene.
> 
> 

From thomas.schatzl at oracle.com  Mon Mar  9 10:54:11 2020
From: thomas.schatzl at oracle.com (Thomas Schatzl)
Date: Mon, 9 Mar 2020 11:54:11 +0100
Subject: Minor question about logging
In-Reply-To: <8fc9ab6e-972a-21dd-4765-8d11697bcae0@oracle.com>
References: <88eefabc-fdd7-cdb4-0e11-b2803df131ce@gmail.com>
 <8fc9ab6e-972a-21dd-4765-8d11697bcae0@oracle.com>
Message-ID: <73dd3cc1-f249-793f-ee21-243cdb9d464b@oracle.com>

Hi,

On 09.03.20 11:32, Thomas Schatzl wrote:
> Hi,
> 
> On 08.03.20 06:01, Eugeniu Rabii wrote:
>> Hello,
>>
>> I have a very simple program that constantly allocates some byte 
>> arrays (of 2 MB) each (running with the latest jdk-13). I run it with :
>>
[...]
>>
>> So the next cycle will have 9 Eden Regions and 2 Survivor ones (at 
>> least this is how I read the source code of where this is logged)
> 
> The nine eden regions are estimates based on pause time and some factors 
> like estimated allocation rate. The two survivor ones are actually 
> survivor regions allowed in the current GC (allowed survivor region 
> length is always determined at the start of GC). It should probably read
> 
> Survivor regions: 0(2)->1
> 
> In this case the limit for entire young gen is G1MaxNewSizePercent, 
> which by default is 60%; 60% of 20M is 12M, which is distributed across 
> Survivor and Eden according to SurvivorRatio (=8).

Note that in my local runs I always get

Eden regions: 1->0(10)
Survivor regions: 0->0(2)

In the stable state, that is, after initial allocations in young gen by 
the runtime have been moved into old gen.

>> [0.076s][debug][gc,heap           ] GC(2) Heap before GC invocations=2 
>> (full 0): garbage-first heap   total 20480K, used 7148K 
>> [0x00000007fec00000, 0x0000000800000000)
>> [0.076s][debug][gc,heap           ] GC(2)   region size 1024K, 2 young 
>> (2048K), 1 survivors (1024K)
>>
>> How come 2 young, 1 survivors? When the previous cycle said 9 Eden, 2 
>> Survivor.
> 
> Humongous allocations can make initial estimations about young gen size 
> obsolete.
> 
> I.e. the humongous allocations (directly into old gen) made old gen 
> occupancy cross the current threshold to start old gen reclamation. 
> Otherwise the humongous objects would have filled up the entire heap, 
> causing full gc.
> 

And actually answering the question: the number of young regions at the 
time of GC (e.g. "2") depends on when these extra GCs occur, i.e. how 
full eden/survivor were at that time.

The GC log also tells you that the root cause for the GCs has been a 
humongous allocation, i.e.

[info][gc,start] GC(xyz) Pause Young (Concurrent Start) (G1 Humongous 
Allocation)

Other log output before that (with the settings you gave) told you that 
the reason for the "concurrent cycle request" (read: gc) has been old 
gen occupancy being higher than a threshold, e.g.

[...][gc,ergo,ihop] Request concurrent cycle initiation (occupancy 
higher than threshold) occupancy: ...B allocation request: ...B 
threshold ...B (45.00) source: concurrent humongous allocation
[...][gc,ergo] Request concurrent cycle initiation (requested by GC 
cause). GC cause: G1 Humongous Allocation

I.e. the humongous objects caused the gc in order to try to clean them 
out from old gen.

Thanks,
   Thomas

From eugen.rabii at gmail.com  Mon Mar  9 14:08:43 2020
From: eugen.rabii at gmail.com (Eugeniu Rabii)
Date: Mon, 9 Mar 2020 10:08:43 -0400
Subject: Minor question about logging
In-Reply-To: <d423f8e7-b5b1-d0cd-6731-980cc1bd3ae6@oracle.com>
References: <88eefabc-fdd7-cdb4-0e11-b2803df131ce@gmail.com>
 <051eeaea-45d4-d761-2549-756a8f7b5aca@oracle.com>
 <2ab61625-e308-a21f-421a-668afbc3f638@gmail.com>
 <d423f8e7-b5b1-d0cd-6731-980cc1bd3ae6@oracle.com>
Message-ID: <a35d897d-317d-27ec-1282-97ce041606db@gmail.com>

Hello Stefan,

I actually should have been clearer myself about the specific question I 
have; I am sorry for that.

Comments inline.

On 3/9/20 6:41 AM, Stefan Johansson wrote:
> Hi Eugeniu,
>
> I should have been more clear around that your understanding of the 
> numbers are correct. But as Thomas also responded, these are estimates 
> and we might have to start a GC due to other circumstances.
>
> See more below.
>
> On 2020-03-09 11:32, Eugeniu Rabii wrote:
>> Hello Stefan,
>>
>> I know these are humongous allocations, the 2 MB was chosen on 
>> purpose (I could have chose 1 MB too, I know).
>>
>> The first GC (0 - young collection) is actually the result of the 
>> allocation of those humongous Objects.
>>
>> Because the humongous allocation happened, a concurrent GC was 
>> triggered (GC (1)) that triggers the young collection first (GC (0)); 
>> these are concepts I do seem do get.
>>
>> My question here is different. After the young collection is done, 
>> there are entries like this in logs:
>>
>> [0.071s][info ][gc,heap        ] GC(0) Eden regions: 1->0(9)
>> [0.071s][info ][gc,heap        ] GC(0) Survivor regions: 0->1(2)
>>
>> The way I read it it is: there were 1 Eden Regions before the 
>> collection; everything was cleared from them (that zero) and the 
>> heuristics just said that the next cycle should have 9 Eden Regions.
> Correct, but this is an estimate and we might have to GC before we 
> fill up the 9 young regions. For example if there are a lot of 
> humongous allocations. The humongous allocations are as I mentioned 
> treated differently and aren't considered young.
>

I understand these are estimates; I also understand these could be 
ignored. In fact, they are, since GC (2) is _again_ a humongous allocation.


>>
>> Same explanation would happen for Survivor Regions. As such there 
>> would be: 11 young, 2 survivor.
>>
>>
>> I am expecting the third cycle (GC (2)) to start with :
>>
>>
>> [0.076s][debug][gc,heap           ] GC(2) Heap before GC 
>> invocations=2 (full 0): .....
>>
>> [0.076s][debug][gc,heap           ] GC(2)   region size 1024K, 11 
>> young (2048K), 2 survivors (1024K)
>>
>>
>> Instead it prints:
>>
>>
>> [0.076s][debug][gc,heap           ] GC(2)   region size 1024K, 2 
>> young (2048K), 1 survivors (1024K)
>>
>>
>> Does this make it a better explanation now?
> Your expectation is correct, and if GC(2) isn't caused by a humongous 
> allocation this is unexpected behavior. It would help a lot if you 
> could post more of your log, especially the cause for GC(2).

Logs attached, but you are correct, GC (2) is still a humongous allocation:

[0.090s][debug][gc,ergo           ] Request concurrent cycle initiation 
(requested by GC cause). GC cause: G1 Humongous Allocation


And now my question: what I REALLY wish for (not sure if possible) is a 
log statement in GC(1) of how young regions were adjusted because of the 
humongous allocation - this is the part I was missing.

I sort of already realized before posting that those are only estimates; 
I hoped for some kind of a hint in the logs.


>
> Thanks,
> Stefan
>
>>
>> Thank you,
>>
>> Eugene.
>>
>>
>> On 3/9/20 4:14 AM, Stefan Johansson wrote:
>>> Hi Eugeniu,
>>>
>>> The second GC is most likely also caused by having many humongous 
>>> allocations. This was the cause of GC(0) as well, and since your 
>>> application only allocates large (humongous) objects it will not use 
>>> a lot of space for other objects.
>>>
>>> If you are not familiar with the concept of humongous objects in G1, 
>>> these are objects that are too large to be allocated in the normal 
>>> fast path. They are instead allocated in separate regions. This 
>>> requires some special handling and that's the reason we trigger GCs 
>>> more quickly if a lot of such objects are allocated. In your setup 
>>> the region size will be 1MB so all objects larger than 500KB will be 
>>> considered humongous.
>>>
>>> Hope this helps,
>>> StefanJ
>>>
>>>
>>> On 2020-03-08 06:01, Eugeniu Rabii wrote:
>>>> Hello,
>>>>
>>>> I have a very simple program that constantly allocates some byte 
>>>> arrays (of 2 MB) each (running with the latest jdk-13). I run it 
>>>> with :
>>>>
>>>> -Xms20M
>>>> -Xmx20M
>>>> -Xmn10M
>>>> "-Xlog:heap*=debug" "-Xlog:gc*=debug" "-Xlog:ergo*=debug"
>>>>
>>>>
>>>> For example:
>>>>
>>>>     public static void main(String[] args) {
>>>>         while (true) {
>>>>             System.out.println(invokeMe());
>>>>         }
>>>>     }
>>>>
>>>>     public static int invokeMe() {
>>>>         int x = 1024;
>>>>         int factor = 2;
>>>>         byte[] allocation1 = new byte[factor * x * x];
>>>>         allocation1[2] = 3;
>>>>         byte[] allocation2 = new byte[factor * x * x];
>>>>         byte[] allocation3 = new byte[factor * x * x];
>>>>         byte[] allocation4 = new byte[factor * x * x];
>>>>
>>>>         return Arrays.hashCode(allocation1) ^ Arrays.hashCode(allocation2)
>>>>             ^ Arrays.hashCode(allocation3) ^ Arrays.hashCode(allocation4);
>>>>     }
>>>>
>>>> In logs, I see something that is puzzling me:
>>>>
>>>>
>>>> [0.066s][debug][gc,ergo       ] Request concurrent cycle initiation 
>>>> (requested by GC cause). GC cause: G1 Humongous Allocation
>>>> [0.066s][debug][gc,heap       ] GC(0) Heap before GC invocations=0 
>>>> (full 0): garbage-first heap   total 20480K, used 6908K 
>>>> [0x00000007fec00000, 0x0000000800000000)
>>>> [0.066s][debug][gc,heap       ] GC(0)   region size 1024K, 1 young 
>>>> (1024K), 0 survivors (0K)
>>>>
>>>> OK, so Heap Before: 1 young, 0 survivors.
>>>>
>>>> Then:
>>>>
>>>> [0.071s][info ][gc,heap        ] GC(0) Eden regions: 1->0(9)
>>>> [0.071s][info ][gc,heap        ] GC(0) Survivor regions: 0->1(2)
>>>> [0.071s][info ][gc,heap        ] GC(0) Old regions: 0->0
>>>>
>>>> So the next cycle will have 9 Eden Regions and 2 Survivor ones (at 
>>>> least this is how I read the source code of where this is logged)
>>>>
>>>> Then a GC(1) concurrent cycle happens:
>>>>
>>>> [0.071s][info ][gc             ] GC(1) Concurrent Cycle
>>>>
>>>> And the next cycle is where I fail to understand the logging:
>>>>
>>>> [0.076s][debug][gc,heap           ] GC(2) Heap before GC 
>>>> invocations=2 (full 0): garbage-first heap   total 20480K, used 
>>>> 7148K [0x00000007fec00000, 0x0000000800000000)
>>>> [0.076s][debug][gc,heap           ] GC(2)   region size 1024K, 2 
>>>> young (2048K), 1 survivors (1024K)
>>>>
>>>> How come 2 young, 1 survivors? When the previous cycle said 9 Eden, 
>>>> 2 Survivor.
>>>>
>>>> Thank you,
>>>> Eugene.
>>>>
>>>> _______________________________________________
>>>> hotspot-gc-use mailing list
>>>> hotspot-gc-use at openjdk.java.net
>>>> https://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use
>>>>
-------------- next part --------------
[0.005s][debug][ergo] ThreadLocalHandshakes enabled.
[0.011s][info ][gc,heap] Heap region size: 1M
[0.011s][debug][gc,heap] Minimum heap 20971520  Initial heap 20971520  Maximum heap 20971520
[0.011s][debug][gc     ] ConcGCThreads: 3 offset 22
[0.011s][debug][gc     ] ParallelGCThreads: 10
[0.011s][debug][gc     ] Initialize mark stack with 4096 chunks, maximum 16384
[0.011s][debug][gc,ergo,heap] Expand the heap. requested expansion amount: 20971520B expansion amount: 20971520B
[0.011s][debug][gc,ihop     ] Target occupancy update: old: 0B, new: 20971520B
[0.011s][debug][gc,ergo,refine] Initial Refinement Zones: green: 10, yellow: 30, red: 50, min yellow size: 20
[0.011s][info ][gc            ] Using G1
[0.011s][info ][gc,heap,coops ] Heap address: 0x00000007fec00000, size: 20 MB, Compressed Oops mode: Zero based, Oop shift amount: 3
[0.012s][info ][gc,cds        ] Mark closed archive regions in map: [0x00000007fff00000, 0x00000007fff76ff8]
[0.012s][info ][gc,cds        ] Mark open archive regions in map: [0x00000007ffe00000, 0x00000007ffe47ff8]
[0.019s][debug][cds,heap      ]   0x0000000800325f80 init field @ 112 = 0x00000007ffe43e60
[0.020s][info ][cds,heap      ] initialize_from_archived_subgraph java.util.ImmutableCollections$MapN 0x0000000800325f80
[0.029s][debug][cds,heap      ]   0x00000008002ea3b0 init field @ 116 = 0x00000007fff70040
[0.029s][info ][cds,heap      ] initialize_from_archived_subgraph java.lang.Integer$IntegerCache 0x00000008002ea3b0
[0.029s][info ][gc            ] Periodic GC disabled
[0.031s][debug][cds,heap      ]   0x00000008002d1988 init field @ 112 = 0x00000007ffe344c8
[0.031s][info ][cds,heap      ] initialize_from_archived_subgraph java.util.ImmutableCollections$SetN 0x00000008002d1988
[0.031s][debug][cds,heap      ]   0x00000008002c1810 init field @ 112 = 0x00000007ffe43e40
[0.031s][info ][cds,heap      ] initialize_from_archived_subgraph java.util.ImmutableCollections$ListN 0x00000008002c1810
[0.031s][debug][cds,heap      ]   0x000000080029fa00 init field @ 112 = 0x00000007ffe43e18
[0.031s][info ][cds,heap      ] initialize_from_archived_subgraph java.lang.module.Configuration 0x000000080029fa00
[0.032s][debug][cds,heap      ]   0x00000008002d11a0 init field @ 112 = 0x00000007ffe34150
[0.032s][info ][cds,heap      ] initialize_from_archived_subgraph jdk.internal.module.ArchivedModuleGraph 0x00000008002d11a0
[0.051s][debug][cds,heap      ]   0x0000000800003ed0 init field @ 112 = 0x00000007fff766d0
[0.051s][info ][cds,heap      ] initialize_from_archived_subgraph sun.util.locale.BaseLocale 0x0000000800003ed0
[0.061s][debug][gc,ergo,ihop  ] Request concurrent cycle initiation (occupancy higher than threshold) occupancy: 8388608B allocation request: 2097168B threshold: 9437184B (45.00) source: concurrent humongous allocation
[0.061s][debug][gc,ergo       ] Request concurrent cycle initiation (requested by GC cause). GC cause: G1 Humongous Allocation
[0.061s][debug][gc,heap       ] GC(0) Heap before GC invocations=0 (full 0): garbage-first heap   total 20480K, used 6908K [0x00000007fec00000, 0x0000000800000000)
[0.061s][debug][gc,heap       ] GC(0)   region size 1024K, 1 young (1024K), 0 survivors (0K)
[0.061s][debug][gc,heap       ] GC(0)  Metaspace       used 140K, capacity 4486K, committed 4864K, reserved 1056768K
[0.061s][debug][gc,heap       ] GC(0)   class space    used 6K, capacity 386K, committed 512K, reserved 1048576K
[0.061s][debug][gc,ergo       ] GC(0) Initiate concurrent cycle (concurrent cycle initiation requested)
[0.061s][info ][gc,start      ] GC(0) Pause Young (Concurrent Start) (G1 Humongous Allocation)
[0.061s][info ][gc,task       ] GC(0) Using 2 workers of 10 for evacuation
[0.061s][debug][gc,tlab       ] GC(0) TLAB totals: thrds: 2  refills: 3 max: 2 slow allocs: 3 max 3 waste: 53.6% gc: 336536B max: 209712B slow: 576B max: 576B fast: 0B max: 0B
[0.061s][debug][gc,age        ] GC(0) Desired survivor size 1048576 bytes, new threshold 15 (max threshold 15)
[0.061s][debug][gc,alloc,region] GC(0) Mutator Allocation stats, regions: 1, wasted size: 0B ( 0.0%)
[0.067s][debug][gc,ergo        ] GC(0) Running G1 Clear Card Table Task using 1 workers for 1 units of work for 2 regions.
[0.067s][debug][gc,ref         ] GC(0) Skipped phase1 of Reference Processing due to unavailable references
[0.068s][debug][gc,ref         ] GC(0) Skipped phase3 of Reference Processing due to unavailable references
[0.068s][debug][gc,ergo        ] GC(0) Running G1 Free Collection Set using 1 workers for collection set length 1
[0.068s][debug][gc,humongous   ] GC(0) Live humongous region 0 object size 2097168 start 0x00000007fec00000  with remset 0 code roots 0 is marked 1 reclaim candidate 0 type array 1
[0.068s][debug][gc,humongous   ] GC(0) Live humongous region 3 object size 2097168 start 0x00000007fef00000  with remset 0 code roots 0 is marked 1 reclaim candidate 0 type array 1
[0.068s][debug][gc,plab        ] GC(0) Young PLAB allocation: allocated: 196608B, wasted: 312B, unused: 912B, used: 195384B, undo waste: 0B, 
[0.068s][debug][gc,plab        ] GC(0) Young other allocation: region end waste: 0B, regions filled: 1, direct allocated: 41008B, failure used: 0B, failure wasted: 0B
[0.068s][debug][gc,plab        ] GC(0) Young sizing: calculated: 39072B, actual: 39072B
[0.068s][debug][gc,plab        ] GC(0) Old PLAB allocation: allocated: 0B, wasted: 0B, unused: 0B, used: 0B, undo waste: 0B, 
[0.068s][debug][gc,plab        ] GC(0) Old other allocation: region end waste: 0B, regions filled: 0, direct allocated: 0B, failure used: 0B, failure wasted: 0B
[0.068s][debug][gc,plab        ] GC(0) Old sizing: calculated: 0B, actual: 2064B
[0.068s][debug][gc,ihop        ] GC(0) Basic information (value update), threshold: 9437184B (45.00), target occupancy: 20971520B, current occupancy: 7311408B, recent allocation size: 6291456B, recent allocation duration: 50.70ms, recent old gen allocation rate: 124080777.18B/s, recent marking phase length: 0.00ms
[0.068s][debug][gc,ihop        ] GC(0) Adaptive IHOP information (value update), threshold: 9437184B (52.94), internal target occupancy: 17825792B, occupancy: 7311408B, additional buffer size: 10485760B, predicted old gen allocation rate: 248161554.36B/s, predicted marking phase length: 0.00ms, prediction active: false
[0.068s][debug][gc,ergo,refine ] GC(0) Updated Refinement Zones: green: 10, yellow: 30, red: 50
[0.068s][info ][gc,phases      ] GC(0)   Pre Evacuate Collection Set: 0.0ms
[0.068s][debug][gc,phases      ] GC(0)     Prepare TLABs: 0.0ms
[0.068s][debug][gc,phases      ] GC(0)     Choose Collection Set: 0.0ms
[0.068s][debug][gc,phases      ] GC(0)     Region Register: 0.0ms
[0.068s][debug][gc,phases      ] GC(0)     Clear Claimed Marks: 0.0ms
[0.068s][info ][gc,phases      ] GC(0)   Evacuate Collection Set: 5.9ms
[0.068s][debug][gc,phases      ] GC(0)     Ext Root Scanning (ms):   Min:  0.1, Avg:  0.2, Max:  0.2, Diff:  0.0, Sum:  0.3, Workers: 2
[0.068s][debug][gc,phases      ] GC(0)     Scan HCC (ms):            Min:  0.0, Avg:  0.0, Max:  0.0, Diff:  0.0, Sum:  0.0, Workers: 2
[0.068s][debug][gc,phases      ] GC(0)     Update RS (ms):           Min:  0.0, Avg:  2.8, Max:  5.6, Diff:  5.6, Sum:  5.6, Workers: 2
[0.068s][debug][gc,phases      ] GC(0)       Processed Buffers:        Min: 0, Avg:  0.5, Max: 1, Diff: 1, Sum: 1, Workers: 2
[0.068s][debug][gc,phases      ] GC(0)       Scanned Cards:            Min: 0, Avg: 74.0, Max: 148, Diff: 148, Sum: 148, Workers: 2
[0.068s][debug][gc,phases      ] GC(0)       Skipped Cards:            Min: 0, Avg:  0.0, Max: 0, Diff: 0, Sum: 0, Workers: 2
[0.068s][debug][gc,phases      ] GC(0)     Scan RS (ms):             Min:  0.0, Avg:  0.0, Max:  0.0, Diff:  0.0, Sum:  0.0, Workers: 2
[0.068s][debug][gc,phases      ] GC(0)       Scanned Cards:            Min: 0, Avg:  0.0, Max: 0, Diff: 0, Sum: 0, Workers: 2
[0.068s][debug][gc,phases      ] GC(0)       Claimed Cards:            Min: 0, Avg:  0.0, Max: 0, Diff: 0, Sum: 0, Workers: 2
[0.068s][debug][gc,phases      ] GC(0)       Skipped Cards:            Min: 0, Avg:  0.0, Max: 0, Diff: 0, Sum: 0, Workers: 2
[0.068s][debug][gc,phases      ] GC(0)     Code Root Scan (ms):      Min:  0.0, Avg:  0.0, Max:  0.0, Diff:  0.0, Sum:  0.0, Workers: 2
[0.068s][debug][gc,phases      ] GC(0)     Object Copy (ms):         Min:  0.0, Avg:  0.5, Max:  0.9, Diff:  0.8, Sum:  0.9, Workers: 2
[0.068s][debug][gc,phases      ] GC(0)       LAB Waste                 Min: 0, Avg: 156.0, Max: 312, Diff: 312, Sum: 312, Workers: 2
[0.068s][debug][gc,phases      ] GC(0)       LAB Undo Waste            Min: 0, Avg:  0.0, Max: 0, Diff: 0, Sum: 0, Workers: 2
[0.068s][debug][gc,phases      ] GC(0)     Termination (ms):         Min:  0.0, Avg:  2.4, Max:  4.8, Diff:  4.8, Sum:  4.8, Workers: 2
[0.068s][debug][gc,phases      ] GC(0)       Termination Attempts:     Min: 1, Avg: 62.0, Max: 123, Diff: 122, Sum: 124, Workers: 2
[0.068s][debug][gc,phases      ] GC(0)     GC Worker Other (ms):     Min:  0.0, Avg:  0.0, Max:  0.0, Diff:  0.0, Sum:  0.0, Workers: 2
[0.068s][debug][gc,phases      ] GC(0)     GC Worker Total (ms):     Min:  5.8, Avg:  5.8, Max:  5.8, Diff:  0.0, Sum: 11.7, Workers: 2
[0.068s][info ][gc,phases      ] GC(0)   Post Evacuate Collection Set: 0.3ms
[0.068s][debug][gc,phases      ] GC(0)     Code Roots Fixup: 0.0ms
[0.068s][debug][gc,phases      ] GC(0)     Clear Card Table: 0.1ms
[0.068s][debug][gc,phases      ] GC(0)     Reference Processing: 0.1ms
[0.068s][debug][gc,phases,ref  ] GC(0)       Reconsider SoftReferences: 0.0ms
[0.068s][debug][gc,phases,ref  ] GC(0)         SoftRef (ms):             skipped
[0.068s][debug][gc,phases,ref  ] GC(0)       Notify Soft/WeakReferences: 0.0ms
[0.068s][debug][gc,phases,ref  ] GC(0)         SoftRef (ms):             Min:  0.0, Avg:  0.0, Max:  0.0, Diff:  0.0, Sum:  0.0, Workers: 1
[0.068s][debug][gc,phases,ref  ] GC(0)         WeakRef (ms):             Min:  0.0, Avg:  0.0, Max:  0.0, Diff:  0.0, Sum:  0.0, Workers: 1
[0.068s][debug][gc,phases,ref  ] GC(0)         FinalRef (ms):            Min:  0.0, Avg:  0.0, Max:  0.0, Diff:  0.0, Sum:  0.0, Workers: 1
[0.068s][debug][gc,phases,ref  ] GC(0)         Total (ms):               Min:  0.0, Avg:  0.0, Max:  0.0, Diff:  0.0, Sum:  0.0, Workers: 1
[0.068s][debug][gc,phases,ref  ] GC(0)       Notify and keep alive finalizable: 0.0ms
[0.068s][debug][gc,phases,ref  ] GC(0)         FinalRef (ms):            skipped
[0.068s][debug][gc,phases,ref  ] GC(0)       Notify PhantomReferences: 0.0ms
[0.068s][debug][gc,phases,ref  ] GC(0)         PhantomRef (ms):          Min:  0.0, Avg:  0.0, Max:  0.0, Diff:  0.0, Sum:  0.0, Workers: 1
[0.068s][debug][gc,phases,ref  ] GC(0)       SoftReference:
[0.068s][debug][gc,phases,ref  ] GC(0)         Discovered: 0
[0.068s][debug][gc,phases,ref  ] GC(0)         Cleared: 0
[0.068s][debug][gc,phases,ref  ] GC(0)       WeakReference:
[0.068s][debug][gc,phases,ref  ] GC(0)         Discovered: 2
[0.068s][debug][gc,phases,ref  ] GC(0)         Cleared: 2
[0.068s][debug][gc,phases,ref  ] GC(0)       FinalReference:
[0.068s][debug][gc,phases,ref  ] GC(0)         Discovered: 0
[0.068s][debug][gc,phases,ref  ] GC(0)         Cleared: 0
[0.068s][debug][gc,phases,ref  ] GC(0)       PhantomReference:
[0.068s][debug][gc,phases,ref  ] GC(0)         Discovered: 1
[0.068s][debug][gc,phases,ref  ] GC(0)         Cleared: 1
[0.068s][debug][gc,phases      ] GC(0)     Weak Processing: 0.0ms
[0.068s][debug][gc,phases      ] GC(0)       JVMTI weak processing: 0.0ms
[0.068s][debug][gc,phases      ] GC(0)         Dead: 0
[0.068s][debug][gc,phases      ] GC(0)         Total: 0
[0.068s][debug][gc,phases      ] GC(0)       JFR weak processing: 0.0ms
[0.068s][debug][gc,phases      ] GC(0)         Dead: 0
[0.068s][debug][gc,phases      ] GC(0)         Total: 0
[0.068s][debug][gc,phases      ] GC(0)       JNI weak processing: 0.0ms
[0.068s][debug][gc,phases      ] GC(0)         Dead: 0
[0.068s][debug][gc,phases      ] GC(0)         Total: 0
[0.068s][debug][gc,phases      ] GC(0)       StringTable weak processing: 0.0ms
[0.068s][debug][gc,phases      ] GC(0)         Dead: 0
[0.068s][debug][gc,phases      ] GC(0)         Total: 4
[0.068s][debug][gc,phases      ] GC(0)       ResolvedMethodTable weak processing: 0.0ms
[0.068s][debug][gc,phases      ] GC(0)         Dead: 0
[0.068s][debug][gc,phases      ] GC(0)         Total: 0
[0.068s][debug][gc,phases      ] GC(0)       VM weak processing: 0.0ms
[0.068s][debug][gc,phases      ] GC(0)         Dead: 0
[0.068s][debug][gc,phases      ] GC(0)         Total: 3
[0.068s][debug][gc,phases      ] GC(0)     Merge Per-Thread State: 0.0ms
[0.068s][debug][gc,phases      ] GC(0)     Code Roots Purge: 0.0ms
[0.068s][debug][gc,phases      ] GC(0)     Redirty Cards: 0.0ms
[0.068s][debug][gc,phases      ] GC(0)     DerivedPointerTable Update: 0.0ms
[0.068s][debug][gc,phases      ] GC(0)     Free Collection Set: 0.0ms
[0.068s][debug][gc,phases      ] GC(0)     Humongous Reclaim: 0.0ms
[0.068s][debug][gc,phases      ] GC(0)     Start New Collection Set: 0.0ms
[0.068s][debug][gc,phases      ] GC(0)     Resize TLABs: 0.0ms
[0.068s][debug][gc,phases      ] GC(0)     Expand Heap After Collection: 0.0ms
[0.068s][info ][gc,phases      ] GC(0)   Other: 0.5ms
[0.068s][info ][gc,heap        ] GC(0) Eden regions: 1->0(9)
[0.068s][info ][gc,heap        ] GC(0) Survivor regions: 0->1(2)
[0.068s][info ][gc,heap        ] GC(0) Old regions: 0->0
[0.068s][info ][gc,heap        ] GC(0) Archive regions: 2->2
[0.068s][info ][gc,heap        ] GC(0) Humongous regions: 6->6
[0.068s][info ][gc,metaspace   ] GC(0) Metaspace: 140K->140K(1056768K)
[0.068s][debug][gc,heap        ] GC(0) Heap after GC invocations=1 (full 0): garbage-first heap   total 20480K, used 7140K [0x00000007fec00000, 0x0000000800000000)
[0.068s][debug][gc,heap        ] GC(0)   region size 1024K, 1 young (1024K), 1 survivors (1024K)
[0.068s][debug][gc,heap        ] GC(0)  Metaspace       used 140K, capacity 4486K, committed 4864K, reserved 1056768K
[0.068s][debug][gc,heap        ] GC(0)   class space    used 6K, capacity 386K, committed 512K, reserved 1048576K
[0.068s][info ][gc             ] GC(0) Pause Young (Concurrent Start) (G1 Humongous Allocation) 7M->6M(20M) 7.230ms
[0.068s][info ][gc,cpu         ] GC(0) User=0.01s Sys=0.01s Real=0.00s
[0.068s][info ][gc             ] GC(1) Concurrent Cycle
[0.069s][info ][gc,marking     ] GC(1) Concurrent Clear Claimed Marks
[0.069s][info ][gc,marking     ] GC(1) Concurrent Clear Claimed Marks 0.010ms
[0.069s][info ][gc,marking     ] GC(1) Concurrent Scan Root Regions
[0.069s][debug][gc,ergo        ] GC(1) Running G1 Root Region Scan using 1 workers for 1 work units.
[0.069s][info ][gc,marking     ] GC(1) Concurrent Scan Root Regions 0.387ms
[0.069s][info ][gc,marking     ] GC(1) Concurrent Mark (0.069s)
[0.069s][info ][gc,marking     ] GC(1) Concurrent Mark From Roots
[0.069s][info ][gc,task        ] GC(1) Using 3 workers of 3 for marking
[0.070s][debug][gc,stats       ] ---------------------------------------------------------------------
[0.070s][debug][gc,stats       ] Marking Stats, task = 0, calls = 1
[0.070s][debug][gc,stats       ]   Elapsed time = 0.87ms, Termination time = 0.01ms
[0.070s][debug][gc,stats       ]   Step Times (cum): num = 1, avg = 0.87ms, sd = 0.00ms max = 0.87ms, total = 0.87ms
[0.070s][debug][gc,stats       ]   Mark Stats Cache: hits 5641 misses 2 ratio 99.965
[0.070s][debug][gc,stats       ] ---------------------------------------------------------------------
[0.070s][debug][gc,stats       ] Marking Stats, task = 1, calls = 14
[0.070s][debug][gc,stats       ]   Elapsed time = 0.88ms, Termination time = 0.40ms
[0.070s][debug][gc,stats       ]   Step Times (cum): num = 14, avg = 0.06ms, sd = 0.07ms max = 0.24ms, total = 0.88ms
[0.070s][debug][gc,stats       ]   Mark Stats Cache: hits 1427 misses 3 ratio 99.790
[0.070s][debug][gc,stats       ] ---------------------------------------------------------------------
[0.070s][debug][gc,stats       ] Marking Stats, task = 2, calls = 12
[0.070s][debug][gc,stats       ]   Elapsed time = 0.86ms, Termination time = 0.30ms
[0.070s][debug][gc,stats       ]   Step Times (cum): num = 12, avg = 0.07ms, sd = 0.09ms max = 0.28ms, total = 0.86ms
[0.070s][debug][gc,stats       ]   Mark Stats Cache: hits 1122 misses 1 ratio 99.911
[0.070s][debug][gc,stats       ] ---------------------------------------------------------------------
[0.070s][info ][gc,marking     ] GC(1) Concurrent Mark From Roots 1.335ms
[0.070s][info ][gc,marking     ] GC(1) Concurrent Preclean
[0.070s][debug][gc,ref,start   ] GC(1) Preclean SoftReferences
[0.070s][debug][gc,ref         ] GC(1) Preclean SoftReferences 0.024ms
[0.070s][debug][gc,ref,start   ] GC(1) Preclean WeakReferences
[0.070s][debug][gc,ref         ] GC(1) Preclean WeakReferences 0.013ms
[0.070s][debug][gc,ref,start   ] GC(1) Preclean FinalReferences
[0.070s][debug][gc,ref         ] GC(1) Preclean FinalReferences 0.010ms
[0.070s][debug][gc,ref,start   ] GC(1) Preclean PhantomReferences
[0.070s][debug][gc,ref         ] GC(1) Preclean PhantomReferences 0.010ms
[0.070s][info ][gc,marking     ] GC(1) Concurrent Preclean 0.097ms
[0.070s][info ][gc,marking     ] GC(1) Concurrent Mark (0.069s, 0.070s) 1.465ms
[0.072s][info ][gc,start       ] GC(1) Pause Remark
[0.072s][debug][gc,phases,start] GC(1) Finalize Marking
[0.072s][debug][gc,stats       ] ---------------------------------------------------------------------
[0.072s][debug][gc,stats       ] Marking Stats, task = 0, calls = 42
[0.072s][debug][gc,stats       ]   Elapsed time = 0.03ms, Termination time = 0.02ms
[0.072s][debug][gc,stats       ]   Step Times (cum): num = 42, avg = 0.02ms, sd = 0.13ms max = 0.87ms, total = 0.91ms
[0.072s][debug][gc,stats       ]   Mark Stats Cache: hits 5641 misses 2 ratio 99.965
[0.072s][debug][gc,stats       ] ---------------------------------------------------------------------
[0.072s][debug][gc,stats       ] Marking Stats, task = 1, calls = 15
[0.072s][debug][gc,stats       ]   Elapsed time = 0.03ms, Termination time = 0.41ms
[0.072s][debug][gc,stats       ]   Step Times (cum): num = 15, avg = 0.06ms, sd = 0.07ms max = 0.24ms, total = 0.90ms
[0.072s][debug][gc,stats       ]   Mark Stats Cache: hits 1427 misses 3 ratio 99.790
[0.072s][debug][gc,stats       ] ---------------------------------------------------------------------
[0.072s][debug][gc,phases      ] GC(1) Finalize Marking 0.165ms
[0.072s][debug][gc,phases,start] GC(1) Reference Processing
[0.072s][debug][gc,ref         ] GC(1) Skipped phase1 of Reference Processing due to unavailable references
[0.072s][debug][gc,ref         ] GC(1) Skipped phase2 of Reference Processing due to unavailable references
[0.072s][debug][gc,ref         ] GC(1) Skipped phase3 of Reference Processing due to unavailable references
[0.072s][debug][gc,ref         ] GC(1) Skipped phase4 of Reference Processing due to unavailable references
[0.072s][debug][gc,phases,ref  ] GC(1) Reference Processing: 0.0ms
[0.072s][debug][gc,phases,ref  ] GC(1)   Reconsider SoftReferences: 0.0ms
[0.072s][debug][gc,phases,ref  ] GC(1)     SoftRef (ms):             skipped
[0.072s][debug][gc,phases,ref  ] GC(1)   Notify Soft/WeakReferences: 0.0ms
[0.072s][debug][gc,phases,ref  ] GC(1)     SoftRef (ms):             skipped
[0.072s][debug][gc,phases,ref  ] GC(1)     WeakRef (ms):             skipped
[0.072s][debug][gc,phases,ref  ] GC(1)     FinalRef (ms):            skipped
[0.072s][debug][gc,phases,ref  ] GC(1)     Total (ms):               skipped
[0.072s][debug][gc,phases,ref  ] GC(1)   Notify and keep alive finalizable: 0.0ms
[0.072s][debug][gc,phases,ref  ] GC(1)     FinalRef (ms):            skipped
[0.072s][debug][gc,phases,ref  ] GC(1)   Notify PhantomReferences: 0.0ms
[0.072s][debug][gc,phases,ref  ] GC(1)     PhantomRef (ms):          skipped
[0.072s][debug][gc,phases,ref  ] GC(1)   SoftReference:
[0.072s][debug][gc,phases,ref  ] GC(1)     Discovered: 0
[0.072s][debug][gc,phases,ref  ] GC(1)     Cleared: 0
[0.072s][debug][gc,phases,ref  ] GC(1)   WeakReference:
[0.072s][debug][gc,phases,ref  ] GC(1)     Discovered: 0
[0.072s][debug][gc,phases,ref  ] GC(1)     Cleared: 0
[0.072s][debug][gc,phases,ref  ] GC(1)   FinalReference:
[0.072s][debug][gc,phases,ref  ] GC(1)     Discovered: 0
[0.072s][debug][gc,phases,ref  ] GC(1)     Cleared: 0
[0.072s][debug][gc,phases,ref  ] GC(1)   PhantomReference:
[0.072s][debug][gc,phases,ref  ] GC(1)     Discovered: 0
[0.072s][debug][gc,phases,ref  ] GC(1)     Cleared: 0
[0.072s][debug][gc,phases      ] GC(1) Reference Processing 0.338ms
[0.072s][debug][gc,phases,start] GC(1) Weak Processing
[0.072s][debug][gc,phases      ] GC(1)   JVMTI weak processing: 0.0ms
[0.072s][debug][gc,phases      ] GC(1)     Dead: 0
[0.072s][debug][gc,phases      ] GC(1)     Total: 0
[0.072s][debug][gc,phases      ] GC(1)   JFR weak processing: 0.0ms
[0.072s][debug][gc,phases      ] GC(1)     Dead: 0
[0.072s][debug][gc,phases      ] GC(1)     Total: 0
[0.072s][debug][gc,phases      ] GC(1)   JNI weak processing: 0.0ms
[0.072s][debug][gc,phases      ] GC(1)     Dead: 0
[0.072s][debug][gc,phases      ] GC(1)     Total: 0
[0.072s][debug][gc,phases      ] GC(1)   StringTable weak processing: 0.0ms
[0.072s][debug][gc,phases      ] GC(1)     Dead: 0
[0.072s][debug][gc,phases      ] GC(1)     Total: 4
[0.072s][debug][gc,phases      ] GC(1)   ResolvedMethodTable weak processing: 0.0ms
[0.072s][debug][gc,phases      ] GC(1)     Dead: 0
[0.072s][debug][gc,phases      ] GC(1)     Total: 0
[0.072s][debug][gc,phases      ] GC(1)   VM weak processing: 0.0ms
[0.072s][debug][gc,phases      ] GC(1)     Dead: 0
[0.072s][debug][gc,phases      ] GC(1)     Total: 3
[0.072s][debug][gc,phases      ] GC(1) Weak Processing 0.139ms
[0.072s][debug][gc,phases,start] GC(1) Class Unloading
[0.072s][debug][gc,phases,start] GC(1) ClassLoaderData
[0.072s][debug][gc,phases      ] GC(1) ClassLoaderData 0.006ms
[0.072s][debug][gc,phases,start] GC(1) Trigger cleanups
[0.072s][debug][gc,phases      ] GC(1) Trigger cleanups 0.029ms
[0.073s][debug][gc,phases      ] GC(1) Class Unloading 0.152ms
[0.073s][debug][gc,phases,start] GC(1) Flush Task Caches
[0.073s][debug][gc,stats       ] Mark stats cache hits 8190 misses 6 ratio 99.927
[0.073s][debug][gc,phases      ] GC(1) Flush Task Caches 0.050ms
[0.073s][debug][gc,phases,start] GC(1) Update Remembered Set Tracking Before Rebuild
[0.073s][debug][gc,ergo        ] GC(1) Running G1 Update RemSet Tracking Before Rebuild using 1 workers for 20 regions in heap
[0.073s][debug][gc,remset,tracking] GC(1) Remembered Set Tracking update regions total 20, selected 0
[0.073s][debug][gc,phases         ] GC(1) Update Remembered Set Tracking Before Rebuild 0.046ms
[0.073s][debug][gc,phases,start   ] GC(1) Reclaim Empty Regions
[0.073s][debug][gc,phases         ] GC(1) Reclaim Empty Regions 0.020ms
[0.073s][debug][gc,phases,start   ] GC(1) Purge Metaspace
[0.073s][debug][gc,phases         ] GC(1) Purge Metaspace 0.007ms
[0.073s][debug][gc,phases,start   ] GC(1) Report Object Count
[0.073s][debug][gc,phases         ] GC(1) Report Object Count 0.006ms
[0.073s][info ][gc                ] GC(1) Pause Remark 12M->12M(20M) 1.091ms
[0.073s][info ][gc,cpu            ] GC(1) User=0.00s Sys=0.00s Real=0.00s
[0.073s][info ][gc,marking        ] GC(1) Concurrent Rebuild Remembered Sets
[0.073s][info ][gc,marking        ] GC(1) Concurrent Rebuild Remembered Sets 0.498ms
[0.073s][info ][gc,start          ] GC(1) Pause Cleanup
[0.073s][debug][gc,phases,start   ] GC(1) Update Remembered Set Tracking After Rebuild
[0.073s][debug][gc,phases         ] GC(1) Update Remembered Set Tracking After Rebuild 0.010ms
[0.073s][debug][gc,phases,start   ] GC(1) Finalize Concurrent Mark Cleanup
[0.073s][debug][gc,ergo           ] GC(1) request young-only gcs (candidate old regions not available)
[0.073s][debug][gc,phases         ] GC(1) Finalize Concurrent Mark Cleanup 0.047ms
[0.073s][info ][gc                ] GC(1) Pause Cleanup 13M->13M(20M) 0.099ms
[0.074s][info ][gc,cpu            ] GC(1) User=0.00s Sys=0.00s Real=0.00s
[0.074s][info ][gc,marking        ] GC(1) Concurrent Cleanup for Next Mark
[0.074s][debug][gc,ergo           ] GC(1) Running G1 Clear Bitmap with 1 workers for 1 work units.
[0.074s][info ][gc,marking        ] GC(1) Concurrent Cleanup for Next Mark 0.209ms
[0.074s][info ][gc                ] GC(1) Concurrent Cycle 5.297ms
991999711
[0.090s][debug][gc,ergo,ihop      ] Request concurrent cycle initiation (occupancy higher than threshold) occupancy: 14680064B allocation request: 2097168B threshold: 9437184B (45.00) source: concurrent humongous allocation
[0.090s][debug][gc,ergo           ] Request concurrent cycle initiation (requested by GC cause). GC cause: G1 Humongous Allocation
[0.090s][debug][gc,heap           ] GC(2) Heap before GC invocations=2 (full 0): garbage-first heap   total 20480K, used 13284K [0x00000007fec00000, 0x0000000800000000)
[0.090s][debug][gc,heap           ] GC(2)   region size 1024K, 2 young (2048K), 1 survivors (1024K)
[0.090s][debug][gc,heap           ] GC(2)  Metaspace       used 146K, capacity 4486K, committed 4864K, reserved 1056768K
[0.090s][debug][gc,heap           ] GC(2)   class space    used 7K, capacity 386K, committed 512K, reserved 1048576K
[0.090s][debug][gc,ergo           ] GC(2) Initiate concurrent cycle (concurrent cycle initiation requested)
[0.090s][info ][gc,start          ] GC(2) Pause Young (Concurrent Start) (G1 Humongous Allocation)
[0.090s][info ][gc,task           ] GC(2) Using 2 workers of 10 for evacuation
[0.090s][debug][gc,tlab           ] GC(2) TLAB totals: thrds: 1  refills: 1 max: 1 slow allocs: 1 max 1 waste: 99.4% gc: 187584B max: 187584B slow: 0B max: 0B fast: 0B max: 0B
[0.090s][debug][gc,age            ] GC(2) Desired survivor size 1048576 bytes, new threshold 15 (max threshold 15)
[0.090s][debug][gc,alloc,region   ] GC(2) Mutator Allocation stats, regions: 1, wasted size: 0B ( 0.0%)
[0.095s][debug][gc,ergo           ] GC(2) Running G1 Clear Card Table Task using 1 workers for 1 units of work for 3 regions.
[0.095s][debug][gc,ref            ] GC(2) Skipped phase1 of Reference Processing due to unavailable references
[0.095s][debug][gc,ref            ] GC(2) Skipped phase3 of Reference Processing due to unavailable references
[0.095s][debug][gc,ergo           ] GC(2) Running G1 Free Collection Set using 1 workers for collection set length 2
[0.095s][debug][gc,humongous      ] GC(2) Dead humongous region 0 object size 2097168 start 0x00000007fec00000 with remset 0 code roots 0 is marked 0 reclaim candidate 1 type array 1
[0.095s][debug][gc,humongous      ] GC(2) Dead humongous region 3 object size 2097168 start 0x00000007fef00000 with remset 0 code roots 0 is marked 0 reclaim candidate 1 type array 1
[0.095s][debug][gc,humongous      ] GC(2) Dead humongous region 6 object size 2097168 start 0x00000007ff200000 with remset 0 code roots 0 is marked 0 reclaim candidate 1 type array 1
[0.095s][debug][gc,humongous      ] GC(2) Dead humongous region 9 object size 2097168 start 0x00000007ff500000 with remset 0 code roots 0 is marked 0 reclaim candidate 1 type array 1
[0.095s][debug][gc,plab           ] GC(2) Young PLAB allocation: allocated: 234432B, wasted: 312B, unused: 29848B, used: 204272B, undo waste: 0B, 
[0.095s][debug][gc,plab           ] GC(2) Young other allocation: region end waste: 0B, regions filled: 1, direct allocated: 32800B, failure used: 0B, failure wasted: 0B
[0.095s][debug][gc,plab           ] GC(2) Young sizing: calculated: 40848B, actual: 40400B
[0.095s][debug][gc,plab           ] GC(2) Old PLAB allocation: allocated: 0B, wasted: 0B, unused: 0B, used: 0B, undo waste: 0B, 
[0.095s][debug][gc,plab           ] GC(2) Old other allocation: region end waste: 0B, regions filled: 0, direct allocated: 0B, failure used: 0B, failure wasted: 0B
[0.095s][debug][gc,plab           ] GC(2) Old sizing: calculated: 0B, actual: 2064B
[0.095s][debug][gc,ihop           ] GC(2) Basic information (value update), threshold: 9437184B (45.00), target occupancy: 20971520B, current occupancy: 1049568B, recent allocation size: 6291456B, recent allocation duration: 21.57ms, recent old gen allocation rate: 291621394.31B/s, recent marking phase length: 0.00ms
[0.095s][debug][gc,ihop           ] GC(2) Adaptive IHOP information (value update), threshold: 9437184B (52.94), internal target occupancy: 17825792B, occupancy: 1049568B, additional buffer size: 10485760B, predicted old gen allocation rate: 231801164.06B/s, predicted marking phase length: 0.00ms, prediction active: false
[0.095s][debug][gc,ergo,refine    ] GC(2) Updated Refinement Zones: green: 10, yellow: 30, red: 50
[0.095s][info ][gc,phases         ] GC(2)   Pre Evacuate Collection Set: 0.0ms
[0.095s][debug][gc,phases         ] GC(2)     Prepare TLABs: 0.0ms
[0.095s][debug][gc,phases         ] GC(2)     Choose Collection Set: 0.0ms
[0.095s][debug][gc,phases         ] GC(2)     Region Register: 0.0ms
[0.095s][debug][gc,phases         ] GC(2)     Clear Claimed Marks: 0.0ms
[0.095s][info ][gc,phases         ] GC(2)   Evacuate Collection Set: 4.4ms
[0.095s][debug][gc,phases         ] GC(2)     Ext Root Scanning (ms):   Min:  0.0, Avg:  0.1, Max:  0.1, Diff:  0.0, Sum:  0.1, Workers: 2
[0.095s][debug][gc,phases         ] GC(2)     Scan HCC (ms):            Min:  0.0, Avg:  0.0, Max:  0.0, Diff:  0.0, Sum:  0.0, Workers: 2
[0.095s][debug][gc,phases         ] GC(2)     Update RS (ms):           Min:  0.6, Avg:  2.3, Max:  4.0, Diff:  3.4, Sum:  4.5, Workers: 2
[0.095s][debug][gc,phases         ] GC(2)       Processed Buffers:        Min: 1, Avg:  1.5, Max: 2, Diff: 1, Sum: 3, Workers: 2
[0.095s][debug][gc,phases         ] GC(2)       Scanned Cards:            Min: 23, Avg: 75.0, Max: 127, Diff: 104, Sum: 150, Workers: 2
[0.095s][debug][gc,phases         ] GC(2)       Skipped Cards:            Min: 1, Avg: 10.0, Max: 19, Diff: 18, Sum: 20, Workers: 2
[0.095s][debug][gc,phases         ] GC(2)     Scan RS (ms):             Min:  0.0, Avg:  1.0, Max:  2.1, Diff:  2.1, Sum:  2.1, Workers: 2
[0.095s][debug][gc,phases         ] GC(2)       Scanned Cards:            Min: 0, Avg: 31.0, Max: 62, Diff: 62, Sum: 62, Workers: 2
[0.095s][debug][gc,phases         ] GC(2)       Claimed Cards:            Min: 0, Avg: 74.0, Max: 148, Diff: 148, Sum: 148, Workers: 2
[0.095s][debug][gc,phases         ] GC(2)       Skipped Cards:            Min: 0, Avg:  0.0, Max: 0, Diff: 0, Sum: 0, Workers: 2
[0.095s][debug][gc,phases         ] GC(2)     Code Root Scan (ms):      Min:  0.0, Avg:  0.0, Max:  0.0, Diff:  0.0, Sum:  0.0, Workers: 2
[0.095s][debug][gc,phases         ] GC(2)     Object Copy (ms):         Min:  0.3, Avg:  0.4, Max:  0.5, Diff:  0.2, Sum:  0.8, Workers: 2
[0.095s][debug][gc,phases         ] GC(2)       LAB Waste                 Min: 88, Avg: 156.0, Max: 224, Diff: 136, Sum: 312, Workers: 2
[0.095s][debug][gc,phases         ] GC(2)       LAB Undo Waste            Min: 0, Avg:  0.0, Max: 0, Diff: 0, Sum: 0, Workers: 2
[0.095s][debug][gc,phases         ] GC(2)     Termination (ms):         Min:  0.0, Avg:  0.6, Max:  1.2, Diff:  1.2, Sum:  1.2, Workers: 2
[0.095s][debug][gc,phases         ] GC(2)       Termination Attempts:     Min: 1, Avg: 21.0, Max: 41, Diff: 40, Sum: 42, Workers: 2
[0.095s][debug][gc,phases         ] GC(2)     GC Worker Other (ms):     Min:  0.0, Avg:  0.0, Max:  0.0, Diff:  0.0, Sum:  0.0, Workers: 2
[0.095s][debug][gc,phases         ] GC(2)     GC Worker Total (ms):     Min:  4.3, Avg:  4.3, Max:  4.4, Diff:  0.0, Sum:  8.7, Workers: 2
[0.095s][info ][gc,phases         ] GC(2)   Post Evacuate Collection Set: 0.2ms
[0.095s][debug][gc,phases         ] GC(2)     Code Roots Fixup: 0.0ms
[0.095s][debug][gc,phases         ] GC(2)     Clear Card Table: 0.0ms
[0.095s][debug][gc,phases         ] GC(2)     Reference Processing: 0.0ms
[0.095s][debug][gc,phases,ref     ] GC(2)       Reconsider SoftReferences: 0.0ms
[0.095s][debug][gc,phases,ref     ] GC(2)         SoftRef (ms):             skipped
[0.095s][debug][gc,phases,ref     ] GC(2)       Notify Soft/WeakReferences: 0.0ms
[0.095s][debug][gc,phases,ref     ] GC(2)         SoftRef (ms):             Min:  0.0, Avg:  0.0, Max:  0.0, Diff:  0.0, Sum:  0.0, Workers: 1
[0.095s][debug][gc,phases,ref     ] GC(2)         WeakRef (ms):             Min:  0.0, Avg:  0.0, Max:  0.0, Diff:  0.0, Sum:  0.0, Workers: 1
[0.095s][debug][gc,phases,ref     ] GC(2)         FinalRef (ms):            Min:  0.0, Avg:  0.0, Max:  0.0, Diff:  0.0, Sum:  0.0, Workers: 1
[0.095s][debug][gc,phases,ref     ] GC(2)         Total (ms):               Min:  0.0, Avg:  0.0, Max:  0.0, Diff:  0.0, Sum:  0.0, Workers: 1
[0.095s][debug][gc,phases,ref     ] GC(2)       Notify and keep alive finalizable: 0.0ms
[0.095s][debug][gc,phases,ref     ] GC(2)         FinalRef (ms):            skipped
[0.095s][debug][gc,phases,ref     ] GC(2)       Notify PhantomReferences: 0.0ms
[0.095s][debug][gc,phases,ref     ] GC(2)         PhantomRef (ms):          Min:  0.0, Avg:  0.0, Max:  0.0, Diff:  0.0, Sum:  0.0, Workers: 1
[0.095s][debug][gc,phases,ref     ] GC(2)       SoftReference:
[0.095s][debug][gc,phases,ref     ] GC(2)         Discovered: 0
[0.095s][debug][gc,phases,ref     ] GC(2)         Cleared: 0
[0.095s][debug][gc,phases,ref     ] GC(2)       WeakReference:
[0.095s][debug][gc,phases,ref     ] GC(2)         Discovered: 2
[0.095s][debug][gc,phases,ref     ] GC(2)         Cleared: 2
[0.095s][debug][gc,phases,ref     ] GC(2)       FinalReference:
[0.095s][debug][gc,phases,ref     ] GC(2)         Discovered: 0
[0.095s][debug][gc,phases,ref     ] GC(2)         Cleared: 0
[0.095s][debug][gc,phases,ref     ] GC(2)       PhantomReference:
[0.095s][debug][gc,phases,ref     ] GC(2)         Discovered: 1
[0.095s][debug][gc,phases,ref     ] GC(2)         Cleared: 1
[0.095s][debug][gc,phases         ] GC(2)     Weak Processing: 0.0ms
[0.095s][debug][gc,phases         ] GC(2)       JVMTI weak processing: 0.0ms
[0.095s][debug][gc,phases         ] GC(2)         Dead: 0
[0.095s][debug][gc,phases         ] GC(2)         Total: 0
[0.095s][debug][gc,phases         ] GC(2)       JFR weak processing: 0.0ms
[0.095s][debug][gc,phases         ] GC(2)         Dead: 0
[0.095s][debug][gc,phases         ] GC(2)         Total: 0
[0.095s][debug][gc,phases         ] GC(2)       JNI weak processing: 0.0ms
[0.095s][debug][gc,phases         ] GC(2)         Dead: 0
[0.095s][debug][gc,phases         ] GC(2)         Total: 0
[0.095s][debug][gc,phases         ] GC(2)       StringTable weak processing: 0.0ms
[0.095s][debug][gc,phases         ] GC(2)         Dead: 0
[0.095s][debug][gc,phases         ] GC(2)         Total: 4
[0.095s][debug][gc,phases         ] GC(2)       ResolvedMethodTable weak processing: 0.0ms
[0.095s][debug][gc,phases         ] GC(2)         Dead: 0
[0.095s][debug][gc,phases         ] GC(2)         Total: 0
[0.095s][debug][gc,phases         ] GC(2)       VM weak processing: 0.0ms
[0.095s][debug][gc,phases         ] GC(2)         Dead: 0
[0.095s][debug][gc,phases         ] GC(2)         Total: 3
[0.095s][debug][gc,phases         ] GC(2)     Merge Per-Thread State: 0.0ms
[0.095s][debug][gc,phases         ] GC(2)     Code Roots Purge: 0.0ms
[0.095s][debug][gc,phases         ] GC(2)     Redirty Cards: 0.0ms
[0.095s][debug][gc,phases         ] GC(2)     DerivedPointerTable Update: 0.0ms
[0.095s][debug][gc,phases         ] GC(2)     Free Collection Set: 0.0ms
[0.096s][debug][gc,phases         ] GC(2)     Humongous Reclaim: 0.0ms
[0.096s][debug][gc,phases         ] GC(2)     Start New Collection Set: 0.0ms
[0.096s][debug][gc,phases         ] GC(2)     Resize TLABs: 0.0ms
[0.096s][debug][gc,phases         ] GC(2)     Expand Heap After Collection: 0.0ms
[0.096s][info ][gc,phases         ] GC(2)   Other: 0.2ms
[0.096s][info ][gc,heap           ] GC(2) Eden regions: 1->0(9)
[0.096s][info ][gc,heap           ] GC(2) Survivor regions: 1->1(2)
[0.096s][info ][gc,heap           ] GC(2) Old regions: 0->0
[0.096s][info ][gc,heap           ] GC(2) Archive regions: 2->2
[0.096s][info ][gc,heap           ] GC(2) Humongous regions: 12->0
[0.096s][info ][gc,metaspace      ] GC(2) Metaspace: 146K->146K(1056768K)
[0.096s][debug][gc,heap           ] GC(2) Heap after GC invocations=3 (full 0): garbage-first heap   total 20480K, used 1024K [0x00000007fec00000, 0x0000000800000000)
[0.096s][debug][gc,heap           ] GC(2)   region size 1024K, 1 young (1024K), 1 survivors (1024K)
[0.096s][debug][gc,heap           ] GC(2)  Metaspace       used 146K, capacity 4486K, committed 4864K, reserved 1056768K
[0.096s][debug][gc,heap           ] GC(2)   class space    used 7K, capacity 386K, committed 512K, reserved 1048576K
[0.096s][info ][gc                ] GC(2) Pause Young (Concurrent Start) (G1 Humongous Allocation) 13M->1M(20M) 5.215ms
[0.096s][info ][gc,cpu            ] GC(2) User=0.01s Sys=0.00s Real=0.00s
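
As a side note on the attachment: a minimal sketch (hypothetical names) 
of the sizing arithmetic behind the humongous lines above. Each 2097168B 
array needs three contiguous 1M regions, which is consistent with the 
dead humongous regions sitting at indices 0, 3, 6 and 9, and four such 
arrays account for the "Humongous regions: 12->0" reported at GC(2):

    public class HumongousRegionSketch {
        public static void main(String[] args) {
            long regionSize = 1 << 20;                          // "Heap region size: 1M"
            long objectSize = 2_097_168L;                       // "object size 2097168" in the log
            long regionsPerObject =
                (objectSize + regionSize - 1) / regionSize;     // rounds up to 3 regions
            long liveArrays = 4;                                // allocation1..allocation4
            System.out.println(regionsPerObject);               // 3
            System.out.println(liveArrays * regionsPerObject);  // 12
        }
    }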

From stefan.johansson at oracle.com  Mon Mar  9 14:30:06 2020
From: stefan.johansson at oracle.com (Stefan Johansson)
Date: Mon, 9 Mar 2020 15:30:06 +0100
Subject: Minor question about logging
In-Reply-To: <a35d897d-317d-27ec-1282-97ce041606db@gmail.com>
References: <88eefabc-fdd7-cdb4-0e11-b2803df131ce@gmail.com>
 <051eeaea-45d4-d761-2549-756a8f7b5aca@oracle.com>
 <2ab61625-e308-a21f-421a-668afbc3f638@gmail.com>
 <d423f8e7-b5b1-d0cd-6731-980cc1bd3ae6@oracle.com>
 <a35d897d-317d-27ec-1282-97ce041606db@gmail.com>
Message-ID: <e4e97ed5-30ae-c3d1-e613-a9372d78f923@oracle.com>

Hi,

On 2020-03-09 15:08, Eugeniu Rabii wrote:
> Hello Stefan,
> 
> I actually should have been more clear myself on the specific question I 
> have, I am sorry for that.
No problem.

> 
> Comments inline.
> 
> On 3/9/20 6:41 AM, Stefan Johansson wrote:
>> Hi Eugeniu,
>>
>> I should have been more clear around that your understanding of the 
>> numbers is correct. But as Thomas also responded, these are estimates 
>> and we might have to start a GC due to other circumstances.
>>
>> See more below.
>>
>> On 2020-03-09 11:32, Eugeniu Rabii wrote:
>>> Hello Stefan,
>>>
>>> I know these are humongous allocations; the 2 MB was chosen on 
>>> purpose (I could have chosen 1 MB too, I know).
>>>
>>> The first GC (0 - young collection) is actually the result of the 
>>> allocation of those humongous Objects.
>>>
>>> Because the humongous allocation happened, a concurrent GC was 
>>> triggered (GC (1)) that triggers the young collection first (GC (0)); 
>>> these are concepts I do seem to get.
>>>
>>> My question here is different. After the young collection is done, 
>>> there are entries like this in logs:
>>>
>>> [0.071s][info ][gc,heap        ] GC(0) Eden regions: 1->0(9)
>>> [0.071s][info ][gc,heap        ] GC(0) Survivor regions: 0->1(2)
>>>
>>> The way I read it is: there was 1 Eden Region before the 
>>> collection; everything was cleared from it (that zero) and the 
>>> heuristics just said that the next cycle should have 9 Eden Regions.
>> Correct, but this is an estimate and we might have to GC before we 
>> fill up the 9 young regions. For example if there are a lot of 
>> humongous allocations. The humongous allocations are as I mentioned 
>> treated differently and aren't considered young.
>>
> 
> I understand these are estimates; I also understand these could be 
> ignored. In fact, they are, since GC (2) is _again_ a humongous allocation.
> 
> 
>>>
>>> Same explanation would happen for Survivor Regions. As such there 
>>> would be: 11 young, 2 survivor.
>>>
>>>
>>> I am expecting the third cycle (GC (2)) to start with :
>>>
>>>
>>> [0.076s][debug][gc,heap           ] GC(2) Heap before GC 
>>> invocations=2 (full 0): .....
>>>
>>> [0.076s][debug][gc,heap           ] GC(2)   region size 1024K, 11 
>>> young (2048K), 2 survivors (1024K)
>>>
>>>
>>> Instead it prints:
>>>
>>>
>>> [0.076s][debug][gc,heap           ] GC(2)   region size 1024K, 2 
>>> young (2048K), 1 survivors (1024K)
>>>
>>>
>>> Does this make it a better explanation now?
>> Your expectation is correct, and if GC(2) isn't caused by a humongous 
>> allocation this is unexpected behavior. It would help a lot if you 
>> could post more of your log, especially the cause for GC(2).
> 
> Logs attached, but you are correct GC (2) is still a humongous allocation:
> 
> [0.090s][debug][gc,ergo           ] Request concurrent cycle initiation 
> (requested by GC cause). GC cause: G1 Humongous Allocation
> 
> 
> And now my question: what I REALLY wish for (not sure if possible) is a 
> log statement in GC(1) of how young regions were adjusted because of the 
> humongous allocation - this is the part I was missing.
> 
> I sort of already realized before posting that those are only estimates, 
> I hoped for some kind of a hint in logs.
GC(1) is the concurrent cycle initiated by GC(0), and the concurrent 
cycle itself doesn't affect the number of young regions used in the next 
young collection. So to get this number the only thing you can really do 
is to compare the estimate from GC(0) with the actual number in GC(2). 
Or, to be more general, compare the estimate from the previous young 
collection with the actual number used in the current one. For normal 
young collections the numbers should be equal, but there are some GC 
causes for which they are not, and one of them is:
G1 Humongous Allocation

Not sure if this helps, but this is the information you currently have 
in the logs.
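
For completeness, a rough sketch (hypothetical names, not an existing 
tool) of that comparison, run over two of the log lines quoted earlier 
in the thread:

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class EdenEstimateSketch {
        // Estimate printed at the end of a young GC: "Eden regions: 1->0(9)"
        private static final Pattern ESTIMATE =
            Pattern.compile("Eden regions: \\d+->\\d+\\((\\d+)\\)");
        // "Heap before GC" line of the next GC: "2 young (2048K), 1 survivors (1024K)";
        // the young count includes the survivor regions.
        private static final Pattern BEFORE_GC =
            Pattern.compile("(\\d+) young \\(\\d+K\\), (\\d+) survivors");

        public static void main(String[] args) {
            Matcher est = ESTIMATE.matcher("GC(0) Eden regions: 1->0(9)");
            Matcher act = BEFORE_GC.matcher(
                "GC(2)   region size 1024K, 2 young (2048K), 1 survivors (1024K)");
            if (est.find() && act.find()) {
                int estimatedEden = Integer.parseInt(est.group(1));   // 9
                int young = Integer.parseInt(act.group(1));           // 2
                int survivors = Integer.parseInt(act.group(2));       // 1
                System.out.println("estimated eden: " + estimatedEden
                    + ", actual eden at the next GC: " + (young - survivors));
            }
        }
    }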

Cheers,
Stefan

> 
> 
>>
>> Thanks,
>> Stefan
>>
>>>
>>> Thank you,
>>>
>>> Eugene.
>>>
>>>
>>> On 3/9/20 4:14 AM, Stefan Johansson wrote:
>>>> Hi Eugeniu,
>>>>
>>>> The second GC is most likely also caused by having many humongous 
>>>> allocations. This was the cause of GC(0) as well, and since your 
>>>> application only allocates large (humongous) objects it will not use 
>>>> a lot of space for other objects.
>>>>
>>>> If you are not familiar with the concept of humongous objects in G1, 
>>>> these are objects that are too large to be allocated in the normal 
>>>> fast path. They are instead allocated in separate regions. This 
>>>> requires some special handling and that's the reason we trigger GCs 
>>>> more quickly if a lot of such objects are allocated. In your setup 
>>>> the region size will be 1MB so all objects larger than 500KB will be 
>>>> considered humongous.
>>>>
>>>> Hope this helps,
>>>> StefanJ
>>>>
>>>>
>>>> On 2020-03-08 06:01, Eugeniu Rabii wrote:
>>>>> Hello,
>>>>>
>>>>> I have a very simple program that constantly allocates some byte 
>>>>> arrays (of 2 MB) each (running with the latest jdk-13). I run it 
>>>>> with :
>>>>>
>>>>> -Xms20M
>>>>> -Xmx20M
>>>>> -Xmn10M
>>>>> "-Xlog:heap*=debug" "-Xlog:gc*=debug" "-Xlog:ergo*=debug"
>>>>>
>>>>>
>>>>> For example:
>>>>>
>>>>>     public static void main(String[] args) {
>>>>>         while (true) {
>>>>>             System.out.println(invokeMe());
>>>>>         }
>>>>>     }
>>>>>
>>>>>     public static int invokeMe() {
>>>>>         int x = 1024;
>>>>>         int factor = 2;
>>>>>         byte[] allocation1 = new byte[factor * x * x];
>>>>>         allocation1[2] = 3;
>>>>>         byte[] allocation2 = new byte[factor * x * x];
>>>>>         byte[] allocation3 = new byte[factor * x * x];
>>>>>         byte[] allocation4 = new byte[factor * x * x];
>>>>>
>>>>>         return Arrays.hashCode(allocation1) ^ Arrays.hashCode(allocation2)
>>>>>             ^ Arrays.hashCode(allocation3) ^ Arrays.hashCode(allocation4);
>>>>>     }
>>>>>
>>>>> In logs, I see something that is puzzling me:
>>>>>
>>>>>
>>>>> [0.066s][debug][gc,ergo       ] Request concurrent cycle initiation 
>>>>> (requested by GC cause). GC cause: G1 Humongous Allocation
>>>>> [0.066s][debug][gc,heap       ] GC(0) Heap before GC invocations=0 
>>>>> (full 0): garbage-first heap   total 20480K, used 6908K 
>>>>> [0x00000007fec00000, 0x0000000800000000)
>>>>> [0.066s][debug][gc,heap       ] GC(0)   region size 1024K, 1 young 
>>>>> (1024K), 0 survivors (0K)
>>>>>
>>>>> OK, so Heap Before: 1 young, 0 survivors.
>>>>>
>>>>> Then:
>>>>>
>>>>> [0.071s][info ][gc,heap        ] GC(0) Eden regions: 1->0(9)
>>>>> [0.071s][info ][gc,heap        ] GC(0) Survivor regions: 0->1(2)
>>>>> [0.071s][info ][gc,heap        ] GC(0) Old regions: 0->0
>>>>>
>>>>> So the next cycle will have 9 Eden Regions and 2 Survivor ones (at 
>>>>> least this is how I read the source code of where this is logged)
>>>>>
>>>>> Then a GC(1) concurrent cycle happens:
>>>>>
>>>>> [0.071s][info ][gc             ] GC(1) Concurrent Cycle
>>>>>
>>>>> And the next cycle is where I fail to understand the logging:
>>>>>
>>>>> [0.076s][debug][gc,heap           ] GC(2) Heap before GC 
>>>>> invocations=2 (full 0): garbage-first heap   total 20480K, used 
>>>>> 7148K [0x00000007fec00000, 0x0000000800000000)
>>>>> [0.076s][debug][gc,heap           ] GC(2)   region size 1024K, 2 
>>>>> young (2048K), 1 survivors (1024K)
>>>>>
>>>>> How come 2 young, 1 survivors? When the previous cycle said 9 Eden, 
>>>>> 2 Survivor.
>>>>>
>>>>> Thank you,
>>>>> Eugene.
>>>>>
>>>>> _______________________________________________
>>>>> hotspot-gc-use mailing list
>>>>> hotspot-gc-use at openjdk.java.net
>>>>> https://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use
>>>>>

From eugen.rabii at gmail.com  Mon Mar  9 15:11:23 2020
From: eugen.rabii at gmail.com (Eugeniu Rabii)
Date: Mon, 9 Mar 2020 11:11:23 -0400
Subject: Minor question about logging
In-Reply-To: <e4e97ed5-30ae-c3d1-e613-a9372d78f923@oracle.com>
References: <88eefabc-fdd7-cdb4-0e11-b2803df131ce@gmail.com>
 <051eeaea-45d4-d761-2549-756a8f7b5aca@oracle.com>
 <2ab61625-e308-a21f-421a-668afbc3f638@gmail.com>
 <d423f8e7-b5b1-d0cd-6731-980cc1bd3ae6@oracle.com>
 <a35d897d-317d-27ec-1282-97ce041606db@gmail.com>
 <e4e97ed5-30ae-c3d1-e613-a9372d78f923@oracle.com>
Message-ID: <f5eaacff-2b53-4901-7b64-787d8449d5f4@gmail.com>

Yes Stefan, I have already seen that under back-to-back young GCs those 
values are always up to date.

Thank you for confirming this.

Eugene.

On 3/9/20 10:30 AM, Stefan Johansson wrote:
> Hi,
>
> On 2020-03-09 15:08, Eugeniu Rabii wrote:
>> Hello Stefan,
>>
>> I actually should have been more clear myself on the specific 
>> question I have, I am sorry for that.
> No problem.
>
>>
>> Comments inline.
>>
>> On 3/9/20 6:41 AM, Stefan Johansson wrote:
>>> Hi Eugeniu,
>>>
>>> I should have been more clear around that your understanding of the 
>>> numbers is correct. But as Thomas also responded, these are 
>>> estimates and we might have to start a GC due to other circumstances.
>>>
>>> See more below.
>>>
>>> On 2020-03-09 11:32, Eugeniu Rabii wrote:
>>>> Hello Stefan,
>>>>
>>>> I know these are humongous allocations; the 2 MB was chosen on 
>>>> purpose (I could have chosen 1 MB too, I know).
>>>>
>>>> The first GC (0 - young collection) is actually the result of the 
>>>> allocation of those humongous Objects.
>>>>
>>>> Because the humongous allocation happened, a concurrent GC was 
>>>> triggered (GC (1)) that triggers the young collection first (GC 
>>>> (0)); these are concepts I do seem to get.
>>>>
>>>> My question here is different. After the young collection is done, 
>>>> there are entries like this in logs:
>>>>
>>>> [0.071s][info ][gc,heap        ] GC(0) Eden regions: 1->0(9)
>>>> [0.071s][info ][gc,heap        ] GC(0) Survivor regions: 0->1(2)
>>>>
>>>> The way I read it is: there was 1 Eden Region before the 
>>>> collection; everything was cleared from it (that zero) and the 
>>>> heuristics just said that the next cycle should have 9 Eden Regions.
>>> Correct, but this is an estimate and we might have to GC before we 
>>> fill up the 9 young regions. For example if there are a lot of 
>>> humongous allocations. The humongous allocations are as I mentioned 
>>> treated differently and aren't considered young.
>>>
>>
>> I understand these are estimates; I also understand these could be 
>> ignored. In fact, they are, since GC (2) is _again_ a humongous 
>> allocation.
>>
>>
>>>>
>>>> Same explanation would happen for Survivor Regions. As such there 
>>>> would be: 11 young, 2 survivor.
>>>>
>>>>
>>>> I am expecting the third cycle (GC (2)) to start with :
>>>>
>>>>
>>>> [0.076s][debug][gc,heap           ] GC(2) Heap before GC 
>>>> invocations=2 (full 0): .....
>>>>
>>>> [0.076s][debug][gc,heap           ] GC(2)   region size 1024K, 11 
>>>> young (2048K), 2 survivors (1024K)
>>>>
>>>>
>>>> Instead it prints:
>>>>
>>>>
>>>> [0.076s][debug][gc,heap           ] GC(2)   region size 1024K, 2 
>>>> young (2048K), 1 survivors (1024K)
>>>>
>>>>
>>>> Does this make it a better explanation now?
>>> Your expectation is correct, and if GC(2) isn't caused by a 
>>> humongous allocation this is unexpected behavior. It would help a 
>>> lot if you could post more of your log, especially the cause for GC(2).
>>
>> Logs attached, but you are correct GC (2) is still a humongous 
>> allocation:
>>
>> [0.090s][debug][gc,ergo           ] Request concurrent cycle 
>> initiation (requested by GC cause). GC cause: G1 Humongous Allocation
>>
>>
>> And now my question: what I REALLY wish for (not sure if possible) is 
>> a log statement in GC(1) of how young regions were adjusted because 
>> of the humongous allocation - this is the part I was missing.
>>
>> I sort of already realized before posting that those are only 
>> estimates, I hoped for some kind of a hint in logs.
> GC(1) is the concurrent cycle initiated by GC(0), and the concurrent 
> cycle itself doesn't affect the number of young regions used in the 
> next young collection. So to get this number the only thing you can 
> really do is to compare the estimate from GC(0) with the actual number 
> in GC(2). Or, to be more general, compare the estimate from the previous 
> young collection with the actual number used in the current one. For 
> normal young collections the numbers should be equal, but there are 
> some GC causes for which they are not, and one of them is:
> G1 Humongous Allocation
>
> Not sure if this helps, but this is the information you currently have 
> in the logs.
>
> Cheers,
> Stefan
>
>>
>>
>>>
>>> Thanks,
>>> Stefan
>>>
>>>>
>>>> Thank you,
>>>>
>>>> Eugene.
>>>>
>>>>
>>>> On 3/9/20 4:14 AM, Stefan Johansson wrote:
>>>>> Hi Eugeniu,
>>>>>
>>>>> The second GC is most likely also caused by having many humongous 
>>>>> allocations. This was the cause of GC(0) as well, and since your 
>>>>> application only allocates large (humongous) objects it will not 
>>>>> use a lot of space for other objects.
>>>>>
>>>>> If you are not familiar with the concept of humongous objects in 
>>>>> G1, these are objects that are too large to be allocated in the 
>>>>> normal fast path. They are instead allocated in separate regions. 
>>>>> This requires some special handling and that's the reason we 
>>>>> trigger GCs more quickly if a lot of such objects are allocated. 
>>>>> In your setup the region size will be 1MB so all objects larger 
>>>>> than 500KB will be considered humongous.
>>>>>
>>>>> Hope this helps,
>>>>> StefanJ
>>>>>
>>>>>
>>>>> On 2020-03-08 06:01, Eugeniu Rabii wrote:
>>>>>> Hello,
>>>>>>
>>>>>> I have a very simple program that constantly allocates some byte 
>>>>>> arrays (of 2 MB) each (running with the latest jdk-13). I run it 
>>>>>> with :
>>>>>>
>>>>>> -Xms20M
>>>>>> -Xmx20M
>>>>>> -Xmn10M
>>>>>> "-Xlog:heap*=debug" "-Xlog:gc*=debug" "-Xlog:ergo*=debug"
>>>>>>
>>>>>>
>>>>>> For example:
>>>>>>
>>>>>>     public static void main(String[] args) {
>>>>>>         while (true) {
>>>>>>             System.out.println(invokeMe());
>>>>>>         }
>>>>>>     }
>>>>>>
>>>>>>     public static int invokeMe() {
>>>>>>         int x = 1024;
>>>>>>         int factor = 2;
>>>>>>         byte[] allocation1 = new byte[factor * x * x];
>>>>>>         allocation1[2] = 3;
>>>>>>         byte[] allocation2 = new byte[factor * x * x];
>>>>>>         byte[] allocation3 = new byte[factor * x * x];
>>>>>>         byte[] allocation4 = new byte[factor * x * x];
>>>>>>
>>>>>>         return Arrays.hashCode(allocation1) ^ Arrays.hashCode(allocation2)
>>>>>>             ^ Arrays.hashCode(allocation3) ^ Arrays.hashCode(allocation4);
>>>>>>     }
>>>>>>
>>>>>> In logs, I see something that is puzzling me:
>>>>>>
>>>>>>
>>>>>> [0.066s][debug][gc,ergo       ] Request concurrent cycle 
>>>>>> initiation (requested by GC cause). GC cause: G1 Humongous 
>>>>>> Allocation
>>>>>> [0.066s][debug][gc,heap       ] GC(0) Heap before GC 
>>>>>> invocations=0 (full 0): garbage-first heap   total 20480K, used 
>>>>>> 6908K [0x00000007fec00000, 0x0000000800000000)
>>>>>> [0.066s][debug][gc,heap       ] GC(0)   region size 1024K, 1 
>>>>>> young (1024K), 0 survivors (0K)
>>>>>>
>>>>>> OK, so Heap Before: 1 young, 0 survivors.
>>>>>>
>>>>>> Then:
>>>>>>
>>>>>> [0.071s][info ][gc,heap        ] GC(0) Eden regions: 1->0(9)
>>>>>> [0.071s][info ][gc,heap        ] GC(0) Survivor regions: 0->1(2)
>>>>>> [0.071s][info ][gc,heap        ] GC(0) Old regions: 0->0
>>>>>>
>>>>>> So the next cycle will have 9 Eden Regions and 2 Survivor ones 
>>>>>> (at least this is how I read the source code of where this is 
>>>>>> logged)
>>>>>>
>>>>>> Then a GC(1) concurrent cycle happens:
>>>>>>
>>>>>> [0.071s][info ][gc             ] GC(1) Concurrent Cycle
>>>>>>
>>>>>> And the next cycle is where I fail to understand the logging:
>>>>>>
>>>>>> [0.076s][debug][gc,heap           ] GC(2) Heap before GC 
>>>>>> invocations=2 (full 0): garbage-first heap   total 20480K, used 
>>>>>> 7148K [0x00000007fec00000, 0x0000000800000000)
>>>>>> [0.076s][debug][gc,heap           ] GC(2)   region size 1024K, 2 
>>>>>> young (2048K), 1 survivors (1024K)
>>>>>>
>>>>>> How come 2 young, 1 survivors? When the previous cycle said 9 
>>>>>> Eden, 2 Survivor.
>>>>>>
>>>>>> Thank you,
>>>>>> Eugene.
>>>>>>
>>>>>> _______________________________________________
>>>>>> hotspot-gc-use mailing list
>>>>>> hotspot-gc-use at openjdk.java.net
>>>>>> https://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use
>>>>>>