Generational ZGC issue

Stefan Karlsson stefan.karlsson at oracle.com
Thu Feb 15 11:05:26 UTC 2024


Hi Johannes,

We tried to look at the log files and the JFR files, but couldn't find 
an OutOfMemoryError in any of them. Do you think you could try to rerun 
and capture the entire GC log from the OutOfMemoryError run?
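
For example, the full GC log for that run could be captured with 
something along these lines (the file name is just a placeholder):

  -Xlog:gc*:file=zgc-generational-oome.log:time,uptime,level,tags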

A few things to note:

1) You seem to be running the Graal compiler. Graal doesn't support 
Generational ZGC, so you are going to run different JIT compilers when 
you compare Singlegen ZGC with Generational ZGC (see the example flags 
after this list).

2) It's not clear to me that the provided JFR files match the provided 
log files.

3) The JFR files show that -XX:+UseLargePages is used, but the gc+init 
log shows 'Large Page Support: Disabled'; you might want to look into 
why that is the case (the example flags after this list include large 
pages as well).

4) The singlegen JFR file has a -Xlog:gc:g1-chicago.log line. It should 
probably be named zgc-chicago.log.
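
To make the comparison apples-to-apples (points 1 and 3), flag 
combinations along the following lines could be used for the two runs. 
Note that -XX:-UseJVMCICompiler (to fall back from Graal to C2 when 
running on GraalVM) and -XX:+ZGenerational are assumptions on my part 
based on the recordings, so adjust them to your actual setup:

  # Singlegen ZGC, Graal JIT disabled so both runs use C2
  -XX:+UseZGC -XX:-UseJVMCICompiler -XX:+UseLargePages

  # Generational ZGC (JDK 21+), same compiler and large-page settings
  -XX:+UseZGC -XX:+ZGenerational -XX:-UseJVMCICompiler -XX:+UseLargePages

If 'Large Page Support: Disabled' persists even with -XX:+UseLargePages, 
the OS large-page configuration (e.g. vm.nr_hugepages on Linux) is 
usually the first thing to check.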

Cheers,
StefanK

On 2024-02-14 17:36, Johannes Lichtenberger wrote:
> Hello,
>
> a test of my little DB project fails with an out-of-memory error when 
> using generational ZGC, but not with non-generational ZGC or G1.
>
> To be honest, I guess the allocation rate, and thus the GC pressure, 
> when traversing a resource in SirixDB is unacceptable. The strategy is 
> to create fine-grained nodes from JSON input and store these in a trie. 
> First, a 3.8 GB JSON file is shredded and imported. Next, a preorder 
> traversal walks the generated trie, whose leaf pages store 1024 nodes 
> each (~300_000_000 nodes in total), deserializing them one by one. The 
> pages are referenced in memory through PageReference::setPage. 
> Furthermore, a Caffeine page cache caches the PageReferences (keys) and 
> the pages (values) and sets the reference back to null once entries are 
> about to be evicted (PageReference.setPage(null)).
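
For reference, my reading of the caching pattern described above is 
roughly the following Caffeine setup. This is only an illustrative 
sketch: the PageReference/Page stand-ins and the maximum size are 
placeholders, not SirixDB's actual types or configuration.

  import com.github.benmanes.caffeine.cache.Cache;
  import com.github.benmanes.caffeine.cache.Caffeine;
  import com.github.benmanes.caffeine.cache.RemovalCause;

  // Minimal stand-ins for the project's types, just for illustration.
  final class Page { }

  final class PageReference {
    private volatile Page page;
    void setPage(Page page) { this.page = page; }
    Page getPage() { return page; }
  }

  class PageCacheSketch {
    // Pages stay strongly reachable via PageReference.setPage(page) until
    // the cache evicts the entry and the listener clears that reference,
    // which is what keeps the live set bounded.
    static final Cache<PageReference, Page> PAGE_CACHE = Caffeine.newBuilder()
        .maximumSize(100_000)  // placeholder capacity
        .evictionListener((PageReference ref, Page page, RemovalCause cause) -> {
          if (ref != null) {
            ref.setPage(null);  // drop the strong reference so the page can be collected
          }
        })
        .build();
  }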
>
> However, I think the whole strategy of having to keep nodes in memory 
> might not be best. Maybe it's better to use off-heap memory for the 
> pages themselves with MemorySegments, but the pages are not of a fixed 
> size, so it may get tricky.
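
As an aside, variable-sized off-heap pages are doable with the Foreign 
Function & Memory API (final in JDK 22, preview before that), since 
every allocation carries its own byte size. A minimal sketch, with the 
method and parameter names made up for illustration:

  import java.lang.foreign.Arena;
  import java.lang.foreign.MemorySegment;

  class OffHeapPageSketch {
    // Copies one serialized page into its own off-heap segment; the arena
    // bounds the lifetime of everything allocated from it, so closing it
    // frees all pages at once.
    static MemorySegment copyPageOffHeap(Arena arena, byte[] serializedPage) {
      MemorySegment segment = arena.allocate(serializedPage.length);
      MemorySegment.copy(MemorySegment.ofArray(serializedPage), 0, segment, 0,
          serializedPage.length);
      return segment;
    }
  }

  // Usage: try (Arena arena = Arena.ofConfined()) { ... } and allocate the
  // pages of one traversal from that arena.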
>
> The test mentioned is this: 
> https://github.com/sirixdb/sirix/blob/248ab141632c94c6484a3069a056550516afb1d2/bundles/sirix-core/src/test/java/io/sirix/service/json/shredder/JsonShredderTest.java#L69
>
> I can upload the JSON file somewhere for a couple of days if needed.
>
> Caused by: java.lang.OutOfMemoryError
>     at 
> java.base/jdk.internal.reflect.DirectConstructorHandleAccessor.newInstance(DirectConstructorHandleAccessor.java:62)
>     at 
> java.base/java.lang.reflect.Constructor.newInstanceWithCaller(Constructor.java:502)
>     at 
> java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:486)
>     at 
> java.base/java.util.concurrent.ForkJoinTask.getThrowableException(ForkJoinTask.java:542)
>     at 
> java.base/java.util.concurrent.ForkJoinTask.reportException(ForkJoinTask.java:567)
>     at 
> java.base/java.util.concurrent.ForkJoinTask.invoke(ForkJoinTask.java:670)
>     at 
> java.base/java.util.stream.ForEachOps$ForEachOp.evaluateParallel(ForEachOps.java:160)
>     at 
> java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateParallel(ForEachOps.java:174)
>     at 
> java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:233)
>     at 
> java.base/java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:596)
>     at 
> io.sirix.access.trx.page.NodePageTrx.parallelSerializationOfKeyValuePages(NodePageTrx.java:442)
>
> I've uploaded several JFR recordings and logs over here (apart from 
> the async-profiler JFR files, the zgc-detailed log is probably the 
> most interesting):
>
> https://github.com/sirixdb/sirix/tree/main/bundles/sirix-core
>
> kind regards
> Johannes


