Discussion: improve humongous objects handling for G1

Man Cao manc at google.com
Fri Jan 17 00:53:04 UTC 2020

Hi all,

While migrating our workload from CMS to G1, we found many production
applications suffer from humongous allocations.
The default threshold for humongous objects (anything larger than half
the heap region size) is often too small for our applications, whose
heaps range from 2GB to 15GB.
Humongous allocations caused a noticeable increase in the frequency of
concurrent old-gen collections and mixed collections, as well as higher
CPU usage.
We could advise applications to increase G1HeapRegionSize, but some
applications still suffer even with G1HeapRegionSize=32M.
We could also advise applications to refactor code to break up large
objects, but that is a high-cost effort that is not always feasible.
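For readers less familiar with G1's sizing rules: an allocation is
treated as humongous when it is larger than half the heap region size,
so even the 32M maximum region size leaves any object above 16MB
humongous. A minimal sketch of that threshold arithmetic (the class and
method names below are invented for illustration, not a JVM API):

```java
// Sketch of G1's humongous threshold: an object is humongous when it
// is larger than half the heap region size. Class/method names are
// illustrative only; the real region size is set via
// -XX:G1HeapRegionSize.
public class HumongousThreshold {
    static boolean isHumongous(long objectBytes, long regionBytes) {
        return objectBytes > regionBytes / 2;
    }

    public static void main(String[] args) {
        long mb = 1024 * 1024;
        // Even with the 32M maximum region size, a 20MB object (e.g. a
        // large byte[] buffer) is still humongous.
        System.out.println(isHumongous(20 * mb, 32 * mb)); // true
        // A 4MB object is humongous with 4MB regions, but not with 32MB.
        System.out.println(isHumongous(4 * mb, 4 * mb));   // true
        System.out.println(isHumongous(4 * mb, 32 * mb));  // false
    }
}
```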

We'd like to work together with the OpenJDK community to improve G1's
handling of humongous objects.
Thomas Schatzl mentioned to me a few efforts/ideas on this front in an
offline chat:
a. Allocation into tail regions of humongous objects: JDK-8172713.
b. Commit additional virtual address space for humongous objects.
c. Improve the region selection heuristics (e.g., first-fit, best-fit) for
humongous objects.
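To make idea c. a bit more concrete, here is a hedged sketch of
first-fit vs. best-fit selection of a contiguous run of free regions
for a humongous allocation. The boolean free-map and the method names
are invented for illustration; G1's actual data structures and
heuristics differ.

```java
// Illustrative-only sketch: choose a contiguous run of free heap
// regions for a humongous object. First-fit takes the earliest run
// that is large enough; best-fit takes the smallest adequate run,
// which can preserve larger gaps for future humongous allocations.
public class RegionFit {
    // Start index of the first run of >= need free regions, or -1.
    static int firstFit(boolean[] free, int need) {
        int run = 0;
        for (int i = 0; i < free.length; i++) {
            run = free[i] ? run + 1 : 0;
            if (run >= need) return i - need + 1;
        }
        return -1;
    }

    // Start index of the smallest run that still fits, or -1.
    static int bestFit(boolean[] free, int need) {
        int bestStart = -1, bestLen = Integer.MAX_VALUE;
        int i = 0;
        while (i < free.length) {
            if (!free[i]) { i++; continue; }
            int start = i;
            while (i < free.length && free[i]) i++;   // scan the run
            int len = i - start;
            if (len >= need && len < bestLen) {
                bestLen = len;
                bestStart = start;
            }
        }
        return bestStart;
    }

    public static void main(String[] args) {
        // F F F U F F : a 3-region gap, then a 2-region gap.
        boolean[] free = { true, true, true, false, true, true };
        System.out.println(firstFit(free, 2)); // 0 (earliest big-enough gap)
        System.out.println(bestFit(free, 2));  // 4 (smallest adequate gap)
    }
}
```

The difference matters for fragmentation: first-fit splinters the large
gap at index 0, while best-fit leaves it intact for a later, larger
humongous allocation.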

I didn't find open CRs for b. and c. Could someone give pointers?
Are there any other ideas/prototypes on this front?


More information about the hotspot-gc-dev mailing list