RFR: 8376125: Out of memory in the CDS archive error with lot of classes

Ioi Lam iklam at openjdk.org
Tue Feb 3 07:38:04 UTC 2026


On Mon, 2 Feb 2026 20:34:29 GMT, Ashutosh Mehra <asmehra at openjdk.org> wrote:

> > These usages cannot switch to _u4 versions because they need raw byte offsets (not scaled) and store them in 64-bit types.
> 
> I am not sure why we can't store the scaled offsets in such cases. Are the data structures not aligned properly, preventing them from being stored as scaled offsets? It's true they are stored in 64-bit types, but that doesn't prevent scaling the offsets. IMO I would rather have a single API to compute offsets; otherwise we will end up with a system that has two types of offsets, and it would be confusing when to use which. @iklam what do you think?

I tried switching everything to the encoded offsets, but the changes are quite extensive. Most tests passed, but serviceability/sa/ClhsdbCDSCore.java is still failing.

Here's my patch: https://github.com/openjdk/jdk/commit/3f6dea9963bba05ca2f22abfe02199fa7767f82d

I think this should be done in a follow-up RFE.

In this PR, I think we should update the APIs so it's more obvious which "offset" we are talking about:

- byte offsets should be called "raw offsets".
- the "u4 offset" should be called "encoded offset".

So we'd have

- `ArchiveUtils::encoded_offset_to_archived_address()`
- `ArchiveBuilder::buffer_to_raw_offset()`
- `ArchiveBuilder::any_to_encoded_offset()`
- etc

Eventually, I want to move the encoding logic to its own class (patterned after `CompressedKlassPointers`): https://github.com/openjdk/jdk/commit/8d5b3d5e684381005f1631e1577af2f716c4be9c
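To make the raw/encoded distinction concrete, here is a minimal, self-contained sketch of what such a dedicated encoding class could look like, patterned loosely after `CompressedKlassPointers`. All names here (`ArchiveOffsets`, `_base`, `_shift`) are hypothetical illustrations, not actual HotSpot APIs: a "raw offset" is the plain byte distance from the archive base, while an "encoded offset" is that distance scaled down by the archive's alignment shift so it fits in a u4.

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical sketch (not HotSpot code): centralizes the raw vs. encoded
// offset conversions in one class, analogous to CompressedKlassPointers.
class ArchiveOffsets {
  static char* _base;   // start of the mapped archive region
  static int   _shift;  // log2 of the archive object alignment
public:
  static void initialize(char* base, int shift) {
    _base  = base;
    _shift = shift;
  }

  // "Raw offset": plain byte distance from the archive base.
  // May need a 64-bit type for large archives.
  static uint64_t raw_offset(const void* p) {
    return (uint64_t)((const char*)p - _base);
  }

  // "Encoded offset": raw offset scaled down by the alignment shift,
  // so it fits in a u4 even when the raw offset exceeds 32 bits.
  static uint32_t encode(const void* p) {
    uint64_t raw = raw_offset(p);
    assert((raw & ((1u << _shift) - 1)) == 0 && "pointer must be aligned");
    uint64_t enc = raw >> _shift;
    assert(enc <= UINT32_MAX && "encoded offset must fit in u4");
    return (uint32_t)enc;
  }

  // Decode back to an address inside the mapped archive.
  static void* decode(uint32_t enc) {
    return _base + ((uint64_t)enc << _shift);
  }
};

char* ArchiveOffsets::_base  = nullptr;
int   ArchiveOffsets::_shift = 0;
```

Keeping both conversions behind one class would make it harder to accidentally mix the two offset kinds, which is the confusion the renamed APIs above are meant to avoid.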

-------------

PR Comment: https://git.openjdk.org/jdk/pull/29494#issuecomment-3839602927


More information about the hotspot-dev mailing list