RFR: 8376125: Out of memory in the CDS archive error with lot of classes
Alexey Bakhtin
abakhtin at openjdk.org
Tue Feb 3 21:31:46 UTC 2026
On Tue, 3 Feb 2026 00:38:31 GMT, Alexey Bakhtin <abakhtin at openjdk.org> wrote:
>>> There are two more APIs that return an "unchecked" offset: `ArchiveBuilder::buffer_to_offset()` and `ArchiveBuilder::any_to_offset()`. These APIs do not return the scaled offset. I think it is better to get rid of them and replace their usage with the `_u4` versions, which have the offset range check. I noticed there are only 1-2 call sites that use these "unchecked" APIs.
>>
>> Thanks for the suggestion. I looked into this and found that buffer_to_offset() and any_to_offset() serve a different purpose than the _u4 versions. The _u4 versions use scaled encoding (with MetadataOffsetShift) and return a compact u4 for metadata pointer storage. The raw versions return unscaled byte offsets stored in larger types. These usages cannot switch to _u4 versions because they need raw byte offsets (not scaled) and store them in 64-bit types.
>>
>> However, the comments for these methods may be misleading after introducing the _u4 methods. What do you think about revising the comments as follows:
>>
>> // The address p points to an object inside the output buffer. When the archive is mapped
>> // at the requested address, what's the byte offset of this object from _requested_static_archive_bottom?
>> uintx buffer_to_offset(address p) const;
>>
>> // Same as buffer_to_offset, except that the address p points to either (a) an object
>> // inside the output buffer, or (b), an object in the currently mapped static archive.
>> uintx any_to_offset(address p) const;
>>
>> // The reverse of buffer_to_offset_u4() - converts scaled offset units back to buffered address.
>> address offset_to_buffered_address(u4 offset_units) const;
>>
>>
>> I am also OK with renaming the methods to `buffer_to_offset_bytes()` and `any_to_offset_bytes()`, if the new names are clearer.
>>
>> @ashu-mehra What do you think?
>
> Hi @XueleiFan,
>
> I've tried the suggested code with an archive size of more than 4 GB, but it fails with an assertion:
>
> # Internal Error (aotMetaspace.cpp:1955), pid=96332, tid=4099
> # guarantee(archive_space_size < max_encoding_range_size - class_space_alignment) failed: Archive too large
>
> The CDS archive was created successfully:
>
> [187.068s][info ][cds ] Shared file region (rw) 0: 822453584 bytes, addr 0x0000000800004000 file offset 0x00004000 crc 0x132b652e
> [189.176s][info ][cds ] Shared file region (ro) 1: 3576115584 bytes, addr 0x0000000831060000 file offset 0x31060000 crc 0x71b020a2
> [197.653s][info ][cds ] Shared file region (ac) 4: 0 bytes
> [198.870s][info ][cds ] Shared file region (bm) 2: 56555664 bytes, addr 0x0000000000000000 file offset 0x1062d4000 crc 0xbd87f804
> [199.504s][info ][cds ] Shared file region (hp) 3: 16091256 bytes, addr 0x00000000ff000000 file offset 0x1098c4000 crc 0x7834b7c3
> [199.684s][debug ][cds ] bm space: 56555664 [ 1.3% of total] out of 56555664 bytes [100.0% used]
> [199.684s][debug ][cds ] hp space: 16091256 [ 0.4% of total] out of 16091256 bytes [100.0% used] at 0x0000000c6d000000
> [199.684s][debug ][cds ] total : 4471216088 [100.0% of total] out of 4471228536 bytes [100.0% used]
> @alexeybakhtin Thank you for testing bigger archives (> 4 GB).
>
> I was wondering if it is OK to support 4 GB+ archives when UseCompactObjectHeaders is false. The following prototype works. However, we prefer UseCompactObjectHeaders in practice, and the biggest archive (5.6M objects) is about 2.1 GB at the moment. Could we treat the 4 GB archive size limit as an open issue and address it separately if needed?
>
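The raw vs. scaled offset distinction discussed in the quote above can be sketched roughly as follows. Note this is a simplified illustration, not the actual ArchiveBuilder code: the function signatures are flattened to free functions, and the MetadataOffsetShift value of 3 (8-byte metadata alignment) is an assumption.

```cpp
#include <cassert>
#include <cstdint>

typedef uint32_t u4;
// Assumed: metadata in the archive is at least 8-byte aligned.
const int MetadataOffsetShift = 3;

// Raw byte offset: full pointer width, no scaling
// (in the spirit of buffer_to_offset()).
uintptr_t buffer_to_offset(uintptr_t bottom, uintptr_t p) {
  return p - bottom;
}

// Scaled offset: byte offset divided by the alignment unit, so a u4
// can address 2^(32 + shift) bytes instead of only 2^32
// (in the spirit of buffer_to_offset_u4()).
u4 buffer_to_offset_u4(uintptr_t bottom, uintptr_t p) {
  uintptr_t bytes = p - bottom;
  // Scaling only works because metadata addresses are aligned.
  assert((bytes & ((1u << MetadataOffsetShift) - 1)) == 0);
  return (u4)(bytes >> MetadataOffsetShift);
}

// Reverse of the scaled encoding
// (in the spirit of offset_to_buffered_address()).
uintptr_t offset_to_buffered_address(uintptr_t bottom, u4 offset_units) {
  return bottom + ((uintptr_t)offset_units << MetadataOffsetShift);
}
```

This illustrates why the raw variants cannot simply be replaced by the _u4 variants: a raw byte offset larger than 4 GB still fits in a uintptr_t, while the u4 encoding relies on the scaling to cover offsets beyond 2^32.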
My test passes with the provided patch and the default UseCompactObjectHeaders (false).
However, it crashes with UseCompactObjectHeaders=true:
size_t CompressedClassSpaceSize=18446744073592111104 is outside the allowed range [ 1048576 ... 4294967296 ]
#
# A fatal error has been detected by the Java Runtime Environment:
#
# Internal Error (jvmFlagAccess.cpp:117), pid=35685, tid=5891
# fatal error: FLAG_SET_ERGO cannot be used to set an invalid value for CompressedClassSpaceSize
The error is thrown from aotMetaspace.cpp:1963 (FLAG_SET_ERGO(CompressedClassSpaceSize, class_space_size);).
In my case, max_encoding_range_size=4294967296, archive_space_size=4400103424, and gap_size=12304384, so archive_space_size alone already exceeds max_encoding_range_size, and the calculation of class_space_size underflows.
-------------
PR Comment: https://git.openjdk.org/jdk/pull/29494#issuecomment-3843772344
More information about the hotspot-dev mailing list