RFR: 8332139: SymbolTableHash::Node allocations allocate twice the required memory
Coleen Phillimore
coleenp at openjdk.org
Tue Jun 4 15:45:11 UTC 2024
On Tue, 4 Jun 2024 15:36:36 GMT, Coleen Phillimore <coleenp at openjdk.org> wrote:
>> The symbols are inline and allocated together with the ConcurrentHashTable (CHT) Nodes. The calculation used for the required size is `alloc_size = size + value.byte_size() + value.effective_length();`
>>
>> Where
>> * `size == sizeof(SymbolTableHash::Node) == sizeof(void*) + sizeof(Symbol)`
>> * `value.byte_size() == dynamic_sizeof(Symbol) == sizeof(Symbol) + <bytes beyond object>`
>> * `value.effective_length() == dynamic_sizeof(Symbol) - sizeof(Symbol) == <bytes beyond object>`
>>
>> So `alloc_size == (sizeof(void*) + sizeof(Symbol)) + dynamic_sizeof(Symbol) + (dynamic_sizeof(Symbol) - sizeof(Symbol))`, which simplifies to `sizeof(void*) /* node metadata */ + 2 * dynamic_sizeof(Symbol)`, one whole `dynamic_sizeof(Symbol)` more than required.
>>
>> Because using the CHT with dynamically sized (and inlined) types requires knowing about its implementation details, I chose to make the functionality for calculating the allocation size a property of the CHT. It now queries the CHT for the node allocation size given the dynamic size required for the VALUE.
>>
>> The only current (implicit) restriction on using dynamically sized (and inlined) types in the CHT is that the _value field's C++ object ends where the Node object ends, so that there are no padding bytes where the dynamic payload is allocated (effectively `sizeof(VALUE) % alignof(Node) == 0`, as long as there are no non-standard alignment fields in the Node metadata). I chose to test this as a runtime assert that the _value ends where the Node object ends, instead of a static assert on the alignment, as it seemed to show the intent of the check more explicitly.
>>
>> Testing: tier1-7
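
To make the double-counting concrete, here is a minimal, self-contained sketch. `Symbol` and `Node` below are simplified stand-ins, not the real HotSpot types, and `old_alloc_size`/`needed_alloc_size` are hypothetical names; it just replays the arithmetic from the description above.

```c++
#include <cassert>
#include <cstddef>
#include <cstdio>

// Simplified stand-in for Symbol: byte_size() is the full dynamic size,
// effective_length() is only the bytes past the end of the object.
struct Symbol {
  size_t _tail;  // payload bytes allocated past the end of the object
  size_t byte_size() const { return sizeof(Symbol) + _tail; }  // dynamic_sizeof(Symbol)
  size_t effective_length() const { return _tail; }            // byte_size() - sizeof(Symbol)
};

// Simplified stand-in for SymbolTableHash::Node: metadata plus inlined VALUE.
struct Node {
  Node*  _next;   // node metadata (the sizeof(void*) term above)
  Symbol _value;  // inlined VALUE; its tail bytes must follow immediately
};

// The calculation quoted above: sizeof(Node) already contains one
// sizeof(Symbol), and byte_size() contains it again, so the Symbol is
// accounted for twice.
size_t old_alloc_size(const Symbol& v) {
  return sizeof(Node) + v.byte_size() + v.effective_length();
}

// What is actually required: the Node plus only the tail bytes.
size_t needed_alloc_size(const Symbol& v) {
  return sizeof(Node) + v.effective_length();
}

int main() {
  Symbol s{24};  // 24 tail bytes, chosen arbitrarily for the example
  printf("old:    %zu bytes\n", old_alloc_size(s));
  printf("needed: %zu bytes\n", needed_alloc_size(s));
  // The overshoot is exactly one dynamic_sizeof(Symbol):
  assert(old_alloc_size(s) - needed_alloc_size(s) == s.byte_size());
  // The layout restriction from the description: _value ends exactly
  // where the Node ends, so the dynamic payload needs no padding.
  assert(offsetof(Node, _value) + sizeof(Symbol) == sizeof(Node));
  return 0;
}
```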
>
> src/hotspot/share/classfile/symbolTable.cpp line 185:
>
>> 183: private:
>> 184: static void* allocate_node_impl(size_t size, Value const& value) {
>> 185: size_t alloc_size = SymbolTableHash::get_dynamic_node_size(value.byte_size());
>
> So the 'size' passed in is actually sizeof(NODE), right? Or is it sizeof(NODE) + sizeof(VALUE)? And to fix it, couldn't we just remove value.byte_size() from this calculation?
The real problem is that the composition of the `size` passed in is known to the concurrent hash table but not to the caller, so maybe this does help make it less error prone.
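
For reference, a hedged sketch of that direction, with the size calculation owned by the table itself. This is a simplified, hypothetical stand-in for the real ConcurrentHashTable, borrowing only the `get_dynamic_node_size` name from the hunk quoted above:

```c++
#include <cstddef>

// Hypothetical, pared-down table type; only the pieces needed to show
// where the node-size calculation now lives.
template <typename VALUE>
class ConcurrentHashTableSketch {
 public:
  struct Node {
    Node* _next;    // node metadata
    VALUE _value;   // inlined VALUE; dynamic tail bytes follow the Node
  };

  // Given the dynamic size of a VALUE (sizeof(VALUE) plus tail bytes),
  // return the full node allocation size. Callers no longer need to
  // know how the node size decomposes into metadata and inlined VALUE.
  static size_t get_dynamic_node_size(size_t value_byte_size) {
    return (sizeof(Node) - sizeof(VALUE)) + value_byte_size;
  }
};
```

With that, an allocator like allocate_node_impl only has to pass value.byte_size(), as in the hunk above, and the double-counting cannot recur.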
-------------
PR Review Comment: https://git.openjdk.org/jdk/pull/19214#discussion_r1626241160