RFR: 8332139: SymbolTableHash::Node allocations allocates twice the required memory

Coleen Phillimore coleenp at openjdk.org
Tue Jun 4 15:45:10 UTC 2024


On Mon, 13 May 2024 12:30:38 GMT, Axel Boldt-Christmas <aboldtch at openjdk.org> wrote:

> The symbols are inline and allocated together with the ConcurrentHashTable (CHT) Nodes. The calculation used for the required size is `alloc_size = size + value.byte_size() + value.effective_length();`
> 
> Where
>   * `size == sizeof(SymbolTableHash::Node) == sizeof(void*) + sizeof(Symbol)`
>   * `value.byte_size() == dynamic_sizeof(Symbol) == sizeof(Symbol) + <bytes beyond object>`
>   * `value.effective_length() == dynamic_sizeof(Symbol) - sizeof(Symbol) == <bytes beyond object>`
> 
> So `alloc_size` ends up being `sizeof(void*) /* node metadata */ + 2 * dynamic_sizeof(Symbol)`
> 
> Because using the CHT with dynamically sized (and inlined) types requires knowing about its implementation details, I chose to make the functionality for calculating the allocation size a property of the CHT. It now queries the CHT for the node allocation size given the dynamic size required for the VALUE.
> 
> The only current (implicit) restriction on using dynamically sized (and inlined) types in the CHT is that the _value field's C++ object ends where the Node object ends, so there are no padding bytes where the dynamic payload is allocated (effectively `sizeof(VALUE) % alignof(Node) == 0`, as long as there are no non-standard alignment fields in the Node metadata). I chose to test this as a runtime assert that the _value ends where the Node object ends, rather than a static assert on the alignment, as that seemed to show the intent of the check more explicitly.
> 
> Running testing tier1-7
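
To make the double counting in the quoted calculation concrete, here is a minimal standalone sketch. FakeSymbol, FakeNode and the payload size are made up for illustration; only the arithmetic mirrors the description above.

    #include <cassert>
    #include <cstddef>

    // Stand-in types with made-up sizes, purely to illustrate the arithmetic;
    // the real Symbol/Node layouts differ.
    struct FakeSymbol { char fixed[16]; };               // plays the role of Symbol
    struct FakeNode   { void* next; FakeSymbol value; }; // plays the role of SymbolTableHash::Node

    int main() {
      const size_t payload          = 10;                           // bytes stored beyond the Symbol object
      const size_t size             = sizeof(FakeNode);             // 8 + 16 == 24 on LP64
      const size_t byte_size        = sizeof(FakeSymbol) + payload; // dynamic size of the value: 26
      const size_t effective_length = payload;                      // 10

      const size_t alloc_size = size + byte_size + effective_length; // current calculation: 60
      const size_t needed     = size + payload;                      // node metadata + Symbol + payload: 34

      // The Symbol bytes are counted twice: sizeof(void*) + 2 * dynamic size of the Symbol.
      assert(alloc_size == sizeof(void*) + 2 * byte_size);
      assert(alloc_size - needed == sizeof(FakeSymbol) + payload);
      return 0;
    }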

Yes, I think this is a helpful patch and fixes the bug.

src/hotspot/share/classfile/symbolTable.cpp line 185:

> 183: private:
> 184:   static void* allocate_node_impl(size_t size, Value const& value) {
> 185:     size_t alloc_size = SymbolTableHash::get_dynamic_node_size(value.byte_size());

So the 'size' passed in is actually sizeof(NODE), right, or is it sizeof(NODE) + sizeof(VALUE)? Also, to fix it, don't we just remove value.byte_size() from this calculation?
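
Assuming 'size' here really is sizeof(Node), and the Node already contains the Symbol (per the description, sizeof(SymbolTableHash::Node) == sizeof(void*) + sizeof(Symbol)), the simpler fix and the patch compute the same number. A minimal sketch with toy types (ToySymbol/ToyNode and the sizes are hypothetical):

    #include <cassert>
    #include <cstddef>

    // Toy stand-ins, illustration only; not the real HotSpot types.
    struct ToySymbol { char fixed[16]; };
    struct ToyNode   { void* next; ToySymbol value; };

    int main() {
      const size_t payload   = 10;                            // bytes beyond the Symbol object
      const size_t byte_size = sizeof(ToySymbol) + payload;   // value.byte_size()

      // "Just drop value.byte_size()" variant: size + value.effective_length()
      const size_t simple_fix = sizeof(ToyNode) + payload;

      // get_dynamic_node_size-style variant used by the patch:
      const size_t patch_fix  = sizeof(ToyNode) - sizeof(ToySymbol) + byte_size;

      // As long as _value ends where the node ends, the two agree.
      assert(simple_fix == patch_fix);
      return 0;
    }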

src/hotspot/share/utilities/concurrentHashTable.inline.hpp line 1065:

> 1063:   assert(value_size >= sizeof(VALUE), "must include the VALUE");
> 1064:   return sizeof(Node) - sizeof(VALUE) + value_size;
> 1065: }

I'm not sure this makes it less error-prone, if I'm right that the size passed in is sizeof(Node) + sizeof(VALUE).
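
Worked through on toy stand-in types (names and sizes below are hypothetical), the formula above yields node metadata plus the full dynamic value size, and the end-of-node condition from the description is what makes that correct:

    #include <cassert>
    #include <cstddef>

    // Toy stand-ins for Node/VALUE, made up for illustration only.
    struct ToyValue { char fixed[16]; };
    struct ToyNode  { void* next; ToyValue value; };

    // Sketch of the helper above, applied to the toy types.
    static size_t toy_dynamic_node_size(size_t value_size) {
      assert(value_size >= sizeof(ToyValue) && "must include the VALUE");
      return sizeof(ToyNode) - sizeof(ToyValue) + value_size;
    }

    int main() {
      const size_t payload    = 10;                          // bytes beyond the ToyValue object
      const size_t value_size = sizeof(ToyValue) + payload;  // dynamic size of the value

      // The _value member must end exactly where the node ends (the runtime
      // assert mentioned in the description); otherwise the dynamic payload
      // would not start directly behind the node.
      assert(offsetof(ToyNode, value) + sizeof(ToyValue) == sizeof(ToyNode));

      // Result: node metadata + full dynamic value size (8 + 26 == 34 on LP64).
      assert(toy_dynamic_node_size(value_size) == sizeof(void*) + value_size);
      return 0;
    }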

-------------

Marked as reviewed by coleenp (Reviewer).

PR Review: https://git.openjdk.org/jdk/pull/19214#pullrequestreview-2096778534
PR Review Comment: https://git.openjdk.org/jdk/pull/19214#discussion_r1626235425
PR Review Comment: https://git.openjdk.org/jdk/pull/19214#discussion_r1626236348

