RFR: 8186706: ArchivedObjectCache obj_hash() is broken
coleen.phillimore at oracle.com
Thu Aug 24 20:31:50 UTC 2017
Hi,
I'm glad that you changed this to C_HEAP allocated.
+ CDS_JAVA_HEAP_ONLY(_archive_object_cache = new (ResourceObj::C_HEAP,
      mtInternal) ArchivedObjectCache(););
This should probably be mtClass and not mtInternal.
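In other words, something along these lines (just a sketch of the suggested change, not a tested patch):

  + CDS_JAVA_HEAP_ONLY(_archive_object_cache = new (ResourceObj::C_HEAP,
        mtClass) ArchivedObjectCache(););   // mtClass instead of mtInternal
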
The other question I had is that I expected you to use obj->hash(), since the objects probably should (?) already have a hashcode installed when they are in the archive.
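For illustration, something along these lines (a sketch only; it assumes the identity hash is already installed in the archived object's header, which is the open question above):

  static unsigned obj_hash(oop const& p) {
    // Use the hashcode already recorded for the object instead of
    // hashing an address that may not be the real oop address.
    return (unsigned) p->identity_hash();
  }
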
Thanks,
Coleen
On 8/24/17 3:53 PM, Jiangli Zhou wrote:
> Hi Ioi,
>
> Since ResourceObj only allows delete for C_HEAP objects, we need to allocate _archive_object_cache as a C_HEAP object as well. Otherwise, we would hit the following assert:
>
> void ResourceObj::operator delete(void* p) {
>   assert(((ResourceObj *)p)->allocated_on_C_heap(),
>          "delete only allowed for C_HEAP objects");
>
> Here is the updated webrev that allocates/deallocates the _archive_object_cache table and nodes as C_HEAP objects.
> http://cr.openjdk.java.net/~jiangli/8186706/webrev.01/
>
> Thanks,
> Jiangli
>
>> On Aug 24, 2017, at 10:22 AM, Ioi Lam <ioi.lam at oracle.com> wrote:
>>
>> Hi Jiangli,
>>
>> The Nodes need to be deallocated in the ResourceHashtable destructor.
>>
>> ~ResourceHashtable() {
>>   if (ALLOC_TYPE == C_HEAP) {
>>     Node* const* bucket = _table;
>>     while (bucket < &_table[SIZE]) {
>>       Node* node = *bucket;
>>       while (node != NULL) {
>>         Node* cur = node;
>>         node = node->_next;
>>         delete cur;
>>       }
>>       ++bucket;
>>     }
>>   }
>> }
>>
>>
>> The problem with ResourceHashtable is that by default ALLOC_TYPE = ResourceObj::RESOURCE_AREA, but if your call path looks like this:
>>
>>
>> ResourceHashtable<...> table;
>> ...
>> {
>>   ResourceMark rm;
>>   ...
>>   {
>>     table.put(....);
>>   }
>> }
>>
>> The Node put into the table will end up being invalid, because it is allocated in the resource area that gets released when the ResourceMark goes out of scope.
>>
>> In your case, the code path between the allocation of the ResourceHashtable and the call to MetaspaceShared::archive_heap_object covers a few files. There's currently no ResourceMark in between. However, in the future, someone could potentially put in a ResourceMark and cause erratic failures.
>>
>> So, since you're fixing the hashtable code, I think it would be a good idea to change ALLOC_TYPE to ResourceObj::C_HEAP. When doing that, the table should also be cleaned up properly by invoking the ~ResourceHashtable() destructor via the delete operator.
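>>
>> For illustration, the cache type could then be declared roughly like this (a sketch only; the hash function, equality function and table size are placeholders for whatever the final patch uses):
>>
>>   // assumes utilities/resourceHash.hpp; template order is K, V, HASH, EQUALS, SIZE, ALLOC_TYPE
>>   typedef ResourceHashtable<oop, oop,
>>                             MetaspaceShared::obj_hash,
>>                             primitive_equals<oop>,
>>                             15889,                       // instead of the default 256
>>                             ResourceObj::C_HEAP> ArchivedObjectCache;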
>>
>>
>> Thanks
>>
>> - Ioi
>>
>>
>>
>> On 8/23/17 10:27 PM, Jiangli Zhou wrote:
>>> Hi Ioi,
>>>
>>> The table was not changed to be allocated as ResourceObj::C_HEAP. I see ‘ALLOC_TYPE’ only applies to the Nodes in the ResourceHashtable.
>>>
>>> Thanks,
>>> Jiangli
>>>
>>>> On Aug 23, 2017, at 6:03 PM, Ioi Lam <ioi.lam at oracle.com> wrote:
>>>>
>>>> Hi Jiangli,
>>>>
>>>> Since the table is allocated as ResourceObj::C_HEAP, it's better to delete it afterwards to avoid memory leaks:
>>>>
>>>> {
>>>>   NoSafepointVerifier nsv;
>>>>
>>>>   // Cache for recording where the archived objects are copied to
>>>>   MetaspaceShared::create_archive_object_cache();
>>>>
>>>>   tty->print_cr("Dumping String objects to closed archive heap region ...");
>>>>   NOT_PRODUCT(StringTable::verify());
>>>>   // The string space has maximum two regions. See FileMapInfo::write_archive_heap_regions() for details.
>>>>   _string_regions = new GrowableArray<MemRegion>(2);
>>>>   StringTable::write_to_archive(_string_regions);
>>>>
>>>>   tty->print_cr("Dumping objects to open archive heap region ...");
>>>>   _open_archive_heap_regions = new GrowableArray<MemRegion>(2);
>>>>   MetaspaceShared::dump_open_archive_heap_objects(_open_archive_heap_regions);
>>>>
>>>> +  MetaspaceShared::delete_archive_object_cache();
>>>>
>>>> }
>>>>
>>>> + static void delete_archive_object_cache() {
>>>> +   CDS_JAVA_HEAP_ONLY(delete _archive_object_cache; _archive_object_cache = NULL;);
>>>> + }
>>>>
>>>> Thanks
>>>> - Ioi
>>>>
>>>> On 8/23/17 4:24 PM, Jiangli Zhou wrote:
>>>>> Please review the following webrev that fixes the ArchivedObjectCache obj_hash() issue. The patch was from Ioi (thanks!). I will count myself as a reviewer.
>>>>>
>>>>> bug: https://bugs.openjdk.java.net/browse/JDK-8186706?filter=14921
>>>>> webrev: http://cr.openjdk.java.net/~jiangli/8186706/webrev.00/
>>>>>
>>>>> ArchivedObjectCache obj_hash() computes the hash from an incorrect address. The fix is to use the correct oop address. The default ResourceHashtable size is 256, which is too small when a large number of objects are archived, so the table now uses a much larger size (15889). The ArchivedObjectCache issue was noticed when one of the tests timed out on a slower Linux arm64 machine; with the fix the test finishes without timing out.
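>>>>>
>>>>> Roughly, the corrected hash function has this shape (an illustrative sketch, not the exact diff in the webrev):
>>>>>
>>>>>   static unsigned obj_hash(oop const& p) {
>>>>>     // Hash the object's own address (the oop value), not the address
>>>>>     // of the handle that refers to it.
>>>>>     return (unsigned) cast_from_oop<uintptr_t>(p);
>>>>>   }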
>>>>>
>>>>> Tested with tier4-comp tests.
>>>>>
>>>>> Thanks,
>>>>> Jiangli