RFR: JDK-8209389: SIGSEGV in WalkOopAndArchiveClosure::do_oop_work

coleen.phillimore at oracle.com
Thu Aug 16 12:08:49 UTC 2018



http://cr.openjdk.java.net/~jiangli/8209389/webrev.01/src/hotspot/share/memory/heapShared.cpp.udiff.html

+ if (archived == NULL) {
+   ResourceMark rm;
+   tty->print("Failed to archive %s object ("
+              PTR_FORMAT "), size[" SIZE_FORMAT "] in sub-graph",
+              obj->klass()->external_name(), p2i(obj), (size_t)obj->size());
+   obj->print_on(tty);
+   vm_exit(1);
+ }

Can you use log_error() for this printing?
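Something like this, perhaps (untested sketch; assumes the existing cds
log tag, plus logging/log.hpp and logging/logStream.hpp for the
LogTarget/LogStream part):

   if (archived == NULL) {
     ResourceMark rm;
     log_error(cds)("Failed to archive %s object ("
                    PTR_FORMAT "), size[" SIZE_FORMAT "] in sub-graph",
                    obj->klass()->external_name(), p2i(obj), (size_t)obj->size());
     // Send the object dump to the same log target instead of tty:
     LogTarget(Error, cds) lt;
     LogStream ls(lt);
     obj->print_on(&ls);
     vm_exit(1);
   }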

http://cr.openjdk.java.net/~jiangli/8209389/webrev.01/src/hotspot/share/memory/metaspaceShared.cpp.udiff.html

+ } else {
+   tty->print("Cannot allocate space for object " PTR_FORMAT
+              " in archived heap region", p2i(obj));
+   vm_exit(1);
+ }


Same here.  You may still need vm_exit(1), but it would be nice if you
didn't.
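For the logging part, something like (untested sketch, same assumptions
as above):

   } else {
     log_error(cds)("Cannot allocate space for object " PTR_FORMAT
                    " in archived heap region", p2i(obj));
     vm_exit(1);
   }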

Is there an RFE to convert all CDS messages to logging?

The fix looks good though.

Thanks,
Coleen


On 8/15/18 8:45 PM, Jiangli Zhou wrote:
> Here is the updated webrev: 
> http://cr.openjdk.java.net/~jiangli/8209389/webrev.01/.
>
> Thanks,
>
> Jiangli
>
>
> On 8/14/18 9:34 PM, Jiangli Zhou wrote:
>> Hi Ioi,
>>
>> Thanks for the review and suggestions. I'll incorporate them.
>>
>> Thanks,
>>
>> Jiangli
>>
>>
>> On 8/14/18 6:05 PM, Ioi Lam wrote:
>>> Hi Jiangli,
>>>
>>> The changes look good. I think it's OK to exit the dumping VM 
>>> because normally we should not be archiving such large objects.
>>>
>>> For the various messages, I think we should include the object size 
>>> (in bytes).
>>>
>>> Also, for this message:
>>>
>>>  395       if (archived == NULL) {
>>>  396         ResourceMark rm;
>>>  397         tty->print("Failed to archive %s object " PTR_FORMAT " in sub-graph",
>>>  398                    obj->klass()->external_name(), p2i(obj));
>>>  399         vm_exit(1);
>>>  400       }
>>>
>>> In addition to the size, I think we should also add obj->print_on(tty)
>>> to help diagnose the problem.
>>>
>>> Thanks
>>>
>>> - Ioi
>>>
>>> On 8/14/18 5:50 PM, Jiangli Zhou wrote:
>>>> Please review the following fix that addresses the issue for
>>>> JDK-8209389. A Java object that's larger than one GC region cannot
>>>> be archived, as we currently don't support objects spanning more
>>>> than one archive heap region. The archiving code needs to check the
>>>> MetaspaceShared::archive_heap_object return value and handle
>>>> failure accordingly. Thanks Claes for finding the edge case and
>>>> reporting the problem!
>>>>
>>>>   webrev: http://cr.openjdk.java.net/~jiangli/8209389/webrev.00
>>>>
>>>>   bug: https://bugs.openjdk.java.net/browse/JDK-8209389
>>>>
>>>> - java_lang_Class::archive_basic_type_mirrors
>>>>   The archived object returned by MetaspaceShared::archive_heap_object
>>>> should never be NULL in these cases (basic type mirrors are not
>>>> humongous). Added an assert (see the sketch after this list).
>>>>
>>>> - HeapShared::archive_reachable_objects_from_static_field
>>>>   If the sub-graph entry object is too large, archiving is skipped
>>>> for its referenced sub-graph and the dumping process continues.
>>>>
>>>> - WalkOopAndArchiveClosure::do_oop_work
>>>>   Abort dumping when an archiving failure due to an extra-large
>>>> object is encountered during sub-graph archiving.
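>>>>
>>>> In sketch form (not the exact webrev code; local variable names
>>>> assumed), the three call sites handle a NULL return differently:
>>>>
>>>>   // java_lang_Class::archive_basic_type_mirrors: must succeed
>>>>   oop archived = MetaspaceShared::archive_heap_object(m, THREAD);
>>>>   assert(archived != NULL, "basic type mirrors are not humongous");
>>>>
>>>>   // HeapShared::archive_reachable_objects_from_static_field:
>>>>   // skip this sub-graph, dumping continues
>>>>   oop af = MetaspaceShared::archive_heap_object(f, THREAD);
>>>>   if (af == NULL) {
>>>>     return;
>>>>   }
>>>>
>>>>   // WalkOopAndArchiveClosure::do_oop_work: print diagnostics
>>>>   // (see the review above) and abort the dump
>>>>   oop target = MetaspaceShared::archive_heap_object(obj, THREAD);
>>>>   if (target == NULL) {
>>>>     vm_exit(1);
>>>>   }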
>>>>
>>>> Tested with the new test case included in the webrev. Tier1 - tier4
>>>> testing is in progress.
>>>>
>>>> Thanks,
>>>> Jiangli
>>>
>>
>


