RFR: JDK-8209389: SIGSEGV in WalkOopAndArchiveClosure::do_oop_work
Ioi Lam
ioi.lam at oracle.com
Wed Aug 15 01:05:48 UTC 2018
Hi Jiangli,
The changes look good. I think it's OK to exit the dumping VM because
normally we should not be archiving such large objects.
For the various messages, I think we should include the object size (in
bytes).
Also, for this message:
 395     if (archived == NULL) {
 396       ResourceMark rm;
 397       tty->print("Failed to archive %s object " PTR_FORMAT " in sub-graph",
 398                  obj->klass()->external_name(), p2i(obj));
 399       vm_exit(1);
 400     }
In addition to the size, I think we should also add obj->print_on(tty)
to help diagnose such problems.
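Something along these lines, perhaps (just a sketch to show the idea;
the exact wording, and computing the size as obj->size() * HeapWordSize,
are up to you):

    if (archived == NULL) {
      ResourceMark rm;
      // Report the object size in bytes and dump the object itself,
      // so oversized objects are easy to identify from the log.
      tty->print_cr("Failed to archive %s object " PTR_FORMAT
                    " (" SIZE_FORMAT " bytes) in sub-graph",
                    obj->klass()->external_name(), p2i(obj),
                    (size_t)obj->size() * HeapWordSize);
      obj->print_on(tty);
      vm_exit(1);
    }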
Thanks
- Ioi
On 8/14/18 5:50 PM, Jiangli Zhou wrote:
> Please review the following fix that addresses the issue for
> JDK-8209389. A Java object that's larger than one GC region cannot be
> archived, as we currently don't support objects spanning more than one
> archive heap region. The archiving code needs to check the
> MetaspaceShared::archive_heap_object return value and handle failure
> accordingly. Thanks Claes for finding the edge case and reporting the
> problem!
>
> webrev: http://cr.openjdk.java.net/~jiangli/8209389/webrev.00
>
> bug: https://bugs.openjdk.java.net/browse/JDK-8209389
>
> - java_lang_Class::archive_basic_type_mirrors
> The archived object returned by MetaspaceShared::archive_heap_object
> should never be NULL in these cases (basic type mirrors are not
> humongous). Added an assert.
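(I.e., something like the following -- a sketch of the intent, not the
actual webrev code:

    oop archived = MetaspaceShared::archive_heap_object(m, THREAD);
    // Basic type mirrors are small, so archiving must always succeed.
    assert(archived != NULL, "sanity");
)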
>
> - HeapShared::archive_reachable_objects_from_static_field
> If the sub-graph entry object is too large, archiving is skipped for
> its referenced sub-graph and the dumping process continues.
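(Presumably something like this -- again just a sketch, not the webrev
code:

    oop archived = MetaspaceShared::archive_heap_object(obj, THREAD);
    if (archived == NULL) {
      // Entry object is too large to archive: skip this sub-graph,
      // but let the dump continue.
      return;
    }
)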
>
> - WalkOopAndArchiveClosure::do_oop_work
> Abort dumping when an archiving failure due to an extra-large object
> is encountered during sub-graph archiving.
>
> Tested with the new test case included in the webrev. Tier1 - tier4
> testing is in progress.
>
> Thanks,
> Jiangli