RFR(s): 8067187: -XX:MaxMetaspaceSize=20m -Xshare:dump caused JVM to crash
Calvin Cheung
calvin.cheung at oracle.com
Fri Dec 19 06:46:17 UTC 2014
Hi Stefan,
On 12/17/2014 12:59 AM, Stefan Karlsson wrote:
> Hi Calvin,
>
> On 2014-12-12 21:42, Calvin Cheung wrote:
>> JBS: https://bugs.openjdk.java.net/browse/JDK-8067187
>>
>> This fix is to add a check on the MaxMetaspaceSize when performing
>> CDS archive dumping.
>>
>> With the fix, instead of crashing, the VM will print an error message
>> like the following and exit:
>>
>> Java HotSpot(TM) 64-Bit Server VM warning:
>> The MaxMetaspaceSize of 20971520 bytes is not large enough.
>> Either don't specify the -XX:MaxMetaspaceSize=<size>
>> or increase the size to at least 33554432.
>>
>> Tested manually via the command line and with jtreg on the new test.
>>
>> webrev: http://cr.openjdk.java.net/~ccheung/8067187/webrev/
>
> Please, don't add this check and side-effect deep down in
> MetaspaceGC::can_expand. I think it would be better if you move this
> check further up in the call chain. Maybe Metaspace::initialize would
> be a good place. We already do CDS specific checks in that method, for
> example:
> assert(!DumpSharedSpaces || new_chunk != NULL, "should have enough
> space for both chunks");
Thanks for your review.
I've moved the check to Metaspace::initialize() before calling
get_initialization_chunk().
Also clarified a comment in Metaspace::global_initialize().
The updated webrev is at the same location:
http://cr.openjdk.java.net/~ccheung/8067187/webrev/
I've re-run the test via jprt.
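For readers following the thread, the dump-time guard being discussed can be sketched roughly as below. This is only an illustrative standalone sketch, not HotSpot's actual code: the function name, the `required` parameter, and the use of `fprintf` in place of the VM's warning machinery are all assumptions; the message text mirrors the one quoted earlier in the thread.

```cpp
#include <cstddef>
#include <cstdio>

// Hypothetical sketch of the check added for -Xshare:dump: before
// carving out the initialization chunks, verify that the configured
// MaxMetaspaceSize can hold the space CDS dumping needs. In the real
// fix this logic lives around Metaspace::initialize(); here it is a
// free function so the sketch can compile on its own.
static bool check_dump_time_metaspace_size(size_t max_metaspace_size,
                                           size_t required) {
  if (max_metaspace_size < required) {
    // The VM would emit a warning like the one quoted above and exit
    // instead of crashing later when the metaspace cannot expand.
    std::fprintf(stderr,
                 "The MaxMetaspaceSize of %zu bytes is not large enough.\n"
                 "Either don't specify the -XX:MaxMetaspaceSize=<size>\n"
                 "or increase the size to at least %zu.\n",
                 max_metaspace_size, required);
    return false;  // caller exits the VM
  }
  return true;  // enough room; proceed with dumping
}
```

With the values from the quoted warning, a 20971520-byte (20m) ceiling fails the check against a 33554432-byte requirement, which is exactly the scenario the bug report exercises.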
thanks,
Calvin
>
> Thanks,
> StefanK
>
>>
>> thanks,
>> Calvin
>
More information about the hotspot-runtime-dev mailing list