RFR: 8263377: Store method handle linkers in the 'non-nmethods' heap
Jorn Vernee
jvernee at openjdk.java.net
Fri Jun 3 18:56:53 UTC 2022
On Tue, 17 May 2022 23:19:54 GMT, Yi-Fan Tsai <duke at openjdk.java.net> wrote:
> 8263377: Store method handle linkers in the 'non-nmethods' heap
Some initial comments. The codeBlob code looks mostly good to me, but I don't have enough knowledge about the GC and IC changes to be able to say the same about those.
src/hotspot/cpu/aarch64/sharedRuntime_aarch64.cpp line 1168:
> 1166: int frame_complete = ((intptr_t)__ pc()) - start; // not complete, period
> 1167: __ flush();
> 1168: int stack_slots = SharedRuntime::out_preserve_stack_slots(); // no out slots at all, actually
It seems the `iid`, `start`, `frame_complete`, and `stack_slots` variables are no longer used. Please remove them.
Suggestion:
// First instruction must be a nop as it may need to be patched on deoptimisation
__ nop();
gen_special_dispatch(masm,
method,
in_sig_bt,
in_regs);
__ flush();
src/hotspot/cpu/arm/sharedRuntime_arm.cpp line 751:
> 749:
> 750: vmIntrinsics::ID iid = method->intrinsic_id();
> 751: intptr_t start = (intptr_t)__ pc();
Suggestion:
src/hotspot/cpu/ppc/sharedRuntime_ppc.cpp line 1629:
> 1627:
> 1628: vmIntrinsics::ID iid = method->intrinsic_id();
> 1629: intptr_t start = (intptr_t)__ pc();
Unused variables.
Suggestion:
src/hotspot/cpu/riscv/sharedRuntime_riscv.cpp line 1124:
> 1122:
> 1123: vmIntrinsics::ID iid = method->intrinsic_id();
> 1124: intptr_t start = (intptr_t)__ pc();
Suggestion:
src/hotspot/cpu/s390/sharedRuntime_s390.cpp line 1303:
> 1301: int total_in_args = method->size_of_parameters();
> 1302: vmIntrinsics::ID iid = method->intrinsic_id();
> 1303: intptr_t start = (intptr_t) __ pc();
Suggestion:
src/hotspot/cpu/x86/sharedRuntime_x86_32.cpp line 1328:
> 1326:
> 1327: vmIntrinsics::ID iid = method->intrinsic_id();
> 1328: intptr_t start = (intptr_t)__ pc();
Suggestion:
src/hotspot/cpu/x86/sharedRuntime_x86_64.cpp line 1428:
> 1426:
> 1427: vmIntrinsics::ID iid = method->intrinsic_id();
> 1428: intptr_t start = (intptr_t)__ pc();
Suggestion:
src/hotspot/share/ci/ciMethod.cpp line 1131:
> 1129: CompiledMethod* cm = (code == nullptr) ? nullptr : code->as_compiled_method_or_null();
> 1130: if (cm != NULL && (cm->comp_level() == CompLevel_full_optimization)) {
> 1131: _instructions_size = cm->insts_end() - cm->verified_entry_point();
So, the old code only sets `_instructions_size` if `comp_level` is `CompLevel_full_optimization`. Since the old shared runtime code used `new_native_nmethod`, which ends up setting the comp level to `CompLevel_none`, this branch is also not being taken in the current code?
In that case, this looks good.
src/hotspot/share/classfile/javaClasses.cpp line 2675:
> 2673: CodeBlob* nm = method->code();
> 2674: if (WizardMode && nm != NULL) {
> 2675: sprintf(buf + (int)strlen(buf), "(nmethod " INTPTR_FORMAT ")", (intptr_t)nm);
Maybe change the text here as well, since this might not be an nmethod.
Suggestion:
sprintf(buf + (int)strlen(buf), "(code " INTPTR_FORMAT ")", (intptr_t)nm);
src/hotspot/share/classfile/systemDictionary.cpp line 2049:
> 2047: assert(spe != NULL && spe->method() != NULL, "");
> 2048: assert(Arguments::is_interpreter_only() || (spe->method()->has_compiled_code() &&
> 2049: spe->method()->code()->code_begin() == spe->method()->from_compiled_entry()),
Not sure if this assert is still needed/useful, since we should only encounter MH linker blobs here, for which the `from_compiled_entry` is always `code_begin`.
I'd suggest just removing this assert.
src/hotspot/share/code/codeBlob.cpp line 342:
> 340: MethodHandleIntrinsicBlob* MethodHandleIntrinsicBlob::create(const methodHandle& method,
> 341: CodeBuffer *code_buffer) {
> 342: code_buffer->finalize_oop_references(method);
Can you please explain why this is needed? (I'm a bit surprised since the constructor asserts that `total_oop_size` is 0)
Thanks.
src/hotspot/share/code/codeBlob.cpp line 347:
> 345: {
> 346: MutexLocker mu(CodeCache_lock, Mutex::_no_safepoint_check_flag);
> 347: int mhi_size = CodeBlob::allocation_size(code_buffer, sizeof(MethodHandleIntrinsicBlob));
The allocation size could also be computed before taking the code cache lock; `BufferBlob` does this, for example, but others don't. I think it makes sense to move it outside the mutex block to minimize the time we hold the lock, and I don't see anything in the size computation that requires it. (Maybe we should clean up the other cases in a follow-up as well.)
src/hotspot/share/code/codeBlob.cpp line 354:
> 352: if (mhi != NULL) {
> 353: debug_only(mhi->verify();) // might block
> 354: }
This is debug only. Looking at CodeCache::allocate, it can only return `NULL` if the allocation size is `<= 0`, in which case an earlier assert will already fire. So, this null check doesn't seem needed?
Suggestion:
debug_only(mhi->verify();) // might block
src/hotspot/share/code/codeBlob.cpp line 360:
> 358: void MethodHandleIntrinsicBlob::verify() {
> 359: // Make sure all the entry points are correctly aligned for patching.
> 360: NativeJump::check_verified_entry_alignment(code_begin(), code_begin());
This method only seems implemented on x86, which ignores the first argument. Maybe it's a good opportunity to clean up the first argument?
src/hotspot/share/code/codeBlob.cpp line 366:
> 364: if (!CodeCache::contains((void*)this)) {
> 365: fatal("MethodHandleIntrinsicBlob at " INTPTR_FORMAT " not in zone", p2i(this));
> 366: }
Why would this not be true? We allocate from the code cache after all.
src/hotspot/share/compiler/compileBroker.hpp line 307:
> 305: TRAPS);
> 306:
> 307: static CodeBlob* compile_method(const methodHandle& method,
Not so sure about these changes. It seems to me that if a method is requested to be compiled, it should always result in an nmethod.
Alternatively, would it be possible to keep these functions returning an `nmethod` but add an assert at the start to check that the passed `method` is not a method handle intrinsic?
src/hotspot/share/oops/method.cpp line 1016:
> 1014: CodeBlob* nm = code(); // Put it into local variable to guard against concurrent updates
> 1015: if (nm != NULL) {
> 1016: nm->as_compiled_method()->make_not_entrant();
Looks like this code is guarding against a situation where 2 threads race to generate a native wrapper.
I think there is a potential race here with the new method handle linker blobs as well: if two threads race to generate a linker, one ends up overwriting the other. In that case this cast would fail, and if we only make CompiledMethods non-entrant, the losing blob would also leak.
It might be enough to check here whether `nm` is a CompiledMethod and make it not entrant, and if it's an MH linker, free it directly (e.g. by delegating to `RuntimeBlob::free`). Though, I'm not sure if it's okay to grab the code cache lock for that here. Maybe there's a way to mark the blob as unused, and have it be cleaned up later.
src/hotspot/share/oops/method.cpp line 1304:
> 1302: assert(blob->is_mh_intrinsic(), "must be MH intrinsic");
> 1303: MethodHandleIntrinsicBlob* mhi = blob->as_mh_intrinsic();
> 1304: return (mhi->method() == nullptr) || (mhi->method() == this);
The assert looks redundant, since the cast on the next line already checks it.
Also, can `mhi->method()` really be `nullptr` here?
-------------
PR: https://git.openjdk.java.net/jdk/pull/8760