RFR (S): 8016270: "CodeCache is full" message and compilation disabled, but codecache is not actually full

Vladimir Kozlov vladimir.kozlov at oracle.com
Mon Aug 12 18:39:33 PDT 2013


On 8/12/13 5:19 PM, Chris Plummer wrote:
> Hi Vladimir,
>
> Thanks for the review. I also have an updated webrev that implements the change we talked about so there will be no
> attempt to compile methods that are larger than 1/2 the size of the smallest method that had an nmethod allocation
> failure. I should get that tested and out for review this week.
>
> More comments below:
>
> On 8/12/13 12:12 PM, Vladimir Kozlov wrote:
>>
>> globals.hpp - should be "if a code blob failed allocation is smaller than"
> Actually my wording is correct, and so is yours, but I'm not sure either reads that well. Here are a few choices:
>
> Turn compiler off if a code blob allocation smaller than this size failed.
> Turn compiler off if a code blob failed allocation is smaller than this size.
> Turn compiler off if the allocation of a code blob smaller than this size failed.
>
>
> Turn compiler off if the allocation failed for a code blob smaller than this size.

This one, I think.
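
For reference, with that wording the declaration in globals.hpp would look something like the following (a sketch only,
using the usual product() macro style; the exact type and default are assumptions, not the webrev diff):

  product(uintx, StopCompilationFailedSize, 1*K,                            \
          "Turn compiler off if the allocation failed for a code blob "    \
          "smaller than this size")                                         \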

>>
>> There are other places during C2 compilation where we fail compilation due to no space in CodeCache.
>>
>> C2 fails compilation during scratch buffer creation in Compile::init_scratch_buffer_blob() with a buffer size slightly
>> over 1Kb. So could you make StopCompilationFailedSize 2Kb?
> I don't see how these two sizes relate. If the failure in Compile::init_scratch_buffer_blob() triggered calling
> handle_full_code_cache (it doesn't look like it does), then the failed_code_size passed in would be 0, so it's already
> smaller than any possible default value for StopCompilationFailedSize. Basically the approach is that any failed
> allocation of something other than an nmethod will result in compilation being disabled, but that assumes that
> handle_full_code_cache is called. See below.

I am not talking about calling handle_full_code_cache() in init_scratch_buffer_blob().
What I am trying to say is that you will never get a failed nmethod allocation size < 1Kb, because compilation will fail
before that in init_scratch_buffer_blob() and never reach ciEnv::register_method() if it can't allocate 1200 bytes for
the scratch buffer.
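
A sketch of the bail-out Vladimir describes (shape only, with assumed names; not the actual HotSpot source):

  // Inside Compile::init_scratch_buffer_blob(); size is roughly
  // MAX_inst_size + MAX_stubs_size + MAX_const_size (~1200 bytes).
  BufferBlob* blob = BufferBlob::create("Compile::scratch_buffer", size);
  if (blob == NULL) {
    // The CodeCache cannot fit even the ~1200-byte scratch buffer, so
    // the compile bails out here; ciEnv::register_method() is never
    // reached, and thus no nmethod allocation smaller than the scratch
    // buffer can ever be reported as failed.
    record_failure("CodeCache is full");
    return;
  }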

>>
>> Also we bailout compilation in Compile::init_buffer() and several other similar places in output.cpp:
>>
>>   // Have we run out of code space?
>>   if ((cb->blob() == NULL) || (!CompileBroker::should_compile_new_jobs())) {
>>     C->record_failure("CodeCache is full");
>>     return NULL;
>>   }
>>
>> I am worried that we don't call handle_full_code_cache() in such cases.
> Compile::init_scratch_buffer_blob() has the same problem. Isn't this a pre-existing bug of not calling
> handle_full_code_cache? I don't think I've done anything to change behavior for these cases.

Yes, it is an existing problem. I am just saying that we should fix it as well, maybe as a separate fix.
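
One possible shape for such a separate fix (hypothetical; the call might need different placement or arguments) would
be to report the failure at those bailout sites too:

  // Have we run out of code space?
  if ((cb->blob() == NULL) || (!CompileBroker::should_compile_new_jobs())) {
    if (cb->blob() == NULL) {
      // Hypothetical addition: tell the broker that a CodeCache
      // allocation failed, so the sweeper is kicked and compilation
      // policy updated, as already happens for failed nmethods.
      CompileBroker::handle_full_code_cache();
    }
    C->record_failure("CodeCache is full");
    return NULL;
  }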

Thanks,
Vladimir

>
> thanks,
>
> Chris
>>
>> Thanks,
>> Vladimir
>>
>> On 7/31/13 6:19 PM, Chris Plummer wrote:
>>> Hi,
>>>
>>> Please review the following:
>>>
>>> http://cr.openjdk.java.net/~cjplummer/8016270/webrev.00/
>>>
>>> The purpose of this fix is to prevent the compiler from being disabled due to a combination of fragmentation and a very
>>> large nmethod that cannot be allocated. I've added a new command line flag called StopCompilationFailedSize (defaults to
>>> 1k). If the allocation that fails is smaller than this size, then the compiler will still be disabled. If bigger, the
>>> compiler will remain enabled, allowing for smaller methods to still be compiled. However, the sweeper will still be
>>> invoked in hopes of making room for the large method so it eventually can be compiled.
>>>
>>> The failed_allocation_size now passed into CompileBroker::handle_full_code_cache() defaults to 0, and is only explicitly
>>> passed in when an nmethod allocation fails. I figured this was the only place likely to see larger allocations, and
>>> other allocations would typically be well under 1k. However, if something like an adapter was over 1k and failed to
>>> allocate, no real harm is done. It just means the compiler won't be turned off. failed_allocation_size is really more of
>>> a hint for CompileBroker::handle_full_code_cache() and is not required to be accurate.
>>>
>>> In CodeCache::print_summary in codeCache.cpp, I made a minor and somewhat related fix. I removed the word "contiguous"
>>> from the message when the compiler is currently disabled. It used to be true, but hasn't been since some fixes Nils
>>> made a while back.
>>>
>>> I've verified that with this fix I no longer see the "codecache is full" messages when running Nashorn + v8 with a 20m
>>> codecache (normally it uses about 58m). Benchmark results aren't changing much, although the stdev seems to be lower
>>> with the fix. I think this is because compilation was almost always quickly re-enabled anyway because the codecache was
>>> normally in a state such that CodeCache::needs_flushing() would return false. In fact, sometimes compilation would be
>>> re-enabled even before CompileBroker::handle_full_code_cache() had a chance to call CodeCache::print_summary(), which
>>> then ended up showing that the compiler is enabled.
>>>
>>> I've tested with JPRT, jck (vm), JTReg compiler tests, and vm.quick.testlist.
>>>
>>> Chris
>
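
For readers following along, the mechanism Chris describes above amounts to something like this (a sketch using the
names from the mail; the enum value and the sweeper call are assumptions, and the actual diff is in the webrev):

  void CompileBroker::handle_full_code_cache(size_t failed_allocation_size) {
    // failed_allocation_size defaults to 0 and is only passed explicitly
    // when an nmethod allocation fails, so it is a hint, not exact.
    if (failed_allocation_size < StopCompilationFailedSize) {
      // A small allocation failed: the cache really is full, so turn
      // compilation off as before (assumed enum value).
      set_should_compile_new_jobs(stop_compilation);
    }
    // Either way, invoke the sweeper in hopes of freeing enough space
    // for the large method to be compiled eventually (assumed entry point).
    NMethodSweeper::handle_full_code_cache(true);
  }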

