[9] RFR(L): 8046809: vm/mlvm/meth/stress/compiler/deoptimize CodeCache is full.
Vladimir Kozlov
vladimir.kozlov at oracle.com
Tue Oct 14 02:00:24 UTC 2014
Looks good, but you need to wait for CCC approval for the removal of the product flags.
And you need to add them to the obsolete_jvm_flags table in arguments.cpp.
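
For reference, an entry in that table looks roughly like the sketch below (the versions are placeholders; they have to
name the release the flag becomes obsolete in and the release until which it is still accepted and ignored):

    static ObsoleteFlag obsolete_jvm_flags[] = {
      ...
      { "CodeCacheMinimumFreeSpace", JDK_Version::jdk(9), JDK_Version::jdk(10) },
      ...
    };
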
Thanks,
Vladimir
On 10/13/14 7:55 AM, Albert Noll wrote:
> Hi Vladimir,
>
> thanks for the feedback. Here is the updated webrev:
>
> http://cr.openjdk.java.net/~anoll/8046809/webrev.04/
>
> Best,
> Albert
>
> On 10/10/2014 06:57 PM, Vladimir Kozlov wrote:
>> On 10/10/14 7:30 AM, Albert Noll wrote:
>>> Tobias, Vladimir, Dean, Nils, thanks for looking at the patch and for your feedback.
>>>
>>> @Tobias
>>> I have adapted your suggestions.
>>>
>>> @Vladimir
>>> I tried to add a condition to SafepointSynchronize::is_cleanup_needed() that returns 'true' if the code cache has less
>>> than 10% free space. Unfortunately, that does not fix the bug. A safepoint interval of 1000ms seems to be too coarse
>>> to free space in the code cache fast enough.
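>>>
>>> The check I added was roughly of the following form (a sketch; CodeCache::unallocated_capacity() and
>>> CodeCache::max_capacity() stand in for whatever accounting calls the real condition uses):
>>>
>>>   // report that a cleanup safepoint is needed once less than 10% of the code cache is free
>>>   if (CodeCache::unallocated_capacity() * 10 < CodeCache::max_capacity()) {
>>>     return true;
>>>   }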
>>
>> Okay, thank you for trying.
>>
>>>
>>> It seems that the concept of critical memory allocations (allocations that must succeed for the VM to be able to
>>> continue to execute) is broken. For example, we need to compile method handle intrinsics (otherwise we get a
>>> java.lang.VirtualMachineError: out of space in CodeCache for method handle intrinsic). However, method handle
>>> intrinsics are not critical allocations. For this reason, test/compiler/startup/SmallCodeCacheStartup.java fails with
>>> the 32-bit client VM. We will get a new nightly bug soon... The current patch fixes this.
>>>
>>> I want to keep the removal of critical allocations in this patch, since aggressive sweeping (enabled by the VM
>>> operation that forces stack scanning) replaces the concept of critical allocations in the code cache. I think
>>> these two changes belong together. If you still want me to, I will do a separate change for critical allocation
>>> removal.
>>
>> Okay, I am fine with this removal.
>>
>>>
>>> @Dean
>>> I removed the corresponding code. It was a leftover fragment that I forgot to delete.
>>>
>>> @Nils
>>> CodeCacheMinimumFreeSpace did not guarantee that the VM could continue to execute. In the failing test
>>> that is reported in the bug, 500K was not enough to generate all adapters. The test can be changed such
>>> that a CodeCacheMinimumFreeSpace size of 1m, 2m, 3m, etc. is too small as well. What value should we choose?
>>>
>>> Also, as noted above, method handle intrinsics need to be compiled to be able to continue execution.
>>> Method handle intrinsics are currently not critical, so we would have to make them critical allocations. As a
>>> consequence, we would also have to re-evaluate the default size of CodeCacheMinimumFreeSpace.
>>>
>>> The current approach enables very aggressive sweeping if the code cache is 90% full. It is then very likely that code
>>> will be flushed from the code cache within the next 5-10 invocations of CodeCache::allocate(). In a sense, the
>>> remaining 10% can be considered a 'critical region' that is used as a 'buffer' until we free space in the code cache.
>>> This bug shows that the proposed strategy solves the problem better than CodeCacheMinimumFreeSpace does.
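>>>
>>> As a sketch, the trigger boils down to a check of this form (names and the constant are placeholders; the 10%
>>> value is the 'buffer' described above):
>>>
>>>   const double kAggressiveSweepFreeFraction = 0.10;  // remaining free space that acts as the 'buffer'
>>>   double free_fraction = (double)CodeCache::unallocated_capacity() / (double)CodeCache::max_capacity();
>>>   if (free_fraction < kAggressiveSweepFreeFraction) {
>>>     // post the VM operation that forces stack scanning so the sweeper can flush code immediately
>>>   }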
>>>
>>> Maybe we should provide the user with control over this threshold, i.e., replace CodeCacheMinimumFreeSpace
>>> with a different command line flag that allows the user to specify the percentage (currently 90%) at which aggressive
>>
>> Yes, it should be a flag. I think it should be a percentage.
>>
>> Thanks,
>> Vladimir
>>
>>> sweeping starts. We could also use a fixed size. I don't think that having two thresholds (the threshold where we start
>>> aggressive sweeping AND CodeCacheMinimumFreeSpace) is necessary.
>>
>>>
>>> What do you think?
>>>
>>>
>>>
>>> The performance runs show that waking up the sweeper thread for every allocation has a negative impact
>>> on startup time. To fix this, in the current patch the sweeper thread is only woken up if the code cache is
>>> more than 10% occupied. I will issue a new performance run and compare it against b34 (which includes
>>> the segmented code cache).
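>>>
>>> The wake-up guard in the allocation path is then roughly (names approximate; this assumes CodeCache_lock is
>>> already held on the allocation path, as it is today):
>>>
>>>   // only notify the sweeper thread once more than 10% of the code cache is occupied,
>>>   // so that startup does not pay for needless wake-ups
>>>   if (CodeCache::unallocated_capacity() * 10 < CodeCache::max_capacity() * 9) {
>>>     CodeCache_lock->notify();
>>>   }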
>>>
>>> Here is the new webrev:
>>> http://cr.openjdk.java.net/~anoll/8046809/webrev.03/
>>>
>>> Best,
>>> Albert
>>>
>>>
>>> On 10/10/2014 11:01 AM, Nils Eliasson wrote:
>>>> Hi, Albert
>>>>
>>>> Overall a very welcome change to move the sweeper into a separate thread.
>>>>
>>>> On 2014-10-09 10:24, Albert Noll wrote:
>>>>> The patch also removes CodeCacheMinimumFreeSpace and 'critical' code cache allocations. Due to a bug in
>>>>> heap.cpp, CodeCacheMinimumFreeSpace was in fact not reserved for 'critical' allocations. The following
>>>>> check produces an underflow if heap_unallocated_capacity() < CodeCacheMinimumFreeSpace:
>>>>>
>>>>> segments_to_size(number_of_segments) > (heap_unallocated_capacity() - CodeCacheMinimumFreeSpace)
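>>>>>
>>>>> Both operands of the subtraction are unsigned (size_t), so instead of going negative the result wraps around,
>>>>> for example:
>>>>>
>>>>>   size_t unallocated = heap_unallocated_capacity();        // say 300K
>>>>>   size_t bound = unallocated - CodeCacheMinimumFreeSpace;  // 300K - 500K wraps to a huge value
>>>>>   // segments_to_size(number_of_segments) > bound is then never true, so the allocation
>>>>>   // is not rejected and the 'reserved' space is not actually protected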
>>>>>
>>>>> Since the critical part of the code cache was never used for its intended purpose, and we have not seen problems
>>>>> as a result, we can remove it.
>>>>
>>>> Are you sure? The reasons for CodeCacheMinimumFreeSpace and critical allocations were problems with code cache
>>>> fragmentation in long-running applications, where small compilations would starve out the adapters and cause a VM
>>>> shutdown. You won't see this other than in really long-running tests. It might be broken, but then we should open
>>>> a bug and fix it. (And in the long run we should handle the fragmentation with relocating code.)
>>>>
>>>> Regards,
>>>> //Nils
>>>
>