[15] RFR(S): 8230402: Allocation of compile task fails with assert: "Leaking compilation tasks?"

Vladimir Kozlov vladimir.kozlov at oracle.com
Mon Apr 27 19:30:30 UTC 2020



On 4/27/20 2:26 AM, Christian Hagedorn wrote:
> Hi Vladimir
> 
> Thank you for your review!
> 
> On 24.04.20 23:57, Vladimir Kozlov wrote:
>> compileBroker.hpp and other places - when you have only one line you can use the DEBUG_ONLY() macro.
>> I think the dump() method should print only the duplicated tasks, to avoid having to search for duplicates among 5000 lines.
>>
>> Can you use TieredThresholdPolicy::compare_methods() in compare_by_weight()? It would be nice to have the same logic 
>> which determines which method should be compiled first or removed from the queue.
> 
> Sounds good, I included these in a new webrev:
> http://cr.openjdk.java.net/~chagedorn/8230402/webrev.01/

Looks better.

> 
>> Maybe we should mark methods which are removed from the queue, or use counters decay, or use some other mechanism to 
>> prevent methods from being put back into the queue immediately because their counters are high. You may not need to 
>> remove half of the queue in that case.
> 
> You mean we could, for example, just reset the invocation and backedge counters of methods removed from the queue? This 
> would probably be beneficial in a more general case than in my test case, where each method is only executed twice. As for 
> the number of tasks to drop, it was just a best guess. We could also choose to drop fewer, but it is probably hard to 
> determine the best value in general.

Another thought: instead of removing tasks from the queue, maybe we should stop putting new tasks on the queue when it 
becomes almost full (but continue profiling methods). For that we need a parameter (or a diagnostic flag) instead of the 
hardcoded 10000.

We are not using counters decay in Tiered mode because we would be losing/corrupting profiling data with it. We should 
avoid that. I just gave an example of what could be done.

One concern I have is that this used to be a check only in debug VMs. Now we are putting a limitation on compilations in 
the product VM, which may affect performance in some cases. We should check that.

Thanks,
Vladimir

> 
> Best regards,
> Christian
> 
>>
>> On 4/24/20 7:37 AM, Christian Hagedorn wrote:
>>> Hi
>>>
>>> Please review the following patch:
>>> https://bugs.openjdk.java.net/browse/JDK-8230402
>>> http://cr.openjdk.java.net/~chagedorn/8230402/webrev.00/
>>>
>>> This assert was hit very intermittently in an internal test until jdk-14+19. The test was changed afterwards and the 
>>> assert was not observed to fail anymore. However, the problem of having too many tasks in the queue is still present 
>>> (i.e. the compile queue grows too quickly and the compiler(s) are too slow to catch up). The assert can easily be 
>>> hit by creating many class loaders which load many methods that are immediately compiled due to a low 
>>> compilation threshold, as used in runA() in the testcase.
>>>
>>> Therefore, I suggest tackling this problem with a general solution that drops half of the compilation tasks in 
>>> CompileQueue::add() when a queue size of 10000 is reached and none of the other conditions of this assert hold (no 
>>> Whitebox or JVMCI compiler). For tiered compilation, the tasks with the lowest method weight() or whose methods are 
>>> unloaded are removed from the queue (without altering the order of the remaining tasks in the queue). Without tiered 
>>> compilation (i.e. SimpleCompPolicy), the tasks at the tail of the queue are removed. An additional verification in 
>>> debug builds should ensure that there are no duplicated tasks. I assume that part of the reason for the original 
>>> assert was to detect such duplicates.
>>>
>>> Thank you!
>>>
>>> Best regards,
>>> Christian
>>>


More information about the hotspot-compiler-dev mailing list