Request for reviews (S): 6876276: assert(!is_visited, "visit only once")

Vladimir Kozlov Vladimir.Kozlov at Sun.COM
Fri Aug 28 10:47:26 PDT 2009


OK.

Vladimir

Changpeng Fang wrote:
> On 08/27/09 17:38, Vladimir Kozlov wrote:
>> Changpeng,
>>
>> I think you need additional check "current != first_mem &&" to stop 
>> the loop:
>>
>> +       for (Node* current = last_mem; current != 
>> ld->in(MemNode::Memory);
>>
> Based on our discussion just now, an assert may be more appropriate 
> here, because
> if we see first_mem in the loop body, the memory graph must have been 
> corrupted
> already:
> 
> +      for (Node* current = last_mem; current != ld->in(MemNode::Memory);
> +           current=current->in(MemNode::Memory)) {
> +        assert(current != first_mem, "corrupted memory graph");
> +        if (current->is_Mem() && !independent(current, ld)) {
> +          schedule_last = false; // a later store depends on this load
> +          break;
> +        }
> +      }
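[Editorial note: as a standalone sketch of the check in the patch above. The `MemNode` class, `canScheduleLast`, and `dependsOnLoad` names are illustrative, not C2's; in C2 the memory input is `in(MemNode::Memory)` and dependence comes from `independent()`.]

```java
import java.util.HashSet;
import java.util.Set;

// Stand-in for C2's memory subgraph: each node knows its memory input,
// forming a chain  last_mem -> ... -> (load's memory input) -> ... -> first_mem.
class MemNode {
    final String name;          // for readability only
    final MemNode memoryInput;  // corresponds to in(MemNode::Memory)
    final boolean isMem;        // true for loads/stores
    MemNode(String name, MemNode memoryInput, boolean isMem) {
        this.name = name; this.memoryInput = memoryInput; this.isMem = isMem;
    }
}

public class ScheduleLastCheck {
    // Walk the memory chain from lastMem back toward the load's memory input.
    // If any memory node on the way depends on the load, the load cannot be
    // scheduled last. Reaching firstMem means the memory graph is corrupted,
    // so it is asserted rather than used to terminate the loop.
    static boolean canScheduleLast(MemNode lastMem, MemNode loadMemInput,
                                   MemNode firstMem, Set<MemNode> dependsOnLoad) {
        for (MemNode current = lastMem; current != loadMemInput;
             current = current.memoryInput) {
            assert current != firstMem : "corrupted memory graph";
            if (current.isMem && dependsOnLoad.contains(current)) {
                return false; // a later store depends on this load
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // Chain: firstMem <- store1 <- store2; the load's memory input is
        // store1's state, and lastMem is store2's state.
        MemNode firstMem = new MemNode("firstMem", null, false);
        MemNode store1   = new MemNode("store1", firstMem, true);
        MemNode store2   = new MemNode("store2", store1, true);
        Set<MemNode> deps = new HashSet<>(Set.of(store2)); // store2 depends on the load
        System.out.println(canScheduleLast(store2, store1, firstMem, deps)); // prints false
    }
}
```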
> 
> Thanks for pointing this out.
> 
> Changpeng
> 
>> Vladimir
>>
>> Changpeng Fang wrote:
>>> http://cr.openjdk.java.net/~cfang/6876276/webrev.00/
>>>
>>> Problem:
>>> SuperWord scheduling of loads uses the memory state of the last 
>>> executed load in the packet for the SuperWord load. This is not 
>>> correct because memory order may be violated. The test case that 
>>> exhibits the problem looks like this:
>>> store1
>>> load1
>>> store2
>>> load2
>>> after scheduling, it becomes:
>>> store1
>>> store2
>>> load1
>>> load2
>>> It turns out that store2 depends on load1, so the memory edge for the 
>>> superword load (combination of load1 and load2)
>>> is not correct.
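[Editorial note: to make the shape above concrete, a scalar loop with that store/load interleaving might look like the purely illustrative Java below (not the actual regression test). If both loads were scheduled after both stores, store2 could no longer consume load1's value, which is why the combined load's memory edge would be wrong.]

```java
import java.util.Arrays;

// Hypothetical loop body exhibiting the store1/load1/store2/load2 pattern,
// with store2 consuming the value produced by load1.
public class StoreLoadPattern {
    static void kernel(int[] a, int[] b, int n) {
        for (int i = 0; i < n - 1; i++) {
            a[i]     = b[i];       // store1
            int t    = a[i];       // load1
            a[i + 1] = t + 1;      // store2: depends on load1
            b[i]     = a[i + 1];   // load2, feeding a store
        }
    }

    public static void main(String[] args) {
        int[] a = new int[3];
        int[] b = {1, 2, 3};
        kernel(a, b, 3);
        // prints [1, 2, 3] [2, 3, 3]
        System.out.println(Arrays.toString(a) + " " + Arrays.toString(b));
    }
}
```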
>>>
>>> Solution:
>>> Schedule the superword loads based on dependence (we already do 
>>> this for superword stores; I don't know why I didn't do the same 
>>> for the loads at the time).
>>>
>>> Minor changes: *Use "97" as the exit code (instead of "-1") for the 
>>> tests in test/compiler/6636138*
>>>
>>> Tests:
>>> JPRT, CompileTheWorld, and *tests in test/compiler/6636138*
>>>
>>> Thanks,
>>>
>>> Changpeng
> 


More information about the hotspot-compiler-dev mailing list