Failing typeflow responsibility assertions

Tom Rodriguez Thomas.Rodriguez at Sun.COM
Thu May 22 09:47:51 PDT 2008


> I'm using LLVM, a library which (amongst other things) contains
> compiler backends for several platforms.  My original plan was to
> generate LLVM IR from one of the compilers' IRs but the problem
> with that is that LLVM IR is not assembly language -- assumptions
> embedded in C1 and C2 don't hold and working around each one is
> time consuming and hacky.  It might take me a month to make the
> register allocator cope with not having registers -- and until I
> had a working JIT there's no guarantee I wouldn't run across a
> terminal problem.

Why would you want to make the register allocator cope with not having 
registers?  If you are replacing the backend then you aren't using the 
existing register allocators.  You'd simply skip everything after the 
high-level IR is generated, and you'd be responsible for filling in a 
code buffer that could be turned into an nmethod by new_nmethod.  There 
are no problems in there that you aren't facing with writing a new 
compiler from scratch.  Writing and maintaining a good, correct Java 
front end is work that I would recommend you do everything you can to 
avoid, unless of course you think it sounds like fun.
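
The shape of such a back end would be roughly the sketch below.  The 
class name and the numbered steps are placeholders of mine, not real 
HotSpot code; only the compile_method entry point mirrors what C1 and 
C2 actually implement, and the registration details vary by version:

  // rough sketch only -- MyLLVMBackend is a placeholder, not a real class
  class MyLLVMBackend /* : public AbstractCompiler */ {
   public:
    // same entry point shape as C1/C2's compile_method
    void compile_method(ciEnv* env, ciMethod* target, int entry_bci) {
      // 1. reuse the existing front end: build C1's HIR (or C2's ideal
      //    graph) for target -- no new bytecode parser needed
      // 2. translate that IR to LLVM IR and let LLVM do instruction
      //    selection and register allocation
      // 3. copy the finished machine code into a CodeBuffer, along with
      //    the oopmaps, relocations and debug info discussed below
      // 4. hand the buffer back to the runtime, which wraps it in an
      //    nmethod through nmethod::new_nmethod
    }
  };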

These are the problems I think you are really facing, and maybe you 
already have answers for them:

1.  How do you derive oopmaps from the generated code?
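
Roughly, at every safepoint in the generated code you need to build 
something like the following; the slot numbers and frame size are made 
up, and the calls are from memory, so check oopMap.hpp:

  OopMapSet* oop_maps = new OopMapSet();
  OopMap*    map      = new OopMap(frame_size_in_slots, 0 /* arg_count */);
  // tell the GC which frame slots hold oops at this pc
  map->set_oop(VMRegImpl::stack2reg(receiver_slot));
  oop_maps->add_gc_map(safepoint_pc_offset, map);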

2.  How do you get hotspot's relocation information into the code you 
generate?
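
The code buffer needs relocInfo entries for anything the runtime has 
to find or rewrite later: embedded oops (relocInfo::oop_type), calls 
into the runtime (runtime_call_type), compiled Java calls 
(static_call_type, opt_virtual_call_type) and inline cache sites 
(virtual_call_type).  Roughly, an assembler-based back end brackets an 
embedded oop constant like this; the exact relocate() overloads differ 
between versions:

  // oop_index refers into the nmethod's oop table; the relocation lets
  // the GC find and update the embedded constant that follows
  __ relocate(oop_Relocation::spec(oop_index));
  // ... emit the oop constant itself ...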

3.  If you are intending to support deoptimization, how do you get at 
the locations of the needed values?  This is similar to 1 but may imply 
a little more complexity.
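
Concretely this is the DebugInformationRecorder side of things: at 
every potential deopt point you describe, per interpreter local and 
expression stack slot, where the value lives.  Very roughly, with the 
describe_scope arguments abbreviated:

  debug_info->add_safepoint(pc_offset, map);   // reuses the oopmap
  // "local 0 is in stack slot N, stack element 0 is in register R, ..."
  // debug_info->describe_scope(pc_offset, method, bci, locals, exprs, ...);
  debug_info->end_safepoint(pc_offset);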

4.  Some things we emit, like compiled inline caches and embedded 
references to oops, require instruction patterns that are patchable from 
the runtime.  Does LLVM provide the control you need?
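
To make that concrete: an inline cache site is emitted as a fixed-shape 
sequence (load a cached klass word into a known register, then call), 
and the runtime later rewrites the immediate and the call target in 
place at known offsets.  A toy illustration, not HotSpot code:

  #include <cstdint>
  #include <cstring>

  // the runtime patches a known offset inside the emitted sequence, so
  // the compiler must guarantee the instruction shape never changes
  void patch_ic_immediate(unsigned char* site, uint64_t new_klass_word) {
    // assume the site was emitted as 48 B8 <imm64> (movabs rax, imm64)
    // followed by a call; the immediate starts 2 bytes into the sequence
    std::memcpy(site + 2, &new_klass_word, sizeof(new_klass_word));
    // a real VM also has to worry about atomicity and icache flushing
  }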

5.  What's your plan for dealing with unloaded classes?  C2 does it by 
terminating the control flow at the unloaded bytecode and emitting an 
uncommon trap that will fall back to the interpreter.  The class will be 
loaded and a new compile of the code will be generated.  C1 does it by 
emitting special patchable instructions and rewriting them when it 
encounters them.  I don't know which one fits better with LLVM; C1 could 
probably be modified to use the uncommon trap strategy if patching were 
too difficult.
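
To make the first strategy concrete, the effect is roughly the 
following.  This is a sketch of the idea rather than the literal C2 
code path -- in reality typeflow marks the bytecode and the parser 
emits the trap before it ever gets this far:

  bool will_link;
  ciInstanceKlass* klass = iter().get_klass(will_link)->as_instance_klass();
  if (!will_link || !klass->is_initialized()) {
    // give up on this path: deoptimize to the interpreter, which will
    // load and initialize the class; the method is recompiled later
    uncommon_trap(Deoptimization::Reason_uninitialized,
                  Deoptimization::Action_reinterpret,
                  klass);
    return;
  }
  // ... normal fast-path allocation for a loaded, initialized klass ...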

There are probably other issues but those are the ones that jump out at me.

tom


> Cheers,
> Gary
> 
> Tom Rodriguez wrote:
>> I'm curious why you are spending time writing a new java compiler
>> front end instead of replacing the back end of c1 or c2.  Writing
>> and maintaining a new front end will be a bunch of work and it
>> doesn't seem necessary.  You should be able to generate code
>> starting either from C1 or C2's high level IR without that much
>> difficulty.  I know of licensees that do exactly that.
>>
>> tom
>>
>> Gary Benson wrote:
>>> Hi all,
>>>
>>> There's loads of bits in c2 that look like this:
>>>
>>>   bool will_link;
>>>   ciInstanceKlass* klass =
>>>     iter().get_klass(will_link)->as_instance_klass();
>>>   assert(will_link, "_new: typeflow responsibility");
>>>
>>> I'm doing the same in the JIT I'm writing, but I keep on failing
>>> these assertions.  Do I have to do something to tell the
>>> CompileBroker not to feed me methods full of unloaded stuff?
>>> Oh, and I'm using -Xmixed.
>>>
>>> Cheers,
>>> Gary
> 


