review (M) for 6879063: SA should use hsdis for disassembly

Volker Simonis volker.simonis at gmail.com
Fri Sep 18 10:33:39 PDT 2009


Hi,

sorry, I was still preparing an answer to the previous mail thread...

So here are some comments on this particular change:

hsdis-demo.c
-----------------
- newer versions of gcc/ld may reorder functions within the
executable. As far as I know there's no easy (and portable) way of
getting the size of a function in C, and rather than start reading the
ELF information of the executable I would suggest the following simple
hack to make hsdis-demo work in any case, even if the function
'end_of_file()' is optimized out or relocated before main():

  void *start = (void*) &main;
  void *end = (void*) &end_of_file;
#if defined(__ia64) || defined(__powerpc__)
  /* On IA64 and PPC function pointers are pointers to function descriptors */
  start = *((void**)start);
  end = *((void**)end);
#endif
  disassemble(start, (end > start) ? end : (void*)((char*)start + 256));

Of course 256 bytes is just a rough estimate and could be replaced by
something more appropriate (see the glibc-specific sketch below for one
non-portable alternative)...
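
Just for reference - this is not part of the webrev and glibc-specific,
so not a real substitute for the hack above: the size of a function can
be read from the ELF symbol table with dladdr1() (the symbol has to end
up in .dynsym, e.g. by linking with -rdynamic, and you need -ldl). A
minimal sketch:

/* Hypothetical, glibc-only sketch: query a function's size from the
 * ELF symbol table via dladdr1(). */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <link.h>
#include <stdio.h>

static size_t function_size(void* fn) {
  Dl_info info;
  const ElfW(Sym)* sym = NULL;
  if (dladdr1(fn, &info, (void**) &sym, RTLD_DL_SYMENT) != 0 && sym != NULL)
    return (size_t) sym->st_size;   /* 0 if the symbol has no recorded size */
  return 0;
}

int main(void) {
  printf("size of main(): %zu bytes\n", function_size((void*) &main));
  return 0;
}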

hsdis.c
---------
- add the following to native_arch_name() for ppc64, ia64 and s390x
support (a placement sketch follows below):

#ifdef LIBARCH_ia64
    res = "ia64";
#endif
#ifdef LIBARCH_ppc64
    res = "powerpc:common64";
#endif
#ifdef LIBARCH_s390x
    res = "s390:64-bit";
#endif
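
These strings are the BFD printable architecture names that the
binutils disassembler expects. Just to show where the blocks would go -
the surrounding code is reproduced from memory and may differ slightly
from the actual hsdis.c:

static const char* native_arch_name() {
  const char* res = NULL;
  /* ... existing LIBARCH_i386 / LIBARCH_amd64 / LIBARCH_sparc* cases ... */
#ifdef LIBARCH_ia64
    res = "ia64";
#endif
#ifdef LIBARCH_ppc64
    res = "powerpc:common64";
#endif
#ifdef LIBARCH_s390x
    res = "s390:64-bit";
#endif
  if (res == NULL)
    res = "architecture not set in Makefile!";
  return res;
}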

- I'm still not happy with decode_instructions_virtual() since it's
still not possible to decode a single instruction. When I first read
your previous answer I thought you meant the "number of instructions
to decode" with the new 'length' argument. But now that it is the
length of the underlying buffer, it's still not easy to decode a
single instruction on non-RISC architectures. So what about yet
another argument which specifies the number of instructions to decode
(a rough sketch of what I mean follows below)? Or do you have a better idea?
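
Something along these lines is what I have in mind; the parameter list
below is only a placeholder (I'm deliberately not reproducing the real
signature from the webrev), the point is just the extra count argument:

#include <stdint.h>
#include <stddef.h>

/* Hypothetical prototype: everything except 'max_instructions' merely
 * stands in for whatever the real entry point already takes. */
void* decode_instructions_virtual(uintptr_t start_va,      /* virtual address of the code   */
                                  unsigned char* buffer,   /* buffer holding the code bytes */
                                  uintptr_t length,        /* buffer length in bytes        */
                                  int max_instructions,    /* proposed: stop after this many
                                                              instructions, <= 0 = no limit */
                                  void* event_callback, void* event_stream,
                                  void* printf_callback, void* printf_stream,
                                  const char* options);

/* Decoding exactly one instruction at 'va' then becomes possible even
 * on a variable-length ISA, without guessing its length up front: */
static void* decode_one(uintptr_t va, unsigned char* code, size_t code_size) {
  return decode_instructions_virtual(va, code, (uintptr_t) code_size, 1,
                                     NULL, NULL, NULL, NULL, "");
}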

Makefile
-----------
- add:

CFLAGS/ppc64    += -m64

for ppc64 support. (Notice that I only have access to, and have only
tested on, 64-bit PPC Linux!)

- 'make both' doesn't work on a 64-bit Linux (it only builds the
64-bit library!). To make it work you have to change the ARCH
definition to something like:

CPU             = $(shell uname -m)
ARCH1=$(CPU:x86_64=amd64)
ARCH2=$(ARCH1:i686=i386)
ifndef LP64
ARCH=$(ARCH2:amd64=i386)
else
ARCH=$(ARCH2)
endif

Notice that with this change 'make demo' will build the 32-bit demo
even on a 64-bit Linux. You have to use 'LP64=1 make demo' to build
the 64-bit demo. This should be documented in the README file. (The
same is already true for Solaris - 'make demo' builds the 32-bit demo
by default and 'LP64=1 make demo' builds the 64-bit one.)

- I didn't build for Windows, but Emacs syntax highlighting gives me an
error because of an unbalanced " in:

CFLAGS		+= LIBARCH=\"$(LIBARCH)\""


I just wondered if the last " is really needed?

Regards,
Volker

On 9/18/09, Tom Rodriguez <Thomas.Rodriguez at sun.com> wrote:
> http://javaweb.sfbay.sun.com/~never/webrev/6879063/
>
>  The SA has Java based disassemblers for x86 and sparc but not for
>  amd64.  Instead of porting to amd64 we should switch over to using
>  hsdis for it like the JVM does.  This required a new entry point into
>  hsdis, decode_instructions_virtual, which separates the address of the
>  code being disassembled from the buffer containing the code.  The
>  existing uses of decode_instructions have been updated to use the new
>  interface and the SA Disassembler has Java native methods that call
>  into hsdis and call back up to Java to perform the disassembly.  I
>  also updated the disassembly printing code to more closely match the
>  JVM though we still don't print out a lot of things like oops and relocs
>  in the SA.
>
>  I deleted all the old disassembler logic since it's incompatible with
>  the new disassembly interface.  I also blew away the moribund dbx
>  based SA interface and a few other dead files.  In the end I deleted
>  around 22000 lines of source.
>
>  Tested by dumping full assembly from core files.
>

