ZGC & AArch64

Per Liden per.liden at oracle.com
Thu Mar 29 07:12:09 UTC 2018

Hi Stuart,

On 03/28/2018 05:31 PM, Stuart Monteith wrote:
> Hello,
>     After Ed and I spoke with Per and Stefan at FOSDEM I've started
> working on seeing what is necessary to get ZGC working on the AArch64
> backend. To be clear, this is the hotspot/cpu/aarch64 backend rather
> than hotspot/cpu/arm backend.

That's awesome!

> I have just some observations and questions so far:
>     1. The address space used by ZGC is impressive - 17TB on my laptop.
> However, you need to make sure your kernel is configured for 48-bit
> virtual address space. I've currently hobbled ZGC a little to work
> within the 42-bit VA. - basically using 39-bits rather than 42-bits as
> the base addresses.

The 17TB reservation we do on x86 is large because of the multi-mapping 
scheme (4TB * 4 for the multi-mapping + the reservation for the mark 
stacks, etc). On Sparc and AArch64 this can be reduced quite a bit. To 
avoid problems related to external fragmentation, it's recommended that 
the address space reservation for the heap is 4 times that of the max 
heap size.

>     2. I'm basing my work on the SPARC port and the Linux x86 port. The
> top 8-bits on aarch64 can be used as tag bits, and so that's where I'm
> putting the "colours" - much like SPARC.
>       The os_cpu code is mostly derived from linux_x86, however. I've
> removed the multi-mapping as that is unnecessary.

Sounds like a good plan. There are pieces from both Solaris/Sparc and 
Linux/x86 that are useful to look at and reuse/copy.

>    3. If this code gets ported on further platforms, we'll need to look
> at refactoring the code. There will be a lot of redundancy. I'll
> follow what you are doing for now, until aarch64 is in an acceptable
> state.
>    4. How active is the SPARC port? I've largely been taking my cues
> from that port in the backend, rather than the OS code, and it would
> be good to know if there are issues doing that.

The Solaris/Sparc port for ZGC is kept up to date. Having said that, we 
do focus our regular ZGC testing on Linux/x86 at this time.

Taking inspiration from Solaris/Sparc should be safe, especially the 
src/hotspot/os_cpu/solaris_sparc/z* parts. For the load barriers in 
interpreter/c1/c2 I would recommend leaning more on the x86 
implementation, since that is exposed to more testing and has some 
additional optimizations in how the load barrier stubs work in C2 (see 

>    5. I expected ZAddressBadMask to be held in Universe - is this to
> ease rebasing? Ordinarily we'd retrieve such values through Universe::

The ZAddressBadMask variable should at some point only be used inside ZGC 
itself, so the rest of HotSpot should not need to know about it. The 
intention is that load barriers should use Thread::_zaddress_bad_mask 
(this is currently not true on Sparc).

>   6. Do you expect signal handlers to play a part in ZGC in the future?
> It is something to watch on aarch64, as the tag bits will get stripped
> under most circumstances.

There are currently no plans to use any signal handlers.

> I currently have it building, and I can run a GC benchmark on
> interpreter mode, until hitting an issue with a reference to an oop in
> an unallocated ZPage. Running the same code on x86 completes just
> fine.

So, almost there ;) Assuming you have the normal load barrier in place 
in the interpreter, the other main thing to look out for is the weak 
barrier in TemplateInterpreterGenerator::generate_Reference_get_entry().


> Thanks,
>      Stuart
