Question regarding G1 option to run parallel Old generation garbage collection?

Srinivas Ramakrishna ysr1729 at gmail.com
Mon Oct 29 16:01:54 UTC 2012


Umm, I don't think I said anything about cache effects in the passage you
quoted below.
I said we should prefer using a small VA range, so that a small set of
physical pages would suffice.
You may have conflated an earlier passing reference to caches (and its
immediate rejection in the
same breath) with the discussion of physical pages that followed. My
reference to NUMA
was that the allocation code would likely be affected, and we could expect
to see some changes
as a result, which would be a good time to review the repeated reuse
of a VA range for young
gen allocation. Anyway, hopefully we both know what we mean, and there is
no misunderstanding.

best regards.
-- ramki

On Sat, Oct 27, 2012 at 12:42 PM, Thomas Schatzl <thomas.schatzl at jku.at> wrote:

> Hi,
>
> On Thu, 2012-10-25 at 20:40 -0700, Srinivas Ramakrishna wrote:
> > Thomas, while adding newly freed regions to the head of a common
> > global free region list probably achieves much reuse, I was
> > suggesting something even stronger and deliberate, in that a much
> > smaller set of regions be preferred for repeated reuse for Eden
> > allocation and for reuse as survivor regions, kind of
> > segregating that smaller subset completely from regions used for
> > tenuring objects.
>
> What I mean is that to exploit cache effects in the way you
> suggest, these areas must be much smaller than you propose; on
> multi-GB ranges of memory you likely won't benefit from them any
> more than you do now. NUMA awareness does not change that,
> especially because you typically use NUMA machines when you
> want to use lots of memory.
>
> I.e. just a quick calculation: with NUMA awareness, the amount
> of memory touched per node/core is likely still too high to
> benefit a lot from caching. E.g. a 64GB eden divided across 8
> nodes still means you're touching 8GB/node with processors that
> have a few MB of cache each. In the best case.
>
> NUMA awareness does not improve upon cacheability (it might as a
> secondary effect!), but its main point is to improve access
> to memory beyond the caches imo.
>
> One idea where you exploit cache effects by giving each
> thread/core a very small heap with eden (typically <= 1 MB,
> i.e. something that fits nicely into the cache) that is
> reused like you suggest over and over again is generally
> referred to as "thread local heaps".
>
> The problem is that the threshold to get improvements is
> much higher than with NUMA awareness. If a thread only
> has such a small eden, you can no longer stop all threads
> every time it fills up, because that would decrease
> throughput too much. I.e. it needs thread-local GCs (so that
> every thread can collect its own heap without stopping the
> others) and the associated complexity in the VM to work
> efficiently.
>
> This is not meant to discourage you about NUMA awareness, just
> an attempt to clear things up (if there ever was anything to
> clear up). NUMA awareness does have its use and it improves
> performance, but I don't think it mainly does so because of
> improved cacheability.
>
> Thomas
>
>
>
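[Editor's note: the back-of-the-envelope calculation in the quoted message
(64 GB eden across 8 NUMA nodes vs. a few MB of cache per node) can be made
concrete with a short sketch. The 64 GB and 8-node figures come from the
message above; the 32 MB per-node last-level cache size is an assumed
example, not a number from the thread.]

```java
// Hedged sketch of the per-node memory vs. cache-size estimate from the
// quoted message. The 32 MB cache figure is an illustrative assumption.
public class NumaCacheEstimate {
    public static void main(String[] args) {
        long edenBytes = 64L << 30;                // 64 GB eden (from the message)
        int numaNodes = 8;                         // 8 NUMA nodes (from the message)
        long perNodeBytes = edenBytes / numaNodes; // memory touched per node
        long cacheBytes = 32L << 20;               // assumed ~32 MB LLC per node

        System.out.println("Per-node eden: " + (perNodeBytes >> 30) + " GB");
        System.out.println("Cache covers only 1/" + (perNodeBytes / cacheBytes)
                + " of the per-node eden");
    }
}
```

Even in this best case, the cache covers only 1/256 of the memory each node
touches, which is the point being made: NUMA awareness at these sizes is
about memory access locality, not cache residency.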

