RFR (S/M): 8180048: Interned string and symbol table leak memory during parallel unlinking

Thomas Schatzl thomas.schatzl at oracle.com
Fri May 12 08:38:22 UTC 2017


Hi Aleksey,

  thanks for your review.

On Thu, 2017-05-11 at 18:26 +0200, Aleksey Shipilev wrote:
> On 05/11/2017 06:04 PM, Thomas Schatzl wrote:
> > 
> > It only affects G1 as it is the only collector that does
> > string/symbol table unlinking in parallel.
> Shenandoah too, it actually shares a part of G1 code for that.
> 
> > 
> > CR:
> > https://bugs.openjdk.java.net/browse/JDK-8180048
> > Webrev:
> > http://cr.openjdk.java.net/~tschatzl/8180048/webrev/
> So the idea is to collect pending ops to BucketUnlinkContext, and
> then purge them in BHT::bulk_free_entries. Sounds sane to me,
> probably also has performance benefits.
> 

Since the question came up, and I only noticed this after having already
sent the email: I see three options to fix this:

1) make string/symbol table unlinking single-threaded, like the other
collectors do. This would cause massive performance regressions in some
cases.

2) move old entries to the free list entry by entry using atomics, by
fixing BasicHashtable::free_entry(). This may cause performance
regressions, as it is somewhat equivalent to a global lock on the
_free_list member (a rough sketch of this is below the list).

3) during unlinking, collect entries in thread-local free lists that are
merged into the global free list at the end. This has the least (if any)
performance impact, but is the most complicated of the three options
(although it has been done elsewhere a few times for the same reasons).
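As a rough sketch of what 2) would amount to (made-up names,
std::atomic instead of the HotSpot Atomic class, not the actual
BasicHashtable code):

  #include <atomic>

  struct Entry { Entry* _next; };

  // Shared free-list head; stand-in for BasicHashtable's _free_list member.
  static std::atomic<Entry*> g_free_list{nullptr};

  // Option 2: every single unlinked entry is pushed with its own CAS, so
  // all worker threads contend on the same list head for every entry.
  void free_entry_atomically(Entry* e) {
    Entry* old_head = g_free_list.load(std::memory_order_relaxed);
    do {
      e->_next = old_head;
    } while (!g_free_list.compare_exchange_weak(old_head, e,
                                                std::memory_order_release,
                                                std::memory_order_relaxed));
  }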

I opted to implement 3) here, i.e. every thread (every caller of
buckets_unlink_or_oops_do() for the string table) collects all unlinked
BasicHashtableEntry instances into a local linked list (stored in the
BucketUnlinkContext to avoid lots of additional parameters to the
method), and then, at the very end, each thread adds its locally
collected list to the global free list once.

Then there is only one atomic op for every thread at the end,
minimizing contention.

However, minimizing contention has not been the goal in itself; the goal
was to minimize the performance impact of the change.
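
To make that concrete, here is a minimal sketch of the scheme in the
spirit of the BucketUnlinkContext in the webrev, again with made-up
names and std::atomic instead of the HotSpot Atomic class:

  #include <atomic>
  #include <cstddef>

  struct Entry { Entry* _next; };

  // Shared free-list head; stand-in for BasicHashtable's _free_list member.
  static std::atomic<Entry*> g_free_list{nullptr};

  // Per-thread context, analogous in spirit to BucketUnlinkContext.
  struct UnlinkContext {
    Entry* _head = nullptr;
    Entry* _tail = nullptr;
    size_t _num_removed = 0;

    // Called for every entry a thread unlinks from its buckets: no
    // atomics, the entry just goes onto the thread-local list.
    void free_entry(Entry* e) {
      e->_next = _head;
      _head = e;
      if (_tail == nullptr) {
        _tail = e;            // remember the tail so the merge is O(1)
      }
      _num_removed++;
    }

    // Called once per thread after all of its buckets have been processed:
    // splice the whole local list onto the global free list with a single
    // CAS loop.
    void bulk_free() {
      if (_head == nullptr) {
        return;               // this thread unlinked nothing
      }
      Entry* old_head = g_free_list.load(std::memory_order_relaxed);
      do {
        _tail->_next = old_head;
      } while (!g_free_list.compare_exchange_weak(old_head, _head,
                                                  std::memory_order_release,
                                                  std::memory_order_relaxed));
    }
  };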

> Nits:
> 
>  *) Modifier order?
>   176   BasicHashtableEntry<F>* volatile _free_list;
> 
> Otherwise seems fine.

Thanks,
  Thomas


