RFR (L): 8230706: Waiting on completion of strong nmethod processing causes long pause times with G1

Kim Barrett kim.barrett at oracle.com
Tue Oct 8 23:48:06 UTC 2019


> On Sep 30, 2019, at 7:14 AM, Thomas Schatzl <thomas.schatzl at oracle.com> wrote:
> All fixed in new webrev:
> 
> http://cr.openjdk.java.net/~tschatzl/8230706/webrev.0_to_1 (diff)
> http://cr.openjdk.java.net/~tschatzl/8230706/webrev.1 (full)
> 
> Rerunning hs-tier1-5, almost done
> 
> Thanks,
>  Thomas

Because of NMETHOD_SENTINEL we already have a "lying to the type
system" problem for the nmethod link field, as it doesn't necessarily
contain an nmethod*.  The introduction of the strongly claimed tagging
mechanism just emphasizes that.  I think that should be cleaned up and
the "lying to the type system" should be eliminated.  However, I also
think that can be done as a followup cleanup.

------------------------------------------------------------------------------
src/hotspot/share/code/nmethod.cpp

I initially thought there was a bug in the strong claim.

A weak claim is established by thread1, successfully setting the link
field to NMETHOD_SENTINEL.  Before thread1 continues from there...

Thread2 tries to strongly mark, and sees NMETHOD_SENTINEL in the link
field.  NMETHOD_SENTINEL == badAddress == -2, which happens to have the
low bit clear.  So this seems to work after all.

Add STATIC_ASSERT(is_aligned((uintptr_t)NMETHOD_SENTINEL, 2)).
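
For illustration, a standalone sketch of the property that assert would
guard (not the HotSpot macro, just the bit-pattern check on the -2
sentinel value noted above):

#include <cstdint>

// The sentinel value must keep its low bit clear, so it can never be
// mistaken for a strongly claimed (low-bit-tagged) link value.
static const intptr_t sentinel_value = -2;
static_assert((static_cast<uintptr_t>(sentinel_value) & 1) == 0,
              "sentinel must not carry the strong-claim tag bit");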

------------------------------------------------------------------------------ 
src/hotspot/share/code/nmethod.cpp

[pre-existing]
I think the comment in oops_do_marking_prologue about using cmpxchg
makes no sense.

And why does oops_do_marking_epilogue use cmpxchg at the end?

------------------------------------------------------------------------------ 
src/hotspot/share/code/nmethod.cpp

I think using a self-loop to mark the end of the list would eliminate
the need for NMETHOD_SENTINEL.
for NMETHOD_SENTINEL.

Also eliminates the need for oops_do_marking_prologue.
Requires changing oops_do_marking_epilogue to recognize the self-loop.

[This can be deferred to later cleanup.]
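
For concreteness, a minimal standalone sketch of the self-loop idea
(simplified types, not the actual nmethod code): the last pushed
element links to itself, so the epilogue stops when the link equals the
current element instead of comparing against a sentinel.

#include <cstddef>

struct Node { Node* link; };   // stand-in for the nmethod link field

void drain(Node* head) {
  Node* cur = head;
  while (cur != NULL) {
    Node* next = cur->link;
    // ... process cur here ...
    cur->link = NULL;                      // reset the claim
    cur = (next == cur) ? NULL : next;     // self-loop marks end of list
  }
}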

------------------------------------------------------------------------------

The second argument of oops_do_mark_merge_claim is called "claim" but
should be "strongly_claim" or some such.  Actually the whole new suite of

oops_do_mark_is_claimed
oops_do_mark_strip_claim
oops_do_mark_merge_claim

all seem misnamed.  The link field having a non-NULL value is a
(possibly weak) claim.  The link field having a non-NULL, not 2-byte
aligned value is a strong claim.  Those functions are all dealing with
strong claims.

is_claimed should use is_aligned
strip_claim should use align_down
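
To make the suggestion concrete, a hedged standalone sketch (the names
are hypothetical, and the alignment checks are spelled out inline
rather than using is_aligned/align_down):

#include <cstdint>

// A non-NULL link is a (possibly weak) claim; a non-NULL, not 2-byte
// aligned link is a strong claim.  These helpers deal only with the
// strong-claim tag bit.
static bool oops_do_mark_is_strongly_claimed(const void* link) {
  return (reinterpret_cast<uintptr_t>(link) & 1) != 0;             // !is_aligned(link, 2)
}

static void* oops_do_mark_strip_strong_claim(void* link) {
  uintptr_t v = reinterpret_cast<uintptr_t>(link) & ~uintptr_t(1); // align_down(link, 2)
  return reinterpret_cast<void*>(v);
}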

------------------------------------------------------------------------------

With the introduction of the strongly claimed tag bit, the link field
ought not be of type nmethod*, because using that type means we're
constructing improperly aligned pointers, which is unspecified behavior.
Should now be char* or void* or some opaque pointer type.

struct nmethod_claim;  // nmethod_claimant ?

[This can be deferred to later cleanup.]

------------------------------------------------------------------------------
src/hotspot/share/code/nmethod.cpp
1884   // On fall through, another racing thread marked this nmethod before we did.

[pre-existing] I think s/marked/claimed/ would be better.

------------------------------------------------------------------------------
src/hotspot/share/code/nmethod.cpp  
1900   while (cur != oops_do_mark_strip_claim(NMETHOD_SENTINEL)) {

Why strip the claim tag from NMETHOD_SENTINEL?  It isn't tagged.
(And must not be, as discussed in an earlier comment.)

------------------------------------------------------------------------------
src/hotspot/share/code/nmethod.hpp
 494   bool test_set_oops_do_strongly_marked();
 495   bool test_set_oops_do_mark(bool strongly = false);

I found the naming and protocol here confusing.  I'd prefer a
"try_claim" style that returns true if the claim attempt is
successful, similar to what we now (since JDK-8210119) do for
SubTasksDone and friends.
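
Something along these lines (hypothetical names), where each function
returns true iff the calling thread performed the claim:

// Hypothetical replacement declarations for nmethod.hpp:
bool oops_do_try_claim(bool strongly = false);
bool oops_do_try_claim_strong();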

------------------------------------------------------------------------------
src/hotspot/share/gc/g1/g1ParScanThreadState.cpp 
 131 bool G1ParScanThreadState::has_remembered_strong_nmethods() const {
 132   return _remembered_strong_nmethods != NULL && _remembered_strong_nmethods->length() > 0;
 133 }

Use !is_empty() rather than length() > 0.

------------------------------------------------------------------------------
src/hotspot/share/gc/g1/g1ParScanThreadState.cpp 
 105   assert(_remembered_strong_nmethods == NULL || _remembered_strong_nmethods->is_empty(), "should be empty at this point.");

Use !has_remembered_strong_nmethods().

------------------------------------------------------------------------------
src/hotspot/share/gc/g1/g1CollectedHeap.cpp
3874   if (collector_state()->in_initial_mark_gc()) {
3875     remark_strong_nmethods(per_thread_states);
3876   }

I think this additional task and the associated pending strong nmethod
sets in the pss can be eliminated by using a 2-bit tag and a more
complex state machine earlier.

I think the additional task is acceptable at least for now, and this
could be examined as a followup.  There's a tradeoff between the cost
of the additional task and the added complexity to remove it.

Below is some (obviously) untested pseudo-code (sort of pythonesque)
for what I have in mind.  The basic idea is that if thread A wants to
strongly process an nmethod while thread B is weakly processing it,
thread A can mark the nmethod as needing strong processing.  When
thread B finishes the weak processing it notices the strong request
and performs the strong processing too.

Note that this code doesn't use NMETHOD_SENTINEL.  The end of the
global list is indicated by having the last element have a self-looped
link value with appropriate tagging.  That avoids both the sentinel
and tagged NULL values (which have their own potential problems).

States, encoded in the link member:
- unclaimed: NULL
- weak: tag 00
- weak done: tag 01
- weak, need strong: tag 10
- strong: tag 11

weak_processor(n):
    if n->link != NULL:
        # already claimed; nothing to do here.
        return
    elif not replace_if_null(tagged(n, 0), &n->link):
        # just claimed by another thread; nothing to do here.
        return
    # successfully claimed for weak processing.
    assert n->link == tagged(n, 0)
    do_weak_processing(n)
    # push onto global list.  self-loop end of list to avoid tagged NULL.
    next = xchg(n, &_list_head) 
    if next == NULL: next = n 
    # try to install end of list + weak done tag.
    if cmpxchg(tagged(next, 1), &n->link, tagged(n, 0)) == tagged(n, 0):
        return
    # failed, which means some other thread added strong request.
    assert n->link == tagged(n, 2)
    # do deferred strong processing.
    n->link = tagged(next, 3)
    do_strong_processing(n)
    
strong_processor(n):
    if replace_if_null(tagged(n, 3), &n->link):
        # successfully claimed for strong processing.
        do_strong_processing(n) 
        # push onto global list.  self-loop end of list to avoid tagged NULL.
        next = xchg(n, &_list_head)
        if next == NULL: next = n
        n->link = tagged(next, 3)
        return
    # claim failed.  figure out why and handle it.
    while true:
        raw_next = n->link
        next = strip_tag(raw_next)
        if raw_next - next >= 2:
            # already claimed for strong processing or requested for such.
            return
        elif cmpxchg(tagged(next, 2), &n->link, tagged(next, 0)) == tagged(next, 0):
            # added deferred strong request, so done.
            return
        elif cmpxchg(tagged(next, 3), &n->link, tagged(next, 1)) == tagged(next, 1):
            # claimed deferred strong request.
            do_strong_processing(n)
            return
        # concurrent changes interfered with us.  try again.
        # number of retries is bounded and small, since the state
        # transitions are few and monotonic.  (I think we cannot
        # reach here more than 2 times.)
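
For reference, the tagged/strip_tag helpers assumed above could look
something like this standalone C++ sketch (2-bit tag in the low bits of
the link value, as in the state list; this assumes nmethod* values are
at least 4-byte aligned so the two tag bits are always free):

#include <cstdint>

class nmethod;   // only the pointer value is used here, never dereferenced

static void* tagged(nmethod* n, uintptr_t tag) {
  return reinterpret_cast<void*>(reinterpret_cast<uintptr_t>(n) | tag);
}

static nmethod* strip_tag(void* link) {
  return reinterpret_cast<nmethod*>(reinterpret_cast<uintptr_t>(link) & ~uintptr_t(3));
}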

------------------------------------------------------------------------------




