RFR: JDK-8149925 We don't need jdk.internal.ref.Cleaner any more

Per Liden per.liden at oracle.com
Tue Mar 29 14:03:11 UTC 2016


Hi Peter,

On 2016-03-28 19:18, Peter Levart wrote:
[...]
> And now a few words about ReferenceHandler thread and synchronization
> with it (for Kim and Per mostly). I think it should not be a problem to
> move the following two java.lang.ref.Reference methods to native code if
> desired:
>
>      static Reference<?> getPendingReferences(int[] discoveryPhaseHolder)
>      static int getDiscoveryPhase()
>
> The 1st one is only invoked by a ReferenceHandler thread while the 2nd
> is invoked by arbitrary thread. The difference between this and
> webrev.09.part2 is that there's no need any more for ReferenceHandler
> thread to notify the thread executing the 2nd method and that there's no
> need for the 2nd method to perform any waiting. It just needs to obtain
> the lock briefly so that it can read the consistent state of two
> fields.  Those two fields are Java static fields currently:
> Reference.pending & Reference.discoveryPhase and those two methods are
> Java methods, but they could be moved to native code if desired to make
> the protocol between VM and Java code more robust.
>
> So Kim, Per, what do you think of supporting those 2 methods in native
> code? Would that present any problem?

In the best of worlds I'd like the VM to be agnostic to how the pending 
list is processed on the core-libs side. However, after looking at it 
briefly, I'm not sure we can get all the way with only providing a 
getPendingReferences() call.

Anyway, assuming we really need something more than just 
getPendingReferences(), I'm not so keen on exposing a phase counter in 
the API. I think I'd rather have something like this:

/** Get the pending list from the VM, blocking until a list exists.
 *  Only used by ReferenceHandler.
 */
Reference<?> getPendingReferences();

/** Signal that all references have been enqueued.
 *  Only used by ReferenceHandler.
 */
void notifyEnqueuedReferences();

/** If references are pending, wait for a notification from
 *  ReferenceHandler that they have been enqueued.
 */
void waitForEnqueuedReferences();


The VM would (roughly) implement this as:


JVM_ENTRY(jobject, JVM_GetPendingReferences(JNIEnv* env))
  // Wait until the list becomes non-empty
  {
    MonitorLockerEx ml(Heap_lock);
    while (!Universe::has_reference_pending_list()) {
      ml.wait();
    }

    _references_pending++;
  }

  // Detach and return the list
  oop list = Universe::swap_reference_pending_list(NULL);
  return JNIHandles::make_local(env, list);
JVM_END


JVM_ENTRY(void, JVM_NotifyEnqueuedReferences(JNIEnv* env))
  MonitorLockerEx ml(Heap_lock);
  _references_enqueued = _references_pending;
  ml.notify_all();
JVM_END


JVM_ENTRY(void, JVM_WaitForEnqueuedReferences(JNIEnv* env))
  MonitorLockerEx ml(Heap_lock);
  while (Universe::has_reference_pending_list() ||
         _references_pending != _references_enqueued) {
    ml.wait();
  }
JVM_END
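
For what it's worth, the intended semantics of the three calls can be 
mirrored in plain Java, with one monitor standing in for Heap_lock. 
This is only a hypothetical single-process sketch of the protocol; the 
class name PendingListProtocol, the publish() method (playing the GC's 
role) and the field names are mine, not proposed API:

```java
// Hypothetical simulation of the proposed VM/library protocol.
// One intrinsic lock plays the role of Heap_lock.
final class PendingListProtocol<T> {
    private final Object lock = new Object();
    private T pendingList;          // stands in for Universe's pending list
    private int referencesPending;  // _references_pending in the VM sketch
    private int referencesEnqueued; // _references_enqueued in the VM sketch

    // GC side: publish a non-null list and wake any waiters.
    void publish(T list) {
        synchronized (lock) {
            pendingList = list;
            lock.notifyAll();
        }
    }

    // ReferenceHandler side: block until a list exists, then detach it.
    T getPendingReferences() throws InterruptedException {
        synchronized (lock) {
            while (pendingList == null) {
                lock.wait();
            }
            referencesPending++;
            T list = pendingList;
            pendingList = null;
            return list;
        }
    }

    // ReferenceHandler side: mark everything handed out as enqueued.
    void notifyEnqueuedReferences() {
        synchronized (lock) {
            referencesEnqueued = referencesPending;
            lock.notifyAll();
        }
    }

    // Arbitrary thread: wait until nothing is pending and the handler
    // has caught up with every list it was handed.
    void waitForEnqueuedReferences() throws InterruptedException {
        synchronized (lock) {
            while (pendingList != null
                   || referencesPending != referencesEnqueued) {
                lock.wait();
            }
        }
    }
}
```

Note that waitForEnqueuedReferences() only needs the monitor to read a 
consistent snapshot of the two counters; it never blocks the handler.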


And the ReferenceHandler would do something like:


         ...

         // Get pending references from the VM
         Reference<Object> pending_list = getPendingReferences();

         // Enqueue references
         while (pending_list != null) {
             // Unlink
             Reference<Object> r = pending_list;
             pending_list = r.discovered;
             r.discovered = null;

             // Enqueue
             ReferenceQueue<? super Object> q = r.queue;
             if (q != ReferenceQueue.NULL) {
                 q.enqueue(r);
             }
         }

         notifyEnqueuedReferences();

         ...
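
The unlink-then-enqueue shape of that loop can be shown on toy 
stand-ins (Node and a plain Deque below are mine; the real loop walks 
Reference.discovered and compares the queue against ReferenceQueue.NULL):

```java
import java.util.Deque;

// Toy stand-in for a Reference on the pending list; a null queue
// plays the role of ReferenceQueue.NULL (not registered with a queue).
final class Node {
    Node discovered;          // next link in the pending list
    final Deque<Node> queue;  // where to enqueue, or null
    Node(Deque<Node> queue) { this.queue = queue; }
}

final class PendingListDrain {
    // Same shape as the ReferenceHandler loop: advance, unlink, enqueue.
    static void drain(Node pendingList) {
        while (pendingList != null) {
            Node r = pendingList;
            pendingList = r.discovered; // advance before unlinking
            r.discovered = null;        // unlink from the pending list
            if (r.queue != null) {
                r.queue.add(r);         // enqueue
            }
        }
    }
}
```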


And a helper thread would do something like:

        ...
        System.gc();

        waitForEnqueuedReferences();
        ...
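
The point of waitForEnqueuedReferences() for such a helper (e.g. 
direct buffer allocation retrying after hitting its memory limit) is 
to know that cleanup had a chance to run before retrying. A 
hypothetical shape of that retry loop, with allocate and gcAndWait as 
stand-in hooks rather than real JDK APIs:

```java
import java.util.function.Supplier;

final class CleanupRetry {
    // Hypothetical retry loop: on allocation failure, trigger GC, wait
    // for the ReferenceHandler to finish enqueuing, then try again.
    static <T> T allocateWithRetry(Supplier<T> allocate,  // null = "out of memory"
                                   Runnable gcAndWait,    // System.gc(); waitForEnqueuedReferences();
                                   int maxRetries) {
        for (int attempt = 0; ; attempt++) {
            T result = allocate.get();
            if (result != null) {
                return result;
            }
            if (attempt >= maxRetries) {
                throw new OutOfMemoryError("cleanup did not free enough memory");
            }
            gcAndWait.run();
        }
    }
}
```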


So, this should be fairly similar to what you proposed, Peter, but with 
a slightly different API.

But I'm still kind of hoping we can find some way to avoid exposing the 
wait/notify functions, for the sake of keeping the protocol minimal.

cheers,
Per

>
> With webrev.11.part2 I get a 40% improvement in throughput vs.
> webrev.10.part2 executing DirectBufferAllocTest in 16 allocating threads
> on a 4-core i7 CPU.
>
> Regards, Peter
>



More information about the core-libs-dev mailing list