Request for review, 6766644: Redefinition of compiled method fails with assertion "Can not load classes with the Compiler thread"

Keith McGuigan keith.mcguigan at
Mon Jan 31 06:32:33 PST 2011

Hi David,

On Jan 30, 2011, at 6:58 PM, David Holmes wrote:

> This is a pretty major change in the event architecture and I have a  
> few comments and concerns - the main concern being "deadlock" as  
> it's not obvious exactly what locks can be held when the  
> service_lock will be acquired, nor is it obvious that other locks  
> won't be acquired while the service-lock is held.

Since Service_lock is a 'special' lock (see mutex.hpp), no locks can  
be acquired while it is held.  I believe the locking code asserts this,  
and I was careful not to acquire new locks while holding the  
Service_lock.  So deadlock should not be an issue, but let me know  
if you see somewhere I screwed this up.
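For readers outside HotSpot: a 'special' lock is the most restricted rank in HotSpot's lock-ordering scheme, and the rank check is what backs the assertion mentioned above.  A rough standalone sketch of that invariant (plain C++ with illustrative names; the real check lives in Mutex's debug code, and actual mutual exclusion is elided here):

```cpp
#include <cassert>

// Illustrative rank values; mutex.hpp defines the real ordering, with
// "special" being the most restricted rank.
enum Rank { special = 0, leaf = 1, nonleaf = 2 };

// Tracks the most restricted rank this thread currently holds.
// (Simplified: real code tracks the full chain of held locks.)
thread_local int lowest_held_rank = 1000;  // sentinel: nothing held

class RankedMutex {
  int _rank;
 public:
  explicit RankedMutex(int rank) : _rank(rank) {}
  void lock() {
    // Acquiring a lock of equal or higher restriction while one is held
    // trips this assert -- which is why no other lock can be taken while
    // the special-ranked Service_lock is held.
    assert(_rank < lowest_held_rank && "lock rank order violated");
    lowest_held_rank = _rank;
  }
  void unlock() { lowest_held_rank = 1000; }  // simplified: single lock held
};
```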

> Further using one thread to wait for and process different kinds of  
> events may introduce new interactions that weren't previously  
> possible. How are you going to test for potential bad interactions  
> between the low-memory detection and compile-event postings? I doubt  
> we have any existing tests that try to exercise both bits of  
> functionality.

I'm trying to come up with some tests for this now.  The risk here  
ought to be low, as the behavior of low-memory detection will be  
completely unchanged unless JVMTI compiled-method events are enabled.   
When they are enabled, the low-memory detection actions may be delayed  
by the JVMTI event native code.  I don't suspect that this is a  
problem, but I'm investigating to make sure.

> I'm also a little concerned that changing the LowMemoryDetector  
> (LMD) thread in to the Service thread is somewhat disruptive when  
> doing stack analysis on running apps or core dumps or crash logs,  
> because people have been used to seeing the LMD thread for many  
> years and now it has changed its identity.

The thread is "is_hidden_from_external_view" (which I know doesn't hide  
it completely), so its visibility should be pretty limited.  As it's  
not a public interface, we really ought to be able to change it  
(especially during a major version change), but I could see that we  
might want to keep the name the same for jdk6, just in case.  Keeping  
the existing name would make it a misnomer, though, and I don't  
consider that an option.

> Looking in jvmtiImpl.cpp
> I'd prefer to see the locking in all the JvmtiDeferredEventQueue  
> methods rather than having the call-sites do the locking, as it is  
> less error-prone. But I see that the main processing loop in the  
> service thread makes this impossible due to the dual use of the  
> "service lock".

It is possible to massage the service thread loop to make that work.   
I did have it coded up that way at one point, but decided to change it  
back and put the onus on the caller to acquire the lock instead.  I  
think this is better since it makes it more obvious in the calling  
code that the lock is being acquired.  Putting the lock acquisition  
inside the calls would "hide" the lock, and I think it's safer to  
make it obvious.
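The convention described here can be sketched in standalone C++, with std::mutex standing in for Service_lock and illustrative names throughout (this is the pattern, not the patch's code):

```cpp
#include <cassert>
#include <mutex>

// Stand-ins for Service_lock and the deferred-event queue; not HotSpot code.
std::mutex service_lock;
thread_local bool holds_service_lock = false;

struct DeferredEventQueue {
  int pending = 0;
  // The queue method does no locking itself; it only asserts ownership,
  // mirroring "assert(Service_lock->owned_by_self(), ...)".
  void enqueue() {
    assert(holds_service_lock && "caller must hold Service_lock");
    ++pending;
  }
};

// Every call site acquires the lock explicitly, so a reader of the calling
// code sees exactly where the lock is taken.
void post_event(DeferredEventQueue& q) {
  std::lock_guard<std::mutex> ml(service_lock);  // like MutexLockerEx
  holds_service_lock = true;
  q.enqueue();
  holds_service_lock = false;
}
```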

> Where we have:
> 961 void JvmtiDeferredEventQueue::enqueue(const JvmtiDeferredEvent&  
> event) {
> ...
> 967   QueueNode* node = new QueueNode(event);
> and
> 1007 void JvmtiDeferredEventQueue::add_pending_event(
> 1008     const JvmtiDeferredEvent& event) {
> 1009
> 1010   QueueNode* node = new QueueNode(event);
> Will an allocation failure here terminate the VM? I suspect it will  
> but I don't think it should. Can we not add a link field to the  
> event objects rather than have to create Nodes to hold them?

I can add a NULL check to those spots.  There's no need for us to segv  
if we can't allocate memory for this.  We do need a C-heap allocated  
object since we're passing these events from thread-to-thread.  Adding  
the link in the event object itself wouldn't help - it would just mean  
we have to allocate the event object in C-heap instead.
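A minimal sketch of that null check, using nothrow new as the standard-C++ stand-in for HotSpot's C-heap allocation (names are illustrative):

```cpp
#include <new>

struct QueueNode {
  int payload;
  QueueNode* next;
  explicit QueueNode(int p) : payload(p), next(nullptr) {}
};

// Returns false instead of crashing when the allocation fails --
// dropping the event rather than terminating the VM.
bool enqueue_checked(QueueNode*& head, int payload) {
  QueueNode* node = new (std::nothrow) QueueNode(payload);
  if (node == nullptr) {
    return false;  // allocation failed; caller decides how to degrade
  }
  node->next = head;
  head = node;
  return true;
}
```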

> Minor nit but the canonical pattern typically used for a cas-loop is  
> a do-while, eg instead of:
> 1012   while (true) {
> 1013     node->set_next((QueueNode*)_pending_list);
> 1014     QueueNode* old_value = (QueueNode*)Atomic::cmpxchg_ptr(
> 1015       (void*)node, (volatile void*)&_pending_list, node->next());
> 1016     if (old_value == node->next()) {
> 1017       break;
> 1018     }
> 1019   }
> use:
>      do {
>        node->set_next((QueueNode*)_pending_list);
>      } while ( (QueueNode*)Atomic::cmpxchg_ptr(
>         (void*)node, (volatile void*)&_pending_list, node->next()) != node->next() );
> and similarly in process_pending_events. Though in  
> process_pending_events you can just use Atomic::xchg rather than a  
> cmpxchg loop.

Ok, I can do that.
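For reference, the two suggested shapes side by side, as a standalone std::atomic analogue (HotSpot would spell these Atomic::cmpxchg_ptr and Atomic::xchg; names here are illustrative):

```cpp
#include <atomic>

struct Node { Node* next; };

std::atomic<Node*> pending_list{nullptr};

// The do-while CAS push suggested for add_pending_event: on failure,
// compare_exchange_weak refreshes node->next with the current head.
void push_pending(Node* node) {
  do {
    node->next = pending_list.load();
  } while (!pending_list.compare_exchange_weak(node->next, node));
}

// process_pending_events needs no CAS loop: a single unconditional
// exchange detaches the whole list and leaves the shared head empty.
Node* detach_all() {
  return pending_list.exchange(nullptr);
}
```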

> 1076 void JvmtiDeferredEventQueue::wait_for_empty_queue() {
> 1077   MutexLockerEx ml(Service_lock,  
> Mutex::_no_safepoint_check_flag);
> 1078   while (has_events()) {
> 1079     Service_lock->notify_all();
> 1080     Service_lock->wait(Mutex::_no_safepoint_check_flag);
> 1081   }
> 1082 }
> What is the notify_all for?

To wake up the service thread in case no one else has yet.

> 1084 void JvmtiDeferredEventQueue::notify_empty_queue() {
> 1085   assert(Service_lock->owned_by_self(), "Must own Service_lock");
> 1086   Service_lock->notify_all();
> 1087 }
> This looks suspicious - notification should always occur in  
> conjunction with a change of state, which should be atomic with  
> respect to the notification. Looking at the usage in the service- 
> thread wait-loop I don't understand who is being notified of what.  
> Whomever emptied the queue should be doing the notification ie in  
> dequeue() if _queue_head == NULL ie at line 997.

Ok, I'll move the notify to dequeue() and remove this method.

> In dequeue, this:
> 988   if (_queue_head == NULL) {
> 989     // Just in case this happens in product; it shouldn't be  
> let's not crash
> 990     return JvmtiDeferredEvent();
> 991   }
> doesn't look right. Are we returning a stack allocated instance  
> here? (There's also a typo in the comment.)

Yes, we're returning a stack-allocated instance, which returns to the  
caller as a temp and is copied into a local variable there.  I'll fix  
the comment.
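In other words, returning by value is safe here because the temporary is copied out before the callee's frame is gone; a tiny standalone illustration (names are made up):

```cpp
// Made-up analogue of JvmtiDeferredEvent: default-constructed means "empty".
struct DeferredEventLike {
  int kind;
  DeferredEventLike() : kind(0) {}              // the "empty" event
  explicit DeferredEventLike(int k) : kind(k) {}
};

// Returning a stack-allocated instance by value is fine: the caller
// receives a copy, not a pointer into this function's dead frame.
DeferredEventLike dequeue_or_empty(bool has_events) {
  if (!has_events) {
    return DeferredEventLike();  // the "just in case" empty event
  }
  return DeferredEventLike(1);
}
```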

> In jvmtiExport.cpp,  I don't understand why here:
> 832 void JvmtiExport::post_dynamic_code_generated(const char *name,  
> const void *code_begin, const void *code_end) {
> ...
> 1841
> 1842   JvmtiDeferredEventQueue::wait_for_empty_queue();
> 1843
> we have to wait for an empty queue, particularly as the queue may  
> become non-empty at any moment. Normally such a wait must be atomic  
> with respect to an action that requires the queue to be empty, but  
> that doesn't seem to be the case here.

There was a queuing operation in the original code for compiled-method- 
unload events that were generated at a safepoint, and at this point in  
the code the queue was checked and the events were posted (I assume  
that this is because we want the unload events to be generated in- 
order with the dynamic-code-generated events).  The  
'wait_for_empty_queue' call at this point accomplishes the same thing:  
it causes all previously-queued events to be posted (or to be in the  
process of posting) before the dynamic-code-generated events are  
posted.  It is not required that the queue be empty when the dynamic- 
code-generated event is posted; we just want to make sure that events  
that occurred *before* this call are flushed.  If new events are  
generated and enqueued while or after this event is being posted,  
that's perfectly fine.  It's more of a 'flush_queue' call, and I  
suppose I could call it that explicitly (and maybe add a 'flush  
marker' in the queue so that this won't wait for events that were  
generated while it was waiting).  Actually, if I do that I can get  
rid of the 'notify' in dequeue and instead use a special flush event.   
Hmmm... that might work better.
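A sketch of how such a flush marker might work, as standalone C++11 with std::condition_variable standing in for the Service_lock wait/notify (all names are illustrative; this is the idea being floated, not the patch):

```cpp
#include <cassert>
#include <condition_variable>
#include <deque>
#include <mutex>
#include <thread>

// Each flush enqueues a negative "ticket"; the poster waits only until its
// own ticket has been consumed, not until the queue drains completely, so
// events enqueued after the flush cannot delay it.
class FlushableQueue {
  std::mutex _m;                    // stands in for Service_lock
  std::condition_variable _cv;
  std::deque<int> _events;          // negative values are flush markers
  int _next_ticket = -1;
  int _last_flushed = 0;            // most recently consumed marker
 public:
  void enqueue(int event) {        // real events are positive
    std::lock_guard<std::mutex> ml(_m);
    _events.push_back(event);
    _cv.notify_all();
  }
  // Poster side: insert a marker and wait for it to come back out.
  void flush_queue() {
    std::unique_lock<std::mutex> ml(_m);
    int ticket = _next_ticket--;
    _events.push_back(ticket);
    _cv.notify_all();
    _cv.wait(ml, [&] { return _last_flushed <= ticket; });
  }
  // Service-thread side: returns 0 for markers (nothing to post).
  int dequeue() {
    std::unique_lock<std::mutex> ml(_m);
    _cv.wait(ml, [&] { return !_events.empty(); });
    int e = _events.front();
    _events.pop_front();
    if (e < 0) { _last_flushed = e; _cv.notify_all(); return 0; }
    return e;
  }
};
```

This also shows why the 'notify' in dequeue becomes unnecessary for ordinary events: only the consumption of a marker needs to wake anyone.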

> In serviceThread.cpp, are we sure that it is okay to remain  
> _thread_blocked while executing this:
>  91       ThreadBlockInVM tbivm(jt);
>  92
>  93       MutexLockerEx ml(Service_lock,  
> Mutex::_no_safepoint_check_flag);
>  94       while (!(sensors_changed =  
> LowMemoryDetector::has_pending_requests()) &&
>  95              !(has_jvmti_events =  
> JvmtiDeferredEventQueue::has_events())) {
>  96         // wait until one of the sensors has pending requests,  
> or there is a
>  97         // pending JVMTI event to post
>  98         JvmtiDeferredEventQueue::notify_empty_queue();
>  99         Service_lock->wait(Mutex::_no_safepoint_check_flag);
> 100       }
> 101
> 102       if (has_jvmti_events) {
> 103         jvmti_event = JvmtiDeferredEventQueue::dequeue();
> 104       }
> It may be fine but it seems a little strange.

It ought to be fine as 'dequeue' is pretty much a leaf routine, but  
I'll take a closer look at this.

> ---
> There's no thread.hpp in the webrev but presumably you can now  
> delete the  LowMemoryDetectorThread class.

Thanks, I'll get rid of that.  It was the intention but I forgot! :)

Thank you for the thorough review!

- Keith
