review for 7129164: JNI Get/ReleasePrimitiveArrayCritical doesn't scale

Tom Rodriguez tom.rodriguez at oracle.com
Fri Jan 27 11:56:24 PST 2012


On Jan 27, 2012, at 11:49 AM, Igor Veresov wrote:

> gcLocker.cpp:  
> 
> 70 wait_begin = os::javaTimeMillis(); 
> 71 if (PrintJNIGCStalls && PrintGCDetails) {
> 
> 
> 
> I think the call to javaTimeMillis() can be moved inside the scope of the if, since wait_begin is used only for printing. Also, maybe wait_begin could be a member of GCLocker?

Sure.  Just trying to avoid recompiles during development.

+  static jlong         _wait_begin;      // Timestamp for the setting of _needs_gc.
+                                         // Used only by printing code.
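
Concretely, the idea would be something like the following (just a
sketch -- the enclosing code and the print call are paraphrased rather
than copied from the webrev):

    // _wait_begin is only needed by the printing code, so the
    // javaTimeMillis() call can move inside the flag check.
    if (PrintJNIGCStalls && PrintGCDetails) {
      _wait_begin = os::javaTimeMillis();   // timestamp for the setting of _needs_gc
      gclog_or_tty->print_cr("Setting _needs_gc at %.3f s",
                             _wait_begin / 1000.0);
    }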

> 
> Otherwise looks good!

Thanks.

tom

> 
> igor
> 
> 
> On Friday, January 27, 2012 at 10:30 AM, Tom Rodriguez wrote:
> 
>> http://cr.openjdk.java.net/~never/7129164
>> 200 lines changed: 126 ins; 33 del; 41 mod; 3958 unchg
>> 
>> 7129164: JNI Get/ReleasePrimitiveArrayCritical doesn't scale
>> Summary:
>> Reviewed-by:
>> 
>> The machinery for GC_locker which supports GetPrimitiveArrayCritical
>> maintains a count of the number of threads that currently have raw
>> pointers exposed. This is used to allow existing critical sections to
>> drain before creating new ones so that a GC can occur. Currently
>> these counts are updated with atomic operations all the time, even if
>> a GC isn't being requested. This creates a scalability problem when a
>> lot of threads are hammering atomic operations on the jni_lock_count.
>> The count only needs to be valid when checking whether a critical
>> section is currently active and when draining the existing sections.
>> The fix is to compute the count as part of the safepointing machinery
>> and to only decrement the counters when _needs_gc is true (see the
>> sketch after the quoted text below). In debug mode the old count is
>> still maintained and validated against the lazily computed count.
>> 
>> On a microbenchmark that stresses GetPrimitiveArrayCritical with many
>> threads and relatively short critical sections it can more than
>> double the throughput. This also slightly speeds up the normal
>> GetPrimitiveArrayCritical calls. For larger scale programs the
>> difference is mostly not easily measurable.
>> 
>> Tested with a microbenchmark that stresses GetPrimitiveArrayCritical
>> and the crypto benchmarks of SPECjvm2008 on x86 and SPARC. I also ran
>> the java.io regression tests from the JDK.
> 
> 
> 
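
For anyone skimming the thread, the fast path / slow path split
described in the summary above looks roughly like this (a sketch with
approximate names, not the actual webrev code):

    // Fast path: an uncontended JNI critical section only touches
    // thread-local state; no atomic operation on the shared count.
    void GC_locker::lock_critical(JavaThread* thread) {
      if (!thread->in_critical() && needs_gc()) {
        // A GC has been requested: take the slow path, which blocks
        // until the pending GC completes and keeps the shared count
        // and the per-thread count in agreement under a lock.
        jni_lock(thread);
        return;
      }
      thread->enter_critical();   // thread-local increment only
    }

    void GC_locker::unlock_critical(JavaThread* thread) {
      if (thread->in_last_critical() && needs_gc()) {
        // Only while _needs_gc is set does the shared count matter:
        // decrement it under the lock and, if this was the last
        // critical thread, let the GC proceed.
        jni_unlock(thread);
        return;
      }
      thread->exit_critical();    // thread-local decrement only
    }

The shared count itself is recomputed as part of the safepointing
machinery by summing the per-thread counts, so it only has to be exact
while a GC is actually being requested; in debug builds the old
atomically maintained count is kept around to cross-check the lazily
computed one.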


