RFR: jsr166 jdk9 integration wave 5
Martin Buchholz
martinrb at google.com
Wed Mar 2 00:06:44 UTC 2016
Thanks, Stuart!
On Tue, Mar 1, 2016 at 2:40 PM, Stuart Marks <stuart.marks at oracle.com> wrote:
> Hi Martin,
>
> I'm a bit confused about exactly what pieces need review here. Since you
> mentioned me with respect to 8150523, I took a look at the webrev that adds
> the timeout factors:
>
> http://cr.openjdk.java.net/~martin/webrevs/openjdk9/jsr166-jdk9-integration/timeoutFactor/
>
> Do other webrevs still need review as well? I haven't looked at them. But
> there are others' names on them already....
Getting reviews done is a big problem in most open-source projects,
and openjdk is no exception. And it's even more difficult when code
is imported from an upstream project... maybe my usual reviewers have
developed jsr166-review fatigue...
> Otherwise, overall, it looks fine, just a few minor questions/comments:
>
> ------------------------------------------------------------
>
> In the following files, various delays of 7 sec, 20 sec, 30 sec, 60 sec, 100
> sec, 120 sec, and 1000 sec were changed to 10 sec (exclusive of timeout
> factor adjustment). Was that intentional? I guess making the "backstop"
> timeouts (e.g., waiting for an executor service to terminate) be uniform is
> reasonable, but there were some very long timeouts that are now much
> shorter. At least, something to keep an eye on.
A useful rule of thumb is that 10 sec seems to be enough for any
"single operation", except when using a slow VM, in which case we
expect someone to provide a timeout factor. But some of these tests
do a whole bunch of trials... Anyway, we adjust timeouts based on
observed test runtimes, and keep adjusting if tests are observed to
fail in the wild.
I've also done stress testing with these tests using a fastdebug VM at Google.
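The timeout-factor mechanism discussed above can be sketched as follows. The real helper is `Utils.adjustTimeout` in the JDK test library, driven by the `test.timeout.factor` system property; this standalone version (class name hypothetical) just illustrates the idea of scaling every "single operation" backstop by one factor:

```java
// Sketch of timeout-factor scaling, modeled on the jdk test library's
// Utils.adjustTimeout. Class and method names here are illustrative.
public class TimeoutFactorDemo {
    static double timeoutFactor() {
        // Slow VMs (e.g. fastdebug, emulated platforms) run jtreg with
        // -Dtest.timeout.factor=N; the default factor is 1.0.
        return Double.parseDouble(
            System.getProperty("test.timeout.factor", "1.0"));
    }

    static long adjustTimeout(long timeoutMs) {
        return Math.round(timeoutMs * timeoutFactor());
    }

    public static void main(String[] args) {
        // The uniform 10 sec backstop, unscaled and then scaled 4x.
        System.out.println(adjustTimeout(10_000));
        System.setProperty("test.timeout.factor", "4.0");
        System.out.println(adjustTimeout(10_000));
    }
}
```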
> test/java/util/concurrent/BlockingQueue/Interrupt.java:
> test/java/util/concurrent/BlockingQueue/ProducerConsumerLoops.java
> test/java/util/concurrent/BlockingQueue/SingleProducerMultipleConsumerLoops.java
> test/java/util/concurrent/CompletableFuture/Basic.java
> test/java/util/concurrent/ConcurrentHashMap/MapLoops.java
> test/java/util/concurrent/ConcurrentQueues/ConcurrentQueueLoops.java
> test/java/util/concurrent/Exchanger/ExchangeLoops.java
> test/java/util/concurrent/Executors/AutoShutdown.java
> test/java/util/concurrent/ScheduledThreadPoolExecutor/ZeroCorePoolSize.java
> test/java/util/concurrent/ThreadPoolExecutor/Custom.java
> test/java/util/concurrent/ThreadPoolExecutor/SelfInterrupt.java
> test/java/util/concurrent/ThreadPoolExecutor/ThreadRestarts.java
> test/java/util/concurrent/locks/Lock/FlakyMutex.java
> test/java/util/concurrent/locks/LockSupport/ParkLoops.java
> test/java/util/concurrent/locks/ReentrantLock/LockOncePerThreadLoops.java
> test/java/util/concurrent/locks/ReentrantLock/SimpleReentrantLockLoops.java
> test/java/util/concurrent/locks/ReentrantLock/TimeoutLockLoops.java
> test/java/util/concurrent/locks/StampedLock/Basic.java
>
> ------------------------------------------------------------
>
> test/java/util/concurrent/FutureTask/CancelledFutureLoops.java
> test/java/util/concurrent/ThreadPoolExecutor/TimeOutShrink.java
>
> It's slightly odd to see an additional multiplier at the use site of
> LONG_DELAY_MS, when this doesn't occur in most of the other tests that
> (formerly) had different timeouts. Put another way, why do these tests have
> different timeouts, whereas the tests above that had widely differing
> timeouts were all changed to 10 sec?
Looking more closely, we can improve the readability of TimeOutShrink,
remove the excessive final wait, and claw back one second of test
time.
--- util/concurrent/ThreadPoolExecutor/TimeOutShrink.java 27 Feb 2016 21:15:57 -0000 1.5
+++ util/concurrent/ThreadPoolExecutor/TimeOutShrink.java 2 Mar 2016 00:01:55 -0000
@@ -39,6 +39,7 @@
public class TimeOutShrink {
static final long LONG_DELAY_MS = Utils.adjustTimeout(10_000);
+ static final long KEEPALIVE_MS = 12L;
static void checkPoolSizes(ThreadPoolExecutor pool,
int size, int core, int max) {
@@ -51,7 +52,8 @@
final int n = 4;
final CyclicBarrier barrier = new CyclicBarrier(2*n+1);
final ThreadPoolExecutor pool
- = new ThreadPoolExecutor(n, 2*n, 1L, TimeUnit.SECONDS,
+ = new ThreadPoolExecutor(n, 2*n,
+ KEEPALIVE_MS, MILLISECONDS,
new SynchronousQueue<Runnable>());
final Runnable r = new Runnable() { public void run() {
try {
@@ -64,12 +66,17 @@
barrier.await();
checkPoolSizes(pool, 2*n, n, 2*n);
barrier.await();
- while (pool.getPoolSize() > n)
- Thread.sleep(100);
- Thread.sleep(100);
+ long nap = KEEPALIVE_MS + (KEEPALIVE_MS >> 2);
+ for (long sleepyTime = 0L; pool.getPoolSize() > n; ) {
+ check((sleepyTime += nap) <= LONG_DELAY_MS);
+ Thread.sleep(nap);
+ }
+ checkPoolSizes(pool, n, n, 2*n);
+ Thread.sleep(nap);
checkPoolSizes(pool, n, n, 2*n);
pool.shutdown();
- check(pool.awaitTermination(6 * LONG_DELAY_MS, MILLISECONDS));
+ check(pool.awaitTermination(LONG_DELAY_MS, MILLISECONDS));
}
//--------------------- Infrastructure ---------------------------
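The behavior TimeOutShrink exercises can be seen in a minimal standalone sketch (not the jtreg test itself): a pool grows past its core size to handle a burst, then the idle non-core thread dies after the keep-alive elapses, bounded by a backstop deadline just like the diff above:

```java
import java.util.concurrent.*;

// Minimal sketch of pool shrinkage after keep-alive expiry.
public class ShrinkDemo {
    public static void main(String[] args) throws Exception {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
            1, 2, 10L, TimeUnit.MILLISECONDS,   // core=1, max=2, 10ms keepalive
            new SynchronousQueue<Runnable>());
        CountDownLatch started = new CountDownLatch(2);
        CountDownLatch release = new CountDownLatch(1);
        Runnable r = () -> {
            started.countDown();
            try { release.await(); }
            catch (InterruptedException e) { throw new AssertionError(e); }
        };
        pool.execute(r);
        pool.execute(r);   // SynchronousQueue forces a second (non-core) thread
        started.await();
        System.out.println("busy poolSize=" + pool.getPoolSize());
        release.countDown();
        // Poll until the idle non-core thread times out, with a backstop
        // deadline instead of an unbounded while/sleep loop.
        long deadline = System.nanoTime() + TimeUnit.SECONDS.toNanos(10);
        while (pool.getPoolSize() > 1) {
            if (System.nanoTime() > deadline)
                throw new AssertionError("pool failed to shrink");
            Thread.sleep(15);
        }
        System.out.println("idle poolSize=" + pool.getPoolSize());
        pool.shutdown();
    }
}
```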
> ------------------------------------------------------------
>
> In various scheduled thread pool executor tests, as well as in
>
> test/java/util/concurrent/ThreadPoolExecutor/ThreadRestarts.java
>
> should delays for scheduled tasks also be scaled as well? If the tests are
> running in a slow environment, and some timeouts are scaled but not others,
> it might result in some tasks executing too soon. I guess this depends on
> the semantics of what's being tested.
Yeah... this change clarifies things a bit and creates daemon threads
just in case:
--- util/concurrent/ThreadPoolExecutor/ThreadRestarts.java 27 Feb 2016 21:15:57 -0000 1.4
+++ util/concurrent/ThreadPoolExecutor/ThreadRestarts.java 1 Mar 2016 23:27:33 -0000
@@ -22,6 +22,7 @@
public class ThreadRestarts {
static final long LONG_DELAY_MS = Utils.adjustTimeout(10_000);
+ static final long FAR_FUTURE_MS = 10 * LONG_DELAY_MS;
public static void main(String[] args) throws Exception {
test(false);
@@ -33,8 +34,9 @@
ScheduledThreadPoolExecutor stpe
= new ScheduledThreadPoolExecutor(10, ctf);
try {
+ // schedule a dummy task in the "far future"
Runnable nop = new Runnable() { public void run() {}};
- stpe.schedule(nop, 10*1000L, MILLISECONDS);
+ stpe.schedule(nop, FAR_FUTURE_MS, MILLISECONDS);
stpe.setKeepAliveTime(1L, MILLISECONDS);
stpe.allowCoreThreadTimeOut(allowTimeout);
MILLISECONDS.sleep(12L);
@@ -53,8 +55,9 @@
final AtomicLong count = new AtomicLong(0L);
public Thread newThread(Runnable r) {
- Thread t = new Thread(r);
count.getAndIncrement();
+ Thread t = new Thread(r);
+ t.setDaemon(true);
return t;
}
}
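For illustration, the counting daemon-thread-factory pattern from the diff can be run standalone (a sketch, not the jtreg test; the exact thread-creation count depends on the JDK version, so the sketch only reports it):

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicLong;

// Sketch of ThreadRestarts' counting factory: daemon threads so a stuck
// pool can never keep the VM alive at exit, plus a creation counter to
// observe whether idle workers are restarted excessively.
public class DaemonFactoryDemo {
    public static void main(String[] args) throws Exception {
        AtomicLong created = new AtomicLong();
        ThreadFactory ctf = r -> {
            created.getAndIncrement();
            Thread t = new Thread(r);
            t.setDaemon(true);
            return t;
        };
        ScheduledThreadPoolExecutor stpe =
            new ScheduledThreadPoolExecutor(10, ctf);
        try {
            // Dummy task in the "far future"; no worker should spin
            // restarting just to wait for it.
            stpe.schedule(() -> {}, 1L, TimeUnit.HOURS);
            stpe.setKeepAliveTime(1L, TimeUnit.MILLISECONDS);
            stpe.allowCoreThreadTimeOut(true);
            Thread.sleep(100L);
            System.out.println("threads created: " + created.get());
        } finally {
            stpe.shutdownNow();
        }
    }
}
```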
> ------------------------------------------------------------
>
> test/java/util/concurrent/ConcurrentQueues/GCRetention.java
>
> - extra commented-out call to forceFullGc() ?
Intentional; an exercise for the reader.
> - probably would be wise to scale the timeout in finalizeDone.await()
It's in a retry loop, so it's probably fine. Maybe scale the number
of iterations?
> ------------------------------------------------------------
>
> test/java/util/concurrent/Exchanger/ExchangeLoops.java
> test/java/util/concurrent/ExecutorCompletionService/ExecutorCompletionServiceLoops.java
>
> The number of iterations was reduced from 100,000 to 2,000, particularly the
> initial "warm up" run (at least in ExchangeLoops). IIRC the C2 compiler
> kicks in at 10,000 iterations. The reduced number of iterations
> (particularly in the initial "warm up" runs) doesn't meet this threshold.
> Could that be a problem?
It's a general problem with any of these "loops" tests. They can run
forever if we like, and they can be used as benchmarks (they derive
from Doug's benchmark code), but it's also important to optimize test
run time. And we don't have any standard support in jtreg for scaling
tests up for a "stress mode". Although we do sometimes find hotspot
bugs, that's not the primary purpose of these tests, and the hotspot
team is good at using interesting flag combinations that should invoke
C2 even with a small number of iterations.