Regression (b88): OOM on parallel limit operation
Mallwitz, Christian
christian.mallwitz at Commerzbank.com
Thu May 9 01:57:45 PDT 2013
Hi,
Re-reporting this problem for build 88: the limit() in the third example bombs out with an immediate OOM.
Cheers!
Christian
Java(TM) SE Runtime Environment (build 1.8.0-ea-lambda-nightly-h4200-20130429-b88-b00)
Java HotSpot(TM) Client VM (build 25.0-b28, mixed mode)
-XX:InitialHeapSize=16777216
-XX:MaxHeapSize=268435456
-XX:+PrintCommandLineFlags
-XX:-UseLargePagesIndividualAllocation
package com.snuffbumble.lambda;

public class OOM {
    public static void main(String... ignored) {
        // prints 200_000
        System.out.println(java.util.stream.LongStream.iterate(1L, n -> n + 1L)
                .filter(l -> l % 100 == 0).limit(200_000).count());

        // prints 100_000
        System.out.println(java.util.stream.LongStream.iterate(1L, n -> n + 1L)
                .parallel()
                .filter(l -> l % 100 == 0).limit(100_000).count());

        // immediate OOM
        System.out.println(java.util.stream.LongStream.iterate(1L, n -> n + 1L)
                .parallel()
                .filter(l -> l % 100 == 0).limit(200_000).count());
    }
}
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at java.util.stream.SpinedBuffer$OfLong.newArray(SpinedBuffer.java:782)
at java.util.stream.SpinedBuffer$OfLong.newArray(SpinedBuffer.java:755)
at java.util.stream.SpinedBuffer$OfPrimitive.ensureCapacity(SpinedBuffer.java:473)
at java.util.stream.Nodes$LongSpinedNodeBuilder.begin(Nodes.java:1884)
at java.util.stream.Sink$ChainedLong.begin(Sink.java:317)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:466)
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:457)
at java.util.stream.SliceOps$SliceTask.doLeaf(SliceOps.java:328)
at java.util.stream.SliceOps$SliceTask.doLeaf(SliceOps.java:273)
at java.util.stream.AbstractTask.compute(AbstractTask.java:312)
at java.util.concurrent.CountedCompleter.exec(CountedCompleter.java:710)
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:260)
at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1001)
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1607)
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
-----Original Message-----
From: Brian Goetz [mailto:brian.goetz at oracle.com]
Sent: Friday, April 19, 2013 1:44 PM
To: Mallwitz, Christian
Cc: lambda-dev at openjdk.java.net
Subject: Re: Regression (b86): OOM on parallel stream
Yes, in parallel, a limit operation (currently) needs to buffer its entire results. There are optimizations we can apply (not yet done) on SIZED streams and UNORDERED streams that remove this restriction, but in the general case, there's going to be a lot of buffering when run in parallel. This is because limit() is constrained to delivering the elements in encounter order.
Though in this case, buffering 200000 longs should not run out of memory; your heap size is probably tiny -- and the 10m wait was GC thrashing.
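[Editor's note: a sketch of a possible workaround, not something proposed in this thread. Since the buffering comes from limit() on an infinite, ordered iterate() source, replacing the source with a bounded range keeps any buffering finite. This assumes the LongStream.rangeClosed factory from the current stream API; the class name SizedLimit is made up for illustration.]

```java
import java.util.stream.LongStream;

public class SizedLimit {
    public static void main(String[] args) {
        // A bounded range gives the pipeline a finite source, so the
        // parallel limit() can never buffer an unbounded prefix of
        // elements the way the infinite iterate() source does.
        long count = LongStream.rangeClosed(1, 20_000_000)
                .parallel()
                .filter(l -> l % 100 == 0)   // 200_000 multiples of 100
                .limit(200_000)
                .count();
        System.out.println(count);           // prints 200000
    }
}
```

The range is sized so the filter yields exactly the 200_000 elements the limit asks for; a larger range would also work, at the cost of traversing more elements before the slice task cancels the remaining work.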
On 4/19/2013 8:08 AM, Mallwitz, Christian wrote:
> Hi,
>
> The following throws (after a 10+ minute wait) an OOM - removing the parallel() bit produces the expected result of 200000.
>
> Thanks
> Christian
>
> public class OOM {
>     public static void main(String[] args) {
>         System.out.println(
>             java.util.stream.Streams.iterate(1L, n -> n + 1L)
>                 .parallel()
>                 .filter(l -> l % 100 == 0).limit(200_000).count());
>     }
> }
>
> Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
> at java.util.stream.SpinedBuffer.ensureCapacity(SpinedBuffer.java:129)
> at java.util.stream.Nodes$SpinedNodeBuilder.begin(Nodes.java:1278)
> at java.util.stream.Sink$ChainedReference.begin(Sink.java:252)
> at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:452)
> at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:443)
> at java.util.stream.SliceOps$SliceTask.doLeaf(SliceOps.java:328)
> at java.util.stream.SliceOps$SliceTask.doLeaf(SliceOps.java:273)
> at java.util.stream.AbstractTask.compute(AbstractTask.java:284)
> at java.util.concurrent.CountedCompleter.exec(CountedCompleter.java:710)
> at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:260)
> at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1012)
> at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1631)
> at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
>
>