Regression (b88): OOM on parallel limit operation
Paul Sandoz
paul.sandoz at oracle.com
Fri May 10 02:15:25 PDT 2013
On May 9, 2013, at 10:57 AM, "Mallwitz, Christian" <christian.mallwitz at commerzbank.com> wrote:
> Hi,
>
> Re-reporting the problem for build 88: the limit() in the third example bombs out with an immediate OOM.
>
Thanks!
Since the current implementation of limit is a full barrier (all upstream elements are buffered until the limit is known to have been reached), the 256 MB maximum heap in your run is too small. With the 2 GB maximum heap the VM picks by default on my machine, the program completes:
$ java -XX:+PrintCommandLineFlags OOM
-XX:ClassMetaspaceSize=104857600 -XX:InitialHeapSize=134217728 -XX:MaxHeapSize=2147483648 -XX:+PrintCommandLineFlags -XX:+UseCompressedKlassPointers -XX:+UseCompressedOops -XX:+UseParallelGC
200000
100000
200000
However, I can reproduce the OOM if I increase the limit sizes by a large enough factor.
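In the meantime a larger maximum heap works around the current behaviour; running the reported program with something like

    $ java -Xmx2g OOM

gives it the same 2 GB ceiling shown in my output above.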
Note that we are still working on this area; there are various optimizations we can apply depending on the stream's characteristics, for example when the stream is sized or when it is unordered.
The case with Stream.iterate().limit() is the worst possible one: an infinite (unknown-sized), ordered stream.
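To illustrate the sized case, here is a rough sketch (untested against b88) using LongStream.range as a stand-in source; with a sized source the pipeline has an upper bound on the number of elements, so the slice operation can compute exact split boundaries instead of buffering until the limit is seen to be reached:

    import java.util.stream.LongStream;

    public class SizedLimit {
        public static void main(String... ignored) {
            // The source is SIZED: at most 20_000_000 elements flow through,
            // and every 100th passes the filter, so exactly 200_000 reach limit().
            System.out.println(LongStream.range(1, 20_000_001)
                    .parallel()
                    .filter(l -> l % 100 == 0)
                    .limit(200_000)
                    .count()); // prints 200000
        }
    }

The unordered relaxation is similar in spirit: inserting unordered() before limit() would free the implementation from keeping the first 200_000 matches in encounter order, since any 200_000 would do.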
I think we need to change the implementation so it is no longer a full barrier; instead it should be a wrapping slice/limit spliterator that peels off left splits as arrays (the same trick we use for creating a spliterator from an iterator).
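Roughly the shape I have in mind is something like the following simplified, hypothetical sketch (the real thing would also have to cover skip, the primitive specializations, and careful characteristics propagation):

    import java.util.Spliterator;
    import java.util.Spliterators;
    import java.util.function.Consumer;

    // Hypothetical sketch of a wrapping limit spliterator: trySplit() peels
    // off the leftmost elements into an array chunk, the same trick used
    // when creating a spliterator from an iterator.
    class LimitSpliterator<T> implements Spliterator<T> {
        private static final int CHUNK = 1024; // size of each peeled prefix

        private final Spliterator<T> source;
        private long remaining;                // elements still permitted

        LimitSpliterator(Spliterator<T> source, long limit) {
            this.source = source;
            this.remaining = limit;
        }

        @Override
        public boolean tryAdvance(Consumer<? super T> action) {
            if (remaining > 0 && source.tryAdvance(action)) {
                remaining--;
                return true;
            }
            return false;
        }

        @Override
        public Spliterator<T> trySplit() {
            if (remaining <= 0)
                return null;
            // Materialize a bounded leftmost prefix; the returned split is
            // array-backed and SIZED, so it needs no barrier downstream.
            int n = (int) Math.min(CHUNK, remaining);
            Object[] chunk = new Object[n];
            int filled = 0;
            while (filled < n) {
                final int i = filled;
                if (!source.tryAdvance(t -> chunk[i] = t))
                    break;
                filled++;
            }
            if (filled == 0)
                return null;
            remaining -= filled;
            return Spliterators.spliterator(chunk, 0, filled, ORDERED);
        }

        @Override
        public long estimateSize() {
            return Math.min(source.estimateSize(), remaining);
        }

        @Override
        public int characteristics() {
            return source.characteristics() & ~(SIZED | SUBSIZED);
        }
    }

Since each peeled chunk is the leftmost remaining prefix, encounter order is preserved, and remaining is only touched by the thread that owns the suffix, so no synchronization is needed.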
Paul.
> Cheers!
> Christian
>
> Java(TM) SE Runtime Environment (build 1.8.0-ea-lambda-nightly-h4200-20130429-b88-b00)
> Java HotSpot(TM) Client VM (build 25.0-b28, mixed mode)
>
> -XX:InitialHeapSize=16777216
> -XX:MaxHeapSize=268435456
> -XX:+PrintCommandLineFlags
> -XX:-UseLargePagesIndividualAllocation
>
> package com.snuffbumble.lambda;
>
> public class OOM {
>     public static void main(String... ignored) {
>
>         // prints 200_000
>         System.out.println(java.util.stream.LongStream.iterate(1L, n -> n + 1L)
>                 .filter(l -> l % 100 == 0).limit(200_000).count());
>
>         // prints 100_000
>         System.out.println(java.util.stream.LongStream.iterate(1L, n -> n + 1L)
>                 .parallel()
>                 .filter(l -> l % 100 == 0).limit(100_000).count());
>
>         // immediate OOM
>         System.out.println(java.util.stream.LongStream.iterate(1L, n -> n + 1L)
>                 .parallel()
>                 .filter(l -> l % 100 == 0).limit(200_000).count());
>     }
> }
>
>
> Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
> at java.util.stream.SpinedBuffer$OfLong.newArray(SpinedBuffer.java:782)
> at java.util.stream.SpinedBuffer$OfLong.newArray(SpinedBuffer.java:755)
> at java.util.stream.SpinedBuffer$OfPrimitive.ensureCapacity(SpinedBuffer.java:473)
> at java.util.stream.Nodes$LongSpinedNodeBuilder.begin(Nodes.java:1884)
> at java.util.stream.Sink$ChainedLong.begin(Sink.java:317)
> at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:466)
> at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:457)
> at java.util.stream.SliceOps$SliceTask.doLeaf(SliceOps.java:328)
> at java.util.stream.SliceOps$SliceTask.doLeaf(SliceOps.java:273)
> at java.util.stream.AbstractTask.compute(AbstractTask.java:312)
> at java.util.concurrent.CountedCompleter.exec(CountedCompleter.java:710)
> at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:260)
> at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1001)
> at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1607)
> at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
>