Regression (b92): OOM on filter/limit operation on unbound/parallel stream

Paul Sandoz paul.sandoz at oracle.com
Tue Jun 4 08:09:05 PDT 2013


On Jun 4, 2013, at 4:46 PM, Aleksey Shipilev <aleksey.shipilev at oracle.com> wrote:

> On 06/04/2013 06:17 PM, Paul Sandoz wrote:
>> If I run in interpreted mode (-Xint), it works fine, suggesting we 
>> are running into a hotspot issue, perhaps a hotspot/GC issue:
> 
> I have reproduced this as well, and I am seeing
>  java.util.stream.SliceOps$SliceTask ->
>  java.util.Spliterators$LongArraySpliterator ->
>  long[]
> 
> ...occupying most of the heap (3 GB on my machine). It passes with
> -Xmx7g without problems. With lower heaps, it does a series of full GCs,
> which end up with:
> 
> [Full GC (Ergonomics) [PSYoungGen: 557248K->555317K(901760K)] [ParOldGen: 4190056K->4190056K(4194304K)] 4747304K->4745374K(5096064K), [Metaspace: 4142K->4702K(110592K)], 0.0800150 secs] [Times: user=0.19 sys=0.00, real=0.08 secs]
> [Full GC (Ergonomics) [PSYoungGen: 557248K->0K(901760K)] [ParOldGen: 4190056K->20912K(1710848K)] 4747304K->20912K(2612608K), [Metaspace: 4144K->4702K(110592K)], 0.2308390 secs] [Times: user=0.15 sys=0.16, real=0.23 secs]
> 
> ...so the heap appears to be empty. The application is still munching
> through data in the main thread, which explains why there is no
> parallelism happening:
> 

Ah! A light bulb just turned on :-)

That stack trace is an artifact of the iterator-backed spliterator returning null from trySplit because it trapped an OOME; leaf processing then kicks in for the right-hand side, and since the underlying iterator is infinite, it never terminates. So it is a nasty secondary failure caused by the memory pressure. Normally what would happen is the task would get cancelled.

My conclusion is that those iterator-based spliterators should never throw OOME from trySplit.
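To make the allocation concrete, here is a minimal sketch (class name is mine; JDK 8 API): trySplit() on an iterator-backed spliterator drains a batch of elements into a freshly allocated long[] and hands back an array-backed spliterator over it — the same SliceTask -> LongArraySpliterator -> long[] chain visible in the heap dump.

```java
import java.util.PrimitiveIterator;
import java.util.Spliterator;
import java.util.Spliterators;
import java.util.stream.LongStream;

public class IteratorSplitSketch {
    public static void main(String[] args) {
        // An infinite iterator wrapped as a spliterator, much as the
        // iterator-based long stream sources do internally.
        PrimitiveIterator.OfLong it = LongStream.iterate(0, i -> i + 1).iterator();
        Spliterator.OfLong sp =
                Spliterators.spliteratorUnknownSize(it, Spliterator.ORDERED);

        // trySplit() copies a batch of elements from the iterator into a new
        // long[] and returns an array-backed spliterator over that batch;
        // that allocation is where an OOME can be trapped under pressure.
        Spliterator.OfLong prefix = sp.trySplit();
        long[] seen = {0};
        prefix.forEachRemaining((long v) -> seen[0]++);
        System.out.println(prefix != null && seen[0] > 0);
    }
}
```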

Still, it does not explain why OOMEs are happening at all for what seem like reasonable heap sizes (especially when interpreted mode works fine).
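For anyone trying to reproduce this: the original OOM.java is not shown here, but judging from the stack trace a pipeline of this shape should exercise the same code path. The method body below is my guess at firstNPrimes, not the actual test — the key ingredients are an unbounded source, parallel(), filter, and limit.

```java
import java.util.stream.LongStream;

public class OOMSketch {
    // Hypothetical reconstruction of OOM.firstNPrimes from the stack trace:
    // count the first n primes from an infinite, parallel long stream.
    // filter + limit over an unbounded parallel source is the shape that
    // buffers elements into long[] batches and can exhaust the heap in b92.
    static long firstNPrimes(int n) {
        return LongStream.iterate(2, i -> i + 1) // unbounded source
                .parallel()
                .filter(OOMSketch::isPrime)
                .limit(n)                        // SliceOps kicks in here
                .count();
    }

    static boolean isPrime(long v) {
        for (long d = 2; d * d <= v; d++)
            if (v % d == 0) return false;
        return true;
    }

    public static void main(String[] args) {
        // A small n completes quickly; a large n is what exhausted the heap.
        System.out.println(firstNPrimes(100));
    }
}
```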

Paul.


> "main" #1 prio=5 os_prio=0 tid=0x00007fa290009800 nid=0x7342 runnable [0x00007fa298ed1000]
>   java.lang.Thread.State: RUNNABLE
> 	at java.util.stream.Nodes$SpinedNodeBuilder.accept(Nodes.java:1248)
> 	at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:176)
> 	at java.util.stream.LongPipeline$3$1.accept(LongPipeline.java:230)
> 	at java.util.PrimitiveIterator$OfLong.forEachRemaining(PrimitiveIterator.java:166)
> 	at java.util.stream.LongStream$1.forEachRemaining(LongStream.java)
> 	at java.util.Spliterators$LongIteratorSpliterator.forEachRemaining(Spliterators.java:2029)
> 	at java.util.Spliterator$OfLong.forEachRemaining(Spliterator.java:744)
> 	at java.util.Spliterators$LongIteratorSpliterator.forEachRemaining(Spliterators.java)
> 	at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
> 	at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
> 	at java.util.stream.SliceOps$SliceTask.doLeaf(SliceOps.java:606)
> 	at java.util.stream.SliceOps$SliceTask.doLeaf(SliceOps.java:553)
> 	at java.util.stream.AbstractTask.compute(AbstractTask.java:312)
> 	at java.util.concurrent.CountedCompleter.exec(CountedCompleter.java:710)
> 	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
> 	at java.util.concurrent.ForkJoinTask.doInvoke(ForkJoinTask.java:385)
> 	at java.util.concurrent.ForkJoinTask.invoke(ForkJoinTask.java:717)
> 	at java.util.stream.SliceOps$1.opEvaluateParallelLazy(SliceOps.java:150)
> 	at java.util.stream.AbstractPipeline.sourceSpliterator(AbstractPipeline.java:442)
> 	at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:230)
> 	at java.util.stream.LongPipeline.reduce(LongPipeline.java:447)
> 	at java.util.stream.LongPipeline.sum(LongPipeline.java:405)
> 	at java.util.stream.ReferencePipeline.count(ReferencePipeline.java:526)
> 	at OOM.firstNPrimes(OOM.java:24)
> 	at OOM.main(OOM.java:11)
> 
> 
> -Aleksey.



More information about the lambda-dev mailing list