OOM error caused by large array allocation in G1
Ravi
mailravi at gmail.com
Tue Nov 21 14:18:09 UTC 2017
Did you go through this blog post:
https://performancetestexpert.wordpress.com/2017/03/16/important-configuration-parameters-for-tuning-apache-spark-job/
If you have limits on hardware availability, especially RAM, one important
suggestion from it is: "Storage level has been changed to ‘Disk_Only’:
before the change, we were getting OOM when processing 250K messages during
the aggregation window of 300 seconds. After the change, we could process
540K messages in the aggregation window without getting OOM. Even though
in-memory gives better performance, due to the limitation of the hardware
availability I had to implement Disk-Only."
Thanks
Ravi
On Tue, Nov 21, 2017 at 7:29 PM, Thomas Schatzl <thomas.schatzl at oracle.com>
wrote:
> Hi,
>
> On Tue, 2017-11-21 at 21:48 +0800, Lijie Xu wrote:
> > Hi Thomas,
> >
> [...]
> > > > I want to know whether my guess is right ...
> > >
> > > Very likely. This is a long-standing issue (I actually investigated
> > > it about 10 years ago on a different regional collector), and given
> > > your findings it is very likely you are correct.
> > > The issue also has an extra section in the tuning guide.
> >
> > ==> This reference is very helpful for me. Another question: do the
> > Parallel and CMS collectors have this defect too?
>
> No. The Parallel and CMS full GCs always move all objects. I filed
> JDK-8191565 [0] to at least avoid the OOME. Maybe it can be fixed by JDK 11.
>
> Thanks,
> Thomas
>
> [0] https://bugs.openjdk.java.net/browse/JDK-8191565
>
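
For anyone following the thread, a minimal sketch (Scala; all sizes and flag
values below are illustrative, not a tuned reproducer) of the allocation
pattern being discussed: in G1, any object of at least half a heap region is
a "humongous" allocation that must be placed in contiguous free regions, so
a fragmented heap can throw OutOfMemoryError for a large array even when the
total free space looks sufficient.

    // Run on a JVM with, e.g.:
    //   -XX:+UseG1GC -XX:G1HeapRegionSize=8m -Xmx1g
    // (illustrative values; G1HeapRegionSize must match regionSize below)
    object HumongousSketch {
      def main(args: Array[String]): Unit = {
        val regionSize = 8 * 1024 * 1024

        // At least half a region: G1 allocates this as a humongous object in
        // contiguous regions, outside the normal young generation.
        val humongous = new Array[Byte](regionSize / 2 + 1)

        // Below the threshold: allocated normally inside a single region.
        val ordinary = new Array[Byte](regionSize / 4)

        println(s"humongous: ${humongous.length} bytes, ordinary: ${ordinary.length} bytes")
      }
    }

Until the bug referenced above is fixed, one mitigation (where the array
sizes permit) is to raise -XX:G1HeapRegionSize so the large arrays fall
below the humongous threshold, at the cost of coarser-grained regions.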