Why G1 doesn't cut it for our application
Cornelius Riemenschneider
cri at itscope.de
Wed Apr 9 15:00:06 UTC 2014
Hi,
The server having problems with the 120MB allocations has InitiatingHeapOccupancyPercent=45;
the server with the bigger allocations (600MB, 1.2GB) had InitiatingHeapOccupancyPercent=0, but it allocates objects quickly, so that didn't really help.
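For reference, the relevant G1 flags on the two servers are roughly these (simplified, the rest of the command line omitted):

  -XX:+UseG1GC -XX:InitiatingHeapOccupancyPercent=45   (server with the ~120MB allocations)
  -XX:+UseG1GC -XX:InitiatingHeapOccupancyPercent=0    (server with the ~600MB/1.2GB allocations)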
Another problem is that our objects are, at least sometimes, not short-lived: eden is collected every few seconds, but our objects may live for 20-30 seconds, because
we load a lot of data from MySQL, process and munge it into a customer-specified format, and then write out a ~100MB zip file containing text files.
The raw data for the text files is obviously stored in the heap, per file, and gets quite large.
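Roughly, the export job looks like the following sketch (class and method names here are invented just for illustration; the real code is more involved):

  import java.io.FileOutputStream;
  import java.io.IOException;
  import java.nio.charset.StandardCharsets;
  import java.util.List;
  import java.util.zip.ZipEntry;
  import java.util.zip.ZipOutputStream;

  // Rough illustration only - names are invented, the real code differs.
  class CustomerExport {
      void export(List<String> fileNames, String zipPath) throws IOException {
          try (ZipOutputStream zip = new ZipOutputStream(new FileOutputStream(zipPath))) {
              for (String name : fileNames) {
                  // buildFileContent() loads the rows from MySQL and munges them
                  // into the customer-specified text format; the resulting array
                  // for one file can be tens to hundreds of MB, i.e. a single
                  // huge allocation from G1's point of view.
                  byte[] content = buildFileContent(name).getBytes(StandardCharsets.UTF_8);
                  zip.putNextEntry(new ZipEntry(name));
                  zip.write(content);
                  zip.closeEntry();
              }
          }
      }

      // Placeholder for the real data loading / formatting logic.
      private String buildFileContent(String name) {
          return "";
      }
  }

The per-file content is built up in memory before it is written out, which is where the large single allocations come from.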
If I still encounter problems with 8u20 or later, I'll write again, but I can't promise I'll get to it soon.
Regards,
Cornelius Riemenschneider
--
ITscope GmbH
Ludwig-Erhard-Allee 20
76131 Karlsruhe
Email: cornelius.riemenschneider at itscope.de
https://www.itscope.com
Commercial register: AG Mannheim, HRB 232782
Registered office: Karlsruhe
Managing directors: Alexander Münkel, Benjamin Mund, Stefan Reger
-----Original Message-----
From: Thomas Schatzl [mailto:thomas.schatzl at oracle.com]
Sent: Wednesday, April 9, 2014 16:47
To: Cornelius Riemenschneider
Cc: hotspot-gc-use at openjdk.java.net
Subject: Re: Why G1 doesn't cut it for our application
Hi Cornelius,
On Wed, 2014-04-09 at 13:56 +0200, Cornelius Riemenschneider wrote:
> Hello,
>
> after recently switching to the latest Java 7 (u51), I was eager to
> try out G1.
>
> I used mainly
> http://www.slideshare.net/MonicaBeckwith/garbage-first-garbage-collector-g1-gc-migration-to-expectations-and-advanced-tuning
> for tuning, but I hit a roadblock which makes it impossible for us to use G1.
>
> Our allocation pattern includes sometimes huge objects, sometimes in
> the range of ~120MB, sometimes ~600MB, but I've seen about 1.2GB as
> well. This is obviously unfriendly to the GC.
>
> Our tuned CMS mostly handles this, but sometimes we hit problems, so
> we had high expectations for G1.
>
> G1, in our case, triggers FullGC way more often than CMS, even when
> the heap is mostly empty.
>
>[...]
>
> We have a total of 20G for the heap available, and try to allocate
> objects in the 120MB range.
>
> 9 GB of the heap are free, so these should fit in without problems,
> even in Eden there is a lot of free space.
>
> Still, G1 gets us a FullGC here. This FullGC may be faster than a CMS
> FullGC, but these happen way too often to be tolerated, especially as
> this server is responsible for a web application with which users
> directly interact; 20-second pauses after clicking are simply not
> tolerable.
>
> Besides using CMS, or not doing large allocations (which is sometimes
> impossible, given that we deal with a lot of data),
>
> do you have other ideas?
>
> Is it known that an allocation pattern with a lot of huge objects
> breaks G1?
Current releases with G1 all have problems with many large objects.
The only workaround I can think of at this time, for the case where these large objects are rather short-lived, is to increase the frequency of concurrent marking (by decreasing InitiatingHeapOccupancyPercent to a value at which marking runs more often) so that they are reclaimed sooner.
Beginning with 8u20, effort has been put into reducing this problem, in particular for shorter-lived large objects.
If the heap is relatively empty, as in your case, one change that sorts the free region list (https://bugs.openjdk.java.net/browse/JDK-8036025)
tends to help a lot. This change has already been pushed to the 8u20 repository, and a Java Early Access build containing it may already be available.
We have been working on a variety of other improvements in that area lately, such as a method to reclaim short-lived large objects at every GC
(https://bugs.openjdk.java.net/browse/JDK-8027959), or, in the case of a dense heap, allocating objects in the "tail regions" of large objects (https://bugs.openjdk.java.net/browse/JDK-8031381).
There are some more ideas floating around.
> The above linked presentation suggests increasing the G1 region size
> when humongous allocation requests are encountered, so that these
> allocations go into eden, but we cannot increase the region size beyond
> 32M, so this fix doesn't work for us.
As mentioned, the only suggestion I can think of at this time is to decrease the InitiatingHeapOccupancyPercent appropriately so that the marking will more frequently try to reclaim these large objects, leading to more space available.
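(Note that G1 treats any allocation larger than half a region as humongous, so even at the 32M maximum region size a 120MB object will never be allocated in eden anyway.) As a rough example of the direction to tune in (the concrete value is only a starting point to experiment with, not a recommendation for your workload):

  -XX:+UseG1GC -XX:InitiatingHeapOccupancyPercent=25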
hth,
Thomas