G1GC Full GCs

Todd Lipcon todd at cloudera.com
Wed Jul 7 17:32:38 PDT 2010


On Wed, Jul 7, 2010 at 5:26 PM, Y. S. Ramakrishna <
y.s.ramakrishna at oracle.com> wrote:

>
>
> On 07/07/10 17:18, Todd Lipcon wrote:
> ...
>
>  Looking at the graph you attached, it appears that the low-water mark
>> stabilizes at somewhere between 4.5G and 5G. The configuration I'm running
>> is to allocate 40% of the heap to Memstore and 20% of the heap to the LRU
>> cache. For an 8G heap, this is 4.8GB. So, for this application it's somewhat
>> expected that, as it runs, it will accumulate more and more data until it
>> reaches this threshold. The data is, of course, not *permanent*, but it's
>> reasonably long-lived, so it makes sense to me that it should go into the
>> old generation.
>>
>
> Ah, I see. In that case, I think you could try using a slightly larger old
> gen. If the old gen stabilizes at 4.2 GB, we should allow as much again for
> slop, i.e. make the old gen 8.4 GB (or double whatever the measured stable
> old gen occupancy is), then add the young gen size to that, and use
> the total for the whole heap. I would be even more aggressive
> and grant still more to the old gen -- as I said earlier, perhaps
> double the old gen from its present size. If that doesn't work,
> we know that something is amiss in the way we are going about this.
> If it works, we can iterate downwards from a config that we know
> works, down to what may be considered an acceptable space overhead
> for GC.
>
>
OK, I can try some tests with cache configured for only 40% heap usage.
Should I run these tests with CMS or G1?
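The sizing recipe quoted above can be translated into HotSpot flags. The numbers below are purely illustrative, assuming a measured stable old-gen occupancy of ~4.2 GB (doubled to ~8.4 GB for slop) and a 1.2 GB young gen:

```shell
# Illustrative sizing only, per the recipe above:
#   old gen  = 2 x measured stable occupancy (~4.2 GB) = ~8.4 GB
#   heap     = old gen + young gen (1.2 GB)            = ~9.6 GB
java -Xms9600m -Xmx9600m -Xmn1200m -XX:+UseConcMarkSweepGC ...
```

Setting -Xms equal to -Xmx avoids heap resizing during the run, which keeps the measurements comparable across tests.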


>
>
>> If you like, I can tune those percentages down to 20/20 instead of 40/20,
>> and I think we'll see the same pattern, just stabilized around 3.2GB. This
>> will probably delay the full GCs, but we'll still eventually hit them. It's
>> also lower than we can realistically go - customers won't like "throwing
>> away" 60% of the allocated heap to GC!
>>
>
> I understand that sentiment. I want us to get to a state where we are able
> to completely avoid the creeping fragmentation, if possible. There are
> other ways to tune for this, but they are more labour-intensive and tricky,
> and I would not want to go into that lightly. You might want to contact
> your Java support for help with that.
>
>
Yep, we've considered various solutions involving managing our own
ref-counted slices of a single pre-allocated byte array - essentially
writing our own slab allocator. In theory this constrains all of the
GC-visible objects to a small number of fixed sizes, and thus prevents
fragmentation, but it's quite a project to undertake :)
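The slab-allocator idea above might look something like the following minimal sketch (all names here are invented for illustration; this is not HBase code): one pre-allocated byte[] handing out fixed-size, ref-counted slices, so the only objects the GC ever sees are small, uniformly sized wrappers.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: carve fixed-size slices out of one pre-allocated
// byte[]. Long-lived data lives in a single array that cannot fragment
// the old gen; the GC only sees small, uniformly sized Slice wrappers.
public class Slab {
    private final byte[] backing;
    private final int sliceSize;
    private final AtomicInteger nextOffset = new AtomicInteger(0);
    private final AtomicInteger refCount = new AtomicInteger(0);

    public Slab(int capacity, int sliceSize) {
        this.backing = new byte[capacity];
        this.sliceSize = sliceSize;
    }

    /** Hands out the next slice, or null once the slab is exhausted. */
    public Slice allocate() {
        int off = nextOffset.getAndAdd(sliceSize);
        if (off + sliceSize > backing.length) {
            return null;  // caller would roll over to a fresh slab
        }
        refCount.incrementAndGet();
        return new Slice(this, off, sliceSize);
    }

    /** True once every outstanding slice has been released. */
    public boolean isReclaimable() {
        return refCount.get() == 0 && nextOffset.get() > 0;
    }

    public static final class Slice {
        private final Slab slab;
        public final int offset;
        public final int length;

        Slice(Slab slab, int offset, int length) {
            this.slab = slab;
            this.offset = offset;
            this.length = length;
        }

        /** The shared backing array; valid bytes are [offset, offset+length). */
        public byte[] bytes() { return slab.backing; }

        /** Drops this slice's reference on the backing slab. */
        public void release() { slab.refCount.decrementAndGet(); }
    }

    public static void main(String[] args) {
        Slab slab = new Slab(64, 16);
        Slice a = slab.allocate();
        Slice b = slab.allocate();
        System.out.println(a.offset + " " + b.offset);  // 0 16
        a.release();
        b.release();
        System.out.println(slab.isReclaimable());       // true
    }
}
```

Because slices are only ever handed out at fixed offsets within one array, freeing and reusing them never scatters variably sized objects through the old generation, which is the fragmentation pattern the full GCs here are paying for.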

Regarding Java support, as an open source project we have no such luxury.
Projects like HBase and Hadoop, though, are pretty visible to users as "big
Java apps", so getting them working well on the GC front does good things
for Java adoption in the database/distributed systems community, I think.

-Todd


-- 
Todd Lipcon
Software Engineer, Cloudera

