Question regarding G1 option to run parallel Old generation garbage collection?

Vitaly Davidovich vitalyd at gmail.com
Thu Oct 25 20:58:38 UTC 2012


Kirk,

Unless I misunderstood your question, not having survivor spaces means you
need to promote to tenured, which may be undesirable for objects that are
not long-lived.  The two survivor spaces allow an object to survive a few
young GCs and then die before promotion (provided its tenuring threshold is
not breached).
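
As an illustration (a sketch only: flag names as in 7u-era HotSpot, and
YourApp standing in for the real main class), the aging behavior can be
watched directly:

  java -XX:+UseG1GC -XX:MaxTenuringThreshold=15 \
       -XX:+PrintTenuringDistribution -XX:+PrintGCDetails YourApp

-XX:+PrintTenuringDistribution prints, at each young GC, how many bytes sit
at each age; objects that reach the threshold (or overflow the survivor
space) are promoted to tenured.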

Sent from my phone
On Oct 25, 2012 4:43 PM, "Kirk Pepperdine" <kirk at kodewerk.com> wrote:

> This is a nice explanation... I would think that not necessarily having a
> to/from survivor space would cut back on some copy costs?
>
> On 2012-10-25, at 7:53 PM, Srinivas Ramakrishna <ysr1729 at gmail.com> wrote:
>
> Kirk, think of Eden as the minimum space available for allocation before a
> young GC becomes necessary. Think of a survivor space as the minimum space
> set aside to hold objects surviving in the young generation and not being
> tenured. G1 does take advantage of the fact that you do not necessarily
> need to keep the "To" survivor space in reserve separately, but can draw
> from a common pool of free regions. In practice, it might be sensible to
> reuse recently collected Eden regions (I can't recall how hard G1 tries to
> do that) because it's possible that some caches are warm, but with today's
> huge young generation sizes, maybe it doesn't make sense to talk about
> cache reuse. In the presence of paging, reusing Eden and survivor pages
> becomes more important: it reduces the cost of inadvertently picking a
> region whose physical pages need to be faulted in because they had been
> paged out or are being touched for the first time. (This may be more
> important on Windows because of its proclivity to evict pages that haven't
> been touched in a while even when there is no virtual memory pressure.)
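>
> (As a side note, a sketch of one mitigation for the first-touch cost where
> it matters: -XX:+AlwaysPreTouch makes the VM touch every heap page during
> initialization, e.g.
>
>   java -XX:+UseG1GC -Xms1280M -Xmx1280M -XX:+AlwaysPreTouch YourApp
>
> so those page faults are paid once at startup rather than during a pause.
> It does nothing to stop pages being evicted later, of course, and the flag
> values here are purely illustrative.)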
>
> John might be able to tell us whether or how hard G1 tries to reuse
> Eden/Survivor pages (I had often lobbied for that because AFAIR old G1 code
> did not make any such attempts, but G1 has seen many recent improvements
> since I last looked).
>
> -- ramki
>
> On Fri, Oct 19, 2012 at 12:38 PM, Kirk Pepperdine <kirk at kodewerk.com> wrote:
>
>> Hi Charlie,
>>
>> Was thinking that, as long as you're evacuating regions, was there a need
>> to make a distinction between Eden and survivor... they are all just
>> regions in the young gen. The distinction seems somewhat artificial.
>>
>> As for your use case... it makes sense on one hand, but on the other I'm
>> wondering if it's akin to calling System.gc()... time will tell, methinks.
>> ;-)
>>
>> Regards,
>> Kirk
>>
>> On 2012-10-19, at 9:28 PM, Charlie Hunt <chunt at salesforce.com> wrote:
>>
>> Perhaps if you're really, really ... really squeezed on available heap
>> space and wanted stuff cleaned from old asap, then
>> InitiatingHeapOccupancyPercent=0 could be justified?
>>
>> Btw, I thought about the question you asked at J1, "why use survivor
>> spaces with G1?" ... I'll offer an answer; John Cu or Bengt, along with
>> Monica, are free to offer their thoughts as well.
>>
>> By using survivor spaces, you should (and I'd expect that to be the
>> case) reduce the number of concurrent cycles you'll do.  Without survivor
>> spaces you would likely visit long-lived objects more frequently, as a
>> result of doing more concurrent cycles.  In addition, the total number of
>> different regions you evacuate may be higher without survivor spaces, and
>> you may evacuate the same (live) objects more times than you would with
>> survivor spaces.  In short, I would expect that in most cases you end up
>> evacuating fewer times per object and doing fewer concurrent cycles, all
>> of which saves CPU cycles for application threads.  Of course, I'm sure we
>> can write an application where it would be advantageous not to have
>> survivor spaces in G1.  But we could also write one that would never need
>> a concurrent cycle in a G1 heap that has survivor spaces.
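>>
>> (If anyone wants to try that comparison, a minimal sketch: run the same
>> load with and without -XX:MaxTenuringThreshold=0, which effectively
>> removes survivor copying by promoting every surviving object at its first
>> young GC, and count the initial-mark pauses in a +PrintGCDetails log. The
>> difference in concurrent cycle counts should show up there.)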
>>
>> Thanks again for the question!
>>
>> charlie ...
>>
>> On Oct 19, 2012, at 2:16 PM, Kirk Pepperdine wrote:
>>
>> Thanks Charlie,
>>
>> I only had a cursory look at the source and found the initial calculation
>> but stopped there, figuring someone here would know off the top of their
>> heads. I didn't expect anyone to spelunk through the code, so a big thanks
>> for that.
>>
>> Again, I'm struggling to think of a use case for this behaviour.
>>
>> Regards,
>> Kirk
>>
>> On 2012-10-19, at 8:56 PM, Charlie Hunt <chunt at salesforce.com> wrote:
>>
>> Don't mean to jump in front of Monica. :-/   But she can confirm. ;-)
>>
>> A quick look at the G1 source code suggests that if
>> InitiatingHeapOccupancyPercent=0, the following will happen:
>> - the first minor GC will initiate a concurrent cycle, meaning that you'll
>> see a young GC with an initial-mark in the GC log w/ +PrintGCDetails
>> - every minor GC thereafter, as long as there is not an active concurrent
>> cycle, will initiate the start of a concurrent cycle
>> * In other words, concurrent cycles will run back to back.  Remember that
>> there needs to be a minor GC to initiate the concurrent cycle, i.e. the
>> initial-mark. (There is at least one caveat to that, which I'll explain
>> next.)  So, once a concurrent cycle completes, the next concurrent cycle
>> will not start until the next minor GC, or until a humongous allocation
>> occurs as described next.
>> - If there is a humongous object allocation, a concurrent cycle will be
>> initiated (if InitiatingHeapOccupancyPercent=0). This is done before the
>> humongous allocation itself.
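>>
>> A quick way to see that in a log, as a sketch (the exact log wording can
>> vary across releases, and YourApp is a placeholder):
>>
>>   java -XX:+UseG1GC -XX:InitiatingHeapOccupancyPercent=0 \
>>        -XX:+PrintGCDetails -XX:+PrintGCTimeStamps YourApp
>>
>> Each young pause that starts a concurrent cycle is tagged (initial-mark),
>> so with the setting above nearly every young GC should carry that tag.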
>>
>> charlie ...
>>
>> On Oct 19, 2012, at 12:58 PM, Kirk Pepperdine wrote:
>>
>> Hi Monica,
>>
>> Can you comment on what a value of 0 means?
>>
>> Regards,
>> Kirk
>>
>> On 2012-10-19, at 2:55 PM, Monica Beckwith <monica.beckwith at oracle.com>
>> wrote:
>>
>>  A couple of quick observations and questions -
>>
>>    1. G1 is officially supported as of 7u4. (There have been numerous
>>    performance improvements since, so I recommend updating to the latest
>>    jdk7 update, if possible.)
>>    2. What do you mean by old gen collection? Are you talking about
>>    MixedGCs?
>>    3. Instead of setting InitiatingHeapOccupancyPercent to zero, have
>>    you tried resizing your young generation?
>>       1. I see the NewRatio, but that fixes the nursery at 640M; instead,
>>       have you tried a nursery smaller than the default minimum, using
>>       the NewSize option? (See the sketch after this list.)
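>>
>> A sketch of what that might look like against the flags quoted below (the
>> 128M value is purely illustrative, not a recommendation):
>>
>>   -Xms1280M -Xmx1280M -XX:+UseG1GC -XX:NewSize=128M
>>       -XX:MaxGCPauseMillis=500
>>
>> i.e., drop -XX:NewRatio=1 and let a smaller nursery shorten the young
>> pauses, rather than forcing back-to-back concurrent cycles with
>> InitiatingHeapOccupancyPercent=0.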
>>
>> -Monica
>>
>>
>> On 10/19/2012 12:13 AM, csewhiz wrote:
>>
>>  Hello All,
>>   Sorry for posting this question to this mailing list; I am unable to
>> find an answer for it anywhere. I am trying to tune our application for
>> G1GC, as we need very small pauses, below 500 msec.
>>   The problem is that when we run with G1GC (under jdk 6_u37), old
>> generation garbage collection only happens when the heap reaches its max
>> size. I noticed on jdk 6u37 that if the max heap size is 1GB the pause is
>> close to 1 sec, and with 2GB it is close to 2 sec.
>>
>>   Is there any parameter to force the old GC to happen regularly?
>>
>> I am trying the following settings:
>>
>> -Xms1280M -Xmx1280M -XX:+UseG1GC -XX:MaxTenuringThreshold=15
>> -XX:SurvivorRatio=8 -XX:NewRatio=1 -XX:GCPauseIntervalMillis=7500
>> -XX:MaxGCPauseMillis=500 -XX:InitiatingHeapOccupancyPercent=0
>> -XX:ParallelGCThreads=7 -XX:ConcGCThreads=7
>>
>> If anyone can give insight into how a full GC is triggered internally, it
>> would be of great help.
>>
>> PS: I have tried running without any G1-specific options, but that was not
>> of much help; hence this attempt at being aggressive, which also hasn't
>> helped much.
>>
>>
>> Regards,
>> Soumit
>>
>>
>> --
>> Monica Beckwith | Java Performance Engineer
>> VOIP: +1 512 401 1274
>> Texas
>> Oracle <http://www.oracle.com/> is committed to developing practices and
>> products that help protect the environment
>> <http://www.oracle.com/commitment>
>>
>
>