RFR(XS): 8001425: G1: Change the default values for certain G1 specific flags
John Cuthbertson
john.cuthbertson at oracle.com
Wed Jan 16 19:17:47 UTC 2013
Hi Kirk,
You should be able to give all the cores to the STW GCs
(ParallelGCThreads), unless your application threads are executing JNI
code when a STW GC starts.
You can also explicitly set the number of concurrent marking threads
(ConcGCThreads) and concurrent refinement threads
(G1ConcRefinementThreads) to the number of cores you are prepared to
give up when the application is running.
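For example, on your 24-core box a starting point might look something
like this (the thread counts here are purely illustrative, not tuned
recommendations, and "MyApp" is a placeholder):

  # All 24 cores for the STW pauses; 4 marking threads and 8
  # refinement threads given up while the application is running.
  java -XX:+UseG1GC \
       -XX:ParallelGCThreads=24 \
       -XX:ConcGCThreads=4 \
       -XX:G1ConcRefinementThreads=8 \
       MyApp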
When a marking cycle starts, all of the marking threads are activated
and participate equally.
The activation of the concurrent refinement threads is stepped: when
the number of pending remembered set updates goes above a threshold,
the next thread is activated, and so on. Once the final refinement
thread is activated, if the number of pending updates is still above
the next step, the application threads are employed to update the
remembered sets. Once the number of pending updates drops below the
thresholds, the application threads stop doing that work, and the
refinement threads are progressively deactivated as the number of
pending updates falls further.
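If you want to move the steps themselves, the thresholds are exposed
as the G1 refinement "zone" flags. A hedged sketch (the buffer counts
below are made up for illustration; by default the zones are derived
from other settings rather than fixed at these values):

  # Below the green zone no refinement threads run; between the green
  # and yellow zones the threads are activated one step at a time;
  # above the red zone the application threads start processing the
  # pending update buffers themselves.
  java -XX:+UseG1GC \
       -XX:G1ConcRefinementGreenZone=16 \
       -XX:G1ConcRefinementYellowZone=48 \
       -XX:G1ConcRefinementRedZone=64 \
       MyApp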
Choosing the right mix of concurrent threads depends upon your
application. Since you most likely do not want your application threads
to do any processing of pending remembered set updates, I would bias
towards more refinement threads and fewer marking threads. If your
marking cycles are taking a long time and the amount of old data
mutation is low then I would suggest biasing toward more marking
threads. I think you would find a sweet spot for either/both somewhere
between 4 and 8 cores.
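To make the two biases concrete, something like the following (again,
the numbers are only illustrative, for a machine where you can spare
roughly 12 cores):

  # Bias toward refinement: lots of pending remembered set updates,
  # marking finishes comfortably.
  -XX:ConcGCThreads=4 -XX:G1ConcRefinementThreads=8

  # Bias toward marking: long marking cycles, little old-data mutation.
  -XX:ConcGCThreads=8 -XX:G1ConcRefinementThreads=4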
But definitely give as many cores as you can to the STW phases. I'll
let the more experienced performance guys chime in now. :)
JohnC
On 1/16/2013 10:34 AM, Kirk Pepperdine wrote:
> Well, I have an app running on a box with 24 cores with low latency concerns. The app isn't using all 24 cores, which means I'd be happy to give the collector 12 of them if it were able to use them all the time without pausing the app.
>
> Regards,
> Kirk
> On 2013-01-16, at 10:17 AM, John Cuthbertson <john.cuthbertson at oracle.com> wrote:
>
>> Hi Kirk,
>>
>> If there were a truly compelling reason then I would defend the change and *try* to explain the reason. In this case it was just conservatism - changing the defaults of flags that can really alter behavior always makes me slightly nervous. :)
>>
>> What do you mean by an incremental mode for G1? Anything you can cite?
>>
>> JohnC
>>
>> On 1/16/2013 12:09 AM, Kirk Pepperdine wrote:
>>> Hi John,
>>>
>>> You know, there might be a good reason to have different values for different heap sizes... something that makes sense when you look at the implementation. If so, that might justify the need to do this. I just don't understand *why*? But maybe that's just me. I'm not responsible for the implementation, I just help people deal with what's on the table, and so unless something seems really not right, like dropping incremental modes, I'll pass comment and then shut up to let you get on with it... ;-)
>>>
>>> BTW, not to stir up any trouble, but it would be nice to have an incremental mode for G1 for machines with a large number of cores.
>>>
>>> Regards,
>>> Kirk
>>>
>>> On 2013-01-15, at 9:42 AM, John Cuthbertson <john.cuthbertson at oracle.com> wrote:
>>>
>>>> Hi Kirk,
>>>>
>>>> I know you haven't responded to me directly but I did read your email with interest and cited it in my reply to Charlie Hunt.
>>>>
>>>> On 1/12/2013 4:39 AM, Kirk Pepperdine wrote:
>>>>> Hi Charlie,
>>>>>
>>>>> In this case I would have to say that having more frequent GCs that succeed is much better than evacuation failures. Also, having different values for different heap sizes is really confusing. Is it really necessary to have different percentages for different heap sizes, and if so, is there a known gradient for correlating the size vs. percent?
>>>>>
>>>> Unless I hear any objections, I'll apply the new young gen bounds to all heap sizes.
>>>>
>>>> JohnC