GC listener

Pas pasthelod at gmail.com
Sat Mar 22 14:55:50 UTC 2014


We'd like sub-10ms minor GCs with 8+ GB heaps (sometimes ~24 GB), which is
achievable with CMS (with some unholy -XX:ParGCCardsPerStrideChunk
setting), but the inevitable Full GC warrants this dance (plus it lets us
do some housecleaning on the instance too). G1 has too much overhead, and
its failure modes are less well understood (by us) and seemed worse in
their consequences than those of CMS.
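
(For reference, the shape of command line this implies - the numbers are
illustrative, not our actual settings; note that ParGCCardsPerStrideChunk
is a diagnostic flag, so it has to be unlocked first:

    # illustrative values only, not the real production settings
    java -Xms24g -Xmx24g \
         -XX:+UseConcMarkSweepGC -XX:+UseParNewGC \
         -XX:+UnlockDiagnosticVMOptions \
         -XX:ParGCCardsPerStrideChunk=32768 \
         -jar backend.jar

Larger strides mean each ParNew worker scans fewer, bigger chunks of the
card table, which is what pulls minor-GC times down on big heaps.)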

> G1 could have been faster than that but we had to gimp our CPU down
because the vendor licensed by CPU count and throwing more cores at it was
prohibitively expensive.

Sure, if we had a really fast, half-infinite Turing tape, we probably
would have opted for G1 too :)

Also, our codebase changes very rapidly, our clients' use of our backend
instances varies greatly, and so on, so GC tuning feels like a constant
game of catch-up given our constrained resources.



On Sat, Mar 22, 2014 at 3:17 PM, Ryan Gardner <ryebrye at gmail.com> wrote:

> What kind of latency are those who are doing the load-balancer dance
> looking for?
>
> What is an unacceptable pause?
>
> G1 can most definitely be tuned to avoid full GCs - I have a cluster of
> G1 apps that are running at, I think, 99.9% throughput (i.e. roughly 0.1%
> GC overhead), with a median pause time of 100ms and live data sets in the
> 34 GB range on 72 GB heaps. They only do a GC every couple of minutes,
> which is why the overhead is so small - and this is on a cache that
> probably churns through terabytes of garbage every day (though some
> objects will live in the heap for days - it's finally a use case that can
> torture many GC algorithms).
>
> (By giving it plenty of extra heap, G1 is able to free up tons of regions
> completely - it's able to collect tens of gigabytes of heap in normal
> collections - and by tuning the collector, the 99.9th percentile pauses
> are kept below our acceptable threshold.)
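>
> (For the curious, the general shape of such a setup - placeholder numbers
> here, not our real production values:
>
>     # illustrative values only
>     java -Xms72g -Xmx72g \
>          -XX:+UseG1GC \
>          -XX:MaxGCPauseMillis=200 \
>          -XX:ParallelGCThreads=8 \
>          -jar cache-server.jar
>
> i.e. a pause-time target plus lots of heap headroom over the ~34 GB live
> set, so entire regions come free; the thread count is deliberately kept
> low, see the next paragraph.)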
>
> G1 could have been faster than that but we had to gimp our CPU down
> because the vendor licensed by CPU count and throwing more cores at it was
> prohibitively expensive.
>
> Taking an app out of a load balancer for GC seems like an outdated
> solution to a problem that now has better solutions readily available.
> On Mar 22, 2014 9:36 AM, "Pas" <pasthelod at gmail.com> wrote:
>
>> Hello!
>>
>> Just a quick for-your-information: at work we've implemented something
>> very similar to the notification-and-System.gc scenario. (The only
>> meaningful difference is that it's periodic rather than dynamic and
>> notification-driven: there's a schedule, and it blindly takes load off,
>> fires a GC, and puts load back. It uses the load balancer's health-check
>> functionality to signal that it's about to do a GC, so it would even
>> work with HAProxy, Hipache, or Nginx's 3rd-party HealthCheckModule.)
>> And I know of others who do this kind of dance on their backends.
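>>
>> (A minimal sketch of that loop, in case it's useful - the class name,
>> timings, and wiring here are made up for illustration; the real thing
>> sits behind the load balancer's health-check URL:
>>
>>     import java.util.concurrent.Executors;
>>     import java.util.concurrent.ScheduledExecutorService;
>>     import java.util.concurrent.TimeUnit;
>>     import java.util.concurrent.atomic.AtomicBoolean;
>>
>>     public class GcDrainScheduler {
>>         // While true, the health-check endpoint answers 503, so the
>>         // load balancer (HAProxy, Nginx, ...) takes us out of rotation.
>>         static final AtomicBoolean draining = new AtomicBoolean(false);
>>
>>         public static void main(String[] args) {
>>             ScheduledExecutorService scheduler =
>>                     Executors.newSingleThreadScheduledExecutor();
>>             scheduler.scheduleAtFixedRate(() -> {
>>                 try {
>>                     draining.set(true);          // start failing health checks
>>                     TimeUnit.SECONDS.sleep(30);  // let the LB notice and drain traffic
>>                     System.gc();                 // full GC while out of rotation
>>                 } catch (InterruptedException e) {
>>                     Thread.currentThread().interrupt();
>>                 } finally {
>>                     draining.set(false);         // healthy again; LB re-adds us
>>                 }
>>             }, 1, 1, TimeUnit.HOURS);
>>         }
>>     }
>>
>> The health-check handler just consults the draining flag; after a couple
>> of failed probes the LB marks the node down, which is the drain window
>> the sleep above allows for.)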
>>
>> Cheers,
>> Pas
>>
>>
>> On Sat, Mar 22, 2014 at 12:40 PM, Milan Mimica <milan.mimica at infobip.com> wrote:
>>
>>> Milan Mimica, Software Engineer / Team Leader
>>>
>>> On 03/21/2014 06:44 PM, Peter B. Kessler wrote:
>>>
>>>> If you exclude yourself from the load balancer before the collection,
>>>> you won't get any more work, won't do any more allocations, and won't cause
>>>> the collection.  Wouldn't it be better to tune your collector to meet your
>>>> deadlines while getting work done?
>>>>
>>>
>>> Sure, application tuning is one way to go. I just want to know what my
>>> options are.
>>> A scheduled System.gc() with -XX:-ExplicitGCInvokesConcurrent also does
>>> the job, but it's ugly.
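>>>
>>> (Concretely - assuming the CMS collector, something like this
>>> illustrative command line; the flag spelled out below is in fact the
>>> default:
>>>
>>>     java -XX:+UseConcMarkSweepGC -XX:-ExplicitGCInvokesConcurrent ... MyApp
>>>
>>> so the scheduled System.gc() runs as an ordinary stop-the-world full
>>> collection, whereas +ExplicitGCInvokesConcurrent would turn the call
>>> into a concurrent CMS cycle instead.)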