JEP 132: More-prompt finalization
Tony Printezis
tony.printezis at oracle.com
Wed Dec 28 07:38:08 PST 2011
Kirk,
Inline.
On 12/28/2011 09:33 AM, Kirk Pepperdine wrote:
> Hi Dmitry,
>
> I just received an email from one of my customers. I've cut and pasted one of the paragraphs.
>
>> We have already put our expanded knowledge to good use. Daniel was doing
>> some debugging on an old application whose performance was subpar. He checked
>> the GC log, which showed that the application was doing full GCs a lot. He
>> searched the code and found 118 System.gc(); calls! :-D Daniel received
>> guru status
That was an easy guru status. :-)
>> for removing the calls and performance was noticeably quicker!
> Tony dropped a nice diagnostic into the GC logs and it's a bullet point in my GC log seminars.
You're very welcome. :-)
> There is something that I've not investigated, so I might start by asking: why does RMI call System.gc()?
(Jon's email pre-empted me but I'll finish the thought)
RMI has a distributed GC that relies on reference processing: it lets each
node recognize that some objects are unreachable locally so that it can
notify the remote node (or nodes) that its remote references to them no
longer exist. The remote node might then be able to reclaim objects that are
only remotely reachable. (Or this is how I understood it, at least.)
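(As an aside, the server-side hook that depends on this machinery is
java.rmi.server.Unreferenced. A rough sketch, with a made-up Session
interface just for illustration:

    import java.rmi.Remote;
    import java.rmi.RemoteException;
    import java.rmi.server.UnicastRemoteObject;
    import java.rmi.server.Unreferenced;

    // Hypothetical remote interface, only here to make the sketch compile.
    interface Session extends Remote {
        void ping() throws RemoteException;
    }

    // DGC calls unreferenced() once no client holds a live remote reference,
    // which in turn requires the clients' local GCs to have processed the
    // relevant references.
    class SessionImpl extends UnicastRemoteObject implements Session, Unreferenced {
        SessionImpl() throws RemoteException { super(); }

        public void ping() { }

        public void unreferenced() {
            // release server-side resources tied to this session
        }
    }

So if the clients' GCs never run, unreferenced() never fires.)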
RMI used to call System.gc() once a minute (!!!) but, after some
encouragement from yours truly, they changed the default to once an hour
(this is configurable using a property). Note that a STW Full GC is not
really required as long as references are processed. So, in CMS (and
G1), a concurrent cycle is fine, which is why we recommend using
-XX:+ExplicitGCInvokesConcurrent in this case.
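For example, something like the following (the sun.rmi.dgc.* values shown are
just the one-hour default in milliseconds, and MyRmiServer is a placeholder
class name):

    java -XX:+UseConcMarkSweepGC \
         -XX:+ExplicitGCInvokesConcurrent \
         -Dsun.rmi.dgc.client.gcInterval=3600000 \
         -Dsun.rmi.dgc.server.gcInterval=3600000 \
         MyRmiServer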
The RMI folks had warned me against totally disabling those
System.gc() calls (e.g., using -XX:+DisableExplicitGC): if Full
GCs / concurrent cycles do not otherwise happen at a reasonable
frequency, remote nodes might experience memory leaks, since they
will keep treating some otherwise unreachable remote references as
still live. I have no idea how severe such memory leaks would be; I
guess they'd be very application-dependent.
An additional thought that just occurred to me: instead of unconditionally
calling System.gc() every hour, what RMI should really be doing is calling it
every hour only if no old-gen GC has taken place during the last hour. This
would be relatively easy to implement by reading the old-gen collection
counter through the GC MXBeans.
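Roughly like this (a minimal, untested sketch, not RMI's actual code; the
class and method names are made up and the collector-name matching is just a
heuristic that would need checking against the collector in use):

    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;

    // Sketch: force a collection once an hour only if the old-gen collector
    // has not run since the previous check.
    public class ConditionalExplicitGc {
        private long lastOldGenCount = -1;

        void maybeForceGc() {
            long count = oldGenCollectionCount();
            if (count >= 0 && count == lastOldGenCount) {
                System.gc();  // no old-gen GC during the last interval
            }
            lastOldGenCount = count;
        }

        private static long oldGenCollectionCount() {
            for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                String name = gc.getName();
                if (name.contains("MarkSweep") || name.contains("Old")) {
                    return gc.getCollectionCount();
                }
            }
            return -1;  // no old-gen collector identified
        }
    }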
Tony
> Regards,
> Kirk
>
> On 2011-12-28, at 8:18 AM, Dmitry Samersoff wrote:
>
>> Kirk,
>>
>> On 2011-12-28 03:16, Kirk Pepperdine wrote:
>>> I'm not sure that this use case is a bug
>>> in GC/finalization but a misunderstanding of what
>>> finalization does and how it's supposed to function.
>>> If they know the socket is dead, why not just call close on it?
>> If I know that the socket is dead I don't need finalization at all, so
>> it falls outside the scope of this thread.
>>
>> But there are plenty of use cases where we can't determine the moment when we can explicitly close a socket. Each of them has its own workaround (e.g. a connection pool manager with refcounting or a separate checker thread), but I would like to see finalizers as the recommended solution for these cases.
>>
>> -Dmitry
>>
>>> Regards,
>>> Kirk
>>>
>>> On 2011-12-27, at 9:19 PM, Dmitry Samersoff wrote:
>>>
>>>> Jon,
>>>>
>>>> It's not a real (escalated) case. Just an accumulation of
>>>> what I've heard from customers during the last five years.
>>>>
>>>> On 2011-12-27 20:49, Jon Masamitsu wrote:
>>>>
>>>>> Before I try to answer your question, do you ever try to use
>>>>> System.gc() to get finalizers to run?
>>>> Nope. Because (as you wrote below) it's too disruptive, especially
>>>> because I have to call System.gc() twice to drain the finalization queue.
>>>>
>>>>> If you don't because
>>>>> System.gc() is too disruptive (generally being stop-the-world),
>>>>> are you using CMS and have you tried using System.gc()
>>>>> with -XX:+ExplicitGCInvokesConcurrent?
>>>> Nope also. In most cases System.gc() with CMS causes noticeable performance
>>>> degradation for the customer's app.
>>>>
>>>>
>>>> To clarify the customers' requirements:
>>>>
>>>> E.g. a huge trader like CBOE. They have an application that has to be very
>>>> responsive during the trading day.
>>>> So they tune the VM to have no GC during the trading day. Then they kill the app
>>>> and start everything again the next morning.
>>>>
>>>> The problem is that they rely on finalization to do some tasks. The most
>>>> important (but not the only) one is socket reclamation.
>>>>
>>>> -Dmitry
>>>>
>>>>
>>>>
>>>>
>>>>> Jon
>>>>>
>>>>> On 12/24/2011 4:33 AM, Dmitry Samersoff wrote:
>>>>>> Jon,
>>>>>>
>>>>>> One of the problems with finalization nowadays is that with a 1 TB heap, GC (and
>>>>>> thus finalization) never happens.
>>>>>>
>>>>>> Do you plan to address this problem?
>>>>>>
>>>>>> -Dmitry
>>>>>>
>>>>>>
>>>>>> On 2011-12-23 20:13, Jon Masamitsu wrote:
>>>>>>> David,
>>>>>>>
>>>>>>> From the VM side there are two issues that I think we should understand
>>>>>>> better before we work on an API. Those are described in the JEP as
>>>>>>> 1) more aggressive management of the finalization queue and 2)
>>>>>>> multiple
>>>>>>> finalizer threads. We should see how much of the problem can be
>>>>>>> alleviated by either or both or by other VM side changes that occur
>>>>>>> to us
>>>>>>> during the work and then think about what is left and what information
>>>>>>> we want from the user to address what's left. As Mark has said, I
>>>>>>> think there will be a library side JEP at some point for those
>>>>>>> discussions.
>>>>>>>
>>>>>>> Jon
>>>>>>>
>>>>>>> On 12/22/2011 6:15 PM, David Holmes wrote:
>>>>>>>> On 23/12/2011 9:05 AM, mark.reinhold at oracle.com wrote:
>>>>>>>>> Posted: http://openjdk.java.net/jeps/132
>>>>>>>> hotspot-dev seems the wrong mailing list to discuss this. It is
>>>>>>>> primarily a Java-level API. I would suggest using core-libs-dev.
>>>>>>>>
>>>>>>>> David
>>>>
>>>> --
>>>> Dmitry Samersoff
>>>> Java Hotspot development team, SPB04
>>>> * There will come soft rains ...
>>
>> --
>> Dmitry Samersoff
>> Java Hotspot development team, SPB04
>> * There will come soft rains ...