Growing GC Young Gen Times
Matt Fowles
matt.fowles at gmail.com
Fri May 14 10:07:59 PDT 2010
Ramki~
The machine has 4 CPUs, each of which has 4 cores. I will adjust the
survivor spaces as you suggest. Previously I had been running with
MTT 0, but changed it to 4 at the suggestion of others.
Running with the JDK7 version may take a bit of time, but I will
pursue that as well.
Matt
On Fri, May 14, 2010 at 12:58 PM, Y. Srinivas Ramakrishna
<y.s.ramakrishna at oracle.com> wrote:
> Hi Matt -- I am computing some metrics from your log file
> and would like to know how many CPUs you have for the logs below?
>
> Also, as you noted, almost anything that survives a scavenge
> lives for a while. To reduce the overhead of unnecessary
> back-and-forth copying in the survivor spaces, just use
> MaxTenuringThreshold=1 (This suggestion was also made by
> several others in the thread, and is corroborated by your
> PrintTenuringDistribution data). Since you have fairly large survivor
> spaces configured now (at least large enough to fit 4 age cohorts,
> which will be down to 1 age cohort if you use MTT=1), I'd
> suggest making your survivor spaces smaller, maybe down to
> about 64 MB from the current 420 MB each, and give the excess
> to your Eden space.
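>
> For concreteness, the change amounts to something like the following
> (treat the SurvivorRatio value as a back-of-the-envelope illustration:
> assuming the current ~420 MB survivors come from the default
> SurvivorRatio of 8 with -Xmn4g, a ratio of 62 gives 4096/(62+2) = 64 MB
> per survivor, with the rest going to Eden):
>
>  -Xmn4g
>  -XX:SurvivorRatio=62
>  -XX:MaxTenuringThreshold=1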
>
> Then use 6u21 when it comes out (or ask your Java support to
> send you a 6u21 for a beta test), or drop in a JVM from JDK 7 into
> your 6u20 installation, and run with that. If you still see
> rising pause times let me know or file a bug, and send us the
> log file and JVM options along with full platform information.
>
> I'll run some metrics from your log file once you send me the
> platform info requested above, and that may reveal a few more secrets.
>
> later.
> -- ramki
>
> On 05/12/10 15:19, Matt Fowles wrote:
>>
>> All~
>>
>> I have a large app that produces ~4g of garbage every 30 seconds and
>> am trying to reduce the size of GC pause outliers. About 99% of this data
>> is garbage, but almost anything that survives one collection survives
>> for an indeterminately long amount of time. We are currently using
>> the following VM and options:
>>
>> java version "1.6.0_20"
>> Java(TM) SE Runtime Environment (build 1.6.0_20-b02)
>> Java HotSpot(TM) 64-Bit Server VM (build 16.3-b01, mixed mode)
>>
>> -verbose:gc
>> -XX:+PrintGCTimeStamps
>> -XX:+PrintGCDetails
>> -XX:+PrintGCTaskTimeStamps
>> -XX:+PrintTenuringDistribution
>> -XX:+PrintCommandLineFlags
>> -XX:+PrintReferenceGC
>> -Xms32g -Xmx32g -Xmn4g
>> -XX:+UseParNewGC
>> -XX:ParallelGCThreads=4
>> -XX:+UseConcMarkSweepGC
>> -XX:ParallelCMSThreads=4
>> -XX:CMSInitiatingOccupancyFraction=60
>> -XX:+UseCMSInitiatingOccupancyOnly
>> -XX:+CMSParallelRemarkEnabled
>> -XX:MaxGCPauseMillis=50
>> -Xloggc:gc.log
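>>
>> (For what it's worth, with no SurvivorRatio set I believe the default
>> of 8 applies, which with -Xmn4g works out to roughly 3277 MB of Eden
>> plus two survivor spaces of ~410 MB each.)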
>>
>>
>> As you can see from the GC log, we never actually reach the point
>> where CMS kicks in (after app startup). But our young gen collections
>> seem to take increasingly long as time goes by.
>>
>> The steady state of the app is reached around 956.392 seconds into the
>> log with a collection that takes 0.106 seconds. Thereafter the survivor
>> space remains roughly constantly filled and the amount promoted to the
>> old gen also remains constant, but the collection times increase to
>> 2.855 seconds by the end of the 3.5 hour run.
>>
>> Has anyone seen this sort of behavior before? Are there more switches
>> that I should try running with?
>>
>> Obviously, I am working in parallel to profile the app and reduce the
>> garbage load. But if I still see this sort of problem, it is only a
>> question of how long the app must run before I see unacceptable
>> latency spikes.
>>
>> Matt
>>
>>
>
>