Question about Object Copy times
D vd Reddy
dvdeepankar.reddy at gmail.com
Mon Aug 24 19:05:48 UTC 2015
Hi,
I ran a couple of experiments over the weekend. First, I ran an experiment
with a lower MaxGCPauseMillis; it did help, but it lowered the throughput
to 92%.
I also found some machines with more memory and ran the experiment with
three different memory sizes: 92 GB, 144 GB and 192 GB. The 92 GB one had
lower throughput (96.7%), and the other two configurations gave the same
throughput (97.xx%).
One thing we observed is that the survivor sizes are similar in all cases
(around 2 GB), and the object copy times (around 200-250 ms per worker,
number of workers ~28) are also roughly similar in all the cases (except
for some outliers).
If the object copy phase is essentially just copying memory, do you know
what throughput to expect for this copy (per GB or MB), or what is typical
in practice?
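As a rough sanity check, here is how I am backing an approximate copy rate
out of the sample pause quoted further down in this thread. The bytes-copied
formula (survivor space after the pause plus whatever was promoted) is my
own approximation, and CopyRateEstimate is just a throwaway name:

public class CopyRateEstimate {
    public static void main(String[] args) {
        // Numbers taken from the sample young pause quoted below.
        double edenBeforeGb     = 80.3;          // Eden: 80.3G -> 0.0B
        double survivorBeforeGb = 2944.0 / 1024; // Survivors: 2944.0M -> 2176.0M
        double survivorAfterGb  = 2176.0 / 1024;
        double heapBeforeGb     = 126.4;         // Heap: 126.4G -> 45.4G
        double heapAfterGb      = 45.4;
        double objectCopySumSec = 5686.4 / 1000; // Object Copy Sum across 23 workers, in seconds

        // Whatever left the young gen but did not leave the heap must have been promoted.
        double freedFromYoungGb = edenBeforeGb + survivorBeforeGb - survivorAfterGb;
        double promotedGb       = freedFromYoungGb - (heapBeforeGb - heapAfterGb);
        // Everything that survived the pause (new survivor space + promoted) had to be copied.
        double copiedGb         = survivorAfterGb + promotedGb;

        System.out.printf("copied ~%.2f GB, promoted ~%.2f GB%n", copiedGb, promotedGb);
        System.out.printf("~%.2f GB per second of GC-worker Object Copy time%n",
                          copiedGb / objectCopySumSec);
    }
}

On those numbers that works out to roughly 2.2 GB copied and something like
0.4 GB/s per worker thread of Object Copy time, which is why I am asking what
a reasonable per-worker copy rate looks like in practice.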
Thanks in advance
On Thu, Aug 20, 2015 at 9:32 PM, D vd Reddy <dvdeepankar.reddy at gmail.com>
wrote:
> The machines are mostly 192 GB; at most we can go 10 or 20 GB higher. We
> have old-gen objects (which is what "live" means, right?) of roughly 40 GB;
> other than that, everything is per-request churn (short-lived objects).
> Does going for an even bigger heap help in this scenario?
>
> Thanks
>
>
>
> On Thu, Aug 20, 2015 at 9:04 PM, Tao Mao <yiyeguhu at gmail.com> wrote:
>
>> Hi,
>>
>> Is 140~150GB your memory limit? What's your ballpark live data set size?
>> Since you are looking to improve throughput, it may be helpful to
>> increase MaxHeapSize.
>>
>> Thanks.
>> Tao Mao
>>
>>
>> On Thu, Aug 20, 2015 at 8:21 PM, D vd Reddy <dvdeepankar.reddy at gmail.com>
>> wrote:
>>
>>> Hi,
>>>
>>> I have enabled -XX:+PrintTenuringDistribution and
>>> -XX:+PrintAdaptiveSizePolicy and moved to a newer build (1.8.0_45).
>>> No major improvements, but I am seeing new information.
>>>
>>> Also, is the Desired Survivor Size printed by the tenuring distribution
>>> only an approximate value? I am seeing that it is significantly lower
>>> than the final survivor size.
>>>
>>> The new logs are here:
>>> https://gist.github.com/dvdreddy/1cb9829e526d419d8452
>>>
>>> Thanks
>>>
>>> PS: I am starting a new experiment with a lower max pause target and
>>> will post the results later.
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> On Thu, Aug 20, 2015 at 6:24 PM, D vd Reddy <dvdeepankar.reddy at gmail.com> wrote:
>>>
>>>> Thanks, I will add -XX:+PrintAdaptiveSizePolicy and will try to move to
>>>> a newer JVM.
>>>>
>>>> But about the old gen filling up: there is a contradicting data point.
>>>> At each young GC the overall heap usage drops by roughly the eden size
>>>> minus the survivor size, so I feel the old gen is not growing by much.
>>>> Also, in the logs I saw that mixed GCs are not happening that
>>>> frequently: only 8 mixed collections compared to 380 young.
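>>>>
>>>> For example, in the young pause I pasted earlier (quoted below): eden
>>>> 80.3G plus the survivor change (2944M -> 2176M) means roughly 81.05G
>>>> left the young gen, while the heap went 126.4G -> 45.4G, a drop of
>>>> about 81.0G, so only around 0.05G was promoted to the old gen in that
>>>> pause (assuming my reading of the Eden/Survivors/Heap line is right).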
>>>>
>>>> Am I missing something here?
>>>>
>>>>
>>>> Thanks,
>>>>
>>>>
>>>> On Thu, Aug 20, 2015 at 6:14 PM, Yu Zhang <yu.zhang at oracle.com> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> Can you reduce MaxGCPauseMillis to ~300 or 200? 1000 may be too high,
>>>>> given that the eden size is ~80 GB and the survivor size is 2.5 GB.
>>>>> This might make the old gen fill up quicker and so trigger more mixed
>>>>> GCs, but it is worth a try. Also, if you can add
>>>>> -XX:+PrintAdaptiveSizePolicy, it might tell us more.
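>>>>>
>>>>> For example, along these lines (illustrative values, just the two
>>>>> changes suggested above):
>>>>>
>>>>>     -XX:MaxGCPauseMillis=200 -XX:+PrintAdaptiveSizePolicy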
>>>>>
>>>>> It seems you are on an older version of the JVM. Can you move to a
>>>>> later one, jdk8u40 or jdk8u60?
>>>>>
>>>>> Thanks,
>>>>> Jenny
>>>>>
>>>>> On 8/20/2015 5:27 PM, D vd Reddy wrote:
>>>>>
>>>>> Hi,
>>>>>
>>>>> We are running the G1 GC with a heap size of around 140-150 GB, and we
>>>>> are observing high object copy times during young GCs (> 80% of the
>>>>> total GC time).
>>>>> Is this expected, or is there anything we are doing wrong? I could not
>>>>> find any documentation on optimizing high object copy times, so any
>>>>> help would be appreciated.
>>>>>
>>>>>
>>>>> CommandLine flags: -XX:+AggressiveOpts
>>>>> -XX:InitialHeapSize=154618822656 -XX:+ManagementServer
>>>>> -XX:MaxGCPauseMillis=1000 -XX:MaxHeapSize=154618822656
>>>>> -XX:MaxMetaspaceSize=268435456
>>>>> -XX:MetaspaceSize=268435456 -XX:ObjectAlignmentInBytes=16
>>>>> -XX:+PrintGC -XX:+PrintGCDetails -XX:+PrintGCTimeStamps
>>>>> -XX:+UnlockExperimentalVMOptions
>>>>> -XX:-UseCompressedOops -XX:+UseG1GC
>>>>>
>>>>>
>>>>> Sample young GC snippet
>>>>>
>>>>>
>>>>>
>>>>> 4416.985: [GC pause (G1 Evacuation Pause) (young), 0.3180932 secs]
>>>>>    [Parallel Time: 291.1 ms, GC Workers: 23]
>>>>>       [GC Worker Start (ms): Min: 4416985.5, Avg: 4416985.9, Max: 4416986.2, Diff: 0.7]
>>>>>       [Ext Root Scanning (ms): Min: 1.2, Avg: 1.7, Max: 4.4, Diff: 3.2, Sum: 38.5]
>>>>>       [Update RS (ms): Min: 36.3, Avg: 39.4, Max: 40.0, Diff: 3.8, Sum: 906.0]
>>>>>          [Processed Buffers: Min: 47, Avg: 80.9, Max: 124, Diff: 77, Sum: 1861]
>>>>>       [Scan RS (ms): Min: 0.5, Avg: 1.0, Max: 1.1, Diff: 0.6, Sum: 22.5]
>>>>>       [Code Root Scanning (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0, Sum: 0.6]
>>>>>       [Object Copy (ms): Min: 247.1, Avg: 247.2, Max: 247.5, Diff: 0.4, Sum: 5686.4]
>>>>>       [Termination (ms): Min: 0.0, Avg: 0.3, Max: 0.4, Diff: 0.4, Sum: 7.5]
>>>>>       [GC Worker Other (ms): Min: 0.0, Avg: 0.3, Max: 0.7, Diff: 0.7, Sum: 7.6]
>>>>>       [GC Worker Total (ms): Min: 289.4, Avg: 290.0, Max: 290.4, Diff: 1.0, Sum: 6669.3]
>>>>>       [GC Worker End (ms): Min: 4417275.5, Avg: 4417275.8, Max: 4417276.2, Diff: 0.7]
>>>>>    [Code Root Fixup: 0.4 ms]
>>>>>    [Code Root Migration: 0.5 ms]
>>>>>    [Clear CT: 9.3 ms]
>>>>>    [Other: 16.8 ms]
>>>>>       [Choose CSet: 0.0 ms]
>>>>>       [Ref Proc: 3.7 ms]
>>>>>       [Ref Enq: 0.1 ms]
>>>>>       [Free CSet: 6.0 ms]
>>>>>    [Eden: 80.3G(80.3G)->0.0B(81.7G) Survivors: 2944.0M->2176.0M Heap: 126.4G(144.0G)->45.4G(144.0G)]
>>>>>  [Times: user=6.84 sys=0.01, real=0.32 secs]
>>>>>
>>>>> The full GC log for a period of the run:
>>>>> https://gist.github.com/dvdreddy/5ecf9a58a3f309e8bb60
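>>>>>
>>>>> In case it is useful, this is roughly how I pull the per-pause Object
>>>>> Copy totals out of a log like that one. It is only a sketch: the regex
>>>>> assumes the exact single-line format shown in the pause above, and
>>>>> "gc.log" is just a placeholder path.
>>>>>
>>>>> import java.io.IOException;
>>>>> import java.nio.file.Files;
>>>>> import java.nio.file.Paths;
>>>>> import java.util.regex.Matcher;
>>>>> import java.util.regex.Pattern;
>>>>>
>>>>> public class ObjectCopyTimes {
>>>>>     // Matches lines like: [Object Copy (ms): Min: 247.1, ..., Sum: 5686.4]
>>>>>     private static final Pattern OBJECT_COPY =
>>>>>         Pattern.compile("\\[Object Copy \\(ms\\):.*Sum: ([0-9.]+)\\]");
>>>>>
>>>>>     public static void main(String[] args) throws IOException {
>>>>>         double totalMs = 0.0;
>>>>>         int pauses = 0;
>>>>>         for (String line : Files.readAllLines(Paths.get("gc.log"))) {
>>>>>             Matcher m = OBJECT_COPY.matcher(line);
>>>>>             if (m.find()) {
>>>>>                 totalMs += Double.parseDouble(m.group(1));
>>>>>                 pauses++;
>>>>>             }
>>>>>         }
>>>>>         double avg = pauses == 0 ? 0.0 : totalMs / pauses;
>>>>>         System.out.printf("%d young pauses, avg Object Copy Sum = %.1f ms%n",
>>>>>                           pauses, avg);
>>>>>     }
>>>>> }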
>>>>>
>>>>>
>>>>> Thanks in advance
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>
>>
>