G1 GC tuning
Jon Masamitsu
jon.masamitsu at oracle.com
Tue Apr 7 22:30:51 UTC 2015
Rohit,
Is there any change in the behavior of the applications
around the 45k-second mark, where the full GCs
sometimes happen?
Assuming nothing special is happening there,
when you ran with CMS you were using half the heap for the young
gen. Did you try G1 with that sized young gen? If you have not tried
that and have a test setup where you can try it, I'd be interested in
the results.
I understand that specifying a size for young gen limits how G1
controls pauses. G1 is perhaps making some wrong decisions so
I'd like to see how it behaves in a more constrained environment.
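For context, a minimal sketch of the experiment being suggested: running G1 with the young gen pinned to half the heap, as was done under CMS. Note that with G1, an explicit `-Xmn` (or `NewSize`/`MaxNewSize`) overrides the adaptive young-gen sizing G1 normally uses to meet `MaxGCPauseMillis`. The heap values and `MyApp` are illustrative, not taken from the thread:

```shell
# Hypothetical flags for the suggested test: G1 with the young gen
# fixed at half the heap (15 GB heap is one of the sizes mentioned).
# Setting -Xmn disables G1's pause-time-driven young-gen resizing.
java -XX:+UseG1GC \
     -Xms15g -Xmx15g \
     -Xmn7g \
     -verbose:gc -XX:+PrintGCDetails \
     -Xloggc:gc-fixed-young.log \
     MyApp
```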
Thanks.
Jon
On 04/07/2015 10:10 AM, Chaubey, Rohit wrote:
>
> Hello
>
> We are trying to fine tune our high impact application and would
> appreciate a little help in doing so using G1GC. Following are the
> application requirements
>
> Application Requirements
>
> • Expected load: 40 concurrent users at 1 txn/user per second.
>
> • 99th percentile at 250 ms; desired that all response times be less
> than 1 second.
>
> • 15 GB RAM per JVM for one set (set 1) and 12 GB RAM for another
> set (set 2). The two sets of JVMs contain different kinds of data, and
> the application interacts with both to fetch the desired output. Both
> sets have the same GC settings, but the kind of data in set 2 is more
> operation-intensive and thus gets more activity than the set 1 JVMs.
>
> • 2x load capacity; hopefully we can get to 2x.
>
> We replaced CMS with G1GC because:
>
> 1) We have large heaps, ranging from 8 GB to 25 GB. The usual problem
> comes in with the set 2 JVMs; set 1 generally does not crash.
>
> 2) CMS was crashing the JVM frequently: a full GC cycle would crash the
> JVM. It used to happen during the nightly batch run for the application.
>
> 3) We had Xmn at up to 50% of the heap.
>
> Once we switched from CMS to G1GC, the set 2 JVMs stopped crashing
> frequently; however, we did have 2 incidents where a crash did
> happen. The G1GC parameters being used are as follows:
>
> JAVA_ARGS="$JAVA_ARGS --J=-Xss${STACK_SIZE} \
> --J=-XX:+UseG1GC \
> --J=-XX:MaxGCPauseMillis=200 \
> --J=-XX:ParallelGCThreads=20 \
> --J=-XX:InitiatingHeapOccupancyPercent=60 \
> --J=-XX:SurvivorRatio=2 \
> --J=-XX:ConcGCThreads=5 \
> --J=-Xmx$SERVER_HEAP \
> --J=-Xms$SERVER_HEAP \
> --J=-DDistributionManager.DISCONNECT_WAIT=$DISCONNECT_WAIT_TIME \
> --J=-XX:+HeapDumpOnOutOfMemoryError \
> --J=-XX:HeapDumpPath=${GF_LOG}/jvmdumps \
> --J=-DgemfireSecurityPropertyFile=$DIR/$cluster_name/runtime/servers/$SERVER_NAME/gfsecurity.properties \
> --J=-verbose:gc \
> --J=-Xloggc:${GF_LOG}/logs/$SERVER_NAME/gc.log \
> --J=-XX:+UseGCLogFileRotation --J=-XX:NumberOfGCLogFiles=10 \
> --J=-XX:GCLogFileSize=1m \
> --J=-XX:+PrintGCDateStamps --J=-XX:+PrintGCDetails \
> --J=-XX:+PrintTenuringDistribution --J=-XX:+PrintAdaptiveSizePolicy"
>
> We tried starting from scratch and worked our way up to the above
> config with a series of load tests and other activities. Following are
> the observations that we have seen so far:
>
> • When the SurvivorRatio setting was removed, we observed degraded
> performance, so we kept it at 2 instead of the default 8.
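For context, a rough sketch of what SurvivorRatio=2 does to the young-gen layout, assuming HotSpot's usual sizing where each survivor space gets about young_gen / (SurvivorRatio + 2); the 4096 MB young gen below is illustrative, not taken from the logs:

```shell
# Each survivor space is roughly young_gen / (SurvivorRatio + 2),
# so lowering the ratio gives objects more room to age in the
# survivor spaces before being promoted to the old gen.
# (young_mb=4096 is an illustrative value, not from the logs.)
young_mb=4096
for ratio in 8 2; do
  awk -v y="$young_mb" -v r="$ratio" \
    'BEGIN { printf "SurvivorRatio=%d -> each survivor space ~ %d MB\n", r, y / (r + 2) }'
done
# SurvivorRatio=8 -> each survivor space ~ 409 MB
# SurvivorRatio=2 -> each survivor space ~ 1024 MB
```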
>
> • There are a handful of humongous allocations (maybe 3-4 over a day)
> requested during the batch run. The number of those is not that high,
> as you can see from the attached logs. Should I be changing the region
> size for them?
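On the region-size question: in G1 an allocation is treated as humongous when it is at least half a region, so if those few objects are only slightly over that threshold, raising the region size makes them ordinary allocations. A hedged sketch, using the same JAVA_ARGS style as the config above (the 16m value is illustrative; the right value depends on the actual object sizes in the logs):

```shell
# Hypothetical example: force 16 MB regions so that objects under
# 8 MB (half a region) are no longer classified as humongous.
# G1HeapRegionSize must be a power of 2 between 1m and 32m.
JAVA_ARGS="$JAVA_ARGS --J=-XX:G1HeapRegionSize=16m"
```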
>
> Please let me know what can be changed and tested so that we do not
> encounter the JVM crash during batch times while also maintaining the
> current SLA. I have attached all the GC log files for the cluster.
>
> Thanks and Regards,
>
> Rohit Chaubey
>
> Email: rohit.chaubey at broadridge.com <mailto:rohit.chaubey at broadridge.com>
>
> Work: 201-714-3379, BB: 201-618-9230
>
>
>
> _______________________________________________
> hotspot-gc-use mailing list
> hotspot-gc-use at openjdk.java.net
> http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use