Fw: Shenandoah: How small is small?
org.openjdk at io7m.com
Thu Nov 17 15:25:34 UTC 2016
[Reposting to shenandoah-dev@ on the advice of Martijn Verburg]
Hello.
I've been watching the development of Shenandoah since it began. As a
developer of software with mildly soft-realtime requirements (games,
primarily), I'm always eager to see advances that can reduce GC pause
times. Although right now I don't have a GC problem (typically, my
minor GC pauses are well below 16ms and therefore imperceptible
given the usual 30Hz/60Hz game loop), I still feel that I
have to be more conscious of allocation rates than feels natural in
order to avoid producing too much garbage. I sometimes find myself
avoiding better abstractions and immutable objects simply because I
want to avoid allocations. Escape analysis helps, but sometimes those
objects really do need to hang around. Value types will also help!
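To illustrate, here's a minimal sketch of the kind of small immutable object I mean (the names Vec2 and EscapeDemo are just for illustration). When the temporaries never leave the method, the JIT's escape analysis can often scalar-replace them; when they outlive the frame, the allocations are real and I start counting them.

```java
// A small immutable value, the sort of per-frame object a game
// allocates constantly (names here are hypothetical).
final class Vec2 {
    final double x, y;
    Vec2(double x, double y) { this.x = x; this.y = y; }
    Vec2 add(Vec2 o) { return new Vec2(x + o.x, y + o.y); }
}

public class EscapeDemo {
    // The temporary Vec2 created each iteration never escapes this
    // method, so after inlining the JIT can often eliminate the
    // allocation entirely. If the vectors were stored in a
    // longer-lived structure instead, every one would hit the heap.
    static double sumComponents(int n) {
        Vec2 acc = new Vec2(0, 0);
        for (int i = 0; i < n; i++) {
            acc = acc.add(new Vec2(1, 2));
        }
        return acc.x + acc.y;
    }

    public static void main(String[] args) {
        System.out.println(sumComponents(1000)); // prints 3000.0
    }
}
```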
I've read in JEP 189 that Shenandoah is intended to reduce pause
times on 100GB+ heaps, and a rather outdated blog post [0]
suggested that a 512MB heap is simply too small to run at all. The
software I write assumes a minimum requirement of 1GB of
memory; this includes both the JVM heap and any allocated
native memory.
Right now I'm still using ParNew, although I'll likely move to G1 if it
becomes the default in JDK9. Is Shenandoah likely to be an improvement
for my use case?
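For reference, these are the collector-selection flags I have in mind (the Shenandoah flag assumes a JDK build that actually includes Shenandoah, which stock JDK 8/9 builds may not):

```shell
# Collector selection on the HotSpot command line.
java -XX:+UseParNewGC ...        # current setup: ParNew young-generation collector
java -XX:+UseG1GC ...            # G1, the likely JDK 9 default
java -XX:+UseShenandoahGC ...    # Shenandoah, in builds that ship it
```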
Regards,
Mark
[0] https://www.jclarity.com/2014/03/12/shenandoah-experiment-1-will-it-run-pcgen/