From carlo.fernando at baml.com Tue Jul 1 15:09:12 2014
From: carlo.fernando at baml.com (Fernando, Carlo)
Date: Tue, 01 Jul 2014 15:09:12 +0000
Subject: PrintGCDateStamps
Message-ID: <204609DC9565564AA71E9B4312EA323241466F87@smtp_mail.bankofamerica.com>

Hi everyone.

Does the timestamp in the GC log indicate the beginning of the ParNew GC cycle or the end?

Also, would high context switching on the box contribute to the large real time value?

2014-06-30T10:59:21.470-0400: 112114.215: [GC 112114.744: [ParNew: 147812K->129K(148608K), 0.0024290 secs] 165105K->17476K(261248K), 0.0025220 secs] [Times: user=0.01 sys=0.01, real=0.53 secs]

Thanks

-carlo

----------------------------------------------------------------------
This message, and any attachments, is for the intended recipient(s) only, may contain information that is privileged, confidential and/or proprietary and subject to important terms and conditions available at http://www.bankofamerica.com/emaildisclaimer. If you are not the intended recipient, please delete this message.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From kirtiteja at gmail.com Tue Jul 1 20:42:16 2014
From: kirtiteja at gmail.com (Kirti Teja Rao)
Date: Tue, 1 Jul 2014 13:42:16 -0700
Subject: PrintGCDateStamps
In-Reply-To: <204609DC9565564AA71E9B4312EA323241466F87@smtp_mail.bankofamerica.com>
References: <204609DC9565564AA71E9B4312EA323241466F87@smtp_mail.bankofamerica.com>
Message-ID:

Hi Fernando,

user=0.01 sys=0.01, real=0.53 secs -> User time and sys time are both very low, but the real time is high. To me it looks like the time is spent in logging (the actual I/O); note that the log I/O is not asynchronous and is included in the stop-the-world pause.

Thanks,
Srinivas

On Tue, Jul 1, 2014 at 8:09 AM, Fernando, Carlo wrote:
> Hi everyone.
>
> Does the timestamp in the GC log indicate the beginning of the ParNew GC
> cycle or the end?
>
> Also, would high context switching on the box contribute to the large real
> time value?
>
> *2014-06-30T10:59:21.470-0400*: 112114.215: [GC 112114.744: [ParNew:
> 147812K->129K(148608K), 0.0024290 secs] 165105K->17476K(261248K), 0.0025220
> secs] [Times: user=0.01 sys=0.01, real=0.53 secs]
>
> Thanks
>
> -carlo
> ------------------------------
> This message, and any attachments, is for the intended recipient(s) only,
> may contain information that is privileged, confidential and/or proprietary
> and subject to important terms and conditions available at
> http://www.bankofamerica.com/emaildisclaimer. If you are not the intended
> recipient, please delete this message.
>
> _______________________________________________
> hotspot-gc-use mailing list
> hotspot-gc-use at openjdk.java.net
> http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From fancyerii at gmail.com Mon Jul 7 03:33:58 2014
From: fancyerii at gmail.com (Li Li)
Date: Mon, 7 Jul 2014 11:33:58 +0800
Subject: which objects use heap space?
Message-ID:

I have a class without any space allocation of its own, but by using jmap -heap I found that about 400K of space is used. Is it allocated by the JVM?
import java.io.IOException;

public class TestGc2 {
    public static void main(String[] args) throws Exception {
        waitEnter("waiting gc...");
        waitEnter("before exit");
    }

    public static void test() throws Exception {
    }

    private static void waitEnter(String s) throws IOException {
        System.out.println(s);
        System.in.read();
        System.gc();
    }
}

From thomas.schatzl at oracle.com Mon Jul 7 13:18:06 2014
From: thomas.schatzl at oracle.com (Thomas Schatzl)
Date: Mon, 07 Jul 2014 15:18:06 +0200
Subject: which objects use heap space?
In-Reply-To:
References:
Message-ID: <1404739086.2735.56.camel@cirrus>

Hi Li,

On Mon, 2014-07-07 at 11:33 +0800, Li Li wrote:
> I have a class without any space allocation of its own, but by using jmap -heap
> I found that about 400K of space is used. Is it allocated by the JVM?
>
> public class TestGc2 {
> public static void main(String[] args) throws Exception {
> waitEnter("waiting gc...");
> waitEnter("before exit");
> }
> public static void test() throws Exception{
>
> }
> private static void waitEnter(String s) throws IOException{
> System.out.println(s);
> System.in.read();
> System.gc();
> }
> }

as you thought, VM initialization creates a few objects for various
purposes. These include instances of some system objects that always
need to be available, some metadata, and objects referenced by static
fields. Also, the instance of your application class needs to be
allocated.

I cannot say whether 400K is about what you can expect.

Thomas

From jon.masamitsu at oracle.com Mon Jul 7 13:45:05 2014
From: jon.masamitsu at oracle.com (Jon Masamitsu)
Date: Mon, 07 Jul 2014 06:45:05 -0700
Subject: PrintGCDateStamps
In-Reply-To: <204609DC9565564AA71E9B4312EA323241466F87@smtp_mail.bankofamerica.com>
References: <204609DC9565564AA71E9B4312EA323241466F87@smtp_mail.bankofamerica.com>
Message-ID: <53BAA461.9080405@oracle.com>

On 7/1/2014 8:09 AM, Fernando, Carlo wrote:
>
> Hi everyone.
>
> Does the timestamp in the GC log indicate the beginning of the ParNew GC
> cycle or the end?
>

Beginning.

Jon

>
> Also, would high context switching on the box contribute to the large
> real time value?
>
> *2014-06-30T10:59:21.470-0400*: 112114.215: [GC 112114.744: [ParNew:
> 147812K->129K(148608K), 0.0024290 secs] 165105K->17476K(261248K),
> 0.0025220 secs] [Times: user=0.01 sys=0.01, real=0.53 secs]
>
> Thanks
>
> -carlo
>
> ------------------------------------------------------------------------
> This message, and any attachments, is for the intended recipient(s)
> only, may contain information that is privileged, confidential and/or
> proprietary and subject to important terms and conditions available at
> http://www.bankofamerica.com/emaildisclaimer. If you are not the
> intended recipient, please delete this message.
>
>
> _______________________________________________
> hotspot-gc-use mailing list
> hotspot-gc-use at openjdk.java.net
> http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From carlo.fernando at baml.com Mon Jul 7 13:52:08 2014
From: carlo.fernando at baml.com (Fernando, Carlo)
Date: Mon, 07 Jul 2014 13:52:08 +0000
Subject: PrintGCDateStamps
In-Reply-To: <53BAA461.9080405@oracle.com>
References: <204609DC9565564AA71E9B4312EA323241466F87@smtp_mail.bankofamerica.com>
 <53BAA461.9080405@oracle.com>
Message-ID: <204609DC9565564AA71E9B4312EA32324146A5BE@smtp_mail.bankofamerica.com>

Thanks.
From: hotspot-gc-use [mailto:hotspot-gc-use-bounces at openjdk.java.net] On Behalf Of Jon Masamitsu
Sent: Monday, July 07, 2014 8:45 AM
To: hotspot-gc-use at openjdk.java.net
Subject: Re: PrintGCDateStamps

On 7/1/2014 8:09 AM, Fernando, Carlo wrote:
Hi everyone.
Does the timestamp in the GC log indicate the beginning of the ParNew GC cycle or the end?

Beginning.

Jon

Also, would high context switching on the box contribute to the large real time value?
2014-06-30T10:59:21.470-0400: 112114.215: [GC 112114.744: [ParNew: 147812K->129K(148608K), 0.0024290 secs] 165105K->17476K(261248K), 0.0025220 secs] [Times: user=0.01 sys=0.01, real=0.53 secs]
Thanks
-carlo

________________________________
This message, and any attachments, is for the intended recipient(s) only, may contain information that is privileged, confidential and/or proprietary and subject to important terms and conditions available at http://www.bankofamerica.com/emaildisclaimer. If you are not the intended recipient, please delete this message.
_______________________________________________
hotspot-gc-use mailing list
hotspot-gc-use at openjdk.java.net
http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use

----------------------------------------------------------------------
This message, and any attachments, is for the intended recipient(s) only, may contain information that is privileged, confidential and/or proprietary and subject to important terms and conditions available at http://www.bankofamerica.com/emaildisclaimer. If you are not the intended recipient, please delete this message.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ecki at zusammenkunft.net Mon Jul 7 19:19:04 2014
From: ecki at zusammenkunft.net (Bernd Eckenfels)
Date: Mon, 7 Jul 2014 21:19:04 +0200
Subject: which objects use heap space?
In-Reply-To: <1404739086.2735.56.camel@cirrus>
References: <1404739086.2735.56.camel@cirrus>
Message-ID: <20140707211904.000074e2.ecki@zusammenkunft.net>

Hello Li,

in addition to what Thomas said, I think it is a good idea to turn on verbose class loading. You cannot see the call sites there, but you can see which classes (subsystems) get activated before your main class is even touched.
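For example, as a rough sketch (assuming a standard JDK install with javac, jps and jmap on the PATH; <pid> is a placeholder for whatever jps reports for the TestGc2 process), you can combine the flag with a live class histogram to see how many instances of those early classes are still reachable after the System.gc() calls:

javac TestGc2.java
java -verbose:class TestGc2
jps -l
jmap -histo:live <pid>

jmap -histo:live triggers a full GC before it prints, so the per-class counts and sizes it reports should add up to roughly the same ~400K baseline that jmap -heap showed.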
For each loaded class you can expect one or multiple instances beeing created: C:\>java -verbose:class -version [Opened C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.Object from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.io.Serializable from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.Comparable from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.CharSequence from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.String from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.reflect.GenericDeclaration from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.reflect.Type from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.reflect.AnnotatedElement from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.Class from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.Cloneable from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.ClassLoader from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.System from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.Throwable from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.Error from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.ThreadDeath from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.Exception from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.RuntimeException from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.security.ProtectionDomain from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.security.AccessControlContext from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.ReflectiveOperationException from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.ClassNotFoundException from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.LinkageError from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.NoClassDefFoundError from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.ClassCastException from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.ArrayStoreException from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.VirtualMachineError from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.OutOfMemoryError from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.StackOverflowError from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.IllegalMonitorStateException from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.ref.Reference from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.ref.SoftReference from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.ref.WeakReference from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.ref.FinalReference from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.ref.PhantomReference from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.ref.Finalizer from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.Runnable from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.Thread from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.Thread$UncaughtExceptionHandler from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.ThreadGroup from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.util.Map from 
C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.util.Dictionary from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.util.Hashtable from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.util.Properties from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.reflect.AccessibleObject from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.reflect.Member from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.reflect.Field from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.reflect.Method from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.reflect.Constructor from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded sun.reflect.MagicAccessorImpl from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded sun.reflect.MethodAccessor from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded sun.reflect.MethodAccessorImpl from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded sun.reflect.ConstructorAccessor from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded sun.reflect.ConstructorAccessorImpl from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded sun.reflect.DelegatingClassLoader from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded sun.reflect.ConstantPool from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded sun.reflect.FieldAccessor from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded sun.reflect.FieldAccessorImpl from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded sun.reflect.UnsafeFieldAccessorImpl from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded sun.reflect.UnsafeStaticFieldAccessorImpl from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.invoke.MethodHandle from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.invoke.MemberName from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.invoke.MethodHandleNatives from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.invoke.LambdaForm from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.invoke.DirectMethodHandle from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.invoke.MethodType from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.BootstrapMethodError from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.invoke.CallSite from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.invoke.ConstantCallSite from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.invoke.MutableCallSite from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.invoke.VolatileCallSite from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.Appendable from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.AbstractStringBuilder from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.StringBuffer from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.StringBuilder from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.StackTraceElement from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.nio.Buffer from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded sun.misc.PostVMInitHook from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.Boolean from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.Character from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.Number from C:\Program 
Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.Float from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.Double from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.Byte from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.Short from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.Integer from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.Long from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.NullPointerException from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.ArithmeticException from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.io.ObjectStreamField from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.util.Comparator from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.String$CaseInsensitiveComparator from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.security.Guard from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.security.Permission from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.security.BasicPermission from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.RuntimePermission from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.security.AccessController from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.reflect.ReflectPermission from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.security.PrivilegedAction from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded sun.reflect.ReflectionFactory$GetReflectionFactoryAction from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.security.cert.Certificate from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.Iterable from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.util.Collection from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.util.List from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.util.RandomAccess from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.util.AbstractCollection from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.util.AbstractList from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.util.Vector from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.util.Stack from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded sun.reflect.ReflectionFactory from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.ref.Reference$Lock from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.ref.Reference$ReferenceHandler from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.ref.ReferenceQueue from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.ref.ReferenceQueue$Null from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.ref.ReferenceQueue$Lock from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.ref.Finalizer$FinalizerThread from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded sun.misc.VM from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.util.Map$Entry from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.util.Hashtable$Entry from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.Math from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.util.Set from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.util.AbstractSet from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded 
java.util.Hashtable$EntrySet from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.util.Collections from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.util.Collections$EmptySet from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.util.Collections$EmptyList from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.util.AbstractMap from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.util.Collections$EmptyMap from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.util.Collections$SynchronizedCollection from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.util.Collections$SynchronizedSet from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.util.Enumeration from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.util.Iterator from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.util.Hashtable$Enumerator from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.Runtime from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded sun.misc.Version from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.AutoCloseable from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.io.Closeable from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.io.InputStream from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.io.FileInputStream from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.ThreadLocal from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.util.concurrent.atomic.AtomicInteger from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded sun.misc.Unsafe from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.IncompatibleClassChangeError from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.NoSuchMethodError from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.util.ArrayList from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.util.Collections$UnmodifiableCollection from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.util.Collections$UnmodifiableList from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.util.Collections$UnmodifiableRandomAccessList from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded sun.reflect.Reflection from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.util.HashMap from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.util.HashMap$Entry from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.util.HashMap$EntrySet from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.util.HashMap$HashIterator from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.util.HashMap$EntryIterator from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.Class$3 from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.reflect.Modifier from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded sun.reflect.LangReflectAccess from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.reflect.ReflectAccess from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.io.FileDescriptor from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded sun.misc.JavaIOFileDescriptorAccess from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.io.FileDescriptor$1 from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded sun.misc.SharedSecrets from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.io.Flushable from C:\Program 
Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.io.OutputStream from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.io.FileOutputStream from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.io.FilterInputStream from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.io.BufferedInputStream from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.util.concurrent.atomic.AtomicReferenceFieldUpdater from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.util.concurrent.atomic.AtomicReferenceFieldUpdater$AtomicReferenceFieldUpdaterImpl from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded sun.reflect.misc.ReflectUtil from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.util.Arrays from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.reflect.Proxy from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.reflect.InvocationHandler from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.reflect.WeakCache from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.reflect.WeakCache$BiFunction from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.reflect.Proxy$KeyFactory from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.reflect.Proxy$ProxyClassFactory from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.util.concurrent.atomic.AtomicLong from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.util.concurrent.ConcurrentMap from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.util.concurrent.ConcurrentHashMap from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.util.concurrent.ConcurrentHashMap$HashEntry from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.util.concurrent.locks.Lock from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.util.concurrent.locks.ReentrantLock from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.util.concurrent.ConcurrentHashMap$Segment from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.util.concurrent.locks.AbstractOwnableSynchronizer from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.util.concurrent.locks.AbstractQueuedSynchronizer from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.util.concurrent.locks.ReentrantLock$Sync from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.util.concurrent.locks.ReentrantLock$NonfairSync from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.util.concurrent.locks.AbstractQueuedSynchronizer$Node from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.util.Objects from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.io.FilterOutputStream from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.io.PrintStream from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.io.BufferedOutputStream from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.io.Writer from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.io.OutputStreamWriter from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded sun.nio.cs.StreamEncoder from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.nio.charset.Charset from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.nio.charset.spi.CharsetProvider from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded sun.nio.cs.FastCharsetProvider from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded sun.nio.cs.StandardCharsets from C:\Program 
Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded sun.util.PreHashedMap from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded sun.nio.cs.StandardCharsets$Aliases from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded sun.nio.cs.StandardCharsets$Classes from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded sun.nio.cs.StandardCharsets$Cache from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded sun.security.action.GetPropertyAction from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded sun.nio.cs.HistoricallyNamedCharset from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded sun.nio.cs.MS1252 from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded sun.nio.cs.SingleByte from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.Class$1 from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded sun.reflect.ReflectionFactory$1 from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded sun.reflect.NativeConstructorAccessorImpl from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded sun.reflect.DelegatingConstructorAccessorImpl from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded sun.nio.cs.ArrayEncoder from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.nio.charset.CharsetEncoder from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded sun.nio.cs.SingleByte$Encoder from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.nio.charset.CodingErrorAction from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.nio.ByteBuffer from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.nio.HeapByteBuffer from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.nio.Bits from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.nio.ByteOrder from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded sun.misc.JavaNioAccess from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.nio.Bits$1 from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.io.BufferedWriter from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.io.File from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.io.FileSystem from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.io.Win32FileSystem from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.io.WinNTFileSystem from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.io.ExpiringCache from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.util.LinkedHashMap from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.io.ExpiringCache$1 from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.util.LinkedHashMap$Entry from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.ClassLoader$3 from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.Enum from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.io.File$PathStatus from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.io.ExpiringCache$Entry from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.ClassLoader$NativeLibrary from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.Terminator from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded sun.misc.SignalHandler from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.Terminator$1 from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded sun.misc.Signal from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded sun.misc.NativeSignalHandler from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] 
[Loaded sun.misc.OSEnvironment from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded sun.io.Win32ErrorMode from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded sun.misc.JavaLangAccess from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.System$2 from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.IllegalArgumentException from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.Compiler from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.Compiler$1 from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded sun.misc.Launcher from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.net.URLStreamHandlerFactory from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded sun.misc.Launcher$Factory from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.security.SecureClassLoader from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.net.URLClassLoader from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded sun.misc.Launcher$ExtClassLoader from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded sun.security.util.Debug from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.ClassLoader$ParallelLoaders from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.util.WeakHashMap from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded sun.misc.Hashing from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.util.Random from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.util.concurrent.ThreadLocalRandom from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.util.concurrent.ThreadLocalRandom$1 from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.ThreadLocal$ThreadLocalMap from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.ThreadLocal$ThreadLocalMap$Entry from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.util.WeakHashMap$Entry from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.util.WeakHashMap$Holder from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.util.Collections$SetFromMap from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.util.WeakHashMap$KeySet from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded sun.misc.JavaNetAccess from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.net.URLClassLoader$7 from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.util.StringTokenizer from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.security.PrivilegedExceptionAction from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded sun.misc.Launcher$ExtClassLoader$1 from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded sun.misc.MetaIndex from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.Readable from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.io.Reader from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.io.BufferedReader from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.io.InputStreamReader from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.io.FileReader from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded sun.nio.cs.StreamDecoder from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded sun.nio.cs.ArrayDecoder from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.nio.charset.CharsetDecoder from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded sun.nio.cs.SingleByte$Decoder from C:\Program 
Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.nio.CharBuffer from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.nio.HeapCharBuffer from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.nio.charset.CoderResult from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.nio.charset.CoderResult$Cache from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.nio.charset.CoderResult$1 from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.nio.charset.CoderResult$2 from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded sun.misc.IoTrace from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.reflect.Array from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.util.HashMap$Holder from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.util.Locale from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded sun.util.locale.LocaleObjectCache from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.util.Locale$Cache from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.util.concurrent.ConcurrentHashMap$Holder from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded sun.util.locale.BaseLocale from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded sun.util.locale.BaseLocale$Cache from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded sun.util.locale.BaseLocale$Key from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded sun.util.locale.LocaleObjectCache$CacheEntry from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.util.Locale$LocaleKey from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded sun.util.locale.LocaleUtils from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.CharacterData from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.CharacterDataLatin1 from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded sun.net.www.ParseUtil from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.util.BitSet from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.net.URL from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.util.Hashtable$Holder from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.net.URL$1 from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.net.Parts from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.net.URLStreamHandler from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded sun.net.www.protocol.file.Handler from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded sun.misc.JavaSecurityAccess from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.security.ProtectionDomain$1 from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded sun.misc.JavaSecurityProtectionDomainAccess from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.security.ProtectionDomain$3 from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.security.CodeSource from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.security.ProtectionDomain$Key from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.security.Principal from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.util.HashSet from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded sun.misc.URLClassPath from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded sun.net.www.protocol.jar.Handler from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded sun.misc.Launcher$AppClassLoader from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded 
sun.misc.Launcher$AppClassLoader$1 from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.SystemClassLoaderAction from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] java version "1.7.0_60" Java(TM) SE Runtime Environment (build 1.7.0_60-b19) Java HotSpot(TM) 64-Bit Server VM (build 24.60-b09, mixed mode) [Loaded java.lang.Shutdown from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] [Loaded java.lang.Shutdown$Lock from C:\Program Files\Java\jdk1.7.0\jre\lib\rt.jar] Am Mon, 07 Jul 2014 15:18:06 +0200 schrieb Thomas Schatzl : > Hi Li, > > On Mon, 2014-07-07 at 11:33 +0800, Li Li wrote: > > I have a class without any space allocation, but by using jmap > > -heap, I found about 400K space is used. is it allocated by jvm? > > > > public class TestGc2 { > > public static void main(String[] args) throws Exception { > > waitEnter("waiting gc..."); > > waitEnter("before exit"); > > } > > public static void test() throws Exception{ > > > > } > > private static void waitEnter(String s) throws IOException{ > > System.out.println(s); > > System.in.read(); > > System.gc(); > > } > > } > > as you thought, VM initialization creates a few objects for various > purposes. These includes instances of some system objects that always > need to be available, some metadata, or objects referenced by static > fields. Also the instance of your application class needs to be > allocated. > > I cannot say whether 400k is about what you can expect. > > Thomas > > > _______________________________________________ > hotspot-gc-use mailing list > hotspot-gc-use at openjdk.java.net > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use From yiyeguhu at gmail.com Mon Jul 7 21:41:08 2014 From: yiyeguhu at gmail.com (Tao Mao) Date: Mon, 7 Jul 2014 14:41:08 -0700 Subject: How many humongous regions? Message-ID: Hi, See if anyone has a quick answer on this: is there any way I can check the number of humongous regions used in a running process (say, I have PID)? Thanks. Tao -------------- next part -------------- An HTML attachment was scrubbed... URL: From yiyeguhu at gmail.com Mon Jul 7 21:41:40 2014 From: yiyeguhu at gmail.com (Tao Mao) Date: Mon, 7 Jul 2014 14:41:40 -0700 Subject: How many humongous regions? In-Reply-To: References: Message-ID: Yeah, I meant I wanted to use G1. On Mon, Jul 7, 2014 at 2:41 PM, Tao Mao wrote: > Hi, > > See if anyone has a quick answer on this: is there any way I can check the > number of humongous regions used in a running process (say, I have PID)? > > Thanks. > Tao > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From yu.zhang at oracle.com Mon Jul 7 22:33:27 2014 From: yu.zhang at oracle.com (Yu Zhang) Date: Mon, 07 Jul 2014 15:33:27 -0700 Subject: How many humongous regions? In-Reply-To: References: Message-ID: <53BB2037.1030405@oracle.com> Tao, How about G1PrintRegionLivenessInfo? It is printed at marking. Thanks, Jenny On 7/7/2014 2:41 PM, Tao Mao wrote: > Yeah, I meant I wanted to use G1. > > > On Mon, Jul 7, 2014 at 2:41 PM, Tao Mao > wrote: > > Hi, > > See if anyone has a quick answer on this: is there any way I can > check the number of humongous regions used in a running process > (say, I have PID)? > > Thanks. > Tao > > > > > > _______________________________________________ > hotspot-gc-use mailing list > hotspot-gc-use at openjdk.java.net > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kingtim at gmail.com Mon Jul 14 19:45:48 2014 From: kingtim at gmail.com (Tim King) Date: Mon, 14 Jul 2014 12:45:48 -0700 Subject: Number/size of G1 regions? Message-ID: Hello all! I have a question about the number and size of heap regions we are seeing on our JVMs. We have a 24GB heap, and I am seeing that 24524 1MB regions are being allocated. We are running JDK 1.7.0_51. I read in this article http://www.oracle.com/technetwork/articles/java/g1gc-1984535.html that "the goal is to have no more than 2048 regions". Based on the size of our heap, I am expecting larger region size with fewer regions. Is this something I should be worried about, or is it expected and normal? What kind of impact will this have on the performance? Thank you, Cheers, -Tim Heap Configuration: MinHeapFreeRatio = 40 MaxHeapFreeRatio = 70 MaxHeapSize = 25769803776 (24576.0MB) NewSize = 1363144 (1.2999954223632812MB) MaxNewSize = 17592186044415 MB OldSize = 5452592 (5.1999969482421875MB) NewRatio = 2 SurvivorRatio = 8 PermSize = 268435456 (256.0MB) MaxPermSize = 536870912 (512.0MB) G1HeapRegionSize = 1048576 (1.0MB) Heap Usage: G1 Heap: regions = 24524 capacity = 25715277824 (24524.0MB) used = 5114734088 (4877.7905349731445MB) free = 20600543736 (19646.209465026855MB) 19.889865172782354% used .... -------------- next part -------------- An HTML attachment was scrubbed... URL: From yu.zhang at oracle.com Mon Jul 14 20:41:24 2014 From: yu.zhang at oracle.com (Yu Zhang) Date: Mon, 14 Jul 2014 13:41:24 -0700 Subject: Number/size of G1 regions? In-Reply-To: References: Message-ID: <53C44074.5030205@oracle.com> Tim, This is probably due to you did not specify the minimum heap. With jdk7u40 java -Xmx24g -XX:+UseG1GC -XX:+PrintFlagsFinal -XX:+PrintGCDetails -verbose:gc gives the following: Heap garbage-first heap total 516096K, used 0K [0x00000001fae00000, 0x000000021a600000, 0x00000007fae00000) region size 1024K, 1 young (1024K), 0 survivors (0K) ./java -Xms24g -Xms24g -XX:+UseG1GC -XX:+PrintFlagsFinal -XX:+PrintGCDetails -verbose:gc gives: Heap garbage-first heap total 25165824K, used 0K [0x00000001fb000000, 0x00000007fb000000, 0x00000007fb000000) region size 8192K, 1 young (8192K), 0 survivors (0K) If the minimum heap size is not specified in command line, jvm will first decide the minimum based on some factors(newSize, oldSize, os allocatable memory, ect), then take the average of min and max heap (average heap size) to decide the region size. This region size is decided when jvm starts, and would not change if the heap is expanded. With smaller region size, one impact is you may see a lot of humongous objects. By definition, objects size > 1/2 of the region size. Also if you are seeing time spend in RS related operations, probably due to a lot of RS to maintain. Thanks, Jenny On 7/14/2014 12:45 PM, Tim King wrote: > Hello all! > I have a question about the number and size of heap regions we are > seeing on our JVMs. We have a 24GB heap, and I am seeing that 24524 > 1MB regions are being allocated. We are running JDK 1.7.0_51. > > I read in this article > http://www.oracle.com/technetwork/articles/java/g1gc-1984535.html that > "the goal is to have no more than 2048 regions". > > Based on the size of our heap, I am expecting larger region size with > fewer regions. Is this something I should be worried about, or is it > expected and normal? What kind of impact will this have on the > performance? 
> > Thank you, > Cheers, > -Tim > > Heap Configuration: > MinHeapFreeRatio = 40 > MaxHeapFreeRatio = 70 > MaxHeapSize = 25769803776 (24576.0MB) > NewSize = 1363144 (1.2999954223632812MB) > MaxNewSize = 17592186044415 MB > OldSize = 5452592 (5.1999969482421875MB) > NewRatio = 2 > SurvivorRatio = 8 > PermSize = 268435456 (256.0MB) > MaxPermSize = 536870912 (512.0MB) > G1HeapRegionSize = 1048576 (1.0MB) > > Heap Usage: > G1 Heap: > regions = 24524 > capacity = 25715277824 (24524.0MB) > used = 5114734088 (4877.7905349731445MB) > free = 20600543736 (19646.209465026855MB) > 19.889865172782354% used > .... > > > > _______________________________________________ > hotspot-gc-use mailing list > hotspot-gc-use at openjdk.java.net > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use -------------- next part -------------- An HTML attachment was scrubbed... URL: From kingtim at gmail.com Mon Jul 14 20:45:48 2014 From: kingtim at gmail.com (Tim King) Date: Mon, 14 Jul 2014 13:45:48 -0700 Subject: Number/size of G1 regions? In-Reply-To: <53C44074.5030205@oracle.com> References: <53C44074.5030205@oracle.com> Message-ID: Thank you Jenny. That makes sense. Cheers, Tim On Mon, Jul 14, 2014 at 1:41 PM, Yu Zhang wrote: > Tim, > > This is probably due to you did not specify the minimum heap. With jdk7u40 > java -Xmx24g -XX:+UseG1GC -XX:+PrintFlagsFinal -XX:+PrintGCDetails > -verbose:gc gives the following: > Heap > garbage-first heap total 516096K, used 0K [0x00000001fae00000, > 0x000000021a600000, 0x00000007fae00000) > region size 1024K, 1 young (1024K), 0 survivors (0K) > > ./java -Xms24g -Xms24g -XX:+UseG1GC -XX:+PrintFlagsFinal > -XX:+PrintGCDetails -verbose:gc gives: > Heap > garbage-first heap total 25165824K, used 0K [0x00000001fb000000, > 0x00000007fb000000, 0x00000007fb000000) > region size 8192K, 1 young (8192K), 0 survivors (0K) > > If the minimum heap size is not specified in command line, jvm will first > decide the minimum based on some factors(newSize, oldSize, os allocatable > memory, ect), then take the average of min and max heap (average heap size) > to decide the region size. This region size is decided when jvm starts, > and would not change if the heap is expanded. > > With smaller region size, one impact is you may see a lot of humongous > objects. By definition, objects size > 1/2 of the region size. Also if > you are seeing time spend in RS related operations, probably due to a lot > of RS to maintain. > > Thanks, > Jenny > > On 7/14/2014 12:45 PM, Tim King wrote: > > Hello all! > I have a question about the number and size of heap regions we are seeing > on our JVMs. We have a 24GB heap, and I am seeing that 24524 1MB regions > are being allocated. We are running JDK 1.7.0_51. > > I read in this article > http://www.oracle.com/technetwork/articles/java/g1gc-1984535.html that > "the goal is to have no more than 2048 regions". > > Based on the size of our heap, I am expecting larger region size with > fewer regions. Is this something I should be worried about, or is it > expected and normal? What kind of impact will this have on the performance? 
> > Thank you, > Cheers, > -Tim > > Heap Configuration: > MinHeapFreeRatio = 40 > MaxHeapFreeRatio = 70 > MaxHeapSize = 25769803776 (24576.0MB) > NewSize = 1363144 (1.2999954223632812MB) > MaxNewSize = 17592186044415 MB > OldSize = 5452592 (5.1999969482421875MB) > NewRatio = 2 > SurvivorRatio = 8 > PermSize = 268435456 (256.0MB) > MaxPermSize = 536870912 (512.0MB) > G1HeapRegionSize = 1048576 (1.0MB) > > Heap Usage: > G1 Heap: > regions = 24524 > capacity = 25715277824 (24524.0MB) > used = 5114734088 (4877.7905349731445MB) > free = 20600543736 (19646.209465026855MB) > 19.889865172782354% used > .... > > > > _______________________________________________ > hotspot-gc-use mailing listhotspot-gc-use at openjdk.java.nethttp://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pasthelod at gmail.com Wed Jul 16 00:12:38 2014 From: pasthelod at gmail.com (Pas) Date: Wed, 16 Jul 2014 02:12:38 +0200 Subject: CMSEdenChunksRecordAlways & CMSParallelInitialMarkEnabled In-Reply-To: <8217B498-8868-453D-B8DC-39A718310D75@vast.com> References: <8217B498-8868-453D-B8DC-39A718310D75@vast.com> Message-ID: Hello, I was wondering, how come no one (else) uses ParGCCardsPerStrideChunk on large heaps? It's shown to decrease (precious STW) time spent during minor collections by ParNew. ( http://blog.ragozin.info/2012/03/secret-hotspot-option-improving-gc.html and also we started to use it on 20+ GB heaps and it was helpful, though we did a more than one setting at a time, so I can't say that it was just this setting.) Regards, Pas On Sat, Jun 21, 2014 at 6:52 PM, graham sanderson wrote: > Note this works great for us too ? given formatting in this email is a bit > flaky, I?ll refer you to our numbers I posted in a Cassandra issue I opened > to add these flags as defaults for ParNew/CMS (on the appropriate JVMs) > > https://issues.apache.org/jira/browse/CASSANDRA-7432 > > On Jun 14, 2014, at 7:05 PM, graham sanderson wrote: > > Thanks for the answer Gustav, > > The fact that you have been running in production for months makes me > confident enough to try this on at least one our nodes? (this is actually > cassandra) > > Current GC related options are at the bottom - these nodes have 256G of > RAM, and they aren?t swapping, and we are certainly used to a pause within > the first 10 seconds or so, but the nodes haven?t even joined the ring yet, > so we don?t really care. yeah ms != mx is bad; we want one heap size and to > stick with it. > > I will gather data via -XX:+CMSEdenChunksRecordAlways, however I?d be > interested if a developer has an answer as to when we expect potential > chunk recording? Otherwise I?ll have to go dig into the code a bit deeper - > my assumption was that this call would not be in the inlined allocation > code, but I had thought that even allocation of a new TLAB was inlined by > the compilers - perhaps not. > > Current GC related settings - note we were running with a lower > CMSInitiatingOccupancyFraction until recently - seems to have gotten > changed back by accident, but that is kind of tangential. 
> > -Xms24576M > -Xmx24576M > -Xmn8192M > -XX:+HeapDumpOnOutOfMemoryError > -XX:+UseParNewGC > -XX:+UseConcMarkSweepGC > -XX:+CMSParallelRemarkEnabled > -XX:SurvivorRatio=8 > -XX:MaxTenuringThreshold=1 > -XX:CMSInitiatingOccupancyFraction=70 > -XX:+UseCMSInitiatingOccupancyOnly > -XX:+UseTLAB > -XX:+UseCondCardMark > -XX:+PrintGCDetails > -XX:+PrintGCDateStamps > -XX:+PrintHeapAtGC > -XX:+PrintTenuringDistribution > -XX:+PrintGCApplicationStoppedTime > -XX:+PrintPromotionFailure > -XX:PrintFLSStatistics=1 > -Xloggc:/var/log/cassandra/gc.log > -XX:+UseGCLogFileRotation > -XX:NumberOfGCLogFiles=30 > -XX:GCLogFileSize=20M > -XX:+PrintGCApplicationConcurrentTime > > Thanks, Graham > > P.S. Note tuning here is rather interesting since we use this cassandra > cluster for lots of different data with very different usage patterns - > sometimes we?ll suddenly dump 50G of data in over the course of a few > minutes. Also cassandra doesn?t really mind a node being paused for a while > due to GC, but things get a little more annoying if they pause at the same > time? even though promotion failure can we worse for us (that is a separate > issue), we?ve seen STW pauses up to about 6-8 seconds in re mark > (presumably when things go horribly wrong and you only get one chunk). > Basically I?m on a mission to minimize all pauses, since their effects can > propagate (timeouts are very short in a lot of places) > > I will report back with my findings > > On Jun 14, 2014, at 6:29 PM, Gustav ?kesson > wrote: > > Hi, > > Even though I won't answer all your questions I'd like to share my > experience with these settings (plus additional thoughts) even though I > haven't yet have had the time to dig into details. > > We've been using these flags for several months in production (yes, Java 7 > even before latest update release) and we've seen a lot of improvements for > CMS old gen STW. During execution occasional initial mark of 1.5s could > occur, but using these settings combined CMS pauses are consistently around > ~100ms (on high-end machine as yours, they are 20-30ms). We're using 1gb > and 2gb heaps with roughly half/half old/new. Obviously, YMMV but this is > at least the behavior of this particular application - we've had nothing > but positive outcome from using these settings. Additionally, the pauses > are rather deterministic. > > Not sure what your heap size settings are, but what I've also observed is > that setting Xms != Xmx could also cause occasional long initial mark when > heap capacity is slightly increased. I had a discussion a while back ( > http://mail.openjdk.java.net/pipermail/hotspot-gc-use/2014-February/001795.html > ) regarding this, and this seems to be an issue with CMS. > > Also, swapping/paging is another factor which could cause indeterministic > / occasional long STW GCs. If you're on Linux, try swappiness=0 and see if > pauses get more stable. 
> > > Best Regards, > Gustav ?kesson > > > On Fri, Jun 13, 2014 at 6:48 AM, graham sanderson wrote: > >> I was investigating abortable preclean timeouts in our app (and >> associated long remark pause) so had a look at the old jdk6 code I had on >> my box, wondered about recording eden chunks during certain eden slow >> allocation paths (I wasn?t sure if TLAB allocation is just a CAS bump), and >> saw what looked perfect in the latest code, so was excited to >> install 1.7.0_60-b19 >> >> I wanted to ask what you consider the stability of these two options to >> be (I?m pretty sure at least the first one is new in this release) >> >> I have just installed locally on my mac, and am aware of >> http://bugs.java.com/bugdatabase/view_bug.do?bug_id=8021809 which I >> could reproduce, however I wasn?t able to reproduce it without -XX:-UseCMSCompactAtFullCollection >> (is this your understanding too?) >> >> We are running our application with 8 gig young generation (6.4g eden), >> on boxes with 32 cores? so parallelism is good for short pauses >> >> we already have >> >> -XX:+UseParNewGC >> -XX:+UseConcMarkSweepGC >> -XX:+CMSParallelRemarkEnabled >> >> we have seen a few long(isn) initial marks, so >> >> -XX:+CMSParallelInitialMarkEnabled sounds good >> >> as for >> >> -XX:+CMSEdenChunksRecordAlways >> >> my question is: what constitutes a slow path such an eden chunk is >> potentially recorded? TLAB allocation, or more horrific things; basically >> (and I?ll test our app with -XX:+CMSPrintEdenSurvivorChunks) is it likely >> that I?ll actually get less samples using -XX:+CMSEdenChunksRecordAlways in >> a highly multithread app than I would with sampling, or put another way? >> what sort of app allocation patterns if any might avoid the slow path >> altogether and might leave me with just one chunk? >> >> Thanks, >> >> Graham >> >> P.S. less relevant I think, but our old generation is 16g >> P.P.S. I suspect the abortable preclean timeouts mostly happen after a >> burst of very high allocation rate followed by an almost complete lull? >> this is one of the patterns that can happen in our application >> >> _______________________________________________ >> hotspot-gc-use mailing list >> hotspot-gc-use at openjdk.java.net >> http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use >> >> > > > > _______________________________________________ > hotspot-gc-use mailing list > hotspot-gc-use at openjdk.java.net > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From graham at vast.com Wed Jul 16 02:59:22 2014 From: graham at vast.com (graham sanderson) Date: Tue, 15 Jul 2014 21:59:22 -0500 Subject: CMSEdenChunksRecordAlways & CMSParallelInitialMarkEnabled In-Reply-To: References: <8217B498-8868-453D-B8DC-39A718310D75@vast.com> Message-ID: <2495BC2B-2569-4E0F-9522-9D0730222164@vast.com> I didn?t know about it - I?m go to try it out on some of the nodes in a test cluster On Jul 15, 2014, at 7:12 PM, Pas wrote: > Hello, > > I was wondering, how come no one (else) uses ParGCCardsPerStrideChunk on large heaps? It's shown to decrease (precious STW) time spent during minor collections by ParNew. ( http://blog.ragozin.info/2012/03/secret-hotspot-option-improving-gc.html and also we started to use it on 20+ GB heaps and it was helpful, though we did a more than one setting at a time, so I can't say that it was just this setting.) 
> > Regards, > Pas > > > On Sat, Jun 21, 2014 at 6:52 PM, graham sanderson wrote: > Note this works great for us too ? given formatting in this email is a bit flaky, I?ll refer you to our numbers I posted in a Cassandra issue I opened to add these flags as defaults for ParNew/CMS (on the appropriate JVMs) > > https://issues.apache.org/jira/browse/CASSANDRA-7432 > > On Jun 14, 2014, at 7:05 PM, graham sanderson wrote: > >> Thanks for the answer Gustav, >> >> The fact that you have been running in production for months makes me confident enough to try this on at least one our nodes? (this is actually cassandra) >> >> Current GC related options are at the bottom - these nodes have 256G of RAM, and they aren?t swapping, and we are certainly used to a pause within the first 10 seconds or so, but the nodes haven?t even joined the ring yet, so we don?t really care. yeah ms != mx is bad; we want one heap size and to stick with it. >> >> I will gather data via -XX:+CMSEdenChunksRecordAlways, however I?d be interested if a developer has an answer as to when we expect potential chunk recording? Otherwise I?ll have to go dig into the code a bit deeper - my assumption was that this call would not be in the inlined allocation code, but I had thought that even allocation of a new TLAB was inlined by the compilers - perhaps not. >> >> Current GC related settings - note we were running with a lower CMSInitiatingOccupancyFraction until recently - seems to have gotten changed back by accident, but that is kind of tangential. >> >> -Xms24576M >> -Xmx24576M >> -Xmn8192M >> -XX:+HeapDumpOnOutOfMemoryError >> -XX:+UseParNewGC >> -XX:+UseConcMarkSweepGC >> -XX:+CMSParallelRemarkEnabled >> -XX:SurvivorRatio=8 >> -XX:MaxTenuringThreshold=1 >> -XX:CMSInitiatingOccupancyFraction=70 >> -XX:+UseCMSInitiatingOccupancyOnly >> -XX:+UseTLAB >> -XX:+UseCondCardMark >> -XX:+PrintGCDetails >> -XX:+PrintGCDateStamps >> -XX:+PrintHeapAtGC >> -XX:+PrintTenuringDistribution >> -XX:+PrintGCApplicationStoppedTime >> -XX:+PrintPromotionFailure >> -XX:PrintFLSStatistics=1 >> -Xloggc:/var/log/cassandra/gc.log >> -XX:+UseGCLogFileRotation >> -XX:NumberOfGCLogFiles=30 >> -XX:GCLogFileSize=20M >> -XX:+PrintGCApplicationConcurrentTime >> >> Thanks, Graham >> >> P.S. Note tuning here is rather interesting since we use this cassandra cluster for lots of different data with very different usage patterns - sometimes we?ll suddenly dump 50G of data in over the course of a few minutes. Also cassandra doesn?t really mind a node being paused for a while due to GC, but things get a little more annoying if they pause at the same time? even though promotion failure can we worse for us (that is a separate issue), we?ve seen STW pauses up to about 6-8 seconds in re mark (presumably when things go horribly wrong and you only get one chunk). Basically I?m on a mission to minimize all pauses, since their effects can propagate (timeouts are very short in a lot of places) >> >> I will report back with my findings >> >> On Jun 14, 2014, at 6:29 PM, Gustav ?kesson wrote: >> >>> Hi, >>> >>> Even though I won't answer all your questions I'd like to share my experience with these settings (plus additional thoughts) even though I haven't yet have had the time to dig into details. >>> >>> We've been using these flags for several months in production (yes, Java 7 even before latest update release) and we've seen a lot of improvements for CMS old gen STW. 
During execution occasional initial mark of 1.5s could occur, but using these settings combined CMS pauses are consistently around ~100ms (on high-end machine as yours, they are 20-30ms). We're using 1gb and 2gb heaps with roughly half/half old/new. Obviously, YMMV but this is at least the behavior of this particular application - we've had nothing but positive outcome from using these settings. Additionally, the pauses are rather deterministic. >>> >>> Not sure what your heap size settings are, but what I've also observed is that setting Xms != Xmx could also cause occasional long initial mark when heap capacity is slightly increased. I had a discussion a while back ( http://mail.openjdk.java.net/pipermail/hotspot-gc-use/2014-February/001795.html ) regarding this, and this seems to be an issue with CMS. >>> >>> Also, swapping/paging is another factor which could cause indeterministic / occasional long STW GCs. If you're on Linux, try swappiness=0 and see if pauses get more stable. >>> >>> >>> Best Regards, >>> Gustav ?kesson >>> >>> >>> On Fri, Jun 13, 2014 at 6:48 AM, graham sanderson wrote: >>> I was investigating abortable preclean timeouts in our app (and associated long remark pause) so had a look at the old jdk6 code I had on my box, wondered about recording eden chunks during certain eden slow allocation paths (I wasn?t sure if TLAB allocation is just a CAS bump), and saw what looked perfect in the latest code, so was excited to install 1.7.0_60-b19 >>> >>> I wanted to ask what you consider the stability of these two options to be (I?m pretty sure at least the first one is new in this release) >>> >>> I have just installed locally on my mac, and am aware of http://bugs.java.com/bugdatabase/view_bug.do?bug_id=8021809 which I could reproduce, however I wasn?t able to reproduce it without -XX:-UseCMSCompactAtFullCollection (is this your understanding too?) >>> >>> We are running our application with 8 gig young generation (6.4g eden), on boxes with 32 cores? so parallelism is good for short pauses >>> >>> we already have >>> >>> -XX:+UseParNewGC >>> -XX:+UseConcMarkSweepGC >>> -XX:+CMSParallelRemarkEnabled >>> >>> we have seen a few long(isn) initial marks, so >>> >>> -XX:+CMSParallelInitialMarkEnabled sounds good >>> >>> as for >>> >>> -XX:+CMSEdenChunksRecordAlways >>> >>> my question is: what constitutes a slow path such an eden chunk is potentially recorded? TLAB allocation, or more horrific things; basically (and I?ll test our app with -XX:+CMSPrintEdenSurvivorChunks) is it likely that I?ll actually get less samples using -XX:+CMSEdenChunksRecordAlways in a highly multithread app than I would with sampling, or put another way? what sort of app allocation patterns if any might avoid the slow path altogether and might leave me with just one chunk? >>> >>> Thanks, >>> >>> Graham >>> >>> P.S. less relevant I think, but our old generation is 16g >>> P.P.S. I suspect the abortable preclean timeouts mostly happen after a burst of very high allocation rate followed by an almost complete lull? 
this is one of the patterns that can happen in our application >>> >>> _______________________________________________ >>> hotspot-gc-use mailing list >>> hotspot-gc-use at openjdk.java.net >>> http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use >>> >>> >> > > > _______________________________________________ > hotspot-gc-use mailing list > hotspot-gc-use at openjdk.java.net > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 1574 bytes Desc: not available URL: From martin.makundi at koodaripalvelut.com Wed Jul 16 03:45:22 2014 From: martin.makundi at koodaripalvelut.com (Martin Makundi) Date: Wed, 16 Jul 2014 06:45:22 +0300 Subject: G1gc compaction algorithm Message-ID: Hi! Humongous allocation fails when there is not enough contiguous free space in memory. How does g1gc compaction algorithm work, does it compact only within a region or does it attempt to compact in a whole-heap-defragmentation way? The latter would reduce the probability of "humongous allocation failure". And if this is possible, is there a parameter/s that can be changed to tune this? ** Martin -------------- next part -------------- An HTML attachment was scrubbed... URL: From yiyeguhu at gmail.com Wed Jul 16 04:13:51 2014 From: yiyeguhu at gmail.com (Tao Mao) Date: Tue, 15 Jul 2014 21:13:51 -0700 Subject: G1gc compaction algorithm In-Reply-To: References: Message-ID: Ideally, G1 should have heap-wise deframentation but it doesn't as far as I know. Thanks. Tao On Tue, Jul 15, 2014 at 8:45 PM, Martin Makundi < martin.makundi at koodaripalvelut.com> wrote: > Hi! > > Humongous allocation fails when there is not enough contiguous free space > in memory. > > How does g1gc compaction algorithm work, does it compact only within a > region or does it attempt to compact in a whole-heap-defragmentation way? > The latter would reduce the probability of "humongous allocation failure". > And if this is possible, is there a parameter/s that can be changed to tune > this? > > ** > Martin > > _______________________________________________ > hotspot-gc-use mailing list > hotspot-gc-use at openjdk.java.net > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From yu.zhang at oracle.com Wed Jul 16 05:29:32 2014 From: yu.zhang at oracle.com (Yu Zhang) Date: Tue, 15 Jul 2014 22:29:32 -0700 Subject: G1gc compaction algorithm In-Reply-To: References: Message-ID: <53C60DBC.6060205@oracle.com> Martin, Humongous objects are treated specially. They are not collected in young gc. By definition, if the object size > 1/2 region size, it is humongous object. If the object size is just a little over 1/2 region size, then the fragmentation could be high. One trick we can do is to increase region size (-XX:G1HeapRegionSize) so that we do not see much humongous allocation. Do you know the size of the humongous objects? If you use -XX:+PrintAdaptiveSizePolicy, you can see that information. The humongous objects are handled better in latest jdk8, and more improvement under development. Thanks, Jenny On 7/15/2014 8:45 PM, Martin Makundi wrote: > Hi! > > Humongous allocation fails when there is not enough contiguous free > space in memory. 
> > How does g1gc compaction algorithm work, does it compact only within a > region or does it attempt to compact in a whole-heap-defragmentation > way? The latter would reduce the probability of "humongous allocation > failure". And if this is possible, is there a parameter/s that can be > changed to tune this? > > ** > Martin > > > _______________________________________________ > hotspot-gc-use mailing list > hotspot-gc-use at openjdk.java.net > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use -------------- next part -------------- An HTML attachment was scrubbed... URL: From martin.makundi at koodaripalvelut.com Wed Jul 16 06:27:43 2014 From: martin.makundi at koodaripalvelut.com (Martin Makundi) Date: Wed, 16 Jul 2014 09:27:43 +0300 Subject: G1gc compaction algorithm In-Reply-To: <53C60DBC.6060205@oracle.com> References: <53C60DBC.6060205@oracle.com> Message-ID: Hi! The size of humongous objects varies, 1M up to 50M I have seen from logs, I assume they are EhCache objects because they are not direct allocations (we have a memory allocation logger which does not see any of these allocations as simple byte array allocations). Our region size is 5M seems to trigger lowish level of full gcs (only a couple pre day, which is still too much ofcourse, we would prefer zero). What about compacting the small bits, does g1 compact the scattered bits (gather small chunks from all over the memory to the first region, for example) to allow more humongous ones? ** Martin 2014-07-16 8:29 GMT+03:00 Yu Zhang : > Martin, > > Humongous objects are treated specially. They are not collected in young > gc. > > By definition, if the object size > 1/2 region size, it is humongous > object. If the object size is just a little over 1/2 region size, then the > fragmentation could be high. > > One trick we can do is to increase region size (-XX:G1HeapRegionSize) so > that we do not see much humongous allocation. Do you know the size of the > humongous objects? If you use -XX:+PrintAdaptiveSizePolicy, you can see > that information. > > The humongous objects are handled better in latest jdk8, and more > improvement under development. > > Thanks, > Jenny > > On 7/15/2014 8:45 PM, Martin Makundi wrote: > > Hi! > > Humongous allocation fails when there is not enough contiguous free > space in memory. > > How does g1gc compaction algorithm work, does it compact only within a > region or does it attempt to compact in a whole-heap-defragmentation way? > The latter would reduce the probability of "humongous allocation failure". > And if this is possible, is there a parameter/s that can be changed to tune > this? > > ** > Martin > > > _______________________________________________ > hotspot-gc-use mailing listhotspot-gc-use at openjdk.java.nethttp://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomas.schatzl at oracle.com Wed Jul 16 06:30:50 2014 From: thomas.schatzl at oracle.com (Thomas Schatzl) Date: Wed, 16 Jul 2014 08:30:50 +0200 Subject: G1gc compaction algorithm In-Reply-To: References: Message-ID: <1405492250.2665.12.camel@cirrus> Hi Martin, On Wed, 2014-07-16 at 06:45 +0300, Martin Makundi wrote: > Hi! > > > Humongous allocation fails when there is not enough contiguous free > space in memory. > > > How does g1gc compaction algorithm work, does it compact only within a > region or does it attempt to compact in a whole-heap-defragmentation > way? 
The latter would reduce the probability of "humongous allocation > failure". And if this is possible, is there a parameter/s that can be > changed to tune this? Full GC compaction algorithm works by compacting on a whole-heap basis ignoring humongous objects. Humongous objects are never moved at this time. There is a recently discovered bug with that that does not consider humongous objects freed in the same collection (https://bugs.openjdk.java.net/browse/JDK-8049332) but leaves that space empty. This may actually be an advantage if the humongous object size is regular. Minor GC only ever evacuates, no compaction. There is no consideration to free memory so that it is contiguous after GC at this point. There are some patches in 8u20 (maybe 8?) that reduce fragmentation by playing around with object distribution per type. If you have quite short-living humongous objects of a particular type, there is https://bugs.openjdk.java.net/browse/JDK-8027959 which will likely be in 8u40. There are no particular knobs to turn to impact compaction density at this time except indirectly via heap region size (G1HeapRegionSize). You can get current region distribution information by eg. -XX: +PrintHeapAtGC -XX:+PrintHeapAtGCExtended . Thanks, Thomas From martin.makundi at koodaripalvelut.com Wed Jul 16 06:41:43 2014 From: martin.makundi at koodaripalvelut.com (Martin Makundi) Date: Wed, 16 Jul 2014 09:41:43 +0300 Subject: G1gc compaction algorithm In-Reply-To: <1405492250.2665.12.camel@cirrus> References: <1405492250.2665.12.camel@cirrus> Message-ID: Hi! I am not sure what's my issue, but according to logs the humongous object size varies... usually it's the larger ones that hit the fan unless the region size is too large to start with. I tried 16m and it seemed to make itworse, tried couple more values here and there and 5M seems to work quite nicely. I suspect it's fragmentation issue, because most often Full GC occurs at 60-70% total heap used while the humongous object size is only 0,1% of the total heap size. So I suspect something like "continuous parallel compaction" could solve my issue? > You can get current region distribution information by eg. -XX: > +PrintHeapAtGC -XX:+PrintHeapAtGCExtended . First one I had already, will add latter one. What will these look like on the logs, i.e., how can I search for them and look at the particular output? ** Martin 2014-07-16 9:30 GMT+03:00 Thomas Schatzl : > Hi Martin, > > On Wed, 2014-07-16 at 06:45 +0300, Martin Makundi wrote: > > Hi! > > > > > > Humongous allocation fails when there is not enough contiguous free > > space in memory. > > > > > > How does g1gc compaction algorithm work, does it compact only within a > > region or does it attempt to compact in a whole-heap-defragmentation > > way? The latter would reduce the probability of "humongous allocation > > failure". And if this is possible, is there a parameter/s that can be > > changed to tune this? > > Full GC compaction algorithm works by compacting on a whole-heap basis > ignoring humongous objects. Humongous objects are never moved at this > time. > > There is a recently discovered bug with that that does not consider > humongous objects freed in the same collection > (https://bugs.openjdk.java.net/browse/JDK-8049332) but leaves that space > empty. This may actually be an advantage if the humongous object size is > regular. > > Minor GC only ever evacuates, no compaction. There is no consideration > to free memory so that it is contiguous after GC at this point. 
> > There are some patches in 8u20 (maybe 8?) that reduce fragmentation by > playing around with object distribution per type. > > If you have quite short-living humongous objects of a particular type, > there is https://bugs.openjdk.java.net/browse/JDK-8027959 which will > likely be in 8u40. > > There are no particular knobs to turn to impact compaction density at > this time except indirectly via heap region size (G1HeapRegionSize). > > You can get current region distribution information by eg. -XX: > +PrintHeapAtGC -XX:+PrintHeapAtGCExtended . > > Thanks, > Thomas > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomas.schatzl at oracle.com Wed Jul 16 06:51:50 2014 From: thomas.schatzl at oracle.com (Thomas Schatzl) Date: Wed, 16 Jul 2014 08:51:50 +0200 Subject: G1gc compaction algorithm In-Reply-To: References: <1405492250.2665.12.camel@cirrus> Message-ID: <1405493510.2665.17.camel@cirrus> Hi, On Wed, 2014-07-16 at 09:41 +0300, Martin Makundi wrote: > Hi! > > I am not sure what's my issue, but according to logs the humongous > object size varies... usually it's the larger ones that hit the fan > unless the region size is too large to start with. I tried 16m and it > seemed to make itworse, tried couple more values here and there and 5M > seems to work quite nicely. 4M I guess (it does not matter). > > I suspect it's fragmentation issue, because most often Full GC occurs > at 60-70% total heap used while the humongous object size is only 0,1% > of the total heap size. > > > So I suspect something like "continuous parallel compaction" could > solve my issue? You want that probably: https://bugs.openjdk.java.net/browse/JDK-8038487 > > > You can get current region distribution information by eg. -XX: > > +PrintHeapAtGC -XX:+PrintHeapAtGCExtended . > > First one I had already, will add latter one. What will these look > like on the logs, i.e., how can I search for them and look at the > particular output? > At the end of GC it prints for what purpose the regions are used currently per region. One region per line, so it fills up your logs. However using it you can see if the humongous allocation request of a particular size could actually fit or not. The described behavior fits the typical symptoms for that issue. Thanks, Thomas From martin.makundi at koodaripalvelut.com Wed Jul 16 07:02:18 2014 From: martin.makundi at koodaripalvelut.com (Martin Makundi) Date: Wed, 16 Jul 2014 10:02:18 +0300 Subject: G1gc compaction algorithm In-Reply-To: <1405493510.2665.17.camel@cirrus> References: <1405492250.2665.12.camel@cirrus> <1405493510.2665.17.camel@cirrus> Message-ID: > > +PrintHeapAtGC -XX:+PrintHeapAtGCExtended . Does it print only on Full GC or any GC? Any gc might be an overkill. > You want that probably: > https://bugs.openjdk.java.net/browse/JDK-8038487 I cannot comment (didn't find a way to register as user) on that entry but you could add a comment on my behalf that "continuous parallel compaction" would be nice. ** Martin 2014-07-16 9:51 GMT+03:00 Thomas Schatzl : > Hi, > > On Wed, 2014-07-16 at 09:41 +0300, Martin Makundi wrote: > > Hi! > > > > I am not sure what's my issue, but according to logs the humongous > > object size varies... usually it's the larger ones that hit the fan > > unless the region size is too large to start with. I tried 16m and it > > seemed to make itworse, tried couple more values here and there and 5M > > seems to work quite nicely. > > 4M I guess (it does not matter). 
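(Side note: G1 only uses power-of-two region sizes, which is presumably what the "4M" remark above refers to, since a 5M setting effectively becomes 4M regions. A minimal, purely illustrative Java sketch of the half-region humongous threshold follows; the class name and the 4 MB / 3 MB figures are assumptions for illustration, not values taken from this thread:

// Hedged sketch, assuming -XX:G1HeapRegionSize=4m; not code from this thread.
public class HumongousThresholdSketch {
    public static void main(String[] args) {
        long regionSize = 4L * 1024 * 1024;      // assumed region size (G1 adjusts the flag to a power of two)
        long humongousLimit = regionSize / 2;    // allocations larger than this are humongous
        // A single 3 MB array exceeds the limit with 4 MB regions, so G1 must find
        // contiguous free regions for it instead of allocating it in eden.
        byte[] tooBig = new byte[3 * 1024 * 1024];
        // The same data split into chunks just under the limit (minus some headroom
        // for object headers) stays out of humongous territory.
        byte[][] chunked = new byte[2][(int) humongousLimit - 64];
        System.out.println("humongous limit: " + humongousLimit + " bytes, single array: "
                + tooBig.length + ", chunk: " + chunked[0].length);
    }
}

With regions this size, anything over 2 MB is allocated as a humongous object in contiguous free regions, which is why a handful of 1M-50M objects can force a full GC on a heap that is only 60-70% full once the free regions are fragmented.)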
> > > > I suspect it's fragmentation issue, because most often Full GC occurs > > at 60-70% total heap used while the humongous object size is only 0,1% > > of the total heap size. > > > > > > So I suspect something like "continuous parallel compaction" could > > solve my issue? > > You want that probably: > https://bugs.openjdk.java.net/browse/JDK-8038487 > > > > > You can get current region distribution information by eg. -XX: > > > +PrintHeapAtGC -XX:+PrintHeapAtGCExtended . > > > > First one I had already, will add latter one. What will these look > > like on the logs, i.e., how can I search for them and look at the > > particular output? > > > At the end of GC it prints for what purpose the regions are used > currently per region. One region per line, so it fills up your logs. > > However using it you can see if the humongous allocation request of a > particular size could actually fit or not. The described behavior fits > the typical symptoms for that issue. > > Thanks, > Thomas > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomas.schatzl at oracle.com Wed Jul 16 07:07:39 2014 From: thomas.schatzl at oracle.com (Thomas Schatzl) Date: Wed, 16 Jul 2014 09:07:39 +0200 Subject: G1gc compaction algorithm In-Reply-To: References: <1405492250.2665.12.camel@cirrus> <1405493510.2665.17.camel@cirrus> Message-ID: <1405494459.2665.20.camel@cirrus> Hi, On Wed, 2014-07-16 at 10:02 +0300, Martin Makundi wrote: > > > +PrintHeapAtGC -XX:+PrintHeapAtGCExtended . > > Does it print only on Full GC or any GC? Any gc might be an overkill. Unfortunately any GC :/ > > > You want that probably: > > https://bugs.openjdk.java.net/browse/JDK-8038487 > > I cannot comment (didn't find a way to register as user) on that entry > but you could add a comment on my behalf that "continuous parallel > compaction" would be nice. I can add a comment, but what do you mean with "continuous parallel compaction" if I may ask, and what exact purpose does it serve? The terms are too generic to me to discern any particular functionality. Thanks, Thomas From martin.makundi at koodaripalvelut.com Wed Jul 16 07:11:07 2014 From: martin.makundi at koodaripalvelut.com (Martin Makundi) Date: Wed, 16 Jul 2014 10:11:07 +0300 Subject: G1gc compaction algorithm In-Reply-To: <1405494459.2665.20.camel@cirrus> References: <1405492250.2665.12.camel@cirrus> <1405493510.2665.17.camel@cirrus> <1405494459.2665.20.camel@cirrus> Message-ID: > > > On Wed, 2014-07-16 at 10:02 +0300, Martin Makundi wrote: > > > > +PrintHeapAtGC -XX:+PrintHeapAtGCExtended . > > > > Does it print only on Full GC or any GC? Any gc might be an overkill. > > Unfortunately any GC :/ > > > > > > You want that probably: > > > https://bugs.openjdk.java.net/browse/JDK-8038487 > > > > I cannot comment (didn't find a way to register as user) on that entry > > but you could add a comment on my behalf that "continuous parallel > > compaction" would be nice. > > I can add a comment, but what do you mean with "continuous parallel > compaction" if I may ask, and what exact purpose does it serve? > > The terms are too generic to me to discern any particular functionality. > I mean that currently compacting occurs on full gc meaning stop-the-world. Would be nice if compacting would occur in parallel while app is running and taking into account all timing targets such as MaxGCPauseMillis etc. ** Martin > > Thanks, > Thomas > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From yaoshengzhe at gmail.com Wed Jul 16 07:25:32 2014 From: yaoshengzhe at gmail.com (yao) Date: Wed, 16 Jul 2014 00:25:32 -0700 Subject: G1gc compaction algorithm In-Reply-To: References: <1405492250.2665.12.camel@cirrus> <1405493510.2665.17.camel@cirrus> <1405494459.2665.20.camel@cirrus> Message-ID: Hi Martin, I think you may mean do a normal GC when you say "compacting", then that's exactly what this JIRA talk about. If you really mean do a cross region compacting (combing live objects from several regions to a single one), I don't think this is implemented so far, at least in jvm 7. Even that was implemented, if your heap has free regions, I think compacting doesn't really matter. In addition, there are several G1 parameters you can tune to prevent scattered regions (regions with few live objects), see http://www.oracle.com/technetwork/articles/java/g1gc-1984535.html. Without seeing the full gc log, I cannot say much how to tune G1 for your case but I would definitely try reducing XX:G1HeapWastePercent first. Best Shengzhe On Wed, Jul 16, 2014 at 12:11 AM, Martin Makundi < martin.makundi at koodaripalvelut.com> wrote: > >> On Wed, 2014-07-16 at 10:02 +0300, Martin Makundi wrote: >> > > > +PrintHeapAtGC -XX:+PrintHeapAtGCExtended . >> > >> > Does it print only on Full GC or any GC? Any gc might be an overkill. >> >> Unfortunately any GC :/ >> >> > >> > > You want that probably: >> > > https://bugs.openjdk.java.net/browse/JDK-8038487 >> > >> > I cannot comment (didn't find a way to register as user) on that entry >> > but you could add a comment on my behalf that "continuous parallel >> > compaction" would be nice. >> >> I can add a comment, but what do you mean with "continuous parallel >> compaction" if I may ask, and what exact purpose does it serve? >> >> The terms are too generic to me to discern any particular functionality. >> > > I mean that currently compacting occurs on full gc meaning stop-the-world. > Would be nice if compacting would occur in parallel while app is running > and taking into account all timing targets such as MaxGCPauseMillis etc. > > ** > Martin > >> >> Thanks, >> Thomas >> >> >> > > _______________________________________________ > hotspot-gc-use mailing list > hotspot-gc-use at openjdk.java.net > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From martin.makundi at koodaripalvelut.com Wed Jul 16 07:39:08 2014 From: martin.makundi at koodaripalvelut.com (Martin Makundi) Date: Wed, 16 Jul 2014 10:39:08 +0300 Subject: G1gc compaction algorithm In-Reply-To: References: <1405492250.2665.12.camel@cirrus> <1405493510.2665.17.camel@cirrus> <1405494459.2665.20.camel@cirrus> Message-ID: Our XX:G1HeapWastePercent is 1% ** Martin 2014-07-16 10:25 GMT+03:00 yao : > Hi Martin, > > I think you may mean do a normal GC when you say "compacting", then > that's exactly what this JIRA talk about. If you really mean do a cross > region compacting (combing live objects from several regions to a single > one), I don't think this is implemented so far, at least in jvm 7. Even > that was implemented, if your heap has free regions, I think compacting > doesn't really matter. In addition, there are several G1 parameters you can > tune to prevent scattered regions (regions with few live objects), see > http://www.oracle.com/technetwork/articles/java/g1gc-1984535.html. 
> Without seeing the full gc log, I cannot say much how to tune G1 for your > case but I would definitely try reducing XX:G1HeapWastePercent first. > > Best > Shengzhe > > > On Wed, Jul 16, 2014 at 12:11 AM, Martin Makundi < > martin.makundi at koodaripalvelut.com> wrote: > >> >>> On Wed, 2014-07-16 at 10:02 +0300, Martin Makundi wrote: >>> > > > +PrintHeapAtGC -XX:+PrintHeapAtGCExtended . >>> > >>> > Does it print only on Full GC or any GC? Any gc might be an overkill. >>> >>> Unfortunately any GC :/ >>> >>> > >>> > > You want that probably: >>> > > https://bugs.openjdk.java.net/browse/JDK-8038487 >>> > >>> > I cannot comment (didn't find a way to register as user) on that entry >>> > but you could add a comment on my behalf that "continuous parallel >>> > compaction" would be nice. >>> >>> I can add a comment, but what do you mean with "continuous parallel >>> compaction" if I may ask, and what exact purpose does it serve? >>> >>> The terms are too generic to me to discern any particular functionality. >>> >> >> I mean that currently compacting occurs on full gc meaning >> stop-the-world. Would be nice if compacting would occur in parallel while >> app is running and taking into account all timing targets such as >> MaxGCPauseMillis etc. >> >> ** >> Martin >> >>> >>> Thanks, >>> Thomas >>> >>> >>> >> >> _______________________________________________ >> hotspot-gc-use mailing list >> hotspot-gc-use at openjdk.java.net >> http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pasthelod at gmail.com Wed Jul 16 07:45:28 2014 From: pasthelod at gmail.com (Pas) Date: Wed, 16 Jul 2014 09:45:28 +0200 Subject: G1gc compaction algorithm In-Reply-To: References: <1405492250.2665.12.camel@cirrus> <1405493510.2665.17.camel@cirrus> <1405494459.2665.20.camel@cirrus> Message-ID: On Wed, Jul 16, 2014 at 9:11 AM, Martin Makundi < martin.makundi at koodaripalvelut.com> wrote: > >> On Wed, 2014-07-16 at 10:02 +0300, Martin Makundi wrote: >> > > > +PrintHeapAtGC -XX:+PrintHeapAtGCExtended . >> > >> > Does it print only on Full GC or any GC? Any gc might be an overkill. >> >> Unfortunately any GC :/ >> >> > >> > > You want that probably: >> > > https://bugs.openjdk.java.net/browse/JDK-8038487 >> > >> > I cannot comment (didn't find a way to register as user) on that entry >> > but you could add a comment on my behalf that "continuous parallel >> > compaction" would be nice. >> >> I can add a comment, but what do you mean with "continuous parallel >> compaction" if I may ask, and what exact purpose does it serve? >> >> The terms are too generic to me to discern any particular functionality. >> > > I mean that currently compacting occurs on full gc meaning stop-the-world. > Would be nice if compacting would occur in parallel while app is running > and taking into account all timing targets such as MaxGCPauseMillis etc. > In case of G1, compacting occurs in the mixed phase too. The documentation ( http://www.oracle.com/technetwork/articles/java/g1gc-1984535.html) is unclear, but we can assume that even unreachable (dead) humongous objects are collected in that phase (for example the description of this bug https://bugs.openjdk.java.net/browse/JDK-8049332 implies that we're correct), they're just not moved (evicted, as they basically always represent one region). 
G1 does "continous paralell compaction" (the eviction runs is the important difference between a young GC and a mixed GC, and it can and does use multiple threads, so it's paralell). So, if G1 runs continously (because you set maxmilis and IHOP low), and you still run into stop-the-world hammertime good old full GC mode, then either your application is leaking memory (still has references to large object graphs, or just a few large objects) or simply has a load too large (so memory pressure is too high, because the working set is simply too big, so G1 doesn't have any headroom to make the magic in the specified maxmilis). It would be nice, of course, if it could tell us which one is happening, but currently the best practice is to try to simulate different loads. > > ** > Martin > >> >> Thanks, >> Thomas >> >> >> Pas -------------- next part -------------- An HTML attachment was scrubbed... URL: From martin.makundi at koodaripalvelut.com Wed Jul 16 08:19:46 2014 From: martin.makundi at koodaripalvelut.com (Martin Makundi) Date: Wed, 16 Jul 2014 11:19:46 +0300 Subject: G1gc compaction algorithm In-Reply-To: References: <1405492250.2665.12.camel@cirrus> <1405493510.2665.17.camel@cirrus> <1405494459.2665.20.camel@cirrus> Message-ID: Hi! Here's today's log from today's first Full GC. 81.22.250.165/log Our app parameters: -server -XX:InitiatingHeapOccupancyPercent=0 -XX:+UnlockExperimentalVMOptions -XX:G1MixedGCLiveThresholdPercent=10 -XX:G1OldCSetRegionThresholdPercent=85 -Xss4096k -XX:MaxPermSize=512m -XX:G1HeapWastePercent=1 -XX:PermSize=512m -Xms20G -Xmx30G -Xnoclassgc -XX:-OmitStackTraceInFastThrow -XX:+UseNUMA -XX:+UseFastAccessorMethods -XX:ReservedCodeCacheSize=128m -XX:-UseStringCache -XX:+UseGCOverheadLimit -Duser.timezone=EET -XX:+UseCompressedOops -XX:+DisableExplicitGC -XX:+AggressiveOpts -XX:CMSInitiatingOccupancyFraction=90 -XX:+ParallelRefProcEnabled -XX:+UseAdaptiveSizePolicy -XX:MaxGCPauseMillis=75 -XX:G1MixedGCCountTarget=80 -XX:+UseG1GC -XX:G1HeapRegionSize=5M -XX:GCPauseIntervalMillis=1000 -XX:+PrintGCDetails -XX:+PrintHeapAtGC -XX:+PrintAdaptiveSizePolicy -XX:+PrintGCDateStamps -XX:+PrintGC -Xloggc:gc.log 2014-07-16 10:45 GMT+03:00 Pas : > > > > On Wed, Jul 16, 2014 at 9:11 AM, Martin Makundi < > martin.makundi at koodaripalvelut.com> wrote: > >> >>> On Wed, 2014-07-16 at 10:02 +0300, Martin Makundi wrote: >>> > > > +PrintHeapAtGC -XX:+PrintHeapAtGCExtended . >>> > >>> > Does it print only on Full GC or any GC? Any gc might be an overkill. >>> >>> Unfortunately any GC :/ >>> >>> > >>> > > You want that probably: >>> > > https://bugs.openjdk.java.net/browse/JDK-8038487 >>> > >>> > I cannot comment (didn't find a way to register as user) on that entry >>> > but you could add a comment on my behalf that "continuous parallel >>> > compaction" would be nice. >>> >>> I can add a comment, but what do you mean with "continuous parallel >>> compaction" if I may ask, and what exact purpose does it serve? >>> >>> The terms are too generic to me to discern any particular functionality. >>> >> >> I mean that currently compacting occurs on full gc meaning >> stop-the-world. Would be nice if compacting would occur in parallel while >> app is running and taking into account all timing targets such as >> MaxGCPauseMillis etc. >> > > In case of G1, compacting occurs in the mixed phase too. 
The documentation > (http://www.oracle.com/technetwork/articles/java/g1gc-1984535.html) is > unclear, but we can assume that even unreachable (dead) humongous objects > are collected in that phase (for example the description of this bug > https://bugs.openjdk.java.net/browse/JDK-8049332 implies that we're > correct), they're just not moved (evicted, as they basically always > represent one region). > > G1 does "continous paralell compaction" (the eviction runs is the > important difference between a young GC and a mixed GC, and it can and does > use multiple threads, so it's paralell). > > So, if G1 runs continously (because you set maxmilis and IHOP low), and > you still run into stop-the-world hammertime good old full GC mode, then > either your application is leaking memory (still has references to large > object graphs, or just a few large objects) or simply has a load too large > (so memory pressure is too high, because the working set is simply too big, > so G1 doesn't have any headroom to make the magic in the specified > maxmilis). > > It would be nice, of course, if it could tell us which one is happening, > but currently the best practice is to try to simulate different loads. > > >> >> ** >> Martin >> >>> >>> Thanks, >>> Thomas >>> >>> >>> > Pas > -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomas.schatzl at oracle.com Wed Jul 16 12:00:58 2014 From: thomas.schatzl at oracle.com (Thomas Schatzl) Date: Wed, 16 Jul 2014 14:00:58 +0200 Subject: G1gc compaction algorithm In-Reply-To: References: <1405492250.2665.12.camel@cirrus> <1405493510.2665.17.camel@cirrus> <1405494459.2665.20.camel@cirrus> Message-ID: <1405512058.2665.42.camel@cirrus> Hi, On Wed, 2014-07-16 at 09:45 +0200, Pas wrote: > On Wed, Jul 16, 2014 at 9:11 AM, Martin Makundi > wrote: > I can add a comment, but what do you mean with > "continuous parallel > compaction" if I may ask, and what exact purpose does > it serve? > The terms are too generic to me to discern any > particular functionality. > I mean that currently compacting occurs on full gc meaning > stop-the-world. Would be nice if compacting would occur in > parallel while app is running and taking into account all > timing targets such as MaxGCPauseMillis etc. You mean in-place space reclamation during young gc as it occurs during full gc. Generally, as long as there are free regions, the current scheme of evacuating into other areas is sufficient and simpler (read: faster). This does not say that we won't think about it in the future if it is useful (e.g. to recover more nicely from evacuation failures). I agree with Yaoshengzhe that JDK-8038487 is what you should look out for. About the 60-70% occupied heap - that the value might be so low because of fragmentation at the end of humongous objects; see https://bugs.openjdk.java.net/browse/JDK-8031381 for a discussion. However since full gc can reclaim enough space to fit these objects, this does not seem to be the case. This may also mean that these large objects are simply very short-living if the heap after full gc dramatically decreases, so something like JDK-8027959 will help. The most proper solution for your case can only be found out by PrintHeapAtGC/Extended (at every GC) or G1PrintRegionLivenessInfo (at end of every marking) output. If you have lots of arrays that just straggle a region boundary, and so are basically occupying twice their size, an alternative to get back some memory would be to size the arrays used slightly smaller. 
> In case of G1, compacting occurs in the mixed phase too. The > documentation > (http://www.oracle.com/technetwork/articles/java/g1gc-1984535.html) is > unclear, but we can assume that even unreachable (dead) humongous > objects are collected in that phase (for example the description of No. At the moment only marking or full gc clears unreachable (dead) humongous objects. Only JDK-8027959 (which is out for review) removes this restriction. There are some restrictions to this at this time, but they they are purely due to cost/benefit tradeoffs. They are sort-of logically assigned to to the old generation. Which means that humongous region treatment breaks pure generational thinking with that change. > this bug https://bugs.openjdk.java.net/browse/JDK-8049332 implies > that we're correct), they're just not moved (evicted, as they > basically always represent one region). JDK-8049332 only mentions full GCs so I cannot follow that line of thought. Thanks, Thomas From martin.makundi at koodaripalvelut.com Wed Jul 16 19:03:43 2014 From: martin.makundi at koodaripalvelut.com (Martin Makundi) Date: Wed, 16 Jul 2014 22:03:43 +0300 Subject: G1gc compaction algorithm In-Reply-To: <1405512058.2665.42.camel@cirrus> References: <1405492250.2665.12.camel@cirrus> <1405493510.2665.17.camel@cirrus> <1405494459.2665.20.camel@cirrus> <1405512058.2665.42.camel@cirrus> Message-ID: Hi! Now I get lots of huge drops: 1. [Times: user=2.47 sys=0.02, real=0.29 secs] 5775.731: [G1Ergonomics (Heap Sizing) attempt heap expansion, reason: allocation request failed, allocation request: 160 bytes] 5775.731: [G1Ergonomics (Heap Sizing) expand the heap, requested expansion amount: 4194304 bytes, attempted expansion amount: 4194304 bytes] 5775.731: [G1Ergonomics (Heap Sizing) did not expand the heap, reason: heap expansion operation failed] {Heap before GC invocations=711 (full 0): garbage-first heap total 31457280K, used 31077261K [0x00007f9d8c000000, 0x00007fa50c000000, 0x00007fa50c000000) region size 4096K, 0 young (0K), 0 survivors (0K) compacting perm gen total 524288K, used 159575K [0x00007fa50c000000, 0x00007fa52c000000, 0x00007fa52c000000) the space 524288K, 30% used [0x00007fa50c000000, 0x00007fa515bd5fe8, 0x00007fa515bd6000, 0x00007fa52c000000) No shared spaces configured. 2014-07-16T10:41:15.638+0300: 5775.731: [Full GC 29G->12G(30G), 47.1685170 secs] 2. 
[Times: user=3.24 sys=0.04, real=0.32 secs] 9555.819: [G1Ergonomics (Heap Sizing) attempt heap expansion, reason: humongous allocation request failed, allocation request: 22749376 bytes] 9555.819: [G1Ergonomics (Heap Sizing) expand the heap, requested expansion amount: 25165824 bytes, attempted expansion amount: 25165824 bytes] 9555.819: [G1Ergonomics (Heap Sizing) did not expand the heap, reason: heap expansion operation failed] 9555.819: [G1Ergonomics (Heap Sizing) attempt heap expansion, reason: humongous allocation request failed, allocation request: 22749376 bytes] 9555.819: [G1Ergonomics (Heap Sizing) expand the heap, requested expansion amount: 25165824 bytes, attempted expansion amount: 25165824 bytes] 9555.819: [G1Ergonomics (Heap Sizing) did not expand the heap, reason: heap expansion operation failed] 9555.819: [G1Ergonomics (Heap Sizing) attempt heap expansion, reason: allocation request failed, allocation request: 22749376 bytes] 9555.819: [G1Ergonomics (Heap Sizing) expand the heap, requested expansion amount: 22749376 bytes, attempted expansion amount: 25165824 bytes] 9555.819: [G1Ergonomics (Heap Sizing) did not expand the heap, reason: heap expansion operation failed] {Heap before GC invocations=1035 (full 1): garbage-first heap total 31457280K, used 28101287K [0x00007f9d8c000000, 0x00007fa50c000000, 0x00007fa50c000000) region size 4096K, 31 young (126976K), 31 survivors (126976K) compacting perm gen total 524288K, used 166022K [0x00007fa50c000000, 0x00007fa52c000000, 0x00007fa52c000000) the space 524288K, 31% used [0x00007fa50c000000, 0x00007fa516221960, 0x00007fa516221a00, 0x00007fa52c000000) No shared spaces configured. 2014-07-16T11:44:15.727+0300: 9555.819: [Full GC 26G->13G(30G), 50.5019980 secs] 3. [Times: user=4.33 sys=0.06, real=0.48 secs] 14858.255: [G1Ergonomics (Heap Sizing) attempt heap expansion, reason: allocation request failed, allocation request: 24 bytes] 14858.255: [G1Ergonomics (Heap Sizing) expand the heap, requested expansion amount: 4194304 bytes, attempted expansion amount: 4194304 bytes] 14858.255: [G1Ergonomics (Heap Sizing) did not expand the heap, reason: heap expansion operation failed] {Heap before GC invocations=1422 (full 2): garbage-first heap total 31457280K, used 31174168K [0x00007f9d8c000000, 0x00007fa50c000000, 0x00007fa50c000000) region size 4096K, 0 young (0K), 0 survivors (0K) compacting perm gen total 524288K, used 176349K [0x00007fa50c000000, 0x00007fa52c000000, 0x00007fa52c000000) the space 524288K, 33% used [0x00007fa50c000000, 0x00007fa516c37428, 0x00007fa516c37600, 0x00007fa52c000000) No shared spaces configured. 2014-07-16T13:12:38.163+0300: 14858.255: [Full GC 29G->10G(30G), 41.2695750 secs] Is there some parameter I can tune to make the phases preceding Full GC to already do most of this cleaning (and thus avoid Full GC) ? From my perspective it looks like the GC has been very lazy and full gc is needed to do the job right... 
My parameters: -server -XX:InitiatingHeapOccupancyPercent=0 -XX:+UnlockExperimentalVMOptions -XX:G1MixedGCLiveThresholdPercent=10 -XX:G1OldCSetRegionThresholdPercent=85 -Xss4096k -XX:MaxPermSize=512m -XX:G1HeapWastePercent=1 -XX:PermSize=512m -Xms20G -Xmx30G -Xnoclassgc -XX:-OmitStackTraceInFastThrow -XX:+UseNUMA -XX:+UseFastAccessorMethods -XX:ReservedCodeCacheSize=128m -XX:-UseStringCache -XX:+UseGCOverheadLimit -Duser.timezone=EET -XX:+UseCompressedOops -XX:+DisableExplicitGC -XX:+AggressiveOpts -XX:CMSInitiatingOccupancyFraction=90 -XX:+ParallelRefProcEnabled -XX:+UseAdaptiveSizePolicy -XX:MaxGCPauseMillis=75 -XX:G1MixedGCCountTarget=80 -XX:+UseG1GC -XX:G1HeapRegionSize=5M -XX:GCPauseIntervalMillis=1000 -XX:+PrintGCDetails -XX:+PrintHeapAtGC -XX:+PrintAdaptiveSizePolicy -XX:+PrintGCDateStamps -XX:+PrintGC -Xloggc:gc.log ** Martin 2014-07-16 15:00 GMT+03:00 Thomas Schatzl : > Hi, > > On Wed, 2014-07-16 at 09:45 +0200, Pas wrote: > > > > On Wed, Jul 16, 2014 at 9:11 AM, Martin Makundi > > wrote: > > > I can add a comment, but what do you mean with > > "continuous parallel > > compaction" if I may ask, and what exact purpose does > > it serve? > > > The terms are too generic to me to discern any > > particular functionality. > > > I mean that currently compacting occurs on full gc meaning > > stop-the-world. Would be nice if compacting would occur in > > parallel while app is running and taking into account all > > timing targets such as MaxGCPauseMillis etc. > > You mean in-place space reclamation during young gc as it occurs during > full gc. Generally, as long as there are free regions, the current > scheme of evacuating into other areas is sufficient and simpler (read: > faster). > This does not say that we won't think about it in the future if it is > useful (e.g. to recover more nicely from evacuation failures). > > I agree with Yaoshengzhe that JDK-8038487 is what you should look out > for. About the 60-70% occupied heap - that the value might be so low > because of fragmentation at the end of humongous objects; see > https://bugs.openjdk.java.net/browse/JDK-8031381 for a discussion. > However since full gc can reclaim enough space to fit these objects, > this does not seem to be the case. This may also mean that these large > objects are simply very short-living if the heap after full gc > dramatically decreases, so something like JDK-8027959 will help. > > The most proper solution for your case can only be found out by > PrintHeapAtGC/Extended (at every GC) or G1PrintRegionLivenessInfo (at > end of every marking) output. > > If you have lots of arrays that just straggle a region boundary, and so > are basically occupying twice their size, an alternative to get back > some memory would be to size the arrays used slightly smaller. > > > In case of G1, compacting occurs in the mixed phase too. The > > documentation > > (http://www.oracle.com/technetwork/articles/java/g1gc-1984535.html) is > > unclear, but we can assume that even unreachable (dead) humongous > > objects are collected in that phase (for example the description of > > No. At the moment only marking or full gc clears unreachable (dead) > humongous objects. > > Only JDK-8027959 (which is out for review) removes this restriction. > There are some restrictions to this at this time, but they they are > purely due to cost/benefit tradeoffs. > > They are sort-of logically assigned to to the old generation. Which > means that humongous region treatment breaks pure generational thinking > with that change. 
> > > this bug https://bugs.openjdk.java.net/browse/JDK-8049332 implies > > that we're correct), they're just not moved (evicted, as they > > basically always represent one region). > > JDK-8049332 only mentions full GCs so I cannot follow that line of > thought. > > Thanks, > Thomas > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From yu.zhang at oracle.com Wed Jul 16 21:32:02 2014 From: yu.zhang at oracle.com (Yu Zhang) Date: Wed, 16 Jul 2014 14:32:02 -0700 Subject: G1gc compaction algorithm In-Reply-To: References: <1405492250.2665.12.camel@cirrus> <1405493510.2665.17.camel@cirrus> <1405494459.2665.20.camel@cirrus> Message-ID: <53C6EF52.5060006@oracle.com> Martin, I took a look at the log you posted. Here are my observations: 1. at timestamp 125.893, the Eden size(300g) for the mixed gc is messed up. Probably a bug. 2. For most of the time, eden size is 1.4g, survivor 150m, the rest is old gen. I am not sure how much of the old gen is used for humongous allocations. But it seems there are some tunings you can try to help mixed gc: - old regions added to cset is 2-14 for mixed gc. Most of the time the reason is 'predicted time too high'. You can try either increase -XX:MaxGCPauseMillis to a higher value, or decrease -XX:G1MixedGCCountTarget (currently it is 80) so that more old regions can be added. - you have -XX:G1MixedGCLiveThresholdPercent=10 which means if the region is more than 10% full, it will not be added to cset. This is too low. 3. marking does clean ~1.5 space. 4. heap usage after full gc is 13g. You should be able to clean more if mixed gc is tuned better. Assuming most of the old regions are not humongous. To confirm that, you can add -XX:+G1PrintRegionLivenessInfo Thanks, Jenny On 7/16/2014 1:19 AM, Martin Makundi wrote: > Hi! > > Here's today's log from today's first Full GC. > > 81.22.250.165/log > > Our app parameters: > > -server -XX:InitiatingHeapOccupancyPercent=0 > -XX:+UnlockExperimentalVMOptions -XX:G1MixedGCLiveThresholdPercent=10 > -XX:G1OldCSetRegionThresholdPercent=85 -Xss4096k -XX:MaxPermSize=512m > -XX:G1HeapWastePercent=1 -XX:PermSize=512m -Xms20G -Xmx30G -Xnoclassgc > -XX:-OmitStackTraceInFastThrow -XX:+UseNUMA > -XX:+UseFastAccessorMethods -XX:ReservedCodeCacheSize=128m > -XX:-UseStringCache -XX:+UseGCOverheadLimit -Duser.timezone=EET > -XX:+UseCompressedOops -XX:+DisableExplicitGC -XX:+AggressiveOpts > -XX:CMSInitiatingOccupancyFraction=90 -XX:+ParallelRefProcEnabled > -XX:+UseAdaptiveSizePolicy -XX:MaxGCPauseMillis=75 > -XX:G1MixedGCCountTarget=80 -XX:+UseG1GC -XX:G1HeapRegionSize=5M > -XX:GCPauseIntervalMillis=1000 -XX:+PrintGCDetails -XX:+PrintHeapAtGC > -XX:+PrintAdaptiveSizePolicy -XX:+PrintGCDateStamps -XX:+PrintGC > -Xloggc:gc.log > > > 2014-07-16 10:45 GMT+03:00 Pas >: > > > > > On Wed, Jul 16, 2014 at 9:11 AM, Martin Makundi > > wrote: > > > On Wed, 2014-07-16 at 10:02 +0300, Martin Makundi wrote: > > > > +PrintHeapAtGC -XX:+PrintHeapAtGCExtended . > > > > Does it print only on Full GC or any GC? Any gc might be > an overkill. > > Unfortunately any GC :/ > > > > > > You want that probably: > > > https://bugs.openjdk.java.net/browse/JDK-8038487 > > > > I cannot comment (didn't find a way to register as user) > on that entry > > but you could add a comment on my behalf that > "continuous parallel > > compaction" would be nice. > > I can add a comment, but what do you mean with "continuous > parallel > compaction" if I may ask, and what exact purpose does it > serve? 
> > The terms are too generic to me to discern any particular > functionality. > > > I mean that currently compacting occurs on full gc meaning > stop-the-world. Would be nice if compacting would occur in > parallel while app is running and taking into account all > timing targets such as MaxGCPauseMillis etc. > > > In case of G1, compacting occurs in the mixed phase too. The > documentation > (http://www.oracle.com/technetwork/articles/java/g1gc-1984535.html) is > unclear, but we can assume that even unreachable (dead) humongous > objects are collected in that phase (for example the description > of this bug https://bugs.openjdk.java.net/browse/JDK-8049332 > implies that we're correct), they're just not moved (evicted, as > they basically always represent one region). > > G1 does "continous paralell compaction" (the eviction runs is the > important difference between a young GC and a mixed GC, and it can > and does use multiple threads, so it's paralell). > > So, if G1 runs continously (because you set maxmilis and IHOP > low), and you still run into stop-the-world hammertime good old > full GC mode, then either your application is leaking memory > (still has references to large object graphs, or just a few large > objects) or simply has a load too large (so memory pressure is too > high, because the working set is simply too big, so G1 doesn't > have any headroom to make the magic in the specified maxmilis). > > It would be nice, of course, if it could tell us which one is > happening, but currently the best practice is to try to simulate > different loads. > > > ** > Martin > > > Thanks, > Thomas > > > > Pas > > > > > _______________________________________________ > hotspot-gc-use mailing list > hotspot-gc-use at openjdk.java.net > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use -------------- next part -------------- An HTML attachment was scrubbed... URL: From martin.makundi at koodaripalvelut.com Thu Jul 17 01:46:24 2014 From: martin.makundi at koodaripalvelut.com (Martin Makundi) Date: Thu, 17 Jul 2014 04:46:24 +0300 Subject: G1gc compaction algorithm In-Reply-To: <53C6EF52.5060006@oracle.com> References: <1405492250.2665.12.camel@cirrus> <1405493510.2665.17.camel@cirrus> <1405494459.2665.20.camel@cirrus> <53C6EF52.5060006@oracle.com> Message-ID: > > > I took a look at the log you posted. Here are my observations: > 1. at timestamp 125.893, the Eden size(300g) for the mixed gc is messed > up. Probably a bug. > 2. For most of the time, eden size is 1.4g, survivor 150m, the rest is old > gen. I am not sure how much of the old gen is used for humongous > allocations. But it seems there are some tunings you can try to help mixed > gc: > - old regions added to cset is 2-14 for mixed gc. Most of the time the > reason is 'predicted time too high'. You can try either increase > -XX:MaxGCPauseMillis to a higher value, or decrease > -XX:G1MixedGCCountTarget (currently it is 80) so that more old regions can > be added. > Does it attempt to do any mixed gc if it cannot do all G1MixedGCCountTarget or is the value G1MixedGCCountTarget just an upper limit? If it just is an upper limit we could keep it at 80 or higher? > - you have -XX:G1MixedGCLiveThresholdPercent=10 which means if the > region is more than 10% full, it will not be added to cset. This is too > low. > Thanks, we thought this works the opposite, now switched to 90. > 3. marking does clean ~1.5 space. > 4. heap usage after full gc is 13g. You should be able to clean more if > mixed gc is tuned better. 
Assuming most of the old regions are not > humongous. To confirm that, you can add -XX:+G1PrintRegionLivenessInfo > Added this to logs, will get back at this with new results. ** Martin > > Thanks, > Jenny > > On 7/16/2014 1:19 AM, Martin Makundi wrote: > > Hi! > > Here's today's log from today's first Full GC. > > 81.22.250.165/log > > Our app parameters: > > -server -XX:InitiatingHeapOccupancyPercent=0 > -XX:+UnlockExperimentalVMOptions -XX:G1MixedGCLiveThresholdPercent=10 > -XX:G1OldCSetRegionThresholdPercent=85 -Xss4096k -XX:MaxPermSize=512m > -XX:G1HeapWastePercent=1 -XX:PermSize=512m -Xms20G -Xmx30G -Xnoclassgc > -XX:-OmitStackTraceInFastThrow -XX:+UseNUMA -XX:+UseFastAccessorMethods > -XX:ReservedCodeCacheSize=128m -XX:-UseStringCache -XX:+UseGCOverheadLimit > -Duser.timezone=EET -XX:+UseCompressedOops -XX:+DisableExplicitGC > -XX:+AggressiveOpts -XX:CMSInitiatingOccupancyFraction=90 > -XX:+ParallelRefProcEnabled -XX:+UseAdaptiveSizePolicy > -XX:MaxGCPauseMillis=75 -XX:G1MixedGCCountTarget=80 -XX:+UseG1GC > -XX:G1HeapRegionSize=5M -XX:GCPauseIntervalMillis=1000 -XX:+PrintGCDetails > -XX:+PrintHeapAtGC -XX:+PrintAdaptiveSizePolicy -XX:+PrintGCDateStamps > -XX:+PrintGC -Xloggc:gc.log > > > 2014-07-16 10:45 GMT+03:00 Pas : > >> >> >> >> On Wed, Jul 16, 2014 at 9:11 AM, Martin Makundi < >> martin.makundi at koodaripalvelut.com> wrote: >> >>> >>>> On Wed, 2014-07-16 at 10:02 +0300, Martin Makundi wrote: >>>> > > > +PrintHeapAtGC -XX:+PrintHeapAtGCExtended . >>>> > >>>> > Does it print only on Full GC or any GC? Any gc might be an overkill. >>>> >>>> Unfortunately any GC :/ >>>> >>>> > >>>> > > You want that probably: >>>> > > https://bugs.openjdk.java.net/browse/JDK-8038487 >>>> > >>>> > I cannot comment (didn't find a way to register as user) on that entry >>>> > but you could add a comment on my behalf that "continuous parallel >>>> > compaction" would be nice. >>>> >>>> I can add a comment, but what do you mean with "continuous parallel >>>> compaction" if I may ask, and what exact purpose does it serve? >>>> >>>> The terms are too generic to me to discern any particular functionality. >>>> >>> >>> I mean that currently compacting occurs on full gc meaning >>> stop-the-world. Would be nice if compacting would occur in parallel while >>> app is running and taking into account all timing targets such as >>> MaxGCPauseMillis etc. >>> >> >> In case of G1, compacting occurs in the mixed phase too. The >> documentation ( >> http://www.oracle.com/technetwork/articles/java/g1gc-1984535.html) is >> unclear, but we can assume that even unreachable (dead) humongous objects >> are collected in that phase (for example the description of this bug >> https://bugs.openjdk.java.net/browse/JDK-8049332 implies that we're >> correct), they're just not moved (evicted, as they basically always >> represent one region). >> >> G1 does "continous paralell compaction" (the eviction runs is the >> important difference between a young GC and a mixed GC, and it can and does >> use multiple threads, so it's paralell). >> >> So, if G1 runs continously (because you set maxmilis and IHOP low), and >> you still run into stop-the-world hammertime good old full GC mode, then >> either your application is leaking memory (still has references to large >> object graphs, or just a few large objects) or simply has a load too large >> (so memory pressure is too high, because the working set is simply too big, >> so G1 doesn't have any headroom to make the magic in the specified >> maxmilis). 
>> >> It would be nice, of course, if it could tell us which one is >> happening, but currently the best practice is to try to simulate different >> loads. >> >> >>> >>> ** >>> Martin >>> >>>> >>>> Thanks, >>>> Thomas >>>> >>>> >>>> >> Pas >> > > > > _______________________________________________ > hotspot-gc-use mailing listhotspot-gc-use at openjdk.java.nethttp://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From martin.makundi at koodaripalvelut.com Thu Jul 17 02:54:41 2014 From: martin.makundi at koodaripalvelut.com (Martin Makundi) Date: Thu, 17 Jul 2014 05:54:41 +0300 Subject: G1gc compaction algorithm In-Reply-To: References: <1405492250.2665.12.camel@cirrus> <1405493510.2665.17.camel@cirrus> <1405494459.2665.20.camel@cirrus> <53C6EF52.5060006@oracle.com> Message-ID: I get error: Improperly specified VM option 'G1PrintRegionLivenessInfo' This is with java 1.7 u 55 2014-07-17 4:46 GMT+03:00 Martin Makundi : > >> I took a look at the log you posted. Here are my observations: >> 1. at timestamp 125.893, the Eden size(300g) for the mixed gc is messed >> up. Probably a bug. >> 2. For most of the time, eden size is 1.4g, survivor 150m, the rest is >> old gen. I am not sure how much of the old gen is used for humongous >> allocations. But it seems there are some tunings you can try to help mixed >> gc: >> - old regions added to cset is 2-14 for mixed gc. Most of the time the >> reason is 'predicted time too high'. You can try either increase >> -XX:MaxGCPauseMillis to a higher value, or decrease >> -XX:G1MixedGCCountTarget (currently it is 80) so that more old regions can >> be added. >> > > Does it attempt to do any mixed gc if it cannot do > all G1MixedGCCountTarget or is the value G1MixedGCCountTarget just an upper > limit? If it just is an upper limit we could keep it at 80 or higher? > > >> - you have -XX:G1MixedGCLiveThresholdPercent=10 which means if the >> region is more than 10% full, it will not be added to cset. This is too >> low. >> > > Thanks, we thought this works the opposite, now switched to 90. > > >> 3. marking does clean ~1.5 space. >> 4. heap usage after full gc is 13g. You should be able to clean more if >> mixed gc is tuned better. Assuming most of the old regions are not >> humongous. To confirm that, you can add -XX:+G1PrintRegionLivenessInfo >> > > Added this to logs, will get back at this with new results. > > ** > Martin > >> >> Thanks, >> Jenny >> >> On 7/16/2014 1:19 AM, Martin Makundi wrote: >> >> Hi! >> >> Here's today's log from today's first Full GC. 
>> >> 81.22.250.165/log >> >> Our app parameters: >> >> -server -XX:InitiatingHeapOccupancyPercent=0 >> -XX:+UnlockExperimentalVMOptions -XX:G1MixedGCLiveThresholdPercent=10 >> -XX:G1OldCSetRegionThresholdPercent=85 -Xss4096k -XX:MaxPermSize=512m >> -XX:G1HeapWastePercent=1 -XX:PermSize=512m -Xms20G -Xmx30G -Xnoclassgc >> -XX:-OmitStackTraceInFastThrow -XX:+UseNUMA -XX:+UseFastAccessorMethods >> -XX:ReservedCodeCacheSize=128m -XX:-UseStringCache -XX:+UseGCOverheadLimit >> -Duser.timezone=EET -XX:+UseCompressedOops -XX:+DisableExplicitGC >> -XX:+AggressiveOpts -XX:CMSInitiatingOccupancyFraction=90 >> -XX:+ParallelRefProcEnabled -XX:+UseAdaptiveSizePolicy >> -XX:MaxGCPauseMillis=75 -XX:G1MixedGCCountTarget=80 -XX:+UseG1GC >> -XX:G1HeapRegionSize=5M -XX:GCPauseIntervalMillis=1000 -XX:+PrintGCDetails >> -XX:+PrintHeapAtGC -XX:+PrintAdaptiveSizePolicy -XX:+PrintGCDateStamps >> -XX:+PrintGC -Xloggc:gc.log >> >> >> 2014-07-16 10:45 GMT+03:00 Pas : >> >>> >>> >>> >>> On Wed, Jul 16, 2014 at 9:11 AM, Martin Makundi < >>> martin.makundi at koodaripalvelut.com> wrote: >>> >>>> >>>>> On Wed, 2014-07-16 at 10:02 +0300, Martin Makundi wrote: >>>>> > > > +PrintHeapAtGC -XX:+PrintHeapAtGCExtended . >>>>> > >>>>> > Does it print only on Full GC or any GC? Any gc might be an overkill. >>>>> >>>>> Unfortunately any GC :/ >>>>> >>>>> > >>>>> > > You want that probably: >>>>> > > https://bugs.openjdk.java.net/browse/JDK-8038487 >>>>> > >>>>> > I cannot comment (didn't find a way to register as user) on that >>>>> entry >>>>> > but you could add a comment on my behalf that "continuous parallel >>>>> > compaction" would be nice. >>>>> >>>>> I can add a comment, but what do you mean with "continuous parallel >>>>> compaction" if I may ask, and what exact purpose does it serve? >>>>> >>>>> The terms are too generic to me to discern any particular >>>>> functionality. >>>>> >>>> >>>> I mean that currently compacting occurs on full gc meaning >>>> stop-the-world. Would be nice if compacting would occur in parallel while >>>> app is running and taking into account all timing targets such as >>>> MaxGCPauseMillis etc. >>>> >>> >>> In case of G1, compacting occurs in the mixed phase too. The >>> documentation ( >>> http://www.oracle.com/technetwork/articles/java/g1gc-1984535.html) is >>> unclear, but we can assume that even unreachable (dead) humongous objects >>> are collected in that phase (for example the description of this bug >>> https://bugs.openjdk.java.net/browse/JDK-8049332 implies that we're >>> correct), they're just not moved (evicted, as they basically always >>> represent one region). >>> >>> G1 does "continous paralell compaction" (the eviction runs is the >>> important difference between a young GC and a mixed GC, and it can and does >>> use multiple threads, so it's paralell). >>> >>> So, if G1 runs continously (because you set maxmilis and IHOP low), >>> and you still run into stop-the-world hammertime good old full GC mode, >>> then either your application is leaking memory (still has references to >>> large object graphs, or just a few large objects) or simply has a load too >>> large (so memory pressure is too high, because the working set is simply >>> too big, so G1 doesn't have any headroom to make the magic in the specified >>> maxmilis). >>> >>> It would be nice, of course, if it could tell us which one is >>> happening, but currently the best practice is to try to simulate different >>> loads. 
>>> >>> >>>> >>>> ** >>>> Martin >>>> >>>>> >>>>> Thanks, >>>>> Thomas >>>>> >>>>> >>>>> >>> Pas >>> >> >> >> >> _______________________________________________ >> hotspot-gc-use mailing listhotspot-gc-use at openjdk.java.nethttp://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use >> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From yu.zhang at oracle.com Thu Jul 17 03:07:31 2014 From: yu.zhang at oracle.com (Yu Zhang) Date: Wed, 16 Jul 2014 20:07:31 -0700 Subject: G1gc compaction algorithm In-Reply-To: References: <1405492250.2665.12.camel@cirrus> <1405493510.2665.17.camel@cirrus> <1405494459.2665.20.camel@cirrus> <53C6EF52.5060006@oracle.com> Message-ID: <53C73DF3.5070604@oracle.com> This is a diagnostic parameter, you need to apply with -XX:+UnlockDiagnosticVMOptions -XX:+G1PrintRegionLivenessInfo Thanks, Jenny On 7/16/2014 7:54 PM, Martin Makundi wrote: > I get error: > > Improperly specified VM option 'G1PrintRegionLivenessInfo' > > This is with java 1.7 u 55 > > > 2014-07-17 4:46 GMT+03:00 Martin Makundi > >: > > > I took a look at the log you posted. Here are my observations: > 1. at timestamp 125.893, the Eden size(300g) for the mixed gc > is messed up. Probably a bug. > 2. For most of the time, eden size is 1.4g, survivor 150m, the > rest is old gen. I am not sure how much of the old gen is > used for humongous allocations. But it seems there are some > tunings you can try to help mixed gc: > - old regions added to cset is 2-14 for mixed gc. Most of > the time the reason is 'predicted time too high'. You can try > either increase -XX:MaxGCPauseMillis to a higher value, or > decrease -XX:G1MixedGCCountTarget (currently it is 80) so that > more old regions can be added. > > > Does it attempt to do any mixed gc if it cannot do > all G1MixedGCCountTarget or is the value G1MixedGCCountTarget just > an upper limit? If it just is an upper limit we could keep it at > 80 or higher? > > - you have -XX:G1MixedGCLiveThresholdPercent=10 which means > if the region is more than 10% full, it will not be added to > cset. This is too low. > > > Thanks, we thought this works the opposite, now switched to 90. > > 3. marking does clean ~1.5 space. > 4. heap usage after full gc is 13g. You should be able to > clean more if mixed gc is tuned better. Assuming most of the > old regions are not humongous. To confirm that, you can add > -XX:+G1PrintRegionLivenessInfo > > > Added this to logs, will get back at this with new results. > > ** > Martin > > > Thanks, > Jenny > > On 7/16/2014 1:19 AM, Martin Makundi wrote: >> Hi! >> >> Here's today's log from today's first Full GC. 
>> >> 81.22.250.165/log >> >> Our app parameters: >> >> -server -XX:InitiatingHeapOccupancyPercent=0 >> -XX:+UnlockExperimentalVMOptions >> -XX:G1MixedGCLiveThresholdPercent=10 >> -XX:G1OldCSetRegionThresholdPercent=85 -Xss4096k >> -XX:MaxPermSize=512m -XX:G1HeapWastePercent=1 >> -XX:PermSize=512m -Xms20G -Xmx30G -Xnoclassgc >> -XX:-OmitStackTraceInFastThrow -XX:+UseNUMA >> -XX:+UseFastAccessorMethods -XX:ReservedCodeCacheSize=128m >> -XX:-UseStringCache -XX:+UseGCOverheadLimit >> -Duser.timezone=EET -XX:+UseCompressedOops >> -XX:+DisableExplicitGC -XX:+AggressiveOpts >> -XX:CMSInitiatingOccupancyFraction=90 >> -XX:+ParallelRefProcEnabled -XX:+UseAdaptiveSizePolicy >> -XX:MaxGCPauseMillis=75 -XX:G1MixedGCCountTarget=80 >> -XX:+UseG1GC -XX:G1HeapRegionSize=5M >> -XX:GCPauseIntervalMillis=1000 -XX:+PrintGCDetails >> -XX:+PrintHeapAtGC -XX:+PrintAdaptiveSizePolicy >> -XX:+PrintGCDateStamps -XX:+PrintGC -Xloggc:gc.log >> >> >> 2014-07-16 10:45 GMT+03:00 Pas > >: >> >> >> >> >> On Wed, Jul 16, 2014 at 9:11 AM, Martin Makundi >> > > wrote: >> >> >> On Wed, 2014-07-16 at 10:02 +0300, Martin Makundi >> wrote: >> > > > +PrintHeapAtGC -XX:+PrintHeapAtGCExtended . >> > >> > Does it print only on Full GC or any GC? Any gc >> might be an overkill. >> >> Unfortunately any GC :/ >> >> > >> > > You want that probably: >> > > https://bugs.openjdk.java.net/browse/JDK-8038487 >> > >> > I cannot comment (didn't find a way to register >> as user) on that entry >> > but you could add a comment on my behalf that >> "continuous parallel >> > compaction" would be nice. >> >> I can add a comment, but what do you mean with >> "continuous parallel >> compaction" if I may ask, and what exact purpose >> does it serve? >> >> The terms are too generic to me to discern any >> particular functionality. >> >> >> I mean that currently compacting occurs on full gc >> meaning stop-the-world. Would be nice if compacting >> would occur in parallel while app is running and >> taking into account all timing targets such as >> MaxGCPauseMillis etc. >> >> >> In case of G1, compacting occurs in the mixed phase too. >> The documentation >> (http://www.oracle.com/technetwork/articles/java/g1gc-1984535.html) >> is unclear, but we can assume that even unreachable >> (dead) humongous objects are collected in that phase (for >> example the description of this bug >> https://bugs.openjdk.java.net/browse/JDK-8049332 implies >> that we're correct), they're just not moved (evicted, as >> they basically always represent one region). >> >> G1 does "continous paralell compaction" (the eviction >> runs is the important difference between a young GC and a >> mixed GC, and it can and does use multiple threads, so >> it's paralell). >> >> So, if G1 runs continously (because you set maxmilis and >> IHOP low), and you still run into stop-the-world >> hammertime good old full GC mode, then either your >> application is leaking memory (still has references to >> large object graphs, or just a few large objects) or >> simply has a load too large (so memory pressure is too >> high, because the working set is simply too big, so G1 >> doesn't have any headroom to make the magic in the >> specified maxmilis). >> >> It would be nice, of course, if it could tell us which >> one is happening, but currently the best practice is to >> try to simulate different loads. 
>> >> >> ** >> Martin >> >> >> Thanks, >> Thomas >> >> >> >> Pas >> >> >> >> >> _______________________________________________ >> hotspot-gc-use mailing list >> hotspot-gc-use at openjdk.java.net >> http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From yu.zhang at oracle.com Thu Jul 17 03:13:19 2014 From: yu.zhang at oracle.com (Yu Zhang) Date: Wed, 16 Jul 2014 20:13:19 -0700 Subject: G1gc compaction algorithm In-Reply-To: References: <1405492250.2665.12.camel@cirrus> <1405493510.2665.17.camel@cirrus> <1405494459.2665.20.camel@cirrus> <53C6EF52.5060006@oracle.com> Message-ID: <53C73F4F.3050403@oracle.com> It will first add old regions if the estimated time is under the MaxGCPauseMilli. If the estimated time is higher than MaxGCPauseMilli, it will add the minimum decided by #candidate-regions/MixedGCCountTarget. If you set MixedGCCountTarget too high, the minimum will be too low. In your case, it is 2. Thanks, Jenny On 7/16/2014 6:46 PM, Martin Makundi wrote: > > 2. For most of the time, eden size is 1.4g, survivor 150m, the > rest is old gen. I am not sure how much of the old gen is used > for humongous allocations. But it seems there are some tunings > you can try to help mixed gc: > - old regions added to cset is 2-14 for mixed gc. Most of the > time the reason is 'predicted time too high'. You can try either > increase -XX:MaxGCPauseMillis to a higher value, or decrease > -XX:G1MixedGCCountTarget (currently it is 80) so that more old > regions can be added. > > > Does it attempt to do any mixed gc if it cannot do > all G1MixedGCCountTarget or is the value G1MixedGCCountTarget just an > upper limit? If it just is an upper limit we could keep it at 80 or > higher? -------------- next part -------------- An HTML attachment was scrubbed... URL: From martin.makundi at koodaripalvelut.com Thu Jul 17 04:14:33 2014 From: martin.makundi at koodaripalvelut.com (Martin Makundi) Date: Thu, 17 Jul 2014 07:14:33 +0300 Subject: G1gc compaction algorithm In-Reply-To: <53C73F4F.3050403@oracle.com> References: <1405492250.2665.12.camel@cirrus> <1405493510.2665.17.camel@cirrus> <1405494459.2665.20.camel@cirrus> <53C6EF52.5060006@oracle.com> <53C73F4F.3050403@oracle.com> Message-ID: > This is a diagnostic parameter, you need to apply with > -XX:+UnlockDiagnosticVMOptions -XX:+G1PrintRegionLivenessInfo Thanks, will try that. It will first add old regions if the estimated time is under the > MaxGCPauseMilli. If the estimated time is higher than MaxGCPauseMilli, it > will add the minimum decided by #candidate-regions/MixedGCCountTarget. If > you set MixedGCCountTarget too high, the minimum will be too low. In your > case, it is 2. > Hmm.. what is the logic behind this candidate-regions/MixedGCCountTarget? Is there no way to tell the gc to estimate the max number of regions it could maybe achieve in the time available and do that instead of 2? ** Martin > Thanks, > Jenny > > On 7/16/2014 6:46 PM, Martin Makundi wrote: > > 2. For most of the time, eden size is 1.4g, survivor 150m, the rest is >> old gen. I am not sure how much of the old gen is used for humongous >> allocations. But it seems there are some tunings you can try to help mixed >> gc: >> - old regions added to cset is 2-14 for mixed gc. Most of the time the >> reason is 'predicted time too high'. 
You can try either increase >> -XX:MaxGCPauseMillis to a higher value, or decrease >> -XX:G1MixedGCCountTarget (currently it is 80) so that more old regions can >> be added. >> > > Does it attempt to do any mixed gc if it cannot do > all G1MixedGCCountTarget or is the value G1MixedGCCountTarget just an upper > limit? If it just is an upper limit we could keep it at 80 or higher? > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomas.schatzl at oracle.com Thu Jul 17 08:02:18 2014 From: thomas.schatzl at oracle.com (Thomas Schatzl) Date: Thu, 17 Jul 2014 10:02:18 +0200 Subject: G1gc compaction algorithm In-Reply-To: References: <1405492250.2665.12.camel@cirrus> <1405493510.2665.17.camel@cirrus> <1405494459.2665.20.camel@cirrus> Message-ID: <1405584138.2684.32.camel@cirrus> Hi, some parameter analysis: On Wed, 2014-07-16 at 11:19 +0300, Martin Makundi wrote: > Hi! > > Here's today's log from today's first Full GC. > > 81.22.250.165/log (From the logs: after full gc we have 12g occupied space - let's assume that is the live set size for now). > Our app parameters: > > -server -XX:InitiatingHeapOccupancyPercent=0 -XX: > +UnlockExperimentalVMOptions -XX:G1MixedGCLiveThresholdPercent=10 G1MixedGCLiveThresholdPercent is the upper threshold for determining whether old gen regions can be collected. I.e. only old regions less than 10% occupied are collected ever. Which means, you are going for an "expected" heap size of 120G (12G * 100 / G1MixedGCLiveThresholdPercent) - which obviously does not fit into 30G of heap. The result is inevitable full gcs. (The documentation can be read both ways I think) Just setting the default (65) will give a more reasonable "expected" heap size. (~20G) Depending on length and amount of humongous allocation bursts, you also want to increase -XX:InitiatingHeapOccupancyPercent to something larger than G1MixedGCLiveThresholdPercent, otherwise concurrent marking will run all the time. You may also need to increase G1MixedGCLiveThresholdPercent if this buffer of 10G is too small. > -XX:G1OldCSetRegionThresholdPercent=85 -Xss4096k -XX:MaxPermSize=512m > -XX:G1HeapWastePercent=1 -XX:PermSize=512m -Xms20G -Xmx30G -Xnoclassgc -Xnoclassgc disables all class unloading, even during full gc. If you notice increasing "Ext root scan time" over time, this setting is set wrongly. Note that 7u55 can only do class unloading at full gc. Only 8u40 and later will also do this at concurrent mark. > -XX:-OmitStackTraceInFastThrow -XX:+UseNUMA -XX: > +UseFastAccessorMethods -XX:ReservedCodeCacheSize=128m > -XX:-UseStringCache -XX:+UseGCOverheadLimit -Duser.timezone=EET -XX: > +UseCompressedOops -XX:+DisableExplicitGC -XX:+AggressiveOpts > -XX:CMSInitiatingOccupancyFraction=90 -XX:+ParallelRefProcEnabled -XX: You can remove CMSInitiatingOccupancyFraction. It does not have an effect with G1. > +UseAdaptiveSizePolicy -XX:MaxGCPauseMillis=75 > -XX:G1MixedGCCountTarget=80 -XX:+UseG1GC -XX:G1HeapRegionSize=5M G1HeapRegionSize must be a power of two. I think G1 will either round this to either 4M or 8M - check with -XX:+PrintFlagsFinal. > -XX:GCPauseIntervalMillis=1000 -XX:+PrintGCDetails -XX:+PrintHeapAtGC MaxGCPauseMillis=75 in combination with GCPauseIntervalMillis=1000 seems to be a tough target, at least for 7uX. Does your application really need such a low pause time? It may be achievable. >From the log, already collecting the young generation breaks that pause time goal. Try -XX:G1NewSizePercent=1 to allow smaller young generation sizes. 
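To put numbers on the estimate above (a back-of-the-envelope sketch only; the 12G figure is the post-full-GC occupancy from the log, and this is not a computation the VM itself performs):

    // Implied heap size if only regions at most `threshold` percent live are
    // ever mixed-GC candidates: liveSet * 100 / threshold.
    public class MixedGcHeadroom {
        public static void main(String[] args) {
            double liveSetGb = 12.0;                    // occupancy after full GC, from the log
            for (int threshold : new int[] {10, 65}) {  // current setting vs. default
                double impliedHeapGb = liveSetGb * 100.0 / threshold;
                System.out.printf("G1MixedGCLiveThresholdPercent=%d -> ~%.0f GB implied%n",
                        threshold, impliedHeapGb);
            }
        }
    }

With the current value of 10 that comes out at roughly 120 GB, far above the 30 GB heap; with the default of 65 it stays under 20 GB.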
> -XX:+PrintAdaptiveSizePolicy -XX:+PrintGCDateStamps -XX:+PrintGC > -Xloggc:gc.log Thanks, Thomas From martin.makundi at koodaripalvelut.com Thu Jul 17 10:31:16 2014 From: martin.makundi at koodaripalvelut.com (Martin Makundi) Date: Thu, 17 Jul 2014 13:31:16 +0300 Subject: G1gc compaction algorithm In-Reply-To: <1405584138.2684.32.camel@cirrus> References: <1405492250.2665.12.camel@cirrus> <1405493510.2665.17.camel@cirrus> <1405494459.2665.20.camel@cirrus> <1405584138.2684.32.camel@cirrus> Message-ID: > > > > -server -XX:InitiatingHeapOccupancyPercent=0 -XX: > > +UnlockExperimentalVMOptions -XX:G1MixedGCLiveThresholdPercent=10 > > G1MixedGCLiveThresholdPercent is the upper threshold for determining > whether old gen regions can be collected. I.e. only old regions less > than 10% occupied are collected ever. > > Which means, you are going for an "expected" heap size of 120G (12G * > 100 / G1MixedGCLiveThresholdPercent) - which obviously does not fit into > 30G of heap. The result is inevitable full gcs. > > (The documentation can be read both ways I think) > Thanks, we are trying value 90 today, there are a couple of full gc's already. > Just setting the default (65) will give a more reasonable "expected" > heap size. (~20G) > > Depending on length and amount of humongous allocation bursts, you also > want to increase -XX:InitiatingHeapOccupancyPercent to something larger > than G1MixedGCLiveThresholdPercent, otherwise concurrent marking will > run all the time. Is it a problem that concurrent marking runs all the time? It's bit unclear what it means, our goal is to force gc earn its keep all the time, no idle time. However, we are unaware if this low G1MixedGCLiveThresholdPercent will affect adversely some other gc features. > You may also need to increase G1MixedGCLiveThresholdPercent if this buffer of 10G is too small. > > > -XX:G1OldCSetRegionThresholdPercent=85 -Xss4096k -XX:MaxPermSize=512m > > -XX:G1HeapWastePercent=1 -XX:PermSize=512m -Xms20G -Xmx30G -Xnoclassgc > > -Xnoclassgc disables all class unloading, even during full gc. Class unloading halted the whole system for several minutes every time classes unload so we disabled it. It occurred very often and made the system practically unusable, so we disabled it completely. > If you > notice increasing "Ext root scan time" over time, this setting is set > wrongly. Note that 7u55 can only do class unloading at full gc. Only > 8u40 and later will also do this at concurrent mark. > > -XX:-OmitStackTraceInFastThrow -XX:+UseNUMA -XX: > > +UseFastAccessorMethods -XX:ReservedCodeCacheSize=128m > > -XX:-UseStringCache -XX:+UseGCOverheadLimit -Duser.timezone=EET -XX: > > +UseCompressedOops -XX:+DisableExplicitGC -XX:+AggressiveOpts > > -XX:CMSInitiatingOccupancyFraction=90 -XX:+ParallelRefProcEnabled -XX: > > You can remove CMSInitiatingOccupancyFraction. It does not have an > effect with G1. > Ok, good to know. > > > +UseAdaptiveSizePolicy -XX:MaxGCPauseMillis=75 > > -XX:G1MixedGCCountTarget=80 -XX:+UseG1GC -XX:G1HeapRegionSize=5M > > G1HeapRegionSize must be a power of two. I think G1 will either round > this to either 4M or 8M - check with -XX:+PrintFlagsFinal. > Ok. > > > -XX:GCPauseIntervalMillis=1000 -XX:+PrintGCDetails -XX:+PrintHeapAtGC > > MaxGCPauseMillis=75 in combination with GCPauseIntervalMillis=1000 seems > to be a tough target, at least for 7uX. Does your application really > need such a low pause time? It may be achievable. 
> It's a web application so basically user will experience significant inconvenience in recurringly over 100-200 ms pauses. I am not sure how this target time translates to actual user experience so 75 is somewhat a safe choice. > From the log, already collecting the young generation breaks that pause > time goal. Try -XX:G1NewSizePercent=1 to allow smaller young generation > sizes. > How does this work together with the adaptive sizing? ** Martin > > > -XX:+PrintAdaptiveSizePolicy -XX:+PrintGCDateStamps -XX:+PrintGC > > -Xloggc:gc.log > > Thanks, > Thomas > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomas.schatzl at oracle.com Thu Jul 17 11:16:58 2014 From: thomas.schatzl at oracle.com (Thomas Schatzl) Date: Thu, 17 Jul 2014 13:16:58 +0200 Subject: G1gc compaction algorithm In-Reply-To: References: <1405492250.2665.12.camel@cirrus> <1405493510.2665.17.camel@cirrus> <1405494459.2665.20.camel@cirrus> <1405584138.2684.32.camel@cirrus> Message-ID: <1405595818.2678.23.camel@cirrus> Hi, On Thu, 2014-07-17 at 13:31 +0300, Martin Makundi wrote: > > > > -server -XX:InitiatingHeapOccupancyPercent=0 -XX: > > +UnlockExperimentalVMOptions > -XX:G1MixedGCLiveThresholdPercent=10 > Just setting the default (65) will give a more reasonable > "expected" > heap size. (~20G) > > Depending on length and amount of humongous allocation bursts, > you also > want to increase -XX:InitiatingHeapOccupancyPercent to > something larger > than G1MixedGCLiveThresholdPercent, otherwise concurrent > marking will > run all the time. > > > Is it a problem that concurrent marking runs all the time? It's bit > unclear what it means, our goal is to force gc earn its keep all the > time, no idle time. However, we are unaware if this low > G1MixedGCLiveThresholdPercent will affect adversely some other gc > features. Constant concurrent marking decreases throughput. If you do not mind the performance loss it can be acceptable. If you evacuate a region with higher occupancy, you need to copy around more data. However, G1 will automatically take less regions to meet the pause time goal. It may need more mixed gcs in sequence to reclaim the same amount of memory (it will always take the ones which are "easiest" first, which mostly coincides to ones that are less occupied). *Up to* G1MixedGCCountTarget mixed gcs iirc. > You may also need to increase > G1MixedGCLiveThresholdPercent if this buffer of 10G is too > small. > > > -XX:G1OldCSetRegionThresholdPercent=85 -Xss4096k > -XX:MaxPermSize=512m > > -XX:G1HeapWastePercent=1 -XX:PermSize=512m -Xms20G -Xmx30G > -Xnoclassgc > > -Xnoclassgc disables all class unloading, even during full > gc. > > Class unloading halted the whole system for several minutes every time > classes unload so we disabled it. It occurred very often and made the > system practically unusable, so we disabled it completely. Okay. That seems extreme though. > If you > notice increasing "Ext root scan time" over time, this setting > is set > wrongly. Note that 7u55 can only do class unloading at full > gc. Only > 8u40 and later will also do this at concurrent mark. [...] > > > -XX:GCPauseIntervalMillis=1000 -XX:+PrintGCDetails -XX: > +PrintHeapAtGC > > > MaxGCPauseMillis=75 in combination with > GCPauseIntervalMillis=1000 seems > to be a tough target, at least for 7uX. Does your application > really > need such a low pause time? It may be achievable. 
> > > It's a web application so basically user will experience significant > inconvenience in recurringly over 100-200 ms pauses. I am not sure how > this target time translates to actual user experience so 75 is > somewhat a safe choice. > > From the log, already collecting the young generation breaks > that pause > time goal. Try -XX:G1NewSizePercent=1 to allow smaller young > generation > sizes. > > > How does this work together with the adaptive sizing? *Roughly*, the next collection's young gen size will be (where X is the current heap size) MAX(G1NewSizePercent * X, MIN(X / SurvivorRatio, f(pause-time))) I.e. G1NewSizePercent bounds the minimum young gen size. So if the second term is already very low, you might end up with a too large young gen as in that g1 is not able to collect the young gen within the pause time already. Or there is not enough time in the time goal to collect a reasonable amount of old gen regions. Again there are some overrides for the minimum amount of old gen regions to collect iirc. Hth, Thomas From yu.zhang at oracle.com Thu Jul 17 19:13:51 2014 From: yu.zhang at oracle.com (Yu Zhang) Date: Thu, 17 Jul 2014 12:13:51 -0700 Subject: G1gc compaction algorithm In-Reply-To: References: <1405492250.2665.12.camel@cirrus> <1405493510.2665.17.camel@cirrus> <1405494459.2665.20.camel@cirrus> <53C6EF52.5060006@oracle.com> <53C73F4F.3050403@oracle.com> Message-ID: <53C8206F.9070303@oracle.com> There are 3 factors: MaxGCPauseMilli, MixedGCCountTarget, G1OldCSetRegionThresholdPercent The candidate regions is calculated by some algorithm. minimum regions = (candidate regions)/MixedGCCountTarget maximum regions = (heap regions)* G1OldCSetRegionThresholdPercent If the estimated mixed gc time is < MaxGCPauseMilli, g1 will try to add the candidate regions to cset while keeping the estimated time below MaxGCPauseMilli, as long as it is less than maximum regions, and reclaimable percentage higher than the waste limit. If the estimated mixed gc time is > MaxGCPauseMilli, g1 will add minimum regions to cset. In your case, MaxGCPauseMilli is low and MixedGCCountTarget is 80. So it can only add 2 when the estimated time > MaxGCPauseMilli Thanks, Jenny On 7/16/2014 9:14 PM, Martin Makundi wrote: > > This is a diagnostic parameter, you need to apply with > > -XX:+UnlockDiagnosticVMOptions -XX:+G1PrintRegionLivenessInfo > > Thanks, will try that. > > It will first add old regions if the estimated time is under the > MaxGCPauseMilli. If the estimated time is higher than > MaxGCPauseMilli, it will add the minimum decided by > #candidate-regions/MixedGCCountTarget. If you set > MixedGCCountTarget too high, the minimum will be too low. In your > case, it is 2. > > > Hmm.. what is the logic behind this > candidate-regions/MixedGCCountTarget? Is there no way to tell the gc > to estimate the max number of regions it could maybe achieve in the > time available and do that instead of 2? > > ** > Martin > > Thanks, > Jenny > > On 7/16/2014 6:46 PM, Martin Makundi wrote: >> >> 2. For most of the time, eden size is 1.4g, survivor 150m, >> the rest is old gen. I am not sure how much of the old gen >> is used for humongous allocations. But it seems there are >> some tunings you can try to help mixed gc: >> - old regions added to cset is 2-14 for mixed gc. Most of >> the time the reason is 'predicted time too high'. You can >> try either increase -XX:MaxGCPauseMillis to a higher value, >> or decrease -XX:G1MixedGCCountTarget (currently it is 80) so >> that more old regions can be added. 
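To make the bounds above concrete, a small sketch with round numbers (the candidate-region count is made up purely for illustration; only the count target of 80 and the rough region count reflect this thread's settings):

    // min/max old regions per mixed GC, as described above.
    public class MixedGcRegionBounds {
        public static void main(String[] args) {
            int candidateRegions = 160;        // hypothetical number of mixed-GC candidate regions
            int mixedGcCountTarget = 80;       // -XX:G1MixedGCCountTarget
            int heapRegions = 6144;            // ~30G heap at ~5M per region (illustrative)
            int oldCSetThresholdPercent = 85;  // -XX:G1OldCSetRegionThresholdPercent

            int minPerMixedGc = candidateRegions / mixedGcCountTarget;        // 160 / 80 = 2
            int maxPerMixedGc = heapRegions * oldCSetThresholdPercent / 100;  // = 5222

            System.out.println("min old regions per mixed GC: " + minPerMixedGc);
            System.out.println("max old regions per mixed GC: " + maxPerMixedGc);
        }
    }

With numbers like these the minimum works out to the 2 regions per mixed GC mentioned earlier in the thread, which is all that gets added once the pause estimate exceeds MaxGCPauseMillis.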
>> >> >> Does it attempt to do any mixed gc if it cannot do >> all G1MixedGCCountTarget or is the value G1MixedGCCountTarget >> just an upper limit? If it just is an upper limit we could keep >> it at 80 or higher? > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From martin.makundi at koodaripalvelut.com Thu Jul 17 19:21:00 2014 From: martin.makundi at koodaripalvelut.com (Martin Makundi) Date: Thu, 17 Jul 2014 22:21:00 +0300 Subject: G1gc compaction algorithm In-Reply-To: <53C8206F.9070303@oracle.com> References: <1405492250.2665.12.camel@cirrus> <1405493510.2665.17.camel@cirrus> <1405494459.2665.20.camel@cirrus> <53C6EF52.5060006@oracle.com> <53C73F4F.3050403@oracle.com> <53C8206F.9070303@oracle.com> Message-ID: > There are 3 factors: MaxGCPauseMilli, MixedGCCountTarget, > G1OldCSetRegionThresholdPercent > > The candidate regions is calculated by some algorithm. > minimum regions = (candidate regions)/MixedGCCountTarget > maximum regions = (heap regions)* G1OldCSetRegionThresholdPercent > > If the estimated mixed gc time is < MaxGCPauseMilli, g1 will try to add > the candidate regions to cset while keeping the estimated time below > MaxGCPauseMilli, as long as it is less than maximum regions, and > reclaimable percentage higher than the waste limit. > If the estimated mixed gc time is > MaxGCPauseMilli, g1 will add minimum > regions to cset. > What's the science behind these equations (in short) or are they purely ad-hoc? ** Martin > > In your case, MaxGCPauseMilli is low and MixedGCCountTarget is 80. So it > can only add 2 when the estimated time > MaxGCPauseMilli > > Thanks, > Jenny > > On 7/16/2014 9:14 PM, Martin Makundi wrote: > > > This is a diagnostic parameter, you need to apply with > > -XX:+UnlockDiagnosticVMOptions -XX:+G1PrintRegionLivenessInfo > > Thanks, will try that. > > It will first add old regions if the estimated time is under the >> MaxGCPauseMilli. If the estimated time is higher than MaxGCPauseMilli, it >> will add the minimum decided by #candidate-regions/MixedGCCountTarget. If >> you set MixedGCCountTarget too high, the minimum will be too low. In your >> case, it is 2. >> > > Hmm.. what is the logic behind this > candidate-regions/MixedGCCountTarget? Is there no way to tell the gc to > estimate the max number of regions it could maybe achieve in the time > available and do that instead of 2? > > ** > Martin > >> Thanks, >> Jenny >> >> On 7/16/2014 6:46 PM, Martin Makundi wrote: >> >> 2. For most of the time, eden size is 1.4g, survivor 150m, the rest is >>> old gen. I am not sure how much of the old gen is used for humongous >>> allocations. But it seems there are some tunings you can try to help mixed >>> gc: >>> - old regions added to cset is 2-14 for mixed gc. Most of the time >>> the reason is 'predicted time too high'. You can try either increase >>> -XX:MaxGCPauseMillis to a higher value, or decrease >>> -XX:G1MixedGCCountTarget (currently it is 80) so that more old regions can >>> be added. >>> >> >> Does it attempt to do any mixed gc if it cannot do >> all G1MixedGCCountTarget or is the value G1MixedGCCountTarget just an upper >> limit? If it just is an upper limit we could keep it at 80 or higher? >> >> >> > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jon.masamitsu at oracle.com Thu Jul 17 19:43:31 2014 From: jon.masamitsu at oracle.com (Jon Masamitsu) Date: Thu, 17 Jul 2014 12:43:31 -0700 Subject: Minor GC difference Java 7 vs Java 8 In-Reply-To: References: , <53879CEB.2070803@Oracle.COM> , <538D001E.1050801@oracle.com> , <538E07D4.2020407@oracle.com> Message-ID: <53C82763.4030304@oracle.com> On 06/05/2014 05:21 PM, Chris Hurst wrote: > Hi, > > > WorkStealingYieldsBeforeSleep=500 gave me Java 6 like behaviour > (constant cost) and actually slightly better pause than Java 6 (less > than half max (default WorkStealingYieldsBeforeSleep=5000) java 7). So > I think I have defined them and can tune them as I wish ... > reduce/remove or lessen their impact. So happy with that, i have some > nice charts ;-) > > On a separate note with regard to general average pause (non spikes / > regardless of changing that parameter) GC performance appears slower > ~13-14% would that be expected or surprising Chris, In jdk7 there were objects moved out of the permanent generation to the Java heap. Most notable were interned strings. This was as a step toward removing the permanent generation as happened in jdk8. This has the benefit of early collection of short lived interned strings but has the cost of processing the longer lived interned strings through young GC's (until they get copied to the old gen). We did not see much difference when testing with this change but if you have a large young gen mostly full of dead objects, the effects of the longer lived interned strings will be amplified. This may not help but there is a develop flag in jdk7 (i.e., not available in product builds and gone in jdk8) -XX:+JavaObjectsInPerm that reverts to putting the interned strings and other objects back into the perm gen. If you run a debug build with it, some of the young GC difference should go away (although the debug builds are so slow you may find it hard to see). Jon > ? > > Chris > ------------------------------------------------------------------------ > Date: Tue, 3 Jun 2014 10:37:24 -0700 > From: jon.masamitsu at oracle.com > To: christopherhurst at hotmail.com; hotspot-gc-use at openjdk.java.net > Subject: Re: Minor GC difference Java 7 vs Java 8 > > > On 06/03/2014 06:51 AM, Chris Hurst wrote: > > Hi, > > Reducing parallel threads on the simple testbed sample code > actually reduced minor GC STW's (presumably in this scenario > multithreading is inefficient due to the simplicity of the task), > however the reverse was true for the real application though for a > signal thread reduction the difference is hard to measure sub ms > if at all. I'd expect the difference to be because the real > application will have a more complex object graph and be tenuring > with a very small amount of promotion to old. If a spike does > occur with parallel gc threads reduced ie increasing other process > activity on the box the duration of the spike appears unaffected > by the reduction in parallel GC threads which I guess I would expect. > > Any suggestions as to what value to use for > WorkStealingYieldsBeforeSleep, otherwise I'll just trying doubling > it ? If you have any additional notes , details on this change > that would be most helpful. > > > WorkStealingYieldsBeforeSleep is one of those flags that > just depends on your needs so I don't have any good > suggestion. > > Jon > > > Chris > > PS This particular application uses the following (we regularly > test other GC permutations including G1 / CMS, our goal is lower > latency) .. 
> > -Xms1536m > -Xmx1536m > -XX:+PrintCommandLineFlags > -XX:+UseParallelOldGC > -XX:+UnlockDiagnosticVMOptions > -XX:PermSize=50M > -XX:-UseAdaptiveSizePolicy > -XX:+AlwaysPreTouch > -XX:MaxTenuringThreshold=15 > -XX:InitialTenuringThreshold=15 > -XX:+DTraceMonitorProbes > -XX:+ExtendedDTraceProbes > -Dsun.rmi.dgc.client.gcInterval=604800000 > -Dsun.rmi.dgc.server.gcInterval=604800000 > > ------------------------------------------------------------------------ > Date: Mon, 2 Jun 2014 15:52:14 -0700 > From: jon.masamitsu at oracle.com > To: christopherhurst at hotmail.com > CC: hotspot-gc-use at openjdk.java.net > > Subject: Re: Minor GC difference Java 7 vs Java 8 > > > On 06/01/2014 05:14 PM, Chris Hurst wrote: > > Hi, > > Thanks for the replies, I've been stuck on this issue for > about a year and had raised it with Oracle support but hadn't > got anywhere but last weekend I managed to get a lot further > with it .. > > I wrote a trivial program to continuously fill young gen and > release the garbage for use with some DTrace tests and this > showed a similar issue spikes wise from there I could work out > the issue as the parallel GC threads, i.e. anything less than > number of cores removed the spike (ie cores-1), for the test > program normal young GC oscillated about 1ms but spiked at > about 15ms(need to check) (Reducing the parallel threads > worked on the real application in a similar way). > We managed to identify some very minor tasks (they were so > small they weren't showing up on some of our less fine grained > CPU monitoring) that occasionally competed for CPU, the effect > was surprising relatively but now we understand the cause we > can tune the Java 6 GC better. > The spikes were again larger than I would have expected and > all appear to be every close in size, I wouldn't have > predicted this from the issue but that's fine ;-) > > Currently the Java 7 version is still not quite as good on > overall throughput when not spiking though I will recheck > these results, as our most recent tests were around tuning J6 > with the new info. We're happy with our Java 6 GC performance. > > Although we can now reduce these already rare spikes > (potentially to zero), I can't 100% guarantee they won't occur > so I would still like to understand why Java 7 appears to > handle this scenario less efficiently. > > Using Dtrace we were mostly seeing yields and looking at a > stack trace pointed us toward some JDK 7 changes and some > newer java options that might be related ?? ... > > taskqueue.cpp > > libc.so.1`lwp_yield+0x15 libjvm.so`__1cCosFyield6F_v_+0x257 > libjvm.so`__1cWParallelTaskTerminatorFyield6M_v_+0x18 > libjvm.so`__1cWParallelTaskTerminatorRoffer_termination6MpnUTerminatorTerminator__b_+0xe8 > libjvm.so`__1cJStealTaskFdo_it6MpnNGCTaskManager_I_v_+0x378 > libjvm.so`__1cMGCTaskThreadDrun6M_v_+0x19f > libjvm.so`java_start+0x1f2 libc.so.1`_thr_setup+0x4e > libc.so.1`_lwp_start 17:29 > > uintx WorkStealingHardSpins = > 4096 {experimental} > uintx WorkStealingSleepMillis = > 1 {experimental} > uintx WorkStealingSpinToYieldRatio = > 10 {experimental} > uintx WorkStealingYieldsBeforeSleep = > 5000 {experimental} > > I haven't had a chance to play with these as yet but could > these be involved eg j7 tuned to be more friendly to other > applications at the cost of latency (spin to yield) ? Would > that make sense ? > > > Chris, > > My best recollection is that there was a performance regression > reported internally and the change to 5000 was to fix > that regression. 
Increasing the number of yield's done before > a sleep made this code work more like the previous behavior. > Let me know if you need better information and I can see what > I can dig up. > > By the way, when you tuned down the number of ParallelGCThreads, > you saw little or no increase in the STW pause times? > > You're using UseParallelGC? > > Jon > > We would like to move to Java 7 for support reasons, also as > we are on Solaris the extra memory over head of J8 (64bit > only) even with compressed oops gives us another latency hit. > > Chris > > PS -XX:+AlwaysPreTouch is on. > > > Date: Thu, 29 May 2014 13:47:39 -0700 > > From: Peter.B.Kessler at Oracle.COM > > > To: christopherhurst at hotmail.com > ; > hotspot-gc-use at openjdk.java.net > > > Subject: Re: Minor GC difference Java 7 vs Java 8 > > > > Are the -XX:+PrintGCDetails "[Times: user=0.01 sys=0.00, > real=0.03 secs]" reports for the long pauses different from > the short pauses? I'm hoping for some anomalous sys time, or > user/real ratio, that would indicate it was something > happening on the machine that is interfering with the > collector. But you'd think that would show up as occasional > 15ms blips in your message processing latency outside of when > the collector goes off. > > > > Does -XX:+PrintHeapAtGC show anything anomalous about the > space occupancy after the long pauses? E.g., more objects > getting copied to the survivor space, or promoted to the old > generation? You could infer the numbers from > -XX:+PrintGCDetails output if you didn't want to deal with the > volume produced by -XX:+PrintHeapAtGC. > > > > You don't say how large or how stable your old generation > size is. If you have to get new pages from the OS to expand > the old generation, or give pages back to the OS because the > old generation can shrink, that's extra work. You can infer > this traffic from -XX:+PrintHeapAtGC output by looking at the > "committed" values for the generations. E.g., in "ParOldGen > total 43008K, used 226K [0xba400000, 0xbce00000, 0xe4e00000)" > those three hex numbers are the start address for the > generation, the end of the committed memory for that > generation, and the end of the reserved memory for that > generation. There's a similar report for the young generation. > Running with -Xms equal to -Xmx should prevent pages from > being acquired from or returned to the OS during the run. > > > > Are you running with -XX:+AlwaysPreTouch? Even if you've > reserved and committed the address space, the first time you > touch new pages the OS wants to zero them, which takes time. > That flags forces all the zeroing at initialization. If you > know your page size, you should be able to see the generations > (mostly the old generation) crossing a page boundary for the > first time in the -XX:+PrintHeapAtGC output. > > > > Or it could be some change in the collector between JDK-6 > and JDK-7. > > > > Posting some log snippets might let sharper eyes see something. > > > > ... peter > > > > On 04/30/14 07:58, Chris Hurst wrote: > > > Hi, > > > > > > Has anyone seen anything similar to this ... > > > > > > On java 6 (range of versions 32bit Solaris) application , > using parallel old gc, non adapative. Using a very heavy test > performance load we see minor GC's around the 5ms mark and > some very rare say 3or4 ish instances in 12 hours say 20ms > pauses the number of pauses is random (though always few > compares with the total number of GC's) and large ~20ms (this > value appears the same for all such points.) 
We have a large > number of minor GC's in our runs, only a full GC at startup. > These freak GC's can be bunched or spread out and we can run > for many hours without one (though doing minor GC's). > > > > > > What's odd is that if I use Java 7 (range of versions > 32bit) the result is very close but the spikes (1 or 2 > arguably less) are now 30-40ms (depends on run arguably even > rarer). Has anyone experienced anything similar why would Java > 7 up to double a minor GC / The GC throughput is approximately > the same arguably 7 is better throughput just but that freak > minor GC makes it usable due to latency. > > > > > > In terms of the change in spike height (20 (J6)vs40(J7)) > this is very reproducible though the number of points and when > they occur varies slightly. The over all GC graph , throughput > is similar otherwise as is the resultant memory dump at the > end. The test should be constant load, multiple clients just > doing the same thing over and over. > > > > > > Has anyone seen anything similar, I was hoping someone > might have seen a change in defaults, thread timeout, default > data structure size change that would account for this. I was > hoping the marked increase might be a give away to someone as > its way off our average minor GC time. > > > > > > We have looked at gclogs, heap dumps, processor activity, > background processes, amount of disc access, safepoints etc > etc. , we trace message rate into out of the application for > variation, compare heap dumps at end etc. nothing stands out > so far. > > > > > > Chris > > > > > > > > > > > > > > > > > > > > > > > > > > > _______________________________________________ > > > hotspot-gc-use mailing list > > > hotspot-gc-use at openjdk.java.net > > > > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From martin.makundi at koodaripalvelut.com Sat Jul 19 02:38:55 2014 From: martin.makundi at koodaripalvelut.com (Martin Makundi) Date: Sat, 19 Jul 2014 05:38:55 +0300 Subject: G1gc compaction algorithm In-Reply-To: References: <1405492250.2665.12.camel@cirrus> <1405493510.2665.17.camel@cirrus> <1405494459.2665.20.camel@cirrus> <53C6EF52.5060006@oracle.com> <53C73F4F.3050403@oracle.com> <53C8206F.9070303@oracle.com> Message-ID: BTW: We have 16 cores and we probably need 4 for serving web users, is there a way we can utilize more efficiently the excess resources towards gc while keeping user experienced pause at minimum? 2014-07-17 22:21 GMT+03:00 Martin Makundi < martin.makundi at koodaripalvelut.com>: > > There are 3 factors: MaxGCPauseMilli, MixedGCCountTarget, >> G1OldCSetRegionThresholdPercent >> >> The candidate regions is calculated by some algorithm. >> minimum regions = (candidate regions)/MixedGCCountTarget >> maximum regions = (heap regions)* G1OldCSetRegionThresholdPercent >> > >> If the estimated mixed gc time is < MaxGCPauseMilli, g1 will try to add >> the candidate regions to cset while keeping the estimated time below >> MaxGCPauseMilli, as long as it is less than maximum regions, and >> reclaimable percentage higher than the waste limit. >> If the estimated mixed gc time is > MaxGCPauseMilli, g1 will add minimum >> regions to cset. >> > > What's the science behind these equations (in short) or are they purely > ad-hoc? > > ** > Martin > > > >> >> In your case, MaxGCPauseMilli is low and MixedGCCountTarget is 80. 
So it >> can only add 2 when the estimated time > MaxGCPauseMilli >> >> Thanks, >> Jenny >> >> On 7/16/2014 9:14 PM, Martin Makundi wrote: >> >> > This is a diagnostic parameter, you need to apply with >> > -XX:+UnlockDiagnosticVMOptions -XX:+G1PrintRegionLivenessInfo >> >> Thanks, will try that. >> >> It will first add old regions if the estimated time is under the >>> MaxGCPauseMilli. If the estimated time is higher than MaxGCPauseMilli, it >>> will add the minimum decided by #candidate-regions/MixedGCCountTarget. If >>> you set MixedGCCountTarget too high, the minimum will be too low. In your >>> case, it is 2. >>> >> >> Hmm.. what is the logic behind this >> candidate-regions/MixedGCCountTarget? Is there no way to tell the gc to >> estimate the max number of regions it could maybe achieve in the time >> available and do that instead of 2? >> >> ** >> Martin >> >>> Thanks, >>> Jenny >>> >>> On 7/16/2014 6:46 PM, Martin Makundi wrote: >>> >>> 2. For most of the time, eden size is 1.4g, survivor 150m, the rest is >>>> old gen. I am not sure how much of the old gen is used for humongous >>>> allocations. But it seems there are some tunings you can try to help mixed >>>> gc: >>>> - old regions added to cset is 2-14 for mixed gc. Most of the time >>>> the reason is 'predicted time too high'. You can try either increase >>>> -XX:MaxGCPauseMillis to a higher value, or decrease >>>> -XX:G1MixedGCCountTarget (currently it is 80) so that more old regions can >>>> be added. >>>> >>> >>> Does it attempt to do any mixed gc if it cannot do >>> all G1MixedGCCountTarget or is the value G1MixedGCCountTarget just an upper >>> limit? If it just is an upper limit we could keep it at 80 or higher? >>> >>> >>> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From martin.makundi at koodaripalvelut.com Tue Jul 22 15:11:54 2014 From: martin.makundi at koodaripalvelut.com (Martin Makundi) Date: Tue, 22 Jul 2014 18:11:54 +0300 Subject: G1gc compaction algorithm In-Reply-To: References: <1405492250.2665.12.camel@cirrus> <1405493510.2665.17.camel@cirrus> <1405494459.2665.20.camel@cirrus> <53C6EF52.5060006@oracle.com> <53C73F4F.3050403@oracle.com> <53C8206F.9070303@oracle.com> Message-ID: Hi! Here is the most recent attempt, we increased GCPauseIntervalMillis to 10000 so that 10% will be a longer time for doing mixed sets. However, we still seem to get a Full GC at "[Full GC 20G->17G(30G), 60.0458620 secs]" due to humongous allocations. Any suggestions how to mitigate the Full GC? Here is the full log with parameters: http://81.22.250.165/log ** Martin 2014-07-19 5:38 GMT+03:00 Martin Makundi : > BTW: We have 16 cores and we probably need 4 for serving web users, is > there a way we can utilize more efficiently the excess resources towards gc > while keeping user experienced pause at minimum? > > > 2014-07-17 22:21 GMT+03:00 Martin Makundi < > martin.makundi at koodaripalvelut.com>: > > >> There are 3 factors: MaxGCPauseMilli, MixedGCCountTarget, >>> G1OldCSetRegionThresholdPercent >>> >>> The candidate regions is calculated by some algorithm. >>> minimum regions = (candidate regions)/MixedGCCountTarget >>> maximum regions = (heap regions)* G1OldCSetRegionThresholdPercent >>> >> >>> If the estimated mixed gc time is < MaxGCPauseMilli, g1 will try to add >>> the candidate regions to cset while keeping the estimated time below >>> MaxGCPauseMilli, as long as it is less than maximum regions, and >>> reclaimable percentage higher than the waste limit. 
>>> If the estimated mixed gc time is > MaxGCPauseMilli, g1 will add minimum >>> regions to cset. >>> >> >> What's the science behind these equations (in short) or are they purely >> ad-hoc? >> >> ** >> Martin >> >> >> >>> >>> In your case, MaxGCPauseMilli is low and MixedGCCountTarget is 80. So >>> it can only add 2 when the estimated time > MaxGCPauseMilli >>> >>> Thanks, >>> Jenny >>> >>> On 7/16/2014 9:14 PM, Martin Makundi wrote: >>> >>> > This is a diagnostic parameter, you need to apply with >>> > -XX:+UnlockDiagnosticVMOptions -XX:+G1PrintRegionLivenessInfo >>> >>> Thanks, will try that. >>> >>> It will first add old regions if the estimated time is under the >>>> MaxGCPauseMilli. If the estimated time is higher than MaxGCPauseMilli, it >>>> will add the minimum decided by #candidate-regions/MixedGCCountTarget. If >>>> you set MixedGCCountTarget too high, the minimum will be too low. In your >>>> case, it is 2. >>>> >>> >>> Hmm.. what is the logic behind this >>> candidate-regions/MixedGCCountTarget? Is there no way to tell the gc to >>> estimate the max number of regions it could maybe achieve in the time >>> available and do that instead of 2? >>> >>> ** >>> Martin >>> >>>> Thanks, >>>> Jenny >>>> >>>> On 7/16/2014 6:46 PM, Martin Makundi wrote: >>>> >>>> 2. For most of the time, eden size is 1.4g, survivor 150m, the rest >>>>> is old gen. I am not sure how much of the old gen is used for humongous >>>>> allocations. But it seems there are some tunings you can try to help mixed >>>>> gc: >>>>> - old regions added to cset is 2-14 for mixed gc. Most of the time >>>>> the reason is 'predicted time too high'. You can try either increase >>>>> -XX:MaxGCPauseMillis to a higher value, or decrease >>>>> -XX:G1MixedGCCountTarget (currently it is 80) so that more old regions can >>>>> be added. >>>>> >>>> >>>> Does it attempt to do any mixed gc if it cannot do >>>> all G1MixedGCCountTarget or is the value G1MixedGCCountTarget just an upper >>>> limit? If it just is an upper limit we could keep it at 80 or higher? >>>> >>>> >>>> >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From martin.makundi at koodaripalvelut.com Wed Jul 23 15:01:16 2014 From: martin.makundi at koodaripalvelut.com (Martin Makundi) Date: Wed, 23 Jul 2014 18:01:16 +0300 Subject: G1gc compaction algorithm In-Reply-To: References: <1405492250.2665.12.camel@cirrus> <1405493510.2665.17.camel@cirrus> <1405494459.2665.20.camel@cirrus> <53C6EF52.5060006@oracle.com> <53C73F4F.3050403@oracle.com> <53C8206F.9070303@oracle.com> Message-ID: 1. Log says "recent GC overhead higher than threshold after GC, recent GC overhead: 13.94 %, threshold: 10.00 %" what is that 10% threshold how can it be changed and what will it affect? 2. Today I changed -XX:G1HeapWastePercent=0 and didn't get any Full GC:s. I assume this is needed to keep heap clean for those random humongous allocations... ** Martin 2014-07-22 18:11 GMT+03:00 Martin Makundi < martin.makundi at koodaripalvelut.com>: > Hi! > > Here is the most recent attempt, we increased GCPauseIntervalMillis to > 10000 so that 10% will be a longer time for doing mixed sets. However, we > still seem to get a Full GC at "[Full GC 20G->17G(30G), 60.0458620 secs]" > due to humongous allocations. > > Any suggestions how to mitigate the Full GC? 
Here is the full log with > parameters: > > http://81.22.250.165/log > > ** > Martin > > > 2014-07-19 5:38 GMT+03:00 Martin Makundi < > martin.makundi at koodaripalvelut.com>: > > BTW: We have 16 cores and we probably need 4 for serving web users, is >> there a way we can utilize more efficiently the excess resources towards gc >> while keeping user experienced pause at minimum? >> >> >> 2014-07-17 22:21 GMT+03:00 Martin Makundi < >> martin.makundi at koodaripalvelut.com>: >> >> >>> There are 3 factors: MaxGCPauseMilli, MixedGCCountTarget, >>>> G1OldCSetRegionThresholdPercent >>>> >>>> The candidate regions is calculated by some algorithm. >>>> minimum regions = (candidate regions)/MixedGCCountTarget >>>> maximum regions = (heap regions)* G1OldCSetRegionThresholdPercent >>>> >>> >>>> If the estimated mixed gc time is < MaxGCPauseMilli, g1 will try to add >>>> the candidate regions to cset while keeping the estimated time below >>>> MaxGCPauseMilli, as long as it is less than maximum regions, and >>>> reclaimable percentage higher than the waste limit. >>>> If the estimated mixed gc time is > MaxGCPauseMilli, g1 will add >>>> minimum regions to cset. >>>> >>> >>> What's the science behind these equations (in short) or are they purely >>> ad-hoc? >>> >>> ** >>> Martin >>> >>> >>> >>>> >>>> In your case, MaxGCPauseMilli is low and MixedGCCountTarget is 80. So >>>> it can only add 2 when the estimated time > MaxGCPauseMilli >>>> >>>> Thanks, >>>> Jenny >>>> >>>> On 7/16/2014 9:14 PM, Martin Makundi wrote: >>>> >>>> > This is a diagnostic parameter, you need to apply with >>>> > -XX:+UnlockDiagnosticVMOptions -XX:+G1PrintRegionLivenessInfo >>>> >>>> Thanks, will try that. >>>> >>>> It will first add old regions if the estimated time is under the >>>>> MaxGCPauseMilli. If the estimated time is higher than MaxGCPauseMilli, it >>>>> will add the minimum decided by #candidate-regions/MixedGCCountTarget. If >>>>> you set MixedGCCountTarget too high, the minimum will be too low. In your >>>>> case, it is 2. >>>>> >>>> >>>> Hmm.. what is the logic behind this >>>> candidate-regions/MixedGCCountTarget? Is there no way to tell the gc to >>>> estimate the max number of regions it could maybe achieve in the time >>>> available and do that instead of 2? >>>> >>>> ** >>>> Martin >>>> >>>>> Thanks, >>>>> Jenny >>>>> >>>>> On 7/16/2014 6:46 PM, Martin Makundi wrote: >>>>> >>>>> 2. For most of the time, eden size is 1.4g, survivor 150m, the rest >>>>>> is old gen. I am not sure how much of the old gen is used for humongous >>>>>> allocations. But it seems there are some tunings you can try to help mixed >>>>>> gc: >>>>>> - old regions added to cset is 2-14 for mixed gc. Most of the time >>>>>> the reason is 'predicted time too high'. You can try either increase >>>>>> -XX:MaxGCPauseMillis to a higher value, or decrease >>>>>> -XX:G1MixedGCCountTarget (currently it is 80) so that more old regions can >>>>>> be added. >>>>>> >>>>> >>>>> Does it attempt to do any mixed gc if it cannot do >>>>> all G1MixedGCCountTarget or is the value G1MixedGCCountTarget just an upper >>>>> limit? If it just is an upper limit we could keep it at 80 or higher? >>>>> >>>>> >>>>> >>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From yu.zhang at oracle.com Thu Jul 24 05:20:20 2014 From: yu.zhang at oracle.com (Yu Zhang) Date: Wed, 23 Jul 2014 22:20:20 -0700 Subject: G1gc compaction algorithm In-Reply-To: References: <1405493510.2665.17.camel@cirrus> <1405494459.2665.20.camel@cirrus> <53C6EF52.5060006@oracle.com> <53C73F4F.3050403@oracle.com> <53C8206F.9070303@oracle.com> Message-ID: <53D09794.3090806@oracle.com> Martin, I took a look at your 2nd gc log. Most of the Humongous objects are of 2M, some goes up to 14M. Can you try to set G1HeapRegionSize=16m? This will get rid of most of humongous allocations. Plus the RS related operations are very long. Having less RSet might help. Considering the heap usage after full gc is 17g, -XX:G1HeapWastePercent=0 is not appropriate. This should be ~50. The reason you run into full gc, is a lot of humongous allocation happened at that time, while the heap used is ~21g. There are other tunings we can try, but I think those 2 should give better results. Comments to your questions inlined. Thanks, Jenny On 7/23/2014 8:01 AM, Martin Makundi wrote: > 1. Log says "recent GC overhead higher than threshold after GC, recent > GC overhead: 13.94 %, threshold: 10.00 %" what is that 10% threshold > how can it be changed and what will it affect? G1 uses this to decide when to expand the heap. It means when the gc pause time is over 10% of the application time, then we need to expand the heap. > > 2. Today I changed -XX:G1HeapWastePercent=0 and didn't get any Full > GC:s. I assume this is needed to keep heap clean for those random > humongous allocations... Please see my comments above. > > ** > Martin > > > 2014-07-22 18:11 GMT+03:00 Martin Makundi > >: > > Hi! > > Here is the most recent attempt, we > increased GCPauseIntervalMillis to 10000 so that 10% will be a > longer time for doing mixed sets. However, we still seem to get a > Full GC at "[Full GC 20G->17G(30G), 60.0458620 secs]" due to > humongous allocations. > > Any suggestions how to mitigate the Full GC? Here is the full log > with parameters: > > http://81.22.250.165/log > > ** > Martin > > > 2014-07-19 5:38 GMT+03:00 Martin Makundi > >: > > BTW: We have 16 cores and we probably need 4 for serving web > users, is there a way we can utilize more efficiently the > excess resources towards gc while keeping user experienced > pause at minimum? > > > 2014-07-17 22:21 GMT+03:00 Martin Makundi > >: > > > There are 3 factors: MaxGCPauseMilli, > MixedGCCountTarget, G1OldCSetRegionThresholdPercent > > The candidate regions is calculated by some algorithm. > minimum regions = (candidate regions)/MixedGCCountTarget > maximum regions = (heap regions)* > G1OldCSetRegionThresholdPercent > > > If the estimated mixed gc time is < MaxGCPauseMilli, > g1 will try to add the candidate regions to cset while > keeping the estimated time below MaxGCPauseMilli, as > long as it is less than maximum regions, and > reclaimable percentage higher than the waste limit. > If the estimated mixed gc time is > MaxGCPauseMilli, > g1 will add minimum regions to cset. > > > What's the science behind these equations (in short) or > are they purely ad-hoc? > > ** > Martin > > > In your case, MaxGCPauseMilli is low and > MixedGCCountTarget is 80. So it can only add 2 when > the estimated time > MaxGCPauseMilli > > Thanks, > Jenny > > On 7/16/2014 9:14 PM, Martin Makundi wrote: >> > This is a diagnostic parameter, you need to apply with >> > -XX:+UnlockDiagnosticVMOptions >> -XX:+G1PrintRegionLivenessInfo >> >> Thanks, will try that. 
>> >> It will first add old regions if the estimated >> time is under the MaxGCPauseMilli. If the >> estimated time is higher than MaxGCPauseMilli, it >> will add the minimum decided by >> #candidate-regions/MixedGCCountTarget. If you set >> MixedGCCountTarget too high, the minimum will be >> too low. In your case, it is 2. >> >> >> Hmm.. what is the logic behind this >> candidate-regions/MixedGCCountTarget? Is there no way >> to tell the gc to estimate the max number of regions >> it could maybe achieve in the time available and do >> that instead of 2? >> >> ** >> Martin >> >> Thanks, >> Jenny >> >> On 7/16/2014 6:46 PM, Martin Makundi wrote: >>> >>> 2. For most of the time, eden size is 1.4g, >>> survivor 150m, the rest is old gen. I am >>> not sure how much of the old gen is used for >>> humongous allocations. But it seems there >>> are some tunings you can try to help mixed gc: >>> - old regions added to cset is 2-14 for >>> mixed gc. Most of the time the reason is >>> 'predicted time too high'. You can try >>> either increase -XX:MaxGCPauseMillis to a >>> higher value, or decrease >>> -XX:G1MixedGCCountTarget (currently it is >>> 80) so that more old regions can be added. >>> >>> >>> Does it attempt to do any mixed gc if it cannot >>> do all G1MixedGCCountTarget or is the >>> value G1MixedGCCountTarget just an upper limit? >>> If it just is an upper limit we could keep it at >>> 80 or higher? >> >> > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From martin.makundi at koodaripalvelut.com Thu Jul 24 05:57:51 2014 From: martin.makundi at koodaripalvelut.com (Martin Makundi) Date: Thu, 24 Jul 2014 08:57:51 +0300 Subject: G1gc compaction algorithm In-Reply-To: <53D09794.3090806@oracle.com> References: <1405493510.2665.17.camel@cirrus> <1405494459.2665.20.camel@cirrus> <53C6EF52.5060006@oracle.com> <53C73F4F.3050403@oracle.com> <53C8206F.9070303@oracle.com> <53D09794.3090806@oracle.com> Message-ID: > > > I took a look at your 2nd gc log. > > Most of the Humongous objects are of 2M, some goes up to 14M. Can you try > to set G1HeapRegionSize=16m? This will get rid of most of humongous > allocations. Plus the RS related operations are very long. Having less > RSet might help. > I have tried 16m and 32m they both result in worse full gc behavior. > > Considering the heap usage after full gc is 17g, -XX:G1HeapWastePercent=0 > is not appropriate. This should be ~50. > The reason you run into full gc, is a lot of humongous allocation happened > at that time, while the heap used is ~21g. > > There are other tunings we can try, but I think those 2 should give better > results. > XX:G1HeapWastePercent=0 seems to work, however. > > Comments to your questions inlined. > > Thanks, > Jenny > > On 7/23/2014 8:01 AM, Martin Makundi wrote: > > 1. Log says "recent GC overhead higher than threshold after GC, recent GC > overhead: 13.94 %, threshold: 10.00 %" what is that 10% threshold how can > it be changed and what will it affect? > > G1 uses this to decide when to expand the heap. It means when the gc > pause time is over 10% of the application time, then we need to expand the > heap. > Is this adjustable, I did not notice any parameters affecting this 10% threshold? ** Martin > > 2. Today I changed -XX:G1HeapWastePercent=0 and didn't get any Full > GC:s. I assume this is needed to keep heap clean for those random humongous > allocations... > > Please see my comments above. 
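For what it's worth, the region-size suggestion quoted above can be sanity-checked against the usual G1 rule of thumb that a single object of at least half a region is allocated as humongous (a sketch; the 2M/14M object sizes come from this thread, and the 4M row assumes the requested 5M region size gets rounded down to 4M):

    // Which of the observed object sizes would still be humongous at a given region size?
    public class HumongousThreshold {
        public static void main(String[] args) {
            long[] regionSizes = {4L << 20, 16L << 20, 32L << 20};
            long[] objectSizes = {2L << 20, 14L << 20};   // sizes reported in this thread
            for (long region : regionSizes) {
                for (long obj : objectSizes) {
                    boolean humongous = obj >= region / 2;  // at least half a region => humongous
                    System.out.printf("region=%dM object=%dM humongous=%b%n",
                            region >> 20, obj >> 20, humongous);
                }
            }
        }
    }

So 16M regions would stop the common 2M allocations from being humongous, while the occasional ~14M ones still would be; 32M regions would cover those as well.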
> > > ** > Martin > > > 2014-07-22 18:11 GMT+03:00 Martin Makundi < > martin.makundi at koodaripalvelut.com>: > >> Hi! >> >> Here is the most recent attempt, we increased GCPauseIntervalMillis to >> 10000 so that 10% will be a longer time for doing mixed sets. However, we >> still seem to get a Full GC at "[Full GC 20G->17G(30G), 60.0458620 secs]" >> due to humongous allocations. >> >> Any suggestions how to mitigate the Full GC? Here is the full log with >> parameters: >> >> http://81.22.250.165/log >> >> ** >> Martin >> >> >> 2014-07-19 5:38 GMT+03:00 Martin Makundi < >> martin.makundi at koodaripalvelut.com>: >> >> BTW: We have 16 cores and we probably need 4 for serving web users, is >>> there a way we can utilize more efficiently the excess resources towards gc >>> while keeping user experienced pause at minimum? >>> >>> >>> 2014-07-17 22:21 GMT+03:00 Martin Makundi < >>> martin.makundi at koodaripalvelut.com>: >>> >>> >>>> There are 3 factors: MaxGCPauseMilli, MixedGCCountTarget, >>>>> G1OldCSetRegionThresholdPercent >>>>> >>>>> The candidate regions is calculated by some algorithm. >>>>> minimum regions = (candidate regions)/MixedGCCountTarget >>>>> maximum regions = (heap regions)* G1OldCSetRegionThresholdPercent >>>>> >>>> >>>>> If the estimated mixed gc time is < MaxGCPauseMilli, g1 will try to >>>>> add the candidate regions to cset while keeping the estimated time below >>>>> MaxGCPauseMilli, as long as it is less than maximum regions, and >>>>> reclaimable percentage higher than the waste limit. >>>>> If the estimated mixed gc time is > MaxGCPauseMilli, g1 will add >>>>> minimum regions to cset. >>>>> >>>> >>>> What's the science behind these equations (in short) or are they >>>> purely ad-hoc? >>>> >>>> ** >>>> Martin >>>> >>>> >>>> >>>>> >>>>> In your case, MaxGCPauseMilli is low and MixedGCCountTarget is 80. So >>>>> it can only add 2 when the estimated time > MaxGCPauseMilli >>>>> >>>>> Thanks, >>>>> Jenny >>>>> >>>>> On 7/16/2014 9:14 PM, Martin Makundi wrote: >>>>> >>>>> > This is a diagnostic parameter, you need to apply with >>>>> > -XX:+UnlockDiagnosticVMOptions -XX:+G1PrintRegionLivenessInfo >>>>> >>>>> Thanks, will try that. >>>>> >>>>> It will first add old regions if the estimated time is under the >>>>>> MaxGCPauseMilli. If the estimated time is higher than MaxGCPauseMilli, it >>>>>> will add the minimum decided by #candidate-regions/MixedGCCountTarget. If >>>>>> you set MixedGCCountTarget too high, the minimum will be too low. In your >>>>>> case, it is 2. >>>>>> >>>>> >>>>> Hmm.. what is the logic behind this >>>>> candidate-regions/MixedGCCountTarget? Is there no way to tell the gc to >>>>> estimate the max number of regions it could maybe achieve in the time >>>>> available and do that instead of 2? >>>>> >>>>> ** >>>>> Martin >>>>> >>>>>> Thanks, >>>>>> Jenny >>>>>> >>>>>> On 7/16/2014 6:46 PM, Martin Makundi wrote: >>>>>> >>>>>> 2. For most of the time, eden size is 1.4g, survivor 150m, the rest >>>>>>> is old gen. I am not sure how much of the old gen is used for humongous >>>>>>> allocations. But it seems there are some tunings you can try to help mixed >>>>>>> gc: >>>>>>> - old regions added to cset is 2-14 for mixed gc. Most of the >>>>>>> time the reason is 'predicted time too high'. You can try either increase >>>>>>> -XX:MaxGCPauseMillis to a higher value, or decrease >>>>>>> -XX:G1MixedGCCountTarget (currently it is 80) so that more old regions can >>>>>>> be added. 
>>>>>>> >>>>>> >>>>>> Does it attempt to do any mixed gc if it cannot do >>>>>> all G1MixedGCCountTarget or is the value G1MixedGCCountTarget just an upper >>>>>> limit? If it just is an upper limit we could keep it at 80 or higher? >>>>>> >>>>>> >>>>>> >>>>> >>>>> >>>> >>> >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomas.schatzl at oracle.com Thu Jul 24 07:18:22 2014 From: thomas.schatzl at oracle.com (Thomas Schatzl) Date: Thu, 24 Jul 2014 09:18:22 +0200 Subject: G1gc compaction algorithm In-Reply-To: References: <1405493510.2665.17.camel@cirrus> <1405494459.2665.20.camel@cirrus> <53C6EF52.5060006@oracle.com> <53C73F4F.3050403@oracle.com> <53C8206F.9070303@oracle.com> <53D09794.3090806@oracle.com> Message-ID: <1406186302.2920.4.camel@cirrus> Hi, On Thu, 2014-07-24 at 08:57 +0300, Martin Makundi wrote: > > I took a look at your 2nd gc log. [...] > Considering the heap usage after full gc is 17g, > -XX:G1HeapWastePercent=0 is not appropriate. This should be > ~50. > The reason you run into full gc, is a lot of humongous > allocation happened at that time, while the heap used is ~21g. > > There are other tunings we can try, but I think those 2 should > give better results. > XX:G1HeapWastePercent=0 seems to work, however. Simply fragmentation. There is no good workaround to Full GCs at this time except hoping that it works out. > > Comments to your questions inlined. > Thanks, > Jenny > On 7/23/2014 8:01 AM, Martin Makundi wrote: > > > 1. Log says "recent GC overhead higher than threshold after > > GC, recent GC overhead: 13.94 %, threshold: 10.00 %" what is > > that 10% threshold how can it be changed and what will it > > affect? > G1 uses this to decide when to expand the heap. It means when > the gc pause time is over 10% of the application time, then we > need to expand the heap. > > Is this adjustable, I did not notice any parameters affecting this 10% > threshold? GC overhead = 100.0 * (1.0 / (1.0 + GCTimeRatio)) This somewhat complicated formula is due to backwards compatibility. Thomas From martin.makundi at koodaripalvelut.com Thu Jul 24 07:22:33 2014 From: martin.makundi at koodaripalvelut.com (Martin Makundi) Date: Thu, 24 Jul 2014 10:22:33 +0300 Subject: G1gc compaction algorithm In-Reply-To: <1406186302.2920.4.camel@cirrus> References: <1405493510.2665.17.camel@cirrus> <1405494459.2665.20.camel@cirrus> <53C6EF52.5060006@oracle.com> <53C73F4F.3050403@oracle.com> <53C8206F.9070303@oracle.com> <53D09794.3090806@oracle.com> <1406186302.2920.4.camel@cirrus> Message-ID: > > > > > > Comments to your questions inlined. > > Thanks, > > Jenny > > On 7/23/2014 8:01 AM, Martin Makundi wrote: > > > > > 1. Log says "recent GC overhead higher than threshold after > > > GC, recent GC overhead: 13.94 %, threshold: 10.00 %" what is > > > that 10% threshold how can it be changed and what will it > > > affect? > > G1 uses this to decide when to expand the heap. It means when > > the gc pause time is over 10% of the application time, then we > > need to expand the heap. > > > > Is this adjustable, I did not notice any parameters affecting this 10% > > threshold? > > GC overhead = 100.0 * (1.0 / (1.0 + GCTimeRatio)) > > This somewhat complicated formula is due to backwards compatibility. > Is this something that I can tune also for g1gc, e.g., example, with -XX: *GCTimeRatio*=9 default? What will it affect and how does it relate to MaxGCPauseMillis, or are they related at all to each other? 
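For illustration only (not part of the thread; the class name is made up): plugging numbers into the formula Thomas gives above shows where the "threshold: 10.00 %" in the earlier log message comes from, and what changing GCTimeRatio does to that number.

public class GcOverheadThreshold {
    public static void main(String[] args) {
        // GC overhead threshold (percent) = 100.0 * (1.0 / (1.0 + GCTimeRatio))
        int gcTimeRatio = 9;  // the value implied by the 10% threshold reported in the log
        double threshold = 100.0 * (1.0 / (1.0 + gcTimeRatio));
        System.out.println(threshold);  // prints 10.0
        // A larger ratio changes the threshold, e.g. -XX:GCTimeRatio=19 gives 5.0
    }
}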
** Martin > > Thomas > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomas.schatzl at oracle.com Thu Jul 24 07:27:45 2014 From: thomas.schatzl at oracle.com (Thomas Schatzl) Date: Thu, 24 Jul 2014 09:27:45 +0200 Subject: G1gc compaction algorithm In-Reply-To: References: <1405493510.2665.17.camel@cirrus> <1405494459.2665.20.camel@cirrus> <53C6EF52.5060006@oracle.com> <53C73F4F.3050403@oracle.com> <53C8206F.9070303@oracle.com> <53D09794.3090806@oracle.com> <1406186302.2920.4.camel@cirrus> Message-ID: <1406186865.2920.8.camel@cirrus> Hi, On Thu, 2014-07-24 at 10:22 +0300, Martin Makundi wrote: > > > > > > Comments to your questions inlined. > > Thanks, > > Jenny > > On 7/23/2014 8:01 AM, Martin Makundi wrote: > > > > > 1. Log says "recent GC overhead higher than > threshold after > > > GC, recent GC overhead: 13.94 %, threshold: 10.00 > %" what is > > > that 10% threshold how can it be changed and what > will it > > > affect? > > G1 uses this to decide when to expand the heap. It > means when > > the gc pause time is over 10% of the application > time, then we > > need to expand the heap. > > > > Is this adjustable, I did not notice any parameters > affecting this 10% > > threshold? > > > GC overhead = 100.0 * (1.0 / (1.0 + GCTimeRatio)) > > This somewhat complicated formula is due to backwards > compatibility. > > > Is this something that I can tune also for g1gc, e.g., example, with > -XX:GCTimeRatio=9 default? > > > What will it affect and how does it relate to MaxGCPauseMillis, or are > they related at all to each other? Sorry, I saw the other question too late, sorry. :/ Yes, GCTimeRatio can be used with g1 too. It only seems to be used to determine whether the heap should be expanded at GC or not. I.e. if the current gc overhead is smaller than that value, it does not, otherwise it does. If you did set -Xms == -Xmx increasing this does not have any particular effect. Thomas From thomas.schatzl at oracle.com Thu Jul 24 07:49:14 2014 From: thomas.schatzl at oracle.com (Thomas Schatzl) Date: Thu, 24 Jul 2014 09:49:14 +0200 Subject: G1gc compaction algorithm In-Reply-To: References: <1405492250.2665.12.camel@cirrus> <1405493510.2665.17.camel@cirrus> <1405494459.2665.20.camel@cirrus> <53C6EF52.5060006@oracle.com> <53C73F4F.3050403@oracle.com> <53C8206F.9070303@oracle.com> Message-ID: <1406188154.2920.25.camel@cirrus> Hi, some somewhat late response... On Sat, 2014-07-19 at 05:38 +0300, Martin Makundi wrote: > BTW: We have 16 cores and we probably need 4 for serving web users, is > there a way we can utilize more efficiently the excess resources > towards gc while keeping user experienced pause at minimum? You can try assigning threads to the process using the usual OS tools (cpuset, taskset or one of the other ways). You also want to decrease the number of parallel gc threads according to the number of assigned cpus, as the 7uX ergonomics do not check the actual number of threads the process is allowed to run with. Others reported improvements when setting the number of parallel gc threads to the number of available cpu cores (not threads). Again, this likely depends on your workload. > > > 2014-07-17 22:21 GMT+03:00 Martin Makundi > : > > There are 3 factors: MaxGCPauseMilli, > MixedGCCountTarget, G1OldCSetRegionThresholdPercent > > The candidate regions is calculated by some algorithm. 
> minimum regions = (candidate > regions)/MixedGCCountTarget > maximum regions = (heap regions)* > G1OldCSetRegionThresholdPercent > > If the estimated mixed gc time is < MaxGCPauseMilli, > g1 will try to add the candidate regions to cset while > keeping the estimated time below MaxGCPauseMilli, as > long as it is less than maximum regions, and > reclaimable percentage higher than the waste limit. > If the estimated mixed gc time is > MaxGCPauseMilli, > g1 will add minimum regions to cset. > > > > What's the science behind these equations (in short) or are > they purely ad-hoc? There are a few reasons for the implementation and use of these formulas, like backward compatibility (trying to shoehorn functionality of other collectors on G1, or carrying along earlier wrong decisions), trying to work around performance anomalies (or bugs :), trying to get good performance considering a (mythical) average application without too much tampering, the fact that in pause time control simply more stuff can go wrong given the GC has no good idea about the application, and others. So yes, particularly the default values are somewhat ad-hoc based on measurements on "representative" applications. As these and the implementation change over time, they might not always completely fit your application. I think in particular the pause time control options are more hairy than others. Thomas From alexbool at yandex-team.ru Fri Jul 25 07:30:41 2014 From: alexbool at yandex-team.ru (Alexander Bulaev) Date: Fri, 25 Jul 2014 11:30:41 +0400 Subject: Strange behaviour of G1 GC Message-ID: <4A4088B9-62F6-4B1E-8E8B-55C79A61C9FE@yandex-team.ru> Hello! I am writing to you about strange behaviuor of G1 GC that I have encountered in our production environment. Sometimes there are happening Full GCs that are cleaning a lot of garbage: 2014-07-24T14:27:57.020+0400: 94749.771: [Full GC (Allocation Failure) 11G->5126M(12G), 13.7944745 secs] I suppose that this is garbage in the old generation. I expect it to be cleaned during mixed and concurrent GCs, but, according to the logs, the last concurrent phase happened over half an hour earlier prior to that Full GC: 2014-07-24T13:49:43.228+0400: 92455.979: [GC concurrent-mark-start] And there is no evidence in logs that this concurrent mark has ever ended. Seems like that concurrent GC just hang somewhere. Same thing with mixed GCs: 2014-07-24T13:42:47.425+0400: 92040.176: [GC pause (G1 Evacuation Pause) (mixed) Please help me understand this problem and find a solution if possible. We are using Java 8u5. I can supply these GC logs if needed. Thanks. Best regards, Alexander Bulaev Java developer, Yandex LLC From alexbool at yandex-team.ru Fri Jul 25 14:50:02 2014 From: alexbool at yandex-team.ru (Alexander Bulaev) Date: Fri, 25 Jul 2014 18:50:02 +0400 Subject: Strange behaviour of G1 GC In-Reply-To: <4A4088B9-62F6-4B1E-8E8B-55C79A61C9FE@yandex-team.ru> References: <4A4088B9-62F6-4B1E-8E8B-55C79A61C9FE@yandex-team.ru> Message-ID: <8BBB060E-BE0C-4A6D-8DB4-8226120B74A8@yandex-team.ru> Full log file is available at https://www.dropbox.com/s/w17iyy2cxsmhgyo/web-gc.log On 25.07.2014, at 11:30, Alexander Bulaev wrote: > Hello! > > I am writing to you about strange behaviuor of G1 GC that I have encountered in our production environment. > Sometimes there are happening Full GCs that are cleaning a lot of garbage: > 2014-07-24T14:27:57.020+0400: 94749.771: [Full GC (Allocation Failure) 11G->5126M(12G), 13.7944745 secs] > > I suppose that this is garbage in the old generation. 
I expect it to be cleaned during mixed and concurrent GCs, but, according to the logs, the last concurrent phase happened over half an hour earlier prior to that Full GC: > 2014-07-24T13:49:43.228+0400: 92455.979: [GC concurrent-mark-start] > > And there is no evidence in logs that this concurrent mark has ever ended. Seems like that concurrent GC just hang somewhere. > Same thing with mixed GCs: > 2014-07-24T13:42:47.425+0400: 92040.176: [GC pause (G1 Evacuation Pause) (mixed) > > Please help me understand this problem and find a solution if possible. > We are using Java 8u5. I can supply these GC logs if needed. > Thanks. > > Best regards, > Alexander Bulaev > Java developer, Yandex LLC Best regards, Alexander Bulaev Java developer, Yandex LLC From charlie.hunt at oracle.com Fri Jul 25 19:03:43 2014 From: charlie.hunt at oracle.com (charlie hunt) Date: Fri, 25 Jul 2014 14:03:43 -0500 Subject: Strange behaviour of G1 GC In-Reply-To: <8BBB060E-BE0C-4A6D-8DB4-8226120B74A8@yandex-team.ru> References: <4A4088B9-62F6-4B1E-8E8B-55C79A61C9FE@yandex-team.ru> <8BBB060E-BE0C-4A6D-8DB4-8226120B74A8@yandex-team.ru> Message-ID: <90B8A722-9709-4D27-85DA-EAF0E9703302@oracle.com> Hi Alexander, Looks like your app is doing frequent large (humongous) object allocations. Some of them are as large as 83 MB, some are 16 MB and some are 4 MB. You could try increasing your G1 region size to 8 MB using -XX:G1HeapRegionSize=8m. That may help some of the Full GCs. If you are frequently allocating 83 MB and 16 MB objects, (increasing region size for those is likely not to be practical), your alternatives (for now) may be limited to any one of, or any combination of: - Lowering the InitiatingHeapOccupancyPercent to run the concurrent cycle more frequently. Currently, humongous objects are not collected until a concurrent cycle is executed. - increasing the size of the overall Java heap so there is more Java heap available for humongous objects, i.e. the Full GCs occur less frequently, assuming the concurrent cycle is running as frequently as it does now, that may require tuning the InitiatingHeapOccupancyPercent. - refactor the application to reduce the frequency of those large object allocations, either by allocating smaller objects, or allocating and re-using the (really) large objects rather than creating new ones. You might find it useful to take a look at this JavaOne 2013 session: https://www.parleys.com/share_channel.html#play/525528dbe4b0a43ac12124d7/about start at about the 17:15 mark with the G1 GC Analysis slide and listen through about the 28:51 mark. This will help you understand the humongous object allocations. hths, charlie On Jul 25, 2014, at 9:50 AM, Alexander Bulaev wrote: > Full log file is available at https://www.dropbox.com/s/w17iyy2cxsmhgyo/web-gc.log > > On 25.07.2014, at 11:30, Alexander Bulaev wrote: > >> Hello! >> >> I am writing to you about strange behaviuor of G1 GC that I have encountered in our production environment. >> Sometimes there are happening Full GCs that are cleaning a lot of garbage: >> 2014-07-24T14:27:57.020+0400: 94749.771: [Full GC (Allocation Failure) 11G->5126M(12G), 13.7944745 secs] >> >> I suppose that this is garbage in the old generation. I expect it to be cleaned during mixed and concurrent GCs, but, according to the logs, the last concurrent phase happened over half an hour earlier prior to that Full GC: >> 2014-07-24T13:49:43.228+0400: 92455.979: [GC concurrent-mark-start] >> >> And there is no evidence in logs that this concurrent mark has ever ended. 
Seems like that concurrent GC just hang somewhere. >> Same thing with mixed GCs: >> 2014-07-24T13:42:47.425+0400: 92040.176: [GC pause (G1 Evacuation Pause) (mixed) >> >> Please help me understand this problem and find a solution if possible. >> We are using Java 8u5. I can supply these GC logs if needed. >> Thanks. >> >> Best regards, >> Alexander Bulaev >> Java developer, Yandex LLC > > Best regards, > Alexander Bulaev > Java developer, Yandex LLC > > > > _______________________________________________ > hotspot-gc-use mailing list > hotspot-gc-use at openjdk.java.net > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexbool at yandex-team.ru Mon Jul 28 07:48:51 2014 From: alexbool at yandex-team.ru (Alexander Bulaev) Date: Mon, 28 Jul 2014 11:48:51 +0400 Subject: Strange behaviour of G1 GC In-Reply-To: <90B8A722-9709-4D27-85DA-EAF0E9703302@oracle.com> References: <4A4088B9-62F6-4B1E-8E8B-55C79A61C9FE@yandex-team.ru> <8BBB060E-BE0C-4A6D-8DB4-8226120B74A8@yandex-team.ru> <90B8A722-9709-4D27-85DA-EAF0E9703302@oracle.com> Message-ID: Hi Charlie, thanks for your reply. Yes, the application is doing humongous allocations also causing Full GCs, but at least this is understanadble. I?ll try the options you mentioned. Also, AFAIK there are some improvements on humongous allocations coming in 8u20. But the really mysterious thing is that never ending concurrent mark. Do you know something about it? On 25.07.2014, at 23:03, charlie hunt wrote: > Hi Alexander, > > Looks like your app is doing frequent large (humongous) object allocations. Some of them are as large as 83 MB, some are 16 MB and some are 4 MB. You could try increasing your G1 region size to 8 MB using -XX:G1HeapRegionSize=8m. That may help some of the Full GCs. > > If you are frequently allocating 83 MB and 16 MB objects, (increasing region size for those is likely not to be practical), your alternatives (for now) may be limited to any one of, or any combination of: > - Lowering the InitiatingHeapOccupancyPercent to run the concurrent cycle more frequently. Currently, humongous objects are not collected until a concurrent cycle is executed. > - increasing the size of the overall Java heap so there is more Java heap available for humongous objects, i.e. the Full GCs occur less frequently, assuming the concurrent cycle is running as frequently as it does now, that may require tuning the InitiatingHeapOccupancyPercent. > - refactor the application to reduce the frequency of those large object allocations, either by allocating smaller objects, or allocating and re-using the (really) large objects rather than creating new ones. > > You might find it useful to take a look at this JavaOne 2013 session: https://www.parleys.com/share_channel.html#play/525528dbe4b0a43ac12124d7/about start at about the 17:15 mark with the G1 GC Analysis slide and listen through about the 28:51 mark. This will help you understand the humongous object allocations. > > hths, > > charlie > > On Jul 25, 2014, at 9:50 AM, Alexander Bulaev wrote: > >> Full log file is available at https://www.dropbox.com/s/w17iyy2cxsmhgyo/web-gc.log >> >> On 25.07.2014, at 11:30, Alexander Bulaev wrote: >> >>> Hello! >>> >>> I am writing to you about strange behaviuor of G1 GC that I have encountered in our production environment. 
>>> Sometimes there are happening Full GCs that are cleaning a lot of garbage: >>> 2014-07-24T14:27:57.020+0400: 94749.771: [Full GC (Allocation Failure) 11G->5126M(12G), 13.7944745 secs] >>> >>> I suppose that this is garbage in the old generation. I expect it to be cleaned during mixed and concurrent GCs, but, according to the logs, the last concurrent phase happened over half an hour earlier prior to that Full GC: >>> 2014-07-24T13:49:43.228+0400: 92455.979: [GC concurrent-mark-start] >>> >>> And there is no evidence in logs that this concurrent mark has ever ended. Seems like that concurrent GC just hang somewhere. >>> Same thing with mixed GCs: >>> 2014-07-24T13:42:47.425+0400: 92040.176: [GC pause (G1 Evacuation Pause) (mixed) >>> >>> Please help me understand this problem and find a solution if possible. >>> We are using Java 8u5. I can supply these GC logs if needed. >>> Thanks. >>> >>> Best regards, >>> Alexander Bulaev >>> Java developer, Yandex LLC >> >> Best regards, >> Alexander Bulaev >> Java developer, Yandex LLC >> >> >> >> _______________________________________________ >> hotspot-gc-use mailing list >> hotspot-gc-use at openjdk.java.net >> http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use > Best reagrds, Alexander Bulaev Java developer, Yandex LLC From thomas.schatzl at oracle.com Mon Jul 28 08:36:01 2014 From: thomas.schatzl at oracle.com (Thomas Schatzl) Date: Mon, 28 Jul 2014 10:36:01 +0200 Subject: Strange behaviour of G1 GC In-Reply-To: References: <4A4088B9-62F6-4B1E-8E8B-55C79A61C9FE@yandex-team.ru> <8BBB060E-BE0C-4A6D-8DB4-8226120B74A8@yandex-team.ru> <90B8A722-9709-4D27-85DA-EAF0E9703302@oracle.com> Message-ID: <1406536561.2621.4.camel@cirrus> hi, On Mon, 2014-07-28 at 11:48 +0400, Alexander Bulaev wrote: > Hi Charlie, > thanks for your reply. > > Yes, the application is doing humongous allocations also causing Full GCs, but at least this is understanadble. I?ll try the options you mentioned. Also, AFAIK there are some improvements on humongous allocations coming in 8u20. > But the really mysterious thing is that never ending concurrent mark. Do you know something about it? Looks like https://bugs.openjdk.java.net/browse/JDK-8040803 . The fix is in 8u20. Not sure if it is a problem (actually I do not think so), but the mark stack seems to overflow frequently. Maybe increasing the "MarkStackSize" helps. Thanks, Thomas From alexbool at yandex-team.ru Mon Jul 28 08:46:45 2014 From: alexbool at yandex-team.ru (Alexander Bulaev) Date: Mon, 28 Jul 2014 12:46:45 +0400 Subject: Strange behaviour of G1 GC In-Reply-To: <1406536561.2621.4.camel@cirrus> References: <4A4088B9-62F6-4B1E-8E8B-55C79A61C9FE@yandex-team.ru> <8BBB060E-BE0C-4A6D-8DB4-8226120B74A8@yandex-team.ru> <90B8A722-9709-4D27-85DA-EAF0E9703302@oracle.com> <1406536561.2621.4.camel@cirrus> Message-ID: <088F1A9A-0A97-41E7-A2BF-B677C7DF6E45@yandex-team.ru> Hi Thomas, looks like this is the issue. Is the fix incuded in 8u20 early access builds? Thanks. On 28.07.2014, at 12:36, Thomas Schatzl wrote: > hi, > > On Mon, 2014-07-28 at 11:48 +0400, Alexander Bulaev wrote: >> Hi Charlie, >> thanks for your reply. >> >> Yes, the application is doing humongous allocations also causing Full GCs, but at least this is understanadble. I?ll try the options you mentioned. Also, AFAIK there are some improvements on humongous allocations coming in 8u20. >> But the really mysterious thing is that never ending concurrent mark. Do you know something about it? > > Looks like https://bugs.openjdk.java.net/browse/JDK-8040803 . 
The fix is > in 8u20. > > Not sure if it is a problem (actually I do not think so), but the mark > stack seems to overflow frequently. Maybe increasing the "MarkStackSize" > helps. > > Thanks, > Thomas > > From thomas.schatzl at oracle.com Mon Jul 28 08:49:00 2014 From: thomas.schatzl at oracle.com (Thomas Schatzl) Date: Mon, 28 Jul 2014 10:49:00 +0200 Subject: Strange behaviour of G1 GC In-Reply-To: <088F1A9A-0A97-41E7-A2BF-B677C7DF6E45@yandex-team.ru> References: <4A4088B9-62F6-4B1E-8E8B-55C79A61C9FE@yandex-team.ru> <8BBB060E-BE0C-4A6D-8DB4-8226120B74A8@yandex-team.ru> <90B8A722-9709-4D27-85DA-EAF0E9703302@oracle.com> <1406536561.2621.4.camel@cirrus> <088F1A9A-0A97-41E7-A2BF-B677C7DF6E45@yandex-team.ru> Message-ID: <1406537340.2621.8.camel@cirrus> Hi, On Mon, 2014-07-28 at 12:46 +0400, Alexander Bulaev wrote: > Hi Thomas, looks like this is the issue. > Is the fix incuded in 8u20 early access builds? > Thanks. JDK-8040803 has been fixed in 8u20-b17. Current EA is 8u20-b22, so I can confirm this. Thomas > > On 28.07.2014, at 12:36, Thomas Schatzl wrote: > > > hi, > > > > On Mon, 2014-07-28 at 11:48 +0400, Alexander Bulaev wrote: > >> Hi Charlie, > >> thanks for your reply. > >> > >> Yes, the application is doing humongous allocations also causing Full GCs, but at least this is understanadble. I?ll try the options you mentioned. Also, AFAIK there are some improvements on humongous allocations coming in 8u20. > >> But the really mysterious thing is that never ending concurrent mark. Do you know something about it? > > > > Looks like https://bugs.openjdk.java.net/browse/JDK-8040803 . The fix is > > in 8u20. > > > > Not sure if it is a problem (actually I do not think so), but the mark > > stack seems to overflow frequently. Maybe increasing the "MarkStackSize" > > helps. > > > > Thanks, > > Thomas > > > > > > > From savasudevan at ebay.com Mon Jul 28 09:01:21 2014 From: savasudevan at ebay.com (Vasudevan, Sattish) Date: Mon, 28 Jul 2014 09:01:21 +0000 Subject: Memory leak in Parallel class loading Message-ID: <1DC9AF2F4037B2498ECB536F93433799163DDE2E@PHX-EXRDA-S22.corp.ebay.com> Hi team, We encountered a memory leak issue while testing one of our apps. When probed the heap dump using MAT, we found 6500 instances of our class loaded by org.jboss.modules.ModuleClassLoader occupy 45% of our heap size. We also noticed that reference tree ends in java.util.concurrent.ConcurrentHashMap. >From the above details, the issue seems to be related parallel class loading introduced in Java 7.Java 7 introduced support for parallel classloading. http://openjdk.java.net/groups/core-libs/ClassLoaderProposal.html The solution for parallel classloading was to add to each class loader a ConcurrentHashMap, referenced through a new field, parallelLockMap. This contains a mapping from class names to Objects to use as a classloading lock for that class name. Want to know is their way by which we can disable parallel class loading to confirm on the leak suspect. We use oracle jdk 1.7 Update 45. Thanks Sattish. -------------- next part -------------- An HTML attachment was scrubbed... 
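To make the mechanism Sattish describes concrete, here is a minimal sketch (illustration only -- this is not the JBoss ModuleClassLoader, and it says nothing about where the leak actually is). A loader that registers itself as parallel capable takes a per-class-name lock from getClassLoadingLock(); the JDK stores one such lock object per distinct class name in the loader's internal parallelLockMap and keeps them for the lifetime of the loader.

public class MyParallelLoader extends ClassLoader {
    static {
        // Opt in to parallel class loading (Java 7+). Without this call,
        // getClassLoadingLock(name) simply returns the loader itself.
        registerAsParallelCapable();
    }

    @Override
    protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
        // For a parallel-capable loader this returns a dedicated lock object
        // for 'name'; these lock objects accumulate in parallelLockMap.
        synchronized (getClassLoadingLock(name)) {
            Class<?> c = findLoadedClass(name);
            if (c == null) {
                c = super.loadClass(name, false);
            }
            if (resolve) {
                resolveClass(c);
            }
            return c;
        }
    }
}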
URL: From ecki at zusammenkunft.net Mon Jul 28 19:29:33 2014 From: ecki at zusammenkunft.net (Bernd Eckenfels) Date: Mon, 28 Jul 2014 21:29:33 +0200 Subject: Memory leak in Parallel class loading In-Reply-To: <1DC9AF2F4037B2498ECB536F93433799163DDE2E@PHX-EXRDA-S22.corp.ebay.com> References: <1DC9AF2F4037B2498ECB536F93433799163DDE2E@PHX-EXRDA-S22.corp.ebay.com> Message-ID: <20140728212933.00000014.ecki@zusammenkunft.net> Hello, do you mean "instances" as in Objects (with the type of your class), or do you mean instances of the class object. In the later case, are those all of the same class loader? If you look at the Heap with MAT, have a look at the incoming references and the path to the GC root of that CHM. I would suspect it is something else than the class loader locks. Greetings Bernd Am Mon, 28 Jul 2014 09:01:21 +0000 schrieb "Vasudevan, Sattish" : > Hi team, > > We encountered a memory leak issue while testing one of our apps. > When probed the heap dump using MAT, we found 6500 instances of our > class loaded by org.jboss.modules.ModuleClassLoader occupy 45% of our > heap size. > > We also noticed that reference tree ends in > java.util.concurrent.ConcurrentHashMap. > > >From the above details, the issue seems to be related parallel class > >loading introduced in Java 7.Java 7 introduced support for parallel > >classloading. > > http://openjdk.java.net/groups/core-libs/ClassLoaderProposal.html > > The solution for parallel classloading was to add to each class > loader a ConcurrentHashMap, referenced through a new field, > parallelLockMap. > > This contains a mapping from class names to Objects to use as a > classloading lock for that class name. > > Want to know is their way by which we can disable parallel class > loading to confirm on the leak suspect. We use oracle jdk 1.7 Update > 45. > > Thanks > > Sattish. > > > > From martin.makundi at koodaripalvelut.com Tue Jul 29 03:27:28 2014 From: martin.makundi at koodaripalvelut.com (Martin Makundi) Date: Tue, 29 Jul 2014 06:27:28 +0300 Subject: G1gc compaction algorithm In-Reply-To: <1406186865.2920.8.camel@cirrus> References: <1405493510.2665.17.camel@cirrus> <1405494459.2665.20.camel@cirrus> <53C6EF52.5060006@oracle.com> <53C73F4F.3050403@oracle.com> <53C8206F.9070303@oracle.com> <53D09794.3090806@oracle.com> <1406186302.2920.4.camel@cirrus> <1406186865.2920.8.camel@cirrus> Message-ID: Hi! We suffered a couple of Full GC's using regionsize 5M (it seems to be exact looking at logged actual parameters) and we tried the 16M option and this resulted in more severe Full GC behavior. Here is the promised log for 16 M setting: http://81.22.250.165/log/gc-16m.log We switch back to 5M hoping it will behave more nicely. ** Martin -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomas.schatzl at oracle.com Tue Jul 29 07:44:47 2014 From: thomas.schatzl at oracle.com (Thomas Schatzl) Date: Tue, 29 Jul 2014 09:44:47 +0200 Subject: G1gc compaction algorithm In-Reply-To: References: <1405493510.2665.17.camel@cirrus> <1405494459.2665.20.camel@cirrus> <53C6EF52.5060006@oracle.com> <53C73F4F.3050403@oracle.com> <53C8206F.9070303@oracle.com> <53D09794.3090806@oracle.com> <1406186302.2920.4.camel@cirrus> <1406186865.2920.8.camel@cirrus> Message-ID: <1406619887.2620.3.camel@cirrus> Hi, On Tue, 2014-07-29 at 06:27 +0300, Martin Makundi wrote: > Hi! 
> > > We suffered a couple of Full GC's using regionsize 5M (it seems to be > exact looking at logged actual parameters) and we tried the 16M option > and this resulted in more severe Full GC behavior. > > > Here is the promised log for 16 M > setting: http://81.22.250.165/log/gc-16m.log > > > We switch back to 5M hoping it will behave more nicely. About the 5M region size issue: the VM you use does not update the parameters. That has been fixed only later. However, other log messages give away the true region size (from log/gc.log): {Heap before GC invocations=1 (full 0): garbage-first heap total 20971520K, used 381687K [0x00007f1530000000, 0x00007f1a30000000, 0x00007f1cb0000000) ***region size 4096K***, 94 young (385024K), 3 survivors (12288K) compacting perm gen total 524288K, used 13875K [0x00007f1cb0000000, 0x00007f1cd0000000, 0x00007f1cd0000000) the space 524288K, 2% used [0x00007f1cb0000000, 0x00007f1cb0d8cfd8, 0x00007f1cb0d8d000, 0x00007f1cd0000000) :) Thanks, Thomas
From yu.zhang at oracle.com Thu Jul 31 06:46:03 2014 From: yu.zhang at oracle.com (Yu Zhang) Date: Wed, 30 Jul 2014 23:46:03 -0700 Subject: G1gc compaction algorithm In-Reply-To: References: <53C6EF52.5060006@oracle.com> <53C73F4F.3050403@oracle.com> <53C8206F.9070303@oracle.com> <53D09794.3090806@oracle.com> <1406186302.2920.4.camel@cirrus> <1406186865.2920.8.camel@cirrus> Message-ID: <53D9E62B.3070801@oracle.com> Martin, Thanks for the logs. With the 4m region size, after marking, there are 252 HUMS and 80 HUMC regions. With the 16m region size, after marking, there are 1 HUMS and 0 HUMC regions. The larger region size reduced humongous allocation greatly. The mixed gc cannot clean much because the predicted time is too high. I think we need to reduce the other pause times first: UpdateRS, ScanRS, Other... I will get back to you after taking a closer look. Thanks, Jenny On 7/28/2014 8:27 PM, Martin Makundi wrote: > Hi! > > We suffered a couple of Full GC's using regionsize 5M (it seems to be > exact looking at logged actual parameters) and we tried the 16M option > and this resulted in more severe Full GC behavior. > > Here is the promised log for 16 M setting: > http://81.22.250.165/log/gc-16m.log > > We switch back to 5M hoping it will behave more nicely. > > ** > Martin
From yu.zhang at oracle.com Thu Jul 31 17:22:21 2014 From: yu.zhang at oracle.com (Yu Zhang) Date: Thu, 31 Jul 2014 10:22:21 -0700 Subject: G1gc compaction algorithm In-Reply-To: References: <53C6EF52.5060006@oracle.com> <53C73F4F.3050403@oracle.com> <53C8206F.9070303@oracle.com> <53D09794.3090806@oracle.com> <1406186302.2920.4.camel@cirrus> <1406186865.2920.8.camel@cirrus> Message-ID: <53DA7B4D.3090000@oracle.com> Martin, The ScanRS for mixed gc is extremely long, 1000-9000ms. Because it is over the pause time goal, only the minimum number of old regions can be added to the CSet, so mixed gc is not keeping up. Can you do a run keeping the 16m region size, without G1PrintRegionLivenessInfo and PrintHeapAtGC, but with -XX:+G1SummarizeRSetStats -XX:G1SummarizeRSetStatsPeriod=10? This should tell us more about the RSets. While the UpdateRS is not as bad as ScanRS, we can try to push it to the concurrent threads. Can you add -XX:G1RSetUpdatingPauseTimePercent=5? I am hoping this brings the UpdateRS down to 50ms. Thanks, Jenny On 7/28/2014 8:27 PM, Martin Makundi wrote: > Hi! > > We suffered a couple of Full GC's using regionsize 5M (it seems to be > exact looking at logged actual parameters) and we tried the 16M option > and this resulted in more severe Full GC behavior.
> > Here is the promised log for 16 M setting: > http://81.22.250.165/log/gc-16m.log > > We switch back to 5M hoping it will behave more nicely. > > ** > Martin From martin.makundi at koodaripalvelut.com Thu Jul 31 21:33:42 2014 From: martin.makundi at koodaripalvelut.com (Martin Makundi) Date: Fri, 1 Aug 2014 00:33:42 +0300 Subject: G1gc compaction algorithm In-Reply-To: <53DA7B4D.3090000@oracle.com> References: <53C6EF52.5060006@oracle.com> <53C73F4F.3050403@oracle.com> <53C8206F.9070303@oracle.com> <53D09794.3090806@oracle.com> <1406186302.2920.4.camel@cirrus> <1406186865.2920.8.camel@cirrus> <53DA7B4D.3090000@oracle.com> Message-ID: Hi! G1SummarizeRSetStats does not seem to work, jvm says: Improperly specified VM option 'G1SummarizeRSetStatsPeriod=10' Error: Could not create the Java Virtual Machine. Error: A fatal exception has occurred. Program will exit. Same for both new options 2014-07-31 20:22 GMT+03:00 Yu Zhang : > Martin, > > The ScanRS for mixed gc is extremely long, 1000-9000ms. Because it is > over pause time goal, minimum old regions can be added to CSet. So mixed > gc is not keeping up. > > Can do a run keeping 16m region size, no G1PrintRegionLivenessInfo, no > PrintHeapAtGC. But -XX:+G1SummarizeRSetStats -XX: > G1SummarizeRSetStatsPeriod=10 > > This should tell us more about RSet information. > > While the UpdateRS is not as bad as ScanRS, we can try to push it to the > concurrent threads. Can you add -XX:G1RSetUpdatingPauseTimePercent=5. I > am hoping this brings the UpdateRS down to 50ms. > > > Thanks, > Jenny > > On 7/28/2014 8:27 PM, Martin Makundi wrote: > >> Hi! >> >> We suffered a couple of Full GC's using regionsize 5M (it seems to be >> exact looking at logged actual parameters) and we tried the 16M option and >> this resulted in more severe Full GC behavior. >> >> Here is the promised log for 16 M setting: http://81.22.250.165/log/gc- >> 16m.log >> >> We switch back to 5M hoping it will behave more nicely. >> >> ** >> Martin >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From yu.zhang at oracle.com Thu Jul 31 21:37:01 2014 From: yu.zhang at oracle.com (Yu Zhang) Date: Thu, 31 Jul 2014 14:37:01 -0700 Subject: G1gc compaction algorithm In-Reply-To: References: <53C6EF52.5060006@oracle.com> <53C73F4F.3050403@oracle.com> <53C8206F.9070303@oracle.com> <53D09794.3090806@oracle.com> <1406186302.2920.4.camel@cirrus> <1406186865.2920.8.camel@cirrus> <53DA7B4D.3090000@oracle.com> Message-ID: <53DAB6FD.1050501@oracle.com> Martin, These 2 need to run with -XX:+UnlockDiagnosticVMOptions Thanks, Jenny On 7/31/2014 2:33 PM, Martin Makundi wrote: > Hi! > > G1SummarizeRSetStats does not seem to work, jvm says: > > Improperly specified VM option 'G1SummarizeRSetStatsPeriod=10' > Error: Could not create the Java Virtual Machine. > Error: A fatal exception has occurred. Program will exit. > > Same for both new options > > > > 2014-07-31 20:22 GMT+03:00 Yu Zhang >: > > Martin, > > The ScanRS for mixed gc is extremely long, 1000-9000ms. Because > it is over pause time goal, minimum old regions can be added to > CSet. So mixed gc is not keeping up. > > Can do a run keeping 16m region size, no > G1PrintRegionLivenessInfo, no PrintHeapAtGC. But > -XX:+G1SummarizeRSetStats -XX:G1SummarizeRSetStatsPeriod=10 > > This should tell us more about RSet information. > > While the UpdateRS is not as bad as ScanRS, we can try to push it > to the concurrent threads. Can you add > -XX:G1RSetUpdatingPauseTimePercent=5. 
I am hoping this brings the > UpdateRS down to 50ms. > > > Thanks, > Jenny > > On 7/28/2014 8:27 PM, Martin Makundi wrote: > > Hi! > > We suffered a couple of Full GC's using regionsize 5M (it > seems to be exact looking at logged actual parameters) and we > tried the 16M option and this resulted in more severe Full GC > behavior. > > Here is the promised log for 16 M setting: > http://81.22.250.165/log/gc-16m.log > > We switch back to 5M hoping it will behave more nicely. > > ** > Martin > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From martin.makundi at koodaripalvelut.com Thu Jul 31 21:39:05 2014 From: martin.makundi at koodaripalvelut.com (Martin Makundi) Date: Fri, 1 Aug 2014 00:39:05 +0300 Subject: G1gc compaction algorithm In-Reply-To: <53DAB6FD.1050501@oracle.com> References: <53C6EF52.5060006@oracle.com> <53C73F4F.3050403@oracle.com> <53C8206F.9070303@oracle.com> <53D09794.3090806@oracle.com> <1406186302.2920.4.camel@cirrus> <1406186865.2920.8.camel@cirrus> <53DA7B4D.3090000@oracle.com> <53DAB6FD.1050501@oracle.com> Message-ID: Hi! UnlockDiagnosticVMOptions is on (though later (on the right side) in the command line). Jvm version is java version "1.7.0_55" Java(TM) SE Runtime Environment (build 1.7.0_55-b13) Java HotSpot(TM) 64-Bit Server VM (build 24.55-b03, mixed mode) 2014-08-01 0:37 GMT+03:00 Yu Zhang : > Martin, > > These 2 need to run with -XX:+UnlockDiagnosticVMOptions > > Thanks, > Jenny > > On 7/31/2014 2:33 PM, Martin Makundi wrote: > > Hi! > > G1SummarizeRSetStats does not seem to work, jvm says: > > Improperly specified VM option 'G1SummarizeRSetStatsPeriod=10' > Error: Could not create the Java Virtual Machine. > Error: A fatal exception has occurred. Program will exit. > > Same for both new options > > > > 2014-07-31 20:22 GMT+03:00 Yu Zhang : > >> Martin, >> >> The ScanRS for mixed gc is extremely long, 1000-9000ms. Because it is >> over pause time goal, minimum old regions can be added to CSet. So mixed >> gc is not keeping up. >> >> Can do a run keeping 16m region size, no G1PrintRegionLivenessInfo, no >> PrintHeapAtGC. But -XX:+G1SummarizeRSetStats >> -XX:G1SummarizeRSetStatsPeriod=10 >> >> This should tell us more about RSet information. >> >> While the UpdateRS is not as bad as ScanRS, we can try to push it to the >> concurrent threads. Can you add -XX:G1RSetUpdatingPauseTimePercent=5. I >> am hoping this brings the UpdateRS down to 50ms. >> >> >> Thanks, >> Jenny >> >> On 7/28/2014 8:27 PM, Martin Makundi wrote: >> >>> Hi! >>> >>> We suffered a couple of Full GC's using regionsize 5M (it seems to be >>> exact looking at logged actual parameters) and we tried the 16M option and >>> this resulted in more severe Full GC behavior. >>> >>> Here is the promised log for 16 M setting: >>> http://81.22.250.165/log/gc-16m.log >>> >>> We switch back to 5M hoping it will behave more nicely. >>> >>> ** >>> Martin >>> >> >> > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From martin.makundi at koodaripalvelut.com Thu Jul 31 21:52:49 2014 From: martin.makundi at koodaripalvelut.com (Martin Makundi) Date: Fri, 1 Aug 2014 00:52:49 +0300 Subject: G1gc compaction algorithm In-Reply-To: References: <53C6EF52.5060006@oracle.com> <53C73F4F.3050403@oracle.com> <53C8206F.9070303@oracle.com> <53D09794.3090806@oracle.com> <1406186302.2920.4.camel@cirrus> <1406186865.2920.8.camel@cirrus> <53DA7B4D.3090000@oracle.com> <53DAB6FD.1050501@oracle.com> Message-ID: Strange that it is in the property summary but doesn't allow setting it. 2014-08-01 0:39 GMT+03:00 Martin Makundi : > Hi! > > UnlockDiagnosticVMOptions is on (though later (on the right side) in the > command line). Jvm version is > > java version "1.7.0_55" > Java(TM) SE Runtime Environment (build 1.7.0_55-b13) > Java HotSpot(TM) 64-Bit Server VM (build 24.55-b03, mixed mode) > > > > 2014-08-01 0:37 GMT+03:00 Yu Zhang : > > Martin, >> >> These 2 need to run with -XX:+UnlockDiagnosticVMOptions >> >> Thanks, >> Jenny >> >> On 7/31/2014 2:33 PM, Martin Makundi wrote: >> >> Hi! >> >> G1SummarizeRSetStats does not seem to work, jvm says: >> >> Improperly specified VM option 'G1SummarizeRSetStatsPeriod=10' >> Error: Could not create the Java Virtual Machine. >> Error: A fatal exception has occurred. Program will exit. >> >> Same for both new options >> >> >> >> 2014-07-31 20:22 GMT+03:00 Yu Zhang : >> >>> Martin, >>> >>> The ScanRS for mixed gc is extremely long, 1000-9000ms. Because it is >>> over pause time goal, minimum old regions can be added to CSet. So mixed >>> gc is not keeping up. >>> >>> Can do a run keeping 16m region size, no G1PrintRegionLivenessInfo, no >>> PrintHeapAtGC. But -XX:+G1SummarizeRSetStats >>> -XX:G1SummarizeRSetStatsPeriod=10 >>> >>> This should tell us more about RSet information. >>> >>> While the UpdateRS is not as bad as ScanRS, we can try to push it to the >>> concurrent threads. Can you add -XX:G1RSetUpdatingPauseTimePercent=5. I >>> am hoping this brings the UpdateRS down to 50ms. >>> >>> >>> Thanks, >>> Jenny >>> >>> On 7/28/2014 8:27 PM, Martin Makundi wrote: >>> >>>> Hi! >>>> >>>> We suffered a couple of Full GC's using regionsize 5M (it seems to be >>>> exact looking at logged actual parameters) and we tried the 16M option and >>>> this resulted in more severe Full GC behavior. >>>> >>>> Here is the promised log for 16 M setting: >>>> http://81.22.250.165/log/gc-16m.log >>>> >>>> We switch back to 5M hoping it will behave more nicely. >>>> >>>> ** >>>> Martin >>>> >>> >>> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomas.schatzl at oracle.com Thu Jul 31 22:10:46 2014 From: thomas.schatzl at oracle.com (Thomas Schatzl) Date: Fri, 01 Aug 2014 00:10:46 +0200 Subject: G1gc compaction algorithm In-Reply-To: References: <53C6EF52.5060006@oracle.com> <53C73F4F.3050403@oracle.com> <53C8206F.9070303@oracle.com> <53D09794.3090806@oracle.com> <1406186302.2920.4.camel@cirrus> <1406186865.2920.8.camel@cirrus> <53DA7B4D.3090000@oracle.com> <53DAB6FD.1050501@oracle.com> Message-ID: <1406844646.7992.0.camel@cirrus> Thomas On Fri, 2014-08-01 at 00:52 +0300, Martin Makundi wrote: > Strange that it is in the property summary but doesn't allow setting > it. > UnlockDiagnosticOptions must be to the left of the options you want to unlock. Options parsing and evaluation is done strictly from left to right. 
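For example (using only the options already under discussion; "<existing options>" is a placeholder for the rest of the command line, not something from the thread), this ordering is accepted:

java -XX:+UnlockDiagnosticVMOptions -XX:+G1SummarizeRSetStats -XX:G1SummarizeRSetStatsPeriod=10 <existing options> ...

whereas placing -XX:+UnlockDiagnosticVMOptions to the right of the two diagnostic options leaves them still locked at the point they are parsed.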
Thomas From martin.makundi at koodaripalvelut.com Thu Jul 31 22:17:45 2014 From: martin.makundi at koodaripalvelut.com (Martin Makundi) Date: Fri, 1 Aug 2014 01:17:45 +0300 Subject: G1gc compaction algorithm In-Reply-To: <53DABCAE.9060901@oracle.com> References: <53C8206F.9070303@oracle.com> <53D09794.3090806@oracle.com> <1406186302.2920.4.camel@cirrus> <1406186865.2920.8.camel@cirrus> <53DA7B4D.3090000@oracle.com> <53DAB6FD.1050501@oracle.com> <53DABCAE.9060901@oracle.com> Message-ID: Hmm.. ok, I copy pasted if from the mail, it works after typing manually, thanks. Problem seems to have been BOTH a whitespace typo AND UnlockDiagnosticOptions was on the right side. Thanks. Gathering logs now. ** Martin 2014-08-01 1:01 GMT+03:00 Yu Zhang : > maybe some hidden text? > > Thanks, > Jenny > > On 7/31/2014 2:52 PM, Martin Makundi wrote: > > Strange that it is in the property summary but doesn't allow setting it. > > > 2014-08-01 0:39 GMT+03:00 Martin Makundi < > martin.makundi at koodaripalvelut.com>: > >> Hi! >> >> UnlockDiagnosticVMOptions is on (though later (on the right side) in >> the command line). Jvm version is >> >> java version "1.7.0_55" >> Java(TM) SE Runtime Environment (build 1.7.0_55-b13) >> Java HotSpot(TM) 64-Bit Server VM (build 24.55-b03, mixed mode) >> >> >> >> 2014-08-01 0:37 GMT+03:00 Yu Zhang : >> >> Martin, >>> >>> These 2 need to run with -XX:+UnlockDiagnosticVMOptions >>> >>> Thanks, >>> Jenny >>> >>> On 7/31/2014 2:33 PM, Martin Makundi wrote: >>> >>> Hi! >>> >>> G1SummarizeRSetStats does not seem to work, jvm says: >>> >>> Improperly specified VM option 'G1SummarizeRSetStatsPeriod=10' >>> Error: Could not create the Java Virtual Machine. >>> Error: A fatal exception has occurred. Program will exit. >>> >>> Same for both new options >>> >>> >>> >>> 2014-07-31 20:22 GMT+03:00 Yu Zhang : >>> >>>> Martin, >>>> >>>> The ScanRS for mixed gc is extremely long, 1000-9000ms. Because it is >>>> over pause time goal, minimum old regions can be added to CSet. So mixed >>>> gc is not keeping up. >>>> >>>> Can do a run keeping 16m region size, no G1PrintRegionLivenessInfo, no >>>> PrintHeapAtGC. But -XX:+G1SummarizeRSetStats >>>> -XX:G1SummarizeRSetStatsPeriod=10 >>>> >>>> This should tell us more about RSet information. >>>> >>>> While the UpdateRS is not as bad as ScanRS, we can try to push it to >>>> the concurrent threads. Can you add -XX:G1RSetUpdatingPauseTimePercent=5. >>>> I am hoping this brings the UpdateRS down to 50ms. >>>> >>>> >>>> Thanks, >>>> Jenny >>>> >>>> On 7/28/2014 8:27 PM, Martin Makundi wrote: >>>> >>>>> Hi! >>>>> >>>>> We suffered a couple of Full GC's using regionsize 5M (it seems to be >>>>> exact looking at logged actual parameters) and we tried the 16M option and >>>>> this resulted in more severe Full GC behavior. >>>>> >>>>> Here is the promised log for 16 M setting: >>>>> http://81.22.250.165/log/gc-16m.log >>>>> >>>>> We switch back to 5M hoping it will behave more nicely. >>>>> >>>>> ** >>>>> Martin >>>>> >>>> >>>> >>> >>> >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: