From jai.forums2013 at gmail.com Tue Dec 4 03:57:58 2018 From: jai.forums2013 at gmail.com (Jaikiran Pai) Date: Tue, 4 Dec 2018 09:27:58 +0530 Subject: Java 8 + Docker container - CMS collector leaves around instances that have no GC roots In-Reply-To: <5604aaaf008ce26e313e5e7ad7fa1ae4844afbac.camel@redhat.com> References: <5604aaaf008ce26e313e5e7ad7fa1ae4844afbac.camel@redhat.com> Message-ID: <10777499-8dfa-779d-74c4-51c78dc040cd@gmail.com> Hello Leo and Jeremy, Thank you both for pointing me to those docs. They did help - the ElasticSearch doc, I hadn't seen before during my search and the docs at redhat.com developer blogs, although I had read them before a while back, it was still useful since I found some new JVM options that I could experiment with to get more details for the issue at hand. Apologies for the delayed response though. I wanted to run a bunch of experiments with various different configs to figure out what's really going on, instead of speculating what might be going on. After using all available/relevant JVM options and tuning the heap max sizes and the cgroup limits and using OS level commands (that I know off) to track the resource usage, we still ended up seeing the docker container hitting the limit and then being killed. All the relevant tracking tools (jconsole, the native hotspot memory tracking, the direct/mapped buffer usage, heap usage) all kept showing that the usage was well below the allowed limits. Yet the cgroups memory.usage_in_bytes kept increasing over days and ultimately kept hitting the limit set at cgroups level. At this point, it looked like we were either looking at the wrong info or we weren't really using the right tools to figure out what's really consuming this memory. After a bit more searching, we finally found these issues[1][2] that match exactly to what we are seeing (right down to the exact version of OS and docker and the nature of configuration). So it looks like it's a known issue with docker + the kernel version in use and apparently no known workaround (other than downgrading to a version of docker that doesn't hit this). There appears to be a commit[3] that has been done upstream but isn't yet released. We will evaluate how to either try and patch/test that change or figure out some other way (may be not set a --memory limit for now) to get past this. Thank you all again for the helpful replies. [1] https://github.com/opencontainers/runc/issues/1725 [2] https://github.com/moby/moby/issues/37722 [3] https://github.com/opencontainers/runc/commit/6a2c15596845f6ff5182e2022f38a65e5dfa88eb -Jaikiran On 26/11/18 3:56 PM, jwhiting at redhat.com wrote: > Hi Jaikiran > Have a look at some blog posts by old friends :) These blog posts > might be helpful (along with the other replies you received) to > diagnose the root cause of the issue. In particular native memory > tracking. > > https://developers.redhat.com/blog/2017/03/14/java-inside-docker/ > https://developers.redhat.com/blog/2017/04/04/openjdk-and-containers/ > > Regards, > Jeremy > > On Fri, 2018-11-23 at 19:25 +0530, Jaikiran Pai wrote: >> Hi, >> >> I'm looking for some inputs in debugging a high memory usage issue >> (and >> subsequently the process being killed) in one of the applications I >> deal >> with. Given that from what I have looked into this issue so far, this >> appears to be something to do with the CMS collector, so I hope this >> is >> the right place to this question. 
>> >> A bit of a background - The application that I'm dealing with is >> ElasticSearch server version 1.7.5. We use Java 8: >> >> java version "1.8.0_172" >> Java(TM) SE Runtime Environment (build 1.8.0_172-b11) >> Java HotSpot(TM) 64-Bit Server VM (build 25.172-b11, mixed mode) >> >> To add to the complexity in debugging this issue, this runs as a >> docker >> container on docker version 18.03.0-ce on a CentOS 7 host VM kernel >> version 3.10.0-693.5.2.el7.x86_64. >> >> We have been noticing that this container/process keeps getting >> killed >> by the oom-killer every few days. The dmesg logs suggest that the >> process has hit the "limits" set on the docker cgroups level. After >> debugging this over past day or so, I've reached a point where I >> can't >> make much sense of the data I'm looking at. The JVM process is >> started >> using the following params (of relevance): >> >> java -Xms2G -Xmx6G -XX:+UseParNewGC -XX:+UseConcMarkSweepGC >> -XX:CMSInitiatingOccupancyFraction=75 >> -XX:+UseCMSInitiatingOccupancyOnly >> -XX:+HeapDumpOnOutOfMemoryError -XX:+DisableExplicitGC .... >> >> As you can see it uses CMS collector with 75% of tenured/old gen for >> initiating the GC. >> >> After a few hours/days of running I notice that even though the CMS >> collector does run almost every hour or so, there are huge number of >> objects _with no GC roots_ that never get collected. These objects >> internally seem to hold on to ByteBuffer(s) which (from what I see) >> as a >> result never get released and the non-heap memory keeps building up, >> till the process gets killed. To give an example, here's the jmap >> -histo >> output (only relevant parts): >> >> 1: 861642 196271400 [B >> 2: 198776 28623744 >> org.apache.lucene.codecs.blocktree.SegmentTermsEnumFrame >> 3: 676722 21655104 >> org.apache.lucene.store.ByteArrayDataInput >> 4: 202398 19430208 >> org.apache.lucene.codecs.lucene41.Lucene41PostingsWriter$IntBlockTerm >> State >> 5: 261819 18850968 >> org.apache.lucene.util.fst.FST$Arc >> 6: 178661 17018376 [C >> 7: 31452 16856024 [I >> 8: 203911 8049352 [J >> 9: 85700 5484800 java.nio.DirectByteBufferR >> 10: 168935 5405920 >> java.util.concurrent.ConcurrentHashMap$Node >> 11: 89948 5105328 [Ljava.lang.Object; >> 12: 148514 4752448 >> org.apache.lucene.util.WeakIdentityMap$IdentityWeakReference >> >> .... >> >> Total 5061244 418712248 >> >> This above output is without the "live" option. Running jmap >> -histo:live >> returns something like (again only relevant parts): >> >> 13: 31753 1016096 >> org.apache.lucene.util.WeakIdentityMap$IdentityWeakReference >> ... >> 44: 887 127728 >> org.apache.lucene.codecs.blocktree.SegmentTermsEnumFrame >> ... >> 50: 3054 97728 >> org.apache.lucene.store.ByteArrayDataInput >> ... >> 59: 888 85248 >> org.apache.lucene.codecs.lucene41.Lucene41PostingsWriter$IntBlockTerm >> State >> >> Total 1177783 138938920 >> >> >> Notice the vast difference between the live and non-live instances of >> the same class. This isn't just in one "snapshot". I have been >> monitoring this for more than a day and this pattern continues. Even >> taking heap dumps and using tools like visualvm shows that these >> instances have "no GC root" and I have even checked the gc log files >> to >> see that the CMS collector does occasionally run. However these >> objects >> never seem to get collected. 
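One way to keep an eye on the off-heap side of the ByteBuffers described above is the platform BufferPoolMXBean, which exposes the same "direct" and "mapped" pool counters that jconsole shows; the jmap histograms only cover objects on the Java heap, not the native memory those buffers reference. A minimal polling sketch (the class name and one-minute interval are arbitrary choices, not anything taken from this thread):

```java
import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.util.List;

public class BufferPoolWatcher {
    public static void main(String[] args) throws InterruptedException {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        List<BufferPoolMXBean> pools =
                ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class);
        while (true) {
            // Heap usage as reported by the JVM itself.
            long heapUsed = memory.getHeapMemoryUsage().getUsed();
            System.out.printf("heap used: %d MB%n", heapUsed >> 20);
            // The "direct" and "mapped" pools back DirectByteBuffer and
            // MappedByteBuffer allocations, i.e. memory outside the Java heap.
            for (BufferPoolMXBean pool : pools) {
                System.out.printf("  %-6s count=%d used=%d MB capacity=%d MB%n",
                        pool.getName(), pool.getCount(),
                        pool.getMemoryUsed() >> 20, pool.getTotalCapacity() >> 20);
            }
            Thread.sleep(60_000); // sample once a minute
        }
    }
}
```

The same beans are registered in the platform MBeanServer, so they can also be read remotely over JMX without adding code to the application.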
>> >> I realize this data may not be enough to narrow down the issue, but >> what >> I am looking for is some kind of help/input/hints/suggestions on what >> I >> should be trying to figure out why these instances aren't GCed. Is >> this >> something that's expected in certain situations? >> >> -Jaikiran >> >> >> >> >> >> >> >> _______________________________________________ >> hotspot-gc-use mailing list >> hotspot-gc-use at openjdk.java.net >> http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use From poonam.bajaj at oracle.com Tue Dec 4 20:26:50 2018 From: poonam.bajaj at oracle.com (Poonam Parhar) Date: Tue, 4 Dec 2018 12:26:50 -0800 Subject: Java 8 + Docker container - CMS collector leaves around instances that have no GC roots In-Reply-To: <10777499-8dfa-779d-74c4-51c78dc040cd@gmail.com> References: <5604aaaf008ce26e313e5e7ad7fa1ae4844afbac.camel@redhat.com> <10777499-8dfa-779d-74c4-51c78dc040cd@gmail.com> Message-ID: Hello Jaikiran, Did you try collecting a heap dump and inspecting these ByteBuffers that you suspect are holding on to non-heap memory? Looking at the GC roots of these objects in the heap dump might shed some light as to why these objects are not getting collected. Thanks, Poonam On 12/3/18 7:57 PM, Jaikiran Pai wrote: > Hello Leo and Jeremy, > > Thank you both for pointing me to those docs. They did help - the > ElasticSearch doc, I hadn't seen before during my search and the docs at > redhat.com developer blogs, although I had read them before a while > back, it was still useful since I found some new JVM options that I > could experiment with to get more details for the issue at hand. > > Apologies for the delayed response though. I wanted to run a bunch of > experiments with various different configs to figure out what's really > going on, instead of speculating what might be going on. > > After using all available/relevant JVM options and tuning the heap max > sizes and the cgroup limits and using OS level commands (that I know > off) to track the resource usage, we still ended up seeing the docker > container hitting the limit and then being killed. All the relevant > tracking tools (jconsole, the native hotspot memory tracking, the > direct/mapped buffer usage, heap usage) all kept showing that the usage > was well below the allowed limits. Yet the cgroups memory.usage_in_bytes > kept increasing over days and ultimately kept hitting the limit set at > cgroups level. At this point, it looked like we were either looking at > the wrong info or we weren't really using the right tools to figure out > what's really consuming this memory. After a bit more searching, we > finally found these issues[1][2] that match exactly to what we are > seeing (right down to the exact version of OS and docker and the nature > of configuration). So it looks like it's a known issue with docker + the > kernel version in use and apparently no known workaround (other than > downgrading to a version of docker that doesn't hit this). There appears > to be a commit[3] that has been done upstream but isn't yet released. We > will evaluate how to either try and patch/test that change or figure out > some other way (may be not set a --memory limit for now) to get past this. > > Thank you all again for the helpful replies. 
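When the JVM-side numbers look healthy but the cgroup counter keeps climbing, it can also help to read the cgroup's own accounting files directly and compare overall usage with the kernel-memory (kmem) portion, since the slab growth behind the runc issue referenced below is kernel-side memory. A rough sketch, assuming a cgroup v1 layout like the CentOS 7 host in this thread, with the memory controller mounted at the usual path and kmem accounting enabled (both assumptions, not facts from this thread):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class CgroupMemorySnapshot {
    // Standard cgroup v1 memory-controller files; adjust the base path if the
    // controller is mounted elsewhere in the container.
    private static final Path BASE = Paths.get("/sys/fs/cgroup/memory");

    public static void main(String[] args) throws IOException {
        long usage = readBytes("memory.usage_in_bytes");      // total charged to the cgroup
        long kmem  = readBytes("memory.kmem.usage_in_bytes"); // kernel memory (slab etc.)
        long limit = readBytes("memory.limit_in_bytes");      // the --memory limit, if set
        System.out.printf("usage=%d MB kmem=%d MB limit=%d MB%n",
                usage >> 20, kmem >> 20, limit >> 20);
        // Watching how these counters move relative to the JVM's own heap and
        // native-memory numbers helps attribute the growth to the Java process
        // or to kernel-side accounting.
    }

    private static long readBytes(String file) throws IOException {
        return Long.parseLong(Files.readAllLines(BASE.resolve(file)).get(0).trim());
    }
}
```

Run inside the container this reads the container's own cgroup; from the host, the same files sit under the per-container cgroup directory.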
> > [1] https://github.com/opencontainers/runc/issues/1725 > > [2] https://github.com/moby/moby/issues/37722 > [3] > https://github.com/opencontainers/runc/commit/6a2c15596845f6ff5182e2022f38a65e5dfa88eb > > -Jaikiran > > > On 26/11/18 3:56 PM, jwhiting at redhat.com wrote: >> Hi Jaikiran >> Have a look at some blog posts by old friends :) These blog posts >> might be helpful (along with the other replies you received) to >> diagnose the root cause of the issue. In particular native memory >> tracking. >> >> https://developers.redhat.com/blog/2017/03/14/java-inside-docker/ >> https://developers.redhat.com/blog/2017/04/04/openjdk-and-containers/ >> >> Regards, >> Jeremy >> >> On Fri, 2018-11-23 at 19:25 +0530, Jaikiran Pai wrote: >>> Hi, >>> >>> I'm looking for some inputs in debugging a high memory usage issue >>> (and >>> subsequently the process being killed) in one of the applications I >>> deal >>> with. Given that from what I have looked into this issue so far, this >>> appears to be something to do with the CMS collector, so I hope this >>> is >>> the right place to this question. >>> >>> A bit of a background - The application that I'm dealing with is >>> ElasticSearch server version 1.7.5. We use Java 8: >>> >>> java version "1.8.0_172" >>> Java(TM) SE Runtime Environment (build 1.8.0_172-b11) >>> Java HotSpot(TM) 64-Bit Server VM (build 25.172-b11, mixed mode) >>> >>> To add to the complexity in debugging this issue, this runs as a >>> docker >>> container on docker version 18.03.0-ce on a CentOS 7 host VM kernel >>> version 3.10.0-693.5.2.el7.x86_64. >>> >>> We have been noticing that this container/process keeps getting >>> killed >>> by the oom-killer every few days. The dmesg logs suggest that the >>> process has hit the "limits" set on the docker cgroups level. After >>> debugging this over past day or so, I've reached a point where I >>> can't >>> make much sense of the data I'm looking at. The JVM process is >>> started >>> using the following params (of relevance): >>> >>> java -Xms2G -Xmx6G -XX:+UseParNewGC -XX:+UseConcMarkSweepGC >>> -XX:CMSInitiatingOccupancyFraction=75 >>> -XX:+UseCMSInitiatingOccupancyOnly >>> -XX:+HeapDumpOnOutOfMemoryError -XX:+DisableExplicitGC .... >>> >>> As you can see it uses CMS collector with 75% of tenured/old gen for >>> initiating the GC. >>> >>> After a few hours/days of running I notice that even though the CMS >>> collector does run almost every hour or so, there are huge number of >>> objects _with no GC roots_ that never get collected. These objects >>> internally seem to hold on to ByteBuffer(s) which (from what I see) >>> as a >>> result never get released and the non-heap memory keeps building up, >>> till the process gets killed. To give an example, here's the jmap >>> -histo >>> output (only relevant parts): >>> >>> 1: 861642 196271400 [B >>> 2: 198776 28623744 >>> org.apache.lucene.codecs.blocktree.SegmentTermsEnumFrame >>> 3: 676722 21655104 >>> org.apache.lucene.store.ByteArrayDataInput >>> 4: 202398 19430208 >>> org.apache.lucene.codecs.lucene41.Lucene41PostingsWriter$IntBlockTerm >>> State >>> 5: 261819 18850968 >>> org.apache.lucene.util.fst.FST$Arc >>> 6: 178661 17018376 [C >>> 7: 31452 16856024 [I >>> 8: 203911 8049352 [J >>> 9: 85700 5484800 java.nio.DirectByteBufferR >>> 10: 168935 5405920 >>> java.util.concurrent.ConcurrentHashMap$Node >>> 11: 89948 5105328 [Ljava.lang.Object; >>> 12: 148514 4752448 >>> org.apache.lucene.util.WeakIdentityMap$IdentityWeakReference >>> >>> .... 
>>> >>> Total 5061244 418712248 >>> >>> This above output is without the "live" option. Running jmap >>> -histo:live >>> returns something like (again only relevant parts): >>> >>> 13: 31753 1016096 >>> org.apache.lucene.util.WeakIdentityMap$IdentityWeakReference >>> ... >>> 44: 887 127728 >>> org.apache.lucene.codecs.blocktree.SegmentTermsEnumFrame >>> ... >>> 50: 3054 97728 >>> org.apache.lucene.store.ByteArrayDataInput >>> ... >>> 59: 888 85248 >>> org.apache.lucene.codecs.lucene41.Lucene41PostingsWriter$IntBlockTerm >>> State >>> >>> Total 1177783 138938920 >>> >>> >>> Notice the vast difference between the live and non-live instances of >>> the same class. This isn't just in one "snapshot". I have been >>> monitoring this for more than a day and this pattern continues. Even >>> taking heap dumps and using tools like visualvm shows that these >>> instances have "no GC root" and I have even checked the gc log files >>> to >>> see that the CMS collector does occasionally run. However these >>> objects >>> never seem to get collected. >>> >>> I realize this data may not be enough to narrow down the issue, but >>> what >>> I am looking for is some kind of help/input/hints/suggestions on what >>> I >>> should be trying to figure out why these instances aren't GCed. Is >>> this >>> something that's expected in certain situations? >>> >>> -Jaikiran >>> >>> >>> >>> >>> >>> >>> >>> _______________________________________________ >>> hotspot-gc-use mailing list >>> hotspot-gc-use at openjdk.java.net >>> http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use > > _______________________________________________ > hotspot-gc-use mailing list > hotspot-gc-use at openjdk.java.net > http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use From jai.forums2013 at gmail.com Thu Dec 6 03:33:55 2018 From: jai.forums2013 at gmail.com (Jaikiran Pai) Date: Thu, 6 Dec 2018 09:03:55 +0530 Subject: Java 8 + Docker container - CMS collector leaves around instances that have no GC roots In-Reply-To: <10777499-8dfa-779d-74c4-51c78dc040cd@gmail.com> References: <5604aaaf008ce26e313e5e7ad7fa1ae4844afbac.camel@redhat.com> <10777499-8dfa-779d-74c4-51c78dc040cd@gmail.com> Message-ID: <29704647-f91a-a313-3617-ac3344cac829@gmail.com> On 04/12/18 9:27 AM, Jaikiran Pai wrote: > After a bit more searching, we > finally found these issues[1][2] that match exactly to what we are > seeing (right down to the exact version of OS and docker and the nature > of configuration). So it looks like it's a known issue with docker + the > kernel version in use and apparently no known workaround (other than > downgrading to a version of docker that doesn't hit this). There appears > to be a commit[3] that has been done upstream but isn't yet released. We > will evaluate how to either try and patch/test that change or figure out > some other way (may be not set a --memory limit for now) to get past this. > > Thank you all again for the helpful replies. > > [1] https://github.com/opencontainers/runc/issues/1725 > > [2] https://github.com/moby/moby/issues/37722 > [3] > https://github.com/opencontainers/runc/commit/6a2c15596845f6ff5182e2022f38a65e5dfa88eb > An update - I found that docker-ce has committed a fix[1] for the kmem usage issue. I upgraded to 18.09.1-beta2 of docker (which contains this fix) on the exact same setup that we have been using to monitor this issue. It's been almost 24 hours now and the docker stats command and other related tools show that the memory usage is stable and well within expected usage. 
On the older version of docker, for this duration of run, we would already start seeing the usage rising over expected levels and would die after a few days. We will continue to monitor this setup for at least a week or more but at this point, I think this issue has been narrowed down and sorted out with this fix in docker. [1] https://github.com/docker/docker-ce/commit/53e37f7583cd3d90a8069de4081ef9bebffb839f#diff-9f3e85b3e5dd57d45832939603ad323e -Jaikiran From jai.forums2013 at gmail.com Thu Dec 6 03:49:19 2018 From: jai.forums2013 at gmail.com (Jaikiran Pai) Date: Thu, 6 Dec 2018 09:19:19 +0530 Subject: Java 8 + Docker container - CMS collector leaves around instances that have no GC roots In-Reply-To: References: <5604aaaf008ce26e313e5e7ad7fa1ae4844afbac.camel@redhat.com> <10777499-8dfa-779d-74c4-51c78dc040cd@gmail.com> Message-ID: Hello Poonam, On 05/12/18 1:56 AM, Poonam Parhar wrote: > Hello Jaikiran, > > Did you try collecting a heap dump and inspecting these ByteBuffers > that you suspect are holding on to non-heap memory? Looking at the GC > roots of these objects in the heap dump might shed some light as to > why these objects are not getting collected. My initial analysis (when I started this thread) turned out to be a false alarm, as you might have come to see based on my recent replies in this thread. Although the ByteBuffer usage was growing (and still does), it wasn't really leaking and based on more detailed analysis it was/is within the expected limits. So it wasn't really a issue with GC but more a kmem usage tracking issue with docker and the 3.x kernel version on CentOS. As I note in a recent reply to this thread, this got fixed in docker-ce, in a yet to be released version and our tests so far have shown that the fix is working. -Jaikiran From thomas.schatzl at oracle.com Wed Dec 19 15:03:04 2018 From: thomas.schatzl at oracle.com (Thomas Schatzl) Date: Wed, 19 Dec 2018 16:03:04 +0100 Subject: Changes to Garbage Collection in JDK 12 Message-ID: <97e82d0bba6088b6c388acc44e619b7af687d747.camel@oracle.com> Hi all, now that it's Christmas and JDK12 ramping down I would like to give an overview of JDK12 changes to GC from the Oracle Hotspot GC team's view that may impact end-users. >From a statistics POV there have been 197 changes for this release in GC so far [0], 70 bugs fixed, 124 enhancements, and 3 JEPs implemented. As part of that there has been a significant increase in external contributions in the last six months as you will see - that kept the reviewers quite busy :) Let's walk over a selection of improvements categorized by garbage collector: G1 - Abortable mixed collections (JEP 344 [1]) made it into JDK12 [2]: this feature is intended to improve G1 meeting pause time goals when incrementally collecting the old generation by incrementally collecting old generation regions within a particular GC. More information about this feature in last year's FOSDEM presentation [3] - the changes for Promptly return unused committed memory from G1 (JEP 346 [4][5]) let G1 perform regular garbage collection cycles to give back Java heap memory to the operating system if the system is idle. This change has been initially suggested and contributed by Rodrigo B. and Ruslan S. from JElastic. - Kishor K. from Intel contributed changes to allow only the old generation on alternate memory devices like NVDIMMs [6]. 
(Note that this feature is technically still not in the release yet, but requesting late commit, so it *might* still be moved off to next release) - G1 will now try to give back Java heap memory at every concurrent marking cycle if normal heap sizing heuristics indicate so, e.g. -Xms is smaller than -Xmx, there is a lot of excess free heap and others, as opposed to require a full gc to do that [7]. - _all_ collectors use a new, more latency optimized way of terminating a set of worker threads [8] contributed by RedHat, and generic optimizations to the work stealing mechanism have been implemented [9]. This reduces pause times for all collectors, particularly when using many threads. - optimizations to G1 pause times: the usual bunch of micro- optimizations for shorter pauses [10][11][12][13]. :) - G1 adds a few more JFR events to make its contents closer to what you can get from the logs [14]. ZGC - ZGC added class unloading, and doing that concurrently to the application at the same time [15]. :) - some remaining parts of the ZGC pauses were parallelized or moved into the concurrent phase to further decrease pause times [16][17] - ZGC is now built and available, still as experimental, by default [18] Oracle JDK12 EA builds can already be downloaded right now from http://jdk.java.net/12/ - please report issues here or at https://bugreport.java.com/ to make the VM even better. Thanks again to the many contributors who made this list of changes possible, and stay tuned for JDK13+ GC changes :) Have fun digging references, and enjoy! Thanks, Thomas [0] https://bugs.openjdk.java.net/browse/JDK-8215548?jql=project%20%3D%20JDK%20AND%20issuetype%20in%20(Bug%2C%20Enhancement)%20AND%20fixVersion%20in%20(%2212%22%2C%2012.0.1%2C%2012.0.2)%20AND%20component%20%3D%20hotspot%20AND%20Subcomponent%20%3D%20gc [1] http://openjdk.java.net/jeps/344 [2] https://bugs.openjdk.java.net/browse/JDK-8213890 [3] https://archive.fosdem.org/2018/schedule/event/g1/ [4] http://openjdk.java.net/jeps/346 [5] https://bugs.openjdk.java.net/browse/JDK-8212657 [6] https://bugs.openjdk.java.net/browse/JDK-8202286 [7] https://bugs.openjdk.java.net/browse/JDK-6490394 [8] https://bugs.openjdk.java.net/browse/JDK-8204947 [9] https://bugs.openjdk.java.net/browse/JDK-8205921 [10] https://bugs.openjdk.java.net/browse/JDK-8212911 [11] https://bugs.openjdk.java.net/browse/JDK-8212753 [12] https://bugs.openjdk.java.net/browse/JDK-8211853 [13] https://bugs.openjdk.java.net/browse/JDK-8209843 [14] https://bugs.openjdk.java.net/browse/JDK-8196341 [15] https://bugs.openjdk.java.net/browse/JDK-8214897 [16] https://bugs.openjdk.java.net/browse/JDK-8210883 [17] https://bugs.openjdk.java.net/browse/JDK-8210064 [18] https://bugs.openjdk.java.net/browse/JDK-8214476 From gustav.r.akesson at gmail.com Wed Dec 26 18:14:56 2018 From: gustav.r.akesson at gmail.com (=?UTF-8?Q?Gustav_=C3=85kesson?=) Date: Wed, 26 Dec 2018 19:14:56 +0100 Subject: ParNew latency Message-ID: Hello folks, I have noticed a peculiar behaviour with CMS and ParNew, in which the latency of ParNew is significantly higher before the first CMS cycle has occurred. Looking at the GC logs further down in this mail, we see it drops from around 45ms down to around 35ms. This observation is consistent and reproducible every time. >From what I can tell, it seems to be some type of internal CMS ergonomics that is adjusted after first CMS cycle that improves the ParNew STW latency. Any thoughts on this? 
What could this be and is there any non-default flag that can be set to get the "post CMS cycle" latency right from start? ----------------------- *Set JVM flags and platform* Java HotSpot(TM) 64-Bit Server VM (25.181-b25) for linux-amd64 JRE (1.8.0_181-b25), built on Jun 29 2018 00:52:17 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8) Memory: 4k page, physical 41037916k(35541784k free), swap 4194300k(4132448k free) CommandLine flags: -XX:+AlwaysPreTouch -XX:+CMSEdenChunksRecordAlways -XX:CMSInitiatingOccupancyFraction=80 -XX:+CMSParallelInitialMarkEnabled -XX:+CMSScavengeBeforeRemark -XX:CMSWaitDuration=60000 -XX:CompressedClassSpaceSize=33554432 -XX:+DebugNonSafepoints -XX:+DisableExplicitGC -XX:ErrorFile=**** -XX:+FlightRecorder -XX:GCLogFileSize=31457280 -XX:InitialHeapSize=31517048832 -XX:MaxHeapSize=31517048832 -XX:MaxMetaspaceSize=268435456 -XX:MaxNewSize=2147483648 -XX:MaxTenuringThreshold=6 -XX:MetaspaceSize=268435456 -XX:NewSize=2147483648 -XX:NumberOfGCLogFiles=3 -XX:OldPLABSize=16 -XX:+PreserveFramePointer -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintTenuringDistribution -XX:ReservedCodeCacheSize=134217728 -XX:+TieredCompilation -XX:+UnlockCommercialFeatures -XX:+UnlockDiagnosticVMOptions -XX:+UseBiasedLocking -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseGCLogFileRotation -XX:+UseParNewGC *GC logs* 2018-12-11T13:23:28.667+0100: 80955.318: [GC (Allocation Failure) 2018-12-11T13:23:28.667+0100: 80955.318: [ParNew Desired survivor size 107347968 bytes, new threshold 6 (max 6) - age 1: 6087736 bytes, 6087736 total - age 2: 4069464 bytes, 10157200 total - age 3: 3749080 bytes, 13906280 total - age 4: 3149152 bytes, 17055432 total - age 5: 2986288 bytes, 20041720 total - age 6: 2861776 bytes, 22903496 total : 1707782K->32210K(1887488K), 0.0445715 secs] 24644965K->22972174K(30569728K), 0.0448519 secs] [Times: user=0.33 sys=0.00, real=0.05 secs] 2018-12-11T13:23:29.748+0100: 80956.400: [GC (Allocation Failure) 2018-12-11T13:23:29.748+0100: 80956.400: [ParNew Desired survivor size 107347968 bytes, new threshold 6 (max 6) - age 1: 6113256 bytes, 6113256 total - age 2: 4078992 bytes, 10192248 total - age 3: 3700760 bytes, 13893008 total - age 4: 3213552 bytes, 17106560 total - age 5: 2955112 bytes, 20061672 total - age 6: 2881016 bytes, 22942688 total : 1710034K->31117K(1887488K), 0.0446833 secs] 24649998K->22973867K(30569728K), 0.0449682 secs] [Times: user=0.34 sys=0.00, real=0.05 secs] 2018-12-11T13:23:30.830+0100: 80957.482: [GC (Allocation Failure) 2018-12-11T13:23:30.831+0100: 80957.482: [ParNew Desired survivor size 107347968 bytes, new threshold 6 (max 6) - age 1: 7805344 bytes, 7805344 total - age 2: 4043072 bytes, 11848416 total - age 3: 3742544 bytes, 15590960 total - age 4: 3180256 bytes, 18771216 total - age 5: 2986552 bytes, 21757768 total - age 6: 2871136 bytes, 24628904 total *: 1708941K->39307K(1887488K), 0.0486555 secs] 24651691K->22984857K(30569728K), 0.0489488 secs] [Times: user=0.34 sys=0.00, real=0.05 secs] * 2018-12-11T13:23:31.918+0100: 80958.570: [GC (Allocation Failure) 2018-12-11T13:23:31.918+0100: 80958.570: [ParNew Desired survivor size 107347968 bytes, new threshold 6 (max 6) - age 1: 7905160 bytes, 7905160 total - age 2: 4058768 bytes, 11963928 total - age 3: 3702320 bytes, 15666248 total - age 4: 3218448 bytes, 18884696 total - age 5: 2965648 bytes, 21850344 total - age 6: 2868192 bytes, 24718536 total : 1717131K->38437K(1887488K), 
0.0480072 secs] 24662681K->22986764K(30569728K), 0.0483032 secs] [Times: user=0.32 sys=0.00, real=0.05 secs] *2018-12-11T13:23:31.970+0100: 80958.621: [GC (CMS Initial Mark) [1 CMS-initial-mark: 22948327K(28682240K)] 23010362K(30569728K), 0.0076862 secs] [Times: user=0.03 sys=0.01, real=0.01 secs] * 2018-12-11T13:23:31.978+0100: 80958.630: [CMS-concurrent-mark-start] 2018-12-11T13:23:32.808+0100: 80959.460: [CMS-concurrent-mark: 0.830/0.830 secs] [Times: user=5.07 sys=0.49, real=0.83 secs] 2018-12-11T13:23:32.808+0100: 80959.460: [CMS-concurrent-preclean-start] 2018-12-11T13:23:32.867+0100: 80959.518: [CMS-concurrent-preclean: 0.057/0.058 secs] [Times: user=0.33 sys=0.03, real=0.06 secs] 2018-12-11T13:23:32.867+0100: 80959.518: [CMS-concurrent-abortable-preclean-start] 2018-12-11T13:23:33.002+0100: 80959.653: [GC (Allocation Failure) 2018-12-11T13:23:33.002+0100: 80959.654: [ParNew Desired survivor size 107347968 bytes, new threshold 6 (max 6) - age 1: 6792248 bytes, 6792248 total - age 2: 4085632 bytes, 10877880 total - age 3: 3725496 bytes, 14603376 total - age 4: 3163816 bytes, 17767192 total - age 5: 2998648 bytes, 20765840 total - age 6: 2836880 bytes, 23602720 total : 1716261K->36026K(1887488K), 0.0459681 secs] 24664588K->22987134K(30569728K), 0.0462493 secs] [Times: user=0.33 sys=0.00, real=0.04 secs] 2018-12-11T13:23:33.647+0100: 80960.299: [CMS-concurrent-abortable-preclean: 0.728/0.780 secs] [Times: user=4.33 sys=0.45, real=0.78 secs] 2018-12-11T13:23:33.649+0100: 80960.300: [GC (CMS Final Remark) [YG occupancy: 1044991 K (1887488 K)]2018-12-11T13:23:33.649+0100: 80960.301: [GC (CMS Final Remark) 2018-12-11T13:23:33.649+0100: 80960.301: [ParNew Desired survivor size 107347968 bytes, new threshold 6 (max 6) - age 1: 4522096 bytes, 4522096 total - age 2: 4087288 bytes, 8609384 total - age 3: 3962800 bytes, 12572184 total - age 4: 3364480 bytes, 15936664 total - age 5: 3025768 bytes, 18962432 total - age 6: 2922328 bytes, 21884760 total : 1044991K->29113K(1887488K), 0.0456241 secs] 23996098K->22982992K(30569728K), 0.0458796 secs] [Times: user=0.33 sys=0.00, real=0.05 secs] 2018-12-11T13:23:33.695+0100: 80960.346: [Rescan (parallel) , 0.0138960 secs]2018-12-11T13:23:33.709+0100: 80960.360: [weak refs processing, 0.0022424 secs]2018-12-11T13:23:33.711+0100: 80960.363: [class unloading, 0.1338832 secs]2018-12-11T13:23:33.845+0100: 80960.497: [scrub symbol table, 0.0140718 secs]2018-12-11T13:23:33.859+0100: 80960.511: [scrub string table, 0.0017299 secs][1 CMS-remark: 22953878K(28682240K)] 22982992K(30569728K), 0.2419581 secs] [Times: user=0.62 sys=0.00, real=0.25 secs] 2018-12-11T13:23:33.891+0100: 80960.543: [CMS-concurrent-sweep-start] 2018-12-11T13:23:34.725+0100: 80961.377: [GC (Allocation Failure) 2018-12-11T13:23:34.725+0100: 80961.377: [ParNew Desired survivor size 107347968 bytes, new threshold 6 (max 6) - age 1: 7697232 bytes, 7697232 total - age 2: 2434936 bytes, 10132168 total - age 3: 3934840 bytes, 14067008 total - age 4: 3348704 bytes, 17415712 total - age 5: 3063832 bytes, 20479544 total - age 6: 2907808 bytes, 23387352 total *: 1706937K->35508K(1887488K), 0.0355183 secs] 24048561K->22379955K(30569728K), 0.0358070 secs] [Times: user=0.27 sys=0.00, real=0.04 secs] * 2018-12-11T13:23:35.816+0100: 80962.468: [GC (Allocation Failure) 2018-12-11T13:23:35.816+0100: 80962.468: [ParNew Desired survivor size 107347968 bytes, new threshold 6 (max 6) - age 1: 8502816 bytes, 8502816 total - age 2: 4081456 bytes, 12584272 total - age 3: 2307088 bytes, 14891360 total - age 4: 3364944 
bytes, 18256304 total - age 5: 3066568 bytes, 21322872 total - age 6: 2913112 bytes, 24235984 total : 1713332K->36712K(1887488K), 0.0401425 secs] 23188060K->21514239K(30569728K), 0.0404617 secs] [Times: user=0.29 sys=0.00, real=0.04 secs] 2018-12-11T13:23:36.892+0100: 80963.543: [GC (Allocation Failure) 2018-12-11T13:23:36.892+0100: 80963.543: [ParNew Desired survivor size 107347968 bytes, new threshold 6 (max 6) - age 1: 7042632 bytes, 7042632 total - age 2: 4060640 bytes, 11103272 total - age 3: 3764904 bytes, 14868176 total - age 4: 1924712 bytes, 16792888 total - age 5: 3078712 bytes, 19871600 total - age 6: 2911152 bytes, 22782752 total : 1714536K->32173K(1887488K), 0.0364840 secs] 22227971K->20548418K(30569728K), 0.0367772 secs] [Times: user=0.27 sys=0.00, real=0.03 secs]

-----------------------

Best Regards,
Gustav Åkesson
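One way to quantify the before/after difference Gustav describes is to split the ParNew pause times in the log at the first CMS-initial-mark and average the two groups. A rough sketch against the -XX:+PrintGCDetails / -XX:+PrintGCDateStamps format shown above (class name and regexes are illustrative; ParNew runs triggered by -XX:+CMSScavengeBeforeRemark land in the "after" bucket):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ParNewPauseSplit {
    // Total pause duration printed just before the [Times: ...] block.
    private static final Pattern PAUSE =
            Pattern.compile("(\\d+\\.\\d+) secs\\] \\[Times:");

    public static void main(String[] args) throws IOException {
        String log = new String(Files.readAllBytes(Paths.get(args[0])));
        // One chunk per timestamped log fragment.
        String[] chunks = log.split("(?=\\d{4}-\\d{2}-\\d{2}T\\d{2}:\\d{2}:\\d{2})");
        List<Double> before = new ArrayList<>();
        List<Double> after = new ArrayList<>();
        boolean seenInitialMark = false;
        for (String chunk : chunks) {
            if (chunk.contains("CMS-initial-mark")) {
                seenInitialMark = true;
            }
            if (chunk.contains("[ParNew")) {
                Matcher m = PAUSE.matcher(chunk);
                if (m.find()) {
                    (seenInitialMark ? after : before)
                            .add(Double.parseDouble(m.group(1)));
                }
            }
        }
        System.out.printf("ParNew pauses before first CMS cycle: %d, avg %.1f ms%n",
                before.size(), avgMillis(before));
        System.out.printf("ParNew pauses after first CMS cycle:  %d, avg %.1f ms%n",
                after.size(), avgMillis(after));
    }

    private static double avgMillis(List<Double> secs) {
        return secs.stream().mapToDouble(d -> d * 1000.0).average().orElse(Double.NaN);
    }
}
```

Invoked as `java ParNewPauseSplit gc.log`, it prints the count and average pause length for each group, which makes the roughly 45 ms versus 35 ms observation easy to confirm across longer runs.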