From tg at freigmbh.de Wed Mar 1 08:13:23 2017
From: tg at freigmbh.de (Thorsten Goetzke)
Date: Wed, 1 Mar 2017 09:13:23 +0100
Subject: Unreachable Memory not freed, Nashorn Demo
In-Reply-To: 
References: <9171bcd9-5212-edb6-59f6-aa17b60b50e2@freigmbh.de>
Message-ID: 

Hello,

It's quite interesting: the Nashorn implementation seems to have changed significantly between Java 8 and 9. In Java 8 I cannot find any path to a GC root; in Java 9 the objects are reported as weakly reachable, or reachable through jdk.internal.ref.CleanerImpl$PhantomCleanableRef. The resulting behaviour is the same: the objects will not be cleared. In my environment I could run hundreds of GCs and these objects would not be reclaimed. In my real-world application these objects would sometimes get reclaimed after some unknown, seemingly random actions (executing other scripts, creating new NashornScriptEngines). I suspect these weak references are just a diversion: the GC thinks there is a strong chain to the LiveItems, but the chain doesn't get exported in the snapshot.

Best regards,
Thorsten


On 28.02.17 at 18:56, Poonam Bajaj Parhar wrote:
> Hello Thorsten,
>
> I ran this test program with jdk9-ea and created a heap dump after the
> first full GC using -XX:+HeapDumpAfterFullGC. In that heap dump, I can
> see 2 instances of LeakImpl:
>
> Class Name        | Objects | Shallow Heap | Retained Heap
> ----------------------------------------------------------
> LeakDemo$LeakImpl |       2 |           32 |
> ----------------------------------------------------------
>
> The first one is reachable as a local variable from the main thread,
> which is fine:
>
> Class Name | Ref. Objects | Shallow Heap | Ref. Shallow Heap | Retained Heap
> -----------------------------------------------------------------------------
> java.lang.Thread @ 0x84f211f8 Thread | 1 | 120 | 16 | 736
>  '- LeakDemo$LeakImpl @ 0x850d89f0   | 1 |  16 | 16 |  16
> -----------------------------------------------------------------------------
>
> The other one is reachable through the referent
> "jdk.nashorn.internal.objects.Global" of a WeakReference:
>
> Class Name | Ref. Objects | Shallow Heap | Ref. Shallow Heap | Retained Heap
> -----------------------------------------------------------------------------
> class jdk.internal.loader.ClassLoaders @ 0x84f268f8 System Class | 1 | 16 | 16 | 16
>  '- PLATFORM_LOADER jdk.internal.loader.ClassLoaders$PlatformClassLoader @ 0x84f2a610 | 1 | 96 | 16 | 199,624
>   '- classes java.util.Vector @ 0x850b2b70 | 1 | 32 | 16 | 68,104
>    '- elementData java.lang.Object[640] @ 0x850b2b90 | 1 | 2,576 | 16 | 68,072
>     '- [196] class jdk.nashorn.internal.scripts.JD @ 0x84f49960 | 1 | 8 | 16 | 4,560
>      '- map$ jdk.nashorn.internal.runtime.PropertyMap @ 0x850d4a88 | 1 | 64 | 16 | 4,552
>       '- protoHistory java.util.WeakHashMap @ 0x850d5418 | 1 | 48 | 16 | 2,208
>        '- table java.util.WeakHashMap$Entry[16] @ 0x850d5448 | 1 | 80 | 16 | 2,112
>         *'- [10] java.util.WeakHashMap$Entry @ 0x850d5498 | 1 | 40 | 16 | 2,032*
>          '- referent jdk.nashorn.internal.objects.Global @ 0x85137a18 | 1 | 544 | 16 | 39,920
>           '- initscontext javax.script.SimpleScriptContext @ 0x8515c910 | 1 | 32 | 16 | 280
>            '- engineScope LeakDemo$SimplestBindings @ 0x8515c930 | 1 | 16 | 16 | 248
>             '- map java.util.HashMap @ 0x8515c940 | 1 | 48 | 16 | 232
>              '- table java.util.HashMap$Node[16] @ 0x8515c970 | 1 | 80 | 16 | 184
>               '- [9] java.util.HashMap$Node @ 0x8515c9f8 | 1 | 32 | 16 | 48
>                '- value LeakDemo$SimplestBindings$$Lambda$118 @ 0x8515ca18 | 1 | 16 | 16 | 16
>                 '- arg$1 LeakDemo$LeakImpl @ 0x8515c600 | 1 | 16 | 16 | 1,073,741,856
> -----------------------------------------------------------------------------
>
> From the GC logs, the referent is present in an old region:
>
> [2.044s][info ][gc,metaspace ] GC(6) Metaspace: 13522K->13518K(1062912K)
> [2.047s][info ][gc,start ] GC(6) Heap Dump (after full gc)
> Dumping heap to java_pid20428.hprof ...
> Heap dump file created [1081050745 bytes in 25.084 secs]
> [27.137s][info ][gc ] GC(6) Heap Dump (after full gc) 25089.382ms
> [27.137s][info ][gc ] GC(6) Pause Full (Allocation Failure) 1028M->1028M(1970M) 25171.038ms
>
> Also:
> [10.651s][trace][gc,region] GC(6) G1HR POST-COMPACTION(OLD) [0x0000000085100000, 0x0000000085161f20, 0x0000000085200000]
>
> This full GC didn't discover this WeakReference and didn't clear its
> referent. It needs to be investigated whether it gets cleared and
> collected in the subsequent GCs.
>
> Thanks,
> Poonam
>
> On 2/28/2017 9:06 AM, Jenny Zhang wrote:
>> Thorsten,
>>
>> Thanks very much for the micro. I have added it to
>>
>> https://bugs.openjdk.java.net/browse/JDK-8173594
>>
>> Thanks
>> Jenny
>>
>> On 2/28/2017 4:45 AM, Thorsten Goetzke wrote:
>>> Hello,
>>>
>>> Back in January I posted about unreachable objects not reclaimed by
>>> the GC; I am finally able to produce a micro reproducer, see below.
>>> When I run the class below using -Xmx4g and take a memory snapshot
>>> (hprof or YourKit, it doesn't matter), I see 2 LeakImpl objects.
>>> These objects have no reported path to a GC root, yet they won't be
>>> collected. If I lower the heap space to -Xmx2g, the application
>>> throws java.lang.OutOfMemoryError: Java heap space.
>>> @Jenny Zhang: should I create a new bug report, or will you take
>>> care of this?
>>>
>>> Best Regards,
>>> Thorsten Goetzke
>>>
>>> package de.frei.demo;
>>>
>>> import jdk.nashorn.api.scripting.NashornScriptEngine;
>>> import jdk.nashorn.api.scripting.NashornScriptEngineFactory;
>>>
>>> import javax.script.CompiledScript;
>>> import javax.script.ScriptException;
>>> import javax.script.SimpleBindings;
>>> import java.util.function.Function;
>>>
>>>
>>> public final class LeakDemo {
>>>
>>>     private static NashornScriptEngine ENGINE = getNashornScriptEngine();
>>>     private static CompiledScript SCRIPT;
>>>
>>>     public static void main(String[] args) throws Exception {
>>>         simulateLoad();
>>>         simulateLoad();
>>>         System.gc();
>>>         Thread.sleep(1000000);
>>>     }
>>>
>>>     private static void simulateLoad() throws ScriptException {
>>>         final CompiledScript compiledScript = getCompiledScript(ENGINE);
>>>         compiledScript.eval(new SimplestBindings(new LeakImpl()));
>>>     }
>>>
>>>     private static NashornScriptEngine getNashornScriptEngine() {
>>>         final NashornScriptEngineFactory factory = new NashornScriptEngineFactory();
>>>         final NashornScriptEngine scriptEngine = (NashornScriptEngine) factory.getScriptEngine();
>>>         return scriptEngine;
>>>     }
>>>
>>>     private static CompiledScript getCompiledScript(final NashornScriptEngine scriptEngine) throws ScriptException {
>>>         if (SCRIPT == null) {
>>>             SCRIPT = scriptEngine.compile(" var pivot = getItem(\"pivot\");");
>>>         }
>>>         return SCRIPT;
>>>     }
>>>
>>>     public interface Leak {
>>>         LiveItem getItem(String id);
>>>     }
>>>
>>>     public static final class LeakImpl implements Leak {
>>>         private final byte[] payload = new byte[1024 * 1024 * 1024];
>>>
>>>         @Override
>>>         public LiveItem getItem(final String id) {
>>>             return new LiveItem() {
>>>             };
>>>         }
>>>     }
>>>
>>>     public interface LiveItem {
>>>     }
>>>
>>>     public static final class SimplestBindings extends SimpleBindings {
>>>         public SimplestBindings(Leak leak) {
>>>             put("getItem", (Function<String, LiveItem>) leak::getItem);
>>>         }
>>>     }
>>> }
>>>
>>> _______________________________________________
>>> hotspot-gc-use mailing list
>>> hotspot-gc-use at openjdk.java.net
>>> http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use
>>
>> _______________________________________________
>> hotspot-gc-use mailing list
>> hotspot-gc-use at openjdk.java.net
>> http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use
>
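For anyone re-running the reproducer above, the analysis earlier in the thread can be approximated with a JDK 9 command line along these lines. Only -XX:+HeapDumpAfterFullGC and the gc+region trace output are taken from the thread; the heap size, the classpath placeholder and the exact -Xlog selectors are assumptions:

    java -Xmx4g \
         -XX:+HeapDumpAfterFullGC \
         -Xlog:gc*,gc+region=trace \
         -cp <classes> de.frei.demo.LeakDemo

With these options the JVM writes a java_pid<pid>.hprof file after every full GC and logs region-level events such as the POST-COMPACTION lines quoted above.
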
From amit.mishra at redknee.com Wed Mar 1 12:39:49 2017
From: amit.mishra at redknee.com (Amit Mishra)
Date: Wed, 1 Mar 2017 12:39:49 +0000
Subject: Query on how to reduce Minor GC pause without any change in heap size by optimizing CPU cycles
Message-ID: 

Hello Team,

I am facing an issue on one of our applications: a minor GC happens every second with a pause time of 120 ms, so application throughput stays around 87%, and sometimes it even drops below that. This is causing a service impact on our data network.

The application has a heap size of 48 GB and a new (young) generation size of 800 MB.

Another customer runs the same application with the same 48 GB heap but a 2 GB new generation, and handles more traffic in terms of transactions per second, yet it has a minor GC only every two seconds with a pause time of only 80 ms, so we achieve good performance and a throughput of more than 96%.

The only difference I found between the two customers is that the first one has older-generation CPUs (GenuineIntel family 6 model 29 step 1) while the second one has relatively new CPUs (GenuineIntel family 6 model 62 step 4).

But the clock speed of both kinds of CPU is the same, 2.4 GHz, and the number of virtual cores is also the same, 24.

So my question is: how can one CPU perform better than another with the same clock speed? What other CPU-related factors determine how long one transaction takes, and how can I speed up CPU performance at the affected customer?

Searching the internet I found the metric GFLOPS (giga floating-point operations per second), which indicates which CPU is faster; the figure I found for the better-performing customer's CPU is 3.2, while that of the poorly performing CPU is 1.2.

Average CPU usage on the poorly performing site is 40-50%, while average CPU usage on the better-performing site is 20%.

Please confirm: should I ask my customer to add more CPUs of the same old specification to speed up minor GC cycles, or will they need an upgrade to CPUs with higher GFLOPS?

The other question is: if I increase the new generation size from 800 MB to 2 GB on the poorly performing site, the GC frequency should change from every second to every two seconds, but I am worried whether that will increase the pause time as well, and whether overall application throughput will stay the same or go down.

Note: I cannot play with the 48 GB heap size, as my customer hits a concurrent mode failure once every 3-4 months, so reducing the overall heap size is not an option here.

Poor performing site CPU spec:

The physical processor has 6 virtual processors (0 4 8 12 16 20)
  x86 (chipid 0x0 GenuineIntel family 6 model 29 step 1 clock 2400 MHz)
        Intel(r) Xeon(r) CPU E7458 @ 2.40GHz
The physical processor has 6 virtual processors (1 5 9 13 17 21)
  x86 (chipid 0x1 GenuineIntel family 6 model 29 step 1 clock 2400 MHz)
        Intel(r) Xeon(r) CPU E7458 @ 2.40GHz
The physical processor has 6 virtual processors (2 6 10 14 18 22)
  x86 (chipid 0x2 GenuineIntel family 6 model 29 step 1 clock 2400 MHz)
        Intel(r) Xeon(r) CPU E7458 @ 2.40GHz
The physical processor has 6 virtual processors (3 7 11 15 19 23)
  x86 (chipid 0x3 GenuineIntel family 6 model 29 step 1 clock 2400 MHz)
        Intel(r) Xeon(r) CPU E7458 @ 2.40GHz

Better performing site CPU spec:

The physical processor has 24 virtual processors (0-11 24-35)
  x86 (chipid 0x0 GenuineIntel family 6 model 62 step 4 clock 2400 MHz)
        Intel(r) Xeon(r) CPU E5-2695 v2 @ 2.40GHz
The physical processor has 24 virtual processors (12-23 36-47)
  x86 (chipid 0x1 GenuineIntel family 6 model 62 step 4 clock 2400 MHz)
        Intel(r) Xeon(r) CPU E5-2695 v2 @ 2.40GHz

Thanks,
Amit Mishra

From ecki at zusammenkunft.net Wed Mar 1 21:08:20 2017
From: ecki at zusammenkunft.net (Bernd Eckenfels)
Date: Wed, 1 Mar 2017 21:08:20 +0000 (UTC)
Subject: Query on how to reduce Minor GC pause without any change in heap size by optimizing CPU cycles
In-Reply-To: 
References: 
Message-ID: <1931C356834729A1.423BBF51-F94C-4B69-A43B-A93BA9441F13@mail.outlook.com>

Well, you have double the number of cores (and HT), a more modern CPU (2008 vs. 2013, with 3.2 GHz turbo bursts and 3 times the L2 cache, see https://ark.intel.com/compare/75281,36941), and the new generation is bigger (which means a lower young GC frequency and with that fewer objects that survive), so it's only natural that it is much faster.

If you have such frequent young GCs, quadrupling the new size is a good start - even without looking at the GC logs.

Gruss
Bernd
--
http://bernd.eckenfels.net
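As a concrete illustration of the suggestion above, explicitly sizing the young generation could look roughly like this on the command line. The 2 GB value, the CMS collector flag (implied by the concurrent-mode-failure remark earlier in the thread), the log file name and the main-class placeholder are assumptions, not the customer's actual settings:

    java -Xms48g -Xmx48g \
         -Xmn2g \
         -XX:+UseConcMarkSweepGC \
         -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:gc.log \
         <application main class>

Comparing gc.log before and after the change shows directly whether the larger young generation trades a lower minor GC frequency for longer individual pauses, which is exactly the concern raised in the question.
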
_____________________________
From: Amit Mishra
Sent: Wednesday, March 1, 2017 3:22 PM
Subject: Query on how to reduce Minor GC pause without any change in heap size by optimizing CPU cycles
To: 
From koji.lin at gmail.com Mon Mar 6 09:07:57 2017
From: koji.lin at gmail.com (koji Lin)
Date: Mon, 06 Mar 2017 09:07:57 +0000
Subject: G1GC doesn't clean WeakReference at mixed gc in some situation?
Message-ID: 

My server is running 1.8.0_92 on CentOS 6.7; the GC parameters are '-Xms16g -Xmx16g -XX:+UseG1GC', so the defaults InitiatingHeapOccupancyPercent=45, G1HeapWastePercent=5 and G1MixedGCLiveThresholdPercent=85 apply. Mixed GCs start once occupancy reaches about 7.2 GB (45% of 16 GB), but each cycle cleans less and less; eventually the old generation stays above 7.2 GB, so the collector keeps re-running concurrent marking. Finally the whole heap is exhausted and a full GC occurs. After the full GC, old generation usage is under 500 MB.

https://i.stack.imgur.com/mzDlu.png

I dumped the heap on another machine with -Xmx4G. I use lettuce as my Redis client, and it has a tracking feature based on LatencyUtils. That makes LatencyStats instances (which contain long[] arrays with nearly 3000 elements) weakly referenced every 10 minutes ("Reset latencies after publish" is true by default, see https://github.com/mp911de/lettuce/wiki/Command-Latency-Metrics). So over a long run it creates lots of WeakReferences to LatencyStats.

Before the full GC:

https://i.stack.imgur.com/KRdOL.png
https://i.stack.imgur.com/wboak.png
https://i.stack.imgur.com/vGnmF.png

After the full GC:
https://i.stack.imgur.com/P0XJV.png

Currently I don't need the tracking from lettuce, so I just disabled it and the full GCs are gone. But I would like to know why the mixed GCs don't clear those references.

I also tried to reproduce this in a local environment, but couldn't.

Thanks.
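The allocation pattern described in the message above, reduced to its bare shape, looks roughly like the sketch below. It is only an illustration of "stats object swapped out on publish and retained via a WeakReference"; the class and field names are invented and this is not the actual LatencyUtils or lettuce code, so it may well differ from the reproduction attempt mentioned in the follow-up that comes next.

    import java.lang.ref.WeakReference;
    import java.util.ArrayList;
    import java.util.List;

    // Illustrative only: a stats buffer is replaced on every publish interval
    // and the previous one is kept solely through a WeakReference, so any GC
    // that performs reference processing should be able to reclaim it.
    public class WeakStatsPattern {

        static final List<WeakReference<long[]>> published = new ArrayList<>();
        static long[] currentStats = new long[3000];

        // corresponds to "reset latencies after publish" every 10 minutes
        static void publishAndReset() {
            published.add(new WeakReference<>(currentStats)); // old stats now only weakly reachable
            currentStats = new long[3000];                    // fresh stats for the next interval
        }

        public static void main(String[] args) {
            for (int i = 0; i < 1_000_000; i++) {
                publishAndReset();
            }
            long cleared = published.stream().filter(r -> r.get() == null).count();
            System.out.println(cleared + " of " + published.size()
                    + " weakly referenced stats buffers were reclaimed");
        }
    }

In a setup that behaves as expected, almost all of these buffers end up reclaimed as young and mixed collections process the weak references; the open question in this thread is why the mixed collections on the production server apparently did not.
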
From koji.lin at gmail.com Mon Mar 6 23:29:19 2017
From: koji.lin at gmail.com (koji Lin)
Date: Mon, 06 Mar 2017 23:29:19 +0000
Subject: G1GC doesn't clean WeakReference at mixed gc in some situation?
In-Reply-To: 
References: 
Message-ID: 

Hi,

By "local environment" I mean trying to reproduce this outside production (a MacBook and a Linux VM, both with Oracle JDK 1.8.0_92). I tried to write WeakReference code similar to what LatencyUtils does, but in the local environment every mixed GC is able to clear the WeakReference instances.

Lettuce is a Redis client; it has built-in latency metrics that use LatencyUtils. I disabled this feature and successfully avoided the full GCs.

koji

On Mon, Mar 6, 2017 at 6:07 PM koji Lin wrote:

From Milan.Mimica at infobip.com Sun Mar 12 00:38:30 2017
From: Milan.Mimica at infobip.com (Milan Mimica)
Date: Sun, 12 Mar 2017 00:38:30 +0000
Subject: G1 native memory consumption
In-Reply-To: <1486570008.3510.50.camel@oracle.com>
References: <1484943874550.90103@infobip.com>, <1485168079.2811.21.camel@oracle.com>, <1486138975652.57172@infobip.com>, <1486570008.3510.50.camel@oracle.com>
Message-ID: <1489279110625.91684@infobip.com>

Hi again,

I have tried -XX:G1RSetSparseRegionEntries=64 and it did reduce native memory usage:

- Internal (reserved=1203MB, committed=1203MB)
            (malloc=1203MB #329024)

vs.

- Internal (reserved=2155MB, committed=2155MB)
            (malloc=2155MB #567485)

There is no measurable performance impact.

I have a question: why is the BitMap PerRegionTable::_bm allocated using the mtInternal memory qualifier? Wouldn't mtGC be a better fit?


Milan Mimica, Senior Software Engineer / Division Lead

________________________________________
From: Thomas Schatzl
Sent: Wednesday, February 8, 2017 17:06
To: Milan Mimica; hotspot-gc-use at openjdk.java.net
Subject: Re: G1 native memory consumption

Hi Milan,

On Fri, 2017-02-03 at 16:22 +0000, Milan Mimica wrote:
> Hi Thomas,
>
> Thanks for your input. It took me a while to have a stable system
> again to repeat the measurements.
>
> I have tried setting G1HeapRegionSize to 16M on one instance (8M is
> the default) and I noticed lower GC memory usage:
> GC (reserved=1117MB -18MB, committed=1117MB -18MB)
> vs
> GC (reserved=1604MB +313MB, committed=1604MB +313MB)
>
> It seems more stable too. However, "Internal" is still relatively
> high for a 25G heap, and there is not much difference between
> instances:
> Internal (reserved=2132MB -7MB, committed=2132MB -7MB)

I am not sure why there is no difference; it would be nice to have a
breakdown on this like in the previous case, to rule out other
components or insufficient warmup.

Everything that is allocated via the OtherRegionsTable::add_reference()
-> BitMap::resize() path in the figure from the other email is
remembered sets, and they _should_ have gone down.

You can try to move memory from that path to the CHeapObj operator-new
one. This results in G1 storing remembered sets in a much denser but
potentially slower-to-access representation.

The switch to turn here is G1RSetSparseRegionEntries. It gives the
maximum number of cards (small areas of 512 bytes) per region to store
in that representation. If it overflows, pretty large bitmaps that
might be really sparsely populated are used (which take a lot of time).

By default it is somewhat like 4 * (log2(region-size-in-MB) + 1).

E.g. with 32M regions at most 24 cards are stored there. I think you
can easily increase this to something like 64 or 128 or even larger.
I think (and I am unsure about this, in jdk9 we halved its memory
usage) memory usage should be around equal to the bitmaps with 2k
entries on 32M regions, so I would stop at something in that area at
most.

This size need not be a power of two, btw. You can try increasing this
value significantly and see if it helps with memory consumption without
impacting performance too much.

Thanks,
  Thomas
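Plugging region sizes into the default formula quoted above, 4 * (log2(region size in MB) + 1), gives a feel for the scale involved. Only the 32 MB figure is confirmed in the thread itself; the other values are computed here for illustration:

     8 MB regions:  4 * (3 + 1) = 16 sparse entries per region
    16 MB regions:  4 * (4 + 1) = 20 sparse entries per region
    32 MB regions:  4 * (5 + 1) = 24 sparse entries per region (the "24 cards" mentioned above)

So the explicit -XX:G1RSetSparseRegionEntries=64 reported in this thread is roughly three to four times the default for these region sizes.
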
From thomas.schatzl at oracle.com Mon Mar 13 14:04:35 2017
From: thomas.schatzl at oracle.com (Thomas Schatzl)
Date: Mon, 13 Mar 2017 15:04:35 +0100
Subject: G1 native memory consumption
In-Reply-To: <1489279110625.91684@infobip.com>
References: <1484943874550.90103@infobip.com>, <1485168079.2811.21.camel@oracle.com>, <1486138975652.57172@infobip.com>, <1486570008.3510.50.camel@oracle.com>, <1489279110625.91684@infobip.com>
Message-ID: <1489413875.3420.41.camel@oracle.com>

Hi Milan,

On Sun, 2017-03-12 at 00:38 +0000, Milan Mimica wrote:
> Hi again,
>
> I have tried -XX:G1RSetSparseRegionEntries=64 and it did reduce
> native memory usage:
> - Internal (reserved=1203MB, committed=1203MB)
>             (malloc=1203MB #329024)
> vs.
> - Internal (reserved=2155MB, committed=2155MB)
>             (malloc=2155MB #567485)
>
> There is no measurable performance impact.

Great to hear, and also somewhat expected :)

Did you check that the memory increase in the mtGC category does not
use up all the savings in the mtInternal category?

> I have a question: why is the BitMap PerRegionTable::_bm allocated
> using the mtInternal memory qualifier? Wouldn't mtGC be a better fit?

I filed JDK-8176571.

Thanks,
  Thomas

From Milan.Mimica at infobip.com Tue Mar 14 08:49:06 2017
From: Milan.Mimica at infobip.com (Milan Mimica)
Date: Tue, 14 Mar 2017 08:49:06 +0000
Subject: G1 native memory consumption
In-Reply-To: <1489413875.3420.41.camel@oracle.com>
References: <1484943874550.90103@infobip.com>, <1485168079.2811.21.camel@oracle.com>, <1486138975652.57172@infobip.com>, <1486570008.3510.50.camel@oracle.com>, <1489279110625.91684@infobip.com>, <1489413875.3420.41.camel@oracle.com>
Message-ID: <1489481346388.33222@infobip.com>

Milan Mimica, Senior Software Engineer / Division Lead

________________________________________
From: Thomas Schatzl
Sent: Monday, March 13, 2017 15:04
To: Milan Mimica; hotspot-gc-use at openjdk.java.net
Subject: Re: G1 native memory consumption

> Did you check that the memory increase in the mtGC category does not
> use up all the savings in the mtInternal category?

I did, it doesn't.

>> I have a question: why is the BitMap PerRegionTable::_bm allocated
>> using the mtInternal memory qualifier? Wouldn't mtGC be a better fit?

> I filed JDK-8176571.

Good. I'll look into it.
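For readers who want to reproduce this kind of measurement, the ingredients used in this thread boil down to a command line along the following lines. The 25 GB heap matches the figure mentioned earlier; the 16 MB region size, the summary tracking level and the main-class placeholder are assumptions for illustration:

    java -Xms25g -Xmx25g -XX:+UseG1GC \
         -XX:G1HeapRegionSize=16m \
         -XX:G1RSetSparseRegionEntries=64 \
         -XX:NativeMemoryTracking=summary \
         <application main class>

    # while the JVM is running, print the per-category native memory breakdown
    jcmd <pid> VM.native_memory summary

    # or record a baseline first and later ask for the deltas
    jcmd <pid> VM.native_memory baseline
    jcmd <pid> VM.native_memory summary.diff

The diff form is what produces per-category deltas such as the "+313MB" and "-18MB" figures quoted earlier in the thread.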