Trying to understand ZGC

Peter Booth peter_booth at me.com
Wed Nov 28 21:28:51 UTC 2018


Just to add: it's worth capturing the output of sar -B at smaller intervals than the Red Hat default of ten minutes.

One useful way to strike a balance between capturing short-lived events and identifying periodic blips, while avoiding gigabytes of log data, is to capture one-second data for a few minutes a day at busy times. Of course, even this is too coarse for many systems.
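
As a rough illustration (the interval, count, and output path here are only
placeholders), a cron entry at a known busy period could run something like:

    # paging statistics once a second for three minutes, to a timestamped file
    sar -B 1 180 > /var/tmp/sar-B-$(date +%Y%m%d-%H%M).log

That keeps each capture small while still showing sub-minute blips that the
default ten-minute samples would smooth away.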

Sent from my iPhone

> On Nov 28, 2018, at 3:01 PM, Stefan Reich <stefan.reich.maker.of.eye at googlemail.com> wrote:
> 
> Hi Charlie, thanks for the info.
> 
> I usually do push memory to physical limits (and above), but that's because
> I have a 3.5 GB machine, which is because I'm still broke, which is because
> getting funding in Germany is hard :)
> 
> Greetings,
> Stefan
> 
>> On Wed, 28 Nov 2018 at 20:59, charlie hunt <charlie.hunt at oracle.com> wrote:
>> 
>> Hi Stefan,
>> 
>> Response to your large / huge pages question below.
>> 
>> hths,
>> 
>> Charlie
>> On 11/28/18 1:09 PM, Stefan Reich wrote:
>> 
>> Hi Per!
>> 
>> On Tue, 13 Nov 2018 at 20:22, Per Liden <per.liden at oracle.com> wrote:
>> 
>> 
>> The RSS accounting on Linux doesn't always tell the complete truth, and
>> it can even vary depending on whether you're using small or large pages.
>> ZGC does heap multi-mapping, which means it maps the same heap memory at
>> three different locations in the virtual address space. When using small
>> pages, Linux isn't clever enough to detect that it's the same memory
>> being mapped multiple times, so it accounts for each mapping as if it
>> were new/different, inflating the RSS by 3x. This typically doesn't
>> happen when using large pages (-XX:+UseLargePages).
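>> 
>> If you want to see the effect yourself, a rough sketch (the heap size and
>> class name are only placeholders, and on JDK 11 ZGC still needs the
>> experimental unlock flag) is to compare the reported RSS once the heap has
>> actually been touched:
>> 
>>   java -XX:+UnlockExperimentalVMOptions -XX:+UseZGC -Xmx4g MyApp &
>>   ps -o rss= -p $!    # small pages: can report up to ~3x the heap
>> 
>>   java -XX:+UnlockExperimentalVMOptions -XX:+UseZGC -XX:+UseLargePages -Xmx4g MyApp &
>>   ps -o rss= -p $!    # large pages: much closer to the real footprint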
>> 
>> 
>> 
>> Thanks. I would call this an actual bug in Linux then. Counting the same
>> memory multiple times is really not OK.
>> 
>> Hm... are large pages really problematic as suggested here? https://www.oracle.com/technetwork/java/javase/tech/largememory-jsp-137182.html
>> 
>> You are probably referring to this paragraph from that article, right?
>> 
>> However, please note that sometimes using large page memory can negatively
>> affect system performance. For example, when a large amount of memory is
>> pinned by an application, it may create a shortage of regular memory and
>> cause excessive paging in other applications and slow down the entire
>> system. Also please note that for a system that has been up for a long time,
>> excessive fragmentation can make it impossible to reserve enough large page
>> memory. When that happens, either the OS or the JVM will revert to using
>> regular pages.
>> 
>> This paragraph applies to a system that has multiple applications running
>> on it, and/or to a situation where there is not much memory available
>> beyond what you have configured as large pages.
>> 
>> With some hand waving, and generally speaking: if you have a lot of memory
>> available on your system, or you are not running multiple applications that
>> could push you close to exhausting physical memory or to needing large
>> contiguous segments of memory, then configuring large pages as described
>> should work fine.
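>> 
>> For the OS side, the classic way to reserve large pages on Linux is roughly
>> the following, run as root; the 2048 below is only an example and should be
>> sized to your heap and your distro's huge page size (typically 2MB):
>> 
>>   echo 2048 > /proc/sys/vm/nr_hugepages   # reserve 2048 x 2MB = 4GB
>>   grep Huge /proc/meminfo                 # check HugePages_Total / HugePages_Free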
>> 
>> Another tip that helps with large pages: reboot your system before
>> configuring it for them. It is usually not required, but it does make it
>> easier to find contiguous pages to lock into memory as large pages. You
>> might also consider adding -XX:+AlwaysPreTouch in addition to
>> -XX:+UseLargePages as JVM command line options.
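>> 
>> Putting those flags together, a ZGC command line using large pages might
>> look something like this (heap size and class name are placeholders):
>> 
>>   java -XX:+UnlockExperimentalVMOptions -XX:+UseZGC \
>>        -XX:+UseLargePages -XX:+AlwaysPreTouch -Xmx4g MyApp
>> 
>> AlwaysPreTouch makes the JVM touch the whole heap during startup, so the
>> cost of faulting pages in is paid up front rather than while the
>> application is running.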
>> 
>> You can also use transparent huge pages. If you want to go down that path
>> I can send you instructions. Just let me know.
>> 
> 
> 
> -- 
> Stefan Reich
> BotCompany.de // Java-based operating systems

