RFR: 8271195: Use largest available large page size smaller than LargePageSizeInBytes when available [v5]

Thomas Stuefe stuefe at openjdk.java.net
Mon Feb 14 14:36:10 UTC 2022


On Mon, 14 Feb 2022 14:24:55 GMT, Swati Sharma <duke at openjdk.java.net> wrote:

>> Hi Team,
>> 
>> In this patch I have fixed two issues related to large pages, following is the summary of changes :-
>> 
>> 1. The patch fixes the existing large page allocation functionality: if a commit over 1GB pages fails, allocation should fall back to the next smaller large page size, i.e. 2M, whereas currently it falls back to 4KB pages, resulting in a significant TLB miss penalty.
>> The patch includes a new JTREG test case covering various scenarios that check for correct explicit page allocation according to the 1G, 2M, 4K priority.
>> 2. While attempting a commit over large pages, we first reserve the requested bytes in the virtual address space; if the commit to large pages fails, we should un-reserve the entire reservation to avoid leaking virtual address space.
>> 
>> Please find below the performance data with and without the patch for the JMH benchmark included with the patch.
>> 
>> ![image](https://user-images.githubusercontent.com/96874289/152189587-4822a4ca-f5e2-4621-b405-0da941485143.png)
>> 
>> 
>> Please review and provide your valuable comments.
>> 
>> 
>> 
>> Thanks,
>> Swati Sharma
>> Runtime Software Development Engineer 
>> Intel
>
> Swati Sharma has updated the pull request incrementally with one additional commit since the last revision:
> 
>   8271195: Resolved the review comments

Sorry all for dropping out of the discussion.

> 
> > > Which of these are more important? On the other, if we could satisfy the heap of 128m using 2m pages we would be closer to what I would see as the correct solution. This would be achieved today by setting `LargePageSizeInBytes=2m`.
> > 
> > 
> > I would actually like the following behavior:
> > * `LargePageSizeInBytes` is the largest page size usable by the VM. The VM is free to choose whatever it likes but should give preference to larger page sizes if possible.
> > * When reserving a region and `UseLargePages=true`, use the largest page size possible which fulfills the size requirement (and possibly the alignment requirement if a wish address was specified). I think this is close to what we do now.
> > 
> > So, `-XX:+UseLargePages -XX:LargePageSizeInBytes=1G -Xmx1536m -XX:ReservedCodeCacheSize=256m` would use a single 1G page and 256 2M pages for the heap.
> 
> Currently this would round the heap up to 2G and use 2 1G pages, right?

Right.

> But we've discussed doing something like this in the past, and I think it would be nice from a perf perspective. But there are some issues: say the 2M commit above fails, then the whole reservation needs to be restarted (because we can't guarantee that we still have the range reserved), and should we then try to use fewer 2M pages or revert directly down to 4K pages?

Probably the latter, with a warning. I expect that in general the admin will be able to allocate enough huge pages, so this error path is rare and its handling can be simple. Just my opinion.

> There are also other things that will be affected; for example, we have some code in G1 that tracks the page size of the underlying mapping to know when regions can be truly uncommitted (multiple regions sharing one underlying OS page). Having the heap consist of multiple page sizes would make this more complicated.

I understand this, but I am also confused: I thought large page memory is never uncommitted because we cannot be sure it can be recommitted? When does G1 uncommit a large-paged heap?

> > Open question would be whether we even need LargePageSizeInBytes. Why not simplify and always use the largest possible page size we find available? If there are 1G pages and they would fit into the to-be-reserved address range, we use them. Why would an administrator allow large pages in general, create a 1G page pool, but disallow them?
> 
> This would be the simplest and cleanest approach in the code, now that we support multiple page sizes. One argument I've heard against this is that an administrator might want to set up a 1G page pool for some other application, like a database, while letting the JVM use only 2M pages. So there might be use cases.
> 
> If we went down this route, I also think we should stop caring about what the "default" large page size is, and always just use the ones configured. But then there is the question of what "configured" means: must a page size pass the sanity test to be considered configured (that is basically what this change proposes)?

Yes, I think getting rid of default large page makes sense. And instinctively I would say "configured" means we were, at startup, able to allocate at least one page of that size.

> 
> Counter question: if we go down that route, would we still have `os::large_page_size()`, or should all users always ask for a page size given the size of their mapping?

I think the latter. If we have multiple large page sizes but no concept of a default, the caller needs to specify its wish size.

-------------

PR: https://git.openjdk.java.net/jdk/pull/7326


More information about the hotspot-runtime-dev mailing list