RFR: 8271195: Use largest available large page size smaller than LargePageSizeInBytes when available [v3]
Swati Sharma
duke at openjdk.java.net
Thu Feb 10 10:12:14 UTC 2022
On Mon, 7 Feb 2022 15:10:19 GMT, Stefan Johansson <sjohanss at openjdk.org> wrote:
> > > I think my example, that a 128m heap is aligned up to 512m if the large page size is 512m, is a case that could be considered a bug, but it is not crystal clear, because the user has specified both that they want large pages and a heap size of 128m.
> >
> >
> > I remember us discussing this recently: https://bugs.openjdk.java.net/browse/JDK-8267475
>
> Thanks for digging out the JBS issue for this.
>
> > > Which of these are more important? On the other, if we could satisfy the heap of 128m using 2m pages we would be closer to what I would see as the correct solution. This would be achieved today by setting `LargePageSizeInBytes=2m`.
> >
> >
> > I would actually like the following behavior:
> > ```
> > * LargePageSizeInBytes is the largest page size usable by the VM. It is free to choose whatever it likes but should give preference to larger page sizes if possible
> >
> > * when reserving a region and UseLargePages=true, use the largest page size possible which fulfills the size requirement (and possibly the alignment requirement if a wish address was specified).
> > ```
> >
> > I think this is close to what we do now.
> > So, `-XX:+UseLargePages -XX:LargePageSizeInBytes=1G -Xmx1536m -XX:ReservedCodeCacheSize=256m` would use
> > ```
> > * a single 1G page and 256 2m pages for the heap
> > ```
>
> Currently this would round the heap up to 2G and use two 1G pages, right? We've discussed doing something like this in the past and I think it would be nice from a performance perspective. But there are some issues: say the 2m commit above fails, then the whole reservation needs to be restarted (because we can't guarantee that we still have the range reserved), and should we then try to use fewer 2m pages or revert directly down to 4k pages? Other things would also be affected. For example, we have some code in G1 that tracks the page size of the underlying mapping to know when regions can be truly uncommitted (multiple regions sharing one underlying OS page), and having the heap consist of multiple page sizes would make this more complicated.
>
> > ```
> > * 128 2m pages for the code cache
> > ... and if we ever re-introduce large pages for metaspace, those smallish segments would probably use 2m pages too.
> > ```
> >
> > Open question would be whether we even need LargePageSizeInBytes. Why not simplify and always use the largest possible page size we find available? If there are 1G pages and they would fit into the to-be-reserved address range, we use them. Why would an administrator allow large pages in general, create a 1G page pool, but disallow them?
>
> This would be the simplest and cleanest approach now that the code supports multiple page sizes. One argument I've heard against it is that an administrator might want to set up a 1G page pool for some other application, like a database, while letting the JVM use only 2M pages. So there might be use cases.
>
> If we go down this route I also think we should stop caring about what the "default" large page size is, and always just use the sizes that are configured. But then there is the question of what counts as "configured": must a page size pass the sanity test to be considered configured? (That is basically what this change proposes.)
>
> Counter question: if we go down that route, would we still have `os::large_page_size()`, or should all users always ask for a page size given the size of their mapping?
Hi @kstefanj , @tstuefe
Thanks for sharing your views and comments.
Please suggest what specific changes you would like us to make in the patch, since it fixes already existing functionality.
Best Regards,
Swati
-------------
PR: https://git.openjdk.java.net/jdk/pull/7326
More information about the hotspot-runtime-dev mailing list