RFR: 8330181: Move PcDesc cache from nmethod header [v2]
Vladimir Kozlov
kvn at openjdk.org
Thu Apr 25 22:50:41 UTC 2024
On Wed, 24 Apr 2024 15:33:42 GMT, Vladimir Kozlov <kvn at openjdk.org> wrote:
>> Currently PcDescCache (32 bytes in 64-bit VM: PcDesc* _pc_descs[4]) is allocated in `nmethod` header.
>>
>> Moved PcDescContainer (which includes cache) to C heap similar to ExceptionCache to reduce size of `nmethod` header and to remove WXWrite transition when we update the cache in `PcDescCache::add_pc_desc()`.
>>
>> Removed `PcDescSearch` class which was leftover from `CompiledMethod` days.
>>
>> Tested tier1-4,stress,xcomp and performance.
>
> Vladimir Kozlov has updated the pull request incrementally with one additional commit since the last revision:
>
> Remove unneeded ThreadWXEnable
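To illustrate the structure being discussed, here is a minimal sketch of a tiny fixed-size PcDesc-style cache of the kind described above (four `PcDesc*` slots with round-robin replacement). The names `SimplePcDesc` and `SimplePcDescCache` are hypothetical, simplified stand-ins, not the actual HotSpot classes; the point is only that such an object is small and self-contained, so it can live in C heap instead of the `nmethod` header:

```cpp
#include <cstring>

// Hypothetical, simplified stand-in for HotSpot's PcDesc: maps a pc offset
// within the nmethod to (imaginary) debug-info bookkeeping.
struct SimplePcDesc {
  int pc_offset;   // offset of the instruction within the nmethod's code
  int scope_index; // index into debug info (illustrative only)
};

// Hypothetical 4-entry cache in front of the full PcDesc table, analogous to
// the `PcDesc* _pc_descs[4]` mentioned in the PR description.
class SimplePcDescCache {
  static const int kEntries = 4;
  SimplePcDesc* _cache[kEntries]; // small cache with round-robin eviction
  int _next;                      // next slot to overwrite
public:
  SimplePcDescCache() : _next(0) { memset(_cache, 0, sizeof(_cache)); }

  SimplePcDesc* find(int pc_offset) {
    for (int i = 0; i < kEntries; i++) {
      if (_cache[i] != nullptr && _cache[i]->pc_offset == pc_offset) {
        return _cache[i]; // hit: avoids searching the full PcDesc array
      }
    }
    return nullptr; // miss: caller falls back to the full (sorted) table
  }

  void add(SimplePcDesc* pd) {
    _cache[_next] = pd;            // plain round-robin replacement
    _next = (_next + 1) % kEntries;
  }
};
```

Because `add()` mutates the cache, keeping it in (writable) C heap rather than in the code-space `nmethod` header is what lets the WXWrite transition in `PcDescCache::add_pc_desc()` be removed on platforms with W^X code memory.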
Thank you, John, for the review and the history lesson. A few comments on your comments. ;^)
> As a top level goal, I hope some day soon we will get all the metadata out of code space, both mutable (as in this case) and immutable.
The first step toward that will be my next PR for [JDK-8331087](https://bugs.openjdk.org/browse/JDK-8331087) "Move read-only nmethod data from CodeCache". That data accounts for about 30% of the space in the CodeCache.
The next step will be converting the relocation info data to immutable by moving all encoded pointers into the oops, metadata and other sections. (I have not started on that yet.)
I would keep the mutable sections (oops, metadata) together with the code for now, because `oops_do()` and `metadata_do()` process them together with the code. And these sections are relatively small compared to the whole nmethod size:
relocation = 509520 (6.003523%)
constants = 288 (0.003393%)
main code = 4957240 (58.409695%)
stub code = 286832 (3.379657%)
oops = 20824 (0.245363%)
metadata = 126944 (1.495744%)
> (But malloc still does not fully integrate with HotSpot’s Native Memory Tracking, so that might be an issue.)
It is not an issue anymore because we are using our wrapper [os::malloc()](https://github.com/openjdk/jdk/blob/master/src/hotspot/share/runtime/os.cpp#L629), which does NMT.
> As a further investment, I’d replace ad hoc compressed data (which is always hard to maintain) with uniformly compressed data, using Unsigned5 (from Pack200)
Yes, that should be done to compress the zeros in the data we are already compressing (ScopesDesc).
Compressing all of the data has an issue because some data (PcDesc) needs random access into a big array. We discussed the possibility of compressing such arrays in chunks to reduce access time. This needs careful investigation.
Thanks again for the review, @rose00
-------------
PR Comment: https://git.openjdk.org/jdk/pull/18895#issuecomment-2078291565
More information about the hotspot-compiler-dev
mailing list