RFR: 8333258: C2: high memory usage in PhaseCFG::insert_anti_dependences()
Emanuel Peter
epeter at openjdk.org
Thu Jun 20 12:34:09 UTC 2024
On Wed, 19 Jun 2024 12:54:26 GMT, Roland Westrelin <roland at openjdk.org> wrote:
> In a debug build, `PhaseCFG::insert_anti_dependences()` is called
> twice for a single node: once for actual processing, once for
> verification.
>
> In TestAntiDependenciesHighMemUsage, the test has a `Region` that
> merges 337 incoming paths. It also has one `Phi` per memory slice that
> is stored to: 1000 `Phi` nodes. Each `Phi` node has 337 inputs that
> are identical except for one. The common input is the memory state on
> method entry. The test has 60 `Load` nodes that need to be processed
> for anti dependences. All `Load` nodes share the same memory input:
> the memory state on method entry. For each `Load`, every `Phi` node is
> pushed 336 times onto the work lists for anti dependence processing,
> because each `Phi` appears that many times among the uses of the
> `Load`'s memory state: in total, `Phi`s are pushed 336,000 times onto
> 2 work lists. Memory is not reclaimed on exit from
> `PhaseCFG::insert_anti_dependences()`, so memory usage grows as `Load`
> nodes are processed:
>
> 336,000 pushes * 2 work lists * 60 loads * 8 bytes per pointer = 322 MB.
>
> The fix I propose for this is to not push a `Phi` node more than once
> when the same node appears multiple times among its inputs.
>
> In TestAntiDependenciesHighMemUsage2, the test has 4000 loads. When
> each of them is processed for anti dependences, all 4000 loads are
> pushed onto the work lists because they share the same memory
> input. They are then discarded as soon as they are popped from the
> work list, because only stores are of interest:
>
> 4000 loads processed * 4000 loads pushed * 2 work lists * 8 bytes per pointer = 256 MB.
>
> The fix I propose for this is to test, before pushing a node onto the
> work list, whether it is a store.
>
> Finally, I propose adding a `ResourceMark` so memory doesn't
> accumulate over calls to `PhaseCFG::insert_anti_dependences()`.
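For readers less familiar with this code, here is a rough, self-contained sketch of the two guards the description above proposes. This is plain C++, not the actual HotSpot change; `Node`, `Kind` and `collect_worklist` are made-up names standing in for the real node types and work-list handling:

    #include <unordered_set>
    #include <vector>

    // Simplified model, not HotSpot code. A node has an id, a kind, and a
    // list of uses; because a Phi can take the same memory state as input
    // through hundreds of edges, it shows up in that use list once per edge.
    enum class Kind { Load, Store, Phi, Other };

    struct Node {
        int                id;
        Kind               kind;
        std::vector<Node*> uses;  // out-edges: the nodes that consume this one
    };

    // Collect the uses of a load's memory input that are worth putting on the
    // work list. The two guards mirror the fixes described above:
    //  1. a Phi is pushed at most once, no matter how many identical edges it
    //     has to the memory state (the `pushed` set),
    //  2. plain memory ops are pushed only if they are stores; loads would
    //     only be popped and discarded later, so they are filtered up front.
    static void collect_worklist(Node* mem, std::vector<Node*>& worklist) {
        std::unordered_set<int> pushed;
        for (Node* use : mem->uses) {
            if (use->kind == Kind::Phi) {
                if (pushed.insert(use->id).second) {
                    worklist.push_back(use);  // first time we see this Phi
                }
            } else if (use->kind == Kind::Store) {
                worklist.push_back(use);      // stores are the interesting uses
            }
            // loads and other uses are skipped rather than pushed-then-discarded
        }
    }

The third proposed change, the `ResourceMark`, is orthogonal to this sketch: it just makes sure whatever the work lists allocated is released when `PhaseCFG::insert_anti_dependences()` returns, instead of accumulating across calls.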
Never mind, I removed my comment. I need to understand the whole algorithm better first.
I guess it could be a tradeoff: a `Node_List` is "sparse" here, in that it only requires as much space as the number of elements you push. But a `VectorSet` is more compact once you add a lot of elements, and membership tests are faster.
Of course that could be investigated separately.
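To put rough numbers on that tradeoff, here is a back-of-the-envelope sketch (plain C++, not HotSpot code; the push count is reused from the first test case above, the node count of 10,000 is made up for illustration):

    #include <cstddef>
    #include <cstdio>

    // "Sparse" pointer list vs. "dense" bit set, in terms of raw space.
    int main() {
        const std::size_t unique_nodes = 10000;   // hypothetical node count
        const std::size_t pushes       = 336000;  // pushes if duplicates are kept

        // Node_List-style: one pointer per push, duplicates included.
        std::size_t list_bytes = pushes * sizeof(void*);

        // VectorSet-style: one bit per possible node index, regardless of how
        // often a node is encountered; "already seen?" is a single bit lookup.
        std::size_t set_bytes = (unique_nodes + 7) / 8;

        std::printf("pointer list: %zu bytes\n", list_bytes);  // ~2.7 MB
        std::printf("bit set:      %zu bytes\n", set_bytes);   // ~1.3 KB
        return 0;
    }

Under these assumptions the list only wins when few elements are pushed relative to the total number of nodes; once duplicates dominate, the bit set is both smaller and cheaper to query.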
-------------
PR Comment: https://git.openjdk.org/jdk/pull/19791#issuecomment-2180555975