RFR: 8349713: [leyden] Memory map the cached code file
Aleksey Shipilev
shade at openjdk.org
Mon Feb 10 14:47:21 UTC 2025
On Mon, 10 Feb 2025 13:01:27 GMT, Francesco Nigro <duke at openjdk.org> wrote:
> Do the numbers still hold with `sync; echo 3 > /proc/sys/vm/drop_caches`?
Yes, they do, and there is a good reason why: without caches, the I/O hit on the critical startup path is even worse, even with a modern SSD, so mmap helps more there. And AFAIU, file-backed mmap plays well with the page cache too. Observe:
# Drop caches, read
    Time (mean ± σ):     521.0 ms ±   6.1 ms    [User: 1297.3 ms, System: 259.0 ms]
    Range (min … max):   514.6 ms … 532.2 ms    10 runs

# Drop caches, mmap
    Time (mean ± σ):     479.9 ms ±   2.6 ms    [User: 1148.7 ms, System: 223.4 ms]   ; <--- ~40ms faster
    Range (min … max):   476.4 ms … 484.3 ms    10 runs

# Cached, read
    Time (mean ± σ):     413.0 ms ±   3.5 ms    [User: 1267.0 ms, System: 207.9 ms]
    Range (min … max):   408.6 ms … 417.7 ms    10 runs

# Cached, mmap
    Time (mean ± σ):     386.0 ms ±   4.7 ms    [User: 1258.5 ms, System: 183.3 ms]   ; <--- ~30ms faster
    Range (min … max):   378.7 ms … 393.7 ms    10 runs
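For illustration only (this is not the HotSpot code in this PR), a minimal standalone C++ sketch of the two loading strategies being compared: copying the file contents with read() versus establishing a read-only, page-cache-backed mapping with mmap(). The helper names and error handling are made up for the example; only the POSIX calls themselves are standard.

// Minimal sketch, not the actual PR code: load a file either by copying it
// with read() or by mapping it with mmap(). The mmap path creates a
// read-only view backed by the page cache, so pages are faulted in lazily
// and untouched parts of the file are never copied.
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>
#include <cstdio>
#include <cstdlib>

void* load_with_read(int fd, size_t size) {
  void* buf = malloc(size);
  if (buf == nullptr) return nullptr;
  size_t done = 0;
  while (done < size) {
    ssize_t n = read(fd, (char*)buf + done, size - done);
    if (n <= 0) { free(buf); return nullptr; }  // error or unexpected EOF
    done += (size_t)n;
  }
  return buf;                                   // caller frees
}

void* load_with_mmap(int fd, size_t size) {
  // Read-only, private, file-backed mapping; no upfront copy of the data.
  void* p = mmap(nullptr, size, PROT_READ, MAP_PRIVATE, fd, 0);
  return (p == MAP_FAILED) ? nullptr : p;       // caller munmaps
}

int main(int argc, char** argv) {
  if (argc < 2) { fprintf(stderr, "usage: %s <file>\n", argv[0]); return 1; }
  int fd = open(argv[1], O_RDONLY);
  if (fd < 0) { perror("open"); return 1; }
  struct stat st;
  if (fstat(fd, &st) != 0) { perror("fstat"); close(fd); return 1; }
  size_t size = (size_t)st.st_size;

  void* copied = load_with_read(fd, size);      // eager copy into the heap
  if (copied != nullptr) free(copied);

  void* mapped = load_with_mmap(fd, size);      // lazy, page-cache-backed view
  printf("mapped %zu bytes at %p\n", size, mapped);
  if (mapped != nullptr) munmap(mapped, size);

  close(fd);
  return 0;
}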
> The other concern re mmap is the munmap cost, which on the kernel side relies (IIRC) on a single (!) lock guarding the process's virtual memory mappings, which usually slows down process termination.
My `hyperfine` tests include that cost, as they are end-to-end invocation tests. (I remember this from 1BRC times.)
-------------
PR Comment: https://git.openjdk.org/leyden/pull/34#issuecomment-2648211800