8259886 : Improve SSL session cache performance and scalability

Daniel Jeliński djelinski1 at gmail.com
Wed Jan 27 09:28:14 UTC 2021

Hi all,
I'd like to modify the MemoryCache class that is used for caching SSL
sessions in Java 11. When the cache is overloaded (a full cache with no
expired entries), the computational complexity of the put operation is
linear in the cache size.

I am aware that current Java versions use stateless resumption; my
company's policy is to use Java LTS releases, so stateless resumption is
not an option for us at the moment.

The change I propose would alter the behavior of MemoryCache. Currently,
the cache guarantees that expired entries are removed before live entries
when the cache capacity is reached. I'd like to remove that guarantee and
instead always remove the least recently used cache entry. Additionally, to
keep memory use in check, I'd like to check the least recently used
entries for expiration and remove them as needed. This operation would run
on every put().
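As a sketch of the proposed behavior (not the actual MemoryCache code; the
class and method names here are hypothetical), a LinkedHashMap in access
order gives least-recently-used-first iteration, so eviction and lazy
expiration can both work from the LRU end:

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative LRU cache with lazy expiration; not the real MemoryCache.
class LruSessionCache<K, V> {
    private static class Entry<V> {
        final V value;
        final long expiresAt; // millis since epoch
        Entry(V value, long expiresAt) { this.value = value; this.expiresAt = expiresAt; }
    }

    private final int capacity;
    private final LinkedHashMap<K, Entry<V>> map;

    LruSessionCache(int capacity) {
        this.capacity = capacity;
        // accessOrder = true: iteration visits the least recently used entry first
        this.map = new LinkedHashMap<>(16, 0.75f, true);
    }

    synchronized void put(K key, V value, long lifetimeMillis) {
        long now = System.currentTimeMillis();
        expireLru(now);                       // drop expired LRU entries first
        if (map.size() >= capacity && !map.containsKey(key)) {
            // evict the least recently used entry, expired or not
            Iterator<K> it = map.keySet().iterator();
            if (it.hasNext()) { it.next(); it.remove(); }
        }
        map.put(key, new Entry<>(value, now + lifetimeMillis));
    }

    synchronized V get(K key) {
        Entry<V> e = map.get(key);            // refreshes access order
        if (e == null) return null;
        if (e.expiresAt <= System.currentTimeMillis()) {
            map.remove(key);
            return null;
        }
        return e.value;
    }

    // Remove expired entries from the LRU end until a live one is found.
    private void expireLru(long now) {
        Iterator<Map.Entry<K, Entry<V>>> it = map.entrySet().iterator();
        while (it.hasNext()) {
            if (it.next().getValue().expiresAt <= now) {
                it.remove();
            } else {
                break;                        // first live entry: stop scanning
            }
        }
    }

    synchronized int size() { return map.size(); }
}
```

Note the trade-off this encodes: an expired entry that was recently
accessed sits at the MRU end and will not be found by expireLru(), which
is exactly the relaxed guarantee described above.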

Evaluated alternatives:
- keep MemoryCache as is and increase the cache size to avoid overloading.
This would reduce the frequency of checks for expired entries, but if the
larger cache eventually gets overloaded, the put() operation would take
even longer to complete.
- keep the MemoryCache behavior as is, but improve the performance of
removing expired entries. This would require a new data structure: we
would need cache entries sorted both by access time and by creation time.
- always remove entries in insertion order, regardless of whether they are
still in use.
- always remove the least recently used entries, even if they are still
valid and there are recently used expired entries somewhere in the cache.
This is my preferred option.

Additionally, we can choose how to remove stale entries:
- remove all reachable stale entries on every put (that is, iterate over
the internal data structure, removing stale entries until a non-stale one
is found); this keeps memory use low, but can occasionally scan the entire
cache (computational complexity: linear worst case, amortized constant);
- remove a fixed number of reachable stale entries on every put (same as
above, but stop iterating after a fixed number of removed entries even if
more are available); execution time is bounded by a constant, but some
entries may stay in the cache a bit longer than necessary;
- remove all stale entries on a put that would otherwise exceed cache
capacity (the current implementation);
- never remove stale entries; once the cache reaches full capacity, it
stays full until the application is restarted. This is the fastest
implementation, but at the cost of increased memory use.

My preference would be to remove either all or a fixed number of reachable
stale entries.
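The bounded variant could look roughly like this (a hypothetical helper,
not existing JDK code; it assumes the map iterates least recently used
entries first, e.g. a LinkedHashMap in access order, and maps each key to
its expiration time in milliseconds):

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical bounded expunge: remove at most `limit` expired entries,
// stopping early at the first live entry, as in the "fixed number of
// reachable stale entries" option.
final class StaleEntryExpunger {
    static <K> int expungeExpired(LinkedHashMap<K, Long> expiryByKey,
                                  long now, int limit) {
        int removed = 0;
        Iterator<Map.Entry<K, Long>> it = expiryByKey.entrySet().iterator();
        while (it.hasNext() && removed < limit) {
            Map.Entry<K, Long> e = it.next();
            if (e.getValue() <= now) {
                it.remove();
                removed++;
            } else {
                break; // first non-stale entry found: stop scanning
            }
        }
        return removed;
    }
}
```

Setting the limit to Integer.MAX_VALUE degenerates to the "all reachable
stale entries" option, so both preferred variants share one code path.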

I'm willing to prepare a patch for 17 and 11u, if I can find a sponsor.
