how to tune gc for tomcat server on large machine that uses almost all old generation smallish objects
Andy Nuss
andrew_nuss at yahoo.com
Wed Dec 13 19:34:05 UTC 2017
Thanks Kirk,
The array is just a temporary buffer that I hold onto; its entries are cleared to null after my LRU sweep. The references that actually get freed to GC live in the ConcurrentHashMaps, and they are all roughly 30-char and 100-char strings, keys and values, though not precisely those lengths. So I assume that when I do my LRU sweep, it frees a ton of small strings which G1 then has to mark freed and compact back into bigger chunks so that I can add new such strings to the LRU cache in the future. The concern was whether this sweep of old-gen strings scattered all over the huge heap would cause the Tomcat nio-based threads to "hang" (not respond quickly), or whether G1 would do things less pre-emptively. Are you basically saying, "no, Tomcat servlet response time won't be significantly affected by the G1 sweep"?
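(If it helps to answer that concretely, I could turn on pause logging and watch what the sweep actually costs -- assuming I have the JDK 8 flag spellings right, something like the following, where the log path is just a placeholder:

    -XX:+PrintGCDetails -XX:+PrintGCDateStamps
    -XX:+PrintGCApplicationStoppedTime
    -Xloggc:/path/to/gc.log

The stopped-time lines would show directly whether the old-gen cleanup is stalling the request threads.)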
Also, I was wondering: does anyone know how memcached works, and why it is used in preference to a custom design such as mine, which seems a lot simpler? I.e., it seems that with memcached you have to worry about "slabs" and memcached's own heap management, and end up wasting a lot of memory.
Andy
On Wednesday, December 13, 2017, 7:54:36 AM PST, Kirk Pepperdine <kirk at kodewerk.com> wrote:
Hi Andy,
I wouldn’t do anything special. The array is effectively a cache and in G1 that would be a humongous allocation (in most configurations). After that, it’s business as usual.
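Back-of-the-envelope for the 16 GB case, assuming the default region sizing (G1 targets roughly 2048 regions, power-of-two sized, capped at 32 MB each):

    16 GB heap / ~2048 regions  ->  8 MB regions by default
    humongous threshold  =  region size / 2  =  4 MB
    1% of 16 GB  ~=  160 MB  >>  4 MB  ->  allocated directly as contiguous humongous regions in the old gen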
Kind regards,
Kirk Pepperdine
On Dec 13, 2017, at 3:07 PM, Andy Nuss <andrew_nuss at yahoo.com> wrote:
I am writing a custom servlet replacement for memcached, suited to my own needs. When the servlet boots, I create a huge array, about 1% of total memory, in a static variable of my Cache class. This is for quickselect median sorting: when memory is about 75% full by my calculations, I can throw away half of my cached entries in a background thread.
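To make that eviction step concrete, here is the rough shape of the selection I have in mind (a sketch only; the class name, method names, and the use of a plain long[] of last-access timestamps are made up for this mail, not my actual code):

    import java.util.concurrent.ThreadLocalRandom;

    // Illustrative only: quickselect the median last-access timestamp so that
    // entries at or below it can be evicted by the background sweep.
    final class MedianSelect {

        // Returns the k-th smallest value (0-based) among ts[0..n).
        // Partially reorders ts in place.
        static long select(long[] ts, int n, int k) {
            int lo = 0, hi = n - 1;
            while (lo < hi) {
                // random pivot keeps the pathological O(n^2) case unlikely
                int p = lo + ThreadLocalRandom.current().nextInt(hi - lo + 1);
                p = partition(ts, lo, hi, p);
                if (k == p) return ts[k];
                if (k < p) hi = p - 1; else lo = p + 1;
            }
            return ts[lo];
        }

        private static int partition(long[] ts, int lo, int hi, int pivotIdx) {
            long pivot = ts[pivotIdx];
            swap(ts, pivotIdx, hi);          // move pivot out of the way
            int store = lo;
            for (int i = lo; i < hi; i++) {
                if (ts[i] < pivot) swap(ts, i, store++);
            }
            swap(ts, store, hi);             // pivot into its final slot
            return store;
        }

        private static void swap(long[] ts, int a, int b) {
            long tmp = ts[a]; ts[a] = ts[b]; ts[b] = tmp;
        }
    }

The background sweep would call something like select(lastAccessScratch, count, count / 2) (hypothetical names), then walk the map tree and remove every entry whose last-access time is at or below the returned cutoff.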
My cache consists of tiered ConcurrentHashMaps whose keys are base64 strings of about 30 completely random chars. The mapped value is always about 100 chars. The first char of the key takes you to the second tier of ConcurrentHashMaps, and so on at a shallow depth, until you reach the ConcurrentHashMap that maps the key to the value string. The cache has get, put, and delete methods for these String keys and values, and again, though the strings are roughly the same length, they are not exactly the same length. And because the tree of maps is statically held, all the strings are in the old-generation heap.
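In code, the shape is roughly the following (illustrative names only, simplified to two tiers; my real version also tracks whatever the LRU sweep needs):

    import java.util.concurrent.ConcurrentHashMap;

    // Rough sketch of the tiered cache described above.
    final class TieredCache {
        // first tier: keyed by the first character of the base64 key
        private final ConcurrentHashMap<Character, ConcurrentHashMap<String, String>> tiers =
                new ConcurrentHashMap<>();

        public String get(String key) {
            ConcurrentHashMap<String, String> tier = tiers.get(key.charAt(0));
            return tier == null ? null : tier.get(key);
        }

        public void put(String key, String value) {
            tiers.computeIfAbsent(key.charAt(0), c -> new ConcurrentHashMap<>())
                 .put(key, value);
        }

        public void delete(String key) {
            ConcurrentHashMap<String, String> tier = tiers.get(key.charAt(0));
            if (tier != null) tier.remove(key);
        }
    }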
The machine has 8 GB or 16 GB or more of memory, with 2 or 4 or more CPUs. The map fills over time on the Tomcat instance with many millions of mappings which, being reachable from static references, live in the old generation, so the Tomcat machine is mostly old-generation heap usage. The question is what happens around the time the quickselect prunes away half of the LRU entries by deleting them from the map tree. That removes the key and value strings, which are smallish and scattered all over the heap, potentially reclaiming half of memory when done and making room for future additions to the map once the garbage has been cleared. Which GC should I use (JDK 8), and with what settings, in the Tomcat program? And the most important question is this: how do I ensure my Tomcat threads (using nio) do not hang for long periods of time while the GC is sweeping the old generation?
E.g., a servlet's get handler would do puts or gets against the map to fulfill the servlet request, and I want to ensure that it always completes in microseconds and does not hang, even when the GC is doing extensive reclamation.
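A starting point might be something like the following in Tomcat's setenv.sh (the heap sizes are placeholders for a 16 GB box, and the flags are the JDK 8 spellings), though this guess is exactly what I'm asking about:

    CATALINA_OPTS="$CATALINA_OPTS \
      -Xms12g -Xmx12g \
      -XX:+UseG1GC \
      -XX:MaxGCPauseMillis=100 \
      -XX:InitiatingHeapOccupancyPercent=60 \
      -XX:+ParallelRefProcEnabled"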