<html aria-label="message body"><head><meta http-equiv="content-type" content="text/html; charset=utf-8"></head><body style="overflow-wrap: break-word; -webkit-nbsp-mode: space; line-break: after-white-space;">No. You are incorrect. The same rules against pooling apply to Go as well. Only for VERY expensive objects should you pool - most bulk memory allocations are short lived bump allocations.<br id="lineBreakAtBeginningOfMessage"><div><br><blockquote type="cite"><div>On Jan 23, 2026, at 11:55 PM, Jianbin Chen <jianbin@apache.org> wrote:</div><br class="Apple-interchange-newline"><div><div dir="ltr">Hi Francesco,<br><br>I modified my example as follows:<br><br>```java<br>public static void main(String[] args) throws InterruptedException {<br> Executor executor = Executors.newVirtualThreadPerTaskExecutor();<br> Executor executor2 = new ThreadPoolExecutor(200, Integer.MAX_VALUE, 0L, java.util.concurrent.TimeUnit.SECONDS,<br> new SynchronousQueue<>(), Thread.ofVirtual().factory());<br> for (int i = 0; i < 10100; i++) {<br> executor.execute(() -> {<br> try {<br> Thread.sleep(100);<br> } catch (InterruptedException e) {<br> throw new RuntimeException(e);<br> }<br> });<br> executor2.execute(() -> {<br> try {<br> Thread.sleep(100);<br> } catch (InterruptedException e) {<br> throw new RuntimeException(e);<br> }<br> });<br> }<br> Thread.sleep(5000);<br> long start = System.currentTimeMillis();<br> CountDownLatch countDownLatch = new CountDownLatch(5000000);<br> for (int i = 0; i < 5000000; i++) {<br> executor.execute(() -> {<br> try {<br> Thread.sleep(100);<br> countDownLatch.countDown();<br> } catch (InterruptedException e) {<br> throw new RuntimeException(e);<br> }<br> });<br> }<br> countDownLatch.await();<br> System.out.println("thread time: " + (System.currentTimeMillis() - start) + " ms");<br> start = System.currentTimeMillis();<br> CountDownLatch countDownLatch2 = new CountDownLatch(5000000);<br> for (int i = 0; i < 5000000; i++) {<br> executor2.execute(() -> {<br> 
try {<br> Thread.sleep(100);<br> countDownLatch2.countDown();<br> } catch (InterruptedException e) {<br> throw new RuntimeException(e);<br> }<br> });<br> }<br> countDownLatch2.await();<br> System.out.println("thread pool time: " + (System.currentTimeMillis() - start) + " ms");<br>}<br>```<br><br>I constructed the Executor directly with Executors.newVirtualThreadPerTaskExecutor(); <div>however, the run results still show that the pooled virtual‑thread behavior outperforms the non‑pooled virtual threads.<br><br></div></div><br><div class="gmail_quote gmail_quote_container"><div dir="ltr" class="gmail_attr">Francesco Nigro <<a href="mailto:nigro.fra@gmail.com">nigro.fra@gmail.com</a>> wrote on Fri, Jan 23, 2026 at 23:39:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">I would say, yes:<br>
<a href="https://github.com/openjdk/jdk21/blob/890adb6410dab4606a4f26a942aed02fb2f55387/src/java.base/share/classes/java/lang/ThreadBuilders.java#L317" rel="noreferrer" target="_blank">https://github.com/openjdk/jdk21/blob/890adb6410dab4606a4f26a942aed02fb2f55387/src/java.base/share/classes/java/lang/ThreadBuilders.java#L317</a><br>
unless the fix is backported - surely @Andrew Haley<br>
<<a href="mailto:aph-open@littlepinkcloud.com" target="_blank">aph-open@littlepinkcloud.com</a>> or @Alan Bateman <<a href="mailto:alan.bateman@oracle.com" target="_blank">alan.bateman@oracle.com</a>><br>
knows<br>
<br>
On Fri, Jan 23, 2026 at 16:32 Jianbin Chen <<a href="mailto:jianbin@apache.org" target="_blank">jianbin@apache.org</a>><br>
wrote:<br>
<br>
> Hi Francesco,<br>
><br>
> I'd like to know if there's a similar issue in JDK 21?<br>
><br>
> Best Regards.<br>
> Jianbin Chen, github-id: funky-eyes<br>
><br>
> Francesco Nigro <<a href="mailto:nigro.fra@gmail.com" target="_blank">nigro.fra@gmail.com</a>> wrote on Fri, Jan 23, 2026 at 23:14:<br>
><br>
>> In the original code snippet I see virtual threads named with a counter, so<br>
>> be aware of <a href="https://bugs.openjdk.org/browse/JDK-8372410" rel="noreferrer" target="_blank">https://bugs.openjdk.org/browse/JDK-8372410</a><br>
>><br>
>> On Fri, Jan 23, 2026 at 15:52 Jianbin Chen <<a href="mailto:jianbin@apache.org" target="_blank">jianbin@apache.org</a>><br>
>> wrote:<br>
>><br>
>>> I'm sorry — I forgot to mention the machine I used for the load test. My<br>
>>> server has 2 cores and 4 GB RAM, and the JVM heap was set to 2880m. Under my<br>
>>> test load (about 20,000 QPS), with non‑pooled virtual threads you generate<br>
>>> at least 20,000 × 8 KB = ~156 MB of byte[] allocations per second just from<br>
>>> that 8 KB buffer; that doesn't include other object allocations. With a<br>
>>> 2880 MB heap this allocation rate already forces very frequent GC, and<br>
>>> frequent GC raises CPU usage, which in turn significantly increases average<br>
>>> response time and p99 / p999 latency.<br>
>>><br>
>>> Pooling is usually introduced to solve performance issues — object pools<br>
>>> and connection pools exist to quickly reuse cached resources and improve<br>
>>> performance. So pooling virtual threads also yields obvious benefits,<br>
>>> especially for memory‑constrained, I/O‑bound applications (gateways,<br>
>>> proxies, etc.) that are sensitive to latency.<br>
>>><br>
>>> Best Regards.<br>
>>> Jianbin Chen, github-id: funky-eyes<br>
>>><br>
>>> Robert Engels <<a href="mailto:rengels@ix.netcom.com" target="_blank">rengels@ix.netcom.com</a>> wrote on Fri, Jan 23, 2026 at 22:20:<br>
>>><br>
>>>> I understand. I was trying to explain how you can avoid thread locals<br>
>>>> and still maintain the performance. It’s unlikely that allocating an 8k buffer is a<br>
>>>> performance bottleneck in a real program if the task is not cpu bound<br>
>>>> (depending on the granularity of your tasks) - but 2M tasks running<br>
>>>> simultaneously would require 16gb of memory, not including the stacks.<br>
>>>><br>
>>>> You cannot simply use the thread per task model without an<br>
>>>> understanding of the cpu, IO, and memory footprints of your tasks and then<br>
>>>> configure appropriately.<br>
>>>><br>
>>>> On Jan 23, 2026, at 8:10 AM, Jianbin Chen <<a href="mailto:jianbin@apache.org" target="_blank">jianbin@apache.org</a>> wrote:<br>
>>>><br>
>>>> <br>
>>>> I'm sorry, Robert—perhaps I didn't explain my example clearly enough.<br>
>>>> Here's the code in question:<br>
>>>><br>
>>>> ```java<br>
>>>> Executor executor2 = new ThreadPoolExecutor(<br>
>>>> 200,<br>
>>>> Integer.MAX_VALUE,<br>
>>>> 0L,<br>
>>>> java.util.concurrent.TimeUnit.SECONDS,<br>
>>>> new SynchronousQueue<>(),<br>
>>>> Thread.ofVirtual().name("test-threadpool-", 1).factory()<br>
>>>> );<br>
>>>> ```<br>
>>>><br>
>>>> In this example, the pooled virtual threads don't implement any<br>
>>>> backpressure mechanism; they simply maintain a core pool of 200 virtual<br>
>>>> threads. Given that the queue is a `SynchronousQueue` and the maximum pool<br>
>>>> size is set to `Integer.MAX_VALUE`, once the concurrent tasks exceed 200,<br>
>>>> its behavior becomes identical to that of non-pooled virtual threads.<br>
>>>><br>
>>>> From my perspective, this example demonstrates that the benefits of<br>
>>>> pooling virtual threads outweigh those of creating a new virtual thread for<br>
>>>> every single task. In IO-bound scenarios, the virtual threads are directly<br>
>>>> reused rather than being recreated each time, and the memory footprint of<br>
>>>> virtual threads is far smaller than that of platform threads (which are<br>
>>>> controlled by the `-Xss` flag). Additionally, with pooled virtual threads,<br>
>>>> the 8KB `byte[]` cache I mentioned earlier (stored in `ThreadLocal`) can<br>
>>>> also be reused, which further reduces overall memory usage—wouldn't you<br>
>>>> agree?<br>
>>>><br>
>>>> Best Regards.<br>
>>>> Jianbin Chen, github-id: funky-eyes<br>
>>>><br>
>>>> Robert Engels <<a href="mailto:rengels@ix.netcom.com" target="_blank">rengels@ix.netcom.com</a>> wrote on Fri, Jan 23, 2026 at 21:52:<br>
>>>><br>
>>>>> Because VT are so efficient to create, without any back pressure they<br>
>>>>> will all be created and running at essentially the same time (dramatically<br>
>>>>> raising the amount of memory in use) - versus with a pool of size N you<br>
>>>>> will have at most N running at once. In a REAL WORLD application there are<br>
>>>>> often external limiters (like number of tcp connections) that provide a<br>
>>>>> limit.<br>
>>>>><br>
>>>>> If your tasks are purely cpu bound you should probably be using a<br>
>>>>> capped thread pool of platform threads as it makes no sense to have more<br>
>>>>> threads than available cores.<br>
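Robert's point about CPU-bound work can be sketched as follows (a minimal illustration, not code from this thread; the toy workload and task count are arbitrary):<br>

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicLong;

public class CpuBoundPool {
    // Runs `tasks` CPU-bound jobs on a platform-thread pool capped at the
    // core count, and returns how many completed.
    static long runAll(int tasks) {
        int cores = Runtime.getRuntime().availableProcessors();
        AtomicLong done = new AtomicLong();
        try (ExecutorService cpuPool = Executors.newFixedThreadPool(cores)) {
            for (int i = 0; i < tasks; i++) {
                final int n = i;
                cpuPool.submit(() -> {
                    Integer.toBinaryString(n).hashCode(); // stand-in for real CPU work
                    done.incrementAndGet();
                });
            }
        } // close() waits for all submitted tasks to finish
        return done.get();
    }

    public static void main(String[] args) {
        System.out.println(runAll(100)); // prints 100
    }
}
```

For purely CPU-bound tasks, a fixed pool sized to the core count already saturates the CPUs; more threads (virtual or platform) only add scheduling overhead.<br>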
>>>>><br>
>>>>><br>
>>>>><br>
>>>>> On Jan 23, 2026, at 7:42 AM, Jianbin Chen <<a href="mailto:jianbin@apache.org" target="_blank">jianbin@apache.org</a>> wrote:<br>
>>>>><br>
>>>>> <br>
>>>>> The question is why I need to use a semaphore to control the number of<br>
>>>>> concurrently running tasks. In my particular example, the goal is simply to<br>
>>>>> keep the concurrency level the same across different thread pool<br>
>>>>> implementations so I can fairly compare which one completes all the tasks<br>
>>>>> faster. This isn't solely about memory consumption—purely from a<br>
>>>>> **performance** perspective (e.g., total throughput or wall-clock time to<br>
>>>>> finish the workload), the same number of concurrent tasks completes<br>
>>>>> noticeably faster when using pooled virtual threads.<br>
>>>>><br>
>>>>> My email probably didn't explain this clearly enough. In reality, I<br>
>>>>> have two main questions:<br>
>>>>><br>
>>>>> 1. When a third-party library uses `ThreadLocal` as a cache/pool<br>
>>>>> (e.g., to hold expensive reusable objects like connections, formatters, or<br>
>>>>> parsers), is switching to a **pooled virtual thread executor** the only<br>
>>>>> viable solution—assuming we cannot modify the third-party library code?<br>
>>>>><br>
>>>>> 2. When running the exact same number of concurrent tasks, pooled<br>
>>>>> virtual threads deliver better performance.<br>
>>>>><br>
>>>>> Both questions point toward the same conclusion: for an application<br>
>>>>> originally built around a traditional platform thread pool, after upgrading<br>
>>>>> to JDK 21/25, moving to a **pooled virtual threads** approach is generally<br>
>>>>> superior to simply using non-pooled (unbounded) virtual threads.<br>
>>>>><br>
>>>>> If any part of this reasoning or conclusion is mistaken, I would<br>
>>>>> really appreciate being corrected — thank you very much in advance for any<br>
>>>>> feedback or different experiences you can share!<br>
>>>>><br>
>>>>> Best Regards.<br>
>>>>> Jianbin Chen, github-id: funky-eyes<br>
>>>>><br>
>>>>> robert engels <<a href="mailto:robaho@me.com" target="_blank">robaho@me.com</a>> wrote on Fri, Jan 23, 2026 at 20:58:<br>
>>>>><br>
>>>>>> Exactly, this is your problem. In the thread-per-task model, all of the<br>
>>>>>> tasks will be running at once.<br>
>>>>>><br>
>>>>>> On Jan 23, 2026, at 6:49 AM, Jianbin Chen <<a href="mailto:jianbin@apache.org" target="_blank">jianbin@apache.org</a>> wrote:<br>
>>>>>><br>
>>>>>> <br>
>>>>>> Hi Robert,<br>
>>>>>><br>
>>>>>> Thank you, but I'm a bit confused. In the example above, I only set<br>
>>>>>> the core pool size to 200 virtual threads, but for the specific test case<br>
>>>>>> we’re talking about, the concurrency isn’t actually being limited by the<br>
>>>>>> pool size at all. Since the maximum thread count is Integer.MAX_VALUE and<br>
>>>>>> it’s using a SynchronousQueue, tasks are handed off immediately and a new<br>
>>>>>> thread gets created to run them right away anyway.<br>
>>>>>><br>
>>>>>> Best Regards.<br>
>>>>>> Jianbin Chen, github-id: funky-eyes<br>
>>>>>><br>
>>>>>> robert engels <<a href="mailto:robaho@me.com" target="_blank">robaho@me.com</a>> wrote on Fri, Jan 23, 2026 at 20:28:<br>
>>>>>><br>
>>>>>>> Try using a semaphore to limit the maximum number of tasks in<br>
>>>>>>> progress at any one time - that is what is causing your memory spike. Think<br>
>>>>>>> of it this way: since VT threads are so cheap to create, you are<br>
>>>>>>> essentially creating them all at once, making the working set size equal<br>
>>>>>>> to the maximum. So you have N * WSS, whereas in the other you have<br>
>>>>>>> POOLSIZE * WSS.<br>
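The semaphore pattern Robert describes can be sketched like this (the cap, task count, and sleep duration are illustrative assumptions, not values from the thread):<br>

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicInteger;

public class BoundedVirtualThreads {
    // Runs `tasks` jobs, each on a fresh virtual thread, but never more than
    // `cap` in flight at once. Returns the peak observed concurrency.
    static int run(int tasks, int cap) {
        Semaphore permits = new Semaphore(cap);
        AtomicInteger running = new AtomicInteger();
        AtomicInteger peak = new AtomicInteger();
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < tasks; i++) {
                permits.acquireUninterruptibly(); // back pressure: submission blocks at the cap
                executor.execute(() -> {
                    try {
                        peak.accumulateAndGet(running.incrementAndGet(), Math::max);
                        Thread.sleep(5); // simulate I/O wait
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    } finally {
                        running.decrementAndGet();
                        permits.release();
                    }
                });
            }
        } // close() waits for all tasks to complete
        return peak.get();
    }

    public static void main(String[] args) {
        System.out.println("peak in-flight: " + run(2_000, 200));
    }
}
```

Bounding in-flight tasks this way keeps the working set near POOLSIZE * WSS while still creating a fresh virtual thread per task, so ThreadLocal semantics are unchanged.<br>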
>>>>>>><br>
>>>>>>> On Jan 23, 2026, at 4:14 AM, Jianbin Chen <<a href="mailto:jianbin@apache.org" target="_blank">jianbin@apache.org</a>><br>
>>>>>>> wrote:<br>
>>>>>>><br>
>>>>>>> <br>
>>>>>>> Hi Alan,<br>
>>>>>>><br>
>>>>>>> Thanks for your reply and for mentioning JEP 444.<br>
>>>>>>> I’ve gone through the guidance in JEP 444 and have some<br>
>>>>>>> understanding of it — which is exactly why I’m feeling a bit puzzled in<br>
>>>>>>> practice and would really like to hear your thoughts.<br>
>>>>>>><br>
>>>>>>> Background — ThreadLocal example (Aerospike)<br>
>>>>>>> ```java<br>
>>>>>>> private static final ThreadLocal<byte[]> BufferThreadLocal = new<br>
>>>>>>> ThreadLocal<byte[]>() {<br>
>>>>>>> @Override<br>
>>>>>>> protected byte[] initialValue() {<br>
>>>>>>> return new byte[DefaultBufferSize];<br>
>>>>>>> }<br>
>>>>>>> };<br>
>>>>>>> ```<br>
>>>>>>> This Aerospike code allocates a default 8KB byte[] the first time each<br>
>>>>>>> thread uses the buffer and stores it in a ThreadLocal for per-thread caching.<br>
>>>>>>><br>
>>>>>>> My concern<br>
>>>>>>> - With a traditional platform-thread pool, those ThreadLocal byte[]<br>
>>>>>>> instances are effectively reused because threads are long-lived and pooled.<br>
>>>>>>> - If we switch to creating a brand-new virtual thread per task (no<br>
>>>>>>> pooling), each virtual thread gets its own fresh ThreadLocal byte[], which<br>
>>>>>>> leads to many short-lived 8KB allocations.<br>
>>>>>>> - That raises allocation rate and GC pressure (despite collectors<br>
>>>>>>> like ZGC), because ThreadLocal caches aren’t reused when threads are<br>
>>>>>>> ephemeral.<br>
>>>>>>><br>
>>>>>>> So my question is: for applications originally designed around<br>
>>>>>>> platform-thread pools, wouldn’t partially pooling virtual threads be<br>
>>>>>>> beneficial? For example, Tomcat’s default max threads is 200 — if I keep a<br>
>>>>>>> pool of 200 virtual threads, then when load exceeds that core size, a<br>
>>>>>>> SynchronousQueue will naturally cause new virtual threads to be created on<br>
>>>>>>> demand. This seems to preserve the behavior that ThreadLocal-based<br>
>>>>>>> libraries expect, without losing the ability to expand under spikes. Since<br>
>>>>>>> virtual threads are very lightweight, pooling a reasonable number (e.g.,<br>
>>>>>>> 200) seems to have negligible memory downside while retaining ThreadLocal<br>
>>>>>>> cache effectiveness.<br>
>>>>>>><br>
>>>>>>> Empirical test I ran<br>
>>>>>>> (I ran a microbenchmark comparing an unpooled per-task<br>
>>>>>>> virtual-thread executor and a ThreadPoolExecutor that keeps 200 core<br>
>>>>>>> virtual threads.)<br>
>>>>>>><br>
>>>>>>> ```java<br>
>>>>>>> public static void main(String[] args) throws InterruptedException {<br>
>>>>>>> Executor executor =<br>
>>>>>>> Executors.newThreadPerTaskExecutor(Thread.ofVirtual().name("test-",<br>
>>>>>>> 1).factory());<br>
>>>>>>> Executor executor2 = new ThreadPoolExecutor(<br>
>>>>>>> 200,<br>
>>>>>>> Integer.MAX_VALUE,<br>
>>>>>>> 0L,<br>
>>>>>>> java.util.concurrent.TimeUnit.SECONDS,<br>
>>>>>>> new SynchronousQueue<>(),<br>
>>>>>>> Thread.ofVirtual().name("test-threadpool-", 1).factory()<br>
>>>>>>> );<br>
>>>>>>><br>
>>>>>>> // Warm-up<br>
>>>>>>> for (int i = 0; i < 10100; i++) {<br>
>>>>>>> executor.execute(() -> {<br>
>>>>>>> // simulate I/O wait<br>
>>>>>>> try { Thread.sleep(100); } catch (InterruptedException<br>
>>>>>>> e) { throw new RuntimeException(e); }<br>
>>>>>>> });<br>
>>>>>>> executor2.execute(() -> {<br>
>>>>>>> // simulate I/O wait<br>
>>>>>>> try { Thread.sleep(100); } catch (InterruptedException<br>
>>>>>>> e) { throw new RuntimeException(e); }<br>
>>>>>>> });<br>
>>>>>>> }<br>
>>>>>>><br>
>>>>>>> // Ensure JIT + warm-up complete<br>
>>>>>>> Thread.sleep(5000);<br>
>>>>>>><br>
>>>>>>> long start = System.currentTimeMillis();<br>
>>>>>>> CountDownLatch countDownLatch = new CountDownLatch(50000);<br>
>>>>>>> for (int i = 0; i < 50000; i++) {<br>
>>>>>>> executor.execute(() -> {<br>
>>>>>>> try { Thread.sleep(100); countDownLatch.countDown(); }<br>
>>>>>>> catch (InterruptedException e) { throw new RuntimeException(e); }<br>
>>>>>>> });<br>
>>>>>>> }<br>
>>>>>>> countDownLatch.await();<br>
>>>>>>> System.out.println("thread time: " + (System.currentTimeMillis()<br>
>>>>>>> - start) + " ms");<br>
>>>>>>><br>
>>>>>>> start = System.currentTimeMillis();<br>
>>>>>>> CountDownLatch countDownLatch2 = new CountDownLatch(50000);<br>
>>>>>>> for (int i = 0; i < 50000; i++) {<br>
>>>>>>> executor2.execute(() -> {<br>
>>>>>>> try { Thread.sleep(100); countDownLatch2.countDown(); }<br>
>>>>>>> catch (InterruptedException e) { throw new RuntimeException(e); }<br>
>>>>>>> });<br>
>>>>>>> }<br>
>>>>>>> countDownLatch2.await();<br>
>>>>>>> System.out.println("thread pool time: " +<br>
>>>>>>> (System.currentTimeMillis() - start) + " ms");<br>
>>>>>>> }<br>
>>>>>>> ```<br>
>>>>>>><br>
>>>>>>> Result summary<br>
>>>>>>> - In my runs, the pooled virtual-thread executor (executor2)<br>
>>>>>>> performed better than the unpooled per-task virtual-thread executor.<br>
>>>>>>> - Even when I increased load by 10x or 100x, the pooled<br>
>>>>>>> virtual-thread executor still showed better performance.<br>
>>>>>>> - In realistic workloads, it seems pooling some virtual threads<br>
>>>>>>> reduces allocation/GC overhead and improves throughput compared to strictly<br>
>>>>>>> unpooled virtual threads.<br>
>>>>>>><br>
>>>>>>> Final thought / request for feedback<br>
>>>>>>> - From my perspective, for systems originally tuned for<br>
>>>>>>> platform-thread pools, partially pooling virtual threads seems to have no<br>
>>>>>>> obvious downside and can restore ThreadLocal cache effectiveness used by<br>
>>>>>>> many third-party libraries.<br>
>>>>>>> - If I’ve misunderstood JEP 444 recommendations, virtual-thread<br>
>>>>>>> semantics, or ThreadLocal behavior, please point out what I’m missing. I’d<br>
>>>>>>> appreciate your guidance.<br>
>>>>>>><br>
>>>>>>> Best Regards.<br>
>>>>>>> Jianbin Chen, github-id: funky-eyes<br>
>>>>>>><br>
>>>>>>> Alan Bateman <<a href="mailto:alan.bateman@oracle.com" target="_blank">alan.bateman@oracle.com</a>> wrote on Fri, Jan 23, 2026 at 17:27:<br>
>>>>>>><br>
>>>>>>>> On 23/01/2026 07:30, Jianbin Chen wrote:<br>
>>>>>>>> > :<br>
>>>>>>>> ><br>
>>>>>>>> > So my question is:<br>
>>>>>>>> ><br>
>>>>>>>> > **In scenarios where third-party libraries heavily rely on<br>
>>>>>>>> ThreadLocal<br>
>>>>>>>> > for caching / buffering (and we cannot change those libraries to<br>
>>>>>>>> use<br>
>>>>>>>> > object pools instead), is explicitly pooling virtual threads<br>
>>>>>>>> (using a<br>
>>>>>>>> > ThreadPoolExecutor with virtual thread factory) considered a<br>
>>>>>>>> > recommended / acceptable workaround?**<br>
>>>>>>>> ><br>
>>>>>>>> > Or are there better / more idiomatic ways to handle this kind of<br>
>>>>>>>> > compatibility issue with legacy ThreadLocal-based libraries when<br>
>>>>>>>> > migrating to virtual threads?<br>
>>>>>>>> ><br>
>>>>>>>> > I have already opened a related discussion in the Dubbo project<br>
>>>>>>>> (since<br>
>>>>>>>> > Dubbo is one of the libraries affected in our stack):<br>
>>>>>>>> ><br>
>>>>>>>> > <a href="https://github.com/apache/dubbo/issues/16042" rel="noreferrer" target="_blank">https://github.com/apache/dubbo/issues/16042</a><br>
>>>>>>>> ><br>
>>>>>>>> > Would love to hear your thoughts — especially from people who<br>
>>>>>>>> have<br>
>>>>>>>> > experience running large-scale virtual-thread-based services with<br>
>>>>>>>> > mixed third-party dependencies.<br>
>>>>>>>> ><br>
>>>>>>>><br>
>>>>>>>> The guidance that we put in JEP 444 [1] is to not pool virtual<br>
>>>>>>>> threads<br>
>>>>>>>> and to avoid caching costly resources in thread locals. Virtual<br>
>>>>>>>> threads<br>
>>>>>>>> support thread locals of course, but that is not useful when some<br>
>>>>>>>> library<br>
>>>>>>>> is looking to share a costly resource between tasks that run on the<br>
>>>>>>>> same<br>
>>>>>>>> thread in a thread pool.<br>
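When a library can be changed, that guidance usually means sharing the costly resource through an explicit pool rather than a per-thread cache. A minimal sketch (BufferPool is hypothetical, not Aerospike's actual API):<br>

```java
import java.util.concurrent.ConcurrentLinkedQueue;

// Sketch: share costly buffers through an explicit pool instead of a
// ThreadLocal, so short-lived virtual threads can still reuse them.
final class BufferPool {
    private final int bufferSize;
    private final ConcurrentLinkedQueue<byte[]> free = new ConcurrentLinkedQueue<>();

    BufferPool(int bufferSize) {
        this.bufferSize = bufferSize;
    }

    byte[] acquire() {
        byte[] b = free.poll();
        return (b != null) ? b : new byte[bufferSize]; // allocate only on a miss
    }

    void release(byte[] b) {
        free.offer(b); // unbounded here; a real pool would cap what it retains
    }
}
```

Unlike a ThreadLocal cache, buffers here outlive the ephemeral virtual threads that borrow them, so the allocation rate stays low regardless of how threads are created.<br>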
>>>>>>>><br>
>>>>>>>> I don't know anything about Aerospike but working with the<br>
>>>>>>>> maintainers<br>
>>>>>>>> of that library to re-work its buffer management seems like the<br>
>>>>>>>> right<br>
>>>>>>>> course of action here. Your mail says "byte buffers". If this is<br>
>>>>>>>> ByteBuffer it might be that they are caching direct buffers as they<br>
>>>>>>>> are<br>
>>>>>>>> expensive to create (and managed by the GC). Maybe they could look<br>
>>>>>>>> at<br>
>>>>>>>> using MemorySegment (it's easy to get a ByteBuffer view of a memory<br>
>>>>>>>> segment) and allocate from an arena that better matches the<br>
>>>>>>>> lifecycle.<br>
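The MemorySegment direction Alan mentions could look roughly like this (a sketch assuming the FFM API, which is a preview feature in JDK 21 and final in JDK 22; the 8 KB size mirrors the library's buffer):<br>

```java
import java.lang.foreign.Arena;
import java.lang.foreign.MemorySegment;
import java.nio.ByteBuffer;

public class ArenaBufferSketch {
    // Allocates a task-scoped 8 KB buffer from a confined arena and works
    // with it through a ByteBuffer view; the memory is freed deterministically
    // when the arena closes, matching the task's lifecycle instead of the thread's.
    static int demo() {
        try (Arena arena = Arena.ofConfined()) {
            MemorySegment segment = arena.allocate(8 * 1024);
            ByteBuffer buffer = segment.asByteBuffer(); // ByteBuffer view of the segment
            buffer.putInt(0, 42);
            return buffer.getInt(0);
        } // memory released here
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints 42
    }
}
```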
>>>>>>>><br>
>>>>>>>> Hopefully others will share their experiences with migration as it<br>
>>>>>>>> is<br>
>>>>>>>> indeed challenging to migrate code developed for thread pools to<br>
>>>>>>>> work<br>
>>>>>>>> efficiently on virtual threads where there is a 1-1 relationship<br>
>>>>>>>> between<br>
>>>>>>>> the task to execute and the thread.<br>
>>>>>>>><br>
>>>>>>>> -Alan<br>
>>>>>>>><br>
>>>>>>>> [1] <a href="https://openjdk.org/jeps/444#Thread-local-variables" rel="noreferrer" target="_blank">https://openjdk.org/jeps/444#Thread-local-variables</a><br>
>>>>>>>><br>
>>>>>>><br>
</blockquote></div>
</div></blockquote></div><br></body></html>