From duke at openjdk.org Thu Jul 7 11:25:58 2022 From: duke at openjdk.org (duke) Date: Thu, 7 Jul 2022 11:25:58 GMT Subject: git: openjdk/loom: master: 68 new changesets Message-ID: <123ad741-8c28-4d0b-a5cc-5e98b727759f@openjdk.org> Changeset: 910053b7 Author: KIRIYAMA Takuya Committer: David Holmes Date: 2022-06-28 23:37:23 +0000 URL: https://git.openjdk.org/loom/commit/910053b74ec5249b3ecae33b9b0b0a68729ef418 8280235: Deprecated flag FlightRecorder missing from VMDeprecatedOptions test Reviewed-by: dholmes, mgronlun ! test/hotspot/jtreg/runtime/CommandLine/VMDeprecatedOptions.java Changeset: 779b4e1d Author: Yuta Sato Committer: Yasumasa Suenaga Date: 2022-06-29 01:17:28 +0000 URL: https://git.openjdk.org/loom/commit/779b4e1d1959bc15a27492b7e2b951678e39cca8 8287001: Add warning message when fail to load hsdis libraries Reviewed-by: kvn, ysuenaga ! src/hotspot/share/compiler/disassembler.cpp Changeset: b96ba198 Author: Thomas Stuefe Date: 2022-06-29 04:12:46 +0000 URL: https://git.openjdk.org/loom/commit/b96ba19807845739b36274efb168dd048db819a3 8289182: NMT: MemTracker::baseline should return void Reviewed-by: dholmes, zgu ! src/hotspot/share/services/memBaseline.cpp ! src/hotspot/share/services/memBaseline.hpp ! src/hotspot/share/services/memTracker.cpp ! src/hotspot/share/services/nmtDCmd.cpp ! test/hotspot/jtreg/runtime/NMT/JcmdBaselineDetail.java ! test/hotspot/jtreg/runtime/NMT/JcmdDetailDiff.java ! test/hotspot/jtreg/runtime/NMT/JcmdSummaryDiff.java ! test/hotspot/jtreg/runtime/NMT/MallocSiteTypeChange.java Changeset: 108cd695 Author: Quan Anh Mai Committer: Jatin Bhateja Date: 2022-06-29 10:34:05 +0000 URL: https://git.openjdk.org/loom/commit/108cd695167f0eed7b778c29b55914998f15b90d 8283726: x86_64 intrinsics for compareUnsigned method in Integer and Long Reviewed-by: kvn, jbhateja ! src/hotspot/cpu/x86/x86_64.ad ! src/hotspot/share/classfile/vmIntrinsics.hpp ! src/hotspot/share/opto/c2compiler.cpp ! src/hotspot/share/opto/classes.hpp ! 
src/hotspot/share/opto/library_call.cpp ! src/hotspot/share/opto/library_call.hpp ! src/hotspot/share/opto/subnode.cpp ! src/hotspot/share/opto/subnode.hpp ! src/hotspot/share/runtime/vmStructs.cpp ! src/java.base/share/classes/java/lang/Integer.java ! src/java.base/share/classes/java/lang/Long.java + test/hotspot/jtreg/compiler/intrinsics/TestCompareUnsigned.java ! test/hotspot/jtreg/compiler/lib/ir_framework/IRNode.java ! test/micro/org/openjdk/bench/java/lang/Integers.java ! test/micro/org/openjdk/bench/java/lang/Longs.java Changeset: 167ce4da Author: Yasumasa Suenaga Date: 2022-06-29 11:43:45 +0000 URL: https://git.openjdk.org/loom/commit/167ce4dae248024ffda0439c3ccc6b12404eadaf 8289421: No-PCH build for Minimal VM was broken by JDK-8287001 Reviewed-by: mbaesken, jiefu, stuefe ! src/hotspot/share/compiler/disassembler.cpp Changeset: 2961b7ee Author: Albert Mingkun Yang Date: 2022-06-29 13:15:19 +0000 URL: https://git.openjdk.org/loom/commit/2961b7eede7205f8d67427bdf020de7966900424 8285364: Remove REF_ enum for java.lang.ref.Reference Co-authored-by: Stefan Karlsson Reviewed-by: kbarrett, coleenp, stefank ! src/hotspot/share/classfile/classFileParser.cpp ! src/hotspot/share/classfile/classFileParser.hpp ! src/hotspot/share/classfile/vmClasses.cpp ! src/hotspot/share/gc/shared/referenceProcessor.cpp ! src/hotspot/share/gc/shared/referenceProcessor.hpp ! src/hotspot/share/gc/shared/referenceProcessorPhaseTimes.cpp ! src/hotspot/share/gc/shared/referenceProcessorPhaseTimes.hpp ! src/hotspot/share/jfr/recorder/checkpoint/types/jfrType.cpp ! src/hotspot/share/memory/referenceType.hpp ! src/hotspot/share/oops/instanceKlass.cpp ! src/hotspot/share/oops/instanceKlass.hpp ! src/hotspot/share/oops/instanceRefKlass.cpp ! src/hotspot/share/oops/instanceRefKlass.hpp ! 
src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/memory/ReferenceType.java Changeset: 0709a6a1 Author: liach Committer: Jaikiran Pai Date: 2022-06-29 14:22:48 +0000 URL: https://git.openjdk.org/loom/commit/0709a6a1fb6bfc8aecde7eb827d1628e181e3253 8284942: Proxy building can just iterate superinterfaces once Reviewed-by: mchung ! src/java.base/share/classes/java/lang/reflect/Proxy.java Changeset: ba670ecb Author: Doug Simon Date: 2022-06-29 16:14:55 +0000 URL: https://git.openjdk.org/loom/commit/ba670ecbb9efdbcaa783d4a933499ca191fb58c5 8289094: [JVMCI] reduce JNI overhead and other VM rounds trips in JVMCI Reviewed-by: kvn, dlong ! src/hotspot/cpu/aarch64/jvmciCodeInstaller_aarch64.cpp ! src/hotspot/cpu/x86/jvmciCodeInstaller_x86.cpp ! src/hotspot/share/code/debugInfo.hpp ! src/hotspot/share/compiler/compileBroker.cpp ! src/hotspot/share/jvmci/jvmciCodeInstaller.cpp ! src/hotspot/share/jvmci/jvmciCodeInstaller.hpp ! src/hotspot/share/jvmci/jvmciCompiler.cpp ! src/hotspot/share/jvmci/jvmciCompiler.hpp ! src/hotspot/share/jvmci/jvmciCompilerToVM.cpp ! src/hotspot/share/jvmci/jvmciEnv.cpp ! src/hotspot/share/jvmci/jvmciEnv.hpp ! src/hotspot/share/jvmci/jvmciJavaClasses.hpp ! src/hotspot/share/jvmci/jvmciRuntime.cpp ! src/hotspot/share/jvmci/vmStructs_jvmci.cpp ! src/hotspot/share/jvmci/vmSymbols_jvmci.hpp ! src/hotspot/share/runtime/timer.cpp ! src/hotspot/share/runtime/timer.hpp ! src/hotspot/share/utilities/ostream.cpp ! src/hotspot/share/utilities/ostream.hpp ! src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.code/src/jdk/vm/ci/code/BytecodeFrame.java ! src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.code/src/jdk/vm/ci/code/RegisterSaveLayout.java ! src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.code/src/jdk/vm/ci/code/StackLockValue.java ! src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.code/src/jdk/vm/ci/code/site/Infopoint.java ! src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/CompilerToVM.java ! 
src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/HotSpotCodeCacheProvider.java ! src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/HotSpotCompiledCode.java + src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/HotSpotCompiledCodeStream.java ! src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/HotSpotCompiledNmethod.java ! src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/HotSpotConstantPool.java - src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/HotSpotConstantPoolObject.java ! src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/HotSpotConstantReflectionProvider.java ! src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/HotSpotJDKReflection.java ! src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/HotSpotJVMCIRuntime.java ! src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/HotSpotMemoryAccessProviderImpl.java ! src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/HotSpotMethodData.java ! src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/HotSpotMethodDataAccessor.java ! src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/HotSpotObjectConstantImpl.java ! src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/HotSpotReferenceMap.java ! src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/HotSpotResolvedJavaFieldImpl.java ! src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/HotSpotResolvedJavaMethodImpl.java ! src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/HotSpotResolvedJavaType.java ! src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/HotSpotResolvedObjectTypeImpl.java ! 
src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/HotSpotResolvedPrimitiveType.java - src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/HotSpotSentinelConstant.java ! src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/HotSpotSpeculationEncoding.java ! src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/JFR.java ! src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/MetaspaceObject.java ! src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.meta/src/jdk/vm/ci/meta/Assumptions.java ! src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.meta/src/jdk/vm/ci/meta/JavaConstant.java ! src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.meta/src/jdk/vm/ci/meta/PrimitiveConstant.java ! test/hotspot/jtreg/compiler/jvmci/common/patches/jdk.internal.vm.ci/jdk/vm/ci/hotspot/CompilerToVMHelper.java ! test/hotspot/jtreg/compiler/jvmci/errors/TestInvalidCompilationResult.java ! test/hotspot/jtreg/compiler/jvmci/errors/TestInvalidOopMap.java Changeset: b6bd190d Author: Zdenek Zambersky Committer: Valerie Peng Date: 2022-06-29 17:20:03 +0000 URL: https://git.openjdk.org/loom/commit/b6bd190d8d10fdb177f9fb100c9f44c9f57a3cb5 8288985: P11TlsKeyMaterialGenerator should work with ChaCha20-Poly1305 Reviewed-by: valeriep ! src/jdk.crypto.cryptoki/share/classes/sun/security/pkcs11/P11SecretKeyFactory.java + test/jdk/sun/security/pkcs11/tls/TestKeyMaterialChaCha20.java Changeset: 15efb2bd Author: Harshitha Onkar Committer: Alexey Ivanov Date: 2022-06-29 18:36:38 +0000 URL: https://git.openjdk.org/loom/commit/15efb2bdeb73e4e255dcc864be1a83450a2beaa8 8289238: Refactoring changes to PassFailJFrame Test Framework Reviewed-by: azvegint, aivanov ! test/jdk/java/awt/print/PrinterJob/ImagePrinting/ClippedImages.java ! test/jdk/java/awt/print/PrinterJob/PrintGlyphVectorTest.java ! test/jdk/java/awt/print/PrinterJob/PrintLatinCJKTest.java ! 
test/jdk/java/awt/regtesthelpers/PassFailJFrame.java ! test/jdk/javax/swing/JRadioButton/bug4380543.java ! test/jdk/javax/swing/JTabbedPane/4209065/bug4209065.java ! test/jdk/javax/swing/JTable/PrintAllPagesTest.java ! test/jdk/javax/swing/text/html/HtmlScriptTagParserTest.java Changeset: dbc6e110 Author: Joe Darcy Date: 2022-06-29 00:14:45 +0000 URL: https://git.openjdk.org/loom/commit/dbc6e110100aa6aaa8493158312030b84152b33a 8289399: Update SourceVersion to use snippets Reviewed-by: jjg, iris ! src/java.compiler/share/classes/javax/lang/model/SourceVersion.java Changeset: 57089749 Author: Raffaello Giulietti Committer: Roger Riggs Date: 2022-06-29 14:56:28 +0000 URL: https://git.openjdk.org/loom/commit/570897498baeab8d10f7d9525328a6d85d8c73ec 8288596: Random:from() adapter does not delegate to supplied generator in all cases Reviewed-by: darcy ! src/java.base/share/classes/java/util/Random.java ! test/jdk/java/util/Random/RandomTest.java Changeset: cf715449 Author: Naoto Sato Date: 2022-06-29 15:47:26 +0000 URL: https://git.openjdk.org/loom/commit/cf7154498fffba202b74b41a074f25c657b2e591 8289252: Recommend Locale.of() method instead of the constructor Reviewed-by: joehw, rriggs ! src/java.base/share/classes/java/util/Locale.java Changeset: 048bffad Author: Jesper Wilhelmsson Date: 2022-06-29 23:32:37 +0000 URL: https://git.openjdk.org/loom/commit/048bffad79b302890059ffc1bc559bfc601de92c Merge ! src/java.compiler/share/classes/javax/lang/model/SourceVersion.java ! src/java.compiler/share/classes/javax/lang/model/SourceVersion.java Changeset: dddd4e7c Author: Jaikiran Pai Date: 2022-06-30 01:43:11 +0000 URL: https://git.openjdk.org/loom/commit/dddd4e7c81fccd82b0fd37ea4583ce1a8e175919 8289291: HttpServer sets incorrect value for "max" parameter in Keep-Alive header value Reviewed-by: michaelm, dfuchs ! 
src/jdk.httpserver/share/classes/sun/net/httpserver/ServerImpl.java + test/jdk/com/sun/net/httpserver/Http10KeepAliveMaxParamTest.java Changeset: 31e50f2c Author: Xin Liu Date: 2022-06-30 03:59:42 +0000 URL: https://git.openjdk.org/loom/commit/31e50f2c7642b046dc9ea1de8ec245dcbc4e1926 8286104: use aggressive liveness for unstable_if traps Reviewed-by: kvn, thartmann ! src/hotspot/share/compiler/methodLiveness.hpp ! src/hotspot/share/opto/c2_globals.hpp ! src/hotspot/share/opto/callnode.cpp ! src/hotspot/share/opto/callnode.hpp ! src/hotspot/share/opto/compile.cpp ! src/hotspot/share/opto/compile.hpp ! src/hotspot/share/opto/graphKit.cpp ! src/hotspot/share/opto/graphKit.hpp ! src/hotspot/share/opto/ifnode.cpp ! src/hotspot/share/opto/node.cpp ! src/hotspot/share/opto/parse.hpp ! src/hotspot/share/opto/parse2.cpp + test/hotspot/jtreg/compiler/c2/TestFoldCompares2.java + test/hotspot/jtreg/compiler/c2/irTests/TestOptimizeUnstableIf.java Changeset: da6d1fc0 Author: Thomas Stuefe Date: 2022-06-30 06:19:25 +0000 URL: https://git.openjdk.org/loom/commit/da6d1fc0e0aeb1fdb504aced4b0dba0290ec240f 8289477: Memory corruption with CPU_ALLOC, CPU_FREE on muslc Reviewed-by: dholmes, clanger ! src/hotspot/os/linux/os_linux.cpp Changeset: 28c5e483 Author: Tobias Holenstein Date: 2022-06-30 07:14:29 +0000 URL: https://git.openjdk.org/loom/commit/28c5e483a80e0291bc784488ea15545dbecb257d 8287094: IGV: show node input numbers in edge tooltips Reviewed-by: chagedorn, thartmann ! src/utils/IdealGraphVisualizer/Graph/src/main/java/com/sun/hotspot/igv/graph/FigureConnection.java Changeset: 7b5bd251 Author: Ryan Ernst Committer: Chris Hegarty Date: 2022-06-30 08:28:45 +0000 URL: https://git.openjdk.org/loom/commit/7b5bd251efb7ad541e2eb9144121e414d17427fc 8286397: Address possibly lossy conversions in jdk.hotspot.agent Reviewed-by: cjplummer, chegar ! src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/oops/ObjectHeap.java ! 
src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/ui/classbrowser/HTMLGenerator.java ! src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/utilities/HeapHprofBinWriter.java Changeset: 1305fb5c Author: Xiaohong Gong Date: 2022-06-30 08:53:27 +0000 URL: https://git.openjdk.org/loom/commit/1305fb5ca8e4ca6aa082293e4444fb7de1b1652c 8287984: AArch64: [vector] Make all bits set vector sharable for match rules Reviewed-by: kvn, ngasson ! src/hotspot/cpu/aarch64/aarch64.ad ! src/hotspot/share/opto/vectornode.cpp + test/hotspot/jtreg/compiler/vectorapi/AllBitsSetVectorMatchRuleTest.java Changeset: c3addbb1 Author: rmartinc Committer: Aleksei Efimov Date: 2022-06-30 09:17:57 +0000 URL: https://git.openjdk.org/loom/commit/c3addbb1c01483e10189cc46d8f2378e5b56dcee 8288895: LdapContext doesn't honor set referrals limit Reviewed-by: dfuchs, aefimov ! src/java.naming/share/classes/com/sun/jndi/ldap/AbstractLdapNamingEnumeration.java + test/jdk/com/sun/jndi/ldap/ReferralLimitSearchTest.java Changeset: feb223aa Author: Prasanta Sadhukhan Date: 2022-06-30 11:16:07 +0000 URL: https://git.openjdk.org/loom/commit/feb223aacfd89d598a27b27c4b8be4601cc5eaff 8288707: javax/swing/JToolBar/4529206/bug4529206.java: setFloating does not work correctly Reviewed-by: tr, serb ! test/jdk/javax/swing/JToolBar/4529206/bug4529206.java Changeset: 00d06d4a Author: Kevin Walls Date: 2022-06-30 20:18:52 +0000 URL: https://git.openjdk.org/loom/commit/00d06d4a82c5cbc8cc5fde97caa8cb56279c441a 8289440: Remove vmTestbase/nsk/monitoring/MemoryPoolMBean/isCollectionUsageThresholdExceeded/isexceeded003 from ProblemList.txt Reviewed-by: amenkov, lmesnik ! test/hotspot/jtreg/ProblemList.txt ! 
test/hotspot/jtreg/vmTestbase/nsk/monitoring/MemoryPoolMBean/isCollectionUsageThresholdExceeded/isexceeded001.java Changeset: c20b3aa9 Author: Alan Bateman Date: 2022-06-30 08:49:32 +0000 URL: https://git.openjdk.org/loom/commit/c20b3aa9c5ada4c87b3421fbc3290f4d6a4706ac 8289278: Suspend/ResumeAllVirtualThreads need both can_suspend and can_support_virtual_threads Reviewed-by: sspitsyn, dcubed, dholmes, iris ! src/hotspot/share/prims/jvmti.xml ! src/hotspot/share/prims/jvmti.xsl Changeset: 918068a1 Author: Jesper Wilhelmsson Date: 2022-07-01 00:47:56 +0000 URL: https://git.openjdk.org/loom/commit/918068a115efee7d439084b6d743cab5193bd943 Merge Changeset: 124c63c1 Author: Xiaohong Gong Date: 2022-07-01 01:19:18 +0000 URL: https://git.openjdk.org/loom/commit/124c63c17c897404e3c5c3615d6727303e4f3d06 8288294: [vector] Add Identity/Ideal transformations for vector logic operations Reviewed-by: kvn, jbhateja ! src/hotspot/share/opto/vectornode.cpp ! src/hotspot/share/opto/vectornode.hpp ! test/hotspot/jtreg/compiler/lib/ir_framework/IRNode.java + test/hotspot/jtreg/compiler/vectorapi/VectorLogicalOpIdentityTest.java Changeset: d260a4e7 Author: Richard Reingruber Date: 2022-07-01 06:12:52 +0000 URL: https://git.openjdk.org/loom/commit/d260a4e794681c6f4be4767350702754cfc2035c 8289434: x86_64: Improve comment on gen_continuation_enter() Reviewed-by: kvn ! 
src/hotspot/cpu/x86/sharedRuntime_x86_64.cpp Changeset: f190f4e6 Author: Harshitha Onkar Committer: Alexander Zvegintsev Date: 2022-07-01 09:07:34 +0000 URL: https://git.openjdk.org/loom/commit/f190f4e6389a0105b0701ec7ea201fab9dda0a48 8288444: Remove the workaround for frame.pack() in ModalDialogTest Reviewed-by: azvegint + test/jdk/java/awt/Dialog/ModalDialogTest/ModalDialogTest.java Changeset: b9b900a6 Author: Tobias Holenstein Date: 2022-07-01 13:34:38 +0000 URL: https://git.openjdk.org/loom/commit/b9b900a61ca914c7931d69bd4a8aeaa948be1d64 8277060: EXCEPTION_INT_DIVIDE_BY_ZERO in TypeAryPtr::dump2 with -XX:+TracePhaseCCP Reviewed-by: kvn, thartmann, chagedorn, dlong ! src/hotspot/share/opto/type.cpp ! src/hotspot/share/utilities/globalDefinitions.cpp + test/hotspot/jtreg/compiler/debug/TestTracePhaseCCP.java Changeset: a8fe2d97 Author: Thomas Stuefe Date: 2022-07-01 13:43:45 +0000 URL: https://git.openjdk.org/loom/commit/a8fe2d97a2ea1d3ce70d6095740c4ac7ec113761 8289512: Fix GCC 12 warnings for adlc output_c.cpp Reviewed-by: kvn, lucy ! src/hotspot/share/adlc/output_c.cpp Changeset: 09b4032f Author: Harold Seigel Date: 2022-07-01 14:31:30 +0000 URL: https://git.openjdk.org/loom/commit/09b4032f8b07335729e71b16b8f735514f3aebce 8289534: Change 'uncomplicated' hotspot runtime options Reviewed-by: coleenp, dholmes ! src/hotspot/share/cds/filemap.cpp ! src/hotspot/share/cds/metaspaceShared.cpp ! src/hotspot/share/jvmci/jvmciCompilerToVMInit.cpp ! src/hotspot/share/runtime/flags/jvmFlagConstraintsRuntime.cpp ! src/hotspot/share/runtime/flags/jvmFlagConstraintsRuntime.hpp ! src/hotspot/share/runtime/globals.hpp ! src/hotspot/share/runtime/perfMemory.cpp ! src/hotspot/share/utilities/vmError.cpp ! src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/runtime/VM.java ! 
test/jdk/java/lang/instrument/GetObjectSizeIntrinsicsTest.java Changeset: c43bdf71 Author: Calvin Cheung Date: 2022-07-01 16:11:17 +0000 URL: https://git.openjdk.org/loom/commit/c43bdf716596053ebe473c3b3bd5cf89482b9b01 8289257: Some custom loader tests failed due to symbol refcount not decremented Reviewed-by: iklam, coleenp ! test/hotspot/jtreg/ProblemList-zgc.txt ! test/hotspot/jtreg/runtime/cds/appcds/customLoader/test-classes/HelloUnload.java Changeset: e291a67e Author: Brian Burkhalter Date: 2022-07-01 19:13:49 +0000 URL: https://git.openjdk.org/loom/commit/e291a67e96970d80a9915f8a23afffed6e0b8ded 8289584: (fs) Print size values in java/nio/file/FileStore/Basic.java when they differ by > 1GiB Reviewed-by: alanb ! test/jdk/java/nio/file/FileStore/Basic.java Changeset: 2dd00f58 Author: Kevin Driver Committer: Weijun Wang Date: 2022-07-01 21:28:44 +0000 URL: https://git.openjdk.org/loom/commit/2dd00f580c1c5999a4905ade09bc50a5cb37ca57 8170762: Document that ISO10126Padding pads with random bytes Reviewed-by: weijun ! src/java.base/share/classes/com/sun/crypto/provider/ISO10126Padding.java Changeset: 44e8c462 Author: Kevin Driver Committer: Weijun Wang Date: 2022-07-01 22:01:55 +0000 URL: https://git.openjdk.org/loom/commit/44e8c462b459a7db530dbc23c5ba923439c419b4 8289603: Code change for JDK-8170762 breaks all build Reviewed-by: weijun ! src/java.base/share/classes/com/sun/crypto/provider/ISO10126Padding.java Changeset: cdf69792 Author: Ioi Lam Date: 2022-07-02 14:45:10 +0000 URL: https://git.openjdk.org/loom/commit/cdf697925953f62e17a7916ba611d7e789f09edf 8289230: Move PlatformXXX class declarations out of os_xxx.hpp Reviewed-by: coleenp, ccheung ! src/hotspot/os/linux/decoder_linux.cpp + src/hotspot/os/posix/mutex_posix.hpp ! src/hotspot/os/posix/os_posix.cpp ! src/hotspot/os/posix/os_posix.hpp ! src/hotspot/os/posix/os_posix.inline.hpp + src/hotspot/os/posix/park_posix.hpp ! 
src/hotspot/os/posix/signals_posix.cpp + src/hotspot/os/posix/threadCrashProtection_posix.cpp + src/hotspot/os/posix/threadCrashProtection_posix.hpp + src/hotspot/os/windows/mutex_windows.hpp ! src/hotspot/os/windows/os_windows.cpp ! src/hotspot/os/windows/os_windows.hpp ! src/hotspot/os/windows/os_windows.inline.hpp + src/hotspot/os/windows/park_windows.hpp + src/hotspot/os/windows/threadCrashProtection_windows.cpp + src/hotspot/os/windows/threadCrashProtection_windows.hpp ! src/hotspot/share/gc/shared/gcLogPrecious.cpp ! src/hotspot/share/gc/shenandoah/shenandoahLock.hpp ! src/hotspot/share/gc/z/zLock.hpp ! src/hotspot/share/jfr/periodic/sampling/jfrThreadSampler.cpp ! src/hotspot/share/logging/logAsyncWriter.hpp ! src/hotspot/share/memory/metaspace/metachunk.cpp ! src/hotspot/share/memory/metaspace/rootChunkArea.cpp ! src/hotspot/share/memory/metaspace/testHelpers.cpp ! src/hotspot/share/prims/jvm.cpp ! src/hotspot/share/prims/jvmtiRawMonitor.hpp ! src/hotspot/share/runtime/mutex.cpp ! src/hotspot/share/runtime/mutex.hpp ! src/hotspot/share/runtime/objectMonitor.hpp ! src/hotspot/share/runtime/os.cpp ! src/hotspot/share/runtime/os.hpp ! src/hotspot/share/runtime/osThread.hpp ! src/hotspot/share/runtime/park.hpp ! src/hotspot/share/runtime/semaphore.hpp ! src/hotspot/share/runtime/synchronizer.cpp + src/hotspot/share/runtime/threadCrashProtection.hpp Changeset: dee5121b Author: Andrey Turbanov Date: 2022-07-02 15:24:23 +0000 URL: https://git.openjdk.org/loom/commit/dee5121bd4b079abb28337395be2d5dd8bbf2f11 8289385: Cleanup redundant synchronization in Http2ClientImpl Reviewed-by: jpai, dfuchs ! src/java.net.http/share/classes/jdk/internal/net/http/Http2ClientImpl.java Changeset: 95497772 Author: Tobias Hartmann Date: 2022-07-01 05:23:57 +0000 URL: https://git.openjdk.org/loom/commit/95497772e7207b5752e6ecace4a6686df2b45227 8284358: Unreachable loop is not removed from C2 IR, leading to a broken graph Co-authored-by: Christian Hagedorn Reviewed-by: kvn, chagedorn ! 
src/hotspot/share/opto/cfgnode.cpp + test/hotspot/jtreg/compiler/c2/TestDeadDataLoop.java Changeset: 604ea90d Author: Naoto Sato Date: 2022-07-01 16:07:23 +0000 URL: https://git.openjdk.org/loom/commit/604ea90d55ac8354fd7287490ef59b8e3ce020d1 8289549: ISO 4217 Amendment 172 Update Reviewed-by: iris ! src/java.base/share/data/currency/CurrencyData.properties ! test/jdk/java/util/Currency/tablea1.txt Changeset: 20124ac7 Author: Daniel D. Daugherty Date: 2022-07-01 16:21:31 +0000 URL: https://git.openjdk.org/loom/commit/20124ac755acbe801d51a26dc5176239d1256279 8289585: ProblemList sun/tools/jhsdb/JStackStressTest.java on linux-aarch64 Reviewed-by: bpb, kevinw ! test/jdk/ProblemList.txt Changeset: 8e01ffb3 Author: Maurizio Cimadamore Date: 2022-07-01 21:46:07 +0000 URL: https://git.openjdk.org/loom/commit/8e01ffb3a7914a67a66ce284029f19cdf845b626 8289570: SegmentAllocator:allocateUtf8String(String str) default behavior mismatch to spec Reviewed-by: alanb, psandoz ! src/java.base/share/classes/jdk/internal/foreign/Utils.java ! test/jdk/java/foreign/TestSegmentAllocators.java Changeset: 99250140 Author: Vladimir Ivanov Date: 2022-07-01 22:56:48 +0000 URL: https://git.openjdk.org/loom/commit/9925014035ed203ba42cce80a23730328bbe8a50 8280320: C2: Loop opts are missing during OSR compilation Reviewed-by: thartmann, iveresov ! src/hotspot/share/ci/ciMethodData.cpp Changeset: cfc9a881 Author: Sergey Bylokhov Date: 2022-07-02 00:25:20 +0000 URL: https://git.openjdk.org/loom/commit/cfc9a881afd300bd7c1ce784287d1669308e89fc 8288854: getLocalGraphicsEnvironment() on for multi-screen setups throws exception NPE Reviewed-by: azvegint, aivanov ! src/java.desktop/unix/classes/sun/awt/X11GraphicsEnvironment.java Changeset: 9515560c Author: Serguei Spitsyn Date: 2022-07-02 05:43:43 +0000 URL: https://git.openjdk.org/loom/commit/9515560c54438156b37f1549229bcb5535df5fd1 8288703: GetThreadState returns 0 for virtual thread that has terminated Reviewed-by: alanb, amenkov, cjplummer ! 
src/hotspot/share/prims/jvmtiEnvBase.cpp ! test/hotspot/jtreg/serviceability/jvmti/thread/GetThreadState/thrstat03/thrstat03.java ! test/hotspot/jtreg/serviceability/jvmti/vthread/SelfSuspendDisablerTest/SelfSuspendDisablerTest.java ! test/hotspot/jtreg/serviceability/jvmti/vthread/SelfSuspendDisablerTest/libSelfSuspendDisablerTest.cpp Changeset: f5cdabad Author: Igor Veresov Date: 2022-07-02 05:55:10 +0000 URL: https://git.openjdk.org/loom/commit/f5cdabad06b1658d9a3ac01f94cbd29080ffcdb1 8245268: -Xcomp is missing from java launcher documentation Reviewed-by: kvn ! src/java.base/share/man/java.1 Changeset: 70f56933 Author: Jesper Wilhelmsson Date: 2022-07-02 18:07:36 +0000 URL: https://git.openjdk.org/loom/commit/70f5693356277c0685668219a79819707d099d9f Merge ! src/hotspot/share/prims/jvmtiEnvBase.cpp ! src/java.base/share/man/java.1 ! test/hotspot/jtreg/serviceability/jvmti/thread/GetThreadState/thrstat03/thrstat03.java ! test/jdk/ProblemList.txt ! src/hotspot/share/prims/jvmtiEnvBase.cpp ! src/java.base/share/man/java.1 ! test/hotspot/jtreg/serviceability/jvmti/thread/GetThreadState/thrstat03/thrstat03.java ! test/jdk/ProblemList.txt Changeset: d8444aa4 Author: Bill Huang Committer: Jaikiran Pai Date: 2022-07-03 02:37:30 +0000 URL: https://git.openjdk.org/loom/commit/d8444aa45ef10279f5ca034bb522e92411f07255 8286610: Add additional diagnostic output to java/net/DatagramSocket/InterruptibleDatagramSocket.java Reviewed-by: msheppar, dfuchs, jpai ! test/jdk/java/net/DatagramSocket/InterruptibleDatagramSocket.java Changeset: 649f2d88 Author: Prasanta Sadhukhan Date: 2022-07-03 08:36:08 +0000 URL: https://git.openjdk.org/loom/commit/649f2d8835027128c6c8cf37236808094a12a35f 8065097: [macosx] javax/swing/Popup/TaskbarPositionTest.java fails because Popup is one pixel off Reviewed-by: aivanov ! test/jdk/ProblemList.txt ! 
test/jdk/javax/swing/Popup/TaskbarPositionTest.java Changeset: 8e7a3cb5 Author: Andrey Turbanov Date: 2022-07-04 06:54:09 +0000 URL: https://git.openjdk.org/loom/commit/8e7a3cb5ab3852f0c367c8807d51ffbec2d0ad49 8289431: (zipfs) Avoid redundant HashMap.get in ZipFileSystemProvider.removeFileSystem Reviewed-by: lancea, attila ! src/jdk.zipfs/share/classes/jdk/nio/zipfs/ZipFileSystemProvider.java Changeset: e31003a0 Author: Albert Mingkun Yang Date: 2022-07-04 08:04:01 +0000 URL: https://git.openjdk.org/loom/commit/e31003a064693765a52f15ff9d4de2c342869a13 8289575: G1: Remove unnecessary is-marking-active check in G1BarrierSetRuntime::write_ref_field_pre_entry Reviewed-by: tschatzl, iwalulya ! src/hotspot/share/gc/g1/g1BarrierSetRuntime.cpp ! src/hotspot/share/gc/shared/satbMarkQueue.hpp Changeset: a8edd7a1 Author: Matthias Baesken Date: 2022-07-04 08:56:35 +0000 URL: https://git.openjdk.org/loom/commit/a8edd7a12f955fe843c7c9ad4273e9c653a80c5a 8289569: [test] java/lang/ProcessBuilder/Basic.java fails on Alpine/musl Reviewed-by: clanger, alanb, stuefe ! test/jdk/java/lang/ProcessBuilder/Basic.java Changeset: d53b02eb Author: Albert Mingkun Yang Date: 2022-07-04 12:03:57 +0000 URL: https://git.openjdk.org/loom/commit/d53b02eb9fceb6d170e0ea8613c2a064a7175892 8287312: G1: Early return on first failure in VerifyRegionClosure Reviewed-by: tschatzl, iwalulya, kbarrett ! src/hotspot/share/gc/g1/g1HeapVerifier.cpp Changeset: b5d96565 Author: Andrew Haley Date: 2022-07-04 13:26:54 +0000 URL: https://git.openjdk.org/loom/commit/b5d965656d937e31ca7d3224c4e981d5083091c9 8288971: AArch64: Clean up stack and register handling in interpreter Reviewed-by: adinn, ngasson ! src/hotspot/cpu/aarch64/abstractInterpreter_aarch64.cpp ! src/hotspot/cpu/aarch64/assembler_aarch64.cpp ! src/hotspot/cpu/aarch64/assembler_aarch64.hpp ! src/hotspot/cpu/aarch64/frame_aarch64.cpp ! src/hotspot/cpu/aarch64/frame_aarch64.hpp ! src/hotspot/cpu/aarch64/interp_masm_aarch64.cpp ! 
src/hotspot/cpu/aarch64/interp_masm_aarch64.hpp ! src/hotspot/cpu/aarch64/methodHandles_aarch64.cpp ! src/hotspot/cpu/aarch64/register_aarch64.cpp ! src/hotspot/cpu/aarch64/sharedRuntime_aarch64.cpp ! src/hotspot/cpu/aarch64/stubGenerator_aarch64.cpp ! src/hotspot/cpu/aarch64/templateInterpreterGenerator_aarch64.cpp ! src/hotspot/cpu/aarch64/templateTable_aarch64.cpp ! src/hotspot/cpu/arm/templateInterpreterGenerator_arm.cpp ! src/hotspot/cpu/x86/templateInterpreterGenerator_x86.cpp Changeset: bad9ffe4 Author: Albert Mingkun Yang Date: 2022-07-04 15:18:24 +0000 URL: https://git.openjdk.org/loom/commit/bad9ffe47112c3d532e0486af093f662508a5816 8288947: G1: Consolidate per-region is-humongous query in G1CollectedHeap Reviewed-by: tschatzl, iwalulya ! src/hotspot/share/gc/g1/g1CollectedHeap.cpp ! src/hotspot/share/gc/g1/g1CollectedHeap.hpp ! src/hotspot/share/gc/g1/g1CollectedHeap.inline.hpp ! src/hotspot/share/gc/g1/g1HeapRegionAttr.hpp ! src/hotspot/share/gc/g1/g1YoungCollector.cpp Changeset: 9ccae707 Author: Ryan Ernst Committer: Chris Hegarty Date: 2022-07-04 16:09:40 +0000 URL: https://git.openjdk.org/loom/commit/9ccae7078e22c27a8f84152f005c628534c9af53 8287593: ShortResponseBody could be made more resilient to rogue connections Reviewed-by: chegar, dfuchs ! test/jdk/java/net/httpclient/ShortResponseBody.java Changeset: df063f7d Author: Andrey Turbanov Date: 2022-07-04 20:21:11 +0000 URL: https://git.openjdk.org/loom/commit/df063f7db18a40ea7325fe608b3206a6dff812c1 8289484: Cleanup unnecessary null comparison before instanceof check in java.rmi Reviewed-by: jpai, attila ! src/java.rmi/share/classes/java/rmi/MarshalledObject.java ! src/java.rmi/share/classes/sun/rmi/transport/LiveRef.java ! 
src/java.rmi/share/classes/sun/rmi/transport/tcp/TCPEndpoint.java Changeset: 688712f7 Author: Thomas Stuefe Date: 2022-07-05 04:26:45 +0000 URL: https://git.openjdk.org/loom/commit/688712f75cd54caa264494adbe4dfeefc079e1dd 8289633: Forbid raw C-heap allocation functions in hotspot and fix findings Reviewed-by: kbarrett, dholmes ! src/hotspot/cpu/ppc/macroAssembler_ppc_sha.cpp ! src/hotspot/cpu/ppc/stubRoutines_ppc_64.cpp ! src/hotspot/os/linux/decoder_linux.cpp ! src/hotspot/os/linux/gc/z/zMountPoint_linux.cpp ! src/hotspot/os/linux/os_linux.cpp ! src/hotspot/os/linux/os_perf_linux.cpp ! src/hotspot/os/posix/gc/z/zUtils_posix.cpp ! src/hotspot/os/posix/os_posix.cpp ! src/hotspot/share/compiler/compilerEvent.cpp ! src/hotspot/share/gc/shared/gcLogPrecious.cpp ! src/hotspot/share/jvmci/jvmci.cpp ! src/hotspot/share/jvmci/jvmciCodeInstaller.cpp ! src/hotspot/share/logging/logTagSet.cpp ! src/hotspot/share/runtime/os.cpp ! src/hotspot/share/services/nmtPreInit.cpp ! src/hotspot/share/utilities/globalDefinitions.hpp ! test/hotspot/gtest/gtestMain.cpp ! test/hotspot/gtest/logging/test_logDecorators.cpp ! test/hotspot/gtest/utilities/test_bitMap_setops.cpp ! test/hotspot/gtest/utilities/test_concurrentHashtable.cpp Changeset: 1b997db7 Author: KIRIYAMA Takuya Committer: Tobias Hartmann Date: 2022-07-05 06:38:10 +0000 URL: https://git.openjdk.org/loom/commit/1b997db734315f6cd08af94149e6622a8afbe88c 8289427: compiler/compilercontrol/jcmd/ClearDirectivesFileStackTest.java failed with null setting Reviewed-by: kvn, thartmann ! test/hotspot/jtreg/compiler/compilercontrol/share/scenario/DirectiveBuilder.java Changeset: 4c997ba8 Author: Albert Mingkun Yang Date: 2022-07-05 07:29:02 +0000 URL: https://git.openjdk.org/loom/commit/4c997ba8303cc1116c73f6699888a77073a125a2 8289520: G1: Remove duplicate checks in G1BarrierSetC1::post_barrier Reviewed-by: tschatzl, iwalulya ! 
src/hotspot/share/gc/g1/c1/g1BarrierSetC1.cpp Changeset: fd1bb078 Author: Andrey Turbanov Date: 2022-07-05 07:39:05 +0000 URL: https://git.openjdk.org/loom/commit/fd1bb078ea3c8d3a10be696384ecf04d16573baa 8287603: Avoid redundant HashMap.containsKey calls in NimbusDefaults.getDerivedColor Reviewed-by: attila, aivanov ! src/java.desktop/share/classes/javax/swing/plaf/nimbus/Defaults.template Changeset: a5934cdd Author: Andrew Haley Date: 2022-07-05 07:54:38 +0000 URL: https://git.openjdk.org/loom/commit/a5934cddca9b962d8e1b709de23c169904b95525 8289698: AArch64: Need to relativize extended_sp in frame Reviewed-by: alanb, dholmes ! src/hotspot/cpu/aarch64/continuationFreezeThaw_aarch64.inline.hpp Changeset: 77c3bbf1 Author: Michael McMahon Date: 2022-07-05 09:15:41 +0000 URL: https://git.openjdk.org/loom/commit/77c3bbf105403089fec69d51406fe3e6f562271f 8289617: Remove test/jdk/java/net/ServerSocket/ThreadStop.java Reviewed-by: alanb, jpai - test/jdk/java/net/ServerSocket/ThreadStop.java Changeset: c45d613f Author: Doug Simon Date: 2022-07-05 18:25:12 +0000 URL: https://git.openjdk.org/loom/commit/c45d613faa8b8658c714513da89852f1f9ff0a4a 8289687: [JVMCI] bug in HotSpotResolvedJavaMethodImpl.equals Reviewed-by: kvn ! src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/CompilerToVM.java ! src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/HotSpotResolvedJavaMethodImpl.java ! test/hotspot/jtreg/compiler/jvmci/jdk.vm.ci.runtime.test/src/jdk/vm/ci/runtime/test/TestResolvedJavaField.java ! test/hotspot/jtreg/compiler/jvmci/jdk.vm.ci.runtime.test/src/jdk/vm/ci/runtime/test/TestResolvedJavaMethod.java ! 
test/hotspot/jtreg/compiler/jvmci/jdk.vm.ci.runtime.test/src/jdk/vm/ci/runtime/test/TestResolvedJavaType.java Changeset: d48694d0 Author: Lance Andersen Date: 2022-07-05 19:45:08 +0000 URL: https://git.openjdk.org/loom/commit/d48694d0f3865c1b205acdfa2e6c6d032a39959d 8283335: Add exists and readAttributesIfExists methods to FileSystemProvider Reviewed-by: alanb ! src/java.base/share/classes/java/nio/file/Files.java ! src/java.base/share/classes/java/nio/file/spi/FileSystemProvider.java ! src/java.base/share/classes/sun/nio/fs/AbstractFileSystemProvider.java ! src/java.base/unix/classes/sun/nio/fs/UnixFileAttributes.java ! src/java.base/unix/classes/sun/nio/fs/UnixFileSystemProvider.java ! src/java.base/unix/classes/sun/nio/fs/UnixNativeDispatcher.java ! src/java.base/unix/classes/sun/nio/fs/UnixUriUtils.java ! src/java.base/unix/native/libnio/fs/UnixNativeDispatcher.c ! src/jdk.zipfs/share/classes/jdk/nio/zipfs/ZipFileSystemProvider.java ! src/jdk.zipfs/share/classes/jdk/nio/zipfs/ZipPath.java + test/jdk/java/nio/file/spi/TestDelegation.java ! test/jdk/java/nio/file/spi/TestProvider.java + test/micro/org/openjdk/bench/jdk/nio/zipfs/ZipfileSystemProviderDelegation.java Changeset: 35156041 Author: Evgeny Astigeevich Committer: Paul Hohensee Date: 2022-07-05 20:50:02 +0000 URL: https://git.openjdk.org/loom/commit/351560414d7ddc0694126ab184bdb78be604e51f 8280481: Duplicated stubs to interpreter for static calls Reviewed-by: kvn, phh ! src/hotspot/cpu/aarch64/aarch64.ad + src/hotspot/cpu/aarch64/codeBuffer_aarch64.cpp ! src/hotspot/cpu/aarch64/codeBuffer_aarch64.hpp ! src/hotspot/cpu/arm/codeBuffer_arm.hpp ! src/hotspot/cpu/ppc/codeBuffer_ppc.hpp ! src/hotspot/cpu/riscv/codeBuffer_riscv.hpp ! src/hotspot/cpu/s390/codeBuffer_s390.hpp + src/hotspot/cpu/x86/codeBuffer_x86.cpp ! src/hotspot/cpu/x86/codeBuffer_x86.hpp ! src/hotspot/cpu/x86/compiledIC_x86.cpp ! src/hotspot/cpu/x86/macroAssembler_x86.cpp ! src/hotspot/cpu/x86/macroAssembler_x86.hpp ! 
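
JDK-8283335 above adds `exists` and `readAttributesIfExists` to `FileSystemProvider` so that queries like `Files.exists` can be answered in one provider call instead of probing attributes and catching `NoSuchFileException`. A sketch of the public `java.nio.file` API whose fast path this serves (the SPI change itself is internal):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.BasicFileAttributes;

public class ExistsDemo {
    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("exists-demo", ".txt");
        try {
            // Files.exists delegates to the installed FileSystemProvider;
            // with JDK-8283335 a provider can answer this directly.
            if (Files.exists(tmp)) {
                BasicFileAttributes attrs =
                        Files.readAttributes(tmp, BasicFileAttributes.class);
                System.out.println("regular file: " + attrs.isRegularFile());
            }
        } finally {
            Files.deleteIfExists(tmp);
        }
    }
}
```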
src/hotspot/cpu/x86/x86_32.ad ! src/hotspot/cpu/x86/x86_64.ad ! src/hotspot/cpu/zero/codeBuffer_zero.hpp ! src/hotspot/share/asm/codeBuffer.cpp ! src/hotspot/share/asm/codeBuffer.hpp + src/hotspot/share/asm/codeBuffer.inline.hpp ! src/hotspot/share/c1/c1_LIRAssembler.cpp ! src/hotspot/share/ci/ciEnv.cpp ! src/hotspot/share/runtime/globals.hpp + test/hotspot/jtreg/compiler/sharedstubs/SharedStubToInterpTest.java Changeset: fafe8b3f Author: Xiaohong Gong Date: 2022-07-06 06:15:04 +0000 URL: https://git.openjdk.org/loom/commit/fafe8b3f8dc1bdb7216f2b02416487a2c5fd9a26 8289604: compiler/vectorapi/VectorLogicalOpIdentityTest.java failed on x86 AVX1 system Reviewed-by: jiefu, kvn ! test/hotspot/jtreg/compiler/vectorapi/VectorLogicalOpIdentityTest.java Changeset: f783244c Author: Andrey Turbanov Date: 2022-07-06 06:40:19 +0000 URL: https://git.openjdk.org/loom/commit/f783244caf041b6f79036dfcf29ff857d9c1c78f 8289706: (cs) Avoid redundant TreeMap.containsKey call in AbstractCharsetProvider Reviewed-by: attila, naoto ! src/jdk.charsets/share/classes/sun/nio/cs/ext/AbstractCharsetProvider.java Changeset: d8f4e97b Author: Matthias Baesken Date: 2022-07-06 07:12:32 +0000 URL: https://git.openjdk.org/loom/commit/d8f4e97bd3f4e50902e80b4b6b4eb3268c6d4a9d 8289146: containers/docker/TestMemoryWithCgroupV1.java fails on linux ppc64le machine with missing Memory and Swap Limit output Reviewed-by: sgehwolf, mdoerr, iklam ! test/hotspot/jtreg/containers/docker/TestMemoryWithCgroupV1.java

From duke at openjdk.org Thu Jul 7 11:22:02 2022
From: duke at openjdk.org (duke)
Date: Thu, 7 Jul 2022 11:22:02 GMT
Subject: git: openjdk/loom: fibers: 69 new changesets
Message-ID:

Changeset: 910053b7 Author: KIRIYAMA Takuya Committer: David Holmes Date: 2022-06-28 23:37:23 +0000 URL: https://git.openjdk.org/loom/commit/910053b74ec5249b3ecae33b9b0b0a68729ef418 8280235: Deprecated flag FlightRecorder missing from VMDeprecatedOptions test Reviewed-by: dholmes, mgronlun ! 

test/hotspot/jtreg/runtime/CommandLine/VMDeprecatedOptions.java Changeset: 779b4e1d Author: Yuta Sato Committer: Yasumasa Suenaga Date: 2022-06-29 01:17:28 +0000 URL: https://git.openjdk.org/loom/commit/779b4e1d1959bc15a27492b7e2b951678e39cca8 8287001: Add warning message when fail to load hsdis libraries Reviewed-by: kvn, ysuenaga ! src/hotspot/share/compiler/disassembler.cpp Changeset: b96ba198 Author: Thomas Stuefe Date: 2022-06-29 04:12:46 +0000 URL: https://git.openjdk.org/loom/commit/b96ba19807845739b36274efb168dd048db819a3 8289182: NMT: MemTracker::baseline should return void Reviewed-by: dholmes, zgu ! src/hotspot/share/services/memBaseline.cpp ! src/hotspot/share/services/memBaseline.hpp ! src/hotspot/share/services/memTracker.cpp ! src/hotspot/share/services/nmtDCmd.cpp ! test/hotspot/jtreg/runtime/NMT/JcmdBaselineDetail.java ! test/hotspot/jtreg/runtime/NMT/JcmdDetailDiff.java ! test/hotspot/jtreg/runtime/NMT/JcmdSummaryDiff.java ! test/hotspot/jtreg/runtime/NMT/MallocSiteTypeChange.java Changeset: 108cd695 Author: Quan Anh Mai Committer: Jatin Bhateja Date: 2022-06-29 10:34:05 +0000 URL: https://git.openjdk.org/loom/commit/108cd695167f0eed7b778c29b55914998f15b90d 8283726: x86_64 intrinsics for compareUnsigned method in Integer and Long Reviewed-by: kvn, jbhateja ! src/hotspot/cpu/x86/x86_64.ad ! src/hotspot/share/classfile/vmIntrinsics.hpp ! src/hotspot/share/opto/c2compiler.cpp ! src/hotspot/share/opto/classes.hpp ! src/hotspot/share/opto/library_call.cpp ! src/hotspot/share/opto/library_call.hpp ! src/hotspot/share/opto/subnode.cpp ! src/hotspot/share/opto/subnode.hpp ! src/hotspot/share/runtime/vmStructs.cpp ! src/java.base/share/classes/java/lang/Integer.java ! src/java.base/share/classes/java/lang/Long.java + test/hotspot/jtreg/compiler/intrinsics/TestCompareUnsigned.java ! test/hotspot/jtreg/compiler/lib/ir_framework/IRNode.java ! test/micro/org/openjdk/bench/java/lang/Integers.java ! 
test/micro/org/openjdk/bench/java/lang/Longs.java Changeset: 167ce4da Author: Yasumasa Suenaga Date: 2022-06-29 11:43:45 +0000 URL: https://git.openjdk.org/loom/commit/167ce4dae248024ffda0439c3ccc6b12404eadaf 8289421: No-PCH build for Minimal VM was broken by JDK-8287001 Reviewed-by: mbaesken, jiefu, stuefe ! src/hotspot/share/compiler/disassembler.cpp Changeset: 2961b7ee Author: Albert Mingkun Yang Date: 2022-06-29 13:15:19 +0000 URL: https://git.openjdk.org/loom/commit/2961b7eede7205f8d67427bdf020de7966900424 8285364: Remove REF_ enum for java.lang.ref.Reference Co-authored-by: Stefan Karlsson Reviewed-by: kbarrett, coleenp, stefank ! src/hotspot/share/classfile/classFileParser.cpp ! src/hotspot/share/classfile/classFileParser.hpp ! src/hotspot/share/classfile/vmClasses.cpp ! src/hotspot/share/gc/shared/referenceProcessor.cpp ! src/hotspot/share/gc/shared/referenceProcessor.hpp ! src/hotspot/share/gc/shared/referenceProcessorPhaseTimes.cpp ! src/hotspot/share/gc/shared/referenceProcessorPhaseTimes.hpp ! src/hotspot/share/jfr/recorder/checkpoint/types/jfrType.cpp ! src/hotspot/share/memory/referenceType.hpp ! src/hotspot/share/oops/instanceKlass.cpp ! src/hotspot/share/oops/instanceKlass.hpp ! src/hotspot/share/oops/instanceRefKlass.cpp ! src/hotspot/share/oops/instanceRefKlass.hpp ! src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/memory/ReferenceType.java Changeset: 0709a6a1 Author: liach Committer: Jaikiran Pai Date: 2022-06-29 14:22:48 +0000 URL: https://git.openjdk.org/loom/commit/0709a6a1fb6bfc8aecde7eb827d1628e181e3253 8284942: Proxy building can just iterate superinterfaces once Reviewed-by: mchung ! src/java.base/share/classes/java/lang/reflect/Proxy.java Changeset: ba670ecb Author: Doug Simon Date: 2022-06-29 16:14:55 +0000 URL: https://git.openjdk.org/loom/commit/ba670ecbb9efdbcaa783d4a933499ca191fb58c5 8289094: [JVMCI] reduce JNI overhead and other VM rounds trips in JVMCI Reviewed-by: kvn, dlong ! 
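
JDK-8283726 above adds x86_64 intrinsics for `Integer.compareUnsigned` and `Long.compareUnsigned`. The Java-level semantics being accelerated (a standard `java.lang` API since JDK 8, not the intrinsic itself):

```java
public class CompareUnsignedDemo {
    public static void main(String[] args) {
        // Signed comparison: -1 < 1. Interpreted as unsigned, the same
        // bit pattern is 0xFFFFFFFF (4294967295), which is greater than 1.
        System.out.println(Integer.compare(-1, 1));          // negative
        System.out.println(Integer.compareUnsigned(-1, 1));  // positive
        System.out.println(Long.compareUnsigned(-1L, 1L));   // positive
    }
}
```

The intrinsic lets C2 emit an unsigned compare directly instead of the library's bias-and-compare sequence.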
src/hotspot/cpu/aarch64/jvmciCodeInstaller_aarch64.cpp ! src/hotspot/cpu/x86/jvmciCodeInstaller_x86.cpp ! src/hotspot/share/code/debugInfo.hpp ! src/hotspot/share/compiler/compileBroker.cpp ! src/hotspot/share/jvmci/jvmciCodeInstaller.cpp ! src/hotspot/share/jvmci/jvmciCodeInstaller.hpp ! src/hotspot/share/jvmci/jvmciCompiler.cpp ! src/hotspot/share/jvmci/jvmciCompiler.hpp ! src/hotspot/share/jvmci/jvmciCompilerToVM.cpp ! src/hotspot/share/jvmci/jvmciEnv.cpp ! src/hotspot/share/jvmci/jvmciEnv.hpp ! src/hotspot/share/jvmci/jvmciJavaClasses.hpp ! src/hotspot/share/jvmci/jvmciRuntime.cpp ! src/hotspot/share/jvmci/vmStructs_jvmci.cpp ! src/hotspot/share/jvmci/vmSymbols_jvmci.hpp ! src/hotspot/share/runtime/timer.cpp ! src/hotspot/share/runtime/timer.hpp ! src/hotspot/share/utilities/ostream.cpp ! src/hotspot/share/utilities/ostream.hpp ! src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.code/src/jdk/vm/ci/code/BytecodeFrame.java ! src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.code/src/jdk/vm/ci/code/RegisterSaveLayout.java ! src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.code/src/jdk/vm/ci/code/StackLockValue.java ! src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.code/src/jdk/vm/ci/code/site/Infopoint.java ! src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/CompilerToVM.java ! src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/HotSpotCodeCacheProvider.java ! src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/HotSpotCompiledCode.java + src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/HotSpotCompiledCodeStream.java ! src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/HotSpotCompiledNmethod.java ! src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/HotSpotConstantPool.java - src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/HotSpotConstantPoolObject.java ! 
src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/HotSpotConstantReflectionProvider.java ! src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/HotSpotJDKReflection.java ! src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/HotSpotJVMCIRuntime.java ! src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/HotSpotMemoryAccessProviderImpl.java ! src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/HotSpotMethodData.java ! src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/HotSpotMethodDataAccessor.java ! src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/HotSpotObjectConstantImpl.java ! src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/HotSpotReferenceMap.java ! src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/HotSpotResolvedJavaFieldImpl.java ! src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/HotSpotResolvedJavaMethodImpl.java ! src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/HotSpotResolvedJavaType.java ! src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/HotSpotResolvedObjectTypeImpl.java ! src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/HotSpotResolvedPrimitiveType.java - src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/HotSpotSentinelConstant.java ! src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/HotSpotSpeculationEncoding.java ! src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/JFR.java ! src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/MetaspaceObject.java ! src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.meta/src/jdk/vm/ci/meta/Assumptions.java ! 
src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.meta/src/jdk/vm/ci/meta/JavaConstant.java ! src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.meta/src/jdk/vm/ci/meta/PrimitiveConstant.java ! test/hotspot/jtreg/compiler/jvmci/common/patches/jdk.internal.vm.ci/jdk/vm/ci/hotspot/CompilerToVMHelper.java ! test/hotspot/jtreg/compiler/jvmci/errors/TestInvalidCompilationResult.java ! test/hotspot/jtreg/compiler/jvmci/errors/TestInvalidOopMap.java Changeset: b6bd190d Author: Zdenek Zambersky Committer: Valerie Peng Date: 2022-06-29 17:20:03 +0000 URL: https://git.openjdk.org/loom/commit/b6bd190d8d10fdb177f9fb100c9f44c9f57a3cb5 8288985: P11TlsKeyMaterialGenerator should work with ChaCha20-Poly1305 Reviewed-by: valeriep ! src/jdk.crypto.cryptoki/share/classes/sun/security/pkcs11/P11SecretKeyFactory.java + test/jdk/sun/security/pkcs11/tls/TestKeyMaterialChaCha20.java Changeset: 15efb2bd Author: Harshitha Onkar Committer: Alexey Ivanov Date: 2022-06-29 18:36:38 +0000 URL: https://git.openjdk.org/loom/commit/15efb2bdeb73e4e255dcc864be1a83450a2beaa8 8289238: Refactoring changes to PassFailJFrame Test Framework Reviewed-by: azvegint, aivanov ! test/jdk/java/awt/print/PrinterJob/ImagePrinting/ClippedImages.java ! test/jdk/java/awt/print/PrinterJob/PrintGlyphVectorTest.java ! test/jdk/java/awt/print/PrinterJob/PrintLatinCJKTest.java ! test/jdk/java/awt/regtesthelpers/PassFailJFrame.java ! test/jdk/javax/swing/JRadioButton/bug4380543.java ! test/jdk/javax/swing/JTabbedPane/4209065/bug4209065.java ! test/jdk/javax/swing/JTable/PrintAllPagesTest.java ! test/jdk/javax/swing/text/html/HtmlScriptTagParserTest.java Changeset: dbc6e110 Author: Joe Darcy Date: 2022-06-29 00:14:45 +0000 URL: https://git.openjdk.org/loom/commit/dbc6e110100aa6aaa8493158312030b84152b33a 8289399: Update SourceVersion to use snippets Reviewed-by: jjg, iris ! 
src/java.compiler/share/classes/javax/lang/model/SourceVersion.java Changeset: 57089749 Author: Raffaello Giulietti Committer: Roger Riggs Date: 2022-06-29 14:56:28 +0000 URL: https://git.openjdk.org/loom/commit/570897498baeab8d10f7d9525328a6d85d8c73ec 8288596: Random:from() adapter does not delegate to supplied generator in all cases Reviewed-by: darcy ! src/java.base/share/classes/java/util/Random.java ! test/jdk/java/util/Random/RandomTest.java Changeset: cf715449 Author: Naoto Sato Date: 2022-06-29 15:47:26 +0000 URL: https://git.openjdk.org/loom/commit/cf7154498fffba202b74b41a074f25c657b2e591 8289252: Recommend Locale.of() method instead of the constructor Reviewed-by: joehw, rriggs ! src/java.base/share/classes/java/util/Locale.java Changeset: 048bffad Author: Jesper Wilhelmsson Date: 2022-06-29 23:32:37 +0000 URL: https://git.openjdk.org/loom/commit/048bffad79b302890059ffc1bc559bfc601de92c Merge ! src/java.compiler/share/classes/javax/lang/model/SourceVersion.java ! src/java.compiler/share/classes/javax/lang/model/SourceVersion.java Changeset: dddd4e7c Author: Jaikiran Pai Date: 2022-06-30 01:43:11 +0000 URL: https://git.openjdk.org/loom/commit/dddd4e7c81fccd82b0fd37ea4583ce1a8e175919 8289291: HttpServer sets incorrect value for "max" parameter in Keep-Alive header value Reviewed-by: michaelm, dfuchs ! src/jdk.httpserver/share/classes/sun/net/httpserver/ServerImpl.java + test/jdk/com/sun/net/httpserver/Http10KeepAliveMaxParamTest.java Changeset: 31e50f2c Author: Xin Liu Date: 2022-06-30 03:59:42 +0000 URL: https://git.openjdk.org/loom/commit/31e50f2c7642b046dc9ea1de8ec245dcbc4e1926 8286104: use aggressive liveness for unstable_if traps Reviewed-by: kvn, thartmann ! src/hotspot/share/compiler/methodLiveness.hpp ! src/hotspot/share/opto/c2_globals.hpp ! src/hotspot/share/opto/callnode.cpp ! src/hotspot/share/opto/callnode.hpp ! src/hotspot/share/opto/compile.cpp ! src/hotspot/share/opto/compile.hpp ! src/hotspot/share/opto/graphKit.cpp ! 
src/hotspot/share/opto/graphKit.hpp ! src/hotspot/share/opto/ifnode.cpp ! src/hotspot/share/opto/node.cpp ! src/hotspot/share/opto/parse.hpp ! src/hotspot/share/opto/parse2.cpp + test/hotspot/jtreg/compiler/c2/TestFoldCompares2.java + test/hotspot/jtreg/compiler/c2/irTests/TestOptimizeUnstableIf.java Changeset: da6d1fc0 Author: Thomas Stuefe Date: 2022-06-30 06:19:25 +0000 URL: https://git.openjdk.org/loom/commit/da6d1fc0e0aeb1fdb504aced4b0dba0290ec240f 8289477: Memory corruption with CPU_ALLOC, CPU_FREE on muslc Reviewed-by: dholmes, clanger ! src/hotspot/os/linux/os_linux.cpp Changeset: 28c5e483 Author: Tobias Holenstein Date: 2022-06-30 07:14:29 +0000 URL: https://git.openjdk.org/loom/commit/28c5e483a80e0291bc784488ea15545dbecb257d 8287094: IGV: show node input numbers in edge tooltips Reviewed-by: chagedorn, thartmann ! src/utils/IdealGraphVisualizer/Graph/src/main/java/com/sun/hotspot/igv/graph/FigureConnection.java Changeset: 7b5bd251 Author: Ryan Ernst Committer: Chris Hegarty Date: 2022-06-30 08:28:45 +0000 URL: https://git.openjdk.org/loom/commit/7b5bd251efb7ad541e2eb9144121e414d17427fc 8286397: Address possibly lossy conversions in jdk.hotspot.agent Reviewed-by: cjplummer, chegar ! src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/oops/ObjectHeap.java ! src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/ui/classbrowser/HTMLGenerator.java ! src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/utilities/HeapHprofBinWriter.java Changeset: 1305fb5c Author: Xiaohong Gong Date: 2022-06-30 08:53:27 +0000 URL: https://git.openjdk.org/loom/commit/1305fb5ca8e4ca6aa082293e4444fb7de1b1652c 8287984: AArch64: [vector] Make all bits set vector sharable for match rules Reviewed-by: kvn, ngasson ! src/hotspot/cpu/aarch64/aarch64.ad ! 
src/hotspot/share/opto/vectornode.cpp + test/hotspot/jtreg/compiler/vectorapi/AllBitsSetVectorMatchRuleTest.java Changeset: c3addbb1 Author: rmartinc Committer: Aleksei Efimov Date: 2022-06-30 09:17:57 +0000 URL: https://git.openjdk.org/loom/commit/c3addbb1c01483e10189cc46d8f2378e5b56dcee 8288895: LdapContext doesn't honor set referrals limit Reviewed-by: dfuchs, aefimov ! src/java.naming/share/classes/com/sun/jndi/ldap/AbstractLdapNamingEnumeration.java + test/jdk/com/sun/jndi/ldap/ReferralLimitSearchTest.java Changeset: feb223aa Author: Prasanta Sadhukhan Date: 2022-06-30 11:16:07 +0000 URL: https://git.openjdk.org/loom/commit/feb223aacfd89d598a27b27c4b8be4601cc5eaff 8288707: javax/swing/JToolBar/4529206/bug4529206.java: setFloating does not work correctly Reviewed-by: tr, serb ! test/jdk/javax/swing/JToolBar/4529206/bug4529206.java Changeset: 00d06d4a Author: Kevin Walls Date: 2022-06-30 20:18:52 +0000 URL: https://git.openjdk.org/loom/commit/00d06d4a82c5cbc8cc5fde97caa8cb56279c441a 8289440: Remove vmTestbase/nsk/monitoring/MemoryPoolMBean/isCollectionUsageThresholdExceeded/isexceeded003 from ProblemList.txt Reviewed-by: amenkov, lmesnik ! test/hotspot/jtreg/ProblemList.txt ! test/hotspot/jtreg/vmTestbase/nsk/monitoring/MemoryPoolMBean/isCollectionUsageThresholdExceeded/isexceeded001.java Changeset: c20b3aa9 Author: Alan Bateman Date: 2022-06-30 08:49:32 +0000 URL: https://git.openjdk.org/loom/commit/c20b3aa9c5ada4c87b3421fbc3290f4d6a4706ac 8289278: Suspend/ResumeAllVirtualThreads need both can_suspend and can_support_virtual_threads Reviewed-by: sspitsyn, dcubed, dholmes, iris ! src/hotspot/share/prims/jvmti.xml ! 
src/hotspot/share/prims/jvmti.xsl Changeset: 918068a1 Author: Jesper Wilhelmsson Date: 2022-07-01 00:47:56 +0000 URL: https://git.openjdk.org/loom/commit/918068a115efee7d439084b6d743cab5193bd943 Merge Changeset: 124c63c1 Author: Xiaohong Gong Date: 2022-07-01 01:19:18 +0000 URL: https://git.openjdk.org/loom/commit/124c63c17c897404e3c5c3615d6727303e4f3d06 8288294: [vector] Add Identity/Ideal transformations for vector logic operations Reviewed-by: kvn, jbhateja ! src/hotspot/share/opto/vectornode.cpp ! src/hotspot/share/opto/vectornode.hpp ! test/hotspot/jtreg/compiler/lib/ir_framework/IRNode.java + test/hotspot/jtreg/compiler/vectorapi/VectorLogicalOpIdentityTest.java Changeset: d260a4e7 Author: Richard Reingruber Date: 2022-07-01 06:12:52 +0000 URL: https://git.openjdk.org/loom/commit/d260a4e794681c6f4be4767350702754cfc2035c 8289434: x86_64: Improve comment on gen_continuation_enter() Reviewed-by: kvn ! src/hotspot/cpu/x86/sharedRuntime_x86_64.cpp Changeset: f190f4e6 Author: Harshitha Onkar Committer: Alexander Zvegintsev Date: 2022-07-01 09:07:34 +0000 URL: https://git.openjdk.org/loom/commit/f190f4e6389a0105b0701ec7ea201fab9dda0a48 8288444: Remove the workaround for frame.pack() in ModalDialogTest Reviewed-by: azvegint + test/jdk/java/awt/Dialog/ModalDialogTest/ModalDialogTest.java Changeset: b9b900a6 Author: Tobias Holenstein Date: 2022-07-01 13:34:38 +0000 URL: https://git.openjdk.org/loom/commit/b9b900a61ca914c7931d69bd4a8aeaa948be1d64 8277060: EXCEPTION_INT_DIVIDE_BY_ZERO in TypeAryPtr::dump2 with -XX:+TracePhaseCCP Reviewed-by: kvn, thartmann, chagedorn, dlong ! src/hotspot/share/opto/type.cpp ! src/hotspot/share/utilities/globalDefinitions.cpp + test/hotspot/jtreg/compiler/debug/TestTracePhaseCCP.java Changeset: a8fe2d97 Author: Thomas Stuefe Date: 2022-07-01 13:43:45 +0000 URL: https://git.openjdk.org/loom/commit/a8fe2d97a2ea1d3ce70d6095740c4ac7ec113761 8289512: Fix GCC 12 warnings for adlc output_c.cpp Reviewed-by: kvn, lucy ! 
src/hotspot/share/adlc/output_c.cpp Changeset: 09b4032f Author: Harold Seigel Date: 2022-07-01 14:31:30 +0000 URL: https://git.openjdk.org/loom/commit/09b4032f8b07335729e71b16b8f735514f3aebce 8289534: Change 'uncomplicated' hotspot runtime options Reviewed-by: coleenp, dholmes ! src/hotspot/share/cds/filemap.cpp ! src/hotspot/share/cds/metaspaceShared.cpp ! src/hotspot/share/jvmci/jvmciCompilerToVMInit.cpp ! src/hotspot/share/runtime/flags/jvmFlagConstraintsRuntime.cpp ! src/hotspot/share/runtime/flags/jvmFlagConstraintsRuntime.hpp ! src/hotspot/share/runtime/globals.hpp ! src/hotspot/share/runtime/perfMemory.cpp ! src/hotspot/share/utilities/vmError.cpp ! src/jdk.hotspot.agent/share/classes/sun/jvm/hotspot/runtime/VM.java ! test/jdk/java/lang/instrument/GetObjectSizeIntrinsicsTest.java Changeset: c43bdf71 Author: Calvin Cheung Date: 2022-07-01 16:11:17 +0000 URL: https://git.openjdk.org/loom/commit/c43bdf716596053ebe473c3b3bd5cf89482b9b01 8289257: Some custom loader tests failed due to symbol refcount not decremented Reviewed-by: iklam, coleenp ! test/hotspot/jtreg/ProblemList-zgc.txt ! test/hotspot/jtreg/runtime/cds/appcds/customLoader/test-classes/HelloUnload.java Changeset: e291a67e Author: Brian Burkhalter Date: 2022-07-01 19:13:49 +0000 URL: https://git.openjdk.org/loom/commit/e291a67e96970d80a9915f8a23afffed6e0b8ded 8289584: (fs) Print size values in java/nio/file/FileStore/Basic.java when they differ by > 1GiB Reviewed-by: alanb ! test/jdk/java/nio/file/FileStore/Basic.java Changeset: 2dd00f58 Author: Kevin Driver Committer: Weijun Wang Date: 2022-07-01 21:28:44 +0000 URL: https://git.openjdk.org/loom/commit/2dd00f580c1c5999a4905ade09bc50a5cb37ca57 8170762: Document that ISO10126Padding pads with random bytes Reviewed-by: weijun ! 
src/java.base/share/classes/com/sun/crypto/provider/ISO10126Padding.java Changeset: 44e8c462 Author: Kevin Driver Committer: Weijun Wang Date: 2022-07-01 22:01:55 +0000 URL: https://git.openjdk.org/loom/commit/44e8c462b459a7db530dbc23c5ba923439c419b4 8289603: Code change for JDK-8170762 breaks all build Reviewed-by: weijun ! src/java.base/share/classes/com/sun/crypto/provider/ISO10126Padding.java Changeset: cdf69792 Author: Ioi Lam Date: 2022-07-02 14:45:10 +0000 URL: https://git.openjdk.org/loom/commit/cdf697925953f62e17a7916ba611d7e789f09edf 8289230: Move PlatformXXX class declarations out of os_xxx.hpp Reviewed-by: coleenp, ccheung ! src/hotspot/os/linux/decoder_linux.cpp + src/hotspot/os/posix/mutex_posix.hpp ! src/hotspot/os/posix/os_posix.cpp ! src/hotspot/os/posix/os_posix.hpp ! src/hotspot/os/posix/os_posix.inline.hpp + src/hotspot/os/posix/park_posix.hpp ! src/hotspot/os/posix/signals_posix.cpp + src/hotspot/os/posix/threadCrashProtection_posix.cpp + src/hotspot/os/posix/threadCrashProtection_posix.hpp + src/hotspot/os/windows/mutex_windows.hpp ! src/hotspot/os/windows/os_windows.cpp ! src/hotspot/os/windows/os_windows.hpp ! src/hotspot/os/windows/os_windows.inline.hpp + src/hotspot/os/windows/park_windows.hpp + src/hotspot/os/windows/threadCrashProtection_windows.cpp + src/hotspot/os/windows/threadCrashProtection_windows.hpp ! src/hotspot/share/gc/shared/gcLogPrecious.cpp ! src/hotspot/share/gc/shenandoah/shenandoahLock.hpp ! src/hotspot/share/gc/z/zLock.hpp ! src/hotspot/share/jfr/periodic/sampling/jfrThreadSampler.cpp ! src/hotspot/share/logging/logAsyncWriter.hpp ! src/hotspot/share/memory/metaspace/metachunk.cpp ! src/hotspot/share/memory/metaspace/rootChunkArea.cpp ! src/hotspot/share/memory/metaspace/testHelpers.cpp ! src/hotspot/share/prims/jvm.cpp ! src/hotspot/share/prims/jvmtiRawMonitor.hpp ! src/hotspot/share/runtime/mutex.cpp ! src/hotspot/share/runtime/mutex.hpp ! src/hotspot/share/runtime/objectMonitor.hpp ! 
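
JDK-8170762 above documents that ISO10126Padding fills the pad with random bytes (only the final pad-length byte is fixed). A small sketch of the observable effect through the standard JCE API, no JDK internals; the fixed IV here is for demonstration only and must not be reused in real code:

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.IvParameterSpec;

public class Iso10126Demo {
    public static void main(String[] args) throws Exception {
        SecretKey key = KeyGenerator.getInstance("AES").generateKey();
        IvParameterSpec iv = new IvParameterSpec(new byte[16]); // demo only!
        byte[] msg = "loom".getBytes(StandardCharsets.UTF_8);

        // Because the padding bytes are random, two encryptions of the same
        // input under the same key and IV will almost certainly differ.
        byte[] c1 = encrypt(key, iv, msg);
        byte[] c2 = encrypt(key, iv, msg);
        System.out.println("ciphertexts equal: " + Arrays.equals(c1, c2));

        // Decryption only inspects the pad-length byte, so the round
        // trip recovers the plaintext regardless of the random filler.
        Cipher dec = Cipher.getInstance("AES/CBC/ISO10126Padding");
        dec.init(Cipher.DECRYPT_MODE, key, iv);
        System.out.println("round trip ok: " + Arrays.equals(msg, dec.doFinal(c1)));
    }

    static byte[] encrypt(SecretKey key, IvParameterSpec iv, byte[] msg) throws Exception {
        Cipher enc = Cipher.getInstance("AES/CBC/ISO10126Padding");
        enc.init(Cipher.ENCRYPT_MODE, key, iv);
        return enc.doFinal(msg);
    }
}
```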
src/hotspot/share/runtime/os.cpp ! src/hotspot/share/runtime/os.hpp ! src/hotspot/share/runtime/osThread.hpp ! src/hotspot/share/runtime/park.hpp ! src/hotspot/share/runtime/semaphore.hpp ! src/hotspot/share/runtime/synchronizer.cpp + src/hotspot/share/runtime/threadCrashProtection.hpp Changeset: dee5121b Author: Andrey Turbanov Date: 2022-07-02 15:24:23 +0000 URL: https://git.openjdk.org/loom/commit/dee5121bd4b079abb28337395be2d5dd8bbf2f11 8289385: Cleanup redundant synchronization in Http2ClientImpl Reviewed-by: jpai, dfuchs ! src/java.net.http/share/classes/jdk/internal/net/http/Http2ClientImpl.java Changeset: 95497772 Author: Tobias Hartmann Date: 2022-07-01 05:23:57 +0000 URL: https://git.openjdk.org/loom/commit/95497772e7207b5752e6ecace4a6686df2b45227 8284358: Unreachable loop is not removed from C2 IR, leading to a broken graph Co-authored-by: Christian Hagedorn Reviewed-by: kvn, chagedorn ! src/hotspot/share/opto/cfgnode.cpp + test/hotspot/jtreg/compiler/c2/TestDeadDataLoop.java Changeset: 604ea90d Author: Naoto Sato Date: 2022-07-01 16:07:23 +0000 URL: https://git.openjdk.org/loom/commit/604ea90d55ac8354fd7287490ef59b8e3ce020d1 8289549: ISO 4217 Amendment 172 Update Reviewed-by: iris ! src/java.base/share/data/currency/CurrencyData.properties ! test/jdk/java/util/Currency/tablea1.txt Changeset: 20124ac7 Author: Daniel D. Daugherty Date: 2022-07-01 16:21:31 +0000 URL: https://git.openjdk.org/loom/commit/20124ac755acbe801d51a26dc5176239d1256279 8289585: ProblemList sun/tools/jhsdb/JStackStressTest.java on linux-aarch64 Reviewed-by: bpb, kevinw ! test/jdk/ProblemList.txt Changeset: 8e01ffb3 Author: Maurizio Cimadamore Date: 2022-07-01 21:46:07 +0000 URL: https://git.openjdk.org/loom/commit/8e01ffb3a7914a67a66ce284029f19cdf845b626 8289570: SegmentAllocator:allocateUtf8String(String str) default behavior mismatch to spec Reviewed-by: alanb, psandoz ! src/java.base/share/classes/jdk/internal/foreign/Utils.java ! 
test/jdk/java/foreign/TestSegmentAllocators.java Changeset: 99250140 Author: Vladimir Ivanov Date: 2022-07-01 22:56:48 +0000 URL: https://git.openjdk.org/loom/commit/9925014035ed203ba42cce80a23730328bbe8a50 8280320: C2: Loop opts are missing during OSR compilation Reviewed-by: thartmann, iveresov ! src/hotspot/share/ci/ciMethodData.cpp Changeset: cfc9a881 Author: Sergey Bylokhov Date: 2022-07-02 00:25:20 +0000 URL: https://git.openjdk.org/loom/commit/cfc9a881afd300bd7c1ce784287d1669308e89fc 8288854: getLocalGraphicsEnvironment() on for multi-screen setups throws exception NPE Reviewed-by: azvegint, aivanov ! src/java.desktop/unix/classes/sun/awt/X11GraphicsEnvironment.java Changeset: 9515560c Author: Serguei Spitsyn Date: 2022-07-02 05:43:43 +0000 URL: https://git.openjdk.org/loom/commit/9515560c54438156b37f1549229bcb5535df5fd1 8288703: GetThreadState returns 0 for virtual thread that has terminated Reviewed-by: alanb, amenkov, cjplummer ! src/hotspot/share/prims/jvmtiEnvBase.cpp ! test/hotspot/jtreg/serviceability/jvmti/thread/GetThreadState/thrstat03/thrstat03.java ! test/hotspot/jtreg/serviceability/jvmti/vthread/SelfSuspendDisablerTest/SelfSuspendDisablerTest.java ! test/hotspot/jtreg/serviceability/jvmti/vthread/SelfSuspendDisablerTest/libSelfSuspendDisablerTest.cpp Changeset: f5cdabad Author: Igor Veresov Date: 2022-07-02 05:55:10 +0000 URL: https://git.openjdk.org/loom/commit/f5cdabad06b1658d9a3ac01f94cbd29080ffcdb1 8245268: -Xcomp is missing from java launcher documentation Reviewed-by: kvn ! src/java.base/share/man/java.1 Changeset: 70f56933 Author: Jesper Wilhelmsson Date: 2022-07-02 18:07:36 +0000 URL: https://git.openjdk.org/loom/commit/70f5693356277c0685668219a79819707d099d9f Merge ! src/hotspot/share/prims/jvmtiEnvBase.cpp ! src/java.base/share/man/java.1 ! test/hotspot/jtreg/serviceability/jvmti/thread/GetThreadState/thrstat03/thrstat03.java ! test/jdk/ProblemList.txt ! src/hotspot/share/prims/jvmtiEnvBase.cpp ! src/java.base/share/man/java.1 ! 
test/hotspot/jtreg/serviceability/jvmti/thread/GetThreadState/thrstat03/thrstat03.java ! test/jdk/ProblemList.txt Changeset: d8444aa4 Author: Bill Huang Committer: Jaikiran Pai Date: 2022-07-03 02:37:30 +0000 URL: https://git.openjdk.org/loom/commit/d8444aa45ef10279f5ca034bb522e92411f07255 8286610: Add additional diagnostic output to java/net/DatagramSocket/InterruptibleDatagramSocket.java Reviewed-by: msheppar, dfuchs, jpai ! test/jdk/java/net/DatagramSocket/InterruptibleDatagramSocket.java Changeset: 649f2d88 Author: Prasanta Sadhukhan Date: 2022-07-03 08:36:08 +0000 URL: https://git.openjdk.org/loom/commit/649f2d8835027128c6c8cf37236808094a12a35f 8065097: [macosx] javax/swing/Popup/TaskbarPositionTest.java fails because Popup is one pixel off Reviewed-by: aivanov ! test/jdk/ProblemList.txt ! test/jdk/javax/swing/Popup/TaskbarPositionTest.java Changeset: 8e7a3cb5 Author: Andrey Turbanov Date: 2022-07-04 06:54:09 +0000 URL: https://git.openjdk.org/loom/commit/8e7a3cb5ab3852f0c367c8807d51ffbec2d0ad49 8289431: (zipfs) Avoid redundant HashMap.get in ZipFileSystemProvider.removeFileSystem Reviewed-by: lancea, attila ! src/jdk.zipfs/share/classes/jdk/nio/zipfs/ZipFileSystemProvider.java Changeset: e31003a0 Author: Albert Mingkun Yang Date: 2022-07-04 08:04:01 +0000 URL: https://git.openjdk.org/loom/commit/e31003a064693765a52f15ff9d4de2c342869a13 8289575: G1: Remove unnecessary is-marking-active check in G1BarrierSetRuntime::write_ref_field_pre_entry Reviewed-by: tschatzl, iwalulya ! src/hotspot/share/gc/g1/g1BarrierSetRuntime.cpp ! src/hotspot/share/gc/shared/satbMarkQueue.hpp Changeset: a8edd7a1 Author: Matthias Baesken Date: 2022-07-04 08:56:35 +0000 URL: https://git.openjdk.org/loom/commit/a8edd7a12f955fe843c7c9ad4273e9c653a80c5a 8289569: [test] java/lang/ProcessBuilder/Basic.java fails on Alpine/musl Reviewed-by: clanger, alanb, stuefe ! 
test/jdk/java/lang/ProcessBuilder/Basic.java Changeset: d53b02eb Author: Albert Mingkun Yang Date: 2022-07-04 12:03:57 +0000 URL: https://git.openjdk.org/loom/commit/d53b02eb9fceb6d170e0ea8613c2a064a7175892 8287312: G1: Early return on first failure in VerifyRegionClosure Reviewed-by: tschatzl, iwalulya, kbarrett ! src/hotspot/share/gc/g1/g1HeapVerifier.cpp Changeset: b5d96565 Author: Andrew Haley Date: 2022-07-04 13:26:54 +0000 URL: https://git.openjdk.org/loom/commit/b5d965656d937e31ca7d3224c4e981d5083091c9 8288971: AArch64: Clean up stack and register handling in interpreter Reviewed-by: adinn, ngasson ! src/hotspot/cpu/aarch64/abstractInterpreter_aarch64.cpp ! src/hotspot/cpu/aarch64/assembler_aarch64.cpp ! src/hotspot/cpu/aarch64/assembler_aarch64.hpp ! src/hotspot/cpu/aarch64/frame_aarch64.cpp ! src/hotspot/cpu/aarch64/frame_aarch64.hpp ! src/hotspot/cpu/aarch64/interp_masm_aarch64.cpp ! src/hotspot/cpu/aarch64/interp_masm_aarch64.hpp ! src/hotspot/cpu/aarch64/methodHandles_aarch64.cpp ! src/hotspot/cpu/aarch64/register_aarch64.cpp ! src/hotspot/cpu/aarch64/sharedRuntime_aarch64.cpp ! src/hotspot/cpu/aarch64/stubGenerator_aarch64.cpp ! src/hotspot/cpu/aarch64/templateInterpreterGenerator_aarch64.cpp ! src/hotspot/cpu/aarch64/templateTable_aarch64.cpp ! src/hotspot/cpu/arm/templateInterpreterGenerator_arm.cpp ! src/hotspot/cpu/x86/templateInterpreterGenerator_x86.cpp Changeset: bad9ffe4 Author: Albert Mingkun Yang Date: 2022-07-04 15:18:24 +0000 URL: https://git.openjdk.org/loom/commit/bad9ffe47112c3d532e0486af093f662508a5816 8288947: G1: Consolidate per-region is-humongous query in G1CollectedHeap Reviewed-by: tschatzl, iwalulya ! src/hotspot/share/gc/g1/g1CollectedHeap.cpp ! src/hotspot/share/gc/g1/g1CollectedHeap.hpp ! src/hotspot/share/gc/g1/g1CollectedHeap.inline.hpp ! src/hotspot/share/gc/g1/g1HeapRegionAttr.hpp ! 
src/hotspot/share/gc/g1/g1YoungCollector.cpp Changeset: 9ccae707 Author: Ryan Ernst Committer: Chris Hegarty Date: 2022-07-04 16:09:40 +0000 URL: https://git.openjdk.org/loom/commit/9ccae7078e22c27a8f84152f005c628534c9af53 8287593: ShortResponseBody could be made more resilient to rogue connections Reviewed-by: chegar, dfuchs ! test/jdk/java/net/httpclient/ShortResponseBody.java Changeset: df063f7d Author: Andrey Turbanov Date: 2022-07-04 20:21:11 +0000 URL: https://git.openjdk.org/loom/commit/df063f7db18a40ea7325fe608b3206a6dff812c1 8289484: Cleanup unnecessary null comparison before instanceof check in java.rmi Reviewed-by: jpai, attila ! src/java.rmi/share/classes/java/rmi/MarshalledObject.java ! src/java.rmi/share/classes/sun/rmi/transport/LiveRef.java ! src/java.rmi/share/classes/sun/rmi/transport/tcp/TCPEndpoint.java Changeset: 688712f7 Author: Thomas Stuefe Date: 2022-07-05 04:26:45 +0000 URL: https://git.openjdk.org/loom/commit/688712f75cd54caa264494adbe4dfeefc079e1dd 8289633: Forbid raw C-heap allocation functions in hotspot and fix findings Reviewed-by: kbarrett, dholmes ! src/hotspot/cpu/ppc/macroAssembler_ppc_sha.cpp ! src/hotspot/cpu/ppc/stubRoutines_ppc_64.cpp ! src/hotspot/os/linux/decoder_linux.cpp ! src/hotspot/os/linux/gc/z/zMountPoint_linux.cpp ! src/hotspot/os/linux/os_linux.cpp ! src/hotspot/os/linux/os_perf_linux.cpp ! src/hotspot/os/posix/gc/z/zUtils_posix.cpp ! src/hotspot/os/posix/os_posix.cpp ! src/hotspot/share/compiler/compilerEvent.cpp ! src/hotspot/share/gc/shared/gcLogPrecious.cpp ! src/hotspot/share/jvmci/jvmci.cpp ! src/hotspot/share/jvmci/jvmciCodeInstaller.cpp ! src/hotspot/share/logging/logTagSet.cpp ! src/hotspot/share/runtime/os.cpp ! src/hotspot/share/services/nmtPreInit.cpp ! src/hotspot/share/utilities/globalDefinitions.hpp ! test/hotspot/gtest/gtestMain.cpp ! test/hotspot/gtest/logging/test_logDecorators.cpp ! test/hotspot/gtest/utilities/test_bitMap_setops.cpp ! 
test/hotspot/gtest/utilities/test_concurrentHashtable.cpp Changeset: 1b997db7 Author: KIRIYAMA Takuya Committer: Tobias Hartmann Date: 2022-07-05 06:38:10 +0000 URL: https://git.openjdk.org/loom/commit/1b997db734315f6cd08af94149e6622a8afbe88c 8289427: compiler/compilercontrol/jcmd/ClearDirectivesFileStackTest.java failed with null setting Reviewed-by: kvn, thartmann ! test/hotspot/jtreg/compiler/compilercontrol/share/scenario/DirectiveBuilder.java Changeset: 4c997ba8 Author: Albert Mingkun Yang Date: 2022-07-05 07:29:02 +0000 URL: https://git.openjdk.org/loom/commit/4c997ba8303cc1116c73f6699888a77073a125a2 8289520: G1: Remove duplicate checks in G1BarrierSetC1::post_barrier Reviewed-by: tschatzl, iwalulya ! src/hotspot/share/gc/g1/c1/g1BarrierSetC1.cpp Changeset: fd1bb078 Author: Andrey Turbanov Date: 2022-07-05 07:39:05 +0000 URL: https://git.openjdk.org/loom/commit/fd1bb078ea3c8d3a10be696384ecf04d16573baa 8287603: Avoid redundant HashMap.containsKey calls in NimbusDefaults.getDerivedColor Reviewed-by: attila, aivanov ! src/java.desktop/share/classes/javax/swing/plaf/nimbus/Defaults.template Changeset: a5934cdd Author: Andrew Haley Date: 2022-07-05 07:54:38 +0000 URL: https://git.openjdk.org/loom/commit/a5934cddca9b962d8e1b709de23c169904b95525 8289698: AArch64: Need to relativize extended_sp in frame Reviewed-by: alanb, dholmes ! src/hotspot/cpu/aarch64/continuationFreezeThaw_aarch64.inline.hpp Changeset: 77c3bbf1 Author: Michael McMahon Date: 2022-07-05 09:15:41 +0000 URL: https://git.openjdk.org/loom/commit/77c3bbf105403089fec69d51406fe3e6f562271f 8289617: Remove test/jdk/java/net/ServerSocket/ThreadStop.java Reviewed-by: alanb, jpai - test/jdk/java/net/ServerSocket/ThreadStop.java Changeset: c45d613f Author: Doug Simon Date: 2022-07-05 18:25:12 +0000 URL: https://git.openjdk.org/loom/commit/c45d613faa8b8658c714513da89852f1f9ff0a4a 8289687: [JVMCI] bug in HotSpotResolvedJavaMethodImpl.equals Reviewed-by: kvn ! 
src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/CompilerToVM.java ! src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/HotSpotResolvedJavaMethodImpl.java ! test/hotspot/jtreg/compiler/jvmci/jdk.vm.ci.runtime.test/src/jdk/vm/ci/runtime/test/TestResolvedJavaField.java ! test/hotspot/jtreg/compiler/jvmci/jdk.vm.ci.runtime.test/src/jdk/vm/ci/runtime/test/TestResolvedJavaMethod.java ! test/hotspot/jtreg/compiler/jvmci/jdk.vm.ci.runtime.test/src/jdk/vm/ci/runtime/test/TestResolvedJavaType.java Changeset: d48694d0 Author: Lance Andersen Date: 2022-07-05 19:45:08 +0000 URL: https://git.openjdk.org/loom/commit/d48694d0f3865c1b205acdfa2e6c6d032a39959d 8283335: Add exists and readAttributesIfExists methods to FileSystemProvider Reviewed-by: alanb ! src/java.base/share/classes/java/nio/file/Files.java ! src/java.base/share/classes/java/nio/file/spi/FileSystemProvider.java ! src/java.base/share/classes/sun/nio/fs/AbstractFileSystemProvider.java ! src/java.base/unix/classes/sun/nio/fs/UnixFileAttributes.java ! src/java.base/unix/classes/sun/nio/fs/UnixFileSystemProvider.java ! src/java.base/unix/classes/sun/nio/fs/UnixNativeDispatcher.java ! src/java.base/unix/classes/sun/nio/fs/UnixUriUtils.java ! src/java.base/unix/native/libnio/fs/UnixNativeDispatcher.c ! src/jdk.zipfs/share/classes/jdk/nio/zipfs/ZipFileSystemProvider.java ! src/jdk.zipfs/share/classes/jdk/nio/zipfs/ZipPath.java + test/jdk/java/nio/file/spi/TestDelegation.java ! test/jdk/java/nio/file/spi/TestProvider.java + test/micro/org/openjdk/bench/jdk/nio/zipfs/ZipfileSystemProviderDelegation.java Changeset: 35156041 Author: Evgeny Astigeevich Committer: Paul Hohensee Date: 2022-07-05 20:50:02 +0000 URL: https://git.openjdk.org/loom/commit/351560414d7ddc0694126ab184bdb78be604e51f 8280481: Duplicated stubs to interpreter for static calls Reviewed-by: kvn, phh ! src/hotspot/cpu/aarch64/aarch64.ad + src/hotspot/cpu/aarch64/codeBuffer_aarch64.cpp ! 
src/hotspot/cpu/aarch64/codeBuffer_aarch64.hpp ! src/hotspot/cpu/arm/codeBuffer_arm.hpp ! src/hotspot/cpu/ppc/codeBuffer_ppc.hpp ! src/hotspot/cpu/riscv/codeBuffer_riscv.hpp ! src/hotspot/cpu/s390/codeBuffer_s390.hpp + src/hotspot/cpu/x86/codeBuffer_x86.cpp ! src/hotspot/cpu/x86/codeBuffer_x86.hpp ! src/hotspot/cpu/x86/compiledIC_x86.cpp ! src/hotspot/cpu/x86/macroAssembler_x86.cpp ! src/hotspot/cpu/x86/macroAssembler_x86.hpp ! src/hotspot/cpu/x86/x86_32.ad ! src/hotspot/cpu/x86/x86_64.ad ! src/hotspot/cpu/zero/codeBuffer_zero.hpp ! src/hotspot/share/asm/codeBuffer.cpp ! src/hotspot/share/asm/codeBuffer.hpp + src/hotspot/share/asm/codeBuffer.inline.hpp ! src/hotspot/share/c1/c1_LIRAssembler.cpp ! src/hotspot/share/ci/ciEnv.cpp ! src/hotspot/share/runtime/globals.hpp + test/hotspot/jtreg/compiler/sharedstubs/SharedStubToInterpTest.java Changeset: fafe8b3f Author: Xiaohong Gong Date: 2022-07-06 06:15:04 +0000 URL: https://git.openjdk.org/loom/commit/fafe8b3f8dc1bdb7216f2b02416487a2c5fd9a26 8289604: compiler/vectorapi/VectorLogicalOpIdentityTest.java failed on x86 AVX1 system Reviewed-by: jiefu, kvn ! test/hotspot/jtreg/compiler/vectorapi/VectorLogicalOpIdentityTest.java Changeset: f783244c Author: Andrey Turbanov Date: 2022-07-06 06:40:19 +0000 URL: https://git.openjdk.org/loom/commit/f783244caf041b6f79036dfcf29ff857d9c1c78f 8289706: (cs) Avoid redundant TreeMap.containsKey call in AbstractCharsetProvider Reviewed-by: attila, naoto ! src/jdk.charsets/share/classes/sun/nio/cs/ext/AbstractCharsetProvider.java Changeset: d8f4e97b Author: Matthias Baesken Date: 2022-07-06 07:12:32 +0000 URL: https://git.openjdk.org/loom/commit/d8f4e97bd3f4e50902e80b4b6b4eb3268c6d4a9d 8289146: containers/docker/TestMemoryWithCgroupV1.java fails on linux ppc64le machine with missing Memory and Swap Limit output Reviewed-by: sgehwolf, mdoerr, iklam ! 
test/hotspot/jtreg/containers/docker/TestMemoryWithCgroupV1.java Changeset: bbf89e3a Author: Alan Bateman Date: 2022-07-07 09:29:03 +0000 URL: https://git.openjdk.org/loom/commit/bbf89e3afa3c28468def0345ea1a3657a384a1e9 Merge with jdk-20+5 ! src/hotspot/share/opto/library_call.cpp ! src/hotspot/share/prims/jvm.cpp ! src/hotspot/share/runtime/flags/jvmFlagConstraintsRuntime.cpp ! src/hotspot/share/runtime/flags/jvmFlagConstraintsRuntime.hpp ! src/hotspot/share/runtime/globals.hpp ! test/hotspot/jtreg/ProblemList.txt ! test/jdk/ProblemList.txt From andrey.lomakin at jetbrains.com Sat Jul 9 12:49:28 2022 From: andrey.lomakin at jetbrains.com (Andrey Lomakin) Date: Sat, 9 Jul 2022 14:49:28 +0200 Subject: Usage of direct IO with virtual threads Message-ID: Hi guys. Could you clarify whether it is OK to use the com.sun.nio.file.ExtendedOpenOption#DIRECT open option with virtual threads, or in other words, will reading/writing to a file which uses direct IO be converted into a non-blocking call? And a second question: what is the general status of integration of the FileChannel API with virtual threads? If I, for example, perform fsync, will it also be converted into a non-blocking call, at least on platforms which theoretically support such possibilities (like io_uring https://patchwork.kernel.org/project/linux-fsdevel/patch/20190116175003.17880-7-axboe at kernel.dk/)? -------------- next part -------------- An HTML attachment was scrubbed...
URL: From Alan.Bateman at oracle.com Sat Jul 9 16:24:48 2022 From: Alan.Bateman at oracle.com (Alan Bateman) Date: Sat, 9 Jul 2022 17:24:48 +0100 Subject: Usage of direct IO with virtual threads In-Reply-To: References: Message-ID: <1cbf08ae-2eb5-6e76-bbf6-957c16715cb1@oracle.com> On 09/07/2022 13:49, Andrey Lomakin wrote: > Hi guys. > > Could you clarify is it OK to > use com.sun.nio.file.ExtendedOpenOption#DIRECT open option with > virtual threads or in other words, will reading/writing to the file > which uses direct IO be converted into the non-blocking call ? > > And second question, what is the general status of integration of > FileChannel API with virtual threads You can use direct I/O, "force", or any other FileChannel method from virtual threads. Right now they pin the thread but they increase parallelism for the duration of the I/O operation so the temporary pinning should be mostly transparent. In time they may make use of io_uring and other facilities but there is significant low-level refactoring required before that will plug in, so not JDK 19. -Alan From andrey.lomakin at jetbrains.com Sat Jul 9 16:28:41 2022 From: andrey.lomakin at jetbrains.com (Andrey Lomakin) Date: Sat, 9 Jul 2022 18:28:41 +0200 Subject: Usage of direct IO with virtual threads In-Reply-To: <1cbf08ae-2eb5-6e76-bbf6-957c16715cb1@oracle.com> References: <1cbf08ae-2eb5-6e76-bbf6-957c16715cb1@oracle.com> Message-ID: Thank you for your reply. Is there any issue which I can follow to track the state of such refactoring? On Sat, Jul 9, 2022, 18:24 Alan Bateman wrote: > On 09/07/2022 13:49, Andrey Lomakin wrote: > > Hi guys. > > > > Could you clarify is it OK to > > use com.sun.nio.file.ExtendedOpenOption#DIRECT open option with > > virtual threads or in other words, will reading/writing to the file > > which uses direct IO be converted into the non-blocking call ?
> > > > And second question, what is the general status of integration of > > FileChannel API with virtual threads > > You can use direct I/O, "force", or any other FileChannel method from > virtual threads. Right now they pin the thread but they increase > parallelism for the duration of the I/O operation so the temporary > pinning should be mostly transparent. In time they may make use of > io_uring and other facilities but there is significant low-level > refactoring required before that will plug in, so not JDK 19. > > -Alan From egor.ushakov at jetbrains.com Mon Jul 11 17:37:13 2022 From: egor.ushakov at jetbrains.com (Egor Ushakov) Date: Mon, 11 Jul 2022 19:37:13 +0200 Subject: jstack, profilers and other tools Message-ID: Hi all, I'm trying to prepare IntelliJ for loom and have some trouble understanding how tooling should be modified: 1. with debugger it is more or less ok - virtual threads are separated from carrier threads, stacks are separate and debugger is responsible for showing all of this. 2. thread dumps (jstack as an example) - no virtual threads are shown, carrier threads stacks are truncated even if they are doing some work in the mounted virtual threads. It is not clear for me how users should understand what (even mounted) virtual threads are doing. Should we always switch to the new json format? Should the user decide on which format to use? Previously thread dumps were an easy way to grab "what the app is doing" at the moment. Is there a way to achieve this now? Or should this way be abandoned? 3. Profilers - with jfr I was not able to see any sampling data for virtual threads, where should I find it? With async-profiler (using AsyncGetCallTrace) it is still possible to see the sampling data (and merged stacks when vthread is mounted) - good. Hopefully this won't break? Could someone please clarify this?
Thanks, Egor From Alan.Bateman at oracle.com Mon Jul 11 18:25:48 2022 From: Alan.Bateman at oracle.com (Alan Bateman) Date: Mon, 11 Jul 2022 19:25:48 +0100 Subject: jstack, profilers and other tools In-Reply-To: References: Message-ID: <4353c176-a529-8b26-d3c2-60166e76ffb7@oracle.com> On 11/07/2022 18:37, Egor Ushakov wrote: > Hi all, > > I'm trying to prepare IntelliJ for loom and have some trouble > understanding how tooling should be modified: > 1. with debugger it is more or less ok - virtual threads are separated > from carrier threads, stacks are separate and debugger is responsible > for showing all of this. > 2. thread dumps (jstack as an example) - no virtual threads are shown, > carrier threads stacks are truncated even if they are doing some work > in the mounted virtual threads. It is not clear for me how users > should understand what (even mounted) virtual threads are doing. Should > we always switch to the new json format? Should the user decide on which > format to use? Previously thread dumps were an easy way to grab "what > the app is doing" at the moment. Is there a way to achieve this now? > Or should this way be abandoned? The stack trace of the carrier and the virtual thread are intentionally separate. So if a virtual thread throws an exception then you won't see the carrier stack traces. It's the same thing with thread dumps: the stack trace of a carrier thread won't show the stack frames of a mounted virtual thread, and vice-versa. There is more on this in the JEP. The HotSpot thread dump (jcmd Thread.print, jstack) only shows platform threads (the threads that the VM knows about). The new thread dump (with the HotSpotDiagnosticMXBean API or jcmd Thread.dump_to_file) will print both platform and virtual threads. The plain text and JSON formats can be used to get a view of what the application is doing. The JSON format is intended to be parsed of course. > 3.
Profilers - with jfr I was not able to see any sampling data for > virtual threads, where should I find it? I'm not aware of any issues. I checked with Markus Grönlund (works on JFR) and he's not aware of any issues either. What tool are you using to look at the recording? Do you see the virtual threads when you use `jfr print --events jdk.ExecutionSample`? -Alan From Alan.Bateman at oracle.com Mon Jul 11 18:27:50 2022 From: Alan.Bateman at oracle.com (Alan Bateman) Date: Mon, 11 Jul 2022 19:27:50 +0100 Subject: Usage of direct IO with virtual threads In-Reply-To: References: <1cbf08ae-2eb5-6e76-bbf6-957c16715cb1@oracle.com> Message-ID: <8e6a5105-5a44-9bbb-4eaf-43d9961fb3fe@oracle.com> On 09/07/2022 17:28, Andrey Lomakin wrote: > Thank you for your reply. > Is there any issue which I can follow to track the state of such refactoring? > Nothing to point to in JBS right now but these are changes that are usually reviewed on nio-dev or core-libs-dev. -Alan From ron.pressler at oracle.com Mon Jul 11 18:58:16 2022 From: ron.pressler at oracle.com (Ron Pressler) Date: Mon, 11 Jul 2022 18:58:16 +0000 Subject: jstack, profilers and other tools In-Reply-To: References: Message-ID: <5B55B456-C2CC-4856-80A8-8A3F9E40CF05@oracle.com> Hi. Alan gave you the specifics, but I'd like to make a more general point. It's important to remember that the reason to use virtual threads is to have lots of them. It is, therefore, unusual for an application that has any virtual threads to have fewer than, say, 10,000. An existing application that migrates to using virtual threads doesn't replace its platform threads with virtual threads, but spawns a virtual thread for each of its *tasks*. As a consequence, virtual threads are very numerous and most of them are very short-lived, and the simple thread dump format used by jstack will not be informative.
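The per-task spawning described here can be sketched in code (a minimal illustration, assuming the virtual thread API that is preview in JDK 19 and final in JDK 21; the task body and count are made up):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class PerTaskSpawn {
    public static void main(String[] args) throws InterruptedException {
        AtomicInteger completed = new AtomicInteger();
        List<Thread> threads = new ArrayList<>();
        // One cheap virtual thread per task, rather than a pool of platform threads.
        for (int i = 0; i < 10_000; i++) {
            threads.add(Thread.startVirtualThread(completed::incrementAndGet));
        }
        for (Thread t : threads) {
            t.join();
        }
        if (completed.get() != 10_000) {
            throw new AssertionError("expected 10000, got " + completed.get());
        }
        System.out.println("completed " + completed.get() + " tasks");
    }
}
```

Spawning and joining 10,000 threads like this is practical only because each virtual thread is cheap, which is exactly why any tooling view has to cope with thread counts of this order.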
The set of mounted virtual threads is essentially a random sample of less than 1% of threads (8-30 threads out of at least 10K), and probably not useful to understanding what the application is doing. That is why a new kind of thread dump, designed for these circumstances, is introduced. -- Ron > On 11 Jul 2022, at 18:37, Egor Ushakov wrote: > > Hi all, > > I'm trying to prepare IntelliJ for loom and have some trouble understanding how tooling should be modified: > 1. with debugger it is more or less ok - virtual threads are separated from carrier threads, stacks are separate and debugger is responsible for showing all of this. > 2. thread dumps (jstack as an example) - no virtual threads are shown, carrier threads stacks are truncated even if they are doing some work in the mounted virtual threads. It is not clear for me how users should understand what (even mounted) virtual threads are doing. Should we always switch to the new json format? Should the user decide on which format to use? Previously thread dumps were an easy way to grab "what the app is doing" at the moment. Is there a way to achieve this now? Or should this way be abandoned? > 3. Profilers - with jfr I was not able to see any sampling data for virtual threads, where should I find it? With async-profiler (using AsyncGetCallTrace) it is still possible to see the sampling data (and merged stacks when vthread is mounted) - good. Hopefully this won't break? > > Could someone please clarify this? > > Thanks, > Egor > > From egor.ushakov at jetbrains.com Mon Jul 11 18:59:42 2022 From: egor.ushakov at jetbrains.com (Egor Ushakov) Date: Mon, 11 Jul 2022 20:59:42 +0200 Subject: jstack, profilers and other tools In-Reply-To: <4353c176-a529-8b26-d3c2-60166e76ffb7@oracle.com> References: <4353c176-a529-8b26-d3c2-60166e76ffb7@oracle.com> Message-ID: <87131b9a-f62d-eeab-0ef3-b7e355dc0c76@jetbrains.com> Thanks Alan!
I've read the JEP and the user approach to virtual threads is still unclear to me: on the one hand, for writing the code they are made to look and feel like "regular" threads, but the approach for tooling seems to be very different - virtual threads are separated almost everywhere and seem to require very different observability techniques. I'm fine with supporting all kinds of approaches, just want to make sure that it will be clear for users how to approach this. For me it is not yet clear. As for jfr - I was able to view the samples from virtual threads, for some reason jmc 8.2 does not show them, but they are in the snapshot. Egor On 11.07.2022 20:25, Alan Bateman wrote: > On 11/07/2022 18:37, Egor Ushakov wrote: >> Hi all, >> >> I'm trying to prepare IntelliJ for loom and have some trouble >> understanding how tooling should be modified: >> 1. with debugger it is more or less ok - virtual threads are >> separated from carrier threads, stacks are separate and debugger is >> responsible for showing all of this. >> 2. thread dumps (jstack as an example) - no virtual threads are >> shown, carrier threads stacks are truncated even if they are doing >> some work in the mounted virtual threads. It is not clear for me how >> users should understand what (even mounted) virtual threads are doing. >> Should we always switch to the new json format? Should the user decide on >> which format to use? Previously thread dumps were an easy way to grab >> "what the app is doing" at the moment. Is there a way to achieve this >> now? Or should this way be abandoned? > > The stack trace of the carrier and the virtual thread are > intentionally separate. So if a virtual thread throws an exception > then you won't see the carrier stack traces. It's the same thing with > thread dumps: the stack trace of a carrier thread won't show the stack > frames of a mounted virtual thread, and vice-versa. There is more on > this in the JEP.
> > The HotSpot thread dump (jcmd Thread.print, jstack) only shows > platform threads (the threads that the VM knows about). The new thread > dump (with the HotSpotDiagnosticMXBean API or jcmd > Thread.dump_to_file) will print both platform and virtual threads. The > plain text and JSON formats can be used to get a view of what the > application is doing. The JSON format is intended to be parsed of course. > >> 3. Profilers - with jfr I was not able to see any sampling data for >> virtual threads, where should I find it? > > I'm not aware of any issues. I checked with Markus Grönlund (works on > JFR) and he's not aware of any issues either. What tool are you > using to look at the recording? Do you see the virtual threads when > you use `jfr print --events jdk.ExecutionSample`? > > -Alan From robin.bygrave at gmail.com Mon Jul 11 21:13:22 2022 From: robin.bygrave at gmail.com (Rob Bygrave) Date: Tue, 12 Jul 2022 09:13:22 +1200 Subject: jstack, profilers and other tools In-Reply-To: <5B55B456-C2CC-4856-80A8-8A3F9E40CF05@oracle.com> References: <5B55B456-C2CC-4856-80A8-8A3F9E40CF05@oracle.com> Message-ID: *> An existing application that migrates to using virtual threads doesn't replace its platform threads with virtual threads* I'd be keen to clarify this around the use of virtual threads for http servers. The Helidon team has stated that they are working on a new loom based http server. It will be interesting to see how that works compared to, say, Jetty with a loom based thread pool, and I hope they release that soon and that provides us with a view of how http servers might work best with virtual threads. What I have been confident about to date, based on the testing I've done, is that we can use Jetty with a Loom based thread pool and that has worked very well. That is replacing current platform threads with virtual threads. I'm suggesting this will frequently be sub 1000 virtual threads.
Ron, are you suggesting this isn't a valid use of virtual threads or am I reading too much into what you've said here? > *unusual* for an application that has any virtual threads to have fewer than, say, 10,000 In the case of http server use of virtual threads, I feel the use of *unusual* is too strong. That is, when we are using virtual threads for application code handling of http request/response (like Jetty + Loom), I suspect this is frequently going to operate with less than 1000 concurrent requests per server instance. For example, by default a Jetty instance today would have a platform thread based thread pool with default max 200 threads. There is queuing that goes on for higher amounts of concurrent load, and yes it is easy to change that pool size, but my gut and experience to date says that a big proportion of http server instances out there are operating at less than 200 concurrent requests per server instance. As such, operating jetty + loom at sub 200 concurrent requests is something I've investigated and tested in the past and I didn't see a problem here (tldr: loom performs fine with sub 200 concurrent virtual threads). Ron, is there something that means we should not be using Jetty with Loom threads? (I'm suggesting this use case is frequently going to be sub 1000 virtual threads) Are you familiar with what the Helidon team is building wrt a loom based http server? (Does that only target 10K concurrent requests??) Thanks, Rob. On Tue, 12 Jul 2022 at 06:58, Ron Pressler wrote: > Hi. > > Alan gave you the specifics, but I'd like to make a more general point. > > It's important to remember that the reason to use virtual threads is to > have lots of them. It is, therefore, unusual for an application that has > any virtual threads to have fewer than, say, 10,000. An existing > application that migrates to using virtual threads doesn't replace its > platform threads with virtual threads, but spawns a virtual thread for each > of its *tasks*.
As a consequence, virtual threads are very numerous and > most of them are very short-lived, and the simple thread dump format used > by jstack will not be informative. The set of mounted virtual threads is > essentially a random sample of less than 1% of threads (8-30 threads out of > at least 10K), and probably not useful to understanding what the > application is doing. That is why a new kind of thread dump, designed for > these circumstances, is introduced. > > -- Ron > > > On 11 Jul 2022, at 18:37, Egor Ushakov wrote: > > > > Hi all, > > > > I'm trying to prepare IntelliJ for loom and have some trouble understanding how tooling should be modified: > > 1. with debugger it is more or less ok - virtual threads are separated from carrier threads, stacks are separate and debugger is responsible for showing all of this. > > 2. thread dumps (jstack as an example) - no virtual threads are shown, carrier threads stacks are truncated even if they are doing some work in the mounted virtual threads. It is not clear for me how users should understand what (even mounted) virtual threads are doing. Should we always switch to the new json format? Should the user decide on which format to use? Previously thread dumps were an easy way to grab "what the app is doing" at the moment. Is there a way to achieve this now? Or should this way be abandoned? > > 3. Profilers - with jfr I was not able to see any sampling data for virtual threads, where should I find it? With async-profiler (using AsyncGetCallTrace) it is still possible to see the sampling data (and merged stacks when vthread is mounted) - good. Hopefully this won't break? > > > > Could someone please clarify this? > > > > Thanks, > > Egor > > > > > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From Alan.Bateman at oracle.com Tue Jul 12 08:36:06 2022 From: Alan.Bateman at oracle.com (Alan Bateman) Date: Tue, 12 Jul 2022 09:36:06 +0100 Subject: jstack, profilers and other tools In-Reply-To: <87131b9a-f62d-eeab-0ef3-b7e355dc0c76@jetbrains.com> References: <4353c176-a529-8b26-d3c2-60166e76ffb7@oracle.com> <87131b9a-f62d-eeab-0ef3-b7e355dc0c76@jetbrains.com> Message-ID: <2e45b43e-6c57-eeb0-ec61-693a4e520921@oracle.com> On 11/07/2022 19:59, Egor Ushakov wrote: > Thanks Alan! > > I've read the JEP and the user approach to virtual threads is still > unclear to me: > on the one hand for writing the code they are made to look and feel > like "regular" threads, but the approach for tooling seems to be very > different > - virtual threads are separated almost everywhere and seem to require > very different observability techniques. > I'm fine with supporting all kinds of approaches, just want to make > sure that it will be clear for users how to approach this. > For me it is not yet clear. The threads are distinct and the mental model should be that they have separate stack traces. It would be misleading for APIs or Java tooling to show carrier frames in the virtual thread stack. There will be native tools, esp. those that attach via /proc, that don't know about virtual threads and there isn't much we can do about that. I think part of your mail is asking why the HotSpot thread dump (triggered by ctrl-\, jcmd Thread.print or the older jstack tool) doesn't include virtual threads. Virtual threads are just objects in the heap and we decided early on that changing this to walk the heap and print tens of thousands of threads would not be helpful. It has been modified in a small way to identify the threads that are used as carriers but that's about it. The new thread dump that is added includes both platform and virtual threads and is intended to be parsed.
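For reference, the new dump can also be produced in-process via the management API (a sketch, assuming JDK 19 or later; the output location is arbitrary):

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;
import java.nio.file.Files;
import java.nio.file.Path;

public class ThreadDumpToJson {
    public static void main(String[] args) throws Exception {
        // Equivalent to: jcmd <pid> Thread.dump_to_file -format=json <file>
        Path out = Files.createTempDirectory("dumps").resolve("threads.json");
        HotSpotDiagnosticMXBean bean =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        // dumpThreads requires an absolute path to a file that does not yet exist.
        bean.dumpThreads(out.toAbsolutePath().toString(),
                HotSpotDiagnosticMXBean.ThreadDumpFormat.JSON);
        if (Files.size(out) == 0) {
            throw new AssertionError("empty thread dump");
        }
        System.out.println("thread dump written to " + out);
    }
}
```

The resulting JSON lists every platform and virtual thread with its stack frames, which is the shape tools are expected to parse.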
Many virtual threads are likely to have the same stack trace and so lend themselves to deduplication, for example. In conjunction with JEP 428, tooling will be able to observe the task hierarchy. Thanks for confirming that the sampling with JFR is working for virtual threads. Hopefully JMC will be updated in time to support virtual threads. -Alan From Alan.Bateman at oracle.com Tue Jul 12 08:48:28 2022 From: Alan.Bateman at oracle.com (Alan Bateman) Date: Tue, 12 Jul 2022 09:48:28 +0100 Subject: jstack, profilers and other tools In-Reply-To: References: <5B55B456-C2CC-4856-80A8-8A3F9E40CF05@oracle.com> Message-ID: On 11/07/2022 22:13, Rob Bygrave wrote: > : > > > /*unusual*/ for an application that has any virtual threads to have > fewer than, say, 10,000 > > In the case of http server use of virtual threads, I feel the use of > /*unusual*/ is too strong. That is, when we are using virtual threads > for application code handling of http request/response (like Jetty + > Loom), I suspect this is frequently going to operate with less than > 1000 concurrent requests per server instance. > I think the interesting thing is that those 1000 concurrent requests can be handled by code that is written to do blocking operations without having to resort to writing asynchronous code. It may be that the handling of these requests will "fan out" where the work in the handler splits into several sub-tasks, with each sub-task running in its own virtual thread. In that scenario there will be a lot more threads. -Alan -------------- next part -------------- An HTML attachment was scrubbed...
URL: From egor.ushakov at jetbrains.com Tue Jul 12 13:36:34 2022 From: egor.ushakov at jetbrains.com (Egor Ushakov) Date: Tue, 12 Jul 2022 15:36:34 +0200 Subject: jstack, profilers and other tools In-Reply-To: <2e45b43e-6c57-eeb0-ec61-693a4e520921@oracle.com> References: <4353c176-a529-8b26-d3c2-60166e76ffb7@oracle.com> <87131b9a-f62d-eeab-0ef3-b7e355dc0c76@jetbrains.com> <2e45b43e-6c57-eeb0-ec61-693a4e520921@oracle.com> Message-ID: <45700dd6-4133-70cf-34d8-1f9c5595e0dc@jetbrains.com> So does this mean that the older thread dumps way (jstack etc.) is now obsolete? And we should switch our efforts to supporting the new way on jdk 19+? -Egor On 12.07.2022 10:36, Alan Bateman wrote: > On 11/07/2022 19:59, Egor Ushakov wrote: >> Thanks Alan! >> >> I've read jep and the user approach to virtual threads is still >> unclear to me: >> on the one hand for writing the code they are made to look and feel >> like "regular" threads, but the approach for tooling seems to be very >> different >> - virtual threads are separated almost everywhere and seems to >> require very different observability techniques. >> I'm fine with supporting all kinds of approaches, just want to make >> sure that it will be clear for users how to approach this. >> For me it is not yet clear. > > The threads are distinct and the mental model should be that they have > separate stack traces. It would be misleading for APIs or Java tooling > to show carrier frames in the virtual thread stack. There will be > native tools, esp. those that attach via /proc, that don't know about > virtual threads and there isn't much we can do about that. > > I think part of your mail is asking why the HotSpot thread dump > (triggered by ctrl-\, jcmd Thread.print or the older jstack tool) > doesn't include virtual threads. Virtual threads are just objects in > the heap and we decided early on that changing this to walk the heap > and print tens of thousands of threads would be not be helpful. 
It has > been modified in a small way to identify the threads that are used as > carriers but that's about it. The new thread dump that is added > includes both platform and virtual threads and is intended to be > parsed. Many virtual threads are likely to have the same stack trace, > and so the output lends itself to deduplication, for example. In conjunction with > JEP 428, tooling will be able to observe the task hierarchy. > > Thanks for confirming that the sampling with JFR is working for > virtual threads. Hopefully JMC will be updated in time to support > virtual threads. > > -Alan > > > > > From ron.pressler at oracle.com Tue Jul 12 13:20:26 2022 From: ron.pressler at oracle.com (Ron Pressler) Date: Tue, 12 Jul 2022 13:20:26 +0000 Subject: [External] : Re: jstack, profilers and other tools In-Reply-To: References: <5B55B456-C2CC-4856-80A8-8A3F9E40CF05@oracle.com> Message-ID: <0E494B4B-ADCF-4CBB-AFFF-F463CB05AD0E@oracle.com> On 11 Jul 2022, at 22:13, Rob Bygrave > wrote: > An existing application that migrates to using virtual threads doesn't replace its platform threads with virtual threads What I have been confident about to date, based on the testing I've done, is that we can use Jetty with a Loom based thread pool and that has worked very well. That is replacing current platform threads with virtual threads. I'm suggesting this will frequently be sub-1000 virtual threads. Ron, are you suggesting this isn't a valid use of virtual threads, or am I reading too much into what you've said here? The throughput advantage of virtual threads comes from one aspect -- their *number* -- as explained by Little's law. A web server employing virtual threads would not replace a pool of N platform threads with a pool of N virtual threads, as that does not increase the number of threads required to increase throughput. Rather, it replaces the pool of N platform threads with an unpooled ExecutorService that spawns at least one new virtual thread for every HTTP serving task.
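The replacement Ron describes -- a bounded pool swapped for an unpooled thread-per-task executor -- can be sketched as follows. This is a minimal sketch, not Jetty's actual wiring; the class name, the 10,000-task loop, and the 10 ms sleep are illustrative choices of mine, not from the thread:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ThreadPerTaskSketch {
    public static void main(String[] args) {
        // Before: Executors.newFixedThreadPool(200) would cap in-flight work
        // at 200 tasks, no matter how many requests arrive.

        // After: one new virtual thread per submitted task, so the number of
        // threads tracks the number of concurrent requests (JDK 21, or
        // JDK 19 with --enable-preview).
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 10_000; i++) {
                executor.submit(() -> {
                    Thread.sleep(10); // blocking parks the virtual thread, not its carrier
                    return null;
                });
            }
        } // ExecutorService.close() waits for all submitted tasks to finish
        System.out.println("done");
    }
}
```

With 10,000 tasks each blocking for 10 ms, a fixed pool of 200 needs roughly 50 batches (about 500 ms of wall time), while the unpooled version can have all 10,000 in flight at once -- which is the point about thread count and throughput being made here.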
Only that can increase the number of threads sufficiently to improve throughput. > unusual for an application that has any virtual threads to have fewer than, say, 10,000 In the case of http server use of virtual threads, I feel the use of unusual is too strong. That is, when we are using virtual threads for application code handling of http request/response (like Jetty + Loom), I suspect this is frequently going to operate with less than 1000 concurrent requests per server instance. 1000 concurrent requests would likely translate to more than 10,000 virtual threads due to fanout (JEPs 425 and 428 cover this). In fact, even without fanout, every HTTP request might wish to spawn more than one thread, for example to have one thread for reading and one for writing. The number 10,000, however, is just illustrative. Clearly, an application with virtual threads will have some large number of threads (significantly larger than applications with just platform threads), because the ability to have a large number of threads is what virtual threads are for. The important point is that tooling needs to adapt to a high number of threads, which is why we've added a tool that's designed to make sense of many threads, where jstack might not be very useful. -- Ron -------------- next part -------------- An HTML attachment was scrubbed... URL: From Alan.Bateman at oracle.com Tue Jul 12 15:19:32 2022 From: Alan.Bateman at oracle.com (Alan Bateman) Date: Tue, 12 Jul 2022 16:19:32 +0100 Subject: jstack, profilers and other tools In-Reply-To: <45700dd6-4133-70cf-34d8-1f9c5595e0dc@jetbrains.com> References: <4353c176-a529-8b26-d3c2-60166e76ffb7@oracle.com> <87131b9a-f62d-eeab-0ef3-b7e355dc0c76@jetbrains.com> <2e45b43e-6c57-eeb0-ec61-693a4e520921@oracle.com> <45700dd6-4133-70cf-34d8-1f9c5595e0dc@jetbrains.com> Message-ID: <3b2e58aa-afc8-dfbb-71ab-015ae62ef4d1@oracle.com> On 12/07/2022 14:36, Egor Ushakov wrote: > So does this mean that the older thread dumps way (jstack etc.)
is now > obsolete? > And we should switch our efforts to supporting the new way on JDK 19+? It's not obsolete. The HotSpot thread dump remains critical for troubleshooting. It's also much "richer" and contains more than just a list of threads. If the context is the IntelliJ "Get Thread Dump" button in the Debugger window then the JSON format might be useful as it is parsable. -Alan From egor.ushakov at jetbrains.com Wed Jul 13 10:56:47 2022 From: egor.ushakov at jetbrains.com (Egor Ushakov) Date: Wed, 13 Jul 2022 12:56:47 +0200 Subject: jstack, profilers and other tools In-Reply-To: <3b2e58aa-afc8-dfbb-71ab-015ae62ef4d1@oracle.com> References: <4353c176-a529-8b26-d3c2-60166e76ffb7@oracle.com> <87131b9a-f62d-eeab-0ef3-b7e355dc0c76@jetbrains.com> <2e45b43e-6c57-eeb0-ec61-693a4e520921@oracle.com> <45700dd6-4133-70cf-34d8-1f9c5595e0dc@jetbrains.com> <3b2e58aa-afc8-dfbb-71ab-015ae62ef4d1@oracle.com> Message-ID: This is actually interesting: how is a debugger expected to get the new thread dump? It is not exposed through JDWP, and in many cases this is the only way to communicate with a process during debugging. Please advise. Thanks, Egor On 12.07.2022 17:19, Alan Bateman wrote: > On 12/07/2022 14:36, Egor Ushakov wrote: >> So does this mean that the older thread dumps way (jstack etc.) is >> now obsolete? >> And we should switch our efforts to supporting the new way on jdk 19+? > > It's not obsolete. The HotSpot thread dump remains critical for > troubleshooting. It's also much "richer" and contains more than just a > list of threads. If the context is the IntelliJ "Get Thread Dump" > button in the Debugger window then the JSON format might be useful as > it is parsable.
> > > -Alan From Alan.Bateman at oracle.com Wed Jul 13 11:16:52 2022 From: Alan.Bateman at oracle.com (Alan Bateman) Date: Wed, 13 Jul 2022 12:16:52 +0100 Subject: jstack, profilers and other tools In-Reply-To: References: <4353c176-a529-8b26-d3c2-60166e76ffb7@oracle.com> <87131b9a-f62d-eeab-0ef3-b7e355dc0c76@jetbrains.com> <2e45b43e-6c57-eeb0-ec61-693a4e520921@oracle.com> <45700dd6-4133-70cf-34d8-1f9c5595e0dc@jetbrains.com> <3b2e58aa-afc8-dfbb-71ab-015ae62ef4d1@oracle.com> Message-ID: On 13/07/2022 11:56, Egor Ushakov wrote: > This is actually interesting: how is it expected for a debugger to get > the new thread dump? > It is not exposed through jdwp. And in many cases this is the only > communication way with a process during debugging. > The debugger APIs will probably need to expand in time, esp. for structured concurrency, but we've limited the API additions for now until there is more feedback and real-world usage. For the thread dump, the debugger can use the JDI invokeMethod to invoke com.sun.management.HotSpotDiagnosticMXBean.dumpThreads in the target VM. This of course assumes the debugger and target VM have access to the same file system, but that shouldn't be too bad for most environments. -Alan From oleksandr.otenko at gmail.com Wed Jul 13 13:00:04 2022 From: oleksandr.otenko at gmail.com (Alex Otenko) Date: Wed, 13 Jul 2022 14:00:04 +0100 Subject: [External] : Re: jstack, profilers and other tools In-Reply-To: <0E494B4B-ADCF-4CBB-AFFF-F463CB05AD0E@oracle.com> References: <5B55B456-C2CC-4856-80A8-8A3F9E40CF05@oracle.com> <0E494B4B-ADCF-4CBB-AFFF-F463CB05AD0E@oracle.com> Message-ID: This is an incorrect application of Little's Law. The law only posits that there is a connection between quantities. It doesn't specify which variables depend on which. In particular, throughput is not a free variable. Throughput is something outside your control. 100k users open their laptops at 9am and login within 1 second - that's it, you have throughput of 100k ops/sec.
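Returning to Alan's suggestion earlier in this digest: a debugger would reach dumpThreads via JDI invokeMethod, but the call it would make in the target VM can be sketched in-process like this. A hedged sketch: the JDI plumbing is omitted, the class name and temp-file handling are mine, and the API shape shown is the HotSpotDiagnosticMXBean.dumpThreads method as it appears in recent JDKs:

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;
import java.nio.file.Files;
import java.nio.file.Path;

public class DumpThreadsSketch {
    public static void main(String[] args) throws Exception {
        // The same MXBean method a debugger would invoke via JDI; here we
        // call it in-process. JSON is the new parsable thread-dump format.
        HotSpotDiagnosticMXBean bean =
            ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);

        Path out = Files.createTempFile("threads", ".json");
        Files.delete(out); // dumpThreads requires that the output file not exist yet

        bean.dumpThreads(out.toAbsolutePath().toString(),
                         HotSpotDiagnosticMXBean.ThreadDumpFormat.JSON);

        System.out.println(Files.size(out) > 0); // true once the dump was written
    }
}
```

This also illustrates Alan's caveat: the dump lands in a file, so the debugger and the target VM need a shared file system to retrieve it.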
Then based on response time the system is able to deliver, you can tell what concurrency makes sense here. Adding threads is not going to change anything - certainly not if threads are not the bottleneck resource. Threads become the bottleneck when you have hardware to run them, but not the threads. On Tue, 12 Jul 2022, 15:47 Ron Pressler, wrote: > > > On 11 Jul 2022, at 22:13, Rob Bygrave wrote: > > *> An existing application that migrates to using virtual threads doesn?t > replace its platform threads with virtual threads* > > What I have been confident about to date based on the testing I've done is > that we can use Jetty with a Loom based thread pool and that has worked > very well. That is replacing current platform threads with virtual threads. > I'm suggesting this will frequently be sub 1000 virtual threads. Ron, are > you suggesting this isn't a valid use of virtual threads or am I reading > too much into what you've said here? > > > The throughput advantage to virtual threads comes from one aspect ? their > *number* ? as explained by Little?s law. A web server employing virtual > thread would not replace a pool of N platform threads with a pool of N > virtual threads, as that does not increase the number of threads required > to increase throughput. Rather, it replaces the pool of N virtual threads > with an unpooled ExecutorService that spawns at least one new virtual > thread for every HTTP serving task. Only that can increase the number of > threads sufficiently to improve throughput. > > > > > *unusual* for an application that has any virtual threads to have fewer > than, say, 10,000 > > In the case of http server use of virtual thread, I feel the use of > *unusual* is too strong. That is, when we are using virtual threads for > application code handling of http request/response (like Jetty + Loom), I > suspect this is frequently going to operate with less than 1000 concurrent > requests per server instance. 
> > 1000 concurrent requests would likely translate to more than 10,000 > virtual threads due to fanout (JEPs 425 and 428 cover this). In fact, even > without fanout, every HTTP request might wish to spawn more than one > thread, for example to have one thread for reading and one for writing. The > number 10,000, however, is just illustrative. Clearly, an application with > virtual threads will have some large number of threads (significantly > larger than applications with just platform threads), because the ability > to have a large number of threads is what virtual threads are for. > > The important point is that tooling needs to adapt to a high number of > threads, which is why we've added a tool that's designed to make sense of > many threads, where jstack might not be very useful. > > -- Ron > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ron.pressler at oracle.com Wed Jul 13 13:29:30 2022 From: ron.pressler at oracle.com (Ron Pressler) Date: Wed, 13 Jul 2022 13:29:30 +0000 Subject: [External] : Re: jstack, profilers and other tools In-Reply-To: References: <5B55B456-C2CC-4856-80A8-8A3F9E40CF05@oracle.com> <0E494B4B-ADCF-4CBB-AFFF-F463CB05AD0E@oracle.com> Message-ID: <7EC05322-6BD9-4A17-9AB3-62115F3940D0@oracle.com> The application of Little's law is 100% correct. Little's law tells us that the number of threads must *necessarily* rise if throughput is to be high. Whether or not that alone is *sufficient* might depend on the concurrency level of other resources as well. The number of threads is not the only quantity that limits the L in the formula, but L cannot be higher than the number of threads. Obviously, if the system's level of concurrency is bounded at a very low level -- say, 10 -- then having more than 10 threads is unhelpful, but as we're talking about a program that uses virtual threads, we know that is not the case. Also, Little's law describes *stable* systems; i.e.
it says that *if* the system is stable, then a certain relationship must hold. While it is true that the rate of arrival might rise without bound, if the number of threads is insufficient to meet it, then the system is no longer stable (normally that means that queues are growing without bound). ? Ron On 13 Jul 2022, at 14:00, Alex Otenko > wrote: This is an incorrect application of Little's Law. The law only posits that there is a connection between quantities. It doesn't specify which variables depend on which. In particular, throughput is not a free variable. Throughput is something outside your control. 100k users open their laptops at 9am and login within 1 second - that's it, you have throughput of 100k ops/sec. Then based on response time the system is able to deliver, you can tell what concurrency makes sense here. Adding threads is not going to change anything - certainly not if threads are not the bottleneck resource. Threads become the bottleneck when you have hardware to run them, but not the threads. On Tue, 12 Jul 2022, 15:47 Ron Pressler, > wrote: On 11 Jul 2022, at 22:13, Rob Bygrave > wrote: > An existing application that migrates to using virtual threads doesn?t replace its platform threads with virtual threads What I have been confident about to date based on the testing I've done is that we can use Jetty with a Loom based thread pool and that has worked very well. That is replacing current platform threads with virtual threads. I'm suggesting this will frequently be sub 1000 virtual threads. Ron, are you suggesting this isn't a valid use of virtual threads or am I reading too much into what you've said here? The throughput advantage to virtual threads comes from one aspect ? their *number* ? as explained by Little?s law. A web server employing virtual thread would not replace a pool of N platform threads with a pool of N virtual threads, as that does not increase the number of threads required to increase throughput. 
Rather, it replaces the pool of N virtual threads with an unpooled ExecutorService that spawns at least one new virtual thread for every HTTP serving task. Only that can increase the number of threads sufficiently to improve throughput. > unusual for an application that has any virtual threads to have fewer than, say, 10,000 In the case of http server use of virtual thread, I feel the use of unusual is too strong. That is, when we are using virtual threads for application code handling of http request/response (like Jetty + Loom), I suspect this is frequently going to operate with less than 1000 concurrent requests per server instance. 1000 concurrent requests would likely translate to more than 10,000 virtual threads due to fanout (JEPs 425 and 428 cover this). In fact, even without fanout, every HTTP request might wish to spawn more than one thread, for example to have one thread for reading and one for writing. The number 10,000, however, is just illustrative. Clearly, an application with virtual threads will have some large number of threads (significantly larger than applications with just platform threads), because the ability to have a large number of threads is what virtual threads are for. The important point is that tooling needs to adapt to a high number of threads, which is why we?ve added a tool that?s designed to make sense of many threads, where jstack might not be very useful. ? Ron -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From eric at kolotyluk.net Wed Jul 13 18:26:29 2022 From: eric at kolotyluk.net (eric at kolotyluk.net) Date: Wed, 13 Jul 2022 11:26:29 -0700 Subject: [External] : Re: jstack, profilers and other tools In-Reply-To: <7EC05322-6BD9-4A17-9AB3-62115F3940D0@oracle.com> References: <5B55B456-C2CC-4856-80A8-8A3F9E40CF05@oracle.com> <0E494B4B-ADCF-4CBB-AFFF-F463CB05AD0E@oracle.com> <7EC05322-6BD9-4A17-9AB3-62115F3940D0@oracle.com> Message-ID: <050d01d896e6$12520800$36f61800$@kolotyluk.net> Just testing my intuition here... because reading what Ron says is often eye-opening... and changes my intuition 1. Loom improves concurrency via Virtual Threads a. And consequently, potentially improves throughput 2. A key aspect of concurrency is blocking, where blocked tasks enable resources to be applied to unblocked tasks (where Fork-Join is highly effective) a. Pre-Loom, resources such as Threads could be applied to unblocked tasks, but i. Platform Threads are heavy, expensive, etc. such that the number of Platform Threads puts a bound on concurrency b. Post-Loom, resources such as Virtual Threads can now be applied to unblocked tasks, such that i. Light, cheap, etc. Virtual Threads enable a much higher bound on concurrency ii. According to Little's Law, throughput can rise because the number of threads can rise. 3. Little's Law also says "The only requirements are that the system be stable and non-preemptive" a. While the underlying O/S may be preemptive, the JVM is not, so this requirement is met. b. But, Ron says, "While it is true that the rate of arrival might rise without bound, if the number of threads is insufficient to meet it, then the system is no longer stable (normally that means that queues are growing without bound)." c. Which I take to imply that increasing the number of Virtual Threads increases the stability... i. Even in Loom, there is an upper bound on Virtual Threads created, albeit a much higher upper bound. 4. Where I am still confused is a.
In Loom, I would expect that even when all our CPU Cores are at 100%, 100% throughput, the system is still stable? i. Or maybe I am misinterpreting what Ron said? b. However, latency will suffer, unless i. more CPU Cores are added to the overall load, via some load balancer ii. flow control, such as backpressure, is added such that queues do not grow without bound (a topic I would love to explore more) iii. Or, does an increase in latency mean a loss of stability? Cheers, Eric From: loom-dev On Behalf Of Ron Pressler Sent: July 13, 2022 6:30 AM To: Alex Otenko Cc: Rob Bygrave ; Egor Ushakov ; loom-dev at openjdk.org Subject: Re: [External] : Re: jstack, profilers and other tools The application of Little?s law is 100% correct. Little?s law tells us that the number of threads must *necessarily* rise if throughput is to be high. Whether or not that alone is *sufficient* might depend on the concurrency level of other resources as well. The number of threads is not the only quantity that limits the L in the formula, but L cannot be higher than the number of threads. Obviously, if the system?s level of concurrency is bounded at a very low level ? say, 10 ? then having more than 10 threads is unhelpful, but as we?re talking about a program that uses virtual threads, we know that is not the case. Also, Little?s law describes *stable* systems; i.e. it says that *if* the system is stable, then a certain relationship must hold. While it is true that the rate of arrival might rise without bound, if the number of threads is insufficient to meet it, then the system is no longer stable (normally that means that queues are growing without bound). ? Ron On 13 Jul 2022, at 14:00, Alex Otenko > wrote: This is an incorrect application of Little's Law. The law only posits that there is a connection between quantities. It doesn't specify which variables depend on which. In particular, throughput is not a free variable. Throughput is something outside your control. 
100k users open their laptops at 9am and login within 1 second - that's it, you have throughput of 100k ops/sec. Then based on response time the system is able to deliver, you can tell what concurrency makes sense here. Adding threads is not going to change anything - certainly not if threads are not the bottleneck resource. Threads become the bottleneck when you have hardware to run them, but not the threads. On Tue, 12 Jul 2022, 15:47 Ron Pressler, > wrote: On 11 Jul 2022, at 22:13, Rob Bygrave > wrote: > An existing application that migrates to using virtual threads doesn?t replace its platform threads with virtual threads What I have been confident about to date based on the testing I've done is that we can use Jetty with a Loom based thread pool and that has worked very well. That is replacing current platform threads with virtual threads. I'm suggesting this will frequently be sub 1000 virtual threads. Ron, are you suggesting this isn't a valid use of virtual threads or am I reading too much into what you've said here? The throughput advantage to virtual threads comes from one aspect ? their *number* ? as explained by Little?s law. A web server employing virtual thread would not replace a pool of N platform threads with a pool of N virtual threads, as that does not increase the number of threads required to increase throughput. Rather, it replaces the pool of N virtual threads with an unpooled ExecutorService that spawns at least one new virtual thread for every HTTP serving task. Only that can increase the number of threads sufficiently to improve throughput. > unusual for an application that has any virtual threads to have fewer than, say, 10,000 In the case of http server use of virtual thread, I feel the use of unusual is too strong. That is, when we are using virtual threads for application code handling of http request/response (like Jetty + Loom), I suspect this is frequently going to operate with less than 1000 concurrent requests per server instance. 
1000 concurrent requests would likely translate to more than 10,000 virtual threads due to fanout (JEPs 425 and 428 cover this). In fact, even without fanout, every HTTP request might wish to spawn more than one thread, for example to have one thread for reading and one for writing. The number 10,000, however, is just illustrative. Clearly, an application with virtual threads will have some large number of threads (significantly larger than applications with just platform threads), because the ability to have a large number of threads is what virtual threads are for. The important point is that tooling needs to adapt to a high number of threads, which is why we've added a tool that's designed to make sense of many threads, where jstack might not be very useful. -- Ron -------------- next part -------------- An HTML attachment was scrubbed... URL: From oleksandr.otenko at gmail.com Thu Jul 14 07:12:12 2022 From: oleksandr.otenko at gmail.com (Alex Otenko) Date: Thu, 14 Jul 2022 08:12:12 +0100 Subject: [External] : Re: jstack, profilers and other tools In-Reply-To: <7EC05322-6BD9-4A17-9AB3-62115F3940D0@oracle.com> References: <5B55B456-C2CC-4856-80A8-8A3F9E40CF05@oracle.com> <0E494B4B-ADCF-4CBB-AFFF-F463CB05AD0E@oracle.com> <7EC05322-6BD9-4A17-9AB3-62115F3940D0@oracle.com> Message-ID: Hi Ron, It looks like you are unconvinced. Let me try with illustrative numbers. The users opening their laptops at 9am don't know how many threads you have. So throughput remains 100k ops/sec in both setups below. Suppose, in the first setup, we have a system that is stable with 1000 threads. Little's law tells us that the response time cannot exceed 10ms in this case. Little's law does not prescribe response time, by the way; it is merely a consequence of the statement that the system is stable: it couldn't have been stable if its response time were higher. Now, let's create one thread per request. One claim is that this increases concurrency (and I object to this point alone).
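The illustrative numbers in this exchange can be checked directly against Little's law, L = lambda x W (mean concurrency = arrival rate x mean latency). A small sketch; the class and method names are mine, and only the 100k ops/sec and 1000-thread figures come from the thread:

```java
public class LittlesLawSketch {
    // Little's law: L = lambda * W
    static double concurrency(double opsPerSec, double latencySec) {
        return opsPerSec * latencySec;
    }

    public static void main(String[] args) {
        // First setup: 100k ops/sec handled stably by 1000 threads implies
        // a mean response time of at most 1000 / 100_000 s = 10 ms.
        System.out.println(1000 / 100_000.0);             // 0.01

        // The other direction (Ron's reading): holding latency at 10 ms,
        // doubling throughput requires the number of in-flight requests --
        // and hence the number of threads -- to double.
        System.out.println(concurrency(200_000, 0.010));  // 2000.0
    }
}
```

Both readings use the same identity; the disagreement above is about which of the three quantities is the free variable, not about the arithmetic.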
Suppose this means concurrency becomes 100k. Little's law says that the response time must be 1 second. Sorry, but that's hardly an improvement! In fact, for any concurrency greater than 1000 you must get response time higher than 10ms we've got with 1000 threads. This is not what we want. Fortunately, this is not what happens either. Really, thread count in the thread per request design has little to do with concurrency level. Concurrency level is a derived quantity. It only tells us how many requests are making progress at any given time in a system that experiences request arrival rate R and which is able to process them in time T. The only thing you can control through system design is response time T. There are good reasons to design a system that way, but Little's law is not one of them. On Wed, 13 Jul 2022, 14:29 Ron Pressler, wrote: > The application of Little?s law is 100% correct. Little?s law tells us > that the number of threads must *necessarily* rise if throughput is to be > high. Whether or not that alone is *sufficient* might depend on the > concurrency level of other resources as well. The number of threads is not > the only quantity that limits the L in the formula, but L cannot be higher > than the number of threads. Obviously, if the system?s level of concurrency > is bounded at a very low level ? say, 10 ? then having more than 10 threads > is unhelpful, but as we?re talking about a program that uses virtual > threads, we know that is not the case. > > Also, Little?s law describes *stable* systems; i.e. it says that *if* the > system is stable, then a certain relationship must hold. While it is true > that the rate of arrival might rise without bound, if the number of threads > is insufficient to meet it, then the system is no longer stable (normally > that means that queues are growing without bound). > > ? Ron > > On 13 Jul 2022, at 14:00, Alex Otenko wrote: > > This is an incorrect application of Little's Law. 
The law only posits that > there is a connection between quantities. It doesn't specify which > variables depend on which. In particular, throughput is not a free > variable. > > Throughput is something outside your control. 100k users open their > laptops at 9am and login within 1 second - that's it, you have throughput > of 100k ops/sec. > > Then based on response time the system is able to deliver, you can tell > what concurrency makes sense here. Adding threads is not going to change > anything - certainly not if threads are not the bottleneck resource. > Threads become the bottleneck when you have hardware to run them, but not > the threads. > > On Tue, 12 Jul 2022, 15:47 Ron Pressler, wrote: > >> >> >> On 11 Jul 2022, at 22:13, Rob Bygrave wrote: >> >> *> An existing application that migrates to using virtual threads doesn?t >> replace its platform threads with virtual threads* >> >> What I have been confident about to date based on the testing I've done >> is that we can use Jetty with a Loom based thread pool and that has worked >> very well. That is replacing current platform threads with virtual threads. >> I'm suggesting this will frequently be sub 1000 virtual threads. Ron, are >> you suggesting this isn't a valid use of virtual threads or am I reading >> too much into what you've said here? >> >> >> The throughput advantage to virtual threads comes from one aspect ? their >> *number* ? as explained by Little?s law. A web server employing virtual >> thread would not replace a pool of N platform threads with a pool of N >> virtual threads, as that does not increase the number of threads required >> to increase throughput. Rather, it replaces the pool of N virtual threads >> with an unpooled ExecutorService that spawns at least one new virtual >> thread for every HTTP serving task. Only that can increase the number of >> threads sufficiently to improve throughput. 
>> >> >> >> > *unusual* for an application that has any virtual threads to have >> fewer than, say, 10,000 >> >> In the case of http server use of virtual thread, I feel the use of >> *unusual* is too strong. That is, when we are using virtual threads for >> application code handling of http request/response (like Jetty + Loom), I >> suspect this is frequently going to operate with less than 1000 concurrent >> requests per server instance. >> >> >> 1000 concurrent requests would likely translate to more than 10,000 >> virtual threads due to fanout (JEPs 425 and 428 cover this). In fact, even >> without fanout, every HTTP request might wish to spawn more than one >> thread, for example to have one thread for reading and one for writing. The >> number 10,000, however, is just illustrative. Clearly, an application with >> virtual threads will have some large number of threads (significantly >> larger than applications with just platform threads), because the ability >> to have a large number of threads is what virtual threads are for. >> >> The important point is that tooling needs to adapt to a high number of >> threads, which is why we?ve added a tool that?s designed to make sense of >> many threads, where jstack might not be very useful. >> >> ? Ron >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From duke at openjdk.org Thu Jul 14 08:38:26 2022 From: duke at openjdk.org (duke) Date: Thu, 14 Jul 2022 08:38:26 GMT Subject: git: openjdk/loom: fibers: 125 new changesets Message-ID: <4a690a8e-9b8c-4eb6-a8a1-60aef862bd15@openjdk.org> Changeset: 4ad18cf0 Author: ScientificWare Committer: Andrey Turbanov Date: 2022-07-06 08:19:40 +0000 URL: https://git.openjdk.org/loom/commit/4ad18cf088e12f3582b8f6117a44ae4607f69839 8289730: Deprecated code sample in java.lang.ClassCastException Reviewed-by: darcy ! 
src/java.base/share/classes/java/lang/ClassCastException.java Changeset: ac6be165 Author: Severin Gehwolf Date: 2022-07-06 08:24:47 +0000 URL: https://git.openjdk.org/loom/commit/ac6be165196457a26d837760b5f5030fe010d633 8289695: [TESTBUG] TestMemoryAwareness.java fails on cgroups v2 and crun Reviewed-by: sspitsyn ! test/hotspot/jtreg/containers/docker/TestMemoryAwareness.java Changeset: 83418952 Author: Thomas Schatzl Date: 2022-07-06 09:39:25 +0000 URL: https://git.openjdk.org/loom/commit/834189527e16d6fc3aedb97108b0f74c391dbc3b 8289739: Add G1 specific GC breakpoints for testing Reviewed-by: kbarrett, iwalulya ! src/hotspot/share/gc/g1/g1ConcurrentMarkThread.cpp ! test/hotspot/jtreg/gc/TestConcurrentGCBreakpoints.java ! test/lib/sun/hotspot/WhiteBox.java Changeset: cbaf6e80 Author: Roland Westrelin Date: 2022-07-06 11:36:12 +0000 URL: https://git.openjdk.org/loom/commit/cbaf6e807e2b959a0264c87035916850798a2dc6 8288022: c2: Transform (CastLL (AddL into (AddL (CastLL when possible Reviewed-by: thartmann, kvn ! src/hotspot/share/opto/castnode.cpp ! src/hotspot/share/opto/castnode.hpp ! src/hotspot/share/opto/compile.hpp ! src/hotspot/share/opto/convertnode.cpp ! src/hotspot/share/opto/library_call.cpp ! src/hotspot/share/opto/type.cpp ! src/hotspot/share/opto/type.hpp + test/hotspot/jtreg/compiler/c2/irTests/TestPushAddThruCast.java Changeset: 83a5d599 Author: Coleen Phillimore Date: 2022-07-06 12:07:36 +0000 URL: https://git.openjdk.org/loom/commit/83a5d5996bca26b5f2e97b67f9bfd0a6ad110327 8278479: RunThese test failure with +UseHeavyMonitors and +VerifyHeavyMonitors Reviewed-by: kvn, dcubed, dlong ! src/hotspot/cpu/aarch64/c1_LIRAssembler_aarch64.cpp ! src/hotspot/cpu/arm/c1_LIRAssembler_arm.cpp ! src/hotspot/cpu/ppc/c1_LIRAssembler_ppc.cpp ! src/hotspot/cpu/riscv/c1_LIRAssembler_riscv.cpp ! src/hotspot/cpu/s390/c1_LIRAssembler_s390.cpp ! 
src/hotspot/cpu/x86/c1_LIRAssembler_x86.cpp Changeset: 75c0a5b8 Author: Thomas Stuefe Date: 2022-07-06 13:17:54 +0000 URL: https://git.openjdk.org/loom/commit/75c0a5b828de5a2c1baa7226e43d23db62aa8375 8288824: [arm32] Display isetstate in register output Reviewed-by: dsamersoff, snazarki ! src/hotspot/os_cpu/linux_arm/os_linux_arm.cpp Changeset: cc2b7927 Author: Andrew Haley Date: 2022-07-06 13:49:46 +0000 URL: https://git.openjdk.org/loom/commit/cc2b79270445ccfb2181894fed2edfd4518a2904 8288992: AArch64: CMN should be handled the same way as CMP Reviewed-by: adinn, ngasson ! src/hotspot/cpu/aarch64/macroAssembler_aarch64.cpp ! src/hotspot/cpu/aarch64/macroAssembler_aarch64.hpp Changeset: 82a8bd7e Author: Xue-Lei Andrew Fan Date: 2022-07-06 14:23:44 +0000 URL: https://git.openjdk.org/loom/commit/82a8bd7e92a1867b0c82f051361938be8610428d 8287596: Reorg jdk.test.lib.util.ForceGC Reviewed-by: rriggs ! test/jdk/java/io/ObjectStreamClass/TestOSCClassLoaderLeak.java ! test/jdk/java/lang/ClassLoader/loadLibraryUnload/LoadLibraryUnload.java ! test/jdk/java/lang/ClassLoader/nativeLibrary/NativeLibraryTest.java ! test/jdk/java/lang/invoke/defineHiddenClass/UnloadingTest.java ! test/jdk/java/lang/reflect/callerCache/ReflectionCallerCacheTest.java ! test/jdk/javax/security/auth/callback/PasswordCallback/CheckCleanerBound.java ! test/jdk/sun/security/jgss/GssContextCleanup.java ! test/jdk/sun/security/jgss/GssNameCleanup.java ! test/jdk/sun/security/pkcs11/Provider/MultipleLogins.java ! test/lib/jdk/test/lib/util/ForceGC.java Changeset: dfb24ae4 Author: Andrew Haley Date: 2022-07-06 15:22:00 +0000 URL: https://git.openjdk.org/loom/commit/dfb24ae4b7d32c0c625a9396429d167d9dcca183 8289060: Undefined Behaviour in class VMReg Reviewed-by: jvernee, kvn ! src/hotspot/share/code/vmreg.cpp ! src/hotspot/share/code/vmreg.hpp ! 
src/hotspot/share/opto/optoreg.hpp Changeset: 9f37ba44 Author: Lance Andersen Date: 2022-07-06 15:37:23 +0000 URL: https://git.openjdk.org/loom/commit/9f37ba44b8a6dfb635f39b6950fd5a7ae8894902 8288706: Unused parameter 'boolean newln' in method java.lang.VersionProps#print(boolean, boolean) Reviewed-by: iris, alanb, rriggs ! src/java.base/share/classes/java/lang/VersionProps.java.template ! src/java.base/share/native/libjli/java.c Changeset: 35387d5c Author: Raffaello Giulietti Committer: Joe Darcy Date: 2022-07-06 16:22:18 +0000 URL: https://git.openjdk.org/loom/commit/35387d5cb6aa9e59d62b8e1b137b53ec88521310 8289260: BigDecimal movePointLeft() and movePointRight() do not follow their API spec Reviewed-by: darcy ! src/java.base/share/classes/java/math/BigDecimal.java + test/jdk/java/math/BigDecimal/MovePointTests.java Changeset: c4dcce4b Author: Serguei Spitsyn Date: 2022-07-02 20:43:11 +0000 URL: https://git.openjdk.org/loom/commit/c4dcce4bca8808f8f733128f2e2b1dd48a28a322 8289619: JVMTI SelfSuspendDisablerTest.java failed with RuntimeException: Test FAILED: Unexpected thread state Reviewed-by: alanb, cjplummer ! test/hotspot/jtreg/serviceability/jvmti/vthread/SelfSuspendDisablerTest/SelfSuspendDisablerTest.java Changeset: dc4edd3f Author: Erik Gahlin Date: 2022-07-03 19:28:39 +0000 URL: https://git.openjdk.org/loom/commit/dc4edd3fe83038b03cad6b3652d12aff987f3987 8289183: jdk.jfr.consumer.RecordedThread.getId references Thread::getId, should be Thread::threadId Reviewed-by: alanb ! src/jdk.jfr/share/classes/jdk/jfr/consumer/RecordedThread.java Changeset: 5b5bc6c2 Author: Christoph Langer Date: 2022-07-04 07:52:38 +0000 URL: https://git.openjdk.org/loom/commit/5b5bc6c26e9843e16f241b89853a3a1fa5ae61f0 8287672: jtreg test com/sun/jndi/ldap/LdapPoolTimeoutTest.java fails intermittently in nightly run Reviewed-by: stuefe Backport-of: 7e211d7daac32dca8f26f408d1a3b2c7805b5a2e ! 
test/jdk/com/sun/jndi/ldap/LdapPoolTimeoutTest.java Changeset: 1a271645 Author: Jatin Bhateja Date: 2022-07-04 11:31:32 +0000 URL: https://git.openjdk.org/loom/commit/1a271645a84ac4d7d6570e739d42c05cc328891d 8287851: C2 crash: assert(t->meet(t0) == t) failed: Not monotonic Reviewed-by: thartmann, chagedorn ! src/hotspot/share/opto/intrinsicnode.cpp ! test/jdk/ProblemList.txt Changeset: 0dff3276 Author: Matthias Baesken Date: 2022-07-04 14:45:48 +0000 URL: https://git.openjdk.org/loom/commit/0dff3276e863fcbf496fe6decd3335cd43cab21f 8289569: [test] java/lang/ProcessBuilder/Basic.java fails on Alpine/musl Reviewed-by: clanger Backport-of: a8edd7a12f955fe843c7c9ad4273e9c653a80c5a ! test/jdk/java/lang/ProcessBuilder/Basic.java Changeset: f640fc5a Author: Pavel Rappo Date: 2022-07-04 16:00:53 +0000 URL: https://git.openjdk.org/loom/commit/f640fc5a1eb876a657d0de011dcd9b9a42b88eec 8067757: Incorrect HTML generation for copied javadoc with multiple @throws tags Reviewed-by: jjg ! src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclets/formats/html/TagletWriterImpl.java ! src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclets/toolkit/taglets/ThrowsTaglet.java ! src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclets/toolkit/util/CommentHelper.java ! test/langtools/jdk/javadoc/doclet/testThrowsInheritance/TestThrowsTagInheritance.java + test/langtools/jdk/javadoc/doclet/testThrowsInheritanceMultiple/TestOneToMany.java Changeset: 29ea6429 Author: Chris Plummer Date: 2022-07-05 17:46:59 +0000 URL: https://git.openjdk.org/loom/commit/29ea6429d2f906a61331aab1aef172d0d854fb6f 8287847: Fatal Error when suspending virtual thread after it has terminated Reviewed-by: alanb, sspitsyn ! src/jdk.jdwp.agent/share/native/libjdwp/threadControl.c ! test/jdk/TEST.groups + test/jdk/com/sun/jdi/SuspendAfterDeath.java ! test/jdk/com/sun/jdi/TestScaffold.java Changeset: 30e134e9 Author: Daniel D. 
Daugherty Date: 2022-07-05 20:42:42 +0000 URL: https://git.openjdk.org/loom/commit/30e134e909c53423acd1ec20c106f4200bc10285 8289091: move oop safety check from SharedRuntime::get_java_tid() to JavaThread::threadObj() Reviewed-by: rehn, dholmes ! src/hotspot/share/runtime/sharedRuntime.cpp ! src/hotspot/share/runtime/thread.cpp Changeset: 0b6fd482 Author: Tyler Steele Date: 2022-07-05 21:11:50 +0000 URL: https://git.openjdk.org/loom/commit/0b6fd4820c1f98d6154d7182345273a4c9468af5 8288128: S390X: Fix crashes after JDK-8284161 (Virtual Threads) Reviewed-by: mdoerr ! src/hotspot/cpu/s390/frame_s390.cpp ! src/hotspot/cpu/s390/frame_s390.hpp ! src/hotspot/cpu/s390/frame_s390.inline.hpp ! src/hotspot/cpu/s390/nativeInst_s390.hpp ! src/hotspot/cpu/s390/stubGenerator_s390.cpp ! src/hotspot/cpu/s390/templateInterpreterGenerator_s390.cpp ! src/hotspot/share/runtime/signature.cpp Changeset: b3a0e482 Author: Alan Bateman Date: 2022-07-06 06:40:07 +0000 URL: https://git.openjdk.org/loom/commit/b3a0e482adc32946d03b10589f746bb31f9c9e5b 8289439: Clarify relationship between ThreadStart/ThreadEnd and can_support_virtual_threads capability Reviewed-by: dholmes, dcubed, sspitsyn, cjplummer ! src/hotspot/share/prims/jvmti.xml ! src/hotspot/share/prims/jvmtiH.xsl Changeset: 0526402a Author: Thomas Stuefe Date: 2022-07-06 10:15:38 +0000 URL: https://git.openjdk.org/loom/commit/0526402a023d5725bf32ef6587001ad05e28c10f 8289477: Memory corruption with CPU_ALLOC, CPU_FREE on muslc Backport-of: da6d1fc0e0aeb1fdb504aced4b0dba0290ec240f ! src/hotspot/os/linux/os_linux.cpp Changeset: 2a6ec88c Author: Jesper Wilhelmsson Date: 2022-07-06 21:01:10 +0000 URL: https://git.openjdk.org/loom/commit/2a6ec88cd09adec43df3da1b22653271517b14a8 Merge ! src/hotspot/cpu/s390/stubGenerator_s390.cpp ! src/hotspot/share/runtime/javaThread.cpp ! src/hotspot/share/runtime/sharedRuntime.cpp ! test/jdk/ProblemList.txt ! src/hotspot/cpu/s390/stubGenerator_s390.cpp + src/hotspot/share/runtime/javaThread.cpp ! 
src/hotspot/share/runtime/sharedRuntime.cpp ! test/jdk/ProblemList.txt Changeset: a40c17b7 Author: Joe Darcy Date: 2022-07-06 21:28:09 +0000 URL: https://git.openjdk.org/loom/commit/a40c17b730257919f18066dbce4fc92ed3c4f10e 8289775: Update java.lang.invoke.MethodHandle[s] to use snippets Reviewed-by: jrose ! src/java.base/share/classes/java/lang/invoke/MethodHandle.java ! src/java.base/share/classes/java/lang/invoke/MethodHandles.java Changeset: 403a9bc7 Author: Tongbao Zhang Committer: Jie Fu Date: 2022-07-06 22:49:57 +0000 URL: https://git.openjdk.org/loom/commit/403a9bc79645018ee61b47bab67fe231577dd914 8289436: Make the redefine timer statistics more accurate Reviewed-by: sspitsyn, cjplummer, lmesnik ! src/hotspot/share/prims/jvmtiRedefineClasses.cpp ! src/hotspot/share/prims/jvmtiRedefineClasses.hpp Changeset: 569de453 Author: Thomas Stuefe Date: 2022-07-07 05:30:10 +0000 URL: https://git.openjdk.org/loom/commit/569de453c3267089d04befd756b81470693cf2de 8289620: gtest/MetaspaceUtilsGtests.java failed with unexpected stats values Reviewed-by: coleenp ! test/hotspot/gtest/metaspace/test_metaspaceUtils.cpp Changeset: a79ce4e7 Author: Xiaohong Gong Date: 2022-07-07 08:14:21 +0000 URL: https://git.openjdk.org/loom/commit/a79ce4e74858e78acc83c12d500303f667dc3f6b 8286941: Add mask IR for partial vector operations for ARM SVE Reviewed-by: kvn, jbhateja, njian, ngasson ! src/hotspot/cpu/aarch64/aarch64.ad ! src/hotspot/cpu/aarch64/aarch64_sve.ad ! src/hotspot/cpu/aarch64/aarch64_sve_ad.m4 ! src/hotspot/cpu/aarch64/assembler_aarch64.hpp ! src/hotspot/cpu/aarch64/c2_MacroAssembler_aarch64.cpp ! src/hotspot/cpu/aarch64/c2_MacroAssembler_aarch64.hpp ! src/hotspot/cpu/arm/arm.ad ! src/hotspot/cpu/ppc/ppc.ad ! src/hotspot/cpu/riscv/riscv.ad ! src/hotspot/cpu/s390/s390.ad ! src/hotspot/cpu/x86/x86.ad ! src/hotspot/share/opto/matcher.cpp ! src/hotspot/share/opto/matcher.hpp ! src/hotspot/share/opto/memnode.hpp ! src/hotspot/share/opto/node.hpp ! 
src/hotspot/share/opto/vectornode.cpp ! src/hotspot/share/opto/vectornode.hpp ! test/hotspot/gtest/aarch64/aarch64-asmtest.py ! test/hotspot/gtest/aarch64/asmtest.out.h Changeset: d1249aa5 Author: Kevin Walls Date: 2022-07-07 08:41:50 +0000 URL: https://git.openjdk.org/loom/commit/d1249aa5cbf3a3a3a24e85bcec30aecbc3e09bc0 8198668: MemoryPoolMBean/isUsageThresholdExceeded/isexceeded001/TestDescription.java still failing Reviewed-by: lmesnik, sspitsyn ! test/hotspot/jtreg/ProblemList.txt ! test/hotspot/jtreg/vmTestbase/nsk/monitoring/MemoryPoolMBean/isUsageThresholdExceeded/isexceeded001.java Changeset: cce77a70 Author: Thomas Stuefe Date: 2022-07-07 09:42:14 +0000 URL: https://git.openjdk.org/loom/commit/cce77a700141a854bafaa5ccb33db026affcf322 8289799: Build warning in methodData.cpp memset zero-length parameter Reviewed-by: jiefu, lucy ! src/hotspot/share/oops/methodData.cpp Changeset: e05b2f2c Author: Martin Doerr Date: 2022-07-07 10:21:25 +0000 URL: https://git.openjdk.org/loom/commit/e05b2f2c3b9b0276099766bc38a55ff835c989e1 8289856: [PPC64] SIGSEGV in C2Compiler::init_c2_runtime() after JDK-8289060 Reviewed-by: dlong, lucy ! src/hotspot/cpu/ppc/ppc.ad Changeset: 532a6ec7 Author: Prasanta Sadhukhan Date: 2022-07-07 11:51:49 +0000 URL: https://git.openjdk.org/loom/commit/532a6ec7e3a048624b380b38b4611533a7caae18 7124313: [macosx] Swing Popups should overlap taskbar Reviewed-by: serb, dmarkov ! test/jdk/ProblemList.txt ! test/jdk/javax/swing/JPopupMenu/6580930/bug6580930.java Changeset: 77ad998b Author: Jie Fu Date: 2022-07-07 12:52:04 +0000 URL: https://git.openjdk.org/loom/commit/77ad998b6e741f7cd7cdd52155c024bbc77f2027 8289778: ZGC: incorrect use of os::free() for mountpoint string handling after JDK-8289633 Reviewed-by: stuefe, dholmes, mdoerr ! 
src/hotspot/os/linux/gc/z/zMountPoint_linux.cpp Changeset: 013a5eee Author: Albert Mingkun Yang Date: 2022-07-07 13:53:24 +0000 URL: https://git.openjdk.org/loom/commit/013a5eeeb9d9a46778f68261ac69ed7235cdc7dd 8137280: Remove eager reclaim of humongous controls Reviewed-by: tschatzl, iwalulya ! src/hotspot/share/gc/g1/g1CollectedHeap.cpp ! src/hotspot/share/gc/g1/g1CollectedHeap.hpp ! src/hotspot/share/gc/g1/g1GCPhaseTimes.cpp ! src/hotspot/share/gc/g1/g1YoungGCPostEvacuateTasks.cpp ! src/hotspot/share/gc/g1/g1_globals.hpp ! test/hotspot/jtreg/gc/g1/TestGreyReclaimedHumongousObjects.java Changeset: 86f63f97 Author: Justin Gu Committer: Coleen Phillimore Date: 2022-07-07 14:57:24 +0000 URL: https://git.openjdk.org/loom/commit/86f63f9703b47b3b5b8fd093dbd117d8746091ff 8289164: Convert ResolutionErrorTable to use ResourceHashtable Reviewed-by: iklam, coleenp ! src/hotspot/share/classfile/resolutionErrors.cpp ! src/hotspot/share/classfile/resolutionErrors.hpp ! src/hotspot/share/classfile/systemDictionary.cpp ! src/hotspot/share/classfile/systemDictionary.hpp ! src/hotspot/share/interpreter/linkResolver.cpp ! src/hotspot/share/oops/instanceKlass.cpp + test/hotspot/jtreg/runtime/ClassResolutionFail/ErrorsDemoTest.java Changeset: 74ca6ca2 Author: Ivan Walulya Date: 2022-07-07 15:09:30 +0000 URL: https://git.openjdk.org/loom/commit/74ca6ca25ba3ece0c92bf2c6e4f940996785c9a3 8289800: G1: G1CollectionSet::finalize_young_part clears survivor list too early Reviewed-by: ayang, tschatzl ! src/hotspot/share/gc/g1/g1CollectionSet.cpp Changeset: 8e7b45b8 Author: Coleen Phillimore Date: 2022-07-07 15:27:55 +0000 URL: https://git.openjdk.org/loom/commit/8e7b45b82062cabad110ddcd51fa969b67483089 8282986: Remove "system" in boot class path names Reviewed-by: iklam, dholmes ! src/hotspot/share/cds/filemap.cpp ! src/hotspot/share/classfile/classLoader.cpp ! src/hotspot/share/classfile/modules.cpp ! src/hotspot/share/runtime/arguments.cpp ! src/hotspot/share/runtime/arguments.hpp ! 
src/hotspot/share/runtime/os.cpp Changeset: 95e3190d Author: Thomas Schatzl Date: 2022-07-07 15:46:05 +0000 URL: https://git.openjdk.org/loom/commit/95e3190d96424885707dd7d07e25e898ad642e5b 8210708: Use single mark bitmap in G1 Co-authored-by: Stefan Johansson Co-authored-by: Ivan Walulya Reviewed-by: iwalulya, ayang ! src/hotspot/share/gc/g1/g1BlockOffsetTable.cpp ! src/hotspot/share/gc/g1/g1BlockOffsetTable.hpp ! src/hotspot/share/gc/g1/g1BlockOffsetTable.inline.hpp ! src/hotspot/share/gc/g1/g1CodeBlobClosure.cpp ! src/hotspot/share/gc/g1/g1CollectedHeap.cpp ! src/hotspot/share/gc/g1/g1CollectedHeap.hpp ! src/hotspot/share/gc/g1/g1CollectedHeap.inline.hpp ! src/hotspot/share/gc/g1/g1CollectionSet.cpp ! src/hotspot/share/gc/g1/g1CollectorState.hpp ! src/hotspot/share/gc/g1/g1ConcurrentMark.cpp ! src/hotspot/share/gc/g1/g1ConcurrentMark.hpp ! src/hotspot/share/gc/g1/g1ConcurrentMark.inline.hpp ! src/hotspot/share/gc/g1/g1ConcurrentMarkBitMap.cpp ! src/hotspot/share/gc/g1/g1ConcurrentMarkBitMap.hpp ! src/hotspot/share/gc/g1/g1ConcurrentMarkBitMap.inline.hpp ! src/hotspot/share/gc/g1/g1ConcurrentMarkThread.cpp ! src/hotspot/share/gc/g1/g1ConcurrentMarkThread.hpp + src/hotspot/share/gc/g1/g1ConcurrentRebuildAndScrub.cpp + src/hotspot/share/gc/g1/g1ConcurrentRebuildAndScrub.hpp ! src/hotspot/share/gc/g1/g1EvacFailure.cpp ! src/hotspot/share/gc/g1/g1FullCollector.cpp ! src/hotspot/share/gc/g1/g1FullGCCompactTask.cpp ! src/hotspot/share/gc/g1/g1FullGCCompactTask.hpp ! src/hotspot/share/gc/g1/g1FullGCPrepareTask.cpp ! src/hotspot/share/gc/g1/g1FullGCPrepareTask.hpp ! src/hotspot/share/gc/g1/g1HeapVerifier.cpp ! src/hotspot/share/gc/g1/g1HeapVerifier.hpp ! src/hotspot/share/gc/g1/g1OopClosures.inline.hpp ! src/hotspot/share/gc/g1/g1ParScanThreadState.cpp ! src/hotspot/share/gc/g1/g1Policy.cpp ! src/hotspot/share/gc/g1/g1RegionMarkStatsCache.hpp ! src/hotspot/share/gc/g1/g1RemSet.cpp ! src/hotspot/share/gc/g1/g1RemSet.hpp ! 
src/hotspot/share/gc/g1/g1RemSetTrackingPolicy.cpp ! src/hotspot/share/gc/g1/g1SATBMarkQueueSet.cpp ! src/hotspot/share/gc/g1/g1YoungCollector.cpp ! src/hotspot/share/gc/g1/g1YoungGCPostEvacuateTasks.cpp ! src/hotspot/share/gc/g1/heapRegion.cpp ! src/hotspot/share/gc/g1/heapRegion.hpp ! src/hotspot/share/gc/g1/heapRegion.inline.hpp ! src/hotspot/share/gc/g1/heapRegionManager.cpp ! src/hotspot/share/gc/g1/heapRegionManager.hpp ! src/hotspot/share/gc/shared/markBitMap.hpp ! src/hotspot/share/gc/shared/markBitMap.inline.hpp ! src/hotspot/share/gc/shared/verifyOption.hpp ! test/hotspot/gtest/gc/g1/test_heapRegion.cpp ! test/hotspot/gtest/utilities/test_bitMap_search.cpp ! test/hotspot/jtreg/gc/g1/TestLargePageUseForAuxMemory.java Changeset: a694e9e3 Author: Alex Kasko Committer: Alexey Semenyuk Date: 2022-07-07 16:45:35 +0000 URL: https://git.openjdk.org/loom/commit/a694e9e34d1e4388df200d11b168ca5265cea4ac 8288838: jpackage: file association additional arguments Reviewed-by: asemenyuk, almatvee ! src/jdk.jpackage/windows/classes/jdk/jpackage/internal/WixAppImageFragmentBuilder.java ! test/jdk/tools/jpackage/helpers/jdk/jpackage/test/FileAssociations.java ! test/jdk/tools/jpackage/helpers/jdk/jpackage/test/LinuxHelper.java ! test/jdk/tools/jpackage/helpers/jdk/jpackage/test/PackageTest.java ! test/jdk/tools/jpackage/share/FileAssociationsTest.java Changeset: 5564effe Author: Ioi Lam Date: 2022-07-07 17:29:25 +0000 URL: https://git.openjdk.org/loom/commit/5564effe9c69a5aa1975d059f69cef546be28502 8289763: Remove NULL check in CDSProtectionDomain::init_security_info() Reviewed-by: ccheung, coleenp ! src/hotspot/share/cds/cdsProtectionDomain.cpp Changeset: f7b18305 Author: Thomas Schatzl Date: 2022-07-07 18:08:43 +0000 URL: https://git.openjdk.org/loom/commit/f7b183059a3023f8da73859f1577d08a807749b2 8289538: Make G1BlockOffsetTablePart unaware of block sizes Reviewed-by: ayang, iwalulya ! src/hotspot/share/gc/g1/g1BlockOffsetTable.cpp ! 
src/hotspot/share/gc/g1/g1BlockOffsetTable.hpp ! src/hotspot/share/gc/g1/g1BlockOffsetTable.inline.hpp ! src/hotspot/share/gc/g1/g1CollectedHeap.cpp ! src/hotspot/share/gc/g1/heapRegion.hpp ! src/hotspot/share/gc/g1/heapRegion.inline.hpp Changeset: 3e60e828 Author: Zdenek Zambersky Committer: Valerie Peng Date: 2022-07-07 18:18:04 +0000 URL: https://git.openjdk.org/loom/commit/3e60e828148a0490a4422d0724d15f3eccec17f0 8289301: P11Cipher should not throw out of bounds exception during padding Reviewed-by: valeriep ! src/jdk.crypto.cryptoki/share/classes/sun/security/pkcs11/P11Cipher.java + test/jdk/sun/security/pkcs11/Cipher/TestPaddingOOB.java Changeset: f93beacd Author: Coleen Phillimore Date: 2022-07-07 20:27:31 +0000 URL: https://git.openjdk.org/loom/commit/f93beacd2f64aab0f930ac822859380c00c51f0c 8252329: runtime/LoadClass/TestResize.java timed out Reviewed-by: hseigel, iklam ! src/hotspot/share/classfile/classLoaderData.cpp ! src/hotspot/share/classfile/dictionary.cpp ! src/hotspot/share/classfile/dictionary.hpp ! test/hotspot/jtreg/runtime/LoadClass/TestResize.java Changeset: 8cdead0c Author: Coleen Phillimore Date: 2022-07-07 20:28:34 +0000 URL: https://git.openjdk.org/loom/commit/8cdead0c94094a025c48eaefc7a3ef0c36a9629e 8278923: Document Klass::is_loader_alive Reviewed-by: dholmes, iklam ! src/hotspot/share/oops/klass.inline.hpp Changeset: f804f2ce Author: Mark Powers Committer: Valerie Peng Date: 2022-07-07 23:20:58 +0000 URL: https://git.openjdk.org/loom/commit/f804f2ce8ef7a859aae021b20cbdcd9e34f9fb94 8284851: Update javax.crypto files to use proper javadoc for mentioned classes Reviewed-by: weijun, valeriep ! src/java.base/share/classes/java/security/AccessControlContext.java ! src/java.base/share/classes/java/security/AccessControlException.java ! src/java.base/share/classes/java/security/AccessController.java ! src/java.base/share/classes/java/security/AlgorithmConstraints.java ! 
src/java.base/share/classes/java/security/AlgorithmParameterGenerator.java ! src/java.base/share/classes/java/security/AlgorithmParameterGeneratorSpi.java ! src/java.base/share/classes/java/security/AlgorithmParameters.java ! src/java.base/share/classes/java/security/AlgorithmParametersSpi.java ! src/java.base/share/classes/java/security/AllPermission.java ! src/java.base/share/classes/java/security/BasicPermission.java ! src/java.base/share/classes/java/security/Certificate.java ! src/java.base/share/classes/java/security/CodeSigner.java ! src/java.base/share/classes/java/security/CodeSource.java ! src/java.base/share/classes/java/security/DigestException.java ! src/java.base/share/classes/java/security/DigestInputStream.java ! src/java.base/share/classes/java/security/DigestOutputStream.java ! src/java.base/share/classes/java/security/DomainCombiner.java ! src/java.base/share/classes/java/security/DomainLoadStoreParameter.java ! src/java.base/share/classes/java/security/GeneralSecurityException.java ! src/java.base/share/classes/java/security/Guard.java ! src/java.base/share/classes/java/security/GuardedObject.java ! src/java.base/share/classes/java/security/Identity.java ! src/java.base/share/classes/java/security/IdentityScope.java ! src/java.base/share/classes/java/security/InvalidAlgorithmParameterException.java ! src/java.base/share/classes/java/security/InvalidKeyException.java ! src/java.base/share/classes/java/security/InvalidParameterException.java ! src/java.base/share/classes/java/security/Key.java ! src/java.base/share/classes/java/security/KeyException.java ! src/java.base/share/classes/java/security/KeyFactory.java ! src/java.base/share/classes/java/security/KeyManagementException.java ! src/java.base/share/classes/java/security/KeyPairGenerator.java ! src/java.base/share/classes/java/security/KeyPairGeneratorSpi.java ! src/java.base/share/classes/java/security/KeyStore.java ! src/java.base/share/classes/java/security/KeyStoreException.java ! 
src/java.base/share/classes/java/security/KeyStoreSpi.java ! src/java.base/share/classes/java/security/MessageDigest.java ! src/java.base/share/classes/java/security/MessageDigestSpi.java ! src/java.base/share/classes/java/security/NoSuchAlgorithmException.java ! src/java.base/share/classes/java/security/NoSuchProviderException.java ! src/java.base/share/classes/java/security/Permission.java ! src/java.base/share/classes/java/security/PermissionCollection.java ! src/java.base/share/classes/java/security/Permissions.java ! src/java.base/share/classes/java/security/Policy.java ! src/java.base/share/classes/java/security/PolicySpi.java ! src/java.base/share/classes/java/security/Principal.java ! src/java.base/share/classes/java/security/PrivilegedActionException.java ! src/java.base/share/classes/java/security/ProtectionDomain.java ! src/java.base/share/classes/java/security/Provider.java ! src/java.base/share/classes/java/security/ProviderException.java ! src/java.base/share/classes/java/security/SecureClassLoader.java ! src/java.base/share/classes/java/security/SecureRandom.java ! src/java.base/share/classes/java/security/Security.java ! src/java.base/share/classes/java/security/SecurityPermission.java ! src/java.base/share/classes/java/security/Signature.java ! src/java.base/share/classes/java/security/SignatureException.java ! src/java.base/share/classes/java/security/SignatureSpi.java ! src/java.base/share/classes/java/security/SignedObject.java ! src/java.base/share/classes/java/security/Signer.java ! src/java.base/share/classes/java/security/Timestamp.java ! src/java.base/share/classes/java/security/URIParameter.java ! src/java.base/share/classes/java/security/UnrecoverableEntryException.java ! src/java.base/share/classes/java/security/UnrecoverableKeyException.java ! src/java.base/share/classes/java/security/UnresolvedPermission.java ! src/java.base/share/classes/java/security/UnresolvedPermissionCollection.java ! 
src/java.base/share/classes/javax/crypto/AEADBadTagException.java ! src/java.base/share/classes/javax/crypto/BadPaddingException.java ! src/java.base/share/classes/javax/crypto/Cipher.java ! src/java.base/share/classes/javax/crypto/CipherInputStream.java ! src/java.base/share/classes/javax/crypto/CipherOutputStream.java ! src/java.base/share/classes/javax/crypto/CipherSpi.java ! src/java.base/share/classes/javax/crypto/CryptoAllPermission.java ! src/java.base/share/classes/javax/crypto/CryptoPermission.java ! src/java.base/share/classes/javax/crypto/CryptoPermissions.java ! src/java.base/share/classes/javax/crypto/CryptoPolicyParser.java ! src/java.base/share/classes/javax/crypto/EncryptedPrivateKeyInfo.java ! src/java.base/share/classes/javax/crypto/ExemptionMechanism.java ! src/java.base/share/classes/javax/crypto/ExemptionMechanismException.java ! src/java.base/share/classes/javax/crypto/ExemptionMechanismSpi.java ! src/java.base/share/classes/javax/crypto/IllegalBlockSizeException.java ! src/java.base/share/classes/javax/crypto/KeyAgreement.java ! src/java.base/share/classes/javax/crypto/KeyAgreementSpi.java ! src/java.base/share/classes/javax/crypto/KeyGenerator.java ! src/java.base/share/classes/javax/crypto/KeyGeneratorSpi.java ! src/java.base/share/classes/javax/crypto/Mac.java ! src/java.base/share/classes/javax/crypto/MacSpi.java ! src/java.base/share/classes/javax/crypto/NoSuchPaddingException.java ! src/java.base/share/classes/javax/crypto/NullCipher.java ! src/java.base/share/classes/javax/crypto/ProviderVerifier.java ! src/java.base/share/classes/javax/crypto/SealedObject.java ! src/java.base/share/classes/javax/crypto/SecretKeyFactory.java ! src/java.base/share/classes/javax/crypto/SecretKeyFactorySpi.java ! 
src/java.base/share/classes/javax/crypto/ShortBufferException.java Changeset: 3f1174aa Author: Yasumasa Suenaga Date: 2022-07-08 00:04:46 +0000 URL: https://git.openjdk.org/loom/commit/3f1174aa4709aabcfde8b40deec88b8ed466cc06 8289646: configure script failed on WSL Reviewed-by: ihse ! make/scripts/fixpath.sh Changeset: ef3f2ed9 Author: Daniel D. Daugherty Date: 2022-07-06 16:50:14 +0000 URL: https://git.openjdk.org/loom/commit/ef3f2ed9ba920ab8b1e3fb2029e7c0096dd11cc6 8289841: ProblemList vmTestbase/gc/gctests/MemoryEaterMT/MemoryEaterMT.java with ZGC on windows Reviewed-by: rriggs ! test/hotspot/jtreg/ProblemList-zgc.txt Changeset: 32b650c0 Author: Daniel D. Daugherty Date: 2022-07-06 16:51:03 +0000 URL: https://git.openjdk.org/loom/commit/32b650c024bc294f6d28d1f0ebbef9865f455daf 8289840: ProblemList vmTestbase/nsk/jdwp/ThreadReference/ForceEarlyReturn/forceEarlyReturn002/forceEarlyReturn002.java when run with vthread wrapper Reviewed-by: bpb ! test/hotspot/jtreg/ProblemList-svc-vthread.txt Changeset: 55fa19b5 Author: Daniel D. Daugherty Date: 2022-07-06 20:52:25 +0000 URL: https://git.openjdk.org/loom/commit/55fa19b508ab4d760d1c5ff71e37399c3b79d85c 8289857: ProblemList jdk/jfr/event/runtime/TestActiveSettingEvent.java Reviewed-by: darcy ! test/jdk/ProblemList.txt Changeset: 9a0fa824 Author: Ron Pressler Date: 2022-07-06 20:53:13 +0000 URL: https://git.openjdk.org/loom/commit/9a0fa8242461afe9ee4bcf80523af13500c9c1f2 8288949: serviceability/jvmti/vthread/ContStackDepthTest/ContStackDepthTest.java failing Reviewed-by: dlong, eosterlund, rehn ! src/hotspot/cpu/aarch64/macroAssembler_aarch64.cpp ! src/hotspot/cpu/aarch64/sharedRuntime_aarch64.cpp ! src/hotspot/cpu/x86/sharedRuntime_x86_64.cpp ! src/hotspot/share/code/compiledIC.cpp ! src/hotspot/share/code/compiledIC.hpp ! src/hotspot/share/oops/method.cpp ! src/hotspot/share/runtime/continuationEntry.cpp ! src/hotspot/share/runtime/continuationEntry.hpp ! src/hotspot/share/runtime/sharedRuntime.cpp ! 
test/hotspot/jtreg/ProblemList-Xcomp.txt Changeset: 8f24d251 Author: Pavel Rappo Date: 2022-07-06 22:01:12 +0000 URL: https://git.openjdk.org/loom/commit/8f24d25168c576191075c7344ef0d95a8f08b347 6509045: {@inheritDoc} only copies one instance of the specified exception Reviewed-by: jjg ! src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclets/toolkit/taglets/ThrowsTaglet.java ! test/langtools/jdk/javadoc/doclet/testThrowsInheritanceMultiple/TestOneToMany.java Changeset: 8dd94a2c Author: Jan Lahoda Date: 2022-07-07 07:54:18 +0000 URL: https://git.openjdk.org/loom/commit/8dd94a2c14f7456b3eaf3e02f38d9e114eb8acc3 8289196: Pattern domination not working properly for record patterns Reviewed-by: vromero ! src/jdk.compiler/share/classes/com/sun/tools/javac/comp/Attr.java ! src/jdk.compiler/share/classes/com/sun/tools/javac/comp/Check.java ! test/langtools/tools/javac/patterns/Domination.java ! test/langtools/tools/javac/patterns/Domination.out ! test/langtools/tools/javac/patterns/SwitchErrors.out Changeset: 889150b4 Author: Maurizio Cimadamore Date: 2022-07-07 09:08:09 +0000 URL: https://git.openjdk.org/loom/commit/889150b47a7a33d302c1883320d2cfbb915c52e7 8289558: Need spec clarification of j.l.foreign.*Layout Reviewed-by: psandoz, jvernee ! src/java.base/share/classes/java/lang/foreign/AbstractLayout.java ! src/java.base/share/classes/java/lang/foreign/GroupLayout.java ! src/java.base/share/classes/java/lang/foreign/MemoryLayout.java ! src/java.base/share/classes/java/lang/foreign/SequenceLayout.java ! src/java.base/share/classes/java/lang/foreign/ValueLayout.java Changeset: a8eb7286 Author: Stuart Marks Date: 2022-07-07 16:54:15 +0000 URL: https://git.openjdk.org/loom/commit/a8eb728680529e81bea0584912dead394c35b040 8289779: Map::replaceAll javadoc has redundant @throws clauses Reviewed-by: prappo, iris ! 
src/java.base/share/classes/java/util/Map.java Changeset: 3212dc9c Author: Joe Wang Date: 2022-07-07 19:07:04 +0000 URL: https://git.openjdk.org/loom/commit/3212dc9c6f3538e1d0bd1809efd5f33ad8b47701 8289486: Improve XSLT XPath operators count efficiency Reviewed-by: naoto, lancea ! src/java.xml/share/classes/com/sun/java_cup/internal/runtime/lr_parser.java ! src/java.xml/share/classes/com/sun/org/apache/xalan/internal/xsltc/compiler/XPathParser.java Changeset: 01b9f95c Author: Jesper Wilhelmsson Date: 2022-07-08 02:07:36 +0000 URL: https://git.openjdk.org/loom/commit/01b9f95c62953e7f9ca10eafd42d21c634413827 Merge ! src/hotspot/cpu/aarch64/macroAssembler_aarch64.cpp ! src/hotspot/cpu/aarch64/sharedRuntime_aarch64.cpp ! src/hotspot/cpu/x86/sharedRuntime_x86_64.cpp ! src/hotspot/share/runtime/continuationEntry.cpp ! src/hotspot/share/runtime/continuationEntry.hpp ! src/hotspot/share/runtime/sharedRuntime.cpp ! src/jdk.compiler/share/classes/com/sun/tools/javac/comp/Check.java ! test/hotspot/jtreg/ProblemList-Xcomp.txt ! test/jdk/ProblemList.txt Changeset: 1fec62f2 Author: Ioi Lam Date: 2022-07-08 05:39:24 +0000 URL: https://git.openjdk.org/loom/commit/1fec62f299294a0c3b3c639883cdcdc8f1410224 8289710: Move Suspend/Resume classes out of os.hpp Reviewed-by: dholmes, coleenp ! src/hotspot/os/aix/osThread_aix.hpp ! src/hotspot/os/bsd/osThread_bsd.hpp ! src/hotspot/os/linux/osThread_linux.hpp ! src/hotspot/os/posix/signals_posix.cpp + src/hotspot/os/posix/suspendResume_posix.cpp + src/hotspot/os/posix/suspendResume_posix.hpp ! 
! src/hotspot/os/windows/os_windows.cpp
! src/hotspot/os_cpu/linux_s390/javaThread_linux_s390.cpp
! src/hotspot/share/jfr/periodic/sampling/jfrThreadSampler.cpp
! src/hotspot/share/runtime/os.cpp
! src/hotspot/share/runtime/os.hpp
! src/hotspot/share/runtime/osThread.hpp
+ src/hotspot/share/runtime/suspendedThreadTask.cpp
+ src/hotspot/share/runtime/suspendedThreadTask.hpp

Changeset: ac399e97
Author: Robbin Ehn
Date: 2022-07-08 07:12:19 +0000
URL: https://git.openjdk.org/loom/commit/ac399e9777731e7a9cbc2ad3396acfa5358b1c76

8286957: Held monitor count

Reviewed-by: rpressler, eosterlund

! make/test/JtregNativeHotspot.gmk
! src/hotspot/cpu/aarch64/aarch64.ad
! src/hotspot/cpu/aarch64/c1_LIRAssembler_aarch64.cpp
! src/hotspot/cpu/aarch64/c1_MacroAssembler_aarch64.cpp
! src/hotspot/cpu/aarch64/globalDefinitions_aarch64.hpp
! src/hotspot/cpu/aarch64/interp_masm_aarch64.cpp
! src/hotspot/cpu/aarch64/macroAssembler_aarch64.cpp
! src/hotspot/cpu/aarch64/macroAssembler_aarch64.hpp
! src/hotspot/cpu/aarch64/sharedRuntime_aarch64.cpp
! src/hotspot/cpu/aarch64/stubGenerator_aarch64.cpp
! src/hotspot/cpu/aarch64/templateInterpreterGenerator_aarch64.cpp
! src/hotspot/cpu/aarch64/templateTable_aarch64.cpp
! src/hotspot/cpu/x86/c1_LIRAssembler_x86.cpp
! src/hotspot/cpu/x86/c1_MacroAssembler_x86.cpp
! src/hotspot/cpu/x86/c2_MacroAssembler_x86.cpp
! src/hotspot/cpu/x86/globalDefinitions_x86.hpp
! src/hotspot/cpu/x86/interp_masm_x86.cpp
! src/hotspot/cpu/x86/macroAssembler_x86.cpp
! src/hotspot/cpu/x86/macroAssembler_x86.hpp
! src/hotspot/cpu/x86/sharedRuntime_x86_32.cpp
! src/hotspot/cpu/x86/sharedRuntime_x86_64.cpp
! src/hotspot/cpu/x86/stubGenerator_x86_64.cpp
! src/hotspot/cpu/x86/templateInterpreterGenerator_x86.cpp
! src/hotspot/cpu/x86/templateTable_x86.cpp
! src/hotspot/cpu/zero/globalDefinitions_zero.hpp
! src/hotspot/cpu/zero/zeroInterpreter_zero.cpp
! src/hotspot/share/c1/c1_Runtime1.cpp
! src/hotspot/share/interpreter/zero/bytecodeInterpreter.cpp
! src/hotspot/share/jvmci/vmStructs_jvmci.cpp
! src/hotspot/share/opto/macro.cpp
! src/hotspot/share/opto/runtime.cpp
! src/hotspot/share/prims/jni.cpp
! src/hotspot/share/runtime/continuationEntry.hpp
! src/hotspot/share/runtime/continuationFreezeThaw.cpp
! src/hotspot/share/runtime/deoptimization.cpp
! src/hotspot/share/runtime/javaThread.cpp
! src/hotspot/share/runtime/javaThread.hpp
! src/hotspot/share/runtime/objectMonitor.cpp
! src/hotspot/share/runtime/sharedRuntime.cpp
! src/hotspot/share/runtime/sharedRuntime.hpp
! src/hotspot/share/runtime/synchronizer.cpp
! src/hotspot/share/runtime/thread.cpp
+ test/hotspot/jtreg/runtime/Monitor/CompleteExit.java
+ test/hotspot/jtreg/runtime/Monitor/libCompleteExit.c

Changeset: 1b8f466d
Author: Thomas Schatzl
Date: 2022-07-08 07:15:56 +0000
URL: https://git.openjdk.org/loom/commit/1b8f466dbad08c0fccb8f0069ff5141cf8d6bf2c

8289740: Add verification testing during all concurrent phases in G1

Reviewed-by: iwalulya, ayang

+ test/hotspot/jtreg/gc/g1/TestVerificationInConcurrentCycle.java

Changeset: f1967cfa
Author: Thomas Schatzl
Date: 2022-07-08 08:49:17 +0000
URL: https://git.openjdk.org/loom/commit/f1967cfaabb30dba82eca0ab028f43020fe50c2b

8289997: gc/g1/TestVerificationInConcurrentCycle.java fails due to use of debug-only option

Reviewed-by: lkorinth

! test/hotspot/jtreg/gc/g1/TestVerificationInConcurrentCycle.java

Changeset: a13af650
Author: Dmitry Chuyko
Date: 2022-07-08 08:55:13 +0000
URL: https://git.openjdk.org/loom/commit/a13af650437de508d64f0b12285a6ffc9901f85f

8282322: AArch64: Provide a means to eliminate all STREX family of instructions

Reviewed-by: ngasson, aph

! src/hotspot/cpu/aarch64/stubGenerator_aarch64.cpp
! src/hotspot/os_cpu/linux_aarch64/atomic_linux_aarch64.S

Changeset: d852e99a
Author: Vladimir Kempik
Date: 2022-07-08 09:14:51 +0000
URL: https://git.openjdk.org/loom/commit/d852e99ae9de4c611438c50ce37ea1806f58cbdf

8289697: buffer overflow in MTLVertexCache.m: MTLVertexCache_AddGlyphQuad

Reviewed-by: prr

! src/java.desktop/macosx/native/libawt_lwawt/java2d/metal/MTLVertexCache.m

Changeset: e7795851
Author: Coleen Phillimore
Date: 2022-07-08 15:55:14 +0000
URL: https://git.openjdk.org/loom/commit/e7795851d2e02389e63950fef939084b18ec4bfb

8271707: migrate tests to use jdk.test.whitebox.WhiteBox

Reviewed-by: lmesnik, dholmes

! test/hotspot/jtreg/applications/ctw/modules/generate.bash
! test/hotspot/jtreg/applications/ctw/modules/java_base.java
! test/hotspot/jtreg/applications/ctw/modules/java_base_2.java
! test/hotspot/jtreg/applications/ctw/modules/java_compiler.java
! test/hotspot/jtreg/applications/ctw/modules/java_datatransfer.java
! test/hotspot/jtreg/applications/ctw/modules/java_desktop.java
! test/hotspot/jtreg/applications/ctw/modules/java_desktop_2.java
! test/hotspot/jtreg/applications/ctw/modules/java_instrument.java
! test/hotspot/jtreg/applications/ctw/modules/java_logging.java
! test/hotspot/jtreg/applications/ctw/modules/java_management.java
! test/hotspot/jtreg/applications/ctw/modules/java_management_rmi.java
! test/hotspot/jtreg/applications/ctw/modules/java_naming.java
! test/hotspot/jtreg/applications/ctw/modules/java_net_http.java
! test/hotspot/jtreg/applications/ctw/modules/java_prefs.java
! test/hotspot/jtreg/applications/ctw/modules/java_rmi.java
! test/hotspot/jtreg/applications/ctw/modules/java_scripting.java
! test/hotspot/jtreg/applications/ctw/modules/java_security_jgss.java
! test/hotspot/jtreg/applications/ctw/modules/java_security_sasl.java
! test/hotspot/jtreg/applications/ctw/modules/java_smartcardio.java
! test/hotspot/jtreg/applications/ctw/modules/java_sql.java
! test/hotspot/jtreg/applications/ctw/modules/java_sql_rowset.java
! test/hotspot/jtreg/applications/ctw/modules/java_transaction_xa.java
! test/hotspot/jtreg/applications/ctw/modules/java_xml.java
! test/hotspot/jtreg/applications/ctw/modules/java_xml_crypto.java
! test/hotspot/jtreg/applications/ctw/modules/jdk_accessibility.java
! test/hotspot/jtreg/applications/ctw/modules/jdk_attach.java
! test/hotspot/jtreg/applications/ctw/modules/jdk_charsets.java
! test/hotspot/jtreg/applications/ctw/modules/jdk_compiler.java
! test/hotspot/jtreg/applications/ctw/modules/jdk_crypto_cryptoki.java
! test/hotspot/jtreg/applications/ctw/modules/jdk_crypto_ec.java
! test/hotspot/jtreg/applications/ctw/modules/jdk_crypto_mscapi.java
! test/hotspot/jtreg/applications/ctw/modules/jdk_dynalink.java
! test/hotspot/jtreg/applications/ctw/modules/jdk_editpad.java
! test/hotspot/jtreg/applications/ctw/modules/jdk_hotspot_agent.java
! test/hotspot/jtreg/applications/ctw/modules/jdk_httpserver.java
! test/hotspot/jtreg/applications/ctw/modules/jdk_internal_ed.java
! test/hotspot/jtreg/applications/ctw/modules/jdk_internal_jvmstat.java
! test/hotspot/jtreg/applications/ctw/modules/jdk_internal_le.java
! test/hotspot/jtreg/applications/ctw/modules/jdk_internal_opt.java
! test/hotspot/jtreg/applications/ctw/modules/jdk_internal_vm_ci.java
! test/hotspot/jtreg/applications/ctw/modules/jdk_jartool.java
! test/hotspot/jtreg/applications/ctw/modules/jdk_javadoc.java
! test/hotspot/jtreg/applications/ctw/modules/jdk_jcmd.java
! test/hotspot/jtreg/applications/ctw/modules/jdk_jconsole.java
! test/hotspot/jtreg/applications/ctw/modules/jdk_jdeps.java
! test/hotspot/jtreg/applications/ctw/modules/jdk_jdi.java
! test/hotspot/jtreg/applications/ctw/modules/jdk_jfr.java
! test/hotspot/jtreg/applications/ctw/modules/jdk_jlink.java
! test/hotspot/jtreg/applications/ctw/modules/jdk_jshell.java
! test/hotspot/jtreg/applications/ctw/modules/jdk_jsobject.java
! test/hotspot/jtreg/applications/ctw/modules/jdk_jstatd.java
! test/hotspot/jtreg/applications/ctw/modules/jdk_localedata.java
! test/hotspot/jtreg/applications/ctw/modules/jdk_localedata_2.java
! test/hotspot/jtreg/applications/ctw/modules/jdk_management.java
! test/hotspot/jtreg/applications/ctw/modules/jdk_management_agent.java
! test/hotspot/jtreg/applications/ctw/modules/jdk_management_jfr.java
! test/hotspot/jtreg/applications/ctw/modules/jdk_naming_dns.java
! test/hotspot/jtreg/applications/ctw/modules/jdk_naming_rmi.java
! test/hotspot/jtreg/applications/ctw/modules/jdk_net.java
! test/hotspot/jtreg/applications/ctw/modules/jdk_sctp.java
! test/hotspot/jtreg/applications/ctw/modules/jdk_security_auth.java
! test/hotspot/jtreg/applications/ctw/modules/jdk_security_jgss.java
! test/hotspot/jtreg/applications/ctw/modules/jdk_unsupported.java
! test/hotspot/jtreg/applications/ctw/modules/jdk_unsupported_desktop.java
! test/hotspot/jtreg/applications/ctw/modules/jdk_xml_dom.java
! test/hotspot/jtreg/applications/ctw/modules/jdk_zipfs.java
! test/hotspot/jtreg/compiler/allocation/TestFailedAllocationBadGraph.java
! test/hotspot/jtreg/compiler/arguments/TestUseBMI1InstructionsOnSupportedCPU.java
! test/hotspot/jtreg/compiler/arguments/TestUseBMI1InstructionsOnUnsupportedCPU.java
! test/hotspot/jtreg/compiler/arguments/TestUseCountLeadingZerosInstructionOnSupportedCPU.java
! test/hotspot/jtreg/compiler/arguments/TestUseCountLeadingZerosInstructionOnUnsupportedCPU.java
! test/hotspot/jtreg/compiler/arguments/TestUseCountTrailingZerosInstructionOnSupportedCPU.java
! test/hotspot/jtreg/compiler/arguments/TestUseCountTrailingZerosInstructionOnUnsupportedCPU.java
! test/hotspot/jtreg/compiler/arraycopy/TestArrayCopyNoInitDeopt.java
! test/hotspot/jtreg/compiler/arraycopy/TestDefaultMethodArrayCloneDeoptC2.java
! test/hotspot/jtreg/compiler/arraycopy/TestOutOfBoundsArrayLoad.java
! test/hotspot/jtreg/compiler/c2/Test6857159.java
! test/hotspot/jtreg/compiler/c2/Test8004741.java
! test/hotspot/jtreg/compiler/c2/TestDeadDataLoopIGVN.java
! test/hotspot/jtreg/compiler/c2/aarch64/TestVolatiles.java
! test/hotspot/jtreg/compiler/c2/cr6589834/Test_ia32.java
! test/hotspot/jtreg/compiler/c2/irTests/TestSuperwordFailsUnrolling.java
! test/hotspot/jtreg/compiler/calls/common/CallsBase.java
! test/hotspot/jtreg/compiler/calls/fromCompiled/CompiledInvokeDynamic2CompiledTest.java
! test/hotspot/jtreg/compiler/calls/fromCompiled/CompiledInvokeDynamic2InterpretedTest.java
! test/hotspot/jtreg/compiler/calls/fromCompiled/CompiledInvokeDynamic2NativeTest.java
! test/hotspot/jtreg/compiler/calls/fromCompiled/CompiledInvokeInterface2CompiledTest.java
! test/hotspot/jtreg/compiler/calls/fromCompiled/CompiledInvokeInterface2InterpretedTest.java
! test/hotspot/jtreg/compiler/calls/fromCompiled/CompiledInvokeInterface2NativeTest.java
! test/hotspot/jtreg/compiler/calls/fromCompiled/CompiledInvokeSpecial2CompiledTest.java
! test/hotspot/jtreg/compiler/calls/fromCompiled/CompiledInvokeSpecial2InterpretedTest.java
! test/hotspot/jtreg/compiler/calls/fromCompiled/CompiledInvokeSpecial2NativeTest.java
! test/hotspot/jtreg/compiler/calls/fromCompiled/CompiledInvokeStatic2CompiledTest.java
! test/hotspot/jtreg/compiler/calls/fromCompiled/CompiledInvokeStatic2InterpretedTest.java
! test/hotspot/jtreg/compiler/calls/fromCompiled/CompiledInvokeStatic2NativeTest.java
! test/hotspot/jtreg/compiler/calls/fromCompiled/CompiledInvokeVirtual2CompiledTest.java
! test/hotspot/jtreg/compiler/calls/fromCompiled/CompiledInvokeVirtual2InterpretedTest.java
! test/hotspot/jtreg/compiler/calls/fromCompiled/CompiledInvokeVirtual2NativeTest.java
! test/hotspot/jtreg/compiler/calls/fromInterpreted/InterpretedInvokeDynamic2CompiledTest.java
! test/hotspot/jtreg/compiler/calls/fromInterpreted/InterpretedInvokeDynamic2InterpretedTest.java
! test/hotspot/jtreg/compiler/calls/fromInterpreted/InterpretedInvokeDynamic2NativeTest.java
! test/hotspot/jtreg/compiler/calls/fromInterpreted/InterpretedInvokeInterface2CompiledTest.java
! test/hotspot/jtreg/compiler/calls/fromInterpreted/InterpretedInvokeInterface2InterpretedTest.java
! test/hotspot/jtreg/compiler/calls/fromInterpreted/InterpretedInvokeInterface2NativeTest.java
! test/hotspot/jtreg/compiler/calls/fromInterpreted/InterpretedInvokeSpecial2CompiledTest.java
! test/hotspot/jtreg/compiler/calls/fromInterpreted/InterpretedInvokeSpecial2InterpretedTest.java
! test/hotspot/jtreg/compiler/calls/fromInterpreted/InterpretedInvokeSpecial2NativeTest.java
! test/hotspot/jtreg/compiler/calls/fromInterpreted/InterpretedInvokeStatic2CompiledTest.java
! test/hotspot/jtreg/compiler/calls/fromInterpreted/InterpretedInvokeStatic2InterpretedTest.java
! test/hotspot/jtreg/compiler/calls/fromInterpreted/InterpretedInvokeStatic2NativeTest.java
! test/hotspot/jtreg/compiler/calls/fromInterpreted/InterpretedInvokeVirtual2CompiledTest.java
! test/hotspot/jtreg/compiler/calls/fromInterpreted/InterpretedInvokeVirtual2InterpretedTest.java
! test/hotspot/jtreg/compiler/calls/fromInterpreted/InterpretedInvokeVirtual2NativeTest.java
! test/hotspot/jtreg/compiler/calls/fromNative/NativeInvokeSpecial2CompiledTest.java
! test/hotspot/jtreg/compiler/calls/fromNative/NativeInvokeSpecial2InterpretedTest.java
! test/hotspot/jtreg/compiler/calls/fromNative/NativeInvokeSpecial2NativeTest.java
! test/hotspot/jtreg/compiler/calls/fromNative/NativeInvokeStatic2CompiledTest.java
! test/hotspot/jtreg/compiler/calls/fromNative/NativeInvokeStatic2InterpretedTest.java
! test/hotspot/jtreg/compiler/calls/fromNative/NativeInvokeStatic2NativeTest.java
! test/hotspot/jtreg/compiler/calls/fromNative/NativeInvokeVirtual2CompiledTest.java
! test/hotspot/jtreg/compiler/calls/fromNative/NativeInvokeVirtual2InterpretedTest.java
! test/hotspot/jtreg/compiler/calls/fromNative/NativeInvokeVirtual2NativeTest.java
! test/hotspot/jtreg/compiler/cha/AbstractRootMethod.java
! test/hotspot/jtreg/compiler/cha/DefaultRootMethod.java
! test/hotspot/jtreg/compiler/cha/StrengthReduceInterfaceCall.java
! test/hotspot/jtreg/compiler/cha/Utils.java
! test/hotspot/jtreg/compiler/ciReplay/TestClientVM.java
! test/hotspot/jtreg/compiler/ciReplay/TestDumpReplay.java
! test/hotspot/jtreg/compiler/ciReplay/TestDumpReplayCommandLine.java
! test/hotspot/jtreg/compiler/ciReplay/TestInlining.java
! test/hotspot/jtreg/compiler/ciReplay/TestLambdas.java
! test/hotspot/jtreg/compiler/ciReplay/TestNoClassFile.java
! test/hotspot/jtreg/compiler/ciReplay/TestSAClient.java
! test/hotspot/jtreg/compiler/ciReplay/TestSAServer.java
! test/hotspot/jtreg/compiler/ciReplay/TestServerVM.java
! test/hotspot/jtreg/compiler/ciReplay/TestUnresolvedClasses.java
! test/hotspot/jtreg/compiler/ciReplay/TestVMNoCompLevel.java
! test/hotspot/jtreg/compiler/ciReplay/VMBase.java
! test/hotspot/jtreg/compiler/classUnloading/methodUnloading/TestMethodUnloading.java
! test/hotspot/jtreg/compiler/codecache/CheckSegmentedCodeCache.java
! test/hotspot/jtreg/compiler/codecache/OverflowCodeCacheTest.java
! test/hotspot/jtreg/compiler/codecache/jmx/BeanTypeTest.java
! test/hotspot/jtreg/compiler/codecache/jmx/CodeCacheUtils.java
! test/hotspot/jtreg/compiler/codecache/jmx/CodeHeapBeanPresenceTest.java
! test/hotspot/jtreg/compiler/codecache/jmx/GetUsageTest.java
! test/hotspot/jtreg/compiler/codecache/jmx/InitialAndMaxUsageTest.java
! test/hotspot/jtreg/compiler/codecache/jmx/ManagerNamesTest.java
! test/hotspot/jtreg/compiler/codecache/jmx/MemoryPoolsPresenceTest.java
! test/hotspot/jtreg/compiler/codecache/jmx/PeakUsageTest.java
! test/hotspot/jtreg/compiler/codecache/jmx/PoolsIndependenceTest.java
! test/hotspot/jtreg/compiler/codecache/jmx/ThresholdNotificationsTest.java
! test/hotspot/jtreg/compiler/codecache/jmx/UsageThresholdExceededSeveralTimesTest.java
! test/hotspot/jtreg/compiler/codecache/jmx/UsageThresholdExceededTest.java
! test/hotspot/jtreg/compiler/codecache/jmx/UsageThresholdIncreasedTest.java
! test/hotspot/jtreg/compiler/codecache/jmx/UsageThresholdNotExceededTest.java
! test/hotspot/jtreg/compiler/codecache/stress/Helper.java
! test/hotspot/jtreg/compiler/codecache/stress/OverloadCompileQueueTest.java
! test/hotspot/jtreg/compiler/codecache/stress/RandomAllocationTest.java
! test/hotspot/jtreg/compiler/codecache/stress/ReturnBlobToWrongHeapTest.java
! test/hotspot/jtreg/compiler/codecache/stress/UnexpectedDeoptimizationAllTest.java
! test/hotspot/jtreg/compiler/codecache/stress/UnexpectedDeoptimizationTest.java
! test/hotspot/jtreg/compiler/codegen/TestOopCmp.java
! test/hotspot/jtreg/compiler/codegen/aes/TestAESMain.java
! test/hotspot/jtreg/compiler/codegen/aes/TestCipherBlockChainingEncrypt.java
! test/hotspot/jtreg/compiler/compilercontrol/InlineMatcherTest.java
! test/hotspot/jtreg/compiler/compilercontrol/TestCompilerDirectivesCompatibilityBase.java
! test/hotspot/jtreg/compiler/compilercontrol/TestCompilerDirectivesCompatibilityCommandOff.java
! test/hotspot/jtreg/compiler/compilercontrol/TestCompilerDirectivesCompatibilityCommandOn.java
! test/hotspot/jtreg/compiler/compilercontrol/TestCompilerDirectivesCompatibilityFlag.java
! test/hotspot/jtreg/compiler/compilercontrol/commandfile/CompileOnlyTest.java
! test/hotspot/jtreg/compiler/compilercontrol/commandfile/ExcludeTest.java
! test/hotspot/jtreg/compiler/compilercontrol/commandfile/LogTest.java
! test/hotspot/jtreg/compiler/compilercontrol/commandfile/PrintTest.java
! test/hotspot/jtreg/compiler/compilercontrol/commands/CompileOnlyTest.java
! test/hotspot/jtreg/compiler/compilercontrol/commands/ControlIntrinsicTest.java
! test/hotspot/jtreg/compiler/compilercontrol/commands/ExcludeTest.java
! test/hotspot/jtreg/compiler/compilercontrol/commands/LogTest.java
! test/hotspot/jtreg/compiler/compilercontrol/commands/PrintTest.java
! test/hotspot/jtreg/compiler/compilercontrol/directives/CompileOnlyTest.java
! test/hotspot/jtreg/compiler/compilercontrol/directives/ControlIntrinsicTest.java
! test/hotspot/jtreg/compiler/compilercontrol/directives/ExcludeTest.java
! test/hotspot/jtreg/compiler/compilercontrol/directives/LogTest.java
! test/hotspot/jtreg/compiler/compilercontrol/directives/PrintTest.java
! test/hotspot/jtreg/compiler/compilercontrol/jcmd/AddAndRemoveTest.java
! test/hotspot/jtreg/compiler/compilercontrol/jcmd/AddCompileOnlyTest.java
! test/hotspot/jtreg/compiler/compilercontrol/jcmd/AddExcludeTest.java
! test/hotspot/jtreg/compiler/compilercontrol/jcmd/AddLogTest.java
! test/hotspot/jtreg/compiler/compilercontrol/jcmd/AddPrintAssemblyTest.java
! test/hotspot/jtreg/compiler/compilercontrol/jcmd/ClearDirectivesFileStackTest.java
! test/hotspot/jtreg/compiler/compilercontrol/jcmd/ClearDirectivesStackTest.java
! test/hotspot/jtreg/compiler/compilercontrol/jcmd/ControlIntrinsicTest.java
! test/hotspot/jtreg/compiler/compilercontrol/jcmd/PrintDirectivesTest.java
! test/hotspot/jtreg/compiler/compilercontrol/jcmd/StressAddMultiThreadedTest.java
! test/hotspot/jtreg/compiler/compilercontrol/logcompilation/LogTest.java
! test/hotspot/jtreg/compiler/compilercontrol/matcher/MethodMatcherTest.java
! test/hotspot/jtreg/compiler/compilercontrol/mixed/RandomCommandsTest.java
! test/hotspot/jtreg/compiler/compilercontrol/mixed/RandomValidCommandsTest.java
! test/hotspot/jtreg/compiler/compilercontrol/share/actions/CompileAction.java
! test/hotspot/jtreg/compiler/cpuflags/TestAESIntrinsicsOnSupportedConfig.java
! test/hotspot/jtreg/compiler/cpuflags/TestAESIntrinsicsOnUnsupportedConfig.java
! test/hotspot/jtreg/compiler/escapeAnalysis/TestArrayCopy.java
! test/hotspot/jtreg/compiler/floatingpoint/NaNTest.java
! test/hotspot/jtreg/compiler/floatingpoint/TestPow2.java
! test/hotspot/jtreg/compiler/gcbarriers/EqvUncastStepOverBarrier.java
! test/hotspot/jtreg/compiler/gcbarriers/PreserveFPRegistersTest.java
! test/hotspot/jtreg/compiler/interpreter/DisableOSRTest.java
! test/hotspot/jtreg/compiler/intrinsics/IntrinsicAvailableTest.java
! test/hotspot/jtreg/compiler/intrinsics/IntrinsicDisabledTest.java
! test/hotspot/jtreg/compiler/intrinsics/TestCheckIndex.java
! test/hotspot/jtreg/compiler/intrinsics/base64/TestBase64.java
! test/hotspot/jtreg/compiler/intrinsics/bigInteger/MontgomeryMultiplyTest.java
! test/hotspot/jtreg/compiler/intrinsics/bmi/TestAndnI.java
! test/hotspot/jtreg/compiler/intrinsics/bmi/TestAndnL.java
! test/hotspot/jtreg/compiler/intrinsics/bmi/TestBlsiI.java
! test/hotspot/jtreg/compiler/intrinsics/bmi/TestBlsiL.java
! test/hotspot/jtreg/compiler/intrinsics/bmi/TestBlsmskI.java
! test/hotspot/jtreg/compiler/intrinsics/bmi/TestBlsmskL.java
! test/hotspot/jtreg/compiler/intrinsics/bmi/TestBlsrI.java
! test/hotspot/jtreg/compiler/intrinsics/bmi/TestBlsrL.java
! test/hotspot/jtreg/compiler/intrinsics/bmi/TestBzhiI2L.java
! test/hotspot/jtreg/compiler/intrinsics/bmi/TestLzcntI.java
! test/hotspot/jtreg/compiler/intrinsics/bmi/TestLzcntL.java
! test/hotspot/jtreg/compiler/intrinsics/bmi/TestTzcntI.java
! test/hotspot/jtreg/compiler/intrinsics/bmi/TestTzcntL.java
! test/hotspot/jtreg/compiler/intrinsics/bmi/verifycode/AndnTestI.java
! test/hotspot/jtreg/compiler/intrinsics/bmi/verifycode/AndnTestL.java
! test/hotspot/jtreg/compiler/intrinsics/bmi/verifycode/BlsiTestI.java
! test/hotspot/jtreg/compiler/intrinsics/bmi/verifycode/BlsiTestL.java
! test/hotspot/jtreg/compiler/intrinsics/bmi/verifycode/BlsmskTestI.java
! test/hotspot/jtreg/compiler/intrinsics/bmi/verifycode/BlsmskTestL.java
! test/hotspot/jtreg/compiler/intrinsics/bmi/verifycode/BlsrTestI.java
! test/hotspot/jtreg/compiler/intrinsics/bmi/verifycode/BlsrTestL.java
! test/hotspot/jtreg/compiler/intrinsics/bmi/verifycode/BzhiTestI2L.java
! test/hotspot/jtreg/compiler/intrinsics/bmi/verifycode/LZcntTestI.java
! test/hotspot/jtreg/compiler/intrinsics/bmi/verifycode/LZcntTestL.java
! test/hotspot/jtreg/compiler/intrinsics/bmi/verifycode/TZcntTestI.java
! test/hotspot/jtreg/compiler/intrinsics/bmi/verifycode/TZcntTestL.java
! test/hotspot/jtreg/compiler/intrinsics/klass/CastNullCheckDroppingsTest.java
! test/hotspot/jtreg/compiler/intrinsics/mathexact/sanity/AddExactIntTest.java
! test/hotspot/jtreg/compiler/intrinsics/mathexact/sanity/AddExactLongTest.java
! test/hotspot/jtreg/compiler/intrinsics/mathexact/sanity/DecrementExactIntTest.java
! test/hotspot/jtreg/compiler/intrinsics/mathexact/sanity/DecrementExactLongTest.java
! test/hotspot/jtreg/compiler/intrinsics/mathexact/sanity/IncrementExactIntTest.java
! test/hotspot/jtreg/compiler/intrinsics/mathexact/sanity/IncrementExactLongTest.java
! test/hotspot/jtreg/compiler/intrinsics/mathexact/sanity/MultiplyExactIntTest.java
! test/hotspot/jtreg/compiler/intrinsics/mathexact/sanity/MultiplyExactLongTest.java
! test/hotspot/jtreg/compiler/intrinsics/mathexact/sanity/NegateExactIntTest.java
! test/hotspot/jtreg/compiler/intrinsics/mathexact/sanity/NegateExactLongTest.java
! test/hotspot/jtreg/compiler/intrinsics/mathexact/sanity/SubtractExactIntTest.java
! test/hotspot/jtreg/compiler/intrinsics/mathexact/sanity/SubtractExactLongTest.java
! test/hotspot/jtreg/compiler/intrinsics/sha/cli/TestUseMD5IntrinsicsOptionOnSupportedCPU.java
! test/hotspot/jtreg/compiler/intrinsics/sha/cli/TestUseMD5IntrinsicsOptionOnUnsupportedCPU.java
! test/hotspot/jtreg/compiler/intrinsics/sha/cli/TestUseSHA1IntrinsicsOptionOnSupportedCPU.java
! test/hotspot/jtreg/compiler/intrinsics/sha/cli/TestUseSHA1IntrinsicsOptionOnUnsupportedCPU.java
! test/hotspot/jtreg/compiler/intrinsics/sha/cli/TestUseSHA256IntrinsicsOptionOnSupportedCPU.java
! test/hotspot/jtreg/compiler/intrinsics/sha/cli/TestUseSHA256IntrinsicsOptionOnUnsupportedCPU.java
! test/hotspot/jtreg/compiler/intrinsics/sha/cli/TestUseSHA3IntrinsicsOptionOnSupportedCPU.java
! test/hotspot/jtreg/compiler/intrinsics/sha/cli/TestUseSHA3IntrinsicsOptionOnUnsupportedCPU.java
! test/hotspot/jtreg/compiler/intrinsics/sha/cli/TestUseSHA512IntrinsicsOptionOnSupportedCPU.java
! test/hotspot/jtreg/compiler/intrinsics/sha/cli/TestUseSHA512IntrinsicsOptionOnUnsupportedCPU.java
! test/hotspot/jtreg/compiler/intrinsics/sha/cli/TestUseSHAOptionOnSupportedCPU.java
! test/hotspot/jtreg/compiler/intrinsics/sha/cli/TestUseSHAOptionOnUnsupportedCPU.java
! test/hotspot/jtreg/compiler/intrinsics/sha/sanity/DigestSanityTestBase.java
! test/hotspot/jtreg/compiler/intrinsics/sha/sanity/TestMD5Intrinsics.java
! test/hotspot/jtreg/compiler/intrinsics/sha/sanity/TestMD5MultiBlockIntrinsics.java
! test/hotspot/jtreg/compiler/intrinsics/sha/sanity/TestSHA1Intrinsics.java
! test/hotspot/jtreg/compiler/intrinsics/sha/sanity/TestSHA1MultiBlockIntrinsics.java
! test/hotspot/jtreg/compiler/intrinsics/sha/sanity/TestSHA256Intrinsics.java
! test/hotspot/jtreg/compiler/intrinsics/sha/sanity/TestSHA256MultiBlockIntrinsics.java
! test/hotspot/jtreg/compiler/intrinsics/sha/sanity/TestSHA3Intrinsics.java
! test/hotspot/jtreg/compiler/intrinsics/sha/sanity/TestSHA3MultiBlockIntrinsics.java
! test/hotspot/jtreg/compiler/intrinsics/sha/sanity/TestSHA512Intrinsics.java
! test/hotspot/jtreg/compiler/intrinsics/sha/sanity/TestSHA512MultiBlockIntrinsics.java
! test/hotspot/jtreg/compiler/intrinsics/string/TestStringIntrinsics2.java
! test/hotspot/jtreg/compiler/jsr292/ContinuousCallSiteTargetChange.java
! test/hotspot/jtreg/compiler/jsr292/InvokerGC.java
! test/hotspot/jtreg/compiler/jsr292/NonInlinedCall/InvokeTest.java
! test/hotspot/jtreg/compiler/jsr292/NonInlinedCall/RedefineTest.java
! test/hotspot/jtreg/compiler/jvmci/compilerToVM/AllocateCompileIdTest.java
! test/hotspot/jtreg/compiler/jvmci/compilerToVM/CollectCountersTest.java
! test/hotspot/jtreg/compiler/jvmci/compilerToVM/CompileCodeTestCase.java
! test/hotspot/jtreg/compiler/jvmci/compilerToVM/ConstantPoolTestCase.java
! test/hotspot/jtreg/compiler/jvmci/compilerToVM/ConstantPoolTestsHelper.java
! test/hotspot/jtreg/compiler/jvmci/compilerToVM/DisassembleCodeBlobTest.java
! test/hotspot/jtreg/compiler/jvmci/compilerToVM/DoNotInlineOrCompileTest.java
! test/hotspot/jtreg/compiler/jvmci/compilerToVM/DummyClass.java
! test/hotspot/jtreg/compiler/jvmci/compilerToVM/ExecuteInstalledCodeTest.java
! test/hotspot/jtreg/compiler/jvmci/compilerToVM/GetFlagValueTest.java
! test/hotspot/jtreg/compiler/jvmci/compilerToVM/GetResolvedJavaMethodTest.java
! test/hotspot/jtreg/compiler/jvmci/compilerToVM/GetResolvedJavaTypeTest.java
! test/hotspot/jtreg/compiler/jvmci/compilerToVM/HasCompiledCodeForOSRTest.java
! test/hotspot/jtreg/compiler/jvmci/compilerToVM/HasNeverInlineDirectiveTest.java
! test/hotspot/jtreg/compiler/jvmci/compilerToVM/InvalidateInstalledCodeTest.java
! test/hotspot/jtreg/compiler/jvmci/compilerToVM/IsCompilableTest.java
! test/hotspot/jtreg/compiler/jvmci/compilerToVM/IsMatureTest.java
! test/hotspot/jtreg/compiler/jvmci/compilerToVM/IsMatureVsReprofileTest.java
! test/hotspot/jtreg/compiler/jvmci/compilerToVM/IterateFramesNative.java
! test/hotspot/jtreg/compiler/jvmci/compilerToVM/LookupKlassInPoolTest.java
! test/hotspot/jtreg/compiler/jvmci/compilerToVM/LookupKlassRefIndexInPoolTest.java
! test/hotspot/jtreg/compiler/jvmci/compilerToVM/LookupMethodInPoolTest.java
! test/hotspot/jtreg/compiler/jvmci/compilerToVM/LookupNameAndTypeRefIndexInPoolTest.java
! test/hotspot/jtreg/compiler/jvmci/compilerToVM/LookupNameInPoolTest.java
! test/hotspot/jtreg/compiler/jvmci/compilerToVM/LookupSignatureInPoolTest.java
! test/hotspot/jtreg/compiler/jvmci/compilerToVM/MaterializeVirtualObjectTest.java
! test/hotspot/jtreg/compiler/jvmci/compilerToVM/ReprofileTest.java
! test/hotspot/jtreg/compiler/jvmci/compilerToVM/ResolveFieldInPoolTest.java
! test/hotspot/jtreg/compiler/jvmci/compilerToVM/ResolvePossiblyCachedConstantInPoolTest.java
! test/hotspot/jtreg/compiler/jvmci/compilerToVM/ResolveTypeInPoolTest.java
! test/hotspot/jtreg/compiler/jvmci/compilerToVM/ShouldInlineMethodTest.java
! test/hotspot/jtreg/compiler/jvmci/jdk.vm.ci.hotspot.test/src/jdk/vm/ci/hotspot/test/MemoryAccessProviderData.java
! test/hotspot/jtreg/compiler/jvmci/jdk.vm.ci.hotspot.test/src/jdk/vm/ci/hotspot/test/MemoryAccessProviderTest.java
! test/hotspot/jtreg/compiler/lib/ir_framework/IRNode.java
! test/hotspot/jtreg/compiler/lib/ir_framework/TestFramework.java
! test/hotspot/jtreg/compiler/lib/ir_framework/flag/FlagVM.java
! test/hotspot/jtreg/compiler/lib/ir_framework/test/AbstractTest.java
! test/hotspot/jtreg/compiler/lib/ir_framework/test/CustomRunTest.java
! test/hotspot/jtreg/compiler/lib/ir_framework/test/IREncodingPrinter.java
! test/hotspot/jtreg/compiler/lib/ir_framework/test/TestVM.java
! test/hotspot/jtreg/compiler/loopopts/UseCountedLoopSafepoints.java
! test/hotspot/jtreg/compiler/loopopts/UseCountedLoopSafepointsTest.java
! test/hotspot/jtreg/compiler/onSpinWait/TestOnSpinWaitAArch64DefaultFlags.java
! test/hotspot/jtreg/compiler/oracle/GetMethodOptionTest.java
! test/hotspot/jtreg/compiler/oracle/MethodMatcherTest.java
! test/hotspot/jtreg/compiler/profiling/TestTypeProfiling.java
! test/hotspot/jtreg/compiler/rangechecks/TestExplicitRangeChecks.java
! test/hotspot/jtreg/compiler/rangechecks/TestLongRangeCheck.java
! test/hotspot/jtreg/compiler/rangechecks/TestRangeCheckSmearing.java
! test/hotspot/jtreg/compiler/regalloc/TestC2IntPressure.java
! test/hotspot/jtreg/compiler/rtm/locking/TestRTMAbortRatio.java
! test/hotspot/jtreg/compiler/rtm/locking/TestRTMAbortThreshold.java
! test/hotspot/jtreg/compiler/rtm/locking/TestRTMAfterNonRTMDeopt.java
! test/hotspot/jtreg/compiler/rtm/locking/TestRTMDeoptOnHighAbortRatio.java
! test/hotspot/jtreg/compiler/rtm/locking/TestRTMDeoptOnLowAbortRatio.java
! test/hotspot/jtreg/compiler/rtm/locking/TestRTMLockingCalculationDelay.java
! test/hotspot/jtreg/compiler/rtm/locking/TestRTMLockingThreshold.java
! test/hotspot/jtreg/compiler/rtm/locking/TestRTMRetryCount.java
! test/hotspot/jtreg/compiler/rtm/locking/TestRTMSpinLoopCount.java
! test/hotspot/jtreg/compiler/rtm/locking/TestRTMTotalCountIncrRate.java
! test/hotspot/jtreg/compiler/rtm/locking/TestUseRTMAfterLockInflation.java
! test/hotspot/jtreg/compiler/rtm/locking/TestUseRTMDeopt.java
! test/hotspot/jtreg/compiler/rtm/locking/TestUseRTMForInflatedLocks.java
! test/hotspot/jtreg/compiler/rtm/locking/TestUseRTMForStackLocks.java
! test/hotspot/jtreg/compiler/rtm/locking/TestUseRTMXendForLockBusy.java
! test/hotspot/jtreg/compiler/rtm/method_options/TestNoRTMLockElidingOption.java
! test/hotspot/jtreg/compiler/rtm/method_options/TestUseRTMLockElidingOption.java
! test/hotspot/jtreg/compiler/rtm/print/TestPrintPreciseRTMLockingStatistics.java
! test/hotspot/jtreg/compiler/runtime/Test8010927.java
! test/hotspot/jtreg/compiler/stable/StableConfiguration.java
! test/hotspot/jtreg/compiler/stable/TestStableBoolean.java
! test/hotspot/jtreg/compiler/stable/TestStableByte.java
! test/hotspot/jtreg/compiler/stable/TestStableChar.java
! test/hotspot/jtreg/compiler/stable/TestStableDouble.java
! test/hotspot/jtreg/compiler/stable/TestStableFloat.java
! test/hotspot/jtreg/compiler/stable/TestStableInt.java
! test/hotspot/jtreg/compiler/stable/TestStableLong.java
! test/hotspot/jtreg/compiler/stable/TestStableObject.java
! test/hotspot/jtreg/compiler/stable/TestStableShort.java
! test/hotspot/jtreg/compiler/stable/TestStableUByte.java
! test/hotspot/jtreg/compiler/stable/TestStableUShort.java
! test/hotspot/jtreg/compiler/testlibrary/CompilerUtils.java
! test/hotspot/jtreg/compiler/testlibrary/rtm/AbortProvoker.java
! test/hotspot/jtreg/compiler/testlibrary/sha/predicate/IntrinsicPredicates.java
! test/hotspot/jtreg/compiler/tiered/ConstantGettersTransitionsTest.java
! test/hotspot/jtreg/compiler/tiered/Level2RecompilationTest.java
! test/hotspot/jtreg/compiler/tiered/LevelTransitionTest.java
! test/hotspot/jtreg/compiler/tiered/NonTieredLevelsTest.java
! test/hotspot/jtreg/compiler/tiered/TestEnqueueMethodForCompilation.java
! test/hotspot/jtreg/compiler/tiered/TieredLevelsTest.java
! test/hotspot/jtreg/compiler/types/TestMeetIncompatibleInterfaceArrays.java
! test/hotspot/jtreg/compiler/types/correctness/CorrectnessTest.java
! test/hotspot/jtreg/compiler/types/correctness/OffTest.java
! test/hotspot/jtreg/compiler/uncommontrap/DeoptReallocFailure.java
! test/hotspot/jtreg/compiler/uncommontrap/Test8009761.java
! test/hotspot/jtreg/compiler/uncommontrap/TestNullAssertAtCheckCast.java
! test/hotspot/jtreg/compiler/uncommontrap/TestUnstableIfTrap.java
! test/hotspot/jtreg/compiler/unsafe/UnsafeGetStableArrayElement.java
! test/hotspot/jtreg/compiler/vectorization/runner/ArrayCopyTest.java
! test/hotspot/jtreg/compiler/vectorization/runner/ArrayIndexFillTest.java
! test/hotspot/jtreg/compiler/vectorization/runner/ArrayInvariantFillTest.java
! test/hotspot/jtreg/compiler/vectorization/runner/ArrayShiftOpTest.java
! test/hotspot/jtreg/compiler/vectorization/runner/ArrayTypeConvertTest.java
! test/hotspot/jtreg/compiler/vectorization/runner/ArrayUnsafeOpTest.java
! test/hotspot/jtreg/compiler/vectorization/runner/BasicBooleanOpTest.java
! test/hotspot/jtreg/compiler/vectorization/runner/BasicByteOpTest.java
! test/hotspot/jtreg/compiler/vectorization/runner/BasicCharOpTest.java
! test/hotspot/jtreg/compiler/vectorization/runner/BasicDoubleOpTest.java
! test/hotspot/jtreg/compiler/vectorization/runner/BasicFloatOpTest.java
! test/hotspot/jtreg/compiler/vectorization/runner/BasicIntOpTest.java
! test/hotspot/jtreg/compiler/vectorization/runner/BasicLongOpTest.java
! test/hotspot/jtreg/compiler/vectorization/runner/BasicShortOpTest.java
! test/hotspot/jtreg/compiler/vectorization/runner/LoopArrayIndexComputeTest.java
! test/hotspot/jtreg/compiler/vectorization/runner/LoopCombinedOpTest.java
! test/hotspot/jtreg/compiler/vectorization/runner/LoopControlFlowTest.java
! test/hotspot/jtreg/compiler/vectorization/runner/LoopLiveOutNodesTest.java
! test/hotspot/jtreg/compiler/vectorization/runner/LoopRangeStrideTest.java
! test/hotspot/jtreg/compiler/vectorization/runner/LoopReductionOpTest.java
! test/hotspot/jtreg/compiler/vectorization/runner/MultipleLoopsTest.java
! test/hotspot/jtreg/compiler/vectorization/runner/StripMinedLoopTest.java
! test/hotspot/jtreg/compiler/vectorization/runner/VectorizationTestRunner.java
! test/hotspot/jtreg/compiler/whitebox/AllocationCodeBlobTest.java
! test/hotspot/jtreg/compiler/whitebox/BlockingCompilation.java
! test/hotspot/jtreg/compiler/whitebox/ClearMethodStateTest.java
! test/hotspot/jtreg/compiler/whitebox/CompilerWhiteBoxTest.java
! test/hotspot/jtreg/compiler/whitebox/DeoptimizeAllTest.java
! test/hotspot/jtreg/compiler/whitebox/DeoptimizeFramesTest.java
! test/hotspot/jtreg/compiler/whitebox/DeoptimizeMethodTest.java
! test/hotspot/jtreg/compiler/whitebox/DeoptimizeMultipleOSRTest.java
! test/hotspot/jtreg/compiler/whitebox/EnqueueMethodForCompilationTest.java
! test/hotspot/jtreg/compiler/whitebox/ForceNMethodSweepTest.java
! test/hotspot/jtreg/compiler/whitebox/GetCodeHeapEntriesTest.java
! test/hotspot/jtreg/compiler/whitebox/GetNMethodTest.java
! test/hotspot/jtreg/compiler/whitebox/IsMethodCompilableTest.java
! test/hotspot/jtreg/compiler/whitebox/LockCompilationTest.java
! test/hotspot/jtreg/compiler/whitebox/MakeMethodNotCompilableTest.java
! test/hotspot/jtreg/compiler/whitebox/OSRFailureLevel4Test.java
! test/hotspot/jtreg/compiler/whitebox/SetDontInlineMethodTest.java
! test/hotspot/jtreg/compiler/whitebox/SetForceInlineMethodTest.java
! test/hotspot/jtreg/compiler/whitebox/SimpleTestCase.java
! test/hotspot/jtreg/compiler/whitebox/TestEnqueueInitializerForCompilation.java
! test/hotspot/jtreg/compiler/whitebox/TestMethodCompilableCompilerDirectives.java
! test/hotspot/jtreg/containers/cgroup/CgroupSubsystemFactory.java
! test/hotspot/jtreg/containers/cgroup/PlainRead.java
! test/hotspot/jtreg/containers/docker/CheckContainerized.java
! test/hotspot/jtreg/containers/docker/PrintContainerInfo.java
! test/hotspot/jtreg/containers/docker/TestCPUSets.java
! test/hotspot/jtreg/containers/docker/TestMemoryAwareness.java
! test/hotspot/jtreg/containers/docker/TestMemoryWithCgroupV1.java
! test/hotspot/jtreg/containers/docker/TestMisc.java
! test/hotspot/jtreg/containers/docker/TestPids.java
! test/hotspot/jtreg/gc/TestAgeOutput.java
! test/hotspot/jtreg/gc/TestConcurrentGCBreakpoints.java
! test/hotspot/jtreg/gc/TestJNIWeak/TestJNIWeak.java
! test/hotspot/jtreg/gc/TestNumWorkerOutput.java
! test/hotspot/jtreg/gc/TestReferenceClearDuringMarking.java
! test/hotspot/jtreg/gc/TestReferenceClearDuringReferenceProcessing.java
! test/hotspot/jtreg/gc/TestReferenceRefersTo.java
! test/hotspot/jtreg/gc/TestReferenceRefersToDuringConcMark.java
! test/hotspot/jtreg/gc/TestSmallHeap.java
! test/hotspot/jtreg/gc/arguments/TestG1HeapSizeFlags.java
! test/hotspot/jtreg/gc/arguments/TestMaxHeapSizeTools.java
! test/hotspot/jtreg/gc/arguments/TestMaxRAMFlags.java
! test/hotspot/jtreg/gc/arguments/TestMinAndInitialSurvivorRatioFlags.java
! test/hotspot/jtreg/gc/arguments/TestMinInitialErgonomics.java
! test/hotspot/jtreg/gc/arguments/TestNewRatioFlag.java
! test/hotspot/jtreg/gc/arguments/TestNewSizeFlags.java
! test/hotspot/jtreg/gc/arguments/TestParallelGCThreads.java
! test/hotspot/jtreg/gc/arguments/TestParallelHeapSizeFlags.java
! test/hotspot/jtreg/gc/arguments/TestParallelRefProc.java
! test/hotspot/jtreg/gc/arguments/TestSerialHeapSizeFlags.java
! test/hotspot/jtreg/gc/arguments/TestSmallInitialHeapWithLargePageAndNUMA.java
! test/hotspot/jtreg/gc/arguments/TestSurvivorRatioFlag.java
! test/hotspot/jtreg/gc/arguments/TestTargetSurvivorRatioFlag.java
! test/hotspot/jtreg/gc/arguments/TestUseCompressedOopsErgo.java
! test/hotspot/jtreg/gc/arguments/TestUseCompressedOopsErgoTools.java
! test/hotspot/jtreg/gc/arguments/TestVerifyBeforeAndAfterGCFlags.java
! test/hotspot/jtreg/gc/class_unloading/TestClassUnloadingDisabled.java
! test/hotspot/jtreg/gc/class_unloading/TestG1ClassUnloadingHWM.java
! test/hotspot/jtreg/gc/ergonomics/TestDynamicNumberOfGCThreads.java
! test/hotspot/jtreg/gc/ergonomics/TestInitialGCThreadLogging.java
! test/hotspot/jtreg/gc/g1/TestEagerReclaimHumongousRegionsLog.java
! test/hotspot/jtreg/gc/g1/TestEdenSurvivorLessThanMax.java
! test/hotspot/jtreg/gc/g1/TestEvacuationFailure.java
! test/hotspot/jtreg/gc/g1/TestFromCardCacheIndex.java
! test/hotspot/jtreg/gc/g1/TestGCLogMessages.java
! test/hotspot/jtreg/gc/g1/TestHumongousCodeCacheRoots.java
! test/hotspot/jtreg/gc/g1/TestHumongousConcurrentStartUndo.java
! test/hotspot/jtreg/gc/g1/TestHumongousRemsetsMatch.java
! test/hotspot/jtreg/gc/g1/TestLargePageUseForAuxMemory.java
! test/hotspot/jtreg/gc/g1/TestLargePageUseForHeap.java
! test/hotspot/jtreg/gc/g1/TestMixedGCLiveThreshold.java
! test/hotspot/jtreg/gc/g1/TestNoEagerReclaimOfHumongousRegions.java
! test/hotspot/jtreg/gc/g1/TestNoUseHCC.java
! test/hotspot/jtreg/gc/g1/TestPLABOutput.java
! test/hotspot/jtreg/gc/g1/TestRegionLivenessPrint.java
! test/hotspot/jtreg/gc/g1/TestRemsetLogging.java
! test/hotspot/jtreg/gc/g1/TestRemsetLoggingPerRegion.java
! test/hotspot/jtreg/gc/g1/TestRemsetLoggingTools.java
! test/hotspot/jtreg/gc/g1/TestShrinkAuxiliaryData.java
! test/hotspot/jtreg/gc/g1/TestShrinkAuxiliaryData00.java
! test/hotspot/jtreg/gc/g1/TestShrinkAuxiliaryData05.java
! test/hotspot/jtreg/gc/g1/TestShrinkAuxiliaryData10.java
! test/hotspot/jtreg/gc/g1/TestShrinkAuxiliaryData15.java
! test/hotspot/jtreg/gc/g1/TestShrinkAuxiliaryData20.java
! test/hotspot/jtreg/gc/g1/TestShrinkAuxiliaryData25.java
! test/hotspot/jtreg/gc/g1/TestShrinkAuxiliaryData27.java
! test/hotspot/jtreg/gc/g1/TestSkipRebuildRemsetPhase.java
! test/hotspot/jtreg/gc/g1/TestVerifyGCType.java
! test/hotspot/jtreg/gc/g1/humongousObjects/G1SampleClass.java
! test/hotspot/jtreg/gc/g1/humongousObjects/TestHeapCounters.java
! test/hotspot/jtreg/gc/g1/humongousObjects/TestHumongousClassLoader.java
! test/hotspot/jtreg/gc/g1/humongousObjects/TestHumongousMovement.java
! test/hotspot/jtreg/gc/g1/humongousObjects/TestHumongousNonArrayAllocation.java
! test/hotspot/jtreg/gc/g1/humongousObjects/TestHumongousThreshold.java
! test/hotspot/jtreg/gc/g1/humongousObjects/TestNoAllocationsInHRegions.java
! test/hotspot/jtreg/gc/g1/humongousObjects/TestObjectCollected.java
! test/hotspot/jtreg/gc/g1/humongousObjects/objectGraphTest/GC.java
! test/hotspot/jtreg/gc/g1/humongousObjects/objectGraphTest/TestObjectGraphAfterGC.java
! test/hotspot/jtreg/gc/g1/mixedgc/TestLogging.java
! test/hotspot/jtreg/gc/g1/mixedgc/TestOldGenCollectionUsage.java
! test/hotspot/jtreg/gc/g1/numa/TestG1NUMATouchRegions.java
! test/hotspot/jtreg/gc/g1/plab/TestPLABPromotion.java
! test/hotspot/jtreg/gc/g1/plab/TestPLABResize.java
! test/hotspot/jtreg/gc/g1/plab/lib/AppPLABPromotion.java
! test/hotspot/jtreg/gc/g1/plab/lib/AppPLABResize.java
! test/hotspot/jtreg/gc/logging/TestGCId.java
! test/hotspot/jtreg/gc/logging/TestMetaSpaceLog.java
! test/hotspot/jtreg/gc/metaspace/TestCapacityUntilGCWrapAround.java
! test/hotspot/jtreg/gc/shenandoah/TestReferenceRefersToShenandoah.java
! test/hotspot/jtreg/gc/shenandoah/TestReferenceShortcutCycle.java
! test/hotspot/jtreg/gc/stress/TestMultiThreadStressRSet.java
! test/hotspot/jtreg/gc/stress/TestStressRSetCoarsening.java
! test/hotspot/jtreg/gc/testlibrary/Helpers.java
! test/hotspot/jtreg/gc/testlibrary/g1/MixedGCProvoker.java
! test/hotspot/jtreg/gc/whitebox/TestConcMarkCycleWB.java
! test/hotspot/jtreg/gc/whitebox/TestWBGC.java
! test/hotspot/jtreg/resourcehogs/compiler/intrinsics/string/TestStringIntrinsics2LargeArray.java
! test/hotspot/jtreg/runtime/ClassInitErrors/InitExceptionUnloadTest.java ! 
test/hotspot/jtreg/runtime/ClassUnload/ConstantPoolDependsTest.java ! test/hotspot/jtreg/runtime/ClassUnload/DictionaryDependsTest.java ! test/hotspot/jtreg/runtime/ClassUnload/KeepAliveClass.java ! test/hotspot/jtreg/runtime/ClassUnload/KeepAliveClassLoader.java ! test/hotspot/jtreg/runtime/ClassUnload/KeepAliveObject.java ! test/hotspot/jtreg/runtime/ClassUnload/KeepAliveSoftReference.java ! test/hotspot/jtreg/runtime/ClassUnload/SuperDependsTest.java ! test/hotspot/jtreg/runtime/ClassUnload/UnloadInterfaceTest.java ! test/hotspot/jtreg/runtime/ClassUnload/UnloadTest.java ! test/hotspot/jtreg/runtime/ClassUnload/UnloadTestWithVerifyDuringGC.java ! test/hotspot/jtreg/runtime/CommandLine/VMDeprecatedOptions.java ! test/hotspot/jtreg/runtime/CompressedOops/UseCompressedOops.java ! test/hotspot/jtreg/runtime/Dictionary/CleanProtectionDomain.java ! test/hotspot/jtreg/runtime/ElfDecoder/TestElfDirectRead.java ! test/hotspot/jtreg/runtime/HiddenClasses/TestHiddenClassUnloading.java ! test/hotspot/jtreg/runtime/MemberName/MemberNameLeak.java ! test/hotspot/jtreg/runtime/Metaspace/DefineClass.java ! test/hotspot/jtreg/runtime/Metaspace/elastic/MetaspaceTestArena.java ! test/hotspot/jtreg/runtime/Metaspace/elastic/MetaspaceTestContext.java ! test/hotspot/jtreg/runtime/Metaspace/elastic/Settings.java ! test/hotspot/jtreg/runtime/Metaspace/elastic/TestMetaspaceAllocation.java ! test/hotspot/jtreg/runtime/Metaspace/elastic/TestMetaspaceAllocationMT1.java ! test/hotspot/jtreg/runtime/Metaspace/elastic/TestMetaspaceAllocationMT2.java ! test/hotspot/jtreg/runtime/NMT/CommitOverlappingRegions.java ! test/hotspot/jtreg/runtime/NMT/HugeArenaTracking.java ! test/hotspot/jtreg/runtime/NMT/JcmdDetailDiff.java ! test/hotspot/jtreg/runtime/NMT/JcmdSummaryDiff.java ! test/hotspot/jtreg/runtime/NMT/MallocRoundingReportTest.java ! test/hotspot/jtreg/runtime/NMT/MallocSiteHashOverflow.java ! test/hotspot/jtreg/runtime/NMT/MallocSiteTypeChange.java ! 
test/hotspot/jtreg/runtime/NMT/MallocStressTest.java ! test/hotspot/jtreg/runtime/NMT/MallocTestType.java ! test/hotspot/jtreg/runtime/NMT/MallocTrackingVerify.java ! test/hotspot/jtreg/runtime/NMT/ReleaseCommittedMemory.java ! test/hotspot/jtreg/runtime/NMT/ReleaseNoCommit.java ! test/hotspot/jtreg/runtime/NMT/SummarySanityCheck.java ! test/hotspot/jtreg/runtime/NMT/ThreadedMallocTestType.java ! test/hotspot/jtreg/runtime/NMT/ThreadedVirtualAllocTestType.java ! test/hotspot/jtreg/runtime/NMT/VirtualAllocAttemptReserveMemoryAt.java ! test/hotspot/jtreg/runtime/NMT/VirtualAllocCommitMerge.java ! test/hotspot/jtreg/runtime/NMT/VirtualAllocCommitUncommitRecommit.java ! test/hotspot/jtreg/runtime/NMT/VirtualAllocTestType.java ! test/hotspot/jtreg/runtime/Nestmates/protectionDomain/TestDifferentProtectionDomains.java ! test/hotspot/jtreg/runtime/Safepoint/TestAbortVMOnSafepointTimeout.java ! test/hotspot/jtreg/runtime/Thread/ThreadObjAccessAtExit.java ! test/hotspot/jtreg/runtime/Unsafe/InternalErrorTest.java ! test/hotspot/jtreg/runtime/cds/CheckDefaultArchiveFile.java ! test/hotspot/jtreg/runtime/cds/CheckSharingWithDefaultArchive.java ! test/hotspot/jtreg/runtime/cds/DumpSymbolAndStringTable.java ! test/hotspot/jtreg/runtime/cds/SharedStrings.java ! test/hotspot/jtreg/runtime/cds/SharedStringsWb.java ! test/hotspot/jtreg/runtime/cds/SpaceUtilizationCheck.java ! test/hotspot/jtreg/runtime/cds/appcds/ClassLoaderTest.java ! test/hotspot/jtreg/runtime/cds/appcds/CommandLineFlagCombo.java ! test/hotspot/jtreg/runtime/cds/appcds/HelloExtTest.java ! test/hotspot/jtreg/runtime/cds/appcds/JvmtiAddPath.java ! test/hotspot/jtreg/runtime/cds/appcds/MultiProcessSharing.java ! test/hotspot/jtreg/runtime/cds/appcds/RewriteBytecodesTest.java ! test/hotspot/jtreg/runtime/cds/appcds/SharedArchiveConsistency.java ! test/hotspot/jtreg/runtime/cds/appcds/SharedRegionAlignmentTest.java ! test/hotspot/jtreg/runtime/cds/appcds/cacheObject/ArchivedIntegerCacheTest.java ! 
test/hotspot/jtreg/runtime/cds/appcds/cacheObject/ArchivedModuleComboTest.java ! test/hotspot/jtreg/runtime/cds/appcds/cacheObject/ArchivedModuleWithCustomImageTest.java ! test/hotspot/jtreg/runtime/cds/appcds/cacheObject/CheckArchivedModuleApp.java ! test/hotspot/jtreg/runtime/cds/appcds/cacheObject/CheckCachedMirrorApp.java ! test/hotspot/jtreg/runtime/cds/appcds/cacheObject/CheckCachedMirrorTest.java ! test/hotspot/jtreg/runtime/cds/appcds/cacheObject/CheckCachedResolvedReferences.java ! test/hotspot/jtreg/runtime/cds/appcds/cacheObject/CheckCachedResolvedReferencesApp.java ! test/hotspot/jtreg/runtime/cds/appcds/cacheObject/CheckIntegerCacheApp.java ! test/hotspot/jtreg/runtime/cds/appcds/cacheObject/DifferentHeapSizes.java ! test/hotspot/jtreg/runtime/cds/appcds/cacheObject/GCStressApp.java ! test/hotspot/jtreg/runtime/cds/appcds/cacheObject/GCStressTest.java ! test/hotspot/jtreg/runtime/cds/appcds/cacheObject/MirrorWithReferenceFieldsApp.java ! test/hotspot/jtreg/runtime/cds/appcds/cacheObject/MirrorWithReferenceFieldsTest.java ! test/hotspot/jtreg/runtime/cds/appcds/cacheObject/PrimitiveTypesApp.java ! test/hotspot/jtreg/runtime/cds/appcds/cacheObject/PrimitiveTypesTest.java ! test/hotspot/jtreg/runtime/cds/appcds/cacheObject/RedefineClassApp.java ! test/hotspot/jtreg/runtime/cds/appcds/cacheObject/RedefineClassTest.java ! test/hotspot/jtreg/runtime/cds/appcds/condy/CondyHelloApp.java ! test/hotspot/jtreg/runtime/cds/appcds/condy/CondyHelloTest.java ! test/hotspot/jtreg/runtime/cds/appcds/customLoader/HelloCustom.java ! test/hotspot/jtreg/runtime/cds/appcds/customLoader/HelloCustom_JFR.java ! test/hotspot/jtreg/runtime/cds/appcds/customLoader/LoaderSegregationTest.java ! test/hotspot/jtreg/runtime/cds/appcds/customLoader/OldClassAndInf.java ! test/hotspot/jtreg/runtime/cds/appcds/customLoader/PrintSharedArchiveAndExit.java ! test/hotspot/jtreg/runtime/cds/appcds/customLoader/SameNameInTwoLoadersTest.java ! 
test/hotspot/jtreg/runtime/cds/appcds/customLoader/UnintendedLoadersTest.java ! test/hotspot/jtreg/runtime/cds/appcds/customLoader/UnloadUnregisteredLoaderTest.java ! test/hotspot/jtreg/runtime/cds/appcds/customLoader/test-classes/Hello.java ! test/hotspot/jtreg/runtime/cds/appcds/customLoader/test-classes/HelloUnload.java ! test/hotspot/jtreg/runtime/cds/appcds/customLoader/test-classes/LoaderSegregation.java ! test/hotspot/jtreg/runtime/cds/appcds/customLoader/test-classes/OldClassApp.java ! test/hotspot/jtreg/runtime/cds/appcds/customLoader/test-classes/SameNameUnrelatedLoaders.java ! test/hotspot/jtreg/runtime/cds/appcds/customLoader/test-classes/UnintendedLoaders.java ! test/hotspot/jtreg/runtime/cds/appcds/customLoader/test-classes/UnloadUnregisteredLoader.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/AppendClasspath.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/ArchiveConsistency.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/ArchivedSuperIf.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/ArrayKlasses.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/BasicLambdaTest.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/CDSStreamTestDriver.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/ClassResolutionFailure.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/DoubleSumAverageTest.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/DumpToDefaultArchive.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/DuplicatedCustomTest.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/DynamicArchiveRelocationTest.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/DynamicArchiveTestBase.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/DynamicLotsOfClasses.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/DynamicSharedSymbols.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/ExcludedClasses.java ! 
test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/HelloDynamic.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/HelloDynamicCustom.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/HelloDynamicCustomUnload.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/JFRDynamicCDS.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/JITInteraction.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/LambdaContainsOldInf.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/LambdaCustomLoader.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/LambdaForClassInBaseArchive.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/LambdaForOldInfInBaseArchive.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/LambdaInBaseArchive.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/LambdaProxyCallerIsHidden.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/LambdaProxyDuringShutdown.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/LinkClassTest.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/LotsUnloadTest.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/MainModuleOnly.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/MethodSorting.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/MismatchedBaseArchive.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/MissingArchive.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/ModulePath.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/NestHostOldInf.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/NestTest.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/NoClassToArchive.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/OldClassAndInf.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/OldClassInBaseArchive.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/ParallelLambdaLoadTest.java ! 
test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/PredicateTest.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/PrintSharedArchiveAndExit.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/RedefineCallerClassTest.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/RegularHiddenClass.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/RelativePath.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/SharedArchiveFileOption.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/SharedBaseAddressOption.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/StaticInnerTest.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/TestAutoCreateSharedArchive.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/TestDynamicDumpAtOom.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/TestDynamicRegenerateHolderClasses.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/TestLambdaInvokers.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/UnsupportedBaseArchive.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/UnusedCPDuringDump.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/UsedAllArchivedLambdas.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/VerifyObjArrayCloneTest.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/VerifyWithDynamicArchive.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/WrongTopClasspath.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/methodHandles/CDSMHTest_generate.sh ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/methodHandles/MethodHandlesAsCollectorTest.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/methodHandles/MethodHandlesCastFailureTest.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/methodHandles/MethodHandlesGeneralTest.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/methodHandles/MethodHandlesInvokersTest.java ! 
test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/methodHandles/MethodHandlesPermuteArgumentsTest.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/methodHandles/MethodHandlesSpreadArgumentsTest.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/test-classes/DuplicatedCustomApp.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/test-classes/LambdaVerification.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/test-classes/LoadClasses.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/test-classes/TestJIT.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/test-classes/UsedAllArchivedLambdasApp.java ! test/hotspot/jtreg/runtime/cds/appcds/javaldr/ArrayTest.java ! test/hotspot/jtreg/runtime/cds/appcds/javaldr/ArrayTestHelper.java ! test/hotspot/jtreg/runtime/cds/appcds/javaldr/GCSharedStringsDuringDump.java ! test/hotspot/jtreg/runtime/cds/appcds/javaldr/GCSharedStringsDuringDumpWb.java ! test/hotspot/jtreg/runtime/cds/appcds/jcmd/JCmdTestDumpBase.java ! test/hotspot/jtreg/runtime/cds/appcds/jcmd/JCmdTestDynamicDump.java ! test/hotspot/jtreg/runtime/cds/appcds/jcmd/JCmdTestFileSafety.java ! test/hotspot/jtreg/runtime/cds/appcds/jcmd/JCmdTestStaticDump.java ! test/hotspot/jtreg/runtime/cds/appcds/jigsaw/classpathtests/DummyClassesInBootClassPath.java ! test/hotspot/jtreg/runtime/cds/appcds/jigsaw/modulepath/JvmtiAddPath.java ! test/hotspot/jtreg/runtime/cds/appcds/jigsaw/modulepath/MainModuleOnly.java ! test/hotspot/jtreg/runtime/cds/appcds/jvmti/ClassFileLoadHook.java ! test/hotspot/jtreg/runtime/cds/appcds/jvmti/ClassFileLoadHookTest.java ! test/hotspot/jtreg/runtime/cds/appcds/jvmti/InstrumentationApp.java ! test/hotspot/jtreg/runtime/cds/appcds/jvmti/InstrumentationTest.java ! test/hotspot/jtreg/runtime/cds/appcds/loaderConstraints/DynamicLoaderConstraintsTest.java ! test/hotspot/jtreg/runtime/cds/appcds/methodHandles/CDSMHTest_generate.sh ! 
test/hotspot/jtreg/runtime/cds/appcds/methodHandles/MethodHandlesAsCollectorTest.java ! test/hotspot/jtreg/runtime/cds/appcds/methodHandles/MethodHandlesCastFailureTest.java ! test/hotspot/jtreg/runtime/cds/appcds/methodHandles/MethodHandlesGeneralTest.java ! test/hotspot/jtreg/runtime/cds/appcds/methodHandles/MethodHandlesInvokersTest.java ! test/hotspot/jtreg/runtime/cds/appcds/methodHandles/MethodHandlesPermuteArgumentsTest.java ! test/hotspot/jtreg/runtime/cds/appcds/methodHandles/MethodHandlesSpreadArgumentsTest.java ! test/hotspot/jtreg/runtime/cds/appcds/redefineClass/RedefineBasic.java ! test/hotspot/jtreg/runtime/cds/appcds/redefineClass/RedefineBasicTest.java ! test/hotspot/jtreg/runtime/cds/appcds/redefineClass/RedefineRunningMethods_Shared.java ! test/hotspot/jtreg/runtime/cds/appcds/redefineClass/RedefineRunningMethods_SharedHelper.java ! test/hotspot/jtreg/runtime/cds/appcds/sharedStrings/ExerciseGC.java ! test/hotspot/jtreg/runtime/cds/appcds/sharedStrings/HelloStringGC.java ! test/hotspot/jtreg/runtime/cds/appcds/sharedStrings/HelloStringPlus.java ! test/hotspot/jtreg/runtime/cds/appcds/sharedStrings/IncompatibleOptions.java ! test/hotspot/jtreg/runtime/cds/appcds/sharedStrings/InternSharedString.java ! test/hotspot/jtreg/runtime/cds/appcds/sharedStrings/InternStringTest.java ! test/hotspot/jtreg/runtime/cds/appcds/sharedStrings/LockSharedStrings.java ! test/hotspot/jtreg/runtime/cds/appcds/sharedStrings/LockStringTest.java ! test/hotspot/jtreg/runtime/cds/appcds/sharedStrings/LockStringValueTest.java ! test/hotspot/jtreg/runtime/cds/appcds/sharedStrings/SharedStringsBasicPlus.java ! test/hotspot/jtreg/runtime/cds/appcds/sharedStrings/SharedStringsHumongous.java ! test/hotspot/jtreg/runtime/cds/appcds/sharedStrings/SharedStringsUtils.java ! test/hotspot/jtreg/runtime/cds/appcds/sharedStrings/SharedStringsWb.java ! test/hotspot/jtreg/runtime/cds/appcds/sharedStrings/SharedStringsWbTest.java ! 
test/hotspot/jtreg/runtime/cds/appcds/test-classes/BootClassPathAppendHelper.java ! test/hotspot/jtreg/runtime/cds/appcds/test-classes/DummyClassHelper.java ! test/hotspot/jtreg/runtime/cds/appcds/test-classes/ForNameTest.java ! test/hotspot/jtreg/runtime/cds/appcds/test-classes/GenericTestApp.java ! test/hotspot/jtreg/runtime/cds/appcds/test-classes/HelloExt.java ! test/hotspot/jtreg/runtime/cds/appcds/test-classes/HelloWB.java ! test/hotspot/jtreg/runtime/cds/appcds/test-classes/JvmtiApp.java ! test/hotspot/jtreg/runtime/cds/appcds/test-classes/MultiProcClass.java ! test/hotspot/jtreg/runtime/cds/appcds/test-classes/RewriteBytecodes.java ! test/hotspot/jtreg/runtime/cds/serviceability/ReplaceCriticalClasses.java ! test/hotspot/jtreg/runtime/cds/serviceability/ReplaceCriticalClassesForSubgraphs.java ! test/hotspot/jtreg/runtime/exceptionMsgs/AbstractMethodError/AbstractMethodErrorTest.java ! test/hotspot/jtreg/runtime/exceptionMsgs/IncompatibleClassChangeError/IncompatibleClassChangeErrorTest.java ! test/hotspot/jtreg/runtime/execstack/TestCheckJDK.java ! test/hotspot/jtreg/runtime/handshake/AsyncHandshakeWalkStackTest.java ! test/hotspot/jtreg/runtime/handshake/HandshakeDirectTest.java ! test/hotspot/jtreg/runtime/handshake/HandshakeTimeoutTest.java ! test/hotspot/jtreg/runtime/handshake/HandshakeWalkExitTest.java ! test/hotspot/jtreg/runtime/handshake/HandshakeWalkOneExitTest.java ! test/hotspot/jtreg/runtime/handshake/HandshakeWalkStackTest.java ! test/hotspot/jtreg/runtime/handshake/MixedHandshakeWalkStackTest.java ! test/hotspot/jtreg/runtime/handshake/SuspendBlocked.java ! test/hotspot/jtreg/runtime/interned/SanityTest.java ! test/hotspot/jtreg/runtime/logging/loadLibraryTest/LoadLibraryTest.java ! test/hotspot/jtreg/runtime/memory/ReadFromNoaccessArea.java ! test/hotspot/jtreg/runtime/memory/ReadVMPageSize.java ! test/hotspot/jtreg/runtime/memory/ReserveMemory.java ! test/hotspot/jtreg/runtime/memory/StressVirtualSpaceResize.java ! 
test/hotspot/jtreg/runtime/modules/AccessCheckAllUnnamed.java ! test/hotspot/jtreg/runtime/modules/AccessCheckExp.java ! test/hotspot/jtreg/runtime/modules/AccessCheckJavaBase.java ! test/hotspot/jtreg/runtime/modules/AccessCheckOpen.java ! test/hotspot/jtreg/runtime/modules/AccessCheckRead.java ! test/hotspot/jtreg/runtime/modules/AccessCheckSuper.java ! test/hotspot/jtreg/runtime/modules/AccessCheckUnnamed.java ! test/hotspot/jtreg/runtime/modules/AccessCheckWorks.java ! test/hotspot/jtreg/runtime/modules/CCE_module_msg.java ! test/hotspot/jtreg/runtime/modules/ExportTwice.java ! test/hotspot/jtreg/runtime/modules/JVMAddModuleExportToAllUnnamed.java ! test/hotspot/jtreg/runtime/modules/JVMAddModuleExports.java ! test/hotspot/jtreg/runtime/modules/JVMAddModuleExportsToAll.java ! test/hotspot/jtreg/runtime/modules/JVMAddReadsModule.java ! test/hotspot/jtreg/runtime/modules/JVMDefineModule.java ! test/hotspot/jtreg/runtime/modules/LoadUnloadModuleStress.java ! test/hotspot/jtreg/runtime/modules/ModuleHelper.java ! test/hotspot/jtreg/runtime/modules/SealedInterfaceModuleTest.java ! test/hotspot/jtreg/runtime/modules/SealedModuleTest.java ! test/hotspot/jtreg/runtime/stringtable/StringTableCleaningTest.java ! test/hotspot/jtreg/runtime/whitebox/TestHiddenClassIsAlive.java ! test/hotspot/jtreg/runtime/whitebox/TestWBDeflateIdleMonitors.java ! test/hotspot/jtreg/runtime/whitebox/WBStackSize.java ! test/hotspot/jtreg/serviceability/ParserTest.java ! test/hotspot/jtreg/serviceability/dcmd/compiler/CodelistTest.java ! test/hotspot/jtreg/serviceability/dcmd/compiler/CompilerQueueTest.java ! test/hotspot/jtreg/serviceability/jvmti/Heap/IterateHeapWithEscapeAnalysisEnabled.java ! test/hotspot/jtreg/serviceability/sa/TestInstanceKlassSize.java ! test/hotspot/jtreg/serviceability/sa/TestInstanceKlassSizeForInterface.java ! test/hotspot/jtreg/serviceability/sa/TestUniverse.java ! test/hotspot/jtreg/testlibrary/ctw/src/sun/hotspot/tools/ctw/Compiler.java ! 
test/hotspot/jtreg/testlibrary_tests/ctw/ClassesDirTest.java ! test/hotspot/jtreg/testlibrary_tests/ctw/ClassesListTest.java ! test/hotspot/jtreg/testlibrary_tests/ctw/JarDirTest.java ! test/hotspot/jtreg/testlibrary_tests/ctw/JarsTest.java ! test/hotspot/jtreg/testlibrary_tests/ir_framework/tests/TestBasics.java ! test/hotspot/jtreg/testlibrary_tests/ir_framework/tests/TestCompLevels.java ! test/hotspot/jtreg/testlibrary_tests/ir_framework/tests/TestControls.java ! test/hotspot/jtreg/testlibrary_tests/ir_framework/tests/TestDIgnoreCompilerControls.java ! test/hotspot/jtreg/testlibrary_tests/ir_framework/tests/TestIRMatching.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/check/ClassAssertion.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/loading/ClassLoadingHelper.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_anonclassloader_inMemoryCompilation_keep_class/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_anonclassloader_inMemoryCompilation_keep_obj/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_anonclassloader_keep_class/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_anonclassloader_keep_obj/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_compilation_level1_inMemoryCompilation_keep_cl/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_compilation_level1_inMemoryCompilation_keep_class/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_compilation_level1_inMemoryCompilation_keep_obj/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_compilation_level1_keep_cl/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_compilation_level1_keep_class/TestDescription.java ! 
test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_compilation_level1_keep_obj/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_compilation_level2_inMemoryCompilation_keep_cl/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_compilation_level2_inMemoryCompilation_keep_class/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_compilation_level2_inMemoryCompilation_keep_obj/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_compilation_level2_keep_cl/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_compilation_level2_keep_class/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_compilation_level2_keep_obj/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_compilation_level3_inMemoryCompilation_keep_cl/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_compilation_level3_inMemoryCompilation_keep_class/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_compilation_level3_inMemoryCompilation_keep_obj/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_compilation_level3_keep_cl/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_compilation_level3_keep_class/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_compilation_level3_keep_obj/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_compilation_level4_inMemoryCompilation_keep_cl/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_compilation_level4_inMemoryCompilation_keep_class/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_compilation_level4_inMemoryCompilation_keep_obj/TestDescription.java ! 
test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_compilation_level4_keep_cl/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_compilation_level4_keep_class/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_compilation_level4_keep_obj/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_humongous_class_inMemoryCompilation_keep_cl/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_humongous_class_inMemoryCompilation_keep_class/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_humongous_class_inMemoryCompilation_keep_obj/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_humongous_class_keep_cl/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_humongous_class_keep_class/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_humongous_class_keep_obj/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_jni_classloading_inMemoryCompilation_keep_cl/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_jni_classloading_inMemoryCompilation_keep_class/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_jni_classloading_inMemoryCompilation_keep_obj/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_jni_classloading_keep_cl/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_jni_classloading_keep_class/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_jni_classloading_keep_obj/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_jni_global_ref_inMemoryCompilation_keep_cl/TestDescription.java ! 
test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_jni_global_ref_inMemoryCompilation_keep_class/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_jni_global_ref_inMemoryCompilation_keep_obj/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_jni_global_ref_keep_cl/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_jni_global_ref_keep_class/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_jni_global_ref_keep_obj/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_jni_local_ref_inMemoryCompilation_keep_cl/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_jni_local_ref_inMemoryCompilation_keep_class/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_jni_local_ref_inMemoryCompilation_keep_obj/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_jni_local_ref_keep_cl/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_jni_local_ref_keep_class/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_jni_local_ref_keep_obj/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_rootClass_inMemoryCompilation_keep_cl/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_rootClass_inMemoryCompilation_keep_class/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_rootClass_inMemoryCompilation_keep_obj/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_rootClass_keep_cl/TestDescription.java ! 
test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_rootClass_keep_class/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_rootClass_keep_obj/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_stackLocal_inMemoryCompilation_keep_cl/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_stackLocal_inMemoryCompilation_keep_class/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_stackLocal_inMemoryCompilation_keep_obj/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_stackLocal_keep_cl/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_stackLocal_keep_class/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_stackLocal_keep_obj/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_staticField_inMemoryCompilation_keep_cl/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_staticField_inMemoryCompilation_keep_class/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_staticField_inMemoryCompilation_keep_obj/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_staticField_keep_cl/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_staticField_keep_class/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_staticField_keep_obj/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_strongRef_inMemoryCompilation_keep_cl/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_strongRef_inMemoryCompilation_keep_class/TestDescription.java ! 
test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_strongRef_inMemoryCompilation_keep_obj/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_strongRef_keep_cl/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_strongRef_keep_class/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_strongRef_keep_obj/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_threadItself_inMemoryCompilation_keep_cl/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_threadItself_inMemoryCompilation_keep_class/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_threadItself_inMemoryCompilation_keep_obj/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_threadItself_keep_cl/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_threadItself_keep_class/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_threadItself_keep_obj/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_phantom_ref_inMemoryCompilation_keep_cl/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_phantom_ref_inMemoryCompilation_keep_class/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_phantom_ref_inMemoryCompilation_keep_obj/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_phantom_ref_keep_cl/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_phantom_ref_keep_class/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_phantom_ref_keep_obj/TestDescription.java ! 
test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_prot_domains_inMemoryCompilation_keep_cl/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_prot_domains_inMemoryCompilation_keep_class/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_prot_domains_inMemoryCompilation_keep_obj/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_prot_domains_keep_cl/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_prot_domains_keep_class/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_prot_domains_keep_obj/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_redefinition_inMemoryCompilation_keep_cl/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_redefinition_inMemoryCompilation_keep_class/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_redefinition_inMemoryCompilation_keep_obj/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_redefinition_keep_cl/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_redefinition_keep_class/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_redefinition_keep_obj/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_reflection_classloading_inMemoryCompilation_keep_cl/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_reflection_classloading_inMemoryCompilation_keep_class/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_reflection_classloading_inMemoryCompilation_keep_obj/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_reflection_classloading_keep_cl/TestDescription.java ! 
test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_reflection_classloading_keep_class/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_reflection_classloading_keep_obj/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_weak_ref_inMemoryCompilation_keep_cl/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_weak_ref_inMemoryCompilation_keep_class/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_weak_ref_inMemoryCompilation_keep_obj/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_weak_ref_keep_cl/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_weak_ref_keep_class/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_weak_ref_keep_obj/TestDescription.java ! test/hotspot/jtreg/vmTestbase/metaspace/staticReferences/StaticReferences.java ! test/hotspot/jtreg/vmTestbase/metaspace/stressHierarchy/common/PerformChecksHelper.java ! test/hotspot/jtreg/vmTestbase/metaspace/stressHierarchy/stressHierarchy001/TestDescription.java ! test/hotspot/jtreg/vmTestbase/metaspace/stressHierarchy/stressHierarchy002/TestDescription.java ! test/hotspot/jtreg/vmTestbase/metaspace/stressHierarchy/stressHierarchy003/TestDescription.java ! test/hotspot/jtreg/vmTestbase/metaspace/stressHierarchy/stressHierarchy004/TestDescription.java ! test/hotspot/jtreg/vmTestbase/metaspace/stressHierarchy/stressHierarchy005/TestDescription.java ! test/hotspot/jtreg/vmTestbase/metaspace/stressHierarchy/stressHierarchy006/TestDescription.java ! test/hotspot/jtreg/vmTestbase/metaspace/stressHierarchy/stressHierarchy007/TestDescription.java ! test/hotspot/jtreg/vmTestbase/metaspace/stressHierarchy/stressHierarchy008/TestDescription.java ! test/hotspot/jtreg/vmTestbase/metaspace/stressHierarchy/stressHierarchy009/TestDescription.java ! 
test/hotspot/jtreg/vmTestbase/metaspace/stressHierarchy/stressHierarchy010/TestDescription.java ! test/hotspot/jtreg/vmTestbase/metaspace/stressHierarchy/stressHierarchy011/TestDescription.java ! test/hotspot/jtreg/vmTestbase/metaspace/stressHierarchy/stressHierarchy012/TestDescription.java ! test/hotspot/jtreg/vmTestbase/metaspace/stressHierarchy/stressHierarchy013/TestDescription.java ! test/hotspot/jtreg/vmTestbase/metaspace/stressHierarchy/stressHierarchy014/TestDescription.java ! test/hotspot/jtreg/vmTestbase/metaspace/stressHierarchy/stressHierarchy015/TestDescription.java ! test/hotspot/jtreg/vmTestbase/nsk/jdi/HiddenClass/events/events001.java ! test/hotspot/jtreg/vmTestbase/nsk/jdi/HiddenClass/events/events001a.java ! test/hotspot/jtreg/vmTestbase/nsk/jdi/ObjectReference/referringObjects/referringObjects002/referringObjects002.java ! test/hotspot/jtreg/vmTestbase/nsk/jdi/ObjectReference/referringObjects/referringObjects002/referringObjects002a.java ! test/hotspot/jtreg/vmTestbase/nsk/jdi/stress/serial/forceEarlyReturn001/TestDescription.java ! test/hotspot/jtreg/vmTestbase/nsk/jdi/stress/serial/forceEarlyReturn002/TestDescription.java ! test/hotspot/jtreg/vmTestbase/nsk/jdi/stress/serial/heapwalking001/TestDescription.java ! test/hotspot/jtreg/vmTestbase/nsk/jdi/stress/serial/heapwalking002/TestDescription.java ! test/hotspot/jtreg/vmTestbase/nsk/jdi/stress/serial/mixed001/TestDescription.java ! test/hotspot/jtreg/vmTestbase/nsk/jdi/stress/serial/mixed002/TestDescription.java ! test/hotspot/jtreg/vmTestbase/nsk/jdi/stress/serial/monitorEvents001/TestDescription.java ! test/hotspot/jtreg/vmTestbase/nsk/jdi/stress/serial/monitorEvents002/TestDescription.java ! test/hotspot/jtreg/vmTestbase/nsk/jdi/stress/serial/ownedMonitorsAndFrames001/TestDescription.java ! test/hotspot/jtreg/vmTestbase/nsk/jdi/stress/serial/ownedMonitorsAndFrames002/TestDescription.java ! test/hotspot/jtreg/vmTestbase/nsk/jvmti/scenarios/general_functions/GF08/gf08t001/TestDriver.java ! 
test/hotspot/jtreg/vmTestbase/nsk/share/jdi/SerialExecutionDebuggee.java ! test/hotspot/jtreg/vmTestbase/vm/compiler/CodeCacheInfo/Test.java ! test/hotspot/jtreg/vmTestbase/vm/mlvm/indy/stress/gc/lotsOfCallSites/Test.java ! test/hotspot/jtreg/vmTestbase/vm/share/gc/TriggerUnloadingWithWhiteBox.java ! test/jdk/com/sun/jdi/EATests.java ! test/jdk/java/foreign/stackwalk/TestAsyncStackWalk.java ! test/jdk/java/foreign/stackwalk/TestStackWalk.java ! test/jdk/java/foreign/upcalldeopt/TestUpcallDeopt.java ! test/jdk/java/lang/instrument/GetObjectSizeIntrinsicsTest.java ! test/jdk/java/lang/management/MemoryMXBean/CollectionUsageThreshold.java ! test/jdk/java/lang/management/MemoryMXBean/LowMemoryTest.java ! test/jdk/java/lang/management/MemoryMXBean/ResetPeakMemoryUsage.java ! test/jdk/java/lang/ref/CleanerTest.java ! test/jdk/java/util/Arrays/TimSortStackSize2.java ! test/jdk/jdk/internal/vm/Continuation/Fuzz.java ! test/jdk/jdk/jfr/api/consumer/TestRecordedFrameType.java ! test/jdk/jdk/jfr/event/allocation/TestObjectAllocationInNewTLABEvent.java ! test/jdk/jdk/jfr/event/allocation/TestObjectAllocationOutsideTLABEvent.java ! test/jdk/jdk/jfr/event/allocation/TestObjectAllocationSampleEventThrottling.java ! test/jdk/jdk/jfr/event/compiler/TestCodeCacheConfig.java ! test/jdk/jdk/jfr/event/compiler/TestCodeCacheFull.java ! test/jdk/jdk/jfr/event/compiler/TestCodeSweeper.java ! test/jdk/jdk/jfr/event/compiler/TestCodeSweeperStats.java ! test/jdk/jdk/jfr/event/compiler/TestCompilerCompile.java ! test/jdk/jdk/jfr/event/compiler/TestCompilerInlining.java ! test/jdk/jdk/jfr/event/compiler/TestCompilerPhase.java ! test/jdk/jdk/jfr/event/compiler/TestDeoptimization.java ! test/jdk/jdk/jfr/event/gc/collection/TestG1ParallelPhases.java ! test/jdk/jdk/jfr/event/gc/configuration/TestGCHeapConfigurationEventWith32BitOops.java ! test/jdk/jdk/jfr/event/gc/configuration/TestGCHeapConfigurationEventWithHeapBasedOops.java ! test/jdk/jdk/jfr/event/gc/detailed/TestEvacuationFailedEvent.java ! 
test/jdk/jdk/jfr/event/gc/detailed/TestGCLockerEvent.java ! test/jdk/jdk/jfr/event/gc/heapsummary/TestHeapSummaryCommittedSize.java ! test/jdk/jdk/jfr/event/runtime/TestSafepointEvents.java ! test/jdk/jdk/jfr/event/runtime/TestThrowableInstrumentation.java ! test/jdk/jdk/jfr/jvm/TestJFRIntrinsic.java ! test/jdk/jdk/jfr/startupargs/TestBadOptionValues.java ! test/lib-test/jdk/test/lib/TestPlatformIsTieredSupported.java ! test/lib/jdk/test/lib/cds/CDSArchiveUtils.java ! test/lib/jdk/test/lib/helpers/ClassFileInstaller.java ! test/lib/jdk/test/whitebox/WhiteBox.java ! test/lib/sun/hotspot/code/BlobType.java ! test/lib/sun/hotspot/code/CodeBlob.java ! test/lib/sun/hotspot/code/Compiler.java ! test/lib/sun/hotspot/code/NMethod.java ! test/lib/sun/hotspot/cpuinfo/CPUInfo.java ! test/lib/sun/hotspot/gc/GC.java Changeset: 9c86c820 Author: Vicente Romero Date: 2022-07-08 17:24:27 +0000 URL: https://git.openjdk.org/loom/commit/9c86c82091827e781c3919b4b4410981ae322732 8282714: synthetic arguments are being added to the constructors of static local classes Reviewed-by: jlahoda ! src/jdk.compiler/share/classes/com/sun/tools/javac/code/Symbol.java ! src/jdk.compiler/share/classes/com/sun/tools/javac/comp/Lower.java + test/langtools/tools/javac/records/LocalStaticDeclarations2.java ! test/langtools/tools/javac/records/RecordCompilationTests.java Changeset: 1877533f Author: Weijun Wang Date: 2022-07-08 18:38:08 +0000 URL: https://git.openjdk.org/loom/commit/1877533f757731e2ce918230bfb345716954fa53 6522064: Aliases from Microsoft CryptoAPI has bad character encoding Reviewed-by: coffeys, hchao ! src/jdk.crypto.mscapi/windows/native/libsunmscapi/security.cpp + test/jdk/sun/security/mscapi/NonAsciiAlias.java Changeset: 6aaf141f Author: Lance Andersen Date: 2022-07-08 18:56:04 +0000 URL: https://git.openjdk.org/loom/commit/6aaf141f61416104020107c371592812a4c723d9 8289984: Files:isDirectory and isRegularFile methods not throwing SecurityException Reviewed-by: iris, alanb ! 
src/java.base/unix/classes/sun/nio/fs/UnixFileSystemProvider.java ! test/jdk/java/nio/file/Files/CheckPermissions.java Changeset: 54b4576f Author: Jonathan Gibbons Date: 2022-07-08 19:33:03 +0000 URL: https://git.openjdk.org/loom/commit/54b4576f78277335e9b45d0b36d943a20cf40888 8288699: cleanup HTML tree in HtmlDocletWriter.commentTagsToContent Reviewed-by: hannesw ! src/jdk.compiler/share/classes/com/sun/tools/javac/parser/DocCommentParser.java ! src/jdk.compiler/share/classes/com/sun/tools/javac/resources/compiler.properties ! src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclets/formats/html/AbstractOverviewIndexWriter.java ! src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclets/formats/html/HtmlDocletWriter.java ! src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclets/formats/html/HtmlSerialFieldWriter.java ! src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclets/formats/html/Signatures.java ! src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclets/formats/html/TagletWriterImpl.java ! src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclets/formats/html/markup/Entity.java ! src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclets/formats/html/markup/RawHtml.java ! src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclets/formats/html/markup/Text.java ! src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclets/formats/html/markup/TextBuilder.java ! src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclets/toolkit/CommentUtils.java ! src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclets/toolkit/Content.java ! src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclets/toolkit/taglets/UserTaglet.java ! src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclets/toolkit/util/CommentHelper.java ! src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclets/toolkit/util/Utils.java ! 
test/langtools/jdk/javadoc/doclet/testTypeAnnotations/TestTypeAnnotations.java = test/langtools/tools/javac/diags/examples/InvalidHtml.java Changeset: 3c08e6b3 Author: Ioi Lam Date: 2022-07-09 03:47:20 +0000 URL: https://git.openjdk.org/loom/commit/3c08e6b311121e05e30b88c0e325317f364ef15d 8289780: Avoid formatting stub names when Forte is not enabled Reviewed-by: dholmes, coleenp, sspitsyn ! src/hotspot/share/code/codeBlob.cpp ! src/hotspot/share/interpreter/abstractInterpreter.cpp ! src/hotspot/share/prims/forte.cpp ! src/hotspot/share/prims/forte.hpp ! src/hotspot/share/runtime/sharedRuntime.cpp Changeset: 81ee7d28 Author: Jatin Bhateja Date: 2022-07-09 15:13:25 +0000 URL: https://git.openjdk.org/loom/commit/81ee7d28f8cb9f6c7fb6d2c76a0f14fd5147d93c 8289186: Support predicated vector load/store operations over X86 AVX2 targets. Reviewed-by: xgong, kvn ! src/hotspot/cpu/x86/assembler_x86.cpp ! src/hotspot/cpu/x86/assembler_x86.hpp ! src/hotspot/cpu/x86/c2_MacroAssembler_x86.cpp ! src/hotspot/cpu/x86/c2_MacroAssembler_x86.hpp ! src/hotspot/cpu/x86/x86.ad ! src/hotspot/share/opto/vectorIntrinsics.cpp ! src/hotspot/share/opto/vectornode.hpp + test/micro/org/openjdk/bench/jdk/incubator/vector/StoreMaskedIOOBEBenchmark.java Changeset: 87aa3ce0 Author: Andrey Turbanov Date: 2022-07-09 17:59:43 +0000 URL: https://git.openjdk.org/loom/commit/87aa3ce03e5e294b35cf2cab3cbba0d1964bbbff 8289274: Cleanup unnecessary null comparison before instanceof check in security modules Reviewed-by: mullan ! src/java.base/macosx/classes/apple/security/KeychainStore.java ! src/java.base/share/classes/com/sun/crypto/provider/RC2Cipher.java ! src/java.base/share/classes/javax/security/auth/PrivateCredentialPermission.java ! src/java.base/share/classes/sun/security/pkcs12/PKCS12KeyStore.java ! src/java.base/share/classes/sun/security/provider/JavaKeyStore.java ! src/java.base/share/classes/sun/security/provider/PolicyFile.java ! 
src/java.base/share/classes/sun/security/provider/SubjectCodeSource.java ! src/java.base/share/classes/sun/security/provider/certpath/CertId.java ! src/java.base/share/classes/sun/security/provider/certpath/RevocationChecker.java ! src/java.base/share/classes/sun/security/util/BitArray.java ! src/java.base/share/classes/sun/security/x509/AccessDescription.java ! src/jdk.crypto.cryptoki/share/classes/sun/security/pkcs11/P11KeyStore.java ! src/jdk.security.auth/share/classes/com/sun/security/auth/module/KeyStoreLoginModule.java ! src/jdk.security.jgss/share/classes/com/sun/security/sasl/gsskerb/GssKrb5Client.java Changeset: e9d9cc6d Author: Ioi Lam Date: 2022-07-11 05:21:01 +0000 URL: https://git.openjdk.org/loom/commit/e9d9cc6d0aece2237c490a610d79a562867251d8 8290027: Move inline functions from vm_version_x86.hpp to cpp Reviewed-by: kbarrett, dholmes ! src/hotspot/cpu/x86/vm_version_x86.cpp ! src/hotspot/cpu/x86/vm_version_x86.hpp Changeset: 4ab77ac6 Author: Thomas Schatzl Date: 2022-07-11 07:36:21 +0000 URL: https://git.openjdk.org/loom/commit/4ab77ac60df78eedb16ebe142a51f703165e808d 8290017: Directly call HeapRegion::block_start in G1CMObjArrayProcessor::process_slice Reviewed-by: ayang, iwalulya ! src/hotspot/share/gc/g1/g1ConcurrentMarkObjArrayProcessor.cpp Changeset: e2598207 Author: Thomas Schatzl Date: 2022-07-11 07:58:07 +0000 URL: https://git.openjdk.org/loom/commit/e25982071d6d1586d723bcc0d261be619a187f00 8290019: Refactor HeapRegion::oops_on_memregion_iterate() Reviewed-by: ayang, iwalulya ! src/hotspot/share/gc/g1/heapRegion.hpp ! src/hotspot/share/gc/g1/heapRegion.inline.hpp Changeset: 0225eb43 Author: Thomas Schatzl Date: 2022-07-11 07:59:00 +0000 URL: https://git.openjdk.org/loom/commit/0225eb434cb8792d362923bf2c2e3607be4efcb9 8290018: Remove dead declarations in G1BlockOffsetTablePart Reviewed-by: ayang ! 
src/hotspot/share/gc/g1/g1BlockOffsetTable.hpp Changeset: 2579373d Author: Koichi Sakata Committer: David Holmes Date: 2022-07-11 09:24:16 +0000 URL: https://git.openjdk.org/loom/commit/2579373dd0cc151dad22e4041f42bbd314b3be5f 8280472: Don't mix legacy logging with UL Reviewed-by: dholmes, mgronlun ! src/hotspot/share/oops/method.cpp Changeset: bba6be79 Author: Aggelos Biboudis Committer: Jan Lahoda Date: 2022-07-11 11:13:55 +0000 URL: https://git.openjdk.org/loom/commit/bba6be79e06b2b83b97e6def7b6a520e93f5737c 8269674: Improve testing of parenthesized patterns Reviewed-by: jlahoda ! src/jdk.compiler/share/classes/com/sun/tools/javac/parser/JavacParser.java + test/langtools/tools/javac/patterns/ParenthesizedCombo.java Changeset: 46251bc6 Author: Prasanta Sadhukhan Date: 2022-07-11 11:35:32 +0000 URL: https://git.openjdk.org/loom/commit/46251bc6e248a19e8d78173ff8d0502c68ee1acb 8224267: JOptionPane message string with 5000+ newlines produces StackOverflowError Reviewed-by: tr, aivanov ! src/java.desktop/share/classes/javax/swing/plaf/basic/BasicOptionPaneUI.java + test/jdk/javax/swing/JOptionPane/TestOptionPaneStackOverflow.java Changeset: 0c370089 Author: Coleen Phillimore Date: 2022-07-11 13:07:03 +0000 URL: https://git.openjdk.org/loom/commit/0c37008917789e7b631b5c18e6f54454b1bfe038 8275662: remove test/lib/sun/hotspot Reviewed-by: mseledtsov, sspitsyn, lmesnik ! test/hotspot/jtreg/compiler/cha/AbstractRootMethod.java ! test/hotspot/jtreg/compiler/cha/DefaultRootMethod.java ! test/hotspot/jtreg/compiler/cha/Utils.java ! test/hotspot/jtreg/compiler/codecache/OverflowCodeCacheTest.java ! test/hotspot/jtreg/compiler/codecache/cli/TestSegmentedCodeCacheOption.java ! test/hotspot/jtreg/compiler/codecache/cli/codeheapsize/CodeCacheFreeSpaceRunner.java ! test/hotspot/jtreg/compiler/codecache/cli/codeheapsize/GenericCodeHeapSizeRunner.java ! test/hotspot/jtreg/compiler/codecache/cli/codeheapsize/JVMStartupRunner.java ! 
test/hotspot/jtreg/compiler/codecache/cli/codeheapsize/TestCodeHeapSizeOptions.java ! test/hotspot/jtreg/compiler/codecache/cli/common/CodeCacheCLITestCase.java ! test/hotspot/jtreg/compiler/codecache/cli/common/CodeCacheInfoFormatter.java ! test/hotspot/jtreg/compiler/codecache/cli/common/CodeCacheOptions.java ! test/hotspot/jtreg/compiler/codecache/cli/printcodecache/PrintCodeCacheRunner.java ! test/hotspot/jtreg/compiler/codecache/cli/printcodecache/TestPrintCodeCacheOption.java ! test/hotspot/jtreg/compiler/codecache/jmx/BeanTypeTest.java ! test/hotspot/jtreg/compiler/codecache/jmx/CodeCacheUtils.java ! test/hotspot/jtreg/compiler/codecache/jmx/CodeHeapBeanPresenceTest.java ! test/hotspot/jtreg/compiler/codecache/jmx/GetUsageTest.java ! test/hotspot/jtreg/compiler/codecache/jmx/InitialAndMaxUsageTest.java ! test/hotspot/jtreg/compiler/codecache/jmx/ManagerNamesTest.java ! test/hotspot/jtreg/compiler/codecache/jmx/MemoryPoolsPresenceTest.java ! test/hotspot/jtreg/compiler/codecache/jmx/PeakUsageTest.java ! test/hotspot/jtreg/compiler/codecache/jmx/PoolsIndependenceTest.java ! test/hotspot/jtreg/compiler/codecache/jmx/ThresholdNotificationsTest.java ! test/hotspot/jtreg/compiler/codecache/jmx/UsageThresholdExceededTest.java ! test/hotspot/jtreg/compiler/codecache/jmx/UsageThresholdIncreasedTest.java ! test/hotspot/jtreg/compiler/codecache/jmx/UsageThresholdNotExceededTest.java ! test/hotspot/jtreg/compiler/codecache/stress/RandomAllocationTest.java ! test/hotspot/jtreg/compiler/codecache/stress/ReturnBlobToWrongHeapTest.java ! test/hotspot/jtreg/compiler/codegen/aes/TestAESMain.java ! test/hotspot/jtreg/compiler/codegen/aes/TestCipherBlockChainingEncrypt.java ! test/hotspot/jtreg/compiler/intrinsics/base64/TestBase64.java ! test/hotspot/jtreg/compiler/intrinsics/bmi/TestAndnI.java ! test/hotspot/jtreg/compiler/intrinsics/bmi/TestAndnL.java ! test/hotspot/jtreg/compiler/intrinsics/bmi/TestBlsiI.java ! test/hotspot/jtreg/compiler/intrinsics/bmi/TestBlsiL.java ! 
test/hotspot/jtreg/compiler/intrinsics/bmi/TestBlsmskI.java ! test/hotspot/jtreg/compiler/intrinsics/bmi/TestBlsmskL.java ! test/hotspot/jtreg/compiler/intrinsics/bmi/TestBlsrI.java ! test/hotspot/jtreg/compiler/intrinsics/bmi/TestBlsrL.java ! test/hotspot/jtreg/compiler/intrinsics/bmi/TestBzhiI2L.java ! test/hotspot/jtreg/compiler/intrinsics/bmi/TestLzcntI.java ! test/hotspot/jtreg/compiler/intrinsics/bmi/TestLzcntL.java ! test/hotspot/jtreg/compiler/intrinsics/bmi/TestTzcntI.java ! test/hotspot/jtreg/compiler/intrinsics/bmi/TestTzcntL.java ! test/hotspot/jtreg/compiler/intrinsics/bmi/verifycode/BmiIntrinsicBase.java ! test/hotspot/jtreg/compiler/intrinsics/klass/CastNullCheckDroppingsTest.java ! test/hotspot/jtreg/compiler/jvmci/compilerToVM/AllocateCompileIdTest.java ! test/hotspot/jtreg/compiler/jvmci/compilerToVM/CompileCodeTestCase.java ! test/hotspot/jtreg/compiler/jvmci/compilerToVM/DisassembleCodeBlobTest.java ! test/hotspot/jtreg/compiler/jvmci/compilerToVM/ExecuteInstalledCodeTest.java ! test/hotspot/jtreg/compiler/jvmci/compilerToVM/HasCompiledCodeForOSRTest.java ! test/hotspot/jtreg/compiler/jvmci/compilerToVM/InvalidateInstalledCodeTest.java ! test/hotspot/jtreg/compiler/onSpinWait/TestOnSpinWaitAArch64DefaultFlags.java ! test/hotspot/jtreg/compiler/unsafe/UnsafeGetStableArrayElement.java ! test/hotspot/jtreg/compiler/whitebox/AllocationCodeBlobTest.java ! test/hotspot/jtreg/compiler/whitebox/CompilerWhiteBoxTest.java ! test/hotspot/jtreg/compiler/whitebox/DeoptimizeFramesTest.java ! test/hotspot/jtreg/compiler/whitebox/ForceNMethodSweepTest.java ! test/hotspot/jtreg/compiler/whitebox/GetCodeHeapEntriesTest.java ! test/hotspot/jtreg/compiler/whitebox/GetNMethodTest.java ! test/hotspot/jtreg/gc/TestConcurrentGCBreakpoints.java ! test/hotspot/jtreg/gc/TestJNIWeak/TestJNIWeak.java ! test/hotspot/jtreg/gc/TestSmallHeap.java ! test/hotspot/jtreg/gc/arguments/TestParallelGCThreads.java ! test/hotspot/jtreg/gc/arguments/TestParallelRefProc.java ! 
test/hotspot/jtreg/gc/ergonomics/TestDynamicNumberOfGCThreads.java ! test/hotspot/jtreg/gc/ergonomics/TestInitialGCThreadLogging.java ! test/hotspot/jtreg/gc/g1/TestGCLogMessages.java ! test/hotspot/jtreg/gc/g1/TestVerificationInConcurrentCycle.java ! test/hotspot/jtreg/gc/logging/TestGCId.java ! test/hotspot/jtreg/runtime/CompressedOops/UseCompressedOops.java ! test/hotspot/jtreg/runtime/MemberName/MemberNameLeak.java ! test/hotspot/jtreg/runtime/cds/appcds/CommandLineFlagCombo.java ! test/hotspot/jtreg/runtime/cds/appcds/JarBuilder.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/CDSStreamTestDriver.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/MainModuleOnly.java ! test/hotspot/jtreg/runtime/cds/appcds/jigsaw/modulepath/MainModuleOnly.java ! test/hotspot/jtreg/runtime/cds/appcds/sharedStrings/IncompatibleOptions.java ! test/hotspot/jtreg/runtime/stringtable/StringTableCleaningTest.java ! test/hotspot/jtreg/serviceability/sa/TestUniverse.java ! test/hotspot/jtreg/vmTestbase/nsk/jvmti/scenarios/general_functions/GF08/gf08t001/TestDriver.java ! test/jdk/com/sun/jdi/EATests.java ! test/jdk/java/lang/management/MemoryMXBean/CollectionUsageThreshold.java ! test/jdk/java/lang/management/MemoryMXBean/LowMemoryTest.java ! test/jdk/java/lang/management/MemoryMXBean/ResetPeakMemoryUsage.java ! test/jdk/jdk/jfr/event/compiler/TestCodeCacheFull.java ! test/jdk/jdk/jfr/event/compiler/TestCodeSweeper.java ! test/jdk/jdk/jfr/jvm/TestJFRIntrinsic.java - test/lib-test/jdk/test/whitebox/OldWhiteBox.java ! test/lib/jdk/test/lib/cli/predicate/CPUSpecificPredicate.java ! 
test/lib/jdk/test/lib/helpers/ClassFileInstaller.java - test/lib/sun/hotspot/WhiteBox.java - test/lib/sun/hotspot/code/BlobType.java - test/lib/sun/hotspot/code/CodeBlob.java - test/lib/sun/hotspot/code/Compiler.java - test/lib/sun/hotspot/code/NMethod.java - test/lib/sun/hotspot/cpuinfo/CPUInfo.java - test/lib/sun/hotspot/gc/GC.java Changeset: 95c80229 Author: Thomas Stuefe Date: 2022-07-11 14:07:12 +0000 URL: https://git.openjdk.org/loom/commit/95c8022958f84047cf26909239d8608eff4e35fb 8290046: NMT: Remove unused MallocSiteTable::reset() Reviewed-by: jiefu, zgu ! src/hotspot/share/services/mallocSiteTable.cpp ! src/hotspot/share/services/mallocSiteTable.hpp Changeset: fc01666a Author: Alan Bateman Date: 2022-07-11 14:41:13 +0000 URL: https://git.openjdk.org/loom/commit/fc01666a5824d55b2549c81c0c3602aafdec693c 8290002: (se) AssertionError in SelectorImpl.implCloseSelector Reviewed-by: michaelm ! src/java.base/share/classes/sun/nio/ch/SelectorImpl.java Changeset: 59980ac8 Author: Pavel Rappo Date: 2022-07-11 15:31:22 +0000 URL: https://git.openjdk.org/loom/commit/59980ac8e49c0e46120520cf0007c6fed514251d 8288309: Rename the "testTagInheritence" directory Reviewed-by: hannesw = test/langtools/jdk/javadoc/doclet/testTagInheritance/TestTagInheritance.java = test/langtools/jdk/javadoc/doclet/testTagInheritance/firstSentence/A.java = test/langtools/jdk/javadoc/doclet/testTagInheritance/firstSentence/B.java = test/langtools/jdk/javadoc/doclet/testTagInheritance/firstSentence2/A.java = test/langtools/jdk/javadoc/doclet/testTagInheritance/firstSentence2/B.java = test/langtools/jdk/javadoc/doclet/testTagInheritance/firstSentence2/C.java = test/langtools/jdk/javadoc/doclet/testTagInheritance/pkg/TestAbstractClass.java = test/langtools/jdk/javadoc/doclet/testTagInheritance/pkg/TestInterface.java = test/langtools/jdk/javadoc/doclet/testTagInheritance/pkg/TestInterfaceForAbstractClass.java = test/langtools/jdk/javadoc/doclet/testTagInheritance/pkg/TestSuperSuperClass.java = 
test/langtools/jdk/javadoc/doclet/testTagInheritance/pkg/TestSuperSuperInterface.java = test/langtools/jdk/javadoc/doclet/testTagInheritance/pkg/TestTagInheritance.java Changeset: c33fa55c Author: Calvin Cheung Date: 2022-07-11 15:33:18 +0000 URL: https://git.openjdk.org/loom/commit/c33fa55cf8e194e2662c11d342eee68ec67abb4d 8274235: -Xshare:dump should not call vm_direct_exit Reviewed-by: iklam, dholmes ! src/hotspot/share/cds/archiveBuilder.cpp ! src/hotspot/share/cds/heapShared.cpp ! src/hotspot/share/cds/metaspaceShared.cpp Changeset: 0c1aa2bc Author: Coleen Phillimore Date: 2022-07-11 15:34:17 +0000 URL: https://git.openjdk.org/loom/commit/0c1aa2bc8a1c23d8da8673a4fac574813f373f57 8289184: runtime/ClassUnload/DictionaryDependsTest.java failed with "Test failed: should be unloaded" Reviewed-by: lmesnik, hseigel ! test/hotspot/jtreg/runtime/BadObjectClass/TestUnloadClassError.java ! test/hotspot/jtreg/runtime/Nestmates/membership/TestNestHostErrorWithClassUnload.java ! test/hotspot/jtreg/runtime/logging/ClassLoadUnloadTest.java ! test/hotspot/jtreg/runtime/logging/LoaderConstraintsTest.java ! test/lib/jdk/test/lib/classloader/ClassUnloadCommon.java Changeset: 11319c2a Author: Brian Burkhalter Date: 2022-07-07 22:36:08 +0000 URL: https://git.openjdk.org/loom/commit/11319c2aeb16ef2feb0ecab0e2811a52e845739d 8278469: Test java/nio/channels/FileChannel/LargeGatheringWrite.java times out 8289526: java/nio/channels/FileChannel/MapTest.java times out Reviewed-by: dcubed ! test/jdk/TEST.ROOT = test/jdk/java/nio/channels/FileChannel/largeMemory/LargeGatheringWrite.java = test/jdk/java/nio/channels/FileChannel/largeMemory/MapTest.java Changeset: 1304390b Author: Daniel D. Daugherty Date: 2022-07-07 23:09:42 +0000 URL: https://git.openjdk.org/loom/commit/1304390b3e7ecb4c87108747defd33d9fc4045c4 8289951: ProblemList jdk/jfr/api/consumer/TestRecordingFileWrite.java on linux-x64 and macosx-x64 Reviewed-by: psandoz ! 
test/jdk/ProblemList.txt Changeset: 64286074 Author: Alexander Matveev Date: 2022-07-08 00:17:11 +0000 URL: https://git.openjdk.org/loom/commit/64286074ba763d4a1e8879d8af69eee34d32cfa6 8289030: [macos] app image signature invalid when creating DMG or PKG Reviewed-by: asemenyuk ! src/jdk.jpackage/macosx/classes/jdk/jpackage/internal/MacAppBundler.java ! src/jdk.jpackage/macosx/classes/jdk/jpackage/internal/MacAppImageBuilder.java ! src/jdk.jpackage/macosx/classes/jdk/jpackage/internal/MacBaseInstallerBundler.java ! src/jdk.jpackage/macosx/classes/jdk/jpackage/internal/resources/MacResources.properties ! src/jdk.jpackage/macosx/classes/jdk/jpackage/internal/resources/MacResources_de.properties ! src/jdk.jpackage/macosx/classes/jdk/jpackage/internal/resources/MacResources_ja.properties ! src/jdk.jpackage/macosx/classes/jdk/jpackage/internal/resources/MacResources_zh_CN.properties ! src/jdk.jpackage/share/classes/jdk/jpackage/internal/AbstractAppImageBuilder.java ! src/jdk.jpackage/share/classes/jdk/jpackage/internal/AppImageBundler.java ! src/jdk.jpackage/share/classes/jdk/jpackage/internal/AppImageFile.java ! src/jdk.jpackage/share/classes/jdk/jpackage/internal/IOUtils.java ! test/jdk/tools/jpackage/helpers/jdk/jpackage/test/JPackageCommand.java ! test/jdk/tools/jpackage/macosx/SigningPackageTest.java + test/jdk/tools/jpackage/macosx/SigningPackageTwoStepTest.java Changeset: ea21c465 Author: Thomas Stuefe Date: 2022-07-08 08:13:20 +0000 URL: https://git.openjdk.org/loom/commit/ea21c46531e8095c12153f787a24715eb8efbb03 8289799: Build warning in methodData.cpp memset zero-length parameter Backport-of: cce77a700141a854bafaa5ccb33db026affcf322 ! src/hotspot/share/oops/methodData.cpp Changeset: 732f1065 Author: Jorn Vernee Date: 2022-07-08 11:18:32 +0000 URL: https://git.openjdk.org/loom/commit/732f1065fe05ae737a716bea92536cb8edc2b6a0 8289223: Canonicalize header ids in foreign API javadocs Reviewed-by: mcimadamore ! 
src/java.base/share/classes/java/lang/foreign/Linker.java ! src/java.base/share/classes/java/lang/foreign/MemoryAddress.java ! src/java.base/share/classes/java/lang/foreign/MemoryLayout.java ! src/java.base/share/classes/java/lang/foreign/MemorySegment.java ! src/java.base/share/classes/java/lang/foreign/MemorySession.java ! src/java.base/share/classes/java/lang/foreign/SymbolLookup.java ! src/java.base/share/classes/java/lang/foreign/package-info.java Changeset: 460d879a Author: Jorn Vernee Date: 2022-07-08 15:21:11 +0000 URL: https://git.openjdk.org/loom/commit/460d879a75133fc071802bbc2c742b4232db604e 8289601: SegmentAllocator::allocateUtf8String(String str) should be clarified for strings containing \0 Reviewed-by: psandoz, mcimadamore ! src/java.base/share/classes/java/lang/foreign/MemorySegment.java ! src/java.base/share/classes/java/lang/foreign/SegmentAllocator.java Changeset: eeaf0bba Author: Stuart Marks Date: 2022-07-08 17:03:48 +0000 URL: https://git.openjdk.org/loom/commit/eeaf0bbabc6632c181b191854678e72a333ec0a5 8289872: wrong wording in @param doc for HashMap.newHashMap et. al. Reviewed-by: chegar, naoto, iris ! src/java.base/share/classes/java/util/HashMap.java ! src/java.base/share/classes/java/util/LinkedHashMap.java ! src/java.base/share/classes/java/util/WeakHashMap.java Changeset: c142fbbb Author: Vladimir Kempik Date: 2022-07-08 17:49:53 +0000 URL: https://git.openjdk.org/loom/commit/c142fbbbafcaa728cbdc56467c641eeed511f161 8289697: buffer overflow in MTLVertexCache.m: MTLVertexCache_AddGlyphQuad Backport-of: d852e99ae9de4c611438c50ce37ea1806f58cbdf ! src/java.desktop/macosx/native/libawt_lwawt/java2d/metal/MTLVertexCache.m Changeset: 9981c85d Author: Daniel D. Daugherty Date: 2022-07-08 19:47:55 +0000 URL: https://git.openjdk.org/loom/commit/9981c85d462b1f5a82ebe8b88a1dabf033b4d551 8290033: ProblemList serviceability/jvmti/GetLocalVariable/GetLocalWithoutSuspendTest.java on windows-x64 in -Xcomp mode Reviewed-by: azvegint, tschatzl ! 
test/hotspot/jtreg/ProblemList-Xcomp.txt Changeset: c86c51cc Author: Joe Wang Date: 2022-07-08 21:34:57 +0000 URL: https://git.openjdk.org/loom/commit/c86c51cc72e3457756434b9150b0c5ef2f5d496d 8282071: Update java.xml module-info Reviewed-by: lancea, iris, naoto ! src/java.xml/share/classes/module-info.java Changeset: b542bcba Author: Albert Mingkun Yang Date: 2022-07-11 07:58:03 +0000 URL: https://git.openjdk.org/loom/commit/b542bcba57a1ac79b9b7182dbf984b447754fafc 8289729: G1: Incorrect verification logic in G1ConcurrentMark::clear_next_bitmap Reviewed-by: tschatzl, iwalulya ! src/hotspot/share/gc/g1/g1ConcurrentMark.cpp Changeset: 25f4b043 Author: Jan Lahoda Date: 2022-07-11 08:59:32 +0000 URL: https://git.openjdk.org/loom/commit/25f4b04365e40a91ba7a06f6f9fe99e1785ce4f4 8289894: A NullPointerException thrown from guard expression Reviewed-by: vromero ! src/jdk.compiler/share/classes/com/sun/tools/javac/comp/Check.java ! src/jdk.compiler/share/classes/com/sun/tools/javac/comp/TransPatterns.java ! test/langtools/tools/javac/patterns/CaseStructureTest.java ! test/langtools/tools/javac/patterns/Guards.java ! test/langtools/tools/javac/patterns/SwitchErrors.java ! test/langtools/tools/javac/patterns/SwitchErrors.out Changeset: 04942914 Author: Markus Grönlund Date: 2022-07-11 09:11:58 +0000 URL: https://git.openjdk.org/loom/commit/0494291490b6cd23d228f39199a3686cc9731ec0 8289692: JFR: Thread checkpoint no longer enforce mutual exclusion post Loom integration Reviewed-by: rehn ! src/hotspot/share/jfr/recorder/checkpoint/jfrCheckpointManager.cpp ! src/hotspot/share/jfr/recorder/storage/jfrStorageUtils.hpp ! src/hotspot/share/jfr/recorder/storage/jfrStorageUtils.inline.hpp Changeset: cb6e9cb7 Author: Martin Doerr Date: 2022-07-11 09:21:05 +0000 URL: https://git.openjdk.org/loom/commit/cb6e9cb7286f609dec1fe1157bf95afc503870a9 8290004: [PPC64] JfrGetCallTrace: assert(_pc != nullptr) failed: must have PC Reviewed-by: rrich, lucy ! 
src/hotspot/os_cpu/aix_ppc/thread_aix_ppc.cpp ! src/hotspot/os_cpu/linux_ppc/thread_linux_ppc.cpp Changeset: c79baaa8 Author: Jesper Wilhelmsson Date: 2022-07-11 16:15:49 +0000 URL: https://git.openjdk.org/loom/commit/c79baaa811971c43fbdbc251482d0e40903588cc Merge ! src/hotspot/os_cpu/aix_ppc/javaThread_aix_ppc.cpp ! src/hotspot/os_cpu/linux_ppc/javaThread_linux_ppc.cpp ! src/hotspot/share/gc/g1/g1ConcurrentMark.cpp ! src/hotspot/share/jfr/recorder/storage/jfrStorageUtils.hpp ! src/hotspot/share/jfr/recorder/storage/jfrStorageUtils.inline.hpp ! src/jdk.compiler/share/classes/com/sun/tools/javac/comp/Check.java ! test/hotspot/jtreg/ProblemList-Xcomp.txt ! test/jdk/ProblemList.txt Changeset: 21db9a50 Author: Doug Simon Date: 2022-07-11 16:47:05 +0000 URL: https://git.openjdk.org/loom/commit/21db9a507b441dbf909720b0b394f563e03aafc3 8290065: [JVMCI] only check HotSpotCompiledCode stream is empty if installation succeeds Reviewed-by: kvn ! src/hotspot/share/jvmci/jvmciCodeInstaller.cpp Changeset: f42dab85 Author: Phil Race Date: 2022-07-11 19:19:27 +0000 URL: https://git.openjdk.org/loom/commit/f42dab85924d6a74d1c2c87bca1970e2362f45ea 8289853: Update HarfBuzz to 4.4.1 Reviewed-by: serb, azvegint ! 
src/java.desktop/share/legal/harfbuzz.md + src/java.desktop/share/native/libharfbuzz/OT/Layout/GPOS.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GPOS/Anchor.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GPOS/AnchorFormat1.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GPOS/AnchorFormat2.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GPOS/AnchorFormat3.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GPOS/AnchorMatrix.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GPOS/ChainContextPos.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GPOS/Common.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GPOS/ContextPos.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GPOS/CursivePos.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GPOS/CursivePosFormat1.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GPOS/ExtensionPos.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GPOS/MarkArray.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GPOS/MarkBasePos.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GPOS/MarkBasePosFormat1.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GPOS/MarkLigPos.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GPOS/MarkLigPosFormat1.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GPOS/MarkMarkPos.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GPOS/MarkMarkPosFormat1.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GPOS/MarkRecord.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GPOS/PairPos.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GPOS/PairPosFormat1.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GPOS/PairPosFormat2.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GPOS/PosLookup.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GPOS/PosLookupSubTable.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GPOS/SinglePos.hh + 
src/java.desktop/share/native/libharfbuzz/OT/Layout/GPOS/SinglePosFormat1.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GPOS/SinglePosFormat2.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GPOS/ValueFormat.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GSUB/AlternateSet.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GSUB/AlternateSubst.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GSUB/AlternateSubstFormat1.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GSUB/ChainContextSubst.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GSUB/Common.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GSUB/ContextSubst.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GSUB/ExtensionSubst.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GSUB/GSUB.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GSUB/Ligature.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GSUB/LigatureSet.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GSUB/LigatureSubst.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GSUB/LigatureSubstFormat1.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GSUB/MultipleSubst.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GSUB/MultipleSubstFormat1.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GSUB/ReverseChainSingleSubst.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GSUB/ReverseChainSingleSubstFormat1.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GSUB/Sequence.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GSUB/SingleSubst.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GSUB/SingleSubstFormat1.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GSUB/SingleSubstFormat2.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GSUB/SubstLookup.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GSUB/SubstLookupSubTable.hh + 
src/java.desktop/share/native/libharfbuzz/OT/glyf/CompositeGlyph.hh + src/java.desktop/share/native/libharfbuzz/OT/glyf/Glyph.hh + src/java.desktop/share/native/libharfbuzz/OT/glyf/GlyphHeader.hh + src/java.desktop/share/native/libharfbuzz/OT/glyf/SimpleGlyph.hh + src/java.desktop/share/native/libharfbuzz/OT/glyf/SubsetGlyph.hh + src/java.desktop/share/native/libharfbuzz/OT/glyf/glyf-helpers.hh + src/java.desktop/share/native/libharfbuzz/OT/glyf/glyf.hh + src/java.desktop/share/native/libharfbuzz/OT/glyf/loca.hh + src/java.desktop/share/native/libharfbuzz/OT/glyf/path-builder.hh + src/java.desktop/share/native/libharfbuzz/UPDATING.txt + src/java.desktop/share/native/libharfbuzz/graph/graph.hh + src/java.desktop/share/native/libharfbuzz/graph/serialize.hh ! src/java.desktop/share/native/libharfbuzz/hb-aat-layout-ankr-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-aat-layout-bsln-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-aat-layout-common.hh ! src/java.desktop/share/native/libharfbuzz/hb-aat-layout-feat-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-aat-layout-just-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-aat-layout-kerx-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-aat-layout-morx-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-aat-layout-opbd-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-aat-layout-trak-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-aat-layout.cc ! src/java.desktop/share/native/libharfbuzz/hb-aat-ltag-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-algs.hh ! src/java.desktop/share/native/libharfbuzz/hb-array.hh ! src/java.desktop/share/native/libharfbuzz/hb-atomic.hh ! src/java.desktop/share/native/libharfbuzz/hb-bimap.hh + src/java.desktop/share/native/libharfbuzz/hb-bit-page.hh + src/java.desktop/share/native/libharfbuzz/hb-bit-set-invertible.hh + src/java.desktop/share/native/libharfbuzz/hb-bit-set.hh ! 
src/java.desktop/share/native/libharfbuzz/hb-blob.cc ! src/java.desktop/share/native/libharfbuzz/hb-blob.h ! src/java.desktop/share/native/libharfbuzz/hb-blob.hh ! src/java.desktop/share/native/libharfbuzz/hb-buffer-deserialize-json.hh ! src/java.desktop/share/native/libharfbuzz/hb-buffer-deserialize-text.hh ! src/java.desktop/share/native/libharfbuzz/hb-buffer-serialize.cc + src/java.desktop/share/native/libharfbuzz/hb-buffer-verify.cc ! src/java.desktop/share/native/libharfbuzz/hb-buffer.cc ! src/java.desktop/share/native/libharfbuzz/hb-buffer.h ! src/java.desktop/share/native/libharfbuzz/hb-buffer.hh + src/java.desktop/share/native/libharfbuzz/hb-cache.hh ! src/java.desktop/share/native/libharfbuzz/hb-cff-interp-common.hh ! src/java.desktop/share/native/libharfbuzz/hb-cff-interp-cs-common.hh ! src/java.desktop/share/native/libharfbuzz/hb-cff-interp-dict-common.hh ! src/java.desktop/share/native/libharfbuzz/hb-cff1-interp-cs.hh ! src/java.desktop/share/native/libharfbuzz/hb-cff2-interp-cs.hh ! src/java.desktop/share/native/libharfbuzz/hb-common.cc ! src/java.desktop/share/native/libharfbuzz/hb-common.h ! src/java.desktop/share/native/libharfbuzz/hb-config.hh + src/java.desktop/share/native/libharfbuzz/hb-cplusplus.hh ! src/java.desktop/share/native/libharfbuzz/hb-debug.hh ! src/java.desktop/share/native/libharfbuzz/hb-deprecated.h ! src/java.desktop/share/native/libharfbuzz/hb-dispatch.hh ! src/java.desktop/share/native/libharfbuzz/hb-draw.cc ! src/java.desktop/share/native/libharfbuzz/hb-draw.h ! src/java.desktop/share/native/libharfbuzz/hb-draw.hh ! src/java.desktop/share/native/libharfbuzz/hb-face.cc ! src/java.desktop/share/native/libharfbuzz/hb-face.h ! src/java.desktop/share/native/libharfbuzz/hb-fallback-shape.cc ! src/java.desktop/share/native/libharfbuzz/hb-font.cc ! src/java.desktop/share/native/libharfbuzz/hb-font.h ! src/java.desktop/share/native/libharfbuzz/hb-font.hh ! src/java.desktop/share/native/libharfbuzz/hb-ft.cc ! 
src/java.desktop/share/native/libharfbuzz/hb-ft.h ! src/java.desktop/share/native/libharfbuzz/hb-iter.hh ! src/java.desktop/share/native/libharfbuzz/hb-kern.hh ! src/java.desktop/share/native/libharfbuzz/hb-machinery.hh ! src/java.desktop/share/native/libharfbuzz/hb-map.cc ! src/java.desktop/share/native/libharfbuzz/hb-map.h ! src/java.desktop/share/native/libharfbuzz/hb-map.hh ! src/java.desktop/share/native/libharfbuzz/hb-meta.hh ! src/java.desktop/share/native/libharfbuzz/hb-mutex.hh ! src/java.desktop/share/native/libharfbuzz/hb-null.hh ! src/java.desktop/share/native/libharfbuzz/hb-object.hh ! src/java.desktop/share/native/libharfbuzz/hb-open-file.hh ! src/java.desktop/share/native/libharfbuzz/hb-open-type.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-cff-common.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-cff1-table.cc ! src/java.desktop/share/native/libharfbuzz/hb-ot-cff1-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-cff2-table.cc ! src/java.desktop/share/native/libharfbuzz/hb-ot-cff2-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-cmap-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-color-cbdt-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-color-colr-table.hh + src/java.desktop/share/native/libharfbuzz/hb-ot-color-colrv1-closure.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-color-cpal-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-color-sbix-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-color-svg-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-color.cc ! src/java.desktop/share/native/libharfbuzz/hb-ot-deprecated.h ! src/java.desktop/share/native/libharfbuzz/hb-ot-face-table-list.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-face.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-font.cc ! src/java.desktop/share/native/libharfbuzz/hb-ot-gasp-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-glyf-table.hh ! 
src/java.desktop/share/native/libharfbuzz/hb-ot-hdmx-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-head-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-hmtx-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-kern-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-layout-base-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-layout-common.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-layout-gdef-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-layout-gpos-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-layout-gsub-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-layout-gsubgpos.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-layout-jstf-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-layout.cc ! src/java.desktop/share/native/libharfbuzz/hb-ot-layout.h ! src/java.desktop/share/native/libharfbuzz/hb-ot-layout.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-map.cc ! src/java.desktop/share/native/libharfbuzz/hb-ot-map.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-math-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-math.cc ! src/java.desktop/share/native/libharfbuzz/hb-ot-math.h ! src/java.desktop/share/native/libharfbuzz/hb-ot-maxp-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-meta-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-metrics.cc ! src/java.desktop/share/native/libharfbuzz/hb-ot-metrics.h ! src/java.desktop/share/native/libharfbuzz/hb-ot-name-language-static.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-name-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-name.cc ! src/java.desktop/share/native/libharfbuzz/hb-ot-name.h ! src/java.desktop/share/native/libharfbuzz/hb-ot-os2-table.hh + src/java.desktop/share/native/libharfbuzz/hb-ot-post-table-v2subset.hh ! 
src/java.desktop/share/native/libharfbuzz/hb-ot-post-table.hh - src/java.desktop/share/native/libharfbuzz/hb-ot-shape-complex-arabic-fallback.hh - src/java.desktop/share/native/libharfbuzz/hb-ot-shape-complex-arabic-joining-list.hh - src/java.desktop/share/native/libharfbuzz/hb-ot-shape-complex-arabic-table.hh - src/java.desktop/share/native/libharfbuzz/hb-ot-shape-complex-indic-machine.hh - src/java.desktop/share/native/libharfbuzz/hb-ot-shape-complex-indic-table.cc - src/java.desktop/share/native/libharfbuzz/hb-ot-shape-complex-indic.cc - src/java.desktop/share/native/libharfbuzz/hb-ot-shape-complex-indic.hh - src/java.desktop/share/native/libharfbuzz/hb-ot-shape-complex-khmer-machine.hh - src/java.desktop/share/native/libharfbuzz/hb-ot-shape-complex-khmer.cc - src/java.desktop/share/native/libharfbuzz/hb-ot-shape-complex-khmer.hh - src/java.desktop/share/native/libharfbuzz/hb-ot-shape-complex-myanmar-machine.hh - src/java.desktop/share/native/libharfbuzz/hb-ot-shape-complex-myanmar.cc - src/java.desktop/share/native/libharfbuzz/hb-ot-shape-complex-myanmar.hh - src/java.desktop/share/native/libharfbuzz/hb-ot-shape-complex-syllabic.cc - src/java.desktop/share/native/libharfbuzz/hb-ot-shape-complex-syllabic.hh - src/java.desktop/share/native/libharfbuzz/hb-ot-shape-complex-use-machine.hh - src/java.desktop/share/native/libharfbuzz/hb-ot-shape-complex-use-table.hh - src/java.desktop/share/native/libharfbuzz/hb-ot-shape-complex-vowel-constraints.hh - src/java.desktop/share/native/libharfbuzz/hb-ot-shape-complex.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-shape-fallback.cc ! src/java.desktop/share/native/libharfbuzz/hb-ot-shape-normalize.cc ! src/java.desktop/share/native/libharfbuzz/hb-ot-shape-normalize.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-shape.cc ! 
src/java.desktop/share/native/libharfbuzz/hb-ot-shape.hh + src/java.desktop/share/native/libharfbuzz/hb-ot-shaper-arabic-fallback.hh + src/java.desktop/share/native/libharfbuzz/hb-ot-shaper-arabic-joining-list.hh + src/java.desktop/share/native/libharfbuzz/hb-ot-shaper-arabic-pua.hh + src/java.desktop/share/native/libharfbuzz/hb-ot-shaper-arabic-table.hh = src/java.desktop/share/native/libharfbuzz/hb-ot-shaper-arabic-win1256.hh = src/java.desktop/share/native/libharfbuzz/hb-ot-shaper-arabic.cc = src/java.desktop/share/native/libharfbuzz/hb-ot-shaper-arabic.hh = src/java.desktop/share/native/libharfbuzz/hb-ot-shaper-default.cc = src/java.desktop/share/native/libharfbuzz/hb-ot-shaper-hangul.cc = src/java.desktop/share/native/libharfbuzz/hb-ot-shaper-hebrew.cc + src/java.desktop/share/native/libharfbuzz/hb-ot-shaper-indic-machine.hh + src/java.desktop/share/native/libharfbuzz/hb-ot-shaper-indic-table.cc + src/java.desktop/share/native/libharfbuzz/hb-ot-shaper-indic.cc + src/java.desktop/share/native/libharfbuzz/hb-ot-shaper-indic.hh + src/java.desktop/share/native/libharfbuzz/hb-ot-shaper-khmer-machine.hh + src/java.desktop/share/native/libharfbuzz/hb-ot-shaper-khmer.cc + src/java.desktop/share/native/libharfbuzz/hb-ot-shaper-myanmar-machine.hh + src/java.desktop/share/native/libharfbuzz/hb-ot-shaper-myanmar.cc + src/java.desktop/share/native/libharfbuzz/hb-ot-shaper-syllabic.cc + src/java.desktop/share/native/libharfbuzz/hb-ot-shaper-syllabic.hh = src/java.desktop/share/native/libharfbuzz/hb-ot-shaper-thai.cc + src/java.desktop/share/native/libharfbuzz/hb-ot-shaper-use-machine.hh + src/java.desktop/share/native/libharfbuzz/hb-ot-shaper-use-table.hh = src/java.desktop/share/native/libharfbuzz/hb-ot-shaper-use.cc = src/java.desktop/share/native/libharfbuzz/hb-ot-shaper-vowel-constraints.cc + src/java.desktop/share/native/libharfbuzz/hb-ot-shaper-vowel-constraints.hh + src/java.desktop/share/native/libharfbuzz/hb-ot-shaper.hh ! 
src/java.desktop/share/native/libharfbuzz/hb-ot-stat-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-tag-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-tag.cc ! src/java.desktop/share/native/libharfbuzz/hb-ot-var-avar-table.hh + src/java.desktop/share/native/libharfbuzz/hb-ot-var-common.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-var-fvar-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-var-gvar-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-var-hvar-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-var-mvar-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-var.cc ! src/java.desktop/share/native/libharfbuzz/hb-ot-var.h ! src/java.desktop/share/native/libharfbuzz/hb-ot-vorg-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-pool.hh + src/java.desktop/share/native/libharfbuzz/hb-priority-queue.hh + src/java.desktop/share/native/libharfbuzz/hb-repacker.hh ! src/java.desktop/share/native/libharfbuzz/hb-sanitize.hh ! src/java.desktop/share/native/libharfbuzz/hb-serialize.hh ! src/java.desktop/share/native/libharfbuzz/hb-set-digest.hh ! src/java.desktop/share/native/libharfbuzz/hb-set.cc ! src/java.desktop/share/native/libharfbuzz/hb-set.h ! src/java.desktop/share/native/libharfbuzz/hb-set.hh ! src/java.desktop/share/native/libharfbuzz/hb-shape-plan.cc ! src/java.desktop/share/native/libharfbuzz/hb-shape-plan.hh ! src/java.desktop/share/native/libharfbuzz/hb-shape.cc ! src/java.desktop/share/native/libharfbuzz/hb-shaper.cc ! src/java.desktop/share/native/libharfbuzz/hb-static.cc ! src/java.desktop/share/native/libharfbuzz/hb-style.cc ! src/java.desktop/share/native/libharfbuzz/hb-style.h ! src/java.desktop/share/native/libharfbuzz/hb-subset-cff-common.cc ! src/java.desktop/share/native/libharfbuzz/hb-subset-cff-common.hh ! src/java.desktop/share/native/libharfbuzz/hb-subset-cff1.cc ! src/java.desktop/share/native/libharfbuzz/hb-subset-cff2.cc ! 
src/java.desktop/share/native/libharfbuzz/hb-subset-input.cc ! src/java.desktop/share/native/libharfbuzz/hb-subset-input.hh ! src/java.desktop/share/native/libharfbuzz/hb-subset-plan.cc ! src/java.desktop/share/native/libharfbuzz/hb-subset-plan.hh ! src/java.desktop/share/native/libharfbuzz/hb-subset.cc ! src/java.desktop/share/native/libharfbuzz/hb-subset.h ! src/java.desktop/share/native/libharfbuzz/hb-subset.hh ! src/java.desktop/share/native/libharfbuzz/hb-ucd-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-ucd.cc ! src/java.desktop/share/native/libharfbuzz/hb-unicode-emoji-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-unicode.cc ! src/java.desktop/share/native/libharfbuzz/hb-unicode.hh ! src/java.desktop/share/native/libharfbuzz/hb-vector.hh ! src/java.desktop/share/native/libharfbuzz/hb-version.h ! src/java.desktop/share/native/libharfbuzz/hb.hh Changeset: 3b9059a1 Author: Daniel Fuchs Date: 2022-07-12 09:59:29 +0000 URL: https://git.openjdk.org/loom/commit/3b9059a1471ba74af8bf6a3c0e5b2e1140eb4afd 8290083: ResponseBodyBeforeError: AssertionError or SSLException: Unsupported or unrecognized SSL message Reviewed-by: jpai ! test/jdk/java/net/httpclient/ResponseBodyBeforeError.java Changeset: 04c47da1 Author: Daniel Jeliński Date: 2022-07-12 11:30:17 +0000 URL: https://git.openjdk.org/loom/commit/04c47da118b2870d1c7525348a2ffdf9cd1cc0a4 8289768: Clean up unused code Reviewed-by: dfuchs, lancea, weijun, naoto, cjplummer, alanb, michaelm, chegar ! src/java.base/macosx/native/libjava/ProcessHandleImpl_macosx.c ! src/java.base/macosx/native/libjli/java_md_macosx.m ! src/java.base/macosx/native/libnet/DefaultProxySelector.c ! src/java.base/macosx/native/libnio/fs/BsdNativeDispatcher.c ! src/java.base/share/native/launcher/defines.h ! src/java.base/share/native/libjava/NativeLibraries.c ! src/java.base/share/native/libjli/java.c ! src/java.base/share/native/libjli/parse_manifest.c ! src/java.base/share/native/libverify/check_code.c ! 
src/java.base/share/native/libzip/zip_util.c ! src/java.base/unix/native/jspawnhelper/jspawnhelper.c ! src/java.base/unix/native/libjava/ProcessImpl_md.c ! src/java.base/unix/native/libjava/TimeZone_md.c ! src/java.base/unix/native/libjava/java_props_md.c ! src/java.base/unix/native/libjava/path_util.c ! src/java.base/unix/native/libjli/java_md.c ! src/java.base/unix/native/libjli/java_md_common.c ! src/java.base/unix/native/libnet/DefaultProxySelector.c ! src/java.base/unix/native/libnet/Inet6AddressImpl.c ! src/java.base/unix/native/libnet/NetworkInterface.c ! src/java.base/unix/native/libnet/net_util_md.c ! src/java.base/unix/native/libnio/ch/NativeThread.c ! src/java.base/unix/native/libnio/ch/Net.c ! src/java.base/unix/native/libnio/ch/UnixDomainSockets.c ! src/java.base/windows/native/libjava/ProcessHandleImpl_win.c ! src/java.base/windows/native/libjava/TimeZone_md.c ! src/java.base/windows/native/libjava/io_util_md.c ! src/java.base/windows/native/libjli/java_md.c ! src/java.base/windows/native/libnet/NetworkInterface.c ! src/java.base/windows/native/libnio/ch/Net.c ! src/java.base/windows/native/libnio/fs/WindowsNativeDispatcher.c ! src/java.instrument/windows/native/libinstrument/FileSystemSupport_md.c ! src/java.security.jgss/share/native/libj2gss/GSSLibStub.c ! src/java.security.jgss/windows/native/libsspi_bridge/sspi.cpp ! src/jdk.crypto.cryptoki/share/native/libj2pkcs11/p11_keymgmt.c ! src/jdk.crypto.cryptoki/unix/native/libj2pkcs11/p11_md.c ! src/jdk.crypto.mscapi/windows/native/libsunmscapi/security.cpp ! src/jdk.hotspot.agent/linux/native/libsaproc/LinuxDebuggerLocal.cpp ! src/jdk.hotspot.agent/linux/native/libsaproc/libproc_impl.c ! src/jdk.hotspot.agent/linux/native/libsaproc/ps_core.c ! src/jdk.hotspot.agent/linux/native/libsaproc/symtab.c ! src/jdk.hotspot.agent/macosx/native/libsaproc/symtab.c ! src/jdk.jdi/share/native/libdt_shmem/SharedMemoryTransport.c ! src/jdk.jdwp.agent/share/native/libjdwp/log_messages.c ! 
src/jdk.management/unix/native/libmanagement_ext/OperatingSystemImpl.c ! src/jdk.sctp/unix/native/libsctp/SctpNet.c Changeset: e5491a26 Author: Matthias Baesken Date: 2022-07-12 12:10:28 +0000 URL: https://git.openjdk.org/loom/commit/e5491a2605177a9dca87a060d99aa5ea4fd4a239 8289910: unify os::message_box across posix platforms Reviewed-by: iklam, dholmes ! src/hotspot/os/aix/os_aix.cpp ! src/hotspot/os/bsd/os_bsd.cpp ! src/hotspot/os/linux/os_linux.cpp ! src/hotspot/os/posix/os_posix.cpp Changeset: 393dc7ad Author: Martin Doerr Date: 2022-07-12 13:31:51 +0000 URL: https://git.openjdk.org/loom/commit/393dc7ade716485f4452d0185caf9e630e4c6139 8290082: [PPC64] ZGC C2 load barrier stub needs to preserve vector registers Reviewed-by: eosterlund, rrich ! src/hotspot/cpu/ppc/gc/z/zBarrierSetAssembler_ppc.cpp ! src/hotspot/cpu/ppc/ppc.ad ! src/hotspot/cpu/ppc/register_ppc.hpp ! src/hotspot/cpu/ppc/vmreg_ppc.cpp ! src/hotspot/cpu/ppc/vmreg_ppc.hpp Changeset: ea12615d Author: Ryan Ernst Committer: Chris Hegarty Date: 2022-07-12 13:50:36 +0000 URL: https://git.openjdk.org/loom/commit/ea12615d2f4574467d93cca6b4cc81fc18986307 8288984: Simplification in java.lang.Runtime::exit Reviewed-by: dholmes, chegar, alanb, kbarrett ! src/java.base/share/classes/java/lang/Runtime.java ! src/java.base/share/classes/java/lang/Shutdown.java Changeset: 0e906975 Author: Erik Gahlin Date: 2022-07-12 14:14:56 +0000 URL: https://git.openjdk.org/loom/commit/0e906975a82e2f23c452c2f4ac5cd942f00ce743 8290133: JFR: Remove unused methods in Bits.java Reviewed-by: mgronlun ! src/jdk.jfr/share/classes/jdk/jfr/internal/Bits.java ! src/jdk.jfr/share/classes/jdk/jfr/internal/event/EventWriter.java Changeset: 728157fa Author: Ralf Schmelter Date: 2022-07-12 14:51:55 +0000 URL: https://git.openjdk.org/loom/commit/728157fa03913991088f6bb257a8bc16706792a9 8289917: Metadata for regionsRefilled of G1EvacuationStatistics event is wrong Reviewed-by: tschatzl, mgronlun, stuefe, egahlin ! 
src/hotspot/share/jfr/metadata/metadata.xml Changeset: 7f0e9bd6 Author: Ralf Schmelter Date: 2022-07-12 14:53:46 +0000 URL: https://git.openjdk.org/loom/commit/7f0e9bd632198c7fd34d27b85ca51ea0e2442e4d 8289745: JfrStructCopyFailed uses heap words instead of bytes for object sizes Reviewed-by: mgronlun, stuefe ! src/hotspot/share/gc/g1/g1Trace.cpp ! src/hotspot/share/gc/shared/gcTraceSend.cpp ! test/jdk/jdk/jfr/event/gc/detailed/PromotionFailedEvent.java ! test/jdk/jdk/jfr/event/gc/detailed/TestEvacuationFailedEvent.java Changeset: e8568b89 Author: Ludvig Janiuk Committer: Erik Gahlin Date: 2022-07-12 15:54:36 +0000 URL: https://git.openjdk.org/loom/commit/e8568b890a829f3481a57f4eb5cf1796e363858b 8290020: Deadlock in leakprofiler::emit_events during shutdown Reviewed-by: mgronlun, dholmes, egahlin ! src/hotspot/share/jfr/jfr.cpp ! src/hotspot/share/jfr/jfr.hpp ! src/hotspot/share/prims/jvm.cpp ! src/hotspot/share/runtime/java.cpp ! src/hotspot/share/runtime/java.hpp ! test/jdk/jdk/jfr/jvm/TestDumpOnCrash.java Changeset: fed3af8a Author: Maurizio Cimadamore Date: 2022-07-11 14:30:19 +0000 URL: https://git.openjdk.org/loom/commit/fed3af8ae069fc760a24e750292acbb468b14ce5 8287809: Revisit implementation of memory session Reviewed-by: jvernee ! src/java.base/share/classes/java/nio/Buffer.java ! src/java.base/share/classes/java/nio/Direct-X-Buffer.java.template ! src/java.base/share/classes/jdk/internal/foreign/AbstractMemorySegmentImpl.java ! src/java.base/share/classes/jdk/internal/foreign/ConfinedSession.java ! src/java.base/share/classes/jdk/internal/foreign/HeapMemorySegmentImpl.java ! src/java.base/share/classes/jdk/internal/foreign/MappedMemorySegmentImpl.java ! src/java.base/share/classes/jdk/internal/foreign/MemoryAddressImpl.java ! src/java.base/share/classes/jdk/internal/foreign/MemorySessionImpl.java ! src/java.base/share/classes/jdk/internal/foreign/NativeMemorySegmentImpl.java ! src/java.base/share/classes/jdk/internal/foreign/Scoped.java ! 
src/java.base/share/classes/jdk/internal/foreign/SharedSession.java
! src/java.base/share/classes/jdk/internal/foreign/abi/SharedUtils.java
! src/java.base/share/classes/jdk/internal/foreign/abi/aarch64/linux/LinuxAArch64VaList.java
! src/java.base/share/classes/jdk/internal/foreign/abi/aarch64/macos/MacOsAArch64VaList.java
! src/java.base/share/classes/jdk/internal/foreign/abi/x64/sysv/SysVVaList.java
! src/java.base/share/classes/jdk/internal/foreign/abi/x64/windows/WinVaList.java
! src/java.base/share/classes/jdk/internal/misc/X-ScopedMemoryAccess-bin.java.template
! src/java.base/share/classes/jdk/internal/misc/X-ScopedMemoryAccess.java.template
! src/java.base/share/classes/sun/nio/ch/FileChannelImpl.java
! test/jdk/java/foreign/TestByteBuffer.java
! test/jdk/java/foreign/TestMemorySession.java

Changeset: 62fbc3f8
Author: Pavel Rappo
Date: 2022-07-11 15:43:20 +0000
URL: https://git.openjdk.org/loom/commit/62fbc3f883f06324abe8635efc48f9fc20f79f69

8287379: Using @inheritDoc in an inapplicable context shouldn't crash javadoc

Reviewed-by: jjg

! src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclets/formats/html/HtmlDocletWriter.java
! src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclets/toolkit/taglets/InheritDocTaglet.java
! src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclets/toolkit/taglets/TagletManager.java
! src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclets/toolkit/taglets/TagletWriter.java
! src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclets/toolkit/util/DocFinder.java
! src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclint/Checker.java
! test/langtools/jdk/javadoc/doclet/InheritDocForUserTags/DocTest.java
! test/langtools/jdk/javadoc/doclet/testInheritDocWithinInappropriateTag/TestInheritDocWithinInappropriateTag.java
! test/langtools/jdk/javadoc/doclet/testRelativeLinks/TestRelativeLinks.java
! test/langtools/jdk/javadoc/doclet/testRelativeLinks/pkg/D.java
! test/langtools/jdk/javadoc/doclet/testRelativeLinks/pkg/sub/F.java
! test/langtools/jdk/javadoc/doclet/testRelativeLinks/pkg2/E.java
! test/langtools/jdk/javadoc/doclet/testSimpleTagInherit/TestSimpleTagInherit.java
! test/langtools/jdk/javadoc/doclet/testSimpleTagInherit/p/BaseClass.java
! test/langtools/jdk/javadoc/doclet/testSimpleTagInherit/p/TestClass.java
! test/langtools/jdk/javadoc/doclet/testTaglets/TestTaglets.out

Changeset: 39715f3d
Author: Christoph Langer
Date: 2022-07-11 17:46:22 +0000
URL: https://git.openjdk.org/loom/commit/39715f3da7e8749bf477b818ae06f4dd99c223c4

8287902: UnreadableRB case in MissingResourceCauseTest is not working reliably on Windows

Backport-of: 975316e3e5f1208e4e15eadc2493d25c15554647

! test/jdk/java/util/ResourceBundle/Control/MissingResourceCauseTest.java

Changeset: c3806b93
Author: Serguei Spitsyn
Date: 2022-07-11 22:44:03 +0000
URL: https://git.openjdk.org/loom/commit/c3806b93c48f826e940eecd0ba29995d7f0c796b

8289709: fatal error: stuck in JvmtiVTMSTransitionDisabler::disable_VTMS_transitions

Reviewed-by: alanb, amenkov, lmesnik

! test/hotspot/jtreg/serviceability/jvmti/events/FramePop/framepop02/framepop02.java
! test/hotspot/jtreg/serviceability/jvmti/events/FramePop/framepop02/libframepop02.cpp

Changeset: 3164c98f
Author: Jorn Vernee
Date: 2022-07-12 11:25:45 +0000
URL: https://git.openjdk.org/loom/commit/3164c98f4c02a48cad62dd4f9b6cc55d64ac6d83

8289148: j.l.foreign.VaList::nextVarg call could throw IndexOutOfBoundsException or even crash the VM
8289333: Specification of method j.l.foreign.VaList::skip deserves clarification
8289156: j.l.foreign.VaList::skip call could throw java.lang.IndexOutOfBoundsException: Out of bound access on segment

Reviewed-by: mcimadamore

! src/java.base/share/classes/java/lang/foreign/VaList.java
! src/java.base/share/classes/jdk/internal/foreign/abi/SharedUtils.java
! src/java.base/share/classes/jdk/internal/foreign/abi/aarch64/linux/LinuxAArch64VaList.java
! src/java.base/share/classes/jdk/internal/foreign/abi/aarch64/macos/MacOsAArch64VaList.java
! src/java.base/share/classes/jdk/internal/foreign/abi/x64/sysv/SysVVaList.java
! src/java.base/share/classes/jdk/internal/foreign/abi/x64/windows/WinVaList.java
! test/jdk/java/foreign/valist/VaListTest.java

Changeset: d9ca438d
Author: Jesper Wilhelmsson
Date: 2022-07-12 16:16:16 +0000
URL: https://git.openjdk.org/loom/commit/d9ca438d06166f153d11bb55c9ec672fc63c0e9e

Merge

! src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclets/formats/html/HtmlDocletWriter.java
! src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclint/Checker.java
! test/hotspot/jtreg/serviceability/jvmti/events/FramePop/framepop02/libframepop02.cpp
! src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclets/formats/html/HtmlDocletWriter.java
! src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclint/Checker.java
! test/hotspot/jtreg/serviceability/jvmti/events/FramePop/framepop02/libframepop02.cpp

Changeset: 31f7fc04
Author: Jayashree Huttanagoudar
Committer: Weijun Wang
Date: 2022-07-12 20:12:22 +0000
URL: https://git.openjdk.org/loom/commit/31f7fc043b4616cb2d5f161cda357d0ebfb795f0

8283082: sun.security.x509.X509CertImpl.delete("x509.info.validity") nulls out info field

Reviewed-by: weijun

! src/java.base/share/classes/sun/security/x509/X509CertImpl.java
+ test/jdk/sun/security/x509/X509CertImpl/JDK8283082.java

Changeset: 6e18883d
Author: Prasanta Sadhukhan
Date: 2022-07-13 05:06:04 +0000
URL: https://git.openjdk.org/loom/commit/6e18883d8ffd9a7b7d495da05e9859dc1d1a2677

8290162: Reset recursion counter missed in fix of JDK-8224267

Reviewed-by: prr

! src/java.desktop/share/classes/javax/swing/plaf/basic/BasicOptionPaneUI.java
! test/jdk/javax/swing/JOptionPane/TestOptionPaneStackOverflow.java

Changeset: 572c14ef
Author: Jonathan Gibbons
Date: 2022-07-13 14:45:04 +0000
URL: https://git.openjdk.org/loom/commit/572c14efc67860e75edaa50608b4c61aec5997da

8288624: Cleanup CommentHelper.getText0

Reviewed-by: hannesw

! src/java.base/share/classes/java/util/Locale.java
! src/jdk.compiler/share/classes/com/sun/tools/javac/parser/DocCommentParser.java
! src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclets/formats/html/HtmlDocletWriter.java
! src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclets/formats/html/HtmlSerialFieldWriter.java
! src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclets/formats/html/TagletWriterImpl.java
! src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclets/toolkit/SerializedFormWriter.java
! src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclets/toolkit/builders/SerializedFormBuilder.java
! src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclets/toolkit/taglets/CodeTaglet.java
! src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclets/toolkit/taglets/TagletWriter.java
! src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclets/toolkit/util/CommentHelper.java
! test/langtools/jdk/javadoc/doclet/testSeeTag/TestSeeTag.java
+ test/langtools/jdk/javadoc/doclet/testSerialWithLink/TestSerialWithLink.java

Changeset: f528124f
Author: Alan Bateman
Date: 2022-07-13 15:03:37 +0000
URL: https://git.openjdk.org/loom/commit/f528124f571a29da49defbef30eeca04ab4a00ce

8289284: jdk.tracePinnedThreads output confusing when pinned due to native frame

Reviewed-by: jpai, mchung

! make/test/JtregNativeJdk.gmk
! src/java.base/share/classes/java/lang/PinnedThreadPrinter.java
! test/jdk/java/lang/Thread/virtual/TracePinnedThreads.java
+ test/jdk/java/lang/Thread/virtual/libTracePinnedThreads.c

Changeset: 44fb92e2
Author: Brian Burkhalter
Date: 2022-07-13 15:13:27 +0000
URL: https://git.openjdk.org/loom/commit/44fb92e2aa8a708b94c568e3d39217cb4c39f6bf

8290197: test/jdk/java/nio/file/Files/probeContentType/Basic.java fails on some systems for the ".rar" extension

Reviewed-by: lancea, dfuchs, jpai

! test/jdk/java/nio/file/Files/probeContentType/Basic.java

Changeset: 2583feb2
Author: Thomas Schatzl
Date: 2022-07-13 16:08:59 +0000
URL: https://git.openjdk.org/loom/commit/2583feb21bf5419afc3c1953d964cf89d65fe8a2

8290023: Remove use of IgnoreUnrecognizedVMOptions in gc tests

Reviewed-by: ayang, lkorinth, kbarrett

! test/hotspot/jtreg/gc/TestObjectAlignment.java
! test/hotspot/jtreg/gc/epsilon/TestAlignment.java
! test/hotspot/jtreg/gc/epsilon/TestMaxTLAB.java
! test/hotspot/jtreg/gc/g1/TestEagerReclaimHumongousRegionsClearMarkBits.java
! test/hotspot/jtreg/gc/g1/TestLargePageUseForAuxMemory.java
! test/hotspot/jtreg/gc/g1/TestLargePageUseForHeap.java
! test/hotspot/jtreg/gc/g1/TestVerificationInConcurrentCycle.java
! test/hotspot/jtreg/gc/metaspace/CompressedClassSpaceSizeInJmapHeap.java
! test/hotspot/jtreg/gc/metaspace/TestMetaspaceMemoryPool.java
! test/hotspot/jtreg/gc/metaspace/TestMetaspacePerfCounters.java
! test/hotspot/jtreg/gc/metaspace/TestPerfCountersAndMemoryPools.java
! test/hotspot/jtreg/gc/shenandoah/TestVerifyJCStress.java
! test/hotspot/jtreg/gc/shenandoah/options/TestSelectiveBarrierFlags.java

Changeset: 53580455
Author: Doug Lea
Date: 2022-07-13 18:05:42 +0000
URL: https://git.openjdk.org/loom/commit/535804554deef213d056cbd6bce14aeff04c32fb

8066859: java/lang/ref/OOMEInReferenceHandler.java failed with java.lang.Exception: Reference Handler thread died

Reviewed-by: alanb

! src/java.base/share/classes/java/util/concurrent/locks/AbstractQueuedLongSynchronizer.java
! src/java.base/share/classes/java/util/concurrent/locks/AbstractQueuedSynchronizer.java
! test/jdk/ProblemList-Xcomp.txt
! test/jdk/ProblemList.txt
+ test/jdk/java/util/concurrent/locks/Lock/OOMEInAQS.java

Changeset: 5e3ecff7
Author: Thomas Schatzl
Date: 2022-07-13 18:31:03 +0000
URL: https://git.openjdk.org/loom/commit/5e3ecff7a60708aaf4a3c63f85907e4fb2dcbc9e

8290253: gc/g1/TestVerificationInConcurrentCycle.java#id1 fails with "Error. can't find sun.hotspot.WhiteBox in test directory or libraries"

Reviewed-by: dcubed

! test/hotspot/jtreg/gc/g1/TestVerificationInConcurrentCycle.java

Changeset: 74ac5df9
Author: Doug Simon
Date: 2022-07-13 19:15:53 +0000
URL: https://git.openjdk.org/loom/commit/74ac5df96fb4344f005180f8643cb0c9223b1556

8290234: [JVMCI] use JVMCIKlassHandle to protect raw Klass* values from concurrent G1 scanning

Reviewed-by: kvn, never

! src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/CompilerToVM.java
! src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/HotSpotMethodData.java
! src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/HotSpotResolvedJavaMethodImpl.java

Changeset: 775006fa
Author: Alan Bateman
Date: 2022-07-14 07:28:08 +0000
URL: https://git.openjdk.org/loom/commit/775006fa9affb15e04ecb22724d64768457bc32d

Merge with jdk-20+6

! src/hotspot/share/opto/library_call.cpp
! src/hotspot/share/prims/jvm.cpp
! test/hotspot/jtreg/ProblemList-Xcomp.txt
! test/hotspot/jtreg/ProblemList.txt
! test/jdk/ProblemList.txt
! src/hotspot/share/opto/library_call.cpp
! src/hotspot/share/prims/jvm.cpp
! test/hotspot/jtreg/ProblemList-Xcomp.txt
! test/hotspot/jtreg/ProblemList.txt
! test/jdk/ProblemList.txt

Changeset: 3195dc2c
Author: Alan Bateman
Date: 2022-07-14 08:59:21 +0000
URL: https://git.openjdk.org/loom/commit/3195dc2c80dacacd7f39293a1a44ca5950078bc6

exclude runtime/ErrorHandling/MachCodeFramesInErrorFile.java from wrapper runs

! test/hotspot/jtreg/ProblemList-vthread.txt

From duke at openjdk.org Thu Jul 14 08:45:42 2022
From: duke at openjdk.org (duke)
Date: Thu, 14 Jul 2022 08:45:42 GMT
Subject: git: openjdk/loom: master: 123 new changesets
Message-ID: <347eca15-4530-40c8-ab4b-eb206eaa1697@openjdk.org>

Changeset: 4ad18cf0
Author: ScientificWare
Committer: Andrey Turbanov
Date: 2022-07-06 08:19:40 +0000
URL: https://git.openjdk.org/loom/commit/4ad18cf088e12f3582b8f6117a44ae4607f69839

8289730: Deprecated code sample in java.lang.ClassCastException

Reviewed-by: darcy

! src/java.base/share/classes/java/lang/ClassCastException.java

Changeset: ac6be165
Author: Severin Gehwolf
Date: 2022-07-06 08:24:47 +0000
URL: https://git.openjdk.org/loom/commit/ac6be165196457a26d837760b5f5030fe010d633

8289695: [TESTBUG] TestMemoryAwareness.java fails on cgroups v2 and crun

Reviewed-by: sspitsyn

! test/hotspot/jtreg/containers/docker/TestMemoryAwareness.java

Changeset: 83418952
Author: Thomas Schatzl
Date: 2022-07-06 09:39:25 +0000
URL: https://git.openjdk.org/loom/commit/834189527e16d6fc3aedb97108b0f74c391dbc3b

8289739: Add G1 specific GC breakpoints for testing

Reviewed-by: kbarrett, iwalulya

! src/hotspot/share/gc/g1/g1ConcurrentMarkThread.cpp
! test/hotspot/jtreg/gc/TestConcurrentGCBreakpoints.java
! test/lib/sun/hotspot/WhiteBox.java

Changeset: cbaf6e80
Author: Roland Westrelin
Date: 2022-07-06 11:36:12 +0000
URL: https://git.openjdk.org/loom/commit/cbaf6e807e2b959a0264c87035916850798a2dc6

8288022: c2: Transform (CastLL (AddL into (AddL (CastLL when possible

Reviewed-by: thartmann, kvn

! src/hotspot/share/opto/castnode.cpp
! src/hotspot/share/opto/castnode.hpp
! src/hotspot/share/opto/compile.hpp
! src/hotspot/share/opto/convertnode.cpp
! src/hotspot/share/opto/library_call.cpp
! src/hotspot/share/opto/type.cpp
! src/hotspot/share/opto/type.hpp
+ test/hotspot/jtreg/compiler/c2/irTests/TestPushAddThruCast.java

Changeset: 83a5d599
Author: Coleen Phillimore
Date: 2022-07-06 12:07:36 +0000
URL: https://git.openjdk.org/loom/commit/83a5d5996bca26b5f2e97b67f9bfd0a6ad110327

8278479: RunThese test failure with +UseHeavyMonitors and +VerifyHeavyMonitors

Reviewed-by: kvn, dcubed, dlong

! src/hotspot/cpu/aarch64/c1_LIRAssembler_aarch64.cpp
! src/hotspot/cpu/arm/c1_LIRAssembler_arm.cpp
! src/hotspot/cpu/ppc/c1_LIRAssembler_ppc.cpp
! src/hotspot/cpu/riscv/c1_LIRAssembler_riscv.cpp
! src/hotspot/cpu/s390/c1_LIRAssembler_s390.cpp
! src/hotspot/cpu/x86/c1_LIRAssembler_x86.cpp

Changeset: 75c0a5b8
Author: Thomas Stuefe
Date: 2022-07-06 13:17:54 +0000
URL: https://git.openjdk.org/loom/commit/75c0a5b828de5a2c1baa7226e43d23db62aa8375

8288824: [arm32] Display isetstate in register output

Reviewed-by: dsamersoff, snazarki

! src/hotspot/os_cpu/linux_arm/os_linux_arm.cpp

Changeset: cc2b7927
Author: Andrew Haley
Date: 2022-07-06 13:49:46 +0000
URL: https://git.openjdk.org/loom/commit/cc2b79270445ccfb2181894fed2edfd4518a2904

8288992: AArch64: CMN should be handled the same way as CMP

Reviewed-by: adinn, ngasson

! src/hotspot/cpu/aarch64/macroAssembler_aarch64.cpp
! src/hotspot/cpu/aarch64/macroAssembler_aarch64.hpp

Changeset: 82a8bd7e
Author: Xue-Lei Andrew Fan
Date: 2022-07-06 14:23:44 +0000
URL: https://git.openjdk.org/loom/commit/82a8bd7e92a1867b0c82f051361938be8610428d

8287596: Reorg jdk.test.lib.util.ForceGC

Reviewed-by: rriggs

! test/jdk/java/io/ObjectStreamClass/TestOSCClassLoaderLeak.java
! test/jdk/java/lang/ClassLoader/loadLibraryUnload/LoadLibraryUnload.java
! test/jdk/java/lang/ClassLoader/nativeLibrary/NativeLibraryTest.java
! test/jdk/java/lang/invoke/defineHiddenClass/UnloadingTest.java
! test/jdk/java/lang/reflect/callerCache/ReflectionCallerCacheTest.java
! test/jdk/javax/security/auth/callback/PasswordCallback/CheckCleanerBound.java
! test/jdk/sun/security/jgss/GssContextCleanup.java
! test/jdk/sun/security/jgss/GssNameCleanup.java
! test/jdk/sun/security/pkcs11/Provider/MultipleLogins.java
! test/lib/jdk/test/lib/util/ForceGC.java

Changeset: dfb24ae4
Author: Andrew Haley
Date: 2022-07-06 15:22:00 +0000
URL: https://git.openjdk.org/loom/commit/dfb24ae4b7d32c0c625a9396429d167d9dcca183

8289060: Undefined Behaviour in class VMReg

Reviewed-by: jvernee, kvn

! src/hotspot/share/code/vmreg.cpp
! src/hotspot/share/code/vmreg.hpp
! src/hotspot/share/opto/optoreg.hpp

Changeset: 9f37ba44
Author: Lance Andersen
Date: 2022-07-06 15:37:23 +0000
URL: https://git.openjdk.org/loom/commit/9f37ba44b8a6dfb635f39b6950fd5a7ae8894902

8288706: Unused parameter 'boolean newln' in method java.lang.VersionProps#print(boolean, boolean)

Reviewed-by: iris, alanb, rriggs

! src/java.base/share/classes/java/lang/VersionProps.java.template
! src/java.base/share/native/libjli/java.c

Changeset: 35387d5c
Author: Raffaello Giulietti
Committer: Joe Darcy
Date: 2022-07-06 16:22:18 +0000
URL: https://git.openjdk.org/loom/commit/35387d5cb6aa9e59d62b8e1b137b53ec88521310

8289260: BigDecimal movePointLeft() and movePointRight() do not follow their API spec

Reviewed-by: darcy

! src/java.base/share/classes/java/math/BigDecimal.java
+ test/jdk/java/math/BigDecimal/MovePointTests.java

Changeset: c4dcce4b
Author: Serguei Spitsyn
Date: 2022-07-02 20:43:11 +0000
URL: https://git.openjdk.org/loom/commit/c4dcce4bca8808f8f733128f2e2b1dd48a28a322

8289619: JVMTI SelfSuspendDisablerTest.java failed with RuntimeException: Test FAILED: Unexpected thread state

Reviewed-by: alanb, cjplummer

! test/hotspot/jtreg/serviceability/jvmti/vthread/SelfSuspendDisablerTest/SelfSuspendDisablerTest.java

Changeset: dc4edd3f
Author: Erik Gahlin
Date: 2022-07-03 19:28:39 +0000
URL: https://git.openjdk.org/loom/commit/dc4edd3fe83038b03cad6b3652d12aff987f3987

8289183: jdk.jfr.consumer.RecordedThread.getId references Thread::getId, should be Thread::threadId

Reviewed-by: alanb

! src/jdk.jfr/share/classes/jdk/jfr/consumer/RecordedThread.java

Changeset: 5b5bc6c2
Author: Christoph Langer
Date: 2022-07-04 07:52:38 +0000
URL: https://git.openjdk.org/loom/commit/5b5bc6c26e9843e16f241b89853a3a1fa5ae61f0

8287672: jtreg test com/sun/jndi/ldap/LdapPoolTimeoutTest.java fails intermittently in nightly run

Reviewed-by: stuefe
Backport-of: 7e211d7daac32dca8f26f408d1a3b2c7805b5a2e

! test/jdk/com/sun/jndi/ldap/LdapPoolTimeoutTest.java

Changeset: 1a271645
Author: Jatin Bhateja
Date: 2022-07-04 11:31:32 +0000
URL: https://git.openjdk.org/loom/commit/1a271645a84ac4d7d6570e739d42c05cc328891d

8287851: C2 crash: assert(t->meet(t0) == t) failed: Not monotonic

Reviewed-by: thartmann, chagedorn

! src/hotspot/share/opto/intrinsicnode.cpp
! test/jdk/ProblemList.txt

Changeset: 0dff3276
Author: Matthias Baesken
Date: 2022-07-04 14:45:48 +0000
URL: https://git.openjdk.org/loom/commit/0dff3276e863fcbf496fe6decd3335cd43cab21f

8289569: [test] java/lang/ProcessBuilder/Basic.java fails on Alpine/musl

Reviewed-by: clanger
Backport-of: a8edd7a12f955fe843c7c9ad4273e9c653a80c5a

! test/jdk/java/lang/ProcessBuilder/Basic.java

Changeset: f640fc5a
Author: Pavel Rappo
Date: 2022-07-04 16:00:53 +0000
URL: https://git.openjdk.org/loom/commit/f640fc5a1eb876a657d0de011dcd9b9a42b88eec

8067757: Incorrect HTML generation for copied javadoc with multiple @throws tags

Reviewed-by: jjg

! src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclets/formats/html/TagletWriterImpl.java
! src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclets/toolkit/taglets/ThrowsTaglet.java
! src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclets/toolkit/util/CommentHelper.java
! test/langtools/jdk/javadoc/doclet/testThrowsInheritance/TestThrowsTagInheritance.java
+ test/langtools/jdk/javadoc/doclet/testThrowsInheritanceMultiple/TestOneToMany.java

Changeset: 29ea6429
Author: Chris Plummer
Date: 2022-07-05 17:46:59 +0000
URL: https://git.openjdk.org/loom/commit/29ea6429d2f906a61331aab1aef172d0d854fb6f

8287847: Fatal Error when suspending virtual thread after it has terminated

Reviewed-by: alanb, sspitsyn

! src/jdk.jdwp.agent/share/native/libjdwp/threadControl.c
! test/jdk/TEST.groups
+ test/jdk/com/sun/jdi/SuspendAfterDeath.java
! test/jdk/com/sun/jdi/TestScaffold.java

Changeset: 30e134e9
Author: Daniel D. Daugherty
Date: 2022-07-05 20:42:42 +0000
URL: https://git.openjdk.org/loom/commit/30e134e909c53423acd1ec20c106f4200bc10285

8289091: move oop safety check from SharedRuntime::get_java_tid() to JavaThread::threadObj()

Reviewed-by: rehn, dholmes

! src/hotspot/share/runtime/sharedRuntime.cpp
! src/hotspot/share/runtime/thread.cpp

Changeset: 0b6fd482
Author: Tyler Steele
Date: 2022-07-05 21:11:50 +0000
URL: https://git.openjdk.org/loom/commit/0b6fd4820c1f98d6154d7182345273a4c9468af5

8288128: S390X: Fix crashes after JDK-8284161 (Virtual Threads)

Reviewed-by: mdoerr

! src/hotspot/cpu/s390/frame_s390.cpp
! src/hotspot/cpu/s390/frame_s390.hpp
! src/hotspot/cpu/s390/frame_s390.inline.hpp
! src/hotspot/cpu/s390/nativeInst_s390.hpp
! src/hotspot/cpu/s390/stubGenerator_s390.cpp
! src/hotspot/cpu/s390/templateInterpreterGenerator_s390.cpp
! src/hotspot/share/runtime/signature.cpp

Changeset: b3a0e482
Author: Alan Bateman
Date: 2022-07-06 06:40:07 +0000
URL: https://git.openjdk.org/loom/commit/b3a0e482adc32946d03b10589f746bb31f9c9e5b

8289439: Clarify relationship between ThreadStart/ThreadEnd and can_support_virtual_threads capability

Reviewed-by: dholmes, dcubed, sspitsyn, cjplummer

! src/hotspot/share/prims/jvmti.xml
! src/hotspot/share/prims/jvmtiH.xsl

Changeset: 0526402a
Author: Thomas Stuefe
Date: 2022-07-06 10:15:38 +0000
URL: https://git.openjdk.org/loom/commit/0526402a023d5725bf32ef6587001ad05e28c10f

8289477: Memory corruption with CPU_ALLOC, CPU_FREE on muslc

Backport-of: da6d1fc0e0aeb1fdb504aced4b0dba0290ec240f

! src/hotspot/os/linux/os_linux.cpp

Changeset: 2a6ec88c
Author: Jesper Wilhelmsson
Date: 2022-07-06 21:01:10 +0000
URL: https://git.openjdk.org/loom/commit/2a6ec88cd09adec43df3da1b22653271517b14a8

Merge

! src/hotspot/cpu/s390/stubGenerator_s390.cpp
! src/hotspot/share/runtime/javaThread.cpp
! src/hotspot/share/runtime/sharedRuntime.cpp
! test/jdk/ProblemList.txt
! src/hotspot/cpu/s390/stubGenerator_s390.cpp
+ src/hotspot/share/runtime/javaThread.cpp
! src/hotspot/share/runtime/sharedRuntime.cpp
! test/jdk/ProblemList.txt

Changeset: a40c17b7
Author: Joe Darcy
Date: 2022-07-06 21:28:09 +0000
URL: https://git.openjdk.org/loom/commit/a40c17b730257919f18066dbce4fc92ed3c4f10e

8289775: Update java.lang.invoke.MethodHandle[s] to use snippets

Reviewed-by: jrose

! src/java.base/share/classes/java/lang/invoke/MethodHandle.java
! src/java.base/share/classes/java/lang/invoke/MethodHandles.java

Changeset: 403a9bc7
Author: Tongbao Zhang
Committer: Jie Fu
Date: 2022-07-06 22:49:57 +0000
URL: https://git.openjdk.org/loom/commit/403a9bc79645018ee61b47bab67fe231577dd914

8289436: Make the redefine timer statistics more accurate

Reviewed-by: sspitsyn, cjplummer, lmesnik

! src/hotspot/share/prims/jvmtiRedefineClasses.cpp
! src/hotspot/share/prims/jvmtiRedefineClasses.hpp

Changeset: 569de453
Author: Thomas Stuefe
Date: 2022-07-07 05:30:10 +0000
URL: https://git.openjdk.org/loom/commit/569de453c3267089d04befd756b81470693cf2de

8289620: gtest/MetaspaceUtilsGtests.java failed with unexpected stats values

Reviewed-by: coleenp

! test/hotspot/gtest/metaspace/test_metaspaceUtils.cpp

Changeset: a79ce4e7
Author: Xiaohong Gong
Date: 2022-07-07 08:14:21 +0000
URL: https://git.openjdk.org/loom/commit/a79ce4e74858e78acc83c12d500303f667dc3f6b

8286941: Add mask IR for partial vector operations for ARM SVE

Reviewed-by: kvn, jbhateja, njian, ngasson

! src/hotspot/cpu/aarch64/aarch64.ad
! src/hotspot/cpu/aarch64/aarch64_sve.ad
! src/hotspot/cpu/aarch64/aarch64_sve_ad.m4
! src/hotspot/cpu/aarch64/assembler_aarch64.hpp
! src/hotspot/cpu/aarch64/c2_MacroAssembler_aarch64.cpp
! src/hotspot/cpu/aarch64/c2_MacroAssembler_aarch64.hpp
! src/hotspot/cpu/arm/arm.ad
! src/hotspot/cpu/ppc/ppc.ad
! src/hotspot/cpu/riscv/riscv.ad
! src/hotspot/cpu/s390/s390.ad
! src/hotspot/cpu/x86/x86.ad
! src/hotspot/share/opto/matcher.cpp
! src/hotspot/share/opto/matcher.hpp
! src/hotspot/share/opto/memnode.hpp
! src/hotspot/share/opto/node.hpp
! src/hotspot/share/opto/vectornode.cpp
! src/hotspot/share/opto/vectornode.hpp
! test/hotspot/gtest/aarch64/aarch64-asmtest.py
! test/hotspot/gtest/aarch64/asmtest.out.h

Changeset: d1249aa5
Author: Kevin Walls
Date: 2022-07-07 08:41:50 +0000
URL: https://git.openjdk.org/loom/commit/d1249aa5cbf3a3a3a24e85bcec30aecbc3e09bc0

8198668: MemoryPoolMBean/isUsageThresholdExceeded/isexceeded001/TestDescription.java still failing

Reviewed-by: lmesnik, sspitsyn

! test/hotspot/jtreg/ProblemList.txt
! test/hotspot/jtreg/vmTestbase/nsk/monitoring/MemoryPoolMBean/isUsageThresholdExceeded/isexceeded001.java

Changeset: cce77a70
Author: Thomas Stuefe
Date: 2022-07-07 09:42:14 +0000
URL: https://git.openjdk.org/loom/commit/cce77a700141a854bafaa5ccb33db026affcf322

8289799: Build warning in methodData.cpp memset zero-length parameter

Reviewed-by: jiefu, lucy

! src/hotspot/share/oops/methodData.cpp

Changeset: e05b2f2c
Author: Martin Doerr
Date: 2022-07-07 10:21:25 +0000
URL: https://git.openjdk.org/loom/commit/e05b2f2c3b9b0276099766bc38a55ff835c989e1

8289856: [PPC64] SIGSEGV in C2Compiler::init_c2_runtime() after JDK-8289060

Reviewed-by: dlong, lucy

! src/hotspot/cpu/ppc/ppc.ad

Changeset: 532a6ec7
Author: Prasanta Sadhukhan
Date: 2022-07-07 11:51:49 +0000
URL: https://git.openjdk.org/loom/commit/532a6ec7e3a048624b380b38b4611533a7caae18

7124313: [macosx] Swing Popups should overlap taskbar

Reviewed-by: serb, dmarkov

! test/jdk/ProblemList.txt
! test/jdk/javax/swing/JPopupMenu/6580930/bug6580930.java

Changeset: 77ad998b
Author: Jie Fu
Date: 2022-07-07 12:52:04 +0000
URL: https://git.openjdk.org/loom/commit/77ad998b6e741f7cd7cdd52155c024bbc77f2027

8289778: ZGC: incorrect use of os::free() for mountpoint string handling after JDK-8289633

Reviewed-by: stuefe, dholmes, mdoerr

! src/hotspot/os/linux/gc/z/zMountPoint_linux.cpp

Changeset: 013a5eee
Author: Albert Mingkun Yang
Date: 2022-07-07 13:53:24 +0000
URL: https://git.openjdk.org/loom/commit/013a5eeeb9d9a46778f68261ac69ed7235cdc7dd

8137280: Remove eager reclaim of humongous controls

Reviewed-by: tschatzl, iwalulya

! src/hotspot/share/gc/g1/g1CollectedHeap.cpp
! src/hotspot/share/gc/g1/g1CollectedHeap.hpp
! src/hotspot/share/gc/g1/g1GCPhaseTimes.cpp
! src/hotspot/share/gc/g1/g1YoungGCPostEvacuateTasks.cpp
! src/hotspot/share/gc/g1/g1_globals.hpp
! test/hotspot/jtreg/gc/g1/TestGreyReclaimedHumongousObjects.java

Changeset: 86f63f97
Author: Justin Gu
Committer: Coleen Phillimore
Date: 2022-07-07 14:57:24 +0000
URL: https://git.openjdk.org/loom/commit/86f63f9703b47b3b5b8fd093dbd117d8746091ff

8289164: Convert ResolutionErrorTable to use ResourceHashtable

Reviewed-by: iklam, coleenp

! src/hotspot/share/classfile/resolutionErrors.cpp
! src/hotspot/share/classfile/resolutionErrors.hpp
! src/hotspot/share/classfile/systemDictionary.cpp
! src/hotspot/share/classfile/systemDictionary.hpp
! src/hotspot/share/interpreter/linkResolver.cpp
! src/hotspot/share/oops/instanceKlass.cpp
+ test/hotspot/jtreg/runtime/ClassResolutionFail/ErrorsDemoTest.java

Changeset: 74ca6ca2
Author: Ivan Walulya
Date: 2022-07-07 15:09:30 +0000
URL: https://git.openjdk.org/loom/commit/74ca6ca25ba3ece0c92bf2c6e4f940996785c9a3

8289800: G1: G1CollectionSet::finalize_young_part clears survivor list too early

Reviewed-by: ayang, tschatzl

! src/hotspot/share/gc/g1/g1CollectionSet.cpp

Changeset: 8e7b45b8
Author: Coleen Phillimore
Date: 2022-07-07 15:27:55 +0000
URL: https://git.openjdk.org/loom/commit/8e7b45b82062cabad110ddcd51fa969b67483089

8282986: Remove "system" in boot class path names

Reviewed-by: iklam, dholmes

! src/hotspot/share/cds/filemap.cpp
! src/hotspot/share/classfile/classLoader.cpp
! src/hotspot/share/classfile/modules.cpp
! src/hotspot/share/runtime/arguments.cpp
! src/hotspot/share/runtime/arguments.hpp
! src/hotspot/share/runtime/os.cpp

Changeset: 95e3190d
Author: Thomas Schatzl
Date: 2022-07-07 15:46:05 +0000
URL: https://git.openjdk.org/loom/commit/95e3190d96424885707dd7d07e25e898ad642e5b

8210708: Use single mark bitmap in G1

Co-authored-by: Stefan Johansson
Co-authored-by: Ivan Walulya
Reviewed-by: iwalulya, ayang

! src/hotspot/share/gc/g1/g1BlockOffsetTable.cpp
! src/hotspot/share/gc/g1/g1BlockOffsetTable.hpp
! src/hotspot/share/gc/g1/g1BlockOffsetTable.inline.hpp
! src/hotspot/share/gc/g1/g1CodeBlobClosure.cpp
! src/hotspot/share/gc/g1/g1CollectedHeap.cpp
! src/hotspot/share/gc/g1/g1CollectedHeap.hpp
! src/hotspot/share/gc/g1/g1CollectedHeap.inline.hpp
! src/hotspot/share/gc/g1/g1CollectionSet.cpp
! src/hotspot/share/gc/g1/g1CollectorState.hpp
! src/hotspot/share/gc/g1/g1ConcurrentMark.cpp
! src/hotspot/share/gc/g1/g1ConcurrentMark.hpp
! src/hotspot/share/gc/g1/g1ConcurrentMark.inline.hpp
! src/hotspot/share/gc/g1/g1ConcurrentMarkBitMap.cpp
! src/hotspot/share/gc/g1/g1ConcurrentMarkBitMap.hpp
! src/hotspot/share/gc/g1/g1ConcurrentMarkBitMap.inline.hpp
! src/hotspot/share/gc/g1/g1ConcurrentMarkThread.cpp
! src/hotspot/share/gc/g1/g1ConcurrentMarkThread.hpp
+ src/hotspot/share/gc/g1/g1ConcurrentRebuildAndScrub.cpp
+ src/hotspot/share/gc/g1/g1ConcurrentRebuildAndScrub.hpp
! src/hotspot/share/gc/g1/g1EvacFailure.cpp
! src/hotspot/share/gc/g1/g1FullCollector.cpp
! src/hotspot/share/gc/g1/g1FullGCCompactTask.cpp
! src/hotspot/share/gc/g1/g1FullGCCompactTask.hpp
! src/hotspot/share/gc/g1/g1FullGCPrepareTask.cpp
! src/hotspot/share/gc/g1/g1FullGCPrepareTask.hpp
! src/hotspot/share/gc/g1/g1HeapVerifier.cpp
! src/hotspot/share/gc/g1/g1HeapVerifier.hpp
! src/hotspot/share/gc/g1/g1OopClosures.inline.hpp
! src/hotspot/share/gc/g1/g1ParScanThreadState.cpp
! src/hotspot/share/gc/g1/g1Policy.cpp
! src/hotspot/share/gc/g1/g1RegionMarkStatsCache.hpp
! src/hotspot/share/gc/g1/g1RemSet.cpp
! src/hotspot/share/gc/g1/g1RemSet.hpp
! src/hotspot/share/gc/g1/g1RemSetTrackingPolicy.cpp
! src/hotspot/share/gc/g1/g1SATBMarkQueueSet.cpp
! src/hotspot/share/gc/g1/g1YoungCollector.cpp
! src/hotspot/share/gc/g1/g1YoungGCPostEvacuateTasks.cpp
! src/hotspot/share/gc/g1/heapRegion.cpp
! src/hotspot/share/gc/g1/heapRegion.hpp
! src/hotspot/share/gc/g1/heapRegion.inline.hpp
! src/hotspot/share/gc/g1/heapRegionManager.cpp
! src/hotspot/share/gc/g1/heapRegionManager.hpp
! src/hotspot/share/gc/shared/markBitMap.hpp
! src/hotspot/share/gc/shared/markBitMap.inline.hpp
! src/hotspot/share/gc/shared/verifyOption.hpp
! test/hotspot/gtest/gc/g1/test_heapRegion.cpp
! test/hotspot/gtest/utilities/test_bitMap_search.cpp
! test/hotspot/jtreg/gc/g1/TestLargePageUseForAuxMemory.java

Changeset: a694e9e3
Author: Alex Kasko
Committer: Alexey Semenyuk
Date: 2022-07-07 16:45:35 +0000
URL: https://git.openjdk.org/loom/commit/a694e9e34d1e4388df200d11b168ca5265cea4ac

8288838: jpackage: file association additional arguments

Reviewed-by: asemenyuk, almatvee

! src/jdk.jpackage/windows/classes/jdk/jpackage/internal/WixAppImageFragmentBuilder.java
! test/jdk/tools/jpackage/helpers/jdk/jpackage/test/FileAssociations.java
! test/jdk/tools/jpackage/helpers/jdk/jpackage/test/LinuxHelper.java
! test/jdk/tools/jpackage/helpers/jdk/jpackage/test/PackageTest.java
! test/jdk/tools/jpackage/share/FileAssociationsTest.java

Changeset: 5564effe
Author: Ioi Lam
Date: 2022-07-07 17:29:25 +0000
URL: https://git.openjdk.org/loom/commit/5564effe9c69a5aa1975d059f69cef546be28502

8289763: Remove NULL check in CDSProtectionDomain::init_security_info()

Reviewed-by: ccheung, coleenp

! src/hotspot/share/cds/cdsProtectionDomain.cpp

Changeset: f7b18305
Author: Thomas Schatzl
Date: 2022-07-07 18:08:43 +0000
URL: https://git.openjdk.org/loom/commit/f7b183059a3023f8da73859f1577d08a807749b2

8289538: Make G1BlockOffsetTablePart unaware of block sizes

Reviewed-by: ayang, iwalulya

! src/hotspot/share/gc/g1/g1BlockOffsetTable.cpp
! src/hotspot/share/gc/g1/g1BlockOffsetTable.hpp
! src/hotspot/share/gc/g1/g1BlockOffsetTable.inline.hpp
! src/hotspot/share/gc/g1/g1CollectedHeap.cpp
! src/hotspot/share/gc/g1/heapRegion.hpp
! src/hotspot/share/gc/g1/heapRegion.inline.hpp

Changeset: 3e60e828
Author: Zdenek Zambersky
Committer: Valerie Peng
Date: 2022-07-07 18:18:04 +0000
URL: https://git.openjdk.org/loom/commit/3e60e828148a0490a4422d0724d15f3eccec17f0

8289301: P11Cipher should not throw out of bounds exception during padding

Reviewed-by: valeriep

! src/jdk.crypto.cryptoki/share/classes/sun/security/pkcs11/P11Cipher.java
+ test/jdk/sun/security/pkcs11/Cipher/TestPaddingOOB.java

Changeset: f93beacd
Author: Coleen Phillimore
Date: 2022-07-07 20:27:31 +0000
URL: https://git.openjdk.org/loom/commit/f93beacd2f64aab0f930ac822859380c00c51f0c

8252329: runtime/LoadClass/TestResize.java timed out

Reviewed-by: hseigel, iklam

! src/hotspot/share/classfile/classLoaderData.cpp
! src/hotspot/share/classfile/dictionary.cpp
! src/hotspot/share/classfile/dictionary.hpp
! test/hotspot/jtreg/runtime/LoadClass/TestResize.java

Changeset: 8cdead0c
Author: Coleen Phillimore
Date: 2022-07-07 20:28:34 +0000
URL: https://git.openjdk.org/loom/commit/8cdead0c94094a025c48eaefc7a3ef0c36a9629e

8278923: Document Klass::is_loader_alive

Reviewed-by: dholmes, iklam

! src/hotspot/share/oops/klass.inline.hpp

Changeset: f804f2ce
Author: Mark Powers
Committer: Valerie Peng
Date: 2022-07-07 23:20:58 +0000
URL: https://git.openjdk.org/loom/commit/f804f2ce8ef7a859aae021b20cbdcd9e34f9fb94

8284851: Update javax.crypto files to use proper javadoc for mentioned classes

Reviewed-by: weijun, valeriep

! src/java.base/share/classes/java/security/AccessControlContext.java
! src/java.base/share/classes/java/security/AccessControlException.java
! src/java.base/share/classes/java/security/AccessController.java
! src/java.base/share/classes/java/security/AlgorithmConstraints.java
! src/java.base/share/classes/java/security/AlgorithmParameterGenerator.java
! src/java.base/share/classes/java/security/AlgorithmParameterGeneratorSpi.java
! src/java.base/share/classes/java/security/AlgorithmParameters.java
! src/java.base/share/classes/java/security/AlgorithmParametersSpi.java
! src/java.base/share/classes/java/security/AllPermission.java
! src/java.base/share/classes/java/security/BasicPermission.java
! src/java.base/share/classes/java/security/Certificate.java
! src/java.base/share/classes/java/security/CodeSigner.java
! src/java.base/share/classes/java/security/CodeSource.java
! src/java.base/share/classes/java/security/DigestException.java
! src/java.base/share/classes/java/security/DigestInputStream.java
! src/java.base/share/classes/java/security/DigestOutputStream.java
! src/java.base/share/classes/java/security/DomainCombiner.java
! src/java.base/share/classes/java/security/DomainLoadStoreParameter.java
! src/java.base/share/classes/java/security/GeneralSecurityException.java
! src/java.base/share/classes/java/security/Guard.java
! src/java.base/share/classes/java/security/GuardedObject.java
! src/java.base/share/classes/java/security/Identity.java
! src/java.base/share/classes/java/security/IdentityScope.java
! src/java.base/share/classes/java/security/InvalidAlgorithmParameterException.java
! src/java.base/share/classes/java/security/InvalidKeyException.java
! src/java.base/share/classes/java/security/InvalidParameterException.java
! src/java.base/share/classes/java/security/Key.java
! src/java.base/share/classes/java/security/KeyException.java
! src/java.base/share/classes/java/security/KeyFactory.java
! src/java.base/share/classes/java/security/KeyManagementException.java
! src/java.base/share/classes/java/security/KeyPairGenerator.java
! src/java.base/share/classes/java/security/KeyPairGeneratorSpi.java
! src/java.base/share/classes/java/security/KeyStore.java
! src/java.base/share/classes/java/security/KeyStoreException.java
! src/java.base/share/classes/java/security/KeyStoreSpi.java
! src/java.base/share/classes/java/security/MessageDigest.java
! src/java.base/share/classes/java/security/MessageDigestSpi.java
! src/java.base/share/classes/java/security/NoSuchAlgorithmException.java
! src/java.base/share/classes/java/security/NoSuchProviderException.java
! src/java.base/share/classes/java/security/Permission.java
! src/java.base/share/classes/java/security/PermissionCollection.java
! src/java.base/share/classes/java/security/Permissions.java
! src/java.base/share/classes/java/security/Policy.java
! src/java.base/share/classes/java/security/PolicySpi.java
! src/java.base/share/classes/java/security/Principal.java
! src/java.base/share/classes/java/security/PrivilegedActionException.java
! src/java.base/share/classes/java/security/ProtectionDomain.java
! src/java.base/share/classes/java/security/Provider.java
! src/java.base/share/classes/java/security/ProviderException.java
! src/java.base/share/classes/java/security/SecureClassLoader.java
! src/java.base/share/classes/java/security/SecureRandom.java
! src/java.base/share/classes/java/security/Security.java
! src/java.base/share/classes/java/security/SecurityPermission.java
! src/java.base/share/classes/java/security/Signature.java
! src/java.base/share/classes/java/security/SignatureException.java
! src/java.base/share/classes/java/security/SignatureSpi.java
! src/java.base/share/classes/java/security/SignedObject.java
! src/java.base/share/classes/java/security/Signer.java
! src/java.base/share/classes/java/security/Timestamp.java
! src/java.base/share/classes/java/security/URIParameter.java
! src/java.base/share/classes/java/security/UnrecoverableEntryException.java
! src/java.base/share/classes/java/security/UnrecoverableKeyException.java
! src/java.base/share/classes/java/security/UnresolvedPermission.java
! src/java.base/share/classes/java/security/UnresolvedPermissionCollection.java
! src/java.base/share/classes/javax/crypto/AEADBadTagException.java
! src/java.base/share/classes/javax/crypto/BadPaddingException.java
! src/java.base/share/classes/javax/crypto/Cipher.java
! src/java.base/share/classes/javax/crypto/CipherInputStream.java
! src/java.base/share/classes/javax/crypto/CipherOutputStream.java
! src/java.base/share/classes/javax/crypto/CipherSpi.java
! src/java.base/share/classes/javax/crypto/CryptoAllPermission.java
! src/java.base/share/classes/javax/crypto/CryptoPermission.java
! src/java.base/share/classes/javax/crypto/CryptoPermissions.java
! src/java.base/share/classes/javax/crypto/CryptoPolicyParser.java
! src/java.base/share/classes/javax/crypto/EncryptedPrivateKeyInfo.java
! src/java.base/share/classes/javax/crypto/ExemptionMechanism.java
! src/java.base/share/classes/javax/crypto/ExemptionMechanismException.java
! src/java.base/share/classes/javax/crypto/ExemptionMechanismSpi.java
! src/java.base/share/classes/javax/crypto/IllegalBlockSizeException.java
! src/java.base/share/classes/javax/crypto/KeyAgreement.java
! src/java.base/share/classes/javax/crypto/KeyAgreementSpi.java
! src/java.base/share/classes/javax/crypto/KeyGenerator.java
! src/java.base/share/classes/javax/crypto/KeyGeneratorSpi.java
! src/java.base/share/classes/javax/crypto/Mac.java
! src/java.base/share/classes/javax/crypto/MacSpi.java
! src/java.base/share/classes/javax/crypto/NoSuchPaddingException.java
! src/java.base/share/classes/javax/crypto/NullCipher.java
! src/java.base/share/classes/javax/crypto/ProviderVerifier.java
! src/java.base/share/classes/javax/crypto/SealedObject.java
! src/java.base/share/classes/javax/crypto/SecretKeyFactory.java
! src/java.base/share/classes/javax/crypto/SecretKeyFactorySpi.java
! src/java.base/share/classes/javax/crypto/ShortBufferException.java

Changeset: 3f1174aa
Author: Yasumasa Suenaga
Date: 2022-07-08 00:04:46 +0000
URL: https://git.openjdk.org/loom/commit/3f1174aa4709aabcfde8b40deec88b8ed466cc06

8289646: configure script failed on WSL

Reviewed-by: ihse

! make/scripts/fixpath.sh

Changeset: ef3f2ed9
Author: Daniel D. Daugherty
Date: 2022-07-06 16:50:14 +0000
URL: https://git.openjdk.org/loom/commit/ef3f2ed9ba920ab8b1e3fb2029e7c0096dd11cc6

8289841: ProblemList vmTestbase/gc/gctests/MemoryEaterMT/MemoryEaterMT.java with ZGC on windows

Reviewed-by: rriggs

! test/hotspot/jtreg/ProblemList-zgc.txt

Changeset: 32b650c0
Author: Daniel D. Daugherty
Date: 2022-07-06 16:51:03 +0000
URL: https://git.openjdk.org/loom/commit/32b650c024bc294f6d28d1f0ebbef9865f455daf

8289840: ProblemList vmTestbase/nsk/jdwp/ThreadReference/ForceEarlyReturn/forceEarlyReturn002/forceEarlyReturn002.java when run with vthread wrapper

Reviewed-by: bpb

! test/hotspot/jtreg/ProblemList-svc-vthread.txt

Changeset: 55fa19b5
Author: Daniel D. Daugherty
Date: 2022-07-06 20:52:25 +0000
URL: https://git.openjdk.org/loom/commit/55fa19b508ab4d760d1c5ff71e37399c3b79d85c

8289857: ProblemList jdk/jfr/event/runtime/TestActiveSettingEvent.java

Reviewed-by: darcy

! test/jdk/ProblemList.txt

Changeset: 9a0fa824
Author: Ron Pressler
Date: 2022-07-06 20:53:13 +0000
URL: https://git.openjdk.org/loom/commit/9a0fa8242461afe9ee4bcf80523af13500c9c1f2

8288949: serviceability/jvmti/vthread/ContStackDepthTest/ContStackDepthTest.java failing

Reviewed-by: dlong, eosterlund, rehn

! src/hotspot/cpu/aarch64/macroAssembler_aarch64.cpp
! src/hotspot/cpu/aarch64/sharedRuntime_aarch64.cpp
! src/hotspot/cpu/x86/sharedRuntime_x86_64.cpp
! src/hotspot/share/code/compiledIC.cpp
! src/hotspot/share/code/compiledIC.hpp
! src/hotspot/share/oops/method.cpp
! src/hotspot/share/runtime/continuationEntry.cpp
! src/hotspot/share/runtime/continuationEntry.hpp
! src/hotspot/share/runtime/sharedRuntime.cpp
! test/hotspot/jtreg/ProblemList-Xcomp.txt

Changeset: 8f24d251
Author: Pavel Rappo
Date: 2022-07-06 22:01:12 +0000
URL: https://git.openjdk.org/loom/commit/8f24d25168c576191075c7344ef0d95a8f08b347

6509045: {@inheritDoc} only copies one instance of the specified exception

Reviewed-by: jjg

! src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclets/toolkit/taglets/ThrowsTaglet.java
! test/langtools/jdk/javadoc/doclet/testThrowsInheritanceMultiple/TestOneToMany.java

Changeset: 8dd94a2c
Author: Jan Lahoda
Date: 2022-07-07 07:54:18 +0000
URL: https://git.openjdk.org/loom/commit/8dd94a2c14f7456b3eaf3e02f38d9e114eb8acc3

8289196: Pattern domination not working properly for record patterns

Reviewed-by: vromero

! src/jdk.compiler/share/classes/com/sun/tools/javac/comp/Attr.java
! src/jdk.compiler/share/classes/com/sun/tools/javac/comp/Check.java
! test/langtools/tools/javac/patterns/Domination.java
! test/langtools/tools/javac/patterns/Domination.out
! test/langtools/tools/javac/patterns/SwitchErrors.out

Changeset: 889150b4
Author: Maurizio Cimadamore
Date: 2022-07-07 09:08:09 +0000
URL: https://git.openjdk.org/loom/commit/889150b47a7a33d302c1883320d2cfbb915c52e7

8289558: Need spec clarification of j.l.foreign.*Layout

Reviewed-by: psandoz, jvernee

!
test/jdk/ProblemList.txt Changeset: 9a0fa824 Author: Ron Pressler Date: 2022-07-06 20:53:13 +0000 URL: https://git.openjdk.org/loom/commit/9a0fa8242461afe9ee4bcf80523af13500c9c1f2 8288949: serviceability/jvmti/vthread/ContStackDepthTest/ContStackDepthTest.java failing Reviewed-by: dlong, eosterlund, rehn ! src/hotspot/cpu/aarch64/macroAssembler_aarch64.cpp ! src/hotspot/cpu/aarch64/sharedRuntime_aarch64.cpp ! src/hotspot/cpu/x86/sharedRuntime_x86_64.cpp ! src/hotspot/share/code/compiledIC.cpp ! src/hotspot/share/code/compiledIC.hpp ! src/hotspot/share/oops/method.cpp ! src/hotspot/share/runtime/continuationEntry.cpp ! src/hotspot/share/runtime/continuationEntry.hpp ! src/hotspot/share/runtime/sharedRuntime.cpp ! test/hotspot/jtreg/ProblemList-Xcomp.txt Changeset: 8f24d251 Author: Pavel Rappo Date: 2022-07-06 22:01:12 +0000 URL: https://git.openjdk.org/loom/commit/8f24d25168c576191075c7344ef0d95a8f08b347 6509045: {@inheritDoc} only copies one instance of the specified exception Reviewed-by: jjg ! src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclets/toolkit/taglets/ThrowsTaglet.java ! test/langtools/jdk/javadoc/doclet/testThrowsInheritanceMultiple/TestOneToMany.java Changeset: 8dd94a2c Author: Jan Lahoda Date: 2022-07-07 07:54:18 +0000 URL: https://git.openjdk.org/loom/commit/8dd94a2c14f7456b3eaf3e02f38d9e114eb8acc3 8289196: Pattern domination not working properly for record patterns Reviewed-by: vromero ! src/jdk.compiler/share/classes/com/sun/tools/javac/comp/Attr.java ! src/jdk.compiler/share/classes/com/sun/tools/javac/comp/Check.java ! test/langtools/tools/javac/patterns/Domination.java ! test/langtools/tools/javac/patterns/Domination.out ! test/langtools/tools/javac/patterns/SwitchErrors.out Changeset: 889150b4 Author: Maurizio Cimadamore Date: 2022-07-07 09:08:09 +0000 URL: https://git.openjdk.org/loom/commit/889150b47a7a33d302c1883320d2cfbb915c52e7 8289558: Need spec clarification of j.l.foreign.*Layout Reviewed-by: psandoz, jvernee ! 
src/java.base/share/classes/java/lang/foreign/AbstractLayout.java ! src/java.base/share/classes/java/lang/foreign/GroupLayout.java ! src/java.base/share/classes/java/lang/foreign/MemoryLayout.java ! src/java.base/share/classes/java/lang/foreign/SequenceLayout.java ! src/java.base/share/classes/java/lang/foreign/ValueLayout.java Changeset: a8eb7286 Author: Stuart Marks Date: 2022-07-07 16:54:15 +0000 URL: https://git.openjdk.org/loom/commit/a8eb728680529e81bea0584912dead394c35b040 8289779: Map::replaceAll javadoc has redundant @throws clauses Reviewed-by: prappo, iris ! src/java.base/share/classes/java/util/Map.java Changeset: 3212dc9c Author: Joe Wang Date: 2022-07-07 19:07:04 +0000 URL: https://git.openjdk.org/loom/commit/3212dc9c6f3538e1d0bd1809efd5f33ad8b47701 8289486: Improve XSLT XPath operators count efficiency Reviewed-by: naoto, lancea ! src/java.xml/share/classes/com/sun/java_cup/internal/runtime/lr_parser.java ! src/java.xml/share/classes/com/sun/org/apache/xalan/internal/xsltc/compiler/XPathParser.java Changeset: 01b9f95c Author: Jesper Wilhelmsson Date: 2022-07-08 02:07:36 +0000 URL: https://git.openjdk.org/loom/commit/01b9f95c62953e7f9ca10eafd42d21c634413827 Merge ! src/hotspot/cpu/aarch64/macroAssembler_aarch64.cpp ! src/hotspot/cpu/aarch64/sharedRuntime_aarch64.cpp ! src/hotspot/cpu/x86/sharedRuntime_x86_64.cpp ! src/hotspot/share/runtime/continuationEntry.cpp ! src/hotspot/share/runtime/continuationEntry.hpp ! src/hotspot/share/runtime/sharedRuntime.cpp ! src/jdk.compiler/share/classes/com/sun/tools/javac/comp/Check.java ! test/hotspot/jtreg/ProblemList-Xcomp.txt ! test/jdk/ProblemList.txt Changeset: 1fec62f2 Author: Ioi Lam Date: 2022-07-08 05:39:24 +0000 URL: https://git.openjdk.org/loom/commit/1fec62f299294a0c3b3c639883cdcdc8f1410224 8289710: Move Suspend/Resume classes out of os.hpp Reviewed-by: dholmes, coleenp ! src/hotspot/os/aix/osThread_aix.hpp ! src/hotspot/os/bsd/osThread_bsd.hpp ! src/hotspot/os/linux/osThread_linux.hpp ! src/hotspot/os/posix/signals_posix.cpp + src/hotspot/os/posix/suspendResume_posix.cpp + src/hotspot/os/posix/suspendResume_posix.hpp ! src/hotspot/os/windows/os_windows.cpp ! src/hotspot/os_cpu/linux_s390/javaThread_linux_s390.cpp ! src/hotspot/share/jfr/periodic/sampling/jfrThreadSampler.cpp ! src/hotspot/share/runtime/os.cpp ! src/hotspot/share/runtime/os.hpp ! src/hotspot/share/runtime/osThread.hpp + src/hotspot/share/runtime/suspendedThreadTask.cpp + src/hotspot/share/runtime/suspendedThreadTask.hpp Changeset: ac399e97 Author: Robbin Ehn Date: 2022-07-08 07:12:19 +0000 URL: https://git.openjdk.org/loom/commit/ac399e9777731e7a9cbc2ad3396acfa5358b1c76 8286957: Held monitor count Reviewed-by: rpressler, eosterlund ! make/test/JtregNativeHotspot.gmk ! src/hotspot/cpu/aarch64/aarch64.ad ! src/hotspot/cpu/aarch64/c1_LIRAssembler_aarch64.cpp ! src/hotspot/cpu/aarch64/c1_MacroAssembler_aarch64.cpp ! src/hotspot/cpu/aarch64/globalDefinitions_aarch64.hpp ! src/hotspot/cpu/aarch64/interp_masm_aarch64.cpp ! src/hotspot/cpu/aarch64/macroAssembler_aarch64.cpp ! src/hotspot/cpu/aarch64/macroAssembler_aarch64.hpp ! src/hotspot/cpu/aarch64/sharedRuntime_aarch64.cpp ! src/hotspot/cpu/aarch64/stubGenerator_aarch64.cpp ! src/hotspot/cpu/aarch64/templateInterpreterGenerator_aarch64.cpp ! src/hotspot/cpu/aarch64/templateTable_aarch64.cpp ! src/hotspot/cpu/x86/c1_LIRAssembler_x86.cpp ! src/hotspot/cpu/x86/c1_MacroAssembler_x86.cpp ! src/hotspot/cpu/x86/c2_MacroAssembler_x86.cpp ! 
src/hotspot/cpu/x86/globalDefinitions_x86.hpp ! src/hotspot/cpu/x86/interp_masm_x86.cpp ! src/hotspot/cpu/x86/macroAssembler_x86.cpp ! src/hotspot/cpu/x86/macroAssembler_x86.hpp ! src/hotspot/cpu/x86/sharedRuntime_x86_32.cpp ! src/hotspot/cpu/x86/sharedRuntime_x86_64.cpp ! src/hotspot/cpu/x86/stubGenerator_x86_64.cpp ! src/hotspot/cpu/x86/templateInterpreterGenerator_x86.cpp ! src/hotspot/cpu/x86/templateTable_x86.cpp ! src/hotspot/cpu/zero/globalDefinitions_zero.hpp ! src/hotspot/cpu/zero/zeroInterpreter_zero.cpp ! src/hotspot/share/c1/c1_Runtime1.cpp ! src/hotspot/share/interpreter/zero/bytecodeInterpreter.cpp ! src/hotspot/share/jvmci/vmStructs_jvmci.cpp ! src/hotspot/share/opto/macro.cpp ! src/hotspot/share/opto/runtime.cpp ! src/hotspot/share/prims/jni.cpp ! src/hotspot/share/runtime/continuationEntry.hpp ! src/hotspot/share/runtime/continuationFreezeThaw.cpp ! src/hotspot/share/runtime/deoptimization.cpp ! src/hotspot/share/runtime/javaThread.cpp ! src/hotspot/share/runtime/javaThread.hpp ! src/hotspot/share/runtime/objectMonitor.cpp ! src/hotspot/share/runtime/sharedRuntime.cpp ! src/hotspot/share/runtime/sharedRuntime.hpp ! src/hotspot/share/runtime/synchronizer.cpp ! src/hotspot/share/runtime/thread.cpp + test/hotspot/jtreg/runtime/Monitor/CompleteExit.java + test/hotspot/jtreg/runtime/Monitor/libCompleteExit.c Changeset: 1b8f466d Author: Thomas Schatzl Date: 2022-07-08 07:15:56 +0000 URL: https://git.openjdk.org/loom/commit/1b8f466dbad08c0fccb8f0069ff5141cf8d6bf2c 8289740: Add verification testing during all concurrent phases in G1 Reviewed-by: iwalulya, ayang + test/hotspot/jtreg/gc/g1/TestVerificationInConcurrentCycle.java Changeset: f1967cfa Author: Thomas Schatzl Date: 2022-07-08 08:49:17 +0000 URL: https://git.openjdk.org/loom/commit/f1967cfaabb30dba82eca0ab028f43020fe50c2b 8289997: gc/g1/TestVerificationInConcurrentCycle.java fails due to use of debug-only option Reviewed-by: lkorinth ! 
test/hotspot/jtreg/gc/g1/TestVerificationInConcurrentCycle.java Changeset: a13af650 Author: Dmitry Chuyko Date: 2022-07-08 08:55:13 +0000 URL: https://git.openjdk.org/loom/commit/a13af650437de508d64f0b12285a6ffc9901f85f 8282322: AArch64: Provide a means to eliminate all STREX family of instructions Reviewed-by: ngasson, aph ! src/hotspot/cpu/aarch64/stubGenerator_aarch64.cpp ! src/hotspot/os_cpu/linux_aarch64/atomic_linux_aarch64.S Changeset: d852e99a Author: Vladimir Kempik Date: 2022-07-08 09:14:51 +0000 URL: https://git.openjdk.org/loom/commit/d852e99ae9de4c611438c50ce37ea1806f58cbdf 8289697: buffer overflow in MTLVertexCache.m: MTLVertexCache_AddGlyphQuad Reviewed-by: prr ! src/java.desktop/macosx/native/libawt_lwawt/java2d/metal/MTLVertexCache.m Changeset: e7795851 Author: Coleen Phillimore Date: 2022-07-08 15:55:14 +0000 URL: https://git.openjdk.org/loom/commit/e7795851d2e02389e63950fef939084b18ec4bfb 8271707: migrate tests to use jdk.test.whitebox.WhiteBox Reviewed-by: lmesnik, dholmes ! test/hotspot/jtreg/applications/ctw/modules/generate.bash ! test/hotspot/jtreg/applications/ctw/modules/java_base.java ! test/hotspot/jtreg/applications/ctw/modules/java_base_2.java ! test/hotspot/jtreg/applications/ctw/modules/java_compiler.java ! test/hotspot/jtreg/applications/ctw/modules/java_datatransfer.java ! test/hotspot/jtreg/applications/ctw/modules/java_desktop.java ! test/hotspot/jtreg/applications/ctw/modules/java_desktop_2.java ! test/hotspot/jtreg/applications/ctw/modules/java_instrument.java ! test/hotspot/jtreg/applications/ctw/modules/java_logging.java ! test/hotspot/jtreg/applications/ctw/modules/java_management.java ! test/hotspot/jtreg/applications/ctw/modules/java_management_rmi.java ! test/hotspot/jtreg/applications/ctw/modules/java_naming.java ! test/hotspot/jtreg/applications/ctw/modules/java_net_http.java ! test/hotspot/jtreg/applications/ctw/modules/java_prefs.java ! test/hotspot/jtreg/applications/ctw/modules/java_rmi.java ! 
test/hotspot/jtreg/applications/ctw/modules/java_scripting.java ! test/hotspot/jtreg/applications/ctw/modules/java_security_jgss.java ! test/hotspot/jtreg/applications/ctw/modules/java_security_sasl.java ! test/hotspot/jtreg/applications/ctw/modules/java_smartcardio.java ! test/hotspot/jtreg/applications/ctw/modules/java_sql.java ! test/hotspot/jtreg/applications/ctw/modules/java_sql_rowset.java ! test/hotspot/jtreg/applications/ctw/modules/java_transaction_xa.java ! test/hotspot/jtreg/applications/ctw/modules/java_xml.java ! test/hotspot/jtreg/applications/ctw/modules/java_xml_crypto.java ! test/hotspot/jtreg/applications/ctw/modules/jdk_accessibility.java ! test/hotspot/jtreg/applications/ctw/modules/jdk_attach.java ! test/hotspot/jtreg/applications/ctw/modules/jdk_charsets.java ! test/hotspot/jtreg/applications/ctw/modules/jdk_compiler.java ! test/hotspot/jtreg/applications/ctw/modules/jdk_crypto_cryptoki.java ! test/hotspot/jtreg/applications/ctw/modules/jdk_crypto_ec.java ! test/hotspot/jtreg/applications/ctw/modules/jdk_crypto_mscapi.java ! test/hotspot/jtreg/applications/ctw/modules/jdk_dynalink.java ! test/hotspot/jtreg/applications/ctw/modules/jdk_editpad.java ! test/hotspot/jtreg/applications/ctw/modules/jdk_hotspot_agent.java ! test/hotspot/jtreg/applications/ctw/modules/jdk_httpserver.java ! test/hotspot/jtreg/applications/ctw/modules/jdk_internal_ed.java ! test/hotspot/jtreg/applications/ctw/modules/jdk_internal_jvmstat.java ! test/hotspot/jtreg/applications/ctw/modules/jdk_internal_le.java ! test/hotspot/jtreg/applications/ctw/modules/jdk_internal_opt.java ! test/hotspot/jtreg/applications/ctw/modules/jdk_internal_vm_ci.java ! test/hotspot/jtreg/applications/ctw/modules/jdk_jartool.java ! test/hotspot/jtreg/applications/ctw/modules/jdk_javadoc.java ! test/hotspot/jtreg/applications/ctw/modules/jdk_jcmd.java ! test/hotspot/jtreg/applications/ctw/modules/jdk_jconsole.java ! test/hotspot/jtreg/applications/ctw/modules/jdk_jdeps.java ! 
test/hotspot/jtreg/applications/ctw/modules/jdk_jdi.java ! test/hotspot/jtreg/applications/ctw/modules/jdk_jfr.java ! test/hotspot/jtreg/applications/ctw/modules/jdk_jlink.java ! test/hotspot/jtreg/applications/ctw/modules/jdk_jshell.java ! test/hotspot/jtreg/applications/ctw/modules/jdk_jsobject.java ! test/hotspot/jtreg/applications/ctw/modules/jdk_jstatd.java ! test/hotspot/jtreg/applications/ctw/modules/jdk_localedata.java ! test/hotspot/jtreg/applications/ctw/modules/jdk_localedata_2.java ! test/hotspot/jtreg/applications/ctw/modules/jdk_management.java ! test/hotspot/jtreg/applications/ctw/modules/jdk_management_agent.java ! test/hotspot/jtreg/applications/ctw/modules/jdk_management_jfr.java ! test/hotspot/jtreg/applications/ctw/modules/jdk_naming_dns.java ! test/hotspot/jtreg/applications/ctw/modules/jdk_naming_rmi.java ! test/hotspot/jtreg/applications/ctw/modules/jdk_net.java ! test/hotspot/jtreg/applications/ctw/modules/jdk_sctp.java ! test/hotspot/jtreg/applications/ctw/modules/jdk_security_auth.java ! test/hotspot/jtreg/applications/ctw/modules/jdk_security_jgss.java ! test/hotspot/jtreg/applications/ctw/modules/jdk_unsupported.java ! test/hotspot/jtreg/applications/ctw/modules/jdk_unsupported_desktop.java ! test/hotspot/jtreg/applications/ctw/modules/jdk_xml_dom.java ! test/hotspot/jtreg/applications/ctw/modules/jdk_zipfs.java ! test/hotspot/jtreg/compiler/allocation/TestFailedAllocationBadGraph.java ! test/hotspot/jtreg/compiler/arguments/TestUseBMI1InstructionsOnSupportedCPU.java ! test/hotspot/jtreg/compiler/arguments/TestUseBMI1InstructionsOnUnsupportedCPU.java ! test/hotspot/jtreg/compiler/arguments/TestUseCountLeadingZerosInstructionOnSupportedCPU.java ! test/hotspot/jtreg/compiler/arguments/TestUseCountLeadingZerosInstructionOnUnsupportedCPU.java ! test/hotspot/jtreg/compiler/arguments/TestUseCountTrailingZerosInstructionOnSupportedCPU.java ! test/hotspot/jtreg/compiler/arguments/TestUseCountTrailingZerosInstructionOnUnsupportedCPU.java ! 
test/hotspot/jtreg/compiler/arraycopy/TestArrayCopyNoInitDeopt.java ! test/hotspot/jtreg/compiler/arraycopy/TestDefaultMethodArrayCloneDeoptC2.java ! test/hotspot/jtreg/compiler/arraycopy/TestOutOfBoundsArrayLoad.java ! test/hotspot/jtreg/compiler/c2/Test6857159.java ! test/hotspot/jtreg/compiler/c2/Test8004741.java ! test/hotspot/jtreg/compiler/c2/TestDeadDataLoopIGVN.java ! test/hotspot/jtreg/compiler/c2/aarch64/TestVolatiles.java ! test/hotspot/jtreg/compiler/c2/cr6589834/Test_ia32.java ! test/hotspot/jtreg/compiler/c2/irTests/TestSuperwordFailsUnrolling.java ! test/hotspot/jtreg/compiler/calls/common/CallsBase.java ! test/hotspot/jtreg/compiler/calls/fromCompiled/CompiledInvokeDynamic2CompiledTest.java ! test/hotspot/jtreg/compiler/calls/fromCompiled/CompiledInvokeDynamic2InterpretedTest.java ! test/hotspot/jtreg/compiler/calls/fromCompiled/CompiledInvokeDynamic2NativeTest.java ! test/hotspot/jtreg/compiler/calls/fromCompiled/CompiledInvokeInterface2CompiledTest.java ! test/hotspot/jtreg/compiler/calls/fromCompiled/CompiledInvokeInterface2InterpretedTest.java ! test/hotspot/jtreg/compiler/calls/fromCompiled/CompiledInvokeInterface2NativeTest.java ! test/hotspot/jtreg/compiler/calls/fromCompiled/CompiledInvokeSpecial2CompiledTest.java ! test/hotspot/jtreg/compiler/calls/fromCompiled/CompiledInvokeSpecial2InterpretedTest.java ! test/hotspot/jtreg/compiler/calls/fromCompiled/CompiledInvokeSpecial2NativeTest.java ! test/hotspot/jtreg/compiler/calls/fromCompiled/CompiledInvokeStatic2CompiledTest.java ! test/hotspot/jtreg/compiler/calls/fromCompiled/CompiledInvokeStatic2InterpretedTest.java ! test/hotspot/jtreg/compiler/calls/fromCompiled/CompiledInvokeStatic2NativeTest.java ! test/hotspot/jtreg/compiler/calls/fromCompiled/CompiledInvokeVirtual2CompiledTest.java ! test/hotspot/jtreg/compiler/calls/fromCompiled/CompiledInvokeVirtual2InterpretedTest.java ! test/hotspot/jtreg/compiler/calls/fromCompiled/CompiledInvokeVirtual2NativeTest.java ! 
test/hotspot/jtreg/compiler/calls/fromInterpreted/InterpretedInvokeDynamic2CompiledTest.java ! test/hotspot/jtreg/compiler/calls/fromInterpreted/InterpretedInvokeDynamic2InterpretedTest.java ! test/hotspot/jtreg/compiler/calls/fromInterpreted/InterpretedInvokeDynamic2NativeTest.java ! test/hotspot/jtreg/compiler/calls/fromInterpreted/InterpretedInvokeInterface2CompiledTest.java ! test/hotspot/jtreg/compiler/calls/fromInterpreted/InterpretedInvokeInterface2InterpretedTest.java ! test/hotspot/jtreg/compiler/calls/fromInterpreted/InterpretedInvokeInterface2NativeTest.java ! test/hotspot/jtreg/compiler/calls/fromInterpreted/InterpretedInvokeSpecial2CompiledTest.java ! test/hotspot/jtreg/compiler/calls/fromInterpreted/InterpretedInvokeSpecial2InterpretedTest.java ! test/hotspot/jtreg/compiler/calls/fromInterpreted/InterpretedInvokeSpecial2NativeTest.java ! test/hotspot/jtreg/compiler/calls/fromInterpreted/InterpretedInvokeStatic2CompiledTest.java ! test/hotspot/jtreg/compiler/calls/fromInterpreted/InterpretedInvokeStatic2InterpretedTest.java ! test/hotspot/jtreg/compiler/calls/fromInterpreted/InterpretedInvokeStatic2NativeTest.java ! test/hotspot/jtreg/compiler/calls/fromInterpreted/InterpretedInvokeVirtual2CompiledTest.java ! test/hotspot/jtreg/compiler/calls/fromInterpreted/InterpretedInvokeVirtual2InterpretedTest.java ! test/hotspot/jtreg/compiler/calls/fromInterpreted/InterpretedInvokeVirtual2NativeTest.java ! test/hotspot/jtreg/compiler/calls/fromNative/NativeInvokeSpecial2CompiledTest.java ! test/hotspot/jtreg/compiler/calls/fromNative/NativeInvokeSpecial2InterpretedTest.java ! test/hotspot/jtreg/compiler/calls/fromNative/NativeInvokeSpecial2NativeTest.java ! test/hotspot/jtreg/compiler/calls/fromNative/NativeInvokeStatic2CompiledTest.java ! test/hotspot/jtreg/compiler/calls/fromNative/NativeInvokeStatic2InterpretedTest.java ! test/hotspot/jtreg/compiler/calls/fromNative/NativeInvokeStatic2NativeTest.java ! 
test/hotspot/jtreg/compiler/calls/fromNative/NativeInvokeVirtual2CompiledTest.java ! test/hotspot/jtreg/compiler/calls/fromNative/NativeInvokeVirtual2InterpretedTest.java ! test/hotspot/jtreg/compiler/calls/fromNative/NativeInvokeVirtual2NativeTest.java ! test/hotspot/jtreg/compiler/cha/AbstractRootMethod.java ! test/hotspot/jtreg/compiler/cha/DefaultRootMethod.java ! test/hotspot/jtreg/compiler/cha/StrengthReduceInterfaceCall.java ! test/hotspot/jtreg/compiler/cha/Utils.java ! test/hotspot/jtreg/compiler/ciReplay/TestClientVM.java ! test/hotspot/jtreg/compiler/ciReplay/TestDumpReplay.java ! test/hotspot/jtreg/compiler/ciReplay/TestDumpReplayCommandLine.java ! test/hotspot/jtreg/compiler/ciReplay/TestInlining.java ! test/hotspot/jtreg/compiler/ciReplay/TestLambdas.java ! test/hotspot/jtreg/compiler/ciReplay/TestNoClassFile.java ! test/hotspot/jtreg/compiler/ciReplay/TestSAClient.java ! test/hotspot/jtreg/compiler/ciReplay/TestSAServer.java ! test/hotspot/jtreg/compiler/ciReplay/TestServerVM.java ! test/hotspot/jtreg/compiler/ciReplay/TestUnresolvedClasses.java ! test/hotspot/jtreg/compiler/ciReplay/TestVMNoCompLevel.java ! test/hotspot/jtreg/compiler/ciReplay/VMBase.java ! test/hotspot/jtreg/compiler/classUnloading/methodUnloading/TestMethodUnloading.java ! test/hotspot/jtreg/compiler/codecache/CheckSegmentedCodeCache.java ! test/hotspot/jtreg/compiler/codecache/OverflowCodeCacheTest.java ! test/hotspot/jtreg/compiler/codecache/jmx/BeanTypeTest.java ! test/hotspot/jtreg/compiler/codecache/jmx/CodeCacheUtils.java ! test/hotspot/jtreg/compiler/codecache/jmx/CodeHeapBeanPresenceTest.java ! test/hotspot/jtreg/compiler/codecache/jmx/GetUsageTest.java ! test/hotspot/jtreg/compiler/codecache/jmx/InitialAndMaxUsageTest.java ! test/hotspot/jtreg/compiler/codecache/jmx/ManagerNamesTest.java ! test/hotspot/jtreg/compiler/codecache/jmx/MemoryPoolsPresenceTest.java ! test/hotspot/jtreg/compiler/codecache/jmx/PeakUsageTest.java ! 
test/hotspot/jtreg/compiler/codecache/jmx/PoolsIndependenceTest.java ! test/hotspot/jtreg/compiler/codecache/jmx/ThresholdNotificationsTest.java ! test/hotspot/jtreg/compiler/codecache/jmx/UsageThresholdExceededSeveralTimesTest.java ! test/hotspot/jtreg/compiler/codecache/jmx/UsageThresholdExceededTest.java ! test/hotspot/jtreg/compiler/codecache/jmx/UsageThresholdIncreasedTest.java ! test/hotspot/jtreg/compiler/codecache/jmx/UsageThresholdNotExceededTest.java ! test/hotspot/jtreg/compiler/codecache/stress/Helper.java ! test/hotspot/jtreg/compiler/codecache/stress/OverloadCompileQueueTest.java ! test/hotspot/jtreg/compiler/codecache/stress/RandomAllocationTest.java ! test/hotspot/jtreg/compiler/codecache/stress/ReturnBlobToWrongHeapTest.java ! test/hotspot/jtreg/compiler/codecache/stress/UnexpectedDeoptimizationAllTest.java ! test/hotspot/jtreg/compiler/codecache/stress/UnexpectedDeoptimizationTest.java ! test/hotspot/jtreg/compiler/codegen/TestOopCmp.java ! test/hotspot/jtreg/compiler/codegen/aes/TestAESMain.java ! test/hotspot/jtreg/compiler/codegen/aes/TestCipherBlockChainingEncrypt.java ! test/hotspot/jtreg/compiler/compilercontrol/InlineMatcherTest.java ! test/hotspot/jtreg/compiler/compilercontrol/TestCompilerDirectivesCompatibilityBase.java ! test/hotspot/jtreg/compiler/compilercontrol/TestCompilerDirectivesCompatibilityCommandOff.java ! test/hotspot/jtreg/compiler/compilercontrol/TestCompilerDirectivesCompatibilityCommandOn.java ! test/hotspot/jtreg/compiler/compilercontrol/TestCompilerDirectivesCompatibilityFlag.java ! test/hotspot/jtreg/compiler/compilercontrol/commandfile/CompileOnlyTest.java ! test/hotspot/jtreg/compiler/compilercontrol/commandfile/ExcludeTest.java ! test/hotspot/jtreg/compiler/compilercontrol/commandfile/LogTest.java ! test/hotspot/jtreg/compiler/compilercontrol/commandfile/PrintTest.java ! test/hotspot/jtreg/compiler/compilercontrol/commands/CompileOnlyTest.java ! 
test/hotspot/jtreg/compiler/compilercontrol/commands/ControlIntrinsicTest.java ! test/hotspot/jtreg/compiler/compilercontrol/commands/ExcludeTest.java ! test/hotspot/jtreg/compiler/compilercontrol/commands/LogTest.java ! test/hotspot/jtreg/compiler/compilercontrol/commands/PrintTest.java ! test/hotspot/jtreg/compiler/compilercontrol/directives/CompileOnlyTest.java ! test/hotspot/jtreg/compiler/compilercontrol/directives/ControlIntrinsicTest.java ! test/hotspot/jtreg/compiler/compilercontrol/directives/ExcludeTest.java ! test/hotspot/jtreg/compiler/compilercontrol/directives/LogTest.java ! test/hotspot/jtreg/compiler/compilercontrol/directives/PrintTest.java ! test/hotspot/jtreg/compiler/compilercontrol/jcmd/AddAndRemoveTest.java ! test/hotspot/jtreg/compiler/compilercontrol/jcmd/AddCompileOnlyTest.java ! test/hotspot/jtreg/compiler/compilercontrol/jcmd/AddExcludeTest.java ! test/hotspot/jtreg/compiler/compilercontrol/jcmd/AddLogTest.java ! test/hotspot/jtreg/compiler/compilercontrol/jcmd/AddPrintAssemblyTest.java ! test/hotspot/jtreg/compiler/compilercontrol/jcmd/ClearDirectivesFileStackTest.java ! test/hotspot/jtreg/compiler/compilercontrol/jcmd/ClearDirectivesStackTest.java ! test/hotspot/jtreg/compiler/compilercontrol/jcmd/ControlIntrinsicTest.java ! test/hotspot/jtreg/compiler/compilercontrol/jcmd/PrintDirectivesTest.java ! test/hotspot/jtreg/compiler/compilercontrol/jcmd/StressAddMultiThreadedTest.java ! test/hotspot/jtreg/compiler/compilercontrol/logcompilation/LogTest.java ! test/hotspot/jtreg/compiler/compilercontrol/matcher/MethodMatcherTest.java ! test/hotspot/jtreg/compiler/compilercontrol/mixed/RandomCommandsTest.java ! test/hotspot/jtreg/compiler/compilercontrol/mixed/RandomValidCommandsTest.java ! test/hotspot/jtreg/compiler/compilercontrol/share/actions/CompileAction.java ! test/hotspot/jtreg/compiler/cpuflags/TestAESIntrinsicsOnSupportedConfig.java ! test/hotspot/jtreg/compiler/cpuflags/TestAESIntrinsicsOnUnsupportedConfig.java ! 
test/hotspot/jtreg/compiler/escapeAnalysis/TestArrayCopy.java ! test/hotspot/jtreg/compiler/floatingpoint/NaNTest.java ! test/hotspot/jtreg/compiler/floatingpoint/TestPow2.java ! test/hotspot/jtreg/compiler/gcbarriers/EqvUncastStepOverBarrier.java ! test/hotspot/jtreg/compiler/gcbarriers/PreserveFPRegistersTest.java ! test/hotspot/jtreg/compiler/interpreter/DisableOSRTest.java ! test/hotspot/jtreg/compiler/intrinsics/IntrinsicAvailableTest.java ! test/hotspot/jtreg/compiler/intrinsics/IntrinsicDisabledTest.java ! test/hotspot/jtreg/compiler/intrinsics/TestCheckIndex.java ! test/hotspot/jtreg/compiler/intrinsics/base64/TestBase64.java ! test/hotspot/jtreg/compiler/intrinsics/bigInteger/MontgomeryMultiplyTest.java ! test/hotspot/jtreg/compiler/intrinsics/bmi/TestAndnI.java ! test/hotspot/jtreg/compiler/intrinsics/bmi/TestAndnL.java ! test/hotspot/jtreg/compiler/intrinsics/bmi/TestBlsiI.java ! test/hotspot/jtreg/compiler/intrinsics/bmi/TestBlsiL.java ! test/hotspot/jtreg/compiler/intrinsics/bmi/TestBlsmskI.java ! test/hotspot/jtreg/compiler/intrinsics/bmi/TestBlsmskL.java ! test/hotspot/jtreg/compiler/intrinsics/bmi/TestBlsrI.java ! test/hotspot/jtreg/compiler/intrinsics/bmi/TestBlsrL.java ! test/hotspot/jtreg/compiler/intrinsics/bmi/TestBzhiI2L.java ! test/hotspot/jtreg/compiler/intrinsics/bmi/TestLzcntI.java ! test/hotspot/jtreg/compiler/intrinsics/bmi/TestLzcntL.java ! test/hotspot/jtreg/compiler/intrinsics/bmi/TestTzcntI.java ! test/hotspot/jtreg/compiler/intrinsics/bmi/TestTzcntL.java ! test/hotspot/jtreg/compiler/intrinsics/bmi/verifycode/AndnTestI.java ! test/hotspot/jtreg/compiler/intrinsics/bmi/verifycode/AndnTestL.java ! test/hotspot/jtreg/compiler/intrinsics/bmi/verifycode/BlsiTestI.java ! test/hotspot/jtreg/compiler/intrinsics/bmi/verifycode/BlsiTestL.java ! test/hotspot/jtreg/compiler/intrinsics/bmi/verifycode/BlsmskTestI.java ! test/hotspot/jtreg/compiler/intrinsics/bmi/verifycode/BlsmskTestL.java ! 
test/hotspot/jtreg/compiler/intrinsics/bmi/verifycode/BlsrTestI.java ! test/hotspot/jtreg/compiler/intrinsics/bmi/verifycode/BlsrTestL.java ! test/hotspot/jtreg/compiler/intrinsics/bmi/verifycode/BzhiTestI2L.java ! test/hotspot/jtreg/compiler/intrinsics/bmi/verifycode/LZcntTestI.java ! test/hotspot/jtreg/compiler/intrinsics/bmi/verifycode/LZcntTestL.java ! test/hotspot/jtreg/compiler/intrinsics/bmi/verifycode/TZcntTestI.java ! test/hotspot/jtreg/compiler/intrinsics/bmi/verifycode/TZcntTestL.java ! test/hotspot/jtreg/compiler/intrinsics/klass/CastNullCheckDroppingsTest.java ! test/hotspot/jtreg/compiler/intrinsics/mathexact/sanity/AddExactIntTest.java ! test/hotspot/jtreg/compiler/intrinsics/mathexact/sanity/AddExactLongTest.java ! test/hotspot/jtreg/compiler/intrinsics/mathexact/sanity/DecrementExactIntTest.java ! test/hotspot/jtreg/compiler/intrinsics/mathexact/sanity/DecrementExactLongTest.java ! test/hotspot/jtreg/compiler/intrinsics/mathexact/sanity/IncrementExactIntTest.java ! test/hotspot/jtreg/compiler/intrinsics/mathexact/sanity/IncrementExactLongTest.java ! test/hotspot/jtreg/compiler/intrinsics/mathexact/sanity/MultiplyExactIntTest.java ! test/hotspot/jtreg/compiler/intrinsics/mathexact/sanity/MultiplyExactLongTest.java ! test/hotspot/jtreg/compiler/intrinsics/mathexact/sanity/NegateExactIntTest.java ! test/hotspot/jtreg/compiler/intrinsics/mathexact/sanity/NegateExactLongTest.java ! test/hotspot/jtreg/compiler/intrinsics/mathexact/sanity/SubtractExactIntTest.java ! test/hotspot/jtreg/compiler/intrinsics/mathexact/sanity/SubtractExactLongTest.java ! test/hotspot/jtreg/compiler/intrinsics/sha/cli/TestUseMD5IntrinsicsOptionOnSupportedCPU.java ! test/hotspot/jtreg/compiler/intrinsics/sha/cli/TestUseMD5IntrinsicsOptionOnUnsupportedCPU.java ! test/hotspot/jtreg/compiler/intrinsics/sha/cli/TestUseSHA1IntrinsicsOptionOnSupportedCPU.java ! test/hotspot/jtreg/compiler/intrinsics/sha/cli/TestUseSHA1IntrinsicsOptionOnUnsupportedCPU.java ! 
test/hotspot/jtreg/compiler/intrinsics/sha/cli/TestUseSHA256IntrinsicsOptionOnSupportedCPU.java ! test/hotspot/jtreg/compiler/intrinsics/sha/cli/TestUseSHA256IntrinsicsOptionOnUnsupportedCPU.java ! test/hotspot/jtreg/compiler/intrinsics/sha/cli/TestUseSHA3IntrinsicsOptionOnSupportedCPU.java ! test/hotspot/jtreg/compiler/intrinsics/sha/cli/TestUseSHA3IntrinsicsOptionOnUnsupportedCPU.java ! test/hotspot/jtreg/compiler/intrinsics/sha/cli/TestUseSHA512IntrinsicsOptionOnSupportedCPU.java ! test/hotspot/jtreg/compiler/intrinsics/sha/cli/TestUseSHA512IntrinsicsOptionOnUnsupportedCPU.java ! test/hotspot/jtreg/compiler/intrinsics/sha/cli/TestUseSHAOptionOnSupportedCPU.java ! test/hotspot/jtreg/compiler/intrinsics/sha/cli/TestUseSHAOptionOnUnsupportedCPU.java ! test/hotspot/jtreg/compiler/intrinsics/sha/sanity/DigestSanityTestBase.java ! test/hotspot/jtreg/compiler/intrinsics/sha/sanity/TestMD5Intrinsics.java ! test/hotspot/jtreg/compiler/intrinsics/sha/sanity/TestMD5MultiBlockIntrinsics.java ! test/hotspot/jtreg/compiler/intrinsics/sha/sanity/TestSHA1Intrinsics.java ! test/hotspot/jtreg/compiler/intrinsics/sha/sanity/TestSHA1MultiBlockIntrinsics.java ! test/hotspot/jtreg/compiler/intrinsics/sha/sanity/TestSHA256Intrinsics.java ! test/hotspot/jtreg/compiler/intrinsics/sha/sanity/TestSHA256MultiBlockIntrinsics.java ! test/hotspot/jtreg/compiler/intrinsics/sha/sanity/TestSHA3Intrinsics.java ! test/hotspot/jtreg/compiler/intrinsics/sha/sanity/TestSHA3MultiBlockIntrinsics.java ! test/hotspot/jtreg/compiler/intrinsics/sha/sanity/TestSHA512Intrinsics.java ! test/hotspot/jtreg/compiler/intrinsics/sha/sanity/TestSHA512MultiBlockIntrinsics.java ! test/hotspot/jtreg/compiler/intrinsics/string/TestStringIntrinsics2.java ! test/hotspot/jtreg/compiler/jsr292/ContinuousCallSiteTargetChange.java ! test/hotspot/jtreg/compiler/jsr292/InvokerGC.java ! test/hotspot/jtreg/compiler/jsr292/NonInlinedCall/InvokeTest.java ! test/hotspot/jtreg/compiler/jsr292/NonInlinedCall/RedefineTest.java ! 
test/hotspot/jtreg/compiler/jvmci/compilerToVM/AllocateCompileIdTest.java ! test/hotspot/jtreg/compiler/jvmci/compilerToVM/CollectCountersTest.java ! test/hotspot/jtreg/compiler/jvmci/compilerToVM/CompileCodeTestCase.java ! test/hotspot/jtreg/compiler/jvmci/compilerToVM/ConstantPoolTestCase.java ! test/hotspot/jtreg/compiler/jvmci/compilerToVM/ConstantPoolTestsHelper.java ! test/hotspot/jtreg/compiler/jvmci/compilerToVM/DisassembleCodeBlobTest.java ! test/hotspot/jtreg/compiler/jvmci/compilerToVM/DoNotInlineOrCompileTest.java ! test/hotspot/jtreg/compiler/jvmci/compilerToVM/DummyClass.java ! test/hotspot/jtreg/compiler/jvmci/compilerToVM/ExecuteInstalledCodeTest.java ! test/hotspot/jtreg/compiler/jvmci/compilerToVM/GetFlagValueTest.java ! test/hotspot/jtreg/compiler/jvmci/compilerToVM/GetResolvedJavaMethodTest.java ! test/hotspot/jtreg/compiler/jvmci/compilerToVM/GetResolvedJavaTypeTest.java ! test/hotspot/jtreg/compiler/jvmci/compilerToVM/HasCompiledCodeForOSRTest.java ! test/hotspot/jtreg/compiler/jvmci/compilerToVM/HasNeverInlineDirectiveTest.java ! test/hotspot/jtreg/compiler/jvmci/compilerToVM/InvalidateInstalledCodeTest.java ! test/hotspot/jtreg/compiler/jvmci/compilerToVM/IsCompilableTest.java ! test/hotspot/jtreg/compiler/jvmci/compilerToVM/IsMatureTest.java ! test/hotspot/jtreg/compiler/jvmci/compilerToVM/IsMatureVsReprofileTest.java ! test/hotspot/jtreg/compiler/jvmci/compilerToVM/IterateFramesNative.java ! test/hotspot/jtreg/compiler/jvmci/compilerToVM/LookupKlassInPoolTest.java ! test/hotspot/jtreg/compiler/jvmci/compilerToVM/LookupKlassRefIndexInPoolTest.java ! test/hotspot/jtreg/compiler/jvmci/compilerToVM/LookupMethodInPoolTest.java ! test/hotspot/jtreg/compiler/jvmci/compilerToVM/LookupNameAndTypeRefIndexInPoolTest.java ! test/hotspot/jtreg/compiler/jvmci/compilerToVM/LookupNameInPoolTest.java ! test/hotspot/jtreg/compiler/jvmci/compilerToVM/LookupSignatureInPoolTest.java ! 
test/hotspot/jtreg/compiler/jvmci/compilerToVM/MaterializeVirtualObjectTest.java ! test/hotspot/jtreg/compiler/jvmci/compilerToVM/ReprofileTest.java ! test/hotspot/jtreg/compiler/jvmci/compilerToVM/ResolveFieldInPoolTest.java ! test/hotspot/jtreg/compiler/jvmci/compilerToVM/ResolvePossiblyCachedConstantInPoolTest.java ! test/hotspot/jtreg/compiler/jvmci/compilerToVM/ResolveTypeInPoolTest.java ! test/hotspot/jtreg/compiler/jvmci/compilerToVM/ShouldInlineMethodTest.java ! test/hotspot/jtreg/compiler/jvmci/jdk.vm.ci.hotspot.test/src/jdk/vm/ci/hotspot/test/MemoryAccessProviderData.java ! test/hotspot/jtreg/compiler/jvmci/jdk.vm.ci.hotspot.test/src/jdk/vm/ci/hotspot/test/MemoryAccessProviderTest.java ! test/hotspot/jtreg/compiler/lib/ir_framework/IRNode.java ! test/hotspot/jtreg/compiler/lib/ir_framework/TestFramework.java ! test/hotspot/jtreg/compiler/lib/ir_framework/flag/FlagVM.java ! test/hotspot/jtreg/compiler/lib/ir_framework/test/AbstractTest.java ! test/hotspot/jtreg/compiler/lib/ir_framework/test/CustomRunTest.java ! test/hotspot/jtreg/compiler/lib/ir_framework/test/IREncodingPrinter.java ! test/hotspot/jtreg/compiler/lib/ir_framework/test/TestVM.java ! test/hotspot/jtreg/compiler/loopopts/UseCountedLoopSafepoints.java ! test/hotspot/jtreg/compiler/loopopts/UseCountedLoopSafepointsTest.java ! test/hotspot/jtreg/compiler/onSpinWait/TestOnSpinWaitAArch64DefaultFlags.java ! test/hotspot/jtreg/compiler/oracle/GetMethodOptionTest.java ! test/hotspot/jtreg/compiler/oracle/MethodMatcherTest.java ! test/hotspot/jtreg/compiler/profiling/TestTypeProfiling.java ! test/hotspot/jtreg/compiler/rangechecks/TestExplicitRangeChecks.java ! test/hotspot/jtreg/compiler/rangechecks/TestLongRangeCheck.java ! test/hotspot/jtreg/compiler/rangechecks/TestRangeCheckSmearing.java ! test/hotspot/jtreg/compiler/regalloc/TestC2IntPressure.java ! test/hotspot/jtreg/compiler/rtm/locking/TestRTMAbortRatio.java ! test/hotspot/jtreg/compiler/rtm/locking/TestRTMAbortThreshold.java ! 
test/hotspot/jtreg/compiler/rtm/locking/TestRTMAfterNonRTMDeopt.java ! test/hotspot/jtreg/compiler/rtm/locking/TestRTMDeoptOnHighAbortRatio.java ! test/hotspot/jtreg/compiler/rtm/locking/TestRTMDeoptOnLowAbortRatio.java ! test/hotspot/jtreg/compiler/rtm/locking/TestRTMLockingCalculationDelay.java ! test/hotspot/jtreg/compiler/rtm/locking/TestRTMLockingThreshold.java ! test/hotspot/jtreg/compiler/rtm/locking/TestRTMRetryCount.java ! test/hotspot/jtreg/compiler/rtm/locking/TestRTMSpinLoopCount.java ! test/hotspot/jtreg/compiler/rtm/locking/TestRTMTotalCountIncrRate.java ! test/hotspot/jtreg/compiler/rtm/locking/TestUseRTMAfterLockInflation.java ! test/hotspot/jtreg/compiler/rtm/locking/TestUseRTMDeopt.java ! test/hotspot/jtreg/compiler/rtm/locking/TestUseRTMForInflatedLocks.java ! test/hotspot/jtreg/compiler/rtm/locking/TestUseRTMForStackLocks.java ! test/hotspot/jtreg/compiler/rtm/locking/TestUseRTMXendForLockBusy.java ! test/hotspot/jtreg/compiler/rtm/method_options/TestNoRTMLockElidingOption.java ! test/hotspot/jtreg/compiler/rtm/method_options/TestUseRTMLockElidingOption.java ! test/hotspot/jtreg/compiler/rtm/print/TestPrintPreciseRTMLockingStatistics.java ! test/hotspot/jtreg/compiler/runtime/Test8010927.java ! test/hotspot/jtreg/compiler/stable/StableConfiguration.java ! test/hotspot/jtreg/compiler/stable/TestStableBoolean.java ! test/hotspot/jtreg/compiler/stable/TestStableByte.java ! test/hotspot/jtreg/compiler/stable/TestStableChar.java ! test/hotspot/jtreg/compiler/stable/TestStableDouble.java ! test/hotspot/jtreg/compiler/stable/TestStableFloat.java ! test/hotspot/jtreg/compiler/stable/TestStableInt.java ! test/hotspot/jtreg/compiler/stable/TestStableLong.java ! test/hotspot/jtreg/compiler/stable/TestStableObject.java ! test/hotspot/jtreg/compiler/stable/TestStableShort.java ! test/hotspot/jtreg/compiler/stable/TestStableUByte.java ! test/hotspot/jtreg/compiler/stable/TestStableUShort.java ! test/hotspot/jtreg/compiler/testlibrary/CompilerUtils.java ! 
test/hotspot/jtreg/compiler/testlibrary/rtm/AbortProvoker.java ! test/hotspot/jtreg/compiler/testlibrary/sha/predicate/IntrinsicPredicates.java ! test/hotspot/jtreg/compiler/tiered/ConstantGettersTransitionsTest.java ! test/hotspot/jtreg/compiler/tiered/Level2RecompilationTest.java ! test/hotspot/jtreg/compiler/tiered/LevelTransitionTest.java ! test/hotspot/jtreg/compiler/tiered/NonTieredLevelsTest.java ! test/hotspot/jtreg/compiler/tiered/TestEnqueueMethodForCompilation.java ! test/hotspot/jtreg/compiler/tiered/TieredLevelsTest.java ! test/hotspot/jtreg/compiler/types/TestMeetIncompatibleInterfaceArrays.java ! test/hotspot/jtreg/compiler/types/correctness/CorrectnessTest.java ! test/hotspot/jtreg/compiler/types/correctness/OffTest.java ! test/hotspot/jtreg/compiler/uncommontrap/DeoptReallocFailure.java ! test/hotspot/jtreg/compiler/uncommontrap/Test8009761.java ! test/hotspot/jtreg/compiler/uncommontrap/TestNullAssertAtCheckCast.java ! test/hotspot/jtreg/compiler/uncommontrap/TestUnstableIfTrap.java ! test/hotspot/jtreg/compiler/unsafe/UnsafeGetStableArrayElement.java ! test/hotspot/jtreg/compiler/vectorization/runner/ArrayCopyTest.java ! test/hotspot/jtreg/compiler/vectorization/runner/ArrayIndexFillTest.java ! test/hotspot/jtreg/compiler/vectorization/runner/ArrayInvariantFillTest.java ! test/hotspot/jtreg/compiler/vectorization/runner/ArrayShiftOpTest.java ! test/hotspot/jtreg/compiler/vectorization/runner/ArrayTypeConvertTest.java ! test/hotspot/jtreg/compiler/vectorization/runner/ArrayUnsafeOpTest.java ! test/hotspot/jtreg/compiler/vectorization/runner/BasicBooleanOpTest.java ! test/hotspot/jtreg/compiler/vectorization/runner/BasicByteOpTest.java ! test/hotspot/jtreg/compiler/vectorization/runner/BasicCharOpTest.java ! test/hotspot/jtreg/compiler/vectorization/runner/BasicDoubleOpTest.java ! test/hotspot/jtreg/compiler/vectorization/runner/BasicFloatOpTest.java ! test/hotspot/jtreg/compiler/vectorization/runner/BasicIntOpTest.java ! 
test/hotspot/jtreg/compiler/vectorization/runner/BasicLongOpTest.java ! test/hotspot/jtreg/compiler/vectorization/runner/BasicShortOpTest.java ! test/hotspot/jtreg/compiler/vectorization/runner/LoopArrayIndexComputeTest.java ! test/hotspot/jtreg/compiler/vectorization/runner/LoopCombinedOpTest.java ! test/hotspot/jtreg/compiler/vectorization/runner/LoopControlFlowTest.java ! test/hotspot/jtreg/compiler/vectorization/runner/LoopLiveOutNodesTest.java ! test/hotspot/jtreg/compiler/vectorization/runner/LoopRangeStrideTest.java ! test/hotspot/jtreg/compiler/vectorization/runner/LoopReductionOpTest.java ! test/hotspot/jtreg/compiler/vectorization/runner/MultipleLoopsTest.java ! test/hotspot/jtreg/compiler/vectorization/runner/StripMinedLoopTest.java ! test/hotspot/jtreg/compiler/vectorization/runner/VectorizationTestRunner.java ! test/hotspot/jtreg/compiler/whitebox/AllocationCodeBlobTest.java ! test/hotspot/jtreg/compiler/whitebox/BlockingCompilation.java ! test/hotspot/jtreg/compiler/whitebox/ClearMethodStateTest.java ! test/hotspot/jtreg/compiler/whitebox/CompilerWhiteBoxTest.java ! test/hotspot/jtreg/compiler/whitebox/DeoptimizeAllTest.java ! test/hotspot/jtreg/compiler/whitebox/DeoptimizeFramesTest.java ! test/hotspot/jtreg/compiler/whitebox/DeoptimizeMethodTest.java ! test/hotspot/jtreg/compiler/whitebox/DeoptimizeMultipleOSRTest.java ! test/hotspot/jtreg/compiler/whitebox/EnqueueMethodForCompilationTest.java ! test/hotspot/jtreg/compiler/whitebox/ForceNMethodSweepTest.java ! test/hotspot/jtreg/compiler/whitebox/GetCodeHeapEntriesTest.java ! test/hotspot/jtreg/compiler/whitebox/GetNMethodTest.java ! test/hotspot/jtreg/compiler/whitebox/IsMethodCompilableTest.java ! test/hotspot/jtreg/compiler/whitebox/LockCompilationTest.java ! test/hotspot/jtreg/compiler/whitebox/MakeMethodNotCompilableTest.java ! test/hotspot/jtreg/compiler/whitebox/OSRFailureLevel4Test.java ! test/hotspot/jtreg/compiler/whitebox/SetDontInlineMethodTest.java ! 
test/hotspot/jtreg/compiler/whitebox/SetForceInlineMethodTest.java ! test/hotspot/jtreg/compiler/whitebox/SimpleTestCase.java ! test/hotspot/jtreg/compiler/whitebox/TestEnqueueInitializerForCompilation.java ! test/hotspot/jtreg/compiler/whitebox/TestMethodCompilableCompilerDirectives.java ! test/hotspot/jtreg/containers/cgroup/CgroupSubsystemFactory.java ! test/hotspot/jtreg/containers/cgroup/PlainRead.java ! test/hotspot/jtreg/containers/docker/CheckContainerized.java ! test/hotspot/jtreg/containers/docker/PrintContainerInfo.java ! test/hotspot/jtreg/containers/docker/TestCPUSets.java ! test/hotspot/jtreg/containers/docker/TestMemoryAwareness.java ! test/hotspot/jtreg/containers/docker/TestMemoryWithCgroupV1.java ! test/hotspot/jtreg/containers/docker/TestMisc.java ! test/hotspot/jtreg/containers/docker/TestPids.java ! test/hotspot/jtreg/gc/TestAgeOutput.java ! test/hotspot/jtreg/gc/TestConcurrentGCBreakpoints.java ! test/hotspot/jtreg/gc/TestJNIWeak/TestJNIWeak.java ! test/hotspot/jtreg/gc/TestNumWorkerOutput.java ! test/hotspot/jtreg/gc/TestReferenceClearDuringMarking.java ! test/hotspot/jtreg/gc/TestReferenceClearDuringReferenceProcessing.java ! test/hotspot/jtreg/gc/TestReferenceRefersTo.java ! test/hotspot/jtreg/gc/TestReferenceRefersToDuringConcMark.java ! test/hotspot/jtreg/gc/TestSmallHeap.java ! test/hotspot/jtreg/gc/arguments/TestG1HeapSizeFlags.java ! test/hotspot/jtreg/gc/arguments/TestMaxHeapSizeTools.java ! test/hotspot/jtreg/gc/arguments/TestMaxRAMFlags.java ! test/hotspot/jtreg/gc/arguments/TestMinAndInitialSurvivorRatioFlags.java ! test/hotspot/jtreg/gc/arguments/TestMinInitialErgonomics.java ! test/hotspot/jtreg/gc/arguments/TestNewRatioFlag.java ! test/hotspot/jtreg/gc/arguments/TestNewSizeFlags.java ! test/hotspot/jtreg/gc/arguments/TestParallelGCThreads.java ! test/hotspot/jtreg/gc/arguments/TestParallelHeapSizeFlags.java ! test/hotspot/jtreg/gc/arguments/TestParallelRefProc.java ! test/hotspot/jtreg/gc/arguments/TestSerialHeapSizeFlags.java ! 
test/hotspot/jtreg/gc/arguments/TestSmallInitialHeapWithLargePageAndNUMA.java ! test/hotspot/jtreg/gc/arguments/TestSurvivorRatioFlag.java ! test/hotspot/jtreg/gc/arguments/TestTargetSurvivorRatioFlag.java ! test/hotspot/jtreg/gc/arguments/TestUseCompressedOopsErgo.java ! test/hotspot/jtreg/gc/arguments/TestUseCompressedOopsErgoTools.java ! test/hotspot/jtreg/gc/arguments/TestVerifyBeforeAndAfterGCFlags.java ! test/hotspot/jtreg/gc/class_unloading/TestClassUnloadingDisabled.java ! test/hotspot/jtreg/gc/class_unloading/TestG1ClassUnloadingHWM.java ! test/hotspot/jtreg/gc/ergonomics/TestDynamicNumberOfGCThreads.java ! test/hotspot/jtreg/gc/ergonomics/TestInitialGCThreadLogging.java ! test/hotspot/jtreg/gc/g1/TestEagerReclaimHumongousRegionsLog.java ! test/hotspot/jtreg/gc/g1/TestEdenSurvivorLessThanMax.java ! test/hotspot/jtreg/gc/g1/TestEvacuationFailure.java ! test/hotspot/jtreg/gc/g1/TestFromCardCacheIndex.java ! test/hotspot/jtreg/gc/g1/TestGCLogMessages.java ! test/hotspot/jtreg/gc/g1/TestHumongousCodeCacheRoots.java ! test/hotspot/jtreg/gc/g1/TestHumongousConcurrentStartUndo.java ! test/hotspot/jtreg/gc/g1/TestHumongousRemsetsMatch.java ! test/hotspot/jtreg/gc/g1/TestLargePageUseForAuxMemory.java ! test/hotspot/jtreg/gc/g1/TestLargePageUseForHeap.java ! test/hotspot/jtreg/gc/g1/TestMixedGCLiveThreshold.java ! test/hotspot/jtreg/gc/g1/TestNoEagerReclaimOfHumongousRegions.java ! test/hotspot/jtreg/gc/g1/TestNoUseHCC.java ! test/hotspot/jtreg/gc/g1/TestPLABOutput.java ! test/hotspot/jtreg/gc/g1/TestRegionLivenessPrint.java ! test/hotspot/jtreg/gc/g1/TestRemsetLogging.java ! test/hotspot/jtreg/gc/g1/TestRemsetLoggingPerRegion.java ! test/hotspot/jtreg/gc/g1/TestRemsetLoggingTools.java ! test/hotspot/jtreg/gc/g1/TestShrinkAuxiliaryData.java ! test/hotspot/jtreg/gc/g1/TestShrinkAuxiliaryData00.java ! test/hotspot/jtreg/gc/g1/TestShrinkAuxiliaryData05.java ! test/hotspot/jtreg/gc/g1/TestShrinkAuxiliaryData10.java ! 
test/hotspot/jtreg/gc/g1/TestShrinkAuxiliaryData15.java ! test/hotspot/jtreg/gc/g1/TestShrinkAuxiliaryData20.java ! test/hotspot/jtreg/gc/g1/TestShrinkAuxiliaryData25.java ! test/hotspot/jtreg/gc/g1/TestShrinkAuxiliaryData27.java ! test/hotspot/jtreg/gc/g1/TestSkipRebuildRemsetPhase.java ! test/hotspot/jtreg/gc/g1/TestVerifyGCType.java ! test/hotspot/jtreg/gc/g1/humongousObjects/G1SampleClass.java ! test/hotspot/jtreg/gc/g1/humongousObjects/TestHeapCounters.java ! test/hotspot/jtreg/gc/g1/humongousObjects/TestHumongousClassLoader.java ! test/hotspot/jtreg/gc/g1/humongousObjects/TestHumongousMovement.java ! test/hotspot/jtreg/gc/g1/humongousObjects/TestHumongousNonArrayAllocation.java ! test/hotspot/jtreg/gc/g1/humongousObjects/TestHumongousThreshold.java ! test/hotspot/jtreg/gc/g1/humongousObjects/TestNoAllocationsInHRegions.java ! test/hotspot/jtreg/gc/g1/humongousObjects/TestObjectCollected.java ! test/hotspot/jtreg/gc/g1/humongousObjects/objectGraphTest/GC.java ! test/hotspot/jtreg/gc/g1/humongousObjects/objectGraphTest/TestObjectGraphAfterGC.java ! test/hotspot/jtreg/gc/g1/mixedgc/TestLogging.java ! test/hotspot/jtreg/gc/g1/mixedgc/TestOldGenCollectionUsage.java ! test/hotspot/jtreg/gc/g1/numa/TestG1NUMATouchRegions.java ! test/hotspot/jtreg/gc/g1/plab/TestPLABPromotion.java ! test/hotspot/jtreg/gc/g1/plab/TestPLABResize.java ! test/hotspot/jtreg/gc/g1/plab/lib/AppPLABPromotion.java ! test/hotspot/jtreg/gc/g1/plab/lib/AppPLABResize.java ! test/hotspot/jtreg/gc/logging/TestGCId.java ! test/hotspot/jtreg/gc/logging/TestMetaSpaceLog.java ! test/hotspot/jtreg/gc/metaspace/TestCapacityUntilGCWrapAround.java ! test/hotspot/jtreg/gc/shenandoah/TestReferenceRefersToShenandoah.java ! test/hotspot/jtreg/gc/shenandoah/TestReferenceShortcutCycle.java ! test/hotspot/jtreg/gc/stress/TestMultiThreadStressRSet.java ! test/hotspot/jtreg/gc/stress/TestStressRSetCoarsening.java ! test/hotspot/jtreg/gc/testlibrary/Helpers.java ! 
test/hotspot/jtreg/gc/testlibrary/g1/MixedGCProvoker.java ! test/hotspot/jtreg/gc/whitebox/TestConcMarkCycleWB.java ! test/hotspot/jtreg/gc/whitebox/TestWBGC.java ! test/hotspot/jtreg/resourcehogs/compiler/intrinsics/string/TestStringIntrinsics2LargeArray.java ! test/hotspot/jtreg/runtime/ClassInitErrors/InitExceptionUnloadTest.java ! test/hotspot/jtreg/runtime/ClassUnload/ConstantPoolDependsTest.java ! test/hotspot/jtreg/runtime/ClassUnload/DictionaryDependsTest.java ! test/hotspot/jtreg/runtime/ClassUnload/KeepAliveClass.java ! test/hotspot/jtreg/runtime/ClassUnload/KeepAliveClassLoader.java ! test/hotspot/jtreg/runtime/ClassUnload/KeepAliveObject.java ! test/hotspot/jtreg/runtime/ClassUnload/KeepAliveSoftReference.java ! test/hotspot/jtreg/runtime/ClassUnload/SuperDependsTest.java ! test/hotspot/jtreg/runtime/ClassUnload/UnloadInterfaceTest.java ! test/hotspot/jtreg/runtime/ClassUnload/UnloadTest.java ! test/hotspot/jtreg/runtime/ClassUnload/UnloadTestWithVerifyDuringGC.java ! test/hotspot/jtreg/runtime/CommandLine/VMDeprecatedOptions.java ! test/hotspot/jtreg/runtime/CompressedOops/UseCompressedOops.java ! test/hotspot/jtreg/runtime/Dictionary/CleanProtectionDomain.java ! test/hotspot/jtreg/runtime/ElfDecoder/TestElfDirectRead.java ! test/hotspot/jtreg/runtime/HiddenClasses/TestHiddenClassUnloading.java ! test/hotspot/jtreg/runtime/MemberName/MemberNameLeak.java ! test/hotspot/jtreg/runtime/Metaspace/DefineClass.java ! test/hotspot/jtreg/runtime/Metaspace/elastic/MetaspaceTestArena.java ! test/hotspot/jtreg/runtime/Metaspace/elastic/MetaspaceTestContext.java ! test/hotspot/jtreg/runtime/Metaspace/elastic/Settings.java ! test/hotspot/jtreg/runtime/Metaspace/elastic/TestMetaspaceAllocation.java ! test/hotspot/jtreg/runtime/Metaspace/elastic/TestMetaspaceAllocationMT1.java ! test/hotspot/jtreg/runtime/Metaspace/elastic/TestMetaspaceAllocationMT2.java ! test/hotspot/jtreg/runtime/NMT/CommitOverlappingRegions.java ! 
test/hotspot/jtreg/runtime/NMT/HugeArenaTracking.java ! test/hotspot/jtreg/runtime/NMT/JcmdDetailDiff.java ! test/hotspot/jtreg/runtime/NMT/JcmdSummaryDiff.java ! test/hotspot/jtreg/runtime/NMT/MallocRoundingReportTest.java ! test/hotspot/jtreg/runtime/NMT/MallocSiteHashOverflow.java ! test/hotspot/jtreg/runtime/NMT/MallocSiteTypeChange.java ! test/hotspot/jtreg/runtime/NMT/MallocStressTest.java ! test/hotspot/jtreg/runtime/NMT/MallocTestType.java ! test/hotspot/jtreg/runtime/NMT/MallocTrackingVerify.java ! test/hotspot/jtreg/runtime/NMT/ReleaseCommittedMemory.java ! test/hotspot/jtreg/runtime/NMT/ReleaseNoCommit.java ! test/hotspot/jtreg/runtime/NMT/SummarySanityCheck.java ! test/hotspot/jtreg/runtime/NMT/ThreadedMallocTestType.java ! test/hotspot/jtreg/runtime/NMT/ThreadedVirtualAllocTestType.java ! test/hotspot/jtreg/runtime/NMT/VirtualAllocAttemptReserveMemoryAt.java ! test/hotspot/jtreg/runtime/NMT/VirtualAllocCommitMerge.java ! test/hotspot/jtreg/runtime/NMT/VirtualAllocCommitUncommitRecommit.java ! test/hotspot/jtreg/runtime/NMT/VirtualAllocTestType.java ! test/hotspot/jtreg/runtime/Nestmates/protectionDomain/TestDifferentProtectionDomains.java ! test/hotspot/jtreg/runtime/Safepoint/TestAbortVMOnSafepointTimeout.java ! test/hotspot/jtreg/runtime/Thread/ThreadObjAccessAtExit.java ! test/hotspot/jtreg/runtime/Unsafe/InternalErrorTest.java ! test/hotspot/jtreg/runtime/cds/CheckDefaultArchiveFile.java ! test/hotspot/jtreg/runtime/cds/CheckSharingWithDefaultArchive.java ! test/hotspot/jtreg/runtime/cds/DumpSymbolAndStringTable.java ! test/hotspot/jtreg/runtime/cds/SharedStrings.java ! test/hotspot/jtreg/runtime/cds/SharedStringsWb.java ! test/hotspot/jtreg/runtime/cds/SpaceUtilizationCheck.java ! test/hotspot/jtreg/runtime/cds/appcds/ClassLoaderTest.java ! test/hotspot/jtreg/runtime/cds/appcds/CommandLineFlagCombo.java ! test/hotspot/jtreg/runtime/cds/appcds/HelloExtTest.java ! test/hotspot/jtreg/runtime/cds/appcds/JvmtiAddPath.java ! 
test/hotspot/jtreg/runtime/cds/appcds/MultiProcessSharing.java ! test/hotspot/jtreg/runtime/cds/appcds/RewriteBytecodesTest.java ! test/hotspot/jtreg/runtime/cds/appcds/SharedArchiveConsistency.java ! test/hotspot/jtreg/runtime/cds/appcds/SharedRegionAlignmentTest.java ! test/hotspot/jtreg/runtime/cds/appcds/cacheObject/ArchivedIntegerCacheTest.java ! test/hotspot/jtreg/runtime/cds/appcds/cacheObject/ArchivedModuleComboTest.java ! test/hotspot/jtreg/runtime/cds/appcds/cacheObject/ArchivedModuleWithCustomImageTest.java ! test/hotspot/jtreg/runtime/cds/appcds/cacheObject/CheckArchivedModuleApp.java ! test/hotspot/jtreg/runtime/cds/appcds/cacheObject/CheckCachedMirrorApp.java ! test/hotspot/jtreg/runtime/cds/appcds/cacheObject/CheckCachedMirrorTest.java ! test/hotspot/jtreg/runtime/cds/appcds/cacheObject/CheckCachedResolvedReferences.java ! test/hotspot/jtreg/runtime/cds/appcds/cacheObject/CheckCachedResolvedReferencesApp.java ! test/hotspot/jtreg/runtime/cds/appcds/cacheObject/CheckIntegerCacheApp.java ! test/hotspot/jtreg/runtime/cds/appcds/cacheObject/DifferentHeapSizes.java ! test/hotspot/jtreg/runtime/cds/appcds/cacheObject/GCStressApp.java ! test/hotspot/jtreg/runtime/cds/appcds/cacheObject/GCStressTest.java ! test/hotspot/jtreg/runtime/cds/appcds/cacheObject/MirrorWithReferenceFieldsApp.java ! test/hotspot/jtreg/runtime/cds/appcds/cacheObject/MirrorWithReferenceFieldsTest.java ! test/hotspot/jtreg/runtime/cds/appcds/cacheObject/PrimitiveTypesApp.java ! test/hotspot/jtreg/runtime/cds/appcds/cacheObject/PrimitiveTypesTest.java ! test/hotspot/jtreg/runtime/cds/appcds/cacheObject/RedefineClassApp.java ! test/hotspot/jtreg/runtime/cds/appcds/cacheObject/RedefineClassTest.java ! test/hotspot/jtreg/runtime/cds/appcds/condy/CondyHelloApp.java ! test/hotspot/jtreg/runtime/cds/appcds/condy/CondyHelloTest.java ! test/hotspot/jtreg/runtime/cds/appcds/customLoader/HelloCustom.java ! test/hotspot/jtreg/runtime/cds/appcds/customLoader/HelloCustom_JFR.java ! 
test/hotspot/jtreg/runtime/cds/appcds/customLoader/LoaderSegregationTest.java ! test/hotspot/jtreg/runtime/cds/appcds/customLoader/OldClassAndInf.java ! test/hotspot/jtreg/runtime/cds/appcds/customLoader/PrintSharedArchiveAndExit.java ! test/hotspot/jtreg/runtime/cds/appcds/customLoader/SameNameInTwoLoadersTest.java ! test/hotspot/jtreg/runtime/cds/appcds/customLoader/UnintendedLoadersTest.java ! test/hotspot/jtreg/runtime/cds/appcds/customLoader/UnloadUnregisteredLoaderTest.java ! test/hotspot/jtreg/runtime/cds/appcds/customLoader/test-classes/Hello.java ! test/hotspot/jtreg/runtime/cds/appcds/customLoader/test-classes/HelloUnload.java ! test/hotspot/jtreg/runtime/cds/appcds/customLoader/test-classes/LoaderSegregation.java ! test/hotspot/jtreg/runtime/cds/appcds/customLoader/test-classes/OldClassApp.java ! test/hotspot/jtreg/runtime/cds/appcds/customLoader/test-classes/SameNameUnrelatedLoaders.java ! test/hotspot/jtreg/runtime/cds/appcds/customLoader/test-classes/UnintendedLoaders.java ! test/hotspot/jtreg/runtime/cds/appcds/customLoader/test-classes/UnloadUnregisteredLoader.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/AppendClasspath.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/ArchiveConsistency.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/ArchivedSuperIf.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/ArrayKlasses.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/BasicLambdaTest.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/CDSStreamTestDriver.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/ClassResolutionFailure.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/DoubleSumAverageTest.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/DumpToDefaultArchive.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/DuplicatedCustomTest.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/DynamicArchiveRelocationTest.java ! 
test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/DynamicArchiveTestBase.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/DynamicLotsOfClasses.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/DynamicSharedSymbols.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/ExcludedClasses.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/HelloDynamic.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/HelloDynamicCustom.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/HelloDynamicCustomUnload.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/JFRDynamicCDS.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/JITInteraction.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/LambdaContainsOldInf.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/LambdaCustomLoader.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/LambdaForClassInBaseArchive.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/LambdaForOldInfInBaseArchive.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/LambdaInBaseArchive.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/LambdaProxyCallerIsHidden.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/LambdaProxyDuringShutdown.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/LinkClassTest.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/LotsUnloadTest.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/MainModuleOnly.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/MethodSorting.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/MismatchedBaseArchive.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/MissingArchive.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/ModulePath.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/NestHostOldInf.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/NestTest.java ! 
test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/NoClassToArchive.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/OldClassAndInf.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/OldClassInBaseArchive.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/ParallelLambdaLoadTest.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/PredicateTest.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/PrintSharedArchiveAndExit.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/RedefineCallerClassTest.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/RegularHiddenClass.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/RelativePath.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/SharedArchiveFileOption.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/SharedBaseAddressOption.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/StaticInnerTest.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/TestAutoCreateSharedArchive.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/TestDynamicDumpAtOom.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/TestDynamicRegenerateHolderClasses.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/TestLambdaInvokers.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/UnsupportedBaseArchive.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/UnusedCPDuringDump.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/UsedAllArchivedLambdas.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/VerifyObjArrayCloneTest.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/VerifyWithDynamicArchive.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/WrongTopClasspath.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/methodHandles/CDSMHTest_generate.sh ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/methodHandles/MethodHandlesAsCollectorTest.java ! 
test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/methodHandles/MethodHandlesCastFailureTest.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/methodHandles/MethodHandlesGeneralTest.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/methodHandles/MethodHandlesInvokersTest.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/methodHandles/MethodHandlesPermuteArgumentsTest.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/methodHandles/MethodHandlesSpreadArgumentsTest.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/test-classes/DuplicatedCustomApp.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/test-classes/LambdaVerification.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/test-classes/LoadClasses.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/test-classes/TestJIT.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/test-classes/UsedAllArchivedLambdasApp.java ! test/hotspot/jtreg/runtime/cds/appcds/javaldr/ArrayTest.java ! test/hotspot/jtreg/runtime/cds/appcds/javaldr/ArrayTestHelper.java ! test/hotspot/jtreg/runtime/cds/appcds/javaldr/GCSharedStringsDuringDump.java ! test/hotspot/jtreg/runtime/cds/appcds/javaldr/GCSharedStringsDuringDumpWb.java ! test/hotspot/jtreg/runtime/cds/appcds/jcmd/JCmdTestDumpBase.java ! test/hotspot/jtreg/runtime/cds/appcds/jcmd/JCmdTestDynamicDump.java ! test/hotspot/jtreg/runtime/cds/appcds/jcmd/JCmdTestFileSafety.java ! test/hotspot/jtreg/runtime/cds/appcds/jcmd/JCmdTestStaticDump.java ! test/hotspot/jtreg/runtime/cds/appcds/jigsaw/classpathtests/DummyClassesInBootClassPath.java ! test/hotspot/jtreg/runtime/cds/appcds/jigsaw/modulepath/JvmtiAddPath.java ! test/hotspot/jtreg/runtime/cds/appcds/jigsaw/modulepath/MainModuleOnly.java ! test/hotspot/jtreg/runtime/cds/appcds/jvmti/ClassFileLoadHook.java ! test/hotspot/jtreg/runtime/cds/appcds/jvmti/ClassFileLoadHookTest.java ! test/hotspot/jtreg/runtime/cds/appcds/jvmti/InstrumentationApp.java ! 
test/hotspot/jtreg/runtime/cds/appcds/jvmti/InstrumentationTest.java ! test/hotspot/jtreg/runtime/cds/appcds/loaderConstraints/DynamicLoaderConstraintsTest.java ! test/hotspot/jtreg/runtime/cds/appcds/methodHandles/CDSMHTest_generate.sh ! test/hotspot/jtreg/runtime/cds/appcds/methodHandles/MethodHandlesAsCollectorTest.java ! test/hotspot/jtreg/runtime/cds/appcds/methodHandles/MethodHandlesCastFailureTest.java ! test/hotspot/jtreg/runtime/cds/appcds/methodHandles/MethodHandlesGeneralTest.java ! test/hotspot/jtreg/runtime/cds/appcds/methodHandles/MethodHandlesInvokersTest.java ! test/hotspot/jtreg/runtime/cds/appcds/methodHandles/MethodHandlesPermuteArgumentsTest.java ! test/hotspot/jtreg/runtime/cds/appcds/methodHandles/MethodHandlesSpreadArgumentsTest.java ! test/hotspot/jtreg/runtime/cds/appcds/redefineClass/RedefineBasic.java ! test/hotspot/jtreg/runtime/cds/appcds/redefineClass/RedefineBasicTest.java ! test/hotspot/jtreg/runtime/cds/appcds/redefineClass/RedefineRunningMethods_Shared.java ! test/hotspot/jtreg/runtime/cds/appcds/redefineClass/RedefineRunningMethods_SharedHelper.java ! test/hotspot/jtreg/runtime/cds/appcds/sharedStrings/ExerciseGC.java ! test/hotspot/jtreg/runtime/cds/appcds/sharedStrings/HelloStringGC.java ! test/hotspot/jtreg/runtime/cds/appcds/sharedStrings/HelloStringPlus.java ! test/hotspot/jtreg/runtime/cds/appcds/sharedStrings/IncompatibleOptions.java ! test/hotspot/jtreg/runtime/cds/appcds/sharedStrings/InternSharedString.java ! test/hotspot/jtreg/runtime/cds/appcds/sharedStrings/InternStringTest.java ! test/hotspot/jtreg/runtime/cds/appcds/sharedStrings/LockSharedStrings.java ! test/hotspot/jtreg/runtime/cds/appcds/sharedStrings/LockStringTest.java ! test/hotspot/jtreg/runtime/cds/appcds/sharedStrings/LockStringValueTest.java ! test/hotspot/jtreg/runtime/cds/appcds/sharedStrings/SharedStringsBasicPlus.java ! test/hotspot/jtreg/runtime/cds/appcds/sharedStrings/SharedStringsHumongous.java ! 
test/hotspot/jtreg/runtime/cds/appcds/sharedStrings/SharedStringsUtils.java ! test/hotspot/jtreg/runtime/cds/appcds/sharedStrings/SharedStringsWb.java ! test/hotspot/jtreg/runtime/cds/appcds/sharedStrings/SharedStringsWbTest.java ! test/hotspot/jtreg/runtime/cds/appcds/test-classes/BootClassPathAppendHelper.java ! test/hotspot/jtreg/runtime/cds/appcds/test-classes/DummyClassHelper.java ! test/hotspot/jtreg/runtime/cds/appcds/test-classes/ForNameTest.java ! test/hotspot/jtreg/runtime/cds/appcds/test-classes/GenericTestApp.java ! test/hotspot/jtreg/runtime/cds/appcds/test-classes/HelloExt.java ! test/hotspot/jtreg/runtime/cds/appcds/test-classes/HelloWB.java ! test/hotspot/jtreg/runtime/cds/appcds/test-classes/JvmtiApp.java ! test/hotspot/jtreg/runtime/cds/appcds/test-classes/MultiProcClass.java ! test/hotspot/jtreg/runtime/cds/appcds/test-classes/RewriteBytecodes.java ! test/hotspot/jtreg/runtime/cds/serviceability/ReplaceCriticalClasses.java ! test/hotspot/jtreg/runtime/cds/serviceability/ReplaceCriticalClassesForSubgraphs.java ! test/hotspot/jtreg/runtime/exceptionMsgs/AbstractMethodError/AbstractMethodErrorTest.java ! test/hotspot/jtreg/runtime/exceptionMsgs/IncompatibleClassChangeError/IncompatibleClassChangeErrorTest.java ! test/hotspot/jtreg/runtime/execstack/TestCheckJDK.java ! test/hotspot/jtreg/runtime/handshake/AsyncHandshakeWalkStackTest.java ! test/hotspot/jtreg/runtime/handshake/HandshakeDirectTest.java ! test/hotspot/jtreg/runtime/handshake/HandshakeTimeoutTest.java ! test/hotspot/jtreg/runtime/handshake/HandshakeWalkExitTest.java ! test/hotspot/jtreg/runtime/handshake/HandshakeWalkOneExitTest.java ! test/hotspot/jtreg/runtime/handshake/HandshakeWalkStackTest.java ! test/hotspot/jtreg/runtime/handshake/MixedHandshakeWalkStackTest.java ! test/hotspot/jtreg/runtime/handshake/SuspendBlocked.java ! test/hotspot/jtreg/runtime/interned/SanityTest.java ! test/hotspot/jtreg/runtime/logging/loadLibraryTest/LoadLibraryTest.java ! 
test/hotspot/jtreg/runtime/memory/ReadFromNoaccessArea.java ! test/hotspot/jtreg/runtime/memory/ReadVMPageSize.java ! test/hotspot/jtreg/runtime/memory/ReserveMemory.java ! test/hotspot/jtreg/runtime/memory/StressVirtualSpaceResize.java ! test/hotspot/jtreg/runtime/modules/AccessCheckAllUnnamed.java ! test/hotspot/jtreg/runtime/modules/AccessCheckExp.java ! test/hotspot/jtreg/runtime/modules/AccessCheckJavaBase.java ! test/hotspot/jtreg/runtime/modules/AccessCheckOpen.java ! test/hotspot/jtreg/runtime/modules/AccessCheckRead.java ! test/hotspot/jtreg/runtime/modules/AccessCheckSuper.java ! test/hotspot/jtreg/runtime/modules/AccessCheckUnnamed.java ! test/hotspot/jtreg/runtime/modules/AccessCheckWorks.java ! test/hotspot/jtreg/runtime/modules/CCE_module_msg.java ! test/hotspot/jtreg/runtime/modules/ExportTwice.java ! test/hotspot/jtreg/runtime/modules/JVMAddModuleExportToAllUnnamed.java ! test/hotspot/jtreg/runtime/modules/JVMAddModuleExports.java ! test/hotspot/jtreg/runtime/modules/JVMAddModuleExportsToAll.java ! test/hotspot/jtreg/runtime/modules/JVMAddReadsModule.java ! test/hotspot/jtreg/runtime/modules/JVMDefineModule.java ! test/hotspot/jtreg/runtime/modules/LoadUnloadModuleStress.java ! test/hotspot/jtreg/runtime/modules/ModuleHelper.java ! test/hotspot/jtreg/runtime/modules/SealedInterfaceModuleTest.java ! test/hotspot/jtreg/runtime/modules/SealedModuleTest.java ! test/hotspot/jtreg/runtime/stringtable/StringTableCleaningTest.java ! test/hotspot/jtreg/runtime/whitebox/TestHiddenClassIsAlive.java ! test/hotspot/jtreg/runtime/whitebox/TestWBDeflateIdleMonitors.java ! test/hotspot/jtreg/runtime/whitebox/WBStackSize.java ! test/hotspot/jtreg/serviceability/ParserTest.java ! test/hotspot/jtreg/serviceability/dcmd/compiler/CodelistTest.java ! test/hotspot/jtreg/serviceability/dcmd/compiler/CompilerQueueTest.java ! test/hotspot/jtreg/serviceability/jvmti/Heap/IterateHeapWithEscapeAnalysisEnabled.java ! 
test/hotspot/jtreg/serviceability/sa/TestInstanceKlassSize.java ! test/hotspot/jtreg/serviceability/sa/TestInstanceKlassSizeForInterface.java ! test/hotspot/jtreg/serviceability/sa/TestUniverse.java ! test/hotspot/jtreg/testlibrary/ctw/src/sun/hotspot/tools/ctw/Compiler.java ! test/hotspot/jtreg/testlibrary_tests/ctw/ClassesDirTest.java ! test/hotspot/jtreg/testlibrary_tests/ctw/ClassesListTest.java ! test/hotspot/jtreg/testlibrary_tests/ctw/JarDirTest.java ! test/hotspot/jtreg/testlibrary_tests/ctw/JarsTest.java ! test/hotspot/jtreg/testlibrary_tests/ir_framework/tests/TestBasics.java ! test/hotspot/jtreg/testlibrary_tests/ir_framework/tests/TestCompLevels.java ! test/hotspot/jtreg/testlibrary_tests/ir_framework/tests/TestControls.java ! test/hotspot/jtreg/testlibrary_tests/ir_framework/tests/TestDIgnoreCompilerControls.java ! test/hotspot/jtreg/testlibrary_tests/ir_framework/tests/TestIRMatching.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/check/ClassAssertion.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/loading/ClassLoadingHelper.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_anonclassloader_inMemoryCompilation_keep_class/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_anonclassloader_inMemoryCompilation_keep_obj/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_anonclassloader_keep_class/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_anonclassloader_keep_obj/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_compilation_level1_inMemoryCompilation_keep_cl/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_compilation_level1_inMemoryCompilation_keep_class/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_compilation_level1_inMemoryCompilation_keep_obj/TestDescription.java ! 
test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_compilation_level1_keep_cl/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_compilation_level1_keep_class/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_compilation_level1_keep_obj/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_compilation_level2_inMemoryCompilation_keep_cl/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_compilation_level2_inMemoryCompilation_keep_class/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_compilation_level2_inMemoryCompilation_keep_obj/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_compilation_level2_keep_cl/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_compilation_level2_keep_class/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_compilation_level2_keep_obj/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_compilation_level3_inMemoryCompilation_keep_cl/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_compilation_level3_inMemoryCompilation_keep_class/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_compilation_level3_inMemoryCompilation_keep_obj/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_compilation_level3_keep_cl/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_compilation_level3_keep_class/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_compilation_level3_keep_obj/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_compilation_level4_inMemoryCompilation_keep_cl/TestDescription.java ! 
test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_compilation_level4_inMemoryCompilation_keep_class/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_compilation_level4_inMemoryCompilation_keep_obj/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_compilation_level4_keep_cl/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_compilation_level4_keep_class/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_compilation_level4_keep_obj/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_humongous_class_inMemoryCompilation_keep_cl/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_humongous_class_inMemoryCompilation_keep_class/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_humongous_class_inMemoryCompilation_keep_obj/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_humongous_class_keep_cl/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_humongous_class_keep_class/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_humongous_class_keep_obj/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_jni_classloading_inMemoryCompilation_keep_cl/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_jni_classloading_inMemoryCompilation_keep_class/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_jni_classloading_inMemoryCompilation_keep_obj/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_jni_classloading_keep_cl/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_jni_classloading_keep_class/TestDescription.java ! 
test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_jni_classloading_keep_obj/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_jni_global_ref_inMemoryCompilation_keep_cl/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_jni_global_ref_inMemoryCompilation_keep_class/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_jni_global_ref_inMemoryCompilation_keep_obj/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_jni_global_ref_keep_cl/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_jni_global_ref_keep_class/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_jni_global_ref_keep_obj/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_jni_local_ref_inMemoryCompilation_keep_cl/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_jni_local_ref_inMemoryCompilation_keep_class/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_jni_local_ref_inMemoryCompilation_keep_obj/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_jni_local_ref_keep_cl/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_jni_local_ref_keep_class/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_jni_local_ref_keep_obj/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_rootClass_inMemoryCompilation_keep_cl/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_rootClass_inMemoryCompilation_keep_class/TestDescription.java ! 
test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_rootClass_inMemoryCompilation_keep_obj/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_rootClass_keep_cl/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_rootClass_keep_class/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_rootClass_keep_obj/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_stackLocal_inMemoryCompilation_keep_cl/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_stackLocal_inMemoryCompilation_keep_class/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_stackLocal_inMemoryCompilation_keep_obj/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_stackLocal_keep_cl/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_stackLocal_keep_class/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_stackLocal_keep_obj/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_staticField_inMemoryCompilation_keep_cl/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_staticField_inMemoryCompilation_keep_class/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_staticField_inMemoryCompilation_keep_obj/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_staticField_keep_cl/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_staticField_keep_class/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_staticField_keep_obj/TestDescription.java ! 
test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_strongRef_inMemoryCompilation_keep_cl/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_strongRef_inMemoryCompilation_keep_class/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_strongRef_inMemoryCompilation_keep_obj/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_strongRef_keep_cl/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_strongRef_keep_class/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_strongRef_keep_obj/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_threadItself_inMemoryCompilation_keep_cl/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_threadItself_inMemoryCompilation_keep_class/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_threadItself_inMemoryCompilation_keep_obj/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_threadItself_keep_cl/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_threadItself_keep_class/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_keepRef_threadItself_keep_obj/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_phantom_ref_inMemoryCompilation_keep_cl/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_phantom_ref_inMemoryCompilation_keep_class/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_phantom_ref_inMemoryCompilation_keep_obj/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_phantom_ref_keep_cl/TestDescription.java ! 
test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_phantom_ref_keep_class/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_phantom_ref_keep_obj/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_prot_domains_inMemoryCompilation_keep_cl/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_prot_domains_inMemoryCompilation_keep_class/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_prot_domains_inMemoryCompilation_keep_obj/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_prot_domains_keep_cl/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_prot_domains_keep_class/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_prot_domains_keep_obj/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_redefinition_inMemoryCompilation_keep_cl/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_redefinition_inMemoryCompilation_keep_class/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_redefinition_inMemoryCompilation_keep_obj/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_redefinition_keep_cl/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_redefinition_keep_class/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_redefinition_keep_obj/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_reflection_classloading_inMemoryCompilation_keep_cl/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_reflection_classloading_inMemoryCompilation_keep_class/TestDescription.java ! 
test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_reflection_classloading_inMemoryCompilation_keep_obj/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_reflection_classloading_keep_cl/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_reflection_classloading_keep_class/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_reflection_classloading_keep_obj/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_weak_ref_inMemoryCompilation_keep_cl/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_weak_ref_inMemoryCompilation_keep_class/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_weak_ref_inMemoryCompilation_keep_obj/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_weak_ref_keep_cl/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_weak_ref_keep_class/TestDescription.java ! test/hotspot/jtreg/vmTestbase/gc/g1/unloading/tests/unloading_weak_ref_keep_obj/TestDescription.java ! test/hotspot/jtreg/vmTestbase/metaspace/staticReferences/StaticReferences.java ! test/hotspot/jtreg/vmTestbase/metaspace/stressHierarchy/common/PerformChecksHelper.java ! test/hotspot/jtreg/vmTestbase/metaspace/stressHierarchy/stressHierarchy001/TestDescription.java ! test/hotspot/jtreg/vmTestbase/metaspace/stressHierarchy/stressHierarchy002/TestDescription.java ! test/hotspot/jtreg/vmTestbase/metaspace/stressHierarchy/stressHierarchy003/TestDescription.java ! test/hotspot/jtreg/vmTestbase/metaspace/stressHierarchy/stressHierarchy004/TestDescription.java ! test/hotspot/jtreg/vmTestbase/metaspace/stressHierarchy/stressHierarchy005/TestDescription.java ! test/hotspot/jtreg/vmTestbase/metaspace/stressHierarchy/stressHierarchy006/TestDescription.java ! 
test/hotspot/jtreg/vmTestbase/metaspace/stressHierarchy/stressHierarchy007/TestDescription.java ! test/hotspot/jtreg/vmTestbase/metaspace/stressHierarchy/stressHierarchy008/TestDescription.java ! test/hotspot/jtreg/vmTestbase/metaspace/stressHierarchy/stressHierarchy009/TestDescription.java ! test/hotspot/jtreg/vmTestbase/metaspace/stressHierarchy/stressHierarchy010/TestDescription.java ! test/hotspot/jtreg/vmTestbase/metaspace/stressHierarchy/stressHierarchy011/TestDescription.java ! test/hotspot/jtreg/vmTestbase/metaspace/stressHierarchy/stressHierarchy012/TestDescription.java ! test/hotspot/jtreg/vmTestbase/metaspace/stressHierarchy/stressHierarchy013/TestDescription.java ! test/hotspot/jtreg/vmTestbase/metaspace/stressHierarchy/stressHierarchy014/TestDescription.java ! test/hotspot/jtreg/vmTestbase/metaspace/stressHierarchy/stressHierarchy015/TestDescription.java ! test/hotspot/jtreg/vmTestbase/nsk/jdi/HiddenClass/events/events001.java ! test/hotspot/jtreg/vmTestbase/nsk/jdi/HiddenClass/events/events001a.java ! test/hotspot/jtreg/vmTestbase/nsk/jdi/ObjectReference/referringObjects/referringObjects002/referringObjects002.java ! test/hotspot/jtreg/vmTestbase/nsk/jdi/ObjectReference/referringObjects/referringObjects002/referringObjects002a.java ! test/hotspot/jtreg/vmTestbase/nsk/jdi/stress/serial/forceEarlyReturn001/TestDescription.java ! test/hotspot/jtreg/vmTestbase/nsk/jdi/stress/serial/forceEarlyReturn002/TestDescription.java ! test/hotspot/jtreg/vmTestbase/nsk/jdi/stress/serial/heapwalking001/TestDescription.java ! test/hotspot/jtreg/vmTestbase/nsk/jdi/stress/serial/heapwalking002/TestDescription.java ! test/hotspot/jtreg/vmTestbase/nsk/jdi/stress/serial/mixed001/TestDescription.java ! test/hotspot/jtreg/vmTestbase/nsk/jdi/stress/serial/mixed002/TestDescription.java ! test/hotspot/jtreg/vmTestbase/nsk/jdi/stress/serial/monitorEvents001/TestDescription.java ! test/hotspot/jtreg/vmTestbase/nsk/jdi/stress/serial/monitorEvents002/TestDescription.java ! 
test/hotspot/jtreg/vmTestbase/nsk/jdi/stress/serial/ownedMonitorsAndFrames001/TestDescription.java ! test/hotspot/jtreg/vmTestbase/nsk/jdi/stress/serial/ownedMonitorsAndFrames002/TestDescription.java ! test/hotspot/jtreg/vmTestbase/nsk/jvmti/scenarios/general_functions/GF08/gf08t001/TestDriver.java ! test/hotspot/jtreg/vmTestbase/nsk/share/jdi/SerialExecutionDebuggee.java ! test/hotspot/jtreg/vmTestbase/vm/compiler/CodeCacheInfo/Test.java ! test/hotspot/jtreg/vmTestbase/vm/mlvm/indy/stress/gc/lotsOfCallSites/Test.java ! test/hotspot/jtreg/vmTestbase/vm/share/gc/TriggerUnloadingWithWhiteBox.java ! test/jdk/com/sun/jdi/EATests.java ! test/jdk/java/foreign/stackwalk/TestAsyncStackWalk.java ! test/jdk/java/foreign/stackwalk/TestStackWalk.java ! test/jdk/java/foreign/upcalldeopt/TestUpcallDeopt.java ! test/jdk/java/lang/instrument/GetObjectSizeIntrinsicsTest.java ! test/jdk/java/lang/management/MemoryMXBean/CollectionUsageThreshold.java ! test/jdk/java/lang/management/MemoryMXBean/LowMemoryTest.java ! test/jdk/java/lang/management/MemoryMXBean/ResetPeakMemoryUsage.java ! test/jdk/java/lang/ref/CleanerTest.java ! test/jdk/java/util/Arrays/TimSortStackSize2.java ! test/jdk/jdk/internal/vm/Continuation/Fuzz.java ! test/jdk/jdk/jfr/api/consumer/TestRecordedFrameType.java ! test/jdk/jdk/jfr/event/allocation/TestObjectAllocationInNewTLABEvent.java ! test/jdk/jdk/jfr/event/allocation/TestObjectAllocationOutsideTLABEvent.java ! test/jdk/jdk/jfr/event/allocation/TestObjectAllocationSampleEventThrottling.java ! test/jdk/jdk/jfr/event/compiler/TestCodeCacheConfig.java ! test/jdk/jdk/jfr/event/compiler/TestCodeCacheFull.java ! test/jdk/jdk/jfr/event/compiler/TestCodeSweeper.java ! test/jdk/jdk/jfr/event/compiler/TestCodeSweeperStats.java ! test/jdk/jdk/jfr/event/compiler/TestCompilerCompile.java ! test/jdk/jdk/jfr/event/compiler/TestCompilerInlining.java ! test/jdk/jdk/jfr/event/compiler/TestCompilerPhase.java ! test/jdk/jdk/jfr/event/compiler/TestDeoptimization.java ! 
test/jdk/jdk/jfr/event/gc/collection/TestG1ParallelPhases.java ! test/jdk/jdk/jfr/event/gc/configuration/TestGCHeapConfigurationEventWith32BitOops.java ! test/jdk/jdk/jfr/event/gc/configuration/TestGCHeapConfigurationEventWithHeapBasedOops.java ! test/jdk/jdk/jfr/event/gc/detailed/TestEvacuationFailedEvent.java ! test/jdk/jdk/jfr/event/gc/detailed/TestGCLockerEvent.java ! test/jdk/jdk/jfr/event/gc/heapsummary/TestHeapSummaryCommittedSize.java ! test/jdk/jdk/jfr/event/runtime/TestSafepointEvents.java ! test/jdk/jdk/jfr/event/runtime/TestThrowableInstrumentation.java ! test/jdk/jdk/jfr/jvm/TestJFRIntrinsic.java ! test/jdk/jdk/jfr/startupargs/TestBadOptionValues.java ! test/lib-test/jdk/test/lib/TestPlatformIsTieredSupported.java ! test/lib/jdk/test/lib/cds/CDSArchiveUtils.java ! test/lib/jdk/test/lib/helpers/ClassFileInstaller.java ! test/lib/jdk/test/whitebox/WhiteBox.java ! test/lib/sun/hotspot/code/BlobType.java ! test/lib/sun/hotspot/code/CodeBlob.java ! test/lib/sun/hotspot/code/Compiler.java ! test/lib/sun/hotspot/code/NMethod.java ! test/lib/sun/hotspot/cpuinfo/CPUInfo.java ! test/lib/sun/hotspot/gc/GC.java Changeset: 9c86c820 Author: Vicente Romero Date: 2022-07-08 17:24:27 +0000 URL: https://git.openjdk.org/loom/commit/9c86c82091827e781c3919b4b4410981ae322732 8282714: synthetic arguments are being added to the constructors of static local classes Reviewed-by: jlahoda ! src/jdk.compiler/share/classes/com/sun/tools/javac/code/Symbol.java ! src/jdk.compiler/share/classes/com/sun/tools/javac/comp/Lower.java + test/langtools/tools/javac/records/LocalStaticDeclarations2.java ! test/langtools/tools/javac/records/RecordCompilationTests.java Changeset: 1877533f Author: Weijun Wang Date: 2022-07-08 18:38:08 +0000 URL: https://git.openjdk.org/loom/commit/1877533f757731e2ce918230bfb345716954fa53 6522064: Aliases from Microsoft CryptoAPI has bad character encoding Reviewed-by: coffeys, hchao ! 
src/jdk.crypto.mscapi/windows/native/libsunmscapi/security.cpp + test/jdk/sun/security/mscapi/NonAsciiAlias.java Changeset: 6aaf141f Author: Lance Andersen Date: 2022-07-08 18:56:04 +0000 URL: https://git.openjdk.org/loom/commit/6aaf141f61416104020107c371592812a4c723d9 8289984: Files:isDirectory and isRegularFile methods not throwing SecurityException Reviewed-by: iris, alanb ! src/java.base/unix/classes/sun/nio/fs/UnixFileSystemProvider.java ! test/jdk/java/nio/file/Files/CheckPermissions.java Changeset: 54b4576f Author: Jonathan Gibbons Date: 2022-07-08 19:33:03 +0000 URL: https://git.openjdk.org/loom/commit/54b4576f78277335e9b45d0b36d943a20cf40888 8288699: cleanup HTML tree in HtmlDocletWriter.commentTagsToContent Reviewed-by: hannesw ! src/jdk.compiler/share/classes/com/sun/tools/javac/parser/DocCommentParser.java ! src/jdk.compiler/share/classes/com/sun/tools/javac/resources/compiler.properties ! src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclets/formats/html/AbstractOverviewIndexWriter.java ! src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclets/formats/html/HtmlDocletWriter.java ! src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclets/formats/html/HtmlSerialFieldWriter.java ! src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclets/formats/html/Signatures.java ! src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclets/formats/html/TagletWriterImpl.java ! src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclets/formats/html/markup/Entity.java ! src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclets/formats/html/markup/RawHtml.java ! src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclets/formats/html/markup/Text.java ! src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclets/formats/html/markup/TextBuilder.java ! src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclets/toolkit/CommentUtils.java ! src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclets/toolkit/Content.java ! 
src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclets/toolkit/taglets/UserTaglet.java ! src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclets/toolkit/util/CommentHelper.java ! src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclets/toolkit/util/Utils.java ! test/langtools/jdk/javadoc/doclet/testTypeAnnotations/TestTypeAnnotations.java = test/langtools/tools/javac/diags/examples/InvalidHtml.java Changeset: 3c08e6b3 Author: Ioi Lam Date: 2022-07-09 03:47:20 +0000 URL: https://git.openjdk.org/loom/commit/3c08e6b311121e05e30b88c0e325317f364ef15d 8289780: Avoid formatting stub names when Forte is not enabled Reviewed-by: dholmes, coleenp, sspitsyn ! src/hotspot/share/code/codeBlob.cpp ! src/hotspot/share/interpreter/abstractInterpreter.cpp ! src/hotspot/share/prims/forte.cpp ! src/hotspot/share/prims/forte.hpp ! src/hotspot/share/runtime/sharedRuntime.cpp Changeset: 81ee7d28 Author: Jatin Bhateja Date: 2022-07-09 15:13:25 +0000 URL: https://git.openjdk.org/loom/commit/81ee7d28f8cb9f6c7fb6d2c76a0f14fd5147d93c 8289186: Support predicated vector load/store operations over X86 AVX2 targets. Reviewed-by: xgong, kvn ! src/hotspot/cpu/x86/assembler_x86.cpp ! src/hotspot/cpu/x86/assembler_x86.hpp ! src/hotspot/cpu/x86/c2_MacroAssembler_x86.cpp ! src/hotspot/cpu/x86/c2_MacroAssembler_x86.hpp ! src/hotspot/cpu/x86/x86.ad ! src/hotspot/share/opto/vectorIntrinsics.cpp ! src/hotspot/share/opto/vectornode.hpp + test/micro/org/openjdk/bench/jdk/incubator/vector/StoreMaskedIOOBEBenchmark.java Changeset: 87aa3ce0 Author: Andrey Turbanov Date: 2022-07-09 17:59:43 +0000 URL: https://git.openjdk.org/loom/commit/87aa3ce03e5e294b35cf2cab3cbba0d1964bbbff 8289274: Cleanup unnecessary null comparison before instanceof check in security modules Reviewed-by: mullan ! src/java.base/macosx/classes/apple/security/KeychainStore.java ! src/java.base/share/classes/com/sun/crypto/provider/RC2Cipher.java ! src/java.base/share/classes/javax/security/auth/PrivateCredentialPermission.java ! 
src/java.base/share/classes/sun/security/pkcs12/PKCS12KeyStore.java ! src/java.base/share/classes/sun/security/provider/JavaKeyStore.java ! src/java.base/share/classes/sun/security/provider/PolicyFile.java ! src/java.base/share/classes/sun/security/provider/SubjectCodeSource.java ! src/java.base/share/classes/sun/security/provider/certpath/CertId.java ! src/java.base/share/classes/sun/security/provider/certpath/RevocationChecker.java ! src/java.base/share/classes/sun/security/util/BitArray.java ! src/java.base/share/classes/sun/security/x509/AccessDescription.java ! src/jdk.crypto.cryptoki/share/classes/sun/security/pkcs11/P11KeyStore.java ! src/jdk.security.auth/share/classes/com/sun/security/auth/module/KeyStoreLoginModule.java ! src/jdk.security.jgss/share/classes/com/sun/security/sasl/gsskerb/GssKrb5Client.java Changeset: e9d9cc6d Author: Ioi Lam Date: 2022-07-11 05:21:01 +0000 URL: https://git.openjdk.org/loom/commit/e9d9cc6d0aece2237c490a610d79a562867251d8 8290027: Move inline functions from vm_version_x86.hpp to cpp Reviewed-by: kbarrett, dholmes ! src/hotspot/cpu/x86/vm_version_x86.cpp ! src/hotspot/cpu/x86/vm_version_x86.hpp Changeset: 4ab77ac6 Author: Thomas Schatzl Date: 2022-07-11 07:36:21 +0000 URL: https://git.openjdk.org/loom/commit/4ab77ac60df78eedb16ebe142a51f703165e808d 8290017: Directly call HeapRegion::block_start in G1CMObjArrayProcessor::process_slice Reviewed-by: ayang, iwalulya ! src/hotspot/share/gc/g1/g1ConcurrentMarkObjArrayProcessor.cpp Changeset: e2598207 Author: Thomas Schatzl Date: 2022-07-11 07:58:07 +0000 URL: https://git.openjdk.org/loom/commit/e25982071d6d1586d723bcc0d261be619a187f00 8290019: Refactor HeapRegion::oops_on_memregion_iterate() Reviewed-by: ayang, iwalulya ! src/hotspot/share/gc/g1/heapRegion.hpp ! 
src/hotspot/share/gc/g1/heapRegion.inline.hpp Changeset: 0225eb43 Author: Thomas Schatzl Date: 2022-07-11 07:59:00 +0000 URL: https://git.openjdk.org/loom/commit/0225eb434cb8792d362923bf2c2e3607be4efcb9 8290018: Remove dead declarations in G1BlockOffsetTablePart Reviewed-by: ayang ! src/hotspot/share/gc/g1/g1BlockOffsetTable.hpp Changeset: 2579373d Author: Koichi Sakata Committer: David Holmes Date: 2022-07-11 09:24:16 +0000 URL: https://git.openjdk.org/loom/commit/2579373dd0cc151dad22e4041f42bbd314b3be5f 8280472: Don't mix legacy logging with UL Reviewed-by: dholmes, mgronlun ! src/hotspot/share/oops/method.cpp Changeset: bba6be79 Author: Aggelos Biboudis Committer: Jan Lahoda Date: 2022-07-11 11:13:55 +0000 URL: https://git.openjdk.org/loom/commit/bba6be79e06b2b83b97e6def7b6a520e93f5737c 8269674: Improve testing of parenthesized patterns Reviewed-by: jlahoda ! src/jdk.compiler/share/classes/com/sun/tools/javac/parser/JavacParser.java + test/langtools/tools/javac/patterns/ParenthesizedCombo.java Changeset: 46251bc6 Author: Prasanta Sadhukhan Date: 2022-07-11 11:35:32 +0000 URL: https://git.openjdk.org/loom/commit/46251bc6e248a19e8d78173ff8d0502c68ee1acb 8224267: JOptionPane message string with 5000+ newlines produces StackOverflowError Reviewed-by: tr, aivanov ! src/java.desktop/share/classes/javax/swing/plaf/basic/BasicOptionPaneUI.java + test/jdk/javax/swing/JOptionPane/TestOptionPaneStackOverflow.java Changeset: 0c370089 Author: Coleen Phillimore Date: 2022-07-11 13:07:03 +0000 URL: https://git.openjdk.org/loom/commit/0c37008917789e7b631b5c18e6f54454b1bfe038 8275662: remove test/lib/sun/hotspot Reviewed-by: mseledtsov, sspitsyn, lmesnik ! test/hotspot/jtreg/compiler/cha/AbstractRootMethod.java ! test/hotspot/jtreg/compiler/cha/DefaultRootMethod.java ! test/hotspot/jtreg/compiler/cha/Utils.java ! test/hotspot/jtreg/compiler/codecache/OverflowCodeCacheTest.java ! test/hotspot/jtreg/compiler/codecache/cli/TestSegmentedCodeCacheOption.java ! 
test/hotspot/jtreg/compiler/codecache/cli/codeheapsize/CodeCacheFreeSpaceRunner.java ! test/hotspot/jtreg/compiler/codecache/cli/codeheapsize/GenericCodeHeapSizeRunner.java ! test/hotspot/jtreg/compiler/codecache/cli/codeheapsize/JVMStartupRunner.java ! test/hotspot/jtreg/compiler/codecache/cli/codeheapsize/TestCodeHeapSizeOptions.java ! test/hotspot/jtreg/compiler/codecache/cli/common/CodeCacheCLITestCase.java ! test/hotspot/jtreg/compiler/codecache/cli/common/CodeCacheInfoFormatter.java ! test/hotspot/jtreg/compiler/codecache/cli/common/CodeCacheOptions.java ! test/hotspot/jtreg/compiler/codecache/cli/printcodecache/PrintCodeCacheRunner.java ! test/hotspot/jtreg/compiler/codecache/cli/printcodecache/TestPrintCodeCacheOption.java ! test/hotspot/jtreg/compiler/codecache/jmx/BeanTypeTest.java ! test/hotspot/jtreg/compiler/codecache/jmx/CodeCacheUtils.java ! test/hotspot/jtreg/compiler/codecache/jmx/CodeHeapBeanPresenceTest.java ! test/hotspot/jtreg/compiler/codecache/jmx/GetUsageTest.java ! test/hotspot/jtreg/compiler/codecache/jmx/InitialAndMaxUsageTest.java ! test/hotspot/jtreg/compiler/codecache/jmx/ManagerNamesTest.java ! test/hotspot/jtreg/compiler/codecache/jmx/MemoryPoolsPresenceTest.java ! test/hotspot/jtreg/compiler/codecache/jmx/PeakUsageTest.java ! test/hotspot/jtreg/compiler/codecache/jmx/PoolsIndependenceTest.java ! test/hotspot/jtreg/compiler/codecache/jmx/ThresholdNotificationsTest.java ! test/hotspot/jtreg/compiler/codecache/jmx/UsageThresholdExceededTest.java ! test/hotspot/jtreg/compiler/codecache/jmx/UsageThresholdIncreasedTest.java ! test/hotspot/jtreg/compiler/codecache/jmx/UsageThresholdNotExceededTest.java ! test/hotspot/jtreg/compiler/codecache/stress/RandomAllocationTest.java ! test/hotspot/jtreg/compiler/codecache/stress/ReturnBlobToWrongHeapTest.java ! test/hotspot/jtreg/compiler/codegen/aes/TestAESMain.java ! test/hotspot/jtreg/compiler/codegen/aes/TestCipherBlockChainingEncrypt.java ! 
test/hotspot/jtreg/compiler/intrinsics/base64/TestBase64.java ! test/hotspot/jtreg/compiler/intrinsics/bmi/TestAndnI.java ! test/hotspot/jtreg/compiler/intrinsics/bmi/TestAndnL.java ! test/hotspot/jtreg/compiler/intrinsics/bmi/TestBlsiI.java ! test/hotspot/jtreg/compiler/intrinsics/bmi/TestBlsiL.java ! test/hotspot/jtreg/compiler/intrinsics/bmi/TestBlsmskI.java ! test/hotspot/jtreg/compiler/intrinsics/bmi/TestBlsmskL.java ! test/hotspot/jtreg/compiler/intrinsics/bmi/TestBlsrI.java ! test/hotspot/jtreg/compiler/intrinsics/bmi/TestBlsrL.java ! test/hotspot/jtreg/compiler/intrinsics/bmi/TestBzhiI2L.java ! test/hotspot/jtreg/compiler/intrinsics/bmi/TestLzcntI.java ! test/hotspot/jtreg/compiler/intrinsics/bmi/TestLzcntL.java ! test/hotspot/jtreg/compiler/intrinsics/bmi/TestTzcntI.java ! test/hotspot/jtreg/compiler/intrinsics/bmi/TestTzcntL.java ! test/hotspot/jtreg/compiler/intrinsics/bmi/verifycode/BmiIntrinsicBase.java ! test/hotspot/jtreg/compiler/intrinsics/klass/CastNullCheckDroppingsTest.java ! test/hotspot/jtreg/compiler/jvmci/compilerToVM/AllocateCompileIdTest.java ! test/hotspot/jtreg/compiler/jvmci/compilerToVM/CompileCodeTestCase.java ! test/hotspot/jtreg/compiler/jvmci/compilerToVM/DisassembleCodeBlobTest.java ! test/hotspot/jtreg/compiler/jvmci/compilerToVM/ExecuteInstalledCodeTest.java ! test/hotspot/jtreg/compiler/jvmci/compilerToVM/HasCompiledCodeForOSRTest.java ! test/hotspot/jtreg/compiler/jvmci/compilerToVM/InvalidateInstalledCodeTest.java ! test/hotspot/jtreg/compiler/onSpinWait/TestOnSpinWaitAArch64DefaultFlags.java ! test/hotspot/jtreg/compiler/unsafe/UnsafeGetStableArrayElement.java ! test/hotspot/jtreg/compiler/whitebox/AllocationCodeBlobTest.java ! test/hotspot/jtreg/compiler/whitebox/CompilerWhiteBoxTest.java ! test/hotspot/jtreg/compiler/whitebox/DeoptimizeFramesTest.java ! test/hotspot/jtreg/compiler/whitebox/ForceNMethodSweepTest.java ! test/hotspot/jtreg/compiler/whitebox/GetCodeHeapEntriesTest.java ! 
test/hotspot/jtreg/compiler/whitebox/GetNMethodTest.java ! test/hotspot/jtreg/gc/TestConcurrentGCBreakpoints.java ! test/hotspot/jtreg/gc/TestJNIWeak/TestJNIWeak.java ! test/hotspot/jtreg/gc/TestSmallHeap.java ! test/hotspot/jtreg/gc/arguments/TestParallelGCThreads.java ! test/hotspot/jtreg/gc/arguments/TestParallelRefProc.java ! test/hotspot/jtreg/gc/ergonomics/TestDynamicNumberOfGCThreads.java ! test/hotspot/jtreg/gc/ergonomics/TestInitialGCThreadLogging.java ! test/hotspot/jtreg/gc/g1/TestGCLogMessages.java ! test/hotspot/jtreg/gc/g1/TestVerificationInConcurrentCycle.java ! test/hotspot/jtreg/gc/logging/TestGCId.java ! test/hotspot/jtreg/runtime/CompressedOops/UseCompressedOops.java ! test/hotspot/jtreg/runtime/MemberName/MemberNameLeak.java ! test/hotspot/jtreg/runtime/cds/appcds/CommandLineFlagCombo.java ! test/hotspot/jtreg/runtime/cds/appcds/JarBuilder.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/CDSStreamTestDriver.java ! test/hotspot/jtreg/runtime/cds/appcds/dynamicArchive/MainModuleOnly.java ! test/hotspot/jtreg/runtime/cds/appcds/jigsaw/modulepath/MainModuleOnly.java ! test/hotspot/jtreg/runtime/cds/appcds/sharedStrings/IncompatibleOptions.java ! test/hotspot/jtreg/runtime/stringtable/StringTableCleaningTest.java ! test/hotspot/jtreg/serviceability/sa/TestUniverse.java ! test/hotspot/jtreg/vmTestbase/nsk/jvmti/scenarios/general_functions/GF08/gf08t001/TestDriver.java ! test/jdk/com/sun/jdi/EATests.java ! test/jdk/java/lang/management/MemoryMXBean/CollectionUsageThreshold.java ! test/jdk/java/lang/management/MemoryMXBean/LowMemoryTest.java ! test/jdk/java/lang/management/MemoryMXBean/ResetPeakMemoryUsage.java ! test/jdk/jdk/jfr/event/compiler/TestCodeCacheFull.java ! test/jdk/jdk/jfr/event/compiler/TestCodeSweeper.java ! test/jdk/jdk/jfr/jvm/TestJFRIntrinsic.java - test/lib-test/jdk/test/whitebox/OldWhiteBox.java ! test/lib/jdk/test/lib/cli/predicate/CPUSpecificPredicate.java ! 
test/lib/jdk/test/lib/helpers/ClassFileInstaller.java - test/lib/sun/hotspot/WhiteBox.java - test/lib/sun/hotspot/code/BlobType.java - test/lib/sun/hotspot/code/CodeBlob.java - test/lib/sun/hotspot/code/Compiler.java - test/lib/sun/hotspot/code/NMethod.java - test/lib/sun/hotspot/cpuinfo/CPUInfo.java - test/lib/sun/hotspot/gc/GC.java Changeset: 95c80229 Author: Thomas Stuefe Date: 2022-07-11 14:07:12 +0000 URL: https://git.openjdk.org/loom/commit/95c8022958f84047cf26909239d8608eff4e35fb 8290046: NMT: Remove unused MallocSiteTable::reset() Reviewed-by: jiefu, zgu ! src/hotspot/share/services/mallocSiteTable.cpp ! src/hotspot/share/services/mallocSiteTable.hpp Changeset: fc01666a Author: Alan Bateman Date: 2022-07-11 14:41:13 +0000 URL: https://git.openjdk.org/loom/commit/fc01666a5824d55b2549c81c0c3602aafdec693c 8290002: (se) AssertionError in SelectorImpl.implCloseSelector Reviewed-by: michaelm ! src/java.base/share/classes/sun/nio/ch/SelectorImpl.java Changeset: 59980ac8 Author: Pavel Rappo Date: 2022-07-11 15:31:22 +0000 URL: https://git.openjdk.org/loom/commit/59980ac8e49c0e46120520cf0007c6fed514251d 8288309: Rename the "testTagInheritence" directory Reviewed-by: hannesw = test/langtools/jdk/javadoc/doclet/testTagInheritance/TestTagInheritance.java = test/langtools/jdk/javadoc/doclet/testTagInheritance/firstSentence/A.java = test/langtools/jdk/javadoc/doclet/testTagInheritance/firstSentence/B.java = test/langtools/jdk/javadoc/doclet/testTagInheritance/firstSentence2/A.java = test/langtools/jdk/javadoc/doclet/testTagInheritance/firstSentence2/B.java = test/langtools/jdk/javadoc/doclet/testTagInheritance/firstSentence2/C.java = test/langtools/jdk/javadoc/doclet/testTagInheritance/pkg/TestAbstractClass.java = test/langtools/jdk/javadoc/doclet/testTagInheritance/pkg/TestInterface.java = test/langtools/jdk/javadoc/doclet/testTagInheritance/pkg/TestInterfaceForAbstractClass.java = test/langtools/jdk/javadoc/doclet/testTagInheritance/pkg/TestSuperSuperClass.java = 
test/langtools/jdk/javadoc/doclet/testTagInheritance/pkg/TestSuperSuperInterface.java = test/langtools/jdk/javadoc/doclet/testTagInheritance/pkg/TestTagInheritance.java Changeset: c33fa55c Author: Calvin Cheung Date: 2022-07-11 15:33:18 +0000 URL: https://git.openjdk.org/loom/commit/c33fa55cf8e194e2662c11d342eee68ec67abb4d 8274235: -Xshare:dump should not call vm_direct_exit Reviewed-by: iklam, dholmes ! src/hotspot/share/cds/archiveBuilder.cpp ! src/hotspot/share/cds/heapShared.cpp ! src/hotspot/share/cds/metaspaceShared.cpp Changeset: 0c1aa2bc Author: Coleen Phillimore Date: 2022-07-11 15:34:17 +0000 URL: https://git.openjdk.org/loom/commit/0c1aa2bc8a1c23d8da8673a4fac574813f373f57 8289184: runtime/ClassUnload/DictionaryDependsTest.java failed with "Test failed: should be unloaded" Reviewed-by: lmesnik, hseigel ! test/hotspot/jtreg/runtime/BadObjectClass/TestUnloadClassError.java ! test/hotspot/jtreg/runtime/Nestmates/membership/TestNestHostErrorWithClassUnload.java ! test/hotspot/jtreg/runtime/logging/ClassLoadUnloadTest.java ! test/hotspot/jtreg/runtime/logging/LoaderConstraintsTest.java ! test/lib/jdk/test/lib/classloader/ClassUnloadCommon.java Changeset: 11319c2a Author: Brian Burkhalter Date: 2022-07-07 22:36:08 +0000 URL: https://git.openjdk.org/loom/commit/11319c2aeb16ef2feb0ecab0e2811a52e845739d 8278469: Test java/nio/channels/FileChannel/LargeGatheringWrite.java times out 8289526: java/nio/channels/FileChannel/MapTest.java times out Reviewed-by: dcubed ! test/jdk/TEST.ROOT = test/jdk/java/nio/channels/FileChannel/largeMemory/LargeGatheringWrite.java = test/jdk/java/nio/channels/FileChannel/largeMemory/MapTest.java Changeset: 1304390b Author: Daniel D. Daugherty Date: 2022-07-07 23:09:42 +0000 URL: https://git.openjdk.org/loom/commit/1304390b3e7ecb4c87108747defd33d9fc4045c4 8289951: ProblemList jdk/jfr/api/consumer/TestRecordingFileWrite.java on linux-x64 and macosx-x64 Reviewed-by: psandoz ! 
test/jdk/ProblemList.txt Changeset: 64286074 Author: Alexander Matveev Date: 2022-07-08 00:17:11 +0000 URL: https://git.openjdk.org/loom/commit/64286074ba763d4a1e8879d8af69eee34d32cfa6 8289030: [macos] app image signature invalid when creating DMG or PKG Reviewed-by: asemenyuk ! src/jdk.jpackage/macosx/classes/jdk/jpackage/internal/MacAppBundler.java ! src/jdk.jpackage/macosx/classes/jdk/jpackage/internal/MacAppImageBuilder.java ! src/jdk.jpackage/macosx/classes/jdk/jpackage/internal/MacBaseInstallerBundler.java ! src/jdk.jpackage/macosx/classes/jdk/jpackage/internal/resources/MacResources.properties ! src/jdk.jpackage/macosx/classes/jdk/jpackage/internal/resources/MacResources_de.properties ! src/jdk.jpackage/macosx/classes/jdk/jpackage/internal/resources/MacResources_ja.properties ! src/jdk.jpackage/macosx/classes/jdk/jpackage/internal/resources/MacResources_zh_CN.properties ! src/jdk.jpackage/share/classes/jdk/jpackage/internal/AbstractAppImageBuilder.java ! src/jdk.jpackage/share/classes/jdk/jpackage/internal/AppImageBundler.java ! src/jdk.jpackage/share/classes/jdk/jpackage/internal/AppImageFile.java ! src/jdk.jpackage/share/classes/jdk/jpackage/internal/IOUtils.java ! test/jdk/tools/jpackage/helpers/jdk/jpackage/test/JPackageCommand.java ! test/jdk/tools/jpackage/macosx/SigningPackageTest.java + test/jdk/tools/jpackage/macosx/SigningPackageTwoStepTest.java Changeset: ea21c465 Author: Thomas Stuefe Date: 2022-07-08 08:13:20 +0000 URL: https://git.openjdk.org/loom/commit/ea21c46531e8095c12153f787a24715eb8efbb03 8289799: Build warning in methodData.cpp memset zero-length parameter Backport-of: cce77a700141a854bafaa5ccb33db026affcf322 ! src/hotspot/share/oops/methodData.cpp Changeset: 732f1065 Author: Jorn Vernee Date: 2022-07-08 11:18:32 +0000 URL: https://git.openjdk.org/loom/commit/732f1065fe05ae737a716bea92536cb8edc2b6a0 8289223: Canonicalize header ids in foreign API javadocs Reviewed-by: mcimadamore ! 
src/java.base/share/classes/java/lang/foreign/Linker.java ! src/java.base/share/classes/java/lang/foreign/MemoryAddress.java ! src/java.base/share/classes/java/lang/foreign/MemoryLayout.java ! src/java.base/share/classes/java/lang/foreign/MemorySegment.java ! src/java.base/share/classes/java/lang/foreign/MemorySession.java ! src/java.base/share/classes/java/lang/foreign/SymbolLookup.java ! src/java.base/share/classes/java/lang/foreign/package-info.java Changeset: 460d879a Author: Jorn Vernee Date: 2022-07-08 15:21:11 +0000 URL: https://git.openjdk.org/loom/commit/460d879a75133fc071802bbc2c742b4232db604e 8289601: SegmentAllocator::allocateUtf8String(String str) should be clarified for strings containing \0 Reviewed-by: psandoz, mcimadamore ! src/java.base/share/classes/java/lang/foreign/MemorySegment.java ! src/java.base/share/classes/java/lang/foreign/SegmentAllocator.java Changeset: eeaf0bba Author: Stuart Marks Date: 2022-07-08 17:03:48 +0000 URL: https://git.openjdk.org/loom/commit/eeaf0bbabc6632c181b191854678e72a333ec0a5 8289872: wrong wording in @param doc for HashMap.newHashMap et. al. Reviewed-by: chegar, naoto, iris ! src/java.base/share/classes/java/util/HashMap.java ! src/java.base/share/classes/java/util/LinkedHashMap.java ! src/java.base/share/classes/java/util/WeakHashMap.java Changeset: c142fbbb Author: Vladimir Kempik Date: 2022-07-08 17:49:53 +0000 URL: https://git.openjdk.org/loom/commit/c142fbbbafcaa728cbdc56467c641eeed511f161 8289697: buffer overflow in MTLVertexCache.m: MTLVertexCache_AddGlyphQuad Backport-of: d852e99ae9de4c611438c50ce37ea1806f58cbdf ! src/java.desktop/macosx/native/libawt_lwawt/java2d/metal/MTLVertexCache.m Changeset: 9981c85d Author: Daniel D. Daugherty Date: 2022-07-08 19:47:55 +0000 URL: https://git.openjdk.org/loom/commit/9981c85d462b1f5a82ebe8b88a1dabf033b4d551 8290033: ProblemList serviceability/jvmti/GetLocalVariable/GetLocalWithoutSuspendTest.java on windows-x64 in -Xcomp mode Reviewed-by: azvegint, tschatzl ! 
test/hotspot/jtreg/ProblemList-Xcomp.txt Changeset: c86c51cc Author: Joe Wang Date: 2022-07-08 21:34:57 +0000 URL: https://git.openjdk.org/loom/commit/c86c51cc72e3457756434b9150b0c5ef2f5d496d 8282071: Update java.xml module-info Reviewed-by: lancea, iris, naoto ! src/java.xml/share/classes/module-info.java Changeset: b542bcba Author: Albert Mingkun Yang Date: 2022-07-11 07:58:03 +0000 URL: https://git.openjdk.org/loom/commit/b542bcba57a1ac79b9b7182dbf984b447754fafc 8289729: G1: Incorrect verification logic in G1ConcurrentMark::clear_next_bitmap Reviewed-by: tschatzl, iwalulya ! src/hotspot/share/gc/g1/g1ConcurrentMark.cpp Changeset: 25f4b043 Author: Jan Lahoda Date: 2022-07-11 08:59:32 +0000 URL: https://git.openjdk.org/loom/commit/25f4b04365e40a91ba7a06f6f9fe99e1785ce4f4 8289894: A NullPointerException thrown from guard expression Reviewed-by: vromero ! src/jdk.compiler/share/classes/com/sun/tools/javac/comp/Check.java ! src/jdk.compiler/share/classes/com/sun/tools/javac/comp/TransPatterns.java ! test/langtools/tools/javac/patterns/CaseStructureTest.java ! test/langtools/tools/javac/patterns/Guards.java ! test/langtools/tools/javac/patterns/SwitchErrors.java ! test/langtools/tools/javac/patterns/SwitchErrors.out Changeset: 04942914 Author: Markus Grönlund Date: 2022-07-11 09:11:58 +0000 URL: https://git.openjdk.org/loom/commit/0494291490b6cd23d228f39199a3686cc9731ec0 8289692: JFR: Thread checkpoint no longer enforce mutual exclusion post Loom integration Reviewed-by: rehn ! src/hotspot/share/jfr/recorder/checkpoint/jfrCheckpointManager.cpp ! src/hotspot/share/jfr/recorder/storage/jfrStorageUtils.hpp ! src/hotspot/share/jfr/recorder/storage/jfrStorageUtils.inline.hpp Changeset: cb6e9cb7 Author: Martin Doerr Date: 2022-07-11 09:21:05 +0000 URL: https://git.openjdk.org/loom/commit/cb6e9cb7286f609dec1fe1157bf95afc503870a9 8290004: [PPC64] JfrGetCallTrace: assert(_pc != nullptr) failed: must have PC Reviewed-by: rrich, lucy ! 
src/hotspot/os_cpu/aix_ppc/thread_aix_ppc.cpp ! src/hotspot/os_cpu/linux_ppc/thread_linux_ppc.cpp Changeset: c79baaa8 Author: Jesper Wilhelmsson Date: 2022-07-11 16:15:49 +0000 URL: https://git.openjdk.org/loom/commit/c79baaa811971c43fbdbc251482d0e40903588cc Merge ! src/hotspot/os_cpu/aix_ppc/javaThread_aix_ppc.cpp ! src/hotspot/os_cpu/linux_ppc/javaThread_linux_ppc.cpp ! src/hotspot/share/gc/g1/g1ConcurrentMark.cpp ! src/hotspot/share/jfr/recorder/storage/jfrStorageUtils.hpp ! src/hotspot/share/jfr/recorder/storage/jfrStorageUtils.inline.hpp ! src/jdk.compiler/share/classes/com/sun/tools/javac/comp/Check.java ! test/hotspot/jtreg/ProblemList-Xcomp.txt ! test/jdk/ProblemList.txt + src/hotspot/os_cpu/aix_ppc/javaThread_aix_ppc.cpp + src/hotspot/os_cpu/linux_ppc/javaThread_linux_ppc.cpp ! src/hotspot/share/gc/g1/g1ConcurrentMark.cpp ! src/hotspot/share/jfr/recorder/storage/jfrStorageUtils.hpp ! src/hotspot/share/jfr/recorder/storage/jfrStorageUtils.inline.hpp ! src/jdk.compiler/share/classes/com/sun/tools/javac/comp/Check.java ! test/hotspot/jtreg/ProblemList-Xcomp.txt ! test/jdk/ProblemList.txt Changeset: 21db9a50 Author: Doug Simon Date: 2022-07-11 16:47:05 +0000 URL: https://git.openjdk.org/loom/commit/21db9a507b441dbf909720b0b394f563e03aafc3 8290065: [JVMCI] only check HotSpotCompiledCode stream is empty if installation succeeds Reviewed-by: kvn ! src/hotspot/share/jvmci/jvmciCodeInstaller.cpp Changeset: f42dab85 Author: Phil Race Date: 2022-07-11 19:19:27 +0000 URL: https://git.openjdk.org/loom/commit/f42dab85924d6a74d1c2c87bca1970e2362f45ea 8289853: Update HarfBuzz to 4.4.1 Reviewed-by: serb, azvegint ! 
src/java.desktop/share/legal/harfbuzz.md + src/java.desktop/share/native/libharfbuzz/OT/Layout/GPOS.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GPOS/Anchor.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GPOS/AnchorFormat1.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GPOS/AnchorFormat2.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GPOS/AnchorFormat3.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GPOS/AnchorMatrix.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GPOS/ChainContextPos.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GPOS/Common.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GPOS/ContextPos.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GPOS/CursivePos.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GPOS/CursivePosFormat1.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GPOS/ExtensionPos.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GPOS/MarkArray.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GPOS/MarkBasePos.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GPOS/MarkBasePosFormat1.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GPOS/MarkLigPos.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GPOS/MarkLigPosFormat1.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GPOS/MarkMarkPos.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GPOS/MarkMarkPosFormat1.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GPOS/MarkRecord.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GPOS/PairPos.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GPOS/PairPosFormat1.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GPOS/PairPosFormat2.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GPOS/PosLookup.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GPOS/PosLookupSubTable.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GPOS/SinglePos.hh + 
src/java.desktop/share/native/libharfbuzz/OT/Layout/GPOS/SinglePosFormat1.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GPOS/SinglePosFormat2.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GPOS/ValueFormat.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GSUB/AlternateSet.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GSUB/AlternateSubst.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GSUB/AlternateSubstFormat1.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GSUB/ChainContextSubst.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GSUB/Common.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GSUB/ContextSubst.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GSUB/ExtensionSubst.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GSUB/GSUB.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GSUB/Ligature.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GSUB/LigatureSet.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GSUB/LigatureSubst.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GSUB/LigatureSubstFormat1.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GSUB/MultipleSubst.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GSUB/MultipleSubstFormat1.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GSUB/ReverseChainSingleSubst.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GSUB/ReverseChainSingleSubstFormat1.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GSUB/Sequence.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GSUB/SingleSubst.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GSUB/SingleSubstFormat1.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GSUB/SingleSubstFormat2.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GSUB/SubstLookup.hh + src/java.desktop/share/native/libharfbuzz/OT/Layout/GSUB/SubstLookupSubTable.hh + 
src/java.desktop/share/native/libharfbuzz/OT/glyf/CompositeGlyph.hh + src/java.desktop/share/native/libharfbuzz/OT/glyf/Glyph.hh + src/java.desktop/share/native/libharfbuzz/OT/glyf/GlyphHeader.hh + src/java.desktop/share/native/libharfbuzz/OT/glyf/SimpleGlyph.hh + src/java.desktop/share/native/libharfbuzz/OT/glyf/SubsetGlyph.hh + src/java.desktop/share/native/libharfbuzz/OT/glyf/glyf-helpers.hh + src/java.desktop/share/native/libharfbuzz/OT/glyf/glyf.hh + src/java.desktop/share/native/libharfbuzz/OT/glyf/loca.hh + src/java.desktop/share/native/libharfbuzz/OT/glyf/path-builder.hh + src/java.desktop/share/native/libharfbuzz/UPDATING.txt + src/java.desktop/share/native/libharfbuzz/graph/graph.hh + src/java.desktop/share/native/libharfbuzz/graph/serialize.hh ! src/java.desktop/share/native/libharfbuzz/hb-aat-layout-ankr-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-aat-layout-bsln-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-aat-layout-common.hh ! src/java.desktop/share/native/libharfbuzz/hb-aat-layout-feat-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-aat-layout-just-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-aat-layout-kerx-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-aat-layout-morx-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-aat-layout-opbd-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-aat-layout-trak-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-aat-layout.cc ! src/java.desktop/share/native/libharfbuzz/hb-aat-ltag-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-algs.hh ! src/java.desktop/share/native/libharfbuzz/hb-array.hh ! src/java.desktop/share/native/libharfbuzz/hb-atomic.hh ! src/java.desktop/share/native/libharfbuzz/hb-bimap.hh + src/java.desktop/share/native/libharfbuzz/hb-bit-page.hh + src/java.desktop/share/native/libharfbuzz/hb-bit-set-invertible.hh + src/java.desktop/share/native/libharfbuzz/hb-bit-set.hh ! 
src/java.desktop/share/native/libharfbuzz/hb-blob.cc ! src/java.desktop/share/native/libharfbuzz/hb-blob.h ! src/java.desktop/share/native/libharfbuzz/hb-blob.hh ! src/java.desktop/share/native/libharfbuzz/hb-buffer-deserialize-json.hh ! src/java.desktop/share/native/libharfbuzz/hb-buffer-deserialize-text.hh ! src/java.desktop/share/native/libharfbuzz/hb-buffer-serialize.cc + src/java.desktop/share/native/libharfbuzz/hb-buffer-verify.cc ! src/java.desktop/share/native/libharfbuzz/hb-buffer.cc ! src/java.desktop/share/native/libharfbuzz/hb-buffer.h ! src/java.desktop/share/native/libharfbuzz/hb-buffer.hh + src/java.desktop/share/native/libharfbuzz/hb-cache.hh ! src/java.desktop/share/native/libharfbuzz/hb-cff-interp-common.hh ! src/java.desktop/share/native/libharfbuzz/hb-cff-interp-cs-common.hh ! src/java.desktop/share/native/libharfbuzz/hb-cff-interp-dict-common.hh ! src/java.desktop/share/native/libharfbuzz/hb-cff1-interp-cs.hh ! src/java.desktop/share/native/libharfbuzz/hb-cff2-interp-cs.hh ! src/java.desktop/share/native/libharfbuzz/hb-common.cc ! src/java.desktop/share/native/libharfbuzz/hb-common.h ! src/java.desktop/share/native/libharfbuzz/hb-config.hh + src/java.desktop/share/native/libharfbuzz/hb-cplusplus.hh ! src/java.desktop/share/native/libharfbuzz/hb-debug.hh ! src/java.desktop/share/native/libharfbuzz/hb-deprecated.h ! src/java.desktop/share/native/libharfbuzz/hb-dispatch.hh ! src/java.desktop/share/native/libharfbuzz/hb-draw.cc ! src/java.desktop/share/native/libharfbuzz/hb-draw.h ! src/java.desktop/share/native/libharfbuzz/hb-draw.hh ! src/java.desktop/share/native/libharfbuzz/hb-face.cc ! src/java.desktop/share/native/libharfbuzz/hb-face.h ! src/java.desktop/share/native/libharfbuzz/hb-fallback-shape.cc ! src/java.desktop/share/native/libharfbuzz/hb-font.cc ! src/java.desktop/share/native/libharfbuzz/hb-font.h ! src/java.desktop/share/native/libharfbuzz/hb-font.hh ! src/java.desktop/share/native/libharfbuzz/hb-ft.cc ! 
src/java.desktop/share/native/libharfbuzz/hb-ft.h ! src/java.desktop/share/native/libharfbuzz/hb-iter.hh ! src/java.desktop/share/native/libharfbuzz/hb-kern.hh ! src/java.desktop/share/native/libharfbuzz/hb-machinery.hh ! src/java.desktop/share/native/libharfbuzz/hb-map.cc ! src/java.desktop/share/native/libharfbuzz/hb-map.h ! src/java.desktop/share/native/libharfbuzz/hb-map.hh ! src/java.desktop/share/native/libharfbuzz/hb-meta.hh ! src/java.desktop/share/native/libharfbuzz/hb-mutex.hh ! src/java.desktop/share/native/libharfbuzz/hb-null.hh ! src/java.desktop/share/native/libharfbuzz/hb-object.hh ! src/java.desktop/share/native/libharfbuzz/hb-open-file.hh ! src/java.desktop/share/native/libharfbuzz/hb-open-type.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-cff-common.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-cff1-table.cc ! src/java.desktop/share/native/libharfbuzz/hb-ot-cff1-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-cff2-table.cc ! src/java.desktop/share/native/libharfbuzz/hb-ot-cff2-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-cmap-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-color-cbdt-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-color-colr-table.hh + src/java.desktop/share/native/libharfbuzz/hb-ot-color-colrv1-closure.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-color-cpal-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-color-sbix-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-color-svg-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-color.cc ! src/java.desktop/share/native/libharfbuzz/hb-ot-deprecated.h ! src/java.desktop/share/native/libharfbuzz/hb-ot-face-table-list.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-face.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-font.cc ! src/java.desktop/share/native/libharfbuzz/hb-ot-gasp-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-glyf-table.hh ! 
src/java.desktop/share/native/libharfbuzz/hb-ot-hdmx-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-head-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-hmtx-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-kern-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-layout-base-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-layout-common.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-layout-gdef-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-layout-gpos-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-layout-gsub-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-layout-gsubgpos.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-layout-jstf-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-layout.cc ! src/java.desktop/share/native/libharfbuzz/hb-ot-layout.h ! src/java.desktop/share/native/libharfbuzz/hb-ot-layout.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-map.cc ! src/java.desktop/share/native/libharfbuzz/hb-ot-map.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-math-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-math.cc ! src/java.desktop/share/native/libharfbuzz/hb-ot-math.h ! src/java.desktop/share/native/libharfbuzz/hb-ot-maxp-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-meta-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-metrics.cc ! src/java.desktop/share/native/libharfbuzz/hb-ot-metrics.h ! src/java.desktop/share/native/libharfbuzz/hb-ot-name-language-static.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-name-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-name.cc ! src/java.desktop/share/native/libharfbuzz/hb-ot-name.h ! src/java.desktop/share/native/libharfbuzz/hb-ot-os2-table.hh + src/java.desktop/share/native/libharfbuzz/hb-ot-post-table-v2subset.hh ! 
src/java.desktop/share/native/libharfbuzz/hb-ot-post-table.hh - src/java.desktop/share/native/libharfbuzz/hb-ot-shape-complex-arabic-fallback.hh - src/java.desktop/share/native/libharfbuzz/hb-ot-shape-complex-arabic-joining-list.hh - src/java.desktop/share/native/libharfbuzz/hb-ot-shape-complex-arabic-table.hh - src/java.desktop/share/native/libharfbuzz/hb-ot-shape-complex-indic-machine.hh - src/java.desktop/share/native/libharfbuzz/hb-ot-shape-complex-indic-table.cc - src/java.desktop/share/native/libharfbuzz/hb-ot-shape-complex-indic.cc - src/java.desktop/share/native/libharfbuzz/hb-ot-shape-complex-indic.hh - src/java.desktop/share/native/libharfbuzz/hb-ot-shape-complex-khmer-machine.hh - src/java.desktop/share/native/libharfbuzz/hb-ot-shape-complex-khmer.cc - src/java.desktop/share/native/libharfbuzz/hb-ot-shape-complex-khmer.hh - src/java.desktop/share/native/libharfbuzz/hb-ot-shape-complex-myanmar-machine.hh - src/java.desktop/share/native/libharfbuzz/hb-ot-shape-complex-myanmar.cc - src/java.desktop/share/native/libharfbuzz/hb-ot-shape-complex-myanmar.hh - src/java.desktop/share/native/libharfbuzz/hb-ot-shape-complex-syllabic.cc - src/java.desktop/share/native/libharfbuzz/hb-ot-shape-complex-syllabic.hh - src/java.desktop/share/native/libharfbuzz/hb-ot-shape-complex-use-machine.hh - src/java.desktop/share/native/libharfbuzz/hb-ot-shape-complex-use-table.hh - src/java.desktop/share/native/libharfbuzz/hb-ot-shape-complex-vowel-constraints.hh - src/java.desktop/share/native/libharfbuzz/hb-ot-shape-complex.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-shape-fallback.cc ! src/java.desktop/share/native/libharfbuzz/hb-ot-shape-normalize.cc ! src/java.desktop/share/native/libharfbuzz/hb-ot-shape-normalize.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-shape.cc ! 
src/java.desktop/share/native/libharfbuzz/hb-ot-shape.hh + src/java.desktop/share/native/libharfbuzz/hb-ot-shaper-arabic-fallback.hh + src/java.desktop/share/native/libharfbuzz/hb-ot-shaper-arabic-joining-list.hh + src/java.desktop/share/native/libharfbuzz/hb-ot-shaper-arabic-pua.hh + src/java.desktop/share/native/libharfbuzz/hb-ot-shaper-arabic-table.hh = src/java.desktop/share/native/libharfbuzz/hb-ot-shaper-arabic-win1256.hh = src/java.desktop/share/native/libharfbuzz/hb-ot-shaper-arabic.cc = src/java.desktop/share/native/libharfbuzz/hb-ot-shaper-arabic.hh = src/java.desktop/share/native/libharfbuzz/hb-ot-shaper-default.cc = src/java.desktop/share/native/libharfbuzz/hb-ot-shaper-hangul.cc = src/java.desktop/share/native/libharfbuzz/hb-ot-shaper-hebrew.cc + src/java.desktop/share/native/libharfbuzz/hb-ot-shaper-indic-machine.hh + src/java.desktop/share/native/libharfbuzz/hb-ot-shaper-indic-table.cc + src/java.desktop/share/native/libharfbuzz/hb-ot-shaper-indic.cc + src/java.desktop/share/native/libharfbuzz/hb-ot-shaper-indic.hh + src/java.desktop/share/native/libharfbuzz/hb-ot-shaper-khmer-machine.hh + src/java.desktop/share/native/libharfbuzz/hb-ot-shaper-khmer.cc + src/java.desktop/share/native/libharfbuzz/hb-ot-shaper-myanmar-machine.hh + src/java.desktop/share/native/libharfbuzz/hb-ot-shaper-myanmar.cc + src/java.desktop/share/native/libharfbuzz/hb-ot-shaper-syllabic.cc + src/java.desktop/share/native/libharfbuzz/hb-ot-shaper-syllabic.hh = src/java.desktop/share/native/libharfbuzz/hb-ot-shaper-thai.cc + src/java.desktop/share/native/libharfbuzz/hb-ot-shaper-use-machine.hh + src/java.desktop/share/native/libharfbuzz/hb-ot-shaper-use-table.hh = src/java.desktop/share/native/libharfbuzz/hb-ot-shaper-use.cc = src/java.desktop/share/native/libharfbuzz/hb-ot-shaper-vowel-constraints.cc + src/java.desktop/share/native/libharfbuzz/hb-ot-shaper-vowel-constraints.hh + src/java.desktop/share/native/libharfbuzz/hb-ot-shaper.hh ! 
src/java.desktop/share/native/libharfbuzz/hb-ot-stat-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-tag-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-tag.cc ! src/java.desktop/share/native/libharfbuzz/hb-ot-var-avar-table.hh + src/java.desktop/share/native/libharfbuzz/hb-ot-var-common.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-var-fvar-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-var-gvar-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-var-hvar-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-var-mvar-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-ot-var.cc ! src/java.desktop/share/native/libharfbuzz/hb-ot-var.h ! src/java.desktop/share/native/libharfbuzz/hb-ot-vorg-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-pool.hh + src/java.desktop/share/native/libharfbuzz/hb-priority-queue.hh + src/java.desktop/share/native/libharfbuzz/hb-repacker.hh ! src/java.desktop/share/native/libharfbuzz/hb-sanitize.hh ! src/java.desktop/share/native/libharfbuzz/hb-serialize.hh ! src/java.desktop/share/native/libharfbuzz/hb-set-digest.hh ! src/java.desktop/share/native/libharfbuzz/hb-set.cc ! src/java.desktop/share/native/libharfbuzz/hb-set.h ! src/java.desktop/share/native/libharfbuzz/hb-set.hh ! src/java.desktop/share/native/libharfbuzz/hb-shape-plan.cc ! src/java.desktop/share/native/libharfbuzz/hb-shape-plan.hh ! src/java.desktop/share/native/libharfbuzz/hb-shape.cc ! src/java.desktop/share/native/libharfbuzz/hb-shaper.cc ! src/java.desktop/share/native/libharfbuzz/hb-static.cc ! src/java.desktop/share/native/libharfbuzz/hb-style.cc ! src/java.desktop/share/native/libharfbuzz/hb-style.h ! src/java.desktop/share/native/libharfbuzz/hb-subset-cff-common.cc ! src/java.desktop/share/native/libharfbuzz/hb-subset-cff-common.hh ! src/java.desktop/share/native/libharfbuzz/hb-subset-cff1.cc ! src/java.desktop/share/native/libharfbuzz/hb-subset-cff2.cc ! 
src/java.desktop/share/native/libharfbuzz/hb-subset-input.cc ! src/java.desktop/share/native/libharfbuzz/hb-subset-input.hh ! src/java.desktop/share/native/libharfbuzz/hb-subset-plan.cc ! src/java.desktop/share/native/libharfbuzz/hb-subset-plan.hh ! src/java.desktop/share/native/libharfbuzz/hb-subset.cc ! src/java.desktop/share/native/libharfbuzz/hb-subset.h ! src/java.desktop/share/native/libharfbuzz/hb-subset.hh ! src/java.desktop/share/native/libharfbuzz/hb-ucd-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-ucd.cc ! src/java.desktop/share/native/libharfbuzz/hb-unicode-emoji-table.hh ! src/java.desktop/share/native/libharfbuzz/hb-unicode.cc ! src/java.desktop/share/native/libharfbuzz/hb-unicode.hh ! src/java.desktop/share/native/libharfbuzz/hb-vector.hh ! src/java.desktop/share/native/libharfbuzz/hb-version.h ! src/java.desktop/share/native/libharfbuzz/hb.hh Changeset: 3b9059a1 Author: Daniel Fuchs Date: 2022-07-12 09:59:29 +0000 URL: https://git.openjdk.org/loom/commit/3b9059a1471ba74af8bf6a3c0e5b2e1140eb4afd 8290083: ResponseBodyBeforeError: AssertionError or SSLException: Unsupported or unrecognized SSL message Reviewed-by: jpai ! test/jdk/java/net/httpclient/ResponseBodyBeforeError.java Changeset: 04c47da1 Author: Daniel Jeliński Date: 2022-07-12 11:30:17 +0000 URL: https://git.openjdk.org/loom/commit/04c47da118b2870d1c7525348a2ffdf9cd1cc0a4 8289768: Clean up unused code Reviewed-by: dfuchs, lancea, weijun, naoto, cjplummer, alanb, michaelm, chegar ! src/java.base/macosx/native/libjava/ProcessHandleImpl_macosx.c ! src/java.base/macosx/native/libjli/java_md_macosx.m ! src/java.base/macosx/native/libnet/DefaultProxySelector.c ! src/java.base/macosx/native/libnio/fs/BsdNativeDispatcher.c ! src/java.base/share/native/launcher/defines.h ! src/java.base/share/native/libjava/NativeLibraries.c ! src/java.base/share/native/libjli/java.c ! src/java.base/share/native/libjli/parse_manifest.c ! src/java.base/share/native/libverify/check_code.c ! 
src/java.base/share/native/libzip/zip_util.c ! src/java.base/unix/native/jspawnhelper/jspawnhelper.c ! src/java.base/unix/native/libjava/ProcessImpl_md.c ! src/java.base/unix/native/libjava/TimeZone_md.c ! src/java.base/unix/native/libjava/java_props_md.c ! src/java.base/unix/native/libjava/path_util.c ! src/java.base/unix/native/libjli/java_md.c ! src/java.base/unix/native/libjli/java_md_common.c ! src/java.base/unix/native/libnet/DefaultProxySelector.c ! src/java.base/unix/native/libnet/Inet6AddressImpl.c ! src/java.base/unix/native/libnet/NetworkInterface.c ! src/java.base/unix/native/libnet/net_util_md.c ! src/java.base/unix/native/libnio/ch/NativeThread.c ! src/java.base/unix/native/libnio/ch/Net.c ! src/java.base/unix/native/libnio/ch/UnixDomainSockets.c ! src/java.base/windows/native/libjava/ProcessHandleImpl_win.c ! src/java.base/windows/native/libjava/TimeZone_md.c ! src/java.base/windows/native/libjava/io_util_md.c ! src/java.base/windows/native/libjli/java_md.c ! src/java.base/windows/native/libnet/NetworkInterface.c ! src/java.base/windows/native/libnio/ch/Net.c ! src/java.base/windows/native/libnio/fs/WindowsNativeDispatcher.c ! src/java.instrument/windows/native/libinstrument/FileSystemSupport_md.c ! src/java.security.jgss/share/native/libj2gss/GSSLibStub.c ! src/java.security.jgss/windows/native/libsspi_bridge/sspi.cpp ! src/jdk.crypto.cryptoki/share/native/libj2pkcs11/p11_keymgmt.c ! src/jdk.crypto.cryptoki/unix/native/libj2pkcs11/p11_md.c ! src/jdk.crypto.mscapi/windows/native/libsunmscapi/security.cpp ! src/jdk.hotspot.agent/linux/native/libsaproc/LinuxDebuggerLocal.cpp ! src/jdk.hotspot.agent/linux/native/libsaproc/libproc_impl.c ! src/jdk.hotspot.agent/linux/native/libsaproc/ps_core.c ! src/jdk.hotspot.agent/linux/native/libsaproc/symtab.c ! src/jdk.hotspot.agent/macosx/native/libsaproc/symtab.c ! src/jdk.jdi/share/native/libdt_shmem/SharedMemoryTransport.c ! src/jdk.jdwp.agent/share/native/libjdwp/log_messages.c ! 
src/jdk.management/unix/native/libmanagement_ext/OperatingSystemImpl.c ! src/jdk.sctp/unix/native/libsctp/SctpNet.c Changeset: e5491a26 Author: Matthias Baesken Date: 2022-07-12 12:10:28 +0000 URL: https://git.openjdk.org/loom/commit/e5491a2605177a9dca87a060d99aa5ea4fd4a239 8289910: unify os::message_box across posix platforms Reviewed-by: iklam, dholmes ! src/hotspot/os/aix/os_aix.cpp ! src/hotspot/os/bsd/os_bsd.cpp ! src/hotspot/os/linux/os_linux.cpp ! src/hotspot/os/posix/os_posix.cpp Changeset: 393dc7ad Author: Martin Doerr Date: 2022-07-12 13:31:51 +0000 URL: https://git.openjdk.org/loom/commit/393dc7ade716485f4452d0185caf9e630e4c6139 8290082: [PPC64] ZGC C2 load barrier stub needs to preserve vector registers Reviewed-by: eosterlund, rrich ! src/hotspot/cpu/ppc/gc/z/zBarrierSetAssembler_ppc.cpp ! src/hotspot/cpu/ppc/ppc.ad ! src/hotspot/cpu/ppc/register_ppc.hpp ! src/hotspot/cpu/ppc/vmreg_ppc.cpp ! src/hotspot/cpu/ppc/vmreg_ppc.hpp Changeset: ea12615d Author: Ryan Ernst Committer: Chris Hegarty Date: 2022-07-12 13:50:36 +0000 URL: https://git.openjdk.org/loom/commit/ea12615d2f4574467d93cca6b4cc81fc18986307 8288984: Simplification in java.lang.Runtime::exit Reviewed-by: dholmes, chegar, alanb, kbarrett ! src/java.base/share/classes/java/lang/Runtime.java ! src/java.base/share/classes/java/lang/Shutdown.java Changeset: 0e906975 Author: Erik Gahlin Date: 2022-07-12 14:14:56 +0000 URL: https://git.openjdk.org/loom/commit/0e906975a82e2f23c452c2f4ac5cd942f00ce743 8290133: JFR: Remove unused methods in Bits.java Reviewed-by: mgronlun ! src/jdk.jfr/share/classes/jdk/jfr/internal/Bits.java ! src/jdk.jfr/share/classes/jdk/jfr/internal/event/EventWriter.java Changeset: 728157fa Author: Ralf Schmelter Date: 2022-07-12 14:51:55 +0000 URL: https://git.openjdk.org/loom/commit/728157fa03913991088f6bb257a8bc16706792a9 8289917: Metadata for regionsRefilled of G1EvacuationStatistics event is wrong Reviewed-by: tschatzl, mgronlun, stuefe, egahlin ! 
src/hotspot/share/jfr/metadata/metadata.xml Changeset: 7f0e9bd6 Author: Ralf Schmelter Date: 2022-07-12 14:53:46 +0000 URL: https://git.openjdk.org/loom/commit/7f0e9bd632198c7fd34d27b85ca51ea0e2442e4d 8289745: JfrStructCopyFailed uses heap words instead of bytes for object sizes Reviewed-by: mgronlun, stuefe ! src/hotspot/share/gc/g1/g1Trace.cpp ! src/hotspot/share/gc/shared/gcTraceSend.cpp ! test/jdk/jdk/jfr/event/gc/detailed/PromotionFailedEvent.java ! test/jdk/jdk/jfr/event/gc/detailed/TestEvacuationFailedEvent.java Changeset: e8568b89 Author: Ludvig Janiuk Committer: Erik Gahlin Date: 2022-07-12 15:54:36 +0000 URL: https://git.openjdk.org/loom/commit/e8568b890a829f3481a57f4eb5cf1796e363858b 8290020: Deadlock in leakprofiler::emit_events during shutdown Reviewed-by: mgronlun, dholmes, egahlin ! src/hotspot/share/jfr/jfr.cpp ! src/hotspot/share/jfr/jfr.hpp ! src/hotspot/share/prims/jvm.cpp ! src/hotspot/share/runtime/java.cpp ! src/hotspot/share/runtime/java.hpp ! test/jdk/jdk/jfr/jvm/TestDumpOnCrash.java Changeset: fed3af8a Author: Maurizio Cimadamore Date: 2022-07-11 14:30:19 +0000 URL: https://git.openjdk.org/loom/commit/fed3af8ae069fc760a24e750292acbb468b14ce5 8287809: Revisit implementation of memory session Reviewed-by: jvernee ! src/java.base/share/classes/java/nio/Buffer.java ! src/java.base/share/classes/java/nio/Direct-X-Buffer.java.template ! src/java.base/share/classes/jdk/internal/foreign/AbstractMemorySegmentImpl.java ! src/java.base/share/classes/jdk/internal/foreign/ConfinedSession.java ! src/java.base/share/classes/jdk/internal/foreign/HeapMemorySegmentImpl.java ! src/java.base/share/classes/jdk/internal/foreign/MappedMemorySegmentImpl.java ! src/java.base/share/classes/jdk/internal/foreign/MemoryAddressImpl.java ! src/java.base/share/classes/jdk/internal/foreign/MemorySessionImpl.java ! src/java.base/share/classes/jdk/internal/foreign/NativeMemorySegmentImpl.java ! src/java.base/share/classes/jdk/internal/foreign/Scoped.java ! 
src/java.base/share/classes/jdk/internal/foreign/SharedSession.java ! src/java.base/share/classes/jdk/internal/foreign/abi/SharedUtils.java ! src/java.base/share/classes/jdk/internal/foreign/abi/aarch64/linux/LinuxAArch64VaList.java ! src/java.base/share/classes/jdk/internal/foreign/abi/aarch64/macos/MacOsAArch64VaList.java ! src/java.base/share/classes/jdk/internal/foreign/abi/x64/sysv/SysVVaList.java ! src/java.base/share/classes/jdk/internal/foreign/abi/x64/windows/WinVaList.java ! src/java.base/share/classes/jdk/internal/misc/X-ScopedMemoryAccess-bin.java.template ! src/java.base/share/classes/jdk/internal/misc/X-ScopedMemoryAccess.java.template ! src/java.base/share/classes/sun/nio/ch/FileChannelImpl.java ! test/jdk/java/foreign/TestByteBuffer.java ! test/jdk/java/foreign/TestMemorySession.java Changeset: 62fbc3f8 Author: Pavel Rappo Date: 2022-07-11 15:43:20 +0000 URL: https://git.openjdk.org/loom/commit/62fbc3f883f06324abe8635efc48f9fc20f79f69 8287379: Using @inheritDoc in an inapplicable context shouldn't crash javadoc Reviewed-by: jjg ! src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclets/formats/html/HtmlDocletWriter.java ! src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclets/toolkit/taglets/InheritDocTaglet.java ! src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclets/toolkit/taglets/TagletManager.java ! src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclets/toolkit/taglets/TagletWriter.java ! src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclets/toolkit/util/DocFinder.java ! src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclint/Checker.java ! test/langtools/jdk/javadoc/doclet/InheritDocForUserTags/DocTest.java ! test/langtools/jdk/javadoc/doclet/testInheritDocWithinInappropriateTag/TestInheritDocWithinInappropriateTag.java ! test/langtools/jdk/javadoc/doclet/testRelativeLinks/TestRelativeLinks.java ! test/langtools/jdk/javadoc/doclet/testRelativeLinks/pkg/D.java ! 
test/langtools/jdk/javadoc/doclet/testRelativeLinks/pkg/sub/F.java ! test/langtools/jdk/javadoc/doclet/testRelativeLinks/pkg2/E.java ! test/langtools/jdk/javadoc/doclet/testSimpleTagInherit/TestSimpleTagInherit.java ! test/langtools/jdk/javadoc/doclet/testSimpleTagInherit/p/BaseClass.java ! test/langtools/jdk/javadoc/doclet/testSimpleTagInherit/p/TestClass.java ! test/langtools/jdk/javadoc/doclet/testTaglets/TestTaglets.out Changeset: 39715f3d Author: Christoph Langer Date: 2022-07-11 17:46:22 +0000 URL: https://git.openjdk.org/loom/commit/39715f3da7e8749bf477b818ae06f4dd99c223c4 8287902: UnreadableRB case in MissingResourceCauseTest is not working reliably on Windows Backport-of: 975316e3e5f1208e4e15eadc2493d25c15554647 ! test/jdk/java/util/ResourceBundle/Control/MissingResourceCauseTest.java Changeset: c3806b93 Author: Serguei Spitsyn Date: 2022-07-11 22:44:03 +0000 URL: https://git.openjdk.org/loom/commit/c3806b93c48f826e940eecd0ba29995d7f0c796b 8289709: fatal error: stuck in JvmtiVTMSTransitionDisabler::disable_VTMS_transitions Reviewed-by: alanb, amenkov, lmesnik ! test/hotspot/jtreg/serviceability/jvmti/events/FramePop/framepop02/framepop02.java ! test/hotspot/jtreg/serviceability/jvmti/events/FramePop/framepop02/libframepop02.cpp Changeset: 3164c98f Author: Jorn Vernee Date: 2022-07-12 11:25:45 +0000 URL: https://git.openjdk.org/loom/commit/3164c98f4c02a48cad62dd4f9b6cc55d64ac6d83 8289148: j.l.foreign.VaList::nextVarg call could throw IndexOutOfBoundsException or even crash the VM 8289333: Specification of method j.l.foreign.VaList::skip deserves clarification 8289156: j.l.foreign.VaList::skip call could throw java.lang.IndexOutOfBoundsException: Out of bound access on segment Reviewed-by: mcimadamore ! src/java.base/share/classes/java/lang/foreign/VaList.java ! src/java.base/share/classes/jdk/internal/foreign/abi/SharedUtils.java ! src/java.base/share/classes/jdk/internal/foreign/abi/aarch64/linux/LinuxAArch64VaList.java ! 
src/java.base/share/classes/jdk/internal/foreign/abi/aarch64/macos/MacOsAArch64VaList.java ! src/java.base/share/classes/jdk/internal/foreign/abi/x64/sysv/SysVVaList.java ! src/java.base/share/classes/jdk/internal/foreign/abi/x64/windows/WinVaList.java ! test/jdk/java/foreign/valist/VaListTest.java Changeset: d9ca438d Author: Jesper Wilhelmsson Date: 2022-07-12 16:16:16 +0000 URL: https://git.openjdk.org/loom/commit/d9ca438d06166f153d11bb55c9ec672fc63c0e9e Merge ! src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclets/formats/html/HtmlDocletWriter.java ! src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclint/Checker.java ! test/hotspot/jtreg/serviceability/jvmti/events/FramePop/framepop02/libframepop02.cpp ! src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclets/formats/html/HtmlDocletWriter.java ! src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclint/Checker.java ! test/hotspot/jtreg/serviceability/jvmti/events/FramePop/framepop02/libframepop02.cpp Changeset: 31f7fc04 Author: Jayashree Huttanagoudar Committer: Weijun Wang Date: 2022-07-12 20:12:22 +0000 URL: https://git.openjdk.org/loom/commit/31f7fc043b4616cb2d5f161cda357d0ebfb795f0 8283082: sun.security.x509.X509CertImpl.delete("x509.info.validity") nulls out info field Reviewed-by: weijun ! src/java.base/share/classes/sun/security/x509/X509CertImpl.java + test/jdk/sun/security/x509/X509CertImpl/JDK8283082.java Changeset: 6e18883d Author: Prasanta Sadhukhan Date: 2022-07-13 05:06:04 +0000 URL: https://git.openjdk.org/loom/commit/6e18883d8ffd9a7b7d495da05e9859dc1d1a2677 8290162: Reset recursion counter missed in fix of JDK-8224267 Reviewed-by: prr ! src/java.desktop/share/classes/javax/swing/plaf/basic/BasicOptionPaneUI.java ! 
test/jdk/javax/swing/JOptionPane/TestOptionPaneStackOverflow.java Changeset: 572c14ef Author: Jonathan Gibbons Date: 2022-07-13 14:45:04 +0000 URL: https://git.openjdk.org/loom/commit/572c14efc67860e75edaa50608b4c61aec5997da 8288624: Cleanup CommentHelper.getText0 Reviewed-by: hannesw ! src/java.base/share/classes/java/util/Locale.java ! src/jdk.compiler/share/classes/com/sun/tools/javac/parser/DocCommentParser.java ! src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclets/formats/html/HtmlDocletWriter.java ! src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclets/formats/html/HtmlSerialFieldWriter.java ! src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclets/formats/html/TagletWriterImpl.java ! src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclets/toolkit/SerializedFormWriter.java ! src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclets/toolkit/builders/SerializedFormBuilder.java ! src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclets/toolkit/taglets/CodeTaglet.java ! src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclets/toolkit/taglets/TagletWriter.java ! src/jdk.javadoc/share/classes/jdk/javadoc/internal/doclets/toolkit/util/CommentHelper.java ! test/langtools/jdk/javadoc/doclet/testSeeTag/TestSeeTag.java + test/langtools/jdk/javadoc/doclet/testSerialWithLink/TestSerialWithLink.java Changeset: f528124f Author: Alan Bateman Date: 2022-07-13 15:03:37 +0000 URL: https://git.openjdk.org/loom/commit/f528124f571a29da49defbef30eeca04ab4a00ce 8289284: jdk.tracePinnedThreads output confusing when pinned due to native frame Reviewed-by: jpai, mchung ! make/test/JtregNativeJdk.gmk ! src/java.base/share/classes/java/lang/PinnedThreadPrinter.java ! 
test/jdk/java/lang/Thread/virtual/TracePinnedThreads.java + test/jdk/java/lang/Thread/virtual/libTracePinnedThreads.c Changeset: 44fb92e2 Author: Brian Burkhalter Date: 2022-07-13 15:13:27 +0000 URL: https://git.openjdk.org/loom/commit/44fb92e2aa8a708b94c568e3d39217cb4c39f6bf 8290197: test/jdk/java/nio/file/Files/probeContentType/Basic.java fails on some systems for the ".rar" extension Reviewed-by: lancea, dfuchs, jpai ! test/jdk/java/nio/file/Files/probeContentType/Basic.java Changeset: 2583feb2 Author: Thomas Schatzl Date: 2022-07-13 16:08:59 +0000 URL: https://git.openjdk.org/loom/commit/2583feb21bf5419afc3c1953d964cf89d65fe8a2 8290023: Remove use of IgnoreUnrecognizedVMOptions in gc tests Reviewed-by: ayang, lkorinth, kbarrett ! test/hotspot/jtreg/gc/TestObjectAlignment.java ! test/hotspot/jtreg/gc/epsilon/TestAlignment.java ! test/hotspot/jtreg/gc/epsilon/TestMaxTLAB.java ! test/hotspot/jtreg/gc/g1/TestEagerReclaimHumongousRegionsClearMarkBits.java ! test/hotspot/jtreg/gc/g1/TestLargePageUseForAuxMemory.java ! test/hotspot/jtreg/gc/g1/TestLargePageUseForHeap.java ! test/hotspot/jtreg/gc/g1/TestVerificationInConcurrentCycle.java ! test/hotspot/jtreg/gc/metaspace/CompressedClassSpaceSizeInJmapHeap.java ! test/hotspot/jtreg/gc/metaspace/TestMetaspaceMemoryPool.java ! test/hotspot/jtreg/gc/metaspace/TestMetaspacePerfCounters.java ! test/hotspot/jtreg/gc/metaspace/TestPerfCountersAndMemoryPools.java ! test/hotspot/jtreg/gc/shenandoah/TestVerifyJCStress.java ! test/hotspot/jtreg/gc/shenandoah/options/TestSelectiveBarrierFlags.java Changeset: 53580455 Author: Doug Lea
Date: 2022-07-13 18:05:42 +0000 URL: https://git.openjdk.org/loom/commit/535804554deef213d056cbd6bce14aeff04c32fb 8066859: java/lang/ref/OOMEInReferenceHandler.java failed with java.lang.Exception: Reference Handler thread died Reviewed-by: alanb ! src/java.base/share/classes/java/util/concurrent/locks/AbstractQueuedLongSynchronizer.java ! src/java.base/share/classes/java/util/concurrent/locks/AbstractQueuedSynchronizer.java ! test/jdk/ProblemList-Xcomp.txt ! test/jdk/ProblemList.txt + test/jdk/java/util/concurrent/locks/Lock/OOMEInAQS.java Changeset: 5e3ecff7 Author: Thomas Schatzl Date: 2022-07-13 18:31:03 +0000 URL: https://git.openjdk.org/loom/commit/5e3ecff7a60708aaf4a3c63f85907e4fb2dcbc9e 8290253: gc/g1/TestVerificationInConcurrentCycle.java#id1 fails with "Error. can't find sun.hotspot.WhiteBox in test directory or libraries" Reviewed-by: dcubed ! test/hotspot/jtreg/gc/g1/TestVerificationInConcurrentCycle.java Changeset: 74ac5df9 Author: Doug Simon Date: 2022-07-13 19:15:53 +0000 URL: https://git.openjdk.org/loom/commit/74ac5df96fb4344f005180f8643cb0c9223b1556 8290234: [JVMCI] use JVMCIKlassHandle to protect raw Klass* values from concurrent G1 scanning Reviewed-by: kvn, never ! src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/CompilerToVM.java ! src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/HotSpotMethodData.java ! 
src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/HotSpotResolvedJavaMethodImpl.java

From ron.pressler at oracle.com Thu Jul 14 10:01:36 2022
From: ron.pressler at oracle.com (Ron Pressler)
Date: Thu, 14 Jul 2022 10:01:36 +0000
Subject: [External] : Re: jstack, profilers and other tools
In-Reply-To:
References: <5B55B456-C2CC-4856-80A8-8A3F9E40CF05@oracle.com> <0E494B4B-ADCF-4CBB-AFFF-F463CB05AD0E@oracle.com> <7EC05322-6BD9-4A17-9AB3-62115F3940D0@oracle.com>
Message-ID: <271F8C58-F096-42B2-9432-E5643A7F133C@oracle.com>

Little's law tells us what the relationship between concurrency, throughput and latency is if the system is stable. It tells us that if latency doesn't decrease, then concurrency rises with throughput (again, if the system is stable). Therefore, to support high throughput you need a high level of concurrency. Since the Java platform's unit of concurrency is the thread, to support high throughput you need a high number of threads. There might be other things you also need more of, but you *at least* need a high number of threads.

The number of threads is an *upper bound* on concurrency, because the platform cannot make concurrent progress on anything without a thread (with the caveat in the next paragraph). There might be other upper bounds, too (e.g. you need enough memory to concurrently store all the working data for your concurrent operations), but the number of threads *is* an upper bound, and the one virtual threads are there to remove.

Of course, as JEP 425 explains, you could abandon threads altogether and use some other construct as your unit of concurrency, but then you lose platform support. In any event, virtual threads exist to support a high number of threads, as Little's law requires; therefore, if you use virtual threads, you have a high number of them.

-- Ron

On 14 Jul 2022, at 08:12, Alex Otenko wrote:

Hi Ron,

It looks like you are unconvinced. Let me try with illustrative numbers.
The users opening their laptops at 9am don't know how many threads you have. So throughput remains 100k ops/sec in both setups below.

Suppose in the first setup we have a system that is stable with 1000 threads. Little's law tells us that the response time cannot exceed 10ms in this case. Little's law does not prescribe response time, by the way; it is merely a consequence of the statement that the system is stable: it couldn't have been stable if its response time were higher.

Now, let's create one thread per request. One claim is that this increases concurrency (and I object to this point alone). Suppose this means concurrency becomes 100k. Little's law says that the response time must be 1 second. Sorry, but that's hardly an improvement! In fact, for any concurrency greater than 1000 you must get response time higher than the 10ms we've got with 1000 threads. This is not what we want. Fortunately, this is not what happens either.

Really, thread count in the thread-per-request design has little to do with concurrency level. Concurrency level is a derived quantity. It only tells us how many requests are making progress at any given time in a system that experiences request arrival rate R and which is able to process them in time T. The only thing you can control through system design is response time T. There are good reasons to design a system that way, but Little's law is not one of them.

On Wed, 13 Jul 2022, 14:29 Ron Pressler wrote:

The application of Little's law is 100% correct. Little's law tells us that the number of threads must *necessarily* rise if throughput is to be high. Whether or not that alone is *sufficient* might depend on the concurrency level of other resources as well. The number of threads is not the only quantity that limits the L in the formula, but L cannot be higher than the number of threads. Obviously, if the system's level of concurrency is bounded at a very low level, say, 10,
then having more than 10 threads is unhelpful, but as we're talking about a program that uses virtual threads, we know that is not the case.

Also, Little's law describes *stable* systems; i.e. it says that *if* the system is stable, then a certain relationship must hold. While it is true that the rate of arrival might rise without bound, if the number of threads is insufficient to meet it, then the system is no longer stable (normally that means that queues are growing without bound).

-- Ron

On 13 Jul 2022, at 14:00, Alex Otenko wrote:

This is an incorrect application of Little's Law. The law only posits that there is a connection between quantities. It doesn't specify which variables depend on which. In particular, throughput is not a free variable.

Throughput is something outside your control. 100k users open their laptops at 9am and log in within 1 second - that's it, you have throughput of 100k ops/sec. Then, based on the response time the system is able to deliver, you can tell what concurrency makes sense here. Adding threads is not going to change anything - certainly not if threads are not the bottleneck resource. Threads become the bottleneck when you have the hardware to run them, but not the threads.

On Tue, 12 Jul 2022, 15:47 Ron Pressler wrote:

On 11 Jul 2022, at 22:13, Rob Bygrave wrote:

> An existing application that migrates to using virtual threads doesn't replace its platform threads with virtual threads

What I have been confident about to date, based on the testing I've done, is that we can use Jetty with a Loom based thread pool and that has worked very well. That is replacing current platform threads with virtual threads. I'm suggesting this will frequently be sub 1000 virtual threads. Ron, are you suggesting this isn't a valid use of virtual threads, or am I reading too much into what you've said here?

The throughput advantage to virtual threads comes from one aspect, their *number*, as explained by Little's law.
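[Editorial aside: the arithmetic both sides of this thread are invoking is just Little's law, L = lambda * W (concurrency = throughput * response time). A minimal sketch using the illustrative numbers from the thread - 100k ops/sec, and response times of 10ms and 1s; the class and method names are ours, not from the discussion:]

```java
// Little's law: L = lambda * W
// lambda = throughput (requests/sec), W = response time (sec), L = concurrency.
public class LittlesLaw {
    static double concurrency(double lambda, double w) {
        return lambda * w;
    }

    public static void main(String[] args) {
        double lambda = 100_000;   // 100k ops/sec, fixed by the users, not by us
        // A stable system with a 10ms response time has 1000 requests in flight,
        // hence needs at least 1000 threads in a thread-per-request design.
        System.out.println((long) concurrency(lambda, 0.010));  // prints 1000
        // Forcing concurrency up to 100_000 at the same arrival rate would imply
        // W = L / lambda = 1 second, which is the objection raised above.
        System.out.println(100_000 / lambda);                   // prints 1.0
    }
}
```

[Note the direction of the implication is exactly what is being debated: the formula relates the three quantities but does not by itself say which one you control.]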
A web server employing virtual threads would not replace a pool of N platform threads with a pool of N virtual threads, as that does not increase the number of threads required to increase throughput. Rather, it replaces the pool of N platform threads with an unpooled ExecutorService that spawns at least one new virtual thread for every HTTP serving task. Only that can increase the number of threads sufficiently to improve throughput.

> unusual for an application that has any virtual threads to have fewer than, say, 10,000

In the case of http server use of virtual threads, I feel the use of unusual is too strong. That is, when we are using virtual threads for application code handling of http request/response (like Jetty + Loom), I suspect this is frequently going to operate with less than 1000 concurrent requests per server instance.

1000 concurrent requests would likely translate to more than 10,000 virtual threads due to fanout (JEPs 425 and 428 cover this). In fact, even without fanout, every HTTP request might wish to spawn more than one thread, for example to have one thread for reading and one for writing. The number 10,000, however, is just illustrative. Clearly, an application with virtual threads will have some large number of threads (significantly larger than applications with just platform threads), because the ability to have a large number of threads is what virtual threads are for. The important point is that tooling needs to adapt to a high number of threads, which is why we've added a tool that's designed to make sense of many threads, where jstack might not be very useful.

-- Ron
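[Editorial aside: the "unpooled ExecutorService" described above is the Executors.newVirtualThreadPerTaskExecutor() API from JEP 425. A hedged sketch - it requires a Loom-enabled JDK (19+ with preview features), and the task body and counts are illustrative, not from the thread:]

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class ThreadPerRequest {
    // Serve n simulated requests, one new virtual thread per request.
    static int handleAll(int n) {
        AtomicInteger handled = new AtomicInteger();
        // Unpooled: a fresh virtual thread per submitted task, so the thread
        // count tracks the number of in-flight requests instead of capping it.
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < n; i++) {
                executor.submit(() -> {
                    try {
                        Thread.sleep(10);          // stand-in for blocking I/O
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                    handled.incrementAndGet();
                });
            }
        } // close() waits for all submitted tasks to complete
        return handled.get();
    }

    public static void main(String[] args) {
        System.out.println("handled " + handleAll(10_000));
    }
}
```

[With a platform-thread pool, 10,000 tasks that each block for 10ms would queue behind the pool size; here each task gets its own cheap thread, which is the throughput point being made.]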
From ron.pressler at oracle.com Thu Jul 14 10:35:24 2022
From: ron.pressler at oracle.com (Ron Pressler)
Date: Thu, 14 Jul 2022 10:35:24 +0000
Subject: [External] : Re: jstack, profilers and other tools
In-Reply-To: <050d01d896e6$12520800$36f61800$@kolotyluk.net>
References: <5B55B456-C2CC-4856-80A8-8A3F9E40CF05@oracle.com> <0E494B4B-ADCF-4CBB-AFFF-F463CB05AD0E@oracle.com> <7EC05322-6BD9-4A17-9AB3-62115F3940D0@oracle.com> <050d01d896e6$12520800$36f61800$@kolotyluk.net>
Message-ID:

First, there is no such thing as more or less stable. Stability is binary. Either the rate at which requests are completed is equal to the rate at which they arrive (the system is stable), or it is lower (in which case requests pile up and the system is unstable). Although, I guess you could talk about how quickly requests pile up and your server starts dropping them.

Second, if your system is stable, Little's law tells you how many requests are being concurrently served. Obviously, if you're serving L concurrent requests in a stable system, then you have sufficient resources to serve them concurrently. Every request might consume a little or a lot of some resources - CPU, memory, networking - and so those resources' availability imposes upper bounds on your concurrency. But (assuming you use threads as your units of concurrency) every concurrent request must consume at least one thread, or it won't be able to make progress at all. So threads are also an upper bound on concurrency, and we know empirically that in a great many server systems OS threads become the most constraining upper bound on concurrency well before other resources. Virtual threads remove that particular limitation, which helps all those systems, and now the concurrency of your system is only limited by the other resources I mentioned.
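[Editorial aside: the "stability is binary" point can be made concrete with a toy discrete-time queue - all numbers ours, purely illustrative. When arrivals exceed service capacity the backlog grows without bound; when capacity exceeds arrivals it stays flat:]

```java
public class Stability {
    // One tick of a toy queue: add the arrivals, then serve up to `capacity`.
    static long tick(long backlog, long arrivals, long capacity) {
        backlog += arrivals;
        return backlog - Math.min(backlog, capacity);
    }

    public static void main(String[] args) {
        long unstable = 0, stable = 0;
        for (int t = 1; t <= 5; t++) {
            unstable = tick(unstable, 12, 10);  // arrivals exceed capacity
            stable = tick(stable, 8, 10);       // capacity exceeds arrivals
            System.out.println("tick " + t + ": unstable backlog " + unstable
                    + ", stable backlog " + stable);
        }
    }
}
```

[The unstable backlog grows by 2 every tick - that is the "queues growing without bound" case; there is no intermediate regime, which is why stability is binary.]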
If every request consumes 1/10 of your available CPU over its entire duration, then your CPU puts a limit of 10 on your concurrency and threads are not your bottleneck, but if you're using virtual threads - meaning you want a much higher number of threads - then that's not your circumstance. Clearly, when your CPU, or any other resource consumed by the requests you serve, is at 100% (for any non-instantaneous duration) then your system is not stable.

-- Ron

On 13 Jul 2022, at 19:26, eric at kolotyluk.net wrote:

Just testing my intuition here, because reading what Ron says is often eye-opening and changes my intuition:

1. Loom improves concurrency via Virtual Threads
   * And consequently, potentially improves throughput
2. A key aspect of concurrency is blocking, where blocked tasks enable resources to be applied to unblocked tasks (where Fork-Join is highly effective)
   * Pre-Loom, resources such as Threads could be applied to unblocked tasks, but
     i. Platform Threads are heavy, expensive, etc. such that the number of Platform Threads puts a bound on concurrency
   * Post-Loom, resources such as Virtual Threads can now be applied to unblocked tasks, such that
     i. Light, cheap, etc. Virtual Threads enable a much higher bound on concurrency
     ii. According to Little's Law, throughput can rise because the number of threads can rise.
3. Little's Law also says "The only requirements are that the system be stable and non-preemptive;"
   * While the underlying O/S may be preemptive, the JVM is not, so this requirement is met.
   * But, Ron says, "While it is true that the rate of arrival might rise without bound, if the number of threads is insufficient to meet it, then the system is no longer stable (normally that means that queues are growing without bound)."
   * Which I take to imply that increasing the number of Virtual Threads increases the stability?
     i. Even in Loom, there is an upper bound on Virtual Threads created, albeit a much higher upper bound.
4.
Where I am still confused is
   * In Loom, I would expect that even when all our CPU Cores are at 100%, 100% throughput, the system is still stable?
     i. Or maybe I am misinterpreting what Ron said?
   * However, latency will suffer, unless
     i. more CPU Cores are added to the overall load, via some load balancer
     ii. flow control, such as backpressure, is added such that queues do not grow without bound (a topic I would love to explore more)
     iii. Or, does an increase in latency mean a loss of stability?

Cheers, Eric

From: loom-dev On Behalf Of Ron Pressler
Sent: July 13, 2022 6:30 AM
To: Alex Otenko
Cc: Rob Bygrave; Egor Ushakov; loom-dev at openjdk.org
Subject: Re: [External] : Re: jstack, profilers and other tools

The application of Little's law is 100% correct. Little's law tells us that the number of threads must *necessarily* rise if throughput is to be high. Whether or not that alone is *sufficient* might depend on the concurrency level of other resources as well. The number of threads is not the only quantity that limits the L in the formula, but L cannot be higher than the number of threads. Obviously, if the system's level of concurrency is bounded at a very low level - say, 10 - then having more than 10 threads is unhelpful, but as we're talking about a program that uses virtual threads, we know that is not the case.

Also, Little's law describes *stable* systems; i.e. it says that *if* the system is stable, then a certain relationship must hold. While it is true that the rate of arrival might rise without bound, if the number of threads is insufficient to meet it, then the system is no longer stable (normally that means that queues are growing without bound).

-- Ron

On 13 Jul 2022, at 14:00, Alex Otenko wrote:

This is an incorrect application of Little's Law. The law only posits that there is a connection between quantities. It doesn't specify which variables depend on which. In particular, throughput is not a free variable.
Throughput is something outside your control. 100k users open their laptops at 9am and log in within 1 second - that's it, you have throughput of 100k ops/sec. Then, based on the response time the system is able to deliver, you can tell what concurrency makes sense here. Adding threads is not going to change anything - certainly not if threads are not the bottleneck resource. Threads become the bottleneck when you have the hardware to run them, but not the threads.

On Tue, 12 Jul 2022, 15:47 Ron Pressler wrote:

On 11 Jul 2022, at 22:13, Rob Bygrave wrote:

> An existing application that migrates to using virtual threads doesn't replace its platform threads with virtual threads

What I have been confident about to date, based on the testing I've done, is that we can use Jetty with a Loom based thread pool and that has worked very well. That is replacing current platform threads with virtual threads. I'm suggesting this will frequently be sub 1000 virtual threads. Ron, are you suggesting this isn't a valid use of virtual threads, or am I reading too much into what you've said here?

The throughput advantage to virtual threads comes from one aspect, their *number*, as explained by Little's law.
That is, when we are using virtual threads for application code handling of http request/response (like Jetty + Loom), I suspect this is frequently going to operate with less than 1000 concurrent requests per server instance. 1000 concurrent requests would likely translate to more than 10,000 virtual threads due to fanout (JEPs 425 and 428 cover this). In fact, even without fanout, every HTTP request might wish to spawn more than one thread, for example to have one thread for reading and one for writing. The number 10,000, however, is just illustrative. Clearly, an application with virtual threads will have some large number of threads (significantly larger than applications with just platform threads), because the ability to have a large number of threads is what virtual threads are for. The important point is that tooling needs to adapt to a high number of threads, which is why we?ve added a tool that?s designed to make sense of many threads, where jstack might not be very useful. ? Ron -------------- next part -------------- An HTML attachment was scrubbed... URL: From eric at kolotyluk.net Thu Jul 14 17:29:45 2022 From: eric at kolotyluk.net (eric at kolotyluk.net) Date: Thu, 14 Jul 2022 10:29:45 -0700 Subject: [External] : Re: jstack, profilers and other tools In-Reply-To: References: <5B55B456-C2CC-4856-80A8-8A3F9E40CF05@oracle.com> <0E494B4B-ADCF-4CBB-AFFF-F463CB05AD0E@oracle.com> <7EC05322-6BD9-4A17-9AB3-62115F3940D0@oracle.com> <050d01d896e6$12520800$36f61800$@kolotyluk.net> Message-ID: <06df01d897a7$4fcc47b0$ef64d710$@kolotyluk.net> Thanks for that clarification, Ron? Would it be fair to say a system is stable, in the case where when the Load Balancer predicts the current resources will soon become unstable, it launches more instances/resources before instability sets in? That is, appropriate feedback loops can keep the underlying systems stable? Of course, keeping a system stable under load bursts could be challenging, but manageable. 
My past experience with Akka/Scala shows that we can provision systems such that we get better CPU utilization, because such systems can distribute tasks more effectively over limited Platform Threads. I am hopeful that with Loom, Virtual Threads, Structured Concurrency, etc. we can also provision systems with better resource utilization. For example, the load balancer spawns new instances at 75% utilization rather than at 50% under conventional systems. Also, my hope is that Loom will let us design and implement this with a lower cognitive load than using reactive programming techniques. Once Loom becomes more available, such as in Java 19, it will be interesting to see what impact this has on the Reactive Community, Scala, Akka, Kotlin and Kotlin Coroutines, etc. This whole Loom Project would make a fascinating University course, as a lot of important historical lessons are apparent. There has been a lot of progress from Fibers to Virtual Threads... I took two Coursera courses, one on Functional Programming in Scala, and the other on Reactive Programming in Scala. They were great courses. It would be fantastic for Coursera to have a course on Concurrent Programming in Loom. Cheers, Eric From: Ron Pressler Sent: July 14, 2022 3:35 AM To: Eric Kolotyluk Cc: Alex Otenko ; Rob Bygrave ; Egor Ushakov ; loom-dev at openjdk.org Subject: Re: [External] : Re: jstack, profilers and other tools First, there is no such thing as more or less stable. Stability is binary. Either the rate at which requests are completed is equal to the rate at which they arrive (the system is stable), or it is lower (in which case requests pile up and the system is unstable). Although, I guess you could talk about how quickly requests pile up and your server starts dropping them. Second, if your system is stable, Little's law tells you how many requests are being concurrently served.
Obviously, if you're serving L concurrent requests in a stable system, then you have sufficient resources to serve them concurrently. Every request might consume a little or a lot of some resources - CPU, memory, networking - and so those resources' availability imposes upper bounds on your concurrency. But (assuming you use threads as your units of concurrency) every concurrent request must consume at least one thread, or it won't be able to make progress at all. So threads are also an upper bound on concurrency, and we know empirically that in a great many server systems OS threads become the most constraining upper bound on concurrency well before other resources. Virtual threads remove that particular limitation, which helps all those systems, and now the concurrency of your system is only limited by the other resources I mentioned. If every request consumes 1/10 of your available CPU over its entire duration, then your CPU puts a limit of 10 on your concurrency and threads are not your bottleneck, but if you're using virtual threads - meaning you want a much higher number of threads - then that's not your circumstance. Clearly, when your CPU, or any other resource consumed by the requests you serve, is at 100% (for any non-instantaneous duration) then your system is not stable. -- Ron On 13 Jul 2022, at 19:26, eric at kolotyluk.net wrote: Just testing my intuition here... because reading what Ron says is often eye-opening... and changes my intuition

1. Loom improves concurrency via Virtual Threads
   a. And consequently, potentially improves throughput
2. A key aspect of concurrency is blocking, where blocked tasks enable resources to be applied to unblocked tasks (where Fork-Join is highly effective)
   a. Pre-Loom, resources such as Threads could be applied to unblocked tasks, but
      i. Platform Threads are heavy, expensive, etc., such that the number of Platform Threads puts a bound on concurrency
   b. Post-Loom, resources such as Virtual Threads can now be applied to unblocked tasks, such that
      i. Light, cheap, etc. Virtual Threads enable a much higher bound on concurrency
      ii. According to Little's Law, throughput can rise because the number of threads can rise.
3. Little's Law also says "The only requirements are that the system be stable and non-preemptive;"
   a. While the underlying O/S may be preemptive, the JVM is not, so this requirement is met.
   b. But, Ron says, "While it is true that the rate of arrival might rise without bound, if the number of threads is insufficient to meet it, then the system is no longer stable (normally that means that queues are growing without bound)."
   c. Which I take to imply that increasing the number of Virtual Threads increases the stability?
      i. Even in Loom, there is an upper bound on Virtual Threads created, albeit a much higher upper bound.
4. Where I am still confused is
   a. In Loom, I would expect that even when all our CPU Cores are at 100%, 100% throughput, the system is still stable?
      i. Or maybe I am misinterpreting what Ron said?
   b. However, latency will suffer, unless
      i. more CPU Cores are added to the overall load, via some load balancer
      ii. flow control, such as backpressure, is added such that queues do not grow without bound (a topic I would love to explore more)
      iii. Or, does an increase in latency mean a loss of stability?

Cheers, Eric From: loom-dev > On Behalf Of Ron Pressler Sent: July 13, 2022 6:30 AM To: Alex Otenko > Cc: Rob Bygrave >; Egor Ushakov >; loom-dev at openjdk.org Subject: Re: [External] : Re: jstack, profilers and other tools The application of Little's law is 100% correct. Little's law tells us that the number of threads must *necessarily* rise if throughput is to be high. Whether or not that alone is *sufficient* might depend on the concurrency level of other resources as well.
The number of threads is not the only quantity that limits the L in the formula, but L cannot be higher than the number of threads. Obviously, if the system's level of concurrency is bounded at a very low level - say, 10 - then having more than 10 threads is unhelpful, but as we're talking about a program that uses virtual threads, we know that is not the case. Also, Little's law describes *stable* systems; i.e. it says that *if* the system is stable, then a certain relationship must hold. While it is true that the rate of arrival might rise without bound, if the number of threads is insufficient to meet it, then the system is no longer stable (normally that means that queues are growing without bound). -- Ron On 13 Jul 2022, at 14:00, Alex Otenko > wrote: This is an incorrect application of Little's Law. The law only posits that there is a connection between quantities. It doesn't specify which variables depend on which. In particular, throughput is not a free variable. Throughput is something outside your control. 100k users open their laptops at 9am and log in within 1 second - that's it, you have a throughput of 100k ops/sec. Then, based on the response time the system is able to deliver, you can tell what concurrency makes sense here. Adding threads is not going to change anything - certainly not if threads are not the bottleneck resource. Threads become the bottleneck when you have the hardware to run them, but not the threads. On Tue, 12 Jul 2022, 15:47 Ron Pressler, > wrote: On 11 Jul 2022, at 22:13, Rob Bygrave > wrote: > An existing application that migrates to using virtual threads doesn't replace its platform threads with virtual threads What I have been confident about to date, based on the testing I've done, is that we can use Jetty with a Loom-based thread pool, and that has worked very well. That is replacing current platform threads with virtual threads. I'm suggesting this will frequently be sub-1000 virtual threads.
Ron, are you suggesting this isn't a valid use of virtual threads, or am I reading too much into what you've said here? The throughput advantage of virtual threads comes from one aspect - their *number* - as explained by Little's law. A web server employing virtual threads would not replace a pool of N platform threads with a pool of N virtual threads, as that does not increase the number of threads required to increase throughput. Rather, it replaces the pool of N platform threads with an unpooled ExecutorService that spawns at least one new virtual thread for every HTTP serving task. Only that can increase the number of threads sufficiently to improve throughput. > unusual for an application that has any virtual threads to have fewer than, say, 10,000 In the case of HTTP server use of virtual threads, I feel the use of unusual is too strong. That is, when we are using virtual threads for application code handling of http request/response (like Jetty + Loom), I suspect this is frequently going to operate with fewer than 1000 concurrent requests per server instance. 1000 concurrent requests would likely translate to more than 10,000 virtual threads due to fanout (JEPs 425 and 428 cover this). In fact, even without fanout, every HTTP request might wish to spawn more than one thread, for example to have one thread for reading and one for writing. The number 10,000, however, is just illustrative. Clearly, an application with virtual threads will have some large number of threads (significantly larger than applications with just platform threads), because the ability to have a large number of threads is what virtual threads are for. The important point is that tooling needs to adapt to a high number of threads, which is why we've added a tool that's designed to make sense of many threads, where jstack might not be very useful. -- Ron -------------- next part -------------- An HTML attachment was scrubbed...
URL: From ron.pressler at oracle.com Thu Jul 14 19:00:08 2022 From: ron.pressler at oracle.com (Ron Pressler) Date: Thu, 14 Jul 2022 19:00:08 +0000 Subject: [External] : Re: jstack, profilers and other tools In-Reply-To: <06df01d897a7$4fcc47b0$ef64d710$@kolotyluk.net> References: <5B55B456-C2CC-4856-80A8-8A3F9E40CF05@oracle.com> <0E494B4B-ADCF-4CBB-AFFF-F463CB05AD0E@oracle.com> <7EC05322-6BD9-4A17-9AB3-62115F3940D0@oracle.com> <050d01d896e6$12520800$36f61800$@kolotyluk.net> <06df01d897a7$4fcc47b0$ef64d710$@kolotyluk.net> Message-ID: <8CE68F82-D337-4FF9-ACDA-1C1E0D37B127@oracle.com> Sure, a system could provision more resources before they're exhausted to maintain stability. -- Ron On 14 Jul 2022, at 18:29, eric at kolotyluk.net wrote: Thanks for that clarification, Ron... Would it be fair to say a system is stable in the case where, when the Load Balancer predicts the current resources will soon become unstable, it launches more instances/resources before instability sets in? That is, appropriate feedback loops can keep the underlying systems stable? Of course, keeping a system stable under load bursts could be challenging, but manageable. My past experience with Akka/Scala shows that we can provision systems such that we get better CPU utilization, because such systems can distribute tasks more effectively over limited Platform Threads. I am hopeful that with Loom, Virtual Threads, Structured Concurrency, etc. we can also provision systems with better resource utilization. For example, the load balancer spawns new instances at 75% utilization rather than at 50% under conventional systems. Also, my hope is that Loom will let us design and implement this with a lower cognitive load than using reactive programming techniques. Once Loom becomes more available, such as in Java 19, it will be interesting to see what impact this has on the Reactive Community, Scala, Akka, Kotlin and Kotlin Coroutines, etc.
This whole Loom Project would make a fascinating University course, as a lot of important historical lessons are apparent. There has been a lot of progress from Fibers to Virtual Threads... I took two Coursera courses, one on Functional Programming in Scala, and the other on Reactive Programming in Scala. They were great courses. It would be fantastic for Coursera to have a course on Concurrent Programming in Loom. Cheers, Eric From: Ron Pressler > Sent: July 14, 2022 3:35 AM To: Eric Kolotyluk > Cc: Alex Otenko >; Rob Bygrave >; Egor Ushakov >; loom-dev at openjdk.org Subject: Re: [External] : Re: jstack, profilers and other tools First, there is no such thing as more or less stable. Stability is binary. Either the rate at which requests are completed is equal to the rate at which they arrive (the system is stable), or it is lower (in which case requests pile up and the system is unstable). Although, I guess you could talk about how quickly requests pile up and your server starts dropping them. Second, if your system is stable, Little's law tells you how many requests are being concurrently served. Obviously, if you're serving L concurrent requests in a stable system, then you have sufficient resources to serve them concurrently. Every request might consume a little or a lot of some resources - CPU, memory, networking - and so those resources' availability imposes upper bounds on your concurrency. But (assuming you use threads as your units of concurrency) every concurrent request must consume at least one thread, or it won't be able to make progress at all. So threads are also an upper bound on concurrency, and we know empirically that in a great many server systems OS threads become the most constraining upper bound on concurrency well before other resources. Virtual threads remove that particular limitation, which helps all those systems, and now the concurrency of your system is only limited by the other resources I mentioned.
If every request consumes 1/10 of your available CPU over its entire duration, then your CPU puts a limit of 10 on your concurrency and threads are not your bottleneck, but if you're using virtual threads - meaning you want a much higher number of threads - then that's not your circumstance. Clearly, when your CPU, or any other resource consumed by the requests you serve, is at 100% (for any non-instantaneous duration) then your system is not stable. -- Ron On 13 Jul 2022, at 19:26, eric at kolotyluk.net wrote: Just testing my intuition here... because reading what Ron says is often eye-opening... and changes my intuition

1. Loom improves concurrency via Virtual Threads
   a. And consequently, potentially improves throughput
2. A key aspect of concurrency is blocking, where blocked tasks enable resources to be applied to unblocked tasks (where Fork-Join is highly effective)
   a. Pre-Loom, resources such as Threads could be applied to unblocked tasks, but
      i. Platform Threads are heavy, expensive, etc., such that the number of Platform Threads puts a bound on concurrency
   b. Post-Loom, resources such as Virtual Threads can now be applied to unblocked tasks, such that
      i. Light, cheap, etc. Virtual Threads enable a much higher bound on concurrency
      ii. According to Little's Law, throughput can rise because the number of threads can rise.
3. Little's Law also says "The only requirements are that the system be stable and non-preemptive;"
   a. While the underlying O/S may be preemptive, the JVM is not, so this requirement is met.
   b. But, Ron says, "While it is true that the rate of arrival might rise without bound, if the number of threads is insufficient to meet it, then the system is no longer stable (normally that means that queues are growing without bound)."
   c. Which I take to imply that increasing the number of Virtual Threads increases the stability?
      i. Even in Loom, there is an upper bound on Virtual Threads created, albeit a much higher upper bound.
4. Where I am still confused is
   a. In Loom, I would expect that even when all our CPU Cores are at 100%, 100% throughput, the system is still stable?
      i. Or maybe I am misinterpreting what Ron said?
   b. However, latency will suffer, unless
      i. more CPU Cores are added to the overall load, via some load balancer
      ii. flow control, such as backpressure, is added such that queues do not grow without bound (a topic I would love to explore more)
      iii. Or, does an increase in latency mean a loss of stability?

Cheers, Eric From: loom-dev > On Behalf Of Ron Pressler Sent: July 13, 2022 6:30 AM To: Alex Otenko > Cc: Rob Bygrave >; Egor Ushakov >; loom-dev at openjdk.org Subject: Re: [External] : Re: jstack, profilers and other tools The application of Little's law is 100% correct. Little's law tells us that the number of threads must *necessarily* rise if throughput is to be high. Whether or not that alone is *sufficient* might depend on the concurrency level of other resources as well. The number of threads is not the only quantity that limits the L in the formula, but L cannot be higher than the number of threads. Obviously, if the system's level of concurrency is bounded at a very low level - say, 10 - then having more than 10 threads is unhelpful, but as we're talking about a program that uses virtual threads, we know that is not the case. Also, Little's law describes *stable* systems; i.e. it says that *if* the system is stable, then a certain relationship must hold. While it is true that the rate of arrival might rise without bound, if the number of threads is insufficient to meet it, then the system is no longer stable (normally that means that queues are growing without bound). -- Ron On 13 Jul 2022, at 14:00, Alex Otenko > wrote: This is an incorrect application of Little's Law. The law only posits that there is a connection between quantities. It doesn't specify which variables depend on which. In particular, throughput is not a free variable.
Throughput is something outside your control. 100k users open their laptops at 9am and log in within 1 second - that's it, you have a throughput of 100k ops/sec. Then, based on the response time the system is able to deliver, you can tell what concurrency makes sense here. Adding threads is not going to change anything - certainly not if threads are not the bottleneck resource. Threads become the bottleneck when you have the hardware to run them, but not the threads. On Tue, 12 Jul 2022, 15:47 Ron Pressler, > wrote: On 11 Jul 2022, at 22:13, Rob Bygrave > wrote: > An existing application that migrates to using virtual threads doesn't replace its platform threads with virtual threads What I have been confident about to date, based on the testing I've done, is that we can use Jetty with a Loom-based thread pool, and that has worked very well. That is replacing current platform threads with virtual threads. I'm suggesting this will frequently be sub-1000 virtual threads. Ron, are you suggesting this isn't a valid use of virtual threads, or am I reading too much into what you've said here? The throughput advantage of virtual threads comes from one aspect - their *number* - as explained by Little's law. A web server employing virtual threads would not replace a pool of N platform threads with a pool of N virtual threads, as that does not increase the number of threads required to increase throughput. Rather, it replaces the pool of N platform threads with an unpooled ExecutorService that spawns at least one new virtual thread for every HTTP serving task. Only that can increase the number of threads sufficiently to improve throughput. > unusual for an application that has any virtual threads to have fewer than, say, 10,000 In the case of HTTP server use of virtual threads, I feel the use of unusual is too strong.
That is, when we are using virtual threads for application code handling of http request/response (like Jetty + Loom), I suspect this is frequently going to operate with fewer than 1000 concurrent requests per server instance. 1000 concurrent requests would likely translate to more than 10,000 virtual threads due to fanout (JEPs 425 and 428 cover this). In fact, even without fanout, every HTTP request might wish to spawn more than one thread, for example to have one thread for reading and one for writing. The number 10,000, however, is just illustrative. Clearly, an application with virtual threads will have some large number of threads (significantly larger than applications with just platform threads), because the ability to have a large number of threads is what virtual threads are for. The important point is that tooling needs to adapt to a high number of threads, which is why we've added a tool that's designed to make sense of many threads, where jstack might not be very useful. -- Ron -------------- next part -------------- An HTML attachment was scrubbed... URL: From oleksandr.otenko at gmail.com Fri Jul 15 08:37:13 2022 From: oleksandr.otenko at gmail.com (Alex Otenko) Date: Fri, 15 Jul 2022 09:37:13 +0100 Subject: [External] : Re: jstack, profilers and other tools In-Reply-To: <271F8C58-F096-42B2-9432-E5643A7F133C@oracle.com> References: <5B55B456-C2CC-4856-80A8-8A3F9E40CF05@oracle.com> <0E494B4B-ADCF-4CBB-AFFF-F463CB05AD0E@oracle.com> <7EC05322-6BD9-4A17-9AB3-62115F3940D0@oracle.com> <271F8C58-F096-42B2-9432-E5643A7F133C@oracle.com> Message-ID: You quickly jumped to a *therefore*. Newton's second law binds force, mass and acceleration. But you can't say that you can decrease mass by increasing acceleration if the force remains the same. That is, the statement would be arithmetically correct, but it would have no physical meaning. Adding threads allows you to do more work.
But you can't do more work at will - the amount of work going through the system is a quantity independent of your design. Now, what you could do at will is split the work into sub-tasks. Virtual threads allow you to do this at very little cost. However, you still can't talk about an increase in concurrency due to Little's law, because - enter Amdahl - response time changes. Say 100k requests get split into 10 sub-tasks each, each runnable independently. Amdahl says your response time is going down 10-fold. So 100k requests times 1ms gives a concurrency of 100. Concurrency got reduced. Not surprising at all, because now each request spends 10x less time in the system. What about subtasks? Aren't we running more of them? Does this mean concurrency increased? Yes, 100k requests beget 1m sub-tasks. We can't compare concurrency, because the definition of the unit of work changed: was W, became W/10. But let's see anyway. So we have 1m tasks, each finished in 1ms - concurrency is 1000. Same as before splitting the work, matching the change of response time. I treat this like I would any change of units of measurement. So whereas I see a lot of good from being able to spin up threads, lots and short-lived, I don't see how you can claim concurrency increases, or that Little's law somehow controls throughput. Alex On Thu, 14 Jul 2022, 11:01 Ron Pressler, wrote: > Little's law tells us what the relationship between concurrency, > throughput and latency is if the system is stable. It tells us that if > latency doesn't decrease, then concurrency rises with throughput (again, if > the system is stable). Therefore, to support high throughput you need a > high level of concurrency. Since the Java platform's unit of concurrency is > the thread, to support high throughput you need a high number of threads. > There might be other things you also need more of, but you *at least* need > a high number of threads.
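[Editor's note: the illustrative numbers in this exchange can be sanity-checked directly against Little's law, L = arrival rate x time in system. A small sketch; the rates and times are only the ones quoted in these emails.]

```java
public class LittleCheck {
    // Little's law: mean concurrency L = arrivals per second * time in system.
    static long concurrency(long arrivalsPerSec, long timeMillis) {
        return arrivalsPerSec * timeMillis / 1000;
    }

    public static void main(String[] args) {
        // 100k requests/sec at 10ms each: 1000 requests in flight.
        System.out.println(concurrency(100_000, 10));   // 1000

        // Split each request into 10 parallel sub-tasks so W drops to 1ms:
        // request-level concurrency falls to 100 ...
        System.out.println(concurrency(100_000, 1));    // 100

        // ... but sub-tasks now arrive at 1M/sec, so sub-task-level
        // concurrency is 1000 again - the unit of work changed, not the law.
        System.out.println(concurrency(1_000_000, 1));  // 1000
    }
}
```

Both sides' arithmetic is consistent with the formula; the disagreement is about which quantity is the free variable.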
> > The number of threads is an *upper bound* on concurrency, because the > platform cannot make concurrent progress on anything without a thread (with > the caveat in the next paragraph). There might be other upper bounds, too > (e.g. you need enough memory to concurrently store all the working data for > your concurrent operations), but the number of threads *is* an upper bound, > and the one virtual threads are there to remove. > > Of course, as JEP 425 explains, you could abandon threads altogether and > use some other construct as your unit of concurrency, but then you lose > platform support. > > In any event, virtual threads exist to support a high number of threads, > as Little's law requires; therefore, if you use virtual threads, you have a > high number of them. > > -- Ron > > On 14 Jul 2022, at 08:12, Alex Otenko wrote: > > Hi Ron, > > It looks like you are unconvinced. Let me try with illustrative numbers. > > The users opening their laptops at 9am don't know how many threads you > have. So throughput remains 100k ops/sec in both setups below. Suppose, in > the first setup we have a system that is stable with 1000 threads. Little's > law tells us that the response time cannot exceed 10ms in this case. > Little's law does not prescribe response time, by the way; it is merely a > consequence of the statement that the system is stable: it couldn't have > been stable if its response time were higher. > > Now, let's create one thread per request. One claim is that this increases > concurrency (and I object to this point alone). Suppose this means > concurrency becomes 100k. Little's law says that the response time must be > 1 second. Sorry, but that's hardly an improvement! In fact, for any > concurrency greater than 1000 you must get a response time higher than the 10ms > we've got with 1000 threads. This is not what we want. Fortunately, this is > not what happens either.
> > Really, thread count in the thread-per-request design has little to do > with concurrency level. Concurrency level is a derived quantity. It only > tells us how many requests are making progress at any given time in a > system that experiences request arrival rate R and which is able to process > them in time T. The only thing you can control through system design is > response time T. > > There are good reasons to design a system that way, but Little's law is > not one of them. > > On Wed, 13 Jul 2022, 14:29 Ron Pressler, wrote: > >> The application of Little's law is 100% correct. Little's law tells us >> that the number of threads must *necessarily* rise if throughput is to be >> high. Whether or not that alone is *sufficient* might depend on the >> concurrency level of other resources as well. The number of threads is not >> the only quantity that limits the L in the formula, but L cannot be higher >> than the number of threads. Obviously, if the system's level of concurrency >> is bounded at a very low level - say, 10 - then having more than 10 threads >> is unhelpful, but as we're talking about a program that uses virtual >> threads, we know that is not the case. >> >> Also, Little's law describes *stable* systems; i.e. it says that *if* the >> system is stable, then a certain relationship must hold. While it is true >> that the rate of arrival might rise without bound, if the number of threads >> is insufficient to meet it, then the system is no longer stable (normally >> that means that queues are growing without bound). >> >> -- Ron >> >> On 13 Jul 2022, at 14:00, Alex Otenko wrote: >> >> This is an incorrect application of Little's Law. The law only posits >> that there is a connection between quantities. It doesn't specify which >> variables depend on which. In particular, throughput is not a free >> variable. >> >> Throughput is something outside your control.
100k users open their >> laptops at 9am and log in within 1 second - that's it, you have a throughput >> of 100k ops/sec. >> >> Then, based on the response time the system is able to deliver, you can tell >> what concurrency makes sense here. Adding threads is not going to change >> anything - certainly not if threads are not the bottleneck resource. >> Threads become the bottleneck when you have the hardware to run them, but not >> the threads. >> >> On Tue, 12 Jul 2022, 15:47 Ron Pressler, wrote: >> >>> >>> >>> On 11 Jul 2022, at 22:13, Rob Bygrave wrote: >>> >>> *> An existing application that migrates to using virtual threads >>> doesn't replace its platform threads with virtual threads* >>> >>> What I have been confident about to date, based on the testing I've done, >>> is that we can use Jetty with a Loom-based thread pool and that has worked >>> very well. That is replacing current platform threads with virtual threads. >>> I'm suggesting this will frequently be sub-1000 virtual threads. Ron, are >>> you suggesting this isn't a valid use of virtual threads or am I reading >>> too much into what you've said here? >>> >>> >>> The throughput advantage of virtual threads comes from one aspect - >>> their *number* - as explained by Little's law. A web server employing >>> virtual threads would not replace a pool of N platform threads with a pool >>> of N virtual threads, as that does not increase the number of threads >>> required to increase throughput. Rather, it replaces the pool of N platform >>> threads with an unpooled ExecutorService that spawns at least one new >>> virtual thread for every HTTP serving task. Only that can increase the >>> number of threads sufficiently to improve throughput. >>> >>> >>> > *unusual* for an application that has any virtual threads to have >>> fewer than, say, 10,000 >>> >>> In the case of HTTP server use of virtual threads, I feel the use of >>> *unusual* is too strong.
>>> That is, when we are using virtual threads for >>> application code handling of http request/response (like Jetty + Loom), I >>> suspect this is frequently going to operate with fewer than 1000 concurrent >>> requests per server instance. >>> >>> >>> 1000 concurrent requests would likely translate to more than 10,000 >>> virtual threads due to fanout (JEPs 425 and 428 cover this). In fact, even >>> without fanout, every HTTP request might wish to spawn more than one >>> thread, for example to have one thread for reading and one for writing. The >>> number 10,000, however, is just illustrative. Clearly, an application with >>> virtual threads will have some large number of threads (significantly >>> larger than applications with just platform threads), because the ability >>> to have a large number of threads is what virtual threads are for. >>> >>> The important point is that tooling needs to adapt to a high number of >>> threads, which is why we've added a tool that's designed to make sense of >>> many threads, where jstack might not be very useful. >>> >>> -- Ron >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ron.pressler at oracle.com Fri Jul 15 09:19:18 2022 From: ron.pressler at oracle.com (Ron Pressler) Date: Fri, 15 Jul 2022 09:19:18 +0000 Subject: [External] : Re: jstack, profilers and other tools In-Reply-To: References: <5B55B456-C2CC-4856-80A8-8A3F9E40CF05@oracle.com> <0E494B4B-ADCF-4CBB-AFFF-F463CB05AD0E@oracle.com> <7EC05322-6BD9-4A17-9AB3-62115F3940D0@oracle.com> <271F8C58-F096-42B2-9432-E5643A7F133C@oracle.com> Message-ID: The number of threads doesn't 'do' or not do anything for you. If requests arrive at 100K per second and each takes 500ms to process, then the number of threads you're using *is equal to* at least 50K (assuming thread-per-request) in a stable system; that's all. That is the physical meaning: the formula tells you what the quantities *are* in a stable system.
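[Editor's note: Ron's figures plug straight into the formula; in a stable thread-per-request system the live thread count is bounded below by L. A trivial check, using only the illustrative 100K/sec and 500ms values from the email.]

```java
public class ThreadLowerBound {
    // In a stable thread-per-request system, live threads >= L = lambda * W,
    // because every in-flight request holds at least one thread.
    static long minThreads(long requestsPerSec, long latencyMillis) {
        return requestsPerSec * latencyMillis / 1000;
    }

    public static void main(String[] args) {
        // 100K requests/sec, 500ms each -> at least 50K threads in use.
        System.out.println(minThreads(100_000, 500)); // 50000
    }
}
```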
Because in a thread-per-request program, every concurrent request takes up at least one thread, while the formula does not immediately tell you how many machines are used, or what the RAM, CPU, and network bandwidth utilisation is, it does give you a lower bound on the total number of live threads. Conversely, the number of threads gives an upper bound on L. As to the rest about splitting into subtasks, that increases L and reduces W by the same factor, so when applying Little?s law it?s handy to treat W as the total latency, *as if* it was processed sequentially, if we?re interested in L being the number of concurrent requests. More about that here: https://inside.java/2020/08/07/loom-performance/ ? Ron On 15 Jul 2022, at 09:37, Alex Otenko > wrote: You quickly jumped to a *therefore*. Newton's second law binds force, mass and acceleration. But you can't say that you can decrease mass by increasing acceleration, if the force remains the same. That is, the statement would be arithmetically correct, but it would have no physical meaning. Adding threads allows to do more work. But you can't do more work at will - the amount of work going through the system is a quantity independent of your design. Now, what you could do at will, is split the work into sub-tasks. Virtual threads allow to do this at very little cost. However, you still can't talk about an increase in concurrency due to Little's law, because - enter Amdahl - response time changes. Say, 100k requests get split into 10 sub tasks each, each runnable independently. Amdahl says your response time is going down 10-fold. So you have 100k requests times 1ms gives concurrency 100. Concurrency got reduced. Not surprising at all, because now each request spends 10x less time in the system. What about subtasks? Aren't we running more of them? Does this mean concurrency increased? Yes, 100k requests begets 1m sub tasks. 
We can't compare concurrency directly, because the definition of the unit of work changed: it was W, it became W/10. But let's see anyway. So we have 1m tasks per second, each finishing in 1ms - concurrency is 1000. Same as before splitting the work, matching the change of response time. I treat this like I would any change of units of measurement.

So whereas I see a lot of good in being able to spin up threads, lots of them and short-lived, I don't see how you can claim concurrency increases, or that Little's law somehow controls throughput.

Alex

On Thu, 14 Jul 2022, 11:01 Ron Pressler wrote:

Little's law tells us what the relationship between concurrency, throughput and latency is if the system is stable. It tells us that if latency doesn't decrease, then concurrency rises with throughput (again, if the system is stable). Therefore, to support high throughput you need a high level of concurrency. Since the Java platform's unit of concurrency is the thread, to support high throughput you need a high number of threads. There might be other things you also need more of, but you *at least* need a high number of threads.

The number of threads is an *upper bound* on concurrency, because the platform cannot make concurrent progress on anything without a thread (with the caveat in the next paragraph). There might be other upper bounds, too (e.g. you need enough memory to concurrently store all the working data for your concurrent operations), but the number of threads *is* an upper bound, and the one virtual threads are there to remove.

Of course, as JEP 425 explains, you could abandon threads altogether and use some other construct as your unit of concurrency, but then you lose platform support.

In any event, virtual threads exist to support a high number of threads, as Little's law requires; therefore, if you use virtual threads, you have a high number of them.

-- Ron

On 14 Jul 2022, at 08:12, Alex Otenko wrote:

Hi Ron,

It looks like you are unconvinced. Let me try with illustrative numbers.
The users opening their laptops at 9am don't know how many threads you have. So throughput remains 100k ops/sec in both setups below. Suppose in the first setup we have a system that is stable with 1000 threads. Little's law tells us that the response time cannot exceed 10ms in this case. Little's law does not prescribe response time, by the way; it is merely a consequence of the statement that the system is stable: it couldn't have been stable if its response time were higher.

Now, let's create one thread per request. One claim is that this increases concurrency (and I object to this point alone). Suppose this means concurrency becomes 100k. Little's law says that the response time must be 1 second. Sorry, but that's hardly an improvement! In fact, for any concurrency greater than 1000 you must get a response time higher than the 10ms we've got with 1000 threads. This is not what we want. Fortunately, this is not what happens either.

Really, thread count in the thread-per-request design has little to do with concurrency level. Concurrency level is a derived quantity. It only tells us how many requests are making progress at any given time in a system that experiences request arrival rate R and which is able to process them in time T. The only thing you can control through system design is response time T.

There are good reasons to design a system that way, but Little's law is not one of them.

On Wed, 13 Jul 2022, 14:29 Ron Pressler wrote:

The application of Little's law is 100% correct. Little's law tells us that the number of threads must *necessarily* rise if throughput is to be high. Whether or not that alone is *sufficient* might depend on the concurrency level of other resources as well. The number of threads is not the only quantity that limits the L in the formula, but L cannot be higher than the number of threads. Obviously, if the system's level of concurrency is bounded at a very low level - say, 10 -
then having more than 10 threads is unhelpful, but as we're talking about a program that uses virtual threads, we know that is not the case.

Also, Little's law describes *stable* systems; i.e. it says that *if* the system is stable, then a certain relationship must hold. While it is true that the rate of arrival might rise without bound, if the number of threads is insufficient to meet it, then the system is no longer stable (normally that means that queues are growing without bound).

-- Ron

On 13 Jul 2022, at 14:00, Alex Otenko wrote:

This is an incorrect application of Little's Law. The law only posits that there is a connection between quantities. It doesn't specify which variables depend on which. In particular, throughput is not a free variable.

Throughput is something outside your control. 100k users open their laptops at 9am and log in within 1 second - that's it, you have a throughput of 100k ops/sec.

Then, based on the response time the system is able to deliver, you can tell what concurrency makes sense here. Adding threads is not going to change anything - certainly not if threads are not the bottleneck resource. Threads become the bottleneck when you have the hardware to run them, but not the threads.

On Tue, 12 Jul 2022, 15:47 Ron Pressler wrote:

On 11 Jul 2022, at 22:13, Rob Bygrave wrote:

> An existing application that migrates to using virtual threads doesn't replace its platform threads with virtual threads

What I have been confident about to date, based on the testing I've done, is that we can use Jetty with a Loom-based thread pool and that has worked very well. That is replacing current platform threads with virtual threads. I'm suggesting this will frequently be sub 1000 virtual threads. Ron, are you suggesting this isn't a valid use of virtual threads, or am I reading too much into what you've said here?

The throughput advantage of virtual threads comes from one aspect - their *number* - as explained by Little's law.
A web server employing virtual threads would not replace a pool of N platform threads with a pool of N virtual threads, as that does not increase the number of threads required to increase throughput. Rather, it replaces the pool of N platform threads with an unpooled ExecutorService that spawns at least one new virtual thread for every HTTP serving task. Only that can increase the number of threads sufficiently to improve throughput.

> unusual for an application that has any virtual threads to have fewer than, say, 10,000

In the case of http server use of virtual threads, I feel the use of *unusual* is too strong. That is, when we are using virtual threads for application code handling of http request/response (like Jetty + Loom), I suspect this is frequently going to operate with less than 1000 concurrent requests per server instance.

1000 concurrent requests would likely translate to more than 10,000 virtual threads due to fanout (JEPs 425 and 428 cover this). In fact, even without fanout, every HTTP request might wish to spawn more than one thread, for example to have one thread for reading and one for writing. The number 10,000, however, is just illustrative. Clearly, an application with virtual threads will have some large number of threads (significantly larger than applications with just platform threads), because the ability to have a large number of threads is what virtual threads are for.

The important point is that tooling needs to adapt to a high number of threads, which is why we've added a tool that's designed to make sense of many threads, where jstack might not be very useful.

-- Ron

-------------- next part -------------- An HTML attachment was scrubbed...
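The migration described above is not swapping the pool's thread factory but dropping the pool entirely. A minimal sketch of the before/after, runnable on JDK 21+ (the handler logic is illustrative, not from the thread):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PerTaskExecutor {
    public static void main(String[] args) throws Exception {
        // Before: a pool of N platform threads caps concurrency at N.
        // ExecutorService pool = Executors.newFixedThreadPool(200);

        // After: an unpooled executor that spawns one new virtual thread
        // per submitted task, so the thread count tracks the number of
        // concurrent requests instead of capping it.
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            var onVirtualThread = executor.submit(() -> {
                // stand-in for handling one HTTP request
                return Thread.currentThread().isVirtual();
            }).get();
            System.out.println(onVirtualThread); // prints true
        }
    }
}
```

`Executors.newVirtualThreadPerTaskExecutor()` is the real JDK API for this pattern; the try-with-resources works because `ExecutorService` implements `AutoCloseable` as of JDK 19.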
URL: From pedro.lamarao at prodist.com.br Fri Jul 15 14:05:25 2022 From: pedro.lamarao at prodist.com.br (Pedro Lamarão) Date: Fri, 15 Jul 2022 11:05:25 -0300 Subject: [External] : Re: jstack, profilers and other tools In-Reply-To: References: <5B55B456-C2CC-4856-80A8-8A3F9E40CF05@oracle.com> <0E494B4B-ADCF-4CBB-AFFF-F463CB05AD0E@oracle.com> <7EC05322-6BD9-4A17-9AB3-62115F3940D0@oracle.com> <271F8C58-F096-42B2-9432-E5643A7F133C@oracle.com> Message-ID:

On Fri, 15 Jul 2022 at 05:39, Alex Otenko <oleksandr.otenko at gmail.com> wrote:

> Adding threads allows us to do more work. But you can't do more work at will > - the amount of work going through the system is a quantity independent of > your design.

I think that, more precisely, the maximum amount of work that can go through a concrete system is a quantity independent of programmer design. Nobody is arguing that increasing the quantity of threads will increase work throughput in a machine with devices already at full capacity. What is being argued is that, since "task" is one of the machine's "devices" consumed to do work, increasing the capacity for "tasks" increases the maximum amount of work that can go through, etc. If there are free processors, free memory, free network bandwidth, free storage bandwidth etc., then doing more work concurrently will increase work throughput.

-- Pedro Lamarão

-------------- next part -------------- An HTML attachment was scrubbed...
URL: From eric at kolotyluk.net Fri Jul 15 19:52:19 2022 From: eric at kolotyluk.net (eric at kolotyluk.net) Date: Fri, 15 Jul 2022 12:52:19 -0700 Subject: [External] : Re: jstack, profilers and other tools In-Reply-To: References: <5B55B456-C2CC-4856-80A8-8A3F9E40CF05@oracle.com> <0E494B4B-ADCF-4CBB-AFFF-F463CB05AD0E@oracle.com> <7EC05322-6BD9-4A17-9AB3-62115F3940D0@oracle.com> <271F8C58-F096-42B2-9432-E5643A7F133C@oracle.com> Message-ID: <098601d89884$64cf6c90$2e6e45b0$@kolotyluk.net>

The way I look at it now: if operations are blocking, then creating more Virtual Threads creates more opportunities for unblocked code to run. In the past, we could not create enough Platform Threads to create such opportunities, and we had to rely on non-blocking patterns, such as Reactive programming. Akka/Scala demonstrated this years ago as being very effective. Loom simply allows us the same power of concurrency, without having to follow non-blocking patterns...

Cheers, Eric

From: loom-dev On Behalf Of Pedro Lamarão Sent: July 15, 2022 7:05 AM To: Alex Otenko Cc: Ron Pressler ; loom-dev Subject: Re: [External] : Re: jstack, profilers and other tools

On Fri, 15 Jul 2022 at 05:39, Alex Otenko wrote:

Adding threads allows us to do more work. But you can't do more work at will - the amount of work going through the system is a quantity independent of your design.

I think that, more precisely, the maximum amount of work that can go through a concrete system is a quantity independent of programmer design. Nobody is arguing that increasing the quantity of threads will increase work throughput in a machine with devices already at full capacity. What is being argued is that, since "task" is one of the machine's "devices" consumed to do work, increasing the capacity for "tasks" increases the maximum amount of work that can go through, etc. If there are free processors, free memory, free network bandwidth, free storage bandwidth etc. etc.
then doing more work concurrently will increase work throughput.

-- Pedro Lamarão

-------------- next part -------------- An HTML attachment was scrubbed... URL: From oleksandr.otenko at gmail.com Sat Jul 16 19:30:01 2022 From: oleksandr.otenko at gmail.com (Alex Otenko) Date: Sat, 16 Jul 2022 20:30:01 +0100 Subject: [External] : Re: jstack, profilers and other tools In-Reply-To: References: <5B55B456-C2CC-4856-80A8-8A3F9E40CF05@oracle.com> <0E494B4B-ADCF-4CBB-AFFF-F463CB05AD0E@oracle.com> <7EC05322-6BD9-4A17-9AB3-62115F3940D0@oracle.com> <271F8C58-F096-42B2-9432-E5643A7F133C@oracle.com> Message-ID:

That's the indisputable bit. The contentious part is that adding more threads is going to increase throughput.

Supposing that 10k threads are there, and you actually need them, you should get concurrency level 10k. Let's see what that means in practice. If it is a 1-CPU machine, 10k requests in flight at any given time means they are waiting 99.99% of the time. That is, out of 1 second each spends 100 microseconds on CPU and waits for something the rest of the time (or, out of a 100ms response time, 10 microseconds on CPU - barely enough to parse a REST request). This can't be the case for the majority of workflows. Of course, having 10k threads for less than 1 second each doesn't mean you are getting concurrency that is unattainable with fewer threads.

The bottom line is that by adding threads you aren't necessarily increasing concurrency.
-------------- next part -------------- An HTML attachment was scrubbed... URL: From oleksandr.otenko at gmail.com Sat Jul 16 19:38:53 2022 From: oleksandr.otenko at gmail.com (Alex Otenko) Date: Sat, 16 Jul 2022 20:38:53 +0100 Subject: [External] : Re: jstack, profilers and other tools In-Reply-To: References: <5B55B456-C2CC-4856-80A8-8A3F9E40CF05@oracle.com> <0E494B4B-ADCF-4CBB-AFFF-F463CB05AD0E@oracle.com> <7EC05322-6BD9-4A17-9AB3-62115F3940D0@oracle.com> <271F8C58-F096-42B2-9432-E5643A7F133C@oracle.com> Message-ID:

I agree about capacity to do work. What I don't agree with is that you can change concurrency to increase throughput in Little's law - no more than you can change acceleration to increase force. And I don't agree that the common bottleneck is the lack of threads - 10k threads on 100 CPUs is not much; 10k long-lived threads on 1 CPU is 99.99% waiting. Short-lived threads, or thread-per-request, aren't really about concurrency in Little's law.

Alex
-------------- next part -------------- An HTML attachment was scrubbed... URL: From ron.pressler at oracle.com Sun Jul 17 09:59:23 2022 From: ron.pressler at oracle.com (Ron Pressler) Date: Sun, 17 Jul 2022 09:59:23 +0000 Subject: [External] : Re: jstack, profilers and other tools In-Reply-To: References: <5B55B456-C2CC-4856-80A8-8A3F9E40CF05@oracle.com> <0E494B4B-ADCF-4CBB-AFFF-F463CB05AD0E@oracle.com> <7EC05322-6BD9-4A17-9AB3-62115F3940D0@oracle.com> <271F8C58-F096-42B2-9432-E5643A7F133C@oracle.com> Message-ID: <0F187B4E-65FE-4BF0-86DE-661088C65DD8@oracle.com>

If your thread-per-request system is getting 10K req/s (on average), each request takes 500ms (on average) to handle, and this can be sustained (i.e. the system is stable), then it doesn't matter how much CPU or RAM is consumed, how much network bandwidth you're using, or even how many machines you have: the (average) number of threads you're running *is* no less than 5K (and, in practice, will usually be several times that).
So it's not that adding more threads is going to increase throughput (in fact, it won't; having 1M threads will do nothing in this case); it's that the number of threads is an upper bound on L (among all other upper bounds on L). Conversely, reaching a certain throughput requires some minimum number of threads.

As to how many thread-per-request systems do or would hit the OS-thread boundary before they hit others, that's an empirical question, and I think it is well-established that there are many such systems, but if you're sceptical and think that user-mode threads/asynchronous APIs have little impact, you can just wait and see.

-- Ron
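The claim that the thread count is an upper bound on L can be observed directly. A small sketch (our own illustrative numbers, chosen to run quickly; it uses plain platform threads, since the bound applies to any pool whose size is fixed):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ThreadBound {
    // Run `tasks` blocking tasks of `millis` ms each and return elapsed ms.
    static long elapsedMs(ExecutorService pool, int tasks, long millis) throws Exception {
        List<Callable<Void>> work = new ArrayList<>();
        for (int i = 0; i < tasks; i++) {
            work.add(() -> { Thread.sleep(millis); return null; });
        }
        long start = System.nanoTime();
        pool.invokeAll(work); // blocks until all tasks complete
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) throws Exception {
        // 100 tasks, 50ms each. With 4 threads, L <= 4, so total time is
        // at least (100 / 4) * 50ms = 1250ms. With a thread per task, all
        // 100 tasks block concurrently and the total is close to 50ms.
        long pooled = elapsedMs(Executors.newFixedThreadPool(4), 100, 50);
        long perTask = elapsedMs(Executors.newCachedThreadPool(), 100, 50);
        System.out.println("4 threads: " + pooled + "ms, per-task: " + perTask + "ms");
    }
}
```

`newCachedThreadPool` stands in for thread-per-task here: it creates a new thread whenever all existing ones are busy, so all 100 sleeps overlap. The same shape is what an unpooled virtual-thread executor gives you without the platform-thread cost.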
Because in a thread-per-request program, every concurrent request takes up at least one thread, while the formula does not immediately tell you how many machines are used, or what the RAM, CPU, and network bandwidth utilisation is, it does give you a lower bound on the total number of live threads. Conversely, the number of threads gives an upper bound on L. As to the rest about splitting into subtasks, that increases L and reduces W by the same factor, so when applying Little?s law it?s handy to treat W as the total latency, *as if* it was processed sequentially, if we?re interested in L being the number of concurrent requests. More about that here: https://inside.java/2020/08/07/loom-performance/ ? Ron On 15 Jul 2022, at 09:37, Alex Otenko > wrote: You quickly jumped to a *therefore*. Newton's second law binds force, mass and acceleration. But you can't say that you can decrease mass by increasing acceleration, if the force remains the same. That is, the statement would be arithmetically correct, but it would have no physical meaning. Adding threads allows to do more work. But you can't do more work at will - the amount of work going through the system is a quantity independent of your design. Now, what you could do at will, is split the work into sub-tasks. Virtual threads allow to do this at very little cost. However, you still can't talk about an increase in concurrency due to Little's law, because - enter Amdahl - response time changes. Say, 100k requests get split into 10 sub tasks each, each runnable independently. Amdahl says your response time is going down 10-fold. So you have 100k requests times 1ms gives concurrency 100. Concurrency got reduced. Not surprising at all, because now each request spends 10x less time in the system. What about subtasks? Aren't we running more of them? Does this mean concurrency increased? Yes, 100k requests begets 1m sub tasks. 
We can't compare concurrency, because the definition of the unit of work changed: was W, became W/10. But let's see anyway. So we have 1m tasks, each finished in 1ms - concurrency is 1000. Same as before splitting the work and matching change of response time. I treat this like I would any units of measurement change. So whereas I see a lot of good from being able to spin up threads, lots and shortlived, I don't see how you can claim concurrency increases, or that Little's law somehow controls throughput. Alex On Thu, 14 Jul 2022, 11:01 Ron Pressler, > wrote: Little?s law tells us what the relationship between concurrency, throughput and latency is if the system is stable. It tells us that if latency doesn?t decrease, then concurrency rises with throughput (again, if the system is stable). Therefore, to support high throughput you need a high level of concurrency. Since the Java platform?s unit of concurrency is the thread, to support high throughput you need a high number of threads. There might be other things you also need more of, but you *at least* need a high number of threads. The number of threads is an *upper bound* on concurrency, because the platform cannot make concurrent progress on anything without a thread (with the caveat in the next paragraph). There might be other upper bounds, too (e.g. you need enough memory to concurrently store all the working data for your concurrent operations), but the number of threads *is* an upper bound, and the one virtual threads are there to remove. Of course, as JEP 425 explains, you could abandon threads altogether and use some other construct as your unit of concurrency, but then you lose platform support. In any event, virtual threads exist to support a high number of threads, as Little?s law requires, therefore, if you use virtual threads, you have a high number of them. ? Ron On 14 Jul 2022, at 08:12, Alex Otenko > wrote: Hi Ron, It looks you are unconvinced. Let me try with illustrative numbers. 
The users opening their laptops at 9am don't know how many threads you have. So throughput remains 100k ops/sec in both setups below. Suppose, in the first setup we have a system that is stable with 1000 threads. Little's law tells us that the response time cannot exceed 10ms in this case. Little's law does not prescribe response time, by the way; it is merely a consequence of the statement that the system is stable: it couldn't have been stable if its response time were higher. Now, let's create one thread per request. One claim is that this increases concurrency (and I object to this point alone). Suppose this means concurrency becomes 100k. Little's law says that the response time must be 1 second. Sorry, but that's hardly an improvement! In fact, for any concurrency greater than 1000 you must get response time higher than 10ms we've got with 1000 threads. This is not what we want. Fortunately, this is not what happens either. Really, thread count in the thread per request design has little to do with concurrency level. Concurrency level is a derived quantity. It only tells us how many requests are making progress at any given time in a system that experiences request arrival rate R and which is able to process them in time T. The only thing you can control through system design is response time T. There are good reasons to design a system that way, but Little's law is not one of them. On Wed, 13 Jul 2022, 14:29 Ron Pressler, > wrote: The application of Little?s law is 100% correct. Little?s law tells us that the number of threads must *necessarily* rise if throughput is to be high. Whether or not that alone is *sufficient* might depend on the concurrency level of other resources as well. The number of threads is not the only quantity that limits the L in the formula, but L cannot be higher than the number of threads. Obviously, if the system?s level of concurrency is bounded at a very low level ? say, 10 ? 
then having more than 10 threads is unhelpful, but as we're talking about a program that uses virtual threads, we know that is not the case. Also, Little's law describes *stable* systems; i.e. it says that *if* the system is stable, then a certain relationship must hold. While it is true that the rate of arrival might rise without bound, if the number of threads is insufficient to meet it, then the system is no longer stable (normally that means that queues are growing without bound). -- Ron On 13 Jul 2022, at 14:00, Alex Otenko wrote: This is an incorrect application of Little's Law. The law only posits that there is a connection between quantities. It doesn't specify which variables depend on which. In particular, throughput is not a free variable. Throughput is something outside your control. 100k users open their laptops at 9am and log in within 1 second - that's it, you have a throughput of 100k ops/sec. Then, based on the response time the system is able to deliver, you can tell what concurrency makes sense here. Adding threads is not going to change anything - certainly not if threads are not the bottleneck resource. Threads become the bottleneck when you have the hardware to run them, but not the threads. On Tue, 12 Jul 2022, 15:47 Ron Pressler wrote: On 11 Jul 2022, at 22:13, Rob Bygrave wrote: > An existing application that migrates to using virtual threads doesn't replace its platform threads with virtual threads What I have been confident about to date, based on the testing I've done, is that we can use Jetty with a Loom based thread pool and that has worked very well. That is replacing current platform threads with virtual threads. I'm suggesting this will frequently be sub 1000 virtual threads. Ron, are you suggesting this isn't a valid use of virtual threads, or am I reading too much into what you've said here? The throughput advantage of virtual threads comes from one aspect, their *number*, as explained by Little's law.
A web server employing virtual threads would not replace a pool of N platform threads with a pool of N virtual threads, as that does not increase the number of threads required to increase throughput. Rather, it replaces the pool of N platform threads with an unpooled ExecutorService that spawns at least one new virtual thread for every HTTP serving task. Only that can increase the number of threads sufficiently to improve throughput. > unusual for an application that has any virtual threads to have fewer than, say, 10,000 In the case of http server use of virtual threads, I feel the use of unusual is too strong. That is, when we are using virtual threads for application code handling of http request/response (like Jetty + Loom), I suspect this is frequently going to operate with fewer than 1000 concurrent requests per server instance. 1000 concurrent requests would likely translate to more than 10,000 virtual threads due to fanout (JEPs 425 and 428 cover this). In fact, even without fanout, every HTTP request might wish to spawn more than one thread, for example to have one thread for reading and one for writing. The number 10,000, however, is just illustrative. Clearly, an application with virtual threads will have some large number of threads (significantly larger than applications with just platform threads), because the ability to have a large number of threads is what virtual threads are for. The important point is that tooling needs to adapt to a high number of threads, which is why we've added a tool that's designed to make sense of many threads, where jstack might not be very useful. -- Ron -------------- next part -------------- An HTML attachment was scrubbed...
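[Editor's note: the "unpooled ExecutorService that spawns a new virtual thread for every task" described above exists in the JDK as `Executors.newVirtualThreadPerTaskExecutor()` (final in JDK 21). A small sketch; the task count and the sleep are illustrative stand-ins for blocking request handling:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class VirtualPerTask {
    // Submit n blocking tasks, one fresh virtual thread each; return how many ran.
    static int runTasks(int n) {
        AtomicInteger completed = new AtomicInteger();
        // Unpooled: every submit() creates a brand-new virtual thread (JDK 21).
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < n; i++) {
                executor.submit(() -> {
                    try {
                        // Blocking parks the virtual thread and frees its carrier.
                        Thread.sleep(10);
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                    completed.incrementAndGet();
                });
            }
        } // implicit close() waits for all submitted tasks to finish
        return completed.get();
    }

    public static void main(String[] args) {
        // 10,000 concurrently blocked tasks would need 10,000 pooled platform
        // threads; here they are 10,000 cheap, short-lived virtual threads.
        System.out.println(runTasks(10_000)); // 10000
    }
}
```

Note that, unlike a pool, nothing here caps the thread count, which is exactly the bound on L that the pool used to impose.]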
URL: From ron.pressler at oracle.com Sun Jul 17 10:19:29 2022 From: ron.pressler at oracle.com (Ron Pressler) Date: Sun, 17 Jul 2022 10:19:29 +0000 Subject: [External] : Re: jstack, profilers and other tools In-Reply-To: References: <5B55B456-C2CC-4856-80A8-8A3F9E40CF05@oracle.com> <0E494B4B-ADCF-4CBB-AFFF-F463CB05AD0E@oracle.com> <7EC05322-6BD9-4A17-9AB3-62115F3940D0@oracle.com> <271F8C58-F096-42B2-9432-E5643A7F133C@oracle.com> Message-ID: <9801C2D7-B331-4DCD-96C8-35C6A9326C0A@oracle.com> You don't increase concurrency to increase throughput (in a system under a given load). Rather, higher levels of throughput require higher levels of concurrency; or, put another way, the level of concurrency in a system rises with its throughput. That is Little's law. User-mode threads and async APIs are all about Little's law. Whether or not the level of concurrency is a common bottleneck in thread-per-request programs is an empirical question. If it isn't, then people won't use virtual threads or async APIs. -- Ron On 16 Jul 2022, at 20:38, Alex Otenko wrote: I agree about capacity to do work. What I don't agree with is that you can change concurrency to increase throughput in Little's law - no more than you can change acceleration to increase force. And I don't agree that the common bottleneck is the lack of threads - 10k threads on 100 CPUs is not much; 10k long-lived threads on 1 CPU is 99.99% waiting. Short-lived threads, or thread per request, aren't really about concurrency in Little's law. Alex On Fri, 15 Jul 2022, 15:05 Pedro Lamarão wrote: On Fri, 15 Jul 2022 at 05:39, Alex Otenko wrote: Adding threads allows you to do more work. But you can't do more work at will - the amount of work going through the system is a quantity independent of your design. I think that, more precisely, the maximum amount of work that can go through a concrete system is a quantity independent of programmer design.
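[Editor's note: Pedro's restatement, that what is fixed is the maximum amount of work a concrete system can pass, corresponds to Little's law rearranged: with thread-per-request, sustainable throughput λ can never exceed maxThreads / W. A hedged sketch; the class name and numbers are illustrative:

```java
// Rearranging Little's law for a thread-per-request system:
// L = lambda * W  and  L <= maxThreads  imply  lambda <= maxThreads / W.
// The thread budget caps sustainable throughput; it does not raise the load.
public class ThroughputBound {
    static double maxThroughput(int maxThreads, double latencySec) {
        return maxThreads / latencySec;
    }

    public static void main(String[] args) {
        // 1,000 pooled threads at 500 ms per request: at most 2,000 req/s.
        System.out.println(maxThroughput(1_000, 0.5)); // 2000.0
        // Lifting the thread ceiling raises the *bound*, not the demand.
        System.out.println(maxThroughput(1_000_000, 0.5)); // 2000000.0
    }
}
```

This is the sense in which both sides can be right: adding threads does not create work, yet too few threads makes the higher throughput unattainable.]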
Nobody is arguing that increasing the quantity of threads will increase work throughput in a machine whose devices are already at full capacity. What is being argued is that, since a "task" is one of the machine's "devices" consumed to do work, increasing the capacity for "tasks" increases the maximum amount of work that can go through, etc. If there are free processors, free memory, free network bandwidth, free storage bandwidth, etc., then doing more work concurrently will increase work throughput. -- Pedro Lamarão From mushtaq.a at gmail.com Mon Jul 18 07:50:51 2022 From: mushtaq.a at gmail.com (Mushtaq Ahmed) Date: Mon, 18 Jul 2022 13:20:51 +0530 Subject: single thread executor Message-ID: A Google search for a similar use case led me to this discussion. It seems that the API has changed in the latest builds. What is the current way to achieve the following? Thread.builder().virtual(Executors.newSingleThreadExecutor()).factory() Thanks, Mushtaq ----------------------------------------------------- Reference to the original thread (in case this email shows up outside the original thread, not sure how mailman works): Remember, you want multiple virtual threads, but use only one platform thread to schedule them. So you need to pass the single-thread executor as the virtual thread scheduler: ThreadFactory tf = Thread.builder().virtual(Executors.newSingleThreadExecutor()).factory(); And then you can use the thread factory directly to create virtual threads, or use it like so: ExecutorService e = Executors.newUnboundedExecutor(tf); - Ron From ron.pressler at oracle.com Mon Jul 18 09:05:40 2022 From: ron.pressler at oracle.com (Ron Pressler) Date: Mon, 18 Jul 2022 09:05:40 +0000 Subject: single thread executor In-Reply-To: References: Message-ID: <3195812A-E62E-4D8D-84B9-6B682498373B@oracle.com> Hi.
Custom schedulers were not delivered as part of JEP 425, as they are not yet ready for release. We intend to add that feature at a later date. Until then, only the default scheduler can schedule virtual threads. -- Ron On 18 Jul 2022, at 08:50, Mushtaq Ahmed wrote: A Google search for a similar use case led me to this discussion. It seems that the API has changed in the latest builds. What is the current way to achieve the following? Thread.builder().virtual(Executors.newSingleThreadExecutor()).factory() Thanks, Mushtaq ----------------------------------------------------- Reference to the original thread (in case this email shows up outside the original thread, not sure how mailman works): Remember, you want multiple virtual threads, but use only one platform thread to schedule them. So you need to pass the single-thread executor as the virtual thread scheduler: ThreadFactory tf = Thread.builder().virtual(Executors.newSingleThreadExecutor()).factory(); And then you can use the thread factory directly to create virtual threads, or use it like so: ExecutorService e = Executors.newUnboundedExecutor(tf); - Ron From mushtaq.a at gmail.com Mon Jul 18 09:18:24 2022 From: mushtaq.a at gmail.com (Mushtaq Ahmed) Date: Mon, 18 Jul 2022 14:48:24 +0530 Subject: single thread executor In-Reply-To: <3195812A-E62E-4D8D-84B9-6B682498373B@oracle.com> References: <3195812A-E62E-4D8D-84B9-6B682498373B@oracle.com> Message-ID: Thanks for the quick clarification! Good to know that is on the roadmap. I intend to make use of this for building a DSL where a user can a) update mutable variables without having to think about concurrency (because of the single carrier thread) b) freely use blocking (because of virtual threads within that carrier thread) On Mon, Jul 18, 2022 at 2:35 PM Ron Pressler wrote: > Hi. > > Custom schedulers were not delivered as part of JEP 425, as they are not > yet ready for release.
> We intend to add that feature at a later date. Until then, only the > default scheduler can schedule virtual threads. > > -- Ron > > On 18 Jul 2022, at 08:50, Mushtaq Ahmed wrote: > > [...] From oleksandr.otenko at gmail.com Mon Jul 18 18:01:44 2022 From: oleksandr.otenko at gmail.com (Alex Otenko) Date: Mon, 18 Jul 2022 19:01:44 +0100 Subject: [External] : Re: jstack, profilers and other tools In-Reply-To: <0F187B4E-65FE-4BF0-86DE-661088C65DD8@oracle.com> References: <5B55B456-C2CC-4856-80A8-8A3F9E40CF05@oracle.com> <0E494B4B-ADCF-4CBB-AFFF-F463CB05AD0E@oracle.com> <7EC05322-6BD9-4A17-9AB3-62115F3940D0@oracle.com> <271F8C58-F096-42B2-9432-E5643A7F133C@oracle.com> <0F187B4E-65FE-4BF0-86DE-661088C65DD8@oracle.com> Message-ID: I think I have made it clear that I am not sceptical about the ability to spawn threads in large numbers, and that all I am sceptical about is the use of Little's law in the way you did.
You made it look like one needs thousands of threads to get better throughput, whereas typical numbers are much more modest than that. In practice you can't heedlessly add more threads, as at some point you get response time degrading with no improvement to throughput. On Sun, 17 Jul 2022, 10:59 Ron Pressler wrote: > If your thread-per-request system is getting 10K req/s (on average), each request takes 500ms (on average) to handle, and this can be sustained (i.e. the system is stable), then it doesn't matter how much CPU or RAM is consumed, how much network bandwidth you're using, or even how many machines you have: the (average) number of threads you're running *is* no less than 5K (and, in practice, will usually be several times that). > So it's not that adding more threads is going to increase throughput (in fact, it won't; having 1M threads will do nothing in this case), it's that the number of threads is an upper bound on L (among all other upper bounds on L). Conversely, reaching a certain throughput requires some minimum number of threads. > As to how many thread-per-request systems do or would hit the OS-thread boundary before they hit others, that's an empirical question, and I think it is well-established that there are many such systems, but if you're sceptical and think that user-mode threads/asynchronous APIs have little impact, you can just wait and see. > -- Ron > On 16 Jul 2022, at 20:30, Alex Otenko wrote: > That's the indisputable bit. The contentious part is that adding more threads is going to increase throughput. > Supposing that 10k threads are there, and you actually need them, you should get concurrency level 10k. Let's see what that means in practice. > If it is a 1-CPU machine, 10k requests in flight somewhere at any given time means they are waiting for 99.99% of the time. Or, out of 1 second they spend 100 microseconds on CPU, and wait for something for the rest of the time (or, out of a 100ms response time, 10 microseconds on CPU - barely enough to parse a REST request). This can't be the case for the majority of workflows. > Of course, having 10k threads for less than 1 second each doesn't mean you are getting concurrency that is unattainable with fewer threads. > The bottom line is that by adding threads you aren't necessarily increasing concurrency. > On Fri, 15 Jul 2022, 10:19 Ron Pressler wrote: >> The number of threads doesn't "do" or fail to "do" anything for you. If requests arrive at 100K per second and each takes 500ms to process, then the number of threads you're using *is equal to* at least 50K (assuming thread-per-request) in a stable system, that's all. That is the physical meaning: the formula tells you what the quantities *are* in a stable system. >> Because in a thread-per-request program, every concurrent request takes up at least one thread, while the formula does not immediately tell you how many machines are used, or what the RAM, CPU, and network bandwidth utilisation is, it does give you a lower bound on the total number of live threads. Conversely, the number of threads gives an upper bound on L. >> As to the rest about splitting into subtasks, that increases L and reduces W by the same factor, so when applying Little's law it's handy to treat W as the total latency, *as if* it was processed sequentially, if we're interested in L being the number of concurrent requests. More about that here: https://inside.java/2020/08/07/loom-performance/ >> -- Ron >> On 15 Jul 2022, at 09:37, Alex Otenko wrote: >> You quickly jumped to a *therefore*. >> Newton's second law binds force, mass and acceleration. But you can't say that you can decrease mass by increasing acceleration, if the force remains the same. That is, the statement would be arithmetically correct, but it would have no physical meaning. >> Adding threads allows you to do more work. But you can't do more work at will - the amount of work going through the system is a quantity independent of your design. >> Now, what you could do at will, is split the work into sub-tasks. Virtual threads allow you to do this at very little cost. However, you still can't talk about an increase in concurrency due to Little's law, because - enter Amdahl - response time changes. >> Say, 100k requests get split into 10 sub-tasks each, each runnable independently. Amdahl says your response time is going down 10-fold. So 100k requests times 1ms gives concurrency 100. Concurrency got reduced. Not surprising at all, because now each request spends 10x less time in the system. >> What about subtasks? Aren't we running more of them? Does this mean concurrency increased? >> Yes, 100k requests beget 1m sub-tasks. We can't compare concurrency, because the definition of the unit of work changed: it was W, and became W/10. But let's see anyway. So we have 1m tasks, each finished in 1ms - concurrency is 1000. Same as before splitting the work, with the matching change of response time. I treat this like I would any change of units of measurement. >> So whereas I see a lot of good from being able to spin up threads, lots of them and short-lived, I don't see how you can claim concurrency increases, or that Little's law somehow controls throughput. >> Alex >> On Thu, 14 Jul 2022, 11:01 Ron Pressler wrote: >>> [...] -------------- next part -------------- An HTML attachment was scrubbed...
URL: From ron.pressler at oracle.com Mon Jul 18 21:57:48 2022 From: ron.pressler at oracle.com (Ron Pressler) Date: Mon, 18 Jul 2022 21:57:48 +0000 Subject: [External] : Re: jstack, profilers and other tools In-Reply-To: References: <5B55B456-C2CC-4856-80A8-8A3F9E40CF05@oracle.com> <0E494B4B-ADCF-4CBB-AFFF-F463CB05AD0E@oracle.com> <7EC05322-6BD9-4A17-9AB3-62115F3940D0@oracle.com> <271F8C58-F096-42B2-9432-E5643A7F133C@oracle.com> <0F187B4E-65FE-4BF0-86DE-661088C65DD8@oracle.com> Message-ID: <1FA73DAE-FD5D-4CC6-9638-C9985C20D286@oracle.com> ?Concurrency rises with throughput?, which is just a mathematical fact, is not the same as the claim ? that no one is making ? that one can *raise* throughput by adding threads. However, it is the same as the claim that the *maximum* throughput might rise if the *maximum* number of threads is increased, because that?s just how dependent variables can work in mathematics, as I?ll try explaining. There is no ?more threads to get *better throughput*?, and there is no question about ?applying? Little?s law. Little?s law is simply the maths that tells us how many requests are being concurrently served in some system. There is no getting around it. In a system with 10K requests/s, each taking 500ms on average, there *are* 5K concurrent requests. If the program is written in the thread-per-request style, then it *has* at least 5K threads. Now, if the rate of requests doubles to 20K req/s and the system doesn?t collapse, then then there must be at least 10K threads serving them. Note that the increase in threads doesn?t raise the throughput, but it must accompany it. However, because concurrency rises with throughput, the *maximum* number of threads does pose an upper bound on throughput. It is very important to understand the difference between ?adding processing units could decrease latency in a data-parallel program? and ?concurrency rises with throughput in a concurrent program.? 
In the former, the units are an independent variable, and in the latter they?re not ? i.e. when the throughput is higher there are more threads, but adding threads doesn?t increase the throughput. And yet, because this forms an *upper bound* on throughput, the ability to have more threads is a prerequisite to raising the maximum attainable throughput (with the thread-per-request style). So raising the number of threads cannot possibly increase throughput, and yet raising the maximum number of threads could increase maximum throughput (until it?s bounded by something else). That?s just how dependent variables work when talking about upper/lower bounds. ? Ron On 18 Jul 2022, at 19:01, Alex Otenko > wrote: I think I have made it clear that I am not sceptical about the ability to spawn threads in large numbers, and that all I am sceptical about is the use of Little's law in the way you did. You made it look like one needs thousands of threads to get better throughput, whereas typical numbers are much more modest than that. In practice you can't heedlessly add more threads, as at some point you get response time degrading with no improvement to throughput. On Sun, 17 Jul 2022, 10:59 Ron Pressler, > wrote: If your thread-per-request system is getting 10K req/s (on average), each request takes 500ms (on average) to handle, and this can be sustained (i.e. the system is stable), then it doesn?t matter how much CPU or RAM is consumed, how much network bandwidth you?re using, or even how many machines you have: the (average) number of threads you?re running *is* no less than 5K (and, in practice, will usually be several times that). So it?s not that adding more threads is going to increase throughput (in fact, it won?t; having 1M threads will do nothing in this case), it?s that the number of threads is an upper bound on L (among all other upper bounds on L). Conversely, reaching a certain throughput requires some minimum number of threads. 
As to how many thread-per-request systems do or would hit the OS-thread boundary before they hit others, that?s an empirical question, and I think it is well-established that there are many such systems, but if you?re sceptical and think that user-mode threads/asynchronous APIs have little impact, you can just wait and see. ? Ron On 16 Jul 2022, at 20:30, Alex Otenko > wrote: That's the indisputable bit. The contentious part is that adding more threads is going to increase throughput. Supposing that 10k threads are there, and you actually need them, you should get concurrency level 10k. Let's see what that means in practice. If it is a 1-CPU machine, 10k requests in flight somewhere at any given time means they are waiting for 99.99% of time. Or, out of 1 second they spend 100 microseconds on CPU, and waiting for something for the rest of the time (or, out of 100ms response time, 10 microseconds on CPU - barely enough to parse REST request). This can't be the case for the majority of workflows. Of course, having 10k threads for less than 1 second each doesn't mean you are getting concurrency thar is unattainable with fewer threads. The bottom line is that adding threads you aren't necessarily increasing concurrency. On Fri, 15 Jul 2022, 10:19 Ron Pressler, > wrote: The number of threads doesn?t ?do? or not do you do anything. If requests arrive at 100K per second, each takes 500ms to process, then the number of threads you?re using *is equal to* at least 50K (assuming thread-per-request) in a stable system, that?s all. That is the physical meaning: the formula tells you what the quantities *are* in a stable system. Because in a thread-per-request program, every concurrent request takes up at least one thread, while the formula does not immediately tell you how many machines are used, or what the RAM, CPU, and network bandwidth utilisation is, it does give you a lower bound on the total number of live threads. 
Conversely, the number of threads gives an upper bound on L. As to the rest about splitting into subtasks, that increases L and reduces W by the same factor, so when applying Little?s law it?s handy to treat W as the total latency, *as if* it was processed sequentially, if we?re interested in L being the number of concurrent requests. More about that here: https://inside.java/2020/08/07/loom-performance/ ? Ron On 15 Jul 2022, at 09:37, Alex Otenko > wrote: You quickly jumped to a *therefore*. Newton's second law binds force, mass and acceleration. But you can't say that you can decrease mass by increasing acceleration, if the force remains the same. That is, the statement would be arithmetically correct, but it would have no physical meaning. Adding threads allows to do more work. But you can't do more work at will - the amount of work going through the system is a quantity independent of your design. Now, what you could do at will, is split the work into sub-tasks. Virtual threads allow to do this at very little cost. However, you still can't talk about an increase in concurrency due to Little's law, because - enter Amdahl - response time changes. Say, 100k requests get split into 10 sub tasks each, each runnable independently. Amdahl says your response time is going down 10-fold. So you have 100k requests times 1ms gives concurrency 100. Concurrency got reduced. Not surprising at all, because now each request spends 10x less time in the system. What about subtasks? Aren't we running more of them? Does this mean concurrency increased? Yes, 100k requests begets 1m sub tasks. We can't compare concurrency, because the definition of the unit of work changed: was W, became W/10. But let's see anyway. So we have 1m tasks, each finished in 1ms - concurrency is 1000. Same as before splitting the work and matching change of response time. I treat this like I would any units of measurement change. 
So whereas I see a lot of good from being able to spin up threads, lots of them and short-lived, I don't see how you can claim concurrency increases, or that Little's law somehow controls throughput. Alex On Thu, 14 Jul 2022, 11:01 Ron Pressler, wrote: Little's law tells us what the relationship between concurrency, throughput and latency is if the system is stable. It tells us that if latency doesn't decrease, then concurrency rises with throughput (again, if the system is stable). Therefore, to support high throughput you need a high level of concurrency. Since the Java platform's unit of concurrency is the thread, to support high throughput you need a high number of threads. There might be other things you also need more of, but you *at least* need a high number of threads. The number of threads is an *upper bound* on concurrency, because the platform cannot make concurrent progress on anything without a thread (with the caveat in the next paragraph). There might be other upper bounds, too (e.g. you need enough memory to concurrently store all the working data for your concurrent operations), but the number of threads *is* an upper bound, and the one virtual threads are there to remove. Of course, as JEP 425 explains, you could abandon threads altogether and use some other construct as your unit of concurrency, but then you lose platform support. In any event, virtual threads exist to support a high number of threads, as Little's law requires; therefore, if you use virtual threads, you have a high number of them. — Ron On 14 Jul 2022, at 08:12, Alex Otenko wrote: Hi Ron, It looks like you are unconvinced. Let me try with illustrative numbers. The users opening their laptops at 9am don't know how many threads you have. So throughput remains 100k ops/sec in both setups below. Suppose, in the first setup, we have a system that is stable with 1000 threads. Little's law tells us that the response time cannot exceed 10ms in this case.
Little's law does not prescribe response time, by the way; it is merely a consequence of the statement that the system is stable: the system couldn't have been stable if its response time were higher. Now, let's create one thread per request. One claim is that this increases concurrency (and I object to this point alone). Suppose this means concurrency becomes 100k. Little's law says that the response time must be 1 second. Sorry, but that's hardly an improvement! In fact, for any concurrency greater than 1000 you must get a response time higher than the 10ms we got with 1000 threads. This is not what we want. Fortunately, it is not what happens either. Really, thread count in the thread-per-request design has little to do with concurrency level. Concurrency level is a derived quantity. It only tells us how many requests are making progress at any given time in a system that experiences request arrival rate R and is able to process requests in time T. The only thing you can control through system design is response time T. There are good reasons to design a system that way, but Little's law is not one of them. On Wed, 13 Jul 2022, 14:29 Ron Pressler, wrote: The application of Little's law is 100% correct. Little's law tells us that the number of threads must *necessarily* rise if throughput is to be high. Whether or not that alone is *sufficient* might depend on the concurrency level of other resources as well. The number of threads is not the only quantity that limits the L in the formula, but L cannot be higher than the number of threads. Obviously, if the system's level of concurrency is bounded at a very low level (say, 10), then having more than 10 threads is unhelpful, but as we're talking about a program that uses virtual threads, we know that is not the case. Also, Little's law describes *stable* systems; i.e. it says that *if* the system is stable, then a certain relationship must hold.
While it is true that the rate of arrival might rise without bound, if the number of threads is insufficient to meet it, then the system is no longer stable (normally that means that queues are growing without bound). — Ron On 13 Jul 2022, at 14:00, Alex Otenko wrote: This is an incorrect application of Little's Law. The law only posits that there is a connection between quantities. It doesn't specify which variables depend on which. In particular, throughput is not a free variable. Throughput is something outside your control. 100k users open their laptops at 9am and log in within 1 second - that's it, you have a throughput of 100k ops/sec. Then, based on the response time the system is able to deliver, you can tell what concurrency makes sense here. Adding threads is not going to change anything - certainly not if threads are not the bottleneck resource. Threads become the bottleneck when you have the hardware to run them, but not the threads themselves. On Tue, 12 Jul 2022, 15:47 Ron Pressler, wrote: On 11 Jul 2022, at 22:13, Rob Bygrave wrote: > An existing application that migrates to using virtual threads doesn't replace its platform threads with virtual threads What I have been confident about to date, based on the testing I've done, is that we can use Jetty with a Loom-based thread pool and that has worked very well. That is replacing current platform threads with virtual threads. I'm suggesting this will frequently be sub-1000 virtual threads. Ron, are you suggesting this isn't a valid use of virtual threads, or am I reading too much into what you've said here? The throughput advantage of virtual threads comes from one aspect, their *number*, as explained by Little's law. A web server employing virtual threads would not replace a pool of N platform threads with a pool of N virtual threads, as that does not increase the number of threads required to increase throughput.
Rather, it replaces the pool of N platform threads with an unpooled ExecutorService that spawns at least one new virtual thread for every HTTP serving task. Only that can increase the number of threads sufficiently to improve throughput. > unusual for an application that has any virtual threads to have fewer than, say, 10,000 In the case of HTTP server use of virtual threads, I feel the use of unusual is too strong. That is, when we are using virtual threads for application code handling of HTTP request/response (like Jetty + Loom), I suspect this is frequently going to operate with fewer than 1000 concurrent requests per server instance. 1000 concurrent requests would likely translate to more than 10,000 virtual threads due to fanout (JEPs 425 and 428 cover this). In fact, even without fanout, every HTTP request might wish to spawn more than one thread, for example to have one thread for reading and one for writing. The number 10,000, however, is just illustrative. Clearly, an application with virtual threads will have some large number of threads (significantly larger than applications with just platform threads), because the ability to have a large number of threads is what virtual threads are for. The important point is that tooling needs to adapt to a high number of threads, which is why we've added a tool that's designed to make sense of many threads, where jstack might not be very useful. — Ron -------------- next part -------------- An HTML attachment was scrubbed...
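The unpooled style described above can be sketched with the API from JEP 425 (final in JDK 21). This is an illustrative toy, not code from the thread: the task count and task body are invented, and each "request" here is just a counter increment.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

// No pool: Executors.newVirtualThreadPerTaskExecutor() starts one fresh,
// cheap virtual thread per submitted task (JDK 21, JEP 425).
public class PerRequestThreads {
    static int serve(int requests) {
        AtomicInteger served = new AtomicInteger();
        // try-with-resources: close() waits for all submitted tasks to finish
        try (ExecutorService exec = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < requests; i++) {
                exec.submit(() -> { served.incrementAndGet(); }); // one virtual thread per "request"
            }
        }
        return served.get();
    }

    public static void main(String[] args) {
        System.out.println(serve(10_000)); // prints 10000
    }
}
```

The point is that the executor imposes no cap: the number of live threads tracks the number of in-flight tasks, which is exactly the property Little's law rewards.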
URL: From oleksandr.otenko at gmail.com Tue Jul 19 08:22:29 2022 From: oleksandr.otenko at gmail.com (Alex Otenko) Date: Tue, 19 Jul 2022 09:22:29 +0100 Subject: [External] : Re: jstack, profilers and other tools In-Reply-To: <1FA73DAE-FD5D-4CC6-9638-C9985C20D286@oracle.com> References: <5B55B456-C2CC-4856-80A8-8A3F9E40CF05@oracle.com> <0E494B4B-ADCF-4CBB-AFFF-F463CB05AD0E@oracle.com> <7EC05322-6BD9-4A17-9AB3-62115F3940D0@oracle.com> <271F8C58-F096-42B2-9432-E5643A7F133C@oracle.com> <0F187B4E-65FE-4BF0-86DE-661088C65DD8@oracle.com> <1FA73DAE-FD5D-4CC6-9638-C9985C20D286@oracle.com> Message-ID: Thanks, that's what I was trying to get across, too. Also, 10k threads in a thread-per-request system doesn't mean that the concurrency is in the thousands. In the thought experiment it is. In practice - well, if the systems are fine with dozens or even hundreds of threads, there should be no problem even doubling the thread count, if it can double, or at least improve, throughput. In my experience this is not the case. There are even famous systems with self-tuning thread pool sizes, and I worked on the self-tuning algorithm. I have seen various apps and workloads that use that system, and I haven't seen any that would reach a maximum thread count of even a few hundred, even on a fairly large machine. So whereas I never found anything wrong with the claim that thread count is one of the caps on throughput, I find the claim that allowing thread-per-request is going to improve concurrency problematic exactly because there are other caps. There surely are workloads that are bottlenecked on a thread count that can't grow into the thousands, but in my practice I haven't seen a single one of this kind. If we had thousands of threads per CPU, they would have to spend so much time waiting that the business logic must be very trivial. For example, take the thought experiment with 10k threads and 0.5s response time.
If that is executed on a 1-CPU machine, each request must be spending 50 microseconds on CPU, and the rest of the time waiting for something. If it's waiting for a lock or a pool of resources, you may be better off having fewer threads (coarsening contention). So it had better be some network connection, or something of the kind. So for 499.95ms it is waiting on that, and it does request parsing, response construction, etc. in 50 microseconds. This sort of profile is not a very common pattern. If we consider tens of CPUs for 10k threads, it starts to look far less impressive in terms of the number of threads. That's all about concurrency and threads as a bottleneck resource. There are other important uses of threads, but those are not about increasing concurrency. Ok, I reckon the topic got bashed to smithereens. Alex On Mon, 18 Jul 2022, 22:57 Ron Pressler, wrote: > "Concurrency rises with throughput", which is just a mathematical fact, is > not the same as the claim - which no one is making - that one can *raise* > throughput by adding threads. However, it is the same as the claim that the > *maximum* throughput might rise if the *maximum* number of threads is > increased, because that's just how dependent variables can work in > mathematics, as I'll try explaining. > > There is no "more threads to get *better throughput*", and there is no > question about "applying" Little's law. Little's law is simply the maths > that tells us how many requests are being concurrently served in some > system. There is no getting around it. In a system with 10K requests/s, > each taking 500ms on average, there *are* 5K concurrent requests. If the > program is written in the thread-per-request style, then it *has* at least > 5K threads. Now, if the rate of requests doubles to 20K req/s and the > system doesn't collapse, then there must be at least 10K threads > serving them. > > Note that the increase in threads doesn't raise the throughput, but it > must accompany it.
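The 50-microsecond figure in Alex's thought experiment above follows from a simple budget: C cores shared by N in-flight requests, each with response time W, can give each request at most W × C / N of CPU time on average. A sketch (names and units are ours, for illustration only):

```java
// Per-request CPU budget: with C cores shared fairly by N in-flight
// requests of response time W, average on-CPU time per request <= W * C / N.
public class CpuBudget {
    /** Max average CPU time per request, in microseconds. */
    static long cpuMicrosPerRequest(long latencyMicros, int cores, int inFlight) {
        return (long) latencyMicros * cores / inFlight;
    }

    public static void main(String[] args) {
        // 10k in-flight requests, 1 CPU, 500 ms response time -> 50 us on CPU
        System.out.println(cpuMicrosPerRequest(500_000, 1, 10_000)); // prints 50
    }
}
```

Whether 50 μs of CPU per request is plausible is exactly the empirical disagreement in this thread: it is, when almost all of W is I/O wait; it isn't, when the handler does real computation.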
However, because concurrency rises with throughput, the > *maximum* number of threads does pose an upper bound on throughput. > > It is very important to understand the difference between "adding > processing units could decrease latency in a data-parallel program" and > "concurrency rises with throughput in a concurrent program." In the former, > the units are an independent variable, and in the latter they're not - i.e. > when the throughput is higher there are more threads, but adding threads > doesn't increase the throughput. > > And yet, because this forms an *upper bound* on throughput, the ability to > have more threads is a prerequisite to raising the maximum attainable > throughput (with the thread-per-request style). So raising the number of > threads cannot possibly increase throughput, and yet raising the maximum > number of threads could increase maximum throughput (until it's bounded by > something else). That's just how dependent variables work when talking > about upper/lower bounds. > > — Ron > > On 18 Jul 2022, at 19:01, Alex Otenko wrote: > > I think I have made it clear that I am not sceptical about the ability to > spawn threads in large numbers, and that all I am sceptical about is the > use of Little's law in the way you did. You made it look like one needs > thousands of threads to get better throughput, whereas typical numbers are > much more modest than that. In practice you can't heedlessly add more > threads, as at some point you get response time degrading with no > improvement to throughput. > > On Sun, 17 Jul 2022, 10:59 Ron Pressler, wrote: > >> If your thread-per-request system is getting 10K req/s (on average), each >> request takes 500ms (on average) to handle, and this can be sustained (i.e.
>> the system is stable), then it doesn't matter how much CPU or RAM is >> consumed, how much network bandwidth you're using, or even how many >> machines you have: the (average) number of threads you're running *is* no >> less than 5K (and, in practice, will usually be several times that). >> >> So it's not that adding more threads is going to increase throughput (in >> fact, it won't; having 1M threads will do nothing in this case), it's that >> the number of threads is an upper bound on L (among all other upper bounds >> on L). Conversely, reaching a certain throughput requires some minimum >> number of threads. >> >> — Ron
URL: From ron.pressler at oracle.com Tue Jul 19 11:01:56 2022 From: ron.pressler at oracle.com (Ron Pressler) Date: Tue, 19 Jul 2022 11:01:56 +0000 Subject: [External] : Re: jstack, profilers and other tools In-Reply-To: References: <5B55B456-C2CC-4856-80A8-8A3F9E40CF05@oracle.com> <0E494B4B-ADCF-4CBB-AFFF-F463CB05AD0E@oracle.com> <7EC05322-6BD9-4A17-9AB3-62115F3940D0@oracle.com> <271F8C58-F096-42B2-9432-E5643A7F133C@oracle.com> <0F187B4E-65FE-4BF0-86DE-661088C65DD8@oracle.com> <1FA73DAE-FD5D-4CC6-9638-C9985C20D286@oracle.com> Message-ID: <838D57E6-8671-4F6C-8792-E95E6D2DAD87@oracle.com> First, async APIs and lightweight user-mode threads were invented because we know empirically that there are many systems where the number of threads is the first bottleneck on maximum throughput that they encounter. The purpose of async APIs/lightweight threads is to allow those systems to hit the other, higher, limits on capacity. Of course, there are systems that hit other bottlenecks first, but the number-of-threads limitation is known to be very common. If a theory tells you it shouldn't be, then it's the theory that should be revised (hint: the portion of total latency spent waiting for I/O is commonly *very* high; if you want to use Little's law for that calculation, note that adding concurrency for fanout increases L and reduces W by the same factor, so it's handy to add together the total time spent waiting for, say, 5-50 outgoing microservice calls as if they were done sequentially, and compare that against the total time spent composing the results; this total "wait latency" is frequently in the hundreds of milliseconds - far higher than the actual request latency - and easily two orders of magnitude higher than CPU time). Second, the number of *threads* (as opposed to the number of concurrent operations) has, at most, a negligible impact on contention.
If we're talking about low-level memory contention, then what matters is the number of processing cores, not the number of threads (beyond the number of cores), and if we're talking about other resources, then that contention is part of the other limits on concurrency (and so on throughput) in the system, and the way it is reached - be it with many threads or with one - is irrelevant. It is true that various scheduling algorithms could reduce some overhead (whether they are used on threads or on async constructs, the relevant scheduling problems and algorithms are the same), but we're talking about effects that are orders of magnitude lower than what can be achieved by removing artificial limits on concurrency, though they could matter for getting the very last drop of performance; I go through the calculation of the effect of scheduling overhead here: https://inside.java/2020/08/07/loom-performance/. In short, the impact of scheduling can only be high if the total amount of time spent on scheduling is significant compared to the time spent waiting for I/O. — Ron
So whereas I never found anything wrong with the claim that thread count is one of the caps on throughput, I find the claim that allowing thread per request is going to improve concurrency problematic exactly because there are other caps. There surely are such workloads that are bottlenecked on thread count that can't grow into thousands, but in my practice I haven't seen a single one of this kind. If we had thousands of threads per CPU, they just need to be waiting so much that business logic must be very trivial. For example, the thought experiment with 10k threads and 0.5s response time. If that is executed on a 1 CPU machine, each request must be spending 50 microseconds on CPU, and for the rest of time waiting for something. If it's waiting for a lock or a pool of resource, you may be better off having fewer threads (coarsening contention). So it better be some network connection, or something of the kind. So 499.95ms it is waiting on that, and does request parsing, response construction, etc in 50 microseconds. This sort of profile is not a very common pattern. If we consider tens of CPUs for 10k threads, it starts to look far less impressive in terms of the number of threads. That's all about concurrency and threads as a bottleneck resource. There are other important uses of threads, but those are not about increasing concurrency. Ok, I reckon the topic got bashed to smithereens. Alex On Mon, 18 Jul 2022, 22:57 Ron Pressler, > wrote: ?Concurrency rises with throughput?, which is just a mathematical fact, is not the same as the claim ? that no one is making ? that one can *raise* throughput by adding threads. However, it is the same as the claim that the *maximum* throughput might rise if the *maximum* number of threads is increased, because that?s just how dependent variables can work in mathematics, as I?ll try explaining. There is no ?more threads to get *better throughput*?, and there is no question about ?applying? Little?s law. 
Little?s law is simply the maths that tells us how many requests are being concurrently served in some system. There is no getting around it. In a system with 10K requests/s, each taking 500ms on average, there *are* 5K concurrent requests. If the program is written in the thread-per-request style, then it *has* at least 5K threads. Now, if the rate of requests doubles to 20K req/s and the system doesn?t collapse, then then there must be at least 10K threads serving them. Note that the increase in threads doesn?t raise the throughput, but it must accompany it. However, because concurrency rises with throughput, the *maximum* number of threads does pose an upper bound on throughput. It is very important to understand the difference between ?adding processing units could decrease latency in a data-parallel program? and ?concurrency rises with throughput in a concurrent program.? In the former, the units are an independent variable, and in the latter they?re not ? i.e. when the throughput is higher there are more threads, but adding threads doesn?t increase the throughput. And yet, because this forms an *upper bound* on throughput, the ability to have more threads is a prerequisite to raising the maximum attainable throughput (with the thread-per-request style). So raising the number of threads cannot possibly increase throughput, and yet raising the maximum number of threads could increase maximum throughput (until it?s bounded by something else). That?s just how dependent variables work when talking about upper/lower bounds. ? Ron On 18 Jul 2022, at 19:01, Alex Otenko > wrote: I think I have made it clear that I am not sceptical about the ability to spawn threads in large numbers, and that all I am sceptical about is the use of Little's law in the way you did. You made it look like one needs thousands of threads to get better throughput, whereas typical numbers are much more modest than that. 
In practice you can't heedlessly add more threads, as at some point you get response time degrading with no improvement to throughput. On Sun, 17 Jul 2022, 10:59 Ron Pressler wrote: If your thread-per-request system is getting 10K req/s (on average), each request takes 500ms (on average) to handle, and this can be sustained (i.e. the system is stable), then it doesn't matter how much CPU or RAM is consumed, how much network bandwidth you're using, or even how many machines you have: the (average) number of threads you're running *is* no less than 5K (and, in practice, will usually be several times that). So it's not that adding more threads is going to increase throughput (in fact, it won't; having 1M threads will do nothing in this case); it's that the number of threads is an upper bound on L (among all other upper bounds on L). Conversely, reaching a certain throughput requires some minimum number of threads. As to how many thread-per-request systems do or would hit the OS-thread boundary before they hit others, that's an empirical question, and I think it is well established that there are many such systems, but if you're sceptical and think that user-mode threads/asynchronous APIs have little impact, you can just wait and see. -- Ron On 16 Jul 2022, at 20:30, Alex Otenko wrote: That's the indisputable bit. The contentious part is that adding more threads is going to increase throughput. Supposing that 10k threads are there, and you actually need them, you should get concurrency level 10k. Let's see what that means in practice. If it is a 1-CPU machine, 10k requests in flight at any given time means they are waiting 99.99% of the time. Or, out of 1 second, they spend 100 microseconds on CPU and wait for something for the rest of the time (or, out of a 100ms response time, 10 microseconds on CPU - barely enough to parse a REST request). This can't be the case for the majority of workloads.
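The per-request CPU budget Alex computes above follows from the same quantities: with L requests in flight sharing a fixed number of cores, each request's average on-CPU time is W x cores / L. A small sketch under those assumptions (names are hypothetical):

```java
// With L requests concurrently in flight sharing nCpus cores in a
// stable system, each request can spend on average nCpus / L of its
// time in the system on CPU, i.e. W * nCpus / L seconds per request.
public class CpuBudget {
    static double onCpuMicros(double latencySeconds, int cpus, int concurrentRequests) {
        return latencySeconds * 1_000_000.0 * cpus / concurrentRequests;
    }

    public static void main(String[] args) {
        // Alex's example: 10k requests in flight on 1 CPU, 1s in the system
        // each, leaves a budget of 100 microseconds of CPU per request.
        System.out.println(onCpuMicros(1.0, 1, 10_000)); // 100.0
        // With the earlier 0.5s response time, the budget is 50 microseconds.
        System.out.println(onCpuMicros(0.5, 1, 10_000)); // 50.0
    }
}
```

The calculation only says what the CPU share *must* be if such a system is stable; whether a real workload fits in that budget is the empirical question being argued.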
Of course, having 10k threads for less than 1 second each doesn't mean you are getting concurrency that is unattainable with fewer threads. The bottom line is that by adding threads you aren't necessarily increasing concurrency. On Fri, 15 Jul 2022, 10:19 Ron Pressler wrote: The number of threads doesn't "do" or not do anything for you. If requests arrive at 100K per second and each takes 500ms to process, then the number of threads you're using *is equal to* at least 50K (assuming thread-per-request) in a stable system; that's all. That is the physical meaning: the formula tells you what the quantities *are* in a stable system. Because in a thread-per-request program every concurrent request takes up at least one thread, the formula, while it does not immediately tell you how many machines are used or what the RAM, CPU, and network bandwidth utilisation is, does give you a lower bound on the total number of live threads. Conversely, the number of threads gives an upper bound on L. As to the rest about splitting into subtasks, that increases L and reduces W by the same factor, so when applying Little's law it's handy to treat W as the total latency, *as if* it were processed sequentially, if we're interested in L being the number of concurrent requests. More about that here: https://inside.java/2020/08/07/loom-performance/ -- Ron On 15 Jul 2022, at 09:37, Alex Otenko wrote: You quickly jumped to a *therefore*. Newton's second law binds force, mass and acceleration. But you can't say that you can decrease mass by increasing acceleration, if the force remains the same. That is, the statement would be arithmetically correct, but it would have no physical meaning. Adding threads allows you to do more work. But you can't do more work at will - the amount of work going through the system is a quantity independent of your design. Now, what you could do at will is split the work into subtasks. Virtual threads allow you to do this at very little cost.
However, you still can't talk about an increase in concurrency due to Little's law, because - enter Amdahl - the response time changes. Say 100k requests get split into 10 subtasks each, each runnable independently. Amdahl says your response time goes down 10-fold. So 100k requests times 1ms gives concurrency 100. Concurrency got reduced. Not surprising at all, because now each request spends 10x less time in the system. What about subtasks? Aren't we running more of them? Does this mean concurrency increased? Yes, 100k requests beget 1M subtasks. We can't compare concurrency, because the definition of the unit of work changed: it was W, it became W/10. But let's see anyway. So we have 1M tasks, each finished in 1ms - concurrency is 1000. Same as before splitting the work, given the matching change of response time. I treat this like I would any change of units of measurement. So whereas I see a lot of good from being able to spin up threads, short-lived and in large numbers, I don't see how you can claim concurrency increases, or that Little's law somehow controls throughput. Alex On Thu, 14 Jul 2022, 11:01 Ron Pressler wrote: Little's law tells us what the relationship between concurrency, throughput and latency is if the system is stable. It tells us that if latency doesn't decrease, then concurrency rises with throughput (again, if the system is stable). Therefore, to support high throughput you need a high level of concurrency. Since the Java platform's unit of concurrency is the thread, to support high throughput you need a high number of threads. There might be other things you also need more of, but you *at least* need a high number of threads. The number of threads is an *upper bound* on concurrency, because the platform cannot make concurrent progress on anything without a thread (with the caveat in the next paragraph). There might be other upper bounds, too (e.g.
you need enough memory to concurrently store all the working data for your concurrent operations), but the number of threads *is* an upper bound, and it is the one virtual threads are there to remove. Of course, as JEP 425 explains, you could abandon threads altogether and use some other construct as your unit of concurrency, but then you lose platform support. In any event, virtual threads exist to support a high number of threads, as Little's law requires; therefore, if you use virtual threads, you have a high number of them. -- Ron On 14 Jul 2022, at 08:12, Alex Otenko wrote: Hi Ron, It looks like you are unconvinced. Let me try with illustrative numbers. The users opening their laptops at 9am don't know how many threads you have. So throughput remains 100k ops/sec in both setups below. Suppose in the first setup we have a system that is stable with 1000 threads. Little's law tells us that the response time cannot exceed 10ms in this case. Little's law does not prescribe response time, by the way; it is merely a consequence of the statement that the system is stable: it couldn't have been stable if its response time were higher. Now, let's create one thread per request. One claim is that this increases concurrency (and I object to this point alone). Suppose this means concurrency becomes 100k. Little's law says that the response time must be 1 second. Sorry, but that's hardly an improvement! In fact, for any concurrency greater than 1000 you must get a response time higher than the 10ms we got with 1000 threads. This is not what we want. Fortunately, it is not what happens either. Really, thread count in the thread-per-request design has little to do with concurrency level. Concurrency level is a derived quantity. It only tells us how many requests are making progress at any given time in a system that experiences request arrival rate R and which is able to process them in time T. The only thing you can control through system design is response time T.
There are good reasons to design a system that way, but Little's law is not one of them. On Wed, 13 Jul 2022, 14:29 Ron Pressler wrote: The application of Little's law is 100% correct. Little's law tells us that the number of threads must *necessarily* rise if throughput is to be high. Whether or not that alone is *sufficient* might depend on the concurrency level of other resources as well. The number of threads is not the only quantity that limits the L in the formula, but L cannot be higher than the number of threads. Obviously, if the system's level of concurrency is bounded at a very low level (say, 10), then having more than 10 threads is unhelpful, but as we're talking about a program that uses virtual threads, we know that is not the case. Also, Little's law describes *stable* systems; i.e. it says that *if* the system is stable, then a certain relationship must hold. While it is true that the rate of arrival might rise without bound, if the number of threads is insufficient to meet it, then the system is no longer stable (normally that means that queues are growing without bound). -- Ron On 13 Jul 2022, at 14:00, Alex Otenko wrote: This is an incorrect application of Little's Law. The law only posits that there is a connection between quantities. It doesn't specify which variables depend on which. In particular, throughput is not a free variable. Throughput is something outside your control. 100k users open their laptops at 9am and log in within 1 second - that's it, you have a throughput of 100k ops/sec. Then, based on the response time the system is able to deliver, you can tell what concurrency makes sense here. Adding threads is not going to change anything - certainly not if threads are not the bottleneck resource. Threads become the bottleneck when you have the hardware to run them, but not the threads.
On Tue, 12 Jul 2022, 15:47 Ron Pressler wrote: On 11 Jul 2022, at 22:13, Rob Bygrave wrote: > An existing application that migrates to using virtual threads doesn't replace its platform threads with virtual threads What I have been confident about to date, based on the testing I've done, is that we can use Jetty with a Loom-based thread pool and that has worked very well. That is replacing current platform threads with virtual threads. I'm suggesting this will frequently be sub-1000 virtual threads. Ron, are you suggesting this isn't a valid use of virtual threads, or am I reading too much into what you've said here? The throughput advantage of virtual threads comes from one aspect - their *number* - as explained by Little's law. A web server employing virtual threads would not replace a pool of N platform threads with a pool of N virtual threads, as that does not increase the number of threads required to increase throughput. Rather, it replaces the pool of N platform threads with an unpooled ExecutorService that spawns at least one new virtual thread for every HTTP serving task. Only that can increase the number of threads sufficiently to improve throughput. > unusual for an application that has any virtual threads to have fewer than, say, 10,000 In the case of HTTP server use of virtual threads, I feel the use of "unusual" is too strong. That is, when we are using virtual threads for application code handling of HTTP request/response (like Jetty + Loom), I suspect this is frequently going to operate with fewer than 1000 concurrent requests per server instance. 1000 concurrent requests would likely translate to more than 10,000 virtual threads due to fanout (JEPs 425 and 428 cover this). In fact, even without fanout, every HTTP request might wish to spawn more than one thread, for example to have one thread for reading and one for writing. The number 10,000, however, is just illustrative.
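The replacement described above, a fixed pool swapped for an unpooled thread-per-task executor, can be sketched with the JDK's `Executors.newVirtualThreadPerTaskExecutor()` (final in Java 21). This is an illustrative sketch, not a measurement; the request count and the sleep are stand-ins for real serving work:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class ThreadPerRequest {
    // Serve `requests` tasks, one freshly spawned virtual thread per task,
    // and return how many completed. Not a pool: the number of concurrent
    // tasks is not capped by a pool size.
    static int serve(int requests) {
        AtomicInteger completed = new AtomicInteger();
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < requests; i++) {
                executor.submit(() -> {
                    Thread.sleep(100);               // stand-in for blocking I/O
                    return completed.incrementAndGet();
                });
            }
        } // close() waits for all submitted tasks to finish
        return completed.get();
    }

    public static void main(String[] args) {
        System.out.println(serve(10_000)); // 10000
    }
}
```

Because `ExecutorService.close()` waits for submitted tasks, the try-with-resources block exits only after all 10,000 virtual threads have finished; a fixed pool of N platform threads would instead cap the concurrency at N.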
Clearly, an application with virtual threads will have some large number of threads (significantly larger than applications with just platform threads), because the ability to have a large number of threads is what virtual threads are for. The important point is that tooling needs to adapt to a high number of threads, which is why we've added a tool that's designed to make sense of many threads, where jstack might not be very useful. -- Ron -------------- next part -------------- An HTML attachment was scrubbed... URL: From jrivard at gmail.com Tue Jul 19 11:24:47 2022 From: jrivard at gmail.com (Jason Rivard) Date: Tue, 19 Jul 2022 07:24:47 -0400 Subject: JNDI LDAP Threads & Loom Message-ID: Will JNDI LDAP Threads be "virtualizable"? These threads are created here: https://github.com/openjdk/jdk/blob/f5a7de86278ce019ffe44a92921dbb4018451a73/src/java.naming/share/classes/com/sun/jndi/ldap/VersionHelper.java#L112 There is no exposed API for controlling the thread creation or supplying a custom ThreadFactory. As it happens, I manage an app that needs to create hundreds to thousands of these connections for /reasons/, and that makes for lots of legacy thread usage. I suppose a similar question also applies to otherwise similar internal threads created for per-socket I/O work - I'm assuming there are others. Thanks! -Jason From ron.pressler at oracle.com Tue Jul 19 13:29:16 2022 From: ron.pressler at oracle.com (Ron Pressler) Date: Tue, 19 Jul 2022 13:29:16 +0000 Subject: JNDI LDAP Threads & Loom In-Reply-To: References: Message-ID: <41039BEC-5C1B-4C35-83AE-0D4CA2EE7870@oracle.com> I am not familiar with the specifics of this particular mechanism (I can ask others who are), but I expect this kind of internal usage will be considered once virtual threads are out of preview. -- Ron > On 19 Jul 2022, at 12:24, Jason Rivard wrote: > > Will JNDI LDAP Threads be "virtualizable"?
These threads are created here: > > https://github.com/openjdk/jdk/blob/f5a7de86278ce019ffe44a92921dbb4018451a73/src/java.naming/share/classes/com/sun/jndi/ldap/VersionHelper.java#L112 > > There is no exposed API for controlling the thread creation or > supplying a custom ThreadFactory. > > As it happens I manage an app that needs to create hundreds to > thousands of these connections for /reasons/, and that makes for lots > of legacy thread usage. > > I suppose a similar question also applies to otherwise similar > internal threads created for per-socket I/O work - I'm assuming there > are others. > > Thanks! > > -Jason From pedro.lamarao at prodist.com.br Tue Jul 19 13:35:23 2022 From: pedro.lamarao at prodist.com.br (Pedro Lamarão) Date: Tue, 19 Jul 2022 10:35:23 -0300 Subject: [External] : Re: jstack, profilers and other tools In-Reply-To: References: <5B55B456-C2CC-4856-80A8-8A3F9E40CF05@oracle.com> <0E494B4B-ADCF-4CBB-AFFF-F463CB05AD0E@oracle.com> <7EC05322-6BD9-4A17-9AB3-62115F3940D0@oracle.com> <271F8C58-F096-42B2-9432-E5643A7F133C@oracle.com> <0F187B4E-65FE-4BF0-86DE-661088C65DD8@oracle.com> <1FA73DAE-FD5D-4CC6-9638-C9985C20D286@oracle.com> Message-ID: On Tue, 19 Jul 2022 at 05:25, Alex Otenko <oleksandr.otenko at gmail.com> wrote: > I find the claim that allowing thread per request is going to improve > concurrency problematic exactly because there are other caps. > I think that this is also an important point. It seems certain that *allowing* thread-per-request designs is not going to improve concurrency. If one already has an optimized system with an async/await design or similar, one would not super-optimize it by rewriting it in a thread-per-request design. I don't think there is anyone here making this claim. In the context of design or architecture, the benefit of thread-per-request is improved maintainability. async/await is notoriously difficult to understand and debug, as are state machines in general.
-- Pedro Lamarão -------------- next part -------------- An HTML attachment was scrubbed... URL: From aleksej.efimov at oracle.com Tue Jul 19 17:00:30 2022 From: aleksej.efimov at oracle.com (Aleksei Efimov) Date: Tue, 19 Jul 2022 17:00:30 +0000 Subject: JNDI LDAP Threads & Loom In-Reply-To: <41039BEC-5C1B-4C35-83AE-0D4CA2EE7870@oracle.com> References: <41039BEC-5C1B-4C35-83AE-0D4CA2EE7870@oracle.com> Message-ID: I've logged an RFE to investigate internal usage of virtual threads in the JNDI/LDAP provider: https://bugs.openjdk.org/browse/JDK-8290559 We will investigate it once VTs are out of preview. - Aleksei ________________________________ From: loom-dev on behalf of Ron Pressler Sent: Tuesday, July 19, 2022 2:29 PM To: Jason Rivard Cc: loom-dev at openjdk.java.net Subject: Re: JNDI LDAP Threads & Loom I am not familiar with the specifics of this particular mechanism (I can ask others who are), but I expect this kind of internal usage will be considered once virtual threads are out of preview. -- Ron > On 19 Jul 2022, at 12:24, Jason Rivard wrote: > > Will JNDI LDAP Threads be "virtualizable"? These threads are created here: > > https://github.com/openjdk/jdk/blob/f5a7de86278ce019ffe44a92921dbb4018451a73/src/java.naming/share/classes/com/sun/jndi/ldap/VersionHelper.java#L112 > > There is no exposed API for controlling the thread creation or > supplying a custom ThreadFactory. > > As it happens I manage an app that needs to create hundreds to > thousands of these connections for /reasons/, and that makes for lots > of legacy thread usage. > > I suppose a similar question also applies to otherwise similar > internal threads created for per-socket I/O work - I'm assuming there > are others. > > Thanks! > > -Jason -------------- next part -------------- An HTML attachment was scrubbed...
URL: From oleksandr.otenko at gmail.com Tue Jul 19 17:38:14 2022 From: oleksandr.otenko at gmail.com (Alex Otenko) Date: Tue, 19 Jul 2022 18:38:14 +0100 Subject: [External] : Re: jstack, profilers and other tools In-Reply-To: References: <5B55B456-C2CC-4856-80A8-8A3F9E40CF05@oracle.com> <0E494B4B-ADCF-4CBB-AFFF-F463CB05AD0E@oracle.com> <7EC05322-6BD9-4A17-9AB3-62115F3940D0@oracle.com> <271F8C58-F096-42B2-9432-E5643A7F133C@oracle.com> <0F187B4E-65FE-4BF0-86DE-661088C65DD8@oracle.com> <1FA73DAE-FD5D-4CC6-9638-C9985C20D286@oracle.com> Message-ID: Agreed about the architectural advantages. The email that triggered my rant did contain the claim that using Virtual threads has the advantage of higher concurrency. > The throughput advantage of virtual threads comes from one aspect - their *number* - as explained by Little's law. On Tue, 19 Jul 2022, 14:35 Pedro Lamarão wrote: > On Tue, 19 Jul 2022 at 05:25, Alex Otenko < > oleksandr.otenko at gmail.com> wrote: > > >> I find the claim that allowing thread per request is going to improve >> concurrency problematic exactly because there are other caps. >> > > I think that this is also an important point. > It seems certain that *allowing* thread-per-request designs is not going > to improve concurrency. > If one already has an optimized system with an async/await design or > similar, > one would not super-optimize it by rewriting it in a thread-per-request > design. > I don't think there is anyone here making this claim. > In the context of design or architecture, the benefit of > thread-per-request is improved maintainability. > async/await is notoriously difficult to understand and debug, as are > state machines in general. > > -- > Pedro Lamarão > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From eric at kolotyluk.net Tue Jul 19 18:34:08 2022 From: eric at kolotyluk.net (eric at kolotyluk.net) Date: Tue, 19 Jul 2022 11:34:08 -0700 Subject: An Analogy Message-ID: <110101d89b9e$22a052e0$67e0f8a0$@kolotyluk.net> I am hoping this is an apt analogy, so please correct me if it is wrong.

Before Loom, concurrency was like going to a bank with a fixed number of tellers, where each teller had a line of customers.

1. In Java terms, a teller is like a Platform Thread
2. Generally, it would take time to process each customer, say an average of 5 minutes
3. Sometimes, a customer would block the process, such as the teller needing to make a phone call to get some information
4. No work is performed while the teller is blocked waiting, and consequently the entire line is blocked

After Loom, concurrency is like going to a bank with more modern policies and procedures

1. In Java terms, a teller is still like a Platform Thread, but has the ability to park a customer
2. Generally, it still takes time to process each customer, say an average of 5 minutes
3. Sometimes, a customer would block the process, such as the teller needing some information before proceeding.
   a. The teller sends a text message or email to get the necessary information
   b. The teller asks the customer to be seated, and as soon as the information is available, they will be the next customer processed by the first available teller
   c. The teller starts processing the next customer in line
   d. This is analogous to a parked Virtual Thread, where the teller is like a Platform Thread, and the customer is like a Virtual Thread
4. Concurrency is increased by better policies and procedures for dealing with blocking operations

Yes, this is very simplistic, but intentionally so, to try to expose what is so great about Virtual Threads. Cheers, Eric -------------- next part -------------- An HTML attachment was scrubbed...
URL: From anmbrr.bit0112 at gmail.com Tue Jul 19 19:12:51 2022 From: anmbrr.bit0112 at gmail.com (Bazlur Rahman) Date: Tue, 19 Jul 2022 15:12:51 -0400 Subject: An Analogy In-Reply-To: <110101d89b9e$22a052e0$67e0f8a0$@kolotyluk.net> References: <110101d89b9e$22a052e0$67e0f8a0$@kolotyluk.net> Message-ID: Hey Eric, A great analogy, I wonder if I can use it in my talk at the PhillyJUG (of course, using your reference) today. *Thank you,* *-* *A N M Bazlur Rahman* On Tue, Jul 19, 2022 at 2:34 PM wrote: > I am hoping this is an apt analogy, so please correct me if it is wrong? > > > > Before Loom, concurrency was like going to a bank with a fixed number of > tellers, where each teller had a line of customers. > > > > 1. In Java terms, a teller is like Platform Thread > 2. Generally, it would take time to process each customer, say an > average of 5 minutes > 3. Sometimes, a customer would block the process, such as the teller > needed to make a phone call to get some information > 4. No work is performed, while the teller is blocked waiting, and > consequently the entire line is blocked > > > > After Loom, concurrency is like going to a bank with more modern policies > and procedures > > > > 1. In Java terms, a teller is still like a Platform Thread, but has > the ability to park a customer > 2. Generally, it still takes time to process each customer, say an > average of 5 minutes > 3. Sometimes, a customer would block the process, such as the teller > needed some information before proceeding? > 1. The teller sends a text message or emails to get the necessary > information > 2. The teller asks the customer to be seated, and as soon the > information is available, they will be the next customer processed by the > first available teller > 3. The teller starts processing the next customer in line > 4. This is analogous to a parked Virtual Thread, where the teller > is like a Platform Thread, and the customer is like a Virtual Thread > 4. 
Concurrency is increased, by better policies and procedures in > dealing with blocking operations > > > > Yes, this is very simplistic, but intentionally so to try to expose what > is so great about Virtual Threads. > > > > Cheers, Eric > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sarma.swaranga at gmail.com Tue Jul 19 19:22:26 2022 From: sarma.swaranga at gmail.com (Swaranga Sarma) Date: Tue, 19 Jul 2022 12:22:26 -0700 Subject: An Analogy In-Reply-To: References: <110101d89b9e$22a052e0$67e0f8a0$@kolotyluk.net> Message-ID: To take it a little bit further, you could also say there are physical desks/counters where the tellers are supposed to work. And the number of desks/counters is usually less than the number of tellers. Here the desks/counters are the actual CPU cores. Since not all tellers will get a desk at the same time, the manager (the OS) will sometimes ask a teller to release a desk and allow another teller with some pending customers to serve at the desk. This swap sometimes happens if a certain teller has occupied a desk for a length of time and some other teller hasn't had a chance to serve its pending customers. This is analogous to platform thread scheduling by the OS on the actual cores. Much simplified, but helped me create a mental model. Regards Swaranga On Tue, Jul 19, 2022 at 12:13 PM Bazlur Rahman wrote: > Hey Eric, > > A great analogy, I wonder if I can use it in my talk at the PhillyJUG (of > course, using your reference) today. > > *Thank you,* > *-* > *A N M Bazlur Rahman* > > > > > On Tue, Jul 19, 2022 at 2:34 PM wrote: > >> I am hoping this is an apt analogy, so please correct me if it is wrong? >> >> >> >> Before Loom, concurrency was like going to a bank with a fixed number of >> tellers, where each teller had a line of customers. >> >> >> >> 1. In Java terms, a teller is like Platform Thread >> 2. 
Generally, it would take time to process each customer, say an >> average of 5 minutes >> 3. Sometimes, a customer would block the process, such as the teller >> needed to make a phone call to get some information >> 4. No work is performed, while the teller is blocked waiting, and >> consequently the entire line is blocked >> >> >> >> After Loom, concurrency is like going to a bank with more modern policies >> and procedures >> >> >> >> 1. In Java terms, a teller is still like a Platform Thread, but has >> the ability to park a customer >> 2. Generally, it still takes time to process each customer, say an >> average of 5 minutes >> 3. Sometimes, a customer would block the process, such as the teller >> needed some information before proceeding? >> 1. The teller sends a text message or emails to get the necessary >> information >> 2. The teller asks the customer to be seated, and as soon the >> information is available, they will be the next customer processed by the >> first available teller >> 3. The teller starts processing the next customer in line >> 4. This is analogous to a parked Virtual Thread, where the teller >> is like a Platform Thread, and the customer is like a Virtual Thread >> 4. Concurrency is increased, by better policies and procedures in >> dealing with blocking operations >> >> >> >> Yes, this is very simplistic, but intentionally so to try to expose what >> is so great about Virtual Threads. >> >> >> >> Cheers, Eric >> >> >> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ron.pressler at oracle.com Tue Jul 19 22:52:12 2022 From: ron.pressler at oracle.com (Ron Pressler) Date: Tue, 19 Jul 2022 22:52:12 +0000 Subject: [External] : Re: jstack, profilers and other tools In-Reply-To: References: <5B55B456-C2CC-4856-80A8-8A3F9E40CF05@oracle.com> <0E494B4B-ADCF-4CBB-AFFF-F463CB05AD0E@oracle.com> <7EC05322-6BD9-4A17-9AB3-62115F3940D0@oracle.com> <271F8C58-F096-42B2-9432-E5643A7F133C@oracle.com> <0F187B4E-65FE-4BF0-86DE-661088C65DD8@oracle.com> <1FA73DAE-FD5D-4CC6-9638-C9985C20D286@oracle.com> Message-ID: On 19 Jul 2022, at 18:38, Alex Otenko wrote: Agreed about the architectural advantages. The email that triggered my rant did contain the claim that using Virtual threads has the advantage of higher concurrency. > The throughput advantage of virtual threads comes from one aspect - their *number* - as explained by Little's law. Yes, and that is correct. As I explained, a higher maximum number of threads does indeed mean it is possible to reach the higher concurrency needed for higher throughput, so virtual threads, by virtue of their number, do allow for higher throughput. That statement is completely accurate, and yet it means something very different from (the incorrect) "increasing the number of threads increases throughput", which is how you misinterpreted the statement. This is similar to saying that AC allows people to live in areas with higher temperature, and that is a very different statement from saying that AC increases the temperature (although I guess it happens to also do that). -- Ron -------------- next part -------------- An HTML attachment was scrubbed... URL: From ron.pressler at oracle.com Wed Jul 20 13:46:51 2022 From: ron.pressler at oracle.com (Ron Pressler) Date: Wed, 20 Jul 2022 13:46:51 +0000 Subject: An Analogy In-Reply-To: <110101d89b9e$22a052e0$67e0f8a0$@kolotyluk.net> References: <110101d89b9e$22a052e0$67e0f8a0$@kolotyluk.net> Message-ID: I would think about this differently.
There is no important, essential difference between virtual threads and OS threads. They implement the same thread abstraction, and both employ similar algorithms that ultimately schedule user code to run on a processing core, or take it off the core to run other code. The main difference is that one is a subprogram of the OS kernel, and the other is a subprogram of the Java runtime. OS threads are no more "real", and do no more "actual work", than virtual threads. They're both software constructs that create an abstraction, or illusion, that virtualises actual resources, such as the CPU. So, to keep with your analogy, having virtual threads is just having access to more tellers. Instead of waiting for one of a small set, every customer who walks into the bank (up to some very large number) immediately gets their own teller. -- Ron On 19 Jul 2022, at 19:34, eric at kolotyluk.net wrote: I am hoping this is an apt analogy, so please correct me if it is wrong. Before Loom, concurrency was like going to a bank with a fixed number of tellers, where each teller had a line of customers. 1. In Java terms, a teller is like a Platform Thread 2. Generally, it would take time to process each customer, say an average of 5 minutes 3. Sometimes, a customer would block the process, such as the teller needing to make a phone call to get some information 4. No work is performed while the teller is blocked waiting, and consequently the entire line is blocked After Loom, concurrency is like going to a bank with more modern policies and procedures 1. In Java terms, a teller is still like a Platform Thread, but has the ability to park a customer 2. Generally, it still takes time to process each customer, say an average of 5 minutes 3. Sometimes, a customer would block the process, such as the teller needing some information before proceeding.
* The teller sends a text message or email to get the necessary information * The teller asks the customer to be seated, and as soon as the information is available, they will be the next customer processed by the first available teller * The teller starts processing the next customer in line * This is analogous to a parked Virtual Thread, where the teller is like a Platform Thread, and the customer is like a Virtual Thread 4. Concurrency is increased by better policies and procedures for dealing with blocking operations Yes, this is very simplistic, but intentionally so, to try to expose what is so great about Virtual Threads. Cheers, Eric -------------- next part -------------- An HTML attachment was scrubbed... URL: From oleksandr.otenko at gmail.com Wed Jul 20 18:24:12 2022 From: oleksandr.otenko at gmail.com (Alex Otenko) Date: Wed, 20 Jul 2022 19:24:12 +0100 Subject: [External] : Re: jstack, profilers and other tools In-Reply-To: References: <5B55B456-C2CC-4856-80A8-8A3F9E40CF05@oracle.com> <0E494B4B-ADCF-4CBB-AFFF-F463CB05AD0E@oracle.com> <7EC05322-6BD9-4A17-9AB3-62115F3940D0@oracle.com> <271F8C58-F096-42B2-9432-E5643A7F133C@oracle.com> <0F187B4E-65FE-4BF0-86DE-661088C65DD8@oracle.com> <1FA73DAE-FD5D-4CC6-9638-C9985C20D286@oracle.com> Message-ID: To me that statement implies a few things: - that Little's law talks of thread count - that if the thread count is low, you can't have a throughput advantage. Well, I don't feel like discussing my imperfect grasp of English. On Tue, 19 Jul 2022, 23:52 Ron Pressler wrote: > > > On 19 Jul 2022, at 18:38, Alex Otenko wrote: > > Agreed about the architectural advantages. > > The email that triggered my rant did contain the claim that using Virtual > threads has the advantage of higher concurrency. > > > The throughput advantage of virtual threads comes from one aspect - > their *number* - as explained by Little's law. > > > > > Yes, and that is correct.
> As I explained, a higher maximum number of threads does indeed mean it is possible to reach the higher concurrency needed for higher throughput, so virtual threads, by virtue of their number, do allow for higher throughput. That statement is completely accurate, and yet it means something very different from (the incorrect) "increasing the number of threads increases throughput", which is how you misinterpreted the statement.
>
> This is similar to saying that AC allows people to live in areas with higher temperature, and that is a very different statement from saying that AC increases the temperature (although I guess it happens to also do that).
>
> - Ron

From oleksandr.otenko at gmail.com Wed Jul 20 18:49:59 2022
From: oleksandr.otenko at gmail.com (Alex Otenko)
Date: Wed, 20 Jul 2022 19:49:59 +0100
Subject: [External] : Re: jstack, profilers and other tools
In-Reply-To: <838D57E6-8671-4F6C-8792-E95E6D2DAD87@oracle.com>
References: <5B55B456-C2CC-4856-80A8-8A3F9E40CF05@oracle.com> <0E494B4B-ADCF-4CBB-AFFF-F463CB05AD0E@oracle.com> <7EC05322-6BD9-4A17-9AB3-62115F3940D0@oracle.com> <271F8C58-F096-42B2-9432-E5643A7F133C@oracle.com> <0F187B4E-65FE-4BF0-86DE-661088C65DD8@oracle.com> <1FA73DAE-FD5D-4CC6-9638-C9985C20D286@oracle.com> <838D57E6-8671-4F6C-8792-E95E6D2DAD87@oracle.com>
Message-ID:

Waiting for requests doesn't count towards response time. Waiting for responses from downstream does, but it is doable with rather small thread counts - perhaps as a witness to just how large a proportion of response time that typically is.

I agree virtual threads allow us to build things differently. But it's not just the awkwardness of async APIs that changes. A blocking API eliminates inversion of control and changes the allocation pattern. For example, first you allocate 16KB to read efficiently, then you are blocked - perhaps for seconds while the user is having a think.
So 10k connections pin 160MB waiting for a request. With an async API nothing is pinned, because you don't allocate until you know who's read-ready. This may no longer be a problem with modern JVMs, but it needs calling out.

The lack of inversion of control can be seen as the inability to control who reads when. So to speak, would you rather read 100 bytes from each of 10k requests, or 10KB from each of 100 requests? This has an impact on memory footprint and on who can make progress. It makes sense to read more requests when you run out of things to do, but it doesn't make sense if you have saturated the bottleneck resource.

Alex

On Tue, 19 Jul 2022, 12:02 Ron Pressler, wrote:

> First, async APIs and lightweight user-mode threads were invented because we empirically know there are many systems where the number of threads is the first bottleneck on maximum throughput that they encounter. The purpose of async APIs/lightweight threads is to allow those systems to hit the other, higher, limits on capacity. Of course, there are systems that hit other bottlenecks first, but the number-of-threads limitation is known to be very common. If a theory tells you it shouldn't be, then it's the theory that should be revised (hint: the portion of total latency spent waiting for I/O is commonly *very* high; if you want to use Little's law for that calculation, note that adding concurrency for fanout increases L and reduces W by the same factor, so it's handy to add together the total time spent waiting for, say, 5-50 outgoing microservice calls as if they were done sequentially, and compare that against the total time spent composing the results; this total "wait latency" is frequently in the hundreds of milliseconds - far higher than the actual request latency - and easily two orders of magnitude higher than CPU time).
>
> Second, the number of *threads* (as opposed to the number of concurrent operations) has, at most, a negligible impact on contention.
If we?re > talking about low-level memory contention, then what matters is the number > of processing cores, not the number of threads (beyond the number of > cores), and if we?re talking about other resources, then that contention is > part of the other limits on concurrency (and so throughput) in the system, > and the way it is reached ? be it with many threads or one ? is irrelevant. > It is true that various scheduling algorithms ? whether used on threads or > on async constructs the relevant scheduling problems and algorithms are the > same ? could reduce some overhead, but we?re talking about effects that are > orders of magnitude lower than what can be achieved by reducing artificial > limits on concurrency, but could matter to get the very last drop of > performance; I go through the calculation of the effect of scheduling > overhead here: https://inside.java/2020/08/07/loom-performance/. In > short, the impact of scheduling can only be high if the total amount of > time spent on scheduling is significant when compared to the time spent > waiting for I/O. > > ? Ron > > > > On 19 Jul 2022, at 09:22, Alex Otenko wrote: > > Thanks, that's what I was trying to get across, too. > > Also, 10k threads per request doesn't mean that the concurrency is in > thousands. In the thought experiment it is. In practice - ... well, if the > systems are fine with dozens or even hundreds of threads, there should be > no problem even doubling thread count, if it can double, or at least > improve, throughput. In my experience this is not the case. There even are > famous systems with self-tuning thread pool sizes, and I worked on the > self-tuning algorithm. I have seen various apps and workloads that use that > system and haven't seen any that would reach maximum thread count of a few > hundred even on a fairly large machine. 
So whereas I never found anything > wrong with the claim that thread count is one of the caps on throughput, I > find the claim that allowing thread per request is going to improve > concurrency problematic exactly because there are other caps. There surely > are such workloads that are bottlenecked on thread count that can't grow > into thousands, but in my practice I haven't seen a single one of this > kind. If we had thousands of threads per CPU, they just need to be waiting > so much that business logic must be very trivial. > > For example, the thought experiment with 10k threads and 0.5s response > time. If that is executed on a 1 CPU machine, each request must be spending > 50 microseconds on CPU, and for the rest of time waiting for something. If > it's waiting for a lock or a pool of resource, you may be better off having > fewer threads (coarsening contention). So it better be some network > connection, or something of the kind. So 499.95ms it is waiting on that, > and does request parsing, response construction, etc in 50 microseconds. > This sort of profile is not a very common pattern. > > If we consider tens of CPUs for 10k threads, it starts to look far less > impressive in terms of the number of threads. > > That's all about concurrency and threads as a bottleneck resource. There > are other important uses of threads, but those are not about increasing > concurrency. > > > Ok, I reckon the topic got bashed to smithereens. > > Alex > > On Mon, 18 Jul 2022, 22:57 Ron Pressler, wrote: > >> ?Concurrency rises with throughput?, which is just a mathematical fact, >> is not the same as the claim ? that no one is making ? that one can *raise* >> throughput by adding threads. However, it is the same as the claim that the >> *maximum* throughput might rise if the *maximum* number of threads is >> increased, because that?s just how dependent variables can work in >> mathematics, as I?ll try explaining. 
>> There is no "more threads to get *better throughput*", and there is no question about "applying" Little's law. Little's law is simply the maths that tells us how many requests are being concurrently served in some system. There is no getting around it. In a system with 10K requests/s, each taking 500ms on average, there *are* 5K concurrent requests. If the program is written in the thread-per-request style, then it *has* at least 5K threads. Now, if the rate of requests doubles to 20K req/s and the system doesn't collapse, then there must be at least 10K threads serving them.
>>
>> Note that the increase in threads doesn't raise the throughput, but it must accompany it. However, because concurrency rises with throughput, the *maximum* number of threads does pose an upper bound on throughput.
>>
>> It is very important to understand the difference between "adding processing units could decrease latency in a data-parallel program" and "concurrency rises with throughput in a concurrent program." In the former, the units are an independent variable, and in the latter they're not, i.e. when the throughput is higher there are more threads, but adding threads doesn't increase the throughput.
>>
>> And yet, because this forms an *upper bound* on throughput, the ability to have more threads is a prerequisite to raising the maximum attainable throughput (with the thread-per-request style). So raising the number of threads cannot possibly increase throughput, and yet raising the maximum number of threads could increase maximum throughput (until it's bounded by something else). That's just how dependent variables work when talking about upper/lower bounds.
>>
>> -
Ron >> >> On 18 Jul 2022, at 19:01, Alex Otenko wrote: >> >> I think I have made it clear that I am not sceptical about the ability to >> spawn threads in large numbers, and that all I am sceptical about is the >> use of Little's law in the way you did. You made it look like one needs >> thousands of threads to get better throughput, whereas typical numbers are >> much more modest than that. In practice you can't heedlessly add more >> threads, as at some point you get response time degrading with no >> improvement to throughput. >> >> On Sun, 17 Jul 2022, 10:59 Ron Pressler, wrote: >> >>> If your thread-per-request system is getting 10K req/s (on average), >>> each request takes 500ms (on average) to handle, and this can be sustained >>> (i.e. the system is stable), then it doesn?t matter how much CPU or RAM is >>> consumed, how much network bandwidth you?re using, or even how many >>> machines you have: the (average) number of threads you?re running *is* no >>> less than 5K (and, in practice, will usually be several times that). >>> >>> So it?s not that adding more threads is going to increase throughput (in >>> fact, it won?t; having 1M threads will do nothing in this case), it?s that >>> the number of threads is an upper bound on L (among all other upper bounds >>> on L). Conversely, reaching a certain throughput requires some minimum >>> number of threads. >>> >>> As to how many thread-per-request systems do or would hit the OS-thread >>> boundary before they hit others, that?s an empirical question, and I think >>> it is well-established that there are many such systems, but if you?re >>> sceptical and think that user-mode threads/asynchronous APIs have little >>> impact, you can just wait and see. >>> >>> ? Ron >>> >>> On 16 Jul 2022, at 20:30, Alex Otenko >>> wrote: >>> >>> That's the indisputable bit. The contentious part is that adding more >>> threads is going to increase throughput. 
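The Little's law arithmetic that this exchange keeps returning to, L = lambda * W, can be sanity-checked with a trivial sketch; the class and method names below are illustrative, not from any code under discussion:

```java
// Little's law: L = lambda * W, where lambda is throughput (req/s)
// and W is the average time each request spends in the system (s).
public class LittlesLaw {
    static double concurrentRequests(double throughputPerSec, double avgLatencySec) {
        return throughputPerSec * avgLatencySec;
    }

    public static void main(String[] args) {
        // The numbers used in the thread: 10K req/s at 500 ms average latency.
        System.out.println(concurrentRequests(10_000, 0.5)); // 5000.0
        // Doubling throughput at the same latency doubles the required
        // concurrency, and hence the minimum thread count in a
        // thread-per-request program.
        System.out.println(concurrentRequests(20_000, 0.5)); // 10000.0
    }
}
```

Note that the formula only relates the quantities in a stable system; it says nothing about which of them is the independent variable, which is exactly the point being argued above.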
>>> Supposing that 10k threads are there, and you actually need them, you should get concurrency level 10k. Let's see what that means in practice.
>>>
>>> If it is a 1-CPU machine, 10k requests in flight at any given time means they are waiting for 99.99% of the time. Or, out of 1 second they spend 100 microseconds on CPU, and wait for something for the rest of the time (or, out of a 100ms response time, 10 microseconds on CPU - barely enough to parse a REST request). This can't be the case for the majority of workflows.
>>>
>>> Of course, having 10k threads for less than 1 second each doesn't mean you are getting concurrency that is unattainable with fewer threads.
>>>
>>> The bottom line is that by adding threads you aren't necessarily increasing concurrency.
>>>
>>> On Fri, 15 Jul 2022, 10:19 Ron Pressler, wrote:
>>>
>>>> The number of threads doesn't "do" or not do anything. If requests arrive at 100K per second, and each takes 500ms to process, then the number of threads you're using *is equal to* at least 50K (assuming thread-per-request) in a stable system, that's all. That is the physical meaning: the formula tells you what the quantities *are* in a stable system.
>>>>
>>>> Because in a thread-per-request program every concurrent request takes up at least one thread, the formula, while it does not immediately tell you how many machines are used, or what the RAM, CPU, and network bandwidth utilisation is, does give you a lower bound on the total number of live threads. Conversely, the number of threads gives an upper bound on L.
>>>>
>>>> As to the rest about splitting into subtasks, that increases L and reduces W by the same factor, so when applying Little's law it's handy to treat W as the total latency, *as if* it was processed sequentially, if we're interested in L being the number of concurrent requests.
More about >>>> that here: https://inside.java/2020/08/07/loom-performance/ >>>> >>>> >>>> ? Ron >>>> >>>> On 15 Jul 2022, at 09:37, Alex Otenko >>>> wrote: >>>> >>>> You quickly jumped to a *therefore*. >>>> >>>> Newton's second law binds force, mass and acceleration. But you can't >>>> say that you can decrease mass by increasing acceleration, if the force >>>> remains the same. That is, the statement would be arithmetically correct, >>>> but it would have no physical meaning. >>>> >>>> Adding threads allows to do more work. But you can't do more work at >>>> will - the amount of work going through the system is a quantity >>>> independent of your design. >>>> >>>> Now, what you could do at will, is split the work into sub-tasks. >>>> Virtual threads allow to do this at very little cost. However, you still >>>> can't talk about an increase in concurrency due to Little's law, because - >>>> enter Amdahl - response time changes. >>>> >>>> Say, 100k requests get split into 10 sub tasks each, each runnable >>>> independently. Amdahl says your response time is going down 10-fold. So you >>>> have 100k requests times 1ms gives concurrency 100. Concurrency got >>>> reduced. Not surprising at all, because now each request spends 10x less >>>> time in the system. >>>> >>>> What about subtasks? Aren't we running more of them? Does this mean >>>> concurrency increased? >>>> >>>> Yes, 100k requests begets 1m sub tasks. We can't compare concurrency, >>>> because the definition of the unit of work changed: was W, became W/10. But >>>> let's see anyway. So we have 1m tasks, each finished in 1ms - concurrency >>>> is 1000. Same as before splitting the work and matching change of response >>>> time. I treat this like I would any units of measurement change. >>>> >>>> >>>> So whereas I see a lot of good from being able to spin up threads, lots >>>> and shortlived, I don't see how you can claim concurrency increases, or >>>> that Little's law somehow controls throughput. 
>>>> >>>> >>>> Alex >>>> >>>> On Thu, 14 Jul 2022, 11:01 Ron Pressler, >>>> wrote: >>>> >>>>> Little?s law tells us what the relationship between concurrency, >>>>> throughput and latency is if the system is stable. It tells us that if >>>>> latency doesn?t decrease, then concurrency rises with throughput (again, if >>>>> the system is stable). Therefore, to support high throughput you need a >>>>> high level of concurrency. Since the Java platform?s unit of concurrency is >>>>> the thread, to support high throughput you need a high number of threads. >>>>> There might be other things you also need more of, but you *at least* need >>>>> a high number of threads. >>>>> >>>>> The number of threads is an *upper bound* on concurrency, because the >>>>> platform cannot make concurrent progress on anything without a thread (with >>>>> the caveat in the next paragraph). There might be other upper bounds, too >>>>> (e.g. you need enough memory to concurrently store all the working data for >>>>> your concurrent operations), but the number of threads *is* an upper bound, >>>>> and the one virtual threads are there to remove. >>>>> >>>>> Of course, as JEP 425 explains, you could abandon threads altogether >>>>> and use some other construct as your unit of concurrency, but then you lose >>>>> platform support. >>>>> >>>>> In any event, virtual threads exist to support a high number of >>>>> threads, as Little?s law requires, therefore, if you use virtual threads, >>>>> you have a high number of them. >>>>> >>>>> ? Ron >>>>> >>>>> On 14 Jul 2022, at 08:12, Alex Otenko >>>>> wrote: >>>>> >>>>> Hi Ron, >>>>> >>>>> It looks you are unconvinced. Let me try with illustrative numbers. >>>>> >>>>> The users opening their laptops at 9am don't know how many threads you >>>>> have. So throughput remains 100k ops/sec in both setups below. Suppose, in >>>>> the first setup we have a system that is stable with 1000 threads. 
Little's >>>>> law tells us that the response time cannot exceed 10ms in this case. >>>>> Little's law does not prescribe response time, by the way; it is merely a >>>>> consequence of the statement that the system is stable: it couldn't have >>>>> been stable if its response time were higher. >>>>> >>>>> Now, let's create one thread per request. One claim is that this >>>>> increases concurrency (and I object to this point alone). Suppose this >>>>> means concurrency becomes 100k. Little's law says that the response time >>>>> must be 1 second. Sorry, but that's hardly an improvement! In fact, for any >>>>> concurrency greater than 1000 you must get response time higher than 10ms >>>>> we've got with 1000 threads. This is not what we want. Fortunately, this is >>>>> not what happens either. >>>>> >>>>> Really, thread count in the thread per request design has little to do >>>>> with concurrency level. Concurrency level is a derived quantity. It only >>>>> tells us how many requests are making progress at any given time in a >>>>> system that experiences request arrival rate R and which is able to process >>>>> them in time T. The only thing you can control through system design is >>>>> response time T. >>>>> >>>>> There are good reasons to design a system that way, but Little's law >>>>> is not one of them. >>>>> >>>>> On Wed, 13 Jul 2022, 14:29 Ron Pressler, >>>>> wrote: >>>>> >>>>>> The application of Little?s law is 100% correct. Little?s law tells >>>>>> us that the number of threads must *necessarily* rise if throughput is to >>>>>> be high. Whether or not that alone is *sufficient* might depend on the >>>>>> concurrency level of other resources as well. The number of threads is not >>>>>> the only quantity that limits the L in the formula, but L cannot be higher >>>>>> than the number of threads. Obviously, if the system?s level of concurrency >>>>>> is bounded at a very low level ? say, 10 ? 
then having more than 10 threads >>>>>> is unhelpful, but as we?re talking about a program that uses virtual >>>>>> threads, we know that is not the case. >>>>>> >>>>>> Also, Little?s law describes *stable* systems; i.e. it says that *if* >>>>>> the system is stable, then a certain relationship must hold. While it is >>>>>> true that the rate of arrival might rise without bound, if the number of >>>>>> threads is insufficient to meet it, then the system is no longer stable >>>>>> (normally that means that queues are growing without bound). >>>>>> >>>>>> ? Ron >>>>>> >>>>>> On 13 Jul 2022, at 14:00, Alex Otenko >>>>>> wrote: >>>>>> >>>>>> This is an incorrect application of Little's Law. The law only posits >>>>>> that there is a connection between quantities. It doesn't specify which >>>>>> variables depend on which. In particular, throughput is not a free >>>>>> variable. >>>>>> >>>>>> Throughput is something outside your control. 100k users open their >>>>>> laptops at 9am and login within 1 second - that's it, you have throughput >>>>>> of 100k ops/sec. >>>>>> >>>>>> Then based on response time the system is able to deliver, you can >>>>>> tell what concurrency makes sense here. Adding threads is not going to >>>>>> change anything - certainly not if threads are not the bottleneck resource. >>>>>> Threads become the bottleneck when you have hardware to run them, but not >>>>>> the threads. >>>>>> >>>>>> On Tue, 12 Jul 2022, 15:47 Ron Pressler, >>>>>> wrote: >>>>>> >>>>>>> >>>>>>> >>>>>>> On 11 Jul 2022, at 22:13, Rob Bygrave >>>>>>> wrote: >>>>>>> >>>>>>> *> An existing application that migrates to using virtual threads >>>>>>> doesn?t replace its platform threads with virtual threads* >>>>>>> >>>>>>> What I have been confident about to date based on the testing I've >>>>>>> done is that we can use Jetty with a Loom based thread pool and that has >>>>>>> worked very well. That is replacing current platform threads with virtual >>>>>>> threads. 
I'm suggesting this will frequently be sub 1000 virtual threads. >>>>>>> Ron, are you suggesting this isn't a valid use of virtual threads or am I >>>>>>> reading too much into what you've said here? >>>>>>> >>>>>>> >>>>>>> The throughput advantage to virtual threads comes from one aspect ? >>>>>>> their *number* ? as explained by Little?s law. A web server employing >>>>>>> virtual thread would not replace a pool of N platform threads with a pool >>>>>>> of N virtual threads, as that does not increase the number of threads >>>>>>> required to increase throughput. Rather, it replaces the pool of N virtual >>>>>>> threads with an unpooled ExecutorService that spawns at least one new >>>>>>> virtual thread for every HTTP serving task. Only that can increase the >>>>>>> number of threads sufficiently to improve throughput. >>>>>>> >>>>>>> >>>>>>> >>>>>>> > *unusual* for an application that has any virtual threads to have >>>>>>> fewer than, say, 10,000 >>>>>>> >>>>>>> In the case of http server use of virtual thread, I feel the use of >>>>>>> *unusual* is too strong. That is, when we are using virtual threads >>>>>>> for application code handling of http request/response (like Jetty + Loom), >>>>>>> I suspect this is frequently going to operate with less than 1000 >>>>>>> concurrent requests per server instance. >>>>>>> >>>>>>> >>>>>>> 1000 concurrent requests would likely translate to more than 10,000 >>>>>>> virtual threads due to fanout (JEPs 425 and 428 cover this). In fact, even >>>>>>> without fanout, every HTTP request might wish to spawn more than one >>>>>>> thread, for example to have one thread for reading and one for writing. The >>>>>>> number 10,000, however, is just illustrative. Clearly, an application with >>>>>>> virtual threads will have some large number of threads (significantly >>>>>>> larger than applications with just platform threads), because the ability >>>>>>> to have a large number of threads is what virtual threads are for. 
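The replacement Ron describes, from a fixed pool of platform threads to an unpooled executor that spawns one virtual thread per task, can be sketched as follows (requires JDK 21, or 19+ with preview features; the sleeping task body is a stand-in for request handling, and the names are illustrative):

```java
import java.time.Duration;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class PerTaskDemo {
    // Submits n blocking tasks to an unpooled virtual-thread-per-task
    // executor and returns how many completed.
    static int handleRequests(int n) {
        AtomicInteger completed = new AtomicInteger();
        // Before: Executors.newFixedThreadPool(200) would cap the number of
        // concurrently blocked requests at 200. The unpooled executor below
        // spawns one new virtual thread per submitted task instead, so the
        // thread count simply tracks the number of in-flight requests.
        try (ExecutorService exec = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < n; i++) {
                exec.submit(() -> {
                    try {
                        Thread.sleep(Duration.ofMillis(100)); // stand-in for blocking I/O
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                    completed.incrementAndGet();
                });
            }
        } // try-with-resources close() waits for all submitted tasks to finish
        return completed.get();
    }

    public static void main(String[] args) {
        System.out.println(handleRequests(10_000)); // 10000
    }
}
```

Running 10,000 such tasks is unremarkable with virtual threads, whereas a fixed pool of, say, 200 platform threads would serialise them in batches of 200.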
>>>>>>> >>>>>>> The important point is that tooling needs to adapt to a high number >>>>>>> of threads, which is why we?ve added a tool that?s designed to make sense >>>>>>> of many threads, where jstack might not be very useful. >>>>>>> >>>>>>> ? Ron >>>>>>> >>>>>>> >>>>>> >>>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From oleksandr.otenko at gmail.com Thu Jul 21 07:13:36 2022 From: oleksandr.otenko at gmail.com (Alex Otenko) Date: Thu, 21 Jul 2022 08:13:36 +0100 Subject: [External] : Re: jstack, profilers and other tools In-Reply-To: References: <5B55B456-C2CC-4856-80A8-8A3F9E40CF05@oracle.com> <0E494B4B-ADCF-4CBB-AFFF-F463CB05AD0E@oracle.com> <7EC05322-6BD9-4A17-9AB3-62115F3940D0@oracle.com> <271F8C58-F096-42B2-9432-E5643A7F133C@oracle.com> <0F187B4E-65FE-4BF0-86DE-661088C65DD8@oracle.com> <1FA73DAE-FD5D-4CC6-9638-C9985C20D286@oracle.com> Message-ID: Let's move on. I take these discussions as an opportunity to revisit understanding and learn new things. So I went to the beginnings, and now I have to disagree even with the claim that thread count is a cap on concurrency - certainly, not on concurrency as used in Little's law. Please, bear with me and see if it's correct. Suppose we have 1 thread that is able to dispose of one request in 10ms. The prediction is that concurrency won't exceed 1, right? Yes, N threads can't be processing more than N tasks. Yes, this is also the case, when we test in a particular way - that is, if we send requests sequentially, the next being sent only after receiving a response. Enter real world. All requests are independent. So if throughput is 66.667, response time in a single-threaded system above is 30ms. If we agree on this, then concurrency predicted by Little's law is 2, not capped at 1. More than that, it is unbounded. If throughput is 99, response time is 1s, and concurrency is 99. 
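The numbers in this single-server example fit the standard M/M/1 queueing formula W = 1/(mu - lambda) with service rate mu = 100 req/s (one request per 10ms); labelling the model M/M/1 is a reading of the example, not something stated in the thread:

```java
// Single-server (M/M/1) queue: service rate MU req/s, arrival rate lambda.
// Average time in system W = 1 / (MU - lambda); by Little's law the average
// number of requests in the system (waiting + in service) is L = lambda * W.
public class SingleServerQueue {
    static final double MU = 100.0; // one request per 10 ms

    static double responseTimeSec(double lambda) { return 1.0 / (MU - lambda); }
    static double concurrency(double lambda)     { return lambda * responseTimeSec(lambda); }

    public static void main(String[] args) {
        double lambda = 200.0 / 3; // ~66.667 req/s, as in the example
        System.out.println(Math.round(1000 * responseTimeSec(lambda)) + " ms, L="
                + Math.round(concurrency(lambda))); // 30 ms, L=2
        System.out.println(Math.round(1000 * responseTimeSec(99)) + " ms, L="
                + Math.round(concurrency(99)));     // 1000 ms, L=99
        // L grows without bound as lambda approaches MU, even though the
        // server has only one thread: Little's law counts queued requests too.
    }
}
```

This is the crux of the disagreement: L in Little's law counts every request in the system, including those queued, so it is not capped by the thread count, while the number of requests *in service* is.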
Knowing how response time grows, we can tell it really can be arbitrarily large for any throughput below 100 (the limit of the system). Alex On Wed, 20 Jul 2022, 19:24 Alex Otenko, wrote: > To me that statement implies a few things: > > - that Little's law talks of thread count > > - that if thread count is low, can't have throughput advantage > > > Well, I don't feel like discussing my imperfect grasp of English. > > On Tue, 19 Jul 2022, 23:52 Ron Pressler, wrote: > >> >> >> On 19 Jul 2022, at 18:38, Alex Otenko wrote: >> >> Agreed about the architectural advantages. >> >> The email that triggered my rant did contain the claim that using Virtual >> threads has the advantage of higher concurrency. >> >> > The throughput advantage to virtual threads comes from one aspect ? >> their *number* ? as explained by Little?s law. >> >> >> >> >> Yes, and that is correct. As I explained, a higher maximum number of >> threads does indeed mean it is possible to reach the higher concurrency >> needed for higher throughput, so virtual threads, by virtue of their >> number, do allow for higher throughput. That statement is completely >> accurate, and yet it means something very different from (the incorrect) >> ?increasing the number of threads increases throughput?, which is how you >> misinterpreted the statement. >> >> This is similar to saying that AC allows people to live in areas with >> higher temperature, and that is a very different statement from saying that >> AC increases the temperature (althoughI guess it happens to also do that). >> >> ? Ron >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From andrey.lomakin at jetbrains.com Thu Jul 21 09:19:21 2022 From: andrey.lomakin at jetbrains.com (Andrey Lomakin) Date: Thu, 21 Jul 2022 11:19:21 +0200 Subject: Usage of direct IO with virtual threads In-Reply-To: <8e6a5105-5a44-9bbb-4eaf-43d9961fb3fe@oracle.com> References: <1cbf08ae-2eb5-6e76-bbf6-957c16715cb1@oracle.com> <8e6a5105-5a44-9bbb-4eaf-43d9961fb3fe@oracle.com> Message-ID: Hi Alan. Thank you for your reply. Recently MS added io_ring support for Windows as alternative to the io_uring of Linux. You will be interested to add support for both API https://windows-internals.com/i-o-rings-when-one-i-o-operation-is-not-enough/. Just FYI. On Mon, Jul 11, 2022 at 8:28 PM Alan Bateman wrote: > On 09/07/2022 17:28, Andrey Lomakin wrote: > > Thank you for your reply. > > Is there any issue which I can follow to track state of such refactoring > ? > > > Nothing to point to in JBS right now but these are changes that are > usually reviewed on nio-dev or core-libs-dev. > > -Alan > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ron.pressler at oracle.com Thu Jul 21 11:30:51 2022 From: ron.pressler at oracle.com (Ron Pressler) Date: Thu, 21 Jul 2022 11:30:51 +0000 Subject: [External] : Re: jstack, profilers and other tools In-Reply-To: References: <5B55B456-C2CC-4856-80A8-8A3F9E40CF05@oracle.com> <0E494B4B-ADCF-4CBB-AFFF-F463CB05AD0E@oracle.com> <7EC05322-6BD9-4A17-9AB3-62115F3940D0@oracle.com> <271F8C58-F096-42B2-9432-E5643A7F133C@oracle.com> <0F187B4E-65FE-4BF0-86DE-661088C65DD8@oracle.com> <1FA73DAE-FD5D-4CC6-9638-C9985C20D286@oracle.com> Message-ID: <5922F1B9-C3E1-4D0C-BA61-64EFBB8FD88E@oracle.com> Little?s law has no notion of threads, only of ?requests.? But if you?re talking about a *thread-per-request* program, as I made explicitly clear, then the number of threads is equal to or greater than the number of requests. 
And yes, if the *maximum* thread count is low, a thread-per-request program will have a low bound on the number of concurrent requests, and hence, by Little?s law, on throughput. ? Ron On 20 Jul 2022, at 19:24, Alex Otenko > wrote: To me that statement implies a few things: - that Little's law talks of thread count - that if thread count is low, can't have throughput advantage Well, I don't feel like discussing my imperfect grasp of English. On Tue, 19 Jul 2022, 23:52 Ron Pressler, > wrote: On 19 Jul 2022, at 18:38, Alex Otenko > wrote: Agreed about the architectural advantages. The email that triggered my rant did contain the claim that using Virtual threads has the advantage of higher concurrency. > The throughput advantage to virtual threads comes from one aspect ? their *number* ? as explained by Little?s law. Yes, and that is correct. As I explained, a higher maximum number of threads does indeed mean it is possible to reach the higher concurrency needed for higher throughput, so virtual threads, by virtue of their number, do allow for higher throughput. That statement is completely accurate, and yet it means something very different from (the incorrect) ?increasing the number of threads increases throughput?, which is how you misinterpreted the statement. This is similar to saying that AC allows people to live in areas with higher temperature, and that is a very different statement from saying that AC increases the temperature (althoughI guess it happens to also do that). ? Ron -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ron.pressler at oracle.com Thu Jul 21 11:42:24 2022 From: ron.pressler at oracle.com (Ron Pressler) Date: Thu, 21 Jul 2022 11:42:24 +0000 Subject: [External] : Re: jstack, profilers and other tools In-Reply-To: References: <5B55B456-C2CC-4856-80A8-8A3F9E40CF05@oracle.com> <0E494B4B-ADCF-4CBB-AFFF-F463CB05AD0E@oracle.com> <7EC05322-6BD9-4A17-9AB3-62115F3940D0@oracle.com> <271F8C58-F096-42B2-9432-E5643A7F133C@oracle.com> <0F187B4E-65FE-4BF0-86DE-661088C65DD8@oracle.com> <1FA73DAE-FD5D-4CC6-9638-C9985C20D286@oracle.com> <838D57E6-8671-4F6C-8792-E95E6D2DAD87@oracle.com> Message-ID: On 20 Jul 2022, at 19:49, Alex Otenko > wrote: Waiting for requests doesn't count towards response time. Waiting for responses from downstream does, but it is doable with rather small thread counts - perhaps as witness to just how large a proportion of response time that typically is. I am not talking about what?s *doable* but about what virtual-thread programs *do* (or can do), i.e. use a thread to wait for a response. As I made very clear, this can also be done using async APIs, which were invented for the very same reason (get around the maximum thread count limitation), but then you?re missing out on ?harmony? with the Java platform, which knows about threads and not async constructs. I agree virtual threads allow to build things differently. But it's not just the awkwardness of async API that changes. Blocking API eliminates inversion of control and changes allocation pattern. I disagree, but no one is taking async APIs away. If you enjoy programming in that style, you?re more than welcome to continue doing so. For example, first you allocate 16KB to read efficiently, then you are blocked - perhaps, for seconds while the user is having a think. So 10k connections pin 160MB waiting for a request. With async API nothing is pinned, because you don't allocate until you know who's read-ready. This may no longer be a problem with modern JVMs, but needs calling out. 
That is incorrect. A synchronous API can allocate and return a buffer only once the number of available bytes is known. There is absolutely no difference between asynchronous and synchronous APIs in that respect. In fact, there can be no algorithmic difference between them, and they can both express the same algorithm. The difference between them is only how an algorithm is *expressed* in code, and in which constructs have built-in observability support in the runtime. The lack of inversion of control can be seen as the inability to control who reads when. So to speak, would you rather read 100 bytes from each of 10k requests, or 10KB from each of 100 requests? This has an impact on memory footprint and on who can make progress. It makes sense to read more requests when you run out of things to do, but it doesn't make sense if you have saturated the bottleneck resource. There is no difference between synchronous and asynchronous code here, either; they just express what it is that you want to do using different code. If you want to see that in the most direct way, consider that the decision of which callback to call and the decision of which thread waiting on a queue to unblock and return a message to are algorithmically the same. There is nothing that you're able to do in one of the styles and unable to do in the other. In terms of choice, there's the subjective question of which style you prefer aesthetically, and the objective question of which style is more directly supported by the runtime and its tools. -- Ron Alex On Tue, 19 Jul 2022, 12:02 Ron Pressler, > wrote: First, async APIs and lightweight user-mode threads were invented because we empirically know there are many systems where the number of threads is the first bottleneck on maximum throughput that they encounter. The purpose of async APIs/lightweight threads is to allow those systems to hit the other, higher, limits on capacity.
Of course, there are systems that hit other bottlenecks first, but the number-of-threads limitation is known to be very common. If a theory tells you it shouldn't be, then it's the theory that should be revised (hint: the portion of total latency spent waiting for I/O is commonly *very* high; if you want to use Little's law for that calculation, note that adding concurrency for fanout increases L and reduces W by the same factor, so it's handy to add together the total time spent waiting for, say, 5-50 outgoing microservice calls as if they were done sequentially, and compare that against the total time spent composing the results; this total "wait latency" is frequently in the hundreds of milliseconds - far higher than the actual request latency - and easily two orders of magnitude higher than CPU time). Second, the number of *threads* (as opposed to the number of concurrent operations) has, at most, a negligible impact on contention. If we're talking about low-level memory contention, then what matters is the number of processing cores, not the number of threads (beyond the number of cores), and if we're talking about other resources, then that contention is part of the other limits on concurrency (and so throughput) in the system, and the way it is reached - be it with many threads or one - is irrelevant. It is true that various scheduling algorithms - whether used on threads or on async constructs, the relevant scheduling problems and algorithms are the same - could reduce some overhead, but we're talking about effects that are orders of magnitude lower than what can be achieved by reducing artificial limits on concurrency, though they could matter for getting the very last drop of performance; I go through the calculation of the effect of scheduling overhead here: https://inside.java/2020/08/07/loom-performance/.
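The fanout accounting in the hint above can be checked with a small sketch. The numbers are illustrative assumptions, not taken from the thread: a request fans out to 10 concurrent downstream calls, which multiplies L and divides W by the same factor, so summing the waits as if they were sequential gives the same thread-count bound.

```java
// Illustrative sketch of Ron's fanout accounting for Little's law (L = lambda * W).
// All numbers are assumed for the example.
class FanoutLittlesLaw {
    public static void main(String[] args) {
        double lambda = 10_000;   // requests per second (assumed)
        int fanout = 10;          // concurrent downstream calls per request (assumed)
        double callWait = 0.050;  // seconds each downstream call waits (assumed)

        // Concurrent fanout: W shrinks by the fanout factor, L grows by it.
        double wConcurrent = callWait;                       // 50 ms wall-clock wait
        double lConcurrent = lambda * wConcurrent * fanout;  // concurrent waiting tasks

        // Equivalent sequential accounting: treat W as the summed "total wait".
        double wSequential = callWait * fanout;              // 500 ms total wait
        double lSequential = lambda * wSequential;

        System.out.println((long) lConcurrent);
        System.out.println((long) lSequential);
    }
}
```

Both views yield the same lower bound on the number of threads that are waiting at any instant.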
In short, the impact of scheduling can only be high if the total amount of time spent on scheduling is significant when compared to the time spent waiting for I/O. -- Ron On 19 Jul 2022, at 09:22, Alex Otenko > wrote: Thanks, that's what I was trying to get across, too. Also, 10k threads per request doesn't mean that the concurrency is in thousands. In the thought experiment it is. In practice - ... well, if the systems are fine with dozens or even hundreds of threads, there should be no problem even doubling thread count, if it can double, or at least improve, throughput. In my experience this is not the case. There even are famous systems with self-tuning thread pool sizes, and I worked on the self-tuning algorithm. I have seen various apps and workloads that use that system and haven't seen any that would reach a maximum thread count of a few hundred even on a fairly large machine. So whereas I never found anything wrong with the claim that thread count is one of the caps on throughput, I find the claim that allowing thread per request is going to improve concurrency problematic exactly because there are other caps. There surely are such workloads that are bottlenecked on thread count that can't grow into thousands, but in my practice I haven't seen a single one of this kind. If we had thousands of threads per CPU, they just need to be waiting so much that business logic must be very trivial. For example, the thought experiment with 10k threads and 0.5s response time. If that is executed on a 1-CPU machine, each request must be spending 50 microseconds on CPU, and for the rest of the time waiting for something. If it's waiting for a lock or a pool of resources, you may be better off having fewer threads (coarsening contention). So it better be some network connection, or something of the kind. So for 499.95ms it is waiting on that, and does request parsing, response construction, etc. in 50 microseconds. This sort of profile is not a very common pattern.
If we consider tens of CPUs for 10k threads, it starts to look far less impressive in terms of the number of threads. That's all about concurrency and threads as a bottleneck resource. There are other important uses of threads, but those are not about increasing concurrency. Ok, I reckon the topic got bashed to smithereens. Alex On Mon, 18 Jul 2022, 22:57 Ron Pressler, > wrote: "Concurrency rises with throughput", which is just a mathematical fact, is not the same as the claim - that no one is making - that one can *raise* throughput by adding threads. However, it is the same as the claim that the *maximum* throughput might rise if the *maximum* number of threads is increased, because that's just how dependent variables can work in mathematics, as I'll try explaining. There is no "more threads to get *better throughput*", and there is no question about "applying" Little's law. Little's law is simply the maths that tells us how many requests are being concurrently served in some system. There is no getting around it. In a system with 10K requests/s, each taking 500ms on average, there *are* 5K concurrent requests. If the program is written in the thread-per-request style, then it *has* at least 5K threads. Now, if the rate of requests doubles to 20K req/s and the system doesn't collapse, then there must be at least 10K threads serving them. Note that the increase in threads doesn't raise the throughput, but it must accompany it. However, because concurrency rises with throughput, the *maximum* number of threads does pose an upper bound on throughput. It is very important to understand the difference between "adding processing units could decrease latency in a data-parallel program" and "concurrency rises with throughput in a concurrent program." In the former, the units are an independent variable, and in the latter they're not - i.e. when the throughput is higher there are more threads, but adding threads doesn't increase the throughput.
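Ron's 10K req/s example above is a plain L = λW computation; a minimal sketch (the doubling case is the 20K req/s scenario he describes):

```java
// Little's law: L = lambda * W, with the thread's example numbers
// (10K requests/s, 500 ms average time in system).
class LittlesLaw {
    public static void main(String[] args) {
        long lambda = 10_000;  // throughput, requests per second
        double w = 0.5;        // average time each request spends in the system, seconds

        long l = (long) (lambda * w);
        System.out.println(l); // concurrent requests, hence >= that many threads

        // Doubling throughput at the same latency doubles the required concurrency.
        System.out.println((long) (2 * lambda * w));
    }
}
```

The second line is the point of the exchange: the extra threads do not *cause* the higher throughput, but they must exist for the system to remain stable at it.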
And yet, because this forms an *upper bound* on throughput, the ability to have more threads is a prerequisite to raising the maximum attainable throughput (with the thread-per-request style). So raising the number of threads cannot possibly increase throughput, and yet raising the maximum number of threads could increase maximum throughput (until it's bounded by something else). That's just how dependent variables work when talking about upper/lower bounds. -- Ron On 18 Jul 2022, at 19:01, Alex Otenko > wrote: I think I have made it clear that I am not sceptical about the ability to spawn threads in large numbers, and that all I am sceptical about is the use of Little's law in the way you did. You made it look like one needs thousands of threads to get better throughput, whereas typical numbers are much more modest than that. In practice you can't heedlessly add more threads, as at some point you get response time degrading with no improvement to throughput. On Sun, 17 Jul 2022, 10:59 Ron Pressler, > wrote: If your thread-per-request system is getting 10K req/s (on average), each request takes 500ms (on average) to handle, and this can be sustained (i.e. the system is stable), then it doesn't matter how much CPU or RAM is consumed, how much network bandwidth you're using, or even how many machines you have: the (average) number of threads you're running *is* no less than 5K (and, in practice, will usually be several times that). So it's not that adding more threads is going to increase throughput (in fact, it won't; having 1M threads will do nothing in this case), it's that the number of threads is an upper bound on L (among all other upper bounds on L). Conversely, reaching a certain throughput requires some minimum number of threads.
As to how many thread-per-request systems do or would hit the OS-thread boundary before they hit others, that's an empirical question, and I think it is well-established that there are many such systems, but if you're sceptical and think that user-mode threads/asynchronous APIs have little impact, you can just wait and see. -- Ron On 16 Jul 2022, at 20:30, Alex Otenko > wrote: That's the indisputable bit. The contentious part is that adding more threads is going to increase throughput. Supposing that 10k threads are there, and you actually need them, you should get concurrency level 10k. Let's see what that means in practice. If it is a 1-CPU machine, 10k requests in flight somewhere at any given time means they are waiting for 99.99% of the time. Or, out of 1 second they spend 100 microseconds on CPU, and are waiting for something for the rest of the time (or, out of 100ms response time, 10 microseconds on CPU - barely enough to parse a REST request). This can't be the case for the majority of workflows. Of course, having 10k threads for less than 1 second each doesn't mean you are getting concurrency that is unattainable with fewer threads. The bottom line is that by adding threads you aren't necessarily increasing concurrency. On Fri, 15 Jul 2022, 10:19 Ron Pressler, > wrote: The number of threads doesn't "do" or not do anything for you. If requests arrive at 100K per second, and each takes 500ms to process, then the number of threads you're using *is equal to* at least 50K (assuming thread-per-request) in a stable system, that's all. That is the physical meaning: the formula tells you what the quantities *are* in a stable system. Because in a thread-per-request program every concurrent request takes up at least one thread, while the formula does not immediately tell you how many machines are used, or what the RAM, CPU, and network bandwidth utilisation is, it does give you a lower bound on the total number of live threads.
Conversely, the number of threads gives an upper bound on L. As to the rest about splitting into subtasks, that increases L and reduces W by the same factor, so when applying Little's law it's handy to treat W as the total latency, *as if* it was processed sequentially, if we're interested in L being the number of concurrent requests. More about that here: https://inside.java/2020/08/07/loom-performance/ -- Ron On 15 Jul 2022, at 09:37, Alex Otenko > wrote: You quickly jumped to a *therefore*. Newton's second law binds force, mass and acceleration. But you can't say that you can decrease mass by increasing acceleration, if the force remains the same. That is, the statement would be arithmetically correct, but it would have no physical meaning. Adding threads allows doing more work. But you can't do more work at will - the amount of work going through the system is a quantity independent of your design. Now, what you could do at will is split the work into sub-tasks. Virtual threads allow doing this at very little cost. However, you still can't talk about an increase in concurrency due to Little's law, because - enter Amdahl - response time changes. Say, 100k requests get split into 10 sub-tasks each, each runnable independently. Amdahl says your response time is going down 10-fold. So 100k requests times 1ms gives concurrency 100. Concurrency got reduced. Not surprising at all, because now each request spends 10x less time in the system. What about subtasks? Aren't we running more of them? Does this mean concurrency increased? Yes, 100k requests beget 1m sub-tasks. We can't compare concurrency, because the definition of the unit of work changed: was W, became W/10. But let's see anyway. So we have 1m tasks, each finished in 1ms - concurrency is 1000. Same as before splitting the work and matching the change of response time. I treat this like I would any units-of-measurement change.
So whereas I see a lot of good from being able to spin up threads, lots of them and short-lived, I don't see how you can claim concurrency increases, or that Little's law somehow controls throughput. Alex On Thu, 14 Jul 2022, 11:01 Ron Pressler, > wrote: Little's law tells us what the relationship between concurrency, throughput and latency is if the system is stable. It tells us that if latency doesn't decrease, then concurrency rises with throughput (again, if the system is stable). Therefore, to support high throughput you need a high level of concurrency. Since the Java platform's unit of concurrency is the thread, to support high throughput you need a high number of threads. There might be other things you also need more of, but you *at least* need a high number of threads. The number of threads is an *upper bound* on concurrency, because the platform cannot make concurrent progress on anything without a thread (with the caveat in the next paragraph). There might be other upper bounds, too (e.g. you need enough memory to concurrently store all the working data for your concurrent operations), but the number of threads *is* an upper bound, and the one virtual threads are there to remove. Of course, as JEP 425 explains, you could abandon threads altogether and use some other construct as your unit of concurrency, but then you lose platform support. In any event, virtual threads exist to support a high number of threads, as Little's law requires; therefore, if you use virtual threads, you have a high number of them. -- Ron On 14 Jul 2022, at 08:12, Alex Otenko > wrote: Hi Ron, It looks like you are unconvinced. Let me try with illustrative numbers. The users opening their laptops at 9am don't know how many threads you have. So throughput remains 100k ops/sec in both setups below. Suppose, in the first setup we have a system that is stable with 1000 threads. Little's law tells us that the response time cannot exceed 10ms in this case.
Little's law does not prescribe response time, by the way; it is merely a consequence of the statement that the system is stable: it couldn't have been stable if its response time were higher. Now, let's create one thread per request. One claim is that this increases concurrency (and I object to this point alone). Suppose this means concurrency becomes 100k. Little's law says that the response time must be 1 second. Sorry, but that's hardly an improvement! In fact, for any concurrency greater than 1000 you must get a response time higher than the 10ms we've got with 1000 threads. This is not what we want. Fortunately, this is not what happens either. Really, thread count in the thread-per-request design has little to do with concurrency level. Concurrency level is a derived quantity. It only tells us how many requests are making progress at any given time in a system that experiences request arrival rate R and which is able to process them in time T. The only thing you can control through system design is response time T. There are good reasons to design a system that way, but Little's law is not one of them. On Wed, 13 Jul 2022, 14:29 Ron Pressler, > wrote: The application of Little's law is 100% correct. Little's law tells us that the number of threads must *necessarily* rise if throughput is to be high. Whether or not that alone is *sufficient* might depend on the concurrency level of other resources as well. The number of threads is not the only quantity that limits the L in the formula, but L cannot be higher than the number of threads. Obviously, if the system's level of concurrency is bounded at a very low level - say, 10 - then having more than 10 threads is unhelpful, but as we're talking about a program that uses virtual threads, we know that is not the case. Also, Little's law describes *stable* systems; i.e. it says that *if* the system is stable, then a certain relationship must hold.
While it is true that the rate of arrival might rise without bound, if the number of threads is insufficient to meet it, then the system is no longer stable (normally that means that queues are growing without bound). -- Ron On 13 Jul 2022, at 14:00, Alex Otenko > wrote: This is an incorrect application of Little's Law. The law only posits that there is a connection between quantities. It doesn't specify which variables depend on which. In particular, throughput is not a free variable. Throughput is something outside your control. 100k users open their laptops at 9am and log in within 1 second - that's it, you have a throughput of 100k ops/sec. Then, based on the response time the system is able to deliver, you can tell what concurrency makes sense here. Adding threads is not going to change anything - certainly not if threads are not the bottleneck resource. Threads become the bottleneck when you have the hardware to run them, but not the threads. On Tue, 12 Jul 2022, 15:47 Ron Pressler, > wrote: On 11 Jul 2022, at 22:13, Rob Bygrave > wrote: > An existing application that migrates to using virtual threads doesn't replace its platform threads with virtual threads What I have been confident about to date, based on the testing I've done, is that we can use Jetty with a Loom-based thread pool and that has worked very well. That is replacing current platform threads with virtual threads. I'm suggesting this will frequently be sub-1000 virtual threads. Ron, are you suggesting this isn't a valid use of virtual threads, or am I reading too much into what you've said here? The throughput advantage to virtual threads comes from one aspect - their *number* - as explained by Little's law. A web server employing virtual threads would not replace a pool of N platform threads with a pool of N virtual threads, as that does not increase the number of threads required to increase throughput.
Rather, it replaces the pool of N platform threads with an unpooled ExecutorService that spawns at least one new virtual thread for every HTTP serving task. Only that can increase the number of threads sufficiently to improve throughput. > unusual for an application that has any virtual threads to have fewer than, say, 10,000 In the case of http server use of virtual threads, I feel the use of unusual is too strong. That is, when we are using virtual threads for application code handling of http request/response (like Jetty + Loom), I suspect this is frequently going to operate with fewer than 1000 concurrent requests per server instance. 1000 concurrent requests would likely translate to more than 10,000 virtual threads due to fanout (JEPs 425 and 428 cover this). In fact, even without fanout, every HTTP request might wish to spawn more than one thread, for example to have one thread for reading and one for writing. The number 10,000, however, is just illustrative. Clearly, an application with virtual threads will have some large number of threads (significantly larger than applications with just platform threads), because the ability to have a large number of threads is what virtual threads are for. The important point is that tooling needs to adapt to a high number of threads, which is why we've added a tool that's designed to make sense of many threads, where jstack might not be very useful. -- Ron -------------- next part -------------- An HTML attachment was scrubbed...
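The unpooled, thread-per-task structure described above can be sketched with the standard API (requires JDK 21, or 19/20 with preview features enabled; the task count and sleep duration are illustrative assumptions, not from the thread):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: an unpooled executor that starts one new virtual thread per task,
// rather than queueing tasks behind a fixed pool of platform threads.
class ThreadPerTask {
    public static void main(String[] args) {
        AtomicInteger completed = new AtomicInteger();
        // try-with-resources: close() waits for all submitted tasks to finish
        try (ExecutorService exec = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 10_000; i++) {
                exec.submit(() -> {
                    try {
                        Thread.sleep(10); // stand-in for blocking I/O
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                    completed.incrementAndGet();
                });
            }
        }
        System.out.println(completed.get());
    }
}
```

Ten thousand concurrently blocked tasks is exactly the regime where a fixed pool of a few hundred platform threads would cap L, and hence throughput.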
URL: From ron.pressler at oracle.com Thu Jul 21 12:04:45 2022 From: ron.pressler at oracle.com (Ron Pressler) Date: Thu, 21 Jul 2022 12:04:45 +0000 Subject: [External] : Re: jstack, profilers and other tools In-Reply-To: References: <5B55B456-C2CC-4856-80A8-8A3F9E40CF05@oracle.com> <0E494B4B-ADCF-4CBB-AFFF-F463CB05AD0E@oracle.com> <7EC05322-6BD9-4A17-9AB3-62115F3940D0@oracle.com> <271F8C58-F096-42B2-9432-E5643A7F133C@oracle.com> <0F187B4E-65FE-4BF0-86DE-661088C65DD8@oracle.com> <1FA73DAE-FD5D-4CC6-9638-C9985C20D286@oracle.com> <838D57E6-8671-4F6C-8792-E95E6D2DAD87@oracle.com> Message-ID: P.S. Here's something that might be of interest to those interested in the theory re inversion of control. The reason we know that synchronous and asynchronous code are "algorithmically equivalent", i.e. able to express (using different syntax) the same algorithms, is that it's a special case of the continuation/monad equivalence proved in 1994 (https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.43.8213). However, to make things more concrete (and hopefully, more widely understandable), consider that what delimited continuations do is turn a subroutine call into a subroutine return. In particular, in the JDK implementation, a call to Continuation.yield() becomes a return from the call to Continuation.run(), and the call to Continuation.run() turns into a return from the call to Continuation.yield(). So we can turn any decision as to which (callback) subroutine to call into a decision on which subroutine to return from. The asynchronous programming style is a special case of what's known as continuation-passing style, or CPS, where we register a callback with an operation, and the operation then decides when to call the callback ("inversion of control"). With delimited continuations, and, with minor changes, threads, the registration of the callback becomes a call to yield, and the invocation of the callback becomes the *return* from yield.
Making this even more concrete, in the async style we have a set of registered callbacks which we then call at will, whereas in the synchronous style we have a set of blocked threads which we then unblock at will. Calls into a callback become returns from blocking calls. There is one difference which I papered over, which is that threads aren't delimited continuations, but are essentially delimited continuations with an associated scheduler, so where in the async or delimited-continuation cases the "framework" has control over scheduling, with threads, the thread scheduler has control over scheduling. In practice, that is a second-order concern for the intended use case of threads, but we can close that gap with pluggable thread schedulers (which are being worked on but were not delivered in JEP 425). -- Ron On 21 Jul 2022, at 12:42, Ron Pressler > wrote: On 20 Jul 2022, at 19:49, Alex Otenko > wrote: Waiting for requests doesn't count towards response time. Waiting for responses from downstream does, but it is doable with rather small thread counts - perhaps as witness to just how large a proportion of response time that typically is. I am not talking about what's *doable* but about what virtual-thread programs *do* (or can do), i.e. use a thread to wait for a response. As I made very clear, this can also be done using async APIs, which were invented for the very same reason (to get around the maximum thread count limitation), but then you're missing out on "harmony" with the Java platform, which knows about threads and not async constructs. I agree virtual threads allow building things differently. But it's not just the awkwardness of async API that changes. Blocking API eliminates inversion of control and changes allocation patterns. I disagree, but no one is taking async APIs away. If you enjoy programming in that style, you're more than welcome to continue doing so.
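The "callbacks we call at will" versus "blocked threads we unblock at will" correspondence can be shown in a few lines. This is an illustrative sketch only (the names are invented); CompletableFuture stands in for both the async operation and the unblocking signal:

```java
import java.util.concurrent.CompletableFuture;

// The same "decide who proceeds" step, expressed in both styles.
class TwoStyles {
    public static void main(String[] args) throws Exception {
        StringBuilder out = new StringBuilder();

        // Async style: register a callback; completing the future "calls" it.
        CompletableFuture<String> async = new CompletableFuture<>();
        async.thenAccept(msg -> out.append("callback:").append(msg));
        async.complete("hi"); // the framework decides when to invoke the callback

        // Sync style: a thread blocks on get(); completing the future unblocks it,
        // i.e. the invocation of the callback becomes a *return* from the blocking call.
        CompletableFuture<String> sync = new CompletableFuture<>();
        Thread t = new Thread(() -> {
            try {
                out.append(" blocked:").append(sync.get());
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        });
        t.start();
        sync.complete("hi"); // "unblock at will"
        t.join();

        System.out.println(out);
    }
}
```

Both branches deliver the same message to the same code; only the mechanism of resumption differs.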
For example, first you allocate 16KB to read efficiently, then you are blocked - perhaps for seconds, while the user is having a think. So 10k connections pin 160MB waiting for a request. With an async API nothing is pinned, because you don't allocate until you know who's read-ready. This may no longer be a problem with modern JVMs, but needs calling out. That is incorrect. A synchronous API can allocate and return a buffer only once the number of available bytes is known. There is absolutely no difference between asynchronous and synchronous APIs in that respect. In fact, there can be no algorithmic difference between them, and they can both express the same algorithm. The difference between them is only how an algorithm is *expressed* in code, and in which constructs have built-in observability support in the runtime. The lack of inversion of control can be seen as the inability to control who reads when. So to speak, would you rather read 100 bytes from each of 10k requests, or 10KB from each of 100 requests? This has an impact on memory footprint and on who can make progress. It makes sense to read more requests when you run out of things to do, but it doesn't make sense if you have saturated the bottleneck resource. There is no difference between synchronous and asynchronous code here, either; they just express what it is that you want to do using different code. If you want to see that in the most direct way, consider that the decision of which callback to call and the decision of which thread waiting on a queue to unblock and return a message to are algorithmically the same. There is nothing that you're able to do in one of the styles and unable to do in the other. In terms of choice, there's the subjective question of which style you prefer aesthetically, and the objective question of which style is more directly supported by the runtime and its tools. -- Ron -------------- next part -------------- An HTML attachment was scrubbed...
URL: From dhinojosa at evolutionnext.com Fri Jul 22 03:40:40 2022 From: dhinojosa at evolutionnext.com (Daniel Hinojosa) Date: Thu, 21 Jul 2022 21:40:40 -0600 Subject: Motivation to put Continuation, ContinuationScope, and Scope in jdk.internal.vm package Message-ID: Hello, I have been watching Loom and have been excited about it and I can't wait for the first official preview this fall. I had one example that I used in demonstrating continuations and scope but it seems now that it was placed in an internal package and likely inaccessible by the module system. Is there any use case where a developer may want to use the Continuation, ContinuationScope and Scope objects for their purposes and would you consider moving it back out into public use? Danno -------------- next part -------------- An HTML attachment was scrubbed... URL: From fazil.mes53 at gmail.com Fri Jul 22 03:45:49 2022 From: fazil.mes53 at gmail.com (fazil mohamed) Date: Fri, 22 Jul 2022 09:15:49 +0530 Subject: Motivation to put Continuation, ContinuationScope, and Scope in jdk.internal.vm package In-Reply-To: References: Message-ID: Interested to know. On Fri 22 Jul, 2022, 9:10 AM Daniel Hinojosa, wrote: > Hello, > > I have been watching Loom and have been excited about it and I can't wait > for the first official preview this fall. I had one example that I used in > demonstrating continuations and scope but it seems now that it was placed > in an internal package and likely inaccessible by the module system. Is > there any use case where a developer may want to use the Continuation, > ContinuationScope and Scope objects for their purposes and would you > consider moving it back out into public use? > > Danno > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From forax at univ-mlv.fr Fri Jul 22 12:00:37 2022 From: forax at univ-mlv.fr (Remi Forax) Date: Fri, 22 Jul 2022 14:00:37 +0200 (CEST) Subject: Motivation to put Continuation, ContinuationScope, and Scope in jdk.internal.vm package In-Reply-To: References: Message-ID: <1249135288.14317889.1658491237732.JavaMail.zimbra@u-pem.fr> This question has been answered several times on this list by Ron: the class Continuation is unsafe; it's kind of useful to understand how the internals work, but this class cannot be published publicly. Rémi > From: "Daniel Hinojosa" > To: loom-dev at openjdk.org > Sent: Friday, July 22, 2022 5:40:40 AM > Subject: Motivation to put Continuation, ContinuationScope, and Scope in > jdk.internal.vm package > Hello, > I have been watching Loom and have been excited about it and I can't wait for > the first official preview this fall. I had one example that I used in > demonstrating continuations and scope but it seems now that it was placed in an > internal package and likely inaccessible by the module system. Is there any use > case where a developer may want to use the Continuation, ContinuationScope and > Scope objects for their purposes and would you consider moving it back out into > public use? > Danno -------------- next part -------------- An HTML attachment was scrubbed... URL: From ron.pressler at oracle.com Fri Jul 22 12:48:57 2022 From: ron.pressler at oracle.com (Ron Pressler) Date: Fri, 22 Jul 2022 12:48:57 +0000 Subject: Motivation to put Continuation, ContinuationScope, and Scope in jdk.internal.vm package In-Reply-To: <1249135288.14317889.1658491237732.JavaMail.zimbra@u-pem.fr> References: <1249135288.14317889.1658491237732.JavaMail.zimbra@u-pem.fr> Message-ID: Correct. However, we might introduce other (safe) constructs internally employing Continuation under the hood. The main issue is that the Continuation class can be used in a way that would cause the current thread (i.e. Thread.currentThread()) to change mid-method.
This would not only break a lot of Java code in very surprising ways, but also some assumptions made by the JIT compilers. Therefore, all safe constructs based on continuations must be confined to a single thread (or implement a thread, as done by virtual threads). -- Ron On 22 Jul 2022, at 13:00, Remi Forax > wrote: This question has been answered several times on this list by Ron: the class Continuation is unsafe; it's kind of useful to understand how the internals work, but this class cannot be published publicly. Rémi ________________________________ From: "Daniel Hinojosa" > To: loom-dev at openjdk.org Sent: Friday, July 22, 2022 5:40:40 AM Subject: Motivation to put Continuation, ContinuationScope, and Scope in jdk.internal.vm package Hello, I have been watching Loom and have been excited about it and I can't wait for the first official preview this fall. I had one example that I used in demonstrating continuations and scope but it seems now that it was placed in an internal package and likely inaccessible by the module system. Is there any use case where a developer may want to use the Continuation, ContinuationScope and Scope objects for their purposes and would you consider moving it back out into public use? Danno -------------- next part -------------- An HTML attachment was scrubbed... URL: From pedro.lamarao at prodist.com.br Fri Jul 22 14:33:25 2022 From: pedro.lamarao at prodist.com.br (=?UTF-8?Q?Pedro_Lamar=C3=A3o?=) Date: Fri, 22 Jul 2022 11:33:25 -0300 Subject: Motivation to put Continuation, ContinuationScope, and Scope in jdk.internal.vm package In-Reply-To: References: <1249135288.14317889.1658491237732.JavaMail.zimbra@u-pem.fr> Message-ID: On Fri, 22 Jul 2022 at 09:51, Ron Pressler wrote: > The main issue is that the Continuation class can be used in a way that > would cause the current thread (i.e. Thread.currentThread()) to change > mid-method.
This would not only break a lot of Java code in very surprising > ways, but also some assumptions made by the JIT compilers. Therefore, all > safe constructs based on continuations must be confined to a single thread > (or implement a thread, as done by virtual threads). > I once experimented with designing a Generator based on Continuation. Reading your comment above, I fear I may have misunderstood the requirements. I'll try to re-state them; please correct me if I'm wrong. A Continuation is initialized with a scope and a runnable. At some point it will be run for the first time in a certain thread. From this point onwards, this Continuation must always run in that same thread. If the Continuation yields, then it must be restarted in the thread to which it is "bound", never in some other thread. Is that correct? Let us consider a Generator class defined over a Continuation. The user calls _generate_ for the first time, which calls _run_ on the Continuation for the first time, "binding" the Continuation to the user's thread. The Continuation function produces the value, stores it, and yields; _generate_ returns the stored value to the user. Eventually the user is going to call _generate_ again, which must _run_ the Continuation again. The JVM assumes that _run_ is always run on the "thread" to which the Continuation was bound. Wouldn't it be enough to constrain _generate_ to always be run on the same thread, a restriction which could be guaranteed by an assert? -- Pedro Lamarão -------------- next part -------------- An HTML attachment was scrubbed...
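Pedro's proposed confinement check can be sketched with public APIs only. The class below is purely illustrative — jdk.internal.vm.Continuation is encapsulated, so a dedicated daemon thread plus a SynchronousQueue stands in for run/yield, and the names (Generator, generate) are assumptions, not a JDK API. The part that matters is the owner-thread guard in generate(), the kind of assert Pedro suggests:

```java
import java.util.concurrent.SynchronousQueue;
import java.util.function.Consumer;

// Illustrative sketch only: a generator built on a plain thread and a
// SynchronousQueue instead of the internal Continuation class. The generator
// records the first caller thread ("binding") and rejects calls from any
// other thread, as Pedro proposes for a Continuation-based generator.
public class Generator<T> {
    private final SynchronousQueue<T> handoff = new SynchronousQueue<>();
    private final Thread producer;
    private Thread owner; // the thread this generator is "bound" to

    public Generator(Consumer<Consumer<T>> body) {
        producer = new Thread(() -> body.accept(value -> {
            try {
                handoff.put(value); // "yield": block until the value is consumed
            } catch (InterruptedException e) {
                throw new IllegalStateException(e);
            }
        }));
        producer.setDaemon(true);
    }

    public T generate() throws InterruptedException {
        if (owner == null) {
            owner = Thread.currentThread(); // bind on first use
            producer.start();
        } else if (owner != Thread.currentThread()) {
            throw new IllegalStateException("generator is confined to " + owner);
        }
        return handoff.take(); // "run": resume the producer, wait for its next value
    }

    public static void main(String[] args) throws Exception {
        Generator<Integer> g = new Generator<>(emit -> {
            for (int i = 0; ; i++) emit.accept(i);
        });
        System.out.println(g.generate()); // 0
        System.out.println(g.generate()); // 1
    }
}
```

Calling generate() from any second thread throws, which is exactly the thread-confinement restriction discussed in the replies that follow.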
URL: From ron.pressler at oracle.com Fri Jul 22 19:02:53 2022 From: ron.pressler at oracle.com (Ron Pressler) Date: Fri, 22 Jul 2022 19:02:53 +0000 Subject: [External] : Re: Motivation to put Continuation, ContinuationScope, and Scope in jdk.internal.vm package In-Reply-To: References: <1249135288.14317889.1658491237732.JavaMail.zimbra@u-pem.fr> Message-ID: <6C5A77F9-BDAE-4D2F-A61A-443D0BE71FCC@oracle.com> That's correct, and if we build a public construct such as generators on top of Continuation, it will do precisely this kind of thread-confinement checking. However, the Continuation class itself cannot do it, because it is used by constructs that do move it from thread to thread (namely, virtual threads), and take great care to do it safely, cooperating with the compiler and making sure such transitions only occur inside methods that the compiler knows to treat specially. — Ron On 22 Jul 2022, at 15:33, Pedro Lamarão > wrote: On Fri, 22 Jul 2022 at 09:51, Ron Pressler > wrote: The main issue is that the Continuation class can be used in a way that would cause the current thread (i.e. Thread.currentThread()) to change mid-method. This would not only break a lot of Java code in very surprising ways, but also some assumptions made by the JIT compilers. Therefore, all safe constructs based on continuations must be confined to a single thread (or implement a thread, as done by virtual threads). I once experimented with designing a Generator based on Continuation. Reading your comment above, I fear I may have misunderstood the requirements. I'll try to re-state them; please correct me if I'm wrong. A Continuation is initialized with a scope and a runnable. At some point it will be run for the first time in a certain thread. From this point onwards, this Continuation must always run in that same thread. If the Continuation yields, then it must be restarted in the thread to which it is "bound", never in some other thread. Is that correct?
Let us consider a Generator class defined over a Continuation. The user calls _generate_ for the first time, which calls _run_ on the Continuation for the first time, "binding" the Continuation to the user's thread. The Continuation function produces the value, stores it, and yields; _generate_ returns the stored value to the user. Eventually the user is going to call _generate_ again, which must _run_ the Continuation again. The JVM assumes that _run_ is always run on the "thread" to which the Continuation was bound. Wouldn't it be enough to constrain _generate_ to always be run on the same thread, a restriction which could be guaranteed by an assert? -- Pedro Lamarão -------------- next part -------------- An HTML attachment was scrubbed... URL: From eric at kolotyluk.net Fri Jul 22 19:59:03 2022 From: eric at kolotyluk.net (eric at kolotyluk.net) Date: Fri, 22 Jul 2022 12:59:03 -0700 Subject: [External] : Re: Motivation to put Continuation, ContinuationScope, and Scope in jdk.internal.vm package In-Reply-To: <6C5A77F9-BDAE-4D2F-A61A-443D0BE71FCC@oracle.com> References: <1249135288.14317889.1658491237732.JavaMail.zimbra@u-pem.fr> <6C5A77F9-BDAE-4D2F-A61A-443D0BE71FCC@oracle.com> Message-ID: <013301d89e05$7f20c9f0$7d625dd0$@kolotyluk.net> Ooooh — "cooperating with the compiler and making sure such transitions only occur inside methods that the compiler knows to treat specially." I know there are no "language" changes, but I did not imagine there would be any cooperation with the compiler. Out of curiosity, what does this look like, or can you refer us to some other discussions? Is this just normal Java reflection, or something deeper and more intimate with the compiler?
Cheers, Eric From: loom-dev On Behalf Of Ron Pressler Sent: July 22, 2022 12:03 PM To: Pedro Lamarão Cc: Remi Forax ; Daniel Hinojosa ; loom-dev at openjdk.org Subject: Re: [External] : Re: Motivation to put Continuation, ContinuationScope, and Scope in jdk.internal.vm package That's correct, and if we build a public construct such as generators on top of Continuation, it will do precisely this kind of thread-confinement checking. However, the Continuation class itself cannot do it, because it is used by constructs that do move it from thread to thread (namely, virtual threads), and take great care to do it safely, cooperating with the compiler and making sure such transitions only occur inside methods that the compiler knows to treat specially. — Ron On 22 Jul 2022, at 15:33, Pedro Lamarão > wrote: On Fri, 22 Jul 2022 at 09:51, Ron Pressler > wrote: The main issue is that the Continuation class can be used in a way that would cause the current thread (i.e. Thread.currentThread()) to change mid-method. This would not only break a lot of Java code in very surprising ways, but also some assumptions made by the JIT compilers. Therefore, all safe constructs based on continuations must be confined to a single thread (or implement a thread, as done by virtual threads). I once experimented with designing a Generator based on Continuation. Reading your comment above, I fear I may have misunderstood the requirements. I'll try to re-state them; please correct me if I'm wrong. A Continuation is initialized with a scope and a runnable. At some point it will be run for the first time in a certain thread. From this point onwards, this Continuation must always run in that same thread. If the Continuation yields, then it must be restarted in the thread to which it is "bound", never in some other thread. Is that correct? Let us consider a Generator class defined over a Continuation.
The user calls _generate_ for the first time, which calls _run_ on the Continuation for the first time, "binding" the Continuation to the user's thread. The Continuation function produces the value, stores it, and yields; _generate_ returns the stored value to the user. Eventually the user is going to call _generate_ again, which must _run_ the Continuation again. The JVM assumes that _run_ is always run on the "thread" to which the Continuation was bound. Wouldn't it be enough to constrain _generate_ to always be run on the same thread, a restriction which could be guaranteed by an assert? -- Pedro Lamarão -------------- next part -------------- An HTML attachment was scrubbed... URL: From ron.pressler at oracle.com Fri Jul 22 22:57:55 2022 From: ron.pressler at oracle.com (Ron Pressler) Date: Fri, 22 Jul 2022 22:57:55 +0000 Subject: [External] : Re: Motivation to put Continuation, ContinuationScope, and Scope in jdk.internal.vm package In-Reply-To: <013301d89e05$7f20c9f0$7d625dd0$@kolotyluk.net> References: <1249135288.14317889.1658491237732.JavaMail.zimbra@u-pem.fr> <6C5A77F9-BDAE-4D2F-A61A-443D0BE71FCC@oracle.com> <013301d89e05$7f20c9f0$7d625dd0$@kolotyluk.net> Message-ID: <12582A6D-A0C5-4433-B38C-0BA9C61E5622@oracle.com> On 22 Jul 2022, at 20:59, eric at kolotyluk.net wrote: Ooooh — "cooperating with the compiler and making sure such transitions only occur inside methods that the compiler knows to treat specially." I know there are no "language" changes, but I did not imagine there would be any cooperation with the compiler. Out of curiosity, what does this look like, or can you refer us to some other discussions? Is this just normal Java reflection, or something deeper and more intimate with the compiler? Cheers, Eric I meant the JIT compiler(s).
Look here: https://github.com/openjdk/jdk/blob/987656d69065b5b61d658cec3704a181a4aef18b/src/java.base/share/classes/java/lang/VirtualThread.java#L270 The special @ChangesCurrentThread annotation on that method and a couple of others tells the compiler (i.e. the JIT compiler) to turn off some optimisations that rely on the assumption that the current thread cannot change in the middle of a method. Neglecting that annotation could result in miscompilation and very strange bugs, and we can't statically infer that property, so that is why the Continuation class is not safe when used directly. Virtual threads take care to interact with the compiler in this way, but other constructs based on continuations must be thread-confined to be safe. — Ron -------------- next part -------------- An HTML attachment was scrubbed... URL: From oleksandr.otenko at gmail.com Fri Jul 22 23:06:43 2022 From: oleksandr.otenko at gmail.com (Alex Otenko) Date: Sat, 23 Jul 2022 00:06:43 +0100 Subject: [External] : Re: jstack, profilers and other tools In-Reply-To: References: <5B55B456-C2CC-4856-80A8-8A3F9E40CF05@oracle.com> <0E494B4B-ADCF-4CBB-AFFF-F463CB05AD0E@oracle.com> <7EC05322-6BD9-4A17-9AB3-62115F3940D0@oracle.com> <271F8C58-F096-42B2-9432-E5643A7F133C@oracle.com> <0F187B4E-65FE-4BF0-86DE-661088C65DD8@oracle.com> <1FA73DAE-FD5D-4CC6-9638-C9985C20D286@oracle.com> <838D57E6-8671-4F6C-8792-E95E6D2DAD87@oracle.com> Message-ID: I am familiar with the bijection between types and continuation-passing. There is, however, a barrier: who does the thing of interest in each of the styles, the caller or the callee? In terms of type theory it doesn't matter; both styles have the same power. In terms of software engineering it does matter.
It is impossible to plug custom logic into a library that you import unless a mechanism is planned at design time and built in - most commonly a bunch of configuration options to tweak a parameterised algorithm, not really the ability to implement the arbitrary algorithms that the theoretical bijection requires. So in practice you are limited to doing only the things a caller can do. Same for allocation. It is doable to get the same behaviour as with an async API, but it is not what InputStream.read(byte[]) does - the method signature forces you to allocate and wait. And there is no mechanism to tell the blocking API to read 10k of 100 requests instead of 100 bytes of 10k requests. It's doable, but it is not there. On Thu, 21 Jul 2022, 12:42 Ron Pressler, wrote: > > > On 20 Jul 2022, at 19:49, Alex Otenko wrote: > > Waiting for requests doesn't count towards response time. Waiting for > responses from downstream does, but it is doable with rather small thread > counts - perhaps as a witness to just how large a proportion of response time > that typically is. > > > I am not talking about what's *doable* but about what virtual-thread > programs *do* (or can do), i.e. use a thread to wait for a response. As I > made very clear, this can also be done using async APIs, which were > invented for the very same reason (to get around the maximum thread count > limitation), but then you're missing out on "harmony" with the Java > platform, which knows about threads and not async constructs. > > > I agree virtual threads allow you to build things differently. But it's not > just the awkwardness of the async API that changes. A blocking API eliminates > inversion of control and changes the allocation pattern. > > > I disagree, but no one is taking async APIs away. If you enjoy programming > in that style, you're more than welcome to continue doing so. > > > For example, first you allocate 16KB to read efficiently, then you are > blocked - perhaps for seconds while the user is having a think.
So 10k > connections pin 160MB waiting for a request. With an async API nothing is > pinned, because you don't allocate until you know who's read-ready. This > may no longer be a problem with modern JVMs, but it needs calling out. > > That is incorrect. A synchronous API can allocate and return a buffer only > once the number of available bytes is known. There is absolutely no > difference between asynchronous and synchronous APIs in that respect. In > fact, there can be no algorithmic difference between them, and they can > both express the same algorithm. The difference between them is only in how an > algorithm is *expressed* in code, and in which constructs have built-in > observability support in the runtime. > > > The lack of inversion of control can be seen as the inability to control who > reads when. So to speak, would you rather read 100 bytes of 10k requests > each, or would you rather read 10KB of 100 requests each? This has an impact > on memory footprint and on who can make progress. It makes sense to read more > requests when you run out of things to do, but it doesn't make sense if you have > saturated the bottleneck resource. > > > There is no difference between synchronous and asynchronous code here, > either; they just express what it is that you want to do using different > code. If you want to see that in the most direct way, consider that the > decision of which callback to call and the decision of which thread waiting > on a queue you want to unblock and return a message to is algorithmically > the same. > > There is nothing that you're able to do in one of the styles and unable to do in > the other. In terms of choice, there's the subjective question of which > style you prefer aesthetically, and the objective question of which style > is more directly supported by the runtime and its tools. > > —
Ron > > > > Alex > > On Tue, 19 Jul 2022, 12:02 Ron Pressler, wrote: > >> First, async APIs and lightweight user-mode threads were invented because >> we empirically know there are many systems where the number of threads is >> the first bottleneck on maximum throughput that they encounter. The purpose >> of async APIs/lightweight threads is to allow those systems to hit the >> other, higher, limits on capacity. Of course, there are systems that hit >> other bottlenecks first, but the number-of-threads limitation is known to >> be very common. If a theory tells you it shouldn't be, then it's the >> theory that should be revised (hint: the portion of total latency spent >> waiting for I/O is commonly *very* high; if you want to use Little's law >> for that calculation, note that adding concurrency for fanout increases L >> and reduces W by the same factor, so it's handy to add together the total >> time spent waiting for, say, 5-50 outgoing microservice calls as if they >> were done sequentially, and compare that against the total time spent >> composing the results; this total "wait latency" is frequently in the >> hundreds of milliseconds — far higher than the actual request latency — and >> easily two orders of magnitude higher than CPU time). >> >> Second, the number of *threads* (as opposed to the number of concurrent >> operations) has, at most, a negligible impact on contention. If we're >> talking about low-level memory contention, then what matters is the number >> of processing cores, not the number of threads (beyond the number of >> cores), and if we're talking about other resources, then that contention is >> part of the other limits on concurrency (and so throughput) in the system, >> and the way it is reached — be it with many threads or one — is irrelevant. >> It is true that various scheduling algorithms — whether used on threads or >> on async constructs, the relevant scheduling problems and algorithms are the >> same —
could reduce some >> overhead, but we're talking about effects that are >> orders of magnitude lower than what can be achieved by reducing artificial >> limits on concurrency, though they could matter for getting the very last drop of >> performance; I go through the calculation of the effect of scheduling >> overhead here: https://inside.java/2020/08/07/loom-performance/ >> . >> In short, the impact of scheduling can only be high if the total amount of >> time spent on scheduling is significant when compared to the time spent >> waiting for I/O. >> >> — Ron >> >> >> >> On 19 Jul 2022, at 09:22, Alex Otenko wrote: >> >> Thanks, that's what I was trying to get across, too. >> >> Also, 10k threads in a thread-per-request design doesn't mean that the concurrency is in the >> thousands. In the thought experiment it is. In practice - ... well, if the >> systems are fine with dozens or even hundreds of threads, there should be >> no problem even doubling the thread count, if it can double, or at least >> improve, throughput. In my experience this is not the case. There are even >> famous systems with self-tuning thread pool sizes, and I worked on the >> self-tuning algorithm. I have seen various apps and workloads that use that >> system and haven't seen any that would reach a maximum thread count of a few >> hundred even on a fairly large machine. So whereas I never found anything >> wrong with the claim that thread count is one of the caps on throughput, I >> find the claim that allowing a thread per request is going to improve >> concurrency problematic, exactly because there are other caps. There surely >> are workloads that are bottlenecked on a thread count that can't grow >> into the thousands, but in my practice I haven't seen a single one of this >> kind. If we had thousands of threads per CPU, they would just need to be waiting >> so much that the business logic must be very trivial. >> >> For example, take the thought experiment with 10k threads and 0.5s response >> time.
If that is executed on a 1-CPU machine, each request must be spending >> 50 microseconds on CPU and waiting for something for the rest of the time. If >> it's waiting for a lock or a pooled resource, you may be better off having >> fewer threads (coarsening contention). So it had better be some network >> connection, or something of the kind. So it waits on that for 499.95ms, >> and does request parsing, response construction, etc. in 50 microseconds. >> This sort of profile is not a very common pattern. >> >> If we consider tens of CPUs for 10k threads, it starts to look far less >> impressive in terms of the number of threads. >> >> That's all about concurrency and threads as a bottleneck resource. There >> are other important uses of threads, but those are not about increasing >> concurrency. >> >> >> Ok, I reckon the topic got bashed to smithereens. >> >> Alex >> >> On Mon, 18 Jul 2022, 22:57 Ron Pressler, wrote: >> >>> "Concurrency rises with throughput", which is just a mathematical fact, >>> is not the same as the claim — that no one is making — that one can *raise* >>> throughput by adding threads. However, it is the same as the claim that the >>> *maximum* throughput might rise if the *maximum* number of threads is >>> increased, because that's just how dependent variables can work in >>> mathematics, as I'll try explaining. >>> >>> There is no "more threads to get *better throughput*", and there is no >>> question about "applying" Little's law. Little's law is simply the maths >>> that tells us how many requests are being concurrently served in some >>> system. There is no getting around it. In a system with 10K requests/s, >>> each taking 500ms on average, there *are* 5K concurrent requests. If the >>> program is written in the thread-per-request style, then it *has* at least >>> 5K threads. Now, if the rate of requests doubles to 20K req/s and the >>> system doesn't collapse, then there must be at least 10K threads >>> serving them.
>>> >>> Note that the increase in threads doesn't raise the throughput, but it >>> must accompany it. However, because concurrency rises with throughput, the >>> *maximum* number of threads does pose an upper bound on throughput. >>> >>> It is very important to understand the difference between "adding >>> processing units could decrease latency in a data-parallel program" and >>> "concurrency rises with throughput in a concurrent program." In the former, >>> the units are an independent variable, and in the latter they're not — i.e. >>> when the throughput is higher there are more threads, but adding threads >>> doesn't increase the throughput. >>> >>> And yet, because this forms an *upper bound* on throughput, the ability >>> to have more threads is a prerequisite to raising the maximum attainable >>> throughput (with the thread-per-request style). So raising the number of >>> threads cannot possibly increase throughput, and yet raising the maximum >>> number of threads could increase maximum throughput (until it's bounded by >>> something else). That's just how dependent variables work when talking >>> about upper/lower bounds. >>> >>> — Ron >>> >>> On 18 Jul 2022, at 19:01, Alex Otenko >>> wrote: >>> >>> I think I have made it clear that I am not sceptical about the ability >>> to spawn threads in large numbers, and that all I am sceptical about is the >>> use of Little's law in the way you did. You made it look like one needs >>> thousands of threads to get better throughput, whereas typical numbers are >>> much more modest than that. In practice you can't heedlessly add more >>> threads, as at some point you get response time degrading with no >>> improvement to throughput. >>> >>> On Sun, 17 Jul 2022, 10:59 Ron Pressler, >>> wrote: >>> >>>> If your thread-per-request system is getting 10K req/s (on average), >>>> each request takes 500ms (on average) to handle, and this can be sustained >>>> (i.e.
the system is stable), then it doesn't matter how much CPU or RAM is >>>> consumed, how much network bandwidth you're using, or even how many >>>> machines you have: the (average) number of threads you're running *is* no >>>> less than 5K (and, in practice, will usually be several times that). >>>> >>>> So it's not that adding more threads is going to increase throughput >>>> (in fact, it won't; having 1M threads will do nothing in this case), it's >>>> that the number of threads is an upper bound on L (among all other upper >>>> bounds on L). Conversely, reaching a certain throughput requires some >>>> minimum number of threads. >>>> >>>> As to how many thread-per-request systems do or would hit the OS-thread >>>> boundary before they hit others, that's an empirical question, and I think >>>> it is well established that there are many such systems, but if you're >>>> sceptical and think that user-mode threads/asynchronous APIs have little >>>> impact, you can just wait and see. >>>> >>>> — Ron >>>> >>>> On 16 Jul 2022, at 20:30, Alex Otenko >>>> wrote: >>>> >>>> That's the indisputable bit. The contentious part is that adding more >>>> threads is going to increase throughput. >>>> >>>> Supposing that 10k threads are there, and you actually need them, you >>>> should get concurrency level 10k. Let's see what that means in practice. >>>> >>>> If it is a 1-CPU machine, 10k requests in flight somewhere at any given >>>> time means they are waiting for 99.99% of the time. Or, out of 1 second they >>>> spend 100 microseconds on CPU, and wait for something for the rest of >>>> the time (or, out of a 100ms response time, 10 microseconds on CPU - barely >>>> enough to parse a REST request). This can't be the case for the majority of >>>> workflows. >>>> >>>> Of course, having 10k threads for less than 1 second each doesn't mean >>>> you are getting concurrency that is unattainable with fewer threads.
>>>> >>>> The bottom line is that by adding threads you aren't necessarily >>>> increasing concurrency. >>>> >>>> On Fri, 15 Jul 2022, 10:19 Ron Pressler, >>>> wrote: >>>> >>>>> The number of threads doesn't "do" or not do anything for you. If >>>>> requests arrive at 100K per second, and each takes 500ms to process, then the >>>>> number of threads you're using *is equal to* at least 50K (assuming >>>>> thread-per-request) in a stable system, that's all. That is the physical >>>>> meaning: the formula tells you what the quantities *are* in a stable >>>>> system. >>>>> >>>>> Because in a thread-per-request program every concurrent request >>>>> takes up at least one thread, while the formula does not immediately tell >>>>> you how many machines are used, or what the RAM, CPU, and network bandwidth >>>>> utilisation is, it does give you a lower bound on the total number of live >>>>> threads. Conversely, the number of threads gives an upper bound on L. >>>>> >>>>> As to the rest about splitting into subtasks, that increases L and >>>>> reduces W by the same factor, so when applying Little's law it's handy to >>>>> treat W as the total latency, *as if* it were processed sequentially, if >>>>> we're interested in L being the number of concurrent requests. More about >>>>> that here: https://inside.java/2020/08/07/loom-performance/ >>>>> >>>>> >>>>> — Ron >>>>> >>>>> On 15 Jul 2022, at 09:37, Alex Otenko >>>>> wrote: >>>>> >>>>> You quickly jumped to a *therefore*. >>>>> >>>>> Newton's second law binds force, mass and acceleration. But you can't >>>>> say that you can decrease mass by increasing acceleration if the force >>>>> remains the same. That is, the statement would be arithmetically correct, >>>>> but it would have no physical meaning. >>>>> >>>>> Adding threads allows more work to be done. But you can't do more work at >>>>> will - the amount of work going through the system is a quantity >>>>> independent of your design.
>>>>> >>>>> Now, what you could do at will is split the work into sub-tasks. >>>>> Virtual threads allow you to do this at very little cost. However, you still >>>>> can't talk about an increase in concurrency due to Little's law, because - >>>>> enter Amdahl - the response time changes. >>>>> >>>>> Say, 100k requests get split into 10 sub-tasks each, each runnable >>>>> independently. Amdahl says your response time is going down 10-fold. So >>>>> 100k requests times 1ms gives concurrency 100. Concurrency got >>>>> reduced. Not surprising at all, because now each request spends 10x less >>>>> time in the system. >>>>> >>>>> What about subtasks? Aren't we running more of them? Does this mean >>>>> concurrency increased? >>>>> >>>>> Yes, 100k requests beget 1m sub-tasks. We can't compare concurrency, >>>>> because the definition of the unit of work changed: it was W, it became W/10. But >>>>> let's see anyway. So we have 1m tasks, each finished in 1ms - concurrency >>>>> is 1000. Same as before splitting the work, with the matching change of response >>>>> time. I treat this like I would any change of units of measurement. >>>>> >>>>> >>>>> So whereas I see a lot of good in being able to spin up threads, >>>>> many and short-lived, I don't see how you can claim that concurrency increases, >>>>> or that Little's law somehow controls throughput. >>>>> >>>>> >>>>> Alex >>>>> >>>>> On Thu, 14 Jul 2022, 11:01 Ron Pressler, >>>>> wrote: >>>>> >>>>>> Little's law tells us what the relationship between concurrency, >>>>>> throughput and latency is if the system is stable. It tells us that if >>>>>> latency doesn't decrease, then concurrency rises with throughput (again, if >>>>>> the system is stable). Therefore, to support high throughput you need a >>>>>> high level of concurrency. Since the Java platform's unit of concurrency is >>>>>> the thread, to support high throughput you need a high number of threads.
>>>>>> There might be other things you also need more of, but you *at least* need >>>>>> a high number of threads. >>>>>> >>>>>> The number of threads is an *upper bound* on concurrency, because the >>>>>> platform cannot make concurrent progress on anything without a thread (with >>>>>> the caveat in the next paragraph). There might be other upper bounds, too >>>>>> (e.g. you need enough memory to concurrently store all the working data for >>>>>> your concurrent operations), but the number of threads *is* an upper bound, >>>>>> and the one virtual threads are there to remove. >>>>>> >>>>>> Of course, as JEP 425 explains, you could abandon threads altogether >>>>>> and use some other construct as your unit of concurrency, but then you lose >>>>>> platform support. >>>>>> >>>>>> In any event, virtual threads exist to support a high number of >>>>>> threads, as Little's law requires; therefore, if you use virtual threads, >>>>>> you have a high number of them. >>>>>> >>>>>> — Ron >>>>>> >>>>>> On 14 Jul 2022, at 08:12, Alex Otenko >>>>>> wrote: >>>>>> >>>>>> Hi Ron, >>>>>> >>>>>> It looks like you are unconvinced. Let me try with illustrative numbers. >>>>>> >>>>>> The users opening their laptops at 9am don't know how many threads >>>>>> you have. So throughput remains 100k ops/sec in both setups below. Suppose >>>>>> in the first setup we have a system that is stable with 1000 threads. >>>>>> Little's law tells us that the response time cannot exceed 10ms in this >>>>>> case. Little's law does not prescribe response time, by the way; it is >>>>>> merely a consequence of the statement that the system is stable: it >>>>>> couldn't have been stable if its response time were higher. >>>>>> >>>>>> Now, let's create one thread per request. One claim is that this >>>>>> increases concurrency (and I object to this point alone). Suppose this >>>>>> means concurrency becomes 100k. Little's law says that the response time >>>>>> must be 1 second.
Sorry, but that's hardly an improvement! In fact, for any >>>>>> concurrency greater than 1000 you must get a response time higher than the 10ms >>>>>> we've got with 1000 threads. This is not what we want. Fortunately, this is >>>>>> not what happens either. >>>>>> >>>>>> Really, the thread count in the thread-per-request design has little to >>>>>> do with the concurrency level. The concurrency level is a derived quantity. It only >>>>>> tells us how many requests are making progress at any given time in a >>>>>> system that experiences request arrival rate R and which is able to process >>>>>> them in time T. The only thing you can control through system design is >>>>>> the response time T. >>>>>> >>>>>> There are good reasons to design a system that way, but Little's law >>>>>> is not one of them. >>>>>> >>>>>> On Wed, 13 Jul 2022, 14:29 Ron Pressler, >>>>>> wrote: >>>>>> >>>>>>> The application of Little's law is 100% correct. Little's law tells >>>>>>> us that the number of threads must *necessarily* rise if throughput is to >>>>>>> be high. Whether or not that alone is *sufficient* might depend on the >>>>>>> concurrency level of other resources as well. The number of threads is not >>>>>>> the only quantity that limits the L in the formula, but L cannot be higher >>>>>>> than the number of threads. Obviously, if the system's level of concurrency >>>>>>> is bounded at a very low level — say, 10 — then having more than 10 threads >>>>>>> is unhelpful, but as we're talking about a program that uses virtual >>>>>>> threads, we know that is not the case. >>>>>>> >>>>>>> Also, Little's law describes *stable* systems; i.e. it says that >>>>>>> *if* the system is stable, then a certain relationship must hold. While it >>>>>>> is true that the rate of arrival might rise without bound, if the number of >>>>>>> threads is insufficient to meet it, then the system is no longer stable >>>>>>> (normally that means that queues are growing without bound). >>>>>>> >>>>>>> —
Ron >>>>>>> >>>>>>> On 13 Jul 2022, at 14:00, Alex Otenko >>>>>>> wrote: >>>>>>> >>>>>>> This is an incorrect application of Little's Law. The law only >>>>>>> posits that there is a connection between quantities. It doesn't specify >>>>>>> which variables depend on which. In particular, throughput is not a free >>>>>>> variable. >>>>>>> >>>>>>> Throughput is something outside your control. 100k users open their >>>>>>> laptops at 9am and log in within 1 second - that's it, you have a throughput >>>>>>> of 100k ops/sec. >>>>>>> >>>>>>> Then, based on the response time the system is able to deliver, you can >>>>>>> tell what concurrency makes sense here. Adding threads is not going to >>>>>>> change anything - certainly not if threads are not the bottleneck resource. >>>>>>> Threads become the bottleneck when you have the hardware to run them, but not >>>>>>> the threads. >>>>>>> >>>>>>> On Tue, 12 Jul 2022, 15:47 Ron Pressler, >>>>>>> wrote: >>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> On 11 Jul 2022, at 22:13, Rob Bygrave >>>>>>>> wrote: >>>>>>>> >>>>>>>> *> An existing application that migrates to using virtual threads >>>>>>>> doesn't replace its platform threads with virtual threads* >>>>>>>> >>>>>>>> What I have been confident about to date, based on the testing I've >>>>>>>> done, is that we can use Jetty with a Loom-based thread pool, and that has >>>>>>>> worked very well. That is replacing current platform threads with virtual >>>>>>>> threads. I'm suggesting this will frequently be sub-1000 virtual threads. >>>>>>>> Ron, are you suggesting this isn't a valid use of virtual threads, or am I >>>>>>>> reading too much into what you've said here? >>>>>>>> >>>>>>>> >>>>>>>> The throughput advantage to virtual threads comes from one aspect -- >>>>>>>> their *number* -- as explained by Little's law.
A web server employing >>>>>>>> virtual threads would not replace a pool of N platform threads with a pool >>>>>>>> of N virtual threads, as that does not increase the number of threads >>>>>>>> required to increase throughput. Rather, it replaces the pool of N platform >>>>>>>> threads with an unpooled ExecutorService that spawns at least one new >>>>>>>> virtual thread for every HTTP serving task. Only that can increase the >>>>>>>> number of threads sufficiently to improve throughput. >>>>>>>> >>>>>>>> >>>>>>>> > *unusual* for an application that has any virtual threads to >>>>>>>> have fewer than, say, 10,000 >>>>>>>> >>>>>>>> In the case of http server use of virtual threads, I feel the use of >>>>>>>> *unusual* is too strong. That is, when we are using virtual >>>>>>>> threads for application code handling of http request/response (like Jetty >>>>>>>> + Loom), I suspect this is frequently going to operate with fewer than 1000 >>>>>>>> concurrent requests per server instance. >>>>>>>> >>>>>>>> >>>>>>>> 1000 concurrent requests would likely translate to more than 10,000 >>>>>>>> virtual threads due to fanout (JEPs 425 and 428 cover this). In fact, even >>>>>>>> without fanout, every HTTP request might wish to spawn more than one >>>>>>>> thread, for example to have one thread for reading and one for writing. The >>>>>>>> number 10,000, however, is just illustrative. Clearly, an application with >>>>>>>> virtual threads will have some large number of threads (significantly >>>>>>>> larger than applications with just platform threads), because the ability >>>>>>>> to have a large number of threads is what virtual threads are for. >>>>>>>> >>>>>>>> The important point is that tooling needs to adapt to a high number >>>>>>>> of threads, which is why we've added a tool that's designed to make sense >>>>>>>> of many threads, where jstack might not be very useful. >>>>>>>> >>>>>>>> --
Ron >>>>>>>> >>>>>>>> >>>>>>> >>>>>> >>>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From oleksandr.otenko at gmail.com Fri Jul 22 23:25:32 2022 From: oleksandr.otenko at gmail.com (Alex Otenko) Date: Sat, 23 Jul 2022 00:25:32 +0100 Subject: [External] : Re: jstack, profilers and other tools In-Reply-To: <5922F1B9-C3E1-4D0C-BA61-64EFBB8FD88E@oracle.com> References: <5B55B456-C2CC-4856-80A8-8A3F9E40CF05@oracle.com> <0E494B4B-ADCF-4CBB-AFFF-F463CB05AD0E@oracle.com> <7EC05322-6BD9-4A17-9AB3-62115F3940D0@oracle.com> <271F8C58-F096-42B2-9432-E5643A7F133C@oracle.com> <0F187B4E-65FE-4BF0-86DE-661088C65DD8@oracle.com> <1FA73DAE-FD5D-4CC6-9638-C9985C20D286@oracle.com> <5922F1B9-C3E1-4D0C-BA61-64EFBB8FD88E@oracle.com> Message-ID: I think the single threaded example I gave speaks for itself. 1 thread can sustain various throughputs with various concurrency. I've shown a case with 99 concurrent requests, as per Little's law (and I agree with it), and it's easy to see how to get any higher concurrency. There are other laws at play, too, so my example latency wasn't random. But this has been long enough. On Thu, 21 Jul 2022, 12:30 Ron Pressler, wrote: > Little's law has no notion of threads, only of "requests." But if you're > talking about a *thread-per-request* program, as I made explicitly clear, > then the number of threads is equal to or greater than the number of > requests. > > And yes, if the *maximum* thread count is low, a thread-per-request > program will have a low bound on the number of concurrent requests, and > hence, by Little's law, on throughput. > > -- Ron > > On 20 Jul 2022, at 19:24, Alex Otenko wrote: > > To me that statement implies a few things: > > - that Little's law talks of thread count > > - that if thread count is low, can't have throughput advantage > > > Well, I don't feel like discussing my imperfect grasp of English.
> > On Tue, 19 Jul 2022, 23:52 Ron Pressler, wrote: >> >> >> On 19 Jul 2022, at 18:38, Alex Otenko wrote: >> >> Agreed about the architectural advantages. >> >> The email that triggered my rant did contain the claim that using Virtual >> threads has the advantage of higher concurrency. >> >> > The throughput advantage to virtual threads comes from one aspect -- >> their *number* -- as explained by Little's law. >> >> >> >> >> Yes, and that is correct. As I explained, a higher maximum number of >> threads does indeed mean it is possible to reach the higher concurrency >> needed for higher throughput, so virtual threads, by virtue of their >> number, do allow for higher throughput. That statement is completely >> accurate, and yet it means something very different from (the incorrect) >> "increasing the number of threads increases throughput", which is how you >> misinterpreted the statement. >> >> This is similar to saying that AC allows people to live in areas with >> higher temperature, and that is a very different statement from saying that >> AC increases the temperature (although I guess it happens to also do that). >> >> -- Ron >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ron.pressler at oracle.com Sat Jul 23 01:00:50 2022 From: ron.pressler at oracle.com (Ron Pressler) Date: Sat, 23 Jul 2022 01:00:50 +0000 Subject: [External] : Re: jstack, profilers and other tools In-Reply-To: References: <5B55B456-C2CC-4856-80A8-8A3F9E40CF05@oracle.com> <0E494B4B-ADCF-4CBB-AFFF-F463CB05AD0E@oracle.com> <7EC05322-6BD9-4A17-9AB3-62115F3940D0@oracle.com> <271F8C58-F096-42B2-9432-E5643A7F133C@oracle.com> <0F187B4E-65FE-4BF0-86DE-661088C65DD8@oracle.com> <1FA73DAE-FD5D-4CC6-9638-C9985C20D286@oracle.com> <838D57E6-8671-4F6C-8792-E95E6D2DAD87@oracle.com> Message-ID: > On 23 Jul 2022, at 00:06, Alex Otenko wrote: > > I am familiar with the bijection between types and continuation-passing. There is however a barrier.
Who does the thing of interest in each of the styles: the caller or the callee. > > In terms of type theory it doesn't matter; both styles have the same power. In terms of software engineering it does matter. It is impossible to plug custom logic in a library that you import, unless a mechanism is planned at design time, and built in - most commonly a bunch of configuration options to tweak a parameterised algorithm; not really the ability to implement arbitrary algorithms that the theoretical bijection requires. So in practice you are limited to doing only things a caller can do. > > Same for allocation. It is doable to get the same behaviour as in async API, but it is not what InputStream.read(byte[]) does - the method signature forces you to allocate and wait. And there is no mechanism to tell the blocking API to read 10k of 100 requests instead of 100 bytes of 10k requests. It's doable, but it is not there. No, all that is simply incorrect. The customisability of synchronous code is at least as ergonomic and flexible as for async code if not more so (because Java is built around synchronous primitives, and its basic composition operators are made for synchronous primitives), and the I/O buffering primitives provided by the JDK are no different between synchronous and asynchronous. I feel like you're trying to find some justification for your aesthetic preference for asynchronous code, and there's really no need. If you enjoy it more, and you don't need the observability support from the runtime -- by all means keep using it. Our goal isn't to get fans of asynchronous code to abandon using it. It is to give the same benefits to those who prefer using synchronous code, and we can even go further because it is a better fit for the design of the Java platform and so that's where we can support the code better, both in the runtime and the language. But if you like async better -- use async.
But while we're in the weeds, there are some interesting differences re memory usage, although they are not fundamentally about sync vs async, but some technical specifics of the JDK and how virtual threads are implemented. This might be of interest to the curious. Whether you use the async or sync style, there's a need to pass data from computation done before the call to computation done after. In Java, the async style normally requires allocating a new object to do that, while sync reuses the same mutable stack. Hypothetically, async code could implement such a mutable stack manually, but in Java it is difficult, because, unlike in C, heap objects cannot switch between storing pointers and primitives in the same memory cell. Stacks are designed to allow that thanks to special GC protocols that were adapted for virtual threads. On the other hand, for simplicity and performance reasons in the JIT compiler, the way data is stored in the stack is wasteful (so fewer but bigger objects are allocated). We're now working on making it more compact in the case of virtual threads. -- Ron From ron.pressler at oracle.com Sat Jul 23 01:03:57 2022 From: ron.pressler at oracle.com (Ron Pressler) Date: Sat, 23 Jul 2022 01:03:57 +0000 Subject: [External] : Re: jstack, profilers and other tools In-Reply-To: References: <5B55B456-C2CC-4856-80A8-8A3F9E40CF05@oracle.com> <0E494B4B-ADCF-4CBB-AFFF-F463CB05AD0E@oracle.com> <7EC05322-6BD9-4A17-9AB3-62115F3940D0@oracle.com> <271F8C58-F096-42B2-9432-E5643A7F133C@oracle.com> <0F187B4E-65FE-4BF0-86DE-661088C65DD8@oracle.com> <1FA73DAE-FD5D-4CC6-9638-C9985C20D286@oracle.com> <5922F1B9-C3E1-4D0C-BA61-64EFBB8FD88E@oracle.com> Message-ID: We're talking about thread-per-request programs. In such programs, one thread has a concurrency of one (i.e. it handles one request, hence "thread-per-request").
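The thread-per-request arrangement discussed in this thread can be sketched as follows. This is illustrative only, not from the original exchange; it assumes JDK 21+, where Executors.newVirtualThreadPerTaskExecutor() and the AutoCloseable ExecutorService.close() are available, and the class and method names are invented for the example:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

// Thread-per-request with virtual threads: no pool at all; each submitted
// task gets its own newly started virtual thread, so the thread count
// tracks the number of in-flight requests rather than capping it.
public class ThreadPerRequest {
    static int handleAll(int requests) {
        AtomicInteger handled = new AtomicInteger();
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < requests; i++) {
                // Stand-in for blocking request handling (I/O, DB call, ...).
                executor.submit(() -> { handled.incrementAndGet(); });
            }
        } // close() waits for all submitted tasks to complete
        return handled.get();
    }

    public static void main(String[] args) {
        System.out.println(handleAll(10_000)); // prints 10000
    }
}
```

Ten thousand concurrent tasks would be out of the question for a pool of platform threads, but are routine for virtual threads.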
As I explained, to get higher concurrency than what's allowed by the number of OS threads you can *either* use user-mode threads *or* not represent a unit of concurrency as a thread, but here we're talking about the former. All that is covered in JEP 425. -- Ron On 23 Jul 2022, at 00:25, Alex Otenko > wrote: I think the single threaded example I gave speaks for itself. 1 thread can sustain various throughputs with various concurrency. I've shown a case with 99 concurrent requests, as per Little's law (and I agree with it), and it's easy to see how to get any higher concurrency. There are other laws at play, too, so my example latency wasn't random. But this has been long enough. On Thu, 21 Jul 2022, 12:30 Ron Pressler, > wrote: Little's law has no notion of threads, only of "requests." But if you're talking about a *thread-per-request* program, as I made explicitly clear, then the number of threads is equal to or greater than the number of requests. And yes, if the *maximum* thread count is low, a thread-per-request program will have a low bound on the number of concurrent requests, and hence, by Little's law, on throughput. -- Ron On 20 Jul 2022, at 19:24, Alex Otenko > wrote: To me that statement implies a few things: - that Little's law talks of thread count - that if thread count is low, can't have throughput advantage Well, I don't feel like discussing my imperfect grasp of English. On Tue, 19 Jul 2022, 23:52 Ron Pressler, > wrote: On 19 Jul 2022, at 18:38, Alex Otenko > wrote: Agreed about the architectural advantages. The email that triggered my rant did contain the claim that using Virtual threads has the advantage of higher concurrency. > The throughput advantage to virtual threads comes from one aspect -- their *number* -- as explained by Little's law. Yes, and that is correct.
As I explained, a higher maximum number of threads does indeed mean it is possible to reach the higher concurrency needed for higher throughput, so virtual threads, by virtue of their number, do allow for higher throughput. That statement is completely accurate, and yet it means something very different from (the incorrect) "increasing the number of threads increases throughput", which is how you misinterpreted the statement. This is similar to saying that AC allows people to live in areas with higher temperature, and that is a very different statement from saying that AC increases the temperature (although I guess it happens to also do that). -- Ron -------------- next part -------------- An HTML attachment was scrubbed... URL: From oleksandr.otenko at gmail.com Sun Jul 24 13:05:44 2022 From: oleksandr.otenko at gmail.com (Alex Otenko) Date: Sun, 24 Jul 2022 14:05:44 +0100 Subject: [External] : Re: jstack, profilers and other tools In-Reply-To: References: <5B55B456-C2CC-4856-80A8-8A3F9E40CF05@oracle.com> <0E494B4B-ADCF-4CBB-AFFF-F463CB05AD0E@oracle.com> <7EC05322-6BD9-4A17-9AB3-62115F3940D0@oracle.com> <271F8C58-F096-42B2-9432-E5643A7F133C@oracle.com> <0F187B4E-65FE-4BF0-86DE-661088C65DD8@oracle.com> <1FA73DAE-FD5D-4CC6-9638-C9985C20D286@oracle.com> <838D57E6-8671-4F6C-8792-E95E6D2DAD87@oracle.com> Message-ID: I am not sure what made you think I am fighting for async style everywhere. I am merely pointing out that some tasks are harder to solve in sync code - and impossible to solve, if you rely on library/built-in that isn't doing what you want. Like, if JDK doesn't expose a mechanism to prioritize reads, you can't solve the problem of reading fewer requests completely vs more requests partially. Also, I am not clear why you dismissed the allocation problem by just saying IO buffering is the same. The allocation problem is that user code allocates before read(byte[]) is called. This is the same in both sync and async code.
The difference is that in async code the lifespan of the allocation is shorter - we read only read-ready channels, and those that aren't ready return quickly. In sync code the buffer remains allocated for a long time - even seconds, if that's what the connection reuse pattern is on the client side. On Sat, 23 Jul 2022, 02:00 Ron Pressler, wrote: > > > > On 23 Jul 2022, at 00:06, Alex Otenko > wrote: > > > > I am familiar with the bijection between types and continuation-passing. > There is however a barrier. Who does the thing of interest in each of the > styles: the caller or the callee. > > > > In terms of type theory it doesn't matter; both styles have the same > power. In terms of software engineering it does matter. It is impossible to > plug custom logic in a library that you import, unless a mechanism is > planned at design time, and built in - most commonly a bunch of > configuration options to tweak a parameterised algorithm; not really the > ability to implement arbitrary algorithms that the theoretical bijection > requires. So in practice you are limited to doing only things a caller can > do. > > > > Same for allocation. It is doable to get the same behaviour as in async > API, but it is not what InputStream.read(byte[]) does - the method > signature forces you to allocate and wait. And there is no mechanism to > tell the blocking API to read 10k of 100 requests instead of 100 bytes of > 10k requests. It's doable, but it is not there. > > No, all that is simply incorrect. The customisability of synchronous code > is at least as ergonomic and flexible as for async code if not more so > (because Java is built around synchronous primitives, and its basic > composition operators are made for synchronous primitives), and the I/O > buffering primitives provided by the JDK are no different between > synchronous and asynchronous. 
> > I feel like you're trying to find some justification for your aesthetic > preference for asynchronous code, and there's really no need. If you enjoy > it more, and you don't need the observability support from the runtime -- by > all means keep using it. Our goal isn't to get fans of asynchronous code to > abandon using it. It is to give the same benefits to those who prefer using > synchronous code, and we can even go further because it is a better fit for > the design of the Java platform and so that's where we can support the code > better, both in the runtime and the language. But if you like async better > -- use async. > > But while we're in the weeds, there are some interesting differences re > memory usage, although they are not fundamentally about sync vs async, but > some technical specifics of the JDK and how virtual threads are > implemented. This might be of interest to the curious. > > Whether you use the async or sync style, there's a need to pass data from > computation done before the call to computation done after. In Java, the > async style normally requires allocating a new object to do that, while > sync reuses the same mutable stack. Hypothetically, async code could > implement such a mutable stack manually, but in Java it is difficult, > because, unlike in C, heap objects cannot switch between storing pointers > and primitives in the same memory cell. Stacks are designed to allow that > thanks to special GC protocols that were adapted for virtual threads. > > On the other hand, for simplicity and performance reasons in the JIT > compiler, the way data is stored in the stack is wasteful (so fewer but > bigger objects are allocated). We're now working on making it more compact > in the case of virtual threads. > > -- Ron -------------- next part -------------- An HTML attachment was scrubbed...
URL: From oleksandr.otenko at gmail.com Sun Jul 24 13:07:17 2022 From: oleksandr.otenko at gmail.com (Alex Otenko) Date: Sun, 24 Jul 2022 14:07:17 +0100 Subject: [External] : Re: jstack, profilers and other tools In-Reply-To: References: <5B55B456-C2CC-4856-80A8-8A3F9E40CF05@oracle.com> <0E494B4B-ADCF-4CBB-AFFF-F463CB05AD0E@oracle.com> <7EC05322-6BD9-4A17-9AB3-62115F3940D0@oracle.com> <271F8C58-F096-42B2-9432-E5643A7F133C@oracle.com> <0F187B4E-65FE-4BF0-86DE-661088C65DD8@oracle.com> <1FA73DAE-FD5D-4CC6-9638-C9985C20D286@oracle.com> <5922F1B9-C3E1-4D0C-BA61-64EFBB8FD88E@oracle.com> Message-ID: I think none of this statement has anything to do with Little's law. On Sat, 23 Jul 2022, 02:04 Ron Pressler, wrote: > We're talking about thread-per-request programs. In such programs, one > thread has a concurrency of one (i.e. it handles one request, hence > "thread-per-request"). As I explained, to get higher concurrency than > what's allowed by the number of OS threads you can *either* use user-mode > threads *or* not represent a unit of concurrency as a thread, but here > we're talking about the former. All that is covered in JEP 425. > > -- Ron > > On 23 Jul 2022, at 00:25, Alex Otenko wrote: > > I think the single threaded example I gave speaks for itself. 1 thread can > sustain various throughputs with various concurrency. I've shown a case > with 99 concurrent requests, as per Little's law (and I agree with it), > and it's easy to see how to get any higher concurrency. > > There are other laws at play, too, so my example latency wasn't random. > But this has been long enough. > > > On Thu, 21 Jul 2022, 12:30 Ron Pressler, wrote: > >> Little's law has no notion of threads, only of "requests." But if you're >> talking about a *thread-per-request* program, as I made explicitly clear, >> then the number of threads is equal to or greater than the number of >> requests.
>> >> And yes, if the *maximum* thread count is low, a thread-per-request >> program will have a low bound on the number of concurrent requests, and >> hence, by Little's law, on throughput. >> >> -- Ron >> >> On 20 Jul 2022, at 19:24, Alex Otenko wrote: >> >> To me that statement implies a few things: >> >> - that Little's law talks of thread count >> >> - that if thread count is low, can't have throughput advantage >> >> >> Well, I don't feel like discussing my imperfect grasp of English. >> >> On Tue, 19 Jul 2022, 23:52 Ron Pressler, wrote: >> >>> >>> >>> On 19 Jul 2022, at 18:38, Alex Otenko >>> wrote: >>> >>> Agreed about the architectural advantages. >>> >>> The email that triggered my rant did contain the claim that using >>> Virtual threads has the advantage of higher concurrency. >>> >>> > The throughput advantage to virtual threads comes from one aspect -- >>> their *number* -- as explained by Little's law. >>> >>> >>> >>> >>> Yes, and that is correct. As I explained, a higher maximum number of >>> threads does indeed mean it is possible to reach the higher concurrency >>> needed for higher throughput, so virtual threads, by virtue of their >>> number, do allow for higher throughput. That statement is completely >>> accurate, and yet it means something very different from (the incorrect) >>> "increasing the number of threads increases throughput", which is how you >>> misinterpreted the statement. >>> >>> This is similar to saying that AC allows people to live in areas with >>> higher temperature, and that is a very different statement from saying that >>> AC increases the temperature (although I guess it happens to also do that). >>> >>> -- Ron >>> >> >> > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From ron.pressler at oracle.com Sun Jul 24 14:18:30 2022 From: ron.pressler at oracle.com (Ron Pressler) Date: Sun, 24 Jul 2022 14:18:30 +0000 Subject: [External] : Re: jstack, profilers and other tools In-Reply-To: References: <5B55B456-C2CC-4856-80A8-8A3F9E40CF05@oracle.com> <0E494B4B-ADCF-4CBB-AFFF-F463CB05AD0E@oracle.com> <7EC05322-6BD9-4A17-9AB3-62115F3940D0@oracle.com> <271F8C58-F096-42B2-9432-E5643A7F133C@oracle.com> <0F187B4E-65FE-4BF0-86DE-661088C65DD8@oracle.com> <1FA73DAE-FD5D-4CC6-9638-C9985C20D286@oracle.com> <5922F1B9-C3E1-4D0C-BA61-64EFBB8FD88E@oracle.com> Message-ID: <75424126-7EB0-43BD-9DB6-F28F29396473@oracle.com> Little's law dictates that concurrency must rise with throughput (= request rate) assuming latency doesn't drop, i.e. the number of requests being served rises. If your program is thread-per-request then, by definition, if the number of requests rises so must the number of threads. Of course, while we're only talking about thread-per-request here, if you choose to do something else you can disentangle requests from threads and then concurrency is not tied to the number of threads (but then you give up on synchronous code and on full platform support). All this is covered in JEP 425, which explains that to reach higher throughputs you must either abandon the thread as the unit of concurrency and write asynchronous code, or use threads that can be plentiful. The reasons we invested so much in making threads that can be plentiful are: 1. There are many people who prefer the synchronous style, and 2. The asynchronous style is fundamentally at odds with the design of the language and the platform, which cannot support it as well as they can the synchronous style (at least not without an overhaul of very basic concepts). BTW, I don't understand your point about there being "other laws at play." Little's law is not a physical law subject to refutation by observation, but a mathematical theorem.
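Rearranged for a thread-per-request server, this argument caps sustainable throughput at N / W for N threads and per-request latency W. A sketch (illustrative only; the class and method names are invented for this example):

```java
// Rearranging Little's law for a thread-per-request server: with at most N
// threads and per-request latency W, sustainable throughput cannot exceed N / W.
public class ThroughputBound {
    static double maxThroughput(int maxThreads, double latencySeconds) {
        return maxThreads / latencySeconds;
    }

    public static void main(String[] args) {
        // 1,000 platform threads at 10 ms per request: at most ~100,000 req/s.
        System.out.println(maxThroughput(1_000, 0.010));
        // 1,000,000 virtual threads at the same latency raise the cap 1000-fold.
        System.out.println(maxThroughput(1_000_000, 0.010));
    }
}
```

Raising N does not by itself raise throughput; it raises the ceiling that a low thread count would otherwise impose.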
As such, short of an inconsistency in the foundation of mathematics, all other mathematical theorems must be consistent with it. To accommodate higher request rates, latency must drop or concurrency must rise -- that must always be true. Other laws may state other things that must also be true, but they cannot contradict this. -- Ron On 24 Jul 2022, at 14:07, Alex Otenko > wrote: I think none of this statement has anything to do with Little's law. On Sat, 23 Jul 2022, 02:04 Ron Pressler, > wrote: We're talking about thread-per-request programs. In such programs, one thread has a concurrency of one (i.e. it handles one request, hence "thread-per-request"). As I explained, to get higher concurrency than what's allowed by the number of OS threads you can *either* use user-mode threads *or* not represent a unit of concurrency as a thread, but here we're talking about the former. All that is covered in JEP 425. -- Ron On 23 Jul 2022, at 00:25, Alex Otenko > wrote: I think the single threaded example I gave speaks for itself. 1 thread can sustain various throughputs with various concurrency. I've shown a case with 99 concurrent requests, as per Little's law (and I agree with it), and it's easy to see how to get any higher concurrency. There are other laws at play, too, so my example latency wasn't random. But this has been long enough. On Thu, 21 Jul 2022, 12:30 Ron Pressler, > wrote: Little's law has no notion of threads, only of "requests." But if you're talking about a *thread-per-request* program, as I made explicitly clear, then the number of threads is equal to or greater than the number of requests. And yes, if the *maximum* thread count is low, a thread-per-request program will have a low bound on the number of concurrent requests, and hence, by Little's law, on throughput. --
Ron On 20 Jul 2022, at 19:24, Alex Otenko > wrote: To me that statement implies a few things: - that Little's law talks of thread count - that if thread count is low, can't have throughput advantage Well, I don't feel like discussing my imperfect grasp of English. On Tue, 19 Jul 2022, 23:52 Ron Pressler, > wrote: On 19 Jul 2022, at 18:38, Alex Otenko > wrote: Agreed about the architectural advantages. The email that triggered my rant did contain the claim that using Virtual threads has the advantage of higher concurrency. > The throughput advantage to virtual threads comes from one aspect -- their *number* -- as explained by Little's law. Yes, and that is correct. As I explained, a higher maximum number of threads does indeed mean it is possible to reach the higher concurrency needed for higher throughput, so virtual threads, by virtue of their number, do allow for higher throughput. That statement is completely accurate, and yet it means something very different from (the incorrect) "increasing the number of threads increases throughput", which is how you misinterpreted the statement. This is similar to saying that AC allows people to live in areas with higher temperature, and that is a very different statement from saying that AC increases the temperature (although I guess it happens to also do that). -- Ron -------------- next part -------------- An HTML attachment was scrubbed...
URL: From ron.pressler at oracle.com Sun Jul 24 14:39:39 2022 From: ron.pressler at oracle.com (Ron Pressler) Date: Sun, 24 Jul 2022 14:39:39 +0000 Subject: [External] : Re: jstack, profilers and other tools In-Reply-To: References: <5B55B456-C2CC-4856-80A8-8A3F9E40CF05@oracle.com> <0E494B4B-ADCF-4CBB-AFFF-F463CB05AD0E@oracle.com> <7EC05322-6BD9-4A17-9AB3-62115F3940D0@oracle.com> <271F8C58-F096-42B2-9432-E5643A7F133C@oracle.com> <0F187B4E-65FE-4BF0-86DE-661088C65DD8@oracle.com> <1FA73DAE-FD5D-4CC6-9638-C9985C20D286@oracle.com> <838D57E6-8671-4F6C-8792-E95E6D2DAD87@oracle.com> Message-ID: <859E7F3B-2C17-421A-9C97-EA6BD2EE70DB@oracle.com> On 24 Jul 2022, at 14:05, Alex Otenko > wrote: I am not sure what made you think I am fighting for async style everywhere. I am merely pointing out that some tasks are harder to solve in sync code - and impossible to solve, if you rely on library/built-in that isn't doing what you want. Like, if JDK doesn't expose a mechanism to prioritize reads, you can't solve the problem of reading fewer requests completely vs more requests partially. I disagree with your claims, and the JDK exposes as many mechanisms to prioritise threads as it does to prioritise callback calls, and composition of synchronous constructs is easier in Java. Also, I am not clear why you dismissed the allocation problem by just saying IO buffering is the same. The allocation problem is that user code allocates before read(byte[]) is called. This is the same in both sync and async code. The difference is that in async code the lifespan of the allocation is shorter - we read only read-ready channels, and those that aren't ready return quickly. In sync code the buffer remains allocated for a long time - even seconds, if that's what the connection reuse pattern is on the client side. I am not dismissing it, merely claiming it is untrue. 
The JDK provides the same mechanisms that would allow synchronous and asynchronous I/O libraries to allocate buffers only for read-ready channels. It is true that the JDK's built-in synchronous I/O constructs don't do that, but the JDK's built-in asynchronous ones don't do that either. So both are equally possible, and both are equally not done by the JDK itself. -- Ron On Sat, 23 Jul 2022, 02:00 Ron Pressler, > wrote: > On 23 Jul 2022, at 00:06, Alex Otenko > wrote: > > I am familiar with the bijection between types and continuation-passing. There is however a barrier. Who does the thing of interest in each of the styles: the caller or the callee. > > In terms of type theory it doesn't matter; both styles have the same power. In terms of software engineering it does matter. It is impossible to plug custom logic in a library that you import, unless a mechanism is planned at design time, and built in - most commonly a bunch of configuration options to tweak a parameterised algorithm; not really the ability to implement arbitrary algorithms that the theoretical bijection requires. So in practice you are limited to doing only things a caller can do. > > Same for allocation. It is doable to get the same behaviour as in async API, but it is not what InputStream.read(byte[]) does - the method signature forces you to allocate and wait. And there is no mechanism to tell the blocking API to read 10k of 100 requests instead of 100 bytes of 10k requests. It's doable, but it is not there. No, all that is simply incorrect. The customisability of synchronous code is at least as ergonomic and flexible as for async code if not more so (because Java is built around synchronous primitives, and its basic composition operators are made for synchronous primitives), and the I/O buffering primitives provided by the JDK are no different between synchronous and asynchronous.
I feel like you're trying to find some justification for your aesthetic preference for asynchronous code, and there's really no need. If you enjoy it more, and you don't need the observability support from the runtime -- by all means keep using it. Our goal isn't to get fans of asynchronous code to abandon using it. It is to give the same benefits to those who prefer using synchronous code, and we can even go further because it is a better fit for the design of the Java platform and so that's where we can support the code better, both in the runtime and the language. But if you like async better -- use async. But while we're in the weeds, there are some interesting differences re memory usage, although they are not fundamentally about sync vs async, but some technical specifics of the JDK and how virtual threads are implemented. This might be of interest to the curious. Whether you use the async or sync style, there's a need to pass data from computation done before the call to computation done after. In Java, the async style normally requires allocating a new object to do that, while sync reuses the same mutable stack. Hypothetically, async code could implement such a mutable stack manually, but in Java it is difficult, because, unlike in C, heap objects cannot switch between storing pointers and primitives in the same memory cell. Stacks are designed to allow that thanks to special GC protocols that were adapted for virtual threads. On the other hand, for simplicity and performance reasons in the JIT compiler, the way data is stored in the stack is wasteful (so fewer but bigger objects are allocated). We're now working on making it more compact in the case of virtual threads. -- Ron -------------- next part -------------- An HTML attachment was scrubbed...
URL: From pedro.lamarao at prodist.com.br Sun Jul 24 15:52:23 2022 From: pedro.lamarao at prodist.com.br (=?UTF-8?Q?Pedro_Lamar=C3=A3o?=) Date: Sun, 24 Jul 2022 12:52:23 -0300 Subject: [External] : Re: jstack, profilers and other tools In-Reply-To: References: <5B55B456-C2CC-4856-80A8-8A3F9E40CF05@oracle.com> <0E494B4B-ADCF-4CBB-AFFF-F463CB05AD0E@oracle.com> <7EC05322-6BD9-4A17-9AB3-62115F3940D0@oracle.com> <271F8C58-F096-42B2-9432-E5643A7F133C@oracle.com> <0F187B4E-65FE-4BF0-86DE-661088C65DD8@oracle.com> <1FA73DAE-FD5D-4CC6-9638-C9985C20D286@oracle.com> <838D57E6-8671-4F6C-8792-E95E6D2DAD87@oracle.com> Message-ID: On Sun, 24 Jul 2022 at 10:08, Alex Otenko < oleksandr.otenko at gmail.com> wrote: > Also, I am not clear why you dismissed the allocation problem by just > saying IO buffering is the same. The allocation problem is that user code > allocates before read(byte[]) is called. This is the same in both sync and > async code. The difference is that in async code the lifespan of the > allocation is shorter - we read only read-ready channels, and those that > aren't ready return quickly. In sync code the buffer remains allocated for > a long time - even seconds, if that's what the connection reuse pattern is > on the client side. > I am not sure what this allocation problem is supposed to be. If you have provisioned the memory required to support your workload, it doesn't matter for how long some buffer remains "inside" a call. If you have not provisioned the memory required to support your workload, then you have an architectural problem no matter how little time some buffer remains "inside" a call; you will crash when concurrent reads peak. Am I missing something? -- Pedro Lamarão -------------- next part -------------- An HTML attachment was scrubbed...
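Pedro's provisioning argument reduces to simple arithmetic: what must be provisioned is the peak number of simultaneously pinned read buffers times the buffer size, and how long each individual buffer stays pinned does not change that product. A sketch with illustrative numbers (the 10 kB buffer size and connection counts are hypothetical, not measurements):

```java
public class BufferProvisioning {
    // Peak pinned buffer memory: concurrent pinned buffers times buffer size.
    static long pinnedBytes(int connections, int bufferBytes) {
        return (long) connections * bufferBytes;
    }

    public static void main(String[] args) {
        int bufferBytes = 10_000; // hypothetical 10 kB read buffer per connection
        // Draining only 100 read-ready connections at a time:
        System.out.println(pinnedBytes(100, bufferBytes) / 1_000_000 + " MB");
        // One buffer pinned per blocked read across 10,000 open connections:
        System.out.println(pinnedBytes(10_000, bufferBytes) / 1_000_000 + " MB");
    }
}
```

Either way, the number that must fit in provisioned memory is the peak of this product, which is Pedro's point.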
URL: From oleksandr.otenko at gmail.com Sun Jul 24 18:16:22 2022 From: oleksandr.otenko at gmail.com (Alex Otenko) Date: Sun, 24 Jul 2022 19:16:22 +0100 Subject: [External] : Re: jstack, profilers and other tools In-Reply-To: References: <5B55B456-C2CC-4856-80A8-8A3F9E40CF05@oracle.com> <0E494B4B-ADCF-4CBB-AFFF-F463CB05AD0E@oracle.com> <7EC05322-6BD9-4A17-9AB3-62115F3940D0@oracle.com> <271F8C58-F096-42B2-9432-E5643A7F133C@oracle.com> <0F187B4E-65FE-4BF0-86DE-661088C65DD8@oracle.com> <1FA73DAE-FD5D-4CC6-9638-C9985C20D286@oracle.com> <838D57E6-8671-4F6C-8792-E95E6D2DAD87@oracle.com> Message-ID: Correct. But if you control how many connections you are reading, then you control how much memory you spend on that. Hence reading 100 connections 10kb at a time won't pin more than 1mb. But if you don't have control over how many connections you are reading from, which is the case with a sync API, then you might be pinning 100mb before one request becomes available. On Sun, 24 Jul 2022, 16:52 Pedro Lamarão, wrote: > On Sun, 24 Jul 2022 at 10:08, Alex Otenko < > oleksandr.otenko at gmail.com> wrote: > > >> Also, I am not clear why you dismissed the allocation problem by just >> saying IO buffering is the same. The allocation problem is that user code >> allocates before read(byte[]) is called. This is the same in both sync and >> async code. The difference is that in async code the lifespan of the >> allocation is shorter - we read only read-ready channels, and those that >> aren't ready return quickly. In sync code the buffer remains allocated for >> a long time - even seconds, if that's what the connection reuse pattern is >> on the client side. >> > > I am not sure what this allocation problem is supposed to be. > If you have provisioned the memory required to support your workload, > it doesn't matter for how long some buffer remains "inside" a call.
> If you have not provisioned the memory required to support your workload, > then you have an architectural problem no matter how little time some > buffer remains "inside" a call; > you will crash when concurrent reads peak. > Am I missing something? > > -- > Pedro Lamarão > -------------- next part -------------- An HTML attachment was scrubbed... URL: From oleksandr.otenko at gmail.com Sun Jul 24 18:26:11 2022 From: oleksandr.otenko at gmail.com (Alex Otenko) Date: Sun, 24 Jul 2022 19:26:11 +0100 Subject: [External] : Re: jstack, profilers and other tools In-Reply-To: <75424126-7EB0-43BD-9DB6-F28F29396473@oracle.com> References: <5B55B456-C2CC-4856-80A8-8A3F9E40CF05@oracle.com> <0E494B4B-ADCF-4CBB-AFFF-F463CB05AD0E@oracle.com> <7EC05322-6BD9-4A17-9AB3-62115F3940D0@oracle.com> <271F8C58-F096-42B2-9432-E5643A7F133C@oracle.com> <0F187B4E-65FE-4BF0-86DE-661088C65DD8@oracle.com> <1FA73DAE-FD5D-4CC6-9638-C9985C20D286@oracle.com> <5922F1B9-C3E1-4D0C-BA61-64EFBB8FD88E@oracle.com> <75424126-7EB0-43BD-9DB6-F28F29396473@oracle.com> Message-ID: The "other laws" don't contradict Little's law, they only explain that you can't have an equals sign between thread count and throughput. Let me remind you what I mean. 1 thread, 10ms per request. At request rate 66.667 concurrency is 2, and at request rate 99 concurrency is 99. Etc. All of this is because response time gets worse as the "other laws" predict. But we already see thread count is not a cap on concurrency, as was one of the claims earlier in this thread. If we increase thread count we can improve response times. But at thread count 5 or 6 you are only 1 microsecond away from the "optimal" 10ms response time. Whereas arithmetically the situation keeps improving (by an ever smaller fraction of a microsecond), the mathematics of it cannot capture the notion of diminished returns. So that answers why we are typically fine with a small thread count.
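The figures above (1 thread, 10 ms service time, concurrency 2 at rate 66.667/s and concurrency 99 at rate 99/s) match the mean number in system of a single-server M/M/1 queue, L = lambda / (mu - lambda). Assuming that queueing model, which is what these numbers appear to come from, they can be reproduced:

```java
public class MM1Concurrency {
    // Mean number of requests in the system for an M/M/1 queue:
    // L = lambda / (mu - lambda), valid while lambda < mu (a stable system).
    static double concurrency(double arrivalRate, double serviceRate) {
        return arrivalRate / (serviceRate - arrivalRate);
    }

    public static void main(String[] args) {
        double mu = 100.0; // one thread, 10 ms per request -> service rate 100 requests/s
        System.out.printf("rate 66.667/s -> concurrency %.0f%n", concurrency(200.0 / 3.0, mu));
        System.out.printf("rate 99/s     -> concurrency %.0f%n", concurrency(99.0, mu));
    }
}
```

At rate 99/s the single server is at 99% utilisation, which is why the queue, and with it the concurrency in Little's-law terms, blows up even though the thread count is 1.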
Alex On Sun, 24 Jul 2022, 15:18 Ron Pressler, wrote: > Little's law dictates that concurrency must rise with throughput (= > request rate) assuming latency doesn't drop, i.e. the number of requests > being served rises. If your program is thread-per-request then, by > definition, if the number of requests rises so must the number of threads. > Of course, while we're only talking about thread-per-request here, if you > choose to do something else you can disentangle requests from threads and > then concurrency is not tied to the number of threads (but then you give up > on synchronous code and on full platform support). > > All this is covered in JEP 425, which explains that to reach higher > throughputs you must either abandon the thread as the unit of concurrency > and write asynchronous code, or use threads that can be plentiful. The > reasons we invested so much in making threads that can be plentiful are: 1. > There are many people who prefer the synchronous style, and 2. The > asynchronous style is fundamentally at odds with the design of the language > and the platform, which cannot support it as well as they can the > synchronous style (at least not without an overhaul to very basic > concepts). > > BTW, I don't understand your point about there being "other laws at play." > Little's law is not a physical law subject to refutation by observation, > but a mathematical theorem. As such, short of an inconsistency in the > foundation of mathematics, all other mathematical theorems must be > consistent with it. To accommodate higher request rates, latency must drop > or concurrency must rise - that must always be true. Other laws may state > other things that must also be true, but they cannot contradict this. > > -- Ron > > On 24 Jul 2022, at 14:07, Alex Otenko wrote: > > I think none of this statement has anything to do with Little's law. > > On Sat, 23 Jul 2022, 02:04 Ron Pressler, wrote: > >> We're talking about thread-per-request programs.
In such programs, one >> thread has a concurrency of one (i.e. it handles one request, hence >> ?thread-per-request?). As I explained, to get higher concurrency than >> what?s allowed by the number of OS threads you can *either* use user-mode >> threads *or* not represent a unit of concurrency as a thread, but here >> we?re talking about the former. All that is covered in JEP 425. >> >> ? Ron >> >> On 23 Jul 2022, at 00:25, Alex Otenko wrote: >> >> I think the single threaded example I gave speaks for itself. 1 thread >> can sustain various throughputs with various concurrency. I've shown a case >> with 99 concurrent requests, as per Little's law (and I agree with it), >> and it's easy to see how to get any higher concurrency. >> >> There are other laws at play, too, so my example latency wasn't random. >> But this has been long enough. >> >> >> On Thu, 21 Jul 2022, 12:30 Ron Pressler, wrote: >> >>> Little?s law has no notion of threads, only of ?requests.? But if you?re >>> talking about a *thread-per-request* program, as I made explicitly clear, >>> then the number of threads is equal to or greater than the number of >>> requests. >>> >>> And yes, if the *maximum* thread count is low, a thread-per-request >>> program will have a low bound on the number of concurrent requests, and >>> hence, by Little?s law, on throughput. >>> >>> ? Ron >>> >>> On 20 Jul 2022, at 19:24, Alex Otenko >>> wrote: >>> >>> To me that statement implies a few things: >>> >>> - that Little's law talks of thread count >>> >>> - that if thread count is low, can't have throughput advantage >>> >>> >>> Well, I don't feel like discussing my imperfect grasp of English. >>> >>> On Tue, 19 Jul 2022, 23:52 Ron Pressler, >>> wrote: >>> >>>> >>>> >>>> On 19 Jul 2022, at 18:38, Alex Otenko >>>> wrote: >>>> >>>> Agreed about the architectural advantages. >>>> >>>> The email that triggered my rant did contain the claim that using >>>> Virtual threads has the advantage of higher concurrency. 
>>>> >>>> > The throughput advantage to virtual threads comes from one aspect - >>>> their *number* - as explained by Little's law. >>>> >>>> >>>> >>>> Yes, and that is correct. As I explained, a higher maximum number of >>>> threads does indeed mean it is possible to reach the higher concurrency >>>> needed for higher throughput, so virtual threads, by virtue of their >>>> number, do allow for higher throughput. That statement is completely >>>> accurate, and yet it means something very different from (the incorrect) >>>> "increasing the number of threads increases throughput", which is how you >>>> misinterpreted the statement. >>>> >>>> This is similar to saying that AC allows people to live in areas with >>>> higher temperature, and that is a very different statement from saying that >>>> AC increases the temperature (although I guess it happens to also do that). >>>> >>>> -- Ron >>>> >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ron.pressler at oracle.com Sun Jul 24 19:18:43 2022 From: ron.pressler at oracle.com (Ron Pressler) Date: Sun, 24 Jul 2022 19:18:43 +0000 Subject: [External] : Re: jstack, profilers and other tools In-Reply-To: References: <5B55B456-C2CC-4856-80A8-8A3F9E40CF05@oracle.com> <0E494B4B-ADCF-4CBB-AFFF-F463CB05AD0E@oracle.com> <7EC05322-6BD9-4A17-9AB3-62115F3940D0@oracle.com> <271F8C58-F096-42B2-9432-E5643A7F133C@oracle.com> <0F187B4E-65FE-4BF0-86DE-661088C65DD8@oracle.com> <1FA73DAE-FD5D-4CC6-9638-C9985C20D286@oracle.com> <838D57E6-8671-4F6C-8792-E95E6D2DAD87@oracle.com> Message-ID: <6032BEC5-8A83-49B8-9730-F8D01ECCFB28@oracle.com> > On 24 Jul 2022, at 19:16, Alex Otenko wrote: > > Correct. But if you control how many connections you are reading, then you control how much memory you spend on that. Hence reading 100 connections 10kb at a time won't pin more than 1mb.
But if you don't have control over how many connections you are reading from, which is the case with sync API, then you might be pinning 100mb before one request becomes available. Sync APIs give you exactly the same control as async APIs over anything. ? Ron From ron.pressler at oracle.com Sun Jul 24 19:44:57 2022 From: ron.pressler at oracle.com (Ron Pressler) Date: Sun, 24 Jul 2022 19:44:57 +0000 Subject: [External] : Re: jstack, profilers and other tools In-Reply-To: References: <5B55B456-C2CC-4856-80A8-8A3F9E40CF05@oracle.com> <0E494B4B-ADCF-4CBB-AFFF-F463CB05AD0E@oracle.com> <7EC05322-6BD9-4A17-9AB3-62115F3940D0@oracle.com> <271F8C58-F096-42B2-9432-E5643A7F133C@oracle.com> <0F187B4E-65FE-4BF0-86DE-661088C65DD8@oracle.com> <1FA73DAE-FD5D-4CC6-9638-C9985C20D286@oracle.com> <5922F1B9-C3E1-4D0C-BA61-64EFBB8FD88E@oracle.com> <75424126-7EB0-43BD-9DB6-F28F29396473@oracle.com> Message-ID: <2F640F43-3233-4C0D-90C3-243396DE69BD@oracle.com> > On 24 Jul 2022, at 19:26, Alex Otenko wrote: > > The "other laws" don't contradict Little's law, they only explain that you can't have an equals sign between thread count and throughput. That there is an equals sign when the system is stable is a mathematical theorem, so there cannot exist a correct explanation for its falsehood. Your feelings about it are irrelevant to its correctness. It is a theorem. > > Let me remind you what I mean. > > 1 thread, 10ms per request. At request rate 66.667 concurrency is 2, and at request rate 99 concurrency is 99. Etc. All of this is because response time gets worse as the "other laws" predict. But we already see thread count is not a cap on concurrency, as was one of the claims earlier in this thread. > > If we increase thread count we can improve response times. But at thread count 5 or 6 you are only 1 microsecond away from the "optimal" 10ms response time. 
Whereas arithmetically the situation keeps improving (by an ever smaller fraction of a microsecond), the mathematics of it cannot capture the notion of diminished returns. First, as I must have repeated three times in this discussion, we?re talking about thread-per-request (please read JEP 425, as all this is explained there), so by definition, we?re talking about cases where the number of threads is equal to or greater than the concurrency, i.e. the number of requests in flight. Second, as I must have repeated at least three times in this discussion, increasing the number of threads does nothing. A mathematical theorem tells us what the concurrency *is equal to* in a stable system. It cannot possibly be any lower or any higher. So your number of requests that are being processed is equal to some number if your system is stable, and in the case of thread-per-request programs, which are our topic, the number of threads processing requests is exactly equal to the number of concurrent requests times the number of threads per request, which is at least one by definition. If you add any more threads they cannot be processing requests, and if you have fewer threads then your system isn?t stable. Finally, if your latency starts going up, then so does your concurrency, up to the point where one of your software or hardware components reaches its peak concurrency and your server destabilises. While Little?s law tells you what the concurrency is equal to (and so, in a thread-per-request program what the number of request-processing threads is equal to), the number of threads is not the only limit on the maximum capacity. We know that in a thread-per-request server, every request consumes at least one thread, but it consumes other resources as well, and they, too, place limitations on concurrency. All this is factored into the bounds on the concurrency level. 
It?s just that we empirically know that the limitation on threads is hit *first* by many servers, which is why async APIs and lightweight user-mode threads were invented. Note that Little?s law, being a mathematical theorem, applies to every component separately, too. I.e., you can treat your CPU as the server, the requests would be the processing bursts, and the maximum concurrency would be the number of cores. > > So that answers why we are typically fine with a small thread count. > That we are not typically fine writing scalable thread-per-request programs with few threads is the reason why async I/O and user-mode threads were created. It is possible some people are fine, but clearly many are not. If your thread-per-request program needs to handle only a small number of requests concurrently, and so needs only a few threads, then there?s no need for you to use virtual threads. That is exactly why, when this discussion started what feels like a year ago, I said that when there are virtual threads, there must be many of them (or else they?re not needed). ? Ron From ron.pressler at oracle.com Sun Jul 24 21:48:39 2022 From: ron.pressler at oracle.com (Ron Pressler) Date: Sun, 24 Jul 2022 21:48:39 +0000 Subject: [External] : Re: jstack, profilers and other tools In-Reply-To: <2F640F43-3233-4C0D-90C3-243396DE69BD@oracle.com> References: <5B55B456-C2CC-4856-80A8-8A3F9E40CF05@oracle.com> <0E494B4B-ADCF-4CBB-AFFF-F463CB05AD0E@oracle.com> <7EC05322-6BD9-4A17-9AB3-62115F3940D0@oracle.com> <271F8C58-F096-42B2-9432-E5643A7F133C@oracle.com> <0F187B4E-65FE-4BF0-86DE-661088C65DD8@oracle.com> <1FA73DAE-FD5D-4CC6-9638-C9985C20D286@oracle.com> <5922F1B9-C3E1-4D0C-BA61-64EFBB8FD88E@oracle.com> <75424126-7EB0-43BD-9DB6-F28F29396473@oracle.com> <2F640F43-3233-4C0D-90C3-243396DE69BD@oracle.com> Message-ID: P.S. 
In case I didn?t make it abundantly clear, your example is not of a thread-per-request program, while I'm talking about thread-per-request programs (that is the source of the association between number of threads and level of concurrency) where a thread, by definition, can only handle a single request. That?s the style virtual threads are aimed at (as JEP 425 explicitly states), because that?s the style many people prefer and the style Java can best support. Abandoning thread-per-request is, indeed, another way of getting around the scarcity of OS threads (as JEP 425 explains), but it has no relevance to virtual threads. When talking about virtual threads, we take it as a given that a thread does not handle more than one request. ? Ron > On 24 Jul 2022, at 20:44, Ron Pressler wrote: > > > >> On 24 Jul 2022, at 19:26, Alex Otenko wrote: >> >> The "other laws" don't contradict Little's law, they only explain that you can't have an equals sign between thread count and throughput. > > That there is an equals sign when the system is stable is a mathematical theorem, so there cannot exist a correct explanation for its falsehood. Your feelings about it are irrelevant to its correctness. It is a theorem. > >> >> Let me remind you what I mean. >> >> 1 thread, 10ms per request. At request rate 66.667 concurrency is 2, and at request rate 99 concurrency is 99. Etc. All of this is because response time gets worse as the "other laws" predict. But we already see thread count is not a cap on concurrency, as was one of the claims earlier in this thread. >> >> If we increase thread count we can improve response times. But at thread count 5 or 6 you are only 1 microsecond away from the "optimal" 10ms response time. Whereas arithmetically the situation keeps improving (by an ever smaller fraction of a microsecond), the mathematics of it cannot capture the notion of diminished returns. 
> > First, as I must have repeated three times in this discussion, we?re talking about thread-per-request (please read JEP 425, as all this is explained there), so by definition, we?re talking about cases where the number of threads is equal to or greater than the concurrency, i.e. the number of requests in flight. > > Second, as I must have repeated at least three times in this discussion, increasing the number of threads does nothing. A mathematical theorem tells us what the concurrency *is equal to* in a stable system. It cannot possibly be any lower or any higher. So your number of requests that are being processed is equal to some number if your system is stable, and in the case of thread-per-request programs, which are our topic, the number of threads processing requests is exactly equal to the number of concurrent requests times the number of threads per request, which is at least one by definition. If you add any more threads they cannot be processing requests, and if you have fewer threads then your system isn?t stable. > > Finally, if your latency starts going up, then so does your concurrency, up to the point where one of your software or hardware components reaches its peak concurrency and your server destabilises. While Little?s law tells you what the concurrency is equal to (and so, in a thread-per-request program what the number of request-processing threads is equal to), the number of threads is not the only limit on the maximum capacity. We know that in a thread-per-request server, every request consumes at least one thread, but it consumes other resources as well, and they, too, place limitations on concurrency. All this is factored into the bounds on the concurrency level. It?s just that we empirically know that the limitation on threads is hit *first* by many servers, which is why async APIs and lightweight user-mode threads were invented. > > Note that Little?s law, being a mathematical theorem, applies to every component separately, too. 
I.e., you can treat your CPU as the server, the requests would be the processing bursts, and the maximum concurrency would be the number of cores. > >> >> So that answers why we are typically fine with a small thread count. >> > > That we are not typically fine writing scalable thread-per-request programs with few threads is the reason why async I/O and user-mode threads were created. It is possible some people are fine, but clearly many are not. If your thread-per-request program needs to handle only a small number of requests concurrently, and so needs only a few threads, then there?s no need for you to use virtual threads. That is exactly why, when this discussion started what feels like a year ago, I said that when there are virtual threads, there must be many of them (or else they?re not needed). > > ? Ron > > From oleksandr.otenko at gmail.com Mon Jul 25 08:16:45 2022 From: oleksandr.otenko at gmail.com (Alex Otenko) Date: Mon, 25 Jul 2022 09:16:45 +0100 Subject: [External] : Re: jstack, profilers and other tools In-Reply-To: <2F640F43-3233-4C0D-90C3-243396DE69BD@oracle.com> References: <5B55B456-C2CC-4856-80A8-8A3F9E40CF05@oracle.com> <0E494B4B-ADCF-4CBB-AFFF-F463CB05AD0E@oracle.com> <7EC05322-6BD9-4A17-9AB3-62115F3940D0@oracle.com> <271F8C58-F096-42B2-9432-E5643A7F133C@oracle.com> <0F187B4E-65FE-4BF0-86DE-661088C65DD8@oracle.com> <1FA73DAE-FD5D-4CC6-9638-C9985C20D286@oracle.com> <5922F1B9-C3E1-4D0C-BA61-64EFBB8FD88E@oracle.com> <75424126-7EB0-43BD-9DB6-F28F29396473@oracle.com> <2F640F43-3233-4C0D-90C3-243396DE69BD@oracle.com> Message-ID: Well, there are a few things I said several times too, so we are in the same boat. :) Ok, just open your favourite modelling software and see: Given a request rate and a request processing time, there is a minimal number of threads that can process them. That's the capacity needed to do work (i.e. for the system to remain stable). Thread-per-request is simply the maximum number of threads you can have to process that work. 
Then you can see what your favourite modelling software says about concurrency. It says that as you add threads, concurrency in the sense used in Little's law decreases. Since this is also a mathematical fact, something in the claim that adding threads increases concurrency, needs reconciling. Alex On Sun, 24 Jul 2022, 20:45 Ron Pressler, wrote: > > > > On 24 Jul 2022, at 19:26, Alex Otenko > wrote: > > > > The "other laws" don't contradict Little's law, they only explain that > you can't have an equals sign between thread count and throughput. > > That there is an equals sign when the system is stable is a mathematical > theorem, so there cannot exist a correct explanation for its falsehood. > Your feelings about it are irrelevant to its correctness. It is a theorem. > > > > > Let me remind you what I mean. > > > > 1 thread, 10ms per request. At request rate 66.667 concurrency is 2, and > at request rate 99 concurrency is 99. Etc. All of this is because response > time gets worse as the "other laws" predict. But we already see thread > count is not a cap on concurrency, as was one of the claims earlier in this > thread. > > > > If we increase thread count we can improve response times. But at thread > count 5 or 6 you are only 1 microsecond away from the "optimal" 10ms > response time. Whereas arithmetically the situation keeps improving (by an > ever smaller fraction of a microsecond), the mathematics of it cannot > capture the notion of diminished returns. > > First, as I must have repeated three times in this discussion, we?re > talking about thread-per-request (please read JEP 425, as all this is > explained there), so by definition, we?re talking about cases where the > number of threads is equal to or greater than the concurrency, i.e. the > number of requests in flight. > > Second, as I must have repeated at least three times in this discussion, > increasing the number of threads does nothing. 
A mathematical theorem tells > us what the concurrency *is equal to* in a stable system. It cannot > possibly be any lower or any higher. So your number of requests that are > being processed is equal to some number if your system is stable, and in > the case of thread-per-request programs, which are our topic, the number > of threads processing requests is exactly equal to the number of concurrent > requests times the number of threads per request, which is at least one by > definition. If you add any more threads they cannot be processing requests, > and if you have fewer threads then your system isn?t stable. > > Finally, if your latency starts going up, then so does your concurrency, > up to the point where one of your software or hardware components reaches > its peak concurrency and your server destabilises. While Little?s law tells > you what the concurrency is equal to (and so, in a thread-per-request > program what the number of request-processing threads is equal to), the > number of threads is not the only limit on the maximum capacity. We know > that in a thread-per-request server, every request consumes at least one > thread, but it consumes other resources as well, and they, too, place > limitations on concurrency. All this is factored into the bounds on the > concurrency level. It?s just that we empirically know that the limitation > on threads is hit *first* by many servers, which is why async APIs and > lightweight user-mode threads were invented. > > Note that Little?s law, being a mathematical theorem, applies to every > component separately, too. I.e., you can treat your CPU as the server, the > requests would be the processing bursts, and the maximum concurrency would > be the number of cores. > > > > > So that answers why we are typically fine with a small thread count. > > > > That we are not typically fine writing scalable thread-per-request > programs with few threads is the reason why async I/O and user-mode threads > were created. 
It is possible some people are fine, but clearly many are > not. If your thread-per-request program needs to handle only a small number > of requests concurrently, and so needs only a few threads, then there's no > need for you to use virtual threads. That is exactly why, when this > discussion started what feels like a year ago, I said that when there are > virtual threads, there must be many of them (or else they're not needed). > > -- Ron > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ron.pressler at oracle.com Mon Jul 25 09:18:54 2022 From: ron.pressler at oracle.com (Ron Pressler) Date: Mon, 25 Jul 2022 09:18:54 +0000 Subject: [External] : Re: jstack, profilers and other tools In-Reply-To: References: <5B55B456-C2CC-4856-80A8-8A3F9E40CF05@oracle.com> <0E494B4B-ADCF-4CBB-AFFF-F463CB05AD0E@oracle.com> <7EC05322-6BD9-4A17-9AB3-62115F3940D0@oracle.com> <271F8C58-F096-42B2-9432-E5643A7F133C@oracle.com> <0F187B4E-65FE-4BF0-86DE-661088C65DD8@oracle.com> <1FA73DAE-FD5D-4CC6-9638-C9985C20D286@oracle.com> <5922F1B9-C3E1-4D0C-BA61-64EFBB8FD88E@oracle.com> <75424126-7EB0-43BD-9DB6-F28F29396473@oracle.com> <2F640F43-3233-4C0D-90C3-243396DE69BD@oracle.com> Message-ID: <20468672-2291-4E56-890B-20F80DD2BCC7@oracle.com> You are talking about systems where a thread processes more than one request. That is the very opposite of what thread-per-request means. Also, as I said over and over and over, there is no claim that adding threads increases concurrency. Please stop repeating that nonsense. Higher throughput in a thread-per-request system means higher concurrency, which, *in a thread-per-request program*, means adding threads, but adding threads does not increase the concurrency. Reading a book when it's dark outside requires turning on the lights, but reading a book with the lights on does not make it dark outside. Here are measurements from an actual server. As the laws of the universe require, they follow Little's law -
the system becomes unstable exactly when the equation breaks - as anything else would be impossible. Every simulation software will show you the same thing. If you do see anything else, then you're not talking about thread-per-request. Absolutely everything we've discussed is stated in JEP 425, so I would ask you to read it carefully, and only respond if you have a question about a particular section you can quote. Ask yourself if you're talking about programs where a thread can make progress on more than one request; if so, go back and think. -- Ron On 25 Jul 2022, at 09:16, Alex Otenko > wrote: Well, there are a few things I said several times too, so we are in the same boat. :) Ok, just open your favourite modelling software and see: Given a request rate and a request processing time, there is a minimal number of threads that can process them. That's the capacity needed to do the work (i.e. for the system to remain stable). Thread-per-request is simply the maximum number of threads you can have to process that work. Then you can see what your favourite modelling software says about concurrency. It says that as you add threads, concurrency in the sense used in Little's law decreases. Since this is also a mathematical fact, something in the claim that adding threads increases concurrency needs reconciling. Alex On Sun, 24 Jul 2022, 20:45 Ron Pressler, > wrote: > On 24 Jul 2022, at 19:26, Alex Otenko > wrote: > > The "other laws" don't contradict Little's law, they only explain that you can't have an equals sign between thread count and throughput. That there is an equals sign when the system is stable is a mathematical theorem, so there cannot exist a correct explanation for its falsehood. Your feelings about it are irrelevant to its correctness. It is a theorem. > > Let me remind you what I mean. > > 1 thread, 10ms per request. At request rate 66.667 concurrency is 2, and at request rate 99 concurrency is 99. Etc.
All of this is because response time gets worse as the "other laws" predict. But we already see thread count is not a cap on concurrency, as was one of the claims earlier in this thread. > > If we increase thread count we can improve response times. But at thread count 5 or 6 you are only 1 microsecond away from the "optimal" 10ms response time. Whereas arithmetically the situation keeps improving (by an ever smaller fraction of a microsecond), the mathematics of it cannot capture the notion of diminished returns. First, as I must have repeated three times in this discussion, we?re talking about thread-per-request (please read JEP 425, as all this is explained there), so by definition, we?re talking about cases where the number of threads is equal to or greater than the concurrency, i.e. the number of requests in flight. Second, as I must have repeated at least three times in this discussion, increasing the number of threads does nothing. A mathematical theorem tells us what the concurrency *is equal to* in a stable system. It cannot possibly be any lower or any higher. So your number of requests that are being processed is equal to some number if your system is stable, and in the case of thread-per-request programs, which are our topic, the number of threads processing requests is exactly equal to the number of concurrent requests times the number of threads per request, which is at least one by definition. If you add any more threads they cannot be processing requests, and if you have fewer threads then your system isn?t stable. Finally, if your latency starts going up, then so does your concurrency, up to the point where one of your software or hardware components reaches its peak concurrency and your server destabilises. While Little?s law tells you what the concurrency is equal to (and so, in a thread-per-request program what the number of request-processing threads is equal to), the number of threads is not the only limit on the maximum capacity. 
We know that in a thread-per-request server, every request consumes at least one thread, but it consumes other resources as well, and they, too, place limitations on concurrency. All this is factored into the bounds on the concurrency level. It's just that we empirically know that the limitation on threads is hit *first* by many servers, which is why async APIs and lightweight user-mode threads were invented.

Note that Little's law, being a mathematical theorem, applies to every component separately, too. I.e., you can treat your CPU as the server, the requests would be the processing bursts, and the maximum concurrency would be the number of cores.

> So that answers why we are typically fine with a small thread count.

That we are not typically fine writing scalable thread-per-request programs with few threads is the reason why async I/O and user-mode threads were created. It is possible some people are fine, but clearly many are not. If your thread-per-request program needs to handle only a small number of requests concurrently, and so needs only a few threads, then there's no need for you to use virtual threads. That is exactly why, when this discussion started what feels like a year ago, I said that when there are virtual threads, there must be many of them (or else they're not needed).

-- Ron

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: Screenshot 2022-07-25 at 10.05.53.png
Type: image/png
Size: 354087 bytes
Desc: Screenshot 2022-07-25 at 10.05.53.png
URL:

From org.openjdk at io7m.com Mon Jul 25 13:10:28 2022
From: org.openjdk at io7m.com (Mark Raynsford)
Date: Mon, 25 Jul 2022 13:10:28 +0000
Subject: Controlling the VirtualThread.scheduler
In-Reply-To:
References:
Message-ID: <20220725131028.0b42b61c@sunflower.int.arc7.info>

On 2022-06-07T10:33:44 +0000 Ron Pressler wrote:
> Hi.
>
> The plan is to ultimately allow custom schedulers, but we want to first let
> the ecosystem learn about virtual threads and their uses with the default
> scheduler.

Just a "me too" in case these messages are considered when prioritizing features. :)

A big use case I have for Loom is the same as another recent poster to this list: I have a DSL, and I want to expose multiple threads of control within the language without also introducing the data hazards inherent with running multiple kernel threads. That is, I don't care _when_ code runs, but I do care that it all happens on one kernel thread.

I did try several of the earlier builds that exposed the ability to set an executor service for the underlying kernel threads. I look forward to getting something along these lines back!

--
Mark Raynsford | https://www.io7m.com

From ron.pressler at oracle.com Tue Jul 26 09:15:06 2022
From: ron.pressler at oracle.com (Ron Pressler)
Date: Tue, 26 Jul 2022 09:15:06 +0000
Subject: [External] : Re: jstack, profilers and other tools
In-Reply-To:
References: <5B55B456-C2CC-4856-80A8-8A3F9E40CF05@oracle.com> <0E494B4B-ADCF-4CBB-AFFF-F463CB05AD0E@oracle.com> <7EC05322-6BD9-4A17-9AB3-62115F3940D0@oracle.com> <271F8C58-F096-42B2-9432-E5643A7F133C@oracle.com> <0F187B4E-65FE-4BF0-86DE-661088C65DD8@oracle.com> <1FA73DAE-FD5D-4CC6-9638-C9985C20D286@oracle.com> <5922F1B9-C3E1-4D0C-BA61-64EFBB8FD88E@oracle.com> <75424126-7EB0-43BD-9DB6-F28F29396473@oracle.com> <2F640F43-3233-4C0D-90C3-243396DE69BD@oracle.com> <20468672-2291-4E56-890B-20F80DD2BCC7@oracle.com>
Message-ID:

Let me make this as simple as I think I can:

1. We are talking *only* about a server that creates a new thread for every incoming request. That's how we define "thread-per-request." If what you have in mind is a server that operates in any other way, you're misunderstanding the conversation.

2.
While artificially increasing the number of threads in that server would do nothing, whatever that system's latency is, whatever its resource utilisation is, a rising rate of requests *will* result in that server having more threads that are alive concurrently (by virtue of how it operates, as a rising request rate will not cause that server to reduce latency); i.e. it's the increased throughput that causes the number of threads to rise, not vice-versa. Therefore, to cope with high request rates that server must have the capacity for many threads.

That is all, and that is how we know that a server using virtual threads would normally have a great many of them: because virtual threads are used by thread-per-request servers with high throughputs. Other things will happen too, and other concurrency limits will eventually come into play, but this -- that the number of threads will rise -- is necessarily true.

Now we can get to what I think your actual point is. You believe that the server we're talking about must be at some kind of a disadvantage compared to other kinds of servers. I understand you want me to convince you that this is not the case, but the only thing I can suggest at this point is that you actually write a server in this style, employing virtual threads, and then report what problems and limitations you actually run into, not hypothesise what problems you think you might run into. That will help you understand how virtual threads are used, and will help us find potentially missing APIs.

-- Ron

On 26 Jul 2022, at 08:32, Alex Otenko wrote:

I am talking of all systems with threads. A thread-per-request system is just a system with more threads. I can't understand what fault you find in me comparing one and the other. Isn't that what should be done when someone wants to be convinced, rather than take it on faith?

You ask me not to say something, and then you say it yourself.
The missing bit is that one can also reduce response time, and that's what you might see if you tried different thread counts. Well, unless you think there's nothing to see and increased concurrency is the only explanation.

The picture is interesting, but out of context of resource utilization and the time spent by each thread on one request you can't say that 200 or 800 threads is the right capacity to sustain offered traffic. Once you have enough threads to supply that capacity (which in this particular workload may require Virtual threads), then compare to even higher thread counts and to thread-per-request.

On Mon, 25 Jul 2022, 10:18 Ron Pressler, wrote:

You are talking about systems where a thread processes more than one request. That is the very opposite of what thread-per-request means. Also, as I said over and over and over, there is no claim that adding threads increases concurrency. Please stop repeating that nonsense. Higher throughput in a thread-per-request system means higher concurrency, which, *in a thread-per-request program*, means adding threads, but adding threads does not increase the concurrency. Reading a book when it's dark outside requires turning on the lights, but reading a book with the lights on does not make it dark outside.

Here are measurements from an actual server. As the laws of the universe require, they follow Little's law -- the system becomes unstable exactly when the equation breaks -- as anything else would be impossible. Every simulation software will show you the same thing. If you do see anything else, then you're not talking about thread-per-request.

Absolutely everything we've discussed is stated in JEP 425, so I would ask you to read it carefully, and only respond if you have a question about a particular section you can quote. Ask yourself if you're talking about programs where a thread can make progress on more than one request; if so, go back and think.

--
Ron

[inline image: Screenshot 2022-07-25 at 10.05.53.png]

On 25 Jul 2022, at 09:16, Alex Otenko wrote:

Well, there are a few things I said several times too, so we are in the same boat. :)

Ok, just open your favourite modelling software and see:

Given a request rate and a request processing time, there is a minimal number of threads that can process them. That's the capacity needed to do work (i.e. for the system to remain stable).

Thread-per-request is simply the maximum number of threads you can have to process that work.

Then you can see what your favourite modelling software says about concurrency. It says that as you add threads, concurrency in the sense used in Little's law decreases.

Since this is also a mathematical fact, something in the claim that adding threads increases concurrency needs reconciling.

Alex

On Sun, 24 Jul 2022, 20:45 Ron Pressler, wrote:

> On 24 Jul 2022, at 19:26, Alex Otenko wrote:
>
> The "other laws" don't contradict Little's law, they only explain that you can't have an equals sign between thread count and throughput.

That there is an equals sign when the system is stable is a mathematical theorem, so there cannot exist a correct explanation for its falsehood. Your feelings about it are irrelevant to its correctness. It is a theorem.

> Let me remind you what I mean.
>
> 1 thread, 10ms per request. At request rate 66.667 concurrency is 2, and at request rate 99 concurrency is 99. Etc. All of this is because response time gets worse as the "other laws" predict. But we already see thread count is not a cap on concurrency, as was one of the claims earlier in this thread.
>
> If we increase thread count we can improve response times. But at thread count 5 or 6 you are only 1 microsecond away from the "optimal" 10ms response time. Whereas arithmetically the situation keeps improving (by an ever smaller fraction of a microsecond), the mathematics of it cannot capture the notion of diminished returns.
First, as I must have repeated three times in this discussion, we're talking about thread-per-request (please read JEP 425, as all this is explained there), so by definition we're talking about cases where the number of threads is equal to or greater than the concurrency, i.e. the number of requests in flight.

Second, as I must have repeated at least three times in this discussion, increasing the number of threads does nothing. A mathematical theorem tells us what the concurrency *is equal to* in a stable system. It cannot possibly be any lower or any higher. So your number of requests that are being processed is equal to some number if your system is stable, and in the case of thread-per-request programs, which are our topic, the number of threads processing requests is exactly equal to the number of concurrent requests times the number of threads per request, which is at least one by definition. If you add any more threads they cannot be processing requests, and if you have fewer threads then your system isn't stable.

Finally, if your latency starts going up, then so does your concurrency, up to the point where one of your software or hardware components reaches its peak concurrency and your server destabilises. While Little's law tells you what the concurrency is equal to (and so, in a thread-per-request program, what the number of request-processing threads is equal to), the number of threads is not the only limit on the maximum capacity. We know that in a thread-per-request server, every request consumes at least one thread, but it consumes other resources as well, and they, too, place limitations on concurrency. All this is factored into the bounds on the concurrency level. It's just that we empirically know that the limitation on threads is hit *first* by many servers, which is why async APIs and lightweight user-mode threads were invented.

Note that Little's law, being a mathematical theorem, applies to every component separately, too.
I.e., you can treat your CPU as the server, the requests would be the processing bursts, and the maximum concurrency would be the number of cores.

> So that answers why we are typically fine with a small thread count.

That we are not typically fine writing scalable thread-per-request programs with few threads is the reason why async I/O and user-mode threads were created. It is possible some people are fine, but clearly many are not. If your thread-per-request program needs to handle only a small number of requests concurrently, and so needs only a few threads, then there's no need for you to use virtual threads. That is exactly why, when this discussion started what feels like a year ago, I said that when there are virtual threads, there must be many of them (or else they're not needed).

-- Ron

From oleksandr.otenko at gmail.com Tue Jul 26 13:33:13 2022
From: oleksandr.otenko at gmail.com (Alex Otenko)
Date: Tue, 26 Jul 2022 14:33:13 +0100
Subject: [External] : Re: jstack, profilers and other tools
In-Reply-To:
References: <5B55B456-C2CC-4856-80A8-8A3F9E40CF05@oracle.com> <0E494B4B-ADCF-4CBB-AFFF-F463CB05AD0E@oracle.com> <7EC05322-6BD9-4A17-9AB3-62115F3940D0@oracle.com> <271F8C58-F096-42B2-9432-E5643A7F133C@oracle.com> <0F187B4E-65FE-4BF0-86DE-661088C65DD8@oracle.com> <1FA73DAE-FD5D-4CC6-9638-C9985C20D286@oracle.com> <5922F1B9-C3E1-4D0C-BA61-64EFBB8FD88E@oracle.com> <75424126-7EB0-43BD-9DB6-F28F29396473@oracle.com> <2F640F43-3233-4C0D-90C3-243396DE69BD@oracle.com> <20468672-2291-4E56-890B-20F80DD2BCC7@oracle.com>
Message-ID:

Hi Ron,

I think I can verbalize what bothered me all along. I wish someone made a distinction between:

Offered traffic - actual term; determined based on the time one thread spends on request.

Capacity - I don't think this is the actual term. This is the actual thread count. If this is at or below offered traffic, the system is not stable.
You can increase capacity until you get to thread-per-request, which probably corresponds to +oo.

Concurrency as used in Little's law. This is measured in the same units as offered traffic, but is not the same as offered traffic, because the time used here is the actual response time, which includes all sorts of waits.

The confusing bit then is that we can't be talking of concurrency before capacity exceeds offered traffic, because the system is not stable, and after that adding threads only decreases concurrency.

Then also the pragmatic angle. At which point, or for what systems, should I say "yeah, we can't do this without Virtual threads", and at which point should I say "thread-per-request is the way to go"?

The answer to the first question is: "when your offered traffic is in thousands per CPU". Why CPU specifically? Because otherwise something else is the bottleneck. This means 100ms wait per 100 microseconds of on-CPU time. I don't know how common this is in the world, but in my practice this never was the case - because 100 microseconds is about as much as a REST endpoint takes to produce a few KB of JSON, and 100ms wait is an eternity in comparison. Why thousands? Because we had 200 threads per CPU and sync code, and were fine. Maybe it's gross, but Virtual threads are not the killer feature in those cases. Ok, I haven't seen the world, but I reckon the back-of-the-envelope working out is ok.

The second question is then not really based on performance, rather on the architectural differences that thread-per-request offers. One less thing to tune is good. The reason this is not a performance question is that adding threads gets response time indistinguishably close to the minimal possible well before you get to +oo.

Alex

On Tue, 26 Jul 2022, 10:15 Ron Pressler, wrote:

> Let me make this as simple as I think I can:
>
> 1. We are talking *only* about a server that creates a new thread for every incoming request. That's how we define "thread-per-request."
> If what you have in mind is a server that operates in any other way, you're misunderstanding the conversation.
>
> 2. While artificially increasing the number of threads in that server would do nothing, whatever that system's latency is, whatever its resource utilisation is, a rising rate of requests *will* result in that server having more threads that are alive concurrently (by virtue of how it operates, as a rising request rate will not cause that server to reduce latency); i.e. it's the increased throughput that causes the number of threads to rise, not vice-versa. Therefore, to cope with high request rates that server must have the capacity for many threads.
>
> That is all, and that is how we know that a server using virtual threads would normally have a great many of them: because virtual threads are used by thread-per-request servers with high throughputs. Other things will happen too, and other concurrency limits will eventually come into play, but this -- that the number of threads will rise -- is necessarily true.
>
> Now we can get to what I think your actual point is. You believe that the server we're talking about must be at some kind of a disadvantage compared to other kinds of servers. I understand you want me to convince you that this is not the case, but the only thing I can suggest at this point is that you actually write a server in this style, employing virtual threads, and then report what problems and limitations you actually run into, not hypothesise what problems you think you might run into. That will help you understand how virtual threads are used, and will help us find potentially missing APIs.
>
> -- Ron
>
> On 26 Jul 2022, at 08:32, Alex Otenko wrote:
>
> I am talking of all systems with threads. A thread-per-request system is just a system with more threads. I can't understand what fault you find in me comparing one and the other.
> Isn't that what should be done when someone wants to be convinced, rather than take it on faith?
>
> You ask me not to say something, and then you say it yourself. The missing bit is that one can also reduce response time, and that's what you might see if you tried different thread counts. Well, unless you think there's nothing to see and increased concurrency is the only explanation.
>
> The picture is interesting, but out of context of resource utilization and the time spent by each thread on one request you can't say that 200 or 800 threads is the right capacity to sustain offered traffic. Once you have enough threads to supply that capacity (which in this particular workload may require Virtual threads), then compare to even higher thread counts and to thread-per-request.
>
> On Mon, 25 Jul 2022, 10:18 Ron Pressler, wrote:
>
>> You are talking about systems where a thread processes more than one request. That is the very opposite of what thread-per-request means. Also, as I said over and over and over, there is no claim that adding threads increases concurrency. Please stop repeating that nonsense. Higher throughput in a thread-per-request system means higher concurrency, which, *in a thread-per-request program*, means adding threads, but adding threads does not increase the concurrency. Reading a book when it's dark outside requires turning on the lights, but reading a book with the lights on does not make it dark outside.
>>
>> Here are measurements from an actual server. As the laws of the universe require, they follow Little's law -- the system becomes unstable exactly when the equation breaks -- as anything else would be impossible. Every simulation software will show you the same thing. If you do see anything else, then you're not talking about thread-per-request.
>> Absolutely everything we've discussed is stated in JEP 425, so I would ask you to read it carefully, and only respond if you have a question about a particular section you can quote. Ask yourself if you're talking about programs where a thread can make progress on more than one request; if so, go back and think.
>>
>> -- Ron
>>
>> On 25 Jul 2022, at 09:16, Alex Otenko wrote:
>>
>> Well, there are a few things I said several times too, so we are in the same boat. :)
>>
>> Ok, just open your favourite modelling software and see:
>>
>> Given a request rate and a request processing time, there is a minimal number of threads that can process them. That's the capacity needed to do work (i.e. for the system to remain stable).
>>
>> Thread-per-request is simply the maximum number of threads you can have to process that work.
>>
>> Then you can see what your favourite modelling software says about concurrency. It says that as you add threads, concurrency in the sense used in Little's law decreases.
>>
>> Since this is also a mathematical fact, something in the claim that adding threads increases concurrency needs reconciling.
>>
>> Alex
>>
>> On Sun, 24 Jul 2022, 20:45 Ron Pressler, wrote:
>>
>>> > On 24 Jul 2022, at 19:26, Alex Otenko wrote:
>>> >
>>> > The "other laws" don't contradict Little's law, they only explain that you can't have an equals sign between thread count and throughput.
>>>
>>> That there is an equals sign when the system is stable is a mathematical theorem, so there cannot exist a correct explanation for its falsehood. Your feelings about it are irrelevant to its correctness. It is a theorem.
>>>
>>> > Let me remind you what I mean.
>>> >
>>> > 1 thread, 10ms per request. At request rate 66.667 concurrency is 2, and at request rate 99 concurrency is 99. Etc. All of this is because response time gets worse as the "other laws" predict.
>>> > But we already see thread count is not a cap on concurrency, as was one of the claims earlier in this thread.
>>> >
>>> > If we increase thread count we can improve response times. But at thread count 5 or 6 you are only 1 microsecond away from the "optimal" 10ms response time. Whereas arithmetically the situation keeps improving (by an ever smaller fraction of a microsecond), the mathematics of it cannot capture the notion of diminished returns.
>>>
>>> First, as I must have repeated three times in this discussion, we're talking about thread-per-request (please read JEP 425, as all this is explained there), so by definition we're talking about cases where the number of threads is equal to or greater than the concurrency, i.e. the number of requests in flight.
>>>
>>> Second, as I must have repeated at least three times in this discussion, increasing the number of threads does nothing. A mathematical theorem tells us what the concurrency *is equal to* in a stable system. It cannot possibly be any lower or any higher. So your number of requests that are being processed is equal to some number if your system is stable, and in the case of thread-per-request programs, which are our topic, the number of threads processing requests is exactly equal to the number of concurrent requests times the number of threads per request, which is at least one by definition. If you add any more threads they cannot be processing requests, and if you have fewer threads then your system isn't stable.
>>>
>>> Finally, if your latency starts going up, then so does your concurrency, up to the point where one of your software or hardware components reaches its peak concurrency and your server destabilises.
>>> While Little's law tells you what the concurrency is equal to (and so, in a thread-per-request program, what the number of request-processing threads is equal to), the number of threads is not the only limit on the maximum capacity. We know that in a thread-per-request server, every request consumes at least one thread, but it consumes other resources as well, and they, too, place limitations on concurrency. All this is factored into the bounds on the concurrency level. It's just that we empirically know that the limitation on threads is hit *first* by many servers, which is why async APIs and lightweight user-mode threads were invented.
>>>
>>> Note that Little's law, being a mathematical theorem, applies to every component separately, too. I.e., you can treat your CPU as the server, the requests would be the processing bursts, and the maximum concurrency would be the number of cores.
>>>
>>> > So that answers why we are typically fine with a small thread count.
>>>
>>> That we are not typically fine writing scalable thread-per-request programs with few threads is the reason why async I/O and user-mode threads were created. It is possible some people are fine, but clearly many are not. If your thread-per-request program needs to handle only a small number of requests concurrently, and so needs only a few threads, then there's no need for you to use virtual threads. That is exactly why, when this discussion started what feels like a year ago, I said that when there are virtual threads, there must be many of them (or else they're not needed).
>>>
>>> -- Ron

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From clement.escoffier at redhat.com Tue Jul 26 14:13:27 2022
From: clement.escoffier at redhat.com (Clement Escoffier)
Date: Tue, 26 Jul 2022 16:13:27 +0200
Subject: Virtual Threads support in Quarkus - current integration and ideal
Message-ID:

Hello,

This email reports our observations around Loom, mainly in the context of Quarkus. It discusses the current approach and our plans. We are sharing this information on our current success and challenges with Loom. Please let us know your thoughts and questions on our approach(es).

Context

Since the early days of the Loom project, we have been looking at various approaches to integrate Loom (mostly virtual threads) into Quarkus. Our goal was (and still is) to dispatch processing (HTTP requests, Kafka messages, gRPC calls) on virtual threads. Thus, the user would not have to think about blocking or not blocking (more on that later, as it relates to the Quarkus architecture) and can write synchronous code without limiting the application's concurrency. To achieve this, we need to dispatch the processing on virtual threads but also have compliant clients to invoke remote services (HTTP, gRPC...), send messages (Kafka, AMQP), or interact with a data store (SQL or NoSQL).

Quarkus Architecture

Before going further, we need to explain how Quarkus is structured. Quarkus is based on a reactive engine (Netty + Eclipse Vert.x), so under the hood, Quarkus uses event loops to schedule the workloads and non-blocking I/O. There is also the possibility of using Netty Native Transport (epoll, kqueue, io_uring). The processing can be either directly dispatched to the event loop or on a worker thread (OS thread). In the first case, the code must be written in an asynchronous and non-blocking manner. Quarkus proposes a programming model and safety guards to write such code. In the latter case, the code can be blocking. Quarkus decides which dispatching strategy it uses for each processing job.
The decision is based on the method signatures and annotations (for example, the user can force it to be called on an event loop or a worker thread). When using a worker thread, the request is received on an event loop and dispatched to the worker thread, and when the response is ready to be written (when it fits in memory), Quarkus switches back to the event loop.

The current approach

The integration of Loom's virtual threads is currently based[1] on a new annotation (@RunOnVirtualThread). It introduces a third dispatching strategy, and methods annotated with this annotation are called on a virtual thread. So, we now have three possibilities:

- Execute the processing on an event loop thread - the code must be non-blocking
- Execute the processing on an OS (worker) thread - with the thread cost and concurrency limit
- Execute the processing on a virtual thread

The following snippet shows an elementary example:

    @GET
    @Path("/loom")
    @RunOnVirtualThread
    Fortune example() {
        var list = repo.findAll();
        return pickOne(list);
    }

This support is already experimentally available in Quarkus 2.10.

Previous attempts

The current approach is not our first attempt. We had two other approaches that we discarded, while the second one is something we want to reconsider.

First Approach - All workers are virtual threads

The first approach was straightforward. The idea was to replace the worker (OS) threads with virtual threads. However, we quickly realized some limitations. Long-running (purely CPU-bound) processing would block the carrier thread, as there is no preemption. While the user should be aware that long-running processing should not be executed on virtual threads, in this model it was not explicit. We also started capturing carrier thread pinning situations (our current approach still has this issue; we will explain our band-aid later).

Second Approach - Marry event loops and carrier threads

Quarkus is designed to reduce the memory usage of the application.
We are obsessed with RSS usage, especially when running in a container where resources are scarce. It has driven lots of our architecture choices, including the dimensioning strategies (number of event loops, number of worker threads...). Thus, we investigated the possibility of avoiding a second carrier thread pool and reducing the number of switches between the event loops and the carrier threads. We tried to use Netty event loops as carrier threads to achieve this. We had to use private APIs (which used to be public at some point in early-access builds) to implement such an approach [3]. Unfortunately, we quickly ran into issues (explaining why our method is not part of the public API). Typically, we had deadlock situations when a carrier thread shared locks with virtual threads. This made it impossible to use event loops as carriers, considering the probability of lock sharing. That custom scheduling strategy also prevents work stealing (Netty event loops do not handle work stealing) and must keep a strict ordering between I/O tasks.

Pros and Cons of the current approach

Our current approach (based on @RunOnVirtualThread) integrates smoothly with the rest of Quarkus (even if the integration is limited to the HTTP part at the moment, as the integration with Kafka and gRPC is slightly more complicated but not impossible). The user's code is written synchronously, and the users are aware of the dispatching strategy. Due to the limitations mentioned before, we still believe it's a good trade-off, even if not ideal. However, the chances of pinning the carrier threads are still very high (caused by pervasive usage in the ecosystem of certain common JDK features - synchronized, JNI, etc.). Because we would like to reduce the number of carrier threads to the bare minimum (to limit the RSS usage), we can end up with an underperforming application, which would have a concurrency level lower than the classic worker thread approach, with pretty lousy response times.
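[Editor's note: the pinning situation described above can be reproduced with a minimal, self-contained sketch -- this is illustrative JDK-only code, not Quarkus code, and assumes a Loom-enabled JDK. A virtual thread that parks while holding a monitor cannot unmount and pins its carrier; running the program with -Djdk.tracePinnedThreads=full makes the JDK print the offending stack trace.]

```java
// Minimal sketch of carrier-thread pinning (assumes a Loom-enabled JDK).
// Run with: java -Djdk.tracePinnedThreads=full PinningDemo
public class PinningDemo {
    static final Object LOCK = new Object();

    // Starts a virtual thread that sleeps inside a synchronized block.
    // Parking while holding the monitor pins the carrier for the whole
    // sleep; returns true once the virtual thread has terminated.
    static boolean parkWhilePinned() {
        Thread vt = Thread.ofVirtual().start(() -> {
            synchronized (LOCK) {          // monitor held: cannot unmount
                try {
                    Thread.sleep(50);      // parks while pinned
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        try {
            vt.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return !vt.isAlive();
    }

    public static void main(String[] args) {
        System.out.println("virtual thread finished: " + parkWhilePinned());
    }
}
```

With few carrier threads (the situation the Quarkus team describes), each such pinned park removes a carrier from service for its full duration, which is why the pinning probability matters so much here.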
The Netty / Loom dance

We implemented a band-aid to reduce the chance of pinning while not limiting the users to a small set of Quarkus APIs. Remember, Quarkus is based on a reactive core, and most of our APIs are available in two forms:

- An imperative form blocking the caller thread when dealing with I/O
- A reactive form that is non-blocking (reactive programming)

To avoid thread pinning when running on a virtual thread, we offer the possibility to use the reactive form of our APIs but block the virtual thread while the result is still being computed. These awaits do not block the carrier thread and can be used with APIs returning 0 or 1 result, but also with streams (like Kafka topics) where you receive an iterator. As said above, this is a band-aid until we have non-pinning clients/APIs. Under the hood, there is a dance between the virtual thread and the Netty event loop (used by the reactive API). It introduces a few unfortunate switches but works around the pinning issue.

Observations

Over the past year, we ran many tests to design and implement our integration. The current approach is far from ideal, but it works fine. We have excellent results when compared with a full reactive approach and a worker approach (Quarkus can have the three variants in the same app). The response time under load is close enough to the reactive approach. It is far better than the classic worker thread approach [1][2]. However (remember, we are obsessed with RSS), the RSS usage is very high - even higher than the worker thread approach. At the moment, we are investigating where these objects come from. We hope to have a better understanding after the summer. Our observations show that the performance penalty is likely due to memory consumption (and GC cycles). However, as said, we are still investigating.
Ideally

For us (Quarkus) and probably several other Java frameworks based on Netty, it would be terrific if we could find a way to reconcile the two scheduling strategies (in a sense, we would use the event loops as carrier threads). Of course, there will be trade-offs and limitations. Our initial attempt didn't end well, but that does not mean it's a dead end. An event-loop carrier thread would greatly benefit the underlying reactive engine (Netty/Vert.x in the case of Quarkus). It retains some event-loop execution semantics: code is multithreaded (in the virtual thread sense) yet executed by a single carrier thread that respects the event-loop principles and should have decent mechanical sympathy. In addition, it would enable using classic blocking constructs (e.g., java.util.concurrent.locks.Lock), whereas currently, code can only block on Vert.x constructs (e.g., Vert.x futures, but not java.util.concurrent.locks.Lock), as Vert.x needs to be aware of the thread suspension to schedule event dispatching in a race-free/deadlock-free manner. With such an integration, virtual threads would be executed on the event loop. When they "block", they would be unmounted, and I/O or another virtual thread would be processed. That would reduce the number of switches between threads, reduce RSS usage, and allow lots of Java frameworks to leverage Loom virtual threads quickly. Of course, this approach can only be validated empirically. Notably, it adds latency to every virtual thread dispatch. In addition, watchdogs would need to be implemented to prevent (or at least warn the user about) long CPU-intensive actions that do not yield in an acceptable time.

Conclusion

Our integration of Loom virtual threads in Quarkus is already available to our users, and we will be collecting feedback. As explained in this email, we have identified two issues.
The first one is purely about performance, and we were able to measure it empirically: the interaction between Loom and the Netty/Vert.x reactive stack seems to create an abundance of data structures that put pressure on the GC and degrade the overall performance of the application. As said above, we are investigating. The second one is more general and also impacts programming with Quarkus/Vert.x and Loom. The goal is to reconcile the scheduling strategies of Loom and Netty/Vert.x. This could improve performance by decreasing the number of context switches (the Loom-Netty dance) and the RSS of an application. Moreover, it would enable the use of classic blocking constructs in Vert.x directly (i.e., without wrapping them in Vert.x's own abstractions). We could not yet validate and/or characterize the performance improvement of such a model. The result is unclear, as we don't know whether the decrease in context switches would be outweighed by the additional latency of virtual thread dispatch. We are sharing this information on our current successes and challenges with Loom. Please let us know your thoughts and concerns on our approach(es). Thanks!
URL: 

From ron.pressler at oracle.com Tue Jul 26 14:41:41 2022
From: ron.pressler at oracle.com (Ron Pressler)
Date: Tue, 26 Jul 2022 14:41:41 +0000
Subject: [External] : Re: jstack, profilers and other tools
In-Reply-To: 
References: <5B55B456-C2CC-4856-80A8-8A3F9E40CF05@oracle.com> <0E494B4B-ADCF-4CBB-AFFF-F463CB05AD0E@oracle.com> <7EC05322-6BD9-4A17-9AB3-62115F3940D0@oracle.com> <271F8C58-F096-42B2-9432-E5643A7F133C@oracle.com> <0F187B4E-65FE-4BF0-86DE-661088C65DD8@oracle.com> <1FA73DAE-FD5D-4CC6-9638-C9985C20D286@oracle.com> <5922F1B9-C3E1-4D0C-BA61-64EFBB8FD88E@oracle.com> <75424126-7EB0-43BD-9DB6-F28F29396473@oracle.com> <2F640F43-3233-4C0D-90C3-243396DE69BD@oracle.com> <20468672-2291-4E56-890B-20F80DD2BCC7@oracle.com>
Message-ID: 

On 26 Jul 2022, at 14:33, Alex Otenko wrote:

Hi Ron, I think I can verbalize what bothered me all along. I wish someone made a distinction between: Offered traffic - actual term; determined based on the time one thread spends on a request. Capacity - I don't think this is the actual term. This is the actual thread count. If this is at or below offered traffic, the system is not stable. You can increase capacity until you get to thread-per-request, which probably corresponds to +oo.

I don't understand this sentence.

Concurrency as used in Little's law. This is measured in the same units as offered traffic, but is not the same as offered traffic, because the time used here is the actual response time, which includes all sorts of waits.

None of that matters. Little's law is a mathematical theorem about some unit arriving at some processing centre (a customer, a request, whatever), and for *that* unit, the theorem relates the average latency of performing that operation and the average rate of arrival of those things to the average number of those things existing concurrently in the centre. So, we pick requests as the things we look at, and everything follows.
The theorem tells us how many requests, on average, are concurrently being processed, and since we're assuming thread-per-request, this tells us how many threads are active, because *by definition* of thread-per-request a concurrent request takes at least one thread.

The confusing bit then is that we can't be talking of concurrency before capacity exceeds offered traffic, because the system is not stable, and after that adding threads only decreases concurrency.

No one is talking about *adding* threads. The number of threads grows because rising throughput *makes it grow* in a thread-per-request system. Also, we're not interested in what's happening in a system in the process of crashing.

Then also the pragmatic angle. At which point, or for what systems, should I say "yeah, we can't do this without virtual threads", and at which point should I say "thread-per-request is the way to go".

As explained in JEP 425, there is absolutely no such point: picking thread-per-request is the premise we're taking as a given, not the conclusion. I.e. we assume thread-per-request, and the conclusion is that we need many threads. Virtual threads are designed to allow thread-per-request servers to achieve the maximum throughput allowable by the hardware. Why do so many people want to pick thread-per-request? Because thread-per-request is the model that allows representing your application's unit of concurrency with the platform's unit of concurrency, and the Java platform has only one such unit: the thread. I.e. it is the only model that the language and the platform fully support. That is why asynchronous APIs are essentially DSLs and do not rely on the language's basic composition constructs (loops, try/catch, try-with-resources etc.), why JFR yields less-than-informative profiles for such programs, and why debuggers can't step through the logical flow of such programs. So there is absolutely no point at which you'd say "we must do it like that".
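[Editorial note] The Little's law arithmetic Ron appeals to can be made concrete with illustrative numbers (the figures below are examples, not taken from this thread): the average number of requests in flight, L, equals the arrival rate lambda times the average time in the system W.

```java
public class LittlesLaw {
    public static void main(String[] args) {
        long arrivalRatePerSec = 1000; // lambda: average request arrival rate
        long avgLatencyMillis  = 100;  // W: average time in the server, all waits included

        // L = lambda * W: average number of requests concurrently in the system.
        long inFlight = arrivalRatePerSec * avgLatencyMillis / 1000;

        // In a thread-per-request server each in-flight request holds at least
        // one thread, so rising throughput alone drives the thread count up.
        System.out.println("concurrent requests = " + inFlight);
    }
}
```

At 10,000 requests per second with the same latency, the same arithmetic demands around 1,000 concurrently live threads, which is roughly where OS threads stop scaling and virtual threads become necessary.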
But *if* you choose to do it like that, then you'd need virtual threads if your concurrency exceeds ~1000. Thread-per-request and async are neither good nor bad; they're just different aesthetic styles for writing code. But Java only fully supports the former, and *IF* you choose to do it that way, THEN you'll need virtual threads. In other words, a person who should be interested in virtual threads is one who thinks it would be nice to write code in the thread-per-request style, but doesn't want to give up on throughput. I think the JEP is clear on that.

The answer to the first question is: "when your offered traffic is in thousands per CPU". Why CPU specifically? Because otherwise something else is the bottleneck. This means 100ms wait per 100 microseconds of on-CPU time. I don't know how common this is in the world, but in my practice this never was the case, because 100 microseconds is about as much as a REST endpoint takes to produce a few KB of JSON, and 100ms wait is an eternity in comparison. Why thousands? Because we had 200 threads per CPU and sync code, and were fine. Maybe it's gross, but virtual threads are not the killer feature in those cases. Ok, I haven't seen the world, but I reckon the back-of-the-envelope calculation is ok.

If what you're claiming is that simple thread-per-request servers using OS threads are satisfactory for virtually all systems, then that has long since been established to not be the case. There's just no point arguing over this. As I think I already told you, 100ms wait is the total of all waits, even if done in parallel, and it is quite common because quite a lot of servers make outgoing calls to scores of services. It is very common for a single incoming request to make 20 outgoing I/O requests, if not more.

The second question is then not really based on performance, rather on the architectural differences that thread-per-request offers. One less thing to tune is good.
The reason that this is not a performance question is that adding threads gets response time indistinguishably close to the minimum possible well before you get to +oo.

As long as you're talking about "adding threads" I can tell you're not getting this. No one is suggesting adding threads. If you pick thread-per-request, then the number of threads grows with throughput, and that's why you need virtual threads.

Alex

On Tue, 26 Jul 2022, 10:15 Ron Pressler wrote:

Let me make this as simple as I think I can:

1. We are talking *only* about a server that creates a new thread for every incoming request. That's how we define "thread-per-request." If what you have in mind is a server that operates in any other way, you're misunderstanding the conversation.

2. While artificially increasing the number of threads in that server would do nothing, whatever that system's latency is, whatever its resource utilisation is, a rising rate of requests *will* result in that server having more threads that are alive concurrently (by virtue of how it operates, as a rising request rate will not cause that server to reduce latency); i.e. it's the increased throughput that causes the number of threads to rise, not vice versa. Therefore, to cope with high request rates, that server must have the capacity for many threads.

That is all, and that is how we know that a server using virtual threads would normally have a great many of them: because virtual threads are used by thread-per-request servers with high throughputs. Other things will happen too, and other concurrency limits will eventually come into play, but this (that the number of threads will rise) is necessarily true.

Now we can get to what I think your actual point is. You believe that the server we're talking about must be at some kind of a disadvantage compared to other kinds of servers.
I understand you want me to convince you that is not the case, but the only way I can do that at this point is for you to actually write a server in this style, employing virtual threads, and then report what problems and limitations you actually run into, rather than hypothesise about what problems you think you might run into. That will help you understand how virtual threads are used, and will help us find potentially missing APIs.

- Ron

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ron.pressler at oracle.com Tue Jul 26 15:10:41 2022
From: ron.pressler at oracle.com (Ron Pressler)
Date: Tue, 26 Jul 2022 15:10:41 +0000
Subject: Virtual Threads support in Quarkus - current integration and ideal
In-Reply-To: 
References: 
Message-ID: <939D2497-2E57-43DE-8A22-35E08ADFD036@oracle.com>

Hi, and thank you very much for your report! It has been our experience as well that trying to marry an asynchronous engine with virtual threads is cumbersome and often wasteful. Writing the entire pipeline with simple blocking in mind gave us not only superior performance, but a much smaller and simpler codebase, and that would be the approach I'd recommend. I expect that there will soon be HTTP servers demonstrating that simple approach. However, if you wish to use an existing async engine, I think the approach you've taken (spawning/unblocking a virtual thread running in the virtual thread scheduler) is probably the best one. Integrating explicit scheduler loops with virtual threads via custom schedulers is on the roadmap, but, encouraged by the performance of servers that take the "full simple" approach, this might not be a top priority and might take some time [1]. The API was removed for the simple reason that it's just not ready, as you noticed. As for memory footprint, although this might not be the cause of your issue, it might interest you to know that we're now working on dramatically reducing the footprint of virtual thread stacks.
That work also wasn't ready for 19, but is a higher priority than custom schedulers. So I'm interested to know how much of that excess footprint is due to virtual thread stacks (those would appear as jdk.internal.vm.StackChunk objects in your heap). What I'd like to hear more about is pinning, and what common causes of it you see. I would also be interested to hear your thoughts about how much of it is due to ecosystem readiness (e.g. some JDBC drivers don't pin while others still do, although that's expected to change).

- Ron

[1]: The "mechanical sympathy" effects you alluded to are real but too small in comparison to the throughput increase of thread-per-request code for them to be an immediate focus, especially as a work-stealing scheduler has pretty decent mechanical sympathy already. On the other hand, there are other reasons to support custom schedulers (e.g. UI event threads) that might shift the priority balance.

On 26 Jul 2022, at 15:13, Clement Escoffier wrote:

Hello, This email reports our observations around Loom, mainly in the context of Quarkus. It discusses the current approach and our plans. We are sharing this information on our current successes and challenges with Loom. Please let us know your thoughts and questions on our approach(es).

Context

Since the early days of the Loom project, we have been looking at various approaches to integrate Loom (mostly virtual threads) into Quarkus. Our goal was (and still is) to dispatch processing (HTTP requests, Kafka messages, gRPC calls) on virtual threads. Thus, the user would not have to think about blocking or not blocking (more on that later, as it relates to the Quarkus architecture) and can write synchronous code without limiting the application's concurrency. To achieve this, we need to dispatch the processing on virtual threads, but also have compliant clients to invoke remote services (HTTP, gRPC, ...), send messages (Kafka, AMQP), or interact with a data store (SQL or NoSQL).
Quarkus Architecture

Before going further, we need to explain how Quarkus is structured. Quarkus is based on a reactive engine (Netty + Eclipse Vert.x), so under the hood, Quarkus uses event loops to schedule the workloads and non-blocking I/O. There is also the possibility of using Netty Native Transport (epoll, kqueue, io_uring). The processing can be dispatched either directly on the event loop or on a worker thread (OS thread). In the first case, the code must be written in an asynchronous and non-blocking manner. Quarkus proposes a programming model and safety guards for writing such code. In the latter case, the code can be blocking. Quarkus decides which dispatching strategy it uses for each processing job. The decision is based on the method signatures and annotations (for example, the user can force the method to be called on an event loop or a worker thread). When using a worker thread, the request is received on an event loop and dispatched to the worker thread, and when the response is ready to be written (when it fits in memory), Quarkus switches back to the event loop.

The current approach

The integration of Loom's virtual threads is currently based [1] on a new annotation (@RunOnVirtualThread). It introduces a third dispatching strategy, and methods annotated with this annotation are called on a virtual thread. So, we now have three possibilities:

* Execute the processing on an event loop thread - the code must be non-blocking
* Execute the processing on an OS (worker) thread - with the thread cost and concurrency limit
* Execute the processing on a virtual thread

The following snippet shows an elementary example:

    @GET
    @Path("/loom")
    @RunOnVirtualThread
    Fortune example() {
        var list = repo.findAll();
        return pickOne(list);
    }

This support is already available experimentally in Quarkus 2.10.

Previous attempts

The current approach is not our first attempt. We had two other approaches that we discarded, though the second one is something we want to reconsider.
First Approach - All workers are virtual threads

The first approach was straightforward. The idea was to replace the worker (OS) threads with virtual threads. However, we quickly realized some limitations. Long-running (purely CPU-bound) processing would block the carrier thread, as there is no preemption. While the user should be aware that long-running processing should not be executed on virtual threads, in this model it was not explicit. We also started capturing carrier thread pinning situations (our current approach still has this issue; we explain our band-aid later).

Second Approach - Marry event loops and carrier threads

Quarkus is designed to reduce the memory usage of the application. [...]
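[Editorial note] The first approach Clement describes, replacing the worker pool with virtual threads, corresponds roughly to this framework-free sketch (plain JDK APIs, Java 21; no Quarkus types are assumed):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class WorkerSwapSketch {
    public static void main(String[] args) throws Exception {
        // Instead of a bounded pool of OS worker threads, spawn one cheap
        // virtual thread per task.
        try (ExecutorService workers = Executors.newVirtualThreadPerTaskExecutor()) {
            var f = workers.submit(
                    () -> "handled on virtual=" + Thread.currentThread().isVirtual());
            System.out.println(f.get());
        }
        // Caveat from the text: a long, purely CPU-bound task submitted here
        // would still occupy a carrier thread, since a running virtual thread
        // is not preempted.
    }
}
```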
From eric at kolotyluk.net Tue Jul 26 17:13:37 2022
From: eric at kolotyluk.net (eric at kolotyluk.net)
Date: Tue, 26 Jul 2022 10:13:37 -0700
Subject: Virtual Threads support in Quarkus - current integration and ideal
In-Reply-To: <939D2497-2E57-43DE-8A22-35E08ADFD036@oracle.com>
References: <939D2497-2E57-43DE-8A22-35E08ADFD036@oracle.com>
Message-ID: <075501d8a113$0c03b610$240b2230$@kolotyluk.net>

"Writing the entire pipeline with simple blocking in mind gave us not only superior performance, but a much smaller and simpler codebase, and that would be the approach I'd recommend."

Ron, are there any projects you are aware of that will do this? Such as a Loom-based HTTP Server and Client library, something like Jetty, but completely based on Virtual Threads, and maybe some Structured Concurrency?

Cheers, Eric

From: loom-dev On Behalf Of Ron Pressler
Sent: July 26, 2022 8:11 AM
To: Clement Escoffier
Cc: loom-dev at openjdk.java.net
Subject: Re: Virtual Threads support in Quarkus - current integration and ideal
[...]
Integrating explicit scheduler loops with virtual thread via custom schedulers is on the roadmap, but, encouraged by the performance of servers that go the ?full simple? approach, this might not be a top priority and might take some time [1]. The API was removed for the simple reason that it?s just not ready, as you noticed. As for memory footprint, although this might not be the cause of your issue, it might interest you to know that we?re now working on dramatically reducing the footprint of virtual thread stacks. That work also wasn?t ready for 19, but is a higher priority than custom schedulers. So I?m interested to know how much of that excess footprint is due to virtual thread stacks (those would appear as jdk.internal.vm.StackChunk objects in your heap). What I?d like to hear more about is pinning, and what common causes of it you see. I would also be interested to hear your thoughts about how much of it is due to ecosystem readiness (e.g. some JDBC drivers don?t pin while others still do, although that?s expected to change). ? Ron [1]: The ?mechanical sympathy? effects you alluded to are real but too small in comparison to the throughput increase of thread-per-request code for them to be an immediate focus, especially as a work-stealing scheduler has pretty decent mechanical sympathy already. On the other hand, there are other reasons to support custom schedulers (e.g. UI event threads) that might shift the priority balance. On 26 Jul 2022, at 15:13, Clement Escoffier > wrote: Hello, This email reports our observations around Loom, mainly in the context of Quarkus. It discusses the current approach and our plans. We are sharing this information on our current success and challenges with Loom. Please let us know your thoughts and questions on our approach(es). Context Since the early days of the Loom project, we have been looking at various approaches to integrate Loom (mostly virtual threads) into Quarkus. 
Our goal was (and still is) to dispatch processing (HTTP requests, Kafka messages, gRPC calls) on virtual threads. Thus, the user would not have to think about blocking or not blocking (more on that later as it relates to the Quarkus architecture) and can write synchronous code without limiting the application's concurrency. To achieve this, we need to dispatch the processing on virtual threads but also have compliant clients to invoke remote services (HTTP, gRPC?), send messages (Kafka, AMQP), or interact with a data store (SQL or NoSQL). Quarkus Architecture Before going further, we need to explain how Quarkus is structured. Quarkus is based on a reactive engine (Netty + Eclipse Vert.x), so under the hood, Quarkus uses event loops to schedule the workloads and non-blocking I/O. There is also the possibility of using Netty Native Transport (epoll, kqueue, io_uring). The processing can be either directly dispatched to the event loop or on a worker thread (OS thread). In the first case, the code must be written in an asynchronous and non-blocking manner. Quarkus proposes a programming model and safety guards to write such a code. In the latter case, the code can be blocking. Quarkus decides which dispatching strategy it uses for each processing job. The decision is based on the method signatures and annotations (for example, the user can force it to be called on an event loop or a worker thread). When using a worker thread, the request is received on an event loop and dispatched to the worker thread, and when the response is ready to be written (when it fits in memory), Quarkus switches back to the event loop. The current approach The integration of Loom's virtual threads is currently based[1] on a new annotation (@RunOnVirtualThread). It introduces a third dispatching strategy, and methods annotated with this annotation are called on a virtual thread. 
So, we now have three possibilities: * * Execute * the processing on an event loop thread - the code must be non-blocking * * * Execute * the processing on an OS (worker) thread - with the thread cost and concurrency limit * * * Execute * the processing on a virtual thread * The following snippet shows an elementary example: @GET @Path("/loom") @RunOnVirtualThread Fortune example() { var list = repo.findAll(); return pickOne(list); } This support is already experimentally available in Quarkus 2.10. Previous attempts The current approach is not our first attempt. We had two other approaches that we discarded, while the second one is something we want to reconsider. First Approach - All workers are virtual threads The first approach was straightforward. The idea was to replace the worker (OS) threads with Virtual Threads. However, we quickly realized some limitations. Long-running (purely CPU-bound) processing would block the carrier thread as there is no preemption. While the user should be aware that long-running processing should not be executed on virtual threads, in this model, it was not explicit. We also started capturing carrier thread pinning situation (our current approach still has this issue, we will explain our bandaid later). Second Approach - Marry event loops and carrier threads Quarkus is designed to reduce the memory usage of the application. We are obsessed with RSS usage, especially when running in a container where resources are scarce. It has driven lots of our architecture choices, including the dimensioning strategies (number of event loops, number of worker threads?). Thus, we investigated the possibility of avoiding having a second carrier thread pool and reducing the number of switches between the event loops and the carrier threads. We tried to use Netty event loops as carrier threads to achieve this. We had to use private APIs (which used to be public at some point in early access builds) to implement such an approach [3]. 
Unfortunately, we quickly ran into issues (explaining why our method is not part of the public API). Typically, we had deadlock situations when a carrier thread shared locks with virtual threads. This made it impossible to use event loops as carriers, considering the probability of lock sharing. That custom scheduling strategy also prevents work stealing (Netty event loops do not handle work stealing) and must keep a strict ordering between I/O tasks.

Pros and Cons of the current approach

Our current approach (based on @RunOnVirtualThread) integrates smoothly with the rest of Quarkus (even if the integration is limited to the HTTP part at the moment, as the integration with Kafka and gRPC is slightly more complicated, but not impossible). The user's code is written synchronously, and the users are aware of the dispatching strategy. Despite the limitations mentioned before, we still believe it's a good trade-off, even if not ideal. However, the chances of pinning the carrier threads are still very high (caused by pervasive usage in the ecosystem of certain common JDK features - synchronized, JNI, etc.). Because we would like to reduce the number of carrier threads to the bare minimum (to limit the RSS usage), we can end up with an underperforming application, which would have a concurrency level lower than the classic worker-thread approach and pretty lousy response times.

The Netty / Loom dance

We implemented a band-aid to reduce the chance of pinning while not limiting users to a small set of Quarkus APIs. Remember, Quarkus is based on a reactive core, and most of our APIs are available in two forms:

* An imperative form, blocking the caller thread when dealing with I/O
* A reactive form that is non-blocking (reactive programming)

To avoid thread pinning when running on a virtual thread, we offer the possibility of using the reactive form of our APIs but blocking the virtual thread while the result is still being computed.
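The "block the virtual thread on a reactive result" pattern described above can be sketched with plain JDK types. This is only a minimal sketch: the real Quarkus APIs return Mutiny/Vert.x types rather than CompletableFuture, and fetchFortune is a hypothetical stand-in for a reactive client call.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

public class AwaitOnVirtualThread {

    // Hypothetical stand-in for a reactive, non-blocking client call.
    static CompletableFuture<String> fetchFortune() {
        return CompletableFuture.supplyAsync(() -> {
            try {
                TimeUnit.MILLISECONDS.sleep(50); // simulated I/O latency
            } catch (InterruptedException e) {
                throw new IllegalStateException(e);
            }
            return "fortune";
        });
    }

    public static void main(String[] args) throws InterruptedException {
        Thread vt = Thread.ofVirtual().start(() -> {
            // join() parks only the virtual thread; its carrier is released
            // to run other virtual threads while the result is computed.
            String result = fetchFortune().join();
            System.out.println(result);
        });
        vt.join();
    }
}
```

(Requires JDK 19+ with preview features, or JDK 21+, for Thread.ofVirtual.)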
These awaits do not block the carrier thread and can be used with APIs returning zero or one result, but also with streams (like Kafka topics) where you receive an iterator. As said above, this is a band-aid until we have non-pinning clients/APIs. Under the hood, there is a dance between the virtual thread and the Netty event loop (used by the reactive API). It introduces a few unfortunate switches but works around the pinning issue.

Observations

Over the past year, we ran many tests to design and implement our integration. The current approach is far from ideal, but it works fine. We have excellent results when compared with a fully reactive approach and a worker approach (Quarkus can have the three variants in the same app). The response time under load is close enough to the reactive approach and far better than the classic worker-thread approach [1][2]. However (remember, we are obsessed with RSS), the RSS usage is very high - even higher than with the worker-thread approach. At the moment, we are investigating where these objects come from. We hope to have a better understanding after the summer. Our observations show that the performance penalty is likely due to memory consumption (and GC cycles). However, as said, we are still investigating.

Ideally

For us (Quarkus), and probably for several other Java frameworks based on Netty, it would be terrific if we could find a way to reconcile the two scheduling strategies (in a sense, we would use the event loops as carrier threads). Of course, there will be trade-offs and limitations. Our initial attempt didn't end well, but that does not mean it's a dead end. An event-loop carrier thread would greatly benefit the underlying reactive engine (Netty/Vert.x in the case of Quarkus). It retains some event-loop execution semantics: code is multithreaded (in the virtual-thread sense) yet executed on a single carrier thread that respects the event-loop principles and should have decent mechanical sympathy.
In addition, it should enable using classic blocking constructs (e.g., java.util.concurrent.locks.Lock), whereas currently the code can only block on Vert.x constructs (e.g., a Vert.x future, but not a java.util.concurrent.locks.Lock), as Vert.x needs to be aware of the thread suspension to schedule event dispatching in a race-free / deadlock-free manner. With such an integration, virtual threads would be executed on the event loop. When they "block", they would be unmounted, and I/O or another virtual thread would be processed. That would reduce the number of switches between threads, reduce RSS usage, and allow lots of Java frameworks to leverage Loom virtual threads quickly. Of course, this approach can only be validated empirically. Typically, it adds latency to every virtual thread dispatch. In addition, watchdogs would need to be implemented to prevent (or at least warn the user about) the execution of long CPU-intensive actions that do not yield in an acceptable time.

Conclusion

Our integration of Loom virtual threads in Quarkus is already available to our users, and we will be collecting feedback. As explained in this email, we have identified two issues. The first one is purely about performance, and we were able to measure it empirically: the interaction between Loom and the Netty/Vert.x reactive stack seems to create an abundance of data structures that put pressure on the GC and degrade the overall performance of the application. As said above, we are investigating. The second one is more general and also impacts programming with Quarkus/Vert.x and Loom. The goal is to reconcile the scheduling strategies of Loom and Netty/Vert.x. This could improve performance by decreasing the number of context switches (the Loom-Netty dance) and the RSS of an application. Moreover, it would enable the use of classic blocking constructs in Vert.x directly (i.e., without wrapping them in Vert.x's own abstractions). We could not validate and/or characterize the performance improvement of such a model yet.
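As an aside to the blocking-constructs point above: with the default JDK scheduler (not the hypothetical event-loop carrier discussed here), blocking a virtual thread on a java.util.concurrent.locks.Lock already unmounts it rather than pinning the carrier. A minimal sketch, which is what an event-loop carrier would need to preserve:

```java
import java.util.concurrent.locks.ReentrantLock;

public class LockOnVirtualThread {

    // Returns true once a virtual thread has parked on the lock and then
    // acquired it after the main thread released it.
    static boolean acquireFromVirtualThread() throws InterruptedException {
        ReentrantLock lock = new ReentrantLock();
        lock.lock(); // held by the main (platform) thread for now
        Thread vt = Thread.ofVirtual().start(() -> {
            lock.lock(); // parks the virtual thread; the carrier is not pinned
            lock.unlock();
        });
        while (!lock.hasQueuedThread(vt)) {
            Thread.sleep(1); // wait until the virtual thread is parked
        }
        lock.unlock();       // let the virtual thread proceed
        vt.join();
        return vt.getState() == Thread.State.TERMINATED;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(acquireFromVirtualThread());
    }
}
```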
The result is unclear, as we don't know if the decrease in context switches would be outweighed by the additional latency in virtual thread dispatch. We are sharing this information on our current successes and challenges with Loom. Please let us know your thoughts and concerns on our approach(es). Thanks! Clement [1] - https://developers.redhat.com/devnation/tech-talks/integrate-loom-quarkus [2] - https://github.com/anavarr/fortunes_benchmark [3] - https://github.com/openjdk/loom/commit/cad26ce74c98e28854f02106117fe03741f69ba0 -------------- next part -------------- An HTML attachment was scrubbed... URL: From ron.pressler at oracle.com Wed Jul 27 15:54:23 2022 From: ron.pressler at oracle.com (Ron Pressler) Date: Wed, 27 Jul 2022 15:54:23 +0000 Subject: Virtual Threads support in Quarkus - current integration and ideal In-Reply-To: <075501d8a113$0c03b610$240b2230$@kolotyluk.net> References: <939D2497-2E57-43DE-8A22-35E08ADFD036@oracle.com> <075501d8a113$0c03b610$240b2230$@kolotyluk.net> Message-ID: <96BDAFCE-09A3-4CCB-A2D9-E977A6C951D0@oracle.com> I am aware of one, but having just answered questions and provided advice I'll let the project's developers announce it when it's ready. I believe it should be soon enough. -- Ron On 26 Jul 2022, at 18:13, eric at kolotyluk.net wrote: "Writing the entire pipeline with simple blocking in mind gave us not only superior performance, but a much smaller and simpler codebase, and that would be the approach I'd recommend." Ron, are there any projects you are aware of that will do this? Such as a Loom-based HTTP Server and Client library, something like Jetty, but completely based on Virtual Threads, and maybe some Structured Concurrency? Cheers, Eric -------------- next part -------------- An HTML attachment was scrubbed...
URL: From eric at kolotyluk.net Wed Jul 27 16:35:15 2022 From: eric at kolotyluk.net (eric at kolotyluk.net) Date: Wed, 27 Jul 2022 09:35:15 -0700 Subject: Unclear on close() Message-ID: <092b01d8a1d6$da30e3a0$8e92aae0$@kolotyluk.net> From https://openjdk.org/jeps/428 and https://download.java.net/java/early_access/loom/docs/api/jdk.incubator.concurrent/jdk/incubator/concurrent/StructuredTaskScope.html

public void close()

Closes this task scope. This method first shuts down the task scope (as if by invoking the shutdown method). It then waits for the threads executing any unfinished tasks to finish. If interrupted then this method will continue to wait for the threads to finish before completing with the interrupt status set. This method may only be invoked by the task scope owner. A StructuredTaskScope is intended to be used in a structured manner. If this method is called to close a task scope before nested task scopes are closed then it closes the underlying construct of each nested task scope (in the reverse order that they were created in), closes this task scope, and then throws StructureViolationException. Similarly, if called to close a task scope that encloses operations with extent-local bindings then it also throws StructureViolationException after closing the task scope.

I am unclear on "It then waits for the threads executing any unfinished tasks to finish."

Instant deadline = ...
try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
    Future<String> future1 = scope.fork(() -> query(left));
    Future<String> future2 = scope.fork(() -> query(right));
    scope.joinUntil(deadline);
    scope.throwIfFailed(e -> new WebApplicationException(e));
    // both tasks completed successfully
    String result = Stream.of(future1, future2)
                          .map(Future::resultNow)
                          .collect(Collectors.joining(", ", "{ ", " }"));
    ...
}

1. Is there any scenario where close() waits forever? a. Where it is implicit in this try block. b.
I can imagine scenarios where subtasks don't cancel properly or respond correctly to interrupts. 2. If there is, is there any programmatic way out of this? a. Does the InterruptedException bypass close() and exit the try block? b. Is this guaranteed by the runtime? c. I assume it is, but I have made bad assumptions about the runtime before. 3. Personally, I would have thought that "scope.joinUntil(deadline);" would guarantee this code exits the try block, but the documentation, as written, does not give me that confidence. There are two wait points. a. scope.joinUntil(deadline); b. scope.close(); 4. While this may not be ambiguous to others, it is to me. a. It would be nice if there was text that made this more explicit. Cheers, Eric -------------- next part -------------- An HTML attachment was scrubbed... URL: From ron.pressler at oracle.com Wed Jul 27 17:17:19 2022 From: ron.pressler at oracle.com (Ron Pressler) Date: Wed, 27 Jul 2022 17:17:19 +0000 Subject: Unclear on close() In-Reply-To: <092b01d8a1d6$da30e3a0$8e92aae0$@kolotyluk.net> References: <092b01d8a1d6$da30e3a0$8e92aae0$@kolotyluk.net> Message-ID: <0C6A9C3D-0A1E-4A4E-9722-BB05BA694F43@oracle.com> On 27 Jul 2022, at 17:35, eric at kolotyluk.net wrote: 1. Is there any scenario where close() waits forever? * Where it is implicit in this try block. * I can imagine scenarios where subtasks don't cancel properly or respond correctly to interrupts. Yes, that could happen. It is a property of very general languages, like Java, and there's no getting around it. It takes a very carefully controlled language, like Erlang, to support the forceful (non-cooperative) termination of a thread, and even there things could go wrong unless some discipline is followed. The core of the issue is that waiting for a thread to terminate is the least of your worries. It is more important to ensure that threads maintain your program's logical invariants, and so we must ensure threads are terminated when they decide they're ready.
This requires their cooperation. So we can only ever *ask* for a thread to terminate; we can't kill it in a way that safely maintains program invariants. 2. If there is, is there any programmatic way out of this? * Does the InterruptedException bypass close() and exit the try block? * Is this guaranteed by the runtime? * I assume it is, but I have made bad assumptions about the runtime before... An exception could not bypass the close if used in a try-with-resources block, but because we rely on try-with-resources, which is optional (i.e. you could neglect to use the construct altogether), there are ways to write code that doesn't call close. The runtime would, currently, only detect that if this interferes with other things that rely on correct nesting of scopes. So, e.g. if you don't close a scope but then close an enclosing scope, that will be detected. 3. Personally, I would have thought that "scope.joinUntil(deadline);" would guarantee this code exits the try block, but the documentation, as written, does not give me that confidence. There are two wait points... * scope.joinUntil(deadline); * scope.close(); 4. While this may not be ambiguous to others, it is to me. * It would be nice if there was text that made this more explicit. Cheers, Eric close can only return after all threads have terminated (except, maybe, due to a VM error). join/joinUntil wait for forks to: 1. terminate, OR 2. be cancelled due to shutdown, OR 3. until the waiting thread is interrupted or the timeout expires. Whatever happens, close waits for all forked threads to fully terminate. -- Ron -------------- next part -------------- An HTML attachment was scrubbed...
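Ron's point that termination is cooperative can be seen with plain threads, independent of StructuredTaskScope. A minimal sketch, in which the worker deliberately swallows its interrupt, so the joining thread (like close()) can only wait it out:

```java
public class CooperativeCancellation {

    // Starts a worker that ignores interruption for ~200 ms, interrupts it,
    // and then has no choice but to wait for it to finish on its own.
    static Thread runUncooperativeWorker() throws InterruptedException {
        Thread worker = new Thread(() -> {
            long deadline = System.nanoTime() + 200_000_000L;
            while (System.nanoTime() < deadline) {
                Thread.interrupted(); // clears (and ignores) the interrupt flag
            }
        });
        worker.start();
        worker.interrupt(); // we can only *ask* the worker to stop...
        worker.join();      // ...and, like close(), we still wait for it
        return worker;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runUncooperativeWorker().getState());
    }
}
```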
URL: From eric at kolotyluk.net Wed Jul 27 18:36:05 2022 From: eric at kolotyluk.net (eric at kolotyluk.net) Date: Wed, 27 Jul 2022 11:36:05 -0700 Subject: Unclear on close() In-Reply-To: <0C6A9C3D-0A1E-4A4E-9722-BB05BA694F43@oracle.com> References: <092b01d8a1d6$da30e3a0$8e92aae0$@kolotyluk.net> <0C6A9C3D-0A1E-4A4E-9722-BB05BA694F43@oracle.com> Message-ID: <097c01d8a1e7$bb843270$328c9750$@kolotyluk.net> Thanks for that clarity, Ron. So, as always, use existing best practices to make sure that tasks and subtasks are robust in handling cancellation, interrupts, exceptions, etc. Loom does not bring any new magic here? Cheers, Eric From: Ron Pressler Sent: July 27, 2022 10:17 AM To: Eric Kolotyluk Cc: loom-dev at openjdk.java.net Subject: Re: Unclear on close() On 27 Jul 2022, at 17:35, eric at kolotyluk.net wrote: 1. Is there any scenario where close() waits forever? a. Where it is implicit in this try block. b. I can imagine scenarios where subtasks don't cancel properly or respond correctly to interrupts. Yes, that could happen. It is a property of very general languages, like Java, and there's no getting around it. It takes a very carefully controlled language, like Erlang, to support the forceful (non-cooperative) termination of a thread, and even there things could go wrong unless some discipline is followed. The core of the issue is that waiting for a thread to terminate is the least of your worries. It is more important to ensure that threads maintain your program's logical invariants, and so we must ensure threads are terminated when they decide they're ready. This requires their cooperation. So we can only ever *ask* for a thread to terminate; we can't kill it in a way that safely maintains program invariants. 2. If there is, is there any programmatic way out of this? a. Does the InterruptedException bypass close() and exit the try block? b. Is this guaranteed by the runtime? c. I assume it is, but I have made bad assumptions about the runtime before...
An exception could not bypass the close if used in a try-with-resources block, but because we rely on try-with-resources, which is optional (i.e. you could neglect to use the construct altogether), there are ways to write code that doesn't call close. The runtime would, currently, only detect that if this interferes with other things that rely on correct nesting of scopes. So, e.g. if you don't close a scope but then close an enclosing scope, that will be detected. 3. Personally, I would have thought that "scope.joinUntil(deadline);" would guarantee this code exits the try block, but the documentation, as written, does not give me that confidence. There are two wait points... a. scope.joinUntil(deadline); b. scope.close(); 4. While this may not be ambiguous to others, it is to me. a. It would be nice if there was text that made this more explicit. Cheers, Eric close can only return after all threads have terminated (except, maybe, due to a VM error). join/joinUntil wait for forks to: 1. terminate, OR 2. be cancelled due to shutdown, OR 3. until the waiting thread is interrupted or the timeout expires. Whatever happens, close waits for all forked threads to fully terminate. -- Ron -------------- next part -------------- An HTML attachment was scrubbed... URL: From ron.pressler at oracle.com Wed Jul 27 22:33:47 2022 From: ron.pressler at oracle.com (Ron Pressler) Date: Wed, 27 Jul 2022 22:33:47 +0000 Subject: [External] : RE: Unclear on close() In-Reply-To: <097c01d8a1e7$bb843270$328c9750$@kolotyluk.net> References: <092b01d8a1d6$da30e3a0$8e92aae0$@kolotyluk.net> <0C6A9C3D-0A1E-4A4E-9722-BB05BA694F43@oracle.com> <097c01d8a1e7$bb843270$328c9750$@kolotyluk.net> Message-ID: <80AAE3A7-12FD-4DE3-92B4-1571A86844B6@oracle.com> That's correct. The kind of magic that would be required here would have to make the platform much more high-level. On the other hand, some languages - Clojure, maybe? -
might be restrictive enough in their control of side effects, or at least have sufficiently useful subsets that are restrictive enough, that they could consider having their compilers automatically emit interruption checks into the bytecode they generate. -- Ron On 27 Jul 2022, at 19:36, eric at kolotyluk.net wrote: Thanks for that clarity, Ron. So, as always, use existing best practices to make sure that tasks and subtasks are robust in handling cancellation, interrupts, exceptions, etc. Loom does not bring any new magic here? Cheers, Eric From: Ron Pressler > Sent: July 27, 2022 10:17 AM To: Eric Kolotyluk > Cc: loom-dev at openjdk.java.net Subject: Re: Unclear on close() On 27 Jul 2022, at 17:35, eric at kolotyluk.net wrote: 1. Is there any scenario where close() waits forever? * Where it is implicit in this try block. * I can imagine scenarios where subtasks don't cancel properly or respond correctly to interrupts. Yes, that could happen. It is a property of very general languages, like Java, and there's no getting around it. It takes a very carefully controlled language, like Erlang, to support the forceful (non-cooperative) termination of a thread, and even there things could go wrong unless some discipline is followed. The core of the issue is that waiting for a thread to terminate is the least of your worries. It is more important to ensure that threads maintain your program's logical invariants, and so we must ensure threads are terminated when they decide they're ready. This requires their cooperation. So we can only ever *ask* for a thread to terminate; we can't kill it in a way that safely maintains program invariants. 2. If there is, is there any programmatic way out of this? * Does the InterruptedException bypass close() and exit the try block? * Is this guaranteed by the runtime? * I assume it is, but I have made bad assumptions about the runtime before...
An exception could not bypass the close if used in a try-with-resources block, but because we rely on try-with-resources, which is optional (i.e. you could neglect to use the construct altogether), there are ways to write code that doesn't call close. The runtime would, currently, only detect that if this interferes with other things that rely on correct nesting of scopes. So, e.g. if you don't close a scope but then close an enclosing scope, that will be detected. 3. Personally, I would have thought that "scope.joinUntil(deadline);" would guarantee this code exits the try block, but the documentation, as written, does not give me that confidence. There are two wait points... * scope.joinUntil(deadline); * scope.close(); 4. While this may not be ambiguous to others, it is to me. * It would be nice if there was text that made this more explicit. Cheers, Eric close can only return after all threads have terminated (except, maybe, due to a VM error). join/joinUntil wait for forks to: 1. terminate, OR 2. be cancelled due to shutdown, OR 3. until the waiting thread is interrupted or the timeout expires. Whatever happens, close waits for all forked threads to fully terminate. -- Ron -------------- next part -------------- An HTML attachment was scrubbed...
URL: From oleksandr.otenko at gmail.com Thu Jul 28 20:31:41 2022 From: oleksandr.otenko at gmail.com (Alex Otenko) Date: Thu, 28 Jul 2022 21:31:41 +0100 Subject: [External] : Re: jstack, profilers and other tools In-Reply-To: References: <5B55B456-C2CC-4856-80A8-8A3F9E40CF05@oracle.com> <0E494B4B-ADCF-4CBB-AFFF-F463CB05AD0E@oracle.com> <7EC05322-6BD9-4A17-9AB3-62115F3940D0@oracle.com> <271F8C58-F096-42B2-9432-E5643A7F133C@oracle.com> <0F187B4E-65FE-4BF0-86DE-661088C65DD8@oracle.com> <1FA73DAE-FD5D-4CC6-9638-C9985C20D286@oracle.com> <5922F1B9-C3E1-4D0C-BA61-64EFBB8FD88E@oracle.com> <75424126-7EB0-43BD-9DB6-F28F29396473@oracle.com> <2F640F43-3233-4C0D-90C3-243396DE69BD@oracle.com> <20468672-2291-4E56-890B-20F80DD2BCC7@oracle.com> Message-ID: Hi Ron, The claim in the JEP is the same as in this email thread, so that is not much help. But now I don't need help anymore, because I found the explanation of how the thread count, response time, request rate and thread-per-request are connected. Now, what makes me bothered about the claims: Little's law connects throughput to concurrency. We agreed it has no connection to thread count. That's a disconnect between the claim about threads and Little's law dictating it. There's also the assumption that response time remains constant, but that's a mighty assumption - response time changes with thread count. There's also the claim of needing more threads. That's also not something that follows from thread-per-request. Essentially, thread-per-request is a constructive proof of having an infinite number of threads. How can one want more? Also, from a different angle - the number of threads needed in thread-per-request does not depend on throughput at all. Just consider what the request rate means. It means that if you choose however small a time frame, and however large a request count, there is a nonzero probability that it will happen. Consequently, the number of threads needed is arbitrarily large for any throughput.
Which is just another way to say that the number of threads is effectively infinite and there is no point trying to connect it to Little's law. Request rate doesn't change the number of threads that can exist at any given time; it only changes the probability of observing any particular number of them in a fixed period of time. All this is only criticism of the formal mathematical claims made here and in the JEP. Nothing needs doing, if no one is interested in the formal claims being perfect. Alex On Tue, 26 Jul 2022, 15:41 Ron Pressler, wrote: > > > On 26 Jul 2022, at 14:33, Alex Otenko wrote: > > Hi Ron, > > I think I can verbalize what bothered me all along. > > I wish someone made a distinction between: > > Offered traffic - actual term; determined based on the time one thread > spends on a request. > > Capacity - I don't think this is the actual term. This is the actual > thread count. If this is at or below offered traffic, the system is not > stable. You can increase capacity until you get to thread-per-request, > which probably corresponds to +oo. > > > I don't understand this sentence. > > > Concurrency as used in Little's law. This is measured in the same units as > offered traffic, but is not the same as offered traffic, because the time > used here is the actual response time, which includes all sorts of waits. > > > None of that matters. Little's law is a mathematical theorem about some > unit arriving at some processing centre - a customer, a request, whatever - > and for *that* unit, the theorem relates the average latency of performing > that operation and the average rate of arrival of those things to the > average number of those things existing concurrently in the centre. So, we > pick requests as the things we look at, and everything follows.
The theorem > tells us how many requests, on average, are concurrently being processed, > and since we're assuming thread-per-request, this tells us how many threads > are active, because *by definition* of thread-per-request a concurrent > request takes at least one thread. > > > The confusing bit then is that we can't be talking of concurrency before > capacity exceeds offered traffic, because the system is not stable, and > after that adding threads only decreases concurrency. > > > > No one is talking about *adding* threads. The number of threads grows > because rising throughput *makes it grow* in a thread-per-request system. > Also, we're not interested in what's happening in a system in the process > of crashing. > > > > Then also the pragmatic angle. At which point, or for what systems, should > I say "yeah, we can't do this without virtual threads", and at which point > should I say "thread-per-request is the way to go"? > > > As explained in JEP 425, there is absolutely no such point: picking > thread-per-request is the premise we're taking as a given, not the > conclusion. I.e. we assume thread-per-request, and the conclusion is that > we need many threads. Virtual threads are designed to allow > thread-per-request servers to achieve the maximum throughput allowable by > the hardware. > > Why do so many people want to pick thread-per-request? Because > thread-per-request is the model that allows representing your application's > unit of concurrency with the platform's unit of concurrency, and the Java > platform has only one such unit: the thread. I.e. it is the only model that > the language and the platform fully support. That is why asynchronous APIs > are essentially DSLs and do not rely on the language's basic > composition constructs (loops, try/catch, try-with-resources, etc.), why JFR > yields less-than-informative profiles for such programs, and why debuggers > can't step through the logical flow of such programs.
> > So there is absolutely no point at which you'd say "we must do it like > that". But *if* you choose to do it like that, then you'd need virtual > threads if your concurrency exceeds ~1000. > > Thread-per-request and async are neither good nor bad; they're just > different aesthetic styles for writing code. But Java only fully supports > the former, and *IF* you choose to do it that way, THEN you'll need virtual > threads. In other words, a person who should be interested in virtual > threads is one who thinks it would be nice to write code in the > thread-per-request style, but doesn't want to give up on throughput. I > think the JEP is clear on that. > > > The answer to the first question is: "when your offered traffic is in > thousands per CPU". Why CPU specifically? Because otherwise something else > is the bottleneck. This means 100ms of wait per 100 microseconds of on-CPU > time. I don't know how common this is in the world, but in my practice this > never was the case - because 100 microseconds is about as much as a REST > endpoint takes to produce a few KB of JSON, and a 100ms wait is an eternity > in comparison. Why thousands? Because we had 200 threads per CPU and sync > code, and were fine. Maybe it's gross, but virtual threads are not the > killer feature in those cases. OK, I haven't seen the world, but I reckon > the back-of-the-envelope working out is OK. > > > If what you're claiming is that simple thread-per-request servers using OS > threads are satisfactory for virtually all systems, then that has long > since been established to not be the case. There's just no point arguing > over this. As I think I already told you, the 100ms wait is the total of all > waits, even if done in parallel, and it is quite common because quite a lot > of servers do outgoing calls to scores of services. It is very common for a > single incoming request to do 20 outgoing I/O requests, if not more.
> > The second question is then not really based on performance, but rather on > the architectural differences that thread-per-request offers. One less thing to > tune is good. The reason that this is not a performance question is that > adding threads gets the response time indistinguishably close to the minimum > possible well before you get to +oo. > > > As long as you're talking about "adding threads", I can tell you're not > getting this. No one is suggesting adding threads. > > If you pick thread-per-request, then the number of threads grows with > throughput, and that's why you need virtual threads. > > > > Alex > > On Tue, 26 Jul 2022, 10:15 Ron Pressler, wrote: > >> Let me make this as simple as I think I can: >> >> 1. We are talking *only* about a server that creates a new thread for >> every incoming request. That's how we define "thread-per-request." If what >> you have in mind is a server that operates in any other way, you're >> misunderstanding the conversation. >> >> 2. While artificially increasing the number of threads in that server >> would do nothing, whatever that system's latency is, whatever its resource >> utilisation is, a rising rate of requests *will* result in that server >> having more threads that are alive concurrently (by virtue of how it >> operates, as a rising request rate will not cause that server to reduce >> latency); i.e. it's the increased throughput that causes the number of >> threads to rise, not vice versa. Therefore, to cope with high request rates, >> that server must have the capacity for many threads. >> >> That is all, and that is how we know that a server using virtual threads >> would normally have a great many of them: because virtual threads are used >> by thread-per-request servers with high throughputs. Other things will >> happen too, and other concurrency limits will eventually come into play, >> but this - that the number of threads will rise - is necessarily true.
>> >> Now we can get to what I think your actual point is. You believe that the >> server we're talking about must be at some kind of a disadvantage compared >> to other kinds of servers. I understand you want me to convince you that is not >> the case, but the only thing I can do at this point is for you >> to actually write a server in this style, employing virtual threads, and >> then report what problems and limitations you actually run into, not >> hypothesise about the problems you think you might run into. That will help you >> understand how virtual threads are used, and will help us find potentially >> missing APIs. >> >> -- Ron >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From danielaveryj at gmail.com Thu Jul 28 22:35:16 2022 From: danielaveryj at gmail.com (Daniel Avery) Date: Thu, 28 Jul 2022 16:35:16 -0600 Subject: [External] : Re: jstack, profilers and other tools Message-ID: Maybe this is not helpful, but it is how I understood the JEP.

This is Little's Law:

L = λW

Where
- L is the average number of requests being processed by a stationary system (aka concurrency)
- λ is the average arrival rate of requests (aka throughput)
- W is the average time to process a request (aka latency)

This is a thread-per-request system:

T = L

Where
- T is the average number of threads
- L is the average number of requests (same L as in Little's Law)

Therefore,

T = L = λW
T = λW

Prior to Loom, the memory footprint of (platform) threads gives a bound on thread count:

T <= ~1000

After Loom, the reduced memory footprint of (virtual) threads gives a relaxed bound on thread count:

T <= ~1000000

Relating thread count to Little's Law tells us that virtual threads can support a higher average arrival rate of requests (throughput), or a higher average time to process a request (latency), than platform threads could:

T = λW <= ~1000000

-------------- next part -------------- An HTML attachment was scrubbed...
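The derivation above can be checked with a quick calculation. The numbers here are illustrative, not taken from the thread: at 10,000 requests/s and 100 ms average latency, Little's Law gives L = λW = 1,000 concurrent requests, hence about 1,000 live threads in a thread-per-request server.

```java
public class LittlesLaw {

    // L = lambda * W: average concurrency from arrival rate and latency.
    // Latency is taken in milliseconds to keep the arithmetic exact.
    static long concurrency(long arrivalRatePerSec, long latencyMillis) {
        return arrivalRatePerSec * latencyMillis / 1000;
    }

    public static void main(String[] args) {
        // Illustrative numbers: 10,000 req/s at 100 ms average latency
        // -> 1,000 concurrent requests, i.e. ~1,000 live threads in a
        // thread-per-request server: near the practical ceiling for
        // platform threads, comfortably within the ~1,000,000 bound
        // suggested for virtual threads.
        System.out.println(concurrency(10_000, 100)); // prints 1000
    }
}
```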
URL: From oleksandr.otenko at gmail.com Thu Jul 28 23:11:04 2022 From: oleksandr.otenko at gmail.com (Alex Otenko) Date: Fri, 29 Jul 2022 00:11:04 +0100 Subject: [External] : Re: jstack, profilers and other tools In-Reply-To: References: Message-ID: Thanks. That works under the _assumption_ that the time stays constant, which is also a statement made in the JEP. But we know that with the thread count growing, the time goes down. So one needs more reasoning to explain why T goes up - and at the same time keep the assumption that the time doesn't go down. In addition, it makes sense to talk of thread counts when that presents a limit of some sort - e.g. if N threads are busy and one more request arrives, the request waits for a thread to become available - we have a system with N threads. Thread-per-request does not have that limit: for any number of threads already busy, if one more request arrives, it still gets a thread. My study concludes that thread-per-request is the case of an infinite number of threads (from a mathematical point of view). In this case, talking about T and its dependency on request rate is meaningless. A Poisson distribution of requests means that for any request rate there is a non-zero probability of any number of requests in the system - ergo, the number of threads. Connecting this number to a rate is meaningless. Basically, you need to support "a high number of threads" for any request rate, not just a very high request rate. On Thu, 28 Jul 2022, 23:35 Daniel Avery, wrote: > Maybe this is not helpful, but it is how I understood the JEP > > > This is Little's Law: > > > L = λW > > > Where > > - L is the average number of requests being processed by a stationary > system (aka concurrency) > > - λ
is the average arrival rate of requests (aka throughput) > > - W is the average time to process a request (aka latency) > > > This is a thread-per-request system: > > > T = L > > > Where > > - T is the average number of threads > > - L is the average number of requests (same L as in Little?s Law) > > > Therefore, > > > T = L = ?W > > T = ?W > > > Prior to loom, memory footprint of (platform) threads gives a bound on > thread count: > > > T <= ~1000 > > > After loom, reduced memory footprint of (virtual) threads gives a relaxed > bound on thread count: > > > T <= ~1000000 > > > Relating thread count to Little?s Law tells us that virtual threads can > support a higher average arrival rate of requests (throughput), or a higher > average time to process a request (latency), than platform threads could: > > > T = ?W <= ~1000000 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From oleksandr.otenko at gmail.com Thu Jul 28 23:22:18 2022 From: oleksandr.otenko at gmail.com (Alex Otenko) Date: Fri, 29 Jul 2022 00:22:18 +0100 Subject: [External] : Re: jstack, profilers and other tools In-Reply-To: References: Message-ID: Or, putting it in yet another way, you need support not for the average number of requests in the system, but for the maximum number of requests in the system. In a system with finite thread count this translates into support of arbitrarily long request queues (which don't depend on request rate, too). In a thread-per-request system it necessarily is arbitrarily large number of threads. (Of course, this is only a model of some ideal system) Thank you for bearing with me. Alex On Fri, 29 Jul 2022, 00:11 Alex Otenko, wrote: > Thanks. > > That works under _assumption_ that the time stays constant, which is also > a statement made in the JEP. > > But we know that with thread count growing the time goes down. 
So one > needs more reasoning to explain why T goes up - and at the same time keep > the assumption that time doesn't go down. > > In addition, it makes sense to talk of thread counts when that presents a > limit of some sort - eg if N threads are busy and one more request arrives, > the request waits for a thread to become available - we have a system with > N threads. Thread-per-request does not have that limit: for any number of > threads already busy, if one more request arrives, it still gets a thread. > > My study concludes that thread-per-request is the case of infinite number > of threads (from mathematical point of view). In this case talking about T > and its dependency on request rate is meaningless. > > Poisson distribution of requests means that for any request rate there is > a non-zero probability for any number of requests in the system - ergo, the > number of threads. Connecting this number to a rate is meaningless. > > Basically, you need to support "a high number of threads" for any request > rate, not just a very high request rate. > > On Thu, 28 Jul 2022, 23:35 Daniel Avery, wrote: > >> Maybe this is not helpful, but it is how I understood the JEP >> >> >> This is Little?s Law: >> >> >> L = ?W >> >> >> Where >> >> - L is the average number of requests being processed by a stationary >> system (aka concurrency) >> >> - ? 
is the average arrival rate of requests (aka throughput) >> >> - W is the average time to process a request (aka latency) >> >> >> This is a thread-per-request system: >> >> >> T = L >> >> >> Where >> >> - T is the average number of threads >> >> - L is the average number of requests (same L as in Little?s Law) >> >> >> Therefore, >> >> >> T = L = ?W >> >> T = ?W >> >> >> Prior to loom, memory footprint of (platform) threads gives a bound on >> thread count: >> >> >> T <= ~1000 >> >> >> After loom, reduced memory footprint of (virtual) threads gives a relaxed >> bound on thread count: >> >> >> T <= ~1000000 >> >> >> Relating thread count to Little?s Law tells us that virtual threads can >> support a higher average arrival rate of requests (throughput), or a higher >> average time to process a request (latency), than platform threads could: >> >> >> T = ?W <= ~1000000 >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ron.pressler at oracle.com Thu Jul 28 23:46:01 2022 From: ron.pressler at oracle.com (Ron Pressler) Date: Thu, 28 Jul 2022 23:46:01 +0000 Subject: [External] : Re: jstack, profilers and other tools In-Reply-To: References: <5B55B456-C2CC-4856-80A8-8A3F9E40CF05@oracle.com> <0E494B4B-ADCF-4CBB-AFFF-F463CB05AD0E@oracle.com> <7EC05322-6BD9-4A17-9AB3-62115F3940D0@oracle.com> <271F8C58-F096-42B2-9432-E5643A7F133C@oracle.com> <0F187B4E-65FE-4BF0-86DE-661088C65DD8@oracle.com> <1FA73DAE-FD5D-4CC6-9638-C9985C20D286@oracle.com> <5922F1B9-C3E1-4D0C-BA61-64EFBB8FD88E@oracle.com> <75424126-7EB0-43BD-9DB6-F28F29396473@oracle.com> <2F640F43-3233-4C0D-90C3-243396DE69BD@oracle.com> <20468672-2291-4E56-890B-20F80DD2BCC7@oracle.com> Message-ID: <0375A98B-1430-4E70-AEBF-A802F4F5DE7E@oracle.com> On 28 Jul 2022, at 21:31, Alex Otenko > wrote: Hi Ron, The claim in JEP is the same as in this email thread, so that is not much help. 
But now I don't need help anymore, because I found the explanation of how the thread count, response time, request rate and thread-per-request are connected.

Now, what bothers me about the claims. Little's law connects throughput to concurrency. We agreed it has no connection to thread count. That's a disconnect between the claim about threads and Little's law dictating it.

Not really, no. In thread-per-request systems, the number of threads is equal to (or perhaps greater than) the concurrency, because that's the definition of thread-per-request. That's why we agreed that in thread-per-request systems, Little's law tells us the (average) number of threads.

There's also the assumption that response time remains constant, but that's a mighty assumption - response time changes with thread count.

There is absolutely no such assumption. Unless a rising request rate causes the latency to significantly *drop*, the number of threads will grow. If the latency happens to rise, the number of threads will grow even faster.

There's also the claim of needing more threads. That's also not something that follows from thread-per-request. Essentially, thread-per-request is a constructive proof of having an infinite number of threads. How can one want more? Also, from a different angle - the number of threads needed in thread-per-request does not depend on throughput at all.

The average number of requests being processed concurrently is equal to the rate of requests (i.e. throughput) times the average latency. Again, because a thread-per-request system is defined as one that assigns (at least) one thread to every request, the number of threads is therefore proportional to the throughput (as long as the system is stable).

Just consider what the request rate means. It means that if you choose however small a time frame, and however large a request count, there is a nonzero probability that it will happen. Consequently the number of threads needed is arbitrarily large for any throughput.

Little's theorem is about *long-term average* concurrency, latency and throughput, and it is interesting precisely because it holds regardless of the distribution of the requests.

Which is just another way to say that the number of threads is effectively infinite and there is no point trying to connect it to Little's law.

I don't understand what that means. A mathematical theorem about some quantity (again, the theorem is about concurrency, but we *define* a thread-per-request system to be one where the number of threads is equal to (or greater than) the concurrency) is true whether you think there's a point to it or not. The (average) number of threads is obviously not infinite, but equal to the throughput times the latency (assuming just one thread per request). Little's law has been effectively used to size server systems for decades, so obviously there's also a very practical point to understanding it.

Request rate doesn't change the number of threads that can exist at any given time

The (average) number of threads in a thread-per-request system rises proportionately with the (average) throughput. We're not talking about the number of threads that *can* exist, but the number of threads that *do* exist (on average, of course). The number of threads that *can* exist puts a bound on the number of threads that *do* exist, and so on maximum throughput.

it only changes the probability of observing any particular number of them in a fixed period of time.

It changes their (average) number. You can start any thread-per-request server, increase the load, and see for yourself (if the server uses a pool, you'll see an increase not in live threads but in non-idle threads, but it's the same thing).

All this is only criticism of the formal mathematical claims made here and in the JEP. Nothing needs doing, if no one is interested in formal claims being perfect.

The claims, however, were written with care for precision, while your reading of them is not only imprecise and at times incorrect, but may lead people to misunderstand how concurrent systems behave.

Alex

On Tue, 26 Jul 2022, 15:41 Ron Pressler wrote:

On 26 Jul 2022, at 14:33, Alex Otenko wrote:

Hi Ron,

I think I can verbalize what bothered me all along. I wish someone made a distinction between:

Offered traffic - actual term; determined based on the time one thread spends on a request.

Capacity - I don't think this is the actual term. This is the actual thread count. If this is at or below offered traffic, the system is not stable. You can increase capacity until you get to thread-per-request, which probably corresponds to +oo.

I don't understand this sentence.

Concurrency as used in Little's law. This is measured in the same units as offered traffic, but is not the same as offered traffic, because the time used here is the actual response time, which includes all sorts of waits.

None of that matters. Little's law is a mathematical theorem about some unit arriving at some processing centre - a customer, a request, whatever - and for *that* unit, the theorem relates the average latency of performing that operation and the average rate of arrival of those things to the average number of those things existing concurrently in the centre. So, we pick requests as the things we look at, and everything follows. The theorem tells us how many requests, on average, are concurrently being processed, and since we're assuming thread-per-request, this tells us how many threads are active, because *by definition* of thread-per-request a concurrent request takes at least one thread.

The confusing bit then is that we can't be talking of concurrency before capacity exceeds offered traffic, because the system is not stable, and after that adding threads only decreases concurrency.

No one is talking about *adding* threads. The number of threads grows because rising throughput *makes it grow* in a thread-per-request system. Also, we're not interested in what's happening in a system in the process of crashing.

Then also the pragmatic angle. At which point, or for what systems, should I say "yeah, we can't do this without virtual threads", and at which point should I say "thread-per-request is the way to go"?

As explained in JEP 425, there is absolutely no such point: picking thread-per-request is the premise we're taking as a given, not the conclusion. I.e. we assume thread-per-request, and the conclusion is that we need many threads. Virtual threads are designed to allow thread-per-request servers to achieve the maximum throughput allowable by the hardware. Why do so many people want to pick thread-per-request? Because thread-per-request is the model that allows representing your application's unit of concurrency with the platform's unit of concurrency, and the Java platform has only one such unit: the thread. I.e. it is the only model that the language and the platform fully support. That is why asynchronous APIs are essentially DSLs and do not rely on the language's basic composition constructs (loops, try/catch, try-with-resources etc.), why JFR yields less-than-informative profiles for such programs, and why debuggers can't step through the logical flow of such programs. So there is absolutely no point at which you'd say "we must do it like that". But *if* you choose to do it like that, then you'd need virtual threads if your concurrency exceeds ~1000. Thread-per-request and async are neither good nor bad; they're just different aesthetic styles for writing code. But Java only fully supports the former, and *IF* you choose to do it that way, THEN you'll need virtual threads. In other words, a person who should be interested in virtual threads is one who thinks it would be nice to write code in the thread-per-request style, but doesn't want to give up on throughput. I think the JEP is clear on that.

The answer to the first question is: "when your offered traffic is in thousands per CPU". Why CPU specifically? Because otherwise something else is the bottleneck. This means 100ms wait per 100 microseconds of on-CPU time. I don't know how common this is in the world, but in my practice this never was the case - because 100 microseconds is about as much as a REST endpoint takes to produce a few KB of JSON, and 100ms wait is an eternity in comparison. Why thousands? Because we had 200 threads per CPU and sync code, and were fine. Maybe it's gross, but virtual threads are not the killer feature in those cases. Ok, I haven't seen the world, but I reckon the back-of-the-envelope working out is ok.

If what you're claiming is that simple thread-per-request servers using OS threads are satisfactory for virtually all systems, then that has long since been established to not be the case. There's just no point arguing over this. As I think I already told you, 100ms wait is the total of all waits, even if done in parallel, and it is quite common because quite a lot of servers do outgoing calls to scores of services. It is very common for a single incoming request to do 20 outgoing I/O requests, if not more.

The second question is then not really based on performance, rather on architectural differences that thread-per-request offers. One less thing to tune is good. The reason that this is not a performance question is that adding threads gets response time indistinguishably close to the minimum possible well before you get to +oo.

As long as you're talking about "adding threads" I can tell you're not getting this. No one is suggesting adding threads. If you pick thread-per-request, then the number of threads grows with throughput, and that's why you need virtual threads.

Alex

On Tue, 26 Jul 2022, 10:15 Ron Pressler wrote:

Let me make this as simple as I think I can:

1. We are talking *only* about a server that creates a new thread for every incoming request. That's how we define "thread-per-request." If what you have in mind is a server that operates in any other way, you're misunderstanding the conversation.

2. While artificially increasing the number of threads in that server would do nothing, whatever that system's latency is, whatever its resource utilisation is, a rising rate of requests *will* result in that server having more threads that are alive concurrently (by virtue of how it operates, as a rising request rate will not cause that server to reduce latency); i.e. it's the increased throughput that causes the number of threads to rise, not vice versa. Therefore, to cope with high request rates that server must have the capacity for many threads.

That is all, and that is how we know that a server using virtual threads would normally have a great many of them: because virtual threads are used by thread-per-request servers with high throughputs. Other things will happen too, and other concurrency limits will eventually come into play, but this - that the number of threads will rise - is necessarily true.

Now we can get to what I think your actual point is. You believe that the server we're talking about must be at some kind of a disadvantage compared to other kinds of servers. I understand you want me to convince you that is not the case, but the only thing I can do at this point is ask you to actually write a server in this style, employing virtual threads, and then report what problems and limitations you actually run into, not hypothesise what problems you think you might run into. That will help you understand how virtual threads are used, and will help us find potentially missing APIs.

-- Ron
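As a concrete illustration of "a new thread for every incoming request", here is a minimal sketch using the virtual-thread-per-task executor available since JDK 21. It is not any particular server's implementation: the 100 ms sleep stands in for blocking I/O, and the request count is arbitrary. Each task holds a (virtual) thread for its whole duration, so the number of live threads tracks the number of in-flight requests, exactly as described above.

```java
import java.time.Duration;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

// Thread-per-request with virtual threads: 10,000 concurrently blocked
// platform threads would be problematic; 10,000 virtual threads are cheap.
public class ManyThreadsDemo {
    public static int run(int requests) {
        AtomicInteger completed = new AtomicInteger();
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < requests; i++) {
                executor.submit(() -> {           // one virtual thread per "request"
                    try {
                        Thread.sleep(Duration.ofMillis(100)); // simulated blocking I/O
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                    completed.incrementAndGet();
                });
            }
        } // close() waits for all submitted tasks to finish
        return completed.get();
    }

    public static void main(String[] args) {
        System.out.println(run(10_000)); // prints 10000
    }
}
```

Note that the code is structurally identical to a classic pooled thread-per-request server; only the executor factory changes.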
From ron.pressler at oracle.com Fri Jul 29 00:01:05 2022 From: ron.pressler at oracle.com (Ron Pressler) Date: Fri, 29 Jul 2022 00:01:05 +0000 Subject: [External] : Re: jstack, profilers and other tools In-Reply-To: References: Message-ID: <9AF18193-D45F-4910-9AEA-9E2DCFC323EC@oracle.com>

On 29 Jul 2022, at 00:11, Alex Otenko wrote:

Thanks.

That works under the _assumption_ that the time stays constant, which is also a statement made in the JEP.

But we know that with thread count growing the time goes down. So one needs more reasoning to explain why T goes up - and at the same time keep the assumption that time doesn't go down.

A new request arrives - a thread is created. What time goes down? If you're talking about fanout, i.e. adding threads to perform operations in parallel, the maths actually stays exactly the same. First, remember that we're not talking about threads *we're adding* but threads that are created by virtue of the fact that every request gets a thread. Second, if you want to do the calculation with threads directly, rather than requests, then the concurrency goes up by the same factor as the latency is reduced (https://inside.java/2020/08/07/loom-performance/).

In addition, it makes sense to talk of thread counts when that presents a limit of some sort - e.g. if N threads are busy and one more request arrives, the request waits for a thread to become available - we have a system with N threads. Thread-per-request does not have that limit: for any number of threads already busy, if one more request arrives, it still gets a thread.

That could be true, but that's not the point of thread-per-request at all. The most important point is that a thread-per-request system is one that consumes a thread for the entire duration of processing a request.

My study concludes that thread-per-request is the case of an infinite number of threads (from a mathematical point of view). In this case talking about T and its dependency on request rate is meaningless.

That is incorrect.

A Poisson distribution of requests means that for any request rate there is a non-zero probability of any number of requests in the system - ergo, of any number of threads. Connecting this number to a rate is meaningless.

Little's proof is independent of distribution.

Basically, you need to support "a high number of threads" for any request rate, not just a very high request rate.

I sort of understand what you're trying to say, but it's more misleading than helpful. If your server gets 10 requests per second on average, then if their average latency is never more than 0.1 seconds, then if you can only have one thread, your system would still be stable (i.e. there would be no queues growing without bounds). If the requests momentarily arrive close together, or if some latencies are momentarily high, then the queue will momentarily grow. So no, since the goal is to keep the server stable, if you have a low throughput you do not need many threads.

-- Ron

From danielaveryj at gmail.com Fri Jul 29 00:10:36 2022 From: danielaveryj at gmail.com (Daniel Avery) Date: Thu, 28 Jul 2022 18:10:36 -0600 Subject: [External] : Re: jstack, profilers and other tools In-Reply-To: References: Message-ID:

> Or, putting it in yet another way, you need support not for the average number of requests in the system, but for the maximum number of requests in the system.

That's a fair point, but I think the result (T = λW <= ~1000000) still holds. If we broke this step

T <= ~1000000

into multiple steps

T' <= ~1000000
T <= T'

Where
- T' is the instantaneous number of threads (or, requests)
- T is the average number of threads

Then the other steps still hold, but we've clarified that our system can never exceed ~1000000 instantaneous concurrent requests (or else it will run out of memory / crash / become "unstable" in a way that makes Little's Law inapplicable).

-Daniel

On Thu, Jul 28, 2022 at 5:22 PM Alex Otenko wrote:
>> Or, putting it in yet another way, you need support not for the average number of requests in the system, but for the maximum number of requests in the system.
>> In a system with finite thread count this translates into support of arbitrarily long request queues (which don't depend on request rate, too). In a thread-per-request system it necessarily is an arbitrarily large number of threads. (Of course, this is only a model of some ideal system.)
>> Thank you for bearing with me.
>> Alex
>> On Fri, 29 Jul 2022, 00:11 Alex Otenko wrote:
>>> Thanks.
>>> That works under the _assumption_ that the time stays constant, which is also a statement made in the JEP.
>>> But we know that with thread count growing the time goes down. So one needs more reasoning to explain why T goes up - and at the same time keep the assumption that time doesn't go down.
>>> In addition, it makes sense to talk of thread counts when that presents a limit of some sort - e.g. if N threads are busy and one more request arrives, the request waits for a thread to become available - we have a system with N threads. Thread-per-request does not have that limit: for any number of threads already busy, if one more request arrives, it still gets a thread.
>>> My study concludes that thread-per-request is the case of an infinite number of threads (from a mathematical point of view). In this case talking about T and its dependency on request rate is meaningless.
>>> A Poisson distribution of requests means that for any request rate there is a non-zero probability of any number of requests in the system - ergo, of any number of threads. Connecting this number to a rate is meaningless.
>>> Basically, you need to support "a high number of threads" for any request rate, not just a very high request rate.

From pedro.lamarao at prodist.com.br Fri Jul 29 00:19:00 2022 From: pedro.lamarao at prodist.com.br (Pedro Lamarão) Date: Thu, 28 Jul 2022 21:19:00 -0300 Subject: [External] : Re: jstack, profilers and other tools In-Reply-To: References: Message-ID:

On Thu, 28 Jul 2022 at 20:24, Alex Otenko <oleksandr.otenko at gmail.com> wrote:

> Or, putting it in yet another way, you need support not for the average number of requests in the system, but for the maximum number of requests in the system.

No system is capable of supporting infinite requests. If your maximum is known, just replace "average" with "maximum" and the math stays the same. If your maximum is unknown, you add a load balancer capable of spinning instances up and down on demand, and the math stays the same.

-- Pedro Lamarão

From danielaveryj at gmail.com Fri Jul 29 00:42:53 2022 From: danielaveryj at gmail.com (Daniel Avery) Date: Thu, 28 Jul 2022 18:42:53 -0600 Subject: [External] : Re: jstack, profilers and other tools In-Reply-To: References: Message-ID:

whoops, that should have been

T' <= ~1000000
T <= max(T')

From ron.pressler at oracle.com Fri Jul 29 13:53:09 2022 From: ron.pressler at oracle.com (Ron Pressler) Date: Fri, 29 Jul 2022 13:53:09 +0000 Subject: [External] : Re: jstack, profilers and other tools In-Reply-To: <0375A98B-1430-4E70-AEBF-A802F4F5DE7E@oracle.com> Message-ID: <46BC2612-85AE-4C07-8F53-16D45961EA50@oracle.com>

BTW, the easiest way to visualise the temporary discrepancies between the averages in Little's law and instantaneous behaviour is to think of the queue forming at the entry to the system. If the system is unstable (i.e.
the equation doesn't hold), the queue will grow without bounds. If it is stable, it can momentarily grow but will then shrink as we regress to the mean.

So suppose λ = 100 and W = 1/20, and therefore the average concurrency is 5. If the *maximum* capacity for concurrent operations is also 5 (e.g. we have a thread-per-request server that uses just one thread per request and the maximum number of threads we can support is 5), then if the rate of requests momentarily rises above 100, the queue will grow, but it will eventually shrink when the rate drops below 100 (as it must). So if our throughput/rate-of-requests is expected to not exceed 100, we should be fine supporting just 5 threads.

To get a feel that this actually works, consider that the world's most popular thread-per-request servers already work in exactly this way. Rather than spawning a brand new thread for every request, they borrow one from a pool. The pool is normally fixed and set to something like a few hundred threads by default. They work fine as long as their throughput doesn't exceed the maximum expected throughput, at which point the threads in the pool (or perhaps some other resource) are exhausted. I.e. they do *not* work by allowing for an unbounded number of threads, but a bounded one; they are very much thread-per-request, yet their threads are capped. This does place an upper limit on their throughput, but they work fine until they reach it.

Of course, if other resources are exhausted before the pool is depleted, then (fanout aside) lightweight threads aren't needed. But because there are so many systems where the threads are the resource that's exhausted first, people invented non-thread-per-request (i.e. asynchronous) servers as well as lightweight threads.

-- Ron
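The worked example above (λ = 100, W = 1/20, so average concurrency 5) can be sanity-checked with a toy simulation. This is only a sketch, not a rigorous queueing model: it draws exponential inter-arrival times at rate λ, credits each arrival W seconds of in-system time, and ignores edge effects at the end of the run.

```java
import java.util.Random;

// Toy check of Little's law: arrivals at rate lambda, each request holds a
// "thread" for W seconds; we measure the time-averaged number in the system.
public class LittleSim {
    public static double averageConcurrency(double lambda, double w, double totalTime) {
        Random rnd = new Random(42);
        double busyTime = 0; // total request-seconds accumulated in the system
        double t = 0;
        while (t < totalTime) {
            t += -Math.log(1 - rnd.nextDouble()) / lambda; // exponential inter-arrival
            if (t < totalTime) busyTime += w;              // each request contributes W
        }
        return busyTime / totalTime;                       // approaches L = lambda * W
    }

    public static void main(String[] args) {
        // lambda = 100 req/s, W = 1/20 s => L should come out close to 5
        System.out.println(averageConcurrency(100, 1.0 / 20, 10_000));
    }
}
```

Over a long enough run the measured value converges on λW regardless of how bursty the individual arrivals are, which is the point of the law being distribution-independent.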
But now I don't need help anymore, because I found the explaination how the thread count, response time, request rate and thread-per-request are connected. Now what makes me bothered about the claims. Little's law connects throughput to concurrency. We agreed it has no connection to thread count. That's a disconnect between the claim about threads and Little's law dictating it. Not really, no. In thread-per-request systems, the number of threads is equal to (or perhaps greater than) the concurrency because that?s the definition of thread-per-request. That?s why we agreed that in thread-per-request systems, Little?s law tells us the (average) number of threads. There's also the assumption that response time remains constant, but that's a mighty assumption - response time changes with thread count. There is absolutely no such assumption. Unless the a rising request rate causes the latency to significantly *drop*, the number of threads will grow. If the latency happens to rise, the number of threads will grow even faster. There's also the claim of needing more threads. That's also not something that follows from thread-per-request. Essentially, thread-per-request is a constructive proof of having an infinite number of threads. How can one want more? Also, from a different angle - the number of threads in the thread-per-request needed does not depend on throughput at all. The average number of requests being processed concurrently is equal to the rate of requests (i.e. throughput) times the average latency. Again, because a thread-per-request system is defined as one that assigns (at least) one thread for every request, the number of threads is therefore proportional to the throughput (as long as the system is stable). Just consider what the request rate means. It means that if you choose however small time frame, and however large request count, there is a nonzero probability that it will happen. 
Consequently the number of threads needed is arbitrarily large for any throughput.

Little's theorem is about *long-term average* concurrency, latency and throughput, and it is interesting precisely because it holds regardless of the distribution of the requests.

Which is just another way to say that the number of threads is effectively infinite and there is no point trying to connect it to Little's law.

I don't understand what that means. A mathematical theorem about some quantity (again, the theorem is about concurrency, but we *define* a thread-per-request system to be one where the number of threads is equal to (or greater than) the concurrency) is true whether you think there's a point to it or not. The (average) number of threads is obviously not infinite, but equal to the throughput times the latency (assuming just one thread per request).

Little's law has been effectively used to size server systems for decades, so obviously there's also a very practical point to understanding it.

Request rate doesn't change the number of threads that can exist at any given time

The (average) number of threads in a thread-per-request system rises proportionately with the (average) throughput. We're not talking about the number of threads that *can* exist, but the number of threads that *do* exist (on average, of course). The number of threads that *can* exist puts a bound on the number of threads that *do* exist, and so on the maximum throughput.

it only changes the probability of observing any particular number of them in a fixed period of time.

It changes their (average) number. You can start any thread-per-request server, increase the load, and see for yourself (if the server uses a pool, you'll see an increase not in live threads but in non-idle threads, but it's the same thing).

All this is only criticism of the formal mathematical claims made here and in the JEP. Nothing needs doing, if no one is interested in formal claims being perfect.
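The λ = 100, W = 1/20 arithmetic in this exchange can be checked numerically. The sketch below is illustrative only (the class and method names are made up, not from the thread): it simulates Poisson arrivals at rate λ, each holding a thread for a fixed service time W with no cap on threads (pure thread-per-request), and measures the time-averaged concurrency, which Little's law predicts to be λW = 5.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

final class LittlesLawDemo {
    /**
     * Time-averaged number of in-flight requests for Poisson arrivals at
     * rate lam (requests/second), each holding its thread for a fixed
     * service time w (seconds), with no cap on threads - i.e. pure
     * thread-per-request. Little's law says this tends to lam * w.
     */
    static double avgConcurrency(double lam, double w, int requests, long seed) {
        Random rng = new Random(seed);
        List<double[]> events = new ArrayList<>(2 * requests); // {time, +1 or -1}
        double t = 0.0;
        for (int i = 0; i < requests; i++) {
            t += -Math.log(1.0 - rng.nextDouble()) / lam; // exponential inter-arrival gap
            events.add(new double[] { t, +1 });           // request arrives, thread starts
            events.add(new double[] { t + w, -1 });       // request completes, thread ends
        }
        events.sort((a, b) -> Double.compare(a[0], b[0]));
        double area = 0.0, prev = 0.0;
        int inFlight = 0;
        for (double[] e : events) {
            area += inFlight * (e[0] - prev); // integrate concurrency over time
            prev = e[0];
            inFlight += (int) e[1];
        }
        return area / prev; // long-run average concurrency
    }

    public static void main(String[] args) {
        // lam = 100 req/s, W = 1/20 s: the long-run average is close to 5.
        System.out.println(avgConcurrency(100.0, 1.0 / 20, 200_000, 1L));
    }
}
```

Raising λ or W scales the average number of busy threads accordingly; capping the threads below λW is what turns the excess into a growing queue.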
The claims, however, were written with care for precision, while your reading of them is not only imprecise and at times incorrect, but may lead people to misunderstand how concurrent systems behave.

Alex

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From holo3146 at gmail.com Fri Jul 29 14:35:30 2022 From: holo3146 at gmail.com (Holo The Sage Wolf) Date: Fri, 29 Jul 2022 17:35:30 +0300 Subject: Coupling in ExtentLocal In-Reply-To: References: Message-ID:

Hi,

Thanks for the response Kasper; the reason I sent the mail to core-libs is because JDK-8263012 is assigned to "core-libs". I added loom-dev to the chain.

What Andrew Haley wrote is correct, and it is also correct for pretty much every implementation of `AutoCloseable` (although, unlike ExtentLocals, most implementations of AutoCloseable will cause a resource leak, while ExtentLocal can cause a big logical bug). The idea of a `strong try-with-resources` seems to solve the problem (and can force better APIs in other places); is there a conversation about this feature?

Yuval Paz

On Fri, Jul 29, 2022 at 12:15 PM Kasper Nielsen wrote:
> On Thu, 28 Jul 2022 at 22:11, Holo The Sage Wolf wrote:
>> I have a question about the proposal: why not allow try-with-resources with this API?
> Hi,
> For Loom related questions, loom-dev at openjdk.org is probably a better fit.
> The main problem with TWR is that it cannot guard against non-nested use, because there is no way to force the user to call close(). See more here [1]
> /Kasper
> [1] https://mail.openjdk.org/pipermail/loom-dev/2021-June/002558.html

-- Holo The Wise Wolf Of Yoitsu

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From ron.pressler at oracle.com Fri Jul 29 20:37:23 2022 From: ron.pressler at oracle.com (Ron Pressler) Date: Fri, 29 Jul 2022 20:37:23 +0000 Subject: Coupling in ExtentLocal In-Reply-To: References: Message-ID:

Hi.
The difference between ExtentLocals and other TwR constructs is that we'd like to use it for critical, foundational things, where we want guarantees we can rely on, and we haven't been able to find a way to guarantee them as well as we'd like with TwR, even dynamically. Yes, we are thinking about a "strong" TwR, but we're busy with so many things that it might take a while. Until then, we might introduce a weaker, or less trustworthy, version of ExtentLocals that uses the existing TwR and wouldn't be used for critical things, but we think that it's best to start incubation with just the lambda API and then see what problems are most common/annoying and how we can best address them.

-- Ron

-------------- next part -------------- An HTML attachment was scrubbed...
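Kasper's point that try-with-resources cannot guard against non-nested use, and the appeal of the lambda API, can be illustrated with a toy sketch. The class and methods below are made up for illustration; this is not the ExtentLocal API.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy sketch only - NOT the ExtentLocal API; all names here are invented.
// It contrasts a TwR-style handle, which permits forgotten or non-nested
// close() calls, with a lambda-shaped method that pairs entry and exit by
// construction.
final class Binding implements AutoCloseable {
    private static final Deque<String> stack = new ArrayDeque<>();
    private final String value;

    Binding(String value) {
        this.value = value;
        stack.push(value); // entry: bind the value
    }

    @Override
    public void close() {
        // Nothing forces this to run, or to run in LIFO order; closing an
        // outer binding while an inner one is live is the non-nested misuse.
        if (!value.equals(stack.peek()))
            throw new IllegalStateException("non-nested close of " + value);
        stack.pop();
    }

    // The lambda shape: exit is guaranteed and always properly nested.
    static void where(String value, Runnable body) {
        stack.push(value);
        try {
            body.run();
        } finally {
            stack.pop();
        }
    }
}
```

With the handle style, `var a = new Binding("a"); var b = new Binding("b"); a.close();` compiles and fails only at run time (at best); with `where`, entry and exit are paired by the language, so the non-nested shape cannot even be written.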
URL:

From oleksandr.otenko at gmail.com Sun Jul 31 11:59:03 2022 From: oleksandr.otenko at gmail.com (Alex Otenko) Date: Sun, 31 Jul 2022 12:59:03 +0100 Subject: [External] : Re: jstack, profilers and other tools In-Reply-To: References: Message-ID:

Of course, that was a mathematical model - just as the case with finite threads still requires an unbounded queue, from a mathematical point of view. And no, you can't replace average with max.

On Fri, 29 Jul 2022, 01:19 Pedro Lamarão wrote:
> On Thu, 28 Jul 2022 at 20:24, Alex Otenko <oleksandr.otenko at gmail.com> wrote:
>> Or, putting it in yet another way, you need support not for the average number of requests in the system, but for the maximum number of requests in the system.
> No system is capable of supporting infinite requests.
> If your maximum is known, just replace "average" with "maximum" and the math stays the same.
> If your maximum is unknown, you add a load balancer capable of spinning instances up and down on demand, and the math stays the same.
> -- Pedro Lamarão

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From oleksandr.otenko at gmail.com Sun Jul 31 12:05:11 2022 From: oleksandr.otenko at gmail.com (Alex Otenko) Date: Sun, 31 Jul 2022 13:05:11 +0100 Subject: [External] : Re: jstack, profilers and other tools In-Reply-To: <9AF18193-D45F-4910-9AEA-9E2DCFC323EC@oracle.com> References: <9AF18193-D45F-4910-9AEA-9E2DCFC323EC@oracle.com> Message-ID:

> Little's proof is independent of distribution

Correct. That's why it is ok to pick any distribution and see.

On Fri, 29 Jul 2022, 01:01 Ron Pressler wrote:
> On 29 Jul 2022, at 00:11, Alex Otenko wrote:
> > Thanks.
> > That works under the _assumption_ that the time stays constant, which is also a statement made in the JEP.
> > But we know that with thread count growing the time goes down.
> > So one needs more reasoning to explain why T goes up - and at the same time keep the assumption that time doesn't go down.
>
> A new request arrives -> a thread is created. What time goes down? If you're talking about fanout, i.e. adding threads to perform operations in parallel, the maths actually stays exactly the same. First, remember that we're not talking about threads *we're adding* but threads that are created by virtue of the fact that every request gets a thread. Second, if you want to do the calculation with threads directly, rather than requests, then the concurrency goes up by the same factor as the latency is reduced (https://inside.java/2020/08/07/loom-performance/).
>
> > In addition, it makes sense to talk of thread counts when that presents a limit of some sort - e.g. if N threads are busy and one more request arrives, the request waits for a thread to become available - we have a system with N threads. Thread-per-request does not have that limit: for any number of threads already busy, if one more request arrives, it still gets a thread.
>
> That could be true, but that's not the point of thread-per-request at all. The most important point is that a thread-per-request system is one that consumes a thread for the entire duration of processing a request.
>
> > My study concludes that thread-per-request is the case of an infinite number of threads (from a mathematical point of view). In this case talking about T and its dependency on request rate is meaningless.
>
> That is incorrect.
>
> > Poisson distribution of requests means that for any request rate there is a non-zero probability for any number of requests in the system - ergo, the number of threads. Connecting this number to a rate is meaningless.
>
> Little's proof is independent of distribution.
>
> > Basically, you need to support "a high number of threads" for any request rate, not just a very high request rate.
> I sort of understand what you're trying to say, but it's more misleading than helpful. If your server gets 10 requests per second on average, and their average latency is never more than 0.1 seconds, then even if you can have only one thread, your system would still be stable (i.e. there would be no queues growing without bounds). If the requests momentarily arrive close together, or if some latencies are momentarily high, then the queue will momentarily grow. So no, since the goal is to keep the server stable, if you have a low throughput you do not need many threads.
>
> -- Ron

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From oleksandr.otenko at gmail.com Sun Jul 31 12:27:10 2022 From: oleksandr.otenko at gmail.com (Alex Otenko) Date: Sun, 31 Jul 2022 13:27:10 +0100 Subject: [External] : Re: jstack, profilers and other tools In-Reply-To: <46BC2612-85AE-4C07-8F53-16D45961EA50@oracle.com> References: <0375A98B-1430-4E70-AEBF-A802F4F5DE7E@oracle.com> <46BC2612-85AE-4C07-8F53-16D45961EA50@oracle.com> Message-ID:

Hi Ron,

I am glad you mentioned this visualisation experiment. That's the sort of experiment I proposed (and performed) what seems like a week ago. You may find some unintuitive things coming from this experiment, since you called them untrue.

Alex

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From ron.pressler at oracle.com Sun Jul 31 16:38:47 2022 From: ron.pressler at oracle.com (Ron Pressler) Date: Sun, 31 Jul 2022 16:38:47 +0000 Subject: [External] : Re: jstack, profilers and other tools In-Reply-To: References: <0375A98B-1430-4E70-AEBF-A802F4F5DE7E@oracle.com> <46BC2612-85AE-4C07-8F53-16D45961EA50@oracle.com> Message-ID:

A thread-per-request server under an average load of 100 req/s and an average request processing duration of 10ms will not destabilise if it can have no more than 10 threads, where "destabilise" means that the queue grows indefinitely. I.e. a stable system is defined as one where queues do not grow indefinitely. While it is true that for any queue size there is a shrinking, though non-zero, probability that it might be reached, that is absolutely unrelated to the server being thread-per-request. However you choose to represent a request, if your maximum capacity for concurrent requests is at or above L, the system will be stable.

-- Ron

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From robbepincket at live.be Sun Jul 31 20:19:55 2022 From: robbepincket at live.be (Robbe Pincket) Date: Sun, 31 Jul 2022 20:19:55 +0000 Subject: Motivation to put Continuation, ContinuationScope, and Scope in jdk.internal.vm package Message-ID:

The wiki still lists the old information:

> ## Continuations
>
> ### Design
>
> The primitive continuation construct is that of a scoped (AKA multiple-named-prompt), stackful, one-shot (non-reentrant) delimited continuation. The continuation can be cloned, and thus used to implement reentrant delimited continuations. The construct is exposed via the java.lang.Continuation class. Continuations are intended as a low-level API, that application authors are not intended to use directly. They will use higher-level constructs built on top of continuations, such as virtual threads or generators.

Someone should probably fix that before Java 19 releases?

Regards
Robbe Pincket

-------------- next part -------------- An HTML attachment was scrubbed... URL: