From doug.simon at oracle.com Mon Apr 1 06:59:42 2019 From: doug.simon at oracle.com (Doug Simon) Date: Mon, 1 Apr 2019 08:59:42 +0200 Subject: [13] RFR(L) 8220623: [JVMCI] Update JVMCI to support JVMCI based Compiler compiled into shared library In-Reply-To: References: <82C212AE-CE8A-4B6A-A1A0-DD08B41A06A5@oracle.com> Message-ID: > On 1 Apr 2019, at 04:17, David Holmes wrote: > > On 30/03/2019 1:16 am, Doug Simon wrote: >> Hi Robbin, >>> From: Robbin Ehn >>> >>> Hi, >>> >>> 434 for (; JavaThread *thr = jtiwh.next(); ) { >>> 435 if (thr!=thr_cur && thr->thread_state() == _thread_in_native) { >>> 436 num_active++; >>> 437 if (thr->is_Compiler_thread()) { >>> 438 CompilerThread* ct = (CompilerThread*) thr; >>> 439 if (ct->compiler() == NULL || !ct->compiler()->is_jvmci()) { >>> 440 num_active_compiler_thread++; >>> 441 } else { >>> 442 // When using a Java based JVMCI compiler, it's possible >>> 443 // for one compiler thread to grab a Java lock, enter >>> 444 // HotSpot and go to sleep on the shutdown safepoint. >>> 445 // Another JVMCI compiler thread can then attempt grab >>> 446 // the lock and thus never make progress. >>> 447 } >>> 448 } >>> 449 } >>> 450 } >>> >>> We inc num_active on threads in native. >>> If such thread is a compiler thread we also inc num_active_compiler_thread. >>> JavaThread blocking on safepoint would be state blocked. >>> JavaThread waiting on the 'Java lock' would also be blocked. >>> >>> Why are you not blocked when waiting on that contended Java lock? >> This change was made primarily in the context of libgraal. >> It can happen that a JVMCI compiler thread acquires a lock in libgraal, enters HotSpot >> and goes to sleep in the shutdown safepoint. Another JVMCI compiler thread then >> attempts to acquire the same lock and goes to sleep in libgraal which from HotSpot's >> perspective is the _thread_in_native state. 
>> This is the original fix I had for this: >> CompilerThread* ct = (CompilerThread*) thr; >> if (ct->compiler() == NULL || !ct->compiler()->is_jvmci() JVMCI_ONLY(|| !UseJVMCINativeLibrary)) { >> num_active_compiler_thread++; >> } else { >> // When using a compiler in a JVMCI shared library, it's possible >> // for one compiler thread to grab a lock in the shared library, >> // enter HotSpot and go to sleep on the shutdown safepoint. Another >> // JVMCI shared library compiler thread can then attempt to grab the >> // lock and thus never make progress. >> } >> which is probably the right one. I hadn't realized that a JavaGraal >> (as opposed to libgraal) JVMCI compiler thread blocked on a lock will be in >> the blocked state, not in the _thread_in_native state. > > It depends on whether the thread is supposed to participate in safepoints and whether the lock is acquired with or without a safepoint check. The libgraal thread acquires the lock without a (HotSpot) safepoint check (note that SVM may have its own safepoint check but that safepoint implementation is disjoint from HotSpot's). > I'm confused by the use of "shared library" in this context. If the VM is exiting and the thread holding the lock is blocked at the termination safepoint, then why would you expect another compiler thread blocked on that lock to make progress? I don't expect it to make progress. However, there's no way for HotSpot to know whether the other thread is blocked on a SVM lock or still progressing in SVM code. From HotSpot's perspective, the thread is simply in the _thread_in_native state. > This all sounds very odd to me. Hopefully I could clarify things. The important thing to note is that code executing in SVM compiled code is just like any other native code. 
-Doug From doug.simon at oracle.com Mon Apr 1 11:05:43 2019 From: doug.simon at oracle.com (Doug Simon) Date: Mon, 1 Apr 2019 13:05:43 +0200 Subject: [13] RFR(L) 8220623: [JVMCI] Update JVMCI to support JVMCI based Compiler compiled into shared library In-Reply-To: <2c48b99a-f0a7-1541-7290-c77da9986df7@oracle.com> References: <82C212AE-CE8A-4B6A-A1A0-DD08B41A06A5@oracle.com> <2c48b99a-f0a7-1541-7290-c77da9986df7@oracle.com> Message-ID: > On 1 Apr 2019, at 09:27, Robbin Ehn wrote: > > Hi Doug, > >> This change was made primarily in the context of libgraal. >> It can happen that a JVMCI compiler thread acquires a lock in libgraal, enters HotSpot >> and goes to sleep in the shutdown safepoint. Another JVMCI compiler thread then >> attempts to acquire the same lock and goes to sleep in libgraal which from HotSpot's >> perspective is the _thread_in_native state. > > Ok. > >> This is the original fix I had for this: >> CompilerThread* ct = (CompilerThread*) thr; >> if (ct->compiler() == NULL || !ct->compiler()->is_jvmci() JVMCI_ONLY(|| !UseJVMCINativeLibrary)) { >> num_active_compiler_thread++; >> } else { >> // When using a compiler in a JVMCI shared library, it's possible >> // for one compiler thread to grab a lock in the shared library, >> // enter HotSpot and go to sleep on the shutdown safepoint. Another >> // JVMCI shared library compiler thread can then attempt to grab the >> // lock and thus never make progress. >> } >> which is probably the right one. I hadn't realized that a JavaGraal >> (as opposed to libgraal) JVMCI compiler thread blocked on a lock will be in >> the blocked state, not in the _thread_in_native state. > > Yes, makes more sense. > > Another thing is this HandleMark: > > JvmtiAgentThread::call_start_function() { > + HandleMark hm(this); > ThreadToNativeFromVM transition(this); > > Since a safepoint can happen at any time when you are in native, I don't see how using a Handle in native would be safe or correct. 
I'm guessing you are missing a HandleMark somewhere when you re-enter VM? Tom, can you recall why this HandleMark was added? Do we run any JVMCI code on a JVMTI agent thread? -Doug From tom.rodriguez at oracle.com Mon Apr 1 17:01:08 2019 From: tom.rodriguez at oracle.com (Tom Rodriguez) Date: Mon, 1 Apr 2019 10:01:08 -0700 Subject: [13] RFR(L) 8220623: [JVMCI] Update JVMCI to support JVMCI based Compiler compiled into shared library In-Reply-To: References: <82C212AE-CE8A-4B6A-A1A0-DD08B41A06A5@oracle.com> <2c48b99a-f0a7-1541-7290-c77da9986df7@oracle.com> Message-ID: <2b8d7d1e-ef05-7067-826f-cea5aac0028a@oracle.com> >> Since a safepoint can happen at any time when you are in native, I don't see how using a Handle in native would be safe or correct. I'm guessing you are missing a HandleMark somewhere when you re-enter VM? > > Tom, can you recall why this HandleMark was added? Do we run any JVMCI code on a JVMTI agent thread? > > -Doug It's a leftover from the JNI self call logic I added early in libgraal development for testing. To ease testing and bringing up libgraal I'd added logic to let HotSpot call itself through JNI. This let me test and debug libgraal JVMCI changes without having a fully working libgraal. It required adding HandleMarks in a few places that had nothing to do with JVMCI because we were calling back into ourselves. I forgot to remove all of them when I removed the self call logic. Basically the problem I saw was that the HandleMarkCleaner takes over the last HandleMark it finds in the current thread and will clean out its contents in its own destructor, which actually runs earlier than the HandleMark destructor. If you aren't careful about having a HandleMark close to where you call out of the JVM you could end up releasing a handle earlier than the HandleMark scoping would suggest. I convinced myself this isn't a problem in the current code but maybe I missed something. 
I can resurrect my assertion checking if someone thinks there might be a real bug lurking here. tom From doug.simon at oracle.com Tue Apr 2 17:06:52 2019 From: doug.simon at oracle.com (Doug Simon) Date: Tue, 2 Apr 2019 19:06:52 +0200 Subject: JVMCI 0.58 released Message-ID: <1F59D21F-3660-44F3-8437-41271988D69F@oracle.com> Changes in JVMCI 0.58 include: - GR-14881: Adjust stack size for JVMCI shared library compiler threads. - GR-14826: Use JVMCI shared library by default if it is present. - GR-14755: Misc fixes for registerNativeMethods. - GR-14836: Only treat JVMCI threads in native library as user threads. - GR-14747: NPE in HotSpotMemoryAccessProviderImpl.readNarrowOopConstant(). - GR-14727: Do not prevent JVMCI compilation during bootstrapping. - GR-14677: CompilerToVM.isInternedString is missing check for String. The GR-14826 change is particularly noteworthy as it means GraalVM RC15 will use libgraal by default when running in JVM mode. This can be disabled with -XX:-UseJVMCINativeLibrary. The OpenJDK based binaries are at https://github.com/graalvm/openjdk8-jvmci-builder/releases/tag/jvmci-0.58 The OracleJDK based 'labsjdk' binaries will be available soon at https://www.oracle.com/technetwork/oracle-labs/program-languages/downloads/index.html (in the lower half of the page). -Doug From vladimir.kozlov at oracle.com Tue Apr 2 20:41:38 2019 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Tue, 2 Apr 2019 13:41:38 -0700 Subject: [13] RFR(L) 8220623: [JVMCI] Update JVMCI to support JVMCI based Compiler compiled into shared library In-Reply-To: <119335f1-acb7-fffc-f38a-e50a96b73b3c@oracle.com> References: <4b759550-f185-724f-7139-9bab648a1966@oracle.com> <119335f1-acb7-fffc-f38a-e50a96b73b3c@oracle.com> Message-ID: <2d206374-cfeb-9b1b-2838-e4e0a774e89d@oracle.com> I ran Kitchensink with G1 and -Xmx8g. I observed that Remark pause times are not consistent even without Graal. 
To see the effect I added time spent in JVMCI::do_unloading() to GC log (see below [3]). The result is < 1ms - it is less than 1% of a pause time. It will have even less effect since I moved JVMCI::do_unloading() from serial path to parallel worker thread as Stefan suggested. Stefan, are you satisfied with these changes now? Here is the latest delta update which includes previous [1] delta and - use CompilerThreadStackSize * 2 for libgraal instead of exact value, - removed HandleMark added for debugging (reverted changes in jvmtiImpl.cpp), - added recent jvmci-8 changes to fix registration of native methods in libgraal (jvmciCompilerToVM.cpp) http://cr.openjdk.java.net/~kvn/8220623/webrev_delta.05/ Thanks, Vladimir [1] http://cr.openjdk.java.net/~kvn/8220623/webrev_delta.04/ [2] Original webrev http://cr.openjdk.java.net/~kvn/8220623/webrev.03/ [3] Pause times from Kitchensink (0.0ms means there were no unloaded classes, 'NNN alive' shows how many metadata references were processed): [1.083s][1554229160638ms][info ][gc,start ] GC(2) Pause Remark [1.085s][1554229160639ms][info ][gc ] GC(2) JVMCI::do_unloading(): 0 alive 0.000ms [1.099s][1554229160654ms][info ][gc ] GC(2) Pause Remark 28M->28M(108M) 16.123ms [3.097s][1554229162651ms][info ][gc,start ] GC(12) Pause Remark [3.114s][1554229162668ms][info ][gc ] GC(12) JVMCI::do_unloading(): 3471 alive 0.164ms [3.148s][1554229162702ms][info ][gc ] GC(12) Pause Remark 215M->213M(720M) 51.103ms [455.111s][1554229614666ms][info ][gc,phases,start] GC(1095) Phase 1: Mark live objects [455.455s][1554229615010ms][info ][gc ] GC(1095) JVMCI::do_unloading(): 4048 alive 0.821ms [455.456s][1554229615010ms][info ][gc,phases ] GC(1095) Phase 1: Mark live objects 344.107ms [848.932s][1554230008486ms][info ][gc,phases,start] GC(1860) Phase 1: Mark live objects [849.248s][1554230008803ms][info ][gc ] GC(1860) JVMCI::do_unloading(): 3266 alive 0.470ms [849.249s][1554230008803ms][info ][gc,phases ] GC(1860) Phase 1: Mark live objects 316.527ms 
[1163.778s][1554230323332ms][info ][gc,start ] GC(2627) Pause Remark [1163.932s][1554230323486ms][info ][gc ] GC(2627) JVMCI::do_unloading(): 3474 alive 0.642ms [1163.941s][1554230323496ms][info ][gc ] GC(2627) Pause Remark 2502M->2486M(4248M) 163.296ms [1242.587s][1554230402141ms][info ][gc,phases,start] GC(2734) Phase 1: Mark live objects [1242.899s][1554230402453ms][info ][gc ] GC(2734) JVMCI::do_unloading(): 3449 alive 0.570ms [1242.899s][1554230402453ms][info ][gc,phases ] GC(2734) Phase 1: Mark live objects 311.719ms [1364.164s][1554230523718ms][info ][gc,phases,start] GC(3023) Phase 1: Mark live objects [1364.613s][1554230524167ms][info ][gc ] GC(3023) JVMCI::do_unloading(): 3449 alive 0.000ms [1364.613s][1554230524167ms][info ][gc,phases ] GC(3023) Phase 1: Mark live objects 448.495ms [1425.222s][1554230584776ms][info ][gc,phases,start] GC(3151) Phase 1: Mark live objects [1425.587s][1554230585142ms][info ][gc ] GC(3151) JVMCI::do_unloading(): 3491 alive 0.882ms [1425.587s][1554230585142ms][info ][gc,phases ] GC(3151) Phase 1: Mark live objects 365.403ms [1456.401s][1554230615955ms][info ][gc,phases,start] GC(3223) Phase 1: Mark live objects [1456.769s][1554230616324ms][info ][gc ] GC(3223) JVMCI::do_unloading(): 3478 alive 0.616ms [1456.769s][1554230616324ms][info ][gc,phases ] GC(3223) Phase 1: Mark live objects 368.643ms [1806.139s][1554230965694ms][info ][gc,start ] GC(4014) Pause Remark [1806.161s][1554230965716ms][info ][gc ] GC(4014) JVMCI::do_unloading(): 3478 alive 0.000ms [1806.163s][1554230965717ms][info ][gc ] GC(4014) Pause Remark 1305M->1177M(2772M) 23.190ms On 4/1/19 12:34 AM, Stefan Karlsson wrote: > On 2019-03-29 17:55, Vladimir Kozlov wrote: >> Stefan, >> >> Do you have a test (and flags) which can allow me to measure effect of this code on G1 remark pause? > > > -Xlog:gc prints the remark times: > [4,296s][info][gc       
] GC(89) Pause Remark 4M->4M(28M) 36,412ms > > StefanK > >> >> Thanks, >> Vladimir >> >> On 3/29/19 12:36 AM, Stefan Karlsson wrote: >>> On 2019-03-29 03:07, Vladimir Kozlov wrote: >>>> Hi Stefan, >>>> >>>> I collected some data on MetadataHandleBlock. >>>> >>>> First, do_unloading() code is executed only when class_unloading_occurred is 'true' - it is rare case. It should not >>>> affect normal G1 remark pause. >>> >>> It's only rare for applications that don't do dynamic class loading and unloading. The applications that do, will be >>> affected. >>> >>>> >>>> Second, I run a test with -Xcomp. I got about 10,000 compilations by Graal and next data at the end of execution: >>>> >>>> max_blocks = 232 >>>> max_handles_per_block = 32 (since handles array has 32 elements) >>>> max_total_alive_values = 4631 >>> >>> OK. Thanks for the info. >>> >>> StefanK >>> >>>> >>>> Thanks, >>>> Vladimir >>>> >>>> On 3/28/19 2:44 PM, Vladimir Kozlov wrote: >>>>> Thank you, Stefan >>>>> >>>>> On 3/28/19 12:54 PM, Stefan Karlsson wrote: >>>>>> Hi Vladimir, >>>>>> >>>>>> I started to check the GC code. >>>>>> >>>>>> ======================================================================== >>>>>> I see that you've added guarded includes in the middle of the include list: >>>>>>    #include "gc/shared/strongRootsScope.hpp" >>>>>>    #include "gc/shared/weakProcessor.hpp" >>>>>> + #if INCLUDE_JVMCI >>>>>> + #include "jvmci/jvmci.hpp" >>>>>> + #endif >>>>>>    #include "oops/instanceRefKlass.hpp" >>>>>>    #include "oops/oop.inline.hpp" >>>>>> >>>>>> The style we use is to put these conditional includes at the end of the include lists. >>>>> >>>>> okay >>>>> >>>>>> >>>>>> ======================================================================== >>>>>> Could you also change the following: >>>>>> >>>>>> + #if INCLUDE_JVMCI >>>>>> +      // Clean JVMCI metadata handles. >>>>>> +      JVMCI::do_unloading(is_alive_closure(), purged_class); >>>>>> + #endif >>>>>> >>>>>> to: >>>>>> +      
// Clean JVMCI metadata handles. >>>>>> +      JVMCI_ONLY(JVMCI::do_unloading(is_alive_closure(), purged_class);) >>>>>> >>>>>> to get rid of some of the line noise in the GC files. >>>>> >>>>> okay >>>>> >>>>>> >>>>>> ======================================================================== >>>>>> In the future we will need version of JVMCI::do_unloading that supports concurrent cleaning for ZGC. >>>>> >>>>> Yes, we need to support concurrent cleaning in a future. >>>>> >>>>>> >>>>>> ======================================================================== >>>>>> What's the performance impact for G1 remark pause with this serial walk over the MetadataHandleBlock? >>>>>> >>>>>> 3275 void G1CollectedHeap::complete_cleaning(BoolObjectClosure* is_alive, >>>>>> 3276                                         bool class_unloading_occurred) { >>>>>> 3277   uint num_workers = workers()->active_workers(); >>>>>> 3278   ParallelCleaningTask unlink_task(is_alive, num_workers, class_unloading_occurred, false); >>>>>> 3279   workers()->run_task(&unlink_task); >>>>>> 3280 #if INCLUDE_JVMCI >>>>>> 3281   // No parallel processing of JVMCI metadata handles for now. >>>>>> 3282   JVMCI::do_unloading(is_alive, class_unloading_occurred); >>>>>> 3283 #endif >>>>>> 3284 } >>>>> >>>>> There should not be impact if Graal is not used. Only cost of call (which most likely is inlined in product VM) and >>>>> check: >>>>> >>>>> http://hg.openjdk.java.net/metropolis/dev/file/530fc1427d02/src/hotspot/share/jvmci/jvmciRuntime.cpp#l1237 >>>>> >>>>> If Graal is used it should not have big impact since these metadata has regular pattern (32 handles per array and >>>>> array per MetadataHandleBlock block which are linked in list) and not large. >>>>> If there will be noticeable impact - we will work on it as you suggested by using ParallelCleaningTask. 
>>>>> >>>>>> >>>>>> ======================================================================== >>>>>> Did you consider adding it as a task for one of the worker threads to execute in ParallelCleaningTask? >>>>>> >>>>>> See how other tasks are claimed by one worker: >>>>>> void KlassCleaningTask::work() { >>>>>>    ResourceMark rm; >>>>>> >>>>>>    // One worker will clean the subklass/sibling klass tree. >>>>>>    if (claim_clean_klass_tree_task()) { >>>>>>      Klass::clean_subklass_tree(); >>>>>>    } >>>>> >>>>> These changes were ported from JDK8u based changes in graal-jvmci-8 and there are no ParallelCleaningTask in JDK8. >>>>> >>>>> Your suggestion is interesting and I agree that we should investigate it. >>>>> >>>>>> >>>>>> ======================================================================== >>>>>> In MetadataHandleBlock::do_unloading: >>>>>> >>>>>> +         if (klass->class_loader_data()->is_unloading()) { >>>>>> +           // This needs to be marked so that it's no longer scanned >>>>>> +           // but can't be put on the free list yet. The >>>>>> +           // ReferenceCleaner will set this to NULL and >>>>>> +           // put it on the free list. >>>>>> >>>>>> I couldn't find the ReferenceCleaner in the patch or in the source. Where can I find this code? >>>>> >>>>> I think it is typo (I will fix it) - it references new HandleCleaner class: >>>>> >>>>> http://cr.openjdk.java.net/~kvn/8220623/webrev.03/src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/HandleCleaner.java.html >>>>> >>>>> >>>>> Thanks, >>>>> Vladimir >>>>> >>>>>> >>>>>> Thanks, >>>>>> StefanK >>>>>> >>>>>> On 2019-03-28 20:15, Vladimir Kozlov wrote: >>>>>>> https://bugs.openjdk.java.net/browse/JDK-8220623 >>>>>>> http://cr.openjdk.java.net/~kvn/8220623/webrev.03/ >>>>>>> >>>>>>> Update JVMCI to support pre-compiled as shared library Graal. 
>>>>>>> Using AOT'ed Graal can offer benefits including: >>>>>>> - fast startup >>>>>>> - compile time similar to native JIT compilers (C2) >>>>>>> - memory usage disjoint from the application Java heap >>>>>>> - no profile pollution of JDK code used by the application >>>>>>> >>>>>>> This is JDK13 port of JVMCI changes done in graal-jvmci-8 [1] up to date. >>>>>>> Changes were collected in Metropolis repo [2] and tested there. >>>>>>> >>>>>>> Changes were reviewed by Oracle Labs (authors of JVMCI and Graal) and our compiler group. >>>>>>> Changes in shared code are guarded by #if INCLUDE_JVMCI and JVMCI flags. >>>>>>> >>>>>>> I ran tier1-tier8 (which includes HotSpot and JDK tests) and it was clean. In this set Graal was tested only in >>>>>>> tier3. >>>>>>> >>>>>>> And I ran all hs-tier3-graal .. hs-tier8-graal Graal tests available in our system. Several issues were found >>>>>>> which were present before these changes. >>>>>>> >>>>>>> Thanks, >>>>>>> Vladimir >>>>>>> >>>>>>> [1] https://github.com/graalvm/graal-jvmci-8/commit/49ff2045fb603e35516a3a427d8023c00e1607af >>>>>>> [2] http://hg.openjdk.java.net/metropolis/dev/ >>>>>> From vladimir.kozlov at oracle.com Wed Apr 3 01:37:59 2019 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Tue, 2 Apr 2019 18:37:59 -0700 Subject: [13] RFR(L) 8220623: [JVMCI] Update JVMCI to support JVMCI based Compiler compiled into shared library In-Reply-To: References: <4b759550-f185-724f-7139-9bab648a1966@oracle.com> <119335f1-acb7-fffc-f38a-e50a96b73b3c@oracle.com> <2d206374-cfeb-9b1b-2838-e4e0a774e89d@oracle.com> Message-ID: <14f3a526-eb3c-a23a-a1bc-198977086f08@oracle.com> On 4/2/19 4:51 PM, Kim Barrett wrote: >> On Apr 2, 2019, at 4:41 PM, Vladimir Kozlov wrote: >> >> I ran Kitchensink with G1 and -Xmx8g. I observed that Remark pause times are not consistent even without Graal. >> To see effect I added time spent in JVMCI::do_unloading() to GC log (see below [3]). 
The result is < 1ms - it is less than 1% of a pause time. >> >> It will have even less effect since I moved JVMCI::do_unloading() from serial path to parallel worker thread as Stefan suggested. > > A few comments, while I'm still looking at this. > > ------------------------------------------------------------------------------ > src/hotspot/share/gc/shared/parallelCleaning.cpp > 213 JVMCI_ONLY(_jvmci_cleaning_task.work(_unloading_occurred);) > > I think putting the serial JVMCI cleaning task at the end of the > ParallelCleaningTask can result in it being mostly run by itself, > without any parallelism. I think it should be put up front, so the > first thread in starts working on it right away, while later threads > can pick up other work. Should we really put it at the beginning? It takes < 1ms. Maybe other tasks are more expensive and should be run first as now. But I don't know what strategy is used here: run short or long tasks first. If you think it is okay to move it - I will move it. > > That's assuming this is all needed. I see that the Java side of things > is using WeakReference to do cleanup. I haven't figured out yet why > this new kind of weak reference mechanism in the VM is required in > addition to the Java WeakReference cleanup. Still working on that... > > ------------------------------------------------------------------------------ > src/hotspot/share/gc/shared/parallelCleaning.cpp > 194 #if INCLUDE_JVMCI > 195 _jvmci_cleaning_task(is_alive), > 196 #endif > > This could be made a bit less cluttered: > > JVMCI_ONLY(_jvmci_cleaning_task(is_alive) COMMA) Yes, I will do this. I tried to use JVMCI_ONLY with a regular ',' and it failed. Thanks, Vladimir > > ------------------------------------------------------------------------------ > src/hotspot/share/runtime/jniHandles.cpp > 192 #if INCLUDE_JVMCI > 193 JVMCI::oops_do(f); > 194 #endif > > I don't think that belongs here. 
> > ------------------------------------------------------------------------------ > From vladimir.kozlov at oracle.com Wed Apr 3 01:39:41 2019 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Tue, 2 Apr 2019 18:39:41 -0700 Subject: [13] RFR(L) 8220623: [JVMCI] Update JVMCI to support JVMCI based Compiler compiled into shared library In-Reply-To: References: <4b759550-f185-724f-7139-9bab648a1966@oracle.com> <119335f1-acb7-fffc-f38a-e50a96b73b3c@oracle.com> <2d206374-cfeb-9b1b-2838-e4e0a774e89d@oracle.com> Message-ID: <718ce598-694d-d8be-0ee3-cd0057e830e5@oracle.com> On 4/2/19 5:49 PM, Kim Barrett wrote: >> On Apr 2, 2019, at 7:51 PM, Kim Barrett wrote: >> src/hotspot/share/runtime/jniHandles.cpp >> 192 #if INCLUDE_JVMCI >> 193 JVMCI::oops_do(f); >> 194 #endif >> >> I don't think that belongs here. > > In addition to my thinking this is just out of place (we already removed the similar JVMTI > piggybacking from JNIHandles::weak_oops_do, though it had other problems too), note > that ZGC doesn't call JNIHandles::oops_do at all (it uses the parallel oopstorage API), and > probably the only reason some of the other collectors (particularly G1) still do is because > nobody has gotten around to making them use the parallel oopstorage API too. > This is ported from JDK 8 changes. Please give me a suggestion where I should call it? Thanks, Vladimir From rkennke at redhat.com Wed Apr 3 14:59:00 2019 From: rkennke at redhat.com (Roman Kennke) Date: Wed, 3 Apr 2019 16:59:00 +0200 Subject: How to make progress with PRs? Message-ID: Hello, I have two PRs lingering: https://github.com/oracle/graal/pull/1015 and https://github.com/oracle/graal/pull/1117 which should be good to integrate, afaict. Which buttons do I need to press to get progress on them? :-) Thanks, Roman From tom.rodriguez at oracle.com Wed Apr 3 15:12:12 2019 From: tom.rodriguez at oracle.com (Tom Rodriguez) Date: Wed, 3 Apr 2019 08:12:12 -0700 Subject: How to make progress with PRs? 
In-Reply-To: References: Message-ID: <0a96df22-880d-bcdc-949b-452b4a1432d2@oracle.com> Roman Kennke wrote on 4/3/19 7:59 AM: > Hello, > > I have two PRs lingering: > > https://github.com/oracle/graal/pull/1015 Sorry, I'll update with your latest changes and push if everything looks good. tom > and > https://github.com/oracle/graal/pull/1117 > > which should be good to integrate, afaict. Which buttons do I need to > press to get progress on them? :-) > > Thanks, > Roman From rkennke at redhat.com Wed Apr 3 15:18:13 2019 From: rkennke at redhat.com (Roman Kennke) Date: Wed, 3 Apr 2019 17:18:13 +0200 Subject: How to make progress with PRs? In-Reply-To: <0a96df22-880d-bcdc-949b-452b4a1432d2@oracle.com> References: <0a96df22-880d-bcdc-949b-452b4a1432d2@oracle.com> Message-ID: <990cbc9c-c94a-0a99-ef3b-82a1a73cb48b@redhat.com> >> Hello, >> >> I have two PRs lingering: >> >> https://github.com/oracle/graal/pull/1015 > > Sorry, I'll update with your latest changes and push if everything looks > good. Thanks!! Roman From vladimir.kozlov at oracle.com Wed Apr 3 16:54:19 2019 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Wed, 3 Apr 2019 09:54:19 -0700 Subject: [13] RFR(L) 8220623: [JVMCI] Update JVMCI to support JVMCI based Compiler compiled into shared library In-Reply-To: <8063fb37-f1b0-8b93-bbe8-4dbeeaa54959@oracle.com> References: <4b759550-f185-724f-7139-9bab648a1966@oracle.com> <119335f1-acb7-fffc-f38a-e50a96b73b3c@oracle.com> <2d206374-cfeb-9b1b-2838-e4e0a774e89d@oracle.com> <8063fb37-f1b0-8b93-bbe8-4dbeeaa54959@oracle.com> Message-ID: <17233985-18c7-305e-5556-fe2b38926b71@oracle.com> On 4/2/19 11:35 PM, Stefan Karlsson wrote: > On 2019-04-02 22:41, Vladimir Kozlov wrote: >> I ran Kitchensink with G1 and -Xmx8g. I observed that Remark pause times are not consistent even without Graal. >> To see effect I added time spent in JVMCI::do_unloading() to GC log (see below [3]). The result is < 1ms - it is less >> than 1% of a pause time. 
> > Kitchensink isn't really a benchmark, but a stress test. I sent you a private mail how to run these changes through our > internal performance test setup. Okay, I will run performance tests there too. > >> >> It will have even less effect since I moved JVMCI::do_unloading() from serial path to parallel worker thread as Stefan >> suggested. >> >> Stefan, are you satisfied with these changes now? > > Yes, the clean-ups look good. Thanks for cleaning this up. > > Kim had some extra comments about a few more places where JVMCI_ONLY could be used. > > I also agree with him that JVMCI::oops_do should not be placed in JNIHandles::oops_do. I think you should put it where > you put the AOTLoader::oops_do calls. Okay. Thanks, Vladimir > > Thanks, > StefanK > > >> >> Here is latest delta update which includes previous [1] delta and >> - use CompilerThreadStackSize * 2 for libgraal instead of exact value, >> - removed HandleMark added for debugging (reverted changes in jvmtiImpl.cpp), >> - added recent jvmci-8 changes to fix registration of native methods in libgraal (jvmciCompilerToVM.cpp) >> >> http://cr.openjdk.java.net/~kvn/8220623/webrev_delta.05/ >> >> Thanks, >> Vladimir >> >> [1] http://cr.openjdk.java.net/~kvn/8220623/webrev_delta.04/ >> [2] Original webrev http://cr.openjdk.java.net/~kvn/8220623/webrev.03/ >> [3] Pauses times from Kitchensink (0.0ms means there were no unloaded classes, 'NNN alive' shows how many metadata >> references were processed): >> >> [1.083s][1554229160638ms][info ][gc,start???? ] GC(2) Pause Remark >> [1.085s][1554229160639ms][info ][gc?????????? ] GC(2) JVMCI::do_unloading(): 0 alive 0.000ms >> [1.099s][1554229160654ms][info ][gc?????????? ] GC(2) Pause Remark 28M->28M(108M) 16.123ms >> >> [3.097s][1554229162651ms][info ][gc,start???? ] GC(12) Pause Remark >> [3.114s][1554229162668ms][info ][gc?????????? ] GC(12) JVMCI::do_unloading(): 3471 alive 0.164ms >> [3.148s][1554229162702ms][info ][gc?????????? 
] GC(12) Pause Remark 215M->213M(720M) 51.103ms >> >> [455.111s][1554229614666ms][info ][gc,phases,start] GC(1095) Phase 1: Mark live objects >> [455.455s][1554229615010ms][info ][gc             ] GC(1095) JVMCI::do_unloading(): 4048 alive 0.821ms >> [455.456s][1554229615010ms][info ][gc,phases      ] GC(1095) Phase 1: Mark live objects 344.107ms >> >> [848.932s][1554230008486ms][info ][gc,phases,start] GC(1860) Phase 1: Mark live objects >> [849.248s][1554230008803ms][info ][gc             ] GC(1860) JVMCI::do_unloading(): 3266 alive 0.470ms >> [849.249s][1554230008803ms][info ][gc,phases      ] GC(1860) Phase 1: Mark live objects 316.527ms >> >> [1163.778s][1554230323332ms][info ][gc,start       ] GC(2627) Pause Remark >> [1163.932s][1554230323486ms][info ][gc             ] GC(2627) JVMCI::do_unloading(): 3474 alive 0.642ms >> [1163.941s][1554230323496ms][info ][gc             ] GC(2627) Pause Remark 2502M->2486M(4248M) 163.296ms >> >> [1242.587s][1554230402141ms][info ][gc,phases,start] GC(2734) Phase 1: Mark live objects >> [1242.899s][1554230402453ms][info ][gc             ] GC(2734) JVMCI::do_unloading(): 3449 alive 0.570ms >> [1242.899s][1554230402453ms][info ][gc,phases      ] GC(2734) Phase 1: Mark live objects 311.719ms >> >> [1364.164s][1554230523718ms][info ][gc,phases,start] GC(3023) Phase 1: Mark live objects >> [1364.613s][1554230524167ms][info ][gc             ] GC(3023) JVMCI::do_unloading(): 3449 alive 0.000ms >> [1364.613s][1554230524167ms][info ][gc,phases      ] GC(3023) Phase 1: Mark live objects 448.495ms >> >> [1425.222s][1554230584776ms][info ][gc,phases,start] GC(3151) Phase 1: Mark live objects >> [1425.587s][1554230585142ms][info ][gc             ] GC(3151) JVMCI::do_unloading(): 3491 alive 0.882ms >> [1425.587s][1554230585142ms][info ][gc,phases      
] GC(3151) Phase 1: Mark live objects 365.403ms >> >> [1456.401s][1554230615955ms][info ][gc,phases,start] GC(3223) Phase 1: Mark live objects >> [1456.769s][1554230616324ms][info ][gc             ] GC(3223) JVMCI::do_unloading(): 3478 alive 0.616ms >> [1456.769s][1554230616324ms][info ][gc,phases      ] GC(3223) Phase 1: Mark live objects 368.643ms >> >> [1806.139s][1554230965694ms][info   ][gc,start       ] GC(4014) Pause Remark >> [1806.161s][1554230965716ms][info   ][gc             ] GC(4014) JVMCI::do_unloading(): 3478 alive 0.000ms >> [1806.163s][1554230965717ms][info   ][gc             ] GC(4014) Pause Remark 1305M->1177M(2772M) 23.190ms >> >> >> >> On 4/1/19 12:34 AM, Stefan Karlsson wrote: >>> On 2019-03-29 17:55, Vladimir Kozlov wrote: >>>> Stefan, >>>> >>>> Do you have a test (and flags) which can allow me to measure effect of this code on G1 remark pause? >>> >>> >>> -Xlog:gc prints the remark times: >>> [4,296s][info][gc       ] GC(89) Pause Remark 4M->4M(28M) 36,412ms >>> >>> StefanK >>> >>>> >>>> Thanks, >>>> Vladimir >>>> >>>> On 3/29/19 12:36 AM, Stefan Karlsson wrote: >>>>> On 2019-03-29 03:07, Vladimir Kozlov wrote: >>>>>> Hi Stefan, >>>>>> >>>>>> I collected some data on MetadataHandleBlock. >>>>>> >>>>>> First, do_unloading() code is executed only when class_unloading_occurred is 'true' - it is rare case. It should >>>>>> not affect normal G1 remark pause. >>>>> >>>>> It's only rare for applications that don't do dynamic class loading and unloading. The applications that do, will >>>>> be affected. >>>>> >>>>>> >>>>>> Second, I run a test with -Xcomp. I got about 10,000 compilations by Graal and next data at the end of execution: >>>>>> >>>>>> max_blocks = 232 >>>>>> max_handles_per_block = 32 (since handles array has 32 elements) >>>>>> max_total_alive_values = 4631 >>>>> >>>>> OK. Thanks for the info. 
>>>>> >>>>> StefanK >>>>> >>>>>> >>>>>> Thanks, >>>>>> Vladimir >>>>>> >>>>>> On 3/28/19 2:44 PM, Vladimir Kozlov wrote: >>>>>>> Thank you, Stefan >>>>>>> >>>>>>> On 3/28/19 12:54 PM, Stefan Karlsson wrote: >>>>>>>> Hi Vladimir, >>>>>>>> >>>>>>>> I started to check the GC code. >>>>>>>> >>>>>>>> ======================================================================== >>>>>>>> I see that you've added guarded includes in the middle of the include list: >>>>>>>>    #include "gc/shared/strongRootsScope.hpp" >>>>>>>>    #include "gc/shared/weakProcessor.hpp" >>>>>>>> + #if INCLUDE_JVMCI >>>>>>>> + #include "jvmci/jvmci.hpp" >>>>>>>> + #endif >>>>>>>>    #include "oops/instanceRefKlass.hpp" >>>>>>>>    #include "oops/oop.inline.hpp" >>>>>>>> >>>>>>>> The style we use is to put these conditional includes at the end of the include lists. >>>>>>> >>>>>>> okay >>>>>>> >>>>>>>> >>>>>>>> ======================================================================== >>>>>>>> Could you also change the following: >>>>>>>> >>>>>>>> + #if INCLUDE_JVMCI >>>>>>>> +     // Clean JVMCI metadata handles. >>>>>>>> +     JVMCI::do_unloading(is_alive_closure(), purged_class); >>>>>>>> + #endif >>>>>>>> >>>>>>>> to: >>>>>>>> +     // Clean JVMCI metadata handles. >>>>>>>> +     JVMCI_ONLY(JVMCI::do_unloading(is_alive_closure(), purged_class);) >>>>>>>> >>>>>>>> to get rid of some of the line noise in the GC files. >>>>>>> >>>>>>> okay >>>>>>> >>>>>>>> >>>>>>>> ======================================================================== >>>>>>>> In the future we will need a version of JVMCI::do_unloading that supports concurrent cleaning for ZGC. >>>>>>> >>>>>>> Yes, we need to support concurrent cleaning in the future. >>>>>>> >>>>>>>> >>>>>>>> ======================================================================== >>>>>>>> What's the performance impact for the G1 remark pause with this serial walk over the MetadataHandleBlock?
>>>>>>>> >>>>>>>> 3275 void G1CollectedHeap::complete_cleaning(BoolObjectClosure* is_alive, >>>>>>>> 3276                                         bool class_unloading_occurred) { >>>>>>>> 3277   uint num_workers = workers()->active_workers(); >>>>>>>> 3278   ParallelCleaningTask unlink_task(is_alive, num_workers, class_unloading_occurred, false); >>>>>>>> 3279   workers()->run_task(&unlink_task); >>>>>>>> 3280 #if INCLUDE_JVMCI >>>>>>>> 3281   // No parallel processing of JVMCI metadata handles for now. >>>>>>>> 3282   JVMCI::do_unloading(is_alive, class_unloading_occurred); >>>>>>>> 3283 #endif >>>>>>>> 3284 } >>>>>>> >>>>>>> There should be no impact if Graal is not used - only the cost of a call (which is most likely inlined in the product VM) and a check: >>>>>>> >>>>>>> http://hg.openjdk.java.net/metropolis/dev/file/530fc1427d02/src/hotspot/share/jvmci/jvmciRuntime.cpp#l1237 >>>>>>> >>>>>>> If Graal is used it should not have a big impact since this metadata has a regular pattern (32 handles per array, one >>>>>>> array per MetadataHandleBlock, and the blocks are linked in a list) and is not large. >>>>>>> If there is a noticeable impact, we will work on it as you suggested by using ParallelCleaningTask. >>>>>>> >>>>>>>> >>>>>>>> ======================================================================== >>>>>>>> Did you consider adding it as a task for one of the worker threads to execute in ParallelCleaningTask? >>>>>>>> >>>>>>>> See how other tasks are claimed by one worker: >>>>>>>> void KlassCleaningTask::work() { >>>>>>>>    ResourceMark rm; >>>>>>>> >>>>>>>>    // One worker will clean the subklass/sibling klass tree. >>>>>>>>    if (claim_clean_klass_tree_task()) { >>>>>>>>      Klass::clean_subklass_tree(); >>>>>>>>    } >>>>>>> >>>>>>> These changes were ported from JDK8u-based changes in graal-jvmci-8 and there is no ParallelCleaningTask in JDK8. >>>>>>> >>>>>>> Your suggestion is interesting and I agree that we should investigate it.
>>>>>>> >>>>>>>> >>>>>>>> ======================================================================== >>>>>>>> In MetadataHandleBlock::do_unloading: >>>>>>>> >>>>>>>> +??????? if (klass->class_loader_data()->is_unloading()) { >>>>>>>> +????????? // This needs to be marked so that it's no longer scanned >>>>>>>> +????????? // but can't be put on the free list yet. The >>>>>>>> +????????? // ReferenceCleaner will set this to NULL and >>>>>>>> +????????? // put it on the free list. >>>>>>>> >>>>>>>> I couldn't find the ReferenceCleaner in the patch or in the source. Where can I find this code? >>>>>>> >>>>>>> I think it is typo (I will fix it) - it references new HandleCleaner class: >>>>>>> >>>>>>> http://cr.openjdk.java.net/~kvn/8220623/webrev.03/src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/HandleCleaner.java.html >>>>>>> >>>>>>> >>>>>>> Thanks, >>>>>>> Vladimir >>>>>>> >>>>>>>> >>>>>>>> Thanks, >>>>>>>> StefanK >>>>>>>> >>>>>>>> On 2019-03-28 20:15, Vladimir Kozlov wrote: >>>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8220623 >>>>>>>>> http://cr.openjdk.java.net/~kvn/8220623/webrev.03/ >>>>>>>>> >>>>>>>>> Update JVMCI to support pre-compiled as shared library Graal. >>>>>>>>> Using aoted Graal can offers benefits including: >>>>>>>>> ?- fast startup >>>>>>>>> ?- compile time similar to native JIt compilers (C2) >>>>>>>>> ?- memory usage disjoint from the application Java heap >>>>>>>>> ?- no profile pollution of JDK code used by the application >>>>>>>>> >>>>>>>>> This is JDK13 port of JVMCI changes done in graal-jvmci-8 [1] up to date. >>>>>>>>> Changes were collected in Metropolis repo [2] and tested there. >>>>>>>>> >>>>>>>>> Changes we reviewed by Oracle Labs (authors of JVMCI and Graal) and our compiler group. >>>>>>>>> Changes in shared code are guarded by #if INCLUDE_JVMCI and JVMCI flags. >>>>>>>>> >>>>>>>>> I ran tier1-tier8 (which includes HotSpot and JDK tests) and it was clean. 
In this set Graal was tested only in >>>>>>>>> tier3. >>>>>>>>> >>>>>>>>> And I ran all hs-tier3-graal .. hs-tier8-graal Graal tests available in our system. Several issues were found >>>>>>>>> which were present before these changes. >>>>>>>>> >>>>>>>>> Thanks, >>>>>>>>> Vladimir >>>>>>>>> >>>>>>>>> [1] https://github.com/graalvm/graal-jvmci-8/commit/49ff2045fb603e35516a3a427d8023c00e1607af >>>>>>>>> [2] http://hg.openjdk.java.net/metropolis/dev/ >>>>>>>> From stefan.karlsson at oracle.com Mon Apr 1 07:34:06 2019 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Mon, 1 Apr 2019 09:34:06 +0200 Subject: [13] RFR(L) 8220623: [JVMCI] Update JVMCI to support JVMCI based Compiler compiled into shared library In-Reply-To: References: <4b759550-f185-724f-7139-9bab648a1966@oracle.com> Message-ID: <119335f1-acb7-fffc-f38a-e50a96b73b3c@oracle.com> On 2019-03-29 17:55, Vladimir Kozlov wrote: > Stefan, > > Do you have a test (and flags) which would allow me to measure the effect of > this code on the G1 remark pause? -Xlog:gc prints the remark times: [4,296s][info][gc ] GC(89) Pause Remark 4M->4M(28M) 36,412ms StefanK > > Thanks, > Vladimir > > On 3/29/19 12:36 AM, Stefan Karlsson wrote: >> On 2019-03-29 03:07, Vladimir Kozlov wrote: >>> Hi Stefan, >>> >>> I collected some data on MetadataHandleBlock. >>> >>> First, the do_unloading() code is executed only when >>> class_unloading_occurred is 'true' - it is a rare case. It should not >>> affect the normal G1 remark pause. >> >> It's only rare for applications that don't do dynamic class loading >> and unloading. The applications that do will be affected. >> >>> >>> Second, I ran a test with -Xcomp. I got about 10,000 compilations by >>> Graal and the following data at the end of execution: >>> >>> max_blocks = 232 >>> max_handles_per_block = 32 (since the handles array has 32 elements) >>> max_total_alive_values = 4631 >> >> OK. Thanks for the info.
>> >> StefanK >> >>> >>> Thanks, >>> Vladimir >>> >>> On 3/28/19 2:44 PM, Vladimir Kozlov wrote: >>>> Thank you, Stefan >>>> >>>> On 3/28/19 12:54 PM, Stefan Karlsson wrote: >>>>> Hi Vladimir, >>>>> >>>>> I started to check the GC code. >>>>> >>>>> ======================================================================== >>>>> >>>>> I see that you've added guarded includes in the middle of the >>>>> include list: >>>>> ?? #include "gc/shared/strongRootsScope.hpp" >>>>> ?? #include "gc/shared/weakProcessor.hpp" >>>>> + #if INCLUDE_JVMCI >>>>> + #include "jvmci/jvmci.hpp" >>>>> + #endif >>>>> ?? #include "oops/instanceRefKlass.hpp" >>>>> ?? #include "oops/oop.inline.hpp" >>>>> >>>>> The style we use is to put these conditional includes at the end of >>>>> the include lists. >>>> >>>> okay >>>> >>>>> >>>>> ======================================================================== >>>>> >>>>> Could you also change the following: >>>>> >>>>> + #if INCLUDE_JVMCI >>>>> +???? // Clean JVMCI metadata handles. >>>>> +???? JVMCI::do_unloading(is_alive_closure(), purged_class); >>>>> + #endif >>>>> >>>>> to: >>>>> +???? // Clean JVMCI metadata handles. >>>>> +???? JVMCI_ONLY(JVMCI::do_unloading(is_alive_closure(), >>>>> purged_class);) >>>>> >>>>> to get rid of some of the line noise in the GC files. >>>> >>>> okay >>>> >>>>> >>>>> ======================================================================== >>>>> >>>>> In the future we will need version of JVMCI::do_unloading that >>>>> supports concurrent cleaning for ZGC. >>>> >>>> Yes, we need to support concurrent cleaning in a future. >>>> >>>>> >>>>> ======================================================================== >>>>> >>>>> What's the performance impact for G1 remark pause with this serial >>>>> walk over the MetadataHandleBlock? >>>>> >>>>> 3275 void G1CollectedHeap::complete_cleaning(BoolObjectClosure* >>>>> is_alive, >>>>> 3276???????????????????????????????????????? 
bool >>>>> class_unloading_occurred) { >>>>> 3277?? uint num_workers = workers()->active_workers(); >>>>> 3278?? ParallelCleaningTask unlink_task(is_alive, num_workers, >>>>> class_unloading_occurred, false); >>>>> 3279?? workers()->run_task(&unlink_task); >>>>> 3280 #if INCLUDE_JVMCI >>>>> 3281?? // No parallel processing of JVMCI metadata handles for now. >>>>> 3282?? JVMCI::do_unloading(is_alive, class_unloading_occurred); >>>>> 3283 #endif >>>>> 3284 } >>>> >>>> There should not be impact if Graal is not used. Only cost of call >>>> (which most likely is inlined in product VM) and check: >>>> >>>> http://hg.openjdk.java.net/metropolis/dev/file/530fc1427d02/src/hotspot/share/jvmci/jvmciRuntime.cpp#l1237 >>>> >>>> >>>> If Graal is used it should not have big impact since these metadata >>>> has regular pattern (32 handles per array and array per >>>> MetadataHandleBlock block which are linked in list) and not large. >>>> If there will be noticeable impact - we will work on it as you >>>> suggested by using ParallelCleaningTask. >>>> >>>>> >>>>> ======================================================================== >>>>> >>>>> Did you consider adding it as a task for one of the worker threads >>>>> to execute in ParallelCleaningTask? >>>>> >>>>> See how other tasks are claimed by one worker: >>>>> void KlassCleaningTask::work() { >>>>> ?? ResourceMark rm; >>>>> >>>>> ?? // One worker will clean the subklass/sibling klass tree. >>>>> ?? if (claim_clean_klass_tree_task()) { >>>>> ???? Klass::clean_subklass_tree(); >>>>> ?? } >>>> >>>> These changes were ported from JDK8u based changes in graal-jvmci-8 >>>> and there are no ParallelCleaningTask in JDK8. >>>> >>>> Your suggestion is interesting and I agree that we should >>>> investigate it. >>>> >>>>> >>>>> ======================================================================== >>>>> >>>>> In MetadataHandleBlock::do_unloading: >>>>> >>>>> +??????? if (klass->class_loader_data()->is_unloading()) { >>>>> +????????? 
// This needs to be marked so that it's no longer scanned >>>>> +           // but can't be put on the free list yet. The >>>>> +           // ReferenceCleaner will set this to NULL and >>>>> +           // put it on the free list. >>>>> >>>>> I couldn't find the ReferenceCleaner in the patch or in the source. >>>>> Where can I find this code? >>>> >>>> I think it is a typo (I will fix it) - it references the new HandleCleaner >>>> class: >>>> >>>> http://cr.openjdk.java.net/~kvn/8220623/webrev.03/src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/HandleCleaner.java.html >>>> >>>> >>>> Thanks, >>>> Vladimir >>>> >>>>> >>>>> Thanks, >>>>> StefanK >>>>> >>>>> On 2019-03-28 20:15, Vladimir Kozlov wrote: >>>>>> https://bugs.openjdk.java.net/browse/JDK-8220623 >>>>>> http://cr.openjdk.java.net/~kvn/8220623/webrev.03/ >>>>>> >>>>>> Update JVMCI to support Graal pre-compiled as a shared library. >>>>>> Using AOT-compiled Graal can offer benefits including: >>>>>>  - fast startup >>>>>>  - compile time similar to native JIT compilers (C2) >>>>>>  - memory usage disjoint from the application Java heap >>>>>>  - no profile pollution of JDK code used by the application >>>>>> >>>>>> This is a JDK 13 port of the JVMCI changes done in graal-jvmci-8 [1], up >>>>>> to date. >>>>>> Changes were collected in the Metropolis repo [2] and tested there. >>>>>> >>>>>> Changes were reviewed by Oracle Labs (authors of JVMCI and Graal) >>>>>> and our compiler group. >>>>>> Changes in shared code are guarded by #if INCLUDE_JVMCI and JVMCI >>>>>> flags. >>>>>> >>>>>> I ran tier1-tier8 (which includes HotSpot and JDK tests) and it >>>>>> was clean. In this set Graal was tested only in tier3. >>>>>> >>>>>> And I ran all hs-tier3-graal .. hs-tier8-graal Graal tests >>>>>> available in our system. Several issues were found which were >>>>>> present before these changes.
>>>>>> >>>>>> Thanks, >>>>>> Vladimir >>>>>> >>>>>> [1] >>>>>> https://github.com/graalvm/graal-jvmci-8/commit/49ff2045fb603e35516a3a427d8023c00e1607af >>>>>> >>>>>> [2] http://hg.openjdk.java.net/metropolis/dev/ >>>>> From stefan.karlsson at oracle.com Wed Apr 3 06:35:14 2019 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Wed, 3 Apr 2019 08:35:14 +0200 Subject: [13] RFR(L) 8220623: [JVMCI] Update JVMCI to support JVMCI based Compiler compiled into shared library In-Reply-To: <2d206374-cfeb-9b1b-2838-e4e0a774e89d@oracle.com> References: <4b759550-f185-724f-7139-9bab648a1966@oracle.com> <119335f1-acb7-fffc-f38a-e50a96b73b3c@oracle.com> <2d206374-cfeb-9b1b-2838-e4e0a774e89d@oracle.com> Message-ID: <8063fb37-f1b0-8b93-bbe8-4dbeeaa54959@oracle.com> On 2019-04-02 22:41, Vladimir Kozlov wrote: > I ran Kitchensink with G1 and -Xmx8g. I observed that Remark pause times > are not consistent even without Graal. > To see effect I added time spent in JVMCI::do_unloading() to GC log (see > below [3]). The result is < 1ms - it is less than 1% of a pause time. Kitchensink isn't really a benchmark, but a stress test. I sent you a private mail how to run these changes through our internal performance test setup. > > It will have even less effect since I moved JVMCI::do_unloading() from > serial path to parallel worker thread as Stefan suggested. > > Stefan, are you satisfied with these changes now? Yes, the clean-ups look good. Thanks for cleaning this up. Kim had some extra comments about a few more places where JVMCI_ONLY could be used. I also agree with him that JVMCI::oops_do should not be placed in JNIHandles::oops_do. I think you should put it where you put the AOTLoader::oops_do calls. 
Thanks, StefanK > > Here is the latest delta update, which includes the previous [1] delta and > - use CompilerThreadStackSize * 2 for libgraal instead of an exact value, > - removed a HandleMark added for debugging (reverted changes in > jvmtiImpl.cpp), > - added recent jvmci-8 changes to fix registration of native methods in > libgraal (jvmciCompilerToVM.cpp) > > http://cr.openjdk.java.net/~kvn/8220623/webrev_delta.05/ > > Thanks, > Vladimir > > [1] http://cr.openjdk.java.net/~kvn/8220623/webrev_delta.04/ > [2] Original webrev http://cr.openjdk.java.net/~kvn/8220623/webrev.03/ > [3] Pause times from Kitchensink (0.0ms means there were no unloaded > classes, 'NNN alive' shows how many metadata references were processed): > > [1.083s][1554229160638ms][info ][gc,start ] GC(2) Pause Remark > [1.085s][1554229160639ms][info ][gc ] GC(2) > JVMCI::do_unloading(): 0 alive 0.000ms > [1.099s][1554229160654ms][info ][gc ] GC(2) Pause Remark > 28M->28M(108M) 16.123ms > > [3.097s][1554229162651ms][info ][gc,start ] GC(12) Pause Remark > [3.114s][1554229162668ms][info ][gc ] GC(12) > JVMCI::do_unloading(): 3471 alive 0.164ms > [3.148s][1554229162702ms][info ][gc ] GC(12) Pause Remark > 215M->213M(720M) 51.103ms > > [455.111s][1554229614666ms][info ][gc,phases,start] GC(1095) Phase 1: > Mark live objects > [455.455s][1554229615010ms][info ][gc ] GC(1095) > JVMCI::do_unloading(): 4048 alive 0.821ms > [455.456s][1554229615010ms][info ][gc,phases ] GC(1095) Phase 1: > Mark live objects 344.107ms > > [848.932s][1554230008486ms][info ][gc,phases,start] GC(1860) Phase 1: > Mark live objects > [849.248s][1554230008803ms][info ][gc ] GC(1860) > JVMCI::do_unloading(): 3266 alive 0.470ms > [849.249s][1554230008803ms][info ][gc,phases ] GC(1860) Phase 1: > Mark live objects 316.527ms > > [1163.778s][1554230323332ms][info ][gc,start ] GC(2627) Pause Remark > [1163.932s][1554230323486ms][info ][gc ] GC(2627) > JVMCI::do_unloading(): 3474 alive 0.642ms > [1163.941s][1554230323496ms][info ][gc ] GC(2627) Pause > Remark 2502M->2486M(4248M) 163.296ms > > [1242.587s][1554230402141ms][info ][gc,phases,start] GC(2734) Phase 1: > Mark live objects > [1242.899s][1554230402453ms][info ][gc ] GC(2734) > JVMCI::do_unloading(): 3449 alive 0.570ms > [1242.899s][1554230402453ms][info ][gc,phases ] GC(2734) Phase 1: > Mark live objects 311.719ms > > [1364.164s][1554230523718ms][info ][gc,phases,start] GC(3023) Phase 1: > Mark live objects > [1364.613s][1554230524167ms][info ][gc ] GC(3023) > JVMCI::do_unloading(): 3449 alive 0.000ms > [1364.613s][1554230524167ms][info ][gc,phases ] GC(3023) Phase 1: > Mark live objects 448.495ms > > [1425.222s][1554230584776ms][info ][gc,phases,start] GC(3151) Phase 1: > Mark live objects > [1425.587s][1554230585142ms][info ][gc ] GC(3151) > JVMCI::do_unloading(): 3491 alive 0.882ms > [1425.587s][1554230585142ms][info ][gc,phases ] GC(3151) Phase 1: > Mark live objects 365.403ms > > [1456.401s][1554230615955ms][info ][gc,phases,start] GC(3223) Phase 1: > Mark live objects > [1456.769s][1554230616324ms][info ][gc ] GC(3223) > JVMCI::do_unloading(): 3478 alive 0.616ms > [1456.769s][1554230616324ms][info ][gc,phases ] GC(3223) Phase 1: > Mark live objects 368.643ms > > [1806.139s][1554230965694ms][info ][gc,start ] GC(4014) Pause > Remark > [1806.161s][1554230965716ms][info ][gc ] GC(4014) > JVMCI::do_unloading(): 3478 alive 0.000ms > [1806.163s][1554230965717ms][info ][gc ] GC(4014) Pause > Remark 1305M->1177M(2772M) 23.190ms > > > > On 4/1/19 12:34 AM, Stefan Karlsson wrote: >> On 2019-03-29 17:55, Vladimir Kozlov wrote: >>> Stefan, >>> >>> Do you have a test (and flags) which would allow me to measure the effect >>> of this code on the G1 remark pause? >> >> >> -Xlog:gc prints the remark times: >> [4,296s][info][gc 
] GC(89) Pause Remark 4M->4M(28M) 36,412ms >> >> StefanK >> >>> >>> Thanks, >>> Vladimir >>> >>> On 3/29/19 12:36 AM, Stefan Karlsson wrote: >>>> On 2019-03-29 03:07, Vladimir Kozlov wrote: >>>>> Hi Stefan, >>>>> >>>>> I collected some data on MetadataHandleBlock. >>>>> >>>>> First, do_unloading() code is executed only when >>>>> class_unloading_occurred is 'true' - it is rare case. It should not >>>>> affect normal G1 remark pause. >>>> >>>> It's only rare for applications that don't do dynamic class loading >>>> and unloading. The applications that do, will be affected. >>>> >>>>> >>>>> Second, I run a test with -Xcomp. I got about 10,000 compilations >>>>> by Graal and next data at the end of execution: >>>>> >>>>> max_blocks = 232 >>>>> max_handles_per_block = 32 (since handles array has 32 elements) >>>>> max_total_alive_values = 4631 >>>> >>>> OK. Thanks for the info. >>>> >>>> StefanK >>>> >>>>> >>>>> Thanks, >>>>> Vladimir >>>>> >>>>> On 3/28/19 2:44 PM, Vladimir Kozlov wrote: >>>>>> Thank you, Stefan >>>>>> >>>>>> On 3/28/19 12:54 PM, Stefan Karlsson wrote: >>>>>>> Hi Vladimir, >>>>>>> >>>>>>> I started to check the GC code. >>>>>>> >>>>>>> ======================================================================== >>>>>>> >>>>>>> I see that you've added guarded includes in the middle of the >>>>>>> include list: >>>>>>> ?? #include "gc/shared/strongRootsScope.hpp" >>>>>>> ?? #include "gc/shared/weakProcessor.hpp" >>>>>>> + #if INCLUDE_JVMCI >>>>>>> + #include "jvmci/jvmci.hpp" >>>>>>> + #endif >>>>>>> ?? #include "oops/instanceRefKlass.hpp" >>>>>>> ?? #include "oops/oop.inline.hpp" >>>>>>> >>>>>>> The style we use is to put these conditional includes at the end >>>>>>> of the include lists. >>>>>> >>>>>> okay >>>>>> >>>>>>> >>>>>>> ======================================================================== >>>>>>> >>>>>>> Could you also change the following: >>>>>>> >>>>>>> + #if INCLUDE_JVMCI >>>>>>> +???? // Clean JVMCI metadata handles. >>>>>>> +???? 
JVMCI::do_unloading(is_alive_closure(), purged_class); >>>>>>> + #endif >>>>>>> >>>>>>> to: >>>>>>> +???? // Clean JVMCI metadata handles. >>>>>>> +???? JVMCI_ONLY(JVMCI::do_unloading(is_alive_closure(), >>>>>>> purged_class);) >>>>>>> >>>>>>> to get rid of some of the line noise in the GC files. >>>>>> >>>>>> okay >>>>>> >>>>>>> >>>>>>> ======================================================================== >>>>>>> >>>>>>> In the future we will need version of JVMCI::do_unloading that >>>>>>> supports concurrent cleaning for ZGC. >>>>>> >>>>>> Yes, we need to support concurrent cleaning in a future. >>>>>> >>>>>>> >>>>>>> ======================================================================== >>>>>>> >>>>>>> What's the performance impact for G1 remark pause with this >>>>>>> serial walk over the MetadataHandleBlock? >>>>>>> >>>>>>> 3275 void G1CollectedHeap::complete_cleaning(BoolObjectClosure* >>>>>>> is_alive, >>>>>>> 3276???????????????????????????????????????? bool >>>>>>> class_unloading_occurred) { >>>>>>> 3277?? uint num_workers = workers()->active_workers(); >>>>>>> 3278?? ParallelCleaningTask unlink_task(is_alive, num_workers, >>>>>>> class_unloading_occurred, false); >>>>>>> 3279?? workers()->run_task(&unlink_task); >>>>>>> 3280 #if INCLUDE_JVMCI >>>>>>> 3281?? // No parallel processing of JVMCI metadata handles for now. >>>>>>> 3282?? JVMCI::do_unloading(is_alive, class_unloading_occurred); >>>>>>> 3283 #endif >>>>>>> 3284 } >>>>>> >>>>>> There should not be impact if Graal is not used. Only cost of call >>>>>> (which most likely is inlined in product VM) and check: >>>>>> >>>>>> http://hg.openjdk.java.net/metropolis/dev/file/530fc1427d02/src/hotspot/share/jvmci/jvmciRuntime.cpp#l1237 >>>>>> >>>>>> >>>>>> If Graal is used it should not have big impact since these >>>>>> metadata has regular pattern (32 handles per array and array per >>>>>> MetadataHandleBlock block which are linked in list) and not large. 
>>>>>> If there will be noticeable impact - we will work on it as you >>>>>> suggested by using ParallelCleaningTask. >>>>>> >>>>>>> >>>>>>> ======================================================================== >>>>>>> >>>>>>> Did you consider adding it as a task for one of the worker >>>>>>> threads to execute in ParallelCleaningTask? >>>>>>> >>>>>>> See how other tasks are claimed by one worker: >>>>>>> void KlassCleaningTask::work() { >>>>>>> ?? ResourceMark rm; >>>>>>> >>>>>>> ?? // One worker will clean the subklass/sibling klass tree. >>>>>>> ?? if (claim_clean_klass_tree_task()) { >>>>>>> ???? Klass::clean_subklass_tree(); >>>>>>> ?? } >>>>>> >>>>>> These changes were ported from JDK8u based changes in >>>>>> graal-jvmci-8 and there are no ParallelCleaningTask in JDK8. >>>>>> >>>>>> Your suggestion is interesting and I agree that we should >>>>>> investigate it. >>>>>> >>>>>>> >>>>>>> ======================================================================== >>>>>>> >>>>>>> In MetadataHandleBlock::do_unloading: >>>>>>> >>>>>>> +??????? if (klass->class_loader_data()->is_unloading()) { >>>>>>> +????????? // This needs to be marked so that it's no longer scanned >>>>>>> +????????? // but can't be put on the free list yet. The >>>>>>> +????????? // ReferenceCleaner will set this to NULL and >>>>>>> +????????? // put it on the free list. >>>>>>> >>>>>>> I couldn't find the ReferenceCleaner in the patch or in the >>>>>>> source. Where can I find this code? 
>>>>>> >>>>>> I think it is a typo (I will fix it) - it references the new >>>>>> HandleCleaner class: >>>>>> >>>>>> http://cr.openjdk.java.net/~kvn/8220623/webrev.03/src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/HandleCleaner.java.html >>>>>> >>>>>> >>>>>> Thanks, >>>>>> Vladimir >>>>>> >>>>>>> >>>>>>> Thanks, >>>>>>> StefanK >>>>>>> >>>>>>> On 2019-03-28 20:15, Vladimir Kozlov wrote: >>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8220623 >>>>>>>> http://cr.openjdk.java.net/~kvn/8220623/webrev.03/ >>>>>>>> >>>>>>>> Update JVMCI to support Graal pre-compiled as a shared library. >>>>>>>> Using AOT-compiled Graal can offer benefits including: >>>>>>>>  - fast startup >>>>>>>>  - compile time similar to native JIT compilers (C2) >>>>>>>>  - memory usage disjoint from the application Java heap >>>>>>>>  - no profile pollution of JDK code used by the application >>>>>>>> >>>>>>>> This is a JDK 13 port of the JVMCI changes done in graal-jvmci-8 [1], up >>>>>>>> to date. >>>>>>>> Changes were collected in the Metropolis repo [2] and tested there. >>>>>>>> >>>>>>>> Changes were reviewed by Oracle Labs (authors of JVMCI and Graal) >>>>>>>> and our compiler group. >>>>>>>> Changes in shared code are guarded by #if INCLUDE_JVMCI and >>>>>>>> JVMCI flags. >>>>>>>> >>>>>>>> I ran tier1-tier8 (which includes HotSpot and JDK tests) and it >>>>>>>> was clean. In this set Graal was tested only in tier3. >>>>>>>> >>>>>>>> And I ran all hs-tier3-graal .. hs-tier8-graal Graal tests >>>>>>>> available in our system. Several issues were found which were >>>>>>>> present before these changes.
>>>>>>>> >>>>>>>> Thanks, >>>>>>>> Vladimir >>>>>>>> >>>>>>>> [1] >>>>>>>> https://github.com/graalvm/graal-jvmci-8/commit/49ff2045fb603e35516a3a427d8023c00e1607af >>>>>>>> >>>>>>>> [2] http://hg.openjdk.java.net/metropolis/dev/ >>>>>>> From kim.barrett at oracle.com Tue Apr 2 23:51:14 2019 From: kim.barrett at oracle.com (Kim Barrett) Date: Tue, 2 Apr 2019 19:51:14 -0400 Subject: [13] RFR(L) 8220623: [JVMCI] Update JVMCI to support JVMCI based Compiler compiled into shared library In-Reply-To: <2d206374-cfeb-9b1b-2838-e4e0a774e89d@oracle.com> References: <4b759550-f185-724f-7139-9bab648a1966@oracle.com> <119335f1-acb7-fffc-f38a-e50a96b73b3c@oracle.com> <2d206374-cfeb-9b1b-2838-e4e0a774e89d@oracle.com> Message-ID: > On Apr 2, 2019, at 4:41 PM, Vladimir Kozlov wrote: > > I ran Kitchensink with G1 and -Xmx8g. I observed that Remark pause times are not consistent even without Graal. > To see effect I added time spent in JVMCI::do_unloading() to GC log (see below [3]). The result is < 1ms - it is less than 1% of a pause time. > > It will have even less effect since I moved JVMCI::do_unloading() from serial path to parallel worker thread as Stefan suggested. A few comments, while I'm still looking at this. ------------------------------------------------------------------------------ src/hotspot/share/gc/shared/parallelCleaning.cpp 213 JVMCI_ONLY(_jvmci_cleaning_task.work(_unloading_occurred);) I think putting the serial JVMCI cleaning task at the end of the ParallelCleaningTask can result in it being mostly run by itself, without any parallelism. I think it should be put up front, so the first thread in starts working on it right away, while later threads can pick up other work. That's assuming this is all needed. I see that the Java side of things is using WeakReference to do cleanup. I haven't figured out yet why this new kind of weak reference mechanism in the VM is required in addition to the Java WeakReference cleanup. Still working on that... 
------------------------------------------------------------------------------ src/hotspot/share/gc/shared/parallelCleaning.cpp 194 #if INCLUDE_JVMCI 195 _jvmci_cleaning_task(is_alive), 196 #endif This could be made a bit less cluttered: JVMCI_ONLY(_jvmci_cleaning_task(is_alive) COMMA) ------------------------------------------------------------------------------ src/hotspot/share/runtime/jniHandles.cpp 192 #if INCLUDE_JVMCI 193 JVMCI::oops_do(f); 194 #endif I don't think that belongs here. ------------------------------------------------------------------------------ From kim.barrett at oracle.com Wed Apr 3 00:49:28 2019 From: kim.barrett at oracle.com (Kim Barrett) Date: Tue, 2 Apr 2019 20:49:28 -0400 Subject: [13] RFR(L) 8220623: [JVMCI] Update JVMCI to support JVMCI based Compiler compiled into shared library In-Reply-To: References: <4b759550-f185-724f-7139-9bab648a1966@oracle.com> <119335f1-acb7-fffc-f38a-e50a96b73b3c@oracle.com> <2d206374-cfeb-9b1b-2838-e4e0a774e89d@oracle.com> Message-ID: > On Apr 2, 2019, at 7:51 PM, Kim Barrett wrote: > src/hotspot/share/runtime/jniHandles.cpp > 192 #if INCLUDE_JVMCI > 193 JVMCI::oops_do(f); > 194 #endif > > I don't think that belongs here. In addition to my thinking this is just out of place (we already removed the similar JVMTI piggybacking from JNIHandles::weak_oops_do, though it had other problems too), note that ZGC doesn't call JNIHandles::oops_do at all (it uses the parallel oopstorage API), and probably the only reason some of the other collectors (particularly G1) still do is because nobody has gotten around to making them use the parallel oopstorage API too.
From kim.barrett at oracle.com Wed Apr 3 02:33:09 2019 From: kim.barrett at oracle.com (Kim Barrett) Date: Tue, 2 Apr 2019 22:33:09 -0400 Subject: [13] RFR(L) 8220623: [JVMCI] Update JVMCI to support JVMCI based Compiler compiled into shared library In-Reply-To: <14f3a526-eb3c-a23a-a1bc-198977086f08@oracle.com> References: <4b759550-f185-724f-7139-9bab648a1966@oracle.com> <119335f1-acb7-fffc-f38a-e50a96b73b3c@oracle.com> <2d206374-cfeb-9b1b-2838-e4e0a774e89d@oracle.com> <14f3a526-eb3c-a23a-a1bc-198977086f08@oracle.com> Message-ID: <1F02DF2C-16C5-4A90-A980-D1ACE387B5C0@oracle.com> > On Apr 2, 2019, at 9:37 PM, Vladimir Kozlov wrote: > > On 4/2/19 4:51 PM, Kim Barrett wrote: >>> On Apr 2, 2019, at 4:41 PM, Vladimir Kozlov wrote: >>> >>> I ran Kitchensink with G1 and -Xmx8g. I observed that Remark pause times are not consistent even without Graal. >>> To see the effect I added the time spent in JVMCI::do_unloading() to the GC log (see below [3]). The result is < 1ms - it is less than 1% of a pause time. >>> >>> It will have even less effect since I moved JVMCI::do_unloading() from the serial path to a parallel worker thread as Stefan suggested. >> A few comments, while I'm still looking at this. >> ------------------------------------------------------------------------------ >> src/hotspot/share/gc/shared/parallelCleaning.cpp >> 213 JVMCI_ONLY(_jvmci_cleaning_task.work(_unloading_occurred);) >> I think putting the serial JVMCI cleaning task at the end of the >> ParallelCleaningTask can result in it being mostly run by itself, >> without any parallelism. I think it should be put up front, so the >> first thread in starts working on it right away, while later threads >> can pick up other work. > > Should we really put it at the beginning? It takes < 1ms. > Maybe other tasks are more expensive and should be run first, as now. > But I don't know what strategy is used here: run short or long tasks first. > If you think it is okay to move it - I will move it.
The JVMCI cleaning task is (at least currently) a serial task. All of the others are parallelized, in the sense that each thread will take pieces of the first of those tasks until there aren't any left, and then move on to the next task. Putting the serial task at the end allows all the parallel work to be completed as one thread picks up that remaining serial piece, and then chugs along on its own while all the other threads are now idle. Putting the serial task first means all the other threads can be working through the parallelized work. The serial thread joins the others on the remaining parallel work when it's done, and if there isn't any then it's a good thing we started the serial work first, as it's the long pole. >> That's assuming this is all needed. I see that the Java side of things >> is using WeakReference to do cleanup. I haven't figured out yet why >> this new kind of weak reference mechanism in the VM is required in >> addition to the Java WeakReference cleanup. Still working on that... >> ------------------------------------------------------------------------------ >> src/hotspot/share/gc/shared/parallelCleaning.cpp >> 194 #if INCLUDE_JVMCI >> 195 _jvmci_cleaning_task(is_alive), >> 196 #endif >> This could be made a bit less cluttered: >> JVMCI_ONLY(_jvmci_cleaning_task(is_alive) COMMA) > > Yes, I will do this. I tried to use JVMCI_ONLY with a regular ',' and it failed. The COMMA macro is exactly for this sort of thing where one needs a deferred comma. It's a relatively recent addition, and not used all that much yet.
From vladimir.kozlov at oracle.com Thu Apr 4 07:22:27 2019 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Thu, 4 Apr 2019 00:22:27 -0700 Subject: [13] RFR(L) 8220623: [JVMCI] Update JVMCI to support JVMCI based Compiler compiled into shared library In-Reply-To: <17233985-18c7-305e-5556-fe2b38926b71@oracle.com> References: <4b759550-f185-724f-7139-9bab648a1966@oracle.com> <119335f1-acb7-fffc-f38a-e50a96b73b3c@oracle.com> <2d206374-cfeb-9b1b-2838-e4e0a774e89d@oracle.com> <8063fb37-f1b0-8b93-bbe8-4dbeeaa54959@oracle.com> <17233985-18c7-305e-5556-fe2b38926b71@oracle.com> Message-ID: <3514c74d-5f6a-61cc-ebea-b9564df61673@oracle.com> New delta: http://cr.openjdk.java.net/~kvn/8220623/webrev_delta.06/ Full: http://cr.openjdk.java.net/~kvn/8220623/webrev.06/ New changes are based on Kim's and Stefan's suggestions: - Moved JVMCI::oops_do() from JNIHandles to the places where it should be called. - Moved the JVMCI cleanup task to the beginning of ParallelCleaningTask::work(). - Used the JVMCI_ONLY macro with COMMA. - Disable JVMCI build on SPARC. We don't use it - neither Graal nor AOT is built on SPARC. Disabling also helps to find missing JVMCI guards. I ran hs-tier1-3 testing - it passed (hs-tier3 includes Graal testing). I started hs-tier4..8-graal testing. I will do performance testing next. Thanks, Vladimir On 4/3/19 9:54 AM, Vladimir Kozlov wrote: > On 4/2/19 11:35 PM, Stefan Karlsson wrote: >> On 2019-04-02 22:41, Vladimir Kozlov wrote: >>> I ran Kitchensink with G1 and -Xmx8g. I observed that Remark pause times are not consistent even without Graal. >>> To see the effect I added the time spent in JVMCI::do_unloading() to the GC log (see below [3]). The result is < 1ms - it is less >>> than 1% of a pause time. >> >> Kitchensink isn't really a benchmark, but a stress test. I sent you a private mail on how to run these changes through >> our internal performance test setup. > > Okay, I will run performance tests there too. 
> >> >>> >>> It will have even less effect since I moved JVMCI::do_unloading() from the serial path to a parallel worker thread as >>> Stefan suggested. >>> >>> Stefan, are you satisfied with these changes now? >> >> Yes, the clean-ups look good. Thanks for cleaning this up. >> >> Kim had some extra comments about a few more places where JVMCI_ONLY could be used. >> >> I also agree with him that JVMCI::oops_do should not be placed in JNIHandles::oops_do. I think you should put it where >> you put the AOTLoader::oops_do calls. > > Okay. > > Thanks, > Vladimir > >> >> Thanks, >> StefanK >> >> >>> >>> Here is the latest delta update, which includes the previous [1] delta and >>> - use CompilerThreadStackSize * 2 for libgraal instead of an exact value, >>> - removed the HandleMark added for debugging (reverted changes in jvmtiImpl.cpp), >>> - added recent jvmci-8 changes to fix registration of native methods in libgraal (jvmciCompilerToVM.cpp) >>> >>> http://cr.openjdk.java.net/~kvn/8220623/webrev_delta.05/ >>> >>> Thanks, >>> Vladimir >>> >>> [1] http://cr.openjdk.java.net/~kvn/8220623/webrev_delta.04/ >>> [2] Original webrev http://cr.openjdk.java.net/~kvn/8220623/webrev.03/ >>> [3] Pause times from Kitchensink (0.0ms means there were no unloaded classes; 'NNN alive' shows how many metadata >>> references were processed): >>> >>> [1.083s][1554229160638ms][info ][gc,start ] GC(2) Pause Remark >>> [1.085s][1554229160639ms][info ][gc ] GC(2) JVMCI::do_unloading(): 0 alive 0.000ms >>> [1.099s][1554229160654ms][info ][gc ] GC(2) Pause Remark 28M->28M(108M) 16.123ms >>> >>> [3.097s][1554229162651ms][info ][gc,start ] GC(12) Pause Remark >>> [3.114s][1554229162668ms][info ][gc ] GC(12) JVMCI::do_unloading(): 3471 alive 0.164ms >>> [3.148s][1554229162702ms][info ][gc ] GC(12) Pause Remark 215M->213M(720M) 51.103ms >>> >>> [455.111s][1554229614666ms][info ][gc,phases,start] GC(1095) Phase 1: Mark live objects >>> [455.455s][1554229615010ms][info ][gc ] GC(1095) JVMCI::do_unloading(): 4048 alive 0.821ms >>> [455.456s][1554229615010ms][info ][gc,phases ] GC(1095) Phase 1: Mark live objects 344.107ms >>> >>> [848.932s][1554230008486ms][info ][gc,phases,start] GC(1860) Phase 1: Mark live objects >>> [849.248s][1554230008803ms][info ][gc ] GC(1860) JVMCI::do_unloading(): 3266 alive 0.470ms >>> [849.249s][1554230008803ms][info ][gc,phases ] GC(1860) Phase 1: Mark live objects 316.527ms >>> >>> [1163.778s][1554230323332ms][info ][gc,start ] GC(2627) Pause Remark >>> [1163.932s][1554230323486ms][info ][gc ] GC(2627) JVMCI::do_unloading(): 3474 alive 0.642ms >>> [1163.941s][1554230323496ms][info ][gc ] GC(2627) Pause Remark 2502M->2486M(4248M) 163.296ms >>> >>> [1242.587s][1554230402141ms][info ][gc,phases,start] GC(2734) Phase 1: Mark live objects >>> [1242.899s][1554230402453ms][info ][gc ] GC(2734) JVMCI::do_unloading(): 3449 alive 0.570ms >>> [1242.899s][1554230402453ms][info ][gc,phases ] GC(2734) Phase 1: Mark live objects 311.719ms >>> >>> [1364.164s][1554230523718ms][info ][gc,phases,start] GC(3023) Phase 1: Mark live objects >>> [1364.613s][1554230524167ms][info ][gc ] GC(3023) JVMCI::do_unloading(): 3449 alive 0.000ms >>> [1364.613s][1554230524167ms][info ][gc,phases ] GC(3023) Phase 1: Mark live objects 448.495ms >>> >>> [1425.222s][1554230584776ms][info ][gc,phases,start] GC(3151) Phase 1: Mark live objects >>> [1425.587s][1554230585142ms][info ][gc ] GC(3151) JVMCI::do_unloading(): 3491 alive 0.882ms >>> [1425.587s][1554230585142ms][info ][gc,phases ] GC(3151) Phase 1: Mark live objects 365.403ms >>> >>> [1456.401s][1554230615955ms][info ][gc,phases,start] GC(3223) Phase 1: Mark live objects >>> [1456.769s][1554230616324ms][info ][gc ] GC(3223) JVMCI::do_unloading(): 3478 alive 0.616ms >>> [1456.769s][1554230616324ms][info ][gc,phases ] GC(3223) Phase 1: Mark live objects 368.643ms >>> >>> [1806.139s][1554230965694ms][info ][gc,start ] GC(4014) Pause Remark >>> [1806.161s][1554230965716ms][info ][gc ] GC(4014) JVMCI::do_unloading(): 3478 alive 0.000ms >>> [1806.163s][1554230965717ms][info ][gc ] GC(4014) Pause Remark 1305M->1177M(2772M) 23.190ms >>> >>> >>> >>> On 4/1/19 12:34 AM, Stefan Karlsson wrote: >>>> On 2019-03-29 17:55, Vladimir Kozlov wrote: >>>>> Stefan, >>>>> >>>>> Do you have a test (and flags) which can allow me to measure the effect of this code on the G1 remark pause? >>>> >>>> >>>> -Xlog:gc prints the remark times: >>>> [4,296s][info][gc ] GC(89) Pause Remark 4M->4M(28M) 36,412ms >>>> >>>> StefanK >>>> >>>>> >>>>> Thanks, >>>>> Vladimir >>>>> >>>>> On 3/29/19 12:36 AM, Stefan Karlsson wrote: >>>>>> On 2019-03-29 03:07, Vladimir Kozlov wrote: >>>>>>> Hi Stefan, >>>>>>> >>>>>>> I collected some data on MetadataHandleBlock. >>>>>>> >>>>>>> First, the do_unloading() code is executed only when class_unloading_occurred is 'true' - it is a rare case. It should >>>>>>> not affect the normal G1 remark pause. >>>>>> >>>>>> It's only rare for applications that don't do dynamic class loading and unloading. The applications that do, will >>>>>> be affected. >>>>>> >>>>>>> >>>>>>> Second, I ran a test with -Xcomp. I got about 10,000 compilations by Graal and the following data at the end of execution: >>>>>>> >>>>>>> max_blocks = 232 >>>>>>> max_handles_per_block = 32 (since the handles array has 32 elements) >>>>>>> max_total_alive_values = 4631 >>>>>> >>>>>> OK. Thanks for the info. 
>>>>>> >>>>>> StefanK >>>>>> >>>>>>> >>>>>>> Thanks, >>>>>>> Vladimir >>>>>>> >>>>>>> On 3/28/19 2:44 PM, Vladimir Kozlov wrote: >>>>>>>> Thank you, Stefan >>>>>>>> >>>>>>>> On 3/28/19 12:54 PM, Stefan Karlsson wrote: >>>>>>>>> Hi Vladimir, >>>>>>>>> >>>>>>>>> I started to check the GC code. >>>>>>>>> >>>>>>>>> ======================================================================== >>>>>>>>> I see that you've added guarded includes in the middle of the include list: >>>>>>>>> #include "gc/shared/strongRootsScope.hpp" >>>>>>>>> #include "gc/shared/weakProcessor.hpp" >>>>>>>>> + #if INCLUDE_JVMCI >>>>>>>>> + #include "jvmci/jvmci.hpp" >>>>>>>>> + #endif >>>>>>>>> #include "oops/instanceRefKlass.hpp" >>>>>>>>> #include "oops/oop.inline.hpp" >>>>>>>>> >>>>>>>>> The style we use is to put these conditional includes at the end of the include lists. >>>>>>>> >>>>>>>> okay >>>>>>>> >>>>>>>>> >>>>>>>>> ======================================================================== >>>>>>>>> Could you also change the following: >>>>>>>>> >>>>>>>>> + #if INCLUDE_JVMCI >>>>>>>>> + // Clean JVMCI metadata handles. >>>>>>>>> + JVMCI::do_unloading(is_alive_closure(), purged_class); >>>>>>>>> + #endif >>>>>>>>> >>>>>>>>> to: >>>>>>>>> + // Clean JVMCI metadata handles. >>>>>>>>> + JVMCI_ONLY(JVMCI::do_unloading(is_alive_closure(), purged_class);) >>>>>>>>> >>>>>>>>> to get rid of some of the line noise in the GC files. >>>>>>>> >>>>>>>> okay >>>>>>>> >>>>>>>>> >>>>>>>>> ======================================================================== >>>>>>>>> In the future we will need a version of JVMCI::do_unloading that supports concurrent cleaning for ZGC. >>>>>>>> >>>>>>>> Yes, we need to support concurrent cleaning in the future. >>>>>>>> >>>>>>>>> >>>>>>>>> ======================================================================== >>>>>>>>> What's the performance impact for the G1 remark pause with this serial walk over the MetadataHandleBlock? >>>>>>>>> >>>>>>>>> 3275 void G1CollectedHeap::complete_cleaning(BoolObjectClosure* is_alive, >>>>>>>>> 3276                                         bool class_unloading_occurred) { >>>>>>>>> 3277   uint num_workers = workers()->active_workers(); >>>>>>>>> 3278   ParallelCleaningTask unlink_task(is_alive, num_workers, class_unloading_occurred, false); >>>>>>>>> 3279   workers()->run_task(&unlink_task); >>>>>>>>> 3280 #if INCLUDE_JVMCI >>>>>>>>> 3281   // No parallel processing of JVMCI metadata handles for now. >>>>>>>>> 3282   JVMCI::do_unloading(is_alive, class_unloading_occurred); >>>>>>>>> 3283 #endif >>>>>>>>> 3284 } >>>>>>>> >>>>>>>> There should be no impact if Graal is not used - only the cost of a call (which most likely is inlined in the product VM) >>>>>>>> and a check: >>>>>>>> >>>>>>>> http://hg.openjdk.java.net/metropolis/dev/file/530fc1427d02/src/hotspot/share/jvmci/jvmciRuntime.cpp#l1237 >>>>>>>> >>>>>>>> If Graal is used it should not have a big impact, since this metadata has a regular pattern (32 handles per array, one array >>>>>>>> per MetadataHandleBlock, and blocks linked in a list) and is not large. >>>>>>>> If there is a noticeable impact - we will work on it as you suggested, by using ParallelCleaningTask. >>>>>>>> >>>>>>>>> >>>>>>>>> ======================================================================== >>>>>>>>> Did you consider adding it as a task for one of the worker threads to execute in ParallelCleaningTask? >>>>>>>>> >>>>>>>>> See how other tasks are claimed by one worker: >>>>>>>>> void KlassCleaningTask::work() { >>>>>>>>>   ResourceMark rm; >>>>>>>>> >>>>>>>>>   // One worker will clean the subklass/sibling klass tree. >>>>>>>>>   if (claim_clean_klass_tree_task()) { >>>>>>>>>     Klass::clean_subklass_tree(); >>>>>>>>>   } >>>>>>>> >>>>>>>> These changes were ported from the JDK8u-based changes in graal-jvmci-8, and there is no ParallelCleaningTask in JDK8. 
>>>>>>>> >>>>>>>> Your suggestion is interesting and I agree that we should investigate it. >>>>>>>> >>>>>>>>> >>>>>>>>> ======================================================================== >>>>>>>>> In MetadataHandleBlock::do_unloading: >>>>>>>>> >>>>>>>>> +        if (klass->class_loader_data()->is_unloading()) { >>>>>>>>> +          // This needs to be marked so that it's no longer scanned >>>>>>>>> +          // but can't be put on the free list yet. The >>>>>>>>> +          // ReferenceCleaner will set this to NULL and >>>>>>>>> +          // put it on the free list. >>>>>>>>> >>>>>>>>> I couldn't find the ReferenceCleaner in the patch or in the source. Where can I find this code? >>>>>>>> >>>>>>>> I think it is a typo (I will fix it) - it references the new HandleCleaner class: >>>>>>>> >>>>>>>> http://cr.openjdk.java.net/~kvn/8220623/webrev.03/src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/HandleCleaner.java.html >>>>>>>> >>>>>>>> >>>>>>>> Thanks, >>>>>>>> Vladimir >>>>>>>> >>>>>>>>> >>>>>>>>> Thanks, >>>>>>>>> StefanK >>>>>>>>> >>>>>>>>> On 2019-03-28 20:15, Vladimir Kozlov wrote: >>>>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8220623 >>>>>>>>>> http://cr.openjdk.java.net/~kvn/8220623/webrev.03/ >>>>>>>>>> >>>>>>>>>> Update JVMCI to support Graal pre-compiled as a shared library. >>>>>>>>>> Using AOT-compiled Graal offers benefits including: >>>>>>>>>>  - fast startup >>>>>>>>>>  - compile time similar to native JIT compilers (C2) >>>>>>>>>>  - memory usage disjoint from the application Java heap >>>>>>>>>>  - no profile pollution of JDK code used by the application >>>>>>>>>> >>>>>>>>>> This is a JDK13 port of the JVMCI changes done in graal-jvmci-8 [1], up to date. >>>>>>>>>> Changes were collected in the Metropolis repo [2] and tested there. >>>>>>>>>> >>>>>>>>>> Changes were reviewed by Oracle Labs (authors of JVMCI and Graal) and our compiler group. >>>>>>>>>> Changes in shared code are guarded by #if INCLUDE_JVMCI and JVMCI flags. 
>>>>>>>>>> >>>>>>>>>> I ran tier1-tier8 (which includes HotSpot and JDK tests) and it was clean. In this set Graal was tested only >>>>>>>>>> in tier3. >>>>>>>>>> >>>>>>>>>> And I ran all hs-tier3-graal .. hs-tier8-graal Graal tests available in our system. Several issues were found >>>>>>>>>> which were present before these changes. >>>>>>>>>> >>>>>>>>>> Thanks, >>>>>>>>>> Vladimir >>>>>>>>>> >>>>>>>>>> [1] https://github.com/graalvm/graal-jvmci-8/commit/49ff2045fb603e35516a3a427d8023c00e1607af >>>>>>>>>> [2] http://hg.openjdk.java.net/metropolis/dev/ >>>>>>>>> From kim.barrett at oracle.com Thu Apr 4 20:49:27 2019 From: kim.barrett at oracle.com (Kim Barrett) Date: Thu, 4 Apr 2019 16:49:27 -0400 Subject: [13] RFR(L) 8220623: [JVMCI] Update JVMCI to support JVMCI based Compiler compiled into shared library In-Reply-To: <3514c74d-5f6a-61cc-ebea-b9564df61673@oracle.com> References: <4b759550-f185-724f-7139-9bab648a1966@oracle.com> <119335f1-acb7-fffc-f38a-e50a96b73b3c@oracle.com> <2d206374-cfeb-9b1b-2838-e4e0a774e89d@oracle.com> <8063fb37-f1b0-8b93-bbe8-4dbeeaa54959@oracle.com> <17233985-18c7-305e-5556-fe2b38926b71@oracle.com> <3514c74d-5f6a-61cc-ebea-b9564df61673@oracle.com> Message-ID: > On Apr 4, 2019, at 3:22 AM, Vladimir Kozlov wrote: > > New delta: > http://cr.openjdk.java.net/~kvn/8220623/webrev_delta.06/ > > Full: > http://cr.openjdk.java.net/~kvn/8220623/webrev.06/ > > New changes are based on Kim's and Stefan's suggestions: > > - Moved JVMCI::oops_do() from JNIHandles to places where it should be called. > - Moved JVMCI cleanup task to the beginning of ParallelCleaningTask::work(). > - Used JVMCI_ONLY macro with COMMA. > - Disable JVMCI build on SPARC. We don't use it - neither Graal nor AOT is built on SPARC. Disabling also helps to find missing JVMCI guards. > > I ran hs-tier1-3 testing - it passed (hs-tier3 includes graal testing). > I started hs-tier4..8-graal testing. > I will do performance testing next. 
------------------------------------------------------------------------------ src/hotspot/share/jvmci/jvmci.hpp There is this new header file, declaring the JVMCI class. But the implementation seems to all be in JVMCIRuntime.cpp. That's pretty atypical code organization in HotSpot. ------------------------------------------------------------------------------ src/hotspot/share/jvmci/jvmci.hpp 46 static JNIHandleBlock* _object_handles; We moved away from JNIHandleBlock for "globals" (JNI global handles, JNI global weak handles), replacing those uses with OopStorage, because JNIHandleBlock and the way it was being used for "globals" had various thread-safety issues. JNIHandleBlock was retained for use with local handles, where thread-safety isn't an issue for the most part. Trying to have a chain of blocks data structure that supported both usage models was deemed to impose undesirable overhead on one or the other (or both) use cases. So, how have those issues been addressed in the new uses introduced by JVMCI? Should JVMCI be using OopStorage rather than JNIHandleBlock? (Yes, it should.) Further searching, and JVMCI::is_global_handle is racy and can crash, just like JDK-8174790. ------------------------------------------------------------------------------ src/hotspot/share/jvmci/jvmci.hpp src/hotspot/share/jvmci/jvmciRuntime.cpp 1198 jobject JVMCI::make_global(Handle obj) { 1199 assert(_object_handles != NULL, "uninitialized"); 1200 MutexLocker ml(JVMCI_lock); 1201 return _object_handles->allocate_handle(obj()); 1202 } This is returning a new kind of "jobject" that JNI knows nothing about! Any sort of JNI handle checking (such as is enabled by -Xcheck:jni) will fail for one of these. It looks like JVMCI used to be using JNI directly, but there has been some decision to separate it into its own near copy. Why? Maybe these pseudo-jobjects aren't supposed to ever be passed to a "normal" JNI function. 
If that's true, they should have a different type, and not be jobject at all. ------------------------------------------------------------------------------ src/hotspot/share/prims/jni.cpp 1321 Klass* klass = java_lang_Class::as_Klass(JNIHandles::resolve_non_null(clazz)); Pre-existing: why are we resolving clazz twice, here and a few lines away for the is_primitive check? ------------------------------------------------------------------------------ I've just started looking at the MetadataHandle stuff, and don't have any comments there yet. From kim.barrett at oracle.com Thu Apr 4 20:57:47 2019 From: kim.barrett at oracle.com (Kim Barrett) Date: Thu, 4 Apr 2019 16:57:47 -0400 Subject: [13] RFR(L) 8220623: [JVMCI] Update JVMCI to support JVMCI based Compiler compiled into shared library In-Reply-To: <3514c74d-5f6a-61cc-ebea-b9564df61673@oracle.com> References: <4b759550-f185-724f-7139-9bab648a1966@oracle.com> <119335f1-acb7-fffc-f38a-e50a96b73b3c@oracle.com> <2d206374-cfeb-9b1b-2838-e4e0a774e89d@oracle.com> <8063fb37-f1b0-8b93-bbe8-4dbeeaa54959@oracle.com> <17233985-18c7-305e-5556-fe2b38926b71@oracle.com> <3514c74d-5f6a-61cc-ebea-b9564df61673@oracle.com> Message-ID: > On Apr 4, 2019, at 3:22 AM, Vladimir Kozlov wrote: > > New delta: > http://cr.openjdk.java.net/~kvn/8220623/webrev_delta.06/ > > Full: > http://cr.openjdk.java.net/~kvn/8220623/webrev.06/ > > New changes are based on Kim and Stefan suggestions: > > - Moved JVMCI::oops_do() from JNIHandles to places where it should be called. > - Moved JVMCI cleanup task to the beginning of ParallelCleaningTask::work(). > - Used JVMCI_ONLY macro with COMMA. > - Disable JVMCI build on SPARC. We don't use it - neither Graal or AOT are built on SPARC. Disabling also helps to find missing JVMCI guards. > > I ran hs-tier1-3 testing - it passed (hs-tier3 includes graal testing). > I started hs-tier4..8-graal testing. > I will do performance testing next. 
------------------------------------------------------------------------------ src/hotspot/share/jvmci/jvmciJavaClasses.hpp 28 #include "runtime/jniHandles.inline.hpp" .hpp files are not permitted to #include .inline.hpp files. We've fixed most violations (I think there are still a few lingering ones that have not yet been disentangled); please don't add new ones. ------------------------------------------------------------------------------ From vladimir.kozlov at oracle.com Thu Apr 4 23:45:46 2019 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Thu, 4 Apr 2019 16:45:46 -0700 Subject: [13] RFR(L) 8220623: [JVMCI] Update JVMCI to support JVMCI based Compiler compiled into shared library In-Reply-To: References: <4b759550-f185-724f-7139-9bab648a1966@oracle.com> <119335f1-acb7-fffc-f38a-e50a96b73b3c@oracle.com> <2d206374-cfeb-9b1b-2838-e4e0a774e89d@oracle.com> <8063fb37-f1b0-8b93-bbe8-4dbeeaa54959@oracle.com> <17233985-18c7-305e-5556-fe2b38926b71@oracle.com> <3514c74d-5f6a-61cc-ebea-b9564df61673@oracle.com> Message-ID: Thank you, Kim I moved HotSpotJVMCI methods which call JNIHandles::resolve() to jvmciJavaClasses.cpp file so that I don't need to include jniHandles.inline.hpp into .hpp file. I will update changes when I address your other comments. Thanks, Vladimir On 4/4/19 1:57 PM, Kim Barrett wrote: >> On Apr 4, 2019, at 3:22 AM, Vladimir Kozlov wrote: >> >> New delta: >> http://cr.openjdk.java.net/~kvn/8220623/webrev_delta.06/ >> >> Full: >> http://cr.openjdk.java.net/~kvn/8220623/webrev.06/ >> >> New changes are based on Kim and Stefan suggestions: >> >> - Moved JVMCI::oops_do() from JNIHandles to places where it should be called. >> - Moved JVMCI cleanup task to the beginning of ParallelCleaningTask::work(). >> - Used JVMCI_ONLY macro with COMMA. >> - Disable JVMCI build on SPARC. We don't use it - neither Graal or AOT are built on SPARC. Disabling also helps to find missing JVMCI guards. 
>> >> I ran hs-tier1-3 testing - it passed (hs-tier3 includes graal testing). >> I started hs-tier4..8-graal testing. >> I will do performance testing next. > > ------------------------------------------------------------------------------ > src/hotspot/share/jvmci/jvmciJavaClasses.hpp > 28 #include "runtime/jniHandles.inline.hpp" > > .hpp files are not permitted to #include .inline.hpp files. > > We've fixed most violations (I think there are still a few lingering > ones that have not yet been disentangled); please don't add new ones. > > ------------------------------------------------------------------------------ > From vladimir.kozlov at oracle.com Fri Apr 5 15:22:26 2019 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Fri, 5 Apr 2019 08:22:26 -0700 Subject: [13] RFR(L) 8220623: [JVMCI] Update JVMCI to support JVMCI based Compiler compiled into shared library In-Reply-To: References: <4b759550-f185-724f-7139-9bab648a1966@oracle.com> <119335f1-acb7-fffc-f38a-e50a96b73b3c@oracle.com> <2d206374-cfeb-9b1b-2838-e4e0a774e89d@oracle.com> <8063fb37-f1b0-8b93-bbe8-4dbeeaa54959@oracle.com> <17233985-18c7-305e-5556-fe2b38926b71@oracle.com> <3514c74d-5f6a-61cc-ebea-b9564df61673@oracle.com> Message-ID: Thank you, Stefan Vladimir On 4/4/19 1:10 AM, Stefan Karlsson wrote: > GC delta looks good. > > Thanks, > StefanK > > On 2019-04-04 09:22, Vladimir Kozlov wrote: >> New delta: >> http://cr.openjdk.java.net/~kvn/8220623/webrev_delta.06/ >> >> Full: >> http://cr.openjdk.java.net/~kvn/8220623/webrev.06/ >> >> New changes are based on Kim and Stefan suggestions: >> >> - Moved JVMCI::oops_do() from JNIHandles to places where it should be called. >> - Moved JVMCI cleanup task to the beginning of ParallelCleaningTask::work(). >> - Used JVMCI_ONLY macro with COMMA. >> - Disable JVMCI build on SPARC. We don't use it - neither Graal or AOT are built on SPARC. Disabling also helps to >> find missing JVMCI guards. 
>> >> I ran hs-tier1-3 testing - it passed (hs-tier3 includes graal testing). >> I started hs-tier4..8-graal testing. >> I will do performance testing next. >> >> Thanks, >> Vladimir >> >> On 4/3/19 9:54 AM, Vladimir Kozlov wrote: >>> On 4/2/19 11:35 PM, Stefan Karlsson wrote: >>>> On 2019-04-02 22:41, Vladimir Kozlov wrote: >>>>> I ran Kitchensink with G1 and -Xmx8g. I observed that Remark pause times are not consistent even without Graal. >>>>> To see effect I added time spent in JVMCI::do_unloading() to GC log (see below [3]). The result is < 1ms - it is >>>>> less than 1% of a pause time. >>>> >>>> Kitchensink isn't really a benchmark, but a stress test. I sent you a private mail how to run these changes through >>>> our internal performance test setup. >>> >>> Okay, I will run performance tests there too. >>> >>>> >>>>> >>>>> It will have even less effect since I moved JVMCI::do_unloading() from serial path to parallel worker thread as >>>>> Stefan suggested. >>>>> >>>>> Stefan, are you satisfied with these changes now? >>>> >>>> Yes, the clean-ups look good. Thanks for cleaning this up. >>>> >>>> Kim had some extra comments about a few more places where JVMCI_ONLY could be used. >>>> >>>> I also agree with him that JVMCI::oops_do should not be placed in JNIHandles::oops_do. I think you should put it >>>> where you put the AOTLoader::oops_do calls. >>> >>> Okay. 
>>> >>> Thanks, >>> Vladimir >>> >>>> >>>> Thanks, >>>> StefanK >>>> >>>> >>>>> >>>>> Here is latest delta update which includes previous [1] delta and >>>>> - use CompilerThreadStackSize * 2 for libgraal instead of exact value, >>>>> - removed HandleMark added for debugging (reverted changes in jvmtiImpl.cpp), >>>>> - added recent jvmci-8 changes to fix registration of native methods in libgraal (jvmciCompilerToVM.cpp) >>>>> >>>>> http://cr.openjdk.java.net/~kvn/8220623/webrev_delta.05/ >>>>> >>>>> Thanks, >>>>> Vladimir >>>>> >>>>> [1] http://cr.openjdk.java.net/~kvn/8220623/webrev_delta.04/ >>>>> [2] Original webrev http://cr.openjdk.java.net/~kvn/8220623/webrev.03/ >>>>> [3] Pauses times from Kitchensink (0.0ms means there were no unloaded classes, 'NNN alive' shows how many metadata >>>>> references were processed): >>>>> >>>>> [1.083s][1554229160638ms][info ][gc,start???? ] GC(2) Pause Remark >>>>> [1.085s][1554229160639ms][info ][gc?????????? ] GC(2) JVMCI::do_unloading(): 0 alive 0.000ms >>>>> [1.099s][1554229160654ms][info ][gc?????????? ] GC(2) Pause Remark 28M->28M(108M) 16.123ms >>>>> >>>>> [3.097s][1554229162651ms][info ][gc,start???? ] GC(12) Pause Remark >>>>> [3.114s][1554229162668ms][info ][gc?????????? ] GC(12) JVMCI::do_unloading(): 3471 alive 0.164ms >>>>> [3.148s][1554229162702ms][info ][gc?????????? ] GC(12) Pause Remark 215M->213M(720M) 51.103ms >>>>> >>>>> [455.111s][1554229614666ms][info ][gc,phases,start] GC(1095) Phase 1: Mark live objects >>>>> [455.455s][1554229615010ms][info ][gc???????????? ] GC(1095) JVMCI::do_unloading(): 4048 alive 0.821ms >>>>> [455.456s][1554229615010ms][info ][gc,phases????? ] GC(1095) Phase 1: Mark live objects 344.107ms >>>>> >>>>> [848.932s][1554230008486ms][info ][gc,phases,start] GC(1860) Phase 1: Mark live objects >>>>> [849.248s][1554230008803ms][info ][gc???????????? ] GC(1860) JVMCI::do_unloading(): 3266 alive 0.470ms >>>>> [849.249s][1554230008803ms][info ][gc,phases????? 
] GC(1860) Phase 1: Mark live objects 316.527ms >>>>> >>>>> [1163.778s][1554230323332ms][info ][gc,start?????? ] GC(2627) Pause Remark >>>>> [1163.932s][1554230323486ms][info ][gc???????????? ] GC(2627) JVMCI::do_unloading(): 3474 alive 0.642ms >>>>> [1163.941s][1554230323496ms][info ][gc???????????? ] GC(2627) Pause Remark 2502M->2486M(4248M) 163.296ms >>>>> >>>>> [1242.587s][1554230402141ms][info ][gc,phases,start] GC(2734) Phase 1: Mark live objects >>>>> [1242.899s][1554230402453ms][info ][gc???????????? ] GC(2734) JVMCI::do_unloading(): 3449 alive 0.570ms >>>>> [1242.899s][1554230402453ms][info ][gc,phases????? ] GC(2734) Phase 1: Mark live objects 311.719ms >>>>> >>>>> [1364.164s][1554230523718ms][info ][gc,phases,start] GC(3023) Phase 1: Mark live objects >>>>> [1364.613s][1554230524167ms][info ][gc???????????? ] GC(3023) JVMCI::do_unloading(): 3449 alive 0.000ms >>>>> [1364.613s][1554230524167ms][info ][gc,phases????? ] GC(3023) Phase 1: Mark live objects 448.495ms >>>>> >>>>> [1425.222s][1554230584776ms][info ][gc,phases,start] GC(3151) Phase 1: Mark live objects >>>>> [1425.587s][1554230585142ms][info ][gc???????????? ] GC(3151) JVMCI::do_unloading(): 3491 alive 0.882ms >>>>> [1425.587s][1554230585142ms][info ][gc,phases????? ] GC(3151) Phase 1: Mark live objects 365.403ms >>>>> >>>>> [1456.401s][1554230615955ms][info ][gc,phases,start] GC(3223) Phase 1: Mark live objects >>>>> [1456.769s][1554230616324ms][info ][gc???????????? ] GC(3223) JVMCI::do_unloading(): 3478 alive 0.616ms >>>>> [1456.769s][1554230616324ms][info ][gc,phases????? ] GC(3223) Phase 1: Mark live objects 368.643ms >>>>> >>>>> [1806.139s][1554230965694ms][info?? ][gc,start?????? ] GC(4014) Pause Remark >>>>> [1806.161s][1554230965716ms][info?? ][gc???????????? ] GC(4014) JVMCI::do_unloading(): 3478 alive 0.000ms >>>>> [1806.163s][1554230965717ms][info?? ][gc???????????? 
] GC(4014) Pause Remark 1305M->1177M(2772M) 23.190ms >>>>> >>>>> >>>>> >>>>> On 4/1/19 12:34 AM, Stefan Karlsson wrote: >>>>>> On 2019-03-29 17:55, Vladimir Kozlov wrote: >>>>>>> Stefan, >>>>>>> >>>>>>> Do you have a test (and flags) which can allow me to measure effect of this code on G1 remark pause? >>>>>> >>>>>> >>>>>> -Xlog:gc prints the remark times: >>>>>> [4,296s][info][gc?????? ] GC(89) Pause Remark 4M->4M(28M) 36,412ms >>>>>> >>>>>> StefanK >>>>>> >>>>>>> >>>>>>> Thanks, >>>>>>> Vladimir >>>>>>> >>>>>>> On 3/29/19 12:36 AM, Stefan Karlsson wrote: >>>>>>>> On 2019-03-29 03:07, Vladimir Kozlov wrote: >>>>>>>>> Hi Stefan, >>>>>>>>> >>>>>>>>> I collected some data on MetadataHandleBlock. >>>>>>>>> >>>>>>>>> First, do_unloading() code is executed only when class_unloading_occurred is 'true' - it is rare case. It >>>>>>>>> should not affect normal G1 remark pause. >>>>>>>> >>>>>>>> It's only rare for applications that don't do dynamic class loading and unloading. The applications that do, >>>>>>>> will be affected. >>>>>>>> >>>>>>>>> >>>>>>>>> Second, I run a test with -Xcomp. I got about 10,000 compilations by Graal and next data at the end of execution: >>>>>>>>> >>>>>>>>> max_blocks = 232 >>>>>>>>> max_handles_per_block = 32 (since handles array has 32 elements) >>>>>>>>> max_total_alive_values = 4631 >>>>>>>> >>>>>>>> OK. Thanks for the info. >>>>>>>> >>>>>>>> StefanK >>>>>>>> >>>>>>>>> >>>>>>>>> Thanks, >>>>>>>>> Vladimir >>>>>>>>> >>>>>>>>> On 3/28/19 2:44 PM, Vladimir Kozlov wrote: >>>>>>>>>> Thank you, Stefan >>>>>>>>>> >>>>>>>>>> On 3/28/19 12:54 PM, Stefan Karlsson wrote: >>>>>>>>>>> Hi Vladimir, >>>>>>>>>>> >>>>>>>>>>> I started to check the GC code. >>>>>>>>>>> >>>>>>>>>>> ======================================================================== >>>>>>>>>>> I see that you've added guarded includes in the middle of the include list: >>>>>>>>>>> ?? #include "gc/shared/strongRootsScope.hpp" >>>>>>>>>>> ?? 
#include "gc/shared/weakProcessor.hpp" >>>>>>>>>>> + #if INCLUDE_JVMCI >>>>>>>>>>> + #include "jvmci/jvmci.hpp" >>>>>>>>>>> + #endif >>>>>>>>>>> ?? #include "oops/instanceRefKlass.hpp" >>>>>>>>>>> ?? #include "oops/oop.inline.hpp" >>>>>>>>>>> >>>>>>>>>>> The style we use is to put these conditional includes at the end of the include lists. >>>>>>>>>> >>>>>>>>>> okay >>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> ======================================================================== >>>>>>>>>>> Could you also change the following: >>>>>>>>>>> >>>>>>>>>>> + #if INCLUDE_JVMCI >>>>>>>>>>> +???? // Clean JVMCI metadata handles. >>>>>>>>>>> +???? JVMCI::do_unloading(is_alive_closure(), purged_class); >>>>>>>>>>> + #endif >>>>>>>>>>> >>>>>>>>>>> to: >>>>>>>>>>> +???? // Clean JVMCI metadata handles. >>>>>>>>>>> +???? JVMCI_ONLY(JVMCI::do_unloading(is_alive_closure(), purged_class);) >>>>>>>>>>> >>>>>>>>>>> to get rid of some of the line noise in the GC files. >>>>>>>>>> >>>>>>>>>> okay >>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> ======================================================================== >>>>>>>>>>> In the future we will need version of JVMCI::do_unloading that supports concurrent cleaning for ZGC. >>>>>>>>>> >>>>>>>>>> Yes, we need to support concurrent cleaning in a future. >>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> ======================================================================== >>>>>>>>>>> What's the performance impact for G1 remark pause with this serial walk over the MetadataHandleBlock? >>>>>>>>>>> >>>>>>>>>>> 3275 void G1CollectedHeap::complete_cleaning(BoolObjectClosure* is_alive, >>>>>>>>>>> 3276???????????????????????????????????????? bool class_unloading_occurred) { >>>>>>>>>>> 3277?? uint num_workers = workers()->active_workers(); >>>>>>>>>>> 3278?? ParallelCleaningTask unlink_task(is_alive, num_workers, class_unloading_occurred, false); >>>>>>>>>>> 3279?? workers()->run_task(&unlink_task); >>>>>>>>>>> 3280 #if INCLUDE_JVMCI >>>>>>>>>>> 3281?? 
// No parallel processing of JVMCI metadata handles for now. >>>>>>>>>>> 3282?? JVMCI::do_unloading(is_alive, class_unloading_occurred); >>>>>>>>>>> 3283 #endif >>>>>>>>>>> 3284 } >>>>>>>>>> >>>>>>>>>> There should not be impact if Graal is not used. Only cost of call (which most likely is inlined in product >>>>>>>>>> VM) and check: >>>>>>>>>> >>>>>>>>>> http://hg.openjdk.java.net/metropolis/dev/file/530fc1427d02/src/hotspot/share/jvmci/jvmciRuntime.cpp#l1237 >>>>>>>>>> >>>>>>>>>> If Graal is used it should not have big impact since these metadata has regular pattern (32 handles per array >>>>>>>>>> and array per MetadataHandleBlock block which are linked in list) and not large. >>>>>>>>>> If there will be noticeable impact - we will work on it as you suggested by using ParallelCleaningTask. >>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> ======================================================================== >>>>>>>>>>> Did you consider adding it as a task for one of the worker threads to execute in ParallelCleaningTask? >>>>>>>>>>> >>>>>>>>>>> See how other tasks are claimed by one worker: >>>>>>>>>>> void KlassCleaningTask::work() { >>>>>>>>>>> ?? ResourceMark rm; >>>>>>>>>>> >>>>>>>>>>> ?? // One worker will clean the subklass/sibling klass tree. >>>>>>>>>>> ?? if (claim_clean_klass_tree_task()) { >>>>>>>>>>> ???? Klass::clean_subklass_tree(); >>>>>>>>>>> ?? } >>>>>>>>>> >>>>>>>>>> These changes were ported from JDK8u based changes in graal-jvmci-8 and there are no ParallelCleaningTask in >>>>>>>>>> JDK8. >>>>>>>>>> >>>>>>>>>> Your suggestion is interesting and I agree that we should investigate it. >>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> ======================================================================== >>>>>>>>>>> In MetadataHandleBlock::do_unloading: >>>>>>>>>>> >>>>>>>>>>> +??????? if (klass->class_loader_data()->is_unloading()) { >>>>>>>>>>> +????????? // This needs to be marked so that it's no longer scanned >>>>>>>>>>> +????????? 
// but can't be put on the free list yet. The >>>>>>>>>>> +           // ReferenceCleaner will set this to NULL and >>>>>>>>>>> +           // put it on the free list. >>>>>>>>>>> >>>>>>>>>>> I couldn't find the ReferenceCleaner in the patch or in the source. Where can I find this code? >>>>>>>>>> >>>>>>>>>> I think it is a typo (I will fix it) - it references the new HandleCleaner class: >>>>>>>>>> >>>>>>>>>> http://cr.openjdk.java.net/~kvn/8220623/webrev.03/src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/HandleCleaner.java.html >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> Thanks, >>>>>>>>>> Vladimir >>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> Thanks, >>>>>>>>>>> StefanK >>>>>>>>>>> >>>>>>>>>>> On 2019-03-28 20:15, Vladimir Kozlov wrote: >>>>>>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8220623 >>>>>>>>>>>> http://cr.openjdk.java.net/~kvn/8220623/webrev.03/ >>>>>>>>>>>> >>>>>>>>>>>> Update JVMCI to support Graal pre-compiled as a shared library. >>>>>>>>>>>> Using AOT-compiled Graal offers benefits including: >>>>>>>>>>>> - fast startup >>>>>>>>>>>> - compile time similar to native JIT compilers (C2) >>>>>>>>>>>> - memory usage disjoint from the application Java heap >>>>>>>>>>>> - no profile pollution of JDK code used by the application >>>>>>>>>>>> >>>>>>>>>>>> This is a JDK13 port of the up-to-date JVMCI changes done in graal-jvmci-8 [1]. >>>>>>>>>>>> Changes were collected in the Metropolis repo [2] and tested there. >>>>>>>>>>>> >>>>>>>>>>>> Changes were reviewed by Oracle Labs (authors of JVMCI and Graal) and our compiler group. >>>>>>>>>>>> Changes in shared code are guarded by #if INCLUDE_JVMCI and JVMCI flags. >>>>>>>>>>>> >>>>>>>>>>>> I ran tier1-tier8 (which includes HotSpot and JDK tests) and it was clean. In this set Graal was tested only >>>>>>>>>>>> in tier3. >>>>>>>>>>>> >>>>>>>>>>>> And I ran all hs-tier3-graal .. hs-tier8-graal Graal tests available in our system. Several issues were found >>>>>>>>>>>> which were present before these changes.
>>>>>>>>>>>> >>>>>>>>>>>> Thanks, >>>>>>>>>>>> Vladimir >>>>>>>>>>>> >>>>>>>>>>>> [1] https://github.com/graalvm/graal-jvmci-8/commit/49ff2045fb603e35516a3a427d8023c00e1607af >>>>>>>>>>>> [2] http://hg.openjdk.java.net/metropolis/dev/ >>>>>>>>>>> From vladimir.kozlov at oracle.com Sat Apr 6 00:58:34 2019 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Fri, 5 Apr 2019 17:58:34 -0700 Subject: [13] RFR(L) 8220623: [JVMCI] Update JVMCI to support JVMCI based Compiler compiled into shared library In-Reply-To: References: <4b759550-f185-724f-7139-9bab648a1966@oracle.com> <119335f1-acb7-fffc-f38a-e50a96b73b3c@oracle.com> <2d206374-cfeb-9b1b-2838-e4e0a774e89d@oracle.com> <8063fb37-f1b0-8b93-bbe8-4dbeeaa54959@oracle.com> <17233985-18c7-305e-5556-fe2b38926b71@oracle.com> <3514c74d-5f6a-61cc-ebea-b9564df61673@oracle.com> Message-ID: <66bedd54-769a-b6e7-1508-b5648e9c3e82@oracle.com> Thank you, Kim On 4/5/19 4:56 PM, Kim Barrett wrote: >> On Apr 4, 2019, at 7:45 PM, Vladimir Kozlov wrote: >> >> Thank you, Kim >> >> I moved HotSpotJVMCI methods which call JNIHandles::resolve() to jvmciJavaClasses.cpp file so that I don't need to include jniHandles.inline.hpp into .hpp file. >> >> I will update changes when I address your other comments. > > Here are a few more comments. > > I really want to better understand the MetadataHandle stuff, but I > don't think I will be able to seriously dig into it for a while. I'm > not sanguine about what seems to be yet another weak reference > mechanism. > > ------------------------------------------------------------------------------ > src/hotspot/share/prims/jvmtiTagMap.cpp > 3046 blk.set_kind(JVMTI_HEAP_REFERENCE_OTHER); > 3047 Universe::oops_do(&blk); > 3048 > 3049 #if INCLUDE_JVMCI > 3050 blk.set_kind(JVMTI_HEAP_REFERENCE_OTHER); > 3051 JVMCI::oops_do(&blk); > 3052 if (blk.stopped()) { > 3053 return false; > 3054 } > 3055 #endif > > (New code starts with line 3049.) 
> > There should probably be a blk.stopped() check after the call to > Universe::oops_do. This seems like a pre-existing bug, made more > apparent by the addition of the JVMCI code. Yes, I was also puzzled about that. I will add check. > > ------------------------------------------------------------------------------ > src/hotspot/share/classfile/classFileParser.cpp > 5634 if (!is_internal()) { > 5635 bool trace_class_loading = log_is_enabled(Info, class, load); > 5636 #if INCLUDE_JVMCI > 5637 bool trace_loading_cause = TraceClassLoadingCause != NULL && > 5638 (strcmp(TraceClassLoadingCause, "*") == 0 || > 5639 strstr(ik->external_name(), TraceClassLoadingCause) != NULL); > 5640 trace_class_loading = trace_class_loading || trace_loading_cause; > 5641 #endif > 5642 if (trace_class_loading) { > 5643 ResourceMark rm; > 5644 const char* module_name = (module_entry->name() == NULL) ? UNNAMED_MODULE : module_entry->name()->as_C_string(); > 5645 ik->print_class_load_logging(_loader_data, module_name, _stream); > 5646 #if INCLUDE_JVMCI > 5647 if (trace_loading_cause) { > 5648 JavaThread::current()->print_stack_on(tty); > 5649 } > 5650 #endif > > This appears to be attempting to force a call to > print_class_load_logging if either log_is_enabled(Info, class, load) > is true or if TraceClassLoadingCause triggers it. But > print_class_load_logging does nothing if that logging is not enabled, > with the result that if it isn't enabled, but the tracing option > matches, we'll get a mysterious stack trace printed and nothing else. > I suspect that isn't the desired behavior. I will fix it. > > ------------------------------------------------------------------------------ > src/hotspot/share/jvmci/jvmciRuntime.cpp > 135 // The following instance variables are only used by the first block in a chain. > 136 // Having two types of blocks complicates the code and the space overhead is negligible. 
> 137 MetadataHandleBlock* _last; // Last block in use > 138 intptr_t _free_list; // Handle free list > 139 int _allocate_before_rebuild; // Number of blocks to allocate before rebuilding free list > > There seems to be exactly one MetadataHandleBlock chain, in > _metadata_handles. So these instance variables could be static class > variables. Agree. > > If there was more than one list, I think a better approach would have > a MetadataHandleBlockList class that had those members and a pointer > to the first block in the list. This code seems to have been pretty > much copy-paste-modified from JNIHandleBlock, which has somewhat > different requirements and usage pattern. It is only one list. I don't want to complicate it. Thanks, Vladimir From kim.barrett at oracle.com Fri Apr 5 23:56:15 2019 From: kim.barrett at oracle.com (Kim Barrett) Date: Fri, 5 Apr 2019 19:56:15 -0400 Subject: [13] RFR(L) 8220623: [JVMCI] Update JVMCI to support JVMCI based Compiler compiled into shared library In-Reply-To: References: <4b759550-f185-724f-7139-9bab648a1966@oracle.com> <119335f1-acb7-fffc-f38a-e50a96b73b3c@oracle.com> <2d206374-cfeb-9b1b-2838-e4e0a774e89d@oracle.com> <8063fb37-f1b0-8b93-bbe8-4dbeeaa54959@oracle.com> <17233985-18c7-305e-5556-fe2b38926b71@oracle.com> <3514c74d-5f6a-61cc-ebea-b9564df61673@oracle.com> Message-ID: > On Apr 4, 2019, at 7:45 PM, Vladimir Kozlov wrote: > > Thank you, Kim > > I moved HotSpotJVMCI methods which call JNIHandles::resolve() to jvmciJavaClasses.cpp file so that I don't need to include jniHandles.inline.hpp into .hpp file. > > I will update changes when I address your other comments. Here are a few more comments. I really want to better understand the MetadataHandle stuff, but I don't think I will be able to seriously dig into it for a while. I'm not sanguine about what seems to be yet another weak reference mechanism. 
------------------------------------------------------------------------------ src/hotspot/share/prims/jvmtiTagMap.cpp 3046 blk.set_kind(JVMTI_HEAP_REFERENCE_OTHER); 3047 Universe::oops_do(&blk); 3048 3049 #if INCLUDE_JVMCI 3050 blk.set_kind(JVMTI_HEAP_REFERENCE_OTHER); 3051 JVMCI::oops_do(&blk); 3052 if (blk.stopped()) { 3053 return false; 3054 } 3055 #endif (New code starts with line 3049.) There should probably be a blk.stopped() check after the call to Universe::oops_do. This seems like a pre-existing bug, made more apparent by the addition of the JVMCI code. ------------------------------------------------------------------------------ src/hotspot/share/classfile/classFileParser.cpp 5634 if (!is_internal()) { 5635 bool trace_class_loading = log_is_enabled(Info, class, load); 5636 #if INCLUDE_JVMCI 5637 bool trace_loading_cause = TraceClassLoadingCause != NULL && 5638 (strcmp(TraceClassLoadingCause, "*") == 0 || 5639 strstr(ik->external_name(), TraceClassLoadingCause) != NULL); 5640 trace_class_loading = trace_class_loading || trace_loading_cause; 5641 #endif 5642 if (trace_class_loading) { 5643 ResourceMark rm; 5644 const char* module_name = (module_entry->name() == NULL) ? UNNAMED_MODULE : module_entry->name()->as_C_string(); 5645 ik->print_class_load_logging(_loader_data, module_name, _stream); 5646 #if INCLUDE_JVMCI 5647 if (trace_loading_cause) { 5648 JavaThread::current()->print_stack_on(tty); 5649 } 5650 #endif This appears to be attempting to force a call to print_class_load_logging if either log_is_enabled(Info, class, load) is true or if TraceClassLoadingCause triggers it. But print_class_load_logging does nothing if that logging is not enabled, with the result that if it isn't enabled, but the tracing option matches, we'll get a mysterious stack trace printed and nothing else. I suspect that isn't the desired behavior. 
------------------------------------------------------------------------------ src/hotspot/share/jvmci/jvmciRuntime.cpp 135 // The following instance variables are only used by the first block in a chain. 136 // Having two types of blocks complicates the code and the space overhead is negligible. 137 MetadataHandleBlock* _last; // Last block in use 138 intptr_t _free_list; // Handle free list 139 int _allocate_before_rebuild; // Number of blocks to allocate before rebuilding free list There seems to be exactly one MetadataHandleBlock chain, in _metadata_handles. So these instance variables could be static class variables. If there was more than one list, I think a better approach would have a MetadataHandleBlockList class that had those members and a pointer to the first block in the list. This code seems to have been pretty much copy-paste-modified from JNIHandleBlock, which has somewhat different requirements and usage pattern. ------------------------------------------------------------------------------ From rkennke at redhat.com Tue Apr 9 19:49:14 2019 From: rkennke at redhat.com (Roman Kennke) Date: Tue, 09 Apr 2019 21:49:14 +0200 Subject: How to make progress with PRs? In-Reply-To: <0a96df22-880d-bcdc-949b-452b4a1432d2@oracle.com> References: <0a96df22-880d-bcdc-949b-452b4a1432d2@oracle.com> Message-ID: <60b7d2c6689f85a34252db0a8803b018a9f25531.camel@redhat.com> Hi Tom, I recently set up a new machine, and unfortunately pushed some changesets with wrong email, which makes oca-check complain in both pull requests. They should otherwise be good now. Thanks, Roman > Roman Kennke wrote on 4/3/19 7:59 AM: > > Hello, > > > > I have two PRs lingering: > > > > https://github.com/oracle/graal/pull/1015 > > Sorry, I'll update with your latest changes and push if everything > looks > good. > > tom > > > and > > https://github.com/oracle/graal/pull/1117 > > > > which should be good to integrate, afaict. Which buttons do I need > > to > > press to get progress on them? 
:-) > > > > Thanks, > > Roman From vladimir.kozlov at oracle.com Wed Apr 10 02:25:35 2019 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Tue, 9 Apr 2019 19:25:35 -0700 Subject: [13] RFR(L) 8220623: [JVMCI] Update JVMCI to support JVMCI based Compiler compiled into shared library In-Reply-To: <63b8e1d2-3516-88f5-02ac-828dd15baf83@oracle.com> References: <4b759550-f185-724f-7139-9bab648a1966@oracle.com> <119335f1-acb7-fffc-f38a-e50a96b73b3c@oracle.com> <2d206374-cfeb-9b1b-2838-e4e0a774e89d@oracle.com> <8063fb37-f1b0-8b93-bbe8-4dbeeaa54959@oracle.com> <17233985-18c7-305e-5556-fe2b38926b71@oracle.com> <3514c74d-5f6a-61cc-ebea-b9564df61673@oracle.com> <63b8e1d2-3516-88f5-02ac-828dd15baf83@oracle.com> Message-ID: <39774cdd-de9e-c878-4a5a-f6595a93859f@oracle.com> Thank you, Coleen On 4/9/19 1:36 PM, coleen.phillimore at oracle.com wrote: > > I think I missed graal-dev with this reply.? I have a few other comments. > > +void MetadataHandleBlock::do_unloading(BoolObjectClosure* is_alive) { > > > We've removed the is_alive parameter from all do_unloading, and it appears unused here also. Yes, I can remove it. > > I don't know about this MetadataHandles block.?? It seems that it could be a concurrent hashtable > with a WeakHandle<> if it's for jdk11 and beyond.? Kim might have mentioned this (I haven't read all > the replies thoroughly) but JNIHandleBlock wasn't MT safe, and the new OopStorage is safe and scalable. Yes, Kim also suggested OopStorage. I did not get into that part yet but I will definitely do. > > +? jmetadata allocate_handle(methodHandle handle)?????? { return allocate_metadata_handle(handle()); } > +? 
jmetadata allocate_handle(constantPoolHandle handle) { return allocate_metadata_handle(handle()); } > > +CompLevel JVMCI::adjust_comp_level(methodHandle method, bool is_osr, CompLevel level, JavaThread* > thread) { > > +JVMCIObject JVMCIEnv::new_StackTraceElement(methodHandle method, int bci, JVMCI_TRAPS) { > > +JVMCIObject JVMCIEnv::new_HotSpotNmethod(methodHandle method, const char* name, jboolean isDefault, > jlong compileId, JVMCI_TRAPS) { > > Passing metadata Handles by copy will call the copy constructor and destructor for these parameters > unnecessarily. They should be passed as *const* references to avoid this. Okay. > > +class MetadataHandleBlock : public CHeapObj { > > > There should be a better mt category for this. mtCompiler seems appropriate here. Depending on how many > others of these, you could add an mtJVMCI. mtJVMCI is a good suggestion. > > +            if (TraceNMethodInstalls) { > > > We've had Unified Logging in the sources for a long time now. New code should use UL rather than > adding a TraceSomething option. I understand it's supposed to be shared with JDK8 code but it > seems that you're forward-porting what looks like old code into the repository. Yes, we should use UL for this. Existing JIT code (ciEnv.cpp) is still not using UL for this: http://hg.openjdk.java.net/jdk/jdk/file/f847a42ddc01/src/hotspot/share/ci/ciEnv.cpp#l1075 Maybe I should update it too ... > > Coleen > > > On 4/9/19 4:00 PM, coleen.phillimore at oracle.com wrote: >> >> http://cr.openjdk.java.net/~kvn/8220623/webrev.06/src/hotspot/share/classfile/classFileParser.cpp.udiff.html >> >> >> It appears this change is to implement https://bugs.openjdk.java.net/browse/JDK-8193513 which we >> closed as WNF. If you want this change, remove it from this giant patch and reopen and submit a >> separate patch for this bug. Thank you for pointing it out. I will do as you suggested. >> >> It shouldn't be conditional on JVMCI and should use the normal unified logging mechanism. Okay.
>> >> http://cr.openjdk.java.net/~kvn/8220623/webrev.06/src/hotspot/share/runtime/thread.hpp.udiff.html >> >> jlong _pending_failed_speculation; >> >> >> We've been trying to remove and avoid Java types in HotSpot code and use the appropriate C++ types >> instead. Can this be changed to int64_t? 'long' is generally wrong though. This field should be a Java type since it is accessed from Java Graal: http://hg.openjdk.java.net/jdk/jdk/file/f847a42ddc01/src/jdk.internal.vm.compiler/share/classes/org.graalvm.compiler.hotspot/src/org/graalvm/compiler/hotspot/GraalHotSpotVMConfig.java#l401 >> >> I seem to remember there was code to deal with metadata in oops for redefinition, but I can't find >> it in this big patch. I was going to look at that. Maybe it is MetadataHandleBlock::metadata_do() (in jvmciRuntime.cpp)? >> >> Otherwise, I've reviewed the runtime code. Thanks, Vladimir >> >> Coleen >> >> On 4/4/19 3:22 AM, Vladimir Kozlov wrote: >>> New delta: >>> http://cr.openjdk.java.net/~kvn/8220623/webrev_delta.06/ >>> >>> Full: >>> http://cr.openjdk.java.net/~kvn/8220623/webrev.06/ >>> >>> New changes are based on Kim's and Stefan's suggestions: >>> >>> - Moved JVMCI::oops_do() from JNIHandles to places where it should be called. >>> - Moved JVMCI cleanup task to the beginning of ParallelCleaningTask::work(). >>> - Used JVMCI_ONLY macro with COMMA. >>> - Disabled JVMCI build on SPARC. We don't use it - neither Graal nor AOT is built on SPARC. >>> Disabling also helps to find missing JVMCI guards. >>> >>> I ran hs-tier1-3 testing - it passed (hs-tier3 includes Graal testing). >>> I started hs-tier4..8-graal testing. >>> I will do performance testing next. >>> >>> Thanks, >>> Vladimir >>> >>> On 4/3/19 9:54 AM, Vladimir Kozlov wrote: >>>> On 4/2/19 11:35 PM, Stefan Karlsson wrote: >>>>> On 2019-04-02 22:41, Vladimir Kozlov wrote: >>>>>> I ran Kitchensink with G1 and -Xmx8g. I observed that Remark pause times are not consistent >>>>>> even without Graal.
>>>>>> To see effect I added time spent in JVMCI::do_unloading() to GC log (see below [3]). The >>>>>> result is < 1ms - it is less than 1% of a pause time. >>>>> >>>>> Kitchensink isn't really a benchmark, but a stress test. I sent you a private mail how to run >>>>> these changes through our internal performance test setup. >>>> >>>> Okay, I will run performance tests there too. >>>> >>>>> >>>>>> >>>>>> It will have even less effect since I moved JVMCI::do_unloading() from serial path to parallel >>>>>> worker thread as Stefan suggested. >>>>>> >>>>>> Stefan, are you satisfied with these changes now? >>>>> >>>>> Yes, the clean-ups look good. Thanks for cleaning this up. >>>>> >>>>> Kim had some extra comments about a few more places where JVMCI_ONLY could be used. >>>>> >>>>> I also agree with him that JVMCI::oops_do should not be placed in JNIHandles::oops_do. I think >>>>> you should put it where you put the AOTLoader::oops_do calls. >>>> >>>> Okay. >>>> >>>> Thanks, >>>> Vladimir >>>> >>>>> >>>>> Thanks, >>>>> StefanK >>>>> >>>>> >>>>>> >>>>>> Here is latest delta update which includes previous [1] delta and >>>>>> - use CompilerThreadStackSize * 2 for libgraal instead of exact value, >>>>>> - removed HandleMark added for debugging (reverted changes in jvmtiImpl.cpp), >>>>>> - added recent jvmci-8 changes to fix registration of native methods in libgraal >>>>>> (jvmciCompilerToVM.cpp) >>>>>> >>>>>> http://cr.openjdk.java.net/~kvn/8220623/webrev_delta.05/ >>>>>> >>>>>> Thanks, >>>>>> Vladimir >>>>>> >>>>>> [1] http://cr.openjdk.java.net/~kvn/8220623/webrev_delta.04/ >>>>>> [2] Original webrev http://cr.openjdk.java.net/~kvn/8220623/webrev.03/ >>>>>> [3] Pauses times from Kitchensink (0.0ms means there were no unloaded classes, 'NNN alive' >>>>>> shows how many metadata references were processed): >>>>>> >>>>>> [1.083s][1554229160638ms][info ][gc,start???? ] GC(2) Pause Remark >>>>>> [1.085s][1554229160639ms][info ][gc?????????? 
] GC(2) JVMCI::do_unloading(): 0 alive 0.000ms >>>>>> [1.099s][1554229160654ms][info ][gc?????????? ] GC(2) Pause Remark 28M->28M(108M) 16.123ms >>>>>> >>>>>> [3.097s][1554229162651ms][info ][gc,start???? ] GC(12) Pause Remark >>>>>> [3.114s][1554229162668ms][info ][gc?????????? ] GC(12) JVMCI::do_unloading(): 3471 alive 0.164ms >>>>>> [3.148s][1554229162702ms][info ][gc?????????? ] GC(12) Pause Remark 215M->213M(720M) 51.103ms >>>>>> >>>>>> [455.111s][1554229614666ms][info ][gc,phases,start] GC(1095) Phase 1: Mark live objects >>>>>> [455.455s][1554229615010ms][info ][gc???????????? ] GC(1095) JVMCI::do_unloading(): 4048 alive >>>>>> 0.821ms >>>>>> [455.456s][1554229615010ms][info ][gc,phases????? ] GC(1095) Phase 1: Mark live objects 344.107ms >>>>>> >>>>>> [848.932s][1554230008486ms][info ][gc,phases,start] GC(1860) Phase 1: Mark live objects >>>>>> [849.248s][1554230008803ms][info ][gc???????????? ] GC(1860) JVMCI::do_unloading(): 3266 alive >>>>>> 0.470ms >>>>>> [849.249s][1554230008803ms][info ][gc,phases????? ] GC(1860) Phase 1: Mark live objects 316.527ms >>>>>> >>>>>> [1163.778s][1554230323332ms][info ][gc,start?????? ] GC(2627) Pause Remark >>>>>> [1163.932s][1554230323486ms][info ][gc???????????? ] GC(2627) JVMCI::do_unloading(): 3474 >>>>>> alive 0.642ms >>>>>> [1163.941s][1554230323496ms][info ][gc???????????? ] GC(2627) Pause Remark 2502M->2486M(4248M) >>>>>> 163.296ms >>>>>> >>>>>> [1242.587s][1554230402141ms][info ][gc,phases,start] GC(2734) Phase 1: Mark live objects >>>>>> [1242.899s][1554230402453ms][info ][gc???????????? ] GC(2734) JVMCI::do_unloading(): 3449 >>>>>> alive 0.570ms >>>>>> [1242.899s][1554230402453ms][info ][gc,phases????? ] GC(2734) Phase 1: Mark live objects >>>>>> 311.719ms >>>>>> >>>>>> [1364.164s][1554230523718ms][info ][gc,phases,start] GC(3023) Phase 1: Mark live objects >>>>>> [1364.613s][1554230524167ms][info ][gc???????????? 
] GC(3023) JVMCI::do_unloading(): 3449 >>>>>> alive 0.000ms >>>>>> [1364.613s][1554230524167ms][info ][gc,phases????? ] GC(3023) Phase 1: Mark live objects >>>>>> 448.495ms >>>>>> >>>>>> [1425.222s][1554230584776ms][info ][gc,phases,start] GC(3151) Phase 1: Mark live objects >>>>>> [1425.587s][1554230585142ms][info ][gc???????????? ] GC(3151) JVMCI::do_unloading(): 3491 >>>>>> alive 0.882ms >>>>>> [1425.587s][1554230585142ms][info ][gc,phases????? ] GC(3151) Phase 1: Mark live objects >>>>>> 365.403ms >>>>>> >>>>>> [1456.401s][1554230615955ms][info ][gc,phases,start] GC(3223) Phase 1: Mark live objects >>>>>> [1456.769s][1554230616324ms][info ][gc???????????? ] GC(3223) JVMCI::do_unloading(): 3478 >>>>>> alive 0.616ms >>>>>> [1456.769s][1554230616324ms][info ][gc,phases????? ] GC(3223) Phase 1: Mark live objects >>>>>> 368.643ms >>>>>> >>>>>> [1806.139s][1554230965694ms][info?? ][gc,start?????? ] GC(4014) Pause Remark >>>>>> [1806.161s][1554230965716ms][info?? ][gc???????????? ] GC(4014) JVMCI::do_unloading(): 3478 >>>>>> alive 0.000ms >>>>>> [1806.163s][1554230965717ms][info?? ][gc???????????? ] GC(4014) Pause Remark >>>>>> 1305M->1177M(2772M) 23.190ms >>>>>> >>>>>> >>>>>> >>>>>> On 4/1/19 12:34 AM, Stefan Karlsson wrote: >>>>>>> On 2019-03-29 17:55, Vladimir Kozlov wrote: >>>>>>>> Stefan, >>>>>>>> >>>>>>>> Do you have a test (and flags) which can allow me to measure effect of this code on G1 >>>>>>>> remark pause? >>>>>>> >>>>>>> >>>>>>> -Xlog:gc prints the remark times: >>>>>>> [4,296s][info][gc?????? ] GC(89) Pause Remark 4M->4M(28M) 36,412ms >>>>>>> >>>>>>> StefanK >>>>>>> >>>>>>>> >>>>>>>> Thanks, >>>>>>>> Vladimir >>>>>>>> >>>>>>>> On 3/29/19 12:36 AM, Stefan Karlsson wrote: >>>>>>>>> On 2019-03-29 03:07, Vladimir Kozlov wrote: >>>>>>>>>> Hi Stefan, >>>>>>>>>> >>>>>>>>>> I collected some data on MetadataHandleBlock. >>>>>>>>>> >>>>>>>>>> First, do_unloading() code is executed only when class_unloading_occurred is 'true' - it >>>>>>>>>> is rare case. 
It should not affect normal G1 remark pause. >>>>>>>>> >>>>>>>>> It's only rare for applications that don't do dynamic class loading and unloading. The >>>>>>>>> applications that do, will be affected. >>>>>>>>> >>>>>>>>>> >>>>>>>>>> Second, I run a test with -Xcomp. I got about 10,000 compilations by Graal and next data >>>>>>>>>> at the end of execution: >>>>>>>>>> >>>>>>>>>> max_blocks = 232 >>>>>>>>>> max_handles_per_block = 32 (since handles array has 32 elements) >>>>>>>>>> max_total_alive_values = 4631 >>>>>>>>> >>>>>>>>> OK. Thanks for the info. >>>>>>>>> >>>>>>>>> StefanK >>>>>>>>> >>>>>>>>>> >>>>>>>>>> Thanks, >>>>>>>>>> Vladimir >>>>>>>>>> >>>>>>>>>> On 3/28/19 2:44 PM, Vladimir Kozlov wrote: >>>>>>>>>>> Thank you, Stefan >>>>>>>>>>> >>>>>>>>>>> On 3/28/19 12:54 PM, Stefan Karlsson wrote: >>>>>>>>>>>> Hi Vladimir, >>>>>>>>>>>> >>>>>>>>>>>> I started to check the GC code. >>>>>>>>>>>> >>>>>>>>>>>> ======================================================================== >>>>>>>>>>>> I see that you've added guarded includes in the middle of the include list: >>>>>>>>>>>> ?? #include "gc/shared/strongRootsScope.hpp" >>>>>>>>>>>> ?? #include "gc/shared/weakProcessor.hpp" >>>>>>>>>>>> + #if INCLUDE_JVMCI >>>>>>>>>>>> + #include "jvmci/jvmci.hpp" >>>>>>>>>>>> + #endif >>>>>>>>>>>> ?? #include "oops/instanceRefKlass.hpp" >>>>>>>>>>>> ?? #include "oops/oop.inline.hpp" >>>>>>>>>>>> >>>>>>>>>>>> The style we use is to put these conditional includes at the end of the include lists. >>>>>>>>>>> >>>>>>>>>>> okay >>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> ======================================================================== >>>>>>>>>>>> Could you also change the following: >>>>>>>>>>>> >>>>>>>>>>>> + #if INCLUDE_JVMCI >>>>>>>>>>>> +???? // Clean JVMCI metadata handles. >>>>>>>>>>>> +???? JVMCI::do_unloading(is_alive_closure(), purged_class); >>>>>>>>>>>> + #endif >>>>>>>>>>>> >>>>>>>>>>>> to: >>>>>>>>>>>> +???? // Clean JVMCI metadata handles. 
>>>>>>>>>>>> + JVMCI_ONLY(JVMCI::do_unloading(is_alive_closure(), purged_class);) >>>>>>>>>>>> >>>>>>>>>>>> to get rid of some of the line noise in the GC files. >>>>>>>>>>> >>>>>>>>>>> okay >>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> ======================================================================== >>>>>>>>>>>> In the future we will need version of JVMCI::do_unloading that supports concurrent >>>>>>>>>>>> cleaning for ZGC. >>>>>>>>>>> >>>>>>>>>>> Yes, we need to support concurrent cleaning in a future. >>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> ======================================================================== >>>>>>>>>>>> What's the performance impact for G1 remark pause with this serial walk over the >>>>>>>>>>>> MetadataHandleBlock? >>>>>>>>>>>> >>>>>>>>>>>> 3275 void G1CollectedHeap::complete_cleaning(BoolObjectClosure* is_alive, >>>>>>>>>>>> 3276 bool class_unloading_occurred) { >>>>>>>>>>>> 3277?? uint num_workers = workers()->active_workers(); >>>>>>>>>>>> 3278?? ParallelCleaningTask unlink_task(is_alive, num_workers, class_unloading_occurred, >>>>>>>>>>>> false); >>>>>>>>>>>> 3279 workers()->run_task(&unlink_task); >>>>>>>>>>>> 3280 #if INCLUDE_JVMCI >>>>>>>>>>>> 3281?? // No parallel processing of JVMCI metadata handles for now. >>>>>>>>>>>> 3282?? JVMCI::do_unloading(is_alive, class_unloading_occurred); >>>>>>>>>>>> 3283 #endif >>>>>>>>>>>> 3284 } >>>>>>>>>>> >>>>>>>>>>> There should not be impact if Graal is not used. Only cost of call (which most likely is >>>>>>>>>>> inlined in product VM) and check: >>>>>>>>>>> >>>>>>>>>>> http://hg.openjdk.java.net/metropolis/dev/file/530fc1427d02/src/hotspot/share/jvmci/jvmciRuntime.cpp#l1237 >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> If Graal is used it should not have big impact since these metadata has regular pattern >>>>>>>>>>> (32 handles per array and array per MetadataHandleBlock block which are linked in list) >>>>>>>>>>> and not large. 
>>>>>>>>>>> If there will be noticeable impact - we will work on it as you suggested by using >>>>>>>>>>> ParallelCleaningTask. >>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> ======================================================================== >>>>>>>>>>>> Did you consider adding it as a task for one of the worker threads to execute in >>>>>>>>>>>> ParallelCleaningTask? >>>>>>>>>>>> >>>>>>>>>>>> See how other tasks are claimed by one worker: >>>>>>>>>>>> void KlassCleaningTask::work() { >>>>>>>>>>>> ?? ResourceMark rm; >>>>>>>>>>>> >>>>>>>>>>>> ?? // One worker will clean the subklass/sibling klass tree. >>>>>>>>>>>> ?? if (claim_clean_klass_tree_task()) { >>>>>>>>>>>> ???? Klass::clean_subklass_tree(); >>>>>>>>>>>> ?? } >>>>>>>>>>> >>>>>>>>>>> These changes were ported from JDK8u based changes in graal-jvmci-8 and there are no >>>>>>>>>>> ParallelCleaningTask in JDK8. >>>>>>>>>>> >>>>>>>>>>> Your suggestion is interesting and I agree that we should investigate it. >>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> ======================================================================== >>>>>>>>>>>> In MetadataHandleBlock::do_unloading: >>>>>>>>>>>> >>>>>>>>>>>> +??????? if (klass->class_loader_data()->is_unloading()) { >>>>>>>>>>>> +????????? // This needs to be marked so that it's no longer scanned >>>>>>>>>>>> +????????? // but can't be put on the free list yet. The >>>>>>>>>>>> +????????? // ReferenceCleaner will set this to NULL and >>>>>>>>>>>> +????????? // put it on the free list. >>>>>>>>>>>> >>>>>>>>>>>> I couldn't find the ReferenceCleaner in the patch or in the source. Where can I find >>>>>>>>>>>> this code? 
>>>>>>>>>>> >>>>>>>>>>> I think it is typo (I will fix it) - it references new HandleCleaner class: >>>>>>>>>>> >>>>>>>>>>> http://cr.openjdk.java.net/~kvn/8220623/webrev.03/src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/HandleCleaner.java.html >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> Thanks, >>>>>>>>>>> Vladimir >>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> Thanks, >>>>>>>>>>>> StefanK >>>>>>>>>>>> >>>>>>>>>>>> On 2019-03-28 20:15, Vladimir Kozlov wrote: >>>>>>>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8220623 >>>>>>>>>>>>> http://cr.openjdk.java.net/~kvn/8220623/webrev.03/ >>>>>>>>>>>>> >>>>>>>>>>>>> Update JVMCI to support pre-compiled as shared library Graal. >>>>>>>>>>>>> Using aoted Graal can offers benefits including: >>>>>>>>>>>>> ?- fast startup >>>>>>>>>>>>> ?- compile time similar to native JIt compilers (C2) >>>>>>>>>>>>> ?- memory usage disjoint from the application Java heap >>>>>>>>>>>>> ?- no profile pollution of JDK code used by the application >>>>>>>>>>>>> >>>>>>>>>>>>> This is JDK13 port of JVMCI changes done in graal-jvmci-8 [1] up to date. >>>>>>>>>>>>> Changes were collected in Metropolis repo [2] and tested there. >>>>>>>>>>>>> >>>>>>>>>>>>> Changes we reviewed by Oracle Labs (authors of JVMCI and Graal) and our compiler group. >>>>>>>>>>>>> Changes in shared code are guarded by #if INCLUDE_JVMCI and JVMCI flags. >>>>>>>>>>>>> >>>>>>>>>>>>> I ran tier1-tier8 (which includes HotSpot and JDK tests) and it was clean. In this set >>>>>>>>>>>>> Graal was tested only in tier3. >>>>>>>>>>>>> >>>>>>>>>>>>> And I ran all hs-tier3-graal .. hs-tier8-graal Graal tests available in our system. >>>>>>>>>>>>> Several issue were found which were present before these changes. 
>>>>>>>>>>>>> >>>>>>>>>>>>> Thanks, >>>>>>>>>>>>> Vladimir >>>>>>>>>>>>> >>>>>>>>>>>>> [1] >>>>>>>>>>>>> https://github.com/graalvm/graal-jvmci-8/commit/49ff2045fb603e35516a3a427d8023c00e1607af >>>>>>>>>>>>> [2] http://hg.openjdk.java.net/metropolis/dev/ >>>>>>>>>>>> >> > From shade at redhat.com Thu Apr 11 09:40:13 2019 From: shade at redhat.com (Aleksey Shipilev) Date: Thu, 11 Apr 2019 11:40:13 +0200 Subject: Epsilon + Graal Message-ID: <15ec222f-22a0-49b3-4c7d-29d477dd3c19@redhat.com> Hi, I am tinkering with Epsilon support for Graal (targeting AOT binaries with no GC). This patch applies to jdk/jdk: http://cr.openjdk.java.net/~shade/epsilon/graal-support.patch ...and passes this test suite: $ CONF=linux-x86_64-server-fastdebug make images run-test TEST=compiler/aot TEST_VM_OPTS="-XX:+UnlockExperimentalVMOptions -XX:+UseEpsilonGC" ...and does this: $ build/linux-x86_64-server-release/images/jdk/bin/jaotc -J-XX:+UseEpsilonGC --info HelloWorld.class --output hello-epsilon.so $ build/linux-x86_64-server-release/images/jdk/bin/java -XX:+UnlockExperimentalVMOptions -XX:+UseEpsilonGC -Xlog:gc -XX:+UseAOT -XX:AOTLibrary=./hello-epsilon.so HelloWorld HelloWorld [0.004s][info][gc] Resizeable heap; starting at 2009M, max: 30718M, step: 128M [0.004s][info][gc] Using TLAB allocation; max: 4096K [0.004s][info][gc] Elastic TLABs enabled; elasticity: 1.10x [0.004s][info][gc] Elastic TLABs decay enabled; decay time: 1000ms [0.004s][info][gc] Using Epsilon [0.038s][info][gc] Heap: 30718M reserved, 2009M (6.54%) committed, 698K (0.00%) used real 0m0.048s user 0m0.064s sys 0m0.014s I am confused what to do next. Some process questions: a) Where do I propose the patch? As GitHub PR to oracle/graal, is that right? b) The change requires adjustments in JVMCI, how is that handled? I assume JVMCI and Graal changes are done independently? 
In that case, there is a bit of circularity here: I cannot put JVMCI change in without breaking runs with Epsilon for a while, and cannot put Epsilon changes in before JVMCI is updated? c) Pretty sure current patch fails some write barrier verification, because verification assumes either G1 or CardTable-based BarrierSet. Do we expect to clean up verification before the Epsilon patch, or can it be done within the patch? d) How do we run Graal tests (especially given the need for JVMCI adjustments)? Are they run automatically on PR proposal? -- Thanks, -Aleksey From rkennke at redhat.com Thu Apr 11 09:46:20 2019 From: rkennke at redhat.com (Roman Kennke) Date: Thu, 11 Apr 2019 11:46:20 +0200 Subject: Epsilon + Graal In-Reply-To: <15ec222f-22a0-49b3-4c7d-29d477dd3c19@redhat.com> References: <15ec222f-22a0-49b3-4c7d-29d477dd3c19@redhat.com> Message-ID: > I am tinkering with Epsilon support for Graal (targeting AOT binaries with no GC). This patch > applies to jdk/jdk: > http://cr.openjdk.java.net/~shade/epsilon/graal-support.patch > > ...and passes this test suite: > > $ CONF=linux-x86_64-server-fastdebug make images run-test TEST=compiler/aot > TEST_VM_OPTS="-XX:+UnlockExperimentalVMOptions -XX:+UseEpsilonGC" > > ...and does this: > > $ build/linux-x86_64-server-release/images/jdk/bin/jaotc -J-XX:+UseEpsilonGC --info HelloWorld.class > --output hello-epsilon.so > $ build/linux-x86_64-server-release/images/jdk/bin/java -XX:+UnlockExperimentalVMOptions > -XX:+UseEpsilonGC -Xlog:gc -XX:+UseAOT -XX:AOTLibrary=./hello-epsilon.so HelloWorld > HelloWorld > [0.004s][info][gc] Resizeable heap; starting at 2009M, max: 30718M, step: 128M > [0.004s][info][gc] Using TLAB allocation; max: 4096K > [0.004s][info][gc] Elastic TLABs enabled; elasticity: 1.10x > [0.004s][info][gc] Elastic TLABs decay enabled; decay time: 1000ms > [0.004s][info][gc] Using Epsilon > [0.038s][info][gc] Heap: 30718M reserved, 2009M (6.54%) committed, 698K (0.00%) used > > real 0m0.048s > user 0m0.064s > 
sys 0m0.014s Cool! > I am confused what to do next. Some process questions: > > a) Where do I propose the patch? As GitHub PR to oracle/graal, is that right? Well, the HotSpot part goes to one of the HotSpot mailing lists (probably this one here?). For the Graal part, yes, as a PR vs oracle/graal. > b) The change requires adjustments in JVMCI, how is that handled? I assume JVMCI and Graal changes > are done independently? In that case, there is a bit of circularity here: I cannot put JVMCI change > in without breaking runs with Epsilon for a while, and cannot put Epsilon changes in before JVMCI is > updated? Dunno about that. > c) Pretty sure current patch fails some write barrier verification, because verification assumes > either G1 or CardTable-based BarrierSet. Do we expect to clean up verification before the Epsilon > patch, or can it be done within the patch? I can have a look at that because I'm currently knee-deep in that code anyway. > d) How do we run Graal tests (especially given the need for JVMCI adjustments)? Are they run > automatically on PR proposal? Yes, when you file a PR, *some* CI stuff is run automatically. Other than that, you should at least run 'mx unittest' from your graal/compiler project locally. Roman From doug.simon at oracle.com Thu Apr 11 11:20:00 2019 From: doug.simon at oracle.com (Doug Simon) Date: Thu, 11 Apr 2019 13:20:00 +0200 Subject: Epsilon + Graal In-Reply-To: <15ec222f-22a0-49b3-4c7d-29d477dd3c19@redhat.com> References: <15ec222f-22a0-49b3-4c7d-29d477dd3c19@redhat.com> Message-ID: <8CC4C60B-9F75-4D42-8FE2-992968A95AF6@oracle.com> Hi Aleksey, It would be great to see support for Epsilon in Graal. More inline below: > On 11 Apr 2019, at 11:40, Aleksey Shipilev wrote: > > Hi, > > I am tinkering with Epsilon support for Graal (targeting AOT binaries with no GC).
This patch > applies to jdk/jdk: > http://cr.openjdk.java.net/~shade/epsilon/graal-support.patch > > ...and passes this test suite: > > $ CONF=linux-x86_64-server-fastdebug make images run-test TEST=compiler/aot > TEST_VM_OPTS="-XX:+UnlockExperimentalVMOptions -XX:+UseEpsilonGC" > > ...and does this: > > $ build/linux-x86_64-server-release/images/jdk/bin/jaotc -J-XX:+UseEpsilonGC --info HelloWorld.class > --output hello-epsilon.so > $ build/linux-x86_64-server-release/images/jdk/bin/java -XX:+UnlockExperimentalVMOptions > -XX:+UseEpsilonGC -Xlog:gc -XX:+UseAOT -XX:AOTLibrary=./hello-epsilon.so HelloWorld > HelloWorld > [0.004s][info][gc] Resizeable heap; starting at 2009M, max: 30718M, step: 128M > [0.004s][info][gc] Using TLAB allocation; max: 4096K > [0.004s][info][gc] Elastic TLABs enabled; elasticity: 1.10x > [0.004s][info][gc] Elastic TLABs decay enabled; decay time: 1000ms > [0.004s][info][gc] Using Epsilon > [0.038s][info][gc] Heap: 30718M reserved, 2009M (6.54%) committed, 698K (0.00%) used > > real 0m0.048s > user 0m0.064s > sys 0m0.014s > > > I am confused what to do next. Some process questions: > > a) Where do I propose the patch? As GitHub PR to oracle/graal, is that right? Yes, that's where the Graal changes should go. > b) The change requires adjustments in JVMCI, how is that handled? I assume JVMCI and Graal changes > are done independently? In that case, there is a bit of circularity here: I cannot put JVMCI change > in without breaking runs with Epsilon for a while, and cannot put Epsilon changes in before JVMCI is > updated? I think you can make the Graal changes independently of the JVMCI changes with this in GraalHotSpotVMConfig: public final boolean useEpsilonGC = getFlag("UseEpsilonGC", Boolean.class, false); That means the JVMCI patch can be submitted separately. > c) Pretty sure current patch fails some write barrier verification, because verification assumes > either G1 or CardTable-based BarrierSet.
Do we expect to clean up verification before the Epsilon > patch, or can it be done within the patch? Roman has volunteered to look into this so hopefully you can co-ordinate with him. I don't see any problem with doing it all in one PR. > d) How do we run Graal tests (especially given the need for JVMCI adjustments)? Are they run > automatically on PR proposal? There are a few tests run in the Travis gate on a GitHub PR but I doubt these would be enough for what you want. We perform a bunch more testing when integrating a Graal PR internally. Any issues discovered there will be posted to the GitHub PR, hopefully with commands to reproduce. One process option is to submit a normal JDK webrev with both JVMCI and Graal changes at the same time as submitting a Graal GitHub PR. This allows you to do whatever testing you want in the normal OpenJDK workflow. During the periodic Graal syncs to OpenJDK (which are thankfully becoming more frequent thanks to Jesper Wilhelmsson), the Graal changes in OpenJDK will simply be overwritten. Hope that helps! -Doug From shade at redhat.com Thu Apr 11 18:06:21 2019 From: shade at redhat.com (Aleksey Shipilev) Date: Thu, 11 Apr 2019 20:06:21 +0200 Subject: Epsilon + Graal In-Reply-To: <8CC4C60B-9F75-4D42-8FE2-992968A95AF6@oracle.com> References: <15ec222f-22a0-49b3-4c7d-29d477dd3c19@redhat.com> <8CC4C60B-9F75-4D42-8FE2-992968A95AF6@oracle.com> Message-ID: <885935f1-62a1-4acc-5817-24c956d4647b@redhat.com> Hi Doug, Some more questions, if you will: On 4/11/19 1:20 PM, Doug Simon wrote: >> b) The change requires adjustments in JVMCI, how is that handled? I assume JVMCI and Graal changes >> are done independently? In that case, there is a bit of circularity here: I cannot put JVMCI change >> in without breaking runs with Epsilon for a while, and cannot put Epsilon changes in before JVMCI is >> updated? > > I think you can make the Graal changes independently of the JVMCI changes with this in > GraalHotSpotVMConfig: > >
public final boolean useEpsilonGC = getFlag("UseEpsilonGC", Boolean.class, false); > > That means the JVMCI patch can be submitted separately. Yes, but that would mean I cannot run Graal tests with Epsilon enabled, or? > One process option is to submit a normal JDK webrev with both JVMCI and Graal changes at the same > time as submitting a Graal GitHub PR. This allows you to do whatever testing you want in the normal > OpenJDK workflow. During the periodic Graal syncs to OpenJDK (which are thankfully becoming more > frequent thanks to Jesper Wilhelmsson), the Graal changes in OpenJDK will simply be overwritten. Oh, that's nice. So, can I develop the change in jdk/jdk, and then PR the Graal subset of it to oracle/graal github? That would definitely work better for my workflow. Is there a way to run Graal unit tests from jdk/jdk? Thanks, -Aleksey From doug.simon at oracle.com Thu Apr 11 18:33:32 2019 From: doug.simon at oracle.com (Doug Simon) Date: Thu, 11 Apr 2019 20:33:32 +0200 Subject: Epsilon + Graal In-Reply-To: <885935f1-62a1-4acc-5817-24c956d4647b@redhat.com> References: <15ec222f-22a0-49b3-4c7d-29d477dd3c19@redhat.com> <8CC4C60B-9F75-4D42-8FE2-992968A95AF6@oracle.com> <885935f1-62a1-4acc-5817-24c956d4647b@redhat.com> Message-ID: <11CE0B87-6627-482F-A996-1E9305F46BCB@oracle.com> > On 11 Apr 2019, at 20:06, Aleksey Shipilev wrote: > > Hi Doug, > > Some more questions, if you will: > > On 4/11/19 1:20 PM, Doug Simon wrote: >>> b) The change requires adjustments in JVMCI, how is that handled? I assume JVMCI and Graal changes >>> are done independently? In that case, there is a bit of circularity here: I cannot put JVMCI change >>> in without breaking runs with Epsilon for a while, and cannot put Epsilon changes in before JVMCI is >>> updated?
>> >> I think you can make the Graal changes independently of the JVMCI changes with this in >> GraalHotSpotVMConfig: >> >> public final boolean useEpsilonGC = getFlag("UseEpsilonGC", Boolean.class, false); >> >> That means the JVMCI patch can be submitted separately. > > Yes, but that would mean I cannot run Graal tests with Epsilon enabled, or? Correct. >> One process option is to submit a normal JDK webrev with both JVMCI and Graal changes at the same >> time as submitting a Graal GitHub PR. This allows you to do whatever testing you want in the normal >> OpenJDK workflow. During the periodic Graal syncs to OpenJDK (which are thankfully becoming more >> frequent thanks to Jesper Wilhelmsson), the Graal changes in OpenJDK will simply be overwritten. > > Oh, that's nice. So, can I develop the change in jdk/jdk, and then PR the Graal subset of it to > oracle/graal github? That would definitely work better for my workflow. Is there a way to run Graal > unit tests from jdk/jdk? Yes, although I've never mastered it. There is test/hotspot/jtreg/compiler/graalunit/README.md. I'm not sure how complete or up to date it is. I've cc'ed Katya who may be able to help with any missing info.
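As an illustration of the `getFlag` pattern quoted above, here is a standalone sketch. `FlagSketch` and its map-backed flag table are hypothetical stand-ins for `GraalHotSpotVMConfig` and the HotSpot flag registry, not real Graal API; the point is only to show how the supplied default keeps the lookup working against a VM that does not define the flag yet:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: FlagSketch stands in for GraalHotSpotVMConfig and
// the map stands in for the HotSpot flag table. Not real Graal API.
public class FlagSketch {
    private final Map<String, Object> vmFlags;

    public FlagSketch(Map<String, Object> vmFlags) {
        this.vmFlags = vmFlags;
    }

    // Mirrors the shape of getFlag(name, type, default): if the running VM
    // does not define the flag, fall back to the supplied default instead
    // of failing, so the same Graal code works against older JVMCI/VMs.
    public <T> T getFlag(String name, Class<T> type, T defaultValue) {
        Object value = vmFlags.get(name);
        return value == null ? defaultValue : type.cast(value);
    }

    public static void main(String[] args) {
        Map<String, Object> oldVm = new HashMap<>();     // flag not defined yet
        Map<String, Object> newVm = new HashMap<>();
        newVm.put("UseEpsilonGC", true);                 // flag defined

        System.out.println(new FlagSketch(oldVm).getFlag("UseEpsilonGC", Boolean.class, false));
        System.out.println(new FlagSketch(newVm).getFlag("UseEpsilonGC", Boolean.class, false));
    }
}
```

Run against a VM that lacks UseEpsilonGC, the lookup quietly yields the default false, which is what lets the Graal-side change land before the JVMCI change.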
-Doug From vladimir.kozlov at oracle.com Thu Apr 11 19:17:02 2019 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Thu, 11 Apr 2019 12:17:02 -0700 Subject: Epsilon + Graal In-Reply-To: <11CE0B87-6627-482F-A996-1E9305F46BCB@oracle.com> References: <15ec222f-22a0-49b3-4c7d-29d477dd3c19@redhat.com> <8CC4C60B-9F75-4D42-8FE2-992968A95AF6@oracle.com> <885935f1-62a1-4acc-5817-24c956d4647b@redhat.com> <11CE0B87-6627-482F-A996-1E9305F46BCB@oracle.com> Message-ID: <0f82ac60-f7de-03ea-f8f2-c85585b7a3ed@oracle.com> For testing graalunit in JDK I do:

MYDIR=$PWD
make images CONF=fastdebug
make test-image-hotspot-jtreg-graal CONF=fastdebug
cd open/test/hotspot/jtreg

Run jtreg with -Dgraalunit.libs=$MYDIR/build/fastdebug/images/test/hotspot/jtreg/graal/ compiler/graalunit

There is also a 'make test' command to run in the top directory but I forgot the flags for it. Vladimir On 4/11/19 11:33 AM, Doug Simon wrote: > > >> On 11 Apr 2019, at 20:06, Aleksey Shipilev wrote: >> >> Hi Doug, >> >> Some more questions, if you will: >> >> On 4/11/19 1:20 PM, Doug Simon wrote: >>>> b) The change requires adjustments in JVMCI, how is that handled? I assume JVMCI and Graal changes >>>> are done independently? In that case, there is a bit of circularity here: I cannot put JVMCI change >>>> in without breaking runs with Epsilon for a while, and cannot put Epsilon changes in before JVMCI is >>>> updated? >>> >>> I think you can make the Graal changes independently of the JVMCI changes with this in >>> GraalHotSpotVMConfig: >>> >>> public final boolean useEpsilonGC = getFlag("UseEpsilonGC", Boolean.class, false); >>> >>> That means the JVMCI patch can be submitted separately. >> >> Yes, but that would mean I cannot run Graal tests with Epsilon enabled, or? > > Correct. > >>> One process option is to submit a normal JDK webrev with both JVMCI and Graal changes at the same >>> time as submitting a Graal GitHub PR. This allows you to do whatever testing you want in the normal >>> OpenJDK workflow.
During the periodic Graal syncs to OpenJDK (which are thankfully becoming more >>> frequent thanks to Jesper Wilhelmsson) , the Graal changes in OpenJDK will simply be overwritten. >> >> Oh, that's nice. So, can I develop the change in jdk/jdk, and then PR the Graal subset of it to >> oracle/graal github? That would definitely work better for my workflow. Is there a way to run Graal >> unit tests from jdk/jdk? > > Yes, although I?ve never mastered it. There is test/hotspot/jtreg/compiler/graalunit/README.md. I?m not sure complete or up to date it is. I?ve cc?ed Katya who may be able to help with any missing info. > > -Doug > From jean-philippe.halimi at intel.com Fri Apr 12 23:00:46 2019 From: jean-philippe.halimi at intel.com (Halimi, Jean-Philippe) Date: Fri, 12 Apr 2019 23:00:46 +0000 Subject: x86 FMA intrinsic support design In-Reply-To: <7c0a8607-4585-619d-2d4f-9cbb9d7caf62@oracle.com> References: <7c0a8607-4585-619d-2d4f-9cbb9d7caf62@oracle.com> Message-ID: Hi all, Thanks a lot for your feedback. It has been two weeks, and I have made some progress, however it looks like the design shared earlier is incomplete. From what I can see, there are a few classes missing in Graal to allow the implementation. 1. FusedMultiplyAddNode needs to extend a new TernaryNode. 2. AMD64ArithmeticLIRGenerator::emitFusedMultiplyAdd needs to be added, and I believe it needs a new AMD64Ternary class for code generation, to call the VexRVMOp. --> Here, I am not sure of whether AMD64Ternary is necessary, but I believe it is, since we are reading three values and writing back to the first one. Do you believe this is the appropriate approach? Thanks -Jp -----Original Message----- From: graal-dev [mailto:graal-dev-bounces at openjdk.java.net] On Behalf Of Gilles Duboscq Sent: Friday, March 29, 2019 2:34 AM To: graal-dev at openjdk.java.net Subject: Re: x86 FMA intrinsic support design Hi Jean-Philippe, That sounds like a good plan! 
In terms of naming, I would call such a node `FusedMultiplyAddNode`: spelling out what it does is much more important than the fact that it comes from an intrinsic. Thanks, Gilles On 29/03/2019 01:04, Halimi, Jean-Philippe wrote: > Hello, > > I am currently looking into adding support for FMA intrinsics in Graal. I would like to share what I plan to do to make sure it is how it should be implemented. > > > 1. Add VexRVMOp class support in AMD64Assembler with the corresponding FMA instructions > > a. It requires to add the VexOpAssertion.FMA and CPUFeature.FMA flags > > 2. Add UseFMA flag from HotSpot flags in GraalHotSpotVMConfig.java > > 3. Add a registerFMA method in AMD64GraphBuilderPlugins::registerMathPlugins > > a. This requires to add a specific FMAIntrinsicNode, which will emit the corresponding FMA instructions. > > Is there anything else that is needed in this case? > > Thanks for your insights, > Jp > From shade at redhat.com Mon Apr 15 09:59:34 2019 From: shade at redhat.com (Aleksey Shipilev) Date: Mon, 15 Apr 2019 11:59:34 +0200 Subject: Epsilon + Graal In-Reply-To: <11CE0B87-6627-482F-A996-1E9305F46BCB@oracle.com> References: <15ec222f-22a0-49b3-4c7d-29d477dd3c19@redhat.com> <8CC4C60B-9F75-4D42-8FE2-992968A95AF6@oracle.com> <885935f1-62a1-4acc-5817-24c956d4647b@redhat.com> <11CE0B87-6627-482F-A996-1E9305F46BCB@oracle.com> Message-ID: On 4/11/19 8:33 PM, Doug Simon wrote: >> Oh, that's nice. So, can I develop the change in jdk/jdk, and then PR the Graal subset of it >> to oracle/graal github? That would definitely work better for my workflow. Is there a way to >> run Graal unit tests from jdk/jdk? > > Yes, although I've never mastered it. There is test/hotspot/jtreg/compiler/graalunit/README.md. > I'm not sure how complete or up to date it is. I've cc'ed Katya who may be able to help with any > missing info. Seems to work like this: $ mkdir graal-test-libs $ cd graal-test-libs $ wget (JARs mentioned in README.md) $ cd .. $ sh ./configure ...
--with-graalunit-lib=graal-test-libs/ $ make run-test TEST=compiler/graalunit TEST_VM_OPTS="-XX:+UnlockExperimentalVMOptions -XX:+EnableJVMCI -XX:+UseJVMCICompiler -Djvmci.Compiler=graal" With one little wrinkle: https://bugs.openjdk.java.net/browse/JDK-8222482 -Aleksey From shade at redhat.com Mon Apr 15 10:16:08 2019 From: shade at redhat.com (Aleksey Shipilev) Date: Mon, 15 Apr 2019 12:16:08 +0200 Subject: Epsilon + Graal In-Reply-To: References: <15ec222f-22a0-49b3-4c7d-29d477dd3c19@redhat.com> <8CC4C60B-9F75-4D42-8FE2-992968A95AF6@oracle.com> <885935f1-62a1-4acc-5817-24c956d4647b@redhat.com> <11CE0B87-6627-482F-A996-1E9305F46BCB@oracle.com> Message-ID: <1e64cf85-84e8-edab-8fe7-a7a4a51b8cd9@redhat.com> On 4/15/19 11:59 AM, Aleksey Shipilev wrote: > On 4/11/19 8:33 PM, Doug Simon wrote: >>> Oh, that's nice. So, can I develop the change in jdk/jdk, and then PR the Graal subset of it >>> to oracle/graal github? That would definitely work better for my workflow. Is there a way to >>> run Graal unit tests from jdk/jdk? >> >> Yes, although I've never mastered it. There is test/hotspot/jtreg/compiler/graalunit/README.md. >> I'm not sure how complete or up to date it is. I've cc'ed Katya who may be able to help with any >> missing info. > Seems to work like this: > > $ mkdir graal-test-libs > $ cd graal-test-libs > $ wget (JARs mentioned in README.md) > $ cd .. > > $ sh ./configure ...
--with-graalunit-lib=graal-test-libs/ > $ make run-test TEST=compiler/graalunit TEST_VM_OPTS="-XX:+UnlockExperimentalVMOptions > -XX:+EnableJVMCI -XX:+UseJVMCICompiler -Djvmci.Compiler=graal" > > With one little wrinkle: > https://bugs.openjdk.java.net/browse/JDK-8222482 ...well, maybe with another one: https://bugs.openjdk.java.net/browse/JDK-8222483 -Aleksey From shade at redhat.com Mon Apr 15 14:40:36 2019 From: shade at redhat.com (Aleksey Shipilev) Date: Mon, 15 Apr 2019 16:40:36 +0200 Subject: Epsilon + Graal In-Reply-To: <1e64cf85-84e8-edab-8fe7-a7a4a51b8cd9@redhat.com> References: <15ec222f-22a0-49b3-4c7d-29d477dd3c19@redhat.com> <8CC4C60B-9F75-4D42-8FE2-992968A95AF6@oracle.com> <885935f1-62a1-4acc-5817-24c956d4647b@redhat.com> <11CE0B87-6627-482F-A996-1E9305F46BCB@oracle.com> <1e64cf85-84e8-edab-8fe7-a7a4a51b8cd9@redhat.com> Message-ID: <9eef414e-f583-8b9e-6734-39506a5a425c@redhat.com> On 4/15/19 12:16 PM, Aleksey Shipilev wrote: >> With one little wrinkle: >> https://bugs.openjdk.java.net/browse/JDK-8222482 > > ...well, maybe with another one: > https://bugs.openjdk.java.net/browse/JDK-8222483 Ignoring these two issues, the following patch passes Graal unit tests with: $ CONF=linux-x86_64-server-fastdebug make run-test TEST=compiler/graalunit/ TEST_VM_OPTS="-XX:+UnlockExperimentalVMOptions -XX:+EnableJVMCI -XX:+UseJVMCICompiler -Djvmci.Compiler=graal -XX:+UseEpsilonGC -Xmx50g" TEST_JOBS=1 Patch: http://cr.openjdk.java.net/~shade/epsilon/graal-initial/webrev.01/ I assume I can RFR it for jdk/jdk, and simultaneously PR src/jdk.internal.vm.compiler parts to oracle/graal GitHub? 
-Aleksey From doug.simon at oracle.com Mon Apr 15 14:51:45 2019 From: doug.simon at oracle.com (Doug Simon) Date: Mon, 15 Apr 2019 16:51:45 +0200 Subject: Epsilon + Graal In-Reply-To: <9eef414e-f583-8b9e-6734-39506a5a425c@redhat.com> References: <15ec222f-22a0-49b3-4c7d-29d477dd3c19@redhat.com> <8CC4C60B-9F75-4D42-8FE2-992968A95AF6@oracle.com> <885935f1-62a1-4acc-5817-24c956d4647b@redhat.com> <11CE0B87-6627-482F-A996-1E9305F46BCB@oracle.com> <1e64cf85-84e8-edab-8fe7-a7a4a51b8cd9@redhat.com> <9eef414e-f583-8b9e-6734-39506a5a425c@redhat.com> Message-ID: <09836F0E-C0A6-4C79-8ED9-8FAED0968DBC@oracle.com> > On 15 Apr 2019, at 16:40, Aleksey Shipilev wrote: > > On 4/15/19 12:16 PM, Aleksey Shipilev wrote: >>> With one little wrinkle: >>> https://bugs.openjdk.java.net/browse/JDK-8222482 >> >> ...well, maybe with another one: >> https://bugs.openjdk.java.net/browse/JDK-8222483 > > Ignoring these two issues, the following patch passes Graal unit tests with: > > $ CONF=linux-x86_64-server-fastdebug make run-test TEST=compiler/graalunit/ > TEST_VM_OPTS="-XX:+UnlockExperimentalVMOptions -XX:+EnableJVMCI -XX:+UseJVMCICompiler > -Djvmci.Compiler=graal -XX:+UseEpsilonGC -Xmx50g" TEST_JOBS=1 > > Patch: > http://cr.openjdk.java.net/~shade/epsilon/graal-initial/webrev.01/ > > I assume I can RFR it for jdk/jdk, and simultaneously PR src/jdk.internal.vm.compiler parts to > oracle/graal GitHub? Yes, please go ahead. -Doug From coppa at di.uniroma1.it Mon Apr 15 17:21:59 2019 From: coppa at di.uniroma1.it (Emilio Coppa) Date: Mon, 15 Apr 2019 19:21:59 +0200 Subject: MPLR 2019 - Call for Papers Message-ID: Apologies if you receive multiple copies of this CFP. 
The 16th International Conference on Managed Programming Languages & Runtimes (MPLR, formerly ManLang) is a premier forum for presenting and discussing novel results in all aspects of managed programming languages and runtime systems, which serve as building blocks for some of the most important computing systems around, ranging from small-scale (embedded and real-time systems) to large-scale (cloud-computing and big-data platforms) and anything in between (mobile, IoT, and wearable applications). This year, MPLR is co-located with SPLASH 2019 and sponsored by ACM. For more information, check out the conference website: https://conf.researchr.org/home/mplr-2019 # Topics Topics of interest include but are not limited to: * Languages and Compilers - Managed languages (e.g., Java, Scala, JavaScript, Python, Ruby, C#, F#, Clojure, Groovy, Kotlin, R, Smalltalk, Racket, Rust, Go, etc.) - Domain-specific languages - Language design - Compilers and interpreters - Type systems and program logics - Language interoperability - Parallelism, distribution, and concurrency * Virtual Machines - Managed runtime systems (e.g., JVM, Dalvik VM, Android Runtime (ART), LLVM, .NET CLR, RPython, etc.) - VM design and optimization - VMs for mobile and embedded devices - VMs for real-time applications - Memory management - Hardware/software co-design * Techniques, Tools, and Applications - Static and dynamic program analysis - Testing and debugging - Refactoring - Program understanding - Program synthesis - Security and privacy - Performance analysis and monitoring - Compiler and program verification # Submission Categories MPLR accepts four types of submissions: 1. Regular research papers, which describe novel contributions involving managed language platforms (up to 12 pages excluding bibliography and appendix). Research papers will be evaluated based on their relevance, novelty, technical rigor, and contribution to the state-of-the-art. 2. 
Work-in-progress research papers, which describe promising new ideas but yet have less maturity than full papers (up to 6 pages excluding bibliography and appendix). When evaluating work-in-progress papers, more emphasis will be placed on novelty and the potential of the new ideas than on technical rigor and experimental results. 3. Industry and tool papers, which present technical challenges and solutions for managed language platforms in the context of deployed applications and systems (up to 6 pages excluding bibliography and appendix). Industry and tool papers will be evaluated on their relevance, usefulness, and results. Suitability for demonstration and availability will also be considered for tool papers. 4. Posters, which can be accompanied by a one-page abstract and will be evaluated on similar criteria as Work-in-progress papers. Posters can accompany any submission as a way to provide additional demonstration and discussion opportunities. MPLR 2019 submissions must conform to the ACM Policy on Prior Publication and Simultaneous Submissions and to the SIGPLAN Republication Policy. 
# Important Dates and Organization Submission Deadline: ***Jul 8, 2019*** Author Notification: Aug 24, 2019 Camera Ready: Sep 12, 2019 Conference Dates: Oct 20-25, 2019 General Chair: Tony Hosking, Australian National University / Data61, Australia Program Chair: Irene Finocchi, Sapienza University of Rome, Italy Program Committee: * Edd Barrett, King's College London, United Kingdom * Steve Blackburn, Australian National University, Australia * Lubomír Bulej, Charles University, Czech Republic * Shigeru Chiba, University of Tokyo, Japan * Daniele Cono D'Elia, Sapienza University of Rome, Italy * Ana Lúcia de Moura, Pontifical Catholic University of Rio de Janeiro, Brazil * Erik Ernst, Google, Denmark * Matthew Hertz, University at Buffalo, United States * Vivek Kumar, Indraprastha Institute of Information Technology, Delhi * Doug Lea, State University of New York (SUNY) Oswego, United States * Magnus Madsen, Aarhus University, Denmark * Hidehiko Masuhara, Tokyo Institute of Technology, Japan * Ana Milanova, Rensselaer Polytechnic Institute, United States * Matthew Parkinson, Microsoft Research, United Kingdom * Gregor Richards, University of Waterloo, Canada * Manuel Rigger, ETH Zurich, Switzerland * Andrea Rosà, University of Lugano, Switzerland * Guido Salvaneschi, TU Darmstadt, Germany * Lukas Stadler, Oracle Labs, Austria * Ben L. Titzer, Google, Germany From jesper.wilhelmsson at oracle.com Wed Apr 17 22:13:08 2019 From: jesper.wilhelmsson at oracle.com (jesper.wilhelmsson at oracle.com) Date: Thu, 18 Apr 2019 00:13:08 +0200 Subject: RFR: JDK-8221598 - Update Graal Message-ID: Hi, Please review the patch to integrate recent Graal changes into OpenJDK.
Graal tip to integrate: 20f370437efb6b2a3f455a238da6141dc101d38c Bug: https://bugs.openjdk.java.net/browse/JDK-8221598 Webrev: http://cr.openjdk.java.net/~jwilhelm/8221598/webrev.00/ Thanks, /Jesper From bmcwhirt at redhat.com Thu Apr 18 17:29:03 2019 From: bmcwhirt at redhat.com (Bob McWhirter) Date: Thu, 18 Apr 2019 13:29:03 -0400 Subject: Graal and JDK11 Message-ID: Through a series of hacks, I've been able to create a `native-image` binary based on JDK, and then use it to produce a simple binary native-image from a Hello World application. Unlike JDK8-based, I have to pass a significant amount of `-cp` and `--module-path` arguments to the `native-image` CLI. ./latest_graalvm_home/lib/svm/bin/native-image \ -cp ~/iron/test:/Users/bob/repos/graal/sdk/mxbuild/dists/jdk11/graal-sdk.jar:/Users/bob/repos/graal/substratevm/mxbuild/dists/jdk1.8/objectfile.jar:/Users/bob/repos/graal/truffle/mxbuild/dists/jdk11/truffle-api.jar:/Users/bob/repos/graal/compiler/mxbuild/dists/jdk11/graal.jar:/Users/bob/repos/graal/substratevm/mxbuild/dists/jdk1.8/pointsto.jar:/Users/bob/.mx/cache/HAMCREST_42a25dc3219429f0e5d060061f71acb49bf010a0/hamcrest.jar:/Users/bob/.mx/cache/JUNIT_2973d150c0dc1fefe998f834810d68f278ea58ec/junit.jar:/Users/bob/repos/protean/mx/mxbuild/dists/jdk1.8/junit-tool.jar:/Users/bob/repos/graal/truffle/mxbuild/dists/jdk11/truffle-nfi.jar:/Users/bob/repos/graal/substratevm/mxbuild/dists/jdk9/svm.jar:/Users/bob/.mx/cache/JLINE_c3aeac59c022bdc497c8c48ed86fa50450e4896a/jline.jar:/Users/bob/repos/graal/substratevm/mxbuild/dists/jdk1.8/library-support.jar:/Users/bob/repos/graal/substratevm/mxbuild/dists/jdk1.8/svm-driver.jar:/Users/bob/repos/graal/substratevm/mxbuild/dists/jdk1.8/svm-agent.jar \ Foo \ 
-J--module-path=/Users/bob/repos/graal/vm11/mxbuild/darwin-amd64/GRAALVM_CMP_GU_GVM_NFI_POLYNATIVE_RGX_STAGE1_SVM_SVMAG_SVMCF_SVML_TFL/graalvm-unknown-1.0.0-rc15-dev/Contents/Home/lib/svm/bin/../../boot/graal-sdk.jar:/Users/bob/repos/graal/vm11/mxbuild/darwin-amd64/GRAALVM_CMP_GU_GVM_NFI_POLYNATIVE_RGX_STAGE1_SVM_SVMAG_SVMCF_SVML_TFL/graalvm-unknown-1.0.0-rc15-dev/Contents/Home/lib/svm/bin/../../truffle/truffle-api.jar\ -J--upgrade-module-path=/Users/bob/repos/graal/vm11/mxbuild/darwin-amd64/GRAALVM_CMP_GU_GVM_NFI_POLYNATIVE_RGX_STAGE1_SVM_SVMAG_SVMCF_SVML_TFL/graalvm-unknown-1.0.0-rc15-dev/Contents/Home/lib/svm/bin/../../jvmci/graal.jar:/Users/bob/repos/graal/vm11/mxbuild/darwin-amd64/GRAALVM_CMP_GU_GVM_NFI_POLYNATIVE_RGX_STAGE1_SVM_SVMAG_SVMCF_SVML_TFL/graalvm-unknown-1.0.0-rc15-dev/Contents/Home/lib/svm/bin/../../jvmci/graal-management.jar \ -J--add-opens=org.graalvm.truffle/com.oracle.truffle.polyglot=ALL-UNNAMED \ -J--add-opens=org.graalvm.truffle/com.oracle.truffle.api.impl=ALL-UNNAMED \ -J--add-opens=jdk.internal.vm.compiler/org.graalvm.compiler.debug=ALL-UNNAMED \ -J--add-opens=org.graalvm.sdk/org.graalvm.polyglot=ALL-UNNAMED \ -H:Name=foo \ --no-server \ -H:+ReportExceptionStackTraces The resulting binary works as you'd expect for a simplistic app: $ ./foo Hello world from Java 11.0.1 $ du -sh ./foo 13M ./foo $ file ./foo ./foo: Mach-O 64-bit executable x86_64 Thus far I've mostly been faffing about to figure out what's needed. Does anyone have any insight on how to bake this stuff into the basic execution of native-image, preferably storing all the module-path and such inside the native-image binary itself, instead of having to reference outboard modules/jars/etc? Apologies if there's a better place/way to discuss this.
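Condensed, the invocation above has roughly this shape, with the long build paths collapsed into placeholder variables for readability. This is a sketch of the same command, not a tested recipe; `$GRAAL_HOME` and `$APP_CP` are stand-ins for the `/Users/bob/...` paths in the real command:

```shell
# Sketch only: GRAAL_HOME and APP_CP are placeholders, not real paths.
GRAAL_HOME=/path/to/graalvm-dev/Contents/Home
APP_CP=/path/to/app/classes:/path/to/svm/and/truffle/jars

./latest_graalvm_home/lib/svm/bin/native-image \
  -cp "$APP_CP" \
  Foo \
  -J--module-path="$GRAAL_HOME/lib/boot/graal-sdk.jar:$GRAAL_HOME/lib/truffle/truffle-api.jar" \
  -J--upgrade-module-path="$GRAAL_HOME/lib/jvmci/graal.jar:$GRAAL_HOME/lib/jvmci/graal-management.jar" \
  -J--add-opens=org.graalvm.truffle/com.oracle.truffle.polyglot=ALL-UNNAMED \
  -J--add-opens=org.graalvm.truffle/com.oracle.truffle.api.impl=ALL-UNNAMED \
  -J--add-opens=jdk.internal.vm.compiler/org.graalvm.compiler.debug=ALL-UNNAMED \
  -J--add-opens=org.graalvm.sdk/org.graalvm.polyglot=ALL-UNNAMED \
  -H:Name=foo \
  --no-server \
  -H:+ReportExceptionStackTraces
```

The condensed form just makes visible which pieces the launcher would ideally compute itself: the application classpath, the two module paths, and the `--add-opens` set.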
Thanks, Bob McWhirter Red Hat From gilles.m.duboscq at oracle.com Thu Apr 18 17:31:44 2019 From: gilles.m.duboscq at oracle.com (Gilles Duboscq) Date: Thu, 18 Apr 2019 18:31:44 +0100 Subject: x86 FMA intrinsic support design In-Reply-To: References: <7c0a8607-4585-619d-2d4f-9cbb9d7caf62@oracle.com> Message-ID: Hi Jean-Philippe, Regarding `TernaryNode` you can introduce one if you want but this is not strictly necessary: there's no problem with your node directly sub-classing `FloatingNode`. For `AMD64Ternary`, you can add it. You will indeed need a class with the correct amount of @Use and @Def fields. There might be some other "ternary" operations which use a specialized classes but this is some refactoring we can look at later. Gilles On 13/04/2019 00:00, Halimi, Jean-Philippe wrote: > Hi all, > > Thanks a lot for your feedback. It has been two weeks, and I have made some progress, however it looks like the design shared earlier is incomplete. From what I can see, there are a few classes missing in Graal to allow the implementation. > > 1. FusedMultiplyAddNode needs to extend a new TernaryNode. > 2. AMD64ArithmeticLIRGenerator::emitFusedMultiplyAdd needs to be added, and I believe it needs a new AMD64Ternary class for code generation, to call the VexRVMOp. > --> Here, I am not sure of whether AMD64Ternary is necessary, but I believe it is, since we are reading three values and writing back to the first one. > > Do you believe this is the appropriate approach? > > Thanks > -Jp > > -----Original Message----- > From: graal-dev [mailto:graal-dev-bounces at openjdk.java.net] On Behalf Of Gilles Duboscq > Sent: Friday, March 29, 2019 2:34 AM > To: graal-dev at openjdk.java.net > Subject: Re: x86 FMA intrinsic support design > > Hi Jean-Philippe, > > That sounds like a good plan! > > In terms of naming, i would call such a node `FusedMultiplyAddNode`: spelling out what it does is much more important than the fact that it comes from an intrinsic. 
> > Thanks, > Gilles > > On 29/03/2019 01:04, Halimi, Jean-Philippe wrote: >> Hello, >> >> I am currently looking into adding support for FMA intrinsics in Graal. I would like to share what I plan to do to make sure it is how it should be implemented. >> >> >> 1. Add VexRVMOp class support in AMD64Assembler with the corresponding FMA instructions >> >> a. It requires adding the VexOpAssertion.FMA and CPUFeature.FMA flags >> >> 2. Add UseFMA flag from HotSpot flags in GraalHotSpotVMConfig.java >> >> 3. Add a registerFMA method in AMD64GraphBuilderPlugins::registerMathPlugins >> >> a. This requires adding a specific FMAIntrinsicNode, which will emit the corresponding FMA instructions. >> >> Is there anything else that is needed in this case? >> >> Thanks for your insights, >> Jp >> From doug.simon at oracle.com Thu Apr 18 21:16:47 2019 From: doug.simon at oracle.com (Doug Simon) Date: Thu, 18 Apr 2019 23:16:47 +0200 Subject: Graal and JDK11 In-Reply-To: References: Message-ID: <6E351EF2-0F1D-4FAE-935C-15D54B253388@oracle.com> Hi Bob, Thanks for this effort! The next steps will be to convert some of these jars into modules and add them to the JRT image. Then the native-image launcher (i.e. NativeImage class) will have to be modified to add all the module options to the inner VM command that runs the NativeImageGenerator. I'm sure there are other pieces as well. Once we emerge from the current stabilization period (in about a month), someone (probably Danilo or Paul) will follow up with you on how we can proceed. -Doug > On 18 Apr 2019, at 19:29, Bob McWhirter wrote: > > Through a series of hacks, I've been able to create a `native-image` binary > based on JDK, and then use it to produce a simple binary native-image from > a Hello World application. > > Unlike JDK8-based, I have to pass a significant amount of `-cp` and > `--module-path` arguments to the `native-image` CLI. 
> > ./latest_graalvm_home/lib/svm/bin/native-image \ > -cp > ~/iron/test:/Users/bob/repos/graal/sdk/mxbuild/dists/jdk11/graal-sdk.jar:/Users/bob/repos/graal/substratevm/mxbuild/dists/jdk1.8/objectfile.jar:/Users/bob/repos/graal/truffle/mxbuild/dists/jdk11/truffle-api.jar:/Users/bob/repos/graal/compiler/mxbuild/dists/jdk11/graal.jar:/Users/bob/repos/graal/substratevm/mxbuild/dists/jdk1.8/pointsto.jar:/Users/bob/.mx/cache/HAMCREST_42a25dc3219429f0e5d060061f71acb49bf010a0/hamcrest.jar:/Users/bob/.mx/cache/JUNIT_2973d150c0dc1fefe998f834810d68f278ea58ec/junit.jar:/Users/bob/repos/protean/mx/mxbuild/dists/jdk1.8/junit-tool.jar:/Users/bob/repos/graal/truffle/mxbuild/dists/jdk11/truffle-nfi.jar:/Users/bob/repos/graal/substratevm/mxbuild/dists/jdk9/svm.jar:/Users/bob/.mx/cache/JLINE_c3aeac59c022bdc497c8c48ed86fa50450e4896a/jline.jar:/Users/bob/repos/graal/substratevm/mxbuild/dists/jdk1.8/library-support.jar:/Users/bob/repos/graal/substratevm/mxbuild/dists/jdk1.8/svm-driver.jar:/Users/bob/repos/graal/substratevm/mxbuild/dists/jdk1.8/svm-agent.jar > \ > Foo \ > > -J--module-path=/Users/bob/repos/graal/vm11/mxbuild/darwin-amd64/GRAALVM_CMP_GU_GVM_NFI_POLYNATIVE_RGX_STAGE1_SVM_SVMAG_SVMCF_SVML_TFL/graalvm-unknown-1.0.0-rc15-dev/Contents/Home/lib/svm/bin/../../boot/graal-sdk.jar:/Users/bob/repos/graal/vm11/mxbuild/darwin-amd64/GRAALVM_CMP_GU_GVM_NFI_POLYNATIVE_RGX_STAGE1_SVM_SVMAG_SVMCF_SVML_TFL/graalvm-unknown-1.0.0-rc15-dev/Contents/Home/lib/svm/bin/../../truffle/truffle-api.jar\ > > -J--upgrade-module-path=/Users/bob/repos/graal/vm11/mxbuild/darwin-amd64/GRAALVM_CMP_GU_GVM_NFI_POLYNATIVE_RGX_STAGE1_SVM_SVMAG_SVMCF_SVML_TFL/graalvm-unknown-1.0.0-rc15-dev/Contents/Home/lib/svm/bin/../../jvmci/graal.jar:/Users/bob/repos/graal/vm11/mxbuild/darwin-amd64/GRAALVM_CMP_GU_GVM_NFI_POLYNATIVE_RGX_STAGE1_SVM_SVMAG_SVMCF_SVML_TFL/graalvm-unknown-1.0.0-rc15-dev/Contents/Home/lib/svm/bin/../../jvmci/graal-management.jar > \ > 
-J--add-opens=org.graalvm.truffle/com.oracle.truffle.polyglot=ALL-UNNAMED > \ -J--add-opens=org.graalvm.truffle/com.oracle.truffle.api.impl=ALL-UNNAMED > \ > -J--add-opens=jdk.internal.vm.compiler/org.graalvm.compiler.debug=ALL-UNNAMED > \ -J--add-opens=org.graalvm.sdk/org.graalvm.polyglot=ALL-UNNAMED \ -H:Name=foo \ --no-server \ -H:+ReportExceptionStackTraces > > The resulting binary works as you'd expect for a simplistic app: > > $ ./foo > Hello world from Java 11.0.1 > $ du -sh ./foo > 13M ./foo > $ file ./foo > ./foo: Mach-O 64-bit executable x86_64 > > > Thus far I've mostly been faffing about to figure out what's needed. > > Does anyone have any insight on how to bake this stuff into the basic > execution of native-image, preferably storing all the module-path and such > inside the native-image binary itself, instead of having to reference > outboard modules/jars/etc? > > Apologies if there's a better place/way to discuss this. > > Thanks, > > Bob McWhirter > Red Hat From forax at univ-mlv.fr Thu Apr 18 22:41:52 2019 From: forax at univ-mlv.fr (Remi Forax) Date: Fri, 19 Apr 2019 00:41:52 +0200 (CEST) Subject: Graal and JDK11 In-Reply-To: <6E351EF2-0F1D-4FAE-935C-15D54B253388@oracle.com> References: <6E351EF2-0F1D-4FAE-935C-15D54B253388@oracle.com> Message-ID: <1024840555.32008.1555627312222.JavaMail.zimbra@u-pem.fr> Hi Bob, hi Doug, at some point you will have to decide what to do with ldc on a ConstantDynamic [1], I believe it should work a lot like a constant initialized in a static block. i.e. the bootstrap method is run when creating the image and all ldc on that ConstantDynamic should be replaced by the result of the bootstrap call. Rémi [1] https://openjdk.java.net/jeps/309 ----- Mail original ----- > De: "Doug Simon" > À: "Bob McWhirter" > Cc: "Paul Wögerer" , "graal-dev" > Envoyé: Jeudi 18 Avril 2019 23:16:47 > Objet: Re: Graal and JDK11 > Hi Bob, > > Thanks for this effort! 
> > The next steps will be to convert some of these jars into modules and add them > to the JRT image. Then the native-image launcher (i.e. NativeImage class) will > have to be modified to add all the module options to the inner VM command that > runs the NativeImageGenerator. I?m sure there?s other pieces as well. > > Once we emerge from the current stabilization period (in about a month), someone > (probably Danilo or Paul) will follow up with you on how we can proceed. > > -Doug > >> On 18 Apr 2019, at 19:29, Bob McWhirter wrote: >> >> Through a series of hacks, I've been able to create a `native-image` binary >> based on JDK, and then use it to produce a simple binary native-image from >> a Hello World application. >> >> Unlike JDK8-based, I have to pass a significant amount of `-cp` and >> `--module-path` arguments to the `native-image` CLI. >> >> ./latest_graalvm_home/lib/svm/bin/native-image \ >> -cp >> ~/iron/test:/Users/bob/repos/graal/sdk/mxbuild/dists/jdk11/graal-sdk.jar:/Users/bob/repos/graal/substratevm/mxbuild/dists/jdk1.8/objectfile.jar:/Users/bob/repos/graal/truffle/mxbuild/dists/jdk11/truffle-api.jar:/Users/bob/repos/graal/compiler/mxbuild/dists/jdk11/graal.jar:/Users/bob/repos/graal/substratevm/mxbuild/dists/jdk1.8/pointsto.jar:/Users/bob/.mx/cache/HAMCREST_42a25dc3219429f0e5d060061f71acb49bf010a0/hamcrest.jar:/Users/bob/.mx/cache/JUNIT_2973d150c0dc1fefe998f834810d68f278ea58ec/junit.jar:/Users/bob/repos/protean/mx/mxbuild/dists/jdk1.8/junit-tool.jar:/Users/bob/repos/graal/truffle/mxbuild/dists/jdk11/truffle-nfi.jar:/Users/bob/repos/graal/substratevm/mxbuild/dists/jdk9/svm.jar:/Users/bob/.mx/cache/JLINE_c3aeac59c022bdc497c8c48ed86fa50450e4896a/jline.jar:/Users/bob/repos/graal/substratevm/mxbuild/dists/jdk1.8/library-support.jar:/Users/bob/repos/graal/substratevm/mxbuild/dists/jdk1.8/svm-driver.jar:/Users/bob/repos/graal/substratevm/mxbuild/dists/jdk1.8/svm-agent.jar >> \ >> Foo \ >> >> 
-J--module-path=/Users/bob/repos/graal/vm11/mxbuild/darwin-amd64/GRAALVM_CMP_GU_GVM_NFI_POLYNATIVE_RGX_STAGE1_SVM_SVMAG_SVMCF_SVML_TFL/graalvm-unknown-1.0.0-rc15-dev/Contents/Home/lib/svm/bin/../../boot/graal-sdk.jar:/Users/bob/repos/graal/vm11/mxbuild/darwin-amd64/GRAALVM_CMP_GU_GVM_NFI_POLYNATIVE_RGX_STAGE1_SVM_SVMAG_SVMCF_SVML_TFL/graalvm-unknown-1.0.0-rc15-dev/Contents/Home/lib/svm/bin/../../truffle/truffle-api.jar\ >> >> -J--upgrade-module-path=/Users/bob/repos/graal/vm11/mxbuild/darwin-amd64/GRAALVM_CMP_GU_GVM_NFI_POLYNATIVE_RGX_STAGE1_SVM_SVMAG_SVMCF_SVML_TFL/graalvm-unknown-1.0.0-rc15-dev/Contents/Home/lib/svm/bin/../../jvmci/graal.jar:/Users/bob/repos/graal/vm11/mxbuild/darwin-amd64/GRAALVM_CMP_GU_GVM_NFI_POLYNATIVE_RGX_STAGE1_SVM_SVMAG_SVMCF_SVML_TFL/graalvm-unknown-1.0.0-rc15-dev/Contents/Home/lib/svm/bin/../../jvmci/graal-management.jar >> \ >> -J--add-opens=org.graalvm.truffle/com.oracle.truffle.polyglot=ALL-UNNAMED >> \ >> -J--add-opens=org.graalvm.truffle/com.oracle.truffle.api.impl=ALL-UNNAMED >> \ >> >> -J--add-opens=jdk.internal.vm.compiler/org.graalvm.compiler.debug=ALL-UNNAMED >> \ >> -J--add-opens=org.graalvm.sdk/org.graalvm.polyglot=ALL-UNNAMED \ >> -H:Name=foo \ >> --no-server \ >> -H:+ReportExceptionStackTraces >> >> The resulting binary works as you'd expect for a simplistic app: >> >> $ ./foo >> Hello world from Java 11.0.1 >> $ du -sh ./foo >> 13M ./foo >> $ file ./foo >> ./foo: Mach-O 64-bit executable x86_64 >> >> >> Thus far I've mostly be faffing about to figure out what's needed. >> >> Does anything have any insight on how to bake this stuff into the basic >> execution of native-image, preferably storing all the module-path and such >> inside the native-image binary itself, instead of having to reference >> outboard modules/jars/etc? >> >> Apologies if there's a better place/way to discuss this. 
>> >> Thanks, >> >> Bob McWhirter > > Red Hat From paul.woegerer at oracle.com Fri Apr 19 08:52:34 2019 From: paul.woegerer at oracle.com (Paul Woegerer) Date: Fri, 19 Apr 2019 01:52:34 -0700 (PDT) Subject: Graal and JDK11 Message-ID: <31fed4d1-56aa-4f13-99d0-f6cf33941732@default> A while back I added support for Java 11 based image building into native image. https://github.com/oracle/graal/commit/ed5f5f82d2962e5f65b3ee784a0cd62c489c41c5 Since then we build images on Java 11 as part of our regular gate tasks. E.g. See: https://travis-ci.org/oracle/graal/jobs/521993959 If you checkout master and run: [master $%=] ~/OLabs/git/svm-master/graal/substratevm> cat Hello.java public class Hello { public static void main(String[] args) { var javaVersion = System.getProperty("java.version"); System.out.println("Hello Java " + javaVersion); } } [master $%=] ~/OLabs/git/svm-master/graal/substratevm> java --version openjdk 11.0.2 2019-01-15 OpenJDK Runtime Environment 18.9 (build 11.0.2+7) OpenJDK 64-Bit Server VM 18.9 (build 11.0.2+7, mixed mode, sharing) [master $%=] ~/OLabs/git/svm-master/graal/substratevm> javac Hello.java [master $%=] ~/OLabs/git/svm-master/graal/substratevm> mx native-image Hello [hello:19205] classlist: 1,607.95 ms WARNING: An illegal reflective access operation has occurred WARNING: Illegal reflective access by com.oracle.svm.core.jdk.FileTypeDetectorFeature (file:/home/pwoegere/OLabs/git/svm-master/graal/substratevm/mxbuild/dists/jdk9/svm.jar) to field java.nio.file.Files$FileTypeDetectors.installedDetectors WARNING: Please consider reporting this to the maintainers of com.oracle.svm.core.jdk.FileTypeDetectorFeature WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations WARNING: All illegal access operations will be denied in a future release [hello:19205] (cap): 854.96 ms [hello:19205] setup: 1,985.61 ms Warning: RecomputeFieldValue.FieldOffset automatic substitution failed. 
The automatic substitution registration was attempted because a call to jdk.internal.misc.Unsafe.objectFieldOffset(Class, String) was detected in the static initializer of jdk.internal.misc.InnocuousThread. Detailed failure reason(s): Could not determine the field where the value produced by the call to jdk.internal.misc.Unsafe.objectFieldOffset(Class, String) for the field offset computation is stored. The call is not directly followed by a field store or by a sign extend node followed directly by a field store. Warning: RecomputeFieldValue.FieldOffset automatic substitution failed. The automatic substitution registration was attempted because a call to jdk.internal.misc.Unsafe.objectFieldOffset(Class, String) was detected in the static initializer of jdk.internal.misc.InnocuousThread. Detailed failure reason(s): Could not determine the field where the value produced by the call to jdk.internal.misc.Unsafe.objectFieldOffset(Class, String) for the field offset computation is stored. The call is not directly followed by a field store or by a sign extend node followed directly by a field store. [hello:19205] (typeflow): 6,324.79 ms [hello:19205] (objects): 6,154.23 ms [hello:19205] (features): 245.79 ms [hello:19205] analysis: 12,930.88 ms [hello:19205] universe: 318.56 ms [hello:19205] (parse): 970.05 ms [hello:19205] (inline): 1,849.87 ms [hello:19205] (compile): 8,515.67 ms [hello:19205] compile: 11,999.75 ms [hello:19205] image: 1,248.74 ms [hello:19205] write: 155.05 ms [hello:19205] [total]: 30,392.57 ms [master $%=] ~/OLabs/git/svm-master/graal/substratevm> ./hello Hello Java 11.0.2 You can see that overall native-image works reasonably well for Java 11 already. But there is still work to do. E.g. 
currently when you build the "native-image" image on Java 11: [master $%=] ~/OLabs/git/svm-master/graal/substratevm> mkdir -p svmbuild/native-image-root-11/bin [master $%=] ~/OLabs/git/svm-master/graal/substratevm> mx native-image --tool:native-image -H:Path=svmbuild/native-image-root-11/bin [native-image:20223] classlist: 1,238.58 ms ... [native-image:20223] write: 224.72 ms [native-image:20223] [total]: 40,393.15 ms And when you try to use it, you get: [master $%=] ~/OLabs/git/svm-master/graal/substratevm> svmbuild/native-image-root-11/bin/native-image --tool:native-image -H:Path=svmbuild/native-image-root-11/bin Error: Starting image-build server instance failed Caused by: com.oracle.svm.driver.NativeImage$NativeImageError: Could not determine port for sending image-build requests. Server stdout/stderr: Exception in thread "main" com.oracle.svm.core.util.VMError$HostedError: Static field processReaperExecutor of class java.lang.UNIXProcess can't be reset. Underlying exception: java.lang.UNIXProcess (Though, if you use that "native-image" image with --no-server it works as expected) If someone wants to work on Java 11 support while I'm busy with other things I suggest looking at the changes brought in by merge commit https://github.com/oracle/graal/commit/ed5f5f82d2962e5f65b3ee784a0cd62c489c41c5 and start improving on that. HTH, Paul From bmcwhirt at redhat.com Fri Apr 19 09:42:58 2019 From: bmcwhirt at redhat.com (Bob McWhirter) Date: Fri, 19 Apr 2019 05:42:58 -0400 Subject: Graal and JDK11 In-Reply-To: <31fed4d1-56aa-4f13-99d0-f6cf33941732@default> References: <31fed4d1-56aa-4f13-99d0-f6cf33941732@default> Message-ID: Yah I've been using jdk11 and 'my native-image' on aarch64. This push has been to produce a JDK11 based distro with a binary bin/native-image etc. Hit a few things including the server issue. But just trying to work out the mx changes etc to produce a latest_graal_home and such. I'll keep trucking. 
Bob On Fri, Apr 19, 2019 at 4:52 AM Paul Woegerer wrote: > A while back I added support for Java 11 based image building into native > image. > > > https://github.com/oracle/graal/commit/ed5f5f82d2962e5f65b3ee784a0cd62c489c41c5 > > Since then we build images on Java 11 as part of our regular gate tasks. > E.g. See: > > https://travis-ci.org/oracle/graal/jobs/521993959 > > If you checkout master and run: > > [master $%=] ~/OLabs/git/svm-master/graal/substratevm> cat Hello.java > public class Hello { > public static void main(String[] args) { > var javaVersion = System.getProperty("java.version"); > System.out.println("Hello Java " + javaVersion); > } > } > > [master $%=] ~/OLabs/git/svm-master/graal/substratevm> java --version > openjdk 11.0.2 2019-01-15 > OpenJDK Runtime Environment 18.9 (build 11.0.2+7) > OpenJDK 64-Bit Server VM 18.9 (build 11.0.2+7, mixed mode, sharing) > > [master $%=] ~/OLabs/git/svm-master/graal/substratevm> javac Hello.java > > [master $%=] ~/OLabs/git/svm-master/graal/substratevm> mx native-image > Hello > [hello:19205] classlist: 1,607.95 ms > WARNING: An illegal reflective access operation has occurred > WARNING: Illegal reflective access by > com.oracle.svm.core.jdk.FileTypeDetectorFeature > (file:/home/pwoegere/OLabs/git/svm-master/graal/substratevm/mxbuild/dists/jdk9/svm.jar) > to field java.nio.file.Files$FileTypeDetectors.installedDetectors > WARNING: Please consider reporting this to the maintainers of > com.oracle.svm.core.jdk.FileTypeDetectorFeature > WARNING: Use --illegal-access=warn to enable warnings of further > illegal reflective access operations > WARNING: All illegal access operations will be denied in a future > release > [hello:19205] (cap): 854.96 ms > [hello:19205] setup: 1,985.61 ms > Warning: RecomputeFieldValue.FieldOffset automatic substitution > failed. 
The automatic substitution registration was attempted because a > call to jdk.internal.misc.Unsafe.objectFieldOffset(Class, String) was > detected in the static initializer of jdk.internal.misc.InnocuousThread. > Detailed failure reason(s): Could not determine the field where the value > produced by the call to jdk.internal.misc.Unsafe.objectFieldOffset(Class, > String) for the field offset computation is stored. The call is not > directly followed by a field store or by a sign extend node followed > directly by a field store. > Warning: RecomputeFieldValue.FieldOffset automatic substitution > failed. The automatic substitution registration was attempted because a > call to jdk.internal.misc.Unsafe.objectFieldOffset(Class, String) was > detected in the static initializer of jdk.internal.misc.InnocuousThread. > Detailed failure reason(s): Could not determine the field where the value > produced by the call to jdk.internal.misc.Unsafe.objectFieldOffset(Class, > String) for the field offset computation is stored. The call is not > directly followed by a field store or by a sign extend node followed > directly by a field store. > [hello:19205] (typeflow): 6,324.79 ms > [hello:19205] (objects): 6,154.23 ms > [hello:19205] (features): 245.79 ms > [hello:19205] analysis: 12,930.88 ms > [hello:19205] universe: 318.56 ms > [hello:19205] (parse): 970.05 ms > [hello:19205] (inline): 1,849.87 ms > [hello:19205] (compile): 8,515.67 ms > [hello:19205] compile: 11,999.75 ms > [hello:19205] image: 1,248.74 ms > [hello:19205] write: 155.05 ms > [hello:19205] [total]: 30,392.57 ms > > [master $%=] ~/OLabs/git/svm-master/graal/substratevm> ./hello > Hello Java 11.0.2 > > You can see that overall native-image works reasonably well for Java 11 > already. > > But there is still work to do. > > E.g. 
currently when you build the "native-image" image on Java 11: > [master $%=] ~/OLabs/git/svm-master/graal/substratevm> mkdir -p > svmbuild/native-image-root-11/bin > > [master $%=] ~/OLabs/git/svm-master/graal/substratevm> mx native-image > --tool:native-image -H:Path=svmbuild/native-image-root-11/bin > [native-image:20223] classlist: 1,238.58 ms > ... > [native-image:20223] write: 224.72 ms > [native-image:20223] [total]: 40,393.15 ms > > And you try to use it you get: > > [master $%=] ~/OLabs/git/svm-master/graal/substratevm> > svmbuild/native-image-root-11/bin/native-image --tool:native-image > -H:Path=svmbuild/native-image-root-11/bin > Error: Starting image-build server instance failed > Caused by: com.oracle.svm.driver.NativeImage$NativeImageError: Could > not determine port for sending image-build requests. > Server stdout/stderr: > Exception in thread "main" > com.oracle.svm.core.util.VMError$HostedError: Static field > processReaperExecutor of class java.lang.UNIXProcess can't be reset. > Underlying exception: java.lang.UNIXProcess > > (Though, if you use that "native-image" image with --no-server it works > as expected) > > If someone wants to work on Java 11 support while I'm busy with other > things I suggest looking at the changes brought in by merge commit > > > https://github.com/oracle/graal/commit/ed5f5f82d2962e5f65b3ee784a0cd62c489c41c5 > > and start improving on that. > > HTH, > Paul > From vladimir.kozlov at oracle.com Fri Apr 19 17:15:41 2019 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Fri, 19 Apr 2019 10:15:41 -0700 Subject: RFR: JDK-8221598 - Update Graal In-Reply-To: References: Message-ID: Changes look good. I looked at the test results and most of them are timeouts because Graal was run with -Xcomp. I was not able to identify serious issues because there were >200 failed tests, which makes it difficult to search. Someone has to look at the results and see if there are new failures. 
Thanks, Vladimir On 4/17/19 3:13 PM, jesper.wilhelmsson at oracle.com wrote: > Hi, > > Please review the patch to integrate recent Graal changes into OpenJDK. > Graal tip to integrate: 20f370437efb6b2a3f455a238da6141dc101d38c > > Bug: https://bugs.openjdk.java.net/browse/JDK-8221598 > Webrev: http://cr.openjdk.java.net/~jwilhelm/8221598/webrev.00/ > > Thanks, > /Jesper > From vladimir.kozlov at oracle.com Fri Apr 19 19:06:20 2019 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Fri, 19 Apr 2019 12:06:20 -0700 Subject: RFR: JDK-8221598 - Update Graal In-Reply-To: References: Message-ID: <2976f9cb-90f5-731e-f30f-ee354032d6f7@oracle.com> On 4/19/19 11:29 AM, dean.long at oracle.com wrote: > I only see one failure in tiers 1-4, and it looks like JDK-8222550. Okay. We can push it then. Vladimir > > dl > > On 4/19/19 10:15 AM, Vladimir Kozlov wrote: >> Changes looks good. >> >> I looked on tests results and most of them are timeouts because Graal was run with -Xcomp. >> I was not able to identify serious issues because there were >200 failed tests - difficult to search. >> Someone have to look on results and see if there are new failures. >> >> Thanks, >> Vladimir >> >> On 4/17/19 3:13 PM, jesper.wilhelmsson at oracle.com wrote: >>> Hi, >>> >>> Please review the patch to integrate recent Graal changes into OpenJDK. 
>>> Graal tip to integrate: 20f370437efb6b2a3f455a238da6141dc101d38c >>> >>> Bug: https://bugs.openjdk.java.net/browse/JDK-8221598 >>> Webrev: http://cr.openjdk.java.net/~jwilhelm/8221598/webrev.00/ >>> >>> Thanks, >>> /Jesper >>> > From java at stefan-marr.de Sun Apr 21 20:24:53 2019 From: java at stefan-marr.de (Stefan Marr) Date: Sun, 21 Apr 2019 21:24:53 +0100 Subject: [CfP] DLS 2019 - 15th Dynamic Languages Symposium, Submission Deadline June 5th Message-ID: ======================================================================== Call for Papers DLS 2019 - 15th Dynamic Languages Symposium Co-located with SPLASH 2019, October 22, Athens, Greece https://conf.researchr.org/home/dls-2019 Follow us @dynlangsym ======================================================================== Dynamic Languages play a fundamental role in today's world of software, from the perspective of research and practice. Languages such as JavaScript, R, and Python are vehicles for cutting edge research as well as building widely used products and computational tools. The 15th Dynamic Languages Symposium (DLS) at SPLASH 2019 is the premier forum for researchers and practitioners to share research and experience on all aspects of dynamic languages. DLS 2019 invites high quality papers reporting original research and experience related to the design, implementation, and applications of dynamic languages. 
Areas of interest are generally empirical studies, language design, implementation, and runtimes, which includes but is not limited to: - innovative language features - innovative implementation techniques - innovative applications - development environments and tools - experience reports and case studies - domain-oriented programming - late binding, dynamic composition, and run-time adaptation - reflection and meta-programming - software evolution - language symbiosis and multi-paradigm languages - dynamic optimization - interpretation, just-in-time and ahead-of-time compilation - soft/optional/gradual typing - hardware support - educational approaches and perspectives - semantics of dynamic languages - frameworks and languages for the Cloud and the IoT Submission Details ------------------ Submissions must neither be previously published nor under review at other events. DLS 2019 uses a single-blind, two-phase reviewing process. Papers are assumed to be in one of the following categories: Research Papers: describe work that advances the current state of the art Experience Papers: describe insights gained from substantive practical applications that should be of a broad interest Dynamic Pearls: describe a known idea in an appealing way to remind the community and capture a reader's interest The program committee will evaluate each paper based on its relevance, significance, clarity, and originality. The paper category needs to be indicated during submission, and papers are judged accordingly. Papers are to be submitted electronically at https://dls19.hotcrp.com/ in PDF format. Submissions must be in the ACM SIGPLAN conference acmart format, 10 point font, and should not exceed 12 pages. Please see full details in the instructions for authors available at: https://conf.researchr.org/home/dls-2019#Instructions-for-Authors DLS 2019 will run a two-phase reviewing process to help authors make their final papers the best that they can be. 
Accepted papers will be published in the ACM Digital Library and will be freely available for one month, starting two weeks before the event. Important Deadlines ------------------- Abstract Submission: May 29, 2019 Paper Submission: June 5, 2019 First Phase Notification: July 3, 2019 Final Notifications: August 14, 2019 Camera Ready: August 28, 2019 All deadlines are 23:59 AoE (UTC-12h). AUTHORS TAKE NOTE: The official publication date is the date the proceedings are made available in the ACM Digital Library. This date may be up to two weeks prior to the first day of your conference. The official publication date affects the deadline for any patent filings related to published work. Program Committee ----------------- Alexandre Bergel, University of Chile Carl Friedrich Bolz-Tereick, Heinrich-Heine-Universität Düsseldorf Chris Seaton, Oracle Labs David Chisnall, Microsoft Research Elisa Gonzalez Boix, Vrije Universiteit Brussel Gregor Richards, University of Waterloo Guido Chari, Czech Technical University Hannes Payer, Google James Noble, Victoria University of Wellington Jeremy Singer, University of Glasgow Joe Gibbs Politz, University of California San Diego Juan Fumero, The University of Manchester Julien Ponge, Red Hat Mandana Vaziri, IBM Research Manuel Serrano, Inria Marc Feeley, Université de Montréal Mark Marron, Microsoft Research Na Meng, Virginia Tech Nick Papoulias, Université 
Grenoble Alpes Richard Roberts, Victoria University of Wellington Sam Tobin-Hochstadt, Indiana University Sarah Mount, Aston University Sébastien Doeraene, École polytechnique fédérale de Lausanne William Cook, University of Texas at Austin Program Chair ------------- Stefan Marr, University of Kent, United Kingdom -- Stefan Marr School of Computing, University of Kent https://stefan-marr.de/research/ From jesper.wilhelmsson at oracle.com Tue Apr 23 20:50:09 2019 From: jesper.wilhelmsson at oracle.com (jesper.wilhelmsson at oracle.com) Date: Tue, 23 Apr 2019 22:50:09 +0200 Subject: RFR: JDK-8221598 - Update Graal In-Reply-To: <2976f9cb-90f5-731e-f30f-ee354032d6f7@oracle.com> References: <2976f9cb-90f5-731e-f30f-ee354032d6f7@oracle.com> Message-ID: Thank you! /Jesper > On 19 Apr 2019, at 21:06, Vladimir Kozlov wrote: > > On 4/19/19 11:29 AM, dean.long at oracle.com wrote: >> I only see one failure in tiers 1-4, and it looks like JDK-8222550. > > Okay. We can push it then. > > Vladimir > >> dl >> On 4/19/19 10:15 AM, Vladimir Kozlov wrote: >>> Changes looks good. >>> >>> I looked on tests results and most of them are timeouts because Graal was run with -Xcomp. >>> I was not able to identify serious issues because there were >200 failed tests - difficult to search. >>> Someone have to look on results and see if there are new failures. >>> >>> Thanks, >>> Vladimir >>> >>> On 4/17/19 3:13 PM, jesper.wilhelmsson at oracle.com wrote: >>>> Hi, >>>> >>>> Please review the patch to integrate recent Graal changes into OpenJDK. 
>>>> Graal tip to integrate: 20f370437efb6b2a3f455a238da6141dc101d38c >>>> >>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8221598 >>>> Webrev: http://cr.openjdk.java.net/~jwilhelm/8221598/webrev.00/ >>>> >>>> Thanks, >>>> /Jesper >>>> From lists at fniephaus.com Thu Apr 25 09:11:58 2019 From: lists at fniephaus.com (Fabio Niephaus) Date: Thu, 25 Apr 2019 11:11:58 +0200 Subject: Call for Papers, The Programming Journal, Volume 4, Issue 2 Message-ID: ======================================================================== The Programming Journal The Art, Science, and Engineering of Programming Call for Papers for Volume 4, Issue 2 http://programming-journal.org/cfp/ Follow us @programmingconf ======================================================================== The Art, Science, and Engineering of Programming was created with the goal of placing the wonderful art of programming on the map of scholarly works. Many academic journals and conferences exist that publish research related to programming, starting with programming languages, software engineering, and expanding to the whole Computer Science field. Yet, many of us feel that, as the field of Computer Science expanded, programming, in itself, has been neglected to a secondary role not worthy of scholarly attention. That is a serious gap, as much of the progress in Computer Science lies on the basis of computer programs, the people who write them, and the concepts and tools available to them to express computational tasks. The Art, Science, and Engineering of Programming aims at closing this gap by focusing primarily on programming: the art itself (programming styles, pearls, models, languages), the emerging science of understanding what works and what doesn't work in general and in specific contexts, as well as more established engineering and mathematical perspectives. 
We solicit papers describing work from one of the following perspectives: Art: knowledge and technical skills acquired through practice and personal experiences. Examples include libraries, frameworks, languages, APIs, programming models and styles, programming pearls, and essays about programming. Science (Theoretical): knowledge and technical skills acquired through mathematical formalisms. Examples include formal programming models and proofs. Science (Empirical): knowledge and technical skills acquired through experiments and systematic observations. Examples include user studies and programming-related data mining. Engineering: knowledge and technical skills acquired through designing and building large systems and through calculated application of principles in building those systems. Examples include measurements of artifacts' properties, development processes and tools, and quality assurance methods. Independent of the type of work, the journal accepts submissions covering several areas of expertise, including but not limited to: - General-purpose programming - Data mining and machine learning programming, and for programming - Database programming - Distributed systems programming - Graphics and GPU programming - Interpreters, virtual machines, and compilers - Metaprogramming and reflection - Model-based development - Modularity and separation of concerns - Parallel and multi-core programming - Program verification - Programming education - Programming environments - Security programming - Social coding - Testing and debugging - User interface programming - Visual and live programming All details, including the selection process, are described on http://programming-journal.org/cfp/ Details on the submission process are available at http://programming-journal.org/submission/ Authors of accepted papers will be invited to present at the '20 conference in Porto, Portugal from March 23-26: https://2020.programming-conference.org/ ## Upcoming Deadlines We 
solicit submissions for the following upcoming deadlines: Submission: June 1 First notification: August 1 Revised submission: September 1 Final notification: September 7 Camera-ready: September 15 We'll also solicit submissions for Issue 3; for full details, see: https://programming-journal.org/timeline/ ## Standing Review Committee Volume 4 Christophe Scholliers, Ghent University Coen De Roover, Vrije Universiteit Brussel Craig Anslow, Victoria University of Wellington, New Zealand Didier Verna, EPITA / LRDE, France Diego Garbervetsky, University of Buenos Aires Edd Barrett, King's College London Erik Ernst, Google Felienne Hermans, Leiden University Francisco Sant'Anna, Rio de Janeiro State University Friedrich Steimann, Fernuniversität Gordana Rakic, University of Novi Sad Guido Salvaneschi, TU Darmstadt Hidehiko Masuhara, Tokyo Institute of Technology Jeremy Gibbons, University of Oxford Jonathan Edwards, US Jun Kato, AIST Japan Luke Church, University of Cambridge Matthew Flatt, University of Utah Michael L. Van De Vanter, Cal Poly San Luis Obispo Nicolás Cardozo, Universidad de los Andes, Colombia Stephen Kell, University of Kent ## Editors Stefan Marr (Editor Volume 4), University of Kent Cristina V. Lopes (Editor-in-Chief), University of California, Irvine From lists at fniephaus.com Thu Apr 25 14:29:19 2019 From: lists at fniephaus.com (Fabio Niephaus) Date: Thu, 25 Apr 2019 16:29:19 +0200 Subject: Call for Papers: ICOOOLPS'19 Message-ID: 14th Workshop on Implementation, Compilation, Optimization of Object- Oriented Languages, Programs and Systems Co-located with ECOOP 2019 held Mon 15 - Fri 19 July in Hammersmith, London, United Kingdom Twitter: @ICOOOLPS URL: https://2019.ecoop.org/home/ICOOOLPS-2019 The ICOOOLPS workshop series brings together researchers and practitioners working in the field of language implementation and optimization. 
The goal of the workshop is to discuss emerging problems and research directions as well as new solutions to classic performance challenges. The topics of interest for the workshop include techniques for the implementation and optimization of a wide range of languages including but not limited to object-oriented ones. Furthermore, meta-compilation techniques or language-agnostic approaches are welcome, too. ### Topics of Interest A non-exclusive list of topics of interest for this workshop is: - Implementation and optimization of fundamental language features (from automatic memory management to zero-overhead metaprogramming) - Runtime systems technology (libraries, virtual machines) - Static, adaptive, and speculative optimizations and compiler techniques - Meta-compilation techniques and language-agnostic approaches for the efficient implementation of languages - Compilers (intermediate representations, offline and online optimizations, ...) - Empirical studies on language usage, benchmark design, and benchmarking methodology - Resource-sensitive systems (real-time, low power, mobile, cloud) - Studies on design choices and tradeoffs (dynamic vs. static compilation, heuristics vs. programmer input, ...) - Tooling support, debuggability and observability of languages as well as their implementations ### Workshop Format and Submissions This workshop welcomes the presentation and discussion of new ideas and emerging problems that give a chance for interaction and exchange. More mature work is welcome as part of a mini-conference format, too. We aim to interleave interactive brainstorming and demonstration sessions between the formal presentations to foster an active exchange of ideas. The workshop papers will be published in the ACM DL or an open archive (to be confirmed). Papers are to be submitted using the sigplanconf LaTeX template (http://www.sigplan.org/Resources/LaTeXClassFile/). 
Please submit contributions via EasyChair: https://easychair.org/conferences/?conf=icooolps19 ### Important Dates Abstract Submissions: 15 May 2019 Paper Submissions: 17 May 2019 Author Notification: 10 June 2019 ### Workshop Organizers Clément Béra, Google Aarhus Eric Jul, University of Oslo ### Program Committee Shigeru Chiba, University of Tokyo, Japan Gael Thomas, Telecom SudParis Elisa Gonzalez Boix, Vrije Universiteit Brussel, Belgium Jennifer B Sartor, Ghent University and Vrije Universiteit Brussel Tim Felgentreff, Oracle Labs, Potsdam Benoit Daloze, JKU Linz, Austria Edd Barrett, King's College London Juliana Franco, Microsoft Research, Cambridge Marcus Denker, Inria Lille Manuel Rigger, JKU Linz Robin Morisset, WebKit Guido Chari, Czech Technical University, Czechia Oli Flückiger, Northeastern University, USA Fabio Niephaus, Hasso Plattner Institute, University of Potsdam Baptiste Saleil, Université de Montréal Olivier Zendra, Inria, France From dean.long at oracle.com Thu Apr 25 18:53:28 2019 From: dean.long at oracle.com (dean.long at oracle.com) Date: Thu, 25 Apr 2019 11:53:28 -0700 Subject: RFR(M) 8219403: JVMCIRuntime::adjust_comp_level should be replaced Message-ID: <3f0271ca-b8e6-dfcb-8787-8c36f49265fe@oracle.com> https://bugs.openjdk.java.net/browse/JDK-8219403 http://cr.openjdk.java.net/~dlong/8219403/webrev.2/ This change removes the problematic JVMCIRuntime::adjust_comp_level. It is based on previous work in Graal, graal-jvmci-8, and Metropolis by Tom, Doug, and Vladimir. I also problem-listed several tests that were causing noise in the test results. 
dl From vladimir.kozlov at oracle.com Thu Apr 25 20:09:47 2019 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Thu, 25 Apr 2019 13:09:47 -0700 Subject: RFR(M) 8219403: JVMCIRuntime::adjust_comp_level should be replaced In-Reply-To: <3f0271ca-b8e6-dfcb-8787-8c36f49265fe@oracle.com> References: <3f0271ca-b8e6-dfcb-8787-8c36f49265fe@oracle.com> Message-ID: <91f3d9be-5612-2556-ae77-a60102fc07ae@oracle.com> In general looks good. Only minor issue: in jvmciJavaClasses.hpp indent of '\' is not adjusted in changes line. Thanks, Vladimir On 4/25/19 11:53 AM, dean.long at oracle.com wrote: > https://bugs.openjdk.java.net/browse/JDK-8219403 > http://cr.openjdk.java.net/~dlong/8219403/webrev.2/ > > This change removes the problematic JVMCIRuntime::adjust_comp_level. It > is based on previous work in Graal, graal-jvmci-8, and Metropolis by Tom, > Doug, and Vladimir. > > I also problem-listed several tests that were causing noise in the test results. > > dl From dean.long at oracle.com Thu Apr 25 22:00:38 2019 From: dean.long at oracle.com (dean.long at oracle.com) Date: Thu, 25 Apr 2019 15:00:38 -0700 Subject: RFR(M) 8219403: JVMCIRuntime::adjust_comp_level should be replaced In-Reply-To: <91f3d9be-5612-2556-ae77-a60102fc07ae@oracle.com> References: <3f0271ca-b8e6-dfcb-8787-8c36f49265fe@oracle.com> <91f3d9be-5612-2556-ae77-a60102fc07ae@oracle.com> Message-ID: <0c7df5d0-7638-7b44-65fd-18fe1279a0b3@oracle.com> Fixed. Thanks for the review. dl On 4/25/19 1:09 PM, Vladimir Kozlov wrote: > In general looks good. Only minor issue: > > in jvmciJavaClasses.hpp indent of '\' is not adjusted in changes line. > > Thanks, > Vladimir > > On 4/25/19 11:53 AM, dean.long at oracle.com wrote: >> https://bugs.openjdk.java.net/browse/JDK-8219403 >> http://cr.openjdk.java.net/~dlong/8219403/webrev.2/ >> >> This change removes the problematic JVMCIRuntime::adjust_comp_level. It >> is based on previous work in Graal, graal-jvmci-8, and Metropolis by >> Tom, >> Doug, and Vladimir. 
>> >> I also problem-listed several tests that were causing noise in the >> test results. >> >> dl From dean.long at oracle.com Fri Apr 26 19:09:38 2019 From: dean.long at oracle.com (dean.long at oracle.com) Date: Fri, 26 Apr 2019 12:09:38 -0700 Subject: RFR(S) 8218700: infinite loop in HotSpotJVMCIMetaAccessContext.fromClass after OutOfMemoryError Message-ID: <53bcf718-e543-d40c-5486-58b98f66bcee@oracle.com> https://bugs.openjdk.java.net/browse/JDK-8218700 http://cr.openjdk.java.net/~dlong/8218700/webrev.2/ If we throw an OutOfMemoryError in the right place (see JDK-8222941), HotSpotJVMCIMetaAccessContext.fromClass can go into an infinite loop calling ClassValue.remove. To work around the problem, reset the value in a mutable cell instead of calling remove. dl From vladimir.kozlov at oracle.com Fri Apr 26 22:46:36 2019 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Fri, 26 Apr 2019 15:46:36 -0700 Subject: [13] RFR(L) 8220623: [JVMCI] Update JVMCI to support JVMCI based Compiler compiled into shared library In-Reply-To: <39774cdd-de9e-c878-4a5a-f6595a93859f@oracle.com> References: <4b759550-f185-724f-7139-9bab648a1966@oracle.com> <119335f1-acb7-fffc-f38a-e50a96b73b3c@oracle.com> <2d206374-cfeb-9b1b-2838-e4e0a774e89d@oracle.com> <8063fb37-f1b0-8b93-bbe8-4dbeeaa54959@oracle.com> <17233985-18c7-305e-5556-fe2b38926b71@oracle.com> <3514c74d-5f6a-61cc-ebea-b9564df61673@oracle.com> <63b8e1d2-3516-88f5-02ac-828dd15baf83@oracle.com> <39774cdd-de9e-c878-4a5a-f6595a93859f@oracle.com> Message-ID: Hi I have 2 new deltas for easy review. Delta 1 is mostly JVMCI HotSpot refactoring and cleanup: http://cr.openjdk.java.net/~kvn/8220623/webrev_delta1.07/ - Cleanup #include jvmci files. - Removed BoolObjectClosure parameter from JVMCI::do_unloading() since it is not used. In JDK 13 this parameter is removed from other places too. - Added mtJVMCI type to track memory used by JVMCI. - Passed Handles as constant references. 
- Moved JNIAccessMark, JVMCIObject, MetadataHandleBlock class to separate new files. - Moved JVMCI methods bodies from jvmciRuntime.cpp into new jvmci.cpp file. - Moved bodies of some JVMCIEnv methods from .hpp into jvmciEnv.cpp file. They use JNIAccessMark and ThreadToNativeFromVM and I can't use them in header file because they require #include inline.hpp files. - Moved bodies of some HotSpotJVMCI methods into jvmciJavaClasses.cpp file because, again, they need jniHandles.inline.hpp. - Moved JVMCICompileState class definition to the beginning of jvmciEnv.hpp file. Delta 2: http://cr.openjdk.java.net/~kvn/8220623/webrev_delta2.07/ - Changed MetadataHandleBlock fields which are used only by one instance to static. - Renamed field _jmetadata::_handle to _value and corresponding access methods because it was confusing: handle->handle(). - Switched from JNIHandleBlock to OopStorage use for _object_handles. - Additional JVMCI Java side fix for libgraal. Full: http://cr.openjdk.java.net/~kvn/8220623/webrev.07/ I think I addressed all comments I received so far. Thanks, Vladimir On 4/9/19 7:25 PM, Vladimir Kozlov wrote: > Thank you, Coleen > > On 4/9/19 1:36 PM, coleen.phillimore at oracle.com wrote: >> >> I think I missed graal-dev with this reply.? I have a few other comments. >> >> +void MetadataHandleBlock::do_unloading(BoolObjectClosure* is_alive) { >> >> >> We've removed the is_alive parameter from all do_unloading, and it appears unused here also. > > Yes, I can remove it. > >> >> I don't know about this MetadataHandles block.?? It seems that it could be a concurrent hashtable with a WeakHandle<> >> if it's for jdk11 and beyond.? Kim might have mentioned this (I haven't read all the replies thoroughly) but >> JNIHandleBlock wasn't MT safe, and the new OopStorage is safe and scalable. > > Yes, Kim also suggested OopStorage. I did not get into that part yet but I will definitely do. > >> >> +? jmetadata allocate_handle(methodHandle handle)?????? 
{ return allocate_metadata_handle(handle()); } >> +? jmetadata allocate_handle(constantPoolHandle handle) { return allocate_metadata_handle(handle()); } >> >> +CompLevel JVMCI::adjust_comp_level(methodHandle method, bool is_osr, CompLevel level, JavaThread* thread) { >> >> +JVMCIObject JVMCIEnv::new_StackTraceElement(methodHandle method, int bci, JVMCI_TRAPS) { >> >> +JVMCIObject JVMCIEnv::new_HotSpotNmethod(methodHandle method, const char* name, jboolean isDefault, jlong compileId, >> JVMCI_TRAPS) { >> >> Passing metadata Handles by copy will call the copy constructor and destructor for these parameters unnecessarily. >> They should be passed as *const* references to avoid this. > > Okay. > >> >> +class MetadataHandleBlock : public CHeapObj { >> >> >> There should be a better mt? for this.? mtCompiler seems appropriate here.? Depending on how many others of these, you >> could add an mtJVMCI. > > mtJVMCI is good suggestion. > >> >> +??????????? if (TraceNMethodInstalls) { >> >> >> We've had Unified Logging in the sources for a long time now. New code should use UL rather than adding a >> TraceSomething option.?? I understand it's supposed to be shared with JDK8 code but it seems that you're forward >> porting what looks like old code into the repository. > > Yes, we should use UL for this. > > Existing JIT code (ciEnv.cpp) still not using UL for this: > http://hg.openjdk.java.net/jdk/jdk/file/f847a42ddc01/src/hotspot/share/ci/ciEnv.cpp#l1075 > > May be I should update it too ... > >> >> Coleen >> >> >> On 4/9/19 4:00 PM, coleen.phillimore at oracle.com wrote: >>> >>> http://cr.openjdk.java.net/~kvn/8220623/webrev.06/src/hotspot/share/classfile/classFileParser.cpp.udiff.html >>> >>> It appears this change is to implement https://bugs.openjdk.java.net/browse/JDK-8193513 which we closed as WNF.? If >>> you want this change, remove it from this giant patch and reopen and submit a separate patch for this bug. > > Thank you for pointing it. I will do as you suggested. 
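[Editorial aside on the pass-by-const-reference review comment above: the cost being avoided is the copy constructor and destructor run for each handle parameter. A minimal standalone sketch with a hypothetical Handle type, not HotSpot's actual methodHandle (whose copy constructor additionally does thread-local bookkeeping), shows the difference:]

```cpp
#include <cassert>

// Hypothetical handle wrapper whose copy constructor does observable work,
// standing in for types like methodHandle/constantPoolHandle.
struct Handle {
  static int copies;  // counts copy-constructions
  void* obj;
  explicit Handle(void* o) : obj(o) {}
  Handle(const Handle& other) : obj(other.obj) { ++copies; }
};
int Handle::copies = 0;

// Passing by value copy-constructs (and later destroys) the parameter.
void by_value(Handle h) { (void)h; }

// Passing by const reference binds directly to the caller's handle.
void by_const_ref(const Handle& h) { (void)h; }

int copies_by_value() {
  Handle::copies = 0;
  Handle h(nullptr);
  by_value(h);
  return Handle::copies;  // one copy, made for the parameter
}

int copies_by_const_ref() {
  Handle::copies = 0;
  Handle h(nullptr);
  by_const_ref(h);
  return Handle::copies;  // no copy made
}
```

[This is why the review asks for signatures like `const methodHandle&` in the JVMCIEnv entry points.]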
> >>> >>> It shouldn't be conditional on JVMCI and should use the normal unified logging mechanism. > > Okay. > >>> >>> http://cr.openjdk.java.net/~kvn/8220623/webrev.06/src/hotspot/share/runtime/thread.hpp.udiff.html >>> >>> *!_jlong__pending_failed_speculation;* >>> >>> >>> We've been trying to remove and avoid java types in hotspot code and use the appropriate C++ types instead.? Can this >>> be changed to int64_t?? 'long' is generally wrong though. > > This field should be java type since it is accessed from Java Graal: > > http://hg.openjdk.java.net/jdk/jdk/file/f847a42ddc01/src/jdk.internal.vm.compiler/share/classes/org.graalvm.compiler.hotspot/src/org/graalvm/compiler/hotspot/GraalHotSpotVMConfig.java#l401 > > >>> >>> I seem to remember there was code to deal with metadata in oops for redefinition, but I can't find it in this big >>> patch.? I was going to look at that. > > May be it is MetadataHandleBlock::metadata_do() (in jvmciRuntime.cpp)? > >>> >>> Otherwise, I've reviewed the runtime code. > > Thanks, > Vladimir > >>> >>> Coleen >>> >>> On 4/4/19 3:22 AM, Vladimir Kozlov wrote: >>>> New delta: >>>> http://cr.openjdk.java.net/~kvn/8220623/webrev_delta.06/ >>>> >>>> Full: >>>> http://cr.openjdk.java.net/~kvn/8220623/webrev.06/ >>>> >>>> New changes are based on Kim and Stefan suggestions: >>>> >>>> - Moved JVMCI::oops_do() from JNIHandles to places where it should be called. >>>> - Moved JVMCI cleanup task to the beginning of ParallelCleaningTask::work(). >>>> - Used JVMCI_ONLY macro with COMMA. >>>> - Disable JVMCI build on SPARC. We don't use it - neither Graal or AOT are built on SPARC. Disabling also helps to >>>> find missing JVMCI guards. >>>> >>>> I ran hs-tier1-3 testing - it passed (hs-tier3 includes graal testing). >>>> I started hs-tier4..8-graal testing. >>>> I will do performance testing next. 
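[Editorial aside: the "JVMCI_ONLY macro with COMMA" item mentioned in this thread refers to HotSpot's conditional-compilation helpers in utilities/macros.hpp. A simplified, self-contained sketch of how they compose; the literal `#define INCLUDE_JVMCI 1` is only here to make the sketch standalone, since in real builds it is set by the build system:]

```cpp
#include <cassert>

// Simplified versions of the macros from HotSpot's utilities/macros.hpp.
// JVMCI_ONLY(x) expands to x only when JVMCI is compiled in; COMMA lets a
// guarded expansion carry an argument-separating comma through the
// preprocessor, e.g. f(a JVMCI_ONLY(COMMA b)).
#define INCLUDE_JVMCI 1

#if INCLUDE_JVMCI
#define JVMCI_ONLY(code) code
#else
#define JVMCI_ONLY(code)
#endif

#define COMMA ,

// A function whose second parameter exists only in JVMCI-enabled builds.
// With INCLUDE_JVMCI disabled, this declares plain unloaded(int).
int unloaded(int nmethods JVMCI_ONLY(COMMA int jvmci_handles)) {
  return nmethods JVMCI_ONLY(+ jvmci_handles);
}
```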
>>>> >>>> Thanks, >>>> Vladimir >>>> >>>> On 4/3/19 9:54 AM, Vladimir Kozlov wrote: >>>>> On 4/2/19 11:35 PM, Stefan Karlsson wrote: >>>>>> On 2019-04-02 22:41, Vladimir Kozlov wrote: >>>>>>> I ran Kitchensink with G1 and -Xmx8g. I observed that Remark pause times are not consistent even without Graal. >>>>>>> To see effect I added time spent in JVMCI::do_unloading() to GC log (see below [3]). The result is < 1ms - it is >>>>>>> less than 1% of a pause time. >>>>>> >>>>>> Kitchensink isn't really a benchmark, but a stress test. I sent you a private mail how to run these changes >>>>>> through our internal performance test setup. >>>>> >>>>> Okay, I will run performance tests there too. >>>>> >>>>>> >>>>>>> >>>>>>> It will have even less effect since I moved JVMCI::do_unloading() from serial path to parallel worker thread as >>>>>>> Stefan suggested. >>>>>>> >>>>>>> Stefan, are you satisfied with these changes now? >>>>>> >>>>>> Yes, the clean-ups look good. Thanks for cleaning this up. >>>>>> >>>>>> Kim had some extra comments about a few more places where JVMCI_ONLY could be used. >>>>>> >>>>>> I also agree with him that JVMCI::oops_do should not be placed in JNIHandles::oops_do. I think you should put it >>>>>> where you put the AOTLoader::oops_do calls. >>>>> >>>>> Okay. 
>>>>> >>>>> Thanks, >>>>> Vladimir >>>>> >>>>>> >>>>>> Thanks, >>>>>> StefanK >>>>>> >>>>>> >>>>>>> >>>>>>> Here is latest delta update which includes previous [1] delta and >>>>>>> - use CompilerThreadStackSize * 2 for libgraal instead of exact value, >>>>>>> - removed HandleMark added for debugging (reverted changes in jvmtiImpl.cpp), >>>>>>> - added recent jvmci-8 changes to fix registration of native methods in libgraal (jvmciCompilerToVM.cpp) >>>>>>> >>>>>>> http://cr.openjdk.java.net/~kvn/8220623/webrev_delta.05/ >>>>>>> >>>>>>> Thanks, >>>>>>> Vladimir >>>>>>> >>>>>>> [1] http://cr.openjdk.java.net/~kvn/8220623/webrev_delta.04/ >>>>>>> [2] Original webrev http://cr.openjdk.java.net/~kvn/8220623/webrev.03/ >>>>>>> [3] Pauses times from Kitchensink (0.0ms means there were no unloaded classes, 'NNN alive' shows how many >>>>>>> metadata references were processed): >>>>>>> >>>>>>> [1.083s][1554229160638ms][info ][gc,start???? ] GC(2) Pause Remark >>>>>>> [1.085s][1554229160639ms][info ][gc?????????? ] GC(2) JVMCI::do_unloading(): 0 alive 0.000ms >>>>>>> [1.099s][1554229160654ms][info ][gc?????????? ] GC(2) Pause Remark 28M->28M(108M) 16.123ms >>>>>>> >>>>>>> [3.097s][1554229162651ms][info ][gc,start???? ] GC(12) Pause Remark >>>>>>> [3.114s][1554229162668ms][info ][gc?????????? ] GC(12) JVMCI::do_unloading(): 3471 alive 0.164ms >>>>>>> [3.148s][1554229162702ms][info ][gc?????????? ] GC(12) Pause Remark 215M->213M(720M) 51.103ms >>>>>>> >>>>>>> [455.111s][1554229614666ms][info ][gc,phases,start] GC(1095) Phase 1: Mark live objects >>>>>>> [455.455s][1554229615010ms][info ][gc???????????? ] GC(1095) JVMCI::do_unloading(): 4048 alive 0.821ms >>>>>>> [455.456s][1554229615010ms][info ][gc,phases????? ] GC(1095) Phase 1: Mark live objects 344.107ms >>>>>>> >>>>>>> [848.932s][1554230008486ms][info ][gc,phases,start] GC(1860) Phase 1: Mark live objects >>>>>>> [849.248s][1554230008803ms][info ][gc???????????? 
] GC(1860) JVMCI::do_unloading(): 3266 alive 0.470ms >>>>>>> [849.249s][1554230008803ms][info ][gc,phases????? ] GC(1860) Phase 1: Mark live objects 316.527ms >>>>>>> >>>>>>> [1163.778s][1554230323332ms][info ][gc,start?????? ] GC(2627) Pause Remark >>>>>>> [1163.932s][1554230323486ms][info ][gc???????????? ] GC(2627) JVMCI::do_unloading(): 3474 alive 0.642ms >>>>>>> [1163.941s][1554230323496ms][info ][gc???????????? ] GC(2627) Pause Remark 2502M->2486M(4248M) 163.296ms >>>>>>> >>>>>>> [1242.587s][1554230402141ms][info ][gc,phases,start] GC(2734) Phase 1: Mark live objects >>>>>>> [1242.899s][1554230402453ms][info ][gc???????????? ] GC(2734) JVMCI::do_unloading(): 3449 alive 0.570ms >>>>>>> [1242.899s][1554230402453ms][info ][gc,phases????? ] GC(2734) Phase 1: Mark live objects 311.719ms >>>>>>> >>>>>>> [1364.164s][1554230523718ms][info ][gc,phases,start] GC(3023) Phase 1: Mark live objects >>>>>>> [1364.613s][1554230524167ms][info ][gc???????????? ] GC(3023) JVMCI::do_unloading(): 3449 alive 0.000ms >>>>>>> [1364.613s][1554230524167ms][info ][gc,phases????? ] GC(3023) Phase 1: Mark live objects 448.495ms >>>>>>> >>>>>>> [1425.222s][1554230584776ms][info ][gc,phases,start] GC(3151) Phase 1: Mark live objects >>>>>>> [1425.587s][1554230585142ms][info ][gc???????????? ] GC(3151) JVMCI::do_unloading(): 3491 alive 0.882ms >>>>>>> [1425.587s][1554230585142ms][info ][gc,phases????? ] GC(3151) Phase 1: Mark live objects 365.403ms >>>>>>> >>>>>>> [1456.401s][1554230615955ms][info ][gc,phases,start] GC(3223) Phase 1: Mark live objects >>>>>>> [1456.769s][1554230616324ms][info ][gc???????????? ] GC(3223) JVMCI::do_unloading(): 3478 alive 0.616ms >>>>>>> [1456.769s][1554230616324ms][info ][gc,phases????? ] GC(3223) Phase 1: Mark live objects 368.643ms >>>>>>> >>>>>>> [1806.139s][1554230965694ms][info?? ][gc,start?????? ] GC(4014) Pause Remark >>>>>>> [1806.161s][1554230965716ms][info?? ][gc???????????? 
] GC(4014) JVMCI::do_unloading(): 3478 alive 0.000ms >>>>>>> [1806.163s][1554230965717ms][info?? ][gc???????????? ] GC(4014) Pause Remark 1305M->1177M(2772M) 23.190ms >>>>>>> >>>>>>> >>>>>>> >>>>>>> On 4/1/19 12:34 AM, Stefan Karlsson wrote: >>>>>>>> On 2019-03-29 17:55, Vladimir Kozlov wrote: >>>>>>>>> Stefan, >>>>>>>>> >>>>>>>>> Do you have a test (and flags) which can allow me to measure effect of this code on G1 remark pause? >>>>>>>> >>>>>>>> >>>>>>>> -Xlog:gc prints the remark times: >>>>>>>> [4,296s][info][gc?????? ] GC(89) Pause Remark 4M->4M(28M) 36,412ms >>>>>>>> >>>>>>>> StefanK >>>>>>>> >>>>>>>>> >>>>>>>>> Thanks, >>>>>>>>> Vladimir >>>>>>>>> >>>>>>>>> On 3/29/19 12:36 AM, Stefan Karlsson wrote: >>>>>>>>>> On 2019-03-29 03:07, Vladimir Kozlov wrote: >>>>>>>>>>> Hi Stefan, >>>>>>>>>>> >>>>>>>>>>> I collected some data on MetadataHandleBlock. >>>>>>>>>>> >>>>>>>>>>> First, do_unloading() code is executed only when class_unloading_occurred is 'true' - it is rare case. It >>>>>>>>>>> should not affect normal G1 remark pause. >>>>>>>>>> >>>>>>>>>> It's only rare for applications that don't do dynamic class loading and unloading. The applications that do, >>>>>>>>>> will be affected. >>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> Second, I run a test with -Xcomp. I got about 10,000 compilations by Graal and next data at the end of >>>>>>>>>>> execution: >>>>>>>>>>> >>>>>>>>>>> max_blocks = 232 >>>>>>>>>>> max_handles_per_block = 32 (since handles array has 32 elements) >>>>>>>>>>> max_total_alive_values = 4631 >>>>>>>>>> >>>>>>>>>> OK. Thanks for the info. >>>>>>>>>> >>>>>>>>>> StefanK >>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> Thanks, >>>>>>>>>>> Vladimir >>>>>>>>>>> >>>>>>>>>>> On 3/28/19 2:44 PM, Vladimir Kozlov wrote: >>>>>>>>>>>> Thank you, Stefan >>>>>>>>>>>> >>>>>>>>>>>> On 3/28/19 12:54 PM, Stefan Karlsson wrote: >>>>>>>>>>>>> Hi Vladimir, >>>>>>>>>>>>> >>>>>>>>>>>>> I started to check the GC code. 
>>>>>>>>>>>>> >>>>>>>>>>>>> ======================================================================== >>>>>>>>>>>>> I see that you've added guarded includes in the middle of the include list: >>>>>>>>>>>>> ?? #include "gc/shared/strongRootsScope.hpp" >>>>>>>>>>>>> ?? #include "gc/shared/weakProcessor.hpp" >>>>>>>>>>>>> + #if INCLUDE_JVMCI >>>>>>>>>>>>> + #include "jvmci/jvmci.hpp" >>>>>>>>>>>>> + #endif >>>>>>>>>>>>> ?? #include "oops/instanceRefKlass.hpp" >>>>>>>>>>>>> ?? #include "oops/oop.inline.hpp" >>>>>>>>>>>>> >>>>>>>>>>>>> The style we use is to put these conditional includes at the end of the include lists. >>>>>>>>>>>> >>>>>>>>>>>> okay >>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> ======================================================================== >>>>>>>>>>>>> Could you also change the following: >>>>>>>>>>>>> >>>>>>>>>>>>> + #if INCLUDE_JVMCI >>>>>>>>>>>>> +???? // Clean JVMCI metadata handles. >>>>>>>>>>>>> +???? JVMCI::do_unloading(is_alive_closure(), purged_class); >>>>>>>>>>>>> + #endif >>>>>>>>>>>>> >>>>>>>>>>>>> to: >>>>>>>>>>>>> +???? // Clean JVMCI metadata handles. >>>>>>>>>>>>> + JVMCI_ONLY(JVMCI::do_unloading(is_alive_closure(), purged_class);) >>>>>>>>>>>>> >>>>>>>>>>>>> to get rid of some of the line noise in the GC files. >>>>>>>>>>>> >>>>>>>>>>>> okay >>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> ======================================================================== >>>>>>>>>>>>> In the future we will need version of JVMCI::do_unloading that supports concurrent cleaning for ZGC. >>>>>>>>>>>> >>>>>>>>>>>> Yes, we need to support concurrent cleaning in a future. >>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> ======================================================================== >>>>>>>>>>>>> What's the performance impact for G1 remark pause with this serial walk over the MetadataHandleBlock? 
>>>>>>>>>>>>> >>>>>>>>>>>>> 3275 void G1CollectedHeap::complete_cleaning(BoolObjectClosure* is_alive, >>>>>>>>>>>>> 3276 bool class_unloading_occurred) { >>>>>>>>>>>>> 3277?? uint num_workers = workers()->active_workers(); >>>>>>>>>>>>> 3278?? ParallelCleaningTask unlink_task(is_alive, num_workers, class_unloading_occurred, false); >>>>>>>>>>>>> 3279 workers()->run_task(&unlink_task); >>>>>>>>>>>>> 3280 #if INCLUDE_JVMCI >>>>>>>>>>>>> 3281?? // No parallel processing of JVMCI metadata handles for now. >>>>>>>>>>>>> 3282?? JVMCI::do_unloading(is_alive, class_unloading_occurred); >>>>>>>>>>>>> 3283 #endif >>>>>>>>>>>>> 3284 } >>>>>>>>>>>> >>>>>>>>>>>> There should not be impact if Graal is not used. Only cost of call (which most likely is inlined in product >>>>>>>>>>>> VM) and check: >>>>>>>>>>>> >>>>>>>>>>>> http://hg.openjdk.java.net/metropolis/dev/file/530fc1427d02/src/hotspot/share/jvmci/jvmciRuntime.cpp#l1237 >>>>>>>>>>>> >>>>>>>>>>>> If Graal is used it should not have big impact since these metadata has regular pattern (32 handles per >>>>>>>>>>>> array and array per MetadataHandleBlock block which are linked in list) and not large. >>>>>>>>>>>> If there will be noticeable impact - we will work on it as you suggested by using ParallelCleaningTask. >>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> ======================================================================== >>>>>>>>>>>>> Did you consider adding it as a task for one of the worker threads to execute in ParallelCleaningTask? >>>>>>>>>>>>> >>>>>>>>>>>>> See how other tasks are claimed by one worker: >>>>>>>>>>>>> void KlassCleaningTask::work() { >>>>>>>>>>>>> ?? ResourceMark rm; >>>>>>>>>>>>> >>>>>>>>>>>>> ?? // One worker will clean the subklass/sibling klass tree. >>>>>>>>>>>>> ?? if (claim_clean_klass_tree_task()) { >>>>>>>>>>>>> ???? Klass::clean_subklass_tree(); >>>>>>>>>>>>> ?? 
} >>>>>>>>>>>> >>>>>>>>>>>> These changes were ported from JDK8u based changes in graal-jvmci-8 and there are no ParallelCleaningTask in >>>>>>>>>>>> JDK8. >>>>>>>>>>>> >>>>>>>>>>>> Your suggestion is interesting and I agree that we should investigate it. >>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> ======================================================================== >>>>>>>>>>>>> In MetadataHandleBlock::do_unloading: >>>>>>>>>>>>> >>>>>>>>>>>>> +??????? if (klass->class_loader_data()->is_unloading()) { >>>>>>>>>>>>> +????????? // This needs to be marked so that it's no longer scanned >>>>>>>>>>>>> +????????? // but can't be put on the free list yet. The >>>>>>>>>>>>> +????????? // ReferenceCleaner will set this to NULL and >>>>>>>>>>>>> +????????? // put it on the free list. >>>>>>>>>>>>> >>>>>>>>>>>>> I couldn't find the ReferenceCleaner in the patch or in the source. Where can I find this code? >>>>>>>>>>>> >>>>>>>>>>>> I think it is typo (I will fix it) - it references new HandleCleaner class: >>>>>>>>>>>> >>>>>>>>>>>> http://cr.openjdk.java.net/~kvn/8220623/webrev.03/src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/HandleCleaner.java.html >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> Thanks, >>>>>>>>>>>> Vladimir >>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> Thanks, >>>>>>>>>>>>> StefanK >>>>>>>>>>>>> >>>>>>>>>>>>> On 2019-03-28 20:15, Vladimir Kozlov wrote: >>>>>>>>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8220623 >>>>>>>>>>>>>> http://cr.openjdk.java.net/~kvn/8220623/webrev.03/ >>>>>>>>>>>>>> >>>>>>>>>>>>>> Update JVMCI to support pre-compiled as shared library Graal. 
>>>>>>>>>>>>>> Using AOTed Graal can offer benefits including: >>>>>>>>>>>>>> - fast startup >>>>>>>>>>>>>> - compile time similar to native JIT compilers (C2) >>>>>>>>>>>>>> - memory usage disjoint from the application Java heap >>>>>>>>>>>>>> - no profile pollution of JDK code used by the application >>>>>>>>>>>>>> >>>>>>>>>>>>>> This is the JDK 13 port of JVMCI changes done in graal-jvmci-8 [1], up to date. >>>>>>>>>>>>>> Changes were collected in Metropolis repo [2] and tested there. >>>>>>>>>>>>>> >>>>>>>>>>>>>> Changes were reviewed by Oracle Labs (authors of JVMCI and Graal) and our compiler group. >>>>>>>>>>>>>> Changes in shared code are guarded by #if INCLUDE_JVMCI and JVMCI flags. >>>>>>>>>>>>>> >>>>>>>>>>>>>> I ran tier1-tier8 (which includes HotSpot and JDK tests) and it was clean. In this set Graal was tested >>>>>>>>>>>>>> only in tier3. >>>>>>>>>>>>>> >>>>>>>>>>>>>> And I ran all hs-tier3-graal .. hs-tier8-graal Graal tests available in our system. Several issues were >>>>>>>>>>>>>> found which were present before these changes. >>>>>>>>>>>>>> >>>>>>>>>>>>>> Thanks, >>>>>>>>>>>>>> Vladimir >>>>>>>>>>>>>> >>>>>>>>>>>>>> [1] https://github.com/graalvm/graal-jvmci-8/commit/49ff2045fb603e35516a3a427d8023c00e1607af >>>>>>>>>>>>>> [2] http://hg.openjdk.java.net/metropolis/dev/ >>>>>>>>>>>>> >>> >> From doug.simon at oracle.com Mon Apr 29 17:57:15 2019 From: doug.simon at oracle.com (Doug Simon) Date: Mon, 29 Apr 2019 19:57:15 +0200 Subject: RFR(M) 8219403: JVMCIRuntime::adjust_comp_level should be replaced In-Reply-To: <0c7df5d0-7638-7b44-65fd-18fe1279a0b3@oracle.com> References: <3f0271ca-b8e6-dfcb-8787-8c36f49265fe@oracle.com> <91f3d9be-5612-2556-ae77-a60102fc07ae@oracle.com> <0c7df5d0-7638-7b44-65fd-18fe1279a0b3@oracle.com> Message-ID: <88442FF7-4E94-4F34-9FE6-8507672C2FAE@oracle.com> Looks good to me. -Doug > On 26 Apr 2019, at 00:00, dean.long at oracle.com wrote: > > Fixed. Thanks for the review. 
> > dl > > On 4/25/19 1:09 PM, Vladimir Kozlov wrote: >> In general looks good. Only minor issue: >> >> in jvmciJavaClasses.hpp indent of '\' is not adjusted in changes line. >> >> Thanks, >> Vladimir >> >> On 4/25/19 11:53 AM, dean.long at oracle.com wrote: >>> https://bugs.openjdk.java.net/browse/JDK-8219403 >>> http://cr.openjdk.java.net/~dlong/8219403/webrev.2/ >>> >>> This change removes the problematic JVMCIRuntime::adjust_comp_level. It >>> is based on previous work in Graal, graal-jvmci-8, and Metropolis by Tom, >>> Doug, and Vladimir. >>> >>> I also problem-listed several tests that were causing noise in the test results. >>> >>> dl > From tom.rodriguez at oracle.com Mon Apr 29 20:21:52 2019 From: tom.rodriguez at oracle.com (Tom Rodriguez) Date: Mon, 29 Apr 2019 13:21:52 -0700 Subject: RFR(M) 8219403: JVMCIRuntime::adjust_comp_level should be replaced In-Reply-To: <3f0271ca-b8e6-dfcb-8787-8c36f49265fe@oracle.com> References: <3f0271ca-b8e6-dfcb-8787-8c36f49265fe@oracle.com> Message-ID: Looks good. tom dean.long at oracle.com wrote on 4/25/19 11:53 AM: > https://bugs.openjdk.java.net/browse/JDK-8219403 > http://cr.openjdk.java.net/~dlong/8219403/webrev.2/ > > This change removes the problematic JVMCIRuntime::adjust_comp_level.? It > is based on previous work in Graal, graal-jvmci-8, and Metropolis by Tom, > Doug, and Vladimir. > > I also problem-listed several tests that were causing noise in the test > results. > > dl From dean.long at oracle.com Tue Apr 30 03:29:19 2019 From: dean.long at oracle.com (dean.long at oracle.com) Date: Mon, 29 Apr 2019 20:29:19 -0700 Subject: RFR(M) 8219403: JVMCIRuntime::adjust_comp_level should be replaced In-Reply-To: References: <3f0271ca-b8e6-dfcb-8787-8c36f49265fe@oracle.com> Message-ID: Thanks Tom and Doug for the reviews. dl On 4/29/19 1:21 PM, Tom Rodriguez wrote: > Looks good. 
> > tom > > dean.long at oracle.com wrote on 4/25/19 11:53 AM: >> https://bugs.openjdk.java.net/browse/JDK-8219403 >> http://cr.openjdk.java.net/~dlong/8219403/webrev.2/ >> >> This change removes the problematic JVMCIRuntime::adjust_comp_level.? It >> is based on previous work in Graal, graal-jvmci-8, and Metropolis by >> Tom, >> Doug, and Vladimir. >> >> I also problem-listed several tests that were causing noise in the >> test results. >> >> dl From vladimir.kozlov at oracle.com Tue Apr 30 21:37:22 2019 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Tue, 30 Apr 2019 14:37:22 -0700 Subject: [13] RFR(L) 8220623: [JVMCI] Update JVMCI to support JVMCI based Compiler compiled into shared library In-Reply-To: References: <4b759550-f185-724f-7139-9bab648a1966@oracle.com> <119335f1-acb7-fffc-f38a-e50a96b73b3c@oracle.com> <2d206374-cfeb-9b1b-2838-e4e0a774e89d@oracle.com> <8063fb37-f1b0-8b93-bbe8-4dbeeaa54959@oracle.com> <17233985-18c7-305e-5556-fe2b38926b71@oracle.com> <3514c74d-5f6a-61cc-ebea-b9564df61673@oracle.com> <63b8e1d2-3516-88f5-02ac-828dd15baf83@oracle.com> <39774cdd-de9e-c878-4a5a-f6595a93859f@oracle.com> Message-ID: <9605842f-416b-4ed9-7098-0b390b3d6a56@oracle.com> Thank you, Coleen, for reviews of these huge changes. Best regards, Vladimir K On 4/30/19 2:28 PM, coleen.phillimore at oracle.com wrote: > > This looks good to me.? Thank you for addressing my comments. > Coleen > > On 4/26/19 6:46 PM, Vladimir Kozlov wrote: >> Hi >> >> I have 2 new deltas for easy review. >> >> Delta 1 is mostly JVMCI HotSpot refactoring and cleanup: >> http://cr.openjdk.java.net/~kvn/8220623/webrev_delta1.07/ >> >> - Cleanup #include jvmci files. >> - Removed BoolObjectClosure parameter from JVMCI::do_unloading() since it is not used. In JDK 13 this parameter is >> removed from other places too. >> - Added mtJVMCI type to track memory used by JVMCI. >> - Passed Handles as constant references. 
>> - Moved JNIAccessMark, JVMCIObject, MetadataHandleBlock class to separate new files. >> - Moved JVMCI methods bodies from jvmciRuntime.cpp into new jvmci.cpp file. >> - Moved bodies of some JVMCIEnv methods from .hpp into jvmciEnv.cpp file. They use JNIAccessMark and >> ThreadToNativeFromVM and I can't use them in header file because they require #include inline.hpp files. >> - Moved bodies of some HotSpotJVMCI methods into jvmciJavaClasses.cpp file because, again, they need >> jniHandles.inline.hpp. >> - Moved JVMCICompileState class definition to the beginning of jvmciEnv.hpp file. >> >> Delta 2: >> http://cr.openjdk.java.net/~kvn/8220623/webrev_delta2.07/ >> >> - Changed MetadataHandleBlock fields which are used only by one instance to static. >> - Renamed field _jmetadata::_handle to _value and corresponding access methods because it was confusing: >> handle->handle(). >> - Switched from JNIHandleBlock to OopStorage use for _object_handles. >> - Additional JVMCI Java side fix for libgraal. >> >> Full: >> http://cr.openjdk.java.net/~kvn/8220623/webrev.07/ >> >> I think I addressed all comments I received so far. >> >> Thanks, >> Vladimir >> >> On 4/9/19 7:25 PM, Vladimir Kozlov wrote: >>> Thank you, Coleen >>> >>> On 4/9/19 1:36 PM, coleen.phillimore at oracle.com wrote: >>>> >>>> I think I missed graal-dev with this reply. I have a few other comments. >>>> >>>> +void MetadataHandleBlock::do_unloading(BoolObjectClosure* is_alive) { >>>> >>>> >>>> We've removed the is_alive parameter from all do_unloading, and it appears unused here also. >>> >>> Yes, I can remove it. >>> >>>> >>>> I don't know about this MetadataHandles block. It seems that it could be a concurrent hashtable with a >>>> WeakHandle<> if it's for jdk11 and beyond. Kim might have mentioned this (I haven't read all the replies >>>> thoroughly) but JNIHandleBlock wasn't MT safe, and the new OopStorage is safe and scalable. >>> >>> Yes, Kim also suggested OopStorage.
I did not get into that part yet but I will definitely do. >>> >>>> >>>> +  jmetadata allocate_handle(methodHandle handle)       { return allocate_metadata_handle(handle()); } >>>> +  jmetadata allocate_handle(constantPoolHandle handle) { return allocate_metadata_handle(handle()); } >>>> >>>> +CompLevel JVMCI::adjust_comp_level(methodHandle method, bool is_osr, CompLevel level, JavaThread* thread) { >>>> >>>> +JVMCIObject JVMCIEnv::new_StackTraceElement(methodHandle method, int bci, JVMCI_TRAPS) { >>>> >>>> +JVMCIObject JVMCIEnv::new_HotSpotNmethod(methodHandle method, const char* name, jboolean isDefault, jlong >>>> compileId, JVMCI_TRAPS) { >>>> >>>> Passing metadata Handles by copy will call the copy constructor and destructor for these parameters unnecessarily. >>>> They should be passed as *const* references to avoid this. >>> >>> Okay. >>> >>>> >>>> +class MetadataHandleBlock : public CHeapObj { >>>> >>>> >>>> There should be a better mt for this. mtCompiler seems appropriate here. Depending on how many others of these, >>>> you could add an mtJVMCI. >>> >>> mtJVMCI is a good suggestion. >>> >>>> >>>> +            if (TraceNMethodInstalls) { >>>> >>>> >>>> We've had Unified Logging in the sources for a long time now. New code should use UL rather than adding a >>>> TraceSomething option. I understand it's supposed to be shared with JDK8 code but it seems that you're forward >>>> porting what looks like old code into the repository. >>> >>> Yes, we should use UL for this. >>> >>> Existing JIT code (ciEnv.cpp) is still not using UL for this: >>> http://hg.openjdk.java.net/jdk/jdk/file/f847a42ddc01/src/hotspot/share/ci/ciEnv.cpp#l1075 >>> >>> Maybe I should update it too ...
>>> >>>> Coleen >>>> >>>> On 4/9/19 4:00 PM, coleen.phillimore at oracle.com wrote: >>>>> >>>>> http://cr.openjdk.java.net/~kvn/8220623/webrev.06/src/hotspot/share/classfile/classFileParser.cpp.udiff.html >>>>> >>>>> It appears this change is to implement https://bugs.openjdk.java.net/browse/JDK-8193513 which we closed as WNF. If >>>>> you want this change, remove it from this giant patch and reopen and submit a separate patch for this bug. >>> >>> Thank you for pointing it out. I will do as you suggested. >>> >>>>> >>>>> It shouldn't be conditional on JVMCI and should use the normal unified logging mechanism. >>> >>> Okay. >>> >>>>> >>>>> http://cr.openjdk.java.net/~kvn/8220623/webrev.06/src/hotspot/share/runtime/thread.hpp.udiff.html >>>>> >>>>> ! jlong _pending_failed_speculation; >>>>> >>>>> >>>>> We've been trying to remove and avoid java types in hotspot code and use the appropriate C++ types instead. Can >>>>> this be changed to int64_t? 'long' is generally wrong though. >>> >>> This field should be a java type since it is accessed from Java Graal: >>> >>> http://hg.openjdk.java.net/jdk/jdk/file/f847a42ddc01/src/jdk.internal.vm.compiler/share/classes/org.graalvm.compiler.hotspot/src/org/graalvm/compiler/hotspot/GraalHotSpotVMConfig.java#l401 >>> >>> >>>>> >>>>> I seem to remember there was code to deal with metadata in oops for redefinition, but I can't find it in this big >>>>> patch. I was going to look at that. >>> >>> Maybe it is MetadataHandleBlock::metadata_do() (in jvmciRuntime.cpp)? >>> >>>>> >>>>> Otherwise, I've reviewed the runtime code.
>>> >>> Thanks, >>> Vladimir >>> >>>>> >>>>> Coleen >>>>> >>>>> On 4/4/19 3:22 AM, Vladimir Kozlov wrote: >>>>>> New delta: >>>>>> http://cr.openjdk.java.net/~kvn/8220623/webrev_delta.06/ >>>>>> >>>>>> Full: >>>>>> http://cr.openjdk.java.net/~kvn/8220623/webrev.06/ >>>>>> >>>>>> New changes are based on Kim and Stefan suggestions: >>>>>> >>>>>> - Moved JVMCI::oops_do() from JNIHandles to places where it should be called. >>>>>> - Moved JVMCI cleanup task to the beginning of ParallelCleaningTask::work(). >>>>>> - Used JVMCI_ONLY macro with COMMA. >>>>>> - Disabled JVMCI build on SPARC. We don't use it - neither Graal nor AOT is built on SPARC. Disabling also helps to >>>>>> find missing JVMCI guards. >>>>>> >>>>>> I ran hs-tier1-3 testing - it passed (hs-tier3 includes graal testing). >>>>>> I started hs-tier4..8-graal testing. >>>>>> I will do performance testing next. >>>>>> >>>>>> Thanks, >>>>>> Vladimir >>>>>> >>>>>> On 4/3/19 9:54 AM, Vladimir Kozlov wrote: >>>>>>> On 4/2/19 11:35 PM, Stefan Karlsson wrote: >>>>>>>> On 2019-04-02 22:41, Vladimir Kozlov wrote: >>>>>>>>> I ran Kitchensink with G1 and -Xmx8g. I observed that Remark pause times are not consistent even without Graal. >>>>>>>>> To see the effect I added time spent in JVMCI::do_unloading() to the GC log (see below [3]). The result is < 1ms - it >>>>>>>>> is less than 1% of a pause time. >>>>>>>> >>>>>>>> Kitchensink isn't really a benchmark, but a stress test. I sent you a private mail on how to run these changes >>>>>>>> through our internal performance test setup. >>>>>>> >>>>>>> Okay, I will run performance tests there too. >>>>>>> >>>>>>>> >>>>>>>>> >>>>>>>>> It will have even less effect since I moved JVMCI::do_unloading() from the serial path to a parallel worker thread as >>>>>>>>> Stefan suggested. >>>>>>>>> >>>>>>>>> Stefan, are you satisfied with these changes now? >>>>>>>> >>>>>>>> Yes, the clean-ups look good. Thanks for cleaning this up.
>>>>>>>> >>>>>>>> Kim had some extra comments about a few more places where JVMCI_ONLY could be used. >>>>>>>> >>>>>>>> I also agree with him that JVMCI::oops_do should not be placed in JNIHandles::oops_do. I think you should put it >>>>>>>> where you put the AOTLoader::oops_do calls. >>>>>>> >>>>>>> Okay. >>>>>>> >>>>>>> Thanks, >>>>>>> Vladimir >>>>>>> >>>>>>>> >>>>>>>> Thanks, >>>>>>>> StefanK >>>>>>>> >>>>>>>> >>>>>>>>> >>>>>>>>> Here is the latest delta update which includes the previous [1] delta and >>>>>>>>> - use CompilerThreadStackSize * 2 for libgraal instead of exact value, >>>>>>>>> - removed HandleMark added for debugging (reverted changes in jvmtiImpl.cpp), >>>>>>>>> - added recent jvmci-8 changes to fix registration of native methods in libgraal (jvmciCompilerToVM.cpp) >>>>>>>>> >>>>>>>>> http://cr.openjdk.java.net/~kvn/8220623/webrev_delta.05/ >>>>>>>>> >>>>>>>>> Thanks, >>>>>>>>> Vladimir >>>>>>>>> >>>>>>>>> [1] http://cr.openjdk.java.net/~kvn/8220623/webrev_delta.04/ >>>>>>>>> [2] Original webrev http://cr.openjdk.java.net/~kvn/8220623/webrev.03/ >>>>>>>>> [3] Pause times from Kitchensink (0.0ms means there were no unloaded classes, 'NNN alive' shows how many >>>>>>>>> metadata references were processed): >>>>>>>>> >>>>>>>>> [1.083s][1554229160638ms][info ][gc,start     ] GC(2) Pause Remark >>>>>>>>> [1.085s][1554229160639ms][info ][gc           ] GC(2) JVMCI::do_unloading(): 0 alive 0.000ms >>>>>>>>> [1.099s][1554229160654ms][info ][gc           ] GC(2) Pause Remark 28M->28M(108M) 16.123ms >>>>>>>>> >>>>>>>>> [3.097s][1554229162651ms][info ][gc,start     ] GC(12) Pause Remark >>>>>>>>> [3.114s][1554229162668ms][info ][gc           ] GC(12) JVMCI::do_unloading(): 3471 alive 0.164ms >>>>>>>>> [3.148s][1554229162702ms][info ][gc           ] GC(12) Pause Remark 215M->213M(720M) 51.103ms >>>>>>>>> >>>>>>>>> [455.111s][1554229614666ms][info ][gc,phases,start] GC(1095) Phase 1: Mark live objects >>>>>>>>> [455.455s][1554229615010ms][info ][gc
] GC(1095) JVMCI::do_unloading(): 4048 alive 0.821ms >>>>>>>>> [455.456s][1554229615010ms][info ][gc,phases      ] GC(1095) Phase 1: Mark live objects 344.107ms >>>>>>>>> >>>>>>>>> [848.932s][1554230008486ms][info ][gc,phases,start] GC(1860) Phase 1: Mark live objects >>>>>>>>> [849.248s][1554230008803ms][info ][gc             ] GC(1860) JVMCI::do_unloading(): 3266 alive 0.470ms >>>>>>>>> [849.249s][1554230008803ms][info ][gc,phases      ] GC(1860) Phase 1: Mark live objects 316.527ms >>>>>>>>> >>>>>>>>> [1163.778s][1554230323332ms][info ][gc,start       ] GC(2627) Pause Remark >>>>>>>>> [1163.932s][1554230323486ms][info ][gc             ] GC(2627) JVMCI::do_unloading(): 3474 alive 0.642ms >>>>>>>>> [1163.941s][1554230323496ms][info ][gc             ] GC(2627) Pause Remark 2502M->2486M(4248M) 163.296ms >>>>>>>>> >>>>>>>>> [1242.587s][1554230402141ms][info ][gc,phases,start] GC(2734) Phase 1: Mark live objects >>>>>>>>> [1242.899s][1554230402453ms][info ][gc             ] GC(2734) JVMCI::do_unloading(): 3449 alive 0.570ms >>>>>>>>> [1242.899s][1554230402453ms][info ][gc,phases      ] GC(2734) Phase 1: Mark live objects 311.719ms >>>>>>>>> >>>>>>>>> [1364.164s][1554230523718ms][info ][gc,phases,start] GC(3023) Phase 1: Mark live objects >>>>>>>>> [1364.613s][1554230524167ms][info ][gc             ] GC(3023) JVMCI::do_unloading(): 3449 alive 0.000ms >>>>>>>>> [1364.613s][1554230524167ms][info ][gc,phases      ] GC(3023) Phase 1: Mark live objects 448.495ms >>>>>>>>> >>>>>>>>> [1425.222s][1554230584776ms][info ][gc,phases,start] GC(3151) Phase 1: Mark live objects >>>>>>>>> [1425.587s][1554230585142ms][info ][gc             ] GC(3151) JVMCI::do_unloading(): 3491 alive 0.882ms >>>>>>>>> [1425.587s][1554230585142ms][info ][gc,phases      ] GC(3151) Phase 1: Mark live objects 365.403ms >>>>>>>>> >>>>>>>>> [1456.401s][1554230615955ms][info ][gc,phases,start] GC(3223) Phase 1: Mark live objects >>>>>>>>> [1456.769s][1554230616324ms][info ][gc
] GC(3223) JVMCI::do_unloading(): 3478 alive 0.616ms >>>>>>>>> [1456.769s][1554230616324ms][info ][gc,phases      ] GC(3223) Phase 1: Mark live objects 368.643ms >>>>>>>>> >>>>>>>>> [1806.139s][1554230965694ms][info   ][gc,start ] GC(4014) Pause Remark >>>>>>>>> [1806.161s][1554230965716ms][info   ][gc ] GC(4014) JVMCI::do_unloading(): 3478 alive 0.000ms >>>>>>>>> [1806.163s][1554230965717ms][info   ][gc ] GC(4014) Pause Remark 1305M->1177M(2772M) 23.190ms >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> On 4/1/19 12:34 AM, Stefan Karlsson wrote: >>>>>>>>>> On 2019-03-29 17:55, Vladimir Kozlov wrote: >>>>>>>>>>> Stefan, >>>>>>>>>>> >>>>>>>>>>> Do you have a test (and flags) which can allow me to measure the effect of this code on the G1 remark pause? >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> -Xlog:gc prints the remark times: >>>>>>>>>> [4,296s][info][gc       ] GC(89) Pause Remark 4M->4M(28M) 36,412ms >>>>>>>>>> >>>>>>>>>> StefanK >>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> Thanks, >>>>>>>>>>> Vladimir >>>>>>>>>>> >>>>>>>>>>> On 3/29/19 12:36 AM, Stefan Karlsson wrote: >>>>>>>>>>>> On 2019-03-29 03:07, Vladimir Kozlov wrote: >>>>>>>>>>>>> Hi Stefan, >>>>>>>>>>>>> >>>>>>>>>>>>> I collected some data on MetadataHandleBlock. >>>>>>>>>>>>> >>>>>>>>>>>>> First, do_unloading() code is executed only when class_unloading_occurred is 'true' - it is a rare case. It >>>>>>>>>>>>> should not affect the normal G1 remark pause. >>>>>>>>>>>> >>>>>>>>>>>> It's only rare for applications that don't do dynamic class loading and unloading. The applications that do >>>>>>>>>>>> will be affected. >>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> Second, I ran a test with -Xcomp. I got about 10,000 compilations by Graal and the following data at the end of >>>>>>>>>>>>> execution: >>>>>>>>>>>>> >>>>>>>>>>>>> max_blocks = 232 >>>>>>>>>>>>> max_handles_per_block = 32 (since handles array has 32 elements) >>>>>>>>>>>>> max_total_alive_values = 4631 >>>>>>>>>>>> >>>>>>>>>>>> OK. Thanks for the info.
>>>>>>>>>>>> >>>>>>>>>>>> StefanK >>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> Thanks, >>>>>>>>>>>>> Vladimir >>>>>>>>>>>>> >>>>>>>>>>>>> On 3/28/19 2:44 PM, Vladimir Kozlov wrote: >>>>>>>>>>>>>> Thank you, Stefan >>>>>>>>>>>>>> >>>>>>>>>>>>>> On 3/28/19 12:54 PM, Stefan Karlsson wrote: >>>>>>>>>>>>>>> Hi Vladimir, >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> I started to check the GC code. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> ======================================================================== >>>>>>>>>>>>>>> I see that you've added guarded includes in the middle of the include list: >>>>>>>>>>>>>>>    #include "gc/shared/strongRootsScope.hpp" >>>>>>>>>>>>>>>    #include "gc/shared/weakProcessor.hpp" >>>>>>>>>>>>>>> + #if INCLUDE_JVMCI >>>>>>>>>>>>>>> + #include "jvmci/jvmci.hpp" >>>>>>>>>>>>>>> + #endif >>>>>>>>>>>>>>>    #include "oops/instanceRefKlass.hpp" >>>>>>>>>>>>>>>    #include "oops/oop.inline.hpp" >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> The style we use is to put these conditional includes at the end of the include lists. >>>>>>>>>>>>>> >>>>>>>>>>>>>> okay >>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> ======================================================================== >>>>>>>>>>>>>>> Could you also change the following: >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> + #if INCLUDE_JVMCI >>>>>>>>>>>>>>> +     // Clean JVMCI metadata handles. >>>>>>>>>>>>>>> + JVMCI::do_unloading(is_alive_closure(), purged_class); >>>>>>>>>>>>>>> + #endif >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> to: >>>>>>>>>>>>>>> +     // Clean JVMCI metadata handles. >>>>>>>>>>>>>>> + JVMCI_ONLY(JVMCI::do_unloading(is_alive_closure(), purged_class);) >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> to get rid of some of the line noise in the GC files. >>>>>>>>>>>>>> >>>>>>>>>>>>>> okay >>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> ======================================================================== >>>>>>>>>>>>>>> In the future we will need version of JVMCI::do_unloading that supports concurrent cleaning for ZGC.
>>>>>>>>>>>>>> >>>>>>>>>>>>>> Yes, we need to support concurrent cleaning in the future. >>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> ======================================================================== >>>>>>>>>>>>>>> What's the performance impact for the G1 remark pause with this serial walk over the MetadataHandleBlock? >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> 3275 void G1CollectedHeap::complete_cleaning(BoolObjectClosure* is_alive, >>>>>>>>>>>>>>> 3276 bool class_unloading_occurred) { >>>>>>>>>>>>>>> 3277   uint num_workers = workers()->active_workers(); >>>>>>>>>>>>>>> 3278   ParallelCleaningTask unlink_task(is_alive, num_workers, class_unloading_occurred, false); >>>>>>>>>>>>>>> 3279 workers()->run_task(&unlink_task); >>>>>>>>>>>>>>> 3280 #if INCLUDE_JVMCI >>>>>>>>>>>>>>> 3281   // No parallel processing of JVMCI metadata handles for now. >>>>>>>>>>>>>>> 3282   JVMCI::do_unloading(is_alive, class_unloading_occurred); >>>>>>>>>>>>>>> 3283 #endif >>>>>>>>>>>>>>> 3284 } >>>>>>>>>>>>>> >>>>>>>>>>>>>> There should be no impact if Graal is not used. Only the cost of a call (which most likely is inlined in the >>>>>>>>>>>>>> product VM) and a check: >>>>>>>>>>>>>> >>>>>>>>>>>>>> http://hg.openjdk.java.net/metropolis/dev/file/530fc1427d02/src/hotspot/share/jvmci/jvmciRuntime.cpp#l1237 >>>>>>>>>>>>>> >>>>>>>>>>>>>> If Graal is used it should not have a big impact since this metadata has a regular pattern (32 handles per >>>>>>>>>>>>>> array and an array per MetadataHandleBlock block, which are linked in a list) and is not large. >>>>>>>>>>>>>> If there is a noticeable impact - we will work on it as you suggested by using ParallelCleaningTask. >>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> ======================================================================== >>>>>>>>>>>>>>> Did you consider adding it as a task for one of the worker threads to execute in ParallelCleaningTask?
>>>>>>>>>>>>>>> >>>>>>>>>>>>>>> See how other tasks are claimed by one worker: >>>>>>>>>>>>>>> void KlassCleaningTask::work() { >>>>>>>>>>>>>>>    ResourceMark rm; >>>>>>>>>>>>>>> >>>>>>>>>>>>>>>    // One worker will clean the subklass/sibling klass tree. >>>>>>>>>>>>>>>    if (claim_clean_klass_tree_task()) { >>>>>>>>>>>>>>>      Klass::clean_subklass_tree(); >>>>>>>>>>>>>>>    } >>>>>>>>>>>>>> >>>>>>>>>>>>>> These changes were ported from JDK8u-based changes in graal-jvmci-8 and there is no ParallelCleaningTask >>>>>>>>>>>>>> in JDK8. >>>>>>>>>>>>>> >>>>>>>>>>>>>> Your suggestion is interesting and I agree that we should investigate it. >>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> ======================================================================== >>>>>>>>>>>>>>> In MetadataHandleBlock::do_unloading: >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> +        if (klass->class_loader_data()->is_unloading()) { >>>>>>>>>>>>>>> +          // This needs to be marked so that it's no longer scanned >>>>>>>>>>>>>>> +          // but can't be put on the free list yet. The >>>>>>>>>>>>>>> +          // ReferenceCleaner will set this to NULL and >>>>>>>>>>>>>>> +          // put it on the free list. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> I couldn't find the ReferenceCleaner in the patch or in the source. Where can I find this code?
>>>>>>>>>>>>>> >>>>>>>>>>>>>> I think it is a typo (I will fix it) - it references the new HandleCleaner class: >>>>>>>>>>>>>> >>>>>>>>>>>>>> http://cr.openjdk.java.net/~kvn/8220623/webrev.03/src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/HandleCleaner.java.html >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> Thanks, >>>>>>>>>>>>>> Vladimir >>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Thanks, >>>>>>>>>>>>>>> StefanK >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> On 2019-03-28 20:15, Vladimir Kozlov wrote: >>>>>>>>>>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8220623 >>>>>>>>>>>>>>>> http://cr.openjdk.java.net/~kvn/8220623/webrev.03/ >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Update JVMCI to support pre-compiled as shared library Graal. >>>>>>>>>>>>>>>> Using AOT'ed Graal can offer benefits including: >>>>>>>>>>>>>>>>  - fast startup >>>>>>>>>>>>>>>>  - compile time similar to native JIT compilers (C2) >>>>>>>>>>>>>>>>  - memory usage disjoint from the application Java heap >>>>>>>>>>>>>>>>  - no profile pollution of JDK code used by the application >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> This is a JDK13 port of the JVMCI changes done in graal-jvmci-8 [1], up to date. >>>>>>>>>>>>>>>> Changes were collected in the Metropolis repo [2] and tested there. >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Changes were reviewed by Oracle Labs (authors of JVMCI and Graal) and our compiler group. >>>>>>>>>>>>>>>> Changes in shared code are guarded by #if INCLUDE_JVMCI and JVMCI flags. >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> I ran tier1-tier8 (which includes HotSpot and JDK tests) and it was clean. In this set Graal was tested >>>>>>>>>>>>>>>> only in tier3. >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> And I ran all hs-tier3-graal .. hs-tier8-graal Graal tests available in our system. Several issues were >>>>>>>>>>>>>>>> found which were present before these changes.
>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Thanks, >>>>>>>>>>>>>>>> Vladimir >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> [1] https://github.com/graalvm/graal-jvmci-8/commit/49ff2045fb603e35516a3a427d8023c00e1607af >>>>>>>>>>>>>>>> [2] http://hg.openjdk.java.net/metropolis/dev/ >>>>>>>>>>>>>>> >>>>> >>>> >