From Peter.B.Kessler at Oracle.COM Fri Mar 1 00:49:02 2019 From: Peter.B.Kessler at Oracle.COM (Peter B. Kessler) Date: Thu, 28 Feb 2019 16:49:02 -0800 Subject: Thoughts about SubstrateVM GC In-Reply-To: References: <1e4af634-b642-a1f7-f9ea-4f1cb413c823@redhat.com> Message-ID: <6d33e798-14bc-6888-71ab-8008c8adfd36@Oracle.COM> Mostly "what Christian said". Some additional details and comments inline. ... peter On 02/28/19 02:09 PM, Christian Wimmer wrote: > Hi Roman, > > Thanks for your interest in Substrate VM! > > On 2/28/19 12:59, Roman Kennke wrote: >> Hello all, >> >> I hope this is the right mailing list to discuss SubstrateVM? If not, >> please redirect me. > > There is no better list, at least at this time. > >> During the last couple of days, I did have a closer look at >> SubstrateVM's GC, and also did some experiments. I would like to >> summarize what I found (so that you can correct me if I'm wrong), and >> make a case for some improvements that I would like to work on. >> >> Here's my findings so far: >> Substrate GC is a 2-generation, STW, single-threaded GC. > > yes > >> The young generation is a single space. When collected, all live objects >> get scavenged out, directly into the old generation. > > yes > >> The old generation is 2 semispaces (actually, 4 with the 2 pinned >> spaces, which I'll ask about later). When collected, live objects get >> scavenged back-and-forth between the two spaces. > > yes, in theory. In "reality" there is no contiguous memory for the spaces, so after a GC all the aligned chunks of the from-space are either returned to the OS or cached and immediately reused for young generation allocation. > >> Is that correct so far? >> >> It seemed a bit weird at first to write a Java GC in Java language. :-) > > It makes some things easier, e.g., the object layout code used by the GC can immediately be used in other parts of the VM and the compiler. But in the end there is C-style memory access of course to actually process the objects, and that code is more verbose in Java. > >> I analyzed a bit of generated assembly code, comparing it side-by-side >> with corresponding Java code, and was actually quite impressed by it. >> It's also got room for improvements, but that was not the major >> bottleneck. The single major bottleneck in my experiments was waiting >> for loads of the mark word during scavenging, in other words, it's doing >> way too much of it ;-) >> >> I have noticed a bunch of problems so far: >> - The promotion rate between young-gen and old-gen seems fairly hot. >> This is because there is no notion of tenuring objects or so. >> - This implies that there are relatively many old-gen collections >> happening, which seriously affect application throughput (once they happen) >> - Because of the above, the usual wisdoms from other GCs don't apply: I >> could get significant improvements (i.e. fewer diving into full-GCs) by >> configuring a small young-gen (like 10%) and large old-gen (like 90%). >> But that's not really great either. > > Our "design goal" was to start out with the simplest possible GC implementation that is viable (i.e., generational). That was the starting point, and we have not had time yet to do something better. But we know something better is certainly needed. > >> - The policy when to start collecting seems a bit unclear to me. In my >> understanding, there is (almost) no use (for STW GCs) in starting a >> collection before allocation space is exhausted. 
Which means, it seems >> an obvious trigger to start collection on allocation failure. Yet, the >> policies I'm looking at are time-based or time-and-space-based. I said >> 'almost' because the single use for time-based collection would be >> periodic GC that is able to knock out lingering garbage during >> no/little-allocation phase of an application, and then only when the GC >> is also uncommitting the resulting unused pages (which, afaics, >> Substrate GC would do: bravo!). But that doesn't seem to be the point of >> the time-based policies: it looks like the goal of those policies is to >> balance time spent in young-vs-old-gen collections.?! > > The young generation size is fixed, and a collection is started when this space is filled. So from that point of view, the starting trigger of a GC is always space based. > > The policy whether to do an incremental (young) collection or a full collection is time based (but you can easily plug in any policy you want). The goal is to balance time between incremental and full collection. We certainly don't want to fill up the old generation because that is the maximum heap size, and it is by default provisioned to be very big. The young generation size is variable, at the granularity of chunks. (And the chunk size is variable, at the granularity of powers of 2.) The current defaults were based on not enough real example applications. There is no hard "allocation failure in the young generation". Most allocation goes fast-path, into chunks handed out to individual threads (AllocationSnippets.newInstance). When fast-path allocation exhausts a chunk, the slow-path code (ThreadLocalAllocation.allocateNewInstance) asks if a collection should be done before the allocation. That invokes the current implementation of HeapPolicy.CollectOnAllocationPolicy.maybeCauseCollection, and one can write a policy of one's choice. The current default allocation policy (HeapPolicy.CollectOnAllocationPolicy.Sometimes.maybeCauseCollection) is to look at the bytes allocated since the last collection and compare it to the current value of the maximum young generation size: imposing an artificial young generation size limit. Time is not involved in any of the policies we currently have for when to request collections. Once a collection has been requested, there are separate decisions as to whether an incremental collection should be done (CollectionPolicy.collectIncrementally), and then a decision as to whether a complete collection should be done (CollectionPolicy.collectCompletely). The current collection policy is CollectionPolicy.ByTime, which as you observed, tries to balance the accumulated time in incremental collections with the time in complete collections. (See https://dl.acm.org/citation.cfm?id=1542433.) Probably a better default collection policy would be CollectionPolicy.BySpaceAndTime, which, as you want, allows the old generation to fill up (to the current value of the minimum size of the heap). So far, our users have requested small heaps rather than reduced collection time. (Well, they *ask* for both! :-) > >> With a little bit of distance and squinting of eyes, one can see that >> Substrate GC's young generation is really what is called 'nursery space' >> elsewhere, which aims to reduce the rate at which objects get introduced >> into young generation. And the old generation is really what is usually >> called young generation elsewhere. What's missing is a true old >> generation space. 
> > Not really, because the young generation can be collected independently, i.e., there are the generational write barriers, remembered sets, ... > > So the young generation is reduced to the nursery space, but I argue the old generation is really an old generation. An intermediate generation between a young generation and an old generation would allow us to collect medium-lived objects without a full collection (via remembered sets in the old generation). (Cf. HotSpot survivor spaces, which are effective even if they are difficult to explain to users.) A small matter of programming. > >> Considering all this, I would like to propose some improvements: >> - Introduce a notion of tenuring objects. I guess we need like 2 age >> bits in the header or elsewhere for this. Do we have that room? > > You don't need the age bits in the header. You can easily go from the object to the aligned chunk that the object is in (we do that all the time, for example for the write barrier to do the card marking), and store the age in the chunk header. Requiring all objects in one chunk to have the same age is not much of a limitation. > > Adding tenuring is definitely necessary to achieve reasonable GC performance. > >> - Implement a true old-space (and rename the existing young to nursery >> and old to young?). In my experience, sliding/mark-compact collection >> using a mark bitmap works best for this: it tends to create a 'sediment' >> of permanent/very-long-lived objects at the bottom which would never get >> copied again. Using a bitmap, walking of live objects (e.g. during >> copying, updating etc) would be very fast: much faster than walking >> objects by their size. > > A mark-and-compact old generation algorithm definitely makes sense. Again, the only reason why we don't have it yet is that no one had time to implement it. > > Mark-and-compact is also great to reduce memory footprint. Right now, during GC the memory footprint can double because of the temporary space for copying. Note that the marking bitmaps would have to be per-chunk, the way the remembered sets are per-chunk. There is no larger contiguous address range to leverage. If you slide the live objects to one end of a chunk you would still not be able to free the chunk unless it were completely empty. So you would have to slide between chunks. There is no inherent ordering between the chunks of a generation, but we could impose one, e.g., to get your "sedimentation". One would have to be careful about chunk boundaries, and Murphy would size objects so the "compaction" took up more space than the original, but it should mostly work out in practice. > >> - I am not totally sure about the policies. My current thinking is that >> this needs some cleanup/straightening-out, or maybe I am >> misunderstanding something there. I believe (fairly strongly) that >> allocation failure is the single useful trigger for STW GC, and on top >> of that an (optional) periodic GC trigger that would kick in after X >> (milli)seconds no GC. Ah, another constraint: we used to think that SubstrateVM would be useful in single-threaded, and maybe "tickless" applications. No background collection threads, no alarms. You could write a policy to start a collection after a time interval, but the instigation might still want to be failing over into slow-path allocation. A strictly time-based collection policy would certainly be possible, but you might disappoint applications that were in the middle of servicing a request. 
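To make the policy plumbing above a bit more concrete, here is a rough sketch of what a pluggable collect-on-allocation policy could look like. The interface name and signature are only loosely modeled on the HeapPolicy.CollectOnAllocationPolicy classes mentioned above, so treat this as an illustrative Java sketch rather than the actual Substrate VM API. It combines the space trigger (bytes allocated since the last collection, compared against an artificial young generation limit) with an optional time-interval trigger that is only checked from the slow allocation path, so a "tickless" application still sees no background threads or alarms.

    // Illustrative only: the names and signatures are assumptions, not the real Substrate VM API.
    interface CollectOnAllocationPolicy {
        /** Called from slow-path allocation before a new chunk is handed out. */
        void maybeCauseCollection();
    }

    final class SpaceAndOptionalTimePolicy implements CollectOnAllocationPolicy {
        private final long youngGenerationByteLimit;    // artificial young generation size limit
        private final long maxNanosBetweenCollections;  // 0 disables the periodic trigger
        private long bytesAllocatedSinceLastCollection;
        private long lastCollectionNanos = System.nanoTime();

        SpaceAndOptionalTimePolicy(long youngGenerationByteLimit, long maxNanosBetweenCollections) {
            this.youngGenerationByteLimit = youngGenerationByteLimit;
            this.maxNanosBetweenCollections = maxNanosBetweenCollections;
        }

        /** Hypothetical hook: the slow path would report the chunk-sized allocations it sees. */
        void noteAllocation(long bytes) {
            bytesAllocatedSinceLastCollection += bytes;
        }

        @Override
        public void maybeCauseCollection() {
            boolean spaceTrigger = bytesAllocatedSinceLastCollection >= youngGenerationByteLimit;
            boolean timeTrigger = maxNanosBetweenCollections > 0
                    && System.nanoTime() - lastCollectionNanos >= maxNanosBetweenCollections;
            if (spaceTrigger || timeTrigger) {
                requestCollection();  // stand-in for the real "stop the world and collect" entry point
                bytesAllocatedSinceLastCollection = 0;
                lastCollectionNanos = System.nanoTime();
            }
        }

        private void requestCollection() {
            // Placeholder: a real implementation would hand off to the collector here.
        }
    }

Note that even the time trigger in this sketch only fires when the application happens to allocate, which is exactly the trade-off above: a completely idle process still never collects.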
> > As I mentioned above, the GC trigger is allocation failure for the young generation. > >> - Low-hanging-fruit improvement that could be done right now: allocate >> large objects(arrays) straight into old-gen instead of copying them >> around. Those are usually long-lived anyway, and copying them >> back-and-forth just costs CPU time for no benefit. This will become even >> more pronounced with a true old-gen. > > Large arrays are allocated separately in unaligned chunks. Such arrays are never copied, but only logically moved from the young generation into the old generation. An unaligned chunk contains exactly one large array. > >> Oh and a question: what's this pinned object/chunks/spaces all about? > > There are two mechanisms right now to get objects that are never moved by the GC: > 1) A "normal" object can be temporarily pinned using org.graalvm.nativeimage.PinnedObject. The current implementation then keeps the whole aligned chunk that contains the object alive, i.e., it is designed for pinnings that are released quickly so that no objects are actually ever pinned when the GC runs, unless the GC runs in an unlucky moments. We use such pinning for example to pass pointers into byte[] arrays directly to C functions without copying. > > 2) A PinnedAllocator can be used to get objects that are non-moving for a long period of time. This is currently used for the metadata of runtime compiled code. We are actively working to make PinnedAllocator unnecessary by putting the metadata into C memory, and then hopefully we can remove PinnedAllocator and all code that is necessary for it in the GC, i.e., the notion of pinned spaces you mentioned before. Pinned objects also get in the way of "sliding compaction". If one is unlucky and there are pinned objects when the collection starts. ... peter > >> What do you think about all this? Somebody else might have thought about >> all this already, and have some insights that I don't have in my naive >> understanding? Maybe some of it is already worked on or planned? Maybe >> there are some big obstactles that I don't see yet, that make it less >> feasible? > > We certainly have ideas and plans, and they match your observations. If you are interested in contributing, we can definitely give you some guidance so that you immediately work into the right direction. > > -Christian From rkennke at redhat.com Fri Mar 1 10:16:16 2019 From: rkennke at redhat.com (Roman Kennke) Date: Fri, 1 Mar 2019 11:16:16 +0100 Subject: Thoughts about SubstrateVM GC In-Reply-To: References: <1e4af634-b642-a1f7-f9ea-4f1cb413c823@redhat.com> Message-ID: Hi Christian, thanks for your replies. This is very interesting. Some additional comments below inline: >> The old generation is 2 semispaces (actually, 4 with the 2 pinned >> spaces, which I'll ask about later). When collected, live objects get >> scavenged back-and-forth between the two spaces. > > yes, in theory. In "reality" there is no contiguous memory for the > spaces, so after a GC all the aligned chunks of the from-space are > either returned to the OS or cached and immediately reused for young > generation allocation. Aha, ok. This definitely affects future design decisions. >> - The policy when to start collecting seems a bit unclear to me. In my >> understanding, there is (almost) no use (for STW GCs) in starting a >> collection before allocation space is exhausted. Which means, it seems >> an obvious trigger to start collection on allocation failure. 
Yet, the >> policies I'm looking at are time-based or time-and-space-based. I said >> 'almost' because the single use for time-based collection would be >> periodic GC that is able to knock out lingering garbage during >> no/little-allocation phase of an application, and then only when the GC >> is also uncommitting the resulting unused pages (which, afaics, >> Substrate GC would do: bravo!). But that doesn't seem to be the point of >> the time-based policies: it looks like the goal of those policies is to >> balance time spent in young-vs-old-gen collections.?! > > The young generation size is fixed, and a collection is started when > this space is filled. So from that point of view, the starting trigger > of a GC is always space based. > > The policy whether to do an incremental (young) collection or a full > collection is time based (but you can easily plug in any policy you > want). The goal is to balance time between incremental and full > collection. We certainly don't want to fill up the old generation > because that is the maximum heap size, and it is by default provisioned > to be very big. Ok, I see. Also, the way it's currently done, the old-gen needs to be able to absorb (worst-case) all of young-gen in next cycle, and therefore needs *plenty* of headroom. I.e. we need to collect old-gen much earlier than when it's full (like when remaining free space in old-gen is smaller than young-gen size). Alternatively, we could exhaust old-gen, and change young-gen-collection-policy to skip collection if old-gen doesn't have enough space left, and dive into full-GC right away. Or, even better, add an intermediate tenuring generation. :-) >> With a little bit of distance and squinting of eyes, one can see that >> Substrate GC's young generation is really what is called 'nursery space' >> elsewhere, which aims to reduce the rate at which objects get introduced >> into young generation. And the old generation is really what is usually >> called young generation elsewhere. What's missing is a true old >> generation space. > > Not really, because the young generation can be collected independently, > i.e., there are the generational write barriers, remembered sets, ... > > So the young generation is reduced to the nursery space, but I argue the > old generation is really an old generation. Ok. >> Considering all this, I would like to propose some improvements: >> - Introduce a notion of tenuring objects. I guess we need like 2 age >> bits in the header or elsewhere for this. Do we have that room? > > You don't need the age bits in the header. You can easily go from the > object to the aligned chunk that the object is in (we do that all the > time, for example for the write barrier to do the card marking), and > store the age in the chunk header. Requiring all objects in one chunk to > have the same age is not much of a limitation. Right. > Adding tenuring is definitely necessary to achieve reasonable GC > performance. +1 >> - Implement a true old-space (and rename the existing young to nursery >> and old to young?). In my experience, sliding/mark-compact collection >> using a mark bitmap works best for this: it tends to create a 'sediment' >> of permanent/very-long-lived objects at the bottom which would never get >> copied again. Using a bitmap, walking of live objects (e.g. during >> copying, updating etc) would be very fast: much faster than walking >> objects by their size. > > A mark-and-compact old generation algorithm definitely makes sense. 
> Again, the only reason why we don't have it yet is that no one had time > to implement it. > > Mark-and-compact is also great to reduce memory footprint. Right now, > during GC the memory footprint can double because of the temporary space > for copying. Yeah. However, as Peter noted, having no contiguous memory block complicates this. I'd need to see how to deal with it (per-chunk-bitmap probably, or maybe mark bit in object header, with some clever tricks to make scanning the heap fast like serial GC does). >> - I am not totally sure about the policies. My current thinking is that >> this needs some cleanup/straightening-out, or maybe I am >> misunderstanding something there. I believe (fairly strongly) that >> allocation failure is the single useful trigger for STW GC, and on top >> of that an (optional) periodic GC trigger that would kick in after X >> (milli)seconds no GC. > > As I mentioned above, the GC trigger is allocation failure for the young > generation. Ok, good. >> - Low-hanging-fruit improvement that could be done right now: allocate >> large objects(arrays) straight into old-gen instead of copying them >> around. Those are usually long-lived anyway, and copying them >> back-and-forth just costs CPU time for no benefit. This will become even >> more pronounced with a true old-gen. > > Large arrays are allocated separately in unaligned chunks. Such arrays > are never copied, but only logically moved from the young generation > into the old generation. An unaligned chunk contains exactly one large > array. Ok, good. >> Oh and a question: what's this pinned object/chunks/spaces all about? > > There are two mechanisms right now to get objects that are never moved > by the GC: > 1) A "normal" object can be temporarily pinned using > org.graalvm.nativeimage.PinnedObject. The current implementation then > keeps the whole aligned chunk that contains the object alive, i.e., it > is designed for pinnings that are released quickly so that no objects > are actually ever pinned when the GC runs, unless the GC runs in an > unlucky moments. We use such pinning for example to pass pointers into > byte[] arrays directly to C functions without copying. > > 2) A PinnedAllocator can be used to get objects that are non-moving for > a long period of time. This is currently used for the metadata of > runtime compiled code. We are actively working to make PinnedAllocator > unnecessary by putting the metadata into C memory, and then hopefully we > can remove PinnedAllocator and all code that is necessary for it in the > GC, i.e., the notion of pinned spaces you mentioned before. Ok, I guessed so. I mostly wondered about it because it's got from-space and to-space: [pinnedFromSpace: aligned: 0/0 unaligned: 0/0] [pinnedToSpace: aligned: 0/0 unaligned: 0/0]] And would it ever copy between them? I guess not. >> What do you think about all this? Somebody else might have thought about >> all this already, and have some insights that I don't have in my naive >> understanding? Maybe some of it is already worked on or planned? Maybe >> there are some big obstactles that I don't see yet, that make it less >> feasible? > > We certainly have ideas and plans, and they match your observations. If > you are interested in contributing, we can definitely give you some > guidance so that you immediately work into the right direction. Yes, I am. :-) Roman From Peter.B.Kessler at Oracle.COM Fri Mar 1 20:16:42 2019 From: Peter.B.Kessler at Oracle.COM (Peter B. 
Kessler) Date: Fri, 1 Mar 2019 12:16:42 -0800 Subject: Thoughts about SubstrateVM GC In-Reply-To: References: <1e4af634-b642-a1f7-f9ea-4f1cb413c823@redhat.com> Message-ID: <74356614-0487-7a34-49ec-e45516857ff0@Oracle.COM> Two comments inline. And more encouragement to send along your ideas. ... peter On 03/ 1/19 02:16 AM, Roman Kennke wrote: > Hi Christian, > > thanks for your replies. This is very interesting. Some additional > comments below inline: > >>> The old generation is 2 semispaces (actually, 4 with the 2 pinned >>> spaces, which I'll ask about later). When collected, live objects get >>> scavenged back-and-forth between the two spaces. >> >> yes, in theory. In "reality" there is no contiguous memory for the >> spaces, so after a GC all the aligned chunks of the from-space are >> either returned to the OS or cached and immediately reused for young >> generation allocation. > > Aha, ok. This definitely affects future design decisions. > > >>> - The policy when to start collecting seems a bit unclear to me. In my >>> understanding, there is (almost) no use (for STW GCs) in starting a >>> collection before allocation space is exhausted. Which means, it seems >>> an obvious trigger to start collection on allocation failure. Yet, the >>> policies I'm looking at are time-based or time-and-space-based. I said >>> 'almost' because the single use for time-based collection would be >>> periodic GC that is able to knock out lingering garbage during >>> no/little-allocation phase of an application, and then only when the GC >>> is also uncommitting the resulting unused pages (which, afaics, >>> Substrate GC would do: bravo!). But that doesn't seem to be the point of >>> the time-based policies: it looks like the goal of those policies is to >>> balance time spent in young-vs-old-gen collections.?! >> >> The young generation size is fixed, and a collection is started when >> this space is filled. So from that point of view, the starting trigger >> of a GC is always space based. >> >> The policy whether to do an incremental (young) collection or a full >> collection is time based (but you can easily plug in any policy you >> want). The goal is to balance time between incremental and full >> collection. We certainly don't want to fill up the old generation >> because that is the maximum heap size, and it is by default provisioned >> to be very big. > > Ok, I see. > Also, the way it's currently done, the old-gen needs to be able to > absorb (worst-case) all of young-gen in next cycle, and therefore needs > *plenty* of headroom. I.e. we need to collect old-gen much earlier than > when it's full (like when remaining free space in old-gen is smaller > than young-gen size). Alternatively, we could exhaust old-gen, and > change young-gen-collection-policy to skip collection if old-gen doesn't > have enough space left, and dive into full-GC right away. Or, even > better, add an intermediate tenuring generation. :-) There is no fixed heap size, or fixed generation sizes. As long as the collector can allocate memory from the OS it can keep adding chunks as needed to the old generation (or the young generation, for that matter. E.g., to delay collection until it is "convenient".) If you run out of address space, or physical memory, then you are in trouble. 
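The "no fixed sizes, just lists of chunks" point is easy to picture with a small model. The sketch below is purely illustrative (the class and its names are invented, and a real aligned chunk is raw memory obtained from the OS rather than a Java object): a space is just a list of aligned chunks, and growing a generation means appending another chunk, so nothing runs out until the OS itself refuses to hand over more memory.

    // Toy model of "a space is a list of aligned chunks"; all names here are invented.
    final class SpaceModel {
        static final int ALIGNED_CHUNK_SIZE = 1 << 20;  // assumed chunk size, a power of two

        // Stand-in for an aligned chunk: in the real VM this is raw memory, not a Java object.
        static final class Chunk {
            final byte[] memory = new byte[ALIGNED_CHUNK_SIZE];
            int top;  // bump-pointer fill level within the chunk
        }

        private final java.util.ArrayDeque<Chunk> chunks = new java.util.ArrayDeque<>();

        /** Bump-allocate 'size' bytes in this space, growing by whole chunks as needed. */
        void allocate(int size) {
            Chunk current = chunks.peekLast();
            if (current == null || current.top + size > ALIGNED_CHUNK_SIZE) {
                current = new Chunk();      // "ask the OS for another aligned chunk"
                chunks.addLast(current);
            }
            current.top += size;            // real code would also hand back the address
        }

        /** Emptying the space hands its chunks back to the OS or to a chunk cache. */
        void releaseAllChunks() {
            chunks.clear();
        }
    }

There is no address range here at all, which is also why a question like "is this object in the young generation" has to be answered from per-chunk bookkeeping rather than from a range check.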
> > >>> With a little bit of distance and squinting of eyes, one can see that >>> Substrate GC's young generation is really what is called 'nursery space' >>> elsewhere, which aims to reduce the rate at which objects get introduced >>> into young generation. And the old generation is really what is usually >>> called young generation elsewhere. What's missing is a true old >>> generation space. >> >> Not really, because the young generation can be collected independently, >> i.e., there are the generational write barriers, remembered sets, ... >> >> So the young generation is reduced to the nursery space, but I argue the >> old generation is really an old generation. > > Ok. > >>> Considering all this, I would like to propose some improvements: >>> - Introduce a notion of tenuring objects. I guess we need like 2 age >>> bits in the header or elsewhere for this. Do we have that room? >> >> You don't need the age bits in the header. You can easily go from the >> object to the aligned chunk that the object is in (we do that all the >> time, for example for the write barrier to do the card marking), and >> store the age in the chunk header. Requiring all objects in one chunk to >> have the same age is not much of a limitation. > Right. > >> Adding tenuring is definitely necessary to achieve reasonable GC >> performance. > > +1 > >>> - Implement a true old-space (and rename the existing young to nursery >>> and old to young?). In my experience, sliding/mark-compact collection >>> using a mark bitmap works best for this: it tends to create a 'sediment' >>> of permanent/very-long-lived objects at the bottom which would never get >>> copied again. Using a bitmap, walking of live objects (e.g. during >>> copying, updating etc) would be very fast: much faster than walking >>> objects by their size. >> >> A mark-and-compact old generation algorithm definitely makes sense. >> Again, the only reason why we don't have it yet is that no one had time >> to implement it. >> >> Mark-and-compact is also great to reduce memory footprint. Right now, >> during GC the memory footprint can double because of the temporary space >> for copying. > > Yeah. However, as Peter noted, having no contiguous memory block > complicates this. I'd need to see how to deal with it (per-chunk-bitmap > probably, or maybe mark bit in object header, with some clever tricks to > make scanning the heap fast like serial GC does). > >>> - I am not totally sure about the policies. My current thinking is that >>> this needs some cleanup/straightening-out, or maybe I am >>> misunderstanding something there. I believe (fairly strongly) that >>> allocation failure is the single useful trigger for STW GC, and on top >>> of that an (optional) periodic GC trigger that would kick in after X >>> (milli)seconds no GC. >> >> As I mentioned above, the GC trigger is allocation failure for the young >> generation. > > Ok, good. > >>> - Low-hanging-fruit improvement that could be done right now: allocate >>> large objects(arrays) straight into old-gen instead of copying them >>> around. Those are usually long-lived anyway, and copying them >>> back-and-forth just costs CPU time for no benefit. This will become even >>> more pronounced with a true old-gen. >> >> Large arrays are allocated separately in unaligned chunks. Such arrays >> are never copied, but only logically moved from the young generation >> into the old generation. An unaligned chunk contains exactly one large >> array. > > Ok, good. 
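Two of the quoted points above (storing the tenuring age in the aligned chunk header instead of in each object, and keeping any mark bitmap per chunk because there is no contiguous address range) can be tied together in one illustrative sketch. The field layout, the 1 MiB chunk size, and the 8-byte object alignment are assumptions invented for the example; the real Substrate VM chunk headers look different.

    // Illustrative per-chunk metadata; the sizes, fields, and names are assumptions for this sketch.
    final class AlignedChunkHeaderModel {
        static final long CHUNK_SIZE = 1L << 20;           // assumed aligned chunk size (power of two)
        static final long CHUNK_MASK = ~(CHUNK_SIZE - 1);  // masks an address down to its chunk start
        static final int OBJECT_ALIGNMENT = 8;             // assumed minimum object alignment

        final long chunkStart;      // the address this chunk is aligned to
        int age;                    // bumped each time the whole chunk survives a young collection
        final long[] markBits =     // one mark bit per possible object start in this chunk
                new long[(int) (CHUNK_SIZE / OBJECT_ALIGNMENT / 64)];

        AlignedChunkHeaderModel(long chunkStart) {
            this.chunkStart = chunkStart;
        }

        /** The object-to-chunk trick: mask the object address down to the chunk boundary. */
        static long chunkStartOf(long objectAddress) {
            return objectAddress & CHUNK_MASK;
        }

        /** Tenuring without header bits: every object in the chunk shares the chunk's age. */
        boolean shouldTenure(int tenuringThreshold) {
            return age >= tenuringThreshold;
        }

        void mark(long objectAddress) {
            int index = (int) ((objectAddress - chunkStart) / OBJECT_ALIGNMENT);
            markBits[index >>> 6] |= 1L << (index & 63);
        }

        /** Walking live objects via the bitmap can skip dead ranges a whole word at a time. */
        void forEachMarked(java.util.function.LongConsumer visitor) {
            for (int word = 0; word < markBits.length; word++) {
                long bits = markBits[word];
                while (bits != 0) {
                    int bit = Long.numberOfTrailingZeros(bits);
                    visitor.accept(chunkStart + (long) (word * 64 + bit) * OBJECT_ALIGNMENT);
                    bits &= bits - 1;  // clear the lowest set bit
                }
            }
        }
    }

At these assumed sizes the bitmap is 16 KB per 1 MiB chunk, and like the remembered sets it would live with the chunk rather than in any global, contiguous structure.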
> >>> Oh and a question: what's this pinned object/chunks/spaces all about? >> >> There are two mechanisms right now to get objects that are never moved >> by the GC: >> 1) A "normal" object can be temporarily pinned using >> org.graalvm.nativeimage.PinnedObject. The current implementation then >> keeps the whole aligned chunk that contains the object alive, i.e., it >> is designed for pinnings that are released quickly so that no objects >> are actually ever pinned when the GC runs, unless the GC runs in an >> unlucky moments. We use such pinning for example to pass pointers into >> byte[] arrays directly to C functions without copying. >> >> 2) A PinnedAllocator can be used to get objects that are non-moving for >> a long period of time. This is currently used for the metadata of >> runtime compiled code. We are actively working to make PinnedAllocator >> unnecessary by putting the metadata into C memory, and then hopefully we >> can remove PinnedAllocator and all code that is necessary for it in the >> GC, i.e., the notion of pinned spaces you mentioned before. > > Ok, I guessed so. I mostly wondered about it because it's got from-space > and to-space: > > [pinnedFromSpace: > aligned: 0/0 unaligned: 0/0] > [pinnedToSpace: > aligned: 0/0 unaligned: 0/0]] > > And would it ever copy between them? I guess not. The collector logically moves a pinned chunk from pinned from-space to pinned to-space by updating bookkeeping information in the chunk. The contents of the pinned chunk are not moved, and their addresses do not change. If a pinned chunk is unpinned by the application, it is moved to the unpinned from-space and at the next full collection the reachable objects in it are scavenged to the unpinned to-space, like any other objects in unpinned from-space. Between collections the pinned to-space is empty. In your example, the pinned from-space is also empty. Spaces do not represent address ranges, so an empty space is just a few null pointers in the space data structure. (Spaces not being address ranges also complicates answer questions like: Is this object in the young generation.) ... peter > >>> What do you think about all this? Somebody else might have thought about >>> all this already, and have some insights that I don't have in my naive >>> understanding? Maybe some of it is already worked on or planned? Maybe >>> there are some big obstactles that I don't see yet, that make it less >>> feasible? >> >> We certainly have ideas and plans, and they match your observations. If >> you are interested in contributing, we can definitely give you some >> guidance so that you immediately work into the right direction. > > Yes, I am. :-) > > Roman > From rkennke at redhat.com Fri Mar 1 20:33:24 2019 From: rkennke at redhat.com (Roman Kennke) Date: Fri, 1 Mar 2019 21:33:24 +0100 Subject: Thoughts about SubstrateVM GC In-Reply-To: <74356614-0487-7a34-49ec-e45516857ff0@Oracle.COM> References: <1e4af634-b642-a1f7-f9ea-4f1cb413c823@redhat.com> <74356614-0487-7a34-49ec-e45516857ff0@Oracle.COM> Message-ID: Hi Peter, picking up on our comments here: https://github.com/oracle/graal/pull/1015 It seems like there is a significant class of applications that would be attractive with SubstrateVM that would be happy to do no GC at all. Shortlived microservices come to mind. For those applications, we'd realistically do want a GC, but only as last resort. 
In other words, a single-space heap, that would trigger a collection only when exhausted, and then do sliding compaction, would do it, and arguably better than the current GC: it would not require barriers overhead (i.e. pay for something that we bet would never or very rarely happen), it would be able to use the full space, rather than dividing up in generations and semispaces, and as you say, even eliminate the safepoint checks overhead (does SubstrateVM only do safepoints for GC? that would be cool!). Even if it *does* GC, it might still be better off overall: objects would have more time to die, GC frequency would be rarer. With current SubstrateVM I see relatively frequent full-GCs anyway, so rarer GCs with more relative garbage, combined with increased throughput: I guess that might actually work well for certain classes of applications, especially those that would want to run on SubstrateVM anyway? You commented about GraalVM runtime compiler allocating a lot and leaving behind lots of garbage: would that be a concern in SubstrateVM's closed world view? WDYT? Roman > Two comments inline. And more encouragement to send along your ideas. > > ... peter > > On 03/ 1/19 02:16 AM, Roman Kennke wrote: >> Hi Christian, >> >> thanks for your replies. This is very interesting. Some additional >> comments below inline: >> >>>> The old generation is 2 semispaces (actually, 4 with the 2 pinned >>>> spaces, which I'll ask about later). When collected, live objects get >>>> scavenged back-and-forth between the two spaces. >>> >>> yes, in theory. In "reality" there is no contiguous memory for the >>> spaces, so after a GC all the aligned chunks of the from-space are >>> either returned to the OS or cached and immediately reused for young >>> generation allocation. >> >> Aha, ok. This definitely affects future design decisions. >> >> >>>> - The policy when to start collecting seems a bit unclear to me. In my >>>> understanding, there is (almost) no use (for STW GCs) in starting a >>>> collection before allocation space is exhausted. Which means, it seems >>>> an obvious trigger to start collection on allocation failure. Yet, the >>>> policies I'm looking at are time-based or time-and-space-based. I said >>>> 'almost' because the single use for time-based collection would be >>>> periodic GC that is able to knock out lingering garbage during >>>> no/little-allocation phase of an application, and then only when the GC >>>> is also uncommitting the resulting unused pages (which, afaics, >>>> Substrate GC would do: bravo!). But that doesn't seem to be the >>>> point of >>>> the time-based policies: it looks like the goal of those policies is to >>>> balance time spent in young-vs-old-gen collections.?! >>> >>> The young generation size is fixed, and a collection is started when >>> this space is filled. So from that point of view, the starting trigger >>> of a GC is always space based. >>> >>> The policy whether to do an incremental (young) collection or a full >>> collection is time based (but you can easily plug in any policy you >>> want). The goal is to balance time between incremental and full >>> collection. We certainly don't want to fill up the old generation >>> because that is the maximum heap size, and it is by default provisioned >>> to be very big. >> >> Ok, I see. >> Also, the way it's currently done, the old-gen needs to be able to >> absorb (worst-case) all of young-gen in next cycle, and therefore needs >> *plenty* of headroom. I.e.
we need to collect old-gen much earlier than >> when it's full (like when remaining free space in old-gen is smaller >> than young-gen size). Alternatively, we could exhaust old-gen, and >> change young-gen-collection-policy to skip collection if old-gen doesn't >> have enough space left, and dive into full-GC right away. Or, even >> better, add an intermediate tenuring generation. :-) > > There is no fixed heap size, or fixed generation sizes. As long as the > collector can allocate memory from the OS it can keep adding chunks as > needed to the old generation (or the young generation, for that matter. > E.g., to delay collection until it is "convenient".) If you run out of > address space, or physical memory, then you are in trouble. > >> >> >>>> With a little bit of distance and squinting of eyes, one can see that >>>> Substrate GC's young generation is really what is called 'nursery >>>> space' >>>> elsewhere, which aims to reduce the rate at which objects get >>>> introduced >>>> into young generation. And the old generation is really what is usually >>>> called young generation elsewhere. What's missing is a true old >>>> generation space. >>> >>> Not really, because the young generation can be collected independently, >>> i.e., there are the generational write barriers, remembered sets, ... >>> >>> So the young generation is reduced to the nursery space, but I argue the >>> old generation is really an old generation. >> >> Ok. >> >>>> Considering all this, I would like to propose some improvements: >>>> - Introduce a notion of tenuring objects. I guess we need like 2 age >>>> bits in the header or elsewhere for this. Do we have that room? >>> >>> You don't need the age bits in the header. You can easily go from the >>> object to the aligned chunk that the object is in (we do that all the >>> time, for example for the write barrier to do the card marking), and >>> store the age in the chunk header. Requiring all objects in one chunk to >>> have the same age is not much of a limitation. >> Right. >> >>> Adding tenuring is definitely necessary to achieve reasonable GC >>> performance. >> >> +1 >> >>>> - Implement a true old-space (and rename the existing young to nursery >>>> and old to young?). In my experience, sliding/mark-compact collection >>>> using a mark bitmap works best for this: it tends to create a >>>> 'sediment' >>>> of permanent/very-long-lived objects at the bottom which would never >>>> get >>>> copied again. Using a bitmap, walking of live objects (e.g. during >>>> copying, updating etc) would be very fast: much faster than walking >>>> objects by their size. >>> >>> A mark-and-compact old generation algorithm definitely makes sense. >>> Again, the only reason why we don't have it yet is that no one had time >>> to implement it. >>> >>> Mark-and-compact is also great to reduce memory footprint. Right now, >>> during GC the memory footprint can double because of the temporary space >>> for copying. >> >> Yeah. However, as Peter noted, having no contiguous memory block >> complicates this. I'd need to see how to deal with it (per-chunk-bitmap >> probably, or maybe mark bit in object header, with some clever tricks to >> make scanning the heap fast like serial GC does). >> >>>> - I am not totally sure about the policies. My current thinking is that >>>> this needs some cleanup/straightening-out, or maybe I am >>>> misunderstanding something there.
I believe (fairly strongly) that >>>> allocation failure is the single useful trigger for STW GC, and on top >>>> of that an (optional) periodic GC trigger that would kick in after X >>>> (milli)seconds no GC. >>> >>> As I mentioned above, the GC trigger is allocation failure for the young >>> generation. >> >> Ok, good. >> >>>> - Low-hanging-fruit improvement that could be done right now: allocate >>>> large objects(arrays) straight into old-gen instead of copying them >>>> around. Those are usually long-lived anyway, and copying them >>>> back-and-forth just costs CPU time for no benefit. This will become >>>> even >>>> more pronounced with a true old-gen. >>> >>> Large arrays are allocated separately in unaligned chunks. Such arrays >>> are never copied, but only logically moved from the young generation >>> into the old generation. An unaligned chunk contains exactly one large >>> array. >> >> Ok, good. >> >>>> Oh and a question: what's this pinned object/chunks/spaces all about? >>> >>> There are two mechanisms right now to get objects that are never moved >>> by the GC: >>> 1) A "normal" object can be temporarily pinned using >>> org.graalvm.nativeimage.PinnedObject. The current implementation then >>> keeps the whole aligned chunk that contains the object alive, i.e., it >>> is designed for pinnings that are released quickly so that no objects >>> are actually ever pinned when the GC runs, unless the GC runs in an >>> unlucky moments. We use such pinning for example to pass pointers into >>> byte[] arrays directly to C functions without copying. >>> >>> 2) A PinnedAllocator can be used to get objects that are non-moving for >>> a long period of time. This is currently used for the metadata of >>> runtime compiled code. We are actively working to make PinnedAllocator >>> unnecessary by putting the metadata into C memory, and then hopefully we >>> can remove PinnedAllocator and all code that is necessary for it in the >>> GC, i.e., the notion of pinned spaces you mentioned before. >> >> Ok, I guessed so. I mostly wondered about it because it's got from-space >> and to-space: >> >> [pinnedFromSpace: >> aligned: 0/0 unaligned: 0/0] >> [pinnedToSpace: >> aligned: 0/0 unaligned: 0/0]] >> >> And would it ever copy between them? I guess not. > > The collector logically moves a pinned chunk from pinned from-space to > pinned to-space by updating bookkeeping information in the chunk. The > contents of the pinned chunk are not moved, and their addresses do not > change. If a pinned chunk is unpinned by the application, it is moved > to the unpinned from-space and at the next full collection the reachable > objects in it are scavenged to the unpinned to-space, like any other > objects in unpinned from-space. Between collections the pinned to-space > is empty. In your example, the pinned from-space is also empty. Spaces > do not represent address ranges, so an empty space is just a few null > pointers in the space data structure. (Spaces not being address ranges > also complicates answer questions like: Is this object in the young > generation.) > > ... peter > >> >>>> What do you think about all this? Somebody else might have thought >>>> about >>>> all this already, and have some insights that I don't have in my naive >>>> understanding? Maybe some of it is already worked on or planned? Maybe >>>> there are some big obstactles that I don't see yet, that make it less >>>> feasible?
>>> >>> We certainly have ideas and plans, and they match your observations. If >>> you are interested in contributing, we can definitely give you some >>> guidance so that you immediately work into the right direction. >> >> Yes, I am. :-) >> >> Roman >> > From Peter.B.Kessler at Oracle.COM Fri Mar 1 22:13:09 2019 From: Peter.B.Kessler at Oracle.COM (Peter B. Kessler) Date: Fri, 1 Mar 2019 14:13:09 -0800 Subject: Thoughts about SubstrateVM GC In-Reply-To: References: <1e4af634-b642-a1f7-f9ea-4f1cb413c823@redhat.com> <74356614-0487-7a34-49ec-e45516857ff0@Oracle.COM> Message-ID: I think we need requirements from users, representative benchmarks or real applications, inspired ideas, measurements of alternatives, and time to write garbage collectors. :-) So far, SubstrateVM has explored one point in the space, and we know that one size does not fit all when it comes to garbage collection. ... peter On 03/ 1/19 12:33 PM, Roman Kennke wrote: > Hi Peter, > > picking up on our comments here: > > https://github.com/oracle/graal/pull/1015 > > It seems like there is a significant class of applications that would be > attractive with SubstrateVM that would be happy to do no GC at all. > Shortlived microservices come to mind. For those applications, we'd > realistically do want a GC, but only as last resort. In other words, a > single-space heap, that would trigger a collection only when exhausted, > and then do sliding compaction, would do it, and arguably better than > the current GC: it would not require barriers overhead (i.e. pay for > something that we bet would never or very rarely happen), it would be > able to use the full space, rather than dividing up in generations and > semispaces, and as you say, even eliminate the safepoint checks overhead > (does SubstrateVM only do safepoints for GC? that would be cool!). > > Even if it *does* GC, it might still be better off overall: objects > would have more time to die, GC frequency would be rarer. With current > SubstrateVM I see relatively frequent full-GCs anyway, so rarer GCs with > more relative garbage, combined with increased throughput: I guess that > might actually work well for certain classes of applications, especially > those that would want to run on SubstrateVM anyway? > > You commented about GraalVM runtime compiler allocating a lot and > leaving behind lots of garbage: would that be a concern in SustrateVM's > closed world view? > > WDYT? > > Roman > >> Two comments inline. And more encouragement to send along your ideas. >> >> ... peter >> >> On 03/ 1/19 02:16 AM, Roman Kennke wrote: >>> Hi Christian, >>> >>> thanks for your replies. This is very interesting. Some additional >>> comments below inline: >>> >>>>> The old generation is 2 semispaces (actually, 4 with the 2 pinned >>>>> spaces, which I'll ask about later). When collected, live objects get >>>>> scavenged back-and-forth between the two spaces. >>>> >>>> yes, in theory. In "reality" there is no contiguous memory for the >>>> spaces, so after a GC all the aligned chunks of the from-space are >>>> either returned to the OS or cached and immediately reused for young >>>> generation allocation. >>> >>> Aha, ok. This definitely affects future design decisions. >>> >>> >>>>> - The policy when to start collecting seems a bit unclear to me. In my >>>>> understanding, there is (almost) no use (for STW GCs) in starting a >>>>> collection before allocation space is exhausted. Which means, it seems >>>>> an obvious trigger to start collection on allocation failure. 
Yet, the >>>>> policies I'm looking at are time-based or time-and-space-based. I said >>>>> 'almost' because the single use for time-based collection would be >>>>> periodic GC that is able to knock out lingering garbage during >>>>> no/little-allocation phase of an application, and then only when the GC >>>>> is also uncommitting the resulting unused pages (which, afaics, >>>>> Substrate GC would do: bravo!). But that doesn't seem to be the >>>>> point of >>>>> the time-based policies: it looks like the goal of those policies is to >>>>> balance time spent in young-vs-old-gen collections.?! >>>> >>>> The young generation size is fixed, and a collection is started when >>>> this space is filled. So from that point of view, the starting trigger >>>> of a GC is always space based. >>>> >>>> The policy whether to do an incremental (young) collection or a full >>>> collection is time based (but you can easily plug in any policy you >>>> want). The goal is to balance time between incremental and full >>>> collection. We certainly don't want to fill up the old generation >>>> because that is the maximum heap size, and it is by default provisioned >>>> to be very big. >>> >>> Ok, I see. >>> Also, the way it's currently done, the old-gen needs to be able to >>> absorb (worst-case) all of young-gen in next cycle, and therefore needs >>> *plenty* of headroom. I.e. we need to collect old-gen much earlier than >>> when it's full (like when remaining free space in old-gen is smaller >>> than young-gen size). Alternatively, we could exhaust old-gen, and >>> change young-gen-collection-policy to skip collection if old-gen doesn't >>> have enough space left, and dive into full-GC right away. Or, even >>> better, add an intermediate tenuring generation. :-) >> >> There is no fixed heap size, or fixed generation sizes. As long as the >> collector can allocate memory from the OS it can keep adding chunks as >> needed to the old generation (or the young generation, for that matter. >> E.g., to delay collection until it is "convenient".) If you run out of >> address space, or physical memory, then you are in trouble. >> >>> >>> >>>>> With a little bit of distance and squinting of eyes, one can see that >>>>> Substrate GC's young generation is really what is called 'nursery >>>>> space' >>>>> elsewhere, which aims to reduce the rate at which objects get >>>>> introduced >>>>> into young generation. And the old generation is really what is usually >>>>> called young generation elsewhere. What's missing is a true old >>>>> generation space. >>>> >>>> Not really, because the young generation can be collected independently, >>>> i.e., there are the generational write barriers, remembered sets, ... >>>> >>>> So the young generation is reduced to the nursery space, but I argue the >>>> old generation is really an old generation. >>> >>> Ok. >>> >>>>> Considering all this, I would like to propose some improvements: >>>>> - Introduce a notion of tenuring objects. I guess we need like 2 age >>>>> bits in the header or elsewhere for this. Do we have that room? >>>> >>>> You don't need the age bits in the header. You can easily go from the >>>> object to the aligned chunk that the object is in (we do that all the >>>> time, for example for the write barrier to do the card marking), and >>>> store the age in the chunk header. Requiring all objects in one chunk to >>>> have the same age is not much of a limitation. >>> Right. >>> >>>> Adding tenuring is definitely necessary to achieve reasonable GC >>>> performance. 
>>> >>> +1 >>> >>>>> - Implement a true old-space (and rename the existing young to nursery >>>>> and old to young?). In my experience, sliding/mark-compact collection >>>>> using a mark bitmap works best for this: it tends to create a >>>>> 'sediment' >>>>> of permanent/very-long-lived objects at the bottom which would never >>>>> get >>>>> copied again. Using a bitmap, walking of live objects (e.g. during >>>>> copying, updating etc) would be very fast: much faster than walking >>>>> objects by their size. >>>> >>>> A mark-and-compact old generation algorithm definitely makes sense. >>>> Again, the only reason why we don't have it yet is that no one had time >>>> to implement it. >>>> >>>> Mark-and-compact is also great to reduce memory footprint. Right now, >>>> during GC the memory footprint can double because of the temporary space >>>> for copying. >>> >>> Yeah. However, as Peter noted, having no contiguous memory block >>> complicates this. I'd need to see how to deal with it (per-chunk-bitmap >>> probably, or maybe mark bit in object header, with some clever tricks to >>> make scanning the heap fast like serial GC does). >>> >>>>> - I am not totally sure about the policies. My current thinking is that >>>>> this needs some cleanup/straightening-out, or maybe I am >>>>> misunderstanding something there. I believe (fairly strongly) that >>>>> allocation failure is the single useful trigger for STW GC, and on top >>>>> of that an (optional) periodic GC trigger that would kick in after X >>>>> (milli)seconds no GC. >>>> >>>> As I mentioned above, the GC trigger is allocation failure for the young >>>> generation. >>> >>> Ok, good. >>> >>>>> - Low-hanging-fruit improvement that could be done right now: allocate >>>>> large objects(arrays) straight into old-gen instead of copying them >>>>> around. Those are usually long-lived anyway, and copying them >>>>> back-and-forth just costs CPU time for no benefit. This will become >>>>> even >>>>> more pronounced with a true old-gen. >>>> >>>> Large arrays are allocated separately in unaligned chunks. Such arrays >>>> are never copied, but only logically moved from the young generation >>>> into the old generation. An unaligned chunk contains exactly one large >>>> array. >>> >>> Ok, good. >>> >>>>> Oh and a question: what's this pinned object/chunks/spaces all about? >>>> >>>> There are two mechanisms right now to get objects that are never moved >>>> by the GC: >>>> 1) A "normal" object can be temporarily pinned using >>>> org.graalvm.nativeimage.PinnedObject. The current implementation then >>>> keeps the whole aligned chunk that contains the object alive, i.e., it >>>> is designed for pinnings that are released quickly so that no objects >>>> are actually ever pinned when the GC runs, unless the GC runs in an >>>> unlucky moments. We use such pinning for example to pass pointers into >>>> byte[] arrays directly to C functions without copying. >>>> >>>> 2) A PinnedAllocator can be used to get objects that are non-moving for >>>> a long period of time. This is currently used for the metadata of >>>> runtime compiled code. We are actively working to make PinnedAllocator >>>> unnecessary by putting the metadata into C memory, and then hopefully we >>>> can remove PinnedAllocator and all code that is necessary for it in the >>>> GC, i.e., the notion of pinned spaces you mentioned before. >>> >>> Ok, I guessed so. 
I mostly wondered about it because it's got from-space >>> and to-space: >>> >>> [pinnedFromSpace: >>> aligned: 0/0 unaligned: 0/0] >>> [pinnedToSpace: >>> aligned: 0/0 unaligned: 0/0]] >>> >>> And would it ever copy between them? I guess not. >> >> The collector logically moves a pinned chunk from pinned from-space to >> pinned to-space by updating bookkeeping information in the chunk. The >> contents of the pinned chunk are not moved, and their addresses do not >> change. If a pinned chunk is unpinned by the application, it is moved >> to the unpinned from-space and at the next full collection the reachable >> objects in it are scavenged to the unpinned to-space, like any other >> objects in unpinned from-space. Between collections the pinned to-space >> is empty. In your example, the pinned from-space is also empty. Spaces >> do not represent address ranges, so an empty space is just a few null >> pointers in the space data structure. (Spaces not being address ranges >> also complicates answer questions like: Is this object in the young >> generation.) >> >> ... peter >> >>> >>>>> What do you think about all this? Somebody else might have thought >>>>> about >>>>> all this already, and have some insights that I don't have in my naive >>>>> understanding? Maybe some of it is already worked on or planned? Maybe >>>>> there are some big obstactles that I don't see yet, that make it less >>>>> feasible? >>>> >>>> We certainly have ideas and plans, and they match your observations. If >>>> you are interested in contributing, we can definitely give you some >>>> guidance so that you immediately work into the right direction. >>> >>> Yes, I am. :-) >>> >>> Roman >>> >> > From rkennke at redhat.com Fri Mar 1 23:17:58 2019 From: rkennke at redhat.com (Roman Kennke) Date: Sat, 02 Mar 2019 00:17:58 +0100 Subject: Thoughts about SubstrateVM GC In-Reply-To: References: <1e4af634-b642-a1f7-f9ea-4f1cb413c823@redhat.com> <74356614-0487-7a34-49ec-e45516857ff0@Oracle.COM> Message-ID: <12B29C9A-0C76-4A7D-A53F-930ED6CAA89F@redhat.com> I am working on that ;-) I have users with a case, and we can prototype and measure with Serial GC vs EpsilonGC + Aleksey's sliding GC hack (https://shipilev.net/jvm/diy-gc/), that should give us some ideas :-) Cheers, Roman On 1 March 2019 at 23:13:09 CET, "Peter B. Kessler" wrote: >I think we need requirements from users, representative benchmarks or >real applications, inspired ideas, measurements of alternatives, and >time to write garbage collectors. :-) > >So far, SubstrateVM has explored one point in the space, and we know >that one size does not fit all when it comes to garbage collection. > > ... peter > >On 03/ 1/19 12:33 PM, Roman Kennke wrote: >> Hi Peter, >> >> picking up on our comments here: >> >> https://github.com/oracle/graal/pull/1015 >> >> It seems like there is a significant class of applications that would >be >> attractive with SubstrateVM that would be happy to do no GC at all. >> Shortlived microservices come to mind. For those applications, we'd >> realistically do want a GC, but only as last resort. In other words, >a >> single-space heap, that would trigger a collection only when >exhausted, >> and then do sliding compaction, would do it, and arguably better than >> the current GC: it would not require barriers overhead (i.e.
pay for >> something that we bet would never or very rarely happen), it would be >> able to use the full space, rather than dividing up in generations >and >> semispaces, and as you say, even eliminate the safepoint checks >overhead >> (does SubstrateVM only do safepoints for GC? that would be cool!). >> >> Even if it *does* GC, it might still be better off overall: objects >> would have more time to die, GC frequency would be rarer. With >current >> SubstrateVM I see relatively frequent full-GCs anyway, so rarer GCs >with >> more relative garbage, combined with increased throughput: I guess >that >> might actually work well for certain classes of applications, >especially >> those that would want to run on SubstrateVM anyway? >> >> You commented about GraalVM runtime compiler allocating a lot and >> leaving behind lots of garbage: would that be a concern in >SustrateVM's >> closed world view? >> >> WDYT? >> >> Roman >> >>> Two comments inline. And more encouragement to send along your >ideas. >>> >>> ... peter >>> >>> On 03/ 1/19 02:16 AM, Roman Kennke wrote: >>>> Hi Christian, >>>> >>>> thanks for your replies. This is very interesting. Some additional >>>> comments below inline: >>>> >>>>>> The old generation is 2 semispaces (actually, 4 with the 2 pinned >>>>>> spaces, which I'll ask about later). When collected, live objects >get >>>>>> scavenged back-and-forth between the two spaces. >>>>> >>>>> yes, in theory. In "reality" there is no contiguous memory for the >>>>> spaces, so after a GC all the aligned chunks of the from-space are >>>>> either returned to the OS or cached and immediately reused for >young >>>>> generation allocation. >>>> >>>> Aha, ok. This definitely affects future design decisions. >>>> >>>> >>>>>> - The policy when to start collecting seems a bit unclear to me. >In my >>>>>> understanding, there is (almost) no use (for STW GCs) in starting >a >>>>>> collection before allocation space is exhausted. Which means, it >seems >>>>>> an obvious trigger to start collection on allocation failure. >Yet, the >>>>>> policies I'm looking at are time-based or time-and-space-based. I >said >>>>>> 'almost' because the single use for time-based collection would >be >>>>>> periodic GC that is able to knock out lingering garbage during >>>>>> no/little-allocation phase of an application, and then only when >the GC >>>>>> is also uncommitting the resulting unused pages (which, afaics, >>>>>> Substrate GC would do: bravo!). But that doesn't seem to be the >>>>>> point of >>>>>> the time-based policies: it looks like the goal of those policies >is to >>>>>> balance time spent in young-vs-old-gen collections.?! >>>>> >>>>> The young generation size is fixed, and a collection is started >when >>>>> this space is filled. So from that point of view, the starting >trigger >>>>> of a GC is always space based. >>>>> >>>>> The policy whether to do an incremental (young) collection or a >full >>>>> collection is time based (but you can easily plug in any policy >you >>>>> want). The goal is to balance time between incremental and full >>>>> collection. We certainly don't want to fill up the old generation >>>>> because that is the maximum heap size, and it is by default >provisioned >>>>> to be very big. >>>> >>>> Ok, I see. >>>> Also, the way it's currently done, the old-gen needs to be able to >>>> absorb (worst-case) all of young-gen in next cycle, and therefore >needs >>>> *plenty* of headroom. I.e. 
we need to collect old-gen much earlier >than >>>> when it's full (like when remaining free space in old-gen is >smaller >>>> than young-gen size). Alternatively, we could exhaust old-gen, and >>>> change young-gen-collection-policy to skip collection if old-gen >doesn't >>>> have enough space left, and dive into full-GC right away. Or, even >>>> better, add an intermediate tenuring generation. :-) >>> >>> There is no fixed heap size, or fixed generation sizes. As long as >the >>> collector can allocate memory from the OS it can keep adding chunks >as >>> needed to the old generation (or the young generation, for that >matter. >>> E.g., to delay collection until it is "convenient".) If you run out >of >>> address space, or physical memory, then you are in trouble. >>> >>>> >>>> >>>>>> With a little bit of distance and squinting of eyes, one can see >that >>>>>> Substrate GC's young generation is really what is called 'nursery >>>>>> space' >>>>>> elsewhere, which aims to reduce the rate at which objects get >>>>>> introduced >>>>>> into young generation. And the old generation is really what is >usually >>>>>> called young generation elsewhere. What's missing is a true old >>>>>> generation space. >>>>> >>>>> Not really, because the young generation can be collected >independently, >>>>> i.e., there are the generational write barriers, remembered sets, >... >>>>> >>>>> So the young generation is reduced to the nursery space, but I >argue the >>>>> old generation is really an old generation. >>>> >>>> Ok. >>>> >>>>>> Considering all this, I would like to propose some improvements: >>>>>> - Introduce a notion of tenuring objects. I guess we need like 2 >age >>>>>> bits in the header or elsewhere for this. Do we have that room? >>>>> >>>>> You don't need the age bits in the header. You can easily go from >the >>>>> object to the aligned chunk that the object is in (we do that all >the >>>>> time, for example for the write barrier to do the card marking), >and >>>>> store the age in the chunk header. Requiring all objects in one >chunk to >>>>> have the same age is not much of a limitation. >>>> Right. >>>> >>>>> Adding tenuring is definitely necessary to achieve reasonable GC >>>>> performance. >>>> >>>> +1 >>>> >>>>>> - Implement a true old-space (and rename the existing young to >nursery >>>>>> and old to young?). In my experience, sliding/mark-compact >collection >>>>>> using a mark bitmap works best for this: it tends to create a >>>>>> 'sediment' >>>>>> of permanent/very-long-lived objects at the bottom which would >never >>>>>> get >>>>>> copied again. Using a bitmap, walking of live objects (e.g. >during >>>>>> copying, updating etc) would be very fast: much faster than >walking >>>>>> objects by their size. >>>>> >>>>> A mark-and-compact old generation algorithm definitely makes >sense. >>>>> Again, the only reason why we don't have it yet is that no one had >time >>>>> to implement it. >>>>> >>>>> Mark-and-compact is also great to reduce memory footprint. Right >now, >>>>> during GC the memory footprint can double because of the temporary >space >>>>> for copying. >>>> >>>> Yeah. However, as Peter noted, having no contiguous memory block >>>> complicates this. I'd need to see how to deal with it >(per-chunk-bitmap >>>> probably, or maybe mark bit in object header, with some clever >tricks to >>>> make scanning the heap fast like serial GC does). >>>> >>>>>> - I am not totally sure about the policies. 
My current thinking >is that >>>>>> this needs some cleanup/straightening-out, or maybe I am >>>>>> misunderstanding something there. I believe (fairly strongly) >that >>>>>> allocation failure is the single useful trigger for STW GC, and >on top >>>>>> of that an (optional) periodic GC trigger that would kick in >after X >>>>>> (milli)seconds no GC. >>>>> >>>>> As I mentioned above, the GC trigger is allocation failure for the >young >>>>> generation. >>>> >>>> Ok, good. >>>> >>>>>> - Low-hanging-fruit improvement that could be done right now: >allocate >>>>>> large objects(arrays) straight into old-gen instead of copying >them >>>>>> around. Those are usually long-lived anyway, and copying them >>>>>> back-and-forth just costs CPU time for no benefit. This will >become >>>>>> even >>>>>> more pronounced with a true old-gen. >>>>> >>>>> Large arrays are allocated separately in unaligned chunks. Such >arrays >>>>> are never copied, but only logically moved from the young >generation >>>>> into the old generation. An unaligned chunk contains exactly one >large >>>>> array. >>>> >>>> Ok, good. >>>> >>>>>> Oh and a question: what's this pinned object/chunks/spaces all >about? >>>>> >>>>> There are two mechanisms right now to get objects that are never >moved >>>>> by the GC: >>>>> 1) A "normal" object can be temporarily pinned using >>>>> org.graalvm.nativeimage.PinnedObject. The current implementation >then >>>>> keeps the whole aligned chunk that contains the object alive, >i.e., it >>>>> is designed for pinnings that are released quickly so that no >objects >>>>> are actually ever pinned when the GC runs, unless the GC runs in >an >>>>> unlucky moments. We use such pinning for example to pass pointers >into >>>>> byte[] arrays directly to C functions without copying. >>>>> >>>>> 2) A PinnedAllocator can be used to get objects that are >non-moving for >>>>> a long period of time. This is currently used for the metadata of >>>>> runtime compiled code. We are actively working to make >PinnedAllocator >>>>> unnecessary by putting the metadata into C memory, and then >hopefully we >>>>> can remove PinnedAllocator and all code that is necessary for it >in the >>>>> GC, i.e., the notion of pinned spaces you mentioned before. >>>> >>>> Ok, I guessed so. I mostly wondered about it because it's got >from-space >>>> and to-space: >>>> >>>> [pinnedFromSpace: >>>> aligned: 0/0 unaligned: 0/0] >>>> [pinnedToSpace: >>>> aligned: 0/0 unaligned: 0/0]] >>>> >>>> And would it ever copy between them? I guess not. >>> >>> The collector logically moves a pinned chunk from pinned from-space >to >>> pinned to-space by updating bookkeeping information in the chunk. >The >>> contents of the pinned chunk are not moved, and their addresses do >not >>> change. If a pinned chunk is unpinned by the application, it is >moved >>> to the unpinned from-space and at the next full collection the >reachable >>> objects in it are scavenged to the unpinned to-space, like any other >>> objects in unpinned from-space. Between collections the pinned >to-space >>> is empty. In your example, the pinned from-space is also empty. >Spaces >>> do not represent address ranges, so an empty space is just a few >null >>> pointers in the space data structure. (Spaces not being address >ranges >>> also complicates answer questions like: Is this object in the young >>> generation.) >>> >>> ... peter >>> >>>> >>>>>> What do you think about all this? 
Somebody else might have >thought >>>>>> about >>>>>> all this already, and have some insights that I don't have in my >naive >>>>>> understanding? Maybe some of it is already worked on or planned? >Maybe >>>>>> there are some big obstactles that I don't see yet, that make it >less >>>>>> feasible? >>>>> >>>>> We certainly have ideas and plans, and they match your >observations. If >>>>> you are interested in contributing, we can definitely give you >some >>>>> guidance so that you immediately work into the right direction. >>>> >>>> Yes, I am. :-) >>>> >>>> Roman >>>> >>> >> -- Diese Nachricht wurde von meinem Android-Ger?t mit K-9 Mail gesendet. From Peter.B.Kessler at Oracle.COM Sat Mar 2 02:15:08 2019 From: Peter.B.Kessler at Oracle.COM (Peter B. Kessler) Date: Fri, 1 Mar 2019 18:15:08 -0800 Subject: Thoughts about SubstrateVM GC In-Reply-To: <12B29C9A-0C76-4A7D-A53F-930ED6CAA89F@redhat.com> References: <1e4af634-b642-a1f7-f9ea-4f1cb413c823@redhat.com> <74356614-0487-7a34-49ec-e45516857ff0@Oracle.COM> <12B29C9A-0C76-4A7D-A53F-930ED6CAA89F@redhat.com> Message-ID: See also https://github.com/oracle/graal/issues/853 for an example of using the `CollectionPolicy$NeverCollect` collection policy. Though, we did not get any numbers out of that. ... peter On 03/ 1/19 03:17 PM, Roman Kennke wrote: > I am working on that ;-) I have users with a case, and we can prototype and measure with Serial GC vs EpsilonGC+ Aleksey's sliding GC hack (https://shipilev.net/jvm/diy-gc/), that should give us some ideas :-) > > Cheers, Roman > > > > Am 1. M?rz 2019 23:13:09 MEZ schrieb "Peter B. Kessler" : > > I think we need requirements from users, representative benchmarks or real applications, inspired ideas, measurements of alternatives, and time to write garbage collectors. :-) > > So far, SubstrateVM has explored one point in the space, and we know that one size does not fit all when it comes to garbage collection. > > ... peter > > On 03/ 1/19 12:33 PM, Roman Kennke wrote: > > Hi Peter, > > picking up on our comments here: > > https://github.com/oracle/graal/pull/1015 > > It seems like there is a significant class of applications that would be > attractive with SubstrateVM that would be happy to do no GC at all. > Shortlived microservices come to mind. For those applications, we'd > realistically do want a GC, but only as last resort. In other words, a > single-space heap, that would trigger a collection only when exhausted, > and then do sliding compaction, would do it, and arguably better than > the current GC: it would not require barriers overhead (i.e. pay for > something that we bet would never or very rarely happen), it would be > able to use the full space, rather than dividing up in generations and > semispaces, and as you say, even eliminate the safepoint checks overhead > (does SubstrateVM only do safepoints for GC? that would be cool!). > > Even if it *does* GC, it might still be better off overall: objects > would have more time to die, GC frequency would be rarer. With current > SubstrateVM I see relatively frequent full-GCs anyway, so rarer GCs with > more relative garbage, combined with increased throughput: I guess that > might actually work well for certain classes of applications, especially > those that would want to run on SubstrateVM anyway? > > You commented about GraalVM runtime compiler allocating a lot and > leaving behind lots of garbage: would that be a concern in SustrateVM's > closed world view? > > WDYT? > > Roman > > Two comments inline. 
And more encouragement to send along your ideas. > > ... peter > > On 03/ 1/19 02:16 AM, Roman Kennke wrote: > > Hi Christian, > > thanks for your replies. This is very interesting. Some additional > comments below inline: > > The old generation is 2 semispaces (actually, 4 with the 2 pinned > spaces, which I'll ask about later). When collected, live objects get > scavenged back-and-forth between the two spaces. > > > yes, in theory. In "reality" there is no contiguous memory for the > spaces, so after a GC all the aligned chunks of the from-space are > either returned to the OS or cached and immediately reused for young > generation allocation. > > > Aha, ok. This definitely affects future design decisions. > > > - The policy when to start collecting seems a bit unclear to me. In my > understanding, there is (almost) no use (for STW GCs) in starting a > collection before allocation space is exhausted. Which means, it seems > an obvious trigger to start collection on allocation failure. Yet, the > policies I'm looking at are time-based or time-and-space-based. I said > 'almost' because the single use for time-based collection would be > periodic GC that is able to knock out lingering garbage during > no/little-allocation phase of an application, and then only when the GC > is also uncommitting the resulting unused pages (which, afaics, > Substrate GC would do: bravo!). But that doesn't seem to be the > point of > the time-based policies: it looks like the goal of those policies is to > balance time spent in young-vs-old-gen collections.?! > > > The young generation size is fixed, and a collection is started when > this space is filled. So from that point of view, the starting trigger > of a GC is always space based. > > The policy whether to do an incremental (young) collection or a full > collection is time based (but you can easily plug in any policy you > want). The goal is to balance time between incremental and full > collection. We certainly don't want to fill up the old generation > because that is the maximum heap size, and it is by default provisioned > to be very big. > > > Ok, I see. > Also, the way it's currently done, the old-gen needs to be able to > absorb (worst-case) all of young-gen in next cycle, and therefore needs > *plenty* of headroom. I.e. we need to collect old-gen much earlier than > when it's full (like when remaining free space in old-gen is smaller > than young-gen size). Alternatively, we could exhaust old-gen, and > change young-gen-collection-policy to skip collection if old-gen doesn't > have enough space left, and dive into full-GC right away. Or, even > better, add an intermediate tenuring generation. :-) > > > There is no fixed heap size, or fixed generation sizes. As long as the > collector can allocate memory from the OS it can keep adding chunks as > needed to the old generation (or the young generation, for that matter. > E.g., to delay collection until it is "convenient".) If you run out of > address space, or physical memory, then you are in trouble. > > > > With a little bit of distance and squinting of eyes, one can see that > Substrate GC's young generation is really what is called 'nursery > space' > elsewhere, which aims to reduce the rate at which objects get > introduced > into young generation. And the old generation is really what is usually > called young generation elsewhere. What's missing is a true old > generation space. 
> > > Not really, because the young generation can be collected independently, > i.e., there are the generational write barriers, remembered sets, ... > > So the young generation is reduced to the nursery space, but I argue the > old generation is really an old generation. > > > Ok. > > Considering all this, I would like to propose some improvements: > - Introduce a notion of tenuring objects. I guess we need like 2 age > bits in the header or elsewhere for this. Do we have that room? > > > You don't need the age bits in the header. You can easily go from the > object to the aligned chunk that the object is in (we do that all the > time, for example for the write barrier to do the card marking), and > store the age in the chunk header. Requiring all objects in one chunk to > have the same age is not much of a limitation. > > Right. > > Adding tenuring is definitely necessary to achieve reasonable GC > performance. > > > +1 > > - Implement a true old-space (and rename the existing young to nursery > and old to young?). In my experience, sliding/mark-compact collection > using a mark bitmap works best for this: it tends to create a > 'sediment' > of permanent/very-long-lived objects at the bottom which would never > get > copied again. Using a bitmap, walking of live objects (e.g. during > copying, updating etc) would be very fast: much faster than walking > objects by their size. > > > A mark-and-compact old generation algorithm definitely makes sense. > Again, the only reason why we don't have it yet is that no one had time > to implement it. > > Mark-and-compact is also great to reduce memory footprint. Right now, > during GC the memory footprint can double because of the temporary space > for copying. > > > Yeah. However, as Peter noted, having no contiguous memory block > complicates this. I'd need to see how to deal with it (per-chunk-bitmap > probably, or maybe mark bit in object header, with some clever tricks to > make scanning the heap fast like serial GC does). > > - I am not totally sure about the policies. My current thinking is that > this needs some cleanup/straightening-out, or maybe I am > misunderstanding something there. I believe (fairly strongly) that > allocation failure is the single useful trigger for STW GC, and on top > of that an (optional) periodic GC trigger that would kick in after X > (milli)seconds no GC. > > > As I mentioned above, the GC trigger is allocation failure for the young > generation. > > > Ok, good. > > - Low-hanging-fruit improvement that could be done right now: allocate > large objects(arrays) straight into old-gen instead of copying them > around. Those are usually long-lived anyway, and copying them > back-and-forth just costs CPU time for no benefit. This will become > even > more pronounced with a true old-gen. > > > Large arrays are allocated separately in unaligned chunks. Such arrays > are never copied, but only logically moved from the young generation > into the old generation. An unaligned chunk contains exactly one large > array. > > > Ok, good. > > Oh and a question: what's this pinned object/chunks/spaces all about? > > > There are two mechanisms right now to get objects that are never moved > by the GC: > 1) A "normal" object can be temporarily pinned using > org.graalvm.nativeimage.PinnedObject. 
The current implementation then > keeps the whole aligned chunk that contains the object alive, i.e., it > is designed for pinnings that are released quickly so that no objects > are actually ever pinned when the GC runs, unless the GC runs in an > unlucky moments. We use such pinning for example to pass pointers into > byte[] arrays directly to C functions without copying. > > 2) A PinnedAllocator can be used to get objects that are non-moving for > a long period of time. This is currently used for the metadata of > runtime compiled code. We are actively working to make PinnedAllocator > unnecessary by putting the metadata into C memory, and then hopefully we > can remove PinnedAllocator and all code that is necessary for it in the > GC, i.e., the notion of pinned spaces you mentioned before. > > > Ok, I guessed so. I mostly wondered about it because it's got from-space > and to-space: > > [pinnedFromSpace: > aligned: 0/0 unaligned: 0/0] > [pinnedToSpace: > aligned: 0/0 unaligned: 0/0]] > > And would it ever copy between them? I guess not. > > > The collector logically moves a pinned chunk from pinned from-space to > pinned to-space by updating bookkeeping information in the chunk. The > contents of the pinned chunk are not moved, and their addresses do not > change. If a pinned chunk is unpinned by the application, it is moved > to the unpinned from-space and at the next full collection the reachable > objects in it are scavenged to the unpinned to-space, like any other > objects in unpinned from-space. Between collections the pinned to-space > is empty. In your example, the pinned from-space is also empty. Spaces > do not represent address ranges, so an empty space is just a few null > pointers in the space data structure. (Spaces not being address ranges > also complicates answer questions like: Is this object in the young > generation.) > > ... peter > > > What do you think about all this? Somebody else might have thought > about > all this already, and have some insights that I don't have in my naive > understanding? Maybe some of it is already worked on or planned? Maybe > there are some big obstactles that I don't see yet, that make it less > feasible? > > > We certainly have ideas and plans, and they match your observations. If > you are interested in contributing, we can definitely give you some > guidance so that you immediately work into the right direction. > > > Yes, I am. :-) > > Roman > > > > > > -- > Diese Nachricht wurde von meinem Android-Ger?t mit K-9 Mail gesendet. From christian.wimmer at oracle.com Mon Mar 4 18:15:17 2019 From: christian.wimmer at oracle.com (Christian Wimmer) Date: Mon, 4 Mar 2019 10:15:17 -0800 Subject: Thoughts about SubstrateVM GC In-Reply-To: <12B29C9A-0C76-4A7D-A53F-930ED6CAA89F@redhat.com> References: <1e4af634-b642-a1f7-f9ea-4f1cb413c823@redhat.com> <74356614-0487-7a34-49ec-e45516857ff0@Oracle.COM> <12B29C9A-0C76-4A7D-A53F-930ED6CAA89F@redhat.com> Message-ID: <1acfaf48-f18e-1850-f7ca-367958110264@oracle.com> Note that Substrate VM does not have a "mark word" like HotSpot. This makes mark-and-compact algorithms a bit more complicated because by default there is no space in an object to store a forwarding pointer without overwriting some part of the object. 
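As an illustration of the problem Christian describes, the sketch below shows one way a mark-and-compact collector can keep forwarding information outside the objects themselves: a small side table per aligned chunk, filled during the planning pass and consulted while fixing up references. This is not Substrate VM code and none of these names exist in the code base; it only makes the idea concrete under the assumption that object starts are aligned and chunk sizes are modest.

    // Illustrative sketch only, not Substrate VM code: forwarding pointers for a
    // sliding compaction kept in a per-aligned-chunk side table instead of in an
    // object header word (which these objects do not have to spare).
    final class ChunkForwardingTable {

        private final long chunkBase;    // address of the first possible object in the chunk
        private final int alignment;     // object alignment in bytes, e.g. 8
        private final long[] newAddress; // 0 means "dead or not yet planned"

        ChunkForwardingTable(long chunkBase, int chunkSize, int alignment) {
            this.chunkBase = chunkBase;
            this.alignment = alignment;
            this.newAddress = new long[chunkSize / alignment];
        }

        // Planning pass: record where a live object will slide to.
        void plan(long oldAddress, long targetAddress) {
            newAddress[slot(oldAddress)] = targetAddress;
        }

        // Fix-up pass: rewrite a reference through the table; unplanned objects stay put.
        long forwardedAddress(long oldAddress) {
            long target = newAddress[slot(oldAddress)];
            return target != 0L ? target : oldAddress;
        }

        private int slot(long address) {
            return (int) ((address - chunkBase) / alignment);
        }
    }

The table costs one word per potential object start, so in practice one would shrink it (for example to 32-bit offsets relative to the chunk base) or derive target addresses from a mark bitmap instead of storing them at all.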
-Christian On 3/1/19 15:17, Roman Kennke wrote: > I am working on that ;-) I have users with a case, and we can prototype > and measure with Serial GC vs EpsilonGC+ Aleksey's sliding GC hack > (https://shipilev.net/jvm/diy-gc/), > > that should give us some ideas :-) > > Cheers, Roman > > > > Am 1. M?rz 2019 23:13:09 MEZ schrieb "Peter B. Kessler" > : > > I think we need requirements from users, representative benchmarks or real applications, inspired ideas, measurements of alternatives, and time to write garbage collectors. :-) > > So far, SubstrateVM has explored one point in the space, and we know that one size does not fit all when it comes to garbage collection. > > ... peter > > On 03/ 1/19 12:33 PM, Roman Kennke wrote: > > Hi Peter, > > picking up on our comments here: > > https://github.com/oracle/graal/pull/1015 > > > It seems like there is a significant class of applications that > would be > attractive with SubstrateVM that would be happy to do no GC at all. > Shortlived microservices come to mind. For those applications, we'd > realistically do want a GC, but only as last resort. In other > words, a > single-space heap, that would trigger a collection only when > exhausted, > and then do sliding compaction, would do it, and arguably better > than > the current GC: it would not require barriers overhead (i.e. pay for > something that we bet would never or very rarely happen), it > would be > able to use the full space, rather than dividing up in > generations and > semispaces, and as you say, even eliminate the safepoint checks > overhead > (does SubstrateVM only do safepoints for GC? that would be cool!). > > Even if it *does* GC, it might still be better off overall: objects > would have more time to die, GC frequency would be rarer. With > current > SubstrateVM I see relatively frequent full-GCs anyway, so rarer > GCs with > more relative garbage, combined with increased throughput: I > guess that > might actually work well for certain classes of applications, > especially > those that would want to run on SubstrateVM anyway? > > You commented about GraalVM runtime compiler allocating a lot and > leaving behind lots of garbage: would that be a concern in > SustrateVM's > closed world view? > > WDYT? > > Roman > > Two comments inline. And more encouragement to send along > your ideas. > > ... peter > > On 03/ 1/19 02:16 AM, Roman Kennke wrote: > > Hi Christian, > > thanks for your replies. This is very interesting. Some > additional > comments below inline: > > The old generation is 2 semispaces (actually, 4 > with the 2 pinned > spaces, which I'll ask about later). When > collected, live objects get > scavenged back-and-forth between the two spaces. > > > yes, in theory. In "reality" there is no contiguous > memory for the > spaces, so after a GC all the aligned chunks of the > from-space are > either returned to the OS or cached and immediately > reused for young > generation allocation. > > > Aha, ok. This definitely affects future design decisions. > > > - The policy when to start collecting seems a > bit unclear to me. In my > understanding, there is (almost) no use (for STW > GCs) in starting a > collection before allocation space is exhausted. > Which means, it seems > an obvious trigger to start collection on > allocation failure. Yet, the > policies I'm looking at are time-based or > time-and-space-based. 
I said > 'almost' because the single use for time-based > collection would be > periodic GC that is able to knock out lingering > garbage during > no/little-allocation phase of an application, > and then only when the GC > is also uncommitting the resulting unused pages > (which, afaics, > Substrate GC would do: bravo!). But that doesn't > seem to be the > point of > the time-based policies: it looks like the goal > of those policies is to > balance time spent in young-vs-old-gen > collections.?! > > > The young generation size is fixed, and a collection > is started when > this space is filled. So from that point of view, > the starting trigger > of a GC is always space based. > > The policy whether to do an incremental (young) > collection or a full > collection is time based (but you can easily plug in > any policy you > want). The goal is to balance time between > incremental and full > collection. We certainly don't want to fill up the > old generation > because that is the maximum heap size, and it is by > default provisioned > to be very big. > > > Ok, I see. > Also, the way it's currently done, the old-gen needs to > be able to > absorb (worst-case) all of young-gen in next cycle, and > therefore needs > *plenty* of headroom. I.e. we need to collect old-gen > much earlier than > when it's full (like when remaining free space in > old-gen is smaller > than young-gen size). Alternatively, we could exhaust > old-gen, and > change young-gen-collection-policy to skip collection if > old-gen doesn't > have enough space left, and dive into full-GC right > away. Or, even > better, add an intermediate tenuring generation. :-) > > > There is no fixed heap size, or fixed generation sizes. As > long as the > collector can allocate memory from the OS it can keep adding > chunks as > needed to the old generation (or the young generation, for > that matter. > E.g., to delay collection until it is "convenient".) If you > run out of > address space, or physical memory, then you are in trouble. > > > > With a little bit of distance and squinting of > eyes, one can see that > Substrate GC's young generation is really what > is called 'nursery > space' > elsewhere, which aims to reduce the rate at > which objects get > introduced > into young generation. And the old generation is > really what is usually > called young generation elsewhere. What's > missing is a true old > generation space. > > > Not really, because the young generation can be > collected independently, > i.e., there are the generational write barriers, > remembered sets, ... > > So the young generation is reduced to the nursery > space, but I argue the > old generation is really an old generation. > > > Ok. > > Considering all this, I would like to propose > some improvements: > - Introduce a notion of tenuring objects. I > guess we need like 2 age > bits in the header or elsewhere for this. Do we > have that room? > > > You don't need the age bits in the header. You can > easily go from the > object to the aligned chunk that the object is in > (we do that all the > time, for example for the write barrier to do the > card marking), and > store the age in the chunk header. Requiring all > objects in one chunk to > have the same age is not much of a limitation. > > Right. > > Adding tenuring is definitely necessary to achieve > reasonable GC > performance. > > > +1 > > - Implement a true old-space (and rename the > existing young to nursery > and old to young?). 
In my experience, > sliding/mark-compact collection > using a mark bitmap works best for this: it > tends to create a > 'sediment' > of permanent/very-long-lived objects at the > bottom which would never > get > copied again. Using a bitmap, walking of live > objects (e.g. during > copying, updating etc) would be very fast: much > faster than walking > objects by their size. > > > A mark-and-compact old generation algorithm > definitely makes sense. > Again, the only reason why we don't have it yet is > that no one had time > to implement it. > > Mark-and-compact is also great to reduce memory > footprint. Right now, > during GC the memory footprint can double because of > the temporary space > for copying. > > > Yeah. However, as Peter noted, having no contiguous > memory block > complicates this. I'd need to see how to deal with it > (per-chunk-bitmap > probably, or maybe mark bit in object header, with some > clever tricks to > make scanning the heap fast like serial GC does). > > - I am not totally sure about the policies. My > current thinking is that > this needs some cleanup/straightening-out, or > maybe I am > misunderstanding something there. I believe > (fairly strongly) that > allocation failure is the single useful trigger > for STW GC, and on top > of that an (optional) periodic GC trigger that > would kick in after X > (milli)seconds no GC. > > > As I mentioned above, the GC trigger is allocation > failure for the young > generation. > > > Ok, good. > > - Low-hanging-fruit improvement that could be > done right now: allocate > large objects(arrays) straight into old-gen > instead of copying them > around. Those are usually long-lived anyway, and > copying them > back-and-forth just costs CPU time for no > benefit. This will become > even > more pronounced with a true old-gen. > > > Large arrays are allocated separately in unaligned > chunks. Such arrays > are never copied, but only logically moved from the > young generation > into the old generation. An unaligned chunk contains > exactly one large > array. > > > Ok, good. > > Oh and a question: what's this pinned > object/chunks/spaces all about? > > > There are two mechanisms right now to get objects > that are never moved > by the GC: > 1) A "normal" object can be temporarily pinned using > org.graalvm.nativeimage.PinnedObject. The current > implementation then > keeps the whole aligned chunk that contains the > object alive, i.e., it > is designed for pinnings that are released quickly > so that no objects > are actually ever pinned when the GC runs, unless > the GC runs in an > unlucky moments. We use such pinning for example to > pass pointers into > byte[] arrays directly to C functions without copying. > > 2) A PinnedAllocator can be used to get objects that > are non-moving for > a long period of time. This is currently used for > the metadata of > runtime compiled code. We are actively working to > make PinnedAllocator > unnecessary by putting the metadata into C memory, > and then hopefully we > can remove PinnedAllocator and all code that is > necessary for it in the > GC, i.e., the notion of pinned spaces you mentioned > before. > > > Ok, I guessed so. I mostly wondered about it because > it's got from-space > and to-space: > > [pinnedFromSpace: > aligned: 0/0 unaligned: 0/0] > [pinnedToSpace: > aligned: 0/0 unaligned: 0/0]] > > And would it ever copy between them? I guess not. 
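For readers who have not seen the pinning API mentioned above, the pattern looks roughly like this. The snippet assumes the org.graalvm.nativeimage.PinnedObject class as shipped around these release candidates (in particular that it is AutoCloseable and offers create and addressOfArrayElement); fillBufferFromC is an invented stand-in for a real C function binding and is not part of any API.

    import org.graalvm.nativeimage.PinnedObject;
    import org.graalvm.word.PointerBase;

    public final class PinningSketch {

        // Invented placeholder for a C function binding; only marks where the raw
        // pointer would be handed to native code.
        private static native void fillBufferFromC(PointerBase data, int length);

        public static void readInto(byte[] buffer) {
            // While the PinnedObject is open, the chunk holding 'buffer' is kept
            // alive and unmoved, so the raw address stays valid.
            try (PinnedObject pin = PinnedObject.create(buffer)) {
                PointerBase raw = pin.addressOfArrayElement(0);
                fillBufferFromC(raw, buffer.length);
            }
            // After close() the collector is again free to move, or logically
            // move, the chunk that contains 'buffer', which is why the pinning
            // scope is kept as short as possible.
        }
    }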
> > > The collector logically moves a pinned chunk from pinned > from-space to > pinned to-space by updating bookkeeping information in the > chunk. The > contents of the pinned chunk are not moved, and their > addresses do not > change. If a pinned chunk is unpinned by the application, it > is moved > to the unpinned from-space and at the next full collection > the reachable > objects in it are scavenged to the unpinned to-space, like > any other > objects in unpinned from-space. Between collections the > pinned to-space > is empty. In your example, the pinned from-space is also > empty. Spaces > do not represent address ranges, so an empty space is just a > few null > pointers in the space data structure. (Spaces not being > address ranges > also complicates answer questions like: Is this object in > the young > generation.) > > ... peter > > > What do you think about all this? Somebody else > might have thought > about > all this already, and have some insights that I > don't have in my naive > understanding? Maybe some of it is already > worked on or planned? Maybe > there are some big obstactles that I don't see > yet, that make it less > feasible? > > > We certainly have ideas and plans, and they match > your observations. If > you are interested in contributing, we can > definitely give you some > guidance so that you immediately work into the right > direction. > > > Yes, I am. :-) > > Roman > > > > > > -- > Diese Nachricht wurde von meinem Android-Ger?t mit K-9 Mail gesendet. From doug.simon at oracle.com Thu Mar 7 09:56:41 2019 From: doug.simon at oracle.com (Doug Simon) Date: Thu, 7 Mar 2019 10:56:41 +0100 Subject: JVMCI 0.55 released Message-ID: Changes in JVMCI 0.55 include: * GR-14040: Be more careful about primitive types in boxing objects. * GR-13950: Extended HotSpotVMConfigAccess API to query C++ field types. * GR-13685: Serialize HotSpot speculations. * GR-13844: Implement HotSpotObjectConstantImpl.hashCode properly. * GR-13408: Free C allocated compilation failure message (JDK-8217445). * GR-13412: Must only initialize JVMCIClassLoaderFactory from the VM. * GR-13374: Expose some JVMTI capabilities via JVMCI. This GR-13685 update introduces new JVMCI API that is now used by Graal. Any dependency on Graal also needs to update to jvmci-0.55 in CI configurations. The OracleJDK based ?labsjdk? binaries are available at https://www.oracle.com/technetwork/oracle-labs/program-languages/downloads/index.html (in the lower half of the page). The OpenJDK based binaries are at https://github.com/graalvm/openjdk8-jvmci-builder/releases/tag/jvmci-0.55 -Doug From doug.simon at oracle.com Tue Mar 12 11:08:50 2019 From: doug.simon at oracle.com (Doug Simon) Date: Tue, 12 Mar 2019 12:08:50 +0100 Subject: JVMCI 0.56 released Message-ID: <6150E7E5-C003-4830-B31C-99CF1BD3A749@oracle.com> Changes in JVMCI 0.56 include: ? GR-14359: Change TraceClassLoadingStack to TraceClassLoadingCause. ? GR-14361: Only exit VM only on unrecoverable exceptions in JVMCI. ? GR-14359: Re-add TraceClassLoadingStack flag. ? GR-14268: Revert some flag defaults for libgraal. ? GR-14278: Minor JVMCI fixes. ? GR-14244: Add missing type in `vmStructs_jvmci`. ? GR-14229: Reserve oops table slot for non-default HotSpotNmethod compiled by libgraal and fix its translation to HotSpot heap. ? GR-14207: Fix default implementation of hasBytecodes. ? GR-14043: Use Handle with asConstant. ? GR-14184: Avoid unnecessary allocation when validating speculations. ? GR-13955: Put HotSpotNmethod mirror into nmethod oops table. ? 
GR-14112: Remove new test from unsafe intrinsification. ? GR-14147: Fixed lazy collection of failed speculations. ? GR-14106: Properly create byte[]. ? GR-14063: Add libgraal gate. The OpenJDK based binaries are at https://github.com/graalvm/openjdk8-jvmci-builder/releases/tag/jvmci-0.56 The OracleJDK based ?labsjdk? binaries will be available soon at https://www.oracle.com/technetwork/oracle-labs/program-languages/downloads/index.html (in the lower half of the page). -Doug From vladimir.kozlov at oracle.com Mon Mar 18 19:06:34 2019 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Mon, 18 Mar 2019 12:06:34 -0700 Subject: RFR: JDK-8220389 - Update Graal In-Reply-To: References: Message-ID: <38826dcc-6ff0-a9d4-f463-ca3050caf4af@oracle.com> Changes are fine. There are failures in testing but they seem unrelated. There is strange thing about generated overwriiten-diffs.txt file pointed by Dean but it is the problem of the script which generates it. Thanks, Vladimir On 3/15/19 10:15 PM, jesper.wilhelmsson at oracle.com wrote: > Hi, > > Please review the patch to integrate the latest Graal changes into OpenJDK. > Graal tip to integrate: fe5d30fb9d5b1cfbf455dc161e749381a93732d1 > > JBS duplicates deferred to the next integration: > https://bugs.openjdk.java.net/browse/JDK-8214947 > > Bug: https://bugs.openjdk.java.net/browse/JDK-8220389 > Webrev: http://cr.openjdk.java.net/~jwilhelm/8220389/webrev.00/ > > This integration did overwrite changes already in place in OpenJDK. The diff has been attached to the umbrella bug. > > Thanks, > /Jesper > From dean.long at oracle.com Mon Mar 18 19:14:05 2019 From: dean.long at oracle.com (dean.long at oracle.com) Date: Mon, 18 Mar 2019 12:14:05 -0700 Subject: RFR: JDK-8220389 - Update Graal In-Reply-To: <38826dcc-6ff0-a9d4-f463-ca3050caf4af@oracle.com> References: <38826dcc-6ff0-a9d4-f463-ca3050caf4af@oracle.com> Message-ID: The change in make/test/JtregGraalUnit.gmk seems to be changing the indentation only, but the original indentation looks correct.? Can we revert this change and correct the script? dl On 3/18/19 12:06 PM, Vladimir Kozlov wrote: > Changes are fine. > > There are failures in testing but they seem unrelated. > > There is strange thing about generated overwriiten-diffs.txt file > pointed by Dean but it is the problem of the script which generates it. > > Thanks, > Vladimir > > On 3/15/19 10:15 PM, jesper.wilhelmsson at oracle.com wrote: >> Hi, >> >> Please review the patch to integrate the latest Graal changes into >> OpenJDK. >> Graal tip to integrate: fe5d30fb9d5b1cfbf455dc161e749381a93732d1 >> >> JBS duplicates deferred to the next integration: >> https://bugs.openjdk.java.net/browse/JDK-8214947 >> >> Bug: https://bugs.openjdk.java.net/browse/JDK-8220389 >> Webrev: http://cr.openjdk.java.net/~jwilhelm/8220389/webrev.00/ >> >> This integration did overwrite changes already in place in OpenJDK. >> The diff has been attached to the umbrella bug. >> >> Thanks, >> /Jesper >> From jesper.wilhelmsson at oracle.com Tue Mar 19 00:53:10 2019 From: jesper.wilhelmsson at oracle.com (jesper.wilhelmsson at oracle.com) Date: Tue, 19 Mar 2019 01:53:10 +0100 Subject: RFR: JDK-8220389 - Update Graal In-Reply-To: References: <38826dcc-6ff0-a9d4-f463-ca3050caf4af@oracle.com> Message-ID: <4668115D-1C38-4309-AFAE-B3F42D6E3056@oracle.com> I have reverted the make/test/JtregGraalUnit.gmk changes locally. 
Thanks, /Jesper > On 18 Mar 2019, at 20:14, dean.long at oracle.com wrote: > > The change in make/test/JtregGraalUnit.gmk seems to be changing the indentation only, but the original indentation looks correct. Can we revert this change and correct the script? > > dl > > On 3/18/19 12:06 PM, Vladimir Kozlov wrote: >> Changes are fine. >> >> There are failures in testing but they seem unrelated. >> >> There is strange thing about generated overwriiten-diffs.txt file pointed by Dean but it is the problem of the script which generates it. >> >> Thanks, >> Vladimir >> >> On 3/15/19 10:15 PM, jesper.wilhelmsson at oracle.com wrote: >>> Hi, >>> >>> Please review the patch to integrate the latest Graal changes into OpenJDK. >>> Graal tip to integrate: fe5d30fb9d5b1cfbf455dc161e749381a93732d1 >>> >>> JBS duplicates deferred to the next integration: >>> https://bugs.openjdk.java.net/browse/JDK-8214947 >>> >>> Bug: https://bugs.openjdk.java.net/browse/JDK-8220389 >>> Webrev: http://cr.openjdk.java.net/~jwilhelm/8220389/webrev.00/ >>> >>> This integration did overwrite changes already in place in OpenJDK. The diff has been attached to the umbrella bug. >>> >>> Thanks, >>> /Jesper >>> > From jesper.wilhelmsson at oracle.com Tue Mar 19 00:55:49 2019 From: jesper.wilhelmsson at oracle.com (jesper.wilhelmsson at oracle.com) Date: Tue, 19 Mar 2019 01:55:49 +0100 Subject: RFR: JDK-8220389 - Update Graal In-Reply-To: <38826dcc-6ff0-a9d4-f463-ca3050caf4af@oracle.com> References: <38826dcc-6ff0-a9d4-f463-ca3050caf4af@oracle.com> Message-ID: Are you referring to the indentation changes in make/test/JtregGraalUnit.gmk? Those have been reverted now. Should the changes in overwritten-diffs be re-applied to the OpenJDK? Thanks, /Jesper > On 18 Mar 2019, at 20:06, Vladimir Kozlov wrote: > > Changes are fine. > > There are failures in testing but they seem unrelated. > > There is strange thing about generated overwriiten-diffs.txt file pointed by Dean but it is the problem of the script which generates it. > > Thanks, > Vladimir > > On 3/15/19 10:15 PM, jesper.wilhelmsson at oracle.com wrote: >> Hi, >> Please review the patch to integrate the latest Graal changes into OpenJDK. >> Graal tip to integrate: fe5d30fb9d5b1cfbf455dc161e749381a93732d1 >> JBS duplicates deferred to the next integration: >> https://bugs.openjdk.java.net/browse/JDK-8214947 >> Bug: https://bugs.openjdk.java.net/browse/JDK-8220389 >> Webrev: http://cr.openjdk.java.net/~jwilhelm/8220389/webrev.00/ >> This integration did overwrite changes already in place in OpenJDK. The diff has been attached to the umbrella bug. >> Thanks, >> /Jesper From raffaello.giulietti at supsi.ch Wed Mar 20 20:46:56 2019 From: raffaello.giulietti at supsi.ch (raffaello.giulietti at supsi.ch) Date: Wed, 20 Mar 2019 21:46:56 +0100 Subject: Truffle on stock OpenJDK >= 11 Message-ID: Hi, hope this is still the right mailing list for this kind of question. Given that Graal is part of recent stock OpenJDKs, is it possible to use Truffle on them? This is not stated explicitly on the Truffle GitHub repo [1], which mentions prebuilts in GraalVM but does not mention OpenJDK. More generally, which components of GraalVM listed in [2] can be built and used on OpenJDK 11 or newer? 
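For what it is worth, the setup the replies below describe boils down to putting the Graal compiler and the Truffle stack on the module path of a plain OpenJDK 11 and enabling it through JVMCI. A rough sketch follows, using the org.graalvm.polyglot API from the graal-sdk artifact together with a Truffle language such as Graal.js; the jar names in the comment are placeholders and the exact flags and artifact layout may differ between releases.

    import org.graalvm.polyglot.Context;
    import org.graalvm.polyglot.Value;

    public final class JsOnStockJdk11 {
        public static void main(String[] args) {
            // Launched on a plain OpenJDK 11 along these lines (jar names are placeholders):
            //   java -XX:+UnlockExperimentalVMOptions -XX:+EnableJVMCI -XX:+UseJVMCICompiler \
            //        --module-path=graal-sdk.jar:truffle-api.jar:graaljs.jar \
            //        --upgrade-module-path=compiler.jar \
            //        JsOnStockJdk11
            try (Context context = Context.create("js")) {
                Value result = context.eval("js", "6 * 7");
                System.out.println(result.asInt()); // 42, JIT-compiled by Graal once hot
            }
        }
    }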
Greetings Raffaello ---- [1] https://github.com/oracle/graal/tree/master/truffle [2] https://github.com/oracle/graal From thomas.wuerthinger at oracle.com Wed Mar 20 20:57:16 2019 From: thomas.wuerthinger at oracle.com (Thomas Wuerthinger) Date: Wed, 20 Mar 2019 21:57:16 +0100 Subject: Truffle on stock OpenJDK >= 11 In-Reply-To: References: Message-ID: Yes. This article describes for example how to run GraalVM?s JavaScript engine via Truffle on OpenJDK 11: https://medium.com/graalvm/graalvms-javascript-engine-on-jdk11-with-high-performance-3e79f968a819 . Not supported on OpenJDK 11 is creating native images. There is ongoing work to make it possible. - thomas > On 20 Mar 2019, at 21:46, raffaello.giulietti at supsi.ch wrote: > > Hi, > > hope this is still the right mailing list for this kind of question. > > Given that Graal is part of recent stock OpenJDKs, is it possible to use > Truffle on them? This is not stated explicitly on the Truffle GitHub > repo [1], which mentions prebuilts in GraalVM but does not mention OpenJDK. > > More generally, which components of GraalVM listed in [2] can be built > and used on OpenJDK 11 or newer? > > > Greetings > Raffaello > > ---- > > [1] https://github.com/oracle/graal/tree/master/truffle > [2] https://github.com/oracle/graal From raffaello.giulietti at supsi.ch Wed Mar 20 21:25:15 2019 From: raffaello.giulietti at supsi.ch (raffaello.giulietti at supsi.ch) Date: Wed, 20 Mar 2019 22:25:15 +0100 Subject: Truffle on stock OpenJDK >= 11 In-Reply-To: References: Message-ID: <6d60a27f-c4dd-4e91-2f6d-5260d39841bf@supsi.ch> Hi Thomas, On 2019-03-20 21:57, Thomas Wuerthinger wrote: > Yes. This article describes for example how to run GraalVM?s JavaScript > engine via Truffle on OpenJDK > 11:?https://medium.com/graalvm/graalvms-javascript-engine-on-jdk11-with-high-performance-3e79f968a819. > Fine, thanks. > Not supported on OpenJDK 11 is creating native images. There is ongoing > work to make it possible. > I understand that questions about performance are mostly answered cautiously because too many factors are involved. But what could be a very rough, general, "safe harbor", unofficial estimate on the speedup of GraalVM CE/EE versus OpenJDK+Graal in long running applications? Stated otherwise, what's the performance penalty of using OpenJDK+Graal rather than GraalVM (if at all) and why is there a penalty in the first place? (The article mentions a OpenJDK+Graal speedup factor of 2 when compared to Nashorn for that specific application. Further below a benchmark shows a GraalVM composite score of 4 with respect to Nashhorn.) > - thomas > > >> On 20 Mar 2019, at 21:46, raffaello.giulietti at supsi.ch >> wrote: >> >> Hi, >> >> hope this is still the right mailing list for this kind of question. >> >> Given that Graal is part of recent stock OpenJDKs, is it possible to use >> Truffle on them? This is not stated explicitly on the Truffle GitHub >> repo [1], which mentions prebuilts in GraalVM but does not mention >> OpenJDK. >> >> More generally, which components of GraalVM listed in [2] can be built >> and used on OpenJDK 11 or newer? 
>> >> >> Greetings >> Raffaello >> >> ---- >> >> [1] https://github.com/oracle/graal/tree/master/truffle >> [2] https://github.com/oracle/graal > From jaroslav.tulach at oracle.com Thu Mar 21 05:28:19 2019 From: jaroslav.tulach at oracle.com (Jaroslav Tulach) Date: Thu, 21 Mar 2019 06:28:19 +0100 Subject: Truffle on stock OpenJDK >= 11 In-Reply-To: References: Message-ID: <2189078.Wi6Yl1QKbv@logonaut> Hello Raffaello, recently I needed a "getting started" project for educational purposes. I've created: https://github.com/jaroslavtulach/talk2compiler The `Main.java` class is all that is needed to switch into Truffle compilation mode and use all the Truffle APIs to "talk to the compiler". -jt > Given that Graal is part of recent stock OpenJDKs, is it possible to use > Truffle on them? This is not stated explicitly on the Truffle GitHub > repo [1], which mentions prebuilts in GraalVM but does not mention OpenJDK. > > More generally, which components of GraalVM listed in [2] can be built > and used on OpenJDK 11 or newer? > > > Greetings > Raffaello > > ---- > > [1] https://github.com/oracle/graal/tree/master/truffle > [2] https://github.com/oracle/graal From raffaello.giulietti at supsi.ch Thu Mar 21 08:48:49 2019 From: raffaello.giulietti at supsi.ch (Raffaello Giulietti) Date: Thu, 21 Mar 2019 09:48:49 +0100 Subject: Truffle on stock OpenJDK >= 11 In-Reply-To: <2189078.Wi6Yl1QKbv@logonaut> References: <2189078.Wi6Yl1QKbv@logonaut> Message-ID: <301011db-c944-533c-77d8-ff28d0ccb9b6@supsi.ch> Hi Jaroslav, thanks for the reference. Did you perhaps run some measurements to assess the relative performance of GraalVM versus OpenJDK+Graal? I'm curious whether there are substantial differences and why, if at all. Greetings Raffaello On 2019-03-21 06:28, Jaroslav Tulach wrote: > Hello Raffaello, > recently I needed a "getting started" project for educational purposes. I've > created: > > https://github.com/jaroslavtulach/talk2compiler > > The `Main.java` class is all that is needed to switch into Truffle compilation > mode and use all the Truffle APIs to "talk to the compiler". > > -jt > >> Given that Graal is part of recent stock OpenJDKs, is it possible to use >> Truffle on them? This is not stated explicitly on the Truffle GitHub >> repo [1], which mentions prebuilts in GraalVM but does not mention OpenJDK. >> >> More generally, which components of GraalVM listed in [2] can be built >> and used on OpenJDK 11 or newer? >> >> >> Greetings >> Raffaello >> >> ---- >> >> [1] https://github.com/oracle/graal/tree/master/truffle >> [2] https://github.com/oracle/graal > > > > From jaroslav.tulach at oracle.com Thu Mar 21 17:24:43 2019 From: jaroslav.tulach at oracle.com (Jaroslav Tulach) Date: Thu, 21 Mar 2019 18:24:43 +0100 Subject: Truffle on stock OpenJDK >= 11 In-Reply-To: <301011db-c944-533c-77d8-ff28d0ccb9b6@supsi.ch> References: <2189078.Wi6Yl1QKbv@logonaut> <301011db-c944-533c-77d8-ff28d0ccb9b6@supsi.ch> Message-ID: <2287872.ssARbH8qAk@logonaut> Dne ?tvrtek 21. b?ezna 2019 9:48:49 CET, Raffaello Giulietti napsal(a): > Hi Jaroslav, > Did you perhaps run some measurements to assess the relative performance > of GraalVM versus OpenJDK+Graal? I'm curious whether there are > substantial differences and why, if at all. Hello Raffaello, as far as I can say I haven't noticed any difference between [GraalVM CE] (https://github.com-oracle/graal/releases) RC 12 and OpenJDK11 with the binaries for RC12 like [truffle-api](https://repo1.maven.org/maven2/org/ graalvm/truffle/truffle-api-1.0.0-rc12/) and co. 
downloaded from Maven. -jt > On 2019-03-21 06:28, Jaroslav Tulach wrote: > > Hello Raffaello, > > recently I needed a "getting started" project for educational purposes. > > I've created: > > > > https://github.com/jaroslavtulach/talk2compiler > > > > The `Main.java` class is all that is needed to switch into Truffle > > compilation mode and use all the Truffle APIs to "talk to the compiler". > > > > -jt > > > >> Given that Graal is part of recent stock OpenJDKs, is it possible to use > >> Truffle on them? This is not stated explicitly on the Truffle GitHub > >> repo [1], which mentions prebuilts in GraalVM but does not mention > >> OpenJDK. > >> > >> More generally, which components of GraalVM listed in [2] can be built > >> and used on OpenJDK 11 or newer? > >> > >> > >> Greetings > >> Raffaello > >> > >> ---- > >> > >> [1] https://github.com/oracle/graal/tree/master/truffle > >> [2] https://github.com/oracle/graal From raffaello.giulietti at supsi.ch Thu Mar 21 17:44:46 2019 From: raffaello.giulietti at supsi.ch (Raffaello Giulietti) Date: Thu, 21 Mar 2019 18:44:46 +0100 Subject: Truffle on stock OpenJDK >= 11 In-Reply-To: <2287872.ssARbH8qAk@logonaut> References: <2189078.Wi6Yl1QKbv@logonaut> <301011db-c944-533c-77d8-ff28d0ccb9b6@supsi.ch> <2287872.ssARbH8qAk@logonaut> Message-ID: On 2019-03-21 18:24, Jaroslav Tulach wrote: > Dne ?tvrtek 21. b?ezna 2019 9:48:49 CET, Raffaello Giulietti napsal(a): >> Hi Jaroslav, >> Did you perhaps run some measurements to assess the relative performance >> of GraalVM versus OpenJDK+Graal? I'm curious whether there are >> substantial differences and why, if at all. > > Hello Raffaello, > as far as I can say I haven't noticed any difference between [GraalVM CE] > (https://github.com-oracle/graal/releases) RC 12 and OpenJDK11 with the > binaries for RC12 like [truffle-api](https://repo1.maven.org/maven2/org/ > graalvm/truffle/truffle-api-1.0.0-rc12/) and co. downloaded from Maven. > > -jt > Hi Jaroslav, good to know. Then the question becomes: what's the point of using GraalVM to execute a polyglot long-running application environment? Don't get me wrong. I admire GraalVM and the related technologies. But OpenJDK+Graal seems a more cautious, enterprise-friendly choice and seems to be equipped with better GCs. So, if there are no big performance differences (if at all), the choice between GraalVM and OpenJDK+Graal seems to lean towards the latter. Am I getting the picture correctly? Best Raffaello >> On 2019-03-21 06:28, Jaroslav Tulach wrote: >>> Hello Raffaello, >>> recently I needed a "getting started" project for educational purposes. >>> I've created: >>> >>> https://github.com/jaroslavtulach/talk2compiler >>> >>> The `Main.java` class is all that is needed to switch into Truffle >>> compilation mode and use all the Truffle APIs to "talk to the compiler". >>> >>> -jt >>> >>>> Given that Graal is part of recent stock OpenJDKs, is it possible to use >>>> Truffle on them? This is not stated explicitly on the Truffle GitHub >>>> repo [1], which mentions prebuilts in GraalVM but does not mention >>>> OpenJDK. >>>> >>>> More generally, which components of GraalVM listed in [2] can be built >>>> and used on OpenJDK 11 or newer? 
>>>> >>>> >>>> Greetings >>>> Raffaello >>>> >>>> ---- >>>> >>>> [1] https://github.com/oracle/graal/tree/master/truffle >>>> [2] https://github.com/oracle/graal > > > > From thomas.wuerthinger at oracle.com Thu Mar 21 18:49:46 2019 From: thomas.wuerthinger at oracle.com (Thomas Wuerthinger) Date: Thu, 21 Mar 2019 19:49:46 +0100 Subject: Truffle on stock OpenJDK >= 11 In-Reply-To: References: <2189078.Wi6Yl1QKbv@logonaut> <301011db-c944-533c-77d8-ff28d0ccb9b6@supsi.ch> <2287872.ssARbH8qAk@logonaut> Message-ID: No. GraalVM includes an OpenJDK-based JVM with the Graal compiler that has the same GC options as standard OpenJDK, so there is no degradation in terms of GC performance. It includes an additional mechanism to create native images with instant startup and low footprint. Plus features like execution of LLVM bitcode or node.js applications and more. We have an overview at "https://www.graalvm.org/docs/why-graal". - thomas > On 21 Mar 2019, at 18:44, Raffaello Giulietti wrote: > > On 2019-03-21 18:24, Jaroslav Tulach wrote: >> Dne ?tvrtek 21. b?ezna 2019 9:48:49 CET, Raffaello Giulietti napsal(a): >>> Hi Jaroslav, >>> Did you perhaps run some measurements to assess the relative performance >>> of GraalVM versus OpenJDK+Graal? I'm curious whether there are >>> substantial differences and why, if at all. >> Hello Raffaello, >> as far as I can say I haven't noticed any difference between [GraalVM CE] >> (https://github.com-oracle/graal/releases) RC 12 and OpenJDK11 with the >> binaries for RC12 like [truffle-api](https://repo1.maven.org/maven2/org/ >> graalvm/truffle/truffle-api-1.0.0-rc12/) and co. downloaded from Maven. >> -jt > > Hi Jaroslav, > > good to know. > > Then the question becomes: what's the point of using GraalVM to execute a polyglot long-running application environment? > > Don't get me wrong. I admire GraalVM and the related technologies. But OpenJDK+Graal seems a more cautious, enterprise-friendly choice and seems to be equipped with better GCs. So, if there are no big performance differences (if at all), the choice between GraalVM and OpenJDK+Graal seems to lean towards the latter. > > Am I getting the picture correctly? > > > Best > Raffaello From raffaello.giulietti at supsi.ch Thu Mar 21 20:43:24 2019 From: raffaello.giulietti at supsi.ch (raffaello.giulietti at supsi.ch) Date: Thu, 21 Mar 2019 21:43:24 +0100 Subject: Truffle on stock OpenJDK >= 11 In-Reply-To: References: <2189078.Wi6Yl1QKbv@logonaut> <301011db-c944-533c-77d8-ff28d0ccb9b6@supsi.ch> <2287872.ssARbH8qAk@logonaut> Message-ID: Hi Thomas, I'm happy to hear that the same GCs are present in GraalVM as well. I guess I misunderstood a discussion about SubstrateVM's GC, extrapolating it to GraalVM. Take care Raffaello On 2019-03-21 19:49, Thomas Wuerthinger wrote: > No. > > GraalVM includes an OpenJDK-based JVM with the Graal compiler that has the same GC options as standard OpenJDK, so there is no degradation in terms of GC performance. > > It includes an additional mechanism to create native images with instant startup and low footprint. > > Plus features like execution of LLVM bitcode or node.js applications and more. We have an overview at "https://www.graalvm.org/docs/why-graal". > > - thomas > > >> On 21 Mar 2019, at 18:44, Raffaello Giulietti wrote: >> >> On 2019-03-21 18:24, Jaroslav Tulach wrote: >>> Dne ?tvrtek 21. 
b?ezna 2019 9:48:49 CET, Raffaello Giulietti napsal(a): >>>> Hi Jaroslav, >>>> Did you perhaps run some measurements to assess the relative performance >>>> of GraalVM versus OpenJDK+Graal? I'm curious whether there are >>>> substantial differences and why, if at all. >>> Hello Raffaello, >>> as far as I can say I haven't noticed any difference between [GraalVM CE] >>> (https://github.com-oracle/graal/releases) RC 12 and OpenJDK11 with the >>> binaries for RC12 like [truffle-api](https://repo1.maven.org/maven2/org/ >>> graalvm/truffle/truffle-api-1.0.0-rc12/) and co. downloaded from Maven. >>> -jt >> >> Hi Jaroslav, >> >> good to know. >> >> Then the question becomes: what's the point of using GraalVM to execute a polyglot long-running application environment? >> >> Don't get me wrong. I admire GraalVM and the related technologies. But OpenJDK+Graal seems a more cautious, enterprise-friendly choice and seems to be equipped with better GCs. So, if there are no big performance differences (if at all), the choice between GraalVM and OpenJDK+Graal seems to lean towards the latter. >> >> Am I getting the picture correctly? >> >> >> Best >> Raffaello From doug.simon at oracle.com Fri Mar 22 21:44:10 2019 From: doug.simon at oracle.com (Doug Simon) Date: Fri, 22 Mar 2019 22:44:10 +0100 Subject: JVMCI 0.57 released Message-ID: <78EFBB7C-582C-431C-9140-DF4817CFC177@oracle.com> Changes in JVMCI 0.57 include: ? GR-13902: Replace adjustCompilationLevel mechanism. ? GR-14526: Replace JVMCINMethodData constructor and operator new with initialize. ? GR-14509: Fixed order of method mirror invalidation. ? GR-14475: Fix support for jvmci.InitTimer. ? GR-14105: Remove uses of system properties. The GR-13902 change introduced new API for implementing -Dgraal.CompileGraalWithC1Only=true without requiring a Java up-call from the VM when scheduling a method for compilation. This fixes a number of subtle bugs and unexpected VM behavior (see https://bugs.openjdk.java.net/browse/JDK-8219403). As a result of the API change, you need to update Graal to commit b3ec4830e02 or later when using this JVMCI release. The OpenJDK based binaries are at https://github.com/graalvm/openjdk8-jvmci-builder/releases/tag/jvmci-0.57 The OracleJDK based ?labsjdk? binaries will be available soon at https://www.oracle.com/technetwork/oracle-labs/program-languages/downloads/index.html (in the lower half of the page). -Doug From david.lloyd at redhat.com Sat Mar 23 14:59:56 2019 From: david.lloyd at redhat.com (David Lloyd) Date: Sat, 23 Mar 2019 09:59:56 -0500 Subject: JVMCI 0.57 released In-Reply-To: <78EFBB7C-582C-431C-9140-DF4817CFC177@oracle.com> References: <78EFBB7C-582C-431C-9140-DF4817CFC177@oracle.com> Message-ID: Is there any possibility of the JVMCI API classes being released as a Maven artifact? It would allow development of software which can optionally consume JVMCI without requiring a JVMCI JDK for building. On Fri, Mar 22, 2019 at 4:47 PM Doug Simon wrote: > > Changes in JVMCI 0.57 include: > > ? GR-13902: Replace adjustCompilationLevel mechanism. > ? GR-14526: Replace JVMCINMethodData constructor and operator new with initialize. > ? GR-14509: Fixed order of method mirror invalidation. > ? GR-14475: Fix support for jvmci.InitTimer. > ? GR-14105: Remove uses of system properties. > > The GR-13902 change introduced new API for implementing -Dgraal.CompileGraalWithC1Only=true without requiring a Java up-call from the VM when scheduling a method for compilation. 
This fixes a number of subtle bugs and unexpected VM behavior (see https://bugs.openjdk.java.net/browse/JDK-8219403). As a result of the API change, you need to update Graal to commit b3ec4830e02 or later when using this JVMCI release. > > The OpenJDK based binaries are at https://github.com/graalvm/openjdk8-jvmci-builder/releases/tag/jvmci-0.57 > > The OracleJDK based ?labsjdk? binaries will be available soon at https://www.oracle.com/technetwork/oracle-labs/program-languages/downloads/index.html (in the lower half of the page). > > -Doug -- - DML From doug.simon at oracle.com Sun Mar 24 10:19:45 2019 From: doug.simon at oracle.com (Doug Simon) Date: Sun, 24 Mar 2019 11:19:45 +0100 Subject: JVMCI 0.57 released In-Reply-To: References: <78EFBB7C-582C-431C-9140-DF4817CFC177@oracle.com> Message-ID: <331FCEBC-2FC5-47D7-A2C1-DD33B8A9EFB5@oracle.com> Hi David, > On 23 Mar 2019, at 15:59, David Lloyd wrote: > > Is there any possibility of the JVMCI API classes being released as a > Maven artifact? It would allow development of software which can > optionally consume JVMCI without requiring a JVMCI JDK for building. I suspect that is a very small software niche. Graal is the only consumer of JVMCI I?m currently aware of and it makes very little sense to develop Graal without a JVMCI JDK. Note also that there is not just one current JVMCI version but potentially one per JDK version that Graal supports. Graal can (and does) use differing JVMCI API using versioned sources (https://github.com/graalvm/mx#versioning-sources-for-different-jdk-releases). For example, the new HotSPOTJVMCIRuntime.excludeFromJVMCICompilation method added in jvmci-0.57 is used from JVMCI JDK8 specific code. What would a compelling use case be for developing against JVMCI without actually executing the artifact? -Doug > > On Fri, Mar 22, 2019 at 4:47 PM Doug Simon wrote: >> >> Changes in JVMCI 0.57 include: >> >> ? GR-13902: Replace adjustCompilationLevel mechanism. >> ? GR-14526: Replace JVMCINMethodData constructor and operator new with initialize. >> ? GR-14509: Fixed order of method mirror invalidation. >> ? GR-14475: Fix support for jvmci.InitTimer. >> ? GR-14105: Remove uses of system properties. >> >> The GR-13902 change introduced new API for implementing -Dgraal.CompileGraalWithC1Only=true without requiring a Java up-call from the VM when scheduling a method for compilation. This fixes a number of subtle bugs and unexpected VM behavior (see https://bugs.openjdk.java.net/browse/JDK-8219403). As a result of the API change, you need to update Graal to commit b3ec4830e02 or later when using this JVMCI release. >> >> The OpenJDK based binaries are at https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_graalvm_openjdk8-2Djvmci-2Dbuilder_releases_tag_jvmci-2D0.57&d=DwIFaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=BmNY5KuefACTr_P43s8fXOXgNDkDiqlviyafeiVaP18&m=ZXOFIL3cRL2FIe2PYoKmuGRJYgPnkR05P1BNCwV6zeo&s=jnieBzEva0XUzWqVXNZ2O1DmNRW1FW-w6FE3HPAp1fk&e= >> >> The OracleJDK based ?labsjdk? binaries will be available soon at https://www.oracle.com/technetwork/oracle-labs/program-languages/downloads/index.html (in the lower half of the page). 
>> >> -Doug > > > > -- > - DML From david.lloyd at redhat.com Sun Mar 24 18:19:21 2019 From: david.lloyd at redhat.com (David Lloyd) Date: Sun, 24 Mar 2019 13:19:21 -0500 Subject: JVMCI 0.57 released In-Reply-To: <331FCEBC-2FC5-47D7-A2C1-DD33B8A9EFB5@oracle.com> References: <78EFBB7C-582C-431C-9140-DF4817CFC177@oracle.com> <331FCEBC-2FC5-47D7-A2C1-DD33B8A9EFB5@oracle.com> Message-ID: My use case is including a specialized GraalVM feature in an artifact, the API of which relies on JVMCI classes. The feature would only be used when using the SubstrateVM native image compiler, otherwise the classes would remain unused. I'm hesitant to require using the JVMCI or GraalVM JDK to build the project; the only alternative I can think of would be an external artifact with the classes in it. On Sun, Mar 24, 2019 at 5:19 AM Doug Simon wrote: > > Hi David, > > On 23 Mar 2019, at 15:59, David Lloyd wrote: > > Is there any possibility of the JVMCI API classes being released as a > Maven artifact? It would allow development of software which can > optionally consume JVMCI without requiring a JVMCI JDK for building. > > > I suspect that is a very small software niche. Graal is the only consumer of JVMCI I?m currently aware of and it makes very little sense to develop Graal without a JVMCI JDK. > > Note also that there is not just one current JVMCI version but potentially one per JDK version that Graal supports. Graal can (and does) use differing JVMCI API using versioned sources (https://github.com/graalvm/mx#versioning-sources-for-different-jdk-releases). For example, the new HotSPOTJVMCIRuntime.excludeFromJVMCICompilation method added in jvmci-0.57 is used from JVMCI JDK8 specific code. > > What would a compelling use case be for developing against JVMCI without actually executing the artifact? > > -Doug > > > On Fri, Mar 22, 2019 at 4:47 PM Doug Simon wrote: > > > Changes in JVMCI 0.57 include: > > ? GR-13902: Replace adjustCompilationLevel mechanism. > ? GR-14526: Replace JVMCINMethodData constructor and operator new with initialize. > ? GR-14509: Fixed order of method mirror invalidation. > ? GR-14475: Fix support for jvmci.InitTimer. > ? GR-14105: Remove uses of system properties. > > The GR-13902 change introduced new API for implementing -Dgraal.CompileGraalWithC1Only=true without requiring a Java up-call from the VM when scheduling a method for compilation. This fixes a number of subtle bugs and unexpected VM behavior (see https://bugs.openjdk.java.net/browse/JDK-8219403). As a result of the API change, you need to update Graal to commit b3ec4830e02 or later when using this JVMCI release. > > The OpenJDK based binaries are at https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_graalvm_openjdk8-2Djvmci-2Dbuilder_releases_tag_jvmci-2D0.57&d=DwIFaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=BmNY5KuefACTr_P43s8fXOXgNDkDiqlviyafeiVaP18&m=ZXOFIL3cRL2FIe2PYoKmuGRJYgPnkR05P1BNCwV6zeo&s=jnieBzEva0XUzWqVXNZ2O1DmNRW1FW-w6FE3HPAp1fk&e= > > The OracleJDK based ?labsjdk? binaries will be available soon at https://www.oracle.com/technetwork/oracle-labs/program-languages/downloads/index.html (in the lower half of the page). 
> > -Doug > > > > > -- > - DML > > -- - DML From doug.simon at oracle.com Sun Mar 24 18:40:42 2019 From: doug.simon at oracle.com (Doug Simon) Date: Sun, 24 Mar 2019 19:40:42 +0100 Subject: JVMCI 0.57 released In-Reply-To: References: <78EFBB7C-582C-431C-9140-DF4817CFC177@oracle.com> <331FCEBC-2FC5-47D7-A2C1-DD33B8A9EFB5@oracle.com> Message-ID: > On 24 Mar 2019, at 19:19, David Lloyd wrote: > > My use case is including a specialized GraalVM feature in an artifact, > the API of which relies on JVMCI classes. The feature would only be > used when using the SubstrateVM native image compiler, otherwise the > classes would remain unused. I'm hesitant to require using the JVMCI > or GraalVM JDK to build the project; the only alternative I can think > of would be an external artifact with the classes in it. Maybe you can expand on this hesitation a bit. I?m not sure how you can use this feature without an actual JVMCI implementation underneath. Are you able to sketch out a simplified picture of the feature? -Doug > > On Sun, Mar 24, 2019 at 5:19 AM Doug Simon > wrote: >> >> Hi David, >> >> On 23 Mar 2019, at 15:59, David Lloyd > wrote: >> >> Is there any possibility of the JVMCI API classes being released as a >> Maven artifact? It would allow development of software which can >> optionally consume JVMCI without requiring a JVMCI JDK for building. >> >> >> I suspect that is a very small software niche. Graal is the only consumer of JVMCI I?m currently aware of and it makes very little sense to develop Graal without a JVMCI JDK. >> >> Note also that there is not just one current JVMCI version but potentially one per JDK version that Graal supports. Graal can (and does) use differing JVMCI API using versioned sources (https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_graalvm_mx-23versioning-2Dsources-2Dfor-2Ddifferent-2Djdk-2Dreleases&d=DwIFaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=BmNY5KuefACTr_P43s8fXOXgNDkDiqlviyafeiVaP18&m=ftfZhJgS3d2iQ_0_mpAIneaYGlfExJ1WNxONRCAlaOk&s=JR_oMGVUidDKax3Z8LCC_1_Z15YAlraGkWTHTEKuDMU&e= ). For example, the new HotSPOTJVMCIRuntime.excludeFromJVMCICompilation method added in jvmci-0.57 is used from JVMCI JDK8 specific code. >> >> What would a compelling use case be for developing against JVMCI without actually executing the artifact? >> >> -Doug >> >> >> On Fri, Mar 22, 2019 at 4:47 PM Doug Simon wrote: >> >> >> Changes in JVMCI 0.57 include: >> >> ? GR-13902: Replace adjustCompilationLevel mechanism. >> ? GR-14526: Replace JVMCINMethodData constructor and operator new with initialize. >> ? GR-14509: Fixed order of method mirror invalidation. >> ? GR-14475: Fix support for jvmci.InitTimer. >> ? GR-14105: Remove uses of system properties. >> >> The GR-13902 change introduced new API for implementing -Dgraal.CompileGraalWithC1Only=true without requiring a Java up-call from the VM when scheduling a method for compilation. This fixes a number of subtle bugs and unexpected VM behavior (see https://bugs.openjdk.java.net/browse/JDK-8219403). As a result of the API change, you need to update Graal to commit b3ec4830e02 or later when using this JVMCI release. >> >> The OpenJDK based binaries are at https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_graalvm_openjdk8-2Djvmci-2Dbuilder_releases_tag_jvmci-2D0.57&d=DwIFaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=BmNY5KuefACTr_P43s8fXOXgNDkDiqlviyafeiVaP18&m=ZXOFIL3cRL2FIe2PYoKmuGRJYgPnkR05P1BNCwV6zeo&s=jnieBzEva0XUzWqVXNZ2O1DmNRW1FW-w6FE3HPAp1fk&e= >> >> The OracleJDK based ?labsjdk? 
binaries will be available soon at https://www.oracle.com/technetwork/oracle-labs/program-languages/downloads/index.html (in the lower half of the page). >> >> -Doug >> >> >> >> >> -- >> - DML >> >> > > > -- > - DML From david.lloyd at redhat.com Mon Mar 25 15:05:56 2019 From: david.lloyd at redhat.com (David Lloyd) Date: Mon, 25 Mar 2019 10:05:56 -0500 Subject: JVMCI 0.57 released In-Reply-To: References: <78EFBB7C-582C-431C-9140-DF4817CFC177@oracle.com> <331FCEBC-2FC5-47D7-A2C1-DD33B8A9EFB5@oracle.com> Message-ID: On Sun, Mar 24, 2019 at 1:42 PM Doug Simon wrote: > On 24 Mar 2019, at 19:19, David Lloyd wrote: > > My use case is including a specialized GraalVM feature in an artifact, > the API of which relies on JVMCI classes. The feature would only be > used when using the SubstrateVM native image compiler, otherwise the > classes would remain unused. I'm hesitant to require using the JVMCI > or GraalVM JDK to build the project; the only alternative I can think > of would be an external artifact with the classes in it. > > > Maybe you can expand on this hesitation a bit. I?m not sure how you can use this feature without an actual JVMCI implementation underneath. Are you able to sketch out a simplified picture of the feature? Sorry I meant literally the `GraalFeature` API within the GraalVM project. This allows an artifact within a native image compilation to sort of "hack in" to the compilation process to do various things like specialized optimizations. But, it relies on the JVMCI API so you can't generally compile such classes without it. But the classes are only used when a native image is being generated and aren't otherwise loaded, so the artifact can still work fine in a regular JVM. So it would be nice to be able to compile against these classes without them being present in the JDK being used for compilation. > > -Doug > > > On Sun, Mar 24, 2019 at 5:19 AM Doug Simon wrote: > > > Hi David, > > On 23 Mar 2019, at 15:59, David Lloyd wrote: > > Is there any possibility of the JVMCI API classes being released as a > Maven artifact? It would allow development of software which can > optionally consume JVMCI without requiring a JVMCI JDK for building. > > > I suspect that is a very small software niche. Graal is the only consumer of JVMCI I?m currently aware of and it makes very little sense to develop Graal without a JVMCI JDK. > > Note also that there is not just one current JVMCI version but potentially one per JDK version that Graal supports. Graal can (and does) use differing JVMCI API using versioned sources (https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_graalvm_mx-23versioning-2Dsources-2Dfor-2Ddifferent-2Djdk-2Dreleases&d=DwIFaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=BmNY5KuefACTr_P43s8fXOXgNDkDiqlviyafeiVaP18&m=ftfZhJgS3d2iQ_0_mpAIneaYGlfExJ1WNxONRCAlaOk&s=JR_oMGVUidDKax3Z8LCC_1_Z15YAlraGkWTHTEKuDMU&e=). For example, the new HotSPOTJVMCIRuntime.excludeFromJVMCICompilation method added in jvmci-0.57 is used from JVMCI JDK8 specific code. > > What would a compelling use case be for developing against JVMCI without actually executing the artifact? > > -Doug > > > On Fri, Mar 22, 2019 at 4:47 PM Doug Simon wrote: > > > Changes in JVMCI 0.57 include: > > ? GR-13902: Replace adjustCompilationLevel mechanism. > ? GR-14526: Replace JVMCINMethodData constructor and operator new with initialize. > ? GR-14509: Fixed order of method mirror invalidation. > ? GR-14475: Fix support for jvmci.InitTimer. > ? GR-14105: Remove uses of system properties. 
> > The GR-13902 change introduced new API for implementing -Dgraal.CompileGraalWithC1Only=true without requiring a Java up-call from the VM when scheduling a method for compilation. This fixes a number of subtle bugs and unexpected VM behavior (see https://bugs.openjdk.java.net/browse/JDK-8219403). As a result of the API change, you need to update Graal to commit b3ec4830e02 or later when using this JVMCI release. > > The OpenJDK based binaries are at https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_graalvm_openjdk8-2Djvmci-2Dbuilder_releases_tag_jvmci-2D0.57&d=DwIFaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=BmNY5KuefACTr_P43s8fXOXgNDkDiqlviyafeiVaP18&m=ZXOFIL3cRL2FIe2PYoKmuGRJYgPnkR05P1BNCwV6zeo&s=jnieBzEva0XUzWqVXNZ2O1DmNRW1FW-w6FE3HPAp1fk&e= > > The OracleJDK based ?labsjdk? binaries will be available soon at https://www.oracle.com/technetwork/oracle-labs/program-languages/downloads/index.html (in the lower half of the page). > > -Doug > > > > > -- > - DML > > > > > -- > - DML > > -- - DML From doug.simon at oracle.com Mon Mar 25 15:41:52 2019 From: doug.simon at oracle.com (Doug Simon) Date: Mon, 25 Mar 2019 16:41:52 +0100 Subject: JVMCI 0.57 released In-Reply-To: References: <78EFBB7C-582C-431C-9140-DF4817CFC177@oracle.com> <331FCEBC-2FC5-47D7-A2C1-DD33B8A9EFB5@oracle.com> Message-ID: > On 25 Mar 2019, at 16:05, David Lloyd wrote: > > On Sun, Mar 24, 2019 at 1:42 PM Doug Simon > wrote: >> On 24 Mar 2019, at 19:19, David Lloyd > wrote: >> >> My use case is including a specialized GraalVM feature in an artifact, >> the API of which relies on JVMCI classes. The feature would only be >> used when using the SubstrateVM native image compiler, otherwise the >> classes would remain unused. I'm hesitant to require using the JVMCI >> or GraalVM JDK to build the project; the only alternative I can think >> of would be an external artifact with the classes in it. >> >> >> Maybe you can expand on this hesitation a bit. I?m not sure how you can use this feature without an actual JVMCI implementation underneath. Are you able to sketch out a simplified picture of the feature? > > Sorry I meant literally the `GraalFeature` API within the GraalVM > project. This allows an artifact within a native image compilation to > sort of "hack in" to the compilation process to do various things like > specialized optimizations. But, it relies on the JVMCI API so you > can't generally compile such classes without it. This seems like a discussion that should involve the native image team if there is some artifact available via Maven that has a JVMCI dependency. Paul, how would you suggest David can proceed with developing against com.oracle.svm.core.graal.GraalFeature in terms of satisfying the JVMCI dependency? Is there maybe an alternative way to achieve this that doesn?t involve having to resolve JVMCI for compilation? -Doug > > But the classes are only used when a native image is being generated > and aren't otherwise loaded, so the artifact can still work fine in a > regular JVM. So it would be nice to be able to compile against these > classes without them being present in the JDK being used for > compilation. > >> >> -Doug >> >> >> On Sun, Mar 24, 2019 at 5:19 AM Doug Simon wrote: >> >> >> Hi David, >> >> On 23 Mar 2019, at 15:59, David Lloyd wrote: >> >> Is there any possibility of the JVMCI API classes being released as a >> Maven artifact? It would allow development of software which can >> optionally consume JVMCI without requiring a JVMCI JDK for building. 
>> >> >> I suspect that is a very small software niche. Graal is the only consumer of JVMCI I?m currently aware of and it makes very little sense to develop Graal without a JVMCI JDK. >> >> Note also that there is not just one current JVMCI version but potentially one per JDK version that Graal supports. Graal can (and does) use differing JVMCI API using versioned sources (https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_graalvm_mx-23versioning-2Dsources-2Dfor-2Ddifferent-2Djdk-2Dreleases&d=DwIFaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=BmNY5KuefACTr_P43s8fXOXgNDkDiqlviyafeiVaP18&m=ftfZhJgS3d2iQ_0_mpAIneaYGlfExJ1WNxONRCAlaOk&s=JR_oMGVUidDKax3Z8LCC_1_Z15YAlraGkWTHTEKuDMU&e=). For example, the new HotSPOTJVMCIRuntime.excludeFromJVMCICompilation method added in jvmci-0.57 is used from JVMCI JDK8 specific code. >> >> What would a compelling use case be for developing against JVMCI without actually executing the artifact? >> >> -Doug >> >> >> On Fri, Mar 22, 2019 at 4:47 PM Doug Simon wrote: >> >> >> Changes in JVMCI 0.57 include: >> >> ? GR-13902: Replace adjustCompilationLevel mechanism. >> ? GR-14526: Replace JVMCINMethodData constructor and operator new with initialize. >> ? GR-14509: Fixed order of method mirror invalidation. >> ? GR-14475: Fix support for jvmci.InitTimer. >> ? GR-14105: Remove uses of system properties. >> >> The GR-13902 change introduced new API for implementing -Dgraal.CompileGraalWithC1Only=true without requiring a Java up-call from the VM when scheduling a method for compilation. This fixes a number of subtle bugs and unexpected VM behavior (see https://bugs.openjdk.java.net/browse/JDK-8219403). As a result of the API change, you need to update Graal to commit b3ec4830e02 or later when using this JVMCI release. >> >> The OpenJDK based binaries are at https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_graalvm_openjdk8-2Djvmci-2Dbuilder_releases_tag_jvmci-2D0.57&d=DwIFaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=BmNY5KuefACTr_P43s8fXOXgNDkDiqlviyafeiVaP18&m=ZXOFIL3cRL2FIe2PYoKmuGRJYgPnkR05P1BNCwV6zeo&s=jnieBzEva0XUzWqVXNZ2O1DmNRW1FW-w6FE3HPAp1fk&e= >> >> The OracleJDK based ?labsjdk? binaries will be available soon at https://www.oracle.com/technetwork/oracle-labs/program-languages/downloads/index.html (in the lower half of the page). >> >> -Doug >> >> >> >> >> -- >> - DML >> >> >> >> >> -- >> - DML >> >> > > > -- > - DML From david.lloyd at redhat.com Mon Mar 25 17:00:55 2019 From: david.lloyd at redhat.com (David Lloyd) Date: Mon, 25 Mar 2019 12:00:55 -0500 Subject: JVMCI 0.57 released In-Reply-To: References: <78EFBB7C-582C-431C-9140-DF4817CFC177@oracle.com> <331FCEBC-2FC5-47D7-A2C1-DD33B8A9EFB5@oracle.com> Message-ID: The question is - which one of these artifacts actually contains the JVMCI classes? On Mon, Mar 25, 2019 at 11:41 AM Paul W?gerer wrote: > > There are maven artifacts that have a JVMCI dependency (and they exist for quite a while now (since RC8)). 
> They are needed for the native-image-maven-plugin described in > https://medium.com/graalvm/simplifying-native-image-generation-with-maven-plugin-and-embeddable-configuration-d5b283b92f57 > > The latest artifacts have the following JVMCI related dependency subtree: > > [INFO] --- maven-dependency-plugin:2.8:tree (default-cli) @ native-image-maven-plugin --- > [INFO] com.oracle.substratevm:native-image-maven-plugin:maven-plugin:1.0.0-rc14 > [INFO] +- com.oracle.substratevm:svm-driver:jar:1.0.0-rc14:compile > [INFO] | \- com.oracle.substratevm:library-support:jar:1.0.0-rc14:compile > [INFO] | +- org.graalvm.sdk:graal-sdk:jar:1.0.0-rc14:compile > [INFO] | +- com.oracle.substratevm:svm:jar:1.0.0-rc14:compile > [INFO] | | +- com.oracle.substratevm:svm-hosted-native-linux-amd64:tar.gz:1.0.0-rc14:compile > [INFO] | | +- com.oracle.substratevm:svm-hosted-native-darwin-amd64:tar.gz:1.0.0-rc14:compile > [INFO] | | +- com.oracle.substratevm:svm-hosted-native-windows-amd64:tar.gz:1.0.0-rc14:compile > [INFO] | | +- com.oracle.substratevm:pointsto:jar:1.0.0-rc14:compile > [INFO] | | \- org.graalvm.truffle:truffle-nfi:jar:1.0.0-rc14:compile > [INFO] | | +- org.graalvm.truffle:truffle-nfi-native-linux-amd64:tar.gz:1.0.0-rc14:compile > [INFO] | | \- org.graalvm.truffle:truffle-nfi-native-darwin-amd64:tar.gz:1.0.0-rc14:compile > [INFO] | +- com.oracle.substratevm:objectfile:jar:1.0.0-rc14:compile > [INFO] | +- org.graalvm.compiler:compiler:jar:1.0.0-rc14:compile > [INFO] | | \- org.graalvm.truffle:truffle-api:jar:1.0.0-rc14:compile > [INFO] | \- jline:jline:jar:2.14.6:compile > > Since the version tag of this tree is 1.0.0-rc14 the JVMCI API dependency is the same as we have in the respective GraalVM RC14 release https://github.com/oracle/graal/releases/tag/vm-1.0.0-rc14. For RC14 release that was JVMCI 0.56, iirc. > > Note that is possible to develop against later version of JVMCI by using SubstrateVM master from https://github.com/oracle/graal/tree/master/substratevm and use > > mx build > mx maven-plugin-install --deploy-dependencies > > This will give you a bleeding edge native-image-maven-plugin with all it's transitive dependencies installed into the maven local repository. > ATM, you would get 1.0.0-rc15-SNAPSHOT with JVMCI 0.57 dependency. > > Maybe the problem is that we put JVMCI updates out in the wild without waiting until we do the next GraalVM RC release update. > But on the other hand no one is forced to update to JVMCI 0.57 before RC15 gets released. > > -- > Not sure if all this info is of any help, > > Paul > > On 3/25/19 4:41 PM, Doug Simon wrote: > > > > On 25 Mar 2019, at 16:05, David Lloyd wrote: > > On Sun, Mar 24, 2019 at 1:42 PM Doug Simon wrote: > > On 24 Mar 2019, at 19:19, David Lloyd wrote: > > My use case is including a specialized GraalVM feature in an artifact, > the API of which relies on JVMCI classes. The feature would only be > used when using the SubstrateVM native image compiler, otherwise the > classes would remain unused. I'm hesitant to require using the JVMCI > or GraalVM JDK to build the project; the only alternative I can think > of would be an external artifact with the classes in it. > > > Maybe you can expand on this hesitation a bit. I?m not sure how you can use this feature without an actual JVMCI implementation underneath. Are you able to sketch out a simplified picture of the feature? > > > Sorry I meant literally the `GraalFeature` API within the GraalVM > project. 
This allows an artifact within a native image compilation to > sort of "hack in" to the compilation process to do various things like > specialized optimizations. But, it relies on the JVMCI API so you > can't generally compile such classes without it. > > > This seems like a discussion that should involve the native image team if there is some artifact available via Maven that has a JVMCI dependency. Paul, how would you suggest David can proceed with developing against com.oracle.svm.core.graal.GraalFeature in terms of satisfying the JVMCI dependency? Is there maybe an alternative way to achieve this that doesn?t involve having to resolve JVMCI for compilation? > > -Doug > > > But the classes are only used when a native image is being generated > and aren't otherwise loaded, so the artifact can still work fine in a > regular JVM. So it would be nice to be able to compile against these > classes without them being present in the JDK being used for > compilation. > > > -Doug > > > On Sun, Mar 24, 2019 at 5:19 AM Doug Simon wrote: > > > Hi David, > > On 23 Mar 2019, at 15:59, David Lloyd wrote: > > Is there any possibility of the JVMCI API classes being released as a > Maven artifact? It would allow development of software which can > optionally consume JVMCI without requiring a JVMCI JDK for building. > > > I suspect that is a very small software niche. Graal is the only consumer of JVMCI I?m currently aware of and it makes very little sense to develop Graal without a JVMCI JDK. > > Note also that there is not just one current JVMCI version but potentially one per JDK version that Graal supports. Graal can (and does) use differing JVMCI API using versioned sources (https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_graalvm_mx-23versioning-2Dsources-2Dfor-2Ddifferent-2Djdk-2Dreleases&d=DwIFaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=BmNY5KuefACTr_P43s8fXOXgNDkDiqlviyafeiVaP18&m=ftfZhJgS3d2iQ_0_mpAIneaYGlfExJ1WNxONRCAlaOk&s=JR_oMGVUidDKax3Z8LCC_1_Z15YAlraGkWTHTEKuDMU&e=). For example, the new HotSPOTJVMCIRuntime.excludeFromJVMCICompilation method added in jvmci-0.57 is used from JVMCI JDK8 specific code. > > What would a compelling use case be for developing against JVMCI without actually executing the artifact? > > -Doug > > > On Fri, Mar 22, 2019 at 4:47 PM Doug Simon wrote: > > > Changes in JVMCI 0.57 include: > > ? GR-13902: Replace adjustCompilationLevel mechanism. > ? GR-14526: Replace JVMCINMethodData constructor and operator new with initialize. > ? GR-14509: Fixed order of method mirror invalidation. > ? GR-14475: Fix support for jvmci.InitTimer. > ? GR-14105: Remove uses of system properties. > > The GR-13902 change introduced new API for implementing -Dgraal.CompileGraalWithC1Only=true without requiring a Java up-call from the VM when scheduling a method for compilation. This fixes a number of subtle bugs and unexpected VM behavior (see https://bugs.openjdk.java.net/browse/JDK-8219403). As a result of the API change, you need to update Graal to commit b3ec4830e02 or later when using this JVMCI release. > > The OpenJDK based binaries are at https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_graalvm_openjdk8-2Djvmci-2Dbuilder_releases_tag_jvmci-2D0.57&d=DwIFaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=BmNY5KuefACTr_P43s8fXOXgNDkDiqlviyafeiVaP18&m=ZXOFIL3cRL2FIe2PYoKmuGRJYgPnkR05P1BNCwV6zeo&s=jnieBzEva0XUzWqVXNZ2O1DmNRW1FW-w6FE3HPAp1fk&e= > > The OracleJDK based ?labsjdk? 
binaries will be available soon at https://www.oracle.com/technetwork/oracle-labs/program-languages/downloads/index.html (in the lower half of the page). > > -Doug > > > > > -- > - DML > > > > > -- > - DML > > > > > -- > - DML > > -- - DML From david.lloyd at redhat.com Mon Mar 25 17:28:59 2019 From: david.lloyd at redhat.com (David Lloyd) Date: Mon, 25 Mar 2019 12:28:59 -0500 Subject: JVMCI 0.57 released In-Reply-To: <8c39bdf8-8bda-ce46-4633-5bbd8f70b731@oracle.com> References: <78EFBB7C-582C-431C-9140-DF4817CFC177@oracle.com> <331FCEBC-2FC5-47D7-A2C1-DD33B8A9EFB5@oracle.com> <8c39bdf8-8bda-ce46-4633-5bbd8f70b731@oracle.com> Message-ID: On Mon, Mar 25, 2019 at 12:25 PM Paul W?gerer wrote: > On 3/25/19 6:00 PM, David Lloyd wrote: > > The question is - which one of these artifacts actually contains the > > JVMCI classes? > > Hmmm ... None because JVMCI is defined within the JVMCI JDK. > > Specifically the jvmci-api is in JDK>/jre/lib/jvmci/jvmci-api.jar Exactly. So, if you need to build an artifact which can optionally consume these APIs - you can't, unless you require a JDK to be used with JVMCI in it, which is exactly what I wish to avoid, because it makes it considerably harder to contribute to the project. Having a separate published version of these APIs (even stubs) would be very helpful in this regard. -- - DML From paul.woegerer at oracle.com Mon Mar 25 16:39:20 2019 From: paul.woegerer at oracle.com (=?UTF-8?Q?Paul_W=c3=b6gerer?=) Date: Mon, 25 Mar 2019 17:39:20 +0100 Subject: JVMCI 0.57 released In-Reply-To: References: <78EFBB7C-582C-431C-9140-DF4817CFC177@oracle.com> <331FCEBC-2FC5-47D7-A2C1-DD33B8A9EFB5@oracle.com> Message-ID: There are maven artifacts that have a JVMCI dependency (and they exist for quite a while now (since RC8)). They are needed for the native-image-maven-plugin described in https://medium.com/graalvm/simplifying-native-image-generation-with-maven-plugin-and-embeddable-configuration-d5b283b92f57 The latest artifacts have the following JVMCI related dependency subtree: [INFO] --- maven-dependency-plugin:2.8:tree (default-cli) @ native-image-maven-plugin --- [INFO] com.oracle.substratevm:native-image-maven-plugin:maven-plugin:1.0.0-rc14 [INFO] +- com.oracle.substratevm:svm-driver:jar:1.0.0-rc14:compile [INFO] |? \- com.oracle.substratevm:library-support:jar:1.0.0-rc14:compile [INFO] |???? +- org.graalvm.sdk:graal-sdk:jar:1.0.0-rc14:compile [INFO] |???? +- com.oracle.substratevm:svm:jar:1.0.0-rc14:compile [INFO] |???? |? +- com.oracle.substratevm:svm-hosted-native-linux-amd64:tar.gz:1.0.0-rc14:compile [INFO] |???? |? +- com.oracle.substratevm:svm-hosted-native-darwin-amd64:tar.gz:1.0.0-rc14:compile [INFO] |???? |? +- com.oracle.substratevm:svm-hosted-native-windows-amd64:tar.gz:1.0.0-rc14:compile [INFO] |???? |? +- com.oracle.substratevm:pointsto:jar:1.0.0-rc14:compile [INFO] |???? |? \- org.graalvm.truffle:truffle-nfi:jar:1.0.0-rc14:compile [INFO] |???? |???? +- org.graalvm.truffle:truffle-nfi-native-linux-amd64:tar.gz:1.0.0-rc14:compile [INFO] |???? |???? \- org.graalvm.truffle:truffle-nfi-native-darwin-amd64:tar.gz:1.0.0-rc14:compile [INFO] |???? +- com.oracle.substratevm:objectfile:jar:1.0.0-rc14:compile [INFO] |???? +- org.graalvm.compiler:compiler:jar:1.0.0-rc14:compile [INFO] |???? |? \- org.graalvm.truffle:truffle-api:jar:1.0.0-rc14:compile [INFO] |???? 
\- jline:jline:jar:2.14.6:compile Since the version tag of this tree is 1.0.0-rc14 the JVMCI API dependency is the same as we have in the respective GraalVM RC14 release https://github.com/oracle/graal/releases/tag/vm-1.0.0-rc14. For RC14 release that was JVMCI 0.56, iirc. Note that is possible to develop against later version of JVMCI by using SubstrateVM master from https://github.com/oracle/graal/tree/master/substratevm and use mx build mx maven-plugin-install --deploy-dependencies This will give you a bleeding edge native-image-maven-plugin with all it's transitive dependencies installed into the maven local repository. ATM, you would get 1.0.0-rc15-SNAPSHOT with JVMCI 0.57 dependency. Maybe the problem is that we put JVMCI updates out in the wild without waiting until we do the next GraalVM RC release update. But on the other hand no one is forced to update to JVMCI 0.57 before RC15 gets released. -- Not sure if all this info is of any help, Paul On 3/25/19 4:41 PM, Doug Simon wrote: > > >> On 25 Mar 2019, at 16:05, David Lloyd > > wrote: >> >> On Sun, Mar 24, 2019 at 1:42 PM Doug Simon > > wrote: >>> On 24 Mar 2019, at 19:19, David Lloyd >> > wrote: >>> >>> My use case is including a specialized GraalVM feature in an artifact, >>> the API of which relies on JVMCI classes. ?The feature would only be >>> used when using the SubstrateVM native image compiler, otherwise the >>> classes would remain unused. ?I'm hesitant to require using the JVMCI >>> or GraalVM JDK to build the project; the only alternative I can think >>> of would be an external artifact with the classes in it. >>> >>> >>> Maybe you can expand on this hesitation a bit. I?m not sure how you >>> can use this feature without an actual JVMCI implementation >>> underneath. Are you able to sketch out a simplified picture of the >>> feature? >> >> Sorry I meant literally the `GraalFeature` API within the GraalVM >> project. ?This allows an artifact within a native image compilation to >> sort of "hack in" to the compilation process to do various things like >> specialized optimizations. ?But, it relies on the JVMCI API so you >> can't generally compile such classes without it. > > This seems like a discussion that should involve the native image team > if there is some artifact available via Maven that has a JVMCI > dependency. Paul, how would you suggest David can proceed with > developing against?com.oracle.svm.core.graal.GraalFeature in terms of > satisfying the JVMCI dependency? Is there maybe an alternative way to > achieve this that doesn?t involve having to resolve JVMCI for compilation? > > -Doug > >> >> But the classes are only used when a native image is being generated >> and aren't otherwise loaded, so the artifact can still work fine in a >> regular JVM. ?So it would be nice to be able to compile against these >> classes without them being present in the JDK being used for >> compilation. >> >>> >>> -Doug >>> >>> >>> On Sun, Mar 24, 2019 at 5:19 AM Doug Simon >> > wrote: >>> >>> >>> Hi David, >>> >>> On 23 Mar 2019, at 15:59, David Lloyd >> > wrote: >>> >>> Is there any possibility of the JVMCI API classes being released as a >>> Maven artifact? ?It would allow development of software which can >>> optionally consume JVMCI without requiring a JVMCI JDK for building. >>> >>> >>> I suspect that is a very small software niche. Graal is the only >>> consumer of JVMCI I?m currently aware of and it makes very little >>> sense to develop Graal without a JVMCI JDK. 
>>> >>> Note also that there is not just one current JVMCI version but >>> potentially one per JDK version that Graal supports. Graal can (and >>> does) use differing JVMCI API using versioned sources >>> (https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_graalvm_mx-23versioning-2Dsources-2Dfor-2Ddifferent-2Djdk-2Dreleases&d=DwIFaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=BmNY5KuefACTr_P43s8fXOXgNDkDiqlviyafeiVaP18&m=ftfZhJgS3d2iQ_0_mpAIneaYGlfExJ1WNxONRCAlaOk&s=JR_oMGVUidDKax3Z8LCC_1_Z15YAlraGkWTHTEKuDMU&e=). >>> For example, the new HotSPOTJVMCIRuntime.excludeFromJVMCICompilation >>> method added in jvmci-0.57 is used from JVMCI JDK8 specific code. >>> >>> What would a compelling use case be for developing against JVMCI >>> without actually executing the artifact? >>> >>> -Doug >>> >>> >>> On Fri, Mar 22, 2019 at 4:47 PM Doug Simon >> > wrote: >>> >>> >>> Changes in JVMCI 0.57 include: >>> >>> ? GR-13902: Replace adjustCompilationLevel mechanism. >>> ? GR-14526: Replace JVMCINMethodData constructor and operator new >>> with initialize. >>> ? GR-14509: Fixed order of method mirror invalidation. >>> ? GR-14475: Fix support for jvmci.InitTimer. >>> ? GR-14105: Remove uses of system properties. >>> >>> The GR-13902 change introduced new API for implementing >>> -Dgraal.CompileGraalWithC1Only=true without requiring a Java up-call >>> from the VM when scheduling a method for compilation. This fixes a >>> number of subtle bugs and unexpected VM behavior (see >>> https://bugs.openjdk.java.net/browse/JDK-8219403). As a result of >>> the API change, you need to update Graal to commit b3ec4830e02 >>> >>> or later when using this JVMCI release. >>> >>> The OpenJDK based binaries are at >>> https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_graalvm_openjdk8-2Djvmci-2Dbuilder_releases_tag_jvmci-2D0.57&d=DwIFaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=BmNY5KuefACTr_P43s8fXOXgNDkDiqlviyafeiVaP18&m=ZXOFIL3cRL2FIe2PYoKmuGRJYgPnkR05P1BNCwV6zeo&s=jnieBzEva0XUzWqVXNZ2O1DmNRW1FW-w6FE3HPAp1fk&e= >>> >>> The OracleJDK based ?labsjdk? binaries will be available soon at >>> https://www.oracle.com/technetwork/oracle-labs/program-languages/downloads/index.html >>> (in the lower half of the page). >>> >>> -Doug >>> >>> >>> >>> >>> -- >>> - DML >>> >>> >>> >>> >>> -- >>> - DML >>> >>> >> >> >> --? >> - DML > From paul.woegerer at oracle.com Mon Mar 25 17:23:15 2019 From: paul.woegerer at oracle.com (=?UTF-8?Q?Paul_W=c3=b6gerer?=) Date: Mon, 25 Mar 2019 18:23:15 +0100 Subject: JVMCI 0.57 released In-Reply-To: References: <78EFBB7C-582C-431C-9140-DF4817CFC177@oracle.com> <331FCEBC-2FC5-47D7-A2C1-DD33B8A9EFB5@oracle.com> Message-ID: <8c39bdf8-8bda-ce46-4633-5bbd8f70b731@oracle.com> On 3/25/19 6:00 PM, David Lloyd wrote: > The question is - which one of these artifacts actually contains the > JVMCI classes? Hmmm ... None because JVMCI is defined within the JVMCI JDK. Specifically the jvmci-api is in /jre/lib/jvmci/jvmci-api.jar HTH, Paul > > On Mon, Mar 25, 2019 at 11:41 AM Paul W?gerer wrote: >> There are maven artifacts that have a JVMCI dependency (and they exist for quite a while now (since RC8)). 
>> They are needed for the native-image-maven-plugin described in >> https://urldefense.proofpoint.com/v2/url?u=https-3A__medium.com_graalvm_simplifying-2Dnative-2Dimage-2Dgeneration-2Dwith-2Dmaven-2Dplugin-2Dand-2Dembeddable-2Dconfiguration-2Dd5b283b92f57&d=DwIFaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=fVhHR6zF-pIHH8ihu3edPeAzcQV0cwVWLqqYCcpFtw0&m=yIIt50oSRMn3y5Fj5zPfsKQbYIKrJbAwGUNeKvGhkak&s=BQtsSXyvZOsag_WU8_c8WoDA4mYcJh6EXnTgnRGPp88&e= >> >> The latest artifacts have the following JVMCI related dependency subtree: >> >> [INFO] --- maven-dependency-plugin:2.8:tree (default-cli) @ native-image-maven-plugin --- >> [INFO] com.oracle.substratevm:native-image-maven-plugin:maven-plugin:1.0.0-rc14 >> [INFO] +- com.oracle.substratevm:svm-driver:jar:1.0.0-rc14:compile >> [INFO] | \- com.oracle.substratevm:library-support:jar:1.0.0-rc14:compile >> [INFO] | +- org.graalvm.sdk:graal-sdk:jar:1.0.0-rc14:compile >> [INFO] | +- com.oracle.substratevm:svm:jar:1.0.0-rc14:compile >> [INFO] | | +- com.oracle.substratevm:svm-hosted-native-linux-amd64:tar.gz:1.0.0-rc14:compile >> [INFO] | | +- com.oracle.substratevm:svm-hosted-native-darwin-amd64:tar.gz:1.0.0-rc14:compile >> [INFO] | | +- com.oracle.substratevm:svm-hosted-native-windows-amd64:tar.gz:1.0.0-rc14:compile >> [INFO] | | +- com.oracle.substratevm:pointsto:jar:1.0.0-rc14:compile >> [INFO] | | \- org.graalvm.truffle:truffle-nfi:jar:1.0.0-rc14:compile >> [INFO] | | +- org.graalvm.truffle:truffle-nfi-native-linux-amd64:tar.gz:1.0.0-rc14:compile >> [INFO] | | \- org.graalvm.truffle:truffle-nfi-native-darwin-amd64:tar.gz:1.0.0-rc14:compile >> [INFO] | +- com.oracle.substratevm:objectfile:jar:1.0.0-rc14:compile >> [INFO] | +- org.graalvm.compiler:compiler:jar:1.0.0-rc14:compile >> [INFO] | | \- org.graalvm.truffle:truffle-api:jar:1.0.0-rc14:compile >> [INFO] | \- jline:jline:jar:2.14.6:compile >> >> Since the version tag of this tree is 1.0.0-rc14 the JVMCI API dependency is the same as we have in the respective GraalVM RC14 release https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_oracle_graal_releases_tag_vm-2D1.0.0-2Drc14&d=DwIFaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=fVhHR6zF-pIHH8ihu3edPeAzcQV0cwVWLqqYCcpFtw0&m=yIIt50oSRMn3y5Fj5zPfsKQbYIKrJbAwGUNeKvGhkak&s=Jo2mJgEP8hN-3Pv0QXgfYiczCAcQEqefkky09rnyN1g&e=. For RC14 release that was JVMCI 0.56, iirc. >> >> Note that is possible to develop against later version of JVMCI by using SubstrateVM master from https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_oracle_graal_tree_master_substratevm&d=DwIFaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=fVhHR6zF-pIHH8ihu3edPeAzcQV0cwVWLqqYCcpFtw0&m=yIIt50oSRMn3y5Fj5zPfsKQbYIKrJbAwGUNeKvGhkak&s=7K7gfqJl5elAwa3oNzPuWly_2PqrYTo2xtAxCQFzW4Y&e= and use >> >> mx build >> mx maven-plugin-install --deploy-dependencies >> >> This will give you a bleeding edge native-image-maven-plugin with all it's transitive dependencies installed into the maven local repository. >> ATM, you would get 1.0.0-rc15-SNAPSHOT with JVMCI 0.57 dependency. >> >> Maybe the problem is that we put JVMCI updates out in the wild without waiting until we do the next GraalVM RC release update. >> But on the other hand no one is forced to update to JVMCI 0.57 before RC15 gets released. 
>> >> -- >> Not sure if all this info is of any help, >> >> Paul >> >> On 3/25/19 4:41 PM, Doug Simon wrote: >> >> >> >> On 25 Mar 2019, at 16:05, David Lloyd wrote: >> >> On Sun, Mar 24, 2019 at 1:42 PM Doug Simon wrote: >> >> On 24 Mar 2019, at 19:19, David Lloyd wrote: >> >> My use case is including a specialized GraalVM feature in an artifact, >> the API of which relies on JVMCI classes. The feature would only be >> used when using the SubstrateVM native image compiler, otherwise the >> classes would remain unused. I'm hesitant to require using the JVMCI >> or GraalVM JDK to build the project; the only alternative I can think >> of would be an external artifact with the classes in it. >> >> >> Maybe you can expand on this hesitation a bit. I?m not sure how you can use this feature without an actual JVMCI implementation underneath. Are you able to sketch out a simplified picture of the feature? >> >> >> Sorry I meant literally the `GraalFeature` API within the GraalVM >> project. This allows an artifact within a native image compilation to >> sort of "hack in" to the compilation process to do various things like >> specialized optimizations. But, it relies on the JVMCI API so you >> can't generally compile such classes without it. >> >> >> This seems like a discussion that should involve the native image team if there is some artifact available via Maven that has a JVMCI dependency. Paul, how would you suggest David can proceed with developing against com.oracle.svm.core.graal.GraalFeature in terms of satisfying the JVMCI dependency? Is there maybe an alternative way to achieve this that doesn?t involve having to resolve JVMCI for compilation? >> >> -Doug >> >> >> But the classes are only used when a native image is being generated >> and aren't otherwise loaded, so the artifact can still work fine in a >> regular JVM. So it would be nice to be able to compile against these >> classes without them being present in the JDK being used for >> compilation. >> >> >> -Doug >> >> >> On Sun, Mar 24, 2019 at 5:19 AM Doug Simon wrote: >> >> >> Hi David, >> >> On 23 Mar 2019, at 15:59, David Lloyd wrote: >> >> Is there any possibility of the JVMCI API classes being released as a >> Maven artifact? It would allow development of software which can >> optionally consume JVMCI without requiring a JVMCI JDK for building. >> >> >> I suspect that is a very small software niche. Graal is the only consumer of JVMCI I?m currently aware of and it makes very little sense to develop Graal without a JVMCI JDK. >> >> Note also that there is not just one current JVMCI version but potentially one per JDK version that Graal supports. Graal can (and does) use differing JVMCI API using versioned sources (https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_graalvm_mx-23versioning-2Dsources-2Dfor-2Ddifferent-2Djdk-2Dreleases&d=DwIFaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=BmNY5KuefACTr_P43s8fXOXgNDkDiqlviyafeiVaP18&m=ftfZhJgS3d2iQ_0_mpAIneaYGlfExJ1WNxONRCAlaOk&s=JR_oMGVUidDKax3Z8LCC_1_Z15YAlraGkWTHTEKuDMU&e=). For example, the new HotSPOTJVMCIRuntime.excludeFromJVMCICompilation method added in jvmci-0.57 is used from JVMCI JDK8 specific code. >> >> What would a compelling use case be for developing against JVMCI without actually executing the artifact? >> >> -Doug >> >> >> On Fri, Mar 22, 2019 at 4:47 PM Doug Simon wrote: >> >> >> Changes in JVMCI 0.57 include: >> >> ? GR-13902: Replace adjustCompilationLevel mechanism. >> ? 
GR-14526: Replace JVMCINMethodData constructor and operator new with initialize. >> ? GR-14509: Fixed order of method mirror invalidation. >> ? GR-14475: Fix support for jvmci.InitTimer. >> ? GR-14105: Remove uses of system properties. >> >> The GR-13902 change introduced new API for implementing -Dgraal.CompileGraalWithC1Only=true without requiring a Java up-call from the VM when scheduling a method for compilation. This fixes a number of subtle bugs and unexpected VM behavior (see https://bugs.openjdk.java.net/browse/JDK-8219403). As a result of the API change, you need to update Graal to commit b3ec4830e02 or later when using this JVMCI release. >> >> The OpenJDK based binaries are at https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_graalvm_openjdk8-2Djvmci-2Dbuilder_releases_tag_jvmci-2D0.57&d=DwIFaQ&c=RoP1YumCXCgaWHvlZYR8PZh8Bv7qIrMUB65eapI_JnE&r=BmNY5KuefACTr_P43s8fXOXgNDkDiqlviyafeiVaP18&m=ZXOFIL3cRL2FIe2PYoKmuGRJYgPnkR05P1BNCwV6zeo&s=jnieBzEva0XUzWqVXNZ2O1DmNRW1FW-w6FE3HPAp1fk&e= >> >> The OracleJDK based ?labsjdk? binaries will be available soon at https://www.oracle.com/technetwork/oracle-labs/program-languages/downloads/index.html (in the lower half of the page). >> >> -Doug >> >> >> >> >> -- >> - DML >> >> >> >> >> -- >> - DML >> >> >> >> >> -- >> - DML >> >> > From gilles.m.duboscq at oracle.com Tue Mar 26 11:14:06 2019 From: gilles.m.duboscq at oracle.com (Gilles Duboscq) Date: Tue, 26 Mar 2019 12:14:06 +0100 Subject: JVMCI 0.57 released In-Reply-To: References: <78EFBB7C-582C-431C-9140-DF4817CFC177@oracle.com> <331FCEBC-2FC5-47D7-A2C1-DD33B8A9EFB5@oracle.com> <8c39bdf8-8bda-ce46-4633-5bbd8f70b731@oracle.com> Message-ID: <44aca3df-015d-1594-7dd5-392d6397ca62@oracle.com> Hi David, I think the more interesting question here is around the things you are using (e.g., `GraalFeature`) that in turn make you need to depend on JVMCI. The reason why it's not easy to consume at the moment is that none of this (classes in `com.oracle.svm.core` and others) is API. The only APIs available at the moment for interaction with SVM are the classes in `org.graalvm.nativeimage` which are part of the the Graal SDK. This is available on maven as `org.graalvm.sdk:graal-sdk`. Since you are using things from `com.oracle.svm.core.graal`, i'm guessing you didn't find the APIs you need in the SDK. Could you describe your use-case in more detail? Maybe then we could then see if there is a way we can improve/extend the API in the SDK to fit your use-case. Gilles PS: we(I) was rather hesitant to publish any of the Graal and SVM jars on maven since this gives the false impression that any of this is an API with some kind of stability while this is not the case. The only things we try to keep stable as APIs are the Graal SDK and the Truffle API. On the other hand, we consider all the rest (graal compiler, SVM, truffle languages implementations...) as implementation code that can happily be refactored, where things can be removed without any deprecation notice etc. Even in minor releases! On 25/03/2019 18:28, David Lloyd wrote: > On Mon, Mar 25, 2019 at 12:25 PM Paul W?gerer wrote: >> On 3/25/19 6:00 PM, David Lloyd wrote: >>> The question is - which one of these artifacts actually contains the >>> JVMCI classes? >> >> Hmmm ... None because JVMCI is defined within the JVMCI JDK. >> >> Specifically the jvmci-api is in > JDK>/jre/lib/jvmci/jvmci-api.jar > > Exactly. 
So, if you need to build an artifact which can optionally > consume these APIs - you can't, unless you require a JDK to be used > with JVMCI in it, which is exactly what I wish to avoid, because it > makes it considerably harder to contribute to the project. Having a > separate published version of these APIs (even stubs) would be very > helpful in this regard. > From david.lloyd at redhat.com Tue Mar 26 17:01:39 2019 From: david.lloyd at redhat.com (David Lloyd) Date: Tue, 26 Mar 2019 12:01:39 -0500 Subject: JVMCI 0.57 released In-Reply-To: <44aca3df-015d-1594-7dd5-392d6397ca62@oracle.com> References: <78EFBB7C-582C-431C-9140-DF4817CFC177@oracle.com> <331FCEBC-2FC5-47D7-A2C1-DD33B8A9EFB5@oracle.com> <8c39bdf8-8bda-ce46-4633-5bbd8f70b731@oracle.com> <44aca3df-015d-1594-7dd5-392d6397ca62@oracle.com> Message-ID: On Tue, Mar 26, 2019 at 6:14 AM Gilles Duboscq wrote: > > Hi David, > > I think the more interesting question here is around the things you are using (e.g., `GraalFeature`) that in turn make you need to depend on JVMCI. > The reason why it's not easy to consume at the moment is that none of this (classes in `com.oracle.svm.core` and others) is API. > > The only APIs available at the moment for interaction with SVM are the classes in `org.graalvm.nativeimage` which are part of the the Graal SDK. > This is available on maven as `org.graalvm.sdk:graal-sdk`. > > Since you are using things from `com.oracle.svm.core.graal`, i'm guessing you didn't find the APIs you need in the SDK. > Could you describe your use-case in more detail? Maybe then we could then see if there is a way we can improve/extend the API in the SDK to fit your use-case. Basically I want to pilot a few possible optimizations outside of the SubstrateVM tree, some of which might be very specific to a particular library. Some of these might end up as upstream feature requests or PRs to SubstrateVM, and some might just end up getting discarded. But regardless, any manipulation of the compile tree is not possible without the more specific `GraalFeature` API AFAICT. I'm not too worried about breaking changes because we're presently mandating specific SubstrateVM version(s) to be used with the Quarkus project. But, I don't really want to force people to use GraalVM or the Labs SDK to *build* Quarkus as it might really discourage outside development/contribution. -- - DML From jesper.wilhelmsson at oracle.com Thu Mar 28 07:37:23 2019 From: jesper.wilhelmsson at oracle.com (jesper.wilhelmsson at oracle.com) Date: Thu, 28 Mar 2019 08:37:23 +0100 Subject: RFR: JDK-8221341 - Update Graal Message-ID: <9B53B1EC-1420-4844-9276-EA25E8F13987@oracle.com> Hi, Please review the patch to integrate the latest Graal changes into OpenJDK. 
Graal tip to integrate: 7970bd76ff60600ab5a2fc96cd24ddd7ed017cf8 JBS duplicates fixed by this integration: https://bugs.openjdk.java.net/browse/JDK-8220643 https://bugs.openjdk.java.net/browse/JDK-8220810 JBS duplicates deferred to the next integration: https://bugs.openjdk.java.net/browse/JDK-8214947 Bug: https://bugs.openjdk.java.net/browse/JDK-8221341 Webrev: http://cr.openjdk.java.net/~jwilhelm/8221341/webrev.00/ Thanks, /Jesper From vladimir.kozlov at oracle.com Thu Mar 28 17:48:05 2019 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Thu, 28 Mar 2019 10:48:05 -0700 Subject: RFR: JDK-8221341 - Update Graal In-Reply-To: <9B53B1EC-1420-4844-9276-EA25E8F13987@oracle.com> References: <9B53B1EC-1420-4844-9276-EA25E8F13987@oracle.com> Message-ID: <9ce1ee74-d524-e29e-3175-48279af339a2@oracle.com> On 3/28/19 12:37 AM, jesper.wilhelmsson at oracle.com wrote: > Hi, > > Please review the patch to integrate the latest Graal changes into OpenJDK. > Graal tip to integrate: 7970bd76ff60600ab5a2fc96cd24ddd7ed017cf8 > > JBS duplicates fixed by this integration: > https://bugs.openjdk.java.net/browse/JDK-8220643 > https://bugs.openjdk.java.net/browse/JDK-8220810 > > JBS duplicates deferred to the next integration: > https://bugs.openjdk.java.net/browse/JDK-8214947 We should investigate why this bug is still referenced in RFR. We already discussed it: https://mail.openjdk.java.net/pipermail/hotspot-compiler-dev/2019-March/033130.html > > Bug: https://bugs.openjdk.java.net/browse/JDK-8221341 > Webrev: http://cr.openjdk.java.net/~jwilhelm/8221341/webrev.00/ We also discussed indentation change in make/test/JtregGraalUnit.gmk Why the change showed up again? Otherwise changes looks good. Doug, it is good to have only one Graal class to access Unsafe class - GraalUnsafeAccess.java. But we should not use sun.misc.Unsafe in JDK 13 - we should have version for JDK9+ which use jdk.internal.misc.Unsafe. It is for an other update. Thanks, Vladimir > > Thanks, > /Jesper > From doug.simon at oracle.com Thu Mar 28 17:51:36 2019 From: doug.simon at oracle.com (Doug Simon) Date: Thu, 28 Mar 2019 18:51:36 +0100 Subject: RFR: JDK-8221341 - Update Graal In-Reply-To: <9ce1ee74-d524-e29e-3175-48279af339a2@oracle.com> References: <9B53B1EC-1420-4844-9276-EA25E8F13987@oracle.com> <9ce1ee74-d524-e29e-3175-48279af339a2@oracle.com> Message-ID: <71A07FCC-2B29-4856-872A-3D028BE66B80@oracle.com> > On 28 Mar 2019, at 18:48, Vladimir Kozlov wrote: > > On 3/28/19 12:37 AM, jesper.wilhelmsson at oracle.com wrote: >> Hi, >> Please review the patch to integrate the latest Graal changes into OpenJDK. >> Graal tip to integrate: 7970bd76ff60600ab5a2fc96cd24ddd7ed017cf8 >> JBS duplicates fixed by this integration: >> https://bugs.openjdk.java.net/browse/JDK-8220643 >> https://bugs.openjdk.java.net/browse/JDK-8220810 >> JBS duplicates deferred to the next integration: >> https://bugs.openjdk.java.net/browse/JDK-8214947 > > We should investigate why this bug is still referenced in RFR. We already discussed it: > https://mail.openjdk.java.net/pipermail/hotspot-compiler-dev/2019-March/033130.html > >> Bug: https://bugs.openjdk.java.net/browse/JDK-8221341 >> Webrev: http://cr.openjdk.java.net/~jwilhelm/8221341/webrev.00/ > > We also discussed indentation change in make/test/JtregGraalUnit.gmk > Why the change showed up again? > > Otherwise changes looks good. > > Doug, it is good to have only one Graal class to access Unsafe class - GraalUnsafeAccess.java. 
> But we should not use sun.misc.Unsafe in JDK 13 - we should have version for JDK9+ which use jdk.internal.misc.Unsafe. It is for an other update. Shouldn?t we use sun.misc.Unsafe for as long as it?s available? The advantage is that it is publicly exported and means no need for ?add-exports when running/testing Graal from outside JDK. -Doug From vladimir.kozlov at oracle.com Thu Mar 28 18:10:58 2019 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Thu, 28 Mar 2019 11:10:58 -0700 Subject: RFR: JDK-8221341 - Update Graal In-Reply-To: <71A07FCC-2B29-4856-872A-3D028BE66B80@oracle.com> References: <9B53B1EC-1420-4844-9276-EA25E8F13987@oracle.com> <9ce1ee74-d524-e29e-3175-48279af339a2@oracle.com> <71A07FCC-2B29-4856-872A-3D028BE66B80@oracle.com> Message-ID: <35b291f0-565b-b722-b8e5-aeee732a5f1a@oracle.com> > Shouldn?t we use sun.misc.Unsafe for as long as it?s available? The advantage is that it is publicly exported and means no need for ?add-exports when running/testing Graal from outside JDK. I thought it is oversight. But I am fine if it is done intentionally. Agree. Vladimir On 3/28/19 10:51 AM, Doug Simon wrote: > > >> On 28 Mar 2019, at 18:48, Vladimir Kozlov wrote: >> >> On 3/28/19 12:37 AM, jesper.wilhelmsson at oracle.com wrote: >>> Hi, >>> Please review the patch to integrate the latest Graal changes into OpenJDK. >>> Graal tip to integrate: 7970bd76ff60600ab5a2fc96cd24ddd7ed017cf8 >>> JBS duplicates fixed by this integration: >>> https://bugs.openjdk.java.net/browse/JDK-8220643 >>> https://bugs.openjdk.java.net/browse/JDK-8220810 >>> JBS duplicates deferred to the next integration: >>> https://bugs.openjdk.java.net/browse/JDK-8214947 >> >> We should investigate why this bug is still referenced in RFR. We already discussed it: >> https://mail.openjdk.java.net/pipermail/hotspot-compiler-dev/2019-March/033130.html >> >>> Bug: https://bugs.openjdk.java.net/browse/JDK-8221341 >>> Webrev: http://cr.openjdk.java.net/~jwilhelm/8221341/webrev.00/ >> >> We also discussed indentation change in make/test/JtregGraalUnit.gmk >> Why the change showed up again? >> >> Otherwise changes looks good. >> >> Doug, it is good to have only one Graal class to access Unsafe class - GraalUnsafeAccess.java. >> But we should not use sun.misc.Unsafe in JDK 13 - we should have version for JDK9+ which use jdk.internal.misc.Unsafe. It is for an other update. > > Shouldn?t we use sun.misc.Unsafe for as long as it?s available? The advantage is that it is publicly exported and means no need for ?add-exports when running/testing Graal from outside JDK. > > -Doug > From dean.long at oracle.com Thu Mar 28 18:21:03 2019 From: dean.long at oracle.com (dean.long at oracle.com) Date: Thu, 28 Mar 2019 11:21:03 -0700 Subject: RFR: JDK-8221341 - Update Graal In-Reply-To: <9ce1ee74-d524-e29e-3175-48279af339a2@oracle.com> References: <9B53B1EC-1420-4844-9276-EA25E8F13987@oracle.com> <9ce1ee74-d524-e29e-3175-48279af339a2@oracle.com> Message-ID: <7b59a110-de9c-e204-75d4-edf69cf9b532@oracle.com> On 3/28/19 10:48 AM, Vladimir Kozlov wrote: > We also discussed indentation change in make/test/JtregGraalUnit.gmk > Why the change showed up again? It looks like the indentation in that file was changed by 8220383. The mx script needs to be smarter about preserving the existing indentation. 
dl From jesper.wilhelmsson at oracle.com Thu Mar 28 18:33:23 2019 From: jesper.wilhelmsson at oracle.com (jesper.wilhelmsson at oracle.com) Date: Thu, 28 Mar 2019 19:33:23 +0100 Subject: RFR: JDK-8221341 - Update Graal In-Reply-To: <9ce1ee74-d524-e29e-3175-48279af339a2@oracle.com> References: <9B53B1EC-1420-4844-9276-EA25E8F13987@oracle.com> <9ce1ee74-d524-e29e-3175-48279af339a2@oracle.com> Message-ID: Hi Vladimir, Thanks for reviewing! > On 28 Mar 2019, at 18:48, Vladimir Kozlov wrote: > > On 3/28/19 12:37 AM, jesper.wilhelmsson at oracle.com wrote: >> Hi, >> Please review the patch to integrate the latest Graal changes into OpenJDK. >> Graal tip to integrate: 7970bd76ff60600ab5a2fc96cd24ddd7ed017cf8 >> JBS duplicates fixed by this integration: >> https://bugs.openjdk.java.net/browse/JDK-8220643 >> https://bugs.openjdk.java.net/browse/JDK-8220810 >> JBS duplicates deferred to the next integration: >> https://bugs.openjdk.java.net/browse/JDK-8214947 > > We should investigate why this bug is still referenced in RFR. We already discussed it: > https://mail.openjdk.java.net/pipermail/hotspot-compiler-dev/2019-March/033130.html Oops, sorry, missed that. I have removed the link from the next update issue so it won't show up again. >> Bug: https://bugs.openjdk.java.net/browse/JDK-8221341 >> Webrev: http://cr.openjdk.java.net/~jwilhelm/8221341/webrev.00/ > > We also discussed indentation change in make/test/JtregGraalUnit.gmk > Why the change showed up again? My understanding was that this was caused by a bug in the mx script. As long as that bug is there this will keep happening. I will try to remember to revert this going forward, but it is something that I will need to do manually every time. Someone should fix the mx script. I don't know who owns that though. Thanks, /Jesper > > Otherwise changes looks good. > > Doug, it is good to have only one Graal class to access Unsafe class - GraalUnsafeAccess.java. > But we should not use sun.misc.Unsafe in JDK 13 - we should have version for JDK9+ which use jdk.internal.misc.Unsafe. It is for an other update. > > Thanks, > Vladimir > >> Thanks, >> /Jesper From doug.simon at oracle.com Thu Mar 28 18:52:01 2019 From: doug.simon at oracle.com (Doug Simon) Date: Thu, 28 Mar 2019 19:52:01 +0100 Subject: RFR: JDK-8221341 - Update Graal In-Reply-To: References: <9B53B1EC-1420-4844-9276-EA25E8F13987@oracle.com> <9ce1ee74-d524-e29e-3175-48279af339a2@oracle.com> Message-ID: <6192C685-DB3B-4399-8D28-B8E1F8D4AE72@oracle.com> > On 28 Mar 2019, at 19:33, jesper.wilhelmsson at oracle.com wrote: > > Hi Vladimir, > > Thanks for reviewing! > >> On 28 Mar 2019, at 18:48, Vladimir Kozlov > wrote: >> >> On 3/28/19 12:37 AM, jesper.wilhelmsson at oracle.com wrote: >>> Hi, >>> Please review the patch to integrate the latest Graal changes into OpenJDK. >>> Graal tip to integrate: 7970bd76ff60600ab5a2fc96cd24ddd7ed017cf8 >>> JBS duplicates fixed by this integration: >>> https://bugs.openjdk.java.net/browse/JDK-8220643 >>> https://bugs.openjdk.java.net/browse/JDK-8220810 >>> JBS duplicates deferred to the next integration: >>> https://bugs.openjdk.java.net/browse/JDK-8214947 >> >> We should investigate why this bug is still referenced in RFR. We already discussed it: >> https://mail.openjdk.java.net/pipermail/hotspot-compiler-dev/2019-March/033130.html > > Oops, sorry, missed that. I have removed the link from the next update issue so it won't show up again. 
> >>> Bug: https://bugs.openjdk.java.net/browse/JDK-8221341 >>> Webrev: http://cr.openjdk.java.net/~jwilhelm/8221341/webrev.00/ >> >> We also discussed indentation change in make/test/JtregGraalUnit.gmk >> Why the change showed up again? > > My understanding was that this was caused by a bug in the mx script. As long as that bug is there this will keep happening. I will try to remember to revert this going forward, but it is something that I will need to do manually every time. Someone should fix the mx script. I don't know who owns that though. Anyone who can code Python and submit a pull request ;-) I believe these are the relevant lines: https://github.com/oracle/graal/blob/master/compiler/mx.compiler/mx_updategraalinopenjdk.py#L297-L308 -Doug > >> >> Otherwise changes looks good. >> >> Doug, it is good to have only one Graal class to access Unsafe class - GraalUnsafeAccess.java. >> But we should not use sun.misc.Unsafe in JDK 13 - we should have version for JDK9+ which use jdk.internal.misc.Unsafe. It is for an other update. >> >> Thanks, >> Vladimir >> >>> Thanks, >>> /Jesper > From vladimir.kozlov at oracle.com Thu Mar 28 19:15:31 2019 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Thu, 28 Mar 2019 12:15:31 -0700 Subject: [13] RFR(L) 8220623: [JVMCI] Update JVMCI to support JVMCI based Compiler compiled into shared library Message-ID: https://bugs.openjdk.java.net/browse/JDK-8220623 http://cr.openjdk.java.net/~kvn/8220623/webrev.03/ Update JVMCI to support pre-compiled as shared library Graal. Using aoted Graal can offers benefits including: - fast startup - compile time similar to native JIt compilers (C2) - memory usage disjoint from the application Java heap - no profile pollution of JDK code used by the application This is JDK13 port of JVMCI changes done in graal-jvmci-8 [1] up to date. Changes were collected in Metropolis repo [2] and tested there. Changes we reviewed by Oracle Labs (authors of JVMCI and Graal) and our compiler group. Changes in shared code are guarded by #if INCLUDE_JVMCI and JVMCI flags. I ran tier1-tier8 (which includes HotSpot and JDK tests) and it was clean. In this set Graal was tested only in tier3. And I ran all hs-tier3-graal .. hs-tier8-graal Graal tests available in our system. Several issue were found which were present before these changes. Thanks, Vladimir [1] https://github.com/graalvm/graal-jvmci-8/commit/49ff2045fb603e35516a3a427d8023c00e1607af [2] http://hg.openjdk.java.net/metropolis/dev/ From doug.simon at oracle.com Thu Mar 28 19:32:23 2019 From: doug.simon at oracle.com (Doug Simon) Date: Thu, 28 Mar 2019 20:32:23 +0100 Subject: [13] RFR(L) 8220623: [JVMCI] Update JVMCI to support JVMCI based Compiler compiled into shared library In-Reply-To: References: Message-ID: Not a Reviewer, but I like it! ;-) -Doug > On 28 Mar 2019, at 20:15, Vladimir Kozlov wrote: > > https://bugs.openjdk.java.net/browse/JDK-8220623 > http://cr.openjdk.java.net/~kvn/8220623/webrev.03/ > > Update JVMCI to support pre-compiled as shared library Graal. > Using aoted Graal can offers benefits including: > - fast startup > - compile time similar to native JIt compilers (C2) > - memory usage disjoint from the application Java heap > - no profile pollution of JDK code used by the application > > This is JDK13 port of JVMCI changes done in graal-jvmci-8 [1] up to date. > Changes were collected in Metropolis repo [2] and tested there. > > Changes we reviewed by Oracle Labs (authors of JVMCI and Graal) and our compiler group. 
> Changes in shared code are guarded by #if INCLUDE_JVMCI and JVMCI flags. > > I ran tier1-tier8 (which includes HotSpot and JDK tests) and it was clean. In this set Graal was tested only in tier3. > > And I ran all hs-tier3-graal .. hs-tier8-graal Graal tests available in our system. Several issue were found which were present before these changes. > > Thanks, > Vladimir > > [1] https://github.com/graalvm/graal-jvmci-8/commit/49ff2045fb603e35516a3a427d8023c00e1607af > [2] http://hg.openjdk.java.net/metropolis/dev/ From vladimir.kozlov at oracle.com Thu Mar 28 21:44:13 2019 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Thu, 28 Mar 2019 14:44:13 -0700 Subject: [13] RFR(L) 8220623: [JVMCI] Update JVMCI to support JVMCI based Compiler compiled into shared library In-Reply-To: References: Message-ID: Thank you, Stefan On 3/28/19 12:54 PM, Stefan Karlsson wrote: > Hi Vladimir, > > I started to check the GC code. > > ======================================================================== > I see that you've added guarded includes in the middle of the include list: > ? #include "gc/shared/strongRootsScope.hpp" > ? #include "gc/shared/weakProcessor.hpp" > + #if INCLUDE_JVMCI > + #include "jvmci/jvmci.hpp" > + #endif > ? #include "oops/instanceRefKlass.hpp" > ? #include "oops/oop.inline.hpp" > > The style we use is to put these conditional includes at the end of the include lists. okay > > ======================================================================== > Could you also change the following: > > + #if INCLUDE_JVMCI > +???? // Clean JVMCI metadata handles. > +???? JVMCI::do_unloading(is_alive_closure(), purged_class); > + #endif > > to: > +???? // Clean JVMCI metadata handles. > +???? JVMCI_ONLY(JVMCI::do_unloading(is_alive_closure(), purged_class);) > > to get rid of some of the line noise in the GC files. okay > > ======================================================================== > In the future we will need version of JVMCI::do_unloading that supports concurrent cleaning for ZGC. Yes, we need to support concurrent cleaning in a future. > > ======================================================================== > What's the performance impact for G1 remark pause with this serial walk over the MetadataHandleBlock? > > 3275 void G1CollectedHeap::complete_cleaning(BoolObjectClosure* is_alive, > 3276???????????????????????????????????????? bool class_unloading_occurred) { > 3277?? uint num_workers = workers()->active_workers(); > 3278?? ParallelCleaningTask unlink_task(is_alive, num_workers, class_unloading_occurred, false); > 3279?? workers()->run_task(&unlink_task); > 3280 #if INCLUDE_JVMCI > 3281?? // No parallel processing of JVMCI metadata handles for now. > 3282?? JVMCI::do_unloading(is_alive, class_unloading_occurred); > 3283 #endif > 3284 } There should not be impact if Graal is not used. Only cost of call (which most likely is inlined in product VM) and check: http://hg.openjdk.java.net/metropolis/dev/file/530fc1427d02/src/hotspot/share/jvmci/jvmciRuntime.cpp#l1237 If Graal is used it should not have big impact since these metadata has regular pattern (32 handles per array and array per MetadataHandleBlock block which are linked in list) and not large. If there will be noticeable impact - we will work on it as you suggested by using ParallelCleaningTask. > > ======================================================================== > Did you consider adding it as a task for one of the worker threads to execute in ParallelCleaningTask? 
> > See how other tasks are claimed by one worker: > void KlassCleaningTask::work() { > ? ResourceMark rm; > > ? // One worker will clean the subklass/sibling klass tree. > ? if (claim_clean_klass_tree_task()) { > ??? Klass::clean_subklass_tree(); > ? } These changes were ported from JDK8u based changes in graal-jvmci-8 and there are no ParallelCleaningTask in JDK8. Your suggestion is interesting and I agree that we should investigate it. > > ======================================================================== > In MetadataHandleBlock::do_unloading: > > +??????? if (klass->class_loader_data()->is_unloading()) { > +????????? // This needs to be marked so that it's no longer scanned > +????????? // but can't be put on the free list yet. The > +????????? // ReferenceCleaner will set this to NULL and > +????????? // put it on the free list. > > I couldn't find the ReferenceCleaner in the patch or in the source. Where can I find this code? I think it is typo (I will fix it) - it references new HandleCleaner class: http://cr.openjdk.java.net/~kvn/8220623/webrev.03/src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/HandleCleaner.java.html Thanks, Vladimir > > Thanks, > StefanK > > On 2019-03-28 20:15, Vladimir Kozlov wrote: >> https://bugs.openjdk.java.net/browse/JDK-8220623 >> http://cr.openjdk.java.net/~kvn/8220623/webrev.03/ >> >> Update JVMCI to support pre-compiled as shared library Graal. >> Using aoted Graal can offers benefits including: >> ?- fast startup >> ?- compile time similar to native JIt compilers (C2) >> ?- memory usage disjoint from the application Java heap >> ?- no profile pollution of JDK code used by the application >> >> This is JDK13 port of JVMCI changes done in graal-jvmci-8 [1] up to date. >> Changes were collected in Metropolis repo [2] and tested there. >> >> Changes we reviewed by Oracle Labs (authors of JVMCI and Graal) and our compiler group. >> Changes in shared code are guarded by #if INCLUDE_JVMCI and JVMCI flags. >> >> I ran tier1-tier8 (which includes HotSpot and JDK tests) and it was clean. In this set Graal was >> tested only in tier3. >> >> And I ran all hs-tier3-graal .. hs-tier8-graal Graal tests available in our system. Several issue >> were found which were present before these changes. >> >> Thanks, >> Vladimir >> >> [1] https://github.com/graalvm/graal-jvmci-8/commit/49ff2045fb603e35516a3a427d8023c00e1607af >> [2] http://hg.openjdk.java.net/metropolis/dev/ > From jean-philippe.halimi at intel.com Fri Mar 29 00:04:49 2019 From: jean-philippe.halimi at intel.com (Halimi, Jean-Philippe) Date: Fri, 29 Mar 2019 00:04:49 +0000 Subject: x86 FMA intrinsic support design Message-ID: Hello, I am currently looking into adding support for FMA intrinsics in Graal. I would like to share what I plan to do to make sure it is how it should be implemented. 1. Add VexRVMOp class support in AMD64Assembler with the corresponding FMA instructions a. It requires to add the VexOpAssertion.FMA and CPUFeature.FMA flags 2. Add UseFMA flag from HotSpot flags in GraalHotSpotVMConfig.java 3. Add a registerFMA method in AMD64GraphBuilderPlugins::registerMathPlugins a. This requires to add a specific FMAIntrinsicNode, which will emit the corresponding FMA instructions. Is there anything else that is needed in this case? 
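Independent of where the assembler and node pieces end up, the property the intrinsic has to preserve is the one Math.fma (JDK 9+) already specifies: a * b + c evaluated with a single rounding. That gives a cheap way to sanity-check any FMA lowering against the library call. A small self-contained check (hypothetical test code, not part of the proposed change):

    // fma(a, b, -(a*b)) recovers the rounding error of a*b, so a correct FMA
    // prints a tiny non-zero residual on the second line, while the
    // doubly-rounded expression on the first line is exactly 0.0. If the
    // intrinsic were mistakenly folded to mul+add, both lines would print 0.0.
    public class FmaSemanticsCheck {
        public static void main(String[] args) {
            double a = 0.1, b = 0.1;
            double product = a * b;                       // a*b rounded once to double
            System.out.println(a * b + (-product));       // 0.0
            System.out.println(Math.fma(a, b, -product)); // non-zero residual
        }
    }

With the plugin registered in AMD64GraphBuilderPlugins::registerMathPlugins as in point 3, Math.fma calls would presumably lower to a single vfmadd-style instruction instead of a separate multiply and add, and the check above must still print a non-zero second line.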
Thanks for your insights, Jp From jesper.wilhelmsson at oracle.com Fri Mar 29 00:26:27 2019 From: jesper.wilhelmsson at oracle.com (jesper.wilhelmsson at oracle.com) Date: Fri, 29 Mar 2019 01:26:27 +0100 Subject: RFR: JDK-8221341 - Update Graal In-Reply-To: <6192C685-DB3B-4399-8D28-B8E1F8D4AE72@oracle.com> References: <9B53B1EC-1420-4844-9276-EA25E8F13987@oracle.com> <9ce1ee74-d524-e29e-3175-48279af339a2@oracle.com> <6192C685-DB3B-4399-8D28-B8E1F8D4AE72@oracle.com> Message-ID: > On 28 Mar 2019, at 19:52, Doug Simon wrote: >> On 28 Mar 2019, at 19:33, jesper.wilhelmsson at oracle.com wrote: >> >> Hi Vladimir, >> >> Thanks for reviewing! >> >>> On 28 Mar 2019, at 18:48, Vladimir Kozlov > wrote: >>> >>> On 3/28/19 12:37 AM, jesper.wilhelmsson at oracle.com wrote: >>>> Hi, >>>> Please review the patch to integrate the latest Graal changes into OpenJDK. >>>> Graal tip to integrate: 7970bd76ff60600ab5a2fc96cd24ddd7ed017cf8 >>>> JBS duplicates fixed by this integration: >>>> https://bugs.openjdk.java.net/browse/JDK-8220643 >>>> https://bugs.openjdk.java.net/browse/JDK-8220810 >>>> JBS duplicates deferred to the next integration: >>>> https://bugs.openjdk.java.net/browse/JDK-8214947 >>> >>> We should investigate why this bug is still referenced in RFR. We already discussed it: >>> https://mail.openjdk.java.net/pipermail/hotspot-compiler-dev/2019-March/033130.html >> >> Oops, sorry, missed that. I have removed the link from the next update issue so it won't show up again. >> >>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8221341 >>>> Webrev: http://cr.openjdk.java.net/~jwilhelm/8221341/webrev.00/ >>> >>> We also discussed indentation change in make/test/JtregGraalUnit.gmk >>> Why the change showed up again? >> >> My understanding was that this was caused by a bug in the mx script. As long as that bug is there this will keep happening. I will try to remember to revert this going forward, but it is something that I will need to do manually every time. Someone should fix the mx script. I don't know who owns that though. > > Anyone who can code Python and submit a pull request ;-) I believe these are the relevant lines: > > https://github.com/oracle/graal/blob/master/compiler/mx.compiler/mx_updategraalinopenjdk.py#L297-L308 It seems to me that any logic to figure out the correct indentation would be fragile at best. The change that caused this breakage was cleaning up the indentation to make it the same as the rest of the file. I wouldn't expect this to change again in a way that wouldn't require the logic to change as well. I suggest to simply add the two missing spaces in line 304. /Jesper > -Doug > >> >>> >>> Otherwise changes looks good. >>> >>> Doug, it is good to have only one Graal class to access Unsafe class - GraalUnsafeAccess.java. >>> But we should not use sun.misc.Unsafe in JDK 13 - we should have version for JDK9+ which use jdk.internal.misc.Unsafe. It is for an other update. 
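On the GraalUnsafeAccess point quoted just above, the practical trade-off is that sun.misc.Unsafe is publicly exported from the jdk.unsupported module, while jdk.internal.misc.Unsafe lives in java.base and needs --add-exports java.base/jdk.internal.misc=... whenever Graal is compiled or run outside the JDK. A single access point over the exported class usually looks roughly like the sketch below (illustrative only, not the actual GraalUnsafeAccess.java source):

    import java.lang.reflect.Field;
    import sun.misc.Unsafe;

    // Generic sketch of one central Unsafe access point. sun.misc.Unsafe is
    // exported (and opened) by jdk.unsupported, so the reflective lookup of
    // its "theUnsafe" field works without extra module flags.
    public final class UnsafeAccessSketch {
        private static final Unsafe UNSAFE = initUnsafe();

        private UnsafeAccessSketch() {
        }

        public static Unsafe getUnsafe() {
            return UNSAFE;
        }

        private static Unsafe initUnsafe() {
            try {
                Field theUnsafe = Unsafe.class.getDeclaredField("theUnsafe");
                theUnsafe.setAccessible(true);
                return (Unsafe) theUnsafe.get(null);
            } catch (ReflectiveOperationException e) {
                throw new ExceptionInInitializerError(e);
            }
        }
    }

A JDK9+ variant would obtain jdk.internal.misc.Unsafe via its getUnsafe() method instead and push the --add-exports requirement onto everyone building or testing Graal outside the JDK image, which is the follow-up update mentioned above.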
>>> >>> Thanks, >>> Vladimir >>> >>>> Thanks, >>>> /Jesper From vladimir.kozlov at oracle.com Fri Mar 29 00:39:19 2019 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Thu, 28 Mar 2019 17:39:19 -0700 Subject: RFR: JDK-8221341 - Update Graal In-Reply-To: References: <9B53B1EC-1420-4844-9276-EA25E8F13987@oracle.com> <9ce1ee74-d524-e29e-3175-48279af339a2@oracle.com> <6192C685-DB3B-4399-8D28-B8E1F8D4AE72@oracle.com> Message-ID: <85e285cf-9eac-0ce8-adde-d21bc7ab1536@oracle.com> I filed GR-14808 to fix indentation generated for JtregGraalUnit.gmk Vladimir On 3/28/19 5:26 PM, jesper.wilhelmsson at oracle.com wrote: >> On 28 Mar 2019, at 19:52, Doug Simon > wrote: >>> On 28 Mar 2019, at 19:33,jesper.wilhelmsson at oracle.com wrote: >>> >>> Hi Vladimir, >>> >>> Thanks for reviewing! >>> >>>> On 28 Mar 2019, at 18:48, Vladimir Kozlov >>> > wrote: >>>> >>>> On 3/28/19 12:37 AM,jesper.wilhelmsson at oracle.com wrote: >>>>> Hi, >>>>> Please review the patch to integrate the latest Graal changes into OpenJDK. >>>>> Graal tip to integrate: 7970bd76ff60600ab5a2fc96cd24ddd7ed017cf8 >>>>> JBS duplicates fixed by this integration: >>>>> https://bugs.openjdk.java.net/browse/JDK-8220643 >>>>> https://bugs.openjdk.java.net/browse/JDK-8220810 >>>>> JBS duplicates deferred to the next integration: >>>>> https://bugs.openjdk.java.net/browse/JDK-8214947 >>>> >>>> We should investigate why this bug is still referenced in RFR. We already discussed it: >>>> https://mail.openjdk.java.net/pipermail/hotspot-compiler-dev/2019-March/033130.html >>> >>> Oops, sorry, missed that. I have removed the link from the next update issue so it won't show up >>> again. >>> >>>>> Bug:https://bugs.openjdk.java.net/browse/JDK-8221341 >>>>> Webrev:http://cr.openjdk.java.net/~jwilhelm/8221341/webrev.00/ >>>> >>>> We also discussed indentation change in make/test/JtregGraalUnit.gmk >>>> Why the change showed up again? >>> >>> My understanding was that this was caused by a bug in the mx script. As long as that bug is there >>> this will keep happening. I will try to remember to revert this going forward, but it is >>> something that I will need to do manually every time. Someone should fix the mx script. I don't >>> know who owns that though. >> >> Anyone who can code Python and submit a pull request ;-) I believe these are the relevant lines: >> >> https://github.com/oracle/graal/blob/master/compiler/mx.compiler/mx_updategraalinopenjdk.py#L297-L308 > > It seems to me that any logic to figure out the correct indentation would be fragile at best. The > change that caused this breakage was cleaning up the indentation to make it the same as the rest of > the file. I wouldn't expect this to change again in a way that wouldn't require the logic to change > as well. I suggest to simply add the two missing spaces in line 304. > > /Jesper > >> -Doug >> >>> >>>> >>>> Otherwise changes looks good. >>>> >>>> Doug, it is good to have only one Graal class to access Unsafe class - GraalUnsafeAccess.java. >>>> But we should not use sun.misc.Unsafe in JDK 13 - we should have version for JDK9+ which use >>>> jdk.internal.misc.Unsafe. It is for an other update. 
>>>> >>>> Thanks, >>>> Vladimir >>>> >>>>> Thanks, >>>>> /Jesper > From vladimir.kozlov at oracle.com Fri Mar 29 02:07:14 2019 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Thu, 28 Mar 2019 19:07:14 -0700 Subject: [13] RFR(L) 8220623: [JVMCI] Update JVMCI to support JVMCI based Compiler compiled into shared library In-Reply-To: References: Message-ID: <4b759550-f185-724f-7139-9bab648a1966@oracle.com> Hi Stefan, I collected some data on MetadataHandleBlock. First, do_unloading() code is executed only when class_unloading_occurred is 'true' - it is rare case. It should not affect normal G1 remark pause. Second, I run a test with -Xcomp. I got about 10,000 compilations by Graal and next data at the end of execution: max_blocks = 232 max_handles_per_block = 32 (since handles array has 32 elements) max_total_alive_values = 4631 Thanks, Vladimir On 3/28/19 2:44 PM, Vladimir Kozlov wrote: > Thank you, Stefan > > On 3/28/19 12:54 PM, Stefan Karlsson wrote: >> Hi Vladimir, >> >> I started to check the GC code. >> >> ======================================================================== >> I see that you've added guarded includes in the middle of the include list: >> ?? #include "gc/shared/strongRootsScope.hpp" >> ?? #include "gc/shared/weakProcessor.hpp" >> + #if INCLUDE_JVMCI >> + #include "jvmci/jvmci.hpp" >> + #endif >> ?? #include "oops/instanceRefKlass.hpp" >> ?? #include "oops/oop.inline.hpp" >> >> The style we use is to put these conditional includes at the end of the include lists. > > okay > >> >> ======================================================================== >> Could you also change the following: >> >> + #if INCLUDE_JVMCI >> +???? // Clean JVMCI metadata handles. >> +???? JVMCI::do_unloading(is_alive_closure(), purged_class); >> + #endif >> >> to: >> +???? // Clean JVMCI metadata handles. >> +???? JVMCI_ONLY(JVMCI::do_unloading(is_alive_closure(), purged_class);) >> >> to get rid of some of the line noise in the GC files. > > okay > >> >> ======================================================================== >> In the future we will need version of JVMCI::do_unloading that supports concurrent cleaning for ZGC. > > Yes, we need to support concurrent cleaning in a future. > >> >> ======================================================================== >> What's the performance impact for G1 remark pause with this serial walk over the MetadataHandleBlock? >> >> 3275 void G1CollectedHeap::complete_cleaning(BoolObjectClosure* is_alive, >> 3276???????????????????????????????????????? bool class_unloading_occurred) { >> 3277?? uint num_workers = workers()->active_workers(); >> 3278?? ParallelCleaningTask unlink_task(is_alive, num_workers, class_unloading_occurred, false); >> 3279?? workers()->run_task(&unlink_task); >> 3280 #if INCLUDE_JVMCI >> 3281?? // No parallel processing of JVMCI metadata handles for now. >> 3282?? JVMCI::do_unloading(is_alive, class_unloading_occurred); >> 3283 #endif >> 3284 } > > There should not be impact if Graal is not used. Only cost of call (which most likely is inlined in > product VM) and check: > > http://hg.openjdk.java.net/metropolis/dev/file/530fc1427d02/src/hotspot/share/jvmci/jvmciRuntime.cpp#l1237 > > > If Graal is used it should not have big impact since these metadata has regular pattern (32? handles > per array and array per MetadataHandleBlock block which are linked in list) and not large. > If there will be noticeable impact - we will work on it as you suggested by using ParallelCleaningTask. 
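The claim-one-task approach discussed here has a simple shape: every worker that runs ParallelCleaningTask also enters the JVMCI subtask, but only the first one to claim it performs the serial walk, and the rest fall through to the remaining parallel work. A minimal sketch of that idiom, written in Java purely as an illustration (the real HotSpot code is C++, and the class and method names below are hypothetical):

    import java.util.concurrent.atomic.AtomicBoolean;

    // Illustration of "one worker claims the serial subtask": all workers call
    // work(), but only the compareAndSet winner runs the serial handle walk.
    class JvmciUnloadingSubtask {
        private final AtomicBoolean claimed = new AtomicBoolean(false);

        void work() {
            if (claimed.compareAndSet(false, true)) {
                doUnloading(); // stands in for the serial JVMCI::do_unloading walk
            }
            // Losing workers continue with the remaining parallel cleaning subtasks.
        }

        private void doUnloading() {
            // Walk the handle blocks and clear entries whose class loader is unloading.
        }
    }

Given the numbers reported at the top of this message (at most 232 blocks of 32 handles each, under 5,000 live entries), the claimed walk is small, so letting a single worker do it inside the parallel task should cost little compared to the rest of the remark work.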
> >> >> ======================================================================== >> Did you consider adding it as a task for one of the worker threads to execute in >> ParallelCleaningTask? >> >> See how other tasks are claimed by one worker: >> void KlassCleaningTask::work() { >> ?? ResourceMark rm; >> >> ?? // One worker will clean the subklass/sibling klass tree. >> ?? if (claim_clean_klass_tree_task()) { >> ???? Klass::clean_subklass_tree(); >> ?? } > > These changes were ported from JDK8u based changes in graal-jvmci-8 and there are no > ParallelCleaningTask in JDK8. > > Your suggestion is interesting and I agree that we should investigate it. > >> >> ======================================================================== >> In MetadataHandleBlock::do_unloading: >> >> +??????? if (klass->class_loader_data()->is_unloading()) { >> +????????? // This needs to be marked so that it's no longer scanned >> +????????? // but can't be put on the free list yet. The >> +????????? // ReferenceCleaner will set this to NULL and >> +????????? // put it on the free list. >> >> I couldn't find the ReferenceCleaner in the patch or in the source. Where can I find this code? > > I think it is typo (I will fix it) - it references new HandleCleaner class: > > http://cr.openjdk.java.net/~kvn/8220623/webrev.03/src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/HandleCleaner.java.html > > > Thanks, > Vladimir > >> >> Thanks, >> StefanK >> >> On 2019-03-28 20:15, Vladimir Kozlov wrote: >>> https://bugs.openjdk.java.net/browse/JDK-8220623 >>> http://cr.openjdk.java.net/~kvn/8220623/webrev.03/ >>> >>> Update JVMCI to support pre-compiled as shared library Graal. >>> Using aoted Graal can offers benefits including: >>> ?- fast startup >>> ?- compile time similar to native JIt compilers (C2) >>> ?- memory usage disjoint from the application Java heap >>> ?- no profile pollution of JDK code used by the application >>> >>> This is JDK13 port of JVMCI changes done in graal-jvmci-8 [1] up to date. >>> Changes were collected in Metropolis repo [2] and tested there. >>> >>> Changes we reviewed by Oracle Labs (authors of JVMCI and Graal) and our compiler group. >>> Changes in shared code are guarded by #if INCLUDE_JVMCI and JVMCI flags. >>> >>> I ran tier1-tier8 (which includes HotSpot and JDK tests) and it was clean. In this set Graal was >>> tested only in tier3. >>> >>> And I ran all hs-tier3-graal .. hs-tier8-graal Graal tests available in our system. Several issue >>> were found which were present before these changes. >>> >>> Thanks, >>> Vladimir >>> >>> [1] https://github.com/graalvm/graal-jvmci-8/commit/49ff2045fb603e35516a3a427d8023c00e1607af >>> [2] http://hg.openjdk.java.net/metropolis/dev/ >> From stefan.karlsson at oracle.com Thu Mar 28 19:54:19 2019 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Thu, 28 Mar 2019 20:54:19 +0100 Subject: [13] RFR(L) 8220623: [JVMCI] Update JVMCI to support JVMCI based Compiler compiled into shared library In-Reply-To: References: Message-ID: Hi Vladimir, I started to check the GC code. ======================================================================== I see that you've added guarded includes in the middle of the include list: ? #include "gc/shared/strongRootsScope.hpp" ? #include "gc/shared/weakProcessor.hpp" + #if INCLUDE_JVMCI + #include "jvmci/jvmci.hpp" + #endif ? #include "oops/instanceRefKlass.hpp" ? #include "oops/oop.inline.hpp" The style we use is to put these conditional includes at the end of the include lists. 
======================================================================== Could you also change the following: + #if INCLUDE_JVMCI +???? // Clean JVMCI metadata handles. +???? JVMCI::do_unloading(is_alive_closure(), purged_class); + #endif to: +???? // Clean JVMCI metadata handles. +???? JVMCI_ONLY(JVMCI::do_unloading(is_alive_closure(), purged_class);) to get rid of some of the line noise in the GC files. ======================================================================== In the future we will need version of JVMCI::do_unloading that supports concurrent cleaning for ZGC. ======================================================================== What's the performance impact for G1 remark pause with this serial walk over the MetadataHandleBlock? 3275 void G1CollectedHeap::complete_cleaning(BoolObjectClosure* is_alive, 3276???????????????????????????????????????? bool class_unloading_occurred) { 3277?? uint num_workers = workers()->active_workers(); 3278?? ParallelCleaningTask unlink_task(is_alive, num_workers, class_unloading_occurred, false); 3279?? workers()->run_task(&unlink_task); 3280 #if INCLUDE_JVMCI 3281?? // No parallel processing of JVMCI metadata handles for now. 3282?? JVMCI::do_unloading(is_alive, class_unloading_occurred); 3283 #endif 3284 } ======================================================================== Did you consider adding it as a task for one of the worker threads to execute in ParallelCleaningTask? See how other tasks are claimed by one worker: void KlassCleaningTask::work() { ? ResourceMark rm; ? // One worker will clean the subklass/sibling klass tree. ? if (claim_clean_klass_tree_task()) { ??? Klass::clean_subklass_tree(); ? } ======================================================================== In MetadataHandleBlock::do_unloading: +??????? if (klass->class_loader_data()->is_unloading()) { +????????? // This needs to be marked so that it's no longer scanned +????????? // but can't be put on the free list yet. The +????????? // ReferenceCleaner will set this to NULL and +????????? // put it on the free list. I couldn't find the ReferenceCleaner in the patch or in the source. Where can I find this code? Thanks, StefanK On 2019-03-28 20:15, Vladimir Kozlov wrote: > https://bugs.openjdk.java.net/browse/JDK-8220623 > http://cr.openjdk.java.net/~kvn/8220623/webrev.03/ > > Update JVMCI to support pre-compiled as shared library Graal. > Using aoted Graal can offers benefits including: > ?- fast startup > ?- compile time similar to native JIt compilers (C2) > ?- memory usage disjoint from the application Java heap > ?- no profile pollution of JDK code used by the application > > This is JDK13 port of JVMCI changes done in graal-jvmci-8 [1] up to date. > Changes were collected in Metropolis repo [2] and tested there. > > Changes we reviewed by Oracle Labs (authors of JVMCI and Graal) and > our compiler group. > Changes in shared code are guarded by #if INCLUDE_JVMCI and JVMCI flags. > > I ran tier1-tier8 (which includes HotSpot and JDK tests) and it was > clean. In this set Graal was tested only in tier3. > > And I ran all hs-tier3-graal .. hs-tier8-graal Graal tests available > in our system. Several issue were found which were present before > these changes. 
> > Thanks, > Vladimir > > [1] > https://github.com/graalvm/graal-jvmci-8/commit/49ff2045fb603e35516a3a427d8023c00e1607af > [2] http://hg.openjdk.java.net/metropolis/dev/ From stefan.karlsson at oracle.com Fri Mar 29 07:36:40 2019 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Fri, 29 Mar 2019 08:36:40 +0100 Subject: [13] RFR(L) 8220623: [JVMCI] Update JVMCI to support JVMCI based Compiler compiled into shared library In-Reply-To: <4b759550-f185-724f-7139-9bab648a1966@oracle.com> References: <4b759550-f185-724f-7139-9bab648a1966@oracle.com> Message-ID: On 2019-03-29 03:07, Vladimir Kozlov wrote: > Hi Stefan, > > I collected some data on MetadataHandleBlock. > > First, do_unloading() code is executed only when > class_unloading_occurred is 'true' - it is rare case. It should not > affect normal G1 remark pause. It's only rare for applications that don't do dynamic class loading and unloading. The applications that do, will be affected. > > Second, I run a test with -Xcomp. I got about 10,000 compilations by > Graal and next data at the end of execution: > > max_blocks = 232 > max_handles_per_block = 32 (since handles array has 32 elements) > max_total_alive_values = 4631 OK. Thanks for the info. StefanK > > Thanks, > Vladimir > > On 3/28/19 2:44 PM, Vladimir Kozlov wrote: >> Thank you, Stefan >> >> On 3/28/19 12:54 PM, Stefan Karlsson wrote: >>> Hi Vladimir, >>> >>> I started to check the GC code. >>> >>> ======================================================================== >>> I see that you've added guarded includes in the middle of the include >>> list: >>> ?? #include "gc/shared/strongRootsScope.hpp" >>> ?? #include "gc/shared/weakProcessor.hpp" >>> + #if INCLUDE_JVMCI >>> + #include "jvmci/jvmci.hpp" >>> + #endif >>> ?? #include "oops/instanceRefKlass.hpp" >>> ?? #include "oops/oop.inline.hpp" >>> >>> The style we use is to put these conditional includes at the end of >>> the include lists. >> >> okay >> >>> >>> ======================================================================== >>> Could you also change the following: >>> >>> + #if INCLUDE_JVMCI >>> +???? // Clean JVMCI metadata handles. >>> +???? JVMCI::do_unloading(is_alive_closure(), purged_class); >>> + #endif >>> >>> to: >>> +???? // Clean JVMCI metadata handles. >>> +???? JVMCI_ONLY(JVMCI::do_unloading(is_alive_closure(), purged_class);) >>> >>> to get rid of some of the line noise in the GC files. >> >> okay >> >>> >>> ======================================================================== >>> In the future we will need version of JVMCI::do_unloading that >>> supports concurrent cleaning for ZGC. >> >> Yes, we need to support concurrent cleaning in a future. >> >>> >>> ======================================================================== >>> What's the performance impact for G1 remark pause with this serial >>> walk over the MetadataHandleBlock? >>> >>> 3275 void G1CollectedHeap::complete_cleaning(BoolObjectClosure* >>> is_alive, >>> 3276???????????????????????????????????????? bool >>> class_unloading_occurred) { >>> 3277?? uint num_workers = workers()->active_workers(); >>> 3278?? ParallelCleaningTask unlink_task(is_alive, num_workers, >>> class_unloading_occurred, false); >>> 3279?? workers()->run_task(&unlink_task); >>> 3280 #if INCLUDE_JVMCI >>> 3281?? // No parallel processing of JVMCI metadata handles for now. >>> 3282?? JVMCI::do_unloading(is_alive, class_unloading_occurred); >>> 3283 #endif >>> 3284 } >> >> There should not be impact if Graal is not used. 
Only cost of call >> (which most likely is inlined in product VM) and check: >> >> http://hg.openjdk.java.net/metropolis/dev/file/530fc1427d02/src/hotspot/share/jvmci/jvmciRuntime.cpp#l1237 >> >> >> If Graal is used it should not have big impact since these metadata >> has regular pattern (32? handles per array and array per >> MetadataHandleBlock block which are linked in list) and not large. >> If there will be noticeable impact - we will work on it as you >> suggested by using ParallelCleaningTask. >> >>> >>> ======================================================================== >>> Did you consider adding it as a task for one of the worker threads to >>> execute in ParallelCleaningTask? >>> >>> See how other tasks are claimed by one worker: >>> void KlassCleaningTask::work() { >>> ?? ResourceMark rm; >>> >>> ?? // One worker will clean the subklass/sibling klass tree. >>> ?? if (claim_clean_klass_tree_task()) { >>> ???? Klass::clean_subklass_tree(); >>> ?? } >> >> These changes were ported from JDK8u based changes in graal-jvmci-8 >> and there are no ParallelCleaningTask in JDK8. >> >> Your suggestion is interesting and I agree that we should investigate it. >> >>> >>> ======================================================================== >>> In MetadataHandleBlock::do_unloading: >>> >>> +??????? if (klass->class_loader_data()->is_unloading()) { >>> +????????? // This needs to be marked so that it's no longer scanned >>> +????????? // but can't be put on the free list yet. The >>> +????????? // ReferenceCleaner will set this to NULL and >>> +????????? // put it on the free list. >>> >>> I couldn't find the ReferenceCleaner in the patch or in the source. >>> Where can I find this code? >> >> I think it is typo (I will fix it) - it references new HandleCleaner >> class: >> >> http://cr.openjdk.java.net/~kvn/8220623/webrev.03/src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/HandleCleaner.java.html >> >> >> Thanks, >> Vladimir >> >>> >>> Thanks, >>> StefanK >>> >>> On 2019-03-28 20:15, Vladimir Kozlov wrote: >>>> https://bugs.openjdk.java.net/browse/JDK-8220623 >>>> http://cr.openjdk.java.net/~kvn/8220623/webrev.03/ >>>> >>>> Update JVMCI to support pre-compiled as shared library Graal. >>>> Using aoted Graal can offers benefits including: >>>> ?- fast startup >>>> ?- compile time similar to native JIt compilers (C2) >>>> ?- memory usage disjoint from the application Java heap >>>> ?- no profile pollution of JDK code used by the application >>>> >>>> This is JDK13 port of JVMCI changes done in graal-jvmci-8 [1] up to >>>> date. >>>> Changes were collected in Metropolis repo [2] and tested there. >>>> >>>> Changes we reviewed by Oracle Labs (authors of JVMCI and Graal) and >>>> our compiler group. >>>> Changes in shared code are guarded by #if INCLUDE_JVMCI and JVMCI >>>> flags. >>>> >>>> I ran tier1-tier8 (which includes HotSpot and JDK tests) and it was >>>> clean. In this set Graal was tested only in tier3. >>>> >>>> And I ran all hs-tier3-graal .. hs-tier8-graal Graal tests available >>>> in our system. Several issue were found which were present before >>>> these changes. 
>>>> >>>> Thanks, >>>> Vladimir >>>> >>>> [1] >>>> https://github.com/graalvm/graal-jvmci-8/commit/49ff2045fb603e35516a3a427d8023c00e1607af >>>> >>>> [2] http://hg.openjdk.java.net/metropolis/dev/ >>> From gilles.m.duboscq at oracle.com Fri Mar 29 09:33:55 2019 From: gilles.m.duboscq at oracle.com (Gilles Duboscq) Date: Fri, 29 Mar 2019 10:33:55 +0100 Subject: x86 FMA intrinsic support design In-Reply-To: References: Message-ID: <7c0a8607-4585-619d-2d4f-9cbb9d7caf62@oracle.com> Hi Jean-Philippe, That sounds like a good plan! In terms of naming, i would call such a node `FusedMultiplyAddNode`: spelling out what it does is much more important than the fact that it comes from an intrinsic. Thanks, Gilles On 29/03/2019 01:04, Halimi, Jean-Philippe wrote: > Hello, > > I am currently looking into adding support for FMA intrinsics in Graal. I would like to share what I plan to do to make sure it is how it should be implemented. > > > 1. Add VexRVMOp class support in AMD64Assembler with the corresponding FMA instructions > > a. It requires to add the VexOpAssertion.FMA and CPUFeature.FMA flags > > 2. Add UseFMA flag from HotSpot flags in GraalHotSpotVMConfig.java > > 3. Add a registerFMA method in AMD64GraphBuilderPlugins::registerMathPlugins > > a. This requires to add a specific FMAIntrinsicNode, which will emit the corresponding FMA instructions. > > Is there anything else that is needed in this case? > > Thanks for your insights, > Jp > From yudi.zheng at oracle.com Fri Mar 29 10:04:02 2019 From: yudi.zheng at oracle.com (Yudi Zheng) Date: Fri, 29 Mar 2019 11:04:02 +0100 Subject: x86 FMA intrinsic support design In-Reply-To: References: Message-ID: Hi Jp, Thanks in advance for the contribution! > 1. Add VexRVMOp class support in AMD64Assembler with the corresponding FMA instructions > > a. It requires to add the VexOpAssertion.FMA and CPUFeature.FMA flags We already have the CPUFeature.FMA flag. For adding the VexOpAssertion.FMA, you might refer to this commit [1] that adds BMI VexOpAssertion (with quite some refactoring which you can ignore). > 2. Add UseFMA flag from HotSpot flags in GraalHotSpotVMConfig.java > > 3. Add a registerFMA method in AMD64GraphBuilderPlugins::registerMathPlugins > > a. This requires to add a specific FMAIntrinsicNode, which will emit the corresponding FMA instructions. Sounds good to me. Maybe the name of the node could be unabbreviated as FusedMultiplyAddNode? Before implementing it, you might take a look into the Math exact plugins, the corresponding nodes (e.g., IntegerAddExactNode), and how we handleArithmeticException caused by such invocations. -Yudi [1]: https://github.com/oracle/graal/commit/97af3a3e43e4818b7a6bb9d1f905f7ada3ea4319 From gilles.m.duboscq at oracle.com Fri Mar 29 13:57:43 2019 From: gilles.m.duboscq at oracle.com (Gilles Duboscq) Date: Fri, 29 Mar 2019 14:57:43 +0100 Subject: JVMCI 0.57 released In-Reply-To: References: <78EFBB7C-582C-431C-9140-DF4817CFC177@oracle.com> <331FCEBC-2FC5-47D7-A2C1-DD33B8A9EFB5@oracle.com> <8c39bdf8-8bda-ce46-4633-5bbd8f70b731@oracle.com> <44aca3df-015d-1594-7dd5-392d6397ca62@oracle.com> Message-ID: <8d0d822d-1aac-46c1-0da3-896fdac4a415@oracle.com> OK got it. In the meanwhile, ianal but as far as i know JVMCI is GPLv2 with classpath exception so i guess you could have a copy of a built `jvmci-api.jar` from graal-jvmci-8 in your repo (~320kB). You can then point your build to that. 
(I understand this is sub-optimal) Gilles On 26/03/2019 18:01, David Lloyd wrote: > Basically I want to pilot a few possible optimizations outside of the > SubstrateVM tree, some of which might be very specific to a particular > library. Some of these might end up as upstream feature requests or > PRs to SubstrateVM, and some might just end up getting discarded. But > regardless, any manipulation of the compile tree is not possible > without the more specific `GraalFeature` API AFAICT. > > I'm not too worried about breaking changes because we're presently > mandating specific SubstrateVM version(s) to be used with the Quarkus > project. But, I don't really want to force people to use GraalVM or > the Labs SDK to*build* Quarkus as it might really discourage outside > development/contribution. From doug.simon at oracle.com Fri Mar 29 15:16:36 2019 From: doug.simon at oracle.com (Doug Simon) Date: Fri, 29 Mar 2019 16:16:36 +0100 Subject: [13] RFR(L) 8220623: [JVMCI] Update JVMCI to support JVMCI based Compiler compiled into shared library In-Reply-To: References: Message-ID: <82C212AE-CE8A-4B6A-A1A0-DD08B41A06A5@oracle.com> Hi Robbin, > From: Robbin Ehn > > Hi, > > 434 for (; JavaThread *thr = jtiwh.next(); ) { > 435 if (thr!=thr_cur && thr->thread_state() == _thread_in_native) { > 436 num_active++; > 437 if (thr->is_Compiler_thread()) { > 438 CompilerThread* ct = (CompilerThread*) thr; > 439 if (ct->compiler() == NULL || !ct->compiler()->is_jvmci()) { > 440 num_active_compiler_thread++; > 441 } else { > 442 // When using a Java based JVMCI compiler, it's possible > 443 // for one compiler thread to grab a Java lock, enter > 444 // HotSpot and go to sleep on the shutdown safepoint. > 445 // Another JVMCI compiler thread can then attempt grab > 446 // the lock and thus never make progress. > 447 } > 448 } > 449 } > 450 } > > We inc num_active on threads in native. > If such thread is a compiler thread we also inc num_active_compiler_thread. > JavaThread blocking on safepoint would be state blocked. > JavaThread waiting on the 'Java lock' would also be blocked. > > Why are you not blocked when waiting on that contended Java lock ? This change was made primarily in the context of libgraal. It can happen that a JVMCI compiler thread acquires a lock in libgraal, enters HotSpot and goes to sleep in the shutdown safepoint. Another JVMCI compiler thread then attempts to acquire the same lock and goes to sleep in libgraal which from HotSpot?s perspective is the _thread_in_native state. This is the original fix I had for this: CompilerThread* ct = (CompilerThread*) thr; if (ct->compiler() == NULL || !ct->compiler()->is_jvmci() JVMCI_ONLY(|| !UseJVMCINativeLibrary)) { num_active_compiler_thread++; } else { // When using a compiler in a JVMCI shared library, it's possible // for one compiler thread to grab a lock in the shared library, // enter HotSpot and go to sleep on the shutdown safepoint. Another // JVMCI shared library compiler thread can then attempt to grab the // lock and thus never make progress. } which is probably the right one. I hadn?t realized that a JavaGraal (as opposed to libgraal) JVMCI compiler thread blocked on a lock will be in the blocked state, not in the _thread_in_native state. 
-Doug From tom.rodriguez at oracle.com Fri Mar 29 16:44:50 2019 From: tom.rodriguez at oracle.com (Tom Rodriguez) Date: Fri, 29 Mar 2019 09:44:50 -0700 Subject: [13] RFR(L) 8220623: [JVMCI] Update JVMCI to support JVMCI based Compiler compiled into shared library In-Reply-To: <82C212AE-CE8A-4B6A-A1A0-DD08B41A06A5@oracle.com> References: <82C212AE-CE8A-4B6A-A1A0-DD08B41A06A5@oracle.com> Message-ID: <9f3b0352-3474-24f3-6c0f-b82e29754444@oracle.com> > This is the original fix I had for this: > > CompilerThread* ct = (CompilerThread*) thr; > if (ct->compiler() == NULL || !ct->compiler()->is_jvmci() JVMCI_ONLY(|| !UseJVMCINativeLibrary)) { > num_active_compiler_thread++; > } else { > // When using a compiler in a JVMCI shared library, it's possible > // for one compiler thread to grab a lock in the shared library, > // enter HotSpot and go to sleep on the shutdown safepoint. Another > // JVMCI shared library compiler thread can then attempt to grab the > // lock and thus never make progress. > } > > which is probably the right one. I hadn?t realized that a JavaGraal > (as opposed to libgraal) JVMCI compiler thread blocked on a lock will be in > the blocked state, not in the _thread_in_native state. I think it would be ok to go back to your original fix. tom From vladimir.kozlov at oracle.com Fri Mar 29 16:55:18 2019 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Fri, 29 Mar 2019 09:55:18 -0700 Subject: [13] RFR(L) 8220623: [JVMCI] Update JVMCI to support JVMCI based Compiler compiled into shared library In-Reply-To: References: <4b759550-f185-724f-7139-9bab648a1966@oracle.com> Message-ID: Stefan, Do you have a test (and flags) which can allow me to measure effect of this code on G1 remark pause? Thanks, Vladimir On 3/29/19 12:36 AM, Stefan Karlsson wrote: > On 2019-03-29 03:07, Vladimir Kozlov wrote: >> Hi Stefan, >> >> I collected some data on MetadataHandleBlock. >> >> First, do_unloading() code is executed only when class_unloading_occurred is 'true' - it is rare >> case. It should not affect normal G1 remark pause. > > It's only rare for applications that don't do dynamic class loading and unloading. The applications > that do, will be affected. > >> >> Second, I run a test with -Xcomp. I got about 10,000 compilations by Graal and next data at the >> end of execution: >> >> max_blocks = 232 >> max_handles_per_block = 32 (since handles array has 32 elements) >> max_total_alive_values = 4631 > > OK. Thanks for the info. > > StefanK > >> >> Thanks, >> Vladimir >> >> On 3/28/19 2:44 PM, Vladimir Kozlov wrote: >>> Thank you, Stefan >>> >>> On 3/28/19 12:54 PM, Stefan Karlsson wrote: >>>> Hi Vladimir, >>>> >>>> I started to check the GC code. >>>> >>>> ======================================================================== >>>> I see that you've added guarded includes in the middle of the include list: >>>> ?? #include "gc/shared/strongRootsScope.hpp" >>>> ?? #include "gc/shared/weakProcessor.hpp" >>>> + #if INCLUDE_JVMCI >>>> + #include "jvmci/jvmci.hpp" >>>> + #endif >>>> ?? #include "oops/instanceRefKlass.hpp" >>>> ?? #include "oops/oop.inline.hpp" >>>> >>>> The style we use is to put these conditional includes at the end of the include lists. >>> >>> okay >>> >>>> >>>> ======================================================================== >>>> Could you also change the following: >>>> >>>> + #if INCLUDE_JVMCI >>>> +???? // Clean JVMCI metadata handles. >>>> +???? JVMCI::do_unloading(is_alive_closure(), purged_class); >>>> + #endif >>>> >>>> to: >>>> +???? 
// Clean JVMCI metadata handles. >>>> +???? JVMCI_ONLY(JVMCI::do_unloading(is_alive_closure(), purged_class);) >>>> >>>> to get rid of some of the line noise in the GC files. >>> >>> okay >>> >>>> >>>> ======================================================================== >>>> In the future we will need version of JVMCI::do_unloading that supports concurrent cleaning for >>>> ZGC. >>> >>> Yes, we need to support concurrent cleaning in a future. >>> >>>> >>>> ======================================================================== >>>> What's the performance impact for G1 remark pause with this serial walk over the >>>> MetadataHandleBlock? >>>> >>>> 3275 void G1CollectedHeap::complete_cleaning(BoolObjectClosure* is_alive, >>>> 3276???????????????????????????????????????? bool class_unloading_occurred) { >>>> 3277?? uint num_workers = workers()->active_workers(); >>>> 3278?? ParallelCleaningTask unlink_task(is_alive, num_workers, class_unloading_occurred, false); >>>> 3279?? workers()->run_task(&unlink_task); >>>> 3280 #if INCLUDE_JVMCI >>>> 3281?? // No parallel processing of JVMCI metadata handles for now. >>>> 3282?? JVMCI::do_unloading(is_alive, class_unloading_occurred); >>>> 3283 #endif >>>> 3284 } >>> >>> There should not be impact if Graal is not used. Only cost of call (which most likely is inlined >>> in product VM) and check: >>> >>> http://hg.openjdk.java.net/metropolis/dev/file/530fc1427d02/src/hotspot/share/jvmci/jvmciRuntime.cpp#l1237 >>> >>> >>> If Graal is used it should not have big impact since these metadata has regular pattern (32 >>> handles per array and array per MetadataHandleBlock block which are linked in list) and not large. >>> If there will be noticeable impact - we will work on it as you suggested by using >>> ParallelCleaningTask. >>> >>>> >>>> ======================================================================== >>>> Did you consider adding it as a task for one of the worker threads to execute in >>>> ParallelCleaningTask? >>>> >>>> See how other tasks are claimed by one worker: >>>> void KlassCleaningTask::work() { >>>> ?? ResourceMark rm; >>>> >>>> ?? // One worker will clean the subklass/sibling klass tree. >>>> ?? if (claim_clean_klass_tree_task()) { >>>> ???? Klass::clean_subklass_tree(); >>>> ?? } >>> >>> These changes were ported from JDK8u based changes in graal-jvmci-8 and there are no >>> ParallelCleaningTask in JDK8. >>> >>> Your suggestion is interesting and I agree that we should investigate it. >>> >>>> >>>> ======================================================================== >>>> In MetadataHandleBlock::do_unloading: >>>> >>>> +??????? if (klass->class_loader_data()->is_unloading()) { >>>> +????????? // This needs to be marked so that it's no longer scanned >>>> +????????? // but can't be put on the free list yet. The >>>> +????????? // ReferenceCleaner will set this to NULL and >>>> +????????? // put it on the free list. >>>> >>>> I couldn't find the ReferenceCleaner in the patch or in the source. Where can I find this code? 
>>> >>> I think it is typo (I will fix it) - it references new HandleCleaner class: >>> >>> http://cr.openjdk.java.net/~kvn/8220623/webrev.03/src/jdk.internal.vm.ci/share/classes/jdk.vm.ci.hotspot/src/jdk/vm/ci/hotspot/HandleCleaner.java.html >>> >>> >>> Thanks, >>> Vladimir >>> >>>> >>>> Thanks, >>>> StefanK >>>> >>>> On 2019-03-28 20:15, Vladimir Kozlov wrote: >>>>> https://bugs.openjdk.java.net/browse/JDK-8220623 >>>>> http://cr.openjdk.java.net/~kvn/8220623/webrev.03/ >>>>> >>>>> Update JVMCI to support pre-compiled as shared library Graal. >>>>> Using aoted Graal can offers benefits including: >>>>> ?- fast startup >>>>> ?- compile time similar to native JIt compilers (C2) >>>>> ?- memory usage disjoint from the application Java heap >>>>> ?- no profile pollution of JDK code used by the application >>>>> >>>>> This is JDK13 port of JVMCI changes done in graal-jvmci-8 [1] up to date. >>>>> Changes were collected in Metropolis repo [2] and tested there. >>>>> >>>>> Changes we reviewed by Oracle Labs (authors of JVMCI and Graal) and our compiler group. >>>>> Changes in shared code are guarded by #if INCLUDE_JVMCI and JVMCI flags. >>>>> >>>>> I ran tier1-tier8 (which includes HotSpot and JDK tests) and it was clean. In this set Graal >>>>> was tested only in tier3. >>>>> >>>>> And I ran all hs-tier3-graal .. hs-tier8-graal Graal tests available in our system. Several >>>>> issue were found which were present before these changes. >>>>> >>>>> Thanks, >>>>> Vladimir >>>>> >>>>> [1] https://github.com/graalvm/graal-jvmci-8/commit/49ff2045fb603e35516a3a427d8023c00e1607af >>>>> [2] http://hg.openjdk.java.net/metropolis/dev/ >>>> From vladimir.kozlov at oracle.com Sat Mar 30 00:38:45 2019 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Fri, 29 Mar 2019 17:38:45 -0700 Subject: [13] RFR(L) 8220623: [JVMCI] Update JVMCI to support JVMCI based Compiler compiled into shared library In-Reply-To: <28412d92-f173-0372-a2d0-4227b6f43b46@oracle.com> References: <6f31b2e5-ab7a-cad3-d610-296e80174e01@oracle.com> <577392af-a512-bd13-aa60-46c9dccf9723@oracle.com> <28412d92-f173-0372-a2d0-4227b6f43b46@oracle.com> Message-ID: I did additional changes based on reviews and tested them with tier1-3 testing: http://cr.openjdk.java.net/~kvn/8220623/webrev_delta.04/ - For G1 moved JVMCI::do_unloading() call to ParallelCleaningTask to execute by one worker thread. - Added #if INCLUDE_JVMCI for code which is used only by JVMCI. - Used JVMCI_ONLY() macro for one line JVMCI code. - Fixed JVMCI code which count compiler threads in vmOperations.cpp as discussed. - Fixed typo in CompilerThreadStackSize setting in libgraal case. Thanks, Vladimir On 3/29/19 5:23 PM, Vladimir Kozlov wrote: > Thank you, Nils > > Yes, it looks like typo but it is the same in graal-jvmci-8 code. I asked Graal guys to make sure it > is typo which should be fixed (or not). > > Vladimir > > On 3/29/19 2:22 PM, Nils Eliasson wrote: >> I killed the formatting somehow. Second try: >> >> Hi Vladimir, >> >> I've started going through the review. This one caught my eye: >> >> compilerDefinitions.cpp: >> >> + if (UseJVMCINativeLibrary) { >> +?? // SVM compiled code requires more stack space >> +?? if (FLAG_IS_DEFAULT(CompilerThreadStackSize)) { >> +?? FLAG_SET_DEFAULT(CompilerThreadStackSize, 2*M); >> + } >> >> CompilerThreadStackSize is in Ks, so that default will turn into 2G stacks. 
I guess that isn't >> your intention :) >> >> Regards, >> >> Nils >> >> On 2019-03-29 22:09, Nils Eliasson wrote: >>> Hi Vladimir, I've started going through the review. This one caught my eye: *+ if >>> (UseJVMCINativeLibrary) {* >>> *+ // SVM compiled code requires more stack space* >>> *+ if (FLAG_IS_DEFAULT(CompilerThreadStackSize)) {* >>> *+ FLAG_SET_DEFAULT(CompilerThreadStackSize, 2*M);* >>> *+ } *CompilerThreadStackSize is in Ks, so that default will turn into 2G stacks. I guess that >>> isn't your intention :) Regards, Nils ** >>> >>> On 2019-03-28 20:15, Vladimir Kozlov wrote: >>>> https://bugs.openjdk.java.net/browse/JDK-8220623 >>>> http://cr.openjdk.java.net/~kvn/8220623/webrev.03/ >>>> >>>> Update JVMCI to support pre-compiled as shared library Graal. >>>> Using aoted Graal can offers benefits including: >>>> ?- fast startup >>>> ?- compile time similar to native JIt compilers (C2) >>>> ?- memory usage disjoint from the application Java heap >>>> ?- no profile pollution of JDK code used by the application >>>> >>>> This is JDK13 port of JVMCI changes done in graal-jvmci-8 [1] up to date. >>>> Changes were collected in Metropolis repo [2] and tested there. >>>> >>>> Changes we reviewed by Oracle Labs (authors of JVMCI and Graal) and our compiler group. >>>> Changes in shared code are guarded by #if INCLUDE_JVMCI and JVMCI flags. >>>> >>>> I ran tier1-tier8 (which includes HotSpot and JDK tests) and it was clean. In this set Graal was >>>> tested only in tier3. >>>> >>>> And I ran all hs-tier3-graal .. hs-tier8-graal Graal tests available in our system. Several >>>> issue were found which were present before these changes. >>>> >>>> Thanks, >>>> Vladimir >>>> >>>> [1] https://github.com/graalvm/graal-jvmci-8/commit/49ff2045fb603e35516a3a427d8023c00e1607af >>>> [2] http://hg.openjdk.java.net/metropolis/dev/
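One last note on the CompilerThreadStackSize typo Nils caught above: the flag is expressed in kilobytes, so the arithmetic is worth spelling out (a hypothetical back-of-envelope check, not code from the webrev):

    // CompilerThreadStackSize is interpreted in KB. With M = 1024 * 1024,
    // a default of 2*M requests 2*1024*1024 KB = 2 GiB per compiler thread
    // stack, while 2 MiB corresponds to a flag value of 2048 (2*K).
    public class CompilerThreadStackSizeCheck {
        public static void main(String[] args) {
            long K = 1024, M = K * K;
            long accidentalBytes = 2 * M * K; // 2147483648 bytes = 2 GiB
            long intendedBytes = 2048 * K;    // 2097152 bytes = 2 MiB
            System.out.println(accidentalBytes + " vs " + intendedBytes);
        }
    }

That stray factor of 1024 is the typo the delta webrev above says it fixes.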