From sergey.gabdurakhmanov at oracle.com Mon Sep 1 11:19:51 2014 From: sergey.gabdurakhmanov at oracle.com (Sergey Gabdurakhmanov) Date: Mon, 01 Sep 2014 15:19:51 +0400 Subject: Request Review: 6883953: java -client -XX:ValueMapInitialSize=0 crashes In-Reply-To: <540455F9.1010009@oracle.com> References: <540455F9.1010009@oracle.com> Message-ID: <54045657.4090001@oracle.com> Looks good. BR, Sergey On 01.09.2014 15:18, Vladimir Kempik wrote: > Hello! > > I'd like to get a review for this backport of 6883953 into jdk7. > > The patch does not apply cleanly, the required modifications are > quite small. > I will need a review for this. > > The webrev for jdk7: > http://cr.openjdk.java.net/~vkempik/6883953/webrev.00/ > > The difference, compared to jdk9 is the code around the added lines. > > Bug: https://bugs.openjdk.java.net/browse/JDK-6883953 > Jdk9 changeset: > http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/b97166f236bd > Jdk9 review: > http://mail.openjdk.java.net/pipermail/hotspot-runtime-dev/2014-May/011630.html > http://mail.openjdk.java.net/pipermail/hotspot-compiler-dev/2014-May/014527.html > > > JDK8 change is on the way as well. > > The change was tested with jprt on all supported platforms. > > Thanks, Vladimir. > From magnus.ihse.bursie at oracle.com Mon Sep 1 12:11:36 2014 From: magnus.ihse.bursie at oracle.com (Magnus Ihse Bursie) Date: Mon, 01 Sep 2014 14:11:36 +0200 Subject: RFR (preliminary): JDK-8056999 Make hotspot builds less verbose on default log level Message-ID: <54046278.7050404@oracle.com> Even at the default log level ("warn"), hotspot builds are extremely verbose. With the new jigsaw build system, hotspot is built in parallel with the jdk, and the sheer amount of hotspot output makes the jdk output practically disappear. This fix will make the following changes: * When hotspot is built from the top dir with the default log level, all repetitive and purely informative output is hidden (e.g. names of files compiled, and the "INFO:" blobs). 
* When hotspot is built from the top dir, with any other log level (info, debug, trace), all output will be there, as before. * When hotspot is built from the hotspot repo, all output will be there, as before. Note! This is a preliminary review -- I have made the necessary changes for Linux only. If this fix gets a thumbs up, I'll continue and apply the same pattern to the rest of the platforms. But I didn't want to do all that duplication until I felt certain that I wouldn't have to change something major. The changes themselves are mostly trivial, but they are all over the place :-(. Bug: https://bugs.openjdk.java.net/browse/JDK-8056999 WebRev: http://cr.openjdk.java.net/~ihse/JDK-8056999-less-verbose-hotspot-builds/webrev.01 /Magnus From tobeg3oogle at gmail.com Tue Sep 2 09:08:32 2014 From: tobeg3oogle at gmail.com (tobe) Date: Tue, 2 Sep 2014 17:08:32 +0800 Subject: Fwd: Disastrous bug when running jinfo and jmap In-Reply-To: References: Message-ID: When I run jinfo or jmap on any Java process, it will "suspend" the Java process. It's 100% reproducible for long-running processes. Here are the detailed steps: 1. Pick a Java process which has been running for over 25 days (it's weird because this doesn't happen with new processes). 2. Run ps to check the state of the process; it should be "Sl", which is expected. 3. Run jinfo or jmap on this process (BTW, jstack doesn't have this issue). 4. Run ps to check the state of the process. This time it changes to "Tl", which means STOPPED, and the process doesn't respond to any requests. Here's the output of our process: [work at hadoop ~]$ ps aux |grep "qktst" |grep "RegionServer" work 36663 0.1 1.7 24157828 1150820 ? 
Sl Aug06 72:54 /opt/soft/jdk/bin/java -cp /home/work/app/hbase/qktst-qk/regionserver/:/home/work/app/hbase/qktst-qk/regionserver/package//:/home/work/app/hbase/qktst-qk/regionserver/package//lib/*:/home/work/app/hbase/qktst-qk/regionserver/package//* -Djava.library.path=:/home/work/app/hbase/qktst-qk/regionserver/package/lib/native/:/home/work/app/hbase/qktst-qk/regionserver/package/lib/native/Linux-amd64-64 -Xbootclasspath/p:/home/work/app/hbase/qktst-qk/regionserver/package/lib/hadoop-security-2.0.0-mdh1.1.0.jar -Xmx10240m -Xms10240m -Xmn1024m -XX:MaxDirectMemorySize=1024m -XX:MaxPermSize=512m -Xloggc:/home/work/app/hbase/qktst-qk/regionserver/stdout/regionserver_gc_20140806-211157.log -Xss256k -XX:PermSize=64m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/home/work/app/hbase/qktst-qk/regionserver/log -XX:+PrintGCApplicationStoppedTime -XX:+UseConcMarkSweepGC -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:SurvivorRatio=6 -XX:+UseCMSCompactAtFullCollection -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+CMSParallelRemarkEnabled -XX:+UseNUMA -XX:+CMSClassUnloadingEnabled -XX:CMSMaxAbortablePrecleanTime=10000 -XX:TargetSurvivorRatio=80 -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=100 -XX:GCLogFileSize=128m -XX:CMSWaitDuration=2000 -XX:+CMSScavengeBeforeRemark -XX:+PrintPromotionFailure -XX:ConcGCThreads=16 -XX:ParallelGCThreads=16 -XX:PretenureSizeThreshold=2097088 -XX:+CMSConcurrentMTEnabled -XX:+ExplicitGCInvokesConcurrent -XX:+SafepointTimeout -XX:MonitorBound=16384 -XX:-UseBiasedLocking -XX:MaxTenuringThreshold=3 -Dproc_regionserver -Djava.security.auth.login.config=/home/work/app/hbase/qktst-qk/regionserver/jaas.conf -Djava.net.preferIPv4Stack=true -Dhbase.log.dir=/home/work/app/hbase/qktst-qk/regionserver/log -Dhbase.pid=36663 -Dhbase.cluster=qktst-qk -Dhbase.log.level=debug -Dhbase.policy.file=hbase-policy.xml -Dhbase.home.dir=/home/work/app/hbase/qktst-qk/regionserver/package 
-Djava.security.krb5.conf=/home/work/app/hbase/qktst-qk/regionserver/krb5.conf -Dhbase.id.str=work org.apache.hadoop.hbase.regionserver.HRegionServer start [work at hadoop ~]$ jinfo 36663 > tobe.jinfo Attaching to process ID 36663, please wait... Debugger attached successfully. Server compiler detected. JVM version is 20.12-b01 [work at hadoop ~]$ ps aux |grep "qktst" |grep "RegionServer" work 36663 0.1 1.7 24157828 1151008 ? Tl Aug06 72:54 /opt/soft/jdk/bin/java -cp /home/work/app/hbase/qktst-qk/regionserver/:/home/work/app/hbase/qktst-qk/regionserver/package//:/home/work/app/hbase/qktst-qk/regionserver/package//lib/*:/home/work/app/hbase/qktst-qk/regionserver/package//* -Djava.library.path=:/home/work/app/hbase/qktst-qk/regionserver/package/lib/native/:/home/work/app/hbase/qktst-qk/regionserver/package/lib/native/Linux-amd64-64 -Xbootclasspath/p:/home/work/app/hbase/qktst-qk/regionserver/package/lib/hadoop-security-2.0.0-mdh1.1.0.jar -Xmx10240m -Xms10240m -Xmn1024m -XX:MaxDirectMemorySize=1024m -XX:MaxPermSize=512m -Xloggc:/home/work/app/hbase/qktst-qk/regionserver/stdout/regionserver_gc_20140806-211157.log -Xss256k -XX:PermSize=64m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/home/work/app/hbase/qktst-qk/regionserver/log -XX:+PrintGCApplicationStoppedTime -XX:+UseConcMarkSweepGC -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:SurvivorRatio=6 -XX:+UseCMSCompactAtFullCollection -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+CMSParallelRemarkEnabled -XX:+UseNUMA -XX:+CMSClassUnloadingEnabled -XX:CMSMaxAbortablePrecleanTime=10000 -XX:TargetSurvivorRatio=80 -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=100 -XX:GCLogFileSize=128m -XX:CMSWaitDuration=2000 -XX:+CMSScavengeBeforeRemark -XX:+PrintPromotionFailure -XX:ConcGCThreads=16 -XX:ParallelGCThreads=16 -XX:PretenureSizeThreshold=2097088 -XX:+CMSConcurrentMTEnabled -XX:+ExplicitGCInvokesConcurrent -XX:+SafepointTimeout -XX:MonitorBound=16384 -XX:-UseBiasedLocking 
-XX:MaxTenuringThreshold=3 -Dproc_regionserver -Djava.security.auth.login.config=/home/work/app/hbase/qktst-qk/regionserver/jaas.conf -Djava.net.preferIPv4Stack=true -Dhbase.log.dir=/home/work/app/hbase/qktst-qk/regionserver/log -Dhbase.pid=36663 -Dhbase.cluster=qktst-qk -Dhbase.log.level=debug -Dhbase.policy.file=hbase-policy.xml -Dhbase.home.dir=/home/work/app/hbase/qktst-qk/regionserver/package -Djava.security.krb5.conf=/home/work/app/hbase/qktst-qk/regionserver/krb5.conf -Dhbase.id.str=work org.apache.hadoop.hbase.regionserver.HRegionServer start I hope some JVM experts here could help. $ java -version java version "1.6.0_37" Java(TM) SE Runtime Environment (build 1.6.0_37-b06) Java HotSpot(TM) 64-Bit Server VM (build 20.12-b01, mixed mode) From mikael.gerdin at oracle.com Tue Sep 2 09:15:07 2014 From: mikael.gerdin at oracle.com (Mikael Gerdin) Date: Tue, 02 Sep 2014 11:15:07 +0200 Subject: Fwd: Disastrous bug when running jinfo and jmap In-Reply-To: References: Message-ID: <54058A9B.8040307@oracle.com> Hi, This is the expected behavior for jmap and jinfo. If you call jstack with the "-F" flag you will see the same behavior. The reason for this is that jmap, jinfo and jstack -F all attach to your target JVM as a debugger and read the memory from the process. That needs to be done when the target process is in a frozen state. /Mikael On 2014-09-02 11:08, tobe wrote: > When I run jinfo or jmap to any Java process, it will "suspend" the Java > process. It's 100% reproduced for the long running processes. > > Here're the detailed steps: > > 1. Pick a Java process which is running over 25 days(It's wired because > this doesn't work for new processes). > 2. Run ps to check the state of the process, should be "Sl" which is > expected. > 3. Run jinfo or jmap to this process(BTY, jstack doesn't have this issue). > 4. Run ps to check the state of the process. This time it changes to "Tl" > which means STOPPED and the process doesn't response any requests. 
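As an aside for anyone who finds their target process stuck in the stopped ("T"/"Tl") state after the attaching tool has exited: sending SIGCONT resumes it. A minimal sketch, assuming a POSIX shell with `ps` and `kill` available (the helper name is made up for illustration):

```shell
# Resume a process left stopped ("T"/"Tl") after a debugger-style tool
# (jmap, jinfo, jstack -F) detached without continuing it.
resume_if_stopped() {
    pid="$1"
    # First character of the state column: T means stopped.
    state=$(ps -o state= -p "$pid" | tr -d '[:space:]')
    case "$state" in
        T*)
            echo "pid $pid is stopped; sending SIGCONT"
            kill -CONT "$pid"
            ;;
        *)
            echo "pid $pid is in state '$state'; nothing to do"
            ;;
    esac
}

# e.g. resume_if_stopped 36663   (the pid from the ps output above)
```

This only undoes the stop signal; it does not explain why the tool failed to detach cleanly in the first place.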
> > Here's the output of our process: > > [work at hadoop ~]$ ps aux |grep "qktst" |grep "RegionServer" > work 36663 0.1 1.7 24157828 1150820 ? Sl Aug06 72:54 > /opt/soft/jdk/bin/java -cp > /home/work/app/hbase/qktst-qk/regionserver/:/home/work/app/hbase/qktst-qk/regionserver/package//:/home/work/app/hbase/qktst-qk/regionserver/package//lib/*:/home/work/app/hbase/qktst-qk/regionserver/package//* > -Djava.library.path=:/home/work/app/hbase/qktst-qk/regionserver/package/lib/native/:/home/work/app/hbase/qktst-qk/regionserver/package/lib/native/Linux-amd64-64 > -Xbootclasspath/p:/home/work/app/hbase/qktst-qk/regionserver/package/lib/hadoop-security-2.0.0-mdh1.1.0.jar > -Xmx10240m -Xms10240m -Xmn1024m -XX:MaxDirectMemorySize=1024m > -XX:MaxPermSize=512m > -Xloggc:/home/work/app/hbase/qktst-qk/regionserver/stdout/regionserver_gc_20140806-211157.log > -Xss256k -XX:PermSize=64m -XX:+HeapDumpOnOutOfMemoryError > -XX:HeapDumpPath=/home/work/app/hbase/qktst-qk/regionserver/log > -XX:+PrintGCApplicationStoppedTime -XX:+UseConcMarkSweepGC -verbose:gc > -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:SurvivorRatio=6 > -XX:+UseCMSCompactAtFullCollection -XX:CMSInitiatingOccupancyFraction=75 > -XX:+UseCMSInitiatingOccupancyOnly -XX:+CMSParallelRemarkEnabled > -XX:+UseNUMA -XX:+CMSClassUnloadingEnabled > -XX:CMSMaxAbortablePrecleanTime=10000 -XX:TargetSurvivorRatio=80 > -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=100 -XX:GCLogFileSize=128m > -XX:CMSWaitDuration=2000 -XX:+CMSScavengeBeforeRemark > -XX:+PrintPromotionFailure -XX:ConcGCThreads=16 -XX:ParallelGCThreads=16 > -XX:PretenureSizeThreshold=2097088 -XX:+CMSConcurrentMTEnabled > -XX:+ExplicitGCInvokesConcurrent -XX:+SafepointTimeout > -XX:MonitorBound=16384 -XX:-UseBiasedLocking -XX:MaxTenuringThreshold=3 > -Dproc_regionserver > -Djava.security.auth.login.config=/home/work/app/hbase/qktst-qk/regionserver/jaas.conf > -Djava.net.preferIPv4Stack=true > -Dhbase.log.dir=/home/work/app/hbase/qktst-qk/regionserver/log > 
-Dhbase.pid=36663 -Dhbase.cluster=qktst-qk -Dhbase.log.level=debug > -Dhbase.policy.file=hbase-policy.xml > -Dhbase.home.dir=/home/work/app/hbase/qktst-qk/regionserver/package > -Djava.security.krb5.conf=/home/work/app/hbase/qktst-qk/regionserver/krb5.conf > -Dhbase.id.str=work org.apache.hadoop.hbase.regionserver.HRegionServer start > [work at hadoop ~]$ jinfo 36663 > tobe.jinfo > Attaching to process ID 36663, please wait... > Debugger attached successfully. > Server compiler detected. > JVM version is 20.12-b01 > [work at hadoop ~]$ ps aux |grep "qktst" |grep "RegionServer" > work 36663 0.1 1.7 24157828 1151008 ? Tl Aug06 72:54 > /opt/soft/jdk/bin/java -cp > /home/work/app/hbase/qktst-qk/regionserver/:/home/work/app/hbase/qktst-qk/regionserver/package//:/home/work/app/hbase/qktst-qk/regionserver/package//lib/*:/home/work/app/hbase/qktst-qk/regionserver/package//* > -Djava.library.path=:/home/work/app/hbase/qktst-qk/regionserver/package/lib/native/:/home/work/app/hbase/qktst-qk/regionserver/package/lib/native/Linux-amd64-64 > -Xbootclasspath/p:/home/work/app/hbase/qktst-qk/regionserver/package/lib/hadoop-security-2.0.0-mdh1.1.0.jar > -Xmx10240m -Xms10240m -Xmn1024m -XX:MaxDirectMemorySize=1024m > -XX:MaxPermSize=512m > -Xloggc:/home/work/app/hbase/qktst-qk/regionserver/stdout/regionserver_gc_20140806-211157.log > -Xss256k -XX:PermSize=64m -XX:+HeapDumpOnOutOfMemoryError > -XX:HeapDumpPath=/home/work/app/hbase/qktst-qk/regionserver/log > -XX:+PrintGCApplicationStoppedTime -XX:+UseConcMarkSweepGC -verbose:gc > -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:SurvivorRatio=6 > -XX:+UseCMSCompactAtFullCollection -XX:CMSInitiatingOccupancyFraction=75 > -XX:+UseCMSInitiatingOccupancyOnly -XX:+CMSParallelRemarkEnabled > -XX:+UseNUMA -XX:+CMSClassUnloadingEnabled > -XX:CMSMaxAbortablePrecleanTime=10000 -XX:TargetSurvivorRatio=80 > -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=100 -XX:GCLogFileSize=128m > -XX:CMSWaitDuration=2000 -XX:+CMSScavengeBeforeRemark > 
-XX:+PrintPromotionFailure -XX:ConcGCThreads=16 -XX:ParallelGCThreads=16 > -XX:PretenureSizeThreshold=2097088 -XX:+CMSConcurrentMTEnabled > -XX:+ExplicitGCInvokesConcurrent -XX:+SafepointTimeout > -XX:MonitorBound=16384 -XX:-UseBiasedLocking -XX:MaxTenuringThreshold=3 > -Dproc_regionserver > -Djava.security.auth.login.config=/home/work/app/hbase/qktst-qk/regionserver/jaas.conf > -Djava.net.preferIPv4Stack=true > -Dhbase.log.dir=/home/work/app/hbase/qktst-qk/regionserver/log > -Dhbase.pid=36663 -Dhbase.cluster=qktst-qk -Dhbase.log.level=debug > -Dhbase.policy.file=hbase-policy.xml > -Dhbase.home.dir=/home/work/app/hbase/qktst-qk/regionserver/package > -Djava.security.krb5.conf=/home/work/app/hbase/qktst-qk/regionserver/krb5.conf > -Dhbase.id.str=work org.apache.hadoop.hbase.regionserver.HRegionServer start > > > I hope some JVM experts here could help. > > $ java -version > java version "1.6.0_37" > Java(TM) SE Runtime Environment (build 1.6.0_37-b06) > Java HotSpot(TM) 64-Bit Server VM (build 20.12-b01, mixed mode) > From staffan.larsen at oracle.com Tue Sep 2 09:28:44 2014 From: staffan.larsen at oracle.com (Staffan Larsen) Date: Tue, 2 Sep 2014 11:28:44 +0200 Subject: Disastrous bug when running jinfo and jmap In-Reply-To: <54058A9B.8040307@oracle.com> References: <54058A9B.8040307@oracle.com> Message-ID: <7FDE560B-A7C3-4DFE-9DFC-FE701B8E05C7@oracle.com> On 2 sep 2014, at 11:15, Mikael Gerdin wrote: > Hi, > > This is the expected behavior for jmap and jinfo. If you call jstack with the "-F" flag you will see the same behavior. > > The reason for this is that jmap, jinfo and jstack -F all attach to your target JVM as a debugger and read the memory from the process. That needs to be done when the target process is in a frozen state. But when jinfo/jmap/jstack is done with the process it should continue execution. Is this reproducible with JDK 8? 
/Staffan > > /Mikael > > On 2014-09-02 11:08, tobe wrote: >> When I run jinfo or jmap to any Java process, it will "suspend" the Java >> process. It's 100% reproduced for the long running processes. >> >> Here're the detailed steps: >> >> 1. Pick a Java process which is running over 25 days(It's wired because >> this doesn't work for new processes). >> 2. Run ps to check the state of the process, should be "Sl" which is >> expected. >> 3. Run jinfo or jmap to this process(BTY, jstack doesn't have this issue). >> 4. Run ps to check the state of the process. This time it changes to "Tl" >> which means STOPPED and the process doesn't response any requests. >> >> Here's the output of our process: >> >> [work at hadoop ~]$ ps aux |grep "qktst" |grep "RegionServer" >> work 36663 0.1 1.7 24157828 1150820 ? Sl Aug06 72:54 >> /opt/soft/jdk/bin/java -cp >> /home/work/app/hbase/qktst-qk/regionserver/:/home/work/app/hbase/qktst-qk/regionserver/package//:/home/work/app/hbase/qktst-qk/regionserver/package//lib/*:/home/work/app/hbase/qktst-qk/regionserver/package//* >> -Djava.library.path=:/home/work/app/hbase/qktst-qk/regionserver/package/lib/native/:/home/work/app/hbase/qktst-qk/regionserver/package/lib/native/Linux-amd64-64 >> -Xbootclasspath/p:/home/work/app/hbase/qktst-qk/regionserver/package/lib/hadoop-security-2.0.0-mdh1.1.0.jar >> -Xmx10240m -Xms10240m -Xmn1024m -XX:MaxDirectMemorySize=1024m >> -XX:MaxPermSize=512m >> -Xloggc:/home/work/app/hbase/qktst-qk/regionserver/stdout/regionserver_gc_20140806-211157.log >> -Xss256k -XX:PermSize=64m -XX:+HeapDumpOnOutOfMemoryError >> -XX:HeapDumpPath=/home/work/app/hbase/qktst-qk/regionserver/log >> -XX:+PrintGCApplicationStoppedTime -XX:+UseConcMarkSweepGC -verbose:gc >> -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:SurvivorRatio=6 >> -XX:+UseCMSCompactAtFullCollection -XX:CMSInitiatingOccupancyFraction=75 >> -XX:+UseCMSInitiatingOccupancyOnly -XX:+CMSParallelRemarkEnabled >> -XX:+UseNUMA -XX:+CMSClassUnloadingEnabled >> 
-XX:CMSMaxAbortablePrecleanTime=10000 -XX:TargetSurvivorRatio=80 >> -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=100 -XX:GCLogFileSize=128m >> -XX:CMSWaitDuration=2000 -XX:+CMSScavengeBeforeRemark >> -XX:+PrintPromotionFailure -XX:ConcGCThreads=16 -XX:ParallelGCThreads=16 >> -XX:PretenureSizeThreshold=2097088 -XX:+CMSConcurrentMTEnabled >> -XX:+ExplicitGCInvokesConcurrent -XX:+SafepointTimeout >> -XX:MonitorBound=16384 -XX:-UseBiasedLocking -XX:MaxTenuringThreshold=3 >> -Dproc_regionserver >> -Djava.security.auth.login.config=/home/work/app/hbase/qktst-qk/regionserver/jaas.conf >> -Djava.net.preferIPv4Stack=true >> -Dhbase.log.dir=/home/work/app/hbase/qktst-qk/regionserver/log >> -Dhbase.pid=36663 -Dhbase.cluster=qktst-qk -Dhbase.log.level=debug >> -Dhbase.policy.file=hbase-policy.xml >> -Dhbase.home.dir=/home/work/app/hbase/qktst-qk/regionserver/package >> -Djava.security.krb5.conf=/home/work/app/hbase/qktst-qk/regionserver/krb5.conf >> -Dhbase.id.str=work org.apache.hadoop.hbase.regionserver.HRegionServer start >> [work at hadoop ~]$ jinfo 36663 > tobe.jinfo >> Attaching to process ID 36663, please wait... >> Debugger attached successfully. >> Server compiler detected. >> JVM version is 20.12-b01 >> [work at hadoop ~]$ ps aux |grep "qktst" |grep "RegionServer" >> work 36663 0.1 1.7 24157828 1151008 ? 
Tl Aug06 72:54 >> /opt/soft/jdk/bin/java -cp >> /home/work/app/hbase/qktst-qk/regionserver/:/home/work/app/hbase/qktst-qk/regionserver/package//:/home/work/app/hbase/qktst-qk/regionserver/package//lib/*:/home/work/app/hbase/qktst-qk/regionserver/package//* >> -Djava.library.path=:/home/work/app/hbase/qktst-qk/regionserver/package/lib/native/:/home/work/app/hbase/qktst-qk/regionserver/package/lib/native/Linux-amd64-64 >> -Xbootclasspath/p:/home/work/app/hbase/qktst-qk/regionserver/package/lib/hadoop-security-2.0.0-mdh1.1.0.jar >> -Xmx10240m -Xms10240m -Xmn1024m -XX:MaxDirectMemorySize=1024m >> -XX:MaxPermSize=512m >> -Xloggc:/home/work/app/hbase/qktst-qk/regionserver/stdout/regionserver_gc_20140806-211157.log >> -Xss256k -XX:PermSize=64m -XX:+HeapDumpOnOutOfMemoryError >> -XX:HeapDumpPath=/home/work/app/hbase/qktst-qk/regionserver/log >> -XX:+PrintGCApplicationStoppedTime -XX:+UseConcMarkSweepGC -verbose:gc >> -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:SurvivorRatio=6 >> -XX:+UseCMSCompactAtFullCollection -XX:CMSInitiatingOccupancyFraction=75 >> -XX:+UseCMSInitiatingOccupancyOnly -XX:+CMSParallelRemarkEnabled >> -XX:+UseNUMA -XX:+CMSClassUnloadingEnabled >> -XX:CMSMaxAbortablePrecleanTime=10000 -XX:TargetSurvivorRatio=80 >> -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=100 -XX:GCLogFileSize=128m >> -XX:CMSWaitDuration=2000 -XX:+CMSScavengeBeforeRemark >> -XX:+PrintPromotionFailure -XX:ConcGCThreads=16 -XX:ParallelGCThreads=16 >> -XX:PretenureSizeThreshold=2097088 -XX:+CMSConcurrentMTEnabled >> -XX:+ExplicitGCInvokesConcurrent -XX:+SafepointTimeout >> -XX:MonitorBound=16384 -XX:-UseBiasedLocking -XX:MaxTenuringThreshold=3 >> -Dproc_regionserver >> -Djava.security.auth.login.config=/home/work/app/hbase/qktst-qk/regionserver/jaas.conf >> -Djava.net.preferIPv4Stack=true >> -Dhbase.log.dir=/home/work/app/hbase/qktst-qk/regionserver/log >> -Dhbase.pid=36663 -Dhbase.cluster=qktst-qk -Dhbase.log.level=debug >> -Dhbase.policy.file=hbase-policy.xml >> 
-Dhbase.home.dir=/home/work/app/hbase/qktst-qk/regionserver/package >> -Djava.security.krb5.conf=/home/work/app/hbase/qktst-qk/regionserver/krb5.conf >> -Dhbase.id.str=work org.apache.hadoop.hbase.regionserver.HRegionServer start >> >> >> I hope some JVM experts here could help. >> >> $ java -version >> java version "1.6.0_37" >> Java(TM) SE Runtime Environment (build 1.6.0_37-b06) >> Java HotSpot(TM) 64-Bit Server VM (build 20.12-b01, mixed mode) >> From tobeg3oogle at gmail.com Tue Sep 2 09:38:24 2014 From: tobeg3oogle at gmail.com (tobe) Date: Tue, 2 Sep 2014 17:38:24 +0800 Subject: Disastrous bug when running jinfo and jmap In-Reply-To: <7FDE560B-A7C3-4DFE-9DFC-FE701B8E05C7@oracle.com> References: <54058A9B.8040307@oracle.com> <7FDE560B-A7C3-4DFE-9DFC-FE701B8E05C7@oracle.com> Message-ID: Thanks, @mikael, for replying. But I can see the complete message "Server compiler detected" and expect the JVM to continue. It's weird that this doesn't happen when running jinfo on new processes. On Tue, Sep 2, 2014 at 5:28 PM, Staffan Larsen wrote: > > On 2 sep 2014, at 11:15, Mikael Gerdin wrote: > > > Hi, > > > > This is the expected behavior for jmap and jinfo. If you call jstack > with the "-F" flag you will see the same behavior. > > > > The reason for this is that jmap, jinfo and jstack -F all attach to your > target JVM as a debugger and read the memory from the process. That needs > to be done when the target process is in a frozen state. > > But when jinfo/jmap/jstack is done with the process it should continue > execution. > > Is this reproducible with JDK 8? > > /Staffan > > > > > > /Mikael > > > > On 2014-09-02 11:08, tobe wrote: > >> When I run jinfo or jmap to any Java process, it will "suspend" the Java > >> process. It's 100% reproduced for the long running processes. > >> > >> Here're the detailed steps: > >> > >> 1. Pick a Java process which is running over 25 days(It's wired because > >> this doesn't work for new processes). > >> 2. 
Run ps to check the state of the process, should be "Sl" which is > >> expected. > >> 3. Run jinfo or jmap to this process(BTY, jstack doesn't have this > issue). > >> 4. Run ps to check the state of the process. This time it changes to > "Tl" > >> which means STOPPED and the process doesn't response any requests. > >> > >> Here's the output of our process: > >> > >> [work at hadoop ~]$ ps aux |grep "qktst" |grep "RegionServer" > >> work 36663 0.1 1.7 24157828 1150820 ? Sl Aug06 72:54 > >> /opt/soft/jdk/bin/java -cp > >> > /home/work/app/hbase/qktst-qk/regionserver/:/home/work/app/hbase/qktst-qk/regionserver/package//:/home/work/app/hbase/qktst-qk/regionserver/package//lib/*:/home/work/app/hbase/qktst-qk/regionserver/package//* > >> > -Djava.library.path=:/home/work/app/hbase/qktst-qk/regionserver/package/lib/native/:/home/work/app/hbase/qktst-qk/regionserver/package/lib/native/Linux-amd64-64 > >> > -Xbootclasspath/p:/home/work/app/hbase/qktst-qk/regionserver/package/lib/hadoop-security-2.0.0-mdh1.1.0.jar > >> -Xmx10240m -Xms10240m -Xmn1024m -XX:MaxDirectMemorySize=1024m > >> -XX:MaxPermSize=512m > >> > -Xloggc:/home/work/app/hbase/qktst-qk/regionserver/stdout/regionserver_gc_20140806-211157.log > >> -Xss256k -XX:PermSize=64m -XX:+HeapDumpOnOutOfMemoryError > >> -XX:HeapDumpPath=/home/work/app/hbase/qktst-qk/regionserver/log > >> -XX:+PrintGCApplicationStoppedTime -XX:+UseConcMarkSweepGC -verbose:gc > >> -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:SurvivorRatio=6 > >> -XX:+UseCMSCompactAtFullCollection -XX:CMSInitiatingOccupancyFraction=75 > >> -XX:+UseCMSInitiatingOccupancyOnly -XX:+CMSParallelRemarkEnabled > >> -XX:+UseNUMA -XX:+CMSClassUnloadingEnabled > >> -XX:CMSMaxAbortablePrecleanTime=10000 -XX:TargetSurvivorRatio=80 > >> -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=100 > -XX:GCLogFileSize=128m > >> -XX:CMSWaitDuration=2000 -XX:+CMSScavengeBeforeRemark > >> -XX:+PrintPromotionFailure -XX:ConcGCThreads=16 -XX:ParallelGCThreads=16 > >> 
-XX:PretenureSizeThreshold=2097088 -XX:+CMSConcurrentMTEnabled > >> -XX:+ExplicitGCInvokesConcurrent -XX:+SafepointTimeout > >> -XX:MonitorBound=16384 -XX:-UseBiasedLocking -XX:MaxTenuringThreshold=3 > >> -Dproc_regionserver > >> > -Djava.security.auth.login.config=/home/work/app/hbase/qktst-qk/regionserver/jaas.conf > >> -Djava.net.preferIPv4Stack=true > >> -Dhbase.log.dir=/home/work/app/hbase/qktst-qk/regionserver/log > >> -Dhbase.pid=36663 -Dhbase.cluster=qktst-qk -Dhbase.log.level=debug > >> -Dhbase.policy.file=hbase-policy.xml > >> -Dhbase.home.dir=/home/work/app/hbase/qktst-qk/regionserver/package > >> > -Djava.security.krb5.conf=/home/work/app/hbase/qktst-qk/regionserver/krb5.conf > >> -Dhbase.id.str=work org.apache.hadoop.hbase.regionserver.HRegionServer > start > >> [work at hadoop ~]$ jinfo 36663 > tobe.jinfo > >> Attaching to process ID 36663, please wait... > >> Debugger attached successfully. > >> Server compiler detected. > >> JVM version is 20.12-b01 > >> [work at hadoop ~]$ ps aux |grep "qktst" |grep "RegionServer" > >> work 36663 0.1 1.7 24157828 1151008 ? 
Tl Aug06 72:54 > >> /opt/soft/jdk/bin/java -cp > >> > /home/work/app/hbase/qktst-qk/regionserver/:/home/work/app/hbase/qktst-qk/regionserver/package//:/home/work/app/hbase/qktst-qk/regionserver/package//lib/*:/home/work/app/hbase/qktst-qk/regionserver/package//* > >> > -Djava.library.path=:/home/work/app/hbase/qktst-qk/regionserver/package/lib/native/:/home/work/app/hbase/qktst-qk/regionserver/package/lib/native/Linux-amd64-64 > >> > -Xbootclasspath/p:/home/work/app/hbase/qktst-qk/regionserver/package/lib/hadoop-security-2.0.0-mdh1.1.0.jar > >> -Xmx10240m -Xms10240m -Xmn1024m -XX:MaxDirectMemorySize=1024m > >> -XX:MaxPermSize=512m > >> > -Xloggc:/home/work/app/hbase/qktst-qk/regionserver/stdout/regionserver_gc_20140806-211157.log > >> -Xss256k -XX:PermSize=64m -XX:+HeapDumpOnOutOfMemoryError > >> -XX:HeapDumpPath=/home/work/app/hbase/qktst-qk/regionserver/log > >> -XX:+PrintGCApplicationStoppedTime -XX:+UseConcMarkSweepGC -verbose:gc > >> -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:SurvivorRatio=6 > >> -XX:+UseCMSCompactAtFullCollection -XX:CMSInitiatingOccupancyFraction=75 > >> -XX:+UseCMSInitiatingOccupancyOnly -XX:+CMSParallelRemarkEnabled > >> -XX:+UseNUMA -XX:+CMSClassUnloadingEnabled > >> -XX:CMSMaxAbortablePrecleanTime=10000 -XX:TargetSurvivorRatio=80 > >> -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=100 > -XX:GCLogFileSize=128m > >> -XX:CMSWaitDuration=2000 -XX:+CMSScavengeBeforeRemark > >> -XX:+PrintPromotionFailure -XX:ConcGCThreads=16 -XX:ParallelGCThreads=16 > >> -XX:PretenureSizeThreshold=2097088 -XX:+CMSConcurrentMTEnabled > >> -XX:+ExplicitGCInvokesConcurrent -XX:+SafepointTimeout > >> -XX:MonitorBound=16384 -XX:-UseBiasedLocking -XX:MaxTenuringThreshold=3 > >> -Dproc_regionserver > >> > -Djava.security.auth.login.config=/home/work/app/hbase/qktst-qk/regionserver/jaas.conf > >> -Djava.net.preferIPv4Stack=true > >> -Dhbase.log.dir=/home/work/app/hbase/qktst-qk/regionserver/log > >> -Dhbase.pid=36663 -Dhbase.cluster=qktst-qk 
-Dhbase.log.level=debug > >> -Dhbase.policy.file=hbase-policy.xml > >> -Dhbase.home.dir=/home/work/app/hbase/qktst-qk/regionserver/package > >> > -Djava.security.krb5.conf=/home/work/app/hbase/qktst-qk/regionserver/krb5.conf > >> -Dhbase.id.str=work org.apache.hadoop.hbase.regionserver.HRegionServer > start > >> > >> > >> I hope some JVM experts here could help. > >> > >> $ java -version > >> java version "1.6.0_37" > >> Java(TM) SE Runtime Environment (build 1.6.0_37-b06) > >> Java HotSpot(TM) 64-Bit Server VM (build 20.12-b01, mixed mode) > >> > > From tobeg3oogle at gmail.com Tue Sep 2 10:05:44 2014 From: tobeg3oogle at gmail.com (tobe) Date: Tue, 2 Sep 2014 18:05:44 +0800 Subject: Disastrous bug when running jinfo and jmap In-Reply-To: References: <54058A9B.8040307@oracle.com> <7FDE560B-A7C3-4DFE-9DFC-FE701B8E05C7@oracle.com> Message-ID: Hi @martijn. Do you mean you can run jmap and jinfo on a Java process which has run for over 25 days? Have you checked the status of that process? Our 1.6 JVMs were suspended but not exited. If it's an issue in 1.6, can anyone help to track down the issue and patch it? On Tue, Sep 2, 2014 at 5:38 PM, tobe wrote: > Thank @mikael for replying. But I can see the complete message "Server > compiler detected" and expect the JVM to continue. It's wired that this > doesn't happen when jinfo the new processes. > > > > On Tue, Sep 2, 2014 at 5:28 PM, Staffan Larsen > wrote: > >> >> On 2 sep 2014, at 11:15, Mikael Gerdin wrote: >> >> > Hi, >> > >> > This is the expected behavior for jmap and jinfo. If you call jstack >> with the "-F" flag you will see the same behavior. >> > >> > The reason for this is that jmap, jinfo and jstack -F all attach to >> your target JVM as a debugger and read the memory from the process. That >> needs to be done when the target process is in a frozen state. >> >> But when jinfo/jmap/jstack is done with the process it should continue >> execution. >> >> Is this reproducible with JDK 8? 
>> >> /Staffan >> >> >> > >> > /Mikael >> > >> > On 2014-09-02 11:08, tobe wrote: >> >> When I run jinfo or jmap to any Java process, it will "suspend" the >> Java >> >> process. It's 100% reproduced for the long running processes. >> >> >> >> Here're the detailed steps: >> >> >> >> 1. Pick a Java process which is running over 25 days(It's wired because >> >> this doesn't work for new processes). >> >> 2. Run ps to check the state of the process, should be "Sl" which is >> >> expected. >> >> 3. Run jinfo or jmap to this process(BTY, jstack doesn't have this >> issue). >> >> 4. Run ps to check the state of the process. This time it changes to >> "Tl" >> >> which means STOPPED and the process doesn't response any requests. >> >> >> >> Here's the output of our process: >> >> >> >> [work at hadoop ~]$ ps aux |grep "qktst" |grep "RegionServer" >> >> work 36663 0.1 1.7 24157828 1150820 ? Sl Aug06 72:54 >> >> /opt/soft/jdk/bin/java -cp >> >> >> /home/work/app/hbase/qktst-qk/regionserver/:/home/work/app/hbase/qktst-qk/regionserver/package//:/home/work/app/hbase/qktst-qk/regionserver/package//lib/*:/home/work/app/hbase/qktst-qk/regionserver/package//* >> >> >> -Djava.library.path=:/home/work/app/hbase/qktst-qk/regionserver/package/lib/native/:/home/work/app/hbase/qktst-qk/regionserver/package/lib/native/Linux-amd64-64 >> >> >> -Xbootclasspath/p:/home/work/app/hbase/qktst-qk/regionserver/package/lib/hadoop-security-2.0.0-mdh1.1.0.jar >> >> -Xmx10240m -Xms10240m -Xmn1024m -XX:MaxDirectMemorySize=1024m >> >> -XX:MaxPermSize=512m >> >> >> -Xloggc:/home/work/app/hbase/qktst-qk/regionserver/stdout/regionserver_gc_20140806-211157.log >> >> -Xss256k -XX:PermSize=64m -XX:+HeapDumpOnOutOfMemoryError >> >> -XX:HeapDumpPath=/home/work/app/hbase/qktst-qk/regionserver/log >> >> -XX:+PrintGCApplicationStoppedTime -XX:+UseConcMarkSweepGC -verbose:gc >> >> -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:SurvivorRatio=6 >> >> -XX:+UseCMSCompactAtFullCollection >> 
-XX:CMSInitiatingOccupancyFraction=75 >> >> -XX:+UseCMSInitiatingOccupancyOnly -XX:+CMSParallelRemarkEnabled >> >> -XX:+UseNUMA -XX:+CMSClassUnloadingEnabled >> >> -XX:CMSMaxAbortablePrecleanTime=10000 -XX:TargetSurvivorRatio=80 >> >> -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=100 >> -XX:GCLogFileSize=128m >> >> -XX:CMSWaitDuration=2000 -XX:+CMSScavengeBeforeRemark >> >> -XX:+PrintPromotionFailure -XX:ConcGCThreads=16 >> -XX:ParallelGCThreads=16 >> >> -XX:PretenureSizeThreshold=2097088 -XX:+CMSConcurrentMTEnabled >> >> -XX:+ExplicitGCInvokesConcurrent -XX:+SafepointTimeout >> >> -XX:MonitorBound=16384 -XX:-UseBiasedLocking -XX:MaxTenuringThreshold=3 >> >> -Dproc_regionserver >> >> >> -Djava.security.auth.login.config=/home/work/app/hbase/qktst-qk/regionserver/jaas.conf >> >> -Djava.net.preferIPv4Stack=true >> >> -Dhbase.log.dir=/home/work/app/hbase/qktst-qk/regionserver/log >> >> -Dhbase.pid=36663 -Dhbase.cluster=qktst-qk -Dhbase.log.level=debug >> >> -Dhbase.policy.file=hbase-policy.xml >> >> -Dhbase.home.dir=/home/work/app/hbase/qktst-qk/regionserver/package >> >> >> -Djava.security.krb5.conf=/home/work/app/hbase/qktst-qk/regionserver/krb5.conf >> >> -Dhbase.id.str=work org.apache.hadoop.hbase.regionserver.HRegionServer >> start >> >> [work at hadoop ~]$ jinfo 36663 > tobe.jinfo >> >> Attaching to process ID 36663, please wait... >> >> Debugger attached successfully. >> >> Server compiler detected. >> >> JVM version is 20.12-b01 >> >> [work at hadoop ~]$ ps aux |grep "qktst" |grep "RegionServer" >> >> work 36663 0.1 1.7 24157828 1151008 ? 
Tl Aug06 72:54 >> >> /opt/soft/jdk/bin/java -cp >> >> >> /home/work/app/hbase/qktst-qk/regionserver/:/home/work/app/hbase/qktst-qk/regionserver/package//:/home/work/app/hbase/qktst-qk/regionserver/package//lib/*:/home/work/app/hbase/qktst-qk/regionserver/package//* >> >> >> -Djava.library.path=:/home/work/app/hbase/qktst-qk/regionserver/package/lib/native/:/home/work/app/hbase/qktst-qk/regionserver/package/lib/native/Linux-amd64-64 >> >> >> -Xbootclasspath/p:/home/work/app/hbase/qktst-qk/regionserver/package/lib/hadoop-security-2.0.0-mdh1.1.0.jar >> >> -Xmx10240m -Xms10240m -Xmn1024m -XX:MaxDirectMemorySize=1024m >> >> -XX:MaxPermSize=512m >> >> >> -Xloggc:/home/work/app/hbase/qktst-qk/regionserver/stdout/regionserver_gc_20140806-211157.log >> >> -Xss256k -XX:PermSize=64m -XX:+HeapDumpOnOutOfMemoryError >> >> -XX:HeapDumpPath=/home/work/app/hbase/qktst-qk/regionserver/log >> >> -XX:+PrintGCApplicationStoppedTime -XX:+UseConcMarkSweepGC -verbose:gc >> >> -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:SurvivorRatio=6 >> >> -XX:+UseCMSCompactAtFullCollection >> -XX:CMSInitiatingOccupancyFraction=75 >> >> -XX:+UseCMSInitiatingOccupancyOnly -XX:+CMSParallelRemarkEnabled >> >> -XX:+UseNUMA -XX:+CMSClassUnloadingEnabled >> >> -XX:CMSMaxAbortablePrecleanTime=10000 -XX:TargetSurvivorRatio=80 >> >> -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=100 >> -XX:GCLogFileSize=128m >> >> -XX:CMSWaitDuration=2000 -XX:+CMSScavengeBeforeRemark >> >> -XX:+PrintPromotionFailure -XX:ConcGCThreads=16 >> -XX:ParallelGCThreads=16 >> >> -XX:PretenureSizeThreshold=2097088 -XX:+CMSConcurrentMTEnabled >> >> -XX:+ExplicitGCInvokesConcurrent -XX:+SafepointTimeout >> >> -XX:MonitorBound=16384 -XX:-UseBiasedLocking -XX:MaxTenuringThreshold=3 >> >> -Dproc_regionserver >> >> >> -Djava.security.auth.login.config=/home/work/app/hbase/qktst-qk/regionserver/jaas.conf >> >> -Djava.net.preferIPv4Stack=true >> >> -Dhbase.log.dir=/home/work/app/hbase/qktst-qk/regionserver/log >> >> -Dhbase.pid=36663 
-Dhbase.cluster=qktst-qk -Dhbase.log.level=debug
>> >> -Dhbase.policy.file=hbase-policy.xml
>> >> -Dhbase.home.dir=/home/work/app/hbase/qktst-qk/regionserver/package
>> >> -Djava.security.krb5.conf=/home/work/app/hbase/qktst-qk/regionserver/krb5.conf
>> >> -Dhbase.id.str=work org.apache.hadoop.hbase.regionserver.HRegionServer start
>> >>
>> >> I hope some JVM experts here could help.
>> >>
>> >> $ java -version
>> >> java version "1.6.0_37"
>> >> Java(TM) SE Runtime Environment (build 1.6.0_37-b06)
>> >> Java HotSpot(TM) 64-Bit Server VM (build 20.12-b01, mixed mode)

From tobeg3oogle at gmail.com  Tue Sep 2 12:03:43 2014
From: tobeg3oogle at gmail.com (tobe)
Date: Tue, 2 Sep 2014 20:03:43 +0800
Subject: Disastrous bug when running jinfo and jmap
In-Reply-To: 
References: <54058A9B.8040307@oracle.com>
	<7FDE560B-A7C3-4DFE-9DFC-FE701B8E05C7@oracle.com>
Message-ID: 

Just like what @mikael said, running jstack -F has the same behaviour
while jstack doesn't. But our processes have been suspended for several
days, which is quite abnormal. I think something is preventing the
processes from recovering. Is it related to our running environment or
JDK 1.6?

On Tue, Sep 2, 2014 at 6:05 PM, tobe wrote:

> Hi @martijn. Do you mean you can run jmap and jinfo on a Java process
> which has run for over 25 days? Have you checked the status of that
> process? Our 1.6 JVMs were suspended but not exited.
>
> If it's an issue in 1.6, can anyone help to find and patch it?
>
> On Tue, Sep 2, 2014 at 5:38 PM, tobe wrote:
>
>> Thanks @mikael for replying. But I can see the complete message "Server
>> compiler detected" and expect the JVM to continue. It's weird that this
>> doesn't happen when running jinfo on new processes.
>>
>> On Tue, Sep 2, 2014 at 5:28 PM, Staffan Larsen wrote:
>>
>>> On 2 sep 2014, at 11:15, Mikael Gerdin wrote:
>>>
>>> > Hi,
>>> >
>>> > This is the expected behavior for jmap and jinfo.
>>> > If you call jstack with the "-F" flag you will see the same behavior.
>>> >
>>> > The reason for this is that jmap, jinfo and jstack -F all attach to
>>> > your target JVM as a debugger and read the memory from the process.
>>> > That needs to be done when the target process is in a frozen state.
>>>
>>> But when jinfo/jmap/jstack is done with the process it should continue
>>> execution.
>>>
>>> Is this reproducible with JDK 8?
>>>
>>> /Staffan
>>>
>>> > /Mikael
>>> >
>>> > On 2014-09-02 11:08, tobe wrote:
>>> >> When I run jinfo or jmap on any Java process, it will "suspend" the
>>> >> Java process. [...]
>>> >> $ java -version
>>> >> java version "1.6.0_37"
>>> >> Java(TM) SE Runtime Environment (build 1.6.0_37-b06)
>>> >> Java HotSpot(TM) 64-Bit Server VM (build 20.12-b01, mixed mode)

From coleen.phillimore at oracle.com  Tue Sep 2 12:29:20 2014
From: coleen.phillimore at oracle.com (Coleen Phillimore)
Date: Tue, 02 Sep 2014 08:29:20 -0400
Subject: RFR 8055008: Clean up code that saves the previous versions of redefined classes
In-Reply-To: <53FFA281.7050701@oracle.com>
References: <53EE6B22.3040107@oracle.com> <53F3C426.9080408@oracle.com>
	<53F3D571.7060609@oracle.com> <53F3E35B.5010205@oracle.com>
	<53F4C49D.4070404@oracle.com> <53F4FBDB.9080508@oracle.com>
	<53F4FEA5.2090606@oracle.com> <53F52521.1080309@oracle.com>
	<53F54AC0.7010007@oracle.com> <53F752F9.9090908@oracle.com>
	<53FDCD05.1080307@oracle.com> <53FF9370.9090603@oracle.com>
	<53FFA281.7050701@oracle.com>
Message-ID: <5405B820.3060505@oracle.com>

Serguei, I didn't answer one of your questions.

On 8/28/14, 5:43 PM, serguei.spitsyn at oracle.com wrote:
>> This bit is set during purging previous versions when all methods
>> have been marked on_stack() if found in various places. The bit is
>> only used for setting breakpoints.
>
> I should have asked slightly differently:
> "How precise must the control of this bit be?"
> Part of this question is the question below about what happens when
> the method invocation is finished.
> I realize now that it can impact only setting breakpoints.
> Suppose we did not clear the bit in time and then another breakpoint
> is set.
> The only bad thing is that this new breakpoint will be useless.

Yes. We set the on_stack bit, which causes the is_running_emcp bit to
be set during safepoints for class redefinition and class unloading.
After the safepoint, the on_stack bit is cleared. After the safepoint,
we may also set breakpoints using the is_running_emcp bit. If the
method has exited, we would set a breakpoint in a method that is never
reached.
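[Editorial note: the on_stack / is_running_emcp lifecycle described above can be modeled with a small self-contained C++ sketch. The names below are illustrative stand-ins only, not HotSpot's actual declarations (the real code is in method.hpp and jvmtiImpl.cpp); it combines the mark and purge passes that happen at a safepoint into one helper.]

```cpp
#include <cassert>
#include <vector>

// Illustrative stand-in for the two Method bits discussed in this thread.
struct Method {
  bool on_stack = false;         // transient mark while frames are walked
  bool is_running_emcp = false;  // EMCP previous version still has activations
};

// At a redefinition/unloading safepoint: mark methods found on thread
// stacks, record which EMCP previous versions are still running, then
// clear the transient on_stack marks. Only is_running_emcp survives.
void safepoint_mark_and_purge(std::vector<Method*>& emcp_prev_versions,
                              const std::vector<Method*>& frames) {
  for (Method* m : frames) m->on_stack = true;
  for (Method* m : emcp_prev_versions) {
    // Purging clears the bit for methods no longer on any stack.
    m->is_running_emcp = m->on_stack;
  }
  for (Method* m : frames) m->on_stack = false;
}

// Breakpoints are only planted in EMCP previous versions believed to be
// running; at worst a stale bit plants a breakpoint in a method that is
// never reached again, which is invisible to the programmer.
bool breakpoint_eligible(const Method& m) { return m.is_running_emcp; }
```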
But this shouldn't be noticeable to the programmer. The method's
is_running_emcp bit and maybe its metadata would be cleaned up the next
time we do class unloading at a safepoint.

> But let me look at the new webrev first to see if any update is
> needed here.

Yes, please review this again and let me know if this does what I claim
it does.

Thank you!
Coleen

From tobeg3oogle at gmail.com  Tue Sep 2 13:37:56 2014
From: tobeg3oogle at gmail.com (tobe)
Date: Tue, 2 Sep 2014 21:37:56 +0800
Subject: Disastrous bug when running jinfo and jmap
In-Reply-To: 
References: <54058A9B.8040307@oracle.com>
	<7FDE560B-A7C3-4DFE-9DFC-FE701B8E05C7@oracle.com>
Message-ID: 

Now I'm considering something about ptrace. Our kernel version is
2.6.32-279. Maybe it doesn't resume the threads correctly. Is it
related to
http://kernel.opensuse.org/cgit/kernel/commit/?h=openSUSE-13.1&id=d1f26676dad578a65c94782f0c2bd00b7aa68f1b
?

On Tue, Sep 2, 2014 at 8:03 PM, tobe wrote:

> Just like what @mikael said, running jstack -F has the same behaviour
> while jstack doesn't. But our processes have been suspended for
> several days, which is quite abnormal. I think something is preventing
> the processes from recovering. Is it related to our running
> environment or JDK 1.6?
>
> On Tue, Sep 2, 2014 at 6:05 PM, tobe wrote:
>
>> Hi @martijn. Do you mean you can run jmap and jinfo on a Java process
>> which has run for over 25 days? Have you checked the status of that
>> process? Our 1.6 JVMs were suspended but not exited.
>>
>> If it's an issue in 1.6, can anyone help to find and patch it?
>>
>> On Tue, Sep 2, 2014 at 5:38 PM, tobe wrote:
>>
>>> Thanks @mikael for replying. But I can see the complete message
>>> "Server compiler detected" and expect the JVM to continue. It's
>>> weird that this doesn't happen when running jinfo on new processes.
>>> On Tue, Sep 2, 2014 at 5:28 PM, Staffan Larsen <staffan.larsen at oracle.com> wrote:
>>>
>>>> [...]

From tobeg3oogle at gmail.com  Tue Sep 2 13:49:16 2014
From: tobeg3oogle at gmail.com (tobe)
Date: Tue, 2 Sep 2014 21:49:16 +0800
Subject: Disastrous bug when running jinfo and jmap
In-Reply-To: 
References: <54058A9B.8040307@oracle.com>
	<7FDE560B-A7C3-4DFE-9DFC-FE701B8E05C7@oracle.com>
Message-ID: 

And I see this
http://ebergen.net/wordpress/2008/06/25/ptrace-on-threads-and-linux-signal-handling-issues/ .

On Tue, Sep 2, 2014 at 9:37 PM, tobe wrote:

> Now I'm considering something about ptrace.
> Our kernel version is 2.6.32-279. Maybe it doesn't resume the threads
> correctly. Is it related to
> http://kernel.opensuse.org/cgit/kernel/commit/?h=openSUSE-13.1&id=d1f26676dad578a65c94782f0c2bd00b7aa68f1b
> ?
>
> On Tue, Sep 2, 2014 at 8:03 PM, tobe wrote:
>
>> Just like what @mikael said, running jstack -F has the same behaviour
>> while jstack doesn't. [...]

From daniel.daugherty at oracle.com  Tue Sep 2 17:42:50 2014
From: daniel.daugherty at oracle.com (Daniel D. Daugherty)
Date: Tue, 02 Sep 2014 11:42:50 -0600
Subject: RFR 8055008: Clean up code that saves the previous versions of redefined classes
In-Reply-To: <53FF9370.9090603@oracle.com>
References: <53EE6B22.3040107@oracle.com> <53F3C426.9080408@oracle.com>
	<53F3D571.7060609@oracle.com> <53F3E35B.5010205@oracle.com>
	<53F4C49D.4070404@oracle.com> <53F4FBDB.9080508@oracle.com>
	<53F4FEA5.2090606@oracle.com> <53F52521.1080309@oracle.com>
	<53F54AC0.7010007@oracle.com> <53F752F9.9090908@oracle.com>
	<53FDCD05.1080307@oracle.com> <53FF9370.9090603@oracle.com>
Message-ID: <5406019A.1000808@oracle.com>

On 8/28/14 2:39 PM, Coleen Phillimore wrote:
> Serguei, Thank you for the code review and discussions!
>
> This led me to find a bug and here is the new webrev, but there are
> comments below to explain:
>
> open webrev at http://cr.openjdk.java.net/~coleenp/8055008_3/

src/share/vm/oops/instanceKlass.hpp
No comments.

src/share/vm/oops/instanceKlass.cpp
No comments.

src/share/vm/oops/method.hpp
No comments.

src/share/vm/classfile/metadataOnStackMark.hpp
No comments.
src/share/vm/classfile/metadataOnStackMark.cpp
No comments.

src/share/vm/classfile/classLoaderData.cpp
No comments.

src/share/vm/prims/jvmtiRedefineClasses.hpp
No comments.

src/share/vm/prims/jvmtiRedefineClasses.cpp
    line 138: MetadataOnStackMark md_on_stack(true);
    So this new "has_redefined_a_class" parameter that we're passing
    "true" for here allows us to call CodeCache::alive_nmethods_do()
    earlier. That call was previously keyed off a call to
    JvmtiExport::has_redefined_a_class(), which wouldn't be true until
    after the first round of RedefineClasses() was pretty much done.
    So I assume we were not updating some things in the CodeCache
    during our first RedefineClasses() call, so we had some incorrect
    (probably obsolete) methods being called?

    Update: Now that I've read the replies below the webrev link I see
    that my questions are already answered.

src/share/vm/prims/jvmtiImpl.cpp
No comments.

src/share/vm/code/nmethod.cpp
No comments.

src/share/vm/memory/universe.cpp
No comments.

test/runtime/RedefineTests/RedefineFinalizer.java
No comments.

test/runtime/RedefineTests/RedefineRunningMethods.java
No comments.

Thumbs up!

Dan


> On 8/27/14, 8:20 AM, serguei.spitsyn at oracle.com wrote:
>> Hi Coleen,
>>
>> src/share/vm/code/nmethod.cpp
>>
>> Nice simplification.
>>
>> src/share/vm/memory/universe.cpp
>>
>> No comments
>>
>> src/share/vm/oops/instanceKlass.cpp
>>
>> A minor question about two related fragments:
>>
>>   3505   // next previous version
>>   3506   last = pv_node;
>>   3507   pv_node = pv_node->previous_versions();
>>   3508   version++;
>>
>> Should the version also be incremented in the case at lines
>> 3462-3469, as it is at line 3508?
>> It is not a big issue, as the version number is only used at the
>> RC_TRACE line:
>>   3496   method->signature()->as_C_string(), j, version));
>>
> Yes, I fixed that.
>
>> We still have no consensus on the following question:
>> Can a non-running EMCP method become running again after the flag
>> was cleared?
>> >> 3487 if (!method->on_stack()) { >> 3488 if (method->is_running_emcp()) { >> 3489 method->set_running_emcp(false); // no >> breakpoints for non-running methods >> 3490 } >> >> Just wanted to be sure what is the current view on this. :) >> > > Not unless there is a bug, such that we hit a safepoint with the EMCP > method not in any place that MetadataOnStackMark can get to it but we > have a Method* pointer to it. This situation could result in the EMCP > Method* getting deallocated as well. > > Stefan filed a bug a while ago that Method* pointers are not safe > without a methodHandle but we currently don't have any verification > that they are properly handled, such as CheckUnhandledOops. > >> >> src/share/vm/oops/instanceKlass.hpp >> >> No comments >> >> >> src/share/vm/oops/method.hpp >> >> Just some questions. >> Usefulness of this new function depends on basic ability of a >> non-running method to become running again: >> is_running_emcp() >> >> The questions are: >> - How precise is the control of this bit? > > This bit is set during purging previous versions when all methods have > been marked on_stack() if found in various places. The bit is only > used for setting breakpoints. > >> - Should we clear this bit after all method invocations have been >> finished? >> - Can a EMCP method become running again after the bit was cleared >> or not set? >> >> >> src/share/vm/prims/jvmtiImpl.cpp >> >> 300 if (method->is_running_emcp() && >> >> Is it possible that an EMCP method becomes running after the bit >> is_running_emcp() is set? >> Do we miss breakpoints in such a case? >> > > I do think this is only possible if there is a bug and the Method* > would probably be deallocated first and crash. Since this code is > called during class unloading, a crash is something I'd expect to see. 
> > So I added an additional check to find resurrected emcp methods: > > if (!method->on_stack()) { > if (method->is_running_emcp()) { > method->set_running_emcp(false); // no breakpoints for > non-running methods > } > } else { > assert (method->is_obsolete() || method->is_running_emcp(), > "emcp method cannot run after emcp bit is cleared"); > // RC_TRACE macro has an embedded ResourceMark > RC_TRACE(0x00000200, > ("purge: %s(%s): prev method @%d in version @%d is alive", > > > Unfortunately, this assert did fire in NSK testing. And I just now > found the bug and fixed it (see above webrev). We weren't walking the > code cache for the first redefinition, and an emcp method was in the > code cache. The methods in the code cache are not exactly running but > we don't know if they could be called directly by running compiled code. > > This fix is to add a boolean flag to MetadataOnStackMark. I reran the > consistently failing tests, nsk.quick tests, and the jtreg tests in > java/lang/instrument on the fix. >> >> 303 RC_TRACE(0x00000800, ("%sing breakpoint in %s(%s)", >> 304 meth_act == &Method::set_breakpoint ? "set" : "clear", >> >> The change from "sett" to "set" seems to be wrong (see the line >> 303): >> > > Yes, I didn't realize that. Dan pointed this out to me also. >> >> src/share/vm/prims/jvmtiRedefineClasses.cpp >> src/share/vm/prims/jvmtiRedefineClasses.hpp >> >> No comments >> > > Thank you Serguei for the in-depth comments and pushing on these > issues. I think this has improved the code tremendously. I really > appreciate it! > > Coleen >> >> Thanks, >> Serguei >> >> >> On 8/22/14 7:26 AM, Coleen Phillimore wrote: >>> >>> Thanks Dan, Serguei and Roland for discussion on this change. 
The >>> latest version is here: >>> >>> open webrev at http://cr.openjdk.java.net/~coleenp/8055008_2/ >>> >>> Changes from the last version (don't have the setup to do a diff >>> webrev, sorry) are that I have a new flag to mark running emcp >>> methods so we can set breakpoints in only those. Also, confirmed >>> that we need to clean_weak_method_links in obsolete methods too. >>> Made changes per review comments. Also, added more to the test so >>> that all tracing in InstanceKlass comes out. Reran all tests (nsk, >>> jck, jtreg, java/lang/instrument). >>> >>> Thanks to whoever made command line processing handle hex numbers! >>> >>> Thanks, >>> Coleen >>> >>> On 8/20/14, 9:26 PM, Coleen Phillimore wrote: >>>> >>>> On 8/20/14, 6:45 PM, Daniel D. Daugherty wrote: >>>>> On 8/20/14 2:01 PM, Coleen Phillimore wrote: >>>>>> On 8/20/14, 3:49 PM, serguei.spitsyn at oracle.com wrote: >>>>>>>> >>>>>>>> If an EMCP method is not running, should we save it on a >>>>>>>> previous version list anyway so that we can make it obsolete if >>>>>>>> it's redefined and made obsolete? >>>>>>> >>>>>>> I hope, Dan will catch me if I'm wrong... >>>>>>> >>>>>>> I think, we should not. >>>>>>> An EMCP method can not be made obsolete if it is not running. >>>>>>> >>>>>> >>>>>> >>>>>> It should be this way otherwise we'd have to hang onto things >>>>>> forever. >>>>> >>>>> An EMCP method should only be made obsolete if a RedefineClasses() or >>>>> RetransformClasses() operation made it so. We should not be >>>>> leveraging >>>>> off the obsolete-ness attribute to solve a life-cycle problem. >>>> >>>> Yes, this was my error in the change. This is why I made things >>>> obsolete if they were not running. I think I can't reuse this >>>> flag. My latest changes add a new explicit flag (which we have >>>> space for in Method*). >>>>> >>>>> In the pre-PGR world, we could trust GC to make a completely unused >>>>> EMCP method collectible and eventually our weak reference would go >>>>> away. 
Just because an EMCP method is not on a stack does not mean >>>>> that it is not used so we need a different way to determine whether >>>>> it is OK to no longer track an EMCP method. >>>> >>>> Our on_stack marking is supposed to look at all the places where GC >>>> used to look so I think we can use on_stack to track the lifecycle >>>> of EMCP methods. If the EMCP method is somewhere, we will find it! >>>> >>>> I'm running tests on the latest change, but am also waiting for >>>> confirmation from Roland because we were only cleaning out >>>> MethodData for EMCP methods and not for running obsolete methods >>>> and I think we need to do that for obsolete methods also, which my >>>> change does now. I think it was a bug. >>>> >>>> Thanks Dan for remembering all of this for me! >>>> >>>> Coleen >>>>> >>>>> >>>>>> >>>>>>> BTW, I'm reviewing the webrev too, but probably it'd be better >>>>>>> to switch to updated webrev after it is ready. >>>>>> >>>>>> Yes, this is a good idea. I figured out why I made emcp methods >>>>>> obsolete, and I'm fixing that as well as Dan's comments. Thanks! >>>>> >>>>> Cool! I'm looking forward to the next review. >>>>> >>>>> Dan >>>>> >>>>> >>>>>> >>>>>> Coleen >>>>>> >>>>>>> >>>>>>> Thanks, >>>>>>> Serguei >>>>>> >>>>> >>>> >>> >> > From tom.deneau at amd.com Tue Sep 2 18:12:21 2014 From: tom.deneau at amd.com (Deneau, Tom) Date: Tue, 2 Sep 2014 18:12:21 +0000 Subject: detecting jit compilation Message-ID: Hi All -- I was directed to this list for the following question which I asked on the jmh-dev list Is there a way thru Management Beans that I can find out from the Java side whether a particular method has been JIT compiled? Aleksey Shipilev mentioned the WhiteBox API which I am not familiar with. Is this something that can be used today from Java 8? 
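For reference on what the standard API does offer: java.lang.management exposes only aggregate JIT information through CompilationMXBean — the compiler's name and total compilation time — nothing per method. A minimal sketch (the class name JitInfo is just for illustration):

```java
import java.lang.management.CompilationMXBean;
import java.lang.management.ManagementFactory;

public class JitInfo {
    // Summarizes the aggregate JIT information available via the
    // standard management API; returns a note if the VM has no JIT
    // compiler at all (e.g. when running with -Xint).
    static String summary() {
        CompilationMXBean jit = ManagementFactory.getCompilationMXBean();
        if (jit == null) {
            return "no JIT compiler in this VM";
        }
        StringBuilder sb = new StringBuilder("JIT: ").append(jit.getName());
        if (jit.isCompilationTimeMonitoringSupported()) {
            sb.append(", total compilation time (ms): ")
              .append(jit.getTotalCompilationTime());
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(summary());
    }
}
```

So per-method compilation status does not appear to be reachable from java.lang.management at all.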
-- Tom From volker.simonis at gmail.com Tue Sep 2 18:36:39 2014 From: volker.simonis at gmail.com (Volker Simonis) Date: Tue, 2 Sep 2014 20:36:39 +0200 Subject: detecting jit compilation In-Reply-To: References: Message-ID: Depending on your use case you could perhaps use JVMTI and the JVMTI_EVENT_COMPILED_METHOD_LOAD, JVMTI_EVENT_COMPILED_METHOD_UNLOAD, JVMTI_EVENT_DYNAMIC_CODE_GENERATED events, although the result may not be what you actually expect because of inlining. You'd also have to do your own bookkeeping of what was compiled, so this is probably not exactly what you want. I think in general your question is not so easy to answer because of inlining. It may be that a certain method was never compiled stand-alone but it may very well have been inlined into several other methods. So it always depends on the call site. Regards, Volker On Tue, Sep 2, 2014 at 8:12 PM, Deneau, Tom wrote: > Hi All -- > > I was directed to this list for the following question which I asked on the jmh-dev list > > Is there a way thru Management Beans that I can find out from the Java > side whether a particular method has been JIT compiled? > > Aleksey Shipilev mentioned the WhiteBox API which I am not familiar with. Is this something that can be used today from Java 8? > > -- Tom > From mikael.gerdin at oracle.com Tue Sep 2 18:46:28 2014 From: mikael.gerdin at oracle.com (Mikael Gerdin) Date: Tue, 02 Sep 2014 20:46:28 +0200 Subject: detecting jit compilation In-Reply-To: References: Message-ID: <6381977.s2df069sFp@vboxbuntu> Hi Tom, On Tuesday 02 September 2014 18.12.21 Deneau, Tom wrote: > Hi All -- > > I was directed to this list for the following question which I asked on the > jmh-dev list > > Is there a way thru Management Beans that I can find out from the Java > side whether a particular method has been JIT compiled? > > Aleksey Shipilev mentioned the WhiteBox API which I am not familiar with. > Is this something that can be used today from Java 8? 
The WhiteBox API is an internal testing API. It's used by some compiler tests, see: http://hg.openjdk.java.net/jdk9/hs-gc/hotspot/file/f80bb126b5bb/test/compiler/whitebox/IsMethodCompilableTest.java The VM-side implementation is located at: http://hg.openjdk.java.net/jdk9/hs-gc/hotspot/file/f80bb126b5bb/src/share/vm/prims/whitebox.cpp I cannot stress this enough: this is by no means intended to be used in any production type environment. If it crashes you get to keep the pieces. It is primarily designed to be used during JVM development. /Mikael > > -- Tom From serguei.spitsyn at oracle.com Tue Sep 2 19:58:30 2014 From: serguei.spitsyn at oracle.com (serguei.spitsyn at oracle.com) Date: Tue, 02 Sep 2014 12:58:30 -0700 Subject: RFR 8055008: Clean up code that saves the previous versions of redefined classes In-Reply-To: <5405B820.3060505@oracle.com> References: <53EE6B22.3040107@oracle.com> <53F3C426.9080408@oracle.com> <53F3D571.7060609@oracle.com> <53F3E35B.5010205@oracle.com> <53F4C49D.4070404@oracle.com> <53F4FBDB.9080508@oracle.com> <53F4FEA5.2090606@oracle.com> <53F52521.1080309@oracle.com> <53F54AC0.7010007@oracle.com> <53F752F9.9090908@oracle.com> <53FDCD05.1080307@oracle.com> <53FF9370.9090603@oracle.com> <53FFA281.7050701@oracle.com> <5405B820.3060505@oracle.com> Message-ID: <54062166.4030603@oracle.com> Coleen, Thank you for the answer! Yes, I hope to finish this review today. Thanks, Serguei On 9/2/14 5:29 AM, Coleen Phillimore wrote: > > Serguei, I didn't answer one of your questions. > > On 8/28/14, 5:43 PM, serguei.spitsyn at oracle.com wrote: >>> This bit is set during purging previous versions when all methods >>> have been marked on_stack() if found in various places. The bit is >>> only used for setting breakpoints. >> >> I had to ask slightly different. >> "How precise must be the control of this bit?" >> Part of this question is the question below about what happens when >> the method invocation is finished. 
>> I realized now that it can impact only setting breakpoints. >> Suppose, we did not clear the bit in time and then another breakpoint >> is set. >> The only bad thing is that this new breakpoint will be useless. > > Yes. We set the on_stack bit which causes setting the is_running_emcp > bit during safepoints for class redefinition and class unloading. > After the safepoint, the on_stack bit is cleared. After the > safepoint, we may also set breakpoints using the is_running_emcp bit. > If the method has exited we would set a breakpoint in a method that is > never reached. But this shouldn't be noticeable to the programmer. > > The method's is_running_emcp bit and maybe metadata would be cleaned > up the next time we do class unloading at a safepoint. > >> >> But let me look at new webrev first to see if any update is needed here. >> > > Yes, please review this again and let me know if this does what I > claim it does. > > Thank you! > Coleen From coleen.phillimore at oracle.com Tue Sep 2 21:33:22 2014 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Tue, 02 Sep 2014 17:33:22 -0400 Subject: RFR 8055008: Clean up code that saves the previous versions of redefined classes In-Reply-To: <5406019A.1000808@oracle.com> References: <53EE6B22.3040107@oracle.com> <53F3C426.9080408@oracle.com> <53F3D571.7060609@oracle.com> <53F3E35B.5010205@oracle.com> <53F4C49D.4070404@oracle.com> <53F4FBDB.9080508@oracle.com> <53F4FEA5.2090606@oracle.com> <53F52521.1080309@oracle.com> <53F54AC0.7010007@oracle.com> <53F752F9.9090908@oracle.com> <53FDCD05.1080307@oracle.com> <53FF9370.9090603@oracle.com> <5406019A.1000808@oracle.com> Message-ID: <540637A2.2090002@oracle.com> Thanks, Dan. See below. On 9/2/14, 1:42 PM, Daniel D. Daugherty wrote: > On 8/28/14 2:39 PM, Coleen Phillimore wrote: >> >> Serguei, Thank you for the code review and discussions! 
>> >> This led me to find a bug and here is the new webrev, but there are >> comments below to explain: >> >> open webrev at http://cr.openjdk.java.net/~coleenp/8055008_3/ > > src/share/vm/oops/instanceKlass.hpp > No comments. > > src/share/vm/oops/instanceKlass.cpp > No comments. > > src/share/vm/oops/method.hpp > No comments. > > src/share/vm/classfile/metadataOnStackMark.hpp > No comments. > > src/share/vm/classfile/metadataOnStackMark.cpp > No comments. > > src/share/vm/classfile/classLoaderData.cpp > No comments. > > src/share/vm/prims/jvmtiRedefineClasses.hpp > No comments. > > src/share/vm/prims/jvmtiRedefineClasses.cpp > line 138: MetadataOnStackMark md_on_stack(true); > So this "new has_redefined_a_class" parameter that we're > passing "true" here allows us to call CodeCache:: > alive_nmethods_do() earlier. That call was previously keyed > off a call to JvmtiExport::has_redefined_a_class() which > wouldn't be true until after the first round of > RedefineClasses() was pretty much done. > > So I assume we were not updating some things in the > CodeCache during our first RedefineClasses() call so we > had some incorrect (probably obsolete) methods being called? > > Update: Now that I've read the replies below the webrev > link I see that my questions are already answered. Yes, exactly. Thanks! Coleen > > src/share/vm/prims/jvmtiImpl.cpp > No comments. > > src/share/vm/code/nmethod.cpp > No comments. > > src/share/vm/memory/universe.cpp > No comments. > > test/runtime/RedefineTests/RedefineFinalizer.java > No comments. > > test/runtime/RedefineTests/RedefineRunningMethods.java > No comments. > > Thumbs up! > > Dan > > >> >> >> On 8/27/14, 8:20 AM, serguei.spitsyn at oracle.com wrote: >>> Hi Coleen, >>> >>> >>> src/share/vm/code/nmethod.cpp >>> >>> Nice simplification. 
>>> >>> >>> src/share/vm/memory/universe.cpp >>> >>> No comments >>> >>> >>> src/share/vm/oops/instanceKlass.cpp >>> >>> A minor question about two related fragments: >>> >>> 3505 // next previous version >>> 3506 last = pv_node; >>> 3507 pv_node = pv_node->previous_versions(); >>> 3508 version++; >>> >>> Should the version be incremented to the case 3462-3469 as at the >>> line 3508? >>> It is not a big issue as the version number is used at the >>> RC_TRACE line only: >>> 3496 method->signature()->as_C_string(), j, version)); >>> >>> >> >> Yes, I fixed that. >> >>> We still have no consensus on the following question: >>> Can a non-running EMCP method become running again after the flag >>> was cleared? >>> >>> 3487 if (!method->on_stack()) { >>> 3488 if (method->is_running_emcp()) { >>> 3489 method->set_running_emcp(false); // no >>> breakpoints for non-running methods >>> 3490 } >>> >>> Just wanted to be sure what is the current view on this. :) >>> >> >> Not unless there is a bug, such that we hit a safepoint with the EMCP >> method not in any place that MetadataOnStackMark can get to it but we >> have a Method* pointer to it. This situation could result in the >> EMCP Method* getting deallocated as well. >> >> Stefan filed a bug a while ago that Method* pointers are not safe >> without a methodHandle but we currently don't have any verification >> that they are properly handled, such as CheckUnhandledOops. >> >>> >>> src/share/vm/oops/instanceKlass.hpp >>> >>> No comments >>> >>> >>> src/share/vm/oops/method.hpp >>> >>> Just some questions. >>> Usefulness of this new function depends on basic ability of a >>> non-running method to become running again: >>> is_running_emcp() >>> >>> The questions are: >>> - How precise is the control of this bit? >> >> This bit is set during purging previous versions when all methods >> have been marked on_stack() if found in various places. The bit is >> only used for setting breakpoints. 
>> >>> - Should we clear this bit after all method invocations have been >>> finished? >>> - Can a EMCP method become running again after the bit was >>> cleared or not set? >>> >>> >>> src/share/vm/prims/jvmtiImpl.cpp >>> >>> 300 if (method->is_running_emcp() && >>> >>> Is it possible that an EMCP method becomes running after the bit >>> is_running_emcp() is set? >>> Do we miss breakpoints in such a case? >>> >> >> I do think this is only possible if there is a bug and the Method* >> would probably be deallocated first and crash. Since this code is >> called during class unloading, a crash is something I'd expect to see. >> >> So I added an additional check to find resurrected emcp methods: >> >> if (!method->on_stack()) { >> if (method->is_running_emcp()) { >> method->set_running_emcp(false); // no breakpoints for >> non-running methods >> } >> } else { >> assert (method->is_obsolete() || method->is_running_emcp(), >> "emcp method cannot run after emcp bit is cleared"); >> // RC_TRACE macro has an embedded ResourceMark >> RC_TRACE(0x00000200, >> ("purge: %s(%s): prev method @%d in version @%d is alive", >> >> >> Unfortunately, this assert did fire in NSK testing. And I just now >> found the bug and fixed it (see above webrev). We weren't walking >> the code cache for the first redefinition, and an emcp method was in >> the code cache. The methods in the code cache are not exactly >> running but we don't know if they could be called directly by running >> compiled code. >> >> This fix is to add a boolean flag to MetadataOnStackMark. I reran >> the consistently failing tests, nsk.quick tests, and the jtreg tests >> in java/lang/instrument on the fix. >>> >>> 303 RC_TRACE(0x00000800, ("%sing breakpoint in %s(%s)", >>> 304 meth_act == &Method::set_breakpoint ? "set" : "clear", >>> >>> The change from "sett" to "set" seems to be wrong (see the line >>> 303): >>> >> >> Yes, I didn't realize that. Dan pointed this out to me also. 
>>> >>> src/share/vm/prims/jvmtiRedefineClasses.cpp >>> src/share/vm/prims/jvmtiRedefineClasses.hpp >>> >>> No comments >>> >> >> Thank you Serguei for the in-depth comments and pushing on these >> issues. I think this has improved the code tremendously. I really >> appreciate it! >> >> Coleen >>> >>> Thanks, >>> Serguei >>> >>> >>> On 8/22/14 7:26 AM, Coleen Phillimore wrote: >>>> >>>> Thanks Dan, Serguei and Roland for discussion on this change. The >>>> latest version is here: >>>> >>>> open webrev at http://cr.openjdk.java.net/~coleenp/8055008_2/ >>>> >>>> Changes from the last version (don't have the setup to do a diff >>>> webrev, sorry) are that I have a new flag to mark running emcp >>>> methods so we can set breakpoints in only those. Also, confirmed >>>> that we need to clean_weak_method_links in obsolete methods too. >>>> Made changes per review comments. Also, added more to the test so >>>> that all tracing in InstanceKlass comes out. Reran all tests (nsk, >>>> jck, jtreg, java/lang/instrument). >>>> >>>> Thanks to whoever made command line processing handle hex numbers! >>>> >>>> Thanks, >>>> Coleen >>>> >>>> On 8/20/14, 9:26 PM, Coleen Phillimore wrote: >>>>> >>>>> On 8/20/14, 6:45 PM, Daniel D. Daugherty wrote: >>>>>> On 8/20/14 2:01 PM, Coleen Phillimore wrote: >>>>>>> On 8/20/14, 3:49 PM, serguei.spitsyn at oracle.com wrote: >>>>>>>>> >>>>>>>>> If an EMCP method is not running, should we save it on a >>>>>>>>> previous version list anyway so that we can make it obsolete >>>>>>>>> if it's redefined and made obsolete? >>>>>>>> >>>>>>>> I hope, Dan will catch me if I'm wrong... >>>>>>>> >>>>>>>> I think, we should not. >>>>>>>> An EMCP method can not be made obsolete if it is not running. >>>>>>>> >>>>>>> >>>>>>> >>>>>>> It should be this way otherwise we'd have to hang onto things >>>>>>> forever. >>>>>> >>>>>> An EMCP method should only be made obsolete if a >>>>>> RedefineClasses() or >>>>>> RetransformClasses() operation made it so. 
We should not be >>>>>> leveraging >>>>>> off the obsolete-ness attribute to solve a life-cycle problem. >>>>> >>>>> Yes, this was my error in the change. This is why I made things >>>>> obsolete if they were not running. I think I can't reuse this >>>>> flag. My latest changes add a new explicit flag (which we have >>>>> space for in Method*). >>>>>> >>>>>> In the pre-PGR world, we could trust GC to make a completely unused >>>>>> EMCP method collectible and eventually our weak reference would go >>>>>> away. Just because an EMCP method is not on a stack does not mean >>>>>> that it is not used so we need a different way to determine whether >>>>>> it is OK to no longer track an EMCP method. >>>>> >>>>> Our on_stack marking is supposed to look at all the places where >>>>> GC used to look so I think we can use on_stack to track the >>>>> lifecycle of EMCP methods. If the EMCP method is somewhere, we >>>>> will find it! >>>>> >>>>> I'm running tests on the latest change, but am also waiting for >>>>> confirmation from Roland because we were only cleaning out >>>>> MethodData for EMCP methods and not for running obsolete methods >>>>> and I think we need to do that for obsolete methods also, which my >>>>> change does now. I think it was a bug. >>>>> >>>>> Thanks Dan for remembering all of this for me! >>>>> >>>>> Coleen >>>>>> >>>>>> >>>>>>> >>>>>>>> BTW, I'm reviewing the webrev too, but probably it'd be better >>>>>>>> to switch to updated webrev after it is ready. >>>>>>> >>>>>>> Yes, this is a good idea. I figured out why I made emcp methods >>>>>>> obsolete, and I'm fixing that as well as Dan's comments. Thanks! >>>>>> >>>>>> Cool! I'm looking forward to the next review. 
>>>>>> >>>>>> Dan >>>>>> >>>>>> >>>>>>> >>>>>>> Coleen >>>>>>> >>>>>>>> >>>>>>>> Thanks, >>>>>>>> Serguei >>>>>>> >>>>>> >>>>> >>>> >>> >> > From serguei.spitsyn at oracle.com Wed Sep 3 05:14:49 2014 From: serguei.spitsyn at oracle.com (serguei.spitsyn at oracle.com) Date: Tue, 02 Sep 2014 22:14:49 -0700 Subject: RFR 8055008: Clean up code that saves the previous versions of redefined classes In-Reply-To: <5405B820.3060505@oracle.com> References: <53EE6B22.3040107@oracle.com> <53F3C426.9080408@oracle.com> <53F3D571.7060609@oracle.com> <53F3E35B.5010205@oracle.com> <53F4C49D.4070404@oracle.com> <53F4FBDB.9080508@oracle.com> <53F4FEA5.2090606@oracle.com> <53F52521.1080309@oracle.com> <53F54AC0.7010007@oracle.com> <53F752F9.9090908@oracle.com> <53FDCD05.1080307@oracle.com> <53FF9370.9090603@oracle.com> <53FFA281.7050701@oracle.com> <5405B820.3060505@oracle.com> Message-ID: <5406A3C9.6050205@oracle.com> Coleen, It looks good in general. But I have some questions and minor suggestions below. src/share/vm/oops/instanceKlass.cpp In the fragment below: 3488 if (!method->on_stack()) { 3489 if (method->is_running_emcp()) { 3490 method->set_running_emcp(false); // no breakpoints for non-running methods 3491 } 3492 } else { 3493 assert (method->is_obsolete() || method->is_running_emcp(), 3494 "emcp method cannot run after emcp bit is cleared"); 3495 // RC_TRACE macro has an embedded ResourceMark 3496 RC_TRACE(0x00000200, 3497 ("purge: %s(%s): prev method @%d in version @%d is alive", 3498 method->name()->as_C_string(), 3499 method->signature()->as_C_string(), j, version)); 3500 if (method->method_data() != NULL) { 3501 // Clean out any weak method links for running methods 3502 // (also should include not EMCP methods) 3503 method->method_data()->clean_weak_method_links(); 3504 } 3505 } It is not clear to me what happens in the situation when the method is not running (both emcp and obsolete cases). Why do we keep such methods, should we remove them? 
Is it because the method_data needs to be collected by the GC first? If it is so, do we need a comment explaining it? Another question is about setting is_running_emcp. Any method after redefinition can be in one of the three states: method->on_stack() -> 'is_obsolete' or 'is_emcp' !method->on_stack() -> 'to-be-removed' I think, the 'is_running_emcp' flag is confusing (a suggestion: use 'is_emcp'). It is not consistent with the 'is_obsolete' flag because we do not spell 'is_running_obsolete' as non-running obsolete methods hit the category 'to-be-removed'. Just some thoughts. Sorry for looping around this. :( A minor suggestion on the following fragment: 3611 if (cp_ref->on_stack()) { 3612 if (emcp_method_count == 0) { 3613 RC_TRACE(0x00000400, ("add: all methods are obsolete; no added EMCP refs")); 3614 } else { 3615 // At least one method is still running, check for EMCP methods 3616 for (int i = 0; i < old_methods->length(); i++) { 3617 Method* old_method = old_methods->at(i); 3618 if (!old_method->is_obsolete() && old_method->on_stack()) { 3619 // if EMCP method (not obsolete) is on the stack, mark as EMCP so that 3620 // we can add breakpoints for it. 
3621 old_method->set_running_emcp(true); 3622 RC_TRACE(0x00000400, ("add: EMCP method %s is on_stack " INTPTR_FORMAT, old_method->name_and_sig_as_C_string(), old_method)); 3623 } else if (!old_method->is_obsolete()) { 3624 RC_TRACE(0x00000400, ("add: EMCP method %s is NOT on_stack " INTPTR_FORMAT, old_method->name_and_sig_as_C_string(), old_method)); 3625 } 3626 } 3627 } 3628 3629 RC_TRACE(0x00000400, ("add: scratch class added; one of its methods is on_stack")); 3630 assert(scratch_class->previous_versions() == NULL, "shouldn't have a previous version"); 3631 scratch_class->link_previous_versions(previous_versions()); 3632 link_previous_versions(scratch_class()); 3633 } else { 3634 RC_TRACE(0x00000400, ("add: scratch class not added; no methods are running")); 3635 } It'd be a simplification to reduce the indent like this: if (!cp_ref->on_stack()) { RC_TRACE(0x00000400, ("add: scratch class not added; no methods are running")); return; } if (emcp_method_count == 0) { RC_TRACE(0x00000400, ("add: all methods are obsolete; no added EMCP refs")); return; } // At least one method is still running, check for EMCP methods for (int i = 0; i < old_methods->length(); i++) { Method* old_method = old_methods->at(i); if (!old_method->is_obsolete() && old_method->on_stack()) { // if EMCP method (not obsolete) is on the stack, mark as EMCP so that // we can add breakpoints for it. 
old_method->set_running_emcp(true); RC_TRACE(0x00000400, ("add: EMCP method %s is on_stack " INTPTR_FORMAT, old_method->name_and_sig_as_C_string(), old_method)); } else if (!old_method->is_obsolete()) { RC_TRACE(0x00000400, ("add: EMCP method %s is NOT on_stack " INTPTR_FORMAT, old_method->name_and_sig_as_C_string(), old_method)); } } RC_TRACE(0x00000400, ("add: scratch class added; one of its methods is on_stack")); assert(scratch_class->previous_versions() == NULL, "shouldn't have a previous version"); scratch_class->link_previous_versions(previous_versions()); link_previous_versions(scratch_class()); Also, from the 1st round review email exchange... It'd be nice to add a comment somewhere above to explain this: > Yes. We set the on_stack bit which causes setting the is_running_emcp bit during safepoints > for class redefinition and class unloading. After the safepoint, the on_stack bit is cleared. > After the safepoint, we may also set breakpoints using the is_running_emcp bit. > If the method has exited we would set a breakpoint in a method that is never reached. > But this shouldn't be noticeable to the programmer. Thanks, Serguei On 9/2/14 5:29 AM, Coleen Phillimore wrote: > > Serguei, I didn't answer one of your questions. > > On 8/28/14, 5:43 PM, serguei.spitsyn at oracle.com wrote: >>> This bit is set during purging previous versions when all methods >>> have been marked on_stack() if found in various places. The bit is >>> only used for setting breakpoints. >> >> I had to ask slightly different. >> "How precise must be the control of this bit?" >> Part of this question is the question below about what happens when >> the method invocation is finished. >> I realized now that it can impact only setting breakpoints. >> Suppose, we did not clear the bit in time and then another breakpoint >> is set. >> The only bad thing is that this new breakpoint will be useless. > > Yes. 
We set the on_stack bit which causes setting the is_running_emcp > bit during safepoints for class redefinition and class unloading. > After the safepoint, the on_stack bit is cleared. After the > safepoint, we may also set breakpoints using the is_running_emcp bit. > If the method has exited we would set a breakpoint in a method that is > never reached. But this shouldn't be noticeable to the programmer. > > The method's is_running_emcp bit and maybe metadata would be cleaned > up the next time we do class unloading at a safepoint. > >> >> But let me look at new webrev first to see if any update is needed here. >> > > Yes, please review this again and let me know if this does what I > claim it does. > > Thank you! > Coleen From tobias.hartmann at oracle.com Wed Sep 3 07:28:26 2014 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Wed, 03 Sep 2014 09:28:26 +0200 Subject: [8u40] RFR(S): 8048879: "unexpected yanked node" opto/postaloc.cpp:139 Message-ID: <5406C31A.5000408@oracle.com> Hi, please review this 8u40 backport request. The changes were pushed two weeks ago and nightly testing showed no problems. The patch applies cleanly to 8u40. Master Bug: https://bugs.openjdk.java.net/browse/JDK-8048879 Webrev: http://cr.openjdk.java.net/~thartmann/8048879/webrev.00/ Changeset: http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/7c8d05c88072 Thanks, Tobias From tobias.hartmann at oracle.com Wed Sep 3 11:10:24 2014 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Wed, 03 Sep 2014 13:10:24 +0200 Subject: [9] RFR (L): 8015774: Add support for multiple code heaps In-Reply-To: <5400B975.5030703@oracle.com> References: <53FF1BF6.8070600@oracle.com> <53FFB78C.9060805@oracle.com> <540089D1.4060600@oracle.com> <5400B975.5030703@oracle.com> Message-ID: <5406F720.2080603@oracle.com> Hi Vladimir, thanks for the review. On 29.08.2014 19:33, Vladimir Kozlov wrote: > On 8/29/14 7:10 AM, Tobias Hartmann wrote: >> Hi Vladimir, >> >> thanks for the review. 
>> >> On 29.08.2014 01:13, Vladimir Kozlov wrote: >>> For the record, SegmentedCodeCache is enabled by default when >>> TieredCompilation is enabled and ReservedCodeCacheSize >>> >= 240 MB. Otherwise it is false by default. >> >> Exactly. >> >>> arguments.cpp - in set_tiered_flags() swap SegmentedCodeCache >>> setting and segments size adjustment - do adjustment >>> only if SegmentedCodeCache is enabled. >> >> Done. >> >>> Also I think each flag should be checked and adjusted separately. >>> You may bail out (vm_exit_during_initialization) if >>> sizes do not add up. >> >> I think we should only increase the sizes if they are all default. >> Otherwise we would for example fail if the user sets >> the NonMethodCodeHeapSize and the ProfiledCodeHeapSize because the >> NonProfiledCodeHeap size is multiplied by 5. What do >> you think? > > But ReservedCodeCacheSize is scaled anyway and you will get sum of > sizes != whole size. We need to do something. I agree. I changed it as you suggested first: The code heap sizes are scaled individually and we bail out if the sizes are not consistent with ReservedCodeCacheSize. > BTW the error message for next check should print all sizes, user may > not know the default value of some which he did not specified on > command line. > > (NonMethodCodeHeapSize + NonProfiledCodeHeapSize + > ProfiledCodeHeapSize) != ReservedCodeCacheSize) The error message now prints the sizes in brackets. >>> And use >> >> I think the rest of this sentence is missing :) > > And use FLAG_SET_ERGO() when you scale. :) Done. I also changed the implementation of CodeCache::initialize_heaps() accordingly. >>> Align second line: >>> >>> 2461 } else if ((!FLAG_IS_DEFAULT(NonMethodCodeHeapSize) || >>> !FLAG_IS_DEFAULT(ProfiledCodeHeapSize) || >>> !FLAG_IS_DEFAULT(NonProfiledCodeHeapSize)) >>> 2462 && (NonMethodCodeHeapSize + NonProfiledCodeHeapSize + >>> ProfiledCodeHeapSize) != ReservedCodeCacheSize) { >> >> Done. 
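The sum check quoted above is worth spelling out: with a segmented code cache the three heap sizes must add up exactly to ReservedCodeCacheSize, and startup bails out otherwise. A standalone sketch of that arithmetic (CodeHeapSizes and the concrete values are made up for illustration; the real values come from the VM flags NonMethodCodeHeapSize, ProfiledCodeHeapSize, NonProfiledCodeHeapSize and ReservedCodeCacheSize):

```java
public class CodeHeapSizes {
    // Illustrative values in KB; the concrete split is invented here,
    // only the consistency condition matters.
    static final long NON_METHOD   = 5 * 1024;          // 5 MB
    static final long PROFILED     = 117 * 1024 + 512;  // ~117.5 MB
    static final long NON_PROFILED = 117 * 1024 + 512;  // ~117.5 MB
    static final long RESERVED     = 240 * 1024;        // 240 MB

    // Mirrors the check discussed above: segment sizes must add up
    // exactly to the reserved code cache size.
    static boolean consistent(long nonMethod, long profiled,
                              long nonProfiled, long reserved) {
        return nonMethod + profiled + nonProfiled == reserved;
    }

    public static void main(String[] args) {
        if (!consistent(NON_METHOD, PROFILED, NON_PROFILED, RESERVED)) {
            // The VM would call vm_exit_during_initialization here.
            throw new IllegalStateException("code heap sizes do not add up");
        }
        System.out.println("sizes consistent");
    }
}
```

The point is only the consistency condition enforced at startup, not the particular split chosen here.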
>>> codeCache.cpp - in initialize_heaps() add new methods in C1 and C2
>>> to return the buffer_size they need. Add assert(SegmentedCodeCache)
>>> to this method to show that we call it only in such a case.
>>
>> Done.
>>
>>> You do the adjustment only when all flags are default. But you still
>>> need to check that you have space in NonMethodCodeHeap for scratch
>>> buffers.
>>
>> I added the following check:
>>
>>   // Make sure we have enough space for the code buffers
>>   if (NonMethodCodeHeapSize < code_buffers_size) {
>>     vm_exit_during_initialization("Not enough space for code buffers in CodeCache");
>>   }
>
> I think you need to take into account min_code_cache_size as in arguments.cpp:
>
>   uint min_code_cache_size = (CodeCacheMinimumUseSpace DEBUG_ONLY(* 3)) + CodeCacheMinimumFreeSpace;
>
>   if (NonMethodCodeHeapSize < (min_code_cache_size + code_buffers_size)) {

True, I changed it.

> It would be nice if this code in initialize_heaps() could be moved to
> be called during argument parsing, if we could get the number of
> compiler threads there. But I understand that we can't do that until
> the compilation policy is set :(

Yes, this is not possible because we need to know the number of C1/C2 compiler threads.

New webrev: http://cr.openjdk.java.net/~thartmann/8015774/webrev.05/

Thanks,
Tobias

>>> codeCache.hpp - comment alignment:
>>>   + // Creates a new heap with the given name and size, containing CodeBlobs of the given type
>>>   ! static void add_heap(ReservedSpace rs, const char* name, size_t size_initial, int code_blob_type);
>>
>> Done.
>>
>>> nmethod.cpp - in new() can we mark nmethod allocation critical only
>>> when SegmentedCodeCache is enabled?
>>
>> Yes, that's what we do with:
>>
>>   809 bool is_critical = SegmentedCodeCache;
>>
>> Or what are you referring to?
>
> Somehow I missed that SegmentedCodeCache is used already. It is fine then.
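The size checks the reviewers converge on above can be sketched roughly as follows. This is a hypothetical simplification for illustration only, not the actual arguments.cpp/codeCache.cpp code; the struct and function names are invented, while the flag names mirror the ones discussed in the thread:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical sketch of the two checks discussed above: the three code
// heap sizes must add up to ReservedCodeCacheSize, and the non-method
// heap must leave room for the minimum code cache space plus the
// compilers' scratch buffers.
struct CodeHeapSizes {
  uint64_t non_method;     // NonMethodCodeHeapSize
  uint64_t profiled;       // ProfiledCodeHeapSize
  uint64_t non_profiled;   // NonProfiledCodeHeapSize
};

inline bool sizes_consistent(const CodeHeapSizes& s,
                             uint64_t reserved_code_cache_size,
                             uint64_t min_code_cache_size,
                             uint64_t code_buffers_size) {
  // The individually scaled heaps must sum to the whole cache; a mismatch
  // is the case where the VM would bail out via vm_exit_during_initialization().
  if (s.non_method + s.profiled + s.non_profiled != reserved_code_cache_size) {
    return false;
  }
  // The non-method heap must hold the minimum cache plus scratch buffers.
  return s.non_method >= min_code_cache_size + code_buffers_size;
}
```

With the 240 MB default discussed earlier, a 5 MB non-method heap passes as long as the minimum space plus scratch buffers fits inside it, and any sum mismatch fails the check.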
> > Thanks, > Vladimir > >> >> New webrev: http://cr.openjdk.java.net/~thartmann/8015774/webrev.04 >> >> Thanks, >> Tobias >> >>> Thanks, >>> Vladimir >>> >>> On 8/28/14 5:09 AM, Tobias Hartmann wrote: >>>> Hi, >>>> >>>> the segmented code cache JEP is now targeted. Please review the final >>>> implementation before integration. The previous RFR, including a short >>>> description, can be found here [1]. >>>> >>>> JEP: https://bugs.openjdk.java.net/browse/JDK-8043304 >>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8015774 >>>> Implementation: >>>> http://cr.openjdk.java.net/~thartmann/8015774/webrev.03/ >>>> JDK-Test fix: >>>> http://cr.openjdk.java.net/~thartmann/8015774_jdk_test/webrev.00/ >>>> >>>> Changes since the last review: >>>> - Merged with other changes (for example, G1 class unloading >>>> changes [2]) >>>> - Fixed some minor bugs that showed up during testing >>>> - Refactoring of 'NMethodIterator' and CodeCache implementation >>>> - Non-method CodeHeap size increased to 5 MB >>>> - Fallback solution: Store non-method code in the non-profiled code >>>> heap >>>> if there is not enough space in the non-method code heap (see >>>> 'CodeCache::allocate') >>>> >>>> Additional testing: >>>> - BigApps (Weblogic, Dacapo, runThese, Kitchensink) >>>> - Compiler and GC nightlies >>>> - jtreg tests >>>> - VM (NSK) Testbase >>>> - More performance testing (results attached to the bug) >>>> >>>> Thanks, >>>> Tobias >>>> >>>> [1] >>>> http://mail.openjdk.java.net/pipermail/hotspot-compiler-dev/2014-April/014098.html >>>> >>>> >>>> [2] https://bugs.openjdk.java.net/browse/JDK-8049421 >> From volker.simonis at gmail.com Wed Sep 3 12:48:49 2014 From: volker.simonis at gmail.com (Volker Simonis) Date: Wed, 3 Sep 2014 14:48:49 +0200 Subject: RFR(XXS): 8057129: Fix AIX build after the Extend CompileCommand=option change 8055286 Message-ID: Hi, could somebody please review and sponsor this tiny change which fixes an AIX build failure after "8055286: Extend 
CompileCommand=option to handle numeric parameters" (details below). It would be nice if this fix could be pushed to hs-comp before hs-comp gets pushed to the other hs repos:

http://cr.openjdk.java.net/~simonis/webrevs/8057129/
https://bugs.openjdk.java.net/browse/JDK-8057129

The AIX xlC compiler is overly picky with regard to section 14.6.4.2 "Candidate functions" of the C++ standard (see http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2011/n3242.pdf), which states:

"If the function name is an unqualified-id and the call would be ill-formed or would find a better match had the lookup within the associated namespaces considered all the function declarations with external linkage introduced in those namespaces in all translation units, not just considering those declarations found in the template definition and template instantiation contexts, then the program has undefined behavior."

xlC implements this by not taking into account static functions, which have internal linkage, and terminates with the error messages:

"hotspot-comp/src/share/vm/compiler/compilerOracle.cpp", line 364.10: 1540-0274 (S) The name lookup for "get_option_value" did not find a declaration.
"hotspot-comp/src/share/vm/compiler/compilerOracle.cpp", line 364.10: 1540-1292 (I) Static declarations are not considered for a function call if the function is not qualified.

The fix is trivial - just qualify the call to "get_option_value" like this:

  return ::get_option_value(method, option, value);

Thank you and best regards,
Volker

From aph at redhat.com  Wed Sep 3 14:16:32 2014
From: aph at redhat.com (Andrew Haley)
Date: Wed, 03 Sep 2014 15:16:32 +0100
Subject: Release store in C2 putfield
Message-ID: <540722C0.1060404@redhat.com>

In Parse::do_put_xxx, I see

  const MemNode::MemOrd mo =
    is_vol ?
    // Volatile fields need releasing stores.
    MemNode::release :
    // Non-volatile fields also need releasing stores if they hold an
    // object reference, because the object reference might point to
    // a freshly created object.
    StoreNode::release_if_reference(bt);

AArch64 doesn't need a release store here: its memory guarantees are strong enough that a simple store is sufficient. But my question is not about that, but about how to handle it properly.

I can, of course, do something like:

  -      StoreNode::release_if_reference(bt);
  -
  +      NOT_AARCH64(StoreNode::release_if_reference(bt))
  +      AARCH64_ONLY(MemNode::unordered);

But I don't want to put AArch64-specific code in shared files. There doesn't seem to be a better way to do it, though.

Any suggestions?

Thanks,
Andrew.

From stefan.johansson at oracle.com  Wed Sep 3 14:34:06 2014
From: stefan.johansson at oracle.com (Stefan Johansson)
Date: Wed, 03 Sep 2014 16:34:06 +0200
Subject: RFR: 8u40: Thread and management extension support
Message-ID: <540726DE.20207@oracle.com>

Hi,

Please review these changes to allow thread and management extensions in the VM:

http://cr.openjdk.java.net/~sjohanss/thread-ext/webrev.00/

There is currently no JBS issue open for this, but one will be opened shortly.

Best regards,
Stefan

From vladimir.kozlov at oracle.com  Wed Sep 3 16:47:43 2014
From: vladimir.kozlov at oracle.com (Vladimir Kozlov)
Date: Wed, 03 Sep 2014 09:47:43 -0700
Subject: [8u40] RFR(S): 8048879: "unexpected yanked node" opto/postaloc.cpp:139
In-Reply-To: <5406C31A.5000408@oracle.com>
References: <5406C31A.5000408@oracle.com>
Message-ID: <5407462F.2060802@oracle.com>

Good.

Thanks,
Vladimir

On 9/3/14 12:28 AM, Tobias Hartmann wrote:
> Hi,
>
> please review this 8u40 backport request. The changes were pushed two
> weeks ago and nightly testing showed no problems.
>
> The patch applies cleanly to 8u40.
> > Master Bug: https://bugs.openjdk.java.net/browse/JDK-8048879 > Webrev: http://cr.openjdk.java.net/~thartmann/8048879/webrev.00/ > Changeset: http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/7c8d05c88072 > > Thanks, > Tobias From aleksey.shipilev at oracle.com Wed Sep 3 16:49:14 2014 From: aleksey.shipilev at oracle.com (Aleksey Shipilev) Date: Wed, 03 Sep 2014 20:49:14 +0400 Subject: Release store in C2 putfield In-Reply-To: <540722C0.1060404@redhat.com> References: <540722C0.1060404@redhat.com> Message-ID: <5407468A.2020004@oracle.com> Hi Andrew, On 09/03/2014 06:16 PM, Andrew Haley wrote: > In Parse::do_put_xxx, I see > > const MemNode::MemOrd mo = > is_vol ? > // Volatile fields need releasing stores. > MemNode::release : > // Non-volatile fields also need releasing stores if they hold an > // object reference, because the object reference might point to > // a freshly created object. > StoreNode::release_if_reference(bt); > > AArch64 doesn't need a release store here: its memory guarantees are > strong enough that a simple store is sufficient. But my question is > not about that, but how to handle it properly. I can't answer the question you posed, but let me challenge your premise. Why is a simple store is sufficient here for AArch64? Do the stores ordered on AArch64 (I thought not)? I thought the "RC" part in "RCsc" only applies to explicit synchronization instructions. -Aleksey. From vitalyd at gmail.com Wed Sep 3 16:54:07 2014 From: vitalyd at gmail.com (Vitaly Davidovich) Date: Wed, 3 Sep 2014 12:54:07 -0400 Subject: Release store in C2 putfield In-Reply-To: <5407468A.2020004@oracle.com> References: <540722C0.1060404@redhat.com> <5407468A.2020004@oracle.com> Message-ID: Also I thought the memord recorded in the node (also) prevents compiler from reordering the stores. So even if AArch64 cpu doesn't reorder, what would prevent compiler reordering? 
Sent from my phone On Sep 3, 2014 12:49 PM, "Aleksey Shipilev" wrote: > Hi Andrew, > > On 09/03/2014 06:16 PM, Andrew Haley wrote: > > In Parse::do_put_xxx, I see > > > > const MemNode::MemOrd mo = > > is_vol ? > > // Volatile fields need releasing stores. > > MemNode::release : > > // Non-volatile fields also need releasing stores if they hold an > > // object reference, because the object reference might point to > > // a freshly created object. > > StoreNode::release_if_reference(bt); > > > > AArch64 doesn't need a release store here: its memory guarantees are > > strong enough that a simple store is sufficient. But my question is > > not about that, but how to handle it properly. > > I can't answer the question you posed, but let me challenge your premise. > > Why is a simple store is sufficient here for AArch64? Do the stores > ordered on AArch64 (I thought not)? I thought the "RC" part in "RCsc" > only applies to explicit synchronization instructions. > > -Aleksey. > > From aph at redhat.com Wed Sep 3 17:00:50 2014 From: aph at redhat.com (Andrew Haley) Date: Wed, 03 Sep 2014 18:00:50 +0100 Subject: Release store in C2 putfield In-Reply-To: <5407468A.2020004@oracle.com> References: <540722C0.1060404@redhat.com> <5407468A.2020004@oracle.com> Message-ID: <54074942.9050506@redhat.com> On 09/03/2014 05:49 PM, Aleksey Shipilev wrote: > Hi Andrew, > > On 09/03/2014 06:16 PM, Andrew Haley wrote: >> In Parse::do_put_xxx, I see >> >> const MemNode::MemOrd mo = is_vol ? // Volatile fields need releasing stores. MemNode::release : // Non-volatile fields also need releasing stores if they hold an // object reference, because the object reference might point to // a freshly created object. StoreNode::release_if_reference(bt); >> >> AArch64 doesn't need a release store here: its memory guarantees are strong enough that a simple store is sufficient. But my question is not about that, but how to handle it properly. 
> > I can't answer the question you posed, but let me challenge your premise. > > Why is a simple store is sufficient here for AArch64? Do the stores ordered on AArch64 (I thought not)? I thought the "RC" part in "RCsc" only applies to explicit synchronization instructions. I discussed this with Peter Sewell, and it's explained in his (co-authored) paper "A Tutorial Introduction to the ARM and POWER Relaxed Memory Models" at http://www.cl.cam.ac.uk/~pes20/ppc-supplemental/test7.pdf in Section 4.1, "Enforcing Order with Dependencies" In the AArch64 spec, we have: B2.7.2 Ordering requirements If an address dependency exists between two reads or between a read and a write, then those memory accesses are observed in program order by all observers within the shareability domain of the memory So, an address dependency and a DMB when an object is created is all we need. Andrew. From aph at redhat.com Wed Sep 3 17:02:58 2014 From: aph at redhat.com (Andrew Haley) Date: Wed, 03 Sep 2014 18:02:58 +0100 Subject: Release store in C2 putfield In-Reply-To: References: <540722C0.1060404@redhat.com> <5407468A.2020004@oracle.com> Message-ID: <540749C2.5020506@redhat.com> On 09/03/2014 05:54 PM, Vitaly Davidovich wrote: > Also I thought the memord recorded in the node (also) prevents compiler > from reordering the stores. So even if AArch64 cpu doesn't reorder, what > would prevent compiler reordering? What reordering do you wish to prevent? The store can't happen before the object has been created. Andrew. From vitalyd at gmail.com Wed Sep 3 17:04:47 2014 From: vitalyd at gmail.com (Vitaly Davidovich) Date: Wed, 3 Sep 2014 13:04:47 -0400 Subject: Release store in C2 putfield In-Reply-To: <540749C2.5020506@redhat.com> References: <540722C0.1060404@redhat.com> <5407468A.2020004@oracle.com> <540749C2.5020506@redhat.com> Message-ID: Order of new mem allocated, ctor running with additional stores, and the assignment to the reference. 
Or is this point where allocation + ctor are already ordered before? Sent from my phone On Sep 3, 2014 1:03 PM, "Andrew Haley" wrote: > On 09/03/2014 05:54 PM, Vitaly Davidovich wrote: > > Also I thought the memord recorded in the node (also) prevents compiler > > from reordering the stores. So even if AArch64 cpu doesn't reorder, what > > would prevent compiler reordering? > > What reordering do you wish to prevent? The store can't happen > before the object has been created. > > Andrew. > > From aph at redhat.com Wed Sep 3 17:07:51 2014 From: aph at redhat.com (Andrew Haley) Date: Wed, 03 Sep 2014 18:07:51 +0100 Subject: Release store in C2 putfield In-Reply-To: References: <540722C0.1060404@redhat.com> <5407468A.2020004@oracle.com> <540749C2.5020506@redhat.com> Message-ID: <54074AE7.4080600@redhat.com> On 09/03/2014 06:04 PM, Vitaly Davidovich wrote: > Order of new mem allocated, ctor running with additional stores, and the > assignment to the reference. Or is this point where allocation + ctor are > already ordered before? Yes. There's a store barrier after the object is created. > Sent from my phone > On Sep 3, 2014 1:03 PM, "Andrew Haley" wrote: > >> On 09/03/2014 05:54 PM, Vitaly Davidovich wrote: >>> Also I thought the memord recorded in the node (also) prevents compiler >>> from reordering the stores. So even if AArch64 cpu doesn't reorder, what >>> would prevent compiler reordering? >> >> What reordering do you wish to prevent? The store can't happen >> before the object has been created. >> >> Andrew. >> >> > From vladimir.kozlov at oracle.com Wed Sep 3 17:21:45 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Wed, 03 Sep 2014 10:21:45 -0700 Subject: Release store in C2 putfield In-Reply-To: <540722C0.1060404@redhat.com> References: <540722C0.1060404@redhat.com> Message-ID: <54074E29.8030500@oracle.com> Andrew, Do you need unordered in Parse::array_store() too? 
Another way of doing it is to define MemNode::release_if_reference() in .ad files in 'source %{' section. Vladimir On 9/3/14 7:16 AM, Andrew Haley wrote: > In Parse::do_put_xxx, I see > > const MemNode::MemOrd mo = > is_vol ? > // Volatile fields need releasing stores. > MemNode::release : > // Non-volatile fields also need releasing stores if they hold an > // object reference, because the object reference might point to > // a freshly created object. > StoreNode::release_if_reference(bt); > > AArch64 doesn't need a release store here: its memory guarantees are > strong enough that a simple store is sufficient. But my question is > not about that, but how to handle it properly. > > I can, of course, do something like: > > - StoreNode::release_if_reference(bt); > - > + NOT_AARCH64(StoreNode::release_if_reference(bt)) > + AARCH64_ONLY(MemNode::unordered); > > But I don't want to put AArch64-specific code in shared files. There > doesn't seem to be a better way to do it, though. > > Any suggestions? > > Thanks, > Andrew. > From aleksey.shipilev at oracle.com Wed Sep 3 17:47:10 2014 From: aleksey.shipilev at oracle.com (Aleksey Shipilev) Date: Wed, 03 Sep 2014 21:47:10 +0400 Subject: Release store in C2 putfield In-Reply-To: <54074942.9050506@redhat.com> References: <540722C0.1060404@redhat.com> <5407468A.2020004@oracle.com> <54074942.9050506@redhat.com> Message-ID: <5407541E.1070707@oracle.com> On 09/03/2014 09:00 PM, Andrew Haley wrote: > On 09/03/2014 05:49 PM, Aleksey Shipilev wrote: >> Why is a simple store is sufficient here for AArch64? Do the stores >> ordered on AArch64 (I thought not)? I thought the "RC" part in >> "RCsc" only applies to explicit synchronization instructions. 
>
> I discussed this with Peter Sewell, and it's explained in his
> (co-authored) paper "A Tutorial Introduction to the ARM and POWER
> Relaxed Memory Models" at
> http://www.cl.cam.ac.uk/~pes20/ppc-supplemental/test7.pdf in Section
> 4.1, "Enforcing Order with Dependencies".
>
> In the AArch64 spec, we have:
>
>   B2.7.2 Ordering requirements
>
>   If an address dependency exists between two reads or between a read
>   and a write, then those memory accesses are observed in program order
>   by all observers within the shareability domain of the memory
>
> So, an address dependency and a DMB when an object is created is all
> we need.

Okay, I read that paper from Sewell et al. before. I don't quite believe this is about "reading" the constructed objects, so I don't see how address dependencies are applicable here.

If that part of the HotSpot code is indeed about piggy-backing on address dependencies, and it is fine for AArch64, then it is just as fine for all other architectures (except Alpha)...

I do think this is an over-cautious secondary release when publishing the reference to the field. History tells us that this block was added with the PPC changes:

  $ hg log -r 5983
  changeset:   5983:2113136690bc
  parent:      5981:eb178e97560c
  user:        goetz
  date:        Fri Nov 15 11:05:32 2013 -0800
  summary:     8024921: PPC64 (part 113): Extend Load and Store nodes to know about memory ordering

I'm puzzled as to why we have this in the code. Goetz? Maybe instead of special-casing AArch64, PPC should be special-cased?

-Aleksey.
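The publication pattern this thread is debating can be sketched at the C++ level as follows. This is an illustrative analogue only, not HotSpot code, and all names in it are invented for the example; it shows the conservative release-store variant on the producer side and the address-dependent read on the consumer side:

```cpp
#include <atomic>
#include <cassert>

// Illustrative analogue of the pattern under discussion: a producer
// initializes an object and then publishes a reference to it. The
// question in the thread is whether the publishing store must itself be
// a release store, or whether the reader's address dependency (load the
// pointer, then dereference it) already orders the accesses on a given
// architecture, as AArch64's B2.7.2 rule suggests.
struct Payload {
  int field;
};

std::atomic<Payload*> g_published{nullptr};

void publish(Payload* p) {
  p->field = 42;  // "construction" of the object
  // Conservative variant: a release store orders the construction above
  // before the publication on every architecture; this corresponds to
  // what release_if_reference() arranges for reference-typed stores.
  g_published.store(p, std::memory_order_release);
}

int consume_field() {
  Payload* p;
  while ((p = g_published.load(std::memory_order_acquire)) == nullptr) {
    // spin until a payload is published
  }
  // The reader forms the field's address from the loaded pointer; this
  // is the address dependency that AArch64 promises to respect.
  return p->field;
}
```

Single-threaded, `publish` followed by `consume_field` trivially yields the constructed value; the interesting ordering questions only arise when the two run on different threads.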
From aph at redhat.com Wed Sep 3 18:02:52 2014 From: aph at redhat.com (Andrew Haley) Date: Wed, 03 Sep 2014 19:02:52 +0100 Subject: Release store in C2 putfield In-Reply-To: <5407541E.1070707@oracle.com> References: <540722C0.1060404@redhat.com> <5407468A.2020004@oracle.com> <54074942.9050506@redhat.com> <5407541E.1070707@oracle.com> Message-ID: <540757CC.2050904@redhat.com> On 09/03/2014 06:47 PM, Aleksey Shipilev wrote: > On 09/03/2014 09:00 PM, Andrew Haley wrote: >> On 09/03/2014 05:49 PM, Aleksey Shipilev wrote: >>> Why is a simple store is sufficient here for AArch64? Do the stores ordered on AArch64 (I thought not)? I thought the "RC" part in "RCsc" only applies to explicit synchronization instructions. >> >> I discussed this with Peter Sewell, and it's explained in his (co-authored) paper "A Tutorial Introduction to the ARM and POWER Relaxed Memory Models" at http://www.cl.cam.ac.uk/~pes20/ppc-supplemental/test7.pdf in Section 4.1, "Enforcing Order with Dependencies" >> >> In the AArch64 spec, we have: >> >> B2.7.2 Ordering requirements >> >> If an address dependency exists between two reads or between a read and a write, then those memory accesses are observed in program order by all observers within the shareability domain of the memory >> >> So, an address dependency and a DMB when an object is created is all we need. > > Okay, I read that paper from Sewell et al. before. I don't quite believe this is about "reading" the constructed objects, and so I don't see how address dependencies are applicable here. IMO the case he describes is exactly the case we're talking about: there is an address dependency between a write which stores the address of an object and another thread which reads that stored address and uses it to form a field reference. And that's Sewell's own interpretation: I did ask this exact question. 
> If that part of HS code is indeed about piggy-backing on address dependencies, and it is fine for AArch64, Does what the code is about matter? It's just a putfield. > then it is as well fine for all other architectures (except Alpha)... I think so, but I'm only sure about AArch64. > I do think this is an over-cautious secondary release when publishing the reference to the field. Indeed. Please note that I have done a lot of jcstress testing; not that that proves anything with respect to the abstract architecture, of course. Andrew. From aleksey.shipilev at oracle.com Wed Sep 3 18:10:28 2014 From: aleksey.shipilev at oracle.com (Aleksey Shipilev) Date: Wed, 03 Sep 2014 22:10:28 +0400 Subject: Release store in C2 putfield In-Reply-To: <540757CC.2050904@redhat.com> References: <540722C0.1060404@redhat.com> <5407468A.2020004@oracle.com> <54074942.9050506@redhat.com> <5407541E.1070707@oracle.com> <540757CC.2050904@redhat.com> Message-ID: <54075994.1050609@oracle.com> On 09/03/2014 10:02 PM, Andrew Haley wrote: > On 09/03/2014 06:47 PM, Aleksey Shipilev wrote: >> On 09/03/2014 09:00 PM, Andrew Haley wrote: >>> On 09/03/2014 05:49 PM, Aleksey Shipilev wrote: >>>> Why is a simple store is sufficient here for AArch64? Do the stores ordered on AArch64 (I thought not)? I thought the "RC" part in "RCsc" only applies to explicit synchronization instructions. 
>>> >>> I discussed this with Peter Sewell, and it's explained in his (co-authored) paper "A Tutorial Introduction to the ARM and POWER Relaxed Memory Models" at http://www.cl.cam.ac.uk/~pes20/ppc-supplemental/test7.pdf in Section 4.1, "Enforcing Order with Dependencies" >>> >>> In the AArch64 spec, we have: >>> >>> B2.7.2 Ordering requirements >>> >>> If an address dependency exists between two reads or between a read and a write, then those memory accesses are observed in program order by all observers within the shareability domain of the memory >>> >>> So, an address dependency and a DMB when an object is created is all we need. >> >> Okay, I read that paper from Sewell et al. before. I don't quite believe this is about "reading" the constructed objects, and so I don't see how address dependencies are applicable here. > > IMO the case he describes is exactly the case we're talking about: > there is an address dependency between a write which stores the > address of an object and another thread which reads that stored > address and uses it to form a field reference. Nope. Address dependency is between the *read* of the object reference, and the field read which uses the value from the first read. There is no address dependency between the write and the read. To quote the paper: "There is an address dependency from a read instruction to a program-order-later read or write instruction when the value read by the first is used to compute the address used for the second." I understand what you meant, but that's for the *consumer* side. put_xxx seems to be a *producer* side, and address dependencies are not applicable here. >> then it is as well fine for all other architectures (except Alpha)... > > I think so, but I'm only sure about AArch64. So there, let's figure out whether we should just purge the entire block! :) >> I do think this is an over-cautious secondary release when publishing the reference to the field. > > Indeed. 
> Please note that I have done a lot of jcstress testing; not that that
> proves anything with respect to the abstract architecture, of course.

Alas, I think the coverage for storing references is subpar in jcstress. And the proper release store in Parse::do_exits() masks this barrier anyway.

-Aleksey.

From aph at redhat.com  Wed Sep 3 18:16:28 2014
From: aph at redhat.com (Andrew Haley)
Date: Wed, 03 Sep 2014 19:16:28 +0100
Subject: Release store in C2 putfield
In-Reply-To: <54074E29.8030500@oracle.com>
References: <540722C0.1060404@redhat.com> <54074E29.8030500@oracle.com>
Message-ID: <54075AFC.6020209@redhat.com>

On 09/03/2014 06:21 PM, Vladimir Kozlov wrote:
> Andrew,
>
> Do you need unordered in Parse::array_store() too?

Yes.

> Another way of doing it is to define MemNode::release_if_reference()
> in .ad files in 'source %{' section.

Yes, that's what we've got now. Thanks.

Andrew.

# HG changeset patch
# User Edward Nevill edward.nevill at linaro.org
# Date 1409307165 -3600
#      Fri Aug 29 11:12:45 2014 +0100
# Node ID fc245bc14fa3589074c78ceb0e25ecf36ee3e110
# Parent  32fae3443576ac6b4b5ac0770c0829ce6c08764e
Dont use a release store when storing an OOP in a non-volatile field.

diff -r 32fae3443576 -r fc245bc14fa3 src/share/vm/opto/memnode.hpp
--- a/src/share/vm/opto/memnode.hpp	Mon Sep 01 13:10:18 2014 -0400
+++ b/src/share/vm/opto/memnode.hpp	Fri Aug 29 11:12:45 2014 +0100
@@ -503,6 +503,12 @@
   // Conservatively release stores of object references in order to
   // ensure visibility of object initialization.
   static inline MemOrd release_if_reference(const BasicType t) {
+    // AArch64 doesn't need a release store because if there is an
+    // address dependency between a read and a write, then those
+    // memory accesses are observed in program order by all observers
+    // within the shareability domain.
+    AARCH64_ONLY(return unordered);
+
     const MemOrd mo = (t == T_ARRAY ||
                        t == T_ADDRESS || // Might be the address of an object reference (`boxing').
                        t == T_OBJECT) ?
                       release : unordered;

diff -r 32fae3443576 -r fc245bc14fa3 src/share/vm/opto/parse2.cpp
--- a/src/share/vm/opto/parse2.cpp	Mon Sep 01 13:10:18 2014 -0400
+++ b/src/share/vm/opto/parse2.cpp	Fri Aug 29 11:12:45 2014 +0100
@@ -1689,7 +1689,7 @@
     a = pop();  // the array itself
     const TypeOopPtr* elemtype = _gvn.type(a)->is_aryptr()->elem()->make_oopptr();
     const TypeAryPtr* adr_type = TypeAryPtr::OOPS;
-    Node* store = store_oop_to_array(control(), a, d, adr_type, c, elemtype, T_OBJECT, MemNode::release);
+    Node* store = store_oop_to_array(control(), a, d, adr_type, c, elemtype, T_OBJECT, StoreNode::release_if_reference(T_OBJECT));
     break;
   }
   case Bytecodes::_lastore: {

From coleen.phillimore at oracle.com  Wed Sep 3 18:20:44 2014
From: coleen.phillimore at oracle.com (Coleen Phillimore)
Date: Wed, 03 Sep 2014 14:20:44 -0400
Subject: RFR 8055008: Clean up code that saves the previous versions of redefined classes
In-Reply-To: <5406A3C9.6050205@oracle.com>
References: <53EE6B22.3040107@oracle.com> <53F3C426.9080408@oracle.com>
	<53F3D571.7060609@oracle.com> <53F3E35B.5010205@oracle.com>
	<53F4C49D.4070404@oracle.com> <53F4FBDB.9080508@oracle.com>
	<53F4FEA5.2090606@oracle.com> <53F52521.1080309@oracle.com>
	<53F54AC0.7010007@oracle.com> <53F752F9.9090908@oracle.com>
	<53FDCD05.1080307@oracle.com> <53FF9370.9090603@oracle.com>
	<53FFA281.7050701@oracle.com> <5405B820.3060505@oracle.com>
	<5406A3C9.6050205@oracle.com>
Message-ID: <54075BFC.2070800@oracle.com>

Thank you Serguei for these good review comments and questions. I made the changes that you suggested; you can see the open webrev here:

http://cr.openjdk.java.net/~coleenp/8055008_04/

My comments are below.

On 9/3/14, 1:14 AM, serguei.spitsyn at oracle.com wrote:
> Coleen,
>
> It looks good in general.
>
> But I have some questions and minor suggestions below.
>
> src/share/vm/oops/instanceKlass.cpp
>
> In the fragment below:
>
>   3488   if (!method->on_stack()) {
>   3489     if (method->is_running_emcp()) {
>   3490       method->set_running_emcp(false);   // no breakpoints for non-running methods
>   3491     }
>   3492   } else {
>   3493     assert (method->is_obsolete() || method->is_running_emcp(),
>   3494             "emcp method cannot run after emcp bit is cleared");
>   3495     // RC_TRACE macro has an embedded ResourceMark
>   3496     RC_TRACE(0x00000200,
>   3497       ("purge: %s(%s): prev method @%d in version @%d is alive",
>   3498       method->name()->as_C_string(),
>   3499       method->signature()->as_C_string(), j, version));
>   3500     if (method->method_data() != NULL) {
>   3501       // Clean out any weak method links for running methods
>   3502       // (also should include not EMCP methods)
>   3503       method->method_data()->clean_weak_method_links();
>   3504     }
>   3505   }
>
> It is not clear to me what happens in the situation when the method is
> not running (both emcp and obsolete cases). Why do we keep such
> methods, should we remove them?

No, we don't remove methods individually.

> Is it because the method_data needs to be collected by the GC first?

No, we don't remove them because we remove all of the metadata together, with InstanceKlass being the holder for all the data. We have to do method_data cleaning for all on_stack methods so that they don't refer to methods that are not on_stack. The parenthetical comment was because we were only doing this for emcp methods, and now we are doing this for all methods (it was a bug).

> If it is so, do we need a comment explaining it?

Ok, how about adding to the comment at the top of the methods() loop:

  // At least one method is live in this previous version so clean its MethodData.
  // Reset dead EMCP methods not to get breakpoints.
  // All methods are deallocated when all of the methods for this class are no
  // longer running.
  Array<Method*>* method_refs = pv_node->methods();
  if (method_refs != NULL) {
    RC_TRACE(0x00000200, ("purge: previous methods length=%d",
      method_refs->length()));

> Another question is about setting is_running_emcp.
> Any method after redefinition can be in one of three states:
>   method->on_stack()  => 'is_obsolete' or 'is_emcp'
>   !method->on_stack() => 'to-be-removed'
>
> I think the 'is_running_emcp' flag is confusing (a suggestion: use
> 'is_emcp'). It is not consistent with the 'is_obsolete' flag because
> we do not spell 'is_running_obsolete', as non-running obsolete methods
> hit the category 'to-be-removed'.

It's different than just saying it's emcp. It's emcp and it's also running, so it needs a breakpoint. The states are really:

  is_obsolete() or !is_obsolete()   // the latter is the same as is_emcp()
  is_running_emcp() == !is_obsolete() && method->on_stack()

We need to distinguish the running emcp methods from the non-running emcp methods. I guess we could just set breakpoints in all emcp methods whether they are running or not, and not have this flag. This seemed to preserve the old behavior better.

> Just some thoughts. Sorry for looping around this. :(
>
> A minor suggestion on the following fragment:
>
>   3611   if (cp_ref->on_stack()) {
>   3612     if (emcp_method_count == 0) {
>   3613       RC_TRACE(0x00000400, ("add: all methods are obsolete; no added EMCP refs"));
>   3614     } else {
>   3615       // At least one method is still running, check for EMCP methods
>   3616       for (int i = 0; i < old_methods->length(); i++) {
>   3617         Method* old_method = old_methods->at(i);
>   3618         if (!old_method->is_obsolete() && old_method->on_stack()) {
>   3619           // if EMCP method (not obsolete) is on the stack, mark as EMCP so that
>   3620           // we can add breakpoints for it.
>   3621           old_method->set_running_emcp(true);
>   3622           RC_TRACE(0x00000400, ("add: EMCP method %s is on_stack " INTPTR_FORMAT, old_method->name_and_sig_as_C_string(), old_method));
>   3623         } else if (!old_method->is_obsolete()) {
>   3624           RC_TRACE(0x00000400, ("add: EMCP method %s is NOT on_stack " INTPTR_FORMAT, old_method->name_and_sig_as_C_string(), old_method));
>   3625         }
>   3626       }
>   3627     }
>   3628
>   3629     RC_TRACE(0x00000400, ("add: scratch class added; one of its methods is on_stack"));
>   3630     assert(scratch_class->previous_versions() == NULL, "shouldn't have a previous version");
>   3631     scratch_class->link_previous_versions(previous_versions());
>   3632     link_previous_versions(scratch_class());
>   3633   } else {
>   3634     RC_TRACE(0x00000400, ("add: scratch class not added; no methods are running"));
>   3635   }
>
> It'd be a simplification to reduce the indent like this:
>
>   if (!cp_ref->on_stack()) {
>     RC_TRACE(0x00000400, ("add: scratch class not added; no methods are running"));
>     return;
>   }
>   if (emcp_method_count == 0) {
>     RC_TRACE(0x00000400, ("add: all methods are obsolete; no added EMCP refs"));
>     return;

In this case, we have to add the previous_version because one of the old methods in the previous klass is running. We keep the previous klass on the list so that we can clean_weak_method_links in the obsolete methods. But I changed it to the following, which is simpler:

  if (!cp_ref->on_stack()) {
    RC_TRACE(0x00000400, ("add: scratch class not added; no methods are running"));
    return;
  }
  if (emcp_method_count == 0) {
    RC_TRACE(0x00000400, ("add: all methods are obsolete; no added EMCP refs"));
  } else {
    // the bit that loops around looking for emcp methods
  }
  // add to previous version list
> } > // At least one method is still running, check for EMCP methods > for (int i = 0; i < old_methods->length(); i++) { > Method* old_method = old_methods->at(i); > if (!old_method->is_obsolete() && old_method->on_stack()) { > // if EMCP method (not obsolete) is on the stack, mark as EMCP so that > // we can add breakpoints for it. > old_method->set_running_emcp(true); > RC_TRACE(0x00000400, ("add: EMCP method %s is on_stack " INTPTR_FORMAT, > old_method->name_and_sig_as_C_string(), old_method)); > } else if (!old_method->is_obsolete()) { > RC_TRACE(0x00000400, ("add: EMCP method %s is NOT on_stack " INTPTR_FORMAT, > old_method->name_and_sig_as_C_string(), old_method)); > } > } > RC_TRACE(0x00000400, ("add: scratch class added; one of its methods is on_stack")); > assert(scratch_class->previous_versions() == NULL, "shouldn't have a previous version"); > scratch_class->link_previous_versions(previous_versions()); > link_previous_versions(scratch_class()); > > > Also, from the 1st round review email exchange... > It'd be nice to add a comment somewhere above to explain this: > > > Yes. We set the on_stack bit which causes setting the is_running_emcp bit during safepoints > > for class redefinition and class unloading. After the safepoint, the on_stack bit is cleared. > > After the safepoint, we may also set breakpoints using the is_running_emcp bit. > > If the method has exited we would set a breakpoint in a method that is never reached. > But this shouldn't be noticeable to the programmer. How about here (and reworded slightly) // At least one method is still running, check for EMCP methods for (int i = 0; i < old_methods->length(); i++) { Method* old_method = old_methods->at(i); if (!old_method->is_obsolete() && old_method->on_stack()) { // if EMCP method (not obsolete) is on the stack, mark as EMCP so that // we can add breakpoints for it. 
// We set the method->on_stack bit during safepoints for class redefinition and // class unloading and use this bit to set the is_running_emcp bit. // After the safepoint, the on_stack bit is cleared and the running emcp // method may exit. If so, we would set a breakpoint in a method that // is never reached, but this won't be noticeable to the programmer. old_method->set_running_emcp(true); RC_TRACE(0x00000400, ("add: EMCP method %s is on_stack " INTPTR_FORMAT, old_method->name_and_sig_as_C_string(), old_method)); } else if (!old_method->is_obsolete()) { RC_TRACE(0x00000400, ("add: EMCP method %s is NOT on_stack " INTPTR_FORMAT, old_method->name_and_sig_as_C_string(), old_method)); } Thanks, Coleen > Thanks, > Serguei > > > On 9/2/14 5:29 AM, Coleen Phillimore wrote: >> >> Serguei, I didn't answer one of your questions. >> >> On 8/28/14, 5:43 PM, serguei.spitsyn at oracle.com wrote: >>>> This bit is set during purging previous versions when all methods >>>> have been marked on_stack() if found in various places. The bit is >>>> only used for setting breakpoints. >>> >>> I had to ask slightly different. >>> "How precise must be the control of this bit?" >>> Part of this question is the question below about what happens when >>> the method invocation is finished. >>> I realized now that it can impact only setting breakpoints. >>> Suppose, we did not clear the bit in time and then another >>> breakpoint is set. >>> The only bad thing is that this new breakpoint will be useless. >> >> Yes. We set the on_stack bit which causes setting the >> is_running_emcp bit during safepoints for class redefinition and >> class unloading. After the safepoint, the on_stack bit is cleared. >> After the safepoint, we may also set breakpoints using the >> is_running_emcp bit. If the method has exited we would set a >> breakpoint in a method that is never reached. But this shouldn't be >> noticeable to the programmer. 
>> >> The method's is_running_emcp bit and maybe metadata would be cleaned >> up the next time we do class unloading at a safepoint. >> >>> >>> But let me look at new webrev first to see if any update is needed >>> here. >>> >> >> Yes, please review this again and let me know if this does what I >> claim it does. >> >> Thank you! >> Coleen > From aph at redhat.com Wed Sep 3 18:25:55 2014 From: aph at redhat.com (Andrew Haley) Date: Wed, 03 Sep 2014 19:25:55 +0100 Subject: Release store in C2 putfield In-Reply-To: <54075994.1050609@oracle.com> References: <540722C0.1060404@redhat.com> <5407468A.2020004@oracle.com> <54074942.9050506@redhat.com> <5407541E.1070707@oracle.com> <540757CC.2050904@redhat.com> <54075994.1050609@oracle.com> Message-ID: <54075D33.7060400@redhat.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 On 09/03/2014 07:10 PM, Aleksey Shipilev wrote: > On 09/03/2014 10:02 PM, Andrew Haley wrote: >> On 09/03/2014 06:47 PM, Aleksey Shipilev wrote: >>> On 09/03/2014 09:00 PM, Andrew Haley wrote: >>>> On 09/03/2014 05:49 PM, Aleksey Shipilev wrote: >>>>> Why is a simple store sufficient here for AArch64? Are the stores ordered on AArch64 (I thought not)? I thought the "RC" part in "RCsc" only applies to explicit synchronization instructions. >>>> >>>> I discussed this with Peter Sewell, and it's explained in his (co-authored) paper "A Tutorial Introduction to the ARM and POWER Relaxed Memory Models" at http://www.cl.cam.ac.uk/~pes20/ppc-supplemental/test7.pdf in Section 4.1, "Enforcing Order with Dependencies" >>>> >>>> In the AArch64 spec, we have: >>>> >>>> B2.7.2 Ordering requirements >>>> >>>> If an address dependency exists between two reads or between a read and a write, then those memory accesses are observed in program order by all observers within the shareability domain of the memory >>>> >>>> So, an address dependency and a DMB when an object is created is all we need. >>> >>> Okay, I read that paper from Sewell et al. before. 
I don't quite believe this is about "reading" the constructed objects, and so I don't see how address dependencies are applicable here. >> >> IMO the case he describes is exactly the case we're talking about: there is an address dependency between a write which stores the address of an object and another thread which reads that stored address and uses it to form a field reference. > > Nope. Address dependency is between the *read* of the object reference, and the field read which uses the value from the first read. There is no address dependency between the write and the read. To quote the paper: "There is an address dependency from a read instruction to a program-order-later read or write instruction when the value read by the first is used to compute the address used for the second." Well alright, the address dependency is indeed between the read of an object reference and the later use of it to form an address, but it seems to me that the example in 4.1 is almost exactly our case, and the Forbidden state is exactly what the JMM says we mustn't see. > I understand what you meant, but that's for the *consumer* side. put_xxx seems to be a *producer* side, and address dependencies are not applicable here. > >>> then it is as well fine for all other architectures (except Alpha)... >> >> I think so, but I'm only sure about AArch64. > > So there, let's figure out whether we should just purge the entire block! :) Okay. It's better than arguing about interpretation of the paper. Andrew. 
-----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.14 (GNU/Linux) iQEcBAEBCAAGBQJUB10zAAoJEKXNYDUzL6Zx4PYIAJGAx70hKX+qNEYRWfs5CLJl 1Nfx81xPWvPaqHzIqyjQoPShSHUZD+9MQDghAnmF/V6l0fJFesstTrwVbJ2SfLsN Q08JD/pzsklRpVxIIA72zrA0cbc0bq+H6rivrOqFroBPaaXlldnv9yz9XBCpsuYu xIopDBYyb+PNvyzkUfKp+fwpWLbuzVZ8jMIMpphH38GlBbbHH8K6MnLRpjJmc4kC c6CI6GvzHZvHELoQCTNXXJK+RHo3NEwAnku/NlO9n1z+Bh6Z4kD65BwILROaPUHc y8uTO6jiJFXN68NQs25cuVV5F0wc4szbrUHNzoPdLOQsCO0DuWHs+x43WoFt0yU= =SgTQ -----END PGP SIGNATURE----- From aleksey.shipilev at oracle.com Wed Sep 3 18:29:15 2014 From: aleksey.shipilev at oracle.com (Aleksey Shipilev) Date: Wed, 03 Sep 2014 22:29:15 +0400 Subject: Release store in C2 putfield In-Reply-To: <54075D33.7060400@redhat.com> References: <540722C0.1060404@redhat.com> <5407468A.2020004@oracle.com> <54074942.9050506@redhat.com> <5407541E.1070707@oracle.com> <540757CC.2050904@redhat.com> <54075994.1050609@oracle.com> <54075D33.7060400@redhat.com> Message-ID: <54075DFB.4050807@oracle.com> On 09/03/2014 10:25 PM, Andrew Haley wrote: > On 09/03/2014 07:10 PM, Aleksey Shipilev wrote: >> So there, let's figure out whether we should just purge the entire block! :) > > Okay. It's better than arguing about interpretation of the paper. Let's wait a bit for Goetz's input on this. It was his commit that introduced this in the first place: $ hg log -r 5983 changeset: 5983:2113136690bc parent: 5981:eb178e97560c user: goetz date: Fri Nov 15 11:05:32 2013 -0800 summary: 8024921: PPC64 (part 113): Extend Load and Store nodes to know about memory ordering We can dig in the mail history if Goetz does not reply any time soon. -Aleksey. 
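The message-passing pattern debated above — a producer publishing an initialized object, and a consumer whose field load is address-dependent on the pointer load — can be sketched with C++11 atomics. This is a portable stand-in for the barriers C2 would emit, not HotSpot code:

```cpp
#include <atomic>

// The producer publishes an initialized object with a release store; the
// consumer's field load is address-dependent on the pointer load, which is
// the ordering the quoted B2.7.2 text guarantees on AArch64.
struct Payload { int field; };

std::atomic<Payload*> published{nullptr};

void producer(Payload* p) {
    p->field = 42;                                  // plain store to the field
    published.store(p, std::memory_order_release);  // release on the producer side
}

int consumer() {
    Payload* p;
    // Spin until the pointer is visible; the subsequent p->field load is
    // address-dependent on this load (memory_order_consume models that).
    while ((p = published.load(std::memory_order_consume)) == nullptr) { }
    return p->field;  // must observe 42, never the uninitialized value
}
```

In practice compilers promote `memory_order_consume` to the strictly stronger `memory_order_acquire`; the hardware-level point in the thread is that on AArch64 the address dependency alone orders the consumer's loads, so only the producer side needs an explicit barrier.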
From aph at redhat.com Wed Sep 3 18:58:40 2014 From: aph at redhat.com (Andrew Haley) Date: Wed, 03 Sep 2014 19:58:40 +0100 Subject: Release store in C2 putfield In-Reply-To: <54075DFB.4050807@oracle.com> References: <540722C0.1060404@redhat.com> <5407468A.2020004@oracle.com> <54074942.9050506@redhat.com> <5407541E.1070707@oracle.com> <540757CC.2050904@redhat.com> <54075994.1050609@oracle.com> <54075D33.7060400@redhat.com> <54075DFB.4050807@oracle.com> Message-ID: <540764E0.9030601@redhat.com> On 09/03/2014 07:29 PM, Aleksey Shipilev wrote: > On 09/03/2014 10:25 PM, Andrew Haley wrote: >> On 09/03/2014 07:10 PM, Aleksey Shipilev wrote: >>> So there, let's figure out whether we should just purge the entire block! :) >> >> Okay. It's better than arguing about interpretation of the paper. > > Let's wait a bit for Goetz's input on this. It was his commit that introduced this in the first place: > > $ hg log -r 5983 changeset: 5983:2113136690bc parent: 5981:eb178e97560c user: goetz date: Fri Nov 15 11:05:32 2013 -0800 summary: 8024921: PPC64 (part 113): Extend Load and Store nodes to know about memory ordering > > We can dig in the mail history if Goetz does not reply any time soon. Okay. While we're discussing this, I'd better tell you that I am also looking at why the card table write is a release store. But that's for later. Andrew. From calvin.cheung at oracle.com Wed Sep 3 19:07:34 2014 From: calvin.cheung at oracle.com (Calvin Cheung) Date: Wed, 03 Sep 2014 12:07:34 -0700 Subject: [8u40] Request for Approval: 8048150 and 8056175 Message-ID: <540766F6.4060205@oracle.com> Please approve the backport of the following 2 fixes into jdk8u40. Changes were pushed about one week ago into jdk9 and no problems were found. 
1) bug: https://bugs.openjdk.java.net/browse/JDK-8048150 jdk9 review thread: http://comments.gmane.org/gmane.comp.java.openjdk.hotspot.runtime.devel/12369 jdk9 webrev: http://cr.openjdk.java.net/~ccheung/8048150/webrev/ jdk8u40 webrev: http://cr.openjdk.java.net/~ccheung/8048150_8u40/webrev/ 2) bug: https://bugs.openjdk.java.net/browse/JDK-8056175 jdk9 review thread: http://permalink.gmane.org/gmane.comp.java.openjdk.hotspot.devel/15300 jdk9 webrev: http://cr.openjdk.java.net/~simonis/webrevs/8056175 Both changes can be applied cleanly to jdk8u-hs-dev repo. thanks, Calvin From serguei.spitsyn at oracle.com Wed Sep 3 19:27:23 2014 From: serguei.spitsyn at oracle.com (serguei.spitsyn at oracle.com) Date: Wed, 03 Sep 2014 12:27:23 -0700 Subject: RFR 8055008: Clean up code that saves the previous versions of redefined classes In-Reply-To: <54075BFC.2070800@oracle.com> References: <53EE6B22.3040107@oracle.com> <53F3C426.9080408@oracle.com> <53F3D571.7060609@oracle.com> <53F3E35B.5010205@oracle.com> <53F4C49D.4070404@oracle.com> <53F4FBDB.9080508@oracle.com> <53F4FEA5.2090606@oracle.com> <53F52521.1080309@oracle.com> <53F54AC0.7010007@oracle.com> <53F752F9.9090908@oracle.com> <53FDCD05.1080307@oracle.com> <53FF9370.9090603@oracle.com> <53FFA281.7050701@oracle.com> <5405B820.3060505@oracle.com> <5406A3C9.6050205@oracle.com> <54075BFC.2070800@oracle.com> Message-ID: <54076B9B.4060202@oracle.com> Hi Coleen, Please, see comments below. On 9/3/14 11:20 AM, Coleen Phillimore wrote: > > Thank you Serguei for these good review comments and questions. I > made the changes that you suggest and comments and you can see the > webrev here: > > open webrev at http://cr.openjdk.java.net/~coleenp/8055008_04/ Ok, thanks! > > but comments are below: > > On 9/3/14, 1:14 AM, serguei.spitsyn at oracle.com wrote: >> Coleen, >> >> It looks good in general. >> >> But I have some questions and minor suggestions below. 
>> || >> >> src/share/vm/oops/instanceKlass.cpp >> >> In the fragment below: >> 3488 if (!method->on_stack()) { >> 3489 if (method->is_running_emcp()) { >> 3490 method->set_running_emcp(false); // no breakpoints for non-running methods >> 3491 } >> 3492 } else { >> 3493 assert (method->is_obsolete() || method->is_running_emcp(), >> 3494 "emcp method cannot run after emcp bit is cleared"); >> 3495 // RC_TRACE macro has an embedded ResourceMark >> 3496 RC_TRACE(0x00000200, >> 3497 ("purge: %s(%s): prev method @%d in version @%d is alive", >> 3498 method->name()->as_C_string(), >> 3499 method->signature()->as_C_string(), j, version)); >> 3500 if (method->method_data() != NULL) { >> 3501 // Clean out any weak method links for running methods >> 3502 // (also should include not EMCP methods) >> 3503 method->method_data()->clean_weak_method_links(); >> 3504 } >> 3505 } >> >> It is not clear to me what happens in the situation when the method >> is not running >> (both emcp and obsolete cases). Why do we keep such methods, should >> we remove them? > > No, we don't remove methods individually. > >> Is it because the method_data needs to be collected by the GC first? > > No, we don't remove them because we remove all of the metadata > together with InstanceKlass being the holder for all the data. We > have to do method_data cleaning for all on_stack methods so that they > don't refer to not on_stack methods. The parenthetical comment was > because we were only doing this for emcp methods and now we are doing > this for all methods (it was a bug). Thank you for the explanation! There is also a potential scalability issue for class redefinitions as we do a search through all these previous_versions and their old methods in the mark_newly_obsolete_methods (). In the case of sub-sequential the same class redefinitions this search will become worse and worse. However, I'm not suggesting to fix this now. :) > >> If it is so, do we need a comment explaining it? 
> > Ok, how about adding to the comment in the top of the methods() loop: > > // At least one method is live in this previous version so clean > its MethodData. > // Reset dead EMCP methods not to get breakpoints. > // All methods are deallocated when all of the methods for this > class are no > // longer running. > Array* method_refs = pv_node->methods(); > if (method_refs != NULL) { > RC_TRACE(0x00000200, ("purge: previous methods length=%d", > method_refs->length())); The comment looks good. Thanks! >> Another question is about setting is_running_emcp. >> Any method after redefinition can be in one of the three states: >> method->on_stack() ? 'is_obsolete' or 'is_emcp' >> !method->on_stack() ? 'to-be-removed' >> >> I think, the 'is_running_emcp' flag is confusing (a suggestion: use >> 'is_emcp'). >> It is not consistent with the 'is_obsolete' flag because we do not >> spell 'is_running_obsolete' >> as non-running obsolete methods hit the category 'to-be-removed'. > > It's different than just saying it's emcp. It's emcp and it's > running also so needs a breakpoint. > > The states are really: > > is_obsolete() or !is_obsolete() same as is_emcp() > > is_running_emcp() == !is_obsolete() && method->on_stack() > > We need to distinguish the running emcp methods from the non-running > emcp methods. I suspect, sometimes this invariant is going to be broken: is_running_emcp() == !is_obsolete() && method->on_stack() When the method has been finished and the on_stack is cleared, the method is_running_emcp bit can remain still uncleared, right? Would it be more simple just to use "!is_obsolete() && method->on_stack()" ? It must be just in a couple of places. > > I guess we could just set breakpoints in all emcp methods whether they > are running or not, and not have this flag. This seemed to preserve > the old behavior better. I was thinking about the same but do not really have a preference. It is hard to estimate how big memory leak will cause these unneeded breakpoints. 
> >> >> Just some thoughts. Sorry for looping around this. :( >> >> >> A minor suggestion on the following fragment: >> 3611 if (cp_ref->on_stack()) { >> 3612 if (emcp_method_count == 0) { >> 3613 RC_TRACE(0x00000400, ("add: all methods are obsolete; no added EMCP refs")); >> 3614 } else { >> 3615 // At least one method is still running, check for EMCP methods >> 3616 for (int i = 0; i < old_methods->length(); i++) { >> 3617 Method* old_method = old_methods->at(i); >> 3618 if (!old_method->is_obsolete() && old_method->on_stack()) { >> 3619 // if EMCP method (not obsolete) is on the stack, mark as EMCP so that >> 3620 // we can add breakpoints for it. >> 3621 old_method->set_running_emcp(true); >> 3622 RC_TRACE(0x00000400, ("add: EMCP method %s is on_stack " INTPTR_FORMAT, old_method->name_and_sig_as_C_string(), old_method)); >> 3623 } else if (!old_method->is_obsolete()) { >> 3624 RC_TRACE(0x00000400, ("add: EMCP method %s is NOT on_stack " INTPTR_FORMAT, old_method->name_and_sig_as_C_string(), old_method)); >> 3625 } >> 3626 } >> 3627 } >> 3628 >> 3629 RC_TRACE(0x00000400, ("add: scratch class added; one of its methods is on_stack")); >> 3630 assert(scratch_class->previous_versions() == NULL, "shouldn't have a previous version"); >> 3631 scratch_class->link_previous_versions(previous_versions()); >> 3632 link_previous_versions(scratch_class()); >> 3633 } else { >> 3634 RC_TRACE(0x00000400, ("add: scratch class not added; no methods are running")); >> 3635 } >> It'd be a simplification to reduce the indent like this: >> if (!cp_ref->on_stack()) { >> RC_TRACE(0x00000400, ("add: scratch class not added; no methods are running")); >> return; >> } >> if (emcp_method_count == 0) { >> RC_TRACE(0x00000400, ("add: all methods are obsolete; no added EMCP refs")); >> return; > > In this case, we have to add the previous_version because one of the > old methods in the previous klass is running. 
We keep the previous > klass on the list so that we can clean_weak_method_links in the > obsolete methods. You are right. Sorry, I overlooked it. > > But I changed it to below which is simpler: > if (!cp_ref->on_stack()) { > RC_TRACE(0x00000400, ("add: scratch class not added; no methods are running")); > return; > } > if (emcp_method_count == 0) { > RC_TRACE(0x00000400, ("add: all methods are obsolete; no added EMCP refs")); > } else { > the bit that loops around looking for emcp methods > } > add to previous version list. Agreed. >> } >> // At least one method is still running, check for EMCP methods >> for (int i = 0; i < old_methods->length(); i++) { >> Method* old_method = old_methods->at(i); >> if (!old_method->is_obsolete() && old_method->on_stack()) { >> // if EMCP method (not obsolete) is on the stack, mark as EMCP so that >> // we can add breakpoints for it. >> old_method->set_running_emcp(true); >> RC_TRACE(0x00000400, ("add: EMCP method %s is on_stack " INTPTR_FORMAT, >> old_method->name_and_sig_as_C_string(), old_method)); >> } else if (!old_method->is_obsolete()) { >> RC_TRACE(0x00000400, ("add: EMCP method %s is NOT on_stack " INTPTR_FORMAT, >> old_method->name_and_sig_as_C_string(), old_method)); >> } >> } >> RC_TRACE(0x00000400, ("add: scratch class added; one of its methods is on_stack")); >> assert(scratch_class->previous_versions() == NULL, "shouldn't have a previous version"); >> scratch_class->link_previous_versions(previous_versions()); >> link_previous_versions(scratch_class()); >> >> >> Also, from the 1st round review email exchange... >> It'd be nice to add a comment somewhere above to explain this: >> >> > Yes. We set the on_stack bit which causes setting the is_running_emcp bit during safepoints >> > for class redefinition and class unloading. After the safepoint, the on_stack bit is cleared. >> > After the safepoint, we may also set breakpoints using the is_running_emcp bit. 
>> > If the method has exited we would set a breakpoint in a method that is never reached.> But this shouldn't be noticeable to the programmer. > > How about here (and reworded slightly) > > // At least one method is still running, check for EMCP methods > for (int i = 0; i < old_methods->length(); i++) { > Method* old_method = old_methods->at(i); > if (!old_method->is_obsolete() && old_method->on_stack()) { > // if EMCP method (not obsolete) is on the stack, mark as EMCP > so that > // we can add breakpoints for it. > > // We set the method->on_stack bit during safepoints for class > redefinition and > // class unloading and use this bit to set the is_running_emcp > bit. > // After the safepoint, the on_stack bit is cleared and the > running emcp > // method may exit. If so, we would set a breakpoint in a > method that > // is never reached, but this won't be noticeable to the > programmer. > old_method->set_running_emcp(true); > RC_TRACE(0x00000400, ("add: EMCP method %s is on_stack " > INTPTR_FORMAT, old_method->name_and_sig_as_C_string(), old_method)); > } else if (!old_method->is_obsolete()) { > RC_TRACE(0x00000400, ("add: EMCP method %s is NOT on_stack " > INTPTR_FORMAT, old_method->name_and_sig_as_C_string(), old_method)); > } This is nice, thanks! I'm looking at the new webrev version now. Thanks, Serguei > > Thanks, > Coleen > >> Thanks, >> Serguei >> >> >> On 9/2/14 5:29 AM, Coleen Phillimore wrote: >>> >>> Serguei, I didn't answer one of your questions. >>> >>> On 8/28/14, 5:43 PM, serguei.spitsyn at oracle.com wrote: >>>>> This bit is set during purging previous versions when all methods >>>>> have been marked on_stack() if found in various places. The bit >>>>> is only used for setting breakpoints. >>>> >>>> I had to ask slightly different. >>>> "How precise must be the control of this bit?" >>>> Part of this question is the question below about what happens when >>>> the method invocation is finished. 
>>>> I realized now that it can impact only setting breakpoints. >>>> Suppose, we did not clear the bit in time and then another >>>> breakpoint is set. >>>> The only bad thing is that this new breakpoint will be useless. >>> >>> Yes. We set the on_stack bit which causes setting the >>> is_running_emcp bit during safepoints for class redefinition and >>> class unloading. After the safepoint, the on_stack bit is >>> cleared. After the safepoint, we may also set breakpoints using >>> the is_running_emcp bit. If the method has exited we would set a >>> breakpoint in a method that is never reached. But this shouldn't be >>> noticeable to the programmer. >>> >>> The method's is_running_emcp bit and maybe metadata would be cleaned >>> up the next time we do class unloading at a safepoint. >>> >>>> >>>> But let me look at new webrev first to see if any update is needed >>>> here. >>>> >>> >>> Yes, please review this again and let me know if this does what I >>> claim it does. >>> >>> Thank you! >>> Coleen >> > From coleen.phillimore at oracle.com Wed Sep 3 19:55:33 2014 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Wed, 03 Sep 2014 15:55:33 -0400 Subject: RFR 8055008: Clean up code that saves the previous versions of redefined classes In-Reply-To: <54076B9B.4060202@oracle.com> References: <53EE6B22.3040107@oracle.com> <53F3C426.9080408@oracle.com> <53F3D571.7060609@oracle.com> <53F3E35B.5010205@oracle.com> <53F4C49D.4070404@oracle.com> <53F4FBDB.9080508@oracle.com> <53F4FEA5.2090606@oracle.com> <53F52521.1080309@oracle.com> <53F54AC0.7010007@oracle.com> <53F752F9.9090908@oracle.com> <53FDCD05.1080307@oracle.com> <53FF9370.9090603@oracle.com> <53FFA281.7050701@oracle.com> <5405B820.3060505@oracle.com> <5406A3C9.6050205@oracle.com> <54075BFC.2070800@oracle.com> <54076B9B.4060202@oracle.com> Message-ID: <54077235.9050905@oracle.com> Hi Serguei, I'm going to cut some things... <> > Thank you for the explanation! 
> > There is also a potential scalability issue for class redefinitions as > we do a search through > all these previous_versions and their old methods in the > mark_newly_obsolete_methods (). > In the case of sub-sequential the same class redefinitions this search > will become worse and worse. > However, I'm not suggesting to fix this now. :) I agree, it seems to take way too long to clear old methods once they are in the CodeCache. > >> It's different than just saying it's emcp. It's emcp and it's >> running also so needs a breakpoint. >> >> The states are really: >> >> is_obsolete() or !is_obsolete() same as is_emcp() >> >> is_running_emcp() == !is_obsolete() && method->on_stack() >> >> We need to distinguish the running emcp methods from the non-running >> emcp methods. > > I suspect, sometimes this invariant is going to be broken: > is_running_emcp() == !is_obsolete() && method->on_stack() > > When the method has been finished and the on_stack is cleared, > the method is_running_emcp bit can remain still uncleared, right? > Would it be more simple just to use "!is_obsolete() && > method->on_stack()" ? > It must be just in a couple of places. We only set on_stack when we do class redefinition and class unloading with MetadataOnStackMark. After this safepoint, the bit is cleared. We don't clear it when the method finishes. Is running_emcp is in only 4 places, but the place where we really need it (setting breakpoints) the "on_stack" bit isn't set because we don't do MetadataOnStackMark at that safepoint. It's sort of an expensive operation. So I need is_running_emcp() to capture the last known running state. > >> >> I guess we could just set breakpoints in all emcp methods whether >> they are running or not, and not have this flag. This seemed to >> preserve the old behavior better. > > I was thinking about the same but do not really have a preference. > It is hard to estimate how big memory leak will cause these unneeded > breakpoints. 
> It's not so much leakage, because the methods are there anyway but it seems inefficient to do breakpoints on methods that have exited. Setting these breakpoints looks expensive as well! > <> > This is nice, thanks! > I'm looking at the new webrev version now. Ok, let me know if there's anything else. Coleen > > > Thanks, > Serguei > > >> >> Thanks, >> Coleen >> >>> Thanks, >>> Serguei >>> >>> >>> On 9/2/14 5:29 AM, Coleen Phillimore wrote: >>>> >>>> Serguei, I didn't answer one of your questions. >>>> >>>> On 8/28/14, 5:43 PM, serguei.spitsyn at oracle.com wrote: >>>>>> This bit is set during purging previous versions when all methods >>>>>> have been marked on_stack() if found in various places. The bit >>>>>> is only used for setting breakpoints. >>>>> >>>>> I had to ask slightly different. >>>>> "How precise must be the control of this bit?" >>>>> Part of this question is the question below about what happens >>>>> when the method invocation is finished. >>>>> I realized now that it can impact only setting breakpoints. >>>>> Suppose, we did not clear the bit in time and then another >>>>> breakpoint is set. >>>>> The only bad thing is that this new breakpoint will be useless. >>>> >>>> Yes. We set the on_stack bit which causes setting the >>>> is_running_emcp bit during safepoints for class redefinition and >>>> class unloading. After the safepoint, the on_stack bit is >>>> cleared. After the safepoint, we may also set breakpoints using >>>> the is_running_emcp bit. If the method has exited we would set a >>>> breakpoint in a method that is never reached. But this shouldn't >>>> be noticeable to the programmer. >>>> >>>> The method's is_running_emcp bit and maybe metadata would be >>>> cleaned up the next time we do class unloading at a safepoint. >>>> >>>>> >>>>> But let me look at new webrev first to see if any update is needed >>>>> here. >>>>> >>>> >>>> Yes, please review this again and let me know if this does what I >>>> claim it does. 
>>>> >>>> Thank you! >>>> Coleen >>> >> > From ioi.lam at oracle.com Wed Sep 3 20:10:30 2014 From: ioi.lam at oracle.com (Ioi Lam) Date: Wed, 03 Sep 2014 13:10:30 -0700 Subject: [8u40] Request for Approval: 8048150 and 8056175 In-Reply-To: <540766F6.4060205@oracle.com> References: <540766F6.4060205@oracle.com> Message-ID: <540775B6.6030605@oracle.com> Looks good to me. Thanks - Ioi On 9/3/14, 12:07 PM, Calvin Cheung wrote: > Please approve the backport of the following 2 fixes into jdk8u40. > Changes were pushed about one week ago into jdk9 and no problems were > found. > > 1) > bug: https://bugs.openjdk.java.net/browse/JDK-8048150 > jdk9 review thread: > http://comments.gmane.org/gmane.comp.java.openjdk.hotspot.runtime.devel/12369 > jdk9 webrev: http://cr.openjdk.java.net/~ccheung/8048150/webrev/ > jdk8u40 webrev: http://cr.openjdk.java.net/~ccheung/8048150_8u40/webrev/ > > 2) > bug: https://bugs.openjdk.java.net/browse/JDK-8056175 > jdk9 review thread: > http://permalink.gmane.org/gmane.comp.java.openjdk.hotspot.devel/15300 > jdk9 webrev: http://cr.openjdk.java.net/~simonis/webrevs/8056175 > > Both changes can be applied cleanly to jdk8u-hs-dev repo. 
> > thanks, > Calvin > From serguei.spitsyn at oracle.com Wed Sep 3 20:22:39 2014 From: serguei.spitsyn at oracle.com (serguei.spitsyn at oracle.com) Date: Wed, 03 Sep 2014 13:22:39 -0700 Subject: RFR 8055008: Clean up code that saves the previous versions of redefined classes In-Reply-To: <54077235.9050905@oracle.com> References: <53EE6B22.3040107@oracle.com> <53F3C426.9080408@oracle.com> <53F3D571.7060609@oracle.com> <53F3E35B.5010205@oracle.com> <53F4C49D.4070404@oracle.com> <53F4FBDB.9080508@oracle.com> <53F4FEA5.2090606@oracle.com> <53F52521.1080309@oracle.com> <53F54AC0.7010007@oracle.com> <53F752F9.9090908@oracle.com> <53FDCD05.1080307@oracle.com> <53FF9370.9090603@oracle.com> <53FFA281.7050701@oracle.com> <5405B820.3060505@oracle.com> <5406A3C9.6050205@oracle.com> <54075BFC.2070800@oracle.com> <54076B9B.4060202@oracle.com> <54077235.9050905@oracle.com> Message-ID: <5407788F.8000501@oracle.com> Coleen, Thank you for the answers! They are helpful. The new webrev looks good to me. The only minor comment is about the fragment with long lines: 3633 RC_TRACE(0x00000400, ("add: EMCP method %s is on_stack " INTPTR_FORMAT, old_method->name_and_sig_as_C_string(), old_method)); 3634 } else if (!old_method->is_obsolete()) { 3635 RC_TRACE(0x00000400, ("add: EMCP method %s is NOT on_stack " INTPTR_FORMAT, old_method->name_and_sig_as_C_string(), old_method)); 3636 } Could you, please, split these lines? Thanks! Serguei On 9/3/14 12:55 PM, Coleen Phillimore wrote: > > Hi Serguei, > > I'm going to cut some things... > > <> >> Thank you for the explanation! >> >> There is also a potential scalability issue for class redefinitions >> as we do a search through >> all these previous_versions and their old methods in the >> mark_newly_obsolete_methods (). >> In the case of sub-sequential the same class redefinitions this >> search will become worse and worse. >> However, I'm not suggesting to fix this now. 
:) > > I agree, it seems to take way too long to clear old methods once they > are in the CodeCache. >> >>> It's different than just saying it's emcp. It's emcp and it's >>> running also so needs a breakpoint. >>> >>> The states are really: >>> >>> is_obsolete() or !is_obsolete() same as is_emcp() >>> >>> is_running_emcp() == !is_obsolete() && method->on_stack() >>> >>> We need to distinguish the running emcp methods from the non-running >>> emcp methods. >> >> I suspect, sometimes this invariant is going to be broken: >> is_running_emcp() == !is_obsolete() && method->on_stack() >> >> When the method has been finished and the on_stack is cleared, >> the method is_running_emcp bit can remain still uncleared, right? >> Would it be more simple just to use "!is_obsolete() && >> method->on_stack()" ? >> It must be just in a couple of places. > > We only set on_stack when we do class redefinition and class unloading > with MetadataOnStackMark. After this safepoint, the bit is cleared. > We don't clear it when the method finishes. > > Is running_emcp is in only 4 places, but the place where we really > need it (setting breakpoints) the "on_stack" bit isn't set because we > don't do MetadataOnStackMark at that safepoint. It's sort of an > expensive operation. > > So I need is_running_emcp() to capture the last known running state. > >> >>> >>> I guess we could just set breakpoints in all emcp methods whether >>> they are running or not, and not have this flag. This seemed to >>> preserve the old behavior better. >> >> I was thinking about the same but do not really have a preference. >> It is hard to estimate how big memory leak will cause these unneeded >> breakpoints. >> > > It's not so much leakage, because the methods are there anyway but it > seems inefficient to do breakpoints on methods that have exited. > > Setting these breakpoints looks expensive as well! > >> <> >> This is nice, thanks! >> I'm looking at the new webrev version now. 
> > Ok, let me know if there's anything else. > Coleen > >> >> >> Thanks, >> Serguei >> >> >>> >>> Thanks, >>> Coleen >>> >>>> Thanks, >>>> Serguei >>>> >>>> >>>> On 9/2/14 5:29 AM, Coleen Phillimore wrote: >>>>> >>>>> Serguei, I didn't answer one of your questions. >>>>> >>>>> On 8/28/14, 5:43 PM, serguei.spitsyn at oracle.com wrote: >>>>>>> This bit is set during purging previous versions when all >>>>>>> methods have been marked on_stack() if found in various places. >>>>>>> The bit is only used for setting breakpoints. >>>>>> >>>>>> I had to ask slightly different. >>>>>> "How precise must be the control of this bit?" >>>>>> Part of this question is the question below about what happens >>>>>> when the method invocation is finished. >>>>>> I realized now that it can impact only setting breakpoints. >>>>>> Suppose, we did not clear the bit in time and then another >>>>>> breakpoint is set. >>>>>> The only bad thing is that this new breakpoint will be useless. >>>>> >>>>> Yes. We set the on_stack bit which causes setting the >>>>> is_running_emcp bit during safepoints for class redefinition and >>>>> class unloading. After the safepoint, the on_stack bit is >>>>> cleared. After the safepoint, we may also set breakpoints using >>>>> the is_running_emcp bit. If the method has exited we would set a >>>>> breakpoint in a method that is never reached. But this shouldn't >>>>> be noticeable to the programmer. >>>>> >>>>> The method's is_running_emcp bit and maybe metadata would be >>>>> cleaned up the next time we do class unloading at a safepoint. >>>>> >>>>>> >>>>>> But let me look at new webrev first to see if any update is >>>>>> needed here. >>>>>> >>>>> >>>>> Yes, please review this again and let me know if this does what I >>>>> claim it does. >>>>> >>>>> Thank you! 
>>>>> Coleen >>>> >>> >> > From vladimir.kozlov at oracle.com Wed Sep 3 22:17:40 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Wed, 03 Sep 2014 15:17:40 -0700 Subject: RFR(XXS): 8057129: Fix AIX build after the Extend CompileCommand=option change 8055286 In-Reply-To: References: Message-ID: <54079384.30904@oracle.com> Looks good. I will push it today. Thanks, Vladimir On 9/3/14 5:48 AM, Volker Simonis wrote: > Hi, > > could somebody please review and sponsor this tiny change which fixes > an AIX build failure after "8055286: Extend CompileCommand=option to > handle numeric parameters" (details see below). > > It would be nice if this fix could be pushed to hs-comp before hs-comp > gets pushed to the other hs repos: > > http://cr.openjdk.java.net/~simonis/webrevs/8057129/ > https://bugs.openjdk.java.net/browse/JDK-8057129 > > The AIX xlC compiler is overly picky with regard to section 14.6.4.2 > "Candidate functions" of the C++ standard (see > http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2011/n3242.pdf) > which states: > > "If the function name is an unqualified-id and the call would be > ill-formed or would find a better match had the lookup within the > associated namespaces considered all the function declarations with > external linkage introduced in those namespaces in all translation > units, not just considering those declarations found in the template > definition and template instantiation contexts, then the program has > undefined behavior." > > xlC implements this by not taking into account static functions which > have internal linkage and terminates with the error message: > > "hotspot-comp/src/share/vm/compiler/compilerOracle.cpp", line 364.10: > 1540-0274 (S) The name lookup for "get_option_value" did not find a > declaration. > "hotspot-comp/src/share/vm/compiler/compilerOracle.cpp", line 364.10: > 1540-1292 (I) Static declarations are not considered for a function > call if the function is not qualified. 
> > The fix is trivial - just qualify the call to "get_option_value" like this: > > return ::get_option_value(method, option, value); > > Thank you and best regards, > Volker > From coleen.phillimore at oracle.com Wed Sep 3 22:32:20 2014 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Wed, 03 Sep 2014 18:32:20 -0400 Subject: RFR 8055008: Clean up code that saves the previous versions of redefined classes In-Reply-To: <5407788F.8000501@oracle.com> References: <53EE6B22.3040107@oracle.com> <53F3C426.9080408@oracle.com> <53F3D571.7060609@oracle.com> <53F3E35B.5010205@oracle.com> <53F4C49D.4070404@oracle.com> <53F4FBDB.9080508@oracle.com> <53F4FEA5.2090606@oracle.com> <53F52521.1080309@oracle.com> <53F54AC0.7010007@oracle.com> <53F752F9.9090908@oracle.com> <53FDCD05.1080307@oracle.com> <53FF9370.9090603@oracle.com> <53FFA281.7050701@oracle.com> <5405B820.3060505@oracle.com> <5406A3C9.6050205@oracle.com> <54075BFC.2070800@oracle.com> <54076B9B.4060202@oracle.com> <54077235.9050905@oracle.com> <5407788F.8000501@oracle.com> Message-ID: <540796F4.8080501@oracle.com> On 9/3/14, 4:22 PM, serguei.spitsyn at oracle.com wrote: > Coleen, > > > Thank you for the answers! > They are helpful. > > The new webrev looks good to me. > The only minor comment is about the fragment with long lines: > > 3633 RC_TRACE(0x00000400, ("add: EMCP method %s is on_stack " INTPTR_FORMAT, old_method->name_and_sig_as_C_string(), old_method)); > 3634 } else if (!old_method->is_obsolete()) { > 3635 RC_TRACE(0x00000400, ("add: EMCP method %s is NOT on_stack " INTPTR_FORMAT, old_method->name_and_sig_as_C_string(), old_method)); > 3636 } > Could you, please, split these lines? Okay. Thanks for the very thorough code review. Coleen > > > Thanks! > Serguei > > > > On 9/3/14 12:55 PM, Coleen Phillimore wrote: >> >> Hi Serguei, >> >> I'm going to cut some things... >> >> <> >>> Thank you for the explanation! 
>>> >>> There is also a potential scalability issue for class redefinitions >>> as we do a search through >>> all these previous_versions and their old methods in the >>> mark_newly_obsolete_methods (). >>> In the case of subsequent redefinitions of the same class, this >>> search will become worse and worse. >>> However, I'm not suggesting to fix this now. :) >> >> I agree, it seems to take way too long to clear old methods once they >> are in the CodeCache. >>> >>>> It's different than just saying it's emcp. It's emcp and it's >>>> running also so needs a breakpoint. >>>> >>>> The states are really: >>>> >>>> is_obsolete() or !is_obsolete() same as is_emcp() >>>> >>>> is_running_emcp() == !is_obsolete() && method->on_stack() >>>> >>>> We need to distinguish the running emcp methods from the >>>> non-running emcp methods. >>> >>> I suspect, sometimes this invariant is going to be broken: >>> is_running_emcp() == !is_obsolete() && method->on_stack() >>> >>> When the method has finished and on_stack is cleared, >>> the method's is_running_emcp bit can still remain uncleared, right? >>> Would it be simpler just to use "!is_obsolete() && >>> method->on_stack()" ? >>> It must be just in a couple of places. >> >> We only set on_stack when we do class redefinition and class >> unloading with MetadataOnStackMark. After this safepoint, the bit is >> cleared. We don't clear it when the method finishes. >> >> is_running_emcp() is used in only 4 places, but in the place where we really >> need it (setting breakpoints) the "on_stack" bit isn't set because we >> don't do MetadataOnStackMark at that safepoint. It's sort of an >> expensive operation. >> >> So I need is_running_emcp() to capture the last known running state. >> >>> >>>> >>>> I guess we could just set breakpoints in all emcp methods whether >>>> they are running or not, and not have this flag. This seemed to >>>> preserve the old behavior better. 
>>> >>> I was thinking about the same but do not really have a preference. >>> It is hard to estimate how big a memory leak these unneeded >>> breakpoints would cause. >>> >> >> It's not so much leakage, because the methods are there anyway but it >> seems inefficient to do breakpoints on methods that have exited. >> >> Setting these breakpoints looks expensive as well! >> >>> <> >>> This is nice, thanks! >>> I'm looking at the new webrev version now. >> >> Ok, let me know if there's anything else. >> Coleen >> >>> >>> >>> Thanks, >>> Serguei >>> >>> >>>> >>>> Thanks, >>>> Coleen >>>> >>>>> Thanks, >>>>> Serguei >>>>> >>>>> >>>>> On 9/2/14 5:29 AM, Coleen Phillimore wrote: >>>>>> >>>>>> Serguei, I didn't answer one of your questions. >>>>>> >>>>>> On 8/28/14, 5:43 PM, serguei.spitsyn at oracle.com wrote: >>>>>>>> This bit is set during purging previous versions when all >>>>>>>> methods have been marked on_stack() if found in various >>>>>>>> places. The bit is only used for setting breakpoints. >>>>>>> >>>>>>> I had to ask slightly differently. >>>>>>> "How precise must the control of this bit be?" >>>>>>> Part of this question is the question below about what happens >>>>>>> when the method invocation is finished. >>>>>>> I realized now that it can impact only setting breakpoints. >>>>>>> Suppose, we did not clear the bit in time and then another >>>>>>> breakpoint is set. >>>>>>> The only bad thing is that this new breakpoint will be useless. >>>>>> >>>>>> Yes. We set the on_stack bit which causes setting the >>>>>> is_running_emcp bit during safepoints for class redefinition and >>>>>> class unloading. After the safepoint, the on_stack bit is >>>>>> cleared. After the safepoint, we may also set breakpoints using >>>>>> the is_running_emcp bit. If the method has exited we would set a >>>>>> breakpoint in a method that is never reached. But this shouldn't >>>>>> be noticeable to the programmer. 
>>>>>> >>>>>> The method's is_running_emcp bit and maybe metadata would be >>>>>> cleaned up the next time we do class unloading at a safepoint. >>>>>> >>>>>>> >>>>>>> But let me look at new webrev first to see if any update is >>>>>>> needed here. >>>>>>> >>>>>> >>>>>> Yes, please review this again and let me know if this does what I >>>>>> claim it does. >>>>>> >>>>>> Thank you! >>>>>> Coleen >>>>> >>>> >>> >> > From vladimir.kozlov at oracle.com Wed Sep 3 22:45:18 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Wed, 03 Sep 2014 15:45:18 -0700 Subject: [9] RFR (L): 8015774: Add support for multiple code heaps In-Reply-To: <5406F720.2080603@oracle.com> References: <53FF1BF6.8070600@oracle.com> <53FFB78C.9060805@oracle.com> <540089D1.4060600@oracle.com> <5400B975.5030703@oracle.com> <5406F720.2080603@oracle.com> Message-ID: <540799FE.5030309@oracle.com> Looks good. You need a second review. And, please, add a WB test which verifies the reaction to flag settings similar to what test/compiler/codecache/CheckUpperLimit.java does. Both positive (SegmentedCodeCache is enabled) and negative (sizes do not match, ReservedCodeCacheSize is small, etc.) cases. Thanks, Vladimir On 9/3/14 4:10 AM, Tobias Hartmann wrote: > Hi Vladimir, > > thanks for the review. > > On 29.08.2014 19:33, Vladimir Kozlov wrote: >> On 8/29/14 7:10 AM, Tobias Hartmann wrote: >>> Hi Vladimir, >>> >>> thanks for the review. >>> >>> On 29.08.2014 01:13, Vladimir Kozlov wrote: >>>> For the record, SegmentedCodeCache is enabled by default when >>>> TieredCompilation is enabled and ReservedCodeCacheSize >>>> >= 240 MB. Otherwise it is false by default. >>> >>> Exactly. >>> >>>> arguments.cpp - in set_tiered_flags() swap SegmentedCodeCache >>>> setting and segments size adjustment - do adjustment >>>> only if SegmentedCodeCache is enabled. >>> >>> Done. >>> >>>> Also I think each flag should be checked and adjusted separately. 
>>>> You may bail out (vm_exit_during_initialization) if >>>> sizes do not add up. >>> >>> I think we should only increase the sizes if they are all default. >>> Otherwise we would for example fail if the user sets >>> the NonMethodCodeHeapSize and the ProfiledCodeHeapSize because the >>> NonProfiledCodeHeap size is multiplied by 5. What do >>> you think? >> >> But ReservedCodeCacheSize is scaled anyway and you will get sum of >> sizes != whole size. We need to do something. > > I agree. I changed it as you suggested first: The code heap sizes are > scaled individually and we bail out if the sizes are not consistent with > ReservedCodeCacheSize. > >> BTW the error message for next check should print all sizes, user may >> not know the default value of some which he did not specify on >> the command line. >> >> (NonMethodCodeHeapSize + NonProfiledCodeHeapSize + >> ProfiledCodeHeapSize) != ReservedCodeCacheSize) > > The error message now prints the sizes in brackets. > >>>> And use >>> >>> I think the rest of this sentence is missing :) >> >> And use FLAG_SET_ERGO() when you scale. :) > > Done. I also changed the implementation of CodeCache::initialize_heaps() > accordingly. > >>>> Align second line: >>>> >>>> 2461 } else if ((!FLAG_IS_DEFAULT(NonMethodCodeHeapSize) || >>>> !FLAG_IS_DEFAULT(ProfiledCodeHeapSize) || >>>> !FLAG_IS_DEFAULT(NonProfiledCodeHeapSize)) >>>> 2462 && (NonMethodCodeHeapSize + NonProfiledCodeHeapSize + >>>> ProfiledCodeHeapSize) != ReservedCodeCacheSize) { >>> >>> Done. >>> >>>> codeCache.cpp - in initialize_heaps() add new methods in C1 and C2 >>>> to return buffer_size they need. Add >>>> assert(SegmentedCodeCache) to this method to show that we call it >>>> only in such case. >>> >>> Done. >>> >>>> You do adjustment only when all flags are default. But you still >>>> need to check that you have space in >>>> NonMethodCodeHeap for scratch buffers. 
>>> I added the following check: >>> >>> // Make sure we have enough space for the code buffers >>> if (NonMethodCodeHeapSize < code_buffers_size) { >>> vm_exit_during_initialization("Not enough space for code buffers >>> in CodeCache"); >>> } >> >> I think you need to take into account min_code_cache_size as in >> arguments.cpp: >> uint min_code_cache_size = (CodeCacheMinimumUseSpace DEBUG_ONLY(* 3)) >> + CodeCacheMinimumFreeSpace; >> >> if (NonMethodCodeHeapSize < (min_code_cache_size+code_buffers_size)) { > > True, I changed it. > >> It would be nice if this code in initialize_heaps() could be called >> during argument parsing if we could get the number of compiler >> threads there. But I understand that we can't do that until >> the compilation policy is set :( > > Yes, this is not possible because we need to know the number of C1/C2 > compiler threads. > > New webrev: http://cr.openjdk.java.net/~thartmann/8015774/webrev.05/ > > Thanks, > Tobias > >> >>> >>>> codeCache.hpp - comment alignment: >>>> + // Creates a new heap with the given name and size, containing >>>> CodeBlobs of the given type >>>> ! static void add_heap(ReservedSpace rs, const char* name, size_t >>>> size_initial, int code_blob_type); >>> >>> Done. >>> >>>> nmethod.cpp - in new() can we mark nmethod allocation critical only >>>> when SegmentedCodeCache is enabled? >>> >>> Yes, that's what we do with: >>> >>> 809 bool is_critical = SegmentedCodeCache; >>> >>> Or what are you referring to? >> >> Somehow I missed that SegmentedCodeCache is used already. It is fine >> then. >> >> Thanks, >> Vladimir >> >>> >>> New webrev: http://cr.openjdk.java.net/~thartmann/8015774/webrev.04 >>> >>> Thanks, >>> Tobias >>> >>>> Thanks, >>>> Vladimir >>>> >>>> On 8/28/14 5:09 AM, Tobias Hartmann wrote: >>>>> Hi, >>>>> >>>>> the segmented code cache JEP is now targeted. Please review the final >>>>> implementation before integration. 
The previous RFR, including a short >>>>> description, can be found here [1]. >>>>> >>>>> JEP: https://bugs.openjdk.java.net/browse/JDK-8043304 >>>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8015774 >>>>> Implementation: >>>>> http://cr.openjdk.java.net/~thartmann/8015774/webrev.03/ >>>>> JDK-Test fix: >>>>> http://cr.openjdk.java.net/~thartmann/8015774_jdk_test/webrev.00/ >>>>> >>>>> Changes since the last review: >>>>> - Merged with other changes (for example, G1 class unloading >>>>> changes [2]) >>>>> - Fixed some minor bugs that showed up during testing >>>>> - Refactoring of 'NMethodIterator' and CodeCache implementation >>>>> - Non-method CodeHeap size increased to 5 MB >>>>> - Fallback solution: Store non-method code in the non-profiled code >>>>> heap >>>>> if there is not enough space in the non-method code heap (see >>>>> 'CodeCache::allocate') >>>>> >>>>> Additional testing: >>>>> - BigApps (Weblogic, Dacapo, runThese, Kitchensink) >>>>> - Compiler and GC nightlies >>>>> - jtreg tests >>>>> - VM (NSK) Testbase >>>>> - More performance testing (results attached to the bug) >>>>> >>>>> Thanks, >>>>> Tobias >>>>> >>>>> [1] >>>>> http://mail.openjdk.java.net/pipermail/hotspot-compiler-dev/2014-April/014098.html >>>>> >>>>> >>>>> [2] https://bugs.openjdk.java.net/browse/JDK-8049421 >>> > From vladimir.kozlov at oracle.com Thu Sep 4 00:28:19 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Wed, 03 Sep 2014 17:28:19 -0700 Subject: Release store in C2 putfield In-Reply-To: <54075AFC.6020209@redhat.com> References: <540722C0.1060404@redhat.com> <54074E29.8030500@oracle.com> <54075AFC.6020209@redhat.com> Message-ID: <5407B223.5010300@oracle.com> In general I am fine with such a platform-specific change in shared code. But in the current case the non-volatile release_store code affects only PPC64. As Aleksey suggested, maybe we should use the NOT_PPC64() macro instead. Also I don't see calls to as_Store()->is_unordered() in the ppc.ad file. 
Looks like PPC64 doesn't check this flag. Thanks, Vladimir On 9/3/14 11:16 AM, Andrew Haley wrote: > On 09/03/2014 06:21 PM, Vladimir Kozlov wrote: >> Andrew, >> >> Do you need unordered in Parse::array_store() too? > > Yes. > >> Another way of doing it is to define MemNode::release_if_reference() in .ad files in 'source %{' section. > > Yes, that's what we've got now. Thanks. > > Andrew. > > > # HG changeset patch > # User Edward Nevill edward.nevill at linaro.org > # Date 1409307165 -3600 > # Fri Aug 29 11:12:45 2014 +0100 > # Node ID fc245bc14fa3589074c78ceb0e25ecf36ee3e110 > # Parent 32fae3443576ac6b4b5ac0770c0829ce6c08764e > Dont use a release store when storing an OOP in a non-volatile field. > > diff -r 32fae3443576 -r fc245bc14fa3 src/share/vm/opto/memnode.hpp > --- a/src/share/vm/opto/memnode.hpp Mon Sep 01 13:10:18 2014 -0400 > +++ b/src/share/vm/opto/memnode.hpp Fri Aug 29 11:12:45 2014 +0100 > @@ -503,6 +503,12 @@ > // Conservatively release stores of object references in order to > // ensure visibility of object initialization. > static inline MemOrd release_if_reference(const BasicType t) { > + // AArch64 doesn't need a release store because if there is an > + // address dependency between a read and a write, then those > + // memory accesses are observed in program order by all observers > + // within the shareability domain. > + AARCH64_ONLY(return unordered); > + > const MemOrd mo = (t == T_ARRAY || > t == T_ADDRESS || // Might be the address of an object reference (`boxing'). > t == T_OBJECT) ? 
release : unordered; > diff -r 32fae3443576 -r fc245bc14fa3 src/share/vm/opto/parse2.cpp > --- a/src/share/vm/opto/parse2.cpp Mon Sep 01 13:10:18 2014 -0400 > +++ b/src/share/vm/opto/parse2.cpp Fri Aug 29 11:12:45 2014 +0100 > @@ -1689,7 +1689,7 @@ > a = pop(); // the array itself > const TypeOopPtr* elemtype = _gvn.type(a)->is_aryptr()->elem()->make_oopptr(); > const TypeAryPtr* adr_type = TypeAryPtr::OOPS; > - Node* store = store_oop_to_array(control(), a, d, adr_type, c, elemtype, T_OBJECT, MemNode::release); > + Node* store = store_oop_to_array(control(), a, d, adr_type, c, elemtype, T_OBJECT, StoreNode::release_if_reference(T_OBJECT)); > break; > } > case Bytecodes::_lastore: { > From david.holmes at oracle.com Thu Sep 4 02:08:44 2014 From: david.holmes at oracle.com (David Holmes) Date: Thu, 04 Sep 2014 12:08:44 +1000 Subject: Release store in C2 putfield In-Reply-To: <54074942.9050506@redhat.com> References: <540722C0.1060404@redhat.com> <5407468A.2020004@oracle.com> <54074942.9050506@redhat.com> Message-ID: <5407C9AC.70800@oracle.com> On 4/09/2014 3:00 AM, Andrew Haley wrote: > On 09/03/2014 05:49 PM, Aleksey Shipilev wrote: >> Hi Andrew, >> >> On 09/03/2014 06:16 PM, Andrew Haley wrote: >>> In Parse::do_put_xxx, I see >>> >>> const MemNode::MemOrd mo = is_vol ? // Volatile fields need releasing stores. MemNode::release : // Non-volatile fields also need releasing stores if they hold an // object reference, because the object reference might point to // a freshly created object. StoreNode::release_if_reference(bt); >>> >>> AArch64 doesn't need a release store here: its memory guarantees are strong enough that a simple store is sufficient. But my question is not about that, but how to handle it properly. >> >> I can't answer the question you posed, but let me challenge your premise. >> >> Why is a simple store sufficient here for AArch64? Are the stores ordered on AArch64 (I thought not)? 
I thought the "RC" part in "RCsc" only applies to explicit synchronization instructions. > > I discussed this with Peter Sewell, and it's explained in his > (co-authored) paper "A Tutorial Introduction to the ARM and POWER > Relaxed Memory Models" at > http://www.cl.cam.ac.uk/~pes20/ppc-supplemental/test7.pdf in > Section 4.1, "Enforcing Order with Dependencies" > > In the AArch64 spec, we have: > > B2.7.2 Ordering requirements > > If an address dependency exists between two reads or between a read > and a write, then those memory accesses are observed in program > order by all observers within the shareability domain of the memory > > So, an address dependency and a DMB when an object is created is all > we need. I don't see how that applies to general volatile putfields?? The reason the PPC64 port added the additional MemNodes was so that some of the special cases could be handled via those nodes to avoid redundant barriers. David ------ > Andrew. > From david.holmes at oracle.com Thu Sep 4 02:20:50 2014 From: david.holmes at oracle.com (David Holmes) Date: Thu, 04 Sep 2014 12:20:50 +1000 Subject: Release store in C2 putfield In-Reply-To: <5407C9AC.70800@oracle.com> References: <540722C0.1060404@redhat.com> <5407468A.2020004@oracle.com> <54074942.9050506@redhat.com> <5407C9AC.70800@oracle.com> Message-ID: <5407CC82.9000904@oracle.com> Ignore this. Was trying to do too many things at once and missed the whole point. Sorry. David On 4/09/2014 12:08 PM, David Holmes wrote: > On 4/09/2014 3:00 AM, Andrew Haley wrote: >> On 09/03/2014 05:49 PM, Aleksey Shipilev wrote: >>> Hi Andrew, >>> >>> On 09/03/2014 06:16 PM, Andrew Haley wrote: >>>> In Parse::do_put_xxx, I see >>>> >>>> const MemNode::MemOrd mo = is_vol ? // Volatile fields need >>>> releasing stores. MemNode::release : // Non-volatile fields also >>>> need releasing stores if they hold an // object reference, because >>>> the object reference might point to // a freshly created object. 
>>>> StoreNode::release_if_reference(bt); >>>> >>>> AArch64 doesn't need a release store here: its memory guarantees are >>>> strong enough that a simple store is sufficient. But my question is >>>> not about that, but how to handle it properly. >>> >>> I can't answer the question you posed, but let me challenge your >>> premise. >>> >>> Why is a simple store sufficient here for AArch64? Are the stores >>> ordered on AArch64 (I thought not)? I thought the "RC" part in "RCsc" >>> only applies to explicit synchronization instructions. >> >> I discussed this with Peter Sewell, and it's explained in his >> (co-authored) paper "A Tutorial Introduction to the ARM and POWER >> Relaxed Memory Models" at >> http://www.cl.cam.ac.uk/~pes20/ppc-supplemental/test7.pdf in >> Section 4.1, "Enforcing Order with Dependencies" >> >> In the AArch64 spec, we have: >> >> B2.7.2 Ordering requirements >> >> If an address dependency exists between two reads or between a read >> and a write, then those memory accesses are observed in program >> order by all observers within the shareability domain of the memory >> >> So, an address dependency and a DMB when an object is created is all >> we need. > > I don't see how that applies to general volatile putfields?? > > The reason the PPC64 port added the additional MemNodes was so that some > of the special cases could be handled via those nodes to avoid redundant > barriers. > > David > ------ > > > > > > >> Andrew. >> From david.holmes at oracle.com Thu Sep 4 04:35:45 2014 From: david.holmes at oracle.com (David Holmes) Date: Thu, 04 Sep 2014 14:35:45 +1000 Subject: RFR (preliminary): JDK-8056999 Make hotspot builds less verbose on default log level In-Reply-To: <54046278.7050404@oracle.com> References: <54046278.7050404@oracle.com> Message-ID: <5407EC21.8050709@oracle.com> Hi Magnus, On 1/09/2014 10:11 PM, Magnus Ihse Bursie wrote: > Even in the default log level ("warn"), hotspot builds are extremely > verbose. 
With the new jigsaw build system, hotspot is built in parallel > with the jdk, and the sheer amount of hotspot output makes the jdk > output practically disappear. > > This fix will make the following changes: > * When hotspot is built from the top dir with the default log level, all > repetitive and purely informative output is hidden (e.g. names of files > compiled, and the "INFO:" blobs). I think I probably want a default log level a little more informative than that - I like to see visible progress indicators. :) > * When hotspot is built from the top dir, with any other log level > (info, debug, trace), all output will be there, as before. Would be nice to have fixed the excessive/repetitive INFO blocks re FDS :) but that requires more than just controlling an on/off switch. > * When hotspot is built from the hotspot repo, all output will be there, > as before. > > Note! This is a preliminary review -- I have made the necessary changes > for Linux only. If this fix gets thumbs up, I'll continue and apply the > same pattern to the rest of the platforms. But I didn't want to do all > that duplication until I felt certain that I wouldn't have to change > something major. The changes themselves are mostly trivial, but they are > all over the place :-(. > > Bug: https://bugs.openjdk.java.net/browse/JDK-8056999 > WebRev: > http://cr.openjdk.java.net/~ihse/JDK-8056999-less-verbose-hotspot-builds/webrev.01 Seems to be some overlap with the $(QUIETLY) mechanism - but to be honest I always have trouble remembering how that works. In looking at it now it seems to me that "$(QUIETLY) echo" is incorrect as the text is always echoed, what gets suppressed is the echoing of the echo command itself - which seems pointless. So I think all "$(QUIETLY) echo" should just be @echo. But then replacing @echo with a $(ECHO) that may be silent would seem a bit cleaner than "@echo $(LOG_INFO)". (Not sure what you are doing in the rest of the build). print_info is nice. 
Cheers, David > > /Magnus From tobias.hartmann at oracle.com Thu Sep 4 04:57:37 2014 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Thu, 04 Sep 2014 06:57:37 +0200 Subject: [8u40] RFR(S): 8048879: "unexpected yanked node" opto/postaloc.cpp:139 In-Reply-To: <5407462F.2060802@oracle.com> References: <5406C31A.5000408@oracle.com> <5407462F.2060802@oracle.com> Message-ID: <5407F141.4060607@oracle.com> Thanks, Vladimir. Best, Tobias On 03.09.2014 18:47, Vladimir Kozlov wrote: > Good. > > Thanks, > Vladimir > > On 9/3/14 12:28 AM, Tobias Hartmann wrote: >> Hi, >> >> please review this 8u40 backport request. The changes were pushed two >> weeks ago and nightly testing showed no problems. >> >> The patch applies cleanly to 8u40. >> >> Master Bug: https://bugs.openjdk.java.net/browse/JDK-8048879 >> Webrev: http://cr.openjdk.java.net/~thartmann/8048879/webrev.00/ >> Changeset: >> http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/7c8d05c88072 >> >> Thanks, >> Tobias From mikael.gerdin at oracle.com Thu Sep 4 06:19:24 2014 From: mikael.gerdin at oracle.com (Mikael Gerdin) Date: Thu, 04 Sep 2014 08:19:24 +0200 Subject: RFR: 8u40: Thread and management extension support In-Reply-To: <540726DE.20207@oracle.com> References: <540726DE.20207@oracle.com> Message-ID: <2314263.Riq1Hd4g4K@mgerdin-lap> Hi Stefan, On Wednesday 03 September 2014 16.34.06 Stefan Johansson wrote: > Hi, > > Please review these changes to allow thread and management extensions in > the VM. > http://cr.openjdk.java.net/~sjohanss/thread-ext/webrev.00/ This looks like a pretty clean refactoring to allow further extensions with per-thread data. It looks like you removed a random newline in the Thread constructor and another newline in the Thread class declaration. thread_ext.cpp should only need two includes: precompiled.hpp and thread_ext.hpp it does not reference anything in the Thread class. I've verified that the code which is moved outside INCLUDE_MANAGEMENT in management.cpp is a clean copy. 
With those small nits fixed immediately or deferred for future cleanup this looks good to me. /Mikael > > There is currently no JBS issue open for this issue but one will be open > shortly. > > Best regards, > Stefan From John.Coomes at oracle.com Thu Sep 4 06:36:38 2014 From: John.Coomes at oracle.com (John Coomes) Date: Wed, 3 Sep 2014 23:36:38 -0700 Subject: RFR: 8u40: Thread and management extension support In-Reply-To: <540726DE.20207@oracle.com> References: <540726DE.20207@oracle.com> Message-ID: <21512.2166.18393.591941@mykonos.us.oracle.com> Stefan Johansson (stefan.johansson at oracle.com) wrote: > Hi, > > Please review these changes to allow thread and management extensions in > the VM. > http://cr.openjdk.java.net/~sjohanss/thread-ext/webrev.00/ > > There is currently no JBS issue open for this issue but one will be open > shortly. Looks good to me. FWIW, I don't care too much for the "break" in Threads::find_java_thread_from_java_tid() (line 3891); it should instead be "return thread". But since this code is just being moved verbatim from management.cpp, it's ok to leave as is. -John From stefan.johansson at oracle.com Thu Sep 4 06:43:39 2014 From: stefan.johansson at oracle.com (Stefan Johansson) Date: Thu, 04 Sep 2014 08:43:39 +0200 Subject: RFR: 8u40: Thread and management extension support In-Reply-To: <2314263.Riq1Hd4g4K@mgerdin-lap> References: <540726DE.20207@oracle.com> <2314263.Riq1Hd4g4K@mgerdin-lap> Message-ID: <54080A1B.5040004@oracle.com> Thanks Mikael for looking at this. On 2014-09-04 08:19, Mikael Gerdin wrote: > Hi Stefan, > > On Wednesday 03 September 2014 16.34.06 Stefan Johansson wrote: >> Hi, >> >> Please review these changes to allow thread and management extensions in >> the VM. >> http://cr.openjdk.java.net/~sjohanss/thread-ext/webrev.00/ > This looks like a pretty clean refactoring to allow further extensions with > per-thread data. 
> > It looks like you removed a random newline in the Thread constructor and > another newline in the Thread class declaration. Fixed. > thread_ext.cpp should only need two includes: > precompiled.hpp and thread_ext.hpp it does not reference anything in the > Thread class. Fixed. > I've verified that the code which is moved outside INCLUDE_MANAGEMENT in > management.cpp is a clean copy. > > With those small nits fixed immediately or deferred for future cleanup this > looks good to me. New webrev at: http://cr.openjdk.java.net/~sjohanss/thread-ext/webrev.01/ Stefan > /Mikael > >> There is currently no JBS issue open for this issue but one will be open >> shortly. >> >> Best regards, >> Stefan From stefan.johansson at oracle.com Thu Sep 4 06:44:26 2014 From: stefan.johansson at oracle.com (Stefan Johansson) Date: Thu, 04 Sep 2014 08:44:26 +0200 Subject: RFR: 8u40: Thread and management extension support In-Reply-To: <21512.2166.18393.591941@mykonos.us.oracle.com> References: <540726DE.20207@oracle.com> <21512.2166.18393.591941@mykonos.us.oracle.com> Message-ID: <54080A4A.8030805@oracle.com> On 2014-09-04 08:36, John Coomes wrote: > Stefan Johansson (stefan.johansson at oracle.com) wrote: >> Hi, >> >> Please review these changes to allow thread and management extensions in >> the VM. >> http://cr.openjdk.java.net/~sjohanss/thread-ext/webrev.00/ >> >> There is currently no JBS issue open for this issue but one will be open >> shortly. > Looks good to me. Thanks John, See updated webrev due to Mikaels comments: http://cr.openjdk.java.net/~sjohanss/thread-ext/webrev.01/ Stefan > > FWIW, I don't care too much for the "break" in > Threads::find_java_thread_from_java_tid() (line 3891); it should > instead be "return thread". But since this code is just being moved > verbatim from management.cpp, it's ok to leave as is. 
> > -John From mikael.gerdin at oracle.com Thu Sep 4 07:04:13 2014 From: mikael.gerdin at oracle.com (Mikael Gerdin) Date: Thu, 04 Sep 2014 09:04:13 +0200 Subject: RFR: 8u40: Thread and management extension support In-Reply-To: <54080A1B.5040004@oracle.com> References: <540726DE.20207@oracle.com> <2314263.Riq1Hd4g4K@mgerdin-lap> <54080A1B.5040004@oracle.com> Message-ID: <11103996.PgrLdTYxyV@mgerdin-lap> Stefan, On Thursday 04 September 2014 08.43.39 Stefan Johansson wrote: > Thanks Mikael for looking at this. > > On 2014-09-04 08:19, Mikael Gerdin wrote: > > Hi Stefan, > > > > On Wednesday 03 September 2014 16.34.06 Stefan Johansson wrote: > >> Hi, > >> > >> Please review these changes to allow thread and management extensions in > >> the VM. > >> http://cr.openjdk.java.net/~sjohanss/thread-ext/webrev.00/ > > > > This looks like a pretty clean refactoring to allow further extensions > > with > > per-thread data. > > > > It looks like you removed a random newline in the Thread constructor and > > another newline in the Thread class declaration. > > Fixed. > > > thread_ext.cpp should only need two includes: > > precompiled.hpp and thread_ext.hpp it does not reference anything in the > > Thread class. > > Fixed. > > > I've verified that the code which is moved outside INCLUDE_MANAGEMENT in > > management.cpp is a clean copy. > > > > With those small nits fixed immediately or deferred for future cleanup > > this > > looks good to me. > > New webrev at: > http://cr.openjdk.java.net/~sjohanss/thread-ext/webrev.01/ Looks good. /Mikael > > Stefan > > > /Mikael > > > >> There is currently no JBS issue open for this issue but one will be open > >> shortly. 
> >> > >> Best regards, > >> Stefan From volker.simonis at gmail.com Thu Sep 4 07:37:18 2014 From: volker.simonis at gmail.com (Volker Simonis) Date: Thu, 4 Sep 2014 09:37:18 +0200 Subject: RFR(XXS): 8057129: Fix AIX build after the Extend CompileCommand=option change 8055286 In-Reply-To: <54079384.30904@oracle.com> References: <54079384.30904@oracle.com> Message-ID: Thanks a lot Vladimir! Volker On Thu, Sep 4, 2014 at 12:17 AM, Vladimir Kozlov wrote: > Looks good. I will push it today. > > Thanks, > Vladimir > > > On 9/3/14 5:48 AM, Volker Simonis wrote: >> >> Hi, >> >> could somebody please review and sponsor this tiny change which fixes >> an AIX build failure after "8055286: Extend CompileCommand=option to >> handle numeric parameters" (details see below). >> >> It would be nice if this fix could be pushed to hs-comp before hs-comp >> gets pushed to the other hs repos: >> >> http://cr.openjdk.java.net/~simonis/webrevs/8057129/ >> https://bugs.openjdk.java.net/browse/JDK-8057129 >> >> The AIX xlC compiler is overly picky with regard to section 14.6.4.2 >> "Candidate functions" of the C++ standard (see >> http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2011/n3242.pdf) >> which states: >> >> "If the function name is an unqualified-id and the call would be >> ill-formed or would find a better match had the lookup within the >> associated namespaces considered all the function declarations with >> external linkage introduced in those namespaces in all translation >> units, not just considering those declarations found in the template >> definition and template instantiation contexts, then the program has >> undefined behavior." >> >> xlC implements this by not taking into account static functions which >> have internal linkage and terminates with the error message: >> >> "hotspot-comp/src/share/vm/compiler/compilerOracle.cpp", line 364.10: >> 1540-0274 (S) The name lookup for "get_option_value" did not find a >> declaration. 
>> "hotspot-comp/src/share/vm/compiler/compilerOracle.cpp", line 364.10: >> 1540-1292 (I) Static declarations are not considered for a function >> call if the function is not qualified. >> >> The fix is trivial - just qualify the call to "get_option_value" like >> this: >> >> return ::get_option_value(method, option, value); >> >> Thank you and best regards, >> Volker >> > From John.Coomes at oracle.com Thu Sep 4 07:48:15 2014 From: John.Coomes at oracle.com (John Coomes) Date: Thu, 4 Sep 2014 00:48:15 -0700 Subject: RFR: 8u40: Thread and management extension support In-Reply-To: <54080A4A.8030805@oracle.com> References: <540726DE.20207@oracle.com> <21512.2166.18393.591941@mykonos.us.oracle.com> <54080A4A.8030805@oracle.com> Message-ID: <21512.6463.751311.507391@mykonos.us.oracle.com> Stefan Johansson (stefan.johansson at oracle.com) wrote: > On 2014-09-04 08:36, John Coomes wrote: > > Stefan Johansson (stefan.johansson at oracle.com) wrote: > >> Hi, > >> > >> Please review these changes to allow thread and management extensions in > >> the VM. > >> http://cr.openjdk.java.net/~sjohanss/thread-ext/webrev.00/ > >> > >> There is currently no JBS issue open for this issue but one will be open > >> shortly. > > Looks good to me. > Thanks John, > > See updated webrev due to Mikaels comments: > http://cr.openjdk.java.net/~sjohanss/thread-ext/webrev.01/ Looks good, ship it. -John > > FWIW, I don't care too much for the "break" in > > Threads::find_java_thread_from_java_tid() (line 3891); it should > > instead be "return thread". But since this code is just being moved > > verbatim from management.cpp, it's ok to leave as is. 
> > > > -John > From volker.simonis at gmail.com Thu Sep 4 08:14:57 2014 From: volker.simonis at gmail.com (Volker Simonis) Date: Thu, 4 Sep 2014 10:14:57 +0200 Subject: Release store in C2 putfield In-Reply-To: <5407B223.5010300@oracle.com> References: <540722C0.1060404@redhat.com> <54074E29.8030500@oracle.com> <54075AFC.6020209@redhat.com> <5407B223.5010300@oracle.com> Message-ID: I've just checked that we only use this flag (i.e. as_Store()->is_unordered()/as_Store()->is_release()) on our Itanium port. Other platforms use the MemBarRelease/MemBarStoreStore nodes which are there anyway. On Itanium, we don't use these nodes that's why we need the flag. But I don't see how this bothers anybody. If you don't use this flag in your .ad file in your Store nodes it will have no side effect. Or am I missing something? Regards, Martin and Volker On Thu, Sep 4, 2014 at 2:28 AM, Vladimir Kozlov wrote: > In general I am fine with such platform specific change in shared code. > > But in current case non-volatile release_store code affects only PPC64. As > Aleksey suggested may be we should use NOT_PPC64() macro instead. > Also I don't see calls as_Store()->is_unordered() in ppc.ad file. Looks like > PPC64 doesn't check this flag. > > Thanks, > Vladimir > > > On 9/3/14 11:16 AM, Andrew Haley wrote: >> >> On 09/03/2014 06:21 PM, Vladimir Kozlov wrote: >>> >>> Andrew, >>> >>> Do you need unordered in Parse::array_store() too? >> >> >> Yes. >> >>> Another way of doing it is to define MemNode::release_if_reference() in >>> .ad files in 'source %{' section. >> >> >> Yes, that's what we've got now. Thanks. >> >> Andrew. >> >> >> # HG changeset patch >> # User Edward Nevill edward.nevill at linaro.org >> # Date 1409307165 -3600 >> # Fri Aug 29 11:12:45 2014 +0100 >> # Node ID fc245bc14fa3589074c78ceb0e25ecf36ee3e110 >> # Parent 32fae3443576ac6b4b5ac0770c0829ce6c08764e >> Dont use a release store when storing an OOP in a non-volatile field. 
>> >> diff -r 32fae3443576 -r fc245bc14fa3 src/share/vm/opto/memnode.hpp >> --- a/src/share/vm/opto/memnode.hpp Mon Sep 01 13:10:18 2014 -0400 >> +++ b/src/share/vm/opto/memnode.hpp Fri Aug 29 11:12:45 2014 +0100 >> @@ -503,6 +503,12 @@ >> // Conservatively release stores of object references in order to >> // ensure visibility of object initialization. >> static inline MemOrd release_if_reference(const BasicType t) { >> + // AArch64 doesn't need a release store because if there is an >> + // address dependency between a read and a write, then those >> + // memory accesses are observed in program order by all observers >> + // within the shareability domain. >> + AARCH64_ONLY(return unordered); >> + >> const MemOrd mo = (t == T_ARRAY || >> t == T_ADDRESS || // Might be the address of an >> object reference (`boxing'). >> t == T_OBJECT) ? release : unordered; >> diff -r 32fae3443576 -r fc245bc14fa3 src/share/vm/opto/parse2.cpp >> --- a/src/share/vm/opto/parse2.cpp Mon Sep 01 13:10:18 2014 -0400 >> +++ b/src/share/vm/opto/parse2.cpp Fri Aug 29 11:12:45 2014 +0100 >> @@ -1689,7 +1689,7 @@ >> a = pop(); // the array itself >> const TypeOopPtr* elemtype = >> _gvn.type(a)->is_aryptr()->elem()->make_oopptr(); >> const TypeAryPtr* adr_type = TypeAryPtr::OOPS; >> - Node* store = store_oop_to_array(control(), a, d, adr_type, c, >> elemtype, T_OBJECT, MemNode::release); >> + Node* store = store_oop_to_array(control(), a, d, adr_type, c, >> elemtype, T_OBJECT, StoreNode::release_if_reference(T_OBJECT)); >> break; >> } >> case Bytecodes::_lastore: { >> > From aph at redhat.com Thu Sep 4 08:21:46 2014 From: aph at redhat.com (Andrew Haley) Date: Thu, 04 Sep 2014 09:21:46 +0100 Subject: Release store in C2 putfield In-Reply-To: References: <540722C0.1060404@redhat.com> <54074E29.8030500@oracle.com> <54075AFC.6020209@redhat.com> <5407B223.5010300@oracle.com> Message-ID: <5408211A.5060307@redhat.com> On 04/09/14 09:14, Volker Simonis wrote: > I've just checked that we only use this 
flag (i.e. > as_Store()->is_unordered()/as_Store()->is_release()) on our Itanium > port. Other platforms use the MemBarRelease/MemBarStoreStore nodes > which are there anyway. On Itanium, we don't use these nodes that's > why we need the flag. > > But I don't see how this bothers anybody. If you don't use this flag > in your .ad file in your Store nodes it will have no side effect. Or > am I missing something? I think you must be. We generate a release store if the node asks for one. AArch64 doesn't usually need separate barriers. But that's not the point, really: the point is that not every reference store needs to be a release, which is how it is right now. Andrew. From erik.osterlund at lnu.se Thu Sep 4 09:05:13 2014 From: erik.osterlund at lnu.se (Erik Österlund) Date: Thu, 4 Sep 2014 09:05:13 +0000 Subject: Single byte Atomic::cmpxchg implementation Message-ID: <631578E5-52BF-45D1-A8F2-50862878EBB8@lnu.se> Hi, The implementation of single byte Atomic::cmpxchg on x86 (and all other platforms) emulates the single byte cmpxchgb instruction using a loop of jint-sized load and cmpxchgl and code to dynamically align the destination address. This code is used for GC-code related to remembered sets currently. I have the changes on my platform (amd64, bsd) to simply use the native cmpxchgb instead but could provide a patch fixing this unnecessary performance glitch for all supported x86 if anybody wants this?
/Erik From aleksey.shipilev at oracle.com Thu Sep 4 09:15:05 2014 From: aleksey.shipilev at oracle.com (Aleksey Shipilev) Date: Thu, 04 Sep 2014 13:15:05 +0400 Subject: Single byte Atomic::cmpxchg implementation In-Reply-To: <631578E5-52BF-45D1-A8F2-50862878EBB8@lnu.se> References: <631578E5-52BF-45D1-A8F2-50862878EBB8@lnu.se> Message-ID: <54082D99.409@oracle.com> Hi, On 09/04/2014 01:05 PM, Erik Österlund wrote: > The implementation of single byte Atomic::cmpxchg on x86 (and all > other platforms) emulates the single byte cmpxchgb instruction using > a loop of jint-sized load and cmpxchgl and code to dynamically align > the destination address. > > This code is used for GC-code related to remembered sets currently. > > I have the changes on my platform (amd64, bsd) to simply use the > native cmpxchgb instead but could provide a patch fixing this > unnecessary performance glitch for all supported x86 if anybody wants > this? Yes please, let's have a look at this. Do you have a quantifiable performance improvement with your change? -Aleksey. From mikael.gerdin at oracle.com Thu Sep 4 09:20:06 2014 From: mikael.gerdin at oracle.com (Mikael Gerdin) Date: Thu, 04 Sep 2014 11:20:06 +0200 Subject: Single byte Atomic::cmpxchg implementation In-Reply-To: <631578E5-52BF-45D1-A8F2-50862878EBB8@lnu.se> References: <631578E5-52BF-45D1-A8F2-50862878EBB8@lnu.se> Message-ID: <3855070.ADDnZ0LX5H@mgerdin-lap> Hi Erik, On Thursday 04 September 2014 09.05.13 Erik Österlund wrote: > Hi, > > The implementation of single byte Atomic::cmpxchg on x86 (and all other > platforms) emulates the single byte cmpxchgb instruction using a loop of > jint-sized load and cmpxchgl and code to dynamically align the destination > address. > > This code is used for GC-code related to remembered sets currently.
> > I have the changes on my platform (amd64, bsd) to simply use the native > cmpxchgb instead but could provide a patch fixing this unnecessary > performance glitch for all supported x86 if anybody wants this? I think that sounds good. Would you mind looking at other cpu arches to see if they provide something similar? It's ok if you can't build the code for the other arches, I can help you with that. /Mikael > > /Erik From bertrand.delsart at oracle.com Thu Sep 4 09:30:38 2014 From: bertrand.delsart at oracle.com (Bertrand Delsart) Date: Thu, 04 Sep 2014 11:30:38 +0200 Subject: Release store in C2 putfield In-Reply-To: <540764E0.9030601@redhat.com> References: <540722C0.1060404@redhat.com> <5407468A.2020004@oracle.com> <54074942.9050506@redhat.com> <5407541E.1070707@oracle.com> <540757CC.2050904@redhat.com> <54075994.1050609@oracle.com> <54075D33.7060400@redhat.com> <54075DFB.4050807@oracle.com> <540764E0.9030601@redhat.com> Message-ID: <5408313E.1020500@oracle.com> I'm not a C2 expert but from what I have quickly checked, an issue may be that we need StoreStore ordering on some platforms. This should for instance be true for cardmarking (new stored oop must be visible before the card is marked). This may also be true for the oop stores in general, as initially discussed. [IMHO this is related to final fields, which have to be visible when the object escapes. Barriers at the end of the constructors may not be sufficient if objects can escape before the end of their init() method. Systematic StoreStore barriers on oop_store are an easy way to solve the issue without escape analysis]. Unfortunately, MemNode only defines higher level 'release' and 'acquire'. This means that if you want to use MemNode to guarantee the StoreStore (instead of using a separate membar), then you need to use MemNode::release... which uselessly adds the LoadStore semantic. LoadStore may require a stronger barrier (for instance "DMB SY" instead of "DMB ST" on ARM32).
We have the same issue with some other code in hotspot. Some StoreStore constraints just before a write have been implemented using OrderAccess::release_store(). The latter is currently slightly more efficient on SPARC/x86... but this is because OrderAccess::storestore() is not fully optimized (we just need to prevent C++ compiler reordering, we should not have to actually generate code for storestore on TSO systems). Unfortunately, on platforms with weaker memory models, release_store() may be less efficient than a storestore() followed by a write :-( Maybe we need in OrderAccess and in MemNode a new store operation weaker than release_store, ordering only the store wrt previous stores (e.g. "membar #StoreStore; write"). This will be a simple store on TSO systems (+ something to prevent compiler reordering) but will be expanded to whatever is the most efficient barrier on the other systems. Regards, Bertrand. On 03/09/14 20:58, Andrew Haley wrote: > On 09/03/2014 07:29 PM, Aleksey Shipilev wrote: >> On 09/03/2014 10:25 PM, Andrew Haley wrote: >>> On 09/03/2014 07:10 PM, Aleksey Shipilev wrote: >>>> So there, let's figure out whether we should just purge the entire block! :) >>> >>> Okay. It's better than arguing about interpretation of the paper. >> >> Let's wait a bit for Goetz's input on this. It was his commit that introduced this in the first place: >> >> $ hg log -r 5983 changeset: 5983:2113136690bc parent: 5981:eb178e97560c user: goetz date: Fri Nov 15 11:05:32 2013 -0800 summary: 8024921: PPC64 (part 113): Extend Load and Store nodes to know about memory ordering >> >> We can dig in the mail history if Goetz does not reply any time soon. > > Okay. While we're discussing this, I'd better tell you that I am also > looking at why the card table write is a release store. But that's for > later. > > Andrew. > -- Bertrand Delsart, Grenoble Engineering Center Oracle, 180 av.
de l'Europe, ZIRST de Montbonnot 38334 Saint Ismier, FRANCE bertrand.delsart at oracle.com Phone : +33 4 76 18 81 23 From martin.doerr at sap.com Thu Sep 4 09:29:48 2014 From: martin.doerr at sap.com (Doerr, Martin) Date: Thu, 4 Sep 2014 09:29:48 +0000 Subject: Release store in C2 putfield In-Reply-To: <5408211A.5060307@redhat.com> References: <540722C0.1060404@redhat.com> <54074E29.8030500@oracle.com> <54075AFC.6020209@redhat.com> <5407B223.5010300@oracle.com> <5408211A.5060307@redhat.com> Message-ID: <7C9B87B351A4BA4AA9EC95BB418116566ACC5E5D@DEWDFEMB19C.global.corp.sap> Hi Andrew, got it. You want to use releasing stores on AArch64 like we do it on IA64. We exploit that the releasing stores perform better than separate memory barrier instructions on IA64. That's why we implemented the MemBarRelease and MemBarStoreStore as empty nodes and rely on the release flag of the store nodes. The release flag for volatile store replaces the MemBarRelease and the release_if_reference case replaces the MemBarStoreStore which is needed to make the object initialization visible for other threads like GC. In other words, we skip the separate memory barriers and release all oop stores. I'm not familiar with AArch64. Don't you want to implement the 2 nodes with empty encoding? The ARM memory model is similar to PPC where we definitely need release barriers.
I've read parts of the email thread in which you were citing "A Tutorial Introduction to the ARM and POWER Relaxed Memory Models", but I've only seen explanations about the ordering on the reader's side. I agree with that the reader of the oop doesn't need barriers. But how do you ensure that the writer "publishes" the initializing stores before the oop store? Best regards, Martin -----Original Message----- From: Andrew Haley [mailto:aph at redhat.com] Sent: Donnerstag, 4. September 2014 10:22 To: Volker Simonis; Vladimir Kozlov Cc: hotspot-dev Source Developers; Lindenmaier, Goetz; Doerr, Martin Subject: Re: Release store in C2 putfield On 04/09/14 09:14, Volker Simonis wrote: > I've just checked that we only use this flag (i.e. > as_Store()->is_unordered()/as_Store()->is_release()) on our Itanium > port. Other platforms use the MemBarRelease/MemBarStoreStore nodes > which are there anyway. On Itanium, we don't use these nodes that's > why we need the flag. > > But I don't see how this bothers anybody. If you don't use this flag > in your .ad file in your Store nodes it will have no side effect. Or > am I missing something? I think you must be. We generate a release store if the node asks for one. AArch64 doesn't usually need separate barriers. But that's not the point, really: the point is that not every reference store needs to be a release, which is how it is right now. Andrew. From aph at redhat.com Thu Sep 4 09:45:14 2014 From: aph at redhat.com (Andrew Haley) Date: Thu, 04 Sep 2014 10:45:14 +0100 Subject: Release store in C2 putfield In-Reply-To: <7C9B87B351A4BA4AA9EC95BB418116566ACC5E5D@DEWDFEMB19C.global.corp.sap> References: <540722C0.1060404@redhat.com> <54074E29.8030500@oracle.com> <54075AFC.6020209@redhat.com> <5407B223.5010300@oracle.com> <5408211A.5060307@redhat.com> <7C9B87B351A4BA4AA9EC95BB418116566ACC5E5D@DEWDFEMB19C.global.corp.sap> Message-ID: <540834AA.3050805@redhat.com> Hi, On 09/04/2014 10:29 AM, Doerr, Martin wrote: > got it. 
You want to use releasing stores on AArch64 like we do it on IA64. > > We exploit that the releasing stores perform better than separate > memory barrier instructions on IA64. > That's why we implemented the MemBarRelease and MemBarStoreStore as > empty nodes and rely on the release flag of the store nodes. Okay. > The release flag for volatile store replaces the MemBarRelease and > the release_if_reference case replaces the MemBarStoreStore which is > needed to make the object initialization visible for other threads > like GC. In other words, we skip the separate memory barriers and > release all oop stores. Right. But the thing I'm complaining about is the release store on every putfield, regardless of whether it's volatile. That's surely wrong. It has a performance hit. I'm sure IA64 would be better without it, too. > I'm not familiar with AArch64. Don't you want to implement the 2 > nodes with empty encoding? No, because I want (and I'm sure we all want) a memory barrier at the end of object creation. Then we don't need one on every putfield. > The ARM memory model is similar to PPC where we definitely need > release barriers. > I've read parts of the email thread in which you were citing "A > Tutorial Introduction to the ARM and POWER Relaxed Memory Models", > but I've only seen explanations about the ordering on the reader's > side. I agree with that the reader of the oop doesn't need barriers. That's true, as long as there is a store barrier when an object is created. > But how do you ensure that the writer "publishes" the initializing > stores before the oop store? I have a store barrier when the object is created: that's correct. Andrew. 
From martin.doerr at sap.com Thu Sep 4 10:00:00 2014 From: martin.doerr at sap.com (Doerr, Martin) Date: Thu, 4 Sep 2014 10:00:00 +0000 Subject: Release store in C2 putfield In-Reply-To: <540834AA.3050805@redhat.com> References: <540722C0.1060404@redhat.com> <54074E29.8030500@oracle.com> <54075AFC.6020209@redhat.com> <5407B223.5010300@oracle.com> <5408211A.5060307@redhat.com> <7C9B87B351A4BA4AA9EC95BB418116566ACC5E5D@DEWDFEMB19C.global.corp.sap> <540834AA.3050805@redhat.com> Message-ID: <7C9B87B351A4BA4AA9EC95BB418116566ACC5E95@DEWDFEMB19C.global.corp.sap> Hi Andrew, the problem on IA64 is that there's no separate release memory barrier. Issuing a full fence after every object creation is more expensive than releasing all oop stores. This might be different on AArch64 where other memory barriers are available. I'd appreciate a proposal how we can support both ways. Best regards, Martin -----Original Message----- From: Andrew Haley [mailto:aph at redhat.com] Sent: Donnerstag, 4. September 2014 11:45 To: Doerr, Martin; Volker Simonis; Vladimir Kozlov Cc: hotspot-dev Source Developers; Lindenmaier, Goetz Subject: Re: Release store in C2 putfield Hi, On 09/04/2014 10:29 AM, Doerr, Martin wrote: > got it. You want to use releasing stores on AArch64 like we do it on IA64. > > We exploit that the releasing stores perform better than separate > memory barrier instructions on IA64. > That's why we implemented the MemBarRelease and MemBarStoreStore as > empty nodes and rely on the release flag of the store nodes. Okay. > The release flag for volatile store replaces the MemBarRelease and > the release_if_reference case replaces the MemBarStoreStore which is > needed to make the object initialization visible for other threads > like GC. In other words, we skip the separate memory barriers and > release all oop stores. Right. But the thing I'm complaining about is the release store on every putfield, regardless of whether it's volatile. That's surely wrong. 
It has a performance hit. I'm sure IA64 would be better without it, too. > I'm not familiar with AArch64. Don't you want to implement the 2 > nodes with empty encoding? No, because I want (and I'm sure we all want) a memory barrier at the end of object creation. Then we don't need one on every putfield. > The ARM memory model is similar to PPC where we definitely need > release barriers. > I've read parts of the email thread in which you were citing "A > Tutorial Introduction to the ARM and POWER Relaxed Memory Models", > but I've only seen explanations about the ordering on the reader's > side. I agree with that the reader of the oop doesn't need barriers. That's true, as long as there is a store barrier when an object is created. > But how do you ensure that the writer "publishes" the initializing > stores before the oop store? I have a store barrier when the object is created: that's correct. Andrew. From david.holmes at oracle.com Thu Sep 4 10:15:55 2014 From: david.holmes at oracle.com (David Holmes) Date: Thu, 04 Sep 2014 20:15:55 +1000 Subject: Release store in C2 putfield In-Reply-To: <7C9B87B351A4BA4AA9EC95BB418116566ACC5E95@DEWDFEMB19C.global.corp.sap> References: <540722C0.1060404@redhat.com> <54074E29.8030500@oracle.com> <54075AFC.6020209@redhat.com> <5407B223.5010300@oracle.com> <5408211A.5060307@redhat.com> <7C9B87B351A4BA4AA9EC95BB418116566ACC5E5D@DEWDFEMB19C.global.corp.sap> <540834AA.3050805@redhat.com> <7C9B87B351A4BA4AA9EC95BB418116566ACC5E95@DEWDFEMB19C.global.corp.sap> Message-ID: <54083BDB.2010407@oracle.com> On 4/09/2014 8:00 PM, Doerr, Martin wrote: > Hi Andrew, > > the problem on IA64 is that there's no separate release memory barrier. > Issuing a full fence after every object creation is more expensive than releasing all oop stores. > This might be different on AArch64 where other memory barriers are available. > I'd appreciate a proposal how we can support both ways. I would think an abstraction to represent each special context. 
One will be a no-op on IA64, the other a no-op on AArch64. Though I think what we are highlighting here is that shared C2 abstractions do not fit when we want to optimize for specific architectures. A way to redefine this code in a platform specific file may be a better long term solution. I'll also add that a "barrier" at the end of construction is not necessarily sufficient as Bertrand has already mentioned, if 'this' can escape. David > Best regards, > Martin > > -----Original Message----- > From: Andrew Haley [mailto:aph at redhat.com] > Sent: Donnerstag, 4. September 2014 11:45 > To: Doerr, Martin; Volker Simonis; Vladimir Kozlov > Cc: hotspot-dev Source Developers; Lindenmaier, Goetz > Subject: Re: Release store in C2 putfield > > Hi, > > On 09/04/2014 10:29 AM, Doerr, Martin wrote: > >> got it. You want to use releasing stores on AArch64 like we do it on IA64. >> >> We exploit that the releasing stores perform better than separate >> memory barrier instructions on IA64. >> That's why we implemented the MemBarRelease and MemBarStoreStore as >> empty nodes and rely on the release flag of the store nodes. > > Okay. > >> The release flag for volatile store replaces the MemBarRelease and >> the release_if_reference case replaces the MemBarStoreStore which is >> needed to make the object initialization visible for other threads >> like GC. In other words, we skip the separate memory barriers and >> release all oop stores. > > Right. But the thing I'm complaining about is the release store on > every putfield, regardless of whether it's volatile. That's surely > wrong. It has a performance hit. I'm sure IA64 would be better > without it, too. > >> I'm not familiar with AArch64. Don't you want to implement the 2 >> nodes with empty encoding? > > No, because I want (and I'm sure we all want) a memory barrier at the > end of object creation. Then we don't need one on every putfield. 
> >> The ARM memory model is similar to PPC where we definitely need >> release barriers. > >> I've read parts of the email thread in which you were citing "A >> Tutorial Introduction to the ARM and POWER Relaxed Memory Models", >> but I've only seen explanations about the ordering on the reader's >> side. I agree with that the reader of the oop doesn't need barriers. > > That's true, as long as there is a store barrier when an object is > created. > >> But how do you ensure that the writer "publishes" the initializing >> stores before the oop store? > > I have a store barrier when the object is created: that's correct. > > Andrew. > From aph at redhat.com Thu Sep 4 10:21:20 2014 From: aph at redhat.com (Andrew Haley) Date: Thu, 04 Sep 2014 11:21:20 +0100 Subject: Release store in C2 putfield In-Reply-To: <7C9B87B351A4BA4AA9EC95BB418116566ACC5E95@DEWDFEMB19C.global.corp.sap> References: <540722C0.1060404@redhat.com> <54074E29.8030500@oracle.com> <54075AFC.6020209@redhat.com> <5407B223.5010300@oracle.com> <5408211A.5060307@redhat.com> <7C9B87B351A4BA4AA9EC95BB418116566ACC5E5D@DEWDFEMB19C.global.corp.sap> <540834AA.3050805@redhat.com> <7C9B87B351A4BA4AA9EC95BB418116566ACC5E95@DEWDFEMB19C.global.corp.sap> Message-ID: <54083D20.5060402@redhat.com> On 09/04/2014 11:00 AM, Doerr, Martin wrote: > the problem on IA64 is that there's no separate release memory barrier. > Issuing a full fence after every object creation is more expensive > than releasing all oop stores. Really? Gosh. > This might be different on AArch64 where other memory barriers are available. It is: we have a full set of barriers and load acquire/store release. I'd like to generate those where possible. > I'd appreciate a proposal how we can support both ways. Okay. I'd like more types of barrier (and perhaps more types of memNode) so that the back end can choose what it needs to do. Then I could simply ignore release barriers generated for store release instructions, etc. 
It might also be a good idea to mark your IA64 release on every oop store as IA64-only. After all, it doesn't seem to be relevant to any other platform. Then my immediate problem would go away. Andrew. From aph at redhat.com Thu Sep 4 10:22:19 2014 From: aph at redhat.com (Andrew Haley) Date: Thu, 04 Sep 2014 11:22:19 +0100 Subject: Release store in C2 putfield In-Reply-To: <54083BDB.2010407@oracle.com> References: <540722C0.1060404@redhat.com> <54074E29.8030500@oracle.com> <54075AFC.6020209@redhat.com> <5407B223.5010300@oracle.com> <5408211A.5060307@redhat.com> <7C9B87B351A4BA4AA9EC95BB418116566ACC5E5D@DEWDFEMB19C.global.corp.sap> <540834AA.3050805@redhat.com> <7C9B87B351A4BA4AA9EC95BB418116566ACC5E95@DEWDFEMB19C.global.corp.sap> <54083BDB.2010407@oracle.com> Message-ID: <54083D5B.5080502@redhat.com> On 09/04/2014 11:15 AM, David Holmes wrote: > I'll also add that a "barrier" at the end of construction is not > necessarily sufficient as Bertrand has already mentioned, if 'this' can > escape. Can you provide us with an example of this which isn't a Java programmer error? Andrew. From aph at redhat.com Thu Sep 4 10:30:56 2014 From: aph at redhat.com (Andrew Haley) Date: Thu, 04 Sep 2014 11:30:56 +0100 Subject: Release store in C2 putfield In-Reply-To: <5408313E.1020500@oracle.com> References: <540722C0.1060404@redhat.com> <5407468A.2020004@oracle.com> <54074942.9050506@redhat.com> <5407541E.1070707@oracle.com> <540757CC.2050904@redhat.com> <54075994.1050609@oracle.com> <54075D33.7060400@redhat.com> <54075DFB.4050807@oracle.com> <540764E0.9030601@redhat.com> <5408313E.1020500@oracle.com> Message-ID: <54083F60.80206@redhat.com> On 09/04/2014 10:30 AM, Bertrand Delsart wrote: > I'm not a C2 expert but from what I have quickly checked, an issue may > be that we need StoreStore ordering on some platforms. > > This should for instance be true for cardmarking (new stored oop > must be visible before the card is marked). Okay. I can live with that. 
Is there a corresponding read barrier in the code which scans the card table? > This may also be true for the oop stores in general, as initially > discussed. [IMHO this is related to final fields, which have to be > visible when the object escapes. Barriers at the end of the > constructors may not be sufficient if objects can escape before > the end of their init() method. I'm pretty sure we do this correctly. Are you aware of any place (except unsafe publication, which is a programmer error) where this might happen? We generate a barrier at the end of a constructor if there is a final field and at the end of object creation. I should explain: I respect all nodes, and generate all barriers, except when there is a redundant barrier. So, if a barrier is immediately preceded by a load acquire I don't emit anything; likewise for a store release. So, for something like

volatile int x;
void foo() { while (true) x = 1; }

I get

;; membar_release (elided)
0x0000007fa822f1e4: add x10, x19, #0xc
;; 0x1
0x0000007fa822f1e8: orr w12, wzr, #0x1
0x0000007fa822f1ec: stlr w12, [x10]
0x0000007fa822f1f0: dmb ish               ;*putfield x
                                          ; - VolatileStore::foo at 2 (line 5)
0x0000007fa822f1f4: adrp x11, 0x0000007fb7ff7000
                                          ; OopMap{r19=Oop off=120}
                                          ;*goto
                                          ; - VolatileStore::foo at 5 (line 5)
                                          ; {poll}
0x0000007fa822f1f8: ldr wzr, [x11]        ;*goto
                                          ; - VolatileStore::foo at 5 (line 5)
                                          ; {poll}
0x0000007fa822f1fc: b 0x0000007fa822f1e4

Which is correct, I think. > Unfortunately, MemNode only defines higher level 'release' and > 'acquire'. This means that if you want to use MemNode to guarantee the > StoreStore (instead of using a separate membar), then you need to use > MemNode::release... which uselessly adds the LoadStore semantic. > > LoadStore may require a stronger barrier (for instance "DMB SY" instead > of "DMB ST" on ARM32).
The latter is currently slightly more > efficient on SPARC/x86... but this is because OrderAccess::storestore() > is not fully optimized (we just need to prevent C++ compiler reordering, > we should not have to actually generate code for storestore on TSO > systems). Unfortunately, on platforms with weaker memory models, > release_store() may be less efficient than a storestore() followed by a > write :-( Indeed. I think the back end should drive this. > Maybe we need in OrderAccess and in MemNode a new store operation > weaker than release_store, ordering only the store wrt previous stores > (e.g. "membar #StoreStore; write"). This will be a simple store on TSO > systems (+ something to prevent compiler reordering) but will be > expanded to whatever is the most efficient barrier on the other systems. It would help back-end writers tremendously if there were more kinds of MemNodes, preferably annotated in such a way that we could tell what each barrier was for. That way we could tell whether this is a barrier at the end of object creation, etc. And I could simply not generate anything for barriers that are unneeded because of store release instructions. Andrew.
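A Java-level sketch of the release/acquire pairing that the stlr/dmb sequence in Andrew's listing implements — the class and field names here are invented for illustration and are not taken from any test in the thread:

```java
// Sketch of the Java-level contract behind C2's release store on a
// volatile putfield; names are illustrative only.
class Publication {
    int payload;            // plain field, no ordering guarantees on its own
    volatile boolean ready; // volatile: store acts as release, load as acquire

    void publish() {
        payload = 42;  // ordinary store
        ready = true;  // release store: 'payload' must be visible to any
                       // thread that subsequently observes ready == true
    }

    Integer consume() {
        if (ready) {        // acquire load, pairs with the release store
            return payload; // guaranteed to observe 42, never the default 0
        }
        return null;        // not yet published
    }
}
```

On AArch64 the release side can be a single stlr, which is exactly why eliding the separate membar_release node, as described above, is attractive.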
From david.holmes at oracle.com Thu Sep 4 10:36:12 2014 From: david.holmes at oracle.com (David Holmes) Date: Thu, 04 Sep 2014 20:36:12 +1000 Subject: Release store in C2 putfield In-Reply-To: <54083D5B.5080502@redhat.com> References: <540722C0.1060404@redhat.com> <54074E29.8030500@oracle.com> <54075AFC.6020209@redhat.com> <5407B223.5010300@oracle.com> <5408211A.5060307@redhat.com> <7C9B87B351A4BA4AA9EC95BB418116566ACC5E5D@DEWDFEMB19C.global.corp.sap> <540834AA.3050805@redhat.com> <7C9B87B351A4BA4AA9EC95BB418116566ACC5E95@DEWDFEMB19C.global.corp.sap> <54083BDB.2010407@oracle.com> <54083D5B.5080502@redhat.com> Message-ID: <5408409C.1030300@oracle.com> On 4/09/2014 8:22 PM, Andrew Haley wrote: > On 09/04/2014 11:15 AM, David Holmes wrote: >> I'll also add that a "barrier" at the end of construction is not >> necessarily sufficient as Bertrand has already mentioned, if 'this' can >> escape. > > Can you provide us with an example of this which isn't a Java > programmer error? To clarify there are two sets of object state we are concerned with: - internal VM state that ensures the platform safety guarantees are met (ie all fields default initialized, valid vtable and object header seen) - Java level state The first requires a suitable "barrier" before Java constructor code is executed so that Java programmer errors do not lead to crashes. So a barrier at the end of construction is not sufficient. If you have the barrier at the start of construction then you don't need a barrier at the end except potentially as part of the freeze semantic for final fields. (Though updated JMM in Java 9 may indeed require such a barrier.) David > Andrew. 
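A minimal, single-threaded Java sketch of the 'this'-escape hazard David and Bertrand describe; the registry here is a hypothetical example invented for illustration, not a JDK or HotSpot API:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of 'this' escaping during construction.
class EscapingObject {
    static final List<EscapingObject> REGISTRY = new ArrayList<>();

    final int value;

    EscapingObject() {
        REGISTRY.add(this); // 'this' escapes before construction completes:
                            // any observer reached through REGISTRY can now
                            // see 'value' in its default state (0)
        this.value = 42;    // the final-field "freeze" only takes effect at
                            // the end of the constructor
    }
}
```

Even single-threaded, code that dereferences REGISTRY between the two constructor statements observes value == 0; on a weakly ordered machine another thread may observe 0 even after the constructor returns. This is why a barrier at the start of construction (protecting VM-internal state such as the header and vtable) does not by itself cover the Java-level final-field guarantee.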
> From tobias.hartmann at oracle.com Thu Sep 4 10:38:31 2014 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Thu, 04 Sep 2014 12:38:31 +0200 Subject: [9] RFR (L): 8015774: Add support for multiple code heaps In-Reply-To: <540799FE.5030309@oracle.com> References: <53FF1BF6.8070600@oracle.com> <53FFB78C.9060805@oracle.com> <540089D1.4060600@oracle.com> <5400B975.5030703@oracle.com> <5406F720.2080603@oracle.com> <540799FE.5030309@oracle.com> Message-ID: <54084127.8000202@oracle.com> Thank you, Vladimir. I added a test that checks the result of segmented code cache related VM options. New webrev: http://cr.openjdk.java.net/~thartmann/8015774/webrev.06/ Can I get a second review please? Best, Tobias On 04.09.2014 00:45, Vladimir Kozlov wrote: > Looks good. You need second review. > > And, please, add a WB test which verifies a reaction on flags settings > similar to what test/compiler/codecache/CheckUpperLimit.java does. > Both positive (SegmentedCodeCache is enabled) and negative (sizes do > not match or ReservedCodeCacheSize is small), etc. > > Thanks, > Vladimir > > On 9/3/14 4:10 AM, Tobias Hartmann wrote: >> Hi Vladimir, >> >> thanks for the review. >> >> On 29.08.2014 19:33, Vladimir Kozlov wrote: >>> On 8/29/14 7:10 AM, Tobias Hartmann wrote: >>>> Hi Vladimir, >>>> >>>> thanks for the review. >>>> >>>> On 29.08.2014 01:13, Vladimir Kozlov wrote: >>>>> For the record, SegmentedCodeCache is enabled by default when >>>>> TieredCompilation is enabled and ReservedCodeCacheSize >>>>> >= 240 MB. Otherwise it is false by default. >>>> >>>> Exactly. >>>> >>>>> arguments.cpp - in set_tiered_flags() swap SegmentedCodeCache >>>>> setting and segments size adjustment - do adjustment >>>>> only if SegmentedCodeCache is enabled. >>>> >>>> Done. >>>> >>>>> Also I think each flag should be checked and adjusted separately. >>>>> You may bail out (vm_exit_during_initialization) if >>>>> sizes do not add up.
>>>> >>>> I think we should only increase the sizes if they are all default. >>>> Otherwise we would for example fail if the user sets >>>> the NonMethodCodeHeapSize and the ProfiledCodeHeapSize because the >>>> NonProfiledCodeHeap size is multiplied by 5. What do >>>> you think? >>> >>> But ReservedCodeCacheSize is scaled anyway and you will get sum of >>> sizes != whole size. We need to do something. >> >> I agree. I changed it as you suggested first: The code heap sizes are >> scaled individually and we bail out if the sizes are not consistent with >> ReservedCodeCacheSize. >> >>> BTW the error message for next check should print all sizes, user may >>> not know the default value of some which he did not specify on >>> command line. >>> >>> (NonMethodCodeHeapSize + NonProfiledCodeHeapSize + >>> ProfiledCodeHeapSize) != ReservedCodeCacheSize) >> >> The error message now prints the sizes in brackets. >> >>>>> And use >>>> >>>> I think the rest of this sentence is missing :) >>> >>> And use FLAG_SET_ERGO() when you scale. :) >> >> Done. I also changed the implementation of CodeCache::initialize_heaps() >> accordingly. >> >>>>> Align second line: >>>>> >>>>> 2461 } else if ((!FLAG_IS_DEFAULT(NonMethodCodeHeapSize) || >>>>> !FLAG_IS_DEFAULT(ProfiledCodeHeapSize) || >>>>> !FLAG_IS_DEFAULT(NonProfiledCodeHeapSize)) >>>>> 2462 && (NonMethodCodeHeapSize + NonProfiledCodeHeapSize + >>>>> ProfiledCodeHeapSize) != ReservedCodeCacheSize) { >>>> >>>> Done. >>>> >>>>> codeCache.cpp - in initialize_heaps() add new methods in C1 and C2 >>>>> to return buffer_size they need. Add >>>>> assert(SegmentedCodeCache) to this method to show that we call it >>>>> only in such case. >>>> >>>> Done. >>>> >>>>> You do adjustment only when all flags are default. But you still >>>>> need to check that you have space in >>>>> NonMethodCodeHeap for scratch buffers.
>>>> >>>> I added the following check: >>>> >>>> // Make sure we have enough space for the code buffers >>>> if (NonMethodCodeHeapSize < code_buffers_size) { >>>> vm_exit_during_initialization("Not enough space for code buffers >>>> in CodeCache"); >>>> } >>> >>> I think, you need to take into account min_code_cache_size as in >>> arguments.cpp: >>> uint min_code_cache_size = (CodeCacheMinimumUseSpace DEBUG_ONLY(* 3)) >>> + CodeCacheMinimumFreeSpace; >>> >>> if (NonMethodCodeHeapSize < (min_code_cache_size+code_buffers_size)) { >> >> True, I changed it. >> >>> It would be nice if this code in initialize_heaps() could be >>> called during arguments parsing if we could get the number of compiler >>> threads there. But I understand that we can't do that until >>> compilation policy is set :( >> >> Yes, this is not possible because we need to know the number of C1/C2 >> compiler threads. >> >> New webrev: http://cr.openjdk.java.net/~thartmann/8015774/webrev.05/ >> >> Thanks, >> Tobias >> >>> >>>> >>>>> codeCache.hpp - comment alignment: >>>>> + // Creates a new heap with the given name and size, containing >>>>> CodeBlobs of the given type >>>>> ! static void add_heap(ReservedSpace rs, const char* name, size_t >>>>> size_initial, int code_blob_type); >>>> >>>> Done. >>>> >>>>> nmethod.cpp - in new() can we mark nmethod allocation critical only >>>>> when SegmentedCodeCache is enabled? >>>> >>>> Yes, that's what we do with: >>>> >>>> 809 bool is_critical = SegmentedCodeCache; >>>> >>>> Or what are you referring to? >>> >>> Somehow I missed that SegmentedCodeCache is used already. It is fine >>> then. >>> >>> Thanks, >>> Vladimir >>> >>>> >>>> New webrev: http://cr.openjdk.java.net/~thartmann/8015774/webrev.04 >>>> >>>> Thanks, >>>> Tobias >>>> >>>>> Thanks, >>>>> Vladimir >>>>> >>>>> On 8/28/14 5:09 AM, Tobias Hartmann wrote: >>>>>> Hi, >>>>>> >>>>>> the segmented code cache JEP is now targeted. Please review the >>>>>> final >>>>>> implementation before integration.
The previous RFR, including a >>>>>> short >>>>>> description, can be found here [1]. >>>>>> >>>>>> JEP: https://bugs.openjdk.java.net/browse/JDK-8043304 >>>>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8015774 >>>>>> Implementation: >>>>>> http://cr.openjdk.java.net/~thartmann/8015774/webrev.03/ >>>>>> JDK-Test fix: >>>>>> http://cr.openjdk.java.net/~thartmann/8015774_jdk_test/webrev.00/ >>>>>> >>>>>> Changes since the last review: >>>>>> - Merged with other changes (for example, G1 class unloading >>>>>> changes [2]) >>>>>> - Fixed some minor bugs that showed up during testing >>>>>> - Refactoring of 'NMethodIterator' and CodeCache implementation >>>>>> - Non-method CodeHeap size increased to 5 MB >>>>>> - Fallback solution: Store non-method code in the non-profiled code >>>>>> heap >>>>>> if there is not enough space in the non-method code heap (see >>>>>> 'CodeCache::allocate') >>>>>> >>>>>> Additional testing: >>>>>> - BigApps (Weblogic, Dacapo, runThese, Kitchensink) >>>>>> - Compiler and GC nightlies >>>>>> - jtreg tests >>>>>> - VM (NSK) Testbase >>>>>> - More performance testing (results attached to the bug) >>>>>> >>>>>> Thanks, >>>>>> Tobias >>>>>> >>>>>> [1] >>>>>> http://mail.openjdk.java.net/pipermail/hotspot-compiler-dev/2014-April/014098.html >>>>>> >>>>>> >>>>>> >>>>>> [2] https://bugs.openjdk.java.net/browse/JDK-8049421 >>>> >> From bertrand.delsart at oracle.com Thu Sep 4 12:19:13 2014 From: bertrand.delsart at oracle.com (Bertrand Delsart) Date: Thu, 04 Sep 2014 14:19:13 +0200 Subject: Release store in C2 putfield In-Reply-To: <54083F60.80206@redhat.com> References: <540722C0.1060404@redhat.com> <5407468A.2020004@oracle.com> <54074942.9050506@redhat.com> <5407541E.1070707@oracle.com> <540757CC.2050904@redhat.com> <54075994.1050609@oracle.com> <54075D33.7060400@redhat.com> <54075DFB.4050807@oracle.com> <540764E0.9030601@redhat.com> <5408313E.1020500@oracle.com> <54083F60.80206@redhat.com> Message-ID: <540858C1.6010300@oracle.com> On 04/09/14 
12:30, Andrew Haley wrote: > On 09/04/2014 10:30 AM, Bertrand Delsart wrote: >> I'm not a C2 expert but from what I have quickly checked, an issue may >> be that we need StoreStore ordering on some platforms. >> >> This should for instance be true for cardmarking (new stored oop >> must be visible before the card is marked). > > Okay. I can live with that. Is there a corresponding read barrier in > the code which scans the card table? See for instance the storeload() in G1SATBCardTableLoggingModRefBS::write_ref_field_work There are in fact a lot of other barriers in concurrent card scanning and cleaning (some of them being implicit due to compare and swap operations). > >> This may also be true for the oop stores in general, as initially >> discussed. [IMHO this is related to final fields, which have to be >> visible when the object escapes. Barriers at the end of the >> constructors may not be sufficient if objects can escape before >> the end of their init() method. > > I'm pretty sure we do this correctly. Are you aware of any place > (except unsafe publication, which is a programmer error) where this > might happen? We generate a barrier at the end of a constructor if > there is a final field and at the end of object creation. I agree that this is not a good programming style but I'm not sure this can always be considered a programmer error. Do you see anything in the Java specification that forbids publication before the end of object creation? For instance, objects may have to be linked at creation time. In general, the publication should be safe because it will hit barriers (because what the object is exported to will often need to be protected). However, I do not think this is mandatory according to the specifications. Now, the problem is to see what the JMM requires in that case. I'm not 100% sure that a StoreStore is needed here. This is why I said "may not be sufficient".
The JSR-133 cookbook has several "(outside of constructor)" statements that might mean it is not needed (if you have a membar at the end of the constructor). However, while I'm familiar with barriers because of my runtime, GC and embedded background, I do not consider myself to be a JMM expert. I will let one chime in. Of course, from a support point of view, it may be easier to add a StoreStore semantic on oop stores (should not be too expensive, taking into account the cost of GC barriers and the frequency of oop stores) than to investigate the kind of troubles a strange ordering can lead to and explain to the customer why his Java code must be changed for platforms with weaker memory models. Did you measure the performance regression? Bertrand. > > I should explain: I respect all nodes, and generate all barriers, > except when there is a redundant barrier. So, if a barrier is > immediately preceded by a load acquire I don't emit anything; likewise > for a store release. > > So, for something like > > volatile int x; > > void foo() { > while (true) x = 1; > } > > I get > > ;; membar_release (elided) > 0x0000007fa822f1e4: add x10, x19, #0xc > ;; 0x1 > 0x0000007fa822f1e8: orr w12, wzr, #0x1 > 0x0000007fa822f1ec: stlr w12, [x10] > 0x0000007fa822f1f0: dmb ish ;*putfield x > ; - VolatileStore::foo at 2 (line 5) > > 0x0000007fa822f1f4: adrp x11, 0x0000007fb7ff7000 > ; OopMap{r19=Oop off=120} > ;*goto > ; - VolatileStore::foo at 5 (line 5) > ; {poll} > 0x0000007fa822f1f8: ldr wzr, [x11] ;*goto > ; - VolatileStore::foo at 5 (line 5) > ; {poll} > 0x0000007fa822f1fc: b 0x0000007fa822f1e4 > > Which is correct, I think. > >> Unfortunately, MemNode only defines higher level 'release' and >> 'acquire'. This means that if you want to use MemNode to guarantee the >> StoreStore (instead of using a separate membar), then you need to use >> MemNode::release... which uselessly adds the LoadStore semantic.
>> >> LoadStore may require a stronger barrier (for instance "DMB SY" instead >> of "DMB ST" on ARM32). >> >> We have the same issue with some other code in hotspot. Some StoreStore >> constraints just before a write have been implemented using >> OrderAccess::release_store(). The later is currently slightly more >> efficient on SPARC/x86... but this is because OrderAccess::storestore() >> is not fully optimized (we just need to prevent C++ compiler reordering, >> we should not have to actually generate code for storestore on TSO >> systems). Unfortunately, on platforms with weaker memory models, >> release_store() may be less efficient than a storestore() followed by a >> write :-( > > Indeed. I think the back end should drive this. > >> May be we need in OrderAccess and in MemNode a new store operation >> weaker than release_store, ordering only the store wrt previous stores >> (e.g. "membar #StoreStore; write"). This will be a simple store on TSO >> systems (+ something to prevent compiler reordering) but will be >> expanded to whatever is the most efficient barrier on the other systems. > > It would help back-end writers tremendously if there were more kinds > of MemNodes, preferably annotated in such a way that we could tell > what each barrier was for. That way we could tell whether this is a > barrier at the end of object creation, etc. And I could simply not > generate anything for barriers that are unneeded because of store > release instructions. > > Andrew. > -- Bertrand Delsart, Grenoble Engineering Center Oracle, 180 av. de l'Europe, ZIRST de Montbonnot 38334 Saint Ismier, FRANCE bertrand.delsart at oracle.com Phone : +33 4 76 18 81 23 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ NOTICE: This email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. 
If you are not the intended recipient, please contact the sender by reply email and destroy all copies of the original message. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ From aph at redhat.com Thu Sep 4 13:13:37 2014 From: aph at redhat.com (Andrew Haley) Date: Thu, 04 Sep 2014 14:13:37 +0100 Subject: Release store in C2 putfield In-Reply-To: <5408409C.1030300@oracle.com> References: <540722C0.1060404@redhat.com> <54074E29.8030500@oracle.com> <54075AFC.6020209@redhat.com> <5407B223.5010300@oracle.com> <5408211A.5060307@redhat.com> <7C9B87B351A4BA4AA9EC95BB418116566ACC5E5D@DEWDFEMB19C.global.corp.sap> <540834AA.3050805@redhat.com> <7C9B87B351A4BA4AA9EC95BB418116566ACC5E95@DEWDFEMB19C.global.corp.sap> <54083BDB.2010407@oracle.com> <54083D5B.5080502@redhat.com> <5408409C.1030300@oracle.com> Message-ID: <54086581.80900@redhat.com> On 09/04/2014 11:36 AM, David Holmes wrote: > On 4/09/2014 8:22 PM, Andrew Haley wrote: >> On 09/04/2014 11:15 AM, David Holmes wrote: >>> I'll also add that a "barrier" at the end of construction is not >>> necessarily sufficient as Bertrand has already mentioned, if 'this' can >>> escape. >> >> Can you provide us with an example of this which isn't a Java >> programmer error? > > To clarify there are two sets of object state we are concerned with: > - internal VM state that ensures the platform safety guarantees are met > (ie all fields default initialized, valid vtable and object header seen) > - Java level state > > The first requires a suitable "barrier" before Java constructor code is > executed so that Java programmer errors do not lead to crashes. So a > barrier at the end of construction is not sufficient. Yes, I've got both of those. > If you have the barrier at the start of construction then you don't need > a barrier at the end except potentially as part of the freeze semantic > for final fields. (Though updated JMM in Java 9 may indeed require such > a barrier.) Good. Thanks, Andrew. 
From aph at redhat.com Thu Sep 4 13:30:24 2014 From: aph at redhat.com (Andrew Haley) Date: Thu, 04 Sep 2014 14:30:24 +0100 Subject: Release store in C2 putfield In-Reply-To: <540858C1.6010300@oracle.com> References: <540722C0.1060404@redhat.com> <5407468A.2020004@oracle.com> <54074942.9050506@redhat.com> <5407541E.1070707@oracle.com> <540757CC.2050904@redhat.com> <54075994.1050609@oracle.com> <54075D33.7060400@redhat.com> <54075DFB.4050807@oracle.com> <540764E0.9030601@redhat.com> <5408313E.1020500@oracle.com> <54083F60.80206@redhat.com> <540858C1.6010300@oracle.com> Message-ID: <54086970.20103@redhat.com> On 09/04/2014 01:19 PM, Bertrand Delsart wrote: > On 04/09/14 12:30, Andrew Haley wrote: >> On 09/04/2014 10:30 AM, Bertrand Delsart wrote: >>> I'm not a C2 expert but from what I have quickly checked, an issue may >>> be that we need StoreStore ordering on some platforms. >>> >>> This should for instance be true for cardmarking (new stored oop >>> must be visible before the card is marked). >> >> Okay. I can live with that. Is there a corresponding read barrier in >> the code which scans the card table? > > See for instance the storeload() in > G1SATBCardTableLoggingModRefBS::write_ref_field_work Okay, thanks, that is tremendously helpful. I know what to look for now. > There are in fact a lot of other barriers in concurrent card scanning > and cleaning (some of them being implicit due to compare and swap > operations). > >>> This may also be true for the oop stores in general, as initially >>> discussed. [IMHO this is related to final fields, which have to be >>> visible when the object escape. Barriers are the end of the >>> constructors may not be sufficient if objects can can escape before >>> the end of their init() method. >> >> I'm pretty sure we do this correctly. Are you aware of any place >> (except unsafe publication, which is a programmer error) where this >> might happen? 
We generate a barrier at the end of a constructor if >> there is a final field and at the end of object creation. > > I agree that this is not a good programming style but I'm not sure this > can always be considered a programmer error. Do you see anything in the > java specification that forbid publication before the end of object > creation ? No, but there's nothing in the Java spec which says that the language will protect a programmer from themself. > For instance, objects may have to be linked at creation time. In > general, the publication should be safe because it will hit barriers > (because what the object is exported too will often needs to be > protected). However, I do not think this is mandatory according to the > specifications. That's right. > Now, the problem is to see what the JMM requires in that case. I'm not > 100% sure that a StoreStore is needed here. This is why I said "may not > be sufficient". The JSR-133 cookbook has several "(outside of > constructor)" statements that might mean it is not needed (if you have a > membar at the end of the constructor). However, while I'm familiar with > barriers because of my runtime, GC and embedded background, I do not > consider myself to be a JMM expert. I will let one chime in. > > Of course, from a support point of view, it may be easier to add a > StoreStore semantic on oop stores (should not be too expensive, taking > into account the cost of GC barriers cost and the frequency of oop > store) than to investigate the kind of troubles a strange ordering can > lead to and explain to the customer why his Java code must be changed > for platforms with weaker memory models. I think programmers are going to have to get used to it. The issue of safe publication is very well known, especially because of the book _Java Concurrency in Practice_. AIUI the purpose of the JMM is to give a clear definition of the memory semantics of Java that can be efficiently executed on a wide variety of machines. 
We need Java to scale well on machines with many cores, and the JMM is a good fit to that. > Did you measure the performance regression ? Yes. It is high; but I can't provide any numbers. Andrew. From gerard.ziemski at oracle.com Thu Sep 4 16:45:18 2014 From: gerard.ziemski at oracle.com (Gerard Ziemski) Date: Thu, 04 Sep 2014 11:45:18 -0500 Subject: RFR (XS) 8033946 - Hotspot build should ignore "ide" folder Message-ID: <5408971E.1090902@oracle.com> hi all, Please review a very small fix that makes hotspot build ignore "ide" folder, which is where local users can store their own favorite IDE projects. For those interested, I have an Xcode project for JDK8 and JDK9 that I am personally actively supporting and using, which is hosted at https://orahub.oraclecorp.com/gerard.ziemski/xcode that is meant to be put in "jdk/hotspot/ide" folder. Summary of fix: Exclude "ide" folder from the makefile that searches for hotspot src files, or otherwise make bails out complaining that it does not know how to handle Xcode project files. Testing: Passes local build on Mac OS X References: bug: https://bugs.openjdk.java.net/browse/JDK-8033946 webrev: http://cr.openjdk.java.net/~gziemski/8033946_rev0/ Thank you! From karen.kinnear at oracle.com Thu Sep 4 17:21:40 2014 From: karen.kinnear at oracle.com (Karen Kinnear) Date: Thu, 4 Sep 2014 13:21:40 -0400 Subject: RFR (XS) 8033946 - Hotspot build should ignore "ide" folder In-Reply-To: <5408971E.1090902@oracle.com> References: <5408971E.1090902@oracle.com> Message-ID: <431AEFE4-C427-4DDA-B8BD-ACF213961DA0@oracle.com> Gerard, I'm a bit confused - if you have an ide to build the entire jdk - what happens? I was a bit surprised to see a hotspot specific change? Also - can't you store your favorite IDE project outside of the repository? 
thanks, Karen On Sep 4, 2014, at 12:45 PM, Gerard Ziemski wrote: > hi all, > > Please review a very small fix that makes hotspot build ignore "ide" folder, which is where local users can store their own favorite IDE projects. > > For those interested, I have an Xcode project for JDK8 and JDK9 that I am personally actively supporting and using, which is hosted at https://orahub.oraclecorp.com/gerard.ziemski/xcode that is meant to be put in "jdk/hotspot/ide" folder. > > > Summary of fix: > > Exclude "ide" folder from the makefile that searches for hotspot src files, or otherwise make bails out complaining that it does not know how to handle Xcode project files. > > > Testing: > > Passes local build on Mac OS X > > > References: > > bug: https://bugs.openjdk.java.net/browse/JDK-8033946 > > webrev: http://cr.openjdk.java.net/~gziemski/8033946_rev0/ > > > Thank you! > > From gerard.ziemski at oracle.com Thu Sep 4 17:59:06 2014 From: gerard.ziemski at oracle.com (Gerard Ziemski) Date: Thu, 04 Sep 2014 12:59:06 -0500 Subject: RFR (XS) 8033946 - Hotspot build should ignore "ide" folder In-Reply-To: <431AEFE4-C427-4DDA-B8BD-ACF213961DA0@oracle.com> References: <5408971E.1090902@oracle.com> <431AEFE4-C427-4DDA-B8BD-ACF213961DA0@oracle.com> Message-ID: <5408A86A.6030507@oracle.com> hi Karen, The Xcode project is just to build hotspot libs, which it then combines with all the other JDK libs and jars built beforehand using command line, to create a complete JDK build that can be run using Xcode for live debugging hotspot. I have made a deliberate decision, which may not have been the right one, to put the project inside the hotspot folder, so that one can have multiple JDKs and Xcode projects at the same time, tightly coupled.
A long term vision here is that eventually we may consider providing officially supported projects for modern IDE that build hotspot to the developer community - a "jdk/hotspot/ide" location seemed like a logical choice for storing project that is supposed to go along with hotspot. cheers On 9/4/2014 12:21 PM, Karen Kinnear wrote: > Gerard, > > I'm a bit confused - if you have an ide to build the entire jdk - what happens? > I was a bit surprised to see a hotspot specific change? > > Also - can't you store your favorite IDE project outside of the repository? > > thanks, > Karen > > On Sep 4, 2014, at 12:45 PM, Gerard Ziemski wrote: > >> hi all, >> >> Please review a very small fix that makes hotspot build ignore "ide" folder, which is where local users can store their own favorite IDE projects. >> >> For those interested, I have an Xcode project for JDK8 and JDK9 that I am personally actively supporting and using, which is hosted at https://orahub.oraclecorp.com/gerard.ziemski/xcode that is meant to be put in "jdk/hotspot/ide" folder. >> >> >> Summary of fix: >> >> Exclude "ide" folder from the makefile that searches for hotspot src files, or otherwise make bails out complaining that it does not know how to handle Xcode project files. >> >> >> Testing: >> >> Passes local build on Mac OS X >> >> >> References: >> >> bug: https://bugs.openjdk.java.net/browse/JDK-8033946 >> >> webrev: http://cr.openjdk.java.net/~gziemski/8033946_rev0/ >> >> >> Thank you! >> >> > > From igor.ignatyev at oracle.com Thu Sep 4 18:26:22 2014 From: igor.ignatyev at oracle.com (Igor Ignatyev) Date: Thu, 04 Sep 2014 22:26:22 +0400 Subject: [8u40] Request for approval: backports of 8056072(S), 8056223(XXS) Message-ID: <5408AECE.4080009@oracle.com> Hi all, I would like to request backports of fixes for JDK-8056072[1-3] and JDK-8056223[4-6] to 8u40. The original patches were applied cleanly. 
testing: jprt [1] https://bugs.openjdk.java.net/browse/JDK-8056072 [2] http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/3c92cbe3250f [3] http://cr.openjdk.java.net/~iignatyev/8056072/webrev.00/ [4] https://bugs.openjdk.java.net/browse/JDK-8056223 [5] http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/132677ca8e4e [6] http://cr.openjdk.java.net/~iignatyev/8056223/webrev.00/ -- Igor From vladimir.kozlov at oracle.com Thu Sep 4 18:49:45 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Thu, 04 Sep 2014 11:49:45 -0700 Subject: [9] RFR (L): 8015774: Add support for multiple code heaps In-Reply-To: <54084127.8000202@oracle.com> References: <53FF1BF6.8070600@oracle.com> <53FFB78C.9060805@oracle.com> <540089D1.4060600@oracle.com> <5400B975.5030703@oracle.com> <5406F720.2080603@oracle.com> <540799FE.5030309@oracle.com> <54084127.8000202@oracle.com> Message-ID: <5408B449.2040206@oracle.com> The test is missing the @run command. Why do you get "TieredCompilation is disabled in this release."? Client VM? What happens if we run with TieredStopAtLevel=1? Thanks, Vladimir On 9/4/14 3:38 AM, Tobias Hartmann wrote: > Thank you, Vladimir. > > I added a test that checks the result of segmented code cache related VM > options. > > New webrev: http://cr.openjdk.java.net/~thartmann/8015774/webrev.06/ > > Can I get a second review please? > > Best, > Tobias > > On 04.09.2014 00:45, Vladimir Kozlov wrote: >> Looks good. You need second review. >> >> And, please, add a WB test which verifies a reaction on flags settings >> similar to what test/compiler/codecache/CheckUpperLimit.java does. >> Both positive (SegmentedCodeCache is enabled) and negative (sizes do >> not match or ReservedCodeCacheSize is small), etc. >> >> Thanks, >> Vladimir >> >> On 9/3/14 4:10 AM, Tobias Hartmann wrote: >>> Hi Vladimir, >>> >>> thanks for the review.
>>> >>> On 29.08.2014 19:33, Vladimir Kozlov wrote: >>>> On 8/29/14 7:10 AM, Tobias Hartmann wrote: >>>>> Hi Vladimir, >>>>> >>>>> thanks for the review. >>>>> >>>>> On 29.08.2014 01:13, Vladimir Kozlov wrote: >>>>>> For the record, SegmentedCodeCache is enabled by default when >>>>>> TieredCompilation is enabled and ReservedCodeCacheSize >>>>>> >= 240 MB. Otherwise it is false by default. >>>>> >>>>> Exactly. >>>>> >>>>>> arguments.cpp - in set_tiered_flags() swap SegmentedCodeCache >>>>>> setting and segments size adjustment - do adjustment >>>>>> only if SegmentedCodeCache is enabled. >>>>> >>>>> Done. >>>>> >>>>>> Also I think each flag should be checked and adjusted separately. >>>>>> You may bail out (vm_exit_during_initialization) if >>>>>> sizes do not add up. >>>>> >>>>> I think we should only increase the sizes if they are all default. >>>>> Otherwise we would for example fail if the user sets >>>>> the NonMethodCodeHeapSize and the ProfiledCodeHeapSize because the >>>>> NonProfiledCodeHeap size is multiplied by 5. What do >>>>> you think? >>>> >>>> But ReservedCodeCacheSize is scaled anyway and you will get sum of >>>> sizes != whole size. We need to do something. >>> >>> I agree. I changed it as you suggested first: The code heap sizes are >>> scaled individually and we bail out if the sizes are not consistent with >>> ReservedCodeCacheSize. >>> >>>> BTW the error message for next check should print all sizes, user may >>>> not know the default value of some which he did not specified on >>>> command line. >>>> >>>> (NonMethodCodeHeapSize + NonProfiledCodeHeapSize + >>>> ProfiledCodeHeapSize) != ReservedCodeCacheSize) >>> >>> The error message now prints the sizes in brackets. >>> >>>>>> And use >>>>> >>>>> I think the rest of this sentence is missing :) >>>> >>>> And use FLAG_SET_ERGO() when you scale. :) >>> >>> Done. I also changed the implementation of CodeCache::initialize_heaps() >>> accordingly. 
>>> >>>>>> Align second line: >>>>>> >>>>>> 2461 } else if ((!FLAG_IS_DEFAULT(NonMethodCodeHeapSize) || >>>>>> !FLAG_IS_DEFAULT(ProfiledCodeHeapSize) || >>>>>> !FLAG_IS_DEFAULT(NonProfiledCodeHeapSize)) >>>>>> 2462 && (NonMethodCodeHeapSize + NonProfiledCodeHeapSize + >>>>>> ProfiledCodeHeapSize) != ReservedCodeCacheSize) { >>>>> >>>>> Done. >>>>> >>>>>> codeCache.cpp - in initialize_heaps() add new methods in C1 and C2 >>>>>> to return buffer_size they need. Add >>>>>> assert(SegmentedCodeCache) to this method to show that we call it >>>>>> only in such case. >>>>> >>>>> Done. >>>>> >>>>>> You do adjustment only when all flags are default. But you still >>>>>> need to check that you have space in >>>>>> NonMethodCodeHeap for scratch buffers. >>>>> >>>>> I added a the following check: >>>>> >>>>> // Make sure we have enough space for the code buffers >>>>> if (NonMethodCodeHeapSize < code_buffers_size) { >>>>> vm_exit_during_initialization("Not enough space for code buffers >>>>> in CodeCache"); >>>>> } >>>> >>>> I think, you need to take into account min_code_cache_size as in >>>> arguments.cpp: >>>> uint min_code_cache_size = (CodeCacheMinimumUseSpace DEBUG_ONLY(* 3)) >>>> + CodeCacheMinimumFreeSpace; >>>> >>>> if (NonMethodCodeHeapSize < (min_code_cache_size+code_buffers_size)) { >>> >>> True, I changed it. >>> >>>> I would be nice if this code in initialize_heaps() could be moved >>>> called during arguments parsing if we could get number of compiler >>>> threads there. But I understand that we can't do that until >>>> compilation policy is set :( >>> >>> Yes, this is not possible because we need to know the number of C1/C2 >>> compiler threads. >>> >>> New webrev: http://cr.openjdk.java.net/~thartmann/8015774/webrev.05/ >>> >>> Thanks, >>> Tobias >>> >>>> >>>>> >>>>>> codeCache.hpp - comment alignment: >>>>>> + // Creates a new heap with the given name and size, containing >>>>>> CodeBlobs of the given type >>>>>> ! 
static void add_heap(ReservedSpace rs, const char* name, size_t >>>>>> size_initial, int code_blob_type); >>>>> >>>>> Done. >>>>> >>>>>> nmethod.cpp - in new() can we mark nmethod allocation critical only >>>>>> when SegmentedCodeCache is enabled? >>>>> >>>>> Yes, that's what we do with: >>>>> >>>>> 809 bool is_critical = SegmentedCodeCache; >>>>> >>>>> Or what are you referring to? >>>> >>>> Somehow I missed that SegmentedCodeCache is used already. It is fine >>>> then. >>>> >>>> Thanks, >>>> Vladimir >>>> >>>>> >>>>> New webrev: http://cr.openjdk.java.net/~thartmann/8015774/webrev.04 >>>>> >>>>> Thanks, >>>>> Tobias >>>>> >>>>>> Thanks, >>>>>> Vladimir >>>>>> >>>>>> On 8/28/14 5:09 AM, Tobias Hartmann wrote: >>>>>>> Hi, >>>>>>> >>>>>>> the segmented code cache JEP is now targeted. Please review the >>>>>>> final >>>>>>> implementation before integration. The previous RFR, including a >>>>>>> short >>>>>>> description, can be found here [1]. >>>>>>> >>>>>>> JEP: https://bugs.openjdk.java.net/browse/JDK-8043304 >>>>>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8015774 >>>>>>> Implementation: >>>>>>> http://cr.openjdk.java.net/~thartmann/8015774/webrev.03/ >>>>>>> JDK-Test fix: >>>>>>> http://cr.openjdk.java.net/~thartmann/8015774_jdk_test/webrev.00/ >>>>>>> >>>>>>> Changes since the last review: >>>>>>> - Merged with other changes (for example, G1 class unloading >>>>>>> changes [2]) >>>>>>> - Fixed some minor bugs that showed up during testing >>>>>>> - Refactoring of 'NMethodIterator' and CodeCache implementation >>>>>>> - Non-method CodeHeap size increased to 5 MB >>>>>>> - Fallback solution: Store non-method code in the non-profiled code >>>>>>> heap >>>>>>> if there is not enough space in the non-method code heap (see >>>>>>> 'CodeCache::allocate') >>>>>>> >>>>>>> Additional testing: >>>>>>> - BigApps (Weblogic, Dacapo, runThese, Kitchensink) >>>>>>> - Compiler and GC nightlies >>>>>>> - jtreg tests >>>>>>> - VM (NSK) Testbase >>>>>>> - More performance 
testing (results attached to the bug) >>>>>>> >>>>>>> Thanks, >>>>>>> Tobias >>>>>>> >>>>>>> [1] >>>>>>> http://mail.openjdk.java.net/pipermail/hotspot-compiler-dev/2014-April/014098.html >>>>>>> >>>>>>> >>>>>>> >>>>>>> [2] https://bugs.openjdk.java.net/browse/JDK-8049421 >>>>> >>> > From vladimir.kozlov at oracle.com Thu Sep 4 18:56:09 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Thu, 04 Sep 2014 11:56:09 -0700 Subject: [8u40] Request for approval: backports of 8056072(S), 8056223(XXS) In-Reply-To: <5408AECE.4080009@oracle.com> References: <5408AECE.4080009@oracle.com> Message-ID: <5408B5C9.5070400@oracle.com> Good. Thanks, Vladimir PS: Do you know when JPRT will support it? On 9/4/14 11:26 AM, Igor Ignatyev wrote: > Hi all, > > I would like to request backports of fixes for JDK-8056072[1-3] and > JDK-8056223[4-6] to 8u40. The original patches were applied cleanly. > > testing: jprt > > [1] https://bugs.openjdk.java.net/browse/JDK-8056072 > [2] http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/3c92cbe3250f > [3] http://cr.openjdk.java.net/~iignatyev/8056072/webrev.00/ > [4] https://bugs.openjdk.java.net/browse/JDK-8056223 > [5] http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/132677ca8e4e > [6] http://cr.openjdk.java.net/~iignatyev/8056223/webrev.00/ From igor.ignatyev at oracle.com Thu Sep 4 19:09:33 2014 From: igor.ignatyev at oracle.com (Igor Ignatyev) Date: Thu, 04 Sep 2014 23:09:33 +0400 Subject: [8u40] Request for approval: backports of 8056072(S), 8056223(XXS) In-Reply-To: <5408B5C9.5070400@oracle.com> References: <5408AECE.4080009@oracle.com> <5408B5C9.5070400@oracle.com> Message-ID: <5408B8ED.9080301@oracle.com> Vladimir, the patch which adds support of optimized flavors to JPRT is ready; I'll send it out for review as soon as these fixes have been propagated to all the active repositories which have JDK-8012292. Igor On 09/04/2014 10:56 PM, Vladimir Kozlov wrote: > Good.
> > Thanks, > Vladimir > > PS: Do you know when JPRT will support it? > > On 9/4/14 11:26 AM, Igor Ignatyev wrote: >> Hi all, >> >> I would like to request backports of fixes for JDK-8056072[1-3] and >> JDK-8056223[4-6] to 8u40. The original patches were applied cleanly. >> >> testing: jprt >> >> [1] https://bugs.openjdk.java.net/browse/JDK-8056072 >> [2] http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/3c92cbe3250f >> [3] http://cr.openjdk.java.net/~iignatyev/8056072/webrev.00/ >> [4] https://bugs.openjdk.java.net/browse/JDK-8056223 >> [5] http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/132677ca8e4e >> [6] http://cr.openjdk.java.net/~iignatyev/8056223/webrev.00/ From dmitry.samersoff at oracle.com Thu Sep 4 19:10:32 2014 From: dmitry.samersoff at oracle.com (Dmitry Samersoff) Date: Thu, 04 Sep 2014 23:10:32 +0400 Subject: RFR (XS) 8033946 - Hotspot build should ignore "ide" folder In-Reply-To: <5408971E.1090902@oracle.com> References: <5408971E.1090902@oracle.com> Message-ID: <5408B928.6030606@oracle.com> Gerard, we already have ./jdk/make/netbeans and ./langtools/make/netbeans, and I think it's a good pattern to follow. You can create hotspot/make/xcode and avoid changing the makefile. Also, with the upcoming switch to the full forest build, you may consider creating an ide folder at the top level and moving per-workspace projects into it. Nevertheless, if you decide to go ahead with the makefile changes, it requires more careful find machinery. Something like $(FIND) -L $(HOTSPOT_TOPDIR) -name ".hg" -prune -o \( -name "ide" -a -type "d" \) -prune -o -print The current patch filters out all files and directories containing the character sequence "ide", e.g. "side", "accidental", etc. -Dmitry On 2014-09-04 20:45, Gerard Ziemski wrote: > hi all, > > Please review a very small fix that makes hotspot build ignore "ide" > folder, which is where local users can store their own favorite IDE > projects.
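[Editor's note] Dmitry's warning about substring matching can be reproduced with a small shell experiment; the directory layout below is made up purely for illustration:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
# A toy tree: "src/side" contains the substring "ide" but holds real sources.
mkdir -p src/side src/os ide
touch src/side/a.cpp src/os/b.cpp ide/project.pbxproj

# Naive substring filtering also drops src/side/a.cpp:
find . -type f | grep -v ide

# Pruning only directories literally named "ide" keeps both source files:
find . -name ide -type d -prune -o -type f -print
```

The first pipeline keeps only src/os/b.cpp, while the prune-based variant keeps both .cpp files and drops only the contents of the ide directory itself.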
> > For those interested, I have an Xcode project for JDK8 and JDK9 that I > am personally actively supporting and using, which is hosted at > https://orahub.oraclecorp.com/gerard.ziemski/xcode that is meant to be > put in "jdk/hotspot/ide" folder. > > > Summary of fix: > > Exclude "ide" folder from the makefile that searches for hotspot src > files, or otherwise make bails out complaining that it does not know how > to handle Xcode project files. > > > Testing: > > Passes local build on Mac OS X > > > References: > > bug: https://bugs.openjdk.java.net/browse/JDK-8033946 > > webrev: http://cr.openjdk.java.net/~gziemski/8033946_rev0/ > > > Thank you! > > -- Dmitry Samersoff Oracle Java development team, Saint Petersburg, Russia * I would love to change the world, but they won't give me the sources. From dean.long at oracle.com Thu Sep 4 20:32:25 2014 From: dean.long at oracle.com (Dean Long) Date: Thu, 04 Sep 2014 13:32:25 -0700 Subject: RFR (XS) 8033946 - Hotspot build should ignore "ide" folder In-Reply-To: <5408971E.1090902@oracle.com> References: <5408971E.1090902@oracle.com> Message-ID: <5408CC59.4000400@oracle.com> Wouldn't it be better to replace $(HOTSPOT_TOPDIR) with $(HOTSPOT_TOPDIRS), so we don't even search in top-level directories that aren't interesting? dl On 9/4/2014 9:45 AM, Gerard Ziemski wrote: > hi all, > > Please review a very small fix that makes hotspot build ignore "ide" > folder, which is where local users can store their own favorite IDE > projects. > > For those interested, I have an Xcode project for JDK8 and JDK9 that I > am personally actively supporting and using, which is hosted at > https://orahub.oraclecorp.com/gerard.ziemski/xcode that is meant to be > put in "jdk/hotspot/ide" folder. > > > Summary of fix: > > Exclude "ide" folder from the makefile that searches for hotspot src > files, or otherwise make bails out complaining that it does not know > how to handle Xcode project files. 
> > > Testing: > > Passes local build on Mac OS X > > > References: > > bug: https://bugs.openjdk.java.net/browse/JDK-8033946 > > webrev: http://cr.openjdk.java.net/~gziemski/8033946_rev0/ > > > Thank you! > > From jesper.wilhelmsson at oracle.com Thu Sep 4 22:47:51 2014 From: jesper.wilhelmsson at oracle.com (Jesper Wilhelmsson) Date: Fri, 05 Sep 2014 00:47:51 +0200 Subject: RFR(XS): 8056056 - Remove unnecessary inclusion of HS_ALT_MAKE from solaris Makefile Message-ID: <5408EC17.2010002@oracle.com> Hi, Looking for reviews for this small fix that removes the unnecessary passing of HS_ALT_MAKE to buildtree.make from the Solaris Makefile. It was recently added but is not needed since buildtree.make explicitly includes defs.make where HS_ALT_MAKE is defined. Webrev: http://cr.openjdk.java.net/~jwilhelm/8056056/webrev/ Bug: https://bugs.openjdk.java.net/browse/JDK-8056056 Thanks, /Jesper From bernhard.urban at jku.at Thu Sep 4 21:16:20 2014 From: bernhard.urban at jku.at (Bernhard Urban) Date: Thu, 4 Sep 2014 23:16:20 +0200 Subject: RFR (XS) 8033946 - Hotspot build should ignore "ide" folder In-Reply-To: <5408971E.1090902@oracle.com> References: <5408971E.1090902@oracle.com> Message-ID: Hi Gerard, On Thu, Sep 4, 2014 at 6:45 PM, Gerard Ziemski wrote: > > For those interested, I have an Xcode project for JDK8 and JDK9 that I am > personally actively supporting and using, which is hosted at > https://orahub.oraclecorp.com/gerard.ziemski/xcode that is meant to be > put in "jdk/hotspot/ide" folder. I would be interested into this Xcode project, but I can't access the URL. Can you make it publicly accessible? 
Thanks, Bernhard From vladimir.kozlov at oracle.com Fri Sep 5 01:36:27 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Thu, 04 Sep 2014 18:36:27 -0700 Subject: RFR (S) 8057643: Unable to build --with-debug-level=optimized on OSX Message-ID: <5409139B.7080100@oracle.com> http://cr.openjdk.java.net/~kvn/8057643/webrev/ https://bugs.openjdk.java.net/browse/JDK-8057643 Added missing Hotspot make targets for 'optimized' build. Hotspot VM has an 'optimized' build version which is used to collect statistics and for stress testing. The corresponding source code is guarded by #ifndef PRODUCT and by 'notproduct' flags. Switching to a full forest build for Hotspot development requires building all JVM targets from the top repository. Unfortunately the '--with-debug-level=optimized' build is broken on OSX because of missing targets in Hotspot makefiles. Tested the build on OSX, Linux, Solaris (I don't have Windows). Thanks, Vladimir From igor.veresov at oracle.com Fri Sep 5 06:33:35 2014 From: igor.veresov at oracle.com (Igor Veresov) Date: Thu, 4 Sep 2014 23:33:35 -0700 Subject: RFR (S) 8057643: Unable to build --with-debug-level=optimized on OSX In-Reply-To: <5409139B.7080100@oracle.com> References: <5409139B.7080100@oracle.com> Message-ID: <0C9551FB-3435-4D1A-917B-1DB12422D78C@oracle.com> Looks good. igor On Sep 4, 2014, at 6:36 PM, Vladimir Kozlov wrote: > http://cr.openjdk.java.net/~kvn/8057643/webrev/ > https://bugs.openjdk.java.net/browse/JDK-8057643 > > Added missing Hotspot make targets for 'optimized' build. > > Hotspot VM has an 'optimized' build version which is used to collect statistics and for stress testing. The corresponding source code is guarded by #ifndef PRODUCT and by 'notproduct' flags. > Switching to a full forest build for Hotspot development requires building all JVM targets from the top repository. Unfortunately the '--with-debug-level=optimized' build is broken on OSX because of missing targets in Hotspot makefiles.
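[Editor's note] The 'optimized' flavor mentioned here is a product-level optimized build that still compiles the non-product code. The guard pattern referred to looks roughly like this; the sketch is schematic only, and the names below are invented for illustration rather than taken from HotSpot source:

```cpp
// Schematic of the #ifndef PRODUCT pattern: statistics code exists in
// debug/fastdebug/optimized builds and disappears in product builds.
// 'g_alloc_count', 'count_alloc' and 'alloc_count' are illustrative names.
#ifndef PRODUCT
static long g_alloc_count = 0;
inline void count_alloc() { ++g_alloc_count; }
inline long alloc_count() { return g_alloc_count; }
#else
inline void count_alloc() {}              // compiled away in product builds
inline long alloc_count() { return 0; }   // no statistics available
#endif
```

Because an 'optimized' build does not define PRODUCT, the counters above stay live there, which is why the makefiles need explicit targets for that flavor.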
> > Tested the build on OSX, Linux, Solaris (I don't have Windows). > > Thanks, > Vladimir From jesper.wilhelmsson at oracle.com Fri Sep 5 06:37:20 2014 From: jesper.wilhelmsson at oracle.com (Jesper Wilhelmsson) Date: Fri, 05 Sep 2014 08:37:20 +0200 Subject: RFR(XS): 8056056 - Remove unnecessary inclusion of HS_ALT_MAKE from solaris Makefile In-Reply-To: <5408EC17.2010002@oracle.com> References: <5408EC17.2010002@oracle.com> Message-ID: <54095A20.1060307@oracle.com> I forgot to mention that this one is aiming for 8u40. Thanks, /Jesper Jesper Wilhelmsson skrev 5/9/14 00:47: > Hi, > > Looking for reviews for this small fix that removes the unnecessary passing of > HS_ALT_MAKE to buildtree.make from the Solaris Makefile. It was recently added > but is not needed since buildtree.make explicitly includes defs.make where > HS_ALT_MAKE is defined. > > Webrev: http://cr.openjdk.java.net/~jwilhelm/8056056/webrev/ > > Bug: https://bugs.openjdk.java.net/browse/JDK-8056056 > > Thanks, > /Jesper From tobias.hartmann at oracle.com Fri Sep 5 08:48:33 2014 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Fri, 05 Sep 2014 10:48:33 +0200 Subject: [9] RFR (L): 8015774: Add support for multiple code heaps In-Reply-To: <5408B449.2040206@oracle.com> References: <53FF1BF6.8070600@oracle.com> <53FFB78C.9060805@oracle.com> <540089D1.4060600@oracle.com> <5400B975.5030703@oracle.com> <5406F720.2080603@oracle.com> <540799FE.5030309@oracle.com> <54084127.8000202@oracle.com> <5408B449.2040206@oracle.com> Message-ID: <540978E1.2060703@oracle.com> Hi Vladimir, thanks again for the review. On 04.09.2014 20:49, Vladimir Kozlov wrote: > The test misses @run command. I thought it is not necessary if we do not specify any VM options. I added it. > Why you get ""TieredCompilation is disabled in this release."? Client VM? Yes, with a client VM. On a 32 bit machine with a 32 bit build of the VM we may have a client version by default. > What happens if we run with TieredStopAtLevel=1? 
By now, we use all heaps but I changed 'CodeCache::heap_available' to not use the profiled code heap in this case. I added some more checks and comments to the test. Thanks for catching this! New webrev: http://cr.openjdk.java.net/~thartmann/8015774/webrev.07/ Thanks, Tobias > Thanks, > Vladimir > > On 9/4/14 3:38 AM, Tobias Hartmann wrote: >> Thank you, Vladimir. >> >> I added a test that checks the result of segmented code cache related VM >> options. >> >> New webrev: http://cr.openjdk.java.net/~thartmann/8015774/webrev.06/ >> >> Can I get a second review please? >> >> Best, >> Tobias >> >> On 04.09.2014 00:45, Vladimir Kozlov wrote: >>> Looks good. You need second review. >>> >>> And, please, add a WB test which verifies a reaction on flags settings >>> similar to what test/compiler/codecache/CheckUpperLimit.java does. >>> Both positive (SegmentedCodeCache is enabled) and negative (sizes does >>> not match or ReservedCodeCacheSize is small) and etc. >>> >>> Thanks, >>> Vladimir >>> >>> On 9/3/14 4:10 AM, Tobias Hartmann wrote: >>>> Hi Vladimir, >>>> >>>> thanks for the review. >>>> >>>> On 29.08.2014 19:33, Vladimir Kozlov wrote: >>>>> On 8/29/14 7:10 AM, Tobias Hartmann wrote: >>>>>> Hi Vladimir, >>>>>> >>>>>> thanks for the review. >>>>>> >>>>>> On 29.08.2014 01:13, Vladimir Kozlov wrote: >>>>>>> For the record, SegmentedCodeCache is enabled by default when >>>>>>> TieredCompilation is enabled and ReservedCodeCacheSize >>>>>>> >= 240 MB. Otherwise it is false by default. >>>>>> >>>>>> Exactly. >>>>>> >>>>>>> arguments.cpp - in set_tiered_flags() swap SegmentedCodeCache >>>>>>> setting and segments size adjustment - do adjustment >>>>>>> only if SegmentedCodeCache is enabled. >>>>>> >>>>>> Done. >>>>>> >>>>>>> Also I think each flag should be checked and adjusted separately. >>>>>>> You may bail out (vm_exit_during_initialization) if >>>>>>> sizes do not add up. >>>>>> >>>>>> I think we should only increase the sizes if they are all default. 
>>>>>> Otherwise we would for example fail if the user sets >>>>>> the NonMethodCodeHeapSize and the ProfiledCodeHeapSize because the >>>>>> NonProfiledCodeHeap size is multiplied by 5. What do >>>>>> you think? >>>>> >>>>> But ReservedCodeCacheSize is scaled anyway and you will get sum of >>>>> sizes != whole size. We need to do something. >>>> >>>> I agree. I changed it as you suggested first: The code heap sizes are >>>> scaled individually and we bail out if the sizes are not consistent >>>> with >>>> ReservedCodeCacheSize. >>>> >>>>> BTW the error message for next check should print all sizes, user may >>>>> not know the default value of some which he did not specified on >>>>> command line. >>>>> >>>>> (NonMethodCodeHeapSize + NonProfiledCodeHeapSize + >>>>> ProfiledCodeHeapSize) != ReservedCodeCacheSize) >>>> >>>> The error message now prints the sizes in brackets. >>>> >>>>>>> And use >>>>>> >>>>>> I think the rest of this sentence is missing :) >>>>> >>>>> And use FLAG_SET_ERGO() when you scale. :) >>>> >>>> Done. I also changed the implementation of >>>> CodeCache::initialize_heaps() >>>> accordingly. >>>> >>>>>>> Align second line: >>>>>>> >>>>>>> 2461 } else if ((!FLAG_IS_DEFAULT(NonMethodCodeHeapSize) || >>>>>>> !FLAG_IS_DEFAULT(ProfiledCodeHeapSize) || >>>>>>> !FLAG_IS_DEFAULT(NonProfiledCodeHeapSize)) >>>>>>> 2462 && (NonMethodCodeHeapSize + NonProfiledCodeHeapSize + >>>>>>> ProfiledCodeHeapSize) != ReservedCodeCacheSize) { >>>>>> >>>>>> Done. >>>>>> >>>>>>> codeCache.cpp - in initialize_heaps() add new methods in C1 and C2 >>>>>>> to return buffer_size they need. Add >>>>>>> assert(SegmentedCodeCache) to this method to show that we call it >>>>>>> only in such case. >>>>>> >>>>>> Done. >>>>>> >>>>>>> You do adjustment only when all flags are default. But you still >>>>>>> need to check that you have space in >>>>>>> NonMethodCodeHeap for scratch buffers. 
>>>>>> >>>>>> I added a the following check: >>>>>> >>>>>> // Make sure we have enough space for the code buffers >>>>>> if (NonMethodCodeHeapSize < code_buffers_size) { >>>>>> vm_exit_during_initialization("Not enough space for code >>>>>> buffers >>>>>> in CodeCache"); >>>>>> } >>>>> >>>>> I think, you need to take into account min_code_cache_size as in >>>>> arguments.cpp: >>>>> uint min_code_cache_size = (CodeCacheMinimumUseSpace DEBUG_ONLY(* >>>>> 3)) >>>>> + CodeCacheMinimumFreeSpace; >>>>> >>>>> if (NonMethodCodeHeapSize < >>>>> (min_code_cache_size+code_buffers_size)) { >>>> >>>> True, I changed it. >>>> >>>>> I would be nice if this code in initialize_heaps() could be moved >>>>> called during arguments parsing if we could get number of compiler >>>>> threads there. But I understand that we can't do that until >>>>> compilation policy is set :( >>>> >>>> Yes, this is not possible because we need to know the number of C1/C2 >>>> compiler threads. >>>> >>>> New webrev: http://cr.openjdk.java.net/~thartmann/8015774/webrev.05/ >>>> >>>> Thanks, >>>> Tobias >>>> >>>>> >>>>>> >>>>>>> codeCache.hpp - comment alignment: >>>>>>> + // Creates a new heap with the given name and size, >>>>>>> containing >>>>>>> CodeBlobs of the given type >>>>>>> ! static void add_heap(ReservedSpace rs, const char* name, size_t >>>>>>> size_initial, int code_blob_type); >>>>>> >>>>>> Done. >>>>>> >>>>>>> nmethod.cpp - in new() can we mark nmethod allocation critical only >>>>>>> when SegmentedCodeCache is enabled? >>>>>> >>>>>> Yes, that's what we do with: >>>>>> >>>>>> 809 bool is_critical = SegmentedCodeCache; >>>>>> >>>>>> Or what are you referring to? >>>>> >>>>> Somehow I missed that SegmentedCodeCache is used already. It is fine >>>>> then. 
>>>>> >>>>> Thanks, >>>>> Vladimir >>>>> >>>>>> >>>>>> New webrev: http://cr.openjdk.java.net/~thartmann/8015774/webrev.04 >>>>>> >>>>>> Thanks, >>>>>> Tobias >>>>>> >>>>>>> Thanks, >>>>>>> Vladimir >>>>>>> >>>>>>> On 8/28/14 5:09 AM, Tobias Hartmann wrote: >>>>>>>> Hi, >>>>>>>> >>>>>>>> the segmented code cache JEP is now targeted. Please review the >>>>>>>> final >>>>>>>> implementation before integration. The previous RFR, including a >>>>>>>> short >>>>>>>> description, can be found here [1]. >>>>>>>> >>>>>>>> JEP: https://bugs.openjdk.java.net/browse/JDK-8043304 >>>>>>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8015774 >>>>>>>> Implementation: >>>>>>>> http://cr.openjdk.java.net/~thartmann/8015774/webrev.03/ >>>>>>>> JDK-Test fix: >>>>>>>> http://cr.openjdk.java.net/~thartmann/8015774_jdk_test/webrev.00/ >>>>>>>> >>>>>>>> Changes since the last review: >>>>>>>> - Merged with other changes (for example, G1 class unloading >>>>>>>> changes [2]) >>>>>>>> - Fixed some minor bugs that showed up during testing >>>>>>>> - Refactoring of 'NMethodIterator' and CodeCache implementation >>>>>>>> - Non-method CodeHeap size increased to 5 MB >>>>>>>> - Fallback solution: Store non-method code in the non-profiled >>>>>>>> code >>>>>>>> heap >>>>>>>> if there is not enough space in the non-method code heap (see >>>>>>>> 'CodeCache::allocate') >>>>>>>> >>>>>>>> Additional testing: >>>>>>>> - BigApps (Weblogic, Dacapo, runThese, Kitchensink) >>>>>>>> - Compiler and GC nightlies >>>>>>>> - jtreg tests >>>>>>>> - VM (NSK) Testbase >>>>>>>> - More performance testing (results attached to the bug) >>>>>>>> >>>>>>>> Thanks, >>>>>>>> Tobias >>>>>>>> >>>>>>>> [1] >>>>>>>> http://mail.openjdk.java.net/pipermail/hotspot-compiler-dev/2014-April/014098.html >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> [2] https://bugs.openjdk.java.net/browse/JDK-8049421 >>>>>> >>>> >> From tobias.hartmann at oracle.com Fri Sep 5 08:53:13 2014 From: tobias.hartmann at oracle.com (Tobias Hartmann) 
Date: Fri, 05 Sep 2014 10:53:13 +0200 Subject: [9] RFR (L): 8015774: Add support for multiple code heaps In-Reply-To: <53FF1BF6.8070600@oracle.com> References: <53FF1BF6.8070600@oracle.com> Message-ID: <540979F9.5080407@oracle.com> Hi, could I get another review for this? Latest webrev is: http://cr.openjdk.java.net/~thartmann/8015774/webrev.07/ Thanks, Tobias On 28.08.2014 14:09, Tobias Hartmann wrote: > Hi, > > the segmented code cache JEP is now targeted. Please review the final > implementation before integration. The previous RFR, including a short > description, can be found here [1]. > > JEP: https://bugs.openjdk.java.net/browse/JDK-8043304 > Bug: https://bugs.openjdk.java.net/browse/JDK-8015774 > Implementation: http://cr.openjdk.java.net/~thartmann/8015774/webrev.03/ > JDK-Test fix: > http://cr.openjdk.java.net/~thartmann/8015774_jdk_test/webrev.00/ > > Changes since the last review: > - Merged with other changes (for example, G1 class unloading changes [2]) > - Fixed some minor bugs that showed up during testing > - Refactoring of 'NMethodIterator' and CodeCache implementation > - Non-method CodeHeap size increased to 5 MB > - Fallback solution: Store non-method code in the non-profiled code > heap if there is not enough space in the non-method code heap (see > 'CodeCache::allocate') > > Additional testing: > - BigApps (Weblogic, Dacapo, runThese, Kitchensink) > - Compiler and GC nightlies > - jtreg tests > - VM (NSK) Testbase > - More performance testing (results attached to the bug) > > Thanks, > Tobias > > [1] > http://mail.openjdk.java.net/pipermail/hotspot-compiler-dev/2014-April/014098.html > [2] https://bugs.openjdk.java.net/browse/JDK-8049421 From erik.joelsson at oracle.com Fri Sep 5 09:47:39 2014 From: erik.joelsson at oracle.com (Erik Joelsson) Date: Fri, 05 Sep 2014 11:47:39 +0200 Subject: RFR(XS): 8056056 - Remove unnecessary inclusion of HS_ALT_MAKE from solaris Makefile In-Reply-To: <54095A20.1060307@oracle.com> References: 
<5408EC17.2010002@oracle.com> <54095A20.1060307@oracle.com> Message-ID: <540986BB.5000506@oracle.com> Looks good to me. /Erik On 2014-09-05 08:37, Jesper Wilhelmsson wrote: > I forgot to mention that this one is aiming for 8u40. > Thanks, > /Jesper > > Jesper Wilhelmsson skrev 5/9/14 00:47: >> Hi, >> >> Looking for reviews for this small fix that removes the unnecessary >> passing of >> HS_ALT_MAKE to buildtree.make from the Solaris Makefile. It was >> recently added >> but is not needed since buildtree.make explicitly includes defs.make >> where >> HS_ALT_MAKE is defined. >> >> Webrev: http://cr.openjdk.java.net/~jwilhelm/8056056/webrev/ >> >> Bug: https://bugs.openjdk.java.net/browse/JDK-8056056 >> >> Thanks, >> /Jesper From magnus.ihse.bursie at oracle.com Fri Sep 5 09:53:23 2014 From: magnus.ihse.bursie at oracle.com (Magnus Ihse Bursie) Date: Fri, 05 Sep 2014 11:53:23 +0200 Subject: RFR (preliminary): JDK-8056999 Make hotspot builds less verbose on default log level In-Reply-To: <5407EC21.8050709@oracle.com> References: <54046278.7050404@oracle.com> <5407EC21.8050709@oracle.com> Message-ID: <54098813.6070903@oracle.com> On 2014-09-04 06:35, David Holmes wrote: > Hi Magnus, > > On 1/09/2014 10:11 PM, Magnus Ihse Bursie wrote: >> Even in the default log level ("warn"), hotspots builds are extremely >> verbose. With the new jigsaw build system, hotspot is build in parallel >> with the jdk, and the sheer amount of hotspot output makes the jdk >> output practically disappear. >> >> This fix will make the following changes: >> * When hotspot is build from the top dir with the default log level, all >> repetetive and purely informative output is hidden (e.g. names of files >> compiled, and the "INFO:" blobs). > > I think I probably want a default log level a little more informative > than that - I like to see visible progress indicators. 
:) > >> * When hotspot is built from the top dir, with any other log level >> (info, debug, trace), all output will be there, as before. > > Would be nice to have fixed the excessive/repetitive INFO blocks re > FDS :) but that requires more than just controlling an on/off switch. > >> * When hotspot is built from the hotspot repo, all output will be there, >> as before. >> >> Note! This is a preliminary review -- I have made the necessary changes >> for Linux only. If this fix gets thumbs up, I'll continue and apply the >> same pattern to the rest of the platforms. But I didn't want to do all >> that duplication until I felt certain that I wouldn't have to change >> something major. The changes themselves are mostly trivial, but they are >> all over the place :-(. >> >> Bug: https://bugs.openjdk.java.net/browse/JDK-8056999 >> WebRev: >> http://cr.openjdk.java.net/~ihse/JDK-8056999-less-verbose-hotspot-builds/webrev.01 >> > > Seems to be some overlap with the $(QUIETLY) mechanism - but to be > honest I always have trouble remembering how that works. In looking at > it now it seems to me that "$(QUIETLY) echo" is incorrect as the text > is always echoed, what gets suppressed is the echoing of the echo > command itself - which seems pointless. So I think all "$(QUIETLY) > echo" should just be @echo. I believed that QUIETLY was either set to empty or to @ (default), to suppress output of the actual command. And yes, ever seeing the actual echo command as well as the output seems pointless, so it could probably have been @ instead. But there is no overlap there with my fix. The QUIETLY / @ only handles whether the echo command itself is written by make before it is executed. The output is always written anyway, and that is what my fix deals with. > > But then replacing @echo with a $(ECHO) that may be silent would seem > a bit cleaner than "@echo $(LOG_INFO)". (Not sure what you are doing in > the rest of the build).
So what I am doing here is applying the same pattern as we have in the rest of build-infra. There we have a group of macros (LOG_WARN, LOG_INFO, LOG_DEBUG and LOG_TRACE). They evaluate to either empty, or to " > /dev/null". This means that you can determine the log level you want this particular output to be on, and it's fairly readable what the intention is. E.g.: $(ECHO) $(LOG_DEBUG) Starting clusterfrizz process now It is non-trivial to export these to hotspot, so I only copied the definition of the one I care about for now, LOG_INFO. I don't think it's a good idea to change $ECHO, since it is used like this $(ECHO) # Generated file, do not edit! > tmp/myfile.h You *could* introduce a new $(ECHO_LOG_INFO) or so, but I don't think it's better. A valuable effect of the build-infra pattern is that it can be applied to any command (such as build tools), not just echo. > > print_info is nice. Thanks. /Magnus From jesper.wilhelmsson at oracle.com Fri Sep 5 10:50:44 2014 From: jesper.wilhelmsson at oracle.com (Jesper Wilhelmsson) Date: Fri, 05 Sep 2014 12:50:44 +0200 Subject: RFR(XS): 8056056 - Remove unnecessary inclusion of HS_ALT_MAKE from solaris Makefile In-Reply-To: <540986BB.5000506@oracle.com> References: <5408EC17.2010002@oracle.com> <54095A20.1060307@oracle.com> <540986BB.5000506@oracle.com> Message-ID: <54099584.9010703@oracle.com> Thanks Erik! /Jesper Erik Joelsson skrev 5/9/14 11:47: > Looks good to me. > > /Erik > > On 2014-09-05 08:37, Jesper Wilhelmsson wrote: >> I forgot to mention that this one is aiming for 8u40. >> Thanks, >> /Jesper >> >> Jesper Wilhelmsson skrev 5/9/14 00:47: >>> Hi, >>> >>> Looking for reviews for this small fix that removes the unnecessary passing of >>> HS_ALT_MAKE to buildtree.make from the Solaris Makefile. It was recently added >>> but is not needed since buildtree.make explicitly includes defs.make where >>> HS_ALT_MAKE is defined.
>>> >>> Webrev: http://cr.openjdk.java.net/~jwilhelm/8056056/webrev/ >>> >>> Bug: https://bugs.openjdk.java.net/browse/JDK-8056056 >>> >>> Thanks, >>> /Jesper > From david.holmes at oracle.com Fri Sep 5 11:22:56 2014 From: david.holmes at oracle.com (David Holmes) Date: Fri, 05 Sep 2014 21:22:56 +1000 Subject: RFR(XS): 8056056 - Remove unnecessary inclusion of HS_ALT_MAKE from solaris Makefile In-Reply-To: <5408EC17.2010002@oracle.com> References: <5408EC17.2010002@oracle.com> Message-ID: <54099D10.6080306@oracle.com> Looks good. Thanks, David On 5/09/2014 8:47 AM, Jesper Wilhelmsson wrote: > Hi, > > Looking for reviews for this small fix that removes the unnecessary > passing of HS_ALT_MAKE to buildtree.make from the Solaris Makefile. It > was recently added but is not needed since buildtree.make explicitly > includes defs.make where HS_ALT_MAKE is defined. > > Webrev: http://cr.openjdk.java.net/~jwilhelm/8056056/webrev/ > > Bug: https://bugs.openjdk.java.net/browse/JDK-8056056 > > Thanks, > /Jesper From jesper.wilhelmsson at oracle.com Fri Sep 5 11:25:03 2014 From: jesper.wilhelmsson at oracle.com (Jesper Wilhelmsson) Date: Fri, 05 Sep 2014 13:25:03 +0200 Subject: RFR(XS): 8056056 - Remove unnecessary inclusion of HS_ALT_MAKE from solaris Makefile In-Reply-To: <54099D10.6080306@oracle.com> References: <5408EC17.2010002@oracle.com> <54099D10.6080306@oracle.com> Message-ID: <54099D8F.8010909@oracle.com> Thanks David! /Jesper David Holmes skrev 5/9/14 13:22: > Looks good. > > Thanks, > David > > On 5/09/2014 8:47 AM, Jesper Wilhelmsson wrote: >> Hi, >> >> Looking for reviews for this small fix that removes the unnecessary >> passing of HS_ALT_MAKE to buildtree.make from the Solaris Makefile. It >> was recently added but is not needed since buildtree.make explicitly >> includes defs.make where HS_ALT_MAKE is defined. 
>> >> Webrev: http://cr.openjdk.java.net/~jwilhelm/8056056/webrev/ >> >> Bug: https://bugs.openjdk.java.net/browse/JDK-8056056 >> >> Thanks, >> /Jesper From david.holmes at oracle.com Fri Sep 5 11:51:44 2014 From: david.holmes at oracle.com (David Holmes) Date: Fri, 05 Sep 2014 21:51:44 +1000 Subject: RFR (preliminary): JDK-8056999 Make hotspot builds less verbose on default log level In-Reply-To: <54098813.6070903@oracle.com> References: <54046278.7050404@oracle.com> <5407EC21.8050709@oracle.com> <54098813.6070903@oracle.com> Message-ID: <5409A3D0.3070208@oracle.com> Okay - thumbs up! Thanks, David On 5/09/2014 7:53 PM, Magnus Ihse Bursie wrote: > On 2014-09-04 06:35, David Holmes wrote: >> Hi Magnus, >> >> On 1/09/2014 10:11 PM, Magnus Ihse Bursie wrote: >>> Even in the default log level ("warn"), hotspots builds are extremely >>> verbose. With the new jigsaw build system, hotspot is build in parallel >>> with the jdk, and the sheer amount of hotspot output makes the jdk >>> output practically disappear. >>> >>> This fix will make the following changes: >>> * When hotspot is build from the top dir with the default log level, all >>> repetetive and purely informative output is hidden (e.g. names of files >>> compiled, and the "INFO:" blobs). >> >> I think I probably want a default log level a little more informative >> than that - I like to see visible progress indicators. :) >> >>> * When hotspot is build from the top dir, with any other log level >>> (info, debug, trace), all output will be there, as before. >> >> Would be nice to have fixed the excessive/repetitive INFO blocks re >> FDS :) but that requires more than just controlling an on/off switch. >> >>> * When hotspot is build from the hotspot repo, all output will be there, >>> as before. >>> >>> Note! This is a preliminary review -- I have made the necessary changes >>> for Linux only. If this fix gets thumbs up, I'll continue and apply the >>> same pattern to the rest of the platforms. 
But I didn't want to do all
>>> that duplication until I felt certain that I wouldn't have to change
>>> something major. The changes themselves are mostly trivial, but they are
>>> all over the place :-(.
>>>
>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8056999
>>> WebRev:
>>> http://cr.openjdk.java.net/~ihse/JDK-8056999-less-verbose-hotspot-builds/webrev.01
>>
>> Seems to be some overlap with the $(QUIETLY) mechanism - but to be
>> honest I always have trouble remembering how that works. In looking at
>> it now it seems to me that "$(QUIETLY) echo" is incorrect as the text
>> is always echoed; what gets suppressed is the echoing of the echo
>> command itself - which seems pointless. So I think all "$(QUIETLY)
>> echo" should just be @echo.
>
> I believed that QUIETLY was either set to empty or to @ (default), to
> suppress output of the actual command. And yes, ever seeing the actual
> echo command as well as the output seems pointless, so it could probably
> have been @ instead.
>
> But there is no overlap there with my fix. The QUIETLY / @ only handles
> whether the echo command itself is written by make before it is
> executed. The output is always written anyway, and that is what my fix
> deals with.
>
>> But then replacing @echo with a $(ECHO) that may be silent would seem
>> a bit cleaner than "@echo $(LOG_INFO)". (Not sure what you are doing in
>> the rest of the build).
>
> So what I am doing here is applying the same pattern as we have in the
> rest of build-infra. There we have a group of macros (LOG_WARN,
> LOG_INFO, LOG_DEBUG and LOG_TRACE). They evaluate to either empty, or to
> " > /dev/null". This means that you can determine the log level you want
> this particular output to be on, and it's fairly readable what the
> intention is. E.g.:
> $(ECHO) $(LOG_DEBUG) Starting clusterfrizz process now
>
> It is non-trivial to export these to hotspot, so I only copied the
> definition of the one I care about for now, LOG_INFO.
> > I don't think it's a good idea to change $ECHO, since it is used like this > $(ECHO) # Generated file, do not edit! > tmp/myfile.h > > You *could* introduce a new $(ECHO_LOG_INFO) or so, but I don't think > it's better. A valuable effect of the build-infra pattern is that it can > be applied to any command (such as build tools), not just echo. > >> >> print_info is nice. > > Thanks. > > /Magnus From dl at cs.oswego.edu Fri Sep 5 13:08:54 2014 From: dl at cs.oswego.edu (Doug Lea) Date: Fri, 05 Sep 2014 09:08:54 -0400 Subject: Release store in C2 putfield In-Reply-To: <54086970.20103@redhat.com> References: <540722C0.1060404@redhat.com> <5407468A.2020004@oracle.com> <54074942.9050506@redhat.com> <5407541E.1070707@oracle.com> <540757CC.2050904@redhat.com> <54075994.1050609@oracle.com> <54075D33.7060400@redhat.com> <54075DFB.4050807@oracle.com> <540764E0.9030601@redhat.com> <5408313E.1020500@oracle.com> <54083F60.80206@redhat.com> <540858C1.6010300@oracle.com> <54086970.20103@redhat.com> Message-ID: <5409B5E6.1050204@cs.oswego.edu> I'm trying to disentangle the many interrelated issues here, mainly wrt to JMM and possible revisions. A few notes/comments: 1. As far as I can see, the G1 post barrier enforces ordering, but the plain one (GraphKit::write_barrier_post) does not. On the other hand, some GC barrier-related mechanics seem to be strewn elsewhere, so might have this effect. In particular, the release inside Parse::do_put_xxx seems suspicious. I'd expect CMS, but not the other GCs, to have the same constraints as G1. I'd also expect the ordering constraints to sometimes have a significant overall performance cost on ARM and Power. (The G1 ordering enforcement changes were apparently performance tested only on TSO machines; see http://mail.openjdk.java.net/pipermail/hotspot-dev/2013-October/011077.html and follow-ups). (Aside: Yet more reasons to hate card-marking.) 2. 
Hans Boehm has argued/demonstrated over the years (see for example, http://hboehm.info/c++mm/no_write_fences.html), that StoreStore fences, as opposed to release==(StoreStore|StoreLoad) fences, are too delicate and anomaly-filled to expose as a programming mode. But there are cases where they may come into play, for example as the first fence of a volatile-store (that also requires a trailing StoreLoad), that might be profitable to separate if any other internal mechanics could then be applied to further optimize. And even if not generally useful, they seem to apply to the GC post_barrier case. 3. We are indeed strongly considering simplifying the revised Java Memory Model to require release fencing on construction, not just in the presence of final fields. In the ideal implementation, this would require multiple fences (i.e., more than the one needed anyway to force object header sanity) only if "this" escapes within a constructor. But because some of these fences are currently hard-wired and so not amenable to elision/optimization, carrying this out on non-TSO will probably take some effort. 4. Reminder to Andrew: We cannot let the VM crash when people write racy/wrong code including unsafe publication. -Doug On 09/04/2014 09:30 AM, Andrew Haley wrote: > On 09/04/2014 01:19 PM, Bertrand Delsart wrote: >> On 04/09/14 12:30, Andrew Haley wrote: >>> On 09/04/2014 10:30 AM, Bertrand Delsart wrote: >>>> I'm not a C2 expert but from what I have quickly checked, an issue may >>>> be that we need StoreStore ordering on some platforms. >>>> >>>> This should for instance be true for cardmarking (new stored oop >>>> must be visible before the card is marked). >>> >>> Okay. I can live with that. Is there a corresponding read barrier in >>> the code which scans the card table? >> >> See for instance the storeload() in >> G1SATBCardTableLoggingModRefBS::write_ref_field_work > > Okay, thanks, that is tremendously helpful. I know what to look for > now. 
> >> There are in fact a lot of other barriers in concurrent card scanning >> and cleaning (some of them being implicit due to compare and swap >> operations). >> >>>> This may also be true for the oop stores in general, as initially >>>> discussed. [IMHO this is related to final fields, which have to be >>>> visible when the object escape. Barriers are the end of the >>>> constructors may not be sufficient if objects can can escape before >>>> the end of their init() method. >>> >>> I'm pretty sure we do this correctly. Are you aware of any place >>> (except unsafe publication, which is a programmer error) where this >>> might happen? We generate a barrier at the end of a constructor if >>> there is a final field and at the end of object creation. >> >> I agree that this is not a good programming style but I'm not sure this >> can always be considered a programmer error. Do you see anything in the >> java specification that forbid publication before the end of object >> creation ? > > No, but there's nothing in the Java spec which says that the language > will protect a programmer from themself. > >> For instance, objects may have to be linked at creation time. In >> general, the publication should be safe because it will hit barriers >> (because what the object is exported too will often needs to be >> protected). However, I do not think this is mandatory according to the >> specifications. > > That's right. > >> Now, the problem is to see what the JMM requires in that case. I'm not >> 100% sure that a StoreStore is needed here. This is why I said "may not >> be sufficient". The JSR-133 cookbook has several "(outside of >> constructor)" statements that might mean it is not needed (if you have a >> membar at the end of the constructor). However, while I'm familiar with >> barriers because of my runtime, GC and embedded background, I do not >> consider myself to be a JMM expert. I will let one chime in. 
>> >> Of course, from a support point of view, it may be easier to add a >> StoreStore semantic on oop stores (should not be too expensive, taking >> into account the cost of GC barriers cost and the frequency of oop >> store) than to investigate the kind of troubles a strange ordering can >> lead to and explain to the customer why his Java code must be changed >> for platforms with weaker memory models. > > I think programmers are going to have to get used to it. The issue of > safe publication is very well known, especially because of the book > _Java Concurrency in Practice_. > > AIUI the purpose of the JMM is to give a clear definition of the > memory semantics of Java that can be efficiently executed on a wide > variety of machines. We need Java to scale well on machines with many > cores, and the JMM is a good fit to that. > >> Did you measure the performance regression ? > > Yes. It is high; but I can't provide any numbers. > > Andrew. > From mikael.gerdin at oracle.com Fri Sep 5 13:40:38 2014 From: mikael.gerdin at oracle.com (Mikael Gerdin) Date: Fri, 05 Sep 2014 15:40:38 +0200 Subject: Release store in C2 putfield In-Reply-To: <5409B5E6.1050204@cs.oswego.edu> References: <540722C0.1060404@redhat.com> <54086970.20103@redhat.com> <5409B5E6.1050204@cs.oswego.edu> Message-ID: <1674510.nW7Fy45L2F@mgerdin03> I have a short clarification on the card-marking barriers. On Friday 05 September 2014 09.08.54 Doug Lea wrote: > I'm trying to disentangle the many interrelated issues here, > mainly wrt to JMM and possible revisions. A few notes/comments: > > 1. As far as I can see, the G1 post barrier enforces ordering, > but the plain one (GraphKit::write_barrier_post) does not. > On the other hand, some GC barrier-related mechanics seem to > be strewn elsewhere, so might have this effect. In particular, > the release inside Parse::do_put_xxx seems suspicious. > I'd expect CMS, but not the other GCs, to have the > same constraints as G1. 
I'd also expect the ordering constraints > to sometimes have a significant overall performance cost on ARM and > Power. (The G1 ordering enforcement changes were apparently > performance tested only on TSO machines; see > http://mail.openjdk.java.net/pipermail/hotspot-dev/2013-October/011077.html > and follow-ups). > (Aside: Yet more reasons to hate card-marking.) The reason for adding the StoreLoad for G1 is because G1 always checks if the card is dirty before putting it on the dirty card queue and writing a 0 to it. It's possible that CMS would need a similar StoreLoad if +UseCondCardMark is set, but I'm not sure. With -UseCondCardMark CMS does not need a StoreLoad, but it needs the field write to be visible when the dirty card is visible, so I guess that is StoreStore. /Mikael > > 2. Hans Boehm has argued/demonstrated over the years (see for > example, http://hboehm.info/c++mm/no_write_fences.html), that > StoreStore fences, as opposed to release==(StoreStore|StoreLoad) > fences, are too delicate and anomaly-filled to expose as a > programming mode. But there are cases where they may come into > play, for example as the first fence of a volatile-store > (that also requires a trailing StoreLoad), that might be > profitable to separate if any other internal mechanics could > then be applied to further optimize. And even if not generally > useful, they seem to apply to the GC post_barrier case. > > 3. We are indeed strongly considering simplifying the revised > Java Memory Model to require release fencing on construction, > not just in the presence of final fields. In the ideal implementation, > this would require multiple fences (i.e., more than the one > needed anyway to force object header sanity) only if "this" > escapes within a constructor. But because some of these fences are > currently hard-wired and so not amenable to elision/optimization, > carrying this out on non-TSO will probably take some effort. > > 4. 
Reminder to Andrew: We cannot let the VM crash when people > write racy/wrong code including unsafe publication. > > -Doug > > On 09/04/2014 09:30 AM, Andrew Haley wrote: > > On 09/04/2014 01:19 PM, Bertrand Delsart wrote: > >> On 04/09/14 12:30, Andrew Haley wrote: > >>> On 09/04/2014 10:30 AM, Bertrand Delsart wrote: > >>>> I'm not a C2 expert but from what I have quickly checked, an issue may > >>>> be that we need StoreStore ordering on some platforms. > >>>> > >>>> This should for instance be true for cardmarking (new stored oop > >>>> must be visible before the card is marked). > >>> > >>> Okay. I can live with that. Is there a corresponding read barrier in > >>> the code which scans the card table? > >> > >> See for instance the storeload() in > >> G1SATBCardTableLoggingModRefBS::write_ref_field_work > > > > Okay, thanks, that is tremendously helpful. I know what to look for > > now. > > > >> There are in fact a lot of other barriers in concurrent card scanning > >> and cleaning (some of them being implicit due to compare and swap > >> operations). > >> > >>>> This may also be true for the oop stores in general, as initially > >>>> discussed. [IMHO this is related to final fields, which have to be > >>>> visible when the object escape. Barriers are the end of the > >>>> constructors may not be sufficient if objects can can escape before > >>>> the end of their init() method. > >>> > >>> I'm pretty sure we do this correctly. Are you aware of any place > >>> (except unsafe publication, which is a programmer error) where this > >>> might happen? We generate a barrier at the end of a constructor if > >>> there is a final field and at the end of object creation. > >> > >> I agree that this is not a good programming style but I'm not sure this > >> can always be considered a programmer error. Do you see anything in the > >> java specification that forbid publication before the end of object > >> creation ? 
> > > > No, but there's nothing in the Java spec which says that the language > > will protect a programmer from themself. > > > >> For instance, objects may have to be linked at creation time. In > >> general, the publication should be safe because it will hit barriers > >> (because what the object is exported too will often needs to be > >> protected). However, I do not think this is mandatory according to the > >> specifications. > > > > That's right. > > > >> Now, the problem is to see what the JMM requires in that case. I'm not > >> 100% sure that a StoreStore is needed here. This is why I said "may not > >> be sufficient". The JSR-133 cookbook has several "(outside of > >> constructor)" statements that might mean it is not needed (if you have a > >> membar at the end of the constructor). However, while I'm familiar with > >> barriers because of my runtime, GC and embedded background, I do not > >> consider myself to be a JMM expert. I will let one chime in. > >> > >> Of course, from a support point of view, it may be easier to add a > >> StoreStore semantic on oop stores (should not be too expensive, taking > >> into account the cost of GC barriers cost and the frequency of oop > >> store) than to investigate the kind of troubles a strange ordering can > >> lead to and explain to the customer why his Java code must be changed > >> for platforms with weaker memory models. > > > > I think programmers are going to have to get used to it. The issue of > > safe publication is very well known, especially because of the book > > _Java Concurrency in Practice_. > > > > AIUI the purpose of the JMM is to give a clear definition of the > > memory semantics of Java that can be efficiently executed on a wide > > variety of machines. We need Java to scale well on machines with many > > cores, and the JMM is a good fit to that. > > > >> Did you measure the performance regression ? > > > > Yes. It is high; but I can't provide any numbers. > > > > Andrew. 
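Mikael's clarification about conditional card marking can be condensed into a small, single-threaded sketch. This is illustrative Java, not HotSpot code: the class, card size, and card values are invented for the example, and the StoreLoad/StoreStore ordering that the thread is actually about can only be indicated in comments, since a single-threaded sketch cannot exhibit the race.

```java
import java.util.concurrent.atomic.AtomicIntegerArray;

// Hypothetical sketch of a card-marking post barrier. The card table is
// modeled as an int array; one card covers 512 heap words, roughly as in
// HotSpot (where dirty is 0 and clean is all-ones; values here are just
// for the example).
public class CardMarkDemo {
    static final int DIRTY = 0;
    static final int CLEAN = -1;
    static final int SHIFT = 9;          // 512-word cards

    final AtomicIntegerArray cards;

    CardMarkDemo(int heapWords) {
        cards = new AtomicIntegerArray((heapWords >> SHIFT) + 1);
        for (int i = 0; i < cards.length(); i++) cards.set(i, CLEAN);
    }

    // Unconditional post barrier: always dirty the card.
    // Only needs the field store to be visible before the dirty card
    // is visible (StoreStore), as Mikael notes for plain CMS.
    void postBarrier(int fieldAddr) {
        cards.set(fieldAddr >> SHIFT, DIRTY);
    }

    // Conditional post barrier: read the card first, as G1 does.
    // The read is what forces the StoreLoad: in the real VM the card
    // load must not float above the reference store it guards.
    boolean postBarrierConditional(int fieldAddr) {
        int card = fieldAddr >> SHIFT;
        // <-- StoreLoad fence would sit here, after the reference store
        if (cards.get(card) != DIRTY) {
            cards.set(card, DIRTY);      // and enqueue on the dirty card queue
            return true;                 // card was enqueued
        }
        return false;                    // already dirty: write filtered out
    }

    public static void main(String[] args) {
        CardMarkDemo d = new CardMarkDemo(4096);
        System.out.println(d.postBarrierConditional(1024)); // true: card was clean
        System.out.println(d.postBarrierConditional(1025)); // false: same card, filtered
    }
}
```

The filtering in the conditional variant is the whole point of the extra ordering: a concurrent refinement thread may clean the card, so without the StoreLoad the mutator could read a stale "already dirty" value and skip an enqueue it actually owes. (G1's real barrier also filters young-region cards first; that is omitted here.)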
From aph at redhat.com Fri Sep 5 13:57:44 2014 From: aph at redhat.com (Andrew Haley) Date: Fri, 05 Sep 2014 14:57:44 +0100 Subject: Release store in C2 putfield In-Reply-To: <5409B5E6.1050204@cs.oswego.edu> References: <540722C0.1060404@redhat.com> <5407468A.2020004@oracle.com> <54074942.9050506@redhat.com> <5407541E.1070707@oracle.com> <540757CC.2050904@redhat.com> <54075994.1050609@oracle.com> <54075D33.7060400@redhat.com> <54075DFB.4050807@oracle.com> <540764E0.9030601@redhat.com> <5408313E.1020500@oracle.com> <54083F60.80206@redhat.com> <540858C1.6010300@oracle.com> <54086970.20103@redhat.com> <5409B5E6.1050204@cs.oswego.edu> Message-ID: <5409C158.2030704@redhat.com> On 09/05/2014 02:08 PM, Doug Lea wrote: > > I'm trying to disentangle the many interrelated issues here, > mainly wrt to JMM and possible revisions. A few notes/comments: > > 1. As far as I can see, the G1 post barrier enforces ordering, > but the plain one (GraphKit::write_barrier_post) does not. > On the other hand, some GC barrier-related mechanics seem to > be strewn elsewhere, so might have this effect. In particular, > the release inside Parse::do_put_xxx seems suspicious. > I'd expect CMS, but not the other GCs, to have the > same constraints as G1. I'd also expect the ordering constraints > to sometimes have a significant overall performance cost on ARM and > Power. (The G1 ordering enforcement changes were apparently > performance tested only on TSO machines; see > http://mail.openjdk.java.net/pipermail/hotspot-dev/2013-October/011077.html > and follow-ups). > (Aside: Yet more reasons to hate card-marking.) That G1 code isn't so bad: it is at least conditional in that the card is read and the memory barrier is used if and only if the card is not young. The code to which I really object uses a release store for every card table write. There seems to be a convention (of which I was unaware) that one either implements acquire- and release- reads and writes or implements barrier instructions. 
Both the type of the MemNode and the explicit barriers are retained throughout compilation, so one can use either. I have been trying to use barriers and acquire/release as appropriate, so I implemented both, which is why I ran into problems. Note that there is no explicit barrier associated with the card table release store, so the latter group of targets (which use barrier instructions) will emit a normal store. This makes no sense to me: either the card table needs a release or it doesn't.

> 2. Hans Boehm has argued/demonstrated over the years (see for
> example, http://hboehm.info/c++mm/no_write_fences.html), that
> StoreStore fences, as opposed to release==(StoreStore|StoreLoad)
> fences, are too delicate and anomaly-filled to expose as a
> programming mode. But there are cases where they may come into
> play, for example as the first fence of a volatile-store
> (that also requires a trailing StoreLoad), that might be
> profitable to separate if any other internal mechanics could
> then be applied to further optimize. And even if not generally
> useful, they seem to apply to the GC post_barrier case.
>
> 3. We are indeed strongly considering simplifying the revised
> Java Memory Model to require release fencing on construction,
> not just in the presence of final fields. In the ideal implementation,
> this would require multiple fences (i.e., more than the one
> needed anyway to force object header sanity) only if "this"
> escapes within a constructor. But because some of these fences are
> currently hard-wired and so not amenable to elision/optimization,
> carrying this out on non-TSO will probably take some effort.

That does sound sensible.

> 4. Reminder to Andrew: We cannot let the VM crash when people
> write racy/wrong code including unsafe publication.

There's absolutely no way I'd do that, and I'm rather surprised that anyone got that idea. I emit a write barrier after object creation, and another after an initializer with final fields.
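The debate above about fencing at the end of constructors is easier to see with a concrete escape. The following is a hypothetical Java illustration (class and names invented for the example): even single-threaded, handing out "this" before a final field is written lets an observer see the field's default value, which is exactly the window an unfenced racy reader can fall into on a weak-memory machine when publication happens before construction completes.

```java
import java.util.function.Consumer;

// Hypothetical example of "this" escaping a constructor before a final
// field is written. The observer callback stands in for any code that
// sees the object through the leaked reference too early.
public class LeakyDemo {
    static int observedDuringConstruction;

    final int x;

    LeakyDemo(Consumer<LeakyDemo> observer) {
        observer.accept(this);   // "this" escapes here...
        this.x = 42;             // ...before x is initialized
    }

    public static void main(String[] args) {
        LeakyDemo d = new LeakyDemo(leaked -> {
            // reads the final field before the constructor assigned it
            observedDuringConstruction = leaked.x;
        });
        System.out.println(observedDuringConstruction + " " + d.x); // prints "0 42"
    }
}
```

In the multithreaded version of this hazard the observer is another thread that picked up the leaked reference; the final-field barrier at the end of the constructor cannot help it, which is why escape-before-completion is the case the "release fencing on construction" proposal has to treat specially.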
I am arguing against every oop store being a release store, which seems Very Wrong to me. (This seems to be for IA64, which as far as I know is a private SAP target. This code really should be marked IA64-ONLY, but it would be even better to reorganize the way this stuff is handled in HotSpot.)

I am wondering whether to give up trying to use acquire and release instructions for the time being, and fall back to using explicit barriers. It's rather messy, but it's good enough until this gets sorted out properly.

Andrew.

From erik.osterlund at lnu.se  Fri Sep 5 14:03:31 2014
From: erik.osterlund at lnu.se (Erik Österlund)
Date: Fri, 5 Sep 2014 14:03:31 +0000
Subject: Single byte Atomic::cmpxchg implementation
In-Reply-To: <3855070.ADDnZ0LX5H@mgerdin-lap>
References: <631578E5-52BF-45D1-A8F2-50862878EBB8@lnu.se> <3855070.ADDnZ0LX5H@mgerdin-lap>
Message-ID: <0EC100DC-9C53-4A37-9BF6-761A2DA9C770@lnu.se>

Hi Mikael,

Back from travelling now. I did look into other architectures a bit and made some interesting findings. The architecture that stands out as the most disastrous to me is ARM. It has three levels of nested loops to carry out a single byte CAS:

1. Outermost loop to emulate byte-grain CAS using word-sized CAS.

2. Middle loop makes calls to __kernel_cmpxchg, which is optimized for non-SMP systems using OS support but backward compatible with an LL/SC loop for SMP systems. Unfortunately it returns a boolean (success/failure) rather than the destination value, and hence the loop keeps track of the actual value at the destination required by the Atomic::cmpxchg interface.

3. __kernel_cmpxchg implements CAS on SMP systems using LL/SC (ldrex/strex). Since a context switch can break in the middle, a loop retries the operation in such an unfortunate spuriously failing scenario.

I have made a new solution that would only make sense on ARMv6 and above with SMP.
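Loop 1 in the list above, the byte-grain CAS emulated on top of a word-sized CAS, can be sketched in Java. This is an illustration of the shape of the emulation being replaced, not HotSpot's actual code; here byte index 0 means the least significant byte of the word, and the invented method follows the Atomic::cmpxchg convention of returning the value actually found.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of byte-grain CAS emulated with a word-sized CAS:
// read the containing word, splice in the new byte, CAS the word,
// and retry only if a *neighbouring* byte changed underneath us.
public class ByteCasDemo {
    static byte cmpxchgByte(AtomicInteger word, int byteIndex, byte compare, byte exchange) {
        int shift = byteIndex * 8;
        int mask = 0xFF << shift;
        while (true) {
            int cur = word.get();
            byte curByte = (byte) ((cur & mask) >>> shift);
            if (curByte != compare)
                return curByte;                           // CAS fails: report old value
            int next = (cur & ~mask) | ((exchange & 0xFF) << shift);
            if (word.compareAndSet(cur, next))
                return curByte;                           // success
            // else another byte in the word changed: retry the word CAS
        }
    }

    public static void main(String[] args) {
        AtomicInteger w = new AtomicInteger(0x11223344);
        byte old = cmpxchgByte(w, 0, (byte) 0x44, (byte) 0x55);
        System.out.printf("old=%02x word=%08x%n", old, w.get()); // old=44 word=11223355
    }
}
```

Note that only the word-splicing loop is modeled: the middle loop (re-deriving the old value from __kernel_cmpxchg's boolean result) and the inner LL/SC retry loop are the two levels the native ldrexb/strexb version collapses away.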
The proposed solution has only one loop instead of three; it would be great if somebody could review it:

inline intptr_t __casb_internal(volatile intptr_t *ptr, intptr_t compare, intptr_t new_val) {
  intptr_t result, old_tmp;

  // prefetch for writing and barrier
  __asm__ __volatile__ ("pld [%0]\n\t"
                        " dmb sy\n\t" /* maybe we can get away with dsb st here instead for speed? anyone? playing it safe now */
                        :
                        : "r" (ptr)
                        : "memory");

  do {
    // spuriously failing CAS loop keeping track of value
    __asm__ __volatile__("@ __cmpxchgb\n\t"
                         " ldrexb %1, [%2]\n\t"
                         " mov %0, #0\n\t"
                         " teq %1, %3\n\t"
                         " it eq\n\t"
                         " strexbeq %0, %4, [%2]\n\t"
                         : "=&r" (result), "=&r" (old_tmp)
                         : "r" (ptr), "Ir" (compare), "r" (new_val)
                         : "memory", "cc");
  } while (result);

  // barrier
  __asm__ __volatile__ ("dmb sy" ::: "memory");

  return old_tmp;
}

inline jbyte Atomic::cmpxchg(jbyte exchange_value, volatile jbyte* dest, jbyte compare_value) {
  return (jbyte)__casb_internal((volatile intptr_t*)dest, (intptr_t)compare_value, (intptr_t)exchange_value);
}

What I'm a bit uncertain about here is which barriers we need and which are optimal, as it seems to be a bit different for different ARM versions; maybe somebody can enlighten me? Also I'm not sure how hotspot checks the ARM version to make the appropriate decision.

The proposed x86 implementation is much more straightforward (bsd, linux):

inline jbyte Atomic::cmpxchg(jbyte exchange_value, volatile jbyte* dest, jbyte compare_value) {
  int mp = os::is_MP();
  jbyte result;
  __asm__ volatile (LOCK_IF_MP(%4) "cmpxchgb %1,(%3)"
                    : "=a" (result)
                    : "q" (exchange_value), "a" (compare_value), "r" (dest), "r" (mp)
                    : "cc", "memory");
  return result;
}

Unfortunately the code is spread out through a billion files because of different ABIs and compiler support for different OS variants. Some use generated stubs, some use ASM files, some use inline assembly. I think I fixed all of them, but I need your help to build and verify it if you don't mind, as I don't have access to those platforms.
How do we best do this?

As for SPARC, I unfortunately decided to keep the old implementation, as SPARC does not seem to support byte-wide CAS; I only found the cas and casx instructions, which are not sufficient as far as I could tell (corrections welcome if I'm wrong). In that case, add byte-wide CAS on SPARC to my wish list for Christmas.

Is there any other platform/architecture of interest on your wish list that I should investigate? PPC?

/Erik

On 04 Sep 2014, at 11:20, Mikael Gerdin wrote:

> Hi Erik,
>
> On Thursday 04 September 2014 09.05.13 Erik Österlund wrote:
>> Hi,
>>
>> The implementation of single byte Atomic::cmpxchg on x86 (and all other
>> platforms) emulates the single byte cmpxchgb instruction using a loop of
>> jint-sized load and cmpxchgl and code to dynamically align the destination
>> address.
>>
>> This code is used for GC-code related to remembered sets currently.
>>
>> I have the changes on my platform (amd64, bsd) to simply use the native
>> cmpxchgb instead but could provide a patch fixing this unnecessary
>> performance glitch for all supported x86 if anybody wants this?
>
> I think that sounds good.
> Would you mind looking at other cpu arches to see if they provide something
> similar? It's ok if you can't build the code for the other arches, I can help
> you with that.
> > /Mikael > >> >> /Erik > From dl at cs.oswego.edu Fri Sep 5 14:35:55 2014 From: dl at cs.oswego.edu (Doug Lea) Date: Fri, 05 Sep 2014 10:35:55 -0400 Subject: Release store in C2 putfield In-Reply-To: <5409C158.2030704@redhat.com> References: <540722C0.1060404@redhat.com> <5407468A.2020004@oracle.com> <54074942.9050506@redhat.com> <5407541E.1070707@oracle.com> <540757CC.2050904@redhat.com> <54075994.1050609@oracle.com> <54075D33.7060400@redhat.com> <54075DFB.4050807@oracle.com> <540764E0.9030601@redhat.com> <5408313E.1020500@oracle.com> <54083F60.80206@redhat.com> <540858C1.6010300@oracle.com> <54086970.20103@redhat.com> <5409B5E6.1050204@cs.oswego.edu> <5409C158.2030704@redhat.com> Message-ID: <5409CA4B.9030602@cs.oswego.edu> On 09/05/2014 09:57 AM, Andrew Haley wrote: > On 09/05/2014 02:08 PM, Doug Lea wrote: >> >> 1. As far as I can see, the G1 post barrier enforces ordering, >> but the plain one (GraphKit::write_barrier_post) does not. >> On the other hand, some GC barrier-related mechanics seem to >> be strewn elsewhere, so might have this effect. In particular, >> the release inside Parse::do_put_xxx seems suspicious. >> I'd expect CMS, but not the other GCs, to have the >> same constraints as G1... > > That G1 code isn't so bad: it is at least conditional in that the card > is read and the memory barrier is used if and only if the card is not > young. The code to which I really object uses a release store for > every card table write. I think we (also Mikael) agree that the GC barriers (*write_barrier_post) ought to be self-contained to reflect their actual constraints, ideally avoiding any need to deal with them in Parse::do_put_xxx or elsewhere. Probably this means some changes for CMS (not just G1) vs other collectors. > either the card table needs a release or it doesn't. A related fun fact about release per se is that you have no assurance when that store will occur. 
It could be postponed for as long as instruction scheduler of the further optimized graph feels like doing so. I expect/hope not past a safepoint though. > >> 4. Reminder to Andrew: We cannot let the VM crash when people >> write racy/wrong code including unsafe publication. > > There's absolutely no way I'd do that (Of course I didn't mean to accuse you of it, just remind you of it when contemplating what to do here!) > > I am arguing against every oop store being a release store, which > seems Very Wrong to me. (This seems to be for IA64, which as far as I > know is a private SAP target. This code really should be marked > IA64-ONLY, but it would be even better to reorganize the way this > stuff is handled in HotSpot.) > I am wondering whether to give up trying to use acquire and release > instructions for the time being, and fall back to using explicit > barriers. It's rather messy, but it's good enough until this gets > sorted out properly. > I'm not sure. ARMv8 (also IA64) acquire/release specs bind some effects to the locations. If you treat the release and store as separable, and later combine at instruction generation, it's superficially conceivable that you'd lose something in case you matched different writes with different fences. (Plus, if you cannot combine, you'd choose a plain fence or use fake thread-local target for releasing write; although we've seen (for x86) that choosing fake targets can be challenging...) But I don't know of any cases (i.e., compiler transforms) in which this could possibly matter wrt to any JMM-related guarantees. 
-Doug From aph at redhat.com Fri Sep 5 14:41:25 2014 From: aph at redhat.com (Andrew Haley) Date: Fri, 05 Sep 2014 15:41:25 +0100 Subject: Release store in C2 putfield In-Reply-To: <5409CA4B.9030602@cs.oswego.edu> References: <540722C0.1060404@redhat.com> <5407468A.2020004@oracle.com> <54074942.9050506@redhat.com> <5407541E.1070707@oracle.com> <540757CC.2050904@redhat.com> <54075994.1050609@oracle.com> <54075D33.7060400@redhat.com> <54075DFB.4050807@oracle.com> <540764E0.9030601@redhat.com> <5408313E.1020500@oracle.com> <54083F60.80206@redhat.com> <540858C1.6010300@oracle.com> <54086970.20103@redhat.com> <5409B5E6.1050204@cs.oswego.edu> <5409C158.2030704@redhat.com> <5409CA4B.9030602@cs.oswego.edu> Message-ID: <5409CB95.3030701@redhat.com> On 09/05/2014 03:35 PM, Doug Lea wrote: > ARMv8 (also IA64) acquire/release specs bind some effects > to the locations. If you treat the release and store > as separable, and later combine at instruction generation, it's > superficially conceivable that you'd lose something in case you > matched different writes with different fences. (Plus, if you > cannot combine, you'd choose a plain fence or use fake thread-local > target for releasing write; although we've seen (for x86) that > choosing fake targets can be challenging...) I think I may give up even trying to combine fences with writes, and emit explicit barrier instructions. That way I at least don't lose relative to other targets. Andrew. From sgehwolf at redhat.com Fri Sep 5 14:49:38 2014 From: sgehwolf at redhat.com (Severin Gehwolf) Date: Fri, 05 Sep 2014 16:49:38 +0200 Subject: RFR 8057696: java -version triggers assertion for slowdebug zero builds Message-ID: <1409928578.3155.34.camel@localhost.localdomain> Hi, Can someone please review and sponsor this tiny change? 
Bug: https://bugs.openjdk.java.net/browse/JDK-8057696 (Thanks Omair for filing it for me)
webrev: https://fedorapeople.org/~jerboaa/bugs/openjdk/JDK-8057696/webrev.0/

As mentioned in the bug, the change introduced with JDK-8003426 removed some Zero code in cppInterpreter_zero.cpp (AbstractInterpreterGenerator::generate_method_entry). In this code block was an explicit call to generate_normal_entry() using a param value of false unconditionally (regardless of synchronized == true or not). After the JDK-8003426 change, the generate_normal_entry() function gets *correctly* called with true or false values. However, it renders the assertion incorrect. The fix is to get rid of the offending assertion.

Thanks,
Severin

From dl at cs.oswego.edu  Fri Sep 5 15:37:50 2014
From: dl at cs.oswego.edu (Doug Lea)
Date: Fri, 05 Sep 2014 11:37:50 -0400
Subject: Release store in C2 putfield
In-Reply-To: <5409CB95.3030701@redhat.com>
References: <540722C0.1060404@redhat.com> <5407468A.2020004@oracle.com> <54074942.9050506@redhat.com> <5407541E.1070707@oracle.com> <540757CC.2050904@redhat.com> <54075994.1050609@oracle.com> <54075D33.7060400@redhat.com> <54075DFB.4050807@oracle.com> <540764E0.9030601@redhat.com> <5408313E.1020500@oracle.com> <54083F60.80206@redhat.com> <540858C1.6010300@oracle.com> <54086970.20103@redhat.com> <5409B5E6.1050204@cs.oswego.edu> <5409C158.2030704@redhat.com> <5409CA4B.9030602@cs.oswego.edu> <5409CB95.3030701@redhat.com>
Message-ID: <5409D8CE.50407@cs.oswego.edu>

On 09/05/2014 10:41 AM, Andrew Haley wrote:
> On 09/05/2014 03:35 PM, Doug Lea wrote:
>> ARMv8 (also IA64) acquire/release specs bind some effects
>> to the locations. If you treat the release and store
>> as separable, and later combine at instruction generation, it's
>> superficially conceivable that you'd lose something in case you
>> matched different writes with different fences.
(Plus, if you >> cannot combine, you'd choose a plain fence or use fake thread-local >> target for releasing write; although we've seen (for x86) that >> choosing fake targets can be challenging...) > > I think I may give up even trying to combine fences with writes, and > emit explicit barrier instructions. That way I at least don't lose > relative to other targets. > I probably don't deserve an opinion about this since I don't deal much with hotspot internals, but this seems to be the wrong stance: There are a bunch of cases across a bunch of processors in which fences and accesses are profitably fusible, but no standard way to do it. Even on x64 (probably not 32bit x86), you'd like the option of fusing write+volatile as XCHG, but doing so looks like it would require something in .ad files similar to your aarch64 predicate(followed_by_ordered_store(n)) trick. It would be nice to come up with some way to express these in a way that hotspot could deal with them before outputing instructions. (BTW, I know of the aarch64 strategy because I looked to see how you did this when (almost) proposing it be done in preference to the x86 xadd membar encoding.) -Doug From gerard.ziemski at oracle.com Fri Sep 5 15:55:16 2014 From: gerard.ziemski at oracle.com (Gerard Ziemski) Date: Fri, 05 Sep 2014 10:55:16 -0500 Subject: RFR (XS) 8033946 - Hotspot build should ignore "ide" folder In-Reply-To: References: <5408971E.1090902@oracle.com> Message-ID: <5409DCE4.8030303@oracle.com> hi Bernhard, I will have to look into how I can make it available to the open source code community. cheers On 9/4/2014 4:16 PM, Bernhard Urban wrote: > Hi Gerard, > > On Thu, Sep 4, 2014 at 6:45 PM, Gerard Ziemski > > wrote: > > > For those interested, I have an Xcode project for JDK8 and JDK9 > that I am personally actively supporting and using, which is > hosted at https://orahub.oraclecorp.com/gerard.ziemski/xcode that > is meant to be put in "jdk/hotspot/ide" folder. 
> > > I would be interested into this Xcode project, but I can't access the > URL. Can you make it publicly accessible? > > > Thanks, > Bernhard From vladimir.kozlov at oracle.com Fri Sep 5 16:10:52 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Fri, 05 Sep 2014 09:10:52 -0700 Subject: [9] RFR (L): 8015774: Add support for multiple code heaps In-Reply-To: <540978E1.2060703@oracle.com> References: <53FF1BF6.8070600@oracle.com> <53FFB78C.9060805@oracle.com> <540089D1.4060600@oracle.com> <5400B975.5030703@oracle.com> <5406F720.2080603@oracle.com> <540799FE.5030309@oracle.com> <54084127.8000202@oracle.com> <5408B449.2040206@oracle.com> <540978E1.2060703@oracle.com> Message-ID: <5409E08C.6010803@oracle.com> Looks good. Thanks, Vladimir On 9/5/14 1:48 AM, Tobias Hartmann wrote: > Hi Vladimir, > > thanks again for the review. > > On 04.09.2014 20:49, Vladimir Kozlov wrote: >> The test misses @run command. > > I thought it is not necessary if we do not specify any VM options. I added it. > >> Why you get ""TieredCompilation is disabled in this release."? Client VM? > > Yes, with a client VM. On a 32 bit machine with a 32 bit build of the VM we may have a client version by default. > >> What happens if we run with TieredStopAtLevel=1? > > By now, we use all heaps but I changed 'CodeCache::heap_available' to not use the profiled code heap in this case. I > added some more checks and comments to the test. Thanks for catching this! > > New webrev: http://cr.openjdk.java.net/~thartmann/8015774/webrev.07/ > > Thanks, > Tobias > >> Thanks, >> Vladimir >> >> On 9/4/14 3:38 AM, Tobias Hartmann wrote: >>> Thank you, Vladimir. >>> >>> I added a test that checks the result of segmented code cache related VM >>> options. >>> >>> New webrev: http://cr.openjdk.java.net/~thartmann/8015774/webrev.06/ >>> >>> Can I get a second review please? >>> >>> Best, >>> Tobias >>> >>> On 04.09.2014 00:45, Vladimir Kozlov wrote: >>>> Looks good. You need second review. 
>>>> >>>> And, please, add a WB test which verifies a reaction on flags settings >>>> similar to what test/compiler/codecache/CheckUpperLimit.java does. >>>> Both positive (SegmentedCodeCache is enabled) and negative (sizes does >>>> not match or ReservedCodeCacheSize is small) and etc. >>>> >>>> Thanks, >>>> Vladimir >>>> >>>> On 9/3/14 4:10 AM, Tobias Hartmann wrote: >>>>> Hi Vladimir, >>>>> >>>>> thanks for the review. >>>>> >>>>> On 29.08.2014 19:33, Vladimir Kozlov wrote: >>>>>> On 8/29/14 7:10 AM, Tobias Hartmann wrote: >>>>>>> Hi Vladimir, >>>>>>> >>>>>>> thanks for the review. >>>>>>> >>>>>>> On 29.08.2014 01:13, Vladimir Kozlov wrote: >>>>>>>> For the record, SegmentedCodeCache is enabled by default when >>>>>>>> TieredCompilation is enabled and ReservedCodeCacheSize >>>>>>>> >= 240 MB. Otherwise it is false by default. >>>>>>> >>>>>>> Exactly. >>>>>>> >>>>>>>> arguments.cpp - in set_tiered_flags() swap SegmentedCodeCache >>>>>>>> setting and segments size adjustment - do adjustment >>>>>>>> only if SegmentedCodeCache is enabled. >>>>>>> >>>>>>> Done. >>>>>>> >>>>>>>> Also I think each flag should be checked and adjusted separately. >>>>>>>> You may bail out (vm_exit_during_initialization) if >>>>>>>> sizes do not add up. >>>>>>> >>>>>>> I think we should only increase the sizes if they are all default. >>>>>>> Otherwise we would for example fail if the user sets >>>>>>> the NonMethodCodeHeapSize and the ProfiledCodeHeapSize because the >>>>>>> NonProfiledCodeHeap size is multiplied by 5. What do >>>>>>> you think? >>>>>> >>>>>> But ReservedCodeCacheSize is scaled anyway and you will get sum of >>>>>> sizes != whole size. We need to do something. >>>>> >>>>> I agree. I changed it as you suggested first: The code heap sizes are >>>>> scaled individually and we bail out if the sizes are not consistent with >>>>> ReservedCodeCacheSize. 
>>>>> >>>>>> BTW the error message for next check should print all sizes, user may >>>>>> not know the default value of some which he did not specified on >>>>>> command line. >>>>>> >>>>>> (NonMethodCodeHeapSize + NonProfiledCodeHeapSize + >>>>>> ProfiledCodeHeapSize) != ReservedCodeCacheSize) >>>>> >>>>> The error message now prints the sizes in brackets. >>>>> >>>>>>>> And use >>>>>>> >>>>>>> I think the rest of this sentence is missing :) >>>>>> >>>>>> And use FLAG_SET_ERGO() when you scale. :) >>>>> >>>>> Done. I also changed the implementation of CodeCache::initialize_heaps() >>>>> accordingly. >>>>> >>>>>>>> Align second line: >>>>>>>> >>>>>>>> 2461 } else if ((!FLAG_IS_DEFAULT(NonMethodCodeHeapSize) || >>>>>>>> !FLAG_IS_DEFAULT(ProfiledCodeHeapSize) || >>>>>>>> !FLAG_IS_DEFAULT(NonProfiledCodeHeapSize)) >>>>>>>> 2462 && (NonMethodCodeHeapSize + NonProfiledCodeHeapSize + >>>>>>>> ProfiledCodeHeapSize) != ReservedCodeCacheSize) { >>>>>>> >>>>>>> Done. >>>>>>> >>>>>>>> codeCache.cpp - in initialize_heaps() add new methods in C1 and C2 >>>>>>>> to return buffer_size they need. Add >>>>>>>> assert(SegmentedCodeCache) to this method to show that we call it >>>>>>>> only in such case. >>>>>>> >>>>>>> Done. >>>>>>> >>>>>>>> You do adjustment only when all flags are default. But you still >>>>>>>> need to check that you have space in >>>>>>>> NonMethodCodeHeap for scratch buffers. 
>>>>>>> >>>>>>> I added a the following check: >>>>>>> >>>>>>> // Make sure we have enough space for the code buffers >>>>>>> if (NonMethodCodeHeapSize < code_buffers_size) { >>>>>>> vm_exit_during_initialization("Not enough space for code buffers >>>>>>> in CodeCache"); >>>>>>> } >>>>>> >>>>>> I think, you need to take into account min_code_cache_size as in >>>>>> arguments.cpp: >>>>>> uint min_code_cache_size = (CodeCacheMinimumUseSpace DEBUG_ONLY(* 3)) >>>>>> + CodeCacheMinimumFreeSpace; >>>>>> >>>>>> if (NonMethodCodeHeapSize < (min_code_cache_size+code_buffers_size)) { >>>>> >>>>> True, I changed it. >>>>> >>>>>> I would be nice if this code in initialize_heaps() could be moved >>>>>> called during arguments parsing if we could get number of compiler >>>>>> threads there. But I understand that we can't do that until >>>>>> compilation policy is set :( >>>>> >>>>> Yes, this is not possible because we need to know the number of C1/C2 >>>>> compiler threads. >>>>> >>>>> New webrev: http://cr.openjdk.java.net/~thartmann/8015774/webrev.05/ >>>>> >>>>> Thanks, >>>>> Tobias >>>>> >>>>>> >>>>>>> >>>>>>>> codeCache.hpp - comment alignment: >>>>>>>> + // Creates a new heap with the given name and size, containing >>>>>>>> CodeBlobs of the given type >>>>>>>> ! static void add_heap(ReservedSpace rs, const char* name, size_t >>>>>>>> size_initial, int code_blob_type); >>>>>>> >>>>>>> Done. >>>>>>> >>>>>>>> nmethod.cpp - in new() can we mark nmethod allocation critical only >>>>>>>> when SegmentedCodeCache is enabled? >>>>>>> >>>>>>> Yes, that's what we do with: >>>>>>> >>>>>>> 809 bool is_critical = SegmentedCodeCache; >>>>>>> >>>>>>> Or what are you referring to? >>>>>> >>>>>> Somehow I missed that SegmentedCodeCache is used already. It is fine >>>>>> then. 
>>>>>> >>>>>> Thanks, >>>>>> Vladimir >>>>>> >>>>>>> >>>>>>> New webrev: http://cr.openjdk.java.net/~thartmann/8015774/webrev.04 >>>>>>> >>>>>>> Thanks, >>>>>>> Tobias >>>>>>> >>>>>>>> Thanks, >>>>>>>> Vladimir >>>>>>>> >>>>>>>> On 8/28/14 5:09 AM, Tobias Hartmann wrote: >>>>>>>>> Hi, >>>>>>>>> >>>>>>>>> the segmented code cache JEP is now targeted. Please review the >>>>>>>>> final >>>>>>>>> implementation before integration. The previous RFR, including a >>>>>>>>> short >>>>>>>>> description, can be found here [1]. >>>>>>>>> >>>>>>>>> JEP: https://bugs.openjdk.java.net/browse/JDK-8043304 >>>>>>>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8015774 >>>>>>>>> Implementation: >>>>>>>>> http://cr.openjdk.java.net/~thartmann/8015774/webrev.03/ >>>>>>>>> JDK-Test fix: >>>>>>>>> http://cr.openjdk.java.net/~thartmann/8015774_jdk_test/webrev.00/ >>>>>>>>> >>>>>>>>> Changes since the last review: >>>>>>>>> - Merged with other changes (for example, G1 class unloading >>>>>>>>> changes [2]) >>>>>>>>> - Fixed some minor bugs that showed up during testing >>>>>>>>> - Refactoring of 'NMethodIterator' and CodeCache implementation >>>>>>>>> - Non-method CodeHeap size increased to 5 MB >>>>>>>>> - Fallback solution: Store non-method code in the non-profiled code >>>>>>>>> heap >>>>>>>>> if there is not enough space in the non-method code heap (see >>>>>>>>> 'CodeCache::allocate') >>>>>>>>> >>>>>>>>> Additional testing: >>>>>>>>> - BigApps (Weblogic, Dacapo, runThese, Kitchensink) >>>>>>>>> - Compiler and GC nightlies >>>>>>>>> - jtreg tests >>>>>>>>> - VM (NSK) Testbase >>>>>>>>> - More performance testing (results attached to the bug) >>>>>>>>> >>>>>>>>> Thanks, >>>>>>>>> Tobias >>>>>>>>> >>>>>>>>> [1] >>>>>>>>> http://mail.openjdk.java.net/pipermail/hotspot-compiler-dev/2014-April/014098.html >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> [2] https://bugs.openjdk.java.net/browse/JDK-8049421 >>>>>>> >>>>> >>> > From gerard.ziemski at oracle.com Fri Sep 5 19:37:53 2014 From: 
gerard.ziemski at oracle.com (Gerard Ziemski) Date: Fri, 05 Sep 2014 14:37:53 -0500 Subject: RFR (S) 8057643: Unable to build --with-debug-level=optimized on OSX In-Reply-To: <5409139B.7080100@oracle.com> References: <5409139B.7080100@oracle.com> Message-ID: <540A1111.8080604@oracle.com> hi Vladimir, I looked at the webrev and it looks to me like the optimized build was implemented on all platforms, except OS X, and therefore disabled for all, and your change implements it on OS X and then makes it available for all the supported platforms - did I get this right? If so, please consider this reviewed (with small "r") Thank you for this fix! cheers On 9/4/2014 8:36 PM, Vladimir Kozlov wrote: > http://cr.openjdk.java.net/~kvn/8057643/webrev/ > https://bugs.openjdk.java.net/browse/JDK-8057643 > > Added missing Hotspot make targets for 'optimized' build. > > Hotspot VM has 'optimized' build version which is used to collect > statistic and for stress testing. A corresponding source code is > guarded by #ifndef PRODUCT and by 'notproduct' flags. > Switching to full forest build for Hotspot development requires to > build all JVM targets from top repository. Unfortunately > '--with-debug-level=optimized' build is broken on OSX because of > missing targets in Hotspot makefiles. > > Tested the build on OSX, Linux, Solaris (I don't have Windows). > > Thanks, > Vladimir > > From coleen.phillimore at oracle.com Fri Sep 5 19:55:26 2014 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Fri, 05 Sep 2014 15:55:26 -0400 Subject: [8u40] RFR 6642881: Improve performance of Class.getClassLoader() Message-ID: <540A152E.9020507@oracle.com> Summary: Add classLoader to java/lang/Class instance for fast access This is a backport request for 8u40. This change has been in the jdk9 code for 3 months without any problems. The JDK changes hg imported cleanly. The Hotspot change needed a hand merge for create_mirror call in klass.cpp. 
http://cr.openjdk.java.net/~coleenp/6642881_8u40_jdk/ http://cr.openjdk.java.net/~coleenp/6642881_8u40_hotspot/ bug link https://bugs.openjdk.java.net/browse/JDK-6642881 Ran jdk_core jtreg tests in jdk with both jdk/hotspot changes. Also ran jck java_lang tests with only the hotspot change. The hotspot change can be tested separately from the jdk change (but not the other way around). Thanks, Coleen From vladimir.kozlov at oracle.com Fri Sep 5 21:42:57 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Fri, 05 Sep 2014 14:42:57 -0700 Subject: RFR (S) 8057643: Unable to build --with-debug-level=optimized on OSX In-Reply-To: <540A1111.8080604@oracle.com> References: <5409139B.7080100@oracle.com> <540A1111.8080604@oracle.com> Message-ID: <540A2E61.9090204@oracle.com> Thank you, Gerard On 9/5/14 12:37 PM, Gerard Ziemski wrote: > hi Vladimir, > > I looked at the webrev and it looks to me like the optimized build was > implemented on all platforms, except OS X, and therefore disabled for > all, and your change implements it on OS X and then makes it available > for all the supported platforms - did I get this right? If so, please > consider this reviewed (with small "r") It was not disabled for all platforms. On other platforms you could build with --with-debug-level=optimized without this fix (I verified that). Note, 'optimized' is not the 'release' version of JavaVM. It contains additional code (additional 10-15% JVM library size) which is not in the product JVM. And it is only applied to the Hotspot build; the rest of the jdk code is built with 'release' level.
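For readers unfamiliar with the build levels: HotSpot guards its extra diagnostic and statistics code with `#ifndef PRODUCT` and its assertion machinery with `#ifdef ASSERT`. A product build defines `PRODUCT`, a debug build defines `ASSERT`, and an 'optimized' build defines neither, so it keeps the non-product code while dropping the debug checks. A minimal illustration (the allocation counter is invented for the example, not actual HotSpot code):

```cpp
// Sketch of how the three HotSpot build levels interact with the two
// guard macros (the counter is a made-up example):
//   product:   PRODUCT defined       -> no statistics, no asserts
//   optimized: neither macro defined -> statistics, but no asserts
//   debug:     ASSERT defined        -> statistics and asserts
#ifndef PRODUCT
static long g_alloc_count = 0;  // "notproduct" statistics storage
#endif

void record_allocation() {
#ifndef PRODUCT
  ++g_alloc_count;              // compiled into debug and optimized builds
#endif
#ifdef ASSERT
  // expensive consistency checks would live here (debug builds only)
#endif
}

long allocation_count() {
#ifndef PRODUCT
  return g_alloc_count;
#else
  return -1;                    // statistics compiled out of product builds
#endif
}
```

This is why an 'optimized' library is 10-15% larger than product: the `#ifndef PRODUCT` code is present, even though the `ASSERT`-only code is not.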
The main motivation for this work and other (JDK-8056072) is the next problem and coming "Cross Component (hotspot+jdk) Development" which will require whole forest to build Hotspot: https://bugs.openjdk.java.net/browse/INTJDK-7612693 "optimized build had been broken a dozen times, so it's need it to be added into jprt" We want to add 'optimized' build of whole forest in JPRT jobs and we need to support it on all platforms for that. Regards, Vladimir > > Thank you for this fix! > > > cheers > > On 9/4/2014 8:36 PM, Vladimir Kozlov wrote: >> http://cr.openjdk.java.net/~kvn/8057643/webrev/ >> https://bugs.openjdk.java.net/browse/JDK-8057643 >> >> Added missing Hotspot make targets for 'optimized' build. >> >> Hotspot VM has 'optimized' build version which is used to collect >> statistic and for stress testing. A corresponding source code is >> guarded by #ifndef PRODUCT and by 'notproduct' flags. >> Switching to full forest build for Hotspot development requires to >> build all JVM targets from top repository. Unfortunately >> '--with-debug-level=optimized' build is broken on OSX because of >> missing targets in Hotspot makefiles. >> >> Tested the build on OSX, Linux, Solaris (I don't have Windows). >> >> Thanks, >> Vladimir >> >> > From david.holmes at oracle.com Mon Sep 8 01:38:33 2014 From: david.holmes at oracle.com (David Holmes) Date: Mon, 08 Sep 2014 11:38:33 +1000 Subject: [8u40] RFR 6642881: Improve performance of Class.getClassLoader() In-Reply-To: <540A152E.9020507@oracle.com> References: <540A152E.9020507@oracle.com> Message-ID: <540D0899.7010303@oracle.com> Looks okay to me. David On 6/09/2014 5:55 AM, Coleen Phillimore wrote: > Summary: Add classLoader to java/lang/Class instance for fast access > > This is a backport request for 8u40. This change has been in the jdk9 > code for 3 months without any problems. > > The JDK changes hg imported cleanly. The Hotspot change needed a hand > merge for create_mirror call in klass.cpp. 
> > http://cr.openjdk.java.net/~coleenp/6642881_8u40_jdk/ > http://cr.openjdk.java.net/~coleenp/6642881_8u40_hotspot/ > > bug link https://bugs.openjdk.java.net/browse/JDK-6642881 > > Ran jdk_core jtreg tests in jdk with both jdk/hotspot changes. Also ran > jck java_lang tests with only the hotspot change. The hotspot change > can be tested separately from the jdk change (but not the other way > around). > > Thanks, > Coleen From david.holmes at oracle.com Mon Sep 8 02:11:50 2014 From: david.holmes at oracle.com (David Holmes) Date: Mon, 08 Sep 2014 12:11:50 +1000 Subject: Single byte Atomic::cmpxchg implementation In-Reply-To: <0EC100DC-9C53-4A37-9BF6-761A2DA9C770@lnu.se> References: <631578E5-52BF-45D1-A8F2-50862878EBB8@lnu.se> <3855070.ADDnZ0LX5H@mgerdin-lap> <0EC100DC-9C53-4A37-9BF6-761A2DA9C770@lnu.se> Message-ID: <540D1066.6030603@oracle.com> Hi Erik, Note there is currently no ARM code in the OpenJDK itself. Of course the Aarch64 project will hopefully be changing that soon, but I would not think they need the logic you describe below. Cheers, David On 6/09/2014 12:03 AM, Erik ?sterlund wrote: > Hi Mikael, > > Back from travelling now. I did look into other architectures a bit and made some interesting findings. > > The architecture that stands out the most disastrous to me is ARM. It has three levels of nested loops to carry out a single byte CAS: > 1. Outmost loop to emulate byte-grain CAS using word-sized CAS. > 2. Middle loop makes calls to the __kernel_cmpxchg which is optimized for non-SMP systems using OS support but backward compatible with LL/SC loop for SMP systems. Unfortunately it returns a boolean (success/failure) rather than the destination value and hence the loop keeps track of the actual value at the destination required by the Atomic::cmpxchg interface. > 3. __kernel_cmpxchg implements CAS on SMP-systems using LL/SC (ldrex/strex). 
Since a context switch can break in the middle, a loop retries the operation in such an unfortunate spuriously failing scenario. > > I have made a new solution that would only make sense on ARMv6 and above with SMP. The proposed solution has only one loop instead of three; it would be great if somebody could review it: > > inline intptr_t __casb_internal(volatile intptr_t *ptr, intptr_t compare, intptr_t new_val) { > intptr_t result, old_tmp; > > // prefetch for writing and barrier > __asm__ __volatile__ ("pld [%0]\n\t" > " dmb sy\n\t" /* maybe we can get away with dsb st here instead for speed? anyone? playing it safe now */ > : > : "r" (ptr) > : "memory"); > > do { > // spuriously failing CAS loop keeping track of value > __asm__ __volatile__("@ __cmpxchgb\n\t" > " ldrexb %1, [%2]\n\t" > " mov %0, #0\n\t" > " teq %1, %3\n\t" > " it eq\n\t" > " strexbeq %0, %4, [%2]\n\t" > : "=&r" (result), "=&r" (old_tmp) > : "r" (ptr), "Ir" (compare), "r" (new_val) > : "memory", "cc"); > } while (result); > > // barrier > __asm__ __volatile__ ("dmb sy" > ::: "memory"); > > return old_tmp; > } > > inline jbyte Atomic::cmpxchg (jbyte exchange_value, volatile jbyte* dest, jbyte compare_value) { > return (jbyte)__casb_internal((volatile intptr_t*)dest, (intptr_t)compare_value, (intptr_t)exchange_value); > } > > What I'm a bit uncertain about here is which barriers we need and which are optimal as it seems to be a bit different for different ARM versions, maybe somebody can enlighten me? Also I'm not sure how hotspot checks ARM version to make the appropriate decision.
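For reference, the outermost of the three nested loops described above is the classic trick of emulating a byte-wide CAS with a word-wide one. A standalone sketch of that emulation (using the GCC `__sync` builtin rather than HotSpot's `Atomic` class, and assuming a little-endian byte layout):

```cpp
#include <cstdint>

// Emulate a single-byte CAS using only a 32-bit CAS: align down to the
// containing word, then loop until the word-wide CAS succeeds or the
// target byte no longer matches the expected value.
inline int8_t cmpxchg_byte_emulated(int8_t exchange_value,
                                    volatile int8_t* dest,
                                    int8_t compare_value) {
  uintptr_t addr = (uintptr_t)dest;
  volatile uint32_t* word = (volatile uint32_t*)(addr & ~(uintptr_t)3);
  unsigned shift = 8 * (unsigned)(addr & 3);   // little-endian byte offset
  uint32_t mask = (uint32_t)0xFF << shift;
  uint32_t xchg = (uint32_t)(uint8_t)exchange_value << shift;

  for (;;) {
    uint32_t old_word = *word;
    int8_t cur = (int8_t)(uint8_t)((old_word & mask) >> shift);
    if (cur != compare_value)
      return cur;                              // CAS fails: report current byte
    uint32_t new_word = (old_word & ~mask) | xchg;
    if (__sync_val_compare_and_swap(word, old_word, new_word) == old_word)
      return compare_value;                    // byte was swapped in
    // another thread changed the containing word: reread and retry
  }
}
```

A native `cmpxchgb` (or a single `ldrexb`/`strexb` pair) collapses all of this into one hardware-level retry, which is the point of the change under discussion.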
> > The proposed x86 implementation is much more straight forward (bsd, linux): > > inline jbyte Atomic::cmpxchg(jbyte exchange_value, volatile jbyte* dest, jbyte compare_value) { > int mp = os::is_MP(); > jbyte result; > __asm__ volatile (LOCK_IF_MP(%4) "cmpxchgb %1,(%3)" > : "=a" (result) > : "q" (exchange_value), "a" (compare_value), "r" (dest), "r" (mp) > : "cc", "memory"); > return result; > } > > Unfortunately the code is spread out through a billion files because of different ABIs and compiler support for different OS variants. Some use generated stubs, some use ASM files, some use inline assembly. I think I fixed all of them but I need your help to build and verify it if you don't mind as I don't have access to those platforms. How do we best do this? > > As for SPARC I unfortunately decided to keep the old implementation as SPARC does not seem to support byte-wide CAS, only found the cas and casx instructions which is not sufficient as far as I could tell, corrections if I'm wrong? In that case, add byte-wide CAS on SPARC to my wish list for christmas. > > Is there any other platform/architecture of interest on your wish list I should investigate which is important to you? PPC? > > /Erik > > On 04 Sep 2014, at 11:20, Mikael Gerdin wrote: > >> Hi Erik, >> >> On Thursday 04 September 2014 09.05.13 Erik ?sterlund wrote: >>> Hi, >>> >>> The implementation of single byte Atomic::cmpxchg on x86 (and all other >>> platforms) emulates the single byte cmpxchgb instruction using a loop of >>> jint-sized load and cmpxchgl and code to dynamically align the destination >>> address. >>> >>> This code is used for GC-code related to remembered sets currently. >>> >>> I have the changes on my platform (amd64, bsd) to simply use the native >>> cmpxchgb instead but could provide a patch fixing this unnecessary >>> performance glitch for all supported x86 if anybody wants this? >> >> I think that sounds good. 
>> Would you mind looking at other cpu arches to see if they provide something >> similar? It's ok if you can't build the code for the other arches, I can help >> you with that. >> >> /Mikael >> >>> >>> /Erik >> > From tobias.hartmann at oracle.com Mon Sep 8 05:50:36 2014 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Mon, 08 Sep 2014 07:50:36 +0200 Subject: [9] RFR (L): 8015774: Add support for multiple code heaps In-Reply-To: <5409E08C.6010803@oracle.com> References: <53FF1BF6.8070600@oracle.com> <53FFB78C.9060805@oracle.com> <540089D1.4060600@oracle.com> <5400B975.5030703@oracle.com> <5406F720.2080603@oracle.com> <540799FE.5030309@oracle.com> <54084127.8000202@oracle.com> <5408B449.2040206@oracle.com> <540978E1.2060703@oracle.com> <5409E08C.6010803@oracle.com> Message-ID: <540D43AC.7010007@oracle.com> Thank you, Vladimir. Best, Tobias On 05.09.2014 18:10, Vladimir Kozlov wrote: > Looks good. > > Thanks, > Vladimir > > On 9/5/14 1:48 AM, Tobias Hartmann wrote: >> Hi Vladimir, >> >> thanks again for the review. >> >> On 04.09.2014 20:49, Vladimir Kozlov wrote: >>> The test misses @run command. >> >> I thought it is not necessary if we do not specify any VM options. I >> added it. >> >>> Why you get ""TieredCompilation is disabled in this release."? >>> Client VM? >> >> Yes, with a client VM. On a 32 bit machine with a 32 bit build of the >> VM we may have a client version by default. >> >>> What happens if we run with TieredStopAtLevel=1? >> >> By now, we use all heaps but I changed 'CodeCache::heap_available' to >> not use the profiled code heap in this case. I >> added some more checks and comments to the test. Thanks for catching >> this! >> >> New webrev: http://cr.openjdk.java.net/~thartmann/8015774/webrev.07/ >> >> Thanks, >> Tobias >> >>> Thanks, >>> Vladimir >>> >>> On 9/4/14 3:38 AM, Tobias Hartmann wrote: >>>> Thank you, Vladimir. >>>> >>>> I added a test that checks the result of segmented code cache >>>> related VM >>>> options. 
>>>> >>>> New webrev: http://cr.openjdk.java.net/~thartmann/8015774/webrev.06/ >>>> >>>> Can I get a second review please? >>>> >>>> Best, >>>> Tobias >>>> >>>> On 04.09.2014 00:45, Vladimir Kozlov wrote: >>>>> Looks good. You need second review. >>>>> >>>>> And, please, add a WB test which verifies a reaction on flags >>>>> settings >>>>> similar to what test/compiler/codecache/CheckUpperLimit.java does. >>>>> Both positive (SegmentedCodeCache is enabled) and negative (sizes >>>>> does >>>>> not match or ReservedCodeCacheSize is small) and etc. >>>>> >>>>> Thanks, >>>>> Vladimir >>>>> >>>>> On 9/3/14 4:10 AM, Tobias Hartmann wrote: >>>>>> Hi Vladimir, >>>>>> >>>>>> thanks for the review. >>>>>> >>>>>> On 29.08.2014 19:33, Vladimir Kozlov wrote: >>>>>>> On 8/29/14 7:10 AM, Tobias Hartmann wrote: >>>>>>>> Hi Vladimir, >>>>>>>> >>>>>>>> thanks for the review. >>>>>>>> >>>>>>>> On 29.08.2014 01:13, Vladimir Kozlov wrote: >>>>>>>>> For the record, SegmentedCodeCache is enabled by default when >>>>>>>>> TieredCompilation is enabled and ReservedCodeCacheSize >>>>>>>>> >= 240 MB. Otherwise it is false by default. >>>>>>>> >>>>>>>> Exactly. >>>>>>>> >>>>>>>>> arguments.cpp - in set_tiered_flags() swap SegmentedCodeCache >>>>>>>>> setting and segments size adjustment - do adjustment >>>>>>>>> only if SegmentedCodeCache is enabled. >>>>>>>> >>>>>>>> Done. >>>>>>>> >>>>>>>>> Also I think each flag should be checked and adjusted separately. >>>>>>>>> You may bail out (vm_exit_during_initialization) if >>>>>>>>> sizes do not add up. >>>>>>>> >>>>>>>> I think we should only increase the sizes if they are all default. >>>>>>>> Otherwise we would for example fail if the user sets >>>>>>>> the NonMethodCodeHeapSize and the ProfiledCodeHeapSize because the >>>>>>>> NonProfiledCodeHeap size is multiplied by 5. What do >>>>>>>> you think? >>>>>>> >>>>>>> But ReservedCodeCacheSize is scaled anyway and you will get sum of >>>>>>> sizes != whole size. We need to do something. 
>>>>>> >>>>>> I agree. I changed it as you suggested first: The code heap sizes >>>>>> are >>>>>> scaled individually and we bail out if the sizes are not >>>>>> consistent with >>>>>> ReservedCodeCacheSize. >>>>>> >>>>>>> BTW the error message for next check should print all sizes, >>>>>>> user may >>>>>>> not know the default value of some which he did not specified on >>>>>>> command line. >>>>>>> >>>>>>> (NonMethodCodeHeapSize + NonProfiledCodeHeapSize + >>>>>>> ProfiledCodeHeapSize) != ReservedCodeCacheSize) >>>>>> >>>>>> The error message now prints the sizes in brackets. >>>>>> >>>>>>>>> And use >>>>>>>> >>>>>>>> I think the rest of this sentence is missing :) >>>>>>> >>>>>>> And use FLAG_SET_ERGO() when you scale. :) >>>>>> >>>>>> Done. I also changed the implementation of >>>>>> CodeCache::initialize_heaps() >>>>>> accordingly. >>>>>> >>>>>>>>> Align second line: >>>>>>>>> >>>>>>>>> 2461 } else if ((!FLAG_IS_DEFAULT(NonMethodCodeHeapSize) || >>>>>>>>> !FLAG_IS_DEFAULT(ProfiledCodeHeapSize) || >>>>>>>>> !FLAG_IS_DEFAULT(NonProfiledCodeHeapSize)) >>>>>>>>> 2462 && (NonMethodCodeHeapSize + NonProfiledCodeHeapSize + >>>>>>>>> ProfiledCodeHeapSize) != ReservedCodeCacheSize) { >>>>>>>> >>>>>>>> Done. >>>>>>>> >>>>>>>>> codeCache.cpp - in initialize_heaps() add new methods in C1 >>>>>>>>> and C2 >>>>>>>>> to return buffer_size they need. Add >>>>>>>>> assert(SegmentedCodeCache) to this method to show that we call it >>>>>>>>> only in such case. >>>>>>>> >>>>>>>> Done. >>>>>>>> >>>>>>>>> You do adjustment only when all flags are default. But you still >>>>>>>>> need to check that you have space in >>>>>>>>> NonMethodCodeHeap for scratch buffers. 
>>>>>>>> >>>>>>>> I added a the following check: >>>>>>>> >>>>>>>> // Make sure we have enough space for the code buffers >>>>>>>> if (NonMethodCodeHeapSize < code_buffers_size) { >>>>>>>> vm_exit_during_initialization("Not enough space for code >>>>>>>> buffers >>>>>>>> in CodeCache"); >>>>>>>> } >>>>>>> >>>>>>> I think, you need to take into account min_code_cache_size as in >>>>>>> arguments.cpp: >>>>>>> uint min_code_cache_size = (CodeCacheMinimumUseSpace >>>>>>> DEBUG_ONLY(* 3)) >>>>>>> + CodeCacheMinimumFreeSpace; >>>>>>> >>>>>>> if (NonMethodCodeHeapSize < >>>>>>> (min_code_cache_size+code_buffers_size)) { >>>>>> >>>>>> True, I changed it. >>>>>> >>>>>>> I would be nice if this code in initialize_heaps() could be moved >>>>>>> called during arguments parsing if we could get number of compiler >>>>>>> threads there. But I understand that we can't do that until >>>>>>> compilation policy is set :( >>>>>> >>>>>> Yes, this is not possible because we need to know the number of >>>>>> C1/C2 >>>>>> compiler threads. >>>>>> >>>>>> New webrev: http://cr.openjdk.java.net/~thartmann/8015774/webrev.05/ >>>>>> >>>>>> Thanks, >>>>>> Tobias >>>>>> >>>>>>> >>>>>>>> >>>>>>>>> codeCache.hpp - comment alignment: >>>>>>>>> + // Creates a new heap with the given name and size, >>>>>>>>> containing >>>>>>>>> CodeBlobs of the given type >>>>>>>>> ! static void add_heap(ReservedSpace rs, const char* name, >>>>>>>>> size_t >>>>>>>>> size_initial, int code_blob_type); >>>>>>>> >>>>>>>> Done. >>>>>>>> >>>>>>>>> nmethod.cpp - in new() can we mark nmethod allocation critical >>>>>>>>> only >>>>>>>>> when SegmentedCodeCache is enabled? >>>>>>>> >>>>>>>> Yes, that's what we do with: >>>>>>>> >>>>>>>> 809 bool is_critical = SegmentedCodeCache; >>>>>>>> >>>>>>>> Or what are you referring to? >>>>>>> >>>>>>> Somehow I missed that SegmentedCodeCache is used already. It is >>>>>>> fine >>>>>>> then. 
>>>>>>> >>>>>>> Thanks, >>>>>>> Vladimir >>>>>>> >>>>>>>> >>>>>>>> New webrev: >>>>>>>> http://cr.openjdk.java.net/~thartmann/8015774/webrev.04 >>>>>>>> >>>>>>>> Thanks, >>>>>>>> Tobias >>>>>>>> >>>>>>>>> Thanks, >>>>>>>>> Vladimir >>>>>>>>> >>>>>>>>> On 8/28/14 5:09 AM, Tobias Hartmann wrote: >>>>>>>>>> Hi, >>>>>>>>>> >>>>>>>>>> the segmented code cache JEP is now targeted. Please review the >>>>>>>>>> final >>>>>>>>>> implementation before integration. The previous RFR, including a >>>>>>>>>> short >>>>>>>>>> description, can be found here [1]. >>>>>>>>>> >>>>>>>>>> JEP: https://bugs.openjdk.java.net/browse/JDK-8043304 >>>>>>>>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8015774 >>>>>>>>>> Implementation: >>>>>>>>>> http://cr.openjdk.java.net/~thartmann/8015774/webrev.03/ >>>>>>>>>> JDK-Test fix: >>>>>>>>>> http://cr.openjdk.java.net/~thartmann/8015774_jdk_test/webrev.00/ >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> Changes since the last review: >>>>>>>>>> - Merged with other changes (for example, G1 class unloading >>>>>>>>>> changes [2]) >>>>>>>>>> - Fixed some minor bugs that showed up during testing >>>>>>>>>> - Refactoring of 'NMethodIterator' and CodeCache implementation >>>>>>>>>> - Non-method CodeHeap size increased to 5 MB >>>>>>>>>> - Fallback solution: Store non-method code in the >>>>>>>>>> non-profiled code >>>>>>>>>> heap >>>>>>>>>> if there is not enough space in the non-method code heap (see >>>>>>>>>> 'CodeCache::allocate') >>>>>>>>>> >>>>>>>>>> Additional testing: >>>>>>>>>> - BigApps (Weblogic, Dacapo, runThese, Kitchensink) >>>>>>>>>> - Compiler and GC nightlies >>>>>>>>>> - jtreg tests >>>>>>>>>> - VM (NSK) Testbase >>>>>>>>>> - More performance testing (results attached to the bug) >>>>>>>>>> >>>>>>>>>> Thanks, >>>>>>>>>> Tobias >>>>>>>>>> >>>>>>>>>> [1] >>>>>>>>>> http://mail.openjdk.java.net/pipermail/hotspot-compiler-dev/2014-April/014098.html >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> [2] 
https://bugs.openjdk.java.net/browse/JDK-8049421 >>>>>>>> >>>>>> >>>> >> From coleen.phillimore at oracle.com Mon Sep 8 13:40:49 2014 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Mon, 08 Sep 2014 09:40:49 -0400 Subject: [8u40] RFR 6642881: Improve performance of Class.getClassLoader() In-Reply-To: <540D0899.7010303@oracle.com> References: <540A152E.9020507@oracle.com> <540D0899.7010303@oracle.com> Message-ID: <540DB1E1.7060900@oracle.com> Thanks David! Coleen On 9/7/14, 9:38 PM, David Holmes wrote: > Looks okay to me. > > David > > On 6/09/2014 5:55 AM, Coleen Phillimore wrote: >> Summary: Add classLoader to java/lang/Class instance for fast access >> >> This is a backport request for 8u40. This change has been in the jdk9 >> code for 3 months without any problems. >> >> The JDK changes hg imported cleanly. The Hotspot change needed a hand >> merge for create_mirror call in klass.cpp. >> >> http://cr.openjdk.java.net/~coleenp/6642881_8u40_jdk/ >> http://cr.openjdk.java.net/~coleenp/6642881_8u40_hotspot/ >> >> bug link https://bugs.openjdk.java.net/browse/JDK-6642881 >> >> Ran jdk_core jtreg tests in jdk with both jdk/hotspot changes. Also ran >> jck java_lang tests with only the hotspot change. The hotspot change >> can be tested separately from the jdk change (but not the other way >> around). 
>> >> Thanks, >> Coleen From volker.simonis at gmail.com Mon Sep 8 14:10:24 2014 From: volker.simonis at gmail.com (Volker Simonis) Date: Mon, 8 Sep 2014 16:10:24 +0200 Subject: RFR(XXS): 8057780: Fix ppc build after "8050147: StoreLoad barrier interferes with stack usages" Message-ID: Hi, could somebody please review and sponsor the following tiny fixes in os_linux_ppc.cpp/os_aix_ppc.cpp: http://cr.openjdk.java.net/~simonis/webrevs/8057780/ https://bugs.openjdk.java.net/browse/JDK-8057780 They simply fix a typo on Linux/PPC64 and an incorrect method signature on AIX which have been introduced by "8050147: StoreLoad barrier interferes with stack usages" Thank you and best regards, Volker From aleksey.shipilev at oracle.com Mon Sep 8 14:24:29 2014 From: aleksey.shipilev at oracle.com (Aleksey Shipilev) Date: Mon, 08 Sep 2014 18:24:29 +0400 Subject: RFR(XXS): 8057780: Fix ppc build after "8050147: StoreLoad barrier interferes with stack usages" In-Reply-To: References: Message-ID: <540DBC1D.7070808@oracle.com> On 09/08/2014 06:10 PM, Volker Simonis wrote: > http://cr.openjdk.java.net/~simonis/webrevs/8057780/ > https://bugs.openjdk.java.net/browse/JDK-8057780 Ouch, sorry about that, Volker, copy-pasting gone wrong. The changes look fine, though I am not a Reviewer. Thanks, -Aleksey. From stefan.johansson at oracle.com Mon Sep 8 14:52:41 2014 From: stefan.johansson at oracle.com (Stefan Johansson) Date: Mon, 08 Sep 2014 16:52:41 +0200 Subject: RFR: 8057752: WhiteBox extension support for testing Message-ID: <540DC2B9.3000505@oracle.com> Hi, Please review these changes for RFE: https://bugs.openjdk.java.net/browse/JDK-8057752 Webrev: http://cr.openjdk.java.net/~sjohanss/8057752/webrev.00/ Summary: Added the call to register_extended to make it possible to extend the WhiteBox API. The Java API is still defined in WhiteBox.java; if the extension methods are not defined by an extension, a linker error will occur. 
Stefan From coleen.phillimore at oracle.com Mon Sep 8 15:01:35 2014 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Mon, 08 Sep 2014 11:01:35 -0400 Subject: RFR 8057696: java -version triggers assertion for slowdebug zero builds In-Reply-To: <1409928578.3155.34.camel@localhost.localdomain> References: <1409928578.3155.34.camel@localhost.localdomain> Message-ID: <540DC4CF.1090004@oracle.com> This looks good. I'll sponsor it. Thanks for contributing the patch. Coleen On 9/5/14, 10:49 AM, Severin Gehwolf wrote: > Hi, > > Can someone please review and sponsor this tiny change? > > Bug: https://bugs.openjdk.java.net/browse/JDK-8057696 (Thanks Omair for > filing it for me) > webrev: > https://fedorapeople.org/~jerboaa/bugs/openjdk/JDK-8057696/webrev.0/ > > > As mentioned in the bug, the change as introduced with JDK-8003426 > removed some Zero code in cppInterpreter_zero.cpp. > (AbstractInterpreterGenerator::generate_method_entry). In this code > block was an explicit call to generate_normal_entry() using a param > value of false unconditionally (regardless of synchronized == true or > not). After the JDK-8003426 change the generate_normal_entry() function > gets *correctly* called with true or false values. However, it renders > the assertion incorrect. The fix is to get rid of the offending > assertion. > > Thanks, > Severin > From filipp.zhinkin at oracle.com Mon Sep 8 16:06:20 2014 From: filipp.zhinkin at oracle.com (Filipp Zhinkin) Date: Mon, 08 Sep 2014 20:06:20 +0400 Subject: [8u40] Request for approval: backports of JDK-8056091 (S), JDK-8055903 (L) and JDK-8055904 (L) Message-ID: <540DD3FC.4010100@oracle.com> Hi all, I'd like to backport fixes for JDK-8056091, JDK-8055903 and JDK-8055904 to 8u40. For JDK-8056091 I updated dates in the copyright notice. Original patches for JDK-8055903 and JDK-8055904 were applied without changes. 
JDK-8056091: Move compiler/intrinsics/mathexact/sanity/Verifier to compiler/testlibrary and extend its functionality Bug id: https://bugs.openjdk.java.net/browse/JDK-8056091 Webrev: http://cr.openjdk.java.net/~fzhinkin/8u/8056091/webrev.00/ Changeset: http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/a274904ceb95 Review thread for original fix: http://mail.openjdk.java.net/pipermail/hotspot-compiler-dev/2014-August/015336.html JDK-8055903: Develop sanity tests on SPARC's SHA instructions support Bug id: https://bugs.openjdk.java.net/browse/JDK-8055903 Webrev: http://cr.openjdk.java.net/~fzhinkin/8055903/webrev.00/ Changeset: http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/846fc505810a Review thread for original fix: http://mail.openjdk.java.net/pipermail/hotspot-compiler-dev/2014-August/015337.html JDK-8055904: Develop tests for new command-line options related to SHA intrinsics Bug id: https://bugs.openjdk.java.net/browse/JDK-8055904 Webrev: http://cr.openjdk.java.net/~fzhinkin/8055904/webrev.01/ Changeset: http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/676f67452a76 Review thread for original fix: http://mail.openjdk.java.net/pipermail/hotspot-compiler-dev/2014-August/015348.html All compiler/intrinsics/sha and compilers/intrinsics/mathexact/sanity tests were tested on all supported platforms. Thanks, Filipp. From vladimir.kozlov at oracle.com Mon Sep 8 16:16:48 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Mon, 08 Sep 2014 09:16:48 -0700 Subject: [8u40] Request for approval: backports of JDK-8056091 (S), JDK-8055903 (L) and JDK-8055904 (L) In-Reply-To: <540DD3FC.4010100@oracle.com> References: <540DD3FC.4010100@oracle.com> Message-ID: <540DD670.2090401@oracle.com> Good. Thanks, Vladimir On 9/8/14 9:06 AM, Filipp Zhinkin wrote: > Hi all, > > I'd like to backport fixes for JDK-8056091, JDK-8055903 and JDK-8055904 to 8u40. > > For JDK-8056091 I updated dates in the copyright notice. 
> Original patches for JDK-8055903 and JDK-8055904 were applied without changes. > > > JDK-8056091: Move compiler/intrinsics/mathexact/sanity/Verifier to compiler/testlibrary and extend its functionality > > Bug id: https://bugs.openjdk.java.net/browse/JDK-8056091 > Webrev: http://cr.openjdk.java.net/~fzhinkin/8u/8056091/webrev.00/ > Changeset: http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/a274904ceb95 > Review thread for original fix: http://mail.openjdk.java.net/pipermail/hotspot-compiler-dev/2014-August/015336.html > > > JDK-8055903: Develop sanity tests on SPARC's SHA instructions support > > Bug id: https://bugs.openjdk.java.net/browse/JDK-8055903 > Webrev: http://cr.openjdk.java.net/~fzhinkin/8055903/webrev.00/ > Changeset: http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/846fc505810a > Review thread for original fix: http://mail.openjdk.java.net/pipermail/hotspot-compiler-dev/2014-August/015337.html > > > JDK-8055904: Develop tests for new command-line options related to SHA intrinsics > > Bug id: https://bugs.openjdk.java.net/browse/JDK-8055904 > Webrev: http://cr.openjdk.java.net/~fzhinkin/8055904/webrev.01/ > Changeset: http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/676f67452a76 > Review thread for original fix: http://mail.openjdk.java.net/pipermail/hotspot-compiler-dev/2014-August/015348.html > > > All compiler/intrinsics/sha and compilers/intrinsics/mathexact/sanity tests were tested on all supported platforms. > > Thanks, > Filipp. > From filipp.zhinkin at oracle.com Mon Sep 8 16:15:03 2014 From: filipp.zhinkin at oracle.com (Filipp Zhinkin) Date: Mon, 08 Sep 2014 20:15:03 +0400 Subject: [8u40] Request for approval: backports of JDK-8056091 (S), JDK-8055903 (L) and JDK-8055904 (L) In-Reply-To: <540DD670.2090401@oracle.com> References: <540DD3FC.4010100@oracle.com> <540DD670.2090401@oracle.com> Message-ID: <540DD607.4040509@oracle.com> Vladimir, thank you. Filipp. On 09/08/2014 08:16 PM, Vladimir Kozlov wrote: > Good. 
> > Thanks, > Vladimir > > On 9/8/14 9:06 AM, Filipp Zhinkin wrote: >> Hi all, >> >> I'd like to backport fixes for JDK-8056091, JDK-8055903 and >> JDK-8055904 to 8u40. >> >> For JDK-8056091 I updated dates in the copyright notice. >> Original patches for JDK-8055903 and JDK-8055904 were applied without >> changes. >> >> >> JDK-8056091: Move compiler/intrinsics/mathexact/sanity/Verifier to >> compiler/testlibrary and extend its functionality >> >> Bug id: https://bugs.openjdk.java.net/browse/JDK-8056091 >> Webrev: http://cr.openjdk.java.net/~fzhinkin/8u/8056091/webrev.00/ >> Changeset: >> http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/a274904ceb95 >> Review thread for original fix: >> http://mail.openjdk.java.net/pipermail/hotspot-compiler-dev/2014-August/015336.html >> >> >> JDK-8055903: Develop sanity tests on SPARC's SHA instructions support >> >> Bug id: https://bugs.openjdk.java.net/browse/JDK-8055903 >> Webrev: http://cr.openjdk.java.net/~fzhinkin/8055903/webrev.00/ >> Changeset: >> http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/846fc505810a >> Review thread for original fix: >> http://mail.openjdk.java.net/pipermail/hotspot-compiler-dev/2014-August/015337.html >> >> >> JDK-8055904: Develop tests for new command-line options related to >> SHA intrinsics >> >> Bug id: https://bugs.openjdk.java.net/browse/JDK-8055904 >> Webrev: http://cr.openjdk.java.net/~fzhinkin/8055904/webrev.01/ >> Changeset: >> http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/676f67452a76 >> Review thread for original fix: >> http://mail.openjdk.java.net/pipermail/hotspot-compiler-dev/2014-August/015348.html >> >> >> All compiler/intrinsics/sha and compilers/intrinsics/mathexact/sanity >> tests were tested on all supported platforms. >> >> Thanks, >> Filipp. 
>> From sgehwolf at redhat.com Mon Sep 8 16:43:12 2014 From: sgehwolf at redhat.com (Severin Gehwolf) Date: Mon, 08 Sep 2014 18:43:12 +0200 Subject: RFR 8057696: java -version triggers assertion for slowdebug zero builds In-Reply-To: <540DC4CF.1090004@oracle.com> References: <1409928578.3155.34.camel@localhost.localdomain> <540DC4CF.1090004@oracle.com> Message-ID: <1410194592.3199.68.camel@localhost.localdomain> On Mon, 2014-09-08 at 11:01 -0400, Coleen Phillimore wrote: > This looks good. I'll sponsor it. Thanks for contributing the patch. Thank you, Coleen! --Severin > Coleen > > On 9/5/14, 10:49 AM, Severin Gehwolf wrote: > > Hi, > > > > Can someone please review and sponsor this tiny change? > > > > Bug: https://bugs.openjdk.java.net/browse/JDK-8057696 (Thanks Omair for > > filing it for me) > > webrev: > > https://fedorapeople.org/~jerboaa/bugs/openjdk/JDK-8057696/webrev.0/ > > > > > > As mentioned in the bug, the change as introduced with JDK-8003426 > > removed some Zero code in cppInterpreter_zero.cpp. > > (AbstractInterpreterGenerator::generate_method_entry). In this code > > block was an explicit call to generate_normal_entry() using a param > > value of false unconditionally (regardless of synchronized == true or > > not). After the JDK-8003426 change the generate_normal_entry() function > > get's *correctly* called with true or false values. However, it renders > > the assertion incorrect. The fix is to get rid of the offending > > assertion. > > > > Thanks, > > Severin > > > From george.triantafillou at oracle.com Mon Sep 8 19:55:20 2014 From: george.triantafillou at oracle.com (George Triantafillou) Date: Mon, 08 Sep 2014 15:55:20 -0400 Subject: RFR: JDK-8054836 [TESTBUG] Test is needed to verify correctness of malloc tracking Message-ID: <540E09A8.9040402@oracle.com> Please review this new native memory tracking test for 8054836. 
The test allocates small amounts of memory with random pseudo call stacks using the WhiteBox API: Webrev: http://cr.openjdk.java.net/~gtriantafill/8054836/webrev/ Bug: https://bugs.openjdk.java.net/browse/JDK-8054836 The fix was tested locally on Linux with jtreg. Thanks. -George From george.triantafillou at oracle.com Mon Sep 8 20:41:36 2014 From: george.triantafillou at oracle.com (George Triantafillou) Date: Mon, 08 Sep 2014 16:41:36 -0400 Subject: RFR: JDK-8054836 [TESTBUG] Test is needed to verify correctness of malloc tracking In-Reply-To: <540E09A8.9040402@oracle.com> References: <540E09A8.9040402@oracle.com> Message-ID: <540E1480.1060103@oracle.com> Corrected webrev: http://cr.openjdk.java.net/~gtriantafill/8054836/webrev.01/ Thanks. -George On 9/8/2014 3:55 PM, George Triantafillou wrote: > Please review this new native memory tracking test for 8054836. The > test allocates small amounts of memory with random pseudo call stacks > using the WhiteBox API: > > Webrev: http://cr.openjdk.java.net/~gtriantafill/8054836/webrev/ > > Bug: https://bugs.openjdk.java.net/browse/JDK-8054836 > > The fix was tested locally on Linux with jtreg. > > Thanks. > > -George From vladimir.kozlov at oracle.com Mon Sep 8 20:57:54 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Mon, 08 Sep 2014 13:57:54 -0700 Subject: [8u40] RFR(M): 8050942 : PPC64: implement template interpreter for ppc64le Message-ID: <540E1852.20503@oracle.com> It is only ppc64 changes. They were pushed 1.5 months ago into jdk9. Changes applied cleanly, except assert(EnableInvokeDynamic) in interp_masm_ppc_64.cpp. It was removed in jdk9 with 8036956 changes which we will not backport. 
jdk9 webrev: http://cr.openjdk.java.net/~asmundak/8050942/hotspot/webrev.01/ jdk9 changeset: http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/5224135904f8 https://bugs.openjdk.java.net/browse/JDK-8050942 From vladimir.kozlov at oracle.com Mon Sep 8 21:05:34 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Mon, 08 Sep 2014 14:05:34 -0700 Subject: [8u40] RFR (S) 8054927: Missing MemNode::acquire ordering in some volatile Load nodes Message-ID: <540E1A1E.7040308@oracle.com> Backport request. The fix was pushed into jdk9 a month ago. Changes are applied cleanly. jdk9 webrev: http://cr.openjdk.java.net/~kvn/8054927/webrev/ jdk9 changeset: http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/f62b69773aaf https://bugs.openjdk.java.net/browse/JDK-8054927 Thanks, Vladimir From igor.veresov at oracle.com Mon Sep 8 22:23:35 2014 From: igor.veresov at oracle.com (Igor Veresov) Date: Mon, 8 Sep 2014 15:23:35 -0700 Subject: [8u40] RFR (S) 8054927: Missing MemNode::acquire ordering in some volatile Load nodes In-Reply-To: <540E1A1E.7040308@oracle.com> References: <540E1A1E.7040308@oracle.com> Message-ID: Looks good. igor On Sep 8, 2014, at 2:05 PM, Vladimir Kozlov wrote: > Backport request. The fix was pushed into jdk9 a month ago. > Changes are applied cleanly. > > jdk9 webrev: > http://cr.openjdk.java.net/~kvn/8054927/webrev/ > jdk9 changeset: > http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/f62b69773aaf > https://bugs.openjdk.java.net/browse/JDK-8054927 > > Thanks, > Vladimir From igor.veresov at oracle.com Mon Sep 8 22:23:51 2014 From: igor.veresov at oracle.com (Igor Veresov) Date: Mon, 8 Sep 2014 15:23:51 -0700 Subject: [8u40] RFR(M): 8050942 : PPC64: implement template interpreter for ppc64le In-Reply-To: <540E1852.20503@oracle.com> References: <540E1852.20503@oracle.com> Message-ID: <440DF335-7374-4375-A065-11DA57C6F634@oracle.com> Looks good. igor On Sep 8, 2014, at 1:57 PM, Vladimir Kozlov wrote: > It is only ppc64 changes. 
They were pushed 1.5 months ago into jdk9. > Changes applied cleanly, except assert(EnableInvokeDynamic) in interp_masm_ppc_64.cpp. It was remove in jdk9 with 8036956 changes which we will not backport. > > jdk9 webrev: > http://cr.openjdk.java.net/~asmundak/8050942/hotspot/webrev.01/ > jdk9 changeset: > http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/5224135904f8 > https://bugs.openjdk.java.net/browse/JDK-8050942 > From vladimir.kozlov at oracle.com Mon Sep 8 22:28:35 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Mon, 08 Sep 2014 15:28:35 -0700 Subject: [8u40] RFR(M) 8055286, 8056964, 8057129: Extend CompileCommand=option Message-ID: <540E2D93.6020509@oracle.com> Backport request. Changes were pushed into jdk9 last week. They are applied to 8u cleanly. They are needed for 8055494 backport. https://bugs.openjdk.java.net/browse/JDK-8055286 jdk9 webrev: http://cr.openjdk.java.net/~zmajo/8055286/webrev.03/ jdk9 changeset http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/5cb3c079bf70 https://bugs.openjdk.java.net/browse/JDK-8056964 jdk9 webrev: http://cr.openjdk.java.net/~kvn/8056964/webrev/ jdk9 changeset http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/a9581f019c38 https://bugs.openjdk.java.net/browse/JDK-8057129 jdk9 webrev: http://cr.openjdk.java.net/~simonis/webrevs/8057129/ jdk9 changeset: http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/cbae7c62e1bd Thanks, Vladimir From vladimir.kozlov at oracle.com Mon Sep 8 22:35:27 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Mon, 08 Sep 2014 15:35:27 -0700 Subject: [8u40] RFR (S) 8057643: Unable to build --with-debug-level=optimized on OSX Message-ID: <540E2F2F.80008@oracle.com> Backport request. The fix was pushed into jdk9 last week. Changes are applied cleanly. 
https://bugs.openjdk.java.net/browse/JDK-8057643 jdk9 webrev: http://cr.openjdk.java.net/~kvn/8057643/webrev/ jdk9 changeset: http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/d3e712a41646 Thanks, Vladimir From vladimir.kozlov at oracle.com Mon Sep 8 22:44:51 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Mon, 08 Sep 2014 15:44:51 -0700 Subject: [8u40] RFR(L) 8055494: Add C2 x86 intrinsic for BigInteger::multiplyToLen() method Message-ID: <540E3163.7050106@oracle.com> Backport request. The fix was pushed into jdk9 last week. Changes were applied cleanly. https://bugs.openjdk.java.net/browse/JDK-8055494 jdk9 webrev: http://cr.openjdk.java.net/~kvn/8055494/webrev/ jdk9 changeset: http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/427de14928ab Thanks, Vladimir From mandy.chung at oracle.com Mon Sep 8 22:59:15 2014 From: mandy.chung at oracle.com (Mandy Chung) Date: Mon, 08 Sep 2014 15:59:15 -0700 Subject: [8u40] RFR 6642881: Improve performance of Class.getClassLoader() In-Reply-To: <540A152E.9020507@oracle.com> References: <540A152E.9020507@oracle.com> Message-ID: <540E34C3.4070203@oracle.com> Thumbs up. Mandy On 9/5/2014 12:55 PM, Coleen Phillimore wrote: > Summary: Add classLoader to java/lang/Class instance for fast access > > This is a backport request for 8u40. This change has been in the > jdk9 code for 3 months without any problems. > > The JDK changes hg imported cleanly. The Hotspot change needed a hand > merge for create_mirror call in klass.cpp. > > http://cr.openjdk.java.net/~coleenp/6642881_8u40_jdk/ > http://cr.openjdk.java.net/~coleenp/6642881_8u40_hotspot/ > > bug link https://bugs.openjdk.java.net/browse/JDK-6642881 > > Ran jdk_core jtreg tests in jdk with both jdk/hotspot changes. Also > ran jck java_lang tests with only the hotspot change. The hotspot > change can be tested separately from the jdk change (but not the other > way around). 
> > Thanks, > Coleen From vladimir.kozlov at oracle.com Mon Sep 8 23:05:06 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Mon, 08 Sep 2014 16:05:06 -0700 Subject: RFR(XXS): 8057780: Fix ppc build after "8050147: StoreLoad barrier interferes with stack usages" In-Reply-To: References: Message-ID: <540E3622.9010300@oracle.com> Looks good. The push job is in JPRT queue. Thanks, Vladimir On 9/8/14 7:10 AM, Volker Simonis wrote: > Hi, > > could somebody please review and sponsor the following tiny fixes in > os_linux_ppc.cpp/os_aix_ppc.cpp: > > http://cr.openjdk.java.net/~simonis/webrevs/8057780/ > https://bugs.openjdk.java.net/browse/JDK-8057780 > > They simply fix a typo on Linux/PPC64 and an incorrect method > signature on AIX which have been introduced by "8050147: StoreLoad > barrier interferes with stack usages" > > Thank you and best regards, > Volker > From igor.veresov at oracle.com Mon Sep 8 23:54:26 2014 From: igor.veresov at oracle.com (Igor Veresov) Date: Mon, 8 Sep 2014 16:54:26 -0700 Subject: [8u40] RFR(M) 8055286, 8056964, 8057129: Extend CompileCommand=option In-Reply-To: <540E2D93.6020509@oracle.com> References: <540E2D93.6020509@oracle.com> Message-ID: Good. igor On Sep 8, 2014, at 3:28 PM, Vladimir Kozlov wrote: > Backport request. Changes were pushed into jdk9 last week. > They are applied to 8u cleanly. > > They are needed for 8055494 backport. 
> > https://bugs.openjdk.java.net/browse/JDK-8055286 > jdk9 webrev: > http://cr.openjdk.java.net/~zmajo/8055286/webrev.03/ > jdk9 changeset > http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/5cb3c079bf70 > > https://bugs.openjdk.java.net/browse/JDK-8056964 > jdk9 webrev: > http://cr.openjdk.java.net/~kvn/8056964/webrev/ > jdk9 changeset > http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/a9581f019c38 > > https://bugs.openjdk.java.net/browse/JDK-8057129 > jdk9 webrev: > http://cr.openjdk.java.net/~simonis/webrevs/8057129/ > jdk9 changeset: > http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/cbae7c62e1bd > > Thanks, > Vladimir From igor.veresov at oracle.com Mon Sep 8 23:54:39 2014 From: igor.veresov at oracle.com (Igor Veresov) Date: Mon, 8 Sep 2014 16:54:39 -0700 Subject: [8u40] RFR (S) 8057643: Unable to build --with-debug-level=optimized on OSX In-Reply-To: <540E2F2F.80008@oracle.com> References: <540E2F2F.80008@oracle.com> Message-ID: <2884838D-9A3A-46A2-9A83-D000EC4DC6D5@oracle.com> Looks good. igor On Sep 8, 2014, at 3:35 PM, Vladimir Kozlov wrote: > Backport request. The fix was pushed into jdk9 last week. > Changes are applied cleanly. > > https://bugs.openjdk.java.net/browse/JDK-8057643 > jdk9 webrev: > http://cr.openjdk.java.net/~kvn/8057643/webrev/ > jdk9 changeset: > http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/d3e712a41646 > > Thanks, > Vladimir From vladimir.kozlov at oracle.com Mon Sep 8 23:57:26 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Mon, 08 Sep 2014 16:57:26 -0700 Subject: [8u40] RFR (S) 8057643: Unable to build --with-debug-level=optimized on OSX In-Reply-To: <2884838D-9A3A-46A2-9A83-D000EC4DC6D5@oracle.com> References: <540E2F2F.80008@oracle.com> <2884838D-9A3A-46A2-9A83-D000EC4DC6D5@oracle.com> Message-ID: <540E4266.2010304@oracle.com> Thank you, Igor, for reviews. Vladimir On 9/8/14 4:54 PM, Igor Veresov wrote: > Looks good. 
> > igor > > On Sep 8, 2014, at 3:35 PM, Vladimir Kozlov wrote: > >> Backport request. The fix was pushed into jdk9 last week. >> Changes are applied cleanly. >> >> https://bugs.openjdk.java.net/browse/JDK-8057643 >> jdk9 webrev: >> http://cr.openjdk.java.net/~kvn/8057643/webrev/ >> jdk9 changeset: >> http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/d3e712a41646 >> >> Thanks, >> Vladimir > From igor.veresov at oracle.com Mon Sep 8 23:57:22 2014 From: igor.veresov at oracle.com (Igor Veresov) Date: Mon, 8 Sep 2014 16:57:22 -0700 Subject: [8u40] RFR(L) 8055494: Add C2 x86 intrinsic for BigInteger::multiplyToLen() method In-Reply-To: <540E3163.7050106@oracle.com> References: <540E3163.7050106@oracle.com> Message-ID: <529548E6-7D7F-44E9-A43A-778D40A97F9A@oracle.com> Looks good, but shouldn't it be pushed together with https://bugs.openjdk.java.net/browse/JDK-8057758 after it's fixed? igor On Sep 8, 2014, at 3:44 PM, Vladimir Kozlov wrote: > Backport request. The fix was pushed into jdk9 last week. > Changes were applied cleanly. > > https://bugs.openjdk.java.net/browse/JDK-8055494 > jdk9 webrev: > http://cr.openjdk.java.net/~kvn/8055494/webrev/ > jdk9 changeset: > http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/427de14928ab > > Thanks, > Vladimir From vladimir.kozlov at oracle.com Tue Sep 9 00:03:57 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Mon, 08 Sep 2014 17:03:57 -0700 Subject: [8u40] RFR(L) 8055494: Add C2 x86 intrinsic for BigInteger::multiplyToLen() method In-Reply-To: <529548E6-7D7F-44E9-A43A-778D40A97F9A@oracle.com> References: <540E3163.7050106@oracle.com> <529548E6-7D7F-44E9-A43A-778D40A97F9A@oracle.com> Message-ID: <540E43ED.40205@oracle.com> On 9/8/14 4:57 PM, Igor Veresov wrote: > Looks good, but shouldn't it be pushed together with https://bugs.openjdk.java.net/browse/JDK-8057758 after it's fixed? Yes, I will wait for the 8057758 fix. It was the plan. But I need to push it this week. 
Thanks, Vladimir > > igor > > On Sep 8, 2014, at 3:44 PM, Vladimir Kozlov wrote: > >> Backport request. The fix was pushed into jdk9 last week. >> Changes were applied cleanly. >> >> https://bugs.openjdk.java.net/browse/JDK-8055494 >> jdk9 webrev: >> http://cr.openjdk.java.net/~kvn/8055494/webrev/ >> jdk9 changeset: >> http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/427de14928ab >> >> Thanks, >> Vladimir > From igor.veresov at oracle.com Tue Sep 9 00:13:33 2014 From: igor.veresov at oracle.com (Igor Veresov) Date: Mon, 8 Sep 2014 17:13:33 -0700 Subject: [8u] 8056124: Hotspot should use PICL interface to get cacheline size on SPARC Message-ID: JDK9: http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/63934ec778a2 Webrev for 9: http://cr.openjdk.java.net/~iveresov/8056124/webrev.03/ Webrev for 8: http://cr.openjdk.java.net/~iveresov/8056124-8u/webrev/ JBS: https://bugs.openjdk.java.net/browse/JDK-8056124 Nightlies are clean, the patch needed to be adjusted to account for the fact that there is no VM_Version::_L1_data_cache_line_size in jdk8. igor From vladimir.kozlov at oracle.com Tue Sep 9 00:28:38 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Mon, 08 Sep 2014 17:28:38 -0700 Subject: [8u] 8056124: Hotspot should use PICL interface to get cacheline size on SPARC In-Reply-To: References: Message-ID: <540E49B6.5060700@oracle.com> Looks good. Thanks, Vladimir On 9/8/14 5:13 PM, Igor Veresov wrote: > JDK9: http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/63934ec778a2 > Webrev for 9: http://cr.openjdk.java.net/~iveresov/8056124/webrev.03/ > Webrev for 8: http://cr.openjdk.java.net/~iveresov/8056124-8u/webrev/ > JBS: https://bugs.openjdk.java.net/browse/JDK-8056124 > > Nightlies are clean, the patch needed to be adjusted to account for the fact that there is no VM_Version::_L1_data_cache_line_size in jdk8. 
> > igor > From igor.veresov at oracle.com Tue Sep 9 01:10:47 2014 From: igor.veresov at oracle.com (Igor Veresov) Date: Mon, 8 Sep 2014 18:10:47 -0700 Subject: [8u] 8056124: Hotspot should use PICL interface to get cacheline size on SPARC In-Reply-To: <540E49B6.5060700@oracle.com> References: <540E49B6.5060700@oracle.com> Message-ID: Thanks! igor On Sep 8, 2014, at 5:28 PM, Vladimir Kozlov wrote: > Looks good. > > Thanks, > Vladimir > > On 9/8/14 5:13 PM, Igor Veresov wrote: >> JDK9: http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/63934ec778a2 >> Webrev for 9: http://cr.openjdk.java.net/~iveresov/8056124/webrev.03/ >> Webrev for 8: http://cr.openjdk.java.net/~iveresov/8056124-8u/webrev/ >> JBS: https://bugs.openjdk.java.net/browse/JDK-8056124 >> >> Nightlies are clean, the patch needed to be adjusted to account for the fact that there is no VM_Version::_L1_data_cache_line_size in jdk8. >> >> igor >> From coleen.phillimore at oracle.com Tue Sep 9 02:05:16 2014 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Mon, 08 Sep 2014 22:05:16 -0400 Subject: [8u40] RFR 6642881: Improve performance of Class.getClassLoader() In-Reply-To: <540E34C3.4070203@oracle.com> References: <540A152E.9020507@oracle.com> <540E34C3.4070203@oracle.com> Message-ID: <540E605C.8020707@oracle.com> Thanks, Mandy! Coleen On 9/8/14, 6:59 PM, Mandy Chung wrote: > Thumbs up. > > Mandy > > On 9/5/2014 12:55 PM, Coleen Phillimore wrote: >> Summary: Add classLoader to java/lang/Class instance for fast access >> >> This is a backport request for 8u40. This change has been in the >> jdk9 code for 3 months without any problems. >> >> The JDK changes hg imported cleanly. The Hotspot change needed a >> hand merge for create_mirror call in klass.cpp. 
>> >> http://cr.openjdk.java.net/~coleenp/6642881_8u40_jdk/ >> http://cr.openjdk.java.net/~coleenp/6642881_8u40_hotspot/ >> >> bug link https://bugs.openjdk.java.net/browse/JDK-6642881 >> >> Ran jdk_core jtreg tests in jdk with both jdk/hotspot changes. Also >> ran jck java_lang tests with only the hotspot change. The hotspot >> change can be tested separately from the jdk change (but not the >> other way around). >> >> Thanks, >> Coleen > From staffan.larsen at oracle.com Tue Sep 9 06:02:46 2014 From: staffan.larsen at oracle.com (Staffan Larsen) Date: Tue, 9 Sep 2014 08:02:46 +0200 Subject: Cross Component (hotspot+jdk) Development in the Hotspot Group Repos Message-ID: ## tl;dr We propose a move to a Hotspot development model where we can do both hotspot and jdk changes in the hotspot group repos. This will require a fully populated JDK forest to push changes (whether hotspot or jdk changes) through JPRT. We do not expect these changes to have much effect on the open community, but it is good to note that there can be changes both in hotspot and jdk code coming through the hotspot repositories, and the best practice is to always clone and build the complete forest. We propose to do this change in a few weeks' time. ## Problem We see an increasing number of features (small and large) that require concerted changes to both the hotspot and the jdk repos. Our current development model does not support this very well since it requires jdk changes to be made in jdk9/dev and hotspot changes to be made in the hotspot group repositories. Alternatively, such changes result in "flag days" where jdk and hotspot changes are pushed through the group repos with a lot of manual work and impact on everyone working in the group repos. Either way, the result is very slow and cumbersome development. 
Some examples where concerted changes have been required are JSR-292, default methods, Java Flight Recorder, work on annotations, moving Class fields to Java, many serviceability area tests, and so on. A lot of this work will continue and we will also see new things such as jigsaw that add to the mix. Doing concerted changes today takes a lot of manual effort and calendar time to make sure nothing breaks. In many cases the addition of a new feature needs to be made first to a hotspot group repo. That change needs to propagate to jdk9/dev where library code can be changed to depend on it. Once that change has propagated back to the hotspot group repo, the final change can be made to remove the old implementation. This dance can take anywhere from 2 to 4 weeks to complete - for a single feature. There have also been quite a few cases where we missed taking the dependency into account which result in test failures in one or more repos. In some cases these failures go on for several weeks causing lots of extra work and confusion simply because it takes time for the fix to propagate through the repos. Instead, we want to move to a model where we can make both jdk and hotspot changes directly in the hotspot group repos. In that way the changes will always "travel together" through the repos. This will make our development cycle faster as well as more reliable. More or less by definition these types of changes introduce a stronger dependency between hotspot and the jdk. For the product as a whole to work correctly the right combination of hotspot and the jdk needs to be used. We have long since removed the requirement that hotspot would support several jdk versions (known as the Hotspot Express - or hsx - model) and we continue to see a strong dependency, where matching code in hotspot and the jdk needs to be used. 
## No More Dependency on Latest Promoted Build The strong dependency between hotspot and jdk makes it impossible for hotspot to depend on the latest promoted jdk build for testing and development. To elaborate on this: if a change with hotspot+jdk dependencies has been pushed to a group repo, it will no longer be possible to use the latest promoted build for running or testing the version of hotspot built in that repo -- the latest promoted build will not have the latest change to the jdk that hotspot now depends on (or vice versa). ## Require Fully Populated JDK Forest The simple solution that we can switch to today is to always require a fully populated JDK forest when building (both locally and in JPRT). By this we mean a clone of all the repos in the forest under, for example, jdk9/hs-rt. JPRT would no longer be using the latest promoted build when creating bundles; instead it will build the code from the submitted forest. If all operations (builds, integrations, pushes, JPRT jobs) always work on the full forest, then there will never be a mismatch between the jdk and the hotspot code. The main drawbacks of this are that developers now need to clone, store and build a lot more code. Cloning the full forest takes longer than just cloning the hotspot forest. This can be alleviated by maintaining local cached versions. Storing full forests requires more disk space. This can be mitigated by buying more disks or using a different workflow (for example Mercurial Queues). Building a full jdk takes longer, but hotspot is already one of the larger components to build and incremental builds are usually quite fast. ## Next Steps Given that we would like to improve the model we use for cross component development as soon as possible, we would like to switch to requiring a fully populated JDK forest for hotspot development. All the prerequisites for doing this are in place (changes to JPRT, both on the servers and to the configuration files in the source repos). 
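For readers unfamiliar with the full-forest workflow described above, it looks roughly like the following (illustrative commands only: the group-forest URL is one example, and get_source.sh is the helper script in the top-level repo that pulls the remaining repos):

```shell
hg clone http://hg.openjdk.java.net/jdk9/hs-rt
cd hs-rt
sh ./get_source.sh    # clones the child repos: hotspot, jdk, langtools, corba, jaxp, jaxws, nashorn
bash ./configure
make images           # full build the first time; later incremental builds are much faster
```

The exact configure flags and make targets depend on your platform and setup; the point is simply that all repos are cloned and built together, so hotspot and jdk changes always travel as one unit.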
A group of volunteer hotspot developers have been using full jdk repos for a while for day-to-day work (except pushes) and have not reported any showstopper problems. If no strong objections are raised, we need to decide on a date when we throw the switch. A good date is probably after the 8u40 Feature Complete date of mid-September [0] so as not to impact that release (although this change will only apply to JDK 9 development for now). Regards, Jon Masamitsu, Karen Kinnear, Mikael Vidstedt, Staffan Larsen, Stefan Särne, Vladimir Kozlov [0] http://openjdk.java.net/projects/jdk8u/releases/8u40.html From volker.simonis at gmail.com Tue Sep 9 06:46:32 2014 From: volker.simonis at gmail.com (Volker Simonis) Date: Tue, 9 Sep 2014 08:46:32 +0200 Subject: Cross Component (hotspot+jdk) Development in the Hotspot Group Repos In-Reply-To: References: Message-ID: Just add build changes and you get another class of changes which often require changing more than one repository in the forest. I was (and still am :) a fan of the HotSpot Express model and used to work on and build Hotspot alone for a long time. But since the introduction of the new build system, which dramatically improved build times, I have been building the whole forest for quite some time now. There's still a small question I have: what about other cross-development topics (e.g. top-level+jdk)? Will it be possible to "misuse" the new hotspot repositories to keep such changes in sync? Or will other team repositories like jdk-dev or jdk-client switch to such a model as well? Thank you and best regards, Volker On Tuesday, September 9, 2014, Staffan Larsen wrote: > > ## tl;dr > > We propose a move to a Hotspot development model where we can do both > hotspot and jdk changes in the hotspot group repos. This will require a > fully populated JDK forest to push changes (whether hotspot or jdk > changes) through JPRT. 
We do not expect these changes to have much > effect on the open community, but it is good to note that there can be > changes both in hotspot and jdk code coming through the hotspot > repositories, and the best practice is to always clone and build the > complete forest. > > We propose to do this change in a few weeks' time. > > ## Problem > > We see an increasing number of features (small and large) that require > concerted changes to both the hotspot and the jdk repos. Our current > development model does not support this very well since it requires jdk > changes to be made in jdk9/dev and hotspot changes to be made in the > hotspot group repositories. Alternatively, such changes result in "flag > days" where jdk and hotspot changes are pushed through the group repos > with a lot of manual work and impact on everyone working in the group > repos. Either way, the result is very slow and cumbersome development. > > [...] 
> > Regards, > Jon Masamitsu, Karen Kinnear, Mikael Vidstedt, > Staffan Larsen, Stefan Särne, Vladimir Kozlov > > [0] http://openjdk.java.net/projects/jdk8u/releases/8u40.html From volker.simonis at gmail.com Tue Sep 9 06:49:03 2014 From: volker.simonis at gmail.com (Volker Simonis) Date: Tue, 9 Sep 2014 08:49:03 +0200 Subject: RFR(XXS): 8057780: Fix ppc build after "8050147: StoreLoad barrier interferes with stack usages" In-Reply-To: <540E3622.9010300@oracle.com> References: <540E3622.9010300@oracle.com> Message-ID: As always - thanks a lot for your fast help! Regards, Volker On Tuesday, September 9, 2014, Vladimir Kozlov wrote: > Looks good. The push job is in JPRT queue. > > Thanks, > Vladimir > > On 9/8/14 7:10 AM, Volker Simonis wrote: > >> Hi, >> >> could somebody please review and sponsor the following tiny fixes in >> os_linux_ppc.cpp/os_aix_ppc.cpp: >> >> http://cr.openjdk.java.net/~simonis/webrevs/8057780/ >> https://bugs.openjdk.java.net/browse/JDK-8057780 >> >> They simply fix a typo on Linux/PPC64 and an incorrect method >> signature on AIX which were introduced by "8050147: StoreLoad >> barrier interferes with stack usages" >> >> Thank you and best regards, >> Volker >> >> From magnus.ihse.bursie at oracle.com Tue Sep 9 11:54:05 2014 From: magnus.ihse.bursie at oracle.com (Magnus Ihse Bursie) Date: Tue, 09 Sep 2014 13:54:05 +0200 Subject: RFR (XS) 8033946 - Hotspot build should ignore "ide" folder In-Reply-To: <5408971E.1090902@oracle.com> References: <5408971E.1090902@oracle.com> Message-ID: <540EEA5D.4070400@oracle.com> On 2014-09-04 18:45, Gerard Ziemski wrote: > hi all, > > Please review a very small fix that makes the hotspot build ignore the "ide" > folder, which is where local users can store their own favorite IDE > projects. 
> > For those interested, I have an Xcode project for JDK8 and JDK9 that I > am personally actively supporting and using, which is hosted at > https://orahub.oraclecorp.com/gerard.ziemski/xcode and is meant to be > put in the "jdk/hotspot/ide" folder. > > > Summary of fix: > > Exclude the "ide" folder from the makefile that searches for hotspot src > files; otherwise make bails out complaining that it does not know > how to handle Xcode project files. I'm a bit skeptical of this fix. First of all, I'd like to understand in what way make "bails out" if you have an extra directory with an Xcode project in it? The code you are modifying is just the HotspotWrapper, which is a temporary solution for gluing together the old and the new build. It checks for modified files in the hotspot directory, to determine if the proper hotspot makefile should be called. Is this code bailing out? How can that be? What is the error message? Secondly, I think it is a good idea if we can get more support for IDE projects into the codebase itself. Unfortunately we have no really standard way of doing this. We have netbeans directories in top/common, jdk/make and langtools/make, and the hotspot Visual Studio project files are generated on demand, and not checked in (if I remember correctly). I'm not entirely fond of putting the ide project files in make, but at least we should be consistent. Just adding a new way, without addressing the old, is not helpful. /Magnus From peter.allwin at oracle.com Tue Sep 9 14:15:14 2014 From: peter.allwin at oracle.com (Peter Allwin) Date: Tue, 9 Sep 2014 16:15:14 +0200 Subject: RFR(S): 8055719 - Clean out support for old VC versions from ProjectCreator In-Reply-To: <070601cfc302$b59cd960$20d68c20$@oracle.com> References: <070601cfc302$b59cd960$20d68c20$@oracle.com> Message-ID: <75E92FE2-7BA7-44B0-815C-F8D9AA64F0B9@oracle.com> Looks good to me! 
/peter On 28 Aug 2014, at 22:57, Christian Tornqvist wrote: > Hi everyone, > > > > This change removes support for old VC versions from ProjectCreator. I've > verified the change by building project files using VS2010 and VS2013 > x86/x64, I've also diffed the generated project files before and after my > change. > > > > Webrev: > > http://cr.openjdk.java.net/~ctornqvi/webrev/8055719/webrev.00/ > > > > Bug: > > https://bugs.openjdk.java.net/browse/JDK-8055719 > > > > Thanks, > > Christian > From staffan.larsen at oracle.com Tue Sep 9 14:17:05 2014 From: staffan.larsen at oracle.com (Staffan Larsen) Date: Tue, 9 Sep 2014 16:17:05 +0200 Subject: RFR(S): 8055719 - Clean out support for old VC versions from ProjectCreator In-Reply-To: <75E92FE2-7BA7-44B0-815C-F8D9AA64F0B9@oracle.com> References: <070601cfc302$b59cd960$20d68c20$@oracle.com> <75E92FE2-7BA7-44B0-815C-F8D9AA64F0B9@oracle.com> Message-ID: <4556817C-B9F8-432D-A607-2A82E05D63C3@oracle.com> Looks good! Thanks, /Staffan On 9 sep 2014, at 16:15, Peter Allwin wrote: > Looks good to me! > > /peter > > On 28 Aug 2014, at 22:57, Christian Tornqvist wrote: > >> Hi everyone, >> >> >> >> This change removes support for old VC versions from ProjectCreator. I've >> verified the change by building project files using VS2010 and VS2013 >> x86/x64, I've also diffed the generated project files before and after my >> change. 
>> >> Webrev: >> >> http://cr.openjdk.java.net/~ctornqvi/webrev/8055719/webrev.00/ >> >> >> >> Bug: >> >> https://bugs.openjdk.java.net/browse/JDK-8055719 >> >> >> >> Thanks, >> >> Christian >> > From coleen.phillimore at oracle.com Tue Sep 9 14:36:28 2014 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Tue, 09 Sep 2014 10:36:28 -0400 Subject: Cross Component (hotspot+jdk) Development in the Hotspot Group Repos In-Reply-To: References: Message-ID: <540F106C.1050101@oracle.com> Hi, Is there a definitive guide on how to build the entire JDK on all platforms that is available on the open mailing lists? http://openjdk.java.net/guide/ is all I found. I don't know how to do this on Windows, for example. Thanks, Coleen On 9/9/14, 2:02 AM, Staffan Larsen wrote: > ## tl;dr > > We propose a move to a Hotspot development model where we can do both > hotspot and jdk changes in the hotspot group repos. This will require a > fully populated JDK forest to push changes (whether hotspot or jdk > changes) through JPRT. [...] 
From christian.tornqvist at oracle.com Tue Sep 9 16:29:07 2014 From: christian.tornqvist at oracle.com (Christian Tornqvist) Date: Tue, 9 Sep 2014 12:29:07 -0400 Subject: RFR: JDK-8054836 [TESTBUG] Test is needed to verify correctness of malloc tracking In-Reply-To: <540E1480.1060103@oracle.com> References: <540E09A8.9040402@oracle.com> <540E1480.1060103@oracle.com> Message-ID: <0d8101cfcc4b$2e1e87c0$8a5b9740$@oracle.com> Hi George, Line 52 shouldn't be needed, otherwise this looks good. Thanks, Christian -----Original Message----- From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf Of George Triantafillou Sent: Monday, September 8, 2014 4:42 PM To: hotspot-dev at openjdk.java.net Subject: Re: RFR: JDK-8054836 [TESTBUG] Test is needed to verify correctness of malloc tracking Corrected webrev: http://cr.openjdk.java.net/~gtriantafill/8054836/webrev.01/ Thanks. -George On 9/8/2014 3:55 PM, George Triantafillou wrote: > Please review this new native memory tracking test for 8054836. 
The > test allocates small amounts of memory with random pseudo call stacks > using the WhiteBox API: > > Webrev: http://cr.openjdk.java.net/~gtriantafill/8054836/webrev/ > > Bug: https://bugs.openjdk.java.net/browse/JDK-8054836 > > The fix was tested locally on Linux with jtreg. > > Thanks. > > -George From igor.veresov at oracle.com Tue Sep 9 16:32:06 2014 From: igor.veresov at oracle.com (Igor Veresov) Date: Tue, 9 Sep 2014 09:32:06 -0700 Subject: [8u] 8056154, 8057750: JVM crash with EXCEPTION_ACCESS_VIOLATION when there are many threads running Message-ID: <14CF3521-5476-48D9-B104-F4F53CD68B59@oracle.com> Backport request of the following two issues (jdk9 nightlies are ok): 8056154: JVM crash with EXCEPTION_ACCESS_VIOLATION when there are many threads running JDK9 changeset: http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/9ac4db006cd5 JBS: https://bugs.openjdk.java.net/browse/JDK-8056154 Webrev: http://cr.openjdk.java.net/~iveresov/8056154/webrev.01 8057750: CTW should not make MH intrinsics not entrant JDK9 changeset: http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/712420bcab47 JBS: https://bugs.openjdk.java.net/browse/JDK-8057750 Webrev: http://cr.openjdk.java.net/~iveresov/8057750/webrev.00/ Thanks! igor From george.triantafillou at oracle.com Tue Sep 9 16:38:08 2014 From: george.triantafillou at oracle.com (George Triantafillou) Date: Tue, 09 Sep 2014 12:38:08 -0400 Subject: RFR: JDK-8054836 [TESTBUG] Test is needed to verify correctness of malloc tracking In-Reply-To: <0d8101cfcc4b$2e1e87c0$8a5b9740$@oracle.com> References: <540E09A8.9040402@oracle.com> <540E1480.1060103@oracle.com> <0d8101cfcc4b$2e1e87c0$8a5b9740$@oracle.com> Message-ID: <540F2CF0.2000207@oracle.com> Thanks Christian, I'll remove line 52. -George On 9/9/2014 12:29 PM, Christian Tornqvist wrote: > Hi George, > > Line 52 shouldn't be needed, otherwise this looks good. 
> > Thanks, > Christian > > -----Original Message----- > From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf Of George Triantafillou > Sent: Monday, September 8, 2014 4:42 PM > To: hotspot-dev at openjdk.java.net > Subject: Re: RFR: JDK-8054836 [TESTBUG] Test is needed to verify correctness of malloc tracking > > Corrected webrev: > > http://cr.openjdk.java.net/~gtriantafill/8054836/webrev.01/ > > > Thanks. > > -George > > On 9/8/2014 3:55 PM, George Triantafillou wrote: >> Please review this new native memory tracking test for 8054836. The >> test allocates small amounts of memory with random pseudo call stacks >> using the WhiteBox API: >> >> Webrev: http://cr.openjdk.java.net/~gtriantafill/8054836/webrev/ >> >> Bug: https://bugs.openjdk.java.net/browse/JDK-8054836 >> >> The fix was tested locally on Linux with jtreg. >> >> Thanks. >> >> -George > From lois.foltan at oracle.com Tue Sep 9 16:43:28 2014 From: lois.foltan at oracle.com (Lois Foltan) Date: Tue, 09 Sep 2014 12:43:28 -0400 Subject: RFR: JDK-8054836 [TESTBUG] Test is needed to verify correctness of malloc tracking In-Reply-To: <540E1480.1060103@oracle.com> References: <540E09A8.9040402@oracle.com> <540E1480.1060103@oracle.com> Message-ID: <540F2E30.5060107@oracle.com> George, this looks good. Lois On 9/8/2014 4:41 PM, George Triantafillou wrote: > Corrected webrev: > > http://cr.openjdk.java.net/~gtriantafill/8054836/webrev.01/ > > > Thanks. > > -George > > On 9/8/2014 3:55 PM, George Triantafillou wrote: >> Please review this new native memory tracking test for 8054836. The >> test allocates small amounts of memory with random pseudo call stacks >> using the WhiteBox API: >> >> Webrev: http://cr.openjdk.java.net/~gtriantafill/8054836/webrev/ >> >> Bug: https://bugs.openjdk.java.net/browse/JDK-8054836 >> >> The fix was tested locally on Linux with jtreg. >> >> Thanks. 
>> >> -George > From george.triantafillou at oracle.com Tue Sep 9 16:44:47 2014 From: george.triantafillou at oracle.com (George Triantafillou) Date: Tue, 09 Sep 2014 12:44:47 -0400 Subject: RFR: JDK-8054836 [TESTBUG] Test is needed to verify correctness of malloc tracking In-Reply-To: <540F2E30.5060107@oracle.com> References: <540E09A8.9040402@oracle.com> <540E1480.1060103@oracle.com> <540F2E30.5060107@oracle.com> Message-ID: <540F2E7F.9080707@oracle.com> Thanks Lois! -George On 9/9/2014 12:43 PM, Lois Foltan wrote: > George, this looks good. > Lois > > On 9/8/2014 4:41 PM, George Triantafillou wrote: >> Corrected webrev: >> >> http://cr.openjdk.java.net/~gtriantafill/8054836/webrev.01/ >> >> >> Thanks. >> >> -George >> >> On 9/8/2014 3:55 PM, George Triantafillou wrote: >>> Please review this new native memory tracking test for 8054836. The >>> test allocates small amounts of memory with random pseudo call >>> stacks using the WhiteBox API: >>> >>> Webrev: http://cr.openjdk.java.net/~gtriantafill/8054836/webrev/ >>> >>> Bug: https://bugs.openjdk.java.net/browse/JDK-8054836 >>> >>> The fix was tested locally on Linux with jtreg. >>> >>> Thanks. >>> >>> -George >> > From vladimir.kozlov at oracle.com Tue Sep 9 16:58:42 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Tue, 09 Sep 2014 09:58:42 -0700 Subject: [8u] 8056154, 8057750: JVM crash with EXCEPTION_ACCESS_VIOLATION when there are many threads running In-Reply-To: <14CF3521-5476-48D9-B104-F4F53CD68B59@oracle.com> References: <14CF3521-5476-48D9-B104-F4F53CD68B59@oracle.com> Message-ID: <540F31C2.7000305@oracle.com> Good. 
Thanks, Vladimir On 9/9/14 9:32 AM, Igor Veresov wrote: > Backport request of the following two issues (jdk9 nightlies are ok): > > 8056154: JVM crash with EXCEPTION_ACCESS_VIOLATION when there are many threads running > JDK9 changeset: http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/9ac4db006cd5 > JBS: https://bugs.openjdk.java.net/browse/JDK-8056154 > Webrev: http://cr.openjdk.java.net/~iveresov/8056154/webrev.01 > > 8057750: CTW should not make MH intrinsics not entrant > JDK9 changeset: http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/712420bcab47 > JBS: https://bugs.openjdk.java.net/browse/JDK-8057750 > Webrev: http://cr.openjdk.java.net/~iveresov/8057750/webrev.00/ > > > Thanks! > igor > From igor.veresov at oracle.com Tue Sep 9 17:03:05 2014 From: igor.veresov at oracle.com (Igor Veresov) Date: Tue, 9 Sep 2014 10:03:05 -0700 Subject: [8u] 8056154, 8057750: JVM crash with EXCEPTION_ACCESS_VIOLATION when there are many threads running In-Reply-To: <540F31C2.7000305@oracle.com> References: <14CF3521-5476-48D9-B104-F4F53CD68B59@oracle.com> <540F31C2.7000305@oracle.com> Message-ID: <7359470C-B70F-414D-A76D-DBB64C3B7C11@oracle.com> Thanks, Vladimir! igor On Sep 9, 2014, at 9:58 AM, Vladimir Kozlov wrote: > Good. > > Thanks, > Vladimir > > On 9/9/14 9:32 AM, Igor Veresov wrote: >> Backport request of the following two issues (jdk9 nightlies are ok): >> >> 8056154: JVM crash with EXCEPTION_ACCESS_VIOLATION when there are many threads running >> JDK9 changeset: http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/9ac4db006cd5 >> JBS: https://bugs.openjdk.java.net/browse/JDK-8056154 >> Webrev: http://cr.openjdk.java.net/~iveresov/8056154/webrev.01 >> >> 8057750: CTW should not make MH intrinsics not entrant >> JDK9 changeset: http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/712420bcab47 >> JBS: https://bugs.openjdk.java.net/browse/JDK-8057750 >> Webrev: http://cr.openjdk.java.net/~iveresov/8057750/webrev.00/ >> >> >> Thanks! 
>> igor >> From staffan.larsen at oracle.com Tue Sep 9 17:29:06 2014 From: staffan.larsen at oracle.com (Staffan Larsen) Date: Tue, 9 Sep 2014 19:29:06 +0200 Subject: Cross Component (hotspot+jdk) Development in the Hotspot Group Repos In-Reply-To: <540F106C.1050101@oracle.com> References: <540F106C.1050101@oracle.com> Message-ID: <26F0CED8-BDF2-4501-9FDC-CD2A8F216A30@oracle.com> There is a README-builds.html file in the top-level repo that has some instructions. The Adopt OpenJDK project has some good documentation as well: https://java.net/projects/adoptopenjdk/pages/Build /Staffan On 9 sep 2014, at 16:36, Coleen Phillimore wrote: > > Hi, > > Is there a definitive guide on how to build the entire JDK on all platforms that is available on the open mailing lists? > http://openjdk.java.net/guide/ is all I found. I don't know how to do this on Windows, for example. > > Thanks, > Coleen > > On 9/9/14, 2:02 AM, Staffan Larsen wrote: >> ## tl;dr >> >> We propose a move to a Hotspot development model where we can do both >> hotspot and jdk changes in the hotspot group repos. This will require a >> fully populated JDK forest to push changes (whether hotspot or jdk >> changes) through JPRT. [...] 
>> >> ## Next Steps >> >> Given that we would like to improve the model we use for cross-component >> development as soon as possible, we would like to switch to requiring a >> fully populated JDK forest for hotspot development. All the >> prerequisites for doing this are in place (changes to JPRT, both on the >> servers and to the configuration files in the source repos). A group of >> volunteering hotspot developers have been using full jdk repos for a >> while for day-to-day work (except pushes) and have not reported any >> showstopper problems. >> >> If no strong objections are raised we need to decide on a date when we >> throw the switch. A good date is probably after the 8u40 Feature >> Complete date of mid-September [0] so as not to impact that release >> (although this change will only apply to JDK 9 development for now). >> >> Regards, >> Jon Masamitsu, Karen Kinnear, Mikael Vidstedt, >> Staffan Larsen, Stefan Särne, Vladimir Kozlov >> >> [0] http://openjdk.java.net/projects/jdk8u/releases/8u40.html > From mikael.vidstedt at oracle.com Tue Sep 9 21:24:49 2014 From: mikael.vidstedt at oracle.com (Mikael Vidstedt) Date: Tue, 09 Sep 2014 14:24:49 -0700 Subject: Proposal: Allowing selective pushes to hotspot without jprt Message-ID: <540F7021.5080100@oracle.com> All, Made up primarily of low level C++ code, the Hotspot codebase is highly platform dependent and also tightly coupled with the tool chains on the various platforms. Each platform/tool chain combination has its set of special quirks, and code must be implemented in a way such that it only relies on the common subset of syntax and functionality across all these combinations. History has taught us that even simple changes can have surprising results when compiled with different compilers.
For more than a decade the Hotspot team has ensured a minimum quality level by requiring all pushes to be done through a build and test system (jprt) which guarantees that the code resulting from applying a set of changes builds on a set of core platforms and that a set of core tests pass. Only if all the builds and tests pass will the changes actually be pushed to the target repository. We believe that testing like the above, in combination with later stages of testing, is vital to ensuring that the quality level of the Hotspot code remains high and that developers do not run into situations where the latest version has build errors on some platforms. Recently the AIX/PPC port was added to the set of OpenJDK platforms. From a Hotspot perspective this new platform added a set of AIX/PPC specific files including some platform specific changes to shared code. The AIX/PPC platform is not tested by Oracle as part of Hotspot push jobs. The same thing applies for the shark and zero versions of Hotspot. While Hotspot developers remain committed to making sure changes are developed in a way such that the quality level remains high across all platforms and variants, because of the above mentioned complexities it is inevitable that from time to time changes will be made which introduce issues on specific platforms or tool chains not part of the core testing. To allow these issues to be resolved more quickly I would like to propose a relaxation in the requirements on how changes to Hotspot are pushed. 
Specifically I would like to allow for direct pushes to the hotspot/ repository of files specific to the following ports/variants/tools: * AIX * PPC * Shark * Zero Today this translates into the following files: - src/cpu/ppc/** - src/cpu/zero/** - src/os/aix/** - src/os_cpu/aix_ppc/** - src/os_cpu/bsd_zero/** - src/os_cpu/linux_ppc/** - src/os_cpu/linux_zero/** Note that all changes are still required to go through the normal development and review cycle; the proposed relaxation only applies to how the changes are pushed. If at code review time a change is for some reason deemed to be risky and/or otherwise to have an impact on shared files, the reviewer may request that the change go through the regular push testing. For changes only touching the above set of files this is expected to be rare. Please let me know what you think. Cheers, Mikael From gnu.andrew at redhat.com Tue Sep 9 22:12:28 2014 From: gnu.andrew at redhat.com (Andrew Hughes) Date: Tue, 9 Sep 2014 18:12:28 -0400 (EDT) Subject: Proposal: Allowing selective pushes to hotspot without jprt In-Reply-To: <540F7021.5080100@oracle.com> References: <540F7021.5080100@oracle.com> Message-ID: <1664098353.20252087.1410300748282.JavaMail.zimbra@redhat.com> ----- Original Message ----- > > All, > > Made up primarily of low level C++ code, the Hotspot codebase is highly > platform dependent and also tightly coupled with the tool chains on the > various platforms. Each platform/tool chain combination has its set of > special quirks, and code must be implemented in a way such that it only > relies on the common subset of syntax and functionality across all these > combinations. History has taught us that even simple changes can have > surprising results when compiled with different compilers.
> > For more than a decade the Hotspot team has ensured a minimum quality > level by requiring all pushes to be done through a build and test system > (jprt) which guarantees that the code resulting from applying a set of > changes builds on a set of core platforms and that a set of core tests > pass. Only if all the builds and tests pass will the changes actually be > pushed to the target repository. > > We believe that testing like the above, in combination with later stages > of testing, is vital to ensuring that the quality level of the Hotspot > code remains high and that developers do not run into situations where > the latest version has build errors on some platforms. > > Recently the AIX/PPC port was added to the set of OpenJDK platforms. > From a Hotspot perspective this new platform added a set of AIX/PPC > specific files including some platform specific changes to shared code. > The AIX/PPC platform is not tested by Oracle as part of Hotspot push > jobs. The same thing applies for the shark and zero versions of Hotspot. > > While Hotspot developers remain committed to making sure changes are > developed in a way such that the quality level remains high across all > platforms and variants, because of the above mentioned complexities it > is inevitable that from time to time changes will be made which > introduce issues on specific platforms or tool chains not part of the > core testing. > > To allow these issues to be resolved more quickly I would like to > propose a relaxation in the requirements on how changes to Hotspot are > pushed. 
Specifically I would like to allow for direct pushes to the > hotspot/ repository of files specific to the following ports/variants/tools: > > * AIX > * PPC > * Shark > * Zero > > Today this translates into the following files: > > - src/cpu/ppc/** > - src/cpu/zero/** > - src/os/aix/** > - src/os_cpu/aix_ppc/** > - src/os_cpu/bsd_zero/** > - src/os_cpu/linux_ppc/** > - src/os_cpu/linux_zero/** > > Note that all changes are still required to go through the normal > development and review cycle; the proposed relaxation only applies to > how the changes are pushed. > > If at code review time a change is for some reason deemed to be risky > and/or otherwise have impact on shared files the reviewer may request > that the change to go through the regular push testing. For changes only > touching the above set of files this expected to be rare. > > Please let me know what you think. > > Cheers, > Mikael > > +1 The build-test-commit idea works fine... if a) it actually tests the platforms relevant to the fix and b) it's accessible to everyone involved in the project. For those of us outside Oracle working on platforms other than x86, x86_64 and sparc, neither a nor b is true. As a case in point, "7141246: build-infra merge: Introduce new JVM_VARIANT* to control which kind of jvm gets built" was committed, having passed jprt. However, it primarily made changes to the builds for Zero and Shark, which jprt doesn't test, and subsequently broke them. For me to then fix it, in "8024648: 7141246 & 8016131 break Zero port", meant finding someone to both review that change and run it through jprt. Due to the way things are frozen, I believe it's still broken in u40. -- Andrew :) Free Java Software Engineer Red Hat, Inc. 
(http://www.redhat.com) PGP Key: 248BDC07 (https://keys.indymedia.org/) Fingerprint = EC5A 1F5E C0AD 1D15 8F1F 8F91 3B96 A578 248B DC07 From vladimir.kozlov at oracle.com Tue Sep 9 23:57:39 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Tue, 09 Sep 2014 16:57:39 -0700 Subject: [8u40] RFR(XS) 8057758: Tests run TypeProfileLevel=222 crash with guarantee(0) failed: must find derived/base pair Message-ID: <540F93F3.6070205@oracle.com> Backport request. The fix was pushed today (Sep 9). Changes were applied cleanly. It is needed for the backport "8055494: Add C2 x86 intrinsic for BigInteger::multiplyToLen() method". https://bugs.openjdk.java.net/browse/JDK-8057758 jdk9 webrev and changeset: http://cr.openjdk.java.net/~kvn/8057758/webrev/ http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/d8ecd90aa61c Thanks, Vladimir From igor.veresov at oracle.com Wed Sep 10 04:58:06 2014 From: igor.veresov at oracle.com (Igor Veresov) Date: Tue, 9 Sep 2014 21:58:06 -0700 Subject: [8u40] RFR(XS) 8057758: Tests run TypeProfileLevel=222 crash with guarantee(0) failed: must find derived/base pair In-Reply-To: <540F93F3.6070205@oracle.com> References: <540F93F3.6070205@oracle.com> Message-ID: Good. igor On Sep 9, 2014, at 4:57 PM, Vladimir Kozlov wrote: > Backport request. The fix was pushed today (Sep 9). > Changes were applied cleanly. > It is needed for the backport "8055494: Add C2 x86 intrinsic for BigInteger::multiplyToLen() method". 
> > https://bugs.openjdk.java.net/browse/JDK-8057758 > jdk9 webrev and changeset: > http://cr.openjdk.java.net/~kvn/8057758/webrev/ > http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/d8ecd90aa61c > > Thanks, > Vladimir From vladimir.kozlov at oracle.com Wed Sep 10 05:03:27 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Tue, 09 Sep 2014 22:03:27 -0700 Subject: [8u40] RFR(XS) 8057758: Tests run TypeProfileLevel=222 crash with guarantee(0) failed: must find derived/base pair In-Reply-To: References: <540F93F3.6070205@oracle.com> Message-ID: <540FDB9F.4070801@oracle.com> Thank you, Igor Vladimir On 9/9/14 9:58 PM, Igor Veresov wrote: > Good. > > igor > > On Sep 9, 2014, at 4:57 PM, Vladimir Kozlov wrote: > >> Backport request. The fix was pushed today (Sep 9). >> Changes were applied cleanly. >> It is needed for the backport "8055494: Add C2 x86 intrinsic for BigInteger::multiplyToLen() method". >> >> https://bugs.openjdk.java.net/browse/JDK-8057758 >> jdk9 webrev and changeset: >> http://cr.openjdk.java.net/~kvn/8057758/webrev/ >> http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/d8ecd90aa61c >> >> Thanks, >> Vladimir > From volker.simonis at gmail.com Wed Sep 10 08:45:12 2014 From: volker.simonis at gmail.com (Volker Simonis) Date: Wed, 10 Sep 2014 10:45:12 +0200 Subject: Proposal: Allowing selective pushes to hotspot without jprt In-Reply-To: <540F7021.5080100@oracle.com> References: <540F7021.5080100@oracle.com> Message-ID: Hi Mikael, thanks a lot for this proposal. I think this will dramatically simplify our work to keep our ports up to date! So I fully support it. Nevertheless, I think this can only be a first step towards fully opening the JPRT system to developers outside Oracle. With "opening" I mean to allow OpenJDK committers from outside Oracle to submit and run JPRT jobs as well as allowing porting projects to add hardware which builds and tests the HotSpot on alternative platforms. 
So while I'm all in favor of your proposal, I hope you can reassure me that this simplification will not push the realization of a truly OPEN JPRT system even further away. Regards, Volker On Tue, Sep 9, 2014 at 11:24 PM, Mikael Vidstedt wrote: > > All, > > Made up primarily of low level C++ code, the Hotspot codebase is highly > platform dependent and also tightly coupled with the tool chains on the > various platforms. Each platform/tool chain combination has its set of > special quirks, and code must be implemented in a way such that it only > relies on the common subset of syntax and functionality across all these > combinations. History has taught us that even simple changes can have > surprising results when compiled with different compilers. > > For more than a decade the Hotspot team has ensured a minimum quality level > by requiring all pushes to be done through a build and test system (jprt) > which guarantees that the code resulting from applying a set of changes > builds on a set of core platforms and that a set of core tests pass. Only if > all the builds and tests pass will the changes actually be pushed to the > target repository. > > We believe that testing like the above, in combination with later stages of > testing, is vital to ensuring that the quality level of the Hotspot code > remains high and that developers do not run into situations where the latest > version has build errors on some platforms. > > Recently the AIX/PPC port was added to the set of OpenJDK platforms. From a > Hotspot perspective this new platform added a set of AIX/PPC specific files > including some platform specific changes to shared code. The AIX/PPC > platform is not tested by Oracle as part of Hotspot push jobs. The same > thing applies for the shark and zero versions of Hotspot. 
> > While Hotspot developers remain committed to making sure changes are > developed in a way such that the quality level remains high across all > platforms and variants, because of the above mentioned complexities it is > inevitable that from time to time changes will be made which introduce > issues on specific platforms or tool chains not part of the core testing. > > To allow these issues to be resolved more quickly I would like to propose a > relaxation in the requirements on how changes to Hotspot are pushed. > Specifically I would like to allow for direct pushes to the hotspot/ > repository of files specific to the following ports/variants/tools: > > * AIX > * PPC > * Shark > * Zero > > Today this translates into the following files: > > - src/cpu/ppc/** > - src/cpu/zero/** > - src/os/aix/** > - src/os_cpu/aix_ppc/** > - src/os_cpu/bsd_zero/** > - src/os_cpu/linux_ppc/** > - src/os_cpu/linux_zero/** > > Note that all changes are still required to go through the normal > development and review cycle; the proposed relaxation only applies to how > the changes are pushed. > > If at code review time a change is for some reason deemed to be risky and/or > otherwise have impact on shared files the reviewer may request that the > change to go through the regular push testing. For changes only touching the > above set of files this expected to be rare. > > Please let me know what you think. 
> > Cheers, > Mikael > From mikael.gerdin at oracle.com Wed Sep 10 09:11:15 2014 From: mikael.gerdin at oracle.com (Mikael Gerdin) Date: Wed, 10 Sep 2014 11:11:15 +0200 Subject: RFR: 8057752: WhiteBox extension support for testing In-Reply-To: <540DC2B9.3000505@oracle.com> References: <540DC2B9.3000505@oracle.com> Message-ID: <1536244.eiS3300VYN@mgerdin03> Stefan, On Monday 08 September 2014 16.52.41 Stefan Johansson wrote: > Hi, > > Please review these changes for RFE: > https://bugs.openjdk.java.net/browse/JDK-8057752 > > Webrev: > http://cr.openjdk.java.net/~sjohanss/8057752/webrev.00/ Looks good. > > Summary: > Added the call to register_extended to make it possible extend the > WhiteBox API. The Java API is still defined in WhiteBox.java, if the > extension methods are not defined by an extension a linker error will > occur. Right, the code is designed so that it's possible to use mismatching versions of WhiteBox.java as long as you avoid calling the unlinkable methods. /Mikael > > Stefan From stefan.johansson at oracle.com Wed Sep 10 11:35:20 2014 From: stefan.johansson at oracle.com (Stefan Johansson) Date: Wed, 10 Sep 2014 13:35:20 +0200 Subject: RFR: 8057752: WhiteBox extension support for testing In-Reply-To: <1536244.eiS3300VYN@mgerdin03> References: <540DC2B9.3000505@oracle.com> <1536244.eiS3300VYN@mgerdin03> Message-ID: <54103778.5060406@oracle.com> Thanks for reviewing this Mikael. Made a small addition to allow Windows to handle empty extensions: http://cr.openjdk.java.net/~sjohanss/8057752/webrev.00-01/ Stefan On 2014-09-10 11:11, Mikael Gerdin wrote: > Stefan, > > On Monday 08 September 2014 16.52.41 Stefan Johansson wrote: >> Hi, >> >> Please review these changes for RFE: >> https://bugs.openjdk.java.net/browse/JDK-8057752 >> >> Webrev: >> http://cr.openjdk.java.net/~sjohanss/8057752/webrev.00/ > Looks good. > >> Summary: >> Added the call to register_extended to make it possible extend the >> WhiteBox API. 
The Java API is still defined in WhiteBox.java, if the >> extension methods are not defined by an extension a linker error will >> occur. > Right, the code is designed so that it's possible to use mismatching versions > of WhiteBox.java as long as you avoid calling the unlinkable methods. > > /Mikael > >> Stefan From george.triantafillou at oracle.com Wed Sep 10 13:06:37 2014 From: george.triantafillou at oracle.com (George Triantafillou) Date: Wed, 10 Sep 2014 09:06:37 -0400 Subject: Backport request - RFR: JDK-8054836 [TESTBUG] Test is needed to verify correctness of malloc tracking Message-ID: <54104CDD.2090803@oracle.com> This is a backport request to 8u40 for "JDK-8054836 [TESTBUG] Test is needed to verify correctness of malloc tracking". JDK9 changeset: http://hg.openjdk.java.net/jdk9/hs-rt/hotspot/rev/59c55db51def JBS: https://bugs.openjdk.java.net/browse/JDK-8054836 Webrev: http://cr.openjdk.java.net/~gtriantafill/8054836/webrev.01 The original patch applied cleanly. Thanks. -George From christian.tornqvist at oracle.com Wed Sep 10 14:09:44 2014 From: christian.tornqvist at oracle.com (Christian Tornqvist) Date: Wed, 10 Sep 2014 10:09:44 -0400 Subject: RFR: 8057752: WhiteBox extension support for testing In-Reply-To: <54103778.5060406@oracle.com> References: <540DC2B9.3000505@oracle.com> <1536244.eiS3300VYN@mgerdin03> <54103778.5060406@oracle.com> Message-ID: <119f01cfcd00$e01c5140$a054f3c0$@oracle.com> Hi Stefan, This looks good. Thanks, Christian -----Original Message----- From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf Of Stefan Johansson Sent: Wednesday, September 10, 2014 7:35 AM To: Mikael Gerdin; hotspot-dev at openjdk.java.net Subject: Re: RFR: 8057752: WhiteBox extension support for testing Thanks for reviewing this Mikael. 
Made a small addition to allow Windows to handle empty extensions: http://cr.openjdk.java.net/~sjohanss/8057752/webrev.00-01/ Stefan On 2014-09-10 11:11, Mikael Gerdin wrote: > Stefan, > > On Monday 08 September 2014 16.52.41 Stefan Johansson wrote: >> Hi, >> >> Please review these changes for RFE: >> https://bugs.openjdk.java.net/browse/JDK-8057752 >> >> Webrev: >> http://cr.openjdk.java.net/~sjohanss/8057752/webrev.00/ > Looks good. > >> Summary: >> Added the call to register_extended to make it possible extend the >> WhiteBox API. The Java API is still defined in WhiteBox.java, if the >> extension methods are not defined by an extension a linker error will >> occur. > Right, the code is designed so that it's possible to use mismatching > versions of WhiteBox.java as long as you avoid calling the unlinkable methods. > > /Mikael > >> Stefan From stefan.johansson at oracle.com Wed Sep 10 15:04:53 2014 From: stefan.johansson at oracle.com (Stefan Johansson) Date: Wed, 10 Sep 2014 17:04:53 +0200 Subject: RFR: 8057752: WhiteBox extension support for testing In-Reply-To: <119f01cfcd00$e01c5140$a054f3c0$@oracle.com> References: <540DC2B9.3000505@oracle.com> <1536244.eiS3300VYN@mgerdin03> <54103778.5060406@oracle.com> <119f01cfcd00$e01c5140$a054f3c0$@oracle.com> Message-ID: <54106895.9030906@oracle.com> On 2014-09-10 16:09, Christian Tornqvist wrote: > Hi Stefan, > > This looks good. Thanks Christian. Stefan > Thanks, > Christian > > -----Original Message----- > From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf Of > Stefan Johansson > Sent: Wednesday, September 10, 2014 7:35 AM > To: Mikael Gerdin; hotspot-dev at openjdk.java.net > Subject: Re: RFR: 8057752: WhiteBox extension support for testing > > Thanks for reviewing this Mikael. 
> > Made a small addition to allow Windows to handle empty extensions: > http://cr.openjdk.java.net/~sjohanss/8057752/webrev.00-01/ > > Stefan > > On 2014-09-10 11:11, Mikael Gerdin wrote: >> Stefan, >> >> On Monday 08 September 2014 16.52.41 Stefan Johansson wrote: >>> Hi, >>> >>> Please review these changes for RFE: >>> https://bugs.openjdk.java.net/browse/JDK-8057752 >>> >>> Webrev: >>> http://cr.openjdk.java.net/~sjohanss/8057752/webrev.00/ >> Looks good. >> >>> Summary: >>> Added the call to register_extended to make it possible extend the >>> WhiteBox API. The Java API is still defined in WhiteBox.java, if the >>> extension methods are not defined by an extension a linker error will >>> occur. >> Right, the code is designed so that it's possible to use mismatching >> versions of WhiteBox.java as long as you avoid calling the unlinkable > methods. >> /Mikael >> >>> Stefan > From vladimir.x.ivanov at oracle.com Wed Sep 10 15:14:00 2014 From: vladimir.x.ivanov at oracle.com (Vladimir Ivanov) Date: Wed, 10 Sep 2014 19:14:00 +0400 Subject: [8u40] Bulk backport request: 8048703, 8049532, 8049529, 8049530, 8049528, 8034935, 8023461, 8025842 Message-ID: <54106AB8.5080409@oracle.com> This is a bulk request to backport the following changes into 8u40. They were integrated into 9 long ago, but still apply cleanly to jdk8u-dev. 
(1) 8048703: ReplacedNodes dumps it's content to tty https://jbs.oracle.com/bugs/browse/JDK-8048703 http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/18d4d4c8beea (2) 8049532: LogCompilation: C1: inlining tree is flat (no depth is stored) https://jbs.oracle.com/bugs/browse/JDK-8049532 http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/4102555e5695 (3) 8049529: LogCompilation: annotate make_not_compilable with compilation level https://jbs.oracle.com/bugs/browse/JDK-8049529 http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/cdf968fe49ce (4) 8049530: Provide descriptive failure reason for compilation tasks removed for the queue https://jbs.oracle.com/bugs/browse/JDK-8049530 http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/307ecb8f6676 (5) 8049528: Method marked w/ @ForceInline isn't inlined with "executed < MinInliningThreshold times" message https://jbs.oracle.com/bugs/browse/JDK-8049528 http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/4153b0978181 (6) 8034935: JSR 292 support for PopFrame has a fragile coupling with DirectMethodHandle https://jbs.oracle.com/bugs/browse/JDK-8034935 http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/39e7fbc6d865 (7) 8023461: Thread holding lock at safepoint that vm can block on: MethodCompileQueue_lock https://jbs.oracle.com/bugs/browse/JDK-8023461 http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/99dc0ff1d4c7 (8) 8025842: Convert warning("Thread holding lock at safepoint that vm can block on") to fatal(...) 
https://jbs.oracle.com/bugs/browse/JDK-8025842 http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/c0774726073e Best regards, Vladimir Ivanov From erik.osterlund at lnu.se Wed Sep 10 15:14:29 2014 From: erik.osterlund at lnu.se (=?iso-8859-1?Q?Erik_=D6sterlund?=) Date: Wed, 10 Sep 2014 15:14:29 +0000 Subject: Single byte Atomic::cmpxchg implementation In-Reply-To: <540D1066.6030603@oracle.com> References: <631578E5-52BF-45D1-A8F2-50862878EBB8@lnu.se> <3855070.ADDnZ0LX5H@mgerdin-lap> <0EC100DC-9C53-4A37-9BF6-761A2DA9C770@lnu.se> <540D1066.6030603@oracle.com> Message-ID: <8EFE7E09-5919-485D-9D49-94BD819382B6@lnu.se> Hi David, Thank you for your reply. So with ARM out of the window, I thought I'd have a look at the PPC/AIX implementation. Looks like the loops here can be flattened to 1 loop instead of 2 nested. On a more crucial note, I was very surprised to see two full sync instructions were emitted in the CAS. I believe these are over conservative fences in the current implementation. I would like to replace the write fence with lwsync instead of sync and the read fence with isync instead of sync. Is there any good reason why it was implemented in this (in my opinion) over-conservative and expensive way? inline jbyte Atomic::cmpxchg(jbyte exchange_value, volatile jbyte* dest, jbyte compare_value) { unsigned int old_value; const uint64_t zero = 0; __asm__ __volatile__ ( /* fence */ strasm_sync // <------------------ write fence (release) should be strasm_lwsync instead as we don't care about ordering of device memory /* simple guard */ " lwz %[old_value], 0(%[dest]) \n" " cmpw %[compare_value], %[old_value] \n" " bne- 2f \n" /* atomic loop */ "1: \n" " lwarx %[old_value], %[dest], %[zero] \n" " cmpw %[compare_value], %[old_value] \n" " bne- 2f \n" " stwcx. 
%[exchange_value], %[dest], %[zero] \n" " bne- 1b \n" /* acquire */ strasm_sync // <------------------ read fence (acquire) should be strasm_isync instead as we don't care about ordering of device memory /* exit */ "2: \n" /* out */ : [old_value] "=&r" (old_value), "=m" (*dest) /* in */ : [dest] "b" (dest), [zero] "r" (zero), [compare_value] "r" (compare_value), [exchange_value] "r" (exchange_value), "m" (*dest) /* clobber */ : "cc", "memory" ); return (jbyte) old_value; } If nobody minds, I'd like to change this. :) /Erik On 08 Sep 2014, at 04:11, David Holmes wrote: > Hi Erik, > > Note there is currently no ARM code in the OpenJDK itself. Of course the Aarch64 project will hopefully be changing that soon, but I would not think they need the logic you describe below. > > Cheers, > David > > On 6/09/2014 12:03 AM, Erik Österlund wrote: >> Hi Mikael, >> >> Back from travelling now. I did look into other architectures a bit and made some interesting findings. >> >> The architecture that stands out as the most disastrous to me is ARM. It has three levels of nested loops to carry out a single byte CAS: >> 1. Outermost loop to emulate byte-grain CAS using word-sized CAS. >> 2. Middle loop makes calls to __kernel_cmpxchg, which is optimized for non-SMP systems using OS support but backward compatible with an LL/SC loop for SMP systems. Unfortunately it returns a boolean (success/failure) rather than the destination value, and hence the loop keeps track of the actual value at the destination required by the Atomic::cmpxchg interface. >> 3. __kernel_cmpxchg implements CAS on SMP systems using LL/SC (ldrex/strex). Since a context switch can break in the middle, a loop retries the operation in such an unfortunate, spuriously failing scenario. >> >> I have made a new solution that would only make sense on ARMv6 and above with SMP. 
The proposed solution has only one loop instead of three; it would be great if somebody could review it: >> >> inline intptr_t __casb_internal(volatile intptr_t *ptr, intptr_t compare, intptr_t new_val) { >> intptr_t result, old_tmp; >> >> // prefetch for writing and barrier >> __asm__ __volatile__ ("pld [%0]\n\t" >> " dmb sy\n\t" /* maybe we can get away with dsb st here instead for speed? anyone? playing it safe now */ >> : >> : "r" (ptr) >> : "memory"); >> >> do { >> // spuriously failing CAS loop keeping track of value >> __asm__ __volatile__("@ __cmpxchgb\n\t" >> " ldrexb %1, [%2]\n\t" >> " mov %0, #0\n\t" >> " teq %1, %3\n\t" >> " it eq\n\t" >> " strexbeq %0, %4, [%2]\n\t" >> : "=&r" (result), "=&r" (old_tmp) >> : "r" (ptr), "Ir" (compare), "r" (new_val) >> : "memory", "cc"); >> } while (result); >> >> // barrier >> __asm__ __volatile__ ("dmb sy" >> ::: "memory"); >> >> return old_tmp; >> } >> >> inline jbyte Atomic::cmpxchg (jbyte exchange_value, volatile jbyte* dest, jbyte compare_value) { >> return (jbyte)__casb_internal((volatile intptr_t*)dest, (intptr_t)compare_value, (intptr_t)exchange_value); >> } >> >> What I'm a bit uncertain about here is which barriers we need and which are optimal, as it seems to be a bit different for different ARM versions; maybe somebody can enlighten me? Also I'm not sure how hotspot checks the ARM version to make the appropriate decision. >> >> The proposed x86 implementation is much more straightforward (bsd, linux): >> >> inline jbyte Atomic::cmpxchg(jbyte exchange_value, volatile jbyte* dest, jbyte compare_value) { >> int mp = os::is_MP(); >> jbyte result; >> __asm__ volatile (LOCK_IF_MP(%4) "cmpxchgb %1,(%3)" >> : "=a" (result) >> : "q" (exchange_value), "a" (compare_value), "r" (dest), "r" (mp) >> : "cc", "memory"); >> return result; >> } >> >> Unfortunately the code is spread out through a billion files because of different ABIs and compiler support for different OS variants. 
Some use generated stubs, some use ASM files, some use inline assembly. I think I fixed all of them but I need your help to build and verify it if you don't mind as I don't have access to those platforms. How do we best do this? >> >> As for SPARC I unfortunately decided to keep the old implementation as SPARC does not seem to support byte-wide CAS, only found the cas and casx instructions which is not sufficient as far as I could tell, corrections if I'm wrong? In that case, add byte-wide CAS on SPARC to my wish list for christmas. >> >> Is there any other platform/architecture of interest on your wish list I should investigate which is important to you? PPC? >> >> /Erik >> >> On 04 Sep 2014, at 11:20, Mikael Gerdin wrote: >> >>> Hi Erik, >>> >>> On Thursday 04 September 2014 09.05.13 Erik ?sterlund wrote: >>>> Hi, >>>> >>>> The implementation of single byte Atomic::cmpxchg on x86 (and all other >>>> platforms) emulates the single byte cmpxchgb instruction using a loop of >>>> jint-sized load and cmpxchgl and code to dynamically align the destination >>>> address. >>>> >>>> This code is used for GC-code related to remembered sets currently. >>>> >>>> I have the changes on my platform (amd64, bsd) to simply use the native >>>> cmpxchgb instead but could provide a patch fixing this unnecessary >>>> performance glitch for all supported x86 if anybody wants this? >>> >>> I think that sounds good. >>> Would you mind looking at other cpu arches to see if they provide something >>> similar? It's ok if you can't build the code for the other arches, I can help >>> you with that. 
>>> >>> /Mikael >>> >>>> >>>> /Erik >>> >> From christian.tornqvist at oracle.com Wed Sep 10 17:32:31 2014 From: christian.tornqvist at oracle.com (Christian Tornqvist) Date: Wed, 10 Sep 2014 13:32:31 -0400 Subject: Backport request - RFR: JDK-8054836 [TESTBUG] Test is needed to verify correctness of malloc tracking In-Reply-To: <54104CDD.2090803@oracle.com> References: <54104CDD.2090803@oracle.com> Message-ID: <126801cfcd1d$344b27c0$9ce17740$@oracle.com> Hi George, Looks good, I'll push this for you. Thanks, Christian -----Original Message----- From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf Of George Triantafillou Sent: Wednesday, September 10, 2014 9:07 AM To: hotspot-dev at openjdk.java.net Subject: Backport request - RFR: JDK-8054836 [TESTBUG] Test is needed to verify correctness of malloc tracking This is a backport request to 8u40 for "JDK-8054836 [TESTBUG] Test is needed to verify correctness of malloc tracking". JDK9 changeset: http://hg.openjdk.java.net/jdk9/hs-rt/hotspot/rev/59c55db51def JBS: https://bugs.openjdk.java.net/browse/JDK-8054836 Webrev: http://cr.openjdk.java.net/~gtriantafill/8054836/webrev.01 The original patch applied cleanly. Thanks. -George From staffan.larsen at oracle.com Wed Sep 10 18:43:29 2014 From: staffan.larsen at oracle.com (Staffan Larsen) Date: Wed, 10 Sep 2014 20:43:29 +0200 Subject: RFR 8056039: Hotspot does not compile with clang 3.4 on Linux In-Reply-To: <1593876.cxpFjnnrsK@mgerdin03> References: <1593876.cxpFjnnrsK@mgerdin03> Message-ID: <37A7AB85-DAE2-4179-9EE9-772FABEBB3A0@oracle.com> Removed code is good code. Reviewed. /Staffan On 26 aug 2014, at 12:17, Mikael Gerdin wrote: > Hi all, > > In order to get clang's (sometimes) more helpful error messages when compiling > I'd like to fix the few remaining places where clang fails to compile Hotspot. > > The culprit in this case was "local_vsnprintf" in os_linux.cpp, an unused > function which wasn't annotaded with the PRINTF_FORMAT macro. 
> Since the function was unused I decided to remove it instead, then I found it > in the other os_*nix.cpp files as well. > > Digging into the Teamware history it looks like it first appeared in the > Solaris port because vsnprintf did not exist on some very old versions of > Solaris, so it was dynamically looked up through dlsym. For a few years > vsnprintf has been present in the Solaris header files, so I think it's safe > to remove the workaround now some 17 years later. > > I also need a SCANF_FORMAT for an internal file, so I added that to > globalDefinitions. > > Webrev: http://cr.openjdk.java.net/~mgerdin/8056039/webrev/ > Buglink: https://bugs.openjdk.java.net/browse/JDK-8056039 > > Thanks > /Mikael > > From george.triantafillou at oracle.com Wed Sep 10 18:52:55 2014 From: george.triantafillou at oracle.com (George Triantafillou) Date: Wed, 10 Sep 2014 14:52:55 -0400 Subject: Backport request - RFR: JDK-8054836 [TESTBUG] Test is needed to verify correctness of malloc tracking In-Reply-To: <126801cfcd1d$344b27c0$9ce17740$@oracle.com> References: <54104CDD.2090803@oracle.com> <126801cfcd1d$344b27c0$9ce17740$@oracle.com> Message-ID: <54109E07.2000603@oracle.com> Thanks Christian. -George On 9/10/2014 1:32 PM, Christian Tornqvist wrote: > Hi George, > > Looks good, I'll push this for you. > > Thanks, > Christian > > -----Original Message----- > From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf Of George Triantafillou > Sent: Wednesday, September 10, 2014 9:07 AM > To: hotspot-dev at openjdk.java.net > Subject: Backport request - RFR: JDK-8054836 [TESTBUG] Test is needed to verify correctness of malloc tracking > > This is a backport request to 8u40 for "JDK-8054836 [TESTBUG] Test is needed to verify correctness of malloc tracking". 
> > JDK9 changeset: > http://hg.openjdk.java.net/jdk9/hs-rt/hotspot/rev/59c55db51def > JBS: https://bugs.openjdk.java.net/browse/JDK-8054836 > Webrev: http://cr.openjdk.java.net/~gtriantafill/8054836/webrev.01 > > > The original patch applied cleanly. > > Thanks. > > -George > From mikael.vidstedt at oracle.com Wed Sep 10 22:16:09 2014 From: mikael.vidstedt at oracle.com (Mikael Vidstedt) Date: Wed, 10 Sep 2014 15:16:09 -0700 Subject: Proposal: Allowing selective pushes to hotspot without jprt In-Reply-To: References: <540F7021.5080100@oracle.com> Message-ID: <5410CDA9.7030405@oracle.com> Andrew/Volker, Thanks for the positive feedback. The goal of the proposal is to simplify pushing changes which are effectively not tested by the jprt system anyway. The proposed relaxation would not affect work on other infrastructure projects in any relevant way, but would hopefully improve all our lives significantly immediately. Cheers, Mikael On 2014-09-10 01:45, Volker Simonis wrote: > Hi Mikael, > > thanks a lot for this proposal. I think this will dramatically > simplify our work to keep our ports up to date! So I fully support it. > > Nevertheless, I think this can only be a first step towards fully open > the JPRT system to developers outside Oracle. With "opening" I mean to > allow OpenJDK commiters from outside Oracle to submit and run JPRT > jobs as well as allowing porting projects to add hardware which builds > and tests the HotSpot on alternative platforms. > > So while I'm all in favor of your proposal I hope you can allay my > doubts that this simplification will hopefully not push the > realization of a truly OPEN JPRT system even further away. > > Regards, > Volker > > > On Tue, Sep 9, 2014 at 11:24 PM, Mikael Vidstedt > wrote: >> All, >> >> Made up primarily of low level C++ code, the Hotspot codebase is highly >> platform dependent and also tightly coupled with the tool chains on the >> various platforms. 
Each platform/tool chain combination has its set of >> special quirks, and code must be implemented in a way such that it only >> relies on the common subset of syntax and functionality across all these >> combinations. History has taught us that even simple changes can have >> surprising results when compiled with different compilers. >> >> For more than a decade the Hotspot team has ensured a minimum quality level >> by requiring all pushes to be done through a build and test system (jprt) >> which guarantees that the code resulting from applying a set of changes >> builds on a set of core platforms and that a set of core tests pass. Only if >> all the builds and tests pass will the changes actually be pushed to the >> target repository. >> >> We believe that testing like the above, in combination with later stages of >> testing, is vital to ensuring that the quality level of the Hotspot code >> remains high and that developers do not run into situations where the latest >> version has build errors on some platforms. >> >> Recently the AIX/PPC port was added to the set of OpenJDK platforms. From a >> Hotspot perspective this new platform added a set of AIX/PPC specific files >> including some platform specific changes to shared code. The AIX/PPC >> platform is not tested by Oracle as part of Hotspot push jobs. The same >> thing applies for the shark and zero versions of Hotspot. >> >> While Hotspot developers remain committed to making sure changes are >> developed in a way such that the quality level remains high across all >> platforms and variants, because of the above mentioned complexities it is >> inevitable that from time to time changes will be made which introduce >> issues on specific platforms or tool chains not part of the core testing. >> >> To allow these issues to be resolved more quickly I would like to propose a >> relaxation in the requirements on how changes to Hotspot are pushed. 
>> Specifically I would like to allow for direct pushes to the hotspot/ >> repository of files specific to the following ports/variants/tools: >> >> * AIX >> * PPC >> * Shark >> * Zero >> >> Today this translates into the following files: >> >> - src/cpu/ppc/** >> - src/cpu/zero/** >> - src/os/aix/** >> - src/os_cpu/aix_ppc/** >> - src/os_cpu/bsd_zero/** >> - src/os_cpu/linux_ppc/** >> - src/os_cpu/linux_zero/** >> >> Note that all changes are still required to go through the normal >> development and review cycle; the proposed relaxation only applies to how >> the changes are pushed. >> >> If at code review time a change is for some reason deemed to be risky and/or >> otherwise have impact on shared files the reviewer may request that the >> change to go through the regular push testing. For changes only touching the >> above set of files this expected to be rare. >> >> Please let me know what you think. >> >> Cheers, >> Mikael >> From david.holmes at oracle.com Thu Sep 11 01:25:43 2014 From: david.holmes at oracle.com (David Holmes) Date: Thu, 11 Sep 2014 11:25:43 +1000 Subject: Single byte Atomic::cmpxchg implementation In-Reply-To: <8EFE7E09-5919-485D-9D49-94BD819382B6@lnu.se> References: <631578E5-52BF-45D1-A8F2-50862878EBB8@lnu.se> <3855070.ADDnZ0LX5H@mgerdin-lap> <0EC100DC-9C53-4A37-9BF6-761A2DA9C770@lnu.se> <540D1066.6030603@oracle.com> <8EFE7E09-5919-485D-9D49-94BD819382B6@lnu.se> Message-ID: <5410FA17.7060506@oracle.com> On 11/09/2014 1:14 AM, Erik ?sterlund wrote: > Hi David, > > Thank you for your reply. So with ARM out of the window, I thought I'd have a look at the PPC/AIX implementation. Looks like the loops here can be flattened to 1 loop instead of 2 nested. > > On a more crucial note, I was very surprised to see two full sync instructions were emitted in the CAS. I believe these are over conservative fences in the current implementation. 
I would like to replace the write fence with lwsync instead of sync and the read fence with isync instead of sync. Is there any good reason why it was implemented in this (in my opinion) over-conservative and expensive way? The Atomic operations must provide full bi-directional fence semantics, so a full sync on entry is required in my opinion. I agree that the combination of bne+isync would suffice on the exit path. But this is a complex area, involving hardware that doesn't always follow the rules, so conservatism is understandable. But this needs to be taken up with the PPC64 folk who did this port. Cheers, David > inline jbyte Atomic::cmpxchg(jbyte exchange_value, volatile jbyte* dest, jbyte compare_value) { > unsigned int old_value; > const uint64_t zero = 0; > > __asm__ __volatile__ ( > /* fence */ > strasm_sync // <------------------ write fence (release) should be strasm_lwsync instead as we don't care about ordering of device memory > /* simple guard */ > " lwz %[old_value], 0(%[dest]) \n" > " cmpw %[compare_value], %[old_value] \n" > " bne- 2f \n" > /* atomic loop */ > "1: \n" > " lwarx %[old_value], %[dest], %[zero] \n" > " cmpw %[compare_value], %[old_value] \n" > " bne- 2f \n" > " stwcx. %[exchange_value], %[dest], %[zero] \n" > " bne- 1b \n" > /* acquire */ > strasm_sync // <------------------ read fence (acquire) should be strasm_isync instead as we don't care about ordering of device memory > /* exit */ > "2: \n" > /* out */ > : [old_value] "=&r" (old_value), > "=m" (*dest) > /* in */ > : [dest] "b" (dest), > [zero] "r" (zero), > [compare_value] "r" (compare_value), > [exchange_value] "r" (exchange_value), > "m" (*dest) > /* clobber */ > : "cc", > "memory" > ); > > return (jint) old_value; > } > > If nobody minds, I'd like to change this. :) > > /Erik > > On 08 Sep 2014, at 04:11, David Holmes wrote: > >> Hi Erik, >> >> Note there is currently no ARM code in the OpenJDK itself. 
Of course the Aarch64 project will hopefully be changing that soon, but I would not think they need the logic you describe below. >> >> Cheers, >> David >> >> On 6/09/2014 12:03 AM, Erik ?sterlund wrote: >>> Hi Mikael, >>> >>> Back from travelling now. I did look into other architectures a bit and made some interesting findings. >>> >>> The architecture that stands out the most disastrous to me is ARM. It has three levels of nested loops to carry out a single byte CAS: >>> 1. Outmost loop to emulate byte-grain CAS using word-sized CAS. >>> 2. Middle loop makes calls to the __kernel_cmpxchg which is optimized for non-SMP systems using OS support but backward compatible with LL/SC loop for SMP systems. Unfortunately it returns a boolean (success/failure) rather than the destination value and hence the loop keeps track of the actual value at the destination required by the Atomic::cmpxchg interface. >>> 3. __kernel_cmpxchg implements CAS on SMP-systems using LL/SC (ldrex/strex). Since a context switch can break in the middle, a loop retries the operation in such unfortunate spuriously failing scenario. >>> >>> I have made a new solution that would only make sense on ARMv6 and above with SMP. The proposed solution has only one loop instead of three, would be great if somebody could review it: >>> >>> inline intptr_t __casb_internal(volatile intptr_t *ptr, intptr_t compare, intptr_t new_val) { >>> intptr_t result, old_tmp; >>> >>> // prefetch for writing and barrier >>> __asm__ __volatile__ ("pld [%0]\n\t" >>> " dmb sy\n\t" /* maybe we can get away with dsb st here instead for speed? anyone? 
playing it safe now */ >>> : >>> : "r" (ptr) >>> : "memory"); >>> >>> do { >>> // spuriously failing CAS loop keeping track of value >>> __asm__ __volatile__("@ __cmpxchgb\n\t" >>> " ldrexb %1, [%2]\n\t" >>> " mov %0, #0\n\t" >>> " teq %1, %3\n\t" >>> " it eq\n\t" >>> " strexbeq %0, %4, [%2]\n\t" >>> : "=&r" (result), "=&r" (old_tmp) >>> : "r" (ptr), "Ir" (compare), "r" (new_val) >>> : "memory", "cc"); >>> } while (result); >>> >>> // barrier >>> __asm__ __volatile__ ("dmb sy" >>> ::: "memory"); >>> >>> return old_tmp; >>> } >>> >>> inline jbyte Atomic::cmpxchg (jbyte exchange_value, volatile jbyte* dest, jbyte compare_value) { >>> return (jbyte)__casb_internal((volatile intptr_t*)dest, (intptr_t)compare_value, (intptr_t)exchange_value); >>> } >>> >>> What I'm a bit uncertain about here is which barriers we need and which are optimal as it seems to be a bit different for different ARM versions, maybe somebody can enlighten me? Also I'm not sure how hotspot checks ARM version to make the appropriate decision. >>> >>> The proposed x86 implementation is much more straightforward (bsd, linux): >>> >>> inline jbyte Atomic::cmpxchg(jbyte exchange_value, volatile jbyte* dest, jbyte compare_value) { >>> int mp = os::is_MP(); >>> jbyte result; >>> __asm__ volatile (LOCK_IF_MP(%4) "cmpxchgb %1,(%3)" >>> : "=a" (result) >>> : "q" (exchange_value), "a" (compare_value), "r" (dest), "r" (mp) >>> : "cc", "memory"); >>> return result; >>> } >>> >>> Unfortunately the code is spread out through a billion files because of different ABIs and compiler support for different OS variants. Some use generated stubs, some use ASM files, some use inline assembly. I think I fixed all of them but I need your help to build and verify it if you don't mind as I don't have access to those platforms. How do we best do this? 
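[A compiler-builtin sketch of the same idea: with a GCC/Clang `__sync` builtin, a byte-wide CAS with the value-returning contract of `Atomic::cmpxchg` needs no per-ABI inline assembly at all. The function name is invented; this is an illustration of the approach, not the proposed patch.]

```cpp
#include <cstdint>

// Byte-wide CAS with the value-returning contract of Atomic::cmpxchg:
// returns the value found at *dest, and the store happened iff that value
// equals compare_value. On x86, GCC and Clang compile this down to a single
// lock cmpxchgb, with no alignment loop and no per-ABI assembly.
int8_t cas_byte_direct(int8_t exchange_value, volatile int8_t* dest,
                       int8_t compare_value) {
  return __sync_val_compare_and_swap(dest, compare_value, exchange_value);
}
```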
>>> >>> As for SPARC I unfortunately decided to keep the old implementation as SPARC does not seem to support byte-wide CAS, only found the cas and casx instructions which is not sufficient as far as I could tell, corrections if I'm wrong? In that case, add byte-wide CAS on SPARC to my wish list for christmas. >>> >>> Is there any other platform/architecture of interest on your wish list I should investigate which is important to you? PPC? >>> >>> /Erik >>> >>> On 04 Sep 2014, at 11:20, Mikael Gerdin wrote: >>> >>>> Hi Erik, >>>> >>>> On Thursday 04 September 2014 09.05.13 Erik ?sterlund wrote: >>>>> Hi, >>>>> >>>>> The implementation of single byte Atomic::cmpxchg on x86 (and all other >>>>> platforms) emulates the single byte cmpxchgb instruction using a loop of >>>>> jint-sized load and cmpxchgl and code to dynamically align the destination >>>>> address. >>>>> >>>>> This code is used for GC-code related to remembered sets currently. >>>>> >>>>> I have the changes on my platform (amd64, bsd) to simply use the native >>>>> cmpxchgb instead but could provide a patch fixing this unnecessary >>>>> performance glitch for all supported x86 if anybody wants this? >>>> >>>> I think that sounds good. >>>> Would you mind looking at other cpu arches to see if they provide something >>>> similar? It's ok if you can't build the code for the other arches, I can help >>>> you with that. >>>> >>>> /Mikael >>>> >>>>> >>>>> /Erik >>>> >>> > From igor.veresov at oracle.com Thu Sep 11 04:19:24 2014 From: igor.veresov at oracle.com (Igor Veresov) Date: Wed, 10 Sep 2014 21:19:24 -0700 Subject: [8u40] Bulk backport request: 8048703, 8049532, 8049529, 8049530, 8049528, 8034935, 8023461, 8025842 In-Reply-To: <54106AB8.5080409@oracle.com> References: <54106AB8.5080409@oracle.com> Message-ID: Seems alright. igor On Sep 10, 2014, at 8:14 AM, Vladimir Ivanov wrote: > This is a bulk request to backport the following changes into 8u40. 
They were integrated into 9 long ago, but still apply cleanly to jdk8u-dev. > > (1) 8048703: ReplacedNodes dumps it's content to tty > https://jbs.oracle.com/bugs/browse/JDK-8048703 > http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/18d4d4c8beea > > (2) 8049532: LogCompilation: C1: inlining tree is flat (no depth is stored) > https://jbs.oracle.com/bugs/browse/JDK-8049532 > http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/4102555e5695 > > (3) 8049529: LogCompilation: annotate make_not_compilable with compilation level > https://jbs.oracle.com/bugs/browse/JDK-8049529 > http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/cdf968fe49ce > > (4) 8049530: Provide descriptive failure reason for compilation tasks removed for the queue > https://jbs.oracle.com/bugs/browse/JDK-8049530 > http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/307ecb8f6676 > > (5) 8049528: Method marked w/ @ForceInline isn't inlined with "executed < MinInliningThreshold times" message > https://jbs.oracle.com/bugs/browse/JDK-8049528 > http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/4153b0978181 > > (6) 8034935: JSR 292 support for PopFrame has a fragile coupling with DirectMethodHandle > https://jbs.oracle.com/bugs/browse/JDK-8034935 > http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/39e7fbc6d865 > > (7) 8023461: Thread holding lock at safepoint that vm can block on: MethodCompileQueue_lock > https://jbs.oracle.com/bugs/browse/JDK-8023461 > http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/99dc0ff1d4c7 > > (8) 8025842: Convert warning("Thread holding lock at safepoint that vm can block on") to fatal(...) 
> https://jbs.oracle.com/bugs/browse/JDK-8025842 > http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/c0774726073e > > Best regards, > Vladimir Ivanov From vladimir.kozlov at oracle.com Thu Sep 11 04:26:42 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Wed, 10 Sep 2014 21:26:42 -0700 Subject: [8u40] Bulk backport request: 8048703, 8049532, 8049529, 8049530, 8049528, 8034935, 8023461, 8025842 In-Reply-To: References: <54106AB8.5080409@oracle.com> Message-ID: <54112482.7080903@oracle.com> On 9/10/14 9:19 PM, Igor Veresov wrote: > Seems alright. +1 Thanks, Vladimir K > > igor > > On Sep 10, 2014, at 8:14 AM, Vladimir Ivanov wrote: > >> This is a bulk request to backport the following changes into 8u40. They were integrated into 9 long ago, but still apply cleanly to jdk8u-dev. >> >> (1) 8048703: ReplacedNodes dumps it's content to tty >> https://jbs.oracle.com/bugs/browse/JDK-8048703 >> http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/18d4d4c8beea >> >> (2) 8049532: LogCompilation: C1: inlining tree is flat (no depth is stored) >> https://jbs.oracle.com/bugs/browse/JDK-8049532 >> http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/4102555e5695 >> >> (3) 8049529: LogCompilation: annotate make_not_compilable with compilation level >> https://jbs.oracle.com/bugs/browse/JDK-8049529 >> http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/cdf968fe49ce >> >> (4) 8049530: Provide descriptive failure reason for compilation tasks removed for the queue >> https://jbs.oracle.com/bugs/browse/JDK-8049530 >> http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/307ecb8f6676 >> >> (5) 8049528: Method marked w/ @ForceInline isn't inlined with "executed < MinInliningThreshold times" message >> https://jbs.oracle.com/bugs/browse/JDK-8049528 >> http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/4153b0978181 >> >> (6) 8034935: JSR 292 support for PopFrame has a fragile coupling with DirectMethodHandle >> https://jbs.oracle.com/bugs/browse/JDK-8034935 >> 
http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/39e7fbc6d865 >> >> (7) 8023461: Thread holding lock at safepoint that vm can block on: MethodCompileQueue_lock >> https://jbs.oracle.com/bugs/browse/JDK-8023461 >> http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/99dc0ff1d4c7 >> >> (8) 8025842: Convert warning("Thread holding lock at safepoint that vm can block on") to fatal(...) >> https://jbs.oracle.com/bugs/browse/JDK-8025842 >> http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/c0774726073e >> >> Best regards, >> Vladimir Ivanov > From vladimir.x.ivanov at oracle.com Thu Sep 11 05:59:24 2014 From: vladimir.x.ivanov at oracle.com (Vladimir Ivanov) Date: Thu, 11 Sep 2014 09:59:24 +0400 Subject: [8u40] Bulk backport request: 8048703, 8049532, 8049529, 8049530, 8049528, 8034935, 8023461, 8025842 In-Reply-To: <54112482.7080903@oracle.com> References: <54106AB8.5080409@oracle.com> <54112482.7080903@oracle.com> Message-ID: <54113A3C.9040307@oracle.com> Igor, Vladimir, thanks! Best regards, Vladimir Ivanov On 9/11/14, 8:26 AM, Vladimir Kozlov wrote: > On 9/10/14 9:19 PM, Igor Veresov wrote: >> Seems alright. > > +1 > > Thanks, > Vladimir K > >> >> igor >> >> On Sep 10, 2014, at 8:14 AM, Vladimir Ivanov >> wrote: >> >>> This is a bulk request to backport the following changes into 8u40. >>> They were integrated into 9 long ago, but still apply cleanly to >>> jdk8u-dev. 
>>> >>> (1) 8048703: ReplacedNodes dumps it's content to tty >>> https://jbs.oracle.com/bugs/browse/JDK-8048703 >>> http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/18d4d4c8beea >>> >>> (2) 8049532: LogCompilation: C1: inlining tree is flat (no depth is >>> stored) >>> https://jbs.oracle.com/bugs/browse/JDK-8049532 >>> http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/4102555e5695 >>> >>> (3) 8049529: LogCompilation: annotate make_not_compilable with >>> compilation level >>> https://jbs.oracle.com/bugs/browse/JDK-8049529 >>> http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/cdf968fe49ce >>> >>> (4) 8049530: Provide descriptive failure reason for compilation tasks >>> removed for the queue >>> https://jbs.oracle.com/bugs/browse/JDK-8049530 >>> http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/307ecb8f6676 >>> >>> (5) 8049528: Method marked w/ @ForceInline isn't inlined with >>> "executed < MinInliningThreshold times" message >>> https://jbs.oracle.com/bugs/browse/JDK-8049528 >>> http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/4153b0978181 >>> >>> (6) 8034935: JSR 292 support for PopFrame has a fragile coupling with >>> DirectMethodHandle >>> https://jbs.oracle.com/bugs/browse/JDK-8034935 >>> http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/39e7fbc6d865 >>> >>> (7) 8023461: Thread holding lock at safepoint that vm can block on: >>> MethodCompileQueue_lock >>> https://jbs.oracle.com/bugs/browse/JDK-8023461 >>> http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/99dc0ff1d4c7 >>> >>> (8) 8025842: Convert warning("Thread holding lock at safepoint that >>> vm can block on") to fatal(...) 
>>> https://jbs.oracle.com/bugs/browse/JDK-8025842 >>> http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/c0774726073e >>> >>> Best regards, >>> Vladimir Ivanov >> From vladimir.kozlov at oracle.com Thu Sep 11 06:12:59 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Wed, 10 Sep 2014 23:12:59 -0700 Subject: [8u40] RFR(s) backport fixes 8046698, 8054224, 8055946 Message-ID: <54113D6B.9020303@oracle.com> Collection of small fixes which Roland pushed into jdk9 last month. All are applied cleanly to jdk8u. 8046698: assert(false) failed: only Initialize or AddP expected macro.cpp:943 https://bugs.openjdk.java.net/browse/JDK-8046698 http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/c82d0e6f53cd http://cr.openjdk.java.net/~roland/8046698/webrev.01/ 8054224: Recursive method that was compiled by C1 is unable to catch StackOverflowError https://bugs.openjdk.java.net/browse/JDK-8054224 http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/db7d2f27bcb6 http://cr.openjdk.java.net/~roland/8054224/webrev.00/ 8055946: assert(result == NULL || result->is_oop()) failed: must be oop https://bugs.openjdk.java.net/browse/JDK-8055946 http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/df76aa0bf77f http://cr.openjdk.java.net/~roland/8055946/webrev.00/ Thanks, Vladimir From mikael.gerdin at oracle.com Thu Sep 11 06:56:04 2014 From: mikael.gerdin at oracle.com (Mikael Gerdin) Date: Thu, 11 Sep 2014 08:56:04 +0200 Subject: RFR 8056039: Hotspot does not compile with clang 3.4 on Linux In-Reply-To: <37A7AB85-DAE2-4179-9EE9-772FABEBB3A0@oracle.com> References: <1593876.cxpFjnnrsK@mgerdin03> <37A7AB85-DAE2-4179-9EE9-772FABEBB3A0@oracle.com> Message-ID: <4021552.hqHyfxcC3W@mgerdin03> Thanks for the review Staffan! /Mikael On Wednesday 10 September 2014 20.43.29 Staffan Larsen wrote: > Removed code is good code. Reviewed. 
> > /Staffan > > On 26 aug 2014, at 12:17, Mikael Gerdin wrote: > > Hi all, > > > > In order to get clang's (sometimes) more helpful error messages when > > compiling I'd like to fix the few remaining places where clang fails to > > compile Hotspot. > > > > The culprit in this case was "local_vsnprintf" in os_linux.cpp, an unused > > function which wasn't annotated with the PRINTF_FORMAT macro. > > Since the function was unused I decided to remove it instead, then I found > > it in the other os_*nix.cpp files as well. > > > > Digging into the Teamware history it looks like it first appeared in the > > Solaris port because vsnprintf did not exist on some very old versions of > > Solaris, so it was dynamically looked up through dlsym. For a few years > > vsnprintf has been present in the Solaris header files, so I think it's > > safe to remove the workaround now some 17 years later. > > > > I also need a SCANF_FORMAT for an internal file, so I added that to > > globalDefinitions. > > > > Webrev: http://cr.openjdk.java.net/~mgerdin/8056039/webrev/ > > Buglink: https://bugs.openjdk.java.net/browse/JDK-8056039 > > > > Thanks > > /Mikael From aph at redhat.com Thu Sep 11 08:22:41 2014 From: aph at redhat.com (Andrew Haley) Date: Thu, 11 Sep 2014 09:22:41 +0100 Subject: Single byte Atomic::cmpxchg implementation In-Reply-To: <540D1066.6030603@oracle.com> References: <631578E5-52BF-45D1-A8F2-50862878EBB8@lnu.se> <3855070.ADDnZ0LX5H@mgerdin-lap> <0EC100DC-9C53-4A37-9BF6-761A2DA9C770@lnu.se> <540D1066.6030603@oracle.com> Message-ID: <54115BD1.5020502@redhat.com> On 08/09/14 03:11, David Holmes wrote: > Note there is currently no ARM code in the OpenJDK itself. Of course the > Aarch64 project will hopefully be changing that soon, but I would not > think they need the logic you describe below. From the AArch64 project's point of view, all we need is for the single byte Atomic::cmpxchg implementation to be moved into os_cpu/ and we can do the rest. 
The question in my mind is the extent to which we need to maintain compatibility with old C++ compilers. Modern ones have all the primitives we need as builtins, and will often generate better code than with inline assembler. Andrew. From igor.veresov at oracle.com Thu Sep 11 09:05:09 2014 From: igor.veresov at oracle.com (Igor Veresov) Date: Thu, 11 Sep 2014 02:05:09 -0700 Subject: [8u40] RFR(s) backport fixes 8046698, 8054224, 8055946 In-Reply-To: <54113D6B.9020303@oracle.com> References: <54113D6B.9020303@oracle.com> Message-ID: <6DC1A0ED-D16C-4ABD-B86C-91C6901E706C@oracle.com> 8054224 has a different changeset: http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/7f427b4f174d Good otherwise. igor On Sep 10, 2014, at 11:12 PM, Vladimir Kozlov wrote: > Collection of small fixes which Roland pushed into jdk9 last month. > All are applied cleanly to jdk8u. > > 8046698: assert(false) failed: only Initialize or AddP expected macro.cpp:943 > https://bugs.openjdk.java.net/browse/JDK-8046698 > http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/c82d0e6f53cd > http://cr.openjdk.java.net/~roland/8046698/webrev.01/ > > 8054224: Recursive method that was compiled by C1 is unable to catch StackOverflowError > https://bugs.openjdk.java.net/browse/JDK-8054224 > http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/db7d2f27bcb6 > http://cr.openjdk.java.net/~roland/8054224/webrev.00/ > > 8055946: assert(result == NULL || result->is_oop()) failed: must be oop > https://bugs.openjdk.java.net/browse/JDK-8055946 > http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/df76aa0bf77f > http://cr.openjdk.java.net/~roland/8055946/webrev.00/ > > Thanks, > Vladimir From igor.veresov at oracle.com Thu Sep 11 09:29:51 2014 From: igor.veresov at oracle.com (Igor Veresov) Date: Thu, 11 Sep 2014 02:29:51 -0700 Subject: [8u40] RFR(s): backport of 8058184, 8058092 Message-ID: <8075621B-C38C-4879-BCCE-58FF368C9A17@oracle.com> 8058184: Move _highest_comp_level and _highest_osr_comp_level from MethodData 
to MethodCounters The patch did not apply automatically because 8u doesn?t have code aging that added new field to MethodCounters. Since the patch had to be adjusted manually I also moved _highest_*_level fields in MethodCounters to achieve optimal packing (not a problem for 9, packing there is optimal already). Webrev for 8u: http://cr.openjdk.java.net/~iveresov/8058184-8u/webrev.00/ JBS: https://bugs.openjdk.java.net/browse/JDK-8058184 Webrev for 9: http://cr.openjdk.java.net/~iveresov/8058184/webrev.00 JDK9 changeset: http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/a39c9249f4be 8058092: Test vm/mlvm/meth/stress/compiler/deoptimize. Assert in src/share/vm/classfile/systemDictionary.cpp: MH intrinsic invariant JBS: https://bugs.openjdk.java.net/browse/JDK-8058092 Webrev: http://cr.openjdk.java.net/~iveresov/8058092/webrev.00/ JDK9 changeset: http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/daa5ae1d95c4 Thanks! igor From david.holmes at oracle.com Thu Sep 11 10:34:12 2014 From: david.holmes at oracle.com (David Holmes) Date: Thu, 11 Sep 2014 20:34:12 +1000 Subject: Single byte Atomic::cmpxchg implementation In-Reply-To: <54115BD1.5020502@redhat.com> References: <631578E5-52BF-45D1-A8F2-50862878EBB8@lnu.se> <3855070.ADDnZ0LX5H@mgerdin-lap> <0EC100DC-9C53-4A37-9BF6-761A2DA9C770@lnu.se> <540D1066.6030603@oracle.com> <54115BD1.5020502@redhat.com> Message-ID: <54117AA4.8030300@oracle.com> On 11/09/2014 6:22 PM, Andrew Haley wrote: > On 08/09/14 03:11, David Holmes wrote: >> Note there is currently no ARM code in the OpenJDK itself. Of course the >> Aarch64 project will hopefully be changing that soon, but I would not >> think they need the logic you describe below. > > From the AArch64 project's point of view, all we need is for the > single byte Atomic::cmpxchg implementation to be move into os_cpu/ > and we can do the rest. > > The question in my mind is the extent to which we need to maintain > compatibility with old C++ compilers. 
Modern ones have all the > primitives we need as builtins, and will often generate better code > than with inline assembler. For things like Atomic::* the definitions are in platform specific files so you should be able to use whatever mechanism you want and require whatever minimal compiler version is needed. David > Andrew. > From aph at redhat.com Thu Sep 11 10:59:31 2014 From: aph at redhat.com (Andrew Haley) Date: Thu, 11 Sep 2014 11:59:31 +0100 Subject: Single byte Atomic::cmpxchg implementation In-Reply-To: <54117AA4.8030300@oracle.com> References: <631578E5-52BF-45D1-A8F2-50862878EBB8@lnu.se> <3855070.ADDnZ0LX5H@mgerdin-lap> <0EC100DC-9C53-4A37-9BF6-761A2DA9C770@lnu.se> <540D1066.6030603@oracle.com> <54115BD1.5020502@redhat.com> <54117AA4.8030300@oracle.com> Message-ID: <54118093.4030507@redhat.com> On 09/11/2014 11:34 AM, David Holmes wrote: > For things like Atomic::* the definitions are in platform specific > files so you should be able to use whatever mechanism you want and > require whatever minimal compiler version is needed. But they're not: Atomic::cmpxchg(jbyte seems to be defined in common code. If it was defined in a platform-specific file there would be no problem for us to discuss in this thread. Andrew. From erik.osterlund at lnu.se Thu Sep 11 11:18:22 2014 From: erik.osterlund at lnu.se (=?iso-8859-1?Q?Erik_=D6sterlund?=) Date: Thu, 11 Sep 2014 11:18:22 +0000 Subject: Single byte Atomic::cmpxchg implementation In-Reply-To: <54115BD1.5020502@redhat.com> References: <631578E5-52BF-45D1-A8F2-50862878EBB8@lnu.se> <3855070.ADDnZ0LX5H@mgerdin-lap> <0EC100DC-9C53-4A37-9BF6-761A2DA9C770@lnu.se> <540D1066.6030603@oracle.com> <54115BD1.5020502@redhat.com> Message-ID: <3AA9BF97-339E-44C2-B37A-1EAA8BC6D889@lnu.se> On 11 Sep 2014, at 10:22, Andrew Haley wrote: > From the AArch64 project's point of view, all we need is for the > single byte Atomic::cmpxchg implementation to be move into os_cpu/ > and we can do the rest. 
I agree that it definitely needs to allow platform specific implementations. But I would also advocate not using __kernel_cmpxchg in MP AArch64 even for normal jint CAS for the simple reason that it only optimizes non-MP performance, and has an interface returning boolean instead of the old value needing a translation to the interface we expect, resulting in a less efficient implementation with two nested loops and a call (for normal jint CAS) instead of a single loop retrying only on spurious SC failure due to scheduling. But I'll leave that to you guys. :) /Erik From aph at redhat.com Thu Sep 11 11:24:49 2014 From: aph at redhat.com (Andrew Haley) Date: Thu, 11 Sep 2014 12:24:49 +0100 Subject: Single byte Atomic::cmpxchg implementation In-Reply-To: <3AA9BF97-339E-44C2-B37A-1EAA8BC6D889@lnu.se> References: <631578E5-52BF-45D1-A8F2-50862878EBB8@lnu.se> <3855070.ADDnZ0LX5H@mgerdin-lap> <0EC100DC-9C53-4A37-9BF6-761A2DA9C770@lnu.se> <540D1066.6030603@oracle.com> <54115BD1.5020502@redhat.com> <3AA9BF97-339E-44C2-B37A-1EAA8BC6D889@lnu.se> Message-ID: <54118681.800@redhat.com> On 09/11/2014 12:18 PM, Erik ?sterlund wrote: > > On 11 Sep 2014, at 10:22, Andrew Haley wrote: >> From the AArch64 project's point of view, all we need is for the >> single byte Atomic::cmpxchg implementation to be move into os_cpu/ >> and we can do the rest. > > I agree that it definitely needs to allow platform specific > implementations. But I would also advocate not using > __kernel_cmpxchg in MP AArch64 even for normal jint CAS for the > simple reason that it only optimizes non-MP performance, and has an > interface returning boolean instead of the old value needing a > translation to the interface we expect, resulting in a less > efficient implementation with two nested loops and a call (for > normal jint CAS) instead of a single loop retrying only on spurious > SC failure due to scheduling. Sure; just to be clear, I wasn't thinking about __kernel_cmpxchg, bug a GCC builtin. 
AFAIK __kernel_cmpxchg isn't used for anything on AArch64, and we certainly don't need it. Andrew. From erik.osterlund at lnu.se Thu Sep 11 11:30:25 2014 From: erik.osterlund at lnu.se (=?iso-8859-1?Q?Erik_=D6sterlund?=) Date: Thu, 11 Sep 2014 11:30:25 +0000 Subject: Single byte Atomic::cmpxchg implementation In-Reply-To: <5410FA17.7060506@oracle.com> References: <631578E5-52BF-45D1-A8F2-50862878EBB8@lnu.se> <3855070.ADDnZ0LX5H@mgerdin-lap> <0EC100DC-9C53-4A37-9BF6-761A2DA9C770@lnu.se> <540D1066.6030603@oracle.com> <8EFE7E09-5919-485D-9D49-94BD819382B6@lnu.se> <5410FA17.7060506@oracle.com> Message-ID: <2AD8756F-A776-497D-9376-1A0AFECA1C3E@lnu.se> On 11 Sep 2014, at 03:25, David Holmes wrote: > The Atomic operations must provide full bi-directional fence semantics, so a full sync on entry is required in my opinion. I agree that the combination of bne+isync would suffice on the exit path. I see no reason for the atomic operations to support more than full acquire and release (hence sequential consistency) memory behaviour as well as atomic updates. For this, I see no reason why a full sync rather than lwsync is required (for the write barrier). The XNU kernel implementation also uses lwsync for release semantics and isync for the acquire. Why would this be different for us? From the XNU kernel (note the choice of fences I argue for): compare_and_swap32_on64b: // bool OSAtomicCompareAndSwapBarrier32( int32_t old, int32_t new, int32_t *value); lwsync // write barrier, NOP'd on a UP 1: lwarx r7,0,r5 cmplw r7,r3 bne-- 2f stwcx. r4,0,r5 bne-- 1b isync // read barrier, NOP'd on a UP li r3,1 blr 2: li r8,-8 // on 970, must release reservation li r3,0 // return failure stwcx. r4,r8,r1 // store into red zone to release blr > But this is a complex area, involving hardware that doesn't always follow the rules, so conservatism is understandable. 
As far as wrong hardware goes, I don't know what to do about that, but can we confirm that there is hardware not doing the fences according to specification in particular? It becomes very difficult to respect incorrect hardware implementations in my opinion. > But this needs to be taken up with the PPC64 folk who did this port. I agree, it would be very helpful to hear the perspective of the ones who wrote our implementation. /Erik From erik.osterlund at lnu.se Thu Sep 11 11:46:09 2014 From: erik.osterlund at lnu.se (=?iso-8859-1?Q?Erik_=D6sterlund?=) Date: Thu, 11 Sep 2014 11:46:09 +0000 Subject: Single byte Atomic::cmpxchg implementation In-Reply-To: <54118681.800@redhat.com> References: <631578E5-52BF-45D1-A8F2-50862878EBB8@lnu.se> <3855070.ADDnZ0LX5H@mgerdin-lap> <0EC100DC-9C53-4A37-9BF6-761A2DA9C770@lnu.se> <540D1066.6030603@oracle.com> <54115BD1.5020502@redhat.com> <3AA9BF97-339E-44C2-B37A-1EAA8BC6D889@lnu.se> <54118681.800@redhat.com> Message-ID: <9A5D9DEE-5AFB-4699-9CE7-EE8F5ED1B5A2@lnu.se> On 11 Sep 2014, at 13:24, Andrew Haley wrote: > Sure; just to be clear, I wasn't thinking about __kernel_cmpxchg, but a > GCC builtin.
/Erik From adinn at redhat.com Thu Sep 11 11:58:29 2014 From: adinn at redhat.com (Andrew Dinn) Date: Thu, 11 Sep 2014 12:58:29 +0100 Subject: Single byte Atomic::cmpxchg implementation In-Reply-To: <9A5D9DEE-5AFB-4699-9CE7-EE8F5ED1B5A2@lnu.se> References: <631578E5-52BF-45D1-A8F2-50862878EBB8@lnu.se> <3855070.ADDnZ0LX5H@mgerdin-lap> <0EC100DC-9C53-4A37-9BF6-761A2DA9C770@lnu.se> <540D1066.6030603@oracle.com> <54115BD1.5020502@redhat.com> <3AA9BF97-339E-44C2-B37A-1EAA8BC6D889@lnu.se> <54118681.800@redhat.com> <9A5D9DEE-5AFB-4699-9CE7-EE8F5ED1B5A2@lnu.se> Message-ID: <54118E65.2090802@redhat.com> On 11/09/14 12:46, Erik Österlund wrote: > On 11 Sep 2014, at 13:24, Andrew Haley wrote: >> Sure; just to be clear, I wasn't thinking about __kernel_cmpxchg, >> but a GCC builtin. AFAIK __kernel_cmpxchg isn't used for anything >> on AArch64, and we certainly don't need it. > > Sorry, I seem to have looked at defined(ARM) + zero where > __kernel_cmpxchg is the implementation of choice rather than the > AArch64-port specific code. Perhaps somebody else is responsible for > that... You are now talking to two of those somebody else :-) regards, Andrew Dinn ----------- From david.holmes at oracle.com Thu Sep 11 13:09:14 2014 From: david.holmes at oracle.com (David Holmes) Date: Thu, 11 Sep 2014 23:09:14 +1000 Subject: Single byte Atomic::cmpxchg implementation In-Reply-To: <54118093.4030507@redhat.com> References: <631578E5-52BF-45D1-A8F2-50862878EBB8@lnu.se> <3855070.ADDnZ0LX5H@mgerdin-lap> <0EC100DC-9C53-4A37-9BF6-761A2DA9C770@lnu.se> <540D1066.6030603@oracle.com> <54115BD1.5020502@redhat.com> <54117AA4.8030300@oracle.com> <54118093.4030507@redhat.com> Message-ID: <54119EFA.8020308@oracle.com> On 11/09/2014 8:59 PM, Andrew Haley wrote: > On 09/11/2014 11:34 AM, David Holmes wrote: >> For things like Atomic::* the definitions are in platform specific >> files so you should be able to use whatever mechanism you want and >> require whatever minimal compiler version is needed.
> > But they're not: Atomic::cmpxchg(jbyte, ...) seems to be defined in common > code. If it was defined in a platform-specific file there would be no > problem for us to discuss in this thread. Sorry - yes there is a seemingly arbitrary split between shared and platform-specific variants. It's fine to have generic shared approaches but there really needs to be a way to allow platform specific "overrides". David > Andrew. > From david.holmes at oracle.com Thu Sep 11 13:14:58 2014 From: david.holmes at oracle.com (David Holmes) Date: Thu, 11 Sep 2014 23:14:58 +1000 Subject: Single byte Atomic::cmpxchg implementation In-Reply-To: <2AD8756F-A776-497D-9376-1A0AFECA1C3E@lnu.se> References: <631578E5-52BF-45D1-A8F2-50862878EBB8@lnu.se> <3855070.ADDnZ0LX5H@mgerdin-lap> <0EC100DC-9C53-4A37-9BF6-761A2DA9C770@lnu.se> <540D1066.6030603@oracle.com> <8EFE7E09-5919-485D-9D49-94BD819382B6@lnu.se> <5410FA17.7060506@oracle.com> <2AD8756F-A776-497D-9376-1A0AFECA1C3E@lnu.se> Message-ID: <5411A052.7060309@oracle.com> On 11/09/2014 9:30 PM, Erik Österlund wrote: > On 11 Sep 2014, at 03:25, David Holmes wrote: >> The Atomic operations must provide full bi-directional fence semantics, so a full sync on entry is required in my opinion. I agree that the combination of bne+isync would suffice on the exit path. > > I see no reason for the atomic operations to support more than full acquire and release (hence sequential consistency) memory behaviour as well as atomic updates. If the atomic operation were itself indivisible then the suggested barriers pre- and post would be correct. But when the atomic operation is itself a sequence of instructions you also have to guard against reordering relative to the variable being atomically updated. So the sync is needed to provide a full two-way barrier between the code preceding the atomic op and the code within the atomic op. There was a very long discussion on this aspect of the atomic operations not that long ago.
David ------ > For this, I see no reason why a full sync rather than lwsync is required (for the write barrier). The XNU kernel implementation also uses lwsync for release semantics and isync for the acquire. > Why would this be different for us? From the XNU kernel (note the choice of fences I argue for): > > compare_and_swap32_on64b: // bool OSAtomicCompareAndSwapBarrier32( int32_t old, int32_t new, int32_t *value); > lwsync // write barrier, NOP'd on a UP > 1: > lwarx r7,0,r5 > cmplw r7,r3 > bne-- 2f > stwcx. r4,0,r5 > bne-- 1b > isync // read barrier, NOP'd on a UP > li r3,1 > blr > 2: > li r8,-8 // on 970, must release reservation > li r3,0 // return failure > stwcx. r4,r8,r1 // store into red zone to release > blr > >> But this is a complex area, involving hardware that doesn't always follow the rules, so conservatism is understandable. > > As far as wrong hardware goes, I don't know what to do about that but can we confirm that there is hardware not doing the fences according to specification in particular? > It becomes very difficult to respect incorrect hardware implementations in my opinion. > >> But this needs to be taken up with the PPC64 folk who did this port. > > I agree, it would be very helpful to hear the perspective of the ones who wrote our implementation. 
> > /Erik > From david.holmes at oracle.com Thu Sep 11 13:16:03 2014 From: david.holmes at oracle.com (David Holmes) Date: Thu, 11 Sep 2014 23:16:03 +1000 Subject: Single byte Atomic::cmpxchg implementation In-Reply-To: <9A5D9DEE-5AFB-4699-9CE7-EE8F5ED1B5A2@lnu.se> References: <631578E5-52BF-45D1-A8F2-50862878EBB8@lnu.se> <3855070.ADDnZ0LX5H@mgerdin-lap> <0EC100DC-9C53-4A37-9BF6-761A2DA9C770@lnu.se> <540D1066.6030603@oracle.com> <54115BD1.5020502@redhat.com> <3AA9BF97-339E-44C2-B37A-1EAA8BC6D889@lnu.se> <54118681.800@redhat.com> <9A5D9DEE-5AFB-4699-9CE7-EE8F5ED1B5A2@lnu.se> Message-ID: <5411A093.4020704@oracle.com> On 11/09/2014 9:46 PM, Erik Österlund wrote: > On 11 Sep 2014, at 13:24, Andrew Haley wrote: >> Sure; just to be clear, I wasn't thinking about __kernel_cmpxchg, but a >> GCC builtin. AFAIK __kernel_cmpxchg isn't used for anything on >> AArch64, and we certainly don't need it. > > Sorry, I seem to have looked at defined(ARM) + zero where __kernel_cmpxchg is the implementation of choice rather than the AArch64-port specific code. Perhaps somebody else is responsible for that... Yes, the zero port is separate again. The kernel helper is needed on ARMv5.
David > /Erik > From aph at redhat.com Thu Sep 11 13:49:13 2014 From: aph at redhat.com (Andrew Haley) Date: Thu, 11 Sep 2014 14:49:13 +0100 Subject: Single byte Atomic::cmpxchg implementation In-Reply-To: <54119EFA.8020308@oracle.com> References: <631578E5-52BF-45D1-A8F2-50862878EBB8@lnu.se> <3855070.ADDnZ0LX5H@mgerdin-lap> <0EC100DC-9C53-4A37-9BF6-761A2DA9C770@lnu.se> <540D1066.6030603@oracle.com> <54115BD1.5020502@redhat.com> <54117AA4.8030300@oracle.com> <54118093.4030507@redhat.com> <54119EFA.8020308@oracle.com> Message-ID: <5411A859.70608@redhat.com> On 09/11/2014 02:09 PM, David Holmes wrote: > On 11/09/2014 8:59 PM, Andrew Haley wrote: >> On 09/11/2014 11:34 AM, David Holmes wrote: >>> For things like Atomic::* the definitions are in platform specific >>> files so you should be able to use whatever mechanism you want and >>> require whatever minimal compiler version is needed. >> >> But they're not: Atomic::cmpxchg(jbyte seems to be defined in common >> code. If it was defined in a platform-specific file there would be no >> problem for us to discuss in this thread. > > Sorry - yes there is a seemingly arbitrary split between shared and > platform-specific variants. It's fine to have generic shared approaches > but there really needs to be a way to allow platform specific "overrides". Mmm. I understand that object-oriented programming supports that kind of thing. :-) Andrew. 
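For context on the GCC-builtin route Andrew raises earlier in the thread: a single-byte CAS can be wrapped so that it matches the Atomic::cmpxchg contract of returning the old value rather than a success flag (the boolean-returning interface is the objection raised against __kernel_cmpxchg). The sketch below is purely illustrative; the function name is hypothetical and this is not the actual port code.

```c
#include <stdint.h>

typedef int8_t jbyte;

/* Illustrative sketch only: wrap the GCC __atomic builtin so the
 * result is the value found at *dest, matching the Atomic::cmpxchg
 * contract, instead of the boolean the builtin itself returns. */
static jbyte byte_cmpxchg(jbyte exchange_value, volatile jbyte* dest,
                          jbyte compare_value) {
  jbyte expected = compare_value;
  /* Strong (non-weak) CAS with sequentially consistent ordering on
   * both the success and failure paths, mirroring the full
   * bi-directional fence semantics expected of Atomic:: operations. */
  __atomic_compare_exchange_n(dest, &expected, exchange_value,
                              /* weak= */ 0,
                              __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
  /* On failure the builtin stores the current value into expected;
   * on success expected still holds compare_value. Either way,
   * expected is the old value of *dest. */
  return expected;
}
```

On AArch64 a compiler would typically expand this to an LL/SC retry loop (or a single byte CAS instruction where available), which is the kind of code the port would otherwise have to hand-write in os_cpu/.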
From aph at redhat.com Thu Sep 11 15:02:06 2014 From: aph at redhat.com (Andrew Haley) Date: Thu, 11 Sep 2014 16:02:06 +0100 Subject: Release store in C2 putfield In-Reply-To: <5409B5E6.1050204@cs.oswego.edu> References: <540722C0.1060404@redhat.com> <5407468A.2020004@oracle.com> <54074942.9050506@redhat.com> <5407541E.1070707@oracle.com> <540757CC.2050904@redhat.com> <54075994.1050609@oracle.com> <54075D33.7060400@redhat.com> <54075DFB.4050807@oracle.com> <540764E0.9030601@redhat.com> <5408313E.1020500@oracle.com> <54083F60.80206@redhat.com> <540858C1.6010300@oracle.com> <54086970.20103@redhat.com> <5409B5E6.1050204@cs.oswego.edu> Message-ID: <5411B96E.3030408@redhat.com> On 09/05/2014 02:08 PM, Doug Lea wrote: > 2. Hans Boehm has argued/demonstrated over the years (see for > example, http://hboehm.info/c++mm/no_write_fences.html), that > StoreStore fences, as opposed to release==(StoreStore|StoreLoad) Shouldn't that be (LoadStore|StoreStore) ? > fences, are too delicate and anomaly-filled to expose as a > programming mode. But there are cases where they may come into > play, for example as the first fence of a volatile-store > (that also requires a trailing StoreLoad), that might be > profitable to separate if any other internal mechanics could > then be applied to further optimize. And even if not generally > useful, they seem to apply to the GC post_barrier case. They're used when an object is created, just after it is zeroed but before it is initialized. Andrew. 
From vladimir.kozlov at oracle.com Thu Sep 11 15:34:51 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Thu, 11 Sep 2014 08:34:51 -0700 Subject: [8u40] RFR(s) backport fixes 8046698, 8054224, 8055946 In-Reply-To: <6DC1A0ED-D16C-4ABD-B86C-91C6901E706C@oracle.com> References: <54113D6B.9020303@oracle.com> <6DC1A0ED-D16C-4ABD-B86C-91C6901E706C@oracle.com> Message-ID: <5411C11B.9060906@oracle.com> On 9/11/14 2:05 AM, Igor Veresov wrote: > 8054224 has a different changeset: > http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/7f427b4f174d Right. db7d2f27bcb6 was parent for it. Sorry about that. > > Good otherwise. Thanks, Vladimir > > igor > > On Sep 10, 2014, at 11:12 PM, Vladimir Kozlov wrote: > >> Collection of small fixes which Roland pushed into jdk9 last month. >> All are applied cleanly to jdk8u. >> >> 8046698: assert(false) failed: only Initialize or AddP expected macro.cpp:943 >> https://bugs.openjdk.java.net/browse/JDK-8046698 >> http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/c82d0e6f53cd >> http://cr.openjdk.java.net/~roland/8046698/webrev.01/ >> >> 8054224: Recursive method that was compiled by C1 is unable to catch StackOverflowError >> https://bugs.openjdk.java.net/browse/JDK-8054224 >> http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/db7d2f27bcb6 >> http://cr.openjdk.java.net/~roland/8054224/webrev.00/ >> >> 8055946: assert(result == NULL || result->is_oop()) failed: must be oop >> https://bugs.openjdk.java.net/browse/JDK-8055946 >> http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/df76aa0bf77f >> http://cr.openjdk.java.net/~roland/8055946/webrev.00/ >> >> Thanks, >> Vladimir > From vladimir.kozlov at oracle.com Thu Sep 11 15:42:48 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Thu, 11 Sep 2014 08:42:48 -0700 Subject: [8u40] RFR(s): backport of 8058184, 8058092 In-Reply-To: <8075621B-C38C-4879-BCCE-58FF368C9A17@oracle.com> References: <8075621B-C38C-4879-BCCE-58FF368C9A17@oracle.com> Message-ID: 
<5411C2F8.10707@oracle.com> On 9/11/14 2:29 AM, Igor Veresov wrote: > 8058184: Move _highest_comp_level and _highest_osr_comp_level from MethodData to MethodCounters > The patch did not apply automatically because 8u doesn't have code aging that added a new field to MethodCounters. Since the patch had to > be adjusted manually I also moved _highest_*_level fields in MethodCounters to achieve optimal packing (not a problem for 9, packing there is optimal already). > > Webrev for 8u: http://cr.openjdk.java.net/~iveresov/8058184-8u/webrev.00/. Changes look good. > JBS: https://bugs.openjdk.java.net/browse/JDK-8058184 > Webrev for 9: http://cr.openjdk.java.net/~iveresov/8058184/webrev.00 > JDK9 changeset: http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/a39c9249f4be > > 8058092: Test vm/mlvm/meth/stress/compiler/deoptimize. Assert in src/share/vm/classfile/systemDictionary.cpp: MH intrinsic invariant > JBS: https://bugs.openjdk.java.net/browse/JDK-8058092 > Webrev: http://cr.openjdk.java.net/~iveresov/8058092/webrev.00/ > JDK9 changeset: http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/daa5ae1d95c4 Good. Thanks, Vladimir > > Thanks!
> igor > From erik.osterlund at lnu.se Thu Sep 11 18:26:28 2014 From: erik.osterlund at lnu.se (=?iso-8859-1?Q?Erik_=D6sterlund?=) Date: Thu, 11 Sep 2014 18:26:28 +0000 Subject: Single byte Atomic::cmpxchg implementation In-Reply-To: <5411A093.4020704@oracle.com> References: <631578E5-52BF-45D1-A8F2-50862878EBB8@lnu.se> <3855070.ADDnZ0LX5H@mgerdin-lap> <0EC100DC-9C53-4A37-9BF6-761A2DA9C770@lnu.se> <540D1066.6030603@oracle.com> <54115BD1.5020502@redhat.com> <3AA9BF97-339E-44C2-B37A-1EAA8BC6D889@lnu.se> <54118681.800@redhat.com> <9A5D9DEE-5AFB-4699-9CE7-EE8F5ED1B5A2@lnu.se>, <5411A093.4020704@oracle.com> Message-ID: <2ECBD6AF-4768-47DC-835C-78EA6073F280@lnu.se> > On 11 sep 2014, at 14:16, "David Holmes" wrote: > >> On 11/09/2014 9:46 PM, Erik Österlund wrote: >> Sorry, I seem to have looked at defined(ARM) + zero where __kernel_cmpxchg is the implementation of choice rather than the AArch64-port specific code. Perhaps somebody else is responsible for that... > > Yes, the zero port is separate again. The kernel helper is needed on ARMv5. Yes. This is why I previously said my ARM fix would only make sense on ARM >= 6 with MP. :) Whether it's worth the hassle of maintaining it is a different question. I merely suggested improvements on other platforms than x86 which I was asked to do! Will simply let the Andrews do what they like on that one!
:) /Erik From erik.osterlund at lnu.se Thu Sep 11 18:43:14 2014 From: erik.osterlund at lnu.se (=?iso-8859-1?Q?Erik_=D6sterlund?=) Date: Thu, 11 Sep 2014 18:43:14 +0000 Subject: Single byte Atomic::cmpxchg implementation In-Reply-To: <5411A052.7060309@oracle.com> References: <631578E5-52BF-45D1-A8F2-50862878EBB8@lnu.se> <3855070.ADDnZ0LX5H@mgerdin-lap> <0EC100DC-9C53-4A37-9BF6-761A2DA9C770@lnu.se> <540D1066.6030603@oracle.com> <8EFE7E09-5919-485D-9D49-94BD819382B6@lnu.se> <5410FA17.7060506@oracle.com> <2AD8756F-A776-497D-9376-1A0AFECA1C3E@lnu.se>, <5411A052.7060309@oracle.com> Message-ID: <8B5BDD30-D67C-4654-9289-85FB074C4026@lnu.se> > On 11 sep 2014, at 14:15, "David Holmes" wrote: > >> On 11/09/2014 9:30 PM, Erik Österlund wrote: >>> On 11 Sep 2014, at 03:25, David Holmes wrote: >>> The Atomic operations must provide full bi-directional fence semantics, so a full sync on entry is required in my opinion. I agree that the combination of bne+isync would suffice on the exit path. >> >> I see no reason for the atomic operations to support more than full acquire and release (hence sequential consistency) memory behaviour as well as atomic updates. > > If the atomic operation were itself indivisible then the suggested barriers pre- and post would be correct. But when the atomic operation is itself a sequence of instructions you also have to guard against reordering relative to the variable being atomically updated. So the sync is needed to provide a full two-way barrier between the code preceding the atomic op and the code within the atomic op. I see. AFAIK lwsync orders everything except StoreLoad. So I deduce the only potential hazard of replacing the write barrier with lwsync would be that the load link could be speculatively loaded like normal loads and lead to a false-negative CAS? I didn't think lwarx could be speculatively loaded, but I see the point now if that is the case.
(Note that false positives are still impossible because a reordered load link would fail to store conditional when attempting to commit, and the store conditional will not float above the lwsync) If this is indeed the case, they should still be isync instead of sync, right? Also, what about allowing programmers to use weak CAS like in more advanced atomics APIs? For most lock-free algorithms weak CAS is good enough since there is a retry loop anyway. And it would get rid of the awkward retry loop required for the case of context switching between LL and SC. But then it's a larger change suddenly which maybe isn't worth the trouble? :) /Erik From igor.veresov at oracle.com Thu Sep 11 18:46:50 2014 From: igor.veresov at oracle.com (Igor Veresov) Date: Thu, 11 Sep 2014 11:46:50 -0700 Subject: [8u40] RFR(s): backport of 8058184, 8058092 In-Reply-To: <5411C2F8.10707@oracle.com> References: <8075621B-C38C-4879-BCCE-58FF368C9A17@oracle.com> <5411C2F8.10707@oracle.com> Message-ID: <1A666D56-0079-4A7D-A499-CD42BCD917D5@oracle.com> Thanks, Vladimir! igor On Sep 11, 2014, at 8:42 AM, Vladimir Kozlov wrote: > On 9/11/14 2:29 AM, Igor Veresov wrote: >> 8058184: Move _highest_comp_level and _highest_osr_comp_level from MethodData to MethodCounters >> The patch did not apply automatically because 8u doesn't have code aging that added a new field to MethodCounters. Since the patch had to >> be adjusted manually I also moved _highest_*_level fields in MethodCounters to achieve optimal packing (not a problem for 9, packing there is optimal already). >> >> Webrev for 8u: http://cr.openjdk.java.net/~iveresov/8058184-8u/webrev.00/. > > Changes look good. > >> JBS: https://bugs.openjdk.java.net/browse/JDK-8058184 >> Webrev for 9: http://cr.openjdk.java.net/~iveresov/8058184/webrev.00 >> JDK9 changeset: http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/a39c9249f4be >> >> 8058092: Test vm/mlvm/meth/stress/compiler/deoptimize.
Assert in src/share/vm/classfile/systemDictionary.cpp: MH intrinsic invariant >> JBS: https://bugs.openjdk.java.net/browse/JDK-8058092 >> Webrev: http://cr.openjdk.java.net/~iveresov/8058092/webrev.00/ >> JDK9 changeset: http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/daa5ae1d95c4 > > Good. > > Thanks, > Vladimir > >> >> Thanks! >> igor >> From erik.osterlund at lnu.se Thu Sep 11 21:48:43 2014 From: erik.osterlund at lnu.se (=?utf-8?B?RXJpayDDlnN0ZXJsdW5k?=) Date: Thu, 11 Sep 2014 21:48:43 +0000 Subject: RFR: 8058255: Native jbyte Atomic::cmpxchg for supported x86 platforms Message-ID: Hi, These changes aim at replacing the awkward old jbyte Atomic::cmpxchg implementation for all the supported x86 platforms. It previously emulated the behaviour of cmpxchgb using a loop of cmpxchgl and some dynamic alignment of the destination address. This code is called by remembered sets to manipulate card entries. The implementation has now been replaced with a bunch of assembly, appropriate for all platforms. Yes, for windows too. Implementations include: bsd x86/x86_64: inline asm linux x86/x86_64: inline asm solaris x86/x86_64: .il files windows x86_64 without GNU source: stubGenerator and manual code emission and hence including new Assembler::cmpxchgb support Windows x86 + x86_64 with GNU source: inline asm Bug: https://bugs.openjdk.java.net/browse/JDK-8058255 Webrev: http://cr.openjdk.java.net/~jwilhelm/8058255/webrev/ Improvements can be made for other architectures as well, but this should be a good start. /Erik From david.holmes at oracle.com Fri Sep 12 01:40:33 2014 From: david.holmes at oracle.com (David Holmes) Date: Fri, 12 Sep 2014 11:40:33 +1000 Subject: RFR: 8058255: Native jbyte Atomic::cmpxchg for supported x86 platforms In-Reply-To: References: Message-ID: <54124F11.8060100@oracle.com> Hi Erik, Can we pause and give some more thought to a clean mechanism for allowing a shared implementation if desired with the ability to override if desired.
I really do not like to see CPU specific ifdefs being added to shared code. (And I would also not like to see all platforms being forced to reimplement this natively). I'm not saying we will find a simple solution, but it would be nice if we could get a few folk to think about it before proceeding with the ifdefs :) Thanks, David On 12/09/2014 7:48 AM, Erik Österlund wrote: > Hi, > > These changes aim at replacing the awkward old jbyte Atomic::cmpxchg implementation for all the supported x86 platforms. It previously emulated the behaviour of cmpxchgb using a loop of cmpxchgl and some dynamic alignment of the destination address. > > This code is called by remembered sets to manipulate card entries. > > The implementation has now been replaced with a bunch of assembly, appropriate for all platforms. Yes, for windows too. > > Implementations include: > bsd x86/x86_64: inline asm > linux x86/x86_64: inline asm > solaris x86/x86_64: .il files > windows x86_64 without GNU source: stubGenerator and manual code emission and hence including new Assembler::cmpxchgb support > Windows x86 + x86_64 with GNU source: inline asm > > Bug: https://bugs.openjdk.java.net/browse/JDK-8058255 > > Webrev: http://cr.openjdk.java.net/~jwilhelm/8058255/webrev/ > > Improvements can be made for other architectures as well, but this should be a good start.
> > /Erik > From david.holmes at oracle.com Fri Sep 12 02:07:18 2014 From: david.holmes at oracle.com (David Holmes) Date: Fri, 12 Sep 2014 12:07:18 +1000 Subject: Single byte Atomic::cmpxchg implementation In-Reply-To: <8B5BDD30-D67C-4654-9289-85FB074C4026@lnu.se> References: <631578E5-52BF-45D1-A8F2-50862878EBB8@lnu.se> <3855070.ADDnZ0LX5H@mgerdin-lap> <0EC100DC-9C53-4A37-9BF6-761A2DA9C770@lnu.se> <540D1066.6030603@oracle.com> <8EFE7E09-5919-485D-9D49-94BD819382B6@lnu.se> <5410FA17.7060506@oracle.com> <2AD8756F-A776-497D-9376-1A0AFECA1C3E@lnu.se>, <5411A052.7060309@oracle.com> <8B5BDD30-D67C-4654-9289-85FB074C4026@lnu.se> Message-ID: <54125556.10506@oracle.com> On 12/09/2014 4:43 AM, Erik Österlund wrote: > >> On 11 sep 2014, at 14:15, "David Holmes" wrote: >> >>> On 11/09/2014 9:30 PM, Erik Österlund wrote: >>>> On 11 Sep 2014, at 03:25, David Holmes wrote: >>>> The Atomic operations must provide full bi-directional fence semantics, so a full sync on entry is required in my opinion. I agree that the combination of bne+isync would suffice on the exit path. >>> >>> I see no reason for the atomic operations to support more than full acquire and release (hence sequential consistency) memory behaviour as well as atomic updates. >> >> If the atomic operation were itself indivisible then the suggested barriers pre- and post would be correct. But when the atomic operation is itself a sequence of instructions you also have to guard against reordering relative to the variable being atomically updated. So the sync is needed to provide a full two-way barrier between the code preceding the atomic op and the code within the atomic op. > > I see. AFAIK lwsync orders everything except StoreLoad. So I deduce the only potential hazard of replacing the write barrier with lwsync would be that the load link could be speculatively loaded like normal loads and lead to a false-negative CAS? I didn't think lwarx could be speculatively loaded, but I see the point now if that is the case.
(Note that false positives are still impossible because a reordered load link would fail to store conditional when attempting to commit, and the store conditional will not float above the lwsync) > > If this is indeed the case, they should still be isync instead of sync, right? isync? You mean lwsync? I agree that missing storeload prior to the load-linked should not be a problem. But I'm unclear if all the Power architectures define lwsync exactly the same way (I have a Freescale reference which does, but I don't know if IBM Power is the same.) I defer to the PPC64 folk to have selected what seems the most generally appropriate form. > Also, what about allowing programmers to use weak CAS like in more advanced atomics APIs? For most lock-free algorithms weak CAS is good enough since there is a retry loop anyway. And it would get rid of the awkward retry loop required for the case of context switching between LL and SC. Allowing in what context? If such a need arose in the VM then we would certainly look at implementing whatever was necessary. > But then it's a larger change suddenly which maybe isn't worth the trouble? :) Indeed. The general correctness and performance concerns make changes in this area difficult. David > /Erik > From aph at redhat.com Fri Sep 12 08:46:51 2014 From: aph at redhat.com (Andrew Haley) Date: Fri, 12 Sep 2014 09:46:51 +0100 Subject: RFR: 8058255: Native jbyte Atomic::cmpxchg for supported x86 platforms In-Reply-To: <54124F11.8060100@oracle.com> References: <54124F11.8060100@oracle.com> Message-ID: <5412B2FB.8090909@redhat.com> On 12/09/14 02:40, David Holmes wrote: > Can we pause and give some more thought to a clean mechanism for > allowing a shared implementation if desired with the ability to override > if desired. I really do not like to see CPU specific ifdefs being added > to shared code. (And I would also not like to see all platforms being > forced to reimplement this natively). Indeed.
Could we put the code for things like cmpxchg in an abstract class and override them? Also, this doesn't look right: + // Support for jbyte Atomic::cmpxchg(jbyte exchange_value, + // volatile jbyte *dest, + // jbyte compare_value) + // An additional bool (os::is_MP()) is passed as the last argument. + .inline _Atomic_cmpxchg_byte,4 + movb 8(%esp), %al // compare_value + movb 0(%esp), %cl // exchange_value + movl 4(%esp), %edx // dest + cmp $0, 12(%esp) // MP test + jne 1f + cmpxchgb %cl, (%edx) + jmp 2f +1: lock + cmpxchgl %cl, (%edx) +2: + .end What is this cmpxchgl for? Andrew. From erik.osterlund at lnu.se Fri Sep 12 09:14:49 2014 From: erik.osterlund at lnu.se (=?iso-8859-1?Q?Erik_=D6sterlund?=) Date: Fri, 12 Sep 2014 09:14:49 +0000 Subject: RFR: 8058255: Native jbyte Atomic::cmpxchg for supported x86 platforms In-Reply-To: <5412B2FB.8090909@redhat.com> References: <54124F11.8060100@oracle.com>,<5412B2FB.8090909@redhat.com> Message-ID: <875B2D9C-439F-43E5-B965-726DB2F1A2FF@lnu.se> Sure, if you want a cleaner, more OOP fix then I'm fine with slowing down. I won't be able to respond until Wednesday anyway. :( And yes, that cmpxchgl should be cmpxchgb, thanks for spotting that! /Erik > On 12 sep 2014, at 09:47, "Andrew Haley" wrote: > >> On 12/09/14 02:40, David Holmes wrote: >> Can we pause and give some more thought to a clean mechanism for >> allowing a shared implementation if desired with the ability to override >> if desired. I really do not like to see CPU specific ifdefs being added >> to shared code. (And I would also not like to see all platforms being >> forced to reimplement this natively). > > Indeed. Could we put the code for things like cmpxchg in an abstract > class and override them? > > Also, this doesn't look right: > > + // Support for jbyte Atomic::cmpxchg(jbyte exchange_value, > + // volatile jbyte *dest, > + // jbyte compare_value) > + // An additional bool (os::is_MP()) is passed as the last argument.
> + .inline _Atomic_cmpxchg_byte,4 > + movb 8(%esp), %al // compare_value > + movb 0(%esp), %cl // exchange_value > + movl 4(%esp), %edx // dest > + cmp $0, 12(%esp) // MP test > + jne 1f > + cmpxchgb %cl, (%edx) > + jmp 2f > +1: lock > + cmpxchgl %cl, (%edx) > +2: > + .end > > What is this cmpxchgl for? > > Andrew. From volker.simonis at gmail.com Fri Sep 12 18:38:29 2014 From: volker.simonis at gmail.com (Volker Simonis) Date: Fri, 12 Sep 2014 20:38:29 +0200 Subject: Proposal: Allowing selective pushes to hotspot without jprt In-Reply-To: <5410CDA9.7030405@oracle.com> References: <540F7021.5080100@oracle.com> <5410CDA9.7030405@oracle.com> Message-ID: Hi Mikael, there's one more question that came to my mind: will the new rule apply to all hotspot repositories (i.e. jdk9/hs-rt/hotspot, jdk9/hs-comp/hotspot, jdk9/hs-gc/hotspot, jdk9/hs-hs/hotspot AND jdk8u/jdk8u-dev/hotspot, jdk8u/hs-dev/hotspot) ? Thanks, Volker On Thu, Sep 11, 2014 at 12:16 AM, Mikael Vidstedt wrote: > > Andrew/Volker, > > Thanks for the positive feedback. The goal of the proposal is to simplify > pushing changes which are effectively not tested by the jprt system anyway. > The proposed relaxation would not affect work on other infrastructure > projects in any relevant way, but would hopefully improve all our lives > significantly immediately. > > Cheers, > Mikael > > > On 2014-09-10 01:45, Volker Simonis wrote: >> >> Hi Mikael, >> >> thanks a lot for this proposal. I think this will dramatically >> simplify our work to keep our ports up to date! So I fully support it. >> >> Nevertheless, I think this can only be a first step towards fully opening >> the JPRT system to developers outside Oracle. With "opening" I mean to >> allow OpenJDK committers from outside Oracle to submit and run JPRT >> jobs as well as allowing porting projects to add hardware which builds >> and tests the HotSpot on alternative platforms. >> >> So while I'm all in favor of your proposal I hope you can allay my >> doubts that this simplification will hopefully not push the >> realization of a truly OPEN JPRT system even further away.
>>
>> So while I'm all in favor of your proposal, I hope you can allay my
>> doubts: this simplification will hopefully not push the
>> realization of a truly OPEN JPRT system even further away.
>>
>> Regards,
>> Volker
>>
>>
>> On Tue, Sep 9, 2014 at 11:24 PM, Mikael Vidstedt
>> wrote:
>>>
>>> All,
>>>
>>> Made up primarily of low-level C++ code, the Hotspot codebase is highly
>>> platform dependent and also tightly coupled with the tool chains on the
>>> various platforms. Each platform/tool chain combination has its set of
>>> special quirks, and code must be implemented in a way such that it only
>>> relies on the common subset of syntax and functionality across all these
>>> combinations. History has taught us that even simple changes can have
>>> surprising results when compiled with different compilers.
>>>
>>> For more than a decade the Hotspot team has ensured a minimum quality level
>>> by requiring all pushes to be done through a build and test system (jprt)
>>> which guarantees that the code resulting from applying a set of changes
>>> builds on a set of core platforms and that a set of core tests pass. Only if
>>> all the builds and tests pass will the changes actually be pushed to the
>>> target repository.
>>>
>>> We believe that testing like the above, in combination with later stages of
>>> testing, is vital to ensuring that the quality level of the Hotspot code
>>> remains high and that developers do not run into situations where the latest
>>> version has build errors on some platforms.
>>>
>>> Recently the AIX/PPC port was added to the set of OpenJDK platforms. From a
>>> Hotspot perspective this new platform added a set of AIX/PPC specific files
>>> including some platform specific changes to shared code. The AIX/PPC
>>> platform is not tested by Oracle as part of Hotspot push jobs. The same
>>> thing applies for the shark and zero versions of Hotspot.
>>>
>>> While Hotspot developers remain committed to making sure changes are
>>> developed in a way such that the quality level remains high across all
>>> platforms and variants, because of the above mentioned complexities it is
>>> inevitable that from time to time changes will be made which introduce
>>> issues on specific platforms or tool chains not part of the core testing.
>>>
>>> To allow these issues to be resolved more quickly I would like to propose a
>>> relaxation in the requirements on how changes to Hotspot are pushed.
>>> Specifically I would like to allow for direct pushes to the hotspot/
>>> repository of files specific to the following ports/variants/tools:
>>>
>>> * AIX
>>> * PPC
>>> * Shark
>>> * Zero
>>>
>>> Today this translates into the following files:
>>>
>>> - src/cpu/ppc/**
>>> - src/cpu/zero/**
>>> - src/os/aix/**
>>> - src/os_cpu/aix_ppc/**
>>> - src/os_cpu/bsd_zero/**
>>> - src/os_cpu/linux_ppc/**
>>> - src/os_cpu/linux_zero/**
>>>
>>> Note that all changes are still required to go through the normal
>>> development and review cycle; the proposed relaxation only applies to how
>>> the changes are pushed.
>>>
>>> If at code review time a change is for some reason deemed to be risky
>>> and/or to otherwise have impact on shared files, the reviewer may request
>>> that the change go through the regular push testing. For changes only
>>> touching the above set of files this is expected to be rare.
>>>
>>> Please let me know what you think.
>>>
>>> Cheers,
>>> Mikael
>>>

From volker.simonis at gmail.com  Fri Sep 12 19:15:10 2014
From: volker.simonis at gmail.com (Volker Simonis)
Date: Fri, 12 Sep 2014 21:15:10 +0200
Subject: RFR(S): 8058345: Refactor native stack printing from vmError.cpp to debug.cpp to make it available in gdb as well
Message-ID: 

Hi,

could you please review and sponsor the following small change which
should make debugging a little more comfortable (at least on Linux for now):

http://cr.openjdk.java.net/~simonis/webrevs/8058345/
https://bugs.openjdk.java.net/browse/JDK-8058345

In the hs_err files we have a nice mixed stack trace which contains
both Java and native frames. It would be nice if we could make this
functionality available from within gdb during debugging sessions (until
now we can only print the pure Java stack with the "ps()" helper function
from debug.cpp).

This new feature can be easily achieved by refactoring the corresponding
stack printing code from VMError::report() in vmError.cpp into its own
method in debug.cpp. This change extracts that code into the new function
'print_native_stack()' in debug.cpp without changing any of the
functionality. It also adds some helper functions which make it easy to
call the new 'print_native_stack()' method from within gdb.

There's the new helper function 'pns(frame f)' which takes a frame
argument and calls 'print_native_stack()'. We need the frame argument
because gdb inserts a dummy frame for every call and we can't easily
walk over this dummy frame from our stack printing routine. To simplify
the creation of the frame object, I've added the helper functions:

extern "C" frame make_frame(intptr_t* sp, intptr_t* fp, address pc) {
  return frame(sp, fp, pc);
}

for x86 (in frame_x86.cpp) and

extern "C" frame make_frame(intptr_t* sp, address pc) {
  return frame(sp, pc);
}

for ppc64 in frame_ppc.cpp. With these helper functions we can now easily
get a mixed stack trace of a Java thread in gdb (see below).
All the helper functions are protected by '#ifndef PRODUCT' Thank you and best regards, Volker (gdb) call pns(make_frame($sp, $rbp, $pc)) "Executing pns" Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code) C [libpthread.so.0+0xc0fe] pthread_cond_timedwait+0x13e V [libjvm.so+0x96c4c1] os::sleep(Thread*, long, bool)+0x1a1 V [libjvm.so+0x75f442] JVM_Sleep+0x312 j java.lang.Thread.sleep(J)V+0 j CrashNative.crashIt(Lsun/misc/Unsafe;I)V+10 j CrashNative.doIt()V+45 v ~StubRoutines::call_stub V [libjvm.so+0x71599f] JavaCalls::call_helper(JavaValue*, methodHandle*, JavaCallArguments*, Thread*)+0xf8f V [libjvm.so+0x9eab75] Reflection::invoke(instanceKlassHandle, methodHandle, Handle, bool, objArrayHandle, BasicType, objArrayHandle, bool, Thread*) [clone .constprop.218]+0xa25 V [libjvm.so+0x9eb838] Reflection::invoke_method(oopDesc*, Handle, objArrayHandle, Thread*)+0x1c8 V [libjvm.so+0x7637ae] JVM_InvokeMethod+0xfe j sun.reflect.NativeMethodAccessorImpl.invoke0(Ljava/lang/reflect/Method;Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;+0 j sun.reflect.NativeMethodAccessorImpl.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;+100 j sun.reflect.DelegatingMethodAccessorImpl.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;+6 j java.lang.reflect.Method.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;+56 j CrashNative.mainJava()V+32 v ~StubRoutines::call_stub V [libjvm.so+0x71599f] JavaCalls::call_helper(JavaValue*, methodHandle*, JavaCallArguments*, Thread*)+0xf8f V [libjvm.so+0x7384f5] jni_invoke_static(JNIEnv_*, JavaValue*, _jobject*, JNICallType, _jmethodID*, JNI_ArgumentPusher*, Thread*) [clone .isra.238] [clone .constprop.250]+0x385 V [libjvm.so+0x73b3d7] jni_CallStaticVoidMethodV+0xe7 C [libCrashNative.so+0x9a9] JNIEnv_::CallStaticVoidMethod(_jclass*, _jmethodID*, ...)+0xb9 C [libCrashNative.so+0xa10] step3(JNIEnv_*, _jobject*)+0x65 C [libCrashNative.so+0xa69] step2(JNIEnv_*, _jobject*)+0x57 C 
[libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25
C [libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25
C [libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25
C [libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25
C [libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25
C [libCrashNative.so+0xa8e] step1(JNIEnv_*, _jobject*)+0x23
C [libCrashNative.so+0x87f] Java_CrashNative_nativeMethod+0x23
j CrashNative.nativeMethod()V+0
j CrashNative.main([Ljava/lang/String;)V+9
v ~StubRoutines::call_stub
V [libjvm.so+0x71599f] JavaCalls::call_helper(JavaValue*, methodHandle*, JavaCallArguments*, Thread*)+0xf8f
V [libjvm.so+0x7384f5] jni_invoke_static(JNIEnv_*, JavaValue*, _jobject*, JNICallType, _jmethodID*, JNI_ArgumentPusher*, Thread*) [clone .isra.238] [clone .constprop.250]+0x385
V [libjvm.so+0x73b2b0] jni_CallStaticVoidMethod+0x170
C [libjli.so+0x742a] JavaMain+0x65a
C [libpthread.so.0+0x7e9a] start_thread+0xda

From aph at redhat.com  Sat Sep 13 07:27:33 2014
From: aph at redhat.com (Andrew Haley)
Date: Sat, 13 Sep 2014 08:27:33 +0100
Subject: Proposal: Allowing selective pushes to hotspot without jprt
In-Reply-To: <540F7021.5080100@oracle.com>
References: <540F7021.5080100@oracle.com>
Message-ID: <5413F1E5.4010806@redhat.com>

On 09/09/14 22:24, Mikael Vidstedt wrote:
> Note that all changes are still required to go through the normal
> development and review cycle; the proposed relaxation only applies to
> how the changes are pushed.
>
> If at code review time a change is for some reason deemed to be risky
> and/or otherwise have impact on shared files, the reviewer may request
> that the change go through the regular push testing. For changes only
> touching the above set of files this is expected to be rare.
>
> Please let me know what you think.

Thank you. From my point of view as an at-large member of the governing
board, I welcome this change: it is exactly the direction we need to go
for OpenJDK to become a true Open Source project.
Of course I endorse Volker's point that developers outside Oracle need to be able to submit jprt jobs, but this is a very welcome step. Andrew. From tobias.hartmann at oracle.com Mon Sep 15 06:50:44 2014 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Mon, 15 Sep 2014 08:50:44 +0200 Subject: [8u40] RFR(s): backport of 8035328 and 8044538 Message-ID: <54168C44.50208@oracle.com> Hi, please review the following backports to 8u40. 8035328: closed/compiler/6595044/Main.java failed with timeout https://bugs.openjdk.java.net/browse/JDK-8035328 http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/46e85b1633d7 http://cr.openjdk.java.net/~thartmann/8035328/webrev.00/ 8044538: assert(which != imm_operand) failed: instruction is not a movq reg, imm64 https://bugs.openjdk.java.net/browse/JDK-8044538 http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/f3624d042de0 http://cr.openjdk.java.net/~thartmann/8044538/webrev.04/ The changes were pushed to 9 some weeks ago and nightly testing showed no problems. All changes apply cleanly to 8u40. Thanks, Tobias From albert.noll at oracle.com Mon Sep 15 07:27:21 2014 From: albert.noll at oracle.com (Albert) Date: Mon, 15 Sep 2014 09:27:21 +0200 Subject: [8u40] RFR(): backport of JDK-8034775 Message-ID: <541694D9.2020200@oracle.com> Hi, please review the following backport: https://bugs.openjdk.java.net/browse/JDK-8034775 http://cr.openjdk.java.net/~anoll/8034775/webrev.01/ http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/9a83b7b3e37c The changes were pushed to 9 some weeks ago and nightly testing showed no problems. All changes apply cleanly to 8u40. 
Thanks,
Albert

From staffan.larsen at oracle.com  Mon Sep 15 09:18:31 2014
From: staffan.larsen at oracle.com (Staffan Larsen)
Date: Mon, 15 Sep 2014 11:18:31 +0200
Subject: Cross Component (hotspot+jdk) Development in the Hotspot Group Repos
In-Reply-To: 
References: 
Message-ID: <10210D33-4A70-47F3-A6AE-540235738968@oracle.com>

On 9 sep 2014, at 08:46, Volker Simonis wrote:

> Just add build changes and you get another class of changes which often require changing more than one repository in the forest.
>
> I was (and still am :) a fan of the HotSpot Express model and used to work and build Hotspot only for a long time. But after the introduction of the new build system, which resulted in dramatically improved build times, I switched to building the whole forest quite some time ago.
>
> There's still a small question I have: what about other cross development topics (e.g. top-level+jdk)? Will it be possible to "misuse" the new hotspot repositories to keep such changes in sync? Or will other team repositories like jdk-dev or jdk-client switch to such a model as well?

I think we should minimize the number of non-hotspot related changes in the hotspot repos. I'm thinking that if the changes do not involve the hotspot code or affect hotspot's testing, then push the changes through the same route as today (usually jdk9/dev). jdk9/dev already accepts changes to several repos at the same time (except hotspot repos), so changing top-level+jdk should be possible there.

Thanks,
/Staffan

>
> Thank you and best regards,
> Volker
>
> On Tuesday, September 9, 2014, Staffan Larsen wrote:
>
> ## tl;dr
>
> We propose a move to a Hotspot development model where we can do both
> hotspot and jdk changes in the hotspot group repos. This will require a
> fully populated JDK forest to push changes (whether hotspot or jdk
> changes) through JPRT.
We do not expect these changes to have much
> effect on the open community, but it is good to note that there can be
> changes both in hotspot and jdk code coming through the hotspot
> repositories, and the best practice is to always clone and build the
> complete forest.
>
> We propose to do this change in a few weeks' time.
>
> ## Problem
>
> We see an increasing number of features (small and large) that require
> concerted changes to both the hotspot and the jdk repos. Our current
> development model does not support this very well since it requires jdk
> changes to be made in jdk9/dev and hotspot changes to be made in the
> hotspot group repositories. Alternatively, such changes result in "flag
> days" where jdk and hotspot changes are pushed through the group repos
> with a lot of manual work and impact on everyone working in the group
> repos. Either way, the result is very slow and cumbersome development.
>
> Some examples where concerted changes have been required are JSR-292,
> default methods, Java Flight Recorder, work on annotations, moving Class
> fields to Java, many serviceability area tests, and so on. A lot of this
> work will continue and we will also see new things such as jigsaw that
> add to the mix.
>
> Doing concerted changes today takes a lot of manual effort and calendar
> time to make sure nothing breaks. In many cases the addition of a new
> feature needs to be made first to a hotspot group repo. That change needs
> to propagate to jdk9/dev where library code can be changed to depend on
> it. Once that change has propagated back to the hotspot group repo, the
> final change can be made to remove the old implementation. This dance
> can take anywhere from 2 to 4 weeks to complete - for a single feature.
>
> There have also been quite a few cases where we missed taking the
> dependency into account, which resulted in test failures in one or more
> repos.
In some cases these failures go on for several weeks causing lots
> of extra work and confusion simply because it takes time for the fix to
> propagate through the repos.
>
> Instead, we want to move to a model where we can make both jdk and
> hotspot changes directly in the hotspot group repos. In that way the
> changes will always "travel together" through the repos. This will make
> our development cycle faster as well as more reliable.
>
> More or less by definition these types of changes introduce a stronger
> dependency between hotspot and the jdk. For the product as a whole to
> work correctly the right combination of hotspot and the jdk needs to be
> used. We have long since removed the requirement that hotspot would
> support several jdk versions (known as the Hotspot Express - or hsx -
> model) and we continue to see a strong dependency, where matching code
> in hotspot and the jdk needs to be used.
>
> ## No More Dependency on Latest Promoted Build
>
> The strong dependency between hotspot and jdk makes it impossible for
> hotspot to depend on the latest promoted jdk build for testing and
> development. To elaborate on this: if a change with hotspot+jdk
> dependencies has been pushed to a group repo, it will no longer be
> possible to use the latest promoted build for running or testing the
> version of hotspot built in that repo -- the latest promoted build will
> not have the latest change to the jdk that hotspot now depends on (or
> vice versa).
>
> ## Require Fully Populated JDK Forest
>
> The simple solution that we can switch to today is to always require a
> fully populated JDK forest when building (both locally and in JPRT). By
> this we mean a clone of all the repos in the forest under, for example,
> jdk9/hs-rt. JPRT would no longer be using the latest promoted build when
> creating bundles; instead it will build the code from the submitted
> forest.
>
> If all operations (builds, integrations, pushes, JPRT jobs) always work
> on the full forest, then there will never be a mismatch between the jdk
> and the hotspot code.
>
> The main drawbacks of this are that developers now need to clone, store
> and build a lot more code. Cloning the full forest takes longer than
> just cloning the hotspot forest. This can be alleviated by maintaining
> local cached versions. Storing full forests requires more disk space.
> This can be mitigated by buying more disks or using a different workflow
> (for example Mercurial Queues). Building a full jdk takes longer, but
> hotspot is already one of the larger components to build and incremental
> builds are usually quite fast.
>
> ## Next Steps
>
> Given that we would like to improve the model we use for cross component
> development as soon as possible, we would like to switch to require a
> fully populated JDK forest for hotspot development.
> > Regards, > Jon Masamitsu, Karen Kinnear, Mikael Vidstedt, > Staffan Larsen, Stefan S?rne, Vladimir Kozlov > > [0] http://openjdk.java.net/projects/jdk8u/releases/8u40.html From staffan.larsen at oracle.com Mon Sep 15 09:25:53 2014 From: staffan.larsen at oracle.com (Staffan Larsen) Date: Mon, 15 Sep 2014 11:25:53 +0200 Subject: Cross Component (hotspot+jdk) Development in the Hotspot Group Repos In-Reply-To: References: Message-ID: All, We plan to move ahead with this change on Wednesday (Sept 17th) unless there are instabilities that prevent this. We currently have one open bug blocking this (JDK-8058251). I will follow up with an email once the switch has happened. Thanks, /Staffan On 9 sep 2014, at 08:02, Staffan Larsen wrote: > > ## tl;dr > > We propose a move to a Hotspot development model where we can do both > hotspot and jdk changes in the hotspot group repos. This will require a > fully populated JDK forest to push changes (whether hotspot or jdk > changes) through JPRT. We do not expect these changes to have much > affect on the open community, but it is good to note that there can be > changes both in hotspot and jdk code coming through the hotspot > repositories, and the best practise is to always clone and build the > complete forest. > > We propose to do this change in a few weeks time. > > ## Problem > > We see an increasing number of features (small and large) that require > concerted changes to both the hotspot and the jdk repos. Our current > development model does not support this very well since it requires jdk > changes to be made in jdk9/dev and hotspot changes to be made in the > hotspot group repositories. Alternatively, such changes results in "flag > days" where jdk and hotspot changes are pushed through the group repos > with a lot of manual work and impact on everyone working in the group > repos. Either way, the result is very slow and cumbersome development. 
>
> Some examples where concerted changes have been required are JSR-292,
> default methods, Java Flight Recorder, work on annotations, moving Class
> fields to Java, many serviceability area tests, and so on. A lot of this
> work will continue and we will also see new things such as jigsaw that
> add to the mix.
>
> Doing concerted changes today takes a lot of manual effort and calendar
> time to make sure nothing breaks. In many cases the addition of a new
> feature needs to be made first to a hotspot group repo. That change needs
> to propagate to jdk9/dev where library code can be changed to depend on
> it. Once that change has propagated back to the hotspot group repo, the
> final change can be made to remove the old implementation. This dance
> can take anywhere from 2 to 4 weeks to complete - for a single feature.
>
> There have also been quite a few cases where we missed taking the
> dependency into account, which resulted in test failures in one or more
> repos. In some cases these failures go on for several weeks causing lots
> of extra work and confusion simply because it takes time for the fix to
> propagate through the repos.
>
> Instead, we want to move to a model where we can make both jdk and
> hotspot changes directly in the hotspot group repos. In that way the
> changes will always "travel together" through the repos. This will make
> our development cycle faster as well as more reliable.
>
> More or less by definition these types of changes introduce a stronger
> dependency between hotspot and the jdk. For the product as a whole to
> work correctly the right combination of hotspot and the jdk needs to be
> used. We have long since removed the requirement that hotspot would
> support several jdk versions (known as the Hotspot Express - or hsx -
> model) and we continue to see a strong dependency, where matching code
> in hotspot and the jdk needs to be used.
>
> ## No More Dependency on Latest Promoted Build
>
> The strong dependency between hotspot and jdk makes it impossible for
> hotspot to depend on the latest promoted jdk build for testing and
> development. To elaborate on this: if a change with hotspot+jdk
> dependencies has been pushed to a group repo, it will no longer be
> possible to use the latest promoted build for running or testing the
> version of hotspot built in that repo -- the latest promoted build will
> not have the latest change to the jdk that hotspot now depends on (or
> vice versa).
>
> ## Require Fully Populated JDK Forest
>
> The simple solution that we can switch to today is to always require a
> fully populated JDK forest when building (both locally and in JPRT). By
> this we mean a clone of all the repos in the forest under, for example,
> jdk9/hs-rt. JPRT would no longer be using the latest promoted build when
> creating bundles; instead it will build the code from the submitted
> forest.
>
> If all operations (builds, integrations, pushes, JPRT jobs) always work
> on the full forest, then there will never be a mismatch between the jdk
> and the hotspot code.
>
> The main drawbacks of this are that developers now need to clone, store
> and build a lot more code. Cloning the full forest takes longer than
> just cloning the hotspot forest. This can be alleviated by maintaining
> local cached versions. Storing full forests requires more disk space.
> This can be mitigated by buying more disks or using a different workflow
> (for example Mercurial Queues). Building a full jdk takes longer, but
> hotspot is already one of the larger components to build and incremental
> builds are usually quite fast.
>
> ## Next Steps
>
> Given that we would like to improve the model we use for cross component
> development as soon as possible, we would like to switch to require a
> fully populated JDK forest for hotspot development.
All the
> prerequisites for doing this are in place (changes to JPRT, both on the
> servers and to the configuration files in the source repos). A group of
> volunteering hotspot developers have been using full jdk repos for a
> while for day-to-day work (except pushes) and have not reported any
> showstopper problems.
>
> If no strong objections are raised we need to decide on a date when we
> throw the switch. A good date is probably after the 8u40 Feature
> Complete date of mid-September [0] so as not to impact that release
> (although this change will only apply to JDK 9 development for now).
>
> Regards,
> Jon Masamitsu, Karen Kinnear, Mikael Vidstedt,
> Staffan Larsen, Stefan Särne, Vladimir Kozlov
>
> [0] http://openjdk.java.net/projects/jdk8u/releases/8u40.html

From magnus.ihse.bursie at oracle.com  Mon Sep 15 11:16:13 2014
From: magnus.ihse.bursie at oracle.com (Magnus Ihse Bursie)
Date: Mon, 15 Sep 2014 13:16:13 +0200
Subject: RFR: JDK-8056999 Make hotspot builds less verbose on default log level
In-Reply-To: <5409A3D0.3070208@oracle.com>
References: <54046278.7050404@oracle.com> <5407EC21.8050709@oracle.com> <54098813.6070903@oracle.com> <5409A3D0.3070208@oracle.com>
Message-ID: <5416CA7D.7000002@oracle.com>

Here is the full review of this fix. I have now applied the same pattern as I used on Linux to AIX, BSD and Solaris as well.

It turned out that Windows was problematic. Due to the big difference between Windows and the Unix versions (different and/or limited nmake flexibility, different and/or limited shell functionality and differences in design in the hotspot make files), I did not manage to get a working Windows version of this fix in a reasonable time frame. (This fix has already taken more time than I wanted to spend on it.)

I suggest that this fix nevertheless is an improvement on the other platforms, and that I open a new bug report for the remaining work on Windows.
And here's what I wrote about the preliminary version of this fix:

Even in the default log level ("warn"), hotspot builds are extremely verbose. With the new jigsaw build system, hotspot is built in parallel with the jdk, and the sheer amount of hotspot output makes the jdk output practically disappear.

This fix will make the following changes:
* When hotspot is built from the top dir with the default log level, all repetitive and purely informative output is hidden (e.g. names of files compiled, and the "INFO:" blobs).
* When hotspot is built from the top dir, with any other log level (info, debug, trace), all output will be there, as before.
* When hotspot is built from the hotspot repo, all output will be there, as before.

I have tested building on JPRT with LOG=debug and LOG=warn, and it all looks as it should as far as I could tell.

Bug: https://bugs.openjdk.java.net/browse/JDK-8056999
WebRev: http://cr.openjdk.java.net/~ihse/JDK-8056999-less-verbose-hotspot-builds/webrev.02

/Magnus

From staffan.larsen at oracle.com  Mon Sep 15 13:15:07 2014
From: staffan.larsen at oracle.com (Staffan Larsen)
Date: Mon, 15 Sep 2014 15:15:07 +0200
Subject: RFR: 8058448 Disable JPRT submissions from the hotspot repo
Message-ID: 

When we switch to require JPRT submissions to be made from the top-level repo (see other email thread), we also want to prevent submissions from the hotspot repo so that an accidental push does not happen. In order to achieve that, this change removes the make/jprt.properties file from the hotspot repo.

Once we decide to go ahead, I would like to push this first to jdk9/hs/hotspot and then pull it down to jdk9/hs-rt/hotspot, jdk9/hs-gc/hotspot and jdk9/hs-comp/hotspot.
bug: https://bugs.openjdk.java.net/browse/JDK-8058448 webrev: http://cr.openjdk.java.net/~sla/8058448/webrev.00/ Thanks, /Staffan From aph at redhat.com Mon Sep 15 15:20:59 2014 From: aph at redhat.com (Andrew Haley) Date: Mon, 15 Sep 2014 16:20:59 +0100 Subject: More on memory barriers Message-ID: <541703DB.5030207@redhat.com> I'm still looking at the best way to generate code for AArch64 volatiles, and I've come across something really odd. void Parse::do_put_xxx(Node* obj, ciField* field, bool is_field) { bool is_vol = field->is_volatile(); // If reference is volatile, prevent following memory ops from // floating down past the volatile write. Also prevents commoning // another volatile read. if (is_vol) insert_mem_bar(Op_MemBarRelease); A release here is much too strong: the JSR-133 cookbook says that we only need a StoreStore here, presumably because we're going to emit a full StoreLoad barrier after the store to the volatile field. On a target where this makes an actual difference to code quality, this matters. Does anyone here understand what this release fence is for? Thanks, Andrew. From dl at cs.oswego.edu Mon Sep 15 16:08:38 2014 From: dl at cs.oswego.edu (Doug Lea) Date: Mon, 15 Sep 2014 12:08:38 -0400 Subject: More on memory barriers In-Reply-To: <541703DB.5030207@redhat.com> References: <541703DB.5030207@redhat.com> Message-ID: <54170F06.2070207@cs.oswego.edu> On 09/15/2014 11:20 AM, Andrew Haley wrote: > I'm still looking at the best way to generate code for AArch64 volatiles, > and I've come across something really odd. > > void Parse::do_put_xxx(Node* obj, ciField* field, bool is_field) { > bool is_vol = field->is_volatile(); > // If reference is volatile, prevent following memory ops from > // floating down past the volatile write. Also prevents commoning > // another volatile read. 
> if (is_vol) insert_mem_bar(Op_MemBarRelease); > > A release here is much too strong: the JSR-133 cookbook says that we > only need a StoreStore here, presumably because we're going to emit a > full StoreLoad barrier after the store to the volatile field. > > On a target where this makes an actual difference to code quality, > this matters. Does anyone here understand what this release fence is > for? > My understanding is that storeStore was not distinguished from release only because it did not impact any existing platforms. (Plus as I mentioned before, storeStore is unlikely to ever be exposed as part of any user-visible mode). But I can't see any reason not to define a Op_MemBarStoreStore as a subtype of Op_MemBarRelease, keeping everything the same in c2, but matching it more cheaply on platforms where it matters. (You lose almost nothing, or maybe exactly nothing treating it as release wrt other c2 internals.) [Insert my usual disclaimers about not being a hotspot engineer.] -Doug From aph at redhat.com Mon Sep 15 16:13:26 2014 From: aph at redhat.com (Andrew Haley) Date: Mon, 15 Sep 2014 17:13:26 +0100 Subject: More on memory barriers In-Reply-To: <54170F06.2070207@cs.oswego.edu> References: <541703DB.5030207@redhat.com> <54170F06.2070207@cs.oswego.edu> Message-ID: <54171026.8060700@redhat.com> On 09/15/2014 05:08 PM, Doug Lea wrote: > On 09/15/2014 11:20 AM, Andrew Haley wrote: >> I'm still looking at the best way to generate code for AArch64 volatiles, >> and I've come across something really odd. >> >> void Parse::do_put_xxx(Node* obj, ciField* field, bool is_field) { >> bool is_vol = field->is_volatile(); >> // If reference is volatile, prevent following memory ops from >> // floating down past the volatile write. Also prevents commoning >> // another volatile read. 
>> if (is_vol) insert_mem_bar(Op_MemBarRelease); >> >> A release here is much too strong: the JSR-133 cookbook says that we >> only need a StoreStore here, presumably because we're going to emit a >> full StoreLoad barrier after the store to the volatile field. >> >> On a target where this makes an actual difference to code quality, >> this matters. Does anyone here understand what this release fence is >> for? > > My understanding is that storeStore was not distinguished from > release only because it did not impact any existing platforms. (Plus > as I mentioned before, storeStore is unlikely to ever be exposed > as part of any user-visible mode). Okay, that makes sense. > But I can't see any reason not to define a Op_MemBarStoreStore as a > subtype of Op_MemBarRelease, keeping everything the same in c2, but > matching it more cheaply on platforms where it matters. (You lose > almost nothing, or maybe exactly nothing treating it as release wrt > other c2 internals.) > > [Insert my usual disclaimers about not being a hotspot engineer.] Mmm. MemBarStoreStore is defined, it's just not used for this operation. Andrew. From vitalyd at gmail.com Mon Sep 15 16:20:15 2014 From: vitalyd at gmail.com (Vitaly Davidovich) Date: Mon, 15 Sep 2014 12:20:15 -0400 Subject: More on memory barriers In-Reply-To: <54171026.8060700@redhat.com> References: <541703DB.5030207@redhat.com> <54170F06.2070207@cs.oswego.edu> <54171026.8060700@redhat.com> Message-ID: Looking at hg history, MemBarStoreStore was added a few years ago, whereas the code in question is much older. The comments in the changelist adding MemBarStoreStore seem to indicate it was done to address a specific issue, and my guess is that it wasn't "retrofitted" into all possible places. Just a guess though ... 
On Mon, Sep 15, 2014 at 12:13 PM, Andrew Haley wrote: > On 09/15/2014 05:08 PM, Doug Lea wrote: > > On 09/15/2014 11:20 AM, Andrew Haley wrote: > >> I'm still looking at the best way to generate code for AArch64 > volatiles, > >> and I've come across something really odd. > >> > >> void Parse::do_put_xxx(Node* obj, ciField* field, bool is_field) { > >> bool is_vol = field->is_volatile(); > >> // If reference is volatile, prevent following memory ops from > >> // floating down past the volatile write. Also prevents commoning > >> // another volatile read. > >> if (is_vol) insert_mem_bar(Op_MemBarRelease); > >> > >> A release here is much too strong: the JSR-133 cookbook says that we > >> only need a StoreStore here, presumably because we're going to emit a > >> full StoreLoad barrier after the store to the volatile field. > >> > >> On a target where this makes an actual difference to code quality, > >> this matters. Does anyone here understand what this release fence is > >> for? > > > > My understanding is that storeStore was not distinguished from > > release only because it did not impact any existing platforms. (Plus > > as I mentioned before, storeStore is unlikely to ever be exposed > > as part of any user-visible mode). > > Okay, that makes sense. > > > But I can't see any reason not to define a Op_MemBarStoreStore as a > > subtype of Op_MemBarRelease, keeping everything the same in c2, but > > matching it more cheaply on platforms where it matters. (You lose > > almost nothing, or maybe exactly nothing treating it as release wrt > > other c2 internals.) > > > > [Insert my usual disclaimers about not being a hotspot engineer.] > > Mmm. MemBarStoreStore is defined, it's just not used for this > operation. > > Andrew. 
> From vladimir.kozlov at oracle.com Mon Sep 15 17:51:18 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Mon, 15 Sep 2014 10:51:18 -0700 Subject: RFR: 8058448 Disable JPRT submissions from the hotspot repo In-Reply-To: References: Message-ID: <54172716.3020809@oracle.com> Good. As I understand runtime group does sync on Wednesday. Can you do your fix pushes before that? Thanks, Vladimir On 9/15/14 6:15 AM, Staffan Larsen wrote: > When we switch to require JPRT submissions to be made from the top-level repo (see other email thread), we also want to prevent submissions from the hotspot repo so that an accidental push does not happen. In order to achieve that, this change removes the make/jprt.properties file from the hotspot repo. > > Once we decide to go ahead, I would like to push this first to jdk9/hs/hotspot and then pull it down to jdk9/hs-rt/hotspot, jdk9/hs-gc/hotspot and jdk9/hs-comp/hotspot. > > bug: https://bugs.openjdk.java.net/browse/JDK-8058448 > webrev: http://cr.openjdk.java.net/~sla/8058448/webrev.00/ > > Thanks, > /Staffan > From vladimir.kozlov at oracle.com Mon Sep 15 18:00:16 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Mon, 15 Sep 2014 11:00:16 -0700 Subject: [8u40] RFR(): backport of JDK-8034775 In-Reply-To: <541694D9.2020200@oracle.com> References: <541694D9.2020200@oracle.com> Message-ID: <54172930.20600@oracle.com> Albert, 8034775 had bugs tail, you need to collect all related fixes together (several changesets with one push). Thanks, Vladimir On 9/15/14 12:27 AM, Albert wrote: > Hi, > > please review the following backport: > > https://bugs.openjdk.java.net/browse/JDK-8034775 > http://cr.openjdk.java.net/~anoll/8034775/webrev.01/ > http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/9a83b7b3e37c > > The changes were pushed to 9 some weeks ago and nightly testing showed no problems. All changes apply cleanly to 8u40. 
> > Thanks, > Albert > > From vladimir.kozlov at oracle.com Mon Sep 15 18:05:00 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Mon, 15 Sep 2014 11:05:00 -0700 Subject: [8u40] RFR(s): backport of 8035328 and 8044538 In-Reply-To: <54168C44.50208@oracle.com> References: <54168C44.50208@oracle.com> Message-ID: <54172A4C.7060003@oracle.com> This looks good. Thanks, Vladimir On 9/14/14 11:50 PM, Tobias Hartmann wrote: > Hi, > > please review the following backports to 8u40. > > 8035328: closed/compiler/6595044/Main.java failed with timeout > https://bugs.openjdk.java.net/browse/JDK-8035328 > http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/46e85b1633d7 > http://cr.openjdk.java.net/~thartmann/8035328/webrev.00/ > > 8044538: assert(which != imm_operand) failed: instruction is not a movq reg, imm64 > https://bugs.openjdk.java.net/browse/JDK-8044538 > http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/f3624d042de0 > http://cr.openjdk.java.net/~thartmann/8044538/webrev.04/ > > The changes were pushed to 9 some weeks ago and nightly testing showed no problems. All changes apply cleanly to 8u40. > > Thanks, > Tobias > > > From staffan.larsen at oracle.com Mon Sep 15 18:51:55 2014 From: staffan.larsen at oracle.com (Staffan Larsen) Date: Mon, 15 Sep 2014 20:51:55 +0200 Subject: RFR: 8058448 Disable JPRT submissions from the hotspot repo In-Reply-To: <54172716.3020809@oracle.com> References: <54172716.3020809@oracle.com> Message-ID: <287121B9-88A3-487E-AB03-50F8669AA0DF@oracle.com> On 15 sep 2014, at 19:51, Vladimir Kozlov wrote: > Good. > > As I understand runtime group does sync on Wednesday. Can you do your fix pushes before that? Good point. I will coordinate with the runtime gatekeeper. 
Hopefully I am a couple of timezones ahead :-) /Staffan > > Thanks, > Vladimir > > On 9/15/14 6:15 AM, Staffan Larsen wrote: >> When we switch to require JPRT submissions to be made from the top-level repo (see other email thread), we also want to prevent submissions from the hotspot repo so that an accidental push does not happen. In order to achieve that, this change removes the make/jprt.properties file from the hotspot repo. >> >> Once we decide to go ahead, I would like to push this first to jdk9/hs/hotspot and then pull it down to jdk9/hs-rt/hotspot, jdk9/hs-gc/hotspot and jdk9/hs-comp/hotspot. >> >> bug: https://bugs.openjdk.java.net/browse/JDK-8058448 >> webrev: http://cr.openjdk.java.net/~sla/8058448/webrev.00/ >> >> Thanks, >> /Staffan >> From dean.long at oracle.com Mon Sep 15 19:44:00 2014 From: dean.long at oracle.com (Dean Long) Date: Mon, 15 Sep 2014 12:44:00 -0700 Subject: More on memory barriers In-Reply-To: <541703DB.5030207@redhat.com> References: <541703DB.5030207@redhat.com> Message-ID: <54174180.4060100@oracle.com> If volatile store uses AArch64 "stlr" and volatile load uses "ldar", then is that enough (no additional barriers, including StoreLoad, required)? That's my understanding from the comments in orderAccess.hpp regarding ia64 st.rel and ld.acq. dl On 9/15/2014 8:20 AM, Andrew Haley wrote: > I'm still looking at the best way to generate code for AArch64 volatiles, > and I've come across something really odd. > > void Parse::do_put_xxx(Node* obj, ciField* field, bool is_field) { > bool is_vol = field->is_volatile(); > // If reference is volatile, prevent following memory ops from > // floating down past the volatile write. Also prevents commoning > // another volatile read. > if (is_vol) insert_mem_bar(Op_MemBarRelease); > > A release here is much too strong: the JSR-133 cookbook says that we > only need a StoreStore here, presumably because we're going to emit a > full StoreLoad barrier after the store to the volatile field. 
> > On a target where this makes an actual difference to code quality, > this matters. Does anyone here understand what this release fence is > for? > > Thanks, > Andrew. From george.triantafillou at oracle.com Mon Sep 15 20:59:36 2014 From: george.triantafillou at oracle.com (George Triantafillou) Date: Mon, 15 Sep 2014 16:59:36 -0400 Subject: RFR (XS): 8058504 [TESTBUG] Temporarily disable failing test runtime/NMT/MallocTrackingVerify.java Message-ID: <54175338.30301@oracle.com> Please review this updated test for 8058504. This test is intermittently failing in JPRT and needs to be disabled until the root cause is determined. The related issue is JDK-8058251. Webrev: http://cr.openjdk.java.net/~gtriantafill/8058504/webrev/ Bug: https://bugs.openjdk.java.net/browse/JDK-8058504 The fix was tested locally on Linux with jtreg. Thanks. -George From christian.tornqvist at oracle.com Mon Sep 15 21:10:46 2014 From: christian.tornqvist at oracle.com (Christian Tornqvist) Date: Mon, 15 Sep 2014 17:10:46 -0400 Subject: RFR (XS): 8058504 [TESTBUG] Temporarily disable failing test runtime/NMT/MallocTrackingVerify.java In-Reply-To: <54175338.30301@oracle.com> References: <54175338.30301@oracle.com> Message-ID: <027501cfd129$84b989d0$8e2c9d70$@oracle.com> Hi George, This looks good. Thanks, Christian -----Original Message----- From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf Of George Triantafillou Sent: Monday, September 15, 2014 5:00 PM To: hotspot-dev at openjdk.java.net Subject: RFR (XS): 8058504 [TESTBUG] Temporarily disable failing test runtime/NMT/MallocTrackingVerify.java Please review this updated test for 8058504. This test is intermittently failing in JPRT and needs to be disabled until the root cause is determined. The related issue is JDK-8058251. Webrev: http://cr.openjdk.java.net/~gtriantafill/8058504/webrev/ Bug: https://bugs.openjdk.java.net/browse/JDK-8058504 The fix was tested locally on Linux with jtreg. Thanks. 
-George From harold.seigel at oracle.com Mon Sep 15 21:11:06 2014 From: harold.seigel at oracle.com (harold seigel) Date: Mon, 15 Sep 2014 17:11:06 -0400 Subject: RFR (XS): 8058504 [TESTBUG] Temporarily disable failing test runtime/NMT/MallocTrackingVerify.java In-Reply-To: <54175338.30301@oracle.com> References: <54175338.30301@oracle.com> Message-ID: <541755EA.5030307@oracle.com> Hi George, The change looks good. Harold On 9/15/2014 4:59 PM, George Triantafillou wrote: > Please review this updated test for 8058504. This test is > intermittently failing in JPRT and needs to be disabled until the root > cause is determined. The related issue is JDK-8058251. > > Webrev: http://cr.openjdk.java.net/~gtriantafill/8058504/webrev/ > > Bug: https://bugs.openjdk.java.net/browse/JDK-8058504 > > The fix was tested locally on Linux with jtreg. > > Thanks. > > -George > From george.triantafillou at oracle.com Mon Sep 15 21:18:43 2014 From: george.triantafillou at oracle.com (George Triantafillou) Date: Mon, 15 Sep 2014 17:18:43 -0400 Subject: RFR (XS): 8058504 [TESTBUG] Temporarily disable failing test runtime/NMT/MallocTrackingVerify.java In-Reply-To: <027501cfd129$84b989d0$8e2c9d70$@oracle.com> References: <54175338.30301@oracle.com> <027501cfd129$84b989d0$8e2c9d70$@oracle.com> Message-ID: <541757B3.8050003@oracle.com> Thanks Christian. -George On 9/15/2014 5:10 PM, Christian Tornqvist wrote: > Hi George, > > This looks good. > > Thanks, > Christian > > -----Original Message----- > From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf Of George Triantafillou > Sent: Monday, September 15, 2014 5:00 PM > To: hotspot-dev at openjdk.java.net > Subject: RFR (XS): 8058504 [TESTBUG] Temporarily disable failing test runtime/NMT/MallocTrackingVerify.java > > Please review this updated test for 8058504. This test is intermittently failing in JPRT and needs to be disabled until the root cause is determined. The related issue is JDK-8058251. 
> > Webrev: http://cr.openjdk.java.net/~gtriantafill/8058504/webrev/ > > Bug: https://bugs.openjdk.java.net/browse/JDK-8058504 > > The fix was tested locally on Linux with jtreg. > > Thanks. > > -George > > From george.triantafillou at oracle.com Mon Sep 15 21:19:23 2014 From: george.triantafillou at oracle.com (George Triantafillou) Date: Mon, 15 Sep 2014 17:19:23 -0400 Subject: RFR (XS): 8058504 [TESTBUG] Temporarily disable failing test runtime/NMT/MallocTrackingVerify.java In-Reply-To: <541755EA.5030307@oracle.com> References: <54175338.30301@oracle.com> <541755EA.5030307@oracle.com> Message-ID: <541757DB.40606@oracle.com> Thanks Harold. -George On 9/15/2014 5:11 PM, harold seigel wrote: > Hi George, > > The change looks good. > > Harold > > On 9/15/2014 4:59 PM, George Triantafillou wrote: >> Please review this updated test for 8058504. This test is >> intermittently failing in JPRT and needs to be disabled until the >> root cause is determined. The related issue is JDK-8058251. >> >> Webrev: http://cr.openjdk.java.net/~gtriantafill/8058504/webrev/ >> >> Bug: https://bugs.openjdk.java.net/browse/JDK-8058504 >> >> The fix was tested locally on Linux with jtreg. >> >> Thanks. >> >> -George >> > From igor.veresov at oracle.com Mon Sep 15 23:18:32 2014 From: igor.veresov at oracle.com (Igor Veresov) Date: Mon, 15 Sep 2014 16:18:32 -0700 Subject: [9] RFR (L): 8015774: Add support for multiple code heaps In-Reply-To: <540979F9.5080407@oracle.com> References: <53FF1BF6.8070600@oracle.com> <540979F9.5080407@oracle.com> Message-ID: <45497D70-63BD-4A76-9F27-B3E3457638DD@oracle.com> A little thing that worried me.. c2i and i2c adapters go to the NonMethod space (via BufferBlob::new), which is fixed and not scaled. However MH intrinsics (and native adapters) go to MethodNonProfiled space. Since the c2i and i2c adapters are signature polymorphic (and each MH intrinsic has them) perhaps they should go to the MethodNonProfiled space as well?
AdapterBlob would have to have a different new operator and the OOM handling code in AdapterHandlerLibrary::get_adapter() will have to be adjusted. Nits: codeCache.cpp: 141 // Initialize array of CodeHeaps 142 GrowableArray<CodeHeap*>* CodeCache::_heaps = new(ResourceObj::C_HEAP, mtCode) GrowableArray<CodeHeap*> (3, true); Perhaps 3 should be a named constant. Maybe you can put it in the enum with segment types you have in CodeBlobType? advancedThresholdPolicy.cpp: 213 // Increase C1 compile threshold when the code cache is filled more 214 // than specified by IncreaseFirstTierCompileThresholdAt percentage. 215 // The main intention is to keep enough free space for C2 compiled code 216 // to achieve peak performance if the code cache is under stress. 217 if ((TieredStopAtLevel == CompLevel_full_optimization) && (level != CompLevel_full_optimization)) { 218 double current_reverse_free_ratio = CodeCache::reverse_free_ratio(CodeCache::get_code_blob_type(level)); 219 if (current_reverse_free_ratio > _increase_threshold_at_ratio) { 220 k *= exp(current_reverse_free_ratio - _increase_threshold_at_ratio); 221 } 222 } Do you think it still makes sense to do that with segmented code cache? C1 methods are not really going to take space from C2 methods, right? Perhaps it should be predicated off for segmented code cache? Otherwise looks good. igor On Sep 5, 2014, at 1:53 AM, Tobias Hartmann wrote: > Hi, > > could I get another review for this? > > Latest webrev is: http://cr.openjdk.java.net/~thartmann/8015774/webrev.07/ > > Thanks, > Tobias > > On 28.08.2014 14:09, Tobias Hartmann wrote: >> Hi, >> >> the segmented code cache JEP is now targeted. Please review the final implementation before integration. The previous RFR, including a short description, can be found here [1].
>> >> JEP: https://bugs.openjdk.java.net/browse/JDK-8043304 >> Bug: https://bugs.openjdk.java.net/browse/JDK-8015774 >> Implementation: http://cr.openjdk.java.net/~thartmann/8015774/webrev.03/ >> JDK-Test fix: http://cr.openjdk.java.net/~thartmann/8015774_jdk_test/webrev.00/ >> >> Changes since the last review: >> - Merged with other changes (for example, G1 class unloading changes [2]) >> - Fixed some minor bugs that showed up during testing >> - Refactoring of 'NMethodIterator' and CodeCache implementation >> - Non-method CodeHeap size increased to 5 MB >> - Fallback solution: Store non-method code in the non-profiled code heap if there is not enough space in the non-method code heap (see 'CodeCache::allocate') >> >> Additional testing: >> - BigApps (Weblogic, Dacapo, runThese, Kitchensink) >> - Compiler and GC nightlies >> - jtreg tests >> - VM (NSK) Testbase >> - More performance testing (results attached to the bug) >> >> Thanks, >> Tobias >> >> [1] http://mail.openjdk.java.net/pipermail/hotspot-compiler-dev/2014-April/014098.html >> [2] https://bugs.openjdk.java.net/browse/JDK-8049421 > From david.holmes at oracle.com Tue Sep 16 02:01:36 2014 From: david.holmes at oracle.com (David Holmes) Date: Tue, 16 Sep 2014 12:01:36 +1000 Subject: More on memory barriers In-Reply-To: <541703DB.5030207@redhat.com> References: <541703DB.5030207@redhat.com> Message-ID: <54179A00.70105@oracle.com> Hi Andrew, On 16/09/2014 1:20 AM, Andrew Haley wrote: > I'm still looking at the best way to generate code for AArch64 volatiles, > and I've come across something really odd. > > void Parse::do_put_xxx(Node* obj, ciField* field, bool is_field) { > bool is_vol = field->is_volatile(); > // If reference is volatile, prevent following memory ops from > // floating down past the volatile write. Also prevents commoning > // another volatile read. 
> if (is_vol) insert_mem_bar(Op_MemBarRelease); > > A release here is much too strong: the JSR-133 cookbook says that we > only need a StoreStore here, presumably because we're going to emit a > full StoreLoad barrier after the store to the volatile field. > > On a target where this makes an actual difference to code quality, > this matters. Does anyone here understand what this release fence is > for? My understanding of this is that, where the cookbook defines the barriers needed between given pairs of accesses, the hotspot JITs don't look at things at that level but just look at the volatile access in isolation. So a volatile read and a volatile write are surrounded by whatever pre- and post-barriers are needed in the worst-case. It's possible that C2 also has provision to later remove redundant barriers when examining a larger sequence of generated code, but I'm not familiar with those aspects of C2. David > Thanks, > Andrew. > From tobias.hartmann at oracle.com Tue Sep 16 05:06:01 2014 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Tue, 16 Sep 2014 07:06:01 +0200 Subject: [8u40] RFR(s): backport of 8035328 and 8044538 In-Reply-To: <54172A4C.7060003@oracle.com> References: <54168C44.50208@oracle.com> <54172A4C.7060003@oracle.com> Message-ID: <5417C539.2090006@oracle.com> Thanks, Vladimir. Best, Tobias On 15.09.2014 20:05, Vladimir Kozlov wrote: > This looks good. > > Thanks, > Vladimir > > On 9/14/14 11:50 PM, Tobias Hartmann wrote: >> Hi, >> >> please review the following backports to 8u40. 
>> >> 8035328: closed/compiler/6595044/Main.java failed with timeout >> https://bugs.openjdk.java.net/browse/JDK-8035328 >> http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/46e85b1633d7 >> http://cr.openjdk.java.net/~thartmann/8035328/webrev.00/ >> >> 8044538: assert(which != imm_operand) failed: instruction is not a >> movq reg, imm64 >> https://bugs.openjdk.java.net/browse/JDK-8044538 >> http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/f3624d042de0 >> http://cr.openjdk.java.net/~thartmann/8044538/webrev.04/ >> >> The changes were pushed to 9 some weeks ago and nightly testing >> showed no problems. All changes apply cleanly to 8u40. >> >> Thanks, >> Tobias >> >> >> From david.holmes at oracle.com Tue Sep 16 06:44:12 2014 From: david.holmes at oracle.com (David Holmes) Date: Tue, 16 Sep 2014 16:44:12 +1000 Subject: RFR: JDK-8056999 Make hotspot builds less verbose on default log level In-Reply-To: <5416CA7D.7000002@oracle.com> References: <54046278.7050404@oracle.com> <5407EC21.8050709@oracle.com> <54098813.6070903@oracle.com> <5409A3D0.3070208@oracle.com> <5416CA7D.7000002@oracle.com> Message-ID: <5417DC3C.9000109@oracle.com> Hi Magnus, This seems okay to me. Just for everyone else's benefit. We can pass LOG=info etc as a configure arg to set the default log level. But we can also pass LOG_LEVEL=XXX as a make arg to override that default. Thanks, David On 15/09/2014 9:16 PM, Magnus Ihse Bursie wrote: > Here is the full review of this fix. I have now applied the same pattern > as I used on linux to aix, bsd and solaris as well. > > It turned out that windows was problematic. Due to the big difference > between windows and the unix versions (different and/or limited nmake > flexibility, different and/or limited shell functionality and > differences in design in the hotspot make files), I did not manage to > get a working Windows version of this fix in a reasonable time frame. > (This fix has already taken more time than I wanted to spend on it.) 
> > I suggest that this fix nevertheless is an improvement on the other > platforms, and that I open a new bug report for the remaining work on > Windows. > > And here's what I wrote about the preliminary version of this fix: > > Even in the default log level ("warn"), hotspot builds are extremely > verbose. With the new jigsaw build system, hotspot is built in parallel > with the jdk, and the sheer amount of hotspot output makes the jdk > output practically disappear. > > This fix will make the following changes: > * When hotspot is built from the top dir with the default log level, all > repetitive and purely informative output is hidden (e.g. names of files > compiled, and the "INFO:" blobs). > * When hotspot is built from the top dir, with any other log level > (info, debug, trace), all output will be there, as before. > * When hotspot is built from the hotspot repo, all output will be there, > as before. > > I have tested building on JPRT with LOG=debug and LOG=warn, and it all > looks as it should as far as I could tell. > > Bug: https://bugs.openjdk.java.net/browse/JDK-8056999 > WebRev: > http://cr.openjdk.java.net/~ihse/JDK-8056999-less-verbose-hotspot-builds/webrev.02 > > > /Magnus From david.holmes at oracle.com Tue Sep 16 06:51:47 2014 From: david.holmes at oracle.com (David Holmes) Date: Tue, 16 Sep 2014 16:51:47 +1000 Subject: RFR(S): 8058345: Refactor native stack printing from vmError.cpp to debug.cpp to make it available in gdb as well In-Reply-To: References: Message-ID: <5417DE03.6060301@oracle.com> Hi Volker, On 13/09/2014 5:15 AM, Volker Simonis wrote: > Hi, > > could you please review and sponsor the following small change which > should make debugging a little more comfortable (at least on Linux for > now): > > http://cr.openjdk.java.net/~simonis/webrevs/8058345/ > https://bugs.openjdk.java.net/browse/JDK-8058345 > > In the hs_err files we have a nice mixed stack trace which contains > both Java and native frames.
> It would be nice if we could make this functionality available from > within gdb during debugging sessions (until now we can only print the > pure Java stack with the "ps()" helper function from debug.cpp). > > This new feature can be easily achieved by refactoring the > corresponding stack printing code from VMError::report() in > vmError.cpp into its own method in debug.cpp. This change extracts > that code into the new function 'print_native_stack()' in debug.cpp > without changing anything of the functionality. Why does it need to move to debug.cpp to allow this ? David ----- > It also adds some helper functions which make it easy to call the new > 'print_native_stack()' method from within gdb. There's the new helper > function 'pns(frame f)' which takes a frame argument and calls > 'print_native_stack()'. We need the frame argument because gdb inserts > a dummy frame for every call and we can't easily walk over this dummy > frame from our stack printing routine. > > To simplify the creation of the frame object, I've added the helper functions: > > extern "C" frame make_frame(intptr_t* sp, intptr_t* fp, address pc) { > return frame(sp, fp, pc); > } > > for x86 (in frame_x86.cpp) and > > extern "C" frame make_frame(intptr_t* sp, address pc) { > return frame(sp, pc); > } > > for ppc64 in frame_ppc.cpp. With these helper functions we can now > easily get a mixed stack trace of a Java thread in gdb (see below). 
> > All the helper functions are protected by '#ifndef PRODUCT' > > Thank you and best regards, > Volker > > > (gdb) call pns(make_frame($sp, $rbp, $pc)) > > "Executing pns" > Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code) > C [libpthread.so.0+0xc0fe] pthread_cond_timedwait+0x13e > V [libjvm.so+0x96c4c1] os::sleep(Thread*, long, bool)+0x1a1 > V [libjvm.so+0x75f442] JVM_Sleep+0x312 > j java.lang.Thread.sleep(J)V+0 > j CrashNative.crashIt(Lsun/misc/Unsafe;I)V+10 > j CrashNative.doIt()V+45 > v ~StubRoutines::call_stub > V [libjvm.so+0x71599f] JavaCalls::call_helper(JavaValue*, > methodHandle*, JavaCallArguments*, Thread*)+0xf8f > V [libjvm.so+0x9eab75] Reflection::invoke(instanceKlassHandle, > methodHandle, Handle, bool, objArrayHandle, BasicType, objArrayHandle, > bool, Thread*) [clone .constprop.218]+0xa25 > V [libjvm.so+0x9eb838] Reflection::invoke_method(oopDesc*, Handle, > objArrayHandle, Thread*)+0x1c8 > V [libjvm.so+0x7637ae] JVM_InvokeMethod+0xfe > j sun.reflect.NativeMethodAccessorImpl.invoke0(Ljava/lang/reflect/Method;Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;+0 > j sun.reflect.NativeMethodAccessorImpl.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;+100 > j sun.reflect.DelegatingMethodAccessorImpl.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;+6 > j java.lang.reflect.Method.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;+56 > j CrashNative.mainJava()V+32 > v ~StubRoutines::call_stub > V [libjvm.so+0x71599f] JavaCalls::call_helper(JavaValue*, > methodHandle*, JavaCallArguments*, Thread*)+0xf8f > V [libjvm.so+0x7384f5] jni_invoke_static(JNIEnv_*, JavaValue*, > _jobject*, JNICallType, _jmethodID*, JNI_ArgumentPusher*, Thread*) > [clone .isra.238] [clone .constprop.250]+0x385 > V [libjvm.so+0x73b3d7] jni_CallStaticVoidMethodV+0xe7 > C [libCrashNative.so+0x9a9] JNIEnv_::CallStaticVoidMethod(_jclass*, > _jmethodID*, ...)+0xb9 > C [libCrashNative.so+0xa10] 
step3(JNIEnv_*, _jobject*)+0x65 > C [libCrashNative.so+0xa69] step2(JNIEnv_*, _jobject*)+0x57 > C [libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25 > C [libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25 > C [libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25 > C [libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25 > C [libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25 > C [libCrashNative.so+0xa8e] step1(JNIEnv_*, _jobject*)+0x23 > C [libCrashNative.so+0x87f] Java_CrashNative_nativeMethod+0x23 > j CrashNative.nativeMethod()V+0 > j CrashNative.main([Ljava/lang/String;)V+9 > v ~StubRoutines::call_stub > V [libjvm.so+0x71599f] JavaCalls::call_helper(JavaValue*, > methodHandle*, JavaCallArguments*, Thread*)+0xf8f > V [libjvm.so+0x7384f5] jni_invoke_static(JNIEnv_*, JavaValue*, > _jobject*, JNICallType, _jmethodID*, JNI_ArgumentPusher*, Thread*) > [clone .isra.238] [clone .constprop.250]+0x385 > V [libjvm.so+0x73b2b0] jni_CallStaticVoidMethod+0x170 > C [libjli.so+0x742a] JavaMain+0x65a > C [libpthread.so.0+0x7e9a] start_thread+0xda > From tobias.hartmann at oracle.com Tue Sep 16 07:10:06 2014 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Tue, 16 Sep 2014 09:10:06 +0200 Subject: [9] RFR (L): 8015774: Add support for multiple code heaps In-Reply-To: <45497D70-63BD-4A76-9F27-B3E3457638DD@oracle.com> References: <53FF1BF6.8070600@oracle.com> <540979F9.5080407@oracle.com> <45497D70-63BD-4A76-9F27-B3E3457638DD@oracle.com> Message-ID: <5417E24E.5060503@oracle.com> Hi Igor, thanks for the review. On 16.09.2014 01:18, Igor Veresov wrote: > A little thing that worried me.. c2i go to i2c adapters go to the > NonMethod space (via BufferBlob::new), which is fixed and not scaled. > However MH intrinsics (and native adapters) go to MethodNonProfile > space. Since the number of c2i and i2c are signature polymorphic (and > each MH intrinsic has them) perhaps they should go to the > MethodNonProfiled space as well? 
AdapterBlob would have to have a > different new operator and the OOM handing code in > AdapterHandleLibrary::get_adapter() will have to adjusted. If the NonMethod segment is full we allocate new BufferBlobs in the MethodNonProfiled segment. See 'CodeCache::allocate': if (SegmentedCodeCache && (code_blob_type == CodeBlobType::NonMethod)) { // Fallback solution: Store non-method code in the non-profiled code heap return allocate(size, CodeBlobType::MethodNonProfiled, is_critical); } In the case of c2i and i2c adapters we first try to allocate them in the NonMethod segment and if this fails "fall back" to the MethodNonProfiled segment. The main advantage of this solution is that we avoid having non-method code in the method segments as long as possible. > Nits: > > codeCache.cpp: > 141 // Initialize array of CodeHeaps > 142 GrowableArray* CodeCache::_heaps = new(ResourceObj::C_HEAP, mtCode) GrowableArray (3, true); > Perhaps 3 should be a named constant. May be you can put it in the > enum with segment types you have in CodeBlobType ? Yes, I replaced 3 by the existing constant 'CodeBlobType::All'. > advancedThresholdPolicy.cpp: > 213 // Increase C1 compile threshold when the code cache is filled more > 214 // than specified by IncreaseFirstTierCompileThresholdAt percentage. > 215 // The main intention is to keep enough free space for C2 compiled code > 216 // to achieve peak performance if the code cache is under stress. > 217 if ((TieredStopAtLevel == CompLevel_full_optimization) && (level != CompLevel_full_optimization)) { > 218 double current_reverse_free_ratio = CodeCache::reverse_free_ratio(CodeCache::get_code_blob_type(level)); > 219 if (current_reverse_free_ratio > _increase_threshold_at_ratio) { > 220 k *= exp(current_reverse_free_ratio - _increase_threshold_at_ratio); > 221 } > 222 } > Do you think it still makes sense to do that with segmented code > cache? C1 methods are not really going to take space from C2 methods, > right? 
Perhaps it should be predicated off for segmented code cache? Thanks for catching this. Yes, C1 methods do not take space from C2 methods. I tried to disable that part some time ago during development and it caused problems with too much C1 code being generated. The sweeper did not remove methods fast enough and the profiled code heap filled up. I would prefer to re-investigate the removal of these lines after the initial integration of the segmented code cache. I planned to file an RFE for "per segment sweeping", i.e., sweeping the profiled segment more often. Also, Albert's fix for JDK-8046809 [1] will probably affect the behaviour of the sweeper. What do you think? New webrev: http://cr.openjdk.java.net/~thartmann/8015774/webrev.08/ Thanks, Tobias [1] https://bugs.openjdk.java.net/browse/JDK-8046809 > > Otherwise looks good. > > igor > > > On Sep 5, 2014, at 1:53 AM, Tobias Hartmann > > wrote: > >> Hi, >> >> could I get another review for this? >> >> Latest webrev is: >> http://cr.openjdk.java.net/~thartmann/8015774/webrev.07/ >> >> >> Thanks, >> Tobias >> >> On 28.08.2014 14:09, Tobias Hartmann wrote: >>> Hi, >>> >>> the segmented code cache JEP is now targeted. Please review the >>> final implementation before integration. The previous RFR, including >>> a short description, can be found here [1]. 
>>> >>> JEP: https://bugs.openjdk.java.net/browse/JDK-8043304 >>> Bug: https://bugs.openjdk.java.net/browse/JDK-8015774 >>> Implementation: >>> http://cr.openjdk.java.net/~thartmann/8015774/webrev.03/ >>> >>> JDK-Test fix: >>> http://cr.openjdk.java.net/~thartmann/8015774_jdk_test/webrev.00/ >>> >>> >>> Changes since the last review: >>> - Merged with other changes (for example, G1 class unloading changes >>> [2]) >>> - Fixed some minor bugs that showed up during testing >>> - Refactoring of 'NMethodIterator' and CodeCache implementation >>> - Non-method CodeHeap size increased to 5 MB >>> - Fallback solution: Store non-method code in the non-profiled code >>> heap if there is not enough space in the non-method code heap (see >>> 'CodeCache::allocate') >>> >>> Additional testing: >>> - BigApps (Weblogic, Dacapo, runThese, Kitchensink) >>> - Compiler and GC nightlies >>> - jtreg tests >>> - VM (NSK) Testbase >>> - More performance testing (results attached to the bug) >>> >>> Thanks, >>> Tobias >>> >>> [1] >>> http://mail.openjdk.java.net/pipermail/hotspot-compiler-dev/2014-April/014098.html >>> [2] https://bugs.openjdk.java.net/browse/JDK-8049421 >> > From aph at redhat.com Tue Sep 16 07:22:33 2014 From: aph at redhat.com (Andrew Haley) Date: Tue, 16 Sep 2014 08:22:33 +0100 Subject: [9] RFR (L): 8015774: Add support for multiple code heaps In-Reply-To: <540081C9.7020307@oracle.com> References: <53FF1BF6.8070600@oracle.com> <540081C9.7020307@oracle.com> Message-ID: <5417E539.80300@redhat.com> This has some potential impact on AArch64. The issue there is that branch instructions have a range of +- 128Mb. Any further than that and you have to use multiple instructions, and then you have problems with thread-safe patching. So, is it possible to allocate all of the code heaps in a single block, so that we won't exceed that range? Andrew. 
From tobias.hartmann at oracle.com Tue Sep 16 07:31:58 2014 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Tue, 16 Sep 2014 09:31:58 +0200 Subject: [9] RFR (L): 8015774: Add support for multiple code heaps In-Reply-To: <5417E539.80300@redhat.com> References: <53FF1BF6.8070600@oracle.com> <540081C9.7020307@oracle.com> <5417E539.80300@redhat.com> Message-ID: <5417E76E.5050709@oracle.com> Hi Andrew, thanks for your feedback, please see comments inline. On 16.09.2014 09:22, Andrew Haley wrote: > This has some potential impact on AArch64. > > The issue there is that branch instructions have a range of +- 128Mb. > Any further than that and you have to use multiple instructions, and > then you have problems with thread-safe patching. > > So, is it possible to allocate all of the code heaps in a single > block, so that we won't exceed that range? The code heaps are already allocated next to each other because we have to make sure that the overall memory size does not exceed 2 GB (as this is a general requirement for 32 bit immediates). Further, the code cache segmentation is only enabled with TieredCompilation and a ReservedCodeCacheSize >= 240 MB. Does that answer your question? Thanks, Tobias > Andrew. From aph at redhat.com Tue Sep 16 07:38:23 2014 From: aph at redhat.com (Andrew Haley) Date: Tue, 16 Sep 2014 08:38:23 +0100 Subject: More on memory barriers In-Reply-To: <54174180.4060100@oracle.com> References: <541703DB.5030207@redhat.com> <54174180.4060100@oracle.com> Message-ID: <5417E8EF.7010708@redhat.com> On 15/09/14 20:44, Dean Long wrote: > If volatile store uses AArch64 "stlr" and volatile load uses "ldar", > then is that enough (no additional barriers, including StoreLoad, > required)? That's my understanding from the comments in > orderAccess.hpp regarding ia64 st.rel and ld.acq. Not quite: we'd still need a StoreLoad even after a stlr. I don't think I can use stlr without making changes to C2. 
My problem is that MemBar nodes are emitted in places where they are needed (e.g. after an object is created) and places where they are not needed (e.g. before a volatile store) and in the back end I can't tell which is which. Ideally there would be more barrier types, and then I could use stlr, but at the moment I emit barriers. In practice I'm not sure that it makes any difference. Code density is about the same with separate barriers because store release instructions have a much more restricted set of addressing modes. Andrew. From aph at redhat.com Tue Sep 16 07:40:19 2014 From: aph at redhat.com (Andrew Haley) Date: Tue, 16 Sep 2014 08:40:19 +0100 Subject: More on memory barriers In-Reply-To: References: <541703DB.5030207@redhat.com> <54170F06.2070207@cs.oswego.edu> <54171026.8060700@redhat.com> Message-ID: <5417E963.5060103@redhat.com> On 15/09/14 17:20, Vitaly Davidovich wrote: > Looking at hg history, MemBarStoreStore was added a few years ago, whereas > the code in question is much older. The comments in the changelist adding > MemBarStoreStore seem to indicate it was done to address a specific issue, > and my guess is that it wasn't "retrofitted" into all possible places. That sounds plausible. I'll change this to a StoreStore in the AArch64 port and do some testing. Andrew. From aph at redhat.com Tue Sep 16 07:46:19 2014 From: aph at redhat.com (Andrew Haley) Date: Tue, 16 Sep 2014 08:46:19 +0100 Subject: [9] RFR (L): 8015774: Add support for multiple code heaps In-Reply-To: <5417E76E.5050709@oracle.com> References: <53FF1BF6.8070600@oracle.com> <540081C9.7020307@oracle.com> <5417E539.80300@redhat.com> <5417E76E.5050709@oracle.com> Message-ID: <5417EACB.3000303@redhat.com> On 16/09/14 08:31, Tobias Hartmann wrote: > Does that answer your question? Yes thanks, Andrew. 
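Returning to the memory-barrier subthread: the conservative per-access-pair rules from the JSR-133 cookbook that this exchange keeps referring to can be written out as a small lookup table. This is a simplified, non-authoritative sketch (monitor enter/exit rows omitted); it shows why a store-release instruction alone does not discharge the StoreLoad requirement after a volatile store:

```python
# Barrier the JSR-133 cookbook conservatively requires between two
# successive accesses: (first access, second access) -> barrier name,
# or None where no barrier is needed.
REQUIRED_BARRIER = {
    ("normal load",    "volatile store"): "LoadStore",
    ("normal store",   "volatile store"): "StoreStore",
    ("volatile load",  "normal load"):    "LoadLoad",
    ("volatile load",  "normal store"):   "LoadStore",
    ("volatile load",  "volatile load"):  "LoadLoad",
    ("volatile load",  "volatile store"): "LoadStore",
    ("volatile store", "normal load"):    None,
    ("volatile store", "normal store"):   None,
    ("volatile store", "volatile load"):  "StoreLoad",  # the expensive one
    ("volatile store", "volatile store"): "StoreStore",
}

# An stlr covers the release side, but a following volatile load still
# needs the StoreLoad entry below to be honoured somehow.
assert REQUIRED_BARRIER[("volatile store", "volatile load")] == "StoreLoad"
```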
From igor.veresov at oracle.com Tue Sep 16 08:23:54 2014 From: igor.veresov at oracle.com (Igor Veresov) Date: Tue, 16 Sep 2014 01:23:54 -0700 Subject: [9] RFR (L): 8015774: Add support for multiple code heaps In-Reply-To: <5417E24E.5060503@oracle.com> References: <53FF1BF6.8070600@oracle.com> <540979F9.5080407@oracle.com> <45497D70-63BD-4A76-9F27-B3E3457638DD@oracle.com> <5417E24E.5060503@oracle.com> Message-ID: <0BFB57F5-F67C-4168-AE03-80B03844E667@oracle.com> On Sep 16, 2014, at 12:10 AM, Tobias Hartmann wrote: > Hi Igor, > > thanks for the review. > > On 16.09.2014 01:18, Igor Veresov wrote: >> A little thing that worried me.. c2i and i2c adapters go to the NonMethod space (via BufferBlob::new), which is fixed and not scaled. However MH intrinsics (and native adapters) go to MethodNonProfile space. Since the number of c2i and i2c are signature polymorphic (and each MH intrinsic has them) perhaps they should go to the MethodNonProfiled space as well? AdapterBlob would have to have a different new operator and the OOM handling code in AdapterHandleLibrary::get_adapter() will have to be adjusted. > > If the NonMethod segment is full we allocate new BufferBlobs in the MethodNonProfiled segment. See 'CodeCache::allocate': > > if (SegmentedCodeCache && (code_blob_type == CodeBlobType::NonMethod)) { > // Fallback solution: Store non-method code in the non-profiled code heap > return allocate(size, CodeBlobType::MethodNonProfiled, is_critical); > } > > In the case of c2i and i2c adapters we first try to allocate them in the NonMethod segment and if this fails "fall back" to the MethodNonProfiled segment. The main advantage of this solution is that we avoid having non-method code in the method segments as long as possible. Ok, makes sense. > >> Nits: >> >> codeCache.cpp: >> 141 // Initialize array of CodeHeaps >> 142 GrowableArray<CodeHeap*>* CodeCache::_heaps = new(ResourceObj::C_HEAP, mtCode) GrowableArray<CodeHeap*> (3, true); >> Perhaps 3 should be a named constant.
Maybe you can put it in the enum with segment types you have in CodeBlobType ? > > Yes, I replaced 3 by the existing constant 'CodeBlobType::All'. > >> advancedThresholdPolicy.cpp: >> 213 // Increase C1 compile threshold when the code cache is filled more >> 214 // than specified by IncreaseFirstTierCompileThresholdAt percentage. >> 215 // The main intention is to keep enough free space for C2 compiled code >> 216 // to achieve peak performance if the code cache is under stress. >> 217 if ((TieredStopAtLevel == CompLevel_full_optimization) && (level != CompLevel_full_optimization)) { >> 218 double current_reverse_free_ratio = CodeCache::reverse_free_ratio(CodeCache::get_code_blob_type(level)); >> 219 if (current_reverse_free_ratio > _increase_threshold_at_ratio) { >> 220 k *= exp(current_reverse_free_ratio - _increase_threshold_at_ratio); >> 221 } >> 222 } >> Do you think it still makes sense to do that with segmented code cache? C1 methods are not really going to take space from C2 methods, right? Perhaps it should be predicated off for segmented code cache? > > Thanks for catching this. Yes, C1 methods do not take space from C2 methods. I tried to disable that part some time ago during development and it caused problems with too much C1 code being generated. The sweeper did not remove methods fast enough and the profiled code heap filled up. > > I would prefer to re-investigate the removal of these lines after the initial integration of the segmented code cache. I planned to file an RFE for "per segment sweeping", i.e., sweeping the profiled segment more often. Also, Albert's fix for JDK-8046809 [1] will probably affect the behaviour of the sweeper. > > What do you think? Having a second look, I think this code is fine. It's ok to increase the threshold for profiled code if there's too much space pressure. Even if we don't compile it with C1, we'll start profiling in the interpreter, which is a reasonable behavior.
However we might want to adjust the profiled code cache size if the behavior you describe happens with real apps because that's suboptimal. > > New webrev: http://cr.openjdk.java.net/~thartmann/8015774/webrev.08/ Looks fine. Thanks, igor > > Thanks, > Tobias > > [1] https://bugs.openjdk.java.net/browse/JDK-8046809 > >> >> Otherwise looks good. >> >> igor >> >> >> On Sep 5, 2014, at 1:53 AM, Tobias Hartmann wrote: >> >>> Hi, >>> >>> could I get another review for this? >>> >>> Latest webrev is: http://cr.openjdk.java.net/~thartmann/8015774/webrev.07/ >>> >>> Thanks, >>> Tobias >>> >>> On 28.08.2014 14:09, Tobias Hartmann wrote: >>>> Hi, >>>> >>>> the segmented code cache JEP is now targeted. Please review the final implementation before integration. The previous RFR, including a short description, can be found here [1]. >>>> >>>> JEP: https://bugs.openjdk.java.net/browse/JDK-8043304 >>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8015774 >>>> Implementation: http://cr.openjdk.java.net/~thartmann/8015774/webrev.03/ >>>> JDK-Test fix: http://cr.openjdk.java.net/~thartmann/8015774_jdk_test/webrev.00/ >>>> >>>> Changes since the last review: >>>> - Merged with other changes (for example, G1 class unloading changes [2]) >>>> - Fixed some minor bugs that showed up during testing >>>> - Refactoring of 'NMethodIterator' and CodeCache implementation >>>> - Non-method CodeHeap size increased to 5 MB >>>> - Fallback solution: Store non-method code in the non-profiled code heap if there is not enough space in the non-method code heap (see 'CodeCache::allocate') >>>> >>>> Additional testing: >>>> - BigApps (Weblogic, Dacapo, runThese, Kitchensink) >>>> - Compiler and GC nightlies >>>> - jtreg tests >>>> - VM (NSK) Testbase >>>> - More performance testing (results attached to the bug) >>>> >>>> Thanks, >>>> Tobias >>>> >>>> [1] http://mail.openjdk.java.net/pipermail/hotspot-compiler-dev/2014-April/014098.html >>>> [2] https://bugs.openjdk.java.net/browse/JDK-8049421 >>> >> >
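For readers skimming the advancedThresholdPolicy hunk discussed in this thread, its effect is easy to model: the scaling factor k is left alone until the reverse free ratio crosses the cutoff, then grows exponentially with the overshoot. The following is an illustrative Python stand-in, not HotSpot code; names follow the quoted C++ and the sample ratios are invented:

```python
import math

def scale_c1_threshold(k, current_reverse_free_ratio, increase_threshold_at_ratio):
    # Mirrors lines 217-222 of the quoted advancedThresholdPolicy.cpp hunk:
    # inflate the C1 compile-threshold factor only once code-cache
    # pressure passes the configured cutoff.
    if current_reverse_free_ratio > increase_threshold_at_ratio:
        k *= math.exp(current_reverse_free_ratio - increase_threshold_at_ratio)
    return k

print(scale_c1_threshold(1.0, 2.0, 4.0))  # below the cutoff: k stays 1.0
print(scale_c1_threshold(1.0, 6.0, 4.0))  # above: k becomes e**2, about 7.39
```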
From tobias.hartmann at oracle.com Tue Sep 16 08:31:44 2014 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Tue, 16 Sep 2014 10:31:44 +0200 Subject: [9] RFR (L): 8015774: Add support for multiple code heaps In-Reply-To: <0BFB57F5-F67C-4168-AE03-80B03844E667@oracle.com> References: <53FF1BF6.8070600@oracle.com> <540979F9.5080407@oracle.com> <45497D70-63BD-4A76-9F27-B3E3457638DD@oracle.com> <5417E24E.5060503@oracle.com> <0BFB57F5-F67C-4168-AE03-80B03844E667@oracle.com> Message-ID: <5417F570.7030007@oracle.com> Igor, thanks for the review. Best, Tobias On 16.09.2014 10:23, Igor Veresov wrote: > > On Sep 16, 2014, at 12:10 AM, Tobias Hartmann > > wrote: > >> Hi Igor, >> >> thanks for the review. >> >> On 16.09.2014 01:18, Igor Veresov wrote: >>> A little thing that worried me.. c2i go to i2c adapters go to the >>> NonMethod space (via BufferBlob::new), which is fixed and not >>> scaled. However MH intrinsics (and native adapters) go to >>> MethodNonProfile space. Since the number of c2i and i2c are >>> signature polymorphic (and each MH intrinsic has them) perhaps they >>> should go to the MethodNonProfiled space as well? AdapterBlob would >>> have to have a different new operator and the OOM handing code in >>> AdapterHandleLibrary::get_adapter() will have to adjusted. >> >> If the NonMethod segment is full we allocate new BufferBlobs in the >> MethodNonProfiled segment. See 'CodeCache::allocate': >> >> if (SegmentedCodeCache && (code_blob_type == >> CodeBlobType::NonMethod)) { >> // Fallback solution: Store non-method code in the >> non-profiled code heap >> return allocate(size, CodeBlobType::MethodNonProfiled, >> is_critical); >> } >> >> In the case of c2i and i2c adapters we first try to allocate them in >> the NonMethod segment and if this fails "fall back" to the >> MethodNonProfiled segment. The main advantage of this solution is >> that we avoid having non-method code in the method segments as long >> as possible. > > Ok, makes sense. 
> >> >>> Nits: >>> >>> codeCache.cpp: >>> 141 // Initialize array of CodeHeaps >>> 142 GrowableArray* CodeCache::_heaps = new(ResourceObj::C_HEAP, mtCode) GrowableArray (3, true); >>> Perhaps 3 should be a named constant. May be you can put it in the >>> enum with segment types you have in CodeBlobType ? >> >> Yes, I replaced 3 by the existing constant 'CodeBlobType::All'. >> >>> advancedThresholdPolicy.cpp: >>> 213 // Increase C1 compile threshold when the code cache is filled more >>> 214 // than specified by IncreaseFirstTierCompileThresholdAt percentage. >>> 215 // The main intention is to keep enough free space for C2 compiled code >>> 216 // to achieve peak performance if the code cache is under stress. >>> 217 if ((TieredStopAtLevel == CompLevel_full_optimization) && (level != CompLevel_full_optimization)) { >>> 218 double current_reverse_free_ratio = CodeCache::reverse_free_ratio(CodeCache::get_code_blob_type(level)); >>> 219 if (current_reverse_free_ratio > _increase_threshold_at_ratio) { >>> 220 k *= exp(current_reverse_free_ratio - _increase_threshold_at_ratio); >>> 221 } >>> 222 } >>> Do you think it still makes sense to do that with segmented code >>> cache? C1 methods are not really going to take space from C2 >>> methods, right? Perhaps it should be predicated off for segmented >>> code cache? >> >> Thanks for catching this. Yes, C1 methods do not take space from C2 >> methods. I tried to disable that part some time ago during >> development and it caused problems with too much C1 code being >> generated. The sweeper did not remove methods fast enough and the >> profiled code heap filled up. >> >> I would prefer to re-investigate the removal of these lines after the >> initial integration of the segmented code cache. I planned to file an >> RFE for "per segment sweeping", i.e., sweeping the profiled segment >> more often. Also, Albert's fix for JDK-8046809 [1] will probably >> affect the behaviour of the sweeper. >> >> What do you think? 
> > Having a second look, I think this code is fine. It's ok to increase > the threshold for profiled code if there's too much space pressure. > Even if we don't compile it with C1, we'll start profiling in the > interpreter, which is a reasonable behavior. However we might want to > adjust the profiled code cache size if the behavior you describe > happens with real apps because that's suboptimal. > >> >> New webrev: http://cr.openjdk.java.net/~thartmann/8015774/webrev.08/ > > > Looks fine. > > Thanks, > igor > > >> >> Thanks, >> Tobias >> >> [1] https://bugs.openjdk.java.net/browse/JDK-8046809 >> >>> >>> Otherwise looks good. >>> >>> igor >>> >>> >>> On Sep 5, 2014, at 1:53 AM, Tobias Hartmann >>> > wrote: >>> >>>> Hi, >>>> >>>> could I get another review for this? >>>> >>>> Latest webrev is: >>>> http://cr.openjdk.java.net/~thartmann/8015774/webrev.07/ >>>> >>>> >>>> Thanks, >>>> Tobias >>>> >>>> On 28.08.2014 14:09, Tobias Hartmann wrote: >>>>> Hi, >>>>> >>>>> the segmented code cache JEP is now targeted. Please review the >>>>> final implementation before integration. The previous RFR, >>>>> including a short description, can be found here [1].
>>>>> >>>>> JEP: https://bugs.openjdk.java.net/browse/JDK-8043304 >>>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8015774 >>>>> Implementation: >>>>> http://cr.openjdk.java.net/~thartmann/8015774/webrev.03/ >>>>> >>>>> JDK-Test fix: >>>>> http://cr.openjdk.java.net/~thartmann/8015774_jdk_test/webrev.00/ >>>>> >>>>> >>>>> Changes since the last review: >>>>> - Merged with other changes (for example, G1 class unloading >>>>> changes [2]) >>>>> - Fixed some minor bugs that showed up during testing >>>>> - Refactoring of 'NMethodIterator' and CodeCache implementation >>>>> - Non-method CodeHeap size increased to 5 MB >>>>> - Fallback solution: Store non-method code in the non-profiled >>>>> code heap if there is not enough space in the non-method code heap >>>>> (see 'CodeCache::allocate') >>>>> >>>>> Additional testing: >>>>> - BigApps (Weblogic, Dacapo, runThese, Kitchensink) >>>>> - Compiler and GC nightlies >>>>> - jtreg tests >>>>> - VM (NSK) Testbase >>>>> - More performance testing (results attached to the bug) >>>>> >>>>> Thanks, >>>>> Tobias >>>>> >>>>> [1] >>>>> http://mail.openjdk.java.net/pipermail/hotspot-compiler-dev/2014-April/014098.html >>>>> [2] https://bugs.openjdk.java.net/browse/JDK-8049421 >>>> >>> >> > From magnus.ihse.bursie at oracle.com Tue Sep 16 09:37:46 2014 From: magnus.ihse.bursie at oracle.com (Magnus Ihse Bursie) Date: Tue, 16 Sep 2014 11:37:46 +0200 Subject: RFR: JDK-8056999 Make hotspot builds less verbose on default log level In-Reply-To: <5417DC3C.9000109@oracle.com> References: <54046278.7050404@oracle.com> <5407EC21.8050709@oracle.com> <54098813.6070903@oracle.com> <5409A3D0.3070208@oracle.com> <5416CA7D.7000002@oracle.com> <5417DC3C.9000109@oracle.com> Message-ID: <541804EA.4000204@oracle.com> On 2014-09-16 08:44, David Holmes wrote: > Hi Magnus, > > This seems okay to me. Thank you. Since this is a hotspot change, I assume I need a second reviewer, and that it should be pushed into hs-rt, right? 
> > Just for everyone else's benefit. We can pass LOG=info etc as a > configure arg to set the default log level. But we can also pass > LOG_LEVEL=XXX as a make arg to override that default. Actually, you cannot set the default log level in configure in that way. :-( But that sounded like a good idea, though. I'll open a bug for it. Normally, LOG_LEVEL is considered an internal variable that should not be set by the user, but if you want to call the hotspot makefile directly, and still benefit from this change, then you need to set LOG_LEVEL (instead of LOG). LOG is processed in the top makefile, and one of the results of the processing is that LOG_LEVEL is set. /Magnus > > Thanks, > David > > On 15/09/2014 9:16 PM, Magnus Ihse Bursie wrote: >> Here is the full review of this fix. I have now applied the same pattern >> as I used on linux to aix, bsd and solaris as well. >> >> It turned out that windows was problematic. Due to the big difference >> between windows and the unix versions (different and/or limited nmake >> flexibility, different and/or limited shell functionality and >> differences in design in the hotspot make files), I did not manage to >> get a working Windows version of this fix in a reasonable time frame. >> (This fix has already taken more time than I wanted to spend on it.) >> >> I suggest that this fix nevertheless is an improvement on the other >> platforms, and that I open a new bug report for the remaining work on >> Windows. >> >> And here's what I wrote about the preliminary version of this fix: >> >> Even in the default log level ("warn"), hotspot builds are extremely >> verbose. With the new jigsaw build system, hotspot is built in parallel >> with the jdk, and the sheer amount of hotspot output makes the jdk >> output practically disappear. >> >> This fix will make the following changes: >> * When hotspot is built from the top dir with the default log level, all >> repetitive and purely informative output is hidden (e.g.
names of files >> compiled, and the "INFO:" blobs). >> * When hotspot is build from the top dir, with any other log level >> (info, debug, trace), all output will be there, as before. >> * When hotspot is build from the hotspot repo, all output will be there, >> as before. >> >> I have tested building on JPRT with LOG=debug and LOG=warn, and it all >> looks as it should as far as I could tell. >> >> Bug: https://bugs.openjdk.java.net/browse/JDK-8056999 >> WebRev: >> http://cr.openjdk.java.net/~ihse/JDK-8056999-less-verbose-hotspot-builds/webrev.02 >> >> >> >> /Magnus From erik.joelsson at oracle.com Tue Sep 16 09:48:03 2014 From: erik.joelsson at oracle.com (Erik Joelsson) Date: Tue, 16 Sep 2014 11:48:03 +0200 Subject: RFR: JDK-8056999 Make hotspot builds less verbose on default log level In-Reply-To: <5416CA7D.7000002@oracle.com> References: <54046278.7050404@oracle.com> <5407EC21.8050709@oracle.com> <54098813.6070903@oracle.com> <5409A3D0.3070208@oracle.com> <5416CA7D.7000002@oracle.com> Message-ID: <54180753.4030909@oracle.com> Looks good to me. /Erik On 2014-09-15 13:16, Magnus Ihse Bursie wrote: > Here is the full review of this fix. I have now applied the same > pattern as I used on linux to aix, bsd and solaris as well. > > It turned out that windows was problematic. Due to the big difference > between windows and the unix versions (different and/or limited nmake > flexibility, different and/or limited shell functionality and > differences in design in the hotspot make files), I did not manage to > get a working Windows version of this fix in a reasonable time frame. > (This fix has already taken more time than I wanted to spend on it.) > > I suggest that this fix nevertheless is an improvment on the other > platforms, and that I open a new bug report for the remaining work on > Windows. > > And here's what I wrote about the preliminary version of this fix: > > Even in the default log level ("warn"), hotspots builds are extremely > verbose. 
With the new jigsaw build system, hotspot is build in > parallel with the jdk, and the sheer amount of hotspot output makes > the jdk output practically disappear. > > This fix will make the following changes: > * When hotspot is build from the top dir with the default log level, > all repetetive and purely informative output is hidden (e.g. names of > files compiled, and the "INFO:" blobs). > * When hotspot is build from the top dir, with any other log level > (info, debug, trace), all output will be there, as before. > * When hotspot is build from the hotspot repo, all output will be > there, as before. > > I have tested building on JPRT with LOG=debug and LOG=warn, and it all > looks as it should as far as I could tell. > > Bug: https://bugs.openjdk.java.net/browse/JDK-8056999 > WebRev: > http://cr.openjdk.java.net/~ihse/JDK-8056999-less-verbose-hotspot-builds/webrev.02 > > /Magnus From magnus.ihse.bursie at oracle.com Tue Sep 16 09:55:45 2014 From: magnus.ihse.bursie at oracle.com (Magnus Ihse Bursie) Date: Tue, 16 Sep 2014 11:55:45 +0200 Subject: RFR: JDK-8056999 Make hotspot builds less verbose on default log level In-Reply-To: <54180753.4030909@oracle.com> References: <54046278.7050404@oracle.com> <5407EC21.8050709@oracle.com> <54098813.6070903@oracle.com> <5409A3D0.3070208@oracle.com> <5416CA7D.7000002@oracle.com> <54180753.4030909@oracle.com> Message-ID: <54180921.2020403@oracle.com> On 2014-09-16 11:48, Erik Joelsson wrote: > Looks good to me. Thank you Erik. 
/Magnus From dl at cs.oswego.edu Tue Sep 16 10:44:42 2014 From: dl at cs.oswego.edu (Doug Lea) Date: Tue, 16 Sep 2014 06:44:42 -0400 Subject: More on memory barriers In-Reply-To: <5417E8EF.7010708@redhat.com> References: <541703DB.5030207@redhat.com> <54174180.4060100@oracle.com> <5417E8EF.7010708@redhat.com> Message-ID: <5418149A.706@cs.oswego.edu> On 09/16/2014 03:38 AM, Andrew Haley wrote: > On 15/09/14 20:44, Dean Long wrote: >> If volatile store uses AArch64 "stlr" and volatile load uses "ldar", >> then is that enough (no additional barriers, including StoreLoad, >> required)? That's my understanding from the comments in >> orderAccess.hpp regarding ia64 st.rel and ld.acq. > > Not quite: we'd still need a StoreLoad even after a stlr. Not always. The cookbook conservatively approximates JMM rules. But maybe in practice you do because... > I don't think I can use stlr without making changes to C2. ... in particular, a better way of handling fused access+fence instructions, which looks to be useful across several processors (even x86). -Doug From david.holmes at oracle.com Tue Sep 16 12:04:12 2014 From: david.holmes at oracle.com (David Holmes) Date: Tue, 16 Sep 2014 22:04:12 +1000 Subject: RFR: JDK-8056999 Make hotspot builds less verbose on default log level In-Reply-To: <541804EA.4000204@oracle.com> References: <54046278.7050404@oracle.com> <5407EC21.8050709@oracle.com> <54098813.6070903@oracle.com> <5409A3D0.3070208@oracle.com> <5416CA7D.7000002@oracle.com> <5417DC3C.9000109@oracle.com> <541804EA.4000204@oracle.com> Message-ID: <5418273C.2030500@oracle.com> On 16/09/2014 7:37 PM, Magnus Ihse Bursie wrote: > On 2014-09-16 08:44, David Holmes wrote: >> Hi Magnus, >> >> This seems okay to me. > > Thank you. Since this is a hotspot change, I assume I need a second > reviewer, and that it should be pushed into hs-rt, right? Right. >> >> Just for everyone else's benefit. We can pass LOG=info etc as a >> configure arg to set the default log level. 
But we can also pass >> LOG_LEVEL=XXX as a make arg to override that default. > > Actually, you cannot set the default log level in configure in that way. > :-( But that sounded like a good idea, though. I'll open a bug for it. > Normally, LOG_LEVEL is considered an internal variable that should not > be set by the user, but if you want to call the hotspot makefile > directly, and still benefit from this change, then you need to set > LOG_LEVEL (instead of LOG). LOG is processed in the top makefile, and > one of the result of the processing is that LOG_LEVEL is set. Hmmm. I have been using this technique with apparent success, but only to enable more verbose logging - so perhaps it was not working as I was thinking. David > /Magnus >> >> Thanks, >> David >> >> On 15/09/2014 9:16 PM, Magnus Ihse Bursie wrote: >>> Here is the full review of this fix. I have now applied the same pattern >>> as I used on linux to aix, bsd and solaris as well. >>> >>> It turned out that windows was problematic. Due to the big difference >>> between windows and the unix versions (different and/or limited nmake >>> flexibility, different and/or limited shell functionality and >>> differences in design in the hotspot make files), I did not manage to >>> get a working Windows version of this fix in a reasonable time frame. >>> (This fix has already taken more time than I wanted to spend on it.) >>> >>> I suggest that this fix nevertheless is an improvment on the other >>> platforms, and that I open a new bug report for the remaining work on >>> Windows. >>> >>> And here's what I wrote about the preliminary version of this fix: >>> >>> Even in the default log level ("warn"), hotspots builds are extremely >>> verbose. With the new jigsaw build system, hotspot is build in parallel >>> with the jdk, and the sheer amount of hotspot output makes the jdk >>> output practically disappear. 
>>> >>> This fix will make the following changes: >>> * When hotspot is build from the top dir with the default log level, all >>> repetetive and purely informative output is hidden (e.g. names of files >>> compiled, and the "INFO:" blobs). >>> * When hotspot is build from the top dir, with any other log level >>> (info, debug, trace), all output will be there, as before. >>> * When hotspot is build from the hotspot repo, all output will be there, >>> as before. >>> >>> I have tested building on JPRT with LOG=debug and LOG=warn, and it all >>> looks as it should as far as I could tell. >>> >>> Bug: https://bugs.openjdk.java.net/browse/JDK-8056999 >>> WebRev: >>> http://cr.openjdk.java.net/~ihse/JDK-8056999-less-verbose-hotspot-builds/webrev.02 >>> >>> >>> >>> /Magnus > From volker.simonis at gmail.com Tue Sep 16 13:48:57 2014 From: volker.simonis at gmail.com (Volker Simonis) Date: Tue, 16 Sep 2014 15:48:57 +0200 Subject: RFR(S): 8058345: Refactor native stack printing from vmError.cpp to debug.cpp to make it available in gdb as well In-Reply-To: <5417DE03.6060301@oracle.com> References: <5417DE03.6060301@oracle.com> Message-ID: 'print_native_stack()' must be visible in both vmError.cpp and debug.cpp. Initially I saw that vmError.cpp already included debug.hpp so I decided to declare it in debug.hpp. But now I realized that also debug.cpp includes vmError.hpp so I could just as well declare 'print_native_stack()' in vmError.hpp and leave the implementation in vmError.cpp. Do you want me to change that? 
Thank you and best regards, Volker On Tue, Sep 16, 2014 at 8:51 AM, David Holmes wrote: > Hi Volker, > > On 13/09/2014 5:15 AM, Volker Simonis wrote: >> >> Hi, >> >> could you please review and sponsor the following small change which >> should make debugging a little more comfortabel (at least on Linux for >> now): >> >> http://cr.openjdk.java.net/~simonis/webrevs/8058345/ >> https://bugs.openjdk.java.net/browse/JDK-8058345 >> >> In the hs_err files we have a nice mixed stack trace which contains >> both, Java and native frames. >> It would be nice if we could make this functionality available from >> within gdb during debugging sessions (until now we can only print the >> pure Java stack with the "ps()" helper function from debug.cpp). >> >> This new feature can be easily achieved by refactoring the >> corresponding stack printing code from VMError::report() in >> vmError.cpp into its own method in debug.cpp. This change extracts >> that code into the new function 'print_native_stack()' in debug.cpp >> without changing anything of the functionality. > > > Why does it need to move to debug.cpp to allow this ? > > David > ----- > > >> It also adds some helper functions which make it easy to call the new >> 'print_native_stack()' method from within gdb. There's the new helper >> function 'pns(frame f)' which takes a frame argument and calls >> 'print_native_stack()'. We need the frame argument because gdb inserts >> a dummy frame for every call and we can't easily walk over this dummy >> frame from our stack printing routine. >> >> To simplify the creation of the frame object, I've added the helper >> functions: >> >> extern "C" frame make_frame(intptr_t* sp, intptr_t* fp, address pc) { >> return frame(sp, fp, pc); >> } >> >> for x86 (in frame_x86.cpp) and >> >> extern "C" frame make_frame(intptr_t* sp, address pc) { >> return frame(sp, pc); >> } >> >> for ppc64 in frame_ppc.cpp. 
With these helper functions we can now >> easily get a mixed stack trace of a Java thread in gdb (see below). >> >> All the helper functions are protected by '#ifndef PRODUCT' >> >> Thank you and best regards, >> Volker >> >> >> (gdb) call pns(make_frame($sp, $rbp, $pc)) >> >> "Executing pns" >> Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native >> code) >> C [libpthread.so.0+0xc0fe] pthread_cond_timedwait+0x13e >> V [libjvm.so+0x96c4c1] os::sleep(Thread*, long, bool)+0x1a1 >> V [libjvm.so+0x75f442] JVM_Sleep+0x312 >> j java.lang.Thread.sleep(J)V+0 >> j CrashNative.crashIt(Lsun/misc/Unsafe;I)V+10 >> j CrashNative.doIt()V+45 >> v ~StubRoutines::call_stub >> V [libjvm.so+0x71599f] JavaCalls::call_helper(JavaValue*, >> methodHandle*, JavaCallArguments*, Thread*)+0xf8f >> V [libjvm.so+0x9eab75] Reflection::invoke(instanceKlassHandle, >> methodHandle, Handle, bool, objArrayHandle, BasicType, objArrayHandle, >> bool, Thread*) [clone .constprop.218]+0xa25 >> V [libjvm.so+0x9eb838] Reflection::invoke_method(oopDesc*, Handle, >> objArrayHandle, Thread*)+0x1c8 >> V [libjvm.so+0x7637ae] JVM_InvokeMethod+0xfe >> j >> sun.reflect.NativeMethodAccessorImpl.invoke0(Ljava/lang/reflect/Method;Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;+0 >> j >> sun.reflect.NativeMethodAccessorImpl.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;+100 >> j >> sun.reflect.DelegatingMethodAccessorImpl.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;+6 >> j >> java.lang.reflect.Method.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;+56 >> j CrashNative.mainJava()V+32 >> v ~StubRoutines::call_stub >> V [libjvm.so+0x71599f] JavaCalls::call_helper(JavaValue*, >> methodHandle*, JavaCallArguments*, Thread*)+0xf8f >> V [libjvm.so+0x7384f5] jni_invoke_static(JNIEnv_*, JavaValue*, >> _jobject*, JNICallType, _jmethodID*, JNI_ArgumentPusher*, Thread*) >> [clone .isra.238] [clone .constprop.250]+0x385 >> V [libjvm.so+0x73b3d7] 
jni_CallStaticVoidMethodV+0xe7 >> C [libCrashNative.so+0x9a9] JNIEnv_::CallStaticVoidMethod(_jclass*, >> _jmethodID*, ...)+0xb9 >> C [libCrashNative.so+0xa10] step3(JNIEnv_*, _jobject*)+0x65 >> C [libCrashNative.so+0xa69] step2(JNIEnv_*, _jobject*)+0x57 >> C [libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25 >> C [libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25 >> C [libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25 >> C [libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25 >> C [libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25 >> C [libCrashNative.so+0xa8e] step1(JNIEnv_*, _jobject*)+0x23 >> C [libCrashNative.so+0x87f] Java_CrashNative_nativeMethod+0x23 >> j CrashNative.nativeMethod()V+0 >> j CrashNative.main([Ljava/lang/String;)V+9 >> v ~StubRoutines::call_stub >> V [libjvm.so+0x71599f] JavaCalls::call_helper(JavaValue*, >> methodHandle*, JavaCallArguments*, Thread*)+0xf8f >> V [libjvm.so+0x7384f5] jni_invoke_static(JNIEnv_*, JavaValue*, >> _jobject*, JNICallType, _jmethodID*, JNI_ArgumentPusher*, Thread*) >> [clone .isra.238] [clone .constprop.250]+0x385 >> V [libjvm.so+0x73b2b0] jni_CallStaticVoidMethod+0x170 >> C [libjli.so+0x742a] JavaMain+0x65a >> C [libpthread.so.0+0x7e9a] start_thread+0xda >> > From volker.simonis at gmail.com Tue Sep 16 16:35:19 2014 From: volker.simonis at gmail.com (Volker Simonis) Date: Tue, 16 Sep 2014 18:35:19 +0200 Subject: RFR(S): 8058345: Refactor native stack printing from vmError.cpp to debug.cpp to make it available in gdb as well In-Reply-To: References: <5417DE03.6060301@oracle.com> Message-ID: Hi, while testing my change, I found two other small problems with native stack traces: 1. we can not walk native wrappers on (at least not on Linux/amd64) because they are treated as native "C" frames. However, if the native wrapper was called from a compiled frame which had no valid frame pointer (i.e. %rbp) os::get_sender_for_C_frame(&fr) will produce a bad frame. 
This can be easily fixed by treating native wrappers like java frames. 2. the fix for "8035983: Fix "Native frames:" in crash report (hs_err file)" introduced a similar problem. If we walk the stack from a native wrapper down to a compiled frame, we will have a frame with an invalid frame pointer. In that case, the newly introduced check from change 8035983 will fail, because fr.sender_sp() depends on a valid fp. I propose to replace fr.sender_sp() by fr.real_fp() which should do the same but also works for compiled frames with invalid fp. Here's the new webrev: http://cr.openjdk.java.net/~simonis/webrevs/8058345.v1/ What do you think? Thank you and best regards, Volker On Tue, Sep 16, 2014 at 3:48 PM, Volker Simonis wrote: > 'print_native_stack()' must be visible in both vmError.cpp and > debug.cpp. Initially I saw that vmError.cpp already included debug.hpp > so I decided to declare it in debug.hpp. But now I realized that also > debug.cpp includes vmError.hpp so I could just as well declare > 'print_native_stack()' in vmError.hpp and leave the implementation in > vmError.cpp. Do you want me to change that? > > Thank you and best regards, > Volker > > > On Tue, Sep 16, 2014 at 8:51 AM, David Holmes wrote: >> Hi Volker, >> >> On 13/09/2014 5:15 AM, Volker Simonis wrote: >>> >>> Hi, >>> >>> could you please review and sponsor the following small change which >>> should make debugging a little more comfortable (at least on Linux for >>> now): >>> >>> http://cr.openjdk.java.net/~simonis/webrevs/8058345/ >>> https://bugs.openjdk.java.net/browse/JDK-8058345 >>> >>> In the hs_err files we have a nice mixed stack trace which contains >>> both Java and native frames. >>> It would be nice if we could make this functionality available from >>> within gdb during debugging sessions (until now we can only print the >>> pure Java stack with the "ps()" helper function from debug.cpp). 
>>> >>> This new feature can be easily achieved by refactoring the >>> corresponding stack printing code from VMError::report() in >>> vmError.cpp into its own method in debug.cpp. This change extracts >>> that code into the new function 'print_native_stack()' in debug.cpp >>> without changing anything of the functionality. >> >> >> Why does it need to move to debug.cpp to allow this ? >> >> David >> ----- >> >> >>> It also adds some helper functions which make it easy to call the new >>> 'print_native_stack()' method from within gdb. There's the new helper >>> function 'pns(frame f)' which takes a frame argument and calls >>> 'print_native_stack()'. We need the frame argument because gdb inserts >>> a dummy frame for every call and we can't easily walk over this dummy >>> frame from our stack printing routine. >>> >>> To simplify the creation of the frame object, I've added the helper >>> functions: >>> >>> extern "C" frame make_frame(intptr_t* sp, intptr_t* fp, address pc) { >>> return frame(sp, fp, pc); >>> } >>> >>> for x86 (in frame_x86.cpp) and >>> >>> extern "C" frame make_frame(intptr_t* sp, address pc) { >>> return frame(sp, pc); >>> } >>> >>> for ppc64 in frame_ppc.cpp. With these helper functions we can now >>> easily get a mixed stack trace of a Java thread in gdb (see below). 
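As an aside on the helpers quoted above: the extern "C" pattern that makes a function callable by its plain name from gdb can be sketched as below. This is a toy model with invented stand-in types, not the actual HotSpot frame class or the code under review.

```cpp
#include <cstdio>
#include <cstdint>
#include <cassert>

// Toy stand-in for HotSpot's frame class -- illustrative only.
struct frame {
  intptr_t* sp;
  intptr_t* fp;
  const void* pc;
};

// Declared extern "C" so the symbol keeps its unmangled name and gdb
// can resolve it directly, e.g.:  (gdb) call pns(make_frame($sp, $rbp, $pc))
extern "C" frame make_frame(intptr_t* sp, intptr_t* fp, const void* pc) {
  frame f;
  f.sp = sp;
  f.fp = fp;
  f.pc = pc;
  return f;
}

extern "C" int pns(frame f) {
  // The real helper would call print_native_stack(); this sketch only
  // checks that the frame looks plausible (sp at or below fp on a
  // downward-growing stack) and reports what it would do.
  std::printf("Executing pns\n");
  return (f.sp <= f.fp) ? 0 : -1;
}
```

Without extern "C", the C++ compiler would mangle the names, and the gdb `call` expression would have to spell out the mangled symbol.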
>>> >>> All the helper functions are protected by '#ifndef PRODUCT' >>> >>> Thank you and best regards, >>> Volker >>> >>> >>> (gdb) call pns(make_frame($sp, $rbp, $pc)) >>> >>> "Executing pns" >>> Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native >>> code) >>> C [libpthread.so.0+0xc0fe] pthread_cond_timedwait+0x13e >>> V [libjvm.so+0x96c4c1] os::sleep(Thread*, long, bool)+0x1a1 >>> V [libjvm.so+0x75f442] JVM_Sleep+0x312 >>> j java.lang.Thread.sleep(J)V+0 >>> j CrashNative.crashIt(Lsun/misc/Unsafe;I)V+10 >>> j CrashNative.doIt()V+45 >>> v ~StubRoutines::call_stub >>> V [libjvm.so+0x71599f] JavaCalls::call_helper(JavaValue*, >>> methodHandle*, JavaCallArguments*, Thread*)+0xf8f >>> V [libjvm.so+0x9eab75] Reflection::invoke(instanceKlassHandle, >>> methodHandle, Handle, bool, objArrayHandle, BasicType, objArrayHandle, >>> bool, Thread*) [clone .constprop.218]+0xa25 >>> V [libjvm.so+0x9eb838] Reflection::invoke_method(oopDesc*, Handle, >>> objArrayHandle, Thread*)+0x1c8 >>> V [libjvm.so+0x7637ae] JVM_InvokeMethod+0xfe >>> j >>> sun.reflect.NativeMethodAccessorImpl.invoke0(Ljava/lang/reflect/Method;Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;+0 >>> j >>> sun.reflect.NativeMethodAccessorImpl.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;+100 >>> j >>> sun.reflect.DelegatingMethodAccessorImpl.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;+6 >>> j >>> java.lang.reflect.Method.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;+56 >>> j CrashNative.mainJava()V+32 >>> v ~StubRoutines::call_stub >>> V [libjvm.so+0x71599f] JavaCalls::call_helper(JavaValue*, >>> methodHandle*, JavaCallArguments*, Thread*)+0xf8f >>> V [libjvm.so+0x7384f5] jni_invoke_static(JNIEnv_*, JavaValue*, >>> _jobject*, JNICallType, _jmethodID*, JNI_ArgumentPusher*, Thread*) >>> [clone .isra.238] [clone .constprop.250]+0x385 >>> V [libjvm.so+0x73b3d7] jni_CallStaticVoidMethodV+0xe7 >>> C [libCrashNative.so+0x9a9] 
JNIEnv_::CallStaticVoidMethod(_jclass*, >>> _jmethodID*, ...)+0xb9 >>> C [libCrashNative.so+0xa10] step3(JNIEnv_*, _jobject*)+0x65 >>> C [libCrashNative.so+0xa69] step2(JNIEnv_*, _jobject*)+0x57 >>> C [libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25 >>> C [libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25 >>> C [libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25 >>> C [libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25 >>> C [libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25 >>> C [libCrashNative.so+0xa8e] step1(JNIEnv_*, _jobject*)+0x23 >>> C [libCrashNative.so+0x87f] Java_CrashNative_nativeMethod+0x23 >>> j CrashNative.nativeMethod()V+0 >>> j CrashNative.main([Ljava/lang/String;)V+9 >>> v ~StubRoutines::call_stub >>> V [libjvm.so+0x71599f] JavaCalls::call_helper(JavaValue*, >>> methodHandle*, JavaCallArguments*, Thread*)+0xf8f >>> V [libjvm.so+0x7384f5] jni_invoke_static(JNIEnv_*, JavaValue*, >>> _jobject*, JNICallType, _jmethodID*, JNI_ArgumentPusher*, Thread*) >>> [clone .isra.238] [clone .constprop.250]+0x385 >>> V [libjvm.so+0x73b2b0] jni_CallStaticVoidMethod+0x170 >>> C [libjli.so+0x742a] JavaMain+0x65a >>> C [libpthread.so.0+0x7e9a] start_thread+0xda >>> >> From vladimir.kozlov at oracle.com Tue Sep 16 18:11:37 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Tue, 16 Sep 2014 11:11:37 -0700 Subject: RFR(S): 8058345: Refactor native stack printing from vmError.cpp to debug.cpp to make it available in gdb as well In-Reply-To: References: <5417DE03.6060301@oracle.com> Message-ID: <54187D59.5050602@oracle.com> Thank you for fixing frame walk. I don't see where make_frame() is used. Thanks, Vladimir On 9/16/14 9:35 AM, Volker Simonis wrote: > Hi, > > while testing my change, I found two other small problems with native > stack traces: > > 1. we can not walk native wrappers on (at least not on Linux/amd64) > because they are treated as native "C" frames. 
However, if the native > wrapper was called from a compiled frame which had no valid frame > pointer (i.e. %rbp) os::get_sender_for_C_frame(&fr) will produce a bad > frame. This can be easily fixed by treating native wrappers like java > frames. > > 2. the fix for "8035983: Fix "Native frames:" in crash report (hs_err > file)" introduced a similar problem. If we walk tha stack from a > native wrapper down to a compiled frame, we will have a frame with an > invalid frame pointer. In that case, the newly introduced check from > change 8035983 will fail, because fr.sender_sp() depends on a valid > fp. I'll propose to replace fr.sender_sp() by fr.real_fp() which > should do the same but also works for compiled frames with invalid fp. > > Here's the new webrev: > > http://cr.openjdk.java.net/~simonis/webrevs/8058345.v1/ > > What dou you think? > > Thank you and best regards, > Volker > > > On Tue, Sep 16, 2014 at 3:48 PM, Volker Simonis > wrote: >> 'print_native_stack()' must be visible in both vmError.cpp and >> debug.cpp. Initially I saw that vmError.cpp already included debug.hpp >> so I decided to declare it in debug.hpp. But now I realized that also >> debug.cpp includes vmError.hpp so I could just as well declare >> 'print_native_stack()' in vmError.hpp and leave the implementation in >> vmError.cpp. Do you want me to change that? >> >> Thank you and best regards, >> Volker >> >> >> On Tue, Sep 16, 2014 at 8:51 AM, David Holmes wrote: >>> Hi Volker, >>> >>> On 13/09/2014 5:15 AM, Volker Simonis wrote: >>>> >>>> Hi, >>>> >>>> could you please review and sponsor the following small change which >>>> should make debugging a little more comfortabel (at least on Linux for >>>> now): >>>> >>>> http://cr.openjdk.java.net/~simonis/webrevs/8058345/ >>>> https://bugs.openjdk.java.net/browse/JDK-8058345 >>>> >>>> In the hs_err files we have a nice mixed stack trace which contains >>>> both, Java and native frames. 
>>>> It would be nice if we could make this functionality available from >>>> within gdb during debugging sessions (until now we can only print the >>>> pure Java stack with the "ps()" helper function from debug.cpp). >>>> >>>> This new feature can be easily achieved by refactoring the >>>> corresponding stack printing code from VMError::report() in >>>> vmError.cpp into its own method in debug.cpp. This change extracts >>>> that code into the new function 'print_native_stack()' in debug.cpp >>>> without changing anything of the functionality. >>> >>> >>> Why does it need to move to debug.cpp to allow this ? >>> >>> David >>> ----- >>> >>> >>>> It also adds some helper functions which make it easy to call the new >>>> 'print_native_stack()' method from within gdb. There's the new helper >>>> function 'pns(frame f)' which takes a frame argument and calls >>>> 'print_native_stack()'. We need the frame argument because gdb inserts >>>> a dummy frame for every call and we can't easily walk over this dummy >>>> frame from our stack printing routine. >>>> >>>> To simplify the creation of the frame object, I've added the helper >>>> functions: >>>> >>>> extern "C" frame make_frame(intptr_t* sp, intptr_t* fp, address pc) { >>>> return frame(sp, fp, pc); >>>> } >>>> >>>> for x86 (in frame_x86.cpp) and >>>> >>>> extern "C" frame make_frame(intptr_t* sp, address pc) { >>>> return frame(sp, pc); >>>> } >>>> >>>> for ppc64 in frame_ppc.cpp. With these helper functions we can now >>>> easily get a mixed stack trace of a Java thread in gdb (see below). 
>>>> >>>> All the helper functions are protected by '#ifndef PRODUCT' >>>> >>>> Thank you and best regards, >>>> Volker >>>> >>>> >>>> (gdb) call pns(make_frame($sp, $rbp, $pc)) >>>> >>>> "Executing pns" >>>> Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native >>>> code) >>>> C [libpthread.so.0+0xc0fe] pthread_cond_timedwait+0x13e >>>> V [libjvm.so+0x96c4c1] os::sleep(Thread*, long, bool)+0x1a1 >>>> V [libjvm.so+0x75f442] JVM_Sleep+0x312 >>>> j java.lang.Thread.sleep(J)V+0 >>>> j CrashNative.crashIt(Lsun/misc/Unsafe;I)V+10 >>>> j CrashNative.doIt()V+45 >>>> v ~StubRoutines::call_stub >>>> V [libjvm.so+0x71599f] JavaCalls::call_helper(JavaValue*, >>>> methodHandle*, JavaCallArguments*, Thread*)+0xf8f >>>> V [libjvm.so+0x9eab75] Reflection::invoke(instanceKlassHandle, >>>> methodHandle, Handle, bool, objArrayHandle, BasicType, objArrayHandle, >>>> bool, Thread*) [clone .constprop.218]+0xa25 >>>> V [libjvm.so+0x9eb838] Reflection::invoke_method(oopDesc*, Handle, >>>> objArrayHandle, Thread*)+0x1c8 >>>> V [libjvm.so+0x7637ae] JVM_InvokeMethod+0xfe >>>> j >>>> sun.reflect.NativeMethodAccessorImpl.invoke0(Ljava/lang/reflect/Method;Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;+0 >>>> j >>>> sun.reflect.NativeMethodAccessorImpl.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;+100 >>>> j >>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;+6 >>>> j >>>> java.lang.reflect.Method.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;+56 >>>> j CrashNative.mainJava()V+32 >>>> v ~StubRoutines::call_stub >>>> V [libjvm.so+0x71599f] JavaCalls::call_helper(JavaValue*, >>>> methodHandle*, JavaCallArguments*, Thread*)+0xf8f >>>> V [libjvm.so+0x7384f5] jni_invoke_static(JNIEnv_*, JavaValue*, >>>> _jobject*, JNICallType, _jmethodID*, JNI_ArgumentPusher*, Thread*) >>>> [clone .isra.238] [clone .constprop.250]+0x385 >>>> V [libjvm.so+0x73b3d7] jni_CallStaticVoidMethodV+0xe7 
>>>> C [libCrashNative.so+0x9a9] JNIEnv_::CallStaticVoidMethod(_jclass*, >>>> _jmethodID*, ...)+0xb9 >>>> C [libCrashNative.so+0xa10] step3(JNIEnv_*, _jobject*)+0x65 >>>> C [libCrashNative.so+0xa69] step2(JNIEnv_*, _jobject*)+0x57 >>>> C [libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25 >>>> C [libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25 >>>> C [libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25 >>>> C [libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25 >>>> C [libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25 >>>> C [libCrashNative.so+0xa8e] step1(JNIEnv_*, _jobject*)+0x23 >>>> C [libCrashNative.so+0x87f] Java_CrashNative_nativeMethod+0x23 >>>> j CrashNative.nativeMethod()V+0 >>>> j CrashNative.main([Ljava/lang/String;)V+9 >>>> v ~StubRoutines::call_stub >>>> V [libjvm.so+0x71599f] JavaCalls::call_helper(JavaValue*, >>>> methodHandle*, JavaCallArguments*, Thread*)+0xf8f >>>> V [libjvm.so+0x7384f5] jni_invoke_static(JNIEnv_*, JavaValue*, >>>> _jobject*, JNICallType, _jmethodID*, JNI_ArgumentPusher*, Thread*) >>>> [clone .isra.238] [clone .constprop.250]+0x385 >>>> V [libjvm.so+0x73b2b0] jni_CallStaticVoidMethod+0x170 >>>> C [libjli.so+0x742a] JavaMain+0x65a >>>> C [libpthread.so.0+0x7e9a] start_thread+0xda >>>> >>> From jesper.wilhelmsson at oracle.com Tue Sep 16 18:32:34 2014 From: jesper.wilhelmsson at oracle.com (Jesper Wilhelmsson) Date: Tue, 16 Sep 2014 20:32:34 +0200 Subject: RFR: Forward port of 8055006 - Store original value of Min/MaxHeapFreeRatio Message-ID: <54188242.9060808@oracle.com> Hi, The fix for JDK-8055006 was reviewed by several engineers and was pushed directly to 8u40 due to time constraints. This is a forward port to get the same changes into JDK 9. There are two webrevs, one for HotSpot and one for the JDK. The 8u40 HotSpot change applied cleanly to 9 so if this was a traditional backport it wouldn't require another review. 
But since this is a weird situation and I'm pushing to 9 I'll ask for reviews just to be on the safe side. Also, the original 8u40 push contained some unnecessary changes that were later cleaned up by JDK-8056056. In this port to 9 I have merged these two changes into one to avoid introducing a known issue only to remove it again. The JDK change is new. The makefiles differ between 8u40 and 9 and this new change makes use of functionality not present in 8u40. This patch was provided by Erik Joelsson and I have reviewed it myself, but it needs two reviews so another one is welcome. Bug: https://bugs.openjdk.java.net/browse/JDK-8055006 Webrevs: http://cr.openjdk.java.net/~jwilhelm/8055006/jdk9/ 8u40 Webrevs: http://cr.openjdk.java.net/~jwilhelm/8055006/ 8u40 changes: http://hg.openjdk.java.net/jdk8u/jdk8u-dev/hotspot/rev/f933a15469d4 http://hg.openjdk.java.net/jdk8u/jdk8u-dev/jdk/rev/312152328471 Bug and change for the second 8u40 fix: https://bugs.openjdk.java.net/browse/JDK-8056056 http://hg.openjdk.java.net/jdk8u/hs-dev/hotspot/rev/9be4ca335650 Thanks! /Jesper From volker.simonis at gmail.com Tue Sep 16 19:21:58 2014 From: volker.simonis at gmail.com (Volker Simonis) Date: Tue, 16 Sep 2014 21:21:58 +0200 Subject: RFR(S): 8058345: Refactor native stack printing from vmError.cpp to debug.cpp to make it available in gdb as well In-Reply-To: <54187D59.5050602@oracle.com> References: <5417DE03.6060301@oracle.com> <54187D59.5050602@oracle.com> Message-ID: Hi Vladimir, thanks for looking at the change. 'make_frame' is only intended to be used from within the debugger to simplify the usage of the new 'pns()' (i.e. "print native stack") helper. 
It can be used as follows: (gdb) call pns(make_frame($sp, $rbp, $pc)) "Executing pns" Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code) C [libpthread.so.0+0xc0fe] pthread_cond_timedwait+0x13e V [libjvm.so+0x96c4c1] os::sleep(Thread*, long, bool)+0x1a1 V [libjvm.so+0x75f442] JVM_Sleep+0x312 j java.lang.Thread.sleep(J)V+0 j CrashNative.crashIt(Lsun/misc/Unsafe;I)V+10 j CrashNative.doIt()V+45 v ~StubRoutines::call_stub V [libjvm.so+0x71599f] JavaCalls::call_helper(JavaValue*,methodHandle*, JavaCallArguments*, Thread*)+0xf8f What about the two fixes in 'print_native_stack()' - do you think they are OK? Should I move 'print_native_stack()' to vmError.cpp as suggested by David? Thank you and best regards, Volker On Tue, Sep 16, 2014 at 8:11 PM, Vladimir Kozlov wrote: > Thank you for fixing frame walk. > I don't see where make_frame() is used. > > Thanks, > Vladimir > > > On 9/16/14 9:35 AM, Volker Simonis wrote: >> >> Hi, >> >> while testing my change, I found two other small problems with native >> stack traces: >> >> 1. we can not walk native wrappers on (at least not on Linux/amd64) >> because they are treated as native "C" frames. However, if the native >> wrapper was called from a compiled frame which had no valid frame >> pointer (i.e. %rbp) os::get_sender_for_C_frame(&fr) will produce a bad >> frame. This can be easily fixed by treating native wrappers like java >> frames. >> >> 2. the fix for "8035983: Fix "Native frames:" in crash report (hs_err >> file)" introduced a similar problem. If we walk the stack from a >> native wrapper down to a compiled frame, we will have a frame with an >> invalid frame pointer. In that case, the newly introduced check from >> change 8035983 will fail, because fr.sender_sp() depends on a valid >> fp. I'll propose to replace fr.sender_sp() by fr.real_fp() which >> should do the same but also works for compiled frames with invalid fp. 
>> >> Here's the new webrev: >> >> http://cr.openjdk.java.net/~simonis/webrevs/8058345.v1/ >> >> What dou you think? >> >> Thank you and best regards, >> Volker >> >> >> On Tue, Sep 16, 2014 at 3:48 PM, Volker Simonis >> wrote: >>> >>> 'print_native_stack()' must be visible in both vmError.cpp and >>> debug.cpp. Initially I saw that vmError.cpp already included debug.hpp >>> so I decided to declare it in debug.hpp. But now I realized that also >>> debug.cpp includes vmError.hpp so I could just as well declare >>> 'print_native_stack()' in vmError.hpp and leave the implementation in >>> vmError.cpp. Do you want me to change that? >>> >>> Thank you and best regards, >>> Volker >>> >>> >>> On Tue, Sep 16, 2014 at 8:51 AM, David Holmes >>> wrote: >>>> >>>> Hi Volker, >>>> >>>> On 13/09/2014 5:15 AM, Volker Simonis wrote: >>>>> >>>>> >>>>> Hi, >>>>> >>>>> could you please review and sponsor the following small change which >>>>> should make debugging a little more comfortabel (at least on Linux for >>>>> now): >>>>> >>>>> http://cr.openjdk.java.net/~simonis/webrevs/8058345/ >>>>> https://bugs.openjdk.java.net/browse/JDK-8058345 >>>>> >>>>> In the hs_err files we have a nice mixed stack trace which contains >>>>> both, Java and native frames. >>>>> It would be nice if we could make this functionality available from >>>>> within gdb during debugging sessions (until now we can only print the >>>>> pure Java stack with the "ps()" helper function from debug.cpp). >>>>> >>>>> This new feature can be easily achieved by refactoring the >>>>> corresponding stack printing code from VMError::report() in >>>>> vmError.cpp into its own method in debug.cpp. This change extracts >>>>> that code into the new function 'print_native_stack()' in debug.cpp >>>>> without changing anything of the functionality. >>>> >>>> >>>> >>>> Why does it need to move to debug.cpp to allow this ? 
>>>> >>>> David >>>> ----- >>>> >>>> >>>>> It also adds some helper functions which make it easy to call the new >>>>> 'print_native_stack()' method from within gdb. There's the new helper >>>>> function 'pns(frame f)' which takes a frame argument and calls >>>>> 'print_native_stack()'. We need the frame argument because gdb inserts >>>>> a dummy frame for every call and we can't easily walk over this dummy >>>>> frame from our stack printing routine. >>>>> >>>>> To simplify the creation of the frame object, I've added the helper >>>>> functions: >>>>> >>>>> extern "C" frame make_frame(intptr_t* sp, intptr_t* fp, address pc) { >>>>> return frame(sp, fp, pc); >>>>> } >>>>> >>>>> for x86 (in frame_x86.cpp) and >>>>> >>>>> extern "C" frame make_frame(intptr_t* sp, address pc) { >>>>> return frame(sp, pc); >>>>> } >>>>> >>>>> for ppc64 in frame_ppc.cpp. With these helper functions we can now >>>>> easily get a mixed stack trace of a Java thread in gdb (see below). >>>>> >>>>> All the helper functions are protected by '#ifndef PRODUCT' >>>>> >>>>> Thank you and best regards, >>>>> Volker >>>>> >>>>> >>>>> (gdb) call pns(make_frame($sp, $rbp, $pc)) >>>>> >>>>> "Executing pns" >>>>> Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, >>>>> C=native >>>>> code) >>>>> C [libpthread.so.0+0xc0fe] pthread_cond_timedwait+0x13e >>>>> V [libjvm.so+0x96c4c1] os::sleep(Thread*, long, bool)+0x1a1 >>>>> V [libjvm.so+0x75f442] JVM_Sleep+0x312 >>>>> j java.lang.Thread.sleep(J)V+0 >>>>> j CrashNative.crashIt(Lsun/misc/Unsafe;I)V+10 >>>>> j CrashNative.doIt()V+45 >>>>> v ~StubRoutines::call_stub >>>>> V [libjvm.so+0x71599f] JavaCalls::call_helper(JavaValue*, >>>>> methodHandle*, JavaCallArguments*, Thread*)+0xf8f >>>>> V [libjvm.so+0x9eab75] Reflection::invoke(instanceKlassHandle, >>>>> methodHandle, Handle, bool, objArrayHandle, BasicType, objArrayHandle, >>>>> bool, Thread*) [clone .constprop.218]+0xa25 >>>>> V [libjvm.so+0x9eb838] Reflection::invoke_method(oopDesc*, 
Handle, >>>>> objArrayHandle, Thread*)+0x1c8 >>>>> V [libjvm.so+0x7637ae] JVM_InvokeMethod+0xfe >>>>> j >>>>> >>>>> sun.reflect.NativeMethodAccessorImpl.invoke0(Ljava/lang/reflect/Method;Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;+0 >>>>> j >>>>> >>>>> sun.reflect.NativeMethodAccessorImpl.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;+100 >>>>> j >>>>> >>>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;+6 >>>>> j >>>>> >>>>> java.lang.reflect.Method.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;+56 >>>>> j CrashNative.mainJava()V+32 >>>>> v ~StubRoutines::call_stub >>>>> V [libjvm.so+0x71599f] JavaCalls::call_helper(JavaValue*, >>>>> methodHandle*, JavaCallArguments*, Thread*)+0xf8f >>>>> V [libjvm.so+0x7384f5] jni_invoke_static(JNIEnv_*, JavaValue*, >>>>> _jobject*, JNICallType, _jmethodID*, JNI_ArgumentPusher*, Thread*) >>>>> [clone .isra.238] [clone .constprop.250]+0x385 >>>>> V [libjvm.so+0x73b3d7] jni_CallStaticVoidMethodV+0xe7 >>>>> C [libCrashNative.so+0x9a9] JNIEnv_::CallStaticVoidMethod(_jclass*, >>>>> _jmethodID*, ...)+0xb9 >>>>> C [libCrashNative.so+0xa10] step3(JNIEnv_*, _jobject*)+0x65 >>>>> C [libCrashNative.so+0xa69] step2(JNIEnv_*, _jobject*)+0x57 >>>>> C [libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25 >>>>> C [libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25 >>>>> C [libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25 >>>>> C [libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25 >>>>> C [libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25 >>>>> C [libCrashNative.so+0xa8e] step1(JNIEnv_*, _jobject*)+0x23 >>>>> C [libCrashNative.so+0x87f] Java_CrashNative_nativeMethod+0x23 >>>>> j CrashNative.nativeMethod()V+0 >>>>> j CrashNative.main([Ljava/lang/String;)V+9 >>>>> v ~StubRoutines::call_stub >>>>> V [libjvm.so+0x71599f] JavaCalls::call_helper(JavaValue*, >>>>> methodHandle*, JavaCallArguments*, Thread*)+0xf8f >>>>> 
V [libjvm.so+0x7384f5] jni_invoke_static(JNIEnv_*, JavaValue*, >>>>> _jobject*, JNICallType, _jmethodID*, JNI_ArgumentPusher*, Thread*) >>>>> [clone .isra.238] [clone .constprop.250]+0x385 >>>>> V [libjvm.so+0x73b2b0] jni_CallStaticVoidMethod+0x170 >>>>> C [libjli.so+0x742a] JavaMain+0x65a >>>>> C [libpthread.so.0+0x7e9a] start_thread+0xda >>>>> >>>> > From thomas.schatzl at oracle.com Tue Sep 16 20:16:43 2014 From: thomas.schatzl at oracle.com (Thomas Schatzl) Date: Tue, 16 Sep 2014 22:16:43 +0200 Subject: RFR: Forward port of 8055006 - Store original value of Min/MaxHeapFreeRatio In-Reply-To: <54188242.9060808@oracle.com> References: <54188242.9060808@oracle.com> Message-ID: <1410898603.2833.7.camel@cirrus> Hi, On Tue, 2014-09-16 at 20:32 +0200, Jesper Wilhelmsson wrote: > Hi, > > The fix for JDK-8055006 was reviewed by several engineers and was pushed > directly to 8u40 due to time constraints. This is a forward port to get the same > changes into JDK 9. > > There are two webrevs, one for HotSpot and one for the JDK. > > The 8u40 HotSpot change applied cleanly to 9 so if this was a traditional > backport it wouldn't require another review. But since this is a weird situation > and I'm pushing to 9 I'll ask for reviews just to be on the safe side. > Also, the original 8u40 push contained some unnecessary changes that was later > cleaned up by JDK-8056056. In this port to 9 I have merged these two changes > into one to avoid introducing a known issue only to remove it again. > I would prefer if you pushed all changes in the order they were applied at once, even the ones that were buggy and their fix. Combining changesets during porting makes comparing source trees to find changesets that might have been overlooked very hard. 
Thanks, Thomas From jesper.wilhelmsson at oracle.com Tue Sep 16 21:51:56 2014 From: jesper.wilhelmsson at oracle.com (Jesper Wilhelmsson) Date: Tue, 16 Sep 2014 23:51:56 +0200 Subject: RFR: Forward port of 8055006 - Store original value of Min/MaxHeapFreeRatio In-Reply-To: <1410898603.2833.7.camel@cirrus> References: <54188242.9060808@oracle.com> <1410898603.2833.7.camel@cirrus> Message-ID: <5418B0FC.9060203@oracle.com> Thomas Schatzl skrev 16/9/14 22:16: > Hi, > > On Tue, 2014-09-16 at 20:32 +0200, Jesper Wilhelmsson wrote: >> Hi, >> >> The fix for JDK-8055006 was reviewed by several engineers and was pushed >> directly to 8u40 due to time constraints. This is a forward port to get the same >> changes into JDK 9. >> >> There are two webrevs, one for HotSpot and one for the JDK. >> >> The 8u40 HotSpot change applied cleanly to 9 so if this was a traditional >> backport it wouldn't require another review. But since this is a weird situation >> and I'm pushing to 9 I'll ask for reviews just to be on the safe side. >> Also, the original 8u40 push contained some unnecessary changes that was later >> cleaned up by JDK-8056056. In this port to 9 I have merged these two changes >> into one to avoid introducing a known issue only to remove it again. >> > > I would prefer if you pushed all changes in the order they were applied > at once, even the ones that were buggy and their fix. > > Combining changesets during porting makes comparing source trees to find > changesets that might have been overlooked very hard. OK, will do. 
/Jesper > > Thanks, > Thomas > > From vladimir.kozlov at oracle.com Tue Sep 16 22:10:07 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Tue, 16 Sep 2014 15:10:07 -0700 Subject: RFR(S): 8058345: Refactor native stack printing from vmError.cpp to debug.cpp to make it available in gdb as well In-Reply-To: References: <5417DE03.6060301@oracle.com> <54187D59.5050602@oracle.com> Message-ID: <5418B53F.7050508@oracle.com> On 9/16/14 12:21 PM, Volker Simonis wrote: > Hi Vladimir, > > thanks for looking at the change. > > 'make_frame' is only intended to be used from within the debugger to > simplify the usage of the new 'pns()' (i.e. "print native stack") > helper. It can be used as follows: > > (gdb) call pns(make_frame($sp, $rbp, $pc)) It is a strange way to use pns(). Why not pass (sp, fp, pc) to pns() and let it call make_frame()? Having make_frame() only on ppc and x86 will not allow the use of pns() on other platforms. It would be nice to have a pns() version (named differently) without input parameters. Can we use os::current_frame() inside for that? Add a pns() description to the help() output. > > "Executing pns" > Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code) > C [libpthread.so.0+0xc0fe] pthread_cond_timedwait+0x13e > V [libjvm.so+0x96c4c1] os::sleep(Thread*, long, bool)+0x1a1 > V [libjvm.so+0x75f442] JVM_Sleep+0x312 > j java.lang.Thread.sleep(J)V+0 > j CrashNative.crashIt(Lsun/misc/Unsafe;I)V+10 > j CrashNative.doIt()V+45 > v ~StubRoutines::call_stub > V [libjvm.so+0x71599f] > JavaCalls::call_helper(JavaValue*,methodHandle*, JavaCallArguments*, > Thread*)+0xf8f > > What about the two fixes in 'print_native_stack()' - do you think they are OK? What about is_runtime_frame()? It is a wrapper for runtime calls from compiled code. You need to check what fr.real_fp() returns on all platforms for the very first frame (_lwp_start). That is what this check is about - stop walking when it reaches the first frame. 
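The termination problem under discussion - deciding when the walker has reached the first frame - amounts to a bounds check on the candidate frame pointer. A toy model of that check follows; the names are invented for illustration and this is not the actual HotSpot code, which compares values derived from fr.real_fp() against the thread's stack bounds.

```cpp
#include <cstdint>
#include <cassert>

// Toy model of the loop-termination check: keep walking only while
// the next candidate frame pointer still lies inside the thread's stack.
struct toy_frame {
  intptr_t* fp;  // candidate frame pointer of the sender frame
};

// Stacks grow downward, so a plausible fp lies in [stack_end, stack_base).
bool fp_in_stack(const intptr_t* fp, const intptr_t* stack_end,
                 const intptr_t* stack_base) {
  return fp >= stack_end && fp < stack_base;
}

// Walks a pre-computed chain of sender frames and counts how many are
// visited before the first-frame check fires.
int walk(const toy_frame* frames, int n,
         const intptr_t* stack_end, const intptr_t* stack_base) {
  int visited = 0;
  for (int i = 0; i < n; i++) {
    if (!fp_in_stack(frames[i].fp, stack_end, stack_base)) {
      break;  // fp left the stack: treat this as the first frame and stop
    }
    visited++;
  }
  return visited;
}
```

The point of preferring a real-fp-style value in this check is that the bounds test is meaningless when the tested value is not a stack address to begin with.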
fr.sender_sp() returns a bogus value which is not a stack pointer for the first frame. From the 8035983 review: "It seems using fr.sender_sp() in the check work on x86 and sparc. On x86 it return stack_base value on sparc it returns STACK_BIAS." Also on our other platforms it could return 0 or a small integer value. If you can suggest another way to determine the first frame, please tell. > Should I move 'print_native_stack()' to vmError.cpp as suggested by David? I am fine with both places. Thanks, Vladimir > > Thank you and best regards, > Volker > > On Tue, Sep 16, 2014 at 8:11 PM, Vladimir Kozlov > wrote: >> Thank you for fixing frame walk. >> I don't see where make_frame() is used. >> >> Thanks, >> Vladimir >> >> >> On 9/16/14 9:35 AM, Volker Simonis wrote: >>> >>> Hi, >>> >>> while testing my change, I found two other small problems with native >>> stack traces: >>> >>> 1. we can not walk native wrappers on (at least not on Linux/amd64) >>> because they are treated as native "C" frames. However, if the native >>> wrapper was called from a compiled frame which had no valid frame >>> pointer (i.e. %rbp) os::get_sender_for_C_frame(&fr) will produce a bad >>> frame. This can be easily fixed by treating native wrappers like java >>> frames. >>> >>> 2. the fix for "8035983: Fix "Native frames:" in crash report (hs_err >>> file)" introduced a similar problem. If we walk the stack from a >>> native wrapper down to a compiled frame, we will have a frame with an >>> invalid frame pointer. In that case, the newly introduced check from >>> change 8035983 will fail, because fr.sender_sp() depends on a valid >>> fp. I'll propose to replace fr.sender_sp() by fr.real_fp() which >>> should do the same but also works for compiled frames with invalid fp. >>> >>> Here's the new webrev: >>> >>> http://cr.openjdk.java.net/~simonis/webrevs/8058345.v1/ >>> >>> What do you think? 
>>> >>> Thank you and best regards, >>> Volker >>> >>> >>> On Tue, Sep 16, 2014 at 3:48 PM, Volker Simonis >>> wrote: >>>> >>>> 'print_native_stack()' must be visible in both vmError.cpp and >>>> debug.cpp. Initially I saw that vmError.cpp already included debug.hpp >>>> so I decided to declare it in debug.hpp. But now I realized that also >>>> debug.cpp includes vmError.hpp so I could just as well declare >>>> 'print_native_stack()' in vmError.hpp and leave the implementation in >>>> vmError.cpp. Do you want me to change that? >>>> >>>> Thank you and best regards, >>>> Volker >>>> >>>> >>>> On Tue, Sep 16, 2014 at 8:51 AM, David Holmes >>>> wrote: >>>>> >>>>> Hi Volker, >>>>> >>>>> On 13/09/2014 5:15 AM, Volker Simonis wrote: >>>>>> >>>>>> >>>>>> Hi, >>>>>> >>>>>> could you please review and sponsor the following small change which >>>>>> should make debugging a little more comfortabel (at least on Linux for >>>>>> now): >>>>>> >>>>>> http://cr.openjdk.java.net/~simonis/webrevs/8058345/ >>>>>> https://bugs.openjdk.java.net/browse/JDK-8058345 >>>>>> >>>>>> In the hs_err files we have a nice mixed stack trace which contains >>>>>> both, Java and native frames. >>>>>> It would be nice if we could make this functionality available from >>>>>> within gdb during debugging sessions (until now we can only print the >>>>>> pure Java stack with the "ps()" helper function from debug.cpp). >>>>>> >>>>>> This new feature can be easily achieved by refactoring the >>>>>> corresponding stack printing code from VMError::report() in >>>>>> vmError.cpp into its own method in debug.cpp. This change extracts >>>>>> that code into the new function 'print_native_stack()' in debug.cpp >>>>>> without changing anything of the functionality. >>>>> >>>>> >>>>> >>>>> Why does it need to move to debug.cpp to allow this ? >>>>> >>>>> David >>>>> ----- >>>>> >>>>> >>>>>> It also adds some helper functions which make it easy to call the new >>>>>> 'print_native_stack()' method from within gdb. 
There's the new helper >>>>>> function 'pns(frame f)' which takes a frame argument and calls >>>>>> 'print_native_stack()'. We need the frame argument because gdb inserts >>>>>> a dummy frame for every call and we can't easily walk over this dummy >>>>>> frame from our stack printing routine. >>>>>> >>>>>> To simplify the creation of the frame object, I've added the helper >>>>>> functions: >>>>>> >>>>>> extern "C" frame make_frame(intptr_t* sp, intptr_t* fp, address pc) { >>>>>> return frame(sp, fp, pc); >>>>>> } >>>>>> >>>>>> for x86 (in frame_x86.cpp) and >>>>>> >>>>>> extern "C" frame make_frame(intptr_t* sp, address pc) { >>>>>> return frame(sp, pc); >>>>>> } >>>>>> >>>>>> for ppc64 in frame_ppc.cpp. With these helper functions we can now >>>>>> easily get a mixed stack trace of a Java thread in gdb (see below). >>>>>> >>>>>> All the helper functions are protected by '#ifndef PRODUCT' >>>>>> >>>>>> Thank you and best regards, >>>>>> Volker >>>>>> >>>>>> >>>>>> (gdb) call pns(make_frame($sp, $rbp, $pc)) >>>>>> >>>>>> "Executing pns" >>>>>> Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, >>>>>> C=native >>>>>> code) >>>>>> C [libpthread.so.0+0xc0fe] pthread_cond_timedwait+0x13e >>>>>> V [libjvm.so+0x96c4c1] os::sleep(Thread*, long, bool)+0x1a1 >>>>>> V [libjvm.so+0x75f442] JVM_Sleep+0x312 >>>>>> j java.lang.Thread.sleep(J)V+0 >>>>>> j CrashNative.crashIt(Lsun/misc/Unsafe;I)V+10 >>>>>> j CrashNative.doIt()V+45 >>>>>> v ~StubRoutines::call_stub >>>>>> V [libjvm.so+0x71599f] JavaCalls::call_helper(JavaValue*, >>>>>> methodHandle*, JavaCallArguments*, Thread*)+0xf8f >>>>>> V [libjvm.so+0x9eab75] Reflection::invoke(instanceKlassHandle, >>>>>> methodHandle, Handle, bool, objArrayHandle, BasicType, objArrayHandle, >>>>>> bool, Thread*) [clone .constprop.218]+0xa25 >>>>>> V [libjvm.so+0x9eb838] Reflection::invoke_method(oopDesc*, Handle, >>>>>> objArrayHandle, Thread*)+0x1c8 >>>>>> V [libjvm.so+0x7637ae] JVM_InvokeMethod+0xfe >>>>>> j >>>>>> >>>>>> 
sun.reflect.NativeMethodAccessorImpl.invoke0(Ljava/lang/reflect/Method;Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;+0 >>>>>> j >>>>>> >>>>>> sun.reflect.NativeMethodAccessorImpl.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;+100 >>>>>> j >>>>>> >>>>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;+6 >>>>>> j >>>>>> >>>>>> java.lang.reflect.Method.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;+56 >>>>>> j CrashNative.mainJava()V+32 >>>>>> v ~StubRoutines::call_stub >>>>>> V [libjvm.so+0x71599f] JavaCalls::call_helper(JavaValue*, >>>>>> methodHandle*, JavaCallArguments*, Thread*)+0xf8f >>>>>> V [libjvm.so+0x7384f5] jni_invoke_static(JNIEnv_*, JavaValue*, >>>>>> _jobject*, JNICallType, _jmethodID*, JNI_ArgumentPusher*, Thread*) >>>>>> [clone .isra.238] [clone .constprop.250]+0x385 >>>>>> V [libjvm.so+0x73b3d7] jni_CallStaticVoidMethodV+0xe7 >>>>>> C [libCrashNative.so+0x9a9] JNIEnv_::CallStaticVoidMethod(_jclass*, >>>>>> _jmethodID*, ...)+0xb9 >>>>>> C [libCrashNative.so+0xa10] step3(JNIEnv_*, _jobject*)+0x65 >>>>>> C [libCrashNative.so+0xa69] step2(JNIEnv_*, _jobject*)+0x57 >>>>>> C [libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25 >>>>>> C [libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25 >>>>>> C [libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25 >>>>>> C [libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25 >>>>>> C [libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25 >>>>>> C [libCrashNative.so+0xa8e] step1(JNIEnv_*, _jobject*)+0x23 >>>>>> C [libCrashNative.so+0x87f] Java_CrashNative_nativeMethod+0x23 >>>>>> j CrashNative.nativeMethod()V+0 >>>>>> j CrashNative.main([Ljava/lang/String;)V+9 >>>>>> v ~StubRoutines::call_stub >>>>>> V [libjvm.so+0x71599f] JavaCalls::call_helper(JavaValue*, >>>>>> methodHandle*, JavaCallArguments*, Thread*)+0xf8f >>>>>> V [libjvm.so+0x7384f5] jni_invoke_static(JNIEnv_*, JavaValue*, >>>>>> _jobject*, 
JNICallType, _jmethodID*, JNI_ArgumentPusher*, Thread*) >>>>>> [clone .isra.238] [clone .constprop.250]+0x385 >>>>>> V [libjvm.so+0x73b2b0] jni_CallStaticVoidMethod+0x170 >>>>>> C [libjli.so+0x742a] JavaMain+0x65a >>>>>> C [libpthread.so.0+0x7e9a] start_thread+0xda >>>>>> >>>>> >> From volker.simonis at gmail.com Wed Sep 17 05:03:25 2014 From: volker.simonis at gmail.com (Volker Simonis) Date: Wed, 17 Sep 2014 07:03:25 +0200 Subject: RFR(S): 8058345: Refactor native stack printing from vmError.cpp to debug.cpp to make it available in gdb as well In-Reply-To: <5418B53F.7050508@oracle.com> References: <5417DE03.6060301@oracle.com> <54187D59.5050602@oracle.com> <5418B53F.7050508@oracle.com> Message-ID: On Wednesday, September 17, 2014, Vladimir Kozlov < vladimir.kozlov at oracle.com> wrote: > On 9/16/14 12:21 PM, Volker Simonis wrote: > >> Hi Vladimir, >> >> thanks for looking at the change. >> >> 'make_frame' is only intended to be used from within the debugger to >> simplify the usage of the new 'pns()' (i.e. "print native stack") >> helper. It can be used as follows: >> >> (gdb) call pns(make_frame($sp, $rbp, $pc)) >> > > It is a strange way to use pns(). Why not pass (sp, fp, pc) to pns() and let > it call make_frame()? To have make_frame() only on ppc and x86 will not > allow to use pns() on other platforms. > > Would be nice to have pns() version (names different) without input > parameters. Can we use os::current_frame() inside for that? > > Unfortunately, this doesn't work out of the box because of the intermediate frame which gdb pushes on the stack when the user calls a function from within gdb. If we would use os::current_frame() we would need special, platform-dependent code in pns() for walking this intermediate gdb frame. I therefore think it would be easier to add make_frame() for other platforms. > Add pns() description to help() output. Will do. I'll investigate the other suggestions and answer later today. 
Thanks, Volker > > >> "Executing pns" >> Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native >> code) >> C [libpthread.so.0+0xc0fe] pthread_cond_timedwait+0x13e >> V [libjvm.so+0x96c4c1] os::sleep(Thread*, long, bool)+0x1a1 >> V [libjvm.so+0x75f442] JVM_Sleep+0x312 >> j java.lang.Thread.sleep(J)V+0 >> j CrashNative.crashIt(Lsun/misc/Unsafe;I)V+10 >> j CrashNative.doIt()V+45 >> v ~StubRoutines::call_stub >> V [libjvm.so+0x71599f] >> JavaCalls::call_helper(JavaValue*,methodHandle*, JavaCallArguments*, >> Thread*)+0xf8f >> >> What about the two fixesin in 'print_native_stack()' - do you think they >> are OK? >> > > What about is_runtime_frame()? It is wrapper for runtime calls from > compiled code. > > You need to check what fr.real_fp() returns on all platforms for the very > first frame (_lwp_start). That is what this check about - stop walking when > it reaches the first frame. fr.sender_sp() returns bogus value which is not > stack pointer for the first frame. From 8035983 review: > > "It seems using fr.sender_sp() in the check work on x86 and sparc. > On x86 it return stack_base value on sparc it returns STACK_BIAS." > > Also on other our platforms it could return 0 or small integer value. > > If you can suggest an other way to determine the first frame, please, tell. > > Should I move 'print_native_stack()' to vmError.cpp as suggested by David? >> > > I am fine with both places. > > Thanks, > Vladimir > > >> Thank you and best regards, >> Volker >> >> On Tue, Sep 16, 2014 at 8:11 PM, Vladimir Kozlov >> wrote: >> >>> Thank you for fixing frame walk. >>> I don't see where make_frame() is used. >>> >>> Thanks, >>> Vladimir >>> >>> >>> On 9/16/14 9:35 AM, Volker Simonis wrote: >>> >>>> >>>> Hi, >>>> >>>> while testing my change, I found two other small problems with native >>>> stack traces: >>>> >>>> 1. we can not walk native wrappers on (at least not on Linux/amd64) >>>> because they are treated as native "C" frames. 
However, if the native >>>> wrapper was called from a compiled frame which had no valid frame >>>> pointer (i.e. %rbp) os::get_sender_for_C_frame(&fr) will produce a bad >>>> frame. This can be easily fixed by treating native wrappers like java >>>> frames. >>>> >>>> 2. the fix for "8035983: Fix "Native frames:" in crash report (hs_err >>>> file)" introduced a similar problem. If we walk tha stack from a >>>> native wrapper down to a compiled frame, we will have a frame with an >>>> invalid frame pointer. In that case, the newly introduced check from >>>> change 8035983 will fail, because fr.sender_sp() depends on a valid >>>> fp. I'll propose to replace fr.sender_sp() by fr.real_fp() which >>>> should do the same but also works for compiled frames with invalid fp. >>>> >>>> Here's the new webrev: >>>> >>>> http://cr.openjdk.java.net/~simonis/webrevs/8058345.v1/ >>>> >>>> What dou you think? >>>> >>>> Thank you and best regards, >>>> Volker >>>> >>>> >>>> On Tue, Sep 16, 2014 at 3:48 PM, Volker Simonis >>>> wrote: >>>> >>>>> >>>>> 'print_native_stack()' must be visible in both vmError.cpp and >>>>> debug.cpp. Initially I saw that vmError.cpp already included debug.hpp >>>>> so I decided to declare it in debug.hpp. But now I realized that also >>>>> debug.cpp includes vmError.hpp so I could just as well declare >>>>> 'print_native_stack()' in vmError.hpp and leave the implementation in >>>>> vmError.cpp. Do you want me to change that? 
>>>>> >>>>> Thank you and best regards, >>>>> Volker >>>>> >>>>> >>>>> On Tue, Sep 16, 2014 at 8:51 AM, David Holmes >>>> > >>>>> wrote: >>>>> >>>>>> >>>>>> Hi Volker, >>>>>> >>>>>> On 13/09/2014 5:15 AM, Volker Simonis wrote: >>>>>> >>>>>>> >>>>>>> >>>>>>> Hi, >>>>>>> >>>>>>> could you please review and sponsor the following small change which >>>>>>> should make debugging a little more comfortabel (at least on Linux >>>>>>> for >>>>>>> now): >>>>>>> >>>>>>> http://cr.openjdk.java.net/~simonis/webrevs/8058345/ >>>>>>> https://bugs.openjdk.java.net/browse/JDK-8058345 >>>>>>> >>>>>>> In the hs_err files we have a nice mixed stack trace which contains >>>>>>> both, Java and native frames. >>>>>>> It would be nice if we could make this functionality available from >>>>>>> within gdb during debugging sessions (until now we can only print the >>>>>>> pure Java stack with the "ps()" helper function from debug.cpp). >>>>>>> >>>>>>> This new feature can be easily achieved by refactoring the >>>>>>> corresponding stack printing code from VMError::report() in >>>>>>> vmError.cpp into its own method in debug.cpp. This change extracts >>>>>>> that code into the new function 'print_native_stack()' in debug.cpp >>>>>>> without changing anything of the functionality. >>>>>>> >>>>>> >>>>>> >>>>>> >>>>>> Why does it need to move to debug.cpp to allow this ? >>>>>> >>>>>> David >>>>>> ----- >>>>>> >>>>>> >>>>>> It also adds some helper functions which make it easy to call the new >>>>>>> 'print_native_stack()' method from within gdb. There's the new helper >>>>>>> function 'pns(frame f)' which takes a frame argument and calls >>>>>>> 'print_native_stack()'. We need the frame argument because gdb >>>>>>> inserts >>>>>>> a dummy frame for every call and we can't easily walk over this dummy >>>>>>> frame from our stack printing routine. 
>>>>>>> >>>>>>> To simplify the creation of the frame object, I've added the helper >>>>>>> functions: >>>>>>> >>>>>>> extern "C" frame make_frame(intptr_t* sp, intptr_t* fp, address pc) { >>>>>>> return frame(sp, fp, pc); >>>>>>> } >>>>>>> >>>>>>> for x86 (in frame_x86.cpp) and >>>>>>> >>>>>>> extern "C" frame make_frame(intptr_t* sp, address pc) { >>>>>>> return frame(sp, pc); >>>>>>> } >>>>>>> >>>>>>> for ppc64 in frame_ppc.cpp. With these helper functions we can now >>>>>>> easily get a mixed stack trace of a Java thread in gdb (see below). >>>>>>> >>>>>>> All the helper functions are protected by '#ifndef PRODUCT' >>>>>>> >>>>>>> Thank you and best regards, >>>>>>> Volker >>>>>>> >>>>>>> >>>>>>> (gdb) call pns(make_frame($sp, $rbp, $pc)) >>>>>>> >>>>>>> "Executing pns" >>>>>>> Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, >>>>>>> C=native >>>>>>> code) >>>>>>> C [libpthread.so.0+0xc0fe] pthread_cond_timedwait+0x13e >>>>>>> V [libjvm.so+0x96c4c1] os::sleep(Thread*, long, bool)+0x1a1 >>>>>>> V [libjvm.so+0x75f442] JVM_Sleep+0x312 >>>>>>> j java.lang.Thread.sleep(J)V+0 >>>>>>> j CrashNative.crashIt(Lsun/misc/Unsafe;I)V+10 >>>>>>> j CrashNative.doIt()V+45 >>>>>>> v ~StubRoutines::call_stub >>>>>>> V [libjvm.so+0x71599f] JavaCalls::call_helper(JavaValue*, >>>>>>> methodHandle*, JavaCallArguments*, Thread*)+0xf8f >>>>>>> V [libjvm.so+0x9eab75] Reflection::invoke(instanceKlassHandle, >>>>>>> methodHandle, Handle, bool, objArrayHandle, BasicType, >>>>>>> objArrayHandle, >>>>>>> bool, Thread*) [clone .constprop.218]+0xa25 >>>>>>> V [libjvm.so+0x9eb838] Reflection::invoke_method(oopDesc*, Handle, >>>>>>> objArrayHandle, Thread*)+0x1c8 >>>>>>> V [libjvm.so+0x7637ae] JVM_InvokeMethod+0xfe >>>>>>> j >>>>>>> >>>>>>> sun.reflect.NativeMethodAccessorImpl.invoke0(Ljava/lang/reflect/ >>>>>>> Method;Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;+0 >>>>>>> j >>>>>>> >>>>>>> sun.reflect.NativeMethodAccessorImpl.invoke(Ljava/lang/Object;[ >>>>>>> 
Ljava/lang/Object;)Ljava/lang/Object;+100 >>>>>>> j >>>>>>> >>>>>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(Ljava/lang/Object;[ >>>>>>> Ljava/lang/Object;)Ljava/lang/Object;+6 >>>>>>> j >>>>>>> >>>>>>> java.lang.reflect.Method.invoke(Ljava/lang/Object;[ >>>>>>> Ljava/lang/Object;)Ljava/lang/Object;+56 >>>>>>> j CrashNative.mainJava()V+32 >>>>>>> v ~StubRoutines::call_stub >>>>>>> V [libjvm.so+0x71599f] JavaCalls::call_helper(JavaValue*, >>>>>>> methodHandle*, JavaCallArguments*, Thread*)+0xf8f >>>>>>> V [libjvm.so+0x7384f5] jni_invoke_static(JNIEnv_*, JavaValue*, >>>>>>> _jobject*, JNICallType, _jmethodID*, JNI_ArgumentPusher*, Thread*) >>>>>>> [clone .isra.238] [clone .constprop.250]+0x385 >>>>>>> V [libjvm.so+0x73b3d7] jni_CallStaticVoidMethodV+0xe7 >>>>>>> C [libCrashNative.so+0x9a9] JNIEnv_::CallStaticVoidMethod( >>>>>>> _jclass*, >>>>>>> _jmethodID*, ...)+0xb9 >>>>>>> C [libCrashNative.so+0xa10] step3(JNIEnv_*, _jobject*)+0x65 >>>>>>> C [libCrashNative.so+0xa69] step2(JNIEnv_*, _jobject*)+0x57 >>>>>>> C [libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25 >>>>>>> C [libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25 >>>>>>> C [libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25 >>>>>>> C [libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25 >>>>>>> C [libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25 >>>>>>> C [libCrashNative.so+0xa8e] step1(JNIEnv_*, _jobject*)+0x23 >>>>>>> C [libCrashNative.so+0x87f] Java_CrashNative_nativeMethod+0x23 >>>>>>> j CrashNative.nativeMethod()V+0 >>>>>>> j CrashNative.main([Ljava/lang/String;)V+9 >>>>>>> v ~StubRoutines::call_stub >>>>>>> V [libjvm.so+0x71599f] JavaCalls::call_helper(JavaValue*, >>>>>>> methodHandle*, JavaCallArguments*, Thread*)+0xf8f >>>>>>> V [libjvm.so+0x7384f5] jni_invoke_static(JNIEnv_*, JavaValue*, >>>>>>> _jobject*, JNICallType, _jmethodID*, JNI_ArgumentPusher*, Thread*) >>>>>>> [clone .isra.238] [clone .constprop.250]+0x385 >>>>>>> V [libjvm.so+0x73b2b0] 
jni_CallStaticVoidMethod+0x170 >>>>>>> C [libjli.so+0x742a] JavaMain+0x65a >>>>>>> C [libpthread.so.0+0x7e9a] start_thread+0xda >>>>>>> >>>>>>> >>>>>> >>> From mikael.gerdin at oracle.com Wed Sep 17 07:00:21 2014 From: mikael.gerdin at oracle.com (Mikael Gerdin) Date: Wed, 17 Sep 2014 09:00:21 +0200 Subject: RFR [8u40] 8056084: Refactor Hashtable to allow implementations without rehashing support Message-ID: <15856493.62xT43KoU5@mgerdin03> Hi all, I need to backport this change in order to backport 8048268 which we need for G1 performance in 8u40. The patch didn't apply cleanly since StringTable was moved to a separate file in 9. The StringTable patch hunks applied correctly to the relevant parts of symbolTable.[ch]pp. Webrev: http://cr.openjdk.java.net/~mgerdin/8056084/8u/webrev/ Bug: https://bugs.openjdk.java.net/browse/JDK-8056084 Review thread at: http://mail.openjdk.java.net/pipermail/hotspot-dev/2014-August/015039.html /Mikael From staffan.larsen at oracle.com Wed Sep 17 07:45:55 2014 From: staffan.larsen at oracle.com (Staffan Larsen) Date: Wed, 17 Sep 2014 09:45:55 +0200 Subject: Cross Component (hotspot+jdk) Development in the Hotspot Group Repos In-Reply-To: References: Message-ID: All, We have discovered a problem with one of our internal tools (DKFL) that rely on the jdk build numbers as part of the version string. Currently when we use the latest promoted jdk the version string will have a build number, but when building the complete source the build is always set to b00. We will delay the change below until we have resolved this problem. /Staffan On 15 sep 2014, at 11:25, Staffan Larsen wrote: > All, > > We plan to move ahead with this change on Wednesday (Sept 17th) unless there are instabilities that prevent this. We currently have one open bug blocking this (JDK-8058251). > > I will follow up with an email once the switch has happened. 
> > Thanks, > /Staffan > > > On 9 sep 2014, at 08:02, Staffan Larsen wrote: > >> >> ## tl;dr >> >> We propose a move to a Hotspot development model where we can do both >> hotspot and jdk changes in the hotspot group repos. This will require a >> fully populated JDK forest to push changes (whether hotspot or jdk >> changes) through JPRT. We do not expect these changes to have much >> effect on the open community, but it is good to note that there can be >> changes both in hotspot and jdk code coming through the hotspot >> repositories, and the best practice is to always clone and build the >> complete forest. >> >> We propose to do this change in a few weeks' time. >> >> ## Problem >> >> We see an increasing number of features (small and large) that require >> concerted changes to both the hotspot and the jdk repos. Our current >> development model does not support this very well since it requires jdk >> changes to be made in jdk9/dev and hotspot changes to be made in the >> hotspot group repositories. Alternatively, such changes result in "flag >> days" where jdk and hotspot changes are pushed through the group repos >> with a lot of manual work and impact on everyone working in the group >> repos. Either way, the result is very slow and cumbersome development. >> >> Some examples where concerted changes have been required are JSR-292, >> default methods, Java Flight Recorder, work on annotations, moving Class >> fields to Java, many serviceability area tests, and so on. A lot of this >> work will continue and we will also see new things such as jigsaw that >> add to the mix. >> >> Doing concerted changes today takes a lot of manual effort and calendar >> time to make sure nothing breaks. In many cases the addition of a new >> feature needs to be made first to a hotspot group repo. That change needs >> to propagate to jdk9/dev where library code can be changed to depend on >> it. 
Once that change has propagated back to the hotspot group repo, the >> final change can be made to remove the old implementation. This dance >> can take anywhere from 2 to 4 weeks to complete - for a single feature. >> >> There have also been quite a few cases where we missed taking the >> dependency into account which results in test failures in one or more >> repos. In some cases these failures go on for several weeks causing lots >> of extra work and confusion simply because it takes time for the fix to >> propagate through the repos. >> >> Instead, we want to move to a model where we can make both jdk and >> hotspot changes directly in the hotspot group repos. In that way the >> changes will always "travel together" through the repos. This will make >> our development cycle faster as well as more reliable. >> >> More or less by definition these types of changes introduce a stronger >> dependency between hotspot and the jdk. For the product as a whole to >> work correctly the right combination of hotspot and the jdk needs to be >> used. We have long since removed the requirement that hotspot would >> support several jdk versions (known as the Hotspot Express - or hsx - >> model) and we continue to see a strong dependency, where matching code >> in hotspot and the jdk needs to be used. >> >> ## No More Dependency on Latest Promoted Build >> >> The strong dependency between hotspot and jdk makes it impossible for >> hotspot to depend on the latest promoted jdk build for testing and >> development. To elaborate on this: if a change with hotspot+jdk >> dependencies has been pushed to a group repo, it will no longer be >> possible to use the latest promoted build for running or testing the >> version of hotspot built in that repo -- the latest promoted build will >> not have the latest change to the jdk that hotspot now depends on (or >> vice versa). 
>> >> ## Require Fully Populated JDK Forest >> >> The simple solution that we can switch to today is to always require a >> fully populated JDK forest when building (both locally and in JPRT). By >> this we mean a clone of all the repos in the forest under, for example, >> jdk9/hs-rt. JPRT would no longer be using the latest promoted build when >> creating bundles, instead it will build the code from the submitted >> forest. >> >> If all operations (builds, integrations, pushes, JPRT jobs) always work >> on the full forest, then there will never be a mismatch between the jdk >> and the hotspot code. >> >> The main drawbacks of this are that developers now need to clone, store >> and build a lot more code. Cloning the full forest takes longer than >> just cloning the hotspot forest. This can be alleviated by maintaining >> local cached versions. Storing full forests requires more disk space. >> This can be mitigated by buying more disks or using a different workflow >> (for example Mercurial Queues). Building a full jdk takes longer, but >> hotspot is already one of the larger components to build and incremental >> builds are usually quite fast. >> >> ## Next Steps >> >> Given that we would like to improve the model we use for cross component >> development as soon as possible, we would like to switch to require a >> fully populated JDK forest for hotspot development. All the >> prerequisites for doing this are in place (changes to JPRT, both on the >> servers and to the configuration files in the source repos). A group of >> volunteering hotspot developers have been using full jdk repos for a >> while for day-to-day work (except pushes) and have not reported any >> showstopper problems. >> >> If no strong objections are raised we need to decide on a date when we >> throw the switch. A good date is probably after the 8u40 Feature >> Complete date of mid-September [0] so as not to impact that release >> (although this change will only apply to JDK 9 development for now). 
>> >> Regards, >> Jon Masamitsu, Karen Kinnear, Mikael Vidstedt, >> Staffan Larsen, Stefan Särne, Vladimir Kozlov >> >> [0] http://openjdk.java.net/projects/jdk8u/releases/8u40.html > From goetz.lindenmaier at sap.com Wed Sep 17 09:27:20 2014 From: goetz.lindenmaier at sap.com (Lindenmaier, Goetz) Date: Wed, 17 Sep 2014 09:27:20 +0000 Subject: RFR: [backport] 8044775: Improve usage of umbrella header atomic.inline.hpp. Message-ID: <4295855A5C1DE049A61835A1887419CC2CF07045@DEWDFEMB12A.global.corp.sap> Hi, I'd like to backport this change: JDK-8044775: Improve usage of umbrella header atomic.inline.hpp. It did not apply cleanly, so I need a review: Some files do not exist in 8 or don't exist any more: src/share/vm/classfile/stringTable.cpp src/share/vm/service/memPtr.hpp src/share/vm/service/memPtr.cpp src/share/vm/service/memRecorder.cpp In some files the patch did not apply cleanly, as the context changed: src/share/vm/utilities/bitMap.cpp src/share/vm/gc_implementation/g1/g1CollectedHeap.cpp src/share/vm/oops/instanceKlass.cpp Here usage of class Atomic was removed along with the header: src/share/vm/service/memTracker.cpp This is the webrev for the 8u repository: http://cr.openjdk.java.net/~goetz/webrevs/8044775-atomInc/webrev.8.00/ This is the change in 9: http://hg.openjdk.java.net/jdk9/hs/hotspot/rev/b596a1063e90 and the webrev submitted to 9: http://cr.openjdk.java.net/~goetz/webrevs/8044775-atomInc/webrev.01/ Please review this; I also need a sponsor to push the change. Best regards, Goetz. 
From goetz.lindenmaier at sap.com Wed Sep 17 09:27:24 2014 From: goetz.lindenmaier at sap.com (Lindenmaier, Goetz) Date: Wed, 17 Sep 2014 09:27:24 +0000 Subject: RFR: [backport] 8048241: Introduce umbrella header os.inline.hpp and clean up includes Message-ID: <4295855A5C1DE049A61835A1887419CC2CF0704E@DEWDFEMB12A.global.corp.sap> Hi, I'd like to backport this change: 8048241: Introduce umbrella header os.inline.hpp and clean up includes It did not apply cleanly, so I need a review, please. The context has changed in src/share/vm/runtime/arguments.cpp, the change in the patch remained the same. This is the webrev for the 8u repository: http://cr.openjdk.java.net/~goetz/webrevs/8048241-osInc/webrev.8.00/ This is the change in 9: http://hg.openjdk.java.net/jdk9/hs/hotspot/rev/08a2164660fb and the webrev submitted to 9: http://cr.openjdk.java.net/~goetz/webrevs/8048241-osInc/webrev.00/ Please review this; I also need a sponsor to push the change. Best regards, Goetz. From aph at redhat.com Wed Sep 17 10:31:19 2014 From: aph at redhat.com (Andrew Haley) Date: Wed, 17 Sep 2014 11:31:19 +0100 Subject: More on memory barriers In-Reply-To: <5417E963.5060103@redhat.com> References: <541703DB.5030207@redhat.com> <54170F06.2070207@cs.oswego.edu> <54171026.8060700@redhat.com> <5417E963.5060103@redhat.com> Message-ID: <541962F7.5020807@redhat.com> On 09/16/2014 08:40 AM, Andrew Haley wrote: > On 15/09/14 17:20, Vitaly Davidovich wrote: >> Looking at hg history, MemBarStoreStore was added a few years ago, whereas >> the code in question is much older. The comments in the changelist adding >> MemBarStoreStore seem to indicate it was done to address a specific issue, >> and my guess is that it wasn't "retrofitted" into all possible places. > > That sounds plausible. I'll change this to a StoreStore in the AArch64 > port and do some testing. Bah, that doesn't work. Escape analysis assumes that a StoreStore is only used in certain contexts. Back to the drawing board. Andrew. 
From erik.osterlund at lnu.se Wed Sep 17 11:13:04 2014 From: erik.osterlund at lnu.se (=?iso-8859-1?Q?Erik_=D6sterlund?=) Date: Wed, 17 Sep 2014 11:13:04 +0000 Subject: RFR: 8058255: Native jbyte Atomic::cmpxchg for supported x86 platforms In-Reply-To: <54124F11.8060100@oracle.com> References: <54124F11.8060100@oracle.com> Message-ID: <8E69D7A0-CD8D-4EC6-B708-5F2C0098B183@lnu.se> I am back! Did you guys have time to do some thinking? I see three different solutions: 1. Good old class inheritance! Class Atomic is-a Atomic_YourArchHere is-a AtomicAbstract Using the CRTP (Curiously Recurring Template Pattern) for C++, this could be done without a virtual call where we want inlining. 2. Similar except with the SFINAE idiom (Substitution Failure Is Not An Error) for C++, to pick the right overload based on statically determined constraints. E.g. define if Atomic::has_general_byte_CAS and based on whether this is defined or not, pick the general or specific overload variant of the CAS member function. 3. Simply make the current CAS a normal function which is called from billions of new inline method definitions that we have to create for every single architecture. What do we prefer here? Does anyone else have a better idea? Also, should I start a new thread or is it okay to post it here? /Erik On 12 Sep 2014, at 03:40, David Holmes wrote: > Hi Erik, > > Can we pause and give some more thought to a clean mechanism for allowing a shared implementation if desired with the ability to override if desired. I really do not like to see CPU specific ifdefs being added to shared code. (And I would also not like to see all platforms being forced to reimplement this natively). 
> > I'm not saying we will find a simple solution, but it would be nice if we could get a few folk to think about it before proceeding with the ifdefs :) > > > Thanks, > David > > > On 12/09/2014 7:48 AM, Erik Österlund wrote: >> Hi, >> >> These changes aim at replacing the awkward old jbyte Atomic::cmpxchg implementation for all the supported x86 platforms. It previously emulated the behaviour of cmpxchgb using a loop of cmpxchgl and some dynamic alignment of the destination address. >> >> This code is called by remembered sets to manipulate card entries. >> >> The implementation has now been replaced with a bunch of assembly, appropriate for all platforms. Yes, for windows too. >> >> Implementations include: >> bsd x86/x86_64: inline asm >> linux x86/x86_64: inline asm >> solaris x86/x86_64: .il files >> windows x86_64 without GNU source: stubGenerator and manual code emission and hence including new Assembler::cmpxchgb support >> Windows x86 + x86_64 with GNU source: inline asm >> >> Bug: https://bugs.openjdk.java.net/browse/JDK-8058255 >> >> Webrev: http://cr.openjdk.java.net/~jwilhelm/8058255/webrev/ >> >> Improvements can be made for other architectures as well, but this should be a good start. >> >> /Erik >> From dl at cs.oswego.edu Wed Sep 17 11:16:36 2014 From: dl at cs.oswego.edu (Doug Lea) Date: Wed, 17 Sep 2014 07:16:36 -0400 Subject: More on memory barriers In-Reply-To: <541962F7.5020807@redhat.com> References: <541703DB.5030207@redhat.com> <54170F06.2070207@cs.oswego.edu> <54171026.8060700@redhat.com> <5417E963.5060103@redhat.com> <541962F7.5020807@redhat.com> Message-ID: <54196D94.9050900@cs.oswego.edu> On 09/17/2014 06:31 AM, Andrew Haley wrote: > On 09/16/2014 08:40 AM, Andrew Haley wrote: >> On 15/09/14 17:20, Vitaly Davidovich wrote: >>> Looking at hg history, MemBarStoreStore was added a few years ago, whereas >>> the code in question is much older. 
The comments in the changelist adding >>> MemBarStoreStore seem to indicate it was done to address a specific issue, >>> and my guess is that it wasn't "retrofitted" into all possible places. >> >> That sounds plausible. I'll change this to a StoreStore in the AArch64 >> port and do some testing. > > Bah, that doesn't work. Escape analysis assumes that a StoreStore > is only used in certain contexts. Back to the drawing board. > The setup for StoreStore seems suspicious. I believe that this could only work in C2 if done in the way I mentioned: StoreStore must be handled identically to Release by c2, but possibly more cheaply matched. Can StoreStore be reworked as a subtype or property of Release? -Doug From aph at redhat.com Wed Sep 17 11:22:00 2014 From: aph at redhat.com (Andrew Haley) Date: Wed, 17 Sep 2014 12:22:00 +0100 Subject: More on memory barriers In-Reply-To: <54196D94.9050900@cs.oswego.edu> References: <541703DB.5030207@redhat.com> <54170F06.2070207@cs.oswego.edu> <54171026.8060700@redhat.com> <5417E963.5060103@redhat.com> <541962F7.5020807@redhat.com> <54196D94.9050900@cs.oswego.edu> Message-ID: <54196ED8.2020303@redhat.com> On 09/17/2014 12:16 PM, Doug Lea wrote: > On 09/17/2014 06:31 AM, Andrew Haley wrote: >> On 09/16/2014 08:40 AM, Andrew Haley wrote: >>> On 15/09/14 17:20, Vitaly Davidovich wrote: >>>> Looking at hg history, MemBarStoreStore was added a few years ago, whereas >>>> the code in question is much older. The comments in the changelist adding >>>> MemBarStoreStore seem to indicate it was done to address a specific issue, >>>> and my guess is that it wasn't "retrofitted" into all possible places. >>> >>> That sounds plausible. I'll change this to a StoreStore in the AArch64 >>> port and do some testing. >> >> Bah, that doesn't work. Escape analysis assumes that a StoreStore >> is only used in certain contexts. Back to the drawing board. > > The setup for StoreStore seems suspicious. 
I believe that this could > only work in C2 if done in the way I mentioned: StoreStore must be > handled identically to Release by c2, but possibly more cheaply > matched. Can StoreStore be reworked as a subtype or property of > Release? That's what I was thinking. I'll have a look at doing something along those lines. Andrew. From george.triantafillou at oracle.com Wed Sep 17 12:15:24 2014 From: george.triantafillou at oracle.com (George Triantafillou) Date: Wed, 17 Sep 2014 08:15:24 -0400 Subject: RFR(XS): 8056263 [TESTBUG] Re-enable NMTWithCDS.java test Message-ID: <54197B5C.6050006@oracle.com> Please review this updated test for 8056263. Prior to the promotion of the java launcher changes, the test was failing in JPRT. The test is now enabled. Webrev: http://cr.openjdk.java.net/~gtriantafill/8056263/webrev/ Bug: https://bugs.openjdk.java.net/browse/JDK-8056263 The fix was tested locally on Linux with jtreg. Thanks. -George From david.holmes at oracle.com Wed Sep 17 12:28:03 2014 From: david.holmes at oracle.com (David Holmes) Date: Wed, 17 Sep 2014 22:28:03 +1000 Subject: RFR: 8058255: Native jbyte Atomic::cmpxchg for supported x86 platforms In-Reply-To: <8E69D7A0-CD8D-4EC6-B708-5F2C0098B183@lnu.se> References: <54124F11.8060100@oracle.com> <8E69D7A0-CD8D-4EC6-B708-5F2C0098B183@lnu.se> Message-ID: <54197E53.9030307@oracle.com> On 17/09/2014 9:13 PM, Erik ?sterlund wrote: > I am back! Did you guys have time to do some thinking? I see three different solutions: > > 1. Good old class inheritance! Class Atomic is-a Atomic_YourArchHere is-a AtomicAbstract > Using the CRTP (Curiously Recurring Template Pattern) for C++, this could be done without a virtual call where we want inlining. I would prefer this approach (here and elsewhere) but it is not a short-term option. > 2. Similar except with the SFINAE idiom (Substitution Failure Is Not An Error) for C++, to pick the right overload based on statically determined constraints. > E.g. 
define if Atomic::has_general_byte_CAS and based on whether this is defined or not, pick the general or specific overload variant of the CAS member function. Not sure what this one is but it sounds like a manual virtual dispatch - which seems not a good solution. > 3. Simply make the current CAS a normal function which is called from billions of new inline method definitions that we have to create for every single architecture. I think the simple version of 3 is just move cmpxchg(jbyte) out of the shared code and define for each platform - there aren't that many and it is consistent with many of the other variants. > What do we prefer here? Does anyone else have a better idea? Also, should I start a new thread or is it okay to post it here? Continuing this thread is fine by me. I think short-term the simple version of 3 is preferable. Thanks, David > /Erik > > > On 12 Sep 2014, at 03:40, David Holmes wrote: > >> Hi Erik, >> >> Can we pause and give some more thought to a clean mechanism for allowing a shared implementation if desired with the ability to override if desired. I really do not like to see CPU specific ifdefs being added to shared code. (And I would also not like to see all platforms being forced to reimplement this natively). >> >> I'm not saying we will find a simple solution, but it would be nice if we could get a few folk to think about it before proceeding with the ifdefs :) >> >> >> Thanks, >> David >> >> >> On 12/09/2014 7:48 AM, Erik Österlund wrote: >>> Hi, >>> >>> These changes aim at replacing the awkward old jbyte Atomic::cmpxchg implementation for all the supported x86 platforms. It previously emulated the behaviour of cmpxchgb using a loop of cmpxchgl and some dynamic alignment of the destination address. >>> >>> This code is called by remembered sets to manipulate card entries. >>> >>> The implementation has now been replaced with a bunch of assembly, appropriate for all platforms. Yes, for windows too.
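[Editorial note: option 1 above, the CRTP layering Erik describes, could look roughly like the following sketch. The names (AtomicBase, AtomicX86, cmpxchg, add) are hypothetical, not the actual HotSpot declarations, and a GCC builtin stands in for the native lock cmpxchgb.]

```cpp
#include <cassert>
#include <cstdint>

// Sketch of option 1 (CRTP): the shared layer implements common
// algorithms once, in terms of a primitive supplied by the platform
// class. Derived::cmpxchg is bound at compile time, so there is no
// virtual call and everything can be inlined.
template <class Derived>
struct AtomicBase {
  // Shared read-modify-write loop written against the platform CAS.
  static int8_t add(int8_t delta, volatile int8_t* dest) {
    int8_t old, updated;
    do {
      old = *dest;
      updated = (int8_t)(old + delta);
    } while (Derived::cmpxchg(updated, dest, old) != old);
    return updated;
  }
};

// A platform supplies just the primitive; here a GCC/Clang builtin
// stands in for a hand-written native byte CAS.
struct AtomicX86 : AtomicBase<AtomicX86> {
  static int8_t cmpxchg(int8_t exchange, volatile int8_t* dest, int8_t compare) {
    return __sync_val_compare_and_swap(dest, compare, exchange);
  }
};
```

A platform without a native byte CAS would simply inherit the shared fallback instead of shadowing it, which is what makes this approach attractive compared to per-CPU ifdefs in shared code.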
>>> >>> Implementations include: >>> bsd x86/x86_64: inline asm >>> linux x86/x86_64: inline asm >>> solaris x86/x86_64: .il files >>> windows x86_64 without GNU source: stubGenerator and manual code emission and hence including new Assembler::cmpxchgb support >>> Windows x86 + x86_64 with GNU source: inline asm >>> >>> Bug: https://bugs.openjdk.java.net/browse/JDK-8058255 >>> >>> Webrev: http://cr.openjdk.java.net/~jwilhelm/8058255/webrev/ >>> >>> Improvements can be made for other architectures as well, but this should be a good start. >>> >>> /Erik >>> > From harold.seigel at oracle.com Wed Sep 17 12:29:18 2014 From: harold.seigel at oracle.com (harold seigel) Date: Wed, 17 Sep 2014 08:29:18 -0400 Subject: RFR(XS): 8056263 [TESTBUG] Re-enable NMTWithCDS.java test In-Reply-To: <54197B5C.6050006@oracle.com> References: <54197B5C.6050006@oracle.com> Message-ID: <54197E9E.3040209@oracle.com> Hi George, The change looks good. Harold On 9/17/2014 8:15 AM, George Triantafillou wrote: > Please review this updated test for 8056263.
Prior to the promotion >> of the java launcher changes, the test was failing in JPRT. The test >> is now enabled. >> >> Webrev: http://cr.openjdk.java.net/~gtriantafill/8056263/webrev/ >> >> Bug: https://bugs.openjdk.java.net/browse/JDK-8056263 >> >> The fix was tested locally on Linux with jtreg. >> >> Thanks. >> >> -George > From vitalyd at gmail.com Wed Sep 17 12:37:55 2014 From: vitalyd at gmail.com (Vitaly Davidovich) Date: Wed, 17 Sep 2014 08:37:55 -0400 Subject: More on memory barriers In-Reply-To: <541962F7.5020807@redhat.com> References: <541703DB.5030207@redhat.com> <54170F06.2070207@cs.oswego.edu> <54171026.8060700@redhat.com> <5417E963.5060103@redhat.com> <541962F7.5020807@redhat.com> Message-ID: That's unfortunate; naively seems odd that EA is tripped up by it. Sent from my phone On Sep 17, 2014 6:31 AM, "Andrew Haley" wrote: > On 09/16/2014 08:40 AM, Andrew Haley wrote: > > On 15/09/14 17:20, Vitaly Davidovich wrote: > >> Looking at hg history, MemBarStoreStore was added a few years ago, > whereas > >> the code in question is much older. The comments in the changelist > adding > >> MemBarStoreStore seem to indicate it was done to address a specific > issue, > >> and my guess is that it wasn't "retrofitted" into all possible places. > > > > That sounds plausible. I'll change this to a StoreStore in the AArch64 > > port and do some testing. > > Bah, that doesn't work. Escape analysis assumes that a StoreStore > is only used in certain contexts. Back to the drawing board. > > Andrew. > > From lois.foltan at oracle.com Wed Sep 17 12:43:25 2014 From: lois.foltan at oracle.com (Lois Foltan) Date: Wed, 17 Sep 2014 08:43:25 -0400 Subject: RFR(XS): 8056263 [TESTBUG] Re-enable NMTWithCDS.java test In-Reply-To: <54197B5C.6050006@oracle.com> References: <54197B5C.6050006@oracle.com> Message-ID: <541981ED.6070504@oracle.com> Looks good. Lois On 9/17/2014 8:15 AM, George Triantafillou wrote: > Please review this updated test for 8056263. 
Prior to the promotion > of the java launcher changes, the test was failing in JPRT. The test > is now enabled. > > Webrev: http://cr.openjdk.java.net/~gtriantafill/8056263/webrev/ > > Bug: https://bugs.openjdk.java.net/browse/JDK-8056263 > > The fix was tested locally on Linux with jtreg. > > Thanks. > > -George From george.triantafillou at oracle.com Wed Sep 17 12:43:53 2014 From: george.triantafillou at oracle.com (George Triantafillou) Date: Wed, 17 Sep 2014 08:43:53 -0400 Subject: RFR(XS): 8056263 [TESTBUG] Re-enable NMTWithCDS.java test In-Reply-To: <541981ED.6070504@oracle.com> References: <54197B5C.6050006@oracle.com> <541981ED.6070504@oracle.com> Message-ID: <54198209.5010409@oracle.com> Thanks Lois. -George On 9/17/2014 8:43 AM, Lois Foltan wrote: > Looks good. > Lois > > On 9/17/2014 8:15 AM, George Triantafillou wrote: >> Please review this updated test for 8056263. Prior to the promotion >> of the java launcher changes, the test was failing in JPRT. The test >> is now enabled. >> >> Webrev: http://cr.openjdk.java.net/~gtriantafill/8056263/webrev/ >> >> Bug: https://bugs.openjdk.java.net/browse/JDK-8056263 >> >> The fix was tested locally on Linux with jtreg. >> >> Thanks. >> >> -George > From aph at redhat.com Wed Sep 17 14:36:43 2014 From: aph at redhat.com (Andrew Haley) Date: Wed, 17 Sep 2014 15:36:43 +0100 Subject: More on memory barriers In-Reply-To: References: <541703DB.5030207@redhat.com> <54170F06.2070207@cs.oswego.edu> <54171026.8060700@redhat.com> <5417E963.5060103@redhat.com> <541962F7.5020807@redhat.com> Message-ID: <54199C7B.4030201@redhat.com> On 09/17/2014 01:37 PM, Vitaly Davidovich wrote: > That's unfortunate; naively seems odd that EA is tripped up by it. Well, that's probably not the only problem: there are many places where different paths are taken for Release and StoreStore. The best approach seems to be to leave the kind of barrier as it is, but mark it in some way. Andrew. 
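[Editorial note: Doug's suggestion of making StoreStore a subtype of Release, together with Andrew's conclusion to "leave the kind of barrier as it is, but mark it in some way", can be pictured with a small, purely illustrative hierarchy. These are not the real C2 node classes; the names and instruction strings are assumptions for the sketch.]

```cpp
#include <cassert>
#include <string>

// Illustrative only: if StoreStore is-a Release, every optimization
// pass that special-cases Release (escape analysis, etc.) covers
// StoreStore automatically, while instruction selection can still
// emit a cheaper barrier for it.
struct MemBarNode {
  virtual ~MemBarNode() {}
  virtual bool is_release() const { return false; }
  virtual std::string match() const { return "dmb ish"; }  // full barrier
};

struct MemBarReleaseNode : MemBarNode {
  bool is_release() const { return true; }
  std::string match() const { return "dmb ish"; }
};

// StoreStore keeps release semantics for the analyses, but matches
// to the cheaper store-store-only barrier on AArch64.
struct MemBarStoreStoreNode : MemBarReleaseNode {
  std::string match() const { return "dmb ishst"; }
};

// Stand-in for a pass that only needs to know whether the node has
// at least release semantics.
inline bool pass_treats_as_release(const MemBarNode& n) {
  return n.is_release();
}
```

The design point is that the "mark" (here the overridden match()) changes only code generation, never how the optimizer classifies the barrier.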
From vitalyd at gmail.com Wed Sep 17 14:42:54 2014 From: vitalyd at gmail.com (Vitaly Davidovich) Date: Wed, 17 Sep 2014 10:42:54 -0400 Subject: More on memory barriers In-Reply-To: <54199C7B.4030201@redhat.com> References: <541703DB.5030207@redhat.com> <54170F06.2070207@cs.oswego.edu> <54171026.8060700@redhat.com> <5417E963.5060103@redhat.com> <541962F7.5020807@redhat.com> <54199C7B.4030201@redhat.com> Message-ID: Yeah, your (and Doug's) suggestion seems sensible. On Wed, Sep 17, 2014 at 10:36 AM, Andrew Haley wrote: > On 09/17/2014 01:37 PM, Vitaly Davidovich wrote: > > That's unfortunate; naively seems odd that EA is tripped up by it. > > Well, that's probably not the only problem: there are many places > where different paths are taken for Release and StoreStore. The best > approach seems to be to leave the kind of barrier as it is, but mark it > in some way. > > Andrew. > > > From daniel.daugherty at oracle.com Wed Sep 17 14:46:02 2014 From: daniel.daugherty at oracle.com (Daniel D. Daugherty) Date: Wed, 17 Sep 2014 08:46:02 -0600 Subject: RFR(XS): 8056263 [TESTBUG] Re-enable NMTWithCDS.java test In-Reply-To: <54197B5C.6050006@oracle.com> References: <54197B5C.6050006@oracle.com> Message-ID: <54199EAA.2050500@oracle.com> > the test was failing in JPRT > The fix was tested locally on Linux with jtreg. I'm having trouble reconciling these two sentences... Seems like you need a JPRT test job that runs just this test. Dan On 9/17/14 6:15 AM, George Triantafillou wrote: > Please review this updated test for 8056263. Prior to the promotion > of the java launcher changes, the test was failing in JPRT. The test > is now enabled. > > Webrev: http://cr.openjdk.java.net/~gtriantafill/8056263/webrev/ > > Bug: https://bugs.openjdk.java.net/browse/JDK-8056263 > > The fix was tested locally on Linux with jtreg. > > Thanks. > > -George From daniel.daugherty at oracle.com Wed Sep 17 14:48:49 2014 From: daniel.daugherty at oracle.com (Daniel D. 
Daugherty) Date: Wed, 17 Sep 2014 08:48:49 -0600 Subject: RFR(XS): 8056263 [TESTBUG] Re-enable NMTWithCDS.java test In-Reply-To: <54199EAA.2050500@oracle.com> References: <54197B5C.6050006@oracle.com> <54199EAA.2050500@oracle.com> Message-ID: <54199F51.5090809@oracle.com> OK. So all you did was remove an @ignore. I was fooled by the "Please review this updated test for 8056263"... That made it sound like the test was changed/fixed... Dan On 9/17/14 8:46 AM, Daniel D. Daugherty wrote: > > the test was failing in JPRT > > The fix was tested locally on Linux with jtreg. > > I'm having trouble reconciling these two sentences... > > Seems like you need a JPRT test job that runs just this test. > > Dan > > > On 9/17/14 6:15 AM, George Triantafillou wrote: >> Please review this updated test for 8056263. Prior to the promotion >> of the java launcher changes, the test was failing in JPRT. The test >> is now enabled. >> >> Webrev: http://cr.openjdk.java.net/~gtriantafill/8056263/webrev/ >> >> Bug: https://bugs.openjdk.java.net/browse/JDK-8056263 >> >> The fix was tested locally on Linux with jtreg. >> >> Thanks. >> >> -George > From volker.simonis at gmail.com Wed Sep 17 18:29:49 2014 From: volker.simonis at gmail.com (Volker Simonis) Date: Wed, 17 Sep 2014 20:29:49 +0200 Subject: RFR(S): 8058345: Refactor native stack printing from vmError.cpp to debug.cpp to make it available in gdb as well In-Reply-To: <5418B53F.7050508@oracle.com> References: <5417DE03.6060301@oracle.com> <54187D59.5050602@oracle.com> <5418B53F.7050508@oracle.com> Message-ID: On Wed, Sep 17, 2014 at 12:10 AM, Vladimir Kozlov wrote: > On 9/16/14 12:21 PM, Volker Simonis wrote: >> >> Hi Vladimir, >> >> thanks for looking at the change. >> >> 'make_frame' is only intended to be used from within the debugger to >> simplify the usage of the new 'pns()' (i.e. "print native stack") >> helper. It can be used as follows: >> >> (gdb) call pns(make_frame($sp, $rbp, $pc)) > > > It is strange way to use pns(). 
Why not pass (sp, fp, pc) to pns() and let > it call make_frame()? To have make_frame() only on ppc and x86 will not > allow to use pns() on other platforms. > > Would be nice to have pns() version (names different) without input > parameters. Can we use os::current_frame() inside for that? > > Add pns() description to help() output. > >> >> "Executing pns" >> Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native >> code) >> C [libpthread.so.0+0xc0fe] pthread_cond_timedwait+0x13e >> V [libjvm.so+0x96c4c1] os::sleep(Thread*, long, bool)+0x1a1 >> V [libjvm.so+0x75f442] JVM_Sleep+0x312 >> j java.lang.Thread.sleep(J)V+0 >> j CrashNative.crashIt(Lsun/misc/Unsafe;I)V+10 >> j CrashNative.doIt()V+45 >> v ~StubRoutines::call_stub >> V [libjvm.so+0x71599f] >> JavaCalls::call_helper(JavaValue*,methodHandle*, JavaCallArguments*, >> Thread*)+0xf8f >> >> What about the two fixesin in 'print_native_stack()' - do you think they >> are OK? > > > What about is_runtime_frame()? It is wrapper for runtime calls from compiled > code. > Yes, but I don't see how this could help here, because the native wrapper which makes problems here is a nmethod and not a runtime stub. Maybe you mean to additionally add is_runtime_frame() to the check? Yes, I've just realized that that's indeed needed on amd64 to walk runtime stubs. SPARC is more graceful and works without these changes, but on amd64 we need them (on both Solaris and Linux) and on Sparc they don't hurt. 
I've written a small test program which should be similar to the one you used for 8035983:

import java.util.Hashtable;

public class StackTraceTest {
    static Hashtable ht;
    static {
        ht = new Hashtable();
        ht.put("one", "one");
    }

    public static void foo() {
        bar();
    }

    public static void bar() {
        ht.get("one");
    }

    public static void main(String args[]) {
        for (int i = 0; i < 5; i++) {
            new Thread() {
                public void run() {
                    while (true) {
                        foo();
                    }
                }
            }.start();
        }
    }
}

If I run it with "-XX:-Inline -XX:+PrintCompilation -XX:-TieredCompilation StackTraceTest" inside the debugger and crash one of the Java threads in native code, I get the correct stack traces on SPARC. But on amd64, I only get the following without my changes:

Stack: [0xfffffd7da16f9000,0xfffffd7da17f9000], sp=0xfffffd7da17f7c60, free space=1019k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
C [libc.so.1+0xc207f] _lwp_cond_wait+0x1f
V [libjvm.so+0x171443b] int os::Solaris::cond_wait(_lwp_cond*,_lwp_mutex*)+0x2b
V [libjvm.so+0x171181a] void os::PlatformEvent::park()+0x1fa
V [libjvm.so+0x16e09c1] void ObjectMonitor::EnterI(Thread*)+0x6f1
V [libjvm.so+0x16dfc8f] void ObjectMonitor::enter(Thread*)+0x7cf
V [libjvm.so+0x18cdd00] void ObjectSynchronizer::slow_enter(Handle,BasicLock*,Thread*)+0x2a0
V [libjvm.so+0x18cd6a7] void ObjectSynchronizer::fast_enter(Handle,BasicLock*,bool,Thread*)+0x157
V [libjvm.so+0x182f39e] void SharedRuntime::complete_monitor_locking_C(oopDesc*,BasicLock*,JavaThread*)+0x23e
v ~RuntimeStub::_complete_monitor_locking_Java
C 0x2aad1dd1000016d8

With the changes (and the additional check for is_runtime_frame()) I get full stack traces on amd64 as well. So I think the changes should be at least an improvement:)

> You need to check what fr.real_fp() returns on all platforms for the very
> first frame (_lwp_start). That is what this check is about - stop walking when
> it reaches the first frame.
fr.sender_sp() returns bogus value which is not > stack pointer for the first frame. From 8035983 review: > > "It seems using fr.sender_sp() in the check work on x86 and sparc. > On x86 it return stack_base value on sparc it returns STACK_BIAS." > > Also on other our platforms it could return 0 or small integer value. > > If you can suggest an other way to determine the first frame, please, tell. > So the initial problem in 8035983 was that we used os::is_first_C_frame(&fr) for native frames where the sender was a compiled frame. That didn't work reliably because, os::is_first_C_frame(&fr) uses fr->link() to get the frame pointer of the sender and that doesn't work for compiled senders. So you replaced os::is_first_C_frame(&fr) by !on_local_stack((address)(fr.sender_sp() + 1)) but that uses addr_at() internally which in turn uses fp() so it won't work for frames which have a bogus frame pointer like native wrappers. I think using fr.real_fp() should be safe because as far as I can see it is always fr.sender_sp() - 2 on amd64 and equal to fr.sender_sp() on SPARC. On Linux/amd64 both, the sp and fp of the first frame will be 0 (still have to check on SPARC). But the example above works fine with my changes on both, Linux/amd64 and Solaris/SPARC and Solaris/amd64. I'll prepare a new webrev tomorrow which will have the documentation for "pns" and a version of make_frame() for SPARC. Regards, Volker >> Should I move 'print_native_stack()' to vmError.cpp as suggested by David? > > > I am fine with both places. > > Thanks, > Vladimir > > >> >> Thank you and best regards, >> Volker >> >> On Tue, Sep 16, 2014 at 8:11 PM, Vladimir Kozlov >> wrote: >>> >>> Thank you for fixing frame walk. >>> I don't see where make_frame() is used. >>> >>> Thanks, >>> Vladimir >>> >>> >>> On 9/16/14 9:35 AM, Volker Simonis wrote: >>>> >>>> >>>> Hi, >>>> >>>> while testing my change, I found two other small problems with native >>>> stack traces: >>>> >>>> 1. 
we can not walk native wrappers on (at least not on Linux/amd64) >>>> because they are treated as native "C" frames. However, if the native >>>> wrapper was called from a compiled frame which had no valid frame >>>> pointer (i.e. %rbp) os::get_sender_for_C_frame(&fr) will produce a bad >>>> frame. This can be easily fixed by treating native wrappers like java >>>> frames. >>>> >>>> 2. the fix for "8035983: Fix "Native frames:" in crash report (hs_err >>>> file)" introduced a similar problem. If we walk the stack from a >>>> native wrapper down to a compiled frame, we will have a frame with an >>>> invalid frame pointer. In that case, the newly introduced check from >>>> change 8035983 will fail, because fr.sender_sp() depends on a valid >>>> fp. I'll propose to replace fr.sender_sp() by fr.real_fp() which >>>> should do the same but also works for compiled frames with invalid fp. >>>> >>>> Here's the new webrev: >>>> >>>> http://cr.openjdk.java.net/~simonis/webrevs/8058345.v1/ >>>> >>>> What do you think? >>>> >>>> Thank you and best regards, >>>> Volker >>>> >>>> >>>> On Tue, Sep 16, 2014 at 3:48 PM, Volker Simonis >>>> wrote: >>>>> >>>>> >>>>> 'print_native_stack()' must be visible in both vmError.cpp and >>>>> debug.cpp. Initially I saw that vmError.cpp already included debug.hpp >>>>> so I decided to declare it in debug.hpp. But now I realized that also >>>>> debug.cpp includes vmError.hpp so I could just as well declare >>>>> 'print_native_stack()' in vmError.hpp and leave the implementation in >>>>> vmError.cpp. Do you want me to change that?
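[Editorial note: the frame-walking pitfall discussed in this thread, following a saved frame pointer that a compiled frame or native wrapper never kept valid, can be modeled with a toy walker. This is purely illustrative, not the HotSpot implementation; the names Frame, on_stack and count_walkable_frames are invented for the sketch.]

```cpp
#include <cassert>
#include <cstdint>

// Toy model: each frame stores its caller's frame pointer at fp[0].
// A walker that blindly follows fp derails as soon as one frame left
// a bogus value there, so it must stop once the next "frame pointer"
// falls outside the known stack bounds (the role the
// on_local_stack()/real_fp() checks play in the real code).
struct Frame { intptr_t* fp; };

inline bool on_stack(intptr_t* p, intptr_t* lo, intptr_t* hi) {
  return (uintptr_t)p > (uintptr_t)lo && (uintptr_t)p < (uintptr_t)hi;
}

inline int count_walkable_frames(Frame leaf, intptr_t* lo, intptr_t* hi) {
  int n = 0;
  for (Frame f = leaf; on_stack(f.fp, lo, hi); f.fp = (intptr_t*)f.fp[0]) {
    n++;  // stop at the first link that leaves the stack bounds
  }
  return n;
}
```

The bug being fixed is the dual failure mode: a frame whose fp is garbage makes any check that dereferences fp (like sender_sp() via addr_at()) unreliable, which is why a bound check on a value derived without trusting fp is preferred.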
>>>>> >>>>> Thank you and best regards, >>>>> Volker >>>>> >>>>> >>>>> On Tue, Sep 16, 2014 at 8:51 AM, David Holmes >>>>> wrote: >>>>>> >>>>>> >>>>>> Hi Volker, >>>>>> >>>>>> On 13/09/2014 5:15 AM, Volker Simonis wrote: >>>>>>> >>>>>>> >>>>>>> >>>>>>> Hi, >>>>>>> >>>>>>> could you please review and sponsor the following small change which >>>>>>> should make debugging a little more comfortabel (at least on Linux >>>>>>> for >>>>>>> now): >>>>>>> >>>>>>> http://cr.openjdk.java.net/~simonis/webrevs/8058345/ >>>>>>> https://bugs.openjdk.java.net/browse/JDK-8058345 >>>>>>> >>>>>>> In the hs_err files we have a nice mixed stack trace which contains >>>>>>> both, Java and native frames. >>>>>>> It would be nice if we could make this functionality available from >>>>>>> within gdb during debugging sessions (until now we can only print the >>>>>>> pure Java stack with the "ps()" helper function from debug.cpp). >>>>>>> >>>>>>> This new feature can be easily achieved by refactoring the >>>>>>> corresponding stack printing code from VMError::report() in >>>>>>> vmError.cpp into its own method in debug.cpp. This change extracts >>>>>>> that code into the new function 'print_native_stack()' in debug.cpp >>>>>>> without changing anything of the functionality. >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> Why does it need to move to debug.cpp to allow this ? >>>>>> >>>>>> David >>>>>> ----- >>>>>> >>>>>> >>>>>>> It also adds some helper functions which make it easy to call the new >>>>>>> 'print_native_stack()' method from within gdb. There's the new helper >>>>>>> function 'pns(frame f)' which takes a frame argument and calls >>>>>>> 'print_native_stack()'. We need the frame argument because gdb >>>>>>> inserts >>>>>>> a dummy frame for every call and we can't easily walk over this dummy >>>>>>> frame from our stack printing routine. 
>>>>>>> >>>>>>> To simplify the creation of the frame object, I've added the helper >>>>>>> functions: >>>>>>> >>>>>>> extern "C" frame make_frame(intptr_t* sp, intptr_t* fp, address pc) { >>>>>>> return frame(sp, fp, pc); >>>>>>> } >>>>>>> >>>>>>> for x86 (in frame_x86.cpp) and >>>>>>> >>>>>>> extern "C" frame make_frame(intptr_t* sp, address pc) { >>>>>>> return frame(sp, pc); >>>>>>> } >>>>>>> >>>>>>> for ppc64 in frame_ppc.cpp. With these helper functions we can now >>>>>>> easily get a mixed stack trace of a Java thread in gdb (see below). >>>>>>> >>>>>>> All the helper functions are protected by '#ifndef PRODUCT' >>>>>>> >>>>>>> Thank you and best regards, >>>>>>> Volker >>>>>>> >>>>>>> >>>>>>> (gdb) call pns(make_frame($sp, $rbp, $pc)) >>>>>>> >>>>>>> "Executing pns" >>>>>>> Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, >>>>>>> C=native >>>>>>> code) >>>>>>> C [libpthread.so.0+0xc0fe] pthread_cond_timedwait+0x13e >>>>>>> V [libjvm.so+0x96c4c1] os::sleep(Thread*, long, bool)+0x1a1 >>>>>>> V [libjvm.so+0x75f442] JVM_Sleep+0x312 >>>>>>> j java.lang.Thread.sleep(J)V+0 >>>>>>> j CrashNative.crashIt(Lsun/misc/Unsafe;I)V+10 >>>>>>> j CrashNative.doIt()V+45 >>>>>>> v ~StubRoutines::call_stub >>>>>>> V [libjvm.so+0x71599f] JavaCalls::call_helper(JavaValue*, >>>>>>> methodHandle*, JavaCallArguments*, Thread*)+0xf8f >>>>>>> V [libjvm.so+0x9eab75] Reflection::invoke(instanceKlassHandle, >>>>>>> methodHandle, Handle, bool, objArrayHandle, BasicType, >>>>>>> objArrayHandle, >>>>>>> bool, Thread*) [clone .constprop.218]+0xa25 >>>>>>> V [libjvm.so+0x9eb838] Reflection::invoke_method(oopDesc*, Handle, >>>>>>> objArrayHandle, Thread*)+0x1c8 >>>>>>> V [libjvm.so+0x7637ae] JVM_InvokeMethod+0xfe >>>>>>> j >>>>>>> >>>>>>> >>>>>>> sun.reflect.NativeMethodAccessorImpl.invoke0(Ljava/lang/reflect/Method;Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;+0 >>>>>>> j >>>>>>> >>>>>>> >>>>>>> 
sun.reflect.NativeMethodAccessorImpl.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;+100 >>>>>>> j >>>>>>> >>>>>>> >>>>>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;+6 >>>>>>> j >>>>>>> >>>>>>> >>>>>>> java.lang.reflect.Method.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;+56 >>>>>>> j CrashNative.mainJava()V+32 >>>>>>> v ~StubRoutines::call_stub >>>>>>> V [libjvm.so+0x71599f] JavaCalls::call_helper(JavaValue*, >>>>>>> methodHandle*, JavaCallArguments*, Thread*)+0xf8f >>>>>>> V [libjvm.so+0x7384f5] jni_invoke_static(JNIEnv_*, JavaValue*, >>>>>>> _jobject*, JNICallType, _jmethodID*, JNI_ArgumentPusher*, Thread*) >>>>>>> [clone .isra.238] [clone .constprop.250]+0x385 >>>>>>> V [libjvm.so+0x73b3d7] jni_CallStaticVoidMethodV+0xe7 >>>>>>> C [libCrashNative.so+0x9a9] JNIEnv_::CallStaticVoidMethod(_jclass*, >>>>>>> _jmethodID*, ...)+0xb9 >>>>>>> C [libCrashNative.so+0xa10] step3(JNIEnv_*, _jobject*)+0x65 >>>>>>> C [libCrashNative.so+0xa69] step2(JNIEnv_*, _jobject*)+0x57 >>>>>>> C [libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25 >>>>>>> C [libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25 >>>>>>> C [libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25 >>>>>>> C [libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25 >>>>>>> C [libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25 >>>>>>> C [libCrashNative.so+0xa8e] step1(JNIEnv_*, _jobject*)+0x23 >>>>>>> C [libCrashNative.so+0x87f] Java_CrashNative_nativeMethod+0x23 >>>>>>> j CrashNative.nativeMethod()V+0 >>>>>>> j CrashNative.main([Ljava/lang/String;)V+9 >>>>>>> v ~StubRoutines::call_stub >>>>>>> V [libjvm.so+0x71599f] JavaCalls::call_helper(JavaValue*, >>>>>>> methodHandle*, JavaCallArguments*, Thread*)+0xf8f >>>>>>> V [libjvm.so+0x7384f5] jni_invoke_static(JNIEnv_*, JavaValue*, >>>>>>> _jobject*, JNICallType, _jmethodID*, JNI_ArgumentPusher*, Thread*) >>>>>>> [clone .isra.238] [clone 
.constprop.250]+0x385 >>>>>>> V [libjvm.so+0x73b2b0] jni_CallStaticVoidMethod+0x170 >>>>>>> C [libjli.so+0x742a] JavaMain+0x65a >>>>>>> C [libpthread.so.0+0x7e9a] start_thread+0xda >>>>>>> >>>>>> >>> > From vladimir.kozlov at oracle.com Wed Sep 17 19:49:25 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Wed, 17 Sep 2014 12:49:25 -0700 Subject: RFR(S): 8058345: Refactor native stack printing from vmError.cpp to debug.cpp to make it available in gdb as well In-Reply-To: References: <5417DE03.6060301@oracle.com> <54187D59.5050602@oracle.com> <5418B53F.7050508@oracle.com> Message-ID: <5419E5C5.9080401@oracle.com> On 9/17/14 11:29 AM, Volker Simonis wrote: > On Wed, Sep 17, 2014 at 12:10 AM, Vladimir Kozlov > wrote: >> On 9/16/14 12:21 PM, Volker Simonis wrote: >>> >>> Hi Vladimir, >>> >>> thanks for looking at the change. >>> >>> 'make_frame' is only intended to be used from within the debugger to >>> simplify the usage of the new 'pns()' (i.e. "print native stack") >>> helper. It can be used as follows: >>> >>> (gdb) call pns(make_frame($sp, $rbp, $pc)) >> >> >> It is strange way to use pns(). Why not pass (sp, fp, pc) to pns() and let >> it call make_frame()? To have make_frame() only on ppc and x86 will not >> allow to use pns() on other platforms. >> >> Would be nice to have pns() version (names different) without input >> parameters. Can we use os::current_frame() inside for that? >> >> Add pns() description to help() output. 
>> >>> >>> "Executing pns" >>> Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native >>> code) >>> C [libpthread.so.0+0xc0fe] pthread_cond_timedwait+0x13e >>> V [libjvm.so+0x96c4c1] os::sleep(Thread*, long, bool)+0x1a1 >>> V [libjvm.so+0x75f442] JVM_Sleep+0x312 >>> j java.lang.Thread.sleep(J)V+0 >>> j CrashNative.crashIt(Lsun/misc/Unsafe;I)V+10 >>> j CrashNative.doIt()V+45 >>> v ~StubRoutines::call_stub >>> V [libjvm.so+0x71599f] >>> JavaCalls::call_helper(JavaValue*,methodHandle*, JavaCallArguments*, >>> Thread*)+0xf8f >>> >>> What about the two fixesin in 'print_native_stack()' - do you think they >>> are OK? >> >> >> What about is_runtime_frame()? It is wrapper for runtime calls from compiled >> code. >> > > Yes, but I don't see how this could help here, because the native > wrapper which makes problems here is a nmethod and not a runtime stub. > > Maybe you mean to additionally add is_runtime_frame() to the check? Yes, that is what I meant. Thanks, Vladimir > Yes, I've just realized that that's indeed needed on amd64 to walk > runtime stubs. SPARC is more graceful and works without these changes, > but on amd64 we need them (on both Solaris and Linux) and on Sparc > they don't hurt. > > I've written a small test program which should be similar to the one > you used for 8035983: > > import java.util.Hashtable; > > public class StackTraceTest { > static Hashtable ht; > static { > ht = new Hashtable(); > ht.put("one", "one"); > } > > public static void foo() { > bar(); > } > > public static void bar() { > ht.get("one"); > } > > public static void main(String args[]) { > for (int i = 0; i < 5; i++) { > new Thread() { > public void run() { > while(true) { > foo(); > } > } > }.start(); > } > } > } > > If I run it with "-XX:-Inline -XX:+PrintCompilation > -XX:-TieredCompilation StackTraceTest" inside the debugger and crash > one of the Java threads in native code, I get the correct stack traces > on SPARC. 
But on amd64, I only get the following without my changes: > > Stack: [0xfffffd7da16f9000,0xfffffd7da17f9000], > sp=0xfffffd7da17f7c60, free space=1019k > Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code) > C [libc.so.1+0xc207f] _lwp_cond_wait+0x1f > V [libjvm.so+0x171443b] int > os::Solaris::cond_wait(_lwp_cond*,_lwp_mutex*)+0x2b > V [libjvm.so+0x171181a] void os::PlatformEvent::park()+0x1fa > V [libjvm.so+0x16e09c1] void ObjectMonitor::EnterI(Thread*)+0x6f1 > V [libjvm.so+0x16dfc8f] void ObjectMonitor::enter(Thread*)+0x7cf > V [libjvm.so+0x18cdd00] void > ObjectSynchronizer::slow_enter(Handle,BasicLock*,Thread*)+0x2a0 > V [libjvm.so+0x18cd6a7] void > ObjectSynchronizer::fast_enter(Handle,BasicLock*,bool,Thread*)+0x157 > V [libjvm.so+0x182f39e] void > SharedRuntime::complete_monitor_locking_C(oopDesc*,BasicLock*,JavaThread*)+0x23e > v ~RuntimeStub::_complete_monitor_locking_Java > C 0x2aad1dd1000016d8 > > With the changes (and the additional check for is_runtime_frame()) I > get full stack traces on amd64 as well. So I think the changes should > be at least an improvement:) Good! > >> You need to check what fr.real_fp() returns on all platforms for the very >> first frame (_lwp_start). That is what this check about - stop walking when >> it reaches the first frame. fr.sender_sp() returns bogus value which is not >> stack pointer for the first frame. From 8035983 review: >> >> "It seems using fr.sender_sp() in the check work on x86 and sparc. >> On x86 it return stack_base value on sparc it returns STACK_BIAS." >> >> Also on other our platforms it could return 0 or small integer value. >> >> If you can suggest an other way to determine the first frame, please, tell. >> > > So the initial problem in 8035983 was that we used > os::is_first_C_frame(&fr) for native frames where the sender was a > compiled frame. 
That didn't work reliably because, > os::is_first_C_frame(&fr) uses fr->link() to get the frame pointer of > the sender and that doesn't work for compiled senders. > > So you replaced os::is_first_C_frame(&fr) by > !on_local_stack((address)(fr.sender_sp() + 1)) but that uses addr_at() > internally which in turn uses fp() so it won't work for frames which > have a bogus frame pointer like native wrappers. > > I think using fr.real_fp() should be safe because as far as I can see > it is always fr.sender_sp() - 2 on amd64 and equal to fr.sender_sp() > on SPARC. On Linux/amd64 both, the sp and fp of the first frame will > be 0 (still have to check on SPARC). But the example above works fine > with my changes on both, Linux/amd64 and Solaris/SPARC and > Solaris/amd64. > > I'll prepare a new webrev tomorrow which will have the documentation > for "pns" and a version of make_frame() for SPARC. > > Regards, > Volker > >>> Should I move 'print_native_stack()' to vmError.cpp as suggested by David? >> >> >> I am fine with both places. >> >> Thanks, >> Vladimir >> >> >>> >>> Thank you and best regards, >>> Volker >>> >>> On Tue, Sep 16, 2014 at 8:11 PM, Vladimir Kozlov >>> wrote: >>>> >>>> Thank you for fixing frame walk. >>>> I don't see where make_frame() is used. >>>> >>>> Thanks, >>>> Vladimir >>>> >>>> >>>> On 9/16/14 9:35 AM, Volker Simonis wrote: >>>>> >>>>> >>>>> Hi, >>>>> >>>>> while testing my change, I found two other small problems with native >>>>> stack traces: >>>>> >>>>> 1. we can not walk native wrappers on (at least not on Linux/amd64) >>>>> because they are treated as native "C" frames. However, if the native >>>>> wrapper was called from a compiled frame which had no valid frame >>>>> pointer (i.e. %rbp) os::get_sender_for_C_frame(&fr) will produce a bad >>>>> frame. This can be easily fixed by treating native wrappers like java >>>>> frames. >>>>> >>>>> 2. 
the fix for "8035983: Fix "Native frames:" in crash report (hs_err >>>>> file)" introduced a similar problem. If we walk tha stack from a >>>>> native wrapper down to a compiled frame, we will have a frame with an >>>>> invalid frame pointer. In that case, the newly introduced check from >>>>> change 8035983 will fail, because fr.sender_sp() depends on a valid >>>>> fp. I'll propose to replace fr.sender_sp() by fr.real_fp() which >>>>> should do the same but also works for compiled frames with invalid fp. >>>>> >>>>> Here's the new webrev: >>>>> >>>>> http://cr.openjdk.java.net/~simonis/webrevs/8058345.v1/ >>>>> >>>>> What dou you think? >>>>> >>>>> Thank you and best regards, >>>>> Volker >>>>> >>>>> >>>>> On Tue, Sep 16, 2014 at 3:48 PM, Volker Simonis >>>>> wrote: >>>>>> >>>>>> >>>>>> 'print_native_stack()' must be visible in both vmError.cpp and >>>>>> debug.cpp. Initially I saw that vmError.cpp already included debug.hpp >>>>>> so I decided to declare it in debug.hpp. But now I realized that also >>>>>> debug.cpp includes vmError.hpp so I could just as well declare >>>>>> 'print_native_stack()' in vmError.hpp and leave the implementation in >>>>>> vmError.cpp. Do you want me to change that? >>>>>> >>>>>> Thank you and best regards, >>>>>> Volker >>>>>> >>>>>> >>>>>> On Tue, Sep 16, 2014 at 8:51 AM, David Holmes >>>>>> wrote: >>>>>>> >>>>>>> >>>>>>> Hi Volker, >>>>>>> >>>>>>> On 13/09/2014 5:15 AM, Volker Simonis wrote: >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> Hi, >>>>>>>> >>>>>>>> could you please review and sponsor the following small change which >>>>>>>> should make debugging a little more comfortabel (at least on Linux >>>>>>>> for >>>>>>>> now): >>>>>>>> >>>>>>>> http://cr.openjdk.java.net/~simonis/webrevs/8058345/ >>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8058345 >>>>>>>> >>>>>>>> In the hs_err files we have a nice mixed stack trace which contains >>>>>>>> both, Java and native frames. 
>>>>>>>> It would be nice if we could make this functionality available from >>>>>>>> within gdb during debugging sessions (until now we can only print the >>>>>>>> pure Java stack with the "ps()" helper function from debug.cpp). >>>>>>>> >>>>>>>> This new feature can be easily achieved by refactoring the >>>>>>>> corresponding stack printing code from VMError::report() in >>>>>>>> vmError.cpp into its own method in debug.cpp. This change extracts >>>>>>>> that code into the new function 'print_native_stack()' in debug.cpp >>>>>>>> without changing anything of the functionality. >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> Why does it need to move to debug.cpp to allow this ? >>>>>>> >>>>>>> David >>>>>>> ----- >>>>>>> >>>>>>> >>>>>>>> It also adds some helper functions which make it easy to call the new >>>>>>>> 'print_native_stack()' method from within gdb. There's the new helper >>>>>>>> function 'pns(frame f)' which takes a frame argument and calls >>>>>>>> 'print_native_stack()'. We need the frame argument because gdb >>>>>>>> inserts >>>>>>>> a dummy frame for every call and we can't easily walk over this dummy >>>>>>>> frame from our stack printing routine. >>>>>>>> >>>>>>>> To simplify the creation of the frame object, I've added the helper >>>>>>>> functions: >>>>>>>> >>>>>>>> extern "C" frame make_frame(intptr_t* sp, intptr_t* fp, address pc) { >>>>>>>> return frame(sp, fp, pc); >>>>>>>> } >>>>>>>> >>>>>>>> for x86 (in frame_x86.cpp) and >>>>>>>> >>>>>>>> extern "C" frame make_frame(intptr_t* sp, address pc) { >>>>>>>> return frame(sp, pc); >>>>>>>> } >>>>>>>> >>>>>>>> for ppc64 in frame_ppc.cpp. With these helper functions we can now >>>>>>>> easily get a mixed stack trace of a Java thread in gdb (see below). 
>>>>>>>> >>>>>>>> All the helper functions are protected by '#ifndef PRODUCT' >>>>>>>> >>>>>>>> Thank you and best regards, >>>>>>>> Volker >>>>>>>> >>>>>>>> >>>>>>>> (gdb) call pns(make_frame($sp, $rbp, $pc)) >>>>>>>> >>>>>>>> "Executing pns" >>>>>>>> Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, >>>>>>>> C=native >>>>>>>> code) >>>>>>>> C [libpthread.so.0+0xc0fe] pthread_cond_timedwait+0x13e >>>>>>>> V [libjvm.so+0x96c4c1] os::sleep(Thread*, long, bool)+0x1a1 >>>>>>>> V [libjvm.so+0x75f442] JVM_Sleep+0x312 >>>>>>>> j java.lang.Thread.sleep(J)V+0 >>>>>>>> j CrashNative.crashIt(Lsun/misc/Unsafe;I)V+10 >>>>>>>> j CrashNative.doIt()V+45 >>>>>>>> v ~StubRoutines::call_stub >>>>>>>> V [libjvm.so+0x71599f] JavaCalls::call_helper(JavaValue*, >>>>>>>> methodHandle*, JavaCallArguments*, Thread*)+0xf8f >>>>>>>> V [libjvm.so+0x9eab75] Reflection::invoke(instanceKlassHandle, >>>>>>>> methodHandle, Handle, bool, objArrayHandle, BasicType, >>>>>>>> objArrayHandle, >>>>>>>> bool, Thread*) [clone .constprop.218]+0xa25 >>>>>>>> V [libjvm.so+0x9eb838] Reflection::invoke_method(oopDesc*, Handle, >>>>>>>> objArrayHandle, Thread*)+0x1c8 >>>>>>>> V [libjvm.so+0x7637ae] JVM_InvokeMethod+0xfe >>>>>>>> j >>>>>>>> >>>>>>>> >>>>>>>> sun.reflect.NativeMethodAccessorImpl.invoke0(Ljava/lang/reflect/Method;Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;+0 >>>>>>>> j >>>>>>>> >>>>>>>> >>>>>>>> sun.reflect.NativeMethodAccessorImpl.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;+100 >>>>>>>> j >>>>>>>> >>>>>>>> >>>>>>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;+6 >>>>>>>> j >>>>>>>> >>>>>>>> >>>>>>>> java.lang.reflect.Method.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;+56 >>>>>>>> j CrashNative.mainJava()V+32 >>>>>>>> v ~StubRoutines::call_stub >>>>>>>> V [libjvm.so+0x71599f] JavaCalls::call_helper(JavaValue*, >>>>>>>> methodHandle*, JavaCallArguments*, Thread*)+0xf8f 
>>>>>>>> V [libjvm.so+0x7384f5] jni_invoke_static(JNIEnv_*, JavaValue*, >>>>>>>> _jobject*, JNICallType, _jmethodID*, JNI_ArgumentPusher*, Thread*) >>>>>>>> [clone .isra.238] [clone .constprop.250]+0x385 >>>>>>>> V [libjvm.so+0x73b3d7] jni_CallStaticVoidMethodV+0xe7 >>>>>>>> C [libCrashNative.so+0x9a9] JNIEnv_::CallStaticVoidMethod(_jclass*, >>>>>>>> _jmethodID*, ...)+0xb9 >>>>>>>> C [libCrashNative.so+0xa10] step3(JNIEnv_*, _jobject*)+0x65 >>>>>>>> C [libCrashNative.so+0xa69] step2(JNIEnv_*, _jobject*)+0x57 >>>>>>>> C [libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25 >>>>>>>> C [libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25 >>>>>>>> C [libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25 >>>>>>>> C [libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25 >>>>>>>> C [libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25 >>>>>>>> C [libCrashNative.so+0xa8e] step1(JNIEnv_*, _jobject*)+0x23 >>>>>>>> C [libCrashNative.so+0x87f] Java_CrashNative_nativeMethod+0x23 >>>>>>>> j CrashNative.nativeMethod()V+0 >>>>>>>> j CrashNative.main([Ljava/lang/String;)V+9 >>>>>>>> v ~StubRoutines::call_stub >>>>>>>> V [libjvm.so+0x71599f] JavaCalls::call_helper(JavaValue*, >>>>>>>> methodHandle*, JavaCallArguments*, Thread*)+0xf8f >>>>>>>> V [libjvm.so+0x7384f5] jni_invoke_static(JNIEnv_*, JavaValue*, >>>>>>>> _jobject*, JNICallType, _jmethodID*, JNI_ArgumentPusher*, Thread*) >>>>>>>> [clone .isra.238] [clone .constprop.250]+0x385 >>>>>>>> V [libjvm.so+0x73b2b0] jni_CallStaticVoidMethod+0x170 >>>>>>>> C [libjli.so+0x742a] JavaMain+0x65a >>>>>>>> C [libpthread.so.0+0x7e9a] start_thread+0xda >>>>>>>> >>>>>>> >>>> >> From david.holmes at oracle.com Thu Sep 18 02:42:25 2014 From: david.holmes at oracle.com (David Holmes) Date: Thu, 18 Sep 2014 12:42:25 +1000 Subject: More on memory barriers In-Reply-To: <54196D94.9050900@cs.oswego.edu> References: <541703DB.5030207@redhat.com> <54170F06.2070207@cs.oswego.edu> <54171026.8060700@redhat.com> 
<5417E963.5060103@redhat.com> <541962F7.5020807@redhat.com> <54196D94.9050900@cs.oswego.edu> Message-ID: <541A4691.8010702@oracle.com> On 17/09/2014 9:16 PM, Doug Lea wrote: > On 09/17/2014 06:31 AM, Andrew Haley wrote: >> On 09/16/2014 08:40 AM, Andrew Haley wrote: >>> On 15/09/14 17:20, Vitaly Davidovich wrote: >>>> Looking at hg history, MemBarStoreStore was added a few years ago, >>>> whereas >>>> the code in question is much older. The comments in the changelist >>>> adding >>>> MemBarStoreStore seem to indicate it was done to address a specific >>>> issue, >>>> and my guess is that it wasn't "retrofitted" into all possible places. >>> >>> That sounds plausible. I'll change this to a StoreStore in the AArch64 >>> port and do some testing. >> >> Bah, that doesn't work. Escape analysis assumes that a StoreStore >> is only used in certain contexts. Back to the drawing board. >> > > The setup for StoreStore seems suspicious. I believe that this could > only work in C2 if done in the way I mentioned: StoreStore must be > handled identically to Release by c2, but possibly more cheaply > matched. Can StoreStore be reworked as a subtype or property of > Release? I'm not a C2 person but I don't understand in what way a storestore can, or should, be considered a "subtype" of release ?? David > -Doug > > From vitalyd at gmail.com Thu Sep 18 02:50:26 2014 From: vitalyd at gmail.com (Vitaly Davidovich) Date: Wed, 17 Sep 2014 22:50:26 -0400 Subject: More on memory barriers In-Reply-To: <541A4691.8010702@oracle.com> References: <541703DB.5030207@redhat.com> <54170F06.2070207@cs.oswego.edu> <54171026.8060700@redhat.com> <5417E963.5060103@redhat.com> <541962F7.5020807@redhat.com> <54196D94.9050900@cs.oswego.edu> <541A4691.8010702@oracle.com> Message-ID: It's not a subtype. I think the proposal was to add a tag to a release node that would indicate its purpose a bit more granularly. 
The backend (AArch64 in this case) could then match on that tag and emit a lighter code sequence/instruction. I think this is a bit of a hack to work around the fact that C2 has certain assumptions about the type of barriers and not having more fine grained node types. Sent from my phone On Sep 17, 2014 10:43 PM, "David Holmes" wrote: > On 17/09/2014 9:16 PM, Doug Lea wrote: > >> On 09/17/2014 06:31 AM, Andrew Haley wrote: >> >>> On 09/16/2014 08:40 AM, Andrew Haley wrote: >>> >>>> On 15/09/14 17:20, Vitaly Davidovich wrote: >>>> >>>>> Looking at hg history, MemBarStoreStore was added a few years ago, >>>>> whereas >>>>> the code in question is much older. The comments in the changelist >>>>> adding >>>>> MemBarStoreStore seem to indicate it was done to address a specific >>>>> issue, >>>>> and my guess is that it wasn't "retrofitted" into all possible places. >>>>> >>>> >>>> That sounds plausible. I'll change this to a StoreStore in the AArch64 >>>> port and do some testing. >>>> >>> >>> Bah, that doesn't work. Escape analysis assumes that a StoreStore >>> is only used in certain contexts. Back to the drawing board. >>> >>> >> The setup for StoreStore seems suspicious. I believe that this could >> only work in C2 if done in the way I mentioned: StoreStore must be >> handled identically to Release by c2, but possibly more cheaply >> matched. Can StoreStore be reworked as a subtype or property of >> Release? >> > > I'm not a C2 person but I don't understand in what way a storestore can, > or should, be considered a "subtype" of release ?? > > David > > -Doug >> >> >> From vladimir.kozlov at oracle.com Thu Sep 18 05:40:43 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Wed, 17 Sep 2014 22:40:43 -0700 Subject: RFR: [backport] 8044775: Improve usage of umbrella header atomic.inline.hpp. 
In-Reply-To: <4295855A5C1DE049A61835A1887419CC2CF07045@DEWDFEMB12A.global.corp.sap> References: <4295855A5C1DE049A61835A1887419CC2CF07045@DEWDFEMB12A.global.corp.sap> Message-ID: <541A705B.4020207@oracle.com> Goetz, These 2 changes are not bug fixes. Why do you want to backport them (excluding "to keep code similar")? Thanks, Vladimir On 9/17/14 2:27 AM, Lindenmaier, Goetz wrote: > Hi, > > I'd like to backport this change: > JDK-8044775: Improve usage of umbrella header atomic.inline.hpp. > > It did not apply cleanly, so I please need a review: > > Some files do not exist in 8 or don't exist any more: > src/share/vm/classfile/stringTable.cpp > src/share/vm/service/memPtr.hpp > src/share/vm/service/memPtr.cpp > src/share/vm/service/memRecorder.cpp > > In some files the patch did not apply cleanly, as the context > changed: > src/share/vm/utilities/bitMap.cpp > src/share/vm/gc_implementation/g1/g1CollectedHeap.cpp > src/share/vm/oops/instanceKlass.cpp > > Here usage of class Atomic was removed along with the header: > src/share/vm/service/memTracker.cpp > > This is the webrev for the 8u repository: > http://cr.openjdk.java.net/~goetz/webrevs/8044775-atomInc/webrev.8.00/ > > This is the change in 9: > http://hg.openjdk.java.net/jdk9/hs/hotspot/rev/b596a1063e90 > and the webrev submitted to 9: > http://cr.openjdk.java.net/~goetz/webrevs/8044775-atomInc/webrev.01/ > > Please review this. I please need a sponsor to push the change. > > Best regards, > Goetz. > > > > From goetz.lindenmaier at sap.com Thu Sep 18 07:06:50 2014 From: goetz.lindenmaier at sap.com (Lindenmaier, Goetz) Date: Thu, 18 Sep 2014 07:06:50 +0000 Subject: RFR: [backport] 8044775: Improve usage of umbrella header atomic.inline.hpp.
In-Reply-To: <541A705B.4020207@oracle.com> References: <4295855A5C1DE049A61835A1887419CC2CF07045@DEWDFEMB12A.global.corp.sap> <541A705B.4020207@oracle.com> Message-ID: <4295855A5C1DE049A61835A1887419CC2CF073D9@DEWDFEMB12A.global.corp.sap> Hi Vladimir, I guess that's the major reason ... also other backports will apply better. Finally SAP will get the change earlier as we'll do 8u40 long before 9. But if you object we can just skip it, as it's not a bugfix it's not a big deal. Best regards, Goetz. -----Original Message----- From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf Of Vladimir Kozlov Sent: Donnerstag, 18. September 2014 07:41 To: hotspot-dev at openjdk.java.net Subject: Re: RFR: [backport] 8044775: Improve usage of umbrella header atomic.inline.hpp. Goetz, These 2 changes are not bugs fixes. Why do you want to backport them (excluding "to keep code similar")? Thanks, Vladimir On 9/17/14 2:27 AM, Lindenmaier, Goetz wrote: > Hi, > > I'd like to backport this change: > JDK-8044775: Improve usage of umbrella header atomic.inline.hpp. > > It did not apply cleanly, so I please need a review: > > Some files do not exist in 8 or don't exist any more: > src/share/vm/classfile/stringTable.cpp > src/share/vm/service/memPtr.hpp > src/share/vm/service/memPtr.cpp > src/share/vm/service/memRecorder.cpp > > In some files the patch did not apply cleanly, as the context > changed: > src/share/vm/utilities/bitMap.cpp > src/share/vm/gc_implementation/g1/g1CollectedHeap.cpp > src/share/vm/oops/instanceKlass.cpp > > Here usage of class Atomic was removed along with the header: > src/share/vm/service/memTracker.cpp > > This is the webrev for the 8u repository: > http://cr.openjdk.java.net/~goetz/webrevs/8044775-atomInc/webrev.8.00/ > > This is the change in 9: > http://hg.openjdk.java.net/jdk9/hs/hotspot/rev/b596a1063e90 > and the webrev sumitted to 9: > http://cr.openjdk.java.net/~goetz/webrevs/8044775-atomInc/webrev.01/ > > Please review this. 
I please need a sponsor to push the change. > > Best regards, > Goetz. > > > > From david.holmes at oracle.com Thu Sep 18 07:19:28 2014 From: david.holmes at oracle.com (David Holmes) Date: Thu, 18 Sep 2014 17:19:28 +1000 Subject: RFR: [backport] 8044775: Improve usage of umbrella header atomic.inline.hpp. In-Reply-To: <4295855A5C1DE049A61835A1887419CC2CF073D9@DEWDFEMB12A.global.corp.sap> References: <4295855A5C1DE049A61835A1887419CC2CF07045@DEWDFEMB12A.global.corp.sap> <541A705B.4020207@oracle.com> <4295855A5C1DE049A61835A1887419CC2CF073D9@DEWDFEMB12A.global.corp.sap> Message-ID: <541A8780.4020903@oracle.com> Hi Goetz, On 18/09/2014 5:06 PM, Lindenmaier, Goetz wrote: > Hi Vladimir, > > I guess that's the major reason ... also other backports will apply better. > Finally SAP will get the change earlier as we'll do 8u40 long before 9. > > But if you object we can just skip it, as it's not a bugfix it's not a big deal. I think the issue is that 8u40 has hit feature complete (or will before any new changes to hotspot will make it into the master repo [1]), so only bug fixes should be accepted from this point forward. Once jdk8u/dev is open for 8u60 changes then this could be backported. Cheers, David [1] http://openjdk.java.net/projects/jdk8u/releases/8u40.html > Best regards, > Goetz. > > -----Original Message----- > From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf Of Vladimir Kozlov > Sent: Donnerstag, 18. September 2014 07:41 > To: hotspot-dev at openjdk.java.net > Subject: Re: RFR: [backport] 8044775: Improve usage of umbrella header atomic.inline.hpp. > > Goetz, > > These 2 changes are not bugs fixes. Why do you want to backport them (excluding "to keep code similar")? > > Thanks, > Vladimir > > On 9/17/14 2:27 AM, Lindenmaier, Goetz wrote: >> Hi, >> >> I'd like to backport this change: >> JDK-8044775: Improve usage of umbrella header atomic.inline.hpp. 
>> >> It did not apply cleanly, so I please need a review: >> >> Some files do not exist in 8 or don't exist any more: >> src/share/vm/classfile/stringTable.cpp >> src/share/vm/service/memPtr.hpp >> src/share/vm/service/memPtr.cpp >> src/share/vm/service/memRecorder.cpp >> >> In some files the patch did not apply cleanly, as the context >> changed: >> src/share/vm/utilities/bitMap.cpp >> src/share/vm/gc_implementation/g1/g1CollectedHeap.cpp >> src/share/vm/oops/instanceKlass.cpp >> >> Here usage of class Atomic was removed along with the header: >> src/share/vm/service/memTracker.cpp >> >> This is the webrev for the 8u repository: >> http://cr.openjdk.java.net/~goetz/webrevs/8044775-atomInc/webrev.8.00/ >> >> This is the change in 9: >> http://hg.openjdk.java.net/jdk9/hs/hotspot/rev/b596a1063e90 >> and the webrev sumitted to 9: >> http://cr.openjdk.java.net/~goetz/webrevs/8044775-atomInc/webrev.01/ >> >> Please review this. I please need a sponsor to push the change. >> >> Best regards, >> Goetz. >> >> >> >> From goetz.lindenmaier at sap.com Thu Sep 18 08:07:20 2014 From: goetz.lindenmaier at sap.com (Lindenmaier, Goetz) Date: Thu, 18 Sep 2014 08:07:20 +0000 Subject: RFR: 8058716: Add include missing in 8015774 Message-ID: <4295855A5C1DE049A61835A1887419CC2CF07440@DEWDFEMB12A.global.corp.sap> Hi, 8015744 breaks the build on ppc as an inline function is not defined: compile.hpp:812: warning: inline function 'Node_Notes* Compile::locate_node_notes(GrowableArray*, int, bool)' used but never defined This is because compile.hpp declares and calls locate_node_notes(), which is defined in node.hpp. Therefore codeCache.cpp must include node.hpp, too. Bug: https://bugs.openjdk.java.net/browse/JDK-8058716 Webrev: http://cr.openjdk.java.net/~goetz/webrevs/8058716-inclFix/webrev.00/ Please review this change. I please need a sponsor to push it. 
Best regards, Goetz From mikael.gerdin at oracle.com Thu Sep 18 08:39:57 2014 From: mikael.gerdin at oracle.com (Mikael Gerdin) Date: Thu, 18 Sep 2014 10:39:57 +0200 Subject: RFR: JDK-8055141 Catch linker errors earlier in the JVM build by not allowing unresolved externals Message-ID: <2374744.b2KptO73VY@mgerdin03> Hi all, As you may know, linking an ELF shared object allows unresolved external symbols at link time. This is sometimes problematic for JVM developers since the JVM does not depend on unresolved external symbols and all missing symbols at build time are due to mistakes, usually missing includes of inline definitions. In order to disallow such unresolved externals I propose that we add "-z defs" to the linker command line when linking the JVM, thereby making unresolved externals a build-time error instead of a run-time failure when dlopen'ing the newly built JVM for the first time. On Windows and OSX this is already the default linker behavior. I took the liberty of modifying the bsd make file since I believe that bsd uses the GNU linker which supports the "-z defs" flag. I'm not sure about the behavior or flags appropriate for AIX so I didn't change the AIX makefiles. On Solaris, linking with "-z defs" failed at first with the following message: Undefined first referenced symbol in file gethostbyname ostream.o (symbol belongs to implicit dependency /lib/64/libnsl.so.1) inet_addr ostream.o (symbol belongs to implicit dependency /lib/64/libnsl.so.1) ld: fatal: symbol referencing errors. No output written to libjvm.so This has not caused any failures earlier since libsocket depends on libnsl, so in practice the symbols are always present at runtime, but with the "-z defs" flag the linker requires the dependency to be explicitly stated. I fixed the issue by appending -lnsl to the link-time libraries for the Solaris build.
Webrev: http://cr.openjdk.java.net/~mgerdin/8055141/webrev.0/ Bug: https://bugs.openjdk.java.net/browse/JDK-8055141 Testing: * Verified that the additional flag causes build-time errors on all platforms in the presence of unresolved external symbols. * Verified that the build passes on all Oracle-supported platforms with the new flag. Thanks /Mikael From erik.joelsson at oracle.com Thu Sep 18 08:55:32 2014 From: erik.joelsson at oracle.com (Erik Joelsson) Date: Thu, 18 Sep 2014 10:55:32 +0200 Subject: RFR: JDK-8055141 Catch linker errors earlier in the JVM build by not allowing unresolved externals In-Reply-To: <2374744.b2KptO73VY@mgerdin03> References: <2374744.b2KptO73VY@mgerdin03> Message-ID: <541A9E04.8050901@oracle.com> Looks good to me. /Erik On 2014-09-18 10:39, Mikael Gerdin wrote: > Hi all, > > As you may know, linking an ELF shared object allows unresolved external > symbols at link time. This is sometimes problematic for JVM developers since > the JVM does not depend on unresolved external symbols and all missing symbols > at build time are due to mistakes, usually missing includes of inline > definitions. > > In order to disallow such unresolved externals I propose that we add > "-z defs" to the linker command line when linking the JVM, thereby making > unresolved externals a build-time error instead of a run-time failure when > dlopen:ing the newly built JVM for the first time. > > On Windows ans OSX this is already the default linker behavior. > I took the liberty of modifying the bsd make file since I believe that bsd > uses the GNU linker which supports the "-z defs" flag. I'm not sure about the > behavior or flags appropriate for AIX so I didn't change the AIX makefiles. 
> > > On Solaris, linking with "-z defs" failed at first with the following message: > > Undefined first referenced > symbol in file > gethostbyname ostream.o (symbol belongs to implicit > dependency /lib/64/libnsl.so.1) > inet_addr ostream.o (symbol belongs to implicit > dependency /lib/64/libnsl.so.1) > ld: fatal: symbol referencing errors. No output written to libjvm.so > > This has not caused any failures earlier since libsocket depends on libnsl, so > in practice the symbols are always present at runtime, but with the "-z defs" > flag the linker requires the dependency to be explicitly stated. > I fixed the issure by appending -lnsl to the link-time libraries for the > Solaris build. > > Webrev: http://cr.openjdk.java.net/~mgerdin/8055141/webrev.0/ > Bug: https://bugs.openjdk.java.net/browse/JDK-8055141 > > Testing: > * Verified that the additional flag causes build-time errors on all platforms > in the presence of unresolved external symbols. > * Verified that the build passes on all Oracle-supported platforms with the > new flag. > > Thanks > /Mikael From thomas.schatzl at oracle.com Thu Sep 18 09:33:44 2014 From: thomas.schatzl at oracle.com (Thomas Schatzl) Date: Thu, 18 Sep 2014 11:33:44 +0200 Subject: RFR [8u40] 8056084: Refactor Hashtable to allow implementations without rehashing support In-Reply-To: <15856493.62xT43KoU5@mgerdin03> References: <15856493.62xT43KoU5@mgerdin03> Message-ID: <1411032824.2709.43.camel@cirrus> Hi Mikael, On Wed, 2014-09-17 at 09:00 +0200, Mikael Gerdin wrote: > Hi all, > > I need to backport this change in order to backport 8048268 which we need for > G1 performance in 8u40. > > The patch didn't apply cleanly since StringTable was moved to a separate file > in 9. The StringTable patch hunks applied correctly to the relevant parts of > symbolTable.[ch]pp. 
> > > Webrev: http://cr.openjdk.java.net/~mgerdin/8056084/8u/webrev/ > > Bug: https://bugs.openjdk.java.net/browse/JDK-8056084 > > Review thread at: > http://mail.openjdk.java.net/pipermail/hotspot-dev/2014-August/015039.html > Looks good. Thanks, Thomas From erik.helin at oracle.com Thu Sep 18 12:31:21 2014 From: erik.helin at oracle.com (Erik Helin) Date: Thu, 18 Sep 2014 14:31:21 +0200 Subject: RFR: JDK-8055141 Catch linker errors earlier in the JVM build by not allowing unresolved externals In-Reply-To: <2374744.b2KptO73VY@mgerdin03> References: <2374744.b2KptO73VY@mgerdin03> Message-ID: <541AD099.8090308@oracle.com> Hi Mikael, thanks for fixing this! Looks good, Reviewed. Erik On 2014-09-18 10:39, Mikael Gerdin wrote: > Hi all, > > As you may know, linking an ELF shared object allows unresolved external > symbols at link time. This is sometimes problematic for JVM developers since > the JVM does not depend on unresolved external symbols and all missing symbols > at build time are due to mistakes, usually missing includes of inline > definitions. > > In order to disallow such unresolved externals I propose that we add > "-z defs" to the linker command line when linking the JVM, thereby making > unresolved externals a build-time error instead of a run-time failure when > dlopen:ing the newly built JVM for the first time. > > On Windows ans OSX this is already the default linker behavior. > I took the liberty of modifying the bsd make file since I believe that bsd > uses the GNU linker which supports the "-z defs" flag. I'm not sure about the > behavior or flags appropriate for AIX so I didn't change the AIX makefiles. > > > On Solaris, linking with "-z defs" failed at first with the following message: > > Undefined first referenced > symbol in file > gethostbyname ostream.o (symbol belongs to implicit > dependency /lib/64/libnsl.so.1) > inet_addr ostream.o (symbol belongs to implicit > dependency /lib/64/libnsl.so.1) > ld: fatal: symbol referencing errors. 
No output written to libjvm.so > > This has not caused any failures earlier since libsocket depends on libnsl, so > in practice the symbols are always present at runtime, but with the "-z defs" > flag the linker requires the dependency to be explicitly stated. > I fixed the issure by appending -lnsl to the link-time libraries for the > Solaris build. > > Webrev: http://cr.openjdk.java.net/~mgerdin/8055141/webrev.0/ > Bug: https://bugs.openjdk.java.net/browse/JDK-8055141 > > Testing: > * Verified that the additional flag causes build-time errors on all platforms > in the presence of unresolved external symbols. > * Verified that the build passes on all Oracle-supported platforms with the > new flag. > > Thanks > /Mikael > From stefan.johansson at oracle.com Thu Sep 18 12:56:25 2014 From: stefan.johansson at oracle.com (Stefan Johansson) Date: Thu, 18 Sep 2014 14:56:25 +0200 Subject: RFR: Forward port of 8055006 - Store original value of Min/MaxHeapFreeRatio In-Reply-To: <5418B0FC.9060203@oracle.com> References: <54188242.9060808@oracle.com> <1410898603.2833.7.camel@cirrus> <5418B0FC.9060203@oracle.com> Message-ID: <541AD679.90008@oracle.com> Hi Jesper, On 2014-09-16 23:51, Jesper Wilhelmsson wrote: > Thomas Schatzl skrev 16/9/14 22:16: >> Hi, >> >> On Tue, 2014-09-16 at 20:32 +0200, Jesper Wilhelmsson wrote: >>> Hi, >>> >>> The fix for JDK-8055006 was reviewed by several engineers and was >>> pushed >>> directly to 8u40 due to time constraints. This is a forward port to >>> get the same >>> changes into JDK 9. >>> >>> There are two webrevs, one for HotSpot and one for the JDK. >>> >>> The 8u40 HotSpot change applied cleanly to 9 so if this was a >>> traditional >>> backport it wouldn't require another review. But since this is a >>> weird situation >>> and I'm pushing to 9 I'll ask for reviews just to be on the safe side. >>> Also, the original 8u40 push contained some unnecessary changes that >>> was later >>> cleaned up by JDK-8056056. 
In this port to 9 I have merged these two >>> changes >>> into one to avoid introducing a known issue only to remove it again. >>> >> >> I would prefer if you pushed all changes in the order they were applied >> at once, even the ones that were buggy and their fix. >> >> Combining changesets during porting makes comparing source trees to find >> changesets that might have been overlooked very hard. > > OK, will do. Changes look good. Stefan > > /Jesper > >> >> Thanks, >> Thomas >> >> From vladimir.kozlov at oracle.com Thu Sep 18 20:25:00 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Thu, 18 Sep 2014 13:25:00 -0700 Subject: RFR: 8058716: Add include missing in 8015774 In-Reply-To: <4295855A5C1DE049A61835A1887419CC2CF07440@DEWDFEMB12A.global.corp.sap> References: <4295855A5C1DE049A61835A1887419CC2CF07440@DEWDFEMB12A.global.corp.sap> Message-ID: <541B3F9C.3020009@oracle.com> On 9/18/14 1:07 AM, Lindenmaier, Goetz wrote: > Hi, > > 8015744 breaks the build on ppc as an inline function is not defined: 8015774 > compile.hpp:812: warning: inline function 'Node_Notes* Compile::locate_node_notes(GrowableArray*, int, bool)' used but never defined > > This is because compile.hpp declares and calls locate_node_notes(), which is defined in node.hpp. Therefore codeCache.cpp must include node.hpp, too. Good. Thanks, Vladimir > > Bug: > https://bugs.openjdk.java.net/browse/JDK-8058716 > Webrev: > http://cr.openjdk.java.net/~goetz/webrevs/8058716-inclFix/webrev.00/ > > Please review this change. I please need a sponsor to push it. > > Best regards, > Goetz > From david.holmes at oracle.com Thu Sep 18 23:40:37 2014 From: david.holmes at oracle.com (David Holmes) Date: Fri, 19 Sep 2014 09:40:37 +1000 Subject: RFR: JDK-8055141 Catch linker errors earlier in the JVM build by not allowing unresolved externals In-Reply-To: <2374744.b2KptO73VY@mgerdin03> References: <2374744.b2KptO73VY@mgerdin03> Message-ID: <541B6D75.90207@oracle.com> Looks good and works well! 
Lets get this one backported too please. :) Thanks, David On 18/09/2014 6:39 PM, Mikael Gerdin wrote: > Hi all, > > As you may know, linking an ELF shared object allows unresolved external > symbols at link time. This is sometimes problematic for JVM developers since > the JVM does not depend on unresolved external symbols and all missing symbols > at build time are due to mistakes, usually missing includes of inline > definitions. > > In order to disallow such unresolved externals I propose that we add > "-z defs" to the linker command line when linking the JVM, thereby making > unresolved externals a build-time error instead of a run-time failure when > dlopen:ing the newly built JVM for the first time. > > On Windows ans OSX this is already the default linker behavior. > I took the liberty of modifying the bsd make file since I believe that bsd > uses the GNU linker which supports the "-z defs" flag. I'm not sure about the > behavior or flags appropriate for AIX so I didn't change the AIX makefiles. > > > On Solaris, linking with "-z defs" failed at first with the following message: > > Undefined first referenced > symbol in file > gethostbyname ostream.o (symbol belongs to implicit > dependency /lib/64/libnsl.so.1) > inet_addr ostream.o (symbol belongs to implicit > dependency /lib/64/libnsl.so.1) > ld: fatal: symbol referencing errors. No output written to libjvm.so > > This has not caused any failures earlier since libsocket depends on libnsl, so > in practice the symbols are always present at runtime, but with the "-z defs" > flag the linker requires the dependency to be explicitly stated. > I fixed the issure by appending -lnsl to the link-time libraries for the > Solaris build. > > Webrev: http://cr.openjdk.java.net/~mgerdin/8055141/webrev.0/ > Bug: https://bugs.openjdk.java.net/browse/JDK-8055141 > > Testing: > * Verified that the additional flag causes build-time errors on all platforms > in the presence of unresolved external symbols. 
> * Verified that the build passes on all Oracle-supported platforms with the > new flag. > > Thanks > /Mikael > From goetz.lindenmaier at sap.com Fri Sep 19 08:59:58 2014 From: goetz.lindenmaier at sap.com (Lindenmaier, Goetz) Date: Fri, 19 Sep 2014 08:59:58 +0000 Subject: RFR: 8058716: Add include missing in 8015774 In-Reply-To: <541B3F9C.3020009@oracle.com> References: <4295855A5C1DE049A61835A1887419CC2CF07440@DEWDFEMB12A.global.corp.sap> <541B3F9C.3020009@oracle.com> Message-ID: <4295855A5C1DE049A61835A1887419CC2CF078B7@DEWDFEMB12A.global.corp.sap> Hi Vladimir, thanks for reviewing & pushing this! Best regards, Goetz -----Original Message----- From: hotspot-dev [mailto:hotspot-dev-bounces at openjdk.java.net] On Behalf Of Vladimir Kozlov Sent: Donnerstag, 18. September 2014 22:25 To: hotspot-dev at openjdk.java.net Subject: Re: RFR: 8058716: Add include missing in 8015774 On 9/18/14 1:07 AM, Lindenmaier, Goetz wrote: > Hi, > > 8015744 breaks the build on ppc as an inline function is not defined: 8015774 > compile.hpp:812: warning: inline function 'Node_Notes* Compile::locate_node_notes(GrowableArray*, int, bool)' used but never defined > > This is because compile.hpp declares and calls locate_node_notes(), which is defined in node.hpp. Therefore codeCache.cpp must include node.hpp, too. Good. Thanks, Vladimir > > Bug: > https://bugs.openjdk.java.net/browse/JDK-8058716 > Webrev: > http://cr.openjdk.java.net/~goetz/webrevs/8058716-inclFix/webrev.00/ > > Please review this change. I please need a sponsor to push it. 
> > Best regards, > Goetz > From mikael.vidstedt at oracle.com Fri Sep 19 18:03:05 2014 From: mikael.vidstedt at oracle.com (Mikael Vidstedt) Date: Fri, 19 Sep 2014 11:03:05 -0700 Subject: Proposal: Allowing selective pushes to hotspot without jprt In-Reply-To: References: <540F7021.5080100@oracle.com> <5410CDA9.7030405@oracle.com> Message-ID: <541C6FD9.5050602@oracle.com> Volker, The proposal is only to change how the changes are pushed, not which forests changes can be pushed to. That is, we would still require hotspot changes to be pushed to one of the group repositories (jdk9/hs-{comp,gc,rt}) or to the jdk8u/hs-dev forest (jdk8u), but I propose that the relaxation be applied on all those (four) forests. Reasonable? Cheers, Mikael On 2014-09-12 11:38, Volker Simonis wrote: > Hi Mikael, > > there's one more question that came to my mind: will the new rule > apply to all hotspot respitories (i.e. jdk9/hs-rt/hotspot, > jdk9/hs-comp/hotspot, jdk9/hs-gc/hotspot, jdk9/hs-hs/hotspot AND > jdk8u/jdk8u-dev/hotspot, jdk8u/hs-dev/hotspot) ? > > Thanks, > Volker > > > On Thu, Sep 11, 2014 at 12:16 AM, Mikael Vidstedt > wrote: >> Andrew/Volker, >> >> Thanks for the positive feedback. The goal of the proposal is to simplify >> pushing changes which are effectively not tested by the jprt system anyway. >> The proposed relaxation would not affect work on other infrastructure >> projects in any relevant way, but would hopefully improve all our lives >> significantly immediately. >> >> Cheers, >> Mikael >> >> >> On 2014-09-10 01:45, Volker Simonis wrote: >>> Hi Mikael, >>> >>> thanks a lot for this proposal. I think this will dramatically >>> simplify our work to keep our ports up to date! So I fully support it. >>> >>> Nevertheless, I think this can only be a first step towards fully open >>> the JPRT system to developers outside Oracle. 
With "opening" I mean to >>> allow OpenJDK committers from outside Oracle to submit and run JPRT >>> jobs as well as allowing porting projects to add hardware which builds >>> and tests the HotSpot on alternative platforms. >>> >>> So while I'm all in favor of your proposal I hope you can allay my >>> doubts that this simplification will hopefully not push the >>> realization of a truly OPEN JPRT system even further away. >>> >>> Regards, >>> Volker >>> >>> >>> On Tue, Sep 9, 2014 at 11:24 PM, Mikael Vidstedt >>> wrote: >>>> All, >>>> >>>> Made up primarily of low level C++ code, the Hotspot codebase is highly >>>> platform dependent and also tightly coupled with the tool chains on the >>>> various platforms. Each platform/tool chain combination has its set of >>>> special quirks, and code must be implemented in a way such that it only >>>> relies on the common subset of syntax and functionality across all these >>>> combinations. History has taught us that even simple changes can have >>>> surprising results when compiled with different compilers. >>>> >>>> For more than a decade the Hotspot team has ensured a minimum quality >>>> level >>>> by requiring all pushes to be done through a build and test system (jprt) >>>> which guarantees that the code resulting from applying a set of changes >>>> builds on a set of core platforms and that a set of core tests pass. Only >>>> if >>>> all the builds and tests pass will the changes actually be pushed to the >>>> target repository. >>>> >>>> We believe that testing like the above, in combination with later stages >>>> of >>>> testing, is vital to ensuring that the quality level of the Hotspot code >>>> remains high and that developers do not run into situations where the >>>> latest >>>> version has build errors on some platforms. >>>> >>>> Recently the AIX/PPC port was added to the set of OpenJDK platforms.
From >>>> a >>>> Hotspot perspective this new platform added a set of AIX/PPC specific >>>> files >>>> including some platform specific changes to shared code. The AIX/PPC >>>> platform is not tested by Oracle as part of Hotspot push jobs. The same >>>> thing applies for the shark and zero versions of Hotspot. >>>> >>>> While Hotspot developers remain committed to making sure changes are >>>> developed in a way such that the quality level remains high across all >>>> platforms and variants, because of the above mentioned complexities it is >>>> inevitable that from time to time changes will be made which introduce >>>> issues on specific platforms or tool chains not part of the core testing. >>>> >>>> To allow these issues to be resolved more quickly I would like to propose >>>> a >>>> relaxation in the requirements on how changes to Hotspot are pushed. >>>> Specifically I would like to allow for direct pushes to the hotspot/ >>>> repository of files specific to the following ports/variants/tools: >>>> >>>> * AIX >>>> * PPC >>>> * Shark >>>> * Zero >>>> >>>> Today this translates into the following files: >>>> >>>> - src/cpu/ppc/** >>>> - src/cpu/zero/** >>>> - src/os/aix/** >>>> - src/os_cpu/aix_ppc/** >>>> - src/os_cpu/bsd_zero/** >>>> - src/os_cpu/linux_ppc/** >>>> - src/os_cpu/linux_zero/** >>>> >>>> Note that all changes are still required to go through the normal >>>> development and review cycle; the proposed relaxation only applies to how >>>> the changes are pushed. >>>> >>>> If at code review time a change is for some reason deemed to be risky >>>> and/or >>>> otherwise have impact on shared files the reviewer may request that the >>>> change go through the regular push testing. For changes only touching >>>> the >>>> above set of files this is expected to be rare. >>>> >>>> Please let me know what you think.
>>>> >>>> Cheers, >>>> Mikael >>>> From volker.simonis at gmail.com Fri Sep 19 18:47:03 2014 From: volker.simonis at gmail.com (Volker Simonis) Date: Fri, 19 Sep 2014 20:47:03 +0200 Subject: Proposal: Allowing selective pushes to hotspot without jprt In-Reply-To: <541C6FD9.5050602@oracle.com> References: <540F7021.5080100@oracle.com> <5410CDA9.7030405@oracle.com> <541C6FD9.5050602@oracle.com> Message-ID: Thanks Mikael, that sounds good! Regards, Volker On Fri, Sep 19, 2014 at 8:03 PM, Mikael Vidstedt wrote: > > Volker, > > The proposal is only to change how the changes are pushed, not which forests > changes can be pushed to. That is, we would still require hotspot changes to > be pushed to one of the group repositories (jdk9/hs-{comp,gc,rt}) or to the > jdk8u/hs-dev forest (jdk8u), but I propose that the relaxation be applied on > all those (four) forests. Reasonable? > > Cheers, > Mikael > > > On 2014-09-12 11:38, Volker Simonis wrote: >> >> Hi Mikael, >> >> there's one more question that came to my mind: will the new rule >> apply to all hotspot repositories (i.e. jdk9/hs-rt/hotspot, >> jdk9/hs-comp/hotspot, jdk9/hs-gc/hotspot, jdk9/hs-hs/hotspot AND >> jdk8u/jdk8u-dev/hotspot, jdk8u/hs-dev/hotspot) ? >> >> Thanks, >> Volker >> >> >> On Thu, Sep 11, 2014 at 12:16 AM, Mikael Vidstedt >> wrote: >>> >>> Andrew/Volker, >>> >>> Thanks for the positive feedback. The goal of the proposal is to simplify >>> pushing changes which are effectively not tested by the jprt system >>> anyway. >>> The proposed relaxation would not affect work on other infrastructure >>> projects in any relevant way, but would hopefully improve all our lives >>> significantly immediately. >>> >>> Cheers, >>> Mikael >>> >>> >>> On 2014-09-10 01:45, Volker Simonis wrote: >>>> >>>> Hi Mikael, >>>> >>>> thanks a lot for this proposal. I think this will dramatically >>>> simplify our work to keep our ports up to date! So I fully support it.
>>>> >>>> Nevertheless, I think this can only be a first step towards fully opening >>>> the JPRT system to developers outside Oracle. With "opening" I mean to >>>> allow OpenJDK committers from outside Oracle to submit and run JPRT >>>> jobs as well as allowing porting projects to add hardware which builds >>>> and tests the HotSpot on alternative platforms. >>>> >>>> So while I'm all in favor of your proposal I hope you can allay my >>>> doubts that this simplification will hopefully not push the >>>> realization of a truly OPEN JPRT system even further away. >>>> >>>> Regards, >>>> Volker >>>> >>>> >>>> On Tue, Sep 9, 2014 at 11:24 PM, Mikael Vidstedt >>>> wrote: >>>>> >>>>> All, >>>>> >>>>> Made up primarily of low level C++ code, the Hotspot codebase is highly >>>>> platform dependent and also tightly coupled with the tool chains on the >>>>> various platforms. Each platform/tool chain combination has its set of >>>>> special quirks, and code must be implemented in a way such that it only >>>>> relies on the common subset of syntax and functionality across all >>>>> these >>>>> combinations. History has taught us that even simple changes can have >>>>> surprising results when compiled with different compilers. >>>>> >>>>> For more than a decade the Hotspot team has ensured a minimum quality >>>>> level >>>>> by requiring all pushes to be done through a build and test system >>>>> (jprt) >>>>> which guarantees that the code resulting from applying a set of changes >>>>> builds on a set of core platforms and that a set of core tests pass. >>>>> Only >>>>> if >>>>> all the builds and tests pass will the changes actually be pushed to >>>>> the >>>>> target repository.
>>>>> >>>>> We believe that testing like the above, in combination with later >>>>> stages >>>>> of >>>>> testing, is vital to ensuring that the quality level of the Hotspot >>>>> code >>>>> remains high and that developers do not run into situations where the >>>>> latest >>>>> version has build errors on some platforms. >>>>> >>>>> Recently the AIX/PPC port was added to the set of OpenJDK platforms. >>>>> From >>>>> a >>>>> Hotspot perspective this new platform added a set of AIX/PPC specific >>>>> files >>>>> including some platform specific changes to shared code. The AIX/PPC >>>>> platform is not tested by Oracle as part of Hotspot push jobs. The same >>>>> thing applies for the shark and zero versions of Hotspot. >>>>> >>>>> While Hotspot developers remain committed to making sure changes are >>>>> developed in a way such that the quality level remains high across all >>>>> platforms and variants, because of the above mentioned complexities it >>>>> is >>>>> inevitable that from time to time changes will be made which introduce >>>>> issues on specific platforms or tool chains not part of the core >>>>> testing. >>>>> >>>>> To allow these issues to be resolved more quickly I would like to >>>>> propose >>>>> a >>>>> relaxation in the requirements on how changes to Hotspot are pushed. >>>>> Specifically I would like to allow for direct pushes to the hotspot/ >>>>> repository of files specific to the following ports/variants/tools: >>>>> >>>>> * AIX >>>>> * PPC >>>>> * Shark >>>>> * Zero >>>>> >>>>> Today this translates into the following files: >>>>> >>>>> - src/cpu/ppc/** >>>>> - src/cpu/zero/** >>>>> - src/os/aix/** >>>>> - src/os_cpu/aix_ppc/** >>>>> - src/os_cpu/bsd_zero/** >>>>> - src/os_cpu/linux_ppc/** >>>>> - src/os_cpu/linux_zero/** >>>>> >>>>> Note that all changes are still required to go through the normal >>>>> development and review cycle; the proposed relaxation only applies to >>>>> how >>>>> the changes are pushed. 
>>>>> >>>>> If at code review time a change is for some reason deemed to be risky >>>>> and/or >>>>> otherwise have impact on shared files the reviewer may request that the >>>>> change go through the regular push testing. For changes only >>>>> touching >>>>> the >>>>> above set of files this is expected to be rare. >>>>> >>>>> Please let me know what you think. >>>>> >>>>> Cheers, >>>>> Mikael >>>>> > From igor.veresov at oracle.com Fri Sep 19 18:53:37 2014 From: igor.veresov at oracle.com (Igor Veresov) Date: Fri, 19 Sep 2014 11:53:37 -0700 Subject: [8u] RFR(S) 8058564: Tiered compilation performance drop in PIT Message-ID: jdk9 webrev: http://cr.openjdk.java.net/~iveresov/8058564/webrev.05 jdk8 webrev: http://cr.openjdk.java.net/~iveresov/8058564/webrev-8u/ JDK8 is slightly different - there's no code aging. jdk9 changeset: http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/75e7ad74fba8 JBS: https://bugs.openjdk.java.net/browse/JDK-8058564 Thanks, igor From volker.simonis at gmail.com Fri Sep 19 18:55:50 2014 From: volker.simonis at gmail.com (Volker Simonis) Date: Fri, 19 Sep 2014 20:55:50 +0200 Subject: RFR(S): 8058345: Refactor native stack printing from vmError.cpp to debug.cpp to make it available in gdb as well In-Reply-To: <5419E5C5.9080401@oracle.com> References: <5417DE03.6060301@oracle.com> <54187D59.5050602@oracle.com> <5418B53F.7050508@oracle.com> <5419E5C5.9080401@oracle.com> Message-ID: Hi, so here's my new version: - documented the "pns" command with examples - removed the clumsy "make_frame" generators and introduced a generic frame constructor on all platforms which can now be called from pns() - pns() must now be called with three arguments (usually registers like pns($sp, $fp, $pc) but some arguments may be '0' on some platforms (see the examples in the documentation of pns()) - tested on Linux (x86, x64, ppc64) and Solaris (SPARC, x64) - added additional "Summary" section to the change which mentions that the change also fixes stack traces
on x86 to enable walking of runtime stubs and native wrappers. http://cr.openjdk.java.net/~simonis/webrevs/8058345.v2/ Notice that the current version requires trivial changes in your closed ports (i.e. adding the generic frame constructor) but I'd need a sponsor anyway:) Regards, Volker On Wed, Sep 17, 2014 at 9:49 PM, Vladimir Kozlov wrote: > On 9/17/14 11:29 AM, Volker Simonis wrote: >> >> On Wed, Sep 17, 2014 at 12:10 AM, Vladimir Kozlov >> wrote: >>> >>> On 9/16/14 12:21 PM, Volker Simonis wrote: >>>> >>>> >>>> Hi Vladimir, >>>> >>>> thanks for looking at the change. >>>> >>>> 'make_frame' is only intended to be used from within the debugger to >>>> simplify the usage of the new 'pns()' (i.e. "print native stack") >>>> helper. It can be used as follows: >>>> >>>> (gdb) call pns(make_frame($sp, $rbp, $pc)) >>> >>> >>> >>> It is a strange way to use pns(). Why not pass (sp, fp, pc) to pns() and >>> let >>> it call make_frame()? To have make_frame() only on ppc and x86 will not >>> allow to use pns() on other platforms. >>> >>> Would be nice to have pns() version (names different) without input >>> parameters. Can we use os::current_frame() inside for that? >>> >>> Add pns() description to help() output. >>> >>>> >>>> "Executing pns" >>>> Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, >>>> C=native >>>> code) >>>> C [libpthread.so.0+0xc0fe] pthread_cond_timedwait+0x13e >>>> V [libjvm.so+0x96c4c1] os::sleep(Thread*, long, bool)+0x1a1 >>>> V [libjvm.so+0x75f442] JVM_Sleep+0x312 >>>> j java.lang.Thread.sleep(J)V+0 >>>> j CrashNative.crashIt(Lsun/misc/Unsafe;I)V+10 >>>> j CrashNative.doIt()V+45 >>>> v ~StubRoutines::call_stub >>>> V [libjvm.so+0x71599f] >>>> JavaCalls::call_helper(JavaValue*,methodHandle*, JavaCallArguments*, >>>> Thread*)+0xf8f >>>> >>>> What about the two fixes in 'print_native_stack()' - do you think they >>>> are OK? >>> >>> >>> >>> What about is_runtime_frame()? It is a wrapper for runtime calls from >>> compiled >>> code.
>>> >> >> Yes, but I don't see how this could help here, because the native >> wrapper which makes problems here is a nmethod and not a runtime stub. >> >> Maybe you mean to additionally add is_runtime_frame() to the check? > > > Yes, that is what I meant. > > Thanks, > Vladimir > > >> Yes, I've just realized that that's indeed needed on amd64 to walk >> runtime stubs. SPARC is more graceful and works without these changes, >> but on amd64 we need them (on both Solaris and Linux) and on Sparc >> they don't hurt. >> >> I've written a small test program which should be similar to the one >> you used for 8035983: >> >> import java.util.Hashtable; >> >> public class StackTraceTest { >> static Hashtable ht; >> static { >> ht = new Hashtable(); >> ht.put("one", "one"); >> } >> >> public static void foo() { >> bar(); >> } >> >> public static void bar() { >> ht.get("one"); >> } >> >> public static void main(String args[]) { >> for (int i = 0; i < 5; i++) { >> new Thread() { >> public void run() { >> while(true) { >> foo(); >> } >> } >> }.start(); >> } >> } >> } >> >> If I run it with "-XX:-Inline -XX:+PrintCompilation >> -XX:-TieredCompilation StackTraceTest" inside the debugger and crash >> one of the Java threads in native code, I get the correct stack traces >> on SPARC. 
But on amd64, I only get the following without my changes: >> >> Stack: [0xfffffd7da16f9000,0xfffffd7da17f9000], >> sp=0xfffffd7da17f7c60, free space=1019k >> Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native >> code) >> C [libc.so.1+0xc207f] _lwp_cond_wait+0x1f >> V [libjvm.so+0x171443b] int >> os::Solaris::cond_wait(_lwp_cond*,_lwp_mutex*)+0x2b >> V [libjvm.so+0x171181a] void os::PlatformEvent::park()+0x1fa >> V [libjvm.so+0x16e09c1] void ObjectMonitor::EnterI(Thread*)+0x6f1 >> V [libjvm.so+0x16dfc8f] void ObjectMonitor::enter(Thread*)+0x7cf >> V [libjvm.so+0x18cdd00] void >> ObjectSynchronizer::slow_enter(Handle,BasicLock*,Thread*)+0x2a0 >> V [libjvm.so+0x18cd6a7] void >> ObjectSynchronizer::fast_enter(Handle,BasicLock*,bool,Thread*)+0x157 >> V [libjvm.so+0x182f39e] void >> >> SharedRuntime::complete_monitor_locking_C(oopDesc*,BasicLock*,JavaThread*)+0x23e >> v ~RuntimeStub::_complete_monitor_locking_Java >> C 0x2aad1dd1000016d8 >> >> With the changes (and the additional check for is_runtime_frame()) I >> get full stack traces on amd64 as well. So I think the changes should >> be at least an improvement:) > > Good! > > >> >>> You need to check what fr.real_fp() returns on all platforms for the very >>> first frame (_lwp_start). That is what this check is about - stop walking >>> when >>> it reaches the first frame. fr.sender_sp() returns a bogus value which is >>> not >>> a stack pointer for the first frame. From 8035983 review: >>> >>> "It seems using fr.sender_sp() in the check works on x86 and sparc. >>> On x86 it returns the stack_base value; on sparc it returns STACK_BIAS." >>> >>> Also on our other platforms it could return 0 or a small integer value. >>> >>> If you can suggest another way to determine the first frame, please, >>> tell. >>> >> >> So the initial problem in 8035983 was that we used >> os::is_first_C_frame(&fr) for native frames where the sender was a >> compiled frame.
That didn't work reliably because >> os::is_first_C_frame(&fr) uses fr->link() to get the frame pointer of >> the sender and that doesn't work for compiled senders. >> >> So you replaced os::is_first_C_frame(&fr) by >> !on_local_stack((address)(fr.sender_sp() + 1)) but that uses addr_at() >> internally which in turn uses fp() so it won't work for frames which >> have a bogus frame pointer like native wrappers. >> >> I think using fr.real_fp() should be safe because as far as I can see >> it is always fr.sender_sp() - 2 on amd64 and equal to fr.sender_sp() >> on SPARC. On Linux/amd64 both the sp and fp of the first frame will >> be 0 (still have to check on SPARC). But the example above works fine >> with my changes on both Linux/amd64 and Solaris/SPARC and >> Solaris/amd64. >> >> I'll prepare a new webrev tomorrow which will have the documentation >> for "pns" and a version of make_frame() for SPARC. >> >> Regards, >> Volker >> >>>> Should I move 'print_native_stack()' to vmError.cpp as suggested by >>>> David? >>> >>> >>> >>> I am fine with both places. >>> >>> Thanks, >>> Vladimir >>> >>> >>>> >>>> Thank you and best regards, >>>> Volker >>>> >>>> On Tue, Sep 16, 2014 at 8:11 PM, Vladimir Kozlov >>>> wrote: >>>>> >>>>> >>>>> Thank you for fixing frame walk. >>>>> I don't see where make_frame() is used. >>>>> >>>>> Thanks, >>>>> Vladimir >>>>> >>>>> >>>>> On 9/16/14 9:35 AM, Volker Simonis wrote: >>>>>> >>>>>> >>>>>> >>>>>> Hi, >>>>>> >>>>>> while testing my change, I found two other small problems with native >>>>>> stack traces: >>>>>> >>>>>> 1. we can not walk native wrappers (at least not on Linux/amd64) >>>>>> because they are treated as native "C" frames. However, if the native >>>>>> wrapper was called from a compiled frame which had no valid frame >>>>>> pointer (i.e. %rbp) os::get_sender_for_C_frame(&fr) will produce a bad >>>>>> frame. This can be easily fixed by treating native wrappers like Java >>>>>> frames. >>>>>> >>>>>> 2.
the fix for "8035983: Fix "Native frames:" in crash report (hs_err >>>>>> file)" introduced a similar problem. If we walk the stack from a >>>>>> native wrapper down to a compiled frame, we will have a frame with an >>>>>> invalid frame pointer. In that case, the newly introduced check from >>>>>> change 8035983 will fail, because fr.sender_sp() depends on a valid >>>>>> fp. I propose to replace fr.sender_sp() by fr.real_fp() which >>>>>> should do the same but also works for compiled frames with invalid fp. >>>>>> >>>>>> Here's the new webrev: >>>>>> >>>>>> http://cr.openjdk.java.net/~simonis/webrevs/8058345.v1/ >>>>>> >>>>>> What do you think? >>>>>> >>>>>> Thank you and best regards, >>>>>> Volker >>>>>> >>>>>> >>>>>> On Tue, Sep 16, 2014 at 3:48 PM, Volker Simonis >>>>>> wrote: >>>>>>> >>>>>>> >>>>>>> >>>>>>> 'print_native_stack()' must be visible in both vmError.cpp and >>>>>>> debug.cpp. Initially I saw that vmError.cpp already included >>>>>>> debug.hpp >>>>>>> so I decided to declare it in debug.hpp. But now I realized that also >>>>>>> debug.cpp includes vmError.hpp so I could just as well declare >>>>>>> 'print_native_stack()' in vmError.hpp and leave the implementation in >>>>>>> vmError.cpp. Do you want me to change that?
>>>>>>> >>>>>>> Thank you and best regards, >>>>>>> Volker >>>>>>> >>>>>>> >>>>>>> On Tue, Sep 16, 2014 at 8:51 AM, David Holmes >>>>>>> >>>>>>> wrote: >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> Hi Volker, >>>>>>>> >>>>>>>> On 13/09/2014 5:15 AM, Volker Simonis wrote: >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> Hi, >>>>>>>>> >>>>>>>>> could you please review and sponsor the following small change >>>>>>>>> which >>>>>>>>> should make debugging a little more comfortable (at least on Linux >>>>>>>>> for >>>>>>>>> now): >>>>>>>>> >>>>>>>>> http://cr.openjdk.java.net/~simonis/webrevs/8058345/ >>>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8058345 >>>>>>>>> >>>>>>>>> In the hs_err files we have a nice mixed stack trace which contains >>>>>>>>> both Java and native frames. >>>>>>>>> It would be nice if we could make this functionality available from >>>>>>>>> within gdb during debugging sessions (until now we can only print >>>>>>>>> the >>>>>>>>> pure Java stack with the "ps()" helper function from debug.cpp). >>>>>>>>> >>>>>>>>> This new feature can be easily achieved by refactoring the >>>>>>>>> corresponding stack printing code from VMError::report() in >>>>>>>>> vmError.cpp into its own method in debug.cpp. This change extracts >>>>>>>>> that code into the new function 'print_native_stack()' in debug.cpp >>>>>>>>> without changing any of the functionality. >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> Why does it need to move to debug.cpp to allow this? >>>>>>>> >>>>>>>> David >>>>>>>> ----- >>>>>>>> >>>>>>>> >>>>>>>>> It also adds some helper functions which make it easy to call the >>>>>>>>> new >>>>>>>>> 'print_native_stack()' method from within gdb. There's the new >>>>>>>>> helper >>>>>>>>> function 'pns(frame f)' which takes a frame argument and calls >>>>>>>>> 'print_native_stack()'.
We need the frame argument because gdb >>>>>>>>> inserts >>>>>>>>> a dummy frame for every call and we can't easily walk over this >>>>>>>>> dummy >>>>>>>>> frame from our stack printing routine. >>>>>>>>> >>>>>>>>> To simplify the creation of the frame object, I've added the helper >>>>>>>>> functions: >>>>>>>>> >>>>>>>>> extern "C" frame make_frame(intptr_t* sp, intptr_t* fp, address pc) >>>>>>>>> { >>>>>>>>> return frame(sp, fp, pc); >>>>>>>>> } >>>>>>>>> >>>>>>>>> for x86 (in frame_x86.cpp) and >>>>>>>>> >>>>>>>>> extern "C" frame make_frame(intptr_t* sp, address pc) { >>>>>>>>> return frame(sp, pc); >>>>>>>>> } >>>>>>>>> >>>>>>>>> for ppc64 in frame_ppc.cpp. With these helper functions we can now >>>>>>>>> easily get a mixed stack trace of a Java thread in gdb (see below). >>>>>>>>> >>>>>>>>> All the helper functions are protected by '#ifndef PRODUCT' >>>>>>>>> >>>>>>>>> Thank you and best regards, >>>>>>>>> Volker >>>>>>>>> >>>>>>>>> >>>>>>>>> (gdb) call pns(make_frame($sp, $rbp, $pc)) >>>>>>>>> >>>>>>>>> "Executing pns" >>>>>>>>> Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, >>>>>>>>> C=native >>>>>>>>> code) >>>>>>>>> C [libpthread.so.0+0xc0fe] pthread_cond_timedwait+0x13e >>>>>>>>> V [libjvm.so+0x96c4c1] os::sleep(Thread*, long, bool)+0x1a1 >>>>>>>>> V [libjvm.so+0x75f442] JVM_Sleep+0x312 >>>>>>>>> j java.lang.Thread.sleep(J)V+0 >>>>>>>>> j CrashNative.crashIt(Lsun/misc/Unsafe;I)V+10 >>>>>>>>> j CrashNative.doIt()V+45 >>>>>>>>> v ~StubRoutines::call_stub >>>>>>>>> V [libjvm.so+0x71599f] JavaCalls::call_helper(JavaValue*, >>>>>>>>> methodHandle*, JavaCallArguments*, Thread*)+0xf8f >>>>>>>>> V [libjvm.so+0x9eab75] Reflection::invoke(instanceKlassHandle, >>>>>>>>> methodHandle, Handle, bool, objArrayHandle, BasicType, >>>>>>>>> objArrayHandle, >>>>>>>>> bool, Thread*) [clone .constprop.218]+0xa25 >>>>>>>>> V [libjvm.so+0x9eb838] Reflection::invoke_method(oopDesc*, >>>>>>>>> Handle, >>>>>>>>> objArrayHandle, Thread*)+0x1c8 >>>>>>>>> V 
[libjvm.so+0x7637ae] JVM_InvokeMethod+0xfe >>>>>>>>> j >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> sun.reflect.NativeMethodAccessorImpl.invoke0(Ljava/lang/reflect/Method;Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;+0 >>>>>>>>> j >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> sun.reflect.NativeMethodAccessorImpl.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;+100 >>>>>>>>> j >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;+6 >>>>>>>>> j >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> java.lang.reflect.Method.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;+56 >>>>>>>>> j CrashNative.mainJava()V+32 >>>>>>>>> v ~StubRoutines::call_stub >>>>>>>>> V [libjvm.so+0x71599f] JavaCalls::call_helper(JavaValue*, >>>>>>>>> methodHandle*, JavaCallArguments*, Thread*)+0xf8f >>>>>>>>> V [libjvm.so+0x7384f5] jni_invoke_static(JNIEnv_*, JavaValue*, >>>>>>>>> _jobject*, JNICallType, _jmethodID*, JNI_ArgumentPusher*, Thread*) >>>>>>>>> [clone .isra.238] [clone .constprop.250]+0x385 >>>>>>>>> V [libjvm.so+0x73b3d7] jni_CallStaticVoidMethodV+0xe7 >>>>>>>>> C [libCrashNative.so+0x9a9] >>>>>>>>> JNIEnv_::CallStaticVoidMethod(_jclass*, >>>>>>>>> _jmethodID*, ...)+0xb9 >>>>>>>>> C [libCrashNative.so+0xa10] step3(JNIEnv_*, _jobject*)+0x65 >>>>>>>>> C [libCrashNative.so+0xa69] step2(JNIEnv_*, _jobject*)+0x57 >>>>>>>>> C [libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25 >>>>>>>>> C [libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25 >>>>>>>>> C [libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25 >>>>>>>>> C [libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25 >>>>>>>>> C [libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25 >>>>>>>>> C [libCrashNative.so+0xa8e] step1(JNIEnv_*, _jobject*)+0x23 >>>>>>>>> C [libCrashNative.so+0x87f] Java_CrashNative_nativeMethod+0x23 >>>>>>>>> j CrashNative.nativeMethod()V+0 >>>>>>>>> j 
CrashNative.main([Ljava/lang/String;)V+9 >>>>>>>>> v ~StubRoutines::call_stub >>>>>>>>> V [libjvm.so+0x71599f] JavaCalls::call_helper(JavaValue*, >>>>>>>>> methodHandle*, JavaCallArguments*, Thread*)+0xf8f >>>>>>>>> V [libjvm.so+0x7384f5] jni_invoke_static(JNIEnv_*, JavaValue*, >>>>>>>>> _jobject*, JNICallType, _jmethodID*, JNI_ArgumentPusher*, Thread*) >>>>>>>>> [clone .isra.238] [clone .constprop.250]+0x385 >>>>>>>>> V [libjvm.so+0x73b2b0] jni_CallStaticVoidMethod+0x170 >>>>>>>>> C [libjli.so+0x742a] JavaMain+0x65a >>>>>>>>> C [libpthread.so.0+0x7e9a] start_thread+0xda >>>>>>>>> >>>>>>>> >>>>> >>> > From vladimir.kozlov at oracle.com Fri Sep 19 18:58:27 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Fri, 19 Sep 2014 11:58:27 -0700 Subject: [8u] RFR(S) 8058564: Tiered compilation performance drop in PIT In-Reply-To: References: Message-ID: <541C7CD3.6060900@oracle.com> Looks good. Thanks, Vladimir On 9/19/14 11:53 AM, Igor Veresov wrote: > jdk9 webrev: http://cr.openjdk.java.net/~iveresov/8058564/webrev.05 > jdk8 webrev: http://cr.openjdk.java.net/~iveresov/8058564/webrev-8u/ > JDK8 is slightly different - there's no code aging. > jdk9 changeset: http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/75e7ad74fba8 > JBS: https://bugs.openjdk.java.net/browse/JDK-8058564 > > Thanks, > igor > > From jiangli.zhou at oracle.com Fri Sep 19 19:24:02 2014 From: jiangli.zhou at oracle.com (Jiangli Zhou) Date: Fri, 19 Sep 2014 12:24:02 -0700 Subject: [8u] RFR(S) 8058564: Tiered compilation performance drop in PIT In-Reply-To: References: Message-ID: <541C82D2.8090404@oracle.com> Hi Igor, Looks good. Thanks, Jiangli On 09/19/2014 11:53 AM, Igor Veresov wrote: > jdk9 webrev: http://cr.openjdk.java.net/~iveresov/8058564/webrev.05 > jdk8 webrev: http://cr.openjdk.java.net/~iveresov/8058564/webrev-8u/ > JDK8 is slightly different - there's no code aging.
> jdk9 changeset: http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/75e7ad74fba8 > JBS: https://bugs.openjdk.java.net/browse/JDK-8058564 > > Thanks, > igor > > From igor.veresov at oracle.com Fri Sep 19 22:01:01 2014 From: igor.veresov at oracle.com (Igor Veresov) Date: Fri, 19 Sep 2014 15:01:01 -0700 Subject: [8u] RFR(S) 8058564: Tiered compilation performance drop in PIT In-Reply-To: <541C82D2.8090404@oracle.com> References: <541C82D2.8090404@oracle.com> Message-ID: <8C63D15D-3CA5-403C-82E3-5CF2C1B27C15@oracle.com> Thanks Jiangli and Vladimir! igor On Sep 19, 2014, at 12:24 PM, Jiangli Zhou wrote: > Hi Igor, > > Looks good. > > Thanks, > Jiangli > > On 09/19/2014 11:53 AM, Igor Veresov wrote: >> jdk9 webrev: http://cr.openjdk.java.net/~iveresov/8058564/webrev.05 >> jdk8 webrev: http://cr.openjdk.java.net/~iveresov/8058564/webrev-8u/ >> JDK8 is slightly different - there's no code aging. >> jdk9 changeset: http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/75e7ad74fba8 >> JBS: https://bugs.openjdk.java.net/browse/JDK-8058564 >> >> Thanks, >> igor >> >> > From vladimir.kozlov at oracle.com Fri Sep 19 22:22:28 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Fri, 19 Sep 2014 15:22:28 -0700 Subject: RFR(S): 8058345: Refactor native stack printing from vmError.cpp to debug.cpp to make it available in gdb as well In-Reply-To: References: <5417DE03.6060301@oracle.com> <54187D59.5050602@oracle.com> <5418B53F.7050508@oracle.com> <5419E5C5.9080401@oracle.com> Message-ID: <541CACA4.80801@oracle.com> os_solaris_sparc.cpp I think third parameter should be 'false' - originally we passed 0: - return frame(NULL, NULL, NULL); + return frame(NULL, NULL, true); Please, use one line (even if it is long): + tty->print_cr(" pns(void* sp,\n" + " void* fp,\n" + " void* pc) - print native (i.e. mixed) stack trace. E.g."); Otherwise look good.
Thanks, Vladimir On 9/19/14 11:55 AM, Volker Simonis wrote: > Hi, > > so here's my new version: > > - documented the "pns" command with examples > - removed the clumsy "make_frame" generators and introduced a generic > frame constructor on all platforms which can now be called from pns() > - pns() must now be called with three arguments (usually registers > like pns($sp, $fp, $pc) but some arguments may be '0' on some > platforms (see the examples in the documentation of pns()) > - tested on Linux (x86, x64, ppc64) and Solaris (SPARC, x64) > - added additional "Summary" section to the change which mentions > that the change also fixes stack traces on x86 to enable walking of > runtime stubs and native wrappers. > > http://cr.openjdk.java.net/~simonis/webrevs/8058345.v2/ > > Notice that the current version requires trivial changes in your > closed ports (i.e. adding the generic frame constructor) but I'd need > a sponsor anyway:) > > Regards, > Volker > > On Wed, Sep 17, 2014 at 9:49 PM, Vladimir Kozlov > wrote: >> On 9/17/14 11:29 AM, Volker Simonis wrote: >>> >>> On Wed, Sep 17, 2014 at 12:10 AM, Vladimir Kozlov >>> wrote: >>>> >>>> On 9/16/14 12:21 PM, Volker Simonis wrote: >>>>> >>>>> >>>>> Hi Vladimir, >>>>> >>>>> thanks for looking at the change. >>>>> >>>>> 'make_frame' is only intended to be used from within the debugger to >>>>> simplify the usage of the new 'pns()' (i.e. "print native stack") >>>>> helper. It can be used as follows: >>>>> >>>>> (gdb) call pns(make_frame($sp, $rbp, $pc)) >>>> >>>> >>>> >>>> It is a strange way to use pns(). Why not pass (sp, fp, pc) to pns() and >>>> let >>>> it call make_frame()? To have make_frame() only on ppc and x86 will not >>>> allow to use pns() on other platforms. >>>> >>>> Would be nice to have pns() version (names different) without input >>>> parameters. Can we use os::current_frame() inside for that? >>>> >>>> Add pns() description to help() output.
>>>> >>>>> "Executing pns" >>>>> Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, >>>>> C=native >>>>> code) >>>>> C [libpthread.so.0+0xc0fe] pthread_cond_timedwait+0x13e >>>>> V [libjvm.so+0x96c4c1] os::sleep(Thread*, long, bool)+0x1a1 >>>>> V [libjvm.so+0x75f442] JVM_Sleep+0x312 >>>>> j java.lang.Thread.sleep(J)V+0 >>>>> j CrashNative.crashIt(Lsun/misc/Unsafe;I)V+10 >>>>> j CrashNative.doIt()V+45 >>>>> v ~StubRoutines::call_stub >>>>> V [libjvm.so+0x71599f] >>>>> JavaCalls::call_helper(JavaValue*,methodHandle*, JavaCallArguments*, >>>>> Thread*)+0xf8f >>>>> >>>>> What about the two fixes in 'print_native_stack()' - do you think they >>>>> are OK? >>>> >>>> >>>> >>>> What about is_runtime_frame()? It is a wrapper for runtime calls from >>>> compiled >>>> code. >>>> >>> >>> Yes, but I don't see how this could help here, because the native >>> wrapper which makes problems here is a nmethod and not a runtime stub. >>> >>> Maybe you mean to additionally add is_runtime_frame() to the check? >> >> >> Yes, that is what I meant. >> >> Thanks, >> Vladimir >> >> >>> Yes, I've just realized that that's indeed needed on amd64 to walk >>> runtime stubs. SPARC is more graceful and works without these changes, >>> but on amd64 we need them (on both Solaris and Linux) and on Sparc >>> they don't hurt.
>>> >>> I've written a small test program which should be similar to the one >>> you used for 8035983: >>> >>> import java.util.Hashtable; >>> >>> public class StackTraceTest { >>> static Hashtable ht; >>> static { >>> ht = new Hashtable(); >>> ht.put("one", "one"); >>> } >>> >>> public static void foo() { >>> bar(); >>> } >>> >>> public static void bar() { >>> ht.get("one"); >>> } >>> >>> public static void main(String args[]) { >>> for (int i = 0; i < 5; i++) { >>> new Thread() { >>> public void run() { >>> while(true) { >>> foo(); >>> } >>> } >>> }.start(); >>> } >>> } >>> } >>> >>> If I run it with "-XX:-Inline -XX:+PrintCompilation >>> -XX:-TieredCompilation StackTraceTest" inside the debugger and crash >>> one of the Java threads in native code, I get the correct stack traces >>> on SPARC. But on amd64, I only get the following without my changes: >>> >>> Stack: [0xfffffd7da16f9000,0xfffffd7da17f9000], >>> sp=0xfffffd7da17f7c60, free space=1019k >>> Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native >>> code) >>> C [libc.so.1+0xc207f] _lwp_cond_wait+0x1f >>> V [libjvm.so+0x171443b] int >>> os::Solaris::cond_wait(_lwp_cond*,_lwp_mutex*)+0x2b >>> V [libjvm.so+0x171181a] void os::PlatformEvent::park()+0x1fa >>> V [libjvm.so+0x16e09c1] void ObjectMonitor::EnterI(Thread*)+0x6f1 >>> V [libjvm.so+0x16dfc8f] void ObjectMonitor::enter(Thread*)+0x7cf >>> V [libjvm.so+0x18cdd00] void >>> ObjectSynchronizer::slow_enter(Handle,BasicLock*,Thread*)+0x2a0 >>> V [libjvm.so+0x18cd6a7] void >>> ObjectSynchronizer::fast_enter(Handle,BasicLock*,bool,Thread*)+0x157 >>> V [libjvm.so+0x182f39e] void >>> >>> SharedRuntime::complete_monitor_locking_C(oopDesc*,BasicLock*,JavaThread*)+0x23e >>> v ~RuntimeStub::_complete_monitor_locking_Java >>> C 0x2aad1dd1000016d8 >>> >>> With the changes (and the additional check for is_runtime_frame()) I >>> get full stack traces on amd64 as well. So I think the changes should >>> be at least an improvement:) >> >> >> Good! 
>> >> >>> >>>> You need to check what fr.real_fp() returns on all platforms for the very >>>> first frame (_lwp_start). That is what this check about - stop walking >>>> when >>>> it reaches the first frame. fr.sender_sp() returns bogus value which is >>>> not >>>> stack pointer for the first frame. From 8035983 review: >>>> >>>> "It seems using fr.sender_sp() in the check work on x86 and sparc. >>>> On x86 it return stack_base value on sparc it returns STACK_BIAS." >>>> >>>> Also on other our platforms it could return 0 or small integer value. >>>> >>>> If you can suggest an other way to determine the first frame, please, >>>> tell. >>>> >>> >>> So the initial problem in 8035983 was that we used >>> os::is_first_C_frame(&fr) for native frames where the sender was a >>> compiled frame. That didn't work reliably because, >>> os::is_first_C_frame(&fr) uses fr->link() to get the frame pointer of >>> the sender and that doesn't work for compiled senders. >>> >>> So you replaced os::is_first_C_frame(&fr) by >>> !on_local_stack((address)(fr.sender_sp() + 1)) but that uses addr_at() >>> internally which in turn uses fp() so it won't work for frames which >>> have a bogus frame pointer like native wrappers. >>> >>> I think using fr.real_fp() should be safe because as far as I can see >>> it is always fr.sender_sp() - 2 on amd64 and equal to fr.sender_sp() >>> on SPARC. On Linux/amd64 both, the sp and fp of the first frame will >>> be 0 (still have to check on SPARC). But the example above works fine >>> with my changes on both, Linux/amd64 and Solaris/SPARC and >>> Solaris/amd64. >>> >>> I'll prepare a new webrev tomorrow which will have the documentation >>> for "pns" and a version of make_frame() for SPARC. >>> >>> Regards, >>> Volker >>> >>>>> Should I move 'print_native_stack()' to vmError.cpp as suggested by >>>>> David? >>>> >>>> >>>> >>>> I am fine with both places. 
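The termination test being discussed (stop the stack walk once the candidate sender leaves the thread's stack, using fr.real_fp() instead of a fp-dependent fr.sender_sp()) can be sketched in isolation. This is not HotSpot code: all names are invented, and the amd64-style "real_fp is sender_sp minus two slots" layout is an assumption taken from the discussion above.

```cpp
#include <cstdint>

// Illustrative stand-in for HotSpot's frame class (names invented).
struct FakeFrame {
  uintptr_t sender_sp_;  // sender's stack pointer; bogus (0 or tiny) for the first frame
  // On amd64 real_fp() is effectively sender_sp() - 2 slots; on SPARC it
  // equals sender_sp() (per the discussion above).
  uintptr_t real_fp() const { return sender_sp_ - 2 * sizeof(intptr_t); }
};

// True while 'addr' lies inside the thread stack [stack_end, stack_base).
inline bool on_local_stack(uintptr_t addr, uintptr_t stack_end, uintptr_t stack_base) {
  return addr >= stack_end && addr < stack_base;
}

// The walk continues only while real_fp() still points into the stack;
// the first frame's bogus sender sp fails this test and ends the walk.
inline bool keep_walking(const FakeFrame& fr, uintptr_t stack_end, uintptr_t stack_base) {
  return on_local_stack(fr.real_fp(), stack_end, stack_base);
}
```

The point of the switch is that real_fp() here is derived from the sender sp alone, so a compiled frame with a garbage %rbp cannot poison the check.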
>>>> >>>> Thanks, >>>> Vladimir >>>> >>>> >>>>> >>>>> Thank you and best regards, >>>>> Volker >>>>> >>>>> On Tue, Sep 16, 2014 at 8:11 PM, Vladimir Kozlov >>>>> wrote: >>>>>> >>>>>> >>>>>> Thank you for fixing frame walk. >>>>>> I don't see where make_frame() is used. >>>>>> >>>>>> Thanks, >>>>>> Vladimir >>>>>> >>>>>> >>>>>> On 9/16/14 9:35 AM, Volker Simonis wrote: >>>>>>> >>>>>>> >>>>>>> >>>>>>> Hi, >>>>>>> >>>>>>> while testing my change, I found two other small problems with native >>>>>>> stack traces: >>>>>>> >>>>>>> 1. we cannot walk native wrappers (at least not on Linux/amd64) >>>>>>> because they are treated as native "C" frames. However, if the native >>>>>>> wrapper was called from a compiled frame which had no valid frame >>>>>>> pointer (i.e. %rbp) os::get_sender_for_C_frame(&fr) will produce a bad >>>>>>> frame. This can be easily fixed by treating native wrappers like java >>>>>>> frames. >>>>>>> >>>>>>> 2. the fix for "8035983: Fix "Native frames:" in crash report (hs_err >>>>>>> file)" introduced a similar problem. If we walk the stack from a >>>>>>> native wrapper down to a compiled frame, we will have a frame with an >>>>>>> invalid frame pointer. In that case, the newly introduced check from >>>>>>> change 8035983 will fail, because fr.sender_sp() depends on a valid >>>>>>> fp. I'll propose to replace fr.sender_sp() by fr.real_fp() which >>>>>>> should do the same but also works for compiled frames with invalid fp. >>>>>>> >>>>>>> Here's the new webrev: >>>>>>> >>>>>>> http://cr.openjdk.java.net/~simonis/webrevs/8058345.v1/ >>>>>>> >>>>>>> What do you think? >>>>>>> >>>>>>> Thank you and best regards, >>>>>>> Volker >>>>>>> >>>>>>> >>>>>>> On Tue, Sep 16, 2014 at 3:48 PM, Volker Simonis >>>>>>> wrote: >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> 'print_native_stack()' must be visible in both vmError.cpp and >>>>>>>> debug.cpp. Initially I saw that vmError.cpp already included >>>>>>>> debug.hpp >>>>>>>> so I decided to declare it in debug.hpp.
But now I realized that also >>>>>>>> debug.cpp includes vmError.hpp so I could just as well declare >>>>>>>> 'print_native_stack()' in vmError.hpp and leave the implementation in >>>>>>>> vmError.cpp. Do you want me to change that? >>>>>>>> >>>>>>>> Thank you and best regards, >>>>>>>> Volker >>>>>>>> >>>>>>>> >>>>>>>> On Tue, Sep 16, 2014 at 8:51 AM, David Holmes >>>>>>>> >>>>>>>> wrote: >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> Hi Volker, >>>>>>>>> >>>>>>>>> On 13/09/2014 5:15 AM, Volker Simonis wrote: >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> Hi, >>>>>>>>>> >>>>>>>>>> could you please review and sponsor the following small change >>>>>>>>>> which >>>>>>>>>> should make debugging a little more comfortable (at least on Linux >>>>>>>>>> for >>>>>>>>>> now): >>>>>>>>>> >>>>>>>>>> http://cr.openjdk.java.net/~simonis/webrevs/8058345/ >>>>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8058345 >>>>>>>>>> >>>>>>>>>> In the hs_err files we have a nice mixed stack trace which contains >>>>>>>>>> both, Java and native frames. >>>>>>>>>> It would be nice if we could make this functionality available from >>>>>>>>>> within gdb during debugging sessions (until now we can only print >>>>>>>>>> the >>>>>>>>>> pure Java stack with the "ps()" helper function from debug.cpp). >>>>>>>>>> >>>>>>>>>> This new feature can be easily achieved by refactoring the >>>>>>>>>> corresponding stack printing code from VMError::report() in >>>>>>>>>> vmError.cpp into its own method in debug.cpp. This change extracts >>>>>>>>>> that code into the new function 'print_native_stack()' in debug.cpp >>>>>>>>>> without changing anything of the functionality. >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> Why does it need to move to debug.cpp to allow this ? >>>>>>>>> >>>>>>>>> David >>>>>>>>> ----- >>>>>>>>> >>>>>>>>> >>>>>>>>>> It also adds some helper functions which make it easy to call the >>>>>>>>>> new >>>>>>>>>> 'print_native_stack()' method from within gdb.
There's the new >>>>>>>>>> helper >>>>>>>>>> function 'pns(frame f)' which takes a frame argument and calls >>>>>>>>>> 'print_native_stack()'. We need the frame argument because gdb >>>>>>>>>> inserts >>>>>>>>>> a dummy frame for every call and we can't easily walk over this >>>>>>>>>> dummy >>>>>>>>>> frame from our stack printing routine. >>>>>>>>>> >>>>>>>>>> To simplify the creation of the frame object, I've added the helper >>>>>>>>>> functions: >>>>>>>>>> >>>>>>>>>> extern "C" frame make_frame(intptr_t* sp, intptr_t* fp, address pc) >>>>>>>>>> { >>>>>>>>>> return frame(sp, fp, pc); >>>>>>>>>> } >>>>>>>>>> >>>>>>>>>> for x86 (in frame_x86.cpp) and >>>>>>>>>> >>>>>>>>>> extern "C" frame make_frame(intptr_t* sp, address pc) { >>>>>>>>>> return frame(sp, pc); >>>>>>>>>> } >>>>>>>>>> >>>>>>>>>> for ppc64 in frame_ppc.cpp. With these helper functions we can now >>>>>>>>>> easily get a mixed stack trace of a Java thread in gdb (see below). >>>>>>>>>> >>>>>>>>>> All the helper functions are protected by '#ifndef PRODUCT' >>>>>>>>>> >>>>>>>>>> Thank you and best regards, >>>>>>>>>> Volker >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> (gdb) call pns(make_frame($sp, $rbp, $pc)) >>>>>>>>>> >>>>>>>>>> "Executing pns" >>>>>>>>>> Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, >>>>>>>>>> C=native >>>>>>>>>> code) >>>>>>>>>> C [libpthread.so.0+0xc0fe] pthread_cond_timedwait+0x13e >>>>>>>>>> V [libjvm.so+0x96c4c1] os::sleep(Thread*, long, bool)+0x1a1 >>>>>>>>>> V [libjvm.so+0x75f442] JVM_Sleep+0x312 >>>>>>>>>> j java.lang.Thread.sleep(J)V+0 >>>>>>>>>> j CrashNative.crashIt(Lsun/misc/Unsafe;I)V+10 >>>>>>>>>> j CrashNative.doIt()V+45 >>>>>>>>>> v ~StubRoutines::call_stub >>>>>>>>>> V [libjvm.so+0x71599f] JavaCalls::call_helper(JavaValue*, >>>>>>>>>> methodHandle*, JavaCallArguments*, Thread*)+0xf8f >>>>>>>>>> V [libjvm.so+0x9eab75] Reflection::invoke(instanceKlassHandle, >>>>>>>>>> methodHandle, Handle, bool, objArrayHandle, BasicType, >>>>>>>>>> objArrayHandle, >>>>>>>>>> 
bool, Thread*) [clone .constprop.218]+0xa25 >>>>>>>>>> V [libjvm.so+0x9eb838] Reflection::invoke_method(oopDesc*, >>>>>>>>>> Handle, >>>>>>>>>> objArrayHandle, Thread*)+0x1c8 >>>>>>>>>> V [libjvm.so+0x7637ae] JVM_InvokeMethod+0xfe >>>>>>>>>> j >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> sun.reflect.NativeMethodAccessorImpl.invoke0(Ljava/lang/reflect/Method;Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;+0 >>>>>>>>>> j >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> sun.reflect.NativeMethodAccessorImpl.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;+100 >>>>>>>>>> j >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;+6 >>>>>>>>>> j >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> java.lang.reflect.Method.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;+56 >>>>>>>>>> j CrashNative.mainJava()V+32 >>>>>>>>>> v ~StubRoutines::call_stub >>>>>>>>>> V [libjvm.so+0x71599f] JavaCalls::call_helper(JavaValue*, >>>>>>>>>> methodHandle*, JavaCallArguments*, Thread*)+0xf8f >>>>>>>>>> V [libjvm.so+0x7384f5] jni_invoke_static(JNIEnv_*, JavaValue*, >>>>>>>>>> _jobject*, JNICallType, _jmethodID*, JNI_ArgumentPusher*, Thread*) >>>>>>>>>> [clone .isra.238] [clone .constprop.250]+0x385 >>>>>>>>>> V [libjvm.so+0x73b3d7] jni_CallStaticVoidMethodV+0xe7 >>>>>>>>>> C [libCrashNative.so+0x9a9] >>>>>>>>>> JNIEnv_::CallStaticVoidMethod(_jclass*, >>>>>>>>>> _jmethodID*, ...)+0xb9 >>>>>>>>>> C [libCrashNative.so+0xa10] step3(JNIEnv_*, _jobject*)+0x65 >>>>>>>>>> C [libCrashNative.so+0xa69] step2(JNIEnv_*, _jobject*)+0x57 >>>>>>>>>> C [libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25 >>>>>>>>>> C [libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25 >>>>>>>>>> C [libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25 >>>>>>>>>> C [libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25 >>>>>>>>>> C [libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25 
>>>>>>>>>> C [libCrashNative.so+0xa8e] step1(JNIEnv_*, _jobject*)+0x23 >>>>>>>>>> C [libCrashNative.so+0x87f] Java_CrashNative_nativeMethod+0x23 >>>>>>>>>> j CrashNative.nativeMethod()V+0 >>>>>>>>>> j CrashNative.main([Ljava/lang/String;)V+9 >>>>>>>>>> v ~StubRoutines::call_stub >>>>>>>>>> V [libjvm.so+0x71599f] JavaCalls::call_helper(JavaValue*, >>>>>>>>>> methodHandle*, JavaCallArguments*, Thread*)+0xf8f >>>>>>>>>> V [libjvm.so+0x7384f5] jni_invoke_static(JNIEnv_*, JavaValue*, >>>>>>>>>> _jobject*, JNICallType, _jmethodID*, JNI_ArgumentPusher*, Thread*) >>>>>>>>>> [clone .isra.238] [clone .constprop.250]+0x385 >>>>>>>>>> V [libjvm.so+0x73b2b0] jni_CallStaticVoidMethod+0x170 >>>>>>>>>> C [libjli.so+0x742a] JavaMain+0x65a >>>>>>>>>> C [libpthread.so.0+0x7e9a] start_thread+0xda >>>>>>>>>> >>>>>>>>> >>>>>> >>>> >> From jesper.wilhelmsson at oracle.com Sat Sep 20 17:16:29 2014 From: jesper.wilhelmsson at oracle.com (Jesper Wilhelmsson) Date: Sat, 20 Sep 2014 19:16:29 +0200 Subject: RFR: Forward port of 8055006 - Store original value of Min/MaxHeapFreeRatio In-Reply-To: <54188242.9060808@oracle.com> References: <54188242.9060808@oracle.com> Message-ID: <541DB66D.8030608@oracle.com> Hi all, I got approvals for the HotSpot changes and they have now been pushed to jdk9/hs-gc. For the JDK makefile change I would prefer if someone that feels comfortable with the JDK makefiles would have a look at it before I push that part. Thanks, /Jesper Jesper Wilhelmsson skrev 16/9/14 20:32: > Hi, > > The fix for JDK-8055006 was reviewed by several engineers and was pushed > directly to 8u40 due to time constraints. This is a forward port to get the same > changes into JDK 9. > > There are two webrevs, one for HotSpot and one for the JDK. > > The 8u40 HotSpot change applied cleanly to 9 so if this was a traditional > backport it wouldn't require another review. But since this is a weird situation > and I'm pushing to 9 I'll ask for reviews just to be on the safe side. 
> Also, the original 8u40 push contained some unnecessary changes that was later > cleaned up by JDK-8056056. In this port to 9 I have merged these two changes > into one to avoid introducing a known issue only to remove it again. > > The JDK change is new. The makefiles differ between 8u40 and 9 and this new > change makes use of functionality not present in 8u40. This patch was provided > by Erik Joelsson and I have reviewed it myself, but it needs two reviews so > another one is welcome. > > Bug: https://bugs.openjdk.java.net/browse/JDK-8055006 > > Webrevs: http://cr.openjdk.java.net/~jwilhelm/8055006/jdk9/ > > > 8u40 Webrevs: http://cr.openjdk.java.net/~jwilhelm/8055006/ > > 8u40 changes: > http://hg.openjdk.java.net/jdk8u/jdk8u-dev/hotspot/rev/f933a15469d4 > http://hg.openjdk.java.net/jdk8u/jdk8u-dev/jdk/rev/312152328471 > > Bug and change for the second 8u40 fix: > https://bugs.openjdk.java.net/browse/JDK-8056056 > http://hg.openjdk.java.net/jdk8u/hs-dev/hotspot/rev/9be4ca335650 > > Thanks! > /Jesper From erik.osterlund at lnu.se Sat Sep 20 21:53:11 2014 From: erik.osterlund at lnu.se (=?iso-8859-1?Q?Erik_=D6sterlund?=) Date: Sat, 20 Sep 2014 21:53:11 +0000 Subject: RFR: 8058255: Native jbyte Atomic::cmpxchg for supported x86 platforms In-Reply-To: <54197E53.9030307@oracle.com> References: <54124F11.8060100@oracle.com> <8E69D7A0-CD8D-4EC6-B708-5F2C0098B183@lnu.se> <54197E53.9030307@oracle.com> Message-ID: I don't want to break the ported builds so I made a special variant of the third option where the x86 inline methods also have a #define next to them. The atomic.inline.hpp file then simply defines an inline Atomic::cmpxchg calling the general solution if there is no #define for an override. That way it's using variant three, but there is no need to write overrides for every platform port (which are sometimes in other repos) there is to simply run the default member function. We can add them one by one instead. :) Hope this seems satisfactory. 
Thanks, Erik On 17 Sep 2014, at 14:28, David Holmes wrote: > On 17/09/2014 9:13 PM, Erik ?sterlund wrote: >> I am back! Did you guys have time to do some thinking? I see three different solutions: >> >> 1. Good old class inheritance! Class Atomic is-a Atomic_YourArchHere is-a AtomicAbstract >> Using the CRTP (Curiously Recurring Template Pattern) for C++, this could be done without a virtual call where we want inlining. > > I would prefer this approach (here and elsewhere) but it is not a short-term option. > >> 2. Similar except with the SFINAE idiom (Substitution Failure Is Not An Error) for C++, to pick the right overload based on statically determined constraints. >> E.g. define if Atomic::has_general_byte_CAS and based on whether this is defined or not, pick the general or specific overload variant of the CAS member function. > > Not sure what this one is but it sounds like a manual virtual dispatch - which seems not a good solution. > >> 3. Simply make the current CAS a normal function which is called from billions of new inline method definitions that we have to create for every single architecture. > > I think the simple version of 3 is just move cmpxchg(jbtye) out of the shared code and define for each platform - there aren't that many and it is consistent with many of the other variants. > >> What do we prefer here? Does anyone else have a better idea? Also, should I start a new thread or is it okay to post it here? > > Continuing this thread is fine by me. > > I think short-term the simple version of 3 is preferable. 
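For comparison, option 1 (CRTP) from the list above can be sketched as follows. The names are invented for illustration, the generic fallback is a non-atomic stand-in, and the GCC/Clang `__sync_val_compare_and_swap` builtin stands in for a real platform byte CAS; the point is only that the dispatch resolves statically, with no virtual call.

```cpp
#include <cstdint>

// Base class parameterized on the concrete platform class; calls resolve
// statically to the most specific cmpxchg_byte, without any vtable.
template <class Platform>
struct AtomicBase {
  // Generic fallback (non-atomic stand-in, for illustration only).
  static int8_t cmpxchg_byte(int8_t exch, volatile int8_t* dest, int8_t cmp) {
    int8_t old = *dest;
    if (old == cmp) *dest = exch;
    return old;
  }
  static int8_t cmpxchg(int8_t exch, volatile int8_t* dest, int8_t cmp) {
    return Platform::cmpxchg_byte(exch, dest, cmp);  // static dispatch
  }
};

// A port with a native byte CAS shadows cmpxchg_byte:
struct AtomicX86 : AtomicBase<AtomicX86> {
  static int8_t cmpxchg_byte(int8_t exch, volatile int8_t* dest, int8_t cmp) {
    return __sync_val_compare_and_swap(dest, cmp, exch);  // GCC/Clang builtin
  }
};

// A port without one simply inherits the generic fallback:
struct AtomicGeneric : AtomicBase<AtomicGeneric> {};
```

Call sites would use `Atomic` as a typedef for the platform's concrete class, so the override is picked at compile time - which is why this approach costs nothing at runtime but touches every port's type declarations.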
> > Thanks, > David From jesper.wilhelmsson at oracle.com Sun Sep 21 10:04:01 2014 From: jesper.wilhelmsson at oracle.com (Jesper Wilhelmsson) Date: Sun, 21 Sep 2014 12:04:01 +0200 Subject: RFR: Forward port of 8055006 - Store original value of Min/MaxHeapFreeRatio In-Reply-To: <541DB66D.8030608@oracle.com> References: <54188242.9060808@oracle.com> <541DB66D.8030608@oracle.com> Message-ID: <541EA291.9070801@oracle.com> For some reason my mail didn't make it to jdk-build-dev. I'm resending it and include build-dev this time, just in case. Sorry for the noise! /Jesper Jesper Wilhelmsson skrev 20/9/14 19:16: > Hi all, > > I got approvals for the HotSpot changes and they have now been pushed to > jdk9/hs-gc. For the JDK makefile change I would prefer if someone that feels > comfortable with the JDK makefiles would have a look at it before I push that part. > > Thanks, > /Jesper > > Jesper Wilhelmsson skrev 16/9/14 20:32: >> Hi, >> >> The fix for JDK-8055006 was reviewed by several engineers and was pushed >> directly to 8u40 due to time constraints. This is a forward port to get the same >> changes into JDK 9. >> >> There are two webrevs, one for HotSpot and one for the JDK. >> >> The 8u40 HotSpot change applied cleanly to 9 so if this was a traditional >> backport it wouldn't require another review. But since this is a weird situation >> and I'm pushing to 9 I'll ask for reviews just to be on the safe side. >> Also, the original 8u40 push contained some unnecessary changes that was later >> cleaned up by JDK-8056056. In this port to 9 I have merged these two changes >> into one to avoid introducing a known issue only to remove it again. >> >> The JDK change is new. The makefiles differ between 8u40 and 9 and this new >> change makes use of functionality not present in 8u40. This patch was provided >> by Erik Joelsson and I have reviewed it myself, but it needs two reviews so >> another one is welcome. 
>> >> Bug: https://bugs.openjdk.java.net/browse/JDK-8055006 >> >> Webrevs: http://cr.openjdk.java.net/~jwilhelm/8055006/jdk9/ >> >> >> 8u40 Webrevs: http://cr.openjdk.java.net/~jwilhelm/8055006/ >> >> 8u40 changes: >> http://hg.openjdk.java.net/jdk8u/jdk8u-dev/hotspot/rev/f933a15469d4 >> http://hg.openjdk.java.net/jdk8u/jdk8u-dev/jdk/rev/312152328471 >> >> Bug and change for the second 8u40 fix: >> https://bugs.openjdk.java.net/browse/JDK-8056056 >> http://hg.openjdk.java.net/jdk8u/hs-dev/hotspot/rev/9be4ca335650 >> >> Thanks! >> /Jesper From volker.simonis at gmail.com Mon Sep 22 09:31:54 2014 From: volker.simonis at gmail.com (Volker Simonis) Date: Mon, 22 Sep 2014 11:31:54 +0200 Subject: RFR(S): 8058345: Refactor native stack printing from vmError.cpp to debug.cpp to make it available in gdb as well In-Reply-To: <541CACA4.80801@oracle.com> References: <5417DE03.6060301@oracle.com> <54187D59.5050602@oracle.com> <5418B53F.7050508@oracle.com> <5419E5C5.9080401@oracle.com> <541CACA4.80801@oracle.com> Message-ID: On Sat, Sep 20, 2014 at 12:22 AM, Vladimir Kozlov wrote: > os_solaris_sparc.cpp > > I think third parameter should be 'false' - originally we passed 0: > > - return frame(NULL, NULL, NULL); > + return frame(NULL, NULL, true); > Sorry, my fault. Fixed. > Please, use one line (even if it is long): > > + tty->print_cr(" pns(void* sp,\n" > + " void* fp,\n" > + " void* pc) - print native (i.e. mixed) stack trace. > E.g."); > Done. > Otherwise look good. Thanks. 
Here's the new webrev: http://cr.openjdk.java.net/~simonis/webrevs/8058345.v3 Regards, Volker > > Thanks, > Vladimir > > > On 9/19/14 11:55 AM, Volker Simonis wrote: >> >> Hi, >> >> so here's my new version: >> >> - documented the "pns" command with examples >> - removed the clumsy "make_frame" generators and introduced a genreic >> frame constructor on all platforms which can now be called from pns() >> - pns() must now be called with three arguments (usually registers >> like pns($sp, $fp, $pc) but some arguments may be '0' on some >> platforms (see the examples in the documentation of pns()) >> - tested on Linux (x86, x64, ppc64) and Solaris (SPARC, x64) >> - added additional "Summary" section to the change which mentions >> that the change also fixes stack traces on x86 to enable walking of >> runtime stubs and native wrappers. >> >> http://cr.openjdk.java.net/~simonis/webrevs/8058345.v2/ >> >> Notice that the current version requires trivial changes in your >> closed ports (i.e. adding the generic frame constructor) but I'd need >> a sponsor anyway:) >> >> Regards, >> Volker >> >> On Wed, Sep 17, 2014 at 9:49 PM, Vladimir Kozlov >> wrote: >>> >>> On 9/17/14 11:29 AM, Volker Simonis wrote: >>>> >>>> >>>> On Wed, Sep 17, 2014 at 12:10 AM, Vladimir Kozlov >>>> wrote: >>>>> >>>>> >>>>> On 9/16/14 12:21 PM, Volker Simonis wrote: >>>>>> >>>>>> >>>>>> >>>>>> Hi Vladimir, >>>>>> >>>>>> thanks for looking at the change. >>>>>> >>>>>> 'make_frame' is only intended to be used from within the debugger to >>>>>> simplify the usage of the new 'pns()' (i.e. "print native stack") >>>>>> helper. It can be used as follows: >>>>>> >>>>>> (gdb) call pns(make_frame($sp, $rbp, $pc)) >>>>> >>>>> >>>>> >>>>> >>>>> It is strange way to use pns(). Why not pass (sp, fp, pc) to pns() and >>>>> let >>>>> it call make_frame()? To have make_frame() only on ppc and x86 will not >>>>> allow to use pns() on other platforms. 
>>>>> >>>>> Would be nice to have pns() version (names different) without input >>>>> parameters. Can we use os::current_frame() inside for that? >>>>> >>>>> Add pns() description to help() output. >>>>> >>>>>> >>>>>> "Executing pns" >>>>>> Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, >>>>>> C=native >>>>>> code) >>>>>> C [libpthread.so.0+0xc0fe] pthread_cond_timedwait+0x13e >>>>>> V [libjvm.so+0x96c4c1] os::sleep(Thread*, long, bool)+0x1a1 >>>>>> V [libjvm.so+0x75f442] JVM_Sleep+0x312 >>>>>> j java.lang.Thread.sleep(J)V+0 >>>>>> j CrashNative.crashIt(Lsun/misc/Unsafe;I)V+10 >>>>>> j CrashNative.doIt()V+45 >>>>>> v ~StubRoutines::call_stub >>>>>> V [libjvm.so+0x71599f] >>>>>> JavaCalls::call_helper(JavaValue*,methodHandle*, JavaCallArguments*, >>>>>> Thread*)+0xf8f >>>>>> >>>>>> What about the two fixesin in 'print_native_stack()' - do you think >>>>>> they >>>>>> are OK? >>>>> >>>>> >>>>> >>>>> >>>>> What about is_runtime_frame()? It is wrapper for runtime calls from >>>>> compiled >>>>> code. >>>>> >>>> >>>> Yes, but I don't see how this could help here, because the native >>>> wrapper which makes problems here is a nmethod and not a runtime stub. >>>> >>>> Maybe you mean to additionally add is_runtime_frame() to the check? >>> >>> >>> >>> Yes, that is what I meant. >>> >>> Thanks, >>> Vladimir >>> >>> >>>> Yes, I've just realized that that's indeed needed on amd64 to walk >>>> runtime stubs. SPARC is more graceful and works without these changes, >>>> but on amd64 we need them (on both Solaris and Linux) and on Sparc >>>> they don't hurt. 
>>>> >>>> I've written a small test program which should be similar to the one >>>> you used for 8035983: >>>> >>>> import java.util.Hashtable; >>>> >>>> public class StackTraceTest { >>>> static Hashtable ht; >>>> static { >>>> ht = new Hashtable(); >>>> ht.put("one", "one"); >>>> } >>>> >>>> public static void foo() { >>>> bar(); >>>> } >>>> >>>> public static void bar() { >>>> ht.get("one"); >>>> } >>>> >>>> public static void main(String args[]) { >>>> for (int i = 0; i < 5; i++) { >>>> new Thread() { >>>> public void run() { >>>> while(true) { >>>> foo(); >>>> } >>>> } >>>> }.start(); >>>> } >>>> } >>>> } >>>> >>>> If I run it with "-XX:-Inline -XX:+PrintCompilation >>>> -XX:-TieredCompilation StackTraceTest" inside the debugger and crash >>>> one of the Java threads in native code, I get the correct stack traces >>>> on SPARC. But on amd64, I only get the following without my changes: >>>> >>>> Stack: [0xfffffd7da16f9000,0xfffffd7da17f9000], >>>> sp=0xfffffd7da17f7c60, free space=1019k >>>> Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, >>>> C=native >>>> code) >>>> C [libc.so.1+0xc207f] _lwp_cond_wait+0x1f >>>> V [libjvm.so+0x171443b] int >>>> os::Solaris::cond_wait(_lwp_cond*,_lwp_mutex*)+0x2b >>>> V [libjvm.so+0x171181a] void os::PlatformEvent::park()+0x1fa >>>> V [libjvm.so+0x16e09c1] void ObjectMonitor::EnterI(Thread*)+0x6f1 >>>> V [libjvm.so+0x16dfc8f] void ObjectMonitor::enter(Thread*)+0x7cf >>>> V [libjvm.so+0x18cdd00] void >>>> ObjectSynchronizer::slow_enter(Handle,BasicLock*,Thread*)+0x2a0 >>>> V [libjvm.so+0x18cd6a7] void >>>> ObjectSynchronizer::fast_enter(Handle,BasicLock*,bool,Thread*)+0x157 >>>> V [libjvm.so+0x182f39e] void >>>> >>>> >>>> SharedRuntime::complete_monitor_locking_C(oopDesc*,BasicLock*,JavaThread*)+0x23e >>>> v ~RuntimeStub::_complete_monitor_locking_Java >>>> C 0x2aad1dd1000016d8 >>>> >>>> With the changes (and the additional check for is_runtime_frame()) I >>>> get full stack traces on amd64 as well. 
So I think the changes should >>>> be at least an improvement:) >>> >>> >>> >>> Good! >>> >>> >>>> >>>>> You need to check what fr.real_fp() returns on all platforms for the >>>>> very >>>>> first frame (_lwp_start). That is what this check about - stop walking >>>>> when >>>>> it reaches the first frame. fr.sender_sp() returns bogus value which is >>>>> not >>>>> stack pointer for the first frame. From 8035983 review: >>>>> >>>>> "It seems using fr.sender_sp() in the check work on x86 and sparc. >>>>> On x86 it return stack_base value on sparc it returns STACK_BIAS." >>>>> >>>>> Also on other our platforms it could return 0 or small integer value. >>>>> >>>>> If you can suggest an other way to determine the first frame, please, >>>>> tell. >>>>> >>>> >>>> So the initial problem in 8035983 was that we used >>>> os::is_first_C_frame(&fr) for native frames where the sender was a >>>> compiled frame. That didn't work reliably because, >>>> os::is_first_C_frame(&fr) uses fr->link() to get the frame pointer of >>>> the sender and that doesn't work for compiled senders. >>>> >>>> So you replaced os::is_first_C_frame(&fr) by >>>> !on_local_stack((address)(fr.sender_sp() + 1)) but that uses addr_at() >>>> internally which in turn uses fp() so it won't work for frames which >>>> have a bogus frame pointer like native wrappers. >>>> >>>> I think using fr.real_fp() should be safe because as far as I can see >>>> it is always fr.sender_sp() - 2 on amd64 and equal to fr.sender_sp() >>>> on SPARC. On Linux/amd64 both, the sp and fp of the first frame will >>>> be 0 (still have to check on SPARC). But the example above works fine >>>> with my changes on both, Linux/amd64 and Solaris/SPARC and >>>> Solaris/amd64. >>>> >>>> I'll prepare a new webrev tomorrow which will have the documentation >>>> for "pns" and a version of make_frame() for SPARC. >>>> >>>> Regards, >>>> Volker >>>> >>>>>> Should I move 'print_native_stack()' to vmError.cpp as suggested by >>>>>> David? 
>>>>> >>>>> >>>>> >>>>> >>>>> I am fine with both places. >>>>> >>>>> Thanks, >>>>> Vladimir >>>>> >>>>> >>>>>> >>>>>> Thank you and best regards, >>>>>> Volker >>>>>> >>>>>> On Tue, Sep 16, 2014 at 8:11 PM, Vladimir Kozlov >>>>>> wrote: >>>>>>> >>>>>>> >>>>>>> >>>>>>> Thank you for fixing frame walk. >>>>>>> I don't see where make_frame() is used. >>>>>>> >>>>>>> Thanks, >>>>>>> Vladimir >>>>>>> >>>>>>> >>>>>>> On 9/16/14 9:35 AM, Volker Simonis wrote: >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> Hi, >>>>>>>> >>>>>>>> while testing my change, I found two other small problems with >>>>>>>> native >>>>>>>> stack traces: >>>>>>>> >>>>>>>> 1. we can not walk native wrappers on (at least not on Linux/amd64) >>>>>>>> because they are treated as native "C" frames. However, if the >>>>>>>> native >>>>>>>> wrapper was called from a compiled frame which had no valid frame >>>>>>>> pointer (i.e. %rbp) os::get_sender_for_C_frame(&fr) will produce a >>>>>>>> bad >>>>>>>> frame. This can be easily fixed by treating native wrappers like >>>>>>>> java >>>>>>>> frames. >>>>>>>> >>>>>>>> 2. the fix for "8035983: Fix "Native frames:" in crash report >>>>>>>> (hs_err >>>>>>>> file)" introduced a similar problem. If we walk tha stack from a >>>>>>>> native wrapper down to a compiled frame, we will have a frame with >>>>>>>> an >>>>>>>> invalid frame pointer. In that case, the newly introduced check from >>>>>>>> change 8035983 will fail, because fr.sender_sp() depends on a valid >>>>>>>> fp. I'll propose to replace fr.sender_sp() by fr.real_fp() which >>>>>>>> should do the same but also works for compiled frames with invalid >>>>>>>> fp. >>>>>>>> >>>>>>>> Here's the new webrev: >>>>>>>> >>>>>>>> http://cr.openjdk.java.net/~simonis/webrevs/8058345.v1/ >>>>>>>> >>>>>>>> What dou you think? 
>>>>>>>> >>>>>>>> Thank you and best regards, >>>>>>>> Volker >>>>>>>> >>>>>>>> >>>>>>>> On Tue, Sep 16, 2014 at 3:48 PM, Volker Simonis >>>>>>>> wrote: >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> 'print_native_stack()' must be visible in both vmError.cpp and >>>>>>>>> debug.cpp. Initially I saw that vmError.cpp already included >>>>>>>>> debug.hpp >>>>>>>>> so I decided to declare it in debug.hpp. But now I realized that >>>>>>>>> also >>>>>>>>> debug.cpp includes vmError.hpp so I could just as well declare >>>>>>>>> 'print_native_stack()' in vmError.hpp and leave the implementation >>>>>>>>> in >>>>>>>>> vmError.cpp. Do you want me to change that? >>>>>>>>> >>>>>>>>> Thank you and best regards, >>>>>>>>> Volker >>>>>>>>> >>>>>>>>> >>>>>>>>> On Tue, Sep 16, 2014 at 8:51 AM, David Holmes >>>>>>>>> >>>>>>>>> wrote: >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> Hi Volker, >>>>>>>>>> >>>>>>>>>> On 13/09/2014 5:15 AM, Volker Simonis wrote: >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> Hi, >>>>>>>>>>> >>>>>>>>>>> could you please review and sponsor the following small change >>>>>>>>>>> which >>>>>>>>>>> should make debugging a little more comfortabel (at least on >>>>>>>>>>> Linux >>>>>>>>>>> for >>>>>>>>>>> now): >>>>>>>>>>> >>>>>>>>>>> http://cr.openjdk.java.net/~simonis/webrevs/8058345/ >>>>>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8058345 >>>>>>>>>>> >>>>>>>>>>> In the hs_err files we have a nice mixed stack trace which >>>>>>>>>>> contains >>>>>>>>>>> both, Java and native frames. >>>>>>>>>>> It would be nice if we could make this functionality available >>>>>>>>>>> from >>>>>>>>>>> within gdb during debugging sessions (until now we can only print >>>>>>>>>>> the >>>>>>>>>>> pure Java stack with the "ps()" helper function from debug.cpp). 
>>>>>>>>>>> >>>>>>>>>>> This new feature can be easily achieved by refactoring the >>>>>>>>>>> corresponding stack printing code from VMError::report() in >>>>>>>>>>> vmError.cpp into its own method in debug.cpp. This change >>>>>>>>>>> extracts >>>>>>>>>>> that code into the new function 'print_native_stack()' in >>>>>>>>>>> debug.cpp >>>>>>>>>>> without changing anything of the functionality. >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> Why does it need to move to debug.cpp to allow this ? >>>>>>>>>> >>>>>>>>>> David >>>>>>>>>> ----- >>>>>>>>>> >>>>>>>>>> >>>>>>>>>>> It also adds some helper functions which make it easy to call the >>>>>>>>>>> new >>>>>>>>>>> 'print_native_stack()' method from within gdb. There's the new >>>>>>>>>>> helper >>>>>>>>>>> function 'pns(frame f)' which takes a frame argument and calls >>>>>>>>>>> 'print_native_stack()'. We need the frame argument because gdb >>>>>>>>>>> inserts >>>>>>>>>>> a dummy frame for every call and we can't easily walk over this >>>>>>>>>>> dummy >>>>>>>>>>> frame from our stack printing routine. >>>>>>>>>>> >>>>>>>>>>> To simplify the creation of the frame object, I've added the >>>>>>>>>>> helper >>>>>>>>>>> functions: >>>>>>>>>>> >>>>>>>>>>> extern "C" frame make_frame(intptr_t* sp, intptr_t* fp, address >>>>>>>>>>> pc) >>>>>>>>>>> { >>>>>>>>>>> return frame(sp, fp, pc); >>>>>>>>>>> } >>>>>>>>>>> >>>>>>>>>>> for x86 (in frame_x86.cpp) and >>>>>>>>>>> >>>>>>>>>>> extern "C" frame make_frame(intptr_t* sp, address pc) { >>>>>>>>>>> return frame(sp, pc); >>>>>>>>>>> } >>>>>>>>>>> >>>>>>>>>>> for ppc64 in frame_ppc.cpp. With these helper functions we can >>>>>>>>>>> now >>>>>>>>>>> easily get a mixed stack trace of a Java thread in gdb (see >>>>>>>>>>> below). 
>>>>>>>>>>> >>>>>>>>>>> All the helper functions are protected by '#ifndef PRODUCT' >>>>>>>>>>> >>>>>>>>>>> Thank you and best regards, >>>>>>>>>>> Volker >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> (gdb) call pns(make_frame($sp, $rbp, $pc)) >>>>>>>>>>> >>>>>>>>>>> "Executing pns" >>>>>>>>>>> Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, >>>>>>>>>>> C=native >>>>>>>>>>> code) >>>>>>>>>>> C [libpthread.so.0+0xc0fe] pthread_cond_timedwait+0x13e >>>>>>>>>>> V [libjvm.so+0x96c4c1] os::sleep(Thread*, long, bool)+0x1a1 >>>>>>>>>>> V [libjvm.so+0x75f442] JVM_Sleep+0x312 >>>>>>>>>>> j java.lang.Thread.sleep(J)V+0 >>>>>>>>>>> j CrashNative.crashIt(Lsun/misc/Unsafe;I)V+10 >>>>>>>>>>> j CrashNative.doIt()V+45 >>>>>>>>>>> v ~StubRoutines::call_stub >>>>>>>>>>> V [libjvm.so+0x71599f] JavaCalls::call_helper(JavaValue*, >>>>>>>>>>> methodHandle*, JavaCallArguments*, Thread*)+0xf8f >>>>>>>>>>> V [libjvm.so+0x9eab75] Reflection::invoke(instanceKlassHandle, >>>>>>>>>>> methodHandle, Handle, bool, objArrayHandle, BasicType, >>>>>>>>>>> objArrayHandle, >>>>>>>>>>> bool, Thread*) [clone .constprop.218]+0xa25 >>>>>>>>>>> V [libjvm.so+0x9eb838] Reflection::invoke_method(oopDesc*, >>>>>>>>>>> Handle, >>>>>>>>>>> objArrayHandle, Thread*)+0x1c8 >>>>>>>>>>> V [libjvm.so+0x7637ae] JVM_InvokeMethod+0xfe >>>>>>>>>>> j >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> sun.reflect.NativeMethodAccessorImpl.invoke0(Ljava/lang/reflect/Method;Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;+0 >>>>>>>>>>> j >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> sun.reflect.NativeMethodAccessorImpl.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;+100 >>>>>>>>>>> j >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;+6 >>>>>>>>>>> j >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> 
java.lang.reflect.Method.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;+56 >>>>>>>>>>> j CrashNative.mainJava()V+32 >>>>>>>>>>> v ~StubRoutines::call_stub >>>>>>>>>>> V [libjvm.so+0x71599f] JavaCalls::call_helper(JavaValue*, >>>>>>>>>>> methodHandle*, JavaCallArguments*, Thread*)+0xf8f >>>>>>>>>>> V [libjvm.so+0x7384f5] jni_invoke_static(JNIEnv_*, JavaValue*, >>>>>>>>>>> _jobject*, JNICallType, _jmethodID*, JNI_ArgumentPusher*, >>>>>>>>>>> Thread*) >>>>>>>>>>> [clone .isra.238] [clone .constprop.250]+0x385 >>>>>>>>>>> V [libjvm.so+0x73b3d7] jni_CallStaticVoidMethodV+0xe7 >>>>>>>>>>> C [libCrashNative.so+0x9a9] >>>>>>>>>>> JNIEnv_::CallStaticVoidMethod(_jclass*, >>>>>>>>>>> _jmethodID*, ...)+0xb9 >>>>>>>>>>> C [libCrashNative.so+0xa10] step3(JNIEnv_*, _jobject*)+0x65 >>>>>>>>>>> C [libCrashNative.so+0xa69] step2(JNIEnv_*, _jobject*)+0x57 >>>>>>>>>>> C [libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25 >>>>>>>>>>> C [libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25 >>>>>>>>>>> C [libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25 >>>>>>>>>>> C [libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25 >>>>>>>>>>> C [libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25 >>>>>>>>>>> C [libCrashNative.so+0xa8e] step1(JNIEnv_*, _jobject*)+0x23 >>>>>>>>>>> C [libCrashNative.so+0x87f] Java_CrashNative_nativeMethod+0x23 >>>>>>>>>>> j CrashNative.nativeMethod()V+0 >>>>>>>>>>> j CrashNative.main([Ljava/lang/String;)V+9 >>>>>>>>>>> v ~StubRoutines::call_stub >>>>>>>>>>> V [libjvm.so+0x71599f] JavaCalls::call_helper(JavaValue*, >>>>>>>>>>> methodHandle*, JavaCallArguments*, Thread*)+0xf8f >>>>>>>>>>> V [libjvm.so+0x7384f5] jni_invoke_static(JNIEnv_*, JavaValue*, >>>>>>>>>>> _jobject*, JNICallType, _jmethodID*, JNI_ArgumentPusher*, >>>>>>>>>>> Thread*) >>>>>>>>>>> [clone .isra.238] [clone .constprop.250]+0x385 >>>>>>>>>>> V [libjvm.so+0x73b2b0] jni_CallStaticVoidMethod+0x170 >>>>>>>>>>> C [libjli.so+0x742a] JavaMain+0x65a >>>>>>>>>>> C 
[libpthread.so.0+0x7e9a] start_thread+0xda >>>>>>>>>>> >>>>>>>>>> >>>>>>> >>>>> >>> > From bengt.rutisson at oracle.com Mon Sep 22 10:52:23 2014 From: bengt.rutisson at oracle.com (Bengt Rutisson) Date: Mon, 22 Sep 2014 12:52:23 +0200 Subject: RFR [8u40] 8056084: Refactor Hashtable to allow implementations without rehashing support In-Reply-To: <15856493.62xT43KoU5@mgerdin03> References: <15856493.62xT43KoU5@mgerdin03> Message-ID: <541FFF67.3020409@oracle.com> Hi Mikael, Looks good. Bengt On 2014-09-17 09:00, Mikael Gerdin wrote: > Hi all, > > I need to backport this change in order to backport 8048268 which we need for > G1 performance in 8u40. > > The patch didn't apply cleanly since StringTable was moved to a separate file > in 9. The StringTable patch hunks applied correctly to the relevant parts of > symbolTable.[ch]pp. > > > Webrev: http://cr.openjdk.java.net/~mgerdin/8056084/8u/webrev/ > > Bug: https://bugs.openjdk.java.net/browse/JDK-8056084 > > Review thread at: > http://mail.openjdk.java.net/pipermail/hotspot-dev/2014-August/015039.html > > /Mikael From vladimir.kozlov at oracle.com Mon Sep 22 16:21:03 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Mon, 22 Sep 2014 09:21:03 -0700 Subject: RFR(S): 8058345: Refactor native stack printing from vmError.cpp to debug.cpp to make it available in gdb as well In-Reply-To: References: <5417DE03.6060301@oracle.com> <54187D59.5050602@oracle.com> <5418B53F.7050508@oracle.com> <5419E5C5.9080401@oracle.com> <541CACA4.80801@oracle.com> Message-ID: <54204C6F.1010002@oracle.com> Looks good. Vladimir On 9/22/14 2:31 AM, Volker Simonis wrote: > On Sat, Sep 20, 2014 at 12:22 AM, Vladimir Kozlov > wrote: >> os_solaris_sparc.cpp >> >> I think third parameter should be 'false' - originally we passed 0: >> >> - return frame(NULL, NULL, NULL); >> + return frame(NULL, NULL, true); >> > > Sorry, my fault. Fixed. 
> >> Please, use one line (even if it is long): >> >> + tty->print_cr(" pns(void* sp,\n" >> + " void* fp,\n" >> + " void* pc) - print native (i.e. mixed) stack trace. >> E.g."); >> > > Done. > >> Otherwise look good. > > Thanks. Here's the new webrev: > > http://cr.openjdk.java.net/~simonis/webrevs/8058345.v3 > > Regards, > Volker > >> >> Thanks, >> Vladimir >> >> >> On 9/19/14 11:55 AM, Volker Simonis wrote: >>> >>> Hi, >>> >>> so here's my new version: >>> >>> - documented the "pns" command with examples >>> - removed the clumsy "make_frame" generators and introduced a genreic >>> frame constructor on all platforms which can now be called from pns() >>> - pns() must now be called with three arguments (usually registers >>> like pns($sp, $fp, $pc) but some arguments may be '0' on some >>> platforms (see the examples in the documentation of pns()) >>> - tested on Linux (x86, x64, ppc64) and Solaris (SPARC, x64) >>> - added additional "Summary" section to the change which mentions >>> that the change also fixes stack traces on x86 to enable walking of >>> runtime stubs and native wrappers. >>> >>> http://cr.openjdk.java.net/~simonis/webrevs/8058345.v2/ >>> >>> Notice that the current version requires trivial changes in your >>> closed ports (i.e. adding the generic frame constructor) but I'd need >>> a sponsor anyway:) >>> >>> Regards, >>> Volker >>> >>> On Wed, Sep 17, 2014 at 9:49 PM, Vladimir Kozlov >>> wrote: >>>> >>>> On 9/17/14 11:29 AM, Volker Simonis wrote: >>>>> >>>>> >>>>> On Wed, Sep 17, 2014 at 12:10 AM, Vladimir Kozlov >>>>> wrote: >>>>>> >>>>>> >>>>>> On 9/16/14 12:21 PM, Volker Simonis wrote: >>>>>>> >>>>>>> >>>>>>> >>>>>>> Hi Vladimir, >>>>>>> >>>>>>> thanks for looking at the change. >>>>>>> >>>>>>> 'make_frame' is only intended to be used from within the debugger to >>>>>>> simplify the usage of the new 'pns()' (i.e. "print native stack") >>>>>>> helper. 
It can be used as follows: >>>>>>> >>>>>>> (gdb) call pns(make_frame($sp, $rbp, $pc)) >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> It is strange way to use pns(). Why not pass (sp, fp, pc) to pns() and >>>>>> let >>>>>> it call make_frame()? To have make_frame() only on ppc and x86 will not >>>>>> allow to use pns() on other platforms. >>>>>> >>>>>> Would be nice to have pns() version (names different) without input >>>>>> parameters. Can we use os::current_frame() inside for that? >>>>>> >>>>>> Add pns() description to help() output. >>>>>> >>>>>>> >>>>>>> "Executing pns" >>>>>>> Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, >>>>>>> C=native >>>>>>> code) >>>>>>> C [libpthread.so.0+0xc0fe] pthread_cond_timedwait+0x13e >>>>>>> V [libjvm.so+0x96c4c1] os::sleep(Thread*, long, bool)+0x1a1 >>>>>>> V [libjvm.so+0x75f442] JVM_Sleep+0x312 >>>>>>> j java.lang.Thread.sleep(J)V+0 >>>>>>> j CrashNative.crashIt(Lsun/misc/Unsafe;I)V+10 >>>>>>> j CrashNative.doIt()V+45 >>>>>>> v ~StubRoutines::call_stub >>>>>>> V [libjvm.so+0x71599f] >>>>>>> JavaCalls::call_helper(JavaValue*,methodHandle*, JavaCallArguments*, >>>>>>> Thread*)+0xf8f >>>>>>> >>>>>>> What about the two fixesin in 'print_native_stack()' - do you think >>>>>>> they >>>>>>> are OK? >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> What about is_runtime_frame()? It is wrapper for runtime calls from >>>>>> compiled >>>>>> code. >>>>>> >>>>> >>>>> Yes, but I don't see how this could help here, because the native >>>>> wrapper which makes problems here is a nmethod and not a runtime stub. >>>>> >>>>> Maybe you mean to additionally add is_runtime_frame() to the check? >>>> >>>> >>>> >>>> Yes, that is what I meant. >>>> >>>> Thanks, >>>> Vladimir >>>> >>>> >>>>> Yes, I've just realized that that's indeed needed on amd64 to walk >>>>> runtime stubs. SPARC is more graceful and works without these changes, >>>>> but on amd64 we need them (on both Solaris and Linux) and on Sparc >>>>> they don't hurt. 
>>>>> >>>>> I've written a small test program which should be similar to the one >>>>> you used for 8035983: >>>>> >>>>> import java.util.Hashtable; >>>>> >>>>> public class StackTraceTest { >>>>> static Hashtable ht; >>>>> static { >>>>> ht = new Hashtable(); >>>>> ht.put("one", "one"); >>>>> } >>>>> >>>>> public static void foo() { >>>>> bar(); >>>>> } >>>>> >>>>> public static void bar() { >>>>> ht.get("one"); >>>>> } >>>>> >>>>> public static void main(String args[]) { >>>>> for (int i = 0; i< 5; i++) { >>>>> new Thread() { >>>>> public void run() { >>>>> while(true) { >>>>> foo(); >>>>> } >>>>> } >>>>> }.start(); >>>>> } >>>>> } >>>>> } >>>>> >>>>> If I run it with "-XX:-Inline -XX:+PrintCompilation >>>>> -XX:-TieredCompilation StackTraceTest" inside the debugger and crash >>>>> one of the Java threads in native code, I get the correct stack traces >>>>> on SPARC. But on amd64, I only get the following without my changes: >>>>> >>>>> Stack: [0xfffffd7da16f9000,0xfffffd7da17f9000], >>>>> sp=0xfffffd7da17f7c60, free space=1019k >>>>> Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, >>>>> C=native >>>>> code) >>>>> C [libc.so.1+0xc207f] _lwp_cond_wait+0x1f >>>>> V [libjvm.so+0x171443b] int >>>>> os::Solaris::cond_wait(_lwp_cond*,_lwp_mutex*)+0x2b >>>>> V [libjvm.so+0x171181a] void os::PlatformEvent::park()+0x1fa >>>>> V [libjvm.so+0x16e09c1] void ObjectMonitor::EnterI(Thread*)+0x6f1 >>>>> V [libjvm.so+0x16dfc8f] void ObjectMonitor::enter(Thread*)+0x7cf >>>>> V [libjvm.so+0x18cdd00] void >>>>> ObjectSynchronizer::slow_enter(Handle,BasicLock*,Thread*)+0x2a0 >>>>> V [libjvm.so+0x18cd6a7] void >>>>> ObjectSynchronizer::fast_enter(Handle,BasicLock*,bool,Thread*)+0x157 >>>>> V [libjvm.so+0x182f39e] void >>>>> >>>>> >>>>> SharedRuntime::complete_monitor_locking_C(oopDesc*,BasicLock*,JavaThread*)+0x23e >>>>> v ~RuntimeStub::_complete_monitor_locking_Java >>>>> C 0x2aad1dd1000016d8 >>>>> >>>>> With the changes (and the additional check for 
is_runtime_frame()) I >>>>> get full stack traces on amd64 as well. So I think the changes should >>>>> be at least an improvement:) >>>> >>>> >>>> >>>> Good! >>>> >>>> >>>>> >>>>>> You need to check what fr.real_fp() returns on all platforms for the >>>>>> very >>>>>> first frame (_lwp_start). That is what this check about - stop walking >>>>>> when >>>>>> it reaches the first frame. fr.sender_sp() returns bogus value which is >>>>>> not >>>>>> stack pointer for the first frame. From 8035983 review: >>>>>> >>>>>> "It seems using fr.sender_sp() in the check work on x86 and sparc. >>>>>> On x86 it return stack_base value on sparc it returns STACK_BIAS." >>>>>> >>>>>> Also on other our platforms it could return 0 or small integer value. >>>>>> >>>>>> If you can suggest an other way to determine the first frame, please, >>>>>> tell. >>>>>> >>>>> >>>>> So the initial problem in 8035983 was that we used >>>>> os::is_first_C_frame(&fr) for native frames where the sender was a >>>>> compiled frame. That didn't work reliably because, >>>>> os::is_first_C_frame(&fr) uses fr->link() to get the frame pointer of >>>>> the sender and that doesn't work for compiled senders. >>>>> >>>>> So you replaced os::is_first_C_frame(&fr) by >>>>> !on_local_stack((address)(fr.sender_sp() + 1)) but that uses addr_at() >>>>> internally which in turn uses fp() so it won't work for frames which >>>>> have a bogus frame pointer like native wrappers. >>>>> >>>>> I think using fr.real_fp() should be safe because as far as I can see >>>>> it is always fr.sender_sp() - 2 on amd64 and equal to fr.sender_sp() >>>>> on SPARC. On Linux/amd64 both, the sp and fp of the first frame will >>>>> be 0 (still have to check on SPARC). But the example above works fine >>>>> with my changes on both, Linux/amd64 and Solaris/SPARC and >>>>> Solaris/amd64. >>>>> >>>>> I'll prepare a new webrev tomorrow which will have the documentation >>>>> for "pns" and a version of make_frame() for SPARC. 
>>>>> >>>>> Regards, >>>>> Volker >>>>> >>>>>>> Should I move 'print_native_stack()' to vmError.cpp as suggested by >>>>>>> David? >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> I am fine with both places. >>>>>> >>>>>> Thanks, >>>>>> Vladimir >>>>>> >>>>>> >>>>>>> >>>>>>> Thank you and best regards, >>>>>>> Volker >>>>>>> >>>>>>> On Tue, Sep 16, 2014 at 8:11 PM, Vladimir Kozlov >>>>>>> wrote: >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> Thank you for fixing frame walk. >>>>>>>> I don't see where make_frame() is used. >>>>>>>> >>>>>>>> Thanks, >>>>>>>> Vladimir >>>>>>>> >>>>>>>> >>>>>>>> On 9/16/14 9:35 AM, Volker Simonis wrote: >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> Hi, >>>>>>>>> >>>>>>>>> while testing my change, I found two other small problems with >>>>>>>>> native >>>>>>>>> stack traces: >>>>>>>>> >>>>>>>>> 1. we can not walk native wrappers on (at least not on Linux/amd64) >>>>>>>>> because they are treated as native "C" frames. However, if the >>>>>>>>> native >>>>>>>>> wrapper was called from a compiled frame which had no valid frame >>>>>>>>> pointer (i.e. %rbp) os::get_sender_for_C_frame(&fr) will produce a >>>>>>>>> bad >>>>>>>>> frame. This can be easily fixed by treating native wrappers like >>>>>>>>> java >>>>>>>>> frames. >>>>>>>>> >>>>>>>>> 2. the fix for "8035983: Fix "Native frames:" in crash report >>>>>>>>> (hs_err >>>>>>>>> file)" introduced a similar problem. If we walk tha stack from a >>>>>>>>> native wrapper down to a compiled frame, we will have a frame with >>>>>>>>> an >>>>>>>>> invalid frame pointer. In that case, the newly introduced check from >>>>>>>>> change 8035983 will fail, because fr.sender_sp() depends on a valid >>>>>>>>> fp. I'll propose to replace fr.sender_sp() by fr.real_fp() which >>>>>>>>> should do the same but also works for compiled frames with invalid >>>>>>>>> fp. >>>>>>>>> >>>>>>>>> Here's the new webrev: >>>>>>>>> >>>>>>>>> http://cr.openjdk.java.net/~simonis/webrevs/8058345.v1/ >>>>>>>>> >>>>>>>>> What dou you think? 
>>>>>>>>> >>>>>>>>> Thank you and best regards, >>>>>>>>> Volker >>>>>>>>> >>>>>>>>> >>>>>>>>> On Tue, Sep 16, 2014 at 3:48 PM, Volker Simonis >>>>>>>>> wrote: >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> 'print_native_stack()' must be visible in both vmError.cpp and >>>>>>>>>> debug.cpp. Initially I saw that vmError.cpp already included >>>>>>>>>> debug.hpp >>>>>>>>>> so I decided to declare it in debug.hpp. But now I realized that >>>>>>>>>> also >>>>>>>>>> debug.cpp includes vmError.hpp so I could just as well declare >>>>>>>>>> 'print_native_stack()' in vmError.hpp and leave the implementation >>>>>>>>>> in >>>>>>>>>> vmError.cpp. Do you want me to change that? >>>>>>>>>> >>>>>>>>>> Thank you and best regards, >>>>>>>>>> Volker >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> On Tue, Sep 16, 2014 at 8:51 AM, David Holmes >>>>>>>>>> >>>>>>>>>> wrote: >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> Hi Volker, >>>>>>>>>>> >>>>>>>>>>> On 13/09/2014 5:15 AM, Volker Simonis wrote: >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> Hi, >>>>>>>>>>>> >>>>>>>>>>>> could you please review and sponsor the following small change >>>>>>>>>>>> which >>>>>>>>>>>> should make debugging a little more comfortabel (at least on >>>>>>>>>>>> Linux >>>>>>>>>>>> for >>>>>>>>>>>> now): >>>>>>>>>>>> >>>>>>>>>>>> http://cr.openjdk.java.net/~simonis/webrevs/8058345/ >>>>>>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8058345 >>>>>>>>>>>> >>>>>>>>>>>> In the hs_err files we have a nice mixed stack trace which >>>>>>>>>>>> contains >>>>>>>>>>>> both, Java and native frames. >>>>>>>>>>>> It would be nice if we could make this functionality available >>>>>>>>>>>> from >>>>>>>>>>>> within gdb during debugging sessions (until now we can only print >>>>>>>>>>>> the >>>>>>>>>>>> pure Java stack with the "ps()" helper function from debug.cpp). 
>>>>>>>>>>>> >>>>>>>>>>>> This new feature can be easily achieved by refactoring the >>>>>>>>>>>> corresponding stack printing code from VMError::report() in >>>>>>>>>>>> vmError.cpp into its own method in debug.cpp. This change >>>>>>>>>>>> extracts >>>>>>>>>>>> that code into the new function 'print_native_stack()' in >>>>>>>>>>>> debug.cpp >>>>>>>>>>>> without changing anything of the functionality. >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> Why does it need to move to debug.cpp to allow this ? >>>>>>>>>>> >>>>>>>>>>> David >>>>>>>>>>> ----- >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>>> It also adds some helper functions which make it easy to call the >>>>>>>>>>>> new >>>>>>>>>>>> 'print_native_stack()' method from within gdb. There's the new >>>>>>>>>>>> helper >>>>>>>>>>>> function 'pns(frame f)' which takes a frame argument and calls >>>>>>>>>>>> 'print_native_stack()'. We need the frame argument because gdb >>>>>>>>>>>> inserts >>>>>>>>>>>> a dummy frame for every call and we can't easily walk over this >>>>>>>>>>>> dummy >>>>>>>>>>>> frame from our stack printing routine. >>>>>>>>>>>> >>>>>>>>>>>> To simplify the creation of the frame object, I've added the >>>>>>>>>>>> helper >>>>>>>>>>>> functions: >>>>>>>>>>>> >>>>>>>>>>>> extern "C" frame make_frame(intptr_t* sp, intptr_t* fp, address >>>>>>>>>>>> pc) >>>>>>>>>>>> { >>>>>>>>>>>> return frame(sp, fp, pc); >>>>>>>>>>>> } >>>>>>>>>>>> >>>>>>>>>>>> for x86 (in frame_x86.cpp) and >>>>>>>>>>>> >>>>>>>>>>>> extern "C" frame make_frame(intptr_t* sp, address pc) { >>>>>>>>>>>> return frame(sp, pc); >>>>>>>>>>>> } >>>>>>>>>>>> >>>>>>>>>>>> for ppc64 in frame_ppc.cpp. With these helper functions we can >>>>>>>>>>>> now >>>>>>>>>>>> easily get a mixed stack trace of a Java thread in gdb (see >>>>>>>>>>>> below). 
>>>>>>>>>>>> >>>>>>>>>>>> All the helper functions are protected by '#ifndef PRODUCT' >>>>>>>>>>>> >>>>>>>>>>>> Thank you and best regards, >>>>>>>>>>>> Volker >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> (gdb) call pns(make_frame($sp, $rbp, $pc)) >>>>>>>>>>>> >>>>>>>>>>>> "Executing pns" >>>>>>>>>>>> Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, >>>>>>>>>>>> C=native >>>>>>>>>>>> code) >>>>>>>>>>>> C [libpthread.so.0+0xc0fe] pthread_cond_timedwait+0x13e >>>>>>>>>>>> V [libjvm.so+0x96c4c1] os::sleep(Thread*, long, bool)+0x1a1 >>>>>>>>>>>> V [libjvm.so+0x75f442] JVM_Sleep+0x312 >>>>>>>>>>>> j java.lang.Thread.sleep(J)V+0 >>>>>>>>>>>> j CrashNative.crashIt(Lsun/misc/Unsafe;I)V+10 >>>>>>>>>>>> j CrashNative.doIt()V+45 >>>>>>>>>>>> v ~StubRoutines::call_stub >>>>>>>>>>>> V [libjvm.so+0x71599f] JavaCalls::call_helper(JavaValue*, >>>>>>>>>>>> methodHandle*, JavaCallArguments*, Thread*)+0xf8f >>>>>>>>>>>> V [libjvm.so+0x9eab75] Reflection::invoke(instanceKlassHandle, >>>>>>>>>>>> methodHandle, Handle, bool, objArrayHandle, BasicType, >>>>>>>>>>>> objArrayHandle, >>>>>>>>>>>> bool, Thread*) [clone .constprop.218]+0xa25 >>>>>>>>>>>> V [libjvm.so+0x9eb838] Reflection::invoke_method(oopDesc*, >>>>>>>>>>>> Handle, >>>>>>>>>>>> objArrayHandle, Thread*)+0x1c8 >>>>>>>>>>>> V [libjvm.so+0x7637ae] JVM_InvokeMethod+0xfe >>>>>>>>>>>> j >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> sun.reflect.NativeMethodAccessorImpl.invoke0(Ljava/lang/reflect/Method;Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;+0 >>>>>>>>>>>> j >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> sun.reflect.NativeMethodAccessorImpl.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;+100 >>>>>>>>>>>> j >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;+6 >>>>>>>>>>>> j >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> 
java.lang.reflect.Method.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;+56 >>>>>>>>>>>> j CrashNative.mainJava()V+32 >>>>>>>>>>>> v ~StubRoutines::call_stub >>>>>>>>>>>> V [libjvm.so+0x71599f] JavaCalls::call_helper(JavaValue*, >>>>>>>>>>>> methodHandle*, JavaCallArguments*, Thread*)+0xf8f >>>>>>>>>>>> V [libjvm.so+0x7384f5] jni_invoke_static(JNIEnv_*, JavaValue*, >>>>>>>>>>>> _jobject*, JNICallType, _jmethodID*, JNI_ArgumentPusher*, >>>>>>>>>>>> Thread*) >>>>>>>>>>>> [clone .isra.238] [clone .constprop.250]+0x385 >>>>>>>>>>>> V [libjvm.so+0x73b3d7] jni_CallStaticVoidMethodV+0xe7 >>>>>>>>>>>> C [libCrashNative.so+0x9a9] >>>>>>>>>>>> JNIEnv_::CallStaticVoidMethod(_jclass*, >>>>>>>>>>>> _jmethodID*, ...)+0xb9 >>>>>>>>>>>> C [libCrashNative.so+0xa10] step3(JNIEnv_*, _jobject*)+0x65 >>>>>>>>>>>> C [libCrashNative.so+0xa69] step2(JNIEnv_*, _jobject*)+0x57 >>>>>>>>>>>> C [libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25 >>>>>>>>>>>> C [libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25 >>>>>>>>>>>> C [libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25 >>>>>>>>>>>> C [libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25 >>>>>>>>>>>> C [libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25 >>>>>>>>>>>> C [libCrashNative.so+0xa8e] step1(JNIEnv_*, _jobject*)+0x23 >>>>>>>>>>>> C [libCrashNative.so+0x87f] Java_CrashNative_nativeMethod+0x23 >>>>>>>>>>>> j CrashNative.nativeMethod()V+0 >>>>>>>>>>>> j CrashNative.main([Ljava/lang/String;)V+9 >>>>>>>>>>>> v ~StubRoutines::call_stub >>>>>>>>>>>> V [libjvm.so+0x71599f] JavaCalls::call_helper(JavaValue*, >>>>>>>>>>>> methodHandle*, JavaCallArguments*, Thread*)+0xf8f >>>>>>>>>>>> V [libjvm.so+0x7384f5] jni_invoke_static(JNIEnv_*, JavaValue*, >>>>>>>>>>>> _jobject*, JNICallType, _jmethodID*, JNI_ArgumentPusher*, >>>>>>>>>>>> Thread*) >>>>>>>>>>>> [clone .isra.238] [clone .constprop.250]+0x385 >>>>>>>>>>>> V [libjvm.so+0x73b2b0] jni_CallStaticVoidMethod+0x170 >>>>>>>>>>>> C [libjli.so+0x742a] 
JavaMain+0x65a >>>>>>>>>>>> C [libpthread.so.0+0x7e9a] start_thread+0xda >>>>>>>>>>>> >>>>>>>>>>> >>>>>>>> >>>>>> >>>> >> From volker.simonis at gmail.com Mon Sep 22 18:06:50 2014 From: volker.simonis at gmail.com (Volker Simonis) Date: Mon, 22 Sep 2014 20:06:50 +0200 Subject: RFR(S): 8058345: Refactor native stack printing from vmError.cpp to debug.cpp to make it available in gdb as well In-Reply-To: <54204C6F.1010002@oracle.com> References: <5417DE03.6060301@oracle.com> <54187D59.5050602@oracle.com> <5418B53F.7050508@oracle.com> <5419E5C5.9080401@oracle.com> <541CACA4.80801@oracle.com> <54204C6F.1010002@oracle.com> Message-ID: Thanks a lot Vladimir! I still need a sponsor to make the closed changes (i.e. add at least +#ifndef PRODUCT +// This is a generic constructor which is only used by pns() in debug.cpp. +frame::frame(void* sp, void* fp, void* pc) { + Unimplemented(); +} +#endif to every src/cpu/XXX/vm/frame_XXX.cpp) and push the change. Anybody volunteers? Thanks a lot and best regards, Volker On Mon, Sep 22, 2014 at 6:21 PM, Vladimir Kozlov wrote: > Looks good. > > Vladimir > > > On 9/22/14 2:31 AM, Volker Simonis wrote: >> >> On Sat, Sep 20, 2014 at 12:22 AM, Vladimir Kozlov >> wrote: >>> >>> os_solaris_sparc.cpp >>> >>> I think third parameter should be 'false' - originally we passed 0: >>> >>> - return frame(NULL, NULL, NULL); >>> + return frame(NULL, NULL, true); >>> >> >> Sorry, my fault. Fixed. >> >>> Please, use one line (even if it is long): >>> >>> + tty->print_cr(" pns(void* sp,\n" >>> + " void* fp,\n" >>> + " void* pc) - print native (i.e. mixed) stack >>> trace. >>> E.g."); >>> >> >> Done. >> >>> Otherwise look good. >> >> >> Thanks. 
Here's the new webrev: >> >> http://cr.openjdk.java.net/~simonis/webrevs/8058345.v3 >> >> Regards, >> Volker >> >>> >>> Thanks, >>> Vladimir >>> >>> >>> On 9/19/14 11:55 AM, Volker Simonis wrote: >>>> >>>> >>>> Hi, >>>> >>>> so here's my new version: >>>> >>>> - documented the "pns" command with examples >>>> - removed the clumsy "make_frame" generators and introduced a genreic >>>> frame constructor on all platforms which can now be called from pns() >>>> - pns() must now be called with three arguments (usually registers >>>> like pns($sp, $fp, $pc) but some arguments may be '0' on some >>>> platforms (see the examples in the documentation of pns()) >>>> - tested on Linux (x86, x64, ppc64) and Solaris (SPARC, x64) >>>> - added additional "Summary" section to the change which mentions >>>> that the change also fixes stack traces on x86 to enable walking of >>>> runtime stubs and native wrappers. >>>> >>>> http://cr.openjdk.java.net/~simonis/webrevs/8058345.v2/ >>>> >>>> Notice that the current version requires trivial changes in your >>>> closed ports (i.e. adding the generic frame constructor) but I'd need >>>> a sponsor anyway:) >>>> >>>> Regards, >>>> Volker >>>> >>>> On Wed, Sep 17, 2014 at 9:49 PM, Vladimir Kozlov >>>> wrote: >>>>> >>>>> >>>>> On 9/17/14 11:29 AM, Volker Simonis wrote: >>>>>> >>>>>> >>>>>> >>>>>> On Wed, Sep 17, 2014 at 12:10 AM, Vladimir Kozlov >>>>>> wrote: >>>>>>> >>>>>>> >>>>>>> >>>>>>> On 9/16/14 12:21 PM, Volker Simonis wrote: >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> Hi Vladimir, >>>>>>>> >>>>>>>> thanks for looking at the change. >>>>>>>> >>>>>>>> 'make_frame' is only intended to be used from within the debugger to >>>>>>>> simplify the usage of the new 'pns()' (i.e. "print native stack") >>>>>>>> helper. It can be used as follows: >>>>>>>> >>>>>>>> (gdb) call pns(make_frame($sp, $rbp, $pc)) >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> It is strange way to use pns(). 
Why not pass (sp, fp, pc) to pns() >>>>>>> and >>>>>>> let >>>>>>> it call make_frame()? To have make_frame() only on ppc and x86 will >>>>>>> not >>>>>>> allow to use pns() on other platforms. >>>>>>> >>>>>>> Would be nice to have pns() version (names different) without input >>>>>>> parameters. Can we use os::current_frame() inside for that? >>>>>>> >>>>>>> Add pns() description to help() output. >>>>>>> >>>>>>>> >>>>>>>> "Executing pns" >>>>>>>> Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, >>>>>>>> C=native >>>>>>>> code) >>>>>>>> C [libpthread.so.0+0xc0fe] pthread_cond_timedwait+0x13e >>>>>>>> V [libjvm.so+0x96c4c1] os::sleep(Thread*, long, bool)+0x1a1 >>>>>>>> V [libjvm.so+0x75f442] JVM_Sleep+0x312 >>>>>>>> j java.lang.Thread.sleep(J)V+0 >>>>>>>> j CrashNative.crashIt(Lsun/misc/Unsafe;I)V+10 >>>>>>>> j CrashNative.doIt()V+45 >>>>>>>> v ~StubRoutines::call_stub >>>>>>>> V [libjvm.so+0x71599f] >>>>>>>> JavaCalls::call_helper(JavaValue*,methodHandle*, JavaCallArguments*, >>>>>>>> Thread*)+0xf8f >>>>>>>> >>>>>>>> What about the two fixesin in 'print_native_stack()' - do you think >>>>>>>> they >>>>>>>> are OK? >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> What about is_runtime_frame()? It is wrapper for runtime calls from >>>>>>> compiled >>>>>>> code. >>>>>>> >>>>>> >>>>>> Yes, but I don't see how this could help here, because the native >>>>>> wrapper which makes problems here is a nmethod and not a runtime stub. >>>>>> >>>>>> Maybe you mean to additionally add is_runtime_frame() to the check? >>>>> >>>>> >>>>> >>>>> >>>>> Yes, that is what I meant. >>>>> >>>>> Thanks, >>>>> Vladimir >>>>> >>>>> >>>>>> Yes, I've just realized that that's indeed needed on amd64 to walk >>>>>> runtime stubs. SPARC is more graceful and works without these changes, >>>>>> but on amd64 we need them (on both Solaris and Linux) and on Sparc >>>>>> they don't hurt. 
>>>>>> >>>>>> I've written a small test program which should be similar to the one >>>>>> you used for 8035983: >>>>>> >>>>>> import java.util.Hashtable; >>>>>> >>>>>> public class StackTraceTest { >>>>>> static Hashtable ht; >>>>>> static { >>>>>> ht = new Hashtable(); >>>>>> ht.put("one", "one"); >>>>>> } >>>>>> >>>>>> public static void foo() { >>>>>> bar(); >>>>>> } >>>>>> >>>>>> public static void bar() { >>>>>> ht.get("one"); >>>>>> } >>>>>> >>>>>> public static void main(String args[]) { >>>>>> for (int i = 0; i< 5; i++) { >>>>>> new Thread() { >>>>>> public void run() { >>>>>> while(true) { >>>>>> foo(); >>>>>> } >>>>>> } >>>>>> }.start(); >>>>>> } >>>>>> } >>>>>> } >>>>>> >>>>>> If I run it with "-XX:-Inline -XX:+PrintCompilation >>>>>> -XX:-TieredCompilation StackTraceTest" inside the debugger and crash >>>>>> one of the Java threads in native code, I get the correct stack traces >>>>>> on SPARC. But on amd64, I only get the following without my changes: >>>>>> >>>>>> Stack: [0xfffffd7da16f9000,0xfffffd7da17f9000], >>>>>> sp=0xfffffd7da17f7c60, free space=1019k >>>>>> Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, >>>>>> C=native >>>>>> code) >>>>>> C [libc.so.1+0xc207f] _lwp_cond_wait+0x1f >>>>>> V [libjvm.so+0x171443b] int >>>>>> os::Solaris::cond_wait(_lwp_cond*,_lwp_mutex*)+0x2b >>>>>> V [libjvm.so+0x171181a] void os::PlatformEvent::park()+0x1fa >>>>>> V [libjvm.so+0x16e09c1] void ObjectMonitor::EnterI(Thread*)+0x6f1 >>>>>> V [libjvm.so+0x16dfc8f] void ObjectMonitor::enter(Thread*)+0x7cf >>>>>> V [libjvm.so+0x18cdd00] void >>>>>> ObjectSynchronizer::slow_enter(Handle,BasicLock*,Thread*)+0x2a0 >>>>>> V [libjvm.so+0x18cd6a7] void >>>>>> ObjectSynchronizer::fast_enter(Handle,BasicLock*,bool,Thread*)+0x157 >>>>>> V [libjvm.so+0x182f39e] void >>>>>> >>>>>> >>>>>> >>>>>> SharedRuntime::complete_monitor_locking_C(oopDesc*,BasicLock*,JavaThread*)+0x23e >>>>>> v ~RuntimeStub::_complete_monitor_locking_Java >>>>>> C 0x2aad1dd1000016d8 >>>>>> 
>>>>>> With the changes (and the additional check for is_runtime_frame()) I >>>>>> get full stack traces on amd64 as well. So I think the changes should >>>>>> be at least an improvement:) >>>>> >>>>> >>>>> >>>>> >>>>> Good! >>>>> >>>>> >>>>>> >>>>>>> You need to check what fr.real_fp() returns on all platforms for the >>>>>>> very >>>>>>> first frame (_lwp_start). That is what this check about - stop >>>>>>> walking >>>>>>> when >>>>>>> it reaches the first frame. fr.sender_sp() returns bogus value which >>>>>>> is >>>>>>> not >>>>>>> stack pointer for the first frame. From 8035983 review: >>>>>>> >>>>>>> "It seems using fr.sender_sp() in the check work on x86 and sparc. >>>>>>> On x86 it return stack_base value on sparc it returns STACK_BIAS." >>>>>>> >>>>>>> Also on other our platforms it could return 0 or small integer value. >>>>>>> >>>>>>> If you can suggest an other way to determine the first frame, please, >>>>>>> tell. >>>>>>> >>>>>> >>>>>> So the initial problem in 8035983 was that we used >>>>>> os::is_first_C_frame(&fr) for native frames where the sender was a >>>>>> compiled frame. That didn't work reliably because, >>>>>> os::is_first_C_frame(&fr) uses fr->link() to get the frame pointer of >>>>>> the sender and that doesn't work for compiled senders. >>>>>> >>>>>> So you replaced os::is_first_C_frame(&fr) by >>>>>> !on_local_stack((address)(fr.sender_sp() + 1)) but that uses addr_at() >>>>>> internally which in turn uses fp() so it won't work for frames which >>>>>> have a bogus frame pointer like native wrappers. >>>>>> >>>>>> I think using fr.real_fp() should be safe because as far as I can see >>>>>> it is always fr.sender_sp() - 2 on amd64 and equal to fr.sender_sp() >>>>>> on SPARC. On Linux/amd64 both, the sp and fp of the first frame will >>>>>> be 0 (still have to check on SPARC). But the example above works fine >>>>>> with my changes on both, Linux/amd64 and Solaris/SPARC and >>>>>> Solaris/amd64. 
>>>>>> >>>>>> I'll prepare a new webrev tomorrow which will have the documentation >>>>>> for "pns" and a version of make_frame() for SPARC. >>>>>> >>>>>> Regards, >>>>>> Volker >>>>>> >>>>>>>> Should I move 'print_native_stack()' to vmError.cpp as suggested by >>>>>>>> David? >>>>>>> >>>>>>> I am fine with both places. >>>>>>> >>>>>>> Thanks, >>>>>>> Vladimir >>>>>>> >>>>>>>> Thank you and best regards, >>>>>>>> Volker >>>>>>>> >>>>>>>> On Tue, Sep 16, 2014 at 8:11 PM, Vladimir Kozlov >>>>>>>> wrote: >>>>>>>>> >>>>>>>>> Thank you for fixing the frame walk. >>>>>>>>> I don't see where make_frame() is used. >>>>>>>>> >>>>>>>>> Thanks, >>>>>>>>> Vladimir >>>>>>>>> >>>>>>>>> On 9/16/14 9:35 AM, Volker Simonis wrote: >>>>>>>>>> >>>>>>>>>> Hi, >>>>>>>>>> >>>>>>>>>> while testing my change, I found two other small problems with >>>>>>>>>> native stack traces: >>>>>>>>>> >>>>>>>>>> 1. we cannot walk native wrappers (at least not on Linux/amd64) >>>>>>>>>> because they are treated as native "C" frames. However, if the native >>>>>>>>>> wrapper was called from a compiled frame which had no valid frame >>>>>>>>>> pointer (i.e. %rbp), os::get_sender_for_C_frame(&fr) will produce a bad >>>>>>>>>> frame. This can be easily fixed by treating native wrappers like Java >>>>>>>>>> frames. >>>>>>>>>> >>>>>>>>>> 2. the fix for "8035983: Fix "Native frames:" in crash report (hs_err >>>>>>>>>> file)" introduced a similar problem. If we walk the stack from a >>>>>>>>>> native wrapper down to a compiled frame, we will have a frame with an >>>>>>>>>> invalid frame pointer. In that case, the newly introduced check from >>>>>>>>>> change 8035983 will fail, because fr.sender_sp() depends on a valid >>>>>>>>>> fp.
I propose to replace fr.sender_sp() by fr.real_fp(), which >>>>>>>>>> should do the same but also work for compiled frames with an invalid >>>>>>>>>> fp. >>>>>>>>>> >>>>>>>>>> Here's the new webrev: >>>>>>>>>> >>>>>>>>>> http://cr.openjdk.java.net/~simonis/webrevs/8058345.v1/ >>>>>>>>>> >>>>>>>>>> What do you think? >>>>>>>>>> >>>>>>>>>> Thank you and best regards, >>>>>>>>>> Volker >>>>>>>>>> >>>>>>>>>> On Tue, Sep 16, 2014 at 3:48 PM, Volker Simonis >>>>>>>>>> wrote: >>>>>>>>>>> >>>>>>>>>>> 'print_native_stack()' must be visible in both vmError.cpp and >>>>>>>>>>> debug.cpp. Initially I saw that vmError.cpp already included debug.hpp, >>>>>>>>>>> so I decided to declare it in debug.hpp. But now I realized that >>>>>>>>>>> debug.cpp also includes vmError.hpp, so I could just as well declare >>>>>>>>>>> 'print_native_stack()' in vmError.hpp and leave the implementation in >>>>>>>>>>> vmError.cpp. Do you want me to change that?
>>>>>>>>>>>> >>>>>>>>>>> Thank you and best regards, >>>>>>>>>>> Volker >>>>>>>>>>> >>>>>>>>>>> On Tue, Sep 16, 2014 at 8:51 AM, David Holmes >>>>>>>>>>> wrote: >>>>>>>>>>>> >>>>>>>>>>>> Hi Volker, >>>>>>>>>>>> >>>>>>>>>>>> On 13/09/2014 5:15 AM, Volker Simonis wrote: >>>>>>>>>>>>> >>>>>>>>>>>>> Hi, >>>>>>>>>>>>> >>>>>>>>>>>>> could you please review and sponsor the following small change, >>>>>>>>>>>>> which should make debugging a little more comfortable (at least on >>>>>>>>>>>>> Linux for now): >>>>>>>>>>>>> >>>>>>>>>>>>> http://cr.openjdk.java.net/~simonis/webrevs/8058345/ >>>>>>>>>>>>> https://bugs.openjdk.java.net/browse/JDK-8058345 >>>>>>>>>>>>> >>>>>>>>>>>>> In the hs_err files we have a nice mixed stack trace which contains >>>>>>>>>>>>> both Java and native frames. >>>>>>>>>>>>> It would be nice if we could make this functionality available from >>>>>>>>>>>>> within gdb during debugging sessions (until now we can only print the >>>>>>>>>>>>> pure Java stack with the "ps()" helper function from debug.cpp). >>>>>>>>>>>>> >>>>>>>>>>>>> This new feature can be easily achieved by refactoring the >>>>>>>>>>>>> corresponding stack printing code from VMError::report() in >>>>>>>>>>>>> vmError.cpp into its own method in debug.cpp. This change extracts >>>>>>>>>>>>> that code into the new function 'print_native_stack()' in debug.cpp >>>>>>>>>>>>> without changing any of the functionality. >>>>>>>>>>>> >>>>>>>>>>>> Why does it need to move to debug.cpp to allow this?
>>>>>>>>>>>> >>>>>>>>>>>> David >>>>>>>>>>>> ----- >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>>> It also adds some helper functions which make it easy to call >>>>>>>>>>>>> the >>>>>>>>>>>>> new >>>>>>>>>>>>> 'print_native_stack()' method from within gdb. There's the new >>>>>>>>>>>>> helper >>>>>>>>>>>>> function 'pns(frame f)' which takes a frame argument and calls >>>>>>>>>>>>> 'print_native_stack()'. We need the frame argument because gdb >>>>>>>>>>>>> inserts >>>>>>>>>>>>> a dummy frame for every call and we can't easily walk over this >>>>>>>>>>>>> dummy >>>>>>>>>>>>> frame from our stack printing routine. >>>>>>>>>>>>> >>>>>>>>>>>>> To simplify the creation of the frame object, I've added the >>>>>>>>>>>>> helper >>>>>>>>>>>>> functions: >>>>>>>>>>>>> >>>>>>>>>>>>> extern "C" frame make_frame(intptr_t* sp, intptr_t* fp, address >>>>>>>>>>>>> pc) >>>>>>>>>>>>> { >>>>>>>>>>>>> return frame(sp, fp, pc); >>>>>>>>>>>>> } >>>>>>>>>>>>> >>>>>>>>>>>>> for x86 (in frame_x86.cpp) and >>>>>>>>>>>>> >>>>>>>>>>>>> extern "C" frame make_frame(intptr_t* sp, address pc) { >>>>>>>>>>>>> return frame(sp, pc); >>>>>>>>>>>>> } >>>>>>>>>>>>> >>>>>>>>>>>>> for ppc64 in frame_ppc.cpp. With these helper functions we can >>>>>>>>>>>>> now >>>>>>>>>>>>> easily get a mixed stack trace of a Java thread in gdb (see >>>>>>>>>>>>> below). 
>>>>>>>>>>>>> >>>>>>>>>>>>> All the helper functions are protected by '#ifndef PRODUCT' >>>>>>>>>>>>> >>>>>>>>>>>>> Thank you and best regards, >>>>>>>>>>>>> Volker >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> (gdb) call pns(make_frame($sp, $rbp, $pc)) >>>>>>>>>>>>> >>>>>>>>>>>>> "Executing pns" >>>>>>>>>>>>> Native frames: (J=compiled Java code, j=interpreted, Vv=VM >>>>>>>>>>>>> code, >>>>>>>>>>>>> C=native >>>>>>>>>>>>> code) >>>>>>>>>>>>> C [libpthread.so.0+0xc0fe] pthread_cond_timedwait+0x13e >>>>>>>>>>>>> V [libjvm.so+0x96c4c1] os::sleep(Thread*, long, bool)+0x1a1 >>>>>>>>>>>>> V [libjvm.so+0x75f442] JVM_Sleep+0x312 >>>>>>>>>>>>> j java.lang.Thread.sleep(J)V+0 >>>>>>>>>>>>> j CrashNative.crashIt(Lsun/misc/Unsafe;I)V+10 >>>>>>>>>>>>> j CrashNative.doIt()V+45 >>>>>>>>>>>>> v ~StubRoutines::call_stub >>>>>>>>>>>>> V [libjvm.so+0x71599f] JavaCalls::call_helper(JavaValue*, >>>>>>>>>>>>> methodHandle*, JavaCallArguments*, Thread*)+0xf8f >>>>>>>>>>>>> V [libjvm.so+0x9eab75] >>>>>>>>>>>>> Reflection::invoke(instanceKlassHandle, >>>>>>>>>>>>> methodHandle, Handle, bool, objArrayHandle, BasicType, >>>>>>>>>>>>> objArrayHandle, >>>>>>>>>>>>> bool, Thread*) [clone .constprop.218]+0xa25 >>>>>>>>>>>>> V [libjvm.so+0x9eb838] Reflection::invoke_method(oopDesc*, >>>>>>>>>>>>> Handle, >>>>>>>>>>>>> objArrayHandle, Thread*)+0x1c8 >>>>>>>>>>>>> V [libjvm.so+0x7637ae] JVM_InvokeMethod+0xfe >>>>>>>>>>>>> j >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> sun.reflect.NativeMethodAccessorImpl.invoke0(Ljava/lang/reflect/Method;Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;+0 >>>>>>>>>>>>> j >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> sun.reflect.NativeMethodAccessorImpl.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;+100 >>>>>>>>>>>>> j >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> 
sun.reflect.DelegatingMethodAccessorImpl.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;+6 >>>>>>>>>>>>> j >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> java.lang.reflect.Method.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;+56 >>>>>>>>>>>>> j CrashNative.mainJava()V+32 >>>>>>>>>>>>> v ~StubRoutines::call_stub >>>>>>>>>>>>> V [libjvm.so+0x71599f] JavaCalls::call_helper(JavaValue*, >>>>>>>>>>>>> methodHandle*, JavaCallArguments*, Thread*)+0xf8f >>>>>>>>>>>>> V [libjvm.so+0x7384f5] jni_invoke_static(JNIEnv_*, >>>>>>>>>>>>> JavaValue*, >>>>>>>>>>>>> _jobject*, JNICallType, _jmethodID*, JNI_ArgumentPusher*, >>>>>>>>>>>>> Thread*) >>>>>>>>>>>>> [clone .isra.238] [clone .constprop.250]+0x385 >>>>>>>>>>>>> V [libjvm.so+0x73b3d7] jni_CallStaticVoidMethodV+0xe7 >>>>>>>>>>>>> C [libCrashNative.so+0x9a9] >>>>>>>>>>>>> JNIEnv_::CallStaticVoidMethod(_jclass*, >>>>>>>>>>>>> _jmethodID*, ...)+0xb9 >>>>>>>>>>>>> C [libCrashNative.so+0xa10] step3(JNIEnv_*, _jobject*)+0x65 >>>>>>>>>>>>> C [libCrashNative.so+0xa69] step2(JNIEnv_*, _jobject*)+0x57 >>>>>>>>>>>>> C [libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25 >>>>>>>>>>>>> C [libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25 >>>>>>>>>>>>> C [libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25 >>>>>>>>>>>>> C [libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25 >>>>>>>>>>>>> C [libCrashNative.so+0xa37] step2(JNIEnv_*, _jobject*)+0x25 >>>>>>>>>>>>> C [libCrashNative.so+0xa8e] step1(JNIEnv_*, _jobject*)+0x23 >>>>>>>>>>>>> C [libCrashNative.so+0x87f] >>>>>>>>>>>>> Java_CrashNative_nativeMethod+0x23 >>>>>>>>>>>>> j CrashNative.nativeMethod()V+0 >>>>>>>>>>>>> j CrashNative.main([Ljava/lang/String;)V+9 >>>>>>>>>>>>> v ~StubRoutines::call_stub >>>>>>>>>>>>> V [libjvm.so+0x71599f] JavaCalls::call_helper(JavaValue*, >>>>>>>>>>>>> methodHandle*, JavaCallArguments*, Thread*)+0xf8f >>>>>>>>>>>>> V [libjvm.so+0x7384f5] jni_invoke_static(JNIEnv_*, 
>>>>>>>>>>>>> JavaValue*, >>>>>>>>>>>>> _jobject*, JNICallType, _jmethodID*, JNI_ArgumentPusher*, >>>>>>>>>>>>> Thread*) >>>>>>>>>>>>> [clone .isra.238] [clone .constprop.250]+0x385 >>>>>>>>>>>>> V [libjvm.so+0x73b2b0] jni_CallStaticVoidMethod+0x170 >>>>>>>>>>>>> C [libjli.so+0x742a] JavaMain+0x65a >>>>>>>>>>>>> C [libpthread.so.0+0x7e9a] start_thread+0xda >>>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>> >>>>>>> >>>>> >>> > From magnus.ihse.bursie at oracle.com Tue Sep 23 07:13:38 2014 From: magnus.ihse.bursie at oracle.com (Magnus Ihse Bursie) Date: Tue, 23 Sep 2014 09:13:38 +0200 Subject: RFR: Forward port of 8055006 - Store original value of Min/MaxHeapFreeRatio In-Reply-To: <541DB66D.8030608@oracle.com> References: <54188242.9060808@oracle.com> <541DB66D.8030608@oracle.com> Message-ID: <54211DA2.7030406@oracle.com> On 2014-09-20 19:16, Jesper Wilhelmsson wrote: > Hi all, > > I got approvals for the HotSpot changes and they have now been pushed > to jdk9/hs-gc. For the JDK makefile change I would prefer if someone > that feels comfortable with the JDK makefiles would have a look at it > before I push that part. The jdk makefile changes look good to me. /Magnus > > Thanks, > /Jesper > > Jesper Wilhelmsson skrev 16/9/14 20:32: >> Hi, >> >> The fix for JDK-8055006 was reviewed by several engineers and was pushed >> directly to 8u40 due to time constraints. This is a forward port to >> get the same >> changes into JDK 9. >> >> There are two webrevs, one for HotSpot and one for the JDK. >> >> The 8u40 HotSpot change applied cleanly to 9, so if this was a >> traditional >> backport it wouldn't require another review. But since this is a >> weird situation >> and I'm pushing to 9 I'll ask for reviews just to be on the safe side. >> Also, the original 8u40 push contained some unnecessary changes that >> were later >> cleaned up by JDK-8056056. In this port to 9 I have merged these two >> changes >> into one to avoid introducing a known issue only to remove it again.
>> >> The JDK change is new. The makefiles differ between 8u40 and 9 and >> this new >> change makes use of functionality not present in 8u40. This patch was >> provided >> by Erik Joelsson and I have reviewed it myself, but it needs two >> reviews, so >> another one is welcome. >> >> Bug: https://bugs.openjdk.java.net/browse/JDK-8055006 >> >> Webrevs: http://cr.openjdk.java.net/~jwilhelm/8055006/jdk9/ >> >> >> 8u40 Webrevs: http://cr.openjdk.java.net/~jwilhelm/8055006/ >> >> 8u40 changes: >> http://hg.openjdk.java.net/jdk8u/jdk8u-dev/hotspot/rev/f933a15469d4 >> http://hg.openjdk.java.net/jdk8u/jdk8u-dev/jdk/rev/312152328471 >> >> Bug and change for the second 8u40 fix: >> https://bugs.openjdk.java.net/browse/JDK-8056056 >> http://hg.openjdk.java.net/jdk8u/hs-dev/hotspot/rev/9be4ca335650 >> >> Thanks! >> /Jesper From jesper.wilhelmsson at oracle.com Tue Sep 23 08:13:25 2014 From: jesper.wilhelmsson at oracle.com (Jesper Wilhelmsson) Date: Tue, 23 Sep 2014 10:13:25 +0200 Subject: RFR: Forward port of 8055006 - Store original value of Min/MaxHeapFreeRatio In-Reply-To: <54211DA2.7030406@oracle.com> References: <54188242.9060808@oracle.com> <541DB66D.8030608@oracle.com> <54211DA2.7030406@oracle.com> Message-ID: <54212BA5.4000207@oracle.com> Thanks Magnus! /Jesper Magnus Ihse Bursie skrev 23/9/14 09:13: > On 2014-09-20 19:16, Jesper Wilhelmsson wrote: >> Hi all, >> >> I got approvals for the HotSpot changes and they have now been pushed to >> jdk9/hs-gc. For the JDK makefile change I would prefer if someone that feels >> comfortable with the JDK makefiles would have a look at it before I push that >> part. > The jdk makefile changes look good to me. > > /Magnus >> >> Thanks, >> /Jesper >> >> Jesper Wilhelmsson skrev 16/9/14 20:32: >>> Hi, >>> >>> The fix for JDK-8055006 was reviewed by several engineers and was pushed >>> directly to 8u40 due to time constraints. This is a forward port to get the >>> same >>> changes into JDK 9.
>>> >>> There are two webrevs, one for HotSpot and one for the JDK. >>> >>> The 8u40 HotSpot change applied cleanly to 9, so if this was a traditional >>> backport it wouldn't require another review. But since this is a weird >>> situation >>> and I'm pushing to 9 I'll ask for reviews just to be on the safe side. >>> Also, the original 8u40 push contained some unnecessary changes that were later >>> cleaned up by JDK-8056056. In this port to 9 I have merged these two changes >>> into one to avoid introducing a known issue only to remove it again. >>> >>> The JDK change is new. The makefiles differ between 8u40 and 9, and this new >>> change makes use of functionality not present in 8u40. This patch was provided >>> by Erik Joelsson and I have reviewed it myself, but it needs two reviews, so >>> another one is welcome. >>> >>> Bug: https://bugs.openjdk.java.net/browse/JDK-8055006 >>> >>> Webrevs: http://cr.openjdk.java.net/~jwilhelm/8055006/jdk9/ >>> >>> >>> 8u40 Webrevs: http://cr.openjdk.java.net/~jwilhelm/8055006/ >>> >>> 8u40 changes: >>> http://hg.openjdk.java.net/jdk8u/jdk8u-dev/hotspot/rev/f933a15469d4 >>> http://hg.openjdk.java.net/jdk8u/jdk8u-dev/jdk/rev/312152328471 >>> >>> Bug and change for the second 8u40 fix: >>> https://bugs.openjdk.java.net/browse/JDK-8056056 >>> http://hg.openjdk.java.net/jdk8u/hs-dev/hotspot/rev/9be4ca335650 >>> >>> Thanks! >>> /Jesper > From staffan.larsen at oracle.com Tue Sep 23 08:54:21 2014 From: staffan.larsen at oracle.com (Staffan Larsen) Date: Tue, 23 Sep 2014 10:54:21 +0200 Subject: RFR: JDK-8058936 hotspot/test/Makefile should use jtreg script from $JT_HOME/bin/jreg (instead of $JT_HOME/win32/bin/jtreg) Message-ID: <81323085-1C7D-43CA-B53B-8FA5678B1EA5@oracle.com> An upcoming version of jtreg will remove the platform-specific scripts in the distribution in favor of one single script. Hotspot's makefiles reference the win32-specific script and need to be updated. Please see the small fix below.
I plan to backport this to 7u and 8u as well. Thanks, /Staffan diff --git a/test/Makefile b/test/Makefile --- a/test/Makefile +++ b/test/Makefile @@ -259,8 +259,8 @@ EXTRA_JTREG_OPTIONS += -concurrency:$(CONCURRENCY) endif -# Default JTREG to run (win32 script works for everybody) -JTREG = $(JT_HOME)/win32/bin/jtreg +# Default JTREG to run +JTREG = $(JT_HOME)/bin/jtreg # Only run automatic tests JTREG_BASIC_OPTIONS += -a From stefan.karlsson at oracle.com Tue Sep 23 10:43:58 2014 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Tue, 23 Sep 2014 12:43:58 +0200 Subject: RFR: JDK-8058936 hotspot/test/Makefile should use jtreg script from $JT_HOME/bin/jreg (instead of $JT_HOME/win32/bin/jtreg) In-Reply-To: <81323085-1C7D-43CA-B53B-8FA5678B1EA5@oracle.com> References: <81323085-1C7D-43CA-B53B-8FA5678B1EA5@oracle.com> Message-ID: <54214EEE.7020007@oracle.com> Looks good. StefanK On 23/09/14 10:54, Staffan Larsen wrote: > An upcoming version of jtreg will remove the platform-specific scripts in the distribution in favor of one single script. Hotspot's makefiles reference the win32-specific script and need to be updated. Please see the small fix below. I plan to backport this to 7u and 8u as well.
> > Thanks, > /Staffan > > > diff --git a/test/Makefile b/test/Makefile > --- a/test/Makefile > +++ b/test/Makefile > @@ -259,8 +259,8 @@ > EXTRA_JTREG_OPTIONS += -concurrency:$(CONCURRENCY) > endif > > -# Default JTREG to run (win32 script works for everybody) > -JTREG = $(JT_HOME)/win32/bin/jtreg > +# Default JTREG to run > +JTREG = $(JT_HOME)/bin/jtreg > > # Only run automatic tests > JTREG_BASIC_OPTIONS += -a From staffan.larsen at oracle.com Tue Sep 23 12:57:56 2014 From: staffan.larsen at oracle.com (Staffan Larsen) Date: Tue, 23 Sep 2014 14:57:56 +0200 Subject: Cross Component (hotspot+jdk) Development in the Hotspot Group Repos In-Reply-To: References: Message-ID: All, This change has now been pushed to jdk9/hs and jobs are running in JPRT to push the change to jdk9/hs-rt, jdk9/hs-gc and jdk9/hs-comp. We will monitor the progress and try to catch any problems. Thanks, /Staffan On 17 sep 2014, at 09:45, Staffan Larsen wrote: > All, > > We have discovered a problem with one of our internal tools (DKFL) that relies on the jdk build numbers as part of the version string. Currently when we use the latest promoted jdk the version string will have a build number, but when building the complete source the build is always set to b00. > > We will delay the change below until we have resolved this problem. > > /Staffan > > On 15 sep 2014, at 11:25, Staffan Larsen wrote: > >> All, >> >> We plan to move ahead with this change on Wednesday (Sept 17th) unless there are instabilities that prevent this. We currently have one open bug blocking this (JDK-8058251). >> >> I will follow up with an email once the switch has happened. >> >> Thanks, >> /Staffan >> >> >> On 9 sep 2014, at 08:02, Staffan Larsen wrote: >> >>> >>> ## tl;dr >>> >>> We propose a move to a Hotspot development model where we can do both >>> hotspot and jdk changes in the hotspot group repos. This will require a >>> fully populated JDK forest to push changes (whether hotspot or jdk >>> changes) through JPRT.
We do not expect these changes to have much >>> effect on the open community, but it is good to note that there can be >>> changes both in hotspot and jdk code coming through the hotspot >>> repositories, and the best practice is to always clone and build the >>> complete forest. >>> >>> We propose to do this change in a few weeks' time. >>> >>> ## Problem >>> >>> We see an increasing number of features (small and large) that require >>> concerted changes to both the hotspot and the jdk repos. Our current >>> development model does not support this very well since it requires jdk >>> changes to be made in jdk9/dev and hotspot changes to be made in the >>> hotspot group repositories. Alternatively, such changes result in "flag >>> days" where jdk and hotspot changes are pushed through the group repos >>> with a lot of manual work and impact on everyone working in the group >>> repos. Either way, the result is very slow and cumbersome development. >>> >>> Some examples where concerted changes have been required are JSR-292, >>> default methods, Java Flight Recorder, work on annotations, moving Class >>> fields to Java, many serviceability area tests, and so on. A lot of this >>> work will continue and we will also see new things such as jigsaw that >>> add to the mix. >>> >>> Doing concerted changes today takes a lot of manual effort and calendar >>> time to make sure nothing breaks. In many cases the addition of a new >>> feature needs to be made first to a hotspot group repo. That change needs >>> to propagate to jdk9/dev where library code can be changed to depend on >>> it. Once that change has propagated back to the hotspot group repo, the >>> final change can be made to remove the old implementation. This dance >>> can take anywhere from 2 to 4 weeks to complete - for a single feature. >>> >>> There have also been quite a few cases where we missed taking the >>> dependency into account, resulting in test failures in one or more >>> repos.
In some cases these failures go on for several weeks causing lots >>> of extra work and confusion simply because it takes time for the fix to >>> propagate through the repos. >>> >>> Instead, we want to move to a model where we can make both jdk and >>> hotspot changes directly in the hotspot group repos. In that way the >>> changes will always "travel together" through the repos. This will make >>> our development cycle faster as well as more reliable. >>> >>> More or less by definition these types of changes introduce a stronger >>> dependency between hotspot and the jdk. For the product as a whole to >>> work correctly, the right combination of hotspot and the jdk needs to be >>> used. We have long since removed the requirement that hotspot would >>> support several jdk versions (known as the Hotspot Express - or hsx - >>> model) and we continue to see a strong dependency, where matching code >>> in hotspot and the jdk needs to be used. >>> >>> ## No More Dependency on Latest Promoted Build >>> >>> The strong dependency between hotspot and jdk makes it impossible for >>> hotspot to depend on the latest promoted jdk build for testing and >>> development. To elaborate on this: if a change with hotspot+jdk >>> dependencies has been pushed to a group repo, it will no longer be >>> possible to use the latest promoted build for running or testing the >>> version of hotspot built in that repo -- the latest promoted build will >>> not have the latest change to the jdk that hotspot now depends on (or >>> vice versa). >>> >>> ## Require Fully Populated JDK Forest >>> >>> The simple solution that we can switch to today is to always require a >>> fully populated JDK forest when building (both locally and in JPRT). By >>> this we mean a clone of all the repos in the forest under, for example, >>> jdk9/hs-rt. JPRT would no longer be using the latest promoted build when >>> creating bundles; instead it will build the code from the submitted >>> forest.
>>> >>> If all operations (builds, integrations, pushes, JPRT jobs) always work >>> on the full forest, then there will never be a mismatch between the jdk >>> and the hotspot code. >>> >>> The main drawbacks of this are that developers now need to clone, store >>> and build a lot more code. Cloning the full forest takes longer than >>> just cloning the hotspot forest. This can be alleviated by maintaining >>> local cached versions. Storing full forests requires more disk space. >>> This can be mitigated by buying more disks or using a different workflow >>> (for example Mercurial Queues). Building a full jdk takes longer, but >>> hotspot is already one of the larger components to build and incremental >>> builds are usually quite fast. >>> >>> ## Next Steps >>> >>> Given that we would like to improve the model we use for cross component >>> development as soon as possible, we would like to switch to require a >>> fully populated JDK forest for hotspot development. All the >>> prerequisites for doing this are in place (changes to JPRT, both on the >>> servers and to the configuration files in the source repos). A group of >>> volunteering hotspot developers have been using full jdk repos for a >>> while for day-to-day work (except pushes) and have not reported any >>> showstopper problems. >>> >>> If no strong objections are raised we need to decide on a date when we >>> throw the switch. A good date is probably after the 8u40 Feature >>> Complete date of mid-September [0] so as not to impact that release >>> (although this change will only apply to JDK 9 development for now).
>>> >>> Regards, >>> Jon Masamitsu, Karen Kinnear, Mikael Vidstedt, >>> Staffan Larsen, Stefan Särne, Vladimir Kozlov >>> >>> [0] http://openjdk.java.net/projects/jdk8u/releases/8u40.html >> > From david.holmes at oracle.com Tue Sep 23 13:30:58 2014 From: david.holmes at oracle.com (David Holmes) Date: Tue, 23 Sep 2014 23:30:58 +1000 Subject: RFR: JDK-8058936 hotspot/test/Makefile should use jtreg script from $JT_HOME/bin/jreg (instead of $JT_HOME/win32/bin/jtreg) In-Reply-To: <81323085-1C7D-43CA-B53B-8FA5678B1EA5@oracle.com> References: <81323085-1C7D-43CA-B53B-8FA5678B1EA5@oracle.com> Message-ID: <54217612.60401@oracle.com> Hi Staffan, On 23/09/2014 6:54 PM, Staffan Larsen wrote: > An upcoming version of jtreg will remove the platform-specific scripts in the distribution in favor of one single script. Hotspot's makefiles reference the win32-specific script and need to be updated. Please see the small fix below. I plan to backport this to 7u and 8u as well. How is this change being coordinated? Change itself looks okay.
Thanks, David > Thanks, > /Staffan > > diff --git a/test/Makefile b/test/Makefile > --- a/test/Makefile > +++ b/test/Makefile > @@ -259,8 +259,8 @@ > EXTRA_JTREG_OPTIONS += -concurrency:$(CONCURRENCY) > endif > > -# Default JTREG to run (win32 script works for everybody) > -JTREG = $(JT_HOME)/win32/bin/jtreg > +# Default JTREG to run > +JTREG = $(JT_HOME)/bin/jtreg > > # Only run automatic tests > JTREG_BASIC_OPTIONS += -a > From staffan.larsen at oracle.com Tue Sep 23 13:34:26 2014 From: staffan.larsen at oracle.com (Staffan Larsen) Date: Tue, 23 Sep 2014 15:34:26 +0200 Subject: RFR: JDK-8058936 hotspot/test/Makefile should use jtreg script from $JT_HOME/bin/jreg (instead of $JT_HOME/win32/bin/jtreg) In-Reply-To: <54217612.60401@oracle.com> References: <81323085-1C7D-43CA-B53B-8FA5678B1EA5@oracle.com> <54217612.60401@oracle.com> Message-ID: <5A660CCA-D4A7-4B61-8FFD-CFEB85183D3D@oracle.com> On 23 sep 2014, at 15:30, David Holmes wrote: > Hi Staffan, > > On 23/09/2014 6:54 PM, Staffan Larsen wrote: >> An upcoming version of jtreg will remove the platform-specific scripts in the distribution in favor of one single script. Hotspot's makefiles reference the win32-specific script and need to be updated. Please see the small fix below. I plan to backport this to 7u and 8u as well. > > How is this change being coordinated? Jtreg already has the bin/jtreg script, so the change can be applied directly. A coming release of jtreg will remove the old scripts. > > Change itself looks okay.
Thanks, Staffan > > Thanks, > David > >> Thanks, >> /Staffan >> >> >> diff --git a/test/Makefile b/test/Makefile >> --- a/test/Makefile >> +++ b/test/Makefile >> @@ -259,8 +259,8 @@ >> EXTRA_JTREG_OPTIONS += -concurrency:$(CONCURRENCY) >> endif >> >> -# Default JTREG to run (win32 script works for everybody) >> -JTREG = $(JT_HOME)/win32/bin/jtreg >> +# Default JTREG to run >> +JTREG = $(JT_HOME)/bin/jtreg >> >> # Only run automatic tests >> JTREG_BASIC_OPTIONS += -a >> From erik.helin at oracle.com Wed Sep 24 16:32:37 2014 From: erik.helin at oracle.com (Erik Helin) Date: Wed, 24 Sep 2014 18:32:37 +0200 Subject: RFR: 8049599: MetaspaceGC::_capacity_until_GC can overflow In-Reply-To: <53F4780D.9040005@oracle.com> References: <53F4780D.9040005@oracle.com> Message-ID: <5422F225.1090105@oracle.com> All, I've reworked the patch quite a bit based on (great!) internal feedback from StefanK and Mikael Gerdin. The patch still uses an overflow check and a CAS to update the high-water mark (HWM), but the new behavior should be the same as the old one (which used Atomic::add_ptr). With the current code, each thread always increments the HWM, but there is a race in that another thread can allocate metadata (due to the increased HWM) before the thread that increased the HWM gets around to allocate. With the new code, each thread will increase the pointer at most once using a CAS. Even if increasing the HWM fails, the allocation attempt might still succeed (for the reason described above). There is a theoretical problem of starvation in the new code, a thread might forever fail to increase the HWM and forever fail to allocate due to contention, but in practice this should not be a problem. In the current code, Atomic::add_ptr is implemented as a CAS in a (theoretically) never ending loop on non-x86 CPUs, so the same theoretical starvation problem is present in the current code as well (on non-x86 CPUs that is). 
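The update scheme described above (raise the high-water mark with a wrap-around check, retry the CAS a bounded number of times, and give up under contention since contention implies other threads raised it anyway) can be sketched in isolation. The names, the five-attempt limit, and the clamp-on-overflow policy below follow the mail's description, not HotSpot's actual sources; whether the real patch clamps or refuses an overflowing increment is a policy detail this sketch only assumes:

```cpp
#include <atomic>
#include <cstddef>
#include <cassert>   // for the asserts in the usage example

// Stand-in for MetaspaceGC::_capacity_until_GC.
static std::atomic<std::size_t> capacity_until_gc{0};

// Try to raise the high-water mark by `delta`, detecting wrap-around.
// Returns the new value, or 0 if all five CAS attempts lost a race
// (in which case other threads must have raised the HWM already).
std::size_t inc_capacity_until_gc(std::size_t delta) {
    for (int attempts = 0; attempts < 5; ++attempts) {
        std::size_t old_value = capacity_until_gc.load();
        std::size_t new_value = old_value + delta;
        if (new_value < old_value) {
            // Overflow detected: clamp instead of wrapping around,
            // so the HWM never jumps back to a tiny value and
            // triggers spurious GCs. (Assumed policy, see lead-in.)
            new_value = static_cast<std::size_t>(-1);
        }
        // On failure, old_value is reloaded and the loop retries.
        if (capacity_until_gc.compare_exchange_strong(old_value, new_value)) {
            return new_value;
        }
    }
    return 0;
}
```

An uncontended caller sees its increment applied on the first attempt; a near-SIZE_MAX value no longer wraps to a small number the way a plain Atomic::add_ptr would let it.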
Webrevs: - full: http://cr.openjdk.java.net/~ehelin/8049599/webrev.01/ - incremental: http://cr.openjdk.java.net/~ehelin/8049599/webrev.00-01/ Testing: - JPRT - Aurora: - Kitchensink - Weblogic+Medrec - runThese - vm.quick, regression, gc, compiler, runtime, parallel class loading, metaspace, oom - JTReg tests - Running newly added JTREG test Thanks, Erik On 2014-08-20 12:27, Erik Helin wrote: > Hi all, > > this patch fixes a problem where Metaspace::_capacityUntilGC can > overflow ("wrap around"). Since _capacityUntilGC is treated as a size_t > everywhere it used, we won't calculate with negative numbers, but an > eventual wrap around will still cause unnecessary GCs. > > The problem is solved by detecting an eventual wrap around in > Metaspace::incCapacityUntilGC. The overflow check means that > _capacityUntilGC now must be updated with a CAS. If the CAS fails more > than five times due to contention, no update will be done, because this > means that other threads must have incremented _capacityUntilGC (it is > decremented only during a safepoint). This also means that a thread > calling incCapacityUntilGC might have "its" requested memory "stolen" by > another thread, but incCapacityUntilGC has never given any fairness > guarantees. > > The patch also adds two functions to the WhiteBox API to be able to > update and read Metaspace::_capacityUntilGC from a JTREG test. 
> > Bug: > https://bugs.openjdk.java.net/browse/JDK-8049599 > > Webrev: > http://cr.openjdk.java.net/~ehelin/8049599/webrev.00/ > > Testing: > - JPRT > - Aurora ad-hoc testing (on all platforms, both 32-bit and 64-bit): > - Kitchensink, runThese and Dacapo > - JTREG tests > - Parallel Class Loading testlist > - GC, runtime and compiler testlists > - OOM and stress testlists > - Running newly added JTREG test > > Thanks, > Erik From igor.veresov at oracle.com Wed Sep 24 23:15:04 2014 From: igor.veresov at oracle.com (Igor Veresov) Date: Wed, 24 Sep 2014 16:15:04 -0700 Subject: [8u40] RFR: 8058744, 8059002: Crash in C1 OSRed method w/ Unsafe usage Message-ID: I'd like to backport these two fixes please. Nightlies are alright. 8058744: Crash in C1 OSRed method w/ Unsafe usage JDK9 changeset: http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/bf402e85d046 JBS: https://bugs.openjdk.java.net/browse/JDK-8058744 JDK9 webrev: http://cr.openjdk.java.net/~iveresov/8058744/webrev.02 8059002: 8058744 needs a test case JDK9 changeset: http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/6aada1367ea2 JBS: https://bugs.openjdk.java.net/browse/JDK-8059002 JDK9 webrev: http://cr.openjdk.java.net/~iveresov/8059002/webrev.00/ Thanks, igor From vladimir.kozlov at oracle.com Wed Sep 24 23:35:19 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Wed, 24 Sep 2014 16:35:19 -0700 Subject: [8u40] RFR: 8058744, 8059002: Crash in C1 OSRed method w/ Unsafe usage In-Reply-To: References: Message-ID: <54235537.7050208@oracle.com> Good. Vladimir On 9/24/14 4:15 PM, Igor Veresov wrote: > I'd like to backport these two fixes please. Nightlies are alright.
> > 8058744: Crash in C1 OSRed method w/ Unsafe usage > JDK9 changeset: http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/bf402e85d046 > JBS: https://bugs.openjdk.java.net/browse/JDK-8058744 > JDK9 webrev: http://cr.openjdk.java.net/~iveresov/8058744/webrev.02 > > 8059002: 8058744 needs a test case > JDK9 changeset: http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/6aada1367ea2 > JBS: https://bugs.openjdk.java.net/browse/JDK-8059002 > JDK9 webrev: http://cr.openjdk.java.net/~iveresov/8059002/webrev.00/ > > Thanks, > igor From igor.veresov at oracle.com Wed Sep 24 23:50:32 2014 From: igor.veresov at oracle.com (Igor Veresov) Date: Wed, 24 Sep 2014 16:50:32 -0700 Subject: [8u40] RFR: 8058744, 8059002: Crash in C1 OSRed method w/ Unsafe usage In-Reply-To: <54235537.7050208@oracle.com> References: <54235537.7050208@oracle.com> Message-ID: Thanks, Vladimir! igor On Sep 24, 2014, at 4:35 PM, Vladimir Kozlov wrote: > Good. > > Vladimir > > On 9/24/14 4:15 PM, Igor Veresov wrote: >> I'd like to backport these two fixes please. Nightlies are alright.
>> >> 8058744: Crash in C1 OSRed method w/ Unsafe usage >> JDK9 changeset: http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/bf402e85d046 >> JBS: https://bugs.openjdk.java.net/browse/JDK-8058744 >> JDK9 webrev: http://cr.openjdk.java.net/~iveresov/8058744/webrev.02 >> >> 8059002: 8058744 needs a test case >> JDK9 changeset: http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/6aada1367ea2 >> JBS: https://bugs.openjdk.java.net/browse/JDK-8059002 >> JDK9 webrev: http://cr.openjdk.java.net/~iveresov/8059002/webrev.00/ >> >> Thanks, >> igor From mikael.gerdin at oracle.com Thu Sep 25 07:32:20 2014 From: mikael.gerdin at oracle.com (Mikael Gerdin) Date: Thu, 25 Sep 2014 09:32:20 +0200 Subject: RFR: JDK-8055141 Catch linker errors earlier in the JVM build by not allowing unresolved externals In-Reply-To: <541B6D75.90207@oracle.com> References: <2374744.b2KptO73VY@mgerdin03> <541B6D75.90207@oracle.com> Message-ID: <7552091.BvVnPmvmed@mgerdin03> On Friday 19 September 2014 09.40.37 David Holmes wrote: > Looks good and works well! Let's get this one backported too please. :) Thanks for the reviews, David, Erik & Erik. I suppose it does not really matter which hs-repo I push this to, so I'll push it to hs-gc/hotspot. /Mikael > > Thanks, > David > > On 18/09/2014 6:39 PM, Mikael Gerdin wrote: > > Hi all, > > > > As you may know, linking an ELF shared object allows unresolved external > > symbols at link time. This is sometimes problematic for JVM developers > > since the JVM does not depend on unresolved external symbols and all > > missing symbols at build time are due to mistakes, usually missing > > includes of inline definitions. > > > > In order to disallow such unresolved externals I propose that we add > > "-z defs" to the linker command line when linking the JVM, thereby making > > unresolved externals a build-time error instead of a run-time failure when > > dlopen'ing the newly built JVM for the first time.
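[Editorial sketch] The "-z defs" behavior described above can be reproduced outside the JVM build with a tiny shared library. The compiler invocation and file names below are purely illustrative, not part of the actual HotSpot makefile change, and assume a Linux toolchain with GNU ld:

```shell
# Sketch of the behavior described above: an ELF shared object with an
# unresolved external links fine by default, but fails with -z defs.
cat > missing.c <<'EOF'
void defined_elsewhere(void);             /* declared, never defined */
void entry(void) { defined_elsewhere(); }
EOF

# Default ELF behavior: the link succeeds and the missing symbol only
# shows up when the library is dlopen'ed.
gcc -shared -fPIC missing.c -o libmissing.so && echo "linked without -z defs"

# With -z defs the same link fails immediately, at build time.
if gcc -shared -fPIC -Wl,-z,defs missing.c -o libmissing2.so; then
  echo "unexpectedly linked"
else
  echo "link failed with -z defs, as intended"
fi
```

GNU ld also accepts `--no-undefined` as a synonym for `-z defs`; on platforms where the system linker differs (e.g. Solaris ld) the flag spelling is the same but the implicit-dependency diagnostics differ, as the Solaris output quoted below shows.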
> > > > On Windows and OSX this is already the default linker behavior. > I took the liberty of modifying the bsd make file since I believe that bsd > uses the GNU linker which supports the "-z defs" flag. I'm not sure about > the behavior or flags appropriate for AIX so I didn't change the AIX > makefiles. > > > > On Solaris, linking with "-z defs" failed at first with the following > > message: > > > > Undefined first referenced > > > > symbol in file > > > > gethostbyname ostream.o (symbol belongs to implicit > > dependency /lib/64/libnsl.so.1) > > inet_addr ostream.o (symbol belongs to implicit > > dependency /lib/64/libnsl.so.1) > > ld: fatal: symbol referencing errors. No output written to libjvm.so > > > > This has not caused any failures earlier since libsocket depends on > > libnsl, so in practice the symbols are always present at runtime, but > > with the "-z defs" flag the linker requires the dependency to be > > explicitly stated. > > I fixed the issue by appending -lnsl to the link-time libraries for the > > Solaris build. > > > > Webrev: http://cr.openjdk.java.net/~mgerdin/8055141/webrev.0/ > > Bug: https://bugs.openjdk.java.net/browse/JDK-8055141 > > > > Testing: > > * Verified that the additional flag causes build-time errors on all > > platforms in the presence of unresolved external symbols. > > * Verified that the build passes on all Oracle-supported platforms with > > the > > new flag. > > > > Thanks > > /Mikael From mikael.gerdin at oracle.com Thu Sep 25 07:33:59 2014 From: mikael.gerdin at oracle.com (Mikael Gerdin) Date: Thu, 25 Sep 2014 09:33:59 +0200 Subject: RFR [8u40] 8056084: Refactor Hashtable to allow implementations without rehashing support In-Reply-To: <541FFF67.3020409@oracle.com> References: <15856493.62xT43KoU5@mgerdin03> <541FFF67.3020409@oracle.com> Message-ID: <10697435.g2XDyjSMJu@mgerdin03> On Monday 22 September 2014 12.52.23 Bengt Rutisson wrote: > Hi Mikael, > > Looks good.
Thanks Bengt, Thomas for the reviews. I'm all set to push this now. /Mikael > > Bengt > > On 2014-09-17 09:00, Mikael Gerdin wrote: > > Hi all, > > > > I need to backport this change in order to backport 8048268 which we need > > for G1 performance in 8u40. > > > > The patch didn't apply cleanly since StringTable was moved to a separate > > file in 9. The StringTable patch hunks applied correctly to the relevant > > parts of symbolTable.[ch]pp. > > > > > > Webrev: http://cr.openjdk.java.net/~mgerdin/8056084/8u/webrev/ > > > > Bug: https://bugs.openjdk.java.net/browse/JDK-8056084 > > > > Review thread at: > > http://mail.openjdk.java.net/pipermail/hotspot-dev/2014-August/015039.html > > > > /Mikael From vladimir.kozlov at oracle.com Thu Sep 25 18:30:21 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Thu, 25 Sep 2014 11:30:21 -0700 Subject: [8u40] RFR(S): 8050022: linux-sparcv9: assert(SharedSkipVerify || obj->is_oop()) failed: sanity check Message-ID: <54245F3D.3040600@oracle.com> Backport request. Changes were pushed into jdk9 a week ago. Nightlies are fine. The changes apply cleanly. Bug: https://bugs.openjdk.java.net/browse/JDK-8050022 jdk9 webrev: http://cr.openjdk.java.net/~morris/JDK-8050022.05 Review thread: http://mail.openjdk.java.net/pipermail/hotspot-compiler-dev/2014-September/015562.html jdk9 changeset: http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/ca010d2665ca Thanks, Vladimir From igor.veresov at oracle.com Thu Sep 25 19:25:24 2014 From: igor.veresov at oracle.com (Igor Veresov) Date: Thu, 25 Sep 2014 12:25:24 -0700 Subject: [8u40] RFR(S): 8050022: linux-sparcv9: assert(SharedSkipVerify || obj->is_oop()) failed: sanity check In-Reply-To: <54245F3D.3040600@oracle.com> References: <54245F3D.3040600@oracle.com> Message-ID: Good. igor On Sep 25, 2014, at 11:30 AM, Vladimir Kozlov wrote: > Backport request. Changes were pushed into jdk9 a week ago. Nightlies are fine. > The changes apply cleanly.
> > Bug: > https://bugs.openjdk.java.net/browse/JDK-8050022 > jdk9 webrev: > http://cr.openjdk.java.net/~morris/JDK-8050022.05 > Review thread: > http://mail.openjdk.java.net/pipermail/hotspot-compiler-dev/2014-September/015562.html > jdk9 changeset: > http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/ca010d2665ca > > Thanks, > Vladimir From vladimir.kozlov at oracle.com Thu Sep 25 19:57:00 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Thu, 25 Sep 2014 12:57:00 -0700 Subject: [8u40] RFR(S): 8050022: linux-sparcv9: assert(SharedSkipVerify || obj->is_oop()) failed: sanity check In-Reply-To: References: <54245F3D.3040600@oracle.com> Message-ID: <5424738C.5080000@oracle.com> Thank you, Igor Vladimir On 9/25/14 12:25 PM, Igor Veresov wrote: > Good. > > igor > > On Sep 25, 2014, at 11:30 AM, Vladimir Kozlov wrote: > >> Backport request. Changes were pushed into jdk9 a week ago. Nightlies are fine. >> The changes apply cleanly. >> >> Bug: >> https://bugs.openjdk.java.net/browse/JDK-8050022 >> jdk9 webrev: >> http://cr.openjdk.java.net/~morris/JDK-8050022.05 >> Review thread: >> http://mail.openjdk.java.net/pipermail/hotspot-compiler-dev/2014-September/015562.html >> jdk9 changeset: >> http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/ca010d2665ca >> >> Thanks, >> Vladimir > From tobias.hartmann at oracle.com Mon Sep 29 10:08:10 2014 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Mon, 29 Sep 2014 12:08:10 +0200 Subject: [9] RFR(S): 8059137: MemoryPoolMXBeans for different code heaps should contain 'Code heap' in their names Message-ID: <54292F8A.901@oracle.com> Hi, please review the following patch. Bug: https://bugs.openjdk.java.net/browse/JDK-8059137 Webrev: http://cr.openjdk.java.net/~thartmann/8059137/webrev.00/ == Problem == The segmented code cache implementation registers a memory pool for each code heap. To be consistent with the "non-segmented" output, the names of these pools should contain the word "code heap".
== Solution == I added "Code Heap" to the name of the segments. The output now looks like this: $ /export/bin/java -XX:-SegmentedCodeCache Test Code Cache [...] $ /export/bin/java -XX:+SegmentedCodeCache Test Code Heap 'non-methods' Code Heap 'profiled nmethods' Code Heap 'non-profiled nmethods' [...] Thanks, Tobias From staffan.larsen at oracle.com Mon Sep 29 10:31:24 2014 From: staffan.larsen at oracle.com (Staffan Larsen) Date: Mon, 29 Sep 2014 12:31:24 +0200 Subject: [9] RFR(S): 8059137: MemoryPoolMXBeans for different code heaps should contain 'Code heap' in their names In-Reply-To: <54292F8A.901@oracle.com> References: <54292F8A.901@oracle.com> Message-ID: <7ED503B5-438F-45A4-B478-8858921DCDB2@oracle.com> Looks good. Can you make sure to run the jdk tests for memory pools, easiest by running jprt with '-testset svc'. Thanks, /Staffan On 29 sep 2014, at 12:08, Tobias Hartmann wrote: > Hi, > > please review the following patch. > > Bug: https://bugs.openjdk.java.net/browse/JDK-8059137 > Webrev: http://cr.openjdk.java.net/~thartmann/8059137/webrev.00/ > > == Problem == > The segmented code cache implementation registers a memory pool for each code heap. To be consistent with the "non-segmented" output, the names of these pools should contain the word "code heap". > > == Solution == > I added "Code Heap" to the name of the segments. The output now looks like this: > > $ /export/bin/java -XX:-SegmentedCodeCache Test > Code Cache > [...] > > $ /export/bin/java -XX:+SegmentedCodeCache Test > Code Heap 'non-methods' > Code Heap 'profiled nmethods' > Code Heap 'non-profiled nmethods' > [...] > > Thanks, > Tobias From george.triantafillou at oracle.com Mon Sep 29 11:55:36 2014 From: george.triantafillou at oracle.com (George Triantafillou) Date: Mon, 29 Sep 2014 07:55:36 -0400 Subject: RFR: 8058606 Detailed Native Memory Tracking (NMT) data is not output at VM exit Message-ID: <542948B8.107@oracle.com> Please review this fix for JDK-8058606.
The output from the -XX:NativeMemoryTracking=detail option now outputs detailed tracking information at VM exit. Previously, only summary tracking information was output. A new test was added to verify the output from both summary and detail tracking options. Bug: https://bugs.openjdk.java.net/browse/JDK-8058606 Webrev: http://cr.openjdk.java.net/~gtriantafill/8058606/webrev/ The fix was tested locally on Linux with jtreg and the JPRT hotspot testset. -George From filipp.zhinkin at oracle.com Mon Sep 29 11:56:17 2014 From: filipp.zhinkin at oracle.com (Filipp Zhinkin) Date: Mon, 29 Sep 2014 15:56:17 +0400 Subject: [9] RFR(S): 8059137: MemoryPoolMXBeans for different code heaps should contain 'Code heap' in their names In-Reply-To: <54292F8A.901@oracle.com> References: <54292F8A.901@oracle.com> Message-ID: <542948E1.8000206@oracle.com> Hi Tobias, thank you for taking care of it. The change looks good. Regards, Filipp. On 09/29/2014 02:08 PM, Tobias Hartmann wrote: > Hi, > > please review the following patch. > > Bug: https://bugs.openjdk.java.net/browse/JDK-8059137 > Webrev: http://cr.openjdk.java.net/~thartmann/8059137/webrev.00/ > > == Problem == > The segmented code cache implementation registers a memory pool for each code > heap. To be consistent with the "non-segmented" output, the names of these > pools should contain the word "code heap". > > == Solution == > I added "Code Heap" to the name of the segments. The output now looks like this: > > $ /export/bin/java -XX:-SegmentedCodeCache Test > Code Cache > [...] > > $ /export/bin/java -XX:+SegmentedCodeCache Test > Code Heap 'non-methods' > Code Heap 'profiled nmethods' > Code Heap 'non-profiled nmethods' > [...] 
> > Thanks, > Tobias From tobias.hartmann at oracle.com Mon Sep 29 12:01:16 2014 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Mon, 29 Sep 2014 14:01:16 +0200 Subject: [9] RFR(S): 8059137: MemoryPoolMXBeans for different code heaps should contain 'Code heap' in their names In-Reply-To: <7ED503B5-438F-45A4-B478-8858921DCDB2@oracle.com> References: <54292F8A.901@oracle.com> <7ED503B5-438F-45A4-B478-8858921DCDB2@oracle.com> Message-ID: <54294A0C.6090209@oracle.com> Hi Staffan, thanks for the review. On 29.09.2014 12:31, Staffan Larsen wrote: > Looks good. > > Can you make sure to run the jdk tests for memory pools, easiest by running jprt with '-testset svc'. Done, no failures. Best, Tobias > Thanks, > /Staffan > > > On 29 sep 2014, at 12:08, Tobias Hartmann wrote: > >> Hi, >> >> please review the following patch. >> >> Bug: https://bugs.openjdk.java.net/browse/JDK-8059137 >> Webrev: http://cr.openjdk.java.net/~thartmann/8059137/webrev.00/ >> >> == Problem == >> The segmented code cache implementation registers a memory pool for each code heap. To be consistent with the "non-segmented" output, the names of these pools should contain the word "code heap". >> >> == Solution == >> I added "Code Heap" to the name of the segments. The output now looks like this: >> >> $ /export/bin/java -XX:-SegmentedCodeCache Test >> Code Cache >> [...] >> >> $ /export/bin/java -XX:+SegmentedCodeCache Test >> Code Heap 'non-methods' >> Code Heap 'profiled nmethods' >> Code Heap 'non-profiled nmethods' >> [...]
>> >> Thanks, >> Tobias From tobias.hartmann at oracle.com Mon Sep 29 12:01:36 2014 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Mon, 29 Sep 2014 14:01:36 +0200 Subject: [9] RFR(S): 8059137: MemoryPoolMXBeans for different code heaps should contain 'Code heap' in their names In-Reply-To: <542948E1.8000206@oracle.com> References: <54292F8A.901@oracle.com> <542948E1.8000206@oracle.com> Message-ID: <54294A20.6090001@oracle.com> Filipp, thanks for the review. Best, Tobias On 29.09.2014 13:56, Filipp Zhinkin wrote: > Hi Tobias, > > thank you for taking care of it. The change looks good. > > Regards, > Filipp. > > On 09/29/2014 02:08 PM, Tobias Hartmann wrote: >> Hi, >> >> please review the following patch. >> >> Bug: https://bugs.openjdk.java.net/browse/JDK-8059137 >> Webrev: http://cr.openjdk.java.net/~thartmann/8059137/webrev.00/ >> >> == Problem == >> The segmented code cache implementation registers a memory pool for >> each code heap. To be consistent with the "non-segmented" output, the >> names of these pools should contain the word "code heap". >> >> == Solution == >> I added "Code Heap" to the name of the segments. The output now looks >> like this: >> >> $ /export/bin/java -XX:-SegmentedCodeCache Test >> Code Cache >> [...] >> >> $ /export/bin/java -XX:+SegmentedCodeCache Test >> Code Heap 'non-methods' >> Code Heap 'profiled nmethods' >> Code Heap 'non-profiled nmethods' >> [...] >> >> Thanks, >> Tobias > From lois.foltan at oracle.com Mon Sep 29 13:17:13 2014 From: lois.foltan at oracle.com (Lois Foltan) Date: Mon, 29 Sep 2014 09:17:13 -0400 Subject: RFR: 8058606 Detailed Native Memory Tracking (NMT) data is not output at VM exit In-Reply-To: <542948B8.107@oracle.com> References: <542948B8.107@oracle.com> Message-ID: <54295BD9.9060609@oracle.com> Hi George, src/share/vm/services/memTracker.cpp - I don't see where the variable mem_baseline is initialized before you invoke the method baseline()? 
I am not overly familiar with NMT but it looks like you might need to do something like: MemBaseline& baseline = MemTracker::get_baseline(); - Your indentation for your edits at least in the webrev looks very off Thanks, Lois On 9/29/2014 7:55 AM, George Triantafillou wrote: > Please review this fix for JDK-8058606. The output from the > -XX:NativeMemoryTracking=detail option now outputs detailed tracking > information at VM exit. Previously, only summary tracking information > was output. > > A new test was added to verify the output from both summary and detail > tracking options. > > Bug: https://bugs.openjdk.java.net/browse/JDK-8058606 > Webrev: http://cr.openjdk.java.net/~gtriantafill/8058606/webrev/ > > The fix > was tested locally on Linux with jtreg and the JPRT hotspot testset. > > -George From filipp.zhinkin at oracle.com Mon Sep 29 13:44:40 2014 From: filipp.zhinkin at oracle.com (Filipp Zhinkin) Date: Mon, 29 Sep 2014 17:44:40 +0400 Subject: [8u40] RFR (XS): 8059226 : Names of rtm_state_change and unstable_if deoptimization reasons were swapped in 8u40 Message-ID: <54296248.8030106@oracle.com> Hi, please review the fix for a glitch that happened during the 8030976 [1] backport to 8u40, after which the names of the rtm_state_change and unstable_if deoptimization reasons were swapped [2][3], so the rtm_state_change trap was logged in the compilation log as 'unstable_if' and vice versa. I've fixed the order of the DeoptReason values declaration so now it matches the order used in jdk9 and the names order in Deoptimization::_trap_reason_name. Bug id: https://bugs.openjdk.java.net/browse/JDK-8059226 Webrev: http://cr.openjdk.java.net/~fzhinkin/8059226/webrev.00/ Testing: JPRT, manual & automated using affected tests Thanks, Filipp.
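[Editorial sketch] The glitch Filipp describes is easy to hit whenever an enum is used to index a parallel name table. A minimal illustration of the invariant, with made-up declarations rather than the real HotSpot tables:

```cpp
#include <cassert>
#include <cstring>

// Sketch only: HotSpot looks up Deoptimization::_trap_reason_name by the
// DeoptReason enum value, so the enum declaration order and the name
// table order must match exactly. These two tables are illustrative.
enum DeoptReason {
  Reason_unstable_if,       // position 0 here...
  Reason_rtm_state_change,  // ...and position 1 here
  Reason_LIMIT
};

static const char* const trap_reason_name[Reason_LIMIT] = {
  "unstable_if",      // must line up with Reason_unstable_if
  "rtm_state_change"  // must line up with Reason_rtm_state_change
};

// Swapping the two enumerators without swapping the strings would make
// each trap log under the other's name, which is the reported symptom.
const char* name_of(DeoptReason reason) {
  return trap_reason_name[reason];
}
```

A static assertion on the table length (as `Reason_LIMIT` provides here) catches a missing entry at compile time, but a swapped pair can only be caught by tests that check the logged names, which is what the affected JTREG tests do.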
[1] https://bugs.openjdk.java.net/browse/JDK-8030976 [2] http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/6ad207fd3e26#l4.6 [3] http://hg.openjdk.java.net/jdk8u/jdk8u/hotspot/rev/f6f9aec27858#l4.7 From vladimir.x.ivanov at oracle.com Mon Sep 29 14:51:08 2014 From: vladimir.x.ivanov at oracle.com (Vladimir Ivanov) Date: Mon, 29 Sep 2014 18:51:08 +0400 Subject: [9] RFR (S): 8059340: ConstantPool::_resolved_references is missing in heap dump Message-ID: <542971DC.2090803@oracle.com> http://cr.openjdk.java.net/~vlivanov/8059340/webrev.00/ https://bugs.openjdk.java.net/browse/JDK-8059340 The VM heap dump doesn't contain ConstantPool::_resolved_references for classes which have resolved references. ConstantPool::_resolved_references points to an Object[] holding resolved constant pool entries (patches for VM anonymous classes, linked CallSite & MethodType for invokedynamic instructions). I've decided to use a reserved slot in the HPROF class header format. It requires an update in jhat to correctly display the new info. The other approach I tried was to dump the reference as a fake static field [1], but storing the VM-internal ConstantPool::_resolved_references among user-defined fields looks confusing. Testing: manual (verified that corresponding arrays are properly linked in a Nashorn heap dump). Thanks! Best regards, Vladimir Ivanov [1] http://cr.openjdk.java.net/~vlivanov/8059340/static From vladimir.kozlov at oracle.com Mon Sep 29 16:35:59 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Mon, 29 Sep 2014 09:35:59 -0700 Subject: [8u40] RFR (XS): 8059226 : Names of rtm_state_change and unstable_if deoptimization reasons were swapped in 8u40 In-Reply-To: <54296248.8030106@oracle.com> References: <54296248.8030106@oracle.com> Message-ID: <54298A6F.7090701@oracle.com> Looks good. Thank you for fixing this.
Vladimir K On 9/29/14 6:44 AM, Filipp Zhinkin wrote: > Hi, > > please review the fix for a glitch that happened during the 8030976 [1] backport > to 8u40, after which the names of the rtm_state_change and unstable_if deoptimization > reasons were swapped [2][3], so the rtm_state_change trap was logged in the compilation > log as 'unstable_if' and vice versa. > > I've fixed the order of the DeoptReason values declaration so now it matches the order > used in jdk9 and the names order in Deoptimization::_trap_reason_name. > > Bug id: https://bugs.openjdk.java.net/browse/JDK-8059226 > Webrev: http://cr.openjdk.java.net/~fzhinkin/8059226/webrev.00/ > Testing: JPRT, manual & automated using affected tests > > Thanks, > Filipp. > > [1] https://bugs.openjdk.java.net/browse/JDK-8030976 > [2] http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/6ad207fd3e26#l4.6 > [3] http://hg.openjdk.java.net/jdk8u/jdk8u/hotspot/rev/f6f9aec27858#l4.7 From vladimir.kozlov at oracle.com Mon Sep 29 17:03:48 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Mon, 29 Sep 2014 10:03:48 -0700 Subject: [9] RFR(S): 8059137: MemoryPoolMXBeans for different code heaps should contain 'Code heap' in their names In-Reply-To: <542948E1.8000206@oracle.com> References: <54292F8A.901@oracle.com> <542948E1.8000206@oracle.com> Message-ID: <542990F4.50306@oracle.com> Filipp, Are you okay with this? The name will be 'Code Cache' in non-segmented case (as before segmented code cache implementation). But the name for segmented case will start with 'Code Heap'. See examples in RFR. Should both cases have the same name? I am asking you since you filed the RFE. Thanks, Vladimir On 9/29/14 4:56 AM, Filipp Zhinkin wrote: > Hi Tobias, > > thank you for taking care of it. The change looks good. > > Regards, > Filipp. > > On 09/29/2014 02:08 PM, Tobias Hartmann wrote: >> Hi, >> >> please review the following patch.
>> >> Bug: https://bugs.openjdk.java.net/browse/JDK-8059137 >> Webrev: http://cr.openjdk.java.net/~thartmann/8059137/webrev.00/ >> >> == Problem == >> The segmented code cache implementation registers a memory pool for each code heap. To be consistent with the >> "non-segmented" output, the names of these pools should contain the word "code heap". >> >> == Solution == >> I added "Code Heap" to the name of the segments. The output now looks like this: >> >> $ /export/bin/java -XX:-SegmentedCodeCache Test >> Code Cache >> [...] >> >> $ /export/bin/java -XX:+SegmentedCodeCache Test >> Code Heap 'non-methods' >> Code Heap 'profiled nmethods' >> Code Heap 'non-profiled nmethods' >> [...] >> >> Thanks, >> Tobias > From igor.veresov at oracle.com Mon Sep 29 17:49:19 2014 From: igor.veresov at oracle.com (Igor Veresov) Date: Mon, 29 Sep 2014 10:49:19 -0700 Subject: [8u40] 8058536 java/lang/instrument/NativeMethodPrefixAgent.java fails due to VirtualMachineError: out of space in CodeCache for method handle intrinsic Message-ID: <745E42F5-16D1-42E9-A565-34E21B773C6A@oracle.com> JDK9 changeset: http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/77c5da30c47b JBS: https://bugs.openjdk.java.net/browse/JDK-8058536 Webrev: http://cr.openjdk.java.net/~iveresov/8058536/webrev.00 Nightlies are ok, the patch doesn't need tweaking. Thanks! igor From vladimir.kozlov at oracle.com Mon Sep 29 18:07:22 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Mon, 29 Sep 2014 11:07:22 -0700 Subject: [8u40] 8058536 java/lang/instrument/NativeMethodPrefixAgent.java fails due to VirtualMachineError: out of space in CodeCache for method handle intrinsic In-Reply-To: <745E42F5-16D1-42E9-A565-34E21B773C6A@oracle.com> References: <745E42F5-16D1-42E9-A565-34E21B773C6A@oracle.com> Message-ID: <54299FDA.6010506@oracle.com> Looks good.
Thanks, Vladimir On 9/29/14 10:49 AM, Igor Veresov wrote: > JDK9 changeset: http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/77c5da30c47b > JBS: https://bugs.openjdk.java.net/browse/JDK-8058536 > Webrev: http://cr.openjdk.java.net/~iveresov/8058536/webrev.00 > > Nightlies are ok, the patch doesn't need tweaking. > > Thanks! > igor > From filipp.zhinkin at oracle.com Mon Sep 29 19:23:20 2014 From: filipp.zhinkin at oracle.com (Filipp Zhinkin) Date: Mon, 29 Sep 2014 23:23:20 +0400 Subject: [9] RFR(S): 8059137: MemoryPoolMXBeans for different code heaps should contain 'Code heap' in their names In-Reply-To: <542990F4.50306@oracle.com> References: <54292F8A.901@oracle.com> <542948E1.8000206@oracle.com> <542990F4.50306@oracle.com> Message-ID: <5429B1A8.9030906@oracle.com> Vladimir, yes, I'm fine with that. Previously there was a single 'code cache' and now it is divided into several code heaps, so the naming convention used for the memory pools looks pretty natural. Thanks, Filipp. On 29.09.2014 21:03, Vladimir Kozlov wrote: > Filipp, > > Are you okay with this? > > The name will be 'Code Cache' in non-segmented > case (as before > segmented code cache implementation). > But the name for segmented case will start > with 'Code Heap'. See > examples in RFR. > Should both cases have the same name? I am asking you since you filed > the RFE. > > Thanks, > Vladimir > > On 9/29/14 4:56 AM, Filipp Zhinkin wrote: >> Hi Tobias, >> >> thank you for taking care of it. The change looks good. >> >> Regards, >> Filipp. >> >> On 09/29/2014 02:08 PM, Tobias Hartmann wrote: >>> Hi, >>> >>> please review the following patch. >>> >>> Bug: https://bugs.openjdk.java.net/browse/JDK-8059137 >>> Webrev: http://cr.openjdk.java.net/~thartmann/8059137/webrev.00/ >>> >>> == Problem == >>> The segmented code cache implementation registers a memory pool for >>> each code heap. To be consistent with the >>> "non-segmented" output, the names of these pools should contain the >>> word "code heap".
>>> >>> == Solution == >>> I added "Code Heap" to the name of the segments. The output now >>> looks like this: >>> >>> $ /export/bin/java -XX:-SegmentedCodeCache Test >>> Code Cache >>> [...] >>> >>> $ /export/bin/java -XX:+SegmentedCodeCache Test >>> Code Heap 'non-methods' >>> Code Heap 'profiled nmethods' >>> Code Heap 'non-profiled nmethods' >>> [...] >>> >>> Thanks, >>> Tobias >> From filipp.zhinkin at oracle.com Mon Sep 29 19:23:59 2014 From: filipp.zhinkin at oracle.com (Filipp Zhinkin) Date: Mon, 29 Sep 2014 23:23:59 +0400 Subject: [8u40] RFR (XS): 8059226 : Names of rtm_state_change and unstable_if deoptimization reasons were swapped in 8u40 In-Reply-To: <54298A6F.7090701@oracle.com> References: <54296248.8030106@oracle.com> <54298A6F.7090701@oracle.com> Message-ID: <5429B1CF.9070307@oracle.com> Vladimir, thank you for the review. Filipp. On 29.09.2014 20:35, Vladimir Kozlov wrote: > Looks good. Thank you for fixing this. > > Vladimir K > > On 9/29/14 6:44 AM, Filipp Zhinkin wrote: >> Hi, >> >> please review the fix for a glitch that happened during the 8030976 >> [1] backport >> to 8u40, after which the names of the rtm_state_change and unstable_if >> deoptimization >> reasons were swapped [2][3], so the rtm_state_change trap was logged in >> the compilation >> log as 'unstable_if' and vice versa. >> >> I've fixed the order of the DeoptReason values declaration so now it matches >> the order >> used in jdk9 and the names order in Deoptimization::_trap_reason_name. >> >> Bug id: https://bugs.openjdk.java.net/browse/JDK-8059226 >> Webrev: http://cr.openjdk.java.net/~fzhinkin/8059226/webrev.00/ >> Testing: JPRT, manual & automated using affected tests >> >> Thanks, >> Filipp.
>> >> [1] https://bugs.openjdk.java.net/browse/JDK-8030976 >> [2] http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/6ad207fd3e26#l4.6 >> [3] http://hg.openjdk.java.net/jdk8u/jdk8u/hotspot/rev/f6f9aec27858#l4.7 From vladimir.kozlov at oracle.com Mon Sep 29 19:40:42 2014 From: vladimir.kozlov at oracle.com (Vladimir Kozlov) Date: Mon, 29 Sep 2014 12:40:42 -0700 Subject: [9] RFR(S): 8059137: MemoryPoolMXBeans for different code heaps should contain 'Code heap' in their names In-Reply-To: <5429B1A8.9030906@oracle.com> References: <54292F8A.901@oracle.com> <542948E1.8000206@oracle.com> <542990F4.50306@oracle.com> <5429B1A8.9030906@oracle.com> Message-ID: <5429B5BA.8050104@oracle.com> Tobias, Your changes look good then. Thanks, Vladimir On 9/29/14 12:23 PM, Filipp Zhinkin wrote: > Vladimir, > > yes, I'm fine with that. > > Previously there was a single 'code cache' > and now it is divided into several code heaps, > so naming convention used for memory pools looks pretty natural. > > Thanks, > Filipp. > > On 29.09.2014 21:03, Vladimir Kozlov wrote: >> Filipp, >> >> Are you okay with this? >> >> The name will be 'Code Cache' in non-segmented case (as before segmented code cache implementation). >> But the name for segmented case will start with 'Code Heap'. See examples in RFR. >> Should both cases have the same name? I am asking you since you filed the RFE. >> >> Thanks, >> Vladimir >> >> On 9/29/14 4:56 AM, Filipp Zhinkin wrote: >>> Hi Tobias, >>> >>> thank you for taking care of it. The change looks good. >>> >>> Regards, >>> Filipp. >>> >>> On 09/29/2014 02:08 PM, Tobias Hartmann wrote: >>>> Hi, >>>> >>>> please review the following patch. >>>> >>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8059137 >>>> Webrev: http://cr.openjdk.java.net/~thartmann/8059137/webrev.00/ >>>> >>>> == Problem == >>>> The segmented code cache implementation registers a memory pool for each code heap.
To be consistent with the >>>> "non-segmented" output, the names of these pools should contain the word "code heap". >>>> >>>> == Solution == >>>> I added "Code Heap" to the name of the segments. The output now looks like this: >>>> >>>> $ /export/bin/java -XX:-SegmentedCodeCache Test >>>> Code Cache >>>> [...] >>>> >>>> $ /export/bin/java -XX:+SegmentedCodeCache Test >>>> Code Heap 'non-methods' >>>> Code Heap 'profiled nmethods' >>>> Code Heap 'non-profiled nmethods' >>>> [...] >>>> >>>> Thanks, >>>> Tobias >>> > From tobias.hartmann at oracle.com Tue Sep 30 05:06:57 2014 From: tobias.hartmann at oracle.com (Tobias Hartmann) Date: Tue, 30 Sep 2014 07:06:57 +0200 Subject: [9] RFR(S): 8059137: MemoryPoolMXBeans for different code heaps should contain 'Code heap' in their names In-Reply-To: <5429B5BA.8050104@oracle.com> References: <54292F8A.901@oracle.com> <542948E1.8000206@oracle.com> <542990F4.50306@oracle.com> <5429B1A8.9030906@oracle.com> <5429B5BA.8050104@oracle.com> Message-ID: <542A3A71.7010703@oracle.com> Thank you, Vladimir. Best, Tobias On 29.09.2014 21:40, Vladimir Kozlov wrote: > Tobias, > > Your changes look good then. > > Thanks, > Vladimir > > On 9/29/14 12:23 PM, Filipp Zhinkin wrote: >> Vladimir, >> >> yes, I'm fine with that. >> >> Previously there was a single 'code cache' >> and now it is divided into several code heaps, >> so naming convention used for memory pools looks pretty natural. >> >> Thanks, >> Filipp. >> >> On 29.09.2014 21:03, Vladimir Kozlov wrote: >>> Filipp, >>> >>> Are you okay with this? >>> >>> The name will be 'Code Cache' in non-segmented >>> case (as before segmented code cache implementation). >>> But the name for segmented case will start with 'Code Heap'. See >>> examples in RFR. >>> Should both cases have the same name? I am asking you since you >>> filed the RFE. >>> >>> Thanks, >>> Vladimir >>> >>> On 9/29/14 4:56 AM, Filipp Zhinkin wrote: >>>> Hi Tobias, >>>> >>>> thank you for taking care of it. The change looks good.
>>>> >>>> Regards, >>>> Filipp. >>>> >>>> On 09/29/2014 02:08 PM, Tobias Hartmann wrote: >>>>> Hi, >>>>> >>>>> please review the following patch. >>>>> >>>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8059137 >>>>> Webrev: http://cr.openjdk.java.net/~thartmann/8059137/webrev.00/ >>>>> >>>>> == Problem == >>>>> The segmented code cache implementation registers a memory pool >>>>> for each code heap. To be consistent with the >>>>> "non-segmented" output, the names of these pools should contain >>>>> the word "code heap". >>>>> >>>>> == Solution == >>>>> I added "Code Heap" to the name of the segments. The output now >>>>> looks like this: >>>>> >>>>> $ /export/bin/java -XX:-SegmentedCodeCache Test >>>>> Code Cache >>>>> [...] >>>>> >>>>> $ /export/bin/java -XX:+SegmentedCodeCache Test >>>>> Code Heap 'non-methods' >>>>> Code Heap 'profiled nmethods' >>>>> Code Heap 'non-profiled nmethods' >>>>> [...] >>>>> >>>>> Thanks, >>>>> Tobias >>>> >> From erik.helin at oracle.com Tue Sep 30 12:43:17 2014 From: erik.helin at oracle.com (Erik Helin) Date: Tue, 30 Sep 2014 14:43:17 +0200 Subject: RFR: 8049599: MetaspaceGC::_capacity_until_GC can overflow In-Reply-To: <5422F225.1090105@oracle.com> References: <53F4780D.9040005@oracle.com> <5422F225.1090105@oracle.com> Message-ID: <542AA565.2070608@oracle.com> All, got some great feedback from StefanK: - Use cmpxchg_ptr instead of cmpxchg - Add a comment describing the while loop in expand_and_allocate - Add a comment in the test describing the overflow attempt - Change the loop in expand_and_allocate to do/while - Shorten the names of the local variables in expand_and_allocate The result can be seen in the following webrevs: - full: http://cr.openjdk.java.net/~ehelin/8049599/webrev.02/ - inc: http://cr.openjdk.java.net/~ehelin/8049599/webrev.01-02/ Thanks, Erik On 2014-09-24 18:32, Erik Helin wrote: > All, > > I've reworked the patch quite a bit based on (great!) internal feedback > from StefanK and Mikael Gerdin. 
The patch still uses an overflow check > and a CAS to update the high-water mark (HWM), but the new behavior > should be the same as the old one (which used Atomic::add_ptr). > > With the current code, each thread always increments the HWM, but there > is a race in that another thread can allocate metadata (due to the > increased HWM) before the thread that increased the HWM gets around to > allocate. With the new code, each thread will increase the pointer at > most once using a CAS. Even if increasing the HWM fails, the allocation > attempt might still succeed (for the reason described above). > > There is a theoretical problem of starvation in the new code: a thread > might forever fail to increase the HWM and forever fail to allocate due > to contention, but in practice this should not be a problem. In the > current code, Atomic::add_ptr is implemented as a CAS in a > (theoretically) never ending loop on non-x86 CPUs, so the same > theoretical starvation problem is present in the current code as well > (on non-x86 CPUs that is). > > Webrevs: > - full: > http://cr.openjdk.java.net/~ehelin/8049599/webrev.01/ > - incremental: > http://cr.openjdk.java.net/~ehelin/8049599/webrev.00-01/ > > Testing: > - JPRT > - Aurora: > - Kitchensink > - Weblogic+Medrec > - runThese > - vm.quick, regression, gc, compiler, runtime, parallel class loading, > metaspace, oom > - JTReg tests > - Running newly added JTREG test > > Thanks, > Erik > > On 2014-08-20 12:27, Erik Helin wrote: >> Hi all, >> >> this patch fixes a problem where Metaspace::_capacityUntilGC can >> overflow ("wrap around"). Since _capacityUntilGC is treated as a size_t >> everywhere it is used, we won't calculate with negative numbers, but a >> potential wrap around will still cause unnecessary GCs. >> >> The problem is solved by detecting a potential wrap around in >> Metaspace::incCapacityUntilGC. The overflow check means that >> _capacityUntilGC now must be updated with a CAS.
If the CAS fails more >> than five times due to contention, no update will be done, because this >> means that other threads must have incremented _capacityUntilGC (it is >> decremented only during a safepoint). This also means that a thread >> calling incCapacityUntilGC might have "its" requested memory "stolen" by >> another thread, but incCapacityUntilGC has never given any fairness >> guarantees. >> >> The patch also adds two functions to the WhiteBox API to be able to >> update and read Metaspace::_capacityUntilGC from a JTREG test. >> >> Bug: >> https://bugs.openjdk.java.net/browse/JDK-8049599 >> >> Webrev: >> http://cr.openjdk.java.net/~ehelin/8049599/webrev.00/ >> >> Testing: >> - JPRT >> - Aurora ad-hoc testing (on all platforms, both 32-bit and 64-bit): >> - Kitchensink, runThese and Dacapo >> - JTREG tests >> - Parallel Class Loading testlist >> - GC, runtime and compiler testlists >> - OOM and stress testlists >> - Running newly added JTREG test >> >> Thanks, >> Erik From stefan.karlsson at oracle.com Tue Sep 30 13:11:21 2014 From: stefan.karlsson at oracle.com (Stefan Karlsson) Date: Tue, 30 Sep 2014 15:11:21 +0200 Subject: RFR: 8049599: MetaspaceGC::_capacity_until_GC can overflow In-Reply-To: <542AA565.2070608@oracle.com> References: <53F4780D.9040005@oracle.com> <5422F225.1090105@oracle.com> <542AA565.2070608@oracle.com> Message-ID: <542AABF9.2090700@oracle.com> On 2014-09-30 14:43, Erik Helin wrote: > All, > > got some great feedback from StefanK: > - Use cmpxchg_ptr instead of cmpxchg > - Add a comment describing the while loop in expand_and_allocate > - Add a comment in the test describing the overflow attempt > - Change the loop in expand_and_allocate to do/while > - Shorten the names of the local variables in expand_and_allocate > > The result can be seen in the following webrevs: > - full: http://cr.openjdk.java.net/~ehelin/8049599/webrev.02/ > - inc: http://cr.openjdk.java.net/~ehelin/8049599/webrev.01-02/ Looks good. 
thanks, StefanK > > Thanks, > Erik > > On 2014-09-24 18:32, Erik Helin wrote: >> All, >> >> I've reworked the patch quite a bit based on (great!) internal feedback >> from StefanK and Mikael Gerdin. The patch still uses an overflow check >> and a CAS to update the high-water mark (HWM), but the new behavior >> should be the same as the old one (which used Atomic::add_ptr). >> >> With the current code, each thread always increments the HWM, but there >> is a race in that another thread can allocate metadata (due to the >> increased HWM) before the thread that increased the HWM gets around to >> allocate. With the new code, each thread will increase the pointer at >> most once using a CAS. Even if increasing the HWM fails, the allocation >> attempt might still succeed (for the reason described above). >> >> There is a theoretical problem of starvation in the new code, a thread >> might forever fail to increase the HWM and forever fail to allocate due >> to contention, but in practice this should not be a problem. In the >> current code, Atomic::add_ptr is implemented as a CAS in a >> (theoretically) never ending loop on non-x86 CPUs, so the same >> theoretical starvation problem is present in the current code as well >> (on non-x86 CPUs that is). >> >> Webrevs: >> - full: >> http://cr.openjdk.java.net/~ehelin/8049599/webrev.01/ >> - incremental: >> http://cr.openjdk.java.net/~ehelin/8049599/webrev.00-01/ >> >> Testing: >> - JPRT >> - Aurora: >> - Kitchensink >> - Weblogic+Medrec >> - runThese >> - vm.quick, regression, gc, compiler, runtime, parallel class >> loading, >> metaspace, oom >> - JTReg tests >> - Running newly added JTREG test >> >> Thanks, >> Erik >> >> On 2014-08-20 12:27, Erik Helin wrote: >>> Hi all, >>> >>> this patch fixes a problem where Metaspace::_capacityUntilGC can >>> overflow ("wrap around"). 
Since _capacityUntilGC is treated as a size_t >>> everywhere it is used, we won't calculate with negative numbers, but a >>> potential wrap around will still cause unnecessary GCs. >>> >>> The problem is solved by detecting a potential wrap around in >>> Metaspace::incCapacityUntilGC. The overflow check means that >>> _capacityUntilGC now must be updated with a CAS. If the CAS fails more >>> than five times due to contention, no update will be done, because this >>> means that other threads must have incremented _capacityUntilGC (it is >>> decremented only during a safepoint). This also means that a thread >>> calling incCapacityUntilGC might have "its" requested memory >>> "stolen" by >>> another thread, but incCapacityUntilGC has never given any fairness >>> guarantees. >>> >>> The patch also adds two functions to the WhiteBox API to be able to >>> update and read Metaspace::_capacityUntilGC from a JTREG test. >>> >>> Bug: >>> https://bugs.openjdk.java.net/browse/JDK-8049599 >>> >>> Webrev: >>> http://cr.openjdk.java.net/~ehelin/8049599/webrev.00/ >>> >>> Testing: >>> - JPRT >>> - Aurora ad-hoc testing (on all platforms, both 32-bit and 64-bit): >>> - Kitchensink, runThese and Dacapo >>> - JTREG tests >>> - Parallel Class Loading testlist >>> - GC, runtime and compiler testlists >>> - OOM and stress testlists >>> - Running newly added JTREG test >>> >>> Thanks, >>> Erik From george.triantafillou at oracle.com Tue Sep 30 14:06:19 2014 From: george.triantafillou at oracle.com (George Triantafillou) Date: Tue, 30 Sep 2014 10:06:19 -0400 Subject: RFR: 8058606 Detailed Native Memory Tracking (NMT) data is not output at VM exit In-Reply-To: <54295BD9.9060609@oracle.com> References: <542948B8.107@oracle.com> <54295BD9.9060609@oracle.com> Message-ID: <542AB8DB.7010207@oracle.com> Thanks Lois, I've incorporated your suggested changes. I've also moved the functionality of the test VerifyDetailSummaryOnExit.java to the existing test PrintNMTStatistics.java.
After an offline discussion with Christian about how this change could affect error reporting in vmError.cpp, I've run a more extensive set of tests to verify the correct output when the VM crashes. You can take a look at the changes here: New webrev: http://cr.openjdk.java.net/~gtriantafill/8058606/webrev.01/ Thanks. -George On 9/29/2014 9:17 AM, Lois Foltan wrote: > Hi George, > > src/share/vm/services/memTracker.cpp > - I don't see where the variable mem_baseline is initialized > before you invoke the method baseline()? I am not > overly familiar with NMT but it looks like your might need to do > something like: MemBaseline& baseline = MemTracker::get_baseline(); > > - Your indentation for your edits at least in the webrev looks > very off > > Thanks, > Lois > > On 9/29/2014 7:55 AM, George Triantafillou wrote: >> Please review this fix for JDK-8058606. The output from the >> -XX:NativeMemoryTracking=detail option now outputs detailed tracking >> information at VM exit. Previously, only summary tracking information >> was output. >> >> A new test was added to verify the output from both summary and >> detail tracking options. >> >> Bug: https://bugs.openjdk.java.net/browse/JDK-8058606 >> Webrev: http://cr.openjdk.java.net/~gtriantafill/8058606/webrev/ >> >> The fix >> was tested locally on Linux with jtreg and the JPRT hotspot testset. >> >> -George > From frederic.parain at oracle.com Tue Sep 30 14:40:40 2014 From: frederic.parain at oracle.com (Frederic Parain) Date: Tue, 30 Sep 2014 16:40:40 +0200 Subject: RFR(L): JDK-8057777 Cleanup of old and unused VM interfaces Message-ID: <542AC0E8.9040409@oracle.com> Hi all, Please review changes for bug JDK-8057777 "Cleanup of old and unused VM interfaces" CR: https://bugs.openjdk.java.net/browse/JDK-8057777 This is basically a big cleanup of VM interfaces that are not used anymore by the JDK but have been kept in our code base for historical reasons (HotSpot Express for instance). 
These changesets remove these interfaces from both the JDK and the HotSpot side, and also perform some cleanup on code that directly referenced the removed interfaces. These changes do not modify the behavior of the Java classes impacted by the cleanup. VM interfaces removal has been approved by CCC and a Release Note has been prepared that explicitly lists all the removed interfaces. Testing: JPRT hotspot + core, vm.quick.testlist, jdk_core Webrevs: http://cr.openjdk.java.net/~fparain/8057777/ Thank you, Fred -- Frederic Parain - Oracle Grenoble Engineering Center - France Phone: +33 4 76 18 81 17 Email: Frederic.Parain at oracle.com From aleksey.shipilev at oracle.com Tue Sep 30 15:03:40 2014 From: aleksey.shipilev at oracle.com (Aleksey Shipilev) Date: Tue, 30 Sep 2014 19:03:40 +0400 Subject: RFR (XS) 8059474: Clean up vm/utilities/Bitmap type uses Message-ID: <542AC64C.1090401@oracle.com> Hi, Not sure which group this belongs to, using the generic hotspot-dev at . vm/utilities/BitMap inconsistencies bugged me for quite some time: the mention of naked uintptr_t instead of properly typedef-ed bm_word_t alias; casting AllBits/NoBits constants of (luckily) the same type; other things like a benign inconsistency in init_pop_count_table new/free, etc. Here is a cleanup, please review: http://cr.openjdk.java.net/~shade/8059474/webrev.00/ https://bugs.openjdk.java.net/browse/JDK-8059474 Testing: JPRT, Nashorn/Octane on Linux/x86_64/fastdebug. Thanks, -Aleksey. From lois.foltan at oracle.com Tue Sep 30 17:06:02 2014 From: lois.foltan at oracle.com (Lois Foltan) Date: Tue, 30 Sep 2014 13:06:02 -0400 Subject: RFR: 8058606 Detailed Native Memory Tracking (NMT) data is not output at VM exit In-Reply-To: <542AB8DB.7010207@oracle.com> References: <542948B8.107@oracle.com> <54295BD9.9060609@oracle.com> <542AB8DB.7010207@oracle.com> Message-ID: <542AE2FA.6080503@oracle.com> Hi George, Looks good! One minor comment.
Can you check the indentation of the "rptr.report();" statement within the newly added else clause of MemTracker::final_report(). It looks like it needs to be indented two spaces. I don't need to see another webrev though, reviewed. Thanks, Lois On 9/30/2014 10:06 AM, George Triantafillou wrote: > Thanks Lois, I've incorporated your suggested changes. I've also > moved the functionality of the test VerifyDetailSummaryOnExit.java to > the existing test PrintNMTStatistics.java. > > After an offline discussion with Christian about how this change could > affect error reporting in vmError.cpp, I've run a more extensive set > of tests to verify the correct output when the VM crashes. You can > take a look at the changes here: > > New webrev: > http://cr.openjdk.java.net/~gtriantafill/8058606/webrev.01/ > > > Thanks. > > -George > > On 9/29/2014 9:17 AM, Lois Foltan wrote: >> Hi George, >> >> src/share/vm/services/memTracker.cpp >> - I don't see where the variable mem_baseline is initialized >> before you invoke the method baseline()? I am not >> overly familiar with NMT but it looks like your might need to >> do something like: MemBaseline& baseline = MemTracker::get_baseline(); >> >> - Your indentation for your edits at least in the webrev looks >> very off >> >> Thanks, >> Lois >> >> On 9/29/2014 7:55 AM, George Triantafillou wrote: >>> Please review this fix for JDK-8058606. The output from the >>> -XX:NativeMemoryTracking=detail option now outputs detailed tracking >>> information at VM exit. Previously, only summary tracking >>> information was output. >>> >>> A new test was added to verify the output from both summary and >>> detail tracking options. >>> >>> Bug: https://bugs.openjdk.java.net/browse/JDK-8058606 >>> Webrev: http://cr.openjdk.java.net/~gtriantafill/8058606/webrev/ >>> >>> The fix >>> was tested locally on Linux with jtreg and the JPRT hotspot testset. 
>>> >>> -George >> > From george.triantafillou at oracle.com Tue Sep 30 18:21:09 2014 From: george.triantafillou at oracle.com (George Triantafillou) Date: Tue, 30 Sep 2014 14:21:09 -0400 Subject: RFR: 8058606 Detailed Native Memory Tracking (NMT) data is not output at VM exit In-Reply-To: <542AE2FA.6080503@oracle.com> References: <542948B8.107@oracle.com> <54295BD9.9060609@oracle.com> <542AB8DB.7010207@oracle.com> <542AE2FA.6080503@oracle.com> Message-ID: <542AF495.8040202@oracle.com> Hi Lois, Thanks for the comment. I checked my source and the (sdiff) webrev in Chrome and found the indentation to be correct. -George On 9/30/2014 1:06 PM, Lois Foltan wrote: > Hi George, > > Looks good! One minor comment. Can you check the indentation of the > "rptr.report();" statement within the newly added else clause of > MemTracker::final_report(). It looks like it needs to be indented two > spaces. I don't need to see another webrev though, reviewed. > > Thanks, > Lois > > On 9/30/2014 10:06 AM, George Triantafillou wrote: >> Thanks Lois, I've incorporated your suggested changes. I've also >> moved the functionality of the test VerifyDetailSummaryOnExit.java to >> the existing test PrintNMTStatistics.java. >> >> After an offline discussion with Christian about how this change >> could affect error reporting in vmError.cpp, I've run a more >> extensive set of tests to verify the correct output when the VM >> crashes. You can take a look at the changes here: >> >> New webrev: >> http://cr.openjdk.java.net/~gtriantafill/8058606/webrev.01/ >> >> >> Thanks. >> >> -George >> >> On 9/29/2014 9:17 AM, Lois Foltan wrote: >>> Hi George, >>> >>> src/share/vm/services/memTracker.cpp >>> - I don't see where the variable mem_baseline is initialized >>> before you invoke the method baseline()? 
I am not >>> overly familiar with NMT but it looks like your might need to >>> do something like: MemBaseline& baseline = MemTracker::get_baseline(); >>> >>> - Your indentation for your edits at least in the webrev looks >>> very off >>> >>> Thanks, >>> Lois >>> >>> On 9/29/2014 7:55 AM, George Triantafillou wrote: >>>> Please review this fix for JDK-8058606. The output from the >>>> -XX:NativeMemoryTracking=detail option now outputs detailed >>>> tracking information at VM exit. Previously, only summary tracking >>>> information was output. >>>> >>>> A new test was added to verify the output from both summary and >>>> detail tracking options. >>>> >>>> Bug: https://bugs.openjdk.java.net/browse/JDK-8058606 >>>> Webrev: http://cr.openjdk.java.net/~gtriantafill/8058606/webrev/ >>>> >>>> The fix >>>> was tested locally on Linux with jtreg and the JPRT hotspot testset. >>>> >>>> -George >>> >> > From jon.masamitsu at oracle.com Tue Sep 30 19:42:06 2014 From: jon.masamitsu at oracle.com (Jon Masamitsu) Date: Tue, 30 Sep 2014 12:42:06 -0700 Subject: RFR: 8049599: MetaspaceGC::_capacity_until_GC can overflow In-Reply-To: <542AA565.2070608@oracle.com> References: <53F4780D.9040005@oracle.com> <5422F225.1090105@oracle.com> <542AA565.2070608@oracle.com> Message-ID: <542B078E.6050400@oracle.com> Erik, Changes look good. Reviewed. 
Jon On 9/30/2014 5:43 AM, Erik Helin wrote: > All, > > got some great feedback from StefanK: > - Use cmpxchg_ptr instead of cmpxchg > - Add a comment describing the while loop in expand_and_allocate > - Add a comment in the test describing the overflow attempt > - Change the loop in expand_and_allocate to do/while > - Shorten the names of the local variables in expand_and_allocate > > The result can be seen in the following webrevs: > - full: http://cr.openjdk.java.net/~ehelin/8049599/webrev.02/ > - inc: http://cr.openjdk.java.net/~ehelin/8049599/webrev.01-02/ > > Thanks, > Erik > > On 2014-09-24 18:32, Erik Helin wrote: >> All, >> >> I've reworked the patch quite a bit based on (great!) internal feedback >> from StefanK and Mikael Gerdin. The patch still uses an overflow check >> and a CAS to update the high-water mark (HWM), but the new behavior >> should be the same as the old one (which used Atomic::add_ptr). >> >> With the current code, each thread always increments the HWM, but there >> is a race in that another thread can allocate metadata (due to the >> increased HWM) before the thread that increased the HWM gets around to >> allocate. With the new code, each thread will increase the pointer at >> most once using a CAS. Even if increasing the HWM fails, the allocation >> attempt might still succeed (for the reason described above). >> >> There is a theoretical problem of starvation in the new code, a thread >> might forever fail to increase the HWM and forever fail to allocate due >> to contention, but in practice this should not be a problem. In the >> current code, Atomic::add_ptr is implemented as a CAS in a >> (theoretically) never ending loop on non-x86 CPUs, so the same >> theoretical starvation problem is present in the current code as well >> (on non-x86 CPUs that is). 
>> >> Webrevs: >> - full: >> http://cr.openjdk.java.net/~ehelin/8049599/webrev.01/ >> - incremental: >> http://cr.openjdk.java.net/~ehelin/8049599/webrev.00-01/ >> >> Testing: >> - JPRT >> - Aurora: >> - Kitchensink >> - Weblogic+Medrec >> - runThese >> - vm.quick, regression, gc, compiler, runtime, parallel class >> loading, >> metaspace, oom >> - JTReg tests >> - Running newly added JTREG test >> >> Thanks, >> Erik >> >> On 2014-08-20 12:27, Erik Helin wrote: >>> Hi all, >>> >>> this patch fixes a problem where Metaspace::_capacityUntilGC can >>> overflow ("wrap around"). Since _capacityUntilGC is treated as a size_t >>> everywhere it is used, we won't calculate with negative numbers, but a >>> potential wrap around will still cause unnecessary GCs. >>> >>> The problem is solved by detecting a potential wrap around in >>> Metaspace::incCapacityUntilGC. The overflow check means that >>> _capacityUntilGC now must be updated with a CAS. If the CAS fails more >>> than five times due to contention, no update will be done, because this >>> means that other threads must have incremented _capacityUntilGC (it is >>> decremented only during a safepoint). This also means that a thread >>> calling incCapacityUntilGC might have "its" requested memory >>> "stolen" by >>> another thread, but incCapacityUntilGC has never given any fairness >>> guarantees. >>> >>> The patch also adds two functions to the WhiteBox API to be able to >>> update and read Metaspace::_capacityUntilGC from a JTREG test.
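[Editor's note: the overflow-checked, bounded-retry CAS update of the high-water mark described in the quoted review can be sketched roughly as below. This is an illustrative sketch only — it uses std::atomic rather than HotSpot's Atomic::cmpxchg_ptr, and the names (capacity_until_gc, inc_capacity_until_gc) are hypothetical stand-ins, not the actual incCapacityUntilGC code from the webrev.]

```cpp
#include <atomic>
#include <cassert>
#include <cstddef>
#include <cstdint>

// Hypothetical stand-in for MetaspaceGC::_capacity_until_GC.
static std::atomic<size_t> capacity_until_gc{0};

// Try to raise the high-water mark by 'v'. Refuse the update if the
// addition would wrap around, and give up after five failed CAS
// attempts: repeated failures mean other threads have already raised
// the mark, so the caller's allocation attempt may succeed anyway.
bool inc_capacity_until_gc(size_t v, size_t* new_value) {
  size_t old_cap = capacity_until_gc.load();
  for (int attempts = 0; attempts < 5; attempts++) {
    size_t new_cap = old_cap + v;
    if (new_cap < old_cap) {
      return false;  // overflow ("wrap around") detected, no update
    }
    // On failure, compare_exchange_strong reloads old_cap for the retry.
    if (capacity_until_gc.compare_exchange_strong(old_cap, new_cap)) {
      *new_value = new_cap;
      return true;   // we raised the mark ourselves
    }
  }
  return false;      // contended: other threads raised the mark instead
}
```

Giving up on contention is safe only because, as the mail points out, the mark is decremented exclusively at a safepoint, so a failed CAS here always means the mark went up, never down.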
>>> >>> Bug: >>> https://bugs.openjdk.java.net/browse/JDK-8049599 >>> >>> Webrev: >>> http://cr.openjdk.java.net/~ehelin/8049599/webrev.00/ >>> >>> Testing: >>> - JPRT >>> - Aurora ad-hoc testing (on all platforms, both 32-bit and 64-bit): >>> - Kitchensink, runThese and Dacapo >>> - JTREG tests >>> - Parallel Class Loading testlist >>> - GC, runtime and compiler testlists >>> - OOM and stress testlists >>> - Running newly added JTREG test >>> >>> Thanks, >>> Erik From coleen.phillimore at oracle.com Tue Sep 30 22:01:37 2014 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Tue, 30 Sep 2014 18:01:37 -0400 Subject: RFR(L): JDK-8057777 Cleanup of old and unused VM interfaces In-Reply-To: <542AC0E8.9040409@oracle.com> References: <542AC0E8.9040409@oracle.com> Message-ID: <542B2841.2080401@oracle.com> Fred, I reviewed this change. It looks great. Some of the functions removed seem to be not only unused but dangerous. I have made changes to some of these without realizing that the JVM didn't use them. Thank you for doing this! Coleen On 9/30/14, 10:40 AM, Frederic Parain wrote: > Hi all, > > Please review changes for bug JDK-8057777 "Cleanup of old > and unused VM interfaces" > > CR: > https://bugs.openjdk.java.net/browse/JDK-8057777 > > This is basically a big cleanup of VM interfaces that are > not used anymore by the JDK but have been kept in our code > base for historical reasons (HotSpot Express for instance). > These changesets remove these interfaces from both the > JDK and the HotSpot side, and also perform some cleanup > on code that directly referenced the removed interfaces. > > These changes do not modify the behavior of the Java > classes impacted by the cleanup. > > VM interfaces removal has been approved by CCC and > a Release Note has been prepared that explicitly lists > all the removed interfaces.
> > Testing: JPRT hotspot + core, vm.quick.testlist, jdk_core > > Webrevs: > http://cr.openjdk.java.net/~fparain/8057777/ > > Thank you, > > Fred > From coleen.phillimore at oracle.com Tue Sep 30 22:10:43 2014 From: coleen.phillimore at oracle.com (Coleen Phillimore) Date: Tue, 30 Sep 2014 18:10:43 -0400 Subject: RFR (XS) 8059474: Clean up vm/utilities/Bitmap type uses In-Reply-To: <542AC64C.1090401@oracle.com> References: <542AC64C.1090401@oracle.com> Message-ID: <542B2A63.2090500@oracle.com> Aleksey, This looks good to me. GC code uses BitMap a lot and JPRT tests the different collectors so I think this is fine. Someone from the GC team should look at this also. Thanks! Coleen On 9/30/14, 11:03 AM, Aleksey Shipilev wrote: > Hi, > > Not sure which group this belongs to, using the generic hotspot-dev at . > > vm/utilities/BitMap inconsistencies bugged me for quite some time: the > mention of naked uintptr_t instead of properly typedef-ed bm_word_t > alias; casting AllBits/NoBits constants of (luckily) the same type; > other things like a benign inconsistency in init_pop_count_table > new/free, etc. > > Here is a cleanup, please review: > http://cr.openjdk.java.net/~shade/8059474/webrev.00/ > https://bugs.openjdk.java.net/browse/JDK-8059474 > > Testing: > JPRT, Nashorn/Octane on Linux/x86_64/fastdebug. > > Thanks, > -Aleksey. >
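[Editor's note: the typedef discipline Aleksey's BitMap cleanup is after — one bm_word_t alias for the storage word instead of naked uintptr_t and ad-hoc casts, with AllBits/NoBits expressed in that same type — can be illustrated with a toy bitmap. This is a hypothetical miniature for illustration, not HotSpot's actual BitMap class.]

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

class MiniBitMap {
 public:
  // Single typedef for the map word; every constant and mask below is
  // expressed in terms of it, so no raw uintptr_t casts are needed.
  typedef uintptr_t bm_word_t;
  static const size_t BitsPerWord = sizeof(bm_word_t) * 8;
  static const bm_word_t NoBits  = bm_word_t(0);

  explicit MiniBitMap(size_t size_in_bits)
    : _size(size_in_bits),
      _map(new bm_word_t[word_count(size_in_bits)]()) {}  // zero-initialized
  ~MiniBitMap() { delete[] _map; }

  void set_bit(size_t bit)   { assert(bit < _size); _map[word_index(bit)] |=  bit_mask(bit); }
  void clear_bit(size_t bit) { assert(bit < _size); _map[word_index(bit)] &= ~bit_mask(bit); }
  bool at(size_t bit) const  { assert(bit < _size); return (_map[word_index(bit)] & bit_mask(bit)) != NoBits; }

 private:
  static size_t word_count(size_t bits) { return (bits + BitsPerWord - 1) / BitsPerWord; }
  static size_t word_index(size_t bit)  { return bit / BitsPerWord; }
  static bm_word_t bit_mask(size_t bit) { return bm_word_t(1) << (bit % BitsPerWord); }

  size_t _size;
  bm_word_t* _map;
};
```

Keeping the word type behind one alias means a future switch of the storage word (say, to a fixed-width type) touches a single line rather than scattered casts — which is the kind of consistency the review above is approving.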