RFR rev1 (M): 8078556: Runtime: implement ranges (optionally constraints) for those flags that have them missing

Lindenmaier, Goetz goetz.lindenmaier at sap.com
Thu Oct 15 07:34:03 UTC 2015


Hi Gerard, 

> From: gerard ziemski [mailto:gerard.ziemski at oracle.com]
> Sent: Mittwoch, 14. Oktober 2015 17:32
> 
> hi Goetz,
> 
> Great work, thank you for all the feedback, please see my answers inline:
> 
> 
> On 10/14/2015 04:00 AM, Lindenmaier, Goetz wrote:
> > Hi Gerard,
> >
> > I had a closer look at your change now, especially on the settings of the
> > stack protection zone sizes.
> >
> > You set the lower bounds of the stack protection pages > 1. Because of that,
> > the VM on PPC does not start up; it immediately fails with:
> >    intx StackYellowPages=1 is outside the allowed range [ 6 ... 11 ]
> >    intx StackShadowPages=1 is outside the allowed range [ 8 ... 38 ]
> >    Error: Could not create the Java Virtual Machine.
> >    Error: A fatal exception has occurred. Program will exit.
> >
> > The lower bounds of these flags must be '1'.  If vm_page_size() >
> > vm_default_page_size(), the numbers of pages are reduced.  This is
> > necessary on systems with page size 64K, else stacks get too big.  We
> > have such systems on ppc.  See also os_linux.cpp:4649.
> 
> PPC issue I will have to comment on later.
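
For reference, the reduction I mean looks roughly like this.  It is only a
sketch of the logic around os_linux.cpp:4649, not the exact code; the flag
and os:: names are real, the factor and rounding are illustrative:

  // Sketch: if the actual page size (e.g. 64K on our ppc systems) is larger
  // than the default page size the flag defaults assume, scale the page
  // counts down so the guard/shadow areas keep roughly the same byte size,
  // but never go below one page.
  if (os::vm_page_size() > os::vm_default_page_size()) {
    intx factor = os::vm_page_size() / os::vm_default_page_size();
    StackYellowPages = MAX2(StackYellowPages / factor, (intx)1);
    StackShadowPages = MAX2(StackShadowPages / factor, (intx)1);
  }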
> 
> > http://cr.openjdk.java.net/~gziemski/8078556_rev2/src/cpu/sparc/vm/globals_sparc.hpp.udiff.html
> > Why do you need to special-case for !LP64 on sparc?  As I understand it,
> > this is no longer supported in jdk9.
> 
> I'd prefer to keep the code in for completeness for those who will be reading
> that header.

I understand that, ok.

> > in globals.hpp:
> >
> > http://cr.openjdk.java.net/~gziemski/8078556_rev2/src/share/vm/runtime/globals.hpp.udiff.html
> >
> > You might want to use SIZE_MAX for MaxDirectMemorySize:
> >     product(size_t, MaxDirectMemorySize, 0,                                   \
> > +           range(0, (size_t)max_uintx)                                      \
> 
> I like that. I will see if all the platforms are happy with SIZE_MAX.

It's already used in shared code (arrayOop.hpp), so it should work.
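If it compiles everywhere, the snippet above would then simply read:

    product(size_t, MaxDirectMemorySize, 0,                                   \
            range(0, SIZE_MAX)                                                \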

> 
> > This lower bound setting crashes the VM on linux x86_64 and seems
> > pointless:
> >      product_pd(intx, ThreadStackSize,                                         \
> > +           range(0, max_intx-os::vm_page_size())                             \
> > bin/java -XX:ThreadStackSize=0
> > #  Internal Error (.../src/os/linux/vm/os_linux.cpp:720), pid=47259, tid=47260
> > #  assert(JavaThread::stack_size_at_create() > 0) failed: this should be set
> >
> 
> Yes, this is a follow-up issue tracked by
> https://bugs.openjdk.java.net/browse/JDK-8136766

If !=0 fails on Windows, and ==0 fails on Linux, you should leave out the
range altogether and only introduce it in 8136766.
(I didn't test the fix I proposed on Windows.)
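I.e., for now I would leave the declaration as it is in globals.hpp, without
the range, and add it back together with the 0-handling in 8136766.  Roughly
(the description string is quoted from memory):

    product_pd(intx, ThreadStackSize,                                         \
            "Thread Stack Size (in Kbytes)")                                  \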

...
> > # There is insufficient memory for the Java Runtime Environment to continue.
> > # Native memory allocation (malloc) failed to allocate 18 bytes for AllocateHeap
...
> 
> Right, this flag will be excluded from testing by the
> runtime/CommandLine/OptionsValidation test, as its valid values are in
> fact allowed to crash the VM for testing purposes, which is behavior that
> cannot be captured by the testing framework at the moment.

OK, I understand the test accepts these problems.  Basically this is ok.

...
> "Out of system resources" errors are in fact allowed. Still, is this a crash that
> dumps an error log? I do not see this
> issue in our internal testing, but I will verify on my local 64bit Linux (Ubuntu
> 14.04)

I found these issues by just grepping for hs_err* in the jtreg output.
The only one that stopped the test was -XX:ThreadStackSize=0, but
with my fix in http://mail.openjdk.java.net/pipermail/hotspot-runtime-dev/2015-October/016052.html
it passes.  (It still dumps hs_err files for the 'insufficient memory'
issue, though.)

Best regards,
  Goetz


