Experimenting with running a java process on graalvm-0.11-dev-dk

Gilles Duboscq gilles.m.duboscq at oracle.com
Tue Jun 7 12:41:02 UTC 2016



On 07/06/16 12:16, Jerven Bolleman wrote:
> Hi Gilles,
> 
>> On 07 Jun 2016, at 11:50, Gilles Duboscq <gilles.m.duboscq at oracle.com> wrote:
>>
>> Hi Jerven,
>>
>> Thank you for the report!
>>
>> Can you give 0.12 [1] a try?
> Will do; I am off site and VPN to my office is blocked right now.
> I hope to be able to give you an answer by Monday next week.
>> Is there no smaller dataset we could test this on?
> Speed-wise, yes, definitely: 10GB should be enough for a significant result.
> The same goes for the Lucene error, but that would take a bit more
> work to make sure it replicates on smaller systems.
> 
> If you are interested in that, I can make a pure Java build
> (currently we use out-of-process XZ decompression).
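> A minimal sketch of the difference, assuming the XZ for Java library
> (org.tukaani.xz) for the in-process path; the class and method names here
> are illustrative, not our actual code:
> 
>     import java.io.BufferedInputStream;
>     import java.io.FileInputStream;
>     import java.io.IOException;
>     import java.io.InputStream;
>     import org.tukaani.xz.XZInputStream;
> 
>     public class XzSource {
>         // Out-of-process variant: spawn "xz -dc <file>" and stream its stdout.
>         static InputStream openExternal(String path) throws IOException {
>             Process p = new ProcessBuilder("xz", "-dc", path).start();
>             return p.getInputStream();
>         }
> 
>         // Pure-Java variant: decode in-process with the XZ for Java library,
>         // so all the work runs inside the JVM under test.
>         static InputStream openPureJava(String path) throws IOException {
>             return new XZInputStream(new BufferedInputStream(new FileInputStream(path)));
>         }
>     }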

If the 0.12 results are still bad, yes, that would be interesting.

 Gilles

> 
> Regards,
> Jerven
>>
>> Gilles
>>
>> [1] http://www.oracle.com/technetwork/oracle-labs/program-languages/downloads/index.html
>>
>> On 06/06/16 09:18, Jerven Tjalling Bolleman wrote:
>>> Dear Graal developers,
>>>
>>> Last week I started experimenting with running a largish Java application on Graal to see how it compares with standard HotSpot.
>>>
>>> There are two pieces of sad news I have to report. The first is that exceptions occur that are triggered only when running on Graal; specifically, they occur in the Lucene 4.10.4 code that our application uses.
>>>
>>> Exception in thread "Lucene Merge Thread #7" org.apache.lucene.index.MergePolicy$MergeException: java.lang.ArrayIndexOutOfBoundsException: 85
>>>      at org.apache.lucene.index.ConcurrentMergeScheduler.handleMergeException(ConcurrentMergeScheduler.java:549)
>>>      at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:522)
>>> Caused by: java.lang.ArrayIndexOutOfBoundsException: 85
>>>      at org.apache.lucene.codecs.lucene41.ForUtil.readBlock(ForUtil.java:206)
>>>      at org.apache.lucene.codecs.lucene41.Lucene41PostingsReader$BlockDocsAndPositionsEnum.refillDocs(Lucene41PostingsReader.java:711)
>>>      at org.apache.lucene.codecs.lucene41.Lucene41PostingsReader$BlockDocsAndPositionsEnum.nextDoc(Lucene41PostingsReader.java:780)
>>>      at org.apache.lucene.codecs.MappingMultiDocsAndPositionsEnum.nextDoc(MappingMultiDocsAndPositionsEnum.java:104)
>>>      at org.apache.lucene.codecs.PostingsConsumer.merge(PostingsConsumer.java:109)
>>>      at org.apache.lucene.codecs.TermsConsumer.merge(TermsConsumer.java:164)
>>>      at org.apache.lucene.codecs.FieldsConsumer.merge(FieldsConsumer.java:72)
>>>      at org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:399)
>>>      at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:112)
>>>      at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4223)
>>>      at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3811)
>>>      at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:409)
>>>      at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:486)
>>>
>>> This happens entirely in standard Lucene code, under heavy concurrency.
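>>>
>>> (One quick way to test whether the compiler is miscompiling that method
>>> would be to exclude it from JIT compilation and see if the exception
>>> disappears, e.g. with the standard HotSpot flag below; whether the
>>> Graal-enabled build honours it is an assumption on my part:
>>>
>>>     java -XX:CompileCommand=exclude,org/apache/lucene/codecs/lucene41/ForUtil.readBlock ...
>>>
>>> The class and method names are the ones from the stack trace above.)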
>>>
>>> The second piece of bad news is that, up to the point where this exception triggers, Graal 0.11 is about one sixth slower than HotSpot (1.8.0_74).
>>>
>>> Graal: 29 minutes 37 seconds; HotSpot: 25 minutes 14 seconds.
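>>>
>>> (As a sanity check on that fraction: 29m37s is 1777 seconds and 25m14s is
>>> 1514 seconds, and 1777 / 1514 is about 1.17, i.e. roughly a sixth slower.)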
>>>
>>> If you are interested, I can make both the code and data available for testing.
>>> The downside is that the app is large, has a lot of dependencies and data flowing
>>> through it (it needs 650GB of disk space), and runs for about 30-40 hours on our
>>> hardware. It is single-thread limited on HotSpot; on Graal it seems to be slower
>>> in a fully threaded part.
>>>
>>> I can also give two different sampler profiles that might point to where the problem lies.
>>>
>>> If you have some kind of tutorial on how to retrieve the assembly for the code in the stack trace above, I would be interested in helping out that way as well.
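>>>
>>> (On stock HotSpot I would expect something like the following to work, given
>>> an hsdis disassembler binary on the library path; whether the Graal build
>>> supports the same flags is exactly what I would need the tutorial for:
>>>
>>>     java -XX:CompileCommand=print,org/apache/lucene/codecs/lucene41/ForUtil.readBlock ...
>>>
>>> or -XX:+UnlockDiagnosticVMOptions -XX:+PrintAssembly to dump everything.)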
>>>
>>> I hope that this kind of feedback is useful and that I can make it actionable for you.
>>>
>>> The hardware is a 24-core AMD Opteron 6348 machine with 256GB of RAM. If you are curious, the app is www.uniprot.org. For this run, 16GB of that RAM was set aside for the JVM's heap.
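>>> (By "set aside" I mean the standard maximum-heap flag, i.e. java -Xmx16g, or
>>> its equivalent.)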
>>>
>>> Regards,
>>> Jerven
> 

