Nashorn increase in eval duration over time

Jan Bauer Nielsen jbn at dbc.dk
Thu Mar 10 12:13:02 UTC 2016


First of all, I'm sorry if this is not the right forum for this type of 
question, but I haven't been able to find a nashorn-user mailing list or 
anything like that.

We have an enterprise application (using GlassFish 4.1 as the application 
server) which sits in the middle of a processing pipeline, receiving data 
messages from a JMS queue and applying business logic in the form of 
substantial (30,000+ line) JavaScript modules to said data. The business 
logic is user defined, varies with the type of input format, and as such is 
not known to us at application startup. The setup in question allows for 8 
concurrent processor threads, where each thread evals into (and caches) 
one script engine per unique set of business logic (the cache actually 
lives at the EJB level rather than the thread level).
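For reference, the caching works roughly along the lines of the sketch 
below (class and method names are placeholders for illustration, not our 
actual EJB code):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import javax.script.ScriptException;

// Sketch: one Nashorn ScriptEngine cached per unique set of business logic.
public class EngineCache {

    private final ScriptEngineManager manager = new ScriptEngineManager();
    private final Map<String, ScriptEngine> engines = new ConcurrentHashMap<>();

    public ScriptEngine engineFor(String businessLogicId, String script)
            throws ScriptException {
        ScriptEngine cached = engines.get(businessLogicId);
        if (cached != null) {
            return cached;
        }
        ScriptEngine engine = manager.getEngineByName("nashorn");
        long start = System.currentTimeMillis();
        engine.eval(script); // the eval whose duration is logged below
        System.out.printf("Module eval took %d ms%n",
                          System.currentTimeMillis() - start);
        engines.put(businessLogicId, engine);
        return engine;
    }
}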

Submitting a batch requiring business-logic-x results in the eval times below:

[2016-03-10 09:15:41,756] [INFO] [p: thread-pool-1; w: 43] Module eval 
took 6540 ms
[2016-03-10 09:15:42,918] [INFO] [p: thread-pool-1; w: 39] Module eval 
took 7663 ms
[2016-03-10 09:15:44,188] [INFO] [p: thread-pool-1; w: 46] Module eval 
took 9000 ms
[2016-03-10 09:15:44,670] [INFO] [p: thread-pool-1; w: 42] Module eval 
took 9481 ms
[2016-03-10 09:15:44,920] [INFO] [p: thread-pool-1; w: 41] Module eval 
took 9681 ms
[2016-03-10 09:15:45,043] [INFO] [p: thread-pool-1; w: 40] Module eval 
took 9784 ms
[2016-03-10 09:15:45,196] [INFO] [p: thread-pool-1; w: 44] Module eval 
took 9944 ms
[2016-03-10 09:15:45,427] [INFO] [p: thread-pool-1; w: 45] Module eval 
took 10222 ms

Subsequently submitting another batch requiring business-logic-y, which 
forces more script engines to be created (simulated here by clearing the 
cached script engines from the application and resubmitting the batch for 
business-logic-x), results in much worse performance:

[2016-03-10 09:26:07,129] [INFO] [p: thread-pool-1; w: 31] Module eval 
took 226070 ms
[2016-03-10 09:26:14,760] [INFO] [p: thread-pool-1; w: 40] Module eval 
took 234753 ms
[2016-03-10 09:26:17,167] [INFO] [p: thread-pool-1; w: 26] Module eval 
took 230762 ms
[2016-03-10 09:26:22,174] [INFO] [p: thread-pool-1; w: 36] Module eval 
took 235614 ms
[2016-03-10 09:26:22,978] [INFO] [p: thread-pool-1; w: 30] Module eval 
took 238836 ms
[2016-03-10 09:26:28,909] [INFO] [p: thread-pool-1; w: 38] Module eval 
took 242011 ms
[2016-03-10 09:26:40,921] [INFO] [p: thread-pool-1; w: 32] Module eval 
took 252076 ms
[2016-03-10 09:26:50,700] [INFO] [p: thread-pool-1; w: 35] Module eval 
took 262625 ms

We are not seeing anything out of the ordinary with regard to GC, apart 
from the survivor space often being close to 100% full before objects are 
promoted to old gen. Heap and code cache are nowhere near exhausted.

Unfortunately we haven't been able to get Java Flight Recorder to work 
with this GlassFish installation, but jvisualvm samples indicate that 
80% of our time is being spent in 
java.lang.invoke.MethodHandleNatives.setCallSiteTargetNormal().

We have tried running both single-threaded and with a shared script 
engine creating separate script contexts, and although we see much better 
performance in these cases, we are still seeing an increase in eval times 
nonetheless (in the shared-engine case, the eval times of the second 
batch are a factor of 10 lower than the numbers above, but still a factor 
of 5 larger than for the initial batch).
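
The shared-engine variant is roughly the following (again only a sketch 
with placeholder names):

import javax.script.ScriptContext;
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import javax.script.ScriptException;
import javax.script.SimpleScriptContext;

// Sketch: a single shared Nashorn engine, with a fresh ScriptContext
// (its own ENGINE_SCOPE bindings) per set of business logic.
public class SharedEngineEval {

    private final ScriptEngine engine =
            new ScriptEngineManager().getEngineByName("nashorn");

    public ScriptContext evalInFreshContext(String script) throws ScriptException {
        ScriptContext context = new SimpleScriptContext();
        context.setBindings(engine.createBindings(), ScriptContext.ENGINE_SCOPE);
        long start = System.currentTimeMillis();
        engine.eval(script, context);
        System.out.printf("Module eval took %d ms%n",
                          System.currentTimeMillis() - start);
        return context;
    }
}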

We are running on Java 8 (HotSpot build 25.72-b15).

Any input as to what we might be doing wrong, and possibly how to avoid 
it, would be very much appreciated.

This problem has actually forced us to remain on the Rhino script engine 
for the time being.

Kind regards,

Jan Bauer Nielsen
Software developer
DBC as
http://www.dbc.dk/english

