pure functions and the parallel evaluation of reduce-like operations

Jim Mayer jim at pentastich.org
Fri Nov 26 18:39:37 PST 2010


Hi all,

In the recent debate on capturing mutable local variables, the "reduce"
function was used as an example application of functional programming,
sometimes with an implication that the form could be evaluated in parallel.
 It seems to me, though, that a parallel evaluation scheme is applicable
only if the operator is associative (or, better, both commutative and
associative).  There is no question that parallel operations are easier and
more efficient with "pure" functions, but a functional style alone is not
sufficient to enable parallelism.  In fact, in Java, even addition and
multiplication of the floating-point types are not associative, because of
overflow and loss of precision.  For "well behaved" data sets things
usually work well enough, but the difference really matters in some
calculations.
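To make the floating-point case concrete, here is a minimal snippet (plain
Java, nothing lambda-specific); the values are chosen so that regrouping
the same three additions changes the answer:

    public class NonAssociative {
        public static void main(String[] args) {
            double a = 1e16, b = -1e16, c = 1.0;

            // Grouping changes the result: c is absorbed when added to a
            // value whose magnitude is too large for a double to resolve
            // the difference.
            System.out.println((a + b) + c);  // prints 1.0
            System.out.println(a + (b + c));  // prints 0.0, because
                                              // -1e16 + 1.0 rounds back
                                              // to -1e16
        }
    }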

My question is, as we consider use cases for lambda expressions and whether
to put effort into (as Brian Goetz said) "supporting map/reduce-y idioms in
the libraries", do we need to consider how functions (like 'reduce') declare
the constraints?  How would we use functional programming idioms to enable
parallelism in Java?
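Here is a rough sketch of where the operator's properties come in, written
against an ordinary interface rather than any proposed lambda syntax (the
names are illustrative only, not from any proposal):

    import java.util.List;

    interface Reducer<T> {
        T apply(T left, T right);
    }

    class ParallelReduce {
        // Divide-and-conquer reduction: correct only if 'op' is
        // associative, because the grouping of applications depends on
        // where the list happens to be split.
        static <T> T reduce(List<T> xs, Reducer<T> op) {
            int n = xs.size();
            if (n == 0) throw new IllegalArgumentException("empty list");
            if (n == 1) return xs.get(0);
            int mid = n / 2;
            // The two halves could be evaluated on different threads;
            // the combine step below fixes a grouping that a sequential
            // loop never produces.
            T left = reduce(xs.subList(0, mid), op);
            T right = reduce(xs.subList(mid, n), op);
            return op.apply(left, right);
        }
    }

A sequential left fold computes (((x0 op x1) op x2) op x3), while the split
above computes ((x0 op x1) op (x2 op x3)); the two agree for all inputs
exactly when the operator is associative, and an implementation that also
combines partial results out of order needs commutativity as well.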

Do we define annotations to declare intent (e.g., @Associative)?  Do we have
multiple reduce operations (e.g., leftReduce, rightReduce,
associativeReduce, unorderedReduce)?  Do we pass in the function's
constraints as a parameter (e.g., List.reduce(Operation.ASSOCIATIVE, #{s v
-> s + v}))?
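To make the alternatives concrete, here is one hypothetical shape each
option might take (none of these declarations exist in any library; they
only restate the options above as Java signatures, reusing the two-argument
Reducer shape from the earlier sketch):

    import java.lang.annotation.*;
    import java.util.List;

    // Same two-argument operation shape as in the earlier sketch.
    interface Reducer<T> {
        T apply(T left, T right);
    }

    // Option 1: an annotation the programmer places on the operation
    // to declare its algebraic properties.
    @Retention(RetentionPolicy.RUNTIME)
    @Target({ElementType.TYPE, ElementType.METHOD})
    @interface Associative { }

    // Option 2: distinct entry points whose names state the contract
    // they require of the operator.
    interface Reductions {
        <T> T leftReduce(List<T> xs, Reducer<T> op);        // strictly left-to-right
        <T> T rightReduce(List<T> xs, Reducer<T> op);       // strictly right-to-left
        <T> T associativeReduce(List<T> xs, Reducer<T> op); // may regroup, so may run in parallel
        <T> T unorderedReduce(List<T> xs, Reducer<T> op);   // may regroup and reorder
    }

    // Option 3: the caller passes the operator's properties as an argument.
    enum Operation { ASSOCIATIVE, COMMUTATIVE, UNORDERED }

    interface Reductions2 {
        <T> T reduce(Operation constraint, List<T> xs, Reducer<T> op);
    }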

Are the available mechanisms sufficient to enable widespread, safe use of
the map/reduce-y idioms?

--- Jim

