[OpenJDK 2D-Dev] sun.java2D.pisces big memory usage (waste ?)
Laurent Bourgès
bourges.laurent at gmail.com
Sat Mar 30 12:38:32 UTC 2013
Jim,
There are finally only a few growable arrays (edges, curves, rowAARLE), and I
now have a working Pisces code (J2DBench passes OK) that performs better
than the current one (2x - 3x faster on the dasher or on big shapes) while using
only a few megabytes (Xmx32m) ...
Moreover, these arrays could be created once per thread (thread local) to
avoid GC pressure (low footprint) and improve performance (no GC / array resizing
or copying) with an initial size of 4K or 16K elements, i.e. only 16K x 4 bytes = 64 KB per array!
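To illustrate what I mean (a rough sketch only; the class name, the fields and
the 16K initial capacity are illustrative, not the final patch):

    // Sketch: one context per rendering thread, holding reusable working arrays.
    final class RendererContext {

        // assumption: ~16K floats per array => 64 KB each, allocated once
        static final int INITIAL_CAPACITY = 16 * 1024;

        // growable working arrays, reused across rendering operations
        float[] edges  = new float[INITIAL_CAPACITY];
        float[] curves = new float[INITIAL_CAPACITY];

        // one context per thread, created lazily and never shared
        private static final ThreadLocal<RendererContext> CTX =
            new ThreadLocal<RendererContext>() {
                @Override protected RendererContext initialValue() {
                    return new RendererContext();
                }
            };

        static RendererContext get() {
            return CTX.get();
        }
    }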
The only array that still causes trouble is the growable
PiscesCache.rowAARLE[Y][tuples], whose Y size depends on the shape / clip
bounds and whose tuple count depends on the number of crossings at the current row Y.
For now I am using int[2048][20], but it could be tuned ...
Finally, I found several hard-coded values related to AA (3x3 subpixel
samples, error level ...) that could be exposed as rendering hints. For example,
I would like to be able to tune the number of samples to 8x8 or 2x2 ... depending on
the use case.
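For instance, something along these lines (the property name
"sun.java2d.pisces.subPixels" and the allowed values are purely hypothetical):

    // Sketch: read the subpixel grid size once instead of hard-coding 3x3.
    final class PiscesTuning {

        static final int SUBPIXELS = readSubPixels();

        private static int readSubPixels() {
            // hypothetical property; 3 (= 3x3 samples) matches the current default
            int n = Integer.getInteger("sun.java2d.pisces.subPixels", 3);
            // only accept a few sane grids: 2x2, 3x3, 4x4 or 8x8
            if (n != 2 && n != 3 && n != 4 && n != 8) {
                n = 3;
            }
            return n;
        }
    }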
I expect to send an alpha version of my patch to illustrate these points.
2013/3/30 Jim Graham <james.graham at oracle.com>
> Other thoughts - using chained buckets of edges instead of one single long
> list. It would be easier to keep a pool of buckets (each holding, say, 256
> edges?) than a "one-size-fits-all" pool of arrays. Then all you have to do
> is keep high water marks on the number of simultaneously used buckets in
> order to tune the cache for a given application.
>
> It would make the code that manages "pointers" to edges a little more
> complicated, though...
>
Good idea, but maybe difficult for me to implement. As I said, this array
may be less of a problem than rowAARLE.
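Just to check that I understood the idea, something like this (a sketch; the
256-edge bucket size comes from your mail, the 6-floats-per-edge layout is my
assumption):

    // Sketch: edges stored in fixed-size buckets chained together,
    // so a pool only has to manage one bucket size.
    final class EdgeBucket {
        static final int EDGES_PER_BUCKET = 256;
        static final int FLOATS_PER_EDGE = 6;   // assumed edge record layout

        final float[] data = new float[EDGES_PER_BUCKET * FLOATS_PER_EDGE];
        int used;          // number of edges stored in this bucket
        EdgeBucket next;   // chained to the next bucket when this one is full
    }

    final class EdgeList {
        private EdgeBucket head = new EdgeBucket();
        private EdgeBucket tail = head;

        void addEdge(float x0, float y0, float x1, float y1, float slope, float yMax) {
            if (tail.used == EdgeBucket.EDGES_PER_BUCKET) {
                tail.next = new EdgeBucket();   // or taken from a bucket pool
                tail = tail.next;
            }
            int off = tail.used * EdgeBucket.FLOATS_PER_EDGE;
            float[] d = tail.data;
            d[off] = x0; d[off + 1] = y0; d[off + 2] = x1;
            d[off + 3] = y1; d[off + 4] = slope; d[off + 5] = yMax;
            tail.used++;
        }
    }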
Of course, I need some "real life" tests to make a better diagnosis: maybe I
could add some instrumentation / statistics gathered at runtime in order to
tune Pisces automatically depending on the user workload.
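For instance, a simple high-water-mark counter per array type, dumped at VM
exit, would already tell how big the arrays need to be for a given workload
(again just a sketch):

    // Sketch: record the maximum requested size per array type and print it at exit.
    import java.util.concurrent.atomic.AtomicInteger;

    final class ArrayStats {
        private final AtomicInteger maxSize = new AtomicInteger();

        ArrayStats(final String name) {
            Runtime.getRuntime().addShutdownHook(new Thread() {
                @Override public void run() {
                    System.out.println(name + " max requested size = " + maxSize.get());
                }
            });
        }

        void record(final int requestedSize) {
            int prev;
            while (requestedSize > (prev = maxSize.get())
                    && !maxSize.compareAndSet(prev, requestedSize)) {
                // lost a race: retry until the new maximum is published
            }
        }
    }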
Regards,
Laurent
On 3/29/2013 6:53 AM, Laurent Bourgès wrote:
> Phil,
>
> I agree it is a complex issue to improve memory usage while maintaining
> performance at the JDK level: applications can use java2d pisces in very
> different contexts: a Swing app (client with only the EDT thread), a
> server-side application (multi-threaded, headless) ...
>
> For the moment, I have spent a lot of time understanding the different
> classes in java2d.pisces and analyzing memory usage / performance ... using
> J2DBench (all graphics tests).
>
> In my Swing application, pisces produces a lot of garbage (GC pressure), but
> on the server side the GC overhead can be even higher if several threads use
> pisces.
>
> Pisces uses memory in two different ways:
> - fixed-size arrays (dasher, stroker)
> - dynamic arrays (edges ...) and rowAARLE (a very big one for big shapes)
>
> For the moment I am trying to avoid memory waste (pooling or kept
> references) without any memory constraint (no eviction), but I agree it is
> an important aspect for server-side applications.
>
> To avoid concurrency issues, I use a ThreadLocal context named
> RendererContext to keep a few temporary arrays (float6 and a BIG rowAARLE
> instance), but there are also dynamic IntArrayCache and FloatArrayCache
> which have several pools divided into buckets (256, 1024, 4096, 16384,
> 32768), each containing only a few instances.
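To make this concrete, the int[] cache looks roughly like this (a simplified
sketch, not the code of my patch; MAX_PER_BUCKET is an arbitrary illustrative
limit):

    // Sketch: per-thread int[] cache with fixed bucket sizes.
    // Nothing is shared between threads, so no synchronization is needed.
    import java.util.ArrayDeque;

    final class IntArrayCache {
        private static final int[] BUCKET_SIZES = {256, 1024, 4096, 16384, 32768};
        private static final int MAX_PER_BUCKET = 8;   // keep only a few instances

        @SuppressWarnings("unchecked")
        private final ArrayDeque<int[]>[] buckets = new ArrayDeque[BUCKET_SIZES.length];

        IntArrayCache() {
            for (int i = 0; i < buckets.length; i++) {
                buckets[i] = new ArrayDeque<int[]>(MAX_PER_BUCKET);
            }
        }

        int[] getArray(int minSize) {
            for (int i = 0; i < BUCKET_SIZES.length; i++) {
                if (BUCKET_SIZES[i] >= minSize) {
                    int[] a = buckets[i].pollFirst();
                    return (a != null) ? a : new int[BUCKET_SIZES[i]];
                }
            }
            return new int[minSize];   // too large to cache
        }

        void putArray(int[] array) {
            // note: the caller clears only the used part before returning the array
            for (int i = 0; i < BUCKET_SIZES.length; i++) {
                if (array.length == BUCKET_SIZES[i] && buckets[i].size() < MAX_PER_BUCKET) {
                    buckets[i].addFirst(array);
                    return;
                }
            }
            // arrays of other sizes (or full buckets) are simply left to the GC
        }
    }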
>
> To get the best performance, I studied the pisces code so that only the
> used part of an array is cleared when recycling it, or so that dirty arrays
> can be reused as-is (only rowAARLE[...][1] needs clearing).
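In other words (sketch only), when an array goes back to the pool, only the
range that was actually written gets zeroed:

    // Sketch: zero only the used range before recycling; the remainder is
    // already zero from the previous clear (or from allocation).
    final class DirtyArrays {
        static void returnToPool(final int[] array, final int usedLength) {
            java.util.Arrays.fill(array, 0, usedLength, 0);
        }
    }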
>
> I think Andrea's proposal to add some system properties to give hints (low
> memory footprint, use the cache or not ...) is interesting.
>
> 2013/3/28 Phil Race <philip.race at oracle.com>
>
>> Maintaining a pool of objects might be an appropriate thing for an
>> application, but it's a lot trickier for the platform as the application's
>> usage pattern or intent is largely unknown. Weak references or soft
>> references might be of use, but weak references usually go away even at
>> the next incremental GC and soft references tend to not go away at all
>> until you run out of heap.
>>
>>
> Agreed; for the moment, a pool eviction policy is not implemented, but it
> is kept in mind.
> FYI: each RendererContext (per thread) has its own array pools (not
> shared), which could have different caching policies:
> for instance, the AWT / EDT (repaint) thread could use a large cache while
> other threads do not use array caching at all.
>
>
>> You may well be right that always doubling the array size may be too
>> simplistic, but it would need some analysis of the code and its usage to
>> see how much better we can do.
>>
>
>
> There are two parts:
> - initial array size for dynamic arrays: difficult to estimate, but for now
> set to a very low capacity (8 / 50 ...) to avoid wasting memory on
> rectangle / line shapes. In my patch, I have defined MIN_ARRAY_SIZE = 128
> (array pool) to avoid too much resizing since I am recycling arrays.
> - growth: I use x4 instead of x2 to avoid array copies (sketched below).
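In code form, the sizing rule would look roughly like this (a sketch; only
MIN_ARRAY_SIZE = 128 and the x4 factor come from the patch):

    // Sketch: capacities start at MIN_ARRAY_SIZE and grow by 4x,
    // giving far fewer Arrays.copyOf calls than 2x growth.
    final class ArraySizing {
        static final int MIN_ARRAY_SIZE = 128;

        static int capacityFor(final int neededSize) {
            int capacity = MIN_ARRAY_SIZE;
            while (capacity < neededSize) {
                capacity <<= 2;   // x4
            }
            return capacity;
        }
    }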
>
> Laurent
>
>
>
> 2013/3/28 Phil Race <philip.race at oracle.com>
>
>>> Apparently, Arrays.fill is always faster (for sizes between 10 and
>>> 10 000)! I suspect hotspot optimizes this code and uses native
>>> functions, doesn't it?
>>
>> I suppose there is some hotspot magic involved to recognise and intrinsify
>> this method, since the source code looks like a plain old for loop.
>>
>> -phil.
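For reference, the kind of naive comparison behind that remark looks like this
(a sketch, no JMH, so the absolute numbers should be taken with care):

    // Sketch: naive comparison of Arrays.fill against a manual loop.
    import java.util.Arrays;

    public final class FillBench {
        public static void main(String[] args) {
            final int[] data = new int[10000];
            final int iterations = 100000;

            long t0 = System.nanoTime();
            for (int i = 0; i < iterations; i++) {
                Arrays.fill(data, 0);
            }
            long t1 = System.nanoTime();
            for (int i = 0; i < iterations; i++) {
                for (int j = 0; j < data.length; j++) {
                    data[j] = 0;
                }
            }
            long t2 = System.nanoTime();

            // use the array so the JIT cannot discard the work entirely
            System.out.println("checksum    : " + data[0]);
            System.out.println("Arrays.fill : " + (t1 - t0) / 1000000 + " ms");
            System.out.println("manual loop : " + (t2 - t1) / 1000000 + " ms");
        }
    }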
>>
>>
>>
>> On 3/26/2013 4:00 AM, Laurent Bourgès wrote:
>>
>>> Dear all,
>>>
>>> First, I recently joined the OpenJDK contributors, and I plan to fix the
>>> java2D pisces code in my spare time.
>>>
>>> I have a full-time job on Aspro2: http://www.jmmc.fr/aspro; it is an
>>> application to prepare astronomical observations at VLTI / CHARA and it is
>>> widely used in our community (200 users): it provides scientific
>>> computations (observability, model images using complex numbers ...) and
>>> zoomable plots thanks to jFreeChart.
>>>
>>> Aspro2 is known to be very efficient (parallelized computations), and I
>>> often do profiling using the NetBeans profiler or VisualVM.
>>>
>>> To fix the huge memory usage of java2d.pisces, I started implementing an
>>> efficient ArrayCache (int[] and float[]) (thread local, to avoid
>>> concurrency problems):
>>> - array sizes between 10 and 10000 (more small arrays are used than big
>>> ones)
>>> - resizing support (Arrays.copyOf) without wasting arrays
>>> - reentrancy, i.e. many arrays are used at the same time (java2D Pisces
>>> stroke / dash creates many segments to render)
>>> - GC / heap friendly, i.e. support cache eviction and avoid consuming too
>>> much memory
>>>
>>> I know object pooling is considered inefficient with recent VMs (the GC
>>> does better), but I think it is counterproductive to create so many int[]
>>> arrays in java2d.pisces and let the GC reclaim so much wasted memory.
>>>
>>> Has anyone implemented such an (open source) array cache (core-libs)?
>>> Opinions are welcome (but please, no trolling).
>>>
>>>
>>>
>>>