Blur effect on live scene?

Matthias Hänel haenel at ultramixer.com
Tue Aug 11 06:44:09 UTC 2015


Hey Jim,


Internally we argue pretty often about the pros and cons of different technologies, especially Java.
When I first read about JavaFX I was very happy to see a hardware-accelerated approach in the Java world that could put an end to the common perception of laggy Java apps.

From a software perspective I am located at the very lowest layers, right in the hardware (firmware) and right above it (drivers, interfaces, etc.). So please forgive me if I am wrong on UI-specific stuff; it is not my expertise. Actually, I built the native OpenGL implementation for our software. It uses JOGL, but it is pretty standard OpenGL, so I know a bit about the OpenGL state machine and its usage. I am quite sure DirectX has similar techniques for all of these purposes.


> On 10.08.2015 at 20:29, Jim Graham <james.graham at oracle.com> wrote:
> 
> Let me understand what is going on here.
> 
> I get the result you are trying to achieve - blur the scene as background for something.
> 
> I get that Mac and iOS seem to have direct support for this technique which appears to work faster than what we provide via the Effect mechanism.
> 
> I also get that attempts to make it appear better via snapshot will unfortunately involve a copy to main memory to produce the "Image" object.
> 
> If we had a "snapshot to texture" mechanism then that might reduce the memory copying of the work around technique.

Well, did you say there is no snapshot-to-texture in JavaFX? In plain OpenGL you can use FBOs (framebuffer objects) to render to a texture. That is nothing more than a snapshot, and the result is even sliceable and stretchable.
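
Just to illustrate what I mean, a minimal JOGL sketch (the helper name createSnapshotFBO is mine, and error/status checks are omitted):

    import com.jogamp.opengl.GL;
    import com.jogamp.opengl.GL2;

    // Render-to-texture setup: the attached texture IS the "snapshot".
    static int[] createSnapshotFBO(GL2 gl, int width, int height) {
        int[] tex = new int[1];
        int[] fbo = new int[1];

        // Allocate an empty RGBA texture that will receive the rendering.
        gl.glGenTextures(1, tex, 0);
        gl.glBindTexture(GL.GL_TEXTURE_2D, tex[0]);
        gl.glTexImage2D(GL.GL_TEXTURE_2D, 0, GL.GL_RGBA, width, height, 0,
                        GL.GL_RGBA, GL.GL_UNSIGNED_BYTE, null);
        gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR);
        gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MAG_FILTER, GL.GL_LINEAR);

        // Create the FBO and attach the texture as its color buffer.
        gl.glGenFramebuffers(1, fbo, 0);
        gl.glBindFramebuffer(GL.GL_FRAMEBUFFER, fbo[0]);
        gl.glFramebufferTexture2D(GL.GL_FRAMEBUFFER, GL.GL_COLOR_ATTACHMENT0,
                                  GL.GL_TEXTURE_2D, tex[0], 0);

        // While the FBO is bound, all drawing lands in the texture; unbind
        // to go back to the normal back buffer and use the texture anywhere.
        gl.glBindFramebuffer(GL.GL_FRAMEBUFFER, 0);
        return new int[] { fbo[0], tex[0] };
    }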

Yesterday we discussed the JavaFX API, and we came to the conclusion that there should be two APIs: 1. a standard copyFromRAMAndDraw path, and 2. an FBO-based path for drawing to a framebuffer and manipulating it. This is needed for much more than this one small effect.
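
For comparison, the copy-through-main-memory path (API 1) is roughly what JavaFX offers today; a minimal sketch of the current workaround (must run on the FX application thread):

    import javafx.scene.Node;
    import javafx.scene.SnapshotParameters;
    import javafx.scene.effect.GaussianBlur;
    import javafx.scene.image.ImageView;
    import javafx.scene.image.WritableImage;

    // Today's workaround: snapshot() copies the rendered node back into a
    // WritableImage in main memory, which then gets re-uploaded as a texture.
    static ImageView blurredCopyOf(Node node) {
        WritableImage img = node.snapshot(new SnapshotParameters(), null);
        ImageView view = new ImageView(img);
        view.setEffect(new GaussianBlur(20));   // blur the copied pixels
        return view;
    }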

One very big demand on a new technology would be the feature called "context sharing". This is one of the most important copy-prevention mechanisms for us: it makes it possible to push an image to graphics RAM once and then draw it two, three, or even more times.
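
As a sketch of the effect I mean (plain JOGL, fixed-function for brevity): the pixels cross the bus once at init time, and every later draw only references the texture id; with shared contexts the same id would be valid in every context of the share group.

    import com.jogamp.opengl.GL;
    import com.jogamp.opengl.GL2;

    int texId;   // lives in graphics RAM after init()

    // Upload once...
    void init(GL2 gl, java.nio.ByteBuffer rgbaPixels, int w, int h) {
        int[] t = new int[1];
        gl.glGenTextures(1, t, 0);
        texId = t[0];
        gl.glBindTexture(GL.GL_TEXTURE_2D, texId);
        gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR);
        gl.glTexImage2D(GL.GL_TEXTURE_2D, 0, GL.GL_RGBA, w, h, 0,
                        GL.GL_RGBA, GL.GL_UNSIGNED_BYTE, rgbaPixels);
    }

    // ...then draw three times per frame without copying anything again.
    void display(GL2 gl) {
        gl.glEnable(GL.GL_TEXTURE_2D);
        gl.glBindTexture(GL.GL_TEXTURE_2D, texId);
        for (int i = 0; i < 3; i++) {
            gl.glPushMatrix();
            gl.glTranslatef(i * 1.1f, 0f, 0f);
            gl.glBegin(GL2.GL_QUADS);
            gl.glTexCoord2f(0, 0); gl.glVertex2f(0, 0);
            gl.glTexCoord2f(1, 0); gl.glVertex2f(1, 0);
            gl.glTexCoord2f(1, 1); gl.glVertex2f(1, 1);
            gl.glTexCoord2f(0, 1); gl.glVertex2f(0, 1);
            gl.glEnd();
            gl.glPopMatrix();
        }
    }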


> I'd argue that we sort of do have something like that - it is the cache flag.  If a Node is cached then we do have a copy of it in a texture and that can help make the Blur Effect work more efficiently, but there may be some additional copies between textures if everything isn't set up right.  Still, that is an avenue for someone to check to see if there isn't a better way to achieve this effect in the short term...

I am not quite sure what cache does. Probably some hashmap that holds objects so that they are not instantly destroyed in graphics RAM?
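
For anyone following along, I assume this is the flag Jim means (a minimal sketch); as I read his description, it keeps the node's rendering alive as a texture:

    import javafx.scene.CacheHint;
    import javafx.scene.Node;

    // If I understand Jim right: a cached node is kept as a texture in
    // graphics RAM, so an Effect on it need not re-render the node each frame.
    static void cacheForEffects(Node node) {
        node.setCache(true);
        node.setCacheHint(CacheHint.SPEED);  // trade visual quality for speed
    }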


> There is the implication that one can add a shader to an overlay texture that will cause it to have a dynamic "blurred lens" effect.  I'm not familiar with how that would be done.  AFAIK, shaders work on inputs and produce an output which is transferred to the destination using the pixel transfer functions and you can't source the destination pixels in the shader in order to blur them.  I would imagine that the Mac/iOS technique is done by sourcing directly from the back buffer into the overlay texture using a blurring shader.  That gives the overlay texture a solid background that is a blurred copy of the back buffer.  They then draw the overlay contents (menu bar?) on top of that blurred background data and transfer the overlay texture back into the scene.  The blurred vision you are seeing is not "the pixels being blurred through the overlay texture" but rather a very quickly managed "blurred copy of" the data in the underlying buffer.
>   If the scene changes, then the entire process would need to be repeated on the new underlying pixels to get a new blurred copy of them as background in the overlay texture.  I can also imagine that they may have more direct support for blurring (there is an OpenGL EXT_convolution extension which we do not use - using our own convolution shaders instead - which may potentially work faster than what we do).  Also, they may be a little more efficient at managing the buffers involved in that dynamic operation (our Decora Effect engine isn't necessarily set up to use the "back buffer" as a source and so sometimes we may have to render parts of a scene an extra time specially to make an input texture for Decora).


I am not sure what Decora is, but you are right about sourcing from the back buffer. You can only shade 1:1 in OpenGL's shading engine, but that is not a real problem since we have FBOs :) The background is rendered to an FBO. That FBO is drawn once as the background, and then the same FBO is drawn a second time as the blurry, shaped foreground.
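
A sketch of the fragment shader for that second draw (a naive 5x5 box blur instead of a true Gaussian; GLSL 1.20 style so it fits the fixed-function code further up, with texelSize = 1.0 / FBO size):

    // The FBO texture from above is drawn a second time, clipped to the
    // foreground shape, with this shader bound; "scene" is that texture.
    static final String BLUR_FRAG =
        "uniform sampler2D scene;\n" +
        "uniform vec2 texelSize;\n" +        // 1.0 / FBO width and height
        "void main() {\n" +
        "    vec4 sum = vec4(0.0);\n" +
        "    for (int x = -2; x <= 2; x++)\n" +
        "        for (int y = -2; y <= 2; y++)\n" +
        "            sum += texture2D(scene,\n" +
        "                gl_TexCoord[0].st + vec2(x, y) * texelSize);\n" +
        "    gl_FragColor = sum / 25.0;\n" +  // 25 taps -> average
        "}\n";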


> 
> If I'm understanding all of this correctly, then it seems that:
> 
> - it may be time to investigate tighter integration of Decora and Prism texture mechanisms (they use the same underlying objects, but don't have a good way to share pre-existing textures with each other).
> 
> - Node.cache may provide some additional short-term techniques for achieving this effect
> 
> - EXT_convolution may help us with performance if it is available
> 
> - if worse comes to worst, we might have to add "blurred parent" as a direct feature of a Node to streamline the processing of this effect, but hopefully we can get there with some combination of the above.


Well, I am not sure that you understood Tobi's intention correctly :) It was never the intention to get this one little effect running somehow; he is still testing the technology to find the problematic spots.

From my current point of view the major problem with JavaFX is still the same. It:

1. has a good API,
2. renders most of its stuff in software and hence does not perform well,
3. has good approaches, but the overall view of the technology is broken somewhere.


I wonder whether JavaFX should be used for IoT. I have developed a lot of small, nifty hardware tools, and the main goal is ALWAYS to get the most performance out of the least hardware. JavaFX does the opposite right now. It's not just this little example: I have seen some bigger software products built with JavaFX, and they still had this "laggy Java apps" feeling. I don't know the whole JavaFX core, but when a discussion about a feature always ends in a software-rendered variant, you waste the power of the GPU.

I wonder whether JavaFX could be ideal for iOS and Android if the GPU rendering were done right. Is this not on the agenda?

I wonder whether JavaFX could be the game changer for desktop apps and games. Personally, I would say this is also only the case if the GPU is utilized well.
Desktops are much more capable, and for finance software it might be fine to have software rendering, but media/video/game software needs to get the most power out of the GPU. We are not compared to other Java/JavaFX apps, but to natively optimized games and media software.


It took pretty long to get the JRE with JIT where it is now, and you can even convince me of the performance and even the time-to-market of standard JRE software. On the other hand, I don't know ONE piece of software besides ours that uses either Swing or JavaFX for media/games. Every time I speak with other developers, I get the same answer: "We would love to use Swing/JavaFX, but it is not made for us." Most of them code their own drawing API or even just use native code.
The truth is: Java APIs are not made for high-performance use (sure, in server terms you can scale by hardware, but that is not what is meant here); they can present little 60 fps demos (at 100% CPU load) and that's it.


The real world has a demand for a full-fledged API that is performance-optimized. Being a developer myself, I know that is not easy to achieve.
The fallback would be the two-API approach from above; then you can select what to do as a user. Currently, nobody knows what is actually going on, and nobody cares until something goes out of order.


regards
Matthias



