JavaFX 3D
Richard Bair
richard.bair at oracle.com
Fri May 18 06:51:58 PDT 2012
Perhaps I was too subtle :-). We would be happy to provide built-in importers if we get legal approval to do so. Like media (H.264, WebM), this is an area that requires care.
On May 18, 2012, at 5:46 AM, "Dr. Michael Paus" <mp at jugs.org> wrote:
> I completely disagree with that. High-quality importers for the most common formats
> are simply a must. What is a framework good for if you don't get any data into it?
> Or would you accept an image framework which just provides you with a pixel buffer
> but leaves you to write your JPEG decoder yourself?
> This mistake was already made in the past (with Java3D, for example) and we should
> not repeat it. Actually, this mistake has already been repeated, because JavaFX
> does not provide any importer for vector graphics formats such as SVG.
>
> On 18.05.2012 14:14, Kirill.Prazdnikov wrote:
>>
>> Then why not 3DSMax, why not Maya?
>> Or procedural geometry and scene graph?
>>
>> I think the best API would be format-independent.
>> And Maya/Max importers in the samples...
>>
>> -Kirill
>>
>> On 18.05.2012 16:02, Mario Torre wrote:
>>> Why not use COLLADA?
>>>
>>> Mario
>>>
>>> 2012/5/18 Richard Bair <richard.bair at oracle.com>:
>>>> Hi Jiri,
>>>>
>>>>> where's JFX heading regarding 3D API? Will there be for example support for 3D content formats like OBJ?
>>>> Funny you should ask :-). Right now we don't have any concrete plans to provide built-in format converters. Rather, we are thinking about providing a flexible scene graph (Mesh, etc.) so that it should be reasonably easy to write most importers. It seems like it would be nicer to have a collection of 3rd-party importers than to bundle every format imaginable, and it avoids any nasty licensing issues.
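>>>>
>>>> (As an illustration of how small such an importer can be, here is a rough sketch of a standalone OBJ reader. It is not part of any planned API; it handles only "v" and "f" records, fan-triangulates larger faces, and leaves it to the caller to push the resulting arrays into whatever mesh class JavaFX ends up providing. The class name and structure are purely illustrative.)
>>>>
>>>> import java.io.BufferedReader;
>>>> import java.io.FileReader;
>>>> import java.io.IOException;
>>>> import java.util.ArrayList;
>>>> import java.util.List;
>>>>
>>>> /** Minimal OBJ reader: vertex positions and triangle indices only. */
>>>> public final class SimpleObjReader {
>>>>     public final List<float[]> vertices = new ArrayList<float[]>();  // x, y, z per vertex
>>>>     public final List<int[]> triangles = new ArrayList<int[]>();     // three 0-based vertex indices
>>>>
>>>>     public void read(String path) throws IOException {
>>>>         BufferedReader in = new BufferedReader(new FileReader(path));
>>>>         try {
>>>>             String line;
>>>>             while ((line = in.readLine()) != null) {
>>>>                 String[] t = line.trim().split("\\s+");
>>>>                 if ("v".equals(t[0]) && t.length >= 4) {
>>>>                     vertices.add(new float[] { Float.parseFloat(t[1]),
>>>>                             Float.parseFloat(t[2]), Float.parseFloat(t[3]) });
>>>>                 } else if ("f".equals(t[0]) && t.length >= 4) {
>>>>                     // Face indices are 1-based and may look like "v/vt/vn"; keep only the vertex index.
>>>>                     int[] idx = new int[t.length - 1];
>>>>                     for (int i = 1; i < t.length; i++) {
>>>>                         idx[i - 1] = Integer.parseInt(t[i].split("/")[0]) - 1;
>>>>                     }
>>>>                     // Fan-triangulate faces with more than three vertices.
>>>>                     for (int i = 1; i + 1 < idx.length; i++) {
>>>>                         triangles.add(new int[] { idx[0], idx[i], idx[i + 1] });
>>>>                     }
>>>>                 }
>>>>             }
>>>>         } finally {
>>>>             in.close();
>>>>         }
>>>>     }
>>>> }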
>>>>
>>>> Regarding the 3D API in general, Kevin, Jasper and I were just discussing this yesterday afternoon and I would like to lay out the current thinking. This is in no way a final path, but it is where my thinking stands today.
>>>>
>>>> The JavaFX Scene Graph today is, for the most part, a true unified scene graph between 2D and 3D. Every node has a 'z', we have 3D bounds objects, and internally we use a pick ray for picking. Mouse, Key, Drag, and other events are exactly the same for both 2D and 3D scenes (or rather, are equally applicable to both).
>>>>
>>>> However, there are some aspects of every Node that are 2D-centric and really provide surprising results in a 3D context. For example, in the following scene graph I have three rectangles, r1, r2 and r3. Normally r1 would be painted on the bottom, then r2, and then r3 on top (painter's algorithm). However (assuming my scene was created with "depthTest" enabled), since r2 has a z value of -1 (closer to the camera), what will actually be rendered is r1, then r3, and then r2. So far so good, as this is normally how one would expect 3D to behave.
>>>>
>>>> Rectangle r1 = new Rectangle(0, 0, 100, 100);
>>>> Rectangle r2 = new Rectangle(10, 10, 100, 100);
>>>> r2.setTranslateZ(-1);
>>>> Group branchA = new Group(r1, r2);
>>>> Rectangle r3 = new Rectangle(20, 20, 100, 100);
>>>> Group branchB = new Group(r3);
>>>> Group root = new Group(branchA, branchB);
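>>>>
>>>> (For reference, to actually observe the depth-test ordering the scene needs a depth buffer; a minimal continuation of the snippet above, assuming it runs inside Application.start(Stage stage) with the usual javafx.scene imports:)
>>>>
>>>> root.setDepthTest(DepthTest.ENABLE);            // resolve overlaps via the depth buffer
>>>> Scene scene = new Scene(root, 400, 300, true);  // 'true' requests a depth buffer for the scene
>>>> scene.setCamera(new PerspectiveCamera());       // perspective projection for the 3D content
>>>> stage.setScene(scene);
>>>> stage.show();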
>>>>
>>>> However, suppose that I now set opacity to .5 on branchA. Now what I will get is r1 painted, then r2, and then r3 on top! The Z value no longer works the way it once did. This is because when you set opacity, we have to flatten the scene graph down into an image which we can then composite into the rest of the scene with the specified Z value. We have to do this in order to get the right result for 2D, but it is the wrong behavior for 3D. Likewise, we flatten if you set a blend mode on the group other than SRC_OVER, a clip (that isn't an axis & pixel aligned rectangle, so as to get smooth anti-aliasing on the clip), an effect (since these are by definition a filter operation on a group, there is no other way to do it and get the semantics right), or set cache to true (clearly :-)). Some of these are obvious, but some are so non-obvious as to be quite surprising and useless in true 3D programming. The current Scene Graph was designed such that a 2D application could have 3D added to it, but it isn't, deep down, a 3D scene graph. Well, almost all of it is a true 3D scene graph, but as I said in the introduction, there are certain things that are just 2D-specific.
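>>>>
>>>> (Continuing the snippet above, the triggers described here look like this; any one of them forces branchA to be flattened into an image first, so r2's translateZ no longer interleaves it with r3:)
>>>>
>>>> branchA.setOpacity(0.5);                       // group opacity => flatten
>>>> // branchA.setBlendMode(BlendMode.MULTIPLY);   // any blend mode other than SRC_OVER => flatten
>>>> // branchA.setEffect(new GaussianBlur());      // effects are group filter operations => flatten
>>>> // branchA.setCache(true);                     // caching renders the branch into an image => flatten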
>>>>
>>>> Presently there is no good way to do 2D->3D->2D->3D integration either. Imagine you have a game where the background is 2D, the game on top of it is 3D (think Angry Birds, where the birds & boxes have depth to them), there is a 2D overlay (score, etc.), and on top of the 2D overlay is a 3D avatar of the current character (bird). Right now, the only way to handle this is by toggling depthTest between enabled and disabled on different branches of the scene graph and manually partitioning Z space between the two 3D layers (since all 3D content in the scene graph lives in the same Z space -- unless you set cache to true or one of the other properties above!).
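>>>>
>>>> (A rough sketch of that workaround; the group names and the particular Z offsets are illustrative only, and ordering between the 2D layers still comes from the child list:)
>>>>
>>>> Group background2D = new Group(/* 2D background nodes */);
>>>> background2D.setDepthTest(DepthTest.DISABLE);  // plain painter's order
>>>>
>>>> Group game3D = new Group(/* 3D game content */);
>>>> game3D.setDepthTest(DepthTest.ENABLE);         // claims, say, Z in [-100, 0]
>>>>
>>>> Group overlay2D = new Group(/* score, HUD */);
>>>> overlay2D.setDepthTest(DepthTest.DISABLE);
>>>>
>>>> Group avatar3D = new Group(/* 3D avatar nodes */);
>>>> avatar3D.setDepthTest(DepthTest.ENABLE);
>>>> avatar3D.setTranslateZ(-200);                  // manually partition Z so the two 3D layers never overlap
>>>>
>>>> Group root = new Group(background2D, game3D, overlay2D, avatar3D);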
>>>>
>>>> One way to resolve this situation is to just live with these kinds of mismatches. It makes the 3D scene graph use cases a bit more cumbersome, but you have a single unified scene graph API. Another way to resolve it is to factor the Node class so that we end up with a base class (NodeBase) plus Node and Node3D (I hate the XXXBase naming convention, but I made that bed and now I have to lie in it :-)). The vast majority of the existing API on Node would move up to NodeBase. Node3D and Node would then be subclasses, and the 2D-specific APIs (effect, opacity, blendMode, etc.) would be in Node but not Node3D. Node3D might have other APIs such as "alpha" or "blending" -- similar to "opacity" and "blendMode" except that they have different semantics and thus probably should have different names to avoid confusion.
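>>>>
>>>> (In outline, that factoring might look roughly like this; nothing below exists today, and the names are just the ones used in this discussion:)
>>>>
>>>> public abstract class NodeBase {
>>>>     // transforms, bounds, picking, events, depthTest, ... shared by 2D and 3D
>>>> }
>>>>
>>>> public abstract class Node extends NodeBase {
>>>>     // 2D-centric APIs that force flattening: opacity, blendMode, effect, clip, cache
>>>> }
>>>>
>>>> public abstract class Node3D extends NodeBase {
>>>>     // 3D-oriented equivalents with different semantics, e.g. "alpha" and "blending"
>>>> }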
>>>>
>>>> We've gone back and forth on it. Right now I prefer a unified scene graph where we just add Shape3D nodes into the standard Scene Graph, document on Node that certain APIs cause the node to "flatten" into an image, and go from there. We have to document it anyway.
>>>>
>>>> So, still thinking about it :-)
>>>>
>>>> Richard
>>>
>>>
>>
>>
>
>
> --
> --------------------------------------------------------------------------------------
> Dr. Michael Paus, Chairman of the Java User Group Stuttgart e.V. (JUGS).
> For more information visit www.jugs.de.
>