Alternative approach for behaviors, leveraging existing event system

John Hendrikx john.hendrikx at gmail.com
Fri Oct 20 12:02:40 UTC 2023


Hi Martin, I missed your initial mail again somehow; I will look into 
what's going on with my mailing list setup, but I'll respond here inline.

On 18/10/2023 22:20, Andy Goryachev wrote:
>
> I wonder if this might encroach upon application-specific 
> requirements/logic (that is, logic that does not belong in the control behavior).
>
> For example, when the user enters text in a TextArea inside of a 
> multi-cell notebook style application, SELECT_ALL (ctrl-A) may trigger 
> select all on that TextArea, or the whole notebook page, depending on 
> the actual requirements.  This means it’s the job of the application 
> or a page container to intercept the key binding, not the TextArea’s.
>
> -andy
>
> *From: *openjfx-dev <openjfx-dev-retn at openjdk.org> on behalf of Martin 
> Fox <martin at martinfox.com>
> *Date: *Wednesday, October 18, 2023 at 10:15
> *To: *John Hendrikx <john.hendrikx at gmail.com>
> *Cc: *openjfx-dev at openjdk.org <openjfx-dev at openjdk.org>
> *Subject: *Re: Alternative approach for behaviors, leveraging existing 
> event system
>
> John,
>
> I want to explore this a bit further. I think there are two separate 
> proposals in here.
>
> In most UI toolkits there are one or more ways to translate a key event 
> into an action. For example, Ctrl-C yields the Copy action and the 
> right arrow key yields the moveRight action. This is done in a 
> centralized way to ensure the mapping is consistent across the app. 
> And it’s only done after the node has been given a chance to act on 
> the original key event.
>
> Most UI toolkits also provide a method for taking an action and 
> finding a component to act on it. This is not necessarily in response 
> to a key event. For example, selecting the Copy menu item should 
> generate a Copy action and send it out to be acted on.
>
> One of your proposals is to use the existing event machinery to send 
> out actions. It sounds like you’ve used this method in your own apps 
> and it’s basically how the Mac toolkit works. A key event is sent up 
> the responder chain; if it gets turned into an action (selector) it 
> gets sent up the same responder chain.
>
> >     TextFieldEvent {
> >         public static final EventType<TextFieldEvent> ANY = ... ;
> >         public static final EventType<TextFieldEvent> SELECT_ALL = ... ;
> >         public static final EventType<TextFieldEvent> SELECT_NEXT_WORD = ... ;
> >     }
>
> We would want a more generic SELECT_ALL event, not one that’s bound to 
> the TextField (there’s no reason to treat SELECT_ALL on a text field 
> differently than SELECT_ALL on a collection of images). That would 
> enable a central mapping from Ctrl-A to SELECT_ALL.
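>
> A minimal sketch of what that could look like (SelectionEvent is just 
> an illustrative name here, not an existing type):
>
>     SelectionEvent {
>         public static final EventType<SelectionEvent> ANY = ... ;
>         public static final EventType<SelectionEvent> SELECT_ALL = ... ;
>     }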
>
> Defining and implementing all of this would be beefing up the toolkit 
> to enable some app-level operations that are useful and common in 
> other toolkits. And there are plenty of existing models to draw 
> inspiration from.
>
> You’re proposing using a similar mechanism to enable behaviors to 
> communicate with their controls. That seems like a different problem 
> that deserves a different discussion. In particular I don’t think 
> these two problems need to be solved at the same time even if they 
> both end up using events. The toolkit deficiencies are easier to 
> identify and address than the control/behavior/skin architecture 
> (which seems to be a source of endless debate).
>
Well, I primarily looked at how Behaviors can communicate with Controls, 
and especially how this was done in Andy's proposal with an indirection 
(FunctionTag).  The indirection, which acts on an event, looks 
suspiciously like translating a set of raw keys/gestures into a higher 
level event (perhaps not application level, but certainly higher level) 
and there can easily be multiple ways that result in the same function 
being executed.  Some of these functions would conceivably also be 
valuable to trigger directly, perhaps in response to a global hot key.

The indirection with FunctionTags simply looked to me like a new system 
for something we have already been doing in JavaFX (the ActionEvent 
for example is the result of translating the right combination of key 
events and/or mouse events).  To me it seems like using events instead 
of FunctionTags has a lot of benefits and almost no drawbacks.

In my view, Events can exist at many layers: raw events you get from 
keyboard and mouse, interpreted events like drag/drop, and high level 
events like the ActionEvent.  There is no limit here, and I see no 
reason why using Events to communicate between Behavior and Control 
should not be considered; it certainly wouldn't block a later addition 
of even higher level events.

For example, in one of my applications, replacing self-built solutions 
for such problems with plain JavaFX events has made everything a lot 
cleaner.  The app is split into multiple presentations that can be 
active (and stacked), and using the FX event system to pass an event 
like "Navigate Back" or "Refresh View" or "Construct Context Menu" 
trivializes a lot of code.  As the
Event bubbles up, "Navigate Back" will be consumed by the deepest nested 
component that can actually move one step back; "Refresh View" is 
processed by all the stacked presentations to update their state, and 
"Construct Context Menu" has all the stacked presentations contributing 
their options (which are then presented in order from most specific to 
least specific).
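
To sketch what this looks like in practice (a minimal example; 
NavigationEvent, canGoBack and goBack are illustrative names, not the 
actual code):

    import javafx.event.Event;
    import javafx.event.EventType;

    public class NavigationEvent extends Event {
        public static final EventType<NavigationEvent> ANY =
            new EventType<>(Event.ANY, "NAVIGATION");
        public static final EventType<NavigationEvent> NAVIGATE_BACK =
            new EventType<>(ANY, "NAVIGATE_BACK");

        public NavigationEvent(EventType<NavigationEvent> eventType) {
            super(eventType);
        }
    }

A nested presentation then consumes the event only while it can still 
move a step back; otherwise it bubbles up to a parent presentation:

    presentation.addEventHandler(NavigationEvent.NAVIGATE_BACK, e -> {
        if (canGoBack()) {
            goBack();
            e.consume();  // handled here; stop it from bubbling further
        }
    });

    // fired at the focused node, so it bubbles up the scene graph:
    Event.fireEvent(scene.getFocusOwner(),
        new NavigationEvent(NavigationEvent.NAVIGATE_BACK));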

--John

>
> Martin
>
>
> > On Oct 16, 2023, at 4:51 AM, John Hendrikx <john.hendrikx at gmail.com> 
> wrote:
> >
> > Hi Andy, hi list,
> >
> > I've had the weekend to think about the proposal made by Andy 
> Goryachev to make some of the APIs surrounding InputMap / Behaviors 
> public.
> >
> > I'm having some nagging doubts if that proposal is really the way 
> forward, and I'd like to explore a different approach which leverages 
> more of FX's existing event infrastructure.
> >
> > First, let me repeat an earlier observation: I think event handlers 
> installed by users should always have priority over handlers installed 
> by FX behaviors. The reasoning here is that the user (the developer in 
> this case) should be in control.  Just like CSS will back off when the 
> user changes values directly, so should default behaviors.  For this 
> proposal to have merit, this needs to be addressed.
> >
> > One thing that I think Andy's proposal addresses very nicely is the 
> need for an indirection between low level key and mouse events and 
> their associated behavior. Depending on the platform, or even platform 
> configuration, certain keys and mouse events will result in certain 
> high level actions.  Which keys and mouse events these are is platform 
> specific.  A user wishing to change this behavior should not need to 
> be aware of how these key and mouse events are mapped to a behavior.
> >
> > I however think this can be addressed in a different way, and I will 
> use the Button control to illustrate this, as it is already doing 
> something similar out of the box.
> >
> > The Button control will trigger itself when a specific combination 
> of key/mouse events occurs.  In theory, a user could install event 
> handlers to check if the mouse was released over the button, and then 
> perform some kind of action that the button is supposed to perform.  
> In practice however, this is tricky, and would require mimicking the 
> whole process to ensure the mouse was also first **pressed** on that 
> button, that it wasn't moved outside the clickable area, etc.
> >
> > Obviously expecting a user to install the necessary event handlers 
> to detect button presses based on key and mouse events is ridiculous, 
> and so Button offers a much simpler alternative: the 
> ActionEvent; this is a high level event that encapsulates several 
> other events, and translates them into a new concept.  It is triggered 
> when all the criteria to fire the button have been met without the 
> user needing to be aware of what those are.
> >
> > I think the strategy of translating low level events to high level 
> events is a really good one, and suitable for reuse for other purposes.
> >
> > One such purpose is converting platform dependent events into 
> platform independent ones. Instead of needing to know the exact key 
> press that would fire a Button, there can be an event that can fire a 
> button. Such a specific event can be filtered and listened for as 
> usual; it can be redirected, blocked, and triggered by anyone for any 
> reason.
> >
> > For a Button, the sequence of events is normally this:
> >
> > - User presses SPACE, resulting in a KeyEvent
> > - Behavior receives KeyEvent and arms the button
> > - User releases SPACE, resulting in a KeyEvent
> > - Behavior receives KeyEvent, disarms and fires the button
> > - Control fires an ActionEvent
> >
> > What I'm proposing is to change it to:
> >
> > - User presses SPACE, resulting in a KeyEvent
> > - Behavior receives KeyEvent, and sends out ButtonEvent.BUTTON_ARM
> > - Control receives BUTTON_ARM, and arms the button
> > - User releases SPACE, resulting in a KeyEvent
> > - Behavior receives KeyEvent and sends out ButtonEvent.BUTTON_FIRE
> > - Control receives BUTTON_FIRE, disarms the button and fires an 
> ActionEvent
> >
> > The above basically adds an event-based indirection.  Normally it is 
> KeyEvent -> ActionEvent, but now it would be KeyEvent -> ButtonEvent 
> -> ActionEvent. The user now has the option of hooking into the 
> mechanics of a Button at several different levels:
> >
> > - The "raw" level, listening for raw key/mouse events, useful for 
> creating custom behavior that can be platform specific
> > - The "interpreted" level, listening for things like ARM, DISARM, 
> FIRE, SELECT_NEXT_WORD, SELECT_ALL, etc...; these are platform independent
> > - The "application" level, primarily action type events
> >
> > There is sufficient precedent for such a system.  Action events are 
> a good example, but another example is the DnD events, which are 
> created by looking at raw mouse events, effectively interpreting magic 
> mouse movements and presses as more useful DnD events.
> >
> > The event-based indirection here is very similar to the FunctionTag 
> indirection in Andy's proposal. Instead of FunctionTags, there would 
> be new events defined:
> >
> >     ButtonEvent {
> >         public static final EventType<ButtonEvent> ANY = ... ;
> >         public static final EventType<ButtonEvent> BUTTON_ARM = ... ;
> >         public static final EventType<ButtonEvent> BUTTON_DISARM = ... ;
> >         public static final EventType<ButtonEvent> BUTTON_FIRE = ... ;
> >     }
> >
> >     TextFieldEvent {
> >         public static final EventType<TextFieldEvent> ANY = ... ;
> >         public static final EventType<TextFieldEvent> SELECT_ALL = ... ;
> >         public static final EventType<TextFieldEvent> SELECT_NEXT_WORD = ... ;
> >     }
> >
> > These events would be similarly publicly accessible and static, just 
> as FunctionTags would be.
> >
> > The internal Behavior classes would shift from translating + 
> executing a behavior to only translating it.  The Control would then 
> actually execute the behavior.
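> >
> > A minimal sketch of that split, using the hypothetical ButtonEvent 
> > from above (arming on key press omitted for brevity):
> >
> >     // in the Behavior: translate the platform specific gesture only
> >     node.addEventHandler(KeyEvent.KEY_RELEASED, e -> {
> >         if (e.getCode() == KeyCode.SPACE) {
> >             Event.fireEvent(node, new ButtonEvent(ButtonEvent.BUTTON_FIRE));
> >             e.consume();
> >         }
> >     });
> >
> >     // in the Control: execute, without knowing which gesture it was
> >     control.addEventHandler(ButtonEvent.BUTTON_FIRE, e -> {
> >         disarm();
> >         fire();  // fires the usual ActionEvent
> >         e.consume();
> >     });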
> >
> > This also simplifies the role of Behaviors, and maybe even clarifies 
> it; a Behavior's purpose is to translate platform dependent events 
> into platform independent ones, but not to act on those events.  
> Acting upon the events will be squarely the domain of the control.  As 
> this better pinpoints what a Behavior's purpose is, and as it 
> simplifies their implementation (event translation only), it may be 
> the way that leads to them becoming public as well.
> >
> > ---
> >
> > I've used a mechanism similar to the one described above in one of 
> my FX applications; key bindings are defined in a configuration file:
> >
> >     BACKSPACE: navigateBack
> >     LEFT: player.position:subtract(10000)
> >     RIGHT: player.position:add(10000)
> >     P: player.paused:toggle
> >     SPACE: player.paused:toggle
> >     I:
> >         - overlayVisible:toggle
> >         - showInfo:trigger
> >
> > When the right key is pressed (and it is not consumed by anything), 
> it is translated to a new higher level event by a generic key binding 
> system.  This event is fired to the same target (the focused node).  If 
> the high level event is consumed, the action was successfully 
> triggered; if not, and a key has more than one mapping, another event 
> is sent out that may get consumed or not.  If none of the high level 
> events were consumed, the low level event that triggered them is 
> allowed to propagate as usual.
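> >
> > Stripped down, the dispatching part of that system looks roughly 
> > like this; bindingsFor() stands in for the lookup into the 
> > configuration file above:
> >
> >     scene.addEventHandler(KeyEvent.KEY_PRESSED, keyEvent -> {
> >         Node target = scene.getFocusOwner();
> >         if (target == null) {
> >             return;
> >         }
> >
> >         for (EventType<Event> type : bindingsFor(keyEvent)) {
> >             Event result =
> >                 Event.fireEvent(target, new Event(target, target, type));
> >
> >             if (result == null) {  // null means it was consumed
> >                 keyEvent.consume();  // a binding handled the key
> >                 return;
> >             }
> >         }
> >         // nothing consumed a high level event; the key event now
> >         // propagates as usual
> >     });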
> >
> > The advantage of this system is obvious: the controls involved can 
> keep the action that needs to be performed separate from the exact key 
> (or something else) that may trigger it.  For "navigateBack", for 
> example, it is also an option to use the mouse; controls need not be 
> aware of this at all.  These events also bubble up; a nested control 
> that has several states may consume "navigateBack" until it has 
> reached its local "top level", and only then let it bubble up for one 
> of its parents to act on.
> >
> > --John
> >
>
Yeah, that's certainly possible.