discussion about touch events
Pavel Safrata
pavel.safrata at oracle.com
Mon Nov 11 04:22:42 PST 2013
Hi Assaf,
this was discussed during the multi-touch API design phase. I
completely understand what you are saying, yet it has no
straightforward/automatic solution of the kind you may have imagined.
First, we can't pick "whatever falls into the circle": imagine two
buttons right next to each other - do we want to activate both of them?
No, each event has to have a single target. Moreover, everybody can
easily comprehend the idea of an imprecise touch on a button, but from
the FX point of view there is a whole bunch of nodes (not every
application uses just controls), and each of the application's nodes
needs to be pickable.
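Just to make the single-target constraint concrete, here is a rough,
purely illustrative sketch - not the actual FX picker - of one possible
policy for reducing all the nodes under the finger's contact circle to
one target, namely the node whose centre lies nearest the reported
point. The method name and the nearest-centre policy are assumptions
for illustration only:

    import javafx.geometry.Bounds;
    import javafx.geometry.Point2D;
    import javafx.scene.Node;
    import java.util.List;

    // Hypothetical reduction policy, NOT the real FX picker: of all
    // nodes whose bounds intersect the finger's contact circle, deliver
    // the event to the one whose centre lies nearest the reported point.
    static Node pickSingleTarget(List<Node> candidates, Point2D touchInScene) {
        Node best = null;
        double bestDistance = Double.MAX_VALUE;
        for (Node n : candidates) {
            Bounds b = n.getLayoutBounds();
            Point2D centre = n.localToScene(
                    b.getMinX() + b.getWidth() / 2,
                    b.getMinY() + b.getHeight() / 2);
            double d = centre.distance(touchInScene);
            if (d < bestDistance) { // keep only the closest candidate
                bestDistance = d;
                best = n;
            }
        }
        return best; // a single target even if several nodes fall in the circle
    }

Any such policy has to make an arbitrary call in the two-buttons case,
which is exactly why it can't be fully automatic.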
At that time, if I remember correctly, there was a preliminary agreement
that the way around this might be putting the control in the user's
hands in the form of an API that would, for instance, make a node
pickable within a certain distance of it. So the user could tell a
button "enlarge your bounds by 10 pixels in each direction and pick on
those bounds", which would then behave as if the button had a
transparent border around it (perhaps applying only to touch events /
synthesized mouse events). Of course controls could use this API to
implement a reasonable default behavior, but the user still needs to
have the option to create an unobtrusive ImageView-background with an
event-greedy ImageView-button on it.
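For what it's worth, something close to that "transparent border" can
already be approximated with existing API. This is only a rough sketch;
the helper name, the 10-pixel margin and the click-forwarding policy
are all assumptions, not a proposed API:

    import javafx.geometry.Insets;
    import javafx.scene.control.Button;
    import javafx.scene.layout.Background;
    import javafx.scene.layout.StackPane;

    // Wrap a button in an invisible padded pane that forwards
    // near-miss taps to it.
    static StackPane withTouchMargin(Button button) {
        StackPane wrapper = new StackPane(button);
        wrapper.setPadding(new Insets(10));      // 10 px of extra pick area
        wrapper.setBackground(Background.EMPTY); // visually unobtrusive
        wrapper.setPickOnBounds(true);           // pickable on full bounds
        wrapper.setOnMouseClicked(e -> {
            // Forward only near-misses; direct hits reach the button anyway.
            if (!button.getBoundsInParent().contains(e.getX(), e.getY())) {
                button.fire();
            }
        });
        return wrapper;
    }

With the API sketched above, all of this would collapse to a single
property on the node itself.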
Regards,
Pavel
On 11.11.2013 11:51, Assaf Yavnai wrote:
> Hi Guys,
>
> I hope that I'm right about this, but it seems that touch events in
> glass are translated (and reported) as single-point events (x & y)
> without an area, like pointer events.
> AFAIK, the controls respond to touch events the same way as to mouse
> events (using the same pickers), and as a result a button press, for
> example, will only be triggered if the x & y of the touch event is
> within the control area.
>
> This means that small controls, or even quite large controls (like
> buttons with text), will often get missed because of the 'strict' node
> picking. From a UX point of view this is strange, as the user clearly
> pressed on a node (the finger was clearly above it) but nothing
> happens...
>
> With the current implementation it's hard to use small features in
> controls, like scrollbars in lists, and it is almost impossible to
> implement something like a 'screen navigator' (the series of small
> dots at the bottom of a smartphone's screen which allows you to jump
> directly to a 'far away' screen).
>
> To illustrate this, consider the low-resolution sketch below, where
> the "+" is the actual x,y reported, the ellipse is the finger touch
> area and the rectangle is the node.
> With the current implementation this type of tap will not trigger the
> node's handlers:
>
>            ____
>           /    \
>          |  +   |        in this scenario the 'button'
>     _____|      |_____   will not get pressed
>    |      \____/      |
>    |__________________|
>
> If your smartphone supports it, turn on the touch debugging options in
> settings and you will see that each point translates to a quite large
> circle, and whatever falls in it, or reasonably close to it, gets
> picked.
>
> I want to start a discussion to understand whether my perspective is
> accurate, and what, if anything, can be done for the coming release or
> the next one.
>
> We might use the recently opened RT-34136
> <https://javafx-jira.kenai.com/browse/RT-34136> for logging this, or
> open a new JIRA for it.
>
> Thanks,
> Assaf