discussion about touch events

Assaf Yavnai assaf.yavnai at oracle.com
Tue Nov 12 05:11:52 PST 2013


Daniel, Pavel,
I tend to agree with both of you, and would also like to add some 
ideas and comments:

1) I think the touch picker and mouse picker should be different 
implementations, used according to the origin of the event. This means 
that if an application is written to listen only to mouse events and is 
running with a touch screen, then touch will be 'usable'; if an 
application listens to touch events, it should perform well (as 
expected). This is opposed to what we have now, where there is no 
reason to listen to simple touch events, as they are the same as mouse 
events. (This is also a comment on the earlier mail from 
sebastian.rheinnecker at yworks.com - Button and TouchEvents)
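To make the first point concrete, here is a minimal sketch of picking 
dispatched on the event's origin. This is plain Java, not JavaFX code: 
the Picker, MousePicker, and TouchPicker types are made up for 
illustration, and the "slop" margin stands in for whatever capture-zone 
rule the real picker would use.

```java
import java.awt.geom.Rectangle2D;

// Hypothetical strategy interface: one hit test per input origin.
interface Picker {
    boolean hits(Rectangle2D nodeBounds, double x, double y);
}

class MousePicker implements Picker {
    // Exact containment: the cursor point must fall inside the node.
    public boolean hits(Rectangle2D b, double x, double y) {
        return b.contains(x, y);
    }
}

class TouchPicker implements Picker {
    private final double slop; // extra capture margin in pixels

    TouchPicker(double slop) { this.slop = slop; }

    // Loose containment: the touch point may fall up to `slop` pixels
    // outside the node's bounds and still count as a hit.
    public boolean hits(Rectangle2D b, double x, double y) {
        Rectangle2D grown = new Rectangle2D.Double(
                b.getX() - slop, b.getY() - slop,
                b.getWidth() + 2 * slop, b.getHeight() + 2 * slop);
        return grown.contains(x, y);
    }
}
```

Under this split, an application written only against mouse events 
would still be routed through the looser touch picker when the event 
comes from a touch screen, which is what would make small controls 
'usable' without code changes.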

2) The capture zone should be configurable - I don't know at which 
level: build time (bare minimum) or a -D property (my preference). 
Runtime configuration doesn't seem to be a must, but it would surely 
be a nice-to-have feature for different layout implementations. For 
example, take a screen-lock application where there is a 3x3 matrix of 
dots and the user needs to set and follow a pattern to unlock the 
screen. The dots are relatively small and far apart, so the capture 
zone can be much bigger in this scenario. If you have ever used this 
type of application, you probably noticed the loose matching (it feels 
impossible to miss). On the other hand, it can be nice to set it to a 
tighter value when nodes are close together (as in the VK or the 
'screen slider' scenario).
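A -D property version of this could look like the sketch below. The 
property name "fx.touch.captureRadius" and the default value are 
invented for illustration; no such property exists today.

```java
// Sketch: read the capture-zone radius from a -D system property so
// different applications can tune it at launch time.
class CaptureZoneConfig {
    static final double DEFAULT_RADIUS_PX = 20.0;

    static double captureRadius() {
        // Hypothetical property name, chosen for this example only.
        String v = System.getProperty("fx.touch.captureRadius");
        if (v == null) return DEFAULT_RADIUS_PX;
        try {
            return Double.parseDouble(v);
        } catch (NumberFormatException e) {
            return DEFAULT_RADIUS_PX; // fall back on malformed values
        }
    }
}
```

The lock-screen application could then launch with a large value 
(e.g. -Dfx.touch.captureRadius=40) for loose matching, while a dense 
VK layout could pass a smaller one.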

3) What do you think about also supplying hints together with the 
touch event or the action event? For example, an action listener on a 
button could get called on a press with a hint such as: MATCH_EXACT 
(the center point falls inside the node), MATCH_CLOSE (the node is 
picked through the capture zone), or MATCH_NEARBY (not a press per se, 
but a press made near the node; this, of course, should only be sent 
if no other node with a better match has consumed the event).
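The hint idea can be sketched as a simple classifier. The enum names 
follow the mail above; the two zone thresholds and the rectangle-based 
geometry are assumptions made for this example, not an existing API.

```java
import java.awt.geom.Rectangle2D;

// Hint names taken from the proposal above; NO_MATCH added for
// completeness of the example.
enum MatchHint { MATCH_EXACT, MATCH_CLOSE, MATCH_NEARBY, NO_MATCH }

class MatchClassifier {
    // Classify how a touch center point relates to a node's bounds,
    // given an inner capture zone and a wider "nearby" zone (pixels).
    static MatchHint classify(Rectangle2D bounds, double x, double y,
                              double captureZone, double nearbyZone) {
        if (bounds.contains(x, y)) return MatchHint.MATCH_EXACT;
        if (within(bounds, x, y, captureZone)) return MatchHint.MATCH_CLOSE;
        if (within(bounds, x, y, nearbyZone)) return MatchHint.MATCH_NEARBY;
        return MatchHint.NO_MATCH;
    }

    // True if (x, y) lies within `margin` pixels of the bounds.
    private static boolean within(Rectangle2D b, double x, double y,
                                  double margin) {
        return new Rectangle2D.Double(
                b.getX() - margin, b.getY() - margin,
                b.getWidth() + 2 * margin, b.getHeight() + 2 * margin)
                .contains(x, y);
    }
}
```

A listener receiving MATCH_NEARBY could, for instance, highlight the 
node instead of activating it, while MATCH_EXACT and MATCH_CLOSE both 
trigger the action.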

Assaf

On 11/12/2013 01:11 PM, Pavel Safrata wrote:
> (Now my answer using external link)
>
> Hello Daniel,
> this is quite similar to my idea described earlier. The major 
> difference is the "fair division of capture zones" among siblings. 
> It's an interesting idea; let's explore it. What pops up first is that 
> children can also overlap. So I think it would behave like this (green 
> capture zones omitted):
>
> Child in parent vs. Child over child: http://i.imgur.com/e92qEJA.jpg
>
> ..wouldn't it? From the user's point of view this seems confusing: 
> both cases look the same but behave differently. Note that in the 
> case on the right, the parent may still be the same; the developer 
> only adds a fancy background as a new child, and suddenly the red 
> child can't be hit that easily. What do you think? Is it an issue? 
> Or would it not behave this way?
>
> Regards,
> Pavel
>
> On 12.11.2013 12:06, Daniel Blaukopf wrote:
>> (My original message didn't get through to openjfx-dev because I used 
>> inline images. I've replaced those images with external links)
>>
>> On Nov 11, 2013, at 11:30 PM, Pavel Safrata <pavel.safrata at oracle.com 
>> <mailto:pavel.safrata at oracle.com>> wrote:
>>
>>> On 11.11.2013 17:49, Tomas Mikula wrote:
>>>> On Mon, Nov 11, 2013 at 1:28 PM, Philipp Dörfler 
>>>> <phdoerfler at gmail.com <mailto:phdoerfler at gmail.com>> wrote:
>>>>> I see the need to be aware of the area that is covered by fingers 
>>>>> rather
>>>>> than just considering that area's center point.
>>>>> I'd guess that this adds a new layer of complexity, though. For 
>>>>> instance:
>>>>> Say we have a button on some background and both the background 
>>>>> and the
>>>>> button do have an onClick listener attached. If you tap the button 
>>>>> in a way
>>>>> that the touched area's center point is outside of the button's 
>>>>> boundaries -
>>>>> what event will be fired? Will both the background and the button 
>>>>> receive a
>>>>> click event? Or just either the background or the button 
>>>>> exclusively? Will
>>>>> there be a new event type which gets fired in case of such 
>>>>> area-based taps?
>>>>>
>>>>> My suggestion would therefore be to have an additional area tap 
>>>>> event which
>>>>> gives precise information about diameter and center of the tap. 
>>>>> Besides
>>>>> that there should be some kind of "priority" for choosing which 
>>>>> node's
>>>>> onClick will be called.
>>>> What about picking the one that is closest to the center of the touch?
>>>>
>>>
>>> There is always something directly on the center of the touch 
>>> (possibly the scene background, but it can have event handlers too). 
>>> That's what we pick right now.
>>> Pavel
>>
>> What Seeon, Assaf and I discussed earlier was building some fuzziness 
>> into the node picker so that instead of each node capturing only 
>> events directly on top of it:
>>
>> Non-fuzzy picker: http://i.imgur.com/uszql8V.png
>>
>> ..nodes at each level of the hierarchy would capture events beyond 
>> their borders as well:
>>
>> Fuzzy picker: http://i.imgur.com/ELWamYp.png
>>
>> In the above, “Parent” would capture touch events within a certain 
>> radius around it, as would its children “Child 1” and “Child 2”. 
>> Since “Child 1” and “Child 2” are peers, they would have a sharp 
>> division between them, a watershed on either side of which events 
>> would go to one child node or the other. This would also apply if the 
>> peer nodes were further apart; they would divide the no-man’s land 
>> between them. Of course this no-man’s land would be part of “Parent” 
>> and could be touch-sensitive - but we won’t consider “Parent” 
>> as an event target until we have ruled out using one of its 
>> children’s extended capture zones.
>>
>> The capture radius could either be a styleable property on the nodes, 
>> or could be determined by the X and Y size of a touch point as 
>> reported by the touch screen. We’d still be reporting a touch point, 
>> not a touch area. The touch target would be, as now, a single node.
>>
>> This would get us more reliable touch capture at leaf nodes of the 
>> node hierarchy at the expense of it being harder to tap the 
>> background. This is likely to be a good trade-off.
>>
>> Daniel
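The watershed rule Daniel describes can be sketched as follows. This 
is a toy model, not scene-graph code: siblings are plain rectangles, 
the capture radius is a single number, and the FuzzyPicker class is 
invented for the example.

```java
import java.awt.geom.Rectangle2D;
import java.util.List;

class FuzzyPicker {
    // Distance from a point to the nearest edge of a rectangle
    // (0 if the point is inside it).
    static double distance(Rectangle2D b, double x, double y) {
        double dx = Math.max(Math.max(b.getX() - x, 0), x - b.getMaxX());
        double dy = Math.max(Math.max(b.getY() - y, 0), y - b.getMaxY());
        return Math.hypot(dx, dy);
    }

    // Among siblings whose capture zones contain the touch point,
    // pick the nearest one; the no-man's land between two peers is
    // split along the watershed where the distances are equal.
    // Returns the sibling's index, or -1 meaning "fall back to the
    // parent" - the parent is only considered once every child's
    // extended capture zone has been ruled out.
    static int pick(List<Rectangle2D> siblings, double x, double y,
                    double captureRadius) {
        int best = -1;
        double bestDist = Double.MAX_VALUE;
        for (int i = 0; i < siblings.size(); i++) {
            double d = distance(siblings.get(i), x, y);
            // Ties exactly on the watershed go to the earlier node.
            if (d <= captureRadius && d < bestDist) {
                bestDist = d;
                best = i;
            }
        }
        return best;
    }
}
```

With two peers 20 pixels apart, a touch in the gap goes to whichever 
child is closer; only a touch farther than the capture radius from 
both reaches the parent.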
>>
>>
>>
>>>
>>>> Tomas
>>>>
>>>>> Maybe the draw order / order in the scene graph / z
>>>>> buffer value might be sufficient to model what would happen in the 
>>>>> real,
>>>>> physical world.
>>>>> Am 11.11.2013 13:05 schrieb "Assaf Yavnai" 
>>>>> <assaf.yavnai at oracle.com <mailto:assaf.yavnai at oracle.com>>:
>>>>>
>>>>>> The ascii sketch looked fine on my screen before I sent the mail 
>>>>>> :( I hope
>>>>>> the idea is clear from the text
>>>>>> (now in the reply dialog its also look good)
>>>>>>
>>>>>> Assaf
>>>>>> On 11/11/2013 12:51 PM, Assaf Yavnai wrote:
>>>>>>
>>>>>>> Hi Guys,
>>>>>>>
>>>>>>> I hope that I'm right about this, but it seems that touch 
>>>>>>> events in glass are translated (and reported) as single-point 
>>>>>>> events (x & y) without an area, like pointer events.
>>>>>>> AFAIK, the controls respond to touch events the same as to 
>>>>>>> mouse events (using the same pickers), and as a result a button 
>>>>>>> press, for example, will only be triggered if the x & y of the 
>>>>>>> touch event is within the control area.
>>>>>>>
>>>>>>> This means that small controls, or even quite large controls 
>>>>>>> (like buttons with text), will often get missed because of the 
>>>>>>> 'strict' node picking. From a UX point of view this is strange, 
>>>>>>> as the user clearly pressed on a node (the finger was clearly 
>>>>>>> above it) but nothing happens...
>>>>>>>
>>>>>>> With the current implementation it's hard to use small features 
>>>>>>> in controls, like scrollbars in lists, and it is almost 
>>>>>>> impossible to implement something like a 'screen navigator' 
>>>>>>> (the series of small dots at the bottom of a smartphone screen 
>>>>>>> which allows you to jump directly to a 'far away' screen).
>>>>>>>
>>>>>>> To illustrate, consider the low-resolution sketch below, where 
>>>>>>> the "+" is the actual x,y reported, the ellipse is the finger 
>>>>>>> touch area, and the rectangle is the node.
>>>>>>> With the current implementation this type of tap will not 
>>>>>>> trigger the node's handlers:
>>>>>>>
>>>>>>>               ___
>>>>>>>             /     \
>>>>>>>            |   +   |       in this scenario the 'button' will
>>>>>>>       ____ |       | ____  not get pressed
>>>>>>>      |      \_____/      |
>>>>>>>      |___________________|
>>>>>>>
>>>>>>> If your smartphone supports it, turn on the touch debugging 
>>>>>>> options in settings and see that each point translates to a 
>>>>>>> quite large circle, and whatever falls in it, or reasonably 
>>>>>>> close to it, gets picked.
>>>>>>>
>>>>>>> I want to start a discussion to understand whether my 
>>>>>>> perspective is accurate and what can be done, if anything, for 
>>>>>>> the coming release or the next one.
>>>>>>>
>>>>>>> We might use the recently opened RT-34136 
>>>>>>> <https://javafx-jira.kenai.com/browse/RT-34136> for logging 
>>>>>>> this, or open a new JIRA for it.
>>>>>>>
>>>>>>> Thanks,
>>>>>>> Assaf
>>
>
