19 How Should User Interaction Events Be Handled #

Hello, I’m Chen Hang. Today, I’ll share with you how to respond to user interaction events.

In the previous two articles, we learned about the package management mechanism in Flutter. In Flutter, packages are functional abstractions that include external dependencies. For resource and project code dependencies, we use the package configuration file pubspec.yaml for unified management.

Through the previous chapters, we have learned how to build custom UI and implement business logic in Flutter with both internal implementations and external dependencies. However, apart from inherently interactive components such as buttons and scrolling ListViews, our interfaces still cannot respond to user interaction. So in today’s article, I will focus on how Flutter listens for and responds to user gestures.

Gestures in Flutter can be divided into two categories:

  • The first category is raw pointer events, which are common touch events in native development, representing the displacement caused by touch (or mouse, stylus) actions on the screen;
  • The second category is gesture detectors, which represent combination operations of multiple raw pointer events, such as taps, double taps, long presses, etc., and provide semantic encapsulation for pointer events.

Next, let’s take a look at raw pointer events.

Pointer Events #

Pointer events represent the raw touch data of user interactions, such as the finger touching the screen (PointerDownEvent), the finger moving on the screen (PointerMoveEvent), the finger lifting off the screen (PointerUpEvent), and the touch being cancelled (PointerCancelEvent). This is consistent with the underlying touch events of the native system.

When a finger touches the screen and triggers a touch event, Flutter runs a hit test to determine which components sit at the touch position and passes the event to the innermost of those components to respond first. Similar to the event bubbling mechanism in browsers, the event then travels from this innermost component up along the component tree towards the root node.

However, unlike event bubbling in browsers, Flutter cannot cancel or stop the further distribution of an event. We can only adjust how a component behaves during hit testing through hitTestBehavior, for example letting the touch event be handled by the child component or fall through to the component below it in the view hierarchy.
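For example, here is a minimal sketch (the widget tree is illustrative) of how a Listener’s behavior parameter, which takes a HitTestBehavior value, changes what counts as a hit target:

Listener(
  // HitTestBehavior.deferToChild (the default): this widget is hit only
  //   if one of its children is hit.
  // HitTestBehavior.opaque: the whole bounds count as a hit target, and
  //   widgets visually below it will not receive the event.
  // HitTestBehavior.translucent: the bounds count as a hit target, but
  //   widgets visually below it can still receive the event as well.
  behavior: HitTestBehavior.translucent,
  onPointerDown: (event) => print("translucent down $event"),
  child: Container(width: 300, height: 300),
);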

To listen for raw pointer events at the widget level, Flutter provides the Listener widget, which listens for such events on its child widget.

Now, let’s look at an example using the Listener widget. I have defined a red square Container with a width of 300 and used the Listener widget to listen for its internal Down, Move, and Up events:

Listener(
  child: Container(
    color: Colors.red,
    width: 300,
    height: 300,
  ),
  onPointerDown: (event) => print("down $event"),
  onPointerMove: (event) => print("move $event"),
  onPointerUp: (event) => print("up $event"),
);

Let’s try to touch, move, and lift off within the red square area. You will see that the Listener has captured a series of raw pointer events and printed the position information of these events:

I/flutter (13829): down PointerDownEvent(Offset(150.8, 313.4))
I/flutter (13829): move PointerMoveEvent(Offset(152.0, 313.4))
I/flutter (13829): move PointerMoveEvent(Offset(154.6, 313.4))
I/flutter (13829): up PointerUpEvent(Offset(157.1, 312.3))

Gesture Recognition #

With Listener, you can listen to pointer events directly. But pointer events are too low-level: if we want higher-level information about a touch, such as whether the user is dragging a control, working with raw pointer events quickly becomes very cumbersome.

So in everyday development, when responding to user interaction we generally use Gesture, the semantic encapsulation of pointer events: onTap for a tap, onDoubleTap for a double tap, onLongPress for a long press, onPanUpdate for a drag, onScaleUpdate for a pinch-to-zoom, and so on. Gesture also lets us register several of these gestures on one widget at the same time.

Gesture is an abstraction of gesture semantics, and if we want to listen to gestures at the component level, we need to use GestureDetector. GestureDetector is a widget that handles various advanced user touch behaviors, and like Listener, it is a functional component.

Next, let’s take a look at how GestureDetector is used through an example.

I have defined a Stack layout with a red Container placed in the top-left corner via a Positioned widget, while listening for its tap, double-tap, long-press, and drag events. In the drag callback, we update the container’s position:

// Coordinates of the red container
double _top = 0.0;
double _left = 0.0;
Stack(
  // Using Stack widget to overlay views for easy control of view coordinates
  children: <Widget>[
    Positioned(
      top: _top,
      left: _left,
      child: GestureDetector(
        // Gesture recognition
        child: Container(color: Colors.red, width: 50, height: 50), // Red subview
        onTap: () => print("Tap"), // Callback for click
        onDoubleTap: () => print("Double Tap"), // Callback for double tap
        onLongPress: () => print("Long Press"), // Callback for long press
        onPanUpdate: (e) { // Callback for drag
          setState(() {
            // Update position
            _left += e.delta.dx;
            _top += e.delta.dy;
          });
        },
      ),
    )
  ],
);

Running this code and checking the console output, you will see that the red Container can respond to not only dragging behavior but also click, double tap, and long press events.

Figure 1 GestureDetector example

Although in the example above we listen for multiple gestures on a single widget, ultimately only one gesture gets the right to handle the current event. To arbitrate among multiple candidate gestures, Flutter introduces the concept of a Gesture Arena, which decides which gesture gets to respond to the user event based on factors such as how long the user touches the screen, the displacement, and the drag direction.
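As a quick illustration (a minimal sketch; the widget is arbitrary), when both onTap and onDoubleTap are registered on the same widget, the arena cannot declare a single tap the winner until the double-tap window has expired, which is why the onTap callback fires with a slight delay:

GestureDetector(
  // Both recognizers enter the arena on every tap. The single-tap
  // recognizer only wins after the double-tap recognizer gives up,
  // i.e. after the double-tap timeout passes with no second tap.
  onTap: () => print('single tap wins'),
  onDoubleTap: () => print('double tap wins'),
  child: Container(color: Colors.orange, width: 100, height: 100),
);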

So how exactly does the Gesture Arena work?

In fact, GestureDetector internally creates a factory class (GestureRecognizerFactory) for each gesture, and each factory in turn uses a gesture recognizer (GestureRecognizer) to determine which gesture is currently being handled.

All the gesture factory classes are then passed to the RawGestureDetector class, which performs a lot of work in detecting gestures: it listens to raw pointer events using Listener and synchronizes information to all gesture recognizers when the state changes. Then these gestures compete in the arena to decide who will ultimately respond to the user event.

Sometimes, we may need to register the same type of gesture listener for multiple views in an application. For example, in the information flow list of a microblogging app, different responses occur when clicking different areas: clicking the profile picture leads to the user’s profile page, clicking images leads to viewing the full-size image page, and clicking other parts leads to the microblog details page, etc.

When gesture recognition occurs in multiple views with a parent-child relationship, the gesture arena considers the gestures of both the parent and the child, and usually lets the child view respond to the event. This makes sense: visually, the child view sits above the parent view and covers it, so in terms of event handling the child view is naturally the first responder.

In the example below, I have defined two nested Containers, each listening for the tap event:

GestureDetector(
  onTap: () => print('Parent tapped'), // Tap callback for the parent view
  child: Container(
    color: Colors.pinkAccent,
    child: Center(
      child: GestureDetector(
        onTap: () => print('Child tapped'), // Tap callback for the child view
        child: Container(
          color: Colors.blueAccent,
          width: 200.0,
          height: 200.0,
        ),
      ),
    ),
  ),
);

Running this code and then clicking in the blue area, we can see that even though the parent container also listens to the click event, Flutter only responds to the click event of the child container.

I/flutter (16188): Child tapped

Figure 2 Parent-child nested GestureDetector example

To allow the parent container to receive the gesture as well, we need to use RawGestureDetector together with gesture recognizer factories (GestureRecognizerFactory) to change the arena’s decision about who responds to the user event.

Before that, we need to customize a gesture recognizer to add itself back to the arena when it fails in the competition, so that it can continue to respond to user events.

In the following code, I define a class that inherits from TapGestureRecognizer, a tap gesture recognizer, and override its rejectGesture method to manually revive itself:

class MultipleTapGestureRecognizer extends TapGestureRecognizer {
  @override
  void rejectGesture(int pointer) {
    // When the arena rejects this recognizer, accept the gesture anyway,
    // manually reviving it so it can still respond to the event
    acceptGesture(pointer);
  }
}

Next, we need to pass the gesture recognizer and its factory class to RawGestureDetector, so that it can immediately find the corresponding recognition method when the user generates a gesture interaction event. In fact, the configuration work done by the initialization function of RawGestureDetector is to define the mapping relationship between different gesture recognizers and their factory classes.

Here, since we only need to handle tap events, we only need to configure one recognizer. The factory is created with the GestureRecognizerFactoryWithHandlers class, whose constructor takes a callback that creates the gesture recognizer object and a callback that initializes it.

In the following code, we create a custom gesture recognizer and set the click event callback method. It should be noted that since we only need the parent container to listen to the click events of the child container, we only need to wrap the parent container in RawGestureDetector, and the child container remains unchanged:

RawGestureDetector( // Builds the parent widget's own gesture recognition mapping
  gestures: {
    // Map each gesture recognizer type to a factory class, which returns
    // a recognizer that can respond to the gesture
    MultipleTapGestureRecognizer:
        GestureRecognizerFactoryWithHandlers<MultipleTapGestureRecognizer>(
      () => MultipleTapGestureRecognizer(), // Creates the recognizer
      (MultipleTapGestureRecognizer instance) {
        instance.onTap = () => print('parent tapped'); // Tap callback
      },
    )
  },
  child: Container(
    color: Colors.pinkAccent,
    child: Center(
      child: GestureDetector( // The child view can keep using GestureDetector
        onTap: () => print('Child tapped'),
        child: Container(...),
      ),
    ),
  ),
);

Running this code, we can see that when we click on the blue container, its parent container also receives the Tap event.

I/flutter (16188): Child tapped
I/flutter (16188): parent tapped

Summary #

Alright, that’s it for today’s sharing. Let’s briefly review how Flutter handles user events.

Firstly, we learned about the underlying raw pointer events in Flutter, as well as the corresponding listening methods and event bubbling mechanism.

Then, we learned about Gestures, which encapsulate the semantics of raw pointer events. We explored various methods of recognizing gestures and their ability to support multiple gestures simultaneously.

Finally, I introduced the event handling mechanism of Gestures. In Flutter, although we can listen to multiple gestures on a single widget, or listen to the same gesture on multiple widgets, Flutter uses a gesture arena to determine which gesture should respond to user behavior. If we want multiple gestures to respond to user behavior at the same time, we need to customize the gestures using RawGestureDetector and gesture factories, and manually revive a gesture that fails in the arena.

When dealing with multiple gesture recognition scenarios, it is easy to encounter conflicts between gestures. For example, when we need to perform operations like click, long press, rotate, zoom, and drag on an image, how do we determine whether the user is currently clicking or long pressing, rotating or zooming? If we want to handle complex interactive gestures accurately, we must intervene in the gesture recognition process and resolve any conflicts.
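In the zoom-versus-rotate case, for instance, one pragmatic option (a minimal sketch; the widget is illustrative) is to register a single scale gesture, because its ScaleUpdateDetails reports both the scale factor and the rotation angle, so the two operations never have to compete in the arena:

GestureDetector(
  onScaleUpdate: (ScaleUpdateDetails details) {
    // details.scale is the zoom factor relative to the gesture start;
    // details.rotation is the rotation angle in radians
    print('scale: ${details.scale}, rotation: ${details.rotation}');
  },
  child: Container(color: Colors.purple, width: 200, height: 200),
);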

However, note that these conflicts arise only during the semantic recognition of gestures, not at the level of raw pointer events. So when gesture-level conflicts become too tangled to resolve, we can also fall back to recognizing raw pointer events directly through a Listener and arbitrate the conflict ourselves.
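As a minimal sketch of this fallback (assuming the code lives in a StatefulWidget whose State declares the illustrative fields _left and _top), we can reproduce the earlier drag behavior without entering the arena at all by accumulating the pointer’s movement deltas ourselves:

// Coordinates of the draggable view, declared in the State class
double _left = 0.0;
double _top = 0.0;

Listener(
  onPointerMove: (PointerMoveEvent event) {
    setState(() {
      // event.delta is the displacement since the previous move event
      _left += event.delta.dx;
      _top += event.delta.dy;
    });
  },
  child: Container(color: Colors.green, width: 50, height: 50),
);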

I have put the event handling demo mentioned in today’s sharing on GitHub. You can download it and run it yourself to further solidify your learning.

Thought Questions #

Finally, I will leave you with two thought questions.

  1. For a parent container that contains a FlatButton, if the parent container uses GestureDetector to listen for the onTap event, will the parent container’s click event be recognized when we click on the button? Why or why not?
  2. If we listen for the onDoubleTap event and double tap on the button, will the parent container’s double tap event be recognized? Why or why not?

Feel free to leave a comment in the comment section and share your thoughts. I will be waiting for you in the next article! Thank you for listening, and feel free to share this article with more friends to read.