The term gesture describes an interaction between the user and the touch screen that can be detected and used to trigger an event in the app. Drags, taps, double taps, pinches, rotation motions, and long presses are all considered gestures in SwiftUI. The goal of this chapter is to explore the use of gesture recognizers within a SwiftUI-based app.
Creating the GestureDemo Example Project
To try out the examples in this chapter, create a new Multiplatform App Xcode project named GestureDemo.
Basic Gestures
Gestures performed within the bounds of a view can be detected by adding a gesture recognizer to that view. SwiftUI provides recognizers for tap, long press, rotation, magnification (pinch) and drag gestures.
A gesture recognizer is added to a view using the gesture() modifier, passing through the gesture recognizer to be added.
In the simplest form, a recognizer will include one or more action callbacks containing the code to be executed when a matching gesture is detected on the view. The following example adds a tap gesture detector to an Image view and implements the onEnded callback containing the code to be performed when the gesture is completed successfully:
struct ContentView: View {

    var body: some View {
        Image(systemName: "hand.point.right.fill")
            .gesture(
                TapGesture()
                    .onEnded { _ in
                        print("Tapped")
                    }
            )
    }
}
Using Live Preview in debug mode, test the above view declaration, noting the appearance of the “Tapped” message in the debug console panel when the image is clicked (if the message does not appear, try running the app in a simulator session instead of using the Live Preview).
When working with gesture recognizers, it is usually preferable to assign the recognizer to a variable and then reference that variable in the modifier. This makes for tidier view body declarations and encourages reuse:
var body: some View {

    let tap = TapGesture()
        .onEnded { _ in
            print("Tapped")
        }

    return Image(systemName: "hand.point.right.fill")
        .gesture(tap)
}
When using the tap gesture recognizer, the number of taps required to complete the gesture may also be specified. The following, for example, will only detect double taps:
let tap = TapGesture(count: 2)
    .onEnded { _ in
        print("Tapped")
    }
The long press gesture recognizer is used in a similar way and is designed to detect when a view is touched for an extended length of time. The following declaration detects when a long press is performed on an Image view using the default time duration:
var body: some View {

    let longPress = LongPressGesture()
        .onEnded { _ in
            print("Long Press")
        }

    return Image(systemName: "hand.point.right.fill")
        .gesture(longPress)
}
To adjust the duration necessary to qualify as a long press, simply pass through a minimum duration value (in seconds) to the LongPressGesture() call. It is also possible to specify the maximum distance the point of contact can move from the view during the long press. If the touch moves beyond the specified distance, the gesture will cancel and the onEnded action will not be called:
let longPress = LongPressGesture(minimumDuration: 10,
                                 maximumDistance: 25)
    .onEnded { _ in
        print("Long Press")
    }
A gesture recognizer can be removed from a view by passing a nil value to the gesture() modifier:
.gesture(nil)
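This makes it possible, for example, to enable and disable gesture detection dynamically. The following is a minimal sketch (the gestureEnabled property and the Toggle are illustrative additions, not part of the project) in which a ternary expression attaches or removes the tap recognizer based on app state:

import SwiftUI

struct ContentView: View {

    // Hypothetical state controlling whether the gesture is active
    @State private var gestureEnabled = true

    var body: some View {
        let tap = TapGesture()
            .onEnded { _ in
                print("Tapped")
            }

        return VStack {
            Image(systemName: "hand.point.right.fill")
                .font(.largeTitle)
                // Passing nil removes the recognizer from the view
                .gesture(gestureEnabled ? tap : nil)
            Toggle("Enable tap gesture", isOn: $gestureEnabled)
                .padding()
        }
    }
}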
The onChanged Action Callback
In the previous examples, the onEnded action closure was used to detect when a gesture completes. Many of the gesture recognizers (except for TapGesture) also allow the addition of an onChanged action callback. The onChanged callback will be called when the gesture is first recognized and each time the underlying values of the gesture change, up until the point that the gesture ends.
The onChanged action callback is particularly useful with gestures involving motion across the device display (as opposed to taps and long presses). The magnification gesture, for example, can be used to detect the movement of touches on the screen.
struct ContentView: View {

    var body: some View {
        let magnificationGesture =
            MagnificationGesture(minimumScaleDelta: 0)
                .onEnded { _ in
                    print("Gesture Ended")
                }

        return Image(systemName: "hand.point.right.fill")
            .resizable()
            .font(.largeTitle)
            .gesture(magnificationGesture)
            .frame(width: 100, height: 90)
    }
}
The above implementation will detect a pinching motion performed over the Image view but will only report the detection after the gesture ends. Within the preview canvas, pinch gestures can be simulated by holding down the keyboard Option key while clicking in the Image view and dragging.
To receive notifications for the duration of the gesture, the onChanged callback action can be added:
let magnificationGesture =
    MagnificationGesture(minimumScaleDelta: 0)
        .onChanged { _ in
            print("Magnifying")
        }
        .onEnded { _ in
            print("Gesture Ended")
        }
Now when the gesture is detected, the onChanged action will be called each time the values associated with the pinch operation change. Each time the onChanged action is called, it is passed a MagnificationGesture.Value instance containing a CGFloat value representing the current scale of the magnification.
With access to this information about the magnification gesture scale, interesting effects can be implemented such as configuring the Image view to resize in response to the gesture:
struct ContentView: View {

    @State private var magnification: CGFloat = 1.0

    var body: some View {
        let magnificationGesture =
            MagnificationGesture(minimumScaleDelta: 0)
                .onChanged { value in
                    self.magnification = value
                }
                .onEnded { _ in
                    print("Gesture Ended")
                }

        return Image(systemName: "hand.point.right.fill")
            .resizable()
            .font(.largeTitle)
            .scaleEffect(magnification)
            .gesture(magnificationGesture)
            .frame(width: 100, height: 90)
    }
}
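The rotation gesture recognizer follows the same pattern. As a minimal sketch (not one of the steps in this project), the following uses RotationGesture, whose value is an Angle, together with the rotationEffect() modifier to rotate the image as the gesture is performed:

import SwiftUI

struct ContentView: View {

    // Current rotation applied to the image
    @State private var angle: Angle = .degrees(0)

    var body: some View {
        let rotationGesture = RotationGesture()
            .onChanged { value in
                // value is an Angle representing the current rotation
                self.angle = value
            }

        return Image(systemName: "hand.point.right.fill")
            .font(.largeTitle)
            .rotationEffect(angle)
            .gesture(rotationGesture)
    }
}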
The updating Callback Action
The updating callback action is like onChanged with the exception that it works with a special property wrapper named @GestureState. @GestureState is like the standard @State property wrapper but is designed exclusively for use with gestures. The key difference is that @GestureState properties automatically reset to their original state when the gesture ends. As such, the updating callback is ideal for storing transient state that is only needed while a gesture is being performed.
Each time an updating action is called, it is passed the following three arguments:
- A DragGesture.Value instance containing information about the gesture.
- A reference to the @GestureState property to which the gesture has been bound.
- A Transaction object containing the current state of the animation corresponding to the gesture.

The DragGesture.Value instance is particularly useful and contains the following properties:
- location (CGPoint) – The current location of the drag gesture.
- predictedEndLocation (CGPoint) – Predicted final location, based on the velocity of the drag if dragging stops.
- predictedEndTranslation (CGSize) – A prediction of what the final translation would be if dragging stopped now based on the current drag velocity.
- startLocation (CGPoint) – The location at which the drag gesture started.
- time (Date) – The time stamp of the current drag event.
- translation (CGSize) – The total translation from the start of the drag gesture to the current event (essentially the offset from the start position to the current drag location).
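For illustration purposes, a minimal onChanged callback (not part of the example project) that prints a selection of these properties as a drag proceeds might resemble the following:

let drag = DragGesture()
    .onChanged { value in
        // Inspect DragGesture.Value properties during the drag
        print("Started at: \(value.startLocation)")
        print("Currently at: \(value.location)")
        print("Translation: \(value.translation)")
        print("Predicted end location: \(value.predictedEndLocation)")
    }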
Typically, a drag gesture updating callback will extract the translation value from the DragGesture.Value object and assign it to a @GestureState property, resembling the following:
let drag = DragGesture()
    .updating($offset) { dragValue, state, transaction in
        state = dragValue.translation
    }
The following example adds a drag gesture to an Image view and then uses the updating callback to keep a @GestureState property updated with the current translation value. An offset() modifier is applied to the Image view using the @GestureState offset property. This has the effect of making the Image view follow the drag gesture as it moves across the screen.
struct ContentView: View {

    @GestureState private var offset: CGSize = .zero

    var body: some View {
        let drag = DragGesture()
            .updating($offset) { dragValue, state, transaction in
                state = dragValue.translation
            }

        return Image(systemName: "hand.point.right.fill")
            .font(.largeTitle)
            .offset(offset)
            .gesture(drag)
    }
}
If it is not possible to drag the image, this may be caused by a problem with Live Preview in the current Xcode 12 release. The example should work if tested in a simulator session or on a physical device. Note that once the drag gesture ends, the Image view returns to its original location. This is because the offset gesture property automatically reverted to its original state when the drag ended.
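If the image needs to remain where the user dragged it, the transient @GestureState offset can be combined with a standard @State property that accumulates each completed drag. The following is a minimal sketch of that approach (the position property is an illustrative addition, not part of the example above):

import SwiftUI

struct ContentView: View {

    @GestureState private var dragOffset: CGSize = .zero
    // Illustrative property accumulating the position across drags
    @State private var position: CGSize = .zero

    var body: some View {
        let drag = DragGesture()
            .updating($dragOffset) { dragValue, state, transaction in
                state = dragValue.translation
            }
            .onEnded { dragValue in
                // Fold the final translation into the persistent position
                position.width += dragValue.translation.width
                position.height += dragValue.translation.height
            }

        return Image(systemName: "hand.point.right.fill")
            .font(.largeTitle)
            .offset(x: position.width + dragOffset.width,
                    y: position.height + dragOffset.height)
            .gesture(drag)
    }
}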
Composing Gestures
So far in this chapter we have looked at adding a single gesture recognizer to a view in SwiftUI. Though a less common requirement, it is also possible to combine multiple gestures and apply them to a view. Gestures can be combined so that they are detected simultaneously, in sequence or exclusively. When gestures are composed simultaneously, both gestures must be detected at the same time for the corresponding action to be performed. In the case of sequential gestures, the first gesture must be completed before the second gesture will be detected. For exclusive gestures, only one of the composed gestures can succeed, with precedence given to the first.
Gestures are composed using the simultaneously(), sequenced() and exclusively() modifiers. The following view declaration, for example, composes a simultaneous gesture consisting of a long press and a drag:
struct ContentView: View {

    @GestureState private var offset: CGSize = .zero
    @GestureState private var longPress: Bool = false

    var body: some View {
        let longPressAndDrag = LongPressGesture(minimumDuration: 1.0)
            .updating($longPress) { value, state, transaction in
                state = value
            }
            .simultaneously(with: DragGesture())
            .updating($offset) { value, state, transaction in
                state = value.second?.translation ?? .zero
            }

        return Image(systemName: "hand.point.right.fill")
            .foregroundColor(longPress ? Color.red : Color.blue)
            .font(.largeTitle)
            .offset(offset)
            .gesture(longPressAndDrag)
    }
}
In the case of the following view declaration, a sequential gesture is configured which requires the long press gesture to be completed before the drag operation can begin. When executed, the user will perform a long press on the image until it turns green, at which point the drag gesture can be used to move the image around the screen.
struct ContentView: View {

    @GestureState private var offset: CGSize = .zero
    @State private var dragEnabled: Bool = false

    var body: some View {
        let longPressBeforeDrag = LongPressGesture(minimumDuration: 2.0)
            .onEnded { _ in
                self.dragEnabled = true
            }
            .sequenced(before: DragGesture())
            .updating($offset) { value, state, transaction in

                switch value {
                case .first(true):
                    print("Long press in progress")
                case .second(true, let drag):
                    state = drag?.translation ?? .zero
                default:
                    break
                }
            }
            .onEnded { value in
                self.dragEnabled = false
            }

        return Image(systemName: "hand.point.right.fill")
            .foregroundColor(dragEnabled ? Color.green : Color.blue)
            .font(.largeTitle)
            .offset(offset)
            .gesture(longPressBeforeDrag)
    }
}
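Although not demonstrated in this project, the exclusively() modifier is used in the same way. The following is a minimal sketch in which a double tap takes precedence over a single tap, with the composed gesture's value indicating which of the two gestures succeeded:

import SwiftUI

struct ContentView: View {

    var body: some View {
        let tapExclusive = TapGesture(count: 2)
            .exclusively(before: TapGesture())
            .onEnded { value in
                // .first is the double tap, .second the single tap
                switch value {
                case .first:
                    print("Double tap detected")
                case .second:
                    print("Single tap detected")
                }
            }

        return Image(systemName: "hand.point.right.fill")
            .font(.largeTitle)
            .gesture(tapExclusive)
    }
}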
Summary
Gesture detection can be added to SwiftUI views using gesture recognizers. SwiftUI includes recognizers for drag, pinch, rotate, long press and tap gestures. Gesture detection notifications can be received from the recognizers by implementing onEnded, updating and onChanged callbacks. The updating callback works with a special property wrapper named @GestureState. A @GestureState property is like the standard @State property wrapper but is designed exclusively for use with gestures and automatically resets to its original state when the gesture ends.
Gesture recognizers may be combined so that they are recognized simultaneously, sequentially or exclusively.