Measuring Neck Range of Motion with QuickPose iOS SDK
Are you looking to integrate neck range of motion tracking into your health app? The QuickPose iOS SDK offers powerful tools to transform your app into a digital goniometer, allowing for precise measurement of neck flexibility and mobility. This feature is essential for monitoring neck health, assessing posture, and tracking recovery from injuries. In this guide, you’ll learn how to implement neck range of motion measurement using QuickPose, ensuring accurate results, real-time user feedback, and an enhanced user experience. Whether you’re developing a new health app or enhancing an existing one, this feature will help you deliver advanced functionality that meets the needs of users focused on maintaining or improving their neck health.
Steps to Measure Range of Motion of the Neck in Your App:
Register an SDK Key with QuickPose
Get your free SDK key at https://dev.quickpose.ai; usage limits may apply. SDK keys are linked to your bundle ID, so check your key before distributing to the App Store.
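Once you have a key, initialize QuickPose with it before enabling any features. A minimal sketch, assuming the standard QuickPose initializer; replace the placeholder string with your own key:

```swift
import QuickPoseCore

// Create a single QuickPose instance with your SDK key (placeholder shown).
// Keep this instance alive for the lifetime of your camera view.
let quickPose = QuickPose(sdkKey: "YOUR-SDK-KEY")
```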
This is a quick guide to integrating Neck Range of Motion Measurement using the QuickPose iOS SDK. You can find the full documentation here: QuickPose iOS SDK Neck ROM installation.
Activate Neck Range of Motion Feature
feature = .rangeOfMotion(.neck(clockwiseDirection: false)) // measure in the counter-clockwise direction
feature = .rangeOfMotion(.neck(clockwiseDirection: true))  // measure in the clockwise direction
feature = .rangeOfMotion(.neck(clockwiseDirection: false), style: customOrConditionalStyle) // with custom styling
Key Points to Consider:
- Clockwise Direction: The clockwiseDirection parameter determines the direction in which the neck's range of motion is measured. Specifying the correct direction is essential for accurate ROM values.
- Custom or Conditional Styling: Adding custom or conditional styling allows for dynamic visual feedback based on the user's ROM results. For example, you could change the color of the result display when the neck reaches a certain range, helping users quickly understand their performance.
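Conditional colors can also be layered into bands for richer feedback. A hypothetical sketch, assuming the ConditionalColor initializer accepts an optional min as well as an optional max; the thresholds here are illustrative, not prescribed by the SDK:

```swift
// Illustrative three-band style: red below 0.5, orange from 0.5 to 0.8,
// green above 0.8. Tune the thresholds to your exercise.
let bandedStyle = QuickPose.Style(conditionalColors: [
    QuickPose.Style.ConditionalColor(min: nil, max: 0.5, color: UIColor.red),
    QuickPose.Style.ConditionalColor(min: 0.5, max: 0.8, color: UIColor.orange),
    QuickPose.Style.ConditionalColor(min: 0.8, max: nil, color: UIColor.green)
])
```

A banded style like this can be passed wherever a style is accepted, e.g. .rangeOfMotion(.neck(clockwiseDirection: false), style: bandedStyle).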
Conditional Styling
To give the user feedback, consider using conditional styling so that when the user's measurement goes above a threshold (here 0.8), a green highlight is shown.
let greenHighlightStyle = QuickPose.Style(conditionalColors: [QuickPose.Style.ConditionalColor(min: 0.8, max: nil, color: UIColor.green)])
quickPose.start(features: [.rangeOfMotion(.neck(clockwiseDirection: false), style: greenHighlightStyle)],
                onFrame: { status, image, features, feedback, landmarks in
    ...
})
Improving the Captured Results
The basic implementation above would likely capture an incorrect value: in the real world, users need time to understand what they are doing, they may change their mind, or QuickPose can simply return an incorrect value due to poor lighting or the user's stance. These issues are partially mitigated by on-screen feedback, but it's best to use a QuickPoseDoubleUnchangedDetector to keep reading values until the user has settled on a final answer.
To steady the .rangeOfMotion(.neck(clockwiseDirection: false)) results, declare a configurable unchanged detector, which can be used to make many of the input features read more reliably.
@State private var unchanged = QuickPoseDoubleUnchangedDetector(similarDuration: 2)
This will only trigger the callback block when the result has stayed the same for 2 seconds. The above uses the default leniency, which can be modified in the constructor.
@State private var unchanged = QuickPoseDoubleUnchangedDetector(similarDuration: 2, leniency: 0.2) // changed to 20% leniency
The unchanged detector is added to your onFrame callback and is updated every time a result is found, triggering its onChange callback only when the result has not changed for the specified duration.
quickPose.start(features: [.rangeOfMotion(.neck(clockwiseDirection: false))], onFrame: { status, image, features, feedback, landmarks in
    switch status {
    case .success:
        overlayImage = image
        if let result = features.values.first {
            feedbackText = result.stringValue
            unchanged.count(result: result.value) {
                print("Final Result \(result.value)")
                // your code to save result
            }
        } else {
            feedbackText = nil // blank if no measurement detected
        }
    case .noPersonFound:
        feedbackText = "Stand in view"
    case .sdkValidationError:
        feedbackText = "Be back soon"
    }
})
Improving Guidance
Despite the improvements above, the user doesn't have clear instructions about what to do. This can be fixed by adding user guidance.
Our recommended pattern is to use an enum to capture all the states in your application.
enum ViewState: Equatable {
    case intro
    case measuring(score: Double)
    case completed(score: Double)
    case error(_ prompt: String)

    var prompt: String? {
        switch self {
        case .intro:
            return "Lean your head to your left\nas far as you can"
        case .measuring:
            return nil
        case .completed(let score):
            return "Thank you\nReading: \(String(format: "%.0f°", score))"
        case .error(let prompt):
            return prompt
        }
    }

    var features: [QuickPose.Feature] {
        switch self {
        case .intro, .measuring:
            return [.rangeOfMotion(.neck(clockwiseDirection: false))]
        case .completed, .error:
            return []
        }
    }
}
Alongside the states we also provide prompt text, which instructs the user at each step; similarly, the features property specifies which features to pass to QuickPose. Note that for the completed state QuickPose doesn't process any features.
Declare this so your SwiftUI views can access it, starting in the .intro state. Our example is simplified to demonstrate the pattern; you would typically start with more positioning guidance.
@State private var state: ViewState = .intro
Next, make some modifications so that your feedbackText is pulled from the state prompt by default.
.overlay(alignment: .center) {
    if let feedbackText = state.prompt {
        Text(feedbackText)
            .font(.system(size: 26, weight: .semibold)).foregroundColor(.white).multilineTextAlignment(.center)
            .padding(16)
            .background(RoundedRectangle(cornerRadius: 8).foregroundColor(Color("AccentColor").opacity(0.8)))
            .padding(.bottom, 40)
    }
}
This now means you can remove the feedbackText declaration:
//@State private var feedbackText: String? = nil // remove the feedbackText
There are two changes we need to make. First, update QuickPose with the features for each state:
.onChange(of: state) { _ in
    quickPose.update(features: state.features)
}
Then we should start QuickPose from the state’s features as well.
.onAppear {
    quickPose.start(features: state.features, onFrame: { status, image, features, feedback, landmarks in
        ...
And in the onFrame callback, update the state instead of the feedbackText. This allows the UI input to change the view state in a controlled manner, so that, for example, the .intro state can only be re-entered when the measurement is lost during the .measuring state, or from the .error state.
quickPose.start(features: state.features, onFrame: { status, image, features, feedback, landmarks in
    switch status {
    case .success:
        overlayImage = image
        if let result = features.values.first {
            state = .measuring(score: result.value)
            unchanged.count(result: result.value) {
                state = .completed(score: result.value)
                // your code to save result
            }
        } else if case .measuring = state {
            state = .intro
        } else if case .error = state {
            state = .intro
        }
    case .noPersonFound:
        state = .error("Stand in view")
    case .sdkValidationError:
        state = .error("Be back soon")
    }
})
By following this guide, you can effectively capture and measure the range of motion of the neck, offering a seamless and informative experience for users looking to monitor their neck mobility.