With 10 subjects running two passes each through the system, I now have statistically significant results (one-way ANOVA with adjusted pairwise comparisons) for the Phantom setup. First, user errors:
Linear Hypotheses:
Estimate Std. Error t value Pr(>|t|)
HAPTIC_TACTOR - HAPTIC == 0 -0.3333 0.3123 -1.067 0.7110
OPEN_LOOP - HAPTIC == 0 0.5833 0.3123 1.868 0.2565
TACTOR - HAPTIC == 0 1.0000 0.3123 3.202 0.0130 *
OPEN_LOOP - HAPTIC_TACTOR == 0 0.9167 0.3123 2.935 0.0262 *
TACTOR - HAPTIC_TACTOR == 0 1.3333 0.3123 4.269 <0.001 ***
TACTOR - OPEN_LOOP == 0 0.4167 0.3123 1.334 0.5466
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
(Adjusted p values reported -- single-step method)
Next, normalized user task completion speed:
Linear Hypotheses:
Estimate Std. Error t value Pr(>|t|)
HAPTIC_TACTOR - HAPTIC == 0 0.11264 0.07866 1.432 0.4825
OPEN_LOOP - HAPTIC == 0 0.24668 0.07866 3.136 0.0118 *
TACTOR - HAPTIC == 0 0.17438 0.07866 2.217 0.1255
OPEN_LOOP - HAPTIC_TACTOR == 0 0.13404 0.07866 1.704 0.3269
TACTOR - HAPTIC_TACTOR == 0 0.06174 0.07866 0.785 0.8612
TACTOR - OPEN_LOOP == 0 -0.07230 0.07866 -0.919 0.7947
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
(Adjusted p values reported -- single-step method)
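(Incidentally, the tables above are multiple-comparison output with single-step-adjusted p values. For anyone who wants to run this kind of analysis themselves, here is a minimal sketch in Python using scipy and statsmodels; the data file and column names are hypothetical placeholders, not the actual study data.)

import pandas as pd
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# One row per trial: 'condition' is one of HAPTIC, HAPTIC_TACTOR, OPEN_LOOP,
# TACTOR; 'errors' is that trial's error count. The file name is hypothetical.
df = pd.read_csv("phantom_pilot.csv")

# One-way ANOVA across the four conditions
F, p = f_oneway(*[g["errors"].values for _, g in df.groupby("condition")])
print(f"ANOVA: F = {F:.3f}, p = {p:.4f}")

# Tukey's HSD then reports adjusted p values for every pairwise contrast,
# comparable to the single-step-adjusted comparisons shown above.
print(pairwise_tukeyhsd(df["errors"], df["condition"], alpha=0.05))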
So what this says is that HAPTIC_TACTOR has the lowest error occurrence, and that HAPTIC is the fastest at achieving the task (it has the lowest value on the completion measure). Note: there may be some force-feedback artifacts contributing to the speed result, but those will be dealt with in the next study.
This is best shown with some plots. Here are the error results as means plots:
And here are the means plots for task completion speed:
Since this is a pilot study with only 10 participants, the populations are only just beginning to separate in a meaningful way, but from the charts it looks like HAPTIC and HAPTIC_TACTOR will probably continue to pull away from OPEN_LOOP and TACTOR.
What does this mean?
First, and this is only implicit in the study: it is possible to attach simpler, cheaper sensors and actuators (force and vibration) to a haptic device and get good performance. Even with simple semi-physics, all users were able to grip and manipulate the balls in the scenario in such a way as to achieve the goal. Ninety percent of the users who made no errors placing 5 balls in a goal took between 20 and 60 seconds, or between 4 and 12 seconds per ball (including moving to the ball, grasping it, and successfully depositing it in a narrow goal). Not bad for less than $30 in sensors and actuators.
Second, force feedback really makes a difference. Doing tasks in an "open loop" framework is significantly slower than doing the same task with force feedback. I doubt this is something users will simply get better at, so the question for gesture-based interaction is how to compensate. As the results show, it is unlikely that tactors alone can solve this problem. What will?
Third, not every axis needs full force feedback. It seems that as long as the "reference frame" is force feedback, the inputs that work with respect to that frame don't need to be as sophisticated. This means that low(ish)-cost, high-DOF systems using hybrid technologies such as force feedback plus force/vibration sensing may be possible, which might open up a new area of exploration in HCI.
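To make the hybrid idea concrete, here is a toy sketch of what one frame of such a scheme could compute: a full force-feedback penalty spring on the positional axes, and only an open-loop vibration cue on the grip axis. Every name and constant below is a hypothetical illustration, not the control loop actually used in this study.

import numpy as np

def hybrid_feedback(penetration, grip_force, k_spring=0.8, tactor_gain=2.0):
    # penetration: 3-vector of stylus penetration into the nearest surface
    #              (zeros when not in contact); rendered with full force
    #              feedback as a simple penalty spring.
    # grip_force:  scalar from the cheap grip-force sensor; rendered only
    #              as tactor vibration amplitude, with no closed loop.
    force = -k_spring * np.asarray(penetration, dtype=float)
    amplitude = float(np.clip(tactor_gain * grip_force, 0.0, 1.0))
    return force, amplitude

# Example: 2 mm of penetration along z, a moderate squeeze on the grip sensor
force, amp = hybrid_feedback([0.0, 0.0, 0.002], grip_force=0.3)
print(force, amp)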
Lastly, how multiple modalities could effectively perform as assistive technologies needs to be explored with this system. There are only a limited set of ways (four? visual, tactile, auditory, proprioceptive) to render positional information to a user, and this configuration as it currently stands is capable of three of them. However, because of the way the DirectX sound library is used to provide tactile information, it is trivial to extend the setup so that 5 channels of audio information could also be provided to the user. I imagine four speakers placed at the four corners of a monitor, providing an audio rendering of the objects in the scene, with a subwoofer channel providing additional tactile(?) information.
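As a sketch of what that audio rendering could compute (the corner layout and the inverse-distance gain law here are my assumptions for illustration, not what the DirectX code actually does), an object's on-screen position could be panned across the four corner speakers, with the subwoofer keyed to scene depth:

import numpy as np

# Speaker positions at the four monitor corners, in normalized screen coords.
SPEAKERS = np.array([
    [0.0, 0.0],  # bottom-left
    [1.0, 0.0],  # bottom-right
    [0.0, 1.0],  # top-left
    [1.0, 1.0],  # top-right
])

def corner_gains(obj_xy, rolloff=1.0):
    # Inverse-distance amplitude panning: the closer a speaker is to the
    # object's on-screen position, the louder its channel.
    d = np.linalg.norm(SPEAKERS - np.asarray(obj_xy, dtype=float), axis=1)
    g = 1.0 / (d + rolloff)
    return g / g.sum()  # normalize so the four gains sum to one

def render_channels(obj_xy, depth):
    # Five channels: four positional corner speakers plus a subwoofer whose
    # level rises as the object comes closer to the viewer.
    sub = float(np.clip(1.0 - depth, 0.0, 1.0))
    return np.append(corner_gains(obj_xy), sub)

# Example: object in the upper-left quadrant, at mid-depth in the scene
print(render_channels([0.25, 0.75], depth=0.4))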
Once multiple modalities are set up, the visual display can be constrained in a variety of ways: blurred, intermittently frozen, or blacked out. Configurations of haptic/tactile/auditory stimuli could then be tested against these scenarios to determine how they affect completion of the task. Conversely, the user could be distracted (for example, in a driving game) so that it is impossible to pay extensive attention to the placement task. There are lots of opportunities.
Anyway, it’s been a good week.