Monthly Archives: September 2013

Results!

With 10 subjects running two passes each through the system, I now have statistically significant results (using one-way ANOVA) for the Phantom setup. First, user errors:

Linear Hypotheses:
                                 Estimate Std. Error t value Pr(>|t|)
HAPTIC_TACTOR - HAPTIC == 0       -0.3333     0.3123  -1.067   0.7110
OPEN_LOOP - HAPTIC == 0            0.5833     0.3123   1.868   0.2565
TACTOR - HAPTIC == 0               1.0000     0.3123   3.202   0.0130 *
OPEN_LOOP - HAPTIC_TACTOR == 0     0.9167     0.3123   2.935   0.0262 *
TACTOR - HAPTIC_TACTOR == 0        1.3333     0.3123   4.269   <0.001 ***
TACTOR - OPEN_LOOP == 0            0.4167     0.3123   1.334   0.5466
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
(Adjusted p values reported -- single-step method)

Next, normalized user task completion speed:

Linear Hypotheses:
                                 Estimate Std. Error t value Pr(>|t|)
HAPTIC_TACTOR - HAPTIC == 0       0.11264    0.07866   1.432   0.4825
OPEN_LOOP - HAPTIC == 0           0.24668    0.07866   3.136   0.0118 *
TACTOR - HAPTIC == 0              0.17438    0.07866   2.217   0.1255
OPEN_LOOP - HAPTIC_TACTOR == 0    0.13404    0.07866   1.704   0.3269
TACTOR - HAPTIC_TACTOR == 0       0.06174    0.07866   0.785   0.8612
TACTOR - OPEN_LOOP == 0          -0.07230    0.07866  -0.919   0.7947
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
(Adjusted p values reported -- single-step method)

In short, HAPTIC_TACTOR has the lowest error occurrence, and HAPTIC is the fastest at achieving the task. (Note: there may be some force-feedback artifacts that contribute to this result, but those will be dealt with in the next study.)

This is best shown with some plots. Here are the error results as means plots:

[ErrorMeansPlot: means plot of user errors]

And here are the means plots for task completion speed:

[Fastest50Percent: means plot of task completion speed]

Since this is a pilot study with only 10 participants, the populations are only just beginning to separate in a meaningful way, but the charts suggest that HAPTIC and HAPTIC_TACTOR will probably continue to pull away from OPEN_LOOP and TACTOR.

What does this mean?

First (and this is only implicit from the study): it is possible to attach simpler, cheaper sensors and actuators (force and vibration) to a haptic device and get good performance. Even with simple semi-physics, all users were able to grip and manipulate the balls in the scenario in such a way as to achieve the goal. Ninety percent of the users who made no errors in placing 5 balls in a goal took between 20 and 60 seconds, or between 4 and 12 seconds per ball (including moving to the ball, grasping it, and successfully depositing it in a narrow goal). Not bad for less than $30 in sensors and actuators.

Second, force feedback really makes a difference. Doing tasks in an "open loop" framework is significantly slower than doing the same task with force feedback. I doubt that this is something users will get better at, so the question with respect to gesture-based interaction is how to compensate. As the results show, it is unlikely that tactors alone can help with this problem. What will?

Third, not every axis needs full force feedback. It seems that as long as the "reference frame" is force feedback, the inputs that work with respect to that frame don't need to be as sophisticated. This means that low(ish)-cost, high-DOF systems using hybrid technologies such as force feedback plus force/vibration sensing may be possible. This might open up a new area of exploration in HCI.

Lastly, the issue of how multiple modalities could effectively perform as assistive technologies needs to be explored with this system. There is only a limited set (four?) of ways to render positional information to a user (visual, tactile, auditory, proprioceptive), and this configuration as it currently stands is capable of three of them. However, because of the way the DirectX sound library is used to provide tactile information, it is trivial to extend the setup so that five channels of audio information could also be provided to the user. I imagine four speakers placed at the four corners of a monitor, providing an audio rendering of the objects in the scene. A subwoofer channel could be used to provide additional tactile(?) information.
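As a sketch of what that extension might look like (illustrative only, not the current project code; the 5.1 channel layout, voice setup, and names are assumptions), XAudio2's output matrix makes routing a mono tone to any single speaker straightforward:

#include <objbase.h>
#include <xaudio2.h>

IXAudio2* xaudio = nullptr;
IXAudio2MasteringVoice* master = nullptr;
IXAudio2SourceVoice* voice = nullptr;

// Set up XAudio2 and a mono source voice; fmt describes the tone's PCM format.
bool initAudio(const WAVEFORMATEX& fmt)
{
    CoInitializeEx(nullptr, COINIT_MULTITHREADED); // required by the DX SDK XAudio2
    if (FAILED(XAudio2Create(&xaudio, 0, XAUDIO2_DEFAULT_PROCESSOR))) return false;
    if (FAILED(xaudio->CreateMasteringVoice(&master))) return false;
    return SUCCEEDED(xaudio->CreateSourceVoice(&voice, &fmt));
}

// Send the mono source entirely to one of six 5.1 channels
// (0=FL, 1=FR, 2=center, 3=LFE/subwoofer, 4=RL, 5=RR).
void routeToChannel(int channel, float gain)
{
    float matrix[6] = { 0.0f }; // 1 source channel x 6 destination channels
    matrix[channel] = gain;
    voice->SetOutputMatrix(master, 1, 6, matrix);
}

Calling routeToChannel(3, 1.0f), for example, would push the tone entirely to the subwoofer channel.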

Once multiple modalities are set up, then the visual display can be constrained in a variety of ways. It could be blurred, intermittently frozen or blacked out. Configurations of haptic/tactile/auditory stimuli could then be tested against these scenarios to determine how they affect the completion of the task. Conversely, the user could be distracted (for example in a driving game), where it is impossible to pay extensive attention to the placement task. There are lots of opportunities.

Anyway, it’s been a good week.

The Saga Continues, and Mostly Resolves.

Continuing the ongoing saga of trying to get an application written in MSVC under Visual Studio 2010 to run on ANY OTHER WINDOWS SYSTEM than the dev system. Today I should be finishing the update of the laptop from Vista to Win7. Maybe that will work. Sigh.

Some progress. It seems you can't use "Global" in the way specified in the Microsoft documentation for CreateFileMapping() unless you want to run everything as admin. See StackOverflow for more details.
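For reference, here's a minimal sketch of the working pattern (the mapping name and size are placeholders, not the real ones): keeping the mapping in the per-session namespace avoids the privilege requirement entirely.

#include <windows.h>

// Create a named shared-memory block in the per-session ("Local") namespace.
// Using L"Global\\..." here would require SeCreateGlobalPrivilege (i.e. admin).
HANDLE hMap = CreateFileMappingW(
    INVALID_HANDLE_VALUE,         // back the mapping with the paging file
    NULL,                         // default security
    PAGE_READWRITE,
    0, 4096,                      // maximum size: high and low DWORDs
    L"Local\\PhantomSharedMem");  // placeholder name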

However now the code is crashing on initialization issues. Maybe something to do with OpenGL?

It’s definitely OpenGL. All apps that use it either crash or refuse to draw.

Fixed. I needed to remove the drivers and install NVIDIA's (earlier) versions. I'm not getting the debug text overlay, which is odd, but everything else is working. Sheesh. I may re-install the newest drivers since I now have a workable state that I know I can reach, but I think it's time to do something other than wait for the laptop to go through another install/reboot cycle.

Started writing the haptic paper. Targets are CHI, UIST, or HRI. Maybe even MIG? This is now a very different paper from the Motion Feedback paper from last year, and I'm not sure what the best way to present the information is. The novel part is the combination of a simple (i.e., 3-DOF) haptic device with an N-DOF force-based device attached. The data shows that this combination has much lower error rates and faster task completion times than the other configurations (tactor-only and open loop), and the same completion times as a purely haptic system. Not sure how to organize this yet….

This is also pretty interesting… http://wintersim.org/. Either for iRevolution or ArTangibleSim.

The unbearable non-standardness of Windows

I have been trying to take the Phantom setup on the road for about two weeks now. It's difficult because the Phantom uses FireWire (IEEE 1394), and it's hard to find something small and portable that supports that.

My first attempt was to use my Mac Mini. Small. Cheap(ish). Ports galore. Using Boot Camp, I installed a copy of Windows 7 Pro. That went well, but when I tried to use the Phantom, the system would hang when attempting to send forces. Reading joint angles was OK, though.

I then tried my new Windows 8 laptop, which has an expansion slot. The shared memory wouldn't even run there; access to the shared space appears not to be allowed.

The next step was to try an old development laptop that had a Vista install on it. The Phantom ran fine, but the shared memory communication caused the graphics application to crash. So I migrated the Windows 7 install from the Mac to the laptop, where I’m currently putting all the pieces back together.

It's odd. It used to be that if you wrote code on one Windows platform, it would run on all Windows platforms. Those days seem long gone. It looks like I can get around this problem if I change my communication scheme to sockets or something similar, but I hate that. Shared memory is fast and clean.
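For what it's worth, the consumer side of the shared-memory scheme is only a couple of calls (a sketch with an assumed mapping name, not the actual project code):

#include <windows.h>

// Open the mapping the haptics process created and map a read/write view of it.
void* openSharedBlock()
{
    HANDLE hMap = OpenFileMappingW(FILE_MAP_READ | FILE_MAP_WRITE, FALSE,
                                   L"Local\\PhantomSharedMem");
    if (hMap == NULL) return nullptr;
    return MapViewOfFile(hMap, FILE_MAP_READ | FILE_MAP_WRITE, 0, 0, 0);
}

Hard to beat that for passing a small state block between processes at haptic rates.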

Slow. Painful. Progress. But at least it gives me some time to do writing…

Results?

Looks like we got some results with the headset system. Still trying to figure out what they mean (other than the obvious: it's easier to find the source of a single sound).

[HeadsetPrelimResults: preliminary headset results plot]

Here are the confidence intervals:

[confidenceIntervals: confidence interval plot]

Next, I'll try to do something with the Phantom results. I think I may need some more data before anything shakes out.

Strain Relief and Shorts

[IMG_2194]

Yesterday, just as I was about to leave work, one of my coworkers dropped by to see what I was doing and thought it would be fun to be experimented upon. Cool.

I fired up the system, created a new setup file, and ran the test. Everything ran perfectly, and I got more good results. When I came in this morning, though, the rig was pretty banged up. A wiring harness that had been fine while I was working out bugs was nowhere near robust enough to run even one person through a suite of tasks. It's the Law of Enemy Action.

You've heard of Murphy's Law (everything that can go wrong, will). The Law of Enemy Action is similar: "People will use your product as if they are trying to destroy it." In a previous life I designed fitness equipment, and it was jaw-dropping to see the amount of damage a customer could inflict on a product. Simply stated: you need to overdesign and overbuild if at all possible.

With that in mind, I pulled all the hardware off the Phantom and started over. New, lighter, more flexible wire. Strain-relieved connections. Breakaway connections. The works.

When it was done, I fired it up and started to test. Sensors – check. Actuators – check. Yay! And then the right pressure sensor started to misbehave. It was kind of beat up, so it made sense to replace it. But when I went to test, the new sensor was misbehaving in the same way. And it seemed to be related to turning on the vibro-acoustic actuators.

Time to open the box up and poke around. Nope, everything looked good. Maybe the connector? Aha! My new, more flexible cable was stranded rather than solid, and a few strands from one of the wires were touching the right sensor connection.

So I pulled everything apart and replaced the cable that went into the connection with 22-gauge solid wire, which then connected to my stranded cable. All fixed. And it's an example that even though Murphy's Law is bad enough, you should always be prepared for Enemy Action.


First Results :-)

Downloaded several WAV files of sine-wave tones, ranging from 100 Hz to 1,000 Hz. The files were created using this generator, and are 0.5 sec in length.
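(Tones like these are also easy to generate locally if the generator isn't handy. Here's a rough C++ sketch that writes a 0.5-second mono 16-bit PCM sine-tone WAV; the 44.1 kHz sample rate is an assumption:)

#include <cmath>
#include <cstdint>
#include <fstream>
#include <vector>

// Write a mono 16-bit PCM WAV containing a sine tone at freqHz.
void writeTone(const char* path, double freqHz,
               double seconds = 0.5, uint32_t rate = 44100)
{
    const uint32_t n = static_cast<uint32_t>(seconds * rate);
    std::vector<int16_t> samples(n);
    for (uint32_t i = 0; i < n; ++i)
        samples[i] = static_cast<int16_t>(
            32767.0 * std::sin(2.0 * 3.14159265358979 * freqHz * i / rate));

    const uint32_t dataBytes = n * sizeof(int16_t);
    const uint32_t riffSize  = 36 + dataBytes;       // RIFF chunk size
    const uint32_t fmtSize   = 16;                   // PCM fmt chunk size
    const uint16_t fmtTag = 1, channels = 1, bits = 16;
    const uint32_t byteRate = rate * channels * bits / 8;
    const uint16_t blockAlign = channels * bits / 8;

    std::ofstream f(path, std::ios::binary);
    f.write("RIFF", 4); f.write(reinterpret_cast<const char*>(&riffSize), 4);
    f.write("WAVEfmt ", 8);
    f.write(reinterpret_cast<const char*>(&fmtSize), 4);
    f.write(reinterpret_cast<const char*>(&fmtTag), 2);
    f.write(reinterpret_cast<const char*>(&channels), 2);
    f.write(reinterpret_cast<const char*>(&rate), 4);
    f.write(reinterpret_cast<const char*>(&byteRate), 4);
    f.write(reinterpret_cast<const char*>(&blockAlign), 2);
    f.write(reinterpret_cast<const char*>(&bits), 2);
    f.write("data", 4); f.write(reinterpret_cast<const char*>(&dataBytes), 4);
    f.write(reinterpret_cast<const char*>(samples.data()), dataBytes);
}

writeTone("tone_0250.wav", 250.0), for example, would produce a 250 Hz version.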

Glued the tactor actuators in place, since they kept coming loose during testing.

Fixed the file output. Each test result is now ordered.

Fixed a bug where the number of targets and the number of goals were not being recorded.

Added a listing of the audio files used in the experiment.

Got some initial results based on my self-testing today:

[firstResults: initial self-test results plot]
The pure haptic and tactor times to perform the task are all over the place, but it’s pretty interesting to note that Haptic/Tactor and Open Loop are probably significantly different. Hmmmm.

Packaging!

Ok, here it is, all ready to travel:

[IMG_2192: the packaged setup, ready to travel]

It’s still a bit of a rat’s nest inside the box, but I’ll clean that up later today.

Adding a "practice mode" to the app. It reads in a setup file and allows the user to try any of the feedback modalities, with the ordering randomized via srand(current milliseconds). Done.
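(The seeding is one line; GetTickCount() is one way to get "current milliseconds" on Windows, though the exact call in the app may differ:)

#include <windows.h>
#include <cstdlib>

// Seed the practice-mode randomizer from the millisecond clock so each
// practice session gets a different ordering of trials.
void initPracticeMode()
{
    srand(static_cast<unsigned>(GetTickCount()));
}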

Sent an email off to these folks asking how to get their C2-transducers.

Need to look into perceptual equivalence WRT speech/nonspeech tactile interaction. Here’s one paper that might help: http://www.haskins.yale.edu/Reprints/HL0334.pdf

Fixed my truculent pressure sensor and glued the components into the enclosure. Need to order a power strip.