I think I know what the vibroacoustic study should be. I put an actuator on the Phantom and drive it with wav files chosen by the material associated with the collision. I can use the built-in haptic pattern playback as a control. Making the wav files might be as simple as recording the word for the material, or using a microphone to contact a material, move across it, and lift off (personally, I like this because it mimics what could be done with telepresence). Multiple sensor/actuator pairs can wait for a later study.
Which means that I don’t actually need the Phidgets code in the new KF hand codebase. I’m going to include it anyway, simply because I’m so close and can use it later.
Come to think of it, I could put an actuator on a mouse as well and move it over materials?
Tasks for today:
- Finish getting the Phidgets code working in KF_Hand_3 – done
- Start to add sound classes – done inasmuch as sounds are loaded and played using the library I wrote. More detail will come later.
- Start to integrate the Phantom – got HelloHapticDevice2 up and running again, as well as quite a few of the other demos
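For my own notes, the heart of HelloHapticDevice2 (and most of the OpenHaptics demos) is a servo-loop callback registered with the scheduler. Here’s a minimal sketch of that pattern, reconstructed from memory rather than copied from the sample:

```cpp
#include <HD/hd.h>
#include <cstdio>

// Runs at servo-loop rate (~1 kHz): read the stylus position, then push
// back whatever force the application computes from it (zero here).
HDCallbackCode HDCALLBACK deviceUpdate(void *userData) {
    hdBeginFrame(hdGetCurrentDevice());
    HDdouble pos[3];
    hdGetDoublev(HD_CURRENT_POSITION, pos);
    HDdouble force[3] = {0.0, 0.0, 0.0};  // force computation goes here
    hdSetDoublev(HD_CURRENT_FORCE, force);
    hdEndFrame(hdGetCurrentDevice());
    return HD_CALLBACK_CONTINUE;  // keep the callback scheduled
}

int main() {
    HHD hHD = hdInitDevice(HD_DEFAULT_DEVICE);
    if (HD_DEVICE_ERROR(hdGetError())) {
        std::printf("failed to initialize haptic device\n");
        return 1;
    }
    hdEnable(HD_FORCE_OUTPUT);
    hdScheduleAsynchronous(deviceUpdate, nullptr, HD_DEFAULT_SCHEDULER_PRIORITY);
    hdStartScheduler();
    std::printf("servo loop running; press Enter to stop\n");
    std::getchar();
    hdStopScheduler();
    hdDisableDevice(hHD);
    return 0;
}
```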
Brian came over last night and we were able to load up his laptop with the drivers and software and run examples. At this point, we’re looking at three things to study with the rig:
- What are the best frequencies for spatial orientation (position and distance) using these actuators?
- What happens to speed and accuracy when there is more than one signal?
- Do 7 actuators work better than 4?
We’re in the process now of writing up the test plan. In the meantime, I’m going to try to adapt the Audio2X software to replace the synthesizers, and use a Phidgets 1019 to replace the Arduino of the previous telepresence test rig. Once that’s done, I’ll add in the Phantom. For reference, here’s a video of the first version running:
And here’s a picture of the latest interface that will be attached to the Phantom:
To move this along, I’ve brought a small pile of electronics in from home. Tomorrow’s goal is to make sure that I can connect to the interface board. Once that’s working, I need to hook up the sensors from the old prototype (note: bring in crimping pins for the female DB15 connector!).
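A sanity check for tomorrow, before any Audio2X integration: the stock phidget21 attach-and-read pattern should be enough to prove the 1019 is talking. A sketch using just the standard library calls (the 1019 is an InterfaceKit 8/8/8 with a built-in hub, so it shows up like any other InterfaceKit):

```cpp
#include <cstdio>
#include <phidget21.h>

// Fires whenever an analog input changes; index 0 will be the first
// sensor from the old prototype once it's wired up.
int CCONV onSensorChange(CPhidgetInterfaceKitHandle ifKit, void *userPtr,
                         int index, int value) {
    std::printf("sensor %d: %d\n", index, value);
    return 0;
}

int main() {
    CPhidgetInterfaceKitHandle ifKit = 0;
    CPhidgetInterfaceKit_create(&ifKit);
    CPhidgetInterfaceKit_set_OnSensorChange_Handler(ifKit, onSensorChange, nullptr);

    // -1 = open the first available board of this type.
    CPhidget_open(reinterpret_cast<CPhidgetHandle>(ifKit), -1);
    if (CPhidget_waitForAttachment(reinterpret_cast<CPhidgetHandle>(ifKit), 10000)) {
        std::printf("no board attached\n");
        return 1;
    }
    std::printf("attached; waiting for sensor events (Enter to quit)\n");
    std::getchar();

    CPhidget_close(reinterpret_cast<CPhidgetHandle>(ifKit));
    CPhidget_delete(reinterpret_cast<CPhidgetHandle>(ifKit));
    return 0;
}
```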
Alright, so I now have my audio library. Next on the agenda is a test program that measures users’ reactions to vibroacoustic input. The test needs to present randomized stimuli, so that users can be measured for:
- Time to respond with a direction
- Accuracy of direction
- Efficacy of stimuli
Since this is probably going to be within subjects (multiple stimuli) and also between subjects (the same tests on multiple users), we’ll want to present an identical stimulus sequence to every user, which means seeding the random number generator so the sequence is reproducible.
- Start with the default random number generator, but maybe run it through a wrapper class in case we need something like a Mersenne Twister (see the sketch after this list).
- Xml file to specify the input and output of the experiment (a sketch of what the file might look like also follows the list). This library looks reasonable.
- Sounds to use (random distribution of sound use)
- Test type (Accuracy, Speed, or both)
- Attempts per test
- Number of tests (must be even)
- Random seed
- min/max delay between test segments
- output filename
- Test UID
- Free form note field (1024 characters?)
- Accuracy or Reaction time test
- Audio configuration
- Random seed
- Calibration results
- Time(s) to click in response to visual cue
- Time(s) to click in response to audio cue
- For each played sound
- Sequence x of total
- Audio file(s) used (WAV)
- Audio source position (x, y) in screen coordinates from the origin, where the user’s head is
- Audio playback matrix (actual speaker relative volume)
- Time to click after play start
- Duration of sound
- Click position (x, y) in screen coordinates from the origin, where the sound is perceived to have come from
- File navigator for xml file
- Runs a sequence of tests where the user has to click the mouse as quickly as possible, first in response to the canvas flashing white, and then in response to all(?) speakers in the headpiece playing the calibration sound
- Calibration cues have randomly determined timing, between X and Y seconds
- Test is disabled until calibration is run. Loading a new xml document effectively resets the system, requiring a new calibration sequence
- Shows a label that says either “Accuracy” or “Speed” based on which test is being run. We could change the background of the display as well?
- The graphics screen shows a circular cursor that resets to the center of the screen at the beginning of each segment. Once the audio cue plays, the user moves the mouse away from the center, towards the direction of the sound. The cursor’s motion is clamped so that the result is always a valid angle, as long as the user moves it far enough from the center (threshold TBD; see the angle computation in the sketch after this list). Clicking the mouse stops the clock and resets the cursor.
- If this is not the last test segment, then a random time period between X and Y seconds elapses before the next test is run.
- Once a test completes, the system checks whether it was the last one. If not, a stochastic choice determines whether the next test is speed or accuracy, constrained so that by the time all tests have run, the numbers of speed and accuracy runs are equal (see the shuffle in the sketch after this list).
- Output file is appended throughout the test (open, write, close? Or read in the DOM, update and write out?)
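To make the XML spec above more concrete, here’s a rough sketch of what the file could look like. Every element name and value below is invented for illustration; the real schema will be whatever the XML library and the test plan settle on:

```xml
<!-- Hypothetical schema; every element name and value is a placeholder. -->
<experiment uid="demo-001">
  <note>Free-form note, up to 1024 characters?</note>
  <config>
    <testType>Both</testType>          <!-- Accuracy, Speed, or both -->
    <attemptsPerTest>10</attemptsPerTest>
    <numTests>8</numTests>             <!-- must be even -->
    <randomSeed>12345</randomSeed>
    <delay min="1.5" max="4.0"/>       <!-- seconds between segments -->
    <outputFile>results-demo-001.xml</outputFile>
    <sounds>
      <sound file="wood.wav"/>
      <sound file="metal.wav"/>
    </sounds>
  </config>
  <results testType="Accuracy" randomSeed="12345">
    <audioConfig channels="4"/>
    <calibration>
      <visualClick time="0.21"/>
      <audioClick time="0.24"/>
    </calibration>
    <played sequence="1" total="80" file="wood.wav" duration="0.5">
      <sourcePos x="120" y="-40"/>     <!-- screen coords, origin at the head -->
      <playbackMatrix>1.0 0.4 0.0 0.0</playbackMatrix>
      <response time="0.62" clickX="118" clickY="-35"/>
    </played>
  </results>
</experiment>
```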
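A few of the mechanical details in the list can also be pinned down in code already: the seeded generator wrapper, the equal-counts speed/accuracy ordering, and the angle computation for the clamped cursor. A sketch only; all names are hypothetical, not from the actual codebase:

```cpp
#include <algorithm>
#include <cmath>
#include <random>
#include <vector>

// Wrapping the generator means the engine can be swapped later without
// touching call sites. std::mt19937 is already a Mersenne Twister, so
// the wrapper may be all we ever need.
class SeededRng {
public:
    explicit SeededRng(unsigned seed) : engine_(seed) {}
    // Uniform double in [lo, hi), e.g. for the inter-segment delay.
    double uniform(double lo, double hi) {
        return std::uniform_real_distribution<double>(lo, hi)(engine_);
    }
    std::mt19937& engine() { return engine_; }
private:
    std::mt19937 engine_;
};

enum class TestType { SPEED, ACCURACY };

// Pre-filling half SPEED / half ACCURACY and shuffling gives a random
// order while guaranteeing equal counts by the end of the run; it's
// simpler than biasing a per-test coin flip, and it's why the number
// of tests has to be even.
std::vector<TestType> makeTestSequence(int numTests, SeededRng& rng) {
    std::vector<TestType> seq(numTests / 2, TestType::SPEED);
    seq.resize(numTests, TestType::ACCURACY);
    std::shuffle(seq.begin(), seq.end(), rng.engine());
    return seq;
}

// Response direction from the clamped cursor: as long as the cursor is
// far enough from the center, atan2 gives a valid angle no matter how
// far the user actually moved.
bool responseAngle(double cx, double cy, double mx, double my,
                   double minRadius, double& angleOut) {
    double dx = mx - cx, dy = my - cy;
    if (std::hypot(dx, dy) < minRadius) return false;  // too close to call
    angleOut = std::atan2(dy, dx);
    return true;
}
```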
Started the FLTK wrapper, and probably saved a good deal of time by going back to Erco’s FLTK page and its associated videos.
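For context, this is roughly the boilerplate the wrapper is meant to hide (standard FLTK, in the style of Erco’s examples; none of it is my wrapper’s actual API):

```cpp
#include <FL/Fl.H>
#include <FL/Fl_Window.H>
#include <FL/Fl_Button.H>

// Plain FLTK uses static callbacks plus a userdata pointer; this is the
// pattern a C++ wrapper typically hides behind virtual methods.
static void buttonCallback(Fl_Widget *w, void *data) {
    w->label("clicked");  // FLTK keeps the pointer, so use a literal
}

int main(int argc, char **argv) {
    Fl_Window *window = new Fl_Window(340, 180, "wrapper target");
    Fl_Button *button = new Fl_Button(120, 70, 100, 40, "click me");
    button->callback(buttonCallback, nullptr);
    window->end();
    window->show(argc, argv);
    return Fl::run();  // event loop; returns when the window closes
}
```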
It looks like my 416 hours are going to be approved, so now it’s just a matter of waiting (and nagging).
Cool thing for the day, courtesy of GA Tech: “Taxels”.
Basically, my bandwidth has been exceeded: this work can extend into the summer, while finals are due in two weeks. It’s that whole urgent vs. important thing.
Anyway, I did find an interesting piece on Touch Mechanics: Haptic Technology and Perceptual Computing that got written up over at Intel Software. And to make this post look more interesting, I’m going to link in the TED-Ed talk that most of the article is based on, so you don’t even have to click the link:
Who says you can’t procrastinate productively?