I was going to give a demo to upper management at my company, but the VP of R&D got waylaid by travel issues and had to postpone. That gave me a few extra hours to do XML parsing things and generally fix up the application. This is what a study looks like:
<?xml version="1.0" encoding="ISO-8859-1"?>
<Setup>
  <ResearcherName>Some Researcher</ResearcherName>
  <SubjectName>Some Graduate Student</SubjectName>
  <SubjectAge>23</SubjectAge>
  <SubjectGender>Male</SubjectGender>
  <Seed>0</Seed>
  <SpeedTests>3</SpeedTests>
  <AccuracyTests>3</AccuracyTests>
  <Sessions>2</Sessions>
  <SoundFile>C:/Wavs/heli.wav</SoundFile>
  <SoundFile>C:/monoExport.wav</SoundFile>
</Setup>
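Here's a minimal sketch of how a setup file like this could be read, assuming TinyXML-2 (the element names come from the example above; the actual parser in the application may be different):

    #include <cstdio>
    #include <string>
    #include <vector>
    #include "tinyxml2.h"

    using namespace tinyxml2;

    int main() {
        XMLDocument doc;
        if (doc.LoadFile("study.xml") != XML_SUCCESS) {
            std::fprintf(stderr, "Failed to load study file\n");
            return 1;
        }

        // For brevity this assumes the document matches the example;
        // real code should null-check each element before dereferencing.
        XMLElement* setup = doc.FirstChildElement("Setup");
        const char* researcher =
            setup->FirstChildElement("ResearcherName")->GetText();
        int sessions = 0;
        setup->FirstChildElement("Sessions")->QueryIntText(&sessions);

        // <SoundFile> repeats once per source, so walk the siblings.
        std::vector<std::string> soundFiles;
        for (XMLElement* sf = setup->FirstChildElement("SoundFile");
             sf != nullptr; sf = sf->NextSiblingElement("SoundFile")) {
            if (const char* path = sf->GetText())
                soundFiles.push_back(path);
        }

        std::printf("Researcher: %s, %d sessions, %zu sound files\n",
                    researcher, sessions, soundFiles.size());
        return 0;
    }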
So now I have a release build that can create ad-hoc sessions or read in a test session from an XML file. The results from each test are output to a CSV file, recording the position of the source(s), the position of the cursor, the angle between them, and the time to act. In addition, the number of speakers in use is determined and the normalized volume sent to each channel is recorded. So yay.
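For reference, the angle is just the arccosine of the normalized dot product between the source direction and the cursor direction. A minimal sketch of that computation and a result row (the struct, column order, and values here are illustrative assumptions, not the actual output format):

    #include <cmath>
    #include <cstdio>

    struct Vec3 { float x, y, z; };

    // Angle in degrees between two direction vectors (assumed non-zero).
    float angleBetween(const Vec3& a, const Vec3& b) {
        float dot  = a.x * b.x + a.y * b.y + a.z * b.z;
        float lenA = std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z);
        float lenB = std::sqrt(b.x * b.x + b.y * b.y + b.z * b.z);
        float cosTheta = dot / (lenA * lenB);
        // Clamp so floating-point noise can't push us outside acos's domain.
        if (cosTheta >  1.0f) cosTheta =  1.0f;
        if (cosTheta < -1.0f) cosTheta = -1.0f;
        return std::acos(cosTheta) * 180.0f / 3.14159265f;
    }

    int main() {
        Vec3 source = {1.0f, 0.0f, 0.0f};
        Vec3 cursor = {0.7f, 0.0f, 0.7f};
        float timeToAct = 1.23f;  // seconds, made up for the example
        // One row: source xyz, cursor xyz, angle, time to act.
        std::printf("%.3f,%.3f,%.3f,%.3f,%.3f,%.3f,%.2f,%.2f\n",
                    source.x, source.y, source.z,
                    cursor.x, cursor.y, cursor.z,
                    angleBetween(source, cursor), timeToAct);
        return 0;
    }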
I suppose the next step with the code would be to set the per-speaker volumes explicitly, rather than letting the API determine them, but I think that would be another study. And I should add comments before I forget why I wrote things the way I did (wstring wst(str.begin(), str.end()), I'm looking at you…).
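For what it's worth, that one-liner copies each char of a std::string into a std::wstring element by element, which is only a correct conversion for plain ASCII input. A commented sketch of the idiom and its caveat:

    #include <string>

    // Copies each char of a narrow string into a wchar_t, one element
    // at a time. This round-trips correctly only for plain ASCII input
    // (e.g. a path like "C:/Wavs/heli.wav"); any non-ASCII byte comes
    // through as a garbage code point rather than being decoded.
    std::wstring widen(const std::string& str) {
        return std::wstring(str.begin(), str.end());
    }

On Windows, MultiByteToWideChar is the proper conversion when the input might not be pure ASCII.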
For the pilot study, I'm going to team up with Brian to go and find all the relevant previous work and set up the study. I'm pretty sure that we're covered by one of Ravi's IRBs, so I think that collecting data can begin as soon as a machine is set up in the lab that we can use.
And while that’s cooking, I can now use my shiny new audio library to hook up to the telepresence simulator. Woohoo!