Monthly Archives: July 2013

Random bits

I think I know what the vibroacoustic study should be. I put an actuator on the Phantom and drive wav files based on the material associated with the collision. I can use the built-in haptic pattern playback as a control. Making the wav files might be as simple as recording the word, or using a microphone to contact a material, move across it, and lift off (personally, I like this because it mimics what could be done with telepresence). Multiple sensor/actuator pairs can wait for a later study.

Which means that I don’t actually need the Phidgets code in the new KF hand codebase. I’m going to include it anyway, simply because I’m so close and can use it later.

Come to think of it, could I put an actuator on a mouse as well and move it over materials?

Tasks for today:

  • Finish getting the Phidgets code working in KF_Hand_3 – done
  • Start to add sound classes – done inasmuch as sounds are loaded and played using the library I wrote. More detail will come later.
  • Start to integrate the Phantom. Got HelloHapticDevice2 up and running again, as well as quite a few demos.

Moar Phidgeting

  • Brought in my fine collection of jumpers and connectors. Next time I won’t have to build a jumper cable…
  • Built the framework for the new hand test. The basic graphics are running
  • Added cube code to the FltkShaderSupport library. Here’s everything running: (screenshot: KF_framework)
  • Next, I’m going to integrate the Phidget sensor code into the framework, then hook that up to sound code.
  • Had Dong register for Google’s Ingress, just to see what’s going on.
  • Loaded in the Phidgets example code; the library that works is the x86 one. Using the 64-bit library results in unresolved-externals errors.
  • There are a lot of straight C examples. Just found the C++ class examples, simple.h and simple.cpp – the basic InterfaceKit pattern is sketched below.
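Since the C examples are what actually run, here’s a minimal sketch of the InterfaceKit pattern they follow. This is from memory of the phidget21 C API, so treat the details (especially the handler calling convention) as approximate:

#include <stdio.h>
#include <phidget21.h>

// Called on the Phidgets event thread whenever an analog input changes.
// Sensor values arrive in the range 0..1000.
int __stdcall onSensorChange(CPhidgetInterfaceKitHandle ifKit, void *userPtr, int index, int value)
{
	printf("sensor %d: %d\n", index, value);
	return 0;
}

int main()
{
	CPhidgetInterfaceKitHandle ifKit = 0;
	CPhidgetInterfaceKit_create(&ifKit);
	CPhidgetInterfaceKit_set_OnSensorChange_Handler(ifKit, onSensorChange, NULL);
	CPhidget_open((CPhidgetHandle)ifKit, -1);	// -1 = attach to any serial number
	CPhidget_waitForAttachment((CPhidgetHandle)ifKit, 10000);
	getchar();	// run until a keypress
	CPhidget_close((CPhidgetHandle)ifKit);
	CPhidget_delete((CPhidgetHandle)ifKit);
	return 0;
}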

Phidget about

Yesterday, I got the sensor resistance converted to voltage using this nifty tutorial from Sparkfun. A 1k ohm resistor seems to work best, since I want most of the sensitivity at light pressure.

Today, the goal is to build a circuit with three channels that connects to the Phidgets voltage sensor. The only thing I’m wondering is whether I’ll get enough resolution out of the voltage range I’m seeing – zero to about 2.5 volts. I’m estimating that I should get about 1,500 – 3,000 steps out of that, assuming -30v to +30v is resolved to a (unsigned?) 16-bit int.
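Here’s a quick sanity check of both numbers, assuming a 5 V supply and the sensor on the top leg of the divider (my guess at the wiring – the Sparkfun tutorial may differ):

#include <cstdio>

// Voltage divider: Vout = Vin * Rfixed / (Rsensor + Rfixed)
float dividerVout(float vin, float rSensor, float rFixed = 1000.0f)
{
	return vin * rFixed / (rSensor + rFixed);
}

int main()
{
	printf("light touch, ~30k ohm: %.2f V\n", dividerVout(5.0f, 30000.0f));	// ~0.16 V
	printf("hard press,   ~1k ohm: %.2f V\n", dividerVout(5.0f, 1000.0f));	// 2.50 V
	// Resolution: 16 bits over -30..+30 V is 60/65536, about 0.92 mV per step,
	// so 0..2.5 V resolves to roughly 2.5/60 * 65536, about 2,730 steps.
	return 0;
}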

Done!

(Photo: ratsnest)

Pilot Study and Telepresence

Brian came over last night and we were able to load up his laptop with the drivers and software and run examples. At this point, we’re looking at three things to study with the rig:

  1. What is/are the best frequencies for spatial orientation (position and distance) using these actuators?
  2. What happens to speed and accuracy when there is more than one signal?
  3. Do 7 actuators work better than 4?

We’re in the process now of writing up the test plan. In the meantime, I’m going to try to adapt the Audio2X software to replace the synthesizers, and use a Phidgets 1019 to replace the Arduino of the previous telepresence test rig. Once that’s done, I’ll add in the Phantom. For reference, here’s a video of the first version running:

And here’s a picture of the latest interface that will be attached to the Phantom:

(Photo: IMG_1444)

To move this along, I’ve brought a small pile of electronics in from home. Tomorrow’s goal is to make sure that I can connect to the interface board. Once that’s working, I need to hook up the sensors from the old prototype (note! Bring in crimping pins for the female DB15 connector!).


A possible cure for statistics?

Statistics has always seemed to me to be stuck in a place that resembles physics before Newton: lots of pieces that work on their own, but no unifying theory. This drives me crazy, and is probably a reason there is so much hating on statistics. I discovered Kolmogorov complexity while reading a paper on vacation last week, and wonder if it could be used as the basis for a unified theory of statistics. Here’s a reasonable starting point:

Kolmogorov Complexity – A Primer

which leads to

Information Distance — A Primer
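True information distance is uncomputable, but the standard move is to stand a real compressor in for K(). A minimal sketch of normalized compression distance, using zlib as the compressor (my choice for illustration):

#include <cstdio>
#include <algorithm>
#include <string>
#include <zlib.h>

// Compressed size of s, standing in for the Kolmogorov complexity K(s).
static uLong csize(const std::string &s)
{
	uLongf outLen = compressBound(s.size());
	std::string buf(outLen, '\0');
	compress2((Bytef *)&buf[0], &outLen, (const Bytef *)s.data(), s.size(), 9);
	return outLen;
}

// Normalized compression distance: near 0 = very similar, near 1 = unrelated.
double ncd(const std::string &x, const std::string &y)
{
	uLong cx = csize(x), cy = csize(y), cxy = csize(x + y);
	return double(cxy - std::min(cx, cy)) / double(std::max(cx, cy));
}

int main()
{
	printf("similar strings:   %.3f\n", ncd("abababab", "abababab"));
	printf("unrelated strings: %.3f\n", ncd("abababab", "qzqzqzqz"));
	return 0;
}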

I may be done coding this thing. Time for a pilot study.

I was going to have a demo for upper management of my company, but the VP of R&D got waylaid by travel issues and had to postpone. So that gave me a few additional hours to do XML parsing things and generally fix up the application. This is what a study looks like:

<?xml version="1.0" encoding="ISO-8859-1"?>
<Setup>
 <ResearcherName>Some Researcher</ResearcherName>
 <SubjectName>Some Graduate Student</SubjectName>
 <SubjectAge>23</SubjectAge>
 <SubjectGender>Male</SubjectGender>
 <Seed>0</Seed>
 <SpeedTests>3</SpeedTests>
 <AccuracyTests>3</AccuracyTests>
 <Sessions>2</Sessions>
 <SoundFile>C:/Wavs/heli.wav</SoundFile>
 <SoundFile>C:/monoExport.wav</SoundFile>
</Setup>

So now I have a release build that can create ad-hoc sessions or read in a test session from an XML file. The results from the test are output to a csv file, where the position of the source(s), the position of the cursor, the angle between them, and the time to act are all recorded. In addition, the number of speakers used is determined, and the normalized volume sent to each channel is recorded. So yay.
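For flavor, one output row looks roughly like this – positions are made up, names are mine, and I’m assuming the angle is taken between the emitter and cursor bearings as seen from a listener at the origin:

#include <cmath>
#include <fstream>

int main()
{
	float srcX = 1.0f, srcY = 0.0f;	// emitter position (made up)
	float curX = 0.9f, curY = 0.2f;	// where the subject pointed (made up)
	double timeToAct = 1.7;		// seconds from sound start to response
	// Angular error between the two bearings, listener at the origin.
	float angle = std::atan2(curY, curX) - std::atan2(srcY, srcX);
	std::ofstream out("results.csv", std::ios::app);
	out << srcX << "," << srcY << "," << curX << "," << curY << ","
	    << angle << "," << timeToAct << "\n";
	return 0;
}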

I suppose that the next step with the code would be to set the speakers explicitly, rather than letting the API determine the volume, but I think that would be another study. And I should add comments before I forget why I wrote things the way I did (wstring wst(str.begin(), str.end());, I’m looking at you…)

For the pilot study, I’m going to team up with Brian to go and find all the relevant previous work and set up the study. I’m pretty sure that we’re covered by one of Ravi’s IRBs, so I think that collecting data can begin as soon as a machine is set up in the lab that we can use.

And while that’s cooking, I can now use my shiny new audio library to hook up to the telepresence simulator. Woohoo!


Bells and Whistles. More like not-quite-essential functionality…

It was one of those days where work interfered with development. How inconvenient. I shake my fist at the power of the paycheck. Damn you!

  • Made sure that the telepresence demo was working. I need to get back to that!
  • Adding XML parsing of setup files.
  • Reading in files with newfangled std::ifstream. Fun!
  • Using rapidxml, which is working just fine but suffering from cryptic documentation. I wasn’t sure how to get child nodes until I found this post. It looks like a small library of navigation functions might be useful; considering there are rapidxml_iterators.hpp and rapidxml_utils.hpp, I’ll look there first. (The child-node pattern is sketched below.)
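For the record, the child-node pattern that works, sketched against the setup file above (assuming the file parses cleanly):

#include <iostream>
#include "rapidxml.hpp"
#include "rapidxml_utils.hpp"

int main()
{
	rapidxml::file<> xmlFile("setup.xml");	// slurps the whole file
	rapidxml::xml_document<> doc;
	doc.parse<0>(xmlFile.data());		// doc keeps pointers into xmlFile's buffer
	rapidxml::xml_node<> *setup = doc.first_node("Setup");
	// first_node(name) gets the first matching child, next_sibling(name) the rest.
	for (rapidxml::xml_node<> *n = setup->first_node("SoundFile");
	     n != 0; n = n->next_sibling("SoundFile"))
	{
		std::cout << n->value() << "\n";
	}
	return 0;
}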

On the plus side, my house didn’t explode

Not much to report today; the gas people were running new lines to my house. It seems they do that once a century, whether it needs it or not. With all the hardware at work, it makes developing at home pretty much a no-go.

  • Did verify that the release builds work with the end-user DirectX install
  • Added text to emitter position on the graphics screen
  • Changed the timer to std::chrono::high_resolution_clock (check the bottom of this post for an example)
  • It looks like MSVC 2010 doesn’t have <chrono>. Made do with clock_t and clock() instead.
  • Added xml output for test setup. Why does ctime() return a string with a line feed!?
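As promised, the clock_t fallback and the ctime() newline fix, roughly as used (a sketch):

#include <cstdio>
#include <ctime>
#include <string>

int main()
{
	clock_t start = clock();
	// ... the timed test attempt happens here ...
	double elapsedSec = double(clock() - start) / CLOCKS_PER_SEC;

	// ctime() helpfully appends '\n' – strip it before writing to XML.
	time_t now = time(NULL);
	std::string stamp = ctime(&now);
	if (!stamp.empty() && stamp[stamp.size() - 1] == '\n')
		stamp.erase(stamp.size() - 1);

	printf("%s: %.3f sec\n", stamp.c_str(), elapsedSec);
	return 0;
}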

vector<tuple<float, float, string>>::iterator it = sourcePositions->begin();

C++ is like being in a candy store, full of a huge variety of bright, shiny treats that can blow your hand off if you don’t pay attention.

  • Finishing up adding multiple sound capability per test attempt. Because I’ve been away from C++ for a while and I like to try new things, I poked around with tuples for a while, which are kind of neat. Then I decided to put them into a vector and access them. That led to code like this (supporting definitions sketched after this list):
    • vector<FOURF_SAMPLE_TUPLE>::iterator it = myVector.begin();
      while(it != myVector.end()){
      	float sourceX = get<EMITTER_X>(*it);
      	float sourceY = get<EMITTER_Y>(*it);
      	++it;	// easy to forget – without this, the loop never ends
      }
    • That is *not* the most intuitive code I’ve seen. I mean, it makes sense, and given the limited set of overloadable characters, ok. But I think “[” and “]” would have been a better choice than “<” and “>”.
  • Got the multi-sound playback working and the results output to .csv. Next is to get the xml setup files running.
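For context, my guess at the supporting definitions – FOURF_SAMPLE_TUPLE and the EMITTER_* names are from the post, but the exact fields are an assumption based on the title above:

#include <string>
#include <tuple>
#include <vector>
using namespace std;

// Hypothetical reconstruction: plain enum values double as tuple indices,
// which is what makes get<EMITTER_X>(*it) readable.
enum { EMITTER_X = 0, EMITTER_Y = 1, SAMPLE_NAME = 2 };
typedef tuple<float, float, string> FOURF_SAMPLE_TUPLE;

int main()
{
	vector<FOURF_SAMPLE_TUPLE> myVector;
	myVector.push_back(make_tuple(1.0f, 2.0f, string("heli.wav")));
	float sourceX = get<EMITTER_X>(myVector[0]);	// reads as "the emitter's x"
	(void)sourceX;
	return 0;
}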

Pulling everything apart and putting it back together

  • Adding multiple sound playback
    • Rework the output to handle multiple sounds. This means one TestResult per sound. However, the result cannot be associated with a particular sound, so for each release, all the emitter sources will have to be included. Later analysis can be used to determine the best fit. Note also that the number of attempts may be greater or less than the number of emitters.
  • Need to use the XML to write out and read in just the configuration values
  • Need to save multiple source positions in TestResult (a sketch of where that’s headed is below). Added bad code at that point as a marker to continue from.
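A sketch of the shape TestResult seems to be heading toward, per the notes above – the field names are mine, not the codebase’s:

#include <string>
#include <tuple>
#include <vector>

// Hypothetical shape: one result per sound, but carrying *all* emitter
// positions at the moment of release, since a response can't be tied to
// a single source until later analysis.
struct TestResult {
	std::vector<std::tuple<float, float, std::string> > sourcePositions;
	float cursorX;
	float cursorY;
	double timeToActSec;
};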