Category Archives: Graphics APIs

Some Assembly Required

Integrating all the pieces into one test platform. The test could be to move a collection of physically-based spheres (easy collision detection) from one area to another. Time would be recorded between a start and a stop indication (spacebar, something in the sim, etc.). Variations (sketched in code further below) would be:

  • Open loop: Measure position and pressure, but no feedback
  • Force Feedback (Phantom) only
  • Vibrotactile feedback only
  • Both feedbacks

Probably only use two actuators, for the simplicity of the test rig. It would mean that I could use the laptop’s headphone output. Need to test this by wiring the actuators up to a micro stereo plug. Radio Shack tonight.
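
A minimal sketch of how those four variations and the timed trial might be represented; the names here are my placeholders, not code from the project:

    #include <chrono>

    // The four feedback variations listed above.
    enum FeedbackCondition { OPEN_LOOP, FORCE_ONLY, VIBROTACTILE_ONLY, BOTH };

    // One timed attempt at moving the spheres from one area to the other.
    struct Trial {
        FeedbackCondition condition;
        std::chrono::steady_clock::time_point startTime, stopTime;

        void start() { startTime = std::chrono::steady_clock::now(); }  // e.g. on spacebar
        void stop()  { stopTime  = std::chrono::steady_clock::now(); }  // e.g. on spacebar again

        double seconds() const {
            return std::chrono::duration<double>(stopTime - startTime).count();
        }
    };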

Got two-way communication running between Phantom and sim.

Got the force magnitude adjusting a volume.

Added a SimpleSphere class for most of the testing.
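
A minimal sketch of the force-to-volume mapping, assuming the Phantom hands back a force vector and the sound code takes a 0–1 gain (maxForce and the function name are assumptions):

    #include <algorithm>
    #include <cmath>

    // Map the magnitude of the Phantom's force vector to an audio gain in [0, 1].
    double forceToGain(double fx, double fy, double fz, double maxForce)
    {
        double magnitude = std::sqrt(fx * fx + fy * fy + fz * fz);
        return std::min(1.0, std::max(0.0, magnitude / maxForce));
    }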

Moar Phidgeting

  • Brought in my fine collection of jumpers and connectors. Next time I won’t have to build a jumper cable…
  • Built the framework for the new hand test. The basic graphics are running
  • Added cube code to the FltkShaderSupport library. Here’s everything running: (screenshot: KF_framework)
  • Next, I’m going to integrate the Phidget sensor code into the framework, then hook that up to sound code.
  • Had Dong register for Google’s Ingress, just to see what’s going on.
  • Loaded the Phidgets example code; the library that works is the x86 library. Using the 64-bit library results in unresolved external errors.
  • There are a lot of straight C examples. Just found the C++ class examples, simple.h and simple.cpp (a minimal usage sketch follows this list).
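
For reference, a hedged sketch of the InterfaceKit pattern from the phidget21 C examples, adapted to what this project needs; the printf is a stand-in for the eventual sound hookup, and the project must link the x86 phidget21 library as noted above:

    #include <stdio.h>
    #include <phidget21.h>

    // Fires whenever an analog sensor value changes.
    int CCONV onSensorChange(CPhidgetInterfaceKitHandle ifKit, void *userPtr,
                             int index, int value)
    {
        printf("sensor %d = %d\n", index, value);   // later: drive the sound code
        return 0;
    }

    int main()
    {
        CPhidgetInterfaceKitHandle ifKit = 0;
        CPhidgetInterfaceKit_create(&ifKit);
        CPhidgetInterfaceKit_set_OnSensorChange_Handler(ifKit, onSensorChange, NULL);

        CPhidget_open((CPhidgetHandle)ifKit, -1);                 // -1 = any serial number
        CPhidget_waitForAttachment((CPhidgetHandle)ifKit, 10000); // wait up to 10 s

        getchar();                                                // run until Enter is pressed

        CPhidget_close((CPhidgetHandle)ifKit);
        CPhidget_delete((CPhidgetHandle)ifKit);
        return 0;
    }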

Enhancements

  • Meeting with Dr. Kuber.
    • Add a “distance” component to the test and a multiple emitter test
    • Got a bunch of items to add actuators to: Hardhat, noise-blocking headphones, and a push-to-talk mic.
  • Added name and gender fields to the GUI and cleaned up the menus
  • Working on adding multiple sounds
    • Added a ‘next’ button. Once pushed, the sources are shown until the center is clicked again.
    • I think the test should have options for how the sounds are added (see the sketch after this list)
      • Permutations (A, then B, then C, then AB, AC, BC, ABC)
      • All (Going to start with this)
      • Random?
  • Added variable distance
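
A sketch of generating the “A, then B, then C, then AB, AC, BC, ABC” series from the list above; strictly speaking those are the non-empty combinations of the emitters, grouped by size, and the function name is my own:

    #include <vector>

    // All non-empty combinations of numEmitters emitters, grouped by size:
    // for three emitters this yields A, B, C, AB, AC, BC, ABC (as index lists).
    std::vector<std::vector<int>> emitterCombinations(int numEmitters)
    {
        std::vector<std::vector<int>> result;
        for (int size = 1; size <= numEmitters; ++size) {
            for (int bits = 1; bits < (1 << numEmitters); ++bits) {
                std::vector<int> combo;
                for (int i = 0; i < numEmitters; ++i)
                    if (bits & (1 << i)) combo.push_back(i);
                if ((int)combo.size() == size) result.push_back(combo);
            }
        }
        return result;
    }

The “Random?” option could simply be the same list passed through std::shuffle.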

So that’s what happens when a programming language gets old…

  • Continuing with the test exec. I’m also going to need a class that records the data associated with each test segment (a sketch of that record is at the end of this post).
  • Ran into a… Well, I don’t want to call it a bug. Let’s say that C++ is showing its age. FLTK uses char*. Most of Windows uses wchar_t. They don’t play well together, so I spent about half of my time working out the best way to convert between them. It’s this:
  • void setSoundFileString(LPCWSTR wps){
    	// Hang on to the wide string (note: this leaks the previous one if called twice)
    	soundFileString = new wstring(wps);
    	// Narrow by copying element-by-element; only safe for plain ASCII file names
    	string str(soundFileString->begin(), soundFileString->end());
    	sprintf_s(soundFile, "%s", str.c_str());
    }
  • I mean really!? Good grief.
  • Got a lot of the exec built and running. Clicking on the center button fires the sound, and you can drag to where you think the sound is. I am not all that accurate. It could be a frequency thing though; I’m running a low 10–20 Hz signal. The test should definitely try different frequencies.
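
Since each segment’s data needs recording (see the note at the top of this list) and the accuracy question is really an angular one, here is a hedged sketch of a per-segment record; every name in it is an assumption:

    #include <cmath>

    // Data recorded for one test segment.
    struct SegmentRecord {
        float emitterX, emitterY;   // actual emitter position, relative to the center button
        float dragX, dragY;         // subject's drag vector from the center button
        double elapsedSeconds;      // time from sound onset to mouse release
        float frequencyHz;          // stimulus frequency (currently a low 10-20 Hz)

        // Unsigned angular error, in degrees, between the drag and the emitter direction.
        float angularErrorDegrees() const {
            const float PI = 3.14159265f;
            float a = std::atan2(dragY, dragX) - std::atan2(emitterY, emitterX);
            while (a >  PI) a -= 2 * PI;
            while (a < -PI) a += 2 * PI;
            return std::fabs(a) * 180.0f / PI;
        }
    };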

Closing in on something useful

  • Put all the projects into SVN, checked them out and did a clean build. Everything still works.
  • Starting on capturing mouse events. Done. Capturing Left, Middle, Right, Wheel and Drag. I think I want to have it so that the user presses (holds down?) a “button” in the middle of the GL window, signifying that he’s ready for the next sound cue. Once the sound plays, he drags toward the source, shown by a line indicating the vector (and a cursor?). Releasing the mouse is the event that marks and records the choice, vector, elapsed time and position (vector?) of the emitter (handling sketched after this list).
  • Making a button class for OGL. Done
  • Making a line segment class so we can point to where the sound is. Done.
  • And just to show that I’ve been paying attention in class, the user doesn’t have to worry about hitting a particular length when dragging towards the sound. Since there is essentially no source (the test starts after the subject clicks) and no target, we don’t have to worry about any Fitts’ Law biases 🙂
  • Progress for today: (screenshot) Start Button (red), Sound Vector and Emitter
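
A hedged sketch of the press / drag / release handling in the FLTK GL window (FL_PUSH, FL_DRAG and FL_RELEASE are FLTK’s real events; the recording call is a placeholder for the data class mentioned earlier):

    #include <FL/Fl.H>
    #include <FL/Fl_Gl_Window.H>

    class TestGlWindow : public Fl_Gl_Window {
    public:
        TestGlWindow(int x, int y, int w, int h)
            : Fl_Gl_Window(x, y, w, h), dragX(0), dragY(0) {}

        int handle(int event) {
            switch (event) {
            case FL_PUSH:       // press the center "button": ready for the next cue
                dragX = dragY = 0;
                return 1;       // returning 1 claims the event
            case FL_DRAG:       // drag toward the perceived source
                dragX = Fl::event_x() - w() / 2;
                dragY = h() / 2 - Fl::event_y();
                redraw();       // repaint the vector line
                return 1;
            case FL_RELEASE:    // release marks and records the choice
                // recordSegment(dragX, dragY);  // placeholder for the recording class
                return 1;
            }
            return Fl_Gl_Window::handle(event);
        }

    private:
        int dragX, dragY;       // drag vector, window center as origin
    };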

Audio is in and synchronized to the video

Well that was a good day.

  • Spent a good deal of time trying to figure out the best way for the GUI and the Exec to communicate. Originally, I wanted to be able to pass a pointer to the GUI from the exec so that user actions in the GUI could be executed in a more reasonable place. Due to header conflicts, I couldn’t manage to get that to work, so I put together a UI_cmd class that is set in the UI and read in the Exec. That seems to be working pretty well, though I may want to put a queue in there and turn it into more of a message bus/event pump (a sketch of the UI_cmd idea follows this list). That level of sophistication isn’t needed yet though.
  • Integrated the sound library that I wrote. I still have to reference the D3D audio library in the main application, which I think is a bit odd; it may be because I’m incorrectly exporting the symbol table from the static library. Again, that’s a refinement for later.
  • At this point, the 3D position of my OGL shape and the 3D position of my continuous sound (2D actually, Y = 0) are running in an infinite circle. It’s pretty cool to hear the audio track to the image. I’m uploading a video of the running system, and although it won’t be in surround, you can hear the flanging effects from the sound moving around the helmet.
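
A minimal sketch of the UI_cmd idea as described above: the GUI sets a command, the exec polls and clears it. The fields and method names are my guesses, not the real class:

    #include <string>

    // Written by the FLTK callbacks, read once per tick by the exec.
    // Swapping this single slot for a std::deque<UI_cmd> gives the message-bus version.
    struct UI_cmd {
        bool        pending;
        std::string name;    // e.g. "START_TEST", "NEXT_SOUND"
        float       arg;     // optional numeric payload

        UI_cmd() : pending(false), arg(0.0f) {}

        // Called from the GUI callback.
        void set(const std::string &n, float a = 0.0f) { name = n; arg = a; pending = true; }

        // Called from the exec; returns true if a command was waiting.
        bool take(std::string &outName, float &outArg) {
            if (!pending) return false;
            outName = name; outArg = arg; pending = false;
            return true;
        }
    };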

Not bad for 90 minutes worth of work…

I’m busy doing demos and presentations in my day job, so this has been suffering. Nonetheless, here’s the progress for today:

  • Added a fine-grained timer callback to the main app
  • Added an OpenGL window, set to Ortho2 (a 2D orthographic projection), with pixel-accurate dimensions
  • Connected the timer to the OpenGL window and set the position of what will be the emitter (a sketch of this setup follows the list). We won’t see this during the actual test, but it will be good for debugging.
  • I need to track mouse clicks and motion in the GL window. That will come tomorrow, and then I’ll work on integrating the audio library. That’s the basics for running the experiments. After that, I’ll work on reading and writing the input and result files.
  • Pix for today: AppProgress6.18.13
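
A hedged sketch of the setup described in this list, using FLTK’s Fl::add_timeout / Fl::repeat_timeout for the fine-grained timer and a pixel-accurate glOrtho projection; the emitter-update call is a placeholder:

    #include <FL/Fl.H>
    #include <FL/Fl_Gl_Window.H>
    #include <FL/gl.h>

    class EmitterWindow : public Fl_Gl_Window {
    public:
        EmitterWindow(int x, int y, int w, int h)
            : Fl_Gl_Window(x, y, w, h), emitterX(0), emitterY(0) {}

        float emitterX, emitterY;   // debug-only marker, hidden in the real test

        void draw() {
            if (!valid()) {
                glViewport(0, 0, w(), h());
                glMatrixMode(GL_PROJECTION);
                glLoadIdentity();
                glOrtho(0, w(), 0, h(), -1, 1);   // one GL unit == one pixel
                glMatrixMode(GL_MODELVIEW);
                glLoadIdentity();
            }
            glClear(GL_COLOR_BUFFER_BIT);
            // drawEmitter(emitterX, emitterY);   // placeholder
        }
    };

    static void timerTick(void *data) {
        EmitterWindow *win = (EmitterWindow *)data;
        // updateEmitterPosition(win);            // placeholder for the sim update
        win->redraw();
        Fl::repeat_timeout(1.0 / 60.0, timerTick, data);   // ~60 Hz tick
    }

    // In main(), after creating the window: Fl::add_timeout(1.0 / 60.0, timerTick, &window);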