Category Archives: OpenGL

Milestones

The first draft of the paper is done! It comes out at about 12 pages. I’ll need to cut it down to 6 to submit for CHI 2014 WIP. Easier than writing though. Of course, that’s just the first draft. More to come, I’m guessing. Still, it’s a nice feeling, and since I’ve burned through most of my 20% time, it’s time for me to get back to actually earning my pay, so I’ll be taking a break from this blog for a while. More projects are coming up though, so stay tuned. I’ll finish up this post with some images of all the design variations that led to the final, working version:

Prototype Evolution (click to embiggen)

The chronological order of development is from left to right and top to bottom. Starting at the top left:

  • The first proof of concept. Originally a force-input / motion-feedback system. It was with this system that I discovered that all actuator motion had to be relative to a proximal base.
  • The first prototype. It had six degrees of freedom, allowing a user to move a gripper within a 3D environment and grab items. It worked well enough that it led to…
  • The second prototype. A full five-finger gripper attached to an XYZ base. I ran into problems with this one. It turned out that motion feedback required too much cognitive load to work. The user would lose track of where their fingers were, even with the proximal base. So that led to…
  • The third prototype. This used resistive force sensors and vibrotactile feedback. The feedback was provided by voice coils capable of full audio range, which meant that all kinds of sophisticated contact and surface effects could be provided. That proved the point that five fingers could work with vibrotactile feedback, but the large-scale motions of the base seemed to need motion (I’ve since learned that isometric devices are most effective over short ranges). This was also loaded with electronic concepts that I wanted to try out: Arduino sensing, MIDI synthesizers per finger, etc.
  • The fourth prototype explored direct motion for the base: a 3D-printed five-finger Force Input / Vibrotactile Output (FS/VO) system that sat on top of a mouse. This was a plug-and-play substitution that worked with the previous electronics and worked quite nicely, though the ability to grip doesn’t give you much to do in the XY plane.
  • To get 3D interaction, I took two FS/VO modules and added them to a Phantom Omni. I also dropped the Arduino and the synthesizer, using XAudio2 8-channel audio and a Phidgets interface card instead. This system worked very nicely. The FS/VO elements combined with a force-feedback base turned out to be very effective. That’s what became the basis for the paper, and hopefully the basis for future work.
  • Project code is here (MD5: B32EE89CEA9C8E02E5B99BFAF24877A0).

Deadlines and schedules

I was just asked how many hours I have left to work on this research. It turns out that, at the rate I’m going, I can continue until mid-October. This is basically a big shout-out to Novetta, which has granted a continuation of the 20% time that was originally a hiring condition when I went to work for Edge. Thanks. And if you’d like a programming job in the DC area that supports creativity, give them a call.

I just can’t make the audio code break when writing out results. Odd. Maybe a corrupt input file can have unforeseen effects? Regardless, I’m going to stop pursuing this particular bug without more information.

Fixing the state problem. Done.

Fixing the saving issue. Also changing the naming of the speakers to reflect Dolby or not. Done.

New version release built and deployed.

And back to Phantom++

(Image: TestScreenV1)

I started to add in the user interface that will support experiments. Since it was already done, I pulled in most of the Fluid code from the vibrotactile headset, which made things pretty easy. I needed to add an enclosing control system class that can move commands between the various pieces.

I’ve also decided that each sound will have an associated object with it. This allows each object to have a simple “acoustic” texture that doesn’t require any fancy data structure.
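As a sketch of what that association might look like (the real SimpleSphere’s members aren’t shown here, so the fields and names below are purely illustrative):

    // Illustrative only: a toy pairing of a scene object with its sound, so a
    // collision can report its own "acoustic texture" without any fancy structure.
    struct SoundHandle {
        int   index;   // index into the loaded sound bank
        float gain;    // per-object loudness scaling
    };

    struct SimpleSphere {
        float x, y, z, radius;
        SoundHandle sound;   // the object's "acoustic" texture
    };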

At this point, I’m estimating that the first version of the test program should be ready by Friday.

What you get when you combine FLTK, OpenGL, DirectX, OpenHaptics and shared memory

Wow, the title sounds like a laundry list 🙂

Building a two-fingered gripper

Going to add a sound class to SimpleSphere so that we know what sounds are coming from what collision. Didn’t do that, but I’m associating the sounds by index, which is good enough for now.

Need to calculate individual forces for each sphere in the Phantom and return them. Done.

To keep the oscillations to a minimum, I’m passing the offsets from the origin; that way the haptic loop uses the device position as the basis for its calculations.
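A minimal sketch of that pattern using the OpenHaptics HD servo loop; the floor-plane spring, the offsets, and the constants are stand-ins for the real collision code, not the project’s actual implementation:

    #include <HD/hd.h>

    // Offsets of the gripper-tip spheres from the device origin (illustrative values).
    struct SphereOffset { HDdouble dx, dy, dz, radius; };
    static SphereOffset g_spheres[2] = { { -10.0, 0.0, 0.0, 5.0 },
                                         {  10.0, 0.0, 0.0, 5.0 } };

    HDCallbackCode HDCALLBACK forceCallback(void * /*userData*/)
    {
        hdBeginFrame(hdGetCurrentDevice());

        HDdouble pos[3];
        hdGetDoublev(HD_CURRENT_POSITION, pos);          // device position (mm)

        // Sum a simple per-sphere spring force, computed from the device position
        // plus each stored offset (the "offsets from the origin" described above).
        const HDdouble kSpring = 0.25;                   // N/mm, illustrative
        HDdouble force[3] = { 0.0, 0.0, 0.0 };
        for (const SphereOffset &s : g_spheres) {
            HDdouble contactY    = pos[1] + s.dy;        // sphere center height
            HDdouble penetration = s.radius - contactY;  // against a floor at y = 0
            if (penetration > 0.0)
                force[1] += kSpring * penetration;       // push straight up
        }

        hdSetDoublev(HD_CURRENT_FORCE, force);
        hdEndFrame(hdGetCurrentDevice());
        return HD_CALLBACK_CONTINUE;
    }

    // Registered once at startup with:
    //   hdScheduleAsynchronous(forceCallback, nullptr, HD_DEFAULT_SCHEDULER_PRIORITY);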
Here’s the result of today’s work:

Flailing, but productive flailing.

Basically spent the whole day figuring out how the 4×4 Phantom matrix equates to the rendering matrix (I would have said OpenGL, but that’s not true anymore; I am using the lovely math libraries from the OpenGL SuperBible, 5th Edition, which makes it kinda look like the OGL of yore).

Initially I thought I’d just use the vector components of the 3×3 rotation from the Phantom to get the orientation of the tip, but for some reason parts of the matrix appear inverted. So instead of using them directly, I multiply the modelview matrix by the Phantom matrix. Amazingly, this works perfectly.

To make sure that this works, I rendered a sphere on the +X, +Y, and +Z axes in the local coordinate frame. Everything tracks. So now I can create my gripper class and get the positions of the end effectors from the class. And since the position is in the global coordinate frame, it kind of comes along for free.
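For reference, the same idea written against raw fixed-function OpenGL (the project itself uses the SuperBible math classes). The Phantom’s 4×4 transform is column-major, so it can be multiplied straight onto the modelview matrix and the tip geometry drawn in the device’s local frame. Reading the transform in the draw call is a simplification; in practice it’s copied out of the haptic thread:

    #include <HD/hd.h>
    #include <GL/gl.h>

    void drawDeviceLocalFrame()
    {
        // HD_CURRENT_TRANSFORM is a 16-element, column-major 4x4 matrix.
        HDdouble phantomXform[16];
        hdGetDoublev(HD_CURRENT_TRANSFORM, phantomXform);

        glMatrixMode(GL_MODELVIEW);
        glPushMatrix();
        glMultMatrixd(phantomXform);   // modelview * Phantom, as described above

        // ...draw the gripper geometry here, e.g. unit-offset spheres at
        //    +X, +Y and +Z in the local frame to verify tracking...

        glPopMatrix();
    }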

Here’s a picture of everything working:
(Image: PhantomAxis)
Tomorrow, I’ll build the gripper class and start feeding that to the Phantom. The issue will be to sum the force vectors from all the end effectors in a reasonable way.

Some Assembly Required

Integrating all the pieces into one test platform. The test could be to move a collection of physically based spheres (easy collision detection) from one area to another. Time would be recorded between a start and a stop indication (spacebar, something in the sim, etc.). Variations would be:

  • Open loop: Measure position and pressure, but no feedback
  • Force Feedback (Phantom) only
  • Vibrotactile feedback only
  • Both feedbacks

Probably only use two actuators for the simplicity of the test rig. It would mean that I could use the laptop’s headphone output. Need to test this by wiring up the actuators to a micro stereo plug. Radio Shack tonight.
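For what it’s worth, the conditions could be encoded something like this (hypothetical names, not project code):

    // Hypothetical encoding of the four conditions listed above, plus the
    // timing record a trial would produce.
    enum class FeedbackCondition {
        OpenLoop,           // measure position and pressure, no feedback
        ForceOnly,          // Phantom force feedback only
        VibrotactileOnly,   // vibrotactile feedback only
        Both                // force + vibrotactile
    };

    struct TrialResult {
        FeedbackCondition condition;
        double            secondsToComplete;  // start/stop marked by spacebar or sim event
    };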

Got two-way communication running between Phantom and sim.

Have force magnitude adjusting a volume.

Added a SimpleSphere class for most of the testing.
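The force-to-volume mapping is about this simple. A hedged sketch, assuming an already-created XAudio2 source voice and an illustrative normalization constant:

    #include <xaudio2.h>
    #include <algorithm>
    #include <cmath>

    // Map the Phantom's force magnitude onto a source voice's volume.
    void updateContactVolume(IXAudio2SourceVoice *pVoice,
                             float fx, float fy, float fz,
                             float maxForce = 3.3f)   // illustrative normalization, N
    {
        float magnitude = std::sqrt(fx * fx + fy * fy + fz * fz);
        float volume = std::min(1.0f, magnitude / maxForce);  // clamp to [0, 1]
        pVoice->SetVolume(volume);                            // XAUDIO2_COMMIT_NOW by default
    }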

Moar Phidgeting

  • Brought in my fine collection of jumpers and connectors. Next time I won’t have to build a jumper cable…
  • Built the framework for the new hand test. The basic graphics are running
  • Added cube code to the FltkShaderSupport library. Here’s everything running:
  • (Image: KF_framework)
  • Next, I’m going to integrate the Phidget sensor code into the framework, then hook that up to sound code.
  • Had Dong register for Google’s Ingress, just to see what’s going on.
  • Loaded in the Phidgets example code; the library that works is the x86 one. Using the 64-bit library results in unresolved-external errors.
  • There are a lot of straight C examples. Just found the C++ class examples, simple.h and simple.cpp (a minimal callback sketch follows this list).
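A minimal callback sketch in the style of those simple.h / simple.cpp examples, using the phidget21 C API (check the installed header for exact signatures, and link the x86 phidget21 library as noted above):

    #include <phidget21.h>
    #include <cstdio>

    // Fires on the Phidgets thread whenever an analog sensor value changes.
    int CCONV onSensorChange(CPhidgetInterfaceKitHandle /*kit*/, void * /*userPtr*/,
                             int index, int value)
    {
        // Forward the raw 0-1023 sensor value to the sound code here.
        std::printf("sensor %d -> %d\n", index, value);
        return 0;
    }

    int main()
    {
        CPhidgetInterfaceKitHandle kit = 0;
        CPhidgetInterfaceKit_create(&kit);
        CPhidgetInterfaceKit_set_OnSensorChange_Handler(kit, onSensorChange, nullptr);
        CPhidget_open((CPhidgetHandle)kit, -1);                  // -1 = any serial number
        CPhidget_waitForAttachment((CPhidgetHandle)kit, 10000);  // ms timeout

        // ...run the app; the callback drives the sound code...

        CPhidget_close((CPhidgetHandle)kit);
        CPhidget_delete((CPhidgetHandle)kit);
        return 0;
    }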

Closing in on something useful

  • Put all the projects into SVN, checked them out and did a clean build. Everything still works.
  • Starting on capturing mouse events. Done. Capturing Left, Middle, Right, Wheel and Drag. I think I want the user to press (hold down?) a “button” in the middle of the GL window, signifying that he’s ready for the next sound cue. Once the sound plays, he drags toward the source. This is indicated by a line showing the vector (and a cursor?). Releasing the mouse is the event that marks and records the choice, vector, elapsed time, and position (vector?) of the emitter. (See the mouse-handling sketch after this list.)
  • Making a button class for OGL. Done.
  • Making a line segment class so we can point to where the sound is. Done.
  • And just to show that I’ve been paying attention in class, the user doesn’t have to worry about hitting a particular length when dragging towards the sound. Since there is essentially no source (since the test starts after the subject clicks) and no target, we don’t have to worry about any Fitts’ Law biases 🙂
  • Progress for today:
  • Image vth: Start Button (red), Sound Vector, and Emitter
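Here’s roughly what the mouse capture looks like as an Fl_Gl_Window::handle() override; the class and member names are illustrative rather than the project’s:

    #include <FL/Fl.H>
    #include <FL/Fl_Gl_Window.H>

    class TestCanvas : public Fl_Gl_Window {
    public:
        TestCanvas(int x, int y, int w, int h) : Fl_Gl_Window(x, y, w, h) {}

        int handle(int event) override {
            switch (event) {
            case FL_PUSH:                      // pressed on the start "button"
                dragX = Fl::event_x();
                dragY = Fl::event_y();
                return 1;                      // claim the event so we get DRAG/RELEASE
            case FL_DRAG:                      // drag toward the perceived source
                dragX = Fl::event_x();
                dragY = Fl::event_y();
                redraw();                      // update the vector line
                return 1;
            case FL_RELEASE:                   // record choice, vector, elapsed time
                recordResponse(dragX, dragY);
                return 1;
            case FL_MOUSEWHEEL:
                return 1;
            default:
                return Fl_Gl_Window::handle(event);
            }
        }

    private:
        int dragX = 0, dragY = 0;
        void recordResponse(int /*x*/, int /*y*/) { /* hand off to the exec */ }
    };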

Audio is in and synchronized to the video

Well that was a good day.

  • Spent a good deal of time trying to figure out the best way for the GUI and the Exec to communicate. Originally, I wanted to pass a pointer to the GUI from the exec so that user actions in the GUI could be executed in a more reasonable place. Due to header conflicts, I couldn’t get that to work, so I put together a UI_cmd class that is set in the UI and read in the Exec (a rough sketch follows this list). That seems to be working pretty well, though I may want to put a queue in there and turn it more into a message bus/event pump. That level of sophistication isn’t needed yet, though.
  • Integrated the sound library that I wrote. I still have to reference the D3D audio library in the main application, which I think is a bit odd, but it may be because I’m incorrectly exporting the symbol table from the static library. Again, that’s a refinement for later.
  • At this point, the 3D position of my OGL shape and the 3D position of my continuous sound (2D actually, Y = 0) are running in an infinite circle. It’s pretty cool to hear the audio track the image. I’m uploading a video of the running system, and although it won’t be in surround, you can hear the flanging effects from the sound moving around the helmet.
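The rough shape of the UI_cmd idea, with the GUI setting a command and the exec polling and consuming it each tick; the field names are illustrative, not the project’s:

    #include <string>

    struct UI_cmd {
        enum Action { NONE, LOAD_FILE, START_TEST, STOP_TEST } action = NONE;
        std::string argument;        // e.g. a filename for LOAD_FILE

        bool pending() const { return action != NONE; }
        void clear()          { action = NONE; argument.clear(); }
    };

    // GUI side (FLTK callback):    cmd.action = UI_cmd::START_TEST;
    // Exec side (per timer tick):  if (cmd.pending()) { dispatch(cmd); cmd.clear(); }

Swapping the single slot for a queue would turn this into the message bus/event pump mentioned above.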

Not bad for 90 minutes worth of work…

I’m busy doing demos and presentations in my day job, so this has been suffering. Nonetheless, here’s the progress for today:

  • Added a fine-grained timer callback to the main app
  • Added an OpenGL window, set to Ortho2 (a 2D orthographic projection) with pixel-accurate dimensions
  • Connected the timer to the OpenGL window, and set the position of what will be the emitter. We won’t see this during the actual test, but it will be good for debugging (a minimal FLTK sketch of this wiring follows this list).
  • I need to track mouse clicks and motion in the GL window. That will come tomorrow, and then I’ll work on integrating the audio library. That’s the basics for running the experiments. After that, I’ll work on reading and writing the input and result files.
  • Pix for today: AppProgress6.18.13
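A minimal FLTK sketch of that wiring: a fine-grained timeout driving redraws of an Fl_Gl_Window with a pixel-accurate 2D projection. The real app’s classes differ; this just shows the plumbing:

    #include <FL/Fl.H>
    #include <FL/Fl_Gl_Window.H>
    #include <FL/gl.h>

    class EmitterView : public Fl_Gl_Window {
    public:
        EmitterView(int x, int y, int w, int h) : Fl_Gl_Window(x, y, w, h) {}
        float ex = 0.0f, ey = 0.0f;           // emitter position, in pixels

    protected:
        void draw() override {
            if (!valid())
                ortho();                      // Fl_Gl_Window's pixel-exact 2D projection
            glClear(GL_COLOR_BUFFER_BIT);
            // ...draw the emitter marker at (ex, ey) for debugging...
        }
    };

    static void tick(void *data) {
        EmitterView *view = static_cast<EmitterView *>(data);
        // ...advance the emitter position here...
        view->redraw();
        Fl::repeat_timeout(0.01, tick, data); // ~100 Hz callback
    }

    int main(int argc, char **argv) {
        EmitterView view(0, 0, 512, 512);
        view.show(argc, argv);
        Fl::add_timeout(0.01, tick, &view);
        return Fl::run();
    }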

Let’s make science!

Alright, so I now have my audio library. Next on the agenda is a test program that measures users’ reactions to vibroacoustic input. The test needs to present randomized stimuli to users, so that they can be tested for:

  • Time to respond with a direction
  • Accuracy of direction
  • Efficacy of stimuli

Since this is probably going to be within-subjects (multiple stimuli) and also between-subjects (same tests on multiple users), we’re going to want to be able to present the same stimulus sequence to everyone, so we’ll need to seed the random number generator.

  • Start with the default random number generator, but maybe run it through a wrapper class in case we need something like a Mersenne Twister (a sketch of such a wrapper follows this list).
  • XML file to specify the input and output of the experiment. This library looks reasonable.
    • Input
      • Sounds to use (random distribution of sound use)
      • Test type (Accuracy, Speed, or both)
      • Attempts per test
      • Number of tests (must be even)
      • Random seed
      • min/max delay between test segments
      • output filename
    • Output
      • Test UID
      • Date
      • Time
      • Subject
      • Researcher
      • Free form note field (1024 characters?)
      • Accuracy or Reaction time test
      • Audio configuration
      • Random seed
      • Calibration results
        • Time(s) to click in response to visual cue
        • Time(s) to click in response to audio cue
      • For each played sound
        • Sequence x of total
        • Audio file(s) used (WAV)
        • Audio source position (x, y) in screen coordinates from the origin, where the user’s head is
        • Audio playback matrix (actual speaker relative volume)
        • Time to click after play start
        • Duration of sound
        • Click position (x, y) in screen coordinates from the origin, where the sound is perceived to have come from
  • App
    • Text
      • File navigator for xml file
    • Calibrate
      • Runs a sequence of tests where the user has to click the mouse as quickly as possible in response to the canvas flashing white, and then all(?) speakers in the headpiece playing the calibration sound
      • Calibration cues have a randomly determined timing between X and Y seconds
      • Test is disabled until calibration is run. Loading a new xml document effectively resets the system, requiring a new calibration sequence
    • Test
      • Shows a label that says either “Accuracy” or “Speed” based on which test is being run. We could change the background of the display as well?
      • The graphics screen shows a circular cursor that resets to the center of the graphics screen at the beginning of each section. Once the audio cue plays, the user can move the mouse away from the center towards the direction of the sound. The circle is clamped in its motion so that the result is always a valid angle, as long as the user moves the cursor far enough away from the center (TBD). Clicking the mouse causes the clock to stop and the cursor to reset.
      • If this is not the last test segment, then a random time period between X and Y seconds elapses before the next test is run.
      • Once the test completes, the system checks to see if that is the last one. If not, a stochastic choice is made to determine if the next test should be speed or accuracy. By the time all tests have run, the number of speed and accuracy runs will be equal.
    • Output file is appended throughout the test (open, write, close? Or read in the DOM, update and write out?)
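A sketch of the seeded-sequence idea: a thin wrapper around the generator (std::mt19937 is already a Mersenne Twister) plus a pre-shuffled, evenly split speed/accuracy ordering so the two counts come out equal by the end. Names are illustrative:

    #include <algorithm>
    #include <random>
    #include <vector>

    enum class TestType { Speed, Accuracy };

    class TestRandom {
    public:
        explicit TestRandom(unsigned seed) : gen(seed) {}    // same seed -> same sequence

        double uniform(double lo, double hi) {               // e.g. inter-trial delay
            return std::uniform_real_distribution<double>(lo, hi)(gen);
        }

        // Number of tests must be even: half Speed, half Accuracy, shuffled once.
        std::vector<TestType> testOrder(int numTests) {
            std::vector<TestType> order(numTests / 2, TestType::Speed);
            order.resize(numTests, TestType::Accuracy);
            std::shuffle(order.begin(), order.end(), gen);
            return order;
        }

    private:
        std::mt19937 gen;
    };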

Started the FLTK wrapper, and probably saved a good deal of time by going back to Erco’s FLTK page and associated videos.