Category Archives: OpenGL

Audio is in and synchronized to the video

Well that was a good day.

  • Spent a good deal of time figuring out the best way for the GUI and the Exec to communicate. Originally I wanted to pass a pointer to the GUI from the Exec so that user actions in the GUI could be executed in a more reasonable place. Due to header conflicts I couldn’t get that to work, so I put together a UI_cmd class that is set in the UI and read in the Exec (a rough sketch of the idea follows this list). That seems to be working pretty well, though I may want to put a queue in there and turn it into more of a message bus/event pump. That level of sophistication isn’t needed yet.
  • Integrated the sound library that I wrote. I still have to reference the D3D audio library in the main application, which seems a bit odd; I suspect it’s because I’m not exporting the symbol table from the static library correctly. Again, that’s a refinement for later.
  • At this point, the 3D position of my OGL shape and the 3D position of my continuous sound (2D actually, Y = 0) are running in an infinite circle. It’s pretty cool to hear the audio track follow the image. I’m uploading a video of the running system, and although it won’t be in surround, you can hear the flanging effects from the sound moving around the helmet.
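For reference, here’s a minimal sketch of the UI_cmd idea mentioned above: the GUI sets a command, the Exec polls and consumes it. Everything other than the UI_cmd name (the command types, the mutex, the consume() method) is my own guess at the shape of it, and a small queue in place of the single slot is all it would take to grow this into the message bus/event pump I mentioned.

    // Minimal sketch: GUI thread sets a command, Exec loop polls and consumes it.
    #include <mutex>
    #include <string>

    class UI_cmd {
    public:
        enum Type { NONE, LOAD_FILE, START, STOP };   // hypothetical command set

        // Called from the GUI callbacks.
        void set(Type t, const std::string& arg = "") {
            std::lock_guard<std::mutex> lock(mutex_);
            type_ = t;
            arg_  = arg;
        }

        // Called from the Exec loop; returns NONE if nothing is pending.
        Type consume(std::string* arg = nullptr) {
            std::lock_guard<std::mutex> lock(mutex_);
            Type t = type_;
            if (arg) *arg = arg_;
            type_ = NONE;            // command is cleared once the Exec has read it
            return t;
        }

    private:
        std::mutex  mutex_;
        Type        type_ = NONE;
        std::string arg_;
    };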

Not bad for 90 minutes’ worth of work…

I’m busy doing demos and presentations in my day job, so this has been suffering. Nonetheless, here’s the progress for today:

  • Added a fine-grained timer callback to the main app
  • Added an OpenGL window, set to an Ortho2 (2D orthographic) projection with pixel-accurate dimensions
  • Connected the timer to the OpenGL window and set the position of what will be the emitter (a sketch of the timer/window setup follows this list). We won’t see this during the actual test, but it will be good for debugging.
  • I need to track mouse clicks and motion in the GL window. That will come tomorrow, and then I’ll work on integrating the audio library. That’s the basics for running the experiments. After that, I’ll work on reading and writing the input and result files.
  • Pix for today: AppProgress6.18.13
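Here’s a rough sketch of the timer-plus-GL-window wiring from the list above, including the mouse handling planned for tomorrow. The class and callback names (EmitterWindow, timer_cb) are hypothetical, not the actual app’s; the pixel-accurate part is the glOrtho call that maps one GL unit to one pixel, and the timer just drives the emitter in the same circular path described in the earlier post.

    #include <FL/Fl.H>
    #include <FL/Fl_Gl_Window.H>
    #include <FL/gl.h>
    #include <cmath>

    class EmitterWindow : public Fl_Gl_Window {
    public:
        EmitterWindow(int x, int y, int w, int h)
            : Fl_Gl_Window(x, y, w, h, "emitter") {}

        void set_emitter(float x, float y) { ex_ = x; ey_ = y; redraw(); }

        void draw() override {
            if (!valid()) {
                // Pixel-accurate 2D orthographic projection: one GL unit == one pixel.
                glViewport(0, 0, w(), h());
                glMatrixMode(GL_PROJECTION);
                glLoadIdentity();
                glOrtho(0, w(), 0, h(), -1, 1);
                glMatrixMode(GL_MODELVIEW);
            }
            glClear(GL_COLOR_BUFFER_BIT);
            glLoadIdentity();
            // Draw the (debug-only) emitter marker as a small quad.
            glColor3f(1.0f, 0.5f, 0.0f);
            glBegin(GL_QUADS);
            glVertex2f(ex_ - 4, ey_ - 4); glVertex2f(ex_ + 4, ey_ - 4);
            glVertex2f(ex_ + 4, ey_ + 4); glVertex2f(ex_ - 4, ey_ + 4);
            glEnd();
        }

        // Mouse clicks and drags in the GL window (the piece planned for tomorrow).
        int handle(int event) override {
            switch (event) {
            case FL_PUSH:
            case FL_DRAG:
                // Fl::event_x()/event_y() are window-relative pixel coordinates.
                last_x_ = Fl::event_x();
                last_y_ = Fl::event_y();
                return 1;
            default:
                return Fl_Gl_Window::handle(event);
            }
        }

    private:
        float ex_ = 0, ey_ = 0;
        int   last_x_ = 0, last_y_ = 0;
    };

    // Fine-grained timer: move the emitter in a circle and repeat ~60 times a second.
    static void timer_cb(void* data) {
        static float theta = 0.0f;
        EmitterWindow* win = static_cast<EmitterWindow*>(data);
        theta += 0.02f;
        win->set_emitter(win->w() / 2 + 100 * std::cos(theta),
                         win->h() / 2 + 100 * std::sin(theta));
        Fl::repeat_timeout(1.0 / 60.0, timer_cb, data);
    }

    int main() {
        EmitterWindow win(100, 100, 512, 512);
        win.show();
        Fl::add_timeout(1.0 / 60.0, timer_cb, &win);
        return Fl::run();
    }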

Let’s make science!

Alright, so I now have my audio library. Next on the agenda is a program that measures users’ reactions to vibroacoustic input. The test needs to present randomized stimuli to users so that we can measure:

  • Time to respond with a direction
  • Accuracy of direction
  • Efficacy of stimuli

Since this is probably going to be within subjects (multiple stimuli) and also between subjects (the same tests on multiple users), we’ll want to be able to present the same stimulus sequence every time, so we’ll need to seed the random number generator.

  • Start with the default random number generator, but maybe run it through a wrapper class in case we need something like a Mersenne Twister (see the sketches after this list).
  • XML file to specify the input and output of the experiment. This library looks reasonable.
    • Input
      • Sounds to use (random distribution of sound use)
      • Test type (Accuracy, Speed, or both)
      • Attempts per test
      • Number of tests (must be even)
      • Random seed
      • Min/max delay between test segments
      • Output filename
    • Output
      • Test UID
      • Date
      • Time
      • Subject
      • Researcher
      • Free-form note field (1024 characters?)
      • Test type (Accuracy or Reaction time)
      • Audio configuration
      • Random seed
      • Calibration results
        • Time(s) to click in response to visual cue
        • Time(s) to click in response to audio cue
      • For each played sound
        • Sequence x of total
        • Audio file(s) used (WAV)
        • Audio source position (x, y) in screen coordinates relative to the origin (the position of the user’s head)
        • Audio playback matrix (actual speaker relative volume)
        • Time to click after play start
        • Duration of sound
        • Click position (x, y) in screen coordinates relative to the origin, indicating where the sound was perceived to have come from
  • App
    • Text
      • File navigator for the XML file
    • Calibrate
      • Runs a sequence of tests where the user has to click the mouse as quickly as possible, first in response to the canvas flashing white, and then in response to all(?) speakers in the headpiece playing the calibration sound
      • Calibration cues have a randomly determined timing between X and Y seconds
      • Test is disabled until calibration is run. Loading a new XML document effectively resets the system, requiring a new calibration sequence
    • Test
      • Shows a label that says either “Accuracy” or “Speed” based on which test is being run. We could change the background of the display as well?
      • The graphics screen shows a circular cursor that resets to the center of the graphics screen at the beginning of each section. Once the audio cue plays, the user can move the mouse away from the center towards the direction of the sound. The circle is clamped in its motion so that the result is always a valid angle, as long as the user moves the cursor far enough away from the center (TBD; the angle sketch after this list shows one way to compute this). Clicking the mouse causes the clock to stop and the cursor to reset.
      • If this is not the last test segment, then a random time period between X and Y seconds elapses before the next test is run.
      • Once a test segment completes, the system checks whether it is the last one. If not, a stochastic choice determines whether the next test is speed or accuracy, constrained so that by the time all tests have run, the number of speed and accuracy runs is equal.
    • Output file is appended throughout the test (open, write, close? Or read in the DOM, update and write out?)
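To make the plan above a bit more concrete, here are two small sketches. Both use hypothetical names and are only a guess at how the real app will be structured. The first is the seedable RNG wrapper from the first bullet (std::mt19937 happens to be a Mersenne Twister, so swapping engines later only touches the wrapper) plus one way to guarantee the equal split between speed and accuracy runs: build the whole order up front with exactly half of each (hence the even test count) and shuffle it with the seeded generator, which amounts to making the per-segment stochastic choice with the counts kept balanced.

    #include <algorithm>
    #include <random>
    #include <vector>

    class ExperimentRng {
    public:
        explicit ExperimentRng(unsigned seed) : engine_(seed) {}

        // Uniform integer in [lo, hi], e.g. for picking which sound to play.
        int uniform_int(int lo, int hi) {
            return std::uniform_int_distribution<int>(lo, hi)(engine_);
        }

        // Uniform value in [lo, hi], e.g. the min/max delay between test segments.
        double uniform_real(double lo, double hi) {
            return std::uniform_real_distribution<double>(lo, hi)(engine_);
        }

        std::mt19937& engine() { return engine_; }

    private:
        std::mt19937 engine_;   // the engine is the only thing to swap out later
    };

    enum class TestType { SPEED, ACCURACY };

    // Build the whole test order up front: exactly half Speed and half Accuracy,
    // shuffled with the seeded generator so the same seed always yields the
    // same sequence for every subject.
    std::vector<TestType> make_test_order(int num_tests, ExperimentRng& rng) {
        std::vector<TestType> order;
        for (int i = 0; i < num_tests / 2; ++i) {
            order.push_back(TestType::SPEED);
            order.push_back(TestType::ACCURACY);
        }
        std::shuffle(order.begin(), order.end(), rng.engine());
        return order;
    }

The second sketch turns a clicked cursor position into a response angle, with a minimum radius standing in for the TBD “far enough from the center” rule.

    #include <cmath>
    #include <optional>

    // Returns the response angle in degrees (0 = +X axis, counter-clockwise),
    // or nothing if the cursor is still too close to the center to define a
    // valid direction. center and click are both in screen pixels.
    std::optional<double> response_angle(double cx, double cy,
                                         double clickx, double clicky,
                                         double min_radius /* TBD */) {
        const double kPi = 3.14159265358979323846;
        const double dx = clickx - cx;
        const double dy = clicky - cy;
        if (std::hypot(dx, dy) < min_radius)
            return std::nullopt;               // not far enough from the center
        double deg = std::atan2(dy, dx) * 180.0 / kPi;
        if (deg < 0) deg += 360.0;             // normalize to [0, 360)
        return deg;
    }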

Started the FLTK wrapper, and probably saved a good deal of time by going back to Erco’s FLTK page and associated videos.

Sorting out 3D Sound Libraries

Being a creature of habit, my first thought was to go to Java 3D (J3D) and use its API, which is quite nice, though essentially unchanged since 2000. It was split off of the main development line when Oracle came in and was then moved off to Java.net – more specifically, java3d.java.net.

Since I have “The Java 3D API Specification, 2nd ed.”, I downloaded the latest version (1.5.2) and installed it, pulled the audio examples from the book’s CD (I know, how quaint), loaded everything into Eclipse, and built the three examples and their support classes.

Things were not happy when I tried to run, though. I got an error saying that I shouldn’t use the 32-bit libraries on a 64-bit machine. Problem is, I have an Intel chip and the 64-bit DLL is labeled amd64 (which, despite the name, is just 64-bit x86 and runs on Intel chips too). So I uninstalled the 32-bit code and tried out the 64-bit version. By golly, it compiles and runs. The only problem is the following:

java.lang.UnsupportedOperationException: No AudioDevice specified
	at com.sun.j3d.utils.universe.Viewer.createAudioDevice(Viewer.java:986)
	at SimpleSounds.init(SimpleSounds.java:232)
	at com.sun.j3d.utils.applet.MainFrame.run(MainFrame.java:267)
	at java.lang.Thread.run(Thread.java:722)
Java 3D: audio is disabled

Now, I know I have audio on my gaming-level development box, so that’s disturbing. This forum post looks promising; I’ll give it a try tomorrow. Failing that, I can switch to LWJGL, which has hooks to OpenAL. That project appears to have more activity, and I like the LWJGL folks; they write good code. They even have tutorials!

In addition, I’ve ordered a Vantec USB External 7.1 Channel Audio Adapter, and an Audio Mini Amplifier to try hooking up various sound sources to my collection of tactile transducers from Parts Express.