Monthly Archives: February 2013

Sorting out 3D Sound Libraries

Being a creature of habit, I thought I’d go to J3D and use its API, which is quite nice, though essentially unchanged since 2000. It was split off of the main development line when Oracle came in and was then moved to Java.net – more specifically, java3d.java.net.

Since I have “The Java 3D API Specification, 2nd ed.”, I downloaded the latest version (1.5.2) and installed it, pulled the audio examples from the book’s CD (I know, how quaint), loaded everything into Eclipse, and built the three examples and their support classes.

Things were not happy when I tried to run, though. I got an error saying that I shouldn’t use the 32-bit libraries on a 64-bit machine. The problem is, I have an Intel chip and the DLL is labeled for AMD chips (though “amd64” just names the x86-64 architecture, so it runs on 64-bit Intel parts too). So I uninstalled the 32-bit code and tried the 64-bit version. By golly, it compiles and runs. The only problem is the following:

java.lang.UnsupportedOperationException: No AudioDevice specified
	at com.sun.j3d.utils.universe.Viewer.createAudioDevice(Viewer.java:986)
	at SimpleSounds.init(SimpleSounds.java:232)
	at com.sun.j3d.utils.applet.MainFrame.run(MainFrame.java:267)
	at java.lang.Thread.run(Thread.java:722)
Java 3D: audio is disabled

Now, I know I have audio on my gaming-level development box, so that’s disturbing. This forum post looks promising; I’ll give it a try tomorrow. Failing that, I can go to LWJGL, which has hooks to OpenAL. That project appears to have more activity, and I like the LWJGL folks; they write good code. They even have tutorials!
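For reference, the workaround I keep seeing suggested (and what I suspect that forum post amounts to) is to skip Viewer.createAudioDevice(), which looks for an audio device class in the j3d properties, and instead construct a JavaSoundMixer by hand and register it with the PhysicalEnvironment. I haven’t tried it yet, so treat this as a sketch rather than a known-good fix (the class name is just for illustration):

    import java.awt.GraphicsConfiguration;
    import javax.media.j3d.Canvas3D;
    import javax.media.j3d.PhysicalEnvironment;
    import com.sun.j3d.audioengines.javasound.JavaSoundMixer;
    import com.sun.j3d.utils.universe.SimpleUniverse;

    public class AudioDeviceFix {
        public static void main(String[] args) {
            // The usual SimpleUniverse setup...
            GraphicsConfiguration config = SimpleUniverse.getPreferredConfiguration();
            Canvas3D canvas = new Canvas3D(config);
            SimpleUniverse universe = new SimpleUniverse(canvas);

            // ...but instead of universe.getViewer().createAudioDevice(), which is
            // the call that threw "No AudioDevice specified", create a
            // JavaSound-backed mixer ourselves and hand it to the physical
            // environment.
            PhysicalEnvironment physEnv = universe.getViewer().getPhysicalEnvironment();
            JavaSoundMixer mixer = new JavaSoundMixer(physEnv);
            mixer.initialize();
            physEnv.setAudioDevice(mixer);

            // With a device registered, BackgroundSound / PointSound nodes in the
            // scene graph should actually be rendered instead of audio being
            // silently disabled.
        }
    }

If that pans out, the SimpleSounds example from the book should only need its Viewer.createAudioDevice() call swapped for the mixer setup above.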

In addition, I’ve ordered a Vantec USB External 7.1 Channel Audio Adapter, and an Audio Mini Amplifier to try hooking up various sound sources to my collection of tactile transducers from Parts Express.

Multi-target Tracking with a Single Moving Camera

Did you know that you can get reasonably usable depth information from a single camera? I would have thought it wasn’t practical. Clearly someone forgot to tell the folks at the UMichigan vision lab.

  • An overview with cool video
  • The first paper
  • A paper from the next year, extending the concept using a Kinect for depth
  • Datasets. This kind of implies that the system is not real time?
  • The code on github. No, they say it’s github, but it’s actually good old SVN. Downloading now. Done. Big.
    • Requires the following libraries and tools:
      • Boost (general libraries)
      • OpenCV (computer vision)
      • CMake – cross-platform make

Looks like it should compile on any platform, and it doesn’t appear to be real time (images are read from files). Looks like I need to set up a GCC environment.