I got my amp (2 channels) and USB Dolby 7.1 adapter from Amazon last night and wired up the transducers this morning, hoping to make some vibrations this evening. However, I forgot that I needed to get the sound *from* the Dolby unit *to* the amplifier. Oops. Looks like a trip to Radio Shack tonight. And no work on this tomorrow, since I’ve got class.
Sigh. With luck, Friday.
So, after following the link from yesterday’s post, I put the following call in, just after main().
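The call itself isn’t reproduced in the post, so here’s a hedged sketch of the standard fix for Java 3D’s “No AudioDevice specified” error: ask the SimpleUniverse’s Viewer to create an audio device (backed by JavaSoundMixer). The variable name `universe` is mine, and this assumes the SimpleUniverse utility class is in use:

```java
import com.sun.j3d.utils.universe.SimpleUniverse;

SimpleUniverse universe = new SimpleUniverse();

// Create and attach a JavaSoundMixer-backed AudioDevice to the view,
// so Sound nodes (e.g. a looping BackgroundSound) are actually audible.
universe.getViewer().createAudioDevice();
```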
It all compiles fine and runs! I get the following warning:
*** WARNING: JavaSoundMixer: Streaming (uncached) audio not implemented
But since at this point I’m only interested in looping sounds anyway, this should be just fine. Nothing like a 13-year-old code base that still works. A toast to Sun.
And we get 3D graphics to boot:
Now I need to get the Dolby unit installed and running, and hook up the amp to my collection of tactile transducers. Tomorrow?
Displaying sound indications on a wearable computing system
Example methods and systems for displaying one or more indications that indicate (i) the direction of a source of sound and (ii) the intensity level of the sound are disclosed. A method may involve receiving audio data corresponding to sound detected by a wearable computing system. Further, the method may involve analyzing the audio data to determine both (i) a direction from the wearable computing system of a source of the sound and (ii) an intensity level of the sound. Still further, the method may involve causing the wearable computing system to display one or more indications that indicate (i) the direction of the source of the sound and (ii) the intensity level of the sound.
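The “analyze the audio” step can be sketched in plain Java. This is my illustration, not code from the patent: intensity as RMS level in dBFS, and a crude direction estimate from the level difference between two microphones. The class and method names (`SoundIndicator`, `rmsDb`, `direction`) and the 3 dB threshold are hypothetical:

```java
public class SoundIndicator {

    // RMS level of a block of samples, in dB relative to full scale (dBFS).
    public static double rmsDb(double[] samples) {
        double sum = 0.0;
        for (double s : samples) sum += s * s;
        double rms = Math.sqrt(sum / samples.length);
        return 20.0 * Math.log10(rms);
    }

    // Crude direction estimate from a two-microphone array: compare channel
    // energies (an interaural-level-difference approach). A real system would
    // more likely use time differences of arrival across a mic array.
    public static String direction(double[] left, double[] right) {
        double diff = rmsDb(left) - rmsDb(right);
        if (diff > 3.0) return "left";
        if (diff < -3.0) return "right";
        return "center";
    }

    public static void main(String[] args) {
        double[] loud  = {0.5, -0.5, 0.5, -0.5};     // RMS 0.5  ≈ -6 dBFS
        double[] quiet = {0.05, -0.05, 0.05, -0.05}; // RMS 0.05 ≈ -26 dBFS
        System.out.println(rmsDb(loud));             // ≈ -6.02
        System.out.println(direction(loud, quiet));  // left
    }
}
```

The display step would then just map the direction string to an on-screen arrow and the dB figure to an indicator size or color.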
Being a creature of habit, my thought was to go to J3D and use their API, which is quite nice, though essentially unchanged since 2000. It was split off from the main development line when Oracle came in, and was then moved off to Java.net – more specifically, java3d.java.net.
Since I have “The Java 3D API Specification, 2nd ed.”, I downloaded the latest version (1.5.2) and installed it, pulled the audio examples from the book’s CD (I know, how quaint), loaded everything into Eclipse, and built the three examples and their support classes.
Things were not happy when I tried to run, though. I got an error saying that I shouldn’t use the 32-bit libraries on a 64-bit machine. My first reaction was that something was wrong – I have an Intel chip, and the 64-bit DLL is labeled “amd64” – but that name just denotes the 64-bit x86 architecture, which Intel chips implement too. So I uninstalled the 32-bit code and tried out the 64-bit version. By golly, it compiles and runs. The only problem is the following:
java.lang.UnsupportedOperationException: No AudioDevice specified
Java 3D: audio is disabled
Now, I know I have audio on my gaming-level development box, so that’s disturbing. This forum post looks promising; I’ll give it a try tomorrow. Failing that, I can go to LWJGL, which has hooks to OpenAL. That project appears to have more activity, and I like the LWJGL folks – they write good code. They even have tutorials!
In addition, I’ve ordered a Vantec USB External 7.1 Channel Audio Adapter, and an Audio Mini Amplifier to try hooking up various sound sources to my collection of tactile transducers from Parts Express.
Did you know that you can get reasonably usable depth information from a single camera? I would have thought it wasn’t practical. Clearly someone forgot to tell the folks at the UMichigan vision lab.
- An overview with a cool video
- The first paper
- A paper from the next year, extending the concept using a Kinect for depth
- Datasets. This kind of implies that the system is not real time?
- The code on GitHub. No – they say it’s GitHub, but it’s actually good old SVN. Downloading now. Done. Big.
- Requires the following libraries
- Boost (general libraries)
- OpenCV (computer vision)
- CMake – cross-platform make
It looks like it should compile on any platform, and like it’s not real time (images are read from and written to files). Looks like I need to set up a GCC environment.
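For reference, here’s a sketch of what the CMake side of a build with those three dependencies typically looks like. The project name, target name, source file, and Boost components are illustrative – not taken from their repo:

```cmake
cmake_minimum_required(VERSION 2.8)
project(depth_from_single_image)

# Locate the two third-party libraries listed above.
find_package(Boost REQUIRED COMPONENTS filesystem system)
find_package(OpenCV REQUIRED)

include_directories(${Boost_INCLUDE_DIRS} ${OpenCV_INCLUDE_DIRS})

# Hypothetical target; the real project defines its own.
add_executable(depth_demo main.cpp)
target_link_libraries(depth_demo ${Boost_LIBRARIES} ${OpenCV_LIBS})
```

With GCC installed, the usual out-of-source invocation is `cmake ..` from a build directory, followed by `make`.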