The company to watch seems to be Metaio. Here’s a cool (but trying a bit hard to be edgy and trendy) video:
They’ve got an API as well, so this might be something to look at for the image-processing / alternative-rendering component of the system.
First, I got the headset. Very nice.
Second, there is a company, Sixense, that looks to be making some very good immersive hardware with a free(?) API. This might be very good for the pointing test, and they have some nice sound (MIDI) code too. It’s somehow tied up with Intel’s perceptual computing effort; I learned about it from a Slashdotted article about the Holodeck Project.
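I haven’t dug into the Sixense API itself yet, so the sketch below is only the math a pointing test would need (the names and sample numbers are mine, not Sixense calls): the angular error between where the tracked hand is pointing and the direction from the hand to the target.

    // Hypothetical sketch of the pointing-test math (not the Sixense API):
    // angular error between a tracked pointing direction and the target direction.
    #include <cmath>
    #include <iostream>

    struct Vec3 { double x, y, z; };

    double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    double norm(const Vec3& v) { return std::sqrt(dot(v, v)); }

    // Angle (degrees) between the direction the hand points and the
    // direction from the hand position to the target position.
    double pointingErrorDeg(const Vec3& handPos, const Vec3& handDir, const Vec3& targetPos) {
        const double kPi = 3.14159265358979323846;
        Vec3 toTarget { targetPos.x - handPos.x, targetPos.y - handPos.y, targetPos.z - handPos.z };
        double c = dot(handDir, toTarget) / (norm(handDir) * norm(toTarget));
        if (c > 1.0) c = 1.0;      // clamp against rounding error
        if (c < -1.0) c = -1.0;
        return std::acos(c) * 180.0 / kPi;
    }

    int main() {
        // Made-up sample: hand at origin pointing along +x, target slightly off-axis.
        Vec3 handPos{0, 0, 0}, handDir{1, 0, 0}, targetPos{2.0, 0.1, 0.0};
        std::cout << "pointing error: " << pointingErrorDeg(handPos, handDir, targetPos) << " deg\n";
    }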
Displaying sound indications on a wearable computing system
Abstract
Example methods and systems for displaying one or more indications that indicate (i) the direction of a source of sound and (ii) the intensity level of the sound are disclosed. A method may involve receiving audio data corresponding to sound detected by a wearable computing system. Further, the method may involve analyzing the audio data to determine both (i) a direction from the wearable computing system of a source of the sound and (ii) an intensity level of the sound. Still further, the method may involve causing the wearable computing system to display one or more indications that indicate (i) the direction of the source of the sound and (ii) the intensity level of the sound.
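The abstract doesn’t say how the direction or level would actually be estimated, so here is just a back-of-the-envelope sketch (my own assumptions, not the patent’s method): direction from the time difference of arrival between two microphones, intensity from the RMS level of one channel.

    // Hypothetical sketch (not from the patent): estimate sound bearing from a
    // two-microphone array via time difference of arrival, and level via RMS.
    #include <cmath>
    #include <cstddef>
    #include <iostream>
    #include <vector>

    // Lag (in samples) that maximizes cross-correlation between the two
    // microphone signals, searched over +/- maxLag samples.
    int bestLag(const std::vector<double>& left, const std::vector<double>& right, int maxLag) {
        int best = 0;
        double bestScore = -1e300;
        for (int lag = -maxLag; lag <= maxLag; ++lag) {
            double score = 0.0;
            for (std::size_t i = 0; i < left.size(); ++i) {
                long j = static_cast<long>(i) + lag;
                if (j >= 0 && j < static_cast<long>(right.size()))
                    score += left[i] * right[j];
            }
            if (score > bestScore) { bestScore = score; best = lag; }
        }
        return best;
    }

    int main() {
        const double kPi = 3.14159265358979323846;
        const double sampleRate = 44100.0;   // Hz
        const double micSpacing = 0.15;      // meters between the mics (assumed)
        const double speedOfSound = 343.0;   // m/s

        // Placeholder buffers; a real system would fill these from the headset mics.
        std::vector<double> left(1024, 0.0), right(1024, 0.0);

        // Direction: time difference of arrival -> bearing angle.
        int lag = bestLag(left, right, 20);
        double tdoa = lag / sampleRate;
        double ratio = tdoa * speedOfSound / micSpacing;
        if (ratio > 1.0) ratio = 1.0;
        if (ratio < -1.0) ratio = -1.0;
        double bearingDeg = std::asin(ratio) * 180.0 / kPi;

        // Intensity: RMS of one channel converted to decibels relative to full scale.
        double sumSq = 0.0;
        for (double s : left) sumSq += s * s;
        double rms = std::sqrt(sumSq / left.size());
        double db = 20.0 * std::log10(rms + 1e-12);

        std::cout << "bearing ~" << bearingDeg << " deg, level ~" << db << " dBFS\n";
    }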
Did you know that you can get reasonably usable depth information from a single camera? I would have thought that it wasn’t practical. Clearly someone forgot to tell the folks at the UMichigan vision lab.
It looks like it should compile on any platform, but it isn’t real time (the images are stored in files and processed offline). Either way, I need to set up a GCC environment.
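Once the toolchain is in place, a throwaway program like this (all names are placeholders, nothing from the Michigan code) is enough to confirm that g++ works and that the image files the offline pipeline expects are actually readable:

    // Quick sanity check for a fresh GCC setup; nothing to do with the
    // Michigan depth code itself. Build with:  g++ -O2 -std=c++11 check.cpp -o check
    #include <fstream>
    #include <iostream>

    int main(int argc, char** argv) {
        // The depth code works offline on image files, so this loop just
        // confirms the listed files exist and can be opened for reading.
        for (int i = 1; i < argc; ++i) {
            std::ifstream in(argv[i], std::ios::binary);
            std::cout << argv[i] << (in ? " ok" : " missing") << "\n";
        }
        return 0;
    }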