Author Archives: pgfeldman

Now with more wires

I managed to get all of the components hooked up around the amps today. Now I have the USB Dolby 7.1 decoder on top of the amp stack, with all four amps bolted together. I’m waiting to get a power strip that I can hook all the power supplies up to. But everything should have a nice blue glow when it’s plugged in. Tomorrow, I’ll hook up the speakers and see how it all sounds/feels.

[Image: IMG_1988]

Today’s laundry list

Not much progress, due to other classes, work, life-in-general. But I have a gap today so:

Meeting at PAD lab to discuss our cohort of projects

I’m going to swing by Home Depot on the way home today to get some .25″-ish threaded rod and nuts so that I can make a tiny rack for all the amps. They also sell Y-adapters, so I’ll pick up a few more of them. Pix by the end of the day, with luck.

And I came across this cool thing: $59 pcDuino – AllWinner A10 Board with Arduino Compatible Headers.

And here we go – the world’s cutest 8-channel amp:

[Image: IMG_1987]

This looks interesting

First, I got the headset. Very nice.

Second, there is a company, Sixense, that looks to be making some very good immersive hardware and a free(?) API. This might be very good for the pointing test. And they have some nice sound (MIDI) code too. It’s somehow tied up with Intel’s perceptual computing effort. I learned about this from a Slashdotted article about the Holodeck Project.

Nasty bacteria

Bronchitis sucks. In a wheezy, “I need more air” kind of way. Anyway, the amps, actuators, and a Dolby headset are on their way and should arrive today. I’m going to take advantage of the fact that they are not here to work on getting these creatures out of my lungs.

Sometimes you get to push the Easy Button

Not much to report today, except that everything worked the way it was supposed to.

  • Dolby worked as plug-and play.
  • The small tactile actuator seems best.
  • The amp is powerful.
  • Java3D talks to at least two channels.

So the next step is to get three more amps, six more actuators, and some kind of headgear. And maybe some kind of rigid helmet to attach the actuators to? I’m thinking I can use one of my old bike helmets to start with.
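While I wait on the hardware, the mapping from a sound direction to per-actuator drive levels for eight actuators in a ring around a helmet can be sketched now. This is just my guess at a scheme – the layout, the class name, and the cosine falloff are my own assumptions, not anything Dolby or Java3D prescribes:

```java
// Sketch: map a source direction (radians, 0 = straight ahead) to
// gain levels for 8 actuators spaced evenly around a helmet.
// The actuator layout and the cosine falloff are assumptions.
public class ActuatorPan {
    static final int CHANNELS = 8;

    public static double[] gains(double sourceAngle) {
        double[] g = new double[CHANNELS];
        for (int i = 0; i < CHANNELS; i++) {
            double actuatorAngle = 2.0 * Math.PI * i / CHANNELS;
            // Signed angular distance between the source and this actuator,
            // wrapped into [-pi, pi]
            double d = Math.atan2(Math.sin(sourceAngle - actuatorAngle),
                                  Math.cos(sourceAngle - actuatorAngle));
            // Full drive at zero offset, fading to nothing at 90 degrees out
            g[i] = Math.max(0.0, Math.cos(d));
        }
        return g;
    }
}
```

A source dead ahead would drive actuator 0 at full strength, its two neighbors at about 0.71, and leave the back of the head quiet.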

Anyway. Progress!

Wireless. But in a bad way.


I got my amp (2 channels) and USB Dolby 7.1 adapter from Amazon last night and wired up the transducers this morning, hoping to make some vibrations this evening. However, I forgot that I needed to get the sound *from* the Dolby unit *to* the amplifier. Oops. Looks like a trip to Radio Shack tonight. And no work on this tomorrow, since I’ve got class.

Sigh. With luck, Friday.

Sounding good to me!

So, after following the link from yesterday’s post, I put the following call in, right at the start of main():

System.setProperty("j3d.audiodevice", "com.sun.j3d.audioengines.javasound.JavaSoundMixer");

It all compiles fine and runs! I get the following warning:

***
*** WARNING: JavaSoundMixer: Streaming (uncached) audio not implemented
***

But since at this point I’m only interested in looping sounds anyway, this should be just fine. Nothing like a 13-year-old code base that still works. A toast to Sun.
And we get 3D graphics to boot:

SimpleSounds window

Now I need to get the Dolby unit installed and running, and hook up the amp to my collection of tactile transducers. Tomorrow?
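For the record, the reason the one-liner works: Java 3D reads the j3d.audiodevice property when the Viewer builds its audio device, so it has to be set before any universe is constructed, or you get the “No AudioDevice specified” exception from yesterday. A minimal sketch of the ordering (the class and method names here are mine, not part of the API):

```java
// Sketch: the j3d.audiodevice property must be set before the Viewer
// (or SimpleUniverse) is constructed; otherwise createAudioDevice()
// throws "No AudioDevice specified". Class/method names are mine.
public class AudioSetup {
    public static void configureAudio() {
        System.setProperty("j3d.audiodevice",
            "com.sun.j3d.audioengines.javasound.JavaSoundMixer");
        // ...then build the SimpleUniverse and call
        // universe.getViewer().createAudioDevice() as usual.
    }
}
```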

Some Prior Art?

Displaying sound indications on a wearable computing system

Abstract

Example methods and systems for displaying one or more indications that indicate (i) the direction of a source of sound and (ii) the intensity level of the sound are disclosed. A method may involve receiving audio data corresponding to sound detected by a wearable computing system. Further, the method may involve analyzing the audio data to determine both (i) a direction from the wearable computing system of a source of the sound and (ii) an intensity level of the sound. Still further, the method may involve causing the wearable computing system to display one or more indications that indicate (i) the direction of the source of the sound and (ii) the intensity level of the sound.
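The “intensity level” half of claim (ii) is the easy part to prototype: it’s essentially an RMS level over a window of samples, converted to dB. A sketch of that piece (my own reading – nothing here comes from the patent text itself):

```java
// Sketch: RMS intensity of a window of audio samples, in dB relative
// to full scale. Samples are assumed normalized to [-1, 1]; the tiny
// epsilon avoids log(0) on silence. My own reading, not the patent's.
public class SoundLevel {
    public static double rmsDb(double[] samples) {
        double sum = 0.0;
        for (double s : samples) {
            sum += s * s;
        }
        double rms = Math.sqrt(sum / samples.length);
        return 20.0 * Math.log10(rms + 1e-12);
    }
}
```

The direction half, (i), is the interesting part – that takes at least two microphones and some cross-correlation, which is well beyond a blog-post sketch.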

Sorting out 3D Sound Libraries

Being a creature of habit, my thought was to go to J3D and use their API, which is quite nice, though essentially unchanged since 2000. It was split off of the main development line when Oracle came in and was then moved off to Java.net – more specifically, java3d.java.net.

Since I have “The Java 3D API Specification, 2nd ed.”, I downloaded the latest version (1.5.2) and installed it, pulled out the audio examples from the book’s CD (I know, how quaint), loaded everything into Eclipse, and built the three examples and their support classes.

Things were not happy when I tried to run, though. I got an error saying that I shouldn’t use the 32-bit libraries on a 64-bit machine. My first thought was that I have an Intel chip and the DLL is for AMD chips (in fact, “amd64” is just the name of the 64-bit x86 architecture, which Intel chips use too). So I uninstalled the 32-bit code and tried out the 64-bit version. By golly, it compiles and runs. The only problem is the following:

java.lang.UnsupportedOperationException: No AudioDevice specified
	at com.sun.j3d.utils.universe.Viewer.createAudioDevice(Viewer.java:986)
	at SimpleSounds.init(SimpleSounds.java:232)
	at com.sun.j3d.utils.applet.MainFrame.run(MainFrame.java:267)
	at java.lang.Thread.run(Thread.java:722)
Java 3D: audio is disabled

Now, I know I have audio on my gaming-level development box, so that’s disturbing. This forum post looks promising. I’ll give that a try tomorrow. Failing that, I can go to the LWJGL, which has hooks to OpenAL. That appears to have more activity, and I like the LWJGL folks, they write good code. They even have tutorials!

In addition, I’ve ordered a Vantec USB External 7.1 Channel Audio Adapter, and an Audio Mini Amplifier to try hooking up various sound sources to my collection of tactile transducers from Parts Express.

Multi-target tracking with Single Moving Camera

Did you know that you can get reasonably usable depth information from a single camera? I would have thought that it wasn’t practical. Clearly someone forgot to tell the folks at the UMichigan vision lab.

  • An overview with cool video
  • The first paper
  • A paper from the next year, extending the concept using a Kinect for depth
  • Datasets. This kind of implies that the system is not real time?
  • The code on GitHub. No, they say it’s GitHub, but it’s actually good, old SVN. Downloading now. Done. Big. Requires the following libraries:
    • Boost (general libraries)
    • OpenCV (computer vision)
    • CMake (cross-platform make)

It looks like it should compile on any platform, and it appears not to be real time (images are read from files). Either way, it looks like I need to set up a GCC environment.