Category Archives: Sound

Milestones

The first draft of the paper is done! It comes out at about 12 pages. I’ll need to cut it down to 6 to submit for CHI 2014 WIP. Easier than writing though. Of course, that’s just the first draft. More to come, I’m guessing. Still, it’s a nice feeling, and since I’ve burned through most of my 20% time, it’s time for me to get back to actually earning my pay, so I’ll be taking a break from this blog for a while. More projects are coming up though, so stay tuned. I’ll finish up this post with some images of all the design variations that led to the final, working version:

Prototype Evolution (click to enbiggen)

The chronological order of development is from left to right and top to bottom. Starting at the top left:

  • The first proof of concept. Originally force input with motion feedback. It was with this system that I discovered that all actuator motion had to be made in relation to a proximal base.
  • The first prototype. It had six degrees of freedom, allowing a user to move a gripper within a 3D environment and grab items. It worked well enough that it led to…
  • The second prototype. A full five-finger gripper attached to an XYZ base. I ran into problems with this one. It turned out that motion feedback imposed too much cognitive load to work. The user would lose track of where their fingers were, even with the proximal base. So that led to…
  • The third prototype. This used resistive force sensors and vibrotactile feedback. The feedback was provided by voice coils capable of the full audio range, which meant that all kinds of sophisticated contact and surface effects could be produced. That proved that five fingers could work with vibrotactile feedback, but the large-scale motions of the base still seemed to need motion (I’ve since learned that isometric devices are most effective over short ranges). This prototype was also loaded with electronics concepts that I wanted to try out: Arduino sensing, a MIDI synthesizer per finger, etc.
  • To explore direct motion for the base, the fourth prototype was a 3D-printed five-finger Force Input / Vibrotactile Output (FS/VO) system that sat on top of a mouse. This was a plug-and-play substitution that worked with the previous electronics and worked quite nicely, though the ability to grip doesn’t give you much to do in the XY plane.
  • To get 3D interaction, I took two FS/VO modules and added them to a Phantom Omni. I also dropped the Arduino and the synthesizer, using XAudio2 8-channel audio and a Phidgets interface card instead. This system worked very nicely. The FS/VO elements combined with a force-feedback base turned out to be very effective. That’s what became the basis for the paper, and hopefully the basis for future work.
  • Project code is here (MD5: B32EE89CEA9C8E02E5B99BFAF24877A0).

Results?

Looks like we got some results with the headset system. Still trying to figure out what it means (other than the obvious: it’s easier to find the source of a single sound).

HeadsetPrelimResults

Here are the confidence intervals:

confidenceIntervals

Next I try to do something with the Phantom results. I think I may need some more data before anything shakes out.

Life can be a drag sometimes

  • Cleaning up commands. Mostly done.
  • While testing the “test” part of the app, I’m realizing that my “ratio” calculations have some issues. Before trying to fix them directly, I’m going to try just making a different gripper that has three “sensor spheres” on each finger. Then I can just let my “drag-based” physics do the whole job. Finished. That’s much better.
  • Quick! What’s wrong with the following code?
	for(int i = 0; i < 3; ++i){
		position[i] += velocityVec[i];
		if(velocityVec[i] > drag*ratio){
			velocityVec[i] -= drag*ratio;
		}else{
			velocityVec[i] = 0;
		}
	}
  • Yep, the drag is only being applied to objects moving in a positive direction. This is a problem that has been driving me crazy for days. I thought it was some artifact of the communication between the Phantom control loop (1 kHz) and the simulation loop (100 Hz). Nope. Simple math mistake. Facepalm. (A corrected sketch follows this list.)
  • Pretty picture for the day. Notice that the grippers now have multiple points of contact:

BetterGripper

  • I’ve also started to notice how feedback changes the speed at which you can perform the task. Haptic and tactor seem pretty close. Open loop is much worse, at least subjectively. Let’s see what the data says.
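For the record, here is a minimal sketch of the fix for the drag bug called out above: the drag step has to oppose the sign of the velocity on each axis, not just be subtracted from it.

	for(int i = 0; i < 3; ++i){
		position[i] += velocityVec[i];
		float dragStep = drag*ratio;
		if(velocityVec[i] > dragStep){
			velocityVec[i] -= dragStep;		// moving in the positive direction: slow down
		}else if(velocityVec[i] < -dragStep){
			velocityVec[i] += dragStep;		// moving in the negative direction: slow down
		}else{
			velocityVec[i] = 0;			// within one drag step of rest: stop
		}
	}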

Blew my hand off for a while

I’m in the process of turning the Phantom testbed code into a research tool. This means that a lot of items that have been #defines now need to be variables and such.

One of the mechanisms that the shared memory app uses to communicate is a char[255] message. I basically sprintf whatever I want into that, and I can then debug both applications simultaneously.

However, after checking that some data were coming across correctly, I took the formatting argument out of the sprintf statement and left the value in. Suddenly I was overflowing the 255-character limit and causing all kinds of havoc. Took a few hours to chase that one down. That’s what you get for playing with C/C++. Moving on.
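For what it’s worth, the safer pattern looks something like this (a sketch; the struct and field names are made up, not the actual shared-memory layout): keep a literal format string and use a bounded write, so a stray value can never be treated as a format and nothing runs past the 255 characters.

	#include <cstdio>

	// Hypothetical stand-in for the shared memory block; only the message field matters here.
	struct SharedBlock {
		char message[255];
	};

	void postDebugMessage(SharedBlock* shared, const char* label, float value){
		// snprintf never writes past the buffer, and the literal format string means
		// any '%' characters hiding in 'label' can't be misread as conversions.
		snprintf(shared->message, sizeof(shared->message), "%s = %f", label, value);
	}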

Anyway, I now have an event-handling loop, and am able to load target spheres into the application and associate them with a sound file. Tomorrow we’ll try getting the sounds associated with the targets to play. There are some issues, primarily that the gripper can touch multiple targets simultaneously. Still, it looks pretty straightforward. After that I’ll start to roll the TestManager and TestResults classes into the application.
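Roughly what the target/sound association looks like (a sketch with invented names, not the real classes): each target sphere carries its sound, and the per-frame pass fires a sound for every target the gripper is currently touching, which is exactly where the multiple-contact issue shows up.

	#include <string>
	#include <vector>

	void playSound(const std::string& file);	// assumed to exist in the audio layer

	// Hypothetical target record: a sphere plus the sound file associated with it.
	struct TargetSphere {
		float x, y, z, radius;
		std::string soundFile;
		bool touched;		// set by the collision pass each frame
	};

	// Because the gripper can touch several targets at once,
	// this can trigger more than one sound in the same frame.
	void triggerTouchedSounds(std::vector<TargetSphere>& targets){
		for(int i = 0; i < (int)targets.size(); ++i){
			if(targets[i].touched){
				playSound(targets[i].soundFile);
			}
		}
	}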

The other thing to do for the day is to check out the headset code with Brian this evening in the lab and see if the output file bug has either disappeared or can be replicated.

Sounds like Deja Vu.

Adding custom speaker number and placement as per Dr. Kuber’s request.

Looks like the dot product should do the trick:

DotProduct
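The idea, sketched with made-up names: take the unit vector from the listener toward the virtual source, dot it against each speaker’s unit direction, and route to the speaker with the largest dot product (the caller still has to skip the subwoofer index).

	#include <vector>

	struct Vec2 { float x, y; };	// listener-centered unit directions in the horizontal plane

	// Return the index of the speaker whose direction best matches the source direction.
	int closestSpeaker(const Vec2& sourceDir, const std::vector<Vec2>& speakerDirs){
		int best = 0;
		float bestDot = -2.0f;		// dot product of unit vectors lies in [-1, 1]
		for(int i = 0; i < (int)speakerDirs.size(); ++i){
			float d = sourceDir.x*speakerDirs[i].x + sourceDir.y*speakerDirs[i].y;
			if(d > bestDot){
				bestDot = d;
				best = i;
			}
		}
		return best;
	}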

Done! With only a couple of string compare issues. I also had to make the speaker index jump around the subwoofer channel until I can work out how to set the EQ.

And it looks like there are bugs in the code. It seems that you cannot do zero-speed sessions. And the writing out of results with multiple sound files looks pretty confused. I’m not sure if extra CRs are being put in there or if some of the data isn’t being written out. Need to run some more examples.

What you get when you combine FLTK, OpenGL, DirectX, OpenHaptics and shared memory

Wow, the title sounds like a laundry list 🙂

Building a two-fingered gripper

Going to add a sound class to SimpleSphere so that we know what sounds are coming from what collision. Didn’t do that, but I’m associating the sounds by index, which is good enough for now.

Need to calculate individual forces for each sphere in the Phantom and return them. Done.

To keep the oscillations at a minimum, I’m passing the offsets from the origin. That way the loop uses the device position as the basis for calculations within the haptic loop.
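The shape of it, as a sketch (the names are mine, not the project’s): the simulation hands over offsets rather than absolute targets, and the haptic loop applies them against wherever the device currently is.

	// Hypothetical data written by the 100 Hz simulation loop.
	struct SimToHaptic {
		double offset[3];	// desired offset from the origin, not an absolute target position
		double stiffness;	// spring constant turning displacement into force
	};

	// Inside the 1 kHz haptic callback: the force is computed from the current
	// device position, so a stale absolute target can't yank the device around.
	void computeForce(const double devicePos[3], const SimToHaptic& in, double forceOut[3]){
		for(int i = 0; i < 3; ++i){
			forceOut[i] = -in.stiffness * (devicePos[i] - in.offset[i]);
		}
	}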
Here’s the result of today’s work:

Some Assembly Required

Integrating all the pieces into one test platform. The test could be to move a collection of physically based spheres (easy collision detection) from one area to another. Time would be recorded between a start and a stop signal (spacebar, something in the sim, etc.). Variations would be:

  • Open loop: Measure position and pressure, but no feedback
  • Force Feedback (Phantom) only
  • Vibrotactile feedback only
  • Both feedbacks

Probably only use two actuators, for the simplicity of the test rig. It would mean that I could use the laptop’s headphone output. Need to test this by wiring up the actuators to a micro stereo plug. Radio Shack tonight.
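For concreteness, the four conditions and a per-trial record boil down to something like this (a sketch, hypothetical names):

	#include <string>

	// The four feedback conditions listed above.
	enum FeedbackCondition {
		OPEN_LOOP,		// position and pressure measured, no feedback
		FORCE_ONLY,		// Phantom force feedback only
		VIBROTACTILE_ONLY,	// tactor feedback only
		BOTH			// force feedback plus vibrotactile
	};

	// Hypothetical per-trial record: the condition and the time between the
	// start and stop signals (spacebar, or an event in the sim).
	struct TrialResult {
		FeedbackCondition condition;
		double elapsedSeconds;
		std::string subjectId;
	};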

Got two-way communication running between Phantom and sim.

Have force magnitude adjusting a volume.
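The mapping is just a clamp (a sketch; MAX_FORCE_NEWTONS is a placeholder for whatever the device actually tops out at): normalize the force magnitude into [0, 1] and hand it to the audio layer as a gain.

	#include <cmath>

	const float MAX_FORCE_NEWTONS = 3.3f;	// placeholder value

	// Map the magnitude of the force vector to a 0..1 volume for the audio voice.
	float forceToVolume(const float force[3]){
		float mag = std::sqrt(force[0]*force[0] + force[1]*force[1] + force[2]*force[2]);
		float vol = mag / MAX_FORCE_NEWTONS;
		if(vol > 1.0f) vol = 1.0f;
		return vol;
	}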

Added a SimpleSphere class for most of the testing.

Random bits

I think I know what the vibroacoustic study should be. I put an actuator on the Phantom and drive wav files based on the material associated with the collision. I can use the built-in haptic pattern playback as a control. To make the wav files, it might be as simple as recording the word, or using a microphone to contact a material, move across it, and lift off (personally, I like this because it mimics what could be done with telepresence). Multiple sensor/actuator pairs can be used in a later study.
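Sketched out (with hypothetical names), the lookup is just material to wav file, triggered when the haptic proxy makes contact:

	#include <map>
	#include <string>

	void playWav(const std::string& file);	// assumed to exist in the audio layer

	// Hypothetical mapping from a material tag to the recorded wav that drives the actuator.
	std::map<std::string, std::string> materialSounds;

	void setupMaterialSounds(){
		materialSounds["sandpaper"] = "sandpaper_stroke.wav";	// contact, move across, lift off
		materialSounds["felt"]      = "felt_stroke.wav";
		materialSounds["metal"]     = "metal_stroke.wav";
	}

	// Called when the proxy collides with an object tagged with a material.
	void onCollision(const std::string& material){
		std::map<std::string, std::string>::iterator it = materialSounds.find(material);
		if(it != materialSounds.end()){
			playWav(it->second);
		}
	}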

Which means that I don’t actually need the Phidgets code in the new KF hand codebase. I’m going to include it anyway, simply because I’m so close and can use it later.

Come to think of it, I could put an actuator on a mouse as well and move over materials?

Tasks for today:

  • Finish getting the Phidgets code working in KF_Hand_3 – done
  • Start to add sound classes – done inasmuch as sounds are loaded and played using the library I wrote. More detail will come later.
  • Start to integrate the Phantom. Got HelloHapticDevice2 up and running again, as well as quite a few demos.

vector<tuple<float, float, string>>::iterator it = sourcePositions->begin();

C++ is like being in a candy store, full of a huge variety of bright, shiny treats that can blow your hand off if you don’t pay attention.

  • Finishing up adding multiple-sound capability per test attempt. Because I’ve been away from C++ for a while and I like to try new things, I poked around with tuples, which are kind of neat. Then I decided to put them into a vector and access them. That led to code like this:
    • vector<FOURF_SAMPLE_TUPLE>::iterator it = myVector.begin();
      while(it != myVector.end()){
      	float sourceX = get<EMITTER_X>(*it);
      	float sourceY = get<EMITTER_Y>(*it);
      	// ... use sourceX / sourceY here ...
      	++it;	// advance the iterator, or this loop never terminates
      }
    • That is *not* the most intuitive code I’ve seen. I mean, it makes sense, and given the limited set of overloadable characters, OK. But I think “[” and “]” would have been a better choice than “<” and “>”. (For comparison, a struct-based version is sketched after this list.)
  • Got the multi-sound playing and the results output to .csv. Next is to get the XML setup files running.
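For comparison, the struct-based version mentioned above (a sketch; the type and field names are mine) trades the get<INDEX>() calls for plain member access:

	#include <string>
	#include <vector>

	// Hypothetical replacement for the tuple: the same three fields, but named.
	struct EmitterSample {
		float x;
		float y;
		std::string soundFile;
	};

	void readSamples(const std::vector<EmitterSample>& samples){
		std::vector<EmitterSample>::const_iterator it = samples.begin();
		while(it != samples.end()){
			float sourceX = it->x;	// no get<EMITTER_X>(*it) needed
			float sourceY = it->y;
			// ... use sourceX / sourceY here ...
			++it;
		}
	}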