
Proof of concept!

For the first time, all the important pieces are working together.

I added force interaction between the gripper and the target sphere, then worked out the ratio calculation. If the sum of the force magnitudes on the target is greater than zero, the ratio is the magnitude of the summed force vectors divided by the sum of the individual magnitudes. A ratio of 1.0 means the contact is unopposed (it is also the default value when there is no contact at all); a ratio of 0.0 means a perfectly opposing contact, since equal and opposite forces sum to zero. As currently implemented, if the ratio is less than 0.5, the position of the target sphere is set to the point that lies between the two grippers; otherwise the summed contact force vector is applied to the target.
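As a quick worked example of the ratio (a standalone sketch, not the project code; the real calculation is in TargetSphere::environmentCalc() below):

#include <math.h>
#include <stdio.h>

int main(){
	// Two equal-and-opposite 1 N gripper forces: the vector sum is zero,
	// so ratio = |f1 + f2| / (|f1| + |f2|) = 0.0, a solid capture.
	// A single unopposed contact would give ratio = 1.0.
	double f1[3] = {  1.0, 0.0, 0.0 };
	double f2[3] = { -1.0, 0.0, 0.0 };
	double sumVec[3];
	double sumMag = 0.0, ratio = 1.0;
	for(int i = 0; i < 3; ++i){
		sumVec[i] = f1[i] + f2[i];
	}
	sumMag = sqrt(f1[0]*f1[0] + f1[1]*f1[1] + f1[2]*f1[2])
	       + sqrt(f2[0]*f2[0] + f2[1]*f2[1] + f2[2]*f2[2]);
	if(sumMag > 0){
		ratio = sqrt(sumVec[0]*sumVec[0] + sumVec[1]*sumVec[1]
			+ sumVec[2]*sumVec[2]) / sumMag;
	}
	printf("ratio = %.2f\n", ratio);	// prints "ratio = 0.00"
	return 0;
}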

After firing up the sim and trying it, I found that the sense of “capture” works well and feels intuitive.

  • Need to add walls around the work environment so that the targets can’t get out of reach.
  • Need to add the metal standoffs for the speakers. I was thinking about ordering some box tubing, but the sizes weren’t optimal. I’ll bend up some metal tonight.
  • Need to start adding in the code that will support the experiments (one possible encoding of the feedback options is sketched after this list):
    • Task code
    • Feedback options
      • Position and pressure only
      • Position, force feedback and pressure
      • Position, pressure and vibration
      • Position, force feedback, pressure and vibration
  • I also need to extend the audio feedback so that arbitrary speakers can be used and positioned without adhering to Dolby positioning rules. I’ll get back to that after getting the metal speaker supports attached to the interface.
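One possible encoding of those feedback options, sketched as bit flags (the names are mine, not from the project):

enum FeedbackFlags {
	POSITION  = 1 << 0,
	PRESSURE  = 1 << 1,
	FORCE_FB  = 1 << 2,
	VIBRATION = 1 << 3
};

// The four experimental conditions from the list above:
const int CONDITIONS[4] = {
	POSITION | PRESSURE,
	POSITION | FORCE_FB | PRESSURE,
	POSITION | PRESSURE | VIBRATION,
	POSITION | FORCE_FB | PRESSURE | VIBRATION
};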

Here’s the code that matters. First, the haptic code, then the sim code. In both cases, simToPhantom and phantomToSim are the structs that are used by the shared memory system:

HDCallbackCode BaseGeometryPatch::patchCalc(){
	HDErrorInfo error;
	hduVector3Dd forceVec;
	hduVector3Dd targForceVec;
	hduVector3Dd loopForceVec;
	hduVector3Dd patchMinusDeviceVec;
	hduVector3Dd sensorVec;
	double sensorRadius = 1.0;

	if(simToPhantom == NULL){
		return HD_CALLBACK_CONTINUE;
	}

	HHD hHD = hdGetCurrentDevice();

	/* Begin haptics frame.  ( In general, all state-related haptics calls
		should be made within a frame. ) */
	hdBeginFrame(hHD);

	/* Get the current position, gimbal angles, and transform of the device. */
	hdGetDoublev(HD_CURRENT_POSITION, devicePos);
	hdGetDoublev(HD_CURRENT_GIMBAL_ANGLES, deviceAngle);
	hdGetDoublev(HD_CURRENT_TRANSFORM, transformMat);

	forceVec[0] = 0;
	forceVec[1] = 0;
	forceVec[2] = 0;

	for(int targi = 0; targi < SimToPhantom::NUM_TARGETS; ++targi){
		patchPos[0] = simToPhantom->targetX[targi];
		patchPos[1] = simToPhantom->targetY[targi];
		patchPos[2] = simToPhantom->targetZ[targi];
		patchRadius = simToPhantom->targetRadius[targi];

		targForceVec[0] = 0;
		targForceVec[1] = 0;
		targForceVec[2] = 0;
		for(int sensi = 0; sensi < SimToPhantom::NUM_SENSORS; ++sensi){
			loopForceVec[0] = 0;
			loopForceVec[1] = 0;
			loopForceVec[2] = 0;

			/* Skip sensor slots that haven't been populated (an X offset
				of exactly 0.0 is treated as the "unused" sentinel). */
			if(simToPhantom->sensorX[sensi] == 0.0){
				continue;
			}

			sensorRadius = simToPhantom->sensorRadius[sensi];
			sensorVec[0] = simToPhantom->sensorX[sensi] + devicePos[0];
			sensorVec[1] = simToPhantom->sensorY[sensi] + devicePos[1];
			sensorVec[2] = simToPhantom->sensorZ[sensi] + devicePos[2];

			/* >  patchMinusDeviceVec = patchPos - sensorVec  < 
				Create a vector from the sensor position towards the sphere's center. */
			hduVecSubtract(patchMinusDeviceVec, patchPos, sensorVec);
			hduVector3Dd dirVec;
			hduVecNormalize(dirVec, patchMinusDeviceVec);

			/* If the sensor position has penetrated the sphere's surface,
				apply a spring forceVec towards the surface.  The forceVec
				calculation differs from a traditional gravitational body in
				that the closer the device is to the center, the less forceVec
				the well exerts; the device behaves as if a spring were
				connected between itself and the well's center. */
			double penetrationDist = patchMinusDeviceVec.magnitude() - (patchRadius+sensorRadius);
			if(penetrationDist < 0)
			{
				/* >  F = k * x  < 
					F: forceVec in Newtons (N)
					k: Stiffness of the well (N/mm)
					x: Vector from the device endpoint devicePos to the center 
					of the well. */
				hduVecScale(dirVec, dirVec, penetrationDist);
				hduVecScale(loopForceVec, dirVec, stiffnessK);
			}

			if(phantomToSim != NULL){
				phantomToSim->forceMagnitude[sensi] = hduVecMagnitude(loopForceVec);
				phantomToSim->forceVec[sensi][0] = loopForceVec[0];
				phantomToSim->forceVec[sensi][1] = loopForceVec[1];
				phantomToSim->forceVec[sensi][2] = loopForceVec[2];

				phantomToSim->targForceMagnitude[targi][sensi] = hduVecMagnitude(loopForceVec);
				phantomToSim->targForceVec[targi][sensi][0] = loopForceVec[0];
				phantomToSim->targForceVec[targi][sensi][1] = loopForceVec[1];
				phantomToSim->targForceVec[targi][sensi][2] = loopForceVec[2];
			}
			hduVecAdd(forceVec, forceVec, loopForceVec);
			hduVecAdd(targForceVec, targForceVec, loopForceVec);
		}
		if(phantomToSim != NULL){
			phantomToSim->targForcesMagnitude[targi] = hduVecMagnitude(targForceVec);
		}
	}

	// TODO: divide forceVec by the number of contacts that contributed to it?

	/* Send the forceVec to the device. */
	hdSetDoublev(HD_CURRENT_FORCE, forceVec);

	/* End haptics frame. */
	hdEndFrame(hHD);

	/* Check for errors and abort the callback if a scheduler error
	   is detected. */
	if (HD_DEVICE_ERROR(error = hdGetError()))
	{
		hduPrintError(stderr, &error, "BaseGeometryPatch::patchCalc():\n");

		if (hduIsSchedulerError(&error))
		{
			return HD_CALLBACK_DONE;
		}
	}

	if(phantomToSim != NULL){
		for(int i = 0; i < 16; ++i){
			phantomToSim->matrix[i] = transformMat[i];
		}
	}

	/* Signify that the callback should continue running, i.e. that
	   it will be called again the next scheduler tick. */
	return HD_CALLBACK_CONTINUE;
}

Next, the Sim code:

void TargetSphere::environmentCalc(){
	// Bail out until the shared-memory block has been mapped.
	if(phantomToSim == NULL){
		return;
	}

	double forces = phantomToSim->targForcesMagnitude[targIndex];
	double sumVec[3];
	double sumMag = 0;
	for(int i = 0; i < 3; ++i){
		constraintAnchorPos[i] = 0;
		sumVec[i] = 0;
	}
	for(int i = 0; i < SimToPhantom::NUM_SENSORS; ++i){
		constraintAnchorPos[0] += sensorPos[i][0];
		constraintAnchorPos[1] += sensorPos[i][1];
		constraintAnchorPos[2] += sensorPos[i][2];

		double force = phantomToSim->targForceMagnitude[targIndex][i];
		sumMag += force;
		double forceVec[3];
		forceVec[0] = phantomToSim->targForceVec[targIndex][i][0];
		forceVec[1] = phantomToSim->targForceVec[targIndex][i][1];
		forceVec[2] = phantomToSim->targForceVec[targIndex][i][2];
		sumVec[0] += forceVec[0];
		sumVec[1] += forceVec[1];
		sumVec[2] += forceVec[2];

		//Dprint::add("targ[%d] sens[%d] forceVec = (%.2f., %.2f, %.2f) force = %.2f, forces = %.2f",
		//	targIndex, i, forceVec[0], forceVec[1], forceVec[2], force, forces);
	}

	constraintAnchorPos[0] = constraintAnchorPos[0]/SimToPhantom::NUM_SENSORS;
	constraintAnchorPos[1] = constraintAnchorPos[1]/SimToPhantom::NUM_SENSORS;
	constraintAnchorPos[2] = constraintAnchorPos[2]/SimToPhantom::NUM_SENSORS;

	//double mag = m3dGetMagnitude3(sumVec);
	//Dprint::add("sumVec (%.2f., %.2f, %.2f) Mag = %.2f, sumForce = %.2f", sumVec[0], sumVec[1], sumVec[2], mag, forces);
	double ratio = 1.0;
	if(sumMag > 0){
		ratio = forces/sumMag;
		if(ratio > 0.5){
			double velocityScalar = 1.0; // should be time-based
			for(int i = 0; i < 3; ++i){
				velocityVec[i] += (-sumVec[i])*velocityScalar;
				position[i] += velocityVec[i];
				// Damp each component toward zero, handling both signs so
				// negative velocities decay instead of being zeroed outright.
				if(velocityVec[i] > 0.1){
					velocityVec[i] -= 0.1;
				}else if(velocityVec[i] < -0.1){
					velocityVec[i] += 0.1;
				}else{
					velocityVec[i] = 0;
				}
			}
		}else{
			for(int i = 0; i < 3; ++i){
				velocityVec[i] = 0;
				position[i] = constraintAnchorPos[i];
			}
		}
	}
	Dprint::add("sumMag = %.2f, sumForce = %.2f, ratio = %.2f", sumMag, forces, ratio);

}

Shared Memories

Today’s job is to integrate the Phantom code into the simulation.

  • Code is in and compiling, but there are mysterious errors (screenshots: HD_errors, HD_errors2).
  • I think I need a more robust startup. Looking at more examples…
  • Hmm. After looking at other examples, the HD_TIMER_ERROR problem appears to crop up for anything more than trivially complex. Since both programs seem to run just fine by themselves, I’m going to make two separate executables that communicate using Named Shared Memory. Uglier than I wanted, but not terrible.
  • Created a new project, KF_Phantom to hold the Phantom code
  • Stripped out all the Phantom (OpenHaptics) references from the KF_Virtual_Hand_3 project.
  • Added shared memory to KF_Phantom and tested it by creating a publisher and a subscriber class within the program. It all works inside the program. Next will be to add the class to the KF_VirtualHand project (same code, I’m guessing? Not sure if MSVC is smart enough to share) and see if it works there. If it does, it’ll be time to start getting the full interaction running. And since the data transfer is essentially a memcpy, I can pass communication objects around.
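For reference, here’s a minimal sketch of the kind of named-shared-memory bridge I mean (the mapping name and the helper are illustrative, not the actual KF_Phantom code):

#include <windows.h>

// Map a named shared block of POD type T. One process creates the mapping
// (the publisher), the other opens it (the subscriber); both then see the
// same bytes, so an update really is just a memcpy.
template <typename T>
T* openSharedBlock(const char* name, bool isPublisher){
	HANDLE hMap = isPublisher
		? CreateFileMappingA(INVALID_HANDLE_VALUE, NULL, PAGE_READWRITE,
			0, (DWORD)sizeof(T), name)
		: OpenFileMappingA(FILE_MAP_ALL_ACCESS, FALSE, name);
	if(hMap == NULL){
		return NULL;
	}
	return (T*)MapViewOfFile(hMap, FILE_MAP_ALL_ACCESS, 0, 0, sizeof(T));
}

// Usage on either side, given the shared struct definition:
//   SimToPhantom* stp = openSharedBlock<SimToPhantom>("Local\\KF_SimToPhantom", true);

Because the block is plain memory there’s no serialization step; both executables just have to agree on the struct layout.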

Dancing Phantoms

Spent most of the day trying to figure out how to deal with geometry that has to be available to both the haptic and graphics subsystems. The haptics subsystem has to run fast (about 1,000 Hz) and gets its own callback-based loop from the HD haptic libraries. The graphics run as fast as they can, but they get bogged down.

So the idea for the day was to structure the code so that a stable geometry patch can be downloaded from the main system to the haptics subsystem. I’m thinking that the patches could be really simple, maybe just a plane and a concave/convex surface. I started by creating a BaseGeometryPatch class that takes care of all the basic setup and implements a sphere patch model. Other inheriting classes simply override the patchCalc() method, and everything should work just fine.
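In outline, the inheritance scheme looks something like this (a sketch rather than the actual class, and ConvexSpherePatch is a hypothetical example):

#include <HD/hd.h>

// The base class owns device setup and the ~1,000 Hz servo-loop callback;
// by default patchCalc() implements the sphere patch model.
class BaseGeometryPatch {
public:
	virtual ~BaseGeometryPatch() {}
	virtual HDCallbackCode patchCalc(){
		return HD_CALLBACK_CONTINUE;	// sphere force model lives here
	}
protected:
	double stiffnessK;	// spring constant shared by the patches
};

// Inheriting patches replace only the force calculation.
class ConvexSpherePatch : public BaseGeometryPatch {
public:
	virtual HDCallbackCode patchCalc(){
		return HD_CALLBACK_CONTINUE;	// e.g. flip the penetration test
	}
};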

I also built a really simple test main loop that runs at various rates using Sleep(). The sphere is nice and stable regardless of the main-loop update rate, though the transitions as the position is updated can be a little sudden. It may make sense to add some interpolation rather than just jumping to the next position (a sketch of what I mean is below). But it works. The next thing will be to make the sphere work as a convex shape by providing either a flag or using a negative length. Once that’s done (with a possible detour into interpolation), I’ll try adding it to the graphics code. In the meanwhile, here’s a video of a dancing Phantom for your viewing pleasure:

[video: dancing Phantom]
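About the interpolation idea, here’s a minimal sketch of what I have in mind: ease toward each newly downloaded position instead of snapping (the names and the alpha value are illustrative):

// Called once per servo tick: move the haptic patch a fraction of the
// remaining distance toward the position last written by the main loop.
static void easeToward(double patchPos[3], const double newPatchPos[3]){
	const double alpha = 0.05;	// fraction of the remaining gap per tick
	for(int i = 0; i < 3; ++i){
		patchPos[i] += alpha * (newPatchPos[i] - patchPos[i]);
	}
}

At 1,000 Hz even a small alpha converges in a few milliseconds, so the patch stays responsive no matter how slowly the main loop updates.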

Random bits

I think I know what the vibroacoustic study should be: I put an actuator on the Phantom and drive WAV files based on the material associated with the collision. I can use the built-in haptic pattern playback as a control. To make the WAV files, it might be as simple as recording the word, or using a microphone to contact a material, move across it, and lift off (personally, I like this because it mimics what could be done with telepresence). Multiple sensor/actuator pairs can be used in a later study.

Which means that I don’t actually need the Phidgets code in the new KF hand codebase. I’m going to include it anyway, simply because I’m so close and can use it later.

Come to think of it, I could put an actuator on a mouse as well and move it over materials?

Tasks for today:

  • Finish getting the Phidgets code working in KF_Hand_3 – done
  • Start to add sound classes – done inasmuch as sounds are loaded and played using the library I wrote. More detail will come later.
  • Start to integrate Phantom – got HelloHapticDevice2 up and running again, as well as quite a few demos.

Pilot Study and Telepresence

Brian came over last night and we were able to load up his laptop with the drivers and software and run examples. At this point, we’re looking at three things to study with the rig:

  1. What are the best frequencies for spatial orientation (position and distance) using these actuators?
  2. What happens to speed and accuracy when there is more than one signal?
  3. Do 7 actuators work better than 4?

We’re in the process now of writing up the test plan. In the meantime, I’m now going to try to adapt the Audio2X software to replace the synthesizers and use a Phidgets 1019 to replace the Arduino of the previous telepresence test rig. Once that’s done, I’ll add in the Phantom. For reference, here’s a video of the first version running:

[video: first version of the telepresence rig]

And here’s a picture of the latest interface that will be attached to the Phantom:

[photo: IMG_1444]

To move this along, I’ve brought a small pile of electronics in from home. Tomorrow’s goal is to make sure that I can connect to the interface board. Once that’s working, I need to hook up the sensors from the old prototype (note! Bring in crimping pins for the female DB15 connector!).

 

FLTK and FLUID. Simple, Good Stuff

  • Starting to put together the actual test framework. Found a good open-source synthesizer (ZynAddSubFX) that I used to create a pure tone, which I then cut down to one second with Audacity. It’s important to note that this app only works with MONO sounds.
  • Building up the class that will handle running multiple sessions.
  • Just a quick shout-out to FLTK. I have been adding and adjusting the GUI all day long as I figure out how to run the tests. To add fields, adjust positions, and just generally futz around, all I have to do is use the FLUID GUI IDE, export the layout as C++ code, add in stdafx.h, and compile (the basic round trip is sketched below). It all just works. A great piece of code. [screenshot: FLTK_rocks]
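The whole round trip is small enough to show (a sketch; it assumes FLUID exported a make_window() function into TestGui.h/TestGui.cxx, which are illustrative names):

#include "stdafx.h"
#include <FL/Fl.H>
#include <FL/Fl_Window.H>
#include "TestGui.h"

int main(int argc, char** argv){
	Fl_Window* win = make_window();	// generated by FLUID from the .fl file
	win->show(argc, argv);
	return Fl::run();	// hand control to the FLTK event loop
}

Re-exporting from FLUID just regenerates TestGui.cxx; nothing else changes.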

Let’s make science!

Alright, so I now have my audio library. Next on the agenda is a test program that measures users’ reactions to vibroacoustic input. The test needs to present randomized stimuli to users, so that they can be tested for:

  • Time to respond with a direction
  • Accuracy of direction
  • Efficacy of stimuli

Since this is probably going to be within subjects (multiple stimuli) and also between subjects (the same tests on multiple users), we’ll want to present the same stimulus sequence each time, so we’ll need to seed the random number generator.

  • Start with the default random number generator, but maybe run it through a wrapper class in case we need something like a Mersenne Twister (see the sketch after this list).
  • Xml file to specify the input and output of the experiment. This library looks reasonable. (A sample input file is sketched after this list.)
    • Input
      • Sounds to use (random distribution of sound use)
      • Test type (Accuracy, Speed, or both)
      • Attempts per test
      • Number of tests (must be even)
      • Random seed
      • min/max delay between test segments
      • output filename
    • Output
      • Test UID
      • Date
      • Time
      • Subject
      • Researcher
      • Free form note field (1024 characters?)
      • Accuracy or Reaction time test
      • Audio configuration
      • Random seed
      • Calibration results
        • Time(s) to click in response to visual cue
        • Time(s) to click in response to audio cue
      • For each played sound
        • Sequence x of total
        • Audio file(s) used (WAV)
        • Audio source position (x, y) in screen coordinates from the origin, where the user’s head is
        • Audio playback matrix (actual speaker relative volume)
        • Time to click after play start
        • Duration of sound
        • Click position (x, y) in screen coordinates from the origin, where the sound is perceived to have come from
  • App
    • Text
      • File navigator for xml file
    • Calibrate
      • Runs a sequence of tests where the user has to click the mouse as quickly as possible in response to the canvas flashing white, and then all(?) speakers in the headpiece playing the calibration sound
      • Calibration cues have a randomly determined timing between X and Y seconds
      • Test is disabled until calibration is run. Loading a new xml document effectively resets the system, requiring a new calibration sequence
    • Test
      • Shows a label that says either “Accuracy” or “Speed” based on which test is being run. We could change the background of the display as well?
      • The graphics screen shows a circular cursor that resets to the center of the screen at the beginning of each section. Once the audio cue plays, the user can move the mouse away from the center, towards the direction of the sound. The cursor’s motion is clamped so that the result is always a valid angle, as long as the user moves it far enough away from the center (TBD). Clicking the mouse stops the clock and resets the cursor.
      • If this is not the last test segment, then a random time period between X and Y seconds elapses before the next test is run.
      • Once a test completes, the system checks whether it was the last one. If not, a stochastic choice determines whether the next test should be speed or accuracy; by the time all tests have run, the number of speed and accuracy runs will be equal (the seeded-shuffle sketch after this list is one way to guarantee that).
    • Output file is appended throughout the test (open, write, close? Or read in the DOM, update and write out?)
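Here’s a minimal sketch of the seeded RNG wrapper and the balanced speed/accuracy schedule mentioned above (names are illustrative; C++11’s <random> already ships a Mersenne Twister, so the wrapper mostly buys us one place to swap engines later):

#include <algorithm>
#include <random>
#include <vector>

// Thin wrapper so the engine can be swapped without touching callers.
class SeededRng {
public:
	explicit SeededRng(unsigned int seed) : engine(seed) {}
	int nextInt(int lo, int hi){	// uniform integer in [lo, hi]
		return std::uniform_int_distribution<int>(lo, hi)(engine);
	}
	std::mt19937& raw(){ return engine; }	// for std::shuffle et al.
private:
	std::mt19937 engine;	// Mersenne Twister
};

enum TestType { SPEED, ACCURACY };

// Build a schedule with exactly half speed and half accuracy runs, in a
// random order that is reproducible from the seed. numTests must be even.
std::vector<TestType> makeSchedule(int numTests, SeededRng& rng){
	std::vector<TestType> schedule;
	for(int i = 0; i < numTests / 2; ++i){
		schedule.push_back(SPEED);
		schedule.push_back(ACCURACY);
	}
	std::shuffle(schedule.begin(), schedule.end(), rng.raw());
	return schedule;
}

// Usage: SeededRng rng(12345); makeSchedule(8, rng);

Shuffling a pre-balanced vector guarantees the equal-count property, which a per-test coin flip would not.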
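And a sample of what the input side of the XML file might look like, built from the fields in the list above (element names are placeholders until the format is pinned down):

<experiment>
	<input>
		<sounds randomDistribution="true">
			<sound file="tone_1s.wav"/>
		</sounds>
		<testType>Both</testType>	<!-- Accuracy, Speed, or Both -->
		<attemptsPerTest>10</attemptsPerTest>
		<numberOfTests>8</numberOfTests>	<!-- must be even -->
		<randomSeed>12345</randomSeed>
		<segmentDelay min="1.0" max="3.0"/>	<!-- seconds -->
		<outputFilename>results.xml</outputFilename>
	</input>
</experiment>

The output section would mirror the output list above, with one element per played sound appended as the test runs.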

Started the FLTK wrapper, and probably saved a good deal of time by going back to Erco’s FLTK page and the associated videos.