Category Archives: C++

And yet more…

Got the wiring cleaned up.

Integrating collision response with the targetSphere. The math is looking reasonably good.

Added multi-target capability.

Added adjustable sensitivity to the pressure sensors. Pushing directly on the speakers is causing artifacts. I think I need to build a small C-section angle bracket that decouples the squeezing force from the vibroacoustic feedback.

More refining.

Working on constraint code. Got the framework done, but didn’t have enough sleep to be able to do the math involved. So instead I…

Got the actuators mounted on the Phantom! Aside from having one of the force sensors break during mounting, it went pretty smoothly. I may have to adjust the sensitivity of the sensors so that you don’t have to press so hard on them. At the current setting the voice coils aren’t behaving at higher grip forces. But the ergonomics feel pretty good, so that’s nice.

[Image: IMG_2183]

What you get when you combine FLTK, OpenGL, DirectX, OpenHaptics and shared memory

Wow, the title sounds like a laundry list 🙂

Building a two-fingered gripper

Going to add a sound class to SimpleSphere so that we know what sounds are coming from what collision. Didn’t do that, but I’m associating the sounds by index, which is good enough for now.

Need to calculate individual forces for each sphere in the Phantom and return them. Done.
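
Roughly what that per-sphere calculation might look like, as a sketch; all of the names here are hypothetical, not the actual project code:

#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };

// Hypothetical per-sphere data handed to the haptic loop.
struct SphereData { Vec3 center; double radius; double stiffness; };

// Sum a simple linear-spring force for every sphere the device is
// penetrating. devicePos is read fresh from the Phantom each servo tick.
Vec3 sumSphereForces(const Vec3& devicePos, const std::vector<SphereData>& spheres) {
    Vec3 total{0.0, 0.0, 0.0};
    for (const auto& s : spheres) {
        Vec3 d{devicePos.x - s.center.x, devicePos.y - s.center.y, devicePos.z - s.center.z};
        double dist = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
        double penetration = s.radius - dist;
        if (penetration > 0.0 && dist > 1e-9) {
            // Push the device out along the contact normal.
            double k = s.stiffness * penetration / dist;
            total.x += d.x * k;
            total.y += d.y * k;
            total.z += d.z * k;
        }
    }
    return total;
}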

To keep the oscillations to a minimum, I’m passing the offsets from the origin. That way the haptic loop uses the device position as the basis for its calculations.
Here’s the result of today’s work:

Some Assembly Required

Integrating all the pieces into one test platform. The test could be to move a collection of physically-based spheres (easy collision detection) from one area to another. Time would be recorded between a start and a stop indication (spacebar, something in the sim, etc.). Variations would be:

  • Open loop: Measure position and pressure, but no feedback
  • Force Feedback (Phantom) only
  • Vibrotactile feedback only
  • Both feedbacks

Probably only use two actuators for the simplicity of the test rig. It would mean that I could use the laptop’s headphone output. Need to test this by wiring up the actuators to a micro stereo plug. Radio Shack tonight.
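
One way the feedback variations might be encoded in the test harness; just a sketch, none of these names come from the actual code:

// Hypothetical encoding of the four test variations listed above.
enum class FeedbackCondition { OpenLoop, ForceOnly, VibrotactileOnly, Both };

struct TrialConfig {
    FeedbackCondition condition = FeedbackCondition::OpenLoop;
    int numActuators = 2;   // two actuators -> plain stereo headphone output

    bool forceEnabled() const {
        return condition == FeedbackCondition::ForceOnly ||
               condition == FeedbackCondition::Both;
    }
    bool vibrotactileEnabled() const {
        return condition == FeedbackCondition::VibrotactileOnly ||
               condition == FeedbackCondition::Both;
    }
};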

Got two-way communication running between Phantom and sim.

Have force magnitude adjusting a volume.
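
Presumably the mapping is just a normalized magnitude; a sketch, where the full-scale force value is an assumption rather than the real setting:

#include <algorithm>
#include <cmath>

// Map the magnitude of the current force vector to a playback volume [0, 1].
// maxForceN is an assumed full-scale value, not the project's actual number.
double forceToVolume(double fx, double fy, double fz, double maxForceN = 3.0) {
    double magnitude = std::sqrt(fx * fx + fy * fy + fz * fz);
    return std::clamp(magnitude / maxForceN, 0.0, 1.0);
}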

Added a SimpleSphere class for most of the testing.

WTF?!

Had an idea to fix a messy bug.
Searched for “MSVC shared memory”.
Got a useful hit in the MSDN database.
Implemented it in a test loop. Worked the first time.
Implemented it in the project code. Worked the first time.

I think the world is about to end.

Shared Memories

Today’s job is to integrate the Phantom code into the simulation.

  • Code is in and compiling, but there are mysterious errors (screenshots: HD_errors, HD_errors2).
  • I think I need a more robust startup. Looking at more examples…
  • Hmm. After looking at other examples, the HD_TIMER_ERROR  problem appears to crop up for anything more than trivially complex. Since both programs seem to run just fine by themselves, I’m going to make two separate executables that communicate using Named Shared Memory. Uglier than I wanted, but not terrible.
  • Created a new project, KF_Phantom to hold the Phantom code
  • Stripped out all the Phantom (OpenHaptics) references from the KF_Virtual_Hand_3 project.
  • Added shared memory to KF_Phantom and tested it by creating a publisher and subscriber class within the program. It all works inside the program. Next will be to add the class to the KF_VirtualHand project (same code, I’m guessing? Not sure if MSVC is smart enough to share). Then we can see if it works there. If it does, then it’ll be time to start getting the full interaction running. And since the data transfer is essentially memcpy, I can pass communication objects around.
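
Named shared memory on Windows is the CreateFileMapping / MapViewOfFile pattern from MSDN. A stripped-down sketch of how the publisher/subscriber sides might look (the mapping name and block layout are placeholders, and error handling is minimal):

#include <windows.h>

// A trivially copyable block, since the transfer is essentially a memcpy.
struct SharedBlock {
    double devicePos[3];
    double force[3];
};

static const char* kMappingName = "Local\\KF_PhantomShared";  // placeholder name

// Publisher side (e.g. KF_Phantom): create the mapping and keep a view of it.
SharedBlock* createSharedBlock(HANDLE& hMap) {
    hMap = CreateFileMappingA(INVALID_HANDLE_VALUE, nullptr, PAGE_READWRITE,
                              0, sizeof(SharedBlock), kMappingName);
    if (!hMap) return nullptr;
    return static_cast<SharedBlock*>(
        MapViewOfFile(hMap, FILE_MAP_ALL_ACCESS, 0, 0, sizeof(SharedBlock)));
}

// Subscriber side (e.g. KF_Virtual_Hand_3): open the existing mapping by name.
SharedBlock* openSharedBlock(HANDLE& hMap) {
    hMap = OpenFileMappingA(FILE_MAP_ALL_ACCESS, FALSE, kMappingName);
    if (!hMap) return nullptr;
    return static_cast<SharedBlock*>(
        MapViewOfFile(hMap, FILE_MAP_ALL_ACCESS, 0, 0, sizeof(SharedBlock)));
}

// Cleanup for either side.
void closeSharedBlock(SharedBlock* block, HANDLE hMap) {
    if (block) UnmapViewOfFile(block);
    if (hMap) CloseHandle(hMap);
}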

Dancing Phantoms

Spent most of the day trying to figure out how to deal with geometry that has to be available to both the haptic and graphics subsystems. The haptics subsystem has to run fast – about 1000 Hz – and gets its own callback-based loop from the HD haptic libraries. The graphics run as fast as they can, but they get bogged down.

So the idea for the day was to structure the code so that a stable geometry patch can be downloaded from the main system to the haptics subsystem. I’m thinking that they could be really simple, maybe just a plane and a concave/convex surface. I started by creating a BaseGeometryPatch class that takes care of all the basic setup and implements a sphere patch model. Other inheriting classes simply override the patchCalc() method and everything should work just fine.
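
The class shape is roughly this (a sketch, not the actual code; only BaseGeometryPatch and patchCalc() are the real names, the rest of the interface is illustrative):

#include <cmath>

struct Vec3d { double x, y, z; };

class BaseGeometryPatch {
public:
    virtual ~BaseGeometryPatch() = default;

    // The slow graphics side occasionally downloads a new stable patch.
    void setPatch(const Vec3d& center, double radius, double stiffness) {
        center_ = center; radius_ = radius; stiffness_ = stiffness;
    }

    // The ~1000 Hz haptic callback calls this every servo tick.
    // Default behaviour: a sphere patch pushing the device outward.
    virtual Vec3d patchCalc(const Vec3d& devicePos) const {
        Vec3d d{devicePos.x - center_.x, devicePos.y - center_.y, devicePos.z - center_.z};
        double dist = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
        double pen = radius_ - dist;
        if (pen <= 0.0 || dist < 1e-9) return Vec3d{0.0, 0.0, 0.0};
        double k = stiffness_ * pen / dist;
        return Vec3d{d.x * k, d.y * k, d.z * k};
    }

protected:
    Vec3d center_{0.0, 0.0, 0.0};
    double radius_{1.0};
    double stiffness_{0.5};
};

// Inheriting classes just override patchCalc(), e.g. a flat floor patch.
class PlanePatch : public BaseGeometryPatch {
public:
    Vec3d patchCalc(const Vec3d& devicePos) const override {
        double pen = center_.y - devicePos.y;         // plane at y = center_.y
        if (pen <= 0.0) return Vec3d{0.0, 0.0, 0.0};
        return Vec3d{0.0, stiffness_ * pen, 0.0};     // push straight back up
    }
};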

I also built a really simple test main loop that runs at various rates using Sleep(). The sphere is nice and stable regardless of the main loop update rate, though the transitions as the position is updated can be a little sudden. It may make sense to add some interpolation rather than just jumping to the next position. But it works. The next thing will be to make the sphere work as a convex shape by providing either a flag or using a negative length. Once that’s done (with a possible detour into interpolation), I’ll try adding it to the graphics code. In the meanwhile, here’s a video of a dancing Phantom for your viewing pleasure:

Random bits

I think I know what the vibroacoustic study should be. I put an actuator on the Phantom and drive wav files based on the material associated with the collision. I can use the built-in haptic pattern playback as a control. To make the wav files, it might be as simple as recording the word, or using a microphone to contact a material, move across it, and lift off (personally, I like this because it mimics what could be done with telepresence). Multiple sensor/actuator pairs can be used in a later study.
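
The material-to-sound mapping could be as simple as a lookup table from material name to a recorded wav; a sketch (the materials and file paths here are made up):

#include <map>
#include <string>

// Hypothetical lookup: each material gets a recorded contact/move/lift wav.
std::map<std::string, std::string> materialWavs = {
    {"wood",  "C:/Wavs/wood_scrape.wav"},
    {"metal", "C:/Wavs/metal_scrape.wav"},
    {"cloth", "C:/Wavs/cloth_scrape.wav"}
};

// On a collision, pick the clip for the struck material (empty if unknown).
std::string wavForCollision(const std::string& material) {
    auto it = materialWavs.find(material);
    return it != materialWavs.end() ? it->second : std::string{};
}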

Which means that I don’t actually need the Phidgets code in the new KF hand codebase. I’m going to include it anyway, simply because I’m so close and can use it later.

Come to think of it, I could put an actuator on a mouse as well and move over materials?

Tasks for today:

  • Finish getting the Phidgets code working in KF_Hand_3 – done
  • Start to add sound classes – done inasmuch as sounds are loaded and played using the library I wrote. More detail will come later.
  • Start to integrate Phantom. Got HelloHapticDevice2 up and running again, as well as quite a few demos (startup sketch below).
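
For reference, the HelloHapticDevice-style startup looks roughly like this. It’s a sketch based on the standard HD examples, so the details should be checked against the HDAPI reference:

#include <HD/hd.h>
#include <cstdio>

// Servo-loop callback, scheduled by the HD scheduler at roughly 1 kHz.
HDCallbackCode HDCALLBACK servoLoop(void* /*userData*/) {
    hdBeginFrame(hdGetCurrentDevice());

    HDdouble pos[3];
    hdGetDoublev(HD_CURRENT_POSITION, pos);

    HDdouble force[3] = {0.0, 0.0, 0.0};    // force calculation goes here
    hdSetDoublev(HD_CURRENT_FORCE, force);

    hdEndFrame(hdGetCurrentDevice());
    return HD_CALLBACK_CONTINUE;
}

int main() {
    HHD hHD = hdInitDevice(HD_DEFAULT_DEVICE);
    if (hdGetError().errorCode != HD_SUCCESS) {
        std::fprintf(stderr, "Failed to initialize the haptic device\n");
        return 1;
    }
    hdEnable(HD_FORCE_OUTPUT);
    hdScheduleAsynchronous(servoLoop, nullptr, HD_DEFAULT_SCHEDULER_PRIORITY);
    hdStartScheduler();

    std::getchar();                          // run until Enter is pressed

    hdStopScheduler();
    hdDisableDevice(hHD);
    return 0;
}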

Moar Phidgeting

  • Brought in my fine collection of jumpers and connectors. Next time I won’t have to build a jumper cable…
  • Built the framework for the new hand test. The basic graphics are running
  • Added cube code to the FltkShaderSupport library. Here’s everything running (screenshot: KF_framework).
  • Next, I’m going to integrate the Phidget sensor code into the framework, then hook that up to the sound code (rough sketch after this list).
  • Had Dong register for Google’s Ingress, just to see what’s going on.
  • Loaded in the Phidgets example code; the library that works is the x86 library. Using the 64-bit library results in unresolved external symbol errors.
  • There are a lot of straight C examples. Just found the C++ class examples simple.h and simple.cpp.
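
The sensor-to-sound hookup I have in mind is shaped something like this. This is deliberately not the real Phidgets API (still sorting out x86 vs. 64-bit there), just a placeholder callback standing in for the sensor-change event:

#include <cstdio>
#include <functional>

// Placeholder stand-ins for the Phidgets sensor layer and my sound library;
// none of these names come from the actual libraries.
using SensorCallback = std::function<void(int sensorIndex, int rawValue)>;

struct SoundPlayer {
    void setVolume(int channel, double volume) {
        std::printf("channel %d -> volume %.2f\n", channel, volume);
    }
};

// Bridge a sensor-change event to a per-channel volume on the sound side.
// Phidget analog sensors report roughly 0-1000, hence the divisor.
SensorCallback makeSensorToSoundBridge(SoundPlayer& player) {
    return [&player](int sensorIndex, int rawValue) {
        double volume = rawValue / 1000.0;
        player.setVolume(sensorIndex, volume);
    };
}

int main() {
    SoundPlayer player;
    auto onSensorChange = makeSensorToSoundBridge(player);
    onSensorChange(0, 512);   // simulate a pressure-sensor change event
}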

I may be done coding this thing. Time for a pilot study.

I was going to have a demo for upper management of my company, but the VP of R&D got waylaid by travel issues and had to postpone. So that gave me a few additional hours to do XML parsing things and generally fix up the application. This is what a study looks like:

<?xml version="1.0" encoding="ISO-8859-1"?>
<Setup>
 <ResearcherName>Some Researcher</ResearcherName>
 <SubjectName>Some Graduate Student</SubjectName>
 <SubjectAge>23</SubjectAge>
 <SubjectGender>Male</SubjectGender>
 <Seed>0</Seed>
 <SpeedTests>3</SpeedTests>
 <AccuracyTests>3</AccuracyTests>
 <Sessions>2</Sessions>
 <SoundFile>C:/Wavs/heli.wav</SoundFile>
 <SoundFile>C:/monoExport.wav</SoundFile>
</Setup>

So now I have a release build that can create ad-hoc sessions or read in a test session from an XML file. The results from the test are output to a CSV file where the position of the source(s), the position of the cursor, the angle between them, and the time to act are all recorded. In addition, the number of speakers used is determined and the normalized volume sent to each channel is recorded. So yay.
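
The angle column is presumably just the angle between the source and cursor directions; something along these lines, assuming both are direction vectors from a common origin (not necessarily the exact code):

#include <cmath>

// Angle in degrees between two direction vectors (e.g. source vs. cursor),
// both assumed to be measured from the same origin.
double angleBetweenDeg(double ax, double ay, double az,
                       double bx, double by, double bz) {
    double dot  = ax * bx + ay * by + az * bz;
    double lenA = std::sqrt(ax * ax + ay * ay + az * az);
    double lenB = std::sqrt(bx * bx + by * by + bz * bz);
    if (lenA < 1e-12 || lenB < 1e-12) return 0.0;
    double c = dot / (lenA * lenB);
    if (c > 1.0) c = 1.0;       // guard against rounding outside [-1, 1]
    if (c < -1.0) c = -1.0;
    return std::acos(c) * 180.0 / 3.14159265358979323846;
}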

I suppose that the next step with the code would be to set the speakers explicitly, rather than letting the API determine the volume, but I think that would be another study. And I should add comments before I forget why I wrote things the way I did (wstring wst(str.begin(), str.end());, I’m looking at you…)
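
That wstring line is the quick-and-dirty narrow-to-wide conversion, and the comment it needs is basically this:

#include <string>

// Quick std::string -> std::wstring conversion. It just widens each char,
// so it is only correct for plain ASCII; multi-byte input (UTF-8, etc.)
// will be mangled. Fine for simple config strings, not for general text.
std::wstring toWide(const std::string& str) {
    return std::wstring(str.begin(), str.end());
}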

For the pilot study, I’m going to team up with Brian to go and find all the relevant previous work and set up the study. I’m pretty sure that we’re covered by one of Ravi’s IRBs, so I think that collecting data can begin as soon as a machine is set up in the lab that we can use.

And while that’s cooking, I can now use my shiny new audio library to hook up to the telepresence simulator. Woohoo!