Category Archives: C++

WTF?!

Had an idea to fix a messy bug.
Searched for “MSVC shared memory”
Got a useful hit in the MSDN database.
Implemented in a test loop. Worked the first time.
Implemented in the project code. Worked the first time.

I think the world is about to end.

Shared Memories

Today’s job is to integrate the Phantom code into the simulation.

  • Code is in and compiling, but there are mysterious errors (error screenshots: HD_errors, HD_errors2).
  • I think I need a more robust startup. Looking at more examples…
  • Hmm. After looking at other examples, the HD_TIMER_ERROR problem appears to crop up for anything more than trivially complex. Since both programs seem to run just fine by themselves, I’m going to make two separate executables that communicate using Named Shared Memory. Uglier than I wanted, but not terrible.
  • Created a new project, KF_Phantom to hold the Phantom code
  • Stripped out all the Phantom (OpenHaptics) references from the KF_Virtual_Hand_3 project.
  • Added shared memory to KF_Phantom and tested it by creating a publisher and a subscriber class within the program. It all works inside the program. Next will be to add the class to the KF_VirtualHand project (same code, I’m guessing? Not sure if MSVC is smart enough to share). Then we can see if it works there. If it does, it’ll be time to start getting the full interaction running. And since the data transfer is essentially a memcpy, I can pass communication objects around. A sketch of the named-shared-memory plumbing follows this list.
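The named-shared-memory plumbing is all standard Win32. A minimal sketch of the publisher side looks something like this (the mapping name and the payload struct are placeholders, not the actual project types):

#include <windows.h>
#include <cstdio>
#include <cstring>

// Hypothetical payload shared between KF_Phantom (publisher) and the
// KF_Virtual_Hand side (subscriber); the real block would carry the
// Phantom/hand state being exchanged.
struct SharedBlock
{
    double position[3];
    double force[3];
};

static const char* kMappingName = "Local\\KF_PhantomSharedMem";

int main()
{
    // Create the named mapping, backed by the page file rather than a real file.
    HANDLE hMap = CreateFileMappingA(INVALID_HANDLE_VALUE, NULL, PAGE_READWRITE,
                                     0, static_cast<DWORD>(sizeof(SharedBlock)),
                                     kMappingName);
    if (hMap == NULL) { std::printf("CreateFileMapping failed\n"); return 1; }

    SharedBlock* block = static_cast<SharedBlock*>(
        MapViewOfFile(hMap, FILE_MAP_ALL_ACCESS, 0, 0, sizeof(SharedBlock)));
    if (block == NULL) { CloseHandle(hMap); return 1; }

    // Publishing really is just a memcpy into the mapped view.
    SharedBlock local = {};
    std::memcpy(block, &local, sizeof(SharedBlock));

    // The subscriber in the other process opens the same name with
    // OpenFileMappingA(FILE_MAP_READ, FALSE, kMappingName), maps a view,
    // and memcpys the struct back out.

    UnmapViewOfFile(block);
    CloseHandle(hMap);
    return 0;
}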

Dancing Phantoms

Spent most of the day trying to figure out how to deal with geometry that has to be available to both the haptic and graphics subsystems. The haptics subsystem has to run fast – about 1000 Hz – and gets its own callback-based loop from the HD haptic libraries. The graphics run as fast as they can, but they get bogged down.

So the idea for the day was to structure the code so that a stable geometry patch can be downloaded from the main system to the haptics subsystem. I’m thinking that they could be really simple, maybe just a plane and a concave/convex surface. I started by creating a BaseGeometryPatch class that takes care of all the basic setup and implements a sphere patch model. Other inheriting classes simply override the patchCalc() method and everything should work just fine.
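As a rough sketch of that structure (the names and the simple spring force model here are shorthand, not the actual OpenHaptics servo-loop code), the base class owns the setup and the sphere behavior, and an inheriting patch only replaces patchCalc():

#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };

class BaseGeometryPatch
{
public:
    BaseGeometryPatch(const Vec3& center, double radius, double stiffness)
        : center(center), radius(radius), stiffness(stiffness) {}
    virtual ~BaseGeometryPatch() {}

    // Called from the 1 kHz haptic callback with the current device position.
    Vec3 calcForce(const Vec3& devicePos) { return patchCalc(devicePos); }

    // The slower graphics/main loop pushes new patch positions through here.
    void setCenter(const Vec3& c) { center = c; }

protected:
    // Default behavior: a solid sphere patch. When the tip penetrates the
    // surface, push it back out along the radius with a spring force.
    virtual Vec3 patchCalc(const Vec3& p)
    {
        Vec3 d = { p.x - center.x, p.y - center.y, p.z - center.z };
        double dist = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
        if (dist >= radius || dist == 0.0)
            return Vec3{ 0.0, 0.0, 0.0 };            // not in contact
        double pen = radius - dist;                   // penetration depth
        double s = stiffness * pen / dist;            // scale the radial direction
        return Vec3{ d.x * s, d.y * s, d.z * s };
    }

    Vec3 center;
    double radius;
    double stiffness;
};

// An inheriting patch only overrides patchCalc(); a plane is the obvious next one.
class PlanePatch : public BaseGeometryPatch
{
public:
    PlanePatch(const Vec3& center, double stiffness)
        : BaseGeometryPatch(center, 0.0, stiffness) {}

protected:
    Vec3 patchCalc(const Vec3& p) override
    {
        // Treat the plane as y = center.y and push up when the tip dips below it.
        double pen = center.y - p.y;
        if (pen <= 0.0) return Vec3{ 0.0, 0.0, 0.0 };
        return Vec3{ 0.0, stiffness * pen, 0.0 };
    }
};

int main()
{
    BaseGeometryPatch sphere(Vec3{ 0.0, 0.0, 0.0 }, 20.0, 0.5);
    Vec3 f = sphere.calcForce(Vec3{ 0.0, 15.0, 0.0 });  // tip inside the sphere
    std::printf("force: %f %f %f\n", f.x, f.y, f.z);
    return 0;
}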

I also built a really simple test main loop that runs at various rates using Sleep(). The sphere is nice and stable regardless of the main loop update rate, though the transitions as the position is updated can be a little sudden. It may make sense to add some interpolation rather than just jumping to the next position. But it works. The next thing will be to make the sphere work as a convex shape by providing either a flag or using a negative length. Once that’s done (with a possible detour into interpolation), I’ll try adding it to the graphics code. In the meanwhile, here’s a video of a dancing Phantom for your viewing pleasure:
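Back to the interpolation thought: the minimal version I have in mind (purely hypothetical at this point, nothing in the code yet) is to ease the patch position toward each new target on every haptic tick instead of jumping:

#include <cstdio>

struct Vec3 { double x, y, z; };

// Linear interpolation between the current and target positions.
Vec3 lerp(const Vec3& a, const Vec3& b, double t)
{
    return Vec3{ a.x + (b.x - a.x) * t,
                 a.y + (b.y - a.y) * t,
                 a.z + (b.z - a.z) * t };
}

int main()
{
    Vec3 current = { 0.0, 0.0, 0.0 };
    Vec3 target  = { 10.0, 0.0, 0.0 };

    // Called once per 1 kHz haptic tick; an alpha around 0.05 trades a little
    // lag for a much smoother transition than snapping straight to the target.
    for (int tick = 0; tick < 5; ++tick)
    {
        current = lerp(current, target, 0.05);
        std::printf("tick %d: x = %f\n", tick, current.x);
    }
    return 0;
}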

Random bits

I think I know what the vibroacoustic study should be. I put an actuator on the Phantom and drive wav files based on the material associated with the collision. I can use the built-in haptic pattern playback as a control. To make the wav files, it might be as simple as recording the word, or using a microphone to contact a material, move across it, and lift off (personally, I like this because it mimics what could be done with telepresence). Multiple sensor/actuator pairs can be explored in a later study.

Which means that I don’t actually need the Phidgets code in the new KF hand codebase. I’m going to include it anyway, simply because I’m so close and can use it later.

Come to think of it, I could put an actuator on a mouse as well and move it over materials?

Tasks for today:

  • Finish getting the Phidgets code working in KF_Hand_3 – done
  • Start to add sound classes – done inasmuch as sounds are loaded and played using the library I wrote. More detail will come later.
  • Start to integrate Phantom. Got HelloHapticDevice2 up and running again, as well as quite a few demos

Moar Phidgeting

  • Brought in my fine collection of jumpers and connectors. Next time I won’t have to build a jumper cable…
  • Built the framework for the new hand test. The basic graphics are running
  • Added cube code to the FltkShaderSupport library. Here’s everything running (screenshot: KF_framework).
  • Next, I’m going to integrate the Phidget sensor code into the framework, then hook that up to sound code.
  • Had Dong register for Google’s Ingress, just to see what’s going on.
  • Loaded in the Phidgets example code; the library that works is the x86 one. Using the 64-bit library results in unresolved external symbol errors.
  • There are a lot of straight C examples. Just found the C++ class examples simple.h and simple.cpp.

I may be done coding this thing. Time for a pilot study.

I was going to have a demo for upper management of my company, but the VP of R&D got waylaid by travel issues and had to postpone. So that gave me a few additional hours to do XML parsing things and generally fix up the application. This is what a study looks like:

<?xml version="1.0" encoding="ISO-8859-1"?>
<Setup>
 <ResearcherName>Some Researcher</ResearcherName>
 <SubjectName>Some Graduate Student</SubjectName>
 <SubjectAge>23</SubjectAge>
 <SubjectGender>Male</SubjectGender>
 <Seed>0</Seed>
 <SpeedTests>3</SpeedTests>
 <AccuracyTests>3</AccuracyTests>
 <Sessions>2</Sessions>
 <SoundFile>C:/Wavs/heli.wav</SoundFile>
 <SoundFile>C:/monoExport.wav</SoundFile>
</Setup>

So now I have a release build that can create ad-hoc sessions or read in a test session from an XML file. The results from the test are output to a CSV file where the position of the source(s), the position of the cursor, the angle between them, and the time to act are all recorded. In addition, the number of speakers used is determined and the normalized volume sent to each channel is recorded. So yay.
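For reference, the angle/CSV bookkeeping amounts to something like the following (the names, file name, and column order are my shorthand here, not the project’s actual output format):

#include <cmath>
#include <fstream>

struct Vec3 { double x, y, z; };

// Angle between the source direction and the cursor direction, in radians.
double angleBetween(const Vec3& a, const Vec3& b)
{
    double dot = a.x * b.x + a.y * b.y + a.z * b.z;
    double la  = std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z);
    double lb  = std::sqrt(b.x * b.x + b.y * b.y + b.z * b.z);
    return std::acos(dot / (la * lb));
}

int main()
{
    std::ofstream csv("results.csv");            // hypothetical output name
    csv << "srcX,srcY,srcZ,curX,curY,curZ,angle,seconds\n";

    Vec3 source = { 1.0, 0.0, 0.0 };
    Vec3 cursor = { 0.7, 0.7, 0.0 };
    double secondsToAct = 2.5;

    // One row per trial: both positions, the angle between them, time to act.
    csv << source.x << ',' << source.y << ',' << source.z << ','
        << cursor.x << ',' << cursor.y << ',' << cursor.z << ','
        << angleBetween(source, cursor) << ',' << secondsToAct << "\n";
    return 0;
}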

I suppose that the next step with the code would be to set the speakers explicitly, rather than letting the API determine the volume, but I think that would be another study. And I should add comments before I forget why I wrote things the way I did (wstring wst(str.begin(), str.end());, I’m looking at you…)
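Since that line is exactly the kind of thing that needs a comment, here is the note-to-self version (a snippet with an assumed path; the trick only survives because the paths involved are plain ASCII):

#include <string>

int main()
{
    std::string str = "C:/Wavs/heli.wav";       // narrow path read from the XML
    // Copies each char into a wchar_t one-for-one, so it is only safe for
    // plain ASCII; multi-byte UTF-8 text would come through garbled. Handy
    // when a Windows API wants a wide string and the paths are simple.
    std::wstring wst(str.begin(), str.end());
    return 0;
}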

For the pilot study, I’m going to team up with Brian to go and find all the relevant previous work and set up the study. I’m pretty sure that we’re covered by one of Ravi’s IRBs, so I think that collecting data can begin as soon as a machine is set up in the lab that we can use.

And while that’s cooking, I can now use my shiny new audio library to hook up to the telepresence simulator. Woohoo!


Bells and Whistles. More like not-quite-essential functionality…

It was one of those days where work interfered with development. How inconvenient. I shake my fist at the power of the paycheck. Damn you!

  • Made sure that the telepresence demo was working. I need to get back to that!
  • Adding XML parsing of setup files.
  • Reading in files with newfangled std::ifstream. Fun!
  • Using rapidxml, which is working just fine, but suffering from cryptic documentation. I wasn’t sure how to get child nodes until I found this post. It looks like a small library of navigation functions might be useful. Considering there are rapidxml_iterators.hpp and rapidxml_utils.hpp, I’ll look there first. (A minimal child-node walk is sketched after this list.)
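For the record, the child-node walk boils down to something like this sketch against the setup file above (the file name here is an assumption, and this is not the project’s actual parser):

#include <fstream>
#include <iostream>
#include <iterator>
#include <vector>
#include "rapidxml.hpp"

int main()
{
    // rapidxml parses in place, so read the whole file into a writable,
    // null-terminated buffer first.
    std::ifstream in("setup.xml", std::ios::binary);   // hypothetical file name
    std::vector<char> buf((std::istreambuf_iterator<char>(in)),
                          std::istreambuf_iterator<char>());
    buf.push_back('\0');

    rapidxml::xml_document<> doc;
    doc.parse<0>(buf.data());

    rapidxml::xml_node<>* setup = doc.first_node("Setup");
    if (!setup) return 1;

    // Children of <Setup> are siblings of each other; repeated tags such as
    // <SoundFile> simply show up more than once in the walk.
    for (rapidxml::xml_node<>* node = setup->first_node();
         node != 0; node = node->next_sibling())
    {
        std::cout << node->name() << " = " << node->value() << "\n";
    }
    return 0;
}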