Category Archives: C++

Bells and Whistles. More like not-quite-essential functionality…

It was one of those days where work interfered with development. How inconvenient. I shake my fist at the power of the paycheck. Damn you!

  • Made sure that the telepresence demo was working. I need to get back to that!
  • Adding XML parsing of setup files.
  • Reading in files with newfangled std::ifstream. Fun!
  • Using rapidxml, which is working just fine but suffering from cryptic documentation. I wasn’t sure how to get child nodes until I found this post (a sketch of the navigation calls is below). It looks like a small library of navigation functions might be useful. Considering there are rapidxml_iterators.hpp and rapidxml_utils.hpp, I’ll look there first.
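
A minimal sketch of the navigation calls, so future-me doesn’t have to dig again. The file name ("setup.xml") and element name ("setup") are placeholders, and loading through rapidxml_utils.hpp is just one option:

    #include <iostream>
    #include "rapidxml.hpp"
    #include "rapidxml_utils.hpp"   // rapidxml::file<> does the loading

    int main() {
        rapidxml::file<> xmlFile("setup.xml");   // reads the whole file into a modifiable buffer
        rapidxml::xml_document<> doc;
        doc.parse<0>(xmlFile.data());            // rapidxml parses in place

        // Navigation is first_node() / next_sibling(); the name argument is an optional filter
        rapidxml::xml_node<>* root = doc.first_node("setup");
        if (root) {
            for (rapidxml::xml_node<>* child = root->first_node();
                 child != 0;
                 child = child->next_sibling()) {
                std::cout << child->name() << " = " << child->value() << "\n";
            }
        }
        return 0;
    }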

On the plus side, my house didn’t explode

Not much to report today; the gas people were running new lines to my house. It seems they do that once a century, whether it needs it or not. With all the hardware at work, developing at home is pretty much a no-go.

  • Did verify that the release builds work with the end-user DirectX install
  • Added text to emitter position on the graphics screen
  • Changed the timer to std::chrono::high_resolution_clock (check the bottom of this post for an example)
  • It looks like MSVC 2010 doesn’t have <chrono>. Made do with clock_t and clock() instead.
  • Added XML output for the test setup. Why does ctime() return a string with a trailing line feed!?
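
For the record, a minimal sketch of both timing approaches, with the measured work elided. On MSVC 2010 only the clock() half applies, since <chrono> isn’t there:

    #include <chrono>
    #include <ctime>
    #include <iostream>

    int main() {
        // C++11 path: high_resolution_clock is the finest tick the implementation offers
        auto start = std::chrono::high_resolution_clock::now();
        // ... run the test segment ...
        auto stop = std::chrono::high_resolution_clock::now();
        double chronoMs =
            std::chrono::duration_cast<std::chrono::duration<double, std::milli> >(stop - start).count();

        // MSVC 2010 fallback: clock() only promises CLOCKS_PER_SEC resolution
        clock_t cstart = clock();
        // ... run the test segment ...
        double clockMs = 1000.0 * (clock() - cstart) / CLOCKS_PER_SEC;

        std::cout << chronoMs << " ms (chrono), " << clockMs << " ms (clock)\n";
        return 0;
    }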

vector<tuple<float, float, string>>::iterator it = sourcePositions->begin();

C++ is like being in a candy store, full of a huge variety of bright, shiny treats that can blow your hand off if you don’t pay attention.

  • Finishing up adding multiple-sound capability per test attempt. Because I’ve been away from C++ for a while and I like to try new things, I poked around with tuples, which are kind of neat. Then I decided to put them into a vector and access them (the definitions behind this are sketched after this list). That led to code like this:
    • vector<FOURF_SAMPLE_TUPLE>::iterator it = myVector.begin();
      while(it != myVector.end()){
      	float sourceX = get<EMITTER_X>(*it);
      	float sourceY = get<EMITTER_Y>(*it);
      	++it;	// advance the iterator; without this the loop never terminates
      }
    • That is *not* the most intuitive code I’ve seen. I mean, it makes sense, and given the limited set of overloadable characters, ok. But I think “[” and “]” would have been a better choice than “<” and “>”.
  • Got the multi-sound playback working and the results output to .csv. Next is to get the XML setup files running.
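
For completeness, a guess at the definitions behind that snippet; the real typedef and index names aren’t shown above, so these are placeholders:

    #include <iostream>
    #include <string>
    #include <tuple>
    #include <vector>
    using namespace std;

    // Guessed typedef and index names, modeled on the loop above
    typedef tuple<float, float, string> FOURF_SAMPLE_TUPLE;
    enum { EMITTER_X = 0, EMITTER_Y = 1, SOUND_FILE = 2 };

    int main() {
        vector<FOURF_SAMPLE_TUPLE> myVector;
        myVector.push_back(make_tuple(1.0f, 2.0f, string("tone.wav")));

        // get<> needs a compile-time index, hence the angle brackets
        for (vector<FOURF_SAMPLE_TUPLE>::iterator it = myVector.begin(); it != myVector.end(); ++it) {
            cout << get<EMITTER_X>(*it) << ", " << get<EMITTER_Y>(*it)
                 << " from " << get<SOUND_FILE>(*it) << "\n";
        }
        return 0;
    }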

Pulling everything apart and putting it back together

  • Adding multiple sound playback
    • Rework the output to handle multiple sounds. This means one TestResult per sound. However, the result cannot be associated with a particular sound, so for each release, all of the emitter sources will have to be included. Later analysis can be used to determine the best fit. Note also that the number of attempts may be greater or less than the number of emitters.
  • Need to use the XML to write out and read in just the configuration values (a writing sketch follows this list)
  • Need to save multiple source positions in TestResult. Added placeholder (bad) code at that point as a marker for where to continue.
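
A rough sketch of the writing half with rapidxml; the element names and file name are placeholders, and leaning on rapidxml_print.hpp for serialization is an assumption, not a decision:

    #include <fstream>
    #include "rapidxml.hpp"
    #include "rapidxml_print.hpp"   // print()/operator<< live here, not in rapidxml.hpp

    int main() {
        rapidxml::xml_document<> doc;

        rapidxml::xml_node<>* config = doc.allocate_node(rapidxml::node_element, "config");
        doc.append_node(config);

        // Values must outlive the document, so copy them into the document's pool
        char* sessions = doc.allocate_string("10");
        config->append_node(doc.allocate_node(rapidxml::node_element, "totalSessions", sessions));

        std::ofstream out("config.xml");
        out << doc;   // operator<< from rapidxml_print.hpp serializes the tree
        return 0;
    }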

Enhancements

  • Meeting with Dr. Kuber.
    • Add a “distance” component to the test and a multiple emitter test
    • Got a bunch of items to add actuators to: Hardhat, noise-blocking headphones, and a push-to-talk mic.
  • Added name and gender fields to the GUI and cleaned up the menus
  • Working on adding multiple sounds
    • Added a ‘next’ button. Once pushed, the sources can be shown until the center is clicked again.
    • I think the test should have options for how the sounds are added
      • Permutations (A, then B, then C, then AB, AC, BC, ABC; sketched after this list)
      • All (Going to start with this)
      • Random?
  • Added variable distance
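
A rough sketch of generating that “Permutations” order, which is really every non-empty combination grouped by size; the emitter labels are placeholders:

    #include <iostream>
    #include <string>
    #include <vector>

    static int countBits(unsigned v) {
        int n = 0;
        for (; v; v >>= 1) n += v & 1;
        return n;
    }

    int main() {
        std::vector<std::string> emitters;
        emitters.push_back("A");
        emitters.push_back("B");
        emitters.push_back("C");

        const unsigned n = (unsigned)emitters.size();
        for (int size = 1; size <= (int)n; ++size) {              // singles, then pairs, then all
            for (unsigned mask = 1; mask < (1u << n); ++mask) {   // each mask is one playback set
                if (countBits(mask) != size) continue;
                std::string combo;
                for (unsigned i = 0; i < n; ++i)
                    if (mask & (1u << i)) combo += emitters[i];
                std::cout << combo << "\n";                       // A B C AB AC BC ABC
            }
        }
        return 0;
    }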

Infrastructure

So the testing part of the code is done and working. Yes, there are bugs, and some cases where exceptions are thrown that should be handled, but if you color inside of the lines, everything works. Of course, now I need to be able to record the output, so it’s time for some classes to handle the work of saving results for analysis.

  • Creating a TestResult class with the following information
    • session number
    • test number
    • test type (speed or accuracy)
    • time to lift
    • source position
    • cursor position
    • angle difference
    • speaker volume matrix
    • Also, there will be toString(), toCsvString(), and toXmlString() methods for output.
  • The TestManager will instantiate and store TestResults in a container (vector?), and will have the additional fields (both classes are sketched after this list)
    • Researcher name
    • Subject name
    • Sound file
    • Total sessions
    • Speed tests per session
    • Accuracy tests per session
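
Roughly what I have in mind, sketched out; the field types, units, and the flattened volume matrix are placeholders until the design settles:

    #include <sstream>
    #include <string>
    #include <vector>

    class TestResult {
    public:
        int sessionNumber;
        int testNumber;
        std::string testType;              // "speed" or "accuracy"
        double timeToLift;                 // seconds
        float sourceX, sourceY;            // source (emitter) position
        float cursorX, cursorY;            // cursor position at release
        double angleDifference;            // degrees between source and chosen vector
        std::vector<float> speakerVolumes; // speaker volume matrix, flattened

        std::string toString() const;      // human-readable dump
        std::string toXmlString() const;   // one <result> element

        std::string toCsvString() const {  // one row per result; remaining fields follow the same pattern
            std::ostringstream os;
            os << sessionNumber << "," << testNumber << "," << testType << ","
               << timeToLift << "," << sourceX << "," << sourceY << ","
               << cursorX << "," << cursorY << "," << angleDifference;
            return os.str();
        }
    };

    // TestManager owns the per-run fields and the container of results
    class TestManager {
    public:
        std::string researcherName;
        std::string subjectName;
        std::string soundFile;
        int totalSessions;
        int speedTestsPerSession;
        int accuracyTestsPerSession;

        std::vector<TestResult> results;   // one TestResult per test attempt
    };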

So that’s what happens when a programming language gets old…

  • Continuing with the test exec. I’m also going to need a class that records the data associated with each test segment.
  • Ran into a… Well, I don’t want to call it a bug. Let’s say that C++ is showing its age. FLTK uses char*. Most of Windows uses wchar_t. They don’t play well together, so I spent about half of my time working out the best way to convert between them (a Win32 alternative is sketched at the end of this post). It’s this:
  • void setSoundFileString(LPCWSTR wps){
    	// soundFileString (a wstring*) and soundFile (a char array) are presumably class members
    	soundFileString = new wstring(wps);
    	// Naive narrowing: copies each wchar_t into a char, so only ASCII survives intact
    	string str(soundFileString->begin(), soundFileString->end());
    	sprintf_s(soundFile, "%s", str.c_str());
    }
  • I mean really!? Good grief.
  • Got a lot of the exec built and running. Clicking on the center button fires the sound, and you can drag to where you think the sound is. I am not all that accurate. It could be a frequency thing, though. I’m running a low 10-20 Hz signal. The test should definitely try different frequencies.
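
For comparison only (not the code above): a sketch of the Win32 route via WideCharToMultiByte, which also survives characters outside plain ASCII:

    #include <windows.h>
    #include <string>
    #include <vector>

    std::string narrow(LPCWSTR wide) {
        // First call asks for the required buffer size (the count includes the trailing null)
        int len = WideCharToMultiByte(CP_UTF8, 0, wide, -1, NULL, 0, NULL, NULL);
        if (len <= 1) return std::string();
        std::vector<char> buf(len);
        WideCharToMultiByte(CP_UTF8, 0, wide, -1, &buf[0], len, NULL, NULL);
        return std::string(&buf[0]);   // drop the trailing null
    }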

FLTK and FLUID. Simple, Good Stuff

  • Starting to put together the actual test framework. Found a good open source synthesizer (ZynAddSubFX) that I used to create a pure tone that I then cut down to one second with Audacity. It’s important to note that this app only works with MONO sounds.
  • Building up the class that will handle running multiple sessions.
  • Just a quick shout-out to FLTK. I have been adding and adjusting the GUI all day long as I figure out how to run the tests. To add fields, adjust positions, and just generally futz around, all I have to do is use the FLUID GUI editor, export the layout as C++ code, add in stdafx.h, and compile. It all just works. A great piece of code. FLTK rocks.
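
Roughly how the exported code gets used; make_window() is FLUID’s default exported function name and "TestGui.h" is a placeholder for whatever the layout file actually generates:

    #include "stdafx.h"
    #include <FL/Fl.H>
    #include <FL/Fl_Window.H>
    #include "TestGui.h"     // placeholder for the FLUID-generated header, which declares make_window()

    int main(int argc, char** argv) {
        Fl_Window* window = make_window();  // builds the widgets laid out in FLUID
        window->show(argc, argv);
        return Fl::run();                   // hand control to the FLTK event loop
    }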

Closing in on something useful

  • Put all the projects into SVN, checked them out and did a clean build. Everything still works.
  • Starting on capturing mouse events. Done. Capturing Left, Middle, Right, Wheel, and Drag (a sketch of the event handling is at the end of this post). I think I want to have it so that the user presses (holds down?) a “button” in the middle of the GL window, signifying that he’s ready for the next sound cue. Once the sound plays, he drags toward the source. This is shown as a line indicating the vector (and a cursor?). Releasing the mouse is the event that marks and records the choice, vector, elapsed time, and position (vector?) of the emitter.
  • Making a button class for OGL. Done.
  • Making a line segment class so we can point to where the sound is. Done.
  • And just to show that I’ve been paying attention in class, the user doesn’t have to worry about hitting a particular length when dragging towards the sound. Since there is essentially no source (the test starts after the subject clicks) and no target, we don’t have to worry about any Fitts’ Law biases 🙂
  • Progress for today:

    [Image: Start Button (red), Sound Vector and Emitter]
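
A sketch of the mouse-event handling in an FLTK GL window; the class and member names are placeholders, not the project’s actual code:

    #include <FL/Fl.H>
    #include <FL/Fl_Gl_Window.H>

    class TestGlWindow : public Fl_Gl_Window {
    public:
        TestGlWindow(int x, int y, int w, int h, const char* label = 0)
            : Fl_Gl_Window(x, y, w, h, label), pressX(0), pressY(0), dragX(0), dragY(0) {}

        virtual int handle(int event) {
            switch (event) {
            case FL_PUSH:         // press on the center "button"; Fl::event_button() says which button
                pressX = Fl::event_x();
                pressY = Fl::event_y();
                return 1;
            case FL_DRAG:         // dragging toward the perceived source
                dragX = Fl::event_x();
                dragY = Fl::event_y();
                redraw();         // update the vector line
                return 1;
            case FL_RELEASE:      // the release is what records the choice and elapsed time
                return 1;
            case FL_MOUSEWHEEL:   // Fl::event_dy() gives the scroll amount
                return 1;
            default:
                return Fl_Gl_Window::handle(event);
            }
        }

    protected:
        virtual void draw() {
            // OpenGL drawing of the start button and the drag vector would go here
        }

    private:
        int pressX, pressY;
        int dragX, dragY;
    };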

Audio is in and synchronized to the video

Well that was a good day.

  • Spent a good deal of time trying to figure out the best way for the GUI and the Exec to communicate. Originally, I wanted to pass a pointer to the GUI from the Exec so that user actions in the GUI could be executed in a more reasonable place. Due to header conflicts, I couldn’t get that to work, so I put together a UI_cmd class that is set in the UI and read in the Exec (roughly sketched at the end of this post). That seems to be working pretty well, though I may want to put a queue in there and turn it into more of a message bus/event pump. That level of sophistication isn’t needed yet, though.
  • Integrated the sound library that I wrote. I still have to reference the D3D audio library in the main application, which I think is a bit odd, but it may be because I’m incorrectly exporting the symbol table from the static library. Again, that’s a refinement for later.
  • At this point, the 3D position of my OGL shape and the 3D position of my continuous sound (2D actually, Y = 0) are running in an infinite circle. It’s pretty cool to hear the audio track the image. I’m uploading a video of the running system, and although it won’t be in surround, you can hear the flanging effects from the sound moving around the helmet.
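
A rough sketch of the kind of handoff object I mean; the field and method names here are made up. The GUI writes a command, the Exec polls and consumes it each frame:

    #include <string>

    struct UI_cmd {
        bool pending;          // set by the GUI, cleared by the Exec
        std::string command;   // e.g. "START_TEST", "QUIT"
        std::string argument;  // optional payload such as a file name

        UI_cmd() : pending(false) {}

        // Called from the GUI callbacks
        void set(const std::string& cmd, const std::string& arg = "") {
            command = cmd;
            argument = arg;
            pending = true;
        }

        // Called once per frame by the Exec; returns false when there is nothing to do
        bool consume(std::string& cmdOut, std::string& argOut) {
            if (!pending) return false;
            cmdOut = command;
            argOut = argument;
            pending = false;
            return true;
        }
    };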