Haptic Rendering Project Proposal / Plan (revised 2.8.13)

The goal of this project is to evaluate haptic rendering as a means of increasing situational awareness for infantry. The initial focus of this research is the creation of a “tactor headband” that the user can wear directly or that can be incorporated into headgear. This project builds on the study “Guidelines for Head Tactile Communication” by Kimberly Myles and Joel T. Kalb (2010).

There are five major milestones in the project. These cover the development process from proof-of-concept through demonstrable prototype. It is expected that the full scope of the project described below will cover several semesters.

  1. Evaluate tactile transducers as a way of rendering haptic information.
    1. It may be possible to take advantage of low-cost off-the-shelf technology to provide haptic rendering. The first step will be to determine which of these technologies are suitable for this research.
      1. A wide variety of wearable tactile actuators are available in the $2.00–$40.00 price range (a sampling of these actuators is listed for sale at PartsExpress). For this study, a set of actuators spanning a range of power and frequency response will be obtained and evaluated in a variety of acoustic environments.
      2. Dolby 7.1 surround sound provides eight independent audio channels. At low frequencies and high amplitudes, acoustic output is perceived as tactile rather than auditory information. Low-cost USB-powered 7.1 units such as the Vantec will be evaluated (see the sketch following this milestone).
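
As a concrete starting point for the 7.1 evaluation above, the sketch below drives a 60 Hz tone on a single channel of an 8-channel PCM stream using only the standard javax.sound.sampled API. It assumes the USB unit exposes itself as an 8-channel output device; the channel index, frequency, and amplitude are illustrative.

```java
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.SourceDataLine;

/** Plays a 60 Hz tone on one channel of an 8-channel (7.1) PCM stream. */
public class TactorTone {
    public static void main(String[] args) throws Exception {
        int sampleRate = 44100;
        int channels = 8;        // 7.1 surround = 8 discrete channels
        int targetChannel = 3;   // which channel/"tactor" to drive (0-7)
        double freqHz = 60.0;    // low frequency: felt as much as heard

        AudioFormat fmt = new AudioFormat(sampleRate, 16, channels, true, false);
        SourceDataLine line = AudioSystem.getSourceDataLine(fmt);
        line.open(fmt);
        line.start();

        // One second of 16-bit little-endian samples; all channels stay
        // silent except the target channel.
        byte[] buf = new byte[sampleRate * channels * 2];
        for (int i = 0; i < sampleRate; i++) {
            short s = (short) (0.8 * Short.MAX_VALUE
                    * Math.sin(2 * Math.PI * freqHz * i / sampleRate));
            int base = (i * channels + targetChannel) * 2;
            buf[base] = (byte) (s & 0xff);            // low byte
            buf[base + 1] = (byte) ((s >> 8) & 0xff); // high byte
        }
        line.write(buf, 0, buf.length);
        line.drain();
        line.close();
    }
}
```

If the unit does not accept an 8-channel line, AudioSystem.getSourceDataLine will throw, which is itself a useful first data point in the evaluation.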
  2. Develop codebase for testing haptic rendering
    1. Three-dimensional sound APIs may be an effective way of presenting positional information to the user. Two candidate libraries are implemented in Java, which has the advantage of being the preferred development language for Android applications: Java3D and OpenAL (nicely incorporated into LWJGL). Java3D will be evaluated first, with LWJGL as a fallback; the goal is a framework for studies that test the efficacy of haptic rendering for situational awareness. In addition, the orientation, and possibly the position, of the user’s head must be tracked so that haptic elements can be rendered in a consistent spatial frame. Orientation can be tracked with an Arduino-attached compass module such as the HMC6352, while position may be addressable with GPS hardware such as the RTL10709. Communication with the Arduino can be done over its USB serial link, with readings exchanged as JSON messages (see the sketch following this milestone).
    2. Since it may be necessary to track the position and orientation of both the user and a pointing device in a study, the incorporation of a vision library such as OpenCV will be evaluated as well.
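
A minimal sketch of the core rendering step described above: mapping a world-frame cue azimuth into the head frame using the compass heading, then selecting the nearest tactor on the headband. Standard input stands in for the Arduino’s USB serial stream here; the JSON field name ("heading"), the tactor count, and the fixed cue azimuth are all assumptions for illustration.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/** Maps a world-frame cue azimuth onto the nearest headband tactor. */
public class TactorMapper {
    static final int NUM_TACTORS = 8; // assumed evenly spaced around the band

    /** Index of the tactor closest to the cue, given head yaw in degrees. */
    static int tactorFor(double cueAzimuthDeg, double headYawDeg) {
        double relative = ((cueAzimuthDeg - headYawDeg) % 360 + 360) % 360;
        return (int) Math.round(relative / (360.0 / NUM_TACTORS)) % NUM_TACTORS;
    }

    public static void main(String[] args) throws Exception {
        double cueAzimuth = 90.0; // virtual object due east, for illustration
        // The Arduino is assumed to emit JSON lines such as {"heading": 237.5};
        // stdin stands in for the serial port.
        Pattern headingField = Pattern.compile("\"heading\"\\s*:\\s*([0-9.]+)");
        BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
        String msg;
        while ((msg = in.readLine()) != null) {
            Matcher m = headingField.matcher(msg);
            if (!m.find()) continue; // skip malformed messages
            double heading = Double.parseDouble(m.group(1));
            System.out.println("drive tactor " + tactorFor(cueAzimuth, heading));
        }
    }
}
```

In the real framework the azimuth would come from the 3D sound API’s source position rather than a constant, and a proper JSON parser would replace the regular expression.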
  3. Perform a preliminary study on the effectiveness of haptic rendering as an aid to situational awareness.
    1. As currently envisioned, this study will evaluate seated subjects wearing headgear that incorporates the haptic rendering hardware as well as a means of tracking the position and orientation of the head. Additionally, the subject will be able to indicate the perceived position (and distance?) of a rendered haptic cue, possibly using a pointing device. This pointing device may either be tracked visually or carry the same compass and GPS modules described above.
    2. In the study, subjects will be presented with a set of haptic cues derived from the calculated 3D position of a virtual object. Subjects will be asked to indicate the cue’s position using the pointing device; when they believe they are pointing at the cue, they will press a button to complete the trial. The time from the introduction of the cue to the button press, along with the angular difference between the pointing direction and the cue’s actual position, will be recorded (see the sketch following this milestone). Multiple randomized cues across multiple subjects will be tested and evaluated quantitatively to establish the effectiveness of the system. Variations on this study could include visible and invisible targets, the effect of blindfolding the subject, and the effect of misdirected visual cues, to see whether an occasional glance (for example) is more or less important in building a spatial model than continuous haptic rendering. A SIGCHI paper is expected to be a realistic goal of this study.
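
The two quantities recorded per trial reduce to a timestamp difference and an angle between two direction vectors. A sketch, with the direction vectors assumed to come from the head and pointer trackers:

```java
/** Scores one trial: response time plus angular error in degrees. */
public class TrialScorer {
    /** Angle in degrees between two 3D unit vectors. */
    static double angularErrorDeg(double[] pointed, double[] actual) {
        double dot = pointed[0] * actual[0]
                   + pointed[1] * actual[1]
                   + pointed[2] * actual[2];
        // Clamp against floating-point drift outside [-1, 1] before acos.
        dot = Math.max(-1.0, Math.min(1.0, dot));
        return Math.toDegrees(Math.acos(dot));
    }

    public static void main(String[] args) {
        long cueShownAt = System.nanoTime();
        // ... subject orients the pointing device, then presses the button ...
        long buttonHitAt = System.nanoTime();
        double responseMs = (buttonHitAt - cueShownAt) / 1e6;

        double[] pointed = {0.0, 1.0, 0.0};                   // from the tracker
        double[] actual  = {Math.sin(0.1), Math.cos(0.1), 0}; // computed cue
        System.out.printf("t=%.1f ms, error=%.2f deg%n",
                responseMs, angularErrorDeg(pointed, actual));
    }
}
```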
  4. Perform a more complex study in a larger, freeform environment
    1. Based on the success of the preliminary study, the system described above could be made portable (backpackable?) and provided to a squad-sized group of users. A set of tasks requiring various levels of situational awareness could then be evaluated, possibly with the haptic rendering systems on, off, and in degraded modes. Since position and orientation are already being tracked, recording detailed information about the group dynamics of the subjects should be straightforward (a sketch of a per-subject log record follows this milestone). A follow-on SIGCHI paper is expected to be a realistic goal of this study.
    2. To make for a much more exciting submission video, one element of the study might take place during a Humans vs. Zombies scenario, assuming that HvZ is still a thing.
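
Since the portable system already tracks position and orientation, per-subject logging can be as simple as writing time-stamped records to a CSV file. A sketch; the field names, units, and the renderer-mode tag are illustrative assumptions:

```java
import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;

/** Writes one time-stamped record per subject per sample to a CSV log. */
public class SquadLogger implements AutoCloseable {
    private final PrintWriter out;

    SquadLogger(String path) throws IOException {
        out = new PrintWriter(new FileWriter(path)); // one file per session
        out.println("timestamp_ms,subject_id,lat,lon,heading_deg,renderer_mode");
    }

    void record(String subjectId, double lat, double lon,
                double headingDeg, String rendererMode) {
        out.printf("%d,%s,%.6f,%.6f,%.1f,%s%n",
                System.currentTimeMillis(), subjectId,
                lat, lon, headingDeg, rendererMode);
    }

    @Override
    public void close() {
        out.close();
    }

    public static void main(String[] args) throws IOException {
        try (SquadLogger log = new SquadLogger("squad_trial.csv")) {
            log.record("subject-01", 42.2808, -83.7430, 271.5, "on");
        }
    }
}
```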
  5. Integrate with other data gathering systems
    1. Situational awareness can only improve in the presence of more and better data. Downstream additions to the study could include integration with vision systems such as those developed at the University of Michigan, as well as more traditional spatial databases, such as those provided by Esri.
