The theme is the merging of tangible computing and augmented reality (AR).
One place to look is training. Training is most effective when it closely resembles the actual task, a finding that holds in settings ranging from flight simulators to surgical trainers (see Jacobs & Roscoe, 1975 and Haque & Srinivasan, 2006 below). Tangible computing opens up the possibility of flight-simulator-level training for hands-on tasks that are not normally instrumented in a way that supports this kind of training. With this in mind, let’s think about what a good hands-on trainer might look like:
- It would be reusable.
- It would be extensible.
- It would look right.
- It would feel right.
- Regular tools would work in their normal way.
- It would support multiple users in a shared space.
What does this mean with respect to training/teaching surgery?
Despite ongoing advances in minimally invasive surgery, the vast majority of procedures are still hands-on operations in which a surgeon directly manipulates the tools of the OR, ranging from scalpels and sutures to rib-spreaders and bone drills (see CDC, below).
The idea for this project is to develop a surgical simulator that takes advantage of recent advances in augmented reality, 3D printing, and patient-specific imaging. In this concept, volumetric data from MRI, CT, and other scans is used to produce a 3D-printed model of the area of interest. The printer would lay down realistic layers of tissue, including arteries and veins that can be driven by pumps in the base on which the model is built.
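As a sketch of the volume-to-print step, the following assumes a CT/MRI volume is already available as a NumPy array and uses scikit-image's marching cubes plus the numpy-stl package to produce a printable mesh. A real pipeline would segment tissue types and drive a multi-material printer; this only extracts a single isosurface.

```python
import numpy as np
from skimage import measure   # scikit-image
from stl import mesh          # numpy-stl

def volume_to_stl(volume, iso_level, path):
    """Extract an isosurface from a CT/MRI volume (a 3D array of
    intensities) and write it out as a printable STL file.

    `iso_level` is the intensity threshold separating the tissue of
    interest from its surroundings; choosing it is the (hard)
    segmentation problem this sketch glosses over."""
    verts, faces, _, _ = measure.marching_cubes(volume, level=iso_level)
    surface = mesh.Mesh(np.zeros(faces.shape[0], dtype=mesh.Mesh.dtype))
    for i, f in enumerate(faces):
        surface.vectors[i] = verts[f]   # one triangle per face
    surface.save(path)

# Example: a synthetic sphere stands in for real scan data.
x, y, z = np.ogrid[-32:32, -32:32, -32:32]
volume_to_stl(np.sqrt(x**2 + y**2 + z**2), iso_level=20.0, path="organ.stl")
```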
The model itself is colored so that 3D computer graphics can be superimposed on it using augmented reality. These graphics portray the visual characteristics of living tissue, while allowing users to see their hands and tools in the scene. The model is tracked as it is manipulated using Kinect-like devices, so that the imagery follows the cuts and other changes the user makes to the model.
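A minimal sketch of the overlay idea, assuming the model is printed in a single chroma-key color and OpenCV is available: pixels matching the key color are replaced by rendered tissue graphics, and anything occluding the model (hands, tools) is left untouched. The key-color bounds and camera index are placeholders that would be calibrated in practice.

```python
import cv2
import numpy as np

# Placeholder HSV bounds for the model's key color; calibrated per print material.
KEY_LO = np.array([55, 80, 80])
KEY_HI = np.array([75, 255, 255])

def composite(frame_bgr, rendered_bgr):
    """Replace chroma-keyed model pixels with rendered tissue imagery.
    Hands and tools cover the key color, so they stay visible in front
    of the overlay without any explicit occlusion handling."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, KEY_LO, KEY_HI)  # 255 where the model shows
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                            np.ones((5, 5), np.uint8))  # de-speckle
    out = frame_bgr.copy()
    out[mask > 0] = rendered_bgr[mask > 0]   # paint tissue imagery onto the model
    return out

cap = cv2.VideoCapture(0)  # stand-in for a Kinect-like RGB stream
while True:
    ok, frame = cap.read()
    if not ok:
        break
    rendered = np.zeros_like(frame)  # stand-in for the live tissue render
    cv2.imshow("AR overlay", composite(frame, rendered))
    if cv2.waitKey(1) == 27:         # Esc quits
        break
```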
Because the system is based on 3D printing, any surgical site can be replicated, from phlebotomy to hip replacement. Generic training models can be developed, as well as models based on patient-specific data. Also, because the surgical site is tracked by stereoscopic cameras, the user’s performance can be monitored against several criteria, including time to complete the procedure, tools used, and possible errors.
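A sketch of the bookkeeping this monitoring might need, covering only the three criteria named above; the event hooks (`tool_used`, `error`) are hypothetical and would be fired by the tracking system when it recognizes a tool or flags a mistake.

```python
from dataclasses import dataclass, field
import time

@dataclass
class ProcedureLog:
    """Per-trainee record of elapsed time, tools used, and flagged errors."""
    start: float = field(default_factory=time.monotonic)
    tools: set = field(default_factory=set)
    errors: list = field(default_factory=list)

    def tool_used(self, name):
        self.tools.add(name)

    def error(self, description):
        # Record when in the procedure the error happened, plus what it was.
        self.errors.append((time.monotonic() - self.start, description))

    def summary(self):
        return {"seconds": time.monotonic() - self.start,
                "tools": sorted(self.tools),
                "error_count": len(self.errors)}
```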
The model is printed on a special base that includes heaters and gas/fluid pumps. Arteries, veins, bladders, etc., are printed such that they are filled with heated fluids as needed during the procedure. Body motions that affect the operating site, such as peristalsis, heartbeat, or breathing, can be achieved by inflating and deflating printed bladders. The model is draped appropriately, and the surgical team can interact as a group. Possible extensions of this interaction could include the anesthesiologist and other specialists, which could be particularly useful in a medical-school setting.
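Pump control for heartbeat and breathing could start from synthetic drive waveforms like the sketch below; the rates and waveform shapes are illustrative only, and recorded patient signals (see the next paragraph) could replace them sample-for-sample.

```python
import math

def physiologic_drive(t, heart_rate_bpm=72, resp_rate_bpm=15):
    """Return (cardiac, respiratory) drive levels in [0, 1] at time t seconds.

    The cardiac channel is a sharpened half-sine, giving a pulse-like
    systolic peak; the respiratory channel is a slow sinusoid for smooth
    bladder inflation and deflation."""
    cardiac = max(0.0, math.sin(2 * math.pi * heart_rate_bpm / 60.0 * t)) ** 3
    breath = 0.5 * (1.0 + math.sin(2 * math.pi * resp_rate_bpm / 60.0 * t))
    return cardiac, breath

# Example: sample at 50 Hz; each sample would be forwarded to the pump drivers.
for i in range(5):
    c, b = physiologic_drive(i / 50.0)
    print(f"t={i / 50.0:.2f}s  cardiac={c:.2f}  breath={b:.2f}")
```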
Because the models can be based on actual patient data, it becomes possible to take the concept of “see one, do one, teach one” to a higher level of fidelity. Imagine a surgeon performing a particularly rare or interesting procedure. The patient’s data (MRI/CT, etc.) is recorded prior to the operation, and it may be possible to record heartbeat, breathing, body temperature, and so on during the procedure. This means that students can watch the procedure, then perform it themselves on a patient-specific model driven by the patient’s recorded physiology. Further, by monitoring students’ performance in the simulation, it might be possible to rank them with respect to speed, efficacy, and errors, so that weaker students get additional attention from the performing/teaching surgeon. “See one, do one, teach one” could easily become “simulate, see, show”.
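Ranking trainees for instructor attention could be as simple as the sketch below, which consumes summaries like those produced by the `ProcedureLog` above; the weights are placeholders and would need validation against expert judgment.

```python
def rank_for_attention(summaries):
    """Order trainee summaries worst-first by a weighted composite of
    time taken and errors made; a higher score means the trainee needs
    more instructor attention. The weights (1 point per minute, 5 points
    per error) are illustrative only."""
    def score(s):
        return s["seconds"] / 60.0 + 5.0 * s["error_count"]
    return sorted(summaries, key=score, reverse=True)
```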
To evaluate the efficacy of the training, a study would need to be designed that evaluates skills transfer (anything else?) from procedures performed in the training system to (actual? cadaver? animal?) surgery. Since the 3D printing of tissues is anticipated to be the last component of this system to be developed, the studies will probably have to be structured to accommodate more primitive models than what should be available in a production system.
Annotated Bibliography:
1) Zahariev, Mihaela A., and Christine L. MacKenzie. “Auditory, graphical and haptic contact cues for a reach, grasp, and place task in an augmented environment.” Proceedings of the 5th international conference on Multimodal interfaces. ACM, 2003.
- Investigated the effects of auditory and graphical contact cues on a direct manipulation task performed in a tabletop augmented environment. The task was to reach and grasp a physical object and place it on a graphical target.
- Alternative to HMD
2) Adler, Simon, et al. “Overlay of patient-specific anatomical data for advanced navigation in surgery simulation.” Proceedings of the First International Workshop on Digital Engineering. ACM, 2010.
- Explores overlay systems using patient-specific data. Techniques should be transferable to chroma-key overlays.
3) Nilsson, Susanna, and Björn Johansson. “Fun and usable: augmented reality instructions in a hospital setting.” Proceedings of the 19th Australasian conference on Computer-Human Interaction: Entertaining User Interfaces. ACM, 2007.
- More work on mixed reality, which is OK, but there is a qualitative part about why people would use the system (i.e. it’s fun), that makes this valuable.
4) Tang, Ziying, et al. “A multimodal virtual environment for interacting with 3d deformable models.” Proceedings of the international conference on Multimedia. ACM, 2010.
- High-speed complex deformable models, including organs
5) Ehara, Jun, and Hideo Saito. “Texture overlay onto deformable surface for virtual clothing.” Proceedings of the 2005 international conference on Augmented tele-existence. ACM, 2005.
- Motion capture with markers providing the basis for the organ tracking.
- This makes me think that the markers should be printed throughout the model so that the camera can recognize incisions. This might imply a different marker design that is detectable in cross-section (add color information? fluorescent?); a detection sketch for the surface markers follows below.
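For the surface markers at least, off-the-shelf fiducials would be a plausible starting point; the sketch below assumes OpenCV 4.7+’s ArUco API. Markers readable in cross-section would need a custom design, which this does not address.

```python
import cv2

# Standard ArUco fiducials for tracking the model's outer surface.
detector = cv2.aruco.ArucoDetector(
    cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50))

def find_markers(gray_frame):
    """Return {marker_id: 4x2 corner array} for every fiducial visible
    in a grayscale frame; deformation of the model shows up as motion
    of these corner points between frames."""
    corners, ids, _ = detector.detectMarkers(gray_frame)
    if ids is None:
        return {}
    return {int(i): c.reshape(4, 2) for i, c in zip(ids.flatten(), corners)}
```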
6) Okumura, Kohei, Hiromasa Oku, and Masatoshi Ishikawa. “Lumipen: Projection-Based Mixed Reality for Dynamic Objects.” Multimedia and Expo (ICME), 2012 IEEE International Conference on. IEEE, 2012.
- An alternative to some kind of HMD
- An additional possibility with projected light is that off-axis cameras could use it to recover depth information (sketched below).
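The depth idea rests on the standard stereo relation; a one-function sketch, assuming the projector supplies enough texture for the cameras to match features:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Classic pinhole-stereo relation Z = f * B / d: given two calibrated
    off-axis cameras separated by baseline B, a feature matched at
    disparity d (in pixels) lies at depth Z. The projected light provides
    the texture that makes matching feasible on otherwise uniform models."""
    return focal_px * baseline_m / disparity_px
```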
7) Franke, Tobias, et al. “Enhancing realism of mixed reality applications through real-time depth-imaging devices in x3d.” Proceedings of the 16th international conference on 3D web technology. ACM, 2011.
- Looks like a good place to start on the actual coding. Plus, a really good reference section
8) Liarokapis, Fotis, and Robert M. Newman. “Design experiences of multimodal mixed reality interfaces.” Proceedings of the 25th annual ACM international conference on Design of communication. ACM, 2007.
- Looks like a good source for issues in design of the system that are (probably?) tangential to the core issue, but still very important for usability.
9) Wilson, Andrew D. “Depth-sensing video cameras for 3d tangible tabletop interaction.” Horizontal Interactive Human-Computer Systems, 2007. TABLETOP’07. Second Annual IEEE International Workshop on. IEEE, 2007.
- Tabletop system with dynamic updating of a virtual environment using physical objects.
- Cool video here: http://research.microsoft.com/en-us/um/people/awilson/publications/wilsontabletop2007/wilsontabletop2007.html
10) Israel, Johann Habakuk, et al. “An object-centric interaction framework for tangible interfaces in virtual environments.” Proceedings of the fifth international conference on Tangible, embedded, and embodied interaction. ACM, 2011.
- There are a lot of surgical implements, ranging from scalpels to saws and drills. It could make sense to make these objects tangible and trackable. If the system knows that a knife is for cutting, then it has a hint about how to handle the texture overlays. Also, a tool (like a syringe) might make an incision that can’t be tracked by a camera. Tracking the tool, while also knowing the position of the model, allows for the determination of correct/incorrect placement (e.g., hitting the vein in phlebotomy, or accessing cerebrospinal fluid in a lumbar puncture); a placement-check sketch follows below.
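A sketch of the placement check for the phlebotomy example, assuming the needle tip and a vein’s centerline are both available in the model’s coordinate frame (the vein geometry would come from the print data, the tip from tool tracking):

```python
import numpy as np

def distance_to_segment(p, a, b):
    """Shortest distance from point p to the segment from a to b (3D arrays)."""
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def hit_vein(needle_tip, vein_start, vein_end, vein_radius):
    """True if the tracked needle tip lies inside a vein approximated as a
    cylinder of the given radius around its centerline segment."""
    return distance_to_segment(needle_tip, vein_start, vein_end) <= vein_radius

# Example (millimeters): a tip 2 mm off the centerline of a 1.5 mm-radius
# vein is flagged as a miss.
print(hit_vein(np.array([0.0, 2.0, 5.0]),
               np.array([0.0, 0.0, 0.0]),
               np.array([0.0, 0.0, 100.0]),
               vein_radius=1.5))  # -> False
```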
11) Haque, Syed, and Shankar Srinivasan. “A meta-analysis of the training effectiveness of virtual reality surgical simulators.” IEEE Transactions on Information Technology in Biomedicine 10.1 (2006): 51-58.
12) Jacobs, Robert S., and Stanley N. Roscoe. “Simulator cockpit motion and the transfer of initial flight training.” Proceedings of the Human Factors and Ergonomics Society Annual Meeting. Vol. 19. No. 2. Sage Publications, 1975.
13) Centers for Disease Control and Prevention, “National Hospital Discharge Survey: 2010 table, Procedures by selected patient characteristics – Number by procedure category and age.” http://www.cdc.gov/nchs/fastats/insurg.htm