
Milestones

The first draft of the paper is done! It comes out at about 12 pages. I’ll need to cut it down to 6 to submit for CHI 2014 WIP. Easier than writing though. Of course, that’s just the first draft. More to come, I’m guessing. Still, it’s a nice feeling, and since I’ve burned through most of my 20% time, it’s time for me to get back to actually earning my pay, so I’ll be taking a break from this blog for a while. More projects are coming up though, so stay tuned. I’ll finish up this post with some images of all the design variations that led to the final, working version:


Prototype Evolution (click to enbiggen)

The chronological order of development is from left to right and top to bottom. Starting at the top left:

  • The first proof of concept. Originally force input / motion feedback. It was with this system that I discovered that all actuator motion had to be relative to a proximal base.
  • The first prototype. It had six degrees of freedom, allowing a user to move a gripper within a 3D environment and grab items. It worked well enough that it led to…
  • The second prototype. A full five-finger gripper attached to an XYZ base. I ran into problems with this one: it turned out that motion feedback imposed too much cognitive load to work. The user would lose track of where their fingers were, even with the proximal base. So that led to…
  • The third prototype. This used resistive force sensors and vibrotactile feedback. The feedback was provided by voice coils capable of the full audio range, which meant that all kinds of sophisticated contact and surface effects could be produced. This proved that five fingers could work with vibrotactile feedback, but the large-scale motions of the base still seemed to need real motion (I’ve since learned that isometric devices are most effective over short ranges). This prototype was also loaded with electronics concepts that I wanted to try out – Arduino sensing, MIDI synthesizers per finger, etc.
  • To explore direct motion for the base, for the fourth prototype I 3D-printed a five-finger Force Input / Vibrotactile Output (FS/VO) system that sits on top of a mouse. This was a plug-and-play substitution that worked with the previous electronics, and it worked quite nicely, though the ability to grip doesn’t give you much to do in the XY plane.
  • To get 3D interaction, I took two FS/VO modules and added them to a Phantom Omni. I also dropped the Arduino and the synthesizers, using XAudio2 8-channel audio and a Phidgets interface card instead. This system worked very nicely: the FS/VO elements combined with a force-feedback base turned out to be very effective (a rough sketch of the per-finger force-to-vibration mapping follows this list). That’s what became the basis for the paper, and hopefully the basis for future work.
  • Project code is here (MD5: B32EE89CEA9C8E02E5B99BFAF24877A0).
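
As an aside, here is a minimal sketch of how the force-to-vibration mapping in those last prototypes could work: each finger’s force reading scales the amplitude of a tone played on that finger’s audio channel. The names and numbers below (read_finger_forces, the 250 Hz tone, the example force values) are stand-ins for illustration, not the project’s actual code, which did its sensing over Arduino/Phidgets and its output through XAudio2.

```python
import math

# A minimal sketch, not the project code: one tone per finger, with the
# finger's measured force scaling the tone's amplitude. read_finger_forces()
# and the 250 Hz tone are illustrative stand-ins.

SAMPLE_RATE = 44100       # voice coils can be driven over the full audio range
CONTACT_FREQ_HZ = 250.0   # skin is quite sensitive around this frequency

def read_finger_forces():
    """Stand-in for a per-finger force-sensor read, normalized to 0..1."""
    return [0.2, 0.0, 0.8, 0.0, 0.5]

def vibro_block(force, n_samples):
    """Map a normalized force to a burst of sine samples: harder press, louder buzz."""
    amp = min(max(force, 0.0), 1.0)
    return [amp * math.sin(2 * math.pi * CONTACT_FREQ_HZ * i / SAMPLE_RATE)
            for i in range(n_samples)]

# ~10 ms block per finger; a real system would stream each block to its own
# channel of an 8-channel audio device.
for finger, force in enumerate(read_finger_forces()):
    block = vibro_block(force, 441)
    print(f"finger {finger}: peak amplitude {max(block):.2f}")
```

Since the output path is just audio, the “buzz” can be any waveform, which is what made the richer contact and surface effects possible.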

A little more direction?

  • In a meeting with Dr. Kuber, I brought up something that I’ve been thinking about since the weekend. The interface works, provably so. The pilot study shows that it can be used for (a) training and (b) “useful” work. If the goal is to produce “blue-collar telecommuting”, then the question becomes: how do we actually achieve that? A dumb master-slave system makes very little sense, for a couple of reasons:
    • Time lag. It may not always be possible to get a response loop fast enough for haptics to work well.
    • Machine intelligence. With robots like Baxter coming online, there is certainly some level of work that the on-site robot can perform autonomously. So, what’s a good human-robot synergy?
  • I’m thinking that a hybrid virtual/physical interface might be interesting.
    • The robotic workcell is constantly scanned and digitized by cameras. The data is then turned into models of the items that the robot is to work with.
    • These items are rendered locally to the operator, who manipulates the virtual objects using tight-loop haptics, 3D graphics, etc. Since (often?) the space is well known, the objects can be rendered from a library of CAD-correct parts.
    • The operator manipulates the virtual objects, and the robot follows the “path” laid down by the operator. The position and behavior of the actual robot are represented in some way (ghost image, warning bar, etc.). This is known as Mediated Teleoperation, and is described nicely in this paper (a rough sketch of the local-loop/remote-follower split appears after this list).
    • The novel part, at least as far as I can determine at this point, is using mediated telepresence to train a robot in a task:
      • The operator can instruct the robot to learn some or all of a particular procedure. This probably entails setting entry, exit, and error conditions for tasks, which the operator is able to create on the local workstation.
      • It is reasonable to expect that in many cases this sort of work will be a mix of manual control and automated behavior. For example, placing a part may be manual, but screwing a bolt into place to a particular torque could be entirely automatic. If a robot’s behavior is made fully autonomous, the operator simply needs to monitor the system for errors or non-optimal behavior. At that point, the operator could engage another robot and repeat the above process (the task-handoff sketch after this list illustrates the idea).
      • User interfaces that seamlessly inform the operator when the robot is coming out of autonomous mode still need to be explored.
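
To make the mediated-teleoperation idea a bit more concrete, here is a toy sketch of the split between the operator’s tight local loop and the lagging remote robot. Everything in it (the VirtualModel class, the 12-tick network delay, the straight-line operator input) is an assumption made up for illustration, not a design commitment.

```python
from collections import deque

# Hedged sketch of the mediated-teleoperation split: the operator's haptic
# loop runs tight against a local virtual model, while the real robot follows
# the committed path only after a network delay. All names and numbers here
# are illustrative assumptions.

NETWORK_DELAY_TICKS = 12          # round-trip lag we cannot make haptics wait on

class VirtualModel:
    """Local, CAD-correct stand-in for the scanned workcell objects."""
    def __init__(self):
        self.tool_pos = (0.0, 0.0, 0.0)

    def move_tool(self, pos):
        self.tool_pos = pos       # tight-loop haptics would render forces here

def operator_input(tick):
    """Stand-in for the operator's hand position at this tick."""
    return (tick * 0.01, 0.0, 0.0)

local = VirtualModel()
to_robot = deque()                # path commands in flight to the remote robot
robot_pos = (0.0, 0.0, 0.0)       # what the "ghost" rendering would show

for tick in range(30):
    # 1. Local loop: immediate, never waits on the network.
    target = operator_input(tick)
    local.move_tool(target)

    # 2. Commit the path point; the robot only sees it after the delay.
    to_robot.append((tick + NETWORK_DELAY_TICKS, target))
    while to_robot and to_robot[0][0] <= tick:
        _, robot_pos = to_robot.popleft()

    if tick % 10 == 0:
        print(f"tick {tick:2d}  operator at {local.tool_pos}  ghost robot at {robot_pos}")
```

The point of the split is that the haptic loop never waits on the network; the robot just consumes the committed path as it arrives, and its delayed pose is what would drive the ghost image or warning bar on the operator’s side.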
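
And here is an equally rough sketch of the manual/autonomous mix: a task as a list of steps, each tagged as operator-driven or robot-driven, with operator-defined exit conditions and a notification whenever control comes back from autonomy. The step names and checks are invented for the example; in practice they would be set up through the local workstation.

```python
from dataclasses import dataclass
from typing import Callable

# Hedged sketch of mixing manual and autonomous steps in one task. The step
# names and exit checks below are made up for illustration.

@dataclass
class Step:
    name: str
    autonomous: bool
    exit_check: Callable[[], bool]    # operator-defined exit/error condition

def run_task(steps, notify):
    for step in steps:
        if step.autonomous:
            # Robot executes on its own; the operator just monitors.
            ok = step.exit_check()
            notify(f"autonomous step '{step.name}' {'done' if ok else 'FAILED'}"
                   " -- returning control to operator")
            if not ok:
                return False          # error condition: hand back to manual control
        else:
            notify(f"manual step '{step.name}': operator in the loop")
    return True

task = [
    Step("place part in fixture", autonomous=False, exit_check=lambda: True),
    Step("drive bolt to torque",  autonomous=True,  exit_check=lambda: True),
]
run_task(task, notify=print)
```

The notify hook is where the “coming out of autonomous mode” interface question lives: however it is surfaced, the operator needs a seamless cue that control is theirs again.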