
Three views of the Odyssey

I’ve been thinking of ways to describe the differences between information visualizations with respect to maps, diagrams, and lists. Here’s The Odyssey as a geographic map:

[Figure: Odysseus’ journey plotted on a map of the Mediterranean]

The first thing that I notice is just how far Odysseus travelled. That’s about half of the Mediterranean! I thought that it all happened close to Greece. Maps afford this understanding. They are diagrams that support the plotting of trajectories. Which brings me to the point that we lose a lot of information about relationships in narratives. That’s not their point. This doesn’t mean that non-map diagrams don’t help sometimes. Here’s a chart of the characters and their relationships in the Odyssey:

[Figure: chart of the characters in the Odyssey and their relationships]

There is a lot of information here that is helpful, and this I do remember and understood from reading the book. Stories are good at depicting how people interact. But though this chart shows relationships, the layout does not really support navigation. For example, the gods are all related by blood and can pretty much contact each other at will, yet this chart would have Poseidon accessing Aeolus and Circe by going through Odysseus. So this chart is not a map.

Lastly, there is the relationship that comes at us through search. Because the implicit geographic information about the Odyssey is not specifically in the text, a search request within the corpora cannot produce a result that lets us integrate it.

[Figure: search results for Odysseus’ journey]

There is a lot of ambiguity in this result, which is similar to other searches that I tried, including travel, sail, and other descriptive terms. This doesn’t mean that it’s bad; it just shows how search does not handle context well. It’s not designed to. It’s designed around precision and recall. Context requires a deeper understanding of meaning, and even such recent innovations as sharded views with cards, single answers, and pro/con results only skim the surface of providing situationally appropriate, meaningful context.

Some thoughts about awareness and trust

I had some more thoughts about how behavior patterns emerge from the interplay between trust and awareness. I think the following may be true:

  1. Awareness refers to how complete the knowledge of an information domain is. Completely aware indicates complete information. Unaware indicates not only absent information but no knowledge of the domain at all.
  2. Trust is a social construct to deal with incomplete information. It’s a shortcut that essentially states “based on some set of past experiences, I will assume that this (now trusted) entity will behave in a predictable, reliable, and beneficial way for me”
  3. Healthy behaviors emerge when trust and awareness are equivalent.
  4. Low trust and low awareness is reasonable. It’s like walking through a dark, unknown space. You go slow, bump into things, and adjust.
  5. Low trust and high awareness is paralytic.
  6. High trust and low awareness is reckless. It produces runaway conditions like echo chambers. The quandary here is that high trust is efficient. Consider the prisoner’s dilemma:
      1. [Figure: prisoner’s dilemma payoff matrix]
      2. In the normal case, the two criminals have to evaluate the best action based on all the actions the other individual could choose, ideally resulting in a Nash equilibrium. For two players (p) with two choices each (c), there are c^p = 4 combinations. However, if each player believes that the other player will make the same choice, then only the two diagonal choices remain. For two players, this reduces the complexity by half. But for multiple dissimilar players, the options go up as c^p, so that if this were The Usual Suspects, there would be 32 possibilities to be worked out by each player. But for 5 identical prisoners, the number of choices remains 2, which is basically “what should we all do?” (see the sketch at the end of this list). The more we believe that the others in our social group see the world the same way, the less work we all have to do.
  7. Diversity is a mechanism for extending awareness, but it depends on trusting those who are different. That may be the essence of the explore/exploit dilemma.
  8. Attention is a form of focused awareness that can reduce general awareness. This is one of the reasons that Tufekci’s thoughts on the attention economy matter so much. As technology increases attention on proportionally more “marketable” items, the population’s social awareness is distorted.
  9. In a healthy group context, trust falls off as a function of awareness. That’s why we get flocking. That is the pattern that emerges when you trust more those who are close, while they in turn do the same, building a web of interaction. It’s kind of like interacting ripples?
  10. This may work for any collection of entities that have varied states that undergo change in some predictable way. If they were completely random, then awareness of the state is impossible, and trust should be zero.
    1. Human agent trust chains might proceed from self to family to friends to community, etc.
    2. Machine agent trust chains might proceed from self to direct connections (thumb drives, etc.) to LAN to WAN
    3. Genetic agent trust chain is short – self to species. Contact is only for reproduction. Interaction would reflect the very long sampling times.
    4. Note that (1) is evolved and is based on incremental and repeated interactions, while (2) is designed and is based on arbitrary rules that can change rapidly. Genetics may be dealing with different incentives; the only issue is persisting and spreading (which helps with the persisting).
  11. Computer-mediated communication disturbs this process (as probably does every form of mass communication) because the trust in the system is applied to the trust of the content. This can work both ways. For example, lowering trust in the press allows for claims of Fake News. Raising the trust of social networks that channel anonymous online sources allows for conspiracy thinking.
  12. An emerging risk is how this affects artificial intelligence, given that the builders currently assume high trust in the algorithms and training sets:
    1. Low numbers of training sets mean low diversity/awareness,
    2. Low numbers of algorithms (DNNs) also mean low diversity/awareness
    3. Since training/learning is spread by update, the installed base is essentially multiple instances of the same individual. So no diversity and very high trust. That’s a recipe for a stampede of 10,000 self-driving cars.
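
Point 6 above hand-waves some arithmetic, so here is a toy sketch of the combinatorics in Python. The function and the numbers are mine and purely illustrative; nothing here comes from an actual model.

```python
# Toy model of the combinatorics in point 6: with c choices per player and p
# players, reasoning about dissimilar others means considering c**p joint
# outcomes, while assuming everyone chooses identically leaves only c.
def joint_outcomes(choices_per_player: int, players: int, identical: bool) -> int:
    """Number of joint outcomes each player has to reason about."""
    return choices_per_player if identical else choices_per_player ** players

print(joint_outcomes(2, 2, identical=False))  # classic 2-player dilemma: 4
print(joint_outcomes(2, 2, identical=True))   # only the diagonal matters: 2
print(joint_outcomes(2, 5, identical=False))  # five dissimilar players: 32
print(joint_outcomes(2, 5, identical=True))   # five identical prisoners: 2
```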

Since I wrote this, I’ve had some additional thoughts. I think that our understanding of Awareness and Trust is getting confused with Faith and Doubt. Much of what we believe to be true is no longer based on direct evidence, or even an understandable chain of reasoning. Particularly as more and more of our understanding comes from statistical analysis of large sets of fuzzy data, the line between Awareness and Faith becomes blurred.

Doubt is an important part of faith, and it has to do with the mind coming up against the unknowable. The question “Does God exist?” contains the basics of the tension between faith and doubt. Proving the existence of God can even be thought of as a distraction from the attempt to come to terms with the mysteries of life. Within every one of us is the ability to reject all prior religious thought and start our own journey that aligns with our personal understandings.

Conversely, it is impossible to increase awareness without trusting the prior work. Isaac Newton had to trust, in large part, the shoulders of the giants he stood on, even as he was refining notions of what gravity was. So too with Albert Einstein, Rosalind Franklin, and others in their fields. The scientific method is a framework for building a large, broad-based, interlocking tapestry of awareness.

When science is approached from a perspective of Faith and Doubt, communities like the Flat Earth Society emerge. Their position is based on the faith that, since the world appears flat here, it must be flat everywhere, and on doubt of a history of esoteric measurements and math that disprove this personally reasonable assumption. From this perspective, the Flat Earthers are a Protestant movement, much like the community that emerged around Martin Luther when he rejected the organized, carefully constructed orthodoxy of the Catholic Church based on his personally reasonable interpretation of scripture.

Confusing Awareness and Trust with Faith and Doubt is toxic to both. Ongoing, systemic doubt in trustworthy information will destroy progress, ultimately unraveling the tapestry of awareness. Trust that mysteries can be proven is toxic in its own way, since it gives rise to confusion between reality and fantasy like we see in doomsday cults.

My sense is that as our ability to manipulate and present information is handed over to machines, we will need to educate them in these differences and make sure that they do not become as confused as we are. Because we are rapidly heading for a time when these machines will be so complex and capable that our trust in them will be based on faith.

A little more direction?

  • In meeting with Dr. Kuber, I brought up something that I’ve been thinking about since the weekend. The interface works, provably so. The pilot study shows that it can be used for (a) training and (b) “useful” work. If the goal is to produce “blue collar telecommuting”, then the question becomes, how do we actually achieve that? A dumb master-slave system makes very little sense for a few reasons:
    • Time lag. It may not be possible to always get a fast enough response loop to make haptics work well
    • Machine intelligence. With robots coming online like Baxter, there is certainly some level of autonomy that the on-site robot can perform. So, what’s a good human-robot synergy?
  • I’m thinking that a hybrid virtual/physical interface might be interesting.
    • The robotic workcell is constantly scanned and digitized by cameras. The data is then turned into models of the items that the robot is to work with.
    • These items are rendered locally to the operator, who manipulates the virtual objects using tight-loop haptics, 3D graphics, etc. Since (often?) the space is well known, the objects can be rendered from a library of CAD-correct parts.
    • The operator manipulates the virtual objects. The robot follows the “path” laid down by the operator. The position and behavior of the actual robot is represented in some way (ghost image, warning bar, etc). This is known as Mediated Teleoperation, and described nicely in this paper.
    • The novel part, at least as far as I can determine at this point, is using mediated telepresence to train a robot in a task (a rough structural sketch follows this list):
      • The operator can instruct the robot to learn some or all of a particular procedure. This probably entails setting entry, exit, and error conditions for tasks, which the operator is able to create on the local workstation.
      • It is reasonable to expect that in many cases, this sort of work will be a mix of manual control and automated behavior. For example, placing a part may be manual, but screwing a bolt into place to a particular torque could be entirely automatic. If a robot’s behavior is made fully autonomous, the operator simply needs to monitor the system for errors or non-optimal behavior. At that point, the operator could engage another robot and repeat the above process.
      • User interfaces that seamlessly inform the operator when the robot is coming out of autonomous modes need to be explored.
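
To make the pieces above a little more concrete, here is a rough structural sketch in Python. None of these classes or method names come from the actual system; the scanner, haptic UI, and robot objects are stand-ins, and the sketch only shows how scanning, local virtual manipulation, path following, and task teaching might hang together.

```python
# Hypothetical sketch of the mediated-teleoperation loop described above.
# The scanner / haptic_ui / robot objects are assumed stand-ins, not real APIs.
from dataclasses import dataclass, field
from typing import List

@dataclass
class TaskSpec:
    """Entry/exit/error conditions the operator defines on the local workstation."""
    name: str
    entry_condition: str
    exit_condition: str
    error_conditions: List[str] = field(default_factory=list)
    autonomous: bool = False

class MediatedTeleoperation:
    def __init__(self, scanner, haptic_ui, robot):
        self.scanner = scanner      # cameras digitizing the remote workcell
        self.haptic_ui = haptic_ui  # local tight-loop haptics and 3D rendering
        self.robot = robot          # remote manipulator
        self.tasks: List[TaskSpec] = []

    def step(self) -> None:
        # 1. Scan the workcell and update the local model (CAD-correct parts).
        scene = self.scanner.scan()
        self.haptic_ui.update_scene(scene)

        # 2. The operator manipulates the virtual objects; record the path.
        path = self.haptic_ui.get_operator_path()

        # 3. The robot follows the path; its actual state is echoed back
        #    locally in some form (ghost image, warning bar, etc.).
        state = self.robot.follow(path)
        self.haptic_ui.render_ghost(state)

    def teach(self, task: TaskSpec) -> None:
        """Mark a demonstrated segment as a reusable, possibly autonomous task."""
        self.tasks.append(task)
        if task.autonomous:
            self.robot.enable_autonomy(task)
```

The interesting research questions (when autonomy is handed back, how errors surface to the operator) all live in how step() and teach() interact, which is exactly the seam the last bullet points at.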

Results!

With 10 subjects running two passes each through the system, I now have significant results (using one-way ANOVA) for the Phantom setup. First, user errors:

Linear Hypotheses:
Estimate Std. Error t value Pr(>|t|)
HAPTIC_TACTOR - HAPTIC == 0 -0.3333 0.3123 -1.067 0.7110
OPEN_LOOP - HAPTIC == 0 0.5833 0.3123 1.868 0.2565
TACTOR - HAPTIC == 0 1.0000 0.3123 3.202 0.0130 *
OPEN_LOOP - HAPTIC_TACTOR == 0 0.9167 0.3123 2.935 0.0262 *
TACTOR - HAPTIC_TACTOR == 0 1.3333 0.3123 4.269 <0.001***
TACTOR - OPEN_LOOP == 0 0.4167 0.3123 1.334 0.5466
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
(Adjusted p values reported -- single-step method)

Next, normalized user task completion speed

Linear Hypotheses:
Estimate Std. Error t value Pr(>|t|)
HAPTIC_TACTOR - HAPTIC == 0 0.11264 0.07866 1.432 0.4825
OPEN_LOOP - HAPTIC == 0 0.24668 0.07866 3.136 0.0118 *
TACTOR - HAPTIC == 0 0.17438 0.07866 2.217 0.1255
OPEN_LOOP - HAPTIC_TACTOR == 0 0.13404 0.07866 1.704 0.3269
TACTOR - HAPTIC_TACTOR == 0 0.06174 0.07866 0.785 0.8612
TACTOR - OPEN_LOOP == 0 -0.07230 0.07866 -0.919 0.7947
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
(Adjusted p values reported -- single-step method)

So what this says is that HAPTIC_TACTOR has the lowest error occurrence and that HAPTIC is the fastest in achieving the task. (Note: there may be some force feedback artifacts that contribute to this result, but that will be dealt with in the next study.)
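
For reference, here is roughly how comparisons like the tables above could be reproduced. The output format above looks like R (a glht-style summary with single-step adjustment); this is a Python sketch of the equivalent analysis, and the data in it is made up purely to show the shape of the calculation, not the real measurements.

```python
# Sketch of a one-way ANOVA plus Tukey-style pairwise comparisons, analogous
# to the adjusted, single-step comparisons above. The error counts below are
# invented placeholders, not the actual study data.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

errors = {
    "HAPTIC":        np.array([0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0]),
    "HAPTIC_TACTOR": np.array([0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0]),
    "OPEN_LOOP":     np.array([1, 1, 0, 2, 1, 0, 1, 1, 2, 0, 1, 1]),
    "TACTOR":        np.array([1, 2, 1, 1, 2, 1, 1, 2, 1, 1, 2, 1]),
}

# Omnibus one-way ANOVA across the four conditions.
F, p = f_oneway(*errors.values())
print(f"one-way ANOVA: F = {F:.3f}, p = {p:.4f}")

# Pairwise comparisons with a Tukey adjustment.
values = np.concatenate(list(errors.values()))
groups = np.concatenate([[name] * len(v) for name, v in errors.items()])
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```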

This can be shown best by looking at some plots. Here are the error results as means plots:

[Figure: means plot of the error results]

And here are the means plots for task completion speed:

[Figure: means plots for task completion speed (Fastest50Percent)]

Since this is a pilot study with only 10 participants, the populations are only just separating in a meaningful way, but the charts suggest that HAPTIC and HAPTIC_TACTOR will probably continue to separate from OPEN_LOOP and TACTOR.

What does this mean?

First, and this is only implicit from the study – it is possible to attach simpler, cheaper sensors and actuators (force and vibration) to a haptic device and get good performance. Even with simple semi-physics, all users were able to grip and manipulate the balls in the scenario in such a way as to achieve the goal. Ninety percent of the users who made no errors in placing 5 balls in a goal took between 20 and 60 seconds, or between 4 and 12 seconds per ball (including moving to the ball, grasping the ball and successfully depositing the ball in a narrow goal). Not bad for less than $30 in sensors and actuators.

Second, force-feedback really makes a difference. Doing tasks in an “open loop” framework is significantly slower than doing the same task with force feedback. I doubt that this is something that users will get better at, so the question with respect to gesture-based interaction is how to compensate? As can be seen from the results, it is unlikely that tactors alone can help with this problem. What will?

Third, not every axis needs to have full force-feedback. It seems that as long as the “reference frame” is FF, then the inputs that work with respect to that frame don’t need to be as sophisticated. This does mean that low(ish) cost, high-DOF systems using hybrid technologies such as Force Feedback plus Force/Vibration may be possible. This might open up a new area of exploration in HCI.

Lastly, the issue of how multiple modalities could effectively perform as assistive technologies needs to be explored with this system. There are only a limited number (4?) of ways to render positional information (visual, tactile, auditory, proprioceptive) to a user, and this configuration as it currently stands is capable of three of them. However, because of the way that the DirectX sound library is used to provide tactile information, it is trivial to extend the setup so that 5 channels of audio information could also be provided to the user. I imagine having four speakers placed at the four corners of a monitor, providing an audio rendering of the objects in the scene; a subwoofer channel could be used to provide additional tactile(?) information. A rough sketch of that mapping is below.
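
Here is what that audio rendering might look like: map an object’s normalized screen position to gains on four corner speakers, with a distance-driven subwoofer channel. The layout and the gain law are assumptions made for illustration, not anything the current system does.

```python
# Hypothetical mapping from an object's on-screen position to gains on four
# speakers at the monitor corners, plus a distance-driven subwoofer channel.
import math

CORNERS = {
    "top_left":     (0.0, 0.0),
    "top_right":    (1.0, 0.0),
    "bottom_left":  (0.0, 1.0),
    "bottom_right": (1.0, 1.0),
}

def speaker_gains(x: float, y: float) -> dict:
    """Map a normalized screen position (0..1, 0..1) to per-speaker gains."""
    raw = {}
    for name, (cx, cy) in CORNERS.items():
        d = math.hypot(x - cx, y - cy)
        raw[name] = 1.0 / (1.0 + 4.0 * d)   # nearer corner gets a louder share
    total = sum(raw.values())
    return {name: g / total for name, g in raw.items()}

def subwoofer_gain(distance_to_target: float) -> float:
    """Low-frequency channel: rumble harder as the target gets close."""
    return max(0.0, 1.0 - distance_to_target)

print(speaker_gains(0.25, 0.75))  # object in the lower-left quadrant
print(subwoofer_gain(0.2))
```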

Once multiple modalities are set up, then the visual display can be constrained in a variety of ways. It could be blurred, intermittently frozen or blacked out. Configurations of haptic/tactile/auditory stimuli could then be tested against these scenarios to determine how they affect the completion of the task. Conversely, the user could be distracted (for example in a driving game), where it is impossible to pay extensive attention to the placement task. There are lots of opportunities.

Anyway, it’s been a good week.

The Saga Continues, and Mostly Resolves.

Continuing the ongoing saga of trying to get an application written in Visual Studio 2010 in MSVC to run on ANY OTHER WINDOWS SYSTEM than the dev system. Today, I should be finishing the update of the laptop from Vista to Win7. Maybe that will work. Sigh.

Some progress. It seems you can’t use the “Global\” namespace prefix in the way specified in the Microsoft documentation for CreateFileMapping() unless you want to run everything as admin (creating objects in the global namespace requires the SeCreateGlobalPrivilege). See StackOverflow for more details.

However now the code is crashing on initialization issues. Maybe something to do with OpenGL?

It’s definitely OpenGL. All apps that use it either crash or refuse to draw.

Fixed. I needed to remove the drivers and install NVIDIA’s (earlier) versions. I’m not getting the debug text overlay, which is odd, but everything else is working. Sheesh. I may re-install the newest drivers since I now have a workable state that I know I can reach, but I think it’s time to do something else than wait for the laptop to go through another install/reboot cycle.

Started writing the haptic paper. Targets are CHI, UIST, or HRI. Maybe even MIG? This is now a very different paper from the Motion Feedback paper from last year, and I’m not sure what the best way to present the information is. The novel invention part is the combination of a simple (i.e. 3-DOF) haptic device with an N-DOF force-based device attached. The data shows that this combination has much lower error rates and faster task completion times than the other configurations (tactor only and open loop), and the same completion times as a purely haptic system. Not sure how to organize this yet…

This is also pretty interesting… http://wintersim.org/. Either for iRevolution or ArTangibleSim

The unbearable non-standardness of Windows

I have been trying to take the Phantom setup on the road for about two weeks now. It’s difficult because the Phantom uses FireWire (IEEE 1394), and it’s hard to find something small and portable that supports that.

My first attempt was to use my Mac Mini. Small. Cheap(ish). Ports galore. Using Boot Camp, I installed a copy of Windows 7 Pro. That went well, but when I tried to use the Phantom, the system would hang when attempting to send forces. Reading joint angles was OK, though.

I then tried my new Windows 8 laptop, which has an extension slot. The shared memory wouldn’t even run there; access to the shared space appears not to be allowed.

The next step was to try an old development laptop that had a Vista install on it. The Phantom ran fine, but the shared memory communication caused the graphics application to crash. So I migrated the Windows 7 install from the Mac to the laptop, where I’m currently putting all the pieces back together.

It’s odd. It used to be that if you wrote code on one Windows platform, it would run on all Windows platforms. Those days seem long gone. It looks like I can get around this problem if I change my communication scheme to sockets or something similar, but I hate that. Shared memory is fast and clean.

Slow. Painful. Progress. But at least it gives me some time to do writing…

Results?

Looks like we got some results with the headset system. Still trying to figure out what it means (other than the obvious that it’s easier to find the source of a single sound).

[Figure: headset preliminary results]

Here are the confidence intervals:

[Figure: confidence intervals]

Next I try to do something with the Phantom results. I think I may need some more data before anything shakes out.

Moving beyond PoC

Switched out the old, glued-together stack of sensors for a set of C-section parts that allow pressure on the sensor to be independent of the speaker. They keep falling off, though.

Trying now with more glue and cure time. I also need to get some double-stick tape.

More glue worked!
[Photo: IMG_2185]

Modified the code so that multiple targets can exist and experimented with turning forces off.

More refining.

Working on constraint code. Got the framework done, but didn’t have enough sleep to be able to do the math involved. So instead I…

Got the actuators mounted on the Phantom! Aside from having one of the force sensors break during mounting, it went pretty smoothly. I may have to adjust the sensitivity of the sensors so that you don’t have to press so hard on them. At the current setting, the voice coils aren’t behaving at higher grip forces. But the ergonomics feel pretty good, so that’s nice.

[Photo: actuators mounted on the Phantom (IMG_2183)]

Random bits

I think I know what the vibroacoustic study should be. I put an actuator on the Phantom and drive wav files based on the material associated with the collision. I can use the built-in haptic pattern playback as a control. To make the wav files, it might be as simple as recording the word, or using a microphone to contact a material, move across it, and lift off (personally, I like this because it mimics what could be done with telepresence). Multiple sensor/actuator pairs can be used in a later study. A rough sketch of the material-to-sound mapping is below.
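
A minimal sketch of that material-to-sound mapping, just to pin the idea down. The wav file names are placeholders, and the playback call uses Python’s standard winsound module as a stand-in; the real implementation would presumably go through the existing DirectX sound path.

```python
# Sketch of driving recorded contact sounds from collision materials.
# The file names are placeholders; winsound is just a convenient stand-in
# for the DirectX sound path used in the actual setup.
import sys

MATERIAL_WAVS = {
    "wood":  "wood_contact.wav",
    "metal": "metal_contact.wav",
    "cloth": "cloth_contact.wav",
}

def on_collision(material: str) -> None:
    """Play the recorded contact sound for the material the probe just hit."""
    wav = MATERIAL_WAVS.get(material)
    if wav is None:
        return  # fall back to the built-in haptic pattern playback (the control)
    if sys.platform == "win32":
        import winsound
        winsound.PlaySound(wav, winsound.SND_FILENAME | winsound.SND_ASYNC)

on_collision("wood")  # e.g., the probe just hit the wood block
```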

Which means that I don’t actually need the Phidgets code in the new KF hand codebase. I’m going to include it anyway, simply because I’m so close and can use it later.

Come to think of it, I could put an actuator on a mouse as well and move over materials?

Tasks for today:

  • Finish getting the Phidgets code working in KF_Hand_3 – done
  • Start to add sound classes – done inasmuch as sounds are loaded and played using the library I wrote. More detail will come later.
  • Start to integrate Phantom. Got HelloHapticDevice2 up and running again, as well as quite a few demos