Category Archives: Graphics APIs

TypeScript Headers and Browser Quirks

It’s been a pretty good week. The WebGL graphics in the directive are connected to the user functionality in the controller, I have tooltips running, and I even have raycasting working, so the 2D items appear in the overlay plane above the 3D object:

AngularJSWebGl (screenshot of the running application)

The big problem that I needed to chase down was circular references in the TypeScript files. TypeScript uses reference path comments to tell the compiler where to look for type and structure information. Below is the information that I need for the Angular module that creates the application shown above:

/// <reference path="../../definitelytyped/angularjs/angular.d.ts" />
/// <reference path="../controllers/WGLA1_controller.d.ts" />
/// <reference path="../directives/WGLA2_directives.d.ts" />

Note that there are paths for both the controller and the directive code. Pointing directly to the code file is fine here, but I have a case where my WebGLCanvas has to know about WebGLComponents and vice versa. The TypeScript compiler (tsc) doesn’t like that, and barfs a ‘duplicate definition’ error. At this point, I was wondering why TypeScript doesn’t have a #pragma once directive that would prevent this sort of thing, or even an #ifndef capability. It’s a preprocessor, after all, and it should be able to do this. Easily.

But TypeScript does have interfaces. So in this case, I put interfaces for both modules in a single file, which I could then refer to in the downstream files and avoid the circular dependency issue.
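
To make that concrete, the shared file looks something like this (a minimal sketch; the file name and interface members here are hypothetical, not the actual project code):

/// WGLA0_interfaces.d.ts (hypothetical file name), referenced by both
/// WebGLCanvas and WebGLComponents instead of each other's code files
declare module WebGLInterfaces {
   export interface ICanvas {
      getOverlayContext():CanvasRenderingContext2D;
   }
   export interface IComponent {
      setCanvas(canvas:ICanvas):void;
   }
}

Each code file then adds a /// <reference path=…/> to the interface file and types against ICanvas and IComponent, so neither file has to reference the other directly.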

The other issue was browsers not playing well together. I kind of thought that we had gotten beyond that, but no.

I develop with IntelliJ, and their debugger plays the best with Chrome, so that’s my default browser. At the end of the day, I’ll check to see that everything runs in IE and FF. And today FF was not playing well, and the tooltips I worked so hard on were not showing. WTF, I say.

If you look at the screenshot above, you’ll see the white text at the upper left. That’s my real-time logging (it’s pointless to write to the console at 30hz). And I could see that the unit mouse values were NaN. Again, WTF.

Now FF has my favorite debugger, and it even works (generally) with TypeScript, as long as you have all the .ts and .map files alongside your .js files. So I stepped into the code at the handleMouseEvents() method in WebGlCanvasClasses and started looking.

I’ve been getting the mouse coordinate from MouseEvent.offsetX. That turns out to be used by IE and Chrome, but not FF. So I changed

var sx:number = ev.offsetX;

to

var sx:number = ev.offsetX | ev.layerX;

All fixed, I thought. But wait! There’s more! It turns out that IE has both of these values, and they don’t mean the same thing. So in the end I wind up with the following monkeypatch:

handleMouseEvents = (ev:MouseEvent):void => {
    // FF only supplies layerX/Y, Chrome supplies offsetX/Y, and IE supplies
    // both (with different meanings), so take whichever pair is smaller.
    var sx = ev.layerX;
    var sy = ev.layerY;

    if(ev.offsetX < sx){
        sx = ev.offsetX;
        sy = ev.offsetY;
    }
}

This works because the smaller value has to be the coordinate of the mouse on the div I’m interested in, since all screen coordinates increase from 0. So it’s quick, but jeez.

Text Overlay for ThreeJS with Angular and TypeScript

This is not my first foray into WebGL. The last time, I was working on a 3D charting API using the YUI framework, which could do things like this:

Personally, I can’t do any debugging at 30fps without a live list of debugging text that I can watch, so almost immediately after the ‘hello world’ spinning cube, I set that up. And now I’m in the middle of moving my framework over to Angular and TypeScript. For the most part, I like how things are working out, but when it comes to lining up a transparent text plane over a ThreeJS element, YUI gives a lot more support than Angular. What follows is so brute-force that I feel like I must be doing it wrong (there may be a jquery-lite pattern, but after a few StackOverflow suggestions that didn’t work, I went with this).

First, this all happens in the directive. I try to keep that pretty clean:

// The webGL directive. Instantiates a webGlBase-derived class for each scope
export class ngWebgl {
   private myDirective:ng.IDirective;

   constructor() {
      this.myDirective = null;
   }

   private linkFn = (scope:any, element:any, attrs:any) => {
      //var rb:WebGLBaseClasses.RootBase = new WebGLBaseClasses.RootBase(scope, element, attrs);
      var rb:WebGlRoot = new WebGlRoot(scope, element, attrs);
      scope.webGlBase = rb;
      var initObj:any = {
         showStage: true
      };
      rb.initializer(initObj);
      rb.animate();
   };

   public ctor = ():ng.IDirective => {
      if (!this.myDirective) {
         this.myDirective = {
            restrict: 'AE',
            scope: {
               'width': '=',
               'height': '=',
            },
            link: this.linkFn
         }
      }
      return this.myDirective;
   }
}
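
For reference, wiring the factory into the Angular module is the usual directive registration; roughly (the module name here is a placeholder, not the application’s actual module):

// Hypothetical registration: ctor() returns the ng.IDirective built above
var wgl:ngWebgl = new ngWebgl();
angular.module('wglApp', [])
   .directive('ngWebgl', wgl.ctor);

In the markup that gives you something like <ng-webgl width="640" height="480"></ng-webgl>, with width and height bound into the isolate scope.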

All the interfacing with the WebGL code happens in the linkFn() method. Note that the WebGlRoot instance gets assigned to the scope; this allows for multiple canvases.

WebGlRoot is a class that inherits from WebGLBaseClasses.CanvasBase, which is one of the two big classes I’m currently working on. It’s mostly there to make sure that everything inherits correctly and I don’t break that without noticing :-)

Within WebGLBaseClasses.CanvasBase is the initializer() method. That in turn calls the methods that set up the WebGL and the ‘stage’ that I want to interact with. The part we’re interested in for our overlay plane is the overlay canvas’ context, which you’ll need to draw into later:

overlayContext:CanvasRenderingContext2D;

This is set up along with the renderer; the interesting bits are the CSS class assignments and the overlay canvas creation:

this.renderer = new THREE.WebGLRenderer({antialias: true});
this.renderer.setClearColor(this.blackColor, 1);
this.renderer.setSize(this.contW, this.contH);

// element is provided by the angular directive
this.renderer.domElement.setAttribute("class", "glContainer");
this.myElements[0].appendChild(this.renderer.domElement);

var overlayElement:HTMLCanvasElement = document.createElement("canvas");
overlayElement.setAttribute("class", "overlayContainer");
this.myElements[0].appendChild(overlayElement);
this.overlayContext = overlayElement.getContext("2d");

The first thing to notice is that I have to add CSS classes to the elements. These are pretty simple, just setting absolute positioning and the z-index:

.glContainer {
    position: absolute;
    z-index: 0;
}

.overlayContainer {
    position: absolute;
    z-index: 1;
}

That forces everything to have the same upper left corner. And once that problem was solved, drawing is pretty straightforward. The way I have things set up is with an animate() method that uses requestAnimationFrame(), which then calls the render() method. That draws the 3D, and then hands the 2D context off to the draw2D() method:

draw2D = (ctx:CanvasRenderingContext2D):void =>{
   var canvas:HTMLCanvasElement = ctx.canvas;
   canvas.width = this.contW;
   canvas.height = this.contH;
   ctx.clearRect(0, 0, canvas.width, canvas.height);
   ctx.font = '12px "Times New Roman"';
   ctx.fillStyle = 'rgba( 255, 255, 255, 1)'; // Set the letter color
   ctx.fillText("Hello, framecount: "+this.frameCount, 10, 20);
};

render = ():void => {
   // do the 3D rendering
   this.camera.lookAt(this.scene.position);
   this.renderer.render(this.scene, this.camera);
   this.frameCount++;

   this.draw2D(this.overlayContext);
};
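
The animate() method itself isn’t shown here; a minimal sketch of it, assuming the class members above, is just:

animate = ():void => {
   // schedule the next frame, then draw this one
   requestAnimationFrame(this.animate);
   this.render();
};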

I’m supplying links to the running code and directives, but please bear in mind that this is in-process development and not a minimal application for clarity.

Milestones

The first draft of the paper is done! It comes out at about 12 pages. I’ll need to cut it down to 6 to submit for CHI 2014 WIP. Easier than writing though. Of course, that’s just the first draft. More to come, I’m guessing. Still, it’s a nice feeling, and since I’ve burned through most of my 20% time, it’s time for me to get back to actually earning my pay, so I’ll be taking a break from this blog for a while. More projects are coming up though, so stay tuned. I’ll finish up this post with some images of all the design variations that led to the final, working version:

Prototype Evolution (click to enbiggen)

The chronological order of development is from left to right and top to bottom. Starting at the top left:

  • The first proof of concept. Originally force-input / motion-feedback. It was with this system that I discovered that all actuator motion had to be in relation to a proximal, relative base.
  • The first prototype. It had six degrees of freedom, allowing a user to move a gripper within a 3D environment and grab items. It worked well enough that it led to…
  • The second prototype. A full 5-finger gripper attached to an XYZ base. I ran into problems with this one. It turned out that motion feedback required too much of a cognitive load to work; the user would lose track of where their fingers were, even with the proximal base. So that led to…
  • The third prototype. This used resistive force sensors and vibrotactile feedback. The feedback was provided using voice coils, which were capable of full audio range, which meant that all kinds of sophisticated contact and surface effects could be provided. That proved the point that 5 fingers could work with vibrotactile feedback, but the large-scale motions of the base seemed to need motion (I’ve since learned that isometric devices are most effective over short ranges). This was also loaded with electronic concepts that I wanted to try out: Arduino sensing, MIDI synthesizers per finger, etc.
  • To explore direct motion for the base, the fourth prototype was a 3D-printed 5-finger Force Input / Vibrotactile Output (FS/VO) system that sits on top of a mouse. This was a plug-and-play substitution that worked with the previous electronics and worked quite nicely, though the ability to grip doesn’t give you much to do in the XY plane.
  • To get 3D interaction, I took two FS/VO modules and added them to a Phantom Omni. I also dropped the Arduino and the synthesizer, using XAudio2 8-channel audio and a Phidgets interface card. This system worked very nicely. The FS/VO elements combined with a force-feedback base turned out to be very effective. That’s what became the basis for the paper, and hopefully the basis for future work.
  • Project code is here (MD5: B32EE89CEA9C8E02E5B99BFAF24877A0).

Deadlines and schedules

I was just asked to see how many hours I have left for working on this research. It turns out that, at the rate I’m going, I can continue until mid-October. This is basically a big shout-out to Novetta, who granted a continuation of my 20% time that was originally a hiring condition when I went to work for Edge. Thanks. And if you’d like a programming job in the DC area that supports creativity, give them a call.

I just can’t make the audio code break in writing out results. Odd. Maybe a corrupt input file can have unforeseen effects? Regardless, I’m going to stop pursuing this particular bug without more information.

Fixing the state problem. Done.

Fixing the saving issue. Also changing the naming of the speakers to reflect Dolby or not. Done.

New version release built and deployed.

And back to Phantom++

TestScreenV1

I started to add in the user interface that will support experiments. Since it was already done, I pulled in most of the Fluid code from the Vibrotactile headset, which made things pretty easy. I needed to add an enclosing control-system class that can move commands between the various pieces.

I’ve also decided that each sound will have an associated object with it. This allows each object to have a simple “acoustic” texture that doesn’t require any fancy data structure.

At this point, I’m estimating that the first version of the test program should be ready by Friday.

Sounds like Deja Vu.

Adding custom speaker number and placement as per Dr. Kuber’s request.

Looks like the dot product should do the trick.
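
The idea, sketched in TypeScript for illustration (the test program itself is C++, and the layout values below are made up): each speaker’s gain is the dot product of the unit vector toward the virtual source and the unit vector toward that speaker, clamped at zero so speakers facing away stay silent.

// Illustrative only: the speaker layout and 2D vectors are assumptions
function speakerGains(source:number[], speakers:number[][]):number[] {
   var sLen:number = Math.sqrt(source[0] * source[0] + source[1] * source[1]);
   var sx:number = source[0] / sLen;
   var sy:number = source[1] / sLen;
   return speakers.map((sp:number[]) => {
      var len:number = Math.sqrt(sp[0] * sp[0] + sp[1] * sp[1]);
      var dot:number = sx * (sp[0] / len) + sy * (sp[1] / len);
      return Math.max(0, dot);   // speakers facing away from the source stay silent
   });
}

// e.g. four speakers at the compass points, source ahead and to the right
var gains:number[] = speakerGains([1, 1], [[0, 1], [1, 0], [0, -1], [-1, 0]]);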

Done! With only a couple of string compare issues. I also had to make the speaker index jump around the subwoofer channel until I can work out how to set the EQ.

And it looks like there are bugs in the code. It seems that you cannot do zero speed sessions. And the writing out of results with multiple sound files looks pretty confused. I’m not sure if extra CRs are being put in there or if some of the data isn’t being written out. Need to run some more examples.

What you get when you combine FLTK, OpenGL, DirectX, OpenHaptics and shared memory

Wow, the title sounds like a laundry list 🙂

Building a two-fingered gripper

Going to add a sound class to SimpleSphere so that we know what sounds are coming from what collision. Didn’t do that, but I’m associating the sounds by index, which is good enough for now.
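
As a sketch of what that index association amounts to (illustrative names, not the test-program code):

// Illustrative sketch (names made up): sphere i plays sound i, nothing fancier
var sphereNames:string[] = ["sphere0", "sphere1", "sphere2"];
var soundFiles:string[] = ["tap.wav", "scrape.wav", "thud.wav"];

function soundForSphere(index:number):string {
   return soundFiles[index];   // same index into both collections
}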

Need to calculate individual forces for each sphere in the Phantom and return them. Done.

To keep the oscillations at a minimum, I’m passing the offsets from the origin. That way the loop uses the device position as the basis for calculations within the haptic loop.
Here’s the result of today’s work:

Flailing, but productive flailing.

Basically spent the whole day figuring out how the 4×4 Phantom matrix equates to the rendering matrix (I would have said OpenGL, but that’s not true anymore; I’m using the lovely math libraries from the OpenGL SuperBible, 5th Edition, which make it kinda look like the OGL of yore).

Initially I thought I’d just use the vector components of the rotation 3×3 from the Phantom to get the orientation of the tip, but for some reason, parts of the matrix appear inverted. So instead of using them directly, I multiply the modelview matrix by the Phantom matrix. Amazingly, this works perfectly.

To make sure that this works, I rendered a sphere at the +X, +Y, and +Z axes in the local coordinate frame. Everything tracks. So now I can create my gripper class and get the positions of the end effectors from the class. And since the position is in the global coordinate frame, it kind of comes along for free.
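
Conceptually, that multiplication looks like the sketch below (written with three.js matrix types for familiarity; the actual code uses the SuperBible math library, and the names are made up):

// Illustrative sketch; variable names are not from the project code
var modelView:THREE.Matrix4 = new THREE.Matrix4();   // current modelview matrix
var phantomMat:THREE.Matrix4 = new THREE.Matrix4();  // the 4x4 read from the device

var tipFrame:THREE.Matrix4 = new THREE.Matrix4();
tipFrame.multiplyMatrices(modelView, phantomMat);    // modelview * phantom

// markers at +X, +Y, +Z of the local (tip) coordinate frame
var xMarker:THREE.Vector3 = new THREE.Vector3(1, 0, 0).applyMatrix4(tipFrame);
var yMarker:THREE.Vector3 = new THREE.Vector3(0, 1, 0).applyMatrix4(tipFrame);
var zMarker:THREE.Vector3 = new THREE.Vector3(0, 0, 1).applyMatrix4(tipFrame);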

Here’s a picture of everything working:
PhantomAxis
Tomorrow, I’ll build the gripper class and start feeding that to the Phantom. The issue will be to sum the force vectors from all the end effectors in a reasonable way.

Some Assembly Required

Integrating all the pieces into one test platform. The test could be to move a collection of physically based spheres (easy collision detection) from one area to another. Time would be recorded between the indication of a start and stop (spacebar, something in the sim, etc.). Variations would be:

  • Open loop: Measure position and pressure, but no feedback
  • Force Feedback (Phantom) only
  • Vibrotactile feedback only
  • Both feedbacks

Probably only use two actuators for the simplicity of the test rig. It would mean that I could use the laptop’s headphone output. Need to test this by wiring up the actuators to a micro stereo plug. Radio Shack tonight.

Got two-way communication running between Phantom and sim.

Have force magnitude adjusting a volume.

Added a SimpleSphere class for most of the testing.

Moar Phidgeting

  • Brought in my fine collection of jumpers and connectors. Next time I won’t have to build a jumper cable…
  • Built the framework for the new hand test. The basic graphics are running
  • Added cube code to the FltkShaderSupport library. Here’s everything running:
  • KF_framework
  • Next, I’m going to integrate the Phidget sensor code into the framework, then hook that up to sound code.
  • Had Dong register for Google’s Ingress, just to see what’s going on.
  • Loaded in the Phidgets example code; the library that works is the x86 library. Using the 64-bit library results in unresolved external errors.
  • There are a lot of straight C examples. Just found the C++ class examples, simple.h and simple.cpp.

Enhancements

  • Meeting with Dr. Kuber.
    • Add a “distance” component to the test and a multiple emitter test
    • Got a bunch of items to add actuators to: Hardhat, noise-blocking headphones, and a push-to-talk mic.
  • Added name and gender fields to the GUI and cleaned up the menus
  • Working on adding multiple sounds
    • Added a ‘next’ button. Once pushed, the sources can show until the center is clicked again.
    • I think the test should have options for how the sounds are added
      • Permutations (A, then B, then C, then AB, AC, BC, ABC)
      • All (Going to start with this)
      • Random?
  • Added variable distance