Sunday, May 15, 2005

5-15-2005 More on Fish

With Lee's new graphics, I have put together a new fish scene. For the orange fish, I beefed up the manipulation force and inverted the action, so that the fish are attracted to the mouse. They swim toward the cursor, swim in ellipses around it, and leave at tangents - this is an emergent behavior in the movement algorithm that looked kind of neat during testing. Having them lock onto the cursor was just kind of boring. The smaller purple fish flee the cursor as before.

5-15-2005

It's all pretty much working now. We swapped out Lee's G5 tower for a Dell D600 laptop, giving us a massive improvement in performance. In the heaviest scenes, I estimate that the latency is down to about a quarter of a second. Foggy windows, Tinkertoys, and Fish all work very well, and JJ's scary blob transition scene has a very different character.

Working on a few things still. Tinkertoys has a problem where background layers are being drawn on top of the layer that the drawing API works on. I'll need to look into that more. Lee put up some new fish graphics, and I'm writing code for this new species of fish that will have them moving a bit differently.

Incidentally, I added code to all of the fish that has them react when they are directly touched. It's based on a hitTest method, and alters the fish's destination point as if the user had touched that instead. The old movement method is still in there, so fish will tend (though not always) to steer around the user, but they will also try to swim away very fast when they are touched. I think this provides the kind of immediate feedback that tells users that they are indeed manipulating the fish.
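To make the touch reaction concrete, here is a rough sketch in plain JavaScript (not the actual ActionScript) of the idea: a direct hit retargets the fish's destination point away from the touch and boosts its speed. All names here (Fish, reactToTouch, fleeBoost) and the hit box size are illustrative, not from the project's code.

```javascript
// Sketch of the touch-reaction idea: a direct hit is treated like a
// manipulation event at the fish's own position, so the fish retargets
// its destination away from the touch and swims off fast.
class Fish {
  constructor(x, y) {
    this.x = x; this.y = y;
    this.destX = x; this.destY = y;
    this.speed = 1;        // normal cruising speed multiplier
    this.fleeBoost = 4;    // hypothetical "swim away fast" multiplier
  }
  // crude stand-in for MovieClip.hitTest: point inside a 20x20 box
  hitTest(px, py) {
    return Math.abs(px - this.x) < 10 && Math.abs(py - this.y) < 10;
  }
  reactToTouch(px, py) {
    if (!this.hitTest(px, py)) return false;
    // aim the destination directly away from the touch point...
    this.destX = this.x + (this.x - px) * 10;
    this.destY = this.y + (this.y - py) * 10;
    // ...and swim there faster than usual
    this.speed = this.fleeBoost;
    return true;
  }
}
```

A touch just to the right of the fish flips its destination off to the left, which is the immediate feedback described above.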

Wednesday, April 20, 2005

4-20-2005 Pushing and Pulling

I think I've got 'pushing' and 'pulling' ironed out. Both activities are based upon the Barslund repulsion algorithm that I picked up from levitated.

The basic algorithm, as implemented in Flash, is:

// vector from this object to the mouse, and its length
xDif = _root._xmouse - this._x;
yDif = _root._ymouse - this._y;
distance = Math.sqrt(xDif*xDif + yDif*yDif);
// offset the object away from the mouse; the effect scales with force
// and falls off with the square of the distance
tempX = this._x - (force/distance)*(xDif/distance);
tempY = this._y - (force/distance)*(yDif/distance);
// ease halfway back toward the home position while applying the offset
this._x = (homeX - this._x)/2 + tempX;
this._y = (homeY - this._y)/2 + tempY;

The target object is checked against the mouse coordinates for both distance and orientation. Force is a constant set up earlier in the code; for the demo it was set to something like 70. homeX and homeY in the demo are the 'base' position of the object. The result of this equation is that the object squishes away when the mouse gets close to it, orbiting around a fixed point.

I modified this equation extensively, making a method called scoot:
public function scoot(pusherX, pusherY){
    xDif = pusherX - this._x;
    yDif = pusherY - this._y;
    distance = Math.sqrt(xDif*xDif + yDif*yDif);
    // react only inside the 120-pixel range; the lower bound keeps
    // force/distance from blowing up at point-blank range
    if((distance < 120) && (distance > 10)){
        vx -= ((force/distance)*(xDif/distance)) / 15;
        vy -= ((force/distance)*(yDif/distance)) / 15;
    }
}
The core is similar, but the results are very different. pusherX and pusherY are the coordinates of a pushing object, which in Luminance are the tracking nodes. Every frame, the screen object is moved by its vx and vy values (v is short for velocity). When a node comes within range, in this case less than 120 pixels, it starts to influence the screen object's movement values (vx and vy). The closer the node is, the greater the effect. In this example, I have the alteration (((force/distance)*(yDif/distance))/15) being subtracted from the object's velocity, which causes the object to be 'pushed' away from the tracking node; counterintuitive, I know, but that's how it works out. Adding the alteration value to the velocity instead results in a 'pulling' behavior, which can be useful for other segments of Luminance.
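The frame-by-frame behavior can be sketched in JavaScript like so: each frame, nearby pushers nudge the velocity, then velocity is integrated into position. The constants (force 125, 120-pixel range, divide-by-15 damping) follow the post; the structure around them is illustrative, not the production ActionScript.

```javascript
// Push/pull sketch: subtracting the alteration pushes the object away
// from the pusher; adding it pulls the object toward the pusher.
const FORCE = 125, RANGE = 120, MIN_DIST = 10, DAMP = 15;

function scoot(obj, pusherX, pusherY, pull) {
  const xDif = pusherX - obj.x;
  const yDif = pusherY - obj.y;
  const distance = Math.sqrt(xDif * xDif + yDif * yDif);
  if (distance < RANGE && distance > MIN_DIST) {
    const ax = ((FORCE / distance) * (xDif / distance)) / DAMP;
    const ay = ((FORCE / distance) * (yDif / distance)) / DAMP;
    obj.vx += pull ? ax : -ax;   // closer node => bigger nudge
    obj.vy += pull ? ay : -ay;
  }
}

function step(obj) {             // one frame of movement
  obj.x += obj.vx;
  obj.y += obj.vy;
}

// push an object sitting 50px to the right of a node at the origin
const pushed = { x: 50, y: 0, vx: 0, vy: 0 };
for (let frame = 0; frame < 10; frame++) {
  scoot(pushed, 0, 0, false);
  step(pushed);
}
```

After a few frames the object drifts steadily away from the node; passing true for pull reverses the drift.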

I'm not totally certain that I need to divide the alteration value at the end (((force/distance)*(yDif/distance))/15). Without this operation, objects repelled by nodes tended to be flung away at a really high velocity. I think I can simplify the calculation by just reducing the force constant (right now it is set to 125). I'll do that in some later experiments.

To-Do:
  • Figure out how to sort arrays by length. This will let me mod the proximity engine to figure out which cell has the highest population in/around it, which would tell us where to point the eyeball in the opening segment.
  • Come up with a distance-checking algorithm to check the neighbors returned by the proximity engine. Pushing and pulling are still a little flaky - nodes are shifting faster than the engine seems able to keep up with.
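For the first to-do item, the sort itself is just a comparator on array length, sketched here in JavaScript (ActionScript 2.0 would take the same comparator via Array.sort). The cell contents are placeholder strings standing in for clip references.

```javascript
// Sort grid cells by population so the most crowded cell comes first -
// that cell is where the eyeball would point.
const cells = [
  ["node1"],
  ["node2", "node3", "node4"],
  [],
  ["node5", "node6"],
];

// slice() copies first so the proximity engine's own array is untouched
const byPopulation = cells.slice().sort((a, b) => b.length - a.length);
const busiest = byPopulation[0];
```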

Monday, April 18, 2005

4-18-2005 Other Updates

Whoa, been a while.
Been busy, nevertheless. First things first:

Everything has been converted to Actionscript 2.0. All of the screen objects I come across (window tiles, tracking nodes, etc.) are now instantiated from base classes that extend the basic movieClip class. We've already seen substantial improvements in development potential, as well as performance and object management.

The conversion of the tracking nodes to objects was highly successful. I have a few base behaviors assigned to the tracking node primitive, described in the external class file trackingNode.as. At movie startup, a set of 40 tracking nodes are instantiated on the stage, with references to the nodes stored in a master list called nodeList. A function in the main movie parses the incoming XML data and invokes a movement method called setCoord on appropriate nodes, which takes care of moving them around the screen. Another behavior tells each node to hide off of the stage when it gets identical data too many times in a row.
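The node behaviors described above can be sketched in JavaScript as follows. This is an illustration of the logic, not the contents of trackingNode.as; the stale-update threshold and off-stage parking spot are made-up values.

```javascript
// Sketch of the tracking-node behaviors: setCoord moves a node, and a
// node that receives identical data too many times in a row hides
// itself off the stage.
const STALE_LIMIT = 5;        // hypothetical "too many times in a row"
const OFFSTAGE = -1000;       // hypothetical off-stage parking position

class TrackingNode {
  constructor(id) {
    this.id = id;
    this.x = OFFSTAGE;
    this.y = OFFSTAGE;
    this.staleCount = 0;
  }
  setCoord(x, y) {
    if (x === this.x && y === this.y) {
      // same data again: count it, and hide once the limit is reached
      this.staleCount++;
      if (this.staleCount >= STALE_LIMIT) this.hide();
    } else {
      this.staleCount = 0;
      this.x = x;
      this.y = y;
    }
  }
  hide() {
    this.x = OFFSTAGE;
    this.y = OFFSTAGE;
  }
}

// master list of node references, as in the post's nodeList
const nodeList = [];
for (let i = 0; i < 40; i++) nodeList.push(new TrackingNode(i));
```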

When a node is de-assigned in EyesWeb, its coordinates are not unassigned; they are simply left at their last value. This meant that, in the previous setup, when EyesWeb lost tracking on a given point, it would appear to get stuck on the stage in Flash, holding position until it was reassigned (the actual explanation is a little more complicated, but this is a good enough description of the behavior).

The approach of setting up listeners in Flash to handle object updates didn't exactly pan out, though I managed to work around it. Most of the objects that I have built so far are sort of flaky when their internal 'onEnterFrame' scripts are involved. Directly interacting with an object from the main frame script does seem to trigger its internal behavior - I'll have to do more experimentation to get more reliable results.

4-18-2005 - the Proximity Manager

The grid-based proximity sensor class by Grant Skinner is implemented and working fine. Caught a bit of a break on that one, though it took some careful study, as Mr. Skinner is a bit stingy on comments in this particular bit of code. It works thusly:

  • A proximity manager object is instantiated. This object is described in the external file ProximityManager.as. When instantiated, a grid size is specified - this size value should multiply evenly into the dimensions of the stage for best results.
  • The proximity manager keeps track of managed objects in a two dimensional array, effectively creating a sort of virtual grid on the stage. When an object is attached to this proximity manager through its addItem method, its coordinates are mathematically approximated to a single cell on this grid, and a reference to that object is stored in that cell of the array.
  • The business end of the proximity manager is its getNeighbours method. The main frame script invokes this method, passing it a reference to (get this) any movie clip in existence on the stage. The manager figures out where the referenced clip would be on its virtual grid, then it builds up a list of all of the managed clips in that cell and all of the neighboring cells. After that, it returns an array full of movie clip references (the neighbors) to whatever called getNeighbours.
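The scheme in the three bullets above can be sketched in JavaScript like this. To be clear, this is my own illustration of the grid idea, not Grant Skinner's actual ProximityManager.as; the cell-key format and class shape are assumptions.

```javascript
// Grid-based proximity sketch: items are binned into cells by integer
// division of their coordinates, and getNeighbours collects everything
// in the target's cell plus the eight surrounding cells.
class ProximityManager {
  constructor(gridSize) {
    this.gridSize = gridSize;
    this.grid = {};            // cell key "col,row" -> array of items
  }
  cellOf(item) {
    return [Math.floor(item.x / this.gridSize),
            Math.floor(item.y / this.gridSize)];
  }
  addItem(item) {
    const [c, r] = this.cellOf(item);
    const key = c + "," + r;
    (this.grid[key] = this.grid[key] || []).push(item);
  }
  getNeighbours(item) {
    const [c, r] = this.cellOf(item);
    const result = [];
    for (let dc = -1; dc <= 1; dc++) {
      for (let dr = -1; dr <= 1; dr++) {
        const bucket = this.grid[(c + dc) + "," + (r + dr)];
        if (bucket) result.push(...bucket);
      }
    }
    return result;
  }
}
```

With a grid size of 40, a shark at (10, 10) sees a fish at (50, 50) in the adjacent cell, while a fish at (200, 200) is outside the nine-cell mask and is never even considered.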

We can do other things with this neighbor list once it is received. For example, distance calculations can get somewhat CPU-intensive, especially when you are doing a lot of them (we use the Pythagorean theorem to determine distance, so we are calculating a lot of square roots). Using Skinner's proximity manager, we can filter out more distant objects using simple, computationally cheap division operations. This saves CPU time for the explicit distance calculations that remain, and ultimately allows more artwork on the screen.

It should be noted that the grid size specified is pretty important. Let's say you want a 'shark' object to interact whenever it gets within 300 pixels of a 'fish' object. If the grid size is only set to, say, 25, this means that the shark will only really see the fish when it gets inside of 50 pixels. Let me illustrate this a bit better:

Let's say you have three fish swimming around on the stage:



The proximity manager will track references of these objects in a virtual grid, visualized here:



Let's say a shark is added to the stage, so that it looks like this:



Let's say the frame script calls the getNeighbours method of the manager, sending a reference to the shark clip as its parameter. The proximity manager checks all of the surrounding cells, ignoring the rest.



A list of tracked objects in these surrounding cells is built, and sent back as the list of neighbors. In this case, it would return 'fish3' as the only neighbor of 'shark.' If the shark were in the next cell up, the manager would return both 'fish1' and 'fish3' as its neighbors.


Part of the problem, though, is that the cell size is only 40 pixels across. If the shark fell into the lower left corner of one cell, and a fish into the upper right corner of the upper-right neighboring cell, the distance between them would be about 114 pixels. This becomes a problem if I want the shark to react when it is within 200 pixels of a fish; its "sense" range is a square 120 pixels across.

The solution, of course, is to increase the grid size, so that a fish within 200 pixels of the shark will never be outside of the proximity mask. Increasing the grid to this size, though, means that there will be a bit of a dead zone beyond this radius where fish can be returned as neighbors but will technically be over 200 pixels away.



The next step in the solution, then, is to run actual distance calculations on the neighbors returned by the manager, then do some stuff based on whatever passes that test.

Ultimately, this means that the proximity manager is in place to augment distance calculations, not substitute for them.
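That follow-up distance check might look something like this in JavaScript; withinRange is an illustrative name, not something from the project's code.

```javascript
// Narrow the manager's coarse neighbor list with a real Pythagorean
// test before anything reacts, filtering out the dead-zone neighbors.
function withinRange(subject, neighbours, radius) {
  return neighbours.filter(n => {
    const dx = n.x - subject.x;
    const dy = n.y - subject.y;
    return Math.sqrt(dx * dx + dy * dy) <= radius;
  });
}

const shark = { x: 0, y: 0 };
const coarse = [
  { name: "near", x: 120, y: 90 },   // 150px away: passes a 200px test
  { name: "edge", x: 160, y: 120 },  // exactly 200px away: passes
  { name: "dead", x: 250, y: 250 },  // in the grid mask but ~354px away
];
const hits = withinRange(shark, coarse, 200);
```

Comparing squared distances against radius*radius would skip the square root entirely, which matters once the neighbor lists get long.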

Tuesday, March 01, 2005

3-1-2005

Couple of updates:

Was working on a control-panel-type SWF for various movies. I am putting this one on a back burner for now. Ditto for the 'density of motion' sensor in EyesWeb, meant to make the big floaty eye look toward the region with the most screen motion.

Pulled in an Actionscript 2.0 grid-based proximity sensor class, courtesy of one Grant Skinner. This necessitated finding an Actionscript 2.0 version of Laco's tweening library, which happened to be available at his site. I encountered no problems dropping this into an older version of the interactive foggy windows test - just had to update the #includes to import commands.

I have a larger idea for handling tracking nodes, which dovetails into techniques for handling larger populations of screen objects. In the last version of Foggy Windows that I saw, the squares faded out to a given value then stayed clear. JJ had their reactions directly tied to tracking node updates; as updates streamed in, they would trigger object reactions at that rate - the reaction methods are directly called in the input parser.

This approach works well enough in a test environment, but I can foresee management and development issues as our exhibits become more complicated. Right now, I'm thinking of creating a simple 'tracking node' class that the tracking nodes will be instantiated from (as opposed to taking an empty movie clip and drawing a circle in it). This should let us assign default behaviors to them, like clearing to a safe area off the stage when they don't get updates. This may necessitate setting up listeners in each object, while relegating the XML input parser to the role of a broadcaster. I think this will work, assuming I understand those concepts correctly.
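The broadcaster/listener split could be sketched generically like this (in JavaScript; ActionScript 2.0 would typically lean on AsBroadcaster and addListener for the same pattern). Class and method names here are illustrative.

```javascript
// The input parser only broadcasts updates; each tracking node listens
// and reacts to updates carrying its own id.
class Broadcaster {
  constructor() { this.listeners = []; }
  addListener(l) { this.listeners.push(l); }
  broadcast(id, x, y) {
    for (const l of this.listeners) l.onUpdate(id, x, y);
  }
}

class NodeListener {
  constructor(id) { this.id = id; this.x = 0; this.y = 0; }
  onUpdate(id, x, y) {
    if (id !== this.id) return;  // ignore updates for other nodes
    this.x = x;
    this.y = y;
  }
}

const parser = new Broadcaster();   // stands in for the XML input parser
const nodeA = new NodeListener("node_01");
const nodeB = new NodeListener("node_02");
parser.addListener(nodeA);
parser.addListener(nodeB);
parser.broadcast("node_01", 320, 240);
```

The appeal of this shape is that the parser never needs to know what kinds of objects are listening, so prettier objects can subscribe later without touching the input code.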

A little after that, I think I can use those same techniques to instantiate other primitive objects, on which prettier, more interactive objects can be based.

Also to do: get Skinner's proximity manager working with our tracking points. This is critical to later object interactions, though I need the tracking points handled a little differently, first.

Thursday, February 17, 2005

2-17-2005

Couple of updates. We got a screen in our new workspace: 8 foot screen, 8 foot 4 inch wide room. We have to crouch low to get to the camera and capture server, but I think it is all worth it.

Rear capture works! I'm very pleased with the rear projection screen material; our 2000 lumen projector puts out plenty of light for a crisp image on the back. Need to work a bit with some measuring tape to get the alignment between the camera and the screen squared off.

I've also noticed a bright spot in the center of the projected image, not strong enough to be visible from the front but just intense enough to occasionally pass the threshold of the filter in EyesWeb. It will probably take some slight tweaking of the threshold values to correct for this.

Been working a bit on a Flash control panel template for quickly adjusting some of our movie variables. JJ suggested using local shared objects, and I've had some moderately good results with that - I just have to give the movies the same shared object name and path, so that they are looking at the same object. I even got the client movie to check the object for updates at intervals; the trick was to not instantiate a copy of the shared object in the client movie, and instead call SharedObject.getLocal in place of where the local instance would go in the code.

To get the control panel working at this point would require reauthoring our movies with constructors and destructors, to clean out old data, objects, and movie clips, and reinstantiate them with new data. This might not be necessary for everything, though -- I think we could alter things like constants on the fly: proximity triggering values, color values, etc.

Going to have to get into the lab after dark soon to work out some system calibration.

[discontinuity]

Thursday, November 18, 2004

11-17-2004

WOO, breakthrough on the EW-Flash linkup. The "gum" issue was a minor one after all; it seems the sample Flash file was appending every new message to a variable instead of overwriting the previous one, meaning that the variable would take up more and more memory at a linear rate. I fixed this by rewriting all of the "incoming += (new item)" lines to "incoming = (new item)" or the equivalent, and it was pretty stable after that.

JJ's code ended up slotting into place fairly neatly, with few modifications needed. Most of what we did last night had to do with adjusting his code to parse the incoming OSC packets. As they come out of EW, the packets have a name (defined in the OSC block in EW as their 'OSC_Command') and two arguments: the X and Y input coordinates for each point. The arguments always arrive with X first and Y second, so it was not necessary to append or modify that data to differentiate them. In the Flash movie, JJ has an empty movie clip generated for each distinct packet, with a circle drawn at the prescribed coordinates. He also has the movie assigning the clip names based on the incoming packet names, meaning that 1) we might be able to 'smooth' the motion between updates, and 2) the number of points drawn in Flash grows automatically with the number of points sent from EW. I tested this by adding two more transmit subpatches in EW to make a total of five; the Flash movie faithfully drew five circles without any modification.
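The grow-automatically behavior boils down to keying drawing objects by packet name and creating them on demand, sketched here in JavaScript. The packet shape and handlePacket name are illustrative stand-ins for JJ's parser.

```javascript
// One drawing record per distinct packet name, created on demand, so
// the point count tracks whatever EyesWeb sends without code changes.
const clips = {};   // packet name -> { x, y }, standing in for movie clips

function handlePacket(packet) {
  // packet.args is [x, y]; no labels needed because the order is fixed
  const [x, y] = packet.args;
  if (!clips[packet.name]) clips[packet.name] = { x: 0, y: 0 };
  // overwrite (never append) so old coordinates don't pile up in memory
  clips[packet.name].x = x;
  clips[packet.name].y = y;
}

// five transmit subpatches in EW produce five distinct packet names
for (let i = 1; i <= 5; i++) {
  handlePacket({ name: "point_" + i, args: [i * 10, i * 20] });
}
```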

To do:

Increase the number of transmitted points in EW, while checking for stability. Mostly grunt work; I'll probably use the EW 4.0.0 beta to take advantage of its new interface.

Run EW on the sample video file to check for climbing memory usage, indicative of a memory leak or something similar. This may cause stability issues during prolonged usage... something to keep in mind.

Saturday we plan to try a rough prototype of the system with a live camera feed. I expect some degree of offset in where the points draw in relation to the shadow, due to the necessary difference between the projection angle and the camera viewing angle. This is something that can be worked out in later tuning; right now, we just want to test the system from stem to stern.

Monday, November 15, 2004

11-15-2004

Picked up a response this morning on the discussion board post:

extracting the matrix into indevidual values and then exporting it to flash
is the way to go.
i wasnt able to prase a message with more then one value in it to flash, but
something like
/xball_01 = 34
/yball_01 = 234
/xball_02 = 33
/yball_02 = 212
and then merging it in flash works, and might even be simpler then bringing
an array (=matrix) into flash and then spliting it into indevidual values.
anyways, i know of no other way to send multiple values to flash thru osc.
sorry for the lousy spelling.
post back if you find a way to send a couple of values as a single string
and then spliting it in flash into a nice array :)

So... he's mostly affirming my approach with the existing tools.

In the meantime, I tossed the Java Runtime Environment 5 on my machine and fired up flosc. Installation was *very* smooth and easy, no problems starting and running the thing either. So far, I've been able to connect to it from both EW and flash, and using the demo .fla file I was able to get the mouse coordinates to show up in flash.

I also figured out how to increase the number of inputs available on a StringToOSC or ScalarToOSC block, allowing me to bundle multiple data elements in a single OSC packet. I haven't found any naming options for them, but they do consistently come in in the same order. Besides, I can always concatenate some labels in front of the coordinate strings if this becomes a problem.
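Both parsing options from that paragraph are simple enough to sketch in JavaScript: read the bundled values by their fixed position, or fall back to concatenated labels ("x:34") split apart on the Flash side. The formats here are illustrative, not the actual OSC block output.

```javascript
// Option 1: rely on the consistent ordering of the bundled values.
function parseByOrder(args) {
  // fixed order: [x, y]
  return { x: args[0], y: args[1] };
}

// Option 2: concatenate a label onto each value before sending, then
// split it back out in Flash if ordering ever becomes unreliable.
function parseLabeled(args) {
  const out = {};
  for (const item of args) {
    const [label, value] = item.split(":");
    out[label] = Number(value);
  }
  return out;
}
```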

On to the bad news: even three tracked testing points were enough to gum up the system when sending out to Flash. Tracking the test video runs this workstation's processor at about 65-70% capacity if flosc is also running (flosc itself doesn't take much when idle). When Flash is displaying the coordinate data as well, I can watch the processor usage ramp up to 100%, at which point the Flash demo sort of gets "gummed up" and pretty much stops responding. It isn't a hard crash, though... if I stop EW, the system eventually catches up, though the data that streamed out of EW in the meantime seems to get lost. I'll have to do some more work to determine whether it is flosc or Flash gumming up the works.

The best remedy I can think of is to relocate the flash client to another machine. All we have to do is point flash at the IP address of the flosc server and let it run. I think this would work well in the final setup too, because it means we could keep the bulk of the processing with the camera itself, and nothing more than an ethernet cable would need to run to the rendering system (or we could go wireless if we are feeling bold).