ReacTable
Revision as of 10:37, 20 October 2010
- 1 Introduction
- 2 Building the table
- 3 Using reacTIVision
- 4 TUIO in Processing
== Introduction ==
This is a multi-touch table that reads markers placed on the screen and performs a corresponding action, thus recognizing the object on the surface and reacting accordingly. The table uses the reacTIVision software, an application designed to track specially designed fiducial markers. By downloading the TUIO library you can create a program that operates on these markers. The table uses infrared light for finger tracking: a webcam captures the video feed and runs it through reacTIVision; the result is then processed in Processing and displayed back on the screen.
We used the table to display information and protocols when a certain object was placed on it.
=== How reacTIVision works ===
In a nutshell, the system works like this: reacTIVision tracks specially designed fiducial markers in a real-time video stream. The source image frame is first converted to a black & white image with an adaptive thresholding algorithm. This image is then segmented into a tree of alternating black and white regions (a region adjacency graph). The graph is searched for the unique left-heavy depth sequences that have been encoded into the fiducial symbols, and finally the found tree sequences are matched against a dictionary to retrieve a unique ID number. The fiducial design allows the efficient calculation of the marker's center point as well as its orientation. OSC messages implementing the TUIO protocol encode the fiducials' presence, location, orientation and identity and transmit this data to the client applications. Additionally, reacTIVision uses the result of the image segmentation to retrieve and identify small round white blobs as finger tips on the surface; a quick-and-dirty shape matching algorithm selects the actual finger blobs from the possible region candidates. A complementary blob tracking algorithm takes advantage of the same data to track fiducials that are occasionally not recognized, for example when fast movements destroy the fiducial structure in the image.
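To make the first pipeline stage concrete, here is an illustrative sketch of adaptive thresholding on a grayscale frame. This is a simple mean-based threshold for demonstration only, not reacTIVision's actual thresholder; the class and method names are hypothetical.

```java
// Illustrative sketch only: convert a grayscale frame to black & white by
// comparing each pixel against the mean of its local neighbourhood.
// reacTIVision's real thresholder is more sophisticated than this.
public class AdaptiveThreshold {

    // Threshold each pixel against the mean of its (2r+1)x(2r+1) neighbourhood.
    // true = white, false = black.
    static boolean[][] threshold(int[][] gray, int r) {
        int h = gray.length, w = gray[0].length;
        boolean[][] bw = new boolean[h][w];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int sum = 0, n = 0;
                for (int dy = -r; dy <= r; dy++) {
                    for (int dx = -r; dx <= r; dx++) {
                        int yy = y + dy, xx = x + dx;
                        if (yy >= 0 && yy < h && xx >= 0 && xx < w) {
                            sum += gray[yy][xx];
                            n++;
                        }
                    }
                }
                // pixel is white if it is brighter than its local mean
                bw[y][x] = gray[y][x] * n > sum;
            }
        }
        return bw;
    }

    public static void main(String[] args) {
        // tiny synthetic "frame" with a bright and a dark region
        int[][] frame = {
            { 10,  10, 200, 200},
            { 10,  10, 200, 200},
            {200, 200,  10,  10},
            {200, 200,  10,  10},
        };
        boolean[][] bw = threshold(frame, 1);
        System.out.println(bw[0][2]); // a pixel inside the bright region
    }
}
```

The resulting black & white regions are what the segmentation step would organize into a region adjacency graph.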
== Building the table ==
A camera and a projector with wide-angle lenses need to be placed underneath the table, so they can both cover the entire surface.
For the tracking, the objects need to be properly illuminated so that the camera, and thus the computer vision application, can see them correctly. For the projection onto the table, however, the surface needs to be dark so that the user can see the projected image well enough. Since these two requirements exclude each other, the solution is to operate in two different spectra: the projection has to be visible to the user, so the computer vision component needs to operate in a different, invisible spectrum such as near infrared in the range of 850nm. Most CCD cameras are perfectly sensitive within the near IR spectrum, therefore infrared LED lamps can be used to illuminate the table. All light from the visible spectrum needs to be filtered out in the camera so that the computer vision algorithm is not disturbed by the projection. If present, an infrared-blocking filter needs to be removed from the camera sensor.
You should make sure that the camera has an acceptable lens and sensor size. For the lowest latency and best performance we recommend top-range FireWire cameras, such as industrial cameras with a high frame rate, resolution and sensor size. These cameras usually also come with high-quality C-mount lenses. Cheaper FireWire cameras, such as the Unibrain Fire-i, also allow optional wide-angle lenses. From the large range of available USB cameras we recommend high-end models with a native resolution of at least 640x480 at a frame rate of 30Hz. A very affordable and relatively good camera for this purpose is the Sony PS3 Eye, which also works well under Windows, Linux and Mac OS X. DV cameras supporting full-frame mode are suitable, while those supporting interlaced mode only will not work at all. If you are operating in two different spectra, an IR-pass filter needs to be placed on the lens in place of the ordinary camera filter.
=== How our box functions ===
1. We used a simple wooden box (3 x 1.5 x 2.5 ft) painted black on the inside to reduce light reflection. A projector projects onto a mirror, which reflects the projection onto the screen. The keystone setting on the projector is set to -120 so that the projection on the screen is rectangular.
2. We couldn't find an IR bulb, so we placed IR LEDs on two opposite sides of the perspex board so that the IR light reflects within the board. When a finger is placed on the screen, it forms a blob of IR light on the inside of the box, which is read by the webcam.
3. We made a stand for the laptop on top of the projector and used the laptop's own webcam, as its image was quite clear. We used a processed photographic negative as the IR-pass filter on the webcam.
== Using reacTIVision ==
This application was designed to track specially designed fiducial markers. You will find the default "amoeba" fiducial set in the document "default.pdf" within the symbols folder of the reacTIVision download. Print this document and attach the labels to any object you want to track. The default fiducial tracking engine uses Ross Bencina's fidtrack library, which is basically a newer, high-performance implementation of Enrico Costanza's d-touch concept. Alternatively you can use the "classic" or the "dtouch" sets. See below how to configure the application to use these older symbol sets.
Since reacTIVision was initially designed for fiducial tracking, its thresholder and segmentation modules are optimized for this task. Finger tracking was added at a later stage and takes advantage of the existing image processing infrastructure with almost no additional performance overhead. On the other hand, it might seem difficult to set up reacTIVision to achieve good tracking performance for both the fiducial symbols and the finger tips. When used with diffused illumination, the setup needs strong and even illumination in order to achieve the necessary contrast for finger tracking; adjusting the overall image controls such as brightness, gain and shutter speed will also improve the tracking quality (O key). Another important control parameter is the threshold "gradient gate", which should be set as low as possible, just before too much image noise becomes visible (G key). Finally, the finger tracking can be configured by adjusting the average finger size and tracking sensitivity (F key).
Common settings can be edited within the file "reacTIVision.xml" where all changes are stored automatically when closing the application. Under Mac OS X the XML configuration file can be found within the application bundle's Resources folder. Select "Show Package Contents" from the application's context menu in order to access and edit the file.
The reacTIVision application usually sends the TUIO messages to port 3333 on localhost (127.0.0.1). You can change this setting by adding or editing the XML tag <tuio host="127.0.0.1" port="3333" /> in the configuration.
The <fiducial engine="amoeba" tree="default" /> XML tag lets you select the fiducial engine or an alternative amoeba tree order. The default engine uses the fast and effective 'amoeba' fiducial set. Add the 'classic' option in order to use Ross' initial d-touch reimplementation, or select Enrico's original d-touch engine by providing the 'dtouch' option.
The display attribute of the <image display="dest" equalize="false" gradient="32" /> tag defines the default screen upon startup, and the gradient attribute adjusts the default gradient gate value. reacTIVision also comes with a background subtraction module, which in some cases can improve the recognition performance of both the finger and fiducial tracking. Within the running application you can toggle this with the 'E' key or recalculate the subtracted background by hitting the SPACE bar.
The camera options can be adjusted by pressing the 'O' key. On Windows and Mac OS X this will show a system dialog that allows the adjustment of the available camera parameters. On Linux (and on Mac OS X when using IEEE1394 cameras) the available camera settings can be adjusted with a simple on-screen display. The overall camera settings can be configured in the camera.xml configuration file. Please note that on Windows, and in QuickTime mode on Mac OS X, this only allows the configuration of the image size; for IEEE1394 cameras on Mac OS X, as well as all camera types on Linux, all image parameters can be fully configured using this file.
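Putting the tags above together, the relevant part of a reacTIVision.xml might look roughly like this. The root element name and overall layout are assumptions; only the individual tags and attributes are taken from the text above:

```xml
<reactivision>
    <!-- where the TUIO messages are sent -->
    <tuio host="127.0.0.1" port="3333" />
    <!-- fiducial engine: amoeba, classic or dtouch -->
    <fiducial engine="amoeba" tree="default" />
    <!-- default display mode and gradient gate value -->
    <image display="dest" equalize="false" gradient="32" />
</reactivision>
```

Remember that changes made within the running application are written back to this file automatically when it closes.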
=== Calibration and Distortion ===
Some tables, such as the reacTable, use wide-angle or fish-eye lenses in order to increase the area visible to the camera at a minimal distance. These lenses unfortunately distort the image, and reacTIVision can correct that distortion as well as the overall alignment of the image. For the calibration, print and place one of the rectangular or square calibration sheets on the table and adjust the grid points to the grid printed on the sheet.
To calibrate reacTIVision, switch to calibration mode by hitting 'C'. Use the keys A, D, W, X to move within the grid; the cursor keys adjust the selected grid point. 'J' resets the whole calibration grid, 'U' resets the selected point and 'K' reverts to the saved grid. To check whether the distortion correction is working properly, press 'R'. This will show the fully corrected live video image in the target window. Normally the distortion algorithm only corrects the found positions rather than the full image.
== TUIO in Processing ==
Copy the complete contents of this distribution into a folder named TUIO within the libraries folder of your Processing sketchbook. In order to use this library you will need to add the according import statement to the header of your Processing sketch: import TUIO.*;
=== Application Programming Interface ===
First you need to create an instance of the TuioProcessing client, providing the instance of your sketch to the constructor using the this argument. The TuioProcessing client immediately starts listening to incoming TUIO messages and generates higher level events based on the object and cursor movements. TuioProcessing tuioClient = new TuioProcessing(this);
Therefore your sketch needs to implement the following methods in order to be able to receive these TUIO events properly:
-addTuioObject(TuioObject tobj) this is called when an object becomes visible
-removeTuioObject(TuioObject tobj) an object was removed from the table
-updateTuioObject(TuioObject tobj) an object was moved on the table surface
-addTuioCursor(TuioCursor tcur) this is called when a new cursor is detected
-removeTuioCursor(TuioCursor tcur) a cursor was removed from the table
-updateTuioCursor(TuioCursor tcur) a cursor was moving on the table surface
-refresh(TuioTime bundleTime) this is called after each bundle; use it to repaint your screen, for example
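As a sketch of how these callbacks fit together, here is a minimal Processing example. It assumes the TUIO library from this distribution is installed; the println bodies are placeholders for your own application logic.

```java
// Minimal Processing sketch wiring up the TUIO event methods listed above.
import TUIO.*;

TuioProcessing tuioClient;

void setup() {
  size(640, 480);
  // the client immediately starts listening for TUIO messages
  tuioClient = new TuioProcessing(this);
}

void draw() {
  background(0);
}

void addTuioObject(TuioObject tobj) {
  println("added object " + tobj.getSymbolID());
}

void removeTuioObject(TuioObject tobj) {
  println("removed object " + tobj.getSymbolID());
}

void updateTuioObject(TuioObject tobj) {
  println("object " + tobj.getSymbolID() + " moved to " + tobj.getX() + " " + tobj.getY());
}

void addTuioCursor(TuioCursor tcur) {
  println("added cursor " + tcur.getCursorID());
}

void removeTuioCursor(TuioCursor tcur) {
  println("removed cursor " + tcur.getCursorID());
}

void updateTuioCursor(TuioCursor tcur) {
  println("cursor " + tcur.getCursorID() + " moved to " + tcur.getX() + " " + tcur.getY());
}

void refresh(TuioTime bundleTime) {
  redraw();  // repaint the screen after each TUIO bundle
}
```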
Each TuioObject or TuioCursor is identified with a unique SessionID, which it maintains over its lifetime. Additionally each TuioObject carries a SymbolID that corresponds to its attached fiducial marker number. The CursorID of the TuioCursor is always a number in the range of all currently detected cursors. You can retrieve these ID numbers with the according getSessionID(), getSymbolID() or getCursorID() methods.
The TuioObject and TuioCursor references are updated automatically by the TuioProcessing client and always reference the same instance over the object's or cursor's lifetime. All the TuioObject and TuioCursor attributes are encapsulated and can be accessed with methods such as getX(), getY() and getAngle(). There exist further methods for the retrieval of speed and acceleration values; please see the provided example sketches for a complete list. TuioObject and TuioCursor also have some additional convenience methods for the calculation of distances and angles between objects. The getPath() method returns a Vector of TuioPoints representing the movement path of the object. Please refer to the documentation of the TUIO Java reference implementation for further details on all the available methods.
Alternatively, the TuioProcessing class contains some methods for the polling of the current object and cursor states. There are methods which return either a list or individual TuioObject and TuioCursor objects.
-getTuioObjects() returns a Vector of all currently present TuioObjects
-getTuioCursors() returns a Vector of all currently present TuioCursors
-getTuioObject(long s_id) returns a TuioObject or null depending on its presence
-getTuioCursor(long s_id) returns a TuioCursor or null depending on its presence
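A minimal polling sketch using these methods might look like this. The drawing code is illustrative, and it assumes the coordinates returned by getX()/getY() are normalised to the range 0..1, hence the scaling by width and height:

```java
// Polling alternative to the event callbacks: query the current object and
// cursor lists from draw() on every frame.
import TUIO.*;
import java.util.Vector;

TuioProcessing tuioClient;

void setup() {
  size(640, 480);
  tuioClient = new TuioProcessing(this);
}

void draw() {
  background(0);
  // draw a square at each detected fiducial, rotated to its orientation
  Vector tuioObjectList = tuioClient.getTuioObjects();
  for (int i = 0; i < tuioObjectList.size(); i++) {
    TuioObject tobj = (TuioObject) tuioObjectList.elementAt(i);
    pushMatrix();
    translate(tobj.getX() * width, tobj.getY() * height);
    rotate(tobj.getAngle());
    rect(-10, -10, 20, 20);
    popMatrix();
  }
  // draw a dot at each finger position
  Vector tuioCursorList = tuioClient.getTuioCursors();
  for (int i = 0; i < tuioCursorList.size(); i++) {
    TuioCursor tcur = (TuioCursor) tuioCursorList.elementAt(i);
    ellipse(tcur.getX() * width, tcur.getY() * height, 10, 10);
  }
}
```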
In our program each marker is associated with an image that describes a certain piece of equipment and how it is used. The fiducial is stuck onto the equipment.