Difference between revisions of "ReacTable"
Revision as of 05:07, 20 October 2010

== Introduction ==
This is a multi-touch table that reads fiducial markers placed on the surface and performs a corresponding action for each recognized object. The table runs the reacTIVision software, an application designed to track specially designed fiducial markers. By downloading the TUIO library you can create a program that responds to these markers. The table uses infrared light for finger tracking: a webcam captures the video feed and runs it through reacTIVision, the result is processed in Processing, and the output is displayed back on the screen.
We used this table to display information and protocols when a certain object was placed on it.
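Under the hood, TUIO messages are plain OSC messages. As an illustration of what a client eventually receives, here is a hand-packed, simplified sketch of a /tuio/2Dobj "set" message. This is not part of reacTIVision and omits most of the real protocol (bundles, "alive" and "fseq" messages, velocity and acceleration arguments); a real project should use the official TUIO client library instead.

```python
# Illustrative only: a hand-packed, simplified /tuio/2Dobj "set" message.
# Real TUIO bundles carry more messages and arguments; use the official
# TUIO library in practice.
import struct

def osc_pad(data):
    """Pad OSC data to a multiple of 4 bytes, as the OSC spec requires."""
    return data + b"\x00" * (-len(data) % 4)

def tuio_2dobj_set(session_id, fiducial_id, x, y, angle):
    """Encode a simplified /tuio/2Dobj 'set' message."""
    address = osc_pad(b"/tuio/2Dobj\x00")
    typetags = osc_pad(b",siifff\x00")  # string "set", two ints, three floats
    args = osc_pad(b"set\x00")
    args += struct.pack(">iifff", session_id, fiducial_id, x, y, angle)
    return address + typetags + args

# Example: fiducial ID 4 at the centre of the surface.
packet = tuio_2dobj_set(1, 4, 0.5, 0.5, 0.0)
```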
== How reacTIVision works ==
In a nutshell the system works like this: reacTIVision tracks specially designed fiducial markers in a real-time video stream. The source image frame is first converted to a black-and-white image with an adaptive thresholding algorithm. This image is then segmented into a tree of alternating black and white regions (a region adjacency graph). The graph is searched for the unique left-heavy depth sequences that have been encoded into the fiducial symbols, and the found tree sequences are finally matched against a dictionary to retrieve a unique ID number. The fiducial design allows the efficient calculation of the marker's center point as well as its orientation. OSC messages implementing the TUIO protocol encode each fiducial's presence, location, orientation and identity and transmit this data to the client applications. Additionally, reacTIVision uses the result of the image segmentation to retrieve and identify small round white blobs as fingertips on the surface; a quick-and-dirty shape matching algorithm selects the actual finger blobs from the possible region candidates. A complementary blob tracking algorithm takes advantage of the same data to follow fiducials that are temporarily not recognized, for example when fast movement destroys the fiducial structure in the image.
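The adaptive thresholding step can be illustrated with a small sketch. This is not reacTIVision's actual implementation; it is a simplified mean-based version that shows why a local threshold copes with the uneven illumination of a table surface where a single global threshold would fail.

```python
# Illustrative sketch only: reacTIVision's real thresholder is an
# optimized C++ implementation. This simplified version compares each
# pixel against the mean of its local neighbourhood, which keeps the
# binarization robust against uneven illumination.

def adaptive_threshold(image, radius=1, offset=0):
    """Binarize a grayscale image (list of lists, values 0-255)."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, count = 0, 0
            # collect the (2*radius+1) x (2*radius+1) neighbourhood
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        total += image[ny][nx]
                        count += 1
            local_mean = total / count
            out[y][x] = 255 if image[y][x] > local_mean + offset else 0
    return out
```

For example, a dark marker pixel surrounded by brightly lit surface is still classified as black, even if an unevenly lit corner of the table is darker overall than the marker itself.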
== Building the table ==

=== Table & surface ===
A camera and a projector with wide-angle lenses need to be placed underneath the table, so they can both cover the entire surface. Alternatively, a mirror can be used in order to achieve a larger projection distance. For the interactive surface itself, a normal Perspex board can be used in conjunction with ordinary tracing paper on the top side for the projection. This material is completely transparent for objects and fingertips in direct contact with the surface. In order to avoid direct reflections of the light source and projector lamp, the lower side of the surface should have a matte finish, while maintaining the overall transparency.
For the tracking, the objects need to be properly illuminated so the camera, and thus the computer vision application, can see them correctly. For the projection onto the table, however, the surface needs to be dark so the user can see the projected image well enough. Since these two requirements logically exclude each other, the solution is to operate in two different spectra: the projection has to be visible to the user, so the computer vision component needs to operate in a different, invisible spectrum such as near infrared in the range of 850nm. Most CCD cameras are perfectly sensitive within the near-IR spectrum, so infrared LED lamps can be used to illuminate the table. All light from the visible spectrum needs to be filtered out at the camera, so the computer vision algorithm is not disturbed by the projection, and any existing infrared-blocking filter needs to be removed from the camera sensor.
You should make sure that the camera has an acceptable lens and sensor size. For the lowest latency and best performance we recommend top-range FireWire cameras, such as industrial cameras with a high frame rate, resolution and sensor size. These cameras usually also come with high-quality C-mount lenses. Cheaper FireWire cameras, such as the Unibrain Fire-i, also allow optional wide-angle lenses. From the large range of available USB cameras we recommend high-end models with a native resolution of at least 640x480 at a frame rate of 30Hz. A very affordable and relatively good camera for this purpose is the Sony PS3eye, which also works well under Windows, Linux and Mac OS X. DV cameras supporting full-frame mode are suitable, while those with interlaced mode only will not work at all.
This application was designed to track specially designed fiducial markers. You will find the default "amoeba" fiducial set in the document "default.pdf" within the symbols folder. Print this document and attach the labels to any object you want to track. The default fiducial tracking engine uses Ross Bencina's fidtrack library, which is basically a newer high-performance implementation of Enrico Costanza's d-touch concept. Alternatively you can use the "classic" or the "dtouch" sets. See below for how to configure the application to use these older symbol sets.
Since reacTIVision was initially designed for fiducial tracking, its thresholder and segmentation modules are optimized for this task. Finger tracking was added at a later stage and takes advantage of the existing image processing infrastructure with almost no additional performance overhead. On the other hand, it might seem difficult to set up reacTIVision to achieve good tracking performance for both the fiducial symbols and the fingertips. When used with diffused illumination, the setup needs strong and even illumination to achieve the necessary contrast for finger tracking; adjusting the overall image controls such as brightness, gain and shutter speed will also improve the tracking quality (O key). Another important control parameter is the threshold "gradient gate", which should be set as low as possible, just before too much image noise becomes visible (G key). Finally, the finger tracking can be configured by adjusting the average finger size and the tracking sensitivity (F key).
Common settings can be edited within the file "reacTIVision.xml" where all changes are stored automatically when closing the application. Under Mac OS X the XML configuration file can be found within the application bundle's Resources folder. Select "Show Package Contents" from the application's context menu in order to access and edit the file.
The reacTIVision application usually sends the TUIO messages to port 3333 on localhost (127.0.0.1). You can change this setting by adding or editing the XML tag <tuio host="127.0.0.1" port="3333" /> in the configuration.
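Because the settings live in an XML file, they can also be adjusted with a small script. The sketch below uses Python's standard library; the <tuio> tag is taken from the documentation above, but the root element name used here is only a placeholder for the real file's structure.

```python
# Sketch of editing the <tuio> tag in reacTIVision.xml with Python's
# standard library. The surrounding document structure ("reactivision"
# root element) is an assumption; only the <tuio host=... port=...>
# tag itself comes from the documentation.
import xml.etree.ElementTree as ET

def set_tuio_target(xml_text, host, port):
    """Return the config XML with the <tuio> host/port updated."""
    root = ET.fromstring(xml_text)
    tuio = root.find("tuio")
    if tuio is None:                      # add the tag if it is missing
        tuio = ET.SubElement(root, "tuio")
    tuio.set("host", host)
    tuio.set("port", str(port))
    return ET.tostring(root, encoding="unicode")

# Example: point reacTIVision at a TUIO client on another machine.
config = '<reactivision><tuio host="127.0.0.1" port="3333" /></reactivision>'
print(set_tuio_target(config, "192.168.1.20", 3334))
```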
The <fiducial engine="amoeba" tree="default"/> XML tag lets you select the fiducial engine or an alternative amoeba tree order. The default engine uses the fastest and most effective 'amoeba' fiducial set. Add the 'classic' option in order to use Ross' initial d-touch reimplementation, or select Enrico's original d-touch engine by providing the 'dtouch' option.
The display attribute defines the default screen upon startup, and the <image display="dest" equalize="false" gradient="32" /> tag also lets you adjust the default gradient gate value. reacTIVision comes with a background subtraction module, which in some cases can improve the recognition performance of both the finger and fiducial tracking. Within the running application you can toggle this with the 'E' key or recalculate the background subtraction by hitting the SPACE bar.
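The idea behind the background subtraction toggle can be sketched as follows. This is an illustrative simplification, not reacTIVision's actual implementation: pixels close to a stored background frame are cleared, so only new objects (fingers, fiducials) remain for the tracker.

```python
# Illustrative sketch of background subtraction: pixels that match a
# previously captured background frame (within a tolerance) are zeroed
# out, leaving only newly appeared objects in the image.
def subtract_background(frame, background, tolerance=16):
    """Zero out pixels that differ from the background by <= tolerance."""
    return [
        [0 if abs(p - b) <= tolerance else p
         for p, b in zip(frow, brow)]
        for frow, brow in zip(frame, background)
    ]
```

In this analogy, hitting the SPACE bar corresponds to recapturing the stored background frame.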
The camera options can be adjusted by pressing the 'O' key. On Windows and Mac OS this shows a system dialog that allows the adjustment of the available camera parameters. On Linux (and on Mac OS X when using IEEE1394 cameras), the available camera settings can be adjusted with a simple on-screen display. The overall camera settings can be configured in the camera.xml configuration file. Please note that for Windows, and for Quicktime mode on Mac OS X, this only allows the configuration of the image size. For IEEE1394 cameras on Mac OS X, as well as all camera types on Linux, all image parameters can be fully configured using this configuration file.
== TUIO vs. MIDI ==
The application can alternatively send MIDI messages, which allows you to map any object dimension (xpos, ypos, angle) to a MIDI control via an XML configuration file. Adding and removing objects can be mapped to simple note ON/OFF events. Keep in mind though that MIDI has less bandwidth and data resolution than Open Sound Control, so the MIDI feature is meant as a convenient alternative in some cases, but TUIO will remain the primary messaging layer.
Adding <midi config="midi/demo.xml"/> to reacTIVision.xml switches to MIDI mode and specifies the MIDI configuration file that contains the mappings and MIDI device selection. An example configuration file "demo.xml" along with an example PD patch "demo.pd" can be found in the "midi" folder. You can list all available MIDI devices with the "-l midi" option.
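The kind of mapping such a configuration entry describes can be sketched as follows. The function name is hypothetical, but the sketch shows why MIDI loses resolution: a continuous normalized object dimension is squeezed into a 7-bit controller value.

```python
# Hypothetical sketch of the mapping a MIDI configuration entry
# describes: a normalized object dimension (xpos, ypos or angle,
# in the range 0.0-1.0) is scaled to a 7-bit MIDI CC value (0-127).
def dimension_to_cc(value):
    """Map a normalized TUIO dimension to a MIDI controller value."""
    clamped = min(max(value, 0.0), 1.0)   # keep within the valid range
    return round(clamped * 127)
```

A whole range of distinct float positions collapses onto each of the 128 controller steps, which is the resolution loss mentioned above.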
== Calibration and Distortion ==
Some tables, such as the reacTable, use a wide-angle or fish-eye lens in order to increase the area visible to the camera at a minimal distance. These lenses unfortunately distort the image, and reacTIVision can correct that distortion as well as the overall alignment of the image. For the calibration, print and place one of the rectangular or square calibration sheets on the table and adjust the grid points to the grid printed on the sheet.
To calibrate reacTIVision, switch to calibration mode by hitting 'C'. Use the keys A, D, W, X to move within the grid; the cursor keys adjust the selected grid point. 'J' resets the whole calibration grid, 'U' resets the selected point and 'K' reverts to the saved grid. To check whether the distortion correction is working properly, press 'R'. This will show the fully corrected live video image in the target window. Note that during normal operation the distortion algorithm only corrects the found positions rather than the full image.
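The principle behind the grid correction can be sketched as bilinear interpolation between calibration points. reacTIVision's actual grid handling differs in its details, so treat this as an illustration only: each raw camera coordinate is mapped through the four surrounding grid points to its corrected position.

```python
# Simplified sketch of grid-based distortion correction: a raw camera
# coordinate is corrected by bilinearly interpolating between the four
# surrounding calibration grid points. Not reacTIVision's actual code.
def correct_point(x, y, grid, cell_size):
    """Map a raw (x, y) position through a calibration grid.

    grid[row][col] holds the corrected (x, y) position of each grid
    point; cell_size is the raw-pixel spacing between grid points.
    """
    col, row = int(x // cell_size), int(y // cell_size)
    fx = (x - col * cell_size) / cell_size   # fraction within the cell
    fy = (y - row * cell_size) / cell_size
    (x00, y00), (x10, y10) = grid[row][col], grid[row][col + 1]
    (x01, y01), (x11, y11) = grid[row + 1][col], grid[row + 1][col + 1]
    cx = (x00 * (1 - fx) + x10 * fx) * (1 - fy) + (x01 * (1 - fx) + x11 * fx) * fy
    cy = (y00 * (1 - fx) + y10 * fx) * (1 - fy) + (y01 * (1 - fx) + y11 * fx) * fy
    return cx, cy
```

With an undistorted (identity) grid the mapping leaves points unchanged; moving a grid point during calibration shifts all positions inside the surrounding cells accordingly.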