The INGENIOUS project aims at inventing, developing, and experimenting with a novel method for mid-air gesture recognition for collaboratively manipulating multimedia objects, such as images, maps, plans, drawings, and videos, in multiple contexts of use. These contexts of use are both professional (e.g., project members working together around an interactive tabletop surface to achieve common goals) and private (e.g., family members organizing their pictures and videos on a large wall display to preserve their family history). Each context of use is characterized by three inter-related models: a user model capturing the collaboration aspects and task definitions, a platform model expressing the devices, their interactive surfaces, and the gesture sensors used, and an environment model describing the physical location in which the interaction takes place.
Existing recognition methods are often tied to a particular device or gesture sensor and are not easily transferable from one context to another. To address this shortcoming, the novel method will exploit ultra-wideband radar echoes of human movement to characterize that movement against the three models of the context, and will apply pattern matching with a mathematical representation of real-world gestures elicited from end users for their tasks. This representation consists of a vectorial curve modelled with Clifford geometric algebra, an algebra particularly well suited to ensuring gesture invariance properties across different contexts of use.
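To make the invariance idea concrete, here is a minimal, illustrative sketch (not the project's actual Clifford-algebra method): a gesture sampled as a 3D curve is reduced to a descriptor of consecutive segment lengths, which stays unchanged when the whole gesture is translated or rotated, mirroring the kind of context-independent representation the project targets.

```python
import math

def gesture_descriptor(points):
    """Consecutive segment lengths: invariant to translation and rotation."""
    return [math.dist(p, q) for p, q in zip(points, points[1:])]

def rotate_z(p, angle):
    """Rotate a 3D point around the z-axis (a rotor acting in one plane)."""
    x, y, z = p
    c, s = math.cos(angle), math.sin(angle)
    return (c * x - s * y, s * x + c * y, z)

def translate(p, offset):
    """Shift a 3D point by a fixed offset."""
    return tuple(a + b for a, b in zip(p, offset))

# A toy "swipe" gesture and a rotated-and-translated copy of it,
# as if performed at a different position and orientation in the room.
gesture = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (0.25, 0.05, 0.0), (0.4, 0.1, 0.0)]
moved = [translate(rotate_z(p, 1.2), (2.0, -1.0, 0.5)) for p in gesture]

d1 = gesture_descriptor(gesture)
d2 = gesture_descriptor(moved)
# The two descriptors coincide up to floating-point error.
assert all(math.isclose(a, b, abs_tol=1e-9) for a, b in zip(d1, d2))
```

In the project's formulation, the rotation above would be expressed as a Clifford rotor acting on the whole vectorial curve; this sketch only demonstrates the resulting invariance, not the algebraic machinery.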
Gestures handled by this method support cross-context transfer: they are recognized independently of the device (whether the user is close to or far from it, and regardless of the user's position with respect to the radar array), of the surface (whether the device is vertical, horizontal, or oblique), of the objects (whatever multimedia object is manipulated), and from one person to another, through transitive preference analysis and clustering.
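Person-to-person transfer can be pictured with a toy sketch (the descriptor and the nearest-centroid assignment below are illustrative assumptions, not the project's clustering method): gestures from one user serve as cluster centroids, and another user's gesture is matched to the nearest one, regardless of where in the room it was performed.

```python
import math

def _seg(p, q):
    """Vector from point p to point q."""
    return tuple(b - a for a, b in zip(p, q))

def _angle(u, v):
    """Angle between two 3D vectors, clamped for numerical safety."""
    dot = sum(a * b for a, b in zip(u, v))
    nu, nv = math.hypot(*u), math.hypot(*v)
    return math.acos(max(-1.0, min(1.0, dot / (nu * nv))))

def descriptor(points):
    """Turning-angle profile of a sampled gesture curve
    (invariant to translation, rotation, and uniform scale)."""
    segs = [_seg(p, q) for p, q in zip(points, points[1:])]
    return [_angle(u, v) for u, v in zip(segs, segs[1:])]

def nearest_cluster(desc, centroids):
    """Index of the centroid closest to the descriptor."""
    return min(range(len(centroids)),
               key=lambda i: math.dist(desc, centroids[i]))

# Reference gestures from user A: a straight swipe and an L-shaped corner.
swipe_a  = descriptor([(0, 0, 0), (1, 0, 0), (2, 0, 0), (3, 0, 0)])
corner_a = descriptor([(0, 0, 0), (1, 0, 0), (1, 1, 0), (1, 2, 0)])
centroids = [swipe_a, corner_a]

# User B performs a swipe elsewhere in the room; it still matches cluster 0.
swipe_b = descriptor([(5, 5, 1), (6, 5, 1), (7, 5, 1), (8, 5, 1)])
label = nearest_cluster(swipe_b, centroids)
assert label == 0
```

A real pipeline would cluster many users' elicited gestures and exploit transitivity between their preferences; this sketch only shows the matching step.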