Analysis and Interpretation

Videos are segmented into semantically meaningful objects, based on texture and motion analysis. Image descriptors are extracted to characterize visual content. Targets of interest are detected, recognized, and tracked to understand behaviors in natural scenes. Application domains include the autonomous production of visual reports (e.g. for team sport events), as well as intelligent vision in surveillance and cell image analysis in biology.


Video segmentation: Image/video segmentation aims at partitioning visual frames into non-overlapping areas with different semantic content. It has numerous applications in data compression, tracking, augmented reality, activity or object recognition, video annotation, and video retrieval. Our group focuses on fast and efficient segmentation methods that extract content with a high level of abstraction from videos.
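
As a very rough illustration of the idea (not the group's actual method, which also exploits texture and motion cues), the sketch below partitions a frame into non-overlapping regions by clustering pixel colours with k-means in OpenCV:

    import numpy as np
    import cv2

    def segment_frame(frame_bgr, n_segments=8):
        """Return a label map assigning every pixel to one of n_segments regions."""
        pixels = frame_bgr.reshape(-1, 3).astype(np.float32)
        # Stop after 10 iterations or when cluster centres move by less than 1.0.
        criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
        _, labels, _ = cv2.kmeans(pixels, n_segments, None, criteria, 3,
                                  cv2.KMEANS_PP_CENTERS)
        return labels.reshape(frame_bgr.shape[:2])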

Foreground object detection: Background modeling and foreground mask extraction are key components of low-level computer vision systems. They aim at extracting moving objects in natural scenes observed with static cameras, and thereby often constitute preliminary steps to object recognition, scene understanding and behavioral analysis.
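
The principle can be illustrated with a minimal running-average background model for a static camera; the update rate and threshold below are arbitrary assumptions, not the group's parameters:

    import numpy as np

    class RunningAverageBackground:
        def __init__(self, alpha=0.05, threshold=25):
            self.alpha = alpha          # background update rate (assumed value)
            self.threshold = threshold  # per-pixel difference threshold (assumed value)
            self.background = None

        def apply(self, gray_frame):
            """Return a binary foreground mask for one grayscale frame."""
            frame = gray_frame.astype(np.float32)
            if self.background is None:
                self.background = frame.copy()
            diff = np.abs(frame - self.background)
            mask = (diff > self.threshold).astype(np.uint8) * 255
            # Blend the new frame into the background model.
            self.background = (1 - self.alpha) * self.background + self.alpha * frame
            return mask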

Sport player foreground detector reinforcement through visual texture classification: In the context of sport events, this project improves a player detector built on top of a foreground detector. It uses visual texture features to discriminate true detections from false ones.
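
A hypothetical sketch of this idea, assuming local-binary-pattern histograms and an SVM (both are illustrative choices, not necessarily the project's features or classifier):

    import numpy as np
    from skimage.feature import local_binary_pattern
    from sklearn.svm import SVC

    def texture_descriptor(gray_patch, points=8, radius=1):
        """Uniform-LBP histogram of a grayscale detection patch."""
        lbp = local_binary_pattern(gray_patch, points, radius, method="uniform")
        hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
        return hist

    def train_detection_filter(true_patches, false_patches):
        """Fit a classifier separating true player detections from false ones."""
        X = [texture_descriptor(p) for p in list(true_patches) + list(false_patches)]
        y = [1] * len(true_patches) + [0] * len(false_patches)
        return SVC(kernel="rbf").fit(X, y)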

Ball detection and tracking: Foreground detector reinforcement through trajectory analysis: In the context of team sport event monitoring, the various phases of the game must be delimited and interpreted. In the case of a basketball game, the detection and tracking of the ball are mandatory. However, one difficulty is that the ball is often occluded by players. This project deals with the detection of the ballistic trajectory of a ball thrown between two players or toward the basket. Ballistic trajectories are built from the 3D ball candidates previously detected at each time stamp by a foreground detector.
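
A minimal sketch of ballistic-trajectory fitting under simple assumptions (metric 3D coordinates, known timestamps, least-squares fit; this is not the published algorithm):

    import numpy as np

    G = 9.81  # gravity, m/s^2

    def fit_ballistic(times, points_3d):
        """times: (N,) seconds; points_3d: (N, 3) candidate positions in metres."""
        t = np.asarray(times, dtype=float)
        P = np.asarray(points_3d, dtype=float)
        A = np.column_stack([np.ones_like(t), t])  # unknowns per axis: position, velocity
        # Remove the known gravity term from the vertical axis before the linear fit.
        z_corrected = P[:, 2] + 0.5 * G * t ** 2
        targets = np.column_stack([P[:, 0], P[:, 1], z_corrected])
        coeffs, _, _, _ = np.linalg.lstsq(A, targets, rcond=None)
        predicted = A @ coeffs
        predicted[:, 2] -= 0.5 * G * t ** 2
        residual = np.sqrt(np.mean(np.sum((predicted - P) ** 2, axis=1)))
        return coeffs, residual  # rows of coeffs: initial position and initial velocity

In such a scheme, a set of candidates would be accepted as a ballistic trajectory when the fitting residual stays below some tolerance over a sufficiently long time window.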

Multiple object tracking with prior detections and graph formalisms: This project considers the tracking of multiple objects within one or several video sequences. Fundamentally, it aims at formalizing application scenarios in which the reliability and discriminability of object features vary over time. In order to address problems involving a large number of targets, and because automatic detection algorithms have gained maturity, our work assumes that a set of plausible prior target detections is available at each time instant.
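
As an illustration of a simple graph-based association step (the group's actual formulation is richer), the sketch below links existing tracks to the detections of a new frame by solving a bipartite assignment on a distance cost matrix:

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def associate(track_positions, detections, max_distance=50.0):
        """track_positions: (N, 2) predicted positions; detections: (M, 2) new detections."""
        tracks = np.asarray(track_positions, dtype=float)
        dets = np.asarray(detections, dtype=float)
        cost = np.linalg.norm(tracks[:, None, :] - dets[None, :, :], axis=2)
        rows, cols = linear_sum_assignment(cost)
        # Keep only pairs whose cost is plausible; the rest start or terminate tracks.
        matches = [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_distance]
        unmatched_tracks = set(range(len(tracks))) - {r for r, _ in matches}
        unmatched_dets = set(range(len(dets))) - {c for _, c in matches}
        return matches, unmatched_tracks, unmatched_dets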

Automatic Team Sport Coverage: Computer vision tools drive the automatic and democratized (low-cost) production of sport event video reports, using scene analysis to select an appropriate viewpoint within a static and/or dynamic multi-camera infrastructure.
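
A hedged sketch of the viewpoint-selection idea, with an assumed scoring function that favours cameras seeing many, well-centred players (not the production system's actual criterion):

    import numpy as np

    def select_camera(players_per_camera, image_size=(1920, 1080)):
        """players_per_camera: one (N_i, 2) array of player image positions per camera."""
        centre = np.array(image_size, dtype=float) / 2.0
        best_cam, best_score = None, -np.inf
        for cam_id, players in enumerate(players_per_camera):
            players = np.asarray(players, dtype=float)
            if players.size == 0:
                continue
            spread = np.mean(np.linalg.norm(players - centre, axis=1))
            score = len(players) - 1e-3 * spread  # many players, well centred
            if score > best_score:
                best_cam, best_score = cam_id, score
        return best_cam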

Target tracking for the automatic control of Pan-Tilt-Zoom cameras: Capturing close-up video sequences of an object of interest evolving in a large field of view often requires covering this field of view with tens of cameras, especially in surveillance and sport coverage contexts. A Pan-Tilt-Zoom camera makes it possible to zoom in and stay focused on an object along its displacement with a single camera, but it requires sufficiently reliable feedback about the target position/trajectory from the image processing module to perform high-quality automatic tracking.
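
The feedback loop can be sketched with a simple proportional controller that turns the target's offset from the image centre into pan/tilt speed commands (the gain and interfaces below are assumptions):

    def ptz_command(target_xy, image_size=(1920, 1080), gain=0.002):
        """Return (pan_speed, tilt_speed) in the camera's normalised speed units."""
        cx, cy = image_size[0] / 2.0, image_size[1] / 2.0
        error_x = target_xy[0] - cx   # positive: target right of centre
        error_y = target_xy[1] - cy   # positive: target below centre
        pan_speed = gain * error_x    # pan right to reduce the horizontal error
        tilt_speed = -gain * error_y  # tilt up to reduce the vertical error
        return pan_speed, tilt_speed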

Detection of facial expressions and CG animation: This work aims at developing a semi-automatic system to animate facial expressions. The system consists of facial expression detection and Computer Graphics animation of a facial character.
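
A hypothetical sketch of the animation side, mapping a detected expression label and its intensity to blendshape weights for a CG rig (the expression labels and blendshape names are purely illustrative):

    BLENDSHAPES = {
        "smile":    {"mouth_corner_up": 1.0, "cheek_raise": 0.6},
        "surprise": {"brow_raise": 1.0, "jaw_open": 0.8},
        "frown":    {"brow_lower": 1.0, "mouth_corner_down": 0.7},
    }

    def expression_to_blendshapes(expression, intensity):
        """intensity in [0, 1]; returns a dict of blendshape weights for the rig."""
        weights = BLENDSHAPES.get(expression, {})
        return {name: value * float(intensity) for name, value in weights.items()}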



