
## Virtual view synthesis in wide-baseline stereo

Today, when viewing pre-recorded video content, the spectator's viewpoint is restricted to one of the cameras that recorded the scene. To improve the viewing experience, the next generation of video content aims at letting viewers interactively choose their own viewpoint. This domain is known as view synthesis and consists in interpolating the images that would be seen from viewpoints other than those captured by the real cameras.
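To make the idea of "interpolating a view" concrete, the sketch below is a minimal numpy example (not the group's actual method): assuming a rectified camera pair and a known per-pixel disparity map, it forward-warps the left image toward a virtual viewpoint parameterized by t in [0, 1], where t = 0 reproduces the left view and t = 1 approximates the right one.

```python
import numpy as np

def interpolate_view(left, disparity, t):
    """Forward-warp a rectified left view toward a virtual viewpoint.

    In a rectified setup, a pixel at column x in the left image appears at
    column x - disparity in the right image, so an in-between view at
    parameter t shifts it by t * disparity. Disoccluded pixels are left at
    0 here; a real renderer would inpaint them or blend with a warp from
    the right view.
    """
    h, w = left.shape[:2]
    virtual = np.zeros_like(left)
    xs = np.arange(w)
    for y in range(h):
        # target column of each source pixel on this scanline
        x_new = np.round(xs - t * disparity[y]).astype(int)
        valid = (x_new >= 0) & (x_new < w)
        # write far (small-disparity) pixels first so that near surfaces,
        # written last, win occlusion conflicts
        order = np.argsort(disparity[y][valid])
        src = xs[valid][order]
        dst = x_new[valid][order]
        virtual[y, dst] = left[y, src]
    return virtual
```

With a constant disparity of 2 pixels and t = 0.5, every pixel shifts one column to the left, which is the expected halfway rendering for a fronto-parallel surface.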

The applications of this research domain are manifold, especially in the video production field.

For example, sports events are usually rendered from the border of the field, because adding a camera inside the scene is expensive, impractical, or simply not allowed. Moreover, the viewer's viewpoint is restricted to a single side of the field: any discontinuous jump from one side to the other causes a 180-degree flip of the scene, which severely disorients the audience. Such a disturbance could be avoided by generating a smooth "virtual" transition from one camera to another, making viewers feel as if they were moving around the scene, as illustrated in the following movies.

Over the last decade, this problem has been investigated in the field of computer vision. However, to generate such a virtual view, current solutions rely on either:

• many cameras placed very close to each other, which limits the range of synthesized views;
• a large number of distant (wide-baseline) real cameras (from tens to hundreds of thousands), which requires a time-consuming set-up;
• active cameras, which are costly (> $10k) and sensitive to the environment (e.g., they do not work outdoors).

To circumvent these issues, our research focuses on view interpolation when only two real cameras observe the scene from very different viewpoints. This minimalist and challenging camera configuration, called wide-baseline stereo, makes view synthesis and its underlying 3D estimation problem ill-posed, i.e., multiple reconstructed views are possible. Our research lab mainly focuses on proposing new and original priors to address this problem. In particular, our expertise concerns:

• The learning and use of object shape priors to disambiguate the reconstruction of the Epipolar Plane Image Volume, which describes the transformation of objects as the viewpoint of the synthetic view moves continuously between the reference cameras.
• The reconstruction of piecewise-planar scenes, based on a robust matching of planar surfaces on dense (but very noisy) point clouds.
• The determination of dense correspondences between two wide-baseline images, by pushing towards the preservation of the order of the elements in the scene.

[Figure: Left view, Virtual 0.25, Virtual 0.5, Virtual 0.75, Virtual 1]
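The piecewise-planar prior can be illustrated with a standard building block: RANSAC plane fitting on a noisy point cloud. The sketch below is a simplified example of the general technique, not the group's robust matching algorithm; iterating it (fit a plane, remove its inliers, repeat) yields a piecewise-planar approximation of a scene.

```python
import numpy as np

def ransac_plane(points, n_iters=200, threshold=0.02, rng=None):
    """Fit the dominant plane of a noisy point cloud with RANSAC.

    points: (N, 3) array. Returns (normal, d, inlier_mask) for the plane
    normal . p + d = 0 that has the most points within `threshold`.
    """
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_plane = (np.array([0.0, 0.0, 1.0]), 0.0)
    for _ in range(n_iters):
        # hypothesize a plane from 3 random points
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-12:          # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ p0
        # count points close to the hypothesized plane
        inliers = np.abs(points @ normal + d) < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (normal, d)
    return best_plane[0], best_plane[1], best_inliers
```

Because each hypothesis needs only 3 points, RANSAC tolerates a large fraction of outliers, which is exactly what makes it attractive for the very noisy point clouds mentioned above.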
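The order-preservation prior of the last bullet can be sketched with the classic dynamic-programming formulation of scanline stereo: matches along a rectified scanline are forced to be monotonic (if pixel i of one line matches pixel j of the other, pixel i+1 cannot match a pixel before j). The example below is a simplified illustration of that constraint, not the group's dense-correspondence method.

```python
import numpy as np

def ordered_scanline_match(a, b, occ_cost=1.0):
    """Match two 1-D intensity scanlines under the ordering constraint.

    Edit-distance-style dynamic programming: each pixel is either matched
    (cost = intensity difference) or skipped as an occlusion (cost =
    occ_cost). Returns a list of (i, j) matches, monotonic in both i and
    j, which is exactly the order-preservation prior.
    """
    n, m = len(a), len(b)
    cost = np.zeros((n + 1, m + 1))
    cost[:, 0] = np.arange(n + 1) * occ_cost
    cost[0, :] = np.arange(m + 1) * occ_cost
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost[i, j] = min(cost[i - 1, j - 1] + abs(a[i - 1] - b[j - 1]),
                             cost[i - 1, j] + occ_cost,   # a[i-1] occluded
                             cost[i, j - 1] + occ_cost)   # b[j-1] occluded
    # backtrack from the bottom-right corner, preferring matches
    matches, i, j = [], n, m
    while i > 0 and j > 0:
        if np.isclose(cost[i, j], cost[i - 1, j - 1] + abs(a[i - 1] - b[j - 1])):
            matches.append((i - 1, j - 1)); i -= 1; j -= 1
        elif np.isclose(cost[i, j], cost[i - 1, j] + occ_cost):
            i -= 1
        else:
            j -= 1
    return matches[::-1]
```

The monotonicity of the returned (i, j) pairs is guaranteed by construction: the backtracking path only moves up, left, or diagonally, so crossing correspondences are impossible.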

## Collaborators

• Thomas Maugey (EPFL, LTS4 - Lausanne, Switzerland)
• Pascal Frossard (EPFL, LTS4 - Lausanne, Switzerland)

## ISPGroup Participants

Last updated June 29, 2016, at 03:49 AM
