Light fields are becoming an increasingly popular method of digital content production for visual effects and virtual/augmented reality, as they capture a view-dependent representation enabling photo-realistic rendering over a range of viewpoints. Light field video is generally captured using arrays of cameras, resulting in tens to hundreds of images of a scene at each time instance. An open problem is how to efficiently represent the data, preserving the view-dependent detail of the surface, in a form that is compact to store and efficient to render. In this paper we show that constructing an eigen texture basis representation from the light field, using an approximate 3D surface reconstruction as a geometric proxy, provides a compact representation that maintains view-dependent realism. We demonstrate that the proposed method is able to reduce storage requirements by >95% while maintaining the visual quality of the captured data. An efficient view-dependent rendering technique is also proposed, performed in eigen space, which allows smooth continuous viewpoint interpolation through the light field.
Light Field Compression using Eigen Textures
Marco Volino, Armin Mustafa, Jean-Yves Guillemaut and Adrian Hilton
International Conference on 3D Vision (3DV) 2019
@inproceedings{Volino:3DV:2019,
  AUTHOR    = "Volino, Marco and Mustafa, Armin and Guillemaut, Jean-Yves and Hilton, Adrian",
  TITLE     = "Light Field Compression using Eigen Textures",
  BOOKTITLE = "International Conference on 3D Vision (3DV)",
  YEAR      = "2019",
}
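As a rough illustration of the eigen texture idea described in the abstract, the minimal sketch below builds a truncated PCA basis over a stack of view-dependent texture maps and reconstructs them from the compressed representation. This is not the authors' implementation: it assumes the per-view textures have already been resampled into a common UV layout, and names such as textures, num_components, build_eigen_basis and reconstruct are hypothetical placeholders.

# Illustrative sketch of eigen-texture compression via PCA (not the paper's code).
# Assumption: view-dependent textures share a common UV layout of shape (H, W, 3).
import numpy as np

def build_eigen_basis(textures, num_components):
    """PCA over flattened textures: returns mean, truncated basis, and per-view coefficients."""
    n_views = textures.shape[0]
    flat = textures.reshape(n_views, -1).astype(np.float64)   # (N, H*W*3)
    mean = flat.mean(axis=0)
    centred = flat - mean
    # Thin SVD; the leading right singular vectors act as the eigen-texture basis.
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    basis = vt[:num_components]                                # (k, H*W*3)
    coeffs = centred @ basis.T                                 # (N, k)
    return mean, basis, coeffs

def reconstruct(mean, basis, coeffs, shape):
    """Reconstruct approximate textures from the truncated eigen representation."""
    flat = coeffs @ basis + mean
    return flat.reshape((-1,) + shape)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    views, h, w = 50, 64, 64                      # toy stand-in for captured light-field views
    textures = rng.random((views, h, w, 3))
    mean, basis, coeffs = build_eigen_basis(textures, num_components=8)
    recon = reconstruct(mean, basis, coeffs, (h, w, 3))
    stored = mean.size + basis.size + coeffs.size
    print("relative storage:", stored / textures.size)
    print("rmse:", np.sqrt(np.mean((recon - textures) ** 2)))

In this toy setting the stored quantities are only the mean texture, the retained basis vectors and a small coefficient vector per view, which is where the storage reduction comes from; the paper's reported >95% saving refers to its own representation and datasets, not to this sketch.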