Deep Lighting Environment Map Estimation from Spherical Panoramas

Object relighting within indoor scenes

Abstract

Estimating a scene’s lighting is an essential task when compositing synthetic content within real environments, with applications in mixed reality and post-production. In this work we present a data-driven model that estimates an HDR lighting environment map from a single LDR monocular spherical panorama. In addition to being a challenging and ill-posed problem, the lighting estimation task also suffers from a scarcity of readily available illumination ground-truth data, which hinders the applicability of data-driven methods. We approach this problem differently, exploiting the availability of surface geometry to employ image-based relighting as a data generator and supervision mechanism. This relies on a global Lambertian assumption that helps us overcome issues related to pre-baked lighting. We relight our training data and complement the model’s supervision with a photometric loss, enabled by a differentiable image-based relighting technique. Finally, since we predict spherical spectral coefficients, we show that imposing a distribution prior on the predicted coefficients greatly boosts performance. Code and models are available at vcl3d.github.io/DeepPanoramaLighting.
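To give a concrete sense of the supervision mechanism described above, the following is a minimal sketch of differentiable relighting under a global Lambertian assumption, using second-order spherical harmonic (SH) lighting (9 coefficients per color channel) and an L1 photometric loss. The SH order, the function names (`sh_basis`, `lambertian_relight`, `photometric_loss`), and the tensor layout are illustrative assumptions, not the released implementation.

```python
# Minimal sketch: differentiable Lambertian relighting from SH lighting coefficients.
# Assumes second-order SH (9 coefficients per RGB channel); names are illustrative.
import torch


def sh_basis(normals: torch.Tensor) -> torch.Tensor:
    """Evaluate the 9 real SH basis functions (bands 0-2) at unit normals (..., 3)."""
    x, y, z = normals.unbind(dim=-1)
    return torch.stack([
        torch.full_like(x, 0.282095),        # Y_0^0
        0.488603 * y,                        # Y_1^-1
        0.488603 * z,                        # Y_1^0
        0.488603 * x,                        # Y_1^1
        1.092548 * x * y,                    # Y_2^-2
        1.092548 * y * z,                    # Y_2^-1
        0.315392 * (3.0 * z * z - 1.0),      # Y_2^0
        1.092548 * x * z,                    # Y_2^1
        0.546274 * (x * x - y * y),          # Y_2^2
    ], dim=-1)


def lambertian_relight(albedo: torch.Tensor,
                       normals: torch.Tensor,
                       sh_coeffs: torch.Tensor) -> torch.Tensor:
    """Shade a surface under SH lighting with a global Lambertian assumption.

    albedo:    (H, W, 3) diffuse albedo.
    normals:   (H, W, 3) unit surface normals.
    sh_coeffs: (3, 9) lighting coefficients, one set per RGB channel.
    returns:   (H, W, 3) relit image.
    """
    # Clamped-cosine convolution factors for bands 0, 1, 2 (Ramamoorthi & Hanrahan).
    band_scale = sh_coeffs.new_tensor([3.141593] + [2.094395] * 3 + [0.785398] * 5)
    basis = sh_basis(normals)                               # (H, W, 9)
    irradiance = torch.einsum('hwk,ck->hwc', basis, sh_coeffs * band_scale)
    return albedo * irradiance.clamp(min=0.0) / 3.141593    # divide by pi for reflectance


def photometric_loss(pred_coeffs, albedo, normals, target):
    """L1 photometric loss between the relit rendering and a target image."""
    return (lambertian_relight(albedo, normals, pred_coeffs) - target).abs().mean()
```

Because every step above is composed of differentiable tensor operations, gradients of the photometric loss can flow back into the predicted lighting coefficients, which is what allows relit imagery to act as a supervision signal.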

Publication
In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
Nikolaos Zioulis
Computer Vision, Graphics & Machine Learning Engineer & Scientist

My research interests lie at the intersection of computer vision, computer graphics and modern data-driven approaches.