An Integrated Platform for Live 3D Human Reconstruction and Motion Capturing

[Figure: 3D reconstruction result]

Abstract

The latest developments in 3D capturing, processing, and rendering provide the means to unlock novel 3D application pathways. This paper describes the main elements of an integrated platform targeting tele-immersion and future 3D applications, addressing the tasks of real-time capturing, robust 3D human shape/appearance reconstruction, and skeleton-based motion tracking. More specifically, the details of a multiple RGB-depth (RGB-D) capturing system are first given, along with a novel sensor calibration method. A robust, fast reconstruction method from multiple RGB-D streams is then proposed, based on an enhanced variation of the volumetric Fourier transform-based method, parallelized on the Graphics Processing Unit and accompanied by an appropriate texture-mapping algorithm. On top of that, given the lack of relevant objective evaluation methods, a novel framework is proposed for the quantitative evaluation of real-time 3D reconstruction systems. Finally, a generic, multiple-depth-stream-based method for accurate real-time human skeleton tracking is proposed. Detailed experimental results with multi-Kinect2 data sets verify the validity of our arguments and the effectiveness of the proposed system and methodologies.
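To give a concrete picture of the multi-sensor capture-and-fusion stage, the sketch below shows, in very simplified form, how depth maps from several calibrated RGB-D sensors can be back-projected and accumulated into a common volumetric grid. It is an illustrative toy example under assumed conventions, not the paper's enhanced volumetric Fourier transform-based reconstruction or its GPU implementation; the `depth_to_points` and `fuse_views` functions, the intrinsics/extrinsics representation, and the volume parameters are hypothetical.

```python
# Minimal sketch (not the paper's method): merging multiple calibrated
# RGB-D depth maps into a common volumetric grid. All names, parameters,
# and conventions here are illustrative assumptions.
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) to 3D points in the sensor frame."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[z.reshape(-1) > 0]                    # keep valid depth samples only

def fuse_views(depth_maps, intrinsics, extrinsics, grid_dim=128, volume_size=2.0):
    """Accumulate points from all sensors into a simple occupancy volume
    centered at the origin of the global (calibration) frame."""
    volume = np.zeros((grid_dim,) * 3, dtype=np.float32)
    voxel = volume_size / grid_dim
    for depth, K, T in zip(depth_maps, intrinsics, extrinsics):
        pts = depth_to_points(depth, K["fx"], K["fy"], K["cx"], K["cy"])
        pts = (T[:3, :3] @ pts.T).T + T[:3, 3]       # sensor frame -> global frame
        idx = np.floor((pts + volume_size / 2) / voxel).astype(int)
        valid = np.all((idx >= 0) & (idx < grid_dim), axis=1)
        np.add.at(volume, tuple(idx[valid].T), 1.0)  # one vote per contributing point
    return volume
```

In the actual system described in the abstract, such a naive per-voxel accumulation is replaced by the enhanced volumetric Fourier transform-based reconstruction, parallelized on the GPU to sustain real-time rates, and followed by texture mapping of the reconstructed surface.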

Publication
IEEE Transactions on Circuits and Systems for Video Technology, 2017
Nikolaos Zioulis
Computer Vision, Graphics & Machine Learning Engineer & Scientist

My research interests lie at the intersection of computer vision, computer graphics and modern data-driven approaches.