Turn once in front of a video camera → a 3D model of the person – Video Based Reconstruction of 3D People Models


Simple Title
Alldieck, Thiemo, et al. "Video based reconstruction of 3d people models." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018.
Type
Paper
Year
2018
Posted at
Apr 18, 2018
Tags
image

Overview - What's impressive?

A paper showing that simply spreading your arms and turning once in front of a camera is enough to generate a 3D model of that person.

Abstract

This paper describes how to obtain accurate 3D body models and texture of arbitrary people from a single, monocular video in which a person is moving. Based on a parametric body model, we present a robust processing pipeline achieving 3D model fits with 5mm accuracy also for clothed people. Our main contribution is a method to nonrigidly deform the silhouette cones corresponding to the dynamic human silhouettes, resulting in a visual hull in a common reference frame that enables surface reconstruction. This enables efficient estimation of a consensus 3D shape, texture and implanted animation skeleton based on a large number of frames. We present evaluation results for a number of test subjects and analyze overall performance. Requiring only a smartphone or webcam, our method enables everyone to create their own fully animatable digital double, e.g., for social VR applications or virtual try-on for online fashion shopping.

Architecture

The system consists of three stages:

1. Segment the person from the background and estimate joint positions with machine learning
2. Fuse the information from all frames into a single 3D model
3. Apply color and texture to the surface of the 3D model

(Deep learning does not appear to be used.)
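The second stage, fusing per-frame silhouettes into one shape, builds on the classic visual-hull idea (the paper's contribution is to unpose the silhouette cones of a moving person first). As an illustrative sketch only, not the authors' pipeline: the toy example below carves a voxel grid using orthographic silhouettes of a synthetic rotating subject. All function names and the cylindrical "subject" are assumptions made for the demo.

```python
import numpy as np

def rotate_z(pts, theta):
    """Rotate Nx3 points about the vertical (z) axis by theta radians."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return pts @ R.T

def inside_silhouette(pts2d, half_width, half_height):
    """The orthographic side-view silhouette of an upright cylinder is the
    same rectangle from every angle; test 2D points against it."""
    return (np.abs(pts2d[:, 0]) <= half_width) & (np.abs(pts2d[:, 1]) <= half_height)

def carve_visual_hull(grid, angles, half_width=0.5, half_height=1.0):
    """Keep only voxels whose projection falls inside every silhouette."""
    keep = np.ones(len(grid), dtype=bool)
    for theta in angles:
        rotated = rotate_z(grid, theta)
        proj = rotated[:, [0, 2]]  # orthographic view: drop the depth (y) axis
        keep &= inside_silhouette(proj, half_width, half_height)
    return keep

# Voxel grid on [-1, 1]^3, carved with 8 evenly spaced views around the subject.
n = 21
axis = np.linspace(-1.0, 1.0, n)
grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1).reshape(-1, 3)
hull = carve_visual_hull(grid, np.linspace(0.0, np.pi, 8, endpoint=False))

# The intersection of the rotated silhouette prisms approximates the cylinder:
# every surviving voxel lies close to the vertical axis (radius ~0.5).
radii = np.linalg.norm(grid[hull][:, :2], axis=1)
print(hull.sum(), radii.max())
```

With more views the carved polygonal cross-section converges to the true circular one; the paper's harder problem is that a real person deforms between frames, which is why the silhouette cones must be non-rigidly unposed into a common reference frame before this intersection makes sense.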

Results

Remarkably, the reconstruction is reported to be accurate to within 5 mm.

Further Thoughts

Easily generating your own avatar and using it in games or VR content: scenarios like that seem likely to become common.
