
3D hair reconstructed from a single image

source link: https://www.tuicool.com/articles/hit/NBzeMnF

Figure: Our method automatically generates 3D hair strands from a variety of single-view inputs. Each panel, from left to right: the input image, the volumetric representation with color-coded local orientations predicted by our method, and the final synthesized hair strands rendered from two viewpoints.

Recent advances in single-view 3D hair digitization have made the creation of high-quality CG characters scalable and accessible to end-users, enabling new forms of personalized VR and gaming experiences. To handle the complexity and variety of hair structures, most cutting-edge techniques rely on the successful retrieval of a particular hair model from a comprehensive hair database. Not only are such data-driven methods storage intensive, but they are also prone to failure for highly unconstrained input images, exotic hairstyles, or failed face detection. Instead of using a large collection of 3D hair models directly, we propose to represent the manifold of 3D hairstyles implicitly through the compact latent space of a volumetric variational autoencoder (VAE). This deep neural network is trained on volumetric orientation field representations of 3D hair models and can synthesize new hairstyles from a compressed code. To enable end-to-end 3D hair inference, we train an additional regression network to predict codes in the VAE latent space from any input image. Strand-level hairstyles can then be generated from the predicted volumetric representation. Our fully automatic framework does not require any ad-hoc face fitting, intermediate classification and segmentation, or hairstyle database retrieval. On challenging inputs, including photos that are low-resolution, over-exposed, or contain extreme head poses, our hair synthesis approach is significantly more robust than state-of-the-art data-driven hair modeling techniques and can handle a much wider variation of hairstyles. The storage requirements are minimal, and a 3D hair model can be produced from an image in a second. Our evaluations also show that successful reconstructions are possible from highly stylized cartoon images, non-human subjects, and pictures taken from behind a person. Our approach is particularly well suited for continuous and plausible hair interpolation between very different hairstyles.
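To make the volumetric VAE idea concrete, here is a minimal sketch in PyTorch of a VAE over 3D hair orientation fields. The 32^3 grid resolution, the 4-channel occupancy-plus-orientation voxel encoding, the layer widths, the latent size, and the KL weight are illustrative assumptions, not the authors' actual settings.

```python
# A minimal sketch (PyTorch) of a volumetric VAE over 3D hair orientation
# fields. All sizes below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HairVolumeVAE(nn.Module):
    def __init__(self, latent_dim=64):
        super().__init__()
        # Assumed input: occupancy + 3D orientation per voxel, i.e. 4
        # channels on a 32^3 grid.
        self.encoder = nn.Sequential(
            nn.Conv3d(4, 16, 4, stride=2, padding=1),   # 32 -> 16
            nn.ReLU(),
            nn.Conv3d(16, 32, 4, stride=2, padding=1),  # 16 -> 8
            nn.ReLU(),
            nn.Conv3d(32, 64, 4, stride=2, padding=1),  # 8 -> 4
            nn.ReLU(),
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(64 * 4 ** 3, latent_dim)
        self.fc_logvar = nn.Linear(64 * 4 ** 3, latent_dim)
        self.fc_dec = nn.Linear(latent_dim, 64 * 4 ** 3)
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(64, 32, 4, stride=2, padding=1),  # 4 -> 8
            nn.ReLU(),
            nn.ConvTranspose3d(32, 16, 4, stride=2, padding=1),  # 8 -> 16
            nn.ReLU(),
            nn.ConvTranspose3d(16, 4, 4, stride=2, padding=1),   # 16 -> 32
        )

    def encode(self, x):
        h = self.encoder(x)
        return self.fc_mu(h), self.fc_logvar(h)

    def reparameterize(self, mu, logvar):
        # Standard reparameterization trick: sample z = mu + sigma * eps.
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def decode(self, z):
        h = self.fc_dec(z).view(-1, 64, 4, 4, 4)
        return self.decoder(h)

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.decode(z), mu, logvar

def vae_loss(recon, target, mu, logvar, kl_weight=1e-3):
    # Reconstruction of the orientation field plus a KL regularizer that
    # shapes the compact latent space described in the abstract.
    rec = F.mse_loss(recon, target)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl_weight * kl
```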
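The image-to-latent regression step can be sketched as an off-the-shelf image encoder with a linear head that outputs a latent code; the ResNet-18 backbone, the 224x224 input size, and the linear interpolation scheme are assumptions for illustration, not the paper's exact design. `HairVolumeVAE` refers to the sketch above.

```python
# Hypothetical sketch of the image-to-latent-code regressor, end-to-end
# inference, and latent-space hairstyle interpolation.
import torch
import torch.nn as nn
import torchvision.models as models

class ImageToHairCode(nn.Module):
    def __init__(self, latent_dim=64):
        super().__init__()
        # Assumed backbone: ResNet-18 with its classifier replaced by a
        # linear layer that predicts the VAE latent code.
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Linear(backbone.fc.in_features, latent_dim)
        self.backbone = backbone

    def forward(self, image):
        # image: (B, 3, H, W) -> predicted latent code (B, latent_dim)
        return self.backbone(image)

vae = HairVolumeVAE(latent_dim=64).eval()
regressor = ImageToHairCode(latent_dim=64).eval()

# End-to-end inference: photo -> latent code -> volumetric orientation
# field (strand growth from the field is sketched separately below).
with torch.no_grad():
    image = torch.rand(1, 3, 224, 224)   # placeholder input photo
    z = regressor(image)
    volume = vae.decode(z)               # (1, 4, 32, 32, 32)

# Hairstyle interpolation: linearly blend two codes and decode each blend,
# matching the continuous interpolation the abstract describes.
with torch.no_grad():
    z_a, z_b = torch.randn(1, 64), torch.randn(1, 64)
    for t in torch.linspace(0.0, 1.0, 5):
        blended = (1 - t) * z_a + t * z_b
        interpolated_volume = vae.decode(blended)
```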
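Finally, strand-level hair can be grown from the decoded volume by tracing polylines through the orientation field. The toy sketch below uses simple Euler integration from assumed scalp-root positions; the step size, occupancy threshold, and termination rules are placeholders, not the paper's strand-synthesis procedure.

```python
# A toy sketch of strand growth from a decoded orientation field: each
# strand starts at a scalp root and follows the local orientation by
# Euler steps until it leaves the occupied volume.
import numpy as np

def grow_strands(volume, roots, step=0.5, max_steps=200, occ_thresh=0.5):
    """volume: (4, D, H, W) array; channel 0 = occupancy, 1:4 = orientation.
    roots: (N, 3) voxel-space root positions on the scalp (assumed given)."""
    occ, orient = volume[0], volume[1:]
    D, H, W = occ.shape
    strands = []
    for p in roots.astype(np.float64):
        strand = [p.copy()]
        for _ in range(max_steps):
            i, j, k = np.clip(p.round().astype(int), 0, [D - 1, H - 1, W - 1])
            if occ[i, j, k] < occ_thresh:
                break                      # left the hair volume
            d = orient[:, i, j, k]
            n = np.linalg.norm(d)
            if n < 1e-6:
                break                      # no reliable direction here
            p = p + step * d / n           # Euler step along the field
            strand.append(p.copy())
        strands.append(np.array(strand))
    return strands
```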

