arXiv:2207.05378

Collaborative Neural Rendering using Anime Character Sheets

Published on Jul 12, 2022
Authors:

Abstract

Drawing images of characters in desired poses is an essential but laborious task in anime production. In this paper, we present the Collaborative Neural Rendering (CoNR) method, which creates new images in specified poses from a few reference images (a.k.a. character sheets). In general, the high diversity of body shapes among anime characters defies the use of universal body models such as SMPL, which are built from real-world humans. To overcome this difficulty, CoNR uses a compact and easy-to-obtain landmark encoding, avoiding the need to create a unified UV mapping in the pipeline. In addition, CoNR's performance improves significantly when multiple reference images are available, thanks to feature-space cross-view warping in a carefully designed neural network. Moreover, we have collected a character-sheet dataset containing over 700,000 hand-drawn and synthesized images of diverse poses to facilitate research in this area. Our code and demo are available at https://github.com/megvii-research/CoNR.
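
The abstract attributes much of CoNR's multi-reference gain to feature-space cross-view warping. As a rough illustration only, the PyTorch sketch below shows one common way such warping can be realized: features extracted from each reference image are resampled into the target view with a dense flow field and then fused. The function names, tensor shapes, flow-field source, and mean fusion here are illustrative assumptions, not the paper's actual network.

```python
# A minimal, self-contained sketch of feature-space cross-view warping.
# NOT the authors' implementation: the flow fields, shapes, and the
# mean-fusion step are assumptions for illustration.
import torch
import torch.nn.functional as F


def warp_features(feat: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Bilinearly warp a reference-view feature map toward the target view.

    feat: (B, C, H, W) features from one character-sheet image.
    flow: (B, 2, H, W) per-pixel (x, y) offsets, in pixels, into the target view.
    """
    _, _, h, w = feat.shape
    # Base sampling grid in pixel coordinates.
    ys, xs = torch.meshgrid(
        torch.arange(h, device=feat.device, dtype=feat.dtype),
        torch.arange(w, device=feat.device, dtype=feat.dtype),
        indexing="ij",
    )
    grid = torch.stack((xs, ys), dim=0).unsqueeze(0) + flow   # (B, 2, H, W)
    # Normalize to [-1, 1] as required by grid_sample.
    grid_x = 2.0 * grid[:, 0] / (w - 1) - 1.0
    grid_y = 2.0 * grid[:, 1] / (h - 1) - 1.0
    grid = torch.stack((grid_x, grid_y), dim=-1)              # (B, H, W, 2)
    return F.grid_sample(feat, grid, align_corners=True)


def fuse_reference_views(ref_feats, flows):
    """Warp every reference view into the target frame and average them.

    ref_feats: list of (B, C, H, W) feature maps, one per reference image.
    flows:     list of matching (B, 2, H, W) flow fields.
    """
    warped = [warp_features(f, fl) for f, fl in zip(ref_feats, flows)]
    return torch.stack(warped, dim=0).mean(dim=0)
```

Averaging is only the simplest possible fusion; any network following this idea could instead weight the warped views, e.g. by predicted visibility, before decoding the final image.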
