Skinned Motion Retargeting with Residual Perception of Motion Semantics & Geometry
Conference proceeding


Jiaxu Zhang, Junwu Weng, Di Kang, Fang Zhao, Shaoli Huang, Xuefei Zhe, Linchao Bao, Ying Shan, Jue Wang, Zhigang Tu, …
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023, pp. 13864-13872
06/2023

Abstract

Keywords: body, computer vision, geometry, gesture, humans: face, interference, interpolation, logic gates, movement, pose, semantics, shape
Good motion retargeting cannot be achieved without reasonable consideration of source-target differences at both the skeleton and shape-geometry levels. In this work, we propose a novel Residual RETargeting network (R²ET) structure, which relies on two neural modification modules to progressively adjust the source motions to fit the target skeletons and shapes. In particular, a skeleton-aware module is introduced to preserve the source motion semantics, and a shape-aware module is designed to perceive the geometry of target characters to reduce interpenetration and contact-missing. Driven by our distance-based losses that explicitly model motion semantics and geometry, these two modules learn residual motion modifications on the source motion to generate plausible retargeted motion in a single inference pass without post-processing. To balance the two modifications, we further present a balancing gate that performs linear interpolation between them. Extensive experiments on the public Mixamo dataset demonstrate that R²ET achieves state-of-the-art performance and strikes a good balance between preserving motion semantics and attenuating interpenetration and contact-missing. Code is available at https://github.com/Kebii/R2ET.
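The abstract describes a progressive scheme: a skeleton-aware residual is applied first, a shape-aware residual on top of it, and a balancing gate linearly interpolates between the two resulting motion levels. The sketch below illustrates only that data flow on plain float vectors; the function and parameter names (`retarget`, `delta_skel`, `delta_shape`, `gate`) are illustrative assumptions, not the paper's actual API, and the real system predicts the residuals and the gate with neural networks.

```python
# Minimal sketch of the residual-retargeting idea, assuming the motion
# parameters are flattened into a plain list of floats. All names here
# are hypothetical placeholders, not taken from the R2ET codebase.

def retarget(q_src, delta_skel, delta_shape, gate):
    """Apply two residual modifications progressively, then blend.

    q_src       -- source motion parameters (list of floats)
    delta_skel  -- residual from the skeleton-aware module
    delta_shape -- residual from the shape-aware module
    gate        -- scalar in [0, 1] from the balancing gate
    """
    # Step 1: skeleton-level modification, preserving motion semantics.
    q_skel = [s + d for s, d in zip(q_src, delta_skel)]
    # Step 2: shape-level modification on top, reducing interpenetration
    # and contact-missing against the target geometry.
    q_full = [s + d for s, d in zip(q_skel, delta_shape)]
    # Balancing gate: linear interpolation between the two levels.
    return [(1.0 - gate) * a + gate * b for a, b in zip(q_skel, q_full)]
```

With `gate = 0` the output keeps only the semantics-preserving skeleton-level motion; with `gate = 1` it additionally applies the full geometry-aware correction, matching the trade-off the balancing gate is said to control.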
