Autoregressive Appearance Prediction for
3D Gaussian Avatars

Michael Steiner1,2*, Zhang Chen2, Alexander Richard2, Vasu Agrawal2, Markus Steinberger1, Michael Zollhoefer2

1TU Graz 2Meta

* Work done during an internship at Meta

Teaser figure showing the overview of our method.

To avoid one-to-many ambiguities in captured training data, we decouple appearance from pose for stable driving. During training, we learn per-frame appearance latents from extracted UV textures; at test time, we autoregressively predict them from short pose sequences to produce smooth renderings with realistic appearance variation.
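The driving-time rollout described above can be sketched as a simple autoregressive loop. Everything below is an illustrative assumption, not the paper's architecture: the window length, the latent dimension, and the linear predictor `predict_next_latent` are placeholders for the learned predictor.

```python
import numpy as np

rng = np.random.default_rng(0)

POSE_DIM, LATENT_DIM, WINDOW = 12, 8, 4  # illustrative sizes (assumed)

# Hypothetical predictor weights: map a short pose window plus the
# previous latent to the next appearance latent.
W_pose = rng.standard_normal((WINDOW * POSE_DIM, LATENT_DIM)) * 0.01
W_prev = np.eye(LATENT_DIM) * 0.9  # damping keeps the toy rollout stable

def predict_next_latent(pose_window, prev_latent):
    """One autoregressive step: next latent from recent poses + last latent."""
    feats = pose_window.reshape(-1)  # flatten (WINDOW, POSE_DIM)
    return feats @ W_pose + prev_latent @ W_prev

def rollout(poses, init_latent):
    """Autoregressively predict one appearance latent per driven frame."""
    latents = [init_latent]
    for t in range(WINDOW, len(poses)):
        window = poses[t - WINDOW:t]          # short pose history
        latents.append(predict_next_latent(window, latents[-1]))
    return np.stack(latents)

poses = rng.standard_normal((20, POSE_DIM))   # a driving pose sequence
latents = rollout(poses, np.zeros(LATENT_DIM))
print(latents.shape)  # one latent per predicted frame, plus the initial one
```

Feeding the previous latent back into each step is what yields temporally smooth appearance evolution, in contrast to predicting appearance independently per frame.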

Abstract

A photorealistic and immersive human avatar experience demands capturing fine, person-specific details such as cloth and hair dynamics, subtle facial expressions, and characteristic motion patterns. Achieving this requires large, high-quality datasets, which often introduce ambiguities and spurious correlations when very similar poses correspond to different appearances. Models that fit these details during training can overfit and produce unstable, abrupt appearance changes for novel poses. We propose a 3D Gaussian Splatting avatar model with a spatial MLP backbone that is conditioned on both pose and an appearance latent. The latent is learned during training by an encoder, yielding a compact representation that improves reconstruction quality and helps disambiguate pose-driven renderings. At driving time, our predictor autoregressively infers the latent, producing temporally smooth appearance evolution and improved stability. Overall, our method delivers a robust and practical path to high-fidelity, stable avatar driving.
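The conditioning described in the abstract can be sketched as an MLP that maps a pose vector and an appearance latent to per-Gaussian parameters. All sizes, the two-layer architecture, and the parameter layout (position offset, scale, opacity, color) are illustrative assumptions, not the paper's spatial MLP backbone.

```python
import numpy as np

rng = np.random.default_rng(1)

POSE_DIM, LATENT_DIM, HIDDEN, N_GAUSS = 12, 8, 32, 100
PARAMS_PER_GAUSS = 3 + 3 + 1 + 3  # offset, scale, opacity, rgb (assumed layout)

# Hypothetical two-layer MLP weights.
W1 = rng.standard_normal((POSE_DIM + LATENT_DIM, HIDDEN)) * 0.1
W2 = rng.standard_normal((HIDDEN, N_GAUSS * PARAMS_PER_GAUSS)) * 0.1

def decode(pose, latent):
    """Map (pose, appearance latent) to per-Gaussian parameters."""
    x = np.concatenate([pose, latent])   # joint conditioning signal
    h = np.maximum(x @ W1, 0.0)          # ReLU hidden layer
    return (h @ W2).reshape(N_GAUSS, PARAMS_PER_GAUSS)

params = decode(rng.standard_normal(POSE_DIM), rng.standard_normal(LATENT_DIM))
print(params.shape)  # (100, 10)
```

Because the latent enters the decoder alongside the pose, two frames with near-identical poses can still decode to different appearances, which is how the latent disambiguates the one-to-many mapping.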

Test Sequence Comparison

We compare the results using our appearance predictor against two baselines: a re-implementation of [Zhan et al., 2025] with added face and hand/finger conditioning, denoted MMLPs; and a non-relightable variant of the Relightable Full-Body Gaussian Codec Avatar model from [Wang et al., 2025], denoted nRFGCA.

Localized Pose Parameters

Qualitative comparison of global (MMLPs [Zhan et al., 2025]) versus localized (Ours) pose conditioning. We manually perturb a single pose parameter: global conditioning induces large, non-local deformations and spurious appearance changes, whereas localized conditioning confines the effect to local, physically plausible motion.
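The global-versus-localized contrast can be illustrated with a simple masking sketch: under localized conditioning, each body region only sees the pose parameters assigned to it, so perturbing a single parameter cannot affect unrelated regions. The region assignment and dimensions below are illustrative assumptions.

```python
import numpy as np

POSE_DIM, N_REGIONS = 6, 3

# Hypothetical assignment: each region is conditioned on 2 pose parameters.
region_mask = np.zeros((N_REGIONS, POSE_DIM), dtype=bool)
for r in range(N_REGIONS):
    region_mask[r, 2 * r:2 * r + 2] = True

def regional_features(pose):
    """Localized conditioning: zero out pose params outside each region."""
    return np.where(region_mask, pose, 0.0)  # (N_REGIONS, POSE_DIM)

pose = np.arange(POSE_DIM, dtype=float)
feats = regional_features(pose)

# Perturb a single pose parameter: only its assigned region changes.
perturbed = pose.copy()
perturbed[0] += 10.0
delta = regional_features(perturbed) - feats
affected = np.nonzero(delta.any(axis=1))[0]
print(affected)  # only region 0 is affected
```

A global conditioning scheme would feed the full pose vector to every region, so the same perturbation could propagate into non-local deformations, matching the behavior shown in the comparison.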

References

BibTeX

@misc{steiner2026aap3dga,
  title={Autoregressive Appearance Prediction for 3D Gaussian Avatars},
  author={Michael Steiner and Zhang Chen and Alexander Richard and Vasu Agrawal and Markus Steinberger and Michael Zollhöfer},
  year={2026},
  eprint={2604.00928},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2604.00928},
}