Nerfies demonstrates deformation-aware neural radiance fields that reconstruct and render dynamic, real-world scenes from casually captured video. Instead of assuming a static world, the method learns a canonical radiance field together with a per-frame deformation field that warps points observed under changing poses or expressions back into that canonical frame during training. This lets the system synthesize photorealistic novel views of nonrigid subjects such as faces, bodies, and cloth while preserving fine detail and consistent lighting. The training pipeline tolerates imperfect captures by accounting for camera poses, exposure variation, and background segmentation, producing stable geometry and appearance. A set of utilities covers dataset preparation, pose estimation, and checkpointing so researchers can reproduce results on their own footage. The work sits at the intersection of graphics and vision, showing how learned volumetric rendering can handle human motion without dense markers or studio rigs.
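To make the canonical-space idea concrete, here is a minimal JAX/Flax sketch, not the repository's actual API: the module names (`DeformationField`, `CanonicalNeRF`, `DeformableNeRF`), the 8-D latent code sizes, and the simple translational warp are all illustrative assumptions (the real method uses an SE(3) warp and positional encodings). Each sample point from an observed frame is warped by a deformation MLP conditioned on that frame's learned latent code, and the shared canonical field is queried at the warped point.

```python
import jax
import jax.numpy as jnp
import flax.linen as nn

class DeformationField(nn.Module):
    """Hypothetical MLP mapping an observed-frame point, conditioned on a
    per-frame latent warp code, to its location in the canonical frame.
    (Simplified to a translational offset on raw coordinates.)"""
    features: int = 128

    @nn.compact
    def __call__(self, x, warp_code):
        h = jnp.concatenate([x, warp_code], axis=-1)
        for _ in range(4):
            h = nn.relu(nn.Dense(self.features)(h))
        return x + nn.Dense(3)(h)  # predicted offset into canonical space

class CanonicalNeRF(nn.Module):
    """Hypothetical canonical-space radiance field. A per-frame appearance
    code feeds only the color branch, so exposure/white-balance changes in
    casual captures can be absorbed without corrupting geometry."""
    features: int = 256

    @nn.compact
    def __call__(self, x_canonical, appearance_code):
        h = x_canonical
        for _ in range(4):
            h = nn.relu(nn.Dense(self.features)(h))
        sigma = nn.softplus(nn.Dense(1)(h))  # density from geometry alone
        h_rgb = nn.relu(
            nn.Dense(self.features)(
                jnp.concatenate([h, appearance_code], axis=-1)))
        rgb = nn.sigmoid(nn.Dense(3)(h_rgb))
        return rgb, sigma

class DeformableNeRF(nn.Module):
    """Warp observed-frame sample points into canonical space, then query
    the shared canonical field there."""

    @nn.compact
    def __call__(self, x, warp_code, appearance_code):
        x_canonical = DeformationField()(x, warp_code)
        return CanonicalNeRF()(x_canonical, appearance_code)

# Toy usage: 8 ray-sample points from one frame, with that frame's learned
# 8-D warp and appearance embeddings broadcast to every point.
model = DeformableNeRF()
x = jnp.zeros((8, 3))
warp_code = jnp.zeros((8, 8))
appearance_code = jnp.zeros((8, 8))
params = model.init(jax.random.PRNGKey(0), x, warp_code, appearance_code)
rgb, sigma = model.apply(params, x, warp_code, appearance_code)
```

Keeping density a function of canonical geometry alone, while routing the appearance code only into the color head, is what lets per-frame exposure differences be explained away without distorting the reconstructed shape.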
Features
- Canonical-space NeRF with learned nonrigid deformation field
- High-quality novel-view synthesis of moving, deformable subjects
- Robust training with pose, exposure, and segmentation handling
- Utilities for dataset prep, experiments, and checkpoints
- Support for reenactment and expression/pose interpolation (see the sketch after this list)
- Reproducible baselines for dynamic NeRF research
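Because every training frame has its own deformation latent code, in-between expressions or poses can be rendered by blending two frames' codes and re-rendering with each blend. A minimal sketch of that interpolation, assuming 8-D warp embeddings and a hypothetical helper name (the repository may expose this differently):

```python
import jax.numpy as jnp

def interpolate_warp_codes(code_a, code_b, num_steps):
    """Linearly blend two per-frame deformation codes (hypothetical
    helper, not the repository's API). Rendering with each blended
    code sweeps smoothly between the two captured poses/expressions."""
    ts = jnp.linspace(0.0, 1.0, num_steps)[:, None]  # (num_steps, 1)
    return (1.0 - ts) * code_a + ts * code_b         # (num_steps, dim)

# Toy usage: ten in-between codes between two frames' learned 8-D warp
# embeddings; each row is fed to the renderer as that view's warp code.
codes = interpolate_warp_codes(jnp.zeros(8), jnp.ones(8), num_steps=10)
```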