- Accepts one (20000, 3) pointcloud per target instance.
- Fits each instance with 200 steps of Iterative Closest Point (ICP) alignment, using 10k points subsampled from the input.
- Computes the root mean square of bidirectional L2 Chamfer distances pairwise over the final alignments (see the sketch after this list).
- Reports summary statistics for the results.
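For reference, a minimal sketch of the metric described above, using numpy and scipy. The function names are illustrative, and how the benchmark combines the two Chamfer directions (sum vs. average) is an assumption, not confirmed here:

```python
import numpy as np
from scipy.spatial import cKDTree

def chamfer_l2(a: np.ndarray, b: np.ndarray) -> float:
    """Bidirectional L2 Chamfer distance between (N, 3) pointclouds a and b."""
    d_ab, _ = cKDTree(b).query(a)  # nearest-neighbour distances a -> b
    d_ba, _ = cKDTree(a).query(b)  # nearest-neighbour distances b -> a
    # Summing the two directions is an assumption; the benchmark may average.
    return d_ab.mean() + d_ba.mean()

def rms_chamfer(pred_clouds, gt_clouds) -> float:
    """Root mean square of per-pair Chamfer distances over all instances."""
    d = np.array([chamfer_l2(p, g) for p, g in zip(pred_clouds, gt_clouds)])
    return float(np.sqrt((d ** 2).mean()))
```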
The ground truth is camera-aligned: the rotation from the model-view matrix has been applied, along with the xz component of the translation. The benchmark contains fixed draws of 20k-point pointclouds from the camera-aligned versions of the Animodel dataset.
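As an illustration of what camera alignment means here, a sketch assuming a standard 4x4 model-view matrix acting on column vectors; the axis and vector conventions are assumptions, not confirmed by the benchmark:

```python
import numpy as np

def camera_align(points: np.ndarray, model_view: np.ndarray) -> np.ndarray:
    """Apply the model-view rotation plus the xz translation component.

    points: (N, 3) pointcloud; model_view: (4, 4) matrix. The column-vector
    convention (and y as the dropped translation axis) is an assumption.
    """
    rotation = model_view[:3, :3]
    translation = model_view[:3, 3].copy()
    translation[1] = 0.0  # keep only the x and z translation components
    return points @ rotation.T + translation
```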
You must provide camera-aligned pointclouds with exactly 20k points for every instance in the benchmark. Note that ICP can be unstable; a better initial alignment will give better results.
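If your method produces a different number of points, one simple way to meet the exactly-20k requirement is to resample. Uniform random sampling below is an illustrative choice, not something the benchmark requires:

```python
import numpy as np

def to_20k(points: np.ndarray, seed: int = 0) -> np.ndarray:
    """Resample a (N, 3) pointcloud to exactly 20,000 points.

    Samples without replacement when N >= 20k, with replacement otherwise.
    """
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(points), size=20_000, replace=len(points) < 20_000)
    return points[idx]
```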
- Create a config, using config/animodel_points as a template.
- Call `scripts/run_animodel_points.py` to evaluate a given animal; the interface is below.
```
python scripts/run_animodel_points.py --animal <horse/cow/sheep> --method <method_name> (--no_rotation) --config <path/to/config, e.g. config/animodel_points> (<any hydra overrides>)
```
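For example, a concrete invocation (the method name my_method is a placeholder):

```
python scripts/run_animodel_points.py --animal horse --method my_method --config config/animodel_points
```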