Code companion for the paper of the same name, available on arXiv.
Images sampled from the KL-divergence barycenter of two auxiliary models at various interpolation weights. The same Gaussian noise was used to seed the diffusion process in every setup.
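To illustrate what such a sweep involves, here is a minimal, self-contained sketch (a toy illustration, not the repository's actual code or API): for the KL-divergence barycenter, the fused score is the convex combination lam * s1 + (1 - lam) * s2 of the auxiliary scores, so sampling only requires plugging this weighted sum into an ordinary annealed-Langevin / reverse-diffusion loop. The Gaussian toy models, noise schedule, and step sizes below are illustrative assumptions.

import torch

def gaussian_score(x, mean, sigma):
    # Score (gradient of log density) of the data Gaussian N(mean, 1) convolved
    # with noise N(0, sigma^2), i.e. of N(mean, 1 + sigma^2).
    return (mean - x) / (1.0 + sigma**2)

def fused_score(x, sigma, lam):
    # Two hypothetical auxiliary models: Gaussians centered at -2 and +2.
    # The KL barycenter's score is their convex combination.
    return lam * gaussian_score(x, -2.0, sigma) + (1.0 - lam) * gaussian_score(x, 2.0, sigma)

def sample(lam, seed=0, n=2048, n_levels=20, steps=25, eps=1e-4):
    # Same Gaussian seed noise for every interpolation weight, as in the figure.
    g = torch.Generator().manual_seed(seed)
    sigmas = torch.logspace(1, -2, n_levels)      # annealed noise levels, 10 -> 0.01
    x = torch.randn(n, generator=g) * sigmas[0]   # start from wide Gaussian noise
    for sigma in sigmas:
        step = eps * (sigma / sigmas[-1]) ** 2    # Langevin step size for this level
        for _ in range(steps):
            z = torch.randn(n, generator=g)
            x = x + step * fused_score(x, sigma, lam) + (2.0 * step) ** 0.5 * z
    return x

for lam in (0.0, 0.25, 0.5, 0.75, 1.0):
    xs = sample(lam)
    print(f"weight={lam:.2f}  sample mean={xs.mean().item():+.2f}")

As the interpolation weight sweeps from 0 to 1, the samples shift from one auxiliary model toward the other while reusing the same noise realizations.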
Create a Python virtual environment and install the required packages:
# Create a virtual environment
python -m venv your_venv_name
# Activate the virtual environment
source your_venv_name/bin/activate # On Linux/Mac
# or
your_venv_name\Scripts\activate # On Windows
pip install -r requirements.txt

IMPORTANT: whether you are running a Python script or a Jupyter notebook, make sure your session's current working directory is the root of the project (the directory containing this README file) before running any script or notebook.
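For example, in a Jupyter notebook whose kernel was started inside notebooks/, a convenience cell along these lines (not part of the repo) moves the session to the project root first:

import os
# Hop up to the project root if the kernel was launched from notebooks/.
if os.path.basename(os.getcwd()) == "notebooks":
    os.chdir("..")
print("Working directory:", os.getcwd())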
Most experiments can be reproduced by running the Jupyter notebooks in the notebooks/ folder; their names indicate which experimental steps they reproduce, but feel free to contact us in case of doubt. For generating a large number of images, especially in the SDXL experiments, we recommend running the Python scripts in the hpc_scripts/ folder and submitting the jobs to a GPU cluster. If your cluster uses SLURM as the job scheduler, template .slurm scripts are included in the hpc_scripts/ folder for this purpose.
The out/ folder stores outputs such as generated image samples and SLURM job logs. cache/ stores intermediate results (CLIP distance values, PyTorch model checkpoints, etc.) used to generate the paper figures. src/ contains modules shared across the Jupyter notebooks and Python scripts.
If you find this work useful, please cite it as follows:
@inproceedings{scorefusion,
  title={{S}core{F}usion: {F}using {S}core-based Generative Models via {K}ullback-{L}eibler Barycenters},
  author={Liu, Hao and Ye, Junze T and Blanchet, Jose and Si, Nian},
  booktitle={International Conference on Artificial Intelligence and Statistics (AISTATS)},
  year={2025}
}
