[ICLR'25] Training-free Diffusion Model Alignment with Sampling Demons

Academia Sinica, Google DeepMind
Figure: reference cat photo used as the alignment target (Reference); image generated by the PF-ODE baseline, visibly different from the reference (DINOv2 sim. 0.5414); image generated by Demon (ours), closely matching the reference (DINOv2 sim. 0.8549).

Our proposed stochastic optimization algorithm, Demon, guides the denoising process with human preferences through direct user interaction at inference time.

The user selects preferred images (marked with a red border) to align generation with a reference image (top left); non-preferred images remain unselected. The improvement is measured by the cosine similarity of DINOv2 features to the reference: 0.5414 for the PF-ODE baseline (bottom left) versus 0.8549 for the final state (right).

Abstract

Aligning diffusion models with user preferences has been a key challenge. Existing methods for aligning diffusion models either require retraining or are limited to differentiable reward functions. To address these limitations, we propose a stochastic optimization approach, dubbed Demon, to guide the denoising process at inference time without backpropagation through reward functions or model retraining. Our approach works by controlling noise distribution in denoising steps to concentrate density on regions corresponding to high rewards through stochastic optimization. We provide comprehensive theoretical and empirical evidence to support and validate our approach, including experiments that use non-differentiable sources of rewards such as Visual-Language Model (VLM) APIs and human judgements. To the best of our knowledge, the proposed approach is the first inference-time, backpropagation-free preference alignment method for diffusion models. Our method can be easily integrated with existing diffusion models without further training. Our experiments show that the proposed approach significantly improves the average aesthetics scores for text-to-image generation.

TL;DR

  • Aligns diffusion-model outputs with user preference at inference time — no retraining, no backprop through the reward.
  • Works with non-differentiable rewards — VLM APIs, human clicks, custom scorers.
  • Plug-and-play with existing samplers; significantly improves aesthetic scores in text-to-image generation.

Method Overview


  1. While the video demonstrates using a first-order solver for simplicity, the paper employs a second-order solver to enhance optimization efficiency.
  2. Unlike the demo, the paper projects the weighted noise onto a sphere of radius \( \sqrt{N} \) instead of dividing by \( \sqrt{K} \), while retaining the same concept to generate Gaussian-like noise.
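The notes above describe the core mechanics: sample several candidate noises per denoising step, weight them by the reward of a cheap denoised preview, and project the combined noise onto a sphere of radius \( \sqrt{N} \) so it stays Gaussian-like. Below is a minimal NumPy sketch of one such reward-guided step. It is illustrative only, not the authors' implementation: the helper names (`denoise`, `reward`), the first-order (rather than second-order) update, and the softmax temperature `tau` are all assumptions for the sake of a self-contained example.

```python
import numpy as np

def demon_step(x_t, denoise, reward, sigma, K=8, tau=0.1, rng=None):
    """One reward-guided denoising step (illustrative sketch).

    Samples K candidate noises, scores each candidate by the reward of
    its one-step denoised preview (the reward may be non-differentiable,
    e.g. a VLM API or a human click), softmax-weights the noises, and
    projects the weighted combination onto the sphere of radius sqrt(N)
    to keep it Gaussian-like, as described in the notes above.
    """
    if rng is None:
        rng = np.random.default_rng()
    N = x_t.size
    eps = rng.standard_normal((K,) + x_t.shape)          # K candidate noises
    previews = [denoise(x_t + sigma * e) for e in eps]   # cheap lookahead per candidate
    r = np.array([reward(p) for p in previews])          # scalar reward per preview
    w = np.exp((r - r.max()) / tau)                      # softmax weights over rewards
    w /= w.sum()
    eps_bar = np.tensordot(w, eps, axes=1)               # reward-weighted noise
    eps_bar *= np.sqrt(N) / np.linalg.norm(eps_bar)      # project onto sqrt(N)-sphere
    return x_t + sigma * eps_bar                         # first-order update (demo-style)
```

No backpropagation through `reward` is needed: it is only evaluated, never differentiated, which is what lets Demon use VLM APIs or human judgements as the reward source.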

BibTeX

@inproceedings{
  yeh2025trainingfree,
  title={Training-Free Diffusion Model Alignment with Sampling Demons},
  author={Po-Hung Yeh and Kuang-Huei Lee and Jun-Cheng Chen},
  booktitle={International Conference on Learning Representations},
  year={2025}
}