Training-free Diffusion Model Alignment with Sampling Demons

Academia Sinica, Google DeepMind
Teaser Image

Our proposed stochastic optimization algorithm, Demon, guides the denoising process with human preferences through direct user interaction at inference time.

The author selects the images marked by a red border according to their preference; non-preferred images remain unselected. The goal is to align generation with a reference image (top left). Improvement is measured by the cosine similarity of DINOv2 features to the reference, which rises from 0.5414 for the baseline PF-ODE sample (bottom left) to 0.8549 for the final state (right).
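
For readers who want to reproduce the similarity numbers above, the sketch below shows one way to compute the cosine similarity of DINOv2 features with PyTorch. It is not the authors' evaluation code: the model variant (`dinov2_vitb14`), the preprocessing, and the file names are illustrative assumptions.

```python
import torch
import torchvision.transforms as T
from PIL import Image

# Load a DINOv2 backbone from torch.hub (ViT-B/14 chosen here as an assumption).
dinov2 = torch.hub.load("facebookresearch/dinov2", "dinov2_vitb14").eval()

# Standard ImageNet preprocessing; 224 is a multiple of the 14-pixel patch size.
preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
])

@torch.no_grad()
def dino_cosine_similarity(img_a: Image.Image, img_b: Image.Image) -> float:
    feats = dinov2(torch.stack([preprocess(img_a), preprocess(img_b)]))
    feats = torch.nn.functional.normalize(feats, dim=-1)
    return float((feats[0] * feats[1]).sum())

# Example usage (hypothetical file names):
# score = dino_cosine_similarity(Image.open("reference.png").convert("RGB"),
#                                Image.open("sample.png").convert("RGB"))
```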

Abstract

Aligning diffusion models with user preferences has been a key challenge. Existing methods for aligning diffusion models either require retraining or are limited to differentiable reward functions. To address these limitations, we propose a stochastic optimization approach, dubbed Demon, to guide the denoising process at inference time without backpropagation through reward functions or model retraining. Our approach works by controlling the noise distribution in denoising steps to concentrate density on regions corresponding to high rewards through stochastic optimization. We provide comprehensive theoretical and empirical evidence to support and validate our approach, including experiments that use non-differentiable sources of rewards such as Visual-Language Model (VLM) APIs and human judgements. To the best of our knowledge, the proposed approach is the first inference-time, backpropagation-free preference alignment method for diffusion models. Our method can be easily integrated with existing diffusion models without further training. Our experiments show that the proposed approach significantly improves the average aesthetic scores for text-to-image generation.
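
To make the idea concrete, the sketch below illustrates one reward-guided denoising step in PyTorch. This is a simplified illustration under stated assumptions rather than the exact Demon algorithm: `denoise_step` (a one-step sampler update that consumes a noise sample) and `reward_fn` (any black-box scorer, e.g. a VLM API call or a human choice) are hypothetical callables, and the softmax weighting is one simple way to concentrate noise density on high-reward regions.

```python
import torch

@torch.no_grad()
def reward_guided_step(x_t, t, denoise_step, reward_fn,
                       num_candidates=8, temperature=0.1):
    """One denoising step guided only by reward *evaluations* (no backprop).

    denoise_step(x_t, t, eps) -> next state: hypothetical one-step sampler update.
    reward_fn(x) -> float: any black-box reward (VLM API, human choice, ...).
    """
    # Draw several candidate noises for this step.
    candidates = [torch.randn_like(x_t) for _ in range(num_candidates)]
    # Score the state that each candidate noise leads to.
    rewards = torch.tensor([float(reward_fn(denoise_step(x_t, t, eps)))
                            for eps in candidates])
    # Softmax weights concentrate probability mass on high-reward noise.
    weights = torch.softmax(rewards / temperature, dim=0)
    eps_guided = sum(w * eps for w, eps in zip(weights, candidates))
    # Renormalize so the combined noise stays Gaussian-like
    # (see the normalization notes in the Method Overview below).
    eps_guided = eps_guided * (eps_guided.numel() ** 0.5) / eps_guided.norm()
    return denoise_step(x_t, t, eps_guided)
```

Because only reward evaluations are needed, the reward can be non-differentiable and the base diffusion model is left untouched.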

Method Overview

Method Overview (video)

  1. While the video demonstrates a first-order solver for simplicity, the paper employs a second-order solver to improve optimization efficiency.
  2. Unlike the demo, which divides the weighted noise by \( \sqrt{K} \), the paper projects it onto a sphere of radius \( \sqrt{N} \); both normalizations serve the same goal of producing Gaussian-like noise. A sketch contrasting the two appears after this list.
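
As a small illustration of point 2, the sketch below contrasts the two normalizations; the function and variable names are ours, not the authors' code.

```python
import torch

def combine_noise(candidates, weights, sphere_projection=True):
    """Combine K weighted Gaussian noises into one Gaussian-like noise."""
    eps = sum(w * e for w, e in zip(weights, candidates))
    if sphere_projection:
        # Paper: project onto the sphere of radius sqrt(N), where N is the
        # number of noise elements, i.e. the typical norm of a Gaussian sample.
        return eps * (eps.numel() ** 0.5) / eps.norm()
    # Demo: divide by sqrt(K) to rescale the combination of K noises
    # back toward unit variance.
    return eps / (len(candidates) ** 0.5)
```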

Preprint

BibTeX

@misc{yeh2024trainingfreediffusionmodelalignment,
  title={Training-free Diffusion Model Alignment with Sampling Demons}, 
  author={Po-Hung Yeh and Kuang-Huei Lee and Jun-Cheng Chen},
  year={2024},
  eprint={2410.05760},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2410.05760}, 
}