LATINO-PRO: LAtent consisTency INverse sOlver with PRompt Optimization

ICCV 2025
*Indicates Equal Contribution
[Interactive before/after image comparison sliders (four example pairs)]

Abstract

Text-to-image latent diffusion models (LDMs) have recently emerged as powerful generative models with great potential for solving inverse problems in imaging. However, leveraging such models in a Plug & Play (PnP), zero-shot manner remains challenging because it requires identifying a suitable text prompt for the unknown image of interest. Moreover, existing text-to-image PnP approaches are computationally very expensive. We herein address these challenges by proposing a novel PnP inference paradigm specifically designed for embedding generative models within stochastic inverse solvers, with special attention to Latent Consistency Models (LCMs), which distill LDMs into fast generators. We leverage our framework to propose LAtent consisTency INverse sOlver (LATINO), the first zero-shot PnP framework to solve inverse problems with priors encoded by LCMs. Our conditioning mechanism avoids automatic differentiation and reaches state-of-the-art (SOTA) quality in as few as 8 neural function evaluations. As a result, LATINO delivers remarkably accurate solutions and is significantly more memory- and computationally efficient than previous approaches. We then embed LATINO within an empirical Bayesian framework that automatically calibrates the text prompt from the observed measurements by marginal maximum likelihood estimation. Extensive experiments show that prompt self-calibration greatly improves estimation accuracy, allowing LATINO with PRompt Optimization (LATINO-PRO) to set a new SOTA in image reconstruction quality and computational efficiency.

One LATINO solver step
One step of the LATINO solver, a discretization of a Langevin SDE targeting the posterior $p(\vx\mid\vy, c)$. The current iterate $\vx^{(k)}$ is encoded by the VAE encoder and propagated forward through the noising diffusion kernel $p(\vz_t\mid\vz_0)$. This process is then reversed by the latent consistency model and the VAE decoder, followed by a proximal step on the data-fidelity term that incorporates the likelihood $p(\vy\mid\vx)$.
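
For concreteness, when the likelihood is linear Gaussian, i.e. $g_\vy(\vx)=\tfrac{1}{2\sigma^2}\|\mathcal{A}\vx-\vy\|^2$ (an illustrative assumption; any proximable negative log-likelihood fits the framework), the proximal step used in the algorithms below admits a closed form:

$$\prox_{\delta g_\vy}(\vu)=\arg\min_{\vx}\,\tfrac{1}{2}\|\vx-\vu\|^2+\tfrac{\delta}{2\sigma^2}\|\mathcal{A}\vx-\vy\|^2=\Bigl(\mathrm{Id}+\tfrac{\delta}{\sigma^2}\mathcal{A}^\top\mathcal{A}\Bigr)^{-1}\Bigl(\vu+\tfrac{\delta}{\sigma^2}\mathcal{A}^\top\vy\Bigr),$$

which requires no automatic differentiation and can be evaluated exactly when $\mathcal{A}^\top\mathcal{A}$ diagonalizes in a known basis (e.g. Fourier, for deblurring), or approximately with a few conjugate-gradient iterations otherwise.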
Algorithm 1: LATINO
  1. Given $\vx^{(0)}=\mathcal{A}^\dagger\vy$, text prompt $c$, number of steps $N\!\in\!\{4,8\}$, latent consistency model $G_\theta$, latent-space decoder $\decoder$, encoder $\encoder$, and sequences $\{t_k,\delta_k\}_{k=1}^N$.
  2. For $k = 1,\ldots,N$:
  3.   $\boldsymbol{\epsilon}\sim\mathcal{N}(0,\mathrm{Id})$
  4.   $\vz_{t_k}^{(k)} \gets \sqrt{\alpha_{t_k}}\,\encoder\!\bigl(\vx^{(k-1)}\bigr) + \sqrt{1-\alpha_{t_k}}\,\boldsymbol{\epsilon}$
  5.   $\vu^{(k)} \gets \decoder\!\bigl(G_\theta(\vz_{t_k}^{(k)},t_k,c)\bigr)$
  6.   $\vx^{(k)} \gets \prox_{\delta_k g_\vy}\!\bigl(\vu^{(k)}\bigr)$, where $g_\vy:\,\vx\mapsto -\log p(\vy\mid\vx)$
  7. Return $\vx^{(N)}$
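
A minimal PyTorch-style sketch of Algorithm 1, assuming the linear Gaussian likelihood above. All names here (encode, decode, lcm, A, A_T, solve) are placeholders for the VAE encoder/decoder, the latent consistency model $G_\theta$, the forward operator and its adjoint, and a linear solver; this is a sketch under those assumptions, not the authors' released implementation.

    import torch

    def latino(y, A, A_T, encode, decode, lcm, prompt_emb,
               timesteps, alphas, deltas, sigma2, solve):
        # Sketch of LATINO for g_y(x) = ||A x - y||^2 / (2 * sigma2).
        x = A_T(y)  # stands in for x^(0) = A^† y (use a true pseudo-inverse if available)
        for t, alpha, delta in zip(timesteps, alphas, deltas):
            z0 = encode(x)
            # forward noising kernel p(z_t | z_0)
            z_t = alpha ** 0.5 * z0 + (1 - alpha) ** 0.5 * torch.randn_like(z0)
            # one-step reversal with the LCM, then decode back to pixel space
            u = decode(lcm(z_t, t, prompt_emb))
            # proximal step: solve (Id + (delta/sigma2) A^T A) x = u + (delta/sigma2) A^T y
            rhs = u + (delta / sigma2) * A_T(y)
            x = solve(lambda v: v + (delta / sigma2) * A_T(A(v)), rhs)
        return x

Here timesteps, alphas, and deltas play the roles of the sequences $\{t_k\}$, $\{\alpha_{t_k}\}$, and $\{\delta_k\}$ in Algorithm 1.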
Algorithm 2: LATINO-PRO
  1. Given $\vx^{(0)}=\mathcal{A}^\dagger\vy$, initial prompt $c_0$ and admissible set $C$; number of SAPG steps $M$; sub-iteration parameters $\{N_m,\gamma_m\}_{m=1}^M$, $\{t_k,\delta_k\}_{k=1}^{N_m}$; latent consistency model $G_\theta$, decoder $\decoder$, encoder $\encoder$.
  2. For $m = 1,\ldots,M$:
  3.   For $k = 1,\ldots,N_m$:   (LATINO inner loop)
  4.     $\boldsymbol{\epsilon}\sim\mathcal{N}(0,\mathrm{Id})$
  5.     $\vz_{t_k}^{(k)} \gets \sqrt{\alpha_{t_k}}\,\encoder\!\bigl(\vx^{(k-1)}\bigr) + \sqrt{1-\alpha_{t_k}}\,\boldsymbol{\epsilon}$
  6.     $\vu^{(k)} \gets \decoder\!\bigl(G_\theta(\vz_{t_k}^{(k)},t_k,c_m)\bigr)$
  7.     $\vx^{(k)} \gets \prox_{\delta_k g_\vy}\!\bigl(\vu^{(k)}\bigr)$
  8.   $h(c_m)\gets\nabla_c\log p\!\bigl(\vz_{t_1}^{(1)},\ldots,\vz_{t_{N_m}}^{(N_m)}\mid c_m\bigr)$
  9.   $c_{m+1}\gets\Pi_C\!\bigl[c_m+\gamma_m\,h(c_m)\bigr]$   (SAPG update)
  10.  $\vx^{(0)}\gets\vx^{(N_m)}$   (carry state forward)
  11. Return $\vx^{(N_M)}$
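
The outer loop of Algorithm 2 is a projected stochastic gradient ascent on the prompt. A hedged sketch follows, assuming the prompt is optimized in a continuous embedding space; latino_inner, log_joint, and project_C are hypothetical placeholders for the inner LATINO run, a differentiable estimate of $\log p(\vz_{t_1}^{(1)},\ldots,\vz_{t_{N_m}}^{(N_m)}\mid c)$, and the projection $\Pi_C$, not the authors' API.

    import torch

    def latino_pro(y, c0, latino_inner, log_joint, project_C, gammas):
        # Sketch of LATINO-PRO (Algorithm 2); all callables are placeholders.
        c = c0.detach().clone().requires_grad_(True)
        x = None  # latino_inner is assumed to initialize x^(0) = A^† y when x is None
        for gamma in gammas:  # one step size per SAPG iteration, m = 1..M
            # inner LATINO run conditioned on the current prompt c_m
            x, latents = latino_inner(y, c, warm_start=x)
            x = x.detach()  # carry the state forward (step 10) without autograd history
            # h(c_m) = grad_c log p(z^(1), ..., z^(N_m) | c_m)
            (h,) = torch.autograd.grad(log_joint(latents, c), c)
            # projected SAPG update: c_{m+1} = Pi_C[c_m + gamma_m * h(c_m)]
            with torch.no_grad():
                c = project_C(c + gamma * h)
            c.requires_grad_(True)
        return x, c.detach()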

BibTeX

@misc{spagnoletti2025latinoprolatentconsistencyinverse,
      title={LATINO-PRO: LAtent consisTency INverse sOlver with PRompt Optimization}, 
      author={Alessio Spagnoletti and Jean Prost and Andrés Almansa and Nicolas Papadakis and Marcelo Pereyra},
      year={2025},
      eprint={2503.12615},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2503.12615}, 
}