Post-hoc Probabilistic Vision-Language Models

1Technical University of Munich, 2Aalto University, 3Finnish Center for Artificial Intelligence, 4University of Tübingen, 5Helmholtz Munich, 6Munich Center for Machine Learning (MCML), 7Munich Data Science Institute (MDSI)

TL;DR:

We propose a principled and efficient post-hoc uncertainty estimation approach for large-scale vision-language models (VLMs), combined with an analytic propagation of uncertainties that is applicable to any probabilistic VLM. Our approach yields interpretable and well-calibrated uncertainty estimates and improves active-learning performance without additional training.

Abstract

Vision-language models (VLMs), such as CLIP and SigLIP, have achieved remarkable success in classification, retrieval, and generative tasks. To do so, VLMs deterministically map images and text descriptions to a joint latent space in which their similarity is assessed using the cosine similarity. However, such a deterministic mapping fails to capture uncertainties over concepts arising from domain shifts when the models are used in downstream tasks. In this work, we propose post-hoc uncertainty estimation in VLMs that does not require additional training. Our method leverages a Bayesian posterior approximation over the last layers in VLMs and analytically quantifies uncertainties over cosine similarities. We demonstrate its effectiveness for uncertainty quantification and support-set selection in active learning. Compared to baselines, we obtain improved and well-calibrated predictive uncertainties, interpretable uncertainty estimates, and sample-efficient active learning. Our results show promise for safety-critical applications of large-scale models.
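As a rough sketch of the post-hoc idea (not the exact formulation used in the paper), the snippet below fits a diagonal Laplace posterior over a single trained linear layer by accumulating squared gradients as a curvature estimate; loss_fn, data_loader, and prior_precision are hypothetical placeholders for the training objective, a loader over features and targets, and the Gaussian prior.

import torch

def diagonal_laplace_posterior(linear, loss_fn, data_loader, prior_precision=1.0):
    """Fit a diagonal Laplace posterior over the weights of a trained linear layer.

    The posterior precision is approximated as the prior precision plus the
    accumulated squared gradients (a diagonal empirical-Fisher curvature term).
    """
    precision = torch.full_like(linear.weight, prior_precision)
    for features, targets in data_loader:
        linear.zero_grad()
        loss = loss_fn(linear(features), targets)
        loss.backward()
        precision += linear.weight.grad ** 2   # accumulate curvature estimate
    mean = linear.weight.detach().clone()      # MAP estimate: the trained weights
    variance = 1.0 / precision                 # diagonal posterior covariance
    return mean, variance

Because the procedure only touches the trained last layer and its gradients, it leaves the pretrained encoder weights untouched, which is what makes the approach post-hoc.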

Pipeline

Illustration of uncertainty propagation in VLMs: We estimate uncertainties over the last linear layers of both encoders using a Laplace approximation, which induces distributions over the feature projections. We then approximate the distribution over cosine similarities by analytically estimating its expected value and variance. The resulting cosine similarity distribution is then propagated further to the output.
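As an illustration, the sketch below shows one way to realize the moment-matching step with a first-order (delta-method) approximation, assuming independent Gaussians with diagonal covariance over the image and text embeddings; the paper's analytic expressions may differ in detail.

import torch

def cosine_similarity_moments(mu_img, var_img, mu_txt, var_txt):
    """First-order (delta-method) mean and variance of cos(x, y) for independent
    Gaussian embeddings x ~ N(mu_img, diag(var_img)) and y ~ N(mu_txt, diag(var_txt)).
    """
    norm_i, norm_t = mu_img.norm(), mu_txt.norm()
    cos = mu_img @ mu_txt / (norm_i * norm_t)   # mean: cosine at the mean embeddings
    # Gradients of the cosine similarity with respect to each embedding
    grad_i = (mu_txt / norm_t - cos * mu_img / norm_i) / norm_i
    grad_t = (mu_img / norm_i - cos * mu_txt / norm_t) / norm_t
    # Propagate the diagonal embedding variances through the linearization
    var = (grad_i ** 2 * var_img).sum() + (grad_t ** 2 * var_txt).sum()
    return cos, var

The resulting mean and variance of the cosine similarity can then be pushed further to the output, for example by sampling logits or by a probit-style approximation for Gaussian logits.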

BibTeX

@article{baumann2024bayesvlm,
  title   = {Post-hoc Probabilistic Vision-Language Models},
  author  = {Anton Baumann and Rui Li and Marcus Klasson and Santeri Mentu and Shyamgopal Karthik and Zeynep Akata and Arno Solin and Martin Trapp},
  year    = {2024},
  journal = {arXiv preprint arXiv:2412.06014}
}