(ICCV’25) ViLU: Learning Vision-Language Uncertainties for Failure Prediction

Published in the International Conference on Computer Vision (ICCV), 2025

Recommended citation: Lafon M.*, Karmim Y.*, Silva-Rodríguez J., Couairon P., Rambour C., Fournier-S'niehotta R., Ben Ayed I., Dolz J., Thome N. ViLU: Learning Vision-Language Uncertainties for Failure Prediction. ICCV 2025. https://arxiv.org/abs/2409.17986

Reliable Uncertainty Quantification (UQ) and failure prediction remain open challenges for Vision-Language Models (VLMs). We introduce ViLU, a new Vision-Language Uncertainty quantification framework that contextualizes uncertainty estimates by leveraging all task-relevant textual representations. ViLU constructs an uncertainty-aware multi-modal representation by integrating the visual embedding, the predicted textual embedding, and an image-conditioned textual representation via cross-attention. Unlike traditional UQ methods based on loss prediction, ViLU trains an uncertainty predictor as a binary classifier to distinguish correct from incorrect predictions using a weighted binary cross-entropy loss, making it loss-agnostic. In particular, our proposed approach is well-suited for post-hoc settings, where only vision and text embeddings are available without direct access to the model itself. Extensive experiments on diverse datasets show the significant gains of our method compared to state-of-the-art failure prediction methods. We apply our method to standard classification datasets, such as ImageNet-1k, as well as large-scale image-caption datasets like CC12M and LAION-400M. Ablation studies highlight the critical role of our architecture and training in achieving effective uncertainty quantification.
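
To make the architecture described in the abstract concrete, below is a minimal PyTorch-style sketch of a ViLU-like uncertainty head: the image embedding attends over all class text embeddings via cross-attention, the result is fused with the visual and predicted-text embeddings, and an MLP is trained as a binary failure classifier with a weighted binary cross-entropy loss. All module names, dimensions, and the exact fusion layout are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of a ViLU-style uncertainty head (not the official code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class VLUncertaintyHead(nn.Module):
    """Predicts a failure logit from frozen vision/text embeddings."""

    def __init__(self, dim: int = 512, n_heads: int = 8):
        super().__init__()
        # Cross-attention: the image embedding queries all class text embeddings.
        self.cross_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        # MLP scores the concatenation of visual, predicted-text, and
        # attention-pooled textual representations.
        self.mlp = nn.Sequential(
            nn.Linear(3 * dim, dim), nn.GELU(), nn.Linear(dim, 1)
        )

    def forward(self, img_emb, txt_emb):
        # img_emb: (B, d) image embeddings; txt_emb: (C, d) class text embeddings.
        img_emb = F.normalize(img_emb, dim=-1)
        txt_emb = F.normalize(txt_emb, dim=-1)

        # Predicted textual embedding: text of the highest-similarity class.
        sims = img_emb @ txt_emb.t()                      # (B, C)
        pred_txt = txt_emb[sims.argmax(dim=-1)]           # (B, d)

        # Image-conditioned textual representation via cross-attention.
        keys = txt_emb.unsqueeze(0).expand(img_emb.size(0), -1, -1)  # (B, C, d)
        attn_out, _ = self.cross_attn(img_emb.unsqueeze(1), keys, keys)
        attn_out = attn_out.squeeze(1)                    # (B, d)

        fused = torch.cat([img_emb, pred_txt, attn_out], dim=-1)
        return self.mlp(fused).squeeze(-1)                # failure logit, (B,)


def weighted_bce_step(head, img_emb, txt_emb, labels):
    """One training step: classify correct vs. incorrect VLM predictions."""
    logits = head(img_emb, txt_emb)
    sims = F.normalize(img_emb, dim=-1) @ F.normalize(txt_emb, dim=-1).t()
    is_error = (sims.argmax(dim=-1) != labels).float()    # 1 = misclassified
    # Re-weight the (typically rarer) error class so the binary classifier
    # is not dominated by correct predictions.
    pos_weight = (1.0 - is_error).sum().clamp(min=1) / is_error.sum().clamp(min=1)
    return F.binary_cross_entropy_with_logits(logits, is_error, pos_weight=pos_weight)
```

Because the head consumes only precomputed vision and text embeddings and supervises against correctness labels rather than the VLM's own loss, a sketch like this also illustrates why the approach fits post-hoc settings where the underlying model is inaccessible.
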

Download paper here