Metric Workshop
Evaluating the Evaluators: Metrics for Compositional Text-to-Image Generation

Department of Computer Engineering, Sharif University of Technology

A systematic study of text-to-image evaluation metrics, highlighting their reliability, limitations, and alignment with human judgment.



Abstract

Text-to-image generation has advanced rapidly, but assessing whether outputs truly capture the objects, attributes, and relations described in prompts remains a central challenge. Evaluation in this space relies heavily on automated metrics, yet these are often adopted by convention or popularity rather than validated against human judgment. Because evaluation and reported progress in the field depend directly on these metrics, it is critical to understand how well they reflect human preferences. To address this, we present a broad study of widely used metrics for compositional text-to-image evaluation. Our analysis goes beyond simple correlation, examining metric behavior across diverse compositional challenges and comparing how different metric families align with human judgments. The results show that no single metric performs consistently across tasks. These findings underscore the importance of careful and transparent metric selection, both for trustworthy evaluation and for the use of metrics as reward models in generation.
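
As a rough illustration of what "alignment with human judgments" means in practice, the sketch below compares automated metric scores against human ratings using rank correlation. This is not the paper's exact protocol; the metric names and score values are hypothetical placeholders, assuming only that per-image metric scores and human ratings are available for the same prompts.

```python
# Minimal sketch (assumed setup, not the paper's protocol): measuring how well
# automated metric scores agree with human ratings via rank correlation.
import numpy as np
from scipy.stats import spearmanr, kendalltau

# Hypothetical per-image human ratings (e.g., 1-5 Likert) for one prompt set.
human_scores = np.array([4.0, 2.5, 5.0, 1.0, 3.5, 4.5])

# Hypothetical scores from two illustrative metric families on the same images.
metric_scores = {
    "clip_similarity_like": np.array([0.31, 0.27, 0.35, 0.22, 0.30, 0.33]),
    "vqa_based_like":       np.array([0.80, 0.40, 0.95, 0.10, 0.55, 0.85]),
}

for name, scores in metric_scores.items():
    rho, _ = spearmanr(scores, human_scores)   # monotonic agreement
    tau, _ = kendalltau(scores, human_scores)  # pairwise ordering agreement
    print(f"{name}: Spearman rho={rho:.2f}, Kendall tau={tau:.2f}")
```

In a full study, such correlations would be computed per compositional category (objects, attributes, relations) rather than pooled, since a metric can rank well on one challenge type and poorly on another.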

BibTeX

@misc{kasaei2025metric,
      title={Evaluating the Evaluators: Metrics for Compositional Text-to-Image Generation}, 
      author={Seyed Amir Kasaei and Ali Aghayari and Arash Marioriyad and Niki Sepasian and MohammadAmin Fazli and Mahdieh Soleymani Baghshah and Mohammad Hossein Rohban},
      year={2025},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/XXXX.XXXXX}, 
}