VenusBench-GD: A Comprehensive Multi-Platform GUI Benchmark for Diverse Grounding Tasks
Introduction
GUI grounding is a critical component in building capable GUI agents. However, existing grounding benchmarks suffer from significant limitations: they either provide insufficient data volume and narrow domain coverage, or focus excessively on a single platform and require highly specialized domain knowledge, hindering the development and fair evaluation of GUI grounding models. In this work, we present VenusBench-GD, a comprehensive, bilingual benchmark for GUI grounding that spans multiple platforms, enabling hierarchical evaluation for real-world applications. VenusBench-GD makes the following contributions: (i) we introduce a large-scale, cross-platform benchmark with extensive coverage of applications, diverse UI elements, and rich annotated data; (ii) we establish a high-quality data construction pipeline for grounding tasks, achieving higher annotation accuracy than existing benchmarks, as verified through rigorous sampling-based evaluation; (iii) we extend the scope of element grounding by proposing a hierarchical task taxonomy that divides grounding into basic and advanced categories, encompassing six distinct subtasks designed to evaluate models from complementary perspectives. Our experimental findings reveal critical insights not captured by previous benchmarks: general-purpose multimodal models now match or even surpass specialized GUI models on basic grounding tasks, suggesting these tasks are nearing performance saturation and losing discriminative power. In contrast, advanced tasks, particularly those requiring functional understanding or multi-step reasoning, still favor GUI-specialized models, though these models exhibit significant overfitting and poor robustness, especially on refusal grounding. These results underscore the necessity of comprehensive, multi-tiered evaluation frameworks like VenusBench-GD to guide future progress in GUI agent development.
Data Structure
The repository is organized as follows:
VenusBench-GD/
├── instruction/                 # Dataset annotations
│   ├── element_grounding.json
│   ├── spatial_grounding.json
│   ├── visual_grounding.json
│   ├── reasoning_grounding.json
│   ├── functional_grounding.json
│   └── refusal_spatial.json
├── images/                      # Dataset images
│   ├── web/
│   ├── mobile/
│   └── desktop/
├── assets/
├── meta.json
└── README.md
Usage
To compare against the models evaluated in our work, visit the GitHub repository for the evaluation code. If you have any suggestions or encounter problems with the dataset, please contact the authors.