Abstract
Three frontier models show declining accuracy on a new spatial competence benchmark, with performance saturating quickly under token budget constraints.
Spatial competence is the ability to maintain a consistent internal representation of an environment and to use it to infer discrete structure and plan actions under constraints. Prevailing spatial evaluations for large models probe isolated primitives, such as 3D transformations or visual question answering. We introduce the Spatial Competence Benchmark (SCBench), spanning three hierarchical capability buckets whose tasks require executable outputs verified by deterministic checkers or simulator-based evaluators. On SCBench, three frontier models exhibit monotonically decreasing accuracy up the capability ladder. Sweeping output-token caps shows that accuracy gains concentrate at low budgets and saturate quickly, and failures are dominated by locally plausible geometry that breaks global constraints. We release the task generators, verifiers, and visualisation tooling.
Community
Most spatial benchmarks test simple pattern recognition in a visual question-answering format. SCBench, by contrast, tests whether a model can generate executable outputs that satisfy task constraints when checked by deterministic verifiers or simulators.
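To make the contrast concrete, below is a minimal sketch of what a deterministic checker for a planning-style task could look like. The task format and names (`GridTask`, `verify_path`) are hypothetical illustrations, not SCBench's actual API: the point is that the model's output is replayed step by step, so an answer that looks locally plausible but violates any constraint scores zero.

```python
# Minimal sketch of a deterministic checker in the spirit of SCBench's
# verifiers. The task format and names (GridTask, verify_path) are
# hypothetical, not the benchmark's actual API.
from dataclasses import dataclass

@dataclass
class GridTask:
    walls: set    # blocked (x, y) cells
    start: tuple  # (x, y)
    goal: tuple   # (x, y)

MOVES = {"U": (0, -1), "D": (0, 1), "L": (-1, 0), "R": (1, 0)}

def verify_path(task: GridTask, actions: str) -> bool:
    """Replay the model's action string and check every constraint."""
    x, y = task.start
    for a in actions:
        if a not in MOVES:            # ill-formed output fails outright
            return False
        dx, dy = MOVES[a]
        x, y = x + dx, y + dy
        if (x, y) in task.walls:      # local constraint: never enter a wall
            return False
    return (x, y) == task.goal        # global constraint: end on the goal

# "RRD" reaches (2, 1) cleanly; "DRR" steps into the wall at (0, 1).
task = GridTask(walls={(0, 1)}, start=(0, 0), goal=(2, 1))
assert verify_path(task, "RRD") and not verify_path(task, "DRR")
```

Binary, replayable verdicts like this are what allow constructive and planning tasks to be scored without any judge model.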
SCBench spans 22 tasks across four domains: topology, computational geometry, graphics, and spatial reasoning puzzles. Tasks are organised into three capability levels: axiomatic inference, constructive synthesis, and planning. Some representative tasks are shown in the image below.
Three findings stood out:
- Performance for most frontier models declines consistently as tasks move from inferring structure to constructing valid spatial outputs to planning actions through changing environments.
- The dominant failure mode: models produce locally plausible outputs but fail to compose them into solutions that satisfy global constraints.
- Larger token budgets and tool use improve results but do not resolve the core difficulty of maintaining globally consistent spatial structure (a sketch of such a budget sweep follows below).
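On the token-budget point, the abstract's sweep of output-token caps is the kind of harness sketched below. Here `generate` and `verify` are stand-ins for a model call and a per-task checker, and the budget grid is an illustrative assumption, not the paper's actual configuration.

```python
# Illustrative output-token budget sweep (pass@1, greedy decoding).
# `generate` and `verify` are stand-ins for a model call and a task
# checker; the budget grid is an assumption, not the paper's setup.
def budget_sweep(tasks, generate, verify,
                 budgets=(256, 512, 1024, 2048, 4096)):
    """Return accuracy at each output-token cap."""
    accuracy = {}
    for cap in budgets:
        solved = sum(
            verify(task, generate(task.prompt, max_output_tokens=cap))
            for task in tasks
        )
        accuracy[cap] = solved / len(tasks)
    return accuracy
```

If the saturation result holds, most of the gain shows up between the smallest caps, and the curve flattens well before the largest budget.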
We are particularly proud of this work since it was independently conducted and self-funded!
This is an automated message from the Librarian Bot. The following papers, similar to this one, were recommended by the Semantic Scholar API:
- From Perception to Action: An Interactive Benchmark for Vision Reasoning (2026)
- From Pixels to BFS: High Maze Accuracy Does Not Imply Visual Planning (2026)
- TopoBench: Benchmarking LLMs on Hard Topological Reasoning (2026)
- How Far Are Vision-Language Models from Constructing the Real World? A Benchmark for Physical Generative Reasoning (2026)
- TACIT Benchmark: A Programmatic Visual Reasoning Benchmark for Generative and Discriminative Models (2026)
- Pencil Puzzle Bench: A Benchmark for Multi-Step Verifiable Reasoning (2026)
- TraversalBench: Challenging Paths to Follow for Vision Language Models (2026)