Abstract
A Vision-Language-Action model trained on extensive real-world robotic data demonstrates superior performance and generalization across multiple platforms while offering enhanced efficiency through optimized training infrastructure.
A capable Vision-Language-Action (VLA) foundation model holds great potential for robotic manipulation: it is expected to generalize faithfully across tasks and platforms while remaining cost-efficient (e.g., in the data and GPU hours required for adaptation). To this end, we develop LingBot-VLA with around 20,000 hours of real-world data from 9 popular dual-arm robot configurations. Through a systematic assessment on 3 robotic platforms, each completing 100 tasks with 130 post-training episodes per task, our model achieves clear superiority over competitors, demonstrating strong performance and broad generalizability. We have also built an efficient codebase that delivers a throughput of 261 samples per second per GPU in an 8-GPU training setup, a 1.5–2.8× speedup (depending on the underlying VLM base model) over existing VLA-oriented codebases. These features make our model well-suited for real-world deployment. To advance the field of robot learning, we provide open access to the code, base model, and benchmark data, with a focus on enabling more challenging tasks and promoting sound evaluation standards.
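As a rough illustration of the reported training efficiency, the back-of-the-envelope sketch below converts the stated per-GPU throughput into an aggregate figure and uses the claimed speedup range to infer what the baseline codebases would deliver. Only the 261 samples/s/GPU, 8-GPU, and 1.5–2.8× numbers come from the abstract; everything else is arithmetic for illustration.

```python
# Back-of-the-envelope estimate based on the throughput figures reported in the abstract.
# Numbers marked "(reported)" come from the paper; the derived values are illustrative only.

PER_GPU_THROUGHPUT = 261       # samples / second / GPU (reported)
NUM_GPUS = 8                   # GPUs in the measured training setup (reported)
SPEEDUP_RANGE = (1.5, 2.8)     # speedup over existing VLA codebases, depending on the VLM base (reported)

aggregate = PER_GPU_THROUGHPUT * NUM_GPUS        # ~2088 samples / second across 8 GPUs
baseline_low = aggregate / SPEEDUP_RANGE[1]      # implied baseline throughput in the 2.8x case
baseline_high = aggregate / SPEEDUP_RANGE[0]     # implied baseline throughput in the 1.5x case

print(f"Aggregate throughput: {aggregate} samples/s")
print(f"Implied baseline throughput: {baseline_low:.0f}-{baseline_high:.0f} samples/s")
```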
Community
A Pragmatic VLA Foundation Model
arXivlens breakdown of this paper: https://arxivlens.com/PaperView/Details/a-pragmatic-vla-foundation-model-1530-04d54819
- Executive Summary
- Detailed Breakdown
- Practical Applications
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- InternVLA-A1: Unifying Understanding, Generation and Action for Robotic Manipulation (2026)
- Being-H0.5: Scaling Human-Centric Robot Learning for Cross-Embodiment Generalization (2026)
- See Once, Then Act: Vision-Language-Action Model with Task Learning from One-Shot Video Demonstrations (2025)
- Towards Accessible Physical AI: LoRA-Based Fine-Tuning of VLA Models for Real-World Robot Control (2025)
- Robotic VLA Benefits from Joint Learning with Motion Image Diffusion (2025)
- Unified Embodied VLM Reasoning with Robotic Action via Autoregressive Discretized Pre-training (2025)
- LoLA: Long Horizon Latent Action Learning for General Robot Manipulation (2025)
arXiv explained breakdown of this paper: https://arxivexplained.com/papers/a-pragmatic-vla-foundation-model