VLA Foundry: pretrained LLM, VLM, and VLA checkpoints.
A 1.2B-parameter language model pretrained on 1T tokens, part of the VLA Foundry collection. It continues training from the Foundry-LLM-1.2B-800B checkpoint for an additional 200B tokens under a cosine-decay schedule.
Multiple-choice reasoning benchmarks:
| HellaSwag | MMLU | ARC-e | ARC-c | PIQA | WinoGrande | OpenBookQA | BoolQ |
|---|---|---|---|---|---|---|---|
| 66.7 | 26.6 | 71.7 | 39.3 | 77.5 | 62.6 | 40.8 | 65.4 |
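For a single summary number, the unweighted mean of the eight scores above can be computed directly from the table (this macro-average is a convenience metric computed here, not one reported in the card):

```python
# Unweighted macro-average of the eight benchmark scores from the table above.
# (Convenience summary computed here; not a metric reported by the model card.)
scores = {
    "HellaSwag": 66.7, "MMLU": 26.6, "ARC-e": 71.7, "ARC-c": 39.3,
    "PIQA": 77.5, "WinoGrande": 62.6, "OpenBookQA": 40.8, "BoolQ": 65.4,
}
macro_avg = sum(scores.values()) / len(scores)
print(f"macro-average: {macro_avg:.1f}")  # -> macro-average: 56.3
```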
Install from source:

```bash
git clone https://github.com/TRI-ML/vla_foundry.git
cd vla_foundry
pip install -e .
```
Load the pretrained checkpoint:

```python
from vla_foundry.models.base_model import BaseModel

model = BaseModel.from_pretrained("TRI-ML/Foundry-LLM-1.2B-1T")
```