Emergent Temporal Abstractions in Autoregressive Models Enable Hierarchical Reinforcement Learning • arXiv:2512.20605
Griffin: Mixing Gated Linear Recurrences with Local Attention for Efficient Language Models • arXiv:2402.19427 • Published Feb 29, 2024
Linear Transformers are Versatile In-Context Learners • arXiv:2402.14180 • Published Feb 21, 2024
Uncovering mesa-optimization algorithms in Transformers • arXiv:2309.05858 • Published Sep 11, 2023