Abstract
Transformers require far more training data than RNNs for state-tracking tasks and fail to share learned mechanisms across sequence lengths, while RNNs amortize learning effectively through weight sharing.
Despite the remarkable practical success of transformer-based language models, recent work has raised concerns about their ability to perform state tracking. In particular, a growing body of literature has shown this limitation primarily through failures in out-of-distribution (OOD) generalization, such as length extrapolation. In this work, we shift attention to the in-distribution implications of these limitations. We conduct a large-scale experimental study of the data efficiency of transformers and recurrent neural networks (RNNs) across multiple supervision regimes. We find that the amount of training data required by transformers grows much more rapidly with state-space size and sequence length than for RNNs. Furthermore, we analyze the extent to which learned state-tracking mechanisms are shared across different sequence lengths. We show that transformers exhibit negligible or even detrimental weight sharing across lengths, indicating that they learn length-specific solutions in isolation. In contrast, recurrent models exhibit effective amortized learning by sharing weights across lengths, allowing data from one sequence length to improve performance on others. Together, these results demonstrate that state tracking remains a fundamental challenge for transformers, even when training and evaluation distributions match.
Community
Transformers are data‑hungry in sequential tasks because they lack the right inductive bias.
It’s well known that for many sequential problems (from adding numbers to step‑by‑step agentic execution and multi‑hop reasoning), transformers fail to generalize to sequences longer than those they were trained on. “Train short, test long” often fails.
The usual workaround is to “just train on whatever lengths you’ll need at test time.”
📉 But we show the consequence of this is data inefficiency:
- Transformers can learn tasks for a single fixed sequence length fairly efficiently, but learning across multiple lengths requires much more data.
- More importantly, transformers tend not to share mechanisms across tasks of different lengths; instead, they often learn isolated, length‑specific solutions.
🧪 A simple way to test this:
Consider modular addition (with and without CoT). Train one model jointly on adding 2, 3, …, L numbers and measure the data it needs to reach a target accuracy. Then train a separate model for each length (2, 3, …, L) and sum their data requirements.
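A minimal sketch of the task setup described above, assuming a simple text-to-text framing (the function name, modulus, and CoT format here are illustrative choices, not the paper's exact protocol):

```python
import random

def make_example(length, modulus=10, cot=False, rng=random):
    """Generate one modular-addition example: add `length` numbers mod `modulus`.

    With cot=True the target spells out the running sums (a simple
    chain-of-thought-style supervision); otherwise only the final
    answer is supervised.
    """
    xs = [rng.randrange(modulus) for _ in range(length)]
    prompt = " + ".join(map(str, xs)) + " ="
    if cot:
        running, steps = 0, []
        for x in xs:
            running = (running + x) % modulus
            steps.append(str(running))
        target = " ".join(steps)          # intermediate sums; last one is the answer
    else:
        target = str(sum(xs) % modulus)   # final answer only
    return prompt, target
```

Sampling `length` uniformly from {2, …, L} gives the joint-training distribution; fixing `length` gives the per-length baselines.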
💡The intuition:
If a model truly shares mechanisms across lengths, learning a distribution of lengths should require far fewer samples than learning each length separately.
This comes from amortizing the learning cost: data for length n also helps the model learn length n+k.
📊 Results:
Sharing Factor κ = (sum of samples to learn each length separately) ÷ (samples to learn all lengths jointly)
- κ > 1: mechanism sharing and amortized learning.
- κ ≈ 1: learning length-specific solutions in isolation.
- κ < 1: destructive interference; length-specific solutions compete for model capacity.
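The sharing factor is just a ratio of sample counts; a sketch with made-up numbers (the counts below are purely illustrative, not results from the paper):

```python
def sharing_factor(separate_samples, joint_samples):
    """Sharing factor kappa: sum of per-length sample requirements
    divided by the samples needed to learn all lengths jointly."""
    return sum(separate_samples.values()) / joint_samples

# Hypothetical sample counts to reach a threshold accuracy per length:
separate = {2: 1_000, 3: 2_000, 4: 4_000}
kappa = sharing_factor(separate, joint_samples=3_500)
print(f"kappa = {kappa:.2f}")  # kappa = 2.00 -> joint training amortizes across lengths
```

With these numbers κ = 7,000 / 3,500 = 2, i.e. learning all lengths jointly costs half as much data as learning each in isolation; the paper's finding is that transformers sit near (or below) κ ≈ 1 while recurrent models land well above it.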
Transformers showed low sharing factors, and even destructive interference with CoT.
✨ Implications:
This suggests that end-to-end learning in applied agentic settings, like robotics or GUI control, could be even more challenging. If data requirements grow unfavorably with sequence length, that might also help explain the persistent issues we see at large context lengths (e.g., context rot).
The standard attention mechanism appears inefficient for step-by-step tasks, and we may ultimately be better off with recurrent agents.
