Scaling Latent Reasoning via Looped Language Models
This paper introduces Ouro, a family of Looped Language Models (LoopLM) that achieve 2-3× parameter efficiency through iterative computation with shared parameters and adaptive depth allocation, demonstrating …
AI · Architecture
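As a loose sketch of the idea named in the summary (reusing one block's weights across loop iterations, with an adaptive stopping rule), the following toy example illustrates the mechanism. All names, dimensions, and the ACT-style halting score are illustrative assumptions, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes; the real models are full transformers.
d = 8
W = rng.normal(scale=0.1, size=(d, d))   # one shared block, reused every loop
halt_w = rng.normal(scale=0.1, size=d)   # weights for a per-step halting score

def looped_forward(x, max_loops=4, halt_threshold=0.9):
    """Apply the same block repeatedly (shared parameters);
    stop early once the cumulative halting probability crosses
    the threshold (adaptive depth allocation)."""
    h = x
    p_halt = 0.0
    steps = 0
    for _ in range(max_loops):
        h = np.tanh(h @ W) + h                    # shared-weight update + residual
        p_halt += 1.0 / (1.0 + np.exp(-halt_w @ h))  # sigmoid halting score
        steps += 1
        if p_halt >= halt_threshold:
            break
    return h, steps

h, steps = looped_forward(rng.normal(size=d))
```

Because every iteration reuses `W`, depth grows without adding parameters, which is the source of the claimed parameter efficiency.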