State of AI: An Empirical 100 Trillion Token Study with OpenRouter
AI development has shifted from simple chat interactions to complex reasoning workflows. A new study analyzing 100 trillion tokens of real-world usage from OpenRouter reveals how developers build AI applications today—and what this means for the future.
The Scale of Modern AI Usage
OpenRouter now serves over 5 million developers across 300+ models from 60+ providers. The platform processes more than 1 trillion tokens daily, giving it broad visibility into how AI is actually used in production. That traffic underpins the largest empirical study of AI usage patterns to date.
The Rise of Agentic Inference
The fastest-growing behavior on OpenRouter is agentic inference—AI systems that perform extended sequences of actions rather than single responses. These workflows involve:
- Multi-step reasoning: Models break down complex problems into manageable parts
- Tool integration: AI systems retrieve information from APIs and external sources
- Iterative refinement: Models revise outputs until tasks are complete
- Extended sessions: Conversations have more turns and longer prompts
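The four behaviors above can be sketched as a single loop. This is a minimal, self-contained illustration: `plan_next_action` and `search_docs` are hypothetical stubs standing in for a real model call and a real tool, not part of any OpenRouter API.

```python
# Minimal agentic-loop sketch. The "model" (plan_next_action) and the
# tool (search_docs) are illustrative stubs, not real APIs.

def search_docs(query: str) -> str:
    """Stub tool: pretend to retrieve information from an external source."""
    return f"results for '{query}'"

TOOLS = {"search_docs": search_docs}  # tool integration point

def plan_next_action(task: str, history: list) -> dict:
    """Stub model: pick the next step given the task and prior results."""
    if not history:
        return {"type": "tool", "name": "search_docs", "args": {"query": task}}
    return {"type": "finish", "answer": f"summary of {history[-1]}"}

def run_agent(task: str, max_steps: int = 5) -> str:
    history = []
    for _ in range(max_steps):            # extended session: many turns
        action = plan_next_action(task, history)
        if action["type"] == "finish":    # iterative refinement ends here
            return action["answer"]
        tool = TOOLS[action["name"]]      # multi-step: act, observe, repeat
        history.append(tool(**action["args"]))
    return "stopped after max_steps"
```

The key structural difference from a chat completion is the loop: the model is consulted repeatedly, and each tool result feeds back into the next decision.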
This shift began with OpenAI’s o1 reasoning model in December 2024. Unlike earlier models, which excelled at next-token pattern prediction, reasoning models plan, search, and evaluate multiple approaches to a problem before answering.
What Developers Are Building
The data reveals three dominant use cases driving token volume:
Creative applications lead usage, with developers building AI systems for content generation, writing assistance, and design workflows.
Coding tools represent the second-largest category, reinforcing AI’s central role in software development.
Reasoning workflows are growing fastest, as developers create applications that perform analysis, planning, and decision-making.
The Evolving Model Landscape
Real-world usage patterns differ significantly from benchmark leaderboards:
- Open-source models gain share: DeepSeek R1 and Kimi K2 capture users through cost efficiency and flexibility
- Personality matters: Models retain users based on conversational style, not just accuracy
- Breakthrough moments drive switching: when new capabilities meaningfully change what users can accomplish, users switch models permanently
Why This Matters for Developers
The shift toward agentic workflows creates new opportunities for AI application builders:
Design for reasoning: Build applications that leverage multi-step problem-solving rather than single-turn interactions.
Embrace tool use: Integrate external APIs and data sources to extend AI capabilities beyond text generation.
Plan for persistence: Design workflows that maintain context across extended sessions and complex tasks.
Focus on orchestration: Success depends on coordinating multiple AI interactions, not just individual model performance.
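The persistence and orchestration principles can be sketched together: keep the full message history in a session object, and coordinate several model calls toward one task. `call_model` here is a hypothetical stub in place of a real chat-completions endpoint, and the two-phase `plan_then_execute` flow is an illustrative pattern, not a prescribed API.

```python
# Sketch: persistent session context plus simple orchestration.
# call_model() is a stub for a real chat-completions request.
from dataclasses import dataclass, field

def call_model(messages: list) -> str:
    """Stub: a real app would send `messages` to a chat endpoint."""
    return f"reply #{sum(1 for m in messages if m['role'] == 'user')}"

@dataclass
class Session:
    """Maintains context across an extended, multi-turn workflow."""
    messages: list = field(default_factory=list)

    def ask(self, prompt: str) -> str:
        self.messages.append({"role": "user", "content": prompt})
        reply = call_model(self.messages)       # full history on every call
        self.messages.append({"role": "assistant", "content": reply})
        return reply

def plan_then_execute(session: Session, task: str) -> str:
    """Orchestration: coordinate multiple model interactions, not one."""
    plan = session.ask(f"Plan the steps for: {task}")
    return session.ask(f"Execute this plan: {plan}")
```

Because the session re-sends its whole history, the second call sees the plan produced by the first; orchestration is the coordination logic around the calls, not the calls themselves.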
Implementation Strategies
To start building agentic workflows:
- Identify multi-step processes in your domain that benefit from AI reasoning
- Design tool integration points where AI can access external information
- Create feedback loops that allow models to refine their outputs
- Build session management to maintain context across extended interactions
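The feedback-loop step above can be made concrete with a critique-and-revise cycle. All three functions below (`draft`, `critique`, `revise`) are hypothetical stubs standing in for model calls; only the loop structure is the point.

```python
# Sketch of a feedback loop: a critic scores the output and the model
# revises until no issues remain. All functions are illustrative stubs.

def draft(task: str) -> str:
    """Stub model call: produce an initial output."""
    return f"draft of {task}"

def critique(text: str) -> list:
    """Stub critic: return remaining issues; an empty list means done."""
    return [] if "revised" in text else ["too rough"]

def revise(text: str, issues: list) -> str:
    """Stub model call: revise the output to address the issues."""
    return f"revised {text} (fixed: {', '.join(issues)})"

def refine(task: str, max_rounds: int = 3) -> str:
    output = draft(task)
    for _ in range(max_rounds):
        issues = critique(output)
        if not issues:                 # feedback loop converged
            return output
        output = revise(output, issues)
    return output                      # best effort after max_rounds
```

In a real system the critic might be a second model, a test suite, or a schema validator; the cap on rounds keeps the loop from running indefinitely.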
The Competitive Landscape
Traditional AI metrics—accuracy, speed, cost—remain important but insufficient. The new competitive frontier centers on:
- Reliability in extended workflows: Can the model complete complex tasks consistently?
- Tool integration capabilities: How effectively does it work with external systems?
- Planning and orchestration: Can it break down problems and coordinate solutions?
Next Steps
The full study provides deeper analysis of geographic patterns, model preferences, and retention curves; the complete report is published as OpenRouter’s State of AI study.
For developers building AI applications, this data offers a roadmap: the future belongs to systems that reason, plan, and act—not just respond. Start designing for agentic workflows today.