System Architecture
⚠️ Partially Outdated — Mixed Legacy/Current Architecture
This page describes Trade Clash's technical architecture with a mix of old and current concepts:
Outdated sections: News Ingestion Pipeline, hourly cycles, news crawlers
Still relevant: Simulation Core, AI Agent architecture, economic models, blockchain integration
Current game: Uses Polymarket API + Hivemind aggregation, 3-hour cycles
Major rewrite needed. Treat "news" sections as historical reference.
Building for Chaos at Scale
How do you architect a system where failure is content and emergence is the feature? Very carefully.
System Overview
The Five Pillars
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ Data Layer │────▶│ Simulation Core │────▶│ Presentation │
│ │ │ │ │ Layer │
└─────────────────┘ └─────────────────┘ └─────────────────┘
│ │ │
▼ ▼ ▼
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ External Feeds │ │ AI Agents │ │ Player APIs │
└─────────────────┘ └─────────────────┘ └─────────────────┘
│
▼
┌─────────────────┐
│ Blockchain │
│ Integration │
└─────────────────┘Design Principles
Principle 1: Embrace Eventual Consistency
Perfect synchronization is impossible and unnecessary. Let chaos flow.
Principle 2: Fail Gracefully, Fail Interestingly
When systems break, they should create content, not kill gameplay.
Principle 3: Observable Complexity
Complex interactions, simple monitoring. See everything, understand patterns.
Principle 4: Stateless Where Possible
Each cycle is independent. History influences but doesn't determine.
The Data Layer
News Ingestion Pipeline
Scale Handling:
1000+ articles/minute peak
50ms classification target
99.9% uptime requirement
Graceful degradation
Tech Stack:
Crawler: Distributed Python workers
Queue: Redis + Kafka hybrid
Classification: Fine-tuned BERT
Storage: PostgreSQL + TimescaleDB
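The classification stage above can be sketched as a minimal worker. This is an illustration only: the names (`classify_headline`, `ArticleEvent`) are hypothetical, and keyword matching stands in for the fine-tuned BERT model.

```python
from dataclasses import dataclass

# Minimal sketch of the classification stage: headlines come off the queue,
# are scored, and only confident events feed the simulation. Keyword
# matching stands in for the fine-tuned BERT model; all names are
# illustrative, not the production API.

@dataclass
class ArticleEvent:
    headline: str
    score: float   # classifier confidence
    category: str  # e.g. "tariff", "sanction", "noise"

KEYWORDS = {"tariff": "tariff", "sanction": "sanction", "deal": "trade_deal"}

def classify_headline(headline: str) -> ArticleEvent:
    """Stand-in classifier: keyword lookup instead of BERT inference."""
    lowered = headline.lower()
    for keyword, category in KEYWORDS.items():
        if keyword in lowered:
            return ArticleEvent(headline, score=0.9, category=category)
    return ArticleEvent(headline, score=0.1, category="noise")

def process_batch(headlines: list[str], threshold: float = 0.5) -> list[ArticleEvent]:
    """Drop low-confidence events so only meaningful news drives the game."""
    return [e for e in map(classify_headline, headlines) if e.score >= threshold]
```

The threshold filter is what makes the 50ms budget realistic: most of the 1000+ articles/minute are noise and never reach the simulation.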
Data Architecture Philosophy
What We Store Forever:
Every headline processed
Every AI decision made
Every player bet placed
Every economic outcome
What We Calculate Fresh:
Current economic state
Relationship matrices
Trade possibilities
Cascade potentials
Why: Historical analysis creates edges. Fresh calculation prevents staleness.
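The store-forever / calculate-fresh split can be illustrated with an append-only event log that current state is folded from on demand. Event shapes here are assumptions, not the real schema.

```python
# Illustrative sketch of the store-forever / calculate-fresh split: outcomes
# are appended to an immutable log, and the current economic state is folded
# from that log on demand instead of being cached. Event shapes are assumed.

event_log: list[dict] = []  # append-only: stored forever, never mutated

def record(event: dict) -> None:
    event_log.append(event)

def current_gdp(country: str, baseline: float = 100.0) -> float:
    """Recompute from full history every call -- staleness is impossible."""
    gdp = baseline
    for event in event_log:
        if event.get("country") == country and event.get("type") == "gdp_delta":
            gdp += event["delta"]
    return gdp
```

Because the log is never rewritten, historical analysis (the "edge" mentioned above) and live state derive from the same source of truth.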
The Simulation Core
The Agent-Based Model Engine
Performance Characteristics
Single Simulation Run:
16 AI agents deciding: ~200ms
Trade flow calculation: ~100ms
Economic updates: ~150ms
Cascade processing: ~50ms
Total: <500ms for cycle resolution
Parallelization Strategy:
Each AI agent runs independently
Trade flows parallel by partnership
Economic indicators parallel by country
Cascades sequential (dependencies)
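The parallelization strategy above can be sketched as a parallel map over agents followed by a strictly sequential cascade pass. The decision logic is a placeholder, not the real agent model.

```python
from concurrent.futures import ThreadPoolExecutor

# Sketch of the parallelization strategy: the 16 agents decide
# independently (parallel map), then cascades are applied sequentially
# because each step depends on accumulated state.

def agent_decide(agent_id: int) -> dict:
    """Placeholder decision: deterministic so the example is testable."""
    return {"agent": agent_id, "tariff_change": agent_id % 3 - 1}

def run_cycle(num_agents: int = 16) -> tuple[list[dict], int]:
    # Agents are independent, so they fan out across a thread pool.
    with ThreadPoolExecutor(max_workers=num_agents) as pool:
        decisions = list(pool.map(agent_decide, range(num_agents)))
    # Cascade processing stays sequential: each step reads the running total.
    pressure = 0
    for decision in decisions:
        pressure += decision["tariff_change"]
    return decisions, pressure
```

Keeping cascades single-threaded is the design choice that makes the budget above additive: only the ~50ms cascade step is serialized on shared state.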
State Management
Per-Cycle State (Ephemeral):
Persistent State (Carried):
The AI Agent Architecture
Individual Agent Design
The Personality Engine
Static Configuration:
Utility function weights
Base behavioral biases
Aggression parameters
Memory characteristics
Dynamic Adaptation:
Relationship updates
Success/failure learning
Emotional state shifts
Strategic pivots
Emergence Properties:
Grudge formation
Alliance patterns
Economic strategies
Unique narratives
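The static/dynamic split above can be sketched as a personality dataclass: static weights are fixed at construction, while relationships adapt with exponential memory decay, which is how grudges emerge. Field names are assumptions, not the shipped agent schema.

```python
from dataclasses import dataclass, field

# Hedged sketch of the personality engine: static configuration never
# changes after creation; relationships update as outcomes arrive, with
# exponential decay so recent events matter more than old ones.

@dataclass
class AgentPersonality:
    # Static configuration: fixed for the agent's lifetime
    aggression: float
    risk_tolerance: float
    memory_decay: float = 0.9
    # Dynamic adaptation: updated as outcomes arrive
    relationships: dict[str, float] = field(default_factory=dict)

    def update_relationship(self, other: str, outcome: float) -> None:
        """Blend the latest outcome into decayed sentiment; repeated
        negative outcomes accumulate into a persistent grudge."""
        prev = self.relationships.get(other, 0.0)
        self.relationships[other] = (
            self.memory_decay * prev + (1 - self.memory_decay) * outcome
        )
```

Nothing in the code says "grudge": it falls out of the update rule when one partner keeps producing negative outcomes, which is the emergence property the section describes.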
The Presentation Layer
Real-Time Data Flow
The Chaos Observatory Architecture
3D Visualization Pipeline:
Performance Targets:
60 FPS visualization
<100ms data updates
Smooth transitions
Mobile compatibility
API Design
Player-Facing APIs:
Rate Limiting:
Authenticated: 100 req/minute
Public: 20 req/minute
WebSocket: No limit (throttled)
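The limits above suggest a token-bucket limiter; a minimal in-process sketch, assuming the production version lives at the gateway:

```python
import time

# Minimal token-bucket sketch of the rate limits above (e.g. 100 req/minute
# for authenticated clients). In-process illustration only.

class TokenBucket:
    def __init__(self, rate_per_minute: int):
        self.capacity = float(rate_per_minute)
        self.tokens = float(rate_per_minute)
        self.refill_per_sec = rate_per_minute / 60.0
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Refill proportionally to elapsed time, then spend one token."""
        now = time.monotonic()
        elapsed = now - self.last
        self.tokens = min(self.capacity, self.tokens + elapsed * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

A bucket (rather than a fixed window) lets legitimate bursts through while still capping sustained abuse, which matches the "throttled, not hard-limited" WebSocket note.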
The Scaling Architecture
Horizontal Scaling Points
Stateless Services (Easy Scale):
Web servers
API endpoints
Crawler workers
Classification pipeline
Stateful Services (Careful Scale):
Simulation engine
AI agent processors
Economic calculators
WebSocket managers
Load Distribution
Caching Strategy
Edge Caching (CDN):
Static assets
Historical data
Public leaderboards
Application Caching (Redis):
Current hour state
Recent decisions
Hot player data
No Caching:
Live betting
AI decisions
Economic calculations
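The three-tier policy above can be sketched with dicts standing in for Redis and the CDN. The point is the routing: "live" keys are always recomputed, everything else is cached with a short TTL. Key names are illustrative.

```python
import time

# Sketch of the caching policy: live betting, AI decisions, and economic
# calculations bypass the cache entirely; everything else gets a short TTL.

NO_CACHE = {"live_bets", "ai_decisions", "economic_calc"}
_app_cache: dict[str, tuple[float, object]] = {}

def fetch(key: str, compute, ttl: float = 5.0):
    if key in NO_CACHE:
        return compute()  # live data: never served stale
    now = time.monotonic()
    entry = _app_cache.get(key)
    if entry is not None and now - entry[0] < ttl:
        return entry[1]   # cache hit within TTL
    value = compute()
    _app_cache[key] = (now, value)
    return value
```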
The Reliability Architecture
Failure Scenarios
News Feed Failure:
Fallback to aggregated sources
Use historical patterns
Generate synthetic events
Continue gameplay
AI Agent Crash:
Isolate failed agent
Use conservative defaults
Log for analysis
Continue simulation
Database Overload:
Write to queue
Batch process later
Prioritize gameplay
Eventual consistency
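The "AI Agent Crash" playbook above can be sketched as a guard around each agent's decision: isolate the failure, substitute conservative defaults, log, and keep the cycle alive. The default decision shape is an assumption.

```python
# Sketch of the agent-crash playbook: a failing agent is isolated in a
# try/except, replaced with conservative defaults, and logged for analysis
# while the simulation continues.

CONSERVATIVE_DEFAULT = {"tariff_change": 0, "new_deals": []}
failure_log: list[str] = []

def safe_decide(agent_id: int, decide) -> dict:
    """Run one agent's decision; on any error, fall back and keep playing."""
    try:
        return decide(agent_id)
    except Exception as exc:                              # isolate the failed agent
        failure_log.append(f"agent {agent_id}: {exc!r}")  # log for analysis
        return dict(CONSERVATIVE_DEFAULT)                 # conservative default
```

This is "fail interestingly" in miniature: a crashed agent becomes a suddenly passive country, which is itself a story, not an outage.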
Monitoring Stack
Key Metrics:
Simulation cycle time
API response times
AI decision variance
Player action rates
Economic indicator drift
The Security Architecture
Attack Surface
Game Level:
Automated betting
Collusion attempts
Multi-accounting
Exploit seeking
System Level:
DDoS attacks
Data scraping
API abuse
State manipulation
Defense Layers
Application Security:
Rate limiting everything
Behavioral analysis
Pattern detection
Anomaly flagging
Infrastructure Security:
WAF protection
DDoS mitigation
Encrypted transport
Isolated services
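The behavioral analysis and anomaly flagging above can be illustrated with a simple z-score pass over betting rates. The metric and threshold are assumptions; real detection would use many signals.

```python
from statistics import mean, stdev

# Illustrative anomaly-flagging pass: flag players whose betting rate is a
# z-score outlier versus the population.

def flag_anomalies(bets_per_hour: dict[str, float], z_threshold: float = 3.0) -> list[str]:
    """Return player ids whose rate sits more than z_threshold sigmas high."""
    rates = list(bets_per_hour.values())
    if len(rates) < 2:
        return []
    mu, sigma = mean(rates), stdev(rates)
    if sigma == 0:
        return []  # everyone behaves identically: nothing to flag
    return [player for player, rate in bets_per_hour.items()
            if (rate - mu) / sigma > z_threshold]
```

Flagging (rather than blocking) keeps false positives cheap: a flagged account gets scrutiny, not an automatic ban.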
The Future Architecture
Phase 2: Multi-Region Active
Phase 3: Decentralized Components
Distributed AI processing
Community-run nodes
Decentralized oracles
Cross-chain integration
The Architectural Truth
We built a system where:
Chaos is the feature
Scale enables emergence
Failure creates content
Monitoring reveals patterns
The architecture doesn't prevent disasters. It enables them safely.
That's not a bug. That's the entire point.
Ready to see how we make chaos beautiful? Continue to The Chaos Observatory.