Project Infinity is a sophisticated, procedural world-generation engine and AI agent architecture. It transforms a general-purpose Large Language Model (LLM) into a specialized Game Master by combining a codified agent protocol with an external mechanical authority, ensuring a consistent, fair, and deep RPG experience.
This mode utilizes an external Model Context Protocol (MCP) server to act as the absolute authority for game mechanics. By offloading logic to a dedicated server, it eliminates "LLM luck" and hallucinations regarding stats and dice rolls.
The MCP Advantage:
- Verified Dice: All rolls are performed externally and returned to the AI.
- State Authority: Player progress is tracked in a real-time SQLite database, preventing "memory drift."
- Fairness: Every mechanical result is mathematically accurate and transparent.
Requirements:
- Python 3.8+
- Ollama installed and running.
- Supported models:
`deepseek-v3.2:cloud`, `qwen3.5:397b-cloud`, `qwen3.5:cloud`, or `glm-5.1:cloud`.
Quick Start:
- Install dependencies:
pip install -r requirements.txt
- Launch the game:
python3 play.py
- Select your model and world file (`.wwf`).
Project Infinity ensures game consistency through these authoritative systems:
To ensure fairness, the engine splits mechanical outcomes into two distinct layers:
- Complexity Checks (The d20): Uses `perform_check` to determine binary success or failure for both players and NPCs against a Difficulty Class (DC).
- Magnitude & Damage (The Multi-Dice): Uses `roll_dice` to determine the impact of actions for all participants (players and creatures), including damage, healing, and quantity.
- Verification: All rolls MUST be output in a transparent formula: `{actor} {notation}: {total} ({rolls} + {mod})`.
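The verification formula can be sketched in Python. This is a minimal stand-in for the MCP server's dice tool, not its actual implementation; the real tool's signature and parameters may differ:

```python
import random

def roll_dice(actor: str, notation: str, mod: int = 0, rng=random) -> str:
    """Roll dice for a notation like '2d6' and report the transparent formula."""
    count, sides = (int(n) for n in notation.lower().split("d"))
    rolls = [rng.randint(1, sides) for _ in range(count)]
    total = sum(rolls) + mod
    # Output format: {actor} {notation}: {total} ({rolls} + {mod})
    return f"{actor} {notation}: {total} ({rolls} + {mod})"

print(roll_dice("Goblin Archer", "2d6", 3))
```

Because the roll happens outside the model and the raw rolls appear in the output, the narrative can always be checked against the math.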
To solve the problem of LLM "forgetfulness," the engine implements a dynamic state-tracking system:
- In-Memory SQLite Engine: Upon boot, the MCP server initializes a queryable database from the player file.
- Real-Time Synchronization: The Game Master updates the player database via MCP tools immediately as changes occur in the narrative.
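A minimal sketch of the in-memory state engine, assuming a simple key/value player schema; the actual `.player` format generated by the Forge may be richer:

```python
import sqlite3

# Hypothetical fields parsed from the .player file
player = {"name": "Aria", "hp": 14, "gold": 30, "xp": 0}

conn = sqlite3.connect(":memory:")  # state lives only for the session
conn.execute("CREATE TABLE player (key TEXT PRIMARY KEY, value)")
conn.executemany("INSERT INTO player VALUES (?, ?)", player.items())

def update_state(key: str, value) -> None:
    """Called (via an MCP tool) the moment the narrative changes state."""
    conn.execute("UPDATE player SET value = ? WHERE key = ?", (value, key))

update_state("gold", 25)  # e.g. the player spends 5 gold
```

Because every change is written immediately, later turns query the database instead of relying on the model's recollection of earlier narrative.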
To prevent "model collapse" during high-complexity turns, the engine implements a State Checkpoint Protocol:
- Tool-First Execution: For sequences requiring multiple tool calls, the GM executes all mechanical tools and suppresses immediate narrative output.
- System Handshake: The GM emits a pause token, which `play.py` intercepts to inject a resume signal.
- Coherent Narrative: This process resets the LLM's attention window, ensuring the final storytelling is based on the complete, resolved mechanical state.
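The handshake can be sketched as a simple interception loop. The token string and the `send_to_llm` callback below are hypothetical illustrations, not the actual protocol values:

```python
PAUSE_TOKEN = "[[CHECKPOINT]]"  # hypothetical; the real token is set by the protocol

def handle_turn(chunks, send_to_llm):
    """Buffer streamed GM output; on the pause token, inject a resume signal."""
    buffered = []
    for chunk in chunks:
        if PAUSE_TOKEN in chunk:
            # All mechanical tools have resolved; restart narration from clean state
            return send_to_llm("Resume: narrate from the resolved state.")
        buffered.append(chunk)
    return "".join(buffered)
```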
Use the World Forge to create a world tailored to your character.
Run the forge:
python3 main.py
The Forge guides you through character creation and procedurally generates a world knowledge graph (`.wwf` file) and a corresponding character state file (`.player`) in the `output/` directory. Together, these files serve as the complete source of truth for your adventure.
When you launch play.py, the system feeds the GameMaster_MCP.md protocol and the .wwf file to the LLM to set the stage. Simultaneously, play.py initializes dice_server.py using the .player file to boot the SQLite database.
- Model Selection: Larger models generally produce richer narratives and better adhere to the complex MCP protocols.
- Model Performance Note: Smaller models may struggle with the complexity of the protocol, potentially truncating narratives or failing to report dice rolls even when they use the MCP tools correctly. Performance can also vary between sessions with the same model.
- Verbose Mode: Use the `--verbose` or `-v` flag when launching `play.py` to see detailed MCP tool calls and responses.
- Developer Debug Mode: Use the `--debug` or `-d` flag for deep inspection. This displays the raw JSON responses from the LLM, including internal reasoning and thought processes, and automatically enables Verbose Mode.
- Note on Model Behavior: The GameMaster may occasionally forget to award XP or gold, or to sync the database. If you notice this, simply remind the GameMaster, and it will update the state accordingly.
Core Dependencies:
- `mcp`: Model Context Protocol for external tool integration.
- `ollama`: Local LLM orchestration.
- `rich`: High-fidelity Terminal User Interface (TUI).
- `pydantic`: Data validation and settings management.
- `numpy`: Procedural generation logic.
- `pyyaml`: Protocol and schema configuration.
Infrastructure:
- Python 3
- SQLite (In-memory engine)
- Graph RAG architecture
