electronistu/Project_Infinity

Project Infinity: A Dynamic, Text-Based RPG World Engine

Project Infinity is a sophisticated, procedural world-generation engine and AI agent architecture. It transforms a general-purpose Large Language Model (LLM) into a specialized Game Master by combining a codified agent protocol with an external mechanical authority, ensuring a consistent, fair, and deep RPG experience.

[Screenshot: Project Infinity TUI]


🎮 How to Play: The Authoritative Experience

This mode relies on an external Model Context Protocol (MCP) server that acts as the absolute authority for game mechanics. Offloading logic to a dedicated server eliminates "LLM luck" and hallucinated stats and dice rolls.

The MCP Advantage:

  • Verified Dice: All rolls are performed externally and returned to the AI.
  • State Authority: Player progress is tracked in a real-time SQLite database, preventing "memory drift."
  • Fairness: Every mechanical result is mathematically accurate and transparent.

Requirements:

  • Python 3.8+
  • Ollama installed and running.
  • Supported models: deepseek-v3.2:cloud, qwen3.5:397b-cloud, qwen3.5:cloud, or glm-5.1:cloud.

Quick Start:

  1. Install dependencies:
    pip install -r requirements.txt
  2. Launch the game:
    python3 play.py
  3. Select your model and world file (.wwf).

🔬 Technical Architecture

Project Infinity ensures game consistency through these authoritative systems:

The Roll Engine

To ensure fairness, the engine splits mechanical outcomes into two distinct layers:

  • Complexity Checks (The d20): Uses perform_check to determine binary success or failure for both players and NPCs against a Difficulty Class (DC).
  • Magnitude & Damage (The Multi-Dice): Uses roll_dice to determine the impact of actions for all participants (players and creatures), including damage, healing, and quantity.
  • Verification: All rolls MUST be output in a transparent formula: {actor} {notation}: {total} ({rolls} + {mod}).
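The two-layer split can be sketched as standalone functions. The names perform_check and roll_dice and the verification formula come from the protocol above; the signatures, dice-notation parsing, and return types below are illustrative assumptions, not the engine's actual implementation:

```python
import random
import re

def roll_dice(actor: str, notation: str, mod: int = 0) -> str:
    """Magnitude layer: roll standard notation (e.g. '2d6') and report
    the result in the transparent verification formula."""
    count, sides = map(int, re.fullmatch(r"(\d+)d(\d+)", notation).groups())
    rolls = [random.randint(1, sides) for _ in range(count)]
    total = sum(rolls) + mod
    # Verification format: {actor} {notation}: {total} ({rolls} + {mod})
    return f"{actor} {notation}: {total} ({rolls} + {mod})"

def perform_check(actor: str, dc: int, mod: int = 0) -> bool:
    """Complexity layer: a d20 check for binary success or failure
    against a Difficulty Class (DC)."""
    return random.randint(1, 20) + mod >= dc
```

Because the rolls are performed in ordinary Python rather than by the LLM, every reported total can be re-derived from the listed rolls and modifier.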

State Authority

To solve the problem of LLM "forgetfulness," the engine implements a dynamic state-tracking system:

  • In-Memory SQLite Engine: Upon boot, the MCP server initializes a queryable database from the player file.
  • Real-Time Synchronization: The Game Master updates the player database via MCP tools immediately as changes occur in the narrative.
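A minimal sketch of this boot-and-sync pattern using Python's built-in sqlite3 module. The single key/value table and the function names are assumptions for illustration; the real .player schema is defined by the Forge:

```python
import sqlite3

def boot_state(player: dict) -> sqlite3.Connection:
    """Initialize an in-memory, queryable database from parsed player data."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE player (key TEXT PRIMARY KEY, value)")
    db.executemany("INSERT INTO player VALUES (?, ?)", player.items())
    db.commit()
    return db

def sync(db: sqlite3.Connection, key: str, value) -> None:
    """Body of a state-update MCP tool: persist a change the moment
    it occurs in the narrative, so the database never drifts."""
    db.execute("UPDATE player SET value = ? WHERE key = ?", (value, key))
    db.commit()
```

For example, after the Game Master narrates a 5-gold purchase, a tool call like `sync(db, "gold", 25)` makes the database, not the model's context window, the source of truth.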

Cognitive Load Management

To prevent "model collapse" during high-complexity turns, the engine implements a State Checkpoint Protocol:

  • Tool-First Execution: For sequences requiring multiple tool calls, the GM executes all mechanical tools and suppresses immediate narrative output.
  • System Handshake: The GM emits a pause token, which play.py intercepts to inject a resume signal.
  • Coherent Narrative: This process resets the LLM's attention window, ensuring the final storytelling is based on the complete, resolved mechanical state.
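The handshake on the client side might look like the sketch below. The pause token string and resume message here are hypothetical placeholders (the real values are defined by the protocol files); the point is the intercept-and-reinject loop:

```python
PAUSE_TOKEN = "[STATE_CHECKPOINT]"  # hypothetical; the real token is protocol-defined
RESUME_SIGNAL = "All tool calls resolved. Narrate the outcome."  # hypothetical

def handle_turn(chunks, send_to_llm):
    """Scan streamed output for the pause token; when found, suppress it
    and inject a resume signal so the model narrates from the fully
    resolved mechanical state."""
    buffer = []
    for chunk in chunks:
        if PAUSE_TOKEN in chunk:
            buffer.append(chunk.replace(PAUSE_TOKEN, ""))
            # Hand control back to the model with a fresh attention
            # window over the completed tool results.
            buffer.append(send_to_llm(RESUME_SIGNAL))
        else:
            buffer.append(chunk)
    return "".join(buffer)
```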

🛠 The World Forge

Use the World Forge to create a world tailored to your character.

Run the forge:

python3 main.py

The Forge guides you through character creation and procedurally generates a world knowledge graph (.wwf file) and a corresponding character state file (.player) in the output/ directory. Together, these files serve as the complete source of truth for your adventure.

When you launch play.py, the system feeds the GameMaster_MCP.md protocol and the .wwf file to the LLM to set the stage. Simultaneously, play.py initializes dice_server.py using the .player file to boot the SQLite database.
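The prompt-assembly half of that launch sequence could be sketched as below. The file names GameMaster_MCP.md and the .wwf extension come from the description above; the function name and separator are assumptions:

```python
from pathlib import Path

def build_boot_prompt(world_file: str) -> str:
    """Assemble the LLM's system context: the codified agent protocol
    followed by the procedurally generated world knowledge graph."""
    protocol = Path("GameMaster_MCP.md").read_text()
    world = Path(world_file).read_text()
    return f"{protocol}\n\n--- WORLD FILE ---\n{world}"
```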


🌟 The Game Master's Codex

  • Model Selection: Larger models generally produce richer narratives and better adhere to the complex MCP protocols.
  • Model Performance Note: Smaller models may struggle with the protocol's complexity, truncating narratives or failing to report dice rolls even when they use the MCP tools correctly. Performance may also vary between sessions with the same model.
  • Verbose Mode: Use the --verbose or -v flag when launching play.py to see detailed MCP tool calls and responses.
  • Developer Debug Mode: Use the --debug or -d flag for deep inspection. This displays the raw JSON responses from the LLM—including internal reasoning and thought processes—and automatically enables Verbose Mode.
  • Note on Model Behavior: The GameMaster may occasionally be forgetful about awarding XP, gold, or syncing the database. If you notice this, simply remind the GameMaster, and it will update the state accordingly.
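The flag behavior described above (where debug implies verbose) can be expressed with argparse. The flags -v/--verbose and -d/--debug are from the Codex; the parser structure is a sketch, not play.py's actual code:

```python
import argparse

def parse_args(argv=None):
    """CLI flags as described in the Codex; --debug implies --verbose."""
    parser = argparse.ArgumentParser(prog="play.py")
    parser.add_argument("-v", "--verbose", action="store_true",
                        help="show detailed MCP tool calls and responses")
    parser.add_argument("-d", "--debug", action="store_true",
                        help="show raw LLM JSON, including internal reasoning")
    args = parser.parse_args(argv)
    if args.debug:
        args.verbose = True  # debug mode automatically enables verbose mode
    return args
```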

🛠 Technology Stack

Core Dependencies:

  • mcp: Model Context Protocol for external tool integration.
  • ollama: Local LLM orchestration.
  • rich: High-fidelity Terminal User Interface (TUI).
  • pydantic: Data validation and settings management.
  • numpy: Procedural generation logic.
  • pyyaml: Protocol and schema configuration.

Infrastructure:

  • Python 3
  • SQLite (In-memory engine)
  • Graph RAG architecture

About

Project Infinity leverages MCP and Graph RAG to turn an LLM into a professional D&D 5e Game Master, governed by a dedicated dice server and a persistent player database for a truly consistent adventure.
