The AI collaboration framework that predicts problems before they happen.
pip install empathy-framework[full]

- Unified Typer CLI — One `empathy` command with Rich output, subcommand groups, and cheatsheet
- Dev Container Support — One-click VS Code dev environment with Docker Compose
- Python 3.13 Support — Test matrix now covers 3.10-3.13 across macOS, Linux, Windows
- Diátaxis Framework — Restructured docs into Tutorials, How-to, Explanation, Reference
- Improved Navigation — Clearer paths from learning to mastery
- Fixed Asset Loading — CSS now loads correctly on all documentation pages
- Smart Router — Natural language wizard dispatch: "Fix security in auth.py" → SecurityWizard
- Memory Graph — Cross-wizard knowledge sharing across sessions
- Auto-Chaining — Wizards automatically trigger related wizards
- Resilience Patterns — Retry, Circuit Breaker, Timeout, Health Checks (see the sketch after this list)
- Multi-Model Provider System — Anthropic, OpenAI, Ollama, or Hybrid mode
- 80-96% Cost Savings — Smart tier routing: cheap models detect, best models decide
- VSCode Dashboard — 10 integrated workflows with input history persistence
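
The resilience patterns listed above are standard fault-tolerance building blocks. As a rough, conceptual sketch in plain Python (illustrating the retry and circuit-breaker ideas only, not the framework's own API):

import time

class CircuitBreaker:
    """Fail fast after repeated failures instead of hammering a broken dependency."""

    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, retries=2, backoff=1.0, **kwargs):
        # Circuit open: refuse calls until the cool-down window has passed.
        if self.opened_at is not None and time.time() - self.opened_at < self.reset_after:
            raise RuntimeError("circuit open; skipping call")
        last_exc = None
        for attempt in range(retries + 1):
            try:
                result = fn(*args, **kwargs)
                self.failures, self.opened_at = 0, None   # success resets the breaker
                return result
            except Exception as exc:
                last_exc = exc
                self.failures += 1
                if self.failures >= self.max_failures:
                    self.opened_at = time.time()          # trip the breaker
                    break
                if attempt < retries:
                    time.sleep(backoff * (2 ** attempt))  # exponential backoff between retries
        raise last_exc

A timeout would typically wrap the call itself (a deadline on the request), and a health check is just a cheap probe run on a schedule.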
pip install empathy-framework[full]

# Auto-detect your API keys and configure
python -m empathy_os.models.cli provider
# Or set explicitly
python -m empathy_os.models.cli provider --set anthropic
python -m empathy_os.models.cli provider --set hybrid # Best of all providers

from empathy_os import EmpathyOS
os = EmpathyOS()
result = await os.collaborate(
    "Review this code for security issues",
    context={"code": your_code}
)
print(result.current_issues) # What's wrong now
print(result.predicted_issues) # What will break in 30-90 days
print(result.prevention_steps) # How to prevent it

| Feature | Empathy | SonarQube | GitHub Copilot |
|---|---|---|---|
| Predicts future issues | 30-90 days ahead | No | No |
| Persistent memory | Redis + patterns | No | No |
| Multi-provider support | Claude, GPT-4, Ollama | N/A | GPT only |
| Cost optimization | 80-96% savings | N/A | No |
| Your data stays local | Yes | Cloud | Cloud |
| Free for small teams | ≤5 employees | No | No |
pip install empathy-framework

- Works out of the box with sensible defaults
- Auto-detects your API keys
# Enable hybrid mode for 80-96% cost savings
python -m empathy_os.models.cli provider --set hybrid

| Tier | Model | Use Case | Cost (per 1M tokens) |
|---|---|---|---|
| Cheap | GPT-4o-mini / Haiku | Summarization, simple tasks | $0.15-0.25/M |
| Capable | GPT-4o / Sonnet | Bug fixing, code review | $2.50-3.00/M |
| Premium | o1 / Opus | Architecture, complex decisions | $15/M |
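
As a rough, back-of-the-envelope illustration of where a figure like 80-96% can come from, using the per-1M-token prices above and a made-up traffic mix (actual savings depend entirely on your workload):

# Illustrative only: blended cost of tier routing vs. sending everything to the premium tier.
prices = {"cheap": 0.25, "capable": 3.00, "premium": 15.00}  # $ per 1M tokens, from the table above
mix = {"cheap": 0.70, "capable": 0.25, "premium": 0.05}      # hypothetical share of requests

blended = sum(prices[tier] * share for tier, share in mix.items())   # ≈ $1.68 per 1M tokens
savings = 1 - blended / prices["premium"]
print(f"${blended:.2f}/M blended vs ${prices['premium']:.2f}/M premium-only → {savings:.0%} saved")  # ≈ 89%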
from empathy_llm_toolkit import EmpathyLLM
llm = EmpathyLLM(provider="anthropic", enable_model_routing=True)
# Automatically routes to appropriate tier
await llm.interact(user_id="dev", user_input="Summarize this", task_type="summarize") # → Haiku
await llm.interact(user_id="dev", user_input="Fix this bug", task_type="fix_bug") # → Sonnet
await llm.interact(user_id="dev", user_input="Design system", task_type="coordinate") # → Opus

Install the Empathy VSCode extension for:
- Real-time Dashboard — Health score, costs, patterns
- One-Click Workflows — Research, code review, debugging
- Visual Cost Tracking — See savings in real-time
- See also: docs/dashboard-costs-by-tier.md for interpreting the "By tier (7 days)" cost breakdown.
from empathy_os.agents import AgentFactory
# Create domain-specific agents with inherited memory
security_agent = AgentFactory.create(
    domain="security",
    memory_enabled=True,
    anticipation_level=4
)

python -m empathy_os.models.cli provider # Show current config
python -m empathy_os.models.cli provider --set anthropic # Single provider
python -m empathy_os.models.cli provider --set hybrid # Best-of-breed
python -m empathy_os.models.cli provider --interactive # Setup wizard
python -m empathy_os.models.cli provider -f json # JSON output

python -m empathy_os.models.cli registry # Show all models
python -m empathy_os.models.cli registry --provider openai # Filter by provider
python -m empathy_os.models.cli costs --input-tokens 50000 # Estimate costs

python -m empathy_os.models.cli telemetry # Summary
python -m empathy_os.models.cli telemetry --costs # Cost savings report
python -m empathy_os.models.cli telemetry --providers # Provider usage
python -m empathy_os.models.cli telemetry --fallbacks # Fallback stats

empathy-memory serve # Start Redis + API server
empathy-memory status # Check system status
empathy-memory stats # View statistics
empathy-memory patterns # List stored patterns

empathy-inspect . # Run full inspection
empathy-inspect . --format sarif # GitHub Actions format
empathy-inspect . --fix # Auto-fix safe issues
empathy-inspect . --staged # Only staged changes

Enable structured XML prompts for consistent, parseable LLM responses:
# .empathy/workflows.yaml
xml_prompt_defaults:
  enabled: false # Set true to enable globally
workflow_xml_configs:
  security-audit:
    enabled: true
    enforce_response_xml: true
    template_name: "security-audit"
  code-review:
    enabled: true
    template_name: "code-review"

Built-in templates: security-audit, code-review, research, bug-analysis, perf-audit, refactor-plan, test-gen, doc-gen, release-prep, dependency-check
from empathy_os.prompts import get_template, XmlResponseParser, PromptContext
# Use a built-in template
template = get_template("security-audit")
context = PromptContext.for_security_audit(code="def foo(): pass")
prompt = template.render(context)
# Parse XML responses
parser = XmlResponseParser(fallback_on_error=True)
result = parser.parse(llm_response)
print(result.summary, result.findings, result.checklist)

Route natural language requests to the right wizard automatically:
from empathy_os.routing import SmartRouter
router = SmartRouter()
# Natural language routing
decision = router.route_sync("Fix the security vulnerability in auth.py")
print(f"Primary: {decision.primary_wizard}") # → security-audit
print(f"Also consider: {decision.secondary_wizards}") # → [code-review]
print(f"Confidence: {decision.confidence}")
# File-based suggestions
suggestions = router.suggest_for_file("requirements.txt") # → [dependency-check]
# Error-based suggestions
suggestions = router.suggest_for_error("NullReferenceException") # → [bug-predict, test-gen]

Cross-wizard knowledge sharing — wizards learn from each other:
from empathy_os.memory import MemoryGraph, EdgeType
graph = MemoryGraph()
# Add findings from any wizard
bug_id = graph.add_finding(
    wizard="bug-predict",
    finding={
        "type": "bug",
        "name": "Null reference in auth.py:42",
        "severity": "high"
    }
)
# Connect related findings
fix_id = graph.add_finding(wizard="code-review", finding={"type": "fix", "name": "Add null check"})
graph.add_edge(bug_id, fix_id, EdgeType.FIXED_BY)
# Find similar past issues
similar = graph.find_similar({"name": "Null reference error"})
# Traverse relationships
related_fixes = graph.find_related(bug_id, edge_types=[EdgeType.FIXED_BY])

Wizards automatically trigger related wizards based on findings:
# .empathy/wizard_chains.yaml
chains:
  security-audit:
    auto_chain: true
    triggers:
      - condition: "high_severity_count > 0"
        next: dependency-check
        approval_required: false
      - condition: "vulnerability_type == 'injection'"
        next: code-review
        approval_required: true
  bug-predict:
    triggers:
      - condition: "risk_score > 0.7"
        next: test-gen
templates:
  full-security-review:
    steps: [security-audit, dependency-check, code-review]
  pre-release:
    steps: [test-gen, security-audit, release-prep]

from empathy_os.routing import ChainExecutor
executor = ChainExecutor()
# Check what chains would trigger
result = {"high_severity_count": 5}
triggers = executor.get_triggered_chains("security-audit", result)
# → [ChainTrigger(next="dependency-check"), ...]
# Execute a template
template = executor.get_template("full-security-review")
# → ["security-audit", "dependency-check", "code-review"]

Analyze, generate, and optimize prompts:
from coach_wizards import PromptEngineeringWizard
wizard = PromptEngineeringWizard()
# Analyze existing prompts
analysis = wizard.analyze_prompt("Fix this bug")
print(f"Score: {analysis.overall_score}") # → 0.13 (poor)
print(f"Issues: {analysis.issues}") # → ["Missing role", "No output format"]
# Generate optimized prompts
prompt = wizard.generate_prompt(
    task="Review code for security vulnerabilities",
    role="a senior security engineer",
    constraints=["Focus on OWASP top 10"],
    output_format="JSON with severity and recommendation"
)
# Optimize tokens (reduce costs)
result = wizard.optimize_tokens(verbose_prompt)
print(f"Reduced: {result.token_reduction:.0%}") # → 20% reduction
# Add chain-of-thought scaffolding
enhanced = wizard.add_chain_of_thought(prompt, "debug")

# Recommended (all features)
pip install empathy-framework[full]
# Minimal
pip install empathy-framework
# Specific providers
pip install empathy-framework[anthropic]
pip install empathy-framework[openai]
pip install empathy-framework[llm] # Both
# Development
git clone https://github.com/Smart-AI-Memory/empathy-framework.git
cd empathy-framework && pip install -e .[dev]

| Component | Description |
|---|---|
| Empathy OS | Core engine for human↔AI and AI↔AI collaboration |
| Smart Router | Natural language wizard dispatch with LLM classification |
| Memory Graph | Cross-wizard knowledge sharing (bugs, fixes, patterns) |
| Auto-Chaining | Wizards trigger related wizards based on findings |
| Multi-Model Router | Smart routing across providers and tiers |
| Memory System | Redis short-term + encrypted long-term patterns |
| 17 Coach Wizards | Security, performance, testing, docs, prompt engineering |
| 10 Cost-Optimized Workflows | Multi-tier pipelines with XML prompts |
| Healthcare Suite | SBAR, SOAP notes, clinical protocols (HIPAA) |
| Code Inspection | Unified pipeline with SARIF/GitHub Actions support |
| VSCode Extension | Visual dashboard for memory and workflows |
| Telemetry & Analytics | Cost tracking, usage stats, optimization insights |
| Level | Name | Behavior | Example |
|---|---|---|---|
| 1 | Reactive | Responds when asked | "Here's the data you requested" |
| 2 | Guided | Asks clarifying questions | "What format do you need?" |
| 3 | Proactive | Notices patterns | "I pre-fetched what you usually need" |
| 4 | Anticipatory | Predicts future needs | "This query will timeout at 10k users" |
| 5 | Transformative | Builds preventing structures | "Here's a framework for all future cases" |
Empathy operates at Level 4 — predicting problems before they manifest.
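
In code, the anticipation level surfaces as the anticipation_level argument used in the AgentFactory example earlier. A minimal sketch, assuming that same signature and that levels 1-5 map directly onto the argument (the "performance" domain is chosen purely for illustration):

from empathy_os.agents import AgentFactory

# Level 4 ("Anticipatory") is the posture described above; level 1 would be purely reactive.
anticipatory_agent = AgentFactory.create(domain="performance", memory_enabled=True, anticipation_level=4)
reactive_agent = AgentFactory.create(domain="performance", memory_enabled=True, anticipation_level=1)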
# Required: At least one provider
export ANTHROPIC_API_KEY="sk-ant-..." # For Claude models
export OPENAI_API_KEY="sk-..." # For GPT models
# Optional: Redis for memory
export REDIS_URL="redis://localhost:6379"
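# If you don't have Redis locally, one common way to start it (assumes Docker is installed):
docker run -d --name empathy-redis -p 6379:6379 redis:7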
# Or use a .env file (auto-detected)
echo 'ANTHROPIC_API_KEY=sk-ant-...' >> .env

- Star this repo if you find it useful
- Join Discussions — Questions, ideas, show what you built
- Read the Book — Deep dive into the philosophy
- Full Documentation — API reference, examples, guides
For those interested in the development history and architectural decisions:
- Development Logs — Execution plans, phase completions, and progress tracking
- Architecture Docs — System design, memory architecture, and integration plans
- Marketing Materials — Pitch decks, outreach templates, and commercial readiness
- Guides — Publishing tutorials, MkDocs setup, and distribution policies
Fair Source License 0.9 — Free for students, educators, and teams ≤5 employees. Commercial license ($99/dev/year) for larger organizations. Details →
Built by Smart AI Memory · Documentation · Examples · Issues