refactor(benchmarks): consolidate to re-export from openadapt-evals #17
Conversation
- Two-package architecture: openadapt-evals (foundation) + openadapt-ml (ML)
- Verified audit findings: 10 dead files confirmed, 3 previously marked dead but used
- CLI namespacing: oa evals <cmd>, oa ml <cmd>
- Dependency direction: openadapt-ml depends on openadapt-evals (not circular)
- Agents with ML deps (PolicyAgent, BaselineAgent) move to openadapt-ml
- adapters/waa/ subdirectory pattern for benchmark organization

Co-Authored-By: Claude Opus 4.5 <[email protected]>
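The `oa evals <cmd>` / `oa ml <cmd>` namespacing described above maps naturally to nested subcommands in a single entry point. A minimal sketch, assuming an argparse-based CLI (the specific subcommand names shown are illustrative, not the actual CLI surface):

```python
import argparse

def main() -> None:
    parser = argparse.ArgumentParser(prog="oa")
    namespaces = parser.add_subparsers(dest="namespace", required=True)

    # oa evals <cmd>: benchmark/evaluation commands (foundation package)
    evals = namespaces.add_parser("evals").add_subparsers(dest="command", required=True)
    evals.add_parser("run")

    # oa ml <cmd>: ML training/serving commands
    ml = namespaces.add_parser("ml").add_subparsers(dest="command", required=True)
    ml.add_parser("serve")      # model inference endpoint
    ml.add_parser("dashboard")  # training progress UI

    args = parser.parse_args()
    print(f"dispatching to {args.namespace} {args.command}")

if __name__ == "__main__":
    main()
```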
Add [benchmarks] optional dependency for benchmark evaluation:
- pip install openadapt-ml[benchmarks]

This is part of the repo consolidation to establish:
- openadapt-evals: Foundation for benchmarks + infrastructure
- openadapt-ml: ML training (depends on evals for benchmarks)

Co-Authored-By: Claude Opus 4.5 <[email protected]>
- oa ml serve: serve trained models for inference
- oa ml dashboard: training dashboard for monitoring

This distinguishes the two use cases clearly:
- serve = model inference endpoint
- dashboard = training progress UI

Co-Authored-By: Claude Opus 4.5 <[email protected]>
Migrate benchmark infrastructure to two-package architecture:
- openadapt-evals: Foundation package with all adapters, agents, runner
- openadapt-ml: ML-specific agents that wrap openadapt-ml internals

Changes:
- Convert base.py, waa.py, waa_live.py, runner.py, data_collection.py, live_tracker.py to deprecation stubs that re-export from openadapt-evals
- Keep only ML-specific agents in agent.py: PolicyAgent, APIBenchmarkAgent, UnifiedBaselineAgent
- Update __init__.py to import from openadapt-evals with deprecation warning
- Update tests to import from correct locations
- Remove test_waa_live.py (tests belong in openadapt-evals)

Net: -3540 lines of duplicate code removed

Co-Authored-By: Claude Opus 4.5 <[email protected]>
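A re-exporting deprecation stub of the kind described here is typically just a few lines: import the canonical names from openadapt-evals, re-export them, and warn on import. A minimal sketch, assuming the module and class names mentioned elsewhere in this PR (the exact warning text and which submodule hosts these classes are assumptions):

```python
# openadapt_ml/benchmarks/base.py - hypothetical deprecation-stub sketch
"""Deprecated shim: the canonical implementation lives in openadapt_evals.adapters.base."""
import warnings

# Re-export the canonical classes so existing imports keep working.
from openadapt_evals.adapters.base import BenchmarkResult, BenchmarkTask  # noqa: F401

warnings.warn(
    "openadapt_ml.benchmarks.base is deprecated; "
    "import from openadapt_evals.adapters.base instead.",
    DeprecationWarning,
    stacklevel=2,
)
```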
…-evals

Remove deprecation stubs since there are no external users. Tests now import directly from openadapt-evals (canonical location).

Deleted:
- base.py, waa.py, waa_live.py, runner.py, data_collection.py, live_tracker.py

Kept:
- agent.py (ML-specific agents: PolicyAgent, APIBenchmarkAgent, UnifiedBaselineAgent)
- __init__.py (simplified to only export ML-specific agents)

Co-Authored-By: Claude Opus 4.5 <[email protected]>
Add section 15 for Windows Agent Arena benchmark results with clearly marked placeholders. Results will be filled in when full evaluation completes. Warning banner indicates PR should not merge until placeholders are replaced.

Sections added:
- 15.1 Benchmark Overview
- 15.2 Baseline Reproduction (paper vs our run)
- 15.3 Model Comparison (GPT-4o, Claude, Qwen variants)
- 15.4 Domain Breakdown

Co-Authored-By: Claude Opus 4.5 <[email protected]>
WAA benchmark results belong in openadapt-evals (the benchmark infrastructure package) rather than openadapt-ml (the training package). See: OpenAdaptAI/openadapt-evals#22

Co-Authored-By: Claude Opus 4.5 <[email protected]>
Related PR (benchmark results section): OpenAdaptAI/openadapt-evals#22
- Add setup_vnc_tunnel_and_browser() helper for automatic VNC access
- Add VM_SIZE_FAST constants with D8 series sizes
- Add VM_SIZE_FAST_FALLBACKS for automatic region/size retry
- Add --fast flag to create command for faster installations
- Add --fast flag to start command for more QEMU resources (6 cores, 16GB)
- Opens browser automatically after container starts

Co-Authored-By: Claude Opus 4.5 <[email protected]>
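The VM_SIZE_FAST_FALLBACKS retry described above can be pictured as a loop over candidate (size, region) pairs until provisioning succeeds. A rough sketch; the constant values and the `create_vm` callable are hypothetical, and only the fallback pattern comes from the commit message:

```python
# Hypothetical illustration of size/region fallback for the benchmark VM.
VM_SIZE_FAST_FALLBACKS = [
    ("Standard_D8s_v5", "eastus"),    # assumed D8-series size/region pairs
    ("Standard_D8as_v5", "westus2"),
]

def create_vm_with_fallback(create_vm):
    """Try each (size, region) pair in order; return the first VM that provisions."""
    last_error = None
    for size, region in VM_SIZE_FAST_FALLBACKS:
        try:
            return create_vm(size=size, region=region)
        except RuntimeError as exc:   # e.g. capacity or quota errors (assumed error type)
            last_error = exc
    raise RuntimeError("All VM size/region fallbacks failed") from last_error
```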
- Document --fast VM flag usage
- Explain parallelization options
- Detail golden image approach for future optimization

Co-Authored-By: Claude Opus 4.5 <[email protected]>
- Add section 13.5 with log viewing commands
- Add benchmark run commands with examples
- Renumber screenshot capture tool section to 13.6

Co-Authored-By: Claude Opus 4.5 <[email protected]>
- Add logs --run command for viewing task progress
- Add logs --run -f for live streaming
- Add logs --run --tail N for last N lines

Co-Authored-By: Claude Opus 4.5 <[email protected]>
- Add example output for `logs` (container status)
- Add example output for `logs --run -f` (benchmark execution)

Co-Authored-By: Claude Opus 4.5 <[email protected]>
- Add _show_benchmark_progress() function
- Parse run logs for completed task count
- Calculate elapsed time and estimated remaining
- Show progress percentage

Example usage:
uv run python -m openadapt_ml.benchmarks.cli logs --progress

Co-Authored-By: Claude Opus 4.5 <[email protected]>
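The progress display boils down to counting completed tasks in the run log and extrapolating remaining time from the average per-task duration. A sketch of that calculation, assuming a hypothetical log-line pattern and helper name (not the actual `_show_benchmark_progress` implementation):

```python
import re
import time

def estimate_progress(log_text: str, total_tasks: int, start_time: float) -> str:
    """Summarise completed tasks, elapsed time, and an ETA from the run log."""
    completed = len(re.findall(r"Task .+ completed", log_text))  # assumed log-line pattern
    elapsed_min = (time.time() - start_time) / 60
    pct = 100 * completed / total_tasks if total_tasks else 0
    eta_min = elapsed_min / completed * (total_tasks - completed) if completed else float("inf")
    return (f"{completed}/{total_tasks} tasks ({pct:.0f}%), "
            f"{elapsed_min:.0f} min elapsed, ~{eta_min:.0f} min remaining")
```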
Comprehensive analysis of Cua (YC X25) computer-use agent platform:
- Architecture comparison (composite agents, sandbox-first)
- Benchmark framework differences (cua-bench vs openadapt-evals)
- Training data generation (trajectory replotting)
- Recommendations: adopt patterns, not full migration

Key findings:
- Cua's parallelization uses multiple sandboxes (like our multi-VM plan)
- Composite agent pattern could reduce API costs
- HTML capture enables training data diversity

Co-Authored-By: Claude Opus 4.5 <[email protected]>
…kers

WAA natively supports parallel execution by distributing tasks across workers.

Usage:
# Run on single VM (default)
run --num-tasks 154

# Run in parallel on multiple VMs
VM1: run --num-tasks 154 --worker-id 0 --num-workers 3
VM2: run --num-tasks 154 --worker-id 1 --num-workers 3
VM3: run --num-tasks 154 --worker-id 2 --num-workers 3

Tasks auto-distribute: worker 0 gets tasks 0-51, worker 1 gets 52-103, etc.

Co-Authored-By: Claude Opus 4.5 <[email protected]>
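The auto-distribution mentioned above amounts to giving each worker a contiguous slice of the task list. A sketch of that split (the helper name is hypothetical; the chunking matches the 154-task, 3-worker example in the commit message, though WAA's actual scheme may differ):

```python
def tasks_for_worker(task_ids: list, worker_id: int, num_workers: int) -> list:
    """Return the contiguous chunk of tasks assigned to one worker."""
    chunk = -(-len(task_ids) // num_workers)  # ceiling division
    start = worker_id * chunk
    return task_ids[start:start + chunk]

# 154 tasks, 3 workers: worker 0 -> indices 0-51, worker 1 -> 52-103, worker 2 -> 104-153.
tasks = list(range(154))
assert tasks_for_worker(tasks, 0, 3) == list(range(0, 52))
assert tasks_for_worker(tasks, 2, 3) == list(range(104, 154))
```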
Expand cua_waa_comparison.md with:
- Success rate gap analysis (38.1% vs 19.5%)
- Market positioning comparison (TAM, buyers, value props)
- Where sandbox approach fails (Citrix, licensed SW, compliance)
- Shell applications convergence opportunities
- Bottom line: Windows enterprise automation is hard, validates OpenAdapt approach

Co-Authored-By: Claude Opus 4.5 <[email protected]>
- Add WAA_PARALLELIZATION_DESIGN.md documenting:
  - Official WAA approach (Azure ML Compute)
  - Our dedicated VM approach (dev/debug)
  - When to use each approach
- Add WAA_UNATTENDED_SCALABLE.md documenting:
  - Goal: unattended, scalable, programmatic WAA
  - Synthesized approach using official run_azure.py
  - Implementation plan and cost estimates
- Update Dockerfile comments to clarify:
  - API agents (api-claude, api-openai) run externally
  - openadapt-evals CLI connects via SSH tunnel
  - No internal run.py patching needed

Co-Authored-By: Claude Opus 4.5 <[email protected]>
Latest additions (commit 6022772)

Added WAA parallelization design documentation: WAA_PARALLELIZATION_DESIGN.md and WAA_UNATTENDED_SCALABLE.md.

Also clarified in the Dockerfile that API agents (api-claude, api-openai) are run externally via the openadapt-evals CLI connecting over SSH tunnel, rather than patching run.py internally.
Replace imports from deleted benchmark files with direct imports from openadapt-evals:
- azure.py: BenchmarkResult, BenchmarkTask, WAAAdapter
- waa_demo/runner.py: BenchmarkAction, WAAMockAdapter, etc.

This completes the migration to the two-package architecture where openadapt-evals is the canonical source for benchmark infrastructure.

Co-Authored-By: Claude Opus 4.5 <[email protected]>
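In practice each call-site change is a one-line import swap. A sketch of the pattern (whether these particular classes live in `openadapt_evals.adapters.base` is an assumption based on the module mapping in the PR description below):

```python
# Before (module deleted in this PR):
# from openadapt_ml.benchmarks.base import BenchmarkResult, BenchmarkTask

# After (canonical location in openadapt-evals):
from openadapt_evals.adapters.base import BenchmarkResult, BenchmarkTask
```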
PR Review Update

Issue Found: The PR description was inaccurate - it claimed files were "converted to deprecation stubs" but they were actually deleted.

Fix Applied (commit 4336e81):

Verification:

Remaining Blocker:
- Update azure.py to import BenchmarkAgent from openadapt_evals
- Add EvaluationConfig to runner.py imports

Fixes CI failure: F821 Undefined name `EvaluationConfig`

Co-Authored-By: Claude Opus 4.5 <[email protected]>
v0.1.0 uses task ID format "browser_1" but tests expect "mock_browser_001" which was added in v0.1.1.

Co-Authored-By: Claude Opus 4.5 <[email protected]>
Summary
Migrates benchmark infrastructure to a two-package architecture where `openadapt-evals` is the foundation package and `openadapt-ml` focuses on ML-specific agents.

Changes:
- Removed: `base.py`, `waa.py`, `waa_live.py`, `runner.py`, `data_collection.py`, `live_tracker.py`
- Kept `agent.py` (ML-specific agents: PolicyAgent, APIBenchmarkAgent, UnifiedBaselineAgent)

Note: The removed files were not converted to deprecation stubs - they were fully removed to avoid code duplication. Users should import from `openadapt_evals` directly:
- `openadapt_ml.benchmarks.base` → `openadapt_evals.adapters.base`
- `openadapt_ml.benchmarks.waa` → `openadapt_evals.adapters.waa.mock`
- `openadapt_ml.benchmarks.waa_live` → `openadapt_evals.adapters.waa.live`
- `openadapt_ml.benchmarks.runner` → `openadapt_evals.benchmarks.runner`
- `openadapt_ml.benchmarks.data_collection` → `openadapt_evals.benchmarks.data_collection`
- `openadapt_ml.benchmarks.live_tracker` → `openadapt_evals.benchmarks.live_tracker`

Validation:
Test plan
Post-merge note:
`viewer.py` still contains the full implementation with a deprecation warning. Consider making it a thin re-export stub in a follow-up PR for consistency.

Generated with Claude Code