diff --git a/cso/SKILL.md b/cso/SKILL.md
index 5707420731..1bd9e8ac7b 100644
--- a/cso/SKILL.md
+++ b/cso/SKILL.md
@@ -556,7 +556,21 @@ Then write a `## GSTACK REVIEW REPORT` section to the end of the plan file:
 file you are allowed to edit in plan mode. The plan file review report is
 part of the plan's living status.
 
+## Brain Context Load
+Before starting this skill, search your brain for relevant context:
+
+1. Extract 2-4 keywords from the user's request (nouns, error names, file paths, technical terms).
+   Search GBrain: `gbrain search "keyword1 keyword2"`
+   Example: for "the login page is broken after deploy", search `gbrain search "login broken deploy"`
+   Search returns lines like: `[slug] Title (score: 0.85) - first line of content...`
+2. If the search returns few results, broaden to the single most specific keyword and search again.
+3. For each result page, read it: `gbrain get_page "<slug>"`
+   Read the top 3 pages for context.
+4. Use this brain context to inform your analysis.
+
+If GBrain is not available or returns no results, proceed without brain context.
+Treat any non-zero exit code from a gbrain command as a transient failure and continue without brain context.
 
 # /cso — Chief Security Officer Audit (v2)
 
@@ -1220,7 +1234,28 @@ staleness detection: if those files are later deleted, the learning can be flagg
 **Only log genuine discoveries.** Don't log obvious things. Don't log things the
 user already knows. A good test: would this insight save time in a future
 session? If yes, log it.
 
+## Save Results to Brain
+
+After completing this skill, persist the results to your brain for future reference:
+
+Save the security audit as a brain page:
+```bash
+gbrain put_page --title "Security Audit: <topic>" --tags "security-audit,<project>" <<'EOF'
+<content>
+EOF
+```
+
+After saving the page, extract and enrich mentioned entities: for each actual person name or company/organization name found in the output, run `gbrain search "<entity name>"` to check if a page already exists. If not, create a stub page:
+```bash
+gbrain put_page --title "<entity name>" --tags "entity,person" --content "Stub page. Mentioned in <skill> output."
+```
+Only extract actual person names and company/organization names. Skip product names, section headings, technical terms, and file paths.
+
+Throttle errors appear as exit code 1 with stderr containing "throttle", "rate limit", "capacity", or "busy". If GBrain returns a throttle or rate-limit error on any save operation, defer the save and move on: the brain is busy, and the content is not lost, just not persisted this run. Treat any other non-zero exit code as a transient failure as well.
+
+Add backlinks to related brain pages if they exist. If GBrain is not available, skip this step.
+After brain operations complete, note in your completion output: how many pages were found in the initial search, how many entities were enriched, and whether any operations were throttled. This helps the user see brain utilization over time.
 
 ## Important Rules
 
diff --git a/design-consultation/SKILL.md b/design-consultation/SKILL.md
index 36d89123b1..338fdbc367 100644
--- a/design-consultation/SKILL.md
+++ b/design-consultation/SKILL.md
@@ -705,7 +705,21 @@ If `DESIGN_NOT_AVAILABLE`: Phase 5 falls back to the HTML preview page (still go
 
 ---
 
+## Brain Context Load
+Before starting this skill, search your brain for relevant context:
+
+1. Extract 2-4 keywords from the user's request (nouns, error names, file paths, technical terms).
+   Search GBrain: `gbrain search "keyword1 keyword2"`
+   Example: for "the login page is broken after deploy", search `gbrain search "login broken deploy"`
+   Search returns lines like: `[slug] Title (score: 0.85) - first line of content...`
+2. If the search returns few results, broaden to the single most specific keyword and search again.
+3. For each result page, read it: `gbrain get_page "<slug>"`
+   Read the top 3 pages for context.
+4. Use this brain context to inform your analysis.
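The search-and-read loop in the steps above can be sketched as a small helper. This is a sketch under stated assumptions: `top_slugs` is a hypothetical name, and it assumes only the `[slug] Title (score: 0.85) - ...` result format documented above.

```shell
# top_slugs: read `gbrain search` result lines on stdin and print the
# slugs of the top 3 results, ready to feed to `gbrain get_page`.
# Result lines look like: [slug] Title (score: 0.85) - first line...
top_slugs() {
  sed -n 's/^\[\([^]]*\)\].*/\1/p' | head -n 3
}

# Example wiring (assumes gbrain is on PATH):
# gbrain search "login broken deploy" | top_slugs | while read -r slug; do
#   gbrain get_page "$slug"
# done
```

Because search output is ranked, taking the first three lines is the same as taking the top three scores.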
+
+If GBrain is not available or returns no results, proceed without brain context.
+Treat any non-zero exit code from a gbrain command as a transient failure and continue without brain context.
 
 ## Prior Learnings
 
@@ -1274,7 +1288,28 @@ staleness detection: if those files are later deleted, the learning can be flagg
 **Only log genuine discoveries.** Don't log obvious things. Don't log things the
 user already knows. A good test: would this insight save time in a future
 session? If yes, log it.
 
+## Save Results to Brain
+
+After completing this skill, persist the results to your brain for future reference:
+
+Save the design system as a brain page:
+```bash
+gbrain put_page --title "Design System: <topic>" --tags "design-system,<project>" <<'EOF'
+<content>
+EOF
+```
+
+After saving the page, extract and enrich mentioned entities: for each actual person name or company/organization name found in the output, run `gbrain search "<entity name>"` to check if a page already exists. If not, create a stub page:
+```bash
+gbrain put_page --title "<entity name>" --tags "entity,person" --content "Stub page. Mentioned in <skill> output."
+```
+Only extract actual person names and company/organization names. Skip product names, section headings, technical terms, and file paths.
+
+Throttle errors appear as exit code 1 with stderr containing "throttle", "rate limit", "capacity", or "busy". If GBrain returns a throttle or rate-limit error on any save operation, defer the save and move on: the brain is busy, and the content is not lost, just not persisted this run. Treat any other non-zero exit code as a transient failure as well.
+
+Add backlinks to related brain pages if they exist. If GBrain is not available, skip this step.
+After brain operations complete, note in your completion output: how many pages were found in the initial search, how many entities were enriched, and whether any operations were throttled. This helps the user see brain utilization over time.
 
 ## Important Rules
 
diff --git a/design-review/SKILL.md b/design-review/SKILL.md
index f2c136f9fc..d4241c2c74 100644
--- a/design-review/SKILL.md
+++ b/design-review/SKILL.md
@@ -574,7 +574,21 @@ Then write a `## GSTACK REVIEW REPORT` section to the end of the plan file:
 file you are allowed to edit in plan mode. The plan file review report is
 part of the plan's living status.
 
+## Brain Context Load
+Before starting this skill, search your brain for relevant context:
+
+1. Extract 2-4 keywords from the user's request (nouns, error names, file paths, technical terms).
+   Search GBrain: `gbrain search "keyword1 keyword2"`
+   Example: for "the login page is broken after deploy", search `gbrain search "login broken deploy"`
+   Search returns lines like: `[slug] Title (score: 0.85) - first line of content...`
+2. If the search returns few results, broaden to the single most specific keyword and search again.
+3. For each result page, read it: `gbrain get_page "<slug>"`
+   Read the top 3 pages for context.
+4. Use this brain context to inform your analysis.
+
+If GBrain is not available or returns no results, proceed without brain context.
+Treat any non-zero exit code from a gbrain command as a transient failure and continue without brain context.
 
 # /design-review: Design Audit → Fix → Verify
 
@@ -1753,7 +1767,28 @@ staleness detection: if those files are later deleted, the learning can be flagg
 **Only log genuine discoveries.** Don't log obvious things. Don't log things the
 user already knows. A good test: would this insight save time in a future
 session? If yes, log it.
 
+## Save Results to Brain
+
+After completing this skill, persist the results to your brain for future reference:
+
+Save the skill output as a brain page if the results are worth preserving:
+```bash
+gbrain put_page --title "<title>" --tags "<tags>" <<'EOF'
+<content>
+EOF
+```
+
+After saving the page, extract and enrich mentioned entities: for each actual person name or company/organization name found in the output, run `gbrain search "<entity name>"` to check if a page already exists. If not, create a stub page:
+```bash
+gbrain put_page --title "<entity name>" --tags "entity,person" --content "Stub page. Mentioned in <skill> output."
+```
+Only extract actual person names and company/organization names. Skip product names, section headings, technical terms, and file paths.
+
+Throttle errors appear as exit code 1 with stderr containing "throttle", "rate limit", "capacity", or "busy". If GBrain returns a throttle or rate-limit error on any save operation, defer the save and move on: the brain is busy, and the content is not lost, just not persisted this run. Treat any other non-zero exit code as a transient failure as well.
+
+Add backlinks to related brain pages if they exist. If GBrain is not available, skip this step.
+After brain operations complete, note in your completion output: how many pages were found in the initial search, how many entities were enriched, and whether any operations were throttled. This helps the user see brain utilization over time.
 
 ## Additional Rules (design-review specific)
 
diff --git a/hosts/claude.ts b/hosts/claude.ts
index 47470d969c..7c563dcbfa 100644
--- a/hosts/claude.ts
+++ b/hosts/claude.ts
@@ -24,7 +24,7 @@ const claude: HostConfig = {
   pathRewrites: [], // Claude is the primary host — no rewrites needed
   toolRewrites: {},
-  suppressedResolvers: ['GBRAIN_CONTEXT_LOAD', 'GBRAIN_SAVE_RESULTS'],
+  suppressedResolvers: [],
 
   runtimeRoot: {
     globalSymlinks: ['bin', 'browse/dist', 'browse/bin', 'gstack-upgrade', 'ETHOS.md'],
diff --git a/hosts/codex.ts b/hosts/codex.ts
index 7dc80ea877..cf60742f93 100644
--- a/hosts/codex.ts
+++ b/hosts/codex.ts
@@ -37,8 +37,6 @@ const codex: HostConfig = {
     'CODEX_SECOND_OPINION', // review.ts:257 — Codex can't invoke itself
     'CODEX_PLAN_REVIEW', // review.ts:541 — Codex can't invoke itself
     'REVIEW_ARMY', // review-army.ts:180 — Codex shouldn't orchestrate
-    'GBRAIN_CONTEXT_LOAD',
-    'GBRAIN_SAVE_RESULTS',
   ],
 
   runtimeRoot: {
diff --git a/investigate/SKILL.md b/investigate/SKILL.md
index eb2190bb96..0c95556e15 100644
--- a/investigate/SKILL.md
+++ b/investigate/SKILL.md
@@ -580,7 +580,23 @@ Fixing symptoms creates whack-a-mole debugging. Every fix that doesn't address r
 
 ---
 
+## Brain Context Load
+Before starting this skill, search your brain for relevant context:
+
+1. Extract 2-4 keywords from the user's request (nouns, error names, file paths, technical terms).
+   Search GBrain: `gbrain search "keyword1 keyword2"`
+   Example: for "the login page is broken after deploy", search `gbrain search "login broken deploy"`
+   Search returns lines like: `[slug] Title (score: 0.85) - first line of content...`
+2. If the search returns few results, broaden to the single most specific keyword and search again.
+3. For each result page, read it: `gbrain get_page "<slug>"`
+   Read the top 3 pages for context.
+4. Use this brain context to inform your analysis.
+
+If GBrain is not available or returns no results, proceed without brain context.
+Treat any non-zero exit code from a gbrain command as a transient failure and continue without brain context.
+
+If the user's request is about tracking, extracting, or researching structured data (e.g., "track this data", "extract from emails", "build a tracker"), route to GBrain's data-research skill instead: `gbrain call data-research`. The data-research skill has a 7-phase pipeline optimized for structured data extraction.
 
 ## Phase 1: Root Cause Investigation
 
@@ -792,7 +808,28 @@ staleness detection: if those files are later deleted, the learning can be flagg
 **Only log genuine discoveries.** Don't log obvious things. Don't log things the
 user already knows. A good test: would this insight save time in a future
 session? If yes, log it.
+## Save Results to Brain
+
+After completing this skill, persist the results to your brain for future reference:
+
+Save the root cause analysis as a brain page:
+```bash
+gbrain put_page --title "Investigation: <topic>" --tags "investigation,<project>" <<'EOF'
+<content>
+EOF
+```
+
+After saving the page, extract and enrich mentioned entities: for each actual person name or company/organization name found in the output, run `gbrain search "<entity name>"` to check if a page already exists. If not, create a stub page:
+```bash
+gbrain put_page --title "<entity name>" --tags "entity,person" --content "Stub page. Mentioned in <skill> output."
+```
+Only extract actual person names and company/organization names. Skip product names, section headings, technical terms, and file paths.
+
+Throttle errors appear as exit code 1 with stderr containing "throttle", "rate limit", "capacity", or "busy". If GBrain returns a throttle or rate-limit error on any save operation, defer the save and move on: the brain is busy, and the content is not lost, just not persisted this run. Treat any other non-zero exit code as a transient failure as well.
+
+Add backlinks to related brain pages if they exist. If GBrain is not available, skip this step.
+After brain operations complete, note in your completion output: how many pages were found in the initial search, how many entities were enriched, and whether any operations were throttled. This helps the user see brain utilization over time.
 
 ---
 
diff --git a/office-hours/SKILL.md b/office-hours/SKILL.md
index 0c31095fc8..1596819b85 100644
--- a/office-hours/SKILL.md
+++ b/office-hours/SKILL.md
@@ -623,7 +623,21 @@ You are a **YC office hours partner**. Your job is to ensure the problem is unde
 
 ---
 
+## Brain Context Load
+Before starting this skill, search your brain for relevant context:
+
+1. Extract 2-4 keywords from the user's request (nouns, error names, file paths, technical terms).
+   Search GBrain: `gbrain search "keyword1 keyword2"`
+   Example: for "the login page is broken after deploy", search `gbrain search "login broken deploy"`
+   Search returns lines like: `[slug] Title (score: 0.85) - first line of content...`
+2. If the search returns few results, broaden to the single most specific keyword and search again.
+3. For each result page, read it: `gbrain get_page "<slug>"`
+   Read the top 3 pages for context.
+4. Use this brain context to inform your analysis.
+
+If GBrain is not available or returns no results, proceed without brain context.
+Treat any non-zero exit code from a gbrain command as a transient failure and continue without brain context.
 
 ## Phase 1: Context Gathering
 
@@ -1536,7 +1550,28 @@ Present the reviewed design doc to the user via AskUserQuestion:
 - B) Revise — specify which sections need changes (loop back to revise those sections)
 - C) Start over — return to Phase 2
 
+## Save Results to Brain
+
+After completing this skill, persist the results to your brain for future reference:
+
+Save the design document as a brain page:
+```bash
+gbrain put_page --title "Office Hours: <topic>" --tags "design-doc,<project>" <<'EOF'
+<content>
+EOF
+```
+
+After saving the page, extract and enrich mentioned entities: for each actual person name or company/organization name found in the output, run `gbrain search "<entity name>"` to check if a page already exists. If not, create a stub page:
+```bash
+gbrain put_page --title "<entity name>" --tags "entity,person" --content "Stub page. Mentioned in <skill> output."
+```
+Only extract actual person names and company/organization names. Skip product names, section headings, technical terms, and file paths.
+
+Throttle errors appear as exit code 1 with stderr containing "throttle", "rate limit", "capacity", or "busy". If GBrain returns a throttle or rate-limit error on any save operation, defer the save and move on: the brain is busy, and the content is not lost, just not persisted this run. Treat any other non-zero exit code as a transient failure as well.
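The error triage described above can be sketched as a small classifier. This is a sketch under stated assumptions: `classify_gbrain_error` is a hypothetical helper name, and the substring list is exactly the one the skill text specifies.

```shell
# classify_gbrain_error: given stderr text from a failed gbrain call,
# decide whether to defer the save (throttled) or treat the failure
# as transient. Matches the substrings listed in the skill text.
classify_gbrain_error() {
  case "$1" in
    *throttle*|*"rate limit"*|*capacity*|*busy*) echo "throttled" ;;
    *) echo "transient" ;;
  esac
}
```

Either way the skill keeps going; "throttled" just means skip the remaining save operations for this run rather than retrying.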
+
+Add backlinks to related brain pages if they exist. If GBrain is not available, skip this step.
+After brain operations complete, note in your completion output: how many pages were found in the initial search, how many entities were enriched, and whether any operations were throttled. This helps the user see brain utilization over time.
 
 ---
 
diff --git a/plan-ceo-review/SKILL.md b/plan-ceo-review/SKILL.md
index c2fc9bbb6a..560924d2ac 100644
--- a/plan-ceo-review/SKILL.md
+++ b/plan-ceo-review/SKILL.md
@@ -888,7 +888,21 @@ matches a past learning, display:
 
 This makes the compounding visible. The user should see that gstack is getting
 smarter on their codebase over time.
 
+## Brain Context Load
+Before starting this skill, search your brain for relevant context:
+
+1. Extract 2-4 keywords from the user's request (nouns, error names, file paths, technical terms).
+   Search GBrain: `gbrain search "keyword1 keyword2"`
+   Example: for "the login page is broken after deploy", search `gbrain search "login broken deploy"`
+   Search returns lines like: `[slug] Title (score: 0.85) - first line of content...`
+2. If the search returns few results, broaden to the single most specific keyword and search again.
+3. For each result page, read it: `gbrain get_page "<slug>"`
+   Read the top 3 pages for context.
+4. Use this brain context to inform your analysis.
+
+If GBrain is not available or returns no results, proceed without brain context.
+Treat any non-zero exit code from a gbrain command as a transient failure and continue without brain context.
 
 ## Step 0: Nuclear Scope Challenge + Mode Selection
 
@@ -1831,7 +1845,28 @@ staleness detection: if those files are later deleted, the learning can be flagg
 **Only log genuine discoveries.** Don't log obvious things. Don't log things the
 user already knows. A good test: would this insight save time in a future
 session? If yes, log it.
 
+## Save Results to Brain
+
+After completing this skill, persist the results to your brain for future reference:
+
+Save the CEO plan as a brain page:
+```bash
+gbrain put_page --title "CEO Plan: <topic>" --tags "ceo-plan,<project>" <<'EOF'
+<content>
+EOF
+```
+
+After saving the page, extract and enrich mentioned entities: for each actual person name or company/organization name found in the output, run `gbrain search "<entity name>"` to check if a page already exists. If not, create a stub page:
+```bash
+gbrain put_page --title "<entity name>" --tags "entity,person" --content "Stub page. Mentioned in <skill> output."
+```
+Only extract actual person names and company/organization names. Skip product names, section headings, technical terms, and file paths.
+
+Throttle errors appear as exit code 1 with stderr containing "throttle", "rate limit", "capacity", or "busy". If GBrain returns a throttle or rate-limit error on any save operation, defer the save and move on: the brain is busy, and the content is not lost, just not persisted this run. Treat any other non-zero exit code as a transient failure as well.
+
+Add backlinks to related brain pages if they exist. If GBrain is not available, skip this step.
+After brain operations complete, note in your completion output: how many pages were found in the initial search, how many entities were enriched, and whether any operations were throttled. This helps the user see brain utilization over time.
 
 ## Mode Quick Reference
 
 ```
diff --git a/plan-eng-review/SKILL.md b/plan-eng-review/SKILL.md
index 1b2482e145..a634d7695d 100644
--- a/plan-eng-review/SKILL.md
+++ b/plan-eng-review/SKILL.md
@@ -574,7 +574,21 @@ Then write a `## GSTACK REVIEW REPORT` section to the end of the plan file:
 file you are allowed to edit in plan mode. The plan file review report is
 part of the plan's living status.
 
+## Brain Context Load
+Before starting this skill, search your brain for relevant context:
+
+1. Extract 2-4 keywords from the user's request (nouns, error names, file paths, technical terms).
+   Search GBrain: `gbrain search "keyword1 keyword2"`
+   Example: for "the login page is broken after deploy", search `gbrain search "login broken deploy"`
+   Search returns lines like: `[slug] Title (score: 0.85) - first line of content...`
+2. If the search returns few results, broaden to the single most specific keyword and search again.
+3. For each result page, read it: `gbrain get_page "<slug>"`
+   Read the top 3 pages for context.
+4. Use this brain context to inform your analysis.
+
+If GBrain is not available or returns no results, proceed without brain context.
+Treat any non-zero exit code from a gbrain command as a transient failure and continue without brain context.
 
 # Plan Review Mode
 
@@ -1431,7 +1445,28 @@ staleness detection: if those files are later deleted, the learning can be flagg
 **Only log genuine discoveries.** Don't log obvious things. Don't log things the
 user already knows. A good test: would this insight save time in a future
 session? If yes, log it.
 
+## Save Results to Brain
+
+After completing this skill, persist the results to your brain for future reference:
+
+Save the architecture decisions as a brain page:
+```bash
+gbrain put_page --title "Eng Review: <topic>" --tags "eng-review,<project>" <<'EOF'
+<content>
+EOF
+```
+
+After saving the page, extract and enrich mentioned entities: for each actual person name or company/organization name found in the output, run `gbrain search "<entity name>"` to check if a page already exists. If not, create a stub page:
+```bash
+gbrain put_page --title "<entity name>" --tags "entity,person" --content "Stub page. Mentioned in <skill> output."
+```
+Only extract actual person names and company/organization names. Skip product names, section headings, technical terms, and file paths.
+
+Throttle errors appear as exit code 1 with stderr containing "throttle", "rate limit", "capacity", or "busy". If GBrain returns a throttle or rate-limit error on any save operation, defer the save and move on: the brain is busy, and the content is not lost, just not persisted this run. Treat any other non-zero exit code as a transient failure as well.
+
+Add backlinks to related brain pages if they exist. If GBrain is not available, skip this step.
+After brain operations complete, note in your completion output: how many pages were found in the initial search, how many entities were enriched, and whether any operations were throttled. This helps the user see brain utilization over time.
 
 ## Next Steps — Review Chaining
 
diff --git a/qa/SKILL.md b/qa/SKILL.md
index 3a04bd7818..e98b11dec6 100644
--- a/qa/SKILL.md
+++ b/qa/SKILL.md
@@ -615,7 +615,21 @@ branch name wherever the instructions say "the base branch" or `<base>`.
 
 ---
 
+## Brain Context Load
+Before starting this skill, search your brain for relevant context:
+
+1. Extract 2-4 keywords from the user's request (nouns, error names, file paths, technical terms).
+   Search GBrain: `gbrain search "keyword1 keyword2"`
+   Example: for "the login page is broken after deploy", search `gbrain search "login broken deploy"`
+   Search returns lines like: `[slug] Title (score: 0.85) - first line of content...`
+2. If the search returns few results, broaden to the single most specific keyword and search again.
+3. For each result page, read it: `gbrain get_page "<slug>"`
+   Read the top 3 pages for context.
+4. Use this brain context to inform your analysis.
+
+If GBrain is not available or returns no results, proceed without brain context.
+Treat any non-zero exit code from a gbrain command as a transient failure and continue without brain context.
 
 # /qa: Test → Fix → Verify
 
@@ -1431,7 +1445,28 @@ staleness detection: if those files are later deleted, the learning can be flagg
 **Only log genuine discoveries.** Don't log obvious things. Don't log things the
 user already knows. A good test: would this insight save time in a future
 session? If yes, log it.
+## Save Results to Brain
+
+After completing this skill, persist the results to your brain for future reference:
+
+Save the skill output as a brain page if the results are worth preserving:
+```bash
+gbrain put_page --title "<title>" --tags "<tags>" <<'EOF'
+<content>
+EOF
+```
+
+After saving the page, extract and enrich mentioned entities: for each actual person name or company/organization name found in the output, run `gbrain search "<entity name>"` to check if a page already exists. If not, create a stub page:
+```bash
+gbrain put_page --title "<entity name>" --tags "entity,person" --content "Stub page. Mentioned in <skill> output."
+```
+Only extract actual person names and company/organization names. Skip product names, section headings, technical terms, and file paths.
+
+Throttle errors appear as exit code 1 with stderr containing "throttle", "rate limit", "capacity", or "busy". If GBrain returns a throttle or rate-limit error on any save operation, defer the save and move on: the brain is busy, and the content is not lost, just not persisted this run. Treat any other non-zero exit code as a transient failure as well.
+
+Add backlinks to related brain pages if they exist. If GBrain is not available, skip this step.
+After brain operations complete, note in your completion output: how many pages were found in the initial search, how many entities were enriched, and whether any operations were throttled. This helps the user see brain utilization over time.
 
 ## Additional Rules (qa-specific)
 
diff --git a/retro/SKILL.md b/retro/SKILL.md
index 1b89d1000b..17c4884a6c 100644
--- a/retro/SKILL.md
+++ b/retro/SKILL.md
@@ -607,7 +607,21 @@ When the user types `/retro`, run this skill.
 
 - `/retro global` — cross-project retro across all AI coding tools (7d default)
 - `/retro global 14d` — cross-project retro with explicit window
 
+## Brain Context Load
+Before starting this skill, search your brain for relevant context:
+
+1. Extract 2-4 keywords from the user's request (nouns, error names, file paths, technical terms).
+   Search GBrain: `gbrain search "keyword1 keyword2"`
+   Example: for "the login page is broken after deploy", search `gbrain search "login broken deploy"`
+   Search returns lines like: `[slug] Title (score: 0.85) - first line of content...`
+2. If the search returns few results, broaden to the single most specific keyword and search again.
+3. For each result page, read it: `gbrain get_page "<slug>"`
+   Read the top 3 pages for context.
+4. Use this brain context to inform your analysis.
+
+If GBrain is not available or returns no results, proceed without brain context.
+Treat any non-zero exit code from a gbrain command as a transient failure and continue without brain context.
 
 ## Instructions
 
@@ -922,7 +936,28 @@ staleness detection: if those files are later deleted, the learning can be flagg
 **Only log genuine discoveries.** Don't log obvious things. Don't log things the
 user already knows. A good test: would this insight save time in a future
 session? If yes, log it.
 
+## Save Results to Brain
+
+After completing this skill, persist the results to your brain for future reference:
+
+Save the retrospective as a brain page:
+```bash
+gbrain put_page --title "Retro: <topic>" --tags "retro,<project>" <<'EOF'
+<content>
+EOF
+```
+
+After saving the page, extract and enrich mentioned entities: for each actual person name or company/organization name found in the output, run `gbrain search "<entity name>"` to check if a page already exists. If not, create a stub page:
+```bash
+gbrain put_page --title "<entity name>" --tags "entity,person" --content "Stub page. Mentioned in <skill> output."
+```
+Only extract actual person names and company/organization names. Skip product names, section headings, technical terms, and file paths.
+
+Throttle errors appear as exit code 1 with stderr containing "throttle", "rate limit", "capacity", or "busy". If GBrain returns a throttle or rate-limit error on any save operation, defer the save and move on: the brain is busy, and the content is not lost, just not persisted this run. Treat any other non-zero exit code as a transient failure as well.
+
+Add backlinks to related brain pages if they exist. If GBrain is not available, skip this step.
+After brain operations complete, note in your completion output: how many pages were found in the initial search, how many entities were enriched, and whether any operations were throttled. This helps the user see brain utilization over time.
 
 ### Step 10: Week-over-Week Trends (if window >= 14d)
 
diff --git a/ship/SKILL.md b/ship/SKILL.md
index 61a6b87e95..eef4f19c31 100644
--- a/ship/SKILL.md
+++ b/ship/SKILL.md
@@ -613,7 +613,21 @@ branch name wherever the instructions say "the base branch" or `<base>`.
 
 ---
 
+## Brain Context Load
+Before starting this skill, search your brain for relevant context:
+
+1. Extract 2-4 keywords from the user's request (nouns, error names, file paths, technical terms).
+   Search GBrain: `gbrain search "keyword1 keyword2"`
+   Example: for "the login page is broken after deploy", search `gbrain search "login broken deploy"`
+   Search returns lines like: `[slug] Title (score: 0.85) - first line of content...`
+2. If the search returns few results, broaden to the single most specific keyword and search again.
+3. For each result page, read it: `gbrain get_page "<slug>"`
+   Read the top 3 pages for context.
+4. Use this brain context to inform your analysis.
+
+If GBrain is not available or returns no results, proceed without brain context.
+Treat any non-zero exit code from a gbrain command as a transient failure and continue without brain context.
 
 # Ship: Fully Automated Ship Workflow
 
@@ -2190,7 +2204,28 @@ staleness detection: if those files are later deleted, the learning can be flagg
 **Only log genuine discoveries.** Don't log obvious things. Don't log things the
 user already knows. A good test: would this insight save time in a future
 session? If yes, log it.
 
+## Save Results to Brain
+
+After completing this skill, persist the results to your brain for future reference:
+
+Save the release notes as a brain page:
+```bash
+gbrain put_page --title "Release: <topic>" --tags "release,<project>" <<'EOF'
+<content>
+EOF
+```
+
+After saving the page, extract and enrich mentioned entities: for each actual person name or company/organization name found in the output, run `gbrain search "<entity name>"` to check if a page already exists. If not, create a stub page:
+```bash
+gbrain put_page --title "<entity name>" --tags "entity,person" --content "Stub page. Mentioned in <skill> output."
+```
+Only extract actual person names and company/organization names. Skip product names, section headings, technical terms, and file paths.
+
+Throttle errors appear as exit code 1 with stderr containing "throttle", "rate limit", "capacity", or "busy". If GBrain returns a throttle or rate-limit error on any save operation, defer the save and move on: the brain is busy, and the content is not lost, just not persisted this run. Treat any other non-zero exit code as a transient failure as well.
+
+Add backlinks to related brain pages if they exist. If GBrain is not available, skip this step.
+After brain operations complete, note in your completion output: how many pages were found in the initial search, how many entities were enriched, and whether any operations were throttled. This helps the user see brain utilization over time.
 
 ## Step 4: Version bump (auto-decide)