
feat: add /seo-audit skill — Live SEO + GEO audit with algorithm-awareness#1052

Open
manuelbenitez wants to merge 1 commit into garrytan:main from manuelbenitez:main

Conversation

manuelbenitez commented Apr 17, 2026

feat: add /seo-audit skill — Live SEO + GEO audit with algorithm-awareness

LLM bots crawl 3.6x more pages than Googlebot, and ClaudeBot crawls 23,951 pages per referral (Cloudflare Radar, Q1 2026). Every SEO tool on the market optimizes for one of those numbers. /seo-audit scores both — 50 pts traditional SEO, 50 pts GEO (Generative Engine Optimization) — and fixes the gaps.

Why this isn't a stale checklist

Every SEO skill on GitHub is a snapshot of the month it was written. Google changes the algorithm, the skill rots. This one doesn't.

On every run, the skill fires two live WebSearch queries:

  • "Google core algorithm update [MONTH YEAR]"
  • "GEO generative engine optimization best practices [YEAR]"

The report opens with an ALGORITHM_STATE block — current changes with source URLs, plus a relevance flag if any recent update applies to the audited site. No if (year === 2024) branches. No quarterly rewrites.
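
A hypothetical shape of that block (field names and wording are illustrative — the skill's actual report layout may differ):

```text
ALGORITHM_STATE (checked 2026-04-17)
- Core update: <latest Google core update found via WebSearch>
  source: <URL from WebSearch result>
- GEO guidance: <current best-practice summary found via WebSearch>
  source: <URL from WebSearch result>
- Relevance to audited site: <flagged / none>
```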

Field finding: the bug that was invisible to every other tool

Live-tested against longevity.mbdev.to. On the first run, the skill's LLM-bot-access check flagged something no Lighthouse run, no Screaming Frog crawl, and no Ahrefs audit would catch:

Cloudflare's Managed robots.txt was silently overriding the deployed robots.txt.
Allowed requests across Google, OpenAI, Anthropic, Perplexity, Meta, ByteDance: zero. Every AI crawler on the planet was being blocked by a config toggle inside Cloudflare's UI that traditional SEO tools never inspect. The fix was one click. Without the audit, this would have stayed invisible indefinitely.

That's the kind of check this skill exists for. Most SEO tooling treats AI crawlers as a footnote. /seo-audit treats them as 50% of the score.
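
A minimal sketch of that kind of LLM-bot-access check — simplified, assumed logic, not the skill's actual implementation: group robots.txt rules by user-agent section and report which AI crawlers end up under a blanket block.

```typescript
// Simplified sketch of an LLM-bot-access check (illustrative logic, not the
// skill's actual code). Groups robots.txt rules by user-agent and reports
// which AI crawlers are blocked by a blanket "Disallow: /".
const AI_BOTS = ["GPTBot", "ClaudeBot", "PerplexityBot", "OAI-SearchBot"];

function blockedBots(robotsTxt: string, bots: string[] = AI_BOTS): string[] {
  const sections = new Map<string, string[]>();
  let current: string[] = [];
  for (const raw of robotsTxt.split("\n")) {
    const line = raw.split("#")[0].trim(); // strip comments
    const idx = line.indexOf(":");
    if (idx < 0) continue;
    const key = line.slice(0, idx).trim().toLowerCase();
    const value = line.slice(idx + 1).trim();
    if (!value) continue;
    if (key === "user-agent") {
      current = sections.get(value.toLowerCase()) ?? [];
      sections.set(value.toLowerCase(), current);
    } else if (key === "allow" || key === "disallow") {
      current.push(`${key}:${value}`);
    }
  }
  return bots.filter((bot) => {
    // Fall back to the wildcard section when the bot has no section of its own
    const rules = sections.get(bot.toLowerCase()) ?? sections.get("*") ?? [];
    return rules.includes("disallow:/") && !rules.some((r) => r.startsWith("allow:"));
  });
}
```

The important design choice is where the input comes from: checking the robots.txt *as served through the CDN*, rather than the file in your repo, is exactly what surfaces overrides like Cloudflare's managed file.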

Live proof — longevity.mbdev.to

One session, before-and-after, on a real production Next.js site with en/es i18n, 72 ingredient pages, and 18 recipes.

Before

Score: 47/100
Traditional SEO: 27/50 | GEO / AI Visibility: 20/50

Critical Issues
- robots.txt: 404
- sitemap.xml: 404
- llms.txt: 404
- canonical URL: missing
- Open Graph tags: all missing
- JSON-LD: none

After (same session)

Score: 97/100
Traditional SEO: 47/50 | GEO / AI Visibility: 50/50

Traditional SEO
✓ Title tag (53 chars)
✓ Meta description (89 chars)
✓ Canonical URL
✓ Open Graph (title, description, image 1200x630, url)
✓ robots.txt — explicit allow for GPTBot, ClaudeBot, PerplexityBot, OAI-SearchBot
✓ sitemap.xml — 183 URLs across en/es locales
✓ H1 (exactly one)
✓ lang="en"
✓ LCP 2271ms (Good), CLS 0 (Good)

GEO / AI Visibility
✓ llms.txt (200, text/plain)
✓ LLM bots explicitly allowed in robots.txt
✓ JSON-LD WebSite schema
✓ Content front-loading
✓ hreflang auto-awarded (single locale)
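
For reference, a robots.txt along the lines of what the checklist above awards points for (domain and rules illustrative, not the audited site's actual file):

```text
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: OAI-SearchBot
Allow: /

User-agent: *
Allow: /

Sitemap: https://example.com/sitemap.xml
```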

Post-deploy signal landed fast:

  • 189 pages discovered in Google Search Console within hours
  • Claude-SearchBot crawling within 24h of deploy
  • GSC robots.txt validator: "All files are valid" across every fetched variant

Proof of work — open source. Every fix the skill proposed was applied to an open-source repo via /ship, so anyone can verify the output in real production code.

How it works

```
/seo-audit                          # auto-detects local dev server
/seo-audit https://yoursite.com     # audit a live deployment
```
  1. Pulls the current Google algorithm state via WebSearch (no stale checklists)
  2. Crawls the page via real browser — handles JS-rendered content
  3. Runs Lighthouse for Core Web Vitals (if installed)
  4. Checks 9 traditional SEO signals + 5 GEO signals
  5. Scores 0-100 (50pts each)
  6. Outputs a scored report with copy-pasteable code fixes
  7. Offers to apply all fixes via /ship, then re-runs the audit to confirm

What the skill checks

Traditional SEO (50 pts)

| Check | Points |
|---|---|
| Title tag (< 60 chars) | 5 |
| Meta description (< 160 chars) | 5 |
| Canonical URL | 5 |
| Open Graph (all 4 tags) | 5 |
| robots.txt | 5 |
| sitemap.xml | 5 |
| H1 (exactly one) | 5 |
| lang attribute | 5 |
| Core Web Vitals (LCP/CLS/INP) | 10 |

GEO / AI Visibility (50 pts)

| Check | Points |
|---|---|
| llms.txt | 15 |
| LLM bots allowed in robots.txt | 10 |
| JSON-LD (content-type aware) | 15 |
| Content front-loading | 5 |
| hreflang | 5 |
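
The two tables sum to the 0-100 score. A sketch of the arithmetic (weight names are illustrative, taken from the tables above — not the skill's internal API):

```typescript
// Weights mirror the check tables: 50 pts traditional + 50 pts GEO.
// Check names are illustrative; the skill's internals may differ.
const WEIGHTS: Record<string, number> = {
  // Traditional SEO (50)
  titleTag: 5, metaDescription: 5, canonicalUrl: 5, openGraph: 5,
  robotsTxt: 5, sitemapXml: 5, singleH1: 5, langAttr: 5, coreWebVitals: 10,
  // GEO / AI visibility (50)
  llmsTxt: 15, llmBotsAllowed: 10, jsonLd: 15, frontLoading: 5, hreflang: 5,
};

function auditScore(passed: Set<string>): number {
  return Object.entries(WEIGHTS).reduce(
    (sum, [check, pts]) => sum + (passed.has(check) ? pts : 0), 0);
}
```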

How it compares

| Tool | Traditional SEO | GEO | Algorithm-live | Free |
|---|---|---|---|---|
| Screaming Frog | ✓ | — | — | Crawl cap |
| Ahrefs | ✓ | Partial | — | No |
| Lighthouse | Partial | — | — | ✓ |
| /seo-audit | ✓ | ✓ | ✓ | ✓ |

Post-fix: verify it's working

The skill tells you what to do after fixing — not just "re-run the audit." The new post-fix section covers both Google-side and AI-bot-side validation:

  • GSC robots.txt validator — universal, free. Confirms Googlebot can parse your new allow rules. Works on any host.
  • GSC Crawl Stats — shows real Googlebot activity over 90 days. Response breakdown, file types, trend chart.
  • Cloudflare AI Crawl Control — for proxied domains only (DNS-only records are invisible to CF).
  • Vercel logs / Next.js middleware — the free-tier path for monitoring non-Google AI bots when Vercel Analytics is paywalled and GA4 is blind to crawlers.
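
A sketch of that last path — a user-agent classifier you could call from a Next.js `middleware.ts` to log AI-bot hits on the free tier (bot list and logging illustrative; the skill doesn't prescribe this exact code):

```typescript
// Illustrative AI-crawler UA patterns; extend for whatever bots you track.
// Order matters: more specific names (Claude-SearchBot) before general ones.
const AI_BOT_PATTERNS = [
  /Claude-SearchBot/i, /ClaudeBot/i, /GPTBot/i,
  /OAI-SearchBot/i, /PerplexityBot/i, /Bytespider/i,
];

function aiBotName(userAgent: string): string | null {
  for (const pattern of AI_BOT_PATTERNS) {
    const match = userAgent.match(pattern);
    if (match) return match[0]; // the matched bot name
  }
  return null;
}

// In middleware.ts you would then do something like:
//   const bot = aiBotName(req.headers.get("user-agent") ?? "");
//   if (bot) console.log(`[ai-bot] ${bot} ${req.nextUrl.pathname}`);
```

The log lines then show up in Vercel's function logs, giving crawl visibility that GA4 (which ignores bots) cannot.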

Expected signal timeline:

| Signal | When |
|---|---|
| AI bots hitting llms.txt | Hours (Cloudflare or server logs) |
| GSC robots.txt validator clean | 24h after recrawl |
| GSC sitemap indexed count | 24-48h |
| GSC impressions | 3-7 days |
| Organic traffic change | 4-8 weeks |

Time to value

| Task | Manual (Lighthouse + spreadsheet + Next.js config by hand) | /seo-audit |
|---|---|---|
| Full audit | ~2 hours | ~45 seconds |
| Apply all fixes | ~2 hours (per-file code edits) | ~2 min via /ship |
| Re-verify | ~30 min | ~30 seconds |

What this doesn't do

  • Doesn't track organic traffic over time — that's a GSC/GA job.
  • Doesn't deep-audit JS UX regressions — use /qa for that.
  • Doesn't replace human judgment on content quality or backlink strategy.
  • Doesn't scan non-HTML assets (PDFs, XML feeds beyond sitemap).

Tested on

  • Next.js static export — longevity.mbdev.to · source: manuelbenitez/my-longevity-wiki (en/es i18n, 72 ingredient pages, 18 recipes). Open source — every skill-proposed fix is visible in the commit history.
  • Next.js app-router portfolio (mbdev.to — personal portfolio, dark/light theming)
  • Skill works on any gstack project with the browse binary set up

Checklist

  • seo-audit/SKILL.md.tmpl — single source of truth
  • Generated for all 8 hosts (bun run gen:skill-docs --host all)
  • Tier 1 static tests pass (bun test) — 329 pass, browse API validated
  • Live tested against 2 production sites
  • Before/after scores documented with screenshots
  • Tier 2 E2E test (test/skill-e2e-seo.test.ts) — spec in design doc, deferred until a real site is wired into the test harness

Dogfooded on longevity.mbdev.to and mbdev.to across 3 sessions. Both live and in Google's index as of 2026-04-17. Built with /office-hours → /plan-eng-review → /seo-audit → /ship.

manuelbenitez commented Apr 17, 2026

@garrytan — first gstack contribution. Hopefully saves other builders the "deployed llms.txt + Cloudflare silently blocking every AI crawler anyway" debugging session I just went through on my own site. The skill caught it day one. Happy to iterate on any feedback.

SKILL.md + SKILL.md.tmpl: live SEO + GEO audit that scores traditional
SEO and AI/generative visibility 50/50, algorithm-aware via WebSearch.

- Post-fix verification covering GSC robots.txt validator, GSC crawl
  stats, Vercel logs, and Next.js middleware for monitoring AI bots
  without paid analytics.
- Portable Chrome detection: probe mirrors chrome-launcher's binary
  resolution ($CHROME_PATH, Linux google-chrome-stable/chromium*, macOS
  .app bundles, Windows git-bash paths) instead of invoking `lighthouse
  about:blank`, which always emits INVALID_URL regardless of Chrome
  state and produced a false-positive LIGHTHOUSE_CHROME_MISSING on
  working installs.
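
A sketch of that probe order (candidate paths mirror chrome-launcher's common defaults; the list is illustrative, not the exact one the skill ships):

```typescript
import { accessSync, constants } from "node:fs";

// Probe Chrome binaries in roughly chrome-launcher's resolution order:
// $CHROME_PATH first, then common Linux/macOS/Windows install paths.
function findChrome(): string | null {
  const candidates = [
    process.env.CHROME_PATH,
    "/usr/bin/google-chrome-stable",
    "/usr/bin/google-chrome",
    "/usr/bin/chromium",
    "/usr/bin/chromium-browser",
    "/Applications/Google Chrome.app/Contents/MacOS/Google Chrome",
    "C:\\Program Files\\Google\\Chrome\\Application\\chrome.exe",
  ].filter((p): p is string => Boolean(p));
  for (const path of candidates) {
    try {
      accessSync(path, constants.X_OK); // exists and is executable
      return path;
    } catch {
      // keep probing
    }
  }
  return null; // only now report Chrome as missing
}
```

Probing the filesystem directly avoids the `lighthouse about:blank` trap: that invocation fails with INVALID_URL whether or not Chrome is installed, so it cannot distinguish a missing browser from a bad URL.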
