
aeo-audit

Run a complete AEO health audit on a deployed site — discovery files, crawler access, citation-shaped content, schema, freshness signals. Triggers — 'AEO audit'

Quick Install
npx skills add aeo-audit

Skill: AEO Health Audit

Three-tier audit of how well a site is set up for AI answer engines (ChatGPT, Claude, Perplexity, Gemini, Bing Copilot). Run weekly for Tier 1, monthly for Tier 2, quarterly for Tier 3.

Inputs

  • DOMAIN — e.g. acme.com
  • CANONICAL_URL — e.g. https://www.acme.com
  • SAMPLE_PAGES (optional) — list of 5-10 high-value page URLs to deep-audit. Default: pull from sitemap, take top 10 by sitemap priority.
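The default SAMPLE_PAGES selection can be sketched as below. This is a minimal illustration, assuming a standard sitemaps.org `<urlset>` document; the function name and the 0.5 default priority (the sitemap protocol's default) are illustrative, not part of the skill.

```python
# Sketch: derive SAMPLE_PAGES from sitemap.xml when none are supplied.
# Assumes a standard <urlset> sitemap; helper name is hypothetical.
import xml.etree.ElementTree as ET

NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def default_sample_pages(sitemap_xml: str, n: int = 10) -> list[str]:
    """Return the top-n sitemap URLs ranked by <priority> (default 0.5)."""
    root = ET.fromstring(sitemap_xml)
    entries = []
    for url in root.findall("sm:url", NS):
        loc = url.findtext("sm:loc", default="", namespaces=NS)
        prio = float(url.findtext("sm:priority", default="0.5", namespaces=NS))
        if loc:
            entries.append((prio, loc))
    entries.sort(key=lambda e: e[0], reverse=True)
    return [loc for _, loc in entries[:n]]
```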

Tier 1 — Discovery surface (must pass weekly)

For each check, use WebFetch to retrieve the URL, then verify the body content matches the assertion.

| # | Check | How to verify | Pass criteria |
|---|-------|---------------|---------------|
| 1 | /robots.txt reachable + permissive | WebFetch {URL}/robots.txt | Body contains GPTBot, ClaudeBot, PerplexityBot, Google-Extended, OAI-SearchBot. None blocked. |
| 2 | /llms.txt reachable | WebFetch {URL}/llms.txt | 200, ≥ 500 chars, brand name in first 100 chars, Last Updated line within 90 days |
| 3 | /llms-full.txt reachable (if used) | WebFetch {URL}/llms-full.txt | 200, ≥ 10 KB OR clear "site too small for full dump" comment |
| 4 | /.well-known/ai-plugin.json valid JSON | WebFetch + JSON.parse | Fields present: schema_version, name_for_human, description_for_model, contact_email |
| 5 | /sitemap.xml reachable | WebFetch + XML parse | 200, valid XML, ≥ 1 URL entry |
| 6 | /sitemap-llm.xml exists (if site > 100 pages) | WebFetch | 200 OR explicit "skipped intentionally" rationale |
| 7 | Canonical URL agrees | WebFetch {URL}/, parse the rel="canonical" link | href = CANONICAL_URL (or with trailing slash) |
| 8 | HTTPS only | curl -I http://{DOMAIN} | 301/302 to https |
| 9 | No noindex on cornerstone pages | Fetch each SAMPLE_PAGES, check meta robots | None says noindex unless intentional |
| 10 | Server-rendered content (not JS-only) | WebFetch, check whether main content appears in raw HTML | Primary content present without JS execution |
Tier 1 output: a pass/fail table. Every failing check gets at least one line in the remediation section.
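Checks 1 and 2 can be sketched as follows, assuming plain HTTP responses stand in for WebFetch results. The function names are illustrative; the bot list and thresholds come from the table above, and the 90-day freshness check on the Last Updated line is omitted for brevity.

```python
# Sketch of Tier 1 checks 1-2. Helper names are hypothetical.
import urllib.robotparser

AI_BOTS = ["GPTBot", "ClaudeBot", "PerplexityBot", "Google-Extended", "OAI-SearchBot"]

def check_robots(robots_body: str, canonical_url: str) -> dict[str, bool]:
    """Check 1: each AI crawler is allowed to fetch the site root."""
    rp = urllib.robotparser.RobotFileParser()
    rp.parse(robots_body.splitlines())
    return {bot: rp.can_fetch(bot, canonical_url + "/") for bot in AI_BOTS}

def check_llms_txt(status: int, body: str, brand: str) -> bool:
    """Check 2: 200 status, >= 500 chars, brand name in the first 100 chars.
    (Last Updated freshness is checked separately.)"""
    return status == 200 and len(body) >= 500 and brand in body[:100]
```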

Tier 2 — Page-level citation-readiness (monthly)

For each SAMPLE_PAGES URL, fetch and score:

A. Answer-first lede (40-60 words)

  • Read the first paragraph after the H1.
  • Does it directly answer the page's implied question?
  • Is the word count within 40-80 (40-60 is the target)?
  • Score: 0 (no), 1 (partial), 2 (yes — quotable)
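The word-count gate of check A can be sketched as below. This automates only the length criterion; whether the lede actually answers the implied question still needs human or LLM judgment, so the mapping here (40-60 words → 2, 40-80 → 1, else 0) is an assumption about how the length component alone would score.

```python
# Sketch of the length component of check A. Scoring mapping is assumed.
import re

def lede_word_score(lede: str) -> int:
    """0 = outside 40-80 words, 1 = within 40-80, 2 = in the 40-60 target."""
    n = len(re.findall(r"\b\w+\b", lede))
    if 40 <= n <= 60:
        return 2
    if 40 <= n <= 80:
        return 1
    return 0
```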

B. Fact anchors (specific numbers)

  • Count specific numbers/dates/proper nouns in first 300 words.
  • ≥ 3 = good (citation-shaped). 0-2 = weak (vague claims).
  • Score: 0/1/2
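One way to approximate check B: count numeric tokens in the first 300 words. Proper-noun detection is harder and omitted here, so real counts may run slightly low; the score mapping (≥ 3 → 2, 1-2 → 1, 0 → 0) is an assumed reading of the 0/1/2 scale.

```python
# Sketch of check B. Numbers/dates matched by regex; proper nouns omitted.
import re

def fact_anchor_score(text: str) -> int:
    """Count numeric anchors in the first 300 words; >= 3 scores 2."""
    first_300 = " ".join(re.findall(r"\S+", text)[:300])
    anchors = re.findall(r"\d[\d,.%]*", first_300)
    if len(anchors) >= 3:
        return 2
    return 1 if anchors else 0
```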

C. Heading structure

  • One H1, multiple H2s, H3s nested under H2s.
  • H2s phrased as questions when natural ("What is X?", "How does Y work?").
  • Score: 0 (no H1, or all H1s, or random), 1 (clean hierarchy), 2 (clean + question-shaped H2s)
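Check C can be sketched over a pre-parsed heading outline, here a list of (level, text) pairs. The question-shape heuristic (ends with "?" or starts with what/how/why/when) and the majority threshold for score 2 are assumptions; H3-nesting validation is omitted.

```python
# Sketch of check C on a heading outline; heuristics are assumptions.
def heading_score(headings: list[tuple[int, str]]) -> int:
    """0 = no single H1 or no H2s, 1 = clean hierarchy, 2 = + question H2s."""
    h1s = [t for lvl, t in headings if lvl == 1]
    h2s = [t for lvl, t in headings if lvl == 2]
    if len(h1s) != 1 or not h2s:
        return 0
    question_h2s = sum(
        1 for t in h2s
        if t.rstrip().endswith("?")
        or t.lower().startswith(("what", "how", "why", "when"))
    )
    # Score 2 when at least half the H2s are question-shaped (assumed cutoff).
    return 2 if question_h2s >= len(h2s) / 2 else 1
```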

D. FAQ / HowTo / Article schema

  • Inspect