# technical-seo-audit

## Overview
Technical SEO Audit is the skill of checking whether your site can actually be crawled, indexed, understood, and shared correctly before you spend more time writing content. For a one-person company, this is the minimum viable search hygiene layer. If canonicals are wrong, redirects are sloppy, thumbnails are missing, or metadata is inconsistent, good pages still underperform.

## When to Use This Skill
Use this skill before a launch, after a redesign, when traffic stalls despite steady publishing, when social share previews look broken, or when pages are live in search but structurally weak.

## What This Skill Does
This skill gives you a fast technical audit for the pages that matter most. It checks crawl access, indexability, canonical consistency, redirects, metadata, thumbnails, schema basics, and route hygiene. The output should tell you what blocks discovery, what weakens click-through, and what makes pages less citation-worthy in AI search.

## How to Use
Step 1: Check crawl and index basics. Confirm `robots.txt` is reachable, `sitemap.xml` exists, important pages return `200`, and there is no accidental `noindex` or canonical conflict.
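
A minimal sketch of this check in Python, assuming the `requests` package is installed; `SITE` and `PAGES` are placeholders for your own domain and key routes:

```python
# Crawl/index sanity sketch. SITE and PAGES are placeholders; swap in
# your own domain and the pages that matter most.
import requests

SITE = "https://example.com"
PAGES = ["/", "/library", "/about"]

# robots.txt and sitemap.xml must both be reachable.
for path in ("/robots.txt", "/sitemap.xml"):
    resp = requests.get(SITE + path, timeout=10)
    print(f"{path}: {resp.status_code}")

for page in PAGES:
    resp = requests.get(SITE + page, timeout=10, allow_redirects=False)
    print(f"{page}: {resp.status_code}")
    # noindex can arrive as an HTTP header or a robots meta tag; this
    # string scan is rough, and a real audit should parse the HTML.
    header = resp.headers.get("X-Robots-Tag", "").lower()
    if "noindex" in header or "noindex" in resp.text.lower():
        print(f"  BLOCKER: noindex signal detected on {page}")
```
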
Step 2: Check canonical discipline. Every important page should have one clean canonical URL. Old traffic paths should redirect to the right live page instead of falling back to the homepage.
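
The redirect half of this can be spot-checked the same way; `LEGACY_MAP` below is a hypothetical mapping from retired routes to the live pages they should land on:

```python
# Redirect discipline sketch. LEGACY_MAP is hypothetical; list your own
# retired routes and the canonical pages they should resolve to.
import requests

SITE = "https://example.com"
LEGACY_MAP = {
    "/old-page.html": "/skills/old-page",
    "/blog/2019/launch": "/blog/launch",
}

for old, expected in LEGACY_MAP.items():
    resp = requests.get(SITE + old, timeout=10)
    final = resp.url[len(SITE):] or "/"
    verdict = "OK" if final == expected else "WRONG TARGET"
    # More than one hop means a redirect chain worth flattening.
    print(f"{old} -> {final} [{verdict}] hops={len(resp.history)}")
```
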
Step 3: Check metadata coverage. Homepage, category pages, and top skill pages all need a clear `<title>`, meta description, canonical tag, Open Graph title/description/image, and Twitter card/image.
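
A coverage sketch, assuming `requests` and `beautifulsoup4` are installed; the URL is a placeholder:

```python
# Metadata coverage sketch: reports which required tags a page is missing.
import requests
from bs4 import BeautifulSoup

REQUIRED = {
    "title":          lambda s: s.title and s.title.string,
    "description":    lambda s: s.find("meta", attrs={"name": "description"}),
    "canonical":      lambda s: s.find("link", rel="canonical"),
    "og:title":       lambda s: s.find("meta", property="og:title"),
    "og:description": lambda s: s.find("meta", property="og:description"),
    "og:image":       lambda s: s.find("meta", property="og:image"),
    "twitter:card":   lambda s: s.find("meta", attrs={"name": "twitter:card"}),
    "twitter:image":  lambda s: s.find("meta", attrs={"name": "twitter:image"}),
}

def metadata_gaps(url):
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    return [name for name, probe in REQUIRED.items() if not probe(soup)]

print(metadata_gaps("https://example.com/"))
```
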
Step 4: Check thumbnail readiness. Shared pages should resolve to a large image that matches the current design system. Missing or off-brand thumbnails reduce click-through and make the site look unfinished.
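
A resolution sketch for the `og:image` tag; `MIN_BYTES` is an arbitrary proxy for "large", and a stricter pass would decode the image and check pixel dimensions (roughly 1200x630 works well for large link previews):

```python
# Thumbnail readiness sketch, assuming requests and beautifulsoup4.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

MIN_BYTES = 30_000  # arbitrary rough threshold, not a published standard

def check_thumbnail(page_url):
    soup = BeautifulSoup(requests.get(page_url, timeout=10).text, "html.parser")
    tag = soup.find("meta", property="og:image")
    if not tag or not tag.get("content"):
        return "BLOCKER: no og:image"
    # og:image may be relative; resolve it against the page URL.
    img = requests.get(urljoin(page_url, tag["content"]), timeout=10)
    content_type = img.headers.get("Content-Type", "")
    if img.status_code != 200 or not content_type.startswith("image/"):
        return f"BLOCKER: og:image does not resolve to an image ({img.status_code})"
    if len(img.content) < MIN_BYTES:
        return "WARN: og:image is suspiciously small"
    return "OK"

print(check_thumbnail("https://example.com/"))
```
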
Step 5: Check structured data sanity. Core pages should have clean schema that matches the visible page purpose. Do not add noisy markup that contradicts the content.
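
A sanity sketch that only catches broken markup; whether the declared `@type` matches the visible page purpose still needs human judgment:

```python
# Structured data sanity sketch: extract JSON-LD blocks, confirm they
# parse, and list the @type each one declares.
import json
import requests
from bs4 import BeautifulSoup

def jsonld_types(url):
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    findings = []
    for script in soup.find_all("script", type="application/ld+json"):
        try:
            data = json.loads(script.string or "")
        except json.JSONDecodeError:
            findings.append("BLOCKER: invalid JSON-LD block")
            continue
        items = data if isinstance(data, list) else [data]
        findings += [
            item.get("@type", "missing @type") if isinstance(item, dict)
            else "unexpected JSON-LD shape"
            for item in items
        ]
    return findings

print(jsonld_types("https://example.com/"))
```
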
Step 6: Check mobile-safe rendering. Important pages should not rely on desktop-only layouts, broken assets, or unreadable content blocks.
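
The cheapest automated signal here is the viewport meta tag; a sketch follows, with the caveat that layout and asset problems still need a real device, emulator, or Lighthouse pass:

```python
# Mobile rendering smoke test: presence of a device-width viewport tag.
import requests
from bs4 import BeautifulSoup

def has_mobile_viewport(url):
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    tag = soup.find("meta", attrs={"name": "viewport"})
    return bool(tag and "width=device-width" in tag.get("content", ""))

print(has_mobile_viewport("https://example.com/"))
```
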
Step 7: Prioritize fixes by impact. First fix crawl/index blockers, then canonical and redirect issues, then metadata and thumbnail gaps, then polish.
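
One way to encode that order so findings from the earlier checks sort themselves; the tier names are illustrative:

```python
# Prioritization sketch: severity tiers mirror the fix order in Step 7.
SEVERITY = {
    "crawl/index blocker": 0,
    "canonical/redirect": 1,
    "metadata/thumbnail": 2,
    "polish": 3,
}

findings = [
    ("missing og:image on /library", "metadata/thumbnail"),
    ("noindex header on /", "crawl/index blocker"),
    ("redirect chain on /old-page.html", "canonical/redirect"),
]

for issue, tier in sorted(findings, key=lambda f: SEVERITY[f[1]]):
    print(f"[{tier}] {issue}")
```
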

## Output

The output should include:

- A blocker list by severity
- Affected URLs or patterns
- The fix recommendation
- Whether the issue hurts crawl, indexing, click-through, or answer-engine citation potential
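
A sketch of one possible record shape for this output; the field names are illustrative, not a fixed schema:

```python
# Illustrative blocker records; adapt the fields to your own reporting.
blockers = [
    {
        "severity": "critical",
        "urls": ["/old-page.html"],
        "fix": "301 redirect to /skills/old-page",
        "hurts": ["crawl", "indexing"],
    },
    {
        "severity": "medium",
        "urls": ["/"],
        "fix": "add og:image matching the current design system",
        "hurts": ["click-through", "answer-engine citation potential"],
    },
]
```
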

## Freshness Reinforcement (2026-04-08)

- Revalidated all source links in this section and confirmed they still resolve to canonical documentation pages for crawl, index, rendering, and verification workflows.
- Expanded source diversity coverage across IETF + Google Search Central + Bing Webmaster + Schema.org so multi-engine crawl/index policy is supported by direct references.
- Tightened evidence language so each audit run explicitly records canonical, robots/meta, sitemap, and Search Console URL inspection checkpoints before release.
- Added a pre/post verification requirement so technical SEO fixes ship with measurable quality deltas instead of checklist-only completion.

## Authority and Citations

- Robots protocol baseline: Crawl directives should be validated against the formal Robots Exclusion Protocol syntax and behavior. Source: IETF RFC 9309, "Robots Exclusion Protocol" - https://www.rfc-editor.org/rfc/rfc9309
- Canonical discipline: Important pages should expose one canonical URL and avoid duplicate variants competing in search. Source: Google Search Central, "Consolidate duplicate URLs" - https://developers.google.com/search/docs/crawling-indexing/consolidate-duplicate-urls
- Robots/noindex controls: Indexing directives should be implemented with valid robots meta tags and/or `X-Robots-Tag` headers. Source: Google Search Central, "Robots meta tag, data-nosnippet, and X-Robots-Tag" - https://developers.google.com/search/docs/crawling-indexing/robots-meta-tag
- Sitemap policy: XML sitemaps should list canonical, index-worthy URLs instead of preview, template, or junk paths. Source: Google Search Central, "Build and submit a sitemap" - https://developers.google.com/search/docs/crawling-indexing/sitemaps/build-sitemap
- Structured data guidance: Schema should match visible page content and page purpose. Source: Google Search Central, "Intro to structured data markup in Search" - https://developers.google.com/search/docs/appearance/structured-data/intro-structured-data
- Mobile-first indexing guidance: Mobile-safe rendering matters because Google primarily uses the mobile version of content for indexing and understanding. Source: Google Search Central, "Mobile-first indexing best practices" - https://developers.google.com/search/docs/crawling-indexing/mobile/mobile-sites-mobile-first-indexing
- Indexing verification workflow: URL-level inspection should be used to confirm crawl/index state after technical fixes land. Source: Google Search Console Help, "Inspect a URL" - https://support.google.com/webmasters/answer/9012289
- Cross-engine crawl policy support: Robots behavior should also be checked against Bing-specific webmaster guidance to reduce crawler-policy drift. Source: Bing Webmaster Tools Help, "Which robots.txt directives does Bing support?" - https://www.bing.com/webmasters/help/which-robots-txt-directive-does-bing-support-5198d240
- Search quality baseline: Technical SEO implementations should remain within published webmaster quality guidelines. Source: Bing Webmaster Tools Help, "Webmaster Guidelines" - https://www.bing.com/webmasters/help/webmaster-guidelines-30fba23a
- Structured vocabulary reference: Schema implementation details should map to the official Schema.org model and examples. Source: Schema.org Documentation, "Getting Started with Schema.org" - https://schema.org/docs/gs.html
- Performance verification: Lighthouse checks can be used to validate user-facing quality signals before and after technical SEO fixes. Source: Chrome for Developers, "Lighthouse overview" - https://developer.chrome.com/docs/lighthouse/overview

## Evidence Pack Template

- Audit date (UTC): `YYYY-MM-DD`
- Property and environment: `domain + production/staging`
- URLs reviewed: `homepage + top categories + top money pages + top skill pages`
- Critical issues found: `N`
- Canonical conflicts found: `N`
- Redirect problems found: `N`
- Metadata gaps found: `N`
- Thumbnail failures found: `N`
- Structured data mismatches found: `N`
- Robots/meta directive issues found: `N`
- Mobile rendering issues found: `N`
- Core Web Vitals deltas (before -> after): `LCP`, `INP`, `CLS`
- Actions shipped: `redirect`, `canonical fix`, `meta fix`, `thumbnail fix`, `schema correction`, `robots fix`, `performance fix`, `link cleanup`
- Verification evidence: report paths, screenshot paths, or release evidence references
- Verification checkpoints: `pre-fix snapshot id`, `post-fix snapshot id`
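
A hypothetical filled pack for one run, expressed as plain data; every value below is an example:

```python
# Hypothetical evidence pack instance; all values are examples.
evidence_pack = {
    "audit_date_utc": "2026-04-08",
    "property": {"domain": "example.com", "environment": "production"},
    "urls_reviewed": ["/", "/library", "/skills/technical-seo-audit"],
    "counts": {
        "critical": 1,
        "canonical_conflicts": 0,
        "redirect_problems": 2,
        "metadata_gaps": 3,
        "thumbnail_failures": 1,
        "schema_mismatches": 0,
        "robots_meta_issues": 0,
        "mobile_rendering_issues": 0,
    },
    "core_web_vitals_delta": {"LCP": "-0.4s", "INP": "-20ms", "CLS": "0.00"},
    "actions_shipped": ["redirect", "meta fix", "thumbnail fix"],
    "verification": {
        "pre_fix_snapshot": "snap-001",
        "post_fix_snapshot": "snap-002",
        "evidence_paths": ["reports/2026-04-08/lighthouse.json"],
    },
}
```
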

## Named Examples

- A legacy `.html` route that still resolves can split crawl signals unless it redirects to the clean canonical path.
- A homepage that ships a new design without updating `og:image` often looks broken in Slack, X, and WhatsApp previews even when the page itself loads.
- A sitemap that still lists preview or template pages teaches crawlers to spend time on the wrong URLs; a detection sketch follows this list.
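
For that last example, `JUNK_PATTERNS` below is illustrative and should be tuned to your own route scheme:

```python
# Sitemap hygiene sketch: flag URLs that look like preview, template,
# draft, or staging paths. Assumes requests is installed.
import re
import requests
import xml.etree.ElementTree as ET

JUNK_PATTERNS = re.compile(r"/(preview|template|draft|staging)(/|$)")
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

xml = requests.get("https://example.com/sitemap.xml", timeout=10).text
for loc in ET.fromstring(xml).findall(".//sm:loc", NS):
    if JUNK_PATTERNS.search(loc.text or ""):
        print("Suspicious sitemap entry:", loc.text)
```
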

## What Good Looks Like

A healthy one-person-company site has:

- One canonical URL per page
- Clean redirects from legacy paths
- Consistent title and description coverage
- Working `og:image` and `twitter:image` on homepage, library, and top skill pages
- Schema that reinforces the page instead of decorating it
- Robots and noindex directives that match intent and are consistently validated

## Common Mistakes

- Do not treat technical SEO as a giant enterprise checklist.
- Do not let old URLs dump traffic onto the homepage.
- Do not publish pages that have no usable social thumbnail.
- Do not assume sitemap presence means the page is healthy.
- Do not keep creating new pages while core routes still have metadata gaps.
