TL;DR: GEO packages verifiable claims with schema and provenance inside a coherent topical map so AI systems can select, cite and summarize your pages, which compounds topical authority at the cluster level.
You want AI systems to cite your pages, not your competitors. This guide gives you a clear definition, a map-first checklist and reproducible assets you can ship today. It serves B2B SaaS, e-commerce and multi-location teams that manage clusters, internal links and schema.
If you are new to clusters, start with our topical map and read the topical authority explainer.
Use this page to:
- Learn what GEO is and why topical maps raise selection and AI-citation
- Ship a hub-and-leaf GEO package with schema and provenance
- Measure AI-citation lift at the cluster level with open telemetry
What is Generative Engine Optimization (GEO)?
Generative Engine Optimization (GEO) packages content, schema and provenance so generative engines select and cite your page as the trusted answer.
GEO works best on a site with strong topical authority and a coherent topical map. Put the canonical claim on the hub, reinforce it with well-linked leaves and expose human-visible anchors plus machine-readable schema. See the topical map methodology for how to define hubs, leaves and depth.
- When GEO matters: when ChatGPT, Perplexity, Gemini, and Google AI Overviews assemble concise answers and select a small set of sources for AI-citation
- How maps help GEO: a hub claim plus supportive leaves increases trust, improves extraction and makes your cluster easier to cite
- One action to start today: publish a provenance-backed `Claim` JSON-LD on your hub, add `FAQPage` or `HowTo` on 3–5 leaves, and link hub ↔ leaves with descriptive anchors
Why do topical maps and topical authority matter for GEO?
GEO selection improves when your hub claim sits inside a coherent topical map and supporting leaves build topical authority across the cluster.
What do engines prefer in a cluster?
A clear hub with a short, verifiable claim; well-linked leaves that cover adjacent subtopics; consistent anchors and schema across the cluster.
- Hub with a 40–80 word lead claim and a visible provenance line
- Leaves that cover adjacent subtopics and cite the hub
- Consistent anchors, schema and terminology across the cluster
How does a topical map raise selection?
Coverage, structure and signals make extraction and citation easier.
- Coverage: publish planned leaves so answers have multiple corroborating sources
- Structure: link hub ↔ leaves and leaf ↔ leaf with descriptive anchors that match your topical map
- Signals: align each page to one query intent, keep IDs stable, and expose provenance on hub and leaves to compound topical authority
Which terms should you align on?
- Hub: the canonical page for a topic
- Leaf: a subtopic page that supports the hub
- Cluster: the hub plus its leaves
- Depth: the number of clicks from the hub to a leaf
What failure modes should you avoid?
- Strong hub with missing leaves
- Leaves that do not cite or link back to the hub
- Inconsistent anchors or schema that break claim-to-evidence mapping
What should you implement this week for GEO on a map?
Ship the GEO package on one cluster: one hub and 3–5 leaves.
How do you select the cluster?
- Pick a hub with demand in your topical map
- Choose 3–5 leaves that answer adjacent intents
- Confirm each page has one primary query and a clear role in the cluster
How do you write hub and leaf leads with provenance?
- Hub: add a 40–80 word lead claim near the top, then a one-line provenance anchor
- Leaves: add 25–60 word supporting leads that reference the hub
- Visible anchor pattern:
Source: [Team], [YYYY-MM-DD]. Evidence: [#section-id]
How do you add map-aware schema?
- Hub: `Claim` JSON-LD with `citation` to `#evidence`, `about` terms that match your cluster and `isPartOf` a cluster container
- Leaves: `FAQPage` or `HowTo` aligned to the supporting lead
- Hierarchy: `BreadcrumbList` that reflects hub → leaf
- Stability: keep `@id`, `claim_id` and anchors stable
How do you wire internal links?
- Hub → every leaf with descriptive anchors that match query intent
- Every leaf → hub once near the lead and once in the footer or sidebar
- Selective leaf ↔ leaf links where users compare or continue tasks
How do you publish and request indexing?
- Update the XML sitemap, lastmod and canonical
- Keep hub and leaves in the same cluster path
- Request indexing after visible leads and schema are live
How do you start the baseline?
- Log hub and leaf URLs, intents and roles
- Record week 0 AI-citation presence for hub queries in ChatGPT, Perplexity, Gemini, Google AI Mode, and Google AI Overviews
- Capture internal link counts hub ↔ leaves to track improvements (a week 0 seeding sketch follows this list)
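A minimal week 0 seeding sketch in Python, using the `telemetry.csv` header defined later in this guide; the cluster ID, hub URL and query list are placeholders to replace with your own values.

```python
# Seed week 0 rows for the hub queries you track, one row per engine.
# Uses the telemetry.csv header from this guide; cluster_id, page_url and
# the query list are placeholders.
import csv
from datetime import date

HEADER = ["week_start", "engine", "query", "cluster_id", "page_url", "node_type",
          "variant", "answer_present", "our_domain_cited", "cited_domains",
          "ai_citation_share", "sov", "notes"]
ENGINES = ["ChatGPT", "Perplexity", "Gemini", "Google AI Mode", "Google AI Overviews"]
HUB_QUERIES = ["what is generative engine optimization"]  # your tracked hub queries

with open("telemetry.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(HEADER)
    for engine in ENGINES:
        for query in HUB_QUERIES:
            # Record the observed week 0 values in place of the zeros below.
            writer.writerow([date.today().isoformat(), engine, query, "geo",
                             "https://www.example.com/geo", "hub", "Treatment",
                             0, 0, "", 0.0, 0.0, "week 0 baseline"])
```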
How do you run a reproducible GEO experiment at the cluster level?
Run a matched test on two comparable clusters. Apply the GEO package to the Treatment cluster and keep the Control cluster unchanged. Measure AI-citation and cluster authority signals for 12 weeks.
How should you design the study?
Use a simple, controlled setup that isolates the GEO package.
- Unit: cluster
- Arms: one Treatment cluster, one Control cluster
- Window: 12 weeks with frozen non-GEO edits
- Scope: one language and country
Which clusters qualify?
Pick clusters that are comparable in demand and structure.
- Similar demand and seasonality in your topical map
- One clear hub plus 3–5 leaves per cluster
- Comparable crawl depth and internal link exposure
What goes into the Treatment cluster?
Ship the same package on hub and leaves.
- Hub: 40–80 word lead claim with a visible provenance line
- Leaves: 25–60 word supporting leads that reference the hub
- Schema: `Claim` on hub, `FAQPage` or `HowTo` on leaves, `BreadcrumbList` on all pages
- Links: hub ↔ every leaf, selected leaf ↔ leaf
How do you measure outcomes?
Track AI-citation plus cluster authority KPIs.
| Metric | Definition | Target |
| --- | --- | --- |
| AI-citation share | Share of answers that cite your hub for tracked queries in ChatGPT, Perplexity, Gemini, and Google AI Overviews | Treatment ≥ Control + 10% for 3 weeks |
| Answer presence | Weeks with hub cited in answers | ≥50% of weeks |
| Share of voice | Hub citations ÷ all cited domains for the same answers | +5 percentage points by week 8 |
| Topic coverage | Published leaves ÷ planned leaves in the cluster | ≥80% by week 8 |
| Hub link ratio | Leaf→hub links ÷ total internal links on leaves | ≥0.30 |
| Leaf cohesion | Average leaf↔leaf links per leaf | ≥1.0 |
What files should you publish?
Make the dataset reproducible and map-aware.
- `telemetry.csv`: week_start,engine,query,cluster_id,page_url,node_type,variant,answer_present,our_domain_cited,cited_domains,ai_citation_share,sov,notes
- `pages.csv`: page_url,cluster_id,node_type(hub|leaf),depth,status(published|planned),inlinks_to_hub,outlinks_to_siblings
- `queries.csv`: query,cluster_id,intent(hub|leaf),priority
How should you analyze results?
Compute weekly means by variant. Use difference-in-differences on AI-citation share across weeks 1–12. Confirm movement in share of voice, then validate CTR and assisted conversions.
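A minimal difference-in-differences sketch, assuming pandas and the `telemetry.csv` schema above, with week 0 as the pre period and weeks 1–12 as the post period.

```python
# Difference-in-differences on hub AI-citation share.
# Assumes the telemetry.csv schema above; week 0 is the baseline,
# weeks 1-12 are the test window.
import pandas as pd

df = pd.read_csv("telemetry.csv", parse_dates=["week_start"])
hub = df[df["node_type"] == "hub"].copy()
hub["week_index"] = (hub["week_start"] - hub["week_start"].min()).dt.days // 7
hub["period"] = hub["week_index"].map(lambda w: "pre" if w == 0 else "post")

means = hub.groupby(["variant", "period"])["ai_citation_share"].mean().unstack("period")
did = (means.loc["Treatment", "post"] - means.loc["Treatment", "pre"]) \
    - (means.loc["Control", "post"] - means.loc["Control", "pre"])
print(f"Difference-in-differences on AI-citation share: {did:+.3f}")
```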
Which developer artifacts should you copy for map-aware GEO?
Use hub and leaf schema that reflects the cluster. Expose hierarchy, anchors and provenance in human and machine-readable forms.
Below are templates – replace bracketed fields and example.com with your values.
Hub Claim template
{
"@context": "https://schema.org",
"@type": "Claim",
"@id": "https://www.example.com/geo#claim-definition",
"claimText": "[Your one-sentence canonical claim]",
"author": { "@type": "Organization", "name": "[Your Brand]", "url": "https://www.example.com" },
"datePublished": "[YYYY-MM-DD]",
"citation": { "@type": "CreativeWork", "url": "https://www.example.com/geo#evidence" },
"about": [
{ "@type": "DefinedTerm", "name": "[Cluster Term 1]", "inDefinedTermSet": "https://www.example.com/topical-map#[cluster-id]" },
{ "@type": "DefinedTerm", "name": "[Cluster Term 2]", "inDefinedTermSet": "https://www.example.com/topical-map#[cluster-id]" }
],
"isPartOf": { "@type": "CreativeWorkSeries", "name": "[Cluster Name]", "url": "https://www.example.com/topical-map#[cluster-id]" },
"mainEntityOfPage": { "@id": "https://www.example.com/geo" }
}
FAQPage leaf template
{
"@context": "https://schema.org",
"@type": "FAQPage",
"@id": "https://www.example.com/geo/faq#page",
"mainEntity": [{
"@type": "Question",
"@id": "https://www.example.com/geo/faq#q1",
"name": "[Question users ask about the hub topic]",
"acceptedAnswer": {
"@type": "Answer",
"text": "[25–60 word supporting answer that references the hub claim]",
"citation": { "@type": "CreativeWork", "url": "https://www.example.com/geo#evidence" },
"author": { "@type": "Organization", "name": "[Your Brand]" },
"datePublished": "[YYYY-MM-DD]"
}
}],
"about": [{ "@type": "DefinedTerm", "name": "[Leaf Term]" }],
"isPartOf": { "@type": "CreativeWorkSeries", "name": "[Cluster Name]", "url": "https://www.example.com/topical-map#[cluster-id]" },
"mainEntityOfPage": { "@id": "https://www.example.com/geo/faq" }
}
HowTo leaf template
{
"@context": "https://schema.org",
"@type": "HowTo",
"@id": "https://www.example.com/geo/how-to#page",
"name": "[Task users perform in this cluster]",
"description": "[Short description aligned to the supporting lead]",
"totalTime": "PT[minutes]M",
"step": [
{ "@type": "HowToStep", "name": "Select cluster", "text": "Pick a hub and 3–5 leaves.", "url": "https://www.example.com/geo#select-cluster" },
{ "@type": "HowToStep", "name": "Write leads", "text": "Add a 40–80 word hub lead and shorter leaf leads.", "url": "https://www.example.com/geo#leads" },
{ "@type": "HowToStep", "name": "Add schema", "text": "Use Claim, FAQPage or HowTo and BreadcrumbList.", "url": "https://www.example.com/geo#schema" }
],
"about": [{ "@type": "DefinedTerm", "name": "[Cluster Term]" }],
"isPartOf": { "@type": "CreativeWorkSeries", "name": "[Cluster Name]", "url": "https://www.example.com/topical-map#[cluster-id]" },
"author": { "@type": "Organization", "name": "[Your Brand]" },
"mainEntityOfPage": { "@id": "https://www.example.com/geo/how-to" },
"citation": { "@type": "CreativeWork", "url": "https://www.example.com/geo#evidence" },
"datePublished": "[YYYY-MM-DD]"
}
BreadcrumbList template
{
"@context": "https://schema.org",
"@type": "BreadcrumbList",
"itemListElement": [
{ "@type": "ListItem", "position": 1, "name": "Home", "item": "https://www.example.com" },
{ "@type": "ListItem", "position": 2, "name": "[Hub Topic]", "item": "https://www.example.com/geo" },
{ "@type": "ListItem", "position": 3, "name": "[Leaf Name]", "item": "https://www.example.com/geo/[leaf-slug]" }
]
}
ItemList on hub template
{
"@context": "https://schema.org",
"@type": "ItemList",
"@id": "https://www.example.com/geo#leaves",
"itemListElement": [
{ "@type": "ListItem", "position": 1, "url": "https://www.example.com/geo/faq" },
{ "@type": "ListItem", "position": 2, "url": "https://www.example.com/geo/how-to" }
]
}
How should you measure GEO and topical authority on a cluster?
Track AI-citation lift on the hub and authority signals across the cluster with a small, map-aware metric set.
What metrics should you track?
Measure hub selection and cluster structure with consistent field names.
| Metric | Definition | Calculation | Source | Threshold | Cadence |
| --- | --- | --- | --- | --- | --- |
| AI-citation share | Share of answers that cite your hub for tracked queries | Mean of ai_citation_share for hub rows | Weekly engine checks | Treatment ≥ Control + 10% for 3 weeks | Weekly |
| Answer presence | Weeks the hub appears in answer citations | Mean of answer_present (0 or 1) for hub rows | Weekly engine checks | ≥50% of weeks | Weekly |
| Share of voice | Hub citations ÷ all cited domains | Mean of sov for hub rows | Weekly engine checks | +5 percentage points by week 8 | Weekly |
| Topic coverage | Published leaves ÷ planned leaves in cluster | published_leaves ÷ planned_leaves | pages.csv | ≥80% by week 8 | Monthly |
| Hub link ratio | Leaf→hub links ÷ all internal leaf links | Σ(inlinks_to_hub) ÷ [Σ(inlinks_to_hub)+Σ(outlinks_to_siblings)] | pages.csv | ≥0.30 | Monthly |
| Leaf cohesion | Avg leaf↔leaf links per leaf | AVERAGE(outlinks_to_siblings) | pages.csv | ≥1.0 | Monthly |
| Time to first AI-citation | Weeks until the hub first appears as a cited source | Min week where answer_present=1 | Telemetry | ≤6 weeks | Weekly |
How do you structure the datasets?
Keep tidy, joinable files with a shared `cluster_id`.
- `telemetry.csv`: week_start,engine,query,cluster_id,page_url,node_type,variant,answer_present,our_domain_cited,cited_domains,ai_citation_share,sov,notes
- `pages.csv`: page_url,cluster_id,node_type(hub|leaf),depth,status(published|planned),inlinks_to_hub,outlinks_to_siblings
- `queries.csv`: query,cluster_id,intent(hub|leaf),priority
How do you compute the weekly rollups?
Filter to the hub, then group by `week_start` and `variant`.
AI-citation share (hub only)
=AVERAGEIFS(telemetry!K:K, telemetry!A:A, $A2, telemetry!G:G, $B2, telemetry!F:F, "hub")
Answer presence (hub only)
=AVERAGEIFS(telemetry!H:H, telemetry!A:A, $A2, telemetry!G:G, $B2, telemetry!F:F, "hub")
Share of voice (hub only)
=AVERAGEIFS(telemetry!L:L, telemetry!A:A, $A2, telemetry!G:G, $B2, telemetry!F:F, "hub")
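The same hub-only rollups, sketched in pandas for teams that prefer a script to a spreadsheet; it assumes the `telemetry.csv` schema above.

```python
# Hub-only weekly rollups by variant, assuming the telemetry.csv schema above.
import pandas as pd

telemetry = pd.read_csv("telemetry.csv")
hub = telemetry[telemetry["node_type"] == "hub"]
rollup = (hub.groupby(["week_start", "variant"])
             [["ai_citation_share", "answer_present", "sov"]]
             .mean()
             .reset_index())
print(rollup)
```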
How do you track cluster authority KPIs?
Compute coverage and link structure from pages.csv.
Topic coverage
=COUNTIFS(pages!C:C,"leaf", pages!E:E,"published") / COUNTIFS(pages!C:C,"leaf")
Hub link ratio
=SUMIFS(pages!F:F, pages!C:C,"leaf") / (SUMIFS(pages!F:F, pages!C:C,"leaf") + SUMIFS(pages!G:G, pages!C:C,"leaf"))
Leaf cohesion
=AVERAGEIFS(pages!G:G, pages!C:C,"leaf")
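An equivalent sketch in pandas, assuming the `pages.csv` schema above.

```python
# Cluster authority KPIs from pages.csv, assuming the schema above.
import pandas as pd

pages = pd.read_csv("pages.csv")
leaves = pages[pages["node_type"] == "leaf"]

topic_coverage = (leaves["status"] == "published").mean()
hub_link_ratio = leaves["inlinks_to_hub"].sum() / (
    leaves["inlinks_to_hub"].sum() + leaves["outlinks_to_siblings"].sum())
leaf_cohesion = leaves["outlinks_to_siblings"].mean()

print(f"Topic coverage: {topic_coverage:.0%}")
print(f"Hub link ratio: {hub_link_ratio:.2f}")
print(f"Leaf cohesion: {leaf_cohesion:.2f}")
```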
What alert rules should you set?
Use simple, testable triggers tied to actions; a sketch of these checks follows the list.
- Success: hub AI-citation share ≥ Control + 10% for 3 consecutive weeks
- Coverage gap: topic coverage < 70% by week 6
- Structure gap: hub link ratio < 0.20 or leaf cohesion < 0.5
- Regression: week-over-week drop ≥20% in hub AI-citation share
- Outcome check: CTR or assisted conversions down ≥10% vs baseline after 4 weeks
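A hedged sketch of the alert checks, assuming the weekly rollup and cluster KPI values computed in the earlier snippets; it reads the +10% success threshold as 10 percentage points, so adjust if you intend a relative lift.

```python
# Evaluate the alert rules from the weekly rollup and cluster KPIs.
# Assumes the rollup dataframe and KPI values from the snippets above.
def check_alerts(rollup, topic_coverage, hub_link_ratio, leaf_cohesion, week):
    alerts = []
    t = rollup[rollup["variant"] == "Treatment"].sort_values("week_start")
    c = rollup[rollup["variant"] == "Control"].sort_values("week_start")

    # Success: Treatment >= Control + 10 points for the last 3 weeks
    t_last = t["ai_citation_share"].tail(3).to_numpy()
    c_last = c["ai_citation_share"].tail(3).to_numpy()
    if len(t_last) == 3 and len(c_last) == 3 and (t_last - c_last >= 0.10).all():
        alerts.append("Success: hub AI-citation share >= Control + 10 points for 3 weeks")

    if week >= 6 and topic_coverage < 0.70:
        alerts.append("Coverage gap: topic coverage below 70% by week 6")
    if hub_link_ratio < 0.20 or leaf_cohesion < 0.5:
        alerts.append("Structure gap: hub link ratio or leaf cohesion below target")

    # Regression: week-over-week drop >= 20% in Treatment hub AI-citation share
    recent = t["ai_citation_share"].tail(2).to_numpy()
    if len(recent) == 2 and recent[0] > 0 and (recent[1] - recent[0]) / recent[0] <= -0.20:
        alerts.append("Regression: hub AI-citation share dropped >= 20% week over week")
    return alerts
```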
What cadence should you follow?
Run a steady loop that favors small, fast iterations.
- Weekly: scrape answers, update telemetry, review hub metrics, log notes
- Biweekly: harden anchors on weak leaves, add missing leaf links, adjust leads
- Monthly: update coverage, link ratios and cohesion; decide to extend or end the test
What should you do next?
Publish one canonical hub, fill 3–5 leaves, ship schema and provenance, then run the 12-week cluster test and monitor AI-citation and authority signals.
What are the immediate actions?
Commit to a short list that moves the cluster.
- Finalize the hub’s 40–80 word claim with a visible provenance line
- Add `Claim` JSON-LD on the hub, `FAQPage` or `HowTo` on leaves, plus `BreadcrumbList`
- Wire hub ↔ leaves and selective leaf ↔ leaf links with descriptive anchors
- Publish datasets and start weekly telemetry
How do you move from pilot to scale?
Replicate the package to similar clusters while keeping IDs stable.
- Roll out to two more clusters with comparable demand
- Keep IDs, anchors and cluster slugs stable across deploys
- Promote the open dataset and methods to invite third-party replication
What does success look like?
Define success so teams can declare an outcome.
- The hub earns consistent AI-citation presence
- Topic coverage reaches or exceeds 80 percent
- Hub link ratio and leaf cohesion meet targets
- CTR and assisted conversions trend up as the cluster matures
What CSV starters should you use for telemetry and maps?
Use tidy, joinable CSVs for telemetry, pages and queries so your team can compute AI-citation and cluster KPIs without friction.
Headers to copy
- `telemetry.csv`: week_start,engine,query,cluster_id,page_url,node_type,variant,answer_present,our_domain_cited,cited_domains,ai_citation_share,sov,notes
- `pages.csv`: page_url,cluster_id,node_type,depth,status,inlinks_to_hub,outlinks_to_siblings
- `queries.csv`: query,cluster_id,intent,priority
Data dictionary (concise)
- `week_start`: ISO date
- `engine`: ChatGPT, Perplexity, Gemini, Google AI Mode, Google AI Overviews
- `query`: exact query text
- `cluster_id`: stable cluster key
- `page_url`: canonical URL
- `node_type`: hub or leaf
- `variant`: Treatment or Control
- `answer_present`: 0 or 1
- `our_domain_cited`: 0 or 1
- `cited_domains`: semicolon separated
- `ai_citation_share`: 0–1
- `sov`: share of voice, 0–1
- `notes`: free text
- `depth`: clicks from hub
- `status`: published or planned
- `inlinks_to_hub`: count from leaf to hub
- `outlinks_to_siblings`: count of leaf↔leaf links
- `intent`: hub or leaf
- `priority`: 1 is highest
Download the starter files
- telemetry_template.csv
- pages_template.csv
- queries_template.csv
We’re using example.com in these templates. Replace it with your domain when you roll out.
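If you want to confirm the three starter files join cleanly, a small pandas sketch like the following can help; the `_template.csv` filenames refer to the downloads above.

```python
# Join the starter files on cluster_id (plus page_url and query) into one frame.
# Assumes the headers above; node_type is dropped from pages.csv before the
# merge because telemetry.csv already carries it.
import pandas as pd

telemetry = pd.read_csv("telemetry_template.csv")
pages = pd.read_csv("pages_template.csv")
queries = pd.read_csv("queries_template.csv")

frame = (telemetry
         .merge(pages.drop(columns=["node_type"]), on=["cluster_id", "page_url"], how="left")
         .merge(queries, on=["cluster_id", "query"], how="left"))
print(frame.head())
```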
What is the cluster QA checklist before you publish?
Run this quick, map-first QA so engines can extract, cite and verify your cluster.
Hub checks
- Lead claim is 40–80 words near the top
- Visible provenance line with author, date and `#evidence` anchor
- `Claim` JSON-LD present with stable `@id` and `citation` to `#evidence`
- Breadcrumb shows hub position in the cluster
- ItemList of leaves exists or is planned
Leaf checks
- Supporting lead is 25–60 words and references the hub
- Leaf links to hub near the lead and once more on the page
- Leaf links to at least one sibling where relevant
- `FAQPage` or `HowTo` JSON-LD matches visible copy
- `about` terms align to cluster vocabulary
Anchor and ID checks
- `#evidence` resolves to the exact claim text
- `@id`, `claim_id`, `cluster_id` are stable across deploys
- No duplicate anchors in hub or leaves
Crawl and index checks
- Canonicals and sitemaps updated with lastmod
- Robots allow hub and leaves
- Internal links use descriptive anchors that match intent
Telemetry readiness
- `telemetry.csv` seeded with week 0 rows
- `pages.csv` lists hub and leaves with correct node types
- `queries.csv` lists hub and leaf intents with priorities
Provenance and dataset
- `/provenance.json` lists the hub claim with a `checksum_sha256` of the telemetry dataset
- Dataset location is stable and publicly accessible
- Dates in visible copy match JSON-LD and provenance
When this QA passes, publish the cluster, request indexing and start the 12-week measurement loop.
How do you set up RAG so it cites your hub and leaves?
Use narrow retrieval, boost cluster terms and anchors, and enforce short answers with citations.
What retrieval steps should you use?
Normalize intent, restrict scope to the cluster, and boost anchors.
- Normalize the query to intent
- Filter search to your domain and the cluster path
- Boost passages that contain `claim_id`, `#evidence`, or cluster `DefinedTerm` names
- Return top passages with their URLs and fragment IDs (see the retrieval sketch after this list)
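A minimal retrieval sketch in Python. It assumes passages are plain dicts with `url`, `anchor` and `text` fields and uses a simple keyword score in place of your ranker or embeddings; the boost tokens and cluster terms are illustrative.

```python
# A toy retriever: restrict to the cluster path, score passages by query terms,
# and boost those that carry the claim_id, #evidence anchor or cluster terms.
CLUSTER_PATH = "https://www.example.com/geo"
BOOST_TOKENS = ["claim-definition", "#evidence"]                   # claim_id and anchor
CLUSTER_TERMS = ["generative engine optimization", "topical map"]  # DefinedTerm names

def retrieve(query, passages, k=3):
    query_terms = query.lower().split()
    scored = []
    for p in passages:
        if not p["url"].startswith(CLUSTER_PATH):
            continue  # filter search to the domain and the cluster path
        text = p["text"].lower()
        score = sum(text.count(t) for t in query_terms)
        score += 2 * sum(1 for t in BOOST_TOKENS if t in text or t in p.get("anchor", ""))
        score += 2 * sum(1 for t in CLUSTER_TERMS if t in text)
        if score > 0:
            scored.append((score, p))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    # return top passages with their URLs and fragment IDs
    return [{"url": p["url"] + p.get("anchor", ""), "text": p["text"]}
            for _, p in scored[:k]]
```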
What system and user prompts work?
Constrain answers to provided context and require anchor citations.
System prompt
Answer from the provided context only. Cite sources as [url#anchor]. Keep answers to 80 words unless asked for more. If context is insufficient, say what is missing.
User prompt template
Question: {{user_query}}
Context:
{{top_k_passages_with_urls_and_anchors}}
Task: Give a short, direct answer with 1–3 citations to the most relevant anchors.
How do you check output quality?
Validate answer shape and citations before you show results.
Expected answer shape
{
"answer_text": "[<= 80 words answering the question]",
"citations": [
{ "url": "https://www.example.com/geo#evidence", "claim_id": "claim-definition" }
]
}
Acceptance checks
- Answer length ≤ 80 words
- At least one citation to the hub `#evidence` anchor
- Optional second citation to a leaf that supports the hub
- No citations to off-domain pages unless the task requires it (see the check sketch below)
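A small validation sketch, assuming the answer shape above; the domain and hub evidence URL are placeholders for your own values.

```python
# Check the answer shape before showing results.
OUR_DOMAIN = "https://www.example.com"
HUB_EVIDENCE = "https://www.example.com/geo#evidence"

def accept(result: dict) -> list[str]:
    problems = []
    if len(result.get("answer_text", "").split()) > 80:
        problems.append("answer longer than 80 words")
    citations = result.get("citations", [])
    if not any(c.get("url") == HUB_EVIDENCE for c in citations):
        problems.append("missing citation to the hub #evidence anchor")
    if any(not c.get("url", "").startswith(OUR_DOMAIN) for c in citations):
        problems.append("citation to an off-domain page")
    return problems  # an empty list means the answer passes
```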
Where should you place schema and provenance in common CMSs?
Inject one JSON-LD block per type, expose a stable `#evidence` anchor, and serve a small provenance index at the site root.
How do you add this in WordPress?
Use theme head or a code-injection plugin to add JSON-LD and anchors.
- Insert `<script type="application/ld+json">…</script>` in the theme head
- Place the visible claim and `#evidence` anchor near the top of the hub
- Keep one JSON-LD block per type on each page
Claim snippet (template)
<script type="application/ld+json">
{
"@context":"https://schema.org",
"@type":"Claim",
"@id":"https://www.example.com/geo#claim-definition",
"claimText":"[Your one-sentence canonical claim]",
"author":{"@type":"Organization","name":"[Your Brand]"},
"datePublished":"[YYYY-MM-DD]",
"citation":{"@type":"CreativeWork","url":"https://www.example.com/geo#evidence"},
"mainEntityOfPage":{"@id":"https://www.example.com/geo"}
}
</script>
<p id="evidence">[Visible claim text that matches the JSON-LD]</p>
How do you add this in Shopify?
Edit `theme.liquid` or section templates for products and articles.
- Paste JSON-LD in the `<head>` of `theme.liquid` or in a section for that template
- Add the `#evidence` anchor inside the content block
- Avoid duplicating the same JSON-LD type on the same URL
How do you add this in Webflow or Wix?
Use the page-level custom code area.
- Paste JSON-LD before `</body>` on the hub and each leaf
- Add `#evidence` to the visible claim paragraph
- Validate with a structured data tester
How do you add this in headless setups?
Render JSON-LD at build time and serve a provenance file.
- Next.js or Gatsby: render `<script type="application/ld+json">` in `<Head>` per route
- Serve `/provenance.json` at the site root with stable claim records
- Keep `@id`, `claim_id` and anchors stable across deploys
Provenance index (template)
{
"publisher": "[Your Brand]",
"updated": "[YYYY-MM-DD]",
"claims": [
{
"claim_id": "claim-definition",
"page_url": "https://www.example.com/geo",
"anchor": "#evidence",
"claim": "[Your one-sentence canonical claim]",
"author": "[Your Team]",
"date": "[YYYY-MM-DD]",
"dataset_url": "https://www.example.com/dataset/telemetry.csv",
"checksum_sha256": "[sha256-of-telemetry.csv]"
}
]
}
What validation steps should you run?
Check anchors, IDs and crawl signals before publishing.
- Click the `#evidence` anchor and confirm it resolves to the claim
- Validate JSON-LD and ensure no duplicate types per page
- Update the sitemap and `lastmod`, then request indexing (see the check sketch after this list)
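A pre-publish check sketched in Python, assuming `requests` is installed and that the JSON-LD scripts use the exact `type="application/ld+json"` attribute; adapt the parsing to your markup.

```python
# Fetch the hub and confirm the #evidence anchor and a single Claim JSON-LD
# block are present in the rendered HTML.
import json
import re
import requests

def check_hub(url: str) -> list[str]:
    problems = []
    html = requests.get(url, timeout=10).text
    if 'id="evidence"' not in html:
        problems.append("#evidence anchor missing from rendered HTML")
    blocks = re.findall(r'<script type="application/ld\+json">(.*?)</script>', html, re.S)
    claim_count = 0
    for block in blocks:
        try:
            if json.loads(block).get("@type") == "Claim":
                claim_count += 1
        except json.JSONDecodeError:
            problems.append("invalid JSON-LD block")
    if claim_count != 1:
        problems.append(f"expected exactly one Claim block, found {claim_count}")
    return problems

print(check_hub("https://www.example.com/geo"))
```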
How do you troubleshoot GEO when a cluster fails to earn citations?
Work from evidence to packaging to structure. Fix the smallest thing that blocks selection.
What symptoms signal selection problems?
A missing hub presence, unstable anchors or weak structure.
- Hub absent from answer citations for tracked queries
- `#evidence` anchor not resolving to the visible claim
- Conflicting canonicals or blocked pages in robots
- Duplicate JSON-LD types on the same URL
- Leaves published without links back to the hub
What fixes usually work first?
Tighten the claim, anchors and schema before bigger changes.
- Place a 40–80 word claim at the top of the hub
- Add a visible provenance line with author, date and #evidence
- Map `Claim.citation.url` to the exact `#evidence` anchor
- Keep one JSON-LD block per type per page
- Add descriptive links hub ↔ leaves and selective leaf ↔ leaf
- Update sitemap and lastmod, then request indexing
What diagnostics should you run?
Confirm crawl, render and extraction paths.
- Fetch as mobile and confirm the final HTML contains JSON-LD and `#evidence`
- Validate structured data and remove duplicates
- Verify canonical, hreflang and robots rules
- Compare clusters for crawl depth and internal link exposure
- Record one week of baseline telemetry before copy changes
What should you avoid?
Large rewrites or stacked interventions during the test.
- Moving URLs or anchors during the 12-week window
- Adding or removing multiple JSON-LD types at once
- Running overlapping experiments on the same cluster
How do you govern IDs, versioning and reproducibility?
Use stable IDs, explicit versions and a public change log so others can audit and replicate results.
What needs stable IDs?
Claims, pages and clusters need stable identifiers.
- `claim_id` for each visible claim
- `@id` in JSON-LD for hub and leaves
- `cluster_id` shared across telemetry and pages
How do you version datasets and claims?
Create immutable snapshots and update checksums.
- Name datasets with dates, for example `telemetry_2025-09-01.csv`
- Maintain a moving `telemetry.csv` that points to the latest snapshot
- Recompute `checksum_sha256` in `/provenance.json` when data changes (see the sketch after the template below)
- Add a `version` field to claims when the wording changes
Provenance entry with version (template)
{
"claim_id": "claim-definition",
"version": "v1.1",
"page_url": "https://www.example.com/geo",
"anchor": "#evidence",
"author": "[Your Team]",
"date": "2025-09-01",
"dataset_url": "https://www.example.com/dataset/telemetry_2025-09-01.csv",
"checksum_sha256": "[sha256-of-telemetry_2025-09-01.csv]"
}
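A sketch for recomputing the checksum and updating the entry, assuming a local `provenance.json` shaped like the provenance index template earlier on this page.

```python
# Recompute the snapshot checksum and write it back to the provenance index.
import hashlib
import json

def sha256_of(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

with open("provenance.json", encoding="utf-8") as f:
    provenance = json.load(f)

for claim in provenance["claims"]:
    if claim["claim_id"] == "claim-definition":
        claim["dataset_url"] = "https://www.example.com/dataset/telemetry_2025-09-01.csv"
        claim["checksum_sha256"] = sha256_of("telemetry_2025-09-01.csv")

with open("provenance.json", "w", encoding="utf-8") as f:
    json.dump(provenance, f, indent=2)
```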
What change log should you keep?
Log every change that can affect selection or measurement.
- Date, field, page, reason, expected effect
- Lead claim edits and schema updates
- New leaves published or links added
- Experiment pauses or protocol deviations
methods.md snippet
## Change log
2025-09-01: Hub lead claim tightened from 92 to 68 words. Updated Claim.citation to #evidence. Expect improved extraction.
2025-09-08: Added leaf-to-leaf links between FAQ and HowTo. Expect higher cohesion.
2025-09-15: Published leaf “geo vs seo”. No other changes.
How do you retire or merge clusters?
Minimize disruption and keep provenance intact.
- Freeze edits, export final telemetry and publish a closing note
- 301 redirect deprecated leaves to the hub or the new cluster hub
- Keep `claim_id` stable if the claim persists; bump `version` if the wording changes
- Retain old datasets and checksums for auditability
Which FAQs do teams ask about GEO and topical maps?
Is GEO the same as AEO?
No. They overlap but differ in focus.
- Scope: AEO covers snippets, voice and AI chat. GEO targets LLM-based generative engines.
- Methods: AEO favors concise Q&A and structured data. GEO adds provenance-backed claims, stable IDs and copyable schema.
- Measurement: AEO tracks snippet presence and answer selection. GEO tracks AI-citation share, share of voice and time to first citation.
Use AEO if your program includes snippets, voice and chat.
Use GEO if your priority is selection and citation inside generative engines.
Does GEO replace SEO?
No. SEO earns rankings and clicks. GEO packages claims, schema and provenance so engines cite you.
How many leaves should a cluster have?
Start with 3–5 leaves that cover adjacent intents to the hub.
How fast can you see AI-citation lift?
Many teams see movement in 3–6 weeks. Use the 12-week protocol to confirm.
Do all pages need schema?
Put a `Claim` on the hub. Use `FAQPage` or `HowTo` on leaves. Keep IDs and anchors stable.
Can GEO work without a topical map?
Yes, but clusters improve selection. Hubs and leaves create corroboration and cleaner extraction.
What glossary should teams share for GEO and topical maps?
Align on these short definitions to reduce implementation drift.
- Generative Engine Optimization (GEO): Packaging content, schema and provenance so AI systems select and cite your page.
- AI-citation: A generative answer credits your domain as a source.
- Hub: The canonical page that carries the cluster’s lead claim.
- Leaf: A subtopic page that supports the hub.
- Cluster: The hub plus its leaves that cover one topic.
- Topic coverage: Published leaves ÷ planned leaves in the cluster.
- Hub link ratio: Leaf→hub links ÷ all internal links on leaves.
- Leaf cohesion: Average leaf↔leaf links per leaf.
- Claim: The one-sentence, canonical statement you want engines to select.
- Evidence anchor: The on-page fragment ID that resolves to the claim text.
- Provenance: Verifiable author, date and evidence that back a claim.
- `claim_id`: Stable identifier for a claim used in schema and telemetry.
- `cluster_id`: Stable key that joins telemetry, pages and queries.
- `DefinedTerm`: Schema term that tags pages with cluster vocabulary.
- `BreadcrumbList`: Schema that exposes hub → leaf hierarchy.
- Telemetry: Weekly rows logging engine, query, page, citations and notes.
- Share of voice: Your citations ÷ all cited domains for the same answers.
- Treatment cluster: The cluster that receives the GEO package.
- Control cluster: The comparable cluster left unchanged during the test.