How the GEO Score is calculated
The exact formula we use to turn raw mentions into a single 0–100 visibility number, with worked examples.
GEO Score is a single 0–100 number that answers two questions at once: how often does AI surface your brand for the questions you care about, and how strongly does it frame you when it does? Mention Rate alone would tell you only the first half — you can be mentioned 80% of the time and still be the third option in a list. The score combines visibility with quality.
The formula
For each engine we compute an effective rate that blends mention rate with mention quality:
```
mentionRate   = mentioned / total
avgQuality    = qualitySum / mentioned        (0..1, see scale below)
effectiveRate = mentionRate × (BASELINE + (1 − BASELINE) × avgQuality)
```
BASELINE is fixed at 0.4. The intuition: even being mentioned with zero qualitative endorsement is worth 40% of full credit — being noticed at all matters. The remaining 60% is a quality lift driven by how the AI framed you (positive recommendation, top-list position, dedicated section).
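As a minimal sketch of the per-engine step (the type and function names here are ours, not the product's actual API):

```ts
// Illustrative sketch, not the production implementation.
const BASELINE = 0.4; // fixed floor: a bare mention earns 40% of full credit

interface EngineCounts {
  total: number;      // scan_results returned by this engine for the batch
  mentioned: number;  // how many of those mentioned the brand
  qualitySum: number; // sum of per-mention quality values (each 0..1)
}

// Effective rate for one engine: mention rate scaled by the quality lift.
function effectiveRate({ total, mentioned, qualitySum }: EngineCounts): number {
  if (total === 0 || mentioned === 0) return 0; // no mentions, no credit
  const mentionRate = mentioned / total;
  const avgQuality = qualitySum / mentioned;
  return mentionRate * (BASELINE + (1 - BASELINE) * avgQuality);
}
```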
We then take a weighted average across engines that returned at least one result for the query batch:
```
GEO Score = round( 100 × Σ(effectiveRate × engineWeight) / Σ engineWeight )
```
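Continuing the sketch above, the cross-engine combination might look like this; note how engines with no results never enter either sum:

```ts
// Weighted average of effective rates across engines, scaled to 0–100.
// Engines with no results are skipped entirely, so they leave the
// denominator rather than dragging the score down.
function geoScore(
  engines: Map<string, EngineCounts>,
  weights: Map<string, number>,
): number {
  let weightedSum = 0;
  let weightTotal = 0;
  for (const [engine, counts] of engines) {
    if (counts.total === 0) continue; // engine returned nothing: drop out
    const w = weights.get(engine) ?? 0;
    weightedSum += effectiveRate(counts) * w;
    weightTotal += w;
  }
  return weightTotal === 0 ? 0 : Math.round((weightedSum / weightTotal) * 100);
}
```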
Engine weights
| Engine | Weight | Why this weight |
|---|---|---|
| ChatGPT | 0.45 | Largest AI-search audience |
| Google AI Mode | 0.30 | 1.5B reach / month |
| Perplexity | 0.25 | Citation-rich answers |
Weights are calibrated to Q2 2026 data on AI-search reach and share of buyer-discovery traffic. Engines that don't return any results for the query (e.g. Google AI Mode for a query Google doesn't render) drop out of the denominator — they aren't penalised, they're just absent.
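To make the drop-out rule concrete, here is the sketch from above run against a batch where Google AI Mode returned nothing (engine keys and numbers are illustrative):

```ts
const weights = new Map<string, number>([
  ["chatgpt", 0.45],
  ["google_ai_mode", 0.30],
  ["perplexity", 0.25],
]);

const batch = new Map<string, EngineCounts>([
  ["chatgpt", { total: 4, mentioned: 3, qualitySum: 2.1 }],
  ["google_ai_mode", { total: 0, mentioned: 0, qualitySum: 0 }], // no results
  ["perplexity", { total: 4, mentioned: 2, qualitySum: 0.8 }],
]);

// Only ChatGPT (0.45) and Perplexity (0.25) enter the average, so the
// denominator is 0.70. Google AI Mode is absent, not penalised.
console.log(geoScore(batch, weights)); // → 51
```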
For the longer story on why we landed on these numbers, see Methodology → engine weights.
The mention quality scale
Per `scan_result` we classify the mention into one of four buckets and assign a quality value normalised to 0–1:

| Bucket | Quality | What it means |
|---|---|---|
| `not_mentioned` | 0.00 | Brand not present in the answer at all. |
| `mentioned` | 0.40 | Named once, neutrally — included in a list, mentioned in passing. |
| `recommended` | 0.70 | Framed positively or singled out as a candidate option. |
| `top_recommended` | 0.90 | Top-of-list, explicit "best for X", dedicated paragraph. |
We never assign 1.0 — even the strongest mention has room to grow.
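In code, the scale is just a lookup (bucket names from the table above; the constant name is ours):

```ts
type MentionBucket =
  | "not_mentioned"
  | "mentioned"
  | "recommended"
  | "top_recommended";

// Quality per bucket, normalised to 0..1. The ceiling is deliberately
// 0.90: even the strongest mention has room to grow.
const MENTION_QUALITY: Record<MentionBucket, number> = {
  not_mentioned: 0.0,
  mentioned: 0.4,
  recommended: 0.7,
  top_recommended: 0.9,
};
```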
A worked example
Imagine you're a deployment platform competing with Vercel and Netlify (this is the running test domain in these docs). You track 4 buyer questions on the Pro plan, so one weekly deep scan produces 3 engines × 4 questions = 12 `scan_results`.
- ChatGPT mentioned you in 3 / 4 questions, average quality 0.70.
- Perplexity mentioned you in 2 / 4 questions, average quality 0.40.
- Google AI Mode mentioned you in 4 / 4 questions, average quality 0.40.
```
ChatGPT         mentionRate = 0.75   effective = 0.75 × (0.4 + 0.6 × 0.70) = 0.615
Perplexity      mentionRate = 0.50   effective = 0.50 × (0.4 + 0.6 × 0.40) = 0.320
Google AI Mode  mentionRate = 1.00   effective = 1.00 × (0.4 + 0.6 × 0.40) = 0.640

Weighted    0.615 × 0.45 + 0.320 × 0.25 + 0.640 × 0.30 = 0.54875
GEO Score   round(0.54875 × 100) = round(54.875) = 55
```
So a 55 — you're surfacing reliably on Google AI Mode, doing well on ChatGPT but with mixed framing, and lagging on Perplexity. The score moves when any of those three pieces does.
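To sanity-check the arithmetic with the sketches above (each `qualitySum` is reverse-engineered from that engine's average quality):

```ts
const example = new Map<string, EngineCounts>([
  ["chatgpt", { total: 4, mentioned: 3, qualitySum: 2.1 }],        // avg 0.70
  ["perplexity", { total: 4, mentioned: 2, qualitySum: 0.8 }],     // avg 0.40
  ["google_ai_mode", { total: 4, mentioned: 4, qualitySum: 1.6 }], // avg 0.40
]);

console.log(geoScore(example, weights)); // round(54.875) → 55
```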
Why the math is shaped this way
- The 0.4 baseline rewards presence: being surfaced at all is the hardest step, so a bare mention still earns 40% of full credit.
- The remaining 60% is a quality lift, so improving how AI frames you moves the score even when your mention rate is flat.
- Engines that return no results drop out of the denominator entirely, so the score reflects only the engines that actually answered.
Common questions
Does the score punish me for niche queries that get few results? No — see the third bullet above. If two of three engines didn't render an answer at all, the score uses just the engine that did.
Why isn't the score higher when I'm mentioned in every engine? Mention quality matters. Being listed as "option 4 of 7" in a roundup counts, but won't move the needle the way "the best for X" does. Use the Citation search to read the actual sentences AI used about you.
Can I see the per-engine breakdown? Yes — every Overview snapshot exposes per-engine `rate`, `total`, `mentioned`, and the engine's contribution weight to the combined score.
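The per-engine entry looks roughly like this (an illustrative shape, not the exact API schema):

```ts
interface EngineBreakdown {
  engine: string;    // e.g. "chatgpt"
  total: number;     // scan_results for this engine in the snapshot
  mentioned: number; // results where the brand appeared
  rate: number;      // mentioned / total
  weight: number;    // contribution weight to the combined GEO Score
}
```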