The AI Index ranks the brands winning AI engine recommendations in every category. Click any sector to see the leaders, spot the gaps, and discover where your brand stands.
Loading distribution data…
Run a scan to see how ChatGPT, Claude, Gemini, and Perplexity describe your brand — and how that compares with your category benchmark.
Every score in the AI Brand Index is generated using the same audit framework applied to client scans — structured prompts, multi-engine testing, and four-dimension scoring.
We generate buyer-intent prompts for each company — the kinds of questions real prospects ask AI tools when researching vendors.
Each prompt is tested across ChatGPT, Claude, Gemini, and Perplexity so we can compare how different systems represent the same brand.
Responses are scored across category accuracy, explanation quality, context relevance, and feature recognition to produce a 0–100 benchmark score.
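The scoring step above can be sketched in a few lines. This is a minimal illustration, not AIsubtext's actual formula: the four dimension names come from the methodology, but the equal-weight averaging and the `benchmark_score` helper are assumptions made for the example.

```python
from statistics import mean

# Dimension names taken from the methodology; the equal-weight mean
# used to combine them into a 0-100 benchmark is an assumption.
DIMENSIONS = (
    "category_accuracy",
    "explanation_quality",
    "context_relevance",
    "feature_recognition",
)

def benchmark_score(dimension_scores: dict[str, float]) -> float:
    """Combine per-dimension scores (each 0-100) into one 0-100 benchmark."""
    missing = set(DIMENSIONS) - dimension_scores.keys()
    if missing:
        raise ValueError(f"missing dimensions: {missing}")
    return round(mean(dimension_scores[d] for d in DIMENSIONS), 1)

# Example: one engine's response, scored on each dimension
score = benchmark_score({
    "category_accuracy": 82,
    "explanation_quality": 74,
    "context_relevance": 68,
    "feature_recognition": 90,
})
print(score)  # 78.5
```

In practice the per-engine scores would then be averaged again across ChatGPT, Claude, Gemini, and Perplexity to yield the single benchmark number shown in the index.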
Scores reflect AI outputs observed during the current index cycle · Full 4-engine audits available through AIsubtext · hello@aisubtext.ai
Run a scan to see your score, your gaps, and how your brand compares with the category benchmark.