The AI Visibility Audit: How to Measure Your Brand's Presence Across AI Platforms
To audit your AI visibility, you need to systematically test how ChatGPT, Perplexity, Claude, and Google AI Overviews respond to the queries your customers actually ask — then score your brand's presence, accuracy, and competitive positioning across each platform. A complete AI visibility audit covers three dimensions: technical access (can AI crawlers reach your content?), content quality (is your content structured for AI citation?), and authority signals (does the broader web confirm your expertise?).
Most brands have no idea how they appear — or whether they appear at all — in AI-generated responses. That blind spot is becoming increasingly expensive. Gartner projects that traditional search volume will decline 25% by 2026 as users shift to AI-powered discovery, and research shows that fewer than 1 in 10 AI-generated answers mention a specific brand. Making that shortlist requires understanding where you stand today — which is exactly what an AI visibility audit gives you.
This guide walks through the complete audit framework we use at Forged Catalyst, including the exact testing protocol, scoring methodology, and competitive benchmarking approach you can run yourself. For the broader strategic context, see our complete guide to AI search visibility.
Why Do You Need an AI Visibility Audit?
You wouldn't run a paid media campaign without tracking conversions, and you wouldn't invest in SEO without monitoring rankings. AI search visibility deserves the same rigor — but most organizations are flying blind.
Here's why an audit is no longer optional:
The discovery channel is shifting. ChatGPT has over 200 million weekly active users. Perplexity processes millions of queries daily. Google AI Overviews now appear on a significant share of searches. When a potential customer asks one of these platforms "what's the best [your category]?" and your brand isn't mentioned, that's a lost opportunity you never even knew existed.
AI citations are highly concentrated. Research shows that 85% of AI brand mentions come from third-party sources — not your own website. Without an audit, you don't know which third-party signals are driving your mentions (or your absence).
Freshness determines eligibility. Pages updated within 12 months account for 70%+ of AI citations. An audit reveals which of your content assets have aged out of citation eligibility and need refreshing.
Your competitors may already be optimizing. The GEO market reached $886 million in 2024 and is projected to hit $7.3 billion by 2031. Early movers who audit and optimize now will build citation momentum that's difficult for latecomers to displace.
An audit gives you the baseline you need to invest strategically, prioritize optimizations, and measure progress over time. Without it, you're guessing.
What Are the 15 Questions Every CMO Should Ask About AI Visibility?
Before diving into the technical audit protocol, start with the strategic questions that frame the effort. Search Engine Journal's framework for evaluating AI search readiness suggests 15 foundational questions every marketing leader should be able to answer. Here are the ones we consider essential:
- Do we know how our brand appears (if at all) across ChatGPT, Perplexity, Claude, and Google AI Overviews?
- Have we identified the top 20-30 customer queries where we should be cited?
- Are AI crawlers (GPTBot, ChatGPT-User, PerplexityBot, Google-Extended) allowed in our robots.txt?
- What percentage of our target queries result in a brand mention for us vs. competitors?
- Is the information AI provides about our brand accurate and current?
- Do we have structured data (schema markup) implemented on our key pages?
- How frequently is our content being updated with fresh data and statistics?
- What third-party sources mention our brand, and are those sources ones AI trusts?
- Do we have a presence on platforms AI heavily indexes — Reddit, Wikipedia, industry publications?
- Are our author bios, credentials, and E-E-A-T signals clearly structured?
- How does our AI citation rate compare to our top three competitors?
- Do we have content formatted in answer-first, modular structures that AI can easily extract?
- What budget have we allocated specifically for AI search visibility?
- Who on our team owns AI visibility as a KPI?
- How often are we re-auditing to measure progress?
If you can't confidently answer more than half of these, the audit process below will fill the gaps. If you can't answer any of them, you're significantly behind — but the good news is that most of your competitors can't answer them either.
How Do You Run a Step-by-Step AI Visibility Audit?
The audit has five phases: query development, platform testing, scoring, competitive benchmarking, and technical/content assessment. Here's the full protocol.
Phase 1: Develop Your Query Set
Build a list of 30-50 queries that represent how your customers discover solutions in your category. Organize them into three tiers:
Tier 1 — Brand queries (10-15 queries):
- "What is [your brand]?"
- "[Your brand] reviews"
- "[Your brand] vs [competitor]"
- "Is [your brand] worth it?"
Tier 2 — Category queries (10-15 queries):
- "Best [your category] in [year]"
- "What [your product type] should I use for [use case]?"
- "Top [your category] companies"
- "[Your category] recommendations"
Tier 3 — Problem-aware queries (10-15 queries):
- "How do I [solve the problem your product addresses]?"
- "What's the best way to [achieve outcome you enable]?"
- "Why is my [problem you solve] not working?"
Document every query in a spreadsheet with columns for the query text, the tier, the date tested, and the results per platform. This becomes your audit tracking sheet.
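If you prefer to bootstrap the tracking sheet programmatically, a minimal sketch with Python's `csv` module looks like this. The query texts, tiers, and file name are placeholders, not prescribed values; swap in your own query set.

```python
import csv

# Hypothetical starter queries; replace the bracketed placeholders
# with your own brand, category, and problem language.
queries = [
    ("What is [your brand]?", "Tier 1"),
    ("Best [your category] in 2025", "Tier 2"),
    ("How do I [solve the problem]?", "Tier 3"),
]

platforms = ["ChatGPT", "Perplexity", "Claude", "Google AI Overviews"]

with open("audit_tracking_sheet.csv", "w", newline="") as f:
    writer = csv.writer(f)
    # Query metadata columns, then one result column per platform.
    writer.writerow(["Query", "Tier", "Date Tested"] + platforms)
    for text, tier in queries:
        writer.writerow([text, tier, ""] + [""] * len(platforms))
```

The empty per-platform cells get filled in during Phase 2 testing; keeping one row per query makes the Phase 3 scoring arithmetic straightforward.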
Phase 2: Platform-by-Platform Testing
Test each query across all four major AI platforms and document the results. Here's the platform-specific methodology:
ChatGPT (GPT-4o with browsing):
- Use a new conversation for each query to avoid context contamination
- Test both with and without browsing enabled where possible
- Record: Was your brand mentioned? In what context? What sources were cited? Were competitors mentioned?
Perplexity:
- Test in default mode (Perplexity shows citations inline, making source attribution straightforward)
- Record: Were you cited as a source? Was your brand mentioned in the answer text? Which of your pages (or third-party pages about you) were cited?
Claude:
- Test without web search first (to evaluate training data presence), then with web search enabled
- Record: Was your brand mentioned from memory? Did web search change the results?
Google AI Overviews:
- Search Google for your queries and check whether an AI Overview appears
- Record: Does the AI Overview mention your brand? Which pages are cited as sources? Is your page among them?
Key documentation tip: Take screenshots of every result. AI responses are non-deterministic — the same query can produce different responses at different times. Screenshots create a point-in-time record.
Phase 3: Score Your Visibility (0-100 Scale)
Use this scoring methodology to convert your raw audit data into an actionable visibility score:
Mention Score (0-40 points):
- For each query, assign: 0 = not mentioned, 1 = mentioned briefly, 2 = mentioned prominently or recommended
- Calculate: (total points earned / total points possible) x 40
Accuracy Score (0-20 points):
- For each mention, assess: Is the description accurate? Is pricing/positioning current? Are there any factual errors?
- Calculate: (number of accurate mentions / total mentions) x 20
Source Score (0-20 points):
- How often is your own content cited as the source (vs. third-party mentions of you)?
- Calculate: (queries where your site is cited as source / total queries tested) x 20
Competitive Share Score (0-20 points):
- For category and problem-aware queries, what percentage of the time are you mentioned vs. competitors?
- Calculate: (your mentions / (your mentions + competitor mentions)) x 20, counted across category and problem-aware queries
Total AI Visibility Score = Mention Score + Accuracy Score + Source Score + Competitive Share Score
| Score Range | Interpretation |
|---|---|
| 0-20 | Invisible — AI doesn't know your brand exists |
| 21-40 | Minimal presence — occasional mentions, often inaccurate |
| 41-60 | Emerging visibility — mentioned for some queries, absent for many |
| 61-80 | Strong presence — consistently mentioned, mostly accurate |
| 81-100 | Dominant — the brand AI recommends first, with accurate details |
Most brands we audit score between 15 and 35 on their first assessment. If you're in that range, you're normal — but you have significant room for improvement.
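The four-component arithmetic above can be sketched as a single Python function. The parameter names and the example counts are illustrative, not part of any standard methodology:

```python
def visibility_score(mention_points, mention_points_possible,
                     accurate_mentions, total_mentions,
                     source_cited_queries, total_queries,
                     your_mentions, competitor_mentions):
    """Combine the four Phase 3 components into a 0-100 score."""
    mention = (mention_points / mention_points_possible) * 40
    # If you were never mentioned, accuracy can't be assessed; score it 0.
    accuracy = (accurate_mentions / total_mentions) * 20 if total_mentions else 0
    source = (source_cited_queries / total_queries) * 20
    share_total = your_mentions + competitor_mentions
    share = (your_mentions / share_total) * 20 if share_total else 0
    return round(mention + accuracy + source + share, 1)

# Example: 18 of 80 possible mention points, 7 of 9 mentions accurate,
# own site cited on 4 of 40 queries, 6 mentions vs. 24 competitor mentions.
score = visibility_score(18, 80, 7, 9, 4, 40, 6, 24)  # -> 30.6
```

A score of 30.6 lands in the "minimal presence" band of the interpretation table, which matches where most first-time audits fall.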
Phase 4: Competitive Benchmarking
Run the same Tier 2 and Tier 3 queries for your top three to five competitors. Document:
- Citation frequency: How often each competitor is mentioned across platforms
- Citation context: Are they recommended, merely mentioned, or compared?
- Source quality: Which third-party sources are driving their AI mentions?
- Content patterns: What format, depth, and structure does their cited content follow?
Create a competitive citation matrix — a table with your target queries as rows and brands (yours + competitors) as columns. Mark each cell as "mentioned," "recommended," or "absent." This matrix reveals exactly where you're winning, where you're losing, and where opportunities exist.
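The matrix lends itself to simple aggregation. This sketch assumes a hypothetical three-query, three-brand matrix and computes each brand's citation share (the percentage of queries where the brand appears at all):

```python
# Hypothetical matrix: query -> {brand: status}, where status is
# "recommended", "mentioned", or "absent".
matrix = {
    "Best [category] in 2025":  {"You": "absent",    "Competitor A": "recommended", "Competitor B": "mentioned"},
    "Top [category] companies": {"You": "mentioned", "Competitor A": "mentioned",   "Competitor B": "absent"},
    "[Category] recommendations": {"You": "absent",  "Competitor A": "recommended", "Competitor B": "absent"},
}

def citation_share(matrix):
    """Percentage of queries where each brand is mentioned or recommended."""
    brands = {b for row in matrix.values() for b in row}
    total = len(matrix)
    return {
        b: round(100 * sum(1 for row in matrix.values()
                           if row.get(b, "absent") != "absent") / total)
        for b in sorted(brands)
    }

shares = citation_share(matrix)
# -> {"Competitor A": 100, "Competitor B": 33, "You": 33}
```

Sorting the result by share gives an instant leaderboard for your Competitive Share Score and tells you which competitor's source profile to study first.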
Phase 5: Technical and Content Assessment
The final audit phase evaluates the underlying factors that determine whether AI can access and will choose to cite your content.
How Do You Assess Technical Access for AI Crawlers?
Technical access is the foundation. If AI crawlers can't reach your content, nothing else matters.
Robots.txt audit: Check your robots.txt file for rules that might block AI crawlers. Look for:
- `User-agent: GPTBot` — OpenAI's crawler for ChatGPT
- `User-agent: ChatGPT-User` — ChatGPT's browsing mode crawler
- `User-agent: PerplexityBot` — Perplexity's crawler
- `User-agent: Google-Extended` — Google's AI training crawler
- `User-agent: ClaudeBot` — Anthropic's crawler
- `User-agent: Bytespider` — ByteDance's AI crawler
If any of these are disallowed, you've found your first fix. Many sites inadvertently block AI bots through overly broad wildcard rules.
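You can check this programmatically with Python's standard-library `urllib.robotparser`. The robots.txt content below is an invented example (it deliberately blocks GPTBot) to show how the check behaves; in practice, fetch your own file from `https://yourdomain.com/robots.txt`:

```python
from urllib import robotparser

AI_CRAWLERS = ["GPTBot", "ChatGPT-User", "PerplexityBot",
               "Google-Extended", "ClaudeBot", "Bytespider"]

# Example robots.txt: a sane wildcard group, plus one explicit AI block.
robots_txt = """\
User-agent: *
Disallow: /admin/

User-agent: GPTBot
Disallow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

for bot in AI_CRAWLERS:
    status = "allowed" if rp.can_fetch(bot, "/any-page") else "BLOCKED"
    print(f"{bot}: {status}")
# GPTBot is explicitly disallowed; the other bots fall back to the
# "User-agent: *" group, which permits /any-page.
```

Running this against your real robots.txt surfaces both explicit blocks and the overly broad wildcard rules mentioned above, since every crawler without its own group inherits the `*` rules.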
Structured data audit: Check your key pages for schema markup implementation. AI systems rely on structured data to understand entities, relationships, and content types. Prioritize:
- Organization schema on your homepage
- Article/BlogPosting schema on content pages
- FAQ schema on pages with question-answer content
- Product schema on product/service pages
- Person schema on author bio pages
Use Google's Rich Results Test or Schema.org's validator to verify implementation. For a deep dive, see our guide on content structure that AI systems prefer.
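For reference, a minimal Organization JSON-LD payload looks like the sketch below. Every value is a placeholder for illustration; it is built as a Python dict so it's easy to template, but the end result is just JSON embedded in a `<script type="application/ld+json">` tag on your homepage:

```python
import json

# Placeholder values; swap in your own organization's details.
organization_schema = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Co",
    "url": "https://www.example.com",
    "logo": "https://www.example.com/logo.png",
    # sameAs links help AI systems connect your entity across the web.
    "sameAs": [
        "https://www.linkedin.com/company/example-co",
        "https://en.wikipedia.org/wiki/Example_Co",
    ],
}

jsonld = json.dumps(organization_schema, indent=2)
```

The `sameAs` array is worth the effort: it ties your homepage entity to the third-party profiles that, per the audit data above, drive most AI brand mentions.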
Sitemap and indexation audit: Verify your XML sitemap is current, submitted to Google Search Console and Bing Webmaster Tools, and includes all pages you want AI to discover. Bing indexation is particularly important because ChatGPT's browsing mode relies on Bing's search index.
How Do You Evaluate Content Quality for AI Citation?
Content quality determines whether AI chooses to cite you once it can access your content. Assess each key page against these criteria:
Answer-first formatting: Does the page open with a direct, clear answer to the question it targets? Research from AirOps' 2026 State of AI Search report found that 44% of all LLM citations come from the first 30% of a page's text. If your critical information is buried below an introduction, AI is less likely to cite it.
Data density: Does the page include specific statistics, data points, and cited research? The Princeton GEO study found that including citations and statistics boosts AI visibility by 30-40%. Pages with vague claims and no supporting data are significantly less likely to be cited.
Modular structure: Is the content organized with clear H2/H3 headings that allow AI to extract specific sections? AI systems don't cite entire pages — they pull specific passages. Content organized in self-contained, clearly labeled sections is easier for AI to parse and cite.
Freshness signals: When was the page last substantively updated? Include visible "last updated" dates. Remember that pages updated within 12 months account for the vast majority of AI citations — stale content is effectively invisible.
Unique value: Does the page offer original research, proprietary data, expert perspectives, or unique frameworks that can't be found elsewhere? AI systems have access to essentially all public information. Content that merely restates common knowledge provides no "Information Gain" and won't be prioritized. For more on structuring content for AI, see our guide on how to appear in ChatGPT.
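The freshness criterion in particular is easy to check in bulk. This sketch assumes a hypothetical page inventory of `(url, last_substantive_update)` pairs and flags anything outside a 12-month window:

```python
from datetime import date

# Hypothetical content inventory; pull real dates from your CMS.
pages = [
    ("/blog/ai-visibility-guide", date(2025, 3, 1)),
    ("/blog/legacy-seo-tips", date(2022, 6, 15)),
]

def stale_pages(pages, today, max_age_days=365):
    """Flag pages outside the ~12-month citation-eligibility window."""
    return [url for url, updated in pages if (today - updated).days > max_age_days]

stale = stale_pages(pages, today=date(2025, 11, 1))
# -> ["/blog/legacy-seo-tips"]
```

Anything this flags becomes a refresh candidate in your action plan; note that the update must be substantive, since merely bumping a "last updated" date without changing the content is unlikely to restore eligibility.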
What Tools Can You Use for AI Visibility Monitoring?
Running audits manually is essential for your first assessment, but ongoing monitoring benefits from tooling.
Otterly.ai: Currently the most purpose-built tool for AI search monitoring. Otterly tracks your brand's presence across ChatGPT, Perplexity, and Google AI Overviews, providing citation tracking, competitive benchmarking, and trend analysis. It automates the manual testing process described above at a scale that's impractical to maintain by hand.
Manual testing protocol: For teams without dedicated tooling budget, establish a monthly manual audit cadence. Test your top 20 queries across all platforms, using the scoring methodology from Phase 3. Track results in a shared spreadsheet over time to identify trends.
Google Search Console and Bing Webmaster Tools: Monitor indexation status, crawl errors, and structured data validation. These don't track AI citations directly, but they verify the technical foundation that enables citation.
Brand monitoring tools (Mention, Brandwatch, etc.): Track third-party mentions of your brand across the web. Since 85% of AI brand mentions originate from third-party sources, monitoring where you're being discussed is critical context for understanding your AI visibility.
Google Analytics 4: Monitor referral traffic from AI platforms. Look for referral sources including chatgpt.com (formerly chat.openai.com), perplexity.ai, and other AI domains. This traffic data validates whether AI visibility translates into actual site visits.
For a complete framework on connecting AI visibility to business outcomes, see our guide on measuring GEO ROI.
How Often Should You Run an AI Visibility Audit?
AI search is dynamic. Models update, training data refreshes, and competitors optimize. A single audit is a snapshot — ongoing monitoring is what drives improvement.
Recommended cadence:
| Audit Type | Frequency | Scope |
|---|---|---|
| Quick pulse check | Weekly | Test 5 high-priority queries, note changes |
| Standard audit | Monthly | Full 30-50 query test with scoring |
| Competitive benchmark | Quarterly | Full audit including competitor analysis |
| Comprehensive audit | Twice yearly | All five phases including technical and content assessment |
After your first comprehensive audit, the monthly standard audit becomes your primary tracking mechanism. Use it to measure the impact of your optimization efforts and catch changes early.
What Should You Do With Your Audit Results?
An audit that sits in a slide deck is worthless. Here's how to turn findings into action.
Immediate fixes (Week 1):
- Unblock any AI crawlers in robots.txt
- Correct any inaccurate brand information appearing in AI responses
- Update structured data on pages missing it
Content optimization (Weeks 2-4):
- Reformat your top 10 pages using answer-first structure
- Add statistics, citations, and data points to thin content
- Update any page that hasn't been refreshed in 12+ months
- Implement FAQ schema on relevant pages
Authority building (Months 2-3):
- Identify the third-party sources competitors are being cited from, and pursue mentions on those same platforms
- Build presence on Reddit, LinkedIn, and industry publications
- Pursue guest content on authoritative sites in your space
Ongoing measurement:
- Track your AI Visibility Score monthly and set quarterly improvement targets
- Monitor competitive citation share for category queries
- Adjust strategy based on which optimization tactics move the score
For a step-by-step implementation plan, download our 65-Task GEO Checklist, which maps directly to the audit findings framework described above.
Frequently Asked Questions
How long does a complete AI visibility audit take?
A comprehensive first audit — covering all five phases across four platforms — typically takes 8-12 hours of focused work for one brand. This includes query development, platform testing (the most time-consuming phase), scoring, competitive benchmarking, and technical/content assessment. Subsequent monthly audits take 2-3 hours because the query set and scoring framework are already established.
Can I automate the entire audit process?
Partially. Tools like Otterly.ai can automate the ongoing monitoring of specific queries across platforms, which handles the most labor-intensive part. However, the strategic components — query set development, content quality assessment, competitive analysis interpretation, and action planning — require human judgment. We recommend automating the data collection and investing your time in analysis and action.
What if my brand scores below 20 on the AI Visibility Score?
A score below 20 is common for brands that haven't optimized for AI search, so don't panic. It typically means AI platforms either don't have enough signals about your brand (authority gap) or can't access your content (technical gap). Start with the technical fixes — ensuring crawler access and implementing structured data — then move to content formatting and authority building. Most brands can move from below 20 to the 40-60 range within 90 days of focused effort.
Should I audit every AI platform, or focus on the most important one?
Audit all four major platforms (ChatGPT, Perplexity, Claude, Google AI Overviews) because they each draw from different source pools. ChatGPT uses Bing's index, Google AI Overviews uses Google's index, Perplexity has its own crawling system, and Claude draws from its training data plus web search. A strong showing on one platform doesn't guarantee visibility on the others. Your audit may reveal that you're well-represented on Google AI Overviews but completely absent from ChatGPT — which tells you exactly where to focus.
How do I convince leadership to invest in AI visibility monitoring?
Frame it in terms of risk and opportunity. The risk: if traditional search volume declines 25% as projected, brands invisible in AI search will lose a significant discovery channel with no fallback. The opportunity: fewer than 1 in 10 AI answers mention brands, which means the competition for AI citations is still relatively thin compared to traditional SEO. Early investment builds compounding authority. Present your audit results alongside competitor data — nothing motivates action like showing leadership that a competitor is being recommended where your brand is absent.
Does blocking AI crawlers improve or hurt my position?
Blocking AI crawlers almost always hurts your AI visibility. While some publishers block crawlers over content licensing concerns, doing so prevents your content from being retrieved and cited in real-time AI responses. If your business benefits from being discovered and recommended by AI platforms — and most businesses do — you want those crawlers to have full access. Review your robots.txt as the very first step of your audit.
Not sure where your brand stands in AI search? Book a free AI visibility audit with the Forged Catalyst team. We'll test your brand across ChatGPT, Perplexity, Claude, and Google AI Overviews, score your visibility, benchmark you against competitors, and build a prioritized action plan to increase your AI citation rate.