ChatGPT's search mode is the single highest-leverage AI surface for most B2B and B2C content today. It runs a Bing-backed search, fetches pages with OAI-SearchBot, and synthesises an answer with citations. The pages it cites end up read by tens of millions of users a month, and the rules for getting cited overlap with classic SEO in ways that surprise people. This is the per-engine playbook for ChatGPT, sitting under the broader pillar on generative engine optimization.
Key takeaways
- What ChatGPT rewards — Brand co-occurrence, canonical answer block, sourced quotations and statistics, full Schema.org coverage, and server-rendered HTML. The five do most of the work.
- What it cares less about — Recency (vs Perplexity), keyword density, bulleted-list formatting, page-load speed (within reason).
- The crawlers — OAI-SearchBot handles the search-mode fetches; GPTBot handles broader crawling. Allow both unless you have a reason.
- How long it takes — Two to four weeks for a single-page intervention to show up in citation trackers. Brand-authority moves take months.
How ChatGPT decides what to cite
The flow inside ChatGPT's search mode follows the standard generative-engine loop with one wrinkle: the retrieval layer is Bing-backed, but the rerank and extraction layers belong to OpenAI. That split is important because it means classic SEO still drives the candidate pool, while a different set of signals decides which of those candidates end up in the answer.
- Decompose the question into one to four sub-queries.
- Search via Bing's index for each sub-query in parallel.
- Rerank the candidate set with an OpenAI model that weights brand authority, evidence quality, and structural clarity.
- Fetch the top candidates with OAI-SearchBot.
- Extract the spans that answer the sub-queries, preferring direct prose answers over bulleted lists or marketing copy.
- Synthesise a response with numbered citations back to the source pages.
Three implications follow. First, if your page does not show up in Bing for the sub-query, it does not get reranked, so traditional SEO on Bing matters. Second, the rerank step is the place where ChatGPT's preferences diverge from a plain web search: brand co-occurrence and on-page evidence quality count more here than in either Google or Bing alone. Third, the extraction step rewards a specific writing style, which is why two pages with identical SEO can have very different ChatGPT citation rates.
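The six-step loop above can be sketched in code. This is a toy illustration, not OpenAI's implementation: every function name and the scoring weights are hypothetical stand-ins for the behaviour described, with the fetch, extract, and synthesise steps collapsed into the final function.

```python
# Hypothetical sketch of ChatGPT's search-mode loop. No function here is a
# real OpenAI API; each stage is a stub standing in for the described step.

def decompose(question: str) -> list[str]:
    # Step 1: split the question into one to four sub-queries (toy heuristic).
    parts = [p.strip() for p in question.split(" and ") if p.strip()]
    return parts[:4] or [question]

def search_index(sub_query: str) -> list[dict]:
    # Step 2: stand-in for a Bing index lookup returning candidate pages.
    return [{"url": f"https://example.com/{sub_query.replace(' ', '-')}",
             "brand_authority": 0.5, "evidence_quality": 0.5}]

def rerank(candidates: list[dict]) -> list[dict]:
    # Step 3: weight brand authority and evidence quality (invented weights).
    return sorted(candidates,
                  key=lambda c: 0.6 * c["brand_authority"]
                              + 0.4 * c["evidence_quality"],
                  reverse=True)

def answer(question: str) -> dict:
    # Steps 4-6 collapsed: fetch top candidates, extract spans, synthesise
    # a response whose citations point back at the source pages.
    sub_queries = decompose(question)
    candidates = [c for sq in sub_queries for c in search_index(sq)]
    ranked = rerank(candidates)
    return {"sub_queries": sub_queries,
            "citations": [c["url"] for c in ranked[:3]]}
```

The point of the sketch is the first implication below: a page that `search_index` never returns is invisible to `rerank`, no matter how well it is written.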
The seven-step playbook
Tactics ordered by the ratio of citation lift to effort, calibrated specifically for ChatGPT.
- Allow OAI-SearchBot and GPTBot in robots.txt. The first one is what makes you eligible to be cited; the second is optional but commonly allowed. Confirm with a fetch test, not just by reading the file. Some CDN-level WAF rules silently block bots even when robots.txt allows them.
- Ship a canonical answer block at the top of each page. Two to four sentences that directly answer the primary query in the way a user would phrase it. ChatGPT's extraction step pulls this kind of block at a far higher rate than it pulls from mid-page prose. Treat it as the machine-readable abstract.
- Include at least one sourced statistic and one authoritative quotation. These were the two highest-lift interventions in the November 2023 Aggarwal et al. paper that named GEO. ChatGPT specifically over-indexes on pages that already cite somebody else, and quoting a primary source pays off in two ways: the source itself gets pulled, and the surrounding context becomes more citation-worthy.
- Expand Schema.org coverage. At minimum: Article, FAQPage, Organization, and Person for the author. Include dateModified, a real author URL, and, where it applies, reviewedBy. OpenAI's models read raw JSON-LD during the extraction step even when rich results never surface in Google or Bing.
- Publish an llms.txt at your root. A structured index of your most citation-worthy pages with one-line descriptions, grouped by topic. The spec is emerging rather than standardised (proposed by Answer.AI in 2024, not yet endorsed by OpenAI), but the major agents already read it; CTAIO Labs measured a positive 14-day citation delta on two of three test sites in the 30-day llms.txt experiment.
- Render content server-side. Many OpenAI extraction calls run the page through JavaScript, but the safer assumption is that they fall back to the initial HTML payload if extraction times out. Put the answer, the headings, and the structured data in the first response, not after hydration.
- Earn brand co-occurrence in trusted contexts. The slowest and largest lever. The rerank step weighs how often your brand appears alongside the query topic in third-party sources the model considers authoritative. Coverage in primary research outlets, podcast transcripts that surface in search, analyst reports, and the Wikipedia entries that link your domain compound over months. There is no shortcut for this one, but it is the durable advantage that brand-authority leaders end up with.
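Three of the tactics above reduce to files you can ship today. The snippets below are sketches with placeholder names, dates, and URLs, not copies of any real site's configuration. First, a robots.txt that allows both OpenAI crawlers (the user-agent tokens are the ones documented at platform.openai.com/docs/bots):

```
# robots.txt — allow both OpenAI crawlers explicitly
User-agent: OAI-SearchBot
Allow: /

User-agent: GPTBot
Allow: /
```

To run the fetch test the first tactic calls for, request a page while sending the bot's user-agent string and confirm you get a 200 rather than a WAF challenge or 403.

Second, a minimal JSON-LD block covering the Schema.org fields named above; every value is a hypothetical placeholder:

```json
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "How ChatGPT Decides What to Cite",
  "dateModified": "2025-01-15",
  "author": {
    "@type": "Person",
    "name": "Jane Doe",
    "url": "https://example.com/authors/jane-doe"
  },
  "reviewedBy": {
    "@type": "Person",
    "name": "John Roe"
  },
  "publisher": {
    "@type": "Organization",
    "name": "Example Co",
    "url": "https://example.com"
  }
}
```

Third, an llms.txt sketch following the shape of the Answer.AI proposal (H1 for the site, blockquote summary, topic sections with one-line descriptions), again with invented names and URLs:

```
# Example Co
> Independent testing of AI search optimisation tactics.

## Guides
- [ChatGPT citation playbook](https://example.com/guides/chatgpt): Per-engine tactics for ChatGPT search
- [Perplexity playbook](https://example.com/guides/perplexity): Freshness-first tactics

## Research
- [30-day llms.txt experiment](https://example.com/labs/llms-txt): Citation deltas across three sites
```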
What's different from Perplexity, Gemini, and AI Overviews
The seven tactics above are not specific to ChatGPT in spirit. Every generative engine rewards roughly the same shape of page. The differences are operational, and they shift the prioritisation. CTAIO Labs ran an A/B test of the same article rewritten under three optimisation frameworks and measured citation deltas across ChatGPT, Perplexity, and Gemini; the methodology and results are in the framework test.
- Perplexity weights recency far more heavily than ChatGPT. A page with a stale dateModified will drop out of Perplexity citations long before it drops out of ChatGPT. If Perplexity matters, set up a refresh cadence for your priority pages.
- Gemini sits closer to Google's organic ranking than to either ChatGPT or Perplexity. The classic SEO fundamentals do most of the work; the GEO-specific moves are a smaller lift.
- Google AI Overviews are powered by Gemini but constrained by the AI Overview product's stricter content rules. Pages with clean schema and clear answer blocks surface there far more often than pages without.
- Bing Copilot Search shares Bing's index with ChatGPT but ranks the results differently, so a strong ChatGPT performer often does well in Copilot too, with one or two outliers.
Measurement
The single failure mode that derails most ChatGPT optimisation programmes is the absence of a measurement loop. Without one, every change feels like a coin flip. Build the loop early, with three layers:
- Citation tracker. Profound, Peec AI, AthenaHQ, Otterly, or one of the others. Pick a fixed query set of fifty to one hundred prompts that map to your highest-value pages and track citation rate weekly across at least ChatGPT, Perplexity, and Gemini. The Radar's scored shortlist is at 6 GEO Tools the Radar Actually Recommends; CTAIO Labs tested ten of them head-to-head in the visibility tools test.
- GA4 channel grouping. Add chatgpt.com, perplexity.ai, gemini.google.com, and copilot.microsoft.com as referral sources. Coverage is partial (not every session carries a referrer), but the trend is informative, and the conversion rate is unusually high.
- Branded query volume in GSC. The cleanest signal that AI citation is translating to brand equity is users who later search for your name directly. The lag is months; the signal is unambiguous.
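The first two layers are straightforward to wire up. Below is a toy sketch of both: a weekly citation-rate rollup over a fixed prompt set, and a referrer classifier matching the channel grouping above. The data shapes and function names are illustrative, not any tracker's or GA4's real API.

```python
from collections import defaultdict
from urllib.parse import urlparse

def citation_rate(results):
    """Per-engine citation rate.

    results: [{"prompt": str, "engine": str, "cited": bool}, ...] — one
    entry per prompt per engine for the week's tracker run.
    """
    totals, hits = defaultdict(int), defaultdict(int)
    for r in results:
        totals[r["engine"]] += 1
        hits[r["engine"]] += int(r["cited"])
    return {engine: hits[engine] / totals[engine] for engine in totals}

# The four referral domains from the channel grouping above.
AI_REFERRERS = {
    "chatgpt.com": "ChatGPT",
    "perplexity.ai": "Perplexity",
    "gemini.google.com": "Gemini",
    "copilot.microsoft.com": "Copilot",
}

def classify_referrer(referrer):
    # Sessions without a referrer are the norm for in-app AI traffic,
    # which is why coverage is partial.
    if not referrer:
        return "unattributed"
    host = urlparse(referrer).hostname or ""
    for domain, channel in AI_REFERRERS.items():
        if host == domain or host.endswith("." + domain):
            return channel
    return "other"
```

Tracking the same fixed prompt set week over week is what makes the rate comparable; changing the prompts mid-programme resets the baseline.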
Frequently asked questions
How does ChatGPT decide what to cite?
When ChatGPT runs its search mode, it issues queries to a search index (Bing, with a layer of its own ranking), retrieves the top results, fetches the highest-ranked pages with OAI-SearchBot, extracts the relevant spans, and synthesises an answer with citations. The rerank step weighs brand authority and on-page evidence quality; the extraction step heavily favours pages with clean structure, explicit answers, and sourced claims.
Which user agent should I allow for ChatGPT?
Two. OAI-SearchBot is the crawler that fetches pages during ChatGPT's search action; allowing it is what makes you eligible to be cited. GPTBot is the broader OpenAI crawler used for model training and other purposes; many teams allow it as well, though blocking GPTBot does not prevent OAI-SearchBot from working. Both are documented at platform.openai.com/docs/bots.
Does ChatGPT pass referrer traffic to my analytics?
Partially. Sessions that originate from chatgpt.com sometimes carry a referrer; sessions that originate from within the ChatGPT app or via the API often do not. Set up a GA4 channel grouping for chatgpt.com to catch what comes through, and assume actual traffic is two to three times higher than what shows up. Conversion rate is the cleaner metric than raw count.
How long does it take to start getting cited?
Two to four weeks for a single high-quality intervention on an existing page (canonical answer block, sourced statistic, Schema.org expansion). CTAIO Labs' 30-day citation experiment measured per-engine deltas on three sites and saw early signal within fourteen days. Brand-authority moves (coverage in trusted third-party sources) take months, but they are the largest and most durable lever.
What is the difference between ranking in ChatGPT and ranking in Perplexity?
ChatGPT is more forgiving on recency and rewards brand authority. Perplexity prizes freshness and direct quotation. The same page can rank in both, but the playbook tilts: for ChatGPT, invest in brand mentions and structured prose; for Perplexity, keep dateModified current and lead with a quotable answer. The pillar at /en/ai-search/generative-engine-optimization/ covers the cross-engine tactics. CTAIO Labs ran an A/B framework test across both at /en/labs/agentic-search/framework-test/.
Does keyword stuffing or fluency optimisation help with ChatGPT?
No, and possibly the reverse. The November 2023 Aggarwal et al. paper that named GEO tested both interventions on a benchmark engine and found fluency optimisation produced almost no measurable citation lift, while keyword stuffing produced a small negative effect in several topic domains. Time on either is better spent on the seven tactics in this guide.
Does ChatGPT cite content behind a paywall or login?
Rarely. OAI-SearchBot generally cannot get past authentication walls, soft paywalls, or cookie banners that gate the content. If a section of your site is paywalled but important for citation, consider exposing a teaser, an abstract, or an llms.txt entry that summarises the gated page in a way the model can cite without crossing the wall.
What tools measure my ChatGPT citation rate?
Several. Profound, Peec AI, AthenaHQ, Otterly, Scrunch, Evertune, Rankscale, Bluefish, Semji, and Goodie AI all track LLM citations across engines including ChatGPT. CTAIO Labs scored ten of them in a head-to-head on three real brand portfolios. The Radar's shortlist of the six that earned a recommendation lives at /en/radar/geo-tools/.