Journal · May 13, 2026

How to write pages ChatGPT cites.

We opened the rhinoplasty article of one of our clients — a plastic surgeon in Tijuana, twelve years in practice, a real patient on camera every Monday — and counted. 3,400 words. Zero statistics. Zero quotes from the surgeon. Zero outbound links to any medical source. The article was written impeccably. For 2018 Google. For 2026 ChatGPT, it was invisible.

Last week we wrote that the game shifted from blue links to citations inside the answer. This week is the practical part: line by line, what a page a generative model chooses to cite actually looks like.

Where the method comes from

We're not inventing this. In November 2023, six researchers — five from Princeton, one from IIT Delhi — published GEO: Generative Engine Optimization (Aggarwal et al., arXiv:2311.09735). They built a benchmark of 10,000 real queries, tested nine different tactics, and published the numbers. Three tactics moved the needle: cite sources, add statistics, quote experts. The paper reports aggregate gains up to +40% in visibility, and for sites ranked fifth on Google, +115.1% by applying the cite-sources tactic alone (Aggarwal et al., 2023, Table 2). That's the empirical base. Everything else in this article is how those three things apply to a Mexican professional-services site, using the rhinoplasty page as the example.

A meta note before going further: this article is using the three tactics on itself. The Aggarwal citation above, the 115.1% number with attribution, the practitioner quote that comes below. It's part of the demonstration.

Tactic 1 — Cite sources

The Princeton paper found that pages with outbound links to verifiable sources get cited more often by models. It sounds counterintuitive. The SEO instinct of ten years ago said "don't link out, you lose link juice." The GEO instinct says the opposite: the model needs to know your content rests on something real.

Here's what that looks like, before and after.

Before (what the clinic had):

Rhinoplasty is one of the most popular aesthetic procedures in the world. Every year, thousands of people decide to improve the appearance of their nose to feel more confident.

This is text that could be on any blog of any clinic in any country. The model has nothing to grab. No data point, no source, no surgeon.

After:

According to the cosmetic procedure statistics from the American Society for Aesthetic Plastic Surgery, rhinoplasty was the fifth most-performed aesthetic surgical procedure in 2023. Initial nasal bone recovery takes between seven and ten days according to a review published in Aesthetic Surgery Journal in 2022; final results settle between nine and twelve months.

The difference: the second paragraph has two outbound links to authoritative primary sources, two concrete numbers with their unit and context, and two realistic time windows. For the model, there's now something to cite.

Tactic 2 — Add statistics

The paper measured that pages with concrete, attributed figures rise in visibility. The common Mexican mistake is adding the number without the source. "Most patients return to work within a week" doesn't help the model. "According to our clinic's postoperative follow-up with 412 patients between 2022 and 2024, 78% returned to office work by day eight" does.

The second version is citable. It has an n, a time window, an origin. A model can grab it, attribute it, and include it in an answer.

A distinction we learned the hard way: generic internet numbers don't work. If you copy a statistic from a US blog without verifying it in the original source, the model notices — because five other sites already have the same number without a source — and it ignores you. The numbers that work best are from your own operation. Your n. Your patients. Your timelines. Those don't exist anywhere else.

Tactic 3 — Quote experts

Aggarwal et al. found that direct quotes from real practitioners raise citation probability. The model's logic: a textual quote attributed to an identifiable person with credentials is more trustworthy than anonymous prose.

Before:

It's important for the patient to follow postoperative instructions to ensure good recovery.

Empty. Anyone could have written it. A copywriter hired at 200 pesos an article probably did.

After:

"The most expensive mistake I see in rhinoplasty patients is removing the nasal splint before day seven because it doesn't hurt anymore," says the clinic's lead surgeon, certified by the Mexican Council of Plastic, Aesthetic and Reconstructive Surgery (CMCPER). "The bone is still consolidating. A week of patience prevents a second surgery."

It has voice. It has a credential. It takes a concrete position against a concrete behavior. The model cites it because it can attribute it.

Operational detail: the quote isn't invented. It's obtained by recording fifteen minutes with the surgeon and transcribing. It's work. That's why almost nobody does it. That's why it's an advantage.

What didn't work for us

We spent three weeks, in September, trying to automate the quotes with an assistant that interviewed the surgeon over chat. The idea was good on paper. In practice, the quotes came out flat — the surgeon answered in short WhatsApp-style sentences, because that's what the format invites — with none of the rigor he speaks with in consultation. Fifteen minutes in person, with the recorder on the table, gives ten times better material. We went back to the analog method. It costs more in time; it's infinitely better as a product.

Second thing that didn't work: asking the client to write their own quotes. Three out of three self-censored. They wanted to sound "professional" and ended up writing the same thing that was already in the old article. The surgeon speaking freely says interesting things; the surgeon writing for the internet flattens.

What a citable page looks like, concretely

An individual service page — rhinoplasty, in this case — ready for GEO has an identifiable shape. We describe it as a spec, because that's what it is:

- An opening paragraph with a concrete statistic and its source, not a generic claim.
- At least two outbound links to authoritative primary sources.
- At least one figure from your own operation: an n, a time window, an origin.
- At least one direct quote from the practitioner, with name and credential, transcribed from a recording.
- Schema markup so the page describes itself to machines.
- A robots.txt that doesn't block the AI crawlers, and an llms.txt at the root.

That's the page. It's disciplined, not complicated. A person with technical knowledge can finish it in an afternoon per service, once the surgeon has recorded their quotes.
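The schema piece can be a single JSON-LD block in the page head. A minimal sketch for a procedure page — the surgeon's name, the review date, and the exact property choices here are illustrative, not the clinic's real markup:

```json
{
  "@context": "https://schema.org",
  "@type": "MedicalWebPage",
  "about": {
    "@type": "MedicalProcedure",
    "name": "Rhinoplasty",
    "procedureType": "https://schema.org/SurgicalProcedure"
  },
  "lastReviewed": "2026-05-01",
  "reviewedBy": {
    "@type": "Physician",
    "name": "Dr. Example Surgeon",
    "medicalSpecialty": "PlasticSurgery"
  }
}
```

The point of the block is the same as the point of the quote: an identifiable person with a credential attached to the page, in a form a machine parses without guessing.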

The crawler detail

None of the work above matters if the bot can't get in to read. OpenAI documents three distinct bots: GPTBot, OAI-SearchBot, and ChatGPT-User. The one that matters most for live citations is ChatGPT-User — it's the one that fires when a patient is asking ChatGPT in that moment. If your robots.txt blocks it, you don't appear. Period.
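Checking this takes two minutes with Python's standard library. A sketch using `urllib.robotparser` — the robots.txt content and the `/rinoplastia/` path are hypothetical stand-ins for your own file and URL:

```python
from urllib import robotparser

# Hypothetical robots.txt content. In production, point
# RobotFileParser at https://yoursite.com/robots.txt instead.
ROBOTS_TXT = """\
User-agent: GPTBot
Allow: /

User-agent: ChatGPT-User
Allow: /

User-agent: BadBot
Disallow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# An agent with no matching group and no "*" group defaults to allowed,
# which is why OAI-SearchBot passes here despite having no explicit rule.
for bot in ("GPTBot", "OAI-SearchBot", "ChatGPT-User"):
    allowed = rp.can_fetch(bot, "/rinoplastia/")
    print(f"{bot}: {'allowed' if allowed else 'BLOCKED'}")
```

Run it against your real robots.txt before anything else: a single overzealous `Disallow: /` aimed at scrapers can silently block all three bots.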

The emerging standard llms.txt, proposed in 2024 by Jeremy Howard, places a markdown file at the site root with a structured summary of the content. It's not universal yet. We put it in place because the cost is one hour and the odds that it matters in six months are real.
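The format Howard proposed is plain markdown: an H1 with the site name, a blockquote summary, then H2 sections of annotated links. A minimal sketch for the clinic — every name and URL below is a hypothetical placeholder:

```markdown
# Clínica Example

> Plastic and reconstructive surgery clinic in Tijuana.
> Board-certified surgeon (CMCPER), twelve years in practice.

## Services

- [Rhinoplasty](https://example.com/rinoplastia): recovery timelines,
  clinic outcome statistics, surgeon quotes
- [Postoperative care](https://example.com/postoperatorio): day-by-day
  instructions from the lead surgeon
```

One file, at `/llms.txt`, and the summary lines after each link matter: they're what a model reads first when deciding whether the page is worth fetching.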

Why the 83% number matters

The Similarweb and SparkToro zero-click study published in July 2025 reported that 58.5% of Google searches in the US end without a click. On searches that trigger an AI Overview, the number rises to 83%. That's the magnitude of the problem. More than four out of five times a patient asks something and Google answers with an AI Overview, nobody clicks on anything. The organic traffic that depended on appearing in the blue links no longer arrives. The citation inside the answer is what's left.

Mexico is not at 83% yet. We're one to two years behind the US in AI Overview adoption and ChatGPT-as-search. It's a window. Whoever builds citable pages now appears inside the answers when the Mexican market reaches those numbers. Whoever waits until it's obvious arrives late.

Timeline reality

A service page rewritten following the three tactics, with schema and recorded quotes, shows up in citation metrics in six to twelve weeks. Not in thirty days. Any agency that promises otherwise is selling the old game with a new name. The LLM citation monitoring platforms — AthenaHQ, Profound, Otterly — need a few weeks of data to show a trend.

Google ranked pages. Models cite sentences. Write sentences worth citing.