Common misconceptions about showing up in ChatGPT

Written by Ripenn Team on September 3, 2025

Introduction

How do we show up in ChatGPT?

This question can be a major source of stress for business owners and entrepreneurs. There is a great deal of misconception about how AI search works today, and that gap invites misinformation.

This blog aims to answer it by walking you through how LLMs (large language models) work. We will discuss how AI answer engines use the web and the differences between SEO (search engine optimization) and GEO (generative engine optimization), and then provide some practical links for getting started.

The goal of this piece is to give readers the essential background they need so they are not misled by sentiment and anecdotal experience online.

Nailing down the basics

All answer engines today are LLMs (or LFMs, large foundation models, if you prefer). This means ChatGPT, Claude, Grok, and DeepSeek are all LLMs. There are other approaches to building AI, but given the current state of research, the most powerful models all share this architecture.

When you chat with an LLM, it draws on the vast amount of data it was trained on to respond. It does this by predicting the most likely next token (a word fragment, punctuation mark, or sometimes a whole word), one at a time, until a complete message has been generated. To put it simply, it is guessing the best response one piece at a time.
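To make that concrete, here is a toy sketch of next-token prediction. The tiny probability table and sampling loop below are illustrative stand-ins of our own; real models learn distributions over roughly a hundred thousand tokens with a neural network, but the loop has the same shape.

```python
import random

# Toy "model": a lookup of next-token probabilities given the previous token.
# Real LLMs learn distributions like these during training.
NEXT_TOKEN_PROBS = {
    "<start>":  {"The": 0.8, "A": 0.2},
    "The":      {" best": 0.7, " right": 0.3},
    "A":        {" good": 1.0},
    " best":    {" answer": 1.0},
    " right":   {" answer": 1.0},
    " good":    {" answer": 1.0},
    " answer":  {" wins": 0.6, " helps": 0.4},
    " wins":    {".": 1.0},
    " helps":   {".": 1.0},
}

def generate(max_tokens: int = 10) -> str:
    token, pieces = "<start>", []
    for _ in range(max_tokens):
        probs = NEXT_TOKEN_PROBS[token]
        # Pick the next piece in proportion to its probability -- one token at a time.
        token = random.choices(list(probs), weights=list(probs.values()))[0]
        pieces.append(token)
        if token == ".":
            break
    return "".join(pieces)

print(generate())  # e.g. "The best answer wins."
```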

If a user asks about something that was covered in training, the LLM will generally answer correctly [1]. If, however, a user asks about something foreign (unseen in training), the model will likely respond incorrectly. Correct or not, the LLM is generating the most probable response based on its configuration.

Increasingly, LLMs are becoming better at determining if they were insufficiently trained to answer a question. In these cases, LLMs can turn to the internet to retrieve more information before responding [2]. In the context of appearing in AI responses, this means:

  1. If your service or product was present in training, the LLM doesn’t need to use the web to surface your brand.
  2. If your service or product wasn’t present in training, the LLM may or may not use the web to find your brand.
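A rough sketch of that decision is below. The helper functions are placeholders invented purely for illustration (they are not a real API); the point is simply that web retrieval is a conditional step, not a guaranteed one.

```python
def answer_from_training(question: str) -> tuple[str, float]:
    """Placeholder: the model's 'offline' answer plus a confidence score."""
    return "Here is what I already know about that...", 0.4

def web_search(question: str) -> list[str]:
    """Placeholder: a search-tool call returning page snippets."""
    return ["snippet mentioning your brand", "snippet mentioning a competitor"]

def respond(question: str) -> str:
    draft, confidence = answer_from_training(question)
    if confidence >= 0.8:
        # Case 1: the topic was well covered in training -- no web retrieval,
        # and no chance to influence the answer at query time.
        return draft
    # Case 2: the topic was thin or missing in training -- the model may fall
    # back to search, and only then do SEO and GEO come into play.
    snippets = web_search(question)
    return f"Based on what I found online: {snippets[0]}"

print(respond("What is the best tool for X?"))
```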

Your brand as training data

Once a model is trained, its knowledge corpus is fixed. You can test this yourself. Ask any LLM to tell you what it knows about your brand without using the web [3]. You might show up (maybe with outdated information), you might not. What a model knows “offline” cannot be changed. Brands have to wait for a re-training period if they want offline gains 💪.

Your brand’s weight after training is fixed, but there are strategies you can use to consistently outperform the competition when the LLM chooses to use the web.

Your brand online

When AI agents go online, brands have room to play. This is where SEO and GEO become important.

When LLMs use the web, they work through two stages, retrieving and summarizing, often over several rounds:

  1. The “retrieving” stage involves selecting from a searched / indexed pool of pages at query time.
  2. The “summarizing” stage involves consolidating all the information from what was found online into a concise recommendation / discussion / message.

SEO is a retrieving enhancement. GEO is a summarizing enhancement.

In other words, it’s SEO first and then GEO.
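A toy version of this two-stage flow might look like the sketch below. The index, the scoring, and the "summary" are naive placeholders of our own, but they show where each discipline acts: SEO decides what `retrieve()` surfaces, while GEO shapes how your content reads once it reaches `summarize()`.

```python
# Toy retrieve-then-summarize pipeline. The index, scoring, and summary are
# deliberately naive stand-ins for a real search index and a real LLM.
INDEX = {
    "yourbrand.com/guide":   "yourbrand makes a gentle widget cleaner for delicate widgets",
    "competitor.com/post":   "competitor sells a widget cleaner at a low price",
    "unrelated.com/recipes": "a recipe blog about sourdough bread",
}

def score(query: str, page_text: str) -> int:
    """Naive relevance score: how many query words appear on the page."""
    return sum(word in page_text for word in query.lower().split())

def retrieve(query: str, top_k: int = 2) -> list[str]:
    """Stage 1 (SEO territory): select the best-ranked pages at query time."""
    ranked = sorted(INDEX, key=lambda url: score(query, INDEX[url]), reverse=True)
    return ranked[:top_k]

def summarize(query: str, urls: list[str]) -> str:
    """Stage 2 (GEO territory): consolidate what was found into one answer."""
    found = "; ".join(INDEX[url] for url in urls)
    return f"For '{query}', the sources say: {found}"

query = "best widget cleaner"
print(summarize(query, retrieve(query)))
```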

LLMs use search engines before synthesizing information. Like people, they pick from a list of the most popular search results, and the better-ranked page wins. Outranking your competitor here involves all the long-standing practices of search engine optimization, plus a newer one called query fan-out, in which the model expands a single question into several related search queries.
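Query fan-out can be pictured roughly like this. The expansion rules and the search stub below are invented for illustration (real engines generate the sub-queries with the LLM itself), but the shape is the point: one question becomes several searches whose results are merged before summarization.

```python
def fan_out(question: str) -> list[str]:
    """Expand one user question into several related search queries."""
    return [
        question,
        f"best {question}",
        f"{question} reviews",
        f"{question} alternatives",
    ]

def search(query: str) -> list[str]:
    """Placeholder search call returning ranked URLs for one query."""
    return [f"https://example.com/{query.replace(' ', '-')}"]

def gather(question: str) -> list[str]:
    # Run every fanned-out query and merge the results, de-duplicated,
    # before the summarizing stage sees any of it.
    seen, merged = set(), []
    for query in fan_out(question):
        for url in search(query):
            if url not in seen:
                seen.add(url)
                merged.append(url)
    return merged

print(gather("crm for small teams"))
```

The practical implication is that ranking for a single head term is no longer enough; pages that also rank for the fanned-out variations are more likely to be retrieved.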

Today, GEO strategies do not make a brand more findable. Take the popular /llms.txt file, for instance. There’s a lot of confusion about whether this GEO strategy makes your page more likely to be found by AI. Currently, it does not. So far, only SEO has been shown to improve a webpage’s chances of being found by AI agents [4]. The /llms.txt file is intended to help LLMs browse your website after they’ve found it. This isn’t to say that it currently does what it’s intended to do, only that it’s a good future-proofing strategy.
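For reference, a minimal /llms.txt might look like the sketch below, following the proposed convention of a Markdown file at the site root with an H1 title, a short blockquote summary, and H2 sections of links. The brand name and URLs are placeholders.

```markdown
# YourBrand

> YourBrand makes a gentle widget cleaner for delicate widgets.

## Docs

- [Getting started](https://yourbrand.com/docs/start): setup in five minutes
- [Pricing](https://yourbrand.com/pricing): plans and billing FAQ

## Optional

- [Blog](https://yourbrand.com/blog): long-form articles and changelogs
```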

Regardless, once an LLM has sufficiently used the web, it must summarize what it found. You can influence that summary to favor your brand with GEO, which requires upkeep. This involves staying current with new strategies and LLM updates while iterating through versions of the same content. The goal is to optimize how AI agents summarize your material without changing how often it gets retrieved. Without the right systems in place, this cycle of monitoring and iteration can quickly become overwhelming to manage alone.

How you can get started

Ripenn.ai has some public tools available right now. They include:

  1. Web Analytics: a site-wide SEO and GEO analyzer.
  2. Content Analysis: a content analysis tool for smaller chunks of text.
  3. llms.txt Checker: a validator that verifies your llms.txt file’s formatting.

Use them; they’re free! See where your brand stands right now.

In general, brands stand to benefit more from focusing on SEO first, then pivoting to GEO. There is little point in optimizing for the summarizing stage without a good likelihood of being retrieved in the first place.

Conclusion

With so much confusion around how LLMs work and the differences between SEO and GEO, there is plenty of misinformation about how to show up in ChatGPT. This blog, along with some of Ripenn’s tools, aims to dispel some of that confusion.

We want all of our readers and users to understand where we are aiming. Every tool we build is grounded in how LLMs work and in an up-to-date perspective on how SEO and GEO play together.

Curious to learn more? Reach out to us anytime. We’re happy to chat!

Footnotes

[1] It’s not sufficient for an answer to be present once in training data. It must also have sufficient weight. At the end of the day, these models are simply predicting the next token. Often the most likely answer isn’t the right one.

[2] These models will also occasionally use their capacity to reason about a response (such as when performing mathematics) when a structured set of steps is required to arrive at a proper conclusion.

[3] LLMs are less likely to use the web for general inquiries, such as “How do I clean a fresh wound?” In this case, it’s more likely that an AI will respond broadly about antiseptics. It may recommend a brand it has seen in training rather than using the web to find newer brands.

[4] This says nothing about the future. New retrieval strategies will emerge. For instance, it’s very possible that LLMs will begin to select pages that have an /llms.txt file on their root domain instead of wasting tokens on pages that don’t. There is no reason to believe this will be the case, the point is that in the future GEO might seep into SEO territory (improving retrieval).