From Google to ChatGPT: Is It Time to Start Creating LLM-Friendly Content?

As search habits shift toward AI-powered tools, it’s becoming essential for websites to adapt. But where does the still-unofficial llms.txt file fit into this changing landscape?

We’re witnessing a major shift in how people search for information. More and more people are skipping Google and going directly to large language models (LLMs) like ChatGPT to find answers. And this isn’t just a small change in user behavior; it could reshape how content is created and distributed across the web.

To respond to this trend, a new file format called llms.txt has been proposed. That said, it’s important to be clear: no major LLM providers (OpenAI, Anthropic, Google) officially support it yet. So creating an llms.txt file right now is more like preparing for what might come next. But if it’s not supported yet, what’s the point?

Well, we’ve seen something similar before. The rise of search engines led websites to adopt files like robots.txt to manage crawling and sitemap.xml to organize content. In the same way, llms.txt could become an early building block for making content more understandable to language models.

What Is llms.txt and What Problem Is It Trying to Solve?

llms.txt is a simple Markdown-based file designed to help large language models better understand the content of a website. Like robots.txt, it sits in the root directory of your site (/llms.txt).
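Because the file lives at a fixed, well-known path, checking whether a site publishes one takes only a few lines. The sketch below is illustrative, not part of any official llms.txt tooling; the function names are my own.

```python
import urllib.request
import urllib.error


def llms_txt_url(base_url: str) -> str:
    """Build the conventional /llms.txt URL for a site root."""
    return base_url.rstrip("/") + "/llms.txt"


def fetch_llms_txt(base_url: str, timeout: float = 10.0):
    """Try to fetch a site's llms.txt; return its text, or None if absent."""
    try:
        with urllib.request.urlopen(llms_txt_url(base_url), timeout=timeout) as resp:
            return resp.read().decode("utf-8", errors="replace")
    except urllib.error.URLError:
        return None
```

For example, `fetch_llms_txt("https://example.com")` would request `https://example.com/llms.txt` and return `None` if the site doesn't serve one.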

So, why do we need it?

The current web setup is designed mostly with search engines in mind. While robots.txt controls what bots can crawl and sitemap.xml shows what pages exist, neither really tells an AI what your content means or what’s important. That’s the gap llms.txt aims to fill.

What Makes llms.txt Different?

  • It organizes content in a format that LLMs can more easily parse (simple Markdown, clear writing, link lists).

  • It highlights your key content and separates the less critical parts into an “Optional” section.

  • It shows how your pages connect to other sources through external links.

  • It helps LLMs give better answers during inference (i.e., real-time generation), though it’s not meant for training data.

A Quick Look at the llms.txt Structure

  • # H1 Heading: The name of the site or project

  • > Blockquote: A short description

  • Explanatory Paragraphs or Lists: Includes extra details like which content is highlighted and how it should be used

  • ## Titled File Lists: Specific content categories such as Documentation, Tutorials, API Reference, etc.

  • ## Optional Section: Content that can be considered less important or skipped

# FastHTML

> FastHTML is a Python library which brings together Starlette, Uvicorn, HTMX, and Fastcore.

Important notes:

- Although parts of its API are inspired by FastAPI, it is _not_ compatible with FastAPI syntax and is not targeted at creating API services.
- FastHTML is compatible with JS-native web components and any vanilla JS library, but not with React, Vue, or Svelte.

## Docs

- [FastHTML quick start](https://answerdotai.github.io/fasthtml/tutorials/quickstart_for_web_devs.html.md)
- [HTMX reference](https://raw.githubusercontent.com/path/reference.md): Brief description of all HTMX attributes...

## Examples

- [Todo list application](https://raw.githubusercontent.com/path/adv_app.py): Detailed walk-thru...

## Optional

- [Starlette full documentation](https://gist.githubusercontent.com/path/starlette-sml.md)

📌 This example is based on the reference structure provided by llmstxt.org.
It’s built around a real project called FastHTML and shows how an llms.txt file can be structured.
To generate a custom file for your own site, you can get an auto-generated llms.txt suggestion based on your sitemap at llmstxt.firecrawl.dev.
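If you'd rather not rely on an external service, a first draft can be derived from the sitemap you already have. The following is a minimal sketch that turns a sitemap.xml string into an llms.txt skeleton; it assumes a standard sitemap namespace, and the single "## Docs" section is a placeholder you'd curate by hand afterwards.

```python
import xml.etree.ElementTree as ET

# Standard sitemap protocol namespace (sitemaps.org).
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}


def llms_txt_skeleton(site_name: str, description: str, sitemap_xml: str) -> str:
    """Build a rough llms.txt draft from the contents of a sitemap.xml."""
    root = ET.fromstring(sitemap_xml)
    urls = [loc.text.strip() for loc in root.findall(".//sm:loc", NS) if loc.text]
    lines = [f"# {site_name}", "", f"> {description}", "", "## Docs", ""]
    for u in urls:
        # Use the last path segment as a stand-in link title.
        title = u.rstrip("/").rsplit("/", 1)[-1] or u
        lines.append(f"- [{title}]({u})")
    return "\n".join(lines) + "\n"
```

The output is only a starting point: the real value of llms.txt comes from hand-picking the key pages, writing short descriptions, and moving less critical links into an Optional section.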

Who’s Using llms.txt?

Even though it’s not a standard yet, several tech companies and open-source projects have started experimenting with it. Cloudflare and Anthropic are among the early adopters.
You can find more examples at llmstxt.site or directory.llmstxt.cloud.

So, Is llms.txt Really Necessary?

Right now, llms.txt feels more like a thoughtful gesture than a pressing need, kind of like saying:

“Maybe LLMs don’t fully understand our site yet… let’s give them a hand.”

But the truth is, today’s LLMs are already pretty good at understanding content.
Thanks to existing tools like sitemap.xml and robots.txt, most websites are already reasonably clear to these models.

Still, as LLMs evolve and expectations rise, curated, targeted content that provides deeper context, structured summaries, and better linking could become more valuable.

At that point, llms.txt might give early adopters a useful advantage.

So, will it make a big difference today? Probably not.
Is it easy to set up? Yes.
Any downside? Not really.
Could it help in the future? Possibly.

That makes llms.txt a “low risk, low cost, potential benefit” kind of move.