At CascadiaJS last year, I gave a talk about a problem I’ve been running into while maintaining Arcjet’s docs: more developers are reading documentation through AI assistants instead of directly on the web. Tools like ChatGPT, Copilot, Cursor, and others now sit between the user and the documentation itself. That shift raises a simple but important question: can these assistants actually read what we write?
If you’d like to watch the talk first, you can find the recording on YouTube. What follows is a written version of that talk, expanded a bit for clarity.
For years, SEO effectively meant “optimize for search engines so humans can find your page.” Most journeys started with Google. Today many journeys start inside an assistant. If those tools can’t read your content, then users relying on those assistants never get the information. They get stuck and move on.
So the challenge is no longer just “write good docs for humans.” It’s also “make the docs legible to the tools humans increasingly use to read them.”
AI assistants can get information from several sources: training data, proprietary knowledge bases, live web fetching, embedded browsers, and dedicated plugins or tools.
For this project I focused on live fetching, because that’s what developers interact with most today: ChatGPT search, GitHub Copilot’s #fetch, Cursor’s @web, and similar features.
These tools all promise to “fetch this URL” and then summarize, extract, or answer questions about it. The problem is we have no idea what they can actually see. Are they running JavaScript? Applying CSS? Ignoring hidden content? We can’t rely on assumptions.
So I treated each assistant like a black box and tested it.
The basic idea is simple:

1. Publish a page containing unique marker strings (“flags”) embedded in different ways.
2. Ask the assistant to fetch the page and report everything it can see.
3. Compare what it reports against what is actually in the page.

With enough test cases, you can reconstruct precisely what a given assistant reads or ignores.
I created a page containing two special flags (sketched below):

- A control flag in plain, server-rendered text.
- A second flag hidden with CSS (display: none).
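Here is a minimal sketch of what that kind of test page can look like. The markup and flag strings are invented for illustration; the real pages use their own generated flags.

```html
<!-- Minimal sketch of a flag test page. The flag strings are made up for illustration. -->
<!doctype html>
<html>
  <body>
    <!-- Control flag: plain, server-rendered, visible text -->
    <p>Control flag: FLAG-VISIBLE-0001</p>

    <!-- Hidden flag: present in the HTML source, but removed from the rendered page by CSS -->
    <p style="display: none">Hidden flag: FLAG-HIDDEN-0002</p>
  </body>
</html>
```

You then ask the assistant to fetch the URL and list every flag it can find. If it reports only the control flag, it is evidently applying CSS and discarding hidden elements; if it reports both, it is reading the raw HTML.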
Copilot consistently reported the control flag but never the hidden one. That strongly suggests Copilot applies CSS and strips visually hidden content before reading the page.
This was surprising. I expected most assistants to read raw HTML, not evaluate styling.
To make this repeatable, I wrote a small JavaScript CLI tool called LLMO World.
The tool includes multiple templates, such as JavaScript execution tests where inline code appends the test flag to the DOM. This lets us see whether an assistant runs scripts before reading the page. The most surprising finding so far is that some assistants execute JavaScript, evaluate stylesheets, and strip hidden elements, and these behaviors differ significantly between tools. Copilot’s #fetch behaves noticeably differently from other assistants I tested. These differences already matter for documentation authors.
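As a rough illustration, a JavaScript execution test might look something like this (a sketch of the idea, not the exact template the tool ships):

```html
<!-- Sketch of a JS-execution test: the second flag only exists in the DOM if the fetcher runs the script -->
<body>
  <p>Control flag: FLAG-STATIC-0003</p>
  <script>
    // If the assistant executes JavaScript before reading the page,
    // it will also see this second flag.
    const flag = document.createElement("p");
    flag.textContent = "JS flag: FLAG-SCRIPT-0004";
    document.body.appendChild(flag);
  </script>
</body>
```

One wrinkle: an assistant that reads the literal text of `<script>` elements would also see the flag string in the page source, which is why literal `<script>` contents are tested as a separate case in the table below.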
Here are some early results:
| Test category | ChatGPT /search | Copilot #fetch |
|---|---|---|
| Server‑rendered text | ✅ | ✅ |
| JS‑inserted content | ❌ | ✅ |
| Literal `<script>` contents | ❌ | ❌ |
| Literal `<style>` contents | ❌ | ❌ |
| CSS hidden (`display: none`) | ✅ | ❌ |
Everything here is early and needs more validation, but a pattern is emerging. If you care about AI assistants understanding your documentation, you probably want to:

- Server-render the content that matters, so it is present in the HTML response itself.
- Avoid relying on client-side JavaScript to inject important information.
- Avoid hiding important content with CSS, since some assistants discard it.
Basically, it looks like classic SEO rules are useful again. If the content isn’t clearly present in the HTML the assistant fetches, there’s a good chance it won’t “see” it.
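To make that concrete, here is an illustrative fragment (the package name and wording are hypothetical). Based on the results above, only the first line is something both assistants reliably saw:

```html
<!-- Illustrative only: which parts an assistant "sees" depends on the tool, per the table above -->
<p>Install the SDK with <code>npm install example-sdk</code>.</p>  <!-- server-rendered: both assistants read it -->

<p style="display: none">Requires Node.js 18 or later.</p>  <!-- CSS-hidden: Copilot's #fetch missed content like this -->

<script>
  // JS-inserted: ChatGPT search missed content added this way
  document.body.insertAdjacentHTML("beforeend", "<p>Set your API key in a .env file.</p>");
</script>
```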
This project is still early. There are many more templates to build and many more assistants to test. But the initial results show clear differences in how these tools fetch and interpret documentation. The only reliable way to know what they can read is to test them directly.
If you’d like to explore the tests or run them yourself, the source code is available on GitHub. Arcjet supported this work because understanding how developers consume documentation is central to helping them build secure applications. Better docs lead to better integrations and fewer mistakes, and that helps everyone.