Most conversations about AI search treat visibility as a single problem: get cited by ChatGPT, Perplexity, Gemini, or Claude. In practice, three different things have to go right, and they fail for different reasons. Understanding the three layers is the difference between fixing the wrong problem and fixing the right one.
Layer 1 is retrieval. Can AI crawlers actually reach your site, parse it, and read route-specific content from the initial HTML response? This is mostly an architectural question. Sites built on modern JavaScript frameworks often deliver the same near-empty shell on every URL, with content painted in later by the browser. AI crawlers don't always wait. When they don't, your homepage and your service pages can look identical to them — which means the system has nothing route-specific to cite.
Layer 2 is entity recognition. Even when a page is retrievable, AI systems still have to figure out who you are, what you do, where you operate, and which third parties corroborate those claims. This is where structured data, sameAs links to authoritative profiles, clear authorship, and substantive on-page facts matter. A page can be perfectly readable and still leave the AI system uncertain about your identity. Uncertain systems cite cautiously, or not at all.
Layer 3 is what the AI actually decides to cite in a given session, for a given prompt, subject to the model's own grounding and the platform's own policies. No vendor can control this layer directly. What we can do is make Layers 1 and 2 strong enough that when an AI system is choosing between sources, your page is a credible option.
This framing changes what "fixing AI visibility" means. Architecture without entity clarity gets you crawled and ignored. Entity clarity without retrievable architecture means the signals never get read. Only when both layers are strong does a page have a real chance at Layer 3.
When we audited our own site, Entity Level Authority, on May 1, 2026, we scored 37. The first round of work was almost entirely Layer 1 — making the site retrievable. The score moved to 65 in a day. The second round addressed deeper architectural questions and moved it to 69. The remaining drag now lives in Layer 2: structured data completeness, authorship signals, focused topic coverage.
Architecture is necessary. It is not sufficient.
If you're a plaintiff trial firm trying to understand why your name doesn't surface when prospective clients ask AI systems for help, the answer is rarely one thing. It's almost always a combination of layers — and the work is figuring out which layer is actually binding.