When a prospective client asks ChatGPT, Perplexity, Gemini, or Claude for help finding a lawyer, three different things have to happen for your firm to appear in the answer. They depend on different parts of your site, fail for different reasons, and are fixed by different kinds of work. Treating AI visibility as a single problem — the way most agencies still treat it — is the most common reason that money spent on "AI SEO" produces nothing.

This article lays out the three-layer framework we use at Entity Level Authority (ELA) to audit and remediate AI visibility for plaintiff trial firms. It is the same framework that produced the findings in our publicly documented case study, where our own site moved from a visibility score of 37 to 65 in a single day, and from 65 to 69 over the following two weeks. The framework also explains why the second move was smaller than the first — and what that tells us about where the remaining work actually lives.

The Layers

Layer 1 is retrieval mechanics. Can AI crawlers reach your site, parse it, and pull route-specific content from the initial HTML response?

Layer 2 is entity recognition. When an AI system reads what it retrieves, can it identify who your firm is, what you do, where you operate, and which third parties corroborate those claims?

Layer 3 is governed retrieval. In a specific session, given a specific prompt, does the AI system treat your page as a trustworthy source worth citing?

A page can be strong on one layer and limited by another. Crawler access alone does not produce citations. Entity clarity alone does not guarantee retrieval. And even a strong Layer 1 and Layer 2 profile is not a guarantee of Layer 3 selection inside any specific AI answer. Each layer is necessary. None is sufficient on its own.

Layer 1: Can the Crawler Read Route-Specific Content?

The first question is mechanical. AI systems use crawlers — automated agents that fetch web pages the same way a browser does — to build the index they later retrieve from. If a crawler cannot reach your site, or reaches it but reads only a generic shell instead of route-specific content, no later layer matters.

There are three common failure modes at Layer 1.

The first is access. Robots.txt or firewall rules sometimes block major AI crawlers — GPTBot, ClaudeBot, PerplexityBot, Google-Extended, Bytespider, Applebot-Extended, and others. This is often unintentional, inherited from old SEO configurations written before AI crawlers existed.
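
One quick way to test the robots.txt part of this is to parse the live file and ask it whether each AI user agent may fetch a representative page. Below is a minimal sketch using Python's standard library; the domain and route are placeholders, and this covers only robots.txt rules, not firewall- or CDN-level blocks, which have to be verified from server logs or from the crawler's side:

    from urllib.robotparser import RobotFileParser

    # Placeholder domain and route; substitute the firm's real site and a priority page.
    SITE = "https://www.example-lawfirm.com"
    TEST_URL = SITE + "/practice-areas/truck-accidents"

    AI_CRAWLERS = [
        "GPTBot", "ClaudeBot", "PerplexityBot",
        "Google-Extended", "Bytespider", "Applebot-Extended",
    ]

    parser = RobotFileParser()
    parser.set_url(SITE + "/robots.txt")
    parser.read()  # fetch and parse the live robots.txt

    for agent in AI_CRAWLERS:
        allowed = parser.can_fetch(agent, TEST_URL)
        print(f"{agent:20s} {'allowed' if allowed else 'BLOCKED'}")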

The second is rendering. Many modern sites are built as single-page applications. A user's browser executes JavaScript that paints content into a near-empty HTML shell. Some AI crawlers execute JavaScript. Many do not, or do so inconsistently, or do so only for a sample of pages. When the crawler doesn't execute JavaScript, every URL on the site can return the same shell, and the system has no way to tell your "Truck Accidents" page from your "Premises Liability" page from your homepage. Route-specific content has to be present in the initial HTML response for reliable retrieval.
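
One way to spot the rendering failure from the outside is to fetch a route's initial HTML without executing any JavaScript and check whether route-specific text is present in the raw response. A rough sketch; the URLs and phrases are placeholders for a firm's actual priority routes:

    import urllib.request

    # Placeholder routes, each paired with a phrase that should appear only on that page.
    CHECKS = {
        "https://www.example-lawfirm.com/practice-areas/truck-accidents": "truck accident",
        "https://www.example-lawfirm.com/practice-areas/premises-liability": "premises liability",
    }

    for url, phrase in CHECKS.items():
        req = urllib.request.Request(url, headers={"User-Agent": "visibility-check"})
        with urllib.request.urlopen(req, timeout=10) as resp:
            html = resp.read().decode("utf-8", errors="replace")
        # A missing phrase suggests the content is painted in later by JavaScript
        # and is invisible to crawlers that do not render.
        print(url)
        print(f"  '{phrase}' in initial HTML: {phrase.lower() in html.lower()}")

If every route returns near-identical HTML, or the distinguishing phrases are absent, the site is serving a shell and Layer 1 remediation comes first.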

The third is structure. Even when content is in the initial HTML, missing heading hierarchy (H1 through H4), unclear URL patterns, absent canonical tags, and incomplete sitemaps make a site harder to index well. AI systems do not require perfection here, but the gap between "indexable" and "well-indexed" is real.
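
The same raw-HTML fetch can be used to check the basics of structure. The sketch below, standard library only, reports the heading outline and the canonical tag for a single placeholder URL:

    import urllib.request
    from html.parser import HTMLParser

    class StructureCheck(HTMLParser):
        """Collects heading tags and the canonical link from one HTML document."""
        def __init__(self):
            super().__init__()
            self.headings = []
            self.canonical = None

        def handle_starttag(self, tag, attrs):
            attrs = dict(attrs)
            if tag in ("h1", "h2", "h3", "h4"):
                self.headings.append(tag)
            if tag == "link" and attrs.get("rel") == "canonical":
                self.canonical = attrs.get("href")

    URL = "https://www.example-lawfirm.com/practice-areas/truck-accidents"  # placeholder
    req = urllib.request.Request(URL, headers={"User-Agent": "visibility-check"})
    html = urllib.request.urlopen(req, timeout=10).read().decode("utf-8", errors="replace")

    check = StructureCheck()
    check.feed(html)
    print("H1 count:", check.headings.count("h1"))  # one per page is the usual target
    print("Heading outline:", " > ".join(check.headings) or "none found")
    print("Canonical:", check.canonical or "missing")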

The remediation work at Layer 1 is architectural. The three main approaches are server-side rendering, pre-rendering specific routes, and serving edge-rendered HTML to crawler user agents. Which approach is correct depends on the platform the site is built on, which routes matter most, and what trade-offs are acceptable for human users. There is no universal answer.
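
The edge-rendered variant usually reduces to a user-agent check at the CDN or server layer: known AI crawlers get the pre-rendered, route-specific HTML, and everyone else gets the normal application. A minimal, platform-agnostic sketch of that decision; the token list is illustrative, and the surrounding serving logic is left to whichever platform the site runs on:

    AI_CRAWLER_TOKENS = (
        "gptbot", "claudebot", "perplexitybot",
        "google-extended", "bytespider", "applebot-extended",
    )

    def wants_prerendered_html(user_agent: str) -> bool:
        """True when the request comes from a known AI crawler and should be
        served the pre-rendered route HTML instead of the JavaScript shell."""
        ua = user_agent.lower()
        return any(token in ua for token in AI_CRAWLER_TOKENS)

    # Two example user-agent strings.
    print(wants_prerendered_html("Mozilla/5.0 (compatible; GPTBot/1.0)"))             # True
    print(wants_prerendered_html("Mozilla/5.0 (Windows NT 10.0; Win64) Chrome/124"))  # False

The pre-rendered response should carry the same route-specific content a human eventually sees in the browser, just rendered ahead of time rather than by client-side JavaScript.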

When we ran our own first audit on May 1, 2026, our site scored 37. The first round of remediation focused almost entirely on Layer 1: rebuilding the llms.txt file, inlining JSON-LD schema in static HTML, adding static internal-linking content per route, and adding FAQPage schema. The next day the score was 65. That 28-point movement was Layer 1 work showing up in the audit.

Layer 2: Can the AI System Tell Who You Are?

Layer 2 is about identity, corroboration, and extractable facts.

An AI system reading your site needs to answer questions a search engine never had to answer well. Who is this organization? What do they do? Where do they operate? Who are the people behind it? What credentials do they hold? Which third parties — courts, bar associations, journalism, directories, professional networks — confirm what the site claims about itself?

The signals that answer these questions are mostly structured. Schema.org markup in JSON-LD blocks: Organization, LegalService, Person, Article, FAQPage, BreadcrumbList. Stable @id identifiers that connect schema blocks across pages. A canonical identity page (typically /about) that carries the firm's full identity record — name, address, telephone, sameAs links to authoritative external profiles, and a clear practice description — in one place. Author attribution on content pages, linked to Person entities. Explicit date signals on important pages so AI systems can judge how current the content is.
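
As a concrete, entirely hypothetical illustration, the identity record on a canonical /about page might be emitted as a JSON-LD block along these lines. Every value is a placeholder, and which properties a given firm actually needs depends on its own profile:

    import json

    # Hypothetical firm details; every value below is a placeholder.
    identity = {
        "@context": "https://schema.org",
        "@type": "LegalService",
        "@id": "https://www.example-lawfirm.com/#organization",  # stable @id reused across pages
        "name": "Example Trial Law Firm",
        "url": "https://www.example-lawfirm.com/",
        "telephone": "+1-555-555-0100",
        "address": {
            "@type": "PostalAddress",
            "streetAddress": "100 Main Street, Suite 400",
            "addressLocality": "Springfield",
            "addressRegion": "IL",
            "postalCode": "62701",
            "addressCountry": "US",
        },
        "description": "Plaintiff trial firm handling truck accident, premises "
                       "liability, and product liability cases.",
        "sameAs": [
            "https://www.examplestatebar.org/attorneys/example-trial-law-firm",
            "https://www.linkedin.com/company/example-trial-law-firm",
        ],
    }

    # The serialized block belongs in a <script type="application/ld+json"> tag in the page head.
    print(json.dumps(identity, indent=2))

Person and Article schema on other pages can then reference the same @id, which is what lets the blocks resolve to one entity rather than a collection of disconnected fragments.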

Layer 2 also depends on the visible page content. Thin pages — under 300 words — give AI systems little to extract and cite. Pages without clear question-shaped headings or short direct-answer paragraphs are harder to use as sources for the kinds of factual queries that AI search excels at. Pages without clear authorship can still rank, but they're harder to cite confidently when the model is choosing which sources to attribute.
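
The thin-page threshold is easy to check mechanically: strip the tags from the initial HTML and count the words that remain. A rough sketch against one placeholder URL, using the 300-word figure mentioned above:

    import re
    import urllib.request

    URL = "https://www.example-lawfirm.com/practice-areas/premises-liability"  # placeholder
    req = urllib.request.Request(URL, headers={"User-Agent": "visibility-check"})
    html = urllib.request.urlopen(req, timeout=10).read().decode("utf-8", errors="replace")

    # Crude visible-word estimate: drop script/style blocks and tags, then count words.
    body = re.sub(r"(?is)<(script|style)\b.*?</\1>", " ", html)
    text = re.sub(r"(?s)<[^>]+>", " ", body)
    words = len(text.split())
    print(words, "visible words -", "thin (under 300)" if words < 300 else "over the 300-word mark")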

One useful diagnostic: focused topic pages. If a firm claims to handle truck accidents, premises liability, and product liability, does each topic have its own page with substantive content, or are all three mentioned only in passing on a generic services overview? AI systems are more likely to retrieve and cite pages that answer a specific topic clearly than pages that gesture at many topics shallowly.

The remediation work at Layer 2 is content-and-schema work. Adding missing schema types where they're warranted. Adding sameAs links to authoritative profiles. Designating a canonical identity page. Expanding thin pages with substantive content. Adding visible author bylines and matching Person schema. Building a coherent set of focused topic pages instead of relying on omnibus pages.

In our case, Layer 2 is where the current drag sits. As of our most recent audit, the Layer 1 sub-scores are 100 (AI Crawler Access), 86 (Technical GEO Foundations), 83 (LLM Discoverability), and 77 (AI Citability). The Layer 2 sub-scores are 48 (Content Quality / E-E-A-T) and 34 (Structured Data). The remaining work on our own site is mostly Layer 2 work.

Layer 3: What the AI System Actually Decides to Cite

Layer 3 is what the model does in a specific session, with a specific prompt, given everything it knows.

This is the layer where the conversation often goes wrong. Buyers want to be told that a vendor can guarantee citations. No vendor can. Layer 3 is shaped by the model's internal grounding, the prior context in the conversation, the specificity of the user's prompt, the platform's policies about what kinds of sources it cites, and factors none of us can directly observe.

What we can do is influence Layer 3 indirectly by making Layer 1 and Layer 2 strong enough that, when the AI system is choosing between sources, your page is a credible option.

This is also the layer where honest measurement gets hard. ELA cannot measure Layer 3 from public HTML the way it measures Layers 1 and 2. The only valid way to observe Layer 3 is empirical: run priority queries in fresh AI chat sessions, without prior context, repeat them several times because answers vary between runs, and track whether the firm's pages are cited never, sometimes, or consistently. Never-cited pages are below the threshold. Sometimes-cited pages are unstable. Consistently-cited pages are closer to what we call citation-ready.
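
A minimal sketch of the bookkeeping that methodology implies: for each priority query, record whether the firm was cited in each fresh-session run, then classify the pattern. The cutoffs here (zero hits, every hit, anything in between) are illustrative; the point is the three categories, not an exact threshold:

    def classify_citation_pattern(cited_per_run):
        """Classify repeated fresh-session runs of one query as never-,
        sometimes-, or consistently-cited. Cutoffs are illustrative."""
        hits = sum(cited_per_run)
        if hits == 0:
            return "never-cited"
        if hits == len(cited_per_run):
            return "consistently-cited"
        return "sometimes-cited"

    # Hypothetical results from five fresh-session runs of three priority queries.
    observations = {
        "truck accident lawyer in Springfield":   [False, False, False, False, False],
        "premises liability attorney near me":    [True, False, True, False, False],
        "who handles trucking cases in Illinois": [True, True, True, True, True],
    }

    for query, runs in observations.items():
        print(query, "->", classify_citation_pattern(runs))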

This is what we mean when we say AI visibility is threshold-driven, not incremental. The goal isn't to move every page a little. It's to move priority pages across the threshold from never-cited to sometimes-cited, and from sometimes-cited to consistently-cited. Small uniform improvements across many pages often produce no observable change in Layer 3 behavior. Concentrated work on a few priority pages often does.

Why Phase 1 Was Right, and Incomplete

When we published our Phase 1 case study in early May, the headline finding was that a site relying on a JavaScript-heavy framework without server-side rendering or pre-rendering would hit a sub-score ceiling. That was correct, but it was Layer 1 talking.

The 28-point movement from 37 to 65 was a Layer 1 story: a site whose route-specific content was previously invisible to non-JavaScript crawlers became visible to them. Once that happened, the audit's Layer 1 sub-scores rose substantially, and the composite score with them.

The next four points — from 65 to 69 over the following two weeks — were the Layer 1 work continuing to compound, plus a small additional architectural improvement (edge-rendered HTML for crawler user agents on the blog route pattern). But the curve flattened, because the architectural work was running out of headroom. The site had become retrievable. What it had not yet become was an entity that AI systems could fully recognize, corroborate, and confidently cite.

That's the refinement. Phase 1's claim about SSR was true at Layer 1. The three-layer model formalizes what Phase 1 implied: architecture is necessary. It is not sufficient. The next phase of work moves into Layer 2, and the case study documenting it will look different from the first one — fewer architectural shifts, more entity clarity, more focused topic pages, more authorship signals, more third-party corroboration through sameAs links and visible expertise.

What This Means for Plaintiff Trial Firms

If you're a plaintiff trial firm trying to understand why your name doesn't surface when prospective clients ask AI systems for help finding a lawyer, the diagnostic question isn't "are we doing AI SEO?" The question is "which layer is binding for us right now?"

Sometimes it's Layer 1. Sites built on Squarespace, Wix, or older WordPress builds with heavy JavaScript theming sometimes look identical to AI crawlers on every URL. The fix is architectural: enable server-side rendering, pre-render the highest-value routes, or serve crawler-specific HTML at the edge. Until that fix lands, no amount of content or schema work will produce reliable retrieval.

Sometimes it's Layer 2. Sites with sound architecture still get under-cited because the firm's identity is diffuse — no canonical identity page, no sameAs links to bar listings or court directories, thin practice-area pages, content without clear authorship. The fix here is patient: clean up the entity record, build focused topic pages, attribute content to specific attorneys, add the structured data that lets an AI system corroborate what the page claims.

And sometimes — increasingly — it's both. The firms that win in AI search over the next 12 to 24 months will be the ones that fix Layer 1 quickly, then commit to the slower Layer 2 work that turns a retrievable site into a citable one.

Layer 3 is not something a vendor can sell you. It's the consequence of doing the first two layers well. Any agency that promises specific citations in ChatGPT or Perplexity is promising something they can't deliver. What they can deliver — what we deliver — is a site whose underlying architecture and entity signals make it a credible source for AI systems to choose, when they're choosing.

That is what we audit for. That is what the AI Search Visibility Audit measures. That is the framework.


Entity Level Authority publishes its own case studies as part of the audit deliverable. The Phase 1 case study, documenting the 37 → 65 movement, is available at /case-studies/ela-phase-1.html. A second case study documenting the Layer 2 work is in preparation.