01 / Premise
The audit we run on clients, run on ourselves.
Most case studies hide their constraints. This one names them.
Entity Level Authority (ELA) builds AI Search Visibility audits for plaintiff trial law firms. The methodology is straightforward: run a site through a seven-dimension diagnostic, return a score from zero to one hundred, and produce a remediation plan tailored to the actual technology stack the site is built on.
On May 1, 2026, we ran the audit on our own site. The result was a 37. Below the threshold the engine labels "AI-Invisible." We had a structured-data sub-score of zero out of one hundred. We had an LLM Discoverability sub-score of eighteen.
This is the case study of what we did about it the next day, told without hedging the parts that did not work and without overstating the parts that did.
02 / Starting Point
What the engine saw on day zero.
The site was built on React, Vite, and Lovable — a single-page application architecture where most page content renders client-side after JavaScript executes. AI crawlers, including the ones used by ChatGPT, Perplexity, Google AI Overviews, and Claude, generally do not run JavaScript. They take the raw initial HTML response, extract what is there, and move on.
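You can reproduce that crawler's view in a few lines. A minimal sketch, assuming a Node 18+ runtime with built-in fetch; the URL and marker string are illustrative, not our production check:

```ts
// Fetch the raw initial HTML the way a non-JS crawler would and check
// whether a given piece of content exists before hydration.
// Assumes Node 18+ (built-in fetch); URL and marker are illustrative.
async function rawHtmlContains(url: string, marker: string): Promise<boolean> {
  const res = await fetch(url); // raw HTML response, no JavaScript executed
  const html = await res.text();
  return html.includes(marker);
}

rawHtmlContains("https://example.com/", "Related Resources").then((found) => {
  console.log(found ? "present in raw HTML" : "client-side render only");
});
```

If the marker only appears after JavaScript runs, the check prints "client-side render only", which is exactly the failure mode at issue here.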
What was there: a viewport tag, a canonical link, an HTTPS certificate, and a robots.txt that did not block any of the twelve AI crawlers the engine checks for. What was missing: structured data of any kind, a complete llms.txt, route-specific internal linking, citable Q&A content, and an FAQPage schema.
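One of those missing signals deserves a quick sketch for readers who have not met it. llms.txt is a proposed root-level markdown file (llmstxt.org) that hands LLM crawlers a curated map of the site: an H1 name, a blockquote summary, then sections of annotated links. A minimal illustration, not our actual file:

```ts
// Sketch of a minimal llms.txt per the llmstxt.org proposal.
// Paths and descriptions are illustrative placeholders.
const llmsTxt = `# Entity Level Authority

> AI Search Visibility audits for plaintiff trial law firms.

## Core pages

- [Audit methodology](/methodology): the seven-dimension diagnostic
- [Self-audit case study](/case-study): the work described in this write-up
`;

// In a Vite project, saving this content as public/llms.txt ships it
// verbatim at /llms.txt in the build output.
```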
The audit's executive summary was direct about the cause: the visibility constraints were not isolated page-level mistakes. They traced to the architecture and to a small number of high-leverage signals that had not been deployed. Some of those signals could be addressed without changing rendering architecture. Some could not.
That distinction defined the scope of Phase 1. We worked on what was addressable inside the architecture, and we deferred what was not.
03 / Constraints
What we deliberately did not do.
The audit flagged a single architecture-limited finding: the SPA rendering pattern itself. The recommended fix — server-side rendering or static prerendering — would have resolved several downstream findings simultaneously. It would also have meant migrating off Lovable's default build pipeline, which was out of scope for a one-day cycle.
So we did not implement SSR. We did not change rendering architecture. We did not migrate hosting. We did not rewrite any existing page content beyond what the priorities below required.
This matters for one reason: it sets the score ceiling honestly. Phase 1 plus a future Phase 2 can move ELA's score meaningfully higher than 65, but the path to 80 and above runs through rendering architecture. That is a future decision. The number we report below is the number you can reach without it.
04 / The Honest Part
P3 took six iterations. Here is why that is informative.
The first three priorities deployed cleanly. P1 landed on attempt one. P2 validated against Google's Rich Results Test on the first deploy. P4 produced a clean FAQPage schema in a single pass.
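For context, a FAQPage payload has this general shape; the question and answer text below are illustrative, not the live copy P4 shipped:

```ts
// Sketch of a FAQPage JSON-LD payload of the kind P4 deployed.
// Question and answer text are illustrative.
const faqPage = {
  "@context": "https://schema.org",
  "@type": "FAQPage",
  mainEntity: [
    {
      "@type": "Question",
      name: "What does an AI Search Visibility audit measure?",
      acceptedAnswer: {
        "@type": "Answer",
        text: "A seven-dimension diagnostic scored from zero to one hundred.",
      },
    },
  ],
};

// However the stack injects it, the tag has to land in the raw HTML,
// not the post-hydration DOM, for a non-JS crawler to read it:
const tag = `<script type="application/ld+json">${JSON.stringify(faqPage)}</script>`;
```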
P3 took six tries. The first attempt rendered the Related Resources block as a normal React component. It looked correct in the browser. View Source on the deployed page showed nothing in the raw HTML — the links existed only in the post-hydration DOM. Invisible to crawlers.
The next attempt was a Vite build-time plugin that emits per-route HTML files with the Related Resources block already injected. That worked as a build step, but Lovable's hosting did not honor the Netlify-style _redirects file we used to serve those per-route files. The same root content was served at every URL. Wrong link sets on every page.
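Here is the shape of that plugin, under stated assumptions: the route list and block markup are illustrative, and it targets Vite's default dist output directory. This is a sketch of the technique, not the production plugin:

```ts
// Build-time plugin sketch: after Vite writes the bundle, copy the built
// index.html once per route with that route's Related Resources block
// injected into the raw markup. Routes and markup are illustrative.
import { mkdirSync, readFileSync, writeFileSync } from "node:fs";
import { join } from "node:path";
import type { Plugin } from "vite";

const routeBlocks: Record<string, string> = {
  "audits": "<nav data-related>links for /audits</nav>",
  "case-study": "<nav data-related>links for /case-study</nav>",
};

export function perRouteHtml(): Plugin {
  return {
    name: "per-route-html",
    apply: "build",
    closeBundle() {
      // Runs after the bundle is on disk.
      const template = readFileSync(join("dist", "index.html"), "utf8");
      for (const [route, block] of Object.entries(routeBlocks)) {
        const html = template.replace("</body>", `${block}\n</body>`);
        mkdirSync(join("dist", route), { recursive: true });
        writeFileSync(join("dist", route, "index.html"), html);
      }
    },
  };
}
```

Serving dist/audits/index.html at /audits is the step that depended on the _redirects file Lovable's hosting ignored.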
The final solution: inject all four route blocks into the single root index.html, hide them by default, and use a tiny inline script — running before React mounts — to reveal the matching block. Crawlers see the full link graph in raw HTML on every page. Users see the React-rendered version after hydration.
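That reveal pattern is small enough to show in full. A sketch, assuming each block ships in the raw HTML as something like <nav data-route="/audits" hidden> and the script is inlined after the blocks but before the React bundle; attribute names are illustrative, and the shipped version is this compiled to plain JS:

```ts
// Reveal the static block matching the current route before React mounts.
// All blocks exist in the raw HTML (crawlers see every one); users see
// only the matching block until hydration swaps in the React version.
const path = window.location.pathname.replace(/\/+$/, "") || "/";
document.querySelectorAll<HTMLElement>("nav[data-route]").forEach((block) => {
  block.hidden = block.dataset.route !== path;
});
```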
"The architecture is real. Working around it is real work. The audit we sell does not pretend otherwise."
This is what makes the case study worth publishing. A site stack that does not natively support what AI crawlers need will require workarounds. Some of those workarounds are clean. Some require multiple iterations to land. Pretending otherwise is how AI visibility consulting becomes overpromise-and-underdeliver work.
05 / What This Means
The case study you can hand to your firm.
A few things this work demonstrates that are worth naming directly:
Structured data deployment is a precondition, not a guarantee.
Phase 1 made ELA technically discoverable. Two of the three major AI systems we tested already recognize it independently. The third has not yet — that is normal, and it will catch up. The work creates the conditions for AI recognition. The recognition itself happens on the AI systems' own ingestion timelines.
Most of the high-leverage work is not architectural.
An audit score going from 37 to 65 in one day, on a site that did not change rendering architecture, answers the question "how much can a firm fix without rebuilding?" The answer is: meaningfully more than most firms assume. Structured data, llms.txt, citable content, and inlined static signals are deployable inside any modern stack, including SPA stacks that are otherwise hostile to AI crawlers.
The architectural ceiling is honest about itself.
A site staying on Lovable's default rendering will hit a sub-score ceiling that SSR or prerendering would clear. We chose not to take that on in this round, and the score reflects the choice. Phase 2 will cover what can be fixed without touching the architecture; an eventual architecture review would handle what cannot. The audit framework is precise about which findings live in which category.
Residual Items We Did Not Hide
The site removed its public subscribe and digest routes during this work. Lovable's SPA hosting serves the in-app NotFound page at HTTP 200 for unknown paths, which is a soft 404. We added noindex meta directives via react-helmet as a mitigation, but those directives only apply post-hydration, so AI crawlers that do not execute JavaScript still see the NotFound shell as a 200 response. This is the same SPA constraint that gates the higher score. Documented, not glossed over.
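The shape of that mitigation, for completeness; the component is illustrative and carries exactly the post-hydration limitation just described:

```tsx
// Sketch of the noindex mitigation on the NotFound route. react-helmet
// writes the meta tag into <head> only after React mounts, so a crawler
// that does not execute JavaScript never sees it.
import { Helmet } from "react-helmet";

export function NotFound() {
  return (
    <>
      <Helmet>
        <meta name="robots" content="noindex" />
      </Helmet>
      <main>Page not found.</main>
    </>
  );
}
```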
06 / Next
Phase 2 is content depth, expert signals, and trust.
The sub-score that moved least in Phase 1 was Content Quality / E-E-A-T: five points, from 35 to 40. That is the entire shape of the next round. The audit flags six thin-content pages, weak expertise signals in crawler-visible text, and direct-answer density gaps. Those are content-engineering problems, not technical ones, and they need editorial attention rather than build-pipeline workarounds.
The sameAs signal also has room: one social profile on the founder's Person schema, four more available. The Organization schema can absorb a logo asset and a few additional brand-presence signals.
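Both are additive JSON-LD edits. The shape, with placeholders rather than the actual profiles and assets:

```ts
// Sketch of the Phase 2 schema additions. All names and URLs below are
// placeholders, not ELA's actual values.
const founder = {
  "@context": "https://schema.org",
  "@type": "Person",
  name: "Founder Name",
  sameAs: [
    "https://www.linkedin.com/in/placeholder", // the one profile already present
    "https://x.com/placeholder",               // plus up to four more like it
  ],
};

const organization = {
  "@context": "https://schema.org",
  "@type": "Organization",
  name: "Entity Level Authority",
  logo: "https://example.com/logo.png", // the logo asset the schema can absorb
};
```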
None of that requires architecture changes either. Phase 2 will follow the same pattern as Phase 1 — identify what is addressable, deploy it cleanly, verify in the raw HTML, and re-audit to measure delta.