Entity Level Authority
Case Study · May 2026
Phase 1 Field Report

From AI-Invisible to AI-Visible in 24 hours.

A real case study with real numbers. Entity Level Authority's own AI Search Visibility score moved from 37 to 65 in a single day of focused remediation work. Here is exactly what we did, what worked, what we deferred — and what AI systems can now see that they could not see yesterday.

The site: entitylevelauthority.com
The stack: React / Vite / Lovable
The work: 4 priorities, 6 deploys, 1 day
The Headline Number
May 1 · Before
37
AI-Invisible
May 2 · After
65
AI-Visible
+28 points
Overall AI Search Visibility · One day · No architecture changes
01 / Premise

The audit we run on clients, run on ourselves.

Most case studies hide their constraints. This one names them.

Entity Level Authority builds AI Search Visibility audits for plaintiff trial law firms. The methodology is straightforward: run a site through a seven-dimension diagnostic, return a score from zero to one hundred, and produce a remediation plan tailored to the actual technology stack the site is built on.

On May 1, 2026, we ran the audit on our own site. The result was a 37. Below the threshold the engine labels "AI-Invisible." We had a structured-data sub-score of zero out of one hundred. We had an LLM Discoverability sub-score of eighteen.

This is the case study of what we did about it the next day, told without hedging the parts that did not work and without overstating the parts that did.

02 / Starting Point

What the engine saw on day zero.

The site was built on React, Vite, and Lovable — a single-page application architecture where most page content renders client-side after JavaScript executes. AI crawlers, including the ones used by ChatGPT, Perplexity, Google AI Overviews, and Claude, generally do not run JavaScript. They take the raw initial HTML response, extract what is there, and move on.

What was there: a viewport tag, a canonical link, an HTTPS certificate, and a robots.txt that did not block any of the twelve AI crawlers the engine checks for. What was missing: structured data of any kind, a complete llms.txt, route-specific internal linking, citable Q&A content, and an FAQPage schema.
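The audit's exact twelve-crawler list is internal to the engine, but a robots.txt that leaves the major AI crawlers unblocked looks something like this (the user-agent tokens are the publicly documented ones; this is an illustrative sketch, not the site's actual file):

```text
# Illustrative robots.txt that does not block AI crawlers.
User-agent: GPTBot
Allow: /

User-agent: ChatGPT-User
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: Google-Extended
Allow: /

User-agent: *
Allow: /
```

The point of the check is the inverse: a single `Disallow: /` under any of these tokens removes that system's ability to fetch the site at all.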

The audit's executive summary was direct about the cause: the visibility constraints were not isolated page-level mistakes. They traced to the architecture and to a small number of high-leverage signals that had not been deployed. Some of those signals could be addressed without changing rendering architecture. Some could not.

That distinction defined the scope of Phase 1. We worked on what was addressable inside the architecture, and we deferred what was not.

03 / Constraints

What we deliberately did not do.

The audit flagged a single architecture-limited finding: the SPA rendering pattern itself. The recommended fix — server-side rendering or static prerendering — would have resolved several downstream findings simultaneously. It would also have meant migrating off Lovable's default build pipeline, which was out of scope for a one-day cycle.

So we did not implement SSR. We did not change rendering architecture. We did not migrate hosting. We did not rewrite any existing page content beyond what the priorities below required.

This matters for one reason: it sets the score ceiling honestly. Phase 1 plus a future Phase 2 can move ELA's score meaningfully higher than 65, but the path to 80 and above runs through rendering architecture. That is a future decision. The number we report below is the number you can reach without it.

The Sub-Score Detail

Where the points actually moved.

The audit produces seven sub-scores plus an overall score. Two sub-scores account for most of the gain. Two barely moved. The honest version of the story includes both.

Dimension                     Before   After     Δ
Overall Score                   37      65     +28
LLM Discoverability             18      83     +65
Structured Data                  0      40     +40
Platform Readiness              35      66     +31
AI Citability                   35      64     +29
Technical GEO Foundations       57      86     +29
Content Quality / E-E-A-T       35      40      +5
AI Crawler Access              100     100       0

The story the table tells: LLM Discoverability and Structured Data carry most of the gain, because those were the two lowest-scoring dimensions and the two most directly addressable. Content Quality moved only five points — that is Phase 2 territory. AI Crawler Access did not move because it was already at one hundred and there was nothing to fix.

The Four Priorities

What we actually deployed.

Each priority was scoped to be implementable in one Lovable deploy cycle. Each was verified against the raw HTML response — what AI crawlers actually see — not the post-JavaScript browser view.
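"Verified against the raw HTML response" means checking the HTML the server sends before any JavaScript runs, via curl or View Source, never the browser's post-hydration DOM. The checks can be scripted; this is an illustrative helper (the function names are ours, not audit tooling):

```javascript
// Sketch of a raw-HTML verification check: does a signal survive in the
// pre-JavaScript response? Function names and sample HTML are illustrative.
function hasInlineJsonLd(rawHtml) {
  // JSON-LD must be present in the initial response, not injected post-mount.
  return /<script[^>]*type="application\/ld\+json"[^>]*>/i.test(rawHtml);
}

function hasLink(rawHtml, href) {
  // Internal links only count if they exist in the raw markup.
  return rawHtml.includes(`href="${href}"`);
}

// In practice the input would come from `curl -s https://example.com`.
const preHydration =
  '<html><head><script type="application/ld+json">{}</script></head>' +
  '<body><a href="/faq">FAQ</a></body></html>';
console.log(hasInlineJsonLd(preHydration)); // true
console.log(hasLink(preHydration, '/faq')); // true
```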

P1 / LOW EFFORT

Rebuild llms.txt with the six missing sections.

Static file at /llms.txt

A plain-text file at site root that gives AI systems a structured summary of who, what, where, and how. The audit flagged six missing sections — Organization, URL, Description, Locations, Key People, Credentials. Rebuilt clean, served as text/plain.

Target: LLM Discoverability sub-score (18 → 83)
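A minimal llms.txt carrying the six flagged sections might look like this. The section names come from the audit; every value except the site URL and founder name is a placeholder, and the file is served as text/plain:

```text
# Entity Level Authority

> AI Search Visibility audits for plaintiff trial law firms.

## Organization
Entity Level Authority

## URL
https://entitylevelauthority.com

## Description
Seven-dimension AI Search Visibility diagnostic, scored 0 to 100, with a
stack-specific remediation plan.

## Locations
City, State (illustrative placeholder)

## Key People
Paul Bruemmer, Founder

## Credentials
(illustrative placeholder: certifications, press, speaking)
```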
P2 / CRITICAL

Inline JSON-LD schema in static index.html.

Organization · Person · WebSite

Three schema blocks deployed inside <head> before any JavaScript runs. Cross-referenced @id linkage so the entity graph resolves cleanly. The critical detail: no react-helmet, no useEffect — schema injected by React after mount is invisible to AI crawlers.

Target: Structured Data sub-score (0 → 40)
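The deployed blocks are ELA's own; as a sketch of the pattern (values illustrative beyond the names already public in this report), cross-referenced @id linkage looks like this, inlined directly in the static index.html:

```html
<!-- Lives in the static <head>, before any script bundle loads.
     Shown as one @graph for brevity; separate blocks work the same way. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "Organization",
      "@id": "https://entitylevelauthority.com/#org",
      "name": "Entity Level Authority",
      "url": "https://entitylevelauthority.com/",
      "founder": { "@id": "https://entitylevelauthority.com/#founder" }
    },
    {
      "@type": "Person",
      "@id": "https://entitylevelauthority.com/#founder",
      "name": "Paul Bruemmer",
      "worksFor": { "@id": "https://entitylevelauthority.com/#org" }
    },
    {
      "@type": "WebSite",
      "@id": "https://entitylevelauthority.com/#website",
      "url": "https://entitylevelauthority.com/",
      "publisher": { "@id": "https://entitylevelauthority.com/#org" }
    }
  ]
}
</script>
```

Because each node references the others by @id, a parser resolves one entity graph instead of three disconnected fragments.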
P3 / SPA WORKAROUND

Static internal-linking blocks per route.

Vite plugin · build-time injection

A custom Vite plugin that injects four route-scoped Related Resources blocks into the static index.html. A small inline script reveals the matching block on page load. Crawlers see all four link sets in raw HTML; users see the React-rendered version. Six iterations to land — the architecture is real.

Target: Internal linking + Discoverability
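The shape of that plugin can be sketched with Vite's transformIndexHtml hook, which rewrites the static index.html at build time so the links exist in the raw response. Routes, class names, and the plugin name here are illustrative, not the actual implementation:

```javascript
// Sketch of build-time injection via Vite's transformIndexHtml hook.
// One hidden block per route ships in the single index.html; an inline
// script reveals the matching block on page load.
const RELATED = {
  '/':    ['<a href="/faq">FAQ</a>', '<a href="/case-study">Case Study</a>'],
  '/faq': ['<a href="/">Home</a>', '<a href="/case-study">Case Study</a>'],
};

function relatedResourcesPlugin() {
  return {
    name: 'related-resources-inject',
    transformIndexHtml(html) {
      const blocks = Object.entries(RELATED)
        .map(([route, links]) =>
          `<nav hidden data-route="${route}" class="related-resources">` +
          `${links.join('')}</nav>`)
        .join('');
      // Inject every route's block just before </body>.
      return html.replace('</body>', `${blocks}</body>`);
    },
  };
}

// Build-time result: all route blocks are present in the raw HTML.
const out = relatedResourcesPlugin()
  .transformIndexHtml('<html><body><div id="root"></div></body></html>');
console.log(out.includes('data-route="/faq"')); // true
```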
P4 / DUAL PURPOSE

Deploy FAQ content with FAQPage schema.

/faq route + global JSON-LD

Five Q&A pairs written for citability — short, factual, directly answerable. Visible content on the FAQ page; matching FAQPage JSON-LD inlined globally. Validated by Google Rich Results Test as a valid FAQPage entity. Both citability and structured data move on a single deploy.

Target: Citability + Structured Data
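The schema side of P4 follows the standard schema.org FAQPage shape. One pair shown; the question and answer text here are illustrative, not the five deployed pairs:

```html
<!-- FAQPage JSON-LD matching the visible /faq content. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "What is AI Search Visibility?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "AI Search Visibility measures whether AI systems such as ChatGPT, Perplexity, and Google AI Overviews can find, parse, and cite a site."
      }
    }
  ]
}
</script>
```

The rule that matters for validation: every Question in the schema must correspond to content actually visible on the page.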
04 / The Honest Part

P3 took six iterations. Here is why that is informative.

The first three priorities deployed cleanly. P1 landed on attempt one. P2 validated against Google's Rich Results Test on the first deploy. P4 produced a clean FAQPage schema in a single pass.

P3 took six tries. The first attempt rendered the Related Resources block as a normal React component. It looked correct in the browser. View Source on the deployed page showed nothing in the raw HTML — the links existed only in the post-hydration DOM. Invisible to crawlers.

The fix was a Vite build-time plugin that emits per-route HTML files with the Related Resources block already injected. That worked, but Lovable's hosting did not honor the Netlify-style _redirects file we used to serve those per-route files, so the same content was served to every URL and every page carried the wrong link set.

The final solution: inject all four route blocks into the single root index.html, hide them by default, and use a tiny inline script — running before React mounts — to reveal the matching block. Crawlers see the full link graph in raw HTML on every page. Users see the React-rendered version after hydration.
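That reveal step is small enough to run inline before React mounts. A sketch of the logic (selectors and function name illustrative; the guard keeps the sketch self-contained outside a browser):

```javascript
// Pre-mount reveal: all route blocks ship hidden in the raw HTML; unhide
// only the one matching the current path. Runs inline, before React mounts.
function matchRouteBlock(pathname, routes) {
  // Longest-prefix match so '/faq/anything' still resolves to '/faq',
  // with '/' as the fallback for unknown paths.
  return routes
    .filter((r) => r === '/' || pathname === r || pathname.startsWith(r + '/'))
    .sort((a, b) => b.length - a.length)[0] || '/';
}

if (typeof document !== 'undefined') {
  const blocks = document.querySelectorAll('nav.related-resources[data-route]');
  const routes = Array.from(blocks, (b) => b.dataset.route);
  const active = matchRouteBlock(location.pathname, routes);
  blocks.forEach((b) => { b.hidden = b.dataset.route !== active; });
}
```

Crawlers never execute this script, so they index the full hidden link graph; users see only the relevant block.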

"The architecture is real. Working around it is real work. The audit we sell does not pretend otherwise."

This is what makes the case study worth publishing. A site stack that does not natively support what AI crawlers need will require workarounds. Some of those workarounds are clean. Some require multiple iterations to land. Pretending otherwise is how AI visibility consulting becomes overpromise-and-underdeliver work.

Live AI System Tests

What the AI systems can see now.

The audit measures what's deployable. The real test is whether AI systems independently recognize, describe, and cite ELA correctly when asked. We ran six identical queries across ChatGPT, Gemini, and Perplexity in incognito, logged out, with no prior conversation context.

Perplexity
Recognized

Independently identified Paul Bruemmer, his role, his location, and named Entity Level Authority as his strategic project — citing LinkedIn and the site directly.

"His strategic project, Entity Level Authority, explores how AI platforms like ChatGPT, Google's AI Overviews, and Perplexity are changing how law firms earn trust, rank online, and attract the right cases."
Gemini
Recognized

Identified Paul Bruemmer as a digital marketing pioneer with 35+ years of experience. Named EntityLevelAuthority.com, AISearchCheckup.com, and FactCheckWeekly.com under "Platform Development." Used the term "Entity-Level Authority (ELA)" verbatim.

"He is heavily involved in Generative Engine Optimization (GEO) and Entity-Level Authority (ELA), developing tools to help businesses and nonprofits understand how AI search engines perceive them."
ChatGPT
Not yet recognized

Generic data-governance definition for "Entity Level Authority" as a phrase. For "Paul Joseph Bruemmer," confused the query with L. Paul Bremer III, the U.S. diplomat. ChatGPT's training and retrieval cycles run slower than Perplexity's real-time crawling.

"Entity-level authority (ELA) is a concept used in data governance, security, and systems design..."
The Pattern That Matters

Different AI systems update on different cadences. Perplexity, the most aggressive at real-time web crawling, already returns Phase 1's signal. Gemini, with Google Search grounding, recognizes the entity. ChatGPT, which depends more on training cycles, has not caught up yet — and that is the case for almost every newly optimized site. Phase 1 sets the table. Recognition follows on a multi-week to multi-month curve.

The Evidence

Six identical queries, three AI systems, incognito and logged out. The top row shows what each system returns when asked about Paul Bruemmer directly. The bottom row shows what each system returns when asked about the term “Entity Level Authority” in isolation — where older training-data definitions still dominate.

Query 1 · “Who is Paul Joseph Bruemmer”

[Screenshot] Perplexity · Recognized
Identifies Paul Bruemmer as a digital marketing pioneer, names ELA as his strategic project, cites LinkedIn, 9 sources.

[Screenshot] Gemini · Recognized
Names all three platforms, uses “Entity-Level Authority (ELA)” verbatim.

[Screenshot] ChatGPT · Not yet recognized
Returns L. Paul Bremer III, the U.S. diplomat.

Query 2 · “What is Entity Level Authority”

[Screenshot] Perplexity · Generic
Corporate-governance definition. Term not yet associated with the firm.

[Screenshot] Gemini · Generic
Cybersecurity / IAM definition. Term not yet associated with the firm.

[Screenshot] ChatGPT · Generic
Data governance / systems design definition.
05 / What This Means

The case study you can hand to your firm.

A few things this work demonstrates that are worth naming directly:

Structured data deployment is a precondition, not a guarantee.

Phase 1 made ELA technically discoverable. Two of the three major AI systems we tested already recognize it independently. The third has not yet — that is normal, and it will catch up. The work creates the conditions for AI recognition. The recognition itself happens on the AI systems' own ingestion timelines.

Most of the high-leverage work is not architectural.

An audit score going from 37 to 65 in one day, on a site that did not change rendering architecture, addresses the question "how much can a firm fix without rebuilding?" The answer is: meaningfully more than most firms assume. Structured data, llms.txt, citable content, and inlined static signals are deployable inside any modern stack — including SPA stacks that AI crawlers are otherwise hostile to.

The architectural ceiling is honest about itself.

A site staying on Lovable's default rendering will hit a sub-score ceiling that SSR or prerendering would clear. We chose not to address that in this round, and the score reflects that choice. Phase 2 will address what is addressable without architecture; an eventual architecture review would address what is not. The audit framework is precise about which findings live in which category.

Residual Items We Did Not Hide

The site removed its public subscribe and digest routes during this work. Lovable's SPA hosting serves the in-app NotFound page at HTTP 200 for unknown paths, which is a soft 404. We added noindex meta directives via react-helmet to mitigate, but the directives only apply post-hydration — meaning AI crawlers without JS execution will see the NotFound shell as a 200 response. This is the same SPA constraint that gates the higher score. Documented, not glossed over.

06 / Next

Phase 2 is content depth, expert signals, and trust.

The sub-score that moved least in Phase 1 was Content Quality / E-E-A-T — five points, from 35 to 40. That is the entire shape of the next round. The audit identifies six pages flagged as thin-content, weak expertise signals in crawler-visible text, and direct-answer density gaps. Those are content-engineering problems, not technical ones, and they need editorial attention rather than build-pipeline workarounds.

The sameAs signal also has room: one social profile on the founder's Person schema, four more available. The Organization schema can absorb a logo asset and a few additional brand-presence signals.

None of that requires architecture changes either. Phase 2 will follow the same pattern as Phase 1 — identify what is addressable, deploy it cleanly, verify in the raw HTML, and re-audit to measure delta.

If your firm is invisible to AI search, the cause is rarely a mystery.

The audit takes a few minutes. The remediation roadmap is specific to your stack. The work is real but it is bounded — and the score either moves or it doesn't, on a timeline you can verify yourself.

Request an Audit