SCORING METHODOLOGY · PHASE 0

How testmyllms.com scores your llms.txt

Every check on this tool has a reason. Every dimension maps to a documented behaviour of AI recommendation systems. This page explains the criteria, the framework they are built on, and what the score does — and does not — tell you.

● Framework: ESC (Entity Clarity · Semantic Authority · Cross-Source Trust)
● Author: Anurag Gupta, ShodhDynamics.com
● Phase 0: declaration quality only — not live AI querying
CONTENTS
01 The Foundation — what llms.txt is and why it matters
02 The Scoring Model — why 8 dimensions
03 Each Dimension Explained
04 The ESC Framework
05 What the Score Means
06 What This Score Does Not Measure
07 Further Reading
01 · FOUNDATION

What llms.txt is, and why it matters for AI visibility

llms.txt is a plain-text file placed at the root of a website — yourdomain.com/llms.txt — that provides structured, machine-readable context about the entity behind the site. It is the intentional signal layer between your website and AI systems that read it.

The format was proposed in 2024 and has since been adopted by publishers, researchers, and practitioners building for AI-mediated discovery. Unlike robots.txt (which tells crawlers what not to read) and sitemap.xml (which tells crawlers where to go), llms.txt tells AI systems who you are, what you do, and how to interpret your content.
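To make the idea concrete, here is a minimal illustrative skeleton of such a file. The section names (PRIORITY ENTITY, SUMMARY, CORE TOPICS, ARTICLES) and field labels are taken from the checks described later on this page; the exact grammar shown is an assumption for illustration, not the canonical llms.txt specification, and every value is a placeholder.

```
PRIORITY ENTITY
Name: Example Consulting Ltd
Type: Organization
Description: Independent consultancy helping small firms structure their
web presence for AI-mediated discovery, including llms.txt authoring.
Also at: https://www.linkedin.com/company/example
Also at: https://github.com/example
Also at: https://gravatar.com/example

SUMMARY
Example Consulting Ltd audits and restructures entity signals so that AI
systems can identify, classify, and recommend the business with confidence.

CORE TOPICS
- AI visibility
- Entity disambiguation
- Structured entity declarations

ARTICLES
- Why declared entity signals matter | Published: 2025-01-15
```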

The core problem it solves: AI systems like ChatGPT, Perplexity, and Google AI Overviews do not read your website the way a human does. They synthesise answers from structured signals. If those signals are absent, ambiguous, or contradictory, the AI either ignores the entity or misrepresents it. A well-formed llms.txt reduces ambiguity at the point of AI interpretation.

The shift from search-result visibility to AI-answer visibility is structural, not cosmetic. In search, a business needed to rank in a list. In AI-mediated discovery, a business needs to be recommended — which requires the AI to have sufficient confidence in the entity's identity, authority, and relevance. llms.txt is how you declare that confidence-building data directly.

This tool validates whether your llms.txt file is structured to support that confidence-building process. It does not test what AI systems currently believe about you — that is Phase 1. It tests whether your declaration is complete, well-formed, and internally consistent.


02 · SCORING MODEL

Why 8 dimensions — and how they connect

The 8 dimensions are not arbitrary categories. They map to the distinct types of signals AI recommendation systems use to evaluate whether an entity is trustworthy, authoritative, and relevant enough to surface in a synthesised response.

Three of the dimensions — Identity Clarity, Topical Authority, and Cross-Source Trust — map directly to the three pillars of the ESC Framework (Entity Clarity · Semantic Authority · Cross-Source Trust), published independently at ShodhDynamics.com. The remaining five dimensions address the structural and operational requirements that make the ESC signals readable and usable.

The prerequisite logic: Identity Clarity is weighted highest because it is the prerequisite for everything else. An AI system cannot assess your topical authority or cross-source trust if it cannot reliably identify who you are. Entity disambiguation is the first gate. Every other dimension is downstream of it.

The dimension weights reflect this logic: checks that establish foundational identity carry more points than checks that refine or extend it. Failing Identity Clarity checks costs more than failing optional enrichment checks — because the consequence in AI systems is correspondingly more severe.

DIMENSION | ESC MAPPING | ROLE IN AI RECOMMENDATION
Structure | Prerequisite | File must be parseable. Malformed files are skipped silently.
Identity Clarity | E — Entity Clarity | AI must unambiguously identify the entity before it can recommend it.
Content Precision | E — Entity Clarity | AI extracts service and summary data to answer "what does this business do."
Cross-Source Trust | C — Cross-Source Trust | Multiple corroborating sources reduce AI hallucination risk on entity facts.
Relationship Completeness | Structural | Author/publisher chains establish content ownership — critical for citation.
Topical Authority | S — Semantic Authority | Topic and term declarations map the entity's expertise domain explicitly.
Content Navigability | S — Semantic Authority | Articles and featured pages give AI a content inventory to reference.
Temporal Currency | Recency Signal | Publication dates signal an active entity — stale content lowers AI confidence.

03 · DIMENSIONS

Each dimension explained

Below is every dimension, what it measures, why AI systems require it, and the specific checks used to evaluate it. Point values reflect relative importance in the AI recommendation candidacy model.

STRUCTURE · PREREQUISITE

Validates that the file follows the expected section format and ordering. AI parsers and LLM context ingestion pipelines read llms.txt files programmatically. A file with missing sections or incorrect ordering may be partially parsed or ignored entirely — silently, without error.

AI systems require this because: structured files allow deterministic extraction of entity signals. A well-ordered file with all required sections ensures the AI receives a complete, unambiguous context package, not a partial one.

Checks: All required sections present (10pts) · Sections in plugin canonical order (5pts)
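The two Structure checks amount to a small parser pass over the file. The sketch below shows one way they could work; the section names and canonical ordering are illustrative assumptions, since the validator's actual list is not published on this page.

```python
# Assumed required sections, in an assumed canonical order.
REQUIRED_SECTIONS = [
    "PRIORITY ENTITY",
    "SUMMARY",
    "CORE TOPICS",
    "ARTICLES",
]

def structure_score(text: str) -> int:
    """Award 10pts when all required sections are present,
    plus 5pts when they also appear in canonical order."""
    lines = [ln.strip() for ln in text.splitlines()]
    positions = [lines.index(s) for s in REQUIRED_SECTIONS if s in lines]
    all_present = len(positions) == len(REQUIRED_SECTIONS)
    in_order = positions == sorted(positions)
    return (10 if all_present else 0) + (5 if all_present and in_order else 0)
```

Note the gating: the ordering points are only reachable once every required section exists, mirroring the prerequisite logic described above.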
IDENTITY CLARITY · E — ENTITY CLARITY

The highest-weighted dimension. Validates that the entity is unambiguously declared — correct type (Person or Organization), substantive description (minimum 120 characters), geographic service area, and canonical IDs. Identity Clarity is the prerequisite for every other signal in the file.

AI systems require this because: entity disambiguation is the first step in any knowledge graph resolution process. If an AI cannot confidently identify who the entity is, it cannot confidently recommend them. Ambiguous or thin entity declarations result in the AI defaulting to competitors with clearer identity signals.

Checks: Name and type declared (8pts) · Description >120 chars (10pts) · Job title — Person mode (5pts) · Geographic area declared (6pts) · #organization fragment (6pts) · #website fragment (4pts)
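Two of these checks, description length and canonical ID fragments, reduce to simple string tests. A minimal sketch, assuming a `Description:` field label and bare substring matching for the `#organization` and `#website` fragments (both assumptions about the file grammar):

```python
def identity_points(text: str) -> int:
    """Score the description-length and canonical-ID-fragment checks.
    Field label and fragment syntax are illustrative assumptions."""
    pts = 0
    for ln in text.splitlines():
        if ln.startswith("Description:"):
            desc = ln[len("Description:"):].strip()
            if len(desc) > 120:  # substantive description, not a stub
                pts += 10
    if "#organization" in text:  # organization canonical ID fragment
        pts += 6
    if "#website" in text:       # website canonical ID fragment
        pts += 4
    return pts
```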
CONTENT PRECISION · E — ENTITY CLARITY

Validates the quality and substance of descriptive content — the SUMMARY section and, where applicable, the SERVICES section. The SUMMARY must be a substantive entity description (minimum 150 characters), not a tagline or marketing headline.

AI systems require this because: when a user asks "what does [business] do?", the AI extracts the answer from the SUMMARY and service descriptions in the llms.txt file. A summary that merely clears a character count can still be a tagline, and a tagline tells the AI almost nothing actionable about the entity's actual function. That is why the test detects tagline patterns specifically, rather than relying on length alone.

Checks: Summary >150 chars (8pts) · Summary not a tagline (4pts) · Service descriptions present (5pts)
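A sketch of how the length and tagline checks might combine. The tagline heuristic below (short, or sentence-less and punchy) is entirely hypothetical; the validator's actual tagline patterns are not published:

```python
def looks_like_tagline(summary: str) -> bool:
    """Hypothetical heuristic: a tagline is short, or reads as a
    punchy fragment rather than a full descriptive sentence."""
    words = summary.split()
    return len(summary) <= 150 or ("." not in summary and len(words) < 12)
```

The point of a separate tagline check is exactly what the paragraph above describes: length and substance are independent failure modes, so each is scored on its own.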
CROSS-SOURCE TRUST · C — CROSS-SOURCE TRUST

Validates the presence of sameAs profile declarations — the Also at: lines in the PRIORITY ENTITY section. Minimum three external profile URLs required. These declarations tell AI systems where to find corroborating information about the entity across independent sources.

AI systems require this because: hallucination risk on entity facts decreases when multiple independent sources corroborate the same claims. An entity that exists only on its own website provides no corroboration signal. An entity with declared LinkedIn, GitHub, and Gravatar profiles gives the AI three independent verification points, which increases confidence in entity claims and reduces fabrication risk.

Checks: ≥3 profile URLs declared (10pts)
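Because this dimension is a single threshold check, it can be sketched in a few lines. The `Also at:` label comes from the description above; treating any line with that label and an http(s) URL as one profile declaration is an assumption:

```python
def cross_source_points(text: str) -> int:
    """10pts when at least three external profile URLs are declared
    via 'Also at:' lines; 0 otherwise (threshold, not graduated)."""
    profiles = [ln for ln in text.splitlines()
                if ln.strip().startswith("Also at:") and "http" in ln]
    return 10 if len(profiles) >= 3 else 0
```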
RELATIONSHIP COMPLETENESS · STRUCTURAL

Validates the ENTITY RELATIONSHIPS section — specifically author chains, publisher chains, and person-organisation links. These relationships map content ownership and organisational structure in a form AI systems can parse and cite.

AI systems require this because: when an AI cites content, it attributes authorship. If no author chain is declared, the AI cannot attribute the content — which reduces citation likelihood and may attribute the content to no one or to the wrong entity. Publisher chains establish organisational endorsement of content claims. Person-organisation links are critical for personal brand entities where the individual and the business are distinct but related.

Checks: Author chains present (7pts) · Publisher chain present (5pts) · Person↔Org link — Person mode (5pts)
TOPICAL AUTHORITY · S — SEMANTIC AUTHORITY

Validates CORE TOPICS, KEY TERMS, and FRAMEWORKS declarations. Core topics must number between 3 and 15 — fewer than three signals insufficient expertise; more than 15 dilutes the authority signal. Key terms must include descriptions, not just names. The FRAMEWORKS declaration is weighted to reflect the value of original IP in AI authority inference.

AI systems require this because: topical authority is how AI systems decide which entity is the most credible answer to a domain-specific question. A business that explicitly maps its expertise through structured topic and term declarations gives the AI a navigable expertise model. Without it, the AI defaults to whichever entity it has the most general knowledge about — typically the most prominent competitor, not the most relevant expert.

Checks: Topics declared (8pts) · Topic count 3–15 (4pts) · Key terms with descriptions (5pts) · Frameworks declared (6pts)
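The topic-count band described above can be expressed directly. This sketch assumes the two checks compose additively (any topics at all, then the 3–15 band as a bonus), which matches the point values listed but is otherwise an assumption:

```python
def topic_points(topics: list[str]) -> int:
    """8pts for any declared topics, plus 4pts when the count
    sits inside the 3-15 authority band."""
    if not topics:
        return 0
    return 8 + (4 if 3 <= len(topics) <= 15 else 0)
```

Note the shape of the band: both too few and too many topics forfeit the same 4 points, encoding the dilution argument from the paragraph above.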
CONTENT NAVIGABILITY · S — SEMANTIC AUTHORITY

Validates ARTICLES, CANONICAL IDS, and FEATURED PAGES sections. Requires a minimum of three declared articles, each with a description so AI systems can summarise the content without reading the full page. Canonical IDs anchor the entity graph and allow AI systems to reference specific content objects with precision.

AI systems require this because: AI recommendation depends on having a navigable content inventory. A business with no declared articles or pages gives the AI no evidence of content production — which reduces perceived authority regardless of what actually exists on the site. The articles section is not about indexing; it is about making the evidence of expertise visible to AI in a structured, parseable form.

Checks: ≥3 articles declared (6pts) · Article descriptions present (5pts) · Article canonical IDs ≥1 (3pts) · Featured pages declared (4pts)
TEMPORAL CURRENCY · RECENCY SIGNAL

Validates that the most recently published article has a publication date within the last 90 days. Requires ISO 8601 date format in Published: fields. Stale content — the most recent article older than 90 days — is flagged with a note that the entity may appear inactive to AI systems.

AI systems require this because: recency is a proxy for entity activity. An entity that has not published in over three months sends a lower confidence signal than one that published last week. AI systems trained on time-stamped corpora have implicit temporal weighting. Declaring recent, dated content is a direct input into that recency signal — and the absence of dates removes the signal entirely.

Checks: Most recent article <90 days (8pts)
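The recency check is the one check with a real parsing dependency: dates must be valid ISO 8601 or the signal vanishes. A minimal sketch, assuming bare YYYY-MM-DD values in Published: fields and an injected reference date for testability:

```python
from datetime import date, timedelta

def is_current(published_dates: list[str], today: date) -> bool:
    """True when the most recent Published: date falls within 90 days
    of `today`. Unparseable dates are skipped, so a file with only
    malformed dates loses the recency signal entirely."""
    parsed = []
    for d in published_dates:
        try:
            parsed.append(date.fromisoformat(d))  # strict ISO 8601
        except ValueError:
            continue
    if not parsed:
        return False  # no valid dates: the signal is absent
    return (today - max(parsed)) <= timedelta(days=90)
```

Only the most recent date matters, which is why a long archive of old articles cannot compensate for a stale latest entry.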

04 · THE ESC FRAMEWORK

Entity Clarity · Semantic Authority · Cross-Source Trust

The ESC Framework was developed independently by Anurag Gupta at ShodhDynamics.com to describe the three structural conditions AI systems appear to require before surfacing an entity in a synthesised recommendation response. It is a practitioner-developed framework grounded in observable AI behaviour, not a formal academic standard.

Published reference: The ESC Framework is documented at shodhdynamics.com/frameworks/esc-framework/. The publication predates testmyllms.com and provides the independent research basis for the scoring model used here.
PILLAR | WHAT IT DESCRIBES | FAILURE CONSEQUENCE
E — Entity Clarity | The AI can unambiguously identify who the entity is, what type it is, where it operates, and what it does. Name, type, description, geography, and canonical IDs all contribute. | AI either misidentifies the entity, conflates it with a similarly-named entity, or omits it in favour of a more clearly declared competitor.
S — Semantic Authority | The AI associates the entity with a specific, coherent domain of expertise. Topics, terms, frameworks, and structured content inventory all contribute to authority inference. | AI treats the entity as a generic provider in a broad category rather than a specialist. Generic positioning loses to specialist positioning in AI recommendation decisions.
C — Cross-Source Trust | The AI can verify entity claims against multiple independent sources. Profile declarations, external citations, and third-party mentions all contribute to corroboration density. | AI assigns lower confidence to unverified claims. Entities with low cross-source corroboration are more likely to be misrepresented or omitted. Hallucination risk increases.

The three pillars are interdependent. Entity Clarity is the prerequisite — without it, Semantic Authority and Cross-Source Trust cannot attach to a stable entity reference. Semantic Authority without Cross-Source Trust produces an entity that appears expert but unverified. Cross-Source Trust without Entity Clarity produces corroboration that cannot be attributed.

A complete, well-formed llms.txt addresses all three pillars simultaneously, which is why the scoring model weights Identity Clarity checks highest — they establish the foundation on which the other signals rest.


05 · SCORE RANGES

What the score means in practice

The score is a declaration quality index — it measures how completely and correctly your llms.txt file communicates entity signals to AI systems. It is not a guarantee of AI recommendation; it is a measure of whether the prerequisite conditions for recommendation candidacy are in place.
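One plausible way a 0–100 index could be derived from the per-check points in section 03 is simple normalisation: earned points over available points, where "available" depends on entity mode (Person-only checks such as job title do not count against an Organization). Whether testmyllms.com aggregates exactly this way is an assumption; the point values are from this page, the mechanism is a sketch.

```python
def score(earned: dict[str, int], available: dict[str, int]) -> int:
    """Normalise earned check points against the points available
    for this entity mode, yielding a 0-100 declaration quality index."""
    total_available = sum(available.values())
    total_earned = sum(earned.get(k, 0) for k in available)
    return round(100 * total_earned / total_available)
```

For example, with only the Structure (15pts) and Identity Clarity (39pts) dimensions in scope, a file earning 15 and 20 points respectively would index at 65.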

90–100
EXCELLENT
All structural requirements met. Entity is fully declared across all dimensions. The llms.txt file provides AI systems with a complete, unambiguous, corroborated entity context. Recommendation candidacy conditions are in place — AI perception is now the variable (Phase 1).
75–89
STRONG
Core identity and authority signals are well-declared. Minor gaps in optional or enrichment checks. The file is functional and reasonably complete. Focus remaining work on the specific checks flagged in the Fixes tab.
55–74
MODERATE
Some identity or authority signals are present but incomplete. Likely missing cross-source profiles, thin descriptions, or undeclared topics. AI systems may identify the entity but with low confidence. Competitors with stronger declarations will be preferred in ambiguous recommendation scenarios.
35–54
WEAK
Significant gaps across multiple dimensions. The entity is partially declared but missing critical signals — likely no cross-source profiles, no canonical IDs, thin summary, or missing relationship chains. AI systems have insufficient data to recommend with confidence. Prioritise the top 3 fixes in the Fixes tab immediately.
0–34
CRITICAL
Core required sections missing or malformed. The file does not provide enough structured data for AI systems to identify, classify, or contextualise the entity. The business is effectively invisible in AI-mediated discovery regardless of its website quality or real-world reputation.

06 · SCOPE BOUNDARIES

What this score does not measure

Honest scope definition is part of a credible methodology. The following are explicitly outside the scope of the Phase 0 validator — not because they are unimportant, but because they require different tooling, different data sources, or are addressed in Phase 1.

Live AI system responses: This tool does not query ChatGPT, Perplexity, or Google AI Overviews. It validates what you have declared — not what AI systems currently believe about you. The gap between declaration and AI perception is the subject of Phase 1. A perfect score here does not guarantee correct AI representation.
Backlinks, domain authority, or search rankings: This is not an SEO tool. Backlinks and domain authority are traditional search signals. They are relevant to AI visibility indirectly — through their influence on what AI training data contains — but they are not measured here and cannot be inferred from an llms.txt file.
Guarantee of AI recommendation: A high score means your declaration is strong. It does not mean AI systems will recommend you. Recommendation depends on query context, competitive density, training data recency, and AI provider-specific factors beyond the scope of any declaration tool.
Cross-source consistency verification: The validator checks whether you have declared sameAs profiles. It does not verify whether the information on those profiles is consistent with your llms.txt declarations. Inconsistency between declared and actual profile data is a separate audit — also addressed in Phase 1.
Content quality on linked pages: The validator checks that articles are declared with descriptions and dates. It does not read or evaluate the content of the linked pages. A declared article with a thin or low-quality page is a structural pass but a substantive failure — which is why Phase 1 AI querying exists.

07 · FURTHER READING

Sources and reference material

The methodology behind this tool draws on published frameworks, academic research, and practitioner documentation. The links below are the primary sources behind the scoring model.